ICLR

Title
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
Abstract
From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the factors and learn disentangled representation, previous methods typically leverage an extra regularization term when learning to generate realistic images. However, the term usually results in a trade-off between disentanglement and generation quality. For the generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. Based on this observation, we argue that it is possible to mitigate the trade-off by (i) leveraging the pretrained generative models with high generation quality, (ii) focusing on discovering the traversal directions as factors for disentangled representation learning. To achieve this, we propose Disentanglement via Contrast (DisCo) as a framework to model the variations based on the target disentangled representations and contrast the variations to jointly discover disentangled directions and learn disentangled representations. DisCo achieves state-of-the-art disentangled representation learning and distinct direction discovering, given pretrained non-disentangled generative models including GAN, VAE, and Flow. Source code is at https://github.com/xrenaa/DisCo.
1 INTRODUCTION
Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which is believed by many to be the only way for AI to fundamentally understand the world (Bengio & LeCun, 2007). To achieve this goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder extracts representations from images, with each dimension corresponding to one factor individually. The generator (decoder) decodes the change of each factor into a different kind of image variation.
With supervision, we can constrain each dimension of the representation to be sensitive only to the kind of image variation caused by changing one factor. However, this kind of exhaustive supervision is often not available in real-world data. The typical unsupervised methods are based on a generative model to build the above encoder-generator framework, e.g., VAE (Kingma & Welling, 2014) provides an encoder and a generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides a generator. During the training of the encoder and generator, to achieve disentangled representation, the typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020).
∗Equal contribution. Work done during internships at Microsoft Research Asia. †Corresponding author
However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, those unsupervised methods have been proven to have an infinite number of entangled solutions without introducing inductive bias (Locatello et al., 2019). Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs trained purely for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that some disentanglement property is embedded in the latent space of the pretrained GAN. These observations suggest that training the encoder and generator simultaneously may not be the best choice.
We provide an alternative route to learning disentangled representation: fix the pretrained generator, jointly discover the factors in the latent space of the generator, and train the encoder to extract disentangled representation, as shown in Figure 1(b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provides a novel contrastive learning view of disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo).
In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, the Navigator, to provide candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architectural innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, and (ii) a hard negatives flipping strategy for better optimization of the Contrastive Loss.
We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves the state-of-the-art (SOTA) disentanglement performance compared to all the previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models.
Our main contributions can be summarized as: (i) To our best knowledge, DisCo is the first unified framework for jointly learning disentangled representation and discovering the latent space of pretrained generative models by contrasting the image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations for utilizing Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with the SOTA disentangled representation learning and latent space discovering. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy.
2 RELATED WORK
Typical unsupervised disentanglement. There have been a lot of studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). These methods achieve disentanglement via an extra regularization, which often sacrifices the generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing aggregated posterior, and InfoGAN-based methods maximize the mutual
information between latent factors and related observations. VAE-based methods achieve relatively good disentanglement performance but low-quality generation. InfoGAN-based methods have relatively high generation quality but poor disentanglement performance. Our method supplements generative models pretrained without a disentanglement regularization term with contrastive learning in the Variation Space, achieving both high-fidelity image generation and SOTA disentanglement.
Interpretable directions in the latent space. Recently, researchers have been interested in discovering interpretable directions in the latent space of generative models without supervision, especially for GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. Subsequent works focus on extracting the directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers via the Jacobian matrix. All the above methods only discover interpretable directions in the latent space, except for Khrulkov et al. (2021), which also learns a disentangled representation of generated images by training an extra encoder in an extra stage. However, none of these methods outperforms the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent space.
Contrastive Learning. Contrastive Learning has gained popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer and push representations of views from different images (negative pairs) apart using instance-level classification with a Contrastive Loss. Recently, Contrastive Learning has been extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning suits disentanglement because: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020); (ii) it works in the representation space directly without any extra layers for classification or regression.
3 DISENTANGLEMENT VIA CONTRAST
3.1 OVERVIEW OF DISCO
From the contrastive view of the intuitive notion of disentangled representation learning, we propose DisCo, which leverages pretrained generative models to jointly discover the factors embedded as directions in their latent space and learn to extract disentangled representation. The benefits of leveraging a pretrained generative model are two-fold: (i) pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and for downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, serving as an inductive bias for unsupervised disentangled representation learning.
DisCo consists of a Navigator, which provides candidate traversal directions in the latent space, and a ∆-Contrastor, which extracts the representation of image variations and builds a Variation Space based on the target disentangled representations. More specifically, the ∆-Contrastor is composed of two shared-weight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders.
In the Variation Space, by pulling together the variation samples resulting from traversing the same direction and pushing away those resulting from traversing different directions, the Navigator learns to discover disentangled directions as factors, and the Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, to which separate dimensions of the disentangled representation respond.
Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors are in two separate spaces, which actually does not affect the applications. Similar to the typical
methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models.
Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G: Z → I, where Z ⊂ R^L denotes the latent space and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z; e.g., in the linear case, A ∈ R^{L×D} is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated, where z is sampled from Z and z′ = z + A(d, ε), with d ∈ {1, ..., D}, ε ∈ R, and A(d, ε) denoting the shift of scale ε along the d-th direction. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair into a sample v ∈ V as

v(z, d, \varepsilon) = \left| E(G(z + A(d, \varepsilon))) - E(G(z)) \right|,   (1)

where V ⊂ R_+^J denotes the Variation Space. We then apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable the Navigator A to find the disentangled directions in the latent space Z.
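To make the workflow concrete, the sketch below implements Eq. (1) in PyTorch. The generator G and encoder E are passed in as callables standing in for the pretrained generator and the Disentangling Encoder; the Navigator is the linear matrix A. All names and dimensions are illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn.functional as F

L, D, J = 512, 64, 32   # latent dim, number of candidate directions, representation dim

# Navigator (linear case): each column of A is a candidate traversal direction.
A = torch.nn.Parameter(torch.randn(L, D))

def shift(z, d, eps):
    """Return z' = z + A(d, eps): move z along the d-th direction with scale eps."""
    direction = F.normalize(A[:, d], dim=0)   # unit-norm column (the best variant in Sec. 4.3)
    return z + eps * direction

def variation(E, G, z, d, eps):
    """Sample v(z, d, eps) in the Variation Space (Eq. 1), normalized to a unit vector."""
    v = (E(G(shift(z, d, eps))) - E(G(z))).abs()
    return F.normalize(v, dim=-1)
```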
3.2 DESIGN OF DISCO
We present the design details of DisCo, which include: (i) the collection of the query set Q = {q_i}_{i=1}^B, the positive key set K^+ = {k_i^+}_{i=1}^N, and the negative key set K^- = {k_i^-}_{i=1}^M, three subsets of the Variation Space V; (ii) the formulation of the Contrastive Loss. According to our goal of contrasting the variations, the samples from Q and K^+ share the same traversal direction and should be pulled together, while the samples from Q and K^- have different directions and should be pushed apart. Recall that each sample v in V is determined as v(z, d, ε). For the contrastive learning process, we construct the query sample q_i = v(z_i, d_i, ε_i), the positive key sample k_i^+ = v(z_i^+, d_i^+, ε_i^+), and the negative key sample k_i^- = v(z_i^-, d_i^-, ε_i^-). Specifically, we randomly sample a direction index d̂ from a discrete uniform distribution U{1, D} and set all of {d_i}_{i=1}^B and {d_i^+}_{i=1}^N to d̂ to guarantee they are the same. We randomly sample {d_i^-}_{i=1}^M from the set of remaining directions U{1, D} \ {d̂} individually and independently to cover the rest of the directions in the Navigator A. Note that a discovered direction should be independent of the starting point and the scale of variation, in line with the notion of disentangled factors. Therefore, {z_i}_{i=1}^B, {z_i^+}_{i=1}^N, {z_i^-}_{i=1}^M are all sampled from the latent space Z, and {ε_i}_{i=1}^B, {ε_i^+}_{i=1}^N, {ε_i^-}_{i=1}^M are all sampled from a shared continuous uniform distribution U[−ϵ, ϵ] individually and independently. We normalize each sample in Q, K^+, and K^- to a unit vector to eliminate the impact of different shift scales.
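Under the same assumptions as the sketch above, the sampling procedure could look as follows; d_hat is the direction index shared by queries and positive keys, while negative keys draw their directions from the remaining D − 1 candidates. Batch sizes and the ε range are illustrative.

```python
def sample_batch(E, G, B=32, N=32, M=64, eps_max=6.0):
    """Construct the query set Q, positive key set K+, and negative key set K-."""
    d_hat = torch.randint(D, (1,)).item()                   # direction shared by Q and K+

    def draw(count, dirs):
        out = []
        for i in range(count):
            z = torch.randn(1, L)                           # starting point z ~ Z
            eps = (torch.rand(1) * 2 - 1) * eps_max         # eps ~ U[-eps_max, eps_max]
            out.append(variation(E, G, z, dirs[i], eps))
        return torch.cat(out, dim=0)                        # shape (count, J)

    Q  = draw(B, [d_hat] * B)
    Kp = draw(N, [d_hat] * N)
    rest = [d for d in range(D) if d != d_hat]              # remaining D - 1 directions
    Kn = draw(M, [rest[i] for i in torch.randint(len(rest), (M,)).tolist()])
    return Q, Kp, Kn
```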
For the design of the Contrastive Loss, a well-known form is InfoNCE (van den Oord et al., 2018):

L_{NCE} = -\frac{1}{B} \sum_{i=1}^{B} \sum_{j=1}^{N} \log \frac{\exp(q_i \cdot k_j^+ / \tau)}{\sum_{s=1}^{N+M} \exp(q_i \cdot k_s / \tau)},   (2)
where τ is a temperature hyper-parameter and {k_i}_{i=1}^{N+M} = {k_i^+}_{i=1}^N ∪ {k_i^-}_{i=1}^M. InfoNCE originates from the binary cross-entropy (BCE) loss of noise-contrastive estimation (Gutmann & Hyvärinen, 2010), and the BCE loss has been widely used for contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We follow these works and use the BCE loss L_{logits} to reduce the computational cost:
L_{logits} = -\frac{1}{B} \sum_{i=1}^{B} \left( l_i^+ + l_i^- \right),   (3)

l_i^+ = \sum_{j=1}^{N} \log \sigma(q_i \cdot k_j^+ / \tau), \quad l_i^- = \sum_{m=1}^{M} \log\left(1 - \sigma(q_i \cdot k_m^- / \tau)\right),   (4)

where σ denotes the sigmoid function, l_i^+ denotes the part for positive samples, and l_i^- denotes the part for the negative ones. Note that we use a shared positive set for the B different queries to reduce the computational cost.
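A minimal sketch of Eqs. (3)–(4), assuming the tensor shapes from the sampling sketch above; it uses the identity log(1 − σ(x)) = log σ(−x) for numerical stability.

```python
import torch
import torch.nn.functional as F

def contrastive_bce(Q, Kp, Kn, tau=0.1):
    """BCE-style contrastive loss of Eqs. (3)-(4); all queries share one positive set."""
    l_pos = F.logsigmoid(Q @ Kp.t() / tau).sum(dim=1)      # l_i^+, shape (B,)
    l_neg = F.logsigmoid(-(Q @ Kn.t() / tau)).sum(dim=1)   # l_i^-: log(1 - sigmoid(x)) = logsigmoid(-x)
    return -(l_pos + l_neg).mean()                         # Eq. (3)
```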
3.3 KEY TECHNIQUES FOR DISCO
Entropy-based domination loss. By optimizing the Contrastive Loss, the Navigator A is optimized to find the disentangled directions in the latent space, and the Disentangling Encoder E is optimized to extract disentangled representations from images. To make the encoded representations more disentangled, i.e., so that only one dimension of the encoded representation responds when traversing along one disentangled direction, we propose an entropy-based domination loss that encourages the corresponding samples in the Variation Space to be one-hot. To implement it, we first compute the mean c of Q and K^+ as

c = \frac{1}{B + N} \left( \sum_{i=1}^{B} q_i + \sum_{i=1}^{N} k_i^+ \right).   (5)
We then compute the probability p_i = \exp(c_i) / \sum_{j=1}^{J} \exp(c_j), where c_i is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss L_{ed} is calculated as

L_{ed} = -\frac{1}{J} \sum_{j=1}^{J} p_j \log p_j.   (6)
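The domination loss of Eqs. (5)–(6) could be computed as below; minimizing the entropy of the softmax-normalized mean pushes samples in the Variation Space toward one-hot vectors.

```python
import torch

def domination_loss(Q, Kp):
    """Entropy-based domination loss of Eqs. (5)-(6)."""
    c = torch.cat([Q, Kp], dim=0).mean(dim=0)            # Eq. (5): mean over the B + N samples
    p = torch.softmax(c, dim=0)                          # p_i = exp(c_i) / sum_j exp(c_j)
    return -(p * torch.log(p + 1e-8)).sum() / p.numel()  # Eq. (6): entropy averaged over J dims
```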
Hard negatives flipping. Since the latent space of a generative model is a high-dimensional complex manifold, many different directions carry the same semantic meaning. Directions with the same semantic meaning result in hard negatives during the optimization of the Contrastive Loss. The hard negatives here differ from those in self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples exist. Our hard negatives are more likely to be "false" negatives, so we choose to flip them into positives. Specifically, we use a threshold T to identify the hard negative samples and use their similarity to the queries as their pseudo-labels:

\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log\left(1 - \sigma(\alpha_{ij})\right) + \sum_{\alpha_{ij} \ge T} \alpha_{ij} \log \sigma(\alpha_{ij}),   (7)
where \hat{l}_i^- denotes the modified l_i^-, and \alpha_{ij} = q_i \cdot k_j^- / \tau. The modified final BCE loss is therefore:

L_{logits\text{-}f} = -\frac{1}{B} \sum_{i=1}^{B} \left( l_i^+ + \hat{l}_i^- \right).   (8)
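Hard negatives flipping only changes the negative term: entries of α above the threshold T switch from the log(1 − σ) branch to a pseudo-positive branch weighted by their own similarity. A sketch of Eqs. (7)–(8), with the value of T illustrative:

```python
import torch
import torch.nn.functional as F

def contrastive_bce_flipped(Q, Kp, Kn, tau=0.1, T=10.0):
    """Contrastive loss with hard negatives flipping, Eqs. (7)-(8)."""
    l_pos = F.logsigmoid(Q @ Kp.t() / tau).sum(dim=1)
    alpha = Q @ Kn.t() / tau                          # alpha_ij, shape (B, M)
    flipped = alpha * F.logsigmoid(alpha)             # pseudo-positive term for alpha_ij >= T
    kept = F.logsigmoid(-alpha)                       # ordinary negative term for alpha_ij < T
    l_neg = torch.where(alpha >= T, flipped, kept).sum(dim=1)
    return -(l_pos + l_neg).mean()                    # Eq. (8)
```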
Full objective. With the above two techniques, the full objective is:

L = L_{logits\text{-}f} + \lambda L_{ed},   (9)

where λ is the weighting hyper-parameter for the entropy-based domination loss L_{ed}.
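Putting the pieces together, one optimization step under the full objective of Eq. (9) might look as follows, continuing the sketches above; the optimizer setup and the value of λ are illustrative assumptions, and the generator stays frozen throughout.

```python
def train_step(E, G, optimizer, lam=0.1):
    """One DisCo step: G is frozen; the optimizer covers E's parameters and the Navigator A."""
    Q, Kp, Kn = sample_batch(E, G)
    loss = contrastive_bce_flipped(Q, Kp, Kn) + lam * domination_loss(Q, Kp)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```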
4 EXPERIMENT
In this section, we first follow the well-accepted protocol (Locatello et al., 2019; Khrulkov et al., 2021) to evaluate the learned disentangled representation, which also reflects the performance of the discovered directions implicitly (Lin et al., 2020) (Section 4.1). Second, we follow Li et al. (2021a) to directly evaluate the discovered directions (Section 4.2). Finally, we conduct an ablation study (Section 4.3).
4.1 EVALUATIONS ON DISENTANGLED REPRESENTATION
4.1.1 EXPERIMENTAL SETUP
Datasets. We consider the following popular datasets from the disentanglement literature: Shapes3D (Kim & Mnih, 2018) with 6 ground-truth factors, MPI3D (Gondal et al., 2019) with 7 ground-truth factors, and Cars3D (Reed et al., 2015) with 3 ground-truth factors. For all these datasets, images are resized to 64×64 resolution.
Pretrained generative models. For GAN, we use the StyleGAN2 model (Karras et al., 2020). For VAE, we use a common structure with convolutions (Locatello et al., 2019). For Flow, we use Glow (Kingma & Dhariwal, 2018).
Baselines. For the typical disentanglement baselines, we choose FactorVAE (Kim & Mnih, 2018), β-TCVAE (Chen et al., 2018), and InfoGAN-CR (Lin et al., 2020). For discovering-based methods, we consider several recent methods: GANspace (GS) (Härkönen et al., 2020), LatentDiscovery (LD) (Voynov & Babenko, 2020), ClosedForm (CF) (Shen & Zhou, 2021), and DeepSpectral (DS) (Khrulkov et al., 2021). For these methods, we follow Khrulkov et al. (2021) to train an additional encoder to extract disentangled representation. We are the first to extract disentangled representations from pretrained VAE and Flow models, so we extend LD to VAE and Flow as a baseline.
Disentanglement metrics. We mainly consider two representative ones: the Mutual Information Gap (MIG) (Chen et al., 2018) and the Disentanglement metric (DCI) (Eastwood & Williams, 2018). MIG requires each factor to be only perturbed by changes of a single dimension of representation. DCI requires each dimension only to encode the information of a single dominant factor. We evaluate the disentanglement in terms of both representation and factors. We also provide results for β-VAE score (Higgins et al., 2017) and FactorVAE score (Kim & Mnih, 2018) in Appendix B.3.
Randomness. We consider the randomness caused by random seeds and by the strength of the regularization term (Locatello et al., 2019). For random seeds, we follow the same setting as the baselines. Since DisCo does not have a regularization term, we consider the randomness of the pretrained generative models instead. For all methods, we ensure there are 25 runs, except that Glow only has one run due to limited GPU resources. More details are presented in Appendix A.
4.1.2 EXPERIMENTAL RESULTS
The quantitative results are summarized in Table 1 and Figure 3. More details about the experimental settings and results are presented in Appendix A & C.
DisCo vs. typical baselines. Our DisCo achieves the SOTA performance consistently in terms of MIG and DCI scores. The variance due to randomness of DisCo tends to be smaller than that of the typical baselines. We demonstrate that a method that extracts disentangled representation from pretrained non-disentangled models can outperform typical disentanglement baselines.
DisCo vs. discovering-based methods. Among the baselines based on discovering pretrained GAN, CF achieves the best performance. DisCo outperforms CF in almost all the cases by a large margin. Besides, these baselines need an extra stage (Khrulkov et al., 2021) to get disentangled representation, while our Disentangling Encoder can directly extract disentangled representation.
4.2 EVALUATIONS ON DISCOVERED DIRECTIONS
To evaluate the discovered directions, we compare DisCo on StyleGAN2 with GS, LD, CF, and DS on the real-world dataset FFHQ (Karras et al., 2019)¹, and adopt the comprehensive Manipulation Disentanglement Score (MDS) (Li et al., 2021a) as the metric. To calculate MDS, we use the 40 CelebA-HQ attribute predictors released with StyleGAN. Among them, we select Young, Smile, Bald, and Blond Hair, as these attributes have an available predictor and are commonly found by all methods. The results are summarized in Table 3. DisCo shows better overall performance than the other baselines, which verifies our assumption that learning disentangled representation benefits latent space discovering. We also provide qualitative comparisons in Figure 4.
Finally, we provide an intuitive analysis in Appendix D for why DisCo can find those disentangled directions.
4.3 ABLATION STUDY
In this section, we perform the ablation study of DisCo only on GAN due to space limitations. For these experiments, we use the Shapes3D dataset with a fixed random seed.
Choice of latent space. For style-based GANs (Karras et al., 2019; 2020), there is a style space W, the output of the style network (an MLP) whose input comes from a latent space Z. As demonstrated in Karras et al. (2019), W is more interpretable than Z. We conduct experiments on W and Z respectively to see how the latent space influences performance. As shown in Table 4, DisCo on W performs better, indicating that the better the latent space is organized, the better the disentanglement DisCo can achieve.
Choices of A. Following the setting of Voynov & Babenko (2020), we mainly consider three options for A: a linear operator with all matrix columns having unit length, a linear operator with orthonormal matrix columns, or a nonlinear operator of 3 fully-connected layers.
The results are shown in Table 4. For latent spaces W and Z, A with unit-norm columns achieves nearly the best performance in terms of MIG and DCI scores. Compared to A with orthonormal matrix columns, A with unit-norm columns is more expressive with fewer constraints. Another possible reason is that A is global and not conditioned on the latent code z, whereas a nonlinear operator is better suited to a local navigator A. For such a more complex local and nonlinear setting, more inductive bias or supervision should be introduced.
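For reference, the first two Navigator variants could be realized as below; the unit-norm version rescales each column, while the orthonormal version uses PyTorch's orthogonal parametrization (columns are orthonormal when L ≥ D). The class names are our own, not from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnitNormNavigator(nn.Module):
    """Linear Navigator whose columns are rescaled to unit length."""
    def __init__(self, L, D):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(L, D))
    def forward(self, d):
        return F.normalize(self.weight[:, d], dim=0)

class OrthonormalNavigator(nn.Module):
    """Linear Navigator with orthonormal columns via an orthogonal parametrization."""
    def __init__(self, L, D):
        super().__init__()
        linear = nn.Linear(D, L, bias=False)   # weight shape (L, D)
        self.linear = torch.nn.utils.parametrizations.orthogonal(linear)
    def forward(self, d):
        return self.linear.weight[:, d]        # orthonormal columns for L >= D
```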
Entropy-based domination loss. Here, we verify the effectiveness of entropy-based domination loss Led for disentanglement. For a desirable disentangled representation, one semantic meaning corresponds to one dimension. As shown in Table 4, Led can improve the performance by a large
¹ The above disentanglement metrics (DCI and MIG) are not available for the FFHQ dataset.
margin. We also visualize the Variation Space to further demonstrate the effectiveness of the proposed loss in Figure 5. Adding the domination loss makes the samples in the Variation Space close to one-hot, which is desirable for disentanglement.
Hard negatives flipping. We run DisCo with and without the hard negatives flipping strategy to study its influence. As shown in Table 4, flipping hard negatives improves the disentanglement ability of DisCo. The reason is that the hard negatives have the same semantics as the positive samples, so treating them as negatives does not make sense. Flipping them with pseudo-labels makes the optimization of Contrastive Learning easier.
Hyperparameters N & M. We run DisCo with different ratios of N : M at a fixed sum of 96, and with different sums N + M at a fixed ratio of 1 : 2, to study their impact. As shown in Figure 6 (a), the best ratio is N : M = 32 : 64 = 1 : 2; the red line (MIG) and blue line (DCI) show that larger or smaller ratios hurt DisCo, indicating that DisCo requires a balance between N and M. As shown in Figure 6 (b), the sum N + M has only a slight impact on DisCo. Other hyperparameters are set empirically; more details are presented in Appendix A.
Contrast vs. Classification. To verify the effectiveness of Contrast, we substitute it with classification by adopting an additional linear layer to recover the corresponding direction index and the shift along this direction. As Table 2 shows, Contrastive Learning outperforms Classification significantly.
Concatenation vs. Variation. We further demonstrate that the Variation Space is crucial for DisCo. By replacing the difference operator with concatenation, the performance drops significantly (Table 2), indicating that the encoded representation is not well disentangled. On the other hand, the disentangled representations of images are achieved by Contrastive Learning in the Variation Space.
4.4 ANALYSIS OF DIFFERENT GENERATIVE MODELS
As shown in Table 1, DisCo generalizes well to different generative models (GAN, VAE, and Flow). DisCo on GAN and VAE achieves relatively good performance, while DisCo on Flow is not as good. The possible reason is similar to the choice of latent space for GAN: we assume the disentangled directions are globally linear and thus use a linear navigator. In contrast to GAN and VAE, we suspect that Flow may not conform well to this assumption. Furthermore, Flow suffers from high GPU cost and unstable training, which limits further exploration.
5 CONCLUSION
In this paper, we present DisCo, an unsupervised and model-agnostic Contrastive Learning framework that learns disentangled representation by exploiting pretrained generative models. We propose an entropy-based domination loss and a hard negatives flipping strategy to achieve better disentanglement. DisCo outperforms typical unsupervised disentanglement methods while maintaining high image quality. We pinpoint a new direction: Contrastive Learning can be applied to extract disentangled representation from pretrained generative models. For some specific complex generative models, the global linear assumption of disentangled directions in the latent space could be a limitation. For future work, extending DisCo to the existing VAE-based disentanglement framework is an exciting direction.
A.2 SETTING FOR BASELINES
In this section, we introduce the implementation settings for the baselines (including randomness).
VAE-based methods. We choose FactorVAE and β-TCVAE as the SOTA VAE-based methods and follow Locatello et al. (2019) to use the same encoder and decoder architecture. For the hyper-parameters, we use the best settings found by grid search. We set the latent dimension of the representation to 10. For FactorVAE, we set the hyperparameter γ to 10. For β-TCVAE, we set the hyperparameter β to 6. For the random seeds, considering our method has 25 runs, we run each model 25 times with different random seeds to make the comparison fair.
InfoGAN-based methods. We choose InfoGAN-CR as a baseline. We use the official implementation² with the best hyperparameter settings found by grid search. For the random seeds, we run 25 times with different random seeds.
Discovering-based methods. We follow Khrulkov et al. (2021) to use the same settings for the following four baselines: LD (GAN), CF, GS, and DS. Similar to our method (DisCo), discovering-based methods do not have a regularization term. Thus, for the randomness, we adopt the same strategy as for DisCo: we take the top-10 directions for 5 different random seeds for the GAN and 5 different random seeds for the additional encoder that learns disentangled representations.
LD (VAE) & LD (Flow). We follow LD (GAN) to use the same settings and substitute the GAN with VAE / Glow. The only exception is the randomness for LD (Flow). We only run one random seed to pretrain the Glow and use one random seed for the encoder.
A.3 MANIPULATION DISENTANGLEMENT SCORE
As claimed in Li et al. (2021a), it is difficult to evaluate the performance on discovering the latent space among different methods, which often use model-specific hyper-parameters to control the editing strength. Thus, Li et al. (2021a) propose a comprehensive metric called Manipulation Disentanglement Score (MDS), which takes both the accuracy and the disentanglement of manipulation into consideration. For more details, please refer to Li et al. (2021a).
A.4 DOMAIN GAP PROBLEM
Please note that there exists a domain gap between the generated images of pretrained generative models and the real images. However, the good performance on disentanglement metrics shows that the domain gap has limited influence on DisCo.
² https://github.com/fjxmlzn/InfoGAN-CR
A.5 ARCHITECTURE
Here, we provide the model architectures used in our work. For the architecture of StyleGAN2, we follow Khrulkov et al. (2021). For the architecture of Glow, we use the open-source implementation³.

³ https://github.com/rosinality/glow-pytorch
B MORE EXPERIMENTS
B.1 MORE QUALITATIVE COMPARISON
We provide some examples for qualitative comparison. We first demonstrate the trade-off problem of the VAE-based methods. As shown in Figure 7, DisCo leverages the pretrained generative model and does not have the trade-off between disentanglement and generation quality.
Furthermore, as shown in Figure 8 and Figure 9, VAE-based methods suffer from poor image quality. When changing one attribute, the results of discovering-based methods tend to also change other attributes.
We also provide qualitative comparisons between DisCo and InfoGAN-CR. Note that the latent space of InfoGAN-CR is not aligned with the pretrained StyleGAN2. InfoGAN-CR also suffers from the trade-off problem, and its disentanglement ability is worse than DisCo.
We explain the comparison in the main paper and show more manipulation comparisons here.
B.2 ANALYSIS OF THE LEARNED DISENTANGLED REPRESENTATIONS
We feed the images traversing the three most significant factors (wall color, floor color, and object color) of Shapes3D into the Disentangling Encoders and plot the corresponding dimensions of the encoded representations to visualize the learned disentangled space. The location of each point is the disentangled representation of the corresponding image. An ideal result is that all the points form a cube and the color variation is continuous. We consider three baselines with relatively higher MIG and DCI: CF, DS, and LD. As the figures below show, the points in the latent spaces of CF and DS are not well organized, and the latent spaces of all three baselines are not well aligned with the axes, especially for LD. DisCo learns a well-aligned and well-organized latent space, which signifies better disentanglement.
(Figure: learned representation spaces of CF, DS, LD, and ours.)
B.3 MORE QUANTITATIVE COMPARISON
We provide additional quantitative comparisons in terms of β-VAE score and FactorVAE score. DisCo on pretrained GAN is comparable to the discovering-based baselines in terms of β-VAE score and FactorVAE score, suggesting some disagreement between these two scores and MIG/DCI. However, note that the qualitative evaluations in Figure 8, Figure 9, and Section B.2 show that the disentanglement ability of DisCo is better than all the baselines on the Shapes3D dataset.
(Table: β-VAE and FactorVAE scores, grouped into typical disentanglement baselines and methods on pretrained GAN, VAE, and Flow.)
We also provide an additional experiment on Noisy-DSprites dataset. We compare DisCo with β-TCVAE (the best typical method) and CF (the best discovering-based method) in terms of MIG and DCI metrics.
C LATENT TRAVERSALS
In this section, we visualize the disentangled directions of the latent space discovered by DisCo on each dataset. For Cars3D, Shapes3D, Anime, and MNIST, the image resolution is 64×64. For FFHQ, LSUN Cat, and LSUN Church, the image resolution is 256×256. Besides StyleGAN2, we also provide results of Spectral Norm GAN (Miyato et al., 2018)⁴ on MNIST (LeCun et al., 2010) and Anime Face (Jin et al., 2017) to demonstrate that DisCo generalizes well to other types of GAN.

⁴ https://github.com/anvoynov/GANLatentDiscovery
D AN INTUITIVE ANALYSIS FOR DISCO
DisCo works by contrasting the variations resulting from traversing along the directions provided by the Navigator. Is this sufficient to converge to a disentangled solution? Note that this question is very challenging to answer. To the best of our knowledge, for unsupervised disentangled representation learning, there is no theoretical constraint sufficient to guarantee convergence to a disentangled solution (Locatello et al., 2019). Here we provide an intuitive analysis of DisCo and share our thoughts on how it finds the disentangled directions in the latent space, supported by our observations on pretrained GANs both quantitatively and qualitatively. The intuitive analysis consists of two parts: (i) the directions that can be discovered by DisCo have different variation patterns compared to random directions; (ii) DisCo hardly converges to an entangled solution.
D.1 WHAT KIND OF DIRECTIONS CAN DISCO CONVERGE TO?
First, we visualize the latent space and show that there are variation patterns in the latent space for disentangled factors. We design the following visualization method. Given a pretrained GAN and two directions in the latent space, we traverse along the plane spanned by the two directions to generate a grid of images. The range is large enough to cover all values of the disentangled factors, and the step is small enough to obtain a dense grid. Then, we feed these images into an encoder trained with ground-truth factor labels. We obtain a heatmap for each factor (the value is the response of the dimension corresponding to the factor). In this way, we can observe the variation patterns that emerge in the latent space.
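A sketch of this visualization, assuming G is the pretrained generator, probe is an encoder trained with ground-truth factor labels, and dir1/dir2 span the traversal plane; all names, ranges, and step counts are illustrative.

```python
import torch

def factor_heatmap(G, probe, z0, dir1, dir2, factor_idx, steps=21, radius=8.0):
    """Response of one factor over the latent plane spanned by dir1 and dir2."""
    ticks = torch.linspace(-radius, radius, steps)
    heat = torch.zeros(steps, steps)
    with torch.no_grad():
        for i, a in enumerate(ticks):
            for j, b in enumerate(ticks):
                img = G((z0 + a * dir1 + b * dir2).unsqueeze(0))
                heat[i, j] = probe(img)[0, factor_idx]   # response of the factor's dimension
    return heat
```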
We take StyleGAN pretrained on Shapes3D (synthetic) and FFHQ (real-world). For Shapes3D, we take background color and floor color as the two factors (since they refer to different areas of the image, these two factors are disentangled). For FFHQ, we take smile (mouth) and bald (hair) as the two factors (disentangled, as they refer to different areas). We then choose random directions and the directions discovered by DisCo. The results are shown in Figure 27 and Figure 28.
We find a clear difference between random directions and directions discovered by DisCo. This is because DisCo learns the directions by separating the variations resulting from traversing along them. However, not all directions can be separated. For directions whose variations cannot be recognized or clustered by the encoder E, it is nearly impossible for DisCo to converge to them. Conversely, for directions that can be easily recognized and clustered, DisCo converges to them with higher probability. From the following observations, we find that the variation patterns resulting from the directions corresponding to disentangled factors are easily recognized and clustered.
D.2 WHY DISCO HARDLY CONVERGES TO THE ENTANGLED CASES?
In the previous section, we showed that DisCo can discover the directions with distinct variation patterns and exclude random directions. Here we discuss why DisCo hardly converges to the following entangled case (a trivial solution built on a disentangled one). Suppose there is an entangled direction of factors A and B (A and B change at the same rate when traversing along it) in the latent space of a generative model, and DisCo can separate the variations resulting from the direction of A and the entangled direction. In that case, DisCo would have no additional bias to update these directions to converge to disentangled ones.

In the following, for ease of reference, we denote the entangled direction of factors A and B (A and B change at the same rate when traversing along it) as the A+B direction, and the direction of factor A (only A changes when traversing along it) as the A direction. The reasons why DisCo hardly converges to the case of A and A+B are two-fold:
(i) Our encoder is a lightweight network (5 CNN layers + 3 FC layers). It is nearly impossible for it to separate the A and A+B directions.
(ii) In the latent space of the pretrained generative models, the disentangled directions (A, B) are consistent at different locations. In contrast, the entangled directions (A+B) are not, as shown in Figure 29.
We conduct the following experiments to verify these claims. For (i), we replace the encoder in DisCo with a ResNet-50 and train DisCo from scratch on the Shapes3D dataset. The loss, MIG, and DCI are presented in Table 11. The trivial solution is possible only when the encoder is powerful enough to fit the A and A+B directions so that they "become orthogonal". With this consideration, we adopt a lightweight encoder in DisCo to avoid this issue.
For (ii), as the sketch in Figure 29 demonstrates, the disentangled directions ("A", blue; "B", green) are consistent, i.e., invariant to the location in the latent space, while the entangled direction ("A+B", red) is not consistent across locations. The fundamental reason is that the directions of the disentangled variations are invariant to the position in the latent space, but the "rate" of variation is not. E.g., at any point in the latent space, going "up" consistently changes the camera's pose; however, at point a, going "up" with step 1 means rotating 10 degrees, while at point b, going "up" with step 1 means rotating 5 degrees. When the variation "rates" of A and B differ, the A+B directions at different locations are not consistent.
Based on the different properties of disentangled and entangled directions in the latent space, DisCo can discover the disentangled directions with the contrastive loss. The contrastive loss can be understood from a clustering view (Wang & Isola, 2020; Li et al., 2021b). The variations from the disentangled directions are more consistent and can be better clustered than the variations from the entangled ones. Thus, DisCo can discover the disentangled directions in the latent space and learn disentangled representations from images. We provide the following experiments to support this analysis.
D.2.1 QUANTITATIVE EXPERIMENT
We compare the losses of three different settings:

• A: For a navigator with disentangled directions, we fix the navigator and train the encoder until convergence.

• A+B: For a navigator with entangled directions (we use linear combinations of the disentangled directions to initialize the navigator), we fix it and train the encoder until convergence.

• A+B → A: After the A+B setting has converged, we update both the encoder and the navigator until convergence.
The Contrastive loss after convergence is presented in Table 12.
The results show that: (i) the disentangled directions (A) lead to a lower loss and better performance than the entangled directions (A+B), indicating there is no trivial solution; (ii) even though the encoder trained with A+B has converged, when we optimize the navigator, gradients still backpropagate to the navigator, which then converges to A.
D.2.2 QUALITATIVE EXPERIMENT
We visualize the latent space of GAN in Figure 30 to verify the variation "rate" in the following way. In the latent space, we select two ground-truth disentangled directions, floor color (A) and background color (B), obtained with supervision by InterFaceGAN (Shen et al., 2020). We conduct equally spaced sampling along the two disentangled directions A (labeled with green color variation) and B (labeled with gradient blue color) and the composite direction A+B (labeled with gradient red color), as shown in Figure 30 (a).
Then we generate the images (including the other images on the grid, as shown in Figure 30 (b)) and feed the images in the bounding boxes into a "ground truth" encoder (trained with ground-truth disentangled factors) to regress the "ground truth" disentangled representations of the images.
In Figure 30 (c), the points labeled with green color are well aligned with the x-axis, indicating that only the floor color changes, and the points labeled with blue variation are well aligned with the y-axis, indicating that only the background color changes. However, the points labeled with red color are NOT aligned with any line, which indicates that the directions of A+B are not consistent. Further, for the two disentangled directions, the variation "rate" depends on the location in the latent space. This observation supports our idea shown in Figure 29. The different properties of disentangled and entangled directions enable DisCo to discover the disentangled directions in the latent space.
E EXTENSION: BRIDGE THE PRETRAINED VAE AND PRETRAINED GAN
Researchers have recently been interested in improving image quality given the disentangled representations produced by typical disentanglement methods. Lee et al. (2020) propose a post-processing stage using a GAN based on disentangled representations learned by VAE-based disentanglement models. This method sacrifices a little generation ability due to an additional constraint. Similarly, Srivastava et al. (2020) propose to use a deep generative model with AdaIN (Huang & Belongie, 2017) as a post-processing stage to improve reconstruction. Following this setting, we can replace the encoder in DisCo (GAN) with an encoder pretrained by a VAE-based disentanglement baseline. In this way, we bridge the pretrained disentangled VAE and the pretrained GAN, as shown in Figure 31. Compared to previous methods, ours can fully utilize the state-of-the-art GAN and the state-of-the-art VAE-based method and does not need to train a deep generative model from scratch.
F DISCUSSION ON RELATION BETWEEN BCELOSS AND NCELOSS
We would like to present a deeper discussion of the relation between the BCE loss L_{logits} and the NCE loss L_{NCE}. This discussion relates to the NCE paper (Gutmann & Hyvärinen, 2010) and the InfoNCE paper (van den Oord et al., 2018) and proceeds as follows: (i) we first formulate a general problem and obtain two objectives, L_1 and L_2, where L_1 is an upper bound of L_2; (ii) following Gutmann & Hyvärinen (2010), we show that L_1 is aligned with L_{BCE} under their setting; (iii) following van den Oord et al. (2018), we show that L_2 is aligned with L_{NCE} under their setting; (iv) we discuss the relation between these objectives and the losses in our paper.
Part I. Assume we have S observations {x_i}_{i=1}^S from a data distribution p(x), each with a label C_i ∈ {0, 1}. We denote the class-conditional distributions as p^+(x) = p(x | C = 1) and p^-(x) = p(x | C = 0). We define two objectives as follows:

L_1 = -\sum_{i=1}^{S} \left[ C_i \log P(C_i = 1 \mid x_i) + (1 - C_i) \log P(C_i = 0 \mid x_i) \right],   (10)

and

L_2 = -\sum_{i=1}^{S} C_i \log P(C_i = 1 \mid x_i).   (11)

Since -\sum_{i=1}^{S} (1 - C_i) \log P(C_i = 0 \mid x_i) \ge 0, we have

L_1 \ge L_2.   (12)

Thus L_1 is an upper bound of L_2. This is a general formulation of a binary classification problem. In the context of our paper, each observation is a pair x_i : (q, k_i), with q as the query, and the key k_i is either from a positive key set {k_j^+}_{j=1}^N or a negative key set {k_m^-}_{m=1}^M (i.e., {k_i}_{i=1}^{N+M} = {k_j^+}_{j=1}^N ∪ {k_m^-}_{m=1}^M), where M = S - N. And C_i is assigned as:

C_i = \begin{cases} 1, & k_i \in \{k_j^+\}_{j=1}^N \\ 0, & k_i \in \{k_m^-\}_{m=1}^M \end{cases}   (13)

In our paper, we have h(x) = \exp(q \cdot k / \tau).
Part II. In this part, following Gutmann & Hyvärinen (2010), we show that L_1 is aligned with L_{logits} (Equation 3 in the main paper) under their setting. Following Gutmann & Hyvärinen (2010), we assume the prior distribution P(C = 0) = P(C = 1) = 1/2. By the Bayes rule, we have

P(C = 1 \mid x) = \frac{p(x \mid C = 1) P(C = 1)}{p(x \mid C = 1) P(C = 1) + p(x \mid C = 0) P(C = 0)} = \frac{1}{1 + \frac{p^-(x)}{p^+(x)}}.   (14)
And P (C = 0|x) = 1− P (C = 1|x). On the other hand, we have a general form of BCELoss, as
LBCE = − S∑
i=1
Ci log σ(q · ki/τ) + (1− Ci) log(1− σ(q · ki/τ)), (15)
where σ(·) is the sigmoid function. We have

\sigma(q \cdot k / \tau) = \frac{1}{1 + \exp(-q \cdot k / \tau)} = \frac{1}{1 + \frac{1}{\exp(q \cdot k / \tau)}} = \frac{1}{1 + \frac{1}{h(x)}}.   (16)
From Theorem 1 of Gutmann & Hyvärinen (2010), we know that when L_{BCE} is minimized, we have

h(x) = \frac{p^+(x)}{p^-(x)}.   (17)
Thus, the BCE loss L_{BCE} is an approximation of the objective L_1.
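As a toy check of Eq. (17), consider 1-D Gaussians p⁺ = N(1, 1) and p⁻ = N(−1, 1), whose true log density ratio is exactly 2x. Fitting a logistic model by minimizing the BCE loss recovers this ratio; the snippet below is an illustrative verification of the NCE result, not part of the paper's pipeline.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
xp = torch.randn(4000) + 1.0    # samples from p+ = N(1, 1)
xn = torch.randn(4000) - 1.0    # samples from p- = N(-1, 1)
x = torch.cat([xp, xn])
y = torch.cat([torch.ones(4000), torch.zeros(4000)])   # labels C_i

# log h(x) is parametrized as a*x + b; the true log ratio log(p+/p-) is 2x here.
a = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([a, b], lr=0.1)
for _ in range(500):
    loss = F.binary_cross_entropy_with_logits(a * x + b, y)
    opt.zero_grad(); loss.backward(); opt.step()
print(round(a.item(), 2), round(b.item(), 2))   # approaches ~2.0 and ~0.0, so h(x) ~ p+(x)/p-(x)
```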
Part III. Following van den Oord et al. (2018), we show that L_2 is aligned with L_{NCE} (Equation 2 in the main paper) under their setting.

In the typical contrastive setting (one positive sample, the rest negative, following van den Oord et al. (2018)), we assume there is only one positive sample among {x_i}_{i=1}^S. Then, the probability that x_i is sampled from p^+(x) rather than p^-(x) is

P(C_i = 1 \mid x_i) = \frac{p^+(x_i) \prod_{l \ne i} p^-(x_l)}{\sum_{j=1}^{S} p^+(x_j) \prod_{l \ne j} p^-(x_l)} = \frac{\frac{p^+(x_i)}{p^-(x_i)}}{\sum_{j=1}^{S} \frac{p^+(x_j)}{p^-(x_j)}}.   (18)
From van den Oord et al. (2018), we know that when minimizing Equation 11, we have h(x) = \exp(q \cdot k / \tau) \propto \frac{p^+(x)}{p^-(x)}. In this case, we obtain the form of L_{NCE} as

L_{NCE} = -\sum_{i=1}^{S} C_i \log \frac{\exp(q \cdot k_i / \tau)}{\sum_{j=1}^{S} \exp(q \cdot k_j / \tau)}.   (19)
L_{NCE} is an approximation of L_2.
Part IV. We now generalize the contrastive loss to our setting (N positive samples, M negative samples). The BCE loss (Equation 15) can be reformulated as

\hat{L}_{BCE} = -\sum_{j=1}^{N} \log \sigma(q \cdot k_j^+ / \tau) - \sum_{m=1}^{M} \log\left(1 - \sigma(q \cdot k_m^- / \tau)\right).   (20)
Similarly, the NCE loss (Equation 19) can be reformulated as

\hat{L}_{NCE} = -\sum_{j=1}^{N} \log \frac{\exp(q \cdot k_j^+ / \tau)}{\sum_{s=1}^{M+N} \exp(q \cdot k_s / \tau)}.   (21)
\hat{L}_{BCE} is aligned with L_{logits} (Equation 3 in the main paper), and \hat{L}_{NCE} is aligned with L_{NCE} (Equation 2 in the main paper).
Now L_1 (approximated by L_{BCE}) is an upper bound of L_2 (approximated by L_{NCE}). However, as the reader may notice, the assumptions made in Part II and Part III are different: one is P(C = 0) = P(C = 1), the other is that there is only one positive sample and the rest are negative. The extension to our situation is a more general case (N positives, M negatives).
However, they share the same objective: by contrasting positives and negatives, we can use h(x) = \exp(q \cdot k / \tau) to estimate p^+ / p^-. We can think of h(x) as a similarity score, i.e., if q and k form a positive pair (they share the same direction in our paper), h(x) should be as large as possible (p^+ / p^- > 1), and vice versa. In this way, we learn representations (q, k) that reflect the image variation: similar variations have a higher score h(x), while different kinds of variation have a lower score. This meaningful representation, in turn, helps discover the directions in the latent space carrying different kinds of image variation. This is an understanding, from a contrastive learning view, of how our method works.

Prompt

1. What is the focus and contribution of the paper on disentangled directions for pretrained models?
2. What are the strengths of the proposed framework, particularly in terms of its model-agnostic nature and ability to mitigate poor generation quality?
3. What are the weaknesses of the approach, especially regarding the requirement for multiple components and hyperparameter tuning?
4. Do you have any concerns about the necessity of certain components or their impact on the overall performance gain?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper

This paper presents a framework to model disentangled directions for pretrained models. Such an approach mitigates the problems with poor generation quality that arise when training models with an additional regularization term to force disentanglement. The underlying idea is contrastive: similar image variations are caused by changing the same factor, in contrast to the remaining image variations. The proposed framework is model-agnostic: it can be applied to GANs, VAEs, and Flow models.

Review
Strengths:
The approach does not require any specific training.
There is no fixed generative model type: it can be applied to GANs, VAEs and Flow models.
The method significantly outperforms previous models in terms of disentanglement metrics.
The method is quite stable to random seeds.
The authors provide a thorough ablation study, report the model accuracy with std due to random seeds, check the model sensitivity to the values of hyperparameter T.
Weaknesses:
The approach requires many 'tricks' and parts to work: the Navigator, the ∆-Contrastor consisting of two weight-sharing encoders, the contrastive objective, and hard negatives flipping. Each component requires its own set of hyperparameters. The overall performance gain is significant, and the necessity of each part is partially covered in the ablation study section. But I wonder whether it is necessary to have two encoders with shared weights, or whether a single encoder would suffice. Is it required to tune hyperparameters for every component?
ICLR | Title
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
Abstract
From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the factors and learn disentangled representation, previous methods typically leverage an extra regularization term when learning to generate realistic images. However, the term usually results in a trade-off between disentanglement and generation quality. For the generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. Based on this observation, we argue that it is possible to mitigate the trade-off by (i) leveraging the pretrained generative models with high generation quality, (ii) focusing on discovering the traversal directions as factors for disentangled representation learning. To achieve this, we propose Disentaglement via Contrast (DisCo) as a framework to model the variations based on the target disentangled representations, and contrast the variations to jointly discover disentangled directions and learn disentangled representations. DisCo achieves the state-of-the-art disentangled representation learning and distinct direction discovering, given pretrained nondisentangled generative models including GAN, VAE, and Flow. Source code is at https://github.com/xrenaa/DisCo.
1 INTRODUCTION
Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which is believed by many to be the only way to understand the world for AI fundamentally (Bengio & LeCun, 2007). To achieve the goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder to extract representations from images with each dimension corresponds to one factor individually. The generator (decoder) decodes the changing of each factor into different kinds of image variations.
With supervision, we can constrain each dimension of the representation only sensitive to one kind of image variation caused by changing one factor respectively. However, this kind of exhaustive supervision is often not available in real-world data. The typical unsupervised methods are based on a generative model to build the above encoder and generator framework, e.g., VAE (Kingma & Welling, 2014) provides encoder and generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides generator. During the training process of the encoder and generator, to achieve disentangled representation, the typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020).
∗Equal contribution. Work done during internships at Microsoft Research Asia. †Corresponding author
However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, those unsupervised methods have been proved to have an infinite number of entangled solutions without introducing inductive bias (Locatello et al., 2019). Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs purely trained for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that there is some disentanglement property embedded in the latent space of the pretrained GAN. The above observations indicate that training the encoder and generator simultaneous may not be the best choice.
We provide an alternative route to learn disentangled representation: fix the pretrained generator, jointly discover the factors in the latent space of the generator and train the encoder to extract disentangled representation, as shown in Figure 1(b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provide a novel contrastive learning view for disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo) for disentangled representation learning.
In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, Navigator, to provides candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architecture innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, (ii) a hard negatives flipping strategy for better optimization of Contrastive Loss.
We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves the state-of-the-art (SOTA) disentanglement performance compared to all the previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models.
Our main contributions can be summarized as: (i) To our best knowledge, DisCo is the first unified framework for jointly learning disentangled representation and discovering the latent space of pretrained generative models by contrasting the image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations for utilizing Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with the SOTA disentangled representation learning and latent space discovering. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy.
2 RELATED WORK
Typical unsupervised disentanglement. There have been many studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). These methods achieve disentanglement via an extra regularization, which often sacrifices the generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing the aggregated posterior, and InfoGAN-based methods maximize the mutual
information between latent factors and related observations. VAE-based methods achieve relatively good disentanglement performance but have low-quality generation. InfoGAN-based methods have relatively high generation quality but poor disentanglement performance. Our method supplements generative models pretrained without a disentanglement regularization term with contrastive learning in the Variation Space to achieve both high-fidelity image generation and SOTA disentanglement.
Interpretable directions in the latent space. Recently, researchers have been interested in discovering the interpretable directions in the latent space of generative models without supervision, especially for GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. Subsequent works focus on extracting the directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers via the Jacobian matrix. All the above methods only discover the interpretable directions in the latent space, except for Khrulkov et al. (2021), which also learns disentangled representation of generated images by training an extra encoder in an extra stage. However, none of these methods can outperform the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent space.
Contrastive Learning. Contrastive Learning has gained popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer and push representations of views from different images (negative pairs) apart using instance-level classification with a Contrastive Loss. Recently, Contrastive Learning has been extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning is suitable for disentanglement because: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020); (ii) it works in the representation space directly, without any extra layers for classification or regression.
3 DISENTANGLEMENT VIA CONTRAST
3.1 OVERVIEW OF DISCO
From the contrastive view of the intuitive notion of disentangled representation learning, we propose DisCo to leverage pretrained generative models to jointly discover the factors embedded as directions in the latent space of the generative models and learn to extract disentangled representation. The benefits of leveraging a pretrained generative model are two-fold: (i) pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and for downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, serving as an inductive bias for unsupervised disentangled representation learning.
DisCo consists of a Navigator that provides candidate traversal directions in the latent space and a ∆-Contrastor that extracts the representation of image variations and builds a Variation Space based on the target disentangled representations. More specifically, the ∆-Contrastor is composed of two shared-weight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders.
In the Variation Space, by pulling together the variation samples resulting from traversing the same direction and pushing away those resulting from traversing different directions, the Navigator learns to discover disentangled directions as factors, and the Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, which in turn causes separate dimensions of the disentangled representations to respond.
Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors are in two separate spaces, which actually does not affect the applications. Similar to the typical
methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models.
Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G: Z → I, where Z ∈ R^L denotes the latent space and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z; e.g., in the linear case, A ∈ R^{L×D} is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated, where z is sampled from Z and z′ = z + A(d, ε), with d ∈ {1, ..., D}, ε ∈ R, and A(d, ε) denoting the shift along the d-th direction with scalar ε. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair to a sample v ∈ V as
$$v(z, d, \varepsilon) = \left| E(G(z + A(d, \varepsilon))) - E(G(z)) \right|, \qquad (1)$$
where $V \subset \mathbb{R}_+^J$ denotes the Variation Space. Then we apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable the Navigator A to find the disentangled directions in the latent space Z.
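To make the workflow concrete, the following PyTorch sketch implements a linear Navigator and the ∆-Contrastor of Eq. (1). This is a minimal illustration rather than the released implementation; the class names, the unit-normalization of the Navigator columns, and the freezing strategy are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Navigator(nn.Module):
    """Linear Navigator: a learnable L x D matrix whose columns are the
    candidate traversal directions (here renormalized to unit length,
    one of the options studied in the ablations)."""
    def __init__(self, latent_dim: int, num_dirs: int):
        super().__init__()
        self.dirs = nn.Parameter(torch.randn(latent_dim, num_dirs))

    def forward(self, d: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
        # d: (B,) direction indices, eps: (B,) shift scalars -> (B, L) shifts
        cols = F.normalize(self.dirs, dim=0)
        return eps.unsqueeze(1) * cols[:, d].t()

class DeltaContrastor(nn.Module):
    """Eq. (1): v(z, d, eps) = |E(G(z + A(d, eps))) - E(G(z))|.
    The generator G is pretrained and assumed frozen elsewhere via
    requires_grad_(False); only E and A receive gradient updates."""
    def __init__(self, generator: nn.Module, encoder: nn.Module, navigator: Navigator):
        super().__init__()
        self.G, self.E, self.A = generator, encoder, navigator

    def forward(self, z, d, eps):
        v = (self.E(self.G(z + self.A(d, eps))) - self.E(self.G(z))).abs()
        return F.normalize(v, dim=1)  # unit vectors, as in Section 3.2
```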
3.2 DESIGN OF DISCO
We present the design details of DisCo, which include: (i) the collection of the query set Q = {q_i}_{i=1}^B, the positive key set K+ = {k_i^+}_{i=1}^N, and the negative key set K− = {k_i^−}_{i=1}^M, which are three subsets of the Variation Space V; (ii) the formulation of the Contrastive Loss. According to our goal of contrasting the variations, the samples from Q and K+ share the same traversal direction and should be pulled together, while the samples from Q and K− have different directions and should be pushed away. Recall that each sample v in V is determined as v(z, d, ε). To achieve the contrastive learning process, we construct the query sample q_i = v(z_i, d_i, ε_i), the positive key sample k_i^+ = v(z_i^+, d_i^+, ε_i^+), and the negative key sample k_i^− = v(z_i^−, d_i^−, ε_i^−). Specifically, we randomly sample a direction index d̂ from a discrete uniform distribution U{1, D} for {d_i}_{i=1}^B and {d_i^+}_{i=1}^N to guarantee they are the same. We randomly sample {d_i^−}_{i=1}^M from the set of the remaining directions U{1, D} \ {d̂} individually and independently to cover the rest of the directions in Navigator A. Note that the discovered direction should be independent of the starting point and the scale of variation, which is in line with the disentangled factors. Therefore, {z_i}_{i=1}^B, {z_i^+}_{i=1}^N, {z_i^−}_{i=1}^M are all sampled from the latent space Z, and {ε_i}_{i=1}^B, {ε_i^+}_{i=1}^N, {ε_i^−}_{i=1}^M are all sampled from a shared continuous uniform distribution U[−ϵ, ϵ] individually and independently. We normalize each sample in Q, K+, and K− to a unit vector to eliminate the impact caused by different shift scalars.
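The sampling scheme above can be sketched as follows, reusing the `DeltaContrastor` from the previous sketch; the batch sizes B, N, M and the shift range are illustrative hyper-parameters, not necessarily the paper's exact values.

```python
import torch

def sample_batch(contrastor, latent_dim, D, B=32, N=32, M=64,
                 eps_max=6.0, device="cpu"):
    """Builds the query set Q, the shared positive keys K+, and the
    negative keys K-. Q and K+ share one direction index d_hat; K-
    covers the remaining D - 1 directions."""
    d_hat = int(torch.randint(D, (1,)))

    def make(num, dirs):
        z = torch.randn(num, latent_dim, device=device)            # z ~ Z
        eps = (torch.rand(num, device=device) * 2 - 1) * eps_max   # U[-eps_max, eps_max]
        return contrastor(z, dirs, eps)

    q = make(B, torch.full((B,), d_hat, device=device, dtype=torch.long))
    kp = make(N, torch.full((N,), d_hat, device=device, dtype=torch.long))
    d_neg = torch.randint(D - 1, (M,), device=device)
    d_neg = d_neg + (d_neg >= d_hat).long()   # skip d_hat: U{1..D} \ {d_hat}
    kn = make(M, d_neg)
    return q, kp, kn
```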
For the Contrastive Loss, a well-known form is InfoNCE (van den Oord et al., 2018):
$$\mathcal{L}_{NCE} = -\frac{1}{|B|} \sum_{i=1}^{B} \sum_{j=1}^{N} \log \frac{\exp(q_i \cdot k_j^+ / \tau)}{\sum_{s=1}^{N+M} \exp(q_i \cdot k_s / \tau)}, \qquad (2)$$
where τ is a temperature hyper-parameter and {k_i}_{i=1}^{N+M} = {k_i^+}_{i=1}^N ∪ {k_i^−}_{i=1}^M. InfoNCE originates from the BCELoss (Gutmann & Hyvärinen, 2010), and BCELoss has been used to achieve contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We follow these works and use the BCELoss L_logits to reduce the computational cost:
$$\mathcal{L}_{logits} = -\frac{1}{|B|} \sum_{i=1}^{B} \left( l_i^+ + l_i^- \right), \qquad (3)$$
$$l_i^+ = \sum_{j=1}^{N} \log \sigma(q_i \cdot k_j^+ / \tau), \qquad l_i^- = \sum_{m=1}^{M} \log\left(1 - \sigma(q_i \cdot k_m^- / \tau)\right), \qquad (4)$$
where σ denotes the sigmoid function, l_i^+ denotes the part for positive samples, and l_i^− denotes the part for the negative ones. Note that we use a shared positive set for the B different queries to reduce the computational cost.
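A direct transcription of Eqs. (3)-(4), assuming the unit-normalized variation samples constructed above:

```python
import torch.nn.functional as F

def contrastive_bce_loss(q, kp, kn, tau=0.1):
    """Eqs. (3)-(4). q: (B, J) queries, kp: (N, J) shared positives,
    kn: (M, J) negatives; all rows are unit vectors."""
    pos = q @ kp.t() / tau                    # (B, N): q_i . k_j^+ / tau
    neg = q @ kn.t() / tau                    # (B, M): q_i . k_m^- / tau
    l_pos = F.logsigmoid(pos).sum(dim=1)      # sum_j log sigma(.)
    l_neg = F.logsigmoid(-neg).sum(dim=1)     # log(1 - sigma(x)) = logsigmoid(-x)
    return -(l_pos + l_neg).mean()            # Eq. (3)
```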
3.3 KEY TECHNIQUES FOR DISCO
Entropy-based domination loss. By optimizing the Contrastive Loss, Navigator A is optimized to find the disentangled directions in the latent space, and Disentangling Encoder E is optimized to extract disentangled representations from images. To further make the encoded representations more disentangled, i.e., when traversing along one disentangled direction, only one dimension of the encoded representation should respond, we thus propose an entropy-based domination loss to encourage the corresponding samples in the Variation Space to be one-hot. To implement the entropy-based domination loss, we first get the mean c of Q and K+ as
$$c = \frac{1}{B + N} \left( \sum_{i=1}^{B} q_i + \sum_{i=1}^{N} k_i^+ \right). \qquad (5)$$
We then compute the probability as $p_i = \exp c^{(i)} / \sum_{j=1}^{J} \exp c^{(j)}$, where $c^{(i)}$ is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss L_ed is calculated as
$$\mathcal{L}_{ed} = -\frac{1}{J} \sum_{j=1}^{J} p_j \log(p_j). \qquad (6)$$
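Eqs. (5)-(6) translate into a few lines; note that the probabilities are a softmax over the J dimensions of the mean variation vector c:

```python
import torch
import torch.nn.functional as F

def domination_loss(q, kp):
    """Entropy of the softmax-normalized mean variation vector (Eqs. 5-6);
    minimizing it pushes the average variation towards a one-hot pattern."""
    c = torch.cat([q, kp], dim=0).mean(dim=0)      # (J,), Eq. (5)
    p = F.softmax(c, dim=0)
    return -(p * torch.log(p)).sum() / p.numel()   # (1/J) * entropy, Eq. (6)
```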
Hard negatives flipping. Since the latent space of the generative models is a high-dimensional complex manifold, many different directions carry the same semantic meaning. These directions with the same semantic meaning result in hard negatives during the optimization of the Contrastive Loss. The hard negatives here are different from the hard negatives in works on self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples are available. Here, our hard negatives are more likely to be “false” negatives, and we choose to flip these hard negatives into positives. Specifically, we use a threshold T to identify the hard negative samples and use their similarity to the queries as pseudo-labels for them:
$$\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log(1 - \sigma(\alpha_{ij})) + \sum_{\alpha_{ij} \geq T} \alpha_{ij} \log(\sigma(\alpha_{ij})), \qquad (7)$$
where $\hat{l}_i^-$ denotes the modified $l_i^-$, and $\alpha_{ij} = q_i \cdot k_j^- / \tau$. Therefore, the modified final BCELoss is:
$$\mathcal{L}_{logits\text{-}f} = -\frac{1}{|B|} \sum_{i=1}^{B} \left( l_i^+ + \hat{l}_i^- \right). \qquad (8)$$
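A sketch of Eq. (7); the threshold T is applied to the scaled similarities α_ij, and weighting the flipped terms by α_ij follows the equation as written (the value T = 2.0 below is an illustrative placeholder, not a reported setting):

```python
import torch.nn.functional as F

def flipped_negative_loss(q, kn, tau=0.1, T=2.0):
    """Eq. (7): negatives whose scaled similarity alpha >= T are treated
    as pseudo-labeled positives instead of being pushed away."""
    alpha = q @ kn.t() / tau                                   # (B, M)
    is_neg = (alpha < T).float()
    l_keep = (F.logsigmoid(-alpha) * is_neg).sum(dim=1)        # true negatives
    l_flip = (alpha * F.logsigmoid(alpha) * (1 - is_neg)).sum(dim=1)  # flipped
    return l_keep + l_flip                                     # \hat{l}_i^-, shape (B,)
```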
(Table 1: quantitative disentanglement comparison, with rows grouped into typical disentanglement baselines and methods on pretrained GAN, pretrained VAE, and pretrained Flow.)
Full objective. With the above two techniques, the full objective is:
$$\mathcal{L} = \mathcal{L}_{logits\text{-}f} + \lambda \mathcal{L}_{ed}, \qquad (9)$$
where λ is the weighting hyper-parameter for entropy-based domination loss Led.
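Putting the pieces together, one optimization step could look like the following; the optimizer is assumed to hold only the parameters of the encoder E and the Navigator A (the generator stays frozen), and λ, τ, T are illustrative values:

```python
def training_step(contrastor, optimizer, latent_dim, D,
                  lam=0.1, tau=0.1, T=2.0):
    """One step of the full objective in Eq. (9), reusing sample_batch,
    flipped_negative_loss, and domination_loss from the earlier sketches."""
    import torch.nn.functional as F
    q, kp, kn = sample_batch(contrastor, latent_dim, D)
    l_pos = F.logsigmoid(q @ kp.t() / tau).sum(dim=1)        # l_i^+
    l_neg = flipped_negative_loss(q, kn, tau=tau, T=T)       # \hat{l}_i^-
    loss = -(l_pos + l_neg).mean() + lam * domination_loss(q, kp)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```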
4 EXPERIMENT
In this section, we first follow the well-accepted protocol (Locatello et al., 2019; Khrulkov et al., 2021) to evaluate the learned disentangled representation, which also implicitly reflects the performance of the discovered directions (Lin et al., 2020) (Section 4.1). Second, we follow Li et al. (2021a) to directly evaluate the discovered directions (Section 4.2). Finally, we conduct an ablation study (Section 4.3).
4.1 EVALUATIONS ON DISENTANGLED REPRESENTATION
4.1.1 EXPERIMENTAL SETUP
Datasets. We consider the following popular datasets in the disentanglement areas: Shapes3D (Kim & Mnih, 2018) with 6 ground truth factors, MPI3D (Gondal et al., 2019) with 7 ground truth factors,
and Cars3D (Reed et al., 2015) with 3 ground truth factors. In the experiments on the above datasets, images are resized to 64×64 resolution.
Pretrained generative models. For GAN, we use the StyleGAN2 model (Karras et al., 2020). For VAE, we use a common structure with convolutions (Locatello et al., 2019). For Flow, we use Glow (Kingma & Dhariwal, 2018).
Baseline. For the typical disentanglement baselines, we choose FactorVAE (Kim & Mnih, 2018), β-TCVAE (Chen et al., 2018) and InfoGAN-CR (Lin et al., 2020). For discovering-based methods, we consider several recent methods: GANspace (GS) (Härkönen et al., 2020), LatentDiscovery (LD) (Voynov & Babenko, 2020), ClosedForm (CF) (Shen & Zhou, 2021) and DeepSpectral (DS) (Khrulkov et al., 2021). For these methods, we follow Khrulkov et al. (2021) to train an additional encoder to extract disentangled representation. We are the first to extract disentangled representations from pretrained VAE and Flow, so we extend LD to VAE and Flow as a baseline.
Disentanglement metrics. We mainly consider two representative ones: the Mutual Information Gap (MIG) (Chen et al., 2018) and the Disentanglement metric (DCI) (Eastwood & Williams, 2018). MIG requires each factor to be only perturbed by changes of a single dimension of representation. DCI requires each dimension only to encode the information of a single dominant factor. We evaluate the disentanglement in terms of both representation and factors. We also provide results for β-VAE score (Higgins et al., 2017) and FactorVAE score (Kim & Mnih, 2018) in Appendix B.3.
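For reference, a minimal sketch of how MIG can be computed, assuming discrete ground-truth factors and representations discretized into histogram bins (the binning choice is our assumption; evaluation suites differ in such details):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig_score(factors, codes, bins=20):
    """factors: (n, F) array of discrete ground-truth factors;
    codes: (n, J) array of continuous representations. Returns the
    average normalized gap between the two largest mutual informations
    per factor."""
    disc = np.stack([np.digitize(c, np.histogram(c, bins)[1][:-1])
                     for c in codes.T])                    # (J, n) discretized dims
    gaps = []
    for f in factors.T:
        mi = np.array([mutual_info_score(f, d) for d in disc])
        h_f = mutual_info_score(f, f)                      # entropy H(f) in nats
        top2 = np.sort(mi)[-2:]
        gaps.append((top2[1] - top2[0]) / h_f)
    return float(np.mean(gaps))
```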
Randomness. We consider the randomness caused by random seeds and the strength of the regularization term (Locatello et al., 2019). For random seeds, we follow the same setting as the baselines. Since DisCo does not have a regularization term, we consider the randomness of the pretrained generative models. For all methods, we ensure there are 25 runs, except that Glow only has one run, limited by GPU resources. More details are presented in Appendix A.
4.1.2 EXPERIMENTAL RESULTS
The quantitative results are summarized in Table 1 and Figure 3. More details about the experimental settings and results are presented in Appendix A & C.
DisCo vs. typical baselines. Our DisCo achieves the SOTA performance consistently in terms of MIG and DCI scores. The variance due to randomness of DisCo tends to be smaller than that of the typical baselines. We demonstrate that a method extracting disentangled representation from pretrained non-disentangled models can outperform typical disentanglement baselines.
DisCo vs. discovering-based methods. Among the baselines based on discovering pretrained GAN, CF achieves the best performance. DisCo outperforms CF in almost all cases by a large margin. Besides, these baselines need an extra stage (Khrulkov et al., 2021) to obtain disentangled representations, while our Disentangling Encoder can directly extract disentangled representations.
4.2 EVALUATIONS ON DISCOVERED DIRECTIONS
To evaluate the discovered directions, we compare DisCo on StyleGAN2 with GS, LD, CF and DS on the real-world dataset FFHQ (Karras et al., 2019), for which the above disentanglement metrics (DCI and MIG) are not available, and adopt the comprehensive Manipulation Disentanglement Score (MDS) (Li et al., 2021a) as a metric. To calculate MDS, we use 40 CelebaHQ-Attributes predictors released by StyleGAN. Among them, we select Young, Smile, Bald and Blonde Hair, as they are attributes with an available predictor and commonly found by all methods at the same
time. The results are summarized in Table 3. DisCo has shown better overall performance compared to other baselines, which verifies our assumption that learning disentangled representation benefits latent space discovering. We also provide qualitative comparisons in Figure 4.
Finally, we provide an intuitive analysis in Appendix D for why DisCo can find those disentangled directions.
4.3 ABLATION STUDY
In this section, we perform an ablation study of DisCo only on GAN, due to space limitations. For the experiments, we use the Shapes3D dataset, and the random seed is fixed.
Choice of latent space. For style-based GANs (Karras et al., 2019; 2020), there is a style space W, which is the output of the style network (an MLP) whose input is a random latent space Z. As demonstrated in Karras et al. (2019), W is more interpretable than Z. We conduct experiments on W and Z respectively to see how the latent space influences the performance. As shown in Table 4, DisCo on W is better, indicating that the better the latent space is organized, the better disentanglement DisCo can achieve.
Choices of A. Following the setting of Voynov & Babenko (2020), we mainly consider three options of A: a linear operator with all matrix columns having a unit length, a linear operator with orthonormal matrix columns, or a nonlinear operator of 3 fully-connected layers.
The results are shown in Table 4. For latent spaces W and Z, A with unit-norm columns achieves nearly the best performance in terms of MIG and DCI scores. Compared to A with orthonormal matrix columns, A with unit-norm columns is more expressive with fewer constraints. Another possible reason is that A is global and not conditioned on the latent code z, whereas a non-linear operator is more suitable for a local navigator A. For such a much more complex local and non-linear setting, more inductive bias or supervision should be introduced. The three options are sketched after this paragraph.
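The three Navigator options can be parameterized as below; the QR reparameterization for the orthonormal variant is one common way to enforce the constraint and is our assumption, not necessarily the exact mechanism used in the experiments:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnitNormNavigator(nn.Module):
    """Linear A with unit-length columns (the best-performing option)."""
    def __init__(self, L, D):
        super().__init__()
        self.W = nn.Parameter(torch.randn(L, D))
    def directions(self):
        return F.normalize(self.W, dim=0)

class OrthonormalNavigator(nn.Module):
    """Linear A with orthonormal columns via a QR reparameterization."""
    def __init__(self, L, D):
        super().__init__()
        self.W = nn.Parameter(torch.randn(L, D))
    def directions(self):
        Q, _ = torch.linalg.qr(self.W)   # requires L >= D
        return Q

class NonlinearNavigator(nn.Module):
    """Nonlinear A: a 3-layer MLP mapping a scaled one-hot direction id
    to a latent-space shift."""
    def __init__(self, L, D, hidden=256):
        super().__init__()
        self.D = D
        self.net = nn.Sequential(nn.Linear(D, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, L))
    def forward(self, d, eps):
        onehot = F.one_hot(d, self.D).float() * eps.unsqueeze(1)
        return self.net(onehot)
```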
Entropy-based domination loss. Here, we verify the effectiveness of the entropy-based domination loss L_ed for disentanglement. For a desirable disentangled representation, one semantic meaning corresponds to one dimension. As shown in Table 4, L_ed can improve the performance by a large margin. We also visualize the Variation Space to further demonstrate the effectiveness of our proposed loss in Figure 5. Adding the domination loss makes the samples in the Variation Space one-hot, which is desirable for disentanglement.
Hard negatives flipping. We run our DisCo with or without the hard negatives flipping strategy to study its influence. As shown in Table 4, flipping hard negatives can improve the disentanglement ability of DisCo. The reason is that the hard negatives have the same semantics as the positive samples. In this case, treating them as the hard negatives does not make sense. Flipping them with pseudo-labels can make the optimization of Contrastive Learning easier.
Hyperparameters N & M. We run DisCo with different ratios of N : M with a fixed sum of 96, and with different sums of N + M with a fixed ratio of 1 : 2, to study their impacts. As shown in Figure 6 (a), the best ratio is N : M = 32 : 64 = 1 : 2; the red line (MIG) and blue line (DCI) in the figure show that larger or smaller ratios hurt DisCo, which indicates that DisCo requires a balance between N and M. As shown in Figure 6 (b), the sum N + M has only a slight impact on DisCo. For other hyperparameters, we set them empirically, and more details are presented in Appendix A.
Contrast vs. Classification. To verify the effectiveness of Contrast, we substitute it with classification by adopting an additional linear layer to recover the corresponding direction index and the shift along this direction. As Table 2 shows, Contrastive Learning outperforms Classification significantly.
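Our reading of this classification baseline, in the spirit of Voynov & Babenko (2020), is sketched below; the head layout and the equal weighting of the two loss terms are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class ClassificationHead(nn.Module):
    """Ablation baseline: instead of contrasting, a linear head predicts
    the traversed direction index and the shift from the variation sample."""
    def __init__(self, J, D):
        super().__init__()
        self.cls = nn.Linear(J, D)   # which direction was traversed
        self.reg = nn.Linear(J, 1)   # how far along it

    def loss(self, v, d, eps):
        # v: (B, J) variation samples, d: (B,) indices, eps: (B,) shifts
        return (F.cross_entropy(self.cls(v), d)
                + F.l1_loss(self.reg(v).squeeze(1), eps))
```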
Concatenation vs. Variation. We further demonstrate that the Variation Space is crucial for DisCo. When we replace the difference operator with concatenation, the performance drops significantly (Table 2), indicating that the encoded representation is not well disentangled. In other words, the disentangled representations of images are achieved by Contrastive Learning in the Variation Space.
4.4 ANALYSIS OF DIFFERENT GENERATIVE MODELS
As shown in Table 1, DisCo generalizes well to different generative models (GAN, VAE, and Flow). DisCo on GAN and VAE achieves relatively good performance, while DisCo on Flow is not as good. The possible reason is similar to the choice of latent space of GAN: we assume the disentangled directions are globally linear and thus use a linear navigator. In contrast to GAN and VAE, we suspect that Flow may not conform to this assumption well. Furthermore, Flow has the problems of high GPU cost and unstable training, which limits our further exploration.
5 CONCLUSION
In this paper, we present an unsupervised and model-agnostic method, DisCo, a Contrastive Learning framework that learns disentangled representation by exploiting pretrained generative models. We propose an entropy-based domination loss and a hard negatives flipping strategy to achieve better disentanglement. DisCo outperforms typical unsupervised disentanglement methods while maintaining high image quality. We pinpoint a new direction: Contrastive Learning can be effectively applied to extract disentangled representation from pretrained generative models. For some specific complex generative models, the global linear assumption of disentangled directions in the latent space could be a limitation. For future work, extending DisCo to existing VAE-based disentanglement frameworks is an exciting direction.
A.2 SETTING FOR BASELINES
In this section, we introduce the implementation setting for the baselines (including randomness).
VAE-based methods. We choose FactorVAE and β-TCVAE as the SOTA VAE-based methods, and we follow Locatello et al. (2019) to use the same architecture of encoder and decoder. For the hyper-parameters, we use the best settings found by grid search. We set the latent dimension of the representation to 10. For FactorVAE, we set the hyperparameter γ to 10. For β-TCVAE, we set the hyperparameter β to 6. For the random seeds, considering our method has 25 runs, we run each model 25 times with different random seeds to make the comparison fair.
InfoGAN-based methods. We choose InfoGAN-CR as a baseline. We use the official implementation 2 with the best hyperparameter settings found by grid search. For the random seeds, we run 25 times with different random seeds.
Discovering-based methods. We follow Khrulkov et al. (2021) to use the same settings for the following four baselines: LD (GAN), CF, GS, and DS. Similar to our method (DisCo), discovering-based methods do not have a regularization term. Thus, for the randomness, we adopt the same strategy as DisCo. We take the top-10 directions for 5 different random seeds for GAN and 5 different random seeds for the additional encoder that learns disentangled representations.
LD (VAE) & LD (Flow). We follow LD (GAN) to use the same settings and substitute the GAN with VAE / Glow. The only exception is the randomness for LD (Flow). We only run one random seed to pretrain the Glow and use one random seed for the encoder.
A.3 MANIPULATION DISENTANGLEMENT SCORE
As claimed in Li et al. (2021a), it is difficult to evaluate the performance on discovering the latent space among different methods, which often use model-specific hyper-parameters to control the editing strength. Thus, Li et al. (2021a) propose a comprehensive metric called Manipulation Disentanglement Score (MDS), which takes both the accuracy and the disentanglement of manipulation into consideration. For more details, please refer to Li et al. (2021a).
A.4 DOMAIN GAP PROBLEM
Please note that there exists a domain gap between the generated images of pretrained generative models and the real images. However, the good performance on disentanglement metrics shows that the domain gap has limited influence on DisCo.
2https://github.com/fjxmlzn/InfoGAN-CR
A.5 ARCHITECTURE
Here, we provide the model architectures in our work. For the architecture of StyleGAN2, we follow Khrulkov et al. (2021). For the architecture of Glow, we use the open-source implementation 3.
3https://github.com/rosinality/glow-pytorch
B MORE EXPERIMENTS
B.1 MORE QUALITATIVE COMPARISON
We provide some examples for qualitative comparison. We first demonstrate the trade-off problem of the VAE-based methods. As shown in Figure 7, DisCo leverages the pretrained generative model and does not have the trade-off between disentanglement and generation quality.
Furthermore, as shown in Figure 8 and Figure 9, VAE-based methods suffer from poor image quality. When changing one attribute, the results of discovering-based methods tend to also change other attributes.
We also provide qualitative comparisons between DisCo and InfoGAN-CR. Note that the latent space of InfoGAN-CR is not aligned with the pretrained StyleGAN2. InfoGAN-CR also suffers from the trade-off problem, and its disentanglement ability is worse than DisCo.
We explain the comparison in the main paper and show more manipulation comparisons here.
B.2 ANALYSIS OF THE LEARNED DISENTANGLED REPRESENTATIONS
We feed the images traversing the three most significant factors (wall color, floor color, and object color) of Shapes3D into the Disentangling Encoder and plot the corresponding dimensions of the encoded representations to visualize the learned disentangled space. The location of each point is the disentangled representation of the corresponding image. An ideal result is that all the points form a cube and the color variation is continuous. We consider three baselines that have relatively higher MIG and DCI: CF, DS, and LD. As the figures below show, the points in the latent spaces of CF and DS are not well organized, and the latent spaces of all three baselines are not well aligned with the axes, especially for LD. DisCo learns a well-aligned and well-organized latent space, which signifies better disentanglement.
(Figure: the learned representation spaces of CF, DS, LD, and Ours.)
B.3 MORE QUANTITATIVE COMPARISON
We provide additional quantitative comparisons in terms of the β-VAE score and the FactorVAE score. DisCo on pretrained GAN is comparable to discovering-based baselines in terms of the β-VAE score and the FactorVAE score, suggesting some disagreement between these two scores and MIG/DCI. However, note that the qualitative evaluation in Figure 8, Figure 9 and Section B.2 shows that the disentanglement ability of DisCo is better than all the baselines on the Shapes3D dataset.
(Table: β-VAE and FactorVAE scores, grouped into typical disentanglement baselines and methods on pretrained GAN, pretrained VAE, and pretrained Flow.)
We also provide an additional experiment on the Noisy-DSprites dataset. We compare DisCo with β-TCVAE (the best typical method) and CF (the best discovering-based method) in terms of the MIG and DCI metrics.
C LATENT TRAVERSALS
In this section, we visualize the disentangled directions of the latent space discovered by DisCo on each dataset. For Cars3D, Shapes3D, Anime and MNIST, the image resolution is 64 × 64. For FFHQ, LSUN cat and LSUN church, the image resolution is 256 × 256. Besides StyleGAN2, we also provide results of Spectral Norm GAN (Miyato et al., 2018) 4 on MNIST (LeCun et al., 2010) and Anime Face (Jin et al., 2017) to demonstrate that DisCo generalizes well to other types of GAN.
4https://github.com/anvoynov/GANLatentDiscovery
D AN INTUITIVE ANALYSIS FOR DISCO
DisCo works by contrasting the variations resulting from traversing along the directions provided by the Navigator. Is this sufficient to converge to a disentangled solution? Note that it is very challenging to answer this question: to our best knowledge, for unsupervised disentangled representation learning, there is no sufficient theoretical constraint to guarantee convergence to a disentangled solution (Locatello et al., 2019). Here we provide an intuitive analysis of DisCo and share our thoughts on how DisCo finds the disentangled directions in the latent space, supported by our observations on pretrained GAN both quantitatively and qualitatively. The intuitive analysis consists of two parts: (i) the directions that can be discovered by DisCo have different variation patterns compared to random directions; (ii) DisCo hardly converges to an entangled solution.
D.1 WHAT KIND OF DIRECTIONS DISCO CAN CONVERGE TO?
First, we visualize the latent space and show that there are some variation patterns in the latent space for disentangled factors. We design the following visualization method. Given a pretrained GAN and two directions in the latent space, we traverse along the plane spanned by the two directions to generate a grid of images. The range is large enough to include all values of these disentangled factors, and the step is small enough to obtain a dense grid. Then, we input these images into an encoder trained with ground-truth factor labels. We get a heatmap of each factor (the value is the response of the dimension corresponding to the factor). In this way, we can observe the variation patterns that emerge in the latent space.
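A sketch of this probing procedure; `factor_encoder` stands for a regressor trained with ground-truth factor labels, and the grid range and resolution are illustrative:

```python
import torch

@torch.no_grad()
def factor_heatmap(G, factor_encoder, z0, dir_a, dir_b, factor_idx,
                   extent=8.0, steps=21):
    """Probes the plane spanned by dir_a and dir_b around z0: generates an
    image at each grid point and records the response of one ground-truth
    factor. z0, dir_a, dir_b have shape (1, L)."""
    ticks = torch.linspace(-extent, extent, steps)
    heat = torch.zeros(steps, steps)
    for i, a in enumerate(ticks):
        for j, b in enumerate(ticks):
            img = G(z0 + a * dir_a + b * dir_b)
            heat[i, j] = factor_encoder(img)[0, factor_idx]
    return heat   # visualize with e.g. matplotlib's imshow
```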
We take the pretrained StyleGAN on Shapes3D (synthetic) and FFHQ (real-world). For Shapes3D, we take background color and floor color as the two factors (since they refer to different areas in the image, these two factors are disentangled). For FFHQ, we take smile (mouth) and bald (hair) as the two factors (disentangled for referring to different areas). We then choose random directions and the directions discovered by DisCo. The results are shown in Figure 27 and Figure 28.
We find a clear difference between random directions and directions discovered by DisCo. This is because DisCo can learn the directions by separating the variations resulting from traversing along them. However, not all directions can be separated. For those directions whose variations cannot be recognized or clustered by the encoder E, it is nearly impossible for DisCo to converge to them. Conversely, for those directions that can be easily recognized and clustered, DisCo will converge to them with a higher probability. From the following observations, we find that the variation patterns resulting from the directions corresponding to disentangled factors are easily recognized and clustered.
D.2 WHY DISCO HARDLY CONVERGES TO THE ENTANGLED CASES?
In the previous section, we show that DisCo can discover the directions with distinct variation patterns and exclude random directions. Here we discuss why DisCo can hardly converge to the following entangled case (a trivial solution built on a disentangled one). For example, suppose there is an entangled direction of factors A and B (A and B change at the same rate when traversing along it) in the latent space of a generative model, and DisCo can separate the variations resulting from the direction of A and the entangled direction. In that case, DisCo has no additional bias to update these directions to converge to disentangled ones.
In the following text, for ease of reference, we denote the entangled direction of factors A and B (A and B change at the same rate when traversing along it) as the A+B direction, and the direction of factor A (only A changes when we traverse along it) as the A direction. The reasons why DisCo hardly converges to the case of A and A+B are two-fold:
(i) Our encoder is a lightweight network (5 CNN layers + 3 FC layers). It is nearly impossible for it to separate the A and A+B directions.
(ii) In the latent space of the pretrained generative models, the disentangled directions (A, B) are consistent at different locations. In contrast, the entangled directions (A+B) are not, as shown in Figure 29.
We conduct the following experiments to verify them. For (i), we replace our encoder in DisCo with a ResNet-50 and train DisCo from scratch on the Shapes3D dataset. The loss, MIG, and DCI are presented in Table 11. The trivial solution is possible when the encoder is powerful enough to fit the A and A+B directions to “become orthogonal”. With this consideration, in DisCo we adopt a lightweight encoder to avoid this issue.
For (ii), as the sketch in Figure 29 demonstrates, the disentangled directions (“A” in blue, “B” in green) are consistent, i.e., invariant to the location in the latent space, while the entangled direction (“A+B” in red) is not consistent across different locations. The fundamental reason is that the directions of the disentangled variations are invariant to the position in the latent space, but the “rate” of the variation is not. E.g., at any point in the latent space, going “up” constantly changes the camera's pose. However, at point a, going “up” with step 1 means rotating 10 degrees, while at point b, going “up” with step 1 means rotating 5 degrees. When the variation “rates” of “A” and “B” are different, the “A+B” directions at different locations are not consistent.
Based on the different properties of disentangled and entangled directions in the latent space, DisCo can discover the disentangled directions with the contrastive loss. The contrastive loss can be understood from a clustering view (Wang & Isola, 2020; Li et al., 2021b). The variations from the disentangled directions are more consistent and can be better clustered than the variations from the
entangled ones. Thus, DisCo can discover the disentangled directions in the latent space and learn disentangled representations from images. We further provide the following experiments to support our above analysis.
D.2.1 QUANTITATIVE EXPERIMENT
We compare the losses of three different settings:
• A: For a navigator with disentangled directions, we fix the navigator and train the encoder until convergence.
• A + B: For a navigator with entangled directions (we use the linear combination of the disentangled directions to initialize the navigator), we fix it and train the encoder until convergence.
• A+B → A: After A+B is convergent, we update both the encoder and the navigator until convergence.
The Contrastive loss after convergence is presented in Table 12.
The results show that: (i) the disentangled directions (A) lead to lower loss and better performance than the entangled directions (A+B), indicating no trivial solution; (ii) even though the encoder with A+B has converged, when we optimize the navigator, gradients still backpropagate to the navigator, and it converges to A.
D.2.2 QUALITATIVE EXPERIMENT
We visualize the latent space of GAN in Figure 30 to verify the variation “rate” in the following way: in the latent space, we select two ground truth disentangled directions: floor color (A) and background color (B) obtained by supervision with InterFaceGAN (Shen et al., 2020), we conduct equally spaced sampling along the two disentangled directions: A (labeled with green color variation), B (labeled with gradient blue color) and composite direction (A+B, labeled with gradient red color) as shown in Figure 30 (a).
Then we generate the images (including the other images on the grid, as shown in Figure 30 (b)), and feed the images in the bounding boxes into a “ground truth” encoder (trained with ground-truth disentangled factors) to regress the “ground truth” disentangled representations of the images.
In Figure 30 (c), the points labeled with green color variation are well aligned with the x-axis, indicating that only the floor color changes, and the points labeled with blue variation are well aligned with the y-axis, indicating that only the background color changes. However, the points labeled with red color are NOT aligned with any line, which indicates that the directions of A+B are not consistent. Further, the variation “rate” depends on the location in the latent space for the two disentangled directions. This observation well supports our idea shown in Figure 29. The different properties of disentangled and entangled directions enable DisCo to discover the disentangled directions in the latent space.
E EXTENSION: BRIDGE THE PRETRAINED VAE AND PRETRAINED GAN
Researchers have recently been interested in improving image quality given the disentangled representation produced by typical disentanglement methods. Lee et al. (2020) propose a post-processing stage using a GAN based on disentangled representations learned by VAE-based disentanglement models. This method sacrifices a little generation ability due to an additional constraint. Similarly, Srivastava et al. (2020) propose to use a deep generative model with AdaIN (Huang & Belongie, 2017) as a post-processing stage to improve the reconstruction ability. Following this setting, we can replace the encoder in DisCo (GAN) with an encoder pretrained by VAE-based disentanglement baselines. In this way, we can bridge the pretrained disentangled VAE and the pretrained GAN, as shown in Figure 31. Compared to previous methods, our method can fully utilize the state-of-the-art GAN and the state-of-the-art VAE-based method and does not need to train a deep generative model from scratch.
F DISCUSSION ON RELATION BETWEEN BCELOSS AND NCELOSS
We would like to present a deeper discussion on the relation between the BCELoss L_logits and the NCELoss L_NCE. This discussion is related to the NCE paper (Gutmann & Hyvärinen, 2010) and the InfoNCE paper (van den Oord et al., 2018). The discussion is as follows: (i) we first provide a formulation of a general problem and obtain two objectives, L1 and L2, where L1 is an upper bound of L2; (ii) following Gutmann & Hyvärinen (2010), we show that L1 is aligned with L_BCE under the setting of Gutmann & Hyvärinen (2010); (iii) following van den Oord et al. (2018), we prove that L2 is aligned with L_NCE under the setting of van den Oord et al. (2018); (iv) we discuss the relation between these objectives and the loss in our paper.
Part I. Assume we have S observations {x_i}_{i=1}^S from a data distribution p(x), each with a label C_i ∈ {0, 1}. Then we denote the posterior probabilities as p^+(x) = p(x|C = 1) and p^−(x) = p(x|C = 0). We define two objectives as follows:
$$\mathcal{L}_1 = -\sum_{i=1}^{S} \left[ C_i \log P(C_i = 1 | x_i) + (1 - C_i) \log P(C_i = 0 | x_i) \right], \qquad (10)$$
and
$$\mathcal{L}_2 = -\sum_{i=1}^{S} C_i \log P(C_i = 1 | x_i). \qquad (11)$$
Since $-\sum_{i=1}^{S} (1 - C_i) \log P(C_i = 0 | x_i) \geq 0$, we have
$$\mathcal{L}_1 \geq \mathcal{L}_2. \qquad (12)$$
L1 is an upper bound of L2. This is a general formulation of a binary classification problem. In the context of our paper, we have paired observations x_i : (q, k_i), with q as the query, and the key k_i comes either from a positive key set {k_j^+}_{j=1}^N or from a negative key set {k_m^−}_{m=1}^M (i.e., {k_i}_{i=1}^{N+M} = {k_j^+}_{j=1}^N ∪ {k_m^−}_{m=1}^M), where M = S − N. And C_i is assigned as:
$$C_i = \begin{cases} 1, & k_i \in \{k_j^+\}_{j=1}^N \\ 0, & k_i \in \{k_m^-\}_{m=1}^M \end{cases} \qquad (13)$$
In our paper, we have h(x) = exp(q · k/τ).
Part II. In this part, following Gutmann & Hyvärinen (2010), we show that L1 is aligned with L_logits (Equation 3 in the main paper) under the setting of Gutmann & Hyvärinen (2010). Following Gutmann & Hyvärinen (2010), we assume the prior distribution P(C = 0) = P(C = 1) = 1/2; according to Bayes' rule, we have
$$P(C = 1 | x) = \frac{p(x | C = 1) P(C = 1)}{p(x | C = 1) P(C = 1) + p(x | C = 0) P(C = 0)} = \frac{1}{1 + \frac{p^-(x)}{p^+(x)}}. \qquad (14)$$
And P(C = 0|x) = 1 − P(C = 1|x). On the other hand, we have the general form of the BCELoss as
$$\mathcal{L}_{BCE} = -\sum_{i=1}^{S} \left[ C_i \log \sigma(q \cdot k_i / \tau) + (1 - C_i) \log(1 - \sigma(q \cdot k_i / \tau)) \right], \qquad (15)$$
where σ(·) is the sigmoid function. We have
$$\sigma(q \cdot k / \tau) = \frac{1}{1 + \exp(-q \cdot k / \tau)} = \frac{1}{1 + \frac{1}{\exp(q \cdot k / \tau)}} = \frac{1}{1 + \frac{1}{h(x)}}, \qquad (16)$$
From Gutmann & Hyvärinen (2010) Theorem 1, we know that when LBCE is minimized, we have
$$h(x) = \frac{p^+(x)}{p^-(x)}. \qquad (17)$$
Thus, we know the BCELoss L_BCE is an approximation of the objective L1.
Part III. Following van den Oord et al. (2018), we prove that L2 is aligned with L_NCE (Equation 2 in the main paper) under the setting of van den Oord et al. (2018).
From the typical contrastive setting (one positive sample, the others are negative samples, following van den Oord et al. (2018)), we assume there is only one positive sample and the others are negatives in {x_i}_{i=1}^S. Then, the probability that x_i is sampled from p^+(x) rather than p^−(x) is as follows,
$$P(C_i = 1 | x_i) = \frac{p^+(x_i) \prod_{l \neq i} p^-(x_l)}{\sum_{j=1}^{S} p^+(x_j) \prod_{l \neq j} p^-(x_l)} = \frac{\frac{p^+(x_i)}{p^-(x_i)}}{\sum_{j=1}^{S} \frac{p^+(x_j)}{p^-(x_j)}} \qquad (18)$$
From van den Oord et al. (2018), we know that when minimizing Equation 11, we have h(x) = exp(q · k/τ) ∝ p^+(x)/p^−(x). In this case, we get the form of L_NCE as
$$\mathcal{L}_{NCE} = -\sum_{i=1}^{S} C_i \log \frac{\exp(q \cdot k_i / \tau)}{\sum_{j=1}^{S} \exp(q \cdot k_j / \tau)} \qquad (19)$$
L_NCE is an approximation of L2.
Part IV. We now generalize the contrastive loss to our setting (N positive samples, M negative samples). The BCELoss (Equation 15) can be reformulated as
$$\hat{\mathcal{L}}_{BCE} = -\sum_{j=1}^{N} \log \sigma(q \cdot k_j^+ / \tau) - \sum_{m=1}^{M} \log(1 - \sigma(q \cdot k_m^- / \tau)). \qquad (20)$$
Similarly, the NCELoss (Equation 19) can be reformulated as
$$\hat{\mathcal{L}}_{NCE} = -\sum_{j=1}^{N} \log \frac{\exp(q \cdot k_j^+ / \tau)}{\sum_{s=1}^{M+N} \exp(q \cdot k_s / \tau)} \qquad (21)$$
$\hat{\mathcal{L}}_{BCE}$ is aligned with L_logits (Equation 3 in our main paper), and $\hat{\mathcal{L}}_{NCE}$ is aligned with L_NCE (Equation 2 in the main paper).
Now we have that L1 (approximated by L_BCE) is an upper bound of L2 (approximated by L_NCE). However, the assumptions made in Part II and Part III are different: one is P(C = 0) = P(C = 1); the other is that there is only one positive sample and the others are negatives. The extension to our situation is the more general case (N positives, M negatives).
However, they share the same objective: by contrasting positives and negatives, we can use h(x) = exp(q · k/τ) to estimate p^+/p^−. We can think of h(x) as a similarity score, i.e., if q and k are from a positive pair (they have the same direction in our paper), h(x) should be as large as possible (p^+/p^− > 1), and vice versa. In this way, we can learn the representations (q, k) to reflect the image variations, i.e., similar variations have a higher score h(x), while different kinds of variations have a lower score h(x). Such meaningful representations, in turn, help to discover the directions in the latent space that carry different kinds of image variation. This is an understanding, from a contrastive learning view, of how our method works.
Summary Of The Paper
This paper proposes DisCo, a framework that learns disentangled representations from pretrained entangled generative models. Extensive experimental results show that DisCo outperforms many baselines in both quantitative and qualitative evaluations.
Review
Pros:
• The proposed method is novel and achieves SOTA results in disentanglement while ensuring good generation quality.
• Extensive experiments and ablation studies.
• In general, the paper is well written and easy to read.
Cons:
• There are still some flaws in the proposed method.
• Some details about how to compute MIG and DCI for discovering-based methods are missing.
• MIG and DCI metrics are out-of-date and may not well characterize disentanglement.
ICLR | Title
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
Abstract
From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the factors and learn disentangled representation, previous methods typically leverage an extra regularization term when learning to generate realistic images. However, the term usually results in a trade-off between disentanglement and generation quality. For the generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. Based on this observation, we argue that it is possible to mitigate the trade-off by (i) leveraging the pretrained generative models with high generation quality, (ii) focusing on discovering the traversal directions as factors for disentangled representation learning. To achieve this, we propose Disentaglement via Contrast (DisCo) as a framework to model the variations based on the target disentangled representations, and contrast the variations to jointly discover disentangled directions and learn disentangled representations. DisCo achieves the state-of-the-art disentangled representation learning and distinct direction discovering, given pretrained nondisentangled generative models including GAN, VAE, and Flow. Source code is at https://github.com/xrenaa/DisCo.
1 INTRODUCTION
Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which is believed by many to be the only way to understand the world for AI fundamentally (Bengio & LeCun, 2007). To achieve the goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder to extract representations from images with each dimension corresponds to one factor individually. The generator (decoder) decodes the changing of each factor into different kinds of image variations.
With supervision, we can constrain each dimension of the representation only sensitive to one kind of image variation caused by changing one factor respectively. However, this kind of exhaustive supervision is often not available in real-world data. The typical unsupervised methods are based on a generative model to build the above encoder and generator framework, e.g., VAE (Kingma & Welling, 2014) provides encoder and generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides generator. During the training process of the encoder and generator, to achieve disentangled representation, the typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020).
∗Equal contribution. Work done during internships at Microsoft Research Asia. †Corresponding author
However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, those unsupervised methods have been proved to have an infinite number of entangled solutions without introducing inductive bias (Locatello et al., 2019). Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs purely trained for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that there is some disentanglement property embedded in the latent space of the pretrained GAN. The above observations indicate that training the encoder and generator simultaneous may not be the best choice.
We provide an alternative route to learn disentangled representation: fix the pretrained generator, jointly discover the factors in the latent space of the generator and train the encoder to extract disentangled representation, as shown in Figure 1(b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provide a novel contrastive learning view for disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo) for disentangled representation learning.
In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, Navigator, to provides candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architecture innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, (ii) a hard negatives flipping strategy for better optimization of Contrastive Loss.
We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves the state-of-the-art (SOTA) disentanglement performance compared to all the previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models.
Our main contributions can be summarized as: (i) To our best knowledge, DisCo is the first unified framework for jointly learning disentangled representation and discovering the latent space of pretrained generative models by contrasting the image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations for utilizing Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with the SOTA disentangled representation learning and latent space discovering. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy.
2 RELATED WORK
Typical unsupervised disentanglement. There have been a lot of studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). These methods achieve disentanglement via an extra regularization, which often sacrifices the generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing aggregated posterior, and InfoGAN-based methods maximize the mutual
information between latent factors and related observations. VAE-based methods achieve relatively good disentanglement performance but have low-quality generation. InfoGAN-based methods have a relatively high quality of generation but poor disentanglement performance. Our method supplements generative models pretrained without disentanglement regularization term with contrastive learning in the Variation Space to achieve both high-fidelity image generation and SOTA disentanglement.
Interpretable directions in the latent space. Recently, researchers have been interested in discovering the interpretable directions in the latent space of generative models without supervision, especially for GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. The subsequent works focus on extracting the directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers by Jacobian matrix. All the above methods only discover the interpretable directions in the latent space, except for Khrulkov et al. (2021) which also learns disentangled representation of generated images by training an extra encoder in an extra stage. However, all these methods can not outperform the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent spaces.
Contrastive Learning. Contrastive Learning gains popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer, and push representations of views from different images (negative pairs) apart using instance-level classification with Contrastive Loss. Recently, Contrastive Learning is extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning is suitable for disentanglement due to: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020), (ii) it works in the representation space directly without any extra layers for classification or regression.
3 DISENTANGLEMENT VIA CONTRAST
3.1 OVERVIEW OF DISCO
From the contrastive view of the intuitive notion of disentangled representation learning, we propose a DisCo to leverage pretrained generative models to jointly discover the factors embedded as directions in the latent space of the generative models and learn to extract disentangled representation. The benefits of leveraging a pretrained generative model are two-fold: (i) the pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, severing as an inductive bias for unsupervised disentangled representation learning.
DisCo consists of a Navigator to provides candidate traversal directions in the latent space and a ∆-Contrastor to extract the representation of image variations and build a Variation Space based on the target disentangled representations. More specifically, ∆-Contrastor is composed of two sharedweight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders.
In the Variation Space, by pulling together the variation samples resulted from traversing the same direction and pushing away the ones resulted from traversing different directions, the Navigator learns to discover disentangled directions as factors, and Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, which causes separated dimensions of disentangled representations respond.
Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors are in two separate spaces, which actually does not affect the applications. Similar to the typical
methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models.
Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G: Z → I, where Z ∈ RL denotes the latent space, and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z , e.g., in the linear case, A ∈ RL×D is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated. z is sampled from Z and z′ = z + A(d, ε), where d ∈ {1, ..., D} and ε ∈ R, and A(d, ε) denotes the shift along the dth direction with ε scalar. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair to a sample v ∈ V as
$$v(z, d, \varepsilon) = \left| E(G(z + A(d, \varepsilon))) - E(G(z)) \right|, \tag{1}$$
where $\mathcal{V} \in \mathbb{R}^J_+$ denotes the Variation Space. Then we apply Contrastive Learning in $\mathcal{V}$ to optimize the Disentangling Encoder $E$ to extract disentangled representations and simultaneously enable the Navigator $A$ to find the disentangled directions in the latent space $\mathcal{Z}$.
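To make the workflow concrete, below is a minimal PyTorch sketch of a linear Navigator and the ∆-Contrastor computation of Eq. 1. The module and function names are our illustrative assumptions, not the exact implementation; `G` and `E` stand for any pretrained generator and the Disentangling Encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Navigator(nn.Module):
    """Linear navigator: D candidate directions stored as columns of an L x D matrix."""
    def __init__(self, latent_dim: int, num_dirs: int):
        super().__init__()
        self.dirs = nn.Parameter(torch.randn(latent_dim, num_dirs))

    def forward(self, d: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
        # A(d, eps): shift of scale eps along the (unit-norm) d-th column.
        cols = F.normalize(self.dirs, dim=0)      # enforce unit-norm columns
        return eps.unsqueeze(1) * cols[:, d].t()  # (batch, L)

def variation_sample(G, E, A, z, d, eps):
    """v(z, d, eps) = |E(G(z + A(d, eps))) - E(G(z))| (Eq. 1).
    G's weights are assumed frozen (requires_grad=False), but gradients
    still flow through its graph to the navigator A and the encoder E."""
    v = (E(G(z + A(d, eps))) - E(G(z))).abs()
    return F.normalize(v, dim=1)  # unit vector, removing the shift-scale effect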
3.2 DESIGN OF DISCO
We present the design details of DisCo, which include: (i) the collection of the query set $Q = \{q_i\}_{i=1}^{B}$, the positive key set $K^+ = \{k_i^+\}_{i=1}^{N}$, and the negative key set $K^- = \{k_i^-\}_{i=1}^{M}$, which are three subsets of the Variation Space $\mathcal{V}$; (ii) the formulation of the Contrastive Loss. According to our goal of contrasting the variations, the samples from $Q$ and $K^+$ share the same traversal direction and should be pulled together, while the samples from $Q$ and $K^-$ have different directions and should be pushed away. Recall that each sample $v$ in $\mathcal{V}$ is determined as $v(z, d, \varepsilon)$. To achieve the contrastive learning process, we construct the query sample $q_i = v(z_i, d_i, \varepsilon_i)$, the key sample $k_i^+ = v(z_i^+, d_i^+, \varepsilon_i^+)$, and the negative sample $k_i^- = v(z_i^-, d_i^-, \varepsilon_i^-)$. Specifically, we randomly sample a direction index $\hat{d}$ from a discrete uniform distribution $U\{1, D\}$ for $\{d_i\}_{i=1}^{B}$ and $\{d_i^+\}_{i=1}^{N}$ to guarantee they are the same. We randomly sample $\{d_i^-\}_{i=1}^{M}$ from the set of the remaining directions $U\{1, D\} \setminus \{\hat{d}\}$ individually and independently to cover the rest of the directions in Navigator $A$. Note that the discovered direction should be independent of the starting point and the scale of variation, which is in line with the disentangled factors. Therefore, $\{z_i\}_{i=1}^{B}$, $\{z_i^+\}_{i=1}^{N}$, $\{z_i^-\}_{i=1}^{M}$ are all sampled from the latent space $\mathcal{Z}$, and $\{\varepsilon_i\}_{i=1}^{B}$, $\{\varepsilon_i^+\}_{i=1}^{N}$, $\{\varepsilon_i^-\}_{i=1}^{M}$ are all sampled from a shared continuous uniform distribution $U[-\epsilon, \epsilon]$ individually and independently. We normalize each sample in $Q$, $K^+$, and $K^-$ to a unit vector to eliminate the impact caused by different shift scalars.
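A sketch of how one contrastive batch could be assembled under the sampling scheme above, reusing `variation_sample` from the previous sketch; `sample_z` is a hypothetical sampler for the latent prior, and all names are illustrative:

```python
def sample_batch(G, E, A, B, N, M, D, latent_dim, eps_max, sample_z):
    """Build the query set Q, positive set K+, and negative set K-."""
    d_hat = int(torch.randint(D, (1,)))  # shared direction index for Q and K+

    def draw(n, dirs):
        z = sample_z(n, latent_dim)                       # z ~ Z
        eps = torch.empty(n).uniform_(-eps_max, eps_max)  # eps ~ U[-eps_max, eps_max]
        return variation_sample(G, E, A, z, dirs, eps)

    q  = draw(B, torch.full((B,), d_hat, dtype=torch.long))   # queries
    kp = draw(N, torch.full((N,), d_hat, dtype=torch.long))   # positives: same direction
    rest = torch.tensor([j for j in range(D) if j != d_hat])
    kn = draw(M, rest[torch.randint(len(rest), (M,))])        # negatives: other directions
    return q, kp, kn
```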
For the design of the Contrastive Loss, a well-known form is InfoNCE (van den Oord et al., 2018):
$$\mathcal{L}_{NCE} = -\frac{1}{|B|} \sum_{i=1}^{B} \sum_{j=1}^{N} \log \frac{\exp(q_i \cdot k_j^+ / \tau)}{\sum_{s=1}^{N+M} \exp(q_i \cdot k_s / \tau)}, \tag{2}$$
where $\tau$ is a temperature hyper-parameter and $\{k_i\}_{i=1}^{N+M} = \{k_i^+\}_{i=1}^{N} \cup \{k_i^-\}_{i=1}^{M}$. InfoNCE originates from the binary cross-entropy (BCE) loss of noise-contrastive estimation (Gutmann & Hyvärinen, 2010), and the BCE loss has been used to achieve contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We follow them and use the BCE loss $\mathcal{L}_{logits}$ to reduce the computational cost:
$$\mathcal{L}_{logits} = -\frac{1}{|B|} \sum_{i=1}^{B} \left( l_i^- + l_i^+ \right), \tag{3}$$
$$l_i^+ = \sum_{j=1}^{N} \log \sigma(q_i \cdot k_j^+ / \tau), \qquad l_i^- = \sum_{m=1}^{M} \log\left(1 - \sigma(q_i \cdot k_m^- / \tau)\right), \tag{4}$$
where $\sigma$ denotes the sigmoid function, $l_i^+$ denotes the part for positive samples, and $l_i^-$ denotes the part for the negative ones. Note that we use a shared positive set for the $B$ different queries to reduce the computational cost.
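A minimal sketch of Eqs. 3-4 follows, assuming the unit-normalized `q`, `kp`, `kn` tensors produced by the sketch above; the temperature value is only an illustrative default:

```python
def contrastive_bce_loss(q, kp, kn, tau=0.1):
    """L_logits (Eqs. 3-4). q: (B, J); kp: (N, J); kn: (M, J)."""
    pos = q @ kp.t() / tau                 # (B, N) positive logits q_i . k_j^+ / tau
    neg = q @ kn.t() / tau                 # (B, M) negative logits
    l_pos = F.logsigmoid(pos).sum(dim=1)   # l_i^+
    l_neg = F.logsigmoid(-neg).sum(dim=1)  # l_i^-: log(1 - sigma(x)) = logsigmoid(-x)
    return -(l_pos + l_neg).mean()         # -1/|B| * sum_i (l_i^+ + l_i^-)
```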
3.3 KEY TECHNIQUES FOR DISCO
Entropy-based domination loss. By optimizing the Contrastive Loss, the Navigator $A$ is optimized to find the disentangled directions in the latent space, and the Disentangling Encoder $E$ is optimized to extract disentangled representations from images. To make the encoded representations even more disentangled, i.e., when traversing along one disentangled direction, only one dimension of the encoded representation should respond, we propose an entropy-based domination loss that encourages the corresponding samples in the Variation Space to be one-hot. To implement it, we first compute the mean $c$ of $Q$ and $K^+$ as
$$c = \frac{1}{B + N} \left( \sum_{i=1}^{B} q_i + \sum_{i=1}^{N} k_i^+ \right). \tag{5}$$
We then compute the probability as $p_i = \exp(c_i) / \sum_{j=1}^{J} \exp(c_j)$, where $c_i$ is the $i$-th element of $c$ and $J$ is the number of dimensions of $c$. The entropy-based domination loss $\mathcal{L}_{ed}$ is calculated as
$$\mathcal{L}_{ed} = -\frac{1}{J} \sum_{j=1}^{J} p_j \log(p_j). \tag{6}$$
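A sketch of Eqs. 5-6; the softmax-entropy form means the loss is minimized when the mean variation vector is dominated by a single dimension:

```python
def domination_loss(q, kp):
    """L_ed (Eqs. 5-6): entropy of the softmax over the mean of Q and K+."""
    c = torch.cat([q, kp], dim=0).mean(dim=0)     # Eq. 5: mean over the B + N samples
    p = F.softmax(c, dim=0)                       # p_i = exp(c_i) / sum_j exp(c_j)
    return -(p * torch.log(p)).sum() / p.numel()  # Eq. 6: entropy averaged over J dims
```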
Hard negatives flipping. Since the latent space of the generative models is a high-dimensional complex manifold, many different directions carry the same semantic meaning. These directions with the same semantic meaning result in hard negatives during the optimization of the Contrastive Loss. The hard negatives here are different from the hard negatives in works on self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples are available. Here, our hard negatives are more likely to be "false" negatives, and we choose to flip these hard negatives into positives. Specifically, we use a threshold $T$ to identify the hard negative samples, and use their similarity to the queries as their pseudo-labels:
$$\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log\left(1 - \sigma(\alpha_{ij})\right) + \sum_{\alpha_{ij} \ge T} \alpha_{ij} \log \sigma(\alpha_{ij}), \tag{7}$$
where $\hat{l}_i^-$ denotes the modified $l_i^-$, and $\alpha_{ij} = q_i \cdot k_j^- / \tau$. Therefore, the final modified BCE loss is:
$$\mathcal{L}_{logits\text{-}f} = -\frac{1}{|B|} \sum_{i=1}^{B} \left( l_i^+ + \hat{l}_i^- \right). \tag{8}$$
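A sketch of the flipped negative term of Eq. 7; the threshold is a tuned hyper-parameter, and the value below is only a placeholder:

```python
def flipped_negative_term(q, kn, tau=0.1, T=2.0):
    """hat{l}_i^- (Eq. 7): negatives with similarity alpha_ij >= T are flipped
    into positives, with alpha_ij itself acting as a soft pseudo-label."""
    alpha = q @ kn.t() / tau               # (B, M) similarities
    flipped = alpha * F.logsigmoid(alpha)  # pseudo-labeled positive term
    ordinary = F.logsigmoid(-alpha)        # usual log(1 - sigma(alpha)) term
    return torch.where(alpha >= T, flipped, ordinary).sum(dim=1)
```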
Full objective. With the above two techniques, the full objective is:
$$\mathcal{L} = \mathcal{L}_{logits\text{-}f} + \lambda \mathcal{L}_{ed}, \tag{9}$$
where $\lambda$ is the weighting hyper-parameter for the entropy-based domination loss $\mathcal{L}_{ed}$.
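Putting the pieces together, one DisCo update could look like the following sketch, reusing the helpers above; `cfg` is a hypothetical configuration object holding B, N, M, D, tau, T, lam, and the latent sampler:

```python
def disco_step(G, E, A, optimizer, cfg):
    """One optimization step of Eq. 9: L = L_logits-f + lambda * L_ed (a sketch)."""
    q, kp, kn = sample_batch(G, E, A, cfg.B, cfg.N, cfg.M, cfg.D,
                             cfg.latent_dim, cfg.eps_max, cfg.sample_z)
    l_pos = F.logsigmoid(q @ kp.t() / cfg.tau).sum(dim=1)
    loss_logits_f = -(l_pos + flipped_negative_term(q, kn, cfg.tau, cfg.T)).mean()
    loss = loss_logits_f + cfg.lam * domination_loss(q, kp)
    optimizer.zero_grad()
    loss.backward()   # gradients reach only the navigator A and the encoder E
    optimizer.step()
    return loss.item()
```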
4 EXPERIMENT
In this section, we first follow the well-accepted protocol (Locatello et al., 2019; Khrulkov et al., 2021) to evaluate the learned disentangled representation, which also implicitly reflects the performance of the discovered directions (Lin et al., 2020) (Section 4.1). Secondly, we follow Li et al. (2021a) to directly evaluate the discovered directions (Section 4.2). Finally, we conduct an ablation study (Section 4.3).
4.1 EVALUATIONS ON DISENTANGLED REPRESENTATION
4.1.1 EXPERIMENTAL SETUP
Datasets. We consider the following popular datasets in the disentanglement areas: Shapes3D (Kim & Mnih, 2018) with 6 ground truth factors, MPI3D (Gondal et al., 2019) with 7 ground truth factors,
and Cars3D (Reed et al., 2015) with 3 ground truth factors. In the experiments on the above datasets, images are resized to 64×64 resolution.
Pretrained generative models. For GAN, we use the StyleGAN2 model (Karras et al., 2020). For VAE, we use a common structure with convolutions (Locatello et al., 2019). For Flow, we use Glow (Kingma & Dhariwal, 2018).
Baselines. For the typical disentanglement baselines, we choose FactorVAE (Kim & Mnih, 2018), β-TCVAE (Chen et al., 2018), and InfoGAN-CR (Lin et al., 2020). For discovering-based methods, we consider several recent methods: GANspace (GS) (Härkönen et al., 2020), LatentDiscovery (LD) (Voynov & Babenko, 2020), ClosedForm (CF) (Shen & Zhou, 2021), and DeepSpectral (DS) (Khrulkov et al., 2021). For these methods, we follow Khrulkov et al. (2021) to train an additional encoder to extract disentangled representations. We are the first to extract disentangled representations from pretrained VAE and Flow models, so we extend LD to VAE and Flow as a baseline.
Disentanglement metrics. We mainly consider two representative ones: the Mutual Information Gap (MIG) (Chen et al., 2018) and the Disentanglement metric (DCI) (Eastwood & Williams, 2018). MIG requires each factor to be only perturbed by changes of a single dimension of representation. DCI requires each dimension only to encode the information of a single dominant factor. We evaluate the disentanglement in terms of both representation and factors. We also provide results for β-VAE score (Higgins et al., 2017) and FactorVAE score (Kim & Mnih, 2018) in Appendix B.3.
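For reference, a simplified sketch of the MIG computation is given below; this is our own simplification with equal-width binning of the continuous codes, and the official implementations may differ in discretization details:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig(codes, factors, n_bins=20):
    """codes: (n, J) learned representations; factors: (n, K) discrete labels."""
    binned = [np.digitize(c, np.histogram(c, n_bins)[1][:-1]) for c in codes.T]
    gaps = []
    for k in range(factors.shape[1]):
        mi = np.array([mutual_info_score(factors[:, k], b) for b in binned])
        h = mutual_info_score(factors[:, k], factors[:, k])  # entropy H(v_k)
        top2 = np.sort(mi)[-2:]
        gaps.append((top2[1] - top2[0]) / max(h, 1e-12))     # normalized MI gap
    return float(np.mean(gaps))
```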
Randomness. We consider the randomness caused by random seeds and the strength of the regularization term (Locatello et al., 2019). For random seeds, we follow the same setting as the baselines. Since DisCo does not have a regularization term, we consider the randomness of the pretrained generative models. For all methods, we ensure there are 25 runs, except that Glow only has one run, limited by GPU resources. More details are presented in Appendix A.
4.1.2 EXPERIMENTAL RESULTS
The quantitative results are summarized in Table 1 and Figure 3. More details about the experimental settings and results are presented in Appendix A & C.
DisCo vs. typical baselines. Our DisCo achieves SOTA performance consistently in terms of MIG and DCI scores. The variance due to randomness of DisCo tends to be smaller than that of the typical baselines. We demonstrate that a method which extracts disentangled representations from pretrained non-disentangled models can outperform typical disentanglement baselines.
DisCo vs. discovering-based methods. Among the baselines based on discovering pretrained GAN, CF achieves the best performance. DisCo outperforms CF in almost all the cases by a large margin. Besides, these baselines need an extra stage (Khrulkov et al., 2021) to get disentangled representation, while our Disentangling Encoder can directly extract disentangled representation.
4.2 EVALUATIONS ON DISCOVERED DIRECTIONS
To evaluate the discovered directions, we compare DisCo on StyleGAN2 with GS, LD, CF, and DS on the real-world dataset FFHQ (Karras et al., 2019), for which the above disentanglement metrics (DCI and MIG) are not available, and adopt the comprehensive Manipulation Disentanglement Score (MDS) (Li et al., 2021a) as a metric. To calculate MDS, we use the 40 CelebaHQ-Attributes predictors released by StyleGAN. Among them, we select Young, Smile, Bald, and Blonde Hair, as these attributes have available predictors and are commonly found by all methods. The results are summarized in Table 3. DisCo shows better overall performance compared to the other baselines, which verifies our assumption that learning disentangled representation benefits latent space discovering. We also provide qualitative comparisons in Figure 4.
Finally, we provide an intuitive analysis in Appendix D for why DisCo can find those disentangled directions.
4.3 ABLATION STUDY
In this section, we perform an ablation study of DisCo only on GAN, due to space limits. For the experiments, we use the Shapes3D dataset, and the random seed is fixed.
Choice of latent space. For style-based GANs (Karras et al., 2019; 2020), there is a style space W, which is the output of the style network (an MLP) whose input is a random latent space Z. As demonstrated in Karras et al. (2019), W is more interpretable than Z. We conduct experiments on W and Z respectively to see how the latent space influences the performance. As shown in Table 4, DisCo on W is better, indicating that the better the latent space is organized, the better disentanglement DisCo can achieve.
Choices of A. Following the setting of Voynov & Babenko (2020), we mainly consider three options of A: a linear operator with all matrix columns having a unit length, a linear operator with orthonormal matrix columns, or a nonlinear operator of 3 fully-connected layers.
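As an illustration, the orthonormal variant could be parameterized as below; this is one possible choice, re-orthogonalizing the raw matrix with a QR decomposition at every forward pass:

```python
class OrthoNavigator(nn.Module):
    """Navigator whose D columns are kept orthonormal via QR decomposition."""
    def __init__(self, latent_dim: int, num_dirs: int):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(latent_dim, num_dirs))

    def forward(self, d: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
        ortho, _ = torch.linalg.qr(self.raw)       # orthonormal L x D basis
        return eps.unsqueeze(1) * ortho[:, d].t()  # shift along the d-th direction
```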
The results are shown in Table 4. For both latent spaces W and Z, A with unit-norm columns achieves nearly the best performance in terms of MIG and DCI scores. Compared to A with orthonormal matrix columns, A with unit-norm columns is more expressive with fewer constraints. Another possible reason is that A is global and not conditioned on the latent code z, whereas a non-linear operator is more suitable for a local navigator A. For such a much more complex local and non-linear setting, more inductive bias or supervision should be introduced.
Entropy-based domination loss. Here, we verify the effectiveness of the entropy-based domination loss Led for disentanglement. For a desirable disentangled representation, one semantic meaning corresponds to one dimension. As shown in Table 4, Led can improve the performance by a large margin. We also visualize the Variation Space to further demonstrate the effectiveness of our proposed loss in Figure 5. Adding the domination loss makes the samples in the Variation Space one-hot, which is desirable for disentanglement.
Hard negatives flipping. We run our DisCo with or without the hard negatives flipping strategy to study its influence. As shown in Table 4, flipping hard negatives improves the disentanglement ability of DisCo. The reason is that the hard negatives have the same semantics as the positive samples, so treating them as negatives does not make sense; flipping them with pseudo-labels makes the optimization of Contrastive Learning easier.
Hyperparameters N & M. We run DisCo with different ratios of N : M with a fixed sum of 96, and with different sums of N + M at a fixed ratio of 1 : 2, to study their impacts. As shown in Figure 6 (a), the best ratio is N : M = 32 : 64 = 1 : 2; the red line (MIG) and blue line (DCI) in the figure show that larger or smaller ratios hurt DisCo, which indicates that DisCo requires a balance between N and M. As shown in Figure 6 (b), the sum N + M has a slight impact on DisCo. For other hyperparameters, we set them empirically; more details are presented in Appendix A.
Contrast vs. Classification. To verify the effectiveness of Contrast, we substitute it with classification by adopting an additional linear layer to recover the corresponding direction index and the shift along this direction. As Table 2 shows, Contrastive Learning outperforms Classification significantly.
Concatenation vs. Variation. We further demonstrate that the Variation Space is crucial for DisCo. By replacing the difference operator with concatenation, the performance drops significantly (Table 2), indicating that the encoded representation is not well disentangled. On the other hand, the disentangled representations of images are achieved by Contrastive Learning in the Variation Space.
4.4 ANALYSIS OF DIFFERENT GENERATIVE MODELS
As shown in Table 1, DisCo generalizes well to different generative models (GAN, VAE, and Flow). DisCo on GAN and VAE achieves relatively good performance, while DisCo on Flow is not as good. The possible reason is similar to the choice of latent space of GAN: we assume the disentangled directions are globally linear and thus use a linear navigator, and in contrast to GAN and VAE, we suspect that Flow may not conform to this assumption well. Furthermore, Flow has the problems of high GPU cost and unstable training, which limits further exploration.
5 CONCLUSION
In this paper, we present DisCo, an unsupervised and model-agnostic Contrastive Learning framework that learns disentangled representations by exploiting pretrained generative models. We propose an entropy-based domination loss and a hard negatives flipping strategy to achieve better disentanglement. DisCo outperforms typical unsupervised disentanglement methods while maintaining high image quality. We pinpoint a new direction: Contrastive Learning can be well applied to extract disentangled representations from pretrained generative models. For some specific complex generative models, the global linear assumption of disentangled directions in the latent space could be a limitation. For future work, extending DisCo to the existing VAE-based disentanglement framework is an exciting direction.
A.2 SETTING FOR BASELINES
In this section, we introduce the implementation setting for the baselines (including randomness).
VAE-based methods. We choose FactorVAE and β-TCVAE as the SOTA VAE-based methods, and we follow Locatello et al. (2019) to use the same architecture of encoder and decoder. For the hyper-parameters, we use the best settings found by grid search. We set the latent dimension of the representation to 10. For FactorVAE, we set the hyperparameter γ to 10. For β-TCVAE, we set the hyperparameter β to 6. For the random seeds, considering our method has 25 runs, we run each model 25 times with different random seeds to make the comparison fair.
InfoGAN-based methods. We choose InfoGAN-CR as a baseline. We use the official implementation2 with the best hyperparameter settings found by grid search. For the random seeds, we run 25 times with different random seeds.
Discovering-based methods. We follow Khrulkov et al. (2021) to use the same settings for the following four baselines: LD (GAN), CF, GS, and DS. Similar to our method (DisCo), discovering-based methods do not have a regularization term. Thus, for the randomness, we adopt the same strategy as DisCo. We take the top-10 directions for 5 different random seeds for GAN and 5 different random seeds for the additional encoder to learn disentangled representations.
LD (VAE) & LD (Flow). We follow LD (GAN) to use the same settings and substitute the GAN with VAE / Glow. The only exception is the randomness for LD (Flow): we only run one random seed to pretrain the Glow and use one random seed for the encoder.
A.3 MANIPULATION DISENTANGLEMENT SCORE
As noted in Li et al. (2021a), it is difficult to compare performance on discovering the latent space across different methods, which often use model-specific hyper-parameters to control the editing strength. Thus, Li et al. (2021a) propose a comprehensive metric called the Manipulation Disentanglement Score (MDS), which takes both the accuracy and the disentanglement of the manipulation into consideration. For more details, please refer to Li et al. (2021a).
A.4 DOMAIN GAP PROBLEM
Please note that there exists a domain gap between the generated images of pretrained generative models and real images. However, the good performance on the disentanglement metrics shows that the domain gap has limited influence on DisCo.
2https://github.com/fjxmlzn/InfoGAN-CR
A.5 ARCHITECTURE
Here, we provide the model architectures in our work. For the architecture of StyleGAN2, we follow Khrulkov et al. (2021). For the architecture of Glow, we use the open-source implementation 3.
3https://github.com/rosinality/glow-pytorch
B MORE EXPERIMENTS
B.1 MORE QUALITATIVE COMPARISON
We provide some examples for qualitative comparison. We first demonstrate the trade-off problem of the VAE-based methods. As shown in Figure 7, DisCo leverages the pretrained generative model and does not have the trade-off between disentanglement and generation quality.
Furthermore, as shown in Figure 8 and Figure 9, VAE-based methods suffer from poor image quality. When changing one attribute, the results of discovering-based methods tend to also change other attributes.
We also provide qualitative comparisons between DisCo and InfoGAN-CR. Note that the latent space of InfoGAN-CR is not aligned with the pretrained StyleGAN2. InfoGAN-CR also suffers from the trade-off problem, and its disentanglement ability is worse than DisCo.
We explain the comparison in the main paper and show more manipulation comparisons here.
B.2 ANALYSIS OF THE LEARNED DISENTANGLED REPRESENTATIONS
We feed the images traversing the three most significant factors (wall color, floor color, and object color) of Shapes3D into the Disentangling Encoders and plot the corresponding dimensions of the encoded representations to visualize the learned disentangled space. The location of each point is the disentangled representation of the corresponding image. An ideal result is that all the points form a cube and the color variation is continuous. We consider three baselines that have relatively higher MIG and DCI: CF, DS, and LD. As the figures below show, the points in the latent space of CF and DS are not well organized, and the latent spaces of all three baselines are not well aligned with the axes, especially for LD. DisCo learns a well-aligned and well-organized latent space, which signifies better disentanglement.
(Figure panels, left to right: CF, DS, LD, Ours.)
B.3 MORE QUANTITATIVE COMPARISON
We provide additional quantitative comparisons in terms of the β-VAE score and FactorVAE score. DisCo on pretrained GAN is comparable to discovering-based baselines in terms of the β-VAE score and FactorVAE score, suggesting some disagreement between these two scores and MIG/DCI. However, note that the qualitative evaluations in Figure 8, Figure 9, and Section B.2 show that the disentanglement ability of DisCo is better than all the baselines on the Shapes3D dataset.
We also provide an additional experiment on Noisy-DSprites dataset. We compare DisCo with β-TCVAE (the best typical method) and CF (the best discovering-based method) in terms of MIG and DCI metrics.
C LATENT TRAVERSALS
In this section, we visualize the disentangled directions of the latent space discovered by DisCo on each dataset. For Cars3D, Shapes3D, Anime, and MNIST, the image resolution is 64×64. For FFHQ, LSUN Cat, and LSUN Church, the image resolution is 256×256. Besides StyleGAN2, we also provide results of Spectral Norm GAN (Miyato et al., 2018)4 on MNIST (LeCun et al., 2010) and Anime Face (Jin et al., 2017) to demonstrate that DisCo generalizes well to other types of GAN.
4https://github.com/anvoynov/GANLatentDiscovery
D AN INTUITIVE ANALYSIS FOR DISCO
DisCo works by contrasting the variations resulting from traversing along the directions provided by the Navigator. Is this sufficient to converge to a disentangled solution? This question is very challenging to answer: to our best knowledge, for unsupervised disentangled representation learning, there is no sufficient theoretical constraint to guarantee convergence to a disentangled solution (Locatello et al., 2019). Here we provide an intuitive analysis of DisCo and share our thoughts on how DisCo finds the disentangled directions in the latent space, supported by our observations on a pretrained GAN both quantitatively and qualitatively. The intuitive analysis consists of two parts: (i) the directions that can be discovered by DisCo have different variation patterns compared to random directions; (ii) DisCo hardly converges to an entangled solution.
D.1 WHAT KIND OF DIRECTIONS DISCO CAN CONVERGE TO?
First, we visualize the latent space and show that there are variation patterns in the latent space for disentangled factors. We design the following visualization method. Given a pretrained GAN and two directions in the latent space, we traverse along the plane spanned by the two directions to generate a grid of images. The range is large enough to include all values of these disentangled factors, and the step is small enough to obtain a dense grid. Then, we feed these images into an encoder trained with ground-truth factor labels. We obtain a heatmap for each factor (the value is the response of the dimension corresponding to that factor). In this way, we can observe the variation patterns that emerge in the latent space.
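A sketch of this visualization procedure, assuming `factor_encoder` is a supervised regressor of a single ground-truth factor and the direction tensors have shape (1, L):

```python
@torch.no_grad()
def factor_heatmap(G, factor_encoder, z0, dir_a, dir_b, steps=21, scale=6.0):
    """Response of one factor over the plane spanned by two latent directions."""
    ts = torch.linspace(-scale, scale, steps)
    heat = torch.zeros(steps, steps)
    for i, a in enumerate(ts):
        for j, b in enumerate(ts):
            img = G(z0 + a * dir_a + b * dir_b)   # z0: (1, L) base latent code
            heat[i, j] = factor_encoder(img).item()
    return heat  # plot with e.g. matplotlib imshow to inspect the variation pattern
```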
We take the StyleGAN pretrained on Shapes3D (synthetic) and FFHQ (real-world). For Shapes3D, we take background color and floor color as the two factors (since they refer to different areas of the image, these two factors are disentangled). For FFHQ, we take smile (mouth) and bald (hair) as the two factors (also disentangled, as they refer to different areas). We then choose random directions and the directions discovered by DisCo. The results are shown in Figure 27 and Figure 28.
We find a clear difference between random directions and the directions discovered by DisCo. This is because DisCo learns the directions by separating the variations that result from traversing along them. However, not all directions can be separated: for those directions whose variations cannot be recognized or clustered by the encoder E, it is nearly impossible for DisCo to converge to them. Conversely, for those directions that can be easily recognized and clustered, DisCo will converge to them with a higher probability. From the following observations, we find that the variation patterns resulting from the directions corresponding to disentangled factors are easily recognized and clustered.
D.2 WHY DISCO HARDLY CONVERGES TO THE ENTANGLED CASES?
In the previous section, we showed that DisCo can discover directions with distinct variation patterns and exclude random directions. Here we discuss why DisCo hardly converges to the following entangled case (a trivial solution built from a disentangled one). For example, suppose there is an entangled direction of factors A and B (A and B change at the same rate when traversing along it) in the latent space of the generative model, and DisCo can separate the variations resulting from the direction of A and the entangled direction. In that case, DisCo has no additional bias to update these directions to converge to disentangled ones.
In the following text, for ease of reference, we denote the entangled direction of factors A and B (A and B change at the same rate when traversing along it) as the A+B direction, and the direction of factor A (only A changes when traversing along it) as the A direction. The reasons why DisCo hardly converges to the case of A and A+B are two-fold:
(i) Our encoder is a lightweight network (5 CNN layers + 3 FC layers). It is nearly impossible for it to separate the A and A+B directions.
(ii) In the latent space of the pretrained generative models, the disentangled directions (A, B) are consistent at different locations. In contrast, the entangled directions (A+B) are not, as shown in Figure 29.
We conduct the following experiments to verify them. For (i), we replace our encoder in DisCo with a ResNet-50 and train DisCo from scratch on the Shapes3D dataset. The loss, MIG, and DCI are presented in Table 11. The trivial solution becomes possible only when the encoder is powerful enough to fit the A and A+B directions so that they "become orthogonal". With this consideration, we adopt a lightweight encoder in DisCo to avoid this issue.
For (ii), as the sketch in Figure 29 demonstrates, the disentangled directions ("A" in blue, "B" in green) are consistent, i.e., invariant to the location in the latent space, while the entangled directions ("A+B" in red) are not consistent across different locations. The fundamental reason is that the directions of the disentangled variations are invariant to the position in the latent space, but the "rate" of the variation is not. For example, at any point in the latent space, going "up" consistently changes the camera's pose; however, at point a, going "up" with step 1 means rotating 10 degrees, while at point b, going "up" with step 1 means rotating 5 degrees. When the variation "rates" of "A" and "B" are different, the "A+B" directions at different locations are not consistent.
Based on the different properties of disentangled and entangled directions in the latent space, DisCo can discover the disentangled directions with the contrastive loss. The contrastive loss can be understood from the clustering view (Wang & Isola, 2020; Li et al., 2021b): the variations from the disentangled directions are more consistent and can be better clustered than the variations from the entangled ones. Thus, DisCo can discover the disentangled directions in the latent space and learn disentangled representations from images. We further provide the following experiments to support our analysis.
D.2.1 QUANTITATIVE EXPERIMENT
We compare the losses of three different settings:
• A: For a navigator with disentangled directions, we fix the navigator and train the encoder until convergence.
• A + B: For a navigator with entangled directions (we use the linear combination of the disentangled directions to initialize the navigator), we fix it and train the encoder until convergence.
• A+B → A: After A+B is convergent, we update both the encoder and the navigator until convergence.
The Contrastive loss after convergence is presented in Table 12.
The results show that: (i) the disentangled directions (A) lead to a lower loss and better performance than the entangled directions (A+B), indicating no trivial solution; (ii) even though the encoder with A+B has converged, when we optimize the navigator, gradients still backpropagate to the navigator and it converges to A.
D.2.2 QUALITATIVE EXPERIMENT
We visualize the latent space of GAN in Figure 30 to verify the variation "rate" in the following way: in the latent space, we select two ground-truth disentangled directions, floor color (A) and background color (B), obtained with supervision using InterFaceGAN (Shen et al., 2020). We conduct equally spaced sampling along the two disentangled directions, A (labeled with a green color gradient) and B (labeled with a blue color gradient), and the composite direction A+B (labeled with a red color gradient), as shown in Figure 30 (a).
Then we generate the images (including the other images on the grid, as shown in Figure 30 (b)), and feed the images in the bounding boxes into a "ground truth" encoder (trained with ground-truth disentangled factors) to regress the "ground truth" disentangled representations of the images.
In Figure 30 (c), the points labeled with the green gradient are well aligned with the x-axis, indicating that only the floor color changes, and the points labeled with the blue gradient are well aligned with the y-axis, indicating that only the background color changes. However, the points labeled with the red gradient are NOT aligned with any line, which indicates that the directions of A+B are not consistent. Further, the variation "rate" depends on the location in the latent space for the two disentangled directions. This observation well supports our idea shown in Figure 29. The different properties of disentangled and entangled directions enable DisCo to discover the disentangled directions in the latent space.
E EXTENSION: BRIDGE THE PRETRAINED VAE AND PRETRAINED GAN
Researchers have recently been interested in improving image quality given the disentangled representations produced by typical disentanglement methods. Lee et al. (2020) propose a post-processing stage using a GAN based on disentangled representations learned by VAE-based disentanglement models. This method sacrifices a little generation ability due to an additional constraint. Similarly, Srivastava et al. (2020) propose to use a deep generative model with AdaIN (Huang & Belongie, 2017) as a post-processing stage to improve the reconstruction ability. Following this setting, we can replace the encoder in DisCo (GAN) with an encoder pretrained by VAE-based disentanglement baselines. In this way, we can bridge a pretrained disentangled VAE and a pretrained GAN, as shown in Figure 31. Compared to previous methods, our method can fully utilize the state-of-the-art GAN and the state-of-the-art VAE-based method and does not need to train a deep generative model from scratch.
F DISCUSSION ON RELATION BETWEEN BCELOSS AND NCELOSS
We would like to present a deeper discussion on the relation between the BCE loss $\mathcal{L}_{logits}$ and the NCE loss $\mathcal{L}_{NCE}$. This discussion is related to the NCE paper (Gutmann & Hyvärinen, 2010) and the InfoNCE paper (van den Oord et al., 2018), and proceeds as follows: (i) we first formulate a general problem and obtain two objectives, $\mathcal{L}_1$ and $\mathcal{L}_2$, where $\mathcal{L}_1$ is an upper bound of $\mathcal{L}_2$; (ii) following Gutmann & Hyvärinen (2010), we show that $\mathcal{L}_1$ is aligned with $\mathcal{L}_{BCE}$ under their setting; (iii) following van den Oord et al. (2018), we show that $\mathcal{L}_2$ is aligned with $\mathcal{L}_{NCE}$ under their setting; (iv) we discuss the relation between these objectives and the losses in our paper.
Part I. Assume we have $S$ observations $\{x_i\}_{i=1}^{S}$ from a data distribution $p(x)$, each with a label $C_i \in \{0, 1\}$. We denote the class-conditional distributions as $p^+(x) = p(x|C=1)$ and $p^-(x) = p(x|C=0)$. We define two objectives as follows:
$$\mathcal{L}_1 = -\sum_{i=1}^{S} \left[ C_i \log P(C_i = 1|x_i) + (1 - C_i) \log P(C_i = 0|x_i) \right], \tag{10}$$
and
$$\mathcal{L}_2 = -\sum_{i=1}^{S} C_i \log P(C_i = 1|x_i). \tag{11}$$
Since $-\sum_{i=1}^{S} (1 - C_i) \log P(C_i = 0|x_i) \ge 0$, we have
$$\mathcal{L}_1 \ge \mathcal{L}_2. \tag{12}$$
$\mathcal{L}_1$ is an upper bound of $\mathcal{L}_2$. This is a general formulation of a binary classification problem. In the context of our paper, we have a paired observation $x_i: (q, k_i)$, with $q$ as the query, and the key $k_i$ is either from a positive key set $\{k_j^+\}_{j=1}^{N}$ or a negative key set $\{k_m^-\}_{m=1}^{M}$ (i.e., $\{k_i\}_{i=1}^{N+M} = \{k_j^+\}_{j=1}^{N} \cup \{k_m^-\}_{m=1}^{M}$), where $M = S - N$. And $C_i$ is assigned as:
$$C_i = \begin{cases} 1, & k_i \in \{k_j^+\}_{j=1}^{N} \\ 0, & k_i \in \{k_m^-\}_{m=1}^{M} \end{cases} \tag{13}$$
In our paper, we have $h(x) = \exp(q \cdot k / \tau)$.
Part II. In this part, following Gutmann & Hyvärinen (2010), we show that $\mathcal{L}_1$ is aligned with $\mathcal{L}_{logits}$ (Equation 3 in the main paper) under the setting of Gutmann & Hyvärinen (2010). Following Gutmann & Hyvärinen (2010), we assume the prior distribution $P(C=0) = P(C=1) = 1/2$; according to Bayes' rule, we have
$$P(C=1|x) = \frac{p(x|C=1)P(C=1)}{p(x|C=1)P(C=1) + p(x|C=0)P(C=0)} = \frac{1}{1 + \frac{p^-(x)}{p^+(x)}}. \tag{14}$$
And $P(C=0|x) = 1 - P(C=1|x)$. On the other hand, a general form of the BCE loss is
$$\mathcal{L}_{BCE} = -\sum_{i=1}^{S} \left[ C_i \log \sigma(q \cdot k_i / \tau) + (1 - C_i) \log\left(1 - \sigma(q \cdot k_i / \tau)\right) \right], \tag{15}$$
where σ(·) is the sigmoid function. We have
$$\sigma(q \cdot k / \tau) = \frac{1}{1 + \exp(-q \cdot k / \tau)} = \frac{1}{1 + \frac{1}{\exp(q \cdot k / \tau)}} = \frac{1}{1 + \frac{1}{h(x)}}, \tag{16}$$
From Theorem 1 of Gutmann & Hyvärinen (2010), we know that when $\mathcal{L}_{BCE}$ is minimized, we have
$$h(x) = \frac{p^+(x)}{p^-(x)}. \tag{17}$$
Thus, the BCE loss $\mathcal{L}_{BCE}$ is an approximation of the objective $\mathcal{L}_1$.
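As a quick numerical illustration of Equation 17 (our own toy example, not from the paper): fitting a logistic classifier between two 1-D Gaussians recovers their log density ratio, since the optimal logit equals $\log p^+(x)/p^-(x)$ under equal priors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, 20000)    # samples from p+ = N(1, 1)
xn = rng.normal(-1.0, 1.0, 20000)   # samples from p- = N(-1, 1)
x = np.concatenate([xp, xn])
c = np.concatenate([np.ones_like(xp), np.zeros_like(xn)])  # equal priors, as in Part II

clf = LogisticRegression().fit(x[:, None], c)  # learned logit plays the role of log h(x)

x0 = 0.5
learned = clf.decision_function([[x0]])[0]
true_log_ratio = 2.0 * x0   # log p+(x)/p-(x) = 2x for these two Gaussians
print(learned, true_log_ratio)  # both are close to 1.0
```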
Part III. Following van den Oord et al. (2018), we show that $\mathcal{L}_2$ is aligned with $\mathcal{L}_{NCE}$ (Equation 2 in the main paper) under the setting of van den Oord et al. (2018).
Following the typical contrastive setting of van den Oord et al. (2018) (one positive sample, the others negative), we assume there is only one positive sample among $\{x_i\}_{i=1}^{S}$ and the others are negatives. Then, the probability that $x_i$ is sampled from $p^+(x)$ rather than $p^-(x)$ is as follows:
$$P(C_i = 1|x_i) = \frac{p^+(x_i) \prod_{l \neq i} p^-(x_l)}{\sum_{j=1}^{S} p^+(x_j) \prod_{l \neq j} p^-(x_l)} = \frac{\frac{p^+(x_i)}{p^-(x_i)}}{\sum_{j=1}^{S} \frac{p^+(x_j)}{p^-(x_j)}}. \tag{18}$$
From van den Oord et al. (2018), we know that when minimizing Equation 11, we have $h(x) = \exp(q \cdot k / \tau) \propto \frac{p^+(x)}{p^-(x)}$. In this case, we obtain the form of $\mathcal{L}_{NCE}$ as
$$\mathcal{L}_{NCE} = -\sum_{i=1}^{S} C_i \log \frac{\exp(q \cdot k_i / \tau)}{\sum_{j=1}^{S} \exp(q \cdot k_j / \tau)}. \tag{19}$$
$\mathcal{L}_{NCE}$ is an approximation of $\mathcal{L}_2$.
Part IV. When generalizing the contrastive loss to our setting ($N$ positive samples, $M$ negative samples), the BCE loss (Equation 15) can be reformulated as
$$\hat{\mathcal{L}}_{BCE} = -\sum_{j=1}^{N} \log \sigma(q \cdot k_j^+ / \tau) - \sum_{m=1}^{M} \log\left(1 - \sigma(q \cdot k_m^- / \tau)\right). \tag{20}$$
Similarly, the NCE loss (Equation 19) can be reformulated as
$$\hat{\mathcal{L}}_{NCE} = -\sum_{j=1}^{N} \log \frac{\exp(q \cdot k_j^+ / \tau)}{\sum_{s=1}^{M+N} \exp(q \cdot k_s / \tau)}. \tag{21}$$
$\hat{\mathcal{L}}_{BCE}$ is aligned with $\mathcal{L}_{logits}$ (Equation 3 in the main paper), and $\hat{\mathcal{L}}_{NCE}$ is aligned with $\mathcal{L}_{NCE}$ (Equation 2 in the main paper).
Now we have that $\mathcal{L}_1$ (approximated by $\mathcal{L}_{BCE}$) is an upper bound of $\mathcal{L}_2$ (approximated by $\mathcal{L}_{NCE}$). However, as you may notice, the assumptions made in Part II and Part III are different: one is $P(C=0) = P(C=1)$, and the other is that there is only one positive sample and the others are negative. Also, the extension to our situation is a more general case ($N$ positives, the others negatives).
However, they have the same objective: by contrasting positives and negatives, we can use $h(x) = \exp(q \cdot k / \tau)$ to estimate $p^+ / p^-$. We can think of $h(x)$ as a similarity score, i.e., if $q$ and $k$ are from a positive pair (they have the same direction in our paper), $h(x)$ should be as large as possible ($p^+ / p^- > 1$), and vice versa. In this way, we can learn representations $(q, k)$ that reflect the image variation, i.e., similar variations have a higher score $h(x)$, while different kinds of variation have a lower score $h(x)$. This meaningful representation then helps to discover the directions in the latent space carrying different kinds of image variation. This is an understanding, from a contrastive learning view, of how our method works.
Summary Of The Paper
The paper proposes a novel representation learning technique to disentangle the latent space of pre-trained generative models, by discovering semantically meaningful directions in them.
The method trains a navigator and a delta-contrastor network, which consists of 2 encoders sharing weights. First, random samples are perturbed along the directions obtained from the navigator. The perturbed vectors are then decoded with the pre-trained generator, then encoded, and the difference between the 2 samples is taken. The output is in the variation space, where a contrastive learning technique clusters together the samples that were perturbed with the same direction.
Review
Good:
The idea is very simple and easy to implement.
The paper is very well written and easy to understand.
There are extensive ablations that show the effect of design choices and hyper-parameters.
The qualitative results look very good for disentangling, the proposed method preserves e.g. the identity much better when changing other attributes, like smile or baldness.
Quantitatively the proposed method shows better performance than the baselines for many datasets and 3 different kinds of generative model: GAN, VAE and Flow. This is impressive and shows the method's generality.
Bad:
Although the paper explained the method well from the perspective of reproducibility, it does not explain why the method should choose semantically meaningful directions. One can imagine a shortcut scenario, where the method learns 0.5a+0.5b and 0.5a-0.5b directions, where a and b are perfect semantically meaningful directions. In principle the training loss could be minimised with this solution as well (?). The reason why this does not happen is because of the implicit biases in the networks (?). But then why is this method performing better than prior works?
"(ii) the factors are embedded in the pretrained model, severing as an inductive bias for unsupervised disentangled representation learning." This still allows for the mixed solution 0.5a+0.5b and 0.5a-0.5b.
I think the following statement is incorrect, it should be removed: "A composed of 3 fully-connected layers performs poorly, indicating the disentangled directions of the latent space W of StyleGAN is nearly linear."
W is nearly linear because there are good directions in it, and a linear method can perform well in it.
The 3 layer network fails for some other reason; in principle it should work at least as well as the linear model, as it has the representation capacity.
The method is very sensitive to the ratio between positive and negative samples. A very good tuning is needed, which is shown in the paper for most hyper-parameters. One might think that the gains come from the extensive tuning rather than the proposed idea itself.
minor:
Although the images are resized to 64x64, it would be nice to see full resolution results with e.g. the StyleGAN2 generator. Or was the generator also retrained with reduced size images (for faster training I guess)?
some typos and grammar could be fixed, e.g. "... generative model are two-ford: ..." |
ICLR | Title
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
Abstract
From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the factors and learn disentangled representation, previous methods typically leverage an extra regularization term when learning to generate realistic images. However, the term usually results in a trade-off between disentanglement and generation quality. For the generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. Based on this observation, we argue that it is possible to mitigate the trade-off by (i) leveraging the pretrained generative models with high generation quality, (ii) focusing on discovering the traversal directions as factors for disentangled representation learning. To achieve this, we propose Disentaglement via Contrast (DisCo) as a framework to model the variations based on the target disentangled representations, and contrast the variations to jointly discover disentangled directions and learn disentangled representations. DisCo achieves the state-of-the-art disentangled representation learning and distinct direction discovering, given pretrained nondisentangled generative models including GAN, VAE, and Flow. Source code is at https://github.com/xrenaa/DisCo.
1 INTRODUCTION
Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which is believed by many to be the only way to understand the world for AI fundamentally (Bengio & LeCun, 2007). To achieve the goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder to extract representations from images with each dimension corresponds to one factor individually. The generator (decoder) decodes the changing of each factor into different kinds of image variations.
With supervision, we can constrain each dimension of the representation only sensitive to one kind of image variation caused by changing one factor respectively. However, this kind of exhaustive supervision is often not available in real-world data. The typical unsupervised methods are based on a generative model to build the above encoder and generator framework, e.g., VAE (Kingma & Welling, 2014) provides encoder and generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides generator. During the training process of the encoder and generator, to achieve disentangled representation, the typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020).
∗Equal contribution. Work done during internships at Microsoft Research Asia. †Corresponding author
However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, those unsupervised methods have been proved to have an infinite number of entangled solutions without introducing inductive bias (Locatello et al., 2019). Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs purely trained for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that there is some disentanglement property embedded in the latent space of the pretrained GAN. The above observations indicate that training the encoder and generator simultaneous may not be the best choice.
We provide an alternative route to learn disentangled representation: fix the pretrained generator, jointly discover the factors in the latent space of the generator and train the encoder to extract disentangled representation, as shown in Figure 1(b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provide a novel contrastive learning view for disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo) for disentangled representation learning.
In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, Navigator, to provides candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architecture innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, (ii) a hard negatives flipping strategy for better optimization of Contrastive Loss.
We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves the state-of-the-art (SOTA) disentanglement performance compared to all the previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models.
Our main contributions can be summarized as: (i) To our best knowledge, DisCo is the first unified framework for jointly learning disentangled representation and discovering the latent space of pretrained generative models by contrasting the image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations for utilizing Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with the SOTA disentangled representation learning and latent space discovering. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy.
2 RELATED WORK
Typical unsupervised disentanglement. There have been a lot of studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). These methods achieve disentanglement via an extra regularization, which often sacrifices the generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing aggregated posterior, and InfoGAN-based methods maximize the mutual
information between latent factors and related observations. VAE-based methods achieve relatively good disentanglement performance but have low-quality generation. InfoGAN-based methods have a relatively high quality of generation but poor disentanglement performance. Our method supplements generative models pretrained without disentanglement regularization term with contrastive learning in the Variation Space to achieve both high-fidelity image generation and SOTA disentanglement.
Interpretable directions in the latent space. Recently, researchers have been interested in discovering the interpretable directions in the latent space of generative models without supervision, especially for GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. The subsequent works focus on extracting the directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers by Jacobian matrix. All the above methods only discover the interpretable directions in the latent space, except for Khrulkov et al. (2021) which also learns disentangled representation of generated images by training an extra encoder in an extra stage. However, all these methods can not outperform the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent spaces.
Contrastive Learning. Contrastive Learning gains popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer, and push representations of views from different images (negative pairs) apart using instance-level classification with Contrastive Loss. Recently, Contrastive Learning is extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning is suitable for disentanglement due to: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020), (ii) it works in the representation space directly without any extra layers for classification or regression.
3 DISENTANGLEMENT VIA CONTRAST
3.1 OVERVIEW OF DISCO
From the contrastive view of the intuitive notion of disentangled representation learning, we propose a DisCo to leverage pretrained generative models to jointly discover the factors embedded as directions in the latent space of the generative models and learn to extract disentangled representation. The benefits of leveraging a pretrained generative model are two-fold: (i) the pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, severing as an inductive bias for unsupervised disentangled representation learning.
DisCo consists of a Navigator to provides candidate traversal directions in the latent space and a ∆-Contrastor to extract the representation of image variations and build a Variation Space based on the target disentangled representations. More specifically, ∆-Contrastor is composed of two sharedweight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders.
In the Variation Space, by pulling together the variation samples resulted from traversing the same direction and pushing away the ones resulted from traversing different directions, the Navigator learns to discover disentangled directions as factors, and Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, which causes separated dimensions of disentangled representations respond.
Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors are in two separate spaces, which actually does not affect the applications. Similar to the typical
methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models.
Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G: Z → I, where Z ∈ RL denotes the latent space, and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z , e.g., in the linear case, A ∈ RL×D is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated. z is sampled from Z and z′ = z + A(d, ε), where d ∈ {1, ..., D} and ε ∈ R, and A(d, ε) denotes the shift along the dth direction with ε scalar. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair to a sample v ∈ V as
v(z, d, ε) = |E(G(z +A(d, ε)))−E(G(z))| , (1) where V ∈ RJ+ denotes the Variation Space. Then we apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable Navigator A to find the disentangled directions in the latent space Z .
3.2 DESIGN OF DISCO
We present the design details of DisCo, which include: (i) the collection of query set Q = {qi}Bi=1, positive key set K+ = {k+i }Ni=1 and negative key set K− = {k − i }Mi=1, which are three subsets of the Variation Space V , (ii) the formulation of the Contrastive Loss. According to our goal of contrasting the variations, the samples from Q and K+ share the same traversal direction and should be pulled together, while the samples from Q and K− have different directions and should be pushed away. Recall that each sample v in V is determined as v(z, d, ε). To achieve the contrastive learning process, we construct the query sample qi = v(zi, di, εi), the key sample k+i = v(z + i , d + i , ε + i ) and the negative sample k − i = v(z − i , d − i , ε − i ). Specifically, we randomly sample a direction index d̂ from a discrete uniform distribution U{1, D} for {di}Bi=1 and {d+i }Ni=1 to guarantee they are the same. We randomly sample {d − i }Mi=1 from the set of the rest of the directions U{1, D} \ {d̂} individually and independently to cover the rest of directions in Navigator A. Note that the discovered direction should be independent with the starting point and the scale of variation, which is in line with the disentangled factors. Therefore, {zi}Bi=1, {z + i }Ni=1, {z − i }Mi=1 are all sampled from latent space Z , and {εi}Bi=1, {ε + i }Ni=1, {ε − i }Mi=1 are all sampled from a shared continuous uniform distribution U [−ϵ, ϵ] individually and independently. We normalize each sample in Q, K+, and K− to a unit vector to eliminate the impact caused by different shift scalars.
For the design of Contrastive Loss, a well-known form of Contrastive Loss is InfoNCE (van den Oord et al., 2018):
LNCE = − 1
|B| B∑ i=1 N∑ j=1 log exp(qi · k+j /τ)∑N+M s=1 exp(qi · ks/τ) , (2)
where τ is a temperature hyper-parameter and {ki}N+Mi=1 = {k + i }Ni=1 ⋃ {k−i }Mi=1. The InfoNCE is originate from BCELoss (Gutmann & Hyvärinen, 2010). BCELoss has been used to achieve contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We choose to follow them to use BCELoss Llogits for reducing computational cost:
Llogits = − 1
|B| B∑ i=1 ( l−i + l + i ) , (3)
l+i = N∑ j=1 log σ(qi · k+j /τ), l − i = M∑ m=1 log(1− σ(qi · k−m/τ)), (4)
where σ denotes the sigmoid function, l+i denotes the part for positive samples, and l − i denotes the part for the negative ones.Note that we use a shared positive set for B different queries to reduce the computational cost.
3.3 KEY TECHNIQUES FOR DISCO
Entropy-based domination loss. By optimizing the Contrastive Loss, Navigator A is optimized to find the disentangled directions in the latent space, and Disentangling Encoder E is optimized to extract disentangled representations from images. To further make the encoded representations more disentangled, i.e., when traversing along one disentangled direction, only one dimension of the encoded representation should respond, we thus propose an entropy-based domination loss to encourage the corresponding samples in the Variation Space to be one-hot. To implement the entropy-based domination loss, we first get the mean c of Q and K+ as
c = 1
|B +N | ( B∑ i=1 qi + N∑ i=1 k+i ) . (5)
We then compute the probability as pi = exp c(i)/ ∑J
j=1 exp c(j), where c(i) is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss Led is calculated as
Led = − 1
J J∑ j=1 pj log(pj). (6)
Hard negatives flipping. Since the latent space of the generative models is a high-dimension complex manifold, many different directions carry the same semantic meaning. These directions with the same semantic meaning result in hard negatives during the optimization of Contrastive Loss. The hard negatives here are different from the hard negatives in the works of self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where they have reliable annotations of the samples. Here, our hard negatives are more likely to be “false” negatives, and we choose to flip these hard negatives into positives. Specifically, we use a threshold T to identify the hard negative samples, and use their similarity to the queries as the pseudo-labels for them:
\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log(1 - \sigma(\alpha_{ij})) + \sum_{\alpha_{ij} \geq T} \alpha_{ij} \log(\sigma(\alpha_{ij})),   (7)

where \hat{l}_i^- denotes the modified l_i^- and \alpha_{ij} = q_i \cdot k_j^- / \tau. Therefore, the modified final BCELoss is:

L_{logits-f} = -\frac{1}{|B|} \sum_{i=1}^{B} \left( l_i^+ + \hat{l}_i^- \right).   (8)
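The flipping rule of Equation 7 amounts to a per-entry branch on the similarity matrix; a hedged sketch, with an assumed threshold value, is given below.

```python
import torch
import torch.nn.functional as F

def flipped_negative_term(q, k_neg, tau=0.1, T=3.0):
    """Modified negative term of Eq. (7): negatives whose similarity exceeds
    the threshold T are flipped into positives, weighted by their similarity."""
    alpha = q @ k_neg.t() / tau                      # (B, M) similarity logits
    hard = alpha >= T
    easy_part = F.logsigmoid(-alpha) * (~hard)       # log(1 - sigma(alpha)) for true negatives
    hard_part = alpha * F.logsigmoid(alpha) * hard   # alpha * log(sigma(alpha)) as pseudo-label
    return (easy_part + hard_part).sum(dim=1)        # \hat{l}_i^- per query
```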
Full objective. With the above two techniques, the full objective is:
L = L_{logits-f} + \lambda L_{ed},   (9)
where λ is the weighting hyper-parameter for entropy-based domination loss Led.
4 EXPERIMENT
In this section, we first follow the well-accepted protocol (Locatello et al., 2019; Khrulkov et al., 2021) to evaluate the learned disentangled representation, which also implicitly reflects the quality of the discovered directions (Lin et al., 2020) (Section 4.1). Secondly, we follow Li et al. (2021a) to directly evaluate the discovered directions (Section 4.2). Finally, we conduct an ablation study (Section 4.3).
4.1 EVALUATIONS ON DISENTANGLED REPRESENTATION
4.1.1 EXPERIMENTAL SETUP
Datasets. We consider the following popular datasets in the disentanglement area: Shapes3D (Kim & Mnih, 2018) with 6 ground truth factors, MPI3D (Gondal et al., 2019) with 7 ground truth factors, and Cars3D (Reed et al., 2015) with 3 ground truth factors. In the experiments on the above datasets, images are resized to 64×64 resolution.
Pretrained generative models. For GAN, we use the StyleGAN2 model (Karras et al., 2020). For VAE, we use a common structure with convolutions (Locatello et al., 2019). For Flow, we use Glow (Kingma & Dhariwal, 2018).
Baselines. For the typical disentanglement baselines, we choose FactorVAE (Kim & Mnih, 2018), β-TCVAE (Chen et al., 2018) and InfoGAN-CR (Lin et al., 2020). For discovering-based methods, we consider several recent methods: GANspace (GS) (Härkönen et al., 2020), LatentDiscovery (LD) (Voynov & Babenko, 2020), ClosedForm (CF) (Shen & Zhou, 2021) and DeepSpectral (DS) (Khrulkov et al., 2021). For these methods, we follow Khrulkov et al. (2021) to train an additional encoder to extract disentangled representations. We are the first to extract disentangled representations from pretrained VAE and Flow, so we extend LD to VAE and Flow as a baseline.
Disentanglement metrics. We mainly consider two representative ones: the Mutual Information Gap (MIG) (Chen et al., 2018) and the Disentanglement metric (DCI) (Eastwood & Williams, 2018). MIG requires each factor to be only perturbed by changes of a single dimension of representation. DCI requires each dimension only to encode the information of a single dominant factor. We evaluate the disentanglement in terms of both representation and factors. We also provide results for β-VAE score (Higgins et al., 2017) and FactorVAE score (Kim & Mnih, 2018) in Appendix B.3.
Randomness. We consider the randomness caused by random seeds and the strength of the regularization term (Locatello et al., 2019). For random seeds, we follow the same setting as the baselines. Since DisCo does not have a regularization term, we consider the randomness of the pretrained generative models. For all methods, we ensure there are 25 runs, except that Glow only has one run, limited by GPU resources. More details are presented in Appendix A.
4.1.2 EXPERIMENTAL RESULTS
The quantitative results are summarized in Table 1 and Figure 3. More details about the experimental settings and results are presented in Appendix A & C.
DisCo vs. typical baselines. Our DisCo consistently achieves the SOTA performance in terms of MIG and DCI scores. The variance due to randomness of DisCo tends to be smaller than that of the typical baselines. We demonstrate that a method that extracts disentangled representations from pretrained non-disentangled models can outperform typical disentanglement baselines.
DisCo vs. discovering-based methods. Among the baselines based on discovering pretrained GAN, CF achieves the best performance. DisCo outperforms CF in almost all the cases by a large margin. Besides, these baselines need an extra stage (Khrulkov et al., 2021) to get disentangled representation, while our Disentangling Encoder can directly extract disentangled representation.
4.2 EVALUATIONS ON DISCOVERED DIRECTIONS
To evaluate the discovered directions, we compare DisCo on StyleGAN2 with GS, LD, CF and DS on the real-world dataset FFHQ (Karras et al., 2019)1, and adopt the comprehensive Manipulation Disentanglement Score (MDS) (Li et al., 2021a) as the metric. To calculate MDS, we use the 40 CelebaHQ-Attributes predictors released by StyleGAN. Among them, we select Young, Smile, Bald and Blonde Hair, as these attributes have an available predictor and are commonly found by all methods. The results are summarized in Table 3. DisCo shows better overall performance than the other baselines, which verifies our assumption that learning disentangled representations benefits latent space discovering. We also provide qualitative comparisons in Figure 4.
Finally, we provide an intuitive analysis in Appendix D for why DisCo can find those disentangled directions.
4.3 ABLATION STUDY
In this section, we perform an ablation study of DisCo only on GAN, due to space limitations. For the experiments, we use the Shapes3D dataset, and the random seed is fixed.
Choice of latent space. For style-based GANs (Karras et al., 2019; 2020), there is a style space W, which is the output of the style network (an MLP) whose input is a random latent space Z. As demonstrated in Karras et al. (2019), W is more interpretable than Z. We conduct experiments on W and Z respectively to see how the latent space influences the performance. As shown in Table 4, DisCo on W is better, indicating that the better the latent space is organized, the better disentanglement DisCo can achieve.
Choices of A. Following the setting of Voynov & Babenko (2020), we mainly consider three options of A: a linear operator with all matrix columns having a unit length, a linear operator with orthonormal matrix columns, or a nonlinear operator of 3 fully-connected layers.
The results are shown in Table 4. For latent spaces W and Z, A with unit-norm columns achieves nearly the best performance in terms of MIG and DCI scores. Compared to A with orthonormal matrix columns, A with unit-norm columns is more expressive with fewer constraints. Another possible reason is that A is global and not conditioned on the latent code z; a non-linear operator is more suitable for a local navigator A. For such a more complex, local, and non-linear setting, more inductive bias or supervision should be introduced.
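For reference, a linear navigator with unit-norm columns can be realized by re-normalizing its direction vectors at every forward pass; the sketch below is one possible implementation, not necessarily the released one.

```python
import torch
import torch.nn as nn

class UnitNormNavigator(nn.Module):
    """Linear navigator A: maps a direction index d and shift eps to a latent offset."""
    def __init__(self, num_dirs, latent_dim):
        super().__init__()
        self.dirs = nn.Parameter(torch.randn(num_dirs, latent_dim))

    def forward(self, d, eps):
        # keep every direction vector at unit length
        unit_dirs = self.dirs / self.dirs.norm(dim=1, keepdim=True)
        return eps.unsqueeze(-1) * unit_dirs[d]      # (batch, latent_dim) offset

# usage: z_shifted = z + navigator(d, eps)
```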
Entropy-based domination loss. Here, we verify the effectiveness of the entropy-based domination loss Led for disentanglement. For a desirable disentangled representation, one semantic meaning corresponds to one dimension. As shown in Table 4, Led improves the performance by a large margin. We also visualize the Variation Space to further demonstrate the effectiveness of the proposed loss in Figure 5. Adding the domination loss encourages the samples in the Variation Space to be one-hot, which is desirable for disentanglement.
1The above disentanglement metrics (DCI and MIG) are not available for the FFHQ dataset.
Hard negatives flipping. We run DisCo with and without the hard negatives flipping strategy to study its influence. As shown in Table 4, flipping hard negatives improves the disentanglement ability of DisCo. The reason is that the hard negatives have the same semantics as the positive samples; in this case, treating them as hard negatives does not make sense, and flipping them with pseudo-labels makes the optimization of the Contrastive Loss easier.
Hyperparameters N & M. We run DisCo with different ratios N : M under a fixed sum of 96, and with different sums N + M under a fixed ratio of 1 : 2, to study their impact. As shown in Figure 6 (a), the best ratio is N : M = 32 : 64 = 1 : 2; the red line (MIG) and blue line (DCI) show that larger or smaller ratios hurt DisCo, indicating that DisCo requires a balance between N and M. As shown in Figure 6 (b), the sum N + M has only a slight impact on DisCo. Other hyperparameters are set empirically; more details are presented in Appendix A.
Contrast vs. Classification. To verify the effectiveness of Contrast, we substitute it with classification by adopting an additional linear layer to recover the corresponding direction index and the shift along this direction. As Table 2 shows, Contrastive Learning outperforms Classification significantly.
Concatenation vs. Variation. We further demonstrate that the Variation Space is crucial for DisCo. By replacing the difference operator with concatenation, the performance drops significantly (Table 2), indicating that the encoded representation is not well disentangled. On the other hand, the disentangled representations of images are achieved by Contrastive Learning in the Variation Space.
4.4 ANALYSIS OF DIFFERENT GENERATIVE MODELS
As shown in Table 1, DisCo generalizes well to different generative models (GAN, VAE, and Flow). DisCo on GAN and VAE achieves relatively good performance, while DisCo on Flow is not as good. The possible reason is similar to the choice of latent space for GAN: we assume the disentangled directions are globally linear and thus use a linear navigator. In contrast to GAN and VAE, we suspect that Flow may not conform to this assumption well. Furthermore, Flow suffers from high GPU cost and unstable training, which limits further exploration.
5 CONCLUSION
In this paper, we present DisCo, an unsupervised and model-agnostic Contrastive Learning framework that learns disentangled representations by exploiting pretrained generative models. We propose an entropy-based domination loss and a hard negatives flipping strategy to achieve better disentanglement. DisCo outperforms typical unsupervised disentanglement methods while maintaining high image quality. We pinpoint a new direction: Contrastive Learning can be effectively applied to extract disentangled representations from pretrained generative models. For some complex generative models, the global linear assumption on disentangled directions in the latent space could be a limitation. For future work, extending DisCo to existing VAE-based disentanglement frameworks is an exciting direction.
A.2 SETTING FOR BASELINES
In this section, we introduce the implementation setting for the baselines (including randomness).
VAE-based methods. We choose FactorVAE and β-TCVAE as the SOTA VAE-based methods, and we follow Locatello et al. (2019) to use the same encoder and decoder architectures. For the hyper-parameters, we use the best settings found by grid search. We set the latent dimension of the representation to 10. For FactorVAE, we set the hyperparameter γ to 10. For β-TCVAE, we set the hyperparameter β to 6. For the random seeds, considering our method has 25 runs, we run each model 25 times with different random seeds to make the comparison fair.
InfoGAN-based methods. We choose InfoGAN-CR as a baseline. We use the official implementation 2 with the best hyperparameter settings found by grid search. For the random seeds, we run 25 times with different random seeds.
Discovering-based methods. We follow Khrulkov et al. (2021) to use the same settings for the following four baselines: LD (GAN), CF, GS, and DS. Similar to our method (DisCo), discovering-based methods do not have a regularization term. Thus, for the randomness, we adopt the same strategy as DisCo. We take the top-10 directions for 5 different random seeds for the GAN and 5 different random seeds for the additional encoder that learns disentangled representations.
LD (VAE) & LD (Flow). We follow LD (GAN) to use the same settings and substitute the GAN with VAE / Glow. The only exception is the randomness for LD (Flow). We only run one random seed to pretrain the Glow and use one random seed for the encoder.
A.3 MANIPULATION DISENTANGLEMENT SCORE
As claimed in Li et al. (2021a), it is difficult to evaluate the performance on discovering the latent space among different methods, which often use model-specific hyper-parameters to control the editing strength. Thus, Li et al. (2021a) propose a comprehensive metric called Manipulation Disentanglement Score (MDS), which takes both the accuracy and the disentanglement of manipulation into consideration. For more details, please refer to Li et al. (2021a).
A.4 DOMAIN GAP PROBLEM
Please note that there exists a domain gap between the generated images of pretrained generative models and the real images. However, the good performance on disentanglement metrics shows that the domain gap has limited influence on DisCo.
2https://github.com/fjxmlzn/InfoGAN-CR
A.5 ARCHITECTURE
Here, we provide the model architectures in our work. For the architecture of StyleGAN2, we follow Khrulkov et al. (2021). For the architecture of Glow, we use the open-source implementation 3.
3https://github.com/rosinality/glow-pytorch
B MORE EXPERIMENTS
B.1 MORE QUALITATIVE COMPARISON
We provide some examples for qualitative comparison. We first demonstrate the trade-off problem of the VAE-based methods. As shown in Figure 7, DisCo leverages the pretrained generative model and does not have the trade-off between disentanglement and generation quality.
Furthermore, as shown in Figure 8 and Figure 9, VAE-based methods suffer from poor image quality. When changing one attribute, the results of discovering-based methods tend to also change other attributes.
We also provide qualitative comparisons between DisCo and InfoGAN-CR. Note that the latent space of InfoGAN-CR is not aligned with the pretrained StyleGAN2. InfoGAN-CR also suffers from the trade-off problem, and its disentanglement ability is worse than DisCo.
We explain the comparison in the main paper and show more manipulation comparisons here.
B.2 ANALYSIS OF THE LEARNED DISENTANGLED REPRESENTATIONS
We feed the images traversing the three most significant factors (wall color, floor color, and object color) of Shapes3D into the Disentangling Encoders and plot the corresponding dimensions of the encoded representations to visualize the learned disentangled space. The location of each point is the disentangled representation of the corresponding image. An ideal result is that all the points form a cube and the color variation is continuous. We consider three baselines that have relatively higher MIG and DCI: CF, DS, LD. As the figures below show, the points in the latent spaces of CF and DS are not well organized, and the latent spaces of all three baselines are not well aligned with the axes, especially for LD. DisCo learns a well-aligned and well-organized latent space, which signifies better disentanglement.
[Figure: learned latent spaces of CF, DS, LD, and Ours.]
B.3 MORE QUANTITATIVE COMPARISON
We provide additional quantitative comparisons in terms of the β-VAE score and FactorVAE score. DisCo on pretrained GAN is comparable to the discovering-based baselines in terms of β-VAE score and FactorVAE score, suggesting some disagreement between these two scores and MIG/DCI. However, note that the qualitative evaluations in Figure 8, Figure 9 and Section B.2 show that the disentanglement ability of DisCo is better than that of all the baselines on the Shapes3D dataset.
[Table: β-VAE and FactorVAE scores, grouped into typical disentanglement baselines and methods on pretrained GAN, VAE, and Flow.]
We also provide an additional experiment on the Noisy-DSprites dataset. We compare DisCo with β-TCVAE (the best typical method) and CF (the best discovering-based method) in terms of the MIG and DCI metrics.
C LATENT TRAVERSALS
In this section, we visualize the disentangled directions of the latent space discovered by DisCo on each dataset. For Cars3D, Shapes3D, Anime and MNIST, the image resolution is 64×64. For FFHQ, LSUN cat and LSUN church, the image resolution is 256×256. Besides StyleGAN2, we also provide results of Spectral Norm GAN (Miyato et al., 2018) 4 on MNIST (LeCun et al., 2010) and Anime Face (Jin et al., 2017) to demonstrate that DisCo generalizes well to other types of GAN.
4https://github.com/anvoynov/GANLatentDiscovery
D AN INTUITIVE ANALYSIS FOR DISCO
DisCo works by contrasting the variations that result from traversing along the directions provided by the Navigator. Is this sufficient to converge to a disentangled solution? Note that answering this question is very challenging. To the best of our knowledge, for unsupervised disentangled representation learning, there is no theoretical constraint sufficient to guarantee convergence to a disentangled solution (Locatello et al., 2019). Here we provide an intuitive analysis of DisCo and share our thoughts on how DisCo finds the disentangled directions in the latent space, supported by quantitative and qualitative observations on pretrained GANs. The intuitive analysis consists of two parts: (i) the directions that can be discovered by DisCo have different variation patterns compared to random directions; (ii) DisCo hardly converges to an entangled solution.
D.1 WHAT KIND OF DIRECTIONS DISCO CAN CONVERGE TO?
First, we visualize the latent space and show that there are variation patterns in the latent space for disentangled factors. We design the following visualization method. Given a pretrained GAN and two directions in the latent space, we traverse along the plane spanned by the two directions to generate a grid of images. The range is large enough to cover all values of the disentangled factors, and the step is small enough to obtain a dense grid. Then, we feed these images into an encoder trained with ground-truth factor labels and obtain a heatmap for each factor (the value is the response of the dimension corresponding to that factor). In this way, we can observe the variation patterns that emerge in the latent space.
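A sketch of this traversal-and-response procedure is given below; the generator G, the hypothetical ground-truth encoder E_gt, and the grid range and step are assumptions chosen for illustration.

```python
import torch

@torch.no_grad()
def factor_heatmap(G, E_gt, z0, u, v, factor_idx, steps=21, scale=5.0):
    """Traverse the plane spanned by latent directions u and v and record
    the response of one ground-truth factor dimension (a minimal sketch)."""
    grid = torch.linspace(-scale, scale, steps)
    heat = torch.zeros(steps, steps)
    for i, a in enumerate(grid):
        for j, b in enumerate(grid):
            img = G(z0 + a * u + b * v)              # generated image at this grid point
            heat[i, j] = E_gt(img)[0, factor_idx]    # response of the chosen factor
    return heat
```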
We take StyleGAN pretrained on Shapes3D (synthetic) and FFHQ (real-world). For Shapes3D, we take background color and floor color as the two factors (since they refer to different areas of the image, these two factors are disentangled). For FFHQ, we take smile (mouth) and bald (hair) as the two factors (disentangled, as they also refer to different areas). We then choose random directions and the directions discovered by DisCo. The results are shown in Figure 27 and Figure 28.
We find a clear difference between random directions and the directions discovered by DisCo. This is because DisCo learns the directions by separating the variations that result from traversing along them. However, not all directions can be separated. For directions whose variations cannot be recognized or clustered by the encoder E, it is nearly impossible for DisCo to converge to them. Conversely, for directions that can be easily recognized and clustered, DisCo converges to them with a higher probability. From the following observations, we find that the variation patterns of the directions corresponding to disentangled factors are easily recognized and clustered.
D.2 WHY DISCO HARDLY CONVERGES TO THE ENTANGLED CASES?
In the previous section, we showed that DisCo can discover directions with distinct variation patterns and exclude random directions. Here we discuss why DisCo hardly converges to the following entangled case (a trivial solution based on a disentangled one). Suppose there is an entangled direction of factors A and B (A and B change at the same rate when traversing along it) in the latent space of the generative model, and DisCo can separate the variations resulting from the direction of A and the entangled direction. In that case, DisCo would have no additional bias to update these directions to converge to disentangled ones.
In the following, for ease of reference, we denote the entangled direction of factors A and B (A and B change at the same rate when traversing along it) as the A+B direction, and the direction of factor A (only A changes when we traverse along it) as the A direction. The reasons why DisCo hardly converges to the case of A and A+B are two-fold:
(i) Our encoder is a lightweight network (5 CNN layers + 3 FC layers). It is nearly impossible for it to separate the A and A+B directions.
(ii) In the latent space of the pretrained generative models, the disentangled directions (A, B) are consistent at different locations. In contrast, the entangled directions (A+B) are not, as shown in Figure 29.
We conduct the following experiments to verify these two points. For (i), we replace our encoder in DisCo with a ResNet-50 and train DisCo from scratch on the Shapes3D dataset. The loss, MIG, and DCI are presented in Table 11. The trivial solution becomes possible when the encoder is powerful enough to fit the A and A+B directions to "become orthogonal". With this consideration, we adopt a lightweight encoder in DisCo to avoid this issue.
For (ii), as the sketch in Figure 29 demonstrates, the disentangled directions ("A", blue; "B", green) are consistent, i.e., invariant to the location in the latent space, while the entangled direction ("A+B", red) is not consistent across locations. The fundamental reason is that the directions of the disentangled variations are invariant to the position in the latent space, but the "rate" of the variation is not. E.g., at any point in the latent space, going "up" consistently changes the camera's pose; however, at point a, going "up" with step 1 means rotating 10 degrees, while at point b, going "up" with step 1 means rotating 5 degrees. When the variation "rates" of A and B differ, the A+B directions at different locations are not consistent.
Based on the different properties of disentangled and entangled directions in the latent space, DisCo can discover the disentangled directions with the contrastive loss. The contrastive loss can be understood from a clustering view (Wang & Isola, 2020; Li et al., 2021b). The variations from the disentangled directions are more consistent and can be better clustered compared to the variations from the
entangled ones. Thus, DisCo can discover the disentangled directions in the latent space and learn disentangled representations from images. We further provide the following experiments to support our above analysis.
D.2.1 QUANTITATIVE EXPERIMENT
We compare the losses of three different settings:
• A: For a navigator with disentangled directions, we fix the navigator and train the encoder until convergence.
• A + B: For a navigator with entangled directions (we use the linear combination of the disentangled directions to initialize the navigator), we fix it and train the encoder until convergence.
• A+B → A: After the A+B setting has converged, we update both the encoder and the navigator until convergence.
The Contrastive loss after convergence is presented in Table 12.
The results show that: (i) the disentangled directions (A) lead to lower loss and better performance than the entangled directions (A+B), indicating there is no trivial solution; (ii) even though the encoder with A+B has converged, when we optimize the navigator, gradients still backpropagate to the navigator and the directions converge to A.
D.2.2 QUALITATIVE EXPERIMENT
We visualize the latent space of the GAN in Figure 30 to verify the variation "rate" in the following way. In the latent space, we select two ground-truth disentangled directions, floor color (A) and background color (B), obtained with supervision via InterFaceGAN (Shen et al., 2020). We conduct equally spaced sampling along the two disentangled directions A (labeled with gradient green color) and B (labeled with gradient blue color), and along the composite direction (A+B, labeled with gradient red color), as shown in Figure 30 (a).
Then we generate the images (including the other images on the grid, as shown in Figure 30 (b)), and feed the images in the bounding boxes into a "ground truth" encoder (trained with ground-truth disentangled factors) to regress the "ground truth" disentangled representations of the images.
In Figure 30 (c), the points labeled with green color are well aligned with the x-axis, indicating that only the floor color changes, and the points labeled with blue color are well aligned with the y-axis, indicating that only the background color changes. However, the points labeled with red color are NOT aligned with any line, which indicates that the directions of A+B are not consistent. Further, the variation "rate" depends on the location in the latent space even for the two disentangled directions. This observation supports the idea illustrated in Figure 29. The different properties of disentangled and entangled directions enable DisCo to discover the disentangled directions in the latent space.
E EXTENSION: BRIDGE THE PRETRAINED VAE AND PRETRAINED GAN
Researchers have recently been interested in improving image quality given the disentangled representations produced by typical disentanglement methods. Lee et al. (2020) propose a post-processing stage using a GAN based on disentangled representations learned by VAE-based disentanglement models. This method sacrifices a little generation ability due to an additional constraint. Similarly, Srivastava et al. (2020) propose to use a deep generative model with AdaIN (Huang & Belongie, 2017) as a post-processing stage to improve reconstruction ability. Following this setting, we can replace the encoder in DisCo (GAN) with an encoder pretrained by VAE-based disentanglement baselines. In this way, we can bridge a pretrained disentangled VAE and a pretrained GAN, as shown in Figure 31. Compared to previous methods, our approach can fully utilize the state-of-the-art GAN and the state-of-the-art VAE-based method and does not need to train a deep generative model from scratch.
F DISCUSSION ON RELATION BETWEEN BCELOSS AND NCELOSS
We would like to present a deeper discussion of the relation between the BCELoss L_{logits} and the NCELoss L_{NCE}. This discussion relates to the NCE paper (Gutmann & Hyvärinen, 2010) and the InfoNCE paper (van den Oord et al., 2018). The discussion is as follows: (i) we first formulate a general problem and obtain two objectives, L_1 and L_2, where L_1 is the upper bound of L_2; (ii) following Gutmann & Hyvärinen (2010), we show that L_1 is aligned with L_{BCE} under their setting; (iii) following van den Oord et al. (2018), we prove that L_2 is aligned with L_{NCE} under their setting; (iv) we discuss the relation between these objectives and the losses in our paper.
Part I. Assume we have S observations {x_i}_{i=1}^S from a data distribution p(x), each with a label C_i ∈ {0, 1}. We denote the class-conditional distributions as p^+(x) = p(x|C = 1) and p^-(x) = p(x|C = 0). We define two objectives as follows:
L_1 = -\sum_{i=1}^{S} \left[ C_i \log P(C_i = 1 | x_i) + (1 - C_i) \log P(C_i = 0 | x_i) \right],   (10)

and

L_2 = -\sum_{i=1}^{S} C_i \log P(C_i = 1 | x_i).   (11)
Since -\sum_{i=1}^{S} (1 - C_i) \log P(C_i = 0 | x_i) \geq 0, we have

L_1 \geq L_2.   (12)

That is, L_1 is an upper bound of L_2. This is a general formulation of a binary classification problem. In the context of our paper, we have a paired observation x_i : (q, k_i), with q as the query; the key k_i is either from the positive key set {k_j^+}_{j=1}^N or from the negative key set {k_m^-}_{m=1}^M (i.e., {k_i}_{i=1}^{N+M} = {k_j^+}_{j=1}^N ∪ {k_m^-}_{m=1}^M), where M = S - N. And C_i is assigned as:
C_i = \begin{cases} 1, & k_i \in \{k_j^+\}_{j=1}^{N} \\ 0, & k_i \in \{k_m^-\}_{m=1}^{M} \end{cases}   (13)
In our paper, we have h(x) = exp(q · k/τ).
Part II. In this part, following Gutmann & Hyvärinen (2010), we show that L_1 is aligned with L_{logits} (Equation 3 in the main paper) under the setting of Gutmann & Hyvärinen (2010). Following that work, we assume the prior distribution P(C = 0) = P(C = 1) = 1/2; according to the Bayes rule, we have
P(C = 1 | x) = \frac{p(x | C = 1) P(C = 1)}{p(x | C = 1) P(C = 1) + p(x | C = 0) P(C = 0)} = \frac{1}{1 + \frac{p^-(x)}{p^+(x)}}.   (14)
And P(C = 0 | x) = 1 - P(C = 1 | x). On the other hand, the general form of the BCELoss is

L_{BCE} = -\sum_{i=1}^{S} \left[ C_i \log \sigma(q \cdot k_i / \tau) + (1 - C_i) \log(1 - \sigma(q \cdot k_i / \tau)) \right],   (15)
where σ(·) is the sigmoid function. We have

\sigma(q \cdot k / \tau) = \frac{1}{1 + \exp(-q \cdot k / \tau)} = \frac{1}{1 + \frac{1}{\exp(q \cdot k / \tau)}} = \frac{1}{1 + \frac{1}{h(x)}}.   (16)
From Theorem 1 of Gutmann & Hyvärinen (2010), we know that when L_{BCE} is minimized, we have

h(x) = \frac{p^+(x)}{p^-(x)}.   (17)

Thus, the BCELoss L_{BCE} is an approximation of the objective L_1.
Part III. Following van den Oord et al. (2018), we prove that L_2 is aligned with L_{NCE} (Equation 2 in the main paper) under the setting of van den Oord et al. (2018). In the typical contrastive setting (one positive sample, all others negative, following van den Oord et al. (2018)), we assume there is only one positive sample among {x_i}_{i=1}^S. Then, the probability that x_i is sampled from p^+(x) rather than p^-(x) is
P(C_i = 1 | x_i) = \frac{p^+(x_i) \prod_{l \neq i} p^-(x_l)}{\sum_{j=1}^{S} p^+(x_j) \prod_{l \neq j} p^-(x_l)} = \frac{\frac{p^+(x_i)}{p^-(x_i)}}{\sum_{j=1}^{S} \frac{p^+(x_j)}{p^-(x_j)}}.   (18)
From van den Oord et al. (2018), we know that when minimizing Equation 11, we have h(x) = exp(q · k/τ) ∝ p^+(x)/p^-(x). In this case, we obtain the form of L_{NCE} as

L_{NCE} = -\sum_{i=1}^{S} C_i \log \frac{\exp(q \cdot k_i / \tau)}{\sum_{j=1}^{S} \exp(q \cdot k_j / \tau)}.   (19)

L_{NCE} is an approximation of L_2.
Part IV. When generalizing the contrastive loss to our setting (N positive samples, M negative samples), the BCELoss (Equation 15) can be reformulated as

\hat{L}_{BCE} = -\sum_{j=1}^{N} \log \sigma(q \cdot k_j^+ / \tau) - \sum_{m=1}^{M} \log(1 - \sigma(q \cdot k_m^- / \tau)).   (20)
Similarly, the NCELoss (Equation 19) can be reformulated as

\hat{L}_{NCE} = -\sum_{j=1}^{N} \log \frac{\exp(q \cdot k_j^+ / \tau)}{\sum_{s=1}^{M+N} \exp(q \cdot k_s / \tau)}.   (21)
L̂BCE is aligned with Llogits (Equation 3 in our main paper), and L̂NCE is aligned with LNCE (Equation 2 in the main paper).
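For reference, Equations 20 and 21 can be computed side by side as in the following sketch; the single-query interface and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def bce_and_nce(q, k_pos, k_neg, tau=0.1):
    """Compute Eq. (20) and Eq. (21) for one query q of shape (J,)."""
    pos = (k_pos @ q) / tau                          # (N,) positive logits
    neg = (k_neg @ q) / tau                          # (M,) negative logits
    l_bce = -(F.logsigmoid(pos).sum() + F.logsigmoid(-neg).sum())
    all_logits = torch.cat([pos, neg])               # (N + M,)
    l_nce = -(pos - torch.logsumexp(all_logits, dim=0)).sum()
    return l_bce, l_nce
```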
Now we have that L_1 (approximated by L_{BCE}) is the upper bound of L_2 (approximated by L_{NCE}). However, as one may notice, the assumptions made in Part II and Part III are different: one is P(C = 0) = P(C = 1), the other is that there is only one positive sample and the others are negative. Moreover, the extension to our setting (N positives, the others negative) is a more general case.
However, they share the same objective: by contrasting positives and negatives, we can use h(x) = exp(q · k/τ) to estimate p^+/p^-. We can think of h(x) as a similarity score, i.e., if q and k form a positive pair (they have the same direction in our paper), h(x) should be as large as possible (p^+/p^- > 1), and vice versa. In this way, we learn representations (q, k) that reflect the image variation, i.e., similar variations have a higher score h(x), while different kinds of variation have a lower score h(x). These meaningful representations, in turn, help to discover the directions in the latent space that carry different kinds of image variation. This is an understanding, from a contrastive learning view, of how our method works. | 1. What is the focus of the paper regarding disentanglement learning?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the questions raised by the reviewer regarding the method's application in StyleGAN2?
4. How does the reviewer assess the quality and impact of the extensive experiments conducted in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes to learn disentangled representations via contrastive learning on well-pretrained generative models. Extensive experiments are conducted on various datasets and the results validate the effectiveness of the method.
Review
Strengths:
Learning disentangled representations via contrastive learning on pretrained models is an interesting direction for this community.
The introduced method is intuitive, simple, and effective.
Extensive experiments on disentanglement learning and high-resolution datasets are conducted with satisfying results obtained.
The introduced entropy-based domination loss and the hard negative flipping technique are effective and practical.
The proposed method is general for multiple generative models.
Problem: For qualitative results on StyleGAN2 (Fig. 4 and Fig. 17 - 20), I wonder whether there is any manual selection of the layers to modify in the w space (as done in the methods GS and CF), or whether the modification happens globally on all layers in the w space?
ICLR | Title
ProMP: Proximal Meta-Policy Search
Abstract
Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies. This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm endows efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.
1 INTRODUCTION
A remarkable trait of human intelligence is the ability to adapt to new situations in the face of limited experience. In contrast, our most successful artificial agents struggle in such scenarios. While achieving impressive results, they suffer from high sample complexity in learning even a single task, fail to generalize to new situations, and require large amounts of additional data to successfully adapt to new environments. Meta-learning addresses these shortcomings by learning how to learn. Its objective is to learn an algorithm that allows the artificial agent to succeed in an unseen task when only limited experience is available, aiming to achieve the same fast adaptation that humans possess (Schmidhuber, 1987; Thrun & Pratt, 1998).
Despite recent progress, deep reinforcement learning (RL) still relies heavily on hand-crafted features and reward functions as well as engineered problem specific inductive bias. Meta-RL aims to forego such reliance by acquiring inductive bias in a data-driven manner. Recent work proves this approach to be promising, demonstrating that Meta-RL allows agents to obtain a diverse set of skills, attain better exploration strategies, and learn faster through meta-learned dynamics models or synthetic returns (Duan et al., 2016; Xu et al., 2018; Gupta et al., 2018b; Saemundsson et al., 2018).
Meta-RL is a multi-stage process in which the agent, after a few sampled environment interactions, adapts its behavior to the given task. Despite its wide utilization, little work has been done to promote theoretical understanding of this process, leaving Meta-RL grounded on unstable foundations. Although the behavior prior to the adaptation step is instrumental for task identification, the interplay between pre-adaptation sampling and posterior performance of the policy remains poorly understood. In fact, prior work in gradient-based Meta-RL has either entirely neglected credit assignment to the pre-update distribution (Finn et al., 2017) or implemented such credit assignment in a naive way (Al-Shedivat et al., 2018; Stadie et al., 2018).
To our knowledge, we provide the first formal in-depth analysis of credit assignment w.r.t. pre-adaptation sampling distribution in Meta-RL. Based on our findings, we develop a novel Meta-RL algorithm. First, we analyze two distinct methods for assigning credit to pre-adaptation behavior.
We show that the recent formulation introduced by Al-Shedivat et al. (2018) and Stadie et al. (2018) leads to poor credit assignment, while the MAML formulation (Finn et al., 2017) potentially yields superior meta-policy updates. Second, based on insights from our formal analysis, we highlight both the importance and difficulty of proper meta-policy gradient estimates. In light of this, we propose the low variance curvature (LVC) surrogate objective which yields gradient estimates with a favorable bias-variance trade-off. Finally, building upon the LVC estimator we develop Proximal Meta-Policy Search (ProMP), an efficient and stable meta-learning algorithm for RL. In our experiments, we show that ProMP consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.
2 RELATED WORK
Meta-Learning concerns the question of "learning to learn", aiming to acquire inductive bias in a data-driven manner, so that the learning process in the face of unseen data or new problem settings is accelerated (Schmidhuber, 1987; Schmidhuber et al., 1997; Thrun & Pratt, 1998).
This can be achieved in various ways. One category of methods attempts to learn the "learning program" of a universal Turing machine in the form of a recurrent / memory-augmented model that ingests datasets and either outputs the parameters of the trained model (Hochreiter et al., 2001; Andrychowicz et al., 2016; Chen et al., 2017; Ravi & Larochelle, 2017) or directly outputs predictions for given test inputs (Duan et al., 2016; Santoro et al., 2016; Mishra et al., 2018). Though very flexible and capable of learning very efficient adaptations, such methods lack performance guarantees and are difficult to train on the long sequences that arise in Meta-RL.
Another set of methods embeds the structure of a classical learning algorithm in the meta-learning procedure, and optimizes the parameters of the embedded learner during meta-training (Hüsken & Goerick, 2000; Finn et al., 2017; Nichol et al., 2018; Miconi et al., 2018). A particular instance of the latter that has proven to be particularly successful in the context of RL is gradient-based meta-learning (Finn et al., 2017; Al-Shedivat et al., 2018; Stadie et al., 2018). Its objective is to learn an initialization such that after one or few steps of policy gradients the agent attains full performance on a new task. A desirable property of this approach is that even if fast adaptation fails, the agent just falls back on vanilla policy-gradients. However, as we show, previous gradient-based Meta-RL methods either neglect or perform poor credit assignment w.r.t. the pre-update sampling distribution.
A diverse set of methods building on Meta-RL has recently been introduced. This includes: learning exploration strategies (Gupta et al., 2018b), synthetic rewards (Sung et al., 2017; Xu et al., 2018), unsupervised policy acquisition (Gupta et al., 2018a), model-based RL (Clavera et al., 2018; Saemundsson et al., 2018), learning in competitive environments (Al-Shedivat et al., 2018) and meta-learning modular policies (Frans et al., 2018; Alet et al., 2018). Many of the mentioned approaches build on previous gradient-based meta-learning methods that insufficiently account for the pre-update distribution. ProMP overcomes these deficiencies, providing the necessary framework for novel applications of Meta-RL in unsolved problems.
3 BACKGROUND
Reinforcement Learning. A discrete-time finite Markov decision process (MDP), T, is defined by the tuple (S, A, p, p_0, r, H). Here, S is the set of states, A the action space, p(s_{t+1}|s_t, a_t) the transition distribution, p_0 represents the initial state distribution, r : S × A → R is a reward function, and H the time horizon. We omit the discount factor γ in the following elaborations for notational brevity. However, it is straightforward to include it by substituting the reward by r(s_t, a_t) := γ^t r(s_t, a_t). We define the return R(τ) as the sum of rewards along a trajectory τ := (s_0, a_0, ..., s_{H-1}, a_{H-1}, s_H). The goal of reinforcement learning is to find a policy π(a|s) that maximizes the expected return E_{τ∼P_T(τ|π)}[R(τ)].
Meta-Reinforcement Learning goes one step further, aiming to learn a learning algorithm which is able to quickly learn the optimal policy for a task T drawn from a distribution of tasks ρ(T). Each task T corresponds to a different MDP. Typically, it is assumed that the tasks share the action and state space, but may differ in their reward function or their dynamics.
Gradient-based meta-learning aims to solve this problem by learning the parameters θ of a policy π_θ such that performing a single or few steps of vanilla policy gradient (VPG) on the given task leads to the optimal policy for that task. This meta-learning formulation, also known under the name of MAML, was first introduced by Finn et al. (2017). We refer to it as formulation I, which can be expressed as maximizing the objective

J^{I}(θ) = E_{T∼ρ(T)} \left[ E_{τ'∼P_T(τ'|θ')} [R(τ')] \right]   with   θ' := U(θ, T) = θ + α ∇_θ E_{τ∼P_T(τ|θ)} [R(τ)]
Here, U denotes the update function, which depends on the task T and performs one VPG step towards maximizing the performance of the policy in T. For notational brevity and conciseness we assume a single policy gradient adaptation step. Nonetheless, all presented concepts can easily be extended to multiple adaptation steps.
Later work proposes a slightly different notion of gradient-based Meta-RL, also known as E-MAML, that attempts to circumvent issues with the meta-gradient estimation in MAML (Al-Shedivat et al., 2018; Stadie et al., 2018):

J^{II}(θ) = E_{T∼ρ(T)} \left[ E_{τ_{1:N}∼P_T(τ_{1:N}|θ),\; τ'∼P_T(τ'|θ')} [R(τ')] \right]   with   θ' := U(θ, τ_{1:N}) = θ + α ∇_θ \sum_{n=1}^{N} R(τ^{(n)})

Formulation II views U as a deterministic function that depends on N sampled trajectories from a specific task. In contrast to formulation I, the expectation over pre-update trajectories τ is applied outside of the update function. Throughout this paper we refer to π_θ as the pre-update policy, and π_{θ'} as the post-update policy.
4 SAMPLING DISTRIBUTION CREDIT ASSIGNMENT
This section analyzes the two gradient-based Meta-RL formulations introduced in Section 3. Figure 1 illustrates the stochastic computation graphs (Schulman et al., 2015b) of both formulations. The red arrows depict how credit assignment w.r.t. the pre-update sampling distribution P_T(τ|θ) is propagated. Formulation I (left) propagates the credit assignment through the update step, thereby exploiting the full problem structure. In contrast, formulation II (right) neglects the inherent structure, directly assigning credit from the post-update return R' to the pre-update policy π_θ, which leads to noisier, less effective credit assignment.
Both formulations optimize for the same objective, and are equivalent at the 0th order. However, because of the difference in their formulation and stochastic computation graph, their gradients and the resulting optimization step differs. In the following, we shed light on how and where formulation II loses signal by analyzing the gradients of both formulations, which can be written as (see Appendix A for more details and derivations)
∇_θ J(θ) = E_{T∼ρ(T)} \left[ E_{τ∼P_T(τ|θ),\; τ'∼P_T(τ'|θ')} \left[ ∇_θ J_{post}(τ, τ') + ∇_θ J_{pre}(τ, τ') \right] \right]   (1)
The first term ∇θJpost(τ , τ ′) is equal in both formulations, but the second term, ∇θJpre(τ , τ ′), differs between them. In particular, they correspond to
∇_θ J_{post}(τ, τ') = ∇_{θ'} \log π_{θ'}(τ') R(τ') \cdot \left( I + α R(τ) ∇^2_θ \log π_θ(τ) \right),   (2)

where the first factor is ∇_{θ'} J^{outer} and the second factor is the transformation from θ' to θ.

∇_θ J^{II}_{pre}(τ, τ') = α ∇_θ \log π_θ(τ) R(τ')   (3)

∇_θ J^{I}_{pre}(τ, τ') = α ∇_θ \log π_θ(τ) \left( (∇_θ \log π_θ(τ) R(τ))^⊤ (∇_{θ'} \log π_{θ'}(τ') R(τ')) \right),   (4)

where the two inner factors correspond to ∇_θ J^{inner} and ∇_{θ'} J^{outer}, respectively.
∇θJpost(τ , τ ′) simply corresponds to a policy gradient step on the post-update policy πθ′ w.r.t θ′, followed by a linear transformation from post- to pre-update parameters. It corresponds to increasing the likelihood of the trajectories τ ′ that led to higher returns. However, this term does not optimize for the pre-update sampling distribution, i.e., which trajectories τ led to better adaptation steps.
The credit assignment w.r.t. the pre-updated sampling distribution is carried out by the second term. In formulation II, ∇θJIIpre can be viewed as standard reinforcement learning on πθ with R(τ ′) as reward signal, treating the update function U as part of the unknown dynamics of the system. This shifts the pre-update sampling distribution to better adaptation steps.
Formulation I takes the causal dependence of P_T(τ'|θ') on P_T(τ|θ) into account. It does so by maximizing the inner product of pre-update and post-update policy gradients (see Eq. 4). This steers the pre-update policy towards 1) larger post-update returns, 2) larger adaptation steps α∇_θ J^{inner}, and 3) better alignment of pre- and post-update policy gradients (Li et al., 2017; Nichol et al., 2018). When combined, these effects directly optimize for adaptation. As a result, we expect the first meta-policy gradient formulation, J^I, to yield superior learning properties.
5 LOW VARIANCE CURVATURE ESTIMATOR
In the previous section we showed that the formulation introduced by Finn et al. (2017) results in superior meta-gradient updates, which should in principle lead to improved convergence properties. However, obtaining correct and low variance estimates of the respective meta-gradients proves challenging. As discussed by Foerster et al. (2018), and shown in Appendix B.3, the score function surrogate objective approach is ill-suited for calculating higher-order derivatives via automatic differentiation toolboxes. This important fact was overlooked in the original RL-MAML implementation (Finn et al., 2017), leading to incorrect meta-gradient estimates1. As a result, ∇_θ J_{pre} does not appear in the gradients of the meta-objective (i.e., ∇_θ J = ∇_θ J_{post}). Hence, MAML does not perform any credit assignment to pre-adaptation behavior.
But, even when properly implemented, we show that the meta-gradients exhibit high variance. Specifically, the estimation of the hessian of the RL-objective, which is inherent in the meta-gradients, requires special consideration. In this section, we motivate and introduce the low variance curvature (LVC) estimator: an improved estimator for the hessian of the RL-objective which promotes better meta-policy gradient updates. As we show in Appendix A.1, we can write the gradient of the meta-learning objective as
∇_θ J^{I}(θ) = E_{T∼ρ(T)} \left[ E_{τ'∼P_T(τ'|θ')} \left[ ∇_{θ'} \log P_T(τ'|θ') R(τ') ∇_θ U(θ, T) \right] \right]   (5)
Since the update function U resembles a policy gradient step, its gradient∇θU(θ, T ) involves computing the hessian of the reinforcement learning objective, i.e., ∇2θ Eτ∼PT (τ |θ) [R(τ )]. Estimating this hessian has been discussed in Baxter & Bartlett (2001) and Furmston et al. (2016). In the infinite horizon MDP case, Baxter & Bartlett (2001) derived a decomposition of the hessian. We extend their finding to the finite horizon case, showing that the hessian can be decomposed into three matrix terms (see Appendix B.2 for proof):
∇_θ U(θ, T) = I + α ∇^2_θ E_{τ∼P_T(τ|θ)} [R(τ)] = I + α \left( H_1 + H_2 + H_{12} + H_{12}^⊤ \right)   (6)
whereby
H_1 = E_{τ∼P_T(τ|θ)} \left[ \sum_{t=0}^{H-1} ∇_θ \log π_θ(a_t, s_t) ∇_θ \log π_θ(a_t, s_t)^⊤ \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right]

H_2 = E_{τ∼P_T(τ|θ)} \left[ \sum_{t=0}^{H-1} ∇^2_θ \log π_θ(a_t, s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right]

H_{12} = E_{τ∼P_T(τ|θ)} \left[ \sum_{t=0}^{H-1} ∇_θ \log π_θ(a_t, s_t) ∇_θ Q^{π_θ}_t(s_t, a_t)^⊤ \right]
1Note that MAML is theoretically sound, but does not attend to correctly estimating the meta-policy gradients. As a consequence, the gradients in the corresponding implementation do not comply with the theory.
Here Q^{π_θ}_t(s_t, a_t) = E_{τ_{t+1:H-1}∼P_T(·|θ)} \left[ \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \,\middle|\, s_t, a_t \right] denotes the expected state-action value function under policy π_θ at time t.
Computing the expectation of the RL-objective is in general intractable. Typically, its gradients are computed with a Monte Carlo estimate based on the policy gradient theorem (Eq. 82). In practical implementations, such an estimate is obtained by automatically differentiating a surrogate objective (Schulman et al., 2015b). However, this results in a highly biased hessian estimate which just computes H_2, entirely dropping the terms H_1 and H_{12} + H_{12}^⊤. In the notation of the previous section, it leads to neglecting the ∇_θ J_{pre} term, ignoring the influence of the pre-update sampling distribution. The issue can be overcome using the DiCE formulation, which allows computing unbiased higher-order Monte Carlo estimates of arbitrary stochastic computation graphs (Foerster et al., 2018). The DiCE-RL objective can be rewritten as follows
J^{DiCE}(τ) = \sum_{t=0}^{H-1} \left( \prod_{t'=0}^{t} \frac{π_θ(a_{t'}|s_{t'})}{⊥(π_θ(a_{t'}|s_{t'}))} \right) r(s_t, a_t),   τ ∼ P_T(τ)   (7)

E_{τ∼P_T(τ|θ)} \left[ ∇^2_θ J^{DiCE}(τ) \right] = H_1 + H_2 + H_{12} + H_{12}^⊤   (8)
In that, ⊥ denotes the “stop gradient” operator, i.e., ⊥(fθ(x))→ fθ(x) but ∇θ⊥(fθ(x))→ 0. The sequential dependence of πθ(at|st) within the trajectory, manifesting itself through the product of importance weights in (7), results in high variance estimates of the hessian ∇2θ Eτ∼PT (τ |θ) [R(τ )]. As noted by Furmston et al. (2016), H12 is particularly difficult to estimate, since it involves three nested sums along the trajectory. In section 7.2 we empirically show that the high variance estimates of the DiCE objective lead to noisy meta-policy gradients and poor learning performance.
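As a concrete illustration, the DiCE surrogate of Equation 7 can be written in a few lines of PyTorch using the standard exp(x − stop_grad(x)) trick; the per-trajectory interface below is our simplification.

```python
import torch

def dice_objective(log_probs, rewards):
    """J_DiCE of Eq. (7) for one trajectory.
    log_probs: (H,) log pi_theta(a_t|s_t); rewards: (H,) per-step rewards."""
    cum_log_probs = torch.cumsum(log_probs, dim=0)   # sum_{t'<=t} log pi(a_t'|s_t')
    # exp(x - stop_grad(x)) evaluates to 1 but keeps the gradient of x
    deps = torch.exp(cum_log_probs - cum_log_probs.detach())
    return (deps * rewards).sum()
```

Differentiating the returned scalar twice w.r.t. the policy parameters yields the unbiased (but high variance) hessian estimate of Equation 8.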
To facilitate sample-efficient meta-learning, we introduce the low variance curvature (LVC) estimator:

J^{LVC}(τ) = \sum_{t=0}^{H-1} \frac{π_θ(a_t|s_t)}{⊥(π_θ(a_t|s_t))} \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right),   τ ∼ P_T(τ)   (9)

E_{τ∼P_T(τ|θ)} \left[ ∇^2_θ J^{LVC}(τ) \right] = H_1 + H_2   (10)
By removing the sequential dependence of π_θ(a_t|s_t) within trajectories, the hessian estimate neglects the term H_{12} + H_{12}^⊤, which leads to a variance reduction but makes the estimate biased. The choice of this objective function is motivated by findings in Furmston et al. (2016): under certain conditions the term H_{12} + H_{12}^⊤ vanishes around local optima θ*, i.e., E_τ[∇^2_θ J^{LVC}] → E_τ[∇^2_θ J^{DiCE}] as θ → θ*. Hence, the bias of the LVC estimator becomes negligible close to local optima. The experiments in Section 7.2 underpin these theoretical findings, showing that the low variance hessian estimates obtained through J^{LVC} improve the sample-efficiency of meta-learning by a significant margin when compared to J^{DiCE}. We refer the interested reader to Appendix B for derivations and a more detailed discussion.
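The LVC surrogate of Equation 9 differs only in dropping the cumulative product and weighting the return-to-go per step; a corresponding sketch:

```python
import torch

def lvc_objective(log_probs, rewards):
    """J_LVC of Eq. (9) for one trajectory: per-step weights pi/stop_grad(pi)
    multiplied by the return-to-go."""
    weights = torch.exp(log_probs - log_probs.detach())  # value 1, gradient of log pi
    returns_to_go = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
    return (weights * returns_to_go).sum()
```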
6 PROMP: PROXIMAL META-POLICY SEARCH
Building on the previous sections, we develop a novel meta-policy search method based on the low variance curvature objective which aims to solve the following optimization problem:
\max_θ  E_{T∼ρ(T)} \left[ E_{τ'∼P_T(τ'|θ')} [R(τ')] \right]   with   θ' := θ + α ∇_θ E_{τ∼P_T(τ|θ)} \left[ J^{LVC}(τ) \right]   (11)
Prior work has optimized this objective using either vanilla policy gradient (VPG) or TRPO (Schulman et al., 2015a). TRPO holds the promise to be more data efficient and stable during the learning process when compared to VPG. However, it requires computing the Fisher information matrix (FIM). Estimating the FIM is particularly problematic in the meta-learning set up. The meta-policy gradients already involve second order derivatives; as a result, the time complexity of the FIM estimate is cubic in the number of policy parameters. Typically, the problem is circumvented using finite difference methods, which introduce further approximation errors.
The recently introduced PPO algorithm (Schulman et al., 2017) achieves comparable results to TRPO with the advantage of being a first-order method. PPO uses a surrogate clipping objective which allows it to safely take multiple gradient steps without re-sampling trajectories.
J^{CLIP}_T(θ) = E_{τ∼P_T(τ,θ_o)} \left[ \sum_{t=0}^{H-1} \min\left( \frac{π_θ(a_t|s_t)}{π_{θ_o}(a_t|s_t)} A^{π_{θ_o}}(s_t, a_t),\; \mathrm{clip}^{1+ε}_{1-ε}\left( \frac{π_θ(a_t|s_t)}{π_{θ_o}(a_t|s_t)} \right) A^{π_{θ_o}}(s_t, a_t) \right) \right]
Algorithm 1 Proximal Meta-Policy Search (ProMP)
Require: Task distribution ρ, step sizes α, β, KL-penalty coefficient η, clipping range ε
1: Randomly initialize θ
2: while θ not converged do
3:     Sample batch of tasks T_i ∼ ρ(T)
4:     for step n = 0, ..., N−1 do
5:         if n = 0 then
6:             Set θ_o ← θ
7:             for all T_i ∼ ρ(T) do
8:                 Sample pre-update trajectories D_i = {τ_i} from T_i using π_θ
9:                 Compute adapted parameters θ'_{o,i} ← θ + α ∇_θ J^{LR}_{T_i}(θ) with D_i = {τ_i}
10:                Sample post-update trajectories D'_i = {τ'_i} from T_i using π_{θ'_{o,i}}
11:        Update θ ← θ + β \sum_{T_i} ∇_θ J^{ProMP}_{T_i}(θ) using each D'_i = {τ'_i}
In the case of Meta-RL, it does not suffice to just replace the post-update reward objective with J^{CLIP}_T. In order to safely perform multiple meta-gradient steps based on the same sampled data from a recent policy π_{θ_o}, we also need to 1) account for changes in the pre-update action distribution π_θ(a_t|s_t), and 2) bound changes in the pre-update state visitation distribution (Kakade & Langford, 2002).
We propose Proximal Meta-Policy Search (ProMP), which incorporates both the benefits of proximal policy optimization and the low variance curvature objective (see Alg. 1). In order to comply with requirement 1), ProMP replaces the "stop gradient" importance weight π_θ(a_t|s_t)/⊥(π_θ(a_t|s_t)) by the likelihood ratio π_θ(a_t|s_t)/π_{θ_o}(a_t|s_t), which results in the following objective
J^{LR}_T(θ) = E_{τ∼P_T(τ,θ_o)} \left[ \sum_{t=0}^{H-1} \frac{π_θ(a_t|s_t)}{π_{θ_o}(a_t|s_t)} A^{π_{θ_o}}(s_t, a_t) \right]   (12)
An important feature of this objective is that its derivatives w.r.t. θ evaluated at θ_o are identical to those of the LVC objective, and it additionally accounts for changes in the pre-update action distribution. To satisfy condition 2) we extend the clipped meta-objective with a KL-penalty term between π_θ and π_{θ_o}. This KL-penalty term enforces a soft local "trust region" around π_{θ_o}, preventing the shift in state visitation distribution from becoming large during optimization. This enables us to take multiple meta-policy gradient steps without re-sampling. Altogether, ProMP optimizes
J^{ProMP}_T(θ) = J^{CLIP}_T(θ') − η \bar{D}_{KL}(π_{θ_o}, π_θ)   s.t.   θ' = θ + α ∇_θ J^{LR}_T(θ),   T ∼ ρ(T)   (13)
ProMP consolidates the insights developed throughout the course of this paper, while at the same time making maximal use of recently developed policy gradients algorithms. First, its meta-learning formulation exploits the full structural knowledge of gradient-based meta-learning. Second, it incorporates a low variance estimate of the RL-objective hessian. Third, ProMP controls the statistical distance of both pre- and post-adaptation policies, promoting efficient and stable meta-learning. All in all, ProMP consistently outperforms previous gradient-based meta-RL algorithms in sample complexity, wall clock time, and asymptotic performance (see Section 7.1).
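Putting the pieces together, a hedged sketch of the per-task ProMP surrogate of Equation 13 is shown below; the hyperparameter defaults and the precomputed ratio, advantage, and KL inputs are assumptions, and the adaptation step through J^{LR} (which produces θ') is left outside the snippet.

```python
import torch

def promp_surrogate(ratio_post, adv_post, kl_pre, clip_eps=0.3, eta=0.01):
    """Clipped post-update objective minus the pre-update KL penalty, Eq. (13).
    ratio_post: likelihood ratios of the adapted policy on post-update data."""
    clipped = torch.clamp(ratio_post, 1.0 - clip_eps, 1.0 + clip_eps)
    j_clip = torch.min(ratio_post * adv_post, clipped * adv_post).mean()
    return j_clip - eta * kl_pre.mean()              # maximize this surrogate
```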
7 EXPERIMENTS
In order to empirically validate the theoretical arguments outlined above, this section provides a detailed experimental analysis that aims to answer the following questions: (i) How does ProMP perform against previous Meta-RL algorithms? (ii) How do the lower variance but biased LVC gradient estimates compare to the high variance, unbiased DiCE estimates? (iii) Do the different formulations result in different pre-update exploration properties? (iv) How do formulation I and formulation II differ in their meta-gradient estimates and convergence properties?
To answer the posed questions, we evaluate our approach on six continuous control Meta-RL benchmark environments based on OpenAI Gym and the Mujoco simulator (Brockman et al., 2016; Todorov et al., 2012). A description of the experimental setup is found in Appendix D. In all experiments, the reported curves are averaged over at least three random seeds. Returns are estimated
based on sampled trajectories from the adapted post-update policies and averaged over sampled tasks. The source code and the experiment data are available on our supplementary website.2
7.1 META-GRADIENT BASED COMPARISON
We compare our method, ProMP, in sample complexity and asymptotic performance to the gradient-based meta-learning approaches MAML-TRPO (Finn et al., 2017) and E-MAML-TRPO (see Fig. 2). Note that MAML corresponds to the original implementation of RL-MAML by Finn et al. (2017), where no credit assignment to the pre-adaptation policy happens (see Appendix B.3 for details). Moreover, we provide a second study which focuses on the underlying meta-gradient estimator. Specifically, we compare the LVC, DiCE, MAML and E-MAML estimators while optimizing the meta-learning objective with vanilla policy gradient (VPG) ascent. This can be viewed as an ablated version of the algorithms which tries to eliminate the influence of the outer optimizers on the learning performance (see Fig. 3).
These algorithms are benchmarked on six different locomotion tasks that require adaptation: the half-cheetah and walker must switch between running forward and backward, the high-dimensional agents ant and humanoid must learn to adapt to run in different directions in the 2D-plane, and the hopper and walker have to adapt to different configurations of their dynamics.
2https://sites.google.com/view/pro-mp
The results in Figure 2 highlight the strength of ProMP in terms of sample efficiency and asymptotic performance. In the meta-gradient estimator study in Fig. 3, we demonstrate the positive effect of the LVC objective, as it consistently outperforms the other estimators. In contrast, DiCE learns only slowly when compared to the other approaches. As we have motivated mathematically and substantiate empirically in the following experiment, the poor performance of DiCE may be ascribed to the high variance of its meta-gradient estimates. The fact that the results of MAML and E-MAML are comparable underpins the ineffectiveness of the naive pre-update credit assignment (i.e., formulation II), as discussed in Section 4.
Results for four additional environments are displayed in Appendix D along with hyperparameter settings, environment specifications and a wall-clock time comparison of the algorithms.
7.2 GRADIENT ESTIMATOR VARIANCE AND ITS EFFECT ON META-LEARNING
In Section 5 we discussed how the DiCE formulation yields unbiased but high variance estimates of the RL-objective hessian and served as motivation for the low variance curvature (LVC) estimator. Here we investigate the meta-gradient variance of both estimators as well as its implication on the learning performance. Specifically, we report the relative standard deviation of the meta-policy gradients as well as the average return throughout the learning process in three of the meta-environments.
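For concreteness, the following is a minimal sketch of how such a relative standard deviation can be computed from a set of sampled meta-gradient estimates. This is our illustration only; the exact metric definition used for the reported numbers may differ.

```python
import torch

def relative_gradient_std(grad_samples):
    """Relative standard deviation of gradient estimates.

    grad_samples: tensor of shape (num_estimates, num_params), each row an
    independent Monte Carlo estimate of the same meta-policy gradient.
    Returns the per-parameter std normalized by the mean magnitude,
    averaged over parameters.
    """
    std = grad_samples.std(dim=0)
    mean_magnitude = grad_samples.mean(dim=0).abs()
    return (std / (mean_magnitude + 1e-8)).mean()
```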
The results, depicted in Figure 4, highlight the advantage of the low variance curvature estimate. The trajectory level dependencies inherent in the DiCE estimator lead to a meta-gradient standard deviation that is on average 60% higher when compared to LVC. As the learning curves indicate, the noisy gradients may be a driving factor for the poor performance of DiCE, impeding sample efficient meta-learning. Meta-policy search based on the LVC estimator leads to substantially better sample-efficiency and asymptotic performance.
In case of HalfCheetahFwdBack, we observe some unstable learning behavior of LVC-VPG which is most likely caused by the bias of LVC in combination with the naive VPG optimizer. However, the mechanisms in ProMP that ensure proximity w.r.t. the policy's KL-divergence seem to counteract these instabilities during training, giving us a stable and efficient meta-learning algorithm.
7.3 COMPARISON OF INITIAL SAMPLING DISTRIBUTIONS
Here we evaluate the effect of the different objectives on the learned pre-update sampling distribution. We compare the low variance curvature (LVC) estimator with TRPO (LVC-TRPO) against MAML (Finn et al., 2017) and E-MAML-TRPO (Stadie et al., 2018) in a 2D environment on which the exploration behavior can be visualized. Each task of this environment corresponds to reaching a different corner location; however, the 2D agent only experiences reward when it is sufficiently close to the corner (translucent regions of Figure 5). Thus, to successfully identify the task, the agent must explore the different regions. We perform three inner adaptation steps on each task, allowing the agent to fully change its behavior from exploration to exploitation.
The different exploration-exploitation strategies are displayed in Figure 5. Since the MAML implementation does not assign credit to the pre-update sampling trajectory, it is unable to learn a sound exploration strategy for task identification and thus fails to accomplish the task. On the other hand, E-MAML, which corresponds to formulation II, learns to explore in long but random paths: because it can only assign credit to batches of pre-update trajectories, there is no notion of which actions in particular facilitate good task adaptation. As a consequence the adapted policy slightly misses the task-specific target. The LVC estimator, instead, learns a consistent pattern of exploration, visiting each of the four regions, which it harnesses to fully solve the task.
7.4 GRADIENT UPDATE DIRECTIONS OF THE TWO META-RL FORMULATIONS
To shed more light on the differences between the gradients of formulation I and formulation II, we evaluate the meta-gradient updates and the corresponding convergence to the optimum of both formulations in a simple 1D environment. In this environment, the agent starts at a random position on the real line and has to reach a goal located at position 1 or −1. In order to visualize the convergence, we parameterize the policy with only two parameters θ0 and θ1. We employ formulation I by optimizing the DiCE objective with VPG, and formulation II by optimizing its (E-MAML) objective with VPG.
Figure 6 depicts meta-gradient updates of the parameters θi for both formulations. Formulation I (red) exploits the internal structure of the adaptation update yielding faster and steadier convergence to the optimum. Due to its inferior credit assignment, formulation II (green) produces noisier gradient estimates leading to worse convergence properties.
8 CONCLUSION
In this paper we propose a novel Meta-RL algorithm, proximal meta-policy search (ProMP), which fully optimizes for the pre-update sampling distribution leading to effective task identification. Our method is the result of a theoretical analysis of gradient-based Meta-RL formulations, based on which we develop the low variance curvature (LVC) surrogate objective that produces low variance meta-policy gradient estimates. Experimental results demonstrate that our approach surpasses previous meta-reinforcement learning approaches in a diverse set of continuous control tasks. Finally, we underpin our theoretical contributions with illustrative examples which further justify the soundness and effectiveness of our method.
ACKNOWLEDGMENTS
Ignasi Clavera was supported by the La Caixa Fellowship. The research leading to these results has received funding from the German Research Foundation (DFG: Deutsche Forschungsgemeinschaft) under Priority Program on Autonomous Learning (SPP 1527) and was supported by Berkeley Deep Drive, Amazon Web Services, and Huawei. Also we thank Abhishek Gupta, Chelsea Finn, and Aviv Tamar for their valuable feedback.
A TWO META-POLICY GRADIENT FORMULATIONS
In this section we discuss two different gradient-based meta-learning formulations, derive their gradients and analyze the differences between them.
A.1 META-POLICY GRADIENT FORMULATION I
The first meta-learning formulation, known as MAML (Finn et al., 2017), views the inner update rule U(θ, T) as a mapping from the pre-update parameter θ and the task T to an adapted policy parameter θ′. The update function can be viewed as a stand-alone procedure that encapsulates sampling from the task-specific trajectory distribution PT(τ|πθ) and updating the policy parameters. Building on this concept, the meta-objective can be written as
JI(θ) = ET∼ρ(T) [ Eτ′∼PT(τ′|θ′) [R(τ′)] ]  with θ′ := U(θ, T)   (14)

The task-specific gradients follow as

∇θJIT(θ) = ∇θ Eτ′∼PT(τ′|θ′) [R(τ′)]   (15)
= Eτ′∼PT(τ′|θ′) [ ∇θ logPT(τ′|θ′) R(τ′) ]   (16)
= Eτ′∼PT(τ′|θ′) [ ∇θ′ logPT(τ′|θ′) R(τ′) ∇θθ′ ]   (17)
In order to derive the gradients of the inner update ∇θθ′ = ∇θU(θ, T) it is necessary to know the structure of U. The main part of this paper assumes the inner update rule to be a vanilla policy gradient ascent step:

∇θU(θ, T) = ∇θ ( θ + α ∇θ Eτ∼PT(τ|θ) [R(τ)] )   (18)
= I + α ∇²θ Eτ∼PT(τ|θ) [R(τ)]   (19)

Thereby the second term in (19) is the local curvature (hessian) of the inner adaptation objective function. The correct hessian of the inner objective can be derived as follows:
∇²θ Eτ∼PT(τ|θ) [R(τ)] = ∇θ Eτ∼PT(τ|θ) [ ∇θ log πθ(τ) R(τ) ]   (20)
= ∇θ ∫ PT(τ|θ) ∇θ log πθ(τ) R(τ) dτ   (21)
= ∫ PT(τ|θ) ∇θ log πθ(τ) ∇θ log πθ(τ)ᵀ R(τ) + PT(τ|θ) ∇²θ log πθ(τ) R(τ) dτ   (22–23)
= Eτ∼PT(τ|θ) [ R(τ) ( ∇²θ log πθ(τ) + ∇θ log πθ(τ) ∇θ log πθ(τ)ᵀ ) ]   (24)
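In practice, ∇θθ′ in (19) is obtained by differentiating through the inner update with an automatic differentiation toolbox. The following is a minimal PyTorch sketch of this double differentiation. It is our illustration only; inner_surrogate and outer_surrogate are hypothetical callables standing in for task-specific policy-gradient surrogates (objectives to be maximized).

```python
import torch

def formulation_1_meta_objective(theta, inner_surrogate, outer_surrogate, alpha=0.1):
    """Sketch of formulation I: evaluate the outer objective at the adapted
    parameters theta' = theta + alpha * grad, with gradients flowing through
    the inner update (the I + alpha * hessian term of Eq. 19).

    theta must be a tensor created with requires_grad=True.
    """
    inner_obj = inner_surrogate(theta)
    # create_graph=True keeps the computation graph of the inner gradient,
    # so differentiating the result w.r.t. theta picks up the curvature term
    grad = torch.autograd.grad(inner_obj, theta, create_graph=True)[0]
    theta_prime = theta + alpha * grad  # one inner policy gradient ascent step
    return outer_surrogate(theta_prime)  # still differentiable w.r.t. theta
```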
A.2 META-POLICY GRADIENT FORMULATION II
The second meta-reinforcement learning formulation views the inner update θ′ = U(θ, τ1:N) as a deterministic function of the pre-update policy parameters θ and N trajectories τ1:N ∼ PT(τ1:N|θ) sampled from the pre-update trajectory distribution. This formulation was introduced in Al-Shedivat et al. (2018) and further discussed with respect to its exploration properties in Stadie et al. (2018).
Viewing U as a function that adapts the policy parameters θ to a specific task T given policy rollouts in this task, the corresponding meta-learning objective can be written as
JII(θ) = ET∼ρ(T) [ Eτ1:N∼PT(τ1:N|θ) [ Eτ′∼PT(τ′|θ′) [ R(τ′) ] ] ]  with θ′ := U(θ, τ1:N)   (25)
Since the first part of the gradient derivation is agnostic to the inner update rule U(θ, τ1:N), we only assume that the inner update function U is differentiable w.r.t. θ. First we rewrite the meta-objective J(θ) as an expectation of task-specific objectives JIIT(θ) under the task distribution. This allows us to express the meta-policy gradients as an expectation of task-specific gradients:
∇θJII(θ) = ET ∼ρ(T ) [ ∇θJIIT (θ) ] (26)
The task-specific gradients can be calculated as follows:

∇θJIIT(θ) = ∇θ Eτ1:N∼PT(τ1:N|θ) [ Eτ′∼PT(τ′|θ′) [ R(τ′) ] ]
= ∇θ ∫∫ R(τ′) PT(τ′|θ′) PT(τ1:N|θ) dτ′ dτ
= ∫∫ R(τ′) PT(τ′|θ′) ∇θ logPT(τ1:N|θ) PT(τ1:N|θ) + R(τ′) ∇θ logPT(τ′|θ′) PT(τ′|θ′) PT(τ1:N|θ) dτ′ dτ
= Eτ1:N∼PT(τ1:N|θ), τ′∼PT(τ′|θ′) [ R(τ′) ( ∇θ logPT(τ′|θ′) + Σ_{n=1}^{N} ∇θ logPT(τ(n)|θ) ) ]
= Eτ1:N∼PT(τ1:N|θ), τ′∼PT(τ′|θ′) [ R(τ′) ( ∇θ′ logPT(τ′|θ′) ∇θθ′ + Σ_{n=1}^{N} ∇θ logPT(τ(n)|θ) ) ]
As in A.1 the structure of U(θ, τ 1:N ) must be known in order to derive the gradient∇θθ′. Since we assume the inner update to be vanilla policy gradient, the respective gradient follows as
U(θ, τ1:N) = θ + α (1/N) Σ_{n=1}^{N} ∇θ log πθ(τ(n)) R(τ(n))  with  ∇θ log πθ(τ) = Σ_{t=0}^{H−1} ∇θ log πθ(at|st)

The respective gradient of U(θ, τ1:N) follows as

∇θU(θ, τ1:N) = ∇θ ( θ + α (1/N) Σ_{n=1}^{N} ∇θ log πθ(τ(n)) R(τ(n)) )   (27)
= I + α (1/N) Σ_{n=1}^{N} ∇²θ log πθ(τ(n)) R(τ(n))   (28)
A.3 COMPARING THE GRADIENTS OF THE TWO FORMULATIONS
In the following we analyze the differences between the gradients derived for the two formulations. To do so, we begin with ∇θJIT (θ) by inserting the gradient of the inner adaptation step (19) into (17):
∇θJIT (θ) = Eτ ′∼PT (τ ′|θ′) [ ∇θ′ logPT (τ ′|θ′)R(τ ′) ( I + α∇2θ Eτ∼PT (τ |θ) [R(τ )] )] (29)
We can substitute the hessian of the inner objective by its derived expression from (24) and then rearrange the terms. Also note that ∇θ logPT(τ|θ) = ∇θ log πθ(τ) = Σ_{t=0}^{H−1} ∇θ log πθ(at|st), where H is the MDP horizon.
∇θJIT(θ) = Eτ′∼PT(τ′|θ′) [ ∇θ′ logPT(τ′|θ′) R(τ′) ( I + α Eτ∼PT(τ|θ) [ R(τ) ( ∇²θ log πθ(τ) + ∇θ log πθ(τ) ∇θ log πθ(τ)ᵀ ) ] ) ]   (30–31)
= Eτ∼PT(τ|θ), τ′∼PT(τ′|θ′) [ ∇θJpost(τ, τ′) + ∇θJIpre(τ, τ′) ]   (32–33)

where

∇θJpost(τ, τ′) := ∇θ′ log πθ′(τ′) R(τ′) ( I + α R(τ) ∇²θ log πθ(τ) )   (32)
∇θJIpre(τ, τ′) := α ∇θ′ log πθ′(τ′) R(τ′) R(τ) ∇θ log πθ(τ) ∇θ log πθ(τ)ᵀ   (33)
Next, we rearrange the gradient of JII into a similar form as∇θJIT (θ). For that, we start by inserting (28) for∇θθ′ and replacing the expectation over pre-update trajectories τ 1:N by the expectation over a single trajectory τ .
∇θJIIT(θ) = Eτ∼PT(τ|θ), τ′∼PT(τ′|θ′) [ ∇θJpost(τ, τ′) + ∇θJIIpre(τ, τ′) ]   (34–35)

where

∇θJpost(τ, τ′) := R(τ′) ∇θ′ log πθ′(τ′) ( I + α R(τ) ∇²θ log πθ(τ) )   (34)
∇θJIIpre(τ, τ′) := R(τ′) ∇θ log πθ(τ)   (35)
While the first parts of the gradients match ((32) and (34)), the second parts ((33) and (35)) differ. Since the second gradient term can be viewed as responsible for shifting the pre-update sampling distribution PT(τ|θ) towards higher post-update returns, we refer to it as ∇θJpre(τ, τ′). To further analyze the difference between ∇θJIpre and ∇θJIIpre we slightly rearrange (33) and put both gradient terms next to each other:
∇θJIpre(τ, τ′) = α ∇θ log πθ(τ) ( (∇θJinner)ᵀ ∇θ′Jouter ),  with  ∇θJinner := ∇θ log πθ(τ) R(τ)  and  ∇θ′Jouter := ∇θ′ log πθ′(τ′) R(τ′)   (36)
∇θJIIpre(τ, τ′) = α ∇θ log πθ(τ) R(τ′)   (37)
In the following we interpret and compare the derived gradient terms, aiming to provide intuition for the differences between the formulations:
The first gradient term Jpost, which matches in both formulations, corresponds to a policy gradient step on the post-update policy πθ′. Since θ′ itself is a function of θ, the term ( I + αR(τ)∇²θ log πθ(τ) ) can be seen as a linear transformation of the policy gradient update R(τ′)∇θ′ log πθ′(τ′) from the post-update parameter θ′ into θ. Although Jpost takes into account the functional relationship between θ′ and θ, it does not take into account the pre-update sampling distribution PT(τ|θ). This is where ∇θJpre comes into play: ∇θJIIpre can be viewed as a policy gradient update of the pre-update policy πθ w.r.t. the post-update return R(τ′). Hence this gradient term aims to shift the pre-update sampling distribution so that higher post-update returns are achieved. However, ∇θJIIpre does not take into account the causal dependence of the post-update policy on the pre-update policy. Thus a change in θ due to ∇θJIIpre may counteract the change due to ∇θJpost. In contrast, ∇θJIpre takes the dependence of the post-update policy on the pre-update sampling distribution into account. Instead of simply weighting the gradients of the pre-update policy ∇θ log πθ(τ) with R(τ′) as in ∇θJIIpre, ∇θJIpre weights the gradients with the inner product of the pre-update and post-update policy gradients. This inner product can be written as
(∇θJinner)ᵀ ∇θ′Jouter = ||∇θJinner||₂ · ||∇θ′Jouter||₂ · cos(δ)   (38)
wherein δ denotes the angle between the inner and outer, i.e. pre-update and post-update, policy gradients. Hence, ∇θJIpre steers the pre-update policy not only towards larger post-update returns but also towards larger adaptation steps α∇θJinner and better alignment of pre- and post-update policy gradients. This directly optimizes for maximal improvement / adaptation for the respective task. See Li et al. (2017); Nichol et al. (2018) for a comparable analysis in the case of domain generalization and supervised meta-learning. Also note that (38) allows formulation I to perform credit assignment on the trajectory level whereas formulation II can only assign credit to entire batches of N pre-update trajectories τ1:N.
As a result, we expect the first meta-policy gradient formulation to learn faster and more stably, since the respective gradients take the dependence of the post-update returns on the pre-update sampling distribution into account, while this causal link is neglected in the second formulation.
B ESTIMATING THE META-POLICY GRADIENTS
When employing formulation I for gradient-based meta-learning, we aim to maximize the objective

J(θ) = ET∼ρ(T) [ Eτ′∼PT(τ′|θ′) [R(τ′)] ]  with θ′ := θ + α ∇θ Eτ∼PT(τ|θ) [R(τ)]   (39)

by performing a form of gradient ascent on J(θ). Note that we, from now on, assume J := JI and thus omit the superscript indicating the respective meta-learning formulation. As shown in A.1, the gradient can be derived as ∇θJ(θ) = ET∼ρ(T)[∇θJT(θ)] with
∇θJT (θ) = Eτ ′∼PT (τ ′|θ′) [ ∇θ′ logPT (τ ′|θ′)R(τ ′) ( I + α∇2θ Eτ∼PT (τ |θ) [R(τ )] )] (40)
where ∇²θJinner(θ) := ∇²θ Eτ∼PT(τ|θ) [R(τ)] denotes the hessian of the inner adaptation objective w.r.t. θ. This section concerns the question of how to properly estimate this hessian.
B.1 ESTIMATING GRADIENTS OF THE RL REWARD OBJECTIVE
Since the expectation over the trajectory distribution PT(τ|θ) is in general intractable, the score function trick is typically used to produce a Monte Carlo estimate of the policy gradients. Although the gradient estimate can be defined directly, when using an automatic differentiation toolbox it is usually more convenient to use an objective function whose gradients correspond to the policy gradient estimate. Due to the Policy Gradient Theorem (PGT) (Sutton et al., 2000) such a “surrogate” objective can be written as:
ĴPGT = (1/K) Σ_{τk} Σ_{t=0}^{H−1} log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) ),  τk ∼ PT(τ)   (41)
= (1/K) Σ_{τk} Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} log πθ(at′|st′) ) r(st, at),  τk ∼ PT(τ)   (42)
While (41) and (42) are equivalent (Peters & Schaal, 2006), the more popular formulation (41) can be seen as forward-looking credit assignment while (42) can be interpreted as backward-looking credit assignment (Foerster et al., 2018). A generalized procedure for constructing “surrogate” objectives for arbitrary stochastic computation graphs can be found in Schulman et al. (2015a).
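For illustration, the following is a minimal PyTorch sketch of the surrogate objective (41). It is our own sketch, not the authors' reference implementation; batched per-step log-probabilities and rewards are assumed as inputs.

```python
import torch

def pgt_surrogate(log_probs, rewards):
    """Surrogate objective (41): (1/K) sum_k sum_t log pi(a_t|s_t) * reward-to-go_t.

    log_probs: (K, H) tensor of log pi_theta(a_t|s_t), built with requires_grad.
    rewards:   (K, H) tensor of rewards r(s_t, a_t).
    Differentiating the returned scalar yields the Monte Carlo policy gradient.
    """
    # reward-to-go: sum_{t'=t}^{H-1} r(s_t', a_t'), via a reversed cumulative sum
    reward_to_go = torch.flip(torch.cumsum(torch.flip(rewards, dims=[1]), dim=1), dims=[1])
    return (log_probs * reward_to_go).sum(dim=1).mean()
```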
B.2 A DECOMPOSITION OF THE HESSIAN
Estimating the hessian of the reinforcement learning objective has been discussed in Furmston et al. (2016) and Baxter & Bartlett (2001) with a focus on second order policy gradient methods. In the infinite horizon MDP case, Baxter & Bartlett (2001) derive a decomposition of the hessian. In the following, we extend their finding to the finite horizon case.
Proposition. The hessian of the RL objective can be decomposed into four matrix terms:

∇²θJinner(θ) = H1 + H2 + H12 + H12ᵀ   (43)

where

H1 = Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ∇θ log πθ(at, st) ∇θ log πθ(at, st)ᵀ ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (44)

H2 = Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ∇²θ log πθ(at, st) ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (45)

H12 = Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ∇θ log πθ(at, st) ∇θQπθt(st, at)ᵀ ]   (46)

Here Qπθt(st, at) = Eτt+1:H−1∼PT(·|θ) [ Σ_{t′=t}^{H−1} r(st′, at′) | st, at ] denotes the expected state-action value function under policy πθ at time t.
Proof. As derived in (24), the hessian of Jinner(θ) follows as:

∇²θJinner = Eτ∼PT(τ|θ) [ R(τ) ( ∇²θ log πθ(τ) + ∇θ log πθ(τ) ∇θ log πθ(τ)ᵀ ) ]   (47)
= Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇²θ log πθ(at′, st′) ) r(st, at) ]   (48)
+ Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′, st′) ) ( Σ_{t′=0}^{t} ∇θ log πθ(at′, st′) )ᵀ r(st, at) ]   (49)
= Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ∇²θ log πθ(at, st) ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (50)
+ Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} Σ_{h=0}^{t} ∇θ log πθ(at′, st′) ∇θ log πθ(ah, sh)ᵀ ) r(st, at) ]   (51)

The term in (50) is equal to H2. We continue by showing that the remaining term in (51) is equivalent to H1 + H12 + H12ᵀ. For that, we split the inner double sum in (51) into three components:

Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} Σ_{h=0}^{t} ∇θ log πθ(at′, st′) ∇θ log πθ(ah, sh)ᵀ ) r(st, at) ]   (52)
= Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′, st′) ∇θ log πθ(at′, st′)ᵀ ) r(st, at) ]   (53)
+ Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} Σ_{h=0}^{t′−1} ∇θ log πθ(at′, st′) ∇θ log πθ(ah, sh)ᵀ ) r(st, at) ]   (54)
+ Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} Σ_{h=t′+1}^{t} ∇θ log πθ(at′, st′) ∇θ log πθ(ah, sh)ᵀ ) r(st, at) ]   (55)
By changing the backward-looking summation over outer products into a forward-looking summation of rewards, (53) can be shown to be equal to H1:

Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′, st′) ∇θ log πθ(at′, st′)ᵀ ) r(st, at) ]   (56)
= Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ∇θ log πθ(at, st) ∇θ log πθ(at, st)ᵀ ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (57)
= H1   (58)

By simply exchanging the summation indices t′ and h in (55) it is straightforward to show that (55) is the transpose of (54). Hence it is sufficient to show that (54) is equivalent to H12. However, instead of following the direction of the previous proof we will now start with the definition of H12 and derive the expression in (54).
H12 = Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ∇θ log πθ(at, st) ∇θQπθt(st, at)ᵀ ]   (59)

The gradient of Qπθt can be expressed recursively:

∇θQπθt(st, at) = ∇θ Est+1,at+1 [ Qπθt+1(st+1, at+1) ]   (61)
= Est+1,at+1 [ ∇θ log πθ(at+1, st+1) Qπθt+1(st+1, at+1) + ∇θQπθt+1(st+1, at+1) ]   (62)

By induction, it follows that

∇θQπθt(st, at) = Eτt+1:H−1∼PT(·|θ) [ Σ_{t′=t+1}^{H−1} ∇θ log πθ(at′, st′) ( Σ_{h=t′}^{H−1} r(sh, ah) ) ]   (63)

When inserting (63) into (59) and swapping the summation, we are able to show that H12 is equivalent to (54):

H12 = Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} Σ_{t′=t+1}^{H−1} ∇θ log πθ(at, st) ∇θ log πθ(at′, st′)ᵀ ( Σ_{h=t′}^{H−1} r(sh, ah) ) ]   (64)
= Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} Σ_{h=0}^{t′−1} ∇θ log πθ(at′, st′) ∇θ log πθ(ah, sh)ᵀ ) r(st, at) ]   (65)

This concludes the proof that the hessian of the expected sum of rewards under policy πθ and an MDP with finite time horizon H can be decomposed into H1 + H2 + H12 + H12ᵀ.
B.3 ESTIMATING THE HESSIAN OF THE RL REWARD OBJECTIVE
As pointed out by Al-Shedivat et al. (2018), Stadie et al. (2018) and Foerster et al. (2018), simply differentiating through the gradient of the surrogate objective JPGT, as done in the original MAML version (Finn et al., 2017), leads to biased hessian estimates. Specifically, when compared with the unbiased estimate derived in (24) and decomposed in Appendix B.2, both H1 and H12 + H12ᵀ are missing. Thus, ∇θJpre does not appear in the gradients of the meta-objective (i.e. ∇θJ = ∇θJpost). Only performing gradient descent with ∇θJpost entirely neglects influences of the pre-update sampling distribution. This issue was overlooked in the RL-MAML implementation of Finn et al. (2017). As discussed in Stadie et al. (2018), this leads to poor performance in meta-learning problems that require exploration during the pre-update sampling.
B.3.1 THE DICE MONTE-CARLO ESTIMATOR
Addressing the issue of incorrect higher-order derivatives of Monte Carlo estimators, Foerster et al. (2018) propose DiCE, which mainly builds upon a newly introduced MagicBox operator. This operator allows one to formulate Monte Carlo estimators with correct higher-order derivatives. A DiCE formulation of a policy gradient estimator reads as:
JDiCE = Σ_{t=0}^{H−1} □θ({at′≤t}) r(st, at)   (66)
= Σ_{t=0}^{H−1} exp ( Σ_{t′=0}^{t} [ log πθ(at′|st′) − ⊥(log πθ(at′|st′)) ] ) r(st, at)   (67)

In that, □θ(·) denotes the MagicBox operator and ⊥ a “stop gradient” operator (i.e. ⊥(fθ(x)) → fθ(x) but ∇θ⊥(fθ(x)) → 0). Note that → denotes “evaluates to” and does not necessarily imply equality w.r.t. gradients. Hence, JDiCE(θ) evaluates to the sum of rewards at 0th order but produces the unbiased gradients ∇ⁿθJDiCE(θ) when differentiated n times (see Foerster et al. (2018) for the proof). To shed more light on the maverick DiCE formulation, we rewrite (67) as follows:

JDiCE = Σ_{t=0}^{H−1} ( Π_{t′=0}^{t} πθ(at′|st′) / ⊥(πθ(at′|st′)) ) r(st, at)   (68)
Interpreting this novel formulation, the MagicBox operator □θ({at′≤t}) can be understood as a “dry” importance sampling weight. At 0th order it evaluates to 1 and leaves the objective function unaffected, but when differentiated once it yields an estimator for the marginal rate of return due to a change in the policy-implied trajectory distribution.
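The following is a minimal PyTorch sketch of the MagicBox operator and the DiCE objective (67) for a single trajectory; it is our illustration, where ⊥ corresponds to .detach().

```python
import torch

def magic_box(x):
    """MagicBox: evaluates to 1 numerically, but differentiating it yields
    the correct higher-order score-function terms (exp(x - stop_grad(x)))."""
    return torch.exp(x - x.detach())

def j_dice(log_probs, rewards):
    """DiCE objective (67) for one trajectory.

    log_probs: (H,) tensor of log pi_theta(a_t|s_t) with requires_grad.
    rewards:   (H,) tensor of rewards r(s_t, a_t).
    At 0th order this evaluates to the sum of rewards.
    """
    # cumulative log-probs up to step t realize the product of "dry"
    # importance weights in Eq. (68)
    cum_log_probs = torch.cumsum(log_probs, dim=0)
    return (magic_box(cum_log_probs) * rewards).sum()
```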
In the following we show that, in expectation, 1) the gradients of (66) match standard policy gradients and 2) its hessian estimate is equal to the hessian of the inner RL objective derived in B.2.
∇θJDiCE = Σ_{t=0}^{H−1} ∇θ ( Π_{t′=0}^{t} πθ(at′|st′) / ⊥(πθ(at′|st′)) ) r(st, at)   (69)
= Σ_{t=0}^{H−1} ( Π_{t′=0}^{t} πθ(at′|st′) / ⊥(πθ(at′|st′)) ) ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ) r(st, at)   (70)
→ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ) r(st, at)   (71)
Here, (71) corresponds to the backward looking credit assignment formulation of policy gradients ∇θJPGT as discussed in B.1. Once again we take the derivative in order to obtain the Hessian of JDICE:
∇²θJDiCE = Σ_{t=0}^{H−1} [ ∇θ ( Π_{t′=0}^{t} πθ(at′|st′) / ⊥(πθ(at′|st′)) ) ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ) r(st, at)   (72)
+ ( Π_{t′=0}^{t} πθ(at′|st′) / ⊥(πθ(at′|st′)) ) ∇θ ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ) r(st, at) ]   (73)
→ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ) ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) )ᵀ r(st, at)   (74)
+ ( Σ_{t′=0}^{t} ∇²θ log πθ(at′|st′) ) r(st, at)   (75)
In expectation, the DiCE Monte Carlo estimate of the hessian, Eτ∼PT(τ|θ)[∇²θJDiCE], is equivalent to the hessian of the inner objective. To show this, we use the expression for ∇²θJinner from (48)–(49):
Eτ∼PT(τ|θ) [ ∇²θJDiCE ]   (76)
= Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ) ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) )ᵀ r(st, at)   (77)
+ ( Σ_{t′=0}^{t} ∇²θ log πθ(at′|st′) ) r(st, at) ]   (78)
= H1 + H2 + H12 + H12ᵀ   (79)
= ∇²θJinner   (80)
B.4 BIAS AND VARIANCE OF THE CURVATURE ESTIMATE
As shown in the previous section, ∇²θJDiCE provides an unbiased estimate of the hessian of the inner objective Jinner = Eτ∼PT(τ|θ) [R(τ)]. However, recall that the DiCE objective involves a product of importance weights along the trajectory:
JDiCE = Σ_{t=0}^{H−1} ( Π_{t′=0}^{t} πθ(at′|st′) / ⊥(πθ(at′|st′)) ) r(st, at)   (81)
Taking the 2nd derivative of this product leads to the outer product of sums in (74), which is of high variance w.r.t. τ. Specifically, this outer product of sums can be decomposed into the three terms H1 + H12 + H12ᵀ (see Appendix B.2). As noted by Furmston et al. (2016), H12 + H12ᵀ is particularly difficult to estimate. In Section 7.2 we empirically show that the high variance curvature estimates obtained with the DiCE objective require large batch sizes and impede sample efficient learning.
In the following we develop a low variance curvature (LVC) estimator JLVC which matches JDiCE at the gradient level and yields lower-variance estimates of the hessian by neglecting H12 + H12ᵀ. Before formally introducing JLVC, we motivate such an estimator, starting with the policy gradient estimate that was originally derived in Sutton et al. (2000), followed by marginalizing the trajectory level distribution PT(τ|θ) over states st and actions at. Note that we omit reward baselines for notational simplicity.
∇θJinner = Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ∇θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (82)
= Σ_{t=0}^{H−1} Est∼pπθt(st), at∼πθ(at|st) [ ∇θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (83)

In that, pπθt(st) denotes the state visitation frequency at time step t, i.e. the probability density of being in st after t steps under the policy πθ. In the general case pπθt(st) is intractable but depends on the policy parameter θ. We make the simplifying assumption that pπθt(st) is fixed in a local region of θ. Since we make this assumption at the gradient level, this corresponds to a 1st order Taylor expansion of pπθt(st) in θ. Note that this assumption is also used in the Monotonic Policy Improvement Theory (Kakade & Langford, 2002; Schulman et al., 2015a). Based on this condition, the hessian follows as the derivative of (83), whereby a “stop gradient” expression around the state visitation frequency pπθt(st) resembles the 1st order Taylor approximation:

Eτ [ ∇²θJLVC ] = ∇θ Σ_{t=0}^{H−1} Est∼⊥(pπθt(st)), at∼πθ(at|st) [ ∇θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (84)
= Σ_{t=0}^{H−1} Est∼⊥(pπθt(st)), at∼πθ(at|st) [ ∇θ log πθ(at|st) ∇θ log πθ(at|st)ᵀ ( Σ_{t′=t}^{H−1} r(st′, at′) )   (85)
+ ∇²θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) ) ]   (86)
Since the expectation in (84) is intractable, it must be evaluated by a Monte Carlo estimate. However, simply replacing the expectation with an average over sampled trajectories induces a wrong hessian that does not correspond to (85)–(86), since the outer product of log-gradients would be missing when differentiated. To ensure that automatic differentiation still yields the correct hessian, we add a “dry” importance weight comparable to DiCE:

∇θJLVC = Σ_{t=0}^{H−1} [ πθ(at|st) / ⊥(πθ(at|st)) ] ∇θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) ),  τ ∼ PT(τ|θ)   (87)
When integrated, this resembles the LVC “surrogate” objective JLVC:

JLVC = Σ_{t=0}^{H−1} [ πθ(at|st) / ⊥(πθ(at|st)) ] ( Σ_{t′=t}^{H−1} r(st′, at′) ),  τ ∼ PT(τ|θ)   (88)

The gradients of JLVC match ∇θJDiCE and resemble an unbiased policy gradient estimate:
∇θJLVC = Σ_{t=0}^{H−1} [ ∇θπθ(at|st) / ⊥(πθ(at|st)) ] ( Σ_{t′=t}^{H−1} r(st′, at′) )   (89)
= Σ_{t=0}^{H−1} [ πθ(at|st) / ⊥(πθ(at|st)) ] ∇θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) )   (90)
→ Σ_{t=0}^{H−1} ∇θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) )   (91)

The respective hessian can be obtained by differentiating (90):

∇²θJLVC = ∇θ Σ_{t=0}^{H−1} [ πθ(at|st) / ⊥(πθ(at|st)) ] ∇θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) )   (92)
= Σ_{t=0}^{H−1} [ πθ(at|st) / ⊥(πθ(at|st)) ] ∇θ log πθ(at|st) ∇θ log πθ(at|st)ᵀ ( Σ_{t′=t}^{H−1} r(st′, at′) )   (93)
+ [ πθ(at|st) / ⊥(πθ(at|st)) ] ∇²θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) )   (94)
→ Σ_{t=0}^{H−1} ∇θ log πθ(at|st) ∇θ log πθ(at|st)ᵀ ( Σ_{t′=t}^{H−1} r(st′, at′) )   (95)
+ ∇²θ log πθ(at|st) ( Σ_{t′=t}^{H−1} r(st′, at′) )   (96)
= Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ∇θ log πθ(at|st)ᵀ ) r(st, at)   (97)
+ ( Σ_{t′=0}^{t} ∇²θ log πθ(at′|st′) ) r(st, at)   (98)
In expectation, ∇²θJLVC is equivalent to H1 + H2:

Eτ∼PT(τ|θ) [ ∇²θJLVC ] = Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇θ log πθ(at′|st′) ∇θ log πθ(at|st)ᵀ ) r(st, at) ]   (99)
+ Eτ∼PT(τ|θ) [ Σ_{t=0}^{H−1} ( Σ_{t′=0}^{t} ∇²θ log πθ(at′|st′) ) r(st, at) ]   (100)
= H1 + H2   (101)

The hessian ∇²θJLVC no longer provides an unbiased estimate of ∇²θJinner since it neglects the matrix term H12 + H12ᵀ. This approximation is based on the assumption that the state visitation distribution is locally unaffected by marginal changes in θ and leads to a substantial reduction of variance in the hessian estimate. Furmston et al. (2016) show that under certain conditions (i.e. infinite horizon MDP, sufficiently rich policy parameterization) the term H12 + H12ᵀ vanishes around a local optimum θ∗. Given that the conditions hold, this implies that Eτ[∇²θJLVC] → Eτ[∇²θJDiCE] as θ → θ∗, i.e. the bias of the LVC estimator becomes negligible close to the local optimum. The experiments in Section 7.2 confirm this theoretical argument empirically and show that using the low variance curvature estimates obtained through JLVC improves the sample-efficiency of meta-learning by a significant margin.
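To make the contrast with DiCE concrete, the following is a minimal PyTorch sketch of the LVC surrogate (88), again our own illustration: each step carries its own “dry” importance weight instead of DiCE's cumulative product, which is precisely what drops H12 + H12ᵀ from the hessian estimate.

```python
import torch

def j_lvc(log_probs, rewards):
    """LVC surrogate (88) for one trajectory.

    log_probs: (H,) tensor of log pi_theta(a_t|s_t) with requires_grad.
    rewards:   (H,) tensor of rewards r(s_t, a_t).
    """
    # per-step weight pi/stop_grad(pi), NOT the cumulative product used by DiCE
    dry_weights = torch.exp(log_probs - log_probs.detach())
    # reward-to-go via a reversed cumulative sum
    reward_to_go = torch.flip(torch.cumsum(torch.flip(rewards, dims=[0]), dim=0), dims=[0])
    return (dry_weights * reward_to_go).sum()
```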
C PROXIMAL POLICY SEARCH METHODS
C.1 MONOTONIC POLICY IMPROVEMENT THEORY
This section provides a brief introduction to policy performance bounds and the theory of monotonic policy improvement in the setting of reinforcement learning. While Section 6 discusses the extension of this theory to meta-learning, the following explanations assume a standard RL setting where T is exogenously given. Hence, we will omit mentioning the dependence on T for notational brevity. Since the monotonic policy improvement framework relies on infinite-time horizon MDPs, we assume H → ∞ for the remainder of this chapter.
In addition to the expected reward J(π) under policy π, we will use the state value function V π , the state-action value function Qπ as well as the advantage function Aπ:
V π(s) = Ea0,s1,... [ Σ_{t=0}^{∞} γᵗ r(st, at) | s0 = s ]

Qπ(s, a) = Es1,a1,... [ Σ_{t=0}^{∞} γᵗ r(st, at) | s0 = s, a0 = a ] = r(s, a) + γ Es′∼p(s′|s,a) [ V π(s′) ]

Aπ(s, a) = Qπ(s, a) − V π(s)

with at ∼ π(at|st) and st+1 ∼ p(st+1|st, at). The expected return under a policy π̃ can be expressed as the sum of the expected return of another policy π and the expected discounted advantage of π̃ over π (see Schulman et al. (2015a) for proof):

J(π̃) = J(π) + Eτ∼P(τ,π̃) [ Σ_{t=0}^{∞} γᵗ Aπ(st, at) ]

Let dπ denote the discounted state visitation frequency:

dπ(s) = Σ_{t=0}^{∞} γᵗ p(st = s|π)

We can use dπ to express the expectation over trajectories τ ∼ pπ(τ) in terms of states and actions:

J(π̃) = J(π) + Es∼dπ̃(s), a∼π̃(a|s) [ Aπ(s, a) ]   (102)
Local policy search aims to find a policy update π → π̃ in the proximity of π so that J(π̃) is maximized. Since J(π) is not affected by the policy update π → π̃, it is sufficient to maximize the expected advantage under π̃. However, the complex dependence of dπ̃(s) on π̃ makes it hard to directly maximize the objective in (102). Using a local approximation of (102) where it is assumed that the state visitation frequencies dπ and dπ̃ are identical, the optimization can be phrased as
J̃π(π̃) = J(π) + Es∼dπ(s), a∼π̃(a|s) [ Aπ(s, a) ] = J(π) + Es∼dπ(s), a∼π(a|s) [ (π̃(a|s) / π(a|s)) Aπ(s, a) ]   (103)

In the following we refer to J̃π(π̃) as the surrogate objective. It can be shown that the surrogate objective J̃ matches J to first order when π = π̃ (see Kakade & Langford (2002)). If πθ is a parametric and differentiable function with parameter vector θ, this means that for any θo:

J̃πθo(πθo) = J(πθo)  and  ∇θJ̃πθo(πθ)|θo = ∇θJ(πθ)|θo   (104)
When π ≠ π̃, an approximation error of the surrogate objective J̃ w.r.t. the true objective J is introduced. Achiam et al. (2017) derive a lower bound for the true expected return of π̃:

J(π̃) ≥ J̃π(π̃) − C √( Es∼dπ [ DKL[π̃(·|s)||π(·|s)] ] ) = J̃π(π̃) − C √( D̄KL[π̃||π] )   (105)

with C = (√2 γ / (1 − γ)) max_s | Ea∼π̃(·|s) [ Aπ(s, a) ] |.
C.2 TRUST REGION POLICY OPTIMIZATION (TRPO)
Trust region policy optimization (TRPO) (Schulman et al., 2015a) attempts to approximate the bound in (105) by phrasing local policy search as a constrained optimization problem:
arg max_θ Es∼dπθo(s), a∼πθo(a|s) [ (πθ(a|s) / πθo(a|s)) Aπθo(s, a) ]  s.t.  D̄KL[πθo||πθ] ≤ δ   (106)

Thereby the KL-constraint δ induces a local trust region around the current policy πθo. A practical implementation of TRPO uses a quadratic approximation of the KL-constraint which leads to the following update rule:

θ ← θ + √( 2δ / (gᵀF⁻¹g) ) F⁻¹g   (107)

with g := ∇θ Es∼dπθo(s), a∼πθo(a|s) [ (πθ(a|s) / πθo(a|s)) Aπθo(s, a) ] being the gradient of the objective and F = ∇²θD̄KL[πθo||πθ] the Fisher information matrix of the current policy πθo. In order to avoid the cubic time complexity that arises when inverting F, the Conjugate Gradient (CG) algorithm is typically used to approximate F⁻¹g using only Fisher-vector products Fv.
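A minimal sketch of the CG solve follows; it is our illustration, where fvp is an assumed callable returning the Fisher-vector product Fv (typically implemented via double backpropagation through the KL-divergence).

```python
import torch

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Approximately solve F x = g for x = F^{-1} g without materializing F.

    fvp: callable mapping a flat parameter vector v to the product F v.
    g:   flat gradient vector (1-D tensor).
    """
    x = torch.zeros_like(g)
    r = g.clone()  # residual
    p = g.clone()  # search direction
    rs_old = torch.dot(r, r)
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rs_old / torch.dot(p, Fp)
        x = x + alpha * p
        r = r - alpha * Fp
        rs_new = torch.dot(r, r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```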
C.3 PROXIMAL POLICY OPTIMIZATION (PPO)
While TRPO is framed as constrained optimization, the theory discussed in Appendix C.1 suggests optimizing the lower bound directly. Based on this insight, Schulman et al. (2017) propose adding a KL penalty to the objective and solve the following unconstrained optimization problem:
arg max_θ Es∼dπθo(s), a∼πθo(a|s) [ (πθ(a|s) / πθo(a|s)) Aπθo(s, a) − β DKL[πθo(·|s)||πθ(·|s)] ]   (108)
However, they also show that it is not sufficient to set a fixed penalty coefficient β and propose two alternative methods, known as Proximal Policy Optimization (PPO), that aim to alleviate this issue:
1) Adapting the KL coefficient β so that a desired target KL-divergence D̄KL[πθo ||πθ] between the policy before and after the parameter update is achieved
2) Clipping the likelihood ratio so that the optimization has no incentive to move the policy πθ too far away from the original policy πθo . A corresponding optimization objective reads as:
JCLIP = Es∼dπθo(s), a∼πθo(a|s) [ min( (πθ(a|s) / πθo(a|s)) Aπθo(s, a), clip( πθ(a|s) / πθo(a|s), 1 − ε, 1 + ε ) Aπθo(s, a) ) ]   (109)

where ε denotes the clipping range.
Empirical results show that the latter approach leads to better learning performance (Schulman et al., 2017).
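For illustration, a minimal PyTorch sketch of the clipped surrogate (109), to be maximized; this is our own sketch, with per-sample log-probabilities and advantage estimates assumed as inputs.

```python
import torch

def ppo_clip_objective(log_probs, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate (109).

    log_probs:     log pi_theta(a|s) under the current policy (requires grad).
    old_log_probs: log pi_theta_o(a|s) under the sampling policy (detached).
    advantages:    advantage estimates A^{pi_theta_o}(s, a).
    eps:           clipping range epsilon.
    """
    ratio = torch.exp(log_probs - old_log_probs)
    clipped_ratio = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return torch.min(ratio * advantages, clipped_ratio * advantages).mean()
```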
REVIEW

1. What are the differences in gradient calculation between the original MAML and E-MAML?
2. How does the proposed new objective and surrogate address the potential error due to auto-differentiation?
3. What is the concern regarding the effect of using (3) compared to (4) in calculating the gradient?
4. Are there any other factors that may contribute to the better performance of the proposed algorithm, aside from the corrected gradient computation?
5. Can the authors provide further clarification on the notation used in Equations (2) and (3)?
In this paper, the authors investigate the gradient calculation in the original MAML (Finn et al., 2017) and E-MAML (Al-Shedivat et al., 2018). By comparing the differences in the gradients of these two algorithms, the authors demonstrate the advantage of the original MAML in taking the causal dependence into account. To obtain a correct estimate of the gradient through auto-differentiation, the authors exploit the DiCE formulation. Considering the variance of the DiCE objective formulation, the authors finally propose an objective which leads to a low-variance but biased gradient. The authors verify the proposed method on meta-RL tasks and achieve performance comparable to MAML and E-MAML.
Although the ultimate algorithm proposed by this paper is not far from MAML and E-MAML, the authors do a good job of clarifying the differences among the existing variants of MAML from the gradient computation perspective and of revealing the potential error due to auto-differentiation. The proposed new objective and surrogate are well motivated by this observation and by the trade-off between variance and bias.
My major concern is how big the effect is if we use (3) compared to (4) in calculating the gradient. As the authors showed, the only difference between (3) and (4) is the weight in front of the term \nabla_\theta\log\pi_\theta: E-MAML uses a fixed weight while MAML uses an adaptive one via the inner product. Whether the final difference between MAML and E-MAML in Figure 4 is entirely caused by this difference in gradient estimation is not clear. In fact, based on the other large-scale, high-dimensional empirical experiments in Figure 2, it seems the difference between the gradient estimators (3) and (4) does not induce much difference in the final performance of MAML and E-MAML. Based on this observation, I wonder whether the consistently better performance of the proposed algorithm might not come from the corrected gradient computation; it might instead come from the clipping operation or other components of the algorithm. To make a more convincing argument, it would be better if the authors could evaluate the different gradients within the same update scheme.
I am willing to raise my score if the authors can address this question.
minor:
The gradient calculations in Eqs. (2) and (3) are not consistent with the algorithm and the appendix.
The notation is not consistent with common usage: \nabla^2 is typically used to denote the Laplace operator, i.e., \nabla^2 = \nabla \cdot \nabla, which is a scalar.
ICLR | Title
ProMP: Proximal Meta-Policy Search
Abstract
Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during metatraining as well as ineffective task identification strategies. This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm endows efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.
1 INTRODUCTION
A remarkable trait of human intelligence is the ability to adapt to new situations in the face of limited experience. In contrast, our most successful artificial agents struggle in such scenarios. While achieving impressive results, they suffer from high sample complexity in learning even a single task, fail to generalize to new situations, and require large amounts of additional data to successfully adapt to new environments. Meta-learning addresses these shortcomings by learning how to learn. Its objective is to learn an algorithm that allows the artificial agent to succeed in an unseen task when only limited experience is available, aiming to achieve the same fast adaptation that humans possess (Schmidhuber, 1987; Thrun & Pratt, 1998).
Despite recent progress, deep reinforcement learning (RL) still relies heavily on hand-crafted features and reward functions as well as engineered problem specific inductive bias. Meta-RL aims to forego such reliance by acquiring inductive bias in a data-driven manner. Recent work proves this approach to be promising, demonstrating that Meta-RL allows agents to obtain a diverse set of skills, attain better exploration strategies, and learn faster through meta-learned dynamics models or synthetic returns (Duan et al., 2016; Xu et al., 2018; Gupta et al., 2018b; Saemundsson et al., 2018).
Meta-RL is a multi-stage process in which the agent, after a few sampled environment interactions, adapts its behavior to the given task. Despite its wide utilization, little work has been done to promote theoretical understanding of this process, leaving Meta-RL grounded on unstable foundations. Although the behavior prior to the adaptation step is instrumental for task identification, the interplay between pre-adaptation sampling and posterior performance of the policy remains poorly understood. In fact, prior work in gradient-based Meta-RL has either entirely neglected credit assignment to the pre-update distribution (Finn et al., 2017) or implemented such credit assignment in a naive way (Al-Shedivat et al., 2018; Stadie et al., 2018).
To our knowledge, we provide the first formal in-depth analysis of credit assignment w.r.t. preadaptation sampling distribution in Meta-RL. Based on our findings, we develop a novel Meta-RL algorithm. First, we analyze two distinct methods for assigning credit to pre-adaptation behavior. ∗authors contributed equally to this work
We show that the recent formulation introduced by Al-Shedivat et al. (2018) and Stadie et al. (2018) leads to poor credit assignment, while the MAML formulation (Finn et al., 2017) potentially yields superior meta-policy updates. Second, based on insights from our formal analysis, we highlight both the importance and difficulty of proper meta-policy gradient estimates. In light of this, we propose the low variance curvature (LVC) surrogate objective which yields gradient estimates with a favorable bias-variance trade-off. Finally, building upon the LVC estimator we develop Proximal MetaPolicy Search (ProMP), an efficient and stable meta-learning algorithm for RL. In our experiments, we show that ProMP consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.
2 RELATED WORK
Meta-Learning concerns the question of “learning to learn”, aiming to acquire inductive bias in a data driven manner, so that the learning process in face of unseen data or new problem settings is accelerated (Schmidhuber, 1987; Schmidhuber et al., 1997; Thrun & Pratt, 1998).
This can be achieved in various ways. One category of methods attempts to learn the “learning program” of an universal Turing machine in form of a recurrent / memory-augmented model that ingests datasets and either outputs the parameters of the trained model (Hochreiter et al., 2001; Andrychowicz et al., 2016; Chen et al., 2017; Ravi & Larochelle, 2017) or directly outputs predictions for given test inputs (Duan et al., 2016; Santoro et al., 2016; Mishra et al., 2018). Though very flexible and capable of learning very efficient adaptations, such methods lack performance guarantees and are difficult to train on long sequences that arise in Meta-RL.
Another set of methods embeds the structure of a classical learning algorithm in the meta-learning procedure, and optimizes the parameters of the embedded learner during the meta-training (Hüsken & Goerick, 2000; Finn et al., 2017; Nichol et al., 2018; Miconi et al., 2018). A particular instance of the latter that has proven to be particularly successful in the context of RL is gradient-based metalearning (Finn et al., 2017; Al-Shedivat et al., 2018; Stadie et al., 2018). Its objective is to learn an initialization such that after one or few steps of policy gradients the agent attains full performance on a new task. A desirable property of this approach is that even if fast adaptation fails, the agent just falls back on vanilla policy-gradients. However, as we show, previous gradient-based Meta-RL methods either neglect or perform poor credit assignment w.r.t. the pre-update sampling distribution.
A diverse set of methods building on Meta-RL, has recently been introduced. This includes: learning exploration strategies (Gupta et al., 2018b), synthetic rewards (Sung et al., 2017; Xu et al., 2018), unsupervised policy acquisition (Gupta et al., 2018a), model-based RL (Clavera et al., 2018; Saemundsson et al., 2018), learning in competitive environments (Al-Shedivat et al., 2018) and meta-learning modular policies (Frans et al., 2018; Alet et al., 2018). Many of the mentioned approaches build on previous gradient-based meta-learning methods that insufficiently account for the pre-update distribution. ProMP overcomes these deficiencies, providing the necessary framework for novel applications of Meta-RL in unsolved problems.
3 BACKGROUND
Reinforcement Learning. A discrete-time finite Markov decision process (MDP), T , is defined by the tuple (S,A, p, p0, r,H). Here, S is the set of states, A the action space, p(st+1|st, at) the transition distribution, p0 represents the initial state distribution, r : S × A → R is a reward function, and H the time horizon. We omit the discount factor γ in the following elaborations for notational brevity. However, it is straightforward to include it by substituting the reward by r(st, at) := γ
tr(st, at). We define the return R(τ) as the sum of rewards along a trajectory τ := (s0, a0, ..., sH−1, aH−1, sH). The goal of reinforcement learning is to find a policy π(a|s) that maximizes the expected return Eτ∼PT (τ |π) [R(τ )].
Meta-Reinforcement Learning goes one step further, aiming to learn a learning algorithm which is able to quickly learn the optimal policy for a task T drawn from a distribution of tasks ρ(T ). Each task T corresponds to a different MDP. Typically, it is assumed that the distribution of tasks share the action and state space, but may differ in their reward function or their dynamics.
Gradient-based meta-learning aims to solve this problem by learning the parameters θ of a policy πθ such that performing a single or few steps of vanilla policy gradient (VPG) with the given task leads to the optimal policy for that task. This meta-learning formulation, also known under the name
of MAML, was first introduced by Finn et al. (2017). We refer to it as formulation I which can be expressed as maximizing the objective
JI(θ) = ET ∼ρ(T ) [ Eτ ′∼PT (τ ′|θ′) [R(τ ′)] ] with θ′ := U(θ, T ) = θ + α∇θEτ∼PT (τ |θ) [R(τ )]
In that U denotes the update function which depends on the task T , and performs one VPG step towards maximizing the performance of the policy in T . For national brevity and conciseness we assume a single policy gradient adaptation step. Nonetheless, all presented concepts can easily be extended to multiple adaptation steps.
Later work proposes a slightly different notion of gradient-based Meta-RL, also known as E-MAML, that attempts to circumvent issues with the meta-gradient estimation in MAML (Al-Shedivat et al., 2018; Stadie et al., 2018): JII(θ) = ET ∼ρ(T ) [ Eτ1:N∼PT (τ1:N |θ)
τ ′∼PT (τ ′|θ′)
[ R(τ ′) ]] with θ′ := U(θ, τ 1:N ) = θ+α∇θ N∑ n=1 [ R(τ (n)) ] Formulation II views U as a deterministic function that depends on N sampled trajectories from a specific task. In contrast to formulation I, the expectation over pre-update trajectories τ is applied outside of the update function. Throughout this paper we refer to πθ as pre-update policy, and πθ′ as post-update policy.
4 SAMPLING DISTRIBUTION CREDIT ASSIGNMENT
This section analyzes the two gradient-based Meta-RL formulations introduced in Section 3. Figure 1 illustrates the stochastic computation graphs (Schulman et al., 2015b) of both formulations. The red arrows depict how credit assignment w.r.t the pre-update sampling distribution PT (τ |θ) is propagated. Formulation I (left) propagates the credit assignment through the update step, thereby exploiting the full problem structure. In contrast, formulation II (right) neglects the inherent structure, directly assigning credit from post-update return R′ to the pre-update policy πθ which leads to noisier, less effective credit assignment.
Both formulations optimize for the same objective, and are equivalent at the 0th order. However, because of the difference in their formulation and stochastic computation graph, their gradients and the resulting optimization step differs. In the following, we shed light on how and where formulation II loses signal by analyzing the gradients of both formulations, which can be written as (see Appendix A for more details and derivations)
∇θJ(θ) = ET ∼ρ(T ) [ E τ∼PT (τ |θ) τ ′∼PT (τ ′|θ′) [ ∇θJpost(τ , τ ′) +∇θJpre(τ , τ ′) ]] (1)
The first term ∇θJpost(τ , τ ′) is equal in both formulations, but the second term, ∇θJpre(τ , τ ′), differs between them. In particular, they correspond to
∇θJpost(τ , τ ′) = ∇θ′ log πθ(τ ′)R(τ ′)︸ ︷︷ ︸ ∇θ′Jouter
( I + αR(τ )∇2θ log πθ′(τ )) )︸ ︷︷ ︸ transformation from θ′ to θ
(2)
∇θJIIpre(τ , τ ′) = α∇θ log πθ(τ )R(τ ′) (3) ∇θJIpre(τ , τ ′) = α∇θ log πθ(τ ) (
(∇θ log πθ(τ )R(τ ))>︸ ︷︷ ︸ ∇θJ inner (∇θ′ log πθ′(τ ′)R(τ ′))︸ ︷︷ ︸ ∇θ′Jouter
) (4)
∇θJpost(τ , τ ′) simply corresponds to a policy gradient step on the post-update policy πθ′ w.r.t θ′, followed by a linear transformation from post- to pre-update parameters. It corresponds to increasing the likelihood of the trajectories τ ′ that led to higher returns. However, this term does not optimize for the pre-update sampling distribution, i.e., which trajectories τ led to better adaptation steps.
The credit assignment w.r.t. the pre-updated sampling distribution is carried out by the second term. In formulation II, ∇θJIIpre can be viewed as standard reinforcement learning on πθ with R(τ ′) as reward signal, treating the update function U as part of the unknown dynamics of the system. This shifts the pre-update sampling distribution to better adaptation steps.
Formulation I takes the causal dependence of PT (τ ′|θ′) on PT (τ |θ) into account. It does so by maximizing the inner product of pre-update and post-update policy gradients (see Eq. 4). This steers the pre-update policy towards 1) larger post-updates returns 2) larger adaptation steps α∇θJ inner, 3) better alignment of pre- and post-update policy gradients (Li et al., 2017; Nichol et al., 2018). When combined, these effects directly optimize for adaptation. As a result, we expect the first meta-policy gradient formulation, JI , to yield superior learning properties.
5 LOW VARIANCE CURVATURE ESTIMATOR
In the previous section we show that the formulation introduced by Finn et al. (2017) results in superior meta-gradient updates, which should in principle lead to improved convergence properties. However, obtaining correct and low variance estimates of the respective meta-gradients proves challenging. As discussed by Foerster et al. (2018), and shown in Appendix B.3, the score function surrogate objective approach is ill suited for calculating higher order derivatives via automatic differentiation toolboxes. This important fact was overlooked in the original RL-MAML implementation (Finn et al., 2017) leading to incorrect meta-gradient estimates1. As a result, ∇θJpre does not appear in the gradients of the meta-objective (i.e. ∇θJ = ∇θJpost). Hence, MAML does not perform any credit assignment to pre-adaptation behavior.
But, even when properly implemented, we show that the meta-gradients exhibit high variance. Specifically, the estimation of the hessian of the RL-objective, which is inherent in the metagradients, requires special consideration. In this section, we motivate and introduce the low variance curvature estimator (LVC): an improved estimator for the hessian of the RL-objective which promotes better meta-policy gradient updates. As we show in Appendix A.1, we can write the gradient of the meta-learning objective as
∇θJI(θ) = ET ∼ρ(T ) [ Eτ ′∼PT (τ ′|θ′) [ ∇θ′ logPT (τ ′|θ′)R(τ ′)∇θU(θ, T ) ]] (5)
Since the update function U resembles a policy gradient step, its gradient∇θU(θ, T ) involves computing the hessian of the reinforcement learning objective, i.e., ∇2θ Eτ∼PT (τ |θ) [R(τ )]. Estimating this hessian has been discussed in Baxter & Bartlett (2001) and Furmston et al. (2016). In the infinite horizon MDP case, Baxter & Bartlett (2001) derived a decomposition of the hessian. We extend their finding to the finite horizon case, showing that the hessian can be decomposed into three matrix terms (see Appendix B.2 for proof):
∇θU(θ, T ) = I + α∇2θ Eτ∼PT (τ |θ) [R(τ )] = I + α ( H1 +H2 +H12 +H>12 ) (6)
whereby
H1 = Eτ∼PT (τ |θ) [ H−1∑ t=0 ∇θ log πθ(at, st)∇θ log πθ(at, st)> ( H−1∑ t′=t r(st′ ,at′) )]
H2 = Eτ∼PT (τ |θ) [ H−1∑ t=0 ∇2θ log πθ(at, st) ( H−1∑ t′=t r(st′ ,at′) )]
H12 = Eτ∼PT (τ |θ) [ H−1∑ t=0 ∇θ log πθ(at, st)∇θQπθt (st,at)> ]
1Note that MAML is theoretically sound, but does not attend to correctly estimating the meta-policy gradients. As consequence, the gradients in the corresponding implementation do not comply with the theory.
Here Qπθt (st,at) = Eτ t+1:H−1∼PT (·|θ) [∑H−1 t′=t r(st′ ,at′)|st, at ]
denotes the expected state-action value function under policy πθ at time t.
Computing the expectation of the RL-objective is in general intractable. Typically, its gradients are computed with a Monte Carlo estimate based on the policy gradient theorem (Eq. 82). In practical implementations, such an estimate is obtained by automatically differentiating a surrogate objective (Schulman et al., 2015b). However, this results in a highly biased hessian estimate which just computesH2, entirely dropping the termsH1 andH12+H>12. In the notation of the previous section, it leads to neglecting the∇θJpre term, ignoring the influence of the pre-update sampling distribution. The issue can be overcome using the DiCE formulation, which allows to compute unbiased higherorder Monte Carlos estimates of arbitrary stochastic computation graphs (Foerster et al., 2018). The DiCE-RL objective can be rewritten as follows
JDiCE(τ ) = H−1∑ t=0
( t∏
t′=0
πθ(at′ |st′) ⊥(πθ(at′ |st′))
) r(st,at) τ ∼ PT (τ ) (7)
Eτ∼PT (τ |θ) [ ∇2θJDiCE(τ ) ] = H1 +H2 +H12 +H>12 (8)
In that, ⊥ denotes the “stop gradient” operator, i.e., ⊥(fθ(x))→ fθ(x) but ∇θ⊥(fθ(x))→ 0. The sequential dependence of πθ(at|st) within the trajectory, manifesting itself through the product of importance weights in (7), results in high variance estimates of the hessian ∇2θ Eτ∼PT (τ |θ) [R(τ )]. As noted by Furmston et al. (2016), H12 is particularly difficult to estimate, since it involves three nested sums along the trajectory. In section 7.2 we empirically show that the high variance estimates of the DiCE objective lead to noisy meta-policy gradients and poor learning performance.
To facilitate a sample efficient meta-learning, we introduce the low variance curvature (LVC) estimator:
JLVC(τ ) = H−1∑ t=0 πθ(at|st) ⊥(πθ(at|st)) ( H−1∑ t′=t r(st′ ,at′) ) τ ∼ PT (τ ) (9)
Eτ∼PT (τ |θ) [ ∇2θJLVC(τ ) ] = H1 +H2 (10)
By removing the sequential dependence of πθ(at|st) within trajectories, the hessian estimate neglects the term H12 +H>12 which leads to a variance reduction, but makes the estimate biased. The choice of this objective function is motivated by findings in Furmston et al. (2016): under certain conditions the termH12 +H>12 vanishes around local optima θ∗, i.e., Eτ [∇2θJLVC]→ Eτ [∇2θJDiCE] as θ → θ∗. Hence, the bias of the LVC estimator becomes negligible close to local optima. The experiments in section 7.2 underpin the theoretical findings, showing that the low variance hessian estimates obtained through JLVC improve the sample-efficiency of meta-learning by a significant margin when compared to JDiCE. We refer the interested reader to Appendix B for derivations and a more detailed discussion.
6 PROMP: PROXIMAL META-POLICY SEARCH
Building on the previous sections, we develop a novel meta-policy search method based on the low variance curvature objective which aims to solve the following optimization problem:
max θ
ET ∼ρ(T ) [ Eτ ′∼PT (τ ′|θ′) [R(τ ′)] ] with θ′ := θ + α ∇θEτ∼PT (τ |θ) [ JLVC(τ ) ] (11)
Prior work has optimized this objective using either vanilla policy gradient (VPG) or TRPO (Schulman et al., 2015a). TRPO holds the promise to be more data efficient and stable during the learning process when compared to VPG. However, it requires computing the Fisher information matrix (FIM). Estimating the FIM is particularly problematic in the meta-learning set up. The meta-policy gradients already involve second order derivatives; as a result, the time complexity of the FIM estimate is cubic in the number of policy parameters. Typically, the problem is circumvented using finite difference methods, which introduce further approximation errors.
The recently introduced PPO algorithm (Schulman et al., 2017) achieves comparable results to TRPO with the advantage of being a first order method. PPO uses a surrogate clipping objective which allows it to safely take multiple gradient steps without re-sampling trajectories.
JCLIPT (θ) = Eτ∼PT (τ ,θo) [∑H−1 t=0 min ( πθ(at|st) πθo (at|st) Aπθo (st,at) , clip1+ 1− ( πθ(at|st) πθo (at|st) ) Aπθo (st,at) )]
Algorithm 1 Proximal Meta-Policy Search (ProMP) Require: Task distribution ρ, step sizes α, β, KL-penalty coefficient η, clipping range
1: Randomly initialize θ 2: while θ not converged do 3: Sample batch of tasks Ti ∼ ρ(T ) 4: for step n = 0, ..., N − 1 do 5: if n = 0 then 6: Set θo ← θ 7: for all Ti ∼ ρ(T ) do 8: Sample pre-update trajectories Di = {τi} from Ti using πθ 9: Compute adapted parameters θ′o,i ← θ + α ∇θJLRTi (θ) with Di = {τi}
10: Sample post-update trajectories D′i = {τ ′i} from Ti using πθ′o,i 11: Update θ ← θ + β ∑ Ti ∇θJ ProMP Ti (θ) using each D ′ i = {τ ′i}
In case of Meta-RL, it does not suffice to just replace the post-update reward objective with JCLIPT . In order to safely perform multiple meta-gradient steps based on the same sampled data from a recent policy πθo , we also need to 1) account for changes in the pre-update action distribution πθ(at|st), and 2) bound changes in the pre-update state visitation distribution (Kakade & Langford, 2002).
We propose Proximal Meta-Policy Search (ProMP) which incorporates both the benefits of proximal policy optimization and the low variance curvature objective (see Alg. 1.) In order to comply with requirement 1), ProMP replaces the “stop gradient” importance weight πθ(at|st)⊥(πθ(at|st)) by the likelihood ratio πθ(at|st)πθo (at|st)) , which results in the following objective
JLRT (θ) = Eτ∼PT (τ ,θo) [ H−1∑ t=0 πθ(at|st) πθo(at|st) Aπθo (st,at) ] (12)
An important feature of this objective is that its derivatives w.r.t θ evaluated at θo are identical to those of the LVC objective, and it additionally accounts for changes in the pre-update action distribution. To satisfy condition 2) we extend the clipped meta-objective with a KL-penalty term between πθ and πθo . This KL-penalty term enforces a soft local “trust region” around πθo , preventing the shift in state visitation distribution to become large during optimization. This enables us to take multiple meta-policy gradient steps without re-sampling. Altogether, ProMP optimizes
$$J^{ProMP}_\mathcal{T}(\theta) = J^{CLIP}_\mathcal{T}(\theta') - \eta\,\bar{D}_{KL}(\pi_{\theta_o},\pi_\theta) \quad \text{s.t.} \quad \theta' = \theta + \alpha\,\nabla_\theta J^{LR}_\mathcal{T}(\theta)\,,\quad \mathcal{T}\sim\rho(\mathcal{T}) \tag{13}$$
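As a rough illustration of how (13) could be assembled for one task with an autodiff library, consider the following sketch (PyTorch assumed; `log_prob`, `mean_kl`, and the data fields are hypothetical helpers, not the released implementation):

```python
# Illustrative sketch of the per-task ProMP objective (Eq. 13). `pre_data` and
# `post_data` are assumed to carry old log-probs and advantage estimates.
import torch

def promp_task_objective(theta, pre_data, post_data, alpha, eps, eta):
    # Inner adaptation with the likelihood-ratio objective J^LR (Eq. 12).
    ratio_pre = torch.exp(log_prob(theta, pre_data) - pre_data.logp_old)
    j_lr = (ratio_pre * pre_data.advantages).sum()
    grads = torch.autograd.grad(j_lr, theta, create_graph=True)
    theta_prime = [p + alpha * g for p, g in zip(theta, grads)]

    # Clipped outer objective J^CLIP on the post-update trajectories.
    ratio = torch.exp(log_prob(theta_prime, post_data) - post_data.logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    j_clip = torch.min(ratio * post_data.advantages,
                       clipped * post_data.advantages).sum()

    # KL penalty keeping pi_theta close to pi_theta_o on pre-update states.
    return j_clip - eta * mean_kl(theta, pre_data)
```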
ProMP consolidates the insights developed throughout the course of this paper, while at the same time making maximal use of recently developed policy gradients algorithms. First, its meta-learning formulation exploits the full structural knowledge of gradient-based meta-learning. Second, it incorporates a low variance estimate of the RL-objective hessian. Third, ProMP controls the statistical distance of both pre- and post-adaptation policies, promoting efficient and stable meta-learning. All in all, ProMP consistently outperforms previous gradient-based meta-RL algorithms in sample complexity, wall clock time, and asymptotic performance (see Section 7.1).
7 EXPERIMENTS
In order to empirically validate the theoretical arguments outlined above, this section provides a detailed experimental analysis that aims to answer the following questions: (i) How does ProMP perform against previous Meta-RL algorithms? (ii) How do the lower variance but biased LVC gradient estimates compare to the high variance, unbiased DiCE estimates? (iii) Do the different formulations result in different pre-update exploration properties? (iv) How do formulation I and formulation II differ in their meta-gradient estimates and convergence properties?
To answer the posed questions, we evaluate our approach on six continuous control Meta-RL benchmark environments based on OpenAI Gym and the Mujoco simulator (Brockman et al., 2016; Todorov et al., 2012). A description of the experimental setup is found in Appendix D. In all experiments, the reported curves are averaged over at least three random seeds. Returns are estimated
based on sampled trajectories from the adapted post-update policies and averaged over sampled tasks. The source code and the experiment data are available on our supplementary website.2
7.1 META-GRADIENT BASED COMPARISON
We compare our method, ProMP, in sample complexity and asymptotic performance to the gradient-based meta-learning approaches MAML-TRPO (Finn et al., 2017) and E-MAML-TRPO (see Fig. 2). Note that MAML corresponds to the original implementation of RL-MAML by Finn et al. (2017) where no credit assignment to the pre-adaptation policy is happening (see Appendix B.3 for details). Moreover, we provide a second study which focuses on the underlying meta-gradient estimator. Specifically, we compare the LVC, DiCE, MAML and E-MAML estimators while optimizing the meta-learning objective with vanilla policy gradient (VPG) ascent. This can be viewed as an ablated version of the algorithms which tries to eliminate the influences of the outer optimizers on the learning performance (see Fig. 3).
These algorithms are benchmarked on six different locomotion tasks that require adaptation: the half-cheetah and walker must switch between running forward and backward, the high-dimensional agents ant and humanoid must learn to adapt to run in different directions in the 2D-plane, and the hopper and walker have to adapt to different configurations of their dynamics.
2https://sites.google.com/view/pro-mp
The results in Figure 2 highlight the strength of ProMP in terms of sample efficiency and asymptotic performance. In the meta-gradient estimator study in Fig. 3, we demonstrate the positive effect of the LVC objective, as it consistently outperforms the other estimators. In contrast, DiCE learns only slowly when compared to the other approaches. As we have motivated mathematically and substantiate empirically in the following experiment, the poor performance of DiCE may be ascribed to the high variance of its meta-gradient estimates. The fact that the results of MAML and E-MAML are comparable underpins the ineffectiveness of the naive pre-update credit assignment (i.e. formulation II), as discussed in section 4.
Results for four additional environments are displayed in Appendix D along with hyperparameter settings, environment specifications and a wall-clock time comparison of the algorithms.
7.2 GRADIENT ESTIMATOR VARIANCE AND ITS EFFECT ON META-LEARNING
In Section 5 we discussed how the DiCE formulation yields unbiased but high variance estimates of the RL-objective hessian and served as motivation for the low variance curvature (LVC) estimator. Here we investigate the meta-gradient variance of both estimators as well as its implication on the learning performance. Specifically, we report the relative standard deviation of the meta-policy gradients as well as the average return throughout the learning process in three of the meta-environments.
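As a concrete reading of this metric, the relative standard deviation could be computed from repeated gradient estimates as in the following sketch (our own illustration, not the paper's evaluation code):

```python
# Relative standard deviation of meta-gradient estimates from K independently
# sampled gradients, stacked as rows of `grads` with shape (K, num_params).
import numpy as np

def relative_gradient_std(grads: np.ndarray) -> float:
    std = grads.std(axis=0)                        # per-parameter std over samples
    mean_abs = np.abs(grads.mean(axis=0)) + 1e-8   # per-parameter mean magnitude
    return float(np.mean(std / mean_abs))          # averaged over parameters
```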
The results, depicted in Figure 4, highlight the advantage of the low variance curvature estimate. The trajectory level dependencies inherent in the DiCE estimator lead to a meta-gradient standard deviation that is on average 60% higher when compared to LVC. As the learning curves indicate, the noisy gradients may be a driving factor for the poor performance of DiCE, impeding sample efficient meta-learning. Meta-policy search based on the LVC estimator leads to substantially better sample-efficiency and asymptotic performance.
In case of HalfCheetahFwdBack, we observe some unstable learning behavior of LVC-VPG which is most likely caused by the bias of LVC in combination with the naive VPG optimizer. However, the mechanisms in ProMP that ensure proximity w.r.t. the policy's KL-divergence seem to counteract these instabilities during training, giving us a stable and efficient meta-learning algorithm.
7.3 COMPARISON OF INITIAL SAMPLING DISTRIBUTIONS
Here we evaluate the effect of the different objectives on the learned pre-update sampling distribution. We compare the low variance curvature (LVC) estimator with TRPO (LVC-TRPO) against MAML (Finn et al., 2017) and E-MAML-TRPO (Stadie et al., 2018) in a 2D environment on which the exploration behavior can be visualized. Each task of this environment corresponds to reaching a different corner location; however, the 2D agent only experiences reward when it is sufficiently close to the corner (translucent regions of Figure 5). Thus, to successfully identify the task, the agent must explore the different regions. We perform three inner adaptation steps on each task, allowing the agent to fully change its behavior from exploration to exploitation.
The different exploration-exploitation strategies are displayed in Figure 5. Since the MAML implementation does not assign credit to the pre-update sampling trajectory, it is unable to learn a sound exploration strategy for task identification and thus fails to accomplish the task. On the other hand, E-MAML, which corresponds to formulation II, learns to explore in long but random paths: because it can only assign credit to batches of pre-update trajectories, there is no notion of which actions in particular facilitate good task adaptation. As a consequence, the adapted policy slightly misses the task-specific target. The LVC estimator, instead, learns a consistent pattern of exploration, visiting each of the four regions, which it harnesses to fully solve the task.
7.4 GRADIENT UPDATE DIRECTIONS OF THE TWO META-RL FORMULATIONS
To shed more light on the differences of the gradients of formulation I and formulation II, we evaluate the meta-gradient updates and the corresponding convergence to the optimum of both formulations in a simple 1D environment. In this environment, the agent starts at a random position on the real line and has to reach a goal located at position 1 or -1. In order to visualize the convergence, we parameterize the policy with only two parameters θ0 and θ1. We employ formulation I by optimizing the DiCE objective with VPG, and formulation II by optimizing its (E-MAML) objective with VPG.
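A minimal sketch of such a 1D task follows (our own illustration of the described setup; the reward shaping and motion bounds are assumptions):

```python
# Illustrative 1D point environment: random start on the real line, task-specific
# goal at +1 or -1. Negative distance to the goal is an assumed reward shaping.
import numpy as np

class Point1DTask:
    def __init__(self, goal: float, horizon: int = 100):
        assert goal in (-1.0, 1.0)
        self.goal, self.horizon = goal, horizon
        self.state, self.t = 0.0, 0

    def reset(self) -> float:
        self.state = np.random.uniform(-2.0, 2.0)  # random start position
        self.t = 0
        return self.state

    def step(self, action: float):
        self.state += float(np.clip(action, -0.1, 0.1))  # bounded 1D motion
        self.t += 1
        reward = -abs(self.state - self.goal)            # assumed reward shaping
        return self.state, reward, self.t >= self.horizon
```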
Figure 6 depicts meta-gradient updates of the parameters θi for both formulations. Formulation I (red) exploits the internal structure of the adaptation update yielding faster and steadier convergence to the optimum. Due to its inferior credit assignment, formulation II (green) produces noisier gradient estimates leading to worse convergence properties.
8 CONCLUSION
In this paper we propose a novel Meta-RL algorithm, proximal meta-policy search (ProMP), which fully optimizes for the pre-update sampling distribution leading to effective task identification. Our method is the result of a theoretical analysis of gradient-based Meta-RL formulations, based on which we develop the low variance curvature (LVC) surrogate objective that produces low variance meta-policy gradient estimates. Experimental results demonstrate that our approach surpasses previous meta-reinforcement learning approaches in a diverse set of continuous control tasks. Finally, we underpin our theoretical contributions with illustrative examples which further justify the soundness and effectiveness of our method.
ACKNOWLEDGMENTS
Ignasi Clavera was supported by the La Caixa Fellowship. The research leading to these results has received funding from the German Research Foundation (DFG: Deutsche Forschungsgemeinschaft) under Priority Program on Autonomous Learning (SPP 1527) and was supported by Berkeley Deep Drive, Amazon Web Services, and Huawei. Also we thank Abhishek Gupta, Chelsea Finn, and Aviv Tamar for their valuable feedback.
A TWO META-POLICY GRADIENT FORMULATIONS
In this section we discuss two different gradient-based meta-learning formulations, derive their gradients and analyze the differences between them.
A.1 META-POLICY GRADIENT FORMULATION I
The first meta-learning formulation, known as MAML (Finn et al., 2017), views the inner update rule U(θ, T) as a mapping from the pre-update parameter θ and the task T to an adapted policy parameter θ′. The update function can be viewed as a stand-alone procedure that encapsulates sampling from the task-specific trajectory distribution $P_\mathcal{T}(\tau|\pi_\theta)$ and updating the policy parameters. Building on this concept, the meta-objective can be written as
$$J^{I}(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\big[\mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')]\big] \quad \text{with} \quad \theta' := U(\theta,\mathcal{T}) \tag{14}$$
The task-specific gradients follow as
\begin{align}
\nabla_\theta J^{I}_\mathcal{T}(\theta) &= \nabla_\theta\,\mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')] \tag{15}\\
&= \mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}\big[\nabla_\theta\log P_\mathcal{T}(\tau'|\theta')\,R(\tau')\big] \tag{16}\\
&= \mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}\big[\nabla_{\theta'}\log P_\mathcal{T}(\tau'|\theta')\,R(\tau')\,\nabla_\theta\theta'\big] \tag{17}
\end{align}
In order to derive the gradients of the inner update ∇θθ′ = ∇θU(θ, T ) it is necessary to know the structure of U . The main part of this paper assumes the inner update rule to be a policy gradient descent step
\begin{align}
\nabla_\theta U(\theta,\mathcal{T}) &= \nabla_\theta\big(\theta + \alpha\,\nabla_\theta\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)]\big) \tag{18}\\
&= I + \alpha\,\nabla^2_\theta\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] \tag{19}
\end{align}
Thereby the second term in (19) is the local curvature (hessian) of the inner adaptation objective function. The correct hessian of the inner objective can be derived as follows:
\begin{align}
\nabla^2_\theta\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] &= \nabla_\theta\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[\nabla_\theta\log\pi_\theta(\tau)\,R(\tau)\big] \tag{20}\\
&= \nabla_\theta \int P_\mathcal{T}(\tau|\theta)\,\nabla_\theta\log\pi_\theta(\tau)\,R(\tau)\,d\tau \tag{21}\\
&= \int P_\mathcal{T}(\tau|\theta)\,\nabla_\theta\log\pi_\theta(\tau)\nabla_\theta\log\pi_\theta(\tau)^\top R(\tau) \tag{22}\\
&\qquad + P_\mathcal{T}(\tau|\theta)\,\nabla^2_\theta\log\pi_\theta(\tau)\,R(\tau)\,d\tau \tag{23}\\
&= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[R(\tau)\big(\nabla^2_\theta\log\pi_\theta(\tau) + \nabla_\theta\log\pi_\theta(\tau)\nabla_\theta\log\pi_\theta(\tau)^\top\big)\big] \tag{24}
\end{align}
A.2 META-POLICY GRADIENT FORMULATION II
The second meta-reinforcement learning formulation views the inner update $\theta' = U(\theta,\tau^{1:N})$ as a deterministic function of the pre-update policy parameters θ and N trajectories $\tau^{1:N}\sim P_\mathcal{T}(\tau^{1:N}|\theta)$ sampled from the pre-update trajectory distribution. This formulation was introduced in Al-Shedivat et al. (2018) and further discussed with respect to its exploration properties in Stadie et al. (2018).
Viewing U as a function that adapts the policy parameters θ to a specific task T given policy rollouts in this task, the corresponding meta-learning objective can be written as
$$J^{II}(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\Big[\mathbb{E}_{\tau^{1:N}\sim P_\mathcal{T}(\tau^{1:N}|\theta)}\big[\mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')]\big]\Big] \quad \text{with} \quad \theta' := U(\theta,\tau^{1:N}) \tag{25}$$
Since the first part of the gradient derivation is agnostic to the inner update rule $U(\theta,\tau^{1:N})$, we only assume that the inner update function U is differentiable w.r.t. θ. First we rewrite the meta-objective J(θ) as an expectation of task-specific objectives $J^{II}_\mathcal{T}(\theta)$ under the task distribution. This allows us to express the meta-policy gradients as an expectation of task-specific gradients:
$$\nabla_\theta J^{II}(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\big[\nabla_\theta J^{II}_\mathcal{T}(\theta)\big] \tag{26}$$
The task-specific gradients can be calculated as follows:
\begin{align}
\nabla_\theta J^{II}_\mathcal{T}(\theta) &= \nabla_\theta\,\mathbb{E}_{\tau^{1:N}\sim P_\mathcal{T}(\tau^{1:N}|\theta)}\big[\mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')]\big]\\
&= \nabla_\theta \int\!\!\int R(\tau')\,P_\mathcal{T}(\tau'|\theta')\,P_\mathcal{T}(\tau^{1:N}|\theta)\,d\tau'\,d\tau^{1:N}\\
&= \int\!\!\int R(\tau')\,P_\mathcal{T}(\tau'|\theta')\,\nabla_\theta\log P_\mathcal{T}(\tau^{1:N}|\theta)\,P_\mathcal{T}(\tau^{1:N}|\theta)\\
&\qquad + R(\tau')\,\nabla_\theta\log P_\mathcal{T}(\tau'|\theta')\,P_\mathcal{T}(\tau'|\theta')\,P_\mathcal{T}(\tau^{1:N}|\theta)\,d\tau'\,d\tau^{1:N}\\
&= \mathbb{E}_{\tau^{1:N}\sim P_\mathcal{T}(\tau^{1:N}|\theta),\ \tau'\sim P_\mathcal{T}(\tau'|\theta')}\Big[R(\tau')\Big(\nabla_\theta\log P_\mathcal{T}(\tau'|\theta') + \sum_{n=1}^{N}\nabla_\theta\log P_\mathcal{T}(\tau^{(n)}|\theta)\Big)\Big]\\
&= \mathbb{E}_{\tau^{1:N}\sim P_\mathcal{T}(\tau^{1:N}|\theta),\ \tau'\sim P_\mathcal{T}(\tau'|\theta')}\Big[R(\tau')\Big(\nabla_{\theta'}\log P_\mathcal{T}(\tau'|\theta')\,\nabla_\theta\theta' + \sum_{n=1}^{N}\nabla_\theta\log P_\mathcal{T}(\tau^{(n)}|\theta)\Big)\Big]
\end{align}
As in A.1, the structure of $U(\theta,\tau^{1:N})$ must be known in order to derive the gradient $\nabla_\theta\theta'$. Since we assume the inner update to be vanilla policy gradient, the update reads as
$$U(\theta,\tau^{1:N}) = \theta + \alpha\,\frac{1}{N}\sum_{n=1}^{N}\nabla_\theta\log\pi_\theta(\tau^{(n)})\,R(\tau^{(n)}) \quad \text{with} \quad \nabla_\theta\log\pi_\theta(\tau) = \sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)$$
The respective gradient of $U(\theta,\tau^{1:N})$ follows as
\begin{align}
\nabla_\theta U(\theta,\tau^{1:N}) &= \nabla_\theta\Big(\theta + \alpha\,\frac{1}{N}\sum_{n=1}^{N}\nabla_\theta\log\pi_\theta(\tau^{(n)})\,R(\tau^{(n)})\Big) \tag{27}\\
&= I + \alpha\,\frac{1}{N}\sum_{n=1}^{N}\nabla^2_\theta\log\pi_\theta(\tau^{(n)})\,R(\tau^{(n)}) \tag{28}
\end{align}
A.3 COMPARING THE GRADIENTS OF THE TWO FORMULATIONS
In the following we analyze the differences between the gradients derived for the two formulations. To do so, we begin with $\nabla_\theta J^{I}_\mathcal{T}(\theta)$ by inserting the gradient of the inner adaptation step (19) into (17):
$$\nabla_\theta J^{I}_\mathcal{T}(\theta) = \mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}\Big[\nabla_{\theta'}\log P_\mathcal{T}(\tau'|\theta')\,R(\tau')\Big(I + \alpha\,\nabla^2_\theta\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)]\Big)\Big] \tag{29}$$
We can substitute the hessian of the inner objective by its derived expression from (24) and then rearrange the terms. Also note that $\nabla_\theta\log P_\mathcal{T}(\tau|\theta) = \nabla_\theta\log\pi_\theta(\tau) = \sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)$, where H is the MDP horizon.
\begin{align}
\nabla_\theta J^{I}_\mathcal{T}(\theta) &= \mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}\Big[\nabla_{\theta'}\log P_\mathcal{T}(\tau'|\theta')\,R(\tau')\Big(I + \alpha\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[R(\tau) \tag{30}\\
&\qquad \big(\nabla^2_\theta\log\pi_\theta(\tau) + \nabla_\theta\log\pi_\theta(\tau)\nabla_\theta\log\pi_\theta(\tau)^\top\big)\big]\Big)\Big] \tag{31}\\
&= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta),\ \tau'\sim P_\mathcal{T}(\tau'|\theta')}\Big[\underbrace{\nabla_{\theta'}\log\pi_{\theta'}(\tau')\,R(\tau')\big(I + \alpha R(\tau)\,\nabla^2_\theta\log\pi_\theta(\tau)\big)}_{\nabla_\theta J_{post}(\tau,\tau')} \tag{32}\\
&\qquad + \underbrace{\alpha\,\nabla_{\theta'}\log\pi_{\theta'}(\tau')\,R(\tau')\,R(\tau)\,\nabla_\theta\log\pi_\theta(\tau)\nabla_\theta\log\pi_\theta(\tau)^\top}_{\nabla_\theta J^{I}_{pre}(\tau,\tau')}\Big] \tag{33}
\end{align}
Next, we rearrange the gradient of $J^{II}$ into a similar form as $\nabla_\theta J^{I}_\mathcal{T}(\theta)$. For that, we start by inserting (28) for $\nabla_\theta\theta'$ and replacing the expectation over pre-update trajectories $\tau^{1:N}$ by the expectation over a single trajectory τ:
\begin{align}
\nabla_\theta J^{II}_\mathcal{T}(\theta) &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta),\ \tau'\sim P_\mathcal{T}(\tau'|\theta')}\Big[\underbrace{R(\tau')\,\nabla_{\theta'}\log\pi_{\theta'}(\tau')\big(I + \alpha R(\tau)\,\nabla^2_\theta\log\pi_\theta(\tau)\big)}_{\nabla_\theta J_{post}(\tau,\tau')} \tag{34}\\
&\qquad + \underbrace{R(\tau')\,\nabla_\theta\log\pi_\theta(\tau)}_{\nabla_\theta J^{II}_{pre}(\tau,\tau')}\Big] \tag{35}
\end{align}
While the first part of the gradients match ((32) and (34)), the second part ((33) and (35)) differs. Since the second gradient term can be viewed as responsible for shifting the pre-update sampling distribution $P_\mathcal{T}(\tau|\theta)$ towards higher post-update returns, we refer to it as $\nabla_\theta J_{pre}(\tau,\tau')$. To further analyze the difference between $\nabla_\theta J^{I}_{pre}$ and $\nabla_\theta J^{II}_{pre}$ we slightly rearrange (33) and put both gradient terms next to each other:
\begin{align}
\nabla_\theta J^{I}_{pre}(\tau,\tau') &= \alpha\,\nabla_\theta\log\pi_\theta(\tau)\,\underbrace{\big(\nabla_\theta\log\pi_\theta(\tau)R(\tau)\big)^\top}_{\nabla_\theta J^{inner}}\underbrace{\big(\nabla_{\theta'}\log\pi_{\theta'}(\tau')R(\tau')\big)}_{\nabla_{\theta'}J^{outer}} \tag{36}\\
\nabla_\theta J^{II}_{pre}(\tau,\tau') &= \alpha\,\nabla_\theta\log\pi_\theta(\tau)\,R(\tau') \tag{37}
\end{align}
In the following we interpret and compare the derived gradient terms, aiming to provide intuition for the differences between the formulations:
The first gradient term $J_{post}$ that matches in both formulations corresponds to a policy gradient step on the post-update policy $\pi_{\theta'}$. Since θ′ itself is a function of θ, the term $\big(I + \alpha R(\tau)\nabla^2_\theta\log\pi_\theta(\tau)\big)$ can be seen as a linear transformation of the policy gradient update $R(\tau')\nabla_{\theta'}\log\pi_{\theta'}(\tau')$ from the post-update parameter θ′ into θ. Although $J_{post}$ takes into account the functional relationship between θ′ and θ, it does not take into account the pre-update sampling distribution $P_\mathcal{T}(\tau|\theta)$. This is where $\nabla_\theta J_{pre}$ comes into play: $\nabla_\theta J^{II}_{pre}$ can be viewed as a policy gradient update of the pre-update policy $\pi_\theta$ w.r.t. the post-update return R(τ′). Hence this gradient term aims to shift the pre-update sampling distribution so that higher post-update returns are achieved. However, $\nabla_\theta J^{II}_{pre}$ does not take into account the causal dependence of the post-update policy on the pre-update policy. Thus a change in θ due to $\nabla_\theta J^{II}_{pre}$ may counteract the change due to $\nabla_\theta J_{post}$. In contrast, $\nabla_\theta J^{I}_{pre}$ takes the dependence of the post-update policy on the pre-update sampling distribution into account. Instead of simply weighting the gradients of the pre-update policy $\nabla_\theta\log\pi_\theta(\tau)$ with R(τ′) as in $\nabla_\theta J^{II}_{pre}$, $\nabla_\theta J^{I}_{pre}$ weights the gradients with the inner product of the pre-update and post-update policy gradients. This inner product can be written as
$$\nabla_\theta J^{inner\,\top}\,\nabla_{\theta'}J^{outer} = \|\nabla_\theta J^{inner}\|_2 \cdot \|\nabla_{\theta'}J^{outer}\|_2 \cdot \cos(\delta) \tag{38}$$
wherein δ denotes the angle between the pre-update and post-update policy gradients. Hence, $\nabla_\theta J^{I}_{pre}$ steers the pre-update policy not only towards larger post-update returns but also towards larger adaptation steps $\alpha\nabla_\theta J^{inner}$, and better alignment of pre- and post-update policy gradients. This directly optimizes for maximal improvement / adaptation for the respective task. See Li et al. (2017); Nichol et al. (2018) for a comparable analysis in case of domain generalization and supervised meta-learning. Also note that (38) allows formulation I to perform credit assignment on the trajectory level whereas formulation II can only assign credit to entire batches of N pre-update trajectories $\tau^{1:N}$.
As a result, we expect the first meta-policy gradient formulation to learn faster and more stably since the respective gradients take the dependence of the pre-update returns on the pre-update sampling distribution into account while this causal link is neglected in the second formulation.
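For intuition, the alignment term in (38) can be inspected numerically (illustrative sketch, assuming the two gradients have been flattened into vectors):

```python
# Credit-assignment weight in formulation I: inner product of flattened
# pre-update (inner) and post-update (outer) policy gradients, Eq. (38).
import torch

def gradient_alignment(grad_inner: torch.Tensor, grad_outer: torch.Tensor):
    inner_product = torch.dot(grad_inner, grad_outer)
    cos_delta = inner_product / (grad_inner.norm() * grad_outer.norm() + 1e-12)
    return inner_product, cos_delta
```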
B ESTIMATING THE META-POLICY GRADIENTS
When employing formulation I for gradient-based meta-learning, we aim to maximize the objective
$$J(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\big[\mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')]\big] \quad \text{with} \quad \theta' := \theta + \alpha\,\nabla_\theta\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] \tag{39}$$
by performing a form of gradient descent on J(θ). Note that we, from now on, assume $J := J^I$ and thus omit the superscript indicating the respective meta-learning formulation. As shown in A.2 the gradient can be derived as $\nabla_\theta J(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}[\nabla_\theta J_\mathcal{T}(\theta)]$ with
$$\nabla_\theta J_\mathcal{T}(\theta) = \mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}\Big[\nabla_{\theta'}\log P_\mathcal{T}(\tau'|\theta')\,R(\tau')\Big(I + \alpha\,\nabla^2_\theta\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)]\Big)\Big] \tag{40}$$
where $\nabla^2_\theta J_{inner}(\theta) := \nabla^2_\theta\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)]$ denotes the hessian of the inner adaptation objective w.r.t. θ. This section concerns the question of how to properly estimate this hessian.
B.1 ESTIMATING GRADIENTS OF THE RL REWARD OBJECTIVE
Since the expectation over the trajectory distribution $P_\mathcal{T}(\tau|\theta)$ is in general intractable, the score function trick is typically used to produce a Monte Carlo estimate of the policy gradients. Although the gradient estimate can be directly defined, when using an automatic-differentiation toolbox it is usually more convenient to use an objective function whose gradients correspond to the policy gradient estimate. Due to the Policy Gradient Theorem (PGT) Sutton et al. (2000) such a "surrogate" objective can be written as:
\begin{align}
\hat{J}^{PGT} &= \frac{1}{K}\sum_{\tau_k}\sum_{t=0}^{H-1}\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \qquad \tau_k\sim P_\mathcal{T}(\tau) \tag{41}\\
&= \frac{1}{K}\sum_{\tau_k}\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t) \qquad \tau_k\sim P_\mathcal{T}(\tau) \tag{42}
\end{align}
While (41) and (42) are equivalent (Peters & Schaal, 2006), the more popular formulation (41) can be seen as forward looking credit assignment while (42) can be interpreted as backward looking credit assignment (Foerster et al., 2018). A generalized procedure for constructing "surrogate" objectives for arbitrary stochastic computation graphs can be found in Schulman et al. (2015a).
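The forward-looking surrogate (41) could be implemented as follows (sketch; the batched tensor shapes are our own illustrative assumption):

```python
# Sketch of the PGT surrogate (Eq. 41) for K trajectories with per-step log-probs
# and rewards of shape (K, H); autodiff of the returned scalar yields the standard
# Monte Carlo policy gradient estimate.
import torch

def pgt_surrogate(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    # returns-to-go: sum_{t' >= t} r(s_t', a_t') per trajectory
    returns_to_go = torch.flip(torch.cumsum(torch.flip(rewards, [1]), 1), [1])
    return (log_probs * returns_to_go).sum(dim=1).mean()
```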
B.2 A DECOMPOSITION OF THE HESSIAN
Estimating the hessian of the reinforcement learning objective has been discussed in Furmston et al. (2016) and Baxter & Bartlett (2001) with focus on second order policy gradient methods. In the infinite horizon MDP case, Baxter & Bartlett (2001) derive a decomposition of the hessian. In the following, we extend their finding to the finite horizon case.
Proposition. The hessian of the RL objective can be decomposed into four matrix terms:
$$\nabla^2_\theta J_{inner}(\theta) = H_1 + H_2 + H_{12} + H_{12}^\top \tag{43}$$
where
\begin{align}
H_1 &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^\top\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{44}\\
H_2 &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla^2_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{45}\\
H_{12} &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\,\nabla_\theta Q^{\pi_\theta}_t(s_t,a_t)^\top\Big] \tag{46}
\end{align}
Here $Q^{\pi_\theta}_t(s_t,a_t) = \mathbb{E}_{\tau^{t+1:H-1}\sim P_\mathcal{T}(\cdot|\theta)}\big[\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\,\big|\,s_t,a_t\big]$ denotes the expected state-action value function under policy $\pi_\theta$ at time t.
Proof. As derived in (24), the hessian of $J_{inner}(\theta)$ follows as:
\begin{align}
\nabla^2_\theta J_{inner} &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[R(\tau)\big(\nabla^2_\theta\log\pi_\theta(\tau) + \nabla_\theta\log\pi_\theta(\tau)\nabla_\theta\log\pi_\theta(\tau)^\top\big)\big] \tag{47}\\
&= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla^2_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t)\Big] \tag{48}\\
&\quad + \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)^\top r(s_t,a_t)\Big] \tag{49}\\
&= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla^2_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{50}\\
&\quad + \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\sum_{h=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_h|s_h)^\top\Big)\, r(s_t,a_t)\Big] \tag{51}
\end{align}
The term in (50) is equal to $H_2$. We continue by showing that the remaining term in (51) is equivalent to $H_1 + H_{12} + H_{12}^\top$. For that, we split the inner double sum in (51) into three components:
\begin{align}
&\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\sum_{h=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_h|s_h)^\top\Big)\, r(s_t,a_t)\Big] \tag{52}\\
&= \mathbb{E}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})^\top\Big)\, r(s_t,a_t)\Big] \tag{53}\\
&\quad + \mathbb{E}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\sum_{h=0}^{t'-1}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_h|s_h)^\top\Big)\, r(s_t,a_t)\Big] \tag{54}\\
&\quad + \mathbb{E}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\sum_{h=t'+1}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_h|s_h)^\top\Big)\, r(s_t,a_t)\Big] \tag{55}
\end{align}
By changing the backward looking summation over outer products into a forward looking summation of rewards, (53) can be shown to be equal to $H_1$:
\begin{align}
&\mathbb{E}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})^\top\Big)\, r(s_t,a_t)\Big] \tag{56}\\
&= \mathbb{E}\Big[\sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^\top\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{57}\\
&= H_1 \tag{58}
\end{align}
By simply exchanging the summation indices t′ and h in (55) it is straightforward to show that (55) is the transpose of (54). Hence it is sufficient to show that (54) is equivalent to $H_{12}$. However, instead of following the direction of the previous proof we will now start with the definition of $H_{12}$ and derive the expression in (54):
$$H_{12} = \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\,\nabla_\theta Q^{\pi_\theta}_t(s_t,a_t)^\top\Big] \tag{59}$$
The gradient of $Q^{\pi_\theta}_t$ can be expressed recursively:
\begin{align}
\nabla_\theta Q^{\pi_\theta}_t(s_t,a_t) &= \nabla_\theta\,\mathbb{E}_{s_{t+1},a_{t+1}}\big[Q^{\pi_\theta}_{t+1}(s_{t+1},a_{t+1})\big] \tag{61}\\
&= \mathbb{E}_{s_{t+1},a_{t+1}}\big[\nabla_\theta\log\pi_\theta(a_{t+1}|s_{t+1})\,Q^{\pi_\theta}_{t+1}(s_{t+1},a_{t+1}) + \nabla_\theta Q^{\pi_\theta}_{t+1}(s_{t+1},a_{t+1})\big] \tag{62}
\end{align}
By induction, it follows that
$$\nabla_\theta Q^{\pi_\theta}_t(s_t,a_t) = \mathbb{E}_{\tau^{t+1:H-1}\sim P_\mathcal{T}(\cdot|\theta)}\Big[\sum_{t'=t+1}^{H-1}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big(\sum_{h=t'}^{H-1} r(s_h,a_h)\Big)\Big] \tag{63}$$
When inserting (63) into (59) and swapping the summation, we are able to show that $H_{12}$ is equivalent to (54):
\begin{align}
H_{12} &= \mathbb{E}\Big[\sum_{t=0}^{H-1}\sum_{t'=t+1}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})^\top\Big(\sum_{h=t'}^{H-1} r(s_h,a_h)\Big)\Big] \tag{64}\\
&= \mathbb{E}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\sum_{h=0}^{t'-1}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_h|s_h)^\top\Big)\, r(s_t,a_t)\Big] \tag{65}
\end{align}
This concludes the proof that the hessian of the expected sum of rewards under policy $\pi_\theta$ and an MDP with finite time horizon H can be decomposed into $H_1 + H_2 + H_{12} + H_{12}^\top$.
B.3 ESTIMATING THE HESSIAN OF THE RL REWARD OBJECTIVE
As pointed out by Al-Shedivat et al. (2018); Stadie et al. (2018) and Foerster et al. (2018), simply differentiating through the gradient of the surrogate objective $\hat{J}^{PGT}$ as done in the original MAML version (Finn et al., 2017) leads to biased hessian estimates. Specifically, when compared with the unbiased estimate, as derived in (24) and decomposed in Appendix B.2, both $H_1$ and $H_{12} + H_{12}^\top$ are missing. Thus, $\nabla_\theta J_{pre}$ does not appear in the gradients of the meta-objective (i.e. $\nabla_\theta J = \nabla_\theta J_{post}$). Only performing gradient descent with $\nabla_\theta J_{post}$ entirely neglects influences of the pre-update sampling distribution. This issue was overlooked in the RL-MAML implementation of Finn et al. (2017). As discussed in Stadie et al. (2018) this leads to poor performance in meta-learning problems that require exploration during the pre-update sampling.
B.3.1 THE DICE MONTE-CARLO ESTIMATOR
Addressing the issue of incorrect higher-order derivatives of Monte Carlo estimators, Foerster et al. (2018) propose DICE, which mainly builds upon a newly introduced MagicBox ($\square$) operator. This operator allows formulating Monte Carlo estimators with correct higher-order derivatives. A DICE formulation of a policy gradient estimator reads as:
\begin{align}
J^{DICE} &= \sum_{t=0}^{H-1}\square_\theta\big(\{a_{t'\le t}\}\big)\, r(s_t,a_t) \tag{66}\\
&= \sum_{t=0}^{H-1}\exp\Big(\sum_{t'=0}^{t}\log\pi_\theta(a_{t'}|s_{t'}) - \bot\big(\log\pi_\theta(a_{t'}|s_{t'})\big)\Big)\, r(s_t,a_t) \tag{67}
\end{align}
In that, ⊥ denotes a "stop gradient" operator (i.e. $\bot(f_\theta(x)) \to f_\theta(x)$ but $\nabla_\theta\bot(f_\theta(x)) \to 0$). Note that → denotes "evaluates to" and does not necessarily imply equality w.r.t. gradients. Hence, $J^{DICE}(\theta)$ evaluates to the sum of rewards at 0th order but produces the unbiased gradients $\nabla^n_\theta J^{DICE}(\theta)$ when differentiated n times (see Foerster et al. (2018) for proof). To shed more light on the maverick DICE formulation, we rewrite (67) as follows:
$$J^{DICE} = \sum_{t=0}^{H-1}\Big(\prod_{t'=0}^{t}\frac{\pi_\theta(a_{t'}|s_{t'})}{\bot(\pi_\theta(a_{t'}|s_{t'}))}\Big)\, r(s_t,a_t) \tag{68}$$
Interpreting this novel formulation, the MagicBox operator $\square_\theta(\{a_{t'\le t}\})$ can be understood as a "dry" importance sampling weight. At 0th order it evaluates to 1 and leaves the objective function unaffected, but when differentiated once it yields an estimator for the marginal rate of return due to a change in the policy-implied trajectory distribution.
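In code, the MagicBox operator admits a one-line realization (sketch after Foerster et al. (2018); PyTorch assumed):

```python
# MagicBox operator: evaluates to 1 (since x - x.detach() == 0) but yields the
# correct higher-order derivatives of the enclosed log-probabilities.
import torch

def magic_box(x: torch.Tensor) -> torch.Tensor:
    # value: exp(0) == 1;  gradient: magic_box(x) * dx/dtheta -> dx/dtheta
    return torch.exp(x - x.detach())
```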
In the following we show that, in expectation, 1) the gradients of (81) match standard policy gradients and 2) its hessian estimate is equal to the hessian of the inner RL objective, derived in B.2.
\begin{align}
\nabla_\theta J^{DICE} &= \sum_{t=0}^{H-1}\nabla_\theta\Big(\prod_{t'=0}^{t}\frac{\pi_\theta(a_{t'}|s_{t'})}{\bot(\pi_\theta(a_{t'}|s_{t'}))}\Big)\, r(s_t,a_t) \tag{69}\\
&= \sum_{t=0}^{H-1}\Big(\prod_{t'=0}^{t}\frac{\pi_\theta(a_{t'}|s_{t'})}{\bot(\pi_\theta(a_{t'}|s_{t'}))}\Big)\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t) \tag{70}\\
&\to \sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t) \tag{71}
\end{align}
Here, (71) corresponds to the backward looking credit assignment formulation of policy gradients ∇θJPGT as discussed in B.1. Once again we take the derivative in order to obtain the Hessian of JDICE:
\begin{align}
\nabla^2_\theta J^{DICE} &= \sum_{t=0}^{H-1}\nabla_\theta\Big(\prod_{t'=0}^{t}\frac{\pi_\theta(a_{t'}|s_{t'})}{\bot(\pi_\theta(a_{t'}|s_{t'}))}\Big)\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t) \tag{72}\\
&\quad + \Big(\prod_{t'=0}^{t}\frac{\pi_\theta(a_{t'}|s_{t'})}{\bot(\pi_\theta(a_{t'}|s_{t'}))}\Big)\nabla_\theta\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t) \tag{73}\\
&\to \sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)^\top r(s_t,a_t) \tag{74}\\
&\quad + \Big(\sum_{t'=0}^{t}\nabla^2_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t) \tag{75}
\end{align}
In expectation, the DICE Monte Carlo estimate of the hessian is equivalent to the hessian of the inner objective, i.e. $\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[\nabla^2_\theta J^{DICE}] = \nabla^2_\theta J_{inner}$. To show this, we use the expression of $\nabla^2_\theta J_{inner}$ in (49):
\begin{align}
\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[\nabla^2_\theta J^{DICE}\big] &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)^\top r(s_t,a_t) \tag{76--77}\\
&\qquad + \Big(\sum_{t'=0}^{t}\nabla^2_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t)\Big] \tag{78}\\
&= H_1 + H_2 + H_{12} + H_{12}^\top \tag{79}\\
&= \nabla^2_\theta J_{inner} \tag{80}
\end{align}
B.4 BIAS AND VARIANCE OF THE CURVATURE ESTIMATE
As shown in the previous section, $\nabla^2_\theta J^{DICE}$ provides an unbiased estimate of the hessian of the inner objective $J_{inner} = \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)]$. However, recall that the DICE objective involves a product of importance weights along the trajectory:
$$J^{DICE} = \sum_{t=0}^{H-1}\Big(\prod_{t'=0}^{t}\frac{\pi_\theta(a_{t'}|s_{t'})}{\bot(\pi_\theta(a_{t'}|s_{t'}))}\Big)\, r(s_t,a_t) \tag{81}$$
Taking the 2nd derivative of this product leads to the outer product of sums in (74), which is of high variance w.r.t. τ. Specifically, this outer product of sums can be decomposed into three terms $H_1 + H_{12} + H_{12}^\top$ (see Appendix B.2). As noted by Furmston et al. (2016), $H_{12} + H_{12}^\top$ is particularly difficult to estimate. In section 7.2 we empirically show that the high variance curvature estimates obtained with the DICE objective require large batch sizes and impede sample efficient learning.
In the following we develop a low variance curvature (LVC) estimator $J^{LVC}$ which matches $J^{DICE}$ at the gradient level and yields lower-variance estimates of the hessian by neglecting $H_{12} + H_{12}^\top$. Before formally introducing $J^{LVC}$, we motivate such an estimator starting with the policy gradient estimate that was originally derived in Sutton et al. (2000), followed by marginalizing the trajectory level distribution $P_\mathcal{T}(\tau|\theta)$ over states $s_t$ and actions $a_t$. Note that we omit reward baselines for notational simplicity.
\begin{align}
\nabla_\theta J_{inner} &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{82}\\
&= \sum_{t=0}^{H-1}\mathbb{E}_{s_t\sim p^{\pi_\theta}_t(s_t),\ a_t\sim\pi_\theta(a_t|s_t)}\Big[\nabla_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{83}
\end{align}
In that, $p^{\pi_\theta}_t(s_t)$ denotes the state visitation frequency at time step t, i.e. the probability density of being in $s_t$ after t steps under the policy $\pi_\theta$. In the general case $p^{\pi_\theta}_t(s_t)$ is intractable but depends on the policy parameter θ. We make the simplifying assumption that $p^{\pi_\theta}_t(s_t)$ is fixed in a local region of θ. Since we make this assumption at the gradient level, this corresponds to a 1st order Taylor expansion of $p^{\pi_\theta}_t(s_t)$ in θ. Note that this assumption is also used in the Monotonic Policy Improvement Theory (Kakade & Langford, 2002; Schulman et al., 2015a). Based on this condition, the hessian follows as the derivative of (83), whereby a "stop gradient" expression around the state visitation frequency $p^{\pi_\theta}_t(s_t)$ resembles the 1st order Taylor approximation:
\begin{align}
\mathbb{E}_\tau\big[\nabla^2_\theta J^{LVC}\big] &= \nabla_\theta\sum_{t=0}^{H-1}\mathbb{E}_{s_t\sim\bot(p^{\pi_\theta}_t(s_t)),\ a_t\sim\pi_\theta(a_t|s_t)}\Big[\nabla_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{84}\\
&= \sum_{t=0}^{H-1}\mathbb{E}_{s_t\sim\bot(p^{\pi_\theta}_t(s_t)),\ a_t\sim\pi_\theta(a_t|s_t)}\Big[\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^\top\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{85}\\
&\qquad + \nabla^2_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big] \tag{86}
\end{align}
Since the expectation in (84) is intractable it must be evaluated by a Monte Carlo estimate. However, simply replacing the expectation with an average of sampled trajectories induces a wrong hessian that does not correspond to (86), since the outer product of log-gradients would be missing when differentiated. To ensure that automatic differentiation still yields the correct hessian, we add a "dry" importance weight comparable to DICE:
$$\nabla_\theta J^{LVC} = \sum_{t=0}^{H-1}\frac{\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\,\nabla_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \quad \tau\sim P_\mathcal{T}(\tau|\theta) \tag{87}$$
When integrated, this yields the LVC "surrogate" objective $J^{LVC}$:
$$J^{LVC} = \sum_{t=0}^{H-1}\frac{\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \quad \tau\sim P_\mathcal{T}(\tau|\theta) \tag{88}$$
The gradients of $J^{LVC}$ match $\nabla_\theta J^{DICE}$ and resemble an unbiased policy gradient estimate:
\begin{align}
\nabla_\theta J^{LVC} &= \sum_{t=0}^{H-1}\frac{\nabla_\theta\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{89}\\
&= \sum_{t=0}^{H-1}\frac{\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\,\nabla_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{90}\\
&\to \sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{91}
\end{align}
The respective Hessian can be obtained by differentiating (90):
\begin{align}
\nabla^2_\theta J^{LVC} &= \nabla_\theta\sum_{t=0}^{H-1}\frac{\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\,\nabla_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{92}\\
&= \sum_{t=0}^{H-1}\frac{\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\,\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^\top\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{93}\\
&\quad + \frac{\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\,\nabla^2_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{94}\\
&\to \sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^\top\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{95}\\
&\quad + \nabla^2_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \tag{96}\\
&= \sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})^\top\Big)\, r(s_t,a_t) \tag{97}\\
&\quad + \Big(\sum_{t'=0}^{t}\nabla^2_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t) \tag{98}
\end{align}
In expectation, $\nabla^2_\theta J^{LVC}$ is equivalent to $H_1 + H_2$:
\begin{align}
\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[\nabla^2_\theta J^{LVC}\big] &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})\nabla_\theta\log\pi_\theta(a_{t'}|s_{t'})^\top\Big)\, r(s_t,a_t)\Big] \tag{99}\\
&\quad + \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\Big(\sum_{t'=0}^{t}\nabla^2_\theta\log\pi_\theta(a_{t'}|s_{t'})\Big)\, r(s_t,a_t)\Big] \tag{100}\\
&= H_1 + H_2 \tag{101}
\end{align}
The Hessian $\nabla^2_\theta J^{LVC}$ no longer provides an unbiased estimate of $\nabla^2_\theta J_{inner}$ since it neglects the matrix term $H_{12} + H_{12}^\top$. This approximation is based on the assumption that the state visitation distribution is locally unaffected by marginal changes in θ and leads to a substantial reduction of variance in the hessian estimate. Furmston et al. (2016) show that under certain conditions (i.e. infinite horizon MDP, sufficiently rich policy parameterisation) the term $H_{12} + H_{12}^\top$ vanishes around a local optimum $\theta^*$. Given that the conditions hold, this implies that $\mathbb{E}_\tau[\nabla^2_\theta J^{LVC}] \to \mathbb{E}_\tau[\nabla^2_\theta J^{DICE}]$ as $\theta\to\theta^*$, i.e. the bias of the LVC estimator becomes negligible close to the local optimum. The experiments in section 7.2 confirm this theoretical argument empirically and show that using the low variance curvature estimates obtained through $J^{LVC}$ improves the sample-efficiency of meta-learning by a significant margin.
C PROXIMAL POLICY SEARCH METHODS
C.1 MONOTONIC POLICY IMPROVEMENT THEORY
This section provides a brief introduction to policy performance bounds and the theory of monotonic policy improvement in the setting of reinforcement learning. While Section 6 discusses the extension of this theory to meta-learning, the following explanations assume a standard RL setting where T is exogenously given. Hence, we will omit mentioning the dependence on T for notational brevity. Since the monotonic policy improvement framework relies on infinite-time horizon MDPs, we assume H → ∞ for the remainder of this chapter.
In addition to the expected reward J(π) under policy π, we will use the state value function V π , the state-action value function Qπ as well as the advantage function Aπ:
\begin{align}
V^\pi(s) &= \mathbb{E}_{a_0,s_1,\dots}\Big[\sum_{t=0}^{\infty}\gamma^t r(s_t,a_t)\,\Big|\,s_0 = s\Big]\\
Q^\pi(s,a) &= \mathbb{E}_{s_1,a_1,\dots}\Big[\sum_{t=0}^{\infty}\gamma^t r(s_t,a_t)\,\Big|\,s_0 = s,\,a_0 = a\Big] = r(s,a) + \gamma\,\mathbb{E}_{s'\sim p(s'|s,a)}\big[V^\pi(s')\big]\\
A^\pi(s,a) &= Q^\pi(s,a) - V^\pi(s)
\end{align}
with $a_t\sim\pi(a_t|s_t)$ and $s_{t+1}\sim p(s_{t+1}|s_t,a_t)$. The expected return under a policy π̃ can be expressed as the sum of the expected return of another policy π and the expected discounted advantage of π̃ over π (see Schulman et al. (2015a) for proof):
$$J(\tilde\pi) = J(\pi) + \mathbb{E}_{\tau\sim P(\tau,\tilde\pi)}\Big[\sum_{t=0}^{\infty}\gamma^t A^\pi(s_t,a_t)\Big]$$
Let $d^\pi$ denote the discounted state visitation frequency:
$$d^\pi(s) = \sum_{t=0}^{\infty}\gamma^t\, p(s_t = s\,|\,\pi)$$
We can use dπ to express the expectation over trajectories τ ∼ pπ(τ) in terms of states and actions:
$$J(\tilde\pi) = J(\pi) + \mathbb{E}_{s\sim d^{\tilde\pi}(s),\ a\sim\tilde\pi(a|s)}\big[A^\pi(s,a)\big] \tag{102}$$
Local policy search aims to find a policy update π → π̃ in the proximity of π so that J(π̃) is maximized. Since J(π) is not affected by the policy update π → π̃, it is sufficient to maximize the expected advantage under π̃. However, the complex dependence of dπ̃(s) on π̃ makes it hard to directly maximize the objective in (102). Using a local approximation of (102) where it is assumed that the state visitation frequencies dπ and dπ̃ are identical, the optimization can be phrased as
$$\tilde{J}_\pi(\tilde\pi) = J(\pi) + \mathbb{E}_{s\sim d^{\pi}(s),\ a\sim\tilde\pi(a|s)}\big[A^\pi(s,a)\big] = J(\pi) + \mathbb{E}_{s\sim d^{\pi}(s),\ a\sim\pi(a|s)}\Big[\frac{\tilde\pi(a|s)}{\pi(a|s)}\,A^\pi(s,a)\Big] \tag{103}$$
In the following we refer to J̃(π̃) as surrogate objective. It can be shown that the surrogate objective J̃ matches J to first order when π = π̃ (see Kakade & Langford (2002)). If πθ is a parametric and differentiable function with parameter vector θ, this means that for any θo:
$$\tilde{J}_{\pi_{\theta_o}}(\pi_{\theta_o}) = J_{\pi_{\theta_o}}(\pi_{\theta_o}) \quad \text{and} \quad \nabla_\theta\tilde{J}_{\pi_{\theta_o}}(\pi_\theta)\big|_{\theta_o} = \nabla_\theta J_{\pi_{\theta_o}}(\pi_\theta)\big|_{\theta_o} \tag{104}$$
When π 6= π̃, an approximation error of the surrogate objective J̃ w.r.t. to the true objective J is introduced. Achiam et al. (2017) derive a lower bound for the true expected return of π̃:
$$J(\tilde\pi) \ge J_\pi(\tilde\pi) - C\sqrt{\mathbb{E}_{s\sim d^\pi}\big[D_{KL}[\tilde\pi(\cdot|s)\,\|\,\pi(\cdot|s)]\big]} = J_\pi(\tilde\pi) - C\sqrt{\bar{D}_{KL}[\tilde\pi\,\|\,\pi]} \tag{105}$$
with $C = \frac{\sqrt{2}\,\gamma}{1-\gamma}\max_s\big|\mathbb{E}_{a\sim\tilde\pi(\cdot|s)}[A^\pi(s,a)]\big|$
C.2 TRUST REGION POLICY OPTIMIZATION (TRPO)
Trust region policy optimization (TRPO) (Schulman et al., 2015a) attempts to approximate the bound in (105) by phrasing local policy search as a constrained optimization problem:
$$\arg\max_\theta\ \mathbb{E}_{s\sim d^{\pi_{\theta_o}}(s),\ a\sim\pi_{\theta_o}(a|s)}\Big[\frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)}\,A^{\pi_{\theta_o}}(s,a)\Big] \quad \text{s.t.} \quad \bar{D}_{KL}[\pi_{\theta_o}\,\|\,\pi_\theta] \le \delta \tag{106}$$
Thereby the KL-constraint δ induces a local trust region around the current policy $\pi_{\theta_o}$. A practical implementation of TRPO uses a quadratic approximation of the KL-constraint which leads to the following update rule:
$$\theta \leftarrow \theta + \sqrt{\frac{2\delta}{g^\top F g}}\, F^{-1} g \tag{107}$$
with $g := \nabla_\theta\,\mathbb{E}_{s\sim d^{\pi_{\theta_o}}(s),\ a\sim\pi_{\theta_o}(a|s)}\big[\frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)}\,A^{\pi_{\theta_o}}(s,a)\big]$ being the gradient of the objective and $F = \nabla^2_\theta\bar{D}_{KL}[\pi_{\theta_o}\,\|\,\pi_\theta]$ the Fisher information matrix of the current policy $\pi_{\theta_o}$. In order to avoid the cubic time complexity that arises when inverting F, the Conjugate Gradient (CG) algorithm is typically used to approximate $F^{-1}g$ using only Fisher-vector products.
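A standard CG routine for this purpose might look as follows (sketch; `fvp` is a user-supplied Fisher-vector-product callable):

```python
# Conjugate gradient to approximately solve F x = g using only Fisher-vector
# products fvp(v), avoiding explicit formation or inversion of F.
import torch

def conjugate_gradient(fvp, g: torch.Tensor, iters: int = 10, tol: float = 1e-10):
    x = torch.zeros_like(g)
    r, p = g.clone(), g.clone()   # residual and search direction
    rs_old = torch.dot(r, r)
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rs_old / (torch.dot(p, Fp) + 1e-12)
        x = x + alpha * p
        r = r - alpha * Fp
        rs_new = torch.dot(r, r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x  # approximates F^{-1} g
```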
C.3 PROXIMAL POLICY OPTIMIZATION (PPO)
While TRPO is framed as constrained optimization, the theory discussed in Appendix C.1 suggests optimizing the lower bound. Based on this insight, Schulman et al. (2017) propose adding a KL penalty to the objective and solve the following unconstrained optimization problem:
$$\arg\max_\theta\ \mathbb{E}_{s\sim d^{\pi_{\theta_o}}(s),\ a\sim\pi_{\theta_o}(a|s)}\Big[\frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)}\,A^{\pi_{\theta_o}}(s,a) - \beta\,D_{KL}[\pi_{\theta_o}(\cdot|s)\,\|\,\pi_\theta(\cdot|s)]\Big] \tag{108}$$
However, they also show that it is not sufficient to set a fixed penalty coefficient β and propose two alternative methods, known as Proximal Policy Optimization (PPO), that aim to alleviate this issue:
1) Adapting the KL coefficient β so that a desired target KL-divergence D̄KL[πθo ||πθ] between the policy before and after the parameter update is achieved
2) Clipping the likelihood ratio so that the optimization has no incentive to move the policy πθ too far away from the original policy πθo . A corresponding optimization objective reads as:
$$J^{CLIP} = \mathbb{E}_{s\sim d^{\pi_{\theta_o}}(s),\ a\sim\pi_{\theta_o}(a|s)}\Big[\min\Big(\frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)}\,A^{\pi_{\theta_o}}(s,a)\,,\ \mathrm{clip}^{1+\epsilon}_{1-\epsilon}\Big(\frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)}\Big)\,A^{\pi_{\theta_o}}(s,a)\Big)\Big] \tag{109}$$
Empirical results show that the latter approach leads to better learning performance (Schulman et al., 2017).
Since PPO
1. What is the focus of the paper regarding meta-reinforcement learning?
2. What are the strengths of the proposed approach in comparison to other methods like MAML, E-MAML-TRPO, LVC-VPG, and DiCE?
3. Do you have any concerns or suggestions regarding the experimental analysis?
4. How does the reviewer assess the significance and practicality of the work?
5. Are there any limitations or areas for improvement in the proposed method? | Review | Review
In this paper, the author proposed an efficient surrogate loss for estimating Hessian in the setting of Meta-reinforcement learning (Finn.et al, 2017), which significantly reduce the variance while introducing small bias. The author verified their proposed method with other meta-learning algorithms on the Mujoco benchmarks. The author also compared with unbiased higher order gradient estimation method-DiCE in terms of gradient variance and average return.
The work is essentially important due to the need for second-order gradient estimation for meta-learning (Finn et al., 2017) and other related work such as multi-agent RL. The results look promising and the method is easy to implement. I have two detail questions about the experiment:
1) As the author states, the new proposed method introduces bias while reducing variance significantly. It is necessary to examine the MSE, Bias, Variance of the gradient estimatorsquantitatively for the proposed and related baseline methods (including MAML, E-MAML-TRPO, LVC-VPG, etc). If the bias is not a big issue empirically, the proposed method is good to use in practice.
2) The author should add DiCE in the benchmark in section 7.1, which will verify its advantage over DiCE thoroughly.
Overall this is a good paper and I vote for acceptance.
Finn, Chelsea, et al. "Model-agnostic meta-learning for fast adaptation of deep networks." ICML 2017.
Foerster, Jakob, et al. "DiCE: The Infinitely Differentiable Monte-Carlo Estimator." ICML 2018. |
ICLR | Title
ProMP: Proximal Meta-Policy Search
Abstract
Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies. This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm enables efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.
1 INTRODUCTION
A remarkable trait of human intelligence is the ability to adapt to new situations in the face of limited experience. In contrast, our most successful artificial agents struggle in such scenarios. While achieving impressive results, they suffer from high sample complexity in learning even a single task, fail to generalize to new situations, and require large amounts of additional data to successfully adapt to new environments. Meta-learning addresses these shortcomings by learning how to learn. Its objective is to learn an algorithm that allows the artificial agent to succeed in an unseen task when only limited experience is available, aiming to achieve the same fast adaptation that humans possess (Schmidhuber, 1987; Thrun & Pratt, 1998).
Despite recent progress, deep reinforcement learning (RL) still relies heavily on hand-crafted features and reward functions as well as engineered problem specific inductive bias. Meta-RL aims to forego such reliance by acquiring inductive bias in a data-driven manner. Recent work proves this approach to be promising, demonstrating that Meta-RL allows agents to obtain a diverse set of skills, attain better exploration strategies, and learn faster through meta-learned dynamics models or synthetic returns (Duan et al., 2016; Xu et al., 2018; Gupta et al., 2018b; Saemundsson et al., 2018).
Meta-RL is a multi-stage process in which the agent, after a few sampled environment interactions, adapts its behavior to the given task. Despite its wide utilization, little work has been done to promote theoretical understanding of this process, leaving Meta-RL grounded on unstable foundations. Although the behavior prior to the adaptation step is instrumental for task identification, the interplay between pre-adaptation sampling and posterior performance of the policy remains poorly understood. In fact, prior work in gradient-based Meta-RL has either entirely neglected credit assignment to the pre-update distribution (Finn et al., 2017) or implemented such credit assignment in a naive way (Al-Shedivat et al., 2018; Stadie et al., 2018).
To our knowledge, we provide the first formal in-depth analysis of credit assignment w.r.t. pre-adaptation sampling distribution in Meta-RL. Based on our findings, we develop a novel Meta-RL algorithm. First, we analyze two distinct methods for assigning credit to pre-adaptation behavior. ∗Authors contributed equally to this work
We show that the recent formulation introduced by Al-Shedivat et al. (2018) and Stadie et al. (2018) leads to poor credit assignment, while the MAML formulation (Finn et al., 2017) potentially yields superior meta-policy updates. Second, based on insights from our formal analysis, we highlight both the importance and difficulty of proper meta-policy gradient estimates. In light of this, we propose the low variance curvature (LVC) surrogate objective which yields gradient estimates with a favorable bias-variance trade-off. Finally, building upon the LVC estimator we develop Proximal Meta-Policy Search (ProMP), an efficient and stable meta-learning algorithm for RL. In our experiments, we show that ProMP consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.
2 RELATED WORK
Meta-Learning concerns the question of “learning to learn”, aiming to acquire inductive bias in a data driven manner, so that the learning process in face of unseen data or new problem settings is accelerated (Schmidhuber, 1987; Schmidhuber et al., 1997; Thrun & Pratt, 1998).
This can be achieved in various ways. One category of methods attempts to learn the "learning program" of a universal Turing machine in the form of a recurrent / memory-augmented model that ingests datasets and either outputs the parameters of the trained model (Hochreiter et al., 2001; Andrychowicz et al., 2016; Chen et al., 2017; Ravi & Larochelle, 2017) or directly outputs predictions for given test inputs (Duan et al., 2016; Santoro et al., 2016; Mishra et al., 2018). Though very flexible and capable of learning very efficient adaptations, such methods lack performance guarantees and are difficult to train on long sequences that arise in Meta-RL.
Another set of methods embeds the structure of a classical learning algorithm in the meta-learning procedure, and optimizes the parameters of the embedded learner during the meta-training (Hüsken & Goerick, 2000; Finn et al., 2017; Nichol et al., 2018; Miconi et al., 2018). A particular instance of the latter that has proven to be particularly successful in the context of RL is gradient-based meta-learning (Finn et al., 2017; Al-Shedivat et al., 2018; Stadie et al., 2018). Its objective is to learn an initialization such that after one or few steps of policy gradients the agent attains full performance on a new task. A desirable property of this approach is that even if fast adaptation fails, the agent just falls back on vanilla policy-gradients. However, as we show, previous gradient-based Meta-RL methods either neglect or perform poor credit assignment w.r.t. the pre-update sampling distribution.
A diverse set of methods building on Meta-RL has recently been introduced. This includes: learning exploration strategies (Gupta et al., 2018b), synthetic rewards (Sung et al., 2017; Xu et al., 2018), unsupervised policy acquisition (Gupta et al., 2018a), model-based RL (Clavera et al., 2018; Saemundsson et al., 2018), learning in competitive environments (Al-Shedivat et al., 2018) and meta-learning modular policies (Frans et al., 2018; Alet et al., 2018). Many of the mentioned approaches build on previous gradient-based meta-learning methods that insufficiently account for the pre-update distribution. ProMP overcomes these deficiencies, providing the necessary framework for novel applications of Meta-RL in unsolved problems.
3 BACKGROUND
Reinforcement Learning. A discrete-time finite Markov decision process (MDP), $\mathcal{T}$, is defined by the tuple $(\mathcal{S},\mathcal{A},p,p_0,r,H)$. Here, $\mathcal{S}$ is the set of states, $\mathcal{A}$ the action space, $p(s_{t+1}|s_t,a_t)$ the transition distribution, $p_0$ represents the initial state distribution, $r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is a reward function, and H the time horizon. We omit the discount factor γ in the following elaborations for notational brevity. However, it is straightforward to include it by substituting the reward by $r(s_t,a_t) := \gamma^t r(s_t,a_t)$. We define the return R(τ) as the sum of rewards along a trajectory $\tau := (s_0,a_0,\dots,s_{H-1},a_{H-1},s_H)$. The goal of reinforcement learning is to find a policy π(a|s) that maximizes the expected return $\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\pi)}[R(\tau)]$.
Meta-Reinforcement Learning goes one step further, aiming to learn a learning algorithm which is able to quickly learn the optimal policy for a task T drawn from a distribution of tasks ρ(T ). Each task T corresponds to a different MDP. Typically, it is assumed that the distribution of tasks share the action and state space, but may differ in their reward function or their dynamics.
Gradient-based meta-learning aims to solve this problem by learning the parameters θ of a policy πθ such that performing a single or few steps of vanilla policy gradient (VPG) with the given task leads to the optimal policy for that task. This meta-learning formulation, also known under the name
of MAML, was first introduced by Finn et al. (2017). We refer to it as formulation I which can be expressed as maximizing the objective
$$J^{I}(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\big[\mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')]\big] \quad \text{with} \quad \theta' := U(\theta,\mathcal{T}) = \theta + \alpha\,\nabla_\theta\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)]$$
In that U denotes the update function which depends on the task T, and performs one VPG step towards maximizing the performance of the policy in T. For notational brevity and conciseness we assume a single policy gradient adaptation step. Nonetheless, all presented concepts can easily be extended to multiple adaptation steps.
Later work proposes a slightly different notion of gradient-based Meta-RL, also known as E-MAML, that attempts to circumvent issues with the meta-gradient estimation in MAML (Al-Shedivat et al., 2018; Stadie et al., 2018):
$$J^{II}(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\Big[\mathbb{E}_{\tau^{1:N}\sim P_\mathcal{T}(\tau^{1:N}|\theta),\ \tau'\sim P_\mathcal{T}(\tau'|\theta')}\big[R(\tau')\big]\Big] \quad \text{with} \quad \theta' := U(\theta,\tau^{1:N}) = \theta + \alpha\,\nabla_\theta\sum_{n=1}^{N}\big[R(\tau^{(n)})\big]$$
Formulation II views U as a deterministic function that depends on N sampled trajectories from a specific task. In contrast to formulation I, the expectation over pre-update trajectories τ is applied outside of the update function. Throughout this paper we refer to πθ as pre-update policy, and πθ′ as post-update policy.
4 SAMPLING DISTRIBUTION CREDIT ASSIGNMENT
This section analyzes the two gradient-based Meta-RL formulations introduced in Section 3. Figure 1 illustrates the stochastic computation graphs (Schulman et al., 2015b) of both formulations. The red arrows depict how credit assignment w.r.t the pre-update sampling distribution PT (τ |θ) is propagated. Formulation I (left) propagates the credit assignment through the update step, thereby exploiting the full problem structure. In contrast, formulation II (right) neglects the inherent structure, directly assigning credit from post-update return R′ to the pre-update policy πθ which leads to noisier, less effective credit assignment.
Both formulations optimize for the same objective, and are equivalent at the 0th order. However, because of the difference in their formulation and stochastic computation graph, their gradients and the resulting optimization steps differ. In the following, we shed light on how and where formulation II loses signal by analyzing the gradients of both formulations, which can be written as (see Appendix A for more details and derivations)
$$\nabla_\theta J(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\Big[\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta),\ \tau'\sim P_\mathcal{T}(\tau'|\theta')}\big[\nabla_\theta J_{post}(\tau,\tau') + \nabla_\theta J_{pre}(\tau,\tau')\big]\Big] \tag{1}$$
The first term ∇θJpost(τ , τ ′) is equal in both formulations, but the second term, ∇θJpre(τ , τ ′), differs between them. In particular, they correspond to
$$\nabla_\theta J_{post}(\tau,\tau') = \underbrace{\nabla_{\theta'}\log\pi_{\theta'}(\tau')\,R(\tau')}_{\nabla_{\theta'}J^{outer}}\underbrace{\big(I + \alpha R(\tau)\,\nabla^2_\theta\log\pi_\theta(\tau)\big)}_{\text{transformation from }\theta'\text{ to }\theta} \tag{2}$$
$$\nabla_\theta J^{II}_{pre}(\tau,\tau') = \alpha\,\nabla_\theta\log\pi_\theta(\tau)\,R(\tau') \tag{3}$$
$$\nabla_\theta J^{I}_{pre}(\tau,\tau') = \alpha\,\nabla_\theta\log\pi_\theta(\tau)\Big(\underbrace{\big(\nabla_\theta\log\pi_\theta(\tau)R(\tau)\big)^\top}_{\nabla_\theta J^{inner}}\underbrace{\big(\nabla_{\theta'}\log\pi_{\theta'}(\tau')R(\tau')\big)}_{\nabla_{\theta'}J^{outer}}\Big) \tag{4}$$
$\nabla_\theta J_{post}(\tau,\tau')$ simply corresponds to a policy gradient step on the post-update policy $\pi_{\theta'}$ w.r.t. θ′, followed by a linear transformation from post- to pre-update parameters. It corresponds to increasing the likelihood of the trajectories τ′ that led to higher returns. However, this term does not optimize for the pre-update sampling distribution, i.e., which trajectories τ led to better adaptation steps.
The credit assignment w.r.t. the pre-update sampling distribution is carried out by the second term. In formulation II, $\nabla_\theta J^{II}_{pre}$ can be viewed as standard reinforcement learning on $\pi_\theta$ with R(τ′) as reward signal, treating the update function U as part of the unknown dynamics of the system. This shifts the pre-update sampling distribution to better adaptation steps.
Formulation I takes the causal dependence of $P_\mathcal{T}(\tau'|\theta')$ on $P_\mathcal{T}(\tau|\theta)$ into account. It does so by maximizing the inner product of pre-update and post-update policy gradients (see Eq. 4). This steers the pre-update policy towards 1) larger post-update returns, 2) larger adaptation steps $\alpha\nabla_\theta J^{inner}$, and 3) better alignment of pre- and post-update policy gradients (Li et al., 2017; Nichol et al., 2018). When combined, these effects directly optimize for adaptation. As a result, we expect the first meta-policy gradient formulation, $J^I$, to yield superior learning properties.
5 LOW VARIANCE CURVATURE ESTIMATOR
In the previous section we show that the formulation introduced by Finn et al. (2017) results in superior meta-gradient updates, which should in principle lead to improved convergence properties. However, obtaining correct and low variance estimates of the respective meta-gradients proves challenging. As discussed by Foerster et al. (2018), and shown in Appendix B.3, the score function surrogate objective approach is ill suited for calculating higher order derivatives via automatic differentiation toolboxes. This important fact was overlooked in the original RL-MAML implementation (Finn et al., 2017) leading to incorrect meta-gradient estimates1. As a result, ∇θJpre does not appear in the gradients of the meta-objective (i.e. ∇θJ = ∇θJpost). Hence, MAML does not perform any credit assignment to pre-adaptation behavior.
But, even when properly implemented, we show that the meta-gradients exhibit high variance. Specifically, the estimation of the hessian of the RL-objective, which is inherent in the meta-gradients, requires special consideration. In this section, we motivate and introduce the low variance curvature estimator (LVC): an improved estimator for the hessian of the RL-objective which promotes better meta-policy gradient updates. As we show in Appendix A.1, we can write the gradient of the meta-learning objective as
$$\nabla_\theta J^{I}(\theta) = \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\Big[\mathbb{E}_{\tau'\sim P_\mathcal{T}(\tau'|\theta')}\big[\nabla_{\theta'}\log P_\mathcal{T}(\tau'|\theta')\,R(\tau')\,\nabla_\theta U(\theta,\mathcal{T})\big]\Big] \tag{5}$$
Since the update function U resembles a policy gradient step, its gradient∇θU(θ, T ) involves computing the hessian of the reinforcement learning objective, i.e., ∇2θ Eτ∼PT (τ |θ) [R(τ )]. Estimating this hessian has been discussed in Baxter & Bartlett (2001) and Furmston et al. (2016). In the infinite horizon MDP case, Baxter & Bartlett (2001) derived a decomposition of the hessian. We extend their finding to the finite horizon case, showing that the hessian can be decomposed into three matrix terms (see Appendix B.2 for proof):
$$\nabla_\theta U(\theta,\mathcal{T}) = I + \alpha\,\nabla^2_\theta\,\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] = I + \alpha\big(H_1 + H_2 + H_{12} + H_{12}^\top\big) \tag{6}$$
whereby
\begin{align}
H_1 &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^\top\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big]\\
H_2 &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla^2_\theta\log\pi_\theta(a_t|s_t)\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big)\Big]\\
H_{12} &= \mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\Big[\sum_{t=0}^{H-1}\nabla_\theta\log\pi_\theta(a_t|s_t)\,\nabla_\theta Q^{\pi_\theta}_t(s_t,a_t)^\top\Big]
\end{align}
1Note that MAML is theoretically sound, but does not attend to correctly estimating the meta-policy gradients. As consequence, the gradients in the corresponding implementation do not comply with the theory.
Here $Q^{\pi_\theta}_t(s_t,a_t) = \mathbb{E}_{\tau^{t+1:H-1}\sim P_\mathcal{T}(\cdot|\theta)}\big[\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\,\big|\,s_t,a_t\big]$ denotes the expected state-action value function under policy $\pi_\theta$ at time t.
Computing the expectation of the RL-objective is in general intractable. Typically, its gradients are computed with a Monte Carlo estimate based on the policy gradient theorem (Eq. 82). In practical implementations, such an estimate is obtained by automatically differentiating a surrogate objective (Schulman et al., 2015b). However, this results in a highly biased hessian estimate which just computes $H_2$, entirely dropping the terms $H_1$ and $H_{12} + H_{12}^\top$. In the notation of the previous section, it leads to neglecting the $\nabla_\theta J_{pre}$ term, ignoring the influence of the pre-update sampling distribution. The issue can be overcome using the DiCE formulation, which allows computing unbiased higher-order Monte Carlo estimates of arbitrary stochastic computation graphs (Foerster et al., 2018). The DiCE-RL objective can be rewritten as follows
$$J^{DiCE}(\tau) = \sum_{t=0}^{H-1}\Big(\prod_{t'=0}^{t}\frac{\pi_\theta(a_{t'}|s_{t'})}{\bot(\pi_\theta(a_{t'}|s_{t'}))}\Big)\, r(s_t,a_t) \quad \tau\sim P_\mathcal{T}(\tau) \tag{7}$$
$$\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[\nabla^2_\theta J^{DiCE}(\tau)\big] = H_1 + H_2 + H_{12} + H_{12}^\top \tag{8}$$
In that, ⊥ denotes the “stop gradient” operator, i.e., ⊥(fθ(x))→ fθ(x) but ∇θ⊥(fθ(x))→ 0. The sequential dependence of πθ(at|st) within the trajectory, manifesting itself through the product of importance weights in (7), results in high variance estimates of the hessian ∇2θ Eτ∼PT (τ |θ) [R(τ )]. As noted by Furmston et al. (2016), H12 is particularly difficult to estimate, since it involves three nested sums along the trajectory. In section 7.2 we empirically show that the high variance estimates of the DiCE objective lead to noisy meta-policy gradients and poor learning performance.
To facilitate sample-efficient meta-learning, we introduce the low variance curvature (LVC) estimator:
$$J^{LVC}(\tau) = \sum_{t=0}^{H-1}\frac{\pi_\theta(a_t|s_t)}{\bot(\pi_\theta(a_t|s_t))}\Big(\sum_{t'=t}^{H-1} r(s_{t'},a_{t'})\Big) \quad \tau\sim P_\mathcal{T}(\tau) \tag{9}$$
$$\mathbb{E}_{\tau\sim P_\mathcal{T}(\tau|\theta)}\big[\nabla^2_\theta J^{LVC}(\tau)\big] = H_1 + H_2 \tag{10}$$
By removing the sequential dependence of $\pi_\theta(a_t|s_t)$ within trajectories, the hessian estimate neglects the term $H_{12} + H_{12}^\top$, which leads to a variance reduction but makes the estimate biased. The choice of this objective function is motivated by findings in Furmston et al. (2016): under certain conditions the term $H_{12} + H_{12}^\top$ vanishes around local optima $\theta^*$, i.e., $\mathbb{E}_\tau[\nabla^2_\theta J^{LVC}] \to \mathbb{E}_\tau[\nabla^2_\theta J^{DiCE}]$ as $\theta \to \theta^*$. Hence, the bias of the LVC estimator becomes negligible close to local optima. The experiments in section 7.2 underpin the theoretical findings, showing that the low variance hessian estimates obtained through $J^{LVC}$ improve the sample-efficiency of meta-learning by a significant margin when compared to $J^{DiCE}$. We refer the interested reader to Appendix B for derivations and a more detailed discussion.
6 PROMP: PROXIMAL META-POLICY SEARCH
Building on the previous sections, we develop a novel meta-policy search method based on the low variance curvature objective which aims to solve the following optimization problem:
\max_\theta\; \mathbb{E}_{\mathcal{T} \sim \rho(\mathcal{T})}\left[ \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')] \right] \quad \text{with} \quad \theta' := \theta + \alpha \nabla_\theta \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ J^{LVC}(\tau) \right] \qquad (11)
Prior work has optimized this objective using either vanilla policy gradient (VPG) or TRPO (Schulman et al., 2015a). TRPO promises to be more data-efficient and stable during the learning process when compared to VPG. However, it requires computing the Fisher information matrix (FIM). Estimating the FIM is particularly problematic in the meta-learning setup: the meta-policy gradients already involve second order derivatives and, as a result, the time complexity of the FIM estimate is cubic in the number of policy parameters. Typically, the problem is circumvented using finite difference methods, which introduce further approximation errors.
The recently introduced PPO algorithm (Schulman et al., 2017) achieves comparable results to TRPO with the advantage of being a first order method. PPO uses a surrogate clipping objective which allows it to safely take multiple gradient steps without re-sampling trajectories.
J^{CLIP}_\mathcal{T}(\theta) = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau, \theta_o)}\left[ \sum_{t=0}^{H-1} \min\left( \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_o}(a_t|s_t)} A^{\pi_{\theta_o}}(s_t, a_t)\,,\; \text{clip}_{1-\epsilon}^{1+\epsilon}\!\left( \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_o}(a_t|s_t)} \right) A^{\pi_{\theta_o}}(s_t, a_t) \right) \right]
Algorithm 1 Proximal Meta-Policy Search (ProMP)
Require: Task distribution ρ, step sizes α, β, KL-penalty coefficient η, clipping range ε
1: Randomly initialize θ
2: while θ not converged do
3:    Sample batch of tasks Ti ∼ ρ(T )
4:    for step n = 0, ..., N − 1 do
5:       if n = 0 then
6:          Set θo ← θ
7:          for all Ti ∼ ρ(T ) do
8:             Sample pre-update trajectories Di = {τi} from Ti using πθ
9:             Compute adapted parameters θ′o,i ← θ + α ∇θ J^{LR}_{Ti}(θ) with Di = {τi}
10:            Sample post-update trajectories D′i = {τ′i} from Ti using πθ′o,i
11:      Update θ ← θ + β Σ_{Ti} ∇θ J^{ProMP}_{Ti}(θ) using each D′i = {τ′i}
In the case of Meta-RL, it does not suffice to just replace the post-update reward objective with J^{CLIP}_T. In order to safely perform multiple meta-gradient steps based on the same sampled data from a recent policy πθo, we also need to 1) account for changes in the pre-update action distribution πθ(at|st), and 2) bound changes in the pre-update state visitation distribution (Kakade & Langford, 2002).
We propose Proximal Meta-Policy Search (ProMP) which incorporates both the benefits of proximal policy optimization and the low variance curvature objective (see Alg. 1). In order to comply with requirement 1), ProMP replaces the “stop gradient” importance weight \frac{\pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} by the likelihood ratio \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_o}(a_t|s_t)}, which results in the following objective
J^{LR}_\mathcal{T}(\theta) = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau, \theta_o)}\left[ \sum_{t=0}^{H-1} \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_o}(a_t|s_t)} A^{\pi_{\theta_o}}(s_t, a_t) \right] \qquad (12)
An important feature of this objective is that its derivatives w.r.t. θ evaluated at θo are identical to those of the LVC objective, and it additionally accounts for changes in the pre-update action distribution. To satisfy condition 2) we extend the clipped meta-objective with a KL-penalty term between πθ and πθo. This KL-penalty term enforces a soft local “trust region” around πθo, preventing the shift in state visitation distribution from becoming large during optimization. This enables us to take multiple meta-policy gradient steps without re-sampling. Altogether, ProMP optimizes
J^{ProMP}_\mathcal{T}(\theta) = J^{CLIP}_\mathcal{T}(\theta') - \eta \bar{D}_{KL}(\pi_{\theta_o}, \pi_\theta) \quad \text{s.t.} \quad \theta' = \theta + \alpha \nabla_\theta J^{LR}_\mathcal{T}(\theta)\,, \quad \mathcal{T} \sim \rho(\mathcal{T}) \qquad (13)
ProMP consolidates the insights developed throughout the course of this paper, while at the same time making maximal use of recently developed policy gradients algorithms. First, its meta-learning formulation exploits the full structural knowledge of gradient-based meta-learning. Second, it incorporates a low variance estimate of the RL-objective hessian. Third, ProMP controls the statistical distance of both pre- and post-adaptation policies, promoting efficient and stable meta-learning. All in all, ProMP consistently outperforms previous gradient-based meta-RL algorithms in sample complexity, wall clock time, and asymptotic performance (see Section 7.1).
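What follows is a schematic PyTorch sketch of one ProMP meta-objective evaluation (Eqs. 11–13) for a single task, using a toy unit-variance Gaussian policy and random placeholder data; all names are ours and, for readability, the post-update likelihood ratio is taken w.r.t. θo, which is exact only at the first gradient step, where θ′o,i and θ′ coincide.

import torch

torch.manual_seed(0)
H, d = 16, 3
alpha, eta, eps = 0.1, 0.01, 0.2
theta = torch.randn(d, requires_grad=True)          # meta-parameters
theta_o = theta.detach().clone()                     # policy used for sampling
s_pre, a_pre = torch.randn(H, d), torch.randn(H)     # placeholder pre-update data
s_post, a_post = torch.randn(H, d), torch.randn(H)   # placeholder post-update data
adv_pre, adv_post = torch.randn(H), torch.randn(H)   # placeholder advantage estimates

def log_prob(params, s, a):                          # unit-variance Gaussian policy
    return -0.5 * (a - s @ params) ** 2              # log-density up to a constant

# Inner adaptation with the likelihood-ratio objective J^LR (Eq. 12);
# create_graph=True keeps theta_prime differentiable w.r.t. theta.
j_lr = (torch.exp(log_prob(theta, s_pre, a_pre) - log_prob(theta_o, s_pre, a_pre)) * adv_pre).sum()
theta_prime = theta + alpha * torch.autograd.grad(j_lr, theta, create_graph=True)[0]

# Outer clipped objective on post-update data plus KL penalty (Eq. 13).
ratio = torch.exp(log_prob(theta_prime, s_post, a_post) - log_prob(theta_o, s_post, a_post))
j_clip = torch.min(ratio * adv_post, torch.clamp(ratio, 1 - eps, 1 + eps) * adv_post).sum()
kl = 0.5 * ((s_pre @ theta_o - s_pre @ theta) ** 2).mean()  # exact KL for this toy Gaussian policy
(j_clip - eta * kl).backward()                       # meta-gradient lands in theta.grad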
7 EXPERIMENTS
In order to empirically validate the theoretical arguments outlined above, this section provides a detailed experimental analysis that aims to answer the following questions: (i) How does ProMP perform against previous Meta-RL algorithms? (ii) How do the lower variance but biased LVC gradient estimates compare to the high variance, unbiased DiCE estimates? (iii) Do the different formulations result in different pre-update exploration properties? (iv) How do formulation I and formulation II differ in their meta-gradient estimates and convergence properties?
To answer the posed questions, we evaluate our approach on six continuous control Meta-RL benchmark environments based on OpenAI Gym and the Mujoco simulator (Brockman et al., 2016; Todorov et al., 2012). A description of the experimental setup is found in Appendix D. In all experiments, the reported curves are averaged over at least three random seeds. Returns are estimated
based on sampled trajectories from the adapted post-update policies and averaged over sampled tasks. The source code and the experiment data are available on our supplementary website.2
7.1 META-GRADIENT BASED COMPARISON
We compare our method, ProMP, in sample complexity and asymptotic performance to the gradient-based meta-learning approaches MAML-TRPO (Finn et al., 2017) and E-MAML-TRPO (see Fig. 2). Note that MAML corresponds to the original implementation of RL-MAML by Finn et al. (2017), where no credit assignment to the pre-adaptation policy is happening (see Appendix B.3 for details). Moreover, we provide a second study which focuses on the underlying meta-gradient estimator. Specifically, we compare the LVC, DiCE, MAML and E-MAML estimators while optimizing the meta-learning objective with vanilla policy gradient (VPG) ascent. This can be viewed as an ablated version of the algorithms which tries to eliminate the influences of the outer optimizers on the learning performance (see Fig. 3).
These algorithms are benchmarked on six different locomotion tasks that require adaptation: the half-cheetah and walker must switch between running forward and backward, the high-dimensional agents ant and humanoid must learn to adapt to run in different directions in the 2D-plane, and the hopper and walker have to adapt to different configurations of their dynamics.
2https://sites.google.com/view/pro-mp
The results in Figure 2 highlight the strength of ProMP in terms of sample efficiency and asymptotic performance. In the meta-gradient estimator study in Fig. 3, we demonstrate the positive effect of the LVC objective, as it consistently outperforms the other estimators. In contrast, DiCE learns only slowly when compared to the other approaches. As we have motivated mathematically and substantiate empirically in the following experiment, the poor performance of DiCE may be ascribed to the high variance of its meta-gradient estimates. The fact that the results of MAML and E-MAML are comparable underpins the ineffectiveness of the naive pre-update credit assignment (i.e. formulation II), as discussed in section 4.
Results for four additional environments are displayed in Appendix D along with hyperparameter settings, environment specifications and a wall-clock time comparison of the algorithms.
7.2 GRADIENT ESTIMATOR VARIANCE AND ITS EFFECT ON META-LEARNING
In Section 5 we discussed how the DiCE formulation yields unbiased but high variance estimates of the RL-objective hessian and served as motivation for the low variance curvature (LVC) estimator. Here we investigate the meta-gradient variance of both estimators as well as its implications for the learning performance. Specifically, we report the relative standard deviation of the meta-policy gradients as well as the average return throughout the learning process in three of the meta-environments.
The results, depicted in Figure 4, highlight the advantage of the low variance curvature estimate. The trajectory-level dependencies inherent in the DiCE estimator lead to a meta-gradient standard deviation that is on average 60% higher when compared to LVC. As the learning curves indicate, the noisy gradients may be a driving factor for the poor performance of DiCE, impeding sample efficient meta-learning. Meta-policy search based on the LVC estimator leads to substantially better sample-efficiency and asymptotic performance.
In the case of HalfCheetahFwdBack, we observe some unstable learning behavior of LVC-VPG which is most likely caused by the bias of LVC in combination with the naive VPG optimizer. However, the mechanisms in ProMP that ensure proximity w.r.t. the policy's KL-divergence seem to counteract these instabilities during training, giving us a stable and efficient meta-learning algorithm.
7.3 COMPARISON OF INITIAL SAMPLING DISTRIBUTIONS
Here we evaluate the effect of the different objectives on the learned pre-update sampling distribution. We compare the low variance curvature (LVC) estimator with TRPO (LVC-TRPO) against MAML (Finn et al., 2017) and E-MAML-TRPO (Stadie et al., 2018) in a 2D environment on which the exploration behavior can be visualized. Each task of this environment corresponds to reaching a different corner location; however, the 2D agent only experiences reward when it is sufficiently close to the corner (translucent regions of Figure 5). Thus, to successfully identify the task, the agent must explore the different regions. We perform three inner adaptation steps on each task, allowing the agent to fully change its behavior from exploration to exploitation.
The different exploration-exploitation strategies are displayed in Figure 5. Since the MAML implementation does not assign credit to the pre-update sampling trajectory, it is unable to learn a sound exploration strategy for task identification and thus fails to accomplish the task. On the other hand, E-MAML, which corresponds to formulation II, learns to explore in long but random paths: because it can only assign credit to batches of pre-update trajectories, there is no notion of which actions in particular facilitate good task adaptation. As a consequence, the adapted policy slightly misses the task-specific target. The LVC estimator, instead, learns a consistent pattern of exploration, visiting each of the four regions, which it harnesses to fully solve the task.
7.4 GRADIENT UPDATE DIRECTIONS OF THE TWO META-RL FORMULATIONS
To shed more light on the differences of the gradients of formulation I and formulation II, we evaluate the meta-gradient updates and the corresponding convergence to the optimum of both formulations in a simple 1D environment. In this environment, the agent starts at a random position on the real line and has to reach a goal located at the position 1 or -1. In order to visualize the convergence, we parameterize the policy with only two parameters θ0 and θ1. We employ formulation I by optimizing the DiCE objective with VPG, and formulation II by optimizing its (E-MAML) objective with VPG.
Figure 6 depicts meta-gradient updates of the parameters θi for both formulations. Formulation I (red) exploits the internal structure of the adaptation update yielding faster and steadier convergence to the optimum. Due to its inferior credit assignment, formulation II (green) produces noisier gradient estimates leading to worse convergence properties.
8 CONCLUSION
In this paper we propose a novel Meta-RL algorithm, proximal meta-policy search (ProMP), which fully optimizes for the pre-update sampling distribution leading to effective task identification. Our method is the result of a theoretical analysis of gradient-based Meta-RL formulations, based on which we develop the low variance curvature (LVC) surrogate objective that produces low variance meta-policy gradient estimates. Experimental results demonstrate that our approach surpasses previous meta-reinforcement learning approaches in a diverse set of continuous control tasks. Finally, we underpin our theoretical contributions with illustrative examples which further justify the soundness and effectiveness of our method.
ACKNOWLEDGMENTS
Ignasi Clavera was supported by the La Caixa Fellowship. The research leading to these results has received funding from the German Research Foundation (DFG: Deutsche Forschungsgemeinschaft) under Priority Program on Autonomous Learning (SPP 1527) and was supported by Berkeley Deep Drive, Amazon Web Services, and Huawei. Also we thank Abhishek Gupta, Chelsea Finn, and Aviv Tamar for their valuable feedback.
A TWO META-POLICY GRADIENT FORMULATIONS
In this section we discuss two different gradient-based meta-learning formulations, derive their gradients and analyze the differences between them.
A.1 META-POLICY GRADIENT FORMULATION I
The first meta-learning formulation, known as MAML (Finn et al., 2017), views the inner update rule U(θ, T ) as a mapping from the pre-update parameter θ and the task T to an adapted policy parameter θ′. The update function can be viewed as a stand-alone procedure that encapsulates sampling from the task-specific trajectory distribution PT (τ |πθ) and updating the policy parameters. Building on this concept, the meta-objective can be written as
J^I(\theta) = \mathbb{E}_{\mathcal{T} \sim \rho(\mathcal{T})}\left[ \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')] \right] \quad \text{with} \quad \theta' := U(\theta, \mathcal{T}) \qquad (14)

The task-specific gradients follow as
\nabla_\theta J^I_\mathcal{T}(\theta) = \nabla_\theta\, \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')] \qquad (15)

= \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}\left[ \nabla_\theta \log P_\mathcal{T}(\tau'|\theta')\, R(\tau') \right] \qquad (16)

= \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}\left[ \nabla_{\theta'} \log P_\mathcal{T}(\tau'|\theta')\, R(\tau')\, \nabla_\theta \theta' \right] \qquad (17)
In order to derive the gradients of the inner update ∇θθ′ = ∇θU(θ, T ) it is necessary to know the structure of U. The main part of this paper assumes the inner update rule to be a vanilla policy gradient step
\nabla_\theta U(\theta, \mathcal{T}) = \nabla_\theta\left( \theta + \alpha \nabla_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] \right) \qquad (18)

= I + \alpha \nabla^2_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] \qquad (19)

Thereby the second term in (19) is the local curvature (hessian) of the inner adaptation objective function. The correct hessian of the inner objective can be derived as follows:
\nabla^2_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] = \nabla_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \nabla_\theta \log \pi_\theta(\tau)\, R(\tau) \right] \qquad (20)

= \nabla_\theta \int P_\mathcal{T}(\tau|\theta)\, \nabla_\theta \log \pi_\theta(\tau)\, R(\tau)\, d\tau \qquad (21)

= \int P_\mathcal{T}(\tau|\theta)\, \nabla_\theta \log \pi_\theta(\tau) \nabla_\theta \log \pi_\theta(\tau)^\top R(\tau) + P_\mathcal{T}(\tau|\theta)\, \nabla^2_\theta \log \pi_\theta(\tau)\, R(\tau)\, d\tau \qquad (22)\text{–}(23)

= \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ R(\tau)\left( \nabla^2_\theta \log \pi_\theta(\tau) + \nabla_\theta \log \pi_\theta(\tau) \nabla_\theta \log \pi_\theta(\tau)^\top \right) \right] \qquad (24)
A.2 META-POLICY GRADIENT FORMULATION II
The second meta-reinforcement learning formulation views the inner update θ′ = U(θ, τ^{1:N}) as a deterministic function of the pre-update policy parameters θ and N trajectories τ^{1:N} ∼ P_T(τ^{1:N}|θ) sampled from the pre-update trajectory distribution. This formulation was introduced in Al-Shedivat et al. (2018) and further discussed with respect to its exploration properties in Stadie et al. (2018).
Viewing U as a function that adapts the policy parameters θ to a specific task T given policy rollouts in this task, the corresponding meta-learning objective can be written as
J^{II}(\theta) = \mathbb{E}_{\mathcal{T} \sim \rho(\mathcal{T})}\left[ \mathbb{E}_{\tau^{1:N} \sim P_\mathcal{T}(\tau^{1:N}|\theta)}\left[ \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}\left[ R(\tau') \right] \right] \right] \quad \text{with} \quad \theta' := U(\theta, \tau^{1:N}) \qquad (25)
Since the first part of the gradient derivation is agnostic to the inner update rule U(θ, τ^{1:N}), we only assume that the inner update function U is differentiable w.r.t. θ. First we rewrite the meta-objective J(θ) as an expectation of task-specific objectives J^{II}_T(θ) under the task distribution. This allows us to express the meta-policy gradients as an expectation of task-specific gradients:
\nabla_\theta J^{II}(\theta) = \mathbb{E}_{\mathcal{T} \sim \rho(\mathcal{T})}\left[ \nabla_\theta J^{II}_\mathcal{T}(\theta) \right] \qquad (26)
The task-specific gradients can be calculated as follows:

\nabla_\theta J^{II}_\mathcal{T}(\theta) = \nabla_\theta\, \mathbb{E}_{\tau^{1:N} \sim P_\mathcal{T}(\tau^{1:N}|\theta)}\left[ \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}\left[ R(\tau') \right] \right] = \nabla_\theta \int\!\!\int R(\tau')\, P_\mathcal{T}(\tau'|\theta')\, P_\mathcal{T}(\tau^{1:N}|\theta)\, d\tau'\, d\tau^{1:N}

= \int\!\!\int R(\tau') \Big( P_\mathcal{T}(\tau'|\theta')\, \nabla_\theta \log P_\mathcal{T}(\tau^{1:N}|\theta)\, P_\mathcal{T}(\tau^{1:N}|\theta) + \nabla_\theta \log P_\mathcal{T}(\tau'|\theta')\, P_\mathcal{T}(\tau'|\theta')\, P_\mathcal{T}(\tau^{1:N}|\theta) \Big)\, d\tau'\, d\tau^{1:N}

= \mathbb{E}_{\substack{\tau^{1:N} \sim P_\mathcal{T}(\tau^{1:N}|\theta) \\ \tau' \sim P_\mathcal{T}(\tau'|\theta')}}\left[ R(\tau')\left( \nabla_\theta \log P_\mathcal{T}(\tau'|\theta') + \sum_{n=1}^{N} \nabla_\theta \log P_\mathcal{T}(\tau^{(n)}|\theta) \right) \right]

= \mathbb{E}_{\substack{\tau^{1:N} \sim P_\mathcal{T}(\tau^{1:N}|\theta) \\ \tau' \sim P_\mathcal{T}(\tau'|\theta')}}\left[ R(\tau')\left( \nabla_{\theta'} \log P_\mathcal{T}(\tau'|\theta')\, \nabla_\theta \theta' + \sum_{n=1}^{N} \nabla_\theta \log P_\mathcal{T}(\tau^{(n)}|\theta) \right) \right]
As in A.1, the structure of U(θ, τ^{1:N}) must be known in order to derive the gradient ∇θθ′. Since we assume the inner update to be vanilla policy gradient, the respective gradient follows as

U(\theta, \tau^{1:N}) = \theta + \alpha \frac{1}{N} \sum_{n=1}^{N} \nabla_\theta \log \pi_\theta(\tau^{(n)})\, R(\tau^{(n)}) \quad \text{with} \quad \nabla_\theta \log \pi_\theta(\tau) = \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t)

The respective gradient of U(θ, τ^{1:N}) follows as

\nabla_\theta U(\theta, \tau^{1:N}) = \nabla_\theta\left( \theta + \alpha \frac{1}{N} \sum_{n=1}^{N} \nabla_\theta \log \pi_\theta(\tau^{(n)})\, R(\tau^{(n)}) \right) \qquad (27)

= I + \alpha \frac{1}{N} \sum_{n=1}^{N} \nabla^2_\theta \log \pi_\theta(\tau^{(n)})\, R(\tau^{(n)}) \qquad (28)
A.3 COMPARING THE GRADIENTS OF THE TWO FORMULATIONS
In the following we analyze the differences between the gradients derived for the two formulations. To do so, we begin with ∇θJIT (θ) by inserting the gradient of the inner adaptation step (19) into (17):
\nabla_\theta J^I_\mathcal{T}(\theta) = \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}\left[ \nabla_{\theta'} \log P_\mathcal{T}(\tau'|\theta')\, R(\tau')\left( I + \alpha \nabla^2_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] \right) \right] \qquad (29)
We can substitute the hessian of the inner objective by its derived expression from (24) and then rearrange the terms. Also note that \nabla_\theta \log P_\mathcal{T}(\tau|\theta) = \nabla_\theta \log \pi_\theta(\tau) = \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t), where H is the MDP horizon.
\nabla_\theta J^I_\mathcal{T}(\theta) = \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}\Big[ \nabla_{\theta'} \log P_\mathcal{T}(\tau'|\theta')\, R(\tau')\Big( I + \alpha\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\big[ R(\tau)\big( \nabla^2_\theta \log \pi_\theta(\tau) + \nabla_\theta \log \pi_\theta(\tau) \nabla_\theta \log \pi_\theta(\tau)^\top \big) \big] \Big) \Big] \qquad (30)\text{–}(31)

= \mathbb{E}_{\substack{\tau \sim P_\mathcal{T}(\tau|\theta) \\ \tau' \sim P_\mathcal{T}(\tau'|\theta')}}\Big[ \underbrace{\nabla_{\theta'} \log \pi_{\theta'}(\tau')\, R(\tau')\big( I + \alpha R(\tau) \nabla^2_\theta \log \pi_\theta(\tau) \big)}_{\nabla_\theta J_{post}(\tau, \tau')} \qquad (32)

+ \underbrace{\alpha\, \nabla_{\theta'} \log \pi_{\theta'}(\tau')\, R(\tau')\, R(\tau)\, \nabla_\theta \log \pi_\theta(\tau) \nabla_\theta \log \pi_\theta(\tau)^\top}_{\nabla_\theta J^I_{pre}(\tau, \tau')} \Big] \qquad (33)
Next, we rearrange the gradient of JII into a similar form as∇θJIT (θ). For that, we start by inserting (28) for∇θθ′ and replacing the expectation over pre-update trajectories τ 1:N by the expectation over a single trajectory τ .
\nabla_\theta J^{II}_\mathcal{T}(\theta) = \mathbb{E}_{\substack{\tau \sim P_\mathcal{T}(\tau|\theta) \\ \tau' \sim P_\mathcal{T}(\tau'|\theta')}}\Big[ \underbrace{R(\tau')\, \nabla_{\theta'} \log \pi_{\theta'}(\tau')\big( I + \alpha R(\tau) \nabla^2_\theta \log \pi_\theta(\tau) \big)}_{\nabla_\theta J_{post}(\tau, \tau')} \qquad (34)

+ \underbrace{R(\tau')\, \nabla_\theta \log \pi_\theta(\tau)}_{\nabla_\theta J^{II}_{pre}(\tau, \tau')} \Big] \qquad (35)
While the first gradient terms match ((32) and (34)), the second terms ((33) and (35)) differ. Since the second gradient term can be viewed as responsible for shifting the pre-update sampling distribution PT (τ |θ) towards higher post-update returns, we refer to it as ∇θJpre(τ, τ′). To further analyze the difference between ∇θJ^I_pre and ∇θJ^{II}_pre we slightly rearrange (33) and put both gradient terms next to each other:
\nabla_\theta J^I_{pre}(\tau, \tau') = \alpha\, \nabla_\theta \log \pi_\theta(\tau)\, \underbrace{\left( \nabla_\theta \log \pi_\theta(\tau)\, R(\tau) \right)^\top}_{\nabla_\theta J^{inner}}\, \underbrace{\left( \nabla_{\theta'} \log \pi_{\theta'}(\tau')\, R(\tau') \right)}_{\nabla_{\theta'} J^{outer}} \qquad (36)

\nabla_\theta J^{II}_{pre}(\tau, \tau') = \alpha\, \nabla_\theta \log \pi_\theta(\tau)\, R(\tau') \qquad (37)
In the following we interpret and compare the derived gradient terms, aiming to provide intuition for the differences between the formulations:
The first gradient term Jpost that matches in both formulations corresponds to a policy gradient step on the post-update policy πθ′. Since θ′ itself is a function of θ, the term ( I + αR(τ)∇²θ log πθ(τ) ) can be seen as a linear transformation of the policy gradient update R(τ′)∇θ′ log πθ′(τ′) from the post-update parameter θ′ into θ. Although Jpost takes into account the functional relationship between θ′ and θ, it does not take into account the pre-update sampling distribution PT (τ |θ). This is where ∇θJpre comes into play: ∇θJ^I_pre can be viewed as a policy gradient update of the pre-update policy πθ w.r.t. the post-update return R(τ′). Hence this gradient term aims to shift the pre-update sampling distribution so that higher post-update returns are achieved. However, ∇θJ^{II}_pre does not take into account the causal dependence of the post-update policy on the pre-update policy. Thus a change in θ due to ∇θJ^{II}_pre may counteract the change due to ∇θJpost. In contrast, ∇θJ^I_pre takes the dependence of the post-update policy on the pre-update sampling distribution into account. Instead of simply weighting the gradients of the pre-update policy ∇θ log πθ(τ) with R(τ′) as in ∇θJ^{II}_pre, ∇θJ^I_pre weights the gradients with the inner product of the pre-update and post-update policy gradients. This inner product can be written as
\nabla_\theta J^{inner\,\top} \nabla_{\theta'} J^{outer} = \|\nabla_\theta J^{inner}\|_2 \cdot \|\nabla_{\theta'} J^{outer}\|_2 \cdot \cos(\delta) \qquad (38)
wherein δ denotes the angle between the inner (pre-update) and outer (post-update) policy gradients. Hence, ∇θJ^I_pre steers the pre-update policy not only towards larger post-update returns but also towards larger adaptation steps α∇θJ^{inner}, and better alignment of pre- and post-update policy gradients. This directly optimizes for maximal improvement / adaptation for the respective task. See Li et al. (2017); Nichol et al. (2018) for a comparable analysis in case of domain generalization and supervised meta-learning. Also note that (38) allows formulation I to perform credit assignment on the trajectory level whereas formulation II can only assign credit to entire batches of N pre-update trajectories τ^{1:N}.
As a result, we expect the first meta-policy gradient formulation to learn faster and more stably since the respective gradients take the dependence of the pre-update returns on the pre-update sampling distribution into account while this causal link is neglected in the second formulation.
B ESTIMATING THE META-POLICY GRADIENTS
When employing formulation I for gradient-based meta-learning, we aim to maximize the objective

J(\theta) = \mathbb{E}_{\mathcal{T} \sim \rho(\mathcal{T})}\left[ \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}[R(\tau')] \right] \quad \text{with} \quad \theta' := \theta + \alpha \nabla_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] \qquad (39)

by performing a form of gradient ascent on J(θ). Note that we, from now on, assume J := J^I and thus omit the superscript indicating the respective meta-learning formulation. As shown in A.1, the gradient can be derived as \nabla_\theta J(\theta) = \mathbb{E}_{\mathcal{T} \sim \rho(\mathcal{T})}[\nabla_\theta J_\mathcal{T}(\theta)] with

\nabla_\theta J_\mathcal{T}(\theta) = \mathbb{E}_{\tau' \sim P_\mathcal{T}(\tau'|\theta')}\left[ \nabla_{\theta'} \log P_\mathcal{T}(\tau'|\theta')\, R(\tau')\left( I + \alpha \nabla^2_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] \right) \right] \qquad (40)

where \nabla^2_\theta J_{inner}(\theta) := \nabla^2_\theta\, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[R(\tau)] denotes the hessian of the inner adaptation objective w.r.t. θ. This section concerns the question of how to properly estimate this hessian.
B.1 ESTIMATING GRADIENTS OF THE RL REWARD OBJECTIVE
Since the expectation over the trajectory distribution PT (τ |θ) is in general intractable, the score function trick is typically used to produce a Monte Carlo estimate of the policy gradients. Although the gradient estimate can be directly defined, when using an automatic differentiation toolbox it is usually more convenient to use an objective function whose gradients correspond to the policy gradient estimate. Due to the Policy Gradient Theorem (PGT) (Sutton et al., 2000), such a “surrogate” objective can be written as:
\hat{J}^{PGT} = \frac{1}{K} \sum_{\tau_k} \sum_{t=0}^{H-1} \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad \tau_k \sim P_\mathcal{T}(\tau) \qquad (41)

= \frac{1}{K} \sum_{\tau_k} \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \qquad \tau_k \sim P_\mathcal{T}(\tau) \qquad (42)
While (41) and (42) are equivalent (Peters & Schaal, 2006), the more popular formulation (41) can be seen as forward looking credit assignment while (42) can be interpreted as backward looking credit assignment (Foerster et al., 2018). A generalized procedure for constructing “surrogate” objectives for arbitrary stochastic computation graphs can be found in Schulman et al. (2015b).
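As an illustration, here is a small PyTorch sketch of the two equivalent surrogates; the equality of (41) and (42) can be checked numerically (the tensors are placeholder data, and the names are ours).

import torch

def pgt_forward(log_probs, rewards):
    # Eq. (41): each log-prob is weighted by the return-to-go from its time step.
    returns_to_go = rewards.flip(0).cumsum(0).flip(0)
    return (log_probs * returns_to_go).sum()

def pgt_backward(log_probs, rewards):
    # Eq. (42): each reward is weighted by the sum of log-probs up to its time step.
    return (log_probs.cumsum(0) * rewards).sum()

log_probs, rewards = torch.randn(10), torch.randn(10)
# Swapping the order of the double sum shows the two objectives are identical:
assert torch.allclose(pgt_forward(log_probs, rewards), pgt_backward(log_probs, rewards))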
B.2 A DECOMPOSITION OF THE HESSIAN
Estimating the hessian of the reinforcement learning objective has been discussed in Furmston et al. (2016) and Baxter & Bartlett (2001) with focus on second order policy gradient methods. In the infinite horizon MDP case, Baxter & Bartlett (2001) derive a decomposition of the hessian. In the following, we extend their finding to the finite horizon case.
Proposition. The hessian of the RL objective can be decomposed into four matrix terms:
∇2θJinner(θ) = H1 +H2 +H12 +H>12 (43)
where

H_1 = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_t|s_t)^\top \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (44)

H_2 = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \nabla^2_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (45)

H_{12} = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta Q^{\pi_\theta}_t(s_t, a_t)^\top \right] \qquad (46)

Here Q^{\pi_\theta}_t(s_t, a_t) = \mathbb{E}_{\tau^{t+1:H-1} \sim P_\mathcal{T}(\cdot|\theta)}\left[ \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \,\middle|\, s_t, a_t \right] denotes the expected state-action value function under policy \pi_\theta at time t.
Proof. As derived in (24), the hessian of J_{inner}(θ) follows as:

\nabla^2_\theta J_{inner} = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ R(\tau)\left( \nabla^2_\theta \log \pi_\theta(\tau) + \nabla_\theta \log \pi_\theta(\tau) \nabla_\theta \log \pi_\theta(\tau)^\top \right) \right] \qquad (47)

= \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla^2_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \right] \qquad (48)

+ \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right)\left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right)^\top r(s_t, a_t) \right] \qquad (49)

= \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \nabla^2_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (50)

+ \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \sum_{h=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_h|s_h)^\top \right) r(s_t, a_t) \right] \qquad (51)
The term in (50) is equal to H_2. We continue by showing that the remaining term in (51) is equivalent to H_1 + H_{12} + H_{12}^\top. For that, we split the inner double sum in (51) into three components:
\mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \sum_{h=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_h|s_h)^\top \right) r(s_t, a_t) \right] \qquad (52)

= \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'})^\top \right) r(s_t, a_t) \right] \qquad (53)

+ \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \sum_{h=0}^{t'-1} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_h|s_h)^\top \right) r(s_t, a_t) \right] \qquad (54)

+ \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \sum_{h=t'+1}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_h|s_h)^\top \right) r(s_t, a_t) \right] \qquad (55)
By changing the backward looking summation over outer products into a forward looking summation of rewards, (53) can be shown to be equal to H_1:

\mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'})^\top \right) r(s_t, a_t) \right] \qquad (56)

= \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_t|s_t)^\top \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (57)

= H_1 \qquad (58)

By simply exchanging the summation indices t' and h in (55), it is straightforward to show that (55) is the transpose of (54). Hence it is sufficient to show that (54) is equivalent to H_{12}. However, instead of following the direction of the previous proof, we will now start with the definition of H_{12} and derive the expression in (54).
H_{12} = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta Q^{\pi_\theta}_t(s_t, a_t)^\top \right] \qquad (59)
The gradient of Q^{\pi_\theta}_t can be expressed recursively:

\nabla_\theta Q^{\pi_\theta}_t(s_t, a_t) = \nabla_\theta\, \mathbb{E}_{s_{t+1}, a_{t+1}}\left[ Q^{\pi_\theta}_{t+1}(s_{t+1}, a_{t+1}) \right] \qquad (61)

= \mathbb{E}_{s_{t+1}, a_{t+1}}\left[ \nabla_\theta \log \pi_\theta(a_{t+1}|s_{t+1})\, Q^{\pi_\theta}_{t+1}(s_{t+1}, a_{t+1}) + \nabla_\theta Q^{\pi_\theta}_{t+1}(s_{t+1}, a_{t+1}) \right] \qquad (62)
By induction, it follows that

\nabla_\theta Q^{\pi_\theta}_t(s_t, a_t) = \mathbb{E}_{\tau^{t+1:H-1} \sim P_\mathcal{T}(\cdot|\theta)}\left[ \sum_{t'=t+1}^{H-1} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \left( \sum_{h=t'}^{H-1} r(s_h, a_h) \right) \right] \qquad (63)

When inserting (63) into (59) and swapping the summation, we are able to show that H_{12} is equivalent to (54):

H_{12} = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \sum_{t'=t+1}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'})^\top \left( \sum_{h=t'}^{H-1} r(s_h, a_h) \right) \right] \qquad (64)

= \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \sum_{h=0}^{t'-1} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_h|s_h)^\top \right) r(s_t, a_t) \right] \qquad (65)

This concludes the proof that the hessian of the expected sum of rewards under policy \pi_\theta and an MDP with finite time horizon H can be decomposed into H_1 + H_2 + H_{12} + H_{12}^\top.
B.3 ESTIMATING THE HESSIAN OF THE RL REWARD OBJECTIVE
As pointed out by Al-Shedivat et al. (2018); Stadie et al. (2018) and Foerster et al. (2018), simply differentiating through the gradient of the surrogate objective J^{PGT} as done in the original MAML version (Finn et al., 2017) leads to biased hessian estimates. Specifically, when compared with the unbiased estimate, as derived in (24) and decomposed in Appendix B.2, both H_1 and H_{12} + H_{12}^\top are missing. Thus, ∇θJpre does not appear in the gradients of the meta-objective (i.e. ∇θJ = ∇θJpost). Only performing gradient descent with ∇θJpost entirely neglects influences of the pre-update sampling distribution. This issue was overlooked in the RL-MAML implementation of Finn et al. (2017). As discussed in Stadie et al. (2018), this leads to poor performance in meta-learning problems that require exploration during the pre-update sampling.
B.3.1 THE DICE MONTE-CARLO ESTIMATOR
Addressing the issue of incorrect higher-order derivatives of Monte Carlo estimators, Foerster et al. (2018) propose DICE, which mainly builds upon a newly introduced MagicBox operator. This operator allows formulating Monte Carlo estimators with correct higher-order derivatives. A DICE formulation of a policy gradient estimator reads as:
J^{DICE} = \sum_{t=0}^{H-1} \square_\theta(\{a_{t' \le t}\})\, r(s_t, a_t) \qquad (66)

= \sum_{t=0}^{H-1} \exp\left( \sum_{t'=0}^{t} \log \pi_\theta(a_{t'}|s_{t'}) - \perp(\log \pi_\theta(a_{t'}|s_{t'})) \right) r(s_t, a_t) \qquad (67)
In that, ⊥ denotes a “stop gradient” operator (i.e. ⊥(fθ(x)) → fθ(x) but ∇θ⊥(fθ(x)) → 0). Note that → denotes an “evaluates to” relation and does not necessarily imply equality w.r.t. gradients. Hence, J^{DICE}(θ) evaluates to the sum of rewards at 0th order but produces the unbiased gradients ∇^n_θ J^{DICE}(θ) when differentiated n times (see Foerster et al. (2018) for proof). To shed more light on the maverick DICE formulation, we rewrite (67) as follows:
J^{DICE} = \sum_{t=0}^{H-1} \left( \prod_{t'=0}^{t} \frac{\pi_\theta(a_{t'}|s_{t'})}{\perp(\pi_\theta(a_{t'}|s_{t'}))} \right) r(s_t, a_t) \qquad (68)
Interpreting this novel formulation, the MagicBox operator \square_\theta(\{a_{t' \le t}\}) can be understood as a “dry” importance sampling weight. At 0th order it evaluates to 1 and leaves the objective function unaffected, but when differentiated once it yields an estimator for the marginal rate of return due to a change in the policy-implied trajectory distribution.
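In automatic differentiation frameworks the MagicBox operator reduces to a one-liner; a minimal PyTorch sketch follows (the function name magic_box is ours).

import torch

def magic_box(x):
    # Evaluates to 1 (exp(0)), but its derivative is magic_box(x) * grad(x),
    # so differentiating through it reproduces the score-function terms
    # to arbitrary order (Foerster et al., 2018).
    return torch.exp(x - x.detach())

# Eq. (68): J^DICE for one trajectory of log-probs and rewards
log_probs = torch.randn(10, requires_grad=True)
rewards = torch.randn(10)
j_dice = (magic_box(torch.cumsum(log_probs, 0)) * rewards).sum()
print(j_dice.item(), rewards.sum().item())  # identical at 0th order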
In the following we show that in expectation 1) the gradients of (68) match standard policy gradients and 2) its hessian estimate is equal to the hessian of the inner RL objective, derived in B.2.
\nabla_\theta J^{DICE} = \sum_{t=0}^{H-1} \nabla_\theta \left( \prod_{t'=0}^{t} \frac{\pi_\theta(a_{t'}|s_{t'})}{\perp(\pi_\theta(a_{t'}|s_{t'}))} \right) r(s_t, a_t) \qquad (69)

= \sum_{t=0}^{H-1} \left( \prod_{t'=0}^{t} \frac{\pi_\theta(a_{t'}|s_{t'})}{\perp(\pi_\theta(a_{t'}|s_{t'}))} \right) \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \qquad (70)

\rightarrow \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \qquad (71)
Here, (71) corresponds to the backward looking credit assignment formulation of policy gradients ∇θJPGT as discussed in B.1. Once again we take the derivative in order to obtain the Hessian of JDICE:
\nabla^2_\theta J^{DICE} = \sum_{t=0}^{H-1} \nabla_\theta \left( \prod_{t'=0}^{t} \frac{\pi_\theta(a_{t'}|s_{t'})}{\perp(\pi_\theta(a_{t'}|s_{t'}))} \right) \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \qquad (72)

+ \left( \prod_{t'=0}^{t} \frac{\pi_\theta(a_{t'}|s_{t'})}{\perp(\pi_\theta(a_{t'}|s_{t'}))} \right) \nabla_\theta \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \qquad (73)

\rightarrow \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right)^\top r(s_t, a_t) \qquad (74)

+ \left( \sum_{t'=0}^{t} \nabla^2_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \qquad (75)
In expectation, the DICE Monte Carlo estimate of the hessian, \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}[\nabla^2_\theta J^{DICE}], is equivalent to the hessian of the inner objective. To show this, we use the expression of \nabla^2_\theta J_{inner} in (49):

\mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \nabla^2_\theta J^{DICE} \right] \qquad (76)

= \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right)^\top r(s_t, a_t) + \left( \sum_{t'=0}^{t} \nabla^2_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \right] \qquad (77)\text{–}(78)

= H_1 + H_2 + H_{12} + H_{12}^\top \qquad (79)

= \nabla^2_\theta J_{inner} \qquad (80)
B.4 BIAS AND VARIANCE OF THE CURVATURE ESTIMATE
As shown in the previous section, ∇²θJ^{DICE} provides an unbiased estimate of the hessian of the inner objective J_{inner} = Eτ∼PT(τ|θ)[R(τ)]. However, recall that the DICE objective involves a product of importance weights along the trajectory:
J^{DICE} = \sum_{t=0}^{H-1} \left( \prod_{t'=0}^{t} \frac{\pi_\theta(a_{t'}|s_{t'})}{\perp(\pi_\theta(a_{t'}|s_{t'}))} \right) r(s_t, a_t) \qquad (81)
Taking the 2nd derivative of this product leads to the outer product of sums in (74), which is of high variance w.r.t. τ. Specifically, this outer product of sums can be decomposed into three terms H_1 + H_{12} + H_{12}^\top (see Appendix B.2). As noted by Furmston et al. (2016), H_{12} + H_{12}^\top is particularly difficult to estimate. In section 7.2 we empirically show that the high variance curvature estimates obtained with the DICE objective require large batch sizes and impede sample efficient learning.
In the following we develop a low variance curvature (LVC) estimator J^{LVC} which matches J^{DICE} at the gradient level and yields lower-variance estimates of the hessian by neglecting H_{12} + H_{12}^\top. Before formally introducing J^{LVC}, we motivate such an estimator starting with the policy gradient estimate that was originally derived in Sutton et al. (2000), followed by marginalizing the trajectory-level distribution PT(τ|θ) over states s_t and actions a_t. Note that we omit reward baselines for notational simplicity.
\nabla_\theta J_{inner} = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (82)

= \sum_{t=0}^{H-1} \mathbb{E}_{\substack{s_t \sim p^{\pi_\theta}_t(s_t) \\ a_t \sim \pi_\theta(a_t|s_t)}}\left[ \nabla_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (83)
In that, p^{\pi_\theta}_t(s_t) denotes the state visitation frequency at time step t, i.e. the probability density of being in s_t after t steps under the policy πθ. In the general case p^{\pi_\theta}_t(s_t) is intractable but depends on the policy parameter θ. We make the simplifying assumption that p^{\pi_\theta}_t(s_t) is fixed in a local region of θ. Since we make this assumption at the gradient level, this corresponds to a 1st order Taylor expansion of p^{\pi_\theta}_t(s_t) in θ. Note that this assumption is also used in the Monotonic Policy Improvement Theory (Kakade & Langford, 2002; Schulman et al., 2015a). Based on this condition, the hessian follows as the derivative of (83), whereby a “stop gradient” expression around the state visitation frequency p^{\pi_\theta}_t(s_t) resembles the 1st order Taylor approximation:
\mathbb{E}_\tau\left[ \nabla^2_\theta J^{LVC} \right] = \nabla_\theta \sum_{t=0}^{H-1} \mathbb{E}_{\substack{s_t \sim \perp(p^{\pi_\theta}_t(s_t)) \\ a_t \sim \pi_\theta(a_t|s_t)}}\left[ \nabla_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (84)

= \sum_{t=0}^{H-1} \mathbb{E}_{\substack{s_t \sim \perp(p^{\pi_\theta}_t(s_t)) \\ a_t \sim \pi_\theta(a_t|s_t)}}\left[ \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_t|s_t)^\top \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right. \qquad (85)

\left. +\, \nabla^2_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \right] \qquad (86)
Since the expectation in (84) is intractable, it must be evaluated by a Monte Carlo estimate. However, simply replacing the expectation with an average over sampled trajectories induces a wrong hessian that does not correspond to (86), since the outer product of log-gradients would be missing when differentiated. To ensure that automatic differentiation still yields the correct hessian, we add a “dry” importance weight comparable to DICE:
\nabla_\theta J^{LVC} = \sum_{t=0}^{H-1} \frac{\pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} \nabla_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad \tau \sim P_\mathcal{T}(\tau|\theta) \qquad (87)

When integrated, this resembles the LVC “surrogate” objective J^{LVC}:

J^{LVC} = \sum_{t=0}^{H-1} \frac{\pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad \tau \sim P_\mathcal{T}(\tau|\theta) \qquad (88)
The gradients of J^{LVC} match ∇θJ^{DICE} and resemble an unbiased policy gradient estimate:

\nabla_\theta J^{LVC} = \sum_{t=0}^{H-1} \frac{\nabla_\theta \pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (89)

= \sum_{t=0}^{H-1} \frac{\pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} \nabla_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (90)

\rightarrow \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (91)
The respective hessian can be obtained by differentiating (90):

\nabla^2_\theta J^{LVC} = \nabla_\theta \sum_{t=0}^{H-1} \frac{\pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} \nabla_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (92)

= \sum_{t=0}^{H-1} \frac{\pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_t|s_t)^\top \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (93)

+ \frac{\pi_\theta(a_t|s_t)}{\perp(\pi_\theta(a_t|s_t))} \nabla^2_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (94)

\rightarrow \sum_{t=0}^{H-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_t|s_t)^\top \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (95)

+ \nabla^2_\theta \log \pi_\theta(a_t|s_t) \left( \sum_{t'=t}^{H-1} r(s_{t'}, a_{t'}) \right) \qquad (96)

= \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'})^\top \right) r(s_t, a_t) \qquad (97)

+ \left( \sum_{t'=0}^{t} \nabla^2_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \qquad (98)
In expectation, ∇²θJ^{LVC} is equivalent to H_1 + H_2:

\mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \nabla^2_\theta J^{LVC} \right] = \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'}) \nabla_\theta \log \pi_\theta(a_{t'}|s_{t'})^\top \right) r(s_t, a_t) \right] \qquad (99)

+ \mathbb{E}_{\tau \sim P_\mathcal{T}(\tau|\theta)}\left[ \sum_{t=0}^{H-1} \left( \sum_{t'=0}^{t} \nabla^2_\theta \log \pi_\theta(a_{t'}|s_{t'}) \right) r(s_t, a_t) \right] \qquad (100)

= H_1 + H_2 \qquad (101)
The hessian ∇²θJ^{LVC} no longer provides an unbiased estimate of ∇²θJ_{inner} since it neglects the matrix term H_{12} + H_{12}^\top. This approximation is based on the assumption that the state visitation distribution is locally unaffected by marginal changes in θ and leads to a substantial reduction of variance in the hessian estimate. Furmston et al. (2016) show that under certain conditions (i.e. infinite horizon MDP, sufficiently rich policy parameterisation) the term H_{12} + H_{12}^\top vanishes around a local optimum θ∗. Given that the conditions hold, this implies that Eτ[∇²θJ^{LVC}] → Eτ[∇²θJ^{DICE}] as θ → θ∗, i.e. the bias of the LVC estimator becomes negligible close to the local optimum. The experiments in section 7.2 confirm this theoretical argument empirically and show that using the low variance curvature estimates obtained through J^{LVC} improves the sample-efficiency of meta-learning by a significant margin.
C PROXIMAL POLICY SEARCH METHODS
C.1 MONOTONIC POLICY IMPROVEMENT THEORY
This section provides a brief introduction to policy performance bounds and the theory of monotonic policy improvement in the setting of reinforcement learning. While Section 6 discusses the extension of this theory to meta-learning, the following explanations assume a standard RL setting where T is exogenously given. Hence, we will omit mentioning the dependence on T for notational brevity. Since the monotonic policy improvement framework relies on infinite-time horizon MDPs, we assume H → ∞ for the remainder of this chapter.
In addition to the expected reward J(π) under policy π, we will use the state value function V π , the state-action value function Qπ as well as the advantage function Aπ:
V^\pi(s) = \mathbb{E}_{a_0, s_1, \dots}\left[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\middle|\, s_0 = s \right]

Q^\pi(s, a) = \mathbb{E}_{s_1, a_1, \dots}\left[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\middle|\, s_0 = s, a_0 = a \right] = r(s, a) + \gamma\, \mathbb{E}_{s' \sim p(s'|s,a)}\left[ V^\pi(s') \right]

A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)
with at ∼ π(at|st) and st+1 ∼ p(st+1|st, at). The expected return under a policy π̃ can be expressed as the sum of the expected return of another policy π and the expected discounted advantage of π̃ over π (see Schulman et al. (2015a) for proof).
J(\tilde{\pi}) = J(\pi) + \mathbb{E}_{\tau \sim P(\tau, \tilde{\pi})}\left[ \sum_{t=0}^{\infty} \gamma^t A^\pi(s_t, a_t) \right]

Let d^\pi denote the discounted state visitation frequency:

d^\pi(s) = \sum_{t=0}^{\infty} \gamma^t\, p(s_t = s \,|\, \pi)
We can use dπ to express the expectation over trajectories τ ∼ pπ(τ) in terms of states and actions:
J(\tilde{\pi}) = J(\pi) + \mathbb{E}_{\substack{s \sim d^{\tilde{\pi}}(s) \\ a \sim \tilde{\pi}(a|s)}}\left[ A^\pi(s, a) \right] \qquad (102)
Local policy search aims to find a policy update π → π̃ in the proximity of π so that J(π̃) is maximized. Since J(π) is not affected by the policy update π → π̃, it is sufficient to maximize the expected advantage under π̃. However, the complex dependence of dπ̃(s) on π̃ makes it hard to directly maximize the objective in (102). Using a local approximation of (102) where it is assumed that the state visitation frequencies dπ and dπ̃ are identical, the optimization can be phrased as
\tilde{J}_\pi(\tilde{\pi}) = J(\pi) + \mathbb{E}_{\substack{s \sim d^\pi(s) \\ a \sim \tilde{\pi}(a|s)}}\left[ A^\pi(s, a) \right] = J(\pi) + \mathbb{E}_{\substack{s \sim d^\pi(s) \\ a \sim \pi(a|s)}}\left[ \frac{\tilde{\pi}(a|s)}{\pi(a|s)} A^\pi(s, a) \right] \qquad (103)
In the following we refer to \tilde{J}(\tilde{\pi}) as the surrogate objective. It can be shown that the surrogate objective \tilde{J} matches J to first order when π = π̃ (see Kakade & Langford (2002)). If πθ is a parametric and differentiable function with parameter vector θ, this means that for any θo:

\tilde{J}_{\pi_{\theta_o}}(\pi_{\theta_o}) = J_{\pi_{\theta_o}}(\pi_{\theta_o}) \quad \text{and} \quad \nabla_\theta \tilde{J}_{\pi_{\theta_o}}(\pi_\theta)\big|_{\theta_o} = \nabla_\theta J_{\pi_{\theta_o}}(\pi_\theta)\big|_{\theta_o} \qquad (104)
When π ≠ π̃, an approximation error of the surrogate objective \tilde{J} w.r.t. the true objective J is introduced. Achiam et al. (2017) derive a lower bound for the true expected return of π̃:

J(\tilde{\pi}) \ge J_\pi(\tilde{\pi}) - C \sqrt{\mathbb{E}_{s \sim d^\pi}\left[ D_{KL}[\tilde{\pi}(\cdot|s) \,\|\, \pi(\cdot|s)] \right]} = J_\pi(\tilde{\pi}) - C \sqrt{\bar{D}_{KL}[\tilde{\pi} \,\|\, \pi]} \qquad (105)

with C = \frac{\sqrt{2}\,\gamma}{1-\gamma} \max_s \left| \mathbb{E}_{a \sim \tilde{\pi}(\cdot|s)}\left[ A^\pi(s, a) \right] \right|
C.2 TRUST REGION POLICY OPTIMIZATION (TRPO)
Trust region policy optimization (TRPO) (Schulman et al., 2015a) attempts to approximate the bound in (105) by phrasing local policy search as a constrained optimization problem:

\arg\max_\theta\; \mathbb{E}_{\substack{s \sim d^{\pi_{\theta_o}}(s) \\ a \sim \pi_{\theta_o}(a|s)}}\left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)} A^{\pi_{\theta_o}}(s, a) \right] \quad \text{s.t.} \quad \bar{D}_{KL}[\pi_{\theta_o} \,\|\, \pi_\theta] \le \delta \qquad (106)
Thereby the KL-constraint δ induces a local trust region around the current policy πθo. A practical implementation of TRPO uses a quadratic approximation of the KL-constraint which leads to the following update rule:

\theta \leftarrow \theta + \sqrt{\frac{2\delta}{g^\top F^{-1} g}}\, F^{-1} g \qquad (107)

with g := \nabla_\theta\, \mathbb{E}_{\substack{s \sim d^{\pi_{\theta_o}}(s) \\ a \sim \pi_{\theta_o}(a|s)}}\left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)} A^{\pi_{\theta_o}}(s, a) \right] being the gradient of the objective and F = \nabla^2_\theta \bar{D}_{KL}[\pi_{\theta_o} \,\|\, \pi_\theta] the Fisher information matrix of the current policy πθo. In order to avoid the cubic time complexity that arises when inverting F, the Conjugate Gradient (CG) algorithm, which only requires matrix-vector products with F, is typically used to approximate F^{-1}g.
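A standard conjugate gradient routine makes this concrete; the sketch below (plain NumPy, names ours) solves F x = g given only a Fisher-vector product routine, which is all the update in Eq. (107) requires.

import numpy as np

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    # Approximately solves F x = g using only matrix-vector products fvp(v) = F v,
    # avoiding both forming and inverting the Fisher information matrix.
    x = np.zeros_like(g)
    r, p = g.copy(), g.copy()
    rs = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        a = rs / (p @ Fp)
        x, r = x + a * p, r - a * Fp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# The TRPO step then rescales the search direction to satisfy the KL constraint:
# theta += np.sqrt(2 * delta / (x @ g)) * x, using that x^T F x = x^T g for x = F^{-1} g.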
C.3 PROXIMAL POLICY OPTIMIZATION (PPO)
While TRPO is framed as constrained optimization, the theory discussed in Appendix C.1 suggests optimizing the lower bound. Based on this insight, Schulman et al. (2017) propose adding a KL penalty to the objective and solve the following unconstrained optimization problem:

\arg\max_\theta\; \mathbb{E}_{\substack{s \sim d^{\pi_{\theta_o}}(s) \\ a \sim \pi_{\theta_o}(a|s)}}\left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)} A^{\pi_{\theta_o}}(s, a) - \beta\, D_{KL}[\pi_{\theta_o}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)] \right] \qquad (108)
However, they also show that it is not sufficient to set a fixed penalty coefficient β and propose two alternative methods, known as Proximal Policy Optimization (PPO), that aim to alleviate this issue:
1) Adapting the KL coefficient β so that a desired target KL-divergence D̄KL[πθo ||πθ] between the policy before and after the parameter update is achieved
2) Clipping the likelihood ratio so that the optimization has no incentive to move the policy πθ too far away from the original policy πθo . A corresponding optimization objective reads as:
J^{CLIP} = \mathbb{E}_{\substack{s \sim d^{\pi_{\theta_o}}(s) \\ a \sim \pi_{\theta_o}(a|s)}}\left[ \min\left( \frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)} A^{\pi_{\theta_o}}(s, a)\,,\; \text{clip}_{1-\epsilon}^{1+\epsilon}\!\left( \frac{\pi_\theta(a|s)}{\pi_{\theta_o}(a|s)} \right) A^{\pi_{\theta_o}}(s, a) \right) \right] \qquad (109)
Empirical results show that the latter approach leads to better learning performance (Schulman et al., 2017).
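A tiny numerical sketch (PyTorch, placeholder values) of the clipped objective in Eq. (109) illustrates why multiple gradient steps are safe: beyond the clipping range the objective is flat in the likelihood ratio.

import torch

def j_clip(ratio, adv, eps=0.2):
    # Eq. (109): the min removes any incentive to push the likelihood ratio
    # past [1 - eps, 1 + eps] in the direction that increases the objective.
    return torch.min(ratio * adv, torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()

adv = torch.tensor([1.0])  # positive advantage
for r in (0.5, 1.0, 1.2, 2.0):
    print(r, j_clip(torch.tensor([r]), adv).item())
# 0.5 -> 0.5, 1.0 -> 1.0, 1.2 -> 1.2, 2.0 -> 1.2: the objective saturates at 1 + eps.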
Since PPO

1. What are the main contributions and improvements introduced by the paper regarding MAML and E-MAML?
2. How does the proposed method optimize the objective function, and what is the role of LVC in this process?
3. Can you provide more details about the empirical results shown in Figure 4, and how do they support the theoretical findings?
4. Do you have any concerns or suggestions regarding the paper's comparisons and analyses, especially regarding variance reduction?

Review
The paper first examines the objective function optimized in MAML and E-MAML and interprets the terms as different credit assignment criteria. MAML takes into account the dependencies between pre-update trajectory and pre-update policy, post-update trajectory and post-update policy by forcing the gradients of the two policies to be aligned, which results in better learning properties.
Though better, the paper points out MAML has incorrect estimation for the hessian in the objective. To address that, the paper proposes a low variance curvature estimator (LVC). However, naively solving the new objective with LVC with TRPO is computationally prohibitive. The paper addresses this problem by proposing an objective function that combines PPO and a slightly modified version of LVC.
Quality: strong, clarity: strong, originality: strong, significance: strong.
Pros:
- The paper provides strong theoretical results. Though mathematically intense, the paper is written quite well and is easy to follow.
- The proposed method is able to improve in sample complexity, speed and convergence over past methods.
- The paper provides strong empirical results over MAML, E-MAML. They also show the effectiveness of the LVC objective by comparing LVC over E-MAML using vanilla gradient update.
- Figure 4 is particularly interesting. The results show different exploration patterns used by different methods and are quite aligned with the theory.
Cons:
- It would be nice to add more comparison and analysis on the variance. Since LVC is claimed to reduce variance of the gradient, it would be nice to show more empirical evidence that supports this. (By looking at Figure 2, although not directly related, LVC-VPG seems to have pretty noisy behaviour.)
ICLR
Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms
Abstract
Most existing deep learning models rely on excessive amounts of labeled training data in order to achieve state-of-the-art results, even though these data can be hard or costly to get in practice. One attractive alternative is to learn with little supervision, commonly referred to as few-shot learning (FSL), and, in particular, meta-learning that learns to learn with few data from related tasks. Despite the practical success of meta-learning, many of its algorithmic solutions proposed in the literature are based on sound intuitions, but lack a solid theoretical analysis of the expected performance on the test task. In this paper, we review the recent advances in meta-learning theory and show how they can be used in practice both to better understand the behavior of popular meta-learning algorithms and to improve their generalization capacity. The latter is achieved by integrating the theoretical assumptions ensuring efficient meta-learning, in the form of regularization terms, into several popular meta-learning algorithms, for which we provide a large study of their behavior on classic few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of meta-learning theory into practice for the task of few-shot classification.
1 INTRODUCTION
Since the early days of the machine learning field, its algorithmic advances were inevitably followed or preceded by accompanying theoretical analyses establishing the conditions required for the corresponding algorithms to learn well. Such a synergy between theory and practice is reflected in numerous concepts and learning strategies that took their origins in statistical learning theory: for instance, the famous regularized risk minimization approach is directly related to the minimization of the complexity of the hypothesis space, as suggested by the generalization bounds established for supervised learning (Vapnik, 1992), while most of the adversarial algorithms in transfer learning (e.g., DANN from (Ganin & Lempitsky, 2015)) follow the theoretical insights provided by the seminal domain adaptation theory (Ben-David et al., 2010).
Even though many machine learning methods now enjoy a solid theoretical justification, some more recent advances in the field are still in their preliminary state which requires the hypotheses put forward by the theoretical studies to be implemented and verified in practice. One such notable example is the emerging field of meta-learning, also called learning to learn (LTL), where the goal is to produce a model on data coming from a set of (meta-train) source tasks to use it as a starting point for learning successfully a new previously unseen (meta-test) target task with little supervision. This kind of approach comes in particularly handy when training deep learning models as their performance crucially depends on the amount of training data that can be difficult and/or expensive to get in some applications. Several theoretical studies (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020)1 provided probabilistic meta-learning bounds that require the amount of data in the meta-train source task and the number of meta-train tasks to tend to infinity for efficient meta-learning. While capturing the underlying general intuition, these bounds do not suggest that all the source data is useful in such learning setup due to the
1We omit other works for meta-learning via online convex optimization (Finn et al., 2019; Balcan et al., 2019; Khodak et al., 2019; Denevi et al., 2019) as they concern a different learning setup.
additive relationship between the two terms mentioned above. To tackle this drawback, two very recent studies (Du et al., 2020; Tripuraneni et al., 2020) aimed at finding deterministic assumptions that lead to faster learning rates allowing meta-learning algorithms to benefit from all the source data. Contrary to probabilistic bounds that have been used to derive novel learning strategies for meta-learning algorithms (Amit & Meir, 2018; Yin et al., 2020), there was no attempt to verify the validity of the assumptions leading to the fastest known learning rates in practice or to enforce them through an appropriate optimization procedure.
In this paper, we bridge the meta-learning theory with practice by harvesting the theoretical results from Tripuraneni et al. (2020) and Du et al. (2020), and by showing how they can be implemented algorithmically and integrated, when needed, into popular existing meta-learning algorithms used for few-shot classification (FSC). The latter task consists in classifying new data having seen only a few training examples, and represents one of the most prominent examples where meta-learning has shown to be highly efficient. More precisely, our contributions are three-fold:
1. We identify two common assumptions from the theoretical works on meta-learning and show how they can be verified and forced via a novel regularization scheme.
2. We investigate whether these assumptions are satisfied for popular meta-learning algorithms and observe that some of them naturally satisfy them, while others do not.
3. With the proposed regularization strategy, we show that enforcing the assumptions to be valid in practice leads to better generalization of the considered algorithms.
The rest of the paper is organized as follows. After presenting preliminary knowledge on the metalearning problem in Section 2, we detail the existing meta-learning theoretical results with their corresponding assumptions and show how they can be enforced via a general regularization technique in Section 3. Then, we provide an experimental evaluation of several popular few-shot learning (FSL) methods in Section 4 and highlight the different advantages brought by the proposed regularization in practice. Finally, we conclude and outline future research perspectives in Section 5.
2 PRELIMINARY KNOWLEDGE
We start by formally defining the meta-learning problem following the model described in Du et al. (2020). To this end, we assume having access to T source tasks characterized by their respective data generating distributions {µt}Tt=1 supported over the joint input-output space X × Y with X ⊆ Rd and Y ⊆ R. We further assume that these distributions are observed only through finite samples of size n1 grouped into matrices Xt = (xt,1, . . . , xt,n1) ∈ Rn1×d and vectors of outputs yt = (yt,1, . . . , yt,n1) ∈ Rn1, ∀t ∈ [[T ]] := {1, . . . , T}. Given this set of tasks, our goal is to learn a shared representation φ belonging to a certain class of functions Φ := {φ | φ : X → V, V ⊆ Rk} and linear predictors wt ∈ Rk, ∀t ∈ [[T ]] grouped in a matrix W ∈ RT×k. More formally, this is done by solving the following optimization problem:
\hat{\phi}, \hat{W} = \arg\min_{\phi \in \Phi,\, W \in \mathbb{R}^{T \times k}} \frac{1}{2Tn_1} \sum_{t=1}^{T} \sum_{i=1}^{n_1} \ell(y_{t,i}, \langle w_t, \phi(x_{t,i}) \rangle), \qquad (1)
where ` : Y× Y → R+ is a loss function. Once such a representation is learned, we want to apply it to a new previously unseen target task observed through a pair (XT+1 ∈ Rn2×d, yT+1 ∈ Rn2) containing n2 samples generated by the distribution µT+1. We expect that a linear classifier w learned on top of the obtained representation leads to a low true risk over the whole distribution µT+1. More precisely, we first use φ̂ to solve the following problem:
\hat{w}_{T+1} = \arg\min_{w \in \mathbb{R}^k} \frac{1}{n_2} \sum_{i=1}^{n_2} \ell(y_{T+1,i}, \langle w, \hat{\phi}(x_{T+1,i}) \rangle).
Then, we define the true target risk of the learned linear classifier ŵT+1 as:
L(\hat{\phi}, \hat{w}_{T+1}) = \mathbb{E}_{(x,y) \sim \mu_{T+1}}\left[ \ell(y, \langle \hat{w}_{T+1}, \hat{\phi}(x) \rangle) \right] and want it to be small and as close as possible to the ideal true risk L(\phi^*, w^*_{T+1}), where

\forall t \in [[T+1]] \text{ and } (x, y) \sim \mu_t, \quad y = \langle w^*_t, \phi^*(x) \rangle + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma^2). \qquad (2)
Equivalently, most of the works found in the literature seek to upper-bound the excess risk defined as ER(φ̂, ŵT+1) := L(φ̂, ŵT+1)− L(φ∗,w∗T+1) with quantities involved in the learning process.
Remark 1 We note that many popular meta-learning algorithms used for FSL do not follow exactly the approach described above. However, we believe that the exact way this is done algorithmically (with or without the support set, with or without learning episodes) does not change the underlying statistical challenge, which is to learn a model that can provably generalize with little supervision. Supervised learning theory tells us that generalization in this case is poor (not enough target data, and it is difficult to rely on data coming from different probability distributions), while the theoretical works we build upon suggest that source data may contribute to improving the generalization of the learned model alongside the target data if the assumptions described below are satisfied.
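To make the two-stage setup of Eqs. (1)–(2) concrete, here is a minimal PyTorch sketch with a linear representation and squared loss; all names, dimensions and the least-squares target fit are illustrative assumptions, not a description of any specific algorithm studied later.

import torch

d, k, T = 32, 4, 8                                   # input dim, embedding dim, source tasks
phi = torch.nn.Linear(d, k, bias=False)              # shared representation
W = torch.nn.Parameter(torch.randn(T, k))            # one linear predictor per source task
opt = torch.optim.SGD(list(phi.parameters()) + [W], lr=1e-2)

def meta_train_step(X, Y):
    # X: (T, n1, d), Y: (T, n1); one gradient step on the objective of Eq. (1)
    preds = torch.einsum('tnk,tk->tn', phi(X), W)
    loss = 0.5 * ((preds - Y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def fit_target_predictor(X_new, y_new):
    # Freeze phi and fit w_{T+1} on the n2 target samples by least squares.
    with torch.no_grad():
        Z = phi(X_new)                               # (n2, k)
        return torch.linalg.lstsq(Z, y_new.unsqueeze(1)).solution.squeeze(1)

loss = meta_train_step(torch.randn(T, 16, d), torch.randn(T, 16))
w_target = fit_target_predictor(torch.randn(10, d), torch.randn(10))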
3 FROM THEORY TO PRACTICE
In this section, we highlight the main theoretical contributions that provably ensure the success of meta-learning in improving the performance on the previously unseen target task with the increasing number of source tasks and the amount of data available for them. We then concentrate our attention on the most recent theoretical advances leading to the fastest learning rates and show how the assumptions used to obtain them can be forced in practice through a novel regularization strategy.
3.1 WHEN DOES META-LEARNING PROVABLY WORK?
One requirement for meta-learning to succeed in FSC is that a representation learned on meta-train data should be useful for learning a good predictor on the meta-test data set. This is reflected by bounding the excess target risk by a quantity that involves the number of samples in both meta-train and meta-test samples and the number of available meta-train tasks.
To this end, first studies in the context of meta-learning relied on the probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test task distributions are all sampled i.i.d. from the same random distribution. This assumption, however, is considered unrealistic, as in FSL source and target tasks' data are often given by different draws (without replacement) from the same dataset. In this setup, the above-mentioned works obtained bounds of the following form:
ER(\hat{\phi}, \hat{w}_{T+1}) \le O\left( \frac{1}{\sqrt{n_1}} + \frac{1}{\sqrt{T}} \right).
This guarantee implies that not only the amount of source data, but also the number of tasks should be large in order to drive the second term to 0. An improvement was then proposed by Du et al. (2020) and Tripuraneni et al. (2020), who obtained bounds on the excess risk behaving as
O\left( \frac{kd}{\sqrt{n_1 T}} + \frac{k}{\sqrt{n_2}} \right) \quad \text{and} \quad \tilde{O}\left( \frac{kd}{n_1 T} + \frac{k}{n_2} \right),
respectively, where k ≪ d is the dimensionality of the learned representation and Õ(·) hides logarithmic factors. Both these results show that all the source and target samples are useful in minimizing the excess risk: in the FSL regime where target data is scarce, all source data helps to learn well. From the set of assumptions made by the authors in both of these works, we note the following two:
Assumption 1. The matrix of optimal predictors W∗ should cover all the directions in Rk evenly. More formally, this can be stated as
R_\sigma(W^*) = \frac{\sigma_1(W^*)}{\sigma_k(W^*)} = O(1), \qquad (3)
where σi(·) denotes the ith singular value of W∗. As pointed out by the authors, such an assumption can be seen as a measure of diversity between the source tasks that are expected to be complementary to each other in order to provide a meaningful representation for a previously unseen target task.
Assumption 2. The norm of the optimal predictors w∗ should not increase with the number of tasks seen during meta-training2. This assumption says that the classification margin of linear predictors should remain constant thus avoiding over- or under-specialization to the seen tasks.
While these results are highly insightful, the authors did not provide any experimental evidence suggesting that verifying these assumptions in practice helps to learn more efficiently in the considered learning setting. To bridge this gap, we propose to use a general regularization scheme that allows us to enforce these assumptions when learning the matrix of predictors in several popular meta-learning algorithms.
3.2 PUTTING THEORY TO WORK
As the assumptions mentioned above are stated for the optimal predictors that are inherently linked to the data generating process, one may wonder what happens when the latter do not satisfy them. To this end, we aim to answer the following question:

Given W∗ such that Rσ(W∗) ≫ 1, can we learn Ŵ with Rσ(Ŵ) ≈ 1 while solving the underlying classification problems equally well?
It turns out that we can construct an example illustrated in Fig. 1 for which the answer to this question is positive. To this end, let us consider a binary classification problem over X ⊆ R3 with labels Y = {−1, 1} and two source tasks generated for k, ε ∈ ]0, 1], as follows:
1. µ1 is uniform over {1− kε, k, 1} × {1} ∪ {1 + kε, k,−1} × {−1}; 2. µ2 is uniform over {1 + kε, k, k−1ε } × {1} ∪ {−1 + kε, k, 1+kε } × {−1}.
We now define the optimal representation and two optimal predictors for each distribution as the solution to Eq. 1 over the two data generating distributions and Φ = {φ| φ(x) = ΦTx, Φ ∈ R3×2}:
\phi^*, W^* = \arg\min_{\phi \in \Phi,\, W \in \mathbb{R}^{2 \times 2}} \sum_{i=1}^{2} \mathbb{E}_{(x,y) \sim \mu_i}\, \ell(y, \langle w_i, \phi(x) \rangle), \qquad (4)
One solution to this problem can be given as follows:
\Phi^* = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}^\top, \qquad W^* = \begin{pmatrix} 1 & \varepsilon \\ 1 & -\varepsilon \end{pmatrix},
where φ∗ projects the data generated by µi to a two-dimensional space by discarding its third dimension, and the linear predictors satisfy the data generating process from Eq. 2 with the noise term set to 0. One can verify that in this case W∗ has singular values √2 and √2ε, so that the ratio Rσ(W∗) = 1/ε: when ε → 0, the optimal predictors make the ratio arbitrarily large, thus violating Assumption 1.
2While not stated as a separate assumption, Du et al. (2020) assume it to derive Assumption 1 mentioned above. See p.5 and the discussion after Assumption 4.3 in their pre-print.
Let us now consider a different problem where we want to solve Eq. 4 with a constraint that forces linear predictors to satisfy Assumption 1:
\hat{\phi}, \hat{W} = \arg\min_{\phi \in \Phi,\, W \in \mathbb{R}^{2 \times 2}} \sum_{i=1}^{2} \mathbb{E}_{(x,y) \sim \mu_i}\, \ell(y, \langle w_i, \phi(x) \rangle), \quad \text{s.t. } R_\sigma(W) \approx 1. \qquad (5)

Its solution is different and is given by

\hat{\Phi} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}^\top, \qquad \hat{W} = \begin{pmatrix} 0 & 1 \\ 1 & -\varepsilon \end{pmatrix}.
Similarly to Φ∗, Φ̂ projects to a two-dimensional space, this time by discarding the first dimension of the data generated by µ_i. The learned predictors in this case also satisfy Eq. 2 with ε = 0, but contrary to W∗, Rσ(Ŵ) = √[(2 + ε² + ε√(ε² + 4)) / (2 + ε² − ε√(ε² + 4))] tends to 1 when ε → 0.
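This can be checked numerically; the short sketch below (NumPy, with the value of ε and the variable names chosen by us purely for illustration) reproduces both ratios.

```python
import numpy as np

eps = 1e-3
W_star = np.array([[1.0, eps], [1.0, -eps]])   # optimal predictors of Eq. 4
W_hat = np.array([[0.0, 1.0], [1.0, -eps]])    # constrained solution of Eq. 5

def r_sigma(W):
    s = np.linalg.svd(W, compute_uv=False)     # singular values, descending
    return s[0] / s[-1]

print(r_sigma(W_star))  # ~1000 = 1/eps: Assumption 1 is violated
print(r_sigma(W_hat))   # ~1.001: the ratio tends to 1 as eps -> 0
```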
Several remarks are in order here. First, this example shows that even when W∗ does not satisfy Assumption 1 in the space induced by φ∗, it may still be possible to learn a new representation space φ̂ such that the optimal predictors in this space satisfy Assumption 1. This can be done either by considering the constrained problem from Eq. 5, or by using the more common strategy of adding Rσ(W) directly as a regularization term:
φ̂, Ŵ = arg min_{φ∈Φ, W∈R^{T×k}} (1 / (2Tn_1)) ∑_{t=1}^{T} ∑_{i=1}^{n_1} ℓ(y_{t,i}, 〈w_t, φ(x_{t,i})〉) + λ_1 Rσ(W). (6)
Below, we explain how to implement this idea in practice for popular meta-learning algorithms.
Ensuring assumption 1. We propose to compute the singular values of W during the meta-training stage and to follow their evolution across the learning episodes. In practice, this can be done by performing the Singular Value Decomposition (SVD) of W ∈ R^{T×k} at a computational cost of O(Tk²) floating-point operations (flop). However, as T is typically quite large, we propose a more computationally efficient solution: take into account only the last batch of N predictors (with N ≪ T) grouped in the matrix W_N ∈ R^{N×k}, which captures the latest dynamics of the learning process. We further note that σ_i(W_N W_Nᵀ) = σ_i²(W_N), ∀i ∈ [[N]], implying that we can compute the SVD of W_N W_Nᵀ (or W_NᵀW_N for k ≤ N) and retrieve the singular values from it afterwards.
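A possible implementation of this shortcut is sketched below (PyTorch; the function name is ours, not part of any library):

```python
import torch

def batch_singular_values(W_N: torch.Tensor) -> torch.Tensor:
    """Singular values of the last N predictors W_N (shape N x k), descending.

    Works on the smaller Gram matrix, using sigma_i(W_N W_N^T) = sigma_i(W_N)^2,
    and returns the min(N, k) singular values of W_N.
    """
    N, k = W_N.shape
    gram = W_N @ W_N.T if N <= k else W_N.T @ W_N
    # The Gram matrix is symmetric PSD: its eigenvalues are squared singular values.
    eigvals = torch.linalg.eigvalsh(gram).clamp(min=0.0)  # ascending, >= 0
    return eigvals.sqrt().flip(0)                         # descending order
```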
We now want to verify whether the optimal linear predictors wt cover all directions in the embedding space by tracking the evolution of the ratio of singular values Rσ(WN ) during the training process. For the sake of conciseness, we use Rσ instead of Rσ(WN ) thereafter. According to the theory, we expect Rσ to decrease during training thus improving the generalization of the learned predictors and preparing them for the target task. When we want to enforce such a behavior in practice, we propose to use Rσ as a regularization term in the training loss of popular meta-learning algorithms.
Alternatively, as the smallest singular value σ_N(W_N) can be close to 0 and lead to numerical errors, we propose to replace the ratio of singular values by the entropy of the vector of singular values, defined as follows:
Hσ(W_N) = − ∑_{i=1}^{N} softmax(σ(W_N))_i · log softmax(σ(W_N))_i,
where softmax(·)_i is the i-th output of the softmax function. As with Rσ, we write Hσ instead of Hσ(W_N) from now on. Since the uniform distribution has the highest entropy, regularizing with Rσ or −Hσ leads to a better coverage of R^k by ensuring a nearly identical importance for every direction. We refer the reader to the Supplementary materials for the derivations ensuring the existence of the subgradients for these terms.
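Both penalties are differentiable almost everywhere and can be implemented in a few lines; the sketch below (PyTorch, our own function names) calls torch.linalg.svdvals directly for brevity, though the Gram-matrix shortcut above applies equally, and a small constant guards against σ_N ≈ 0:

```python
import torch
import torch.nn.functional as F

def ratio_penalty(W_N: torch.Tensor, tiny: float = 1e-12) -> torch.Tensor:
    """R_sigma(W_N) = sigma_1 / sigma_N, to be added to the training loss."""
    s = torch.linalg.svdvals(W_N)        # singular values in descending order
    return s[0] / (s[-1] + tiny)         # tiny guards against sigma_N ~ 0

def neg_entropy_penalty(W_N: torch.Tensor) -> torch.Tensor:
    """-H_sigma(W_N): minimizing it pushes the spectrum towards uniformity."""
    p = F.softmax(torch.linalg.svdvals(W_N), dim=0)
    return (p * torch.log(p)).sum()
```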
Ensuring assumption 2. In addition to the full coverage of the embedding space by the linear predictors, the meta-learning theory assumes that the norm of the linear predictors does not increase with the number of tasks seen during meta-training, i.e., ‖w‖_2 = O(1) or, equivalently, ‖W‖_F² = O(T). If this assumption does not hold in practice, we propose to regularize the norm of the linear predictors during training or to directly normalize the obtained linear predictors: w̄ = w / ‖w‖_2.
The final meta-training loss with the theory-inspired regularization terms is given as:
min_{φ∈Φ, W∈R^{T×k}} (1 / (2Tn_1)) ∑_{t=1}^{T} ∑_{i=1}^{n_1} ℓ(y_{t,i}, 〈w_t, φ(x_{t,i})〉) + λ_1 Rσ(W_N) + λ_2 ‖W_N‖_F², (7)
and depending on the considered algorithm, we can replace Rσ by −Hσ and/or replace w_t by w̄_t instead of regularizing with ‖W_N‖_F². In what follows, we consider λ_1 = λ_2 = 1 and we refer the reader to the Supplementary materials for more details and experiments with other values.
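For illustration, a single meta-training step implementing Eq. 7 could then look as follows (a sketch reusing the ratio_penalty function from above; the episode tensors, the label encoding, and the use of cross-entropy as ℓ are our own assumptions, not the authors' code):

```python
import torch.nn.functional as F

def regularized_loss(phi_x, labels, W_N, lam1=1.0, lam2=1.0):
    """Empirical risk of Eq. 7 plus the two theory-inspired penalties.

    phi_x:  embedded inputs phi(x), shape (batch, k)
    labels: integer class labels in [0, N), one per input
    W_N:    last batch of N linear predictors, shape (N, k)
    """
    logits = phi_x @ W_N.T                      # <w_t, phi(x)> scores
    risk = F.cross_entropy(logits, labels)      # empirical risk term of Eq. 7
    return risk + lam1 * ratio_penalty(W_N) + lam2 * W_N.pow(2).sum()
```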
To the best of our knowledge, such regularization terms based on insights from the advances in meta-learning theory have never been used in the literature before. We further use the basic quantities involved in the proposed regularization terms as indicators of whether a given meta-learning algorithm naturally satisfies the assumptions ensuring an efficient meta-learning in practice or not.
3.3 RELATED WORK
Below, we discuss several related studies aiming at improving the general understanding of meta-learning, and mention other regularization terms specifically designed for meta-learning.
Understanding meta-learning While a complete theory for meta-learning is still lacking, several recent works aimed to shed light on phenomena commonly observed in meta-learning by evaluating different intuitive heuristics. For instance, Raghu et al. (2020) investigated whether the popular gradient-based MAML algorithm succeeds due to rapid learning, with significant changes in the representations when deployed on the target task, or due to feature reuse, where the learned representation remains almost intact. They establish that the latter factor is dominant and propose a new variation of MAML that freezes all but the task-specific layers of the neural network when learning new tasks. In another study (Goldblum et al., 2020), the authors explain the success of meta-learning approaches by their capability to either cluster classes more tightly in feature space (task-specific adaptation approach), or to search for meta-parameters that lie close in weight space to many task-specific minima (full fine-tuning approach). Finally, the effect of the number of shots on the classification accuracy was studied theoretically and illustrated empirically in Cao et al. (2020) for the popular metric-based PROTONET algorithm. Our paper is complementary to all the works mentioned above, as it investigates a new aspect of meta-learning that has never been studied before, while following a sound theory. Also, we provide a more complete experimental evaluation, as the three different approaches to meta-learning (based on gradients, metrics, or transfer learning), separately studied in Raghu et al. (2020), Cao et al. (2020) and Goldblum et al. (2020), are now compared together.
Other regularization strategies Regularization is a common tool to reduce model complexity during learning for better generalization, and the variations of its two most famous instances, given by weight decay (Krogh & Hertz, 1992) and dropout (Srivastava et al., 2014), are commonly used as a basis in the meta-learning literature as well. In general, regularization in meta-learning is applied to the weights of the whole neural network (Balaji et al., 2018; Yin et al., 2020), to the predictions (Jamal & Qi, 2019; Goldblum et al., 2020), or is introduced via prior-hypothesis-biased regularized empirical risk minimization (Pentina & Lampert, 2014; Kuzborskij & Orabona, 2017; Denevi et al., 2018a;b; 2019). Our proposal is different from all the approaches mentioned above for the following reasons. First, we do not regularize the whole weight matrix learned by the neural network but only the linear predictors of its last layer, contrary to what is done by the methods of the first group and, more specifically, by the famous weight decay approach (Krogh & Hertz, 1992). The purpose of the regularization in our case is also completely different: weight decay is used to improve generalization through sparsity in order to avoid overfitting, while our goal is to keep the classification margin unchanged during training to avoid over-/under-specialization to some source tasks. Similarly, spectral normalization, proposed by Miyato et al. (2018) to satisfy the Lipschitz constraint in GANs by dividing the values of W by σmax(W), does not affect the ratio between σmax(W) and σmin(W) and serves a completely different purpose. Second, we regularize the singular values (entropy or ratio) of the matrix of linear predictors instead of the predictions, as done by the methods of the second group (e.g., using information-theoretic quantities in Jamal & Qi (2019) and Yin et al. (2020)). Finally, the works of the last group are related to the online setting with convex loss functions only, and, similarly to the algorithms from the second group, do not specifically target the spectral properties of the learned predictors. Last but not least, our proposal is built upon the most recent advances in the meta-learning field, which lead to faster learning rates, contrary to previous works.
4 PRACTICAL RESULTS
In this section, we use extensive experimental evaluations to answer the following two questions:
Q1) Do popular meta-learning methods naturally satisfy the learning bounds assumptions?
Q2) Does ensuring these assumptions help to (meta-)learn more efficiently?
For Q1, we run the original implementations of popular meta-learning methods to see what their natural behavior is. For Q2, we study the impact of forcing them to closely follow the theoretical setup.
4.1 EXPERIMENTAL SETUP
Datasets & Baselines We consider the few-shot image classification problem on three benchmark datasets, namely: 1) Omniglot (Lake et al., 2015), consisting of 1,623 classes with 20 images/class of size 28×28; 2) miniImageNet (Ravi & Larochelle, 2017), consisting of 100 classes with 600 images of size 84×84 per class; and 3) tieredImageNet (Ren et al., 2018), consisting of 779,165 images divided into 608 classes. For each dataset, we follow the commonly adopted experimental protocol used in Finn et al. (2017) and Chen et al. (2019) and use a four-layer convolution backbone (Conv4) with 64 filters, as done by Chen et al. (2019). On Omniglot, we perform 20-way classification with 1 shot and 5 shots, while on miniImageNet and tieredImageNet we perform 5-way classification with 1 shot and 5 shots. Finally, we evaluate four FSL methods: two popular meta-learning strategies, namely MAML (Finn et al., 2017), a gradient-based method, and Prototypical Networks (PROTONET) (Snell et al., 2017), a metric-based approach; and two popular transfer learning baselines, termed BASELINE and BASELINE++ (Ravi & Larochelle, 2017; Gidaris & Komodakis, 2018; Chen et al., 2019). Even though these baselines are trained with the standard supervised learning framework, such training can also be seen as learning a single task in the LTL framework.
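For readers unfamiliar with this episodic protocol, one possible N-way K-shot episode sampler is sketched below (NumPy; the data layout and all names are our own assumptions, not the authors' code):

```python
import numpy as np

def sample_episode(images_by_class, n_way=5, n_shot=1, n_query=15, rng=None):
    """Draw one N-way K-shot episode from a dict {class_id: array of images}."""
    rng = rng if rng is not None else np.random.default_rng()
    classes = rng.choice(np.array(list(images_by_class)), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        pool = images_by_class[c]
        idx = rng.permutation(len(pool))[: n_shot + n_query]
        support += [(pool[i], label) for i in idx[:n_shot]]   # K labeled shots
        query += [(pool[i], label) for i in idx[n_shot:]]     # evaluation points
    return support, query
```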
Implementation details Enforcing Assumptions 1 and 2 for MAML is straightforward, as it closely follows the LTL framework of episodic training. For each task, the model learns a batch of linear predictors, and we can directly take them as W_N to compute its SVD. Since the linear predictors are the weights of our model and change slowly, regularizing the norm ‖W_N‖_F and the ratio of singular values Rσ does not cause instabilities during training. Meanwhile, metric-based methods do not use linear predictors but compute a similarity between features. In the case of PROTONET, the similarity is computed with respect to class prototypes (i.e., the mean features of the images of each class). Since they act as linear predictors, a first idea would be to regularize the norm and the ratio of singular values of the prototypes. Unfortunately, this strategy hinders the convergence of the network and leads to numerical instabilities, most likely because prototypes are computed from image features, which change rapidly across batches. Consequently, to avoid instabilities during training, we regularize the entropy of singular values Hσ instead of the ratio Rσ to ensure Assumption 1, and we normalize the prototypes to ensure Assumption 2 by replacing w_t with w̄_t in Eq. 7. For the transfer learning methods BASELINE and BASELINE++, the last layer of the network is discarded and linear predictors are learned during meta-testing. Thus, we only regularize the norm ‖W_N‖_F of the predictors learned during the fine-tuning phase of meta-testing. Similarly to MAML, we compute Rσ with the last layer of the network during the training and fine-tuning phases.
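The prototype normalization used for PROTONET reduces to a few lines; a sketch is given below (the tensor layout is an assumption):

```python
import torch

def normalized_prototypes(support: torch.Tensor) -> torch.Tensor:
    """Replace each prototype w with w / ||w||_2 to enforce Assumption 2.

    support: features of the support set, shape (n_way, n_shot, k).
    """
    protos = support.mean(dim=1)                       # class means, (n_way, k)
    return protos / protos.norm(dim=1, keepdim=True)   # unit-norm prototypes
```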
Remark 2 We choose well-established meta-learning algorithms for our comparison, but the proposed regularization can be integrated similarly into their recent variations (Park & Oliva, 2019; Lee et al., 2019) (see Supplementary materials for results obtained with the method of Park & Oliva (2019)). Finally, using models that do not rely on linear predictors is also possible but might be more difficult as it would require upstream work to understand which part of the model acts as predictors (as done for PROTONET in this paper) and how to compute and track the desired quantities.
4.2 INSIGHTS
Q1 – Verifying the assumptions According to theory, ‖WN‖F and Rσ should remain constant or converge toward a constant value when monitoring the last N tasks. From Fig. 2(a), we can see that for MAML (Fig. 2(a) top), both ‖WN‖F and Rσ increase with the number of tasks seen during training, whereas PROTONET (Fig. 2(a) bottom) naturally learns the prototypes with a good coverage of the embedding space, and minimizes their norm. This behavior is rather peculiar as neither
of the two methods specifically controls the theoretical quantities of interest, and still, PROTONET manages to do it implicitly. As for the transfer learning baselines (Fig. 2(b) top and bottom), we expect them to learn features that cover the embedding space with Rσ rapidly converging towards a constant value. As can be seen in Fig. 2(b), similarly to PROTONET, BASELINE++ naturally learns linear predictors that cover the embedding space. As for BASELINE, it learns a good coverage for the Omniglot dataset, but fails to do so for the more complicated tieredImageNet dataset. The observed behavior of these different methods leads to the conclusion that some meta-learning algorithms are inherently more explorative of the embedding space.
Q2 – Ensuring the assumptions Armed with our regularization terms, we now aim to force the considered algorithms to verify the assumptions when this is not naturally the case. In particular, for MAML we regularize both ‖W_N‖_F and Rσ in order to keep them constant throughout the training. Similarly, we regularize Rσ during the training of BASELINE and BASELINE++, and both ‖W_N‖_F and Rσ during the fine-tuning phase of meta-testing. For PROTONET, we enforce a normalization of
the prototypes. According to our results for Q1, regularizing the singular values of the prototypes through the entropy Hσ is not necessary.³ Based on the obtained results, we can make the following conclusions. First, from Fig. 2(a) (left, middle) and Fig. 2(b) (left), we note that for all methods considered, our proposed methodology used to enforce the theoretical assumptions works as expected, and leads to a desired behavior during the learning process. This means that the differences in terms of results presented in Table 1 are explained fully by this particular addition to the optimized objective function. Second, from the shape of the accuracy curves provided in Fig. 2(a) (right) and the accuracy gaps when enforcing the assumptions given in Table 1, we can see that respecting the assumptions leads to several significant improvements related to different aspects of learning. On the one hand, we observe that the final validation accuracy improves significantly in all benchmarks for meta-learning methods and in most experiments for BASELINE (except for Omniglot, where BASELINE already learns to regularize its linear predictors). In accordance with the theory, we attribute the improvements to the fact that we fully utilize the training data, which leads to a tighter bound on the excess target risk and, consequently, to a better generalization performance. On the other hand, we also note that our regularization reduces the sample complexity of learning the target task, as indicated by the faster increase of the validation accuracy from the very beginning of the meta-training. Roughly speaking, less meta-training data is necessary to achieve a performance comparable to that obtained without the proposed regularization using more tasks. Finally, we note that the BASELINE++ and PROTONET methods naturally satisfy some assumptions: both learn diverse linear predictors by design, while BASELINE++ also normalizes the weights of its linear predictors. Thus, these methods do not benefit from additional regularization, as explained before.
5 CONCLUSION
In this paper, we studied the validity of the theoretical assumptions made in recent papers when applied to popular meta-learning algorithms, and proposed practical ways of enforcing them.
On the one hand, we showed that, depending on the problem and the algorithm, some models can naturally fulfill the theoretical conditions during training: some algorithms offer a better coverage of the embedding space than others. On the other hand, when the conditions are not verified, learning with our proposed regularization terms makes it possible to learn faster and improves the generalization capabilities of meta-learning methods. The theoretical framework studied in this paper explains the observed performance gain. Notice that no specific hyperparameter tuning was performed, as we rather aim at showing the effect of ensuring the learning bounds assumptions than at comparing the performance of the methods. Absolute accuracy results are detailed in the Supplementary materials.
While this paper proposes an initial approach to bridging the gap between theory and practice in meta-learning, some questions remain open on the inner workings of these algorithms. In particular, being able to take better advantage of the particularities of the training tasks during meta-training could help improve the effectiveness of these approaches. Self-supervised meta-learning and multiple target tasks prediction are also important future perspectives for the application of meta-learning.
³The effect of entropic regularization on PROTONET is detailed in the Supplementary materials.
1. What is the focus of the reviewed paper regarding meta-learning?
2. What are the concerns regarding the regularization methods proposed in the paper?
3. How do the experimental results compare to other approaches in meta-learning?
Review
########################################################################## Summary:
The paper reviews common assumptions made by recent theoretical analysis of meta-learning and applies them to meta-learning methods as regularization. Results show that these regularization terms improve over vanilla meta-learning.
########################################################################## Reasons for score:
Overall, I vote for reject. The main idea of applying theory to practice is reasonable, but the regularization methods proposed are mainly known. Regularizing the singular value is similar to the spectral normalization proposed in [1]. The Frobenius norm regularization is similar to the commonly used weight decay.
##########################################################################
1. Assumption 1 in Du et al. states that the ground-truth weights should cover all directions evenly. It cannot be ensured when the tasks are fixed. The proposed regularization penalizes the condition number of the weight matrix during training, which is more similar to the spectral normalization proposed in [1]. As to regularizing the Frobenius norm, there exists a line of literature showing that weight decay works in general settings beyond meta-learning. Thus, I think the regularization proposed in this paper is known.
2. The experimental results indeed improve over vanilla meta-learning. However, as shown in [2], even with some simple tricks, meta-learning can be made more stable and achieve better results. This casts doubt on the value of the proposed method.
[1] Spectral Normalization for Generative Adversarial Networks, ICLR 2018
[2] How to Train Your MAML, ICLR 2019
1. What are the limitations of the paper regarding its application of meta-learning theory in few-shot learning?
2. How does the reviewer assess the validity and relevance of the proposed regularizer in improving the model's generalization ability?
3. What are the weaknesses of the paper regarding its comparisons with other works and the choice of objective function?
4. How can the authors improve their discussion on the task distribution and provide further empirical results to support their claims?
5. Are there any concerns about the presentation and clarity of the paper's content?
Review
The main motivation of this paper is based on the theoretical results of meta-learning. To ensure the assumptions of these theories, the authors propose a novel regularizer, which improves the generalization ability of the model. Results on few-shot learning benchmarks show that the proposed method improves over the baselines.
Here are the main concerns of this paper:
The proposed method in this paper is based on the meta-learning theory as stated in Section 2. However, the theoretical setting here is not fully consistent with the few-shot learning setting. For example, there is no validation set in Eq. 1. The authors should provide more discussion here to show whether these differences influence the final results.
One main assumption in meta-learning theory concerns the task distribution. Could the authors make this notion clear? Should empirical results be reported on tasks with different kinds of task distributions?
The meta-learning loss in Eq. 4 is a bit different from the popular meta-learning objective. For example, in MAML, we do not optimize the classifier W until convergence; only a limited number of gradient steps are used.
The authors should list the baseline values in Table 1, which are still important for reference.
ICLR | Title
Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms
Abstract
Most existing deep learning models rely on excessive amounts of labeled training data in order to achieve state-of-the-art results, even though these data can be hard or costly to get in practice. One attractive alternative is to learn with little supervision, commonly referred to as few-shot learning (FSL), and, in particular, meta-learning, which learns to learn from few data coming from related tasks. Despite the practical success of meta-learning, many of its algorithmic solutions proposed in the literature are based on sound intuitions, but lack a solid theoretical analysis of the expected performance on the test task. In this paper, we review the recent advances in meta-learning theory and show how they can be used in practice both to better understand the behavior of popular meta-learning algorithms and to improve their generalization capacity. The latter is achieved by integrating the theoretical assumptions ensuring efficient meta-learning, in the form of regularization terms, into several popular meta-learning algorithms, for which we provide a large study of their behavior on classic few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of meta-learning theory into practice for the task of few-shot classification.
1 INTRODUCTION
Since the early days of the machine learning field, its algorithmic advances were inevitably followed or preceded by accompanying theoretical analyses establishing the conditions required for the corresponding algorithms to learn well. Such a synergy between theory and practice is reflected in numerous concepts and learning strategies that took their origins in statistical learning theory: for instance, the famous regularized risk minimization approach is directly related to the minimization of the complexity of the hypothesis space, as suggested by the generalization bounds established for supervised learning (Vapnik, 1992), while most of the adversarial algorithms in transfer learning (e.g., DANN from Ganin & Lempitsky (2015)) follow the theoretical insights provided by the seminal theory of its domain (Ben-David et al., 2010).
Even though many machine learning methods now enjoy a solid theoretical justification, some more recent advances in the field are still in a preliminary state, which requires the hypotheses put forward by the theoretical studies to be implemented and verified in practice. One such notable example is the emerging field of meta-learning, also called learning to learn (LTL), where the goal is to produce a model on data coming from a set of (meta-train) source tasks, to use it as a starting point for successfully learning a new, previously unseen (meta-test) target task with little supervision. This kind of approach comes in particularly handy when training deep learning models, as their performance crucially depends on the amount of training data, which can be difficult and/or expensive to get in some applications. Several theoretical studies (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020)¹ provided probabilistic meta-learning bounds that require the amount of data in the meta-train source tasks and the number of meta-train tasks to tend to infinity for efficient meta-learning. While capturing the underlying general intuition, these bounds do not suggest that all the source data is useful in such a learning setup, due to the
1We omit other works for meta-learning via online convex optimization (Finn et al., 2019; Balcan et al., 2019; Khodak et al., 2019; Denevi et al., 2019) as they concern a different learning setup.
additive relationship between the two terms mentioned above. To tackle this drawback, two very recent studies (Du et al., 2020; Tripuraneni et al., 2020) aimed at finding deterministic assumptions that lead to faster learning rates allowing meta-learning algorithms to benefit from all the source data. Contrary to probabilistic bounds that have been used to derive novel learning strategies for meta-learning algorithms (Amit & Meir, 2018; Yin et al., 2020), there was no attempt to verify the validity of the assumptions leading to the fastest known learning rates in practice or to enforce them through an appropriate optimization procedure.
In this paper, we bridge the meta-learning theory with practice by harvesting the theoretical results from Tripuraneni et al. (2020) and Du et al. (2020), and by showing how they can be implemented algorithmically and integrated, when needed, into popular existing meta-learning algorithms used for few-shot classification (FSC). The latter task consists in classifying new data having seen only a few training examples, and represents one of the most prominent examples where meta-learning has shown to be highly efficient. More precisely, our contributions are three-fold:
1. We identify two common assumptions from the theoretical works on meta-learning and show how they can be verified and forced via a novel regularization scheme.
2. We investigate whether these assumptions are satisfied for popular meta-learning algorithms and observe that some of them naturally satisfy them, while others do not.
3. With the proposed regularization strategy, we show that enforcing the assumptions to be valid in practice leads to better generalization of the considered algorithms.
The rest of the paper is organized as follows. After presenting preliminary knowledge on the meta-learning problem in Section 2, we detail the existing meta-learning theoretical results with their corresponding assumptions and show how they can be enforced via a general regularization technique in Section 3. Then, we provide an experimental evaluation of several popular few-shot learning (FSL) methods in Section 4 and highlight the different advantages brought by the proposed regularization in practice. Finally, we conclude and outline future research perspectives in Section 5.
2 PRELIMINARY KNOWLEDGE
We start by formally defining the meta-learning problem following the model described in Du et al. (2020). To this end, we assume having access to $T$ source tasks characterized by their respective data generating distributions $\{\mu_t\}_{t=1}^T$ supported over the joint input-output space $\mathcal{X} \times \mathcal{Y}$ with $\mathcal{X} \subseteq \mathbb{R}^d$ and $\mathcal{Y} \subseteq \mathbb{R}$. We further assume that these distributions are observed only through finite samples of size $n_1$ grouped into matrices $X_t = (\mathbf{x}_{t,1}, \ldots, \mathbf{x}_{t,n_1}) \in \mathbb{R}^{n_1 \times d}$ and vectors of outputs $\mathbf{y}_t = (y_{t,1}, \ldots, y_{t,n_1}) \in \mathbb{R}^{n_1}$, $\forall t \in [[T]] := \{1, \ldots, T\}$. Given this set of tasks, our goal is to learn a shared representation $\phi$ belonging to a certain class of functions $\Phi := \{\phi \mid \phi : \mathcal{X} \to \mathcal{V},\ \mathcal{V} \subseteq \mathbb{R}^k\}$ and linear predictors $\mathbf{w}_t \in \mathbb{R}^k$, $\forall t \in [[T]]$, grouped in a matrix $\mathbf{W} \in \mathbb{R}^{T \times k}$. More formally, this is done by solving the following optimization problem:
$$\hat{\phi}, \hat{\mathbf{W}} = \underset{\phi \in \Phi,\, \mathbf{W} \in \mathbb{R}^{T \times k}}{\arg\min}\ \frac{1}{2Tn_1} \sum_{t=1}^{T} \sum_{i=1}^{n_1} \ell\big(y_{t,i}, \langle \mathbf{w}_t, \phi(\mathbf{x}_{t,i}) \rangle\big), \tag{1}$$
where $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$ is a loss function. Once such a representation is learned, we want to apply it to a new previously unseen target task observed through a pair $(X_{T+1} \in \mathbb{R}^{n_2 \times d}, \mathbf{y}_{T+1} \in \mathbb{R}^{n_2})$ containing $n_2$ samples generated by the distribution $\mu_{T+1}$. We expect that a linear classifier $\mathbf{w}$ learned on top of the obtained representation leads to a low true risk over the whole distribution $\mu_{T+1}$. More precisely, we first use $\hat{\phi}$ to solve the following problem:
$$\hat{\mathbf{w}}_{T+1} = \underset{\mathbf{w} \in \mathbb{R}^k}{\arg\min}\ \frac{1}{n_2} \sum_{i=1}^{n_2} \ell\big(y_{T+1,i}, \langle \mathbf{w}, \hat{\phi}(\mathbf{x}_{T+1,i}) \rangle\big).$$
Then, we define the true target risk of the learned linear classifier $\hat{\mathbf{w}}_{T+1}$ as
$$L(\hat{\phi}, \hat{\mathbf{w}}_{T+1}) = \mathbb{E}_{(\mathbf{x},y) \sim \mu_{T+1}}\big[\ell(y, \langle \hat{\mathbf{w}}_{T+1}, \hat{\phi}(\mathbf{x}) \rangle)\big],$$
and want it to be small and as close as possible to the ideal true risk $L(\phi^*, \mathbf{w}^*_{T+1})$, where
$$\forall t \in [[T+1]] \text{ and } (\mathbf{x}, y) \sim \mu_t, \quad y = \langle \mathbf{w}^*_t, \phi^*(\mathbf{x}) \rangle + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma^2). \tag{2}$$
Equivalently, most of the works found in the literature seek to upper-bound the excess risk defined as $\mathrm{ER}(\hat{\phi}, \hat{\mathbf{w}}_{T+1}) := L(\hat{\phi}, \hat{\mathbf{w}}_{T+1}) - L(\phi^*, \mathbf{w}^*_{T+1})$ with quantities involved in the learning process.
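To make this setup concrete, the sketch below implements the empirical objective of Eq. 1 with a shared encoder and one linear head per source task. It is our own illustration rather than code from the paper; the dimensions, the architecture of $\phi$, and the choice of a squared loss are all assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the objective in Eq. 1 (all dimensions are assumptions).
d, k, T, n1 = 32, 8, 100, 20
phi = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))  # shared representation
W = nn.Parameter(torch.randn(T, k))  # row t holds the linear predictor w_t

def multitask_loss(X: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """(1 / (2*T*n1)) * sum_t sum_i loss(y_{t,i}, <w_t, phi(x_{t,i})>) with a squared loss."""
    Z = phi(X)                                 # (T, n1, k) shared features
    preds = torch.einsum("tik,tk->ti", Z, W)   # inner products <w_t, phi(x_{t,i})>
    return 0.5 * ((preds - y) ** 2).mean()     # mean over the T*n1 terms

X = torch.randn(T, n1, d)  # toy source-task inputs
y = torch.randn(T, n1)     # toy real-valued outputs, as in the model of Eq. 2
print(multitask_loss(X, y))
```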
Remark 1 We note that many popular meta-learning algorithms used for FSL do not follow exactly the approach described above. However, we believe that the exact way this is done algorithmically (with or without the support set, with or without learning episodes) does not change the underlying statistical challenge, which is to learn a model that can provably generalize with little supervision. Supervised learning theory tells us that generalization in this case is poor (there is not enough target data, and it is difficult to rely on data coming from different probability distributions), while the theoretical works we build upon suggest that source data may contribute to improving the generalization of the learned model alongside the target data if the assumptions described below are satisfied.
3 FROM THEORY TO PRACTICE
In this section, we highlight the main theoretical contributions that provably ensure the success of meta-learning in improving the performance on the previously unseen target task with the increasing number of source tasks and the amount of data available for them. We then concentrate our attention on the most recent theoretical advances leading to the fastest learning rates and show how the assumptions used to obtain them can be enforced in practice through a novel regularization strategy.
3.1 WHEN DOES META-LEARNING PROVABLY WORK?
One requirement for meta-learning to succeed in FSC is that a representation learned on meta-train data should be useful for learning a good predictor on the meta-test data set. This is reflected by bounding the excess target risk by a quantity that involves the number of samples in both the meta-train and meta-test sets as well as the number of available meta-train tasks.
To this end, first studies in the context of meta-learning relied on a probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test task distributions are all sampled i.i.d. from the same random distribution. This assumption, however, is considered unrealistic as in FSL source and target tasks’ data are often given by different draws (without replacement) from the same dataset. In this setup, the above-mentioned works obtained bounds of the following form:
$$\mathrm{ER}(\hat{\phi}, \hat{\mathbf{w}}_{T+1}) \leq O\!\left(\frac{1}{\sqrt{n_1}} + \frac{1}{\sqrt{T}}\right).$$
This guarantee implies that not only the number of source data points, but also the number of tasks should be large in order to drive the second term to 0. An improvement was then proposed by Du et al. (2020) and Tripuraneni et al. (2020), who obtained bounds on the excess risk behaving as
$$O\!\left(\frac{kd}{\sqrt{n_1 T}} + \frac{k}{\sqrt{n_2}}\right) \quad \text{and} \quad \tilde{O}\!\left(\frac{kd}{n_1 T} + \frac{k}{n_2}\right),$$
respectively, where $k \ll d$ is the dimensionality of the learned representation and $\tilde{O}(\cdot)$ hides logarithmic factors. Both these results show that all the source and target samples are useful in minimizing the excess risk: in the FSL regime where target data is scarce, all source data helps to learn well. From the set of assumptions made by the authors in both of these works, we note the following two:
Assumption 1. The matrix of optimal predictors $\mathbf{W}^*$ should cover all the directions in $\mathbb{R}^k$ evenly. More formally, this can be stated as
$$R_\sigma(\mathbf{W}^*) = \frac{\sigma_1(\mathbf{W}^*)}{\sigma_k(\mathbf{W}^*)} = O(1), \tag{3}$$
where $\sigma_i(\cdot)$ denotes the $i$th singular value of $\mathbf{W}^*$. As pointed out by the authors, such an assumption can be seen as a measure of diversity between the source tasks that are expected to be complementary to each other in order to provide a meaningful representation for a previously unseen target task.
Assumption 2. The norm of the optimal predictors $\mathbf{w}^*$ should not increase with the number of tasks seen during meta-training.² This assumption says that the classification margin of linear predictors should remain constant, thus avoiding over- or under-specialization to the seen tasks.
While these assumptions are highly insightful, the authors did not provide any experimental evidence suggesting that verifying them in practice helps to learn more efficiently in the considered learning setting. To bridge this gap, we propose to use a general regularization scheme that allows us to enforce these assumptions when learning the matrix of predictors in several popular meta-learning algorithms.
3.2 PUTTING THEORY TO WORK
As the assumptions mentioned above are stated for the optimal predictors that are inherently linked to the data generating process, one may wonder what happens when these latter do not satisfy them. To this end, we aim to answer the following question:
Given $\mathbf{W}^*$ such that $R_\sigma(\mathbf{W}^*) \gg 1$, can we learn $\hat{\mathbf{W}}$ with $R_\sigma(\hat{\mathbf{W}}) \approx 1$ while solving the underlying classification problems equally well?
It turns out that we can construct an example, illustrated in Fig. 1, for which the answer to this question is positive. To this end, let us consider a binary classification problem over $\mathcal{X} \subseteq \mathbb{R}^3$ with labels $\mathcal{Y} = \{-1, 1\}$ and two source tasks generated, for $k, \varepsilon \in\, ]0, 1]$, as follows:
1. $\mu_1$ is uniform over $\{(1 - k\varepsilon,\ k,\ 1)\} \times \{1\} \cup \{(-1 - k\varepsilon,\ k,\ -1)\} \times \{-1\}$; 2. $\mu_2$ is uniform over $\{(1 + k\varepsilon,\ k,\ \tfrac{k-1}{\varepsilon})\} \times \{1\} \cup \{(-1 + k\varepsilon,\ k,\ \tfrac{1+k}{\varepsilon})\} \times \{-1\}$.
We now define the optimal representation and two optimal predictors for each distribution as the solution to Eq. 1 over the two data generating distributions and $\Phi = \{\phi \mid \phi(\mathbf{x}) = \Phi^T\mathbf{x},\ \Phi \in \mathbb{R}^{3 \times 2}\}$:
$$\phi^*, \mathbf{W}^* = \underset{\phi \in \Phi,\, \mathbf{W} \in \mathbb{R}^{2 \times 2}}{\arg\min}\ \sum_{i=1}^{2} \mathbb{E}_{(\mathbf{x},y) \sim \mu_i}\, \ell\big(y, \langle \mathbf{w}_i, \phi(\mathbf{x}) \rangle\big). \tag{4}$$
One solution to this problem can be given as follows:
$$\Phi^* = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}^T, \quad \mathbf{W}^* = \begin{pmatrix} 1 & \varepsilon \\ 1 & -\varepsilon \end{pmatrix},$$
where $\phi^*$ projects the data generated by $\mu_i$ to a two-dimensional space by discarding its third dimension, and the linear predictors satisfy the data generating process from Eq. 2 with $\varepsilon = 0$. One can verify that in this case $\mathbf{W}^*$ has singular values equal to $\sqrt{2}$ and $\sqrt{2}\varepsilon$, so that the ratio $R_\sigma(\mathbf{W}^*) = \frac{1}{\varepsilon}$: when $\varepsilon \to 0$, the optimal predictors make the ratio arbitrarily large, thus violating Assumption 1.
²While not stated as a separate assumption, Du et al. (2020) assume it to derive Assumption 1 mentioned above. See p. 5 and the discussion after Assumption 4.3 in their pre-print.
Let us now consider a different problem where we want to solve Eq. 4 with a constraint that forces linear predictors to satisfy Assumption 1:
$$\hat{\phi}, \hat{\mathbf{W}} = \underset{\phi \in \Phi,\, \mathbf{W} \in \mathbb{R}^{2 \times 2}}{\arg\min}\ \sum_{i=1}^{2} \mathbb{E}_{(\mathbf{x},y) \sim \mu_i}\, \ell\big(y, \langle \mathbf{w}_i, \phi(\mathbf{x}) \rangle\big), \quad \text{s.t. } R_\sigma(\mathbf{W}) \approx 1. \tag{5}$$
Its solution is different and is given by
$$\hat{\Phi} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}^T, \quad \hat{\mathbf{W}} = \begin{pmatrix} 0 & 1 \\ 1 & -\varepsilon \end{pmatrix}.$$
Similarly to $\Phi^*$, $\hat{\Phi}$ projects to a two-dimensional space by discarding the first dimension of the data generated by $\mu_i$. The learned predictors in this case also satisfy Eq. 2 with $\varepsilon = 0$, but contrary to $\mathbf{W}^*$,
$$R_\sigma(\hat{\mathbf{W}}) = \sqrt{\frac{2 + \varepsilon^2 + \varepsilon\sqrt{\varepsilon^2 + 4}}{2 + \varepsilon^2 - \varepsilon\sqrt{\varepsilon^2 + 4}}}$$
tends to 1 when $\varepsilon \to 0$.
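Both ratios are easy to check numerically. The following snippet (our own sanity check, not part of the paper) computes the singular-value ratios of $\mathbf{W}^*$ and $\hat{\mathbf{W}}$ for decreasing values of $\varepsilon$:

```python
import numpy as np

for eps in (0.5, 0.1, 0.01):
    W_star = np.array([[1.0, eps], [1.0, -eps]])
    W_hat = np.array([[0.0, 1.0], [1.0, -eps]])
    s_star = np.linalg.svd(W_star, compute_uv=False)  # singular values, descending
    s_hat = np.linalg.svd(W_hat, compute_uv=False)
    # R_sigma(W_star) = 1/eps grows without bound, while R_sigma(W_hat) -> 1.
    print(f"eps={eps}: R(W*)={s_star[0] / s_star[1]:.2f}, R(W^)={s_hat[0] / s_hat[1]:.4f}")
```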
Several remarks are in order here. First, this example shows that even when $\mathbf{W}^*$ does not satisfy Assumption 1 in the space induced by $\phi^*$, it may still be possible to learn a new representation space $\hat{\phi}$ such that the optimal predictors in this space will satisfy Assumption 1. This can be done either by considering the constrained problem from Eq. 5, or by using a more common strategy that consists in adding $R_\sigma(\mathbf{W})$ directly as a regularization term:
$$\hat{\phi}, \hat{\mathbf{W}} = \underset{\phi \in \Phi,\, \mathbf{W} \in \mathbb{R}^{T \times k}}{\arg\min}\ \frac{1}{2Tn_1} \sum_{t=1}^{T} \sum_{i=1}^{n_1} \ell\big(y_{t,i}, \langle \mathbf{w}_t, \phi(\mathbf{x}_{t,i}) \rangle\big) + \lambda_1 R_\sigma(\mathbf{W}). \tag{6}$$
Below, we explain how to implement this idea in practice for popular meta-learning algorithms.
Ensuring assumption 1. We propose to compute the singular values of $\mathbf{W}$ during the meta-training stage and follow their evolution during the learning episodes. In practice, this can be done by performing the Singular Value Decomposition (SVD) on $\mathbf{W} \in \mathbb{R}^{T \times k}$ with a computational cost of $O(Tk^2)$ floating-point operations (flop). However, as $T$ is typically quite large, we propose a more computationally efficient solution that is to take into account only the last batch of $N$ predictors (with $N \ll T$) grouped in the matrix $\mathbf{W}_N \in \mathbb{R}^{N \times k}$ that captures the latest dynamics in the learning process. We further note that $\sigma_i(\mathbf{W}_N\mathbf{W}_N^\top) = \sigma_i^2(\mathbf{W}_N)$, $\forall i \in [[N]]$, implying that we can calculate the SVD of $\mathbf{W}_N\mathbf{W}_N^\top$ (or $\mathbf{W}_N^\top\mathbf{W}_N$ for $k \leq N$) and retrieve the singular values from it afterwards.
We now want to verify whether the optimal linear predictors $\mathbf{w}_t$ cover all directions in the embedding space by tracking the evolution of the ratio of singular values $R_\sigma(\mathbf{W}_N)$ during the training process. For the sake of conciseness, we use $R_\sigma$ instead of $R_\sigma(\mathbf{W}_N)$ thereafter. According to the theory, we expect $R_\sigma$ to decrease during training, thus improving the generalization of the learned predictors and preparing them for the target task. When we want to enforce such a behavior in practice, we propose to use $R_\sigma$ as a regularization term in the training loss of popular meta-learning algorithms.
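A possible PyTorch implementation of this computation is sketched below. The Gram-matrix shortcut mirrors the identity above; the clamping floor is our own addition to guard against a vanishing smallest singular value.

```python
import torch

def singular_values(W_N: torch.Tensor) -> torch.Tensor:
    """Singular values of the last batch of predictors W_N, shape (N, k)."""
    N, k = W_N.shape
    # Eigenvalues of the smaller Gram matrix are the squared singular values.
    gram = W_N @ W_N.T if N <= k else W_N.T @ W_N
    evals = torch.linalg.eigvalsh(gram)               # ascending, >= 0 up to round-off
    return torch.sqrt(torch.clamp(evals, min=1e-12))

def ratio_reg(W_N: torch.Tensor) -> torch.Tensor:
    """R_sigma = sigma_max / sigma_min, usable as a differentiable penalty."""
    s = singular_values(W_N)
    return s[-1] / s[0]
```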
Alternatively, as the smallest singular value $\sigma_N(\mathbf{W}_N)$ can be close to 0 and lead to numerical errors, we propose to replace the ratio of the vector of singular values by its entropy as follows:
$$H_\sigma(\mathbf{W}_N) = -\sum_{i=1}^{N} \mathrm{softmax}(\sigma(\mathbf{W}_N))_i \cdot \log\, \mathrm{softmax}(\sigma(\mathbf{W}_N))_i,$$
where $\mathrm{softmax}(\cdot)_i$ is the $i$th output of the softmax function. As with $R_\sigma$, we write $H_\sigma$ instead of $H_\sigma(\mathbf{W}_N)$ from now on. Since the uniform distribution has the highest entropy, regularizing with $R_\sigma$ or $-H_\sigma$ leads to a better coverage of $\mathbb{R}^k$ by ensuring a nearly identical importance regardless of the direction. We refer the reader to the Supplementary materials for the derivations ensuring the existence of the subgradients for these terms.
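Reusing `singular_values` from the previous sketch, $H_\sigma$ could be computed as follows (again an illustration under our own naming; to regularize, one adds $-H_\sigma$ to the loss):

```python
def entropy_reg(W_N: torch.Tensor) -> torch.Tensor:
    """H_sigma: entropy of the softmax-normalized singular-value spectrum."""
    p = torch.softmax(singular_values(W_N), dim=0)  # strictly positive, sums to 1
    return -(p * torch.log(p)).sum()                # maximal for a flat spectrum
```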
Ensuring assumption 2. In addition to the full coverage of the embedding space by the linear predictors, the meta-learning theory assumes that the norm of the linear predictors does not increase with the number of tasks seen during meta-training, i.e., $\|\mathbf{w}\|_2 = O(1)$ or, equivalently, $\|\mathbf{W}\|_F^2 = O(T)$. If this assumption does not hold in practice, we propose to regularize the norm of the linear predictors during training or directly normalize the obtained linear predictors $\bar{\mathbf{w}} = \frac{\mathbf{w}}{\|\mathbf{w}\|_2}$.
The final meta-training loss with the theory-inspired regularization terms is given as:
$$\min_{\phi \in \Phi,\, \mathbf{W} \in \mathbb{R}^{T \times k}}\ \frac{1}{2Tn_1} \sum_{t=1}^{T} \sum_{i=1}^{n_1} \ell\big(y_{t,i}, \langle \mathbf{w}_t, \phi(\mathbf{x}_{t,i}) \rangle\big) + \lambda_1 R_\sigma(\mathbf{W}_N) + \lambda_2 \|\mathbf{W}_N\|_F^2, \tag{7}$$
and depending on the considered algorithm, we can replace $R_\sigma$ by $-H_\sigma$ and/or replace $\mathbf{w}_t$ by $\bar{\mathbf{w}}_t$ instead of regularizing with $\|\mathbf{W}_N\|_F^2$. In what follows, we consider $\lambda_1 = \lambda_2 = 1$ and we refer the reader to the Supplementary materials for more details and experiments with other values.
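Putting the pieces together, one evaluation of the regularized objective of Eq. 7 could look like the sketch below, which reuses `multitask_loss`, `ratio_reg`, and `entropy_reg` from the earlier snippets; the batch size N = 16 is an arbitrary assumption.

```python
def regularized_loss(X, y, lambda1=1.0, lambda2=1.0, use_entropy=False):
    """Eq. 7: empirical risk + spectral penalty + Frobenius-norm penalty."""
    W_N = W[-16:]  # last batch of N predictors (N assumed = 16)
    spectral = -entropy_reg(W_N) if use_entropy else ratio_reg(W_N)
    return multitask_loss(X, y) + lambda1 * spectral + lambda2 * (W_N ** 2).sum()
```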
To the best of our knowledge, such regularization terms based on insights from the advances in meta-learning theory have never been used in the literature before. We further use the basic quantities involved in the proposed regularization terms as indicators of whether a given meta-learning algorithm naturally satisfies the assumptions ensuring an efficient meta-learning in practice or not.
3.3 RELATED WORK
Below, we discuss several related studies aiming at improving the general understanding of meta-learning, and mention other regularization terms specifically designed for meta-learning.
Understanding meta-learning While a complete theory for meta-learning is still lacking, several recent works aimed to shed light on phenomena commonly observed in meta-learning by evaluating different intuitive heuristics. For instance, Raghu et al. (2020) investigated whether the popular gradient-based MAML algorithm relies on rapid learning, with significant changes in the representations when deployed on the target task, or on feature reuse, where the learned representation remains almost intact. They establish that the latter factor is dominant and propose a new variation of MAML that freezes all but the task-specific layers of the neural network when learning new tasks. In another study (Goldblum et al., 2020), the authors explain the success of meta-learning approaches by their capability either to cluster classes more tightly in feature space (task-specific adaptation approach), or to search for meta-parameters that lie close in weight space to many task-specific minima (full fine-tuning approach). Finally, the effect of the number of shots on the classification accuracy was studied theoretically and illustrated empirically in Cao et al. (2020) for the popular metric-based PROTONET algorithm. Our paper is complementary to all the works mentioned above as it investigates a new aspect of meta-learning that has never been studied before, while following a sound theory. Also, we provide a more complete experimental evaluation, as the three different approaches to meta-learning (based on gradients, metrics, or transfer learning), separately presented in Raghu et al. (2020), Cao et al. (2020) and Goldblum et al. (2020), are now compared together.
Other regularization strategies Regularization is a common tool to reduce model complexity during learning for better generalization, and the variations of its two most famous instances, given by weight decay (Krogh & Hertz, 1992) and dropout (Srivastava et al., 2014), are commonly used as a basis in the meta-learning literature as well. In general, regularization in meta-learning is applied to the weights of the whole neural network (Balaji et al., 2018; Yin et al., 2020), to the predictions (Jamal & Qi, 2019; Goldblum et al., 2020), or is introduced via a prior hypothesis biased regularized empirical risk minimization (Pentina & Lampert, 2014; Kuzborskij & Orabona, 2017; Denevi et al., 2018a;b; 2019). Our proposal is different from all the approaches mentioned above for the following reasons. First, we do not regularize the whole weight matrix learned by the neural network but the linear predictors of its last layer, contrary to what was done in the methods of the first group, and, more specifically, the famous weight decay approach (Krogh & Hertz, 1992). The purpose of the regularization in our case is also completely different: weight decay is used to improve generalization through sparsity in order to avoid overfitting, while our goal is to keep the classification margin unchanged during training to avoid over-/under-specialization to some source tasks. Similarly, spectral normalization, proposed by Miyato et al. (2018) to satisfy the Lipschitz constraint in GANs by dividing the values of W by σmax(W), does not affect the ratio between σmax(W) and σmin(W) and serves a completely different purpose. Second, we regularize the singular values (entropy or ratio) of the matrix of linear predictors instead of the predictions, as done by the methods of the second group (e.g., using information-theoretic quantities in Jamal & Qi (2019) and Yin et al. (2020)). Finally, the works of the last group are related to the online setting with convex loss functions only and, similarly to the algorithms of the second group, do not specifically target the spectral properties of the learned predictors. Last, but not least, our proposal is built upon the most recent advances in the meta-learning field leading to faster learning rates, contrary to previous works.
4 PRACTICAL RESULTS
In this section, we use extensive experimental evaluations to answer the following two questions:
Q1) Do popular meta-learning methods naturally satisfy the learning bounds assumptions?
Q2) Does ensuring these assumptions help to (meta-)learn more efficiently?
For Q1, we run the original implementations of popular meta-learning methods to observe their natural behavior. For Q2, we study the impact of forcing them to closely follow the theoretical setup.
4.1 EXPERIMENTAL SETUP
Datasets & Baselines We consider the few-shot image classification problem on three benchmark datasets, namely: 1) Omniglot (Lake et al., 2015), consisting of 1,623 classes with 20 images per class of size 28×28; 2) miniImageNet (Ravi & Larochelle, 2017), consisting of 100 classes with 600 images of size 84×84 per class; and 3) tieredImageNet (Ren et al., 2018), consisting of 779,165 images divided into 608 classes. For each dataset, we follow the commonly adopted experimental protocol used in Finn et al. (2017) and Chen et al. (2019) and use a four-layer convolution backbone (Conv4) with 64 filters, as done by Chen et al. (2019). On Omniglot, we perform 20-way classification with 1 shot and 5 shots, while on miniImageNet and tieredImageNet we perform 5-way classification with 1 shot and 5 shots. Finally, we evaluate four FSL methods: two popular meta-learning strategies, namely MAML (Finn et al., 2017), a gradient-based method, and Prototypical Networks (PROTONET) (Snell et al., 2017), a metric-based approach; and two popular transfer learning baselines, termed BASELINE and BASELINE++ (Ravi & Larochelle, 2017; Gidaris & Komodakis, 2018; Chen et al., 2019). Even though these baselines are trained with the standard supervised learning framework, such training can also be seen as learning a single task in the LTL framework.
Implementation details Enforcing Assumptions 1 and 2 for MAML is straightforward as it closely follows the LTL framework of episodic training. For each task, the model learns a batch of linear predictors and we can directly take them as WN to compute its SVD. Since the linear predictors are the weights of our model and change slowly, regularizing the norm ‖WN‖F and the ratio of singular values Rσ does not cause instabilities during training. Meanwhile, metric-based methods do not use linear predictors but compute a similarity between features. In the case of PROTONET, the similarity is computed with respect to class prototypes (i.e., the mean features of the images of each class). Since they act as linear predictors, a first idea would be to regularize the norm and ratio of singular values of the prototypes. Unfortunately, this latter strategy hinders the convergence of the network and leads to numerical instabilities, most likely because prototypes are computed from image features, which change rapidly across batches. Consequently, we regularize the entropy of singular values Hσ instead of the ratio Rσ to ensure Assumption 1 while avoiding instabilities during training, and we normalize the prototypes to ensure Assumption 2 by replacing wt with w̄t in Eq. 7. For the transfer learning methods BASELINE and BASELINE++, the last layer of the network is discarded and linear predictors are learned during meta-testing. Thus, we only regularize the norm ‖WN‖F of predictors learned during the fine-tuning phase of meta-testing. Similarly to MAML, we compute Rσ with the last layer of the network during the training and fine-tuning phases.
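For PROTONET, the prototype normalization described above amounts to a few lines; the sketch below uses assumed tensor shapes and is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def normalized_prototypes(support_feats: torch.Tensor) -> torch.Tensor:
    """support_feats: (n_way, n_shot, k) embedded support images."""
    protos = support_feats.mean(dim=1)       # class prototypes, shape (n_way, k)
    return F.normalize(protos, p=2, dim=-1)  # w_bar = w / ||w||_2 (Assumption 2)
```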
Remark 2 We choose well-established meta-learning algorithms for our comparison, but the proposed regularization can be integrated similarly into their recent variations (Park & Oliva, 2019; Lee et al., 2019) (see Supplementary materials for results obtained with the method of Park & Oliva (2019)). Finally, using models that do not rely on linear predictors is also possible but might be more difficult as it would require upstream work to understand which part of the model acts as predictors (as done for PROTONET in this paper) and how to compute and track the desired quantities.
4.2 INSIGHTS
Q1 – Verifying the assumptions According to theory, ‖WN‖F and Rσ should remain constant or converge toward a constant value when monitoring the last N tasks. From Fig. 2(a), we can see that for MAML (Fig. 2(a) top), both ‖WN‖F and Rσ increase with the number of tasks seen during training, whereas PROTONET (Fig. 2(a) bottom) naturally learns the prototypes with a good coverage of the embedding space, and minimizes their norm. This behavior is rather peculiar as neither
of the two methods specifically controls the theoretical quantities of interest, and still, PROTONET manages to do it implicitly. As for the transfer learning baselines (Fig. 2(b) top and bottom), we expect them to learn features that cover the embedding space, with Rσ rapidly converging towards a constant value. As can be seen in Fig. 2(b), similarly to PROTONET, BASELINE++ naturally learns linear predictors that cover the embedding space. As for BASELINE, it learns a good coverage for the Omniglot dataset, but fails to do so for the more complicated tieredImageNet dataset. The observed behavior of these different methods leads to the conclusion that some meta-learning algorithms are inherently more explorative of the embedding space.
Q2 – Ensuring the assumptions Armed with our regularization terms, we now aim to force the considered algorithms to verify the assumptions when it is not naturally done. In particular, for MAML we regularize both ‖WN‖F and Rσ in order to keep them constant throughout the training. Similarly, we regularize Rσ during the training of BASELINE and BASELINE++, and both ‖WN‖F and Rσ during the finetuning phase of meta-testing. For PROTONET, we enforce a normalization of
the prototypes. According to our results for Q1, regularizing the singular values of the prototypes through the entropy Hσ is not necessary.³ Based on the obtained results, we can make the following conclusions. First, from Fig. 2(a) (left, middle) and Fig. 2(b) (left), we note that for all methods considered, our proposed methodology used to enforce the theoretical assumptions works as expected and leads to the desired behavior during the learning process. This means that the differences in terms of results presented in Table 1 are explained fully by this particular addition to the optimized objective function. Second, from the shape of the accuracy curves provided in Fig. 2(a) (right) and the accuracy gaps when enforcing the assumptions given in Table 1, we can see that respecting the assumptions leads to several significant improvements related to different aspects of learning. On the one hand, we observe that the final validation accuracy improves significantly on all benchmarks for the meta-learning methods and in most experiments for BASELINE (except for Omniglot, where BASELINE already learns to regularize its linear predictors). In accordance with the theory, we attribute the improvements to the fact that we fully utilize the training data, which leads to a tighter bound on the excess target risk and, consequently, to better generalization performance. On the other hand, we also note that our regularization reduces the sample complexity of learning the target task, as indicated by the faster increase of the validation accuracy from the very beginning of meta-training. Roughly speaking, less meta-training data is necessary to achieve a performance comparable to that obtained without the proposed regularization using more tasks. Finally, we note that the BASELINE++ and PROTONET methods naturally satisfy some assumptions: both learn diverse linear predictors by design, while BASELINE++ also normalizes the weights of its linear predictors. Thus, these methods do not benefit from additional regularization, as explained before.
5 CONCLUSION
In this paper, we studied the validity, for popular meta-learning algorithms, of the theoretical assumptions made in recent papers, and proposed practical ways of enforcing them.
On the one hand, we showed that depending on the problem and algorithm, some models can naturally fulfill the theoretical conditions during training: some algorithms offer a better covering of the embedding space than others. On the other hand, when the conditions are not verified, learning with our proposed regularization terms makes it possible to learn faster and improves the generalization capabilities of meta-learning methods. The theoretical framework studied in this paper explains the observed performance gain. Notice that no specific hyperparameter tuning was performed, as we aim at showing the effect of ensuring the learning-bound assumptions rather than comparing the performance of the methods. Absolute accuracy results are detailed in the Supplementary materials.
While this paper proposes an initial approach to bridging the gap between theory and practice in meta-learning, some questions remain open on the inner workings of these algorithms. In particular, being able to take better advantage of the particularities of the training tasks during meta-training could help improve the effectiveness of these approaches. Self-supervised meta-learning and multiple target tasks prediction are also important future perspectives for the application of meta-learning.
³The effect of entropic regularization on PROTONET is detailed in the Supplementary materials.

1. What are the contributions and novel aspects of the paper regarding meta-learning algorithms?
2. What are the concerns regarding the efficacy of the proposed methods, particularly in experimental results?
3. How could the loss function in Eq. (4) be improved regarding weighting parameters for regularization terms?
4. Is there any confusion in the paper regarding enforcing/ensuring assumptions versus respecting them?
5. Can the authors provide synthetic experiments to clarify the effectiveness of the proposed regularization terms in cases where the learning problem does not satisfy the two assumptions?

Review
To improve the practical performance of meta-learning algorithms, this paper proposes two regularization terms that are motivated by two common assumptions in some recent theoretical work on meta-learning, namely (1) the optimal (linear) predictors cover the embedding space evenly, and (2) the norms of the optimal predictors remain bounded as the number of tasks grow. Numerical experiments show that the proposed regularization terms help achieve better performance of meta-learning in some tasks.
This work serves as a nice attempt to inform the practice of meta-learning with theoretical insights. Below are some of my concerns.
In some experimental results, the improvement due to the proposed regularization seems to be at the same level of the standard deviation, as well as the difference between the reproduced results of existing meta-learning algorithms and those reported in earlier papers. This casts doubt on the true efficacy of the proposed methods.
For the loss function in Eq. (4), it is more reasonable and natural to introduce two weighting parameters (as tunable hyperparameters) for the proposed regularization terms.
The authors often talk about "enforcing/ensuring the assumptions". However, from my understanding, whether the assumptions (on the optimal linear predictors, or "ground-truth" predictors) hold or not depends on the learning problem itself, NOT on the algorithms. Therefore, there is no way we can enforce/ensure these assumptions. I would prefer using the phrase "respecting the assumptions" (used by the authors on Page 8); this seems more accurate and reasonable.
Following the previous point, I'm curious about one question: if the learning problem actually doesn't satisfy the two assumptions, then is it still helpful to add the proposed regularization terms to the loss function? (I'm not sure, but my guess is no; indeed, it might even hurt.) To solve puzzles like this, I would encourage the authors to conduct some synthetic experiments, where they can design the data generating process (e.g. they can control whether the true linear predictors satisfy the assumptions or not). Since this work is a connection between theory and practice, I believe that experiments with synthetic data can help explain things more clearly and make the claims more convincing. |
ICLR | Title
Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms
Abstract
Most of existing deep learning models rely on excessive amounts of labeled training data in order to achieve state-of-the-art results, even though these data can be hard or costly to get in practice. One attractive alternative is to learn with little supervision, commonly referred to as few-shot learning (FSL), and, in particular, meta-learning that learns to learn with few data from related tasks. Despite the practical success of meta-learning, many of its algorithmic solutions proposed in the literature are based on sound intuitions, but lack a solid theoretical analysis of the expected performance on the test task. In this paper, we review the recent advances in meta-learning theory and show how they can be used in practice both to better understand the behavior of popular meta-learning algorithms and to improve their generalization capacity. This latter is achieved by integrating the theoretical assumptions ensuring efficient meta-learning in the form of regularization terms into several popular meta-learning algorithms for which we provide a large study of their behavior on classic few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of meta-learning theory into practice for the task of few-shot classification.
1 INTRODUCTION
Since the very seeding of the machine learning field, its algorithmic advances were inevitably followed or preceded by the accompanying theoretical analyses establishing the conditions required for the corresponding algorithms to learn well. Such a synergy between theory and practice is reflected in numerous concepts and learning strategies that took their origins in the statistical learning theory: for instance, the famous regularized risk minimization approach is directly related to the minimization of the complexity of the hypothesis space, as suggested by the generalization bounds established for supervised learning (Vapnik, 1992), while most of the adversarial algorithms in transfer learning (e.g., DANN from (Ganin & Lempitsky, 2015)) follow the theoretical insights provided by the seminal theory of its domain (Ben-David et al., 2010).
Even though many machine learning methods now enjoy a solid theoretical justification, some more recent advances in the field are still in their preliminary state which requires the hypotheses put forward by the theoretical studies to be implemented and verified in practice. One such notable example is the emerging field of meta-learning, also called learning to learn (LTL), where the goal is to produce a model on data coming from a set of (meta-train) source tasks to use it as a starting point for learning successfully a new previously unseen (meta-test) target task with little supervision. This kind of approach comes in particularly handy when training deep learning models as their performance crucially depends on the amount of training data that can be difficult and/or expensive to get in some applications. Several theoretical studies (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020)1 provided probabilistic meta-learning bounds that require the amount of data in the meta-train source task and the number of meta-train tasks to tend to infinity for efficient meta-learning. While capturing the underlying general intuition, these bounds do not suggest that all the source data is useful in such learning setup due to the
1We omit other works for meta-learning via online convex optimization (Finn et al., 2019; Balcan et al., 2019; Khodak et al., 2019; Denevi et al., 2019) as they concern a different learning setup.
additive relationship between the two terms mentioned above. To tackle this drawback, two very recent studies (Du et al., 2020; Tripuraneni et al., 2020) aimed at finding deterministic assumptions that lead to faster learning rates allowing meta-learning algorithms to benefit from all the source data. Contrary to probabilistic bounds that have been used to derive novel learning strategies for meta-learning algorithms (Amit & Meir, 2018; Yin et al., 2020), there was no attempt to verify the validity of the assumptions leading to the fastest known learning rates in practice or to enforce them through an appropriate optimization procedure.
In this paper, we bridge the meta-learning theory with practice by harvesting the theoretical results from Tripuraneni et al. (2020) and Du et al. (2020), and by showing how they can be implemented algorithmically and integrated, when needed, to popular existing meta-learning algorithms used for few-shot classification (FSC). This latter task consists in classifying new data having seen only few training examples, and represents one of the most prominent examples where meta-learning has shown to be highly efficient. More precisely, our contributions are three-fold:
1. We identify two common assumptions from the theoretical works on meta-learning and show how they can be verified and forced via a novel regularization scheme.
2. We investigate whether these assumptions are satisfied for popular meta-learning algorithms and observe that some of them naturally satisfy them, while others do not.
3. With the proposed regularization strategy, we show that enforcing the assumptions to be valid in practice leads to better generalization of the considered algorithms.
The rest of the paper is organized as follows. After presenting preliminary knowledge on the metalearning problem in Section 2, we detail the existing meta-learning theoretical results with their corresponding assumptions and show how they can be enforced via a general regularization technique in Section 3. Then, we provide an experimental evaluation of several popular few-shot learning (FSL) methods in Section 4 and highlight the different advantages brought by the proposed regularization in practice. Finally, we conclude and outline future research perspectives in Section 5.
2 PRELIMINARY KNOWLEDGE
We start by formally defining the meta-learning problem following the model described in Du et al. (2020). To this end, we assume having access to T source tasks characterized by their respective data generating distributions {µt}Tt=1 supported over the joint input-output space X × Y with X ⊆ Rd and Y ⊆ R. We further assume that these distributions are observed only through finite size samples of size n1 grouped into matrices Xt = (xt,1, . . . ,xt,n1) ∈ Rn1×d and vectors of outputs yt = (yt,1, . . . , yt,n1) ∈ Rn1 , ∀t ∈ [[T ]] := {1, . . . , T}. Given this set of tasks, our goal is to learn a shared representation φ belonging to a certain class of functions Φ := {φ | φ : X → V, V ⊆ Rk} and linear predictors wt ∈ Rk, ∀t ∈ [[T ]] grouped in a matrix W ∈ RT×k. More formally, this is done by solving the following optimization problem:
φ̂,Ŵ = arg min φ∈Φ,W∈RT×k
1
2Tn1
T∑
t=1
n1∑
i=1
`(yt,i, 〈wt, φ(xt,i)〉), (1)
where ` : Y× Y → R+ is a loss function. Once such a representation is learned, we want to apply it to a new previously unseen target task observed through a pair (XT+1 ∈ Rn2×d, yT+1 ∈ Rn2) containing n2 samples generated by the distribution µT+1. We expect that a linear classifier w learned on top of the obtained representation leads to a low true risk over the whole distribution µT+1. More precisely, we first use φ̂ to solve the following problem:
ŵT+1 = arg min w∈Rk
1
n2
n2∑
i=1
`(yT+1,i, 〈w, φ̂(xT+1,i)〉).
Then, we define the true target risk of the learned linear classifier ŵT+1 as:
L(φ̂, ŵT+1) = E(x,y)∼µT+1 [`(y, 〈ŵT+1, φ̂(x)〉)] and want it to be small and as close as possible to the ideal true risk L(φ∗,w∗T+1) where
∀t ∈ [[T + 1]] and (x, y) ∼ µt, y = 〈w∗t , φ∗(x)〉+ ε, ε ∼ N (0, σ2). (2)
Equivalently, most of the works found in the literature seek to upper-bound the excess risk defined as ER(φ̂, ŵT+1) := L(φ̂, ŵT+1)− L(φ∗,w∗T+1) with quantities involved in the learning process.
Remark 1 We note that many popular meta-learning algorithms used for FSL do not follow exactly the approach described above. However, we believe that the exact way of how this is done algorithmically (with or without the support set, with or without learning episodes) does not change the statistical challenge of it which is to learn a model that can provably generalize with little supervision. Supervised learning theory tells us that generalization in this case is poor (not enough target data and it is difficult to rely on data coming from different probability distributions), while the theoretical works we built upon suggest that source data may contribute in improving the generalization of the learned model alongside the target data if the assumptions described below are satisfied.
3 FROM THEORY TO PRACTICE
In this section, we highlight main theoretical contributions that provably ensure the success of metalearning in improving the performance on the previously unseen target task with the increasing number of source tasks and the amount of data available for them. We then concentrate our attention on the most recent theoretical advances leading to the fastest learning rates and show how the assumptions used to obtain them can be forced in practice through a novel regularization strategy.
3.1 WHEN DOES META-LEARNING PROVABLY WORK?
One requirement for meta-learning to succeed in FSC is that a representation learned on meta-train data should be useful for learning a good predictor on the meta-test data set. This is reflected by bounding the excess target risk by a quantity that involves the number of samples in both meta-train and meta-test samples and the number of available meta-train tasks.
To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. This assumption, however, is considered unrealistic as in FSL source and target tasks’ data are often given by different draws (without replacement) from the same dataset. In this setup, the above-mentioned works obtained the bounds having the following form:
ER(φ̂, ŵT+1) ≤ O (
1√ n1 + 1√ T
) .
This guarantee implies that not only the number of source data, but also the number of tasks should be large in order to draw the second term to 0. An improvement was then proposed by Du et al. (2020) and Tripuraneni et al. (2020) that obtained the bounds on the excess risk behaving as
O ( kd√ n1T + k√ n2 ) and Õ ( kd n1T + k n2 ) ,
respectively, where k d is the dimensionality of the learned representation and Õ(·) hides logarithmic factors. Both these results show that all the source and target samples are useful in minimizing the excess risk: in the FSL regime where target data is scarce, all source data helps to learn well. From a set of assumptions made by the authors in both of these works , we note the following two:
Assumption 1. The matrix of optimal predictors W∗ should cover all the directions in Rk evenly. More formally, this can be stated as
Rσ(W ∗) =
σ1(W ∗) σk(W∗) = O(1), (3)
where σi(·) denotes the ith singular value of W∗. As pointed out by the authors, such an assumption can be seen as a measure of diversity between the source tasks that are expected to be complementary to each other in order to provide a meaningful representation for a previously unseen target task.
Assumption 2. The norm of the optimal predictors w∗ should not increase with the number of tasks seen during meta-training2. This assumption says that the classification margin of linear predictors should remain constant thus avoiding over- or under-specialization to the seen tasks.
While being highly insightful, the authors did not provide any experimental evidence suggesting that verifying these assumptions in practice helps to learn more efficiently in the considered learning setting. To bridge this gap, we propose to use a general regularization scheme that allows us to enforce these assumptions when learning the matrix of predictors in several popular meta-learning algorithms.
3.2 PUTTING THEORY TO WORK
As the assumptions mentioned above are stated for the optimal predictors that are inherently linked to the data generating process, one may wonder what happens when these latter do not satisfy them. To this end, we aim to answer the following question:
Given W∗ such that Rσ(W∗) 1, can we learn Ŵ with Rσ(Ŵ) ≈ 1 while solving the underlying classification problems equally well?
It turns out that we can construct an example illustrated in Fig. 1 for which the answer to this question is positive. To this end, let us consider a binary classification problem over X ⊆ R3 with labels Y = {−1, 1} and two source tasks generated for k, ε ∈ ]0, 1], as follows:
1. µ1 is uniform over {1− kε, k, 1} × {1} ∪ {1 + kε, k,−1} × {−1}; 2. µ2 is uniform over {1 + kε, k, k−1ε } × {1} ∪ {−1 + kε, k, 1+kε } × {−1}.
We now define the optimal representation and two optimal predictors for each distribution as the solution to Eq. 1 over the two data generating distributions and Φ = {φ| φ(x) = ΦTx, Φ ∈ R3×2}:
φ∗,W∗ = arg min φ∈Φ,W∈R2×2
2∑
i=1
E (x,y)∼µi `(y, 〈wi, φ(x)〉), (4)
One solution to this problem can be given as follows:
Φ∗ = ( 1 0 0 0 1 0 )T , W∗ = ( 1 ε 1 −ε ) ,
where φ∗ projects the data generated by µi to a two-dimensional space by discarding its third dimension and the linear predictors satisfy the data generating process from Eq. 2 with ε = 0. One can verify that in this case W∗ have singular values equal to √ 2 and √ 2ε, so that the ratioRσ(W∗) = 1ε : when ε→ 0, the optimal predictors make the ratio arbitrary large thus violating Assumption 1.
2While not stated as a separate assumption, in Du et al. (2020) assume it to derive the Assumption 1 mentioned above. See p.5 and the discussion after Assumption 4.3 in their pre-print.
Let us now consider a different problem where we want to solve Eq. 4 with a constraint that forces linear predictors to satisfy Assumption 1:
φ̂,Ŵ = arg min φ∈Φ,W∈R2×2
2∑
i=1
E (x,y)∼µi `(y, 〈wi, φ(x)〉), s.t. Rσ(W) ≈ 1. (5)
Its solution is different and is given by
Φ̂ = ( 0 1 0 0 0 1 )T , Ŵ = ( 0 1 1 −ε ) .
Similarly to Φ∗, Φ̂ projects to a two-dimensional space by discarding the first dimension of the data generated by µi. The learned predictors in this case also satisfy Eq. 2 with ε = 0, but contrary to
W∗, Rσ(Ŵ) = √ 2+ε2+ε √ ε2+4
2+ε2−ε √ ε2+4
tends to 1 when ε→ 0.
Several remarks are in order here. First, it shows that even when W∗ does not satisfy Assumption 1 in the space induced by φ∗, it may still be possible to learn a new representation space φ̂ such that the optimal predictors in this space will satisfy Assumption 1. This can be done either by considering the constrained problem from Eq. 5, or by using a more common strategy that consists in adding Rσ(W) directly as a regularization term
φ̂,Ŵ = arg min φ∈Φ,W∈RT×k
1
2Tn1
T∑
t=1
n1∑
i=1
`(yt,i, 〈wt, φ(xt,i)〉) + λ1Rσ(W). (6)
Below, we explain how to implement this idea in practice for popular meta-learning algorithms.
Ensuring assumption 1. We propose to compute singular values of W during the meta-training stage and follow its evolution during the learning episodes. In practice, this can be done by performing the Singular Value Decomposition (SVD) on W ∈ RT×k with a computational cost of O(Tk2) floating-point operations (flop). However, as T is typically quite large, we propose a more computationally efficient solution that is to take into account only the last batch of N predictors (with N T ) grouped in the matrix WN ∈ RN×k that capture the latest dynamics in the learning process. We further note that σi(WNW>N ) = σ 2 i (WN ), ∀i ∈ [[N ]] implying that we can calculate the SVD of WNW>N (or W > NWN for k ≤ N ) and retrieve the singular values from it afterwards.
We now want to verify whether the optimal linear predictors wt cover all directions in the embedding space by tracking the evolution of the ratio of singular values Rσ(WN ) during the training process. For the sake of conciseness, we use Rσ instead of Rσ(WN ) thereafter. According to the theory, we expect Rσ to decrease during training thus improving the generalization of the learned predictors and preparing them for the target task. When we want to enforce such a behavior in practice, we propose to use Rσ as a regularization term in the training loss of popular meta-learning algorithms.
Alternatively, as the smallest singular value σN (WN ) can be close to 0 and lead to numerical errors, we propose to replace the ratio of the vector of singular values by its entropy as follows:
Hσ(WN ) = − N∑
i=1
softmax(σ(WN ))i · log softmax(σ(WN ))i,
where softmax(·)i is the ith output of the softmax function. As with Rσ , we write Hσ instead of Hσ(WN ) from now on. Since uniform distribution has the highest entropy, regularizing with Rσ or −Hσ leads to a better coverage of Rk by ensuring a nearly identical importance regardless of the direction. We refer the reader to the Supplementary materials for the derivations ensuring the existence of the subgradients for these terms.
Ensuring assumption 2. In addition to the full coverage of the embedding space by the linear predictors, the meta-learning theory assumes that the norm of the linear predictors does not increase with the number of tasks seen during meta-training, i.e., ‖w‖2 = O(1) or, equivalently, ‖W‖2F = O(T ). If this assumption does not hold in practice, we propose to regularize the norm of linear predictors during training or directly normalize the obtained linear predictors w̄ = w‖w‖2 .
The final meta-training loss with the theory-inspired regularization terms is given as:
min φ∈Φ,W∈RT×k
1
2Tn1
T∑
t=1
n1∑
i=1
`(yt,i, 〈wt, φ(xt,i)〉) + λ1Rσ(WN ) + λ2‖WN‖2F , (7)
and depending on the considered algorithm, we can replace Rσ by −Hσ and/or replace wt by w̄t instead of regularizing with ‖WN‖2F . In what follows, we consider λ1 = λ2 = 1 and we refer the reader to the Supplementary materials for more details and experiments with other values.
To the best of our knowledge, such regularization terms based on insights from the advances in metalearning theory have never been used in the literature before. We also further use the basic quantities involved in the proposed regularization terms as indicators of whether a given meta-learning algorithm naturally satisfies the assumptions ensuring an efficient meta-learning in practice or not.
3.3 RELATED WORK
Below, we discuss several related studies aiming at improving the general understanding of metalearning, and mention other regularization terms specifically designed for meta-learning.
Understanding meta-learning While a complete theory for meta-learning is still lacking, several recent works aimed to shed light on phenomena commonly observed in meta-learning by evaluating different intuitive heuristics. For instance, Raghu et al. (2020) investigated whether the popular gradient-based MAML algorithm relies on rapid learning with significant changes in the representations when deployed on target task, or due to feature reuse where the learned representation remains almost intact. They establish that the latter factor is dominant and propose a new variation of MAML that freezes all but task-specific layers of the neural network when learning new tasks. In another study (Goldblum et al., 2020) the authors explain the success of meta-learning approaches by their capability to either cluster classes more tightly in feature space (task-specific adaptation approach), or to search for meta-parameters that lie close in weight space to many task-specific minima (full fine-tuning approach). Finally, the effect of the number of shots on the classification accuracy was studied theoretically and illustrated empirically in Cao et al. (2020) for the popular metric-based PROTONET algorithm. Our paper is complementary to all other works mentioned above as it investigates a new aspect of meta-learning that has never been studied before, while following a sound theory. Also, we provide a more complete experimental evaluation as the three different approaches of meta-learning (based on gradient, metric or transfer learning), separately presented in Raghu et al. (2020), Cao et al. (2020) and Goldblum et al. (2020), are now compared together.
Other regularization strategies Regularization is a common tool to reduce model complexity during learning for better generalization, and the variations of its two most famous instances given by weight decay (Krogh & Hertz, 1992) and dropout (Srivastava et al., 2014) are commonly used as a basis in meta-learning literature as well. In general, regularization in meta-learning is applied to the weights of the whole neural network (Balaji et al., 2018; Yin et al., 2020), the predictions (Jamal & Qi, 2019; Goldblum et al., 2020) or is introduced via a prior hypothesis biased regularized empirical risk minimization (Pentina & Lampert, 2014; Kuzborskij & Orabona, 2017; Denevi et al., 2018a;b; 2019). Our proposal is different from all the approaches mentioned above for the following reasons. First, we do not regularize the whole weight matrix learned by the neural network but the linear predictors of its last layer contrary to what was done in the methods of the first group, and, more specifically, the famous weight decay approach (Krogh & Hertz, 1992). The purpose of the regularization in our case is also completely different: weight decay is used to improve generalization through sparsity in order to avoid overfitting, while our goal is to keep the classification margin unchanged during the training to avoid over-/under-specialization to some source tasks. Similarly, spectral normalization proposed by Miyato et al. (2018) to satisfy the Lipschitz constraint in GANs through dividing W values by σmax(W) does not affect the ratio between σmax(W) and σmin(W) and serves a completely different purpose. Second, we regularize the singular values (entropy or ratio) of the matrix of linear predictors instead of the predictions, as done by the methods of the second group (e.g., using the theoretic-information quantities in Jamal & Qi (2019) and Yin et al. (2020)). Finally, the works of the last group are related to the online setting with convex loss functions only, and, similarly to the algorithms from the second group, do not specifically target the spectral properties of the learned predictors. Last, but not least, our proposal is built upon the most recent advances in the meta-learning field leading to faster learning rates contrary to previous works.
4 PRACTICAL RESULTS
In this section, we use extensive experimental evaluations to answer the following two questions:
Q1) Do popular meta-learning methods naturally satisfy the learning bounds assumptions? Q2) Does ensuring these assumptions help to (meta-)learn more efficiently?
For Q1, we run the original implementations of popular meta-learning methods to see what is their natural behavior. For Q2, we study the impact of forcing them to closely follow the theoretical setup.
4.1 EXPERIMENTAL SETUP
Datasets & Baselines We consider few-shot image classification problem on three benchmark datasets, namely: 1) Omniglot (Lake et al., 2015) consisting of 1,623 classes with 20 images/class of size 28×28; 2) miniImageNet (Ravi & Larochelle, 2017) consisting of 100 classes with 600 images of size 84 × 84 per class and 3) tieredImageNet (Ren et al., 2018) consisting of 779,165 images divided into 608 classes. For each dataset, we follow the commonly adopted experimental protocol used in Finn et al. (2017) and Chen et al. (2019) and use a four-layer convolution backbone (Conv4) with 64 filters as done by Chen et al. (2019). On Omniglot, we perform 20-way classification with 1 shot and 5 shots, while on miniImageNet and tieredImageNet we perform 5-way classification with 1 shot and 5 shots. Finally, we evaluate four FSL methods: two popular meta-learning strategies, namely, MAML (Finn et al., 2017), a gradient-based method, and Prototypical Networks (PROTONET) (Snell et al., 2017), a metric-based approach; two popular transfer learning baselines, termed as BASELINE and BASELINE++ (Ravi & Larochelle, 2017; Gidaris & Komodakis, 2018; Chen et al., 2019). Even though these baselines are trained with the standard supervised learning framework, such a training can also be seen as learning a single task in the LTL framework.
Implementation details Enforcing Assumptions 1 and 2 for MAML is straightforward as it closely follows the LTL framework of episodic training. For each task, the model learns a batch of linear predictors and we can directly take them as WN to compute its SVD. Since the linear predictors are the weights of our model and change slowly, regularizing the norm ‖WN‖F and the ratio of singular values Rσ does not cause instabilities during training. Meanwhile, metric-based methods do not use linear predictors but compute a similarity between features. In the case of PROTONET, the similarity is computed with respect to class prototypes (i.e. the mean features of the images of each class). Since they act as linear predictors, a first idea would be to regularize the norm and ratio of singular values of the prototypes. Unfortunately, this latter strategy hinders the convergence of the network and leads to numerical instabilities. Most likely because prototypes are computed from image features which suffer from rapid changes across batches. Consequently, we regularize the entropy of singular values Hσ instead of the ratio Rσ to avoid instabilities during training to ensure Assumption 1 and we normalize the prototypes to ensure Assumption 2 by replacing wt with w̄t in Eq. 7. For transfer learning methods BASELINE and BASELINE++, the last layer of the network is discarded and linear predictors are learned during meta-testing. Thus, we only regularize the norm ‖WN‖F of predictors learned during the finetuning phase of meta-testing. Similarly to MAML, we compute Rσ with the last layer of the network during training and fine-tuning phase.
Remark 2 We choose well-established meta-learning algorithms for our comparison, but the proposed regularization can be integrated similarly into their recent variations (Park & Oliva, 2019; Lee et al., 2019) (see Supplementary materials for results obtained with the method of Park & Oliva (2019)). Finally, using models that do not rely on linear predictors is also possible but might be more difficult as it would require upstream work to understand which part of the model acts as predictors (as done for PROTONET in this paper) and how to compute and track the desired quantities.
4.2 INSIGHTS
Q1 – Verifying the assumptions According to theory, ‖WN‖F and Rσ should remain constant or converge toward a constant value when monitoring the last N tasks. From Fig. 2(a), we can see that for MAML (Fig. 2(a) top), both ‖WN‖F and Rσ increase with the number of tasks seen during training, whereas PROTONET (Fig. 2(a) bottom) naturally learns the prototypes with a good coverage of the embedding space, and minimizes their norm. This behavior is rather peculiar as neither
of the two methods specifically controls the theoretical quantities of interest, and still, PROTONET manages to do so implicitly. As for the transfer learning baselines (Fig. 2(b), top and bottom), we expect them to learn features that cover the embedding space, with Rσ rapidly converging towards a constant value. As can be seen in Fig. 2(b), similarly to PROTONET, BASELINE++ naturally learns linear predictors that cover the embedding space. As for BASELINE, it learns a good coverage for the Omniglot dataset but fails to do so for the more complicated tieredImageNet dataset. The observed behavior of these different methods leads to the conclusion that some meta-learning algorithms are inherently more explorative of the embedding space.
Q2 – Ensuring the assumptions Armed with our regularization terms, we now aim to force the considered algorithms to verify the assumptions when it is not naturally done. In particular, for MAML we regularize both ‖WN‖F and Rσ in order to keep them constant throughout the training. Similarly, we regularize Rσ during the training of BASELINE and BASELINE++, and both ‖WN‖F and Rσ during the finetuning phase of meta-testing. For PROTONET, we enforce a normalization of
the prototypes. According to our results for Q1, regularizing the singular values of the prototypes through the entropy Hσ is not necessary.3 Based on the obtained results, we can make the following conclusions. First, from Fig. 2(a) (left, middle) and Fig. 2(b) (left), we note that for all methods considered, our proposed methodology used to enforce the theoretical assumptions works as expected, and leads to a desired behavior during the learning process. This means that the differences in terms of results presented in Table 1 are explained fully by this particular addition to the optimized objective function. Second, from the shape of the accuracy curves provided in Fig. 2(a) (right) and the accuracy gaps when enforcing the assumptions given in Table 1, we can see that respecting the assumptions leads to several significant improvements related to different aspects of learning. On the one hand, we observe that the final validation accuracy improves significantly in all benchmarks for meta-learning methods and in most of experiments for BASELINE (except for Omniglot, where BASELINE already learns to regularize its linear predictors). In accordance with the theory, we attribute the improvements to the fact that we fully utilize the training data which leads to a tighter bound on the excess target risk and, consequently, to a better generalization performance. On the other hand, we also note that our regularization reduces the sample complexity of learning the target task, as indicated by the faster increase of the validation accuracy from the very beginning of the meta-training. Roughly speaking, less meta-training data is necessary to achieve a performance comparable to that obtained without the proposed regularization using more tasks. Finally, we note that BASELINE++ and PROTONET methods naturally satisfy some assumptions: both learn diverse linear predictors by design, while BASELINE++ also normalizes the weights of its linear predictors. Thus, these methods do not benefit from additional regularization as explained before.
5 CONCLUSION
In this paper, we studied the validity of the theoretical assumptions made in recent papers when applied to popular meta-learning algorithms and proposed practical ways of enforcing them.
On the one hand, we showed that depending on the problem and algorithm, some models can naturally fulfill the theoretical conditions during training, and some algorithms offer a better covering of the embedding space than others. On the other hand, when the conditions are not verified, learning with our proposed regularization terms leads to faster learning and improves the generalization capabilities of meta-learning methods. The theoretical framework studied in this paper explains the observed performance gain. Notice that no specific hyperparameter tuning was performed, since we aim at showing the effect of ensuring the learning bounds assumptions rather than comparing the performance of the methods. Absolute accuracy results are detailed in the Supplementary materials.
While this paper proposes an initial approach to bridging the gap between theory and practice in meta-learning, some questions remain open on the inner workings of these algorithms. In particular, being able to take better advantage of the particularities of the training tasks during meta-training could help improve the effectiveness of these approaches. Self-supervised meta-learning and multiple target tasks prediction are also important future perspectives for the application of meta-learning.
3The effect of entropic regularization on PROTONET is detailed in the Supplementary materials. | 1. What are the strengths and weaknesses of the paper regarding its motivation, organization, experimental setting, and results?
2. How does the reviewer assess the novelty of the second regularization term in the paper?
3. Does the reviewer have any concerns about the applicability of the proposed regularizations when applied to more complex models?
4. Are there any limitations in comparing the proposed method with recent methods?
5. Are there any issues with the calculation of subgradients in the paper, particularly regarding the use of auto-differentiation tools? | Review | Review
Summary: In this paper, the authors aim at bridging the gap between practice and theory in meta-learning approaches. Specifically, they propose two regularization terms to 1) capture the diversity of the tasks and 2) control the norm of the prediction layer, thereby satisfying the assumptions in meta-learning theory.
Strength:
The motivation of this paper is interesting: the theoretical assumptions behind meta-learning have not received enough attention before this methodology was proposed.
The paper is well-organized and clearly written.
The experimental setting is designed in a good manner and the results are promising.
Weakness:
I am skeptical of the novelty of the second regularizer in Eq. (4). According to Section 3.2, it is equivalent to ||w||_{2}=O(1). So how does it differ from simple l2 weight decay?
According to Section 2, the outer-level parameters are restricted to a linear layer. Does this mean the proposed regularizers would become trivial when applied on top of a more complicated model, e.g., LEO [1]?
Too few competitors. It would be better to add some comparisons with recent methods.
The details of how to calculate the subgradients of the singular values, which is quite complicated, are missing, especially seeing that there is no guarantee that an auto-differentiation tool will do this correctly.
Ref: [1] Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell: Meta-Learning with Latent Embedding Optimization. ICLR 2019
Above all, since the contribution and the technical details of calculating the subgradients are not clear to me, I currently have to recommend a weak reject.
ICLR | Title
Localized random projections challenge benchmarks for bio-plausible deep learning
Abstract
Similar to models of brain-like computation, artificial deep neural networks rely on distributed coding, parallel processing and plastic synaptic weights. Training deep neural networks with the error-backpropagation algorithm, however, is considered bio-implausible. An appealing alternative to training deep neural networks is to use one or a few hidden layers with fixed random weights or trained with an unsupervised, local learning rule and train a single readout layer with a supervised, local learning rule. We find that a network of leaky integrate-and-fire neurons with fixed random, localized receptive fields in the hidden layer and spike timing dependent plasticity to train the readout layer achieves 98.1% test accuracy on MNIST, which is close to the optimal result achievable with error-backpropagation in non-convolutional networks of rate neurons with one hidden layer. To support the design choices of the spiking network, we systematically compare the classification performance of rate networks with a single hidden layer, where the weights of this layer are either random and fixed, trained with unsupervised Principal Component Analysis or Sparse Coding, or trained with the backpropagation algorithm. This comparison revealed, first, that unsupervised learning does not lead to better performance than fixed random projections for large hidden layers on digit classification (MNIST) and object recognition (CIFAR10); second, networks with random projections and localized receptive fields perform significantly better than networks with all-to-all connectivity and almost reach the performance of networks trained with the backpropagation algorithm. The performance of these simple random projection networks is comparable to most current models of bio-plausible deep learning and thus provides an interesting benchmark for future approaches.
1 INTRODUCTION
While learning a new task, synapses deep in the brain undergo task-relevant changes (Hayashi-Takagi et al., 2015). These synapses are often many neurons downstream of sensors and many neurons upstream of actuators. Since the rules that govern such changes deep in the brain are poorly understood, it is appealing to draw inspiration from deep artificial neural networks (DNNs) (LeCun et al., 2015). DNNs and the cerebral cortex process information in multiple layers of many neurons (Yamins & DiCarlo, 2016; Kriegeskorte, 2015) and in both the artificial and the biological neural networks, learning depends on changes of synaptic strengths (Hebbian theory, Hebb (1949)). However, learning rules in the brain are most likely different from the backpropagation algorithm (Crick, 1989; Marblestone et al., 2016; Rumelhart et al., 1986). Furthermore, biological neurons communicate by sending discrete spikes as opposed to real-valued numbers used in DNNs. Differences like these suggest that there exist other, possibly equally powerful, algorithms that are capable of solving the same tasks by using different, more biologically plausible mechanisms. Thus, an important question in computational neuroscience is how to explain the fascinating learning capabilities of the brain with bio-plausible network architectures and learning rules. On the other hand, from a pure machine learning perspective there is increasing interest in neuron-like architectures with local learning rules, mainly motivated by the current advances in neuromorphic hardware (Nawrocki et al., 2016).
Image recognition is a popular task to test the proposed models. Because of its relative simplicity and popularity, the MNIST dataset (28×28-pixel grey level images of handwritten digits, LeCun
(1998)) is often used for benchmarking. Typical performances of existing models are around 97-99% classification accuracy on the MNIST test set (see section 2 and Table 8). This value lies in the region of the benchmarks for a large class of classical DNNs trained with backpropagation but without data-augmentation or convolutional layers (see table in LeCun (1998)). Thus, accuracies around this value are assumed to be an empirical signature of backpropagation-like deep learning (Lillicrap et al., 2016; Sacramento et al., 2017). It is noteworthy, however, that several of the most promising approaches that perform well on MNIST have been found to fail on harder tasks (Bartunov et al., 2018).
An alternative to supervised training of all layers with backpropagation are fixed random weights, as proposed by general approximation theory (Barron, 1993) and the extreme learning field (Huang et al., 2006), or unsupervised training in the first layers, combined with supervised training of a readout layer. Unsupervised methods are appealing since they can be implemented with local learning rules, see e.g. “Oja’s rule” (Oja, 1982; Sanger, 1989) for principal component analysis or algorithms in Olshausen & Field (1997); Rozell et al. (2008); Liu & Jia (2012); Brito & Gerstner (2016) for sparse coding. A single readout layer can also be implemented with a local delta-rule (also called “perceptron rule”), which may be implemented by pyramidal spiking neurons with dendritic prediction of somatic spiking (Urbanczik & Senn, 2014). Since it is pointless to simply stack multiple fully connected layers trained with principal component analysis or sparse coding (Olshausen & Field, 1997) we investigate here networks with a single hidden layer.
The main objective of this study was to see how far we can go with a single hidden layer and local learning rules in networks of spiking neurons. To support the design choices of the spiking model, we compared the classification performance of different rate networks: networks trained with backpropagation, networks where the hidden layer is trained with unsupervised methods, and networks with fixed random projections in the hidden layer. Since sparse connectivity is sometimes superior to dense connectivity (Litwin-Kumar et al., 2017; Bartunov et al., 2018) and successful convolutional networks leverage local receptive fields, we investigated also sparse connectivity between input and hidden layer, where each hidden neuron receives input only from a few neighboring pixels of the input image.
2 RELATED WORK
In recent years, many bio-plausible approaches to deep learning have been proposed (see e.g. Marblestone et al. (2016) for a review). For achieving performances similar to deep learning methods, existing approaches usually use either involved architectures or elaborate mechanisms to approximate the backpropagation algorithm. Examples include the use of convolutional layers (Tavanaei & Maida (2016); Lee et al. (2018); Kheradpisheh et al. (2018) and table therein), dendritic computations (Hussain et al., 2014; Guergiuev et al., 2016; Sacramento et al., 2017) or approximations of the backpropagation algorithm such as feedback alignment (Lillicrap et al., 2016; Baldi et al., 2016; Nøkland, 2016; Samadi et al., 2017; Kohan et al., 2018; Bartunov et al., 2018), equilibrium propagation (Scellier & Bengio, 2017), membrane potential based backpropagation (Lee et al., 2016), restricted Boltzmann machines and deep belief networks (O’Connor et al., 2013; Neftci et al., 2014), (localized) difference target propagation (Lee et al., 2015; Bartunov et al., 2018), reinforcement-signal models like AuGMEnT (Rombouts et al., 2015) or approaches using predictive coding (Whittington & Bogacz, 2017). Many models implement spiking neurons to stress bio-plausibility (Liu et al. (2016); Neftci et al. (2017); Kulkarni & Rajendran (2018); Wu et al. (2018); Liu & Yue (2018) and table therein) or for coding efficiency (O’Connor et al., 2017). The conversion of DNNs to spiking neural networks (SNN) after training with backpropagation (Diehl et al., 2015) is a common technique to evade the difficulties of training with spikes. Furthermore, there are models including recurrent activity (Spoerer et al., 2017; Bellec et al., 2018) or even starting directly from realistic circuits (Delahunt & Kutz, 2018). We refer to Table 8 for a list of current bio-plausible MNIST benchmark models.
3 RESULTS
We study networks that consist of an input (l0), one hidden (l1) and an output-layer (l2) connected by weight matrices W1 and W2 (Figure 1). Training the hidden layer weights W1 with standard
supervised training involves (non-local) error backpropagation using the transposed weight matrix $W_2^\top$ (Figure 1A). In the bio-plausible network considered in this paper (Figure 1B), the input-to-hidden weights W1 are either learned with an unsupervised method (Principal Component Analysis or Sparse Coding) or are fixed random projections. The unsupervised methods assume recurrent inhibitory weights V1 between hidden units to implement competition.
3.1 SPIKING LOCALIZED RANDOM PROJECTIONS
We first present the results with networks of leaky integrate-and-fire (LIF) neurons. The network architecture is as in Figure 1B, but without the recurrent connections V1. For implementing localized Random Projections (l-RP) in the hidden layer weights W1, we first chose the centers of the localized receptive fields at random positions in the input space and then randomly chose the weights therein, see Figure 1C. The receptive field patches span p×p pixels around their center position (we used p = 10 for the 28×28-pixel MNIST data). The output layer weights W2 are trained with a supervised spike timing dependent plasticity (STDP) rule.
3.1.1 LIF AND STDP DYNAMICS
The spiking dynamics follow the usual LIF equations (see methods A.4) and the readout weights W2 evolve according to a supervised STDP delta rule using post-synaptic spike-traces $\mathrm{tr}_i(t)$ and a post-synaptic target trace $\mathrm{tgt}_i(t)$:
$$\tau_{tr} \frac{d\,\mathrm{tr}_i(t)}{dt} = -\mathrm{tr}_i(t) + \sum_f \delta\!\left(t - t_i^f\right), \qquad \Delta w_{2,ij} = \alpha \cdot \left(\mathrm{tgt}_i^{\mathrm{post}}(t) - \mathrm{tr}_i^{\mathrm{post}}(t)\right) \delta\!\left(t - t_j^f\right). \tag{1}$$
Thus, for a specific readout weight $w_{2,ij}$, the post-synaptic trace is updated at every post-synaptic spike time $t_i^f$ and the weight is updated at every pre-synaptic spike time $t_j^f$. The target trace is used for feeding in the one-hot coded, supervisory signal for the MNIST classification into the output layer (l2).
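For illustration, a minimal discrete-time Python sketch of Equation 1 is given below; the Euler discretization, the time-binned treatment of the Dirac deltas and all names are simplifying assumptions of ours.

```python
import numpy as np

def stdp_step(tr, W2, tgt, post_spikes, pre_spikes, dt, tau_tr, alpha):
    """One Euler step of the supervised STDP rule of Eq. (1).
    tr: post-synaptic traces (n_out,); post_spikes/pre_spikes: 0/1 vectors."""
    # exponential decay of the traces, plus a jump of 1/tau_tr per post spike
    tr = tr + dt * (-tr / tau_tr) + post_spikes / tau_tr
    # weight change at pre-synaptic spikes: dW[i, j] = alpha*(tgt_i - tr_i)*spike_j
    W2 = W2 + alpha * np.outer(tgt - tr, pre_spikes)
    return tr, W2
```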
For a proof-of-principle and efficient parameter search we first investigate an LIF rate model. This rate model mimics the LIF dynamics by using the LIF activation function ϕLIF as nonlinearity,
$$\mathrm{rate}(u) = \varphi_{LIF}(u) = \left[\Delta_{\mathrm{abs}} - \tau_m \ln\!\left(1 - \frac{\vartheta}{u}\right)\right]^{-1}, \tag{2}$$
where u is the membrane potential, ∆abs the refractory period, τm the membrane time constant and ϑ the firing threshold of the LIF model. Furthermore, it employs the rate-version of the STDP delta rule Equation 1 (see methods section A.4 for details)
$$\Delta w_{ij} = \tilde{\alpha} \cdot \mathrm{rate}_j^{\mathrm{pre}} \cdot \left(\mathrm{tgtrate}_i^{\mathrm{post}} - \mathrm{rate}_i^{\mathrm{post}}\right), \tag{3}$$
where $\mathrm{tgtrate}_i^{\mathrm{post}}$ is the post-synaptic target rate, corresponding to the post-synaptic target trace $\mathrm{tgt}_i(t)$ in Equation 1. We obtained similar spiking and weight dynamics when the readout weights W2 were either directly trained with STDP or trained with the LIF rate model and then plugged into the spiking LIF network (as done in e.g. Diehl et al. (2015)).
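A small Python sketch of this rate model, assuming a scalar threshold and hypothetical function names, could look as follows.

```python
import numpy as np

def phi_lif(u, theta, tau_m, delta_abs):
    """LIF activation function of Eq. (2); sub-threshold inputs give zero rate."""
    rate = np.zeros_like(u)
    supra = u > theta
    rate[supra] = 1.0 / (delta_abs - tau_m * np.log(1.0 - theta / u[supra]))
    return rate

def rate_delta_update(W2, rate_pre, rate_post, tgt_rate, alpha_tilde):
    """Rate-based delta rule of Eq. (3) for the readout weights."""
    return W2 + alpha_tilde * np.outer(tgt_rate - rate_post, rate_pre)
```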
To illustrate the LIF and STDP dynamics, a toy example consisting of one pre-synaptic neuron connected to one post-synaptic neuron was integrated for 650 ms. The pre- and post-synaptic membrane potentials show periodic spiking (Figure 2A) which induces post-synaptic spike traces and corresponding weight changes (Figure 2B), according to Equation 1. For the MNIST task, Figure 2C shows a raster plot for an exemplary training and testing protocol. During activity transients after pattern switches, learning is disabled until regular spiking is recovered. This is done, first, to ensure stability during activity transients (see Naud et al. (2008) and references therein) and second, to achieve decorrelation between the activities of subsequent patterns, as needed for stochastic gradient descent (SGD). During the testing period, learning is shut off permanently (see methods section A.4 for more details).
3.1.2 CLASSIFICATION RESULTS FOR LIF l-RP
When directly trained with the STDP rule in Equation 1 the spiking LIF l-RP model (nh = 5000 hidden units and patch size p = 10) reaches 98.1% test accuracy on MNIST. The corresponding LIF
rate model reaches 98.5% test accuracy. Transferring weights learned with the LIF rate model into the spiking LIF model resulted in similar accuracies as the LIF rate model. Table 1 compares the performances of the rate and spiking LIF l-RP models with the reference algorithm l-BP, which is a rate model trained with backpropagation, see subsection 3.2 and subsection 3.3 (for same hidden layer size nh and patch size p). We can see that the spiking LIF model almost reaches the performance of the corresponding rate model. The remaining gap (0.4%) between the rate and the spiking LIF model presumably stems from transients and the shorter training time of the spiking model (only $10^6$ compared to $10^7$ iterations due to long simulation times). Both the rate and the spiking LIF model of l-RP achieve accuracies close to the backpropagation reference algorithm l-BP and certainly lie in the range of current bio-plausible MNIST benchmarks, i.e. 97-99% test accuracy (see section 2 and Table 8). Based on these numbers we conclude that the spiking LIF model of localized random projections using STDP is capable of learning the MNIST task to a level that is competitive with known benchmarks for spiking networks.
3.2 BENCHMARKING RATE MODELS TRAINED WITH UNSUPERVISED LEARNING AND BACKPROPAGATION
To justify the design choices of the spiking model, we systematically investigated rate models with different methods to initialize or learn the hidden layer weights W1 (see Figure 1 and methods subsection A.1 for details). To set these hidden layer weights, we use either one of the unsupervised methods Principal Component Analysis (PCA) or Sparse Coding (SC), or train only the readout layer W2 and use fixed Random Projections (RP, as in subsection 3.1) for the hidden layer weights W1 (see Figure 1B). All these methods can be implemented with local, bio-plausible learning rules (Oja, 1982; Olshausen & Field, 1997). As a reference and upper performance bound, we train networks with the same architecture with standard backpropagation (BP, see Figure 1A). As a more bio-plausible approximation of BP, we include Feedback Alignment (FA, Lillicrap et al. (2016)) which uses fixed random feedback weights for error-backpropagation (see methods subsection A.3 for further explanation). A Simple Perceptron (SP) without a hidden layer serves as a simplistic reference, since it corresponds to direct classification of the input.
The hidden-to-output weights W2 are trained with standard stochastic gradient descent (SGD), using a one-hot representation of the class label as target. Since no error-backpropagation is needed for a single layer, the learning rule is local (“delta” or “perceptron”-rule, similar to Equation 3 of the LIF rate model). Therefore the system as a whole is bio-plausible in terms of online learning and synaptic updates using only local variables. For computational efficiency, we train first the hidden layer and then the output layer, however, both layers could be trained simultaneously.
We compared the test errors on the MNIST digit recognition data set for varying numbers of hidden neurons nh (Figure 3). The green PCA curve in Figure 3 ends at the vertical line nh = d = 784 because the number of principal components (PCs), i.e. the number of hidden units nh, is limited by the input dimension d. Since the PCs span the subspace of highest variance, classification performance quickly improves when adding more PCs for small nh and then saturates for larger nh, crossing the (dotted) Simple Perceptron line at nh = 25 PC hidden neurons. This intersection and other measures of effective dimensionality (see methods subsection A.1) suggest that the MNIST dataset lies mostly in a low-dimensional linear subspace with $d_{\mathrm{eff}} \approx 25 \ll d$. SC performance (red curve) starts at a higher test error but improves as quickly with nh as PCA. With overcomplete representations (nh > d), the network achieves a remarkable classification performance of around 96% test accuracy. This suggests that the sparse representation and the features extracted by SC are indeed useful for classification, especially in the overcomplete case.
The performance of RP (blue curve) for small numbers of hidden units (nh < d) is worse than for feature extractors like PCA and SC. Also for large hidden layers, performance improves only slowly with nh, which is in line with theory (Barron, 1993) and findings in the extreme learning field (Huang et al., 2006). However, for large hidden layer sizes, RP outperforms SC. This suggests that the high dimensionality of the hidden layers is more important for reaching high performance than the features extracted by PCA or SC. Tests on the object recognition task CIFAR10 lead to the same conclusion, indicating that this observation is not entirely task specific (see subsection 3.3 for further analysis on CIFAR10). For all tested methods and hidden layer sizes, performance is significantly worse than the one reached with BP (black curve in Figure 3). In line with (Lillicrap et al., 2016), we find that FA (cyan curve) performs as well as BP on MNIST.
Universal function approximation theory predicts lower bounds for the squared error that follow a power law with hidden layer size nh for both BP ($O(1/n_h)$) and RP ($O(1/n_h^{2/d})$, where d is the input dimension; Barron et al. (1994); Barron (1993)). In the log-log-plot in Figure 3 this would correspond to a factor d/2 = 784/2 = 392 between the slopes of the curves of BP and RP, or at least a factor $d_{\mathrm{eff}}/2 \approx 10$ using an effective dimensionality of MNIST (see methods A.1). We find a much faster decay of the classification error in RP and a smaller difference between the RP and BP slopes than suggested by the theoretical lower bounds.
3.3 LOCALIZED RANDOM RECEPTIVE FIELDS
There are good reasons to reduce the connectivity from all-to-all to localized receptive fields (Figure 1C): local connectivity patterns are observed in real neural circuits (Hubel & Wiesel, 1962), proven useful theoretically (Litwin-Kumar et al., 2017) and empirically (Bartunov et al., 2018), and successfully used in convolutional networks (CNNs). Even though this modification seems well justified from both biological and algorithmic sides, it reduces the generality of the algorithm to input data such as images where neighborhood relations between pixels (i.e. input dimensions) are important.
For random projections with localized receptive fields (l-RP), the centers of the patches were chosen at random positions in the input space and their weights were randomly fixed (as in subsection 3.1, see Figure 1C). We tested different patch sizes of p × p pixels and found an optimum around p ≈ 10 which is more pronounced for large hidden layer sizes nh (see Figure 4A). Note that p = 1 corresponds to resampling the data with random weights, and p = 28 recovers fully connected RP performance.
The main finding here is the significant improvement in performance using l-RP: the optimum around p ≈ 10 almost reaches BP performance for nh = 5000 hidden neurons (blue arrow in
Figure 4B). As expected, l-RP and the LIF rate model of l-RP in subsection 3.1 perform equally well. To achieve a fair comparison, BP and SC were also tested with localized receptive fields (l-BP, l-SC, see Figure 4B). Also these algorithms seem to benefit from localized connectivity (also with an optimum for patch size p = 10), however, not as much as RP. This makes l-RP a strong competitor of SC (and also FA, see Figure 3) as a bio-plausible algorithm in the regime of large, overcomplete hidden layers nh > d.
Since the classification performances of l-RP and l-BP are very close for layer sizes above nh = 5000, we investigated the misclassified MNIST digits for both algorithms. We find that 75% of the (≈ 125) misclassified digits of l-BP (nh = 5000) are contained in the misclassified ones of l-RP (nh = 5000). This means that in roughly 75% of the cases where l-RP fails, the reference algorithm l-BP fails as well, suggesting that these digits are particularly hard to recognize for networks with one hidden layer. We trained networks with up to nh = 100000 hidden neurons to test if (l-)RP can finally reach (l-)BP performance, since the latter saturates for large nh (see Figure 4B). Indeed, for simulations with nh = 100000 and p = 10, l-BP and l-RP performance was no longer significantly different, both being at 1.2% test error.
To test whether l-RP only works for the relatively simple MNIST data set (centered digits, non-informative margin pixels, no clutter, uniform features and perspective etc.) or generalizes to more difficult tasks, we applied it to the CIFAR10 data set (Krizhevsky, 2013). We first reproduced a typical benchmark performance of a fully connected network with one hidden layer trained with standard BP (≈ 56% test accuracy, nh = 5000, see also Lin & Memisevic (2016)). Again, l-RP outperforms the unsupervised methods PCA and l-SC in the case of large, overcomplete hidden layers (see Table 2). Furthermore, as on MNIST, classification performance increases for increasing hidden layer size nh and localized receptive fields perform better than full connectivity for all methods.
Also on CIFAR10, l-RP comes close to the performance of the reference algorithm l-BP, however, the difference between l-RP and l-BP is larger than on MNIST. Given that state-of-the-art performance on the CIFAR10 dataset with deep convolutional neural networks is close to 98% (e.g. Real et al. (2018)), the limitations of l-RP and the difference in difficulty between MNIST and CIFAR10 become apparent.
4 DISCUSSION
The rules that govern plasticity of synapses deep in the brain remain elusive. In contrast to bioplausible deep learning based on approximations of the backpropagation algorithm, we focused
here on training a readout layer with a supervised, local learning rule combined with a single hidden layer with either fixed random weights or trained with unsupervised, local learning rules.
To our surprise, randomly initialized fixed weights (RP) of large hidden layers lead to better classification performance than training them with unsupervised methods like PCA or sparse coding (SC). This implies that the inductive bias of PCA and sparse coding is not well suited for the task of digit classification and object recognition. It may be interesting to search for alternative unsupervised, local learning rules with a stronger inductive bias.
Replacing all-to-all connectivity with localized input filters is such an inductive bias that was already seen to be useful in other models (Bartunov et al., 2018) and proved to be particularly useful in conjunction with randomly initialized static weights. Already for a hidden layer size of 5000 neurons the performance of l-RP almost reaches the performance of backpropagation on MNIST. Furthermore, performance scaling with the number of hidden units nh was found to be orders of magnitudes better than the lower bound suggested by universal function approximation theory (Barron, 1993).
Since we wanted to keep our models as simple as possible, we used online (no mini-batches) stochastic gradient descent (SGD) with a constant learning rate in all our experiments. There are many known ways to further tweak the final performance, e.g. with adaptive learning rate schedules or data augmentation, but our goal here was to demonstrate that even the simple model with localized random projections and spike timing dependent plasticity with a constant learning rate achieves results that are comparable with more elaborate approaches that use e.g. convolutional layers with weight sharing (Panda & Roy, 2016), backpropagation approximations (Lee et al., 2016), multiple hidden layers (Lillicrap et al., 2016), dendritic neurons (Sacramento et al., 2017), recurrence (Diehl & Cook, 2015) or conversion from rate to spikes (Diehl et al., 2015).
Above 98% accuracy we have to take into account a saturating effect of the network training: better models will only lead to subtle improvements in accuracy. It is not obvious whether improvements are really a proof of having achieved deep learning or just the result of tweaking the models towards the peculiarities of the MNIST dataset (centered digits, non-informative margin pixels, no clutter, uniform features and perspective etc.). We observed that more challenging data sets such as CIFAR10 clearly highlight the limitations of l-RP and thus are better suited to test deep learning capabilities. We are aware that state-of-the-art deep learning has moved from MNIST to harder datasets, such as ImageNet (Deng et al., 2009), long ago. Yet MNIST seems to be the current reference task for most bio-plausible deep learning models (see section 2 and Table 8).
In this paper we presented a new MNIST benchmark for bio-plausible spiking networks. Using localized random projections (l-RP) and STDP learning, our spiking LIF model reached 98.1% test accuracy on MNIST which lies within the range of current benchmarks for bio-plausible models for deep learning (see section 2 and Table 8). Our network model is particularly simple, i.e. it has only one trainable layer and does not depend on sophisticated architectural or algorithmic features (e.g. to approximate backpropagation). Instead it relies on the properties of high-dimensional localized random projections. We suggest that novel, progressive approaches to bio-plausible deep learning should significantly outperform the benchmark presented here.
A METHODS
A.1 RATE NETWORK MODEL
We use a 3-layer (input l0, hidden l1 = lh and output l2) feed-forward rate-based architecture with layer sizes n0 (input), n1 (hidden) and n2 (output, with n2 = # classes). The layers are connected via weight matrices $W_1 \in \mathbb{R}^{n_1 \times n_0}$ and $W_2 \in \mathbb{R}^{n_2 \times n_1}$ and each neuron receives bias from the bias vectors $b_1 \in \mathbb{R}^{n_1}$ and $b_2 \in \mathbb{R}^{n_2}$ respectively (see Figure 1). The neurons themselves are nonlinear units with an element-wise, possibly layer-specific, nonlinearity $a_i = \varphi_l(u_i)$. The feed-forward pass of this model thus reads
$$u_{l+1} = W_{l+1} a_l + b_{l+1}, \qquad a_{l+1} = \varphi_{l+1}(u_{l+1}). \tag{4}$$
The simple perceptron (SP) only consists of one layer (l2, W2 ∈ Rn2×n0 , b2 ∈ Rn2 ). The sparse coding (SC) model assumes recurrent inhibition within the hidden layer l1. This inhibition is not modeled by an explicit inhibitory population, as required by Dale’s principle (Dale, 1935), but direct, plastic, inhibitory synapses V1 ∈ Rn1×n1 are assumed between neurons in l1. Classification error variances in Figure 3 & Figure 4 are displayed as shaded, semi-transparent areas with the same colors as the corresponding curves. Their lower and upper bounds correspond to the 25% and 75% percentiles of at least 10 independent runs.
An effective dimensionality deff of the MNIST data set can be obtained, e.g. via eigen-spectrum analysis, keeping 90% of the variance. We obtain values around deff ≈ 20. The measure proposed in Litwin-Kumar et al. (2017) gives the same value deff ≈ 20. Another measure is the crossing of the PCA curve with the Simple Perceptron line in Figure 3 at nh = 25 (= deff). We checked that training a perceptron (1 hidden layer, nh = 1000, $10^7$ iterations, ReLU, standard BP) on the first 25 PCs of MNIST leads to 1.7% test error (vs 1.5% test error on the full MNIST data). Together, these findings suggest that the MNIST dataset lies mostly in a low-dimensional linear subspace with $d_{\mathrm{eff}} \approx 25 \ll d$. The MNIST (& CIFAR10) data was rescaled to values in [0,1] and mean centered, which means that the pixel-wise average over the data was subtracted from the pixel values of every image. The code for the implementation of our rate network model will be available online upon acceptance.
A.2 UNSUPERVISED TECHNIQUES
A.2.1 PRINCIPAL COMPONENT ANALYSIS (PCA)
In this paper we do not implement PCA learning explicitly as a neural learning algorithm but by a standard PCA algorithm (https://github.com/JuliaStats/MultivariateStats.jl). For d-dimensional data such algorithms output the values of the n ≤ d first principal components as well as the principal subspace projection matrix $P \in \mathbb{R}^{n \times d}$. This matrix can directly be used as feedforward matrix W1 in our network since the rows of P correspond to the projections of the data onto the single principal components. In other words, each neuron in the hidden layer l1 extracts another principal component of the data.
Since PCA is a linear model, biases b1 were set to 0 and the nonlinearity was chosen linear, i.e. ϕ1(u) = u. With this, we can write the (trained) feed-forward pass of the first layer of our PCA model as follows:
$$a_1 = u_1 = W_1 \cdot a_0 \quad \text{with} \quad W_1 = P \tag{5}$$
Since the maximum number of PCs that can be extracted is the dimensionality of the data, nmax = d, the number of neurons in the hidden layer n1 is limited by d. This makes PCA unusable for overcomplete hidden representations as investigated for SC and RP.
Consistency between the used standard algorithm and neural implementations of PCA (“Sanger’s” rule Sanger (1989)) was checked by comparing the extracted PCs and visualizing the learned projections (lines of P) for the case of 30 extracted PCs, i.e. n = 30.
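Since our implementation relies on a Julia package, the following is only an equivalent sketch with scikit-learn: it sets W1 from the principal subspace projection and computes the effective dimensionality at 90% retained variance, with a random stand-in for the mean-centered data matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))   # stand-in for the mean-centered MNIST matrix
X -= X.mean(axis=0)

pca = PCA(n_components=784).fit(X)

n_h = 25                           # number of hidden (PC) units, n_h <= d
W1 = pca.components_[:n_h]         # rows of P: one principal direction per unit

# effective dimensionality: smallest number of PCs keeping 90% of the variance
cum_var = np.cumsum(pca.explained_variance_ratio_)
d_eff = int(np.searchsorted(cum_var, 0.90) + 1)
```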
A.2.2 SPARSE CODING (SC)
For d-dimensional data, SC aims at finding a dictionary W ∈ Rh×d of features that lead to an optimal representation a1 ∈ Rh which is sparse, i.e. has as few non-zero elements as possible. The corresponding optimization problem reads:
$$W^{\mathrm{opt}}, a_1^{\mathrm{opt}} = \underset{W,\, a_1}{\arg\min}\; L(W, a_1), \qquad L(W, a_1) = \frac{1}{2}\left\|a_0 - W^\top a_1\right\|_2^2 + \lambda \left\|a_1\right\|_1. \tag{6}$$
Since this is a nonlinear optimization problem with latent variables (hidden layer), it cannot be solved directly. Usually an iterative two-step procedure is applied (akin to the expectation-maximization algorithm) until convergence: first, optimize with respect to the activities a with fixed weights W; second, assuming fixed activities, perform a gradient step w.r.t. the weights.
We implement a biologically plausible SC model using a 2-layer network with recurrent inhibition and local plasticity rules similar to the one in Brito & Gerstner (2016). For a rigorous motivation (and derivation) that such a network architecture can indeed implement sparse coding we refer to Olshausen & Field (1997); Zylberberg et al. (2011); Pehlevan & Chklovskii (2015); Brito & Gerstner (2016). We apply the above mentioned two step optimization procedure to solve the SC problem given our network model. The following two steps are repeated in alternation until convergence of the weights:
1. Optimizing the hidden activations: We assume given and fixed weights W1 and V1 and ask for optimal hidden activations a1. Because of the recurrent inhibition V1 the resulting equation for the hidden activities a1 is nonlinear and implicit. To solve this equation iteratively, we simulate the dynamics of a neural model with time-dependent internal and external variables u1(t) and a1(t) respectively. The dynamics of the system is then given by Zylberberg et al. (2011); Brito & Gerstner (2016):
$$\tau_u \frac{du_1(t)}{dt} = -u_1(t) + \big(W_1 a_0(t) - V_1 a_1(t)\big), \qquad a_1(t) = \varphi(u_1(t)) \tag{7}$$
In practice the dynamics is simulated for Niter = 50 iterations, which leads to satisfactory convergence (change in hidden activations < 5%).
2. Optimizing the weights: Now the activities a1 are kept fixed and we want to update the weights following the gradient of the loss function. The weight update rules are Hebbian-type local learning rules (Brito & Gerstner, 2016):
$$\Delta W_{1,ji} = \alpha_w \cdot a_{0,i} \cdot a_{1,j}, \qquad \Delta V_{1,jk} = \alpha_v \cdot a_{1,k} \cdot \big(a_{1,j} - \langle a_{1,j}\rangle\big) \tag{8}$$
〈·〉 is a moving average (low-pass filter) with time constant τmav. At the beginning of the simulation (or after a new pattern presentation), the effective time constant is ramped up from 0 to τmav during the first τmav of the simulation. The rows of W1 are normalized after each update; however, this can also be achieved by adding a weight decay term. Additionally, the values of V1 are clamped to positive values after each update to ensure that the recurrent input is inhibitory. Also, the diagonal of V1 is kept at zero to avoid self-inhibition.
During SC learning, at every iteration, the variables u1(t) and a1(t) are reset (to avoid transients) before an input is presented. Then, for each of the N iterations, equation 7 is iterated for Niter steps
and the weights are updated according to equation 8.
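A hedged Python sketch of one such pattern presentation, combining the activity dynamics of equation 7 with the local updates of equation 8, is shown below; the Euler discretization, the externally maintained moving average a_bar and all parameter values are illustrative assumptions.

```python
import numpy as np

def sc_present_pattern(a0, W1, V1, phi, a_bar, n_iter=50, tau_u=10.0, dt=1.0,
                       alpha_w=1e-3, alpha_v=1e-3):
    """One sparse-coding pattern presentation: settle activities (Eq. 7),
    then apply the Hebbian/anti-Hebbian weight updates (Eq. 8)."""
    u = np.zeros(W1.shape[0])
    a1 = np.zeros_like(u)
    for _ in range(n_iter):                          # step 1: activity dynamics
        u += (dt / tau_u) * (-u + W1 @ a0 - V1 @ a1)
        a1 = phi(u)
    W1 = W1 + alpha_w * np.outer(a1, a0)             # step 2: feed-forward update
    V1 = V1 + alpha_v * np.outer(a1 - a_bar, a1)     # recurrent (inhibitory) update
    W1 /= np.linalg.norm(W1, axis=1, keepdims=True) + 1e-12  # normalize rows
    V1 = np.clip(V1, 0.0, None)                      # inhibition stays non-negative
    np.fill_diagonal(V1, 0.0)                        # no self-inhibition
    return W1, V1, a1
```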
For comparison with localized RP (l-RP, see subsubsection A.2.3), a localized version of SC was implemented with the same initialization of W1 as in l-RP. The usual SC learning rule equation 8 is applied and the localized connectivity is kept by clamping weights outside the receptive fields to zero. Lateral inhibition weights V1 are initialized and learned as in normal SC (full competition is kept). For a detailed parameter list, see Table 3.
A.2.3 RANDOM PROJECTIONS (RP)
For RP, the weight matrix W1 between input and hidden layer is initialized randomly W1 ∼ N (0, σ2) with variance-preserving scaling: σ2 ∝ 1/n0. The biases b1 are initialized by sampling from a uniform distribution U([0, 0.1]) between 0 and 0.1. In practice we used the specific initialization
$$W_1 \sim \mathcal{N}(0, \sigma^2), \quad \sigma^2 = \frac{1}{100\, n_0}, \qquad b_1 \sim \mathcal{U}([0, 0.1]) \tag{9}$$
for RP (keeping weights fixed), SC, SP and also BP & FA (both layers, with W2, b2 and n1 respectively). The initialization of the biases b was found to be uncritical in the range of [0, 0.1].
For localized RP (l-RP), neurons in the hidden layer receive input only from a fraction of the input units called a receptive field. Receptive fields are chosen to form a compact patch over neighbouring pixels in the image space. For each hidden neuron a receptive field of size p × p (p ∈ N) input neurons is created at a random position in the input space. The weight values for each receptive field (rf) and the biases are initialized as:
$$W_{1,\mathrm{rf}} \sim \mathcal{N}(0, \sigma_{\mathrm{rf}}^2), \quad \sigma_{\mathrm{rf}}^2 = \frac{c}{100\, p} \tag{10}$$
$$b_1 \sim \mathcal{U}([0, 0.1]) \tag{11}$$
where the optimization factor c = 3 was found empirically through a grid-search optimization of classification performance. For exact parameter values, see Table 4.
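As an illustration, a localized random projection matrix of this kind could be constructed as follows; clipping the patches to lie fully inside the image and the helper name are our own simplifying choices.

```python
import numpy as np

def localized_rp(n_h, img_side=28, p=10, c=3.0, seed=0):
    """Fixed hidden-layer weights for l-RP (Eqs. 10-11): each hidden unit
    receives input from a random p x p patch of the image."""
    rng = np.random.default_rng(seed)
    d = img_side * img_side
    W1 = np.zeros((n_h, d))
    sigma_rf = np.sqrt(c / (100 * p))
    for k in range(n_h):
        r = rng.integers(0, img_side - p + 1)    # random patch position,
        col = rng.integers(0, img_side - p + 1)  # clipped to fit the image
        mask = np.zeros((img_side, img_side), dtype=bool)
        mask[r:r + p, col:col + p] = True
        W1[k, mask.ravel()] = rng.normal(0.0, sigma_rf, size=p * p)
    b1 = rng.uniform(0.0, 0.1, size=n_h)
    return W1, b1
```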
A.3 CLASSIFIER & SUPERVISED REFERENCE ALGORITHMS
The connections W2 from hidden to output layer are updated by a simple delta-rule, which is equivalent to BP in a single-layer network and hence is bio-plausible. To have a reference for our bio-plausible models (Figure 1B), we compare them to networks with the same architecture (number of layers, neurons, connectivity) but trained in a fully supervised way with standard backpropagation (Figure 1A). The forward pass of the model reads:
$$u_{l+1} = W_{l+1} a_l + b_{l+1} \tag{12}$$
$$a_{l+1} = \varphi_{l+1}(u_{l+1}) \tag{13}$$
The error ẽL is calculated from the comparison of activations in the last layer aL with the (one-hot encoded) target activations tgt, with respect to the chosen loss function: mean squared error (MSE),
$$\tilde{e}_L = \mathrm{tgt} - a_L \tag{14}$$
$$L_{\mathrm{MSE}} = \frac{1}{2}\left\|\mathrm{tgt} - a_L\right\|_2^2 \tag{15}$$
or softmax/cross-entropy loss (CE),
$$p = \mathrm{softmax}(a_L) \tag{16}$$
$$\tilde{e}_L = \mathrm{tgt} - p \tag{17}$$
$$L_{\mathrm{CE}} = -\sum_{i=1}^{n_L} \mathrm{tgt}_i \cdot \log(p_i) \tag{18}$$
Classification results (on the test set) for MSE- and CE-loss were found to be not significantly different. Rectified linear units (ReLU) were used as nonlinearity ϕ(ul) for all layers (MSE-loss) or for the first layer only (CE-loss).
In BP the weight and bias update is obtained by stochastic gradient descent, i.e. $\Delta W_{l,ij} \propto \partial L / \partial W_{l,ij}$. The full BP algorithm for deep networks reads (Rumelhart et al., 1986):
$$e_L = \varphi'_L(u_L) \odot \tilde{e}_L, \qquad e_{l-1} = \varphi'_{l-1}(u_{l-1}) \odot \left(W_l^\top e_l\right)$$
$$\Delta W_l = \alpha \cdot e_l \otimes a_{l-1}, \qquad \Delta b_l = \alpha \cdot e_l \tag{19}$$
where $\odot$ stands for element-wise multiplication, $\otimes$ is the outer (dyadic) product, $\varphi'_l(\cdot)$ is the derivative of the nonlinearity and α is the learning rate. FA (Lillicrap et al., 2016) uses a fixed random matrix $R_l$ instead of the transpose of the weight matrix $W_l^\top$ for the error backpropagation step in equation 19.
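For the one-hidden-layer networks used here, equation 19 reduces to the following minimal Python sketch of one stochastic gradient step with MSE loss and a ReLU hidden layer; the linear output layer and all names are our own simplifications, and passing a fixed random matrix R2 instead of W2.T turns the update into FA.

```python
import numpy as np

def bp_step(W1, b1, W2, b2, a0, tgt, alpha, R2=None):
    """One SGD step of Eq. (19) for a one-hidden-layer network (MSE loss).
    If R2 is given, it replaces W2.T in the error propagation (FA)."""
    u1 = W1 @ a0 + b1                   # forward pass (Eqs. 12-13)
    a1 = np.maximum(u1, 0.0)            # ReLU hidden layer
    a2 = W2 @ a1 + b2                   # linear output for brevity
    e2 = tgt - a2                       # output error (Eq. 14), phi' = 1
    B = R2 if R2 is not None else W2.T  # backprop vs. feedback alignment
    e1 = (u1 > 0.0) * (B @ e2)          # ReLU derivative times propagated error
    W2 += alpha * np.outer(e2, a1); b2 += alpha * e2
    W1 += alpha * np.outer(e1, a0); b1 += alpha * e1
    return W1, b1, W2, b2
```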
To allow for a fair comparison with l-RP, BP and FA were implemented with full connectivity and with localized receptive fields with the same initialization as in l-RP. During training with BP (or FA), the usual weight update equation 19 was applied to the weights in the receptive fields, keeping all other weights at zero. The exact parameter values can be found in Table 5.
A.4 SPIKING IMPLEMENTATION
A.4.1 LIF MODEL
The spiking simulations were performed with a custom-made event-based leaky integrate-and-fire (LIF) integrator written in the Julia-language. For large network sizes, the exact, event-based integration can be inefficient due to a large frequency of events. To alleviate dramatic slow-down, an Euler-forward integration was added to the framework. For sufficiently small time discretization (e.g. $\Delta t \leq 5 \cdot 10^{-2}$ ms for the parameters given in Table 6) the error of this approximate integration does not have negative consequences on the learning outcome. Consistent results were obtained using event-based and Euler-forward integration. The code of this framework will be available online upon acceptance.
The dynamics of the LIF network is given by:
$$\tau_m \frac{du_i(t)}{dt} = -u_i(t) + R\, I_i(t)$$
$$\text{with} \quad I_i(t) = I_i^{\mathrm{ff}}(t) + I_i^{\mathrm{ext}}(t) = \sum_{j,f} w_{ij}\, \epsilon\!\left(t - t_j^f\right) + I_i^{\mathrm{ext}}(t)$$
$$\text{and the spiking condition:} \quad \text{if } u_i(t) \geq \vartheta_i:\ u_i \to u_{\mathrm{reset}} \tag{20}$$
where $u_i(t)$ is the membrane potential, $\tau_m$ the membrane time-constant, R the membrane resistance, $w_{ij}$ are the synaptic weights, $\epsilon(t) = \delta(t)/\tau_m$ (with $\tau_m$ in seconds) is the post-synaptic potential evoked by a pre-synaptic spike arrival, $\vartheta_i$ is the spiking threshold and $u_{\mathrm{reset}}$ the reset potential after a spike. The input is split into a feed-forward ($I^{\mathrm{ff}}(t)$) and an external ($I^{\mathrm{ext}}(t)$) contribution. Each neuron in the input layer l0 (n0 = d) receives only external input $I^{\mathrm{ext}}$ proportional to one pixel value in the data. To avoid synchrony between the spikes of different neurons, the starting potentials and parameters (e.g. thresholds) for the different neurons are drawn from a (small) range around the respective mean values.
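A hedged sketch of one Euler-forward step of these dynamics is given below; treating each delta-shaped input as an instantaneous jump of the membrane potential is a common discretization choice and, like the variable names, an assumption of ours rather than the exact Julia implementation.

```python
import numpy as np

def lif_euler_step(u, W, pre_spikes, I_ext, theta, u_reset, tau_m, R, dt):
    """One Euler-forward step of Eq. (20). pre_spikes is a 0/1 vector of
    pre-synaptic spikes in the current time bin."""
    # leak and external drive, plus voltage jumps from delta-shaped inputs
    u = u + (dt / tau_m) * (-u + R * I_ext) + (R / tau_m) * (W @ pre_spikes)
    spikes = u >= theta                # spiking condition
    u = np.where(spikes, u_reset, u)   # reset after a spike
    return u, spikes.astype(float)
```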
We implement STDP using post-synaptic spike-traces $\mathrm{tr}_i(t)$ and a post-synaptic target-trace $\mathrm{tgt}_i(t)$:
$$\tau_{tr} \frac{d\,\mathrm{tr}_i(t)}{dt} = -\mathrm{tr}_i(t) + \sum_f \delta\!\left(t - t_i^f\right), \qquad \Delta w_{ij} = g\!\left(\mathrm{tr}_i^{\mathrm{post}}(t), \mathrm{tgt}_i^{\mathrm{post}}(t)\right) \delta\!\left(t - t_j^f\right) \tag{21}$$
with the plasticity function
$$g\!\left(\mathrm{tr}_i^{\mathrm{post}}(t), \mathrm{tgt}_i(t)\right) = \alpha \cdot \left(\mathrm{tgt}_i^{\mathrm{post}}(t) - \mathrm{tr}_i^{\mathrm{post}}(t)\right). \tag{22}$$
To train the network, we present patterns to the input layer and a target-trace to the output layer. The MNIST input is scaled by the input amplitude $\mathrm{amp}_{\mathrm{inp}}$; the targets $\mathrm{tgt}(t)$ of the output layer are the one-hot-coded classes, scaled by the target amplitude $\mathrm{amp}_{\mathrm{tgt}}$. Additionally, every neuron receives a static bias input $I^{\mathrm{ext}}_{\mathrm{bias}} \approx \vartheta$ to avoid silent units in the hidden layer. Every pattern is presented as fixed input for a time $T_{\mathrm{pat}}$ and the LIF dynamics as well as the learning evolves according to equation 20 and equation 21 respectively. To ensure stability during transients (see Naud et al. (2008) and references therein), learning is disabled after pattern switches for a duration of about $T_{\mathrm{trans}} = 4\tau_m$. With the parameters we used for the simulations (see Table 6), firing rates of single neurons in the whole network stayed below 1 kHz, which was considered as a bio-plausible regime. For the toy example in Figure 2A & B we used static input and target with the parameters $\mathrm{amp}_{\mathrm{inp}} = 40$, $\mathrm{amp}_{\mathrm{tgt}} = 5$ (i.e. target trace = 0.005), $\vartheta_{\mathrm{mean}} = 20$, $\sigma_\vartheta = 0$, $\tau_m = 50$, $\alpha = 1.2 \cdot 10^{-5}$. For the raster plot in Figure 2C we used $\mathrm{amp}_{\mathrm{inp}} = 300$, $\mathrm{amp}_{\mathrm{tgt}} = 300$, $\vartheta_{\mathrm{mean}} = 20$, $\sigma_\vartheta = 0$, $\tau_m = 50$, $\alpha = 1.2 \cdot 10^{-5}$, $T_{\mathrm{pat}} = 50$ ms, $T_{\mathrm{trans}} = 100$ ms.
A.4.2 LIF RATE MODEL
The LIF dynamics can be mapped to a rate model described by the following equations:
$$u_l = W_l a_{l-1} + R\, I^{\mathrm{ext}}, \qquad a_l = \varphi_{LIF}(u_l), \qquad \Delta w_{ij} = \tilde{g}\!\left(a_j^{\mathrm{pre}}, a_i^{\mathrm{post}}, \mathrm{tgt}_i^{\mathrm{post}}\right) \tag{23}$$
with the (element-wise) LIF-activation function ϕLIF(·) and the modified plasticity function g̃(·):
$$\varphi_{LIF}(u_k) = \left[\Delta_{\mathrm{abs}} - \tau_m \ln\!\left(1 - \frac{\vartheta_k}{u_k}\right)\right]^{-1} \tag{24}$$
$$\tilde{g}\!\left(a_j^{\mathrm{pre}}, a_i^{\mathrm{post}}, \mathrm{tgt}_i^{\mathrm{post}}\right) = \tilde{\alpha} \cdot a_j^{\mathrm{pre}} \cdot \left(\mathrm{tgt}_i^{\mathrm{post}} - a_i^{\mathrm{post}}\right) \tag{25}$$
The latter can be obtained by integrating the STDP rule Equation 21 and taking the expectation. Most of the parameters of the spiking and the LIF rate models can be mapped to each other directly (see Tabs. 6 & 7). The learning rate α must be adapted since the LIF weight change depends on the presentation time of a pattern $T_{\mathrm{pat}}$. In the limit of long pattern presentation times ($T_{\mathrm{pat}} \gg \tau_m, \tau_{tr}$), the transition from the learning rate of the LIF rate model ($\tilde{\alpha}$) to the one of the spiking LIF model (α) is
$$\alpha = \frac{1000\ \mathrm{ms}}{T_{\mathrm{pat}}\ [\mathrm{ms}]} \cdot 1000 \cdot \tilde{\alpha}, \tag{26}$$
where the second factor comes from a unit change from Hz to kHz. It is also possible to train weight matrices in a computationally efficient way in the LIF rate model and plug them into the spiking LIF model afterwards (as in e.g. Diehl et al. (2015)). The reasons for the remaining difference in performance presumably lie in transients and single-spike effects that cannot be captured by the rate model. Also, the spiking network was only trained with $10^6$ image presentations (compared to $10^7$ for the rate model) due to long simulation times.
B PARAMETER TABLES
In the following tables we use scientific E-notation $XeY = X \cdot 10^Y$ for better readability. For all simulations, we scaled the learning rate proportionally to $1/n_h$ for $n_h > 5000$ to ensure convergence.
C BIO-PLAUSIBLE MNIST BENCHMARKS | 1. What is the focus and contribution of the paper on biologically plausible ANNs?
2. What are the strengths of the proposed approach, particularly in terms of local learning rules and unsupervised learning?
3. What are the limitations of the paper regarding its choice of dataset and architectures?
4. How does the reviewer assess the performance of the proposed method compared to other biologically plausible algorithms?
5. What are the concerns regarding the evaluation metrics used in the paper, especially for spiking networks?
6. How does the reviewer view the overall objective of achieving biological plausibility in deep learning? | Review | Review
Summary:
The authors propose a benchmark of biologically plausible ANNs on the MNIST dataset with an emphasis on local learning rules (ruling out backpropagation, and enforcing small receptive fields). They find that random projection (RP) networks provide good performance close to backpropagation and outperform other local learning rules based on unsupervised learning.
Evaluation:
A well-executed work but with major limitations: it is based mostly on MNIST, the analysis of the spiking network is limited, and deep biologically plausible learning rules are not investigated.
Detailed comments:
While the paper reads well, choosing how to evaluate the contribution for such a benchmark paper is a bit difficult, as the novelty is by definition more limited than in papers proposing a new approach.
In the following I chose to focus on what information such a benchmark may bring to the field for addressing the challenges ahead.
1. Strengths
The authors made the effort of implementing several biologically plausible learning rules, including feedback alignment and sparse coding. In particular, using local unsupervised learning rules as baselines for learning the hidden layer is a good way to extend the range of tested algorithms.
2. “Easy” dataset
It is unclear to me in which way MNIST results can help evaluate the next challenges in the field. While it is good to know that simple algorithms can achieve close to state of the art, I am not sure this is enough for a paper submitted in 2018. Ideally, most of the analysis could be reproduced at least for CIFAR10 (as the authors started to do in Table 2).
3. Limited architectures
Most of the analysis is restricted to one single layer. However, biologically plausible algorithms have also been proposed that can in principle apply to multiple layers. In addition to feedback alignment (implemented in the manuscript in the single hidden layer case), you can find relatively simple approaches in the literature, for example
“Balduzzi, David, Hastagiri Vanchinathan, and Joachim M. Buhmann. Kickback Cuts Backprop's Red-Tape: Biologically Plausible Credit Assignment in Neural Networks. AAAI. 2015.” Given the dominant view that depth is key for learning challenging datasets, not exploring this option at all in a benchmark seems a significant weakness.
4. Spiking networks
While the authors seem to emphasize spiking as an important aspect of biological plausibility (by using LIF neurons and STDP), the challenges of such approaches seem to be largely unaddressed, and the main take-home message is a performance similar to the corresponding rate models. It would be very interesting, for example, to see how many spikes (or spikes per neuron) are needed per example to achieve a robust classification.
5. Overall objective behind biological plausibility
Extending the previous point, the results are to some extent limited to accuracy. If one wishes to achieve biological plausibility, more aspects can be taken into consideration. For example:
- During test: the average number of activated neurons, the average number of activated synapses.
- During training: the overall number of activations needed to train the algorithm.
In relation to these considerations, a more concrete discussion about the potential benefits of biological plausibility would be helpful.
ICLR | Title
Localized random projections challenge benchmarks for bio-plausible deep learning
Abstract
Similar to models of brain-like computation, artificial deep neural networks rely on distributed coding, parallel processing and plastic synaptic weights. Training deep neural networks with the error-backpropagation algorithm, however, is considered bio-implausible. An appealing alternative to training deep neural networks is to use one or a few hidden layers with fixed random weights or trained with an unsupervised, local learning rule and train a single readout layer with a supervised, local learning rule. We find that a network of leaky-integrate-andfire neurons with fixed random, localized receptive fields in the hidden layer and spike timing dependent plasticity to train the readout layer achieves 98.1% test accuracy on MNIST, which is close to the optimal result achievable with errorbackpropagation in non-convolutional networks of rate neurons with one hidden layer. To support the design choices of the spiking network, we systematically compare the classification performance of rate networks with a single hidden layer, where the weights of this layer are either random and fixed, trained with unsupervised Principal Component Analysis or Sparse Coding, or trained with the backpropagation algorithm. This comparison revealed, first, that unsupervised learning does not lead to better performance than fixed random projections for large hidden layers on digit classification (MNIST) and object recognition (CIFAR10); second, networks with random projections and localized receptive fields perform significantly better than networks with all-to-all connectivity and almost reach the performance of networks trained with the backpropagation algorithm. The performance of these simple random projection networks is comparable to most current models of bio-plausible deep learning and thus provides an interesting benchmark for future approaches.
1 INTRODUCTION
While learning a new task, synapses deep in the brain undergo task-relevant changes (HayashiTakagi et al., 2015). These synapses are often many neurons downstream of sensors and many neurons upstream of actuators. Since the rules that govern such changes deep in the brain are poorly understood, it is appealing to draw inspiration from deep artificial neural networks (DNNs) (LeCun et al., 2015). DNNs and the cerebral cortex process information in multiple layers of many neurons (Yamins & DiCarlo, 2016; Kriegeskorte, 2015) and in both, the artificial and the biological neural networks, learning depends on changes of synaptic strengths (Hebbian theory, Hebb (1949)). However, learning rules in the brain are most likely different from the backpropagation algorithm (Crick, 1989; Marblestone et al., 2016; Rumelhart et al., 1986). Furthermore, biological neurons communicate by sending discrete spikes as opposed to real-valued numbers used in DNNs. Differences like these suggest that there exist other, possibly equally powerful, algorithms that are capable to solve the same tasks by using different, more biologically plausible mechanisms. Thus, an important question in computational neuroscience is how to explain the fascinating learning capabilities of the brain with bio-plausible network architectures and learning rules. On the other hand, from a pure machine learning perspective there is increasing interest in neuron-like architectures with local learning rules, mainly motivated by the current advances in neuromorphic hardware (Nawrocki et al., 2016).
Image recognition is a popular task to test the proposed models. Because of its relative simplicity and popularity, the MNIST dataset (28×28-pixel grey-level images of handwritten digits, LeCun (1998)) is often used for benchmarking. Typical performances of existing models are around 97–99% classification accuracy on the MNIST test set (see section 2 and Table 8). This value lies in the region of the benchmarks for a large class of classical DNNs trained with backpropagation but without data augmentation or convolutional layers (see table in LeCun (1998)). Thus, accuracies around this value are assumed to be an empirical signature of backpropagation-like deep learning (Lillicrap et al., 2016; Sacramento et al., 2017). It is noteworthy, however, that several of the most promising approaches that perform well on MNIST have been found to fail on harder tasks (Bartunov et al., 2018).
An alternative to supervised training of all layers with backpropagation are fixed random weights, as proposed by general approximation theory (Barron, 1993) and the extreme learning field (Huang et al., 2006), or unsupervised training in the first layers, combined with supervised training of a readout layer. Unsupervised methods are appealing since they can be implemented with local learning rules, see e.g. “Oja’s rule” (Oja, 1982; Sanger, 1989) for principal component analysis or algorithms in Olshausen & Field (1997); Rozell et al. (2008); Liu & Jia (2012); Brito & Gerstner (2016) for sparse coding. A single readout layer can also be implemented with a local delta-rule (also called “perceptron rule”), which may be implemented by pyramidal spiking neurons with dendritic prediction of somatic spiking (Urbanczik & Senn, 2014). Since it is pointless to simply stack multiple fully connected layers trained with principal component analysis or sparse coding (Olshausen & Field, 1997) we investigate here networks with a single hidden layer.
The main objective of this study was to see how far we can go with a single hidden layer and local learning rules in networks of spiking neurons. To support the design choices of the spiking model, we compared the classification performance of different rate networks: networks trained with backpropagation, networks where the hidden layer is trained with unsupervised methods, and networks with fixed random projections in the hidden layer. Since sparse connectivity is sometimes superior to dense connectivity (Litwin-Kumar et al., 2017; Bartunov et al., 2018) and successful convolutional networks leverage local receptive fields, we investigated also sparse connectivity between input and hidden layer, where each hidden neuron receives input only from a few neighboring pixels of the input image.
2 RELATED WORK
In recent years, many bio-plausible approaches to deep learning have been proposed (see e.g. Marblestone et al. (2016) for a review). For achieving performances similar to deep learning methods, existing approaches usually use either involved architectures or elaborate mechanisms to approximate the backpropagation algorithm. Examples include the use of convolutional layers (Tavanaei & Maida (2016); Lee et al. (2018); Kheradpisheh et al. (2018) and table therein), dendritic computations (Hussain et al., 2014; Guergiuev et al., 2016; Sacramento et al., 2017) or approximations of the backpropagation algorithm such as feedback alignment (Lillicrap et al., 2016; Baldi et al., 2016; Nøkland, 2016; Samadi et al., 2017; Kohan et al., 2018; Bartunov et al., 2018), equilibrium propagation (Scellier & Bengio, 2017), membrane potential based backpropagation (Lee et al., 2016), restricted Boltzmann machines and deep belief networks (O’Connor et al., 2013; Neftci et al., 2014), (localized) difference target propagation (Lee et al., 2015; Bartunov et al., 2018), reinforcement-signal models like AuGMEnT (Rombouts et al., 2015) or approaches using predictive coding (Whittington & Bogacz, 2017). Many models implement spiking neurons to stress bio-plausibility (Liu et al. (2016); Neftci et al. (2017); Kulkarni & Rajendran (2018); Wu et al. (2018); Liu & Yue (2018) and table therein) or for coding efficiency (O’Connor et al., 2017). The conversion of DNNs to spiking neural networks (SNN) after training with backpropagation (Diehl et al., 2015) is a common technique to evade the difficulties of training with spikes. Furthermore, there are models including recurrent activity (Spoerer et al., 2017; Bellec et al., 2018) or even starting directly from realistic circuits (Delahunt & Kutz, 2018). We refer to Table 8 for a list of current bio-plausible MNIST benchmark models.
3 RESULTS
We study networks that consist of an input (l0), one hidden (l1) and an output layer (l2) connected by weight matrices W1 and W2 (Figure 1). Training the hidden layer weights W1 with standard supervised training involves (non-local) error backpropagation using the transposed weight matrix W2^T (Figure 1A). In the bio-plausible network considered in this paper (Figure 1B), the input-to-hidden weights W1 are either learned with an unsupervised method (Principal Component Analysis or Sparse Coding) or are fixed random projections. The unsupervised methods assume recurrent inhibitory weights V1 between hidden units to implement competition.
3.1 SPIKING LOCALIZED RANDOM PROJECTIONS
We first present the results with networks of leaky integrate-and-fire (LIF) neurons. The network architecture is as in Figure 1B, but without the recurrent connections V1. For implementing localized Random Projections (l-RP) in the hidden layer weights W1, we first chose the centers of the localized receptive fields at random positions in the input space and then randomly chose the weights therein, see Figure 1C. The receptive field patches span p×p pixels around their center position (we used p = 10 for the 28×28-pixel MNIST data). The output layer weights W2 are trained with a supervised spike timing dependent plasticity (STDP) rule.
3.1.1 LIF AND STDP DYNAMICS
The spiking dynamics follow the usual LIF equations (see methods A.4) and the readout weights W2 evolve according to a supervised STDP delta rule using post-synaptic spike traces tr_i(t) and a post-synaptic target trace tgt_i(t):

$$\tau_{tr}\,\frac{d\,\mathrm{tr}_i(t)}{dt} = -\mathrm{tr}_i(t) + \sum_f \delta\big(t - t_i^f\big), \qquad \Delta w_{2,ij} = \alpha \cdot \big(\mathrm{tgt}_i^{post}(t) - \mathrm{tr}_i^{post}(t)\big)\,\delta\big(t - t_j^f\big). \tag{1}$$

Thus, for a specific readout weight w_{2,ij}, the post-synaptic trace is updated at every post-synaptic spike time t_i^f and the weight is updated at every pre-synaptic spike time t_j^f. The target trace is used for feeding the one-hot coded supervisory signal for the MNIST classification into the output layer (l2).
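To make the event-driven rule concrete, the following is a minimal discrete-time sketch of Equation 1 in NumPy (the authors' own implementation is stated to be in Julia; the step size dt and the trace-jump convention of 1/τ_tr per spike are our assumptions from integrating the trace ODE):

```python
import numpy as np

def stdp_readout_step(w2, tr, tgt, post_spikes, pre_spikes, dt, tau_tr, alpha):
    """One discrete-time step of the supervised STDP delta rule (Equation 1).

    w2          -- (n_out, n_hidden) readout weights
    tr          -- (n_out,) post-synaptic traces tr_i(t)
    tgt         -- (n_out,) target traces (scaled one-hot class label)
    post_spikes -- (n_out,) bool, output spikes in this step
    pre_spikes  -- (n_hidden,) bool, hidden-layer spikes in this step
    """
    # Trace: exponential decay plus a jump of 1/tau_tr at each post-synaptic
    # spike (from integrating tau_tr * dtr/dt = -tr + sum_f delta(t - t_i^f)).
    tr = tr * (1.0 - dt / tau_tr) + post_spikes / tau_tr
    # Weight change at pre-synaptic spike times, gated by (target - trace).
    w2 = w2 + alpha * np.outer(tgt - tr, pre_spikes.astype(float))
    return w2, tr
```

Note that the update is local: each synapse only needs its own pre-synaptic spikes and the post-synaptic trace and target.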
For a proof of principle and for efficient parameter search we first investigate an LIF rate model. This rate model mimics the LIF dynamics by using the LIF activation function φ_LIF as nonlinearity,

$$\mathrm{rate}(u) = \varphi_{LIF}(u) = \left[\Delta^{abs} - \tau_m \ln\!\left(1 - \frac{\vartheta}{u}\right)\right]^{-1}, \tag{2}$$

where u is the membrane potential, Δ^{abs} the refractory period, τ_m the membrane time constant and ϑ the firing threshold of the LIF model. Furthermore, it employs the rate version of the STDP delta rule in Equation 1 (see methods section A.4 for details)

$$\Delta w_{ij} = \tilde{\alpha} \cdot \mathrm{rate}_j^{pre} \cdot \big(\mathrm{tgtrate}_i^{post} - \mathrm{rate}_i^{post}\big), \tag{3}$$

where tgtrate_i^{post} is the post-synaptic target rate, corresponding to the post-synaptic target trace tgt_i(t) in Equation 1. We obtained similar spiking and weight dynamics whether the readout weights W2 were directly trained with STDP or trained with the LIF rate model and then plugged into the spiking LIF network (as done in e.g. Diehl et al. (2015)).
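In code, the rate version of the rule is a single local, perceptron-style update; a minimal sketch (the learning rate and array shapes are illustrative):

```python
import numpy as np

def rate_delta_update(w2, rate_pre, rate_post, tgt_rate, lr):
    """Rate version of the STDP delta rule (Equation 3):
    dw_ij = lr * rate_pre_j * (tgt_rate_i - rate_post_i)."""
    return w2 + lr * np.outer(tgt_rate - rate_post, rate_pre)

# Example: 10 output classes, 5000 hidden rates, one-hot target rate vector.
# w2 = rate_delta_update(w2, hidden_rates, output_rates, one_hot_target, lr=1e-4)
```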
To illustrate the LIF and STDP dynamics, a toy example consisting of one pre-synaptic neuron connected to one post-synaptic neuron was integrated for 650 ms. The pre- and post-synaptic membrane potentials show periodic spiking (Figure 2A), which induces post-synaptic spike traces and corresponding weight changes (Figure 2B), according to Equation 1. For the MNIST task, Figure 2C shows a raster plot for an exemplary training and testing protocol. During activity transients after pattern switches, learning is disabled until regular spiking is recovered. This is done, first, to ensure stability during activity transients (see Naud et al. (2008) and references therein) and, second, to achieve decorrelation between the activities of subsequent patterns, as needed for stochastic gradient descent (SGD). During the testing period, learning is shut off permanently (see methods section A.4 for more details).
3.1.2 CLASSIFICATION RESULTS FOR LIF l-RP
When directly trained with the STDP rule in Equation 1 the spiking LIF l-RP model (nh = 5000 hidden units and patch size p = 10) reaches 98.1% test accuracy on MNIST. The corresponding LIF
rate model reaches 98.5% test accuracy. Transferring weights learned with the LIF rate model into the spiking LIF model resulted in similar accuracies as the LIF rate model. Table 1 compares the performances of the rate and spiking LIF l-RP models with the reference algorithm l-BP, which is a rate model trained with backpropagation, see subsection 3.2 and subsection 3.3 (for the same hidden layer size nh and patch size p). We can see that the spiking LIF model almost reaches the performance of the corresponding rate model. The remaining gap (0.4%) between the rate and the spiking LIF model presumably stems from transients and the shorter training time of the spiking model (only 10^6 compared to 10^7 iterations due to long simulation times). Both the rate and the spiking LIF model of l-RP achieve accuracies close to the backpropagation reference algorithm l-BP and certainly lie in the range of current bio-plausible MNIST benchmarks, i.e. 97–99% test accuracy (see section 2 and Table 8). Based on these numbers we conclude that the spiking LIF model of localized random projections using STDP is capable of learning the MNIST task to a level that is competitive with known benchmarks for spiking networks.
3.2 BENCHMARKING RATE MODELS TRAINED WITH UNSUPERVISED LEARNING AND BACKPROPAGATION
To justify the design choices of the spiking model, we systematically investigated rate models with different methods to initialize or learn the hidden layer weights W1 (see Figure 1 and methods subsection A.1 for details). To set these hidden layer weights, we use either one of the unsupervised methods Principal Component Analysis (PCA) or Sparse Coding (SC), or train only the readout layer W2 and use fixed Random Projections (RP, as in subsection 3.1) for the hidden layer weights W1 (see Figure 1B). All these methods can be implemented with local, bio-plausible learning rules (Oja, 1982; Olshausen & Field, 1997). As a reference and upper performance bound, we train networks with the same architecture with standard backpropagation (BP, see Figure 1A). As a more bio-plausible approximation of BP, we include Feedback Alignment (FA, Lillicrap et al. (2016)) which uses fixed random feedback weights for error-backpropagation (see methods subsection A.3 for further explanation). A Simple Perceptron (SP) without a hidden layer serves as a simplistic reference, since it corresponds to direct classification of the input.
The hidden-to-output weights W2 are trained with standard stochastic gradient descent (SGD), using a one-hot representation of the class label as target. Since no error-backpropagation is needed for a single layer, the learning rule is local (“delta” or “perceptron”-rule, similar to Equation 3 of the LIF rate model). Therefore the system as a whole is bio-plausible in terms of online learning and synaptic updates using only local variables. For computational efficiency, we train first the hidden layer and then the output layer, however, both layers could be trained simultaneously.
We compared the test errors on the MNIST digit recognition data set for varying numbers of hidden neurons nh (Figure 3). The green PCA curve in Figure 3 ends at the vertical line nh = d = 784 because the number of principal components (PCs), i.e. the number of hidden units nh, is limited by the input dimension d. Since the PCs span the subspace of highest variance, classification performance quickly improves when adding more PCs for small nh and then saturates for larger nh, crossing the (dotted) Simple Perceptron line at nh = 25 PC hidden neurons. This intersection and other measures of effective dimensionality (see methods subsection A.1) suggest that the MNIST dataset lies mostly in a low-dimensional linear subspace with deff ≈ 25 ≪ d. SC performance (red curve) starts at a higher test error but improves as quickly with nh as PCA. With overcomplete representations (nh > d), the network achieves a remarkable classification performance of around 96% test accuracy. This suggests that the sparse representation and the features extracted by SC are indeed useful for classification, especially in the overcomplete case.
The performance of RP (blue curve) for small numbers of hidden units (nh < d) is worse than for feature extractors like PCA and SC. Also for large hidden layers, performance improves only slowly with nh, which is in line with theory (Barron, 1993) and findings in the extreme learning field (Huang et al., 2006). However, for large hidden layer sizes, RP outperforms SC. This suggests that the high dimensionality of the hidden layer is more important for reaching high performance than the features extracted by PCA or SC. Tests on the object recognition task CIFAR10 lead to the same conclusion, indicating that this observation is not entirely task specific (see subsection 3.3 for further analysis on CIFAR10). For all tested methods and hidden layer sizes, performance is significantly worse than the one reached with BP (black curve in Figure 3). In line with (Lillicrap et al., 2016), we find that FA (cyan curve) performs as well as BP on MNIST.
Universal function approximation theory predicts lower bounds for the squared error that follow a power law in the hidden layer size n_h for both BP (O(1/n_h)) and RP (O(1/n_h^{2/d})), where d is the input dimension (Barron et al. (1994); Barron (1993)). In the log-log plot in Figure 3 this would correspond to a factor d/2 = 784/2 = 392 between the slopes of the curves of BP and RP, or at least a factor d_eff/2 ≈ 10 using an effective dimensionality of MNIST (see methods A.1). We find a much faster decay of the classification error in RP and a smaller difference between the RP and BP slopes than suggested by the theoretical lower bounds.
3.3 LOCALIZED RANDOM RECEPTIVE FIELDS
There are good reasons to reduce the connectivity from all-to-all to localized receptive fields (Figure 1C): local connectivity patterns are observed in real neural circuits (Hubel & Wiesel, 1962), proven useful theoretically (Litwin-Kumar et al., 2017) and empirically (Bartunov et al., 2018), and successfully used in convolutional networks (CNNs). Even though this modification seems well justified from both biological and algorithmic sides, it reduces the generality of the algorithm to input data such as images where neighborhood relations between pixels (i.e. input dimensions) are important.
For random projections with localized receptive fields (l-RP), the centers of the patches were chosen at random positions in the input space and their weights were fixed at random values (as in subsection 3.1, see Figure 1C). We tested different patch sizes of p × p pixels and found an optimum around p ≈ 10, which is more pronounced for large hidden layer sizes nh (see Figure 4A). Note that p = 1 corresponds to resampling the data with random weights, and p = 28 recovers fully connected RP performance.
The main finding here is the significant improvement in performance using l-RP: the optimum around p ≈ 10 almost reaches BP performance for nh = 5000 hidden neurons (blue arrow in Figure 4B). As expected, l-RP and the LIF rate model of l-RP in subsection 3.1 perform equally well. To achieve a fair comparison, BP and SC were also tested with localized receptive fields (l-BP, l-SC, see Figure 4B). These algorithms also seem to benefit from localized connectivity (also with an optimum for patch size p = 10), however, not as much as RP. This makes l-RP a strong competitor of SC (and also FA, see Figure 3) as a bio-plausible algorithm in the regime of large, overcomplete hidden layers nh > d.
Since the classification performances of l-RP and l-BP are very close for layer sizes above nh = 5000, we investigated the misclassified MNIST digits for both algorithms. We find that 75% of the (≈ 125) misclassified digits of l-BP (nh = 5000) are contained in the misclassified ones of l-RP (nh = 5000). This means that in roughly 75% of the cases in which l-RP fails, the reference algorithm l-BP fails as well, suggesting that these digits are particularly hard to recognize for networks with one hidden layer. We trained networks with up to nh = 100000 hidden neurons to test if (l-)RP can finally reach (l-)BP performance, since the latter saturates for large nh (see Figure 4B). Indeed, for simulations with nh = 100000 and p = 10, l-BP and l-RP performance was not significantly different any more, both being at 1.2% test error.
To test whether l-RP only works for the relatively simple MNIST data set (centered digits, non-informative margin pixels, no clutter, uniform features and perspective, etc.) or generalizes to more difficult tasks, we applied it to the CIFAR10 data set (Krizhevsky, 2013). We first reproduced a typical benchmark performance of a fully connected network with one hidden layer trained with standard BP (≈ 56% test accuracy, nh = 5000, see also Lin & Memisevic (2016)). Again, l-RP outperforms the unsupervised methods PCA and l-SC in the case of large, overcomplete hidden layers (see Table 2). Furthermore, as on MNIST, classification performance increases with increasing hidden layer size nh and localized receptive fields perform better than full connectivity for all methods.
Also on CIFAR10, l-RP comes close to the performance of the reference algorithm l-BP; however, the difference between l-RP and l-BP is larger than on MNIST. Given that state-of-the-art performance on the CIFAR10 dataset with deep convolutional neural networks is close to 98% (e.g. Real et al. (2018)), the limitations of l-RP and the difference in difficulty between MNIST and CIFAR10 become apparent.
4 DISCUSSION
The rules that govern plasticity of synapses deep in the brain remain elusive. In contrast to bio-plausible deep learning based on approximations of the backpropagation algorithm, we focused
here on training a readout layer with a supervised, local learning rule, combined with a single hidden layer whose weights are either fixed and random or trained with unsupervised, local learning rules.
To our surprise, randomly initialized fixed weights (RP) of large hidden layers lead to better classification performance than training them with unsupervised methods like PCA or sparse coding (SC). This implies that the inductive bias of PCA and sparse coding is not well suited for the task of digit classification and object recognition. It may be interesting to search for alternative unsupervised, local learning rules with a stronger inductive bias.
Replacing all-to-all connectivity with localized input filters is such an inductive bias that was already seen to be useful in other models (Bartunov et al., 2018) and proved to be particularly useful in conjunction with randomly initialized static weights. Already for a hidden layer size of 5000 neurons the performance of l-RP almost reaches the performance of backpropagation on MNIST. Furthermore, performance scaling with the number of hidden units nh was found to be orders of magnitudes better than the lower bound suggested by universal function approximation theory (Barron, 1993).
Since we wanted to keep our models as simple as possible, we used online (no mini-batches) stochastic gradient descent (SGD) with a constant learning rate in all our experiments. There are many known ways to further tweak the final performance, e.g. with adaptive learning rate schedules or data augmentation, but our goal here was to demonstrate that even the simple model with localized random projections and spike timing dependent plasticity with a constant learning rate achieves results that are comparable with more elaborate approaches that use e.g. convolutional layers with weight sharing (Panda & Roy, 2016), backpropagation approximations (Lee et al., 2016), multiple hidden layers (Lillicrap et al., 2016), dendritic neurons (Sacramento et al., 2017), recurrence (Diehl & Cook, 2015) or conversion from rate to spikes (Diehl et al., 2015).
Above 98% accuracy we have to take into account a saturating effect of the network training: better models will only lead to subtle improvements in accuracy. It is not obvious whether improvements are really a proof of having achieved deep learning or just the result of tweaking the models towards the peculiarities of the MNIST dataset (centered digits, non-informative margin pixels, no clutter, uniform features and perspective etc.). We observed that more challenging data sets such as CIFAR10 clearly highlight the limitations of l-RP and thus are better suited to test deep learning capabilities. We are aware that state-of-the-art deep learning has moved from MNIST to harder datasets, such as ImageNet (Deng et al., 2009), long ago. Yet MNIST seems to be the current reference task for most bio-plausible deep learning models (see section 2 and Table 8).
In this paper we presented a new MNIST benchmark for bio-plausible spiking networks. Using localized random projections (l-RP) and STDP learning, our spiking LIF model reached 98.1% test accuracy on MNIST which lies within the range of current benchmarks for bio-plausible models for deep learning (see section 2 and Table 8). Our network model is particularly simple, i.e. it has only one trainable layer and does not depend on sophisticated architectural or algorithmic features (e.g. to approximate backpropagation). Instead it relies on the properties of high-dimensional localized random projections. We suggest that novel, progressive approaches to bio-plausible deep learning should significantly outperform the benchmark presented here.
A METHODS
A.1 RATE NETWORK MODEL
We use a 3-layer (input l0, hidden l1 = lh and output l2) feed-forward rate-based architecture with layer sizes n0 (input), n1 (hidden) and n2 (output, with n2 = number of classes). The layers are connected via weight matrices W1 ∈ R^{n1×n0} and W2 ∈ R^{n2×n1} and each neuron receives bias from the bias vectors b1 ∈ R^{n1} and b2 ∈ R^{n2} respectively (see Figure 1). The neurons themselves are nonlinear units with an element-wise, possibly layer-specific, nonlinearity a_i = φ_l(u_i). The feed-forward pass of this model thus reads
$$u_{l+1} = W_{l+1}\,a_l + b_{l+1}, \qquad a_{l+1} = \varphi_{l+1}(u_{l+1}). \tag{4}$$
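A direct translation of Equation 4 into code (ReLU is used here as in the supervised reference models below; this is a sketch, not the authors' released implementation):

```python
import numpy as np

def relu(u):
    return np.maximum(u, 0.0)

def forward(a0, W1, b1, W2, b2, phi1=relu, phi2=relu):
    """Feed-forward pass of the 3-layer rate network (Equation 4)."""
    u1 = W1 @ a0 + b1
    a1 = phi1(u1)
    u2 = W2 @ a1 + b2
    a2 = phi2(u2)
    return u1, a1, u2, a2
```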
The simple perceptron (SP) only consists of one layer (l2, W2 ∈ R^{n2×n0}, b2 ∈ R^{n2}). The sparse coding (SC) model assumes recurrent inhibition within the hidden layer l1. This inhibition is not modeled by an explicit inhibitory population, as required by Dale's principle (Dale, 1935); instead, direct, plastic, inhibitory synapses V1 ∈ R^{n1×n1} are assumed between neurons in l1. Classification error variances in Figure 3 & Figure 4 are displayed as shaded, semi-transparent areas with the same colors as the corresponding curves. Their lower and upper bounds correspond to the 25% and 75% percentiles of at least 10 independent runs.
An effective dimensionality d_eff of the MNIST data set can be obtained, e.g., via eigen-spectrum analysis, keeping 90% of the variance. We obtain values around d_eff ≈ 20. The measure proposed in Litwin-Kumar et al. (2017) gives the same value d_eff ≈ 20. Another measure is the crossing of the PCA curve with the Simple Perceptron line in Figure 3 at nh = 25 (= d_eff). We checked that training a perceptron (1 hidden layer, nh = 1000, 10^7 iterations, ReLU, standard BP) on the first 25 PCs of MNIST leads to 1.7% test error (vs. 1.5% test error on the full MNIST data). Together, these findings suggest that the MNIST dataset lies mostly in a low-dimensional linear subspace with d_eff ≈ 25 ≪ d. The MNIST (& CIFAR10) data was rescaled to values in [0,1] and mean centered, which means that the pixel-wise average over the data was subtracted from the pixel values of every image. The code for the implementation of our rate network model will be available online upon acceptance.
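The eigen-spectrum measure of d_eff mentioned above can be computed as follows (a minimal sketch; the 90%-variance criterion is the one stated in the text):

```python
import numpy as np

def effective_dimensionality(X, variance_kept=0.90):
    """Smallest number of principal components that keep the given fraction
    of the total variance. X: (n_samples, d) data matrix, rows are images."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    var = s ** 2                      # eigenvalues of the (scaled) covariance
    cum = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cum, variance_kept) + 1)
```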
A.2 UNSUPERVISED TECHNIQUES
A.2.1 PRINCIPAL COMPONENT ANALYSIS (PCA)
In this paper we do not implement PCA learning explicitly as a neural learning algorithm but by a standard PCA algorithm (https://github.com/JuliaStats/MultivariateStats.jl). For d-dimensional data such algorithms output the values of the n ≤ d first principal components as well as the principal subspace projection matrix P ∈ R^{n×d}. This matrix can directly be used as feed-forward matrix W1 in our network since the rows of P correspond to the projections of the data onto the single principal components. In other words, each neuron in the hidden layer l1 extracts another principal component of the data.
Since PCA is a linear model, the biases b1 were set to 0 and the nonlinearity was chosen linear, i.e. φ1(u) = u. With this, we can write the (trained) feed-forward pass of the first layer of our PCA model as follows:

$$a_1 = u_1 = W_1 \cdot a_0 \quad \text{with} \quad W_1 = P \tag{5}$$
Since the maximum number of PCs that can be extracted is the dimensionality of the data, nmax = d, the number of neurons in the hidden layer n1 is limited by d. This makes PCA unusable for overcomplete hidden representations as investigated for SC and RP.
Consistency between the used standard algorithm and neural implementations of PCA (“Sanger's rule”, Sanger (1989)) was checked by comparing the extracted PCs and visualizing the learned projections (rows of P) for the case of 30 extracted PCs, i.e. n = 30.
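For illustration, the projection matrix P (and hence W1) can be obtained from the data with a plain SVD instead of the Julia package referenced above; a sketch:

```python
import numpy as np

def pca_projection_matrix(X, n_components):
    """Principal subspace projection matrix P (n_components x d) whose rows
    are the leading principal axes; used directly as W1 = P (Equation 5).
    X: (n_samples, d) data matrix."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_components]

# e.g. W1 = pca_projection_matrix(train_images, n_components=30)
```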
A.2.2 SPARSE CODING (SC)
For d-dimensional data, SC aims at finding a dictionary W ∈ R^{h×d} of features that leads to an optimal representation a1 ∈ R^h which is sparse, i.e. has as few non-zero elements as possible. The corresponding optimization problem reads:

$$W^{opt},\, a_1^{opt} = \underset{W,\,a_1}{\arg\min}\; L(W, a_1), \qquad L(W, a_1) = \frac{1}{2}\,\big\|a_0 - W^\top a_1\big\|_2^2 + \lambda\,\|a_1\|_1. \tag{6}$$
Since this is a nonlinear optimization problem with latent variables (hidden layer), it cannot be solved directly. Usually an iterative two-step procedure is applied (akin to the expectation-maximization algorithm) until convergence: first, optimize with respect to the activities a1 with fixed weights W; second, assuming fixed activities, perform a gradient step w.r.t. the weights.
We implement a biologically plausible SC model using a 2-layer network with recurrent inhibition and local plasticity rules similar to the one in Brito & Gerstner (2016). For a rigorous motivation (and derivation) that such a network architecture can indeed implement sparse coding we refer to Olshausen & Field (1997); Zylberberg et al. (2011); Pehlevan & Chklovskii (2015); Brito & Gerstner (2016). We apply the above-mentioned two-step optimization procedure to solve the SC problem given our network model. The following two steps are repeated in alternation until convergence of the weights:
1. Optimizing the hidden activations: We assume given and fixed weights W1 and V1 and ask for the optimal hidden activations a1. Because of the recurrent inhibition V1, the resulting equation for the hidden activities a1 is nonlinear and implicit. To solve this equation iteratively, we simulate the dynamics of a neural model with time-dependent internal and external variables u1(t) and a1(t) respectively. The dynamics of the system is then given by (Zylberberg et al. (2011); Brito & Gerstner (2016)):

$$\tau_u\,\frac{d u_1(t)}{dt} = -u_1(t) + \big(W_1 a_0(t) - V_1 a_1(t)\big), \qquad a_1(t) = \varphi(u_1(t)) \tag{7}$$

In practice the dynamics is simulated for N_iter = 50 iterations, which leads to satisfying convergence (change in hidden activations < 5%).
2. Optimizing the weights: Now the activities a1 are kept fixed and we update the weights following the gradient of the loss function. The weight update rules are Hebbian-type local learning rules (Brito & Gerstner, 2016):

$$\Delta W_{1,ji} = \alpha_w \cdot a_{0,i} \cdot a_{1,j}, \qquad \Delta V_{1,jk} = \alpha_v \cdot a_{1,k} \cdot \big(a_{1,j} - \langle a_{1,j} \rangle\big) \tag{8}$$

Here ⟨·⟩ is a moving average (low-pass filter) with time constant τ_mav; at the beginning of the simulation (or after a new pattern presentation) this time constant is ramped up from 0 to τ_mav over the first τ_mav. The rows of W1 are normalized after each update; alternatively, this can be achieved by adding a weight decay term. Additionally, the values of V1 are clamped to positive values after each update to ensure that the recurrent input is inhibitory. Also, the diagonal of V1 is kept at zero to avoid self-inhibition.
During SC learning, at every iteration, the variables u1(t) and a1(t) are reset (to avoid transients) before an input is presented. Then, for each of the N training iterations, Equation 7 is iterated for N_iter steps and the weights are updated according to Equation 8.
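A compact sketch of this two-step procedure is given below (our assumptions: ReLU as the rectifying nonlinearity φ, illustrative values for dt, τ_u and the learning rates, and the moving average passed in as a precomputed vector a1_avg):

```python
import numpy as np

def sc_hidden_code(a0, W1, V1, n_iter=50, tau_u=10.0, dt=1.0):
    """Step 1: iterate the recurrent dynamics of Equation 7 for fixed input a0."""
    u1 = np.zeros(W1.shape[0])
    a1 = np.zeros_like(u1)
    for _ in range(n_iter):
        u1 = u1 + (dt / tau_u) * (-u1 + W1 @ a0 - V1 @ a1)
        a1 = np.maximum(u1, 0.0)              # rectifying nonlinearity phi
    return a1

def sc_weight_update(W1, V1, a0, a1, a1_avg, lr_w=1e-3, lr_v=1e-3):
    """Step 2: local Hebbian updates of Equation 8 plus the constraints
    described in the text (row normalization, inhibitory sign, no self-loops)."""
    W1 = W1 + lr_w * np.outer(a1, a0)
    W1 = W1 / (np.linalg.norm(W1, axis=1, keepdims=True) + 1e-12)
    V1 = V1 + lr_v * np.outer(a1 - a1_avg, a1)
    V1 = np.maximum(V1, 0.0)                  # keep recurrent input inhibitory
    np.fill_diagonal(V1, 0.0)                 # no self-inhibition
    return W1, V1
```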
For comparison with localized RP (l-RP, see subsubsection A.2.3), a localized version of SC was implemented with the same initialization of W1 as in l-RP. The usual SC learning rule (Equation 8) is applied and the localized connectivity is kept by clamping weights outside the receptive fields to zero. Lateral inhibition weights V1 are initialized and learned as in normal SC (full competition is kept). For a detailed parameter list, see Table 3.
A.2.3 RANDOM PROJECTIONS (RP)
For RP, the weight matrix W1 between input and hidden layer is initialized randomly, W1 ∼ N(0, σ²), with variance-preserving scaling σ² ∝ 1/n0. The biases b1 are initialized by sampling from a uniform distribution U([0, 0.1]). In practice we used the specific initialization

$$W_1 \sim \mathcal{N}(0, \sigma^2), \quad \sigma^2 = \frac{1}{100\,n_0}, \qquad b_1 \sim \mathcal{U}([0, 0.1]) \tag{9}$$
for RP (keeping the weights fixed), SC, SP and also BP & FA (for both layers, i.e. also W2, b2 with n1 in place of n0). The initialization of the biases b was found to be uncritical in the range of [0, 0.1].
For localized RP (l-RP), neurons in the hidden layer receive input only from a fraction of the input units called a receptive field. Receptive fields are chosen to form a compact patch over neighbouring pixels in the image space. For each hidden neuron a receptive field of size p × p (p ∈ N) input neurons is created at a random position in the input space. The weight values for each receptive field (rf) and the biases are initialized as:
$$W_{1,rf} \sim \mathcal{N}(0, \sigma_{rf}^2), \quad \sigma_{rf}^2 = \frac{c}{100\,p}, \tag{10}$$

$$b_1 \sim \mathcal{U}([0, 0.1]), \tag{11}$$

where the optimization factor c = 3 was found empirically through a grid-search optimization of classification performance. For exact parameter values, see Table 4.
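Putting the pieces together, the fixed l-RP hidden layer can be constructed as in the following sketch (one assumption on our side: patches are kept fully inside the image, whereas the text only states that their centers are placed at random positions):

```python
import numpy as np

def localized_random_projections(n_hidden, img_side=28, p=10, c=3.0, seed=0):
    """Fixed l-RP hidden weights: each hidden unit gets a random p x p
    receptive field with Gaussian weights (Equations 10-11); all weights
    outside the receptive field are zero."""
    rng = np.random.default_rng(seed)
    W1 = np.zeros((n_hidden, img_side * img_side))
    sigma_rf = np.sqrt(c / (100.0 * p))
    for i in range(n_hidden):
        r0 = rng.integers(0, img_side - p + 1)   # top-left corner of the patch
        c0 = rng.integers(0, img_side - p + 1)
        mask = np.zeros((img_side, img_side), dtype=bool)
        mask[r0:r0 + p, c0:c0 + p] = True
        W1[i, mask.ravel()] = rng.normal(0.0, sigma_rf, size=p * p)
    b1 = rng.uniform(0.0, 0.1, size=n_hidden)
    return W1, b1
```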
A.3 CLASSIFIER & SUPERVISED REFERENCE ALGORITHMS
The connections W2 from hidden to output layer are updated by a simple delta rule, which is equivalent to BP in a single-layer network and hence is bio-plausible. To have a reference for our bio-plausible models (Figure 1B), we compare them to networks with the same architecture (number of layers, neurons, connectivity) but trained in a fully supervised way with standard backpropagation (Figure 1A). The forward pass of the model reads:
$$u_{l+1} = W_{l+1}\,a_l + b_{l+1} \tag{12}$$
$$a_{l+1} = \varphi_{l+1}(u_{l+1}) \tag{13}$$
The error ẽL is calculated from the comparison of activations in the last layer aL with the (one-hot encoded) target activations tgt, with respect to the chosen loss function: mean squared error (MSE),
$$\tilde{e}_L = \mathrm{tgt} - a_L \tag{14}$$
$$L_{MSE} = \frac{1}{2}\,\|\mathrm{tgt} - a_L\|_2^2 \tag{15}$$
or softmax/cross-entropy loss (CE),
$$p = \mathrm{softmax}(a_L) \tag{16}$$
$$\tilde{e}_L = \mathrm{tgt} - p \tag{17}$$
$$L_{CE} = -\sum_{i=1}^{n_L} \mathrm{tgt}_i \cdot \log(p_i) \tag{18}$$
Classification results (on the test set) for MSE- and CE-loss were found to be not significantly different. Rectified linear units (ReLU) were used as nonlinearity ϕ(ul) for all layers (MSE-loss) or for the first layer only (CE-loss).
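For concreteness, the two error/loss choices of Equations 14–18 can be written as follows (a sketch; the stability shift and the small log offset are our additions):

```python
import numpy as np

def mse_error(tgt, aL):
    """Output error and loss for the MSE case (Equations 14-15)."""
    e_tilde = tgt - aL
    return e_tilde, 0.5 * np.sum(e_tilde ** 2)

def ce_error(tgt, aL):
    """Output error and loss for the softmax/CE case (Equations 16-18)."""
    z = aL - aL.max()                      # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return tgt - p, -np.sum(tgt * np.log(p + 1e-12))
```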
In BP the weight and bias update is obtained by stochastic gradient descent, i.e. $\Delta W_{l,ij} \propto \frac{\partial L}{\partial W_{l,ij}}$. The full BP algorithm for deep networks reads (Rumelhart et al., 1986):
$$e_L = \varphi'_L(u_L) \odot \tilde{e}_L, \qquad e_{l-1} = \varphi'_{l-1}(u_{l-1}) \odot \big(W_l^\top e_l\big),$$
$$\Delta W_l = \alpha \cdot e_l \otimes a_{l-1}, \qquad \Delta b_l = \alpha \cdot e_l \tag{19}$$

where ⊙ stands for element-wise multiplication, ⊗ is the outer (dyadic) product, φ'_l(·) is the derivative of the nonlinearity and α is the learning rate. FA (Lillicrap et al., 2016) uses a fixed random matrix R_l instead of the transpose of the weight matrix W_l^⊤ for the error-backpropagation step in Equation 19.
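A sketch of one SGD step of Equation 19 for the two-weight-layer network, assuming ReLU in both layers (the MSE case above); passing a fixed random matrix turns the same code into Feedback Alignment:

```python
import numpy as np

def relu_prime(u):
    return (u > 0).astype(float)

def bp_step(W1, b1, W2, b2, a0, u1, a1, u2, e_tilde, lr, R2=None):
    """One SGD step of Equation 19. If a fixed random matrix R2 (same shape
    as W2) is given, the error is backpropagated through R2.T instead of
    W2.T -- i.e. Feedback Alignment."""
    e2 = relu_prime(u2) * e_tilde                 # output-layer error
    B = (W2 if R2 is None else R2).T
    e1 = relu_prime(u1) * (B @ e2)                # hidden-layer error
    W2 = W2 + lr * np.outer(e2, a1); b2 = b2 + lr * e2
    W1 = W1 + lr * np.outer(e1, a0); b1 = b1 + lr * e1
    return W1, b1, W2, b2
```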
To allow for a fair comparison with l-RP, BP and FA were implemented with full connectivity and with localized receptive fields with the same initialization as in l-RP. During training with BP (or FA), the usual weight update (Equation 19) was applied to the weights in the receptive fields, keeping all other weights at zero. The exact parameter values can be found in Table 5.
A.4 SPIKING IMPLEMENTATION
A.4.1 LIF MODEL
The spiking simulations were performed with a custom-made event-based leaky integrate-and-fire (LIF) integrator written in the Julia language. For large network sizes, the exact, event-based integration can be inefficient due to a large frequency of events. To alleviate this dramatic slow-down, an Euler-forward integration was added to the framework. For a sufficiently small time discretization (e.g. Δt ≤ 5·10^{-2} ms for the parameters given in Table 6) the error of this approximate integration does not have negative consequences on the learning outcome. Consistent results were obtained using event-based and Euler-forward integration. The code of this framework will be available online upon acceptance.
The dynamics of the LIF network is given by:
$$\tau_m\,\frac{d u_i(t)}{dt} = -u_i(t) + R\,I_i(t), \qquad I_i(t) = I_i^{ff}(t) + I_i^{ext}(t) = \sum_{j,f} w_{ij}\,\epsilon\big(t - t_j^f\big) + I_i^{ext}(t),$$

$$\text{with the spiking condition: if } u_i(t) \geq \vartheta_i:\; u_i \to u_{reset} \tag{20}$$

where u_i(t) is the membrane potential, τ_m the membrane time constant, R the membrane resistance, w_ij are the synaptic weights, ε(t) = δ(t)/τ_m (with τ_m in seconds) is the post-synaptic potential evoked by a pre-synaptic spike arrival, ϑ_i is the spiking threshold and u_reset the reset potential after a spike. The input is split into a feed-forward (I^{ff}(t)) and an external (I^{ext}(t)) contribution. Each neuron in the input layer l0 (n0 = d) receives only external input I^{ext} proportional to one pixel value in the data. To avoid synchrony between the spikes of different neurons, the starting potentials and parameters (e.g. thresholds) of the different neurons are drawn from a (small) range around the respective mean values.
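The Euler-forward variant of Equation 20 mentioned in subsubsection A.4.1 amounts to the following update (a sketch; refractoriness and the per-neuron parameter jitter are omitted for brevity):

```python
import numpy as np

def lif_euler_step(u, I, dt, tau_m, R, theta, u_reset):
    """One Euler-forward step of the LIF dynamics in Equation 20 for a
    vector of membrane potentials u driven by the total input current I."""
    u = u + (dt / tau_m) * (-u + R * I)
    spikes = u >= theta
    u = np.where(spikes, u_reset, u)      # reset the neurons that spiked
    return u, spikes
```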
We implement STDP using post-synaptic spike traces tr_i(t) and a post-synaptic target trace tgt_i(t):

$$\tau_{tr}\,\frac{d\,\mathrm{tr}_i(t)}{dt} = -\mathrm{tr}_i(t) + \sum_f \delta\big(t - t_i^f\big), \qquad \Delta w_{ij} = g\big(\mathrm{tr}_i^{post}(t), \mathrm{tgt}_i^{post}(t)\big)\,\delta\big(t - t_j^f\big) \tag{21}$$

with the plasticity function

$$g\big(\mathrm{tr}_i^{post}(t), \mathrm{tgt}_i^{post}(t)\big) = \alpha \cdot \big(\mathrm{tgt}_i^{post}(t) - \mathrm{tr}_i^{post}(t)\big). \tag{22}$$
To train the network, we present patterns to the input layer and a target trace to the output layer. The MNIST input is scaled by the input amplitude amp_inp; the targets tgt(t) of the output layer are the one-hot coded classes, scaled by the target amplitude amp_tgt. Additionally, every neuron receives a static bias input I^{ext}_{bias} ≈ ϑ to avoid silent units in the hidden layer. Every pattern is presented as fixed input for a time T_pat and the LIF dynamics as well as the learning evolve according to Equation 20 and Equation 21 respectively. To ensure stability during transients (see Naud et al. (2008) and references therein), learning is disabled after pattern switches for a duration of about T_trans = 4τ_m. With the parameters we used for the simulations (see Table 6), firing rates of single neurons in the whole network stayed below 1 kHz, which was considered a bio-plausible regime. For the toy example in Figure 2A & B we used static input and target with the parameters amp_inp = 40, amp_tgt = 5 (i.e. target trace = 0.005), ϑ_mean = 20, σ_ϑ = 0, τ_m = 50, α = 1.2·10^{-5}. For the raster plot in Figure 2C we used amp_inp = 300, amp_tgt = 300, ϑ_mean = 20, σ_ϑ = 0, τ_m = 50, α = 1.2·10^{-5}, T_pat = 50 ms, T_trans = 100 ms.
A.4.2 LIF RATE MODEL
The LIF dynamics can be mapped to a rate model described by the following equations:
$$u_l = W_l\,a_{l-1} + R\,I^{ext}, \qquad a_l = \varphi_{LIF}(u_l), \qquad \Delta w_{ij} = \tilde{g}\big(a_j^{pre}, a_i^{post}, \mathrm{tgt}_i^{post}\big) \tag{23}$$

with the (element-wise) LIF activation function φ_LIF(·) and the modified plasticity function g̃(·):

$$\varphi_{LIF}(u_k) = \left[\Delta^{abs} - \tau_m \ln\!\left(1 - \frac{\vartheta_k}{u_k}\right)\right]^{-1} \tag{24}$$

$$\tilde{g}\big(a_j^{pre}, a_i^{post}, \mathrm{tgt}_i^{post}\big) = \tilde{\alpha} \cdot a_j^{pre} \cdot \big(\mathrm{tgt}_i^{post} - a_i^{post}\big) \tag{25}$$
The latter can be obtained by integrating the STDP rule in Equation 21 and taking the expectation. Most of the parameters of the spiking and the LIF rate models can be mapped to each other directly (see Tables 6 & 7). The learning rate α must be adapted since the LIF weight change depends on the presentation time of a pattern T_pat. In the limit of long pattern presentation times (T_pat ≫ τ_m, τ_tr), the transition from the learning rate of the LIF rate model (α̃) to the one of the spiking LIF model (α) is

$$\alpha = \frac{1000\ \mathrm{ms}}{T_{pat}\,[\mathrm{ms}]} \cdot 1000 \cdot \tilde{\alpha}, \tag{26}$$
where the second factor comes from a unit change from Hz to kHz. It is also possible to train the weight matrices computationally efficiently in the LIF rate model and plug them into the spiking LIF model afterwards (as in e.g. Diehl et al. (2015)). The reasons for the remaining difference in performance presumably lie in transients and single-spike effects that cannot be captured by the rate model. Also, the spiking network was only trained with 10^6 image presentations (compared to 10^7 for the rate model) due to long simulation times.
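The LIF activation function of Equation 24 can be implemented directly; a sketch (vectorized over an array of membrane potentials, with a single scalar threshold instead of the per-neuron ϑ_k for simplicity):

```python
import numpy as np

def phi_lif(u, delta_abs, tau_m, theta):
    """LIF activation function of Equation 24: steady-state firing rate of an
    LIF neuron under constant input u; zero at or below threshold."""
    u = np.asarray(u, dtype=float)
    rate = np.zeros_like(u)
    above = u > theta
    rate[above] = 1.0 / (delta_abs - tau_m * np.log(1.0 - theta / u[above]))
    return rate
```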
B PARAMETER TABLES
In the following tables we use scientific E-notation XeY = X · 10^Y for better readability. For all simulations, we scaled the learning rate proportionally to 1/n_h for n_h > 5000 to ensure convergence.
C BIO-PLAUSIBLE MNIST BENCHMARKS

Review

1. What is the focus of the paper regarding image classification tasks?
2. What are the strengths and weaknesses of the proposed biologically plausible network architecture?
3. How does the reviewer assess the significance and novelty of the work compared to prior research?
4. Are there any concerns regarding the training process and performance evaluation of the network?
5. How does the reviewer view the relevance of the employed architecture to biological vision models?
In this work, the authors benchmark a biologically plausible network architecture for image classification. The employed architecture consists of one hidden layer, where the input-to-hidden weights W1 are either trained with PCA or sparse coding, or are kept fixed after random initialization. The output layer units are modeled as leaky integrate-and-fire (LIF) neurons and the hidden-to-output connections W2 are tuned using a rate model that mimics STDP learning dynamics in the LIF neurons. The authors compare classification results on the MNIST and CIFAR10 datasets, where they also include results of an equivalent feed-forward network trained with standard error backpropagation.
The authors find that in the bio-plausible network with a large hidden layer, unsupervised training of the input-to-hidden-layer weights does not lead to classification performance as good as that achieved with fixed random projections. They furthermore find that localized patch-style connectivity from input to hidden layer further improves the classification performance.
Overall the paper is well-written and easy to follow, but I fail to see any significant contribution in this work. As compared to the findings of Hubel & Wiesel, how bio-plausible are random projections for low-level feature representation? One may also argue that unsupervised tuning of W1 may require a lot more training data than available in MNIST or CIFAR10. The authors also need to take the capacity of their network into account; they draw conclusions based on a biologically-plausible network, but one that only has two feed-forward layers. It is hard to imagine that a more accurate biologically-plausible vision model would prefer random projections over low-level feature extractors that are well-tuned to the input statistics.
Regarding the observation that localized fields perform better than densely connected layers, I find it simply in line with physiological findings (starting from the work of Hubel & Wiesel) and artificial neural network architectures they inspired like CNNs. |
ICLR | Title
Localized random projections challenge benchmarks for bio-plausible deep learning
Abstract
Similar to models of brain-like computation, artificial deep neural networks rely on distributed coding, parallel processing and plastic synaptic weights. Training deep neural networks with the error-backpropagation algorithm, however, is considered bio-implausible. An appealing alternative to training deep neural networks is to use one or a few hidden layers with fixed random weights or trained with an unsupervised, local learning rule and train a single readout layer with a supervised, local learning rule. We find that a network of leaky-integrate-andfire neurons with fixed random, localized receptive fields in the hidden layer and spike timing dependent plasticity to train the readout layer achieves 98.1% test accuracy on MNIST, which is close to the optimal result achievable with errorbackpropagation in non-convolutional networks of rate neurons with one hidden layer. To support the design choices of the spiking network, we systematically compare the classification performance of rate networks with a single hidden layer, where the weights of this layer are either random and fixed, trained with unsupervised Principal Component Analysis or Sparse Coding, or trained with the backpropagation algorithm. This comparison revealed, first, that unsupervised learning does not lead to better performance than fixed random projections for large hidden layers on digit classification (MNIST) and object recognition (CIFAR10); second, networks with random projections and localized receptive fields perform significantly better than networks with all-to-all connectivity and almost reach the performance of networks trained with the backpropagation algorithm. The performance of these simple random projection networks is comparable to most current models of bio-plausible deep learning and thus provides an interesting benchmark for future approaches.
1 INTRODUCTION
While learning a new task, synapses deep in the brain undergo task-relevant changes (HayashiTakagi et al., 2015). These synapses are often many neurons downstream of sensors and many neurons upstream of actuators. Since the rules that govern such changes deep in the brain are poorly understood, it is appealing to draw inspiration from deep artificial neural networks (DNNs) (LeCun et al., 2015). DNNs and the cerebral cortex process information in multiple layers of many neurons (Yamins & DiCarlo, 2016; Kriegeskorte, 2015) and in both, the artificial and the biological neural networks, learning depends on changes of synaptic strengths (Hebbian theory, Hebb (1949)). However, learning rules in the brain are most likely different from the backpropagation algorithm (Crick, 1989; Marblestone et al., 2016; Rumelhart et al., 1986). Furthermore, biological neurons communicate by sending discrete spikes as opposed to real-valued numbers used in DNNs. Differences like these suggest that there exist other, possibly equally powerful, algorithms that are capable to solve the same tasks by using different, more biologically plausible mechanisms. Thus, an important question in computational neuroscience is how to explain the fascinating learning capabilities of the brain with bio-plausible network architectures and learning rules. On the other hand, from a pure machine learning perspective there is increasing interest in neuron-like architectures with local learning rules, mainly motivated by the current advances in neuromorphic hardware (Nawrocki et al., 2016).
Image recognition is a popular task to test the proposed models. Because of its relative simplicity and popularity, the MNIST dataset (28×28-pixel grey level images of handwritten digits, LeCun
(1998)) is often used for benchmarking. Typical performances of existing models are around 97- 99% classification accuracy on the MNIST test set (see section 2 and Table 8). This value lies in the region of the benchmarks for a large class of classical DNNs trained with backpropagation but without data-augmentation or convolutional layers (see table in LeCun (1998)). Thus, accuracies around this value are assumed to be an empirical signature of backpropagation-like deep learning (Lillicrap et al., 2016; Sacramento et al., 2017). It is noteworthy, however, that several of the most promising approaches that perform well on MNIST have been found to fail on harder tasks (Bartunov et al., 2018).
An alternative to supervised training of all layers with backpropagation are fixed random weights, as proposed by general approximation theory (Barron, 1993) and the extreme learning field (Huang et al., 2006), or unsupervised training in the first layers, combined with supervised training of a readout layer. Unsupervised methods are appealing since they can be implemented with local learning rules, see e.g. “Oja’s rule” (Oja, 1982; Sanger, 1989) for principal component analysis or algorithms in Olshausen & Field (1997); Rozell et al. (2008); Liu & Jia (2012); Brito & Gerstner (2016) for sparse coding. A single readout layer can also be implemented with a local delta-rule (also called “perceptron rule”), which may be implemented by pyramidal spiking neurons with dendritic prediction of somatic spiking (Urbanczik & Senn, 2014). Since it is pointless to simply stack multiple fully connected layers trained with principal component analysis or sparse coding (Olshausen & Field, 1997) we investigate here networks with a single hidden layer.
The main objective of this study was to see how far we can go with a single hidden layer and local learning rules in networks of spiking neurons. To support the design choices of the spiking model, we compared the classification performance of different rate networks: networks trained with backpropagation, networks where the hidden layer is trained with unsupervised methods, and networks with fixed random projections in the hidden layer. Since sparse connectivity is sometimes superior to dense connectivity (Litwin-Kumar et al., 2017; Bartunov et al., 2018) and successful convolutional networks leverage local receptive fields, we investigated also sparse connectivity between input and hidden layer, where each hidden neuron receives input only from a few neighboring pixels of the input image.
2 RELATED WORK
In recent years, many bio-plausible approaches to deep learning have been proposed (see e.g. Marblestone et al. (2016) for a review). For achieving performances similar to deep learning methods, existing approaches usually use either involved architectures or elaborate mechanisms to approximate the backpropagation algorithm. Examples include the use of convolutional layers (Tavanaei & Maida (2016); Lee et al. (2018); Kheradpisheh et al. (2018) and table therein), dendritic computations (Hussain et al., 2014; Guergiuev et al., 2016; Sacramento et al., 2017) or approximations of the backpropagation algorithm such as feedback alignment (Lillicrap et al., 2016; Baldi et al., 2016; Nøkland, 2016; Samadi et al., 2017; Kohan et al., 2018; Bartunov et al., 2018) equilibrium propagation (Scellier & Bengio, 2017), membrane potential based backpropagation (Lee et al., 2016), restricted Boltzmann machines and deep belief networks (O’Connor et al., 2013; Neftci et al., 2014), (localized) difference target propagation (Lee et al., 2015; Bartunov et al., 2018), reinforcementsignal models like AuGMEnT (Rombouts et al., 2015) or approaches using predictive coding (Whittington & Bogacz, 2017). Many models implement spiking neurons to stress bio-plausibility (Liu et al. (2016); Neftci et al. (2017); Kulkarni & Rajendran (2018); Wu et al. (2018); Liu & Yue (2018) and table therein) or for coding efficiency (O’Connor et al., 2017). The conversion of DNNs to spiking neural networks (SNN) after training with backpropagation (Diehl et al., 2015) is a common technique to evade the difficulties of training with spikes. Furthermore, there are models including recurrent activity (Spoerer et al., 2017; Bellec et al., 2018) or even starting directly from realistic circuits (Delahunt & Kutz, 2018). We refer to Table 8 for a list of current bio-plausible MNIST benchmark models.
3 RESULTS
We study networks that consist of an input (l0), one hidden (l1) and an output-layer (l2) connected by weight matrices W1 and W2 (Figure 1). Training the hidden layer weights W1 with standard
supervised training involves (non-local) error backpropagation using the transposed weight matrix WT2 (Figure 1A). In the bio-plausible network considered in this paper (Figure 1B), the input-tohidden weights W1 are either learned with an unsupervised method (Principal Component Analysis or Sparse Coding) or are fixed random projections. The unsupervised methods assume recurrent inhibitory weights V1 between hidden units to implement competition.
3.1 SPIKING LOCALIZED RANDOM PROJECTIONS
We first present the results with networks of leaky integrate-and-fire (LIF) neurons. The network architecture is as in Figure 1B, but without the recurrent connections V1. For implementing localized Random Projections (l-RP) in the hidden layer weights W1, we first chose the centers of the localized receptive fields at random positions in the input space and then randomly chose the weights therein, see Figure 1C. The receptive field patches span p×p pixels around their center position (we used p = 10 for the 28×28-pixel MNIST data). The output layer weights W2 are trained with a supervised spike timing dependent plasticity (STDP) rule.
3.1.1 LIF AND STDP DYNAMICS
The spiking dynamics follow the usual LIF equations (see methods A.4) and the readout weights W2 evolve according to a supervised STDP delta rule using post-synaptic spike-traces tri(t) and a post-synaptic target trace tgti(t)
τtr dtri(t) dt = −tri(t) + ∑ f δ ( t− tfi ) ∆w2,ij = α · ( tgtposti (t)− tr post i (t) ) δ ( t− tfj ) . (1)
Thus, for a specific readout weight w2,ij , the post-synaptic trace is updated at every post-synaptic spike time tfi and the weight is updated at every pre-synaptic spike time t f j . The target trace is used for feeding in the one-hot coded, supervisory signal for the MNIST classification into the output layer (l2).
For a proof-of-principle and efficient parameter search we first investigate an LIF rate model. This rate model mimics the LIF dynamics by using the LIF activation function ϕLIF as nonlinearity,
rate(u) = ϕLIF (u) = [ ∆abs − τm ln ( 1− ϑ
u
)]−1 , (2)
where u is the membrane potential, ∆abs the refractory period, τm the membrane time constant and ϑ the firing threshold of the LIF model. Furthermore, it employs the rate-version of the STDP delta rule Equation 1 (see methods section A.4 for details)
∆wij = α̃ · rateprej · ( tgtrateposti − rate post i ) , (3)
where tgtrateposti is the post-synaptic target rate, corresponding to the post-synaptic target trace tgti(t) in Equation 1. We obtained similar spiking and weight dynamics when the readout weights W2 were either directly trained with STDP or trained with the LIF rate model and then plugged into the spiking LIF network (as done in e.g. Diehl et al. (2015)).
To illustrate the LIF and STDP dynamics, a toy example consisting of one pre- connected to one post-synaptic neuron was integrated for 650 ms. The pre- and post-synaptic membrane potentials show periodic spiking (Figure 2A) which induces post-synaptic spike traces and corresponding weight changes (Figure 2B), according to Equation 1. For the MNIST task, Figure 2C shows a raster plot for an exemplary training and testing protocol. During activity transients after pattern switches, learning is disabled until regular spiking is recovered. This is done, first, to ensure stability during activity transients (see Naud et al. (2008) and references therein) and second, to achieve decorrelation between the activities of subsequent patterns, as needed for stochastic gradient descent (SGD). During the testing period, learning is shut off permanently (see methods section A.4 for more details).
3.1.2 CLASSIFICATION RESULTS FOR LIF l-RP
When directly trained with the STDP rule in Equation 1 the spiking LIF l-RP model (nh = 5000 hidden units and patch size p = 10) reaches 98.1% test accuracy on MNIST. The corresponding LIF
rate model reaches 98.5% test accuracy. Transferring weights learned with the LIF rate model into the spiking LIF model resulted in similar accuracies as the LIF rate model. Table 1 compares the performances of the rate and spiking LIF l-RP models with the reference algorithm l-BP, which is a rate model trained with backpropagation, see subsection 3.2 and subsection 3.3 (for same hidden layer size nh and patch size p). We can see that the spiking LIF model almost reaches the performance of the corresponding rate model. The remaining gap (0.4%) between rate and spiking LIF model presumably stems from transients and the shorter training time of the spiking model (only 106 compared to 107 iterations due to long simulation times). Both, the rate and spiking LIF model of l-RP achieve accuracies close to the backpropagation reference algorithm l-BP and certainly lie in the range of current bio-plausible MNIST benchmarks, i.e. 97-99% test accuracy (see section 2 and Table 8). Based on these numbers we conclude that the spiking LIF model of localized random projections using STDP is capable of learning the MNIST task to a level that is competitive with known benchmarks for spiking networks.
3.2 BENCHMARKING RATE MODELS TRAINED WITH UNSUPERVISED LEARNING AND BACKPROPAGATION
To justify the design choices of the spiking model, we systematically investigated rate models with different methods to initialize or learn the hidden layer weights W1 (see Figure 1 and methods subsection A.1 for details). To set these hidden layer weights, we use either one of the unsupervised methods Principal Component Analysis (PCA) or Sparse Coding (SC), or train only the readout layer W2 and use fixed Random Projections (RP, as in subsection 3.1) for the hidden layer weights W1 (see Figure 1B). All these methods can be implemented with local, bio-plausible learning rules (Oja, 1982; Olshausen & Field, 1997). As a reference and upper performance bound, we train networks with the same architecture with standard backpropagation (BP, see Figure 1A). As a more bio-plausible approximation of BP, we include Feedback Alignment (FA, Lillicrap et al. (2016)) which uses fixed random feedback weights for error-backpropagation (see methods subsection A.3 for further explanation). A Simple Perceptron (SP) without a hidden layer serves as a simplistic reference, since it corresponds to direct classification of the input.
The hidden-to-output weights W2 are trained with standard stochastic gradient descent (SGD), using a one-hot representation of the class label as target. Since no error-backpropagation is needed for a single layer, the learning rule is local (“delta” or “perceptron”-rule, similar to Equation 3 of the LIF rate model). Therefore the system as a whole is bio-plausible in terms of online learning and synaptic updates using only local variables. For computational efficiency, we train first the hidden layer and then the output layer, however, both layers could be trained simultaneously.
We compared the test errors on the MNIST digit recognition data set for varying numbers of hidden neurons nh (Figure 3). The green PCA curve in Figure 3 ends at the vertical line nh = d = 784 because the number of principal components (PCs), i.e. the number of hidden units nh, is limited by the input dimension d. Since the PCs span the subspace of highest variance, classification performance quickly improves when adding more PCs for small nh and then saturates for larger nh, crossing the (dotted) Simple Perceptron line at nh = 25 PC hidden neurons. This intersection and other measures of effective dimensionality (see methods subsection A.1) suggest that the MNIST dataset lies mostly in a low-dimensional linear subspace with deff ≈ 25 d. SC performance (red curve) starts at a higher test error but improves as quickly with nh as PCA. With overcomplete representations (nh > d), the network achieves a remarkable classification performance of around 96 % test accuracy. This suggests that the sparse representation and the features extracted by SC are indeed useful for classification, especially in the overcomplete case.
The performance of RP (blue curve) for small numbers of hidden units (nh < d) is worse than for feature extractors like PCA and SC. Also for large hidden layers, performance improves only slowly with nh, which is in line with theory (Barron, 1993) and findings in the extreme learning field (Huang et al., 2006). However, for large hidden layers sizes, RP outperforms SC. This suggests that the high dimensionality of the hidden layers is more important for reaching high performance than the features extracted by PCA or SC. Tests on the object recognition task CIFAR10 lead to the same conclusion, indicating that this observation is not entirely task specific (see subsection 3.3 for further analysis on CIFAR10). For all tested methods and hidden layer sizes, performance is significantly worse than the one reached with BP (black curve in Figure 3). In line with (Lillicrap et al., 2016), we find that FA (cyan curve) performs as well as BP on MNIST.
Universal function approximation theory predicts lower bounds for the squared error that follow a power law in the hidden layer size nh for both BP (O(1/nh)) and RP (O(1/nh^(2/d)), where d is the input dimension; Barron et al. (1994); Barron (1993)). In the log-log plot in Figure 3 this would correspond to a factor d/2 = 784/2 = 392 between the slopes of the BP and RP curves, or at least a factor deff/2 ≈ 10 using an effective dimensionality of MNIST (see methods A.1). We find a much faster decay of the classification error for RP and a smaller difference between the RP and BP slopes than suggested by these theoretical lower bounds.
3.3 LOCALIZED RANDOM RECEPTIVE FIELDS
There are good reasons to reduce the connectivity from all-to-all to localized receptive fields (Figure 1C): local connectivity patterns are observed in real neural circuits (Hubel & Wiesel, 1962), proven useful theoretically (Litwin-Kumar et al., 2017) and empirically (Bartunov et al., 2018), and successfully used in convolutional networks (CNNs). Even though this modification seems well justified from both biological and algorithmic sides, it reduces the generality of the algorithm to input data such as images where neighborhood relations between pixels (i.e. input dimensions) are important.
For random projections with localized receptive fields (l-RP), the centers of the patches were chosen at random positions in the input space and their weights were fixed at random values (as in subsection 3.1, see Figure 1C). We tested different patch sizes of p × p pixels and found an optimum around p ≈ 10, which is more pronounced for large hidden layer sizes nh (see Figure 4A). Note that p = 1 corresponds to resampling the data with random weights, and p = 28 recovers fully connected RP performance.
The main finding here is the significant improvement in performance using l-RP: the optimum around p ≈ 10 almost reaches BP performance for nh = 5000 hidden neurons (blue arrow in Figure 4B). As expected, l-RP and the LIF rate model of l-RP in subsection 3.1 perform equally well. To achieve a fair comparison, BP and SC were also tested with localized receptive fields (l-BP, l-SC, see Figure 4B). These algorithms also benefit from localized connectivity (again with an optimum at patch size p = 10), however, not as much as RP. This makes l-RP a strong competitor of SC (and also FA, see Figure 3) as a bio-plausible algorithm in the regime of large, overcomplete hidden layers nh > d.
Since the classification performances of l-RP and l-BP are very close for layer sizes above nh = 5000, we investigated the misclassified MNIST digits for both algorithms. We find that 75% of the (≈ 125) misclassified digits of l-BP (nh = 5000) are contained in the misclassified ones of l-RP (nh = 5000). This means that in roughly 75% of the cases where l-RP fails, the reference algorithm l-BP fails as well, suggesting that these digits are particularly hard to recognize for networks with one hidden layer. We trained networks with up to nh = 100000 hidden neurons to test if (l-)RP can finally reach (l-)BP performance, since the latter saturates for large nh (see Figure 4B). Indeed, for simulations with nh = 100000 and p = 10, l-BP and l-RP performance was not significantly different any more, both being at 1.2% test error.
To test whether l-RP only works for the relatively simple MNIST data set (centered digits, noninformative margin pixels, no clutter, uniform features and perspective etc.) or generalizes to more difficult tasks, we applied it to the CIFAR10 data set (Krizhevsky, 2013). We first reproduced a typical benchmark performance of a fully connected network with one hidden layer trained with standard BP (≈ 56% test accuracy, nh = 5000, see also Lin & Memisevic (2016)). Again, l-RP outperforms the unsupervised methods PCA and l-SC in the case of large, overcomplete hidden layers (see Table 2). Furthermore, as on MNIST, classification performance increases for increasing hidden layer size nh and localized receptive fields perform better than full connectivity for all methods.
Also on CIFAR10, l-RP comes close to the performance of the reference algorithm l-BP, however, the difference between l-RP and l-BP is larger than on MNIST. Given that state-of-the-art performance on the CIFAR10 dataset with deep convolutional neural networks is close to 98% (e.g. Real et al. (2018)), the limitations of l-RP and the difference in difficulty between MNIST and CIFAR10 become apparent.
4 DISCUSSION
The rules that govern plasticity of synapses deep in the brain remain elusive. In contrast to bio-plausible deep learning based on approximations of the backpropagation algorithm, we focused here on training a readout layer with a supervised, local learning rule, combined with a single hidden layer that is either fixed with random weights or trained with unsupervised, local learning rules.
To our surprise, randomly initialized fixed weights (RP) of large hidden layers lead to better classification performance than training them with unsupervised methods like PCA or sparse coding (SC). This implies that the inductive bias of PCA and sparse coding is not well suited for the task of digit classification and object recognition. It may be interesting to search for alternative unsupervised, local learning rules with a stronger inductive bias.
Replacing all-to-all connectivity with localized input filters is such an inductive bias that was already seen to be useful in other models (Bartunov et al., 2018) and proved to be particularly useful in conjunction with randomly initialized static weights. Already for a hidden layer size of 5000 neurons the performance of l-RP almost reaches the performance of backpropagation on MNIST. Furthermore, performance scaling with the number of hidden units nh was found to be orders of magnitudes better than the lower bound suggested by universal function approximation theory (Barron, 1993).
Since we wanted to keep our models as simple as possible, we used online (no mini-batches) stochastic gradient descent (SGD) with a constant learning rate in all our experiments. There are many known ways to further tweak the final performance, e.g. with adaptive learning rate schedules or data augmentation, but our goal here was to demonstrate that even the simple model with localized random projections and spike timing dependent plasticity with a constant learning rate achieves results that are comparable with more elaborate approaches that use e.g. convolutional layers with weight sharing (Panda & Roy, 2016), backpropagation approximations (Lee et al., 2016), multiple hidden layers (Lillicrap et al., 2016), dendritic neurons (Sacramento et al., 2017), recurrence (Diehl & Cook, 2015) or conversion from rate to spikes (Diehl et al., 2015).
Above 98% accuracy we have to take into account a saturating effect of the network training: better models will only lead to subtle improvements in accuracy. It is not obvious whether improvements are really a proof of having achieved deep learning or just the result of tweaking the models towards the peculiarities of the MNIST dataset (centered digits, non-informative margin pixels, no clutter, uniform features and perspective etc.). We observed that more challenging data sets such as CIFAR10 clearly highlight the limitations of l-RP and thus are better suited to test deep learning capabilities. We are aware that state-of-the-art deep learning has moved from MNIST to harder datasets, such as ImageNet (Deng et al., 2009), long ago. Yet MNIST seems to be the current reference task for most bio-plausible deep learning models (see section 2 and Table 8).
In this paper we presented a new MNIST benchmark for bio-plausible spiking networks. Using localized random projections (l-RP) and STDP learning, our spiking LIF model reached 98.1% test accuracy on MNIST which lies within the range of current benchmarks for bio-plausible models for deep learning (see section 2 and Table 8). Our network model is particularly simple, i.e. it has only one trainable layer and does not depend on sophisticated architectural or algorithmic features (e.g. to approximate backpropagation). Instead it relies on the properties of high-dimensional localized random projections. We suggest that novel, progressive approaches to bio-plausible deep learning should significantly outperform the benchmark presented here.
A METHODS
A.1 RATE NETWORK MODEL
We use a 3-layer (input l0, hidden l1 = lh and output l2) feed-forward rate-based architecture with layer sizes n0 (input), n1 (hidden) and n2 (output, with n2 = # classes). The layers are connected via weight matrices W1 ∈ R^{n1×n0} and W2 ∈ R^{n2×n1}, and each neuron receives a bias from the bias vectors b1 ∈ R^{n1} and b2 ∈ R^{n2} respectively (see Figure 1). The neurons themselves are nonlinear units with an element-wise, possibly layer-specific, nonlinearity a_i = ϕ_l(u_i). The feed-forward pass of this model thus reads
u_{l+1} = W_{l+1} u_l + b_{l+1},  a_{l+1} = ϕ_{l+1}(u_{l+1}). (4)
The simple perceptron (SP) consists of only one layer (l2, W2 ∈ R^{n2×n0}, b2 ∈ R^{n2}). The sparse coding (SC) model assumes recurrent inhibition within the hidden layer l1. This inhibition is not modeled by an explicit inhibitory population, as Dale's principle (Dale, 1935) would require; instead, direct, plastic, inhibitory synapses V1 ∈ R^{n1×n1} are assumed between the neurons in l1. Classification error variances in Figure 3 & Figure 4 are displayed as shaded, semi-transparent areas with the same colors as the corresponding curves. Their lower and upper bounds correspond to the 25% and 75% percentiles of at least 10 independent runs.
An effective dimensionality deff of the MNIST data set can be obtained, e.g. via eigen-spectrum analysis, keeping 90% of the variance. We obtain values around deff ≈ 20. The measure proposed in Litwin-Kumar et al. (2017) gives the same value deff ≈ 20. Another measure is the crossing of the PCA curve with the Simple Perceptron line in Figure 3 at nh = 25 (= deff). We checked that training a perceptron (1 hidden layer, nh = 1000, 10^7 iterations, ReLU, standard BP) on the first 25 PCs of MNIST leads to 1.7% test error (vs 1.5% test error on the full MNIST data). Together, these findings suggest that the MNIST dataset lies mostly in a low-dimensional linear subspace with deff ≈ 25 ≪ d. The MNIST (& CIFAR10) data was rescaled to values in [0, 1] and mean centered, which means that the pixel-wise average over the data was subtracted from the pixel values of every image. The code for the implementation of our rate network model will be available online upon acceptance.
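A small NumPy sketch of the eigen-spectrum estimate of deff mentioned above (the smallest number of principal components keeping 90% of the variance); the helper name is ours:

```python
import numpy as np

def effective_dimensionality(X, var_kept=0.9):
    """Smallest number of principal components capturing var_kept of the variance."""
    Xc = X - X.mean(axis=0)                    # mean-center the data
    s = np.linalg.svd(Xc, compute_uv=False)    # singular values
    var = s**2 / np.sum(s**2)                  # normalized variance spectrum
    return int(np.searchsorted(np.cumsum(var), var_kept) + 1)
```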
A.2 UNSUPERVISED TECHNIQUES
A.2.1 PRINCIPAL COMPONENT ANALYSIS (PCA)
In this paper we do not implement PCA learning explicitly as a neural learning algorithm but by a standard PCA algorithm (https://github.com/JuliaStats/MultivariateStats.jl). For d-dimensional data such algorithms output the values of the n ≤ d first principal components as well as the principal subspace projection matrix P ∈ R^{n×d}. This matrix can directly be used as the feedforward matrix W1 in our network since the rows of P correspond to the projections of the data onto the single principal components. In other words, each neuron in the hidden layer l1 extracts another principal component of the data.
Since PCA is a linear model, biases b1 were set to 0 and the nonlinearity was chosen linear, i.e. ϕ1(u) = u. With this, we can write the (trained) feed-forward pass of the first layer of our PCA model as follows:
a1 = u1 = W1 · a0 with W1 = P (5)
Since the maximum number of PCs that can be extracted is the dimensionality of the data, nmax = d, the number of neurons in the hidden layer n1 is limited by d. This makes PCA unusable for overcomplete hidden representations as investigated for SC and RP.
Consistency between the used standard algorithm and neural implementations of PCA (“Sanger’s” rule Sanger (1989)) was checked by comparing the extracted PCs and visualizing the learned projections (lines of P) for the case of 30 extracted PCs, i.e. n = 30.
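For concreteness, a NumPy sketch of obtaining W1 = P for equation 5 via an SVD; this stands in for the standard Julia PCA package used in the paper, and the names are ours:

```python
import numpy as np

def pca_hidden_weights(X, n_h):
    """Rows of the returned W1 are the first n_h principal directions (W1 = P)."""
    assert n_h <= X.shape[1], "PCA limits n_h to the input dimension d"
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_h]                    # (n_h, d)

# forward pass of the linear PCA hidden layer (equation 5): a1 = W1 @ a0
```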
A.2.2 SPARSE CODING (SC)
For d-dimensional data, SC aims at finding a dictionary W ∈ Rh×d of features that lead to an optimal representation a1 ∈ Rh which is sparse, i.e. has as few non-zero elements as possible. The corresponding optimization problem reads:
W_opt, a_1^opt = argmin_{W, a_1} L(W, a_1)
L(W, a_1) = (1/2) ‖a_0 − W^⊤ a_1‖_2^2 + λ ‖a_1‖_1. (6)
Since this is a nonlinear optimization problem with latent variables (hidden layer) it cannot be solved directly. Usually an iterative two step procedure is applied (akin to the expectation-maximization algorithm) until convergence: First optimize with respect to the activities a with fixed weights W. Second, assuming fixed activities, perform a gradient step w.r.t to weights.
We implement a biologically plausible SC model using a 2-layer network with recurrent inhibition and local plasticity rules similar to the one in Brito & Gerstner (2016). For a rigorous motivation (and derivation) that such a network architecture can indeed implement sparse coding we refer to Olshausen & Field (1997); Zylberberg et al. (2011); Pehlevan & Chklovskii (2015); Brito & Gerstner (2016). We apply the above mentioned two step optimization procedure to solve the SC problem given our network model. The following two steps are repeated in alternation until convergence of the weights:
1. Optimizing the hidden activations: We assume given and fixed weights W1 and V1 and ask for optimal hidden activations a1. Because of the recurrent inhibition V1 the resulting equation for the hidden activities a1 is nonlinear and implicit. To solve this equation iteratively, we simulate the dynamics of a neural model with time-dependent internal and external variables u1(t) and a1(t) respectively. The dynamics of the system is then given by Zylberberg et al. (2011); Brito & Gerstner (2016):
τ_u du_1(t)/dt = −u_1(t) + (W_1 a_0(t) − V_1 a_1(t)),  a_1(t) = ϕ(u_1(t)) (7)
In practice the dynamics is simulated for Niter = 50 iterations, which leads to satisfactory convergence (change in hidden activations < 5%).
2. Optimizing the weights: Now the activities a1 are kept fixed and we want to update the weights following the gradient of the loss function. The weight update rules are Hebbian-type local learning rules (Brito & Gerstner, 2016):
ΔW_{1,ji} = α_w · a_{0,i} · a_{1,j},  ΔV_{1,jk} = α_v · a_{1,k} · (a_{1,j} − 〈a_{1,j}〉) (8)
〈·〉 is a moving average (low-pass filter) with time constant τ_mav. At the beginning of the simulation (or after a new pattern presentation), the averaging time constant is ramped up from 0 to τ_mav over the first τ_mav time steps. The rows of W_1 are normalized after each update; this could equivalently be achieved by adding a weight-decay term. Additionally, the entries of V_1 are clamped to non-negative values after each update to ensure that the recurrent input is inhibitory, and the diagonal of V_1 is kept at zero to avoid self-inhibition.
During SC learning, at every iteration, the variables u_1(t) and a_1(t) are reset (to avoid transients) before an input is presented. Then, for each of the N iterations, equation 7 is iterated for Niter steps and the weights are updated according to equation 8.
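A condensed NumPy sketch of one such alternating SC iteration (equations 7 and 8); a rectifier stands in for ϕ, and all names and parameter defaults are illustrative placeholders rather than the tuned values of Table 3:

```python
import numpy as np

def sc_step(a0, W1, V1, a1_bar, lr_w=1e-3, lr_v=1e-3, tau_u=10.0,
            tau_mav=100.0, n_iter=50):
    """One SC iteration: infer hidden activities (eq. 7), then update weights (eq. 8)."""
    u1 = np.zeros(W1.shape[0])
    a1 = np.zeros(W1.shape[0])
    for _ in range(n_iter):                      # simulate the recurrent dynamics
        u1 += (-u1 + W1 @ a0 - V1 @ a1) / tau_u
        a1 = np.maximum(u1, 0.0)                 # rectifying nonlinearity phi
    W1 += lr_w * np.outer(a1, a0)                # Hebbian feed-forward update
    a1_bar += (a1 - a1_bar) / tau_mav            # moving average <a1> (low-pass)
    V1 += lr_v * np.outer(a1 - a1_bar, a1)       # lateral inhibitory update
    W1 /= np.linalg.norm(W1, axis=1, keepdims=True) + 1e-12  # normalize rows
    V1 = np.clip(V1, 0.0, None)                  # inhibition stays non-negative
    np.fill_diagonal(V1, 0.0)                    # no self-inhibition
    return W1, V1, a1_bar
```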
For comparison with localized RP (l-RP, see subsubsection A.2.3), a localized version of SC was implemented with the same initialization of W1 as in l-RP. The usual SC learning rule (equation 8) is applied, and the localized connectivity is maintained by clamping weights outside the receptive fields to zero. Lateral inhibition weights V1 are initialized and learned as in normal SC (full competition is kept). For a detailed parameter list, see Table 3.
A.2.3 RANDOM PROJECTIONS (RP)
For RP, the weight matrix W1 between input and hidden layer is initialized randomly W1 ∼ N (0, σ2) with variance-preserving scaling: σ2 ∝ 1/n0. The biases b1 are initialized by sampling from a uniform distribution U([0, 0.1]) between 0 and 0.1. In practice we used the specific initialization
W_1 ∼ N(0, σ²),  σ² = 1/(100 n_0),  b_1 ∼ U([0, 0.1]) (9)
for RP (keeping the weights fixed), SC, SP and also BP & FA (for both layers, i.e. with W_2, b_2 and n_1 respectively). The initialization of the biases b was found to be uncritical in the range [0, 0.1].
For localized RP (l-RP), neurons in the hidden layer receive input only from a fraction of the input units called a receptive field. Receptive fields are chosen to form a compact patch over neighbouring pixels in the image space. For each hidden neuron a receptive field of size p × p (p ∈ N) input neurons is created at a random position in the input space. The weight values for each receptive field (rf) and the biases are initialized as:
W_{1,rf} ∼ N(0, σ²_rf),  σ²_rf = c/(100 p) (10)
b_1 ∼ U([0, 0.1]) (11)
where the optimization factor c = 3 was found empirically through a grid-search optimization of classification performance. For exact parameter values, see Table 4.
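A NumPy sketch of this l-RP initialization (equations 10 and 11) for flattened square images; the function name and defaults are ours:

```python
import numpy as np

def lrp_weights(n_h, img_side=28, p=10, c=3.0, rng=np.random.default_rng(0)):
    """Fixed localized random projections: one random p-by-p receptive field per hidden unit."""
    d = img_side * img_side
    W1 = np.zeros((n_h, d))
    sigma = np.sqrt(c / (100.0 * p))             # sigma_rf from equation 10
    for j in range(n_h):
        r = rng.integers(0, img_side - p + 1)    # random position; patch fits image
        col = rng.integers(0, img_side - p + 1)
        field = np.zeros((img_side, img_side))
        field[r:r + p, col:col + p] = rng.normal(0.0, sigma, size=(p, p))
        W1[j] = field.ravel()
    b1 = rng.uniform(0.0, 0.1, size=n_h)         # equation 11
    return W1, b1
```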
A.3 CLASSIFIER & SUPERVISED REFERENCE ALGORITHMS
The connections W2 from hidden to output layer are updated by a simple delta rule, which is equivalent to BP in a single-layer network and hence is bio-plausible. As a reference for our bio-plausible models (Figure 1B), we compare them to networks with the same architecture (number of layers, neurons, connectivity) but trained in a fully supervised way with standard backpropagation (Figure 1A). The forward pass of the model reads:
u_{l+1} = W_{l+1} u_l + b_{l+1} (12)
a_{l+1} = ϕ_{l+1}(u_{l+1}) (13)
The error ẽL is calculated from the comparison of activations in the last layer aL with the (one-hot encoded) target activations tgt, with respect to the chosen loss function: mean squared error (MSE),
ẽ_L = tgt − a_L (14)
L_MSE = (1/2) ‖tgt − a_L‖_2^2 (15)
or softmax/cross-entropy loss (CE),
p = softmax(a_L) (16)
ẽ_L = tgt − p (17)
L_CE = − Σ_{i=1}^{n_L} tgt_i · log(p_i) (18)
Classification results (on the test set) for MSE- and CE-loss were found to be not significantly different. Rectified linear units (ReLU) were used as nonlinearity ϕ(ul) for all layers (MSE-loss) or for the first layer only (CE-loss).
In BP the weight and bias updates are obtained by stochastic gradient descent, i.e. ΔW_{l,ij} ∝ ∂L/∂W_{l,ij}. The full BP algorithm for deep networks reads (Rumelhart et al., 1986):
e_L = ϕ′_L(u_L) ⊙ ẽ_L,  e_{l−1} = ϕ′_{l−1}(u_{l−1}) ⊙ (W_l^⊤ e_l)
ΔW_l = α · e_l ⊗ a_{l−1},  Δb_l = α · e_l (19)
where ⊙ stands for element-wise multiplication, ⊗ is the outer (dyadic) product, ϕ′_l(·) is the derivative of the nonlinearity and α is the learning rate. FA (Lillicrap et al., 2016) uses a fixed random matrix R_l instead of the transpose of the weight matrix W_l^⊤ for the error backpropagation step in equation 19.
To allow for a fair comparison with l-RP, BP and FA were implemented with full connectivity and with localized receptive fields with the same initialization as in l-RP. During training with BP (or FA), the usual weight update equation 19 was applied to the weights in the receptive fields, keeping all other weights at zero. The exact parameter values can be found in Table 5.
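For reference, a compact NumPy sketch of one online update by equation 19 with MSE loss and ReLU units; passing a fixed random matrix R (with the shape of W2 transposed) turns the BP step into the FA variant:

```python
import numpy as np

relu = lambda u: np.maximum(u, 0.0)
drelu = lambda u: (u > 0).astype(float)

def bp_or_fa_step(a0, tgt, W1, b1, W2, b2, R=None, lr=0.01):
    """One online SGD step (eq. 19); R=None gives BP, a fixed random R gives FA."""
    u1 = W1 @ a0 + b1; a1 = relu(u1)   # forward pass
    u2 = W2 @ a1 + b2; a2 = relu(u2)
    e2 = drelu(u2) * (tgt - a2)        # output error
    B = W2.T if R is None else R       # BP feeds back W2.T, FA a fixed random matrix
    e1 = drelu(u1) * (B @ e2)          # backpropagated / fed-back error
    W2 += lr * np.outer(e2, a1); b2 += lr * e2
    W1 += lr * np.outer(e1, a0); b1 += lr * e1
    return W1, b1, W2, b2
```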
A.4 SPIKING IMPLEMENTATION
A.4.1 LIF MODEL
The spiking simulations were performed with a custom-made event-based leaky integrate-and-fire (LIF) integrator written in the Julia-language. For large network sizes, the exact, event-based integration can be inefficient due to a large frequency of events. To alleviate dramatic slow-down, an Euler-forward integration was added to the framework. For sufficiently small time discretization (e.g. ∆t ≤ 5 · 10−2 ms for the parameters given in Table 6) the error of this approximate integration does not have negative consequences on the learning outcome. Consistent results were obtained using event-based and Euler-forward integration. The code of this framework will be available online upon acceptance.
The dynamics of the LIF network is given by:
τ_m du_i(t)/dt = −u_i(t) + R I_i(t)
with I_i(t) = I^ff_i(t) + I^ext_i(t) = Σ_{j,f} w_ij ε(t − t^f_j) + I^ext_i(t)
and the spiking condition: if u_i(t) ≥ ϑ_i: u_i → u_reset (20)
where u_i(t) is the membrane potential, τ_m the membrane time constant, R the membrane resistance, w_ij are the synaptic weights, ε(t) = δ(t)/τ_m (with τ_m in seconds) is the post-synaptic potential evoked by a pre-synaptic spike arrival, ϑ_i is the spiking threshold and u_reset the reset potential after a spike. The input is split into a feed-forward (I^ff(t)) and an external (I^ext(t)) contribution. Each neuron in the input layer l_0 (n_0 = d) receives only external input I^ext proportional to one pixel value in the data. To avoid synchrony between the spikes of different neurons, the starting potentials and parameters (e.g. thresholds) for the different neurons are drawn from a (small) range around the respective mean values.
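A NumPy sketch of one Euler-forward step of these LIF dynamics (equation 20); the defaults mirror the magnitudes quoted in the text but are otherwise placeholders:

```python
import numpy as np

def lif_euler_step(u, I_ff, I_ext, theta, dt=0.05, tau_m=50.0, R=1.0, u_reset=0.0):
    """Euler-forward integration of tau_m du/dt = -u + R*I (times in ms)."""
    u = u + dt / tau_m * (-u + R * (I_ff + I_ext))
    spikes = u >= theta                 # spiking condition
    u = np.where(spikes, u_reset, u)    # reset after a spike
    return u, spikes
```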
We implement STDP using post-synaptic spike traces tr_i(t) and a post-synaptic target trace tgt_i(t):
τ_tr dtr_i(t)/dt = −tr_i(t) + Σ_f δ(t − t^f_i),  Δw_ij = g(tr^post_i(t), tgt^post_i(t)) δ(t − t^f_j) (21)
with the plasticity function
g(tr^post_i(t), tgt^post_i(t)) = α · (tgt^post_i(t) − tr^post_i(t)). (22)
To train the network, we present patterns to the input layer and a target trace to the output layer. The MNIST input is scaled by the input amplitude amp_inp, the targets tgt(t) of the output layer are the one-hot-coded classes, scaled by the target amplitude amp_tgt. Additionally, every neuron receives a static bias input I^ext_bias ≈ ϑ to avoid silent units in the hidden layer. Every pattern is presented as fixed input for a time T_pat, and the LIF dynamics as well as the learning evolve according to equation 20 and equation 21 respectively. To ensure stability during transients (see Naud et al. (2008) and references therein), learning is disabled after pattern switches for a duration of about T_trans = 4 τ_m. With the parameters we used for the simulations (see Table 6), firing rates of single neurons in the whole network stayed below 1 kHz, which was considered a bio-plausible regime. For the toy example in Figure 2A & B we used static input and target with the parameters amp_inp = 40, amp_tgt = 5 (i.e. target trace = 0.005), ϑ_mean = 20, σ_ϑ = 0, τ_m = 50, α = 1.2 · 10^−5. For the raster plot in Figure 2C we used amp_inp = 300, amp_tgt = 300, ϑ_mean = 20, σ_ϑ = 0, τ_m = 50, α = 1.2 · 10^−5, T_pat = 50 ms, T_trans = 100 ms.
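A discrete-time NumPy sketch of this trace-based plasticity (equations 21 and 22), applied once per time step; spike vectors are boolean, and the parameter defaults are illustrative rather than the exact values of Table 6:

```python
import numpy as np

def stdp_step(W, tr_post, tgt_post, pre_spikes, post_spikes,
              dt=0.05, tau_tr=50.0, alpha=1.2e-5):
    """Decay/increment post-synaptic traces (eq. 21), then update weights (eq. 22)."""
    tr_post += -dt / tau_tr * tr_post + post_spikes / tau_tr  # low-pass spike trace
    g = alpha * (tgt_post - tr_post)       # plasticity function g (eq. 22)
    # weight change is gated by pre-synaptic spikes: dW_ij = g_i * s_j
    W += np.outer(g, pre_spikes.astype(float))
    return W, tr_post
```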
A.4.2 LIF RATE MODEL
The LIF dynamics can be mapped to a rate model described by the following equations:
u_l = W_l u_{l−1} + R I^ext,  a_l = ϕ_LIF(u_l),  Δw_ij = g̃(a^pre_j, a^post_i, tgt^post_i) (23)
with the (element-wise) LIF activation function ϕ_LIF(·) and the modified plasticity function g̃(·):
ϕ_LIF(u_k) = [Δ_abs − τ_m ln(1 − ϑ_k/u_k)]^{−1} (24)
g̃(a^pre_j, a^post_i, tgt^post_i) = α̃ · a^pre_j · (tgt^post_i − a^post_i) (25)
The latter can be obtained by integrating the STDP rule (equation 21) and taking the expectation. Most of the parameters of the spiking and the LIF rate models can be mapped to each other directly (see Tables 6 & 7). The learning rate α must be adapted, since the LIF weight change depends on the presentation time of a pattern T_pat. In the limit of long pattern presentation times (T_pat ≫ τ_m, τ_tr), the transition from the learning rate of the LIF rate model (α̃) to the one of the spiking LIF model (α) is
α = (1000 ms / T_pat [ms]) · 1000 · α̃, (26)
where the second factor comes from a unit change from Hz to kHz. It is also possible to train the weight matrices computationally efficiently in the LIF rate model and plug them into the spiking LIF model afterwards (as in e.g. Diehl et al. (2015)). The reasons for the remaining difference in performance presumably lie in transients and single-spike effects that cannot be captured by the rate model. Also, the spiking network was only trained with 10^6 image presentations (compared to 10^7 for the rate model) due to long simulation times.
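A NumPy sketch of the LIF rate transfer function (equation 24) and the learning-rate mapping (equation 26); tau_m and delta_abs are placeholder values in seconds, and the names are ours:

```python
import numpy as np

def phi_lif(u, theta, tau_m=0.05, delta_abs=0.002):
    """LIF f-I curve (eq. 24) in Hz; zero below the threshold theta."""
    u = np.asarray(u, dtype=float)
    rate = np.zeros_like(u)
    above = u > theta
    rate[above] = 1.0 / (delta_abs - tau_m * np.log(1.0 - theta / u[above]))
    return rate

def spiking_lr(alpha_rate, T_pat_ms=50.0):
    """Map the rate-model learning rate to the spiking model (eq. 26)."""
    return 1000.0 / T_pat_ms * 1000.0 * alpha_rate
```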
B PARAMETER TABLES
In the following tables we use scientific E-notation XeY = X · 10^Y for better readability. For all simulations, we scaled the learning rate proportional to 1/nh for nh > 5000 to ensure convergence.
C BIO-PLAUSIBLE MNIST BENCHMARKS | 1. What are the strengths and weaknesses of the paper regarding its contribution to training multi-layer spiking neural networks in a bio-plausible way?
2. How does the reviewer assess the significance of the paper's findings, particularly in comparison to other state-of-the-art and bio-plausible SNNs?
3. What are some minor comments or suggestions the reviewer has regarding the paper's writing, structure, and content? | Review | Review
This article compares different methods to train a two-layer spiking neural network (SNN) in a bio-plausible way on the MNIST dataset, showing that fixed localized random connections that form the hidden layer, in combination with a supervised local learning rule on the output layer can achieve close to state-of-the-art accuracy compared to other SNN architectures. The authors investigate three methods to train the first layer in an unsupervised way: principal component analysis (PCA) on the rates, sparse coding of activations, and fixed random local receptive fields. Each of the methods is evaluated on the one hand in a time-stepped simulator, using LIF neurons and on the other hand using a rate-approximated model which allows for faster simulations. Results are compared between each other and as reference with standard backpropagation and feedback alignment. The main finding is that localized random projections outperform other unsupervised ways of computing first layer features, and with many hidden neurons approaches backpropagation results. These results are summarized in Table 8, which compares results of the paper and other state-of-the-art and bio-plausible SNNs. PCA and sparse coding work worse on MNIST than local random projections, regardless if the network is rate-based, spike-based or a regular ANN trained with the delta rule. Feedback Alignment, although only meant for comparison, performs best of the algorithms investigated in this paper.
In general the question how to train multi-layer spiking neural networks in a bio-plausible way is very relevant for computational neuroscience, and has attracted some attention from the machine learning community in recent years (e.g. Bengio et al. 2015, Scellier & Bengio 2016, Sacramento et al. 2018). It is therefore a suitable topic for ICLR. Of course the good performance of single-layer random projections is not surprising, because it is essentially the idea of the Extreme Learning Machine, and this concept has been well studied also for neuromorphic approaches (e.g. Yao & Basu, 2017), and versions with local receptive fields exist as well (Huang et al. 2015 "Local Receptive Fields Based Extreme Learning Machine"). While the comparison of different unsupervised methods on MNIST is somehow interesting, it fails to show any deeper insights because MNIST is a particularly simple task, and already the CIFAR 10 results are far away from the state-of-the-art (which is >96% using CNNs). Another interesting comparison that is missing is with clustering weights, which has shown good performance for CNNs e.g. in (Coates & Ng, 2012) or (Dundar et al. 2015), and is also unsupervised.
The motivation is not 100% clear because the first experiment uses spikes, and shows a non-negligible difference to rate models (the authors claim it's almost the same, but for MNIST differences of 0.5% are significant). All later results are purely about rate models. The authors apparently did not explore e.g. conversion techniques as in (Diehl et al. 2015) to make the spiking results match the rate versions better e.g. by weight normalization.
I would rate the significance to the SNN community as average, and to the entire ICLR community as low. The significance would be higher if it was shown that this method scales to deeper networks or at least can be utilized in deeper architectures. Scrutinizing the possibilities with random projections on the other hand could lead to more interesting results. But the best results here are obtained with 5000 neurons with 10x10 receptive fields on images of size 28x28, thus the representation is more than overcomplete, and of higher complexity than a convolution layer with 3x3 kernels and many input maps.
Because the results provide only limited insights beyond MNIST I can therefore not support acceptance at ICLR.
Pros:
+ interesting comparison of unsupervised feature learning techniques
+ interesting topic of bio-plausible deep learning
Cons:
- only MNIST, no indications if method will scale
- results are not better than state-of-the-art
Minor comments:
The paper is generally well-written and structured, although some of the design choices could have been explained in more detail. Generally, it is not discussed if random connections have any advantage over other spiking models in terms of accuracy, efficiency or speed, besides the obvious fact that one does not have to train this layer.
The title is a bit confusing. While it's not wrong, I had to read it multiple times to understand what was meant.
The first sentence in the caption for Fig. 2 is also confusing, mixing the descriptions of panel A and B. Also, in A membrane potentials are shown, but the post-membrane potential seems to integrate a constant current instead of individual spikes. Is this already the rate approximation of Eq. 2? Or is it because of the statement in the caption that they both receive very high external inputs. In general, the figures in panel A and B do not make the dynamics of the network or the supervised STDP much clearer.
Principal Component Analysis and Sparse Coding are done algorithmically instead of using a sort of nonlinear Hebbian Learning as in Lillicrap 2016. It would have been interesting to see if this changes the comparatively bad results for PCA and SC.
In Fig. 3, the curve in the random projections case is not saturated, maybe it would have been interesting to go above n_h = 5000. As there are 784 input neurons, a convolutional neural network with 7 filter banks already would have around 5000 neurons, but in this case each filter would be convolved over the whole image, while with random projections the filter only exists locally.
In Eq. 1, the notation is a bit ambiguous: The first delta-function seems to be the Dirac-delta for continuous t, while the second delta is a Kronecker-delta with discrete t.
In A.1 and A.4.2 it is stated that the output of a layer is u_{t+1} = W u_t + b but I think in both cases it should be W a_t + b where a_t = phi(u_t). Otherwise, you just have a linear model and no activations.
In Table 3, a typo: "eq. equation" |
ICLR | Title
LEARNING EXECUTION THROUGH NEURAL CODE FUSION
Abstract
As the performance of computer systems stagnates due to the end of Moore’s Law, there is a need for new models that can understand and optimize the execution of general purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn static representations of source code, these representations do not understand how code executes at runtime. In this work, we propose a new approach using GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source code constructs and data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related algorithm classification task.
1 INTRODUCTION
Over the last 50 years, hardware improvements have led to exponential increases in software performance, driven by Moore’s Law. The end of this exponential scaling has enormous ramifications for computing (Hennessy & Patterson, 2019) since the demand for compute has simultaneously grown exponentially, relying on Moore’s Law to compensate (Ranganathan, 2017). As the onus of performance optimization shifts to software, new models, representations, and methodologies for program understanding are needed to drive research and development in computer architectures, compilers, and to aid engineers in writing high performance code.
Deep learning has emerged as a powerful framework for solving difficult prediction problems across many domains, including vision (Krizhevsky et al., 2012), speech (Hinton et al., 2012), and text (Sutskever et al., 2014). Recent work has started to frame many canonical tasks in computer architecture as analogous prediction problems, and have shown that deep learning has the potential to outperform traditional heuristics (Hashemi et al., 2018). In this work, we focus on two representative tasks: address prefetching (modeling data-flow during execution) (Jouppi, 1990; Wenisch et al., 2009; Hashemi et al., 2018) and branch prediction (modeling control-flow during execution) (Jiménez & Lin, 2001; Seznec, 2011; Smith, 1981)1. Traditional models for solving these tasks memorize historical access patterns and branch history to make predictions about the future. However, this approach is inherently limited as there are simple cases where history-based methods cannot generalize
∗Work completed during an internship at Google.
1 As Moore's Law ends, prediction techniques in these fields have also stagnated. For example, the winner of the most recent branch prediction championship increased precision by 3.7% (Dundas, 2016).
(Section 4.6). Instead, we argue that these tasks (branch-prediction and prefetching) jointly model the intermediate behavior of a program as it executes. During execution, there is a rich and informative set of features in intermediate memory states that models can learn to drive both prediction tasks. Additionally, since programs are highly structured objects, static program syntax can supplement dynamic information with additional context about the program’s execution.
We combine these two sources of information by learning a representation of a program from both its static syntax and its dynamic intermediate state during execution. This incorporates a new set of previously unexplored features for prefetching and branch prediction, and we demonstrate that these can be leveraged to obtain significant performance improvements. Inspired by recent work on learning representations of code (Allamanis et al., 2017), our approach is distinguished by two aspects. First, instead of using high level source code, we construct a new graph representation of low-level assembly code and model it with a graph neural network. Assembly makes operations like register reads, memory accesses, and branch statements explicit, naturally allowing us to model multiple problems within a single, unified representation. Second, to model intermediate state, we propose a novel snapshot mechanism that feeds limited memory states into the graph (Section 3.2).
We call our approach neural code fusion (NCF). This same representation can easily be leveraged for a bevy of other low-level optimizations (including: indirect branch prediction, value prediction, memory disambiguation) and opens up new possibilities for multi-task learning that were not previously possible with traditional heuristics. NCF can also be used to generate useful representations of programs for indirectly related downstream tasks, and we demonstrate this transfer learning approach on an algorithm classification problem.
On the SPEC CPU2006 benchmarks (Sta, 2006), NCF outperforms the state-of-the-art in address and branch prediction by a significant margin. Moreover, NCF is orthogonal to existing historybased methods, and could easily combine them with our learned representations to potentially boost accuracy further. To our knowledge, NCF is the first instance of a single model that can learn simultaneously on dynamic control-flow and data-flow tasks, setting the stage for teaching neural network models to better understand how programs execute.
In summary, this paper makes the following contributions:
• An extensible graph neural network based representation of code that fuses static code and dynamic execution information into one graph.
• A binary representation for dynamic memory states that generalizes better than scalar or categorical representations.
• The first unified representation for control-flow and data-flow during program execution.
• State-of-the-art performance in branch prediction (by 26%) and prefetching (by 45%).
• We show that NCF representations pre-trained on branch prediction are useful for transfer learning, achieving competitive performance on an algorithm classification task.
2 BACKGROUND
In order to generate our fused representation (Figure 1), we combine three fundamental components. The representation itself builds on Graph Neural Networks (GNNs). Instead of directly representing source code, our static representation uses assembly code. To drive dynamic information through the GNN, we use binary memory snapshots. We start with background on these three components.
2.1 GATED GRAPH NEURAL NETWORKS
A generic graph neural network structure G = (V, E) consists of a set of nodes V and K sets of directed edges E = E_1, . . . , E_K, where E_k ⊆ V × V is the set of directed edges of type k. Each node v ∈ V is annotated with an initial node embedding denoted by x_v ∈ R^D and associated with a node state vector h^t_v ∈ R^D for each step of propagation t = 1, . . . , T. Our work builds on a specific GNN variant – Gated Graph Neural Networks (GGNNs) (Li et al., 2015). GGNNs propagate information in the graph through message passing. At each step of propagation, "messages" to each node v are computed as:
m^t_{kv} = Σ_{u:(u,v)∈E_k} f(h^t_u; θ_k), (1)
where m^t_{kv} is the zero vector if there are no edges of type k directed towards v. f is a linear layer with parameters θ_k in this model, but can be an arbitrary function. To update the state vector of a node v, all nonzero incoming messages are aggregated as:
m̃^t_v = g({m^t_{kv} | for k such that ∃u. (u, v) ∈ E_k}). (2)
Here g is an aggregation function, for which we use element-wise summation. Finally, the next state vector is computed using a gated recurrent unit (GRU) (Chung et al., 2014):
h^{t+1}_v = GRU(m̃^t_v, h^t_v). (3)
The propagation is initialized with h^1_v = x_v and repeated T times. The state vectors h^T_v are considered as the final node embeddings. For each task, we mark a specific node v∗ as the "task node". We feed its final state vector h^T_{v∗} to a linear output layer to make final predictions.
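To make the propagation scheme concrete, a dense NumPy sketch of equations 1–3 with per-edge-type linear message functions f and a minimal GRU cell; all weights here are random placeholders rather than trained parameters:

```python
import numpy as np

def make_gru(D, rng=np.random.default_rng(0)):
    """A minimal GRU cell for eq. 3; weights are random placeholders."""
    Wz, Uz, Wr, Ur, Wh, Uh = (rng.normal(0, 0.1, (D, D)) for _ in range(6))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    def gru(m, h):
        z = sig(m @ Wz.T + h @ Uz.T)               # update gate
        r = sig(m @ Wr.T + h @ Ur.T)               # reset gate
        h_tilde = np.tanh(m @ Wh.T + (r * h) @ Uh.T)
        return (1.0 - z) * h + z * h_tilde
    return gru

def ggnn_propagate(x, adj, theta, gru, T=8):
    """x: (n, D) initial embeddings; adj[k]: (n, n) with adj[k][u, v] = 1 for an
    edge u -> v of type k; theta[k]: (D, D) weights of the linear f (eq. 1)."""
    h = x.copy()
    for _ in range(T):
        # eqs. 1-2: per-edge-type linear messages, aggregated by summation
        m = sum(A.T @ (h @ Th.T) for A, Th in zip(adj, theta))
        h = gru(m, h)                              # eq. 3: gated state update
    return h
```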
2.2 PROGRAM REPRESENTATIONS
Here we give a brief review of how compilers and processors represent source code and program state, along with tools for extracting these representations from programs and their executions.
Dynamic Execution State. The dynamic state of a program is the set of values that change as a program executes. This is defined by a fixed set of registers (referenced by names like %rdi and %rax) and memory (which is much larger and indexed by an integer memory address). Values are moved from memory to registers via load instructions and from registers to memory via store instructions. Finally, the instruction pointer specifies which instruction should be executed next.
So, what is the correct subset of dynamic state to feed into a model? In principle it could include all registers and memory. However, this can be difficult to work with (memory is very large) and it is expensive to access arbitrary memory at test time. Instead, we restrict dynamic state to a snapshot that only includes CPU general purpose registers and recently used memory states. These values are cheaply obtainable in hardware through buffers that hold recently used data and in software through dynamic instrumentation tools like Pin (see Tools section).
Assembly Code. Assembly code is compiled from source code and is specific to a particular processor architecture (such as x86). It is a sequence of instructions, some of which operate on register values, some of which move values between registers and memory (loads and stores), and some of which conditionally branch or jump to other locations in the program. A common way of organizing assembly code is in a control flow graph (CFG). Nodes of a CFG are basic blocks, which are sequences of instructions without any control flow statements. Edges point from a source basic block to a target basic block when it is possible for control to jump from the source block to the target block. For x86 direct branches, there are only two possible target blocks for a given source block, which we can refer to as the true block and false block. A benefit of assembly code in our context is that it is typically less subject to stylistic variation and closer to program semantics. For example, programs that are syntactically different but semantically equivalent tend to correspond to similar assembly (Figure 2).
While we only use assembly for static code, it is also possible to link assembly code to the source code it was generated from to gain additional information about high-level constructs like data structures.
Tasks. We test learned understanding of control-flow during execution using the branch prediction task. Branch prediction traditionally uses heuristics to predict which target basic block will be entered next. The instruction pointer determines which basic block is currently being executed, and the target output is a boolean specifying either the true block or false block.
Branch prediction is a difficult problem with large performance implications for small relative improvements. Modern microprocessors execute hundreds of instructions speculatively, a mispredicted branch means that the processor has to discard all work completed after that branch and re-execute.
Learned understanding of data-flow during execution is tested using the prefetching task. Prefetching predicts the memory address that will be accessed in the next load operation. Since data access time is the largest bottleneck in server applications, solving data prefetching has significant implications for scaling computer architectures (Hashemi et al., 2018). Note that there is generally interleaving of branching and memory instructions, so predicting the next memory access may depend on an unknown branch decision, and vice versa.
Tools. Compilers convert source code into assembly code. We use gcc. Creating a usable snapshot of the dynamic state of a program is nontrivial. Given the large size of memory, we need to focus on memory locations that are relevant to the execution. These are obtained by monitoring the dynamic target memory addresses of load instructions that are executed. To obtain these snapshots, we instrument instructions during execution with a tool called Pin (Luk et al., 2005).
3 MODEL
We model the static assembly as a GNN (Section 3.1). Dynamic snapshots are used as features to inform the GNN of the instruction-level dynamics during execution (Section 3.2), which we show leads to model to learn the behavior of the application (Section 4).
3.1 GRAPH STRUCTURE
Figure 3 provides an example of our graph structure translating from 3 lines of assembly to a GNN. The graph consists of three major types of nodes: instruction nodes (in white), variable nodes (in yellow), and pseudo nodes (in grey).
Instruction nodes are created from instructions to serve as the backbone of the graph. Each instruction can have variable nodes or pseudo nodes as child nodes.
Variable nodes represent variables that use dynamic values, including registers and constants.
Instead of connecting instruction nodes directly to their child variable nodes, pseudo nodes represent the sub-operations inside an instruction. The value associated with a pseudo node is computed in a bottom-up manner by recursively executing the sub-operations of its child nodes. For example, in instruction 0 in Figure 3, a pseudo node is created to represent the source operand that loads data from memory2, which contains a child constant 0x48 and a child register %rbx. There are a number of different pseudo node types listed in the appendix.
Three major types of edges are used to connect nodes in the graph: control-flow edges, parent edges and usage edges. Control-flow edges connect an instruction node to all potential subsequent instruction nodes. For non-branch instructions, the control-flow edge from an instruction node points to the next sequential instruction node in the program. For branch instructions, control-flow edges are used to connect to both the next instruction and the branch target. Parent edges are used to connect child variable nodes or pseudo nodes to their parent instruction nodes or pseudo nodes. Usage edges provide the graph with data flow information, connecting variable nodes with their last read or write. Given this static structure, Section 3.2 describes how the GNN is initialized and used.
3.2 FUSED STATIC/DYNAMIC GATED GRAPH NEURAL NETWORKS
Node initialization. Unlike previous approaches to code analysis where node embeddings are initialized with the static text of source code, we fuse the static graph with dynamic snapshots by using dynamic state to initialize nodes in the graph.
Each variable node and pseudo node is initialized with a dynamic value from the memory snapshot. These values are converted into initial node embeddings via a learned embedding layer. We find that the numerical format of the dynamic values is critical to allowing the model to understand the application. We consider three types of representations for data values: categorical, scalar and binary. Our results (Section 4.6) show that binary has an inherent ability to generalize more efficiently than categorical or scalar representations. The intuition behind why binary generalizes so well is that the representation is inherently hierarchical, which allows for stronger generalization to previously unseen bit patterns.
Lastly, instruction nodes are initialized with zero vectors as embeddings. Given the initial embeddings, the GNN runs for a predefined number of propagation steps to obtain the final embeddings.
Defining tasks on the graph. Tasks are defined on nodes using masking. Similar to masking in RNNs to handle variable sequence lengths, masking in GNNs handles different numbers of task nodes. A node defined with a task has a mask value of 1 and the ones without a task are masked out using 0 during both forward and backward propagation.
Branch-prediction is defined on the branch instruction node. Since each branch can either be taken or not taken, this is a binary decision. The final node embeddings are fed into a linear layer to generate a scalar output using a sigmoid activation and a cross entropy loss.
Prefetching is defined on the src pseudo node that represents a memory load operation. The task is to predict the 64-bit target address of the next memory load from this node. A 64-bit output is generated by feeding the final node embeddings of the task node to a different linear layer. In this case, the output layer is 64-dimensional to correspond to a 64-bit address. The loss is the summation of the sigmoid cross-entropy loss over all 64 bits.3
2 In x86 assembly, parentheses denote a memory access.
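As an illustration of this output head, a NumPy sketch that builds the 64-bit target and sums per-bit sigmoid cross-entropy losses; names are ours and the paper's implementation details may differ:

```python
import numpy as np

def address_to_bits(addr, n_bits=64):
    """Binary target for a memory address, one bit per output unit (LSB first)."""
    return np.array([(addr >> i) & 1 for i in range(n_bits)], dtype=float)

def prefetch_loss_and_prediction(logits, addr):
    """Summed per-bit sigmoid cross-entropy and the thresholded predicted address."""
    t = address_to_bits(addr)
    p = 1.0 / (1.0 + np.exp(-logits))              # per-bit probabilities
    eps = 1e-12
    loss = -np.sum(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    pred = sum(1 << i for i in range(len(p)) if p[i] > 0.5)
    return loss, pred
```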
Scaling to large programs. For large-scale programs, it is unrealistic to utilize a static graph built on the entire assembly file (the gcc benchmark has >500K instructions). As in Li et al. (2015), to handle large graphs, only nodes that are within 100 steps of the task node affect the prediction.
4 EXPERIMENTS
4.1 DATA COLLECTION
Our model consists of two parts, the static assembly and dynamic snapshots. To collect static assembly we use gcc to compile source code for each binary. This binary is then disassembled using the GNU binary utilities to obtain the assembly code.
The dynamic snapshots are captured for conditional branch and memory load instructions using the dynamic instrumentation tool Pin (Luk et al., 2005). We run the benchmarks with the reference input set and use SimPoint (Hamerly et al., 2005) to generate a single representative sample of 100 million instructions for each benchmark. Our tool attaches to the running process, fast forwards to the region of interest and outputs values of general registers and related memory addresses into a file every time the target conditional branch instructions or memory load instructions are executed by the instrumented application. We use SPECint 2006 to evaluate our proposal. This is a standard benchmark suite commonly used to evaluate hardware and software system performance.
4.2 EXPERIMENTAL SETUP
We train the model on each benchmark independently. The first 70% of snapshots are used for training, and the last 30% for evaluation. Hyperparameters are reported in the appendix.
4.3 METRICS
To evaluate branch prediction we follow computer architecture research and use mispredictions per thousand instructions (MPKI) (Jiménez & Lin, 2001; Lee et al., 1997) as a metric. Prefetching is a harder problem, as the predictor needs to accurately predict all bits of a target memory address. A prediction that is off by even one bit, especially in the high bits, points to an incorrect and often distant memory location. We evaluate prefetching using complete accuracy, defined as an accurate prediction in all bits.
4.4 MODEL COMPARISONS
We compare our model to three branch predictors. The first is a bimodal predictor that uses a 2-bit saturating counter for each branch instruction to keep track of its branch history (Lee et al., 1997). The second is a widely used, state-of-the-art perceptron branch predictor (Jiménez & Lin, 2001) that uses the perceptron learning algorithm on long sequential binary taken/not-taken branch histories (Jiménez, 2016). As a more powerful baseline, we implement an offline non-linear multi-layer perceptron (MLP). The MLP has two hidden layers and each layer is of the same size as the input layer. A default SGD solver is used for optimization. The results are shown in Figure 4. We find that NCF reduces MPKI by 26% and 22% compared to the perceptron and MLP respectively. Note that some of the benchmarks (libquantum, perlbench) have zero MPKI.
Three baselines are used to evaluate our prefetching model in Figure 5. The first is a stride data prefetcher (Chen & Baer, 1995) that is good at detecting regular patterns, such as array operations. The second is a state-of-the-art address correlation (AC) prefetcher that handles irregular patterns by learning temporal address correlation (Wenisch et al., 2009). LSTM-delta is a learning-based prefetcher that captures correlation among deltas between addresses (Hashemi et al., 2018). Due to our binary representation, NCF achieves nearly 100% coverage of all addresses, unlike the 50-80% reported for the LSTM-prefetcher of (Hashemi et al., 2018). Figure 5 shows that NCF achieves significantly higher performance than prior work by handling both regular and irregular patterns with its binary representation. In both Figures 4 and 5, the applications are sorted from most-challenging to least-challenging. We find that NCF particularly outperforms the traditional baselines on the most challenging datasets. The traditional baselines in both branch prediction and prefetching leverage long sequential features. Our NCF does not yet use sequential features or sequential snapshots, we leave this for future work.
3Our framework supports multitasking in that it handles control-flow and data-flow tasks simultaneously. However, in our ablation studies, we did not see significant evidence that these tasks currently help each other.
The effectiveness of the GNN depends on the input graph, and we perform ablation studies in the appendix (Section B.1).
4.5 ALGORITHM CLASSIFICATION
To test if the model has learned about the behavior of the application, we test the NCF representation on an algorithm classification dataset (Lili Mou, 2016). We randomly select a subset of 15 problems from this dataset4 and generate inputs for each program. 50 programs are randomly selected from each class. These are split into 30 for training, 10 for validation (tuning the linear SVM described below) and 10 for testing.
We generate the graph for each program post-compilation and obtain memory snapshots via our instrumentation tool. The representation is pre-trained on branch prediction and the resultant embeddings are averaged to serve as the final embedding of the program. A linear SVM is trained using the pre-trained embeddings to output a predicted class.
This yields 96.0% test accuracy, where the state-of-the-art (Ben-Nun et al., 2018) achieves 95.3% on the same subset. In contrast to Ben-Nun et al. (2018), which pre-trains an LSTM on over 50M lines of LLVM IR, our embeddings are trained on 203k lines of assembly from the algorithm classification dataset itself. This shows that branch prediction can be highly predictive of high-level program attributes, suggesting that it may be fruitful to use dynamic information to solve other static tasks.
4.6 GENERALIZATION TEST ON REPRESENTATIONS
Lastly, we test the effectiveness of binary representations of memory state. There are three major options for representing dynamic state: categorical, real-valued scalar, and binary. State-of-the-art data prefetchers tend to use categorical representations. Recent advances in representing and manipulating numbers for neural arithmetic logic units use scalar representations (Trask et al., 2018).
4 We use a subset because the programs had to be modified (by adding appropriate headers, fixing bugs) to compile and run in order to retrieve the assembly code and dynamic states.
We evaluate the generalization ability of these representations using a simple loop. We replace the constant 10 total iterations of the loop in Figure 2(a) with a variable k. The control-flow of the loop decides to stay in or jump out of the loop by comparing variable i and k. The branch will be not taken for the first k − 1 times but will be taken at the kth time. Since traditional state-of-the-art branch predictors depend on memorizing past branch history, they will always mispredict the final branch (as it has never been taken before). Our proposal is able to make the correct prediction at the kth time.
The challenge for our model is that the value k can change during program execution, and the model needs to generalize to unseen values. We use this example to test the three representations
and create a testing set using k values from 1 to 80. The training set only contains k values from 1 to 40 with a step size of 3 (1, 4, 7, ..., 37). We feed all three representations to MLP predictors that have one hidden layer of the same size of each input representation (160 for categorical, 2 for scalar and 14 for binary). The results are shown in Figure 6.
The categorical representation can only correctly predict training samples, missing every two out of three k values, whereas the scalar and binary representations are both able to generalize across a continuous range, filling the "holes" between training samples. The binary representation generalizes to a larger range than a scalar representation, as long as the bits have been seen and toggled in the training set. Since binary is inherently hierarchical (the range increases exponentially with the number of bits), this advantage is greater on a real-world 64-bit machine.
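For reference, a small sketch of the three encodings; the quoted input sizes (160, 2 and 14) match our reading that the pair (i, k) is encoded by concatenating two such vectors:

```python
import numpy as np

def categorical(k, n_max=80):
    v = np.zeros(n_max)
    v[k - 1] = 1.0   # one-hot: an unseen k shares no active feature with seen ones
    return v

def scalar(k):
    return np.array([float(k)])

def binary(k, n_bits=7):
    # bit i of k, least significant first; hierarchical by construction
    return np.array([(k >> i) & 1 for i in range(n_bits)], dtype=float)

# Concatenating two such vectors for (i, k) gives input sizes
# 2 x 80 = 160 (categorical), 2 x 1 = 2 (scalar) and 2 x 7 = 14 (binary).
```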
5 RELATED WORK
5.1 LEARNING FROM SOURCE CODE & EXECUTION BEHAVIOR
There is a significant body of work on learning for code, and we refer the reader to Allamanis et al. (2018) for a survey. We focus on the most relevant methods here. Li et al. (2015) use GNNs to represent the state of heap memory for a program verification application. Allamanis et al. (2017) learn to represent source code with GNNs.
Similar to us, Ben-Nun et al. (2018); Mendis et al. (2018) learn representations of code from low-level syntax, the LLVM intermediate representation (IR) or assembly, but do not use dynamic information. We use assembly code instead of IR to maintain a 1:1 mapping between dynamic state and the static backbone of the graph (since instructions are atomic when executed). Prior work that builds graphs purely based on static source code disregards the instruction-level dynamics that are created during program execution, as a single static piece of code can execute in different ways depending on the provided inputs.
Wang et al. (2017) embed the sequences of values that variables take on during the execution of a program as a dynamic program embedding. The code is not otherwise used. The states are relatively simple (variables can take on relatively few possible values) in contrast to our dynamic states that are “from the wild.” Cummins et al. (2017) embeds code and optionally allows a flat vector of auxiliary features that can depend on dynamic information. Abstract program execution can also be used as a basis for learning program representations (DeFreez et al., 2018; Henkel et al., 2018). However, neither uses concrete program state.
5.2 USING PROGRAM STATE TO GUIDE PROGRAM SYNTHESIS
There are several works that learn from program state to aid program synthesis (Balog et al., 2016; Parisotto et al., 2016; Devlin et al., 2017; Zohar & Wolf, 2018; Chen et al., 2019; Vijayakumar et al., 2018; Menon et al., 2013). In particular, Balog et al. (2016) use neural networks to learn a mapping from list-of-integer-valued input-output examples to the set of primitives needed. All of
these operate on programs in relatively simple Domain Specific Languages and are learning mappings from program state to code, rather than learning joint embeddings of code and program state.
5.3 DYNAMIC PREDICTION TASKS
Branch prediction and prefetching are heavily studied in the computer architecture domain. High-performance modern microprocessors commonly include perceptron (Jiménez & Lin, 2001) or table-based branch predictors that memorize commonly taken paths through code (Seznec, 2011).
While there has been a significant amount of work around correlation prefetching in academia (Wenisch et al., 2009; Charney & Reeves, 1995; Roth et al., 1998), modern processors commonly implement only simple stream prefetchers (Chen & Baer, 1995; Jouppi, 1990; Gindele, 1977). Recent work has related prefetching to natural language models and shown that LSTMs achieve high accuracy (Hashemi et al., 2018). However, their categorical representation covers only a limited portion of the access patterns, while the binary representation described here is more general.
6 CONCLUSION
We develop a novel graph neural network that uses both static and dynamic features to learn a rich representation for code. Since the representation is based on a relational network, it is easy to envision extensions that include high-level source code in the model or add new prediction tasks. Instead of focusing on hardware-realizable systems with real-time performance, our primary focus in this paper is to develop representations that explore the limits of predictive accuracy for these problems with extremely powerful models, so that the improvements can eventually be distilled. This is common in machine learning research, where typically the limits of performance for a given approach are reached and then distilled into a performant system, e.g. (Van Den Oord et al., 2016; Oord et al., 2017). However, benefits can still be derived by using the model to affect program behavior through compilation hints (Chilimbi & Hirzel, 2002; Jagannathan & Wright, 1996; Wolf et al., 1996), making this exploration immediately practical. We argue that fusing both static and dynamic features into one representation is an exciting direction to enable further progress in neural program understanding.
A HYPERPARAMETERS
The hyperparameters for all models are given in Table 1.
B NODE SUB-TYPES
We describe the node sub-types in Table 2. Pseudo-nodes implement operations that are commonly known as the addressing modes of the Instruction Set Architecture. Note that node sub-types are used to derive initial node embeddings and for interpretability. They do not factor into the computation of the graph neural network.
Table 2: Descriptions of node sub-types.

Pseudo nodes:
• non-mem-src: a source operand that does not involve a memory load operation; obtained directly from register(s) and/or constant(s)
• mem-src: a source operand that involves a memory load operation; obtained by loading data from a memory location
• non-mem-tgt: a target operand that does not involve a memory write operation; writes directly to a register
• mem-tgt: a target operand that involves a memory write operation; writes data to a memory location
• base: a base that is obtained directly from a variable node
• ind-base: an indirect base that is obtained from certain operations on the child variable nodes, such as multiplying a register by a constant
• offset: an offset value that is to be added to a base

Variable nodes:
• reg: a register; its value changes dynamically during execution
• const: a constant; its value is specified in the assembly
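As a concrete illustration of how these sub-types compose, the following sketch (ours; the class and helper names are not from the paper's code) decomposes an x86 memory operand such as 0x48(%rbx), the memory-load source operand discussed in Figure 3, into the node types above:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        sub_type: str                  # one of the sub-types listed in Table 2
        children: List["Node"] = field(default_factory=list)

    def mem_src_operand(base_reg: str, offset_const: int) -> Node:
        # 0x48(%rbx): a memory load whose address is %rbx + 0x48.
        base = Node("base", [Node(f"reg {base_reg}")])
        offset = Node("offset", [Node(f"const {hex(offset_const)}")])
        return Node("mem-src", [base, offset])

    operand = mem_src_operand("%rbx", 0x48)  # mem-src -> {base -> reg, offset -> const}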
B.1 ABLATION STUDY
The effectiveness of the GNN depends on the input graph. As pseudo nodes are a large component of the static graph, we run additional experiments to understand their importance. In particular, we try to only use the pseudo nodes src and tgt, which are directly connected to instruction nodes. Our data shows that removing pseudo nodes other than src and tgt and connecting variable nodes directly to src and tgt has little impact on branch prediction (an MPKI increase of 0.26), but has a large impact on the data-flow accuracy (accuracy goes down by 12.1%).
Figure 7 shows the sensitivity of task performance to the number of propagation steps during training for the GNN on omnetpp. We find that prefetching is more sensitive to propagation steps than branch prediction, requiring 5-8 steps for peak accuracy. Due to the control flow of programs, we find that 5-8 steps propagate information across 50-60 instruction nodes along the graph's backbone for omnetpp (up to 6000 nodes for perlbench).

1. How does the proposed method improve code representation compared to previous approaches?
2. What is the significance of using assembly code in the graph representation?
3. Can you explain the snapshot mechanism and its role in improving performance?
4. How effective is the proposed method in downstream tasks such as branch prediction and program classification?
5. How does the paper contribute to the field of computer science and software engineering?

Review
The paper proposes using Graph Neural Networks to learn representations of source code and its execution. They test their method on the SPEC CPU benchmark suite and show substantial improvement over methods that do not use execution.
The paper's main question is how to learn code representations. The main novelty introduced in their approach is to build a graph representation not of high-level code but of assembly code. They also develop what they call a "snapshot mechanism" that feeds limited memory states into the graph. The downstream consequence of their method is improved performance on tasks such as branch prediction. Interestingly, NCF can also be used to represent programs for use in downstream tasks. This is demonstrated via transfer learning on an algorithm classification problem. The paper is well written, and the background and related-work sections make it easy for the reader to understand the problem's relevance within the related literature.
The results look well justified and empirically verified. |
1. What is the novel improvement in methodology for learning code execution presented in the paper?
2. What are the strengths of the proposed approach, particularly in combining static and dynamic program descriptions?
3. Are there any concerns regarding the fairness of comparison against baselines in the experimental results?
4. How does the reviewer assess the clarity and balance of the presentation, including the background material, method description, and experiment results?
5. What is the significance of the study on memory representations, and how does it add value to the paper?

Review
This paper presents a novel improvement in methodology for learning code execution (at the level of branch prediction and prefetching). They combine static program description with dynamic program state into one graph neural network, for the first time, to achieve significant performance gains on standard benchmarks.
I would vote to accept this paper. They appear to have developed a new model structure and interface to the program information (i.e. inputs to the model), and the design decisions appear thoughtful, sensible, and well-justified (e.g. use of assembly code). The presentation is mostly clear, with a good balance of background material, method description, and experiment results.
Taken at face value, the results are impressive, although I am not familiar enough with this field to assess the fairness of comparison against the baselines. For example, it is a little unclear how much of the gain over previous baselines comes simply from switching from source code as input to assembly code as input.
The study on memory representations (categorical vs scalar vs binary) is a helpful component which adds its own value, and the context for popularity of the alternatives is described.
Few details as to implementation are discussed, although the code is included in the submission, and after a quick glance appears substantial. |
ICLR | Title
LEARNING EXECUTION THROUGH NEURAL CODE FUSION
Abstract
As the performance of computer systems stagnates due to the end of Moore’s Law, there is a need for new models that can understand and optimize the execution of general purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn static representations of source code, these representations do not understand how code executes at runtime. In this work, we propose a new approach using GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source code constructs and data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration of this, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related algorithm classification task.
1 INTRODUCTION
Over the last 50 years, hardware improvements have led to exponential increases in software performance, driven by Moore’s Law. The end of this exponential scaling has enormous ramifications for computing (Hennessy & Patterson, 2019) since the demand for compute has simultaneously grown exponentially, relying on Moore’s Law to compensate (Ranganathan, 2017). As the onus of performance optimization shifts to software, new models, representations, and methodologies for program understanding are needed to drive research and development in computer architectures, compilers, and to aid engineers in writing high performance code.
Deep learning has emerged as a powerful framework for solving difficult prediction problems across many domains, including vision (Krizhevsky et al., 2012), speech (Hinton et al., 2012), and text (Sutskever et al., 2014). Recent work has started to frame many canonical tasks in computer architecture as analogous prediction problems, and have shown that deep learning has the potential to outperform traditional heuristics (Hashemi et al., 2018). In this work, we focus on two representative tasks: address prefetching (modeling data-flow during execution) (Jouppi, 1990; Wenisch et al., 2009; Hashemi et al., 2018) and branch prediction (modeling control-flow during execution) (Jiménez & Lin, 2001; Seznec, 2011; Smith, 1981)1. Traditional models for solving these tasks memorize historical access patterns and branch history to make predictions about the future. However, this approach is inherently limited as there are simple cases where history-based methods cannot generalize
∗Work completed during an internship at Google. 1As Moore’s Law ends, prediction techniques in these fields have also stagnated. For example, the winner of
the most recent branch prediction championship increased precision by 3.7% (Dundas, 2016).
(Section 4.6). Instead, we argue that these tasks (branch-prediction and prefetching) jointly model the intermediate behavior of a program as it executes. During execution, there is a rich and informative set of features in intermediate memory states that models can learn to drive both prediction tasks. Additionally, since programs are highly structured objects, static program syntax can supplement dynamic information with additional context about the program’s execution.
We combine these two sources of information by learning a representation of a program from both its static syntax and its dynamic intermediate state during execution. This incorporates a new set of previously unexplored features for prefetching and branch prediction, and we demonstrate that these can be leveraged to obtain significant performance improvements. Inspired by recent work on learning representations of code (Allamanis et al., 2017), our approach is distinguished by two aspects. First, instead of using high level source code, we construct a new graph representation of low-level assembly code and model it with a graph neural network. Assembly makes operations like register reads, memory accesses, and branch statements explicit, naturally allowing us to model multiple problems within a single, unified representation. Second, to model intermediate state, we propose a novel snapshot mechanism that feeds limited memory states into the graph (Section 3.2).
We call our approach neural code fusion (NCF). This same representation can easily be leveraged for a bevy of other low-level optimizations (including: indirect branch prediction, value prediction, memory disambiguation) and opens up new possibilities for multi-task learning that were not previously possible with traditional heuristics. NCF can also be used to generate useful representations of programs for indirectly related downstream tasks, and we demonstrate this transfer learning approach on an algorithm classification problem.
On the SPEC CPU2006 benchmarks (Sta, 2006), NCF outperforms the state-of-the-art in address and branch prediction by a significant margin. Moreover, NCF is orthogonal to existing historybased methods, and could easily combine them with our learned representations to potentially boost accuracy further. To our knowledge, NCF is the first instance of a single model that can learn simultaneously on dynamic control-flow and data-flow tasks, setting the stage for teaching neural network models to better understand how programs execute.
In summary, this paper makes the following contributions: • An extensible graph neural network based representation of code that fuses static code and
dynamic execution information into one graph.
• A binary representation for dynamic memory states that generalizes better than scalar or categorical representations.
• The first unified representation for control-flow and data-flow during program execution. • State-of-the-art performance in branch prediction (by 26%) and prefetching (by 45%). • We show that NCF representations pre-trained on branch prediction are useful for transfer
learning, achieving competitive performance on an algorithm classification task.
2 BACKGROUND
In order to generate our fused representation (Figure 1), we combine three fundamental components. The representation itself builds on Graph Neural Networks (GNNs). Instead of directly representing source code, our static representation uses assembly code. To drive dynamic information through the GNN, we use binary memory snapshots. We start with background on these three components.
2.1 GATED GRAPH NEURAL NETWORKS
A generic graph neural network structure G = (V,E) consist of a set of nodes V and K sets of directed edges E = E1, . . . , EK where Ek ⊆ V × V is the set of directed edges of type k. Each node v ∈ V is annotated with a initial node embedding denoted by xv ∈ RD and associated with a node state vector htv ∈ RD for each step of propagation t = 1, . . . , T . Our work builds on a specific GNN variant – Gated Graph Neural Networks (GGNNs) (Li et al., 2015). GGNNs propagate information in the graph through message passing. At each step of propagation, “messages” to each node v are computed as:
mtkv = ∑
u:(u,v)∈Ek
f(htu; θk), (1)
where mtkv is the zero vector if there are no edges of type k directed towards v. f is a linear layer with parameters θk in this model, but can be an arbitrary function. To update the state vector of a node v, all nonzero incoming messages are aggregated as:
m̃tv = g({mtkv | for k such that ∃u.(u, v) ∈ Ek}). (2)
Here g is an aggregation function, for which we use element-wise summation. Finally, the next state vector is computed using a gated recurrent unit (GRU) (Chung et al., 2014):
ht+1v = GRU(m̃ t v, h t v). (3)
The propagation is initialized with h1v = xv and repeated T times. The state vectors h T v are considered as the final node embeddings. For each task, we mark a specific node v∗ as the “task node”. We feed its final state vector hTv∗ to a linear output layer to make final predictions.
2.2 PROGRAM REPRESENTATIONS
Here we give a brief review of how compilers and processors represent source code and program state, along with tools for extracting these representations from programs and their executions.
Dynamic Execution State. The dynamic state of a program is the set of values that change as a program executes. This is defined by a fixed set of registers (referenced by names like %rdi and %rax) and memory (which is much larger and indexed by an integer memory address). Values are moved from memory to registers via load instructions and from registers to memory via store instructions. Finally, the instruction pointer specifies which instruction should be executed next.
So, what is the correct subset of dynamic state to feed into a model? In principle it could include all registers and memory. However, this can be difficult to work with (memory is very large) and it is expensive to access arbitrary memory at test time. Instead, we restrict dynamic state to a snapshot that only includes CPU general purpose registers and recently used memory states. These values are cheaply obtainable in hardware through buffers that hold recently used data and in software through dynamic instrumentation tools like Pin (see Tools section).
Assembly Code. Assembly code is compiled from source code and is specific to a particular processor architecture (such as x86). It is a sequence of instructions, some of which operate on register values, some of which move values between registers and memory (loads and stores), and some of which conditionally branch or jump to other locations in the program. A common way of organizing assembly code is in a control flow graph (CFG). Nodes of a CFG are basic blocks, which are sequences of instructions without any control flow statements. Edges point from a source basic block to a target basic block when it is possible for control to jump from the source bock to the target block. For x86 direct branches, there are only two possible target blocks for a given source block, which we can refer to as the true block and false block. A benefit of assembly code in our context is that it is typically less stylish and tighter to program semantics. For example, programs that are syntactically different but semantically equivalent tend to correspond to similar assembly (Figure 2).
While we only use assembly for static code, it is also possible to link assembly code to the source code it was generated from to gain additional information about high-level constructs like data structures.
Tasks. We test learned understanding of control-flow during execution using the branch prediction task. Branch prediction traditionally uses heuristics to predict which target basic block will be entered next. The instruction pointer determines which basic block is currently being executed, and the target output is a boolean specifying either the true block or false block.
Branch prediction is a difficult problem with large performance implications for small relative improvements. Modern microprocessors execute hundreds of instructions speculatively, a mispredicted branch means that the processor has to discard all work completed after that branch and re-execute.
Learned understanding of data-flow during execution is tested using the prefetching task. Prefetching predicts the memory address that will be accessed in the next load operation. Since data access time is the largest bottleneck in server applications, solving data prefetching has significant implications for scaling computer architectures (Hashemi et al., 2018). Note that there is generally interleaving of branching and memory instructions, so predicting the next memory access may depend on an unknown branch decision, and vice versa.
Tools. Compilers convert source code into assembly code. We use gcc. Creating a usable snapshot of the dynamic state of a program is nontrivial. Given the large size of memory, we need to focus on memory locations that are relevant to the execution. These are obtained by monitoring the dynamic target memory addresses of load instructions that are executed. To obtain these snapshots, we instrument instructions during execution with a tool called Pin (Luk et al., 2005).
3 MODEL
We model the static assembly as a GNN (Section 3.1). Dynamic snapshots are used as features to inform the GNN of the instruction-level dynamics during execution (Section 3.2), which we show leads to model to learn the behavior of the application (Section 4).
3.1 GRAPH STRUCTURE
Figure 3 provides an example of our graph structure translating from 3 lines of assembly to a GNN. The graph consists of three major types of nodes: instruction nodes (in white), variable nodes (in yellow), and pseudo nodes (in grey).
Instruction nodes are created from instructions to serve as the backbone of the graph. Each instruction can have variable nodes or pseudo nodes as child nodes.
Variable nodes represent variables that use dynamic values, including registers and constants.
Instead of connecting instructions nodes directly to their child variable nodes, Pseudo nodes represent the sub-operations inside an instruction. The value associated with a pseudo node is computed in a bottom-up manner by recursively executing the sub-operations of its child nodes. For example, in instruction 0 in Figure 3, a pseudo node is created to represent the source operand that loads data from memory2, which contain a child constant 0x48 and a child register %rbx. There are a number of different pseudo node types listed in the appendix.
Three major types of edges are used to connect nodes in the graph: control-flow edges, parent edges and usage edges. Control-flow edges connect an instruction node to all potential subsequent instruction nodes. For non-branch instructions, the control-flow edge from an instruction node points to the next sequential instruction node in the program. For branch instructions, control-flow edges are used to connect to both the next instruction and the branch target. Parent edges are used to connect child variable nodes or pseudo nodes to their parent instruction nodes or pseudo nodes. Usage edges provide the graph with data flow information, connecting variable nodes with their last read or write. Given this static structure, Section 3.2 describes how the GNN is initialized and used.
3.2 FUSED STATIC/DYNAMIC GATED GRAPH NEURAL NETWORKS
Node initialization. Unlike previous approaches to code analysis where node embeddings are initialized with the static text of source code, we fuse the static graph with dynamic snapshots by using dynamic state to initialize nodes in the graph.
Each variable node and pseudo node is initialized with a dynamic value from the memory snapshot. These values are converted into initial
node embeddings via a learned embedding layer. We find that the numerical format of the dynamic values are critical to allowing the model to understand the application. We consider three types of representations for data values: categorical, scalar and binary. Our results (Section 4.6) show that binary has an inherent ability to generalize more efficiently than categorical or scalar representations. The intuition behind why binary generalizes so well is that the representation is inherently hierarchical, which allows for stronger generalization to previously unseen bit patterns.
Lastly, instruction nodes are initialized with zero vectors as embeddings. Given the initial embeddings, the GNN runs for a predefined number of propagation steps to obtain the final embeddings.
Defining tasks on the graph. Tasks are defined on nodes using masking. Similar to masking in RNNs to handle variable sequence lengths, masking in GNNs handles different numbers of task nodes. A node defined with a task has a mask value of 1 and the ones without a task are masked out using 0 during both forward and backward propagation.
Branch-prediction is defined on the branch instruction node. Since each branch can either be taken or not taken, this is a binary decision. The final node embeddings are fed into a linear layer to generate a scalar output using a sigmoid activation and a cross entropy loss.
Prefetching is defined on the src pseudo node that represents a memory load operation. The task is to predict the 64-bit target address of the next memory load from this node. A 64-bit output is generated by feeding the final node embeddings of the task node to a different linear layer. In this case, the output layer is 64-dimensional to correspond to a 64-bit address. The loss is the summation of sigmoid cross entropy loss on all 64 bits.3
2In x86 assembly, parentheses represent addressing memory
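A hedged PyTorch sketch of the two task heads with masking follows; the names and the framework are ours, not the paper's implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 128
branch_head = nn.Linear(embed_dim, 1)     # binary taken/not-taken logit
prefetch_head = nn.Linear(embed_dim, 64)  # one logit per address bit

def task_loss(node_embeds, task_mask, targets, head):
    """node_embeds: [num_nodes, embed_dim] final GNN embeddings.
    task_mask: float tensor [num_nodes] in {0, 1}; nodes without a
    task are masked out. targets: [num_nodes, out_dim] bit labels."""
    logits = head(node_embeds)                        # [num_nodes, out_dim]
    per_node = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none").sum(dim=-1)
    return (per_node * task_mask).sum() / task_mask.sum().clamp(min=1)
```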
Scaling to large programs. For large-scale programs, it is unrealistic to utilize a static graph built on the entire assembly file (the gcc benchmark has >500K instructions). As in Li et al. (2015), to handle large graphs, only nodes that are within 100 steps of the task node affect the prediction.
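One way to realize this restriction is a breadth-first traversal from the task node, keeping only nodes within the step budget; a sketch (the traversal details are our assumption, the 100-step budget is from the text):

```python
from collections import deque

def neighborhood(adj, task_node, max_steps=100):
    """adj: dict mapping node -> iterable of neighboring nodes.
    Returns the set of nodes within max_steps of the task node."""
    seen = {task_node}
    frontier = deque([(task_node, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_steps:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen
```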
4 EXPERIMENTS
4.1 DATA COLLECTION
Our model consists of two parts: the static assembly and dynamic snapshots. To collect the static assembly, we use gcc to compile the source code for each binary. This binary is then disassembled using the GNU binary utilities to obtain the assembly code.
The dynamic snapshots are captured for conditional branch and memory load instructions using the dynamic instrumentation tool Pin (Luk et al., 2005). We run the benchmarks with the reference input set and use SimPoint (Hamerly et al., 2005) to generate a single representative sample of 100 million instructions for each benchmark. Our tool attaches to the running process, fast forwards to the region of interest and outputs values of general registers and related memory addresses into a file every time the target conditional branch instructions or memory load instructions are executed by the instrumented application. We use SPECint 2006 to evaluate our proposal. This is a standard benchmark suite commonly used to evaluate hardware and software system performance.
4.2 EXPERIMENTAL SETUP
We train the model on each benchmark independently. The first 70% of snapshots are used for training, and the last 30% for evaluation. Hyperparameters are reported in the appendix.
4.3 METRICS
To evaluate branch prediction we follow computer architecture research and use mispredictions per thousand instructions (MPKI) (Jiménez & Lin, 2001; Lee et al., 1997) as the metric. Prefetching is a harder problem, as the predictor needs to accurately predict all bits of a target memory address. A prediction that is off by even 1 bit, especially in the high bits, points to an often distant, incorrect memory location. We evaluate prefetching using complete accuracy, defined as a prediction that is accurate in all bits.
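Both metrics are simple to compute; a small sketch with illustrative variable names:

```python
def mpki(num_mispredictions, num_instructions):
    """Mispredictions per thousand instructions."""
    return 1000.0 * num_mispredictions / num_instructions

def complete_accuracy(pred_addrs, true_addrs):
    """Fraction of predictions that match the target in all 64 bits.
    pred_addrs, true_addrs: lists of 64-bit integer addresses."""
    exact = sum(p == t for p, t in zip(pred_addrs, true_addrs))
    return exact / len(true_addrs)
```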
4.4 MODEL COMPARISONS
We compare our model to three branch predictors. The first is a bimodal predictor that uses a 2-bit saturating counter for each branch instruction to keep track of its branch history (Lee et al., 1997). The second is a widely used, state-of-the-art perceptron branch predictor (Jiménez & Lin, 2001) that uses the perceptron learning algorithm on long sequential binary taken/not-taken branch histories (Jiménez, 2016). As a more powerful baseline, we implement an offline non-linear multi-layer perceptron (MLP). The MLP has two hidden layers and each layer is of the same size as the input layer. A default SGD solver is used for optimization. The results are shown in Figure 4. We find that NCF reduces MPKI by 26% and 22% compared to the perceptron and MLP respectively. Note that some of the benchmarks (libquantum, perlbench) have zero MPKI.
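The MLP baseline can be sketched as follows (the input featurization is left abstract, since the text does not specify it):

```python
import torch.nn as nn

def make_mlp(input_dim):
    # Two hidden layers, each the same size as the input layer,
    # followed by a single taken/not-taken logit.
    return nn.Sequential(
        nn.Linear(input_dim, input_dim), nn.ReLU(),
        nn.Linear(input_dim, input_dim), nn.ReLU(),
        nn.Linear(input_dim, 1),
    )
```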
Three baselines are used to evaluate our prefetching model in Figure 5. The first is a stride data prefetcher (Chen & Baer, 1995) that is good at detecting regular patterns, such as array operations. The second is a state-of-the-art address correlation (AC) prefetcher that handles irregular patterns by learning temporal address correlation (Wenisch et al., 2009). LSTM-delta is a learning-based prefetcher that captures correlation among deltas between addresses (Hashemi et al., 2018). Due to our binary representation, NCF achieves nearly 100% coverage of all addresses, unlike the 50-80% reported for the LSTM prefetcher of Hashemi et al. (2018). Figure 5 shows that NCF achieves significantly higher performance than prior work by handling both regular and irregular patterns with its binary representation. In both Figures 4 and 5, the applications are sorted from most challenging to least challenging. We find that NCF particularly outperforms the traditional baselines on the most challenging datasets. The traditional baselines in both branch prediction and prefetching leverage long sequential features. Our NCF does not yet use sequential features or sequential snapshots; we leave this for future work.
3Our framework supports multitasking in that it handles control-flow and data-flow tasks simultaneously. However, in our ablation studies, we did not see significant evidence that these tasks currently help each other.
The effectiveness of the GNN depends on the input graph, and we perform ablation studies in the appendix (Section B.1).
4.5 ALGORITHM CLASSIFICATION
To test if the model has learned about the behavior of the application, we test the NCF representation on an algorithm classification dataset (Lili Mou, 2016). We randomly select a subset of 15 problems from this dataset4 and generate inputs for each program. 50 programs are randomly selected from each class. These are split into 30 for training, 10 for validation (tuning the linear SVM described below) and 10 for testing.
We generate the graph for each program post-compilation and obtain memory snapshots via our instrumentation tool. The representation is pre-trained on branch prediction and the resultant embeddings are averaged to serve as the final embedding of the program. A linear SVM is trained using the pre-trained embeddings to output a predicted class.
This yields 96.0% test accuracy, whereas the state-of-the-art (Ben-Nun et al., 2018) achieves 95.3% on the same subset. In contrast to Ben-Nun et al. (2018), which pre-trains an LSTM on over 50M lines of LLVM IR, our embeddings are trained on 203k lines of assembly from the algorithm classification dataset itself. This shows that branch prediction can be highly predictive of high-level program attributes, suggesting that it may be fruitful to use dynamic information to solve other static tasks.
4.6 GENERALIZATION TEST ON REPRESENTATIONS
Lastly, we test the effectiveness of binary representations of memory state. There are three major options for representing dynamic state: categorical, real-valued scalar, and binary. State-of-the-art data prefetchers tend to use categorical representations. Recent advances in representing and manipulating numbers for neural arithmetic logic units use scalar representations (Trask et al., 2018).
4We use a subset because the programs had to be modified (by adding appropriate headers, fixing bugs) to compile and run in order to retrieve the assembly code and dynamic states.
We evaluate the generalization ability of these representations using a simple loop. We replace the constant 10 total iterations of the loop in Figure 2(a) with a variable k. The control-flow of the loop decides to stay in or jump out of the loop by comparing variables i and k. The branch will not be taken for the first k − 1 times but will be taken at the kth time. Since traditional state-of-the-art branch predictors depend on memorizing past branch history, they will always mispredict the final branch (as it has always been taken). Our proposal is able to make the correct prediction at the kth time.
The challenge for our model is that the value k can change during program execution, and the model needs to generalize to unseen values. We use this example to test the three representations
and create a testing set using k values from 1 to 80. The training set only contains k values from 1 to 40 with a step size of 3 (1, 4, 7, ..., 37). We feed all three representations to MLP predictors that have one hidden layer of the same size of each input representation (160 for categorical, 2 for scalar and 14 for binary). The results are shown in Figure 6.
The categorical representation can only correctly predict the training samples, missing two out of every three k values, whereas the scalar and binary representations are both able to generalize across a continuous range, filling the “holes” between training samples. The binary representation generalizes to a larger range than a scalar representation, as long as the bits have been seen and toggled in the training set. Since binary is inherently hierarchical (the range increases exponentially with the number of bits), this advantage is greater on a real-world 64-bit machine.
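For concreteness, the three encodings can be sketched as below. The sizes follow the text (160 for categorical, 2 for scalar, 14 for binary), which we interpret as encoding the two compared operands i and k; that pairing is our assumption.

```python
import torch

def categorical(v, num_classes=80):
    onehot = torch.zeros(num_classes)
    onehot[v - 1] = 1.0          # k values range from 1 to 80
    return onehot

def scalar(v):
    return torch.tensor([float(v)])

def binary(v, bits=7):           # 7 bits cover values up to 127
    return torch.tensor([(v >> i) & 1 for i in range(bits)],
                        dtype=torch.float32)

# Encoding two operands gives input sizes of 160, 2 and 14
# respectively, matching the MLP input sizes quoted in the text.
def encode(i, k, fn):
    return torch.cat([fn(i), fn(k)])
```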
5 RELATED WORK
5.1 LEARNING FROM SOURCE CODE & EXECUTION BEHAVIOR
There is a significant body of work on learning for code, and we refer the reader to Allamanis et al. (2018) for a survey. We focus on the most relevant methods here. Li et al. (2015) use GNNs to represent the state of heap memory for a program verification application. Allamanis et al. (2017) learn to represent source code with GNNs.
Similar to us, Ben-Nun et al. (2018); Mendis et al. (2018) learn representations of code from low-level syntax, the LLVM intermediate representation (IR) or assembly, but do not use dynamic information. We use assembly code instead of IR to maintain a 1:1 mapping between dynamic state and the static backbone of the graph (since instructions are atomic when executed). Prior work that builds graphs purely based on static source code disregards the instruction-level dynamics that arise during program execution, as a single static piece of code can execute in different ways depending on the provided inputs.
Wang et al. (2017) embed the sequences of values that variables take on during the execution of a program as a dynamic program embedding. The code is not otherwise used. The states are relatively simple (variables can take on relatively few possible values), in contrast to our dynamic states, which are “from the wild.” Cummins et al. (2017) embed code and optionally allow a flat vector of auxiliary features that can depend on dynamic information. Abstract program execution can also be used as a basis for learning program representations (DeFreez et al., 2018; Henkel et al., 2018). However, neither uses concrete program state.
5.2 USING PROGRAM STATE TO GUIDE PROGRAM SYNTHESIS
There are several works that learn from program state to aid program synthesis (Balog et al., 2016; Parisotto et al., 2016; Devlin et al., 2017; Zohar & Wolf, 2018; Chen et al., 2019; Vijayakumar et al., 2018; Menon et al., 2013). In particular, Balog et al. (2016) use neural networks to learn a mapping from list-of-integer-valued input-output examples to the set of primitives needed. All of
these operate on programs in relatively simple Domain Specific Languages and are learning mappings from program state to code, rather than learning joint embeddings of code and program state.
5.3 DYNAMIC PREDICTION TASKS
Branch prediction and prefetching are heavily studied in the computer architecture domain. High-performance modern microprocessors commonly include perceptron (Jiménez & Lin, 2001) or table-based branch predictors that memorize commonly taken paths through code (Seznec, 2011).
While there has been a significant amount of work around correlation prefetching in academia (Wenisch et al., 2009; Charney & Reeves, 1995; Roth et al., 1998), modern processors only commonly implement simple stream prefetchers (Chen & Baer, 1995; Jouppi, 1990; Gindele, 1977). Recent work has related prefetching to natural language models and shown that LSTMs achieve high accuracy (Hashemi et al., 2018). However, their categorical representation covers only a limited portion of the access patterns while the binary representation described here is more general.
6 CONCLUSION
We develop a novel graph neural network that uses both static and dynamic features to learn a rich representation for code. Since the representation is based on a relational network, it is easy to envision extensions that include high-level source code in the model or add new prediction tasks. Instead of focusing on hardware-realizable systems with real-time performance, our primary focus in this paper is to develop representations that explore the limits of predictive accuracy for these problems with extremely powerful models, so that the improvements can eventually be distilled. This is common in machine learning research, where typically the limits of performance for a given approach are reached and then distilled into a performant system, e.g. (Van Den Oord et al., 2016; Oord et al., 2017). However, benefits can still be derived by using the model to affect program behavior through compilation hints (Chilimbi & Hirzel, 2002; Jagannathan & Wright, 1996; Wolf et al., 1996), making this exploration immediately practical. We argue that fusing both static and dynamic features into one representation is an exciting direction to enable further progress in neural program understanding.
A HYPERPARAMETERS
The hyperparameters for all models are given in Table 1.
B NODE SUB-TYPES
We describe the node sub-types in Table 2. Pseudo-nodes implement operations that are commonly known as the addressing modes of the Instruction Set Architecture. Note that node sub-types are used to derive initial node embeddings and for interpretability. They do not factor into the computation of the graph neural network.
Table 2: Descriptions of node sub-types.

Major node type | Sub-type    | Description
Pseudo nodes    | non-mem-src | a source operand that does not involve a memory load operation, obtained directly from register(s) and/or constant(s)
                | mem-src     | a source operand that involves a memory load operation, obtained by loading data from a memory location
                | non-mem-tgt | a target operand that does not involve a memory write operation, writing directly to a register
                | mem-tgt     | a target operand that involves a memory write operation, writing data to a memory location
                | base        | a base that is obtained directly from a variable node
                | ind-base    | an indirect base that is obtained from certain operations on the child variable nodes, like multiplying a register by a constant
                | offset      | an offset value that is to be added to a base
Variable nodes  | reg         | a register, whose value is dynamically changed during execution
                | const       | a constant, whose value is specified in the assembly
B.1 ABLATION STUDY
The effectiveness of the GNN depends on the input graph. As pseudo nodes are a large component of the static graph, we run additional experiments to understand their importance. In particular, we try to only use the pseudo nodes src and tgt, which are directly connected to instruction nodes. Our data shows that removing pseudo nodes other than src and tgt and connecting variable nodes directly to src and tgt has little impact on branch prediction (an MPKI increase of 0.26), but has a large impact on the data-flow accuracy (accuracy goes down by 12.1%).
Figure 7 shows the sensitivity of task performance to the number of propagation steps during training for the GNN on omnetpp. We find that prefetching is more sensitive to propagation steps than branch prediction, and requires 5-8 steps for peak accuracy. Due to the control flow of programs, we find that 5-8 steps propagate information across 50-60 instruction nodes along the graph's backbone for omnetpp (up to 6000 nodes for perlbench). | 1. What are the contributions of the paper in using deep learning and GNNs for optimizing code performance?
2. What are the concerns regarding the reasoning behind binary representations being better than categorical or scalar representations?
3. How does the reviewer assess the transfer learning experiments and comparisons with other works in the field?
4. Are there any questions about the construction of the binary code or its relationship to the neural network's ability to generalize?
5. Can the reviewer provide more context or information about the datasets used in the comparison with Ben-Nun et al.'s method?
6. Is there a question about the impact of pre-training on the performance of the proposed method? | Review | Review
Using deep learning, and especially GNNs, seems to be a popular area of research. I am no
expert at optimizing code performance, so please take my review with a grain of salt. The algorithmic contributions of the paper are as following:
(a) GNN that combines static code and dynamic execution trace.
(b) Binary encoding of features leads to better performance in comparison to categorical and scalar representations.
The results show that the proposed method outperforms existing methods on standard benchmarks in the program execution community.
From a machine learning stand point, the contributions are straightforward and the results make sense. I have the following questions:
(i) Authors argue that binary representations are better because of their hierarchical nature. They mention that they can generalize even if not all combinations of bits are seen, but only a subset is seen in a manner such that every bit has been flipped a couple of times. I don’t agree with this reasoning, as seeing the individual bits flip provides no guarantee that an NN would generalize to a new combination of bits unless the distance in the binary code makes sense. Is there some special way in which the binary code is constructed?
(ii) Transfer learning experiments: It is unclear to me whether the comparison presented in the paper is a fair one. The comparison is made against Ben-Nun et al. pre-training on LLVM IR. I am not sure how different the LLVM IR dataset is from the algorithm classification dataset. If the datasets are very different, then obviously a lot of pre-training will only result in a modest performance gain. What happens when the Ben-Nun method is pre-trained on the same dataset as the proposed method? Also, what is the difference in performance between the cases when the proposed method is applied to algorithm classification with and without pre-training?
Overall, the paper is an application of GNNs to optimizing code execution. The technical innovations are domain-specific and do not inform the general machine learning community. Given my lack of expertise in the area of program execution, I cannot judge the significance of the performance improvements reported in the paper.
Given my current concerns, I cannot recommend acceptance. I might change my ratings based on the review discussions and the author’s responses to the above questions. |
ICLR | Title
Crafting Data-free Universal Adversaries with Dilate Loss
Abstract
We introduce a method to create Universal Adversarial Perturbations (UAP) for a given CNN in a data-free manner. Data-free approaches suit scenarios where the original training data is unavailable for crafting adversaries. We show that adversary generation with full training data can be approximated by a formulation without data. This is realized through a sequential optimization of the adversarial perturbation with the proposed dilate loss. Dilate loss basically maximizes the Euclidean norm of the output before the nonlinearity at any layer. By doing so, the perturbation constrains the ReLU activation function at every layer to act roughly linearly for data points and thus eliminates the dependency on data for crafting UAPs. Extensive experiments demonstrate that our method not only has theoretical support, but also achieves a higher fooling rate than the existing data-free work. Furthermore, we evidence improvement in limited-data cases as well.
1 INTRODUCTION
Despite the phenomenal success of deep neural networks in many practical applications, adversarial attacks remain a constant plague. These attacks corrupt the input with a small and usually imperceptible structured noise, causing the model to output incorrect predictions. The sole existence of such a vulnerability not only raises concerns about the security of deep learning models, but also questions the robustness of the learned representations. To make matters worse, it has been shown that a single noise, called a universal adversarial perturbation (UAP), can be added to any image and fool the network. UAPs do not require any optimization on the input image at attack time, yet the corruption effectively works for most of the images. Interestingly, such perturbations created for one model exhibit transferability of attack and induce high fooling on other models. One drawback of UAPs, though, is the requirement of training data for crafting perturbations. This is increasingly infeasible as datasets are becoming quite large and might not be publicly released due to privacy or copyright reasons. In such cases where the original data is not available, data-free methods are gaining traction. In the data-free setting, the perturbation is created only with the trained neural network. Such methods typically rely on the trained weights and the CNN structure to find vulnerable patterns that can maximally disturb the normal propagation of activations across the network. A high transfer of attack across networks is observed for data-free UAPs as well, raising their practical utility. Moreover, the study of these perturbations might lead to new insights on how deep neural networks actually work.
In this paper, we propose a new method for crafting data-free UAPs for any given CNN using ReLU nonlinearity. The approach relies on finding the singular vectors of a linearly approximated network (Section 3.1). A loss formulation is devised to enable this approximation under certain conditions. Dilate loss forms the major component of the method; it generates a perturbation that maximizes the Euclidean norm of the activation vector (before the nonlinearity) at a given layer (Section 3.2). We show that the perturbation crafted through dilation has the effect of linearly approximating the ReLU layer responses for any data points. These dilations are done sequentially for all the layers, from the input to the last classification stage (Section 3.3). We argue that the sequential dilations result in a perturbation that aligns with the first singular vector of the linearly approximated network. Our approach outperforms the existing data-free method in fooling rates, and the evaluation is also done for less-data scenarios (Section 4).
In summary, the work contributes the following:
• A new method that can create universal adversarial perturbation without using data and achieve state-of-the-art data-free fooling rates.
• A detailed theoretical analysis which formulates the proposed sequential dilation algorithm by approximating the adversary generation with full training data under certain conditions.
2 RELATED WORK
The vulnerability of deep neural networks to adversarial samples was first shown in Szegedy et al. (2013). Following Szegedy et al. (2013), several methods (Goodfellow et al., 2014; Kurakin et al., 2016; Dong et al., 2018; Madry et al., 2017; Moosavi-Dezfooli et al., 2016; Brendel et al., 2017; Athalye et al., 2018; Carlini & Wagner, 2017) have been proposed to craft such adversarial samples. One of the simplest methods is the Fast Gradient Sign Method (FGSM) formulated in Goodfellow et al. (2014). FGSM obtains the perturbation by a single step of gradient ascent of the loss function with respect to the input image. There are multi-step variants of FGSM like iterative FGSM (Kurakin et al., 2016), Momentum (Dong et al., 2018), Projected Gradient Descent (PGD) (Madry et al., 2017), Deepfool (Moosavi-Dezfooli et al., 2016), etc. These attacks are image-specific, where the perturbation is a function of the input and requires a separate optimization for each image.
Moosavi-Dezfooli et al. (2017) introduce the idea of Universal Adversarial Perturbations (UAP), a single perturbation that can fool the model for most of the input images. The UAP is obtained by jointly maximizing the training loss for dataset images. There are also generative approaches like NAG (Reddy Mopuri et al., 2018b), AAA (Reddy Mopuri et al., 2018a), and GAP (Poursaeed et al., 2018) for crafting universal adversaries. Khrulkov & Oseledets (2018) propose a method based on singular vectors of the Jacobian matrix to create universal adversaries. They show impressive fooling performance with a very small set of training images, but the method is not data-free. Though the study of adversarial attacks started with the classification task, several works (Xie et al., 2017; Metzen et al., 2017) extend such attacks to other tasks like segmentation, detection, etc. Further, adversarial examples are shown to generalize to the physical world in Kurakin et al. (2016). While most attacks change each pixel in the image with a small imperceptible noise, there are methods (Sharif et al., 2016; Brown et al., 2017; Papernot et al., 2016) that perturb a limited number of pixels with large noise, as these are more practical in nature.
The attacks discussed so far, in general, rely on maximizing the training loss. In contrast, Mopuri et al. (2018) devise a generalizable data-free objective for crafting UAPs (GDUAP). GDUAP maximizes the activations at the output of all convolutional layers, corrupting the learned feature representations and hence fooling the model. Our method has similarity to GDUAP, but with the crucial difference that the Euclidean norm maximization is performed before the nonlinearity in our case. Further, we maximize the norms of the layers one after the other in a sequential fashion, as opposed to a single joint optimization. We show theoretically and experimentally that these changes make a substantial difference in fooling performance. Moreover, no sound reasoning is available in Mopuri et al. (2018) to justify the formulation, whereas we provide a theoretical explanation for the algorithm.
3 OUR APPROACH
3.1 CRAFTING A DATA-FREE OBJECTIVE
Consider a deep neural network with L layers, already trained for a classification task. We assume the activation function employed in the network to be ReLU (Nair & Hinton, 2010), defined as
\sigma_R(x) = \begin{cases} x & \text{if } x > 0, \\ 0 & \text{otherwise}, \end{cases}
which basically zeros out all negative elements and retains the positive ones when applied to a matrix. Let f_1(x) = W_1 x, f_2(x) = W_2 \sigma_R(W_1 x), \ldots, f_l(x) = W_l \sigma_R(\cdots W_2 \sigma_R(W_1 x) \cdots) be the outputs at different layers of the network for an input vector x. Note that the output f_i for the i-th layer is taken before the nonlinearity and f_L represents the pre-softmax neuron layer. We ignore the bias terms for mathematical simplicity. The weights (W_i's) of the network are trained with input and label pairs from the dataset D.
Our aim is to craft a perturbation vector p with Euclidean norm c such that the network misclassifies most of the data samples. Mathematically, the optimization can be written as,
\max_{p:\|p\|=c} \sum_{(x_i, y_i) \in D} \mathbb{I}\big[\arg\max(\sigma_S(f_L(x_i + p))) \neq y_i\big], \qquad (1)
where \sigma_S is the softmax function. Note that the condition for the indicator function (\mathbb{I}) that checks for misclassification depends on the ground truth labels (y_i's). Assuming a high classification accuracy for the model, we approximate the condition to \arg\max(\sigma_S(f_L(x_i + p))) \neq \arg\max(\sigma_S(f_L(x_i))). Since the softmax function is monotonic, we further relax objective 1 to
\max_{p:\|p\|=c} \sum_{(x_i, y_i) \in D} \|f_L(x_i + p) - f_L(x_i)\|^2, \qquad (2)
which amounts to finding a p that maximizes the network response after being added to the inputs. In other words, p should maximally disturb the output f_L for all data points. Note that for some x_i's, maximizing \|f_L(x_i + p) - f_L(x_i)\|^2 might not result in an incorrect prediction. We assume such cases to be a minority, and the objective could lead to significant adversarial changes to the f_L responses for the majority of input samples. If X = [x_1, x_2, \ldots, x_N] denotes the matrix formed by assembling all N data samples as columns and \mathbf{1} represents a column vector of ones of appropriate size, then optimization 2 can be rewritten as,
\max_{p:\|p\|=c} \|f_L(X + p\mathbf{1}^T) - f_L(X)\|_F^2. \qquad (3)
We recognize that optimization 3 is not exactly equivalent to the original objective 1, but is an approximation that does not require the ground truth labels. But the training data X is still essential for the computation and needs to be eliminated for a completely data-free approach. Now observe that if f_L were a linear function, then objective 3 would reduce to,
\max_{p:\|p\|=c} \|f_L(p)\|^2, \qquad (4)
which means that p has to align with the first right singular vector of the linear map f_L. The singular vector p could potentially disturb the output f_L for all the x_i's more than any other vector could. Interestingly, note that optimization 4 is a data-free objective under the linear assumption on f_L. However, f_L is nonlinear due to the presence of ReLU activation functions at every layer. Note that formulation 4 remains valid even if f_L is not a completely linear map but satisfies f_L(X + p\mathbf{1}^T) = f_L(X) + f_L(p\mathbf{1}^T) for some p. Hence, we devise an algorithm to seek a perturbation that can approximately induce the above additivity property in the ReLU network.
3.2 LINEARLY APPROXIMATING THE NETWORK
We start by noting that the only nonlinearity in the network is due to the ReLU activation function at every layer. But ReLU is piece-wise linear; in particular, observe that \sigma_R(a + b) = \sigma_R(a) + \sigma_R(b) if vectors a and b are in the same orthant. Now consider the ReLU nonlinearity after the first layer, \sigma_R(W_1 X + W_1 p\mathbf{1}^T), which becomes additive if the column vectors of W_1 X are in the same orthant as W_1 p. We relax this criterion and favour making the vectors as close as possible by,
\max_{p:\|p\|=c} \mathbf{1}^T (W_1 X)^T (W_1 p) = N (W_1 \bar{x}_1)^T (W_1 p), \qquad (5)
where \bar{x}_1 stands for the mean of the N data samples in X. The solution of optimization 5 is expected to minimize the error due to the additive approximation of the layer. In order to eliminate the data term from the objective, we make the assumption that the first singular vector of each weight matrix aligns with the mean vector of its corresponding input. In other words, among the singular vectors of W_1, the dot product of the data mean \bar{x}_1 is maximum with the first singular vector. Now we use the following lemma to argue that objective 5 is maximized when p aligns with the first singular vector of W_1 (proof available in Appendix A). Lemma 1. If x has a positive and larger scalar projection on the first singular vector of W than on the remaining singular vectors, then \arg\max_p x^T W^T W p = \arg\max_p \|Wp\|^2 subject to \|p\| = c.
Hence, the optimization problem 5 is equivalent to,
\max_{p:\|p\|=c} \|W_1 p\|^2, \qquad (6)
which we call the dilation of the first layer. We justify the assumption based on the premise that the singular vectors of the weights must have captured the discriminatory modes of the data samples during training. By discriminatory modes we refer to the components of X that are essential for the classification task and most likely extracted by the hierarchy of weights in the network; these do not correspond to the modes of variation of the data points. The assumption essentially means that the first singular vector carries the most important features common to most of the data points, more so than the remaining singular directions. This is taken to be valid for any layer weight W_l, with the difference that the mean vector \bar{x}_l is averaged over the layer l-1 output, i.e. \bar{x}_l = (1/N)\sigma_R(f_{l-1}(X))\mathbf{1} for l > 1. Now consider the second layer of the network, given by \sigma_R(W_2 \sigma_R(W_1 X + W_1 p\mathbf{1}^T)), where two ReLU functions are in action. Suppose the first ReLU function is linearly approximated with dilation objective 6. Consequently, the second layer output can be written as \sigma_R(W_2 \sigma_R(W_1 X) + W_2 \sigma_R(W_1 p\mathbf{1}^T)). Note that the second ReLU can be linearly approximated if the column vectors of W_2 \sigma_R(W_1 X) are close to W_2 \sigma_R(W_1 p). Considering the two approximations, we formulate the optimization as,
\max_{p:\|p\|=c} \mathbf{1}^T (W_2 \sigma_R(W_1 X))^T (W_2 \sigma_R(W_1 p)) + \mathbf{1}^T (W_1 X)^T (W_1 p), \qquad (7)

\max_{p:\|p\|=c} (W_2 \bar{x}_2)^T (W_2 \sigma_R(W_1 p)) + (W_1 \bar{x}_1)^T (W_1 p). \qquad (8)
Again, we leverage the assumption that the data mean projects most onto the first singular vector of the weight matrix, and with Lemma 1, the problem becomes the dilation of the second layer,
\max_{p:\|p\|=c} \|W_2 \sigma_R(W_1 p)\|^2 + \|W_1 p\|^2. \qquad (9)
We extend the same arguments to further layers and see that the dilations tend to make the network layers approximately additive with respect to the generated perturbation vector. For the last layer, the dilation terms are added to objective 4 to account for the errors introduced by the linear approximation of all the ReLU layers. Hence, the final optimization problem for UAP generation becomes,
\max_{p:\|p\|=c} \|f_L(p)\|^2 + \sum_{l=1}^{L-1} \|f_l(p)\|^2, \qquad (10)
which is clearly a completely data-free formulation.
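As a quick sanity check of the single-layer building block, the maximizer of dilation objective 6 is the first right singular vector of W_1. A small numpy sketch, ours rather than the paper's, with a random matrix standing in for W_1:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((256, 128))

# First right singular vector of W1, scaled to norm c
_, _, Vt = np.linalg.svd(W1, full_matrices=False)
c = 10.0
p_star = c * Vt[0]

# ||W1 p||^2 over ||p|| = c is maximized at p_star: any random
# direction of the same norm achieves a smaller dilation.
rand_p = rng.standard_normal(128)
rand_p = c * rand_p / np.linalg.norm(rand_p)
assert np.linalg.norm(W1 @ p_star) >= np.linalg.norm(W1 @ rand_p)
```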
3.3 SEQUENTIAL DILATION ALGORITHM
We leverage the theoretical intuitions from the previous section to formulate an algorithm for UAP generation in a data-free manner. Note that a direct implementation of optimization 10 through any gradient descent algorithm would lead to sub-optimal solutions, as the chances of getting stuck in local minima are high. This is especially true since no data is used and the only variable being optimized is p, with no sources of randomness. Hence, we perform the dilations of optimization 10 in a sequential manner so as to avoid reaching local minima solutions. Some more changes are applied in the way the original optimization is implemented, mainly for training stability and to compare fairly with existing methods. For numerical stability of the optimization, we follow Mopuri et al. (2018) and maximize the logarithm of the Euclidean norm in the dilate loss. In order to compare with existing methods, the l∞ norm is restricted instead of the l2 norm in problem 10. This constrains the maximum absolute value of the adversarial noise.

Algorithm 1: The sequential dilation algorithm for crafting data-free UAPs. The input is the multi-layer neural network f and the perturbation strength c. A set of adversarial perturbations \{p_l\}_{l=1}^{L}, one for each layer, is returned as the output. Note that \lambda is the learning rate.
  p_0 \sim U(-10, 10)
  for l = 1, 2, \ldots, L do
      p_l = p_{l-1}
      while not converged do
          p_l = p_l + \lambda \nabla_{p_l} \sum_{i=1}^{l} \log(\|f_i(p_l)\|^2)
          set \|p_l\|_\infty = c
      end while
  end for
Algorithm 1 elucidates our proposed sequential dilation algorithm for ReLU-based neural networks. The procedure loops over all the layers of the network. For the first layer, we find a vector p_1 which maximizes the logarithm of the l2 norm of W_1 p_1, essentially finding the first singular vector of W_1. After the dilation of the first layer, the perturbation p_1 is used as the initialization for maximizing the Euclidean norm of the second layer. But note that the first loss term \|W_1 p\|^2 is still kept in the dilation of the second layer. This loss formulation tries to maximize the norm of the output at the current layer along with all the previous layers that feed into it. In short, the dilation of the l-th layer starts the optimization with the perturbation obtained from the dilation of the (l-1)-th layer and involves the joint dilation of all l layers. The method runs till the softmax layer of the network, and the final perturbation p_L is a UAP, created without using any training data, that could potentially fool the majority of input samples.
We only consider CNNs trained for the classification task. The optimization is performed using the standard ADAM optimizer (Kingma & Ba, 2014) with a fixed learning rate schedule until the training loss saturates. The typical learning rate is 0.1. At every step of the optimization, the values of the perturbation are clipped to limit the allowed range. The l∞ norm is set to 10 for all our experiments. Although Euclidean and maximum norms are not theoretically equivalent, in practice we observe that the final perturbations are saturated, with roughly more than 78% of the values reaching ±10. This implies that the l2 norm is also approximately restricted under the saturation assumption. Once the perturbation gets saturated during optimization, the loss might also saturate and the optimization could be stuck in local minima.
To prevent this, after the dilation at every layer, we rescale the perturbation by dividing the pixel values by 2. This does not make any difference to the procedure, as only the magnitude is changed to make room for further optimization. Ideally, we should perform sequential dilations for all the convolutional and fully connected layers of the CNN, from the input side to the final softmax classifier. But for very deep models like Inception and ResNet, the dilations are done only for every architectural block. Because of this, the optimization might be more nonlinear than what is assumed in Section 3.1, with the maximum norm further inducing a clipping nonlinearity. Hence, we initialize the perturbation (p_0) with values drawn randomly from a uniform distribution U(−10, 10). Finally, note that absolutely no data in any form is used for creating the adversarial noise; not even a validation set is employed, in contrast to Mopuri et al. (2018). The code for our approach is available for review at the anonymous link https://github.com/anoniclr/uap seq dilate.
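To make Algorithm 1 concrete, below is a minimal sketch of sequential dilation on a toy fully connected ReLU network. This is our own illustration: the authors' released code is in TensorFlow, the network here is random rather than pretrained, and the convergence check is replaced by a fixed number of steps.

```python
import torch

torch.manual_seed(0)
eps = 10.0                                  # l_inf budget, 10 in the paper
dims = [3072, 512, 256, 10]                 # toy fully connected ReLU net
Ws = [torch.randn(o, i) * 0.02 for i, o in zip(dims[:-1], dims[1:])]

def pre_acts(p):
    """Pre-nonlinearity outputs f_1(p), ..., f_L(p)."""
    outs, h = [], p
    for W in Ws:
        z = W @ h
        outs.append(z)
        h = torch.relu(z)
    return outs

p = (torch.rand(dims[0]) * 2 - 1) * eps     # p_0 ~ U(-10, 10)
for l in range(1, len(Ws) + 1):             # sequential dilation over layers
    p = p.detach().requires_grad_(True)
    opt = torch.optim.Adam([p], lr=0.1)
    for _ in range(200):                    # stand-in for "until convergence"
        opt.zero_grad()
        # maximize the cumulative log dilate loss of the first l layers
        loss = -sum(torch.log(z.norm() ** 2) for z in pre_acts(p)[:l])
        loss.backward()
        opt.step()
        with torch.no_grad():
            p.clamp_(-eps, eps)             # enforce the l_inf constraint
    if l < len(Ws):
        with torch.no_grad():
            p = p / 2                       # rescale to escape saturation
uap = p.detach()                            # final data-free perturbation
```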
4 EXPERIMENTS
We benchmark our proposed sequential dilation method against the existing data-free approaches. All the experiments are performed on popular classification models like VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016) and Inception (Szegedy et al., 2016). These models are already trained on the ImageNet (Deng et al., 2009) dataset and deliver very high classification accuracy. Figure 2 shows the perturbations crafted using the proposed method for various networks. We follow other works (Moosavi-Dezfooli et al., 2017; Mopuri et al., 2018) and assess the performance of our adversarial attack using the fooling rate metric. Fooling rate is defined as the fraction of test images on which the network prediction differs before and after the addition of the adversarial noise. Table 1 reports the fooling rates obtained by our method along with those of other works. The first comparison is with the random baseline, which is the fooling incurred with just random noise. The second baseline is the only existing data-free approach, GDUAP. Clearly, our proposed data-free objective achieves significantly higher fooling rates than the other data-free work. This indicates that the sequential dilation algorithm not only has theoretical backing, but also results in higher fooling rates in practice. Note that we have run our method ten times, and the results produced in the table are the mean fooling rate along with the standard deviation, to statistically validate the performance improvement.
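The fooling rate can be computed as sketched below; `model`, `images`, and `uap` are placeholder names for any classifier returning logits, a batch of test images, and the crafted perturbation.

```python
import torch

def fooling_rate(model, images, uap):
    """Fraction of images whose predicted label flips under the UAP.
    images: [N, C, H, W] tensor; uap: [C, H, W] perturbation."""
    with torch.no_grad():
        clean = model(images).argmax(dim=1)
        adv = model(images + uap).argmax(dim=1)
    return (clean != adv).float().mean().item()
```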
Now we ablate the different aspects of the sequential dilation algorithm to demonstrate the usefulness of the design choices. Table 2 reports the results of the various ablative experiments. First
experiment is the non-sequential version of dilation, listed as single dilation in the table. This is a single joint optimization maximizing the norm of activations before the nonlinearity. PSM maximization refers to the simple maximization of the pre-softmax layer (f_L) alone, which is the same as the approximated objective 4. As described in Section 3.3, our dilation of a layer involves keeping the maximization terms of all the previous layers. We empirically validate the necessity of such a scheme by just sequentially maximizing the layer norms without the cumulative loss term, for the experiment Ours without accumulation in Table 2. Note that each maximization starts with the perturbation initialized from the previous optimization. The results for these ablations evidence that our exact formulation of sequential dilation achieves higher fooling rates in the data-free scenario. Further, Figure 3 displays the perturbations obtained through sequential dilation at every layer for the VGG-16 network, basically the p_l's from Algorithm 1. We also indicate the corresponding fooling rate for each perturbation. It is interesting to observe that the fooling rate increases as we successively dilate layers and saturates towards the end, again emphasizing the need for the sequential process.
In many practical attack scenarios, the actual deployed model might not be available for generating the adversarial perturbation. Hence, the ability of a perturbation crafted for one network to cause reasonable fooling on another network is a highly sought-after property. This setting is known as black-box, for which we compare our method with GDUAP (Mopuri et al., 2018) in Table 3. The results evidence better black-box performance for our method than the existing data-free work, suggesting higher generalization for the perturbations from sequential dilation.
The experiments performed so far show that the proposed sequential dilate loss formulation achieves state-of-the-art fooling rates in the data-free scenario. We now consider the case where minimal training data is available, called the less-data setting. For this case, sequential dilation is applied with the limited data. The input to the network at any stage of the optimization is the image added with the current perturbation (x_i + p_l for layer l). With the help of some data points, we expect the solution to approach closer to the actual adversarial perturbation obtained with full data. Table 4 indicates the fooling rates of the less-data setting with varied amounts of training samples. Note that, to compare with GDUAP, we also use a validation set to select the best perturbation while training. Our approach performs significantly better than GDUAP when data samples are very scarce, increasing the practical utility of the method. We also observe that the fooling rates with less data have, in general, increased over the data-free setting and become comparable to the full-data UAP (see Table 1).
Furthermore, Table 5 compares our approach with Singular Fool (Khrulkov & Oseledets, 2018) in an extremely-less-data scenario. For a fair comparison with Khrulkov & Oseledets (2018), we use only 64 images for crafting the perturbation, and no validation set is employed. The best perturbation is selected based on the training loss. As expected, our method achieves significantly higher fooling performance than Khrulkov & Oseledets (2018). Even more, we apply our algorithm with 64 randomly chosen images from Pascal VOC (Everingham et al., 2011). Interestingly, despite the models being trained on a different dataset, the fooling rates remain more or less similar and are higher than those of Khrulkov & Oseledets (2018). This shows that our approach works well in less-data cases even when the available images are not from the dataset on which the model was trained.
5 CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a new algorithm, called the sequential dilation, to craft universal adversaries in a data-free manner. The approach relies on finding the first singular vector of the linearly approximated neural network. The approximation is being enabled by optimizing with the proposed dilate loss. Elaborate experiments and ablations demonstrate that our approach achieves superior data-free fooling performance. One promising direction for future research would be to modify the algorithm and generate targeted UAP, where the objective is fooling to a specific class.
A PROOF OF LEMMA 1
If W = USV^T denotes the singular value decomposition of W, then

\max_{p:\|p\|=c} x^T W^T W p = x^T V S^2 V^T p = \sum_i S_{ii}^2 (x^T V_{:,i})(p^T V_{:,i}). \qquad (11)
Since x^T V_{:,1} > x^T V_{:,i} (by assumption) and S_{11} > S_{ii} for all i > 1, the solution to the optimization is cV_{:,1}, a scaled version of the first singular vector of W. The same solution can be obtained through the definition of the first singular vector as,
\max_{p:\|p\|=c} \|Wp\|^2, \qquad (12)
completing the proof.
B EXPERIMENTAL SETUP
All our experiments reported in the main paper are run on an NVIDIA DGX cluster (dual 20-core Intel Xeon E5-2698 v4, 2.2 GHz) within the TensorFlow docker. We use the pretrained classification models from the TensorFlow Slim library (S. Guadarrama, 2016).
C DEMONSTRATION OF ADVERSARIAL ATTACK
Figures 4 to 9 demonstrate adversarial attacks through perturbations generated by our proposed sequential dilation algorithm for various networks. The first row in the figures shows the clean images with the class predicted by the model, while the second row has the corresponding perturbed samples with the flipped prediction labels.
2. How does the proposed approach differ from previous methods, specifically GDUAP?
3. Can you explain the Euclidean norm maximization and its significance in the proposed method?
4. Why did the authors choose to optimize the perturbations in each layer sequentially rather than jointly?
5. What is the significance of the detail discussed in Section 3.2, and how does it relate to the rest of the paper?
6. Are there any concerns or limitations regarding the proposed method that the authors did not address? | Review | Review
The paper is well written and easy to follow. In this paper, a new data-free method is proposed to create universal adversarial perturbations without using data. Although there are some similarities with GDUAP, the authors also make some crucial improvements. They perform Euclidean norm maximization before the nonlinearity of each layer, which not only has theoretical backing but also brings better performance in practice. Meanwhile, they optimize the perturbations for each layer in a sequential manner instead of a joint optimization, to avoid the chance of reaching local minima solutions.
The authors provide a detailed theoretical analysis and systematic experimental results to support their arguments, which is convincing. What’s more, the proposed method achieves state-of-the-art data-free fooling rates on a large-scale dataset, which strongly demonstrates the effectiveness of their method.
In section 3.2, (the top of page 4) “which becomes additive if column vectors in W1X are in the same orthant as W1p. We relax this criteria and favour the case of making the vectors as close as possible by”
Could the authors provide more discussions about it? |
ICLR | Title
Crafting Data-free Universal Adversaries with Dilate Loss
Abstract
We introduce a method to create Universal Adversarial Perturbations (UAP) for a given CNN in a data-free manner. Data-free approaches suite scenarios where the original training data is unavailable for crafting adversaries. We show that the adversary generation with full training data can be approximated to a formulation without data. This is realized through a sequential optimization of the adversarial perturbation with the proposed dilate loss. Dilate loss basically maximizes the Euclidean norm of the output before nonlinearity at any layer. By doing so, the perturbation constrains the ReLU activation function at every layer to act roughly linear for data points and thus eliminate the dependency on data for crafting UAPs. Extensive experiments demonstrate that our method not only has theoretical support, but achieves higher fooling rate than the existing data-free work. Furthermore, we evidence improvement in limited data cases as well.
1 INTRODUCTION
Despite the phenomenal success of deep neural networks in many practical applications, adversarial attacks are being a constant plague. These attacks corrupt the input with a small and usually imperceptible structured noise causing the model to output incorrect predictions. The sole existence of such a vulnerability not only raises concerns about the security of deep learning models, but also questions the robustness of the learned representations. To make it further worse, it has been shown that a single noise, called universal adversarial perturbation (UAP), can be added to any image and fool the network. UAPs do not require any optimization on the input image at attack time, but the corruption effectively works for most of the images. Interestingly, such perturbations created for one model exhibit transferability of attack and induce high fooling on other models. One drawback of UAPs though, is the requirement of training data for crafting perturbations. This is increasingly infeasible as the datasets are becoming quite large and might not be publicly released due to privacy or copyright reasons. In such cases where the original data is not available, data-free methods are gaining traction. In the data-free setting, the perturbation is created only with the trained neural network. Such methods typically rely on the trained weights and the CNN structure to find vulnerable patterns that can maximally disturb the normal propagation of activations across the network. A higher transfer of attack across networks is observed for data-free UAPs as well, raising its practical utility. Moreover, the study of these perturbations might lead to new insights on how deep neural networks actually work.
In this paper, we propose a new method for crafting data-free UAPs for any given CNN using ReLU nonlinearity. The approach relies on finding the singular vectors of a linearly approximated network (Section 3.1). A loss formulation is devised to enable this approximation under certain conditions. Dilate loss forms the major component of the method, which generates a perturbation that maximizes the Euclidean norm of the activation vector (before the nonlinearity) at a given layer (Section 3.2). We show that the perturbation crafted through dilation has the effect of linearly approximating the ReLU layer responses for any data points. These dilations are done sequentially for all the layers from the input to the last classification stage (Section 3.3). We argue that the sequential dilations results in a perturbation that aligns with the first singular vector of the linearly approximated network. Our approach outperforms the existing data-free method in fooling rates and the evaluation is also done for less data scenarios (Section 4).
In summary, the work contributes the following:
• A new method that can create universal adversarial perturbation without using data and achieve state-of-the-art data-free fooling rates.
• A detailed theoretical analysis which formulates the proposed sequential dilation algorithm by approximating the adversary generation with full training data under certain conditions.
2 RELATED WORK
The vulnerability of deep neural networks to adversarial samples is first shown in Szegedy et al. (2013). Following Szegedy et al. (2013), several methods (Goodfellow et al., 2014; Kurakin et al., 2016; Dong et al., 2018; Madry et al., 2017; Moosavi-Dezfooli et al., 2016; Brendel et al., 2017; Athalye et al., 2018; Carlini & Wagner, 2017) are being proposed to craft such adversarial samples. One of the simplest method is the Fast Gradient Sign Method (FGSM) formulated in Goodfellow et al. (2014). FGSM obtains the perturbation by single step gradient ascent of the loss function with respect to the input image. There are multi step variants to FGSM like iterative FGSM (Kurakin et al., 2016), Momentum (Dong et al., 2018), Projected Gradient Descent (PGD) (Madry et al., 2017), Deepfool (Moosavi-Dezfooli et al., 2016), etc. These attacks are image-specific, where the perturbation is a function of the input and requires a separate optimization for each image.
Moosavi-Dezfooli et al. (2017) introduce the idea of Universal Adversarial Perturbations (UAP), a single perturbation that can fool the model for most of the input images. UAP is obtained by jointly maximizing the training loss for dataset images. There are also generative approaches like NAG (Reddy Mopuri et al., 2018b), AAA (Reddy Mopuri et al., 2018a), GAP (Poursaeed et al., 2018) for crafting universal adversaries. Khrulkov & Oseledets (2018) propose a method based on singular vectors of Jacobian matrix to create universal adversaries. They show impressive fooling performance with a very small set of training images, but the method is not data-free. Though the study of adversarial attacks started with the classification task, there are several works (Xie et al., 2017; Metzen et al., 2017) that extend such attacks to other tasks like segmentation, detection, etc. Further, adversarial examples are shown to generalize to the physical world in Kurakin et al. (2016). While most attacks changes each pixel in the image with small imperceptible noise, there are methods (Sharif et al., 2016; Brown et al., 2017; Papernot et al., 2016) that perturb limited number of pixels with large noise as these are more practical in nature.
The attacks discussed so far, in general, rely on maximizing the training loss. In contrast, Mopuri et al. (2018) devise a generalizable data-free objective for crafting UAPs (GDUAP). GDUAP maximizes the activations at the output of all convolutional layers corrupting the feature representations learned and hence fooling the model. Our method has similarity to GDUAP, but with the crucial difference that the Euclidean norm maximization is performed before the nonlinearity in our case. Further, we maximize the norms of the layers one after the other in a sequential fashion as opposed to a single joint optimization. We show theoretically and experimentally that these changes cause a lot of difference in fooling performance. Moreover, no sound reasoning is available in Mopuri et al. (2018) to justify the formulation, whereas we provide theoretical explanation for the algorithm.
3 OUR APPROACH
3.1 CRAFTING A DATA-FREE OBJECTIVE
Consider a deep neural network with L layers already trained for classification task. We assume the activation function employed in the network to be ReLU (Nair & Hinton, 2010), defined as
σR(x) = { x if x > 0 0 otherwise,
which basically zeros out all negative elements and retains the positive ones when applied on a matrix. Let f1(x) = W1x, f2(x) = W2σR(W1x), ..., fl(x) = WlσR(. . .W2σR(W1x) . . .) be the outputs at different layers of the network for an input vector x. Note that the output fi for the ith layer is taken before the nonlinearity and fL represents the pre-softmax neuron layer. We ignore the bias terms for mathematical simplicity. The weights (Wis) of the network are trained with input and label pairs from the dataset D.
Our aim is to craft a perturbation vector p with Euclidean norm c such that the network incorrectly classifies for most of the data samples. Mathematically, the optimization can be written as,
max p:|p|=c ∑ (xi,yi)∈D Iargmax(σS(fL(xi+p))) 6=yi , (1)
where σS is the softmax function. Note that the condition for the indicator function (I) that checks for misclassification is dependent on the ground truth labels (yis). Assuming a high classification accuracy for the model, we approximate the condition to argmax(σS(fL(xi + p))) 6= argmax(σS(fL(xi))). Since softmax function is monotonic, we further relax the objective 1 to
max p:|p|=c ∑ (xi,yi)∈D |fL(xi + p)− fL(xi)|2, (2)
which amounts to finding a p that maximizes the network response after being added to the inputs. In other words, p should maximally disturb the output fL for all data points. Note that for some xis, maximizing |fL(xi + p)− fL(xi)|2 might not result in incorrect prediction. We assume such cases to be minority and the objective could lead to significant adversarial changes to fL responses for majority input samples. If X = {x0,x1, . . . ,xN} denote the matrix formed by the assembling all the N data samples as columns and 1 represent a column vector of ones of appropriate size, then the optimization 2 can be rewritten as,
max p:|p|=c |fL(X + p1T )− fL(X)|2F . (3)
We recognize that optimization 3 is not exactly equivalent to the original objective 1, but is an approximation which does not require the ground truth labels. But still the training data X is essential for computation and needs to be eliminated for a complete data-free approach. Now observe that if fL were to be a linear function, then the objective 3 reduces to,
max p:|p|=c |fL(p)|2, (4)
which means that p has to align along the first right singular vector of the linear fL map. The singular p could potentially disturb the output fL more for all the xis than any other vector. Interestingly, note that the optimization 4 is a data-free objective under the linear assumption of fL. However, fL is nonlinear due to the presence of ReLU activation functions at every layer. Note that the formulation 4 is valid even if fL is not a complete linear map, but satisfies fL(X + p1T ) = fL(X) + fL(p1T ) for some p. Hence, we devise an algorithm to seek a perturbation that can approximately induce the above additivity property to the ReLU network.
3.2 LINEARLY APPROXIMATING THE NETWORK
We start by noting that the only nonlinearity in the network is due to the ReLU activation function at every layer. But ReLU is piece-wise linear; especially, observe that σR(a+ b) = σR(a) +σR(b) if vectors a and b are in the same orthant. Now consider the ReLU nonlinearity after the first layer,
Now consider the ReLU nonlinearity after the first layer, σR(W1X + W1p1^T), which becomes additive if the column vectors of W1X are in the same orthant as W1p. We relax this criterion and instead favour making the vectors as close as possible via

$$\max_{p:\,\|p\|=c}\ \mathbf{1}^T (W_1 X)^T (W_1 p) \;=\; \max_{p:\,\|p\|=c}\ N (W_1 \bar{x}_1)^T (W_1 p), \tag{5}$$
where x̄1 stands for the mean of the N data samples in X. The solution of optimization 5 is expected to minimize the error incurred by the additive approximation of the layer. In order to eliminate the data term from the objective, we assume that the first singular vector of each weight matrix aligns with the mean vector of its corresponding input; in other words, among the singular vectors of W1, the dot product with the data mean x̄1 is largest for the first singular vector. We then use the following lemma to argue that objective 5 is maximized when p aligns with the first singular vector of W1 (proof available in Appendix A).

Lemma 1. If x has a positive and larger scalar projection on the first singular vector of W than on the remaining singular vectors, then $\arg\max_p\, x^T W^T W p = \arg\max_p\, \|Wp\|^2$ subject to $\|p\| = c$.
Hence, the optimization problem 5 is equivalent to,
$$\max_{p:\,\|p\|=c}\ \|W_1 p\|^2, \tag{6}$$
which we call the dilation of the first layer. We justify the assumption on the premise that the singular vectors of the weights must have captured the discriminatory modes of the data samples during training. By discriminatory modes we mean the components of X that are essential for the classification task and most likely extracted by the hierarchy of weights in the network; these do not correspond to the modes of variation of the data points. The assumption essentially means that the first singular vector carries more of the important features common to most data points than the remaining singular directions. This is taken to be valid for any layer weight Wl, with the difference that the mean vector x̄l is averaged over the layer l−1 output, i.e., x̄l = (1/N)σR(fl−1(X))1 for l > 1. Now consider the second layer of the network, given by σR(W2σR(W1X + W1p1^T)), where two ReLU functions are in action. Suppose the first ReLU function is linearly approximated with the dilation objective 6. Consequently, the second layer output can be written as σR(W2σR(W1X) + W2σR(W1p1^T)). Note that the second ReLU can be linearly approximated if the column vectors of W2σR(W1X) are close to W2σR(W1p). Considering the two approximations together, we formulate the optimization as,
$$\max_{p:\,\|p\|=c}\ \mathbf{1}^T (W_2\sigma_R(W_1 X))^T (W_2\sigma_R(W_1 p)) + \mathbf{1}^T (W_1 X)^T (W_1 p), \tag{7}$$

$$\max_{p:\,\|p\|=c}\ (W_2 \bar{x}_2)^T (W_2\sigma_R(W_1 p)) + (W_1 \bar{x}_1)^T (W_1 p). \tag{8}$$
Again, we leverage the assumption that the data mean projects most onto the first singular vector of the weight matrix; with Lemma 1, the problem becomes the dilation of the second layer,
$$\max_{p:\,\|p\|=c}\ \|W_2\sigma_R(W_1 p)\|^2 + \|W_1 p\|^2. \tag{9}$$
We extend the same arguments to the remaining layers and see that the dilations tend to make the network layers approximately additive with respect to the generated perturbation vector. For the last layer, the dilation terms are added to objective 4 to account for the errors introduced by the linear approximation of all the ReLU layers. Hence, the final optimization problem for UAP generation becomes,
$$\max_{p:\,\|p\|=c}\ \|f_L(p)\|^2 + \sum_{l=1}^{L-1} \|f_l(p)\|^2, \tag{10}$$
which is clearly a completely data-free formulation.
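Written out for the plain ReLU network defined in Section 3.1, objective 10 depends on the perturbation alone; a minimal NumPy sketch (the weight list is assumed to hold the trained Wi's):

import numpy as np

def datafree_objective(weights, p):
    # sum_l ||f_l(p)||^2 over all layers, f_l taken before the ReLU.
    total, h = 0.0, p
    for W in weights:
        f = W @ h
        total += float(f @ f)
        h = np.maximum(f, 0.0)   # ReLU output feeds the next layer
    return total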
3.3 SEQUENTIAL DILATION ALGORITHM
We leverage the theoretical intuitions from the previous section to formulate an algorithm for UAP generation in a data-free manner. Note that a direct implementation of optimization 10 through any gradient descent algorithm would lead to sub-optimal solutions, as the chances of getting stuck in local minima are high. This is especially true since no data is used and the only variable being optimized is p, with no sources of randomness. Hence, we perform the dilations of optimization 10 in a sequential manner so as to avoid local minima solutions. A few more changes are applied to how the original optimization is implemented, mainly for training stability and for fair comparison with existing methods. For numerical stability, we follow Mopuri et al. (2018) and maximize the logarithm of the Euclidean norm in the dilate loss. In order to compare with existing methods, the l∞ norm is restricted instead of the l2 norm in problem 10, which constrains the maximum absolute value of the adversarial noise.

Algorithm 1: The sequential dilation algorithm for crafting data-free UAPs. The input is the multi-layer neural network f and the perturbation strength c; λ is the learning rate. A set of adversarial perturbations {pl}, l = 1, . . . , L, one for each layer, is returned as the output.

p0 ∼ U(−10, 10)
for l = 1, 2, . . . , L do
    pl = pl−1
    while not converged do
        pl = pl + λ ∇pl Σ_{i=1}^{l} log(‖fi(pl)‖²)
        set ‖pl‖∞ = c by clipping
    end while
end for
Algorithm 1 elucidates our proposed sequential dilation algorithm for ReLU-based neural networks. The procedure loops over all the layers of the network. For the first layer, we find a vector p1 that maximizes the logarithm of the l2 norm of W1p1, essentially finding the first singular vector of W1. After the dilation of the first layer, the perturbation p1 is used as an initialization for maximizing the Euclidean norm of the second layer. But note that the first loss term ‖W1p‖² is still kept in the dilation of the second layer. This loss formulation maximizes the norm of the output at the current layer along with all the previous layers that feed into it. In short, the dilation of the lth layer starts the optimization with the perturbation obtained from the dilation of the (l − 1)th layer and involves the joint dilation of all l layers. The method runs until the softmax layer of the network, and the final perturbation pL is a UAP, created without using any training data, that could potentially fool the majority of input samples.
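For concreteness, the following is a minimal PyTorch sketch of Algorithm 1 (our released code uses TensorFlow Slim; this re-implementation is only illustrative). The helper pre_relu_outputs(model, p), which is assumed to return the list [f1(p), ..., fL(p)] of pre-ReLU responses (e.g., collected via forward hooks), the fixed step count standing in for the convergence check, and the clipping and between-layer rescaling described in the following paragraphs are assumptions of the sketch.

import torch

def sequential_dilation(model, pre_relu_outputs, num_layers, shape,
                        c=10.0, lr=0.1, steps=1000):
    p = torch.empty(shape).uniform_(-c, c)            # p_0 ~ U(-10, 10)
    for l in range(1, num_layers + 1):
        p = p.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([p], lr=lr)
        for _ in range(steps):                        # stand-in for "until convergence"
            opt.zero_grad()
            fs = pre_relu_outputs(model, p)[:l]       # joint dilation of layers 1..l
            loss = -sum(torch.log(f.pow(2).sum()) for f in fs)
            loss.backward()
            opt.step()
            with torch.no_grad():
                p.clamp_(-c, c)                       # restrict the l_inf norm
        if l < num_layers:
            with torch.no_grad():
                p = p / 2                             # rescale before the next dilation
    return p.detach()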
We only consider CNNs trained for the classification task. The optimization is performed using the standard ADAM optimizer (Kingma & Ba, 2014) with a fixed learning rate schedule until the training loss saturates; the typical learning rate is 0.1. At every step of the optimization, the values of the perturbation are clipped to the allowed range. The l∞ norm is set to 10 for all our experiments. Although the Euclidean and maximum norms are not theoretically equivalent, in practice we observe that the final perturbations are saturated, with roughly more than 78% of the values reaching ±10. Under this saturation assumption, the l2 norm is also approximately restricted. Once the perturbation saturates during optimization, the loss might stagnate and get stuck in local minima.
To prevent this, after the dilation of every layer, we rescale the perturbation by dividing the pixel values by 2. This does not alter the procedure, as only the magnitude is changed to make room for further optimization. Ideally, we should perform sequential dilations for all the convolutional and fully connected layers of the CNN, from the input side to the final softmax classifier. But for very deep models like Inception and ResNet, the dilations are done only for every architectural block. Because of this, the optimization might be more nonlinear than what is assumed in Section 3.1, with the maximum norm further inducing a clipping nonlinearity. Hence, we initialize the perturbation (p0) with values drawn randomly from a uniform distribution U(−10, 10). Finally, note that absolutely no data in any form is used for creating the adversarial noise; not even a validation set is employed, in contrast to Mopuri et al. (2018). The code for our approach is available for review at the anonymous link https://github.com/anoniclr/uap_seq_dilate.
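The two bookkeeping details above, written as small assumed helpers in the same PyTorch sketch:

import torch

def rescale_between_dilations(p):
    return p / 2                   # only the magnitude changes

def saturation_fraction(p, c=10.0):
    # fraction of perturbation values pinned at the l_inf bound;
    # roughly more than 0.78 for our final UAPs
    return (p.abs() >= c).float().mean().item()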
4 EXPERIMENTS
We benchmark our proposed sequential dilation method against the existing data-free approaches. All the experiments are performed on popular classification models like VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016) and Inception (Szegedy et al., 2016). These models are already trained on the ImageNet (Deng et al., 2009) dataset and deliver very high classification accuracy. Figure 2 shows the perturbations crafted using the proposed method for various networks. We follow other works (Moosavi-Dezfooli et al., 2017; Mopuri et al., 2018) and assess the performance of our adversarial attack using the fooling rate metric. The fooling rate is defined as the fraction of test images on which the network prediction differs before and after the addition of the adversarial noise. Table 1 reports the fooling rates obtained by our method along with those of other works. The first comparison is with the random baseline, i.e., the fooling incurred by random noise alone. The second baseline is the only existing data-free approach, GDUAP. Clearly, our proposed data-free objective achieves significantly higher fooling rates than the other data-free work. This indicates that the sequential dilation algorithm not only has theoretical backing, but also yields higher fooling rates in practice. Note that we run our method ten times, and the results in the table are the mean fooling rate along with the standard deviation, to statistically validate the performance improvement.
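The metric itself is straightforward to compute; a short PyTorch sketch (model and data loader assumed):

import torch

@torch.no_grad()
def fooling_rate(model, loader, p):
    # Fraction of images whose predicted label flips once p is added;
    # ground truth labels are not needed.
    flipped = total = 0
    for x, _ in loader:
        clean = model(x).argmax(dim=1)
        adv = model(x + p).argmax(dim=1)
        flipped += (clean != adv).sum().item()
        total += x.size(0)
    return flipped / total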
Now we ablate the different aspects of the sequential dilation algorithm to demonstrate the usefulness of the design choices. Table 2 reports the results of the various ablative experiments. The first experiment is the non-sequential version of dilation, listed as single dilation in the table: a single joint optimization maximizing the norms of the activations before the nonlinearity. PSM maximization refers to the simple maximization of the pre-softmax layer (fL) alone, which is the same as the approximated objective 4. As described in Section 3.3, our dilation of a layer keeps the maximization terms of all the previous layers. We empirically validate the necessity of this scheme by sequentially maximizing the layer norms without the cumulative loss term, reported as Ours without accumulation in Table 2. Note that each maximization starts with the perturbation initialized from the previous optimization. The results of these ablations evidence that our exact formulation of sequential dilation achieves higher fooling rates in the data-free scenario. Further, Figure 3 displays the perturbations obtained through sequential dilation at every layer for the VGG-16 network, essentially the pl's from Algorithm 1, along with the corresponding fooling rate for each perturbation. It is interesting to observe that the fooling rate increases as we successively dilate layers and saturates towards the end, again emphasizing the need for the sequential process.
In many practical attack scenarios, the actual deployed model might not be available for generating the adversarial perturbation. Hence, the ability of a perturbation crafted for one network to cause reasonable fooling on another network is a highly sought-after property. This setting is known as black-box, for which we compare our method with GDUAP (Mopuri et al., 2018) in Table 3. The results evidence better black-box performance for our method than the existing data-free work, suggesting higher generalization for the perturbations from sequential dilation.
The experiments performed so far show that the proposed sequential dilate loss formulation achieves state-of-the-art fooling rates in the data-free scenario. We now consider the case where minimal training data is available, called the less data setting. For this case, sequential dilation is applied with the limited data: the input to the network at any stage of the optimization is the image added to the current perturbation (xi + pl for layer l). With the help of some data points, we expect the solution to move closer to the actual adversarial perturbation obtained with full data. Table 4 reports the fooling rates of the less data setting with varied amounts of training samples. Note that, to compare with GDUAP, we also use a validation set to select the best perturbation while training. Our approach performs significantly better than GDUAP when data samples are very scarce, increasing the practical utility of the method. We also observe that the fooling rates with less data are, in general, higher than in the data-free setting and become comparable to the full data UAP (see Table 1).
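A sketch of the less data variant of the dilate loss, reusing the assumed pre_relu_outputs helper from the Algorithm 1 sketch above; batch stands for the small pool of available images:

import torch

def less_data_dilate_loss(model, pre_relu_outputs, batch, p, l):
    # Same cumulative log-norm loss, but evaluated on x_i + p
    # for the few available images instead of on p alone.
    loss = 0.0
    for x in batch:
        fs = pre_relu_outputs(model, x + p)[:l]
        loss = loss - sum(torch.log(f.pow(2).sum()) for f in fs)
    return loss / len(batch)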
Furthermore, Table 5 compares our approach with SingularFool (Khrulkov & Oseledets, 2018) in an extremely low data scenario. For a fair comparison with Khrulkov & Oseledets (2018), we use only 64 images for crafting the perturbation, and no validation set is employed; the best perturbation is selected based on the training loss. As expected, our method achieves significantly higher fooling performance than Khrulkov & Oseledets (2018). Moreover, we apply our algorithm with 64 randomly chosen images from Pascal VOC (Everingham et al., 2011). Interestingly, even though the models are trained on a different dataset, the fooling rates remain more or less similar and are higher than those of Khrulkov & Oseledets (2018). This shows that our approach works well in less data cases even when the available images are not from the dataset on which the model was trained.
5 CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a new algorithm, called sequential dilation, to craft universal adversaries in a data-free manner. The approach relies on finding the first singular vector of the linearly approximated neural network, with the approximation enabled by optimizing the proposed dilate loss. Elaborate experiments and ablations demonstrate that our approach achieves superior data-free fooling performance. One promising direction for future research would be to modify the algorithm to generate targeted UAPs, where the objective is fooling towards a specific class.
A PROOF OF LEMMA 1
If $W = USV^T$ denotes the singular value decomposition of W, then

$$\max_{p:\,\|p\|=c}\ x^T W^T W p \;=\; x^T V S^2 V^T p \;=\; \sum_i S_{ii}^2\, (x^T V_{:,i})(p^T V_{:,i}). \tag{11}$$
Since $x^T V_{:,1} > x^T V_{:,i}$ (by assumption) and $S_{11} > S_{ii}$ for all $i > 1$, the solution to the optimization is $cV_{:,1}$, a scaled version of the first singular vector of W. The same solution can be obtained through the definition of the first singular vector as,
$$\max_{p:\,\|p\|=c}\ \|Wp\|^2, \tag{12}$$

completing the proof.
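A quick NumPy check of the expansion in equation 11 (random W, x, and p of our own choosing):

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
x, p = rng.standard_normal(4), rng.standard_normal(4)
_, S, Vt = np.linalg.svd(W, full_matrices=False)

lhs = x @ W.T @ W @ p
rhs = sum(S[i] ** 2 * (x @ Vt[i]) * (p @ Vt[i]) for i in range(len(S)))
print(np.isclose(lhs, rhs))   # True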
B EXPERIMENTAL SETUP
All our experiments reported in the main paper are run on an NVIDIA DGX cluster (dual 20-core Intel Xeon E5-2698 v4, 2.2 GHz) within the TensorFlow docker. We use the pretrained classification models from the TensorFlow Slim library (S. Guadarrama, 2016).
C DEMONSTRATION OF ADVERSARIAL ATTACK
Figures 4 to 9 demonstrate adversarial attacks through perturbations generated by our proposed sequential dilation algorithm for various networks. The first row in each figure shows the clean images with the class predicted by the model, while the second row shows the corresponding perturbed samples with the flipped prediction labels. | 1. What is the focus of the paper regarding data-free white-box adversarial attacks?
2. What are the strengths and weaknesses of the proposed method compared to prior works, specifically GDUAP?
3. Do you have any concerns or questions regarding the theoretical analysis and assumptions made in the paper?
4. How does the reviewer assess the significance and impact of the proposed approach?
5. Are there any suggestions for improving the experimental results or expanding the scope of the study? | Review | Review
This paper proposes a white-box (known network architecture, known network weights), data-free (no need to access the data) adversarial attack method. The main idea is to find a perturbation that maximizes the activations at different layers jointly, but the optimization is done sequentially, treating each layer's activation (before ReLU) as the output of a linear transformation.
The method is compared with existing methods (there is only one existing approach for the problem, GDUAP by Mopuri et al. 2018) in terms of the fooling rate, and it shows significant improvement. An ablation study is carried out to compare with baselines such as a perturbation maximizing only the first-layer activation, only the last-layer activation, etc. In other settings as well (black-box testing, less data), the proposed method outperforms GDUAP.
The problem of data-free white-box attacks is very interesting and does make sense. The proposed method achieves significant improvement over the previous one (GDUAP). I do have the following concerns though.
1) The novelty of the proposed idea seems relatively limited. The proposed idea seeks a perturbation maximizing activations over all layers, incurring the perturbation before ReLU. But overall, the flavor of the idea is not significantly different from GDUAP, despite the significant performance boost.
2) It was mentioned that, compared with GDUAP, this paper has more theoretical analysis. But this is not very convincing to me. There are many steps of approximation/relaxation from the original problem (Equation (1)) to the final formula (Equation (10)), and many assumptions are made over these steps. It is OK to use these steps to derive a heuristic, but they can hardly be called "theoretical analysis".

I am particularly uncomfortable with Equation (5), which is the basis of the main idea. It assumes that all data in $W_1X$ are in the same orthant as $W_1p$. But this is unrealistic, as different data in X will for sure incur different activation patterns. Did I misunderstand anything?
3) I do like the experimental results; they look impressive. But the baselines are really limited (granted, there are not many existing approaches), and there is only one task (image classification). How about other tasks like segmentation, etc., shown in Mopuri et al. 2018? It would also be nice to show the results of other UAP methods, as that gives us a better sense of the gap between with and without data.
4) I wonder how the attack would affect a model that has been trained with some defense mechanism, e.g., adversarial training.
Typo:
Equation (5), RHS missing a max |
ICLR | Crafting Data-free Universal Adversaries with Dilate Loss | 1. What are the strengths and weaknesses of the proposed method for generating universal adversarial examples?
2. How does the reviewer assess the quality and reliability of the experimental results presented in the paper?
3. What are the limitations and assumptions made in the paper regarding the data-free approach, and how do they impact the validity and practicality of the proposed method?
4. How does the reviewer evaluate the significance and novelty of the paper's contributions in the context of existing works on adversarial attacks and defenses?
5. Are there any ethical or societal implications of the proposed method that the reviewer thinks the authors should have considered but did not? | Review | Review
The paper proposes a data-free method for generating universal adversarial examples. Their method finds a perturbation that maximizes the output of each layer by maximizing the dilate loss. They give a well-motivated derivation, going from the data matrix to the data mean and finally to a data-free objective. The experimental results seem solid, as the numbers show that their method is much better in many cases.
I have 2 main issues:
* The fooling rate experiments do not seem to control for how much distortion there really is. How do you make sure that different methods have a similar level of distortion, and not just a similar l_∞ norm? Given that the authors say their method saturates most of the values, it is not clear that the baselines and competing methods really have a similar level of distortion. Also, the fooling rate for random noise seems rather high; why is random noise not mostly ignored by the model?
* While the method is data-free, it needs complete access to the model and relies on properties of ReLU. I am not sure how realistic this setting is, and how it compares to methods that have black-box access to the model. While it is interesting, the paper does not establish that universal adversarial perturbations are well-motivated, or why data-free is more important than model-free or targeted perturbations. An attacker probably always sees the input and probably wants it misclassified into a particular class, instead of just making the model wrong.
ICLR | Title
Crafting Data-free Universal Adversaries with Dilate Loss
Abstract
We introduce a method to create Universal Adversarial Perturbations (UAP) for a given CNN in a data-free manner. Data-free approaches suite scenarios where the original training data is unavailable for crafting adversaries. We show that the adversary generation with full training data can be approximated to a formulation without data. This is realized through a sequential optimization of the adversarial perturbation with the proposed dilate loss. Dilate loss basically maximizes the Euclidean norm of the output before nonlinearity at any layer. By doing so, the perturbation constrains the ReLU activation function at every layer to act roughly linear for data points and thus eliminate the dependency on data for crafting UAPs. Extensive experiments demonstrate that our method not only has theoretical support, but achieves higher fooling rate than the existing data-free work. Furthermore, we evidence improvement in limited data cases as well.
1 INTRODUCTION
Despite the phenomenal success of deep neural networks in many practical applications, adversarial attacks are being a constant plague. These attacks corrupt the input with a small and usually imperceptible structured noise causing the model to output incorrect predictions. The sole existence of such a vulnerability not only raises concerns about the security of deep learning models, but also questions the robustness of the learned representations. To make it further worse, it has been shown that a single noise, called universal adversarial perturbation (UAP), can be added to any image and fool the network. UAPs do not require any optimization on the input image at attack time, but the corruption effectively works for most of the images. Interestingly, such perturbations created for one model exhibit transferability of attack and induce high fooling on other models. One drawback of UAPs though, is the requirement of training data for crafting perturbations. This is increasingly infeasible as the datasets are becoming quite large and might not be publicly released due to privacy or copyright reasons. In such cases where the original data is not available, data-free methods are gaining traction. In the data-free setting, the perturbation is created only with the trained neural network. Such methods typically rely on the trained weights and the CNN structure to find vulnerable patterns that can maximally disturb the normal propagation of activations across the network. A higher transfer of attack across networks is observed for data-free UAPs as well, raising its practical utility. Moreover, the study of these perturbations might lead to new insights on how deep neural networks actually work.
In this paper, we propose a new method for crafting data-free UAPs for any given CNN using ReLU nonlinearity. The approach relies on finding the singular vectors of a linearly approximated network (Section 3.1). A loss formulation is devised to enable this approximation under certain conditions. Dilate loss forms the major component of the method, which generates a perturbation that maximizes the Euclidean norm of the activation vector (before the nonlinearity) at a given layer (Section 3.2). We show that the perturbation crafted through dilation has the effect of linearly approximating the ReLU layer responses for any data points. These dilations are done sequentially for all the layers from the input to the last classification stage (Section 3.3). We argue that the sequential dilations results in a perturbation that aligns with the first singular vector of the linearly approximated network. Our approach outperforms the existing data-free method in fooling rates and the evaluation is also done for less data scenarios (Section 4).
In summary, the work contributes the following:
• A new method that can create universal adversarial perturbation without using data and achieve state-of-the-art data-free fooling rates.
• A detailed theoretical analysis which formulates the proposed sequential dilation algorithm by approximating the adversary generation with full training data under certain conditions.
2 RELATED WORK
The vulnerability of deep neural networks to adversarial samples is first shown in Szegedy et al. (2013). Following Szegedy et al. (2013), several methods (Goodfellow et al., 2014; Kurakin et al., 2016; Dong et al., 2018; Madry et al., 2017; Moosavi-Dezfooli et al., 2016; Brendel et al., 2017; Athalye et al., 2018; Carlini & Wagner, 2017) are being proposed to craft such adversarial samples. One of the simplest method is the Fast Gradient Sign Method (FGSM) formulated in Goodfellow et al. (2014). FGSM obtains the perturbation by single step gradient ascent of the loss function with respect to the input image. There are multi step variants to FGSM like iterative FGSM (Kurakin et al., 2016), Momentum (Dong et al., 2018), Projected Gradient Descent (PGD) (Madry et al., 2017), Deepfool (Moosavi-Dezfooli et al., 2016), etc. These attacks are image-specific, where the perturbation is a function of the input and requires a separate optimization for each image.
Moosavi-Dezfooli et al. (2017) introduce the idea of Universal Adversarial Perturbations (UAP), a single perturbation that can fool the model for most of the input images. UAP is obtained by jointly maximizing the training loss for dataset images. There are also generative approaches like NAG (Reddy Mopuri et al., 2018b), AAA (Reddy Mopuri et al., 2018a), GAP (Poursaeed et al., 2018) for crafting universal adversaries. Khrulkov & Oseledets (2018) propose a method based on singular vectors of Jacobian matrix to create universal adversaries. They show impressive fooling performance with a very small set of training images, but the method is not data-free. Though the study of adversarial attacks started with the classification task, there are several works (Xie et al., 2017; Metzen et al., 2017) that extend such attacks to other tasks like segmentation, detection, etc. Further, adversarial examples are shown to generalize to the physical world in Kurakin et al. (2016). While most attacks changes each pixel in the image with small imperceptible noise, there are methods (Sharif et al., 2016; Brown et al., 2017; Papernot et al., 2016) that perturb limited number of pixels with large noise as these are more practical in nature.
The attacks discussed so far, in general, rely on maximizing the training loss. In contrast, Mopuri et al. (2018) devise a generalizable data-free objective for crafting UAPs (GDUAP). GDUAP maximizes the activations at the output of all convolutional layers corrupting the feature representations learned and hence fooling the model. Our method has similarity to GDUAP, but with the crucial difference that the Euclidean norm maximization is performed before the nonlinearity in our case. Further, we maximize the norms of the layers one after the other in a sequential fashion as opposed to a single joint optimization. We show theoretically and experimentally that these changes cause a lot of difference in fooling performance. Moreover, no sound reasoning is available in Mopuri et al. (2018) to justify the formulation, whereas we provide theoretical explanation for the algorithm.
3 OUR APPROACH
3.1 CRAFTING A DATA-FREE OBJECTIVE
Consider a deep neural network with L layers already trained for classification task. We assume the activation function employed in the network to be ReLU (Nair & Hinton, 2010), defined as
σR(x) = { x if x > 0 0 otherwise,
which basically zeros out all negative elements and retains the positive ones when applied on a matrix. Let f1(x) = W1x, f2(x) = W2σR(W1x), ..., fl(x) = WlσR(. . .W2σR(W1x) . . .) be the outputs at different layers of the network for an input vector x. Note that the output fi for the ith layer is taken before the nonlinearity and fL represents the pre-softmax neuron layer. We ignore the bias terms for mathematical simplicity. The weights (Wis) of the network are trained with input and label pairs from the dataset D.
Our aim is to craft a perturbation vector p with Euclidean norm c such that the network incorrectly classifies for most of the data samples. Mathematically, the optimization can be written as,
max p:|p|=c ∑ (xi,yi)∈D Iargmax(σS(fL(xi+p))) 6=yi , (1)
where σS is the softmax function. Note that the condition for the indicator function (I) that checks for misclassification is dependent on the ground truth labels (yis). Assuming a high classification accuracy for the model, we approximate the condition to argmax(σS(fL(xi + p))) 6= argmax(σS(fL(xi))). Since softmax function is monotonic, we further relax the objective 1 to
max p:|p|=c ∑ (xi,yi)∈D |fL(xi + p)− fL(xi)|2, (2)
which amounts to finding a p that maximizes the network response after being added to the inputs. In other words, p should maximally disturb the output fL for all data points. Note that for some xis, maximizing |fL(xi + p)− fL(xi)|2 might not result in incorrect prediction. We assume such cases to be minority and the objective could lead to significant adversarial changes to fL responses for majority input samples. If X = {x0,x1, . . . ,xN} denote the matrix formed by the assembling all the N data samples as columns and 1 represent a column vector of ones of appropriate size, then the optimization 2 can be rewritten as,
max p:|p|=c |fL(X + p1T )− fL(X)|2F . (3)
We recognize that optimization 3 is not exactly equivalent to the original objective 1, but is an approximation which does not require the ground truth labels. But still the training data X is essential for computation and needs to be eliminated for a complete data-free approach. Now observe that if fL were to be a linear function, then the objective 3 reduces to,
max p:|p|=c |fL(p)|2, (4)
which means that p has to align along the first right singular vector of the linear fL map. The singular p could potentially disturb the output fL more for all the xis than any other vector. Interestingly, note that the optimization 4 is a data-free objective under the linear assumption of fL. However, fL is nonlinear due to the presence of ReLU activation functions at every layer. Note that the formulation 4 is valid even if fL is not a complete linear map, but satisfies fL(X + p1T ) = fL(X) + fL(p1T ) for some p. Hence, we devise an algorithm to seek a perturbation that can approximately induce the above additivity property to the ReLU network.
3.2 LINEARLY APPROXIMATING THE NETWORK
We start by noting that the only nonlinearity in the network is due to the ReLU activation function at every layer. But ReLU is piece-wise linear; especially, observe that σR(a+ b) = σR(a) +σR(b) if vectors a and b are in the same orthant. Now consider the ReLU nonlinearity after the first layer,
σR(W1X + W1p1 T ), which becomes additive if column vectors in W1X are in the same orthant as W1p. We relax this criteria and favour the case of making the vectors as close as possible by,
max p:|p|=c 1T (W1X) T (W1p) = N(W1x̄1) T (W1p), (5)
where x̄1 stands for the mean of the N data samples in X . The solution of the optimization 5 is expected to minimize the error due to the additive approximation of the layer. In order to eliminate the data term from the objective, we make an assumption that the first singular vector of the weight matrices align along the mean vector of its corresponding input. In other words, the dot product of data mean x̄1 with the singular vectors of W1 is maximum for the first singular vector. Now we use the following lemma to argue that the objective 5 is maximum when p aligns with the first singular vector of W1 (proof available in Appendix A). Lemma 1. If x has positive and larger scalar projection on the first singular vector of W than remaining singular vectors, then argmaxpxW TWp = argmaxp|Wp|2 subject to |p| = c.
Hence, the optimization problem 5 is equivalent to,
$\max_{p:\|p\|=c} \|W_1p\|_2^2$,   (6)
which we call the dilation of the first layer. We justify the assumption on the premise that the singular vectors of the weights must have captured the discriminatory modes of the data samples during training. By discriminatory modes we refer to the components of X that are essential for the classification task and most likely extracted by the hierarchy of weights in the network; these do not correspond to the modes of variation of the data points. The assumption essentially means that the first singular vector carries the most important features common to most of the data points, more so than the remaining singular directions. This is taken to be valid for any layer weight W_l, with the difference that the mean vector x̄_l is averaged over the layer l−1 output, i.e., $\bar{x}_l = (1/N)\,\sigma_R(f_{l-1}(X))\mathbf{1}$ for l > 1. Now consider the second layer of the network, given by $\sigma_R(W_2\sigma_R(W_1X + W_1p\mathbf{1}^T))$, where two ReLU functions are in action. Suppose the first ReLU function is linearly approximated with the dilation objective (6). Consequently, the second-layer output can be written as $\sigma_R(W_2\sigma_R(W_1X) + W_2\sigma_R(W_1p\mathbf{1}^T))$. Note that the second ReLU can be linearly approximated if the column vectors of $W_2\sigma_R(W_1X)$ are close to $W_2\sigma_R(W_1p)$. Considering the two approximations, we formulate the optimization as
$\max_{p:\|p\|=c} \mathbf{1}^T(W_2\sigma_R(W_1X))^T(W_2\sigma_R(W_1p)) + \mathbf{1}^T(W_1X)^T(W_1p)$,   (7)
$\max_{p:\|p\|=c} (W_2\bar{x}_2)^T(W_2\sigma_R(W_1p)) + (W_1\bar{x}_1)^T(W_1p)$.   (8)
Again, we leverage the assumption that the data mean projects most onto the first singular vector of the weight matrix; with Lemma 1, the problem becomes the dilation of the second layer,
$\max_{p:\|p\|=c} \|W_2\sigma_R(W_1p)\|_2^2 + \|W_1p\|_2^2$.   (9)
We extend the same arguments to the further layers and see that the dilations tend to make the network layers approximately additive with respect to the generated perturbation vector. For the last layer, the dilation terms are added to objective (4) to account for the errors introduced by the linear approximation of all the ReLU layers. Hence, the final optimization problem for UAP generation becomes
$\max_{p:\|p\|=c} \|f_L(p)\|_2^2 + \sum_{l=1}^{L-1} \|f_l(p)\|_2^2$,   (10)
which is clearly a completely data-free formulation.
3.3 SEQUENTIAL DILATION ALGORITHM
We leverage the theoretical intuitions from the previous section to formulate an algorithm for UAP generation in a data-free manner. Note that a direct implementation of optimization (10) through any gradient-descent algorithm would lead to sub-optimal solutions, as the chances of getting stuck in local minima are high. This is especially true since no data is used and the only variable being optimized is p, with no sources of randomness. Hence, we perform the dilations of optimization (10) in a sequential manner so as to avoid reaching local-minimum solutions. Some more changes are applied to the way the original optimization is implemented, mainly for training stability and for a fair comparison with existing methods. For numerical stability of the optimization, we follow Mopuri et al. (2018) and maximize the logarithm of the Euclidean norm in the dilate loss. In order to compare with existing methods, the l∞ norm is restricted instead of the l2 norm in problem (10); this constrains the maximum absolute value of the adversarial noise.

Algorithm 1: The sequential dilation algorithm for crafting data-free UAPs. The input is the multi-layer neural network f and the perturbation strength c. A set of adversarial perturbations {p_l}, l = 1, ..., L, one for each layer, is returned as the output. λ is the learning rate.

    p_0 ~ U(−10, 10)
    for l = 1, 2, ..., L do
        p_l = p_{l−1}
        while not converged do
            p_l = p_l + λ ∇_{p_l} Σ_{i=1}^{l} log(‖f_i(p_l)‖₂²)
            set ‖p_l‖_∞ = c
        end
    end
Algorithm 1 elucidates our proposed sequential dilation algorithm for ReLU-based neural networks. The procedure loops over all the layers of the network. For the first layer, we find a vector p_1 that maximizes the logarithm of the l2 norm of W_1p_1, essentially finding the first singular vector of W_1. After the dilation of the first layer, the perturbation p_1 is used as the initialization for maximizing the Euclidean norm of the second layer. Note that the first loss term ‖W_1p‖₂² is still kept in the dilation of the second layer: this loss formulation maximizes the norm of the output at the current layer along with those of all the previous layers that feed into it. In short, the dilation of the l-th layer starts the optimization with the perturbation obtained from the dilation of the (l−1)-th layer and involves the joint dilation of all l layers. The method runs until the softmax layer of the network, and the final perturbation p_L is a UAP, created without using any training data, that can potentially fool the majority of input samples.
We consider only CNNs trained for the classification task. The optimization is performed using the standard ADAM optimizer (Kingma & Ba, 2014) with a fixed learning-rate schedule until the training loss saturates; a typical learning rate is 0.1. At every step of the optimization, the values of the perturbation are clipped to limit the allowed range. The l∞ norm is set to 10 for all our experiments. Although Euclidean and maximum norms are not theoretically equivalent, in practice we observe that the final perturbations are saturated, with roughly more than 78% of the values reaching ±10. Under this saturation assumption, the l2 norm is also approximately restricted. Once the perturbation saturates during optimization, the loss might saturate as well and get stuck in local minima.
To prevent this, after the dilation at every layer, we rescale the perturbation by dividing the pixel values by 2. This does not alter the procedure, as only the magnitude is changed to make room for further optimization. Ideally, we should perform sequential dilations for all the convolutional and fully connected layers of the CNN, from the input side to the final softmax classifier. But for very deep models like Inception and ResNet, the dilations are done only for every architectural block. Because of this, the optimization might be more nonlinear than what is assumed in Section 3.1, with the maximum-norm constraint further inducing a clipping nonlinearity. Hence, we initialize the perturbation (p_0) with values drawn randomly from a uniform distribution U(−10, 10). Finally, note that absolutely no data in any form is used for creating the adversarial noise; not even a validation set is employed, in contrast to Mopuri et al. (2018). The code for our approach is available for review at the anonymous link https://github.com/anoniclr/uap_seq_dilate.
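For concreteness, a minimal PyTorch re-implementation sketch of Algorithm 1 is given below (the paper's released code is TensorFlow-based; the layer interface, fixed step count, and rescaling placement are our assumptions):

import torch

def sequential_dilation(pre_relu_layers, c=10.0, steps=1000, lr=0.1,
                        shape=(1, 3, 224, 224)):
    """Sketch of Algorithm 1. `pre_relu_layers` is assumed to be a list of
    callables f_1..f_L, where f_l(x) returns the pre-ReLU activation of layer l;
    the fixed step count stands in for the convergence check."""
    p = torch.empty(shape).uniform_(-c, c).requires_grad_(True)
    opt = torch.optim.Adam([p], lr=lr)
    for l in range(1, len(pre_relu_layers) + 1):
        for _ in range(steps):
            opt.zero_grad()
            # joint dilation of layers 1..l: maximize the sum of log squared norms
            loss = -sum(torch.log((f(p) ** 2).sum()) for f in pre_relu_layers[:l])
            loss.backward()
            opt.step()
            with torch.no_grad():
                p.clamp_(-c, c)            # enforce the l_inf budget
        if l < len(pre_relu_layers):       # rescale to escape saturation
            with torch.no_grad():
                p.div_(2)
    return p.detach()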
4 EXPERIMENTS
We benchmark our proposed sequential dilation method against the existing data-free approaches. All the experiments are performed on popular classification models: VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016), and Inception (Szegedy et al., 2016). These models are trained on the Imagenet (Deng et al., 2009) dataset and deliver very high classification accuracy. Figure 2 shows the perturbations crafted using the proposed method for the various networks. We follow other works (Moosavi-Dezfooli et al., 2017; Mopuri et al., 2018) and assess the performance of our adversarial attack using the fooling-rate metric: the fraction of test images on which the network prediction differs before and after the addition of the adversarial noise. Table 1 reports the fooling rates obtained by our method along with those of other works. The first comparison is with the random baseline, i.e., the fooling incurred with just random noise; the second is with the only existing data-free approach, GDUAP. Clearly, our proposed data-free objective achieves significantly higher fooling rates than the other data-free work, indicating that the sequential dilation algorithm not only has theoretical backing but also yields higher fooling rates in practice. Note that we have run our method ten times, and the results in the table are the mean fooling rates along with the standard deviation, to statistically validate the performance improvement.
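For reference, the fooling-rate metric itself is straightforward to compute; a minimal sketch (assuming inputs normalized to [0, 1]) is:

import torch

@torch.no_grad()
def fooling_rate(model, images, p):
    """Fraction of test images whose prediction flips after adding p."""
    clean = model(images).argmax(dim=1)
    adv = model((images + p).clamp(0, 1)).argmax(dim=1)  # range clipping is an assumption
    return (clean != adv).float().mean().item()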
Now we ablate different aspects of the sequential dilation algorithm to demonstrate the usefulness of our design choices. Table 2 reports the results of the various ablative experiments. The first experiment is the non-sequential version of dilation, listed as single dilation in the table: a single joint optimization maximizing the norm of the activations before the nonlinearity. PSM maximization refers to maximizing the pre-softmax layer (f_L) alone, which is the approximated objective (4). As described in Section 3.3, our dilation of a layer keeps the maximization terms of all the previous layers. We empirically validate the necessity of this scheme by sequentially maximizing the layer norms without the cumulative loss term, listed as Ours without accumulation in Table 2; note that each maximization starts with the perturbation initialized from the previous optimization. The results of these ablations show that our exact formulation of sequential dilation achieves higher fooling rates in the data-free scenario. Further, Figure 3 displays the perturbations obtained through sequential dilation at every layer of the VGG-16 network, i.e., the p_l from Algorithm 1, along with the corresponding fooling rates. It is interesting to observe that the fooling rate increases as we successively dilate layers and saturates towards the end, again emphasizing the need for the sequential process.
In many practical attack scenarios, the actual deployed model might not be available for generating the adversarial perturbation. Hence, the ability of a perturbation crafted for one network to cause reasonable fooling on another network is a highly sought-after property. This setting is known as black-box, for which we compare our method with GDUAP (Mopuri et al., 2018) in Table 3. The results show better black-box performance for our method than the existing data-free work, suggesting that the perturbations from sequential dilation generalize better.
The experiments so far show that the proposed sequential dilate-loss formulation achieves state-of-the-art fooling rates in data-free scenarios. We now consider the case where minimal training data is available, called the less-data setting. Here, sequential dilation is applied with the limited data: the input to the network at any stage of the optimization is the image with the current perturbation added (x_i + p_l for layer l). With the help of some data points, we expect the solution to approach closer to the actual adversarial perturbation obtained with full data. Table 4 reports the fooling rates of the less-data setting for varied amounts of training samples. Note that, to compare with GDUAP, we also use a validation set to select the best perturbation while training. Our approach performs significantly better than GDUAP when the data samples are very few, increasing the practical utility of the method. We also observe that the fooling rates with less data are, in general, higher than in the data-free case and become comparable to the full-data UAP (see Table 1).
Furthermore, Table 5 compares our approach with Singular Fool (Khrulkov & Oseledets, 2018) in an extremely low-data scenario. For a fair comparison with Khrulkov & Oseledets (2018), we use only 64 images for crafting the perturbation and no validation set is employed; the best perturbation is selected based on the training loss. As expected, our method achieves significantly higher fooling performance than Khrulkov & Oseledets (2018). Moreover, we apply our algorithm with 64 randomly chosen images from Pascal VOC (Everingham et al., 2011). Interestingly, even though the models were trained on a different dataset, the fooling rates remain more or less similar and are higher than those of Khrulkov & Oseledets (2018). This shows that our approach works well in less-data cases even when the available images are not from the dataset on which the model was trained.
5 CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a new algorithm, called sequential dilation, to craft universal adversaries in a data-free manner. The approach relies on finding the first singular vector of the linearly approximated neural network, where the approximation is enabled by optimizing the proposed dilate loss. Elaborate experiments and ablations demonstrate that our approach achieves superior data-free fooling performance. One promising direction for future research would be to modify the algorithm to generate targeted UAPs, where the objective is fooling towards a specific class.
A PROOF OF LEMMA 1
If $W = USV^T$ denotes the singular value decomposition of W, then
$\max_{p:\|p\|=c} x^TW^TWp = x^TVS^2V^Tp = \sum_i S_{ii}^2\,(x^TV_{:,i})(p^TV_{:,i})$.   (11)
Since $x^TV_{:,1} > x^TV_{:,i}$ (by assumption) and $S_{11} > S_{ii}$ for all i > 1, the solution to the optimization is $cV_{:,1}$, a scaled version of the first singular vector of W. The same solution can be obtained through the definition of the first singular vector as
$\max_{p:\|p\|=c} \|Wp\|_2^2$,   (12)

completing the proof.
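As a quick numerical sanity check of the lemma (not a substitute for the proof), one can verify both argmax claims on a random matrix with an x constructed to satisfy the assumption; all values below are illustrative:

import torch

torch.manual_seed(0)
W = torch.randn(64, 128)
_, S, Vh = torch.linalg.svd(W, full_matrices=False)
x = 5.0 * Vh[0] + 0.1 * torch.randn(128)    # x projects mostly onto the first singular vector
c = 1.0
p_star = c * Vh[0]                          # claimed common maximizer

# compare against many random directions of the same norm
dirs = c * torch.nn.functional.normalize(torch.randn(10000, 128), dim=1)
obj_a = dirs @ (W.T @ W @ x)                # x^T W^T W p for each candidate p
obj_b = (dirs @ W.T).norm(dim=1) ** 2       # |W p|^2 for each candidate p
assert x @ W.T @ W @ p_star >= obj_a.max()
assert (W @ p_star).norm() ** 2 >= obj_b.max()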
B EXPERIMENTAL SETUP
All our experiments reported in the main paper are run on an NVIDIA DGX cluster (dual 20-core Intel Xeon E5-2698 v4, 2.2 GHz) within a TensorFlow Docker container. We use the pretrained classification models from the TensorFlow-Slim library (S. Guadarrama, 2016).
C DEMONSTRATION OF ADVERSARIAL ATTACK
Figures 4 to 9 demonstrate adversarial attacks using perturbations generated by our proposed sequential dilation algorithm for various networks. The first row in each figure shows the clean images with the class predicted by the model, while the second row shows the corresponding perturbed samples with the flipped prediction labels. | 1. What is the focus and contribution of the paper regarding universal adversarial perturbations?
2. What are the strengths and weaknesses of the proposed method compared to prior works like GDUAP?
3. Do you have any concerns about the theoretical analysis and its assumptions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the design principle and effectiveness of the proposed algorithm? | Review | Review
Summary:
This paper proposed a method to generate universal adversarial perturbations without training data. This task is timely and practical. The proposed method maximizes the norm of the output before nonlinearity at any layer to craft the universal perturbation. A sequential dilation algorithm is designed to calculate UAPs. The experiments show that the proposed method outperforms GDUAP.
My major concern is that there is not much novelty in the proposed method compared with GDUAP. The dilate loss function (4) is similar to the objective function (3) in the GDUAP paper. This paper provides a theoretical explanation of the dilate loss function and an improvement on handling the nonlinearity, which, however, is not convincing: Equation 10 is derived based on many strong assumptions. See the comments below.
Pros:
- The theoretical analysis is clear.
- The proposed method performs better than GDUAP in the data-free and black-box setting.
- The writing is good. The paper is easy to follow.
Cons:
- The theoretical analysis is based on many strong assumptions/criteria. For example:
o To derive equation (5), W1X and W1p must be in the same orthant. It is unclear how to satisfy this criterion in the algorithm.
o In Lemma 1, problem (5) approximates problem (6) only if x has a very large projection on the first singular vector of W. However, x and W are fixed and independent of p. This assumption largely depends on the dataset and the weights of the model.
o It would be better if the authors show that in what cases these assumptions can be satisfied.
- Other factors, such as batch normalization and max pooling used in Inception v3, may also affect the linearity of the model. It would be better if the authors provided a theoretical analysis or an ablation study on these factors.
- What’s the design principle behind Algorithm 1? Why does this algorithm avoid sub-optimal solutions? The weights of different layers are not closely related. In the initialization part, why can we start learning p from the result of the previous layer? Would it be possible that the performance is improved due to the algorithm instead of the dilate loss?
- The proposed method performs worse than GDUAP in some less-data settings.
- The results in Table 4 and 5 are inconsistent. These two experiments use the same dataset (Imagenet) and the same number of images (D=64). |
ICLR | Title
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Abstract
Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GANs with minimum computing cost. We propose a light-weight GAN structure that achieves superior quality at 1024 × 1024 resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX-2080 GPU, and has consistent performance even with fewer than 100 training samples. Two technical designs constitute our work: a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder. With thirteen datasets covering a wide variety of image domains¹, we show our model's superior performance compared to the state-of-the-art StyleGAN2 when data and computing budget are limited.
1 INTRODUCTION
The fascinating ability to synthesize images with state-of-the-art (SOTA) Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) displays the great potential of GANs for many intriguing real-life applications, such as image translation, photo editing, and artistic creation. However, the expensive computing cost and the vast amount of required training data limit these SOTA models in real applications with only small image sets and low computing budgets.
In real-life scenarios, the available samples to train a GAN can be minimal, such as the medical images of a rare disease, a particular celebrity's portrait set, or a specific artist's artworks. Transfer-learning from a pre-trained model (Mo et al., 2020; Wang et al., 2020) is one solution to the lack of training images. Nevertheless, there is no guarantee of finding a compatible pre-training dataset, and if none exists, fine-tuning probably leads to even worse performance (Zhao et al., 2020).
In a recent study, it was highlighted that in art-creation applications most artists prefer to train their models from scratch on their own images, to avoid biases from a fine-tuned pre-trained model. Moreover, it was shown that in most cases artists want to train their models with datasets of fewer than 100 images (Elgammal et al., 2020).

1 The datasets and code are available at: https://github.com/odegeasslbc/FastGAN-pytorch

Dynamic data-augmentation (Karras et al., 2020a; Zhao et al., 2020) smooths the gap and stabilizes GAN training with fewer images. However, the computing cost of SOTA models such as StyleGAN2 (Karras et al., 2020b) and BigGAN (Brock et al., 2019) remains high, especially when training at 1024 × 1024 resolution. In this paper, our goal is to learn an unconditional GAN on high-resolution images with low computational cost and few training samples. As summarized in Fig. 2, these training conditions expose the model to a high risk of overfitting and mode-collapse (Arjovsky & Bottou, 2017; Zhang & Khoreva, 2018). To train a GAN under such demanding conditions, we need a generator (G) that can learn fast and a discriminator (D) that can continuously provide useful signals to train G. To address these challenges, we summarize our contributions as:
• We design the Skip-Layer channel-wise Excitation (SLE) module, which leverages low-scale activations to revise the channel responses on high-scale feature-maps. SLE allows a more robust gradient flow throughout the model weights for faster training. It also leads to an automated learning of a style/content disentanglement like StyleGAN2.
• We propose a self-supervised discriminator D trained as a feature-encoder with an extra decoder. We force D to learn a more descriptive feature-map covering more regions from an input image, thus yielding more comprehensive signals to train G. We test multiple self-supervision strategies for D, among which we show that auto-encoding works the best.
• We build a computationally efficient GAN model based on the two proposed techniques, and show the model's robustness on multiple high-fidelity datasets, as demonstrated in Fig. 1.
2 RELATED WORKS
Speed up the GAN training: Speeding up the training of GANs has been approached from various perspectives. Ngxande et al. propose to reduce the computing time with depth-wise convolutions. Zhong et al. adjust the GAN objective into a min-max-min problem for a shorter optimization path. Sinha et al. suggest preparing each batch of training samples via coreset selection, leveraging the better data preparation for faster convergence. However, these methods bring only a limited improvement in training speed. Moreover, the synthesis quality is not advanced within the shortened training time.
Train GAN on high resolution: High-resolution training for GANs can be problematic. Firstly, the increased number of model parameters leads to a more rigid gradient flow when optimizing G. Secondly, the target distribution formed by images at 1024 × 1024 resolution is extremely sparse, making GANs much harder to converge. Denton et al. (2015); Zhang et al. (2017); Huang et al. (2017); Wang et al. (2018); Karras et al. (2019); Karnewar & Wang (2020); Karras et al. (2020b); Liu et al. (2021) develop multi-scale GAN structures to alleviate the gradient-flow issue, where G outputs images and receives feedback at several resolutions simultaneously. However, all these approaches further increase the computational cost, consuming even more GPU memory and training time.
Stabilize the GAN training: Mode-collapse of G is one of the big challenges when training GANs, and it becomes even more severe given fewer training samples and a lower computational budget (a smaller batch size), as D is more likely to overfit the dataset and thus becomes unable to provide meaningful gradients to train G (Gulrajani et al., 2017).
Prior works tackle the overfitting issue by seeking a good regularization for D, including different objectives (Arjovsky et al., 2017; Lim & Ye, 2017; Tran et al., 2017); regularizing the gradients (Gulrajani et al., 2017; Mescheder et al., 2018); normalizing the model weights (Miyato et al., 2018); and augmenting the training data (Karras et al., 2020a; Zhao et al., 2020). However, the effects of these methods degrade fast when the training batch-size is limited, since appropriate batch statistics can hardly be calculated for the regularization (normalization) over the training iterations.
Meanwhile, self-supervision on D has been shown to be an effective method to stabilize GAN training, as studied in Tran et al. (2019); Chen et al. (2019). However, the auxiliary self-supervision tasks in prior works are limited in their usage scenarios and image domains. Moreover, prior works only studied low-resolution images (32² to 128²), and without a computing-resource limitation.
3 METHOD
We adopt a minimalistic design for our model. In particular, we use a single conv-layer at each resolution in G, and apply only three (input and output) channels for the conv-layers at the high resolutions (≥ 512×512) in both G and D. Fig. 3 and Fig. 4 illustrate the model structures of our G and D, with descriptions of the component layers and the forward flow. These structural designs make our GAN much smaller than SOTA models and substantially faster to train. Meanwhile, our model remains robust on small datasets thanks to its compact size and the two proposed techniques.
3.1 SKIP-LAYER CHANNEL-WISE EXCITATION
For synthesizing higher resolution images, the generator G inevitably needs to become deeper, with more conv-layers, in concert with the up-sampling needs. A deeper model with more convolution layers leads to a longer training time of GAN, due to the increased number of model parameters and a weaker gradient flow through G (Zhang et al., 2017; Karras et al., 2018; Karnewar & Wang, 2020). To better train a deep model, He et al. design the Residual structure (ResBlock), which uses a skip-layer connection to strengthen the gradient signals between layers. However, while ResBlock has been widely used in GAN literature (Wang et al., 2018; Karras et al., 2020b), it also increases the computation cost.
We reformulate the skip-connection idea with two unique designs into the Skip-Layer Excitation (SLE) module. First, ResBlock implements the skip-connection as an element-wise addition between activations from different conv-layers, which requires the spatial dimensions of the activations to be the same. Instead of addition, we apply channel-wise multiplication between the activations, eliminating the heavy computation of convolution (since one side of the activations now has a spatial dimension of 1²). Second, in prior GAN works, skip-connections are only used within the same resolution. In contrast, we perform skip-connections between resolutions over a much longer range (e.g., 8² and 128², 16² and 256²), since an equal spatial dimension is no longer required. The two designs let SLE inherit the advantages of ResBlock, with a shortcut gradient flow yet without an extra computational burden.
Formally, we define the Skip-Layer Excitation module as:
$y = F(x_{\text{low}}, \{W_i\}) \cdot x_{\text{high}}$   (1)
Here x and y are the input and output feature-maps of the SLE module, the function F contains the operations on x_low, and W_i denotes the module weights to be learned. The left panel of Fig. 3 shows an SLE module in practice, where x_low and x_high are the feature-maps at 8×8 and 128×128 resolution, respectively. An adaptive average-pooling layer in F first down-samples x_low to 4×4 along the spatial dimensions, then a conv-layer further down-samples it to 1×1. A LeakyReLU models the non-linearity, and another conv-layer projects x_low to the same channel size as x_high. Finally, after a gating operation via a Sigmoid function, the output of F multiplies x_high along the channel dimension, yielding y with the same shape as x_high.
SLE partially resembles the Squeeze-and-Excitation (SE) module proposed by Hu et al. However, SE operates within a single feature-map as a self-gating module, whereas SLE operates between feature-maps that are far apart from each other. While SLE brings the benefit of channel-wise feature re-calibration just like SE, it also strengthens the whole model's gradient flow like ResBlock. The channel-wise multiplication in SLE also coincides with Instance Normalization (Ulyanov et al., 2016; Huang & Belongie, 2017), which is widely used in style transfer. Similarly, we show that SLE enables G to automatically disentangle content and style attributes, just like StyleGAN (Karras et al., 2019). Since SLE acts on high-resolution feature-maps, altering these feature-maps is more likely to change the style attributes of the generated image (Karras et al., 2019; Liu et al., 2021). By replacing x_low in SLE with that of another synthesized sample, our G can generate an image with the content unchanged but in the style of the new replacing image.
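A minimal PyTorch sketch of SLE consistent with the description above follows; the intermediate channel width and the LeakyReLU slope are assumptions, and the released code may differ in details:

import torch
import torch.nn as nn

class SLE(nn.Module):
    """Sketch of the Skip-Layer Excitation module: pool x_low to 4x4, reduce to
    1x1, gate with a Sigmoid, and multiply x_high channel-wise."""
    def __init__(self, ch_low, ch_high):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),               # x_low -> 4x4
            nn.Conv2d(ch_low, ch_low, 4, 1, 0),    # 4x4 -> 1x1
            nn.LeakyReLU(0.1),                     # slope is an assumption
            nn.Conv2d(ch_low, ch_high, 1),         # match x_high's channel size
            nn.Sigmoid(),
        )

    def forward(self, x_low, x_high):
        return x_high * self.gate(x_low)           # channel-wise multiplication

# e.g., re-calibrating 128x128 feature-maps with 8x8 ones:
sle = SLE(ch_low=512, ch_high=64)
y = sle(torch.randn(1, 512, 8, 8), torch.randn(1, 64, 128, 128))  # y: (1, 64, 128, 128)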
3.2 SELF-SUPERVISED DISCRIMINATOR
Our approach to providing a strong regularization for D is surprisingly simple. We treat D as an encoder and train it with small decoders. Such auto-encoding training forces D to extract image features from which the decoders can produce good reconstructions. The decoders are optimized together with D on a simple reconstruction loss, which is trained only on real samples:
$\mathcal{L}_{\text{recons}} = \mathbb{E}_{f \sim D_{\text{encode}}(x),\; x \sim I_{\text{real}}}\left[\|\mathcal{G}(f) - \mathcal{T}(x)\|\right]$,   (2)
where f denotes the intermediate feature-maps from D, the function $\mathcal{G}$ comprises the processing on f and the decoder, and the function $\mathcal{T}$ represents the processing on a sample x from the real images I_real.
Our self-supervised D is illustrated in Fig. 4, where we employ two decoders for the feature-maps at two scales: f1 at 16² and f2 at 8². The decoders have only four conv-layers to produce images at 128×128 resolution, causing little extra computation (much less than other regularization methods). We randomly crop f1 to 1/8 of its height and width, then crop the real image on the same portion to get I_part; we resize the real image to get I. The decoders produce I'_part from the cropped f1 and I' from f2. Finally, D and the decoders are trained together to minimize the loss in Eq. 2 by matching I'_part to I_part and I' to I.
Such reconstructive training ensures that D extracts a more comprehensive representation of its inputs, covering both the overall composition (from f2) and detailed textures (from f1). Note that the processing in $\mathcal{G}$ and $\mathcal{T}$ is not limited to cropping; more operations remain to be explored for better performance. The auto-encoding approach we employ is a typical method for self-supervised learning, well recognized to improve model robustness and generalization ability (He et al., 2020; Hendrycks et al., 2019; Jing & Tian, 2020; Goyal et al., 2019). In the context of GANs, we find that a D regularized via self-supervised training strategies significantly improves the synthesis quality of G, and among these strategies auto-encoding brings the largest performance boost.
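A sketch of this reconstruction loss is given below, assuming the two decoders are provided; the L1 distance, bilinear resizing, and crop bookkeeping are our assumptions for the unspecified norm and the processing $\mathcal{T}$:

import torch
import torch.nn.functional as F

def recon_loss(f1, f2, real, dec_part, dec_full, crop_frac=1/8):
    """Sketch of Eq. (2): reconstruct a cropped region from cropped f1 and the
    whole (resized) image from f2, with small decoders dec_part / dec_full."""
    # random crop of f1 and the matching portion of the real image
    h = max(1, int(f1.size(2) * crop_frac))
    top = torch.randint(0, f1.size(2) - h + 1, (1,)).item()
    left = torch.randint(0, f1.size(3) - h + 1, (1,)).item()
    scale = real.size(2) // f1.size(2)                       # image pixels per f1 cell
    i_part = real[:, :, top*scale:(top+h)*scale, left*scale:(left+h)*scale]
    i_part = F.interpolate(i_part, size=128, mode="bilinear", align_corners=False)
    i_full = F.interpolate(real, size=128, mode="bilinear", align_corners=False)
    rec_part = dec_part(f1[:, :, top:top+h, left:left+h])    # I'_part from cropped f1
    rec_full = dec_full(f2)                                   # I' from f2
    return F.l1_loss(rec_part, i_part) + F.l1_loss(rec_full, i_full)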
Although our self-supervision strategy for D comes in the form of an auto-encoder (AE), this approach is fundamentally different from works that combine GANs and AEs (Larsen et al., 2016; Guo et al., 2019; Zhao et al., 2016; Berthelot et al., 2017). The latter mostly train G as a decoder on a latent space learned by D, or treat the adversarial training with D as a supplementary loss on top of AE training. In contrast, our model is a pure GAN with a much simpler training schema: the auto-encoding training only regularizes D, and G is not involved.
In sum, we employ the hinge version of the adversarial loss (Lim & Ye, 2017; Tran et al., 2017) to iteratively train our D and G. We find that different GAN losses make little performance difference, while the hinge loss computes the fastest:
$\mathcal{L}_D = -\mathbb{E}_{x \sim I_{\text{real}}}[\min(0, -1 + D(x))] - \mathbb{E}_{\hat{x} \sim G(z)}[\min(0, -1 - D(\hat{x}))] + \mathcal{L}_{\text{recons}}$,   (3)
$\mathcal{L}_G = -\mathbb{E}_{z \sim \mathcal{N}}[D(G(z))]$.   (4)
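In code, these hinge objectives reduce to a few lines (a sketch; D is assumed to output raw scores):

import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    # Eq. (3) without the reconstruction term: -min(0, -1 + t) = max(0, 1 - t)
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    # Eq. (4)
    return -d_fake.mean()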
4 EXPERIMENT
Datasets: We conduct experiments on multiple datasets with a wide range of content categories. At 256 × 256 resolution, we test on Animal-Face Dog and Cat (Si & Zhu, 2011), 100-Shot-Obama, Panda, and Grumpy-cat (Zhao et al., 2020). At 1024 × 1024 resolution, we test on Flickr-FaceHQ (FFHQ) (Karras et al., 2019), Oxford-flowers (Nilsback & Zisserman, 2006), art paintings from WikiArt (wikiart.org), photographs of natural landscapes from Unsplash (unsplash.com), Pokemon (pokemon.com), anime face, skull, and shell. These datasets are designed to cover images with different characteristics: photo-realistic, graphic-illustration, and art-like images.
Metrics: We use two metrics to measure the models' synthesis performance: 1) Fréchet Inception Distance (FID) (Heusel et al., 2017) measures the overall semantic realism of the synthesized images. For datasets with fewer than 1000 images (most have only 100), we let G generate 5000 images and compute FID between the synthesized images and the whole training set. 2) Learned perceptual similarity (LPIPS) (Zhang et al., 2018) provides a perceptual distance between two images. We use LPIPS to report the reconstruction quality when performing latent-space back-tracking on G given real images, and to measure the auto-encoding performance. We find it unnecessary to involve other metrics, as FID is unlikely to be inconsistent with them given the notable performance gap between our model and the compared ones. For all tests, we train the models 5 times with random seeds and report the best scores. The relative error is less than five percent on average.
Compared Models: We compare our model with: 1) the state-of-the-art (SOTA) unconditional model, StyleGAN2, and 2) a baseline model ablated from our proposed one. Note that we adopt StyleGAN2 with the recent practices from Karras et al. (2020a) and Zhao et al. (2020), including the model configuration and differentiable data-augmentation, for the best training on few-sample datasets. Since StyleGAN2 requires much more computing cost (cc) to train, we derive an extra baseline model. In sum, we compare our model with StyleGAN2 on absolute image-synthesis quality regardless of cc, and use the baseline model as the reference within a comparable cc range.
The baseline model is the strongest performer that we integrated from various GAN techniques based on DCGAN (Radford et al., 2015): 1) spectral normalization (Miyato et al., 2018), 2) exponential-moving-average (Yazıcı et al., 2018) optimization of G, 3) differentiable augmentation, 4) GLU (Dauphin et al., 2017) instead of ReLU in G. We build our model upon this baseline with the two proposed techniques: the skip-layer excitation module and the self-supervised discriminator.
Table 1 presents the normalized cc figures of the models on an Nvidia RTX 2080-Ti GPU, implemented in PyTorch (Paszke et al., 2017). Importantly, the slimmed StyleGAN2 with 1/4 of the parameters cannot converge on the tested datasets at 1024² resolution. We therefore compare to the StyleGAN2 with 1/2 of the parameters (unless specifically mentioned) in the following experiments.
4.1 IMAGE SYNTHESIS PERFORMANCE
Few-shot generation: Collecting large-scale image datasets is expensive, or even impossible, for a certain character, genre, or topic. On such few-shot datasets, a data-efficient model becomes especially valuable for the image-generation task. In Table 2 and Table 3, we show that our model not only achieves superior performance on the few-shot datasets, but is also much more computationally efficient than the compared methods. We save checkpoints every 10k iterations during training and report the best FID among the checkpoints (reached only after at least 15 hours of training for StyleGAN2 on all datasets). Among the 12 datasets, our model performs best on 10 of them.
Please note that, due to the VRAM requirement of StyleGAN2 when trained at 1024² resolution, we have to train the models in Table 3 on an RTX TITAN GPU. In practice, the 2080-Ti and TITAN share similar performance, and our model runs in the same time on both GPUs.
Training from scratch vs. fine-tuning: Fine-tuning from a pre-trained GAN (Mo et al., 2020; Noguchi & Harada, 2019; Wang et al., 2020) has been the go-to method for image generation on datasets with few samples. However, its performance highly depends on the semantic consistency between the new dataset and the available pre-trained model. According to Zhao et al., fine-tuning performs worse than training from scratch in most cases when the content of the new dataset strays away from the original one. We confirm the limitation of current fine-tuning methods in Table 2 and Table 3, where we fine-tune a StyleGAN2 trained on FFHQ using the Freeze-D method from Mo et al. Among all the tested datasets, only Obama and Skull favor the fine-tuning method, which makes sense since these two sets share the most similar content with FFHQ.
Module ablation study: We experiment with the two proposed modules in Table 2, where both SLE (skip) and decoding-on-D (decode) separately boost the model performance. This shows that the two modules are orthogonal to each other in improving the model, with the self-supervised D making the biggest contribution. Importantly, the baseline model and StyleGAN2 diverge quickly after the listed training time. In contrast, our model is less likely to mode-collapse on the tested datasets. Unlike the baseline model, which usually mode-collapses after 10 hours of training, our model maintains good synthesis quality and does not collapse even after 20 hours of training. We argue that it is the decoding regularization on D that prevents the model from diverging.
Figure 6: Latent space back-tracking and interpolation (real images from Panda, Obama, FFHQ, Shell, and Art, with interpolations between the back-tracked images).

Table 5: LPIPS of back-tracking with G

                      Cat     Dog     FFHQ    Art
Resolution            256     256     1024    1024
Baseline @ 20k iter   2.113   2.073   2.589   2.916
Baseline @ 40k iter   2.513   2.171   2.583   2.812
Ours @ 40k iter       1.821   1.918   2.425   2.624
Ours @ 80k iter       1.897   1.986   2.342   2.601
Training with more images: For a more thorough evaluation, we also test our model on datasets with more training samples, as shown in Table 4. We train the full StyleGAN2 for around five days on the Art and Photograph datasets with a batch size of 16 on two TITAN RTX GPUs, and use the latest official figures on FFHQ from Zhao et al. In contrast, we train our model for only 24 hours with a batch size of 8 on a single 2080-Ti GPU. Specifically, for FFHQ with all 70000 images, we train our model with a larger batch size of 32 to reflect its optimal performance.
In this test, we follow the common practice of computing FID by generating 50k images and using the whole training set as the reference distribution. Note that StyleGAN2 has more than double the parameters of our model and is trained with a much larger batch size on FFHQ; these factors contribute to its better performance when given enough training samples and computing power. Meanwhile, our model keeps up well with StyleGAN2 across all tests at a considerably lower computing budget, showing compelling performance even on larger-scale datasets and a consistent performance boost over the baseline model.
Qualitative results: The advantage of our model becomes clearer in the qualitative comparisons in Fig. 5. Given the same batch size and training time, StyleGAN2 either converges more slowly or suffers from mode collapse. In contrast, our model consistently generates satisfactory images. Note that the best results of our model on Flower, Shell, and Pokemon take only three hours of training, and on the remaining three datasets the best performance is reached after eight hours of training. For StyleGAN2 on "shell", "anime face", and "Pokemon", the images shown in Fig. 5 are already from the best epoch and match the scores in Tables 2 and 3. On the remaining datasets, the quality gain of StyleGAN2 is also limited given more training time.
4.2 MORE ANALYSIS AND APPLICATIONS
Testing mode collapse with back-tracking: From a well-trained GAN, one can take a real image and invert it back to a vector in the latent space of G, thus editing the image's content by altering the back-tracked vector. Despite the various back-tracking methods (Zhu et al., 2016; Lipton & Tripathi, 2017; Zhu et al., 2020; Abdal et al., 2019), a well-generalized G is arguably just as important for good inversions. To this end, we show that our model, although trained on limited image samples, still achieves desirable performance on real-image back-tracking.
In Table 5, we split the images of each dataset with a training/testing ratio of 9:1 and train G on the training set. We compute the reconstruction error between all images of the testing set and their inversions from G, after the same 1000 update iterations on the latent vectors (to prevent the vectors from drifting far from the normal distribution). The baseline model's performance worsens with more training iterations, which reflects mode-collapse of G. In contrast, our model gives better reconstructions with consistent performance over more training iterations. Fig. 6 presents back-tracked examples (the left-most and right-most samples in the middle panel) given the real images.
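A minimal sketch of such latent-space back-tracking follows; the pixel-wise squared-error objective, step count, and latent dimension are assumptions (the paper measures the final reconstruction quality with LPIPS):

import torch

def backtrack(G, target, steps=1000, lr=0.01, z_dim=256):
    """Optimize a latent vector z so that G(z) matches a real image batch."""
    z = torch.randn(target.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - target) ** 2).mean()   # could be replaced by an LPIPS distance
        loss.backward()
        opt.step()
    return z.detach()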
The smooth interpolations between the back-tracked latent vectors also suggest little mode-collapse in our G (Radford et al., 2015; Zhao et al., 2020; Robb et al., 2020).
In addition, we show qualitative comparisons in Appendix D, where our model maintains good generation quality while StyleGAN2 and the baseline are mode-collapsed.
The self-supervision methods and generalization ability of D: Apart from the auto-encoding training of D, we show that other common self-supervision strategies on D also boost GAN performance in our training setting. We test five self-supervision settings, as shown in Table 6, all of which bring a substantial performance boost over the baseline model. Specifically, setting-a refers to contrastive learning, in which we treat each real image as a unique class and let D classify them. For setting-b, we train D to predict the real image's original aspect ratio, since images are reshaped to squares when fed to D. Setting-c is the method we employ in our model, which trains D as an encoder with a decoder to reconstruct real images. To better validate the benefit of self-supervision on D, all tests are conducted on full training sets with 10000 images and a batch size of 8, consistent with Table 4. We also tried training with a larger batch size of 16, whose results are consistent with those at batch size 8.
Interestingly, according to Table 6, while setting-c performs the best, combining it with the other two settings leads to a clear performance downgrade. Similar behavior can be found for some other self-supervision settings; e.g., when following Chen et al. (2019) with a "rotation-predicting" task on the art-paintings and FFHQ datasets, we observe a performance downgrade even compared to the baseline model. We hypothesize that the reason is that auto-encoding forces D to pay attention to more areas of the input image and thus extract a more comprehensive feature-map describing the input (for a good reconstruction). In contrast, a classification task does not guarantee that D covers the whole image; instead, the task drives D to focus only on small regions, because the model can find class cues there. Focusing on limited regions (i.e., reacting to limited image patterns) is a typical overfitting behavior, which also widely occurs for D in vanilla GANs. More discussion can be found in Appendix B.
Style mixing like StyleGAN. With the channel-wise excitation module, our model gains the same functionality as StyleGAN: it learns to disentangle an image's high-level semantic attributes (style and content) in an unsupervised way, across G's conv-layers at different scales. The style-mixing results are displayed in Fig. 7, where the top three datasets are at 256 × 256 resolution and the bottom three at 1024 × 1024. While StyleGAN2 struggles to converge on the bottom high-resolution datasets, our model successfully learns the style representations along the channel dimension of the "excited" layers (i.e., the feature-maps at 256×256 and 512×512 resolution). Please refer to Appendix A and C for more information on SLE and style-mixing.
5 CONCLUSION
We introduce two techniques that stabilize GAN training with improved synthesis quality, given sub-hundred high-fidelity images and limited computing resources. On thirteen datasets with diverse content variation, we show that the skip-layer channel-wise excitation mechanism (SLE) and a self-supervised regularization on the discriminator significantly boost the synthesis performance of GANs. Both proposed techniques require only minor changes to a vanilla GAN, enhancing GAN's practicality with a desirable plug-and-play property. We hope this work can benefit downstream tasks of GANs and provide new study perspectives for future research.
2. What are the strengths of the proposed modifications in the GAN architecture?
3. How does the reviewer assess the presentation clarity and significance of the results?
4. What is the suggestion provided by the reviewer to strengthen the paper?
5. What are the concerns regarding the comparison with a baseline StyleGAN?
6. How does the reviewer view the scalability of the proposed model?
7. Are there any legal concerns regarding the distribution of copyrighted images in the dataset?
8. How can the authors improve the reproducibility of their research? | Review | Review
Summary: This paper proposes a lightweight GAN architecture which is tuned for learning generative models in the case where one has access to only a relatively small datasets, as well as a simple autoencoding modification for GAN discriminators to help prevent overfitting and mode collapse. Results are presented on a range of benchmark and new datasets, using standard metrics to compare performance against existing models. The new models compare favorably in the target regime, while unsurprisingly not being as strong as the baseline models in the large-data regime.
My take:
This is a decent paper, with reasonable (albeit not perfect) presentation clarity and passably significant results. The architectural modifications are not trivial (i.e. one could just try and make a StyleGAN less wide, but the authors give specific attention to the design of the higher resolution layers in G), and the changes to the training procedure are simple and effective, while being sufficiently different from previous autoencoder-based approaches to merit being called novel. The results show that the proposed models perform well (qualitatively and quantitatively) in the low-data regime against a full-size StyleGAN baseline, particularly when controlling for compute budget. I would rate this paper about a 6.5: I do not have any major concerns (I would evaluate the technical and methodological soundness of this paper as high) but I also would not expect this paper to have an especially high impact. As I tend to accept, I expect to reconsider my rating to a 7 after the discussion period unless major concerns are raised.
My main suggestion to the authors which I think could strengthen this paper would be to compare against a baseline StyleGAN with the width multipliers decreased. If I were a practitioner seeking to improve model performance in the low-data regime, this would be my first approach: to take an existing, working model, and make it smaller. As the authors’ architectural changes are not this simple, one would expect that, for an equivalent FLOP budget and training budget, the new architecture would outperform a “StyleGAN-slim,” but it would be good to have quantitative evidence on this front.
If the authors have the compute budget, it would also be good to see how this model “scales up,” in the case where it is made deeper or wider (simply changing the width multipliers) and tested against StyleGAN on the full FFHQ; a plot comparing FID over time for the two models (so that a practitioner could see how long one would have to train an equivalent StyleGAN on the full dataset to outperform this model, or vice versa) would be useful. Since the model has a different architecture I would expect it to have different scaling properties. However, this reviewer appreciates that this would be a compute intensive experiment that is likely not possible to run in the revision period, and does not wish to push the authors in this direction given that it is outside the target scope of low-data modeling.
“Note that we collect the last six datasets in the wild, which we do not have a license to re-distribute, yet one can freely collect them just like us.”
I appreciate that the authors have given some consideration to the legal implications of distributing potentially copyrighted images (and given that there's not much established legal precedent that I'm aware of on whether doing research using copyrighted images of Pokemon constitutes fair use, this reviewer does not consider this cause for concern). For open-sourcing, the authors might want to consider releasing the URLs of the datasets to enable reproducibility (at least for as long as the images are up), which is an approach that has been used for other datasets. This would also allow the authors to revise this sentence to be a bit more "academic," e.g. "Note that we do not have a license to re-distribute the last six datasets, which we collect from the wild, but we provide the URLs in order to enable reproducibility."
Edit: As mentioned in my comment below, I believe the authors have done sufficient work in their revision to address my concerns and am revising my score to an acceptance. |
ICLR | Title
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Abstract
Training Generative Adversarial Networks (GAN) on high-fidelity images usually requires large-scale GPU-clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GAN with minimum computing cost. We propose a light-weight GAN structure that gains superior quality on 1024 × 1024 resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX-2080 GPU, and has a consistent performance, even with less than 100 training samples. Two technique designs constitute our work, a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder. With thirteen datasets covering a wide variety of image domains 1, we show our model’s superior performance compared to the state-of-the-art StyleGAN2, when data and computing budget are limited.
1 INTRODUCTION
The fascinating ability to synthesize images using the state-of-the-art (SOTA) Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) display a great potential of GANs for many intriguing real-life applications, such as image translation, photo editing, and artistic creation. However, expensive computing cost and the vast amount of required training data limit these SOTAs in real applications with only small image sets and low computing budgets.
In real-life scenarios, the available samples to train a GAN can be minimal, such as the medical images of a rare disease, a particular celebrity’s portrait set, and a specific artist’s artworks. Transferlearning with a pre-trained model (Mo et al., 2020; Wang et al., 2020) is one solution for the lack of training images. Nevertheless, there is no guarantee to find a compatible pre-training dataset. Furthermore, if not, fine-tuning probably leads to even worse performance (Zhao et al., 2020).
In a recent study, it was highlighted that in art creation applications, most artists prefers to train their models from scratch based on their own images to avoid biases from fine-tuned pre-trained model. Moreover, It was shown that in most cases artists want to train their models with datasets of less than
1The datasets and code are available at: https://github.com/odegeasslbc/FastGAN-pytorch
100 images (Elgammal et al., 2020). Dynamic data-augmentation (Karras et al., 2020a; Zhao et al., 2020) smooths the gap and stabilizes GAN training with fewer images. However, the computing cost from the SOTA models such as StyleGAN2 (Karras et al., 2020b) and BigGAN (Brock et al., 2019) remain to be high, especially when trained with the image resolution on 1024× 1024. In this paper, our goal is to learn an unconditional GAN on high-resolution images, with low computational cost and few training samples. As summarized in Fig. 2, these training conditions expose the model to a high risk of overfitting and mode-collapse (Arjovsky & Bottou, 2017; Zhang & Khoreva, 2018). To train a GAN given the demanding training conditions, we need a generator (G) that can learn fast, and a discriminator (D) that can continuously provide useful signals to train G. To address these challenges, we summarize our contribution as:
• We design the Skip-Layer channel-wise Excitation (SLE) module, which leverages lowscale activations to revise the channel responses on high-scale feature-maps. SLE allows a more robust gradient flow throughout the model weights for faster training. It also leads to an automated learning of a style/content disentanglement like StyleGAN2.
• We propose a self-supervised discriminator D trained as a feature-encoder with an extra decoder. We force D to learn a more descriptive feature-map covering more regions from an input image, thus yielding more comprehensive signals to train G. We test multiple selfsupervision strategies for D, among which we show that auto-encoding works the best.
• We build a computational-efficient GAN model based on the two proposed techniques, and show the model’s robustness on multiple high-fidelity datasets, as demonstrated in Fig. 1.
2 RELATED WORKS
Speed up the GAN training: Speeding up the training of GAN has been approached from various perspectives. Ngxande et al. propose to reduce the computing time with depth-wise convolutions. Zhong et al. adjust the GAN objective into a min-max-min problem for a shorter optimization path. Sinha et al. suggest to prepare each batch of training samples via a coreset selection, leverage the better data preparation for a faster convergence. However, these methods only bring a limited improvement in
training speed. Moreover, the synthesis quality is not advanced within the shortened training time.
Train GAN on high resolution: High-resolution training for GAN can be problematic. Firstly, the increased model parameters lead to a more rigid gradient flow to optimize G. Secondly, the target distribution formed by the images on 1024 × 1024 resolution is super sparse, making GAN much harder to converge. Denton et al. (2015); Zhang et al. (2017); Huang et al. (2017); Wang et al. (2018); Karras et al. (2019); Karnewar & Wang (2020); Karras et al. (2020b); Liu et al. (2021) develop the multi-scale GAN structures to alleviate the gradient flow issue, where G outputs images and receives feedback from several resolutions simultaneously. However, all these approaches further increase the computational cost, consuming even more GPU memory and training time.
Stabilize the GAN training: Mode-collapse on G is one of the big challenges when training GANs. And it becomes even more challenging given fewer training samples and a lower computational budget (a smaller batch-size). As D is more likely to be overfitting on the datasets, thus unable to provide meaningful gradients to train G (Gulrajani et al., 2017).
Prior works tackle the overfitting issue by seeking a good regularization for D, including different objectives (Arjovsky et al., 2017; Lim & Ye, 2017; Tran et al., 2017); regularizing the gradients (Gulrajani et al., 2017; Mescheder et al., 2018); normalizing the model weights (Miyato et al., 2018); and augmenting the training data (Karras et al., 2020a; Zhao et al., 2020). However, the effects of these methods degrade fast when the training batch-size is limited, since appropriate batch statistics can hardly be calculated for the regularization (normalization) over the training iterations.
Meanwhile, self-supervision on D has been shown to be an effective method to stabilize the GAN training as studied in Tran et al. (2019); Chen et al. (2019). However, the auxiliary self-supervision tasks in prior works have limited using scenario and image domain. Moreover, prior works only studied on low resolution images (322 to 1282), and without a computing resource limitation.
3 METHOD
We adopt a minimalistic design for our model. In particular, we use a single conv-layer on each resolution in G, and apply only three (input and output) channels for the conv-layers on the high resolutions (≥ 512×512) in both G and D. Fig. 3 and Fig. 4 illustrate the model structure for our G and D, with descriptions of the component layers and forward flow. These structure designs make our GAN much smaller than SOTA models and substantially faster to train. Meanwhile, our model remains robust on small datasets due to its compact size with the two proposed techniques.
3.1 SKIP-LAYER CHANNEL-WISE EXCITATION
For synthesizing higher resolution images, the generator G inevitably needs to become deeper, with more conv-layers, in concert with the up-sampling needs. A deeper model with more convolution layers leads to a longer training time of GAN, due to the increased number of model parameters and a weaker gradient flow through G (Zhang et al., 2017; Karras et al., 2018; Karnewar & Wang, 2020). To better train a deep model, He et al. design the Residual structure (ResBlock), which uses a skip-layer connection to strengthen the gradient signals between layers. However, while ResBlock has been widely used in GAN literature (Wang et al., 2018; Karras et al., 2020b), it also increases the computation cost.
We reformulate the skip-connection idea with two unique designs into the Skip-Layer Excitation module (SLE). First, ResBlock implements skip-connection as an element-wise addition between the activations from different conv-layers. It requires the spatial dimensions of the activations to be the same. Instead of addition, we apply channel-wise multiplications between the activations, eliminating the heavy computation of convolution (since one side of the activations now has a spatial dimension of 12). Second, in prior GAN works, skip-connections are only used within the same resolution. In contrast, we perform skip-connection between resolutions with a much longer range (e.g., 82 and 1282, 162 and 2562), since an equal spatial-dimension is no longer required. The two designs make SLE inherits the advantages of ResBlock with a shortcut gradient flow, meanwhile without an extra computation burden.
Formally, we define the Skip-Layer Excitation module as:
y = \mathcal{F}(x_{low}, \{W_i\}) \cdot x_{high} \qquad (1)
Here x and y are the input and output feature-maps of the SLE module, the function F contains the operations on x_low, and W_i indicates the module weights to be learned. The left panel in Fig. 3 shows an SLE module in practice, where x_low and x_high are the feature-maps at 8×8 and 128×128 resolution respectively. An adaptive average-pooling layer in F first down-samples x_low to 4×4 along the spatial dimensions, then a conv-layer further down-samples it to 1×1. A LeakyReLU models the non-linearity, and another conv-layer projects x_low to the same channel size as x_high. Finally, after a gating operation via a Sigmoid function, the output from F multiplies x_high along the channel dimension, yielding y with the same shape as x_high.
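To make the module concrete, the following is a minimal PyTorch sketch of SLE as described above; the channel widths, the LeakyReLU slope, and the exact conv kernel sizes are assumptions for illustration, not the official implementation.

```python
import torch
import torch.nn as nn

class SLE(nn.Module):
    """Skip-Layer Excitation (Eq. 1): gate a high-resolution feature-map
    channel-wise with a transform of a low-resolution feature-map."""
    def __init__(self, ch_low, ch_high):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),        # x_low -> 4x4
            nn.Conv2d(ch_low, ch_low, 4),   # 4x4 -> 1x1
            nn.LeakyReLU(0.1),
            nn.Conv2d(ch_low, ch_high, 1),  # project to x_high's channel size
            nn.Sigmoid(),                   # channel-wise gate in (0, 1)
        )

    def forward(self, x_low, x_high):
        # the 1x1 gate broadcasts over x_high's spatial dimensions
        return x_high * self.gate(x_low)

# e.g., gating 128x128 features with 8x8 features
sle = SLE(ch_low=512, ch_high=64)
y = sle(torch.randn(1, 512, 8, 8), torch.randn(1, 64, 128, 128))
assert y.shape == (1, 64, 128, 128)
```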
SLE partially resembles the Squeeze-and-Excitation module (SE) proposed by Hu et al.. However, SE operates within one feature-map as a self-gating module, whereas SLE performs between feature-maps that are far away from each other. While SLE brings the benefit of channel-wise feature re-calibration just like SE, it also strengthens the whole model's gradient flow like ResBlock. The channel-wise multiplication in SLE also coincides with Instance Normalization (Ulyanov et al., 2016; Huang & Belongie, 2017), which is widely used in style transfer. Similarly, we show that SLE enables G to automatically disentangle the content and style attributes, just like StyleGAN (Karras et al., 2019). Since SLE performs on high-resolution feature-maps, and altering these feature-maps is more likely to change the style attributes of the generated image (Karras et al., 2019; Liu et al., 2021), replacing x_low in SLE with that of another synthesized sample lets our G generate an image with the content unchanged but in the style of the replacing image.
3.2 SELF-SUPERVISED DISCRIMINATOR
Our approach to providing strong regularization for D is surprisingly simple. We treat D as an encoder and train it with small decoders. Such auto-encoding training forces D to extract image features from which the decoders can produce good reconstructions. The decoders are optimized together with D on a simple reconstruction loss, computed only on real samples:
\mathcal{L}_{recons} = \mathbb{E}_{f \sim D_{encode}(x),\ x \sim I_{real}} \left[ \lVert \mathcal{G}(f) - \mathcal{T}(x) \rVert \right] \qquad (2)
where f denotes the intermediate feature-maps from D, the function G contains the processing on f and the decoder (this G is the decoding function of Eq. 2, not the generator), and the function T represents the processing on a sample x from the real images I_real.
Our self-supervised D is illustrated in Fig. 4, where we employ two decoders for the feature-maps on two scales: f_1 on 16² and f_2 on 8². The decoders have only four conv-layers to produce images at 128×128 resolution, causing little extra computation (much less than other regularization methods). We randomly crop f_1 with 1/8 of its height and width, then crop the real image on the same portion to get I_part. We resize the real image to get I. The decoders produce I'_part from the cropped f_1, and I' from f_2. Finally, D and the decoders are trained together to minimize the loss in Eq. 2, by matching I'_part to I_part and I' to I.
Such reconstructive training ensures that D extracts a more comprehensive representation from its inputs, covering both the overall composition (from f_2) and detailed textures (from f_1). Note that the processing in G and T is not limited to cropping; more operations remain to be explored for better performance. The auto-encoding approach we employ is a typical method for self-supervised learning, which is well recognized to improve model robustness and generalization ability (He et al., 2020; Hendrycks et al., 2019; Jing & Tian, 2020; Goyal et al., 2019). In the context of GANs, we find that regularizing D via self-supervised training strategies significantly improves the synthesis quality of G, among which auto-encoding brings the largest performance boost.
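Below is a schematic PyTorch sketch of this objective (Eq. 2); the two decoders dec_part and dec_full are assumed callables producing 128×128 images, and the L1 norm and bilinear resizing stand in for the unspecified norm and processing functions.

```python
import torch
import torch.nn.functional as F

def d_recon_loss(f1, f2, real, dec_part, dec_full):
    """Eq. 2 on real samples only; G is never involved here.
    f1: (B, C1, 16, 16) and f2: (B, C2, 8, 8) are D's feature-maps;
    real: (B, 3, H, W); dec_part / dec_full are the small decoders."""
    B, _, fh, fw = f1.shape
    ch, cw = fh // 8, fw // 8                       # 1/8 of f1's height/width
    top = torch.randint(0, fh - ch + 1, (1,)).item()
    left = torch.randint(0, fw - cw + 1, (1,)).item()
    f1_crop = f1[:, :, top:top + ch, left:left + cw]

    # crop the real image on the same relative portion -> I_part
    H, W = real.shape[-2:]
    i_part = real[:, :, top * H // fh:(top + ch) * H // fh,
                        left * W // fw:(left + cw) * W // fw]
    i_part = F.interpolate(i_part, size=128, mode="bilinear", align_corners=False)
    i_full = F.interpolate(real, size=128, mode="bilinear", align_corners=False)

    # match I'_part to I_part and I' to I
    return (F.l1_loss(dec_part(f1_crop), i_part) +
            F.l1_loss(dec_full(f2), i_full))
```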
Although our self-supervision strategy for D comes in the form of an auto-encoder (AE), this approach is fundamentally different from works trying to combine GAN and AE (Larsen et al., 2016;
Guo et al., 2019; Zhao et al., 2016; Berthelot et al., 2017). The latter works mostly train G as a decoder on a latent space learned by D, or treat the adversarial training with D as a supplementary loss besides the AE training. In contrast, our model is a pure GAN with a much simpler training schema: the auto-encoding training only regularizes D, and G is not involved.
In sum, we employ the hinge version of the adversarial loss (Lim & Ye (2017); Tran et al. (2017)) to iteratively train our D and G. We find that different GAN losses make little performance difference, while the hinge loss computes the fastest:
\mathcal{L}_D = -\mathbb{E}_{x \sim I_{real}}[\min(0, -1 + D(x))] - \mathbb{E}_{\hat{x} \sim G(z)}[\min(0, -1 - D(\hat{x}))] + \mathcal{L}_{recons} \qquad (3)
\mathcal{L}_G = -\mathbb{E}_{z \sim \mathcal{N}}[D(G(z))] \qquad (4)
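In code, Eqs. 3-4 reduce to a few lines; a minimal PyTorch sketch, where d_real and d_fake are D's logits and l_recons comes from Eq. 2:

```python
import torch.nn.functional as F

def d_loss(d_real, d_fake, l_recons):
    # Eq. 3: -E[min(0, -1 + D(x))] = E[max(0, 1 - D(x))], likewise for fakes
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean() + l_recons

def g_loss(d_fake):
    # Eq. 4: G maximizes D's score on its own samples
    return -d_fake.mean()
```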
4 EXPERIMENT
Datasets: We conduct experiments on multiple datasets with a wide range of content categories. At 256 × 256 resolution, we test on Animal-Face Dog and Cat (Si & Zhu, 2011), 100-Shot-Obama, Panda, and Grumpy-cat (Zhao et al., 2020). At 1024 × 1024 resolution, we test on Flickr-Faces-HQ (FFHQ) (Karras et al., 2019), Oxford-flowers (Nilsback & Zisserman, 2006), art paintings from WikiArt (wikiart.org), photographs of natural landscapes from Unsplash (unsplash.com), Pokemon (pokemon.com), anime face, skull, and shell. These datasets are chosen to cover images with different characteristics: photo-realistic, graphic-illustration, and art-like images.
Metrics: We use two metrics to measure the models' synthesis performance: 1) Fréchet Inception Distance (FID) (Heusel et al., 2017) measures the overall semantic realism of the synthesized images. For datasets with fewer than 1000 images (most have only 100), we let G generate 5000 images and compute FID between the synthesized images and the whole training set. 2) Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) provides a perceptual distance between two images. We use LPIPS to report the reconstruction quality when we perform latent-space back-tracking on G given real images, and to measure the auto-encoding performance. We find it unnecessary to involve other metrics, as FID is unlikely to be inconsistent with them given the notable performance gap between our model and the compared ones. For all tests, we train the models five times with random seeds and report the best scores. The relative error is less than five percent on average.
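For reference, FID compares Gaussians fitted to Inception-v3 features of real and synthesized images; a minimal NumPy/SciPy sketch of the closed-form distance (the feature extraction itself is omitted):

```python
import numpy as np
from scipy import linalg

def fid(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2)."""
    covmean = linalg.sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```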
Compared Models: We compare our model with: 1) the state-of-the-art (SOTA) unconditional model, StyleGAN2, and 2) a baseline model ablated from our proposed one. Note that we equip StyleGAN2 with recent advances from Karras et al. (2020a) and Zhao et al. (2020), including the model configuration and differentiable data-augmentation, for the best training on few-sample datasets. Since StyleGAN2 requires much more computing cost (cc) to train, we derive an extra baseline model. In sum, we compare our model with StyleGAN2 on absolute image synthesis quality regardless of cc, and use the baseline model as the reference within a comparable cc range.
The baseline model is the strongest performer we could assemble from various GAN techniques on top of DCGAN (Radford et al., 2015): 1) spectral normalization (Miyato et al., 2018), 2) exponential-moving-average (Yazıcı et al., 2018) optimization on G, 3) differentiable augmentation, and 4) GLU (Dauphin et al., 2017) instead of ReLU in G. We build our model upon this baseline with the two proposed techniques: the skip-layer excitation module and the self-supervised discriminator.
Table 1 presents the normalized cc figures of the models on an Nvidia RTX 2080-Ti GPU, implemented in PyTorch (Paszke et al., 2017). Importantly, the slimmed StyleGAN2 with 1/4 of the parameters cannot converge on the tested datasets at 1024² resolution. We compare to the StyleGAN2 with 1/2 of the parameters (if not specifically mentioned) in the following experiments.
4.1 IMAGE SYNTHESIS PERFORMANCE
Few-shot generation: Collecting a large-scale image dataset is expensive, or even impossible, for a certain character, genre, or topic. On such few-shot datasets, a data-efficient model becomes especially valuable for the image generation task. In Table 2 and Table 3, we show that our model not only achieves superior performance on the few-shot datasets, but is also much more computationally efficient than the compared methods. We save checkpoints every 10k iterations during training and report the best FID among the checkpoints (which occurs only after at least 15 hours of training for StyleGAN2 on all datasets). Among the 12 datasets, our model performs best on 10 of them.
Please note that, due to StyleGAN2's VRAM requirement when trained at 1024² resolution, we have to train the models in Table 3 on an RTX TITAN GPU. In practice, the 2080-Ti and TITAN share a similar performance, and our model runs in the same time on both GPUs.
Training from scratch vs. fine-tuning: Fine-tuning from a pre-trained GAN (Mo et al., 2020; Noguchi & Harada, 2019; Wang et al., 2020) has been the go-to method for image generation on datasets with few samples. However, its performance highly depends on the semantic consistency between the new dataset and the available pre-trained model. According to Zhao et al., fine-tuning performs worse than training from scratch in most cases when the content of the new dataset strays away from the original one. We confirm the limitation of current fine-tuning methods in Table 2 and Table 3, where we fine-tune StyleGAN2 trained on FFHQ using the Freeze-D method from Mo et al.. Among all the tested datasets, only Obama and Skull favor the fine-tuning method, which makes sense since these two sets share the most similar content with FFHQ.
Module ablation study: We experiment with the two proposed modules in Table 2, where both SLE (skip) and decoding-on-D (decode) separately boost the model performance. This shows that the two modules are orthogonal to each other in improving performance, and that the self-supervised D makes the biggest contribution. Importantly, the baseline model and StyleGAN2 diverge fast after the listed training time, whereas our model is far less likely to mode-collapse on the tested datasets. Unlike the baseline model, which usually mode-collapses after training for 10 hours, our model maintains good synthesis quality and does not collapse even after 20 hours of training. We argue that it is the decoding regularization on D that prevents the model from diverging.
Figure 6: Latent space back-tracking and interpolation (datasets: Panda, Obama, FFHQ, Shell, Art; each row shows a real image, interpolations between the back-tracked images, and a second real image).
Table 5: LPIPS of back-tracking with G

                      Cat      Dog      FFHQ     Art
Resolution            256      256      1024     1024
Baseline @ 20k iter   2.113    2.073    2.589    2.916
Baseline @ 40k iter   2.513    2.171    2.583    2.812
Ours @ 40k iter       1.821    1.918    2.425    2.624
Ours @ 80k iter       1.897    1.986    2.342    2.601
Training with more images: For a more thorough evaluation, we also test our model on datasets with more abundant training samples, as shown in Table 4. We train the full StyleGAN2 for around five days on the Art and Photograph datasets with a batch-size of 16 on two TITAN RTX GPUs, and use the latest official figures on FFHQ from Zhao et al.. In contrast, we train our model for only 24 hours, with a batch-size of 8 on a single 2080-Ti GPU. Specifically, for FFHQ with all 70000 images, we train our model with a larger batch-size of 32, to reflect the optimal performance of our model.
In this test, we follow the common practice of computing FID by generating 50k images and using the whole training set as the reference distribution. Note that StyleGAN2 has more than double the parameters of our model and is trained with a much larger batch-size on FFHQ; these factors contribute to its better performance when given enough training samples and computing power. Meanwhile, our model keeps up well with StyleGAN2 across all tests with a considerably lower computing budget, showing compelling performance even on larger-scale datasets and a consistent performance boost over the baseline model.
Qualitative results: The advantage of our model becomes even clearer from the qualitative comparisons in Fig. 5. Given the same batch-size and training time, StyleGAN2 either converges more slowly or suffers from mode-collapse. In contrast, our model consistently generates satisfactory images. Note that the best results from our model on Flower, Shell, and Pokemon take only three hours of training, and on the remaining three datasets the best performance is achieved after eight hours of training. For StyleGAN2 on "shell", "anime face", and "Pokemon", the images shown in Fig. 5 are already from the best epoch, and they match the scores in Table 2 and Table 3. For the rest of the datasets, the quality gain of StyleGAN2 is also limited given more training time.
4.2 MORE ANALYSIS AND APPLICATIONS
Testing mode collapse with back-tracking: Given a well-trained GAN, one can take a real image and invert it back to a vector in the latent space of G, then edit the image's content by altering the back-tracked vector. Despite the various back-tracking methods (Zhu et al., 2016; Lipton & Tripathi, 2017; Zhu et al., 2020; Abdal et al., 2019), a well-generalized G is arguably just as important for good inversions. To this end, we show that our model, although trained on limited image samples, still achieves desirable performance on real-image back-tracking.
In Table 5, we split the images from each dataset with a training/testing ratio of 9:1 and train G on the training set. We compute a reconstruction error between all images from the testing set and their inversions from G, after the same 1000 optimization iterations on the latent vectors (to prevent the vectors from drifting far from the normal distribution). The baseline model's performance degrades with more training iterations, which reflects mode-collapse on G. In contrast, our model gives better reconstructions, with consistent performance over more training iterations. Fig. 6 presents the back-tracked examples (the left-most and right-most samples in the middle panel) given the real images.
The smooth interpolations from the back-tracked latent vectors also suggest little mode-collapse of our G (Radford et al., 2015; Zhao et al., 2020; Robb et al., 2020).
In addition, we show qualitative comparisons in appendix D, where our model maintains good generation quality while StyleGAN2 and the baseline have mode-collapsed.
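A simplified sketch of the back-tracking procedure used in Table 5 follows; since the paper does not specify the optimizer, distance, or regularization toward the normal distribution, the Adam optimizer, L1 loss, and the z_dim attribute below are assumptions:

```python
import torch
import torch.nn.functional as F

def backtrack(G, target, steps=1000, lr=0.01):
    """Invert a real image into G's latent space by gradient descent on z."""
    z = torch.randn(1, G.z_dim, requires_grad=True)  # z_dim: assumed attribute
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.l1_loss(G(z), target).backward()
        opt.step()
    return z.detach()
```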
The self-supervision methods and generalization ability of D: Apart from the auto-encoding training for D, we show that D with other common self-supervision strategies also boosts GAN performance in our training settings. We test five self-supervision settings, as shown in Table 6, all of which bring a substantial performance boost compared to the baseline model. Specifically, setting-a refers to contrastive learning, in which we treat each real image as a unique class and let D classify them. For setting-b, we train D to predict a real image's original aspect ratio, since images are reshaped to squares when fed to D. Setting-c is the method we employ in our model, which
trains D as an encoder with a decoder to reconstruct real images. To better validate the benefit of self-supervision on D, all the tests are conducted on full training sets with 10000 images, with a batch-size of 8 to be consistent with Table 4. We also tried training with a larger batch-size of 16, whose results are consistent with those at batch-size 8.
Interestingly, according to Table 6, while setting-c performs the best, combining it with the other two settings leads to a clear performance downgrade. Similar behavior can be found with some other self-supervision settings; e.g., when following Chen et al. (2019) with a "rotation-prediction" task on the art-paintings and FFHQ datasets, we observe a performance downgrade even compared to the baseline model. We hypothesize that the auto-encoding forces D to pay attention to more areas of the input image, and thus to extract a more comprehensive feature-map describing the input (needed for a good reconstruction). In contrast, a classification task does not require D to cover the whole image; instead, it drives D to focus only on small regions, because the model can find class cues there. Focusing on limited regions (i.e., reacting to limited image patterns) is a typical overfitting behavior, which also commonly happens to D in vanilla GANs. More discussion can be found in appendix B.
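As an illustration of setting-a, here is a small sketch of such an exemplar-classification head on D's features; the feature dimension, pooling, and linear head are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class ExemplarHead(nn.Module):
    """Treat each of the N real training images as its own class and
    classify D's pooled features into N classes (setting-a)."""
    def __init__(self, feat_dim, n_images):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_images)

    def forward(self, feat_map, image_ids):
        feats = feat_map.mean(dim=(2, 3))  # global average pooling
        return F.cross_entropy(self.fc(feats), image_ids)
```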
Style mixing like StyleGAN: With the channel-wise excitation module, our model gains the same functionality as StyleGAN: it learns to disentangle an image's high-level semantic attributes (style and content) in an unsupervised way, from G's conv-layers at different scales. The style-mixing results are displayed in Fig. 7, where the top three datasets are at 256×256 resolution and the bottom three at 1024×1024. While StyleGAN2 struggles to converge on the bottom high-resolution datasets, our model successfully learns the style representations along the channel dimension of the "excited" layers (i.e., the feature-maps at 256×256 and 512×512 resolution). Please refer to appendix A and C for more information on SLE and style mixing.
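To illustrate the mechanism, a toy sketch of style mixing that reuses the SLE class sketched in Sec. 3.1; this stand-in generator is purely illustrative and is not the paper's architecture:

```python
import torch
import torch.nn as nn

class ToyG(nn.Module):
    """Stand-in generator with one SLE between an 8x8 and a 128x128 map."""
    def __init__(self):
        super().__init__()
        self.low = nn.ConvTranspose2d(64, 512, 8)               # z -> 8x8
        self.high = nn.Sequential(nn.Upsample(scale_factor=16),
                                  nn.Conv2d(512, 64, 3, 1, 1))  # -> 128x128
        self.sle = SLE(512, 64)  # SLE class from the Sec. 3.1 sketch
        self.out = nn.Conv2d(64, 3, 3, 1, 1)

    def forward(self, z, z_style=None):
        x_low = self.low(z)
        x_high = self.high(x_low)
        # style mixing: gate x_high with x_low computed from another latent
        gate_src = x_low if z_style is None else self.low(z_style)
        return self.out(self.sle(gate_src, x_high))

g = ToyG()
z_content, z_style = torch.randn(1, 64, 1, 1), torch.randn(1, 64, 1, 1)
mixed = g(z_content, z_style)  # content from z_content, style from z_style
```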
5 CONCLUSION
We introduce two techniques that stabilize GAN training with improved synthesis quality, given sub-hundred high-fidelity images and limited computing resources. On thirteen datasets with diverse content, we show that a skip-layer channel-wise excitation mechanism (SLE) and a self-supervised regularization on the discriminator significantly boost the synthesis performance of GANs. Both proposed techniques require only minor changes to a vanilla GAN, enhancing GAN's practicality with a desirable plug-and-play property. We hope this work can benefit downstream tasks of GANs and provide new study perspectives for future research. | 1. What is the focus and contribution of the paper on Generative Adversarial Networks (GANs)?
2. What are the strengths of the proposed architecture, particularly in terms of the skip-layer channel-wise excitation (SLE) modules and the self-supervised discriminator?
3. Do you have any concerns or questions regarding the paper's experiments and results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or trade-offs in the proposed approach that the reviewer would like to know more about? | Review | Review
Summary:
This paper introduces a new GAN architecture that targets high resolution generation for small datasets. Two techniques are introduced for this purpose: skip-layer channel-wise excitation (SLE) modules, and regularization of the discriminator via a self-supervised auxiliary task. The proposed architecture is shown to outperform current SOTA models on a variety of small datasets, while training in less time.
Strengths:
-Paper is well written and easy to understand.
-SLE combines benefits of skip-connections, channel-attention, and style-modulation in a single operation.
-Self-supervised discriminator appears to be very effective at preventing overfitting in low data regimes.
-Strong generation results on a variety of datasets, including better image quality, and faster training time than the baseline models with similar numbers of parameters.
-Ablation study demonstrates the usefulness of each of the proposed components.
Weaknesses:
-No significant weaknesses that I can think of.
Recommendation and Justification:
I quite like this paper and tend to vote for acceptance. It is refreshing to see a new architecture designed specifically for the low data, low compute regime, rather than simply reducing the capacity of existing architectures. I particularly like the idea of regularizing the discriminator with an auto-encoding task. Many other methods that attempt to combine auto-encoding and GANs seem to constrain the model too much due to the requirement of mapping all examples in the dataset into the latent space, but this method does not appear to share this constraint. It also has the added benefit of sharing discriminator capacity, rather than introducing an additional encoder which further increases computational cost.
Clarifying Questions:
-Why perform random cropping in the discriminator at 16x16 resolution? Why not perform reconstruction on the full image? Is this mainly for computational savings?
-Is there any weighting on the reconstruction loss in Equation 3, or is the weighting effectively equal to 1 here?
-In Table 4, the full StyleGAN2 model outperforms the proposed model when more images are available. However, as is stated in the paper, the StyleGAN2 model has twice as many parameters. If the number of parameters in both models were equal (either by doubling the amount in the proposed model or halving the amount in StyleGAN2), which would be expected to achieve better performance? |
ICLR | Title
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Abstract
Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GANs with minimum computing cost. We propose a light-weight GAN structure that gains superior quality at 1024 × 1024 resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX-2080 GPU, and has consistent performance, even with fewer than 100 training samples. Two technical designs constitute our work, a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder. With thirteen datasets covering a wide variety of image domains¹, we show our model's superior performance compared to the state-of-the-art StyleGAN2 when data and computing budget are limited.
1 INTRODUCTION
The fascinating ability to synthesize images with state-of-the-art (SOTA) Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) displays the great potential of GANs for many intriguing real-life applications, such as image translation, photo editing, and artistic creation. However, the expensive computing cost and the vast amount of required training data limit these SOTA models in real applications with only small image sets and low computing budgets.
In real-life scenarios, the available samples to train a GAN can be minimal, such as the medical images of a rare disease, a particular celebrity's portrait set, or a specific artist's artworks. Transfer learning with a pre-trained model (Mo et al., 2020; Wang et al., 2020) is one solution to the lack of training images. Nevertheless, there is no guarantee of finding a compatible pre-training dataset, and if none exists, fine-tuning probably leads to even worse performance (Zhao et al., 2020).
A recent study highlighted that in art-creation applications, most artists prefer to train their models from scratch on their own images, to avoid the biases of a fine-tuned pre-trained model. Moreover, it was shown that in most cases artists want to train their models with datasets of fewer than
¹The datasets and code are available at: https://github.com/odegeasslbc/FastGAN-pytorch
100 images (Elgammal et al., 2020). Dynamic data-augmentation (Karras et al., 2020a; Zhao et al., 2020) smooths the gap and stabilizes GAN training with fewer images. However, the computing cost of SOTA models such as StyleGAN2 (Karras et al., 2020b) and BigGAN (Brock et al., 2019) remains high, especially when training at 1024 × 1024 resolution. In this paper, our goal is to learn an unconditional GAN on high-resolution images, with low computational cost and few training samples. As summarized in Fig. 2, these training conditions expose the model to a high risk of overfitting and mode-collapse (Arjovsky & Bottou, 2017; Zhang & Khoreva, 2018). To train a GAN under such demanding conditions, we need a generator (G) that can learn fast, and a discriminator (D) that can continuously provide useful signals to train G. To address these challenges, we summarize our contributions as:
• We design the Skip-Layer channel-wise Excitation (SLE) module, which leverages low-scale activations to revise the channel responses of high-scale feature-maps. SLE allows a more robust gradient flow throughout the model weights for faster training. It also leads to an automated learning of a style/content disentanglement like StyleGAN2.
• We propose a self-supervised discriminator D trained as a feature-encoder with an extra decoder. We force D to learn a more descriptive feature-map covering more regions of an input image, thus yielding more comprehensive signals to train G. We test multiple self-supervision strategies for D, among which we show that auto-encoding works the best.
• We build a computationally efficient GAN model based on the two proposed techniques, and show the model's robustness on multiple high-fidelity datasets, as demonstrated in Fig. 1.
2 RELATED WORKS
Speed up the GAN training: Speeding up the training of GANs has been approached from various perspectives. Ngxande et al. propose to reduce the computing time with depth-wise convolutions. Zhong et al. adjust the GAN objective into a min-max-min problem for a shorter optimization path. Sinha et al. suggest preparing each batch of training samples via coreset selection, leveraging better data preparation for faster convergence. However, these methods only bring a limited improvement in
training speed. Moreover, the synthesis quality is not advanced within the shortened training time.
Train GAN on high resolution: High-resolution training for GANs can be problematic. Firstly, the increased number of model parameters leads to a more rigid gradient flow for optimizing G. Secondly, the target distribution formed by images at 1024 × 1024 resolution is extremely sparse, making GANs much harder to converge. Denton et al. (2015); Zhang et al. (2017); Huang et al. (2017); Wang et al. (2018); Karras et al. (2019); Karnewar & Wang (2020); Karras et al. (2020b); Liu et al. (2021) develop multi-scale GAN structures to alleviate the gradient-flow issue, where G outputs images and receives feedback at several resolutions simultaneously. However, all these approaches further increase the computational cost, consuming even more GPU memory and training time.
| 1. What is the main contribution of the paper on training GANs for high-resolution image synthesis with small datasets?
2. What are the strengths of the proposed approach, particularly regarding the SLE module and SS-discriminator?
3. Do you have any concerns or questions regarding the novelty of the proposed techniques compared to previous works?
4. How does the reviewer assess the effectiveness and efficiency of the proposed model in comparison to existing approaches, such as StyleGAN2?
5. What are the limitations and potential biases in the evaluation and experimental setup of the paper, especially regarding the choice of metrics and the comparison to other methods?
6. Are there any suggestions or recommendations for future improvements or extensions of the proposed method? | Review | Review
Paper summary
This work studies training GANs on small datasets (in a few-shot setting) for high-resolution image synthesis. To generate high-quality samples with minimum computation cost, and to alleviate overfitting and training instabilities, two techniques are proposed: 1) For the generator the Skip-Layer channel-wise Excitation (SLE) module is introduced, “skip-connecting” low-scale layers with high-scale ones, which facilitates the gradient flow and allows style-content mixing from different images. 2) The discriminator is trained with additional small decoders to reconstruct given images, which acts as self-supervision and helps to reduce overfitting. Experiments show that the proposed GAN model copes well with high-resolution image synthesis task (256x256 – 1024x1024) while being trained on small datasets (down to 60 – 100 images), providing a significant speed up for this setting compared to existing approaches.
Strengths
The paper proposes an approach for a very challenging and important task of training GANs with the small amount of training data. To my knowledge, this paper is the first to generate high-resolution realistic images from datasets of such a small scale. This is valuable, as it potentially extends the domain of possible GAN applications. It is also good to see that the paper compares to the recent advances in low-data GANs (Karras et al., 2020a).
The paper achieves good results. The performance gain in comparison to StyleGAN2 and the considered baseline is visible across multiple small-scale datasets, see Tables 2 and 3. The improvement in visual quality is also clearly seen from Figure 5, though the evaluation setting might be unfair to StyleGAN2 (see my comments below).
SLE module seems interesting, as it is a novel way for designing a skip-connection between layers of different spatial resolutions in the generator. Besides facilitating the gradient flow, which helps the generator to learn quicker, it serves as a tool for style-content mixing by modulating high-scale features on low-scale encoding of another image.
Weaknesses
Limited technical novelty and missing comparisons to closely related work. Each of the two proposed technical solutions has been proposed in similar forms in previous works, and the paper does not directly compare with them.
SLE: The proposed SLE module is a combination of a skip connection and a channel-attention mechanism. However, no clear comparison with the related work is provided. As skip connections, one could also simply use residual connections, or use the MSG-GAN (Karnewar & Wang, 2019) or StyleGAN2 approaches to improve the gradient flow (see Fig. 7 in the StyleGAN2 paper). They are probably heavier in terms of training speed, but I think there has to be an ablation where the proposed SLE is fairly compared to other ways of using skip connections to improve the gradient flow (memory, speed, performance). As a channel-attention mechanism, the proposed technique resembles the Squeeze-and-Excitation module (SE) proposed by Hu et al. However, there are other follow-up works that show superior results, such as the Efficient Channel Attention (ECA) module proposed by Wang et al. CVPR 2020 or the Convolutional Block Attention Module by Woo et al. ECCV 2018. The paper doesn't compare the proposed SLE to the above methods, so it's hard to judge how effective it is in comparison.
SS-discriminator: The employed auto-encoding approach is a typical method for self-supervised learning. However, there has been a line of works that uses different self-supervision techniques (e.g., the auxiliary rotation loss in Chen et al. CVPR 2019) or regularizations on the discriminator side (e.g., Zhao et al. 2020) for the same purpose as the proposed self-supervision, and it would be beneficial to see a comparison of the proposed self-supervision to existing approaches. Table 6 provides a comparison with only two SS techniques, which might be suboptimal for the task at hand.
Generally, the proposed model compares well to the considered baseline (DCGAN + extras), but the need for the proposed solutions is not totally justified. Other similar existing solutions, mentioned above, implemented on top of the baseline, potentially could lead to the same performance improvement.
Incomplete evaluation and lack of experimental support for some claims.
The comparison is done using only one metric, FID. This metric is known to be unable to detect overfitting and, as was recently shown, is not a proper metric in low-data regimes (see Fig. 4 in [*]). This raises the concern that on such small datasets the metric simply reflects the degree of overfitting to the training set. With the limited training time used for the reported experiments, this is naturally easier to achieve for the proposed (lighter) model than for StyleGAN2, which might explain such a performance gap between the two models. Overall, I disagree with the claim "We find it unnecessary to involve other metrics", as the FID metric could be misleading. It would be beneficial to also employ other metrics to measure the diversity of the generated samples. Thus, I find the evaluation presented in Tables 2 and 3 incomplete.
[*] Robb et al., Few-Shot Adaptation of Generative Adversarial Networks, arXiv, 2020.
Overfitting is certainly one of the main challenges in a few-shot synthesis setting. However, the paper pays relatively little attention to analyzing this issue, and in its current state it's not clear whether the proposed solutions actually help to avoid overfitting. On small datasets, the generated images probably resemble the training examples. This is seen for "Skull" in Figure 5, where for each generated image one could find a similar training example, and it is also noticeable for "Shell" in Figure 6, where the interpolations tend to resemble the image on the left or on the right. I agree that Table 5 is valuable and that it shows relative overfitting strength compared to the baseline. However, I would also expect an analysis of the absolute values, as well as a comparison to StyleGAN2. For example, reporting LPIPS to the nearest training example would be helpful, together with showing generated samples alongside the closest training examples.
Looking at the results in Table 4, as the number of training images becomes larger, StyleGAN2 outperforms the proposed model. This also shows that even with half the parameters, the StyleGAN2 model capacity is too large for fewer than 2k images, so in a low-data regime a StyleGAN2 with even fewer parameters might give better results than the halved StyleGAN2. Given also that on larger datasets (5k-10k, Table 4) the proposed model underperforms, the problem might be the limited capacity of the proposed model, so increasing the number of parameters, e.g., the number of channels, might help to improve the overall performance. This trade-off between model capacity and training-set size is not analysed in the paper, and from my point of view it leads to an unfair comparison in Tables 2, 3, and 4.
For Figure 5, the images shown are from a different epoch than in Tables 3 and 4. Moreover, it might be unfair to clip StyleGAN2 at 10 hours of training, as its best epochs come later. The figure illustrates the speed-up from the proposed model, but the reader cannot match it to the FID values in the tables; also, it is not possible to see the performance of StyleGAN2 at its best checkpoints in the studied settings.
It is unclear what is meant by the "robustness" of the model in the paper. The model is claimed to be robust, but this claim is not really explained or supported experimentally.
Minor
Why not also employ the SLE module in the discriminator?
How does the style mixing via SLE compare to other approaches, e.g., StyleGAN2?
Overall, I give the paper a borderline rating. I note that the paper studies an important problem, achieves good results, and advances GANs extending their application areas. On the other hand, I find some incompleteness in the experimental evaluation and unsupported claims in the paper, and have concerns about the limited novelty of the proposed technical solutions.
Post-rebuttal: I believe the authors have done sufficient work in their revision to address my concerns. Thus I'm leaning towards acceptance and raising my score to a 7. |
ICLR | Title
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Abstract
Training Generative Adversarial Networks (GAN) on high-fidelity images usually requires large-scale GPU-clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GAN with minimum computing cost. We propose a light-weight GAN structure that gains superior quality at 1024 × 1024 resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX-2080 GPU, and has a consistent performance, even with less than 100 training samples. Two technique designs constitute our work: a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder. With thirteen datasets covering a wide variety of image domains 1, we show our model’s superior performance compared to the state-of-the-art StyleGAN2, when data and computing budget are limited.
1 INTRODUCTION
The fascinating ability to synthesize images using state-of-the-art (SOTA) Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) displays great potential of GANs for many intriguing real-life applications, such as image translation, photo editing, and artistic creation. However, the expensive computing cost and the vast amount of required training data limit these SOTA models in real applications with only small image sets and low computing budgets.
In real-life scenarios, the available samples to train a GAN can be minimal, such as the medical images of a rare disease, a particular celebrity’s portrait set, or a specific artist’s artworks. Transfer learning with a pre-trained model (Mo et al., 2020; Wang et al., 2020) is one solution for the lack of training images. Nevertheless, there is no guarantee of finding a compatible pre-training dataset; and if none is found, fine-tuning probably leads to even worse performance (Zhao et al., 2020).
A recent study highlighted that in art-creation applications, most artists prefer to train their models from scratch on their own images, to avoid biases from a fine-tuned pre-trained model. Moreover, it was shown that in most cases artists want to train their models with datasets of less than
1The datasets and code are available at: https://github.com/odegeasslbc/FastGAN-pytorch
100 images (Elgammal et al., 2020). Dynamic data-augmentation (Karras et al., 2020a; Zhao et al., 2020) smooths the gap and stabilizes GAN training with fewer images. However, the computing cost of SOTA models such as StyleGAN2 (Karras et al., 2020b) and BigGAN (Brock et al., 2019) remains high, especially when trained at 1024 × 1024 resolution. In this paper, our goal is to learn an unconditional GAN on high-resolution images, with low computational cost and few training samples. As summarized in Fig. 2, these training conditions expose the model to a high risk of overfitting and mode-collapse (Arjovsky & Bottou, 2017; Zhang & Khoreva, 2018). To train a GAN under such demanding conditions, we need a generator (G) that can learn fast, and a discriminator (D) that can continuously provide useful signals to train G. To address these challenges, we summarize our contributions as:
• We design the Skip-Layer channel-wise Excitation (SLE) module, which leverages low-scale activations to revise the channel responses on high-scale feature-maps. SLE allows a more robust gradient flow throughout the model weights for faster training. It also leads to an automated learning of style/content disentanglement like StyleGAN2.
• We propose a self-supervised discriminator D trained as a feature-encoder with an extra decoder. We force D to learn a more descriptive feature-map covering more regions of an input image, thus yielding more comprehensive signals to train G. We test multiple self-supervision strategies for D, among which we show that auto-encoding works the best.
• We build a computational-efficient GAN model based on the two proposed techniques, and show the model’s robustness on multiple high-fidelity datasets, as demonstrated in Fig. 1.
2 RELATED WORKS
Speed up the GAN training: Speeding up the training of GANs has been approached from various perspectives. Ngxande et al. propose to reduce the computing time with depth-wise convolutions. Zhong et al. adjust the GAN objective into a min-max-min problem for a shorter optimization path. Sinha et al. suggest preparing each batch of training samples via coreset selection, leveraging the better data preparation for faster convergence. However, these methods bring only a limited improvement in training speed. Moreover, the synthesis quality is not advanced within the shortened training time.
Train GAN on high resolution: High-resolution training for GAN can be problematic. Firstly, the increased number of model parameters leads to a more rigid gradient flow for optimizing G. Secondly, the target distribution formed by images at 1024 × 1024 resolution is extremely sparse, making GANs much harder to converge. Denton et al. (2015); Zhang et al. (2017); Huang et al. (2017); Wang et al. (2018); Karras et al. (2019); Karnewar & Wang (2020); Karras et al. (2020b); Liu et al. (2021) develop multi-scale GAN structures to alleviate the gradient flow issue, where G outputs images and receives feedback from several resolutions simultaneously. However, all these approaches further increase the computational cost, consuming even more GPU memory and training time.
Stabilize the GAN training: Mode-collapse on G is one of the big challenges when training GANs, and it becomes even more challenging given fewer training samples and a lower computational budget (a smaller batch-size), as D is more likely to overfit the dataset and thus be unable to provide meaningful gradients to train G (Gulrajani et al., 2017).
Prior works tackle the overfitting issue by seeking a good regularization for D, including different objectives (Arjovsky et al., 2017; Lim & Ye, 2017; Tran et al., 2017); regularizing the gradients (Gulrajani et al., 2017; Mescheder et al., 2018); normalizing the model weights (Miyato et al., 2018); and augmenting the training data (Karras et al., 2020a; Zhao et al., 2020). However, the effects of these methods degrade fast when the training batch-size is limited, since appropriate batch statistics can hardly be calculated for the regularization (normalization) over the training iterations.
Meanwhile, self-supervision on D has been shown to be an effective method to stabilize GAN training, as studied in Tran et al. (2019); Chen et al. (2019). However, the auxiliary self-supervision tasks in prior works are limited in their usage scenarios and image domains. Moreover, prior works only studied low-resolution images (32² to 128²), without a computing-resource limitation.
3 METHOD
We adopt a minimalistic design for our model. In particular, we use a single conv-layer on each resolution in G, and apply only three (input and output) channels for the conv-layers on the high resolutions (≥ 512×512) in both G and D. Fig. 3 and Fig. 4 illustrate the model structure for our G and D, with descriptions of the component layers and forward flow. These structure designs make our GAN much smaller than SOTA models and substantially faster to train. Meanwhile, our model remains robust on small datasets due to its compact size with the two proposed techniques.
3.1 SKIP-LAYER CHANNEL-WISE EXCITATION
For synthesizing higher resolution images, the generator G inevitably needs to become deeper, with more conv-layers, in concert with the up-sampling needs. A deeper model with more convolution layers leads to a longer training time of GAN, due to the increased number of model parameters and a weaker gradient flow through G (Zhang et al., 2017; Karras et al., 2018; Karnewar & Wang, 2020). To better train a deep model, He et al. design the Residual structure (ResBlock), which uses a skip-layer connection to strengthen the gradient signals between layers. However, while ResBlock has been widely used in GAN literature (Wang et al., 2018; Karras et al., 2020b), it also increases the computation cost.
We reformulate the skip-connection idea with two unique designs into the Skip-Layer Excitation module (SLE). First, ResBlock implements the skip-connection as an element-wise addition between the activations from different conv-layers. It requires the spatial dimensions of the activations to be the same. Instead of addition, we apply channel-wise multiplication between the activations, eliminating the heavy computation of convolution (since one side of the activations now has a spatial dimension of 1²). Second, in prior GAN works, skip-connections are only used within the same resolution. In contrast, we perform the skip-connection between resolutions with a much longer range (e.g., 8² and 128², 16² and 256²), since an equal spatial dimension is no longer required. The two designs let SLE inherit the advantages of ResBlock, with a shortcut gradient flow but without an extra computation burden.
Formally, we define the Skip-Layer Excitation module as:

$$y = F(x_{low}, \{W_i\}) \cdot x_{high} \quad (1)$$
Here $x$ and $y$ are the input and output feature-maps of the SLE module, the function $F$ contains the operations on $x_{low}$, and $W_i$ indicates the module weights to be learned. The left panel in Fig. 3 shows an SLE module in practice, where $x_{low}$ and $x_{high}$ are the feature-maps at 8×8 and 128×128 resolution respectively. An adaptive average-pooling layer in $F$ first down-samples $x_{low}$ to 4×4 along the spatial dimensions, then a conv-layer further down-samples it to 1×1. A LeakyReLU is used to model the non-linearity, and another conv-layer projects $x_{low}$ to have the same channel size as $x_{high}$. Finally, after a gating operation via a Sigmoid function, the output from $F$ multiplies $x_{high}$ along the channel dimension, yielding $y$ with the same shape as $x_{high}$.
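To make the forward flow above concrete, below is a minimal PyTorch sketch of an SLE block under the stated design; the channel counts, LeakyReLU slope, and bias settings are illustrative assumptions rather than the released configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SLE(nn.Module):
    """Skip-Layer Excitation: gate a high-res feature map with a low-res one."""
    def __init__(self, ch_low, ch_high):
        super().__init__()
        # a 4x4 conv reduces the pooled 4x4 map to 1x1; a 1x1 conv matches channels
        self.conv1 = nn.Conv2d(ch_low, ch_low, kernel_size=4, bias=False)
        self.conv2 = nn.Conv2d(ch_low, ch_high, kernel_size=1, bias=False)

    def forward(self, x_low, x_high):
        y = F.adaptive_avg_pool2d(x_low, output_size=4)  # -> (B, ch_low, 4, 4)
        y = F.leaky_relu(self.conv1(y), 0.1)             # -> (B, ch_low, 1, 1)
        y = torch.sigmoid(self.conv2(y))                 # -> (B, ch_high, 1, 1)
        return x_high * y                                # channel-wise gating (Eq. 1)

# e.g., gate the 128x128 feature map with the 8x8 one
sle = SLE(ch_low=512, ch_high=64)
out = sle(torch.randn(2, 512, 8, 8), torch.randn(2, 64, 128, 128))
```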
SLE partially resembles the Squeeze-and-Excitation module (SE) proposed by Hu et al.. However, SE operates within one feature-map as a self-gating module. In comparison, SLE performs between feature-maps that are far away from each other. While SLE brings the benefit of channel-wise feature re-calibration just like SE, it also strengthens the whole model’s gradient flow like ResBlock. The channel-wise multiplication in SLE also coincides with Instance Normalization (Ulyanov et al., 2016; Huang & Belongie, 2017), which is widely used in style transfer. Similarly, we show that SLE enables G to automatically disentangle the content and style attributes, just like StyleGAN (Karras et al., 2019). As SLE performs on high-resolution feature-maps, altering these feature-maps is shown to be more likely to change the style attributes of the generated image (Karras et al., 2019; Liu et al., 2021). By replacing $x_{low}$ in SLE with that of another synthesized sample, our G can generate an image with the content unchanged, but in the style of the new replacing image.
3.2 SELF-SUPERVISED DISCRIMINATOR
Our approach to provide a strong regularization for D is surprisingly simple. We treat D as an encoder and train it with small decoders. Such auto-encoding training forces D to extract image features that the decoders can give good reconstructions. The decoders are optimized together with D on a simple reconstruction loss, which is only trained on real samples:
$$\mathcal{L}_{recons} = \mathbb{E}_{f \sim D_{encode}(x),\, x \sim I_{real}}\left[\, \lVert G(f) - T(x) \rVert \,\right], \quad (2)$$
where $f$ denotes the intermediate feature-maps from D, the function $G$ contains the processing on $f$ and the decoder, and the function $T$ represents the processing on the sample $x$ from the real images $I_{real}$.
Our self-supervised D is illustrated in Fig. 4, where we employ two decoders for the feature-maps on two scales: $f_1$ at 16² and $f_2$ at 8². The decoders only have four conv-layers to produce images at 128×128 resolution, causing little extra computation (much less than other regularization methods). We randomly crop $f_1$ to 1/8 of its height and width, then crop the real image on the same portion to get $I_{part}$. We resize the real image to get $I$. The decoders produce $I'_{part}$ from the cropped $f_1$, and $I'$ from $f_2$. Finally, D and the decoders are trained together to minimize the loss in Eq. 2, by matching $I'_{part}$ to $I_{part}$ and $I'$ to $I$.
Such reconstructive training makes sure that D extracts a more comprehensive representation from the inputs, covering both the overall compositions (from $f_2$) and the detailed textures (from $f_1$). Note that the processing in $G$ and $T$ is not limited to cropping; more operations remain to be explored for better performance. The auto-encoding approach we employ is a typical method for self-supervised learning, which has been well recognized to improve model robustness and generalization ability (He et al., 2020; Hendrycks et al., 2019; Jing & Tian, 2020; Goyal et al., 2019). In the context of GAN, we find that a D regularized via self-supervision training strategies significantly improves the synthesis quality of G, among which auto-encoding brings the biggest performance boost.
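The training of this regularization can be sketched as follows, assuming a discriminator that also returns its intermediate feature maps $f_1$ (16²) and $f_2$ (8²), and two small hypothetical decoders `decoder_part` / `decoder_full`; the norm in Eq. 2 is taken here as a plain L1 distance, which is an assumption:

```python
import torch
import torch.nn.functional as F

def d_recon_loss(D, decoder_full, decoder_part, real, crop_frac=0.125):
    """Auto-encoding regularization on D (sketch). D(real) is assumed to return
    (logits, f1, f2) with f1 at 16x16 and f2 at 8x8; both decoders map
    features to 128x128 images. All names here are illustrative."""
    _, f1, f2 = D(real)
    h = f1.shape[-1]
    ch = max(1, int(h * crop_frac))                     # crop size on f1
    y0 = torch.randint(0, h - ch + 1, (1,)).item()
    x0 = torch.randint(0, h - ch + 1, (1,)).item()
    f1_crop = f1[..., y0:y0 + ch, x0:x0 + ch]
    s = real.shape[-1] // h                             # image-to-feature stride
    real_part = real[..., y0 * s:(y0 + ch) * s, x0 * s:(x0 + ch) * s]
    # reconstruct and compare everything at 128x128
    i_part = decoder_part(f1_crop)
    i_full = decoder_full(f2)
    t_part = F.interpolate(real_part, size=128, mode='bilinear', align_corners=False)
    t_full = F.interpolate(real, size=128, mode='bilinear', align_corners=False)
    return F.l1_loss(i_part, t_part) + F.l1_loss(i_full, t_full)
```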
Although our self-supervision strategy for D comes in the form of an auto-encoder (AE), this approach is fundamentally different from works trying to combine GAN and AE (Larsen et al., 2016;
Guo et al., 2019; Zhao et al., 2016; Berthelot et al., 2017). The latter works mostly train G as a decoder on a learned latent space from D, or treat the adversarial training with D as a supplementary loss besides the AE training. In contrast, our model is a pure GAN with a much simpler training schema. The auto-encoding training is only for regularizing D; G is not involved.
In sum, we employ the hinge version of the adversarial loss (Lim & Ye (2017); Tran et al. (2017)) to iteratively train our D and G. We find that different GAN losses make little performance difference, while the hinge loss computes the fastest:
$$\mathcal{L}_D = -\mathbb{E}_{x \sim I_{real}}\left[\min(0, -1 + D(x))\right] - \mathbb{E}_{\hat{x} \sim G(z)}\left[\min(0, -1 - D(\hat{x}))\right] + \mathcal{L}_{recons} \quad (3)$$
$$\mathcal{L}_G = -\mathbb{E}_{z \sim \mathcal{N}}\left[D(G(z))\right], \quad (4)$$
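Equations 3-4 are the standard hinge objectives; a short PyTorch sketch, using the identity $-\min(0, -1+t) = \max(0, 1-t)$:

```python
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    # L_D = -E[min(0, -1 + D(x))] - E[min(0, -1 - D(x_hat))]; L_recons is added separately
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    # L_G = -E[D(G(z))]
    return -d_fake.mean()
```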
4 EXPERIMENT
Datasets: We conduct experiments on multiple datasets with a wide range of content categories. At 256 × 256 resolution, we test on Animal-Face Dog and Cat (Si & Zhu, 2011), 100-Shot-Obama, Panda, and Grumpy-cat (Zhao et al., 2020). At 1024 × 1024 resolution, we test on Flickr-Faces-HQ (FFHQ) (Karras et al., 2019), Oxford-flowers (Nilsback & Zisserman, 2006), art paintings from WikiArt (wikiart.org), photographs of natural landscapes from Unsplash (unsplash.com), Pokemon (pokemon.com), anime face, skull, and shell. These datasets are designed to cover images with different characteristics: photo-realistic, graphic-illustration, and art-like images.
Metrics: We use two metrics to measure the models’ synthesis performance: 1) Fréchet Inception Distance (FID) (Heusel et al., 2017) measures the overall semantic realism of the synthesized images. For datasets with less than 1000 images (most only have 100 images), we let G generate 5000 images and compute FID between the synthesized images and the whole training set. 2) Learned perceptual similarity (LPIPS) (Zhang et al., 2018) provides a perceptual distance between two images. We use LPIPS to report the reconstruction quality when we perform latent space back-tracking on G given real images, and measure the auto-encoding performance. We find it unnecessary to involve other metrics, as FID is unlikely to be inconsistent with the others, given the notable performance gap between our model and the compared ones. For all the testings, we train the models 5 times with random seeds, and report the highest scores. The relative error is less than five percent on average.
Compared Models: We compare our model with: 1) the state-of-the-art (SOTA) unconditional model, StyleGAN2, 2) a baseline model ablated from our proposed one. Note that we adopt StyleGAN2 with recent studies from (Karras et al., 2020a; Zhao et al., 2020), including the model configuration and differentiable data-augmentation, for the best training on few-sample datasets. Since StyleGAN2 requires much more computing-cost (cc) to train, we derive an extra baseline model. In sum, we compare our model with StyleGAN2 on the absolute image synthesis quality regardless of cc, and use the baseline model for the reference within a comparable cc range.
The baseline model is the strongest performer that we integrated from various GAN techniques based on DCGAN (Radford et al., 2015): 1) spectral-normalization (Miyato et al., 2018), 2) exponentialmoving-average (Yazıcı et al., 2018) optimization on G, 3) differentiable-augmentation, 4) GLU (Dauphin et al., 2017) instead of ReLU in G. We build our model upon the baseline with the two proposed techniques: the skip-layer excitation module and the self-supervised discriminator.
Table 1 presents the normalized cc figures of the models on an Nvidia RTX 2080-Ti GPU, implemented using PyTorch (Paszke et al., 2017). Importantly, the slimmed StyleGAN2 with 1/4 of the parameters cannot converge on the tested datasets at 1024² resolution. We compare to the StyleGAN2 with 1/2 of the parameters (if not specifically mentioned) in the following experiments.
4.1 IMAGE SYNTHESIS PERFORMANCE
Few-shot generation: Collecting a large-scale image dataset is expensive, or even impossible, for a certain character, a genre, or a topic. On those few-shot datasets, a data-efficient model becomes especially valuable for the image generation task. In Table 2 and Table 3, we show that our model not only achieves superior performance on the few-shot datasets, but is also much more computationally efficient than the compared methods. We save checkpoints every 10k iterations during training and report the best FID among the checkpoints (reached only after at least 15 hours of training for StyleGAN2 on all datasets). Among the 12 datasets, our model performs the best on 10 of them.
Please note that, due to the VRAM requirement of StyleGAN2 when trained at 1024² resolution, we have to train the models in Table 3 on an RTX TITAN GPU. In practice, the 2080-Ti and TITAN share a similar performance, and our model runs in the same time on both GPUs.
Training from scratch vs. fine-tuning: Fine-tuning from a pre-trained GAN (Mo et al., 2020; Noguchi & Harada, 2019; Wang et al., 2020) has been the go-to method for the image generation task on datasets with few samples. However, its performance highly depends on the semantic consistency between the new dataset and the available pre-trained model. According to Zhao et al., fine-tuning performs worse than training from scratch in most cases, when the content of the new dataset strays away from the original one. We confirm the limitation of current fine-tuning methods in Table 2 and Table 3, where we fine-tune StyleGAN2 trained on FFHQ using the Freeze-D method from Mo et al.. Among all the tested datasets, only Obama and Skull favor the fine-tuning method, which makes sense since these two sets share the most similar content with FFHQ.
Module ablation study: We experiment with the two proposed modules in Table 2, where both SLE (skip) and decoding-on-D (decode) separately boost the model performance. This shows that the two modules are orthogonal to each other in improving the model performance, and the self-supervised D makes the biggest contribution. Importantly, the baseline model and StyleGAN2 diverge fast after the listed training time. In contrast, our model is less likely to mode-collapse on the tested datasets. Unlike the baseline model, which usually mode-collapses after being trained for 10 hours, our model maintains a good synthesis quality and does not collapse even after 20 hours of training. We argue that it is the decoding regularization on D that prevents the model from diverging.
[Figure 6: Latent space back-tracking and interpolation. Rows: Panda, Obama, FFHQ, Shell, Art; each row shows a real image on either side, with interpolations between the back-tracked latent vectors in between.]
Table 5: LPIPS of back-tracking with G.

                      Cat     Dog     FFHQ    Art
Resolution            256     256     1024    1024
Baseline @ 20k iter   2.113   2.073   2.589   2.916
Baseline @ 40k iter   2.513   2.171   2.583   2.812
Ours @ 40k iter       1.821   1.918   2.425   2.624
Ours @ 80k iter       1.897   1.986   2.342   2.601
Training with more images: For a more thorough evaluation, we also test our model on datasets with more sufficient training samples, as shown in Table 4. We train the full StyleGAN2 for around five days on the Art and Photograph datasets with a batch-size of 16 on two TITAN RTX GPUs, and use the latest official figures on FFHQ from Zhao et al.. In contrast, we train our model for only 24 hours, with a batch-size of 8 on a single 2080-Ti GPU. Specifically, for FFHQ with all 70000 images, we train our model with a larger batch-size of 32, to reflect the optimal performance of our model.
In this test, we follow the common practice of computing FID by generating 50k images and using the whole training set as the reference distribution. Note that StyleGAN2 has more than double the parameters compared to our model, and is trained with a much larger batch-size on FFHQ. These factors contribute to its better performance when given enough training samples and computing power. Meanwhile, our model keeps up well with StyleGAN2 across all tests with a considerably lower computing budget, showing compelling performance even on larger-scale datasets, and a consistent performance boost over the baseline model.
Qualitative results: The advantage of our model becomes clearer from the qualitative comparisons in Fig. 5. Given the same batch-size and training time, StyleGAN2 either converges slower or suffers from mode collapse. In contrast, our model consistently generates satisfactory images. Note that the best results from our model on Flower, Shell, and Pokemon take only three hours of training, and for the other three datasets, the best performance is achieved after eight hours of training. For StyleGAN2 on “shell”, “anime face”, and “Pokemon”, the images shown in Fig. 5 are already from the best epoch, and they match the scores in Table 2 and Table 3. For the remaining datasets, StyleGAN2’s quality gain with more training time is also limited.
4.2 MORE ANALYSIS AND APPLICATIONS
Testing mode collapse with back-tracking: From a well-trained GAN, one can take a real image and invert it back to a vector in the latent space of G, thus editing the image’s content by altering the back-tracked vector. Despite the various back-tracking methods (Zhu et al., 2016; Lipton & Tripathi, 2017; Zhu et al., 2020; Abdal et al., 2019), a well-generalized G is arguably just as important for good inversions. To this end, we show that our model, although trained on limited image samples, still achieves a desirable performance on real-image back-tracking.
In Table 5, we split the images from each dataset with a training/testing ratio of 9:1, and train G on the training set. We compute a reconstruction error between all the images from the testing set and their inversions from G, after the same update of 1000 iterations on the latent vectors (to prevent the vectors from straying far off the normal distribution). The baseline model’s performance gets worse with more training iterations, which reflects mode-collapse on G. In contrast, our model gives better reconstructions with consistent performance over more training iterations. Fig. 6 presents the back-tracked examples (left-most and right-most samples in the middle panel) given the real images.
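A minimal sketch of such latent back-tracking by gradient descent is given below; the use of plain MSE instead of the LPIPS reported in the paper, the Adam optimizer, and the `z_dim` attribute are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def backtrack(G, target, steps=1000, lr=0.1):
    """Invert real images into G's latent space by updating latent vectors."""
    z = torch.randn(target.shape[0], G.z_dim, requires_grad=True)  # z_dim assumed
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(G(z), target)  # paper reports LPIPS; MSE keeps this self-contained
        loss.backward()
        opt.step()
    return z.detach()
```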
The smooth interpolations from the back-tracked latent vectors also suggest little mode-collapse of our G (Radford et al., 2015; Zhao et al., 2020; Robb et al., 2020).
In addition, we show qualitative comparisons in appendix D, where our model maintains a good generation while StyleGAN2 and baseline are model-collapsed.
The self-supervision methods and generalization ability on D: Apart from the auto-encoding training for D, we show that D with other common self-supervision strategies also boosts GAN’s performance in our training settings. We test five self-supervision settings, as shown in Table 6, all of which bring a substantial performance boost compared to the baseline model. Specifically, setting-a refers to contrastive learning, in which we treat each real image as a unique class and let D classify them. For setting-b, we train D to predict the real image’s original aspect ratio, since images are reshaped to squares when fed to D. Setting-c is the method we employ in our model, which trains D as an encoder with a decoder to reconstruct real images. To better validate the benefit of self-supervision on D, all the tests are conducted on full training sets with 10000 images, with a batch-size of 8 to be consistent with Table 4. We also tried training with a larger batch-size of 16, and the results are consistent with those at batch-size 8.
Interestingly, according to Table 6, while setting-c performs the best, combining it with the other two settings leads to a clear performance downgrade. Similar behavior can be found with some other self-supervision settings; e.g., when following Chen et al. (2019) with a “rotation-predicting” task on the art-paintings and FFHQ datasets, we observe a performance downgrade even compared to the baseline model. We hypothesize that the reason is that auto-encoding forces D to pay attention to more areas of the input image, and thus to extract a more comprehensive feature-map describing the input image (for a good reconstruction). In contrast, a classification task does not guarantee that D covers the whole image. Instead, the task drives D to focus only on small regions, because the model can find class cues from small regions of the images. Focusing on limited regions (i.e., reacting to limited image patterns) is a typical overfitting behavior, which also widely happens to D in vanilla GANs. More discussion can be found in appendix B.
Style mixing like StyleGAN. With the channel-wise excitation module, our model gains the same functionality as StyleGAN: it learns to disentangle the images’ high-level semantic attributes (style and content) in an unsupervised way, from G’s conv-layers at different scales. The style-mixing results are displayed in Fig. 7, where the top three datasets are at 256 × 256 resolution, and the bottom three at 1024 × 1024 resolution. While StyleGAN2 struggles to converge on the bottom high-resolution datasets, our model successfully learns the style representations along the channel dimension on the “excited” layers (i.e., for feature-maps at 256×256 and 512×512 resolution). Please refer to appendix A and C for more information on SLE and style-mixing.
5 CONCLUSION
We introduce two techniques that stabilize the GAN training with an improved synthesis quality, given sub-hundred high-fidelity images and a limited computing resource. On thirteen datasets with a diverse content variation, we show that a skip-layer channel-wise excitation mechanism (SLE) and a self-supervised regularization on the discriminator significantly boost the synthesis performance of GAN. Both proposed techniques require minor changes to a vanilla GAN, enhancing GAN’s practicality with a desirable plug-and-play property. We hope this work can benefit downstream tasks of GAN and provide new study perspectives for future research. | 1. What are the strengths and weaknesses of the proposed framework for unconditional image generation?
2. Do you have any concerns regarding the skip-layer excitation module (SLE)?
3. How does the discriminator contribute to the autoencoding process?
4. Can you provide more details about the experimental setup and results?
5. How does the proposed method compare to other self-supervised methods?
6. Are there any limitations or potential improvements for the decoders used in the model?
7. How does the choice of sigmoid activation function in SLE impact the performance of the model?
8. Can you explain why cropping feature maps helps improve performance?
9. How do the training time and batch size affect the comparison between the proposed method and StyleGAN2?
10. What is the significance of smooth interpolations from backtracked latent vectors in relation to overfitting and mode collapse?
11. Can you provide a figure showing reconstruction results to demonstrate the quality of the decoders?
12. Are there any specific layers where style mixing is being performed, and how does it relate to semantic concepts?
13. How can the writing be improved in certain areas of the paper? |
In this paper, the authors introduce a new framework for unconditional image generation. They introduce a skip-layer excitation module (SLE) that allows gradient flow between activations of different spatial sizes. They also include a discriminator that is forced to autoencode the image. The authors claim that their framework is able to produce images of higher quality compared to SOTA with fewer resources.
Figures and tables are not great. Figure 3: the figure on the right flows from top to bottom to top, making it a little difficult to follow. Arrows sometimes correspond to an operation (upsampling) and sometimes to the workflow (the red arrows). Different fonts are used.
Figure 4: Similarly, hard to follow. Arrows aren’t straight. What’s the resolution of the crop?
Figure 5: StyleGAN2 and proposed method’s image sizes should be the same for easy qualitative comparison. Perhaps plot real images in a single row at the top. Current organization is difficult to follow.
Table 2: organization can be better. What is stylegan2 ft?
The choice of the sigmoid in SLE is not justified, especially since AdaIN[1], a very similar method, does not use a sigmoid.
No intuition is given for cropping the feature maps for the decoder. Why does it help?
Experiments in Table 2 and Table 3 are computed with different GPUs, which makes their results/training times incomparable.
Similarly, comparisons with StyleGAN2 have different training times and batch sizes. I understand the point the authors are trying to make is that with fewer resources, their model performs better. However, it is unclear whether their model would mode-collapse with longer training or perform worse on bigger batches. I would like to see a fair comparison and also a line plot of FID w.r.t. training time.
Under 4.2, the authors claim that a well-generalized G is the key to good inversions. This claim is not true, since Image2StyleGAN[2] showed that even a network with randomly initialized weights can achieve good inversion.
Under 4.2 the authors claim that “The smooth interpolations from the back-tracked latent vectors also suggest little overfitting and mode-collapse of our G.” I am aware that in StyleGAN[3] they claim that the smoothness of interpolation in latent space is correlated with how disentangled the latent space is. But I am not aware of work that shows interpolation is related to overfitting and mode collapse. A citation is needed.
How good are the decoders? A figure showing the reconstructions would be good.
For comparisons with other self-supervision methods, the experimental setup is missing.
On which layers is style mixing performed? In my opinion, the style-mixing results show more of a color change rather than anything semantically meaningful. StyleGAN showed that styles from different layers correspond to different semantic concepts. This seems to be missing in this model.
General comments: Writing can be improved. Some sentences can be rewritten to flow better: 1) “The biggest challenge lies on the overfitting issue on D, thus leading to mode-collapse on G, given the sparse training samples and a low computing budget.”
2) “elevated model depth” 3) “In contrast, our model maintains a good synthesis quality, even double the training time, thanks to the decoding regularization of D” 4) “at most an eight hours’ training is needed”
Overall, I think the idea of SLE is interesting, but clear comparisons and ablations should be done to validate its usefulness.
[1] Huang, Xun, and Serge Belongie. "Arbitrary style transfer in real-time with adaptive instance normalization." Proceedings of the IEEE International Conference on Computer Vision. 2017.
[2] Abdal, Rameen, Yipeng Qin, and Peter Wonka. "Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?." (2019).
[3] Karras, Tero, Samuli Laine, and Timo Aila. "A style-based generator architecture for generative adversarial networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2019. |
ICLR | Title
PatchDCT: Patch Refinement for High Quality Instance Segmentation
Abstract
High-quality instance segmentation has shown emerging importance in computer vision. Without any refinement, DCT-Mask directly generates high-resolution masks by compressed vectors. To further refine masks obtained by compressed vectors, we propose for the first time a compressed vector based multi-stage refinement framework. However, the vanilla combination does not bring significant gains, because changes in some elements of the DCT vector will affect the prediction of the entire mask. Thus, we propose a simple and novel method named PatchDCT, which separates the mask decoded from a DCT vector into several patches and refines each patch by the designed classifier and regressor. Specifically, the classifier is used to distinguish mixed patches from all patches, and to correct previously mispredicted foreground and background patches. In contrast, the regressor is used for DCT vector prediction of mixed patches, further refining the segmentation quality at boundary locations. Experiments show that our method achieves 2.0%, 3.2%, 4.5% AP and 3.4%, 5.3%, 7.0% Boundary AP improvements over Mask-RCNN on COCO, LVIS, and Cityscapes, respectively. It also surpasses DCT-Mask by 0.7%, 1.1%, 1.3% AP and 0.9%, 1.7%, 4.2% Boundary AP on COCO, LVIS and Cityscapes. Besides, the performance of PatchDCT is also competitive with other state-of-the-art methods.
1 INTRODUCTION
Instance segmentation (Li et al., 2017; He et al., 2017) is a fundamental but challenging task in computer vision, which aims to locate objects in images and precisely segment each instance. Mainstream instance segmentation methods follow the Mask-RCNN (He et al., 2017) paradigm and often segment instances in a low-resolution grid (Kang et al., 2020; Cheng et al., 2020c; Chen et al., 2019; Ke et al., 2021). However, limited by the coarse mask representation (i.e., 28 × 28 in Mask-RCNN), most of these algorithms cannot obtain high-quality segmentation results due to the loss of details. DCT-Mask (Shen et al., 2021) achieves considerable performance gains by predicting an informative 300-dimensional Discrete Cosine Transform (DCT) (Ahmed et al., 1974) vector compressed from a 128 × 128 mask. To further improve the segmentation results of DCT-Mask, we follow the refinement mechanism (Ke et al., 2022; Zhang et al., 2021; Kirillov et al., 2020) to correct the mask details in a multi-stage manner.
A straightforward implementation is to refine the 300-dimensional DCT vector multiple times. However, experimental results show that this naive implementation does not succeed, improving mask average precision (mAP) by only 0.1%, from 36.5% to 36.6%, on the COCO val set. The main reason for the limited improvement is that the full 300-dimensional DCT vector is not suitable for refining some important local regions, such as wrongly predicted regions and boundary regions in masks. As each pixel value in the mask is calculated from all elements of the DCT vector in the inference stage, once some elements in the DCT vector change, the entire mask will change, and even the correct segmentation areas may be affected; see Figure 1a.
∗Corresponding author is Kewei Liang.
To overcome the above issue, we propose a novel method, called PatchDCT, which divides the mask decoded from a DCT vector into several independent patches and refines each patch with a three-class classifier and a regressor, respectively. In detail, each patch is first classified by the classifier into one of three categories: foreground, background, or mixed, and previously mispredicted foreground and background patches are then corrected. Mixed patches are fed into the regressor to predict their corresponding n-dimensional (n ≪ 300) DCT vectors. In the inference stage, we use the Inverse Discrete Cosine Transform (IDCT) to decode the predicted vectors of the mixed patches as their refined masks, and merge them with the masks of the foreground and background patches to obtain a high-resolution mask. It is also worth emphasizing that each patch is independent, so a change in an element of a DCT vector only affects the corresponding mixed patch, as shown in Figure 1b. In general, patching allows the model to focus on the refinement of local regions, thereby continuously improving the quality of segmentation, resulting in significant performance improvements. Our main contributions are:
1) To our best knowledge, PatchDCT is the first compressed vector based multi-stage refinement detector to predict high-quality masks.
2) PatchDCT innovatively adopts the patching technique, which successfully allows the model to focus on the refinement of important local regions, fully exploiting the advantages of multi-stage refinement and high-resolution information compression.
3) Compared to Mask RCNN, PatchDCT improves about 2.0% AP and 3.4% Boundary AP on COCO, 3.2% AP and 5.3% Boundary AP on LVIS∗1, 4.5% AP and 7.0% Boundary AP on Cityscapes. It also achieves 0.7% AP and 0.9% Boundary AP on COCO, 1.1% AP and 1.7% Boundary AP on LVIS∗, 1.3% AP and 4.2% Boundary AP on Cityscapes over DCT-Mask.
4) Demonstrated by experiments on COCO test-dev, the performance of PatchDCT is also competitive with other state-of-the-art methods.
2 RELATED WORK
Instance segmentation. Instance segmentation assigns a pixel-level mask to each instance of interest. Mask-RCNN (He et al., 2017) generates bounding boxes for each instance with a powerful detector (Ren et al., 2015) and categorizes each pixel in bounding boxes as foreground or background to obtain 28 × 28 binary grid masks. Several methods that build on Mask-RCNN improve the quality of masks. Mask Scoring RCNN (Huang et al., 2019) learns to regress mask IoU to select better-quality instance masks. HTC (Chen et al., 2019) utilizes interleaved execution, mask information flow, and semantic feature fusion to improve Mask-RCNN. BMask RCNN (Cheng et al., 2020c) adds a boundary branch on Mask-RCNN to detect the boundaries of masks. Bounding Shape Mask R-CNN (Kang et al., 2020) improves performance on object detection and instance segmentation by its bounding shape mask branch. BCNet (Ke et al., 2021) uses two GCN (Welling & Kipf, 2016) layers to detect overlapping instances. Although these algorithms have yielded promising results, they are still restricted in the low-resolution mask representation and thus do not generate high-quality masks.
1COCO dataset with LVIS annotations
Towards high-quality instance segmentation. To take full advantage of high-resolution masks, DCT-Mask (Shen et al., 2021) learns to regress a 300-dimensional DCT vector compressed from a 128 × 128 mask. SOLQ (Dong et al., 2021) is a query-based method, which also encodes highresolution masks into DCT vectors and predicts the vectors by queries. Both of these methods generate high-resolution masks in a one-shot manner, without any refinement. Although they have made considerable gains, there is still potential for improvement. Multi-stage refinement is another common technique for obtaining high-quality masks. PointRend (Kirillov et al., 2020) adaptively selects several locations to refine, rendering 224×224 masks from 7×7 coarse masks. RefineMask (Zhang et al., 2021) introduces semantic segmentation masks as auxiliary inputs, and generates 112 × 112 masks in a multi-stage manner. Mask Transfiner (Ke et al., 2022) represents image regions as a quadtree and corrects the errors of error-prone tree nodes to generate 112× 112 masks. PBR (Tang et al., 2021) is a post-processing method that refines patches along the mask boundaries. Unlike these refinement methods based on the binary grid mask representation, our method is based on compressed vectors.
Generating high-quality masks is also one of the main concerns in the field of semantic segmentation. CRFasRNN (Zheng et al., 2015) connects CRF (Krähenbühl & Koltun, 2011) with FCN (Long et al., 2015), formulating mean-field approximate inference for the CRF with Gaussian pairwise potentials as Recurrent Neural Networks. DeepLab (Chen et al., 2017) effectively improves the quality of masks by using atrous convolution for receptive field enhancement, ASPP for multiscale segmentation, and CRF for boundary refinement. SegModel (Shen et al., 2017) utilizes a guidance CRF to improve the segmentation quality. CascadePSP (Cheng et al., 2020b) trains independently a refinement module designed in a cascade fashion. RGR (Dias & Medeiros, 2018) is a post-processing module based on region growing. In contrast, PatchDCT can obtain high-quality segmentation results in an end-to-end learning manner without any additional post-processing.
3 METHODS
In this section, we show the difficulties in refining DCT vectors and then introduce PatchDCT to overcome these difficulties and generate finer masks.
3.1 DIFFICULTIES IN REFINING DCT VECTORS
Given a K × K mask, DCT-Mask (Shen et al., 2021) encodes the mask $M_{K\times K}$ into the frequency domain $M^f_{K\times K}$:

$$M^f_{K\times K}(u, v) = \frac{2}{K}\, C(u)C(v) \sum_{x=0}^{K-1}\sum_{y=0}^{K-1} M_{K\times K}(x, y)\,\cos\frac{(2x+1)u\pi}{2K}\,\cos\frac{(2y+1)v\pi}{2K}, \quad (1)$$
where $C(w) = 1/\sqrt{2}$ for $w = 0$ and $C(w) = 1$ otherwise. Non-zero values are concentrated in the upper-left corner of $M^f_{K\times K}$, which are low-frequency elements that contain most of the information of the mask. The N-dimensional DCT vector is obtained by zigzag scanning (Al-Ani & Awad, 2013) $M^f_{K\times K}$ and selecting the top-N elements.
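The encoding step can be sketched with an orthonormal 2-D DCT (which matches the normalization in Eq. 1) and one standard zigzag convention; the exact traversal direction within each anti-diagonal is an assumed convention detail:

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(k):
    """(u, v) coordinates of a k x k grid in zigzag (low-to-high-frequency) order."""
    return sorted(((u, v) for u in range(k) for v in range(k)),
                  key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 else p[0]))

def encode_mask(mask, n):
    """Top-n zigzag-scanned coefficients of the orthonormal 2-D DCT (Eq. 1)."""
    coeffs = dctn(mask.astype(np.float32), norm='ortho')
    return np.array([coeffs[u, v] for u, v in zigzag_indices(mask.shape[0])[:n]])

mask = np.zeros((128, 128)); mask[32:96, 32:96] = 1.0  # a toy 128x128 mask
vec = encode_mask(mask, n=300)                          # 300-dim DCT vector
```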
In the inference stage, $M^f_{K\times K}$ is recovered by filling the remaining elements with zeros. Then each pixel in the mask $M_{K\times K}$ is calculated as follows:

$$M_{K\times K}(x, y) = \frac{2}{K}\, C(x)C(y) \sum_{u=0}^{K-1}\sum_{v=0}^{K-1} M^f_{K\times K}(u, v)\,\cos\frac{(2x+1)u\pi}{2K}\,\cos\frac{(2y+1)v\pi}{2K}, \quad (2)$$
Equation 2 reveals that each pixel in the mask $M_{K\times K}$ is calculated from all elements of $M^f_{K\times K}$. When refining the N-dimensional DCT vector, once an element is incorrectly changed, all pixels in $M_{K\times K}$ will be affected, even those in correctly segmented regions, as also shown in Figure 1. Therefore, when fixing some specific error regions (e.g. borders), it is difficult to obtain the correct refinement result unless all the elements in the DCT vector are refined correctly. In practice, however, it is almost impossible to predict all N elements correctly.
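Continuing the sketch above (`zigzag_indices` and `vec` are reused), decoding and this sensitivity issue can be illustrated directly: perturbing a single low-frequency coefficient changes every pixel of the decoded mask:

```python
import numpy as np
from scipy.fft import idctn

def decode_vector(vec, k=128):
    """Inverse of encode_mask: refill the zigzag positions with the vector,
    pad the rest with zeros, and apply the inverse DCT (Eq. 2)."""
    coeffs = np.zeros((k, k), dtype=np.float32)
    for val, (u, v) in zip(vec, zigzag_indices(k)):
        coeffs[u, v] = val
    return idctn(coeffs, norm='ortho')

m0 = decode_vector(vec)
vec[5] += 1.0                     # perturb one low-frequency coefficient
m1 = decode_vector(vec)
print(np.abs(m1 - m0).min() > 0)  # True: every pixel of the mask is affected
```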
3.2 PATCHDCT
To prevent the above issue when refining the global DCT vector, we propose a method named PatchDCT, which divides the K × K mask into patches of size m × m and refines each patch respectively. The overall architecture of PatchDCT is shown in Figure 2; it mainly consists of a three-class classifier and a DCT vector regressor. Specifically, the classifier is used to identify mixed patches and refine foreground and background patches. Each mixed patch is then refined by an n-dimensional DCT vector, which is obtained from the DCT vector regressor.
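Before detailing the two heads, the sketch below shows how their outputs could be assembled at inference; the label encoding (0 = background, 1 = foreground, 2 = mixed) and the `decode_patch` helper (a patch-level IDCT) are hypothetical names used for illustration:

```python
import torch

def assemble_mask(coarse_mask, labels, patch_vecs, decode_patch, m=8):
    """Merge per-patch predictions into a full-resolution mask (sketch).
    labels: (P,) over the P patches in raster order; patch_vecs: one n-dim
    DCT vector per mixed patch, in the same order."""
    k = coarse_mask.shape[-1]
    g = k // m                                 # patches per side (14 for 112/8)
    out = coarse_mask.clone().view(g, m, g, m)
    vec_it = iter(patch_vecs)
    for idx in range(g * g):
        r, c = divmod(idx, g)
        if labels[idx] == 0:
            out[r, :, c, :] = 0.0              # corrected background patch
        elif labels[idx] == 1:
            out[r, :, c, :] = 1.0              # corrected foreground patch
        else:
            out[r, :, c, :] = decode_patch(next(vec_it))  # IDCT of n-dim vector
    return out.view(k, k)
```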
Three-class classifier. We define the patches with only foreground pixels and only background pixels as foreground patches and background patches, respectively, while the others are mixed patches. The task of differentiating patch categories is accomplished by a fully convolutional three-class classifier. Moreover, the mispredicted initial foreground and background patches are corrected by the classifier. We utilize a three-class classifier instead of a DCT vector regressor to refine foreground and background patches because of the particular form of their DCT vectors. For background patches, simply from Equation 1, all elements of DCT vectors are zero. For foreground patches, all elements are zero except for the first element named DC component (DCC), which is equal to the patch size m. The mathematical proof of the DCT vector form for the foreground patches is shown in the Appendix. DCT vector elements of foreground and background
patches are discrete data that are more suitable for classification. Referring to Figure 3, DCT vector elements of mixed patches are continuously distributed and therefore more suitable for regression.
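For training the classifier, ground-truth patch labels can be derived directly from the binary mask, as in this sketch (the 0/1/2 label encoding is an assumed convention):

```python
import torch

def patch_labels(gt_mask, m=8):
    """Label each m x m patch of a binary K x K mask: 0=background, 1=foreground, 2=mixed."""
    k = gt_mask.shape[-1]
    g = k // m
    patches = gt_mask.view(g, m, g, m).permute(0, 2, 1, 3).reshape(g * g, m * m)
    frac = patches.float().mean(dim=1)                   # foreground fraction per patch
    labels = torch.full((g * g,), 2, dtype=torch.long)   # default: mixed
    labels[frac == 0.0] = 0                              # all-background patch
    labels[frac == 1.0] = 1                              # all-foreground patch
    return labels
```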
Regressor. Similar to the phenomenon described in DCT-Mask (Shen et al., 2021), refining high-resolution masks with the binary grid mask representation introduces performance degradation due to the high training complexity (refer to DCT-Mask (Shen et al., 2021) for more details). Learning to regress informative DCT vectors eases the training process. The specific experimental results are discussed in the experiments section (Sec. 4).
The regressor is trained and inferred for mixed patches only. It is actually a boundary attention module, since the mixed patches are distributed exactly along the boundary of the instance mask. For each mixed patch, the regressor predicts an n-dimensional DCT vector, which is very short but highly informative. Table 1 shows mask AP obtained by different lengths of ground truth patch DCT
vectors using the Mask-RCNN framework on COCO val2017. The low-dimensional DCT vectors already provide sufficient ground-truth information.
3.3 MULTI-STAGE REFINEMENT AND LOSS FUNCTION
PatchDCT is a module where the input and output masks have the same resolution. Thus, the mask generated by a PatchDCT module can be fed into another PatchDCT module for further refinement, as shown in the upper right corner of Figure 2.
With multi-stage refinement, the loss function of the mask branch is defined as

$$\mathcal{L}_{mask} = \lambda_0 \mathcal{L}_{dct_N} + \sum_{s>0} \lambda_s \left( \mathcal{L}^s_{cls_{patch}} + \mathcal{L}^s_{dct_n} \right), \quad (3)$$

where $\lambda_0$ and $\lambda_s$ are the loss weights. The first term $\mathcal{L}_{dct_N}$ of Equation 3 is the loss in predicting the N-dimensional vectors of the entire masks (Shen et al., 2021):

$$\mathcal{L}_{dct_N} = \frac{1}{N} \sum_i^N R(\hat{V}_i - V_i), \quad (4)$$

where $V_i$ and $\hat{V}_i$ are the i-th elements of the ground-truth and prediction vectors respectively, $R$ is the loss function, and $N$ is the length of the vectors. The classification loss $\mathcal{L}^s_{cls_{patch}}$ of the s-th stage is the cross-entropy loss over the three classes. The regression loss $\mathcal{L}^s_{dct_n}$ of the s-th stage is

$$\mathcal{L}^s_{dct_n} = \frac{1}{N_m} \sum_k^{N_{all}} \left[ p_k \left( \frac{1}{n} \sum_i^n R(\hat{V}_i - V_i) \right) \right], \quad (5)$$

where $N_m$ and $N_{all}$ are the numbers of mixed patches and all patches respectively, and $n$ is the length of the patch DCT vectors. If the k-th patch is a mixed patch, $p_k = 1$; otherwise $p_k = 0$, indicating that only DCT vectors of mixed patches are regressed.
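A sketch of the per-stage terms of Equations 3-5, assuming L1 as the regression loss R and the 0/1/2 patch-label convention from above:

```python
import torch
import torch.nn.functional as F

def stage_loss(pred_logits, pred_vecs, gt_labels, gt_vecs):
    """One refinement stage: cross-entropy over the three patch classes plus
    L1 regression (Eq. 5) restricted to mixed patches. Shapes: logits (P, 3),
    vecs (P, n), labels (P,)."""
    cls_loss = F.cross_entropy(pred_logits, gt_labels)
    mixed = gt_labels == 2                       # p_k = 1 only for mixed patches
    if mixed.any():
        # mean over (N_m * n) elements matches the 1/N_m and 1/n factors of Eq. 5
        reg_loss = F.l1_loss(pred_vecs[mixed], gt_vecs[mixed])
    else:
        reg_loss = pred_vecs.sum() * 0.0         # keep the graph when no mixed patch
    return cls_loss + reg_loss
```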
4 EXPERIMENTS
4.1 DATASETS
We evaluate our method on two standard instance segmentation datasets: COCO (Lin et al., 2014) and Cityscapes (Cordts et al., 2016). COCO provides 80 categories with instance-level annotations. Cityscapes is a dataset focused on urban street scenes. It contains 8 categories for instance segmentation, providing 2,975, 500 and 1,525 high-resolution images (1, 024 × 2, 048) for training, validation, and test respectively.
We report the standard mask AP metric and the Boundary AP (Cheng et al., 2021) metric (APB), the latter focusing on evaluating the boundary quality. Following (Kirillov et al., 2020), we also report AP∗ and AP∗B , which evaluate COCO val2017 with high-quality annotations provided by LVIS (Gupta et al., 2019). Note that for AP∗ and AP∗B , models are still trained on COCO train2017.
4.2 IMPLEMENT DETAILS
We build the model based on DCT-Mask (Shen et al., 2021). We first decode the 300-dimensional DCT vector to obtain a 112 × 112 mask. This mask is then fed into PatchDCT, together with a 42 × 42 feature map cropped from FPN-P2 (Lin et al., 2017). PatchDCT refines each patch of the mask and outputs a 112 × 112 mask. We set the patch size to 8 and each patch is represented by a 6-dimensional DCT vector. Our model is class-specific by default, i.e. one mask per class. L1 loss and cross-entropy loss are used for DCT vector regression and patch classification respectively. By default, only one PatchDCT module is used, and both λ0 and λ1 are set to 1. We implement our algorithm based on Detectron2 (Wu et al., 2019), and all hyperparameters remain the same as Mask-RCNN in Detectron2. Unless otherwise stated, 1× learning schedule is used.
4.3 MAIN RESULTS
Results on COCO. We compare PatchDCT with Mask-RCNN and DCT-Mask over different backbones. As shown in Table 2, on COCO val2017 with R50-FPN, PatchDCT improves 2.0% AP and 3.4% APB over Mask-RCNN. Compared with DCT-Mask, PatchDCT also achieves 0.7% AP and 0.9% APB improvements. When evaluating with LVIS annotations, PatchDCT yields significant gains of 3.2% AP∗ and 5.3% AP∗B over Mask-RCNN, and 1.1% AP∗ and 1.7% AP∗B over DCT-Mask. Consistent improvements are observed on R101-FPN and RX101-FPN. Since AP∗ and AP∗B are evaluated with high-quality annotations, the significant improvements on these two metrics emphasize the superiority of our model. In addition, considering the improvement in mask quality, the cost in runtime is almost negligible, i.e. about 1.5 FPS degradation on the A100 GPU.
We also compare the performance of PatchDCT with state-of-the-art methods of instance segmentation on COCO test-dev2017. With the RX101 backbone, PatchDCT surpasses PointRend (Kirillov et al., 2020) and RefineMask (Zhang et al., 2021), which are both multi-stage refinement methods based on binary grid masks, by 0.8% and 0.4%. PatchDCT also achieves comparable performance with Mask Transfiner (Ke et al., 2022) with the R101 backbone. However, Mask Transfiner runs at 5.5 FPS on the A100 GPU, which is almost two times slower than PatchDCT. With the Swin-B backbone, PatchDCT outperforms Mask Transfiner (Ke et al., 2022) by 0.7% AP. It is worth noting that PatchDCT is faster than most multi-stage refinement methods since only one refinement pass is required. These results demonstrate the effectiveness of PatchDCT in generating high-quality masks.
Results on Cityscapes. We also report results on the Cityscapes val set in Table 3. In comparison with Mask-RCNN, PatchDCT obtains 4.5% AP and 7.0% APB improvements. It also outperforms DCT-Mask by 1.3% AP and 4.2% APB. Compared with other SOTA methods, PatchDCT is still competitive. PatchDCT achieves 0.8%, 1.4%, 2.1% APB gains over Mask Transfiner (Ke et al., 2022), RefineMask (Zhang et al., 2021) and PointRend (Kirillov et al., 2020) respectively. The large difference in APB highlights the ability of PatchDCT to generate masks with more detailed borders.
4.4 ABLATION EXPERIMENTS
We conduct extensive ablation experiments to further analyze PatchDCT. We adopt R50-FPN as the backbone and evaluate the performance on COCO val2017.
Simply refine DCT vectors. Simply refining the global DCT vectors does not succeed. To demonstrate this, we design a model named ‘Two-stage DCT’, which regresses a new 300-dimensional DCT vector after fusing the initial mask with a 42×42 feature map from FPN-P2. The refined mask is decoded from the final DCT vector. From Table 5, Two-stage DCT achieves only marginal improvements over DCT-Mask, since changes in some elements of the global DCT vector may affect the entire mask, even the correctly segmented areas. PatchDCT leverages the patching mechanism to overcome this issue and outperforms Two-stage DCT by 1.0% AP∗B.
Binary grid refinement. Refining masks with the binary grid mask representation can be considered the extreme patching mechanism, which treats each pixel as a patch. However, simply refining high-resolution masks with the binary grid mask representation introduces performance degradation. We construct an experiment named ‘binary grid refinement’, which predicts another 112×112 mask with the binary grid mask representation after fusing the initial mask as well as a 56×56 feature map from FPN-P2. Experimental results in Table 5 show that the performance of binary grid refinement is worse than that of PatchDCT, and even worse than that of DCT-Mask. This is because binary grid refinement requires the refinement module to learn 12544 (112 × 112) outputs, while PatchDCT only needs to learn at most 1176 (14 × 14 × 6) outputs, which reduces the training complexity.

Effectiveness of three-class classifier. In addition to identifying mixed patches, a more important role of the three-class classifier is to correct previously mispredicted foreground and background patches. To validate the effectiveness of refining non-mixed patches (i.e. foreground and background patches), we construct a binary-class classifier, which only classifies patches as mixed or non-mixed and keeps the masks of non-mixed patches unchanged. As shown in Table 6, the binary-class classifier is inferior to our three-class classifier by 0.3% AP and 0.4% AP∗, since the refinement of previously incorrectly predicted foreground and background patches is ignored.
Refinement of foreground and background patches can also be accomplished with the DCT vector regressor. However, as discussed in Sec. 3.2, the DCT vector elements of the non-mixed patches only involve zero and m, making it ineffective to learn the DCT vectors of all patches directly. As shown in Table 7, the performance of the method refining non-mixed regions with the DCT vector regressor is lower than that of the method using a three-class classifier by 0.6% AP and 1.2% AP∗. Note that APB and AP∗B decrease by 0.9% and 1.5% respectively, reflecting that learning to regress non-mixed patches also affects the prediction of boundaries.

Table 7: Mask AP obtained by PatchDCT with the regressor applied to all patches vs. only mixed patches, on COCO val2017. The best results are obtained by regressing only the mixed patches.

Regressor    AP    APS   APM   APL   APB   AP∗   AP∗B
all          36.6  17.7  39.5  52.2  23.6  39.6  28.6
mixed        37.2  18.3  39.5  54.2  24.5  40.8  30.1

Table 9: Mask AP obtained by models with different dimensions of patch DCT vectors on COCO val2017. The model with 6-dimensional vectors achieves the best performance.

Patch Dim.   AP    APS   APM   APL   APB   AP∗   AP∗B
3            36.8  17.6  39.2  53.5  24.0  40.5  29.5
6            37.2  18.3  39.5  54.1  24.5  40.8  30.1
9            36.9  17.1  39.3  53.3  24.3  40.6  30.1
Effectiveness of the regressor. The regressor is actually a boundary attention module that generates finer boundaries. As shown in Table 8, after removing the regressor and keeping only the classifier, the overall AP only decreases by 0.5% , but APB and AP∗B decrease by 1.2% and 3.0% respectively. The phenomenon demonstrates the importance of the regressor for generating finer boundaries.
Dimension of patch DCT vectors. We look for an appropriate patch DCT vector length to encode each mixed patch. Results in Table 9 show that the model with 6-dimensional patch DCT vectors obtains the best performance. As also shown in Table 1, the 6-dimensional patch DCT vector already contains most of the ground-truth information. As more elements bring only very little incremental information, regressing these elements does not improve the prediction.
Multi-stage PatchDCT. We compare the performance of the multi-stage procedure in Table 10. One-stage PatchDCT already provides high-quality masks, while two-stage PatchDCT further improves the prediction. However, the computational cost of the mask branch nearly doubles for only tiny improvements in mask quality, so we choose to use one-stage PatchDCT in our paper.
Size of the patch. We evaluate the influence of patch size in Table 11. We keep the resolution of the mask and the size of the input feature map unchanged and compare the model performance with different patch sizes. PatchDCT with 8 × 8 patches performs better than the other settings.

Size of the feature map. We compare the model with different sizes of the feature map used in PatchDCT. Table 12 illustrates that the performance saturates with the 42 × 42 feature map.

Feature map from FPN. We evaluate PatchDCT with the feature map cropped from all pyramid levels or from P2 only. Table 13 shows that PatchDCT benefits from the finer feature map of P2.
4.5 QUALITATIVE RESULTS
In Figure 4 we visualize some outputs of PatchDCT on COCO val2017. PatchDCT generates finer boundaries among different instances, such as the shoulder of the person (the first column), the contour of the kite (the third column), and the arm of the girl (the fourth column). PatchDCT obtains masks of higher quality in comparison with Mask-RCNN and DCT-Mask.
5 CONCLUSIONS
In this work, we propose PatchDCT, a compressed-vector-based method towards high-quality instance segmentation. In contrast to previous methods, PatchDCT refines each patch of the mask separately and utilizes patch DCT vectors to compress boundaries that are full of details. By using a classifier to refine foreground and background patches, and predicting an informative low-dimensional DCT vector for each mixed patch, PatchDCT generates a high-resolution mask with fine boundaries. PatchDCT is designed with a simple and clean structure, which allows the method to obtain high-quality segmentation at an almost negligible cost in speed compared to Mask-RCNN and DCT-Mask. We hope that our approach will benefit future studies in instance segmentation.
A MORE QUALITATIVE RESULTS
A.1 TWO-STAGE DCT
We visualize some outputs of two-stage DCT and compare them with DCT-Mask to demonstrate the disadvantages of simply combining DCT-Mask with multi-stage refinement.
As shown in Figure 5, in two-stage DCT, areas that were previously predicted correctly may be corrupted during refinement. This further illustrates the difficulty of refining DCT vectors directly.
A.2 QUALITATIVE RESULTS ON CITYSCAPES
We show some qualitative results on Cityscapes in Figure 6. In comparison with Mask-RCNN and DCT-Mask, PatchDCT generates finer boundaries that greatly improve the quality of masks.
B MORE TECHNICAL DETAILS
We prove that, for foreground patches, all elements of the DCT vector except the DCC are zero.
It can be derived from Equation 1 that the DCC of a foreground patch equals the patch size m: substituting u = v = 0 and using M_{m \times m}(x, y) = 1 for every pixel gives

\mathrm{DCC} = \frac{1}{m} \sum_{x=0}^{m-1} \sum_{y=0}^{m-1} M_{m \times m}(x, y) = m. \quad (6)

For an m \times m patch, Equation 1 factorizes as

M^{f}_{m \times m}(u, v) = \frac{2}{m} C(u) C(v) \left( \sum_{x=0}^{m-1} A(x, u) \right) \left( \sum_{y=0}^{m-1} A(y, v) \right), \quad (7)

where A(a, b) = \cos \frac{(2a+1) b \pi}{2m}.

If u is odd,

A(m-1-x, u) = \cos \frac{(2(m-1-x)+1) u \pi}{2m} = \cos \left( -\frac{(2x+1) u \pi}{2m} + u \pi \right) = -A(x, u), \quad (8)

so the terms of \sum_{x=0}^{m-1} A(x, u) cancel in pairs and the sum vanishes.

If u is even and larger than zero, then by Euler's formula

e^{i\theta} = \cos\theta + i \sin\theta, \quad (9)

we have

\sum_{x=0}^{m-1} A(x, u) = \sum_{x=0}^{m-1} \cos \frac{(2x+1) u \pi}{2m} = \mathrm{Re} \left( \sum_{x=0}^{m-1} e^{\frac{(2x+1) u \pi i}{2m}} \right) = \mathrm{Re} \left( e^{\frac{u \pi i}{2m}} \cdot \frac{1 - e^{u \pi i}}{1 - e^{\frac{u \pi i}{m}}} \right) = 0, \quad (10)

since for even u,

e^{u \pi i} = \cos(u\pi) + i \sin(u\pi) = 1. \quad (11)

Combining the two cases, we obtain

\sum_{x=0}^{m-1} A(x, u) = 0, \quad \forall u \neq 0. \quad (12)

Therefore, for foreground patches,

M^{f}_{m \times m}(i, j) = \begin{cases} m, & i = 0, j = 0, \\ 0, & \text{otherwise}. \end{cases} \quad (13)

This shows that, apart from the DCC, all elements of the DCT vectors of foreground patches are zero.
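As a sanity check on the derivation, the identity of Equation 12 can be verified numerically; the following small script (ours, not part of the paper) checks it for several patch sizes:

import numpy as np

# Spot-check Equation 12: sum_{x=0}^{m-1} A(x, u) = 0 for every u != 0
for m in (4, 8, 16):
    for u in range(1, m):
        s = sum(np.cos((2 * x + 1) * u * np.pi / (2 * m)) for x in range(m))
        assert abs(s) < 1e-10, (m, u, s)
print("Equation 12 holds for all tested patch sizes and frequencies")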
C LIMITATIONS AND FUTURE OUTLOOK
During visualization, we observe that the model may generate masks with holes. These problems usually occur in semantically ambiguous areas, and rarely in the center of the mask where the semantic information is very clear. We show some typical bad cases in Figure 7. In these cases, the model either misclassifies the patches or generates imprecise patch DCT vectors, resulting in disconnected masks. We leave better classification and more precise regression of DCT vectors as future work. In addition, we plan to carry out further verification in other, more challenging domains, such as aerial images and medical images. Taking aerial images as an example, this field still focuses on object detection (Yang et al., 2019; 2021a;b;c; 2023), especially oriented object detection (Yang & Yan, 2022; Zhou et al., 2022; Yang et al., 2022), and lacks exploration of more precise localization tasks, i.e., instance segmentation.

1. What is the focus and contribution of the paper on instance segmentation?
2. What are the strengths of the proposed approach, particularly in its patch-based refinement mechanism?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns about the paper that the reviewer has but did not explicitly mention?

Summary Of The Paper
The paper proposes PatchDCT for high-quality instance segmentation. Different from DCT-Mask, the whole image mask is divided into different patches. Each patch is refined individually by the classifier and regressor. The refinement is performed in a multi-stage. Improvements on mask quality are observed on COCO, Cityscapes and LVIS.
Strengths And Weaknesses
Strengths:
Figure 1 shows the motivation of the method clearly. Although inspired by previous methods (such as PointRend and Mask Transfiner), dividing the whole mask into three classes of patches and refining the mixed patches in a multi-stage manner is a good strategy.
Extensive ablation experiments and comparisons, with state-of-the-art results (although the improvement is limited).
Improved results compared to DCT-Mask, with only a small decrease in speed.
Good writing and a clear structure, which make the paper easy to understand.
Weaknesses:
Can the paper describe the speed advantages over previous SOTA methods in more detail? What is the speed of one-stage PatchDCT and two-stage PatchDCT, respectively?
What are the typical failure/bad cases of the proposed methods?
Clarity, Quality, Novelty And Reproducibility
Clear writing and good paper quality; extensive experiments with effective improvement. |
Summary Of The Paper
The paper proposes a DCT-vector-based multi-stage refinement framework named PatchDCT, which contains a classifier and a regressor. PatchDCT first separates the original coarse mask into several patches. The classifier is used to distinguish mixed patches which consist of both foreground and background pixels. Then, these mixed patches are refined by the regressor with DCT-vector representation.
Strengths And Weaknesses
Strengths:
The paper clearly identifies that directly refining the global DCT vectors is unsuitable and proposes a patch-based method to overcome this issue.
Foreground and background patches have special DCT vector forms; refining them with a three-class classifier rather than a general DCT regressor is theoretically sound.
Compared with many other methods, this work achieves SOTA results, and ablation studies are sufficient.
Weaknesses:
There are only runtime comparisons with Mask-RCNN and DCT-Mask. Please add more experiments comparing the efficiency of PatchDCT with other refinement models.
In this paper, the result in Table 1 suggests that using a 1x1 patch with a 1-dimensional DCT vector gives the best performance (57.6 AP). But when a 1x1 (single-pixel) patch is encoded with the DCT, the result should be the value of the pixel itself. What is the difference between this method and directly refining the mask with a 1x1 conv when the patch size is 1x1? I think this result is inconsistent with both DCT-Mask and "binary grid refinement". According to DCT-Mask (Table 1), directly increasing the resolution decreases the mask AP, which is the main reason they use DCT encoding.
Clarity, Quality, Novelty And Reproducibility
For one thing, the work starts from the issue of directly refining the global DCT vectors, proposes a patch-based method to solve it, and finally achieves good performance. It is logically fluent and complete. The experimental results across multiple metrics on three popular instance segmentation datasets are also clearly presented.
For another, as a refinement method, the work clearly shows that very short compressed vectors (such as 6-dimensional ones) can carry enough information for segmentation. This is a valuable result for the refinement task.
ICLR | Title
PatchDCT: Patch Refinement for High Quality Instance Segmentation
Abstract
High-quality instance segmentation has shown emerging importance in computer vision. Without any refinement, DCT-Mask directly generates high-resolution masks by compressed vectors. To further refine masks obtained by compressed vectors, we propose for the first time a compressed vector based multi-stage refinement framework. However, the vanilla combination does not bring significant gains, because changes in some elements of the DCT vector will affect the prediction of the entire mask. Thus, we propose a simple and novel method named PatchDCT, which separates the mask decoded from a DCT vector into several patches and refines each patch by the designed classifier and regressor. Specifically, the classifier is used to distinguish mixed patches from all patches, and to correct previously mispredicted foreground and background patches. In contrast, the regressor is used for DCT vector prediction of mixed patches, further refining the segmentation quality at boundary locations. Experiments on COCO show that our method achieves 2.0%, 3.2%, 4.5% AP and 3.4%, 5.3%, 7.0% Boundary AP improvements over Mask-RCNN on COCO, LVIS, and Cityscapes, respectively. It also surpasses DCT-Mask by 0.7%, 1.1%, 1.3% AP and 0.9%, 1.7%, 4.2% Boundary AP on COCO, LVIS and Cityscapes. Besides, the performance of PatchDCT is also competitive with other state-of-the-art methods.
1 INTRODUCTION
Instance segmentation (Li et al., 2017; He et al., 2017) is a fundamental but challenging task in computer vision, which aims to locate objects in images and precisely segment each instance. The mainstream instance segmentation methods follow Mask-RCNN (He et al., 2017) paradigm, which often segment instances in a low-resolution grid (Kang et al., 2020; Cheng et al., 2020c; Chen et al., 2019; Ke et al., 2021). However, limited by the coarse mask representation ( i.e. 28 × 28 in Mask-RCNN), most of these algorithms cannot obtain high-quality segmentation results due to the loss of details. DCT-Mask (Shen et al., 2021) achieves considerable performance gain by predicting an informative 300-dimensional Discrete Cosine Transform (DCT) (Ahmed et al., 1974) vector compressed from a 128 × 128 mask. To further improve the segmentation results of DCTMask, we follow the refine mechanism (Ke et al., 2022; Zhang et al., 2021; Kirillov et al., 2020) to correct the mask details in a multi-stage manner.
A straightforward implementation is to refine the 300-dimensional DCT vector multiple times. However, experimental results show that this naive implementation does not succeed, which improves mask average precision (mAP) by 0.1% from 36.5% to 36.6% on COCO val set. The main reason for the limited improvement is that the full 300-dimensional DCT vector is not suitable for refining some important local regions, such as wrong predicted regions and boundary regions in masks. As each pixel value in the mask is calculated by all elements of the DCT vector in the inference stage, once some elements in the DCT vector change, the entire mask will change, and even the correct segmentation areas may be affected, refer to Figure 1a.
∗Corresponding author is Kewei Liang.
To overcome the above issue, we propose a novel method, called PatchDCT, which divides the mask decoded from a DCT vector into several independent patches and refines each patch with a threeclass classifier and a regressor, respectively. In detail, each patch is first classified into one of three categories: foreground, background, and mixed by the classifier, and then previously mispredicted foreground and background patches will be corrected. Mixed patches are fed into the regressor to predict their corresponding n-dimensional (n ≪ 300) DCT vectors. In the inference stage, we use Inverse Discrete Cosine Transform (IDCT) to decode the predicted vectors of the mixed patches as their refined masks, and merge them with the masks of other foreground and background patches to obtain a high-resolution mask. It is also worth emphasizing that each patch is independent, so the element change of a DCT vector will only affect the corresponding mixed patch, as shown in Figure 1b. In general, patching allows the model to focus on the refinement of local regions, thereby continuously improving the quality of segmentation, resulting in significant performance improvements. Our main contributions are:
1) To our best knowledge, PatchDCT is the first compressed vector based multi-stage refinement detector to predict high-quality masks.
2) PatchDCT innovatively adopts the patching technique, which successfully allows the model to focus on the refinement of important local regions, fully exploiting the advantages of multi-stage refinement and high-resolution information compression.
3) Compared to Mask RCNN, PatchDCT improves about 2.0% AP and 3.4% Boundary AP on COCO, 3.2% AP and 5.3% Boundary AP on LVIS∗1, 4.5% AP and 7.0% Boundary AP on Cityscapes. It also achieves 0.7% AP and 0.9% Boundary AP on COCO, 1.1% AP and 1.7% Boundary AP on LVIS∗, 1.3% AP and 4.2% Boundary AP on Cityscapes over DCT-Mask.
4) Demonstrated by experiments on COCO test-dev, the performance of PatchDCT is also competitive with other state-of-the-art methods.
2 RELATED WORK
Instance segmentation. Instance segmentation assigns a pixel-level mask to each instance of interest. Mask-RCNN (He et al., 2017) generates bounding boxes for each instance with a powerful detector (Ren et al., 2015) and categorizes each pixel in bounding boxes as foreground or background to obtain 28 × 28 binary grid masks. Several methods that build on Mask-RCNN improve the quality of masks. Mask Scoring RCNN (Huang et al., 2019) learns to regress mask IoU to select better-quality instance masks. HTC (Chen et al., 2019) utilizes interleaved execution, mask information flow, and semantic feature fusion to improve Mask-RCNN. BMask RCNN (Cheng et al., 2020c) adds a boundary branch on Mask-RCNN to detect the boundaries of masks. Bounding Shape Mask R-CNN (Kang et al., 2020) improves performance on object detection and instance segmentation by its bounding shape mask branch. BCNet (Ke et al., 2021) uses two GCN (Welling & Kipf, 2016) layers to detect overlapping instances. Although these algorithms have yielded promising results, they are still restricted in the low-resolution mask representation and thus do not generate high-quality masks.
1COCO dataset with LVIS annotations
Towards high-quality instance segmentation. To take full advantage of high-resolution masks, DCT-Mask (Shen et al., 2021) learns to regress a 300-dimensional DCT vector compressed from a 128 × 128 mask. SOLQ (Dong et al., 2021) is a query-based method, which also encodes highresolution masks into DCT vectors and predicts the vectors by queries. Both of these methods generate high-resolution masks in a one-shot manner, without any refinement. Although they have made considerable gains, there is still potential for improvement. Multi-stage refinement is another common technique for obtaining high-quality masks. PointRend (Kirillov et al., 2020) adaptively selects several locations to refine, rendering 224×224 masks from 7×7 coarse masks. RefineMask (Zhang et al., 2021) introduces semantic segmentation masks as auxiliary inputs, and generates 112 × 112 masks in a multi-stage manner. Mask Transfiner (Ke et al., 2022) represents image regions as a quadtree and corrects the errors of error-prone tree nodes to generate 112× 112 masks. PBR (Tang et al., 2021) is a post-processing method that refines patches along the mask boundaries. Unlike these refinement methods based on the binary grid mask representation, our method is based on compressed vectors.
Generating high-quality masks is also one of the main concerns in the field of semantic segmentation. CRFasRNN (Zheng et al., 2015) connects CRF (Krähenbühl & Koltun, 2011) with FCN (Long et al., 2015), formulating mean-field approximate inference for the CRF with Gaussian pairwise potentials as Recurrent Neural Networks. DeepLab (Chen et al., 2017) effectively improves the quality of masks by using atrous convolution for receptive field enhancement, ASPP for multiscale segmentation, and CRF for boundary refinement. SegModel (Shen et al., 2017) utilizes a guidance CRF to improve the segmentation quality. CascadePSP (Cheng et al., 2020b) trains independently a refinement module designed in a cascade fashion. RGR (Dias & Medeiros, 2018) is a post-processing module based on region growing. In contrast, PatchDCT can obtain high-quality segmentation results in an end-to-end learning manner without any additional post-processing.
3 METHODS
In this section, we show the difficulties in refining DCT vectors and then introduce PatchDCT to overcome these difficulties and generate finer masks.
3.1 DIFFICULTIES IN REFINING DCT VECTORS
Given a K ×K mask, DCT-Mask (Shen et al., 2021) encodes the mask MK×K into the frequency domain MfK×K :
MfK×K(u, v) = 2
K C(u)C(v) K−1∑ x=0 K−1∑ y=0 MK×K(x, y) cos (2x+ 1)uπ 2K cos (2y + 1)vπ 2K , (1)
where C(w) = 1/ √ 2 for w = 0 and C(w) = 1 otherwise. Non-zero values are concentrated in the upper left corner of MfK×K , which are low-frequency elements that contain the most information of the mask. The N -dimensional DCT vector is obtained by zigzag scanning (Al-Ani & Awad, 2013) MfK×K and selecting the top-N elements.
In the inference stage, MfK×K is recovered by filling the remaining elements to zero. Then each pixel in the mask MK×K is calculated as follow:
MK×K(x, y) = 2
K C(x)C(y) K−1∑ u=0 K−1∑ v=0 MfK×K(u, v) cos (2x+ 1)uπ 2K cos (2y + 1)vπ 2K , (2)
Equation 2 reveals that each pixel in the mask MK×K is calculated by all elements of MfK×K . When refining the N -dimensional DCT vector, once an element is incorrectly changed, all pixels in MK×K will be affected, even those correctly segmented regions, which is also shown in Figure 1. Therefore, when fixing some specific error regions (e.g. borders), it is difficult to get the correct refinement result unless all the elements in the DCT vector are correctly refined. In practice, however, it is almost impossible to correctly predict all N elements.
3.2 PATCHDCT
To prevent the above issue when refining the global DCT vector, we propose a method named PatchDCT, which divides the K ×K mask into m×m patches and refines each patch respectively. The overall architecture of PatchDCT is shown in Figure 2, which mainly consists of a three-class classifier and a DCT vector regressor. Specifically, the classifier is used to identify mixed patches and refine foreground and background patches. Each mixed patch is then refined by an n-dimensional DCT vector, which is obtained from the DCT vector regressor.
Three-class classifier. We define the patches with only foreground pixels and only background pixels as foreground patches and background patches, respectively, while the others are mixed patches. The task of differentiating patch categories is accomplished by a fully convolutional three-class classifier. Moreover, the mispredicted initial foreground and background patches are corrected by the classifier. We utilize a three-class classifier instead of a DCT vector regressor to refine foreground and background patches because of the particular form of their DCT vectors. For background patches, simply from Equation 1, all elements of DCT vectors are zero. For foreground patches, all elements are zero except for the first element named DC component (DCC), which is equal to the patch size m. The mathematical proof of the DCT vector form for the foreground patches is shown in the Appendix. DCT vector elements of foreground and background
patches are discrete data that are more suitable for classification. Referring to Figure 3, DCT vector elements of mixed patches are continuously distributed and therefore more suitable for regression.
Regressor. Similar to the phenomenon described in DCT-Mask (Shen et al., 2021), refining highresolution masks with the binary grid mask representation introduces performance degradation due to the high training complexity (refer to DCT-Mask (Shen et al., 2021) for more details). Learning to regress informative DCT vectors eases the training process. The specific experimental results are discussed in the experiments section (Sec. 4).
The regressor is trained and inferred for mixed patches only. It is actually a boundary attention module, since the mixed patches are distributed exactly along the boundary of the instance mask. For each mixed patch, the regressor predicts an n-dimensional DCT vector, which is very short but highly informative. Table 1 shows mask AP obtained by different lengths of ground truth patch DCT
vectors using Mask-RCNN framework on COCO val2017. The low-dimensional DCT vectors have been able to provide sufficient ground truth information.
3.3 MULTI-STAGE REFINEMENT AND LOSS FUNCTION
PatchDCT is a module where the input and output masks have the same resolution. Thus, the mask generated by a PatchDCT module can be fed into another PatchDCT module for further refinement, as shown in the upper right corner of Figure 2.
With multi-stage refinement, the loss function of the mask branch is defined as Lmask = λ0LdctN + ∑ s>0 λs(Lsclspatch + L s dctn), (3)
λ0 and λs are the loss weights. The first item LdctN of Equation 3 is the loss in predicting N - dimensional vectors of the entire masks (Shen et al., 2021).
LdctN = 1
N N∑ i R(V̂i − Vi), (4)
where Vi and V̂i are the i-th element in ground-truth and the prediction vector respectively. R is the loss function and N is the length of the vectors. The classification loss Lsclspatch of s-th stage is the cross-entropy loss over three classes. The regression loss Lsdctn of s-th stage is
Lsdctn = 1
Nm Nall∑ k
[ pk ( 1
n n∑ i R(V̂i − Vi)
)] , (5)
where Nm, Nall are the number of mixed patches and all patches respectively. n is the length of the patch DCT vectors. If the k-th patch is a mixed patch, pk = 1, otherwise pk = 0, indicating that only DCT vectors of mixed patches are regressed.
4 EXPERIMENTS
4.1 DATASETS
We evaluate our method on two standard instance segmentation datasets: COCO (Lin et al., 2014) and Cityscapes (Cordts et al., 2016). COCO provides 80 categories with instance-level annotations. Cityscapes is a dataset focused on urban street scenes. It contains 8 categories for instance segmentation, providing 2,975, 500 and 1,525 high-resolution images (1, 024 × 2, 048) for training, validation, and test respectively.
We report the standard mask AP metric and the Boundary AP (Cheng et al., 2021) metric (APB), the latter focusing on evaluating the boundary quality. Following (Kirillov et al., 2020), we also report AP∗ and AP∗B , which evaluate COCO val2017 with high-quality annotations provided by LVIS (Gupta et al., 2019). Note that for AP∗ and AP∗B , models are still trained on COCO train2017.
4.2 IMPLEMENT DETAILS
We build the model based on DCT-Mask (Shen et al., 2021). We first decode the 300-dimensional DCT vector to obtain a 112 × 112 mask. This mask is then fed into PatchDCT, together with a 42 × 42 feature map cropped from FPN-P2 (Lin et al., 2017). PatchDCT refines each patch of the mask and outputs a 112 × 112 mask. We set the patch size to 8 and each patch is represented by a 6-dimensional DCT vector. Our model is class-specific by default, i.e. one mask per class. L1 loss and cross-entropy loss are used for DCT vector regression and patch classification respectively. By default, only one PatchDCT module is used, and both λ0 and λ1 are set to 1. We implement our algorithm based on Detectron2 (Wu et al., 2019), and all hyperparameters remain the same as Mask-RCNN in Detectron2. Unless otherwise stated, 1× learning schedule is used.
4.3 MAIN RESULTS
Results on COCO. We compare PatchDCT with Mask-RCNN and DCT-Mask over different backbones. As shown in Table 2, on COCO val2017 with R50-FPN, PatchDCT improves 2.0% AP and 3.4% APB over Mask-RCNN. Compared with DCT-Mask, PatchDCT also achieves 0.7% AP and 0.9% APB improvements. When evaluating with LVIS annotations, PatchDCT yields significant gains of 3.2% AP∗ and 5.3% AP∗B over Mask-RCNN, and 1.1% AP
∗ and 1.7% AP∗B over DCTMask. Consistent improvements are observed on R101-FPN and RX101-FPN. Since AP∗ and AP∗B are evaluated with high-quality annotations, the significant improvements of these two metrics emphasize the superiority of our model. In addition, considering the improvement in mask quality, the cost in runtime is almost negligible, i.e. about 1.5 FPS degradation on the A100 GPU.
We also compare the performance of PatchDCT with state-of-the-art methods of instance segmentation on COCO test-dev2017. With RX101 backbone, PatchDCT surpasses PointRender (Kirillov et al., 2020) and RefineMask (Zhang et al., 2021), which are both multi-stage refinement methods based on binary grid masks, by 0.8% and 0.4%. PatchDCT also achieves comparable performance with Mask Transfiner (Ke et al., 2022) with R101 backbone. However, Mask-Transifer runs at 5.5 FPS on the A100 GPU, which is almost two times slower than PatchDCT. With Swin-B back-
bone, PatchDCT outperforms Mask Transfiner (Ke et al., 2022) by 0.7% AP. It is worth noting that PatchDCT is faster than most multi-stage refinement methods since only one refine process is required. These results demonstrate the effectiveness of PatchDCT in generating high-quality masks.
Results on Cityscapes. We also report results on Cityscapes val set in Table 3. In comparison with Mask-RCNN, PatchDCT obtains 4.5% AP and 7.0% APB improvements. It also outperforms DCT-Mask by 1.3% AP and 4.2% APB . Compared with other SOTA methods, PatchDCT is still competitive. PatchDCT achieves 0.8%, 1.4%, 2.1% APB gains over Mask Transfiner (Ke et al., 2022), RefineMask (Zhang et al., 2021) and PointRender (Kirillov et al., 2020) respectively. The large difference in APB highlights the ability of PatchDCT to generate masks with more detailed borders.
4.4 ABLATION EXPERIMENTS
We conduct extensive ablation experiments to further analyze PatchDCT. We adopt R50-FPN as the backbone and evaluate the performance on COCO val2017.
Simply refine DCT vectors. Simply refining the global DCT vectors does not succeed. To demonstrate that, we design a model named ‘Two-stage DCT’, which regresses a new 300-dimensional DCT vector after fusing the initial mask with a 42×42 feature map from FPN-P2. The refined mask is decoded from the final DCT vector. From Table 5, Two-stage DCT achieves only little improvements over DCT-Mask, since changes in some elements of the global DCT vector may affect the entire mask, even for the correct segmentation areas. PatchDCT leverages the patching mechanism to overcome this issue and outperforms Two-stage DCT by 1.0 AP∗B .
Binary grid refinement. Refining masks with the binary grid mask representation can be considered as the extreme patching mechanism, which treats each pixel as a patch. However, simply refining high-resolution masks with the binary grid mask representation introduces performance degradation. We construct an experiment named ‘binary grid refinement’, which predicts another 112×112 mask with the binary grid mask representation after fusing the initial mask as well as a 56×56 feature map from FPN-P2. Experimental results in Table 5 show that the performance of binary grid refinement is worse than PatchDCT, and even DCT-Mask. This is because binary grid refinement requires the refinement module to learn 12544 (112× 112) outputs, while PatchDCT only needs to learn at most 1176 (14× 14× 6) outputs, which reduces the training complexity. Effectiveness of three-class classifier. In addition to identifying mixed patches, a more important role of the three-class classifier is to correct previously mispredicted foreground and background patches. To validate the effectiveness of refining non-mixed patches (i.e. foreground and background patches), we construct a binary-class classifier, which only classifies patches as mixed or non-mixed and keeps masks of non-mixed patches unchanged. As shown in Table 6, the binary-class classifier is inferior to our three-class classifier by 0.3% AP and 0.4% AP∗, since the refinement of previously incorrectly predicted foreground and background patches is ignored.
Refinement of foreground and background patches can also be accomplished with the DCT vector regressor. However, as discussed in Sec. 3.2, the DCT vector elements of the non-mixed patches
Table 7: Mask AP obtained by PatchDCT with regressor focusing on all patches and mixed patches on val2017. The best results are obtained by regressing only the mixed patches.
Regressor AP APS APM APL APB AP∗ AP∗B all 36.6 17.7 39.5 52.2 23.6 39.6 28.6 mixed 37.2 18.3 39.5 54.2 24.5 40.8 30.1
Table 9: Mask AP obtained by models with different dimensions of patch DCT vectors on COCO val2017. Model with 6-dimensional vectors achieves the best performance.
Patch Dim. AP APS APM APL APB AP∗ AP∗B 3 36.8 17.6 39.2 53.5 24.0 40.5 29.5 6 37.2 18.3 39.5 54.1 24.5 40.8 30.1 9 36.9 17.1 39.3 53.3 24.3 40.6 30.1
only involve zero and m, making it ineffective to learn the DCT vectors of all patches directly. As shown in Table 7, the performance of the method refining non-mixed regions with the DCT vector regressor is lower than the method using a three-class classifier by 0.6% AP and 1.2% AP∗. Need to note that, APB and AP∗B decrease by 0.9% and 1.5% respectively, reflecting that learning to regress non-mixed patches also affects the prediction of boundaries.
Effectiveness of the regressor. The regressor is actually a boundary attention module that generates finer boundaries. As shown in Table 8, after removing the regressor and keeping only the classifier, the overall AP decreases by only 0.5%, but APB and AP∗B decrease by 1.2% and 3.0% respectively. This phenomenon demonstrates the importance of the regressor for generating finer boundaries.
Dimension of PatchDCT vectors. We look for an appropriate patch DCT vector length to encode each mixed patch. Results in Table 9 show that the model with 6-dimensional patch DCT vectors obtains the best performance. As also shown in Table 1, the 6-dimensional patch DCT vector already contains most of the ground-truth information. As more elements bring only very little incremental information, regressing these elements does not improve the prediction.
Multi-stage PatchDCT. We compare the performance of the multi-stage procedure in Table 10. One-stage PatchDCT already provides high-quality masks, while two-stage PatchDCT further improves the prediction. However, the computational cost of the mask branch nearly doubles for only tiny improvements in mask quality, so we use one-stage PatchDCT in our paper.
Size of the patch. We evaluate the influence of patch size in Table 11. We keep the resolution of the mask and the size of the input feature map unchanged and compare the model performance with different patch sizes. PatchDCT with 8×8 patches performs better than other settings.

Size of the feature map. We compare the model with different sizes of the feature map used in PatchDCT. Table 12 illustrates that the performance saturates with the 42×42 feature map.

Feature map from FPN. We evaluate PatchDCT with the feature map cropped from all pyramid levels or from P2 only. Table 13 shows that PatchDCT benefits from the finer feature map of P2.
4.5 QUALITATIVE RESULTS
In Figure 4 we visualize some outputs of PatchDCT on COCO val2017. PatchDCT generates finer boundaries among different instances, such as the shoulder of the person (the first column), the contour of the kite (the third column), and the arm of the girl (the fourth column). PatchDCT obtains masks of higher quality in comparison with Mask-RCNN and DCT-Mask.
5 CONCLUSIONS
In this work, we propose PatchDCT, a compressed-vector-based method towards high-quality instance segmentation. In contrast to previous methods, PatchDCT refines each patch of the mask respectively and utilizes patch DCT vectors to compress boundaries that are full of details. By using a classifier to refine foreground and background patches, and predicting an informative low-dimensional DCT vector for each mixed patch, PatchDCT generates a high-resolution mask with fine boundaries. PatchDCT is designed with a simple and clean structure, which allows the method to obtain high-quality segmentation with almost negligible cost in speed compared to Mask-RCNN and DCT-Mask. We hope that our approach will benefit future studies in instance segmentation.
A MORE QUALITATIVE RESULTS
A.1 TWO-STAGE DCT
We visualize some outputs of two-stage DCT and compare them with DCT-Mask to demonstrate the disadvantages of simply combining DCT-Mask with a multi-stage process.
As shown in Figure 5, in two-stage DCT, areas that were previously correctly predicted may be adversely affected during refinement. This phenomenon further demonstrates the difficulty of refining DCT vectors directly.
A.2 QUALITATIVE RESULTS ON CITYSCAPES
We show some qualitative results on Cityscapes in Figure 6. In comparison with Mask-RCNN and DCT-Mask, PatchDCT generates finer boundaries that greatly improve the quality of masks.
B MORE TECHNICAL DETAILS
We prove that, for foreground patches, all elements of the DCT vector except the DCC are zero.

Since M_{m\times m}(x, y) = 1 everywhere in a foreground patch, the DCC equals the patch size m:

\mathrm{DCC} = \frac{1}{m} \sum_{x=0}^{m-1} \sum_{y=0}^{m-1} M_{m\times m}(x, y) = m,    (6)
Note that for an m×m foreground patch, Equation 1 factorizes as

M^f_{m\times m}(u, v) = \frac{2}{m} C(u)C(v) \Big( \sum_{x=0}^{m-1} A(x, u) \Big) \Big( \sum_{y=0}^{m-1} A(y, v) \Big),    (7)

where A(a, b) = \cos\frac{(2a+1)b\pi}{2m}.
If u is odd,

A(m-1-x, u) = \cos\frac{(2(m-1-x)+1)u\pi}{2m} = \cos\Big( -\frac{(2x+1)u\pi}{2m} + u\pi \Big) = -A(x, u),    (8)

so the terms of \sum_{x=0}^{m-1} A(x, u) cancel in pairs and the sum vanishes.
If u is even and larger than zero, then by Euler's formula

e^{i\theta} = \cos\theta + i\sin\theta,    (9)

we have

\sum_{x=0}^{m-1} A(x, u) = \sum_{x=0}^{m-1} \cos\frac{(2x+1)u\pi}{2m} = \mathrm{Re}\Big( \sum_{x=0}^{m-1} e^{\frac{(2x+1)u\pi i}{2m}} \Big) = \mathrm{Re}\Big( e^{\frac{u\pi i}{2m}} \cdot \frac{1 - e^{u\pi i}}{1 - e^{\frac{u\pi i}{m}}} \Big) = 0,    (10)

since for even u,

e^{u\pi i} = \cos(u\pi) + i\sin(u\pi) = 1.    (11)

Combining the odd and even cases, we obtain

\sum_{x=0}^{m-1} A(x, u) = 0, \quad \forall u \neq 0.    (12)
Therefore, for foreground patches,

M^f_{m\times m}(u, v) = \begin{cases} m, & u = 0,\ v = 0, \\ 0, & \text{otherwise}. \end{cases}    (13)
This shows that, except for the DCC, all elements of the DCT vectors of foreground patches are zero.
C LIMITATIONS AND FUTURE OUTLOOK
In the process of visualization, we observe that the model may generate masks with holes. These problems usually occur in semantically ambiguous areas, and rarely in the center of the mask where the semantic information is very clear. We demonstrate some typical bad cases in Figure 7. In these cases, the model either misclassifies these patches or generates imprecise patch DCT vectors, resulting in disconnected masks. We leave improving the classification and the regressed vectors for future work. In addition, we also plan to carry out further verification in other, more challenging domains, such as aerial images, medical images, etc. Taking aerial images as an example, this field still focuses on the research of object detection (Yang et al., 2019; 2021a;b;c; 2023), especially oriented object detection (Yang & Yan, 2022; Zhou et al., 2022; Yang et al., 2022), and lacks the exploration of more precise localization tasks, i.e., instance segmentation. | 1. What is the main contribution of the paper in instance segmentation?
2. What are the strengths of the proposed approach, particularly in its extension from image-level to patch-level?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any relevant previous works that the reviewer thinks should be compared to the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper is focused on instance segmentation. The baseline method applies the Discrete Cosine Transform to improve the segmentation quality around the object boundary. This paper extends DCT-Mask from image-level to patch-level. Figure 4 shows the quality improvement visually.
Strengths And Weaknesses
The paper introduces PatchDCT, which improves the quality of instance segmentation. The experiments show the competitive performance of the proposed method. The paper provides detailed information for reproduction. There are some previous works that also focus on the segmentation boundary, such as the fully connected CRF in DeepLab [1] and CRFasRNN [2]. A comparison to these methods may be helpful.
[1] DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
[2] Conditional Random Fields as Recurrent Neural Networks
Clarity, Quality, Novelty And Reproducibility
The writing is good and presentation is clear. |
ICLR | Title
PatchDCT: Patch Refinement for High Quality Instance Segmentation
Abstract
High-quality instance segmentation has shown emerging importance in computer vision. Without any refinement, DCT-Mask directly generates high-resolution masks by compressed vectors. To further refine masks obtained by compressed vectors, we propose for the first time a compressed-vector-based multi-stage refinement framework. However, the vanilla combination does not bring significant gains, because changes in some elements of the DCT vector will affect the prediction of the entire mask. Thus, we propose a simple and novel method named PatchDCT, which separates the mask decoded from a DCT vector into several patches and refines each patch with the designed classifier and regressor. Specifically, the classifier is used to distinguish mixed patches from all patches, and to correct previously mispredicted foreground and background patches. In contrast, the regressor is used for DCT vector prediction of mixed patches, further refining the segmentation quality at boundary locations. Experiments show that our method achieves 2.0%, 3.2%, 4.5% AP and 3.4%, 5.3%, 7.0% Boundary AP improvements over Mask-RCNN on COCO, LVIS, and Cityscapes, respectively. It also surpasses DCT-Mask by 0.7%, 1.1%, 1.3% AP and 0.9%, 1.7%, 4.2% Boundary AP on COCO, LVIS, and Cityscapes. Besides, the performance of PatchDCT is also competitive with other state-of-the-art methods.
1 INTRODUCTION
Instance segmentation (Li et al., 2017; He et al., 2017) is a fundamental but challenging task in computer vision, which aims to locate objects in images and precisely segment each instance. Mainstream instance segmentation methods follow the Mask-RCNN (He et al., 2017) paradigm, which often segments instances in a low-resolution grid (Kang et al., 2020; Cheng et al., 2020c; Chen et al., 2019; Ke et al., 2021). However, limited by the coarse mask representation (i.e., 28 × 28 in Mask-RCNN), most of these algorithms cannot obtain high-quality segmentation results due to the loss of details. DCT-Mask (Shen et al., 2021) achieves considerable performance gain by predicting an informative 300-dimensional Discrete Cosine Transform (DCT) (Ahmed et al., 1974) vector compressed from a 128 × 128 mask. To further improve the segmentation results of DCT-Mask, we follow the refinement mechanism (Ke et al., 2022; Zhang et al., 2021; Kirillov et al., 2020) to correct the mask details in a multi-stage manner.
A straightforward implementation is to refine the 300-dimensional DCT vector multiple times. However, experimental results show that this naive implementation does not succeed, improving mask average precision (mAP) by only 0.1%, from 36.5% to 36.6%, on the COCO val set. The main reason for the limited improvement is that the full 300-dimensional DCT vector is not suitable for refining some important local regions, such as wrongly predicted regions and boundary regions in masks. As each pixel value in the mask is calculated from all elements of the DCT vector in the inference stage, once some elements in the DCT vector change, the entire mask will change, and even the correct segmentation areas may be affected; refer to Figure 1a.
∗Corresponding author is Kewei Liang.
To overcome the above issue, we propose a novel method, called PatchDCT, which divides the mask decoded from a DCT vector into several independent patches and refines each patch with a three-class classifier and a regressor, respectively. In detail, each patch is first classified by the classifier into one of three categories: foreground, background, and mixed; previously mispredicted foreground and background patches are then corrected. Mixed patches are fed into the regressor to predict their corresponding n-dimensional (n ≪ 300) DCT vectors. In the inference stage, we use the Inverse Discrete Cosine Transform (IDCT) to decode the predicted vectors of the mixed patches into their refined masks, and merge them with the masks of the other foreground and background patches to obtain a high-resolution mask. It is also worth emphasizing that each patch is independent, so an element change of a DCT vector will only affect the corresponding mixed patch, as shown in Figure 1b. In general, patching allows the model to focus on the refinement of local regions, thereby continuously improving the quality of segmentation and resulting in significant performance improvements. Our main contributions are:
1) To our best knowledge, PatchDCT is the first compressed vector based multi-stage refinement detector to predict high-quality masks.
2) PatchDCT innovatively adopts the patching technique, which successfully allows the model to focus on the refinement of important local regions, fully exploiting the advantages of multi-stage refinement and high-resolution information compression.
3) Compared to Mask-RCNN, PatchDCT improves about 2.0% AP and 3.4% Boundary AP on COCO, 3.2% AP and 5.3% Boundary AP on LVIS∗¹, and 4.5% AP and 7.0% Boundary AP on Cityscapes. It also achieves gains of 0.7% AP and 0.9% Boundary AP on COCO, 1.1% AP and 1.7% Boundary AP on LVIS∗, and 1.3% AP and 4.2% Boundary AP on Cityscapes over DCT-Mask.
4) Demonstrated by experiments on COCO test-dev, the performance of PatchDCT is also competitive with other state-of-the-art methods.
2 RELATED WORK
Instance segmentation. Instance segmentation assigns a pixel-level mask to each instance of interest. Mask-RCNN (He et al., 2017) generates bounding boxes for each instance with a powerful detector (Ren et al., 2015) and categorizes each pixel in bounding boxes as foreground or background to obtain 28 × 28 binary grid masks. Several methods that build on Mask-RCNN improve the quality of masks. Mask Scoring RCNN (Huang et al., 2019) learns to regress mask IoU to select better-quality instance masks. HTC (Chen et al., 2019) utilizes interleaved execution, mask information flow, and semantic feature fusion to improve Mask-RCNN. BMask RCNN (Cheng et al., 2020c) adds a boundary branch on Mask-RCNN to detect the boundaries of masks. Bounding Shape Mask R-CNN (Kang et al., 2020) improves performance on object detection and instance segmentation by its bounding shape mask branch. BCNet (Ke et al., 2021) uses two GCN (Welling & Kipf, 2016) layers to detect overlapping instances. Although these algorithms have yielded promising results, they are still restricted in the low-resolution mask representation and thus do not generate high-quality masks.
¹ COCO dataset with LVIS annotations.
Towards high-quality instance segmentation. To take full advantage of high-resolution masks, DCT-Mask (Shen et al., 2021) learns to regress a 300-dimensional DCT vector compressed from a 128 × 128 mask. SOLQ (Dong et al., 2021) is a query-based method, which also encodes highresolution masks into DCT vectors and predicts the vectors by queries. Both of these methods generate high-resolution masks in a one-shot manner, without any refinement. Although they have made considerable gains, there is still potential for improvement. Multi-stage refinement is another common technique for obtaining high-quality masks. PointRend (Kirillov et al., 2020) adaptively selects several locations to refine, rendering 224×224 masks from 7×7 coarse masks. RefineMask (Zhang et al., 2021) introduces semantic segmentation masks as auxiliary inputs, and generates 112 × 112 masks in a multi-stage manner. Mask Transfiner (Ke et al., 2022) represents image regions as a quadtree and corrects the errors of error-prone tree nodes to generate 112× 112 masks. PBR (Tang et al., 2021) is a post-processing method that refines patches along the mask boundaries. Unlike these refinement methods based on the binary grid mask representation, our method is based on compressed vectors.
Generating high-quality masks is also one of the main concerns in the field of semantic segmentation. CRFasRNN (Zheng et al., 2015) connects CRF (Krähenbühl & Koltun, 2011) with FCN (Long et al., 2015), formulating mean-field approximate inference for the CRF with Gaussian pairwise potentials as Recurrent Neural Networks. DeepLab (Chen et al., 2017) effectively improves the quality of masks by using atrous convolution for receptive field enhancement, ASPP for multiscale segmentation, and CRF for boundary refinement. SegModel (Shen et al., 2017) utilizes a guidance CRF to improve the segmentation quality. CascadePSP (Cheng et al., 2020b) trains independently a refinement module designed in a cascade fashion. RGR (Dias & Medeiros, 2018) is a post-processing module based on region growing. In contrast, PatchDCT can obtain high-quality segmentation results in an end-to-end learning manner without any additional post-processing.
3 METHODS
In this section, we show the difficulties in refining DCT vectors and then introduce PatchDCT to overcome these difficulties and generate finer masks.
3.1 DIFFICULTIES IN REFINING DCT VECTORS
Given a K×K mask, DCT-Mask (Shen et al., 2021) encodes the mask M_{K\times K} into the frequency domain M^f_{K\times K}:

M^f_{K\times K}(u, v) = \frac{2}{K} C(u)C(v) \sum_{x=0}^{K-1} \sum_{y=0}^{K-1} M_{K\times K}(x, y) \cos\frac{(2x+1)u\pi}{2K} \cos\frac{(2y+1)v\pi}{2K},    (1)
where C(w) = 1/\sqrt{2} for w = 0 and C(w) = 1 otherwise. Non-zero values are concentrated in the upper-left corner of M^f_{K\times K}; these are the low-frequency elements that contain most of the information of the mask. The N-dimensional DCT vector is obtained by zigzag scanning (Al-Ani & Awad, 2013) M^f_{K\times K} and selecting the top-N elements.
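For illustration, the encoding above can be sketched in a few lines of NumPy/SciPy (a minimal sketch of ours, not the official DCT-Mask code; scipy's orthonormal 2D DCT-II matches the normalization of Equation 1, and the exact zigzag scan direction is an implementation detail):

import numpy as np
from scipy.fft import dctn

def zigzag_indices(k):
    # One conventional zigzag order over a k x k grid.
    return sorted(((u, v) for u in range(k) for v in range(k)),
                  key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 else p[0]))

def encode_mask(mask, n=300):
    # mask: (K, K) binary array; returns the top-n DCT coefficients of Eq. 1.
    freq = dctn(mask.astype(np.float64), norm='ortho')   # orthonormal 2D DCT-II
    idx = zigzag_indices(mask.shape[0])[:n]
    return np.array([freq[u, v] for u, v in idx])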
In the inference stage, M^f_{K\times K} is recovered by filling the remaining elements with zero. Then each pixel in the mask M_{K\times K} is calculated as follows:
M_{K\times K}(x, y) = \frac{2}{K} \sum_{u=0}^{K-1} \sum_{v=0}^{K-1} C(u)C(v)\, M^f_{K\times K}(u, v) \cos\frac{(2x+1)u\pi}{2K} \cos\frac{(2y+1)v\pi}{2K},    (2)
Equation 2 reveals that each pixel in the mask M_{K\times K} is calculated from all elements of M^f_{K\times K}. When refining the N-dimensional DCT vector, once an element is incorrectly changed, all pixels in M_{K\times K} will be affected, even those in correctly segmented regions, which is also shown in Figure 1. Therefore, when fixing some specific error regions (e.g., borders), it is difficult to get the correct refinement result unless all the elements in the DCT vector are correctly refined. In practice, however, it is almost impossible to correctly predict all N elements.
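This global effect is easy to verify numerically: perturbing a single coefficient changes every pixel of the decoded mask (a small sanity-check sketch under the same orthonormal convention):

import numpy as np
from scipy.fft import dctn, idctn

K = 8
mask = np.zeros((K, K)); mask[2:6, 2:6] = 1.0     # a toy square mask
freq = dctn(mask, norm='ortho')
freq[1, 2] += 0.5                                 # corrupt one low-frequency element
rec = idctn(freq, norm='ortho')
print(int(np.sum(~np.isclose(rec, mask))))        # prints 64: all K*K pixels change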
3.2 PATCHDCT
To prevent the above issue when refining the global DCT vector, we propose a method named PatchDCT, which divides the K ×K mask into m×m patches and refines each patch respectively. The overall architecture of PatchDCT is shown in Figure 2, which mainly consists of a three-class classifier and a DCT vector regressor. Specifically, the classifier is used to identify mixed patches and refine foreground and background patches. Each mixed patch is then refined by an n-dimensional DCT vector, which is obtained from the DCT vector regressor.
Three-class classifier. We define the patches with only foreground pixels and only background pixels as foreground patches and background patches, respectively, while the others are mixed patches. The task of differentiating patch categories is accomplished by a fully convolutional three-class classifier. Moreover, the mispredicted initial foreground and background patches are corrected by the classifier. We utilize a three-class classifier instead of a DCT vector regressor to refine foreground and background patches because of the particular form of their DCT vectors. For background patches, simply from Equation 1, all elements of DCT vectors are zero. For foreground patches, all elements are zero except for the first element named DC component (DCC), which is equal to the patch size m. The mathematical proof of the DCT vector form for the foreground patches is shown in the Appendix. DCT vector elements of foreground and background
patches are discrete data that are more suitable for classification. Referring to Figure 3, DCT vector elements of mixed patches are continuously distributed and therefore more suitable for regression.
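For concreteness, ground-truth targets for the three-class classifier can be derived from a high-resolution mask as follows (a hedged sketch; the 0/1/2 label convention for background/mixed/foreground is our assumption, not taken from the paper):

import numpy as np

def patch_labels(mask, m=8):
    # mask: (K, K) binary array with K divisible by m.
    K = mask.shape[0]
    patches = mask.reshape(K // m, m, K // m, m).transpose(0, 2, 1, 3)
    s = patches.sum(axis=(2, 3))                  # foreground pixels per patch
    labels = np.ones(s.shape, dtype=np.int64)     # 1 = mixed (default)
    labels[s == 0] = 0                            # 0 = pure background
    labels[s == m * m] = 2                        # 2 = pure foreground
    return labels                                 # (K/m, K/m) grid of patch classes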
Regressor. Similar to the phenomenon described in DCT-Mask (Shen et al., 2021), refining high-resolution masks with the binary grid mask representation introduces performance degradation due to the high training complexity (refer to DCT-Mask (Shen et al., 2021) for more details). Learning to regress informative DCT vectors eases the training process. The specific experimental results are discussed in the experiments section (Sec. 4).
The regressor is trained and used in inference for mixed patches only. It is actually a boundary attention module, since the mixed patches are distributed exactly along the boundary of the instance mask. For each mixed patch, the regressor predicts an n-dimensional DCT vector, which is very short but highly informative. Table 1 shows the mask AP obtained by different lengths of ground-truth patch DCT vectors using the Mask-RCNN framework on COCO val2017. Low-dimensional DCT vectors are already able to provide sufficient ground-truth information.
3.3 MULTI-STAGE REFINEMENT AND LOSS FUNCTION
PatchDCT is a module where the input and output masks have the same resolution. Thus, the mask generated by a PatchDCT module can be fed into another PatchDCT module for further refinement, as shown in the upper right corner of Figure 2.
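Because the input and output resolutions match, stacking stages is trivial (a schematic sketch; `stages` stands for a list of PatchDCT modules and is our naming):

def multi_stage_refine(mask, feature, stages):
    # Each stage maps a 112x112 mask (plus the cropped FPN feature)
    # to a refined 112x112 mask, so stages can simply be chained.
    for patchdct in stages:
        mask = patchdct(mask, feature)
    return mask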
With multi-stage refinement, the loss function of the mask branch is defined as

L_{mask} = \lambda_0 L_{dct_N} + \sum_{s>0} \lambda_s \big( L^s_{cls_{patch}} + L^s_{dct_n} \big),    (3)
where \lambda_0 and \lambda_s are the loss weights. The first term L_{dct_N} of Equation 3 is the loss for predicting the N-dimensional vectors of the entire masks (Shen et al., 2021):

L_{dct_N} = \frac{1}{N} \sum_{i=1}^{N} R(\hat{V}_i - V_i),    (4)
where V_i and \hat{V}_i are the i-th elements of the ground-truth and prediction vectors, respectively, R is the loss function, and N is the length of the vectors. The classification loss L^s_{cls_{patch}} of the s-th stage is the cross-entropy loss over the three classes. The regression loss L^s_{dct_n} of the s-th stage is

L^s_{dct_n} = \frac{1}{N_m} \sum_{k=1}^{N_{all}} \Big[ p_k \Big( \frac{1}{n} \sum_{i=1}^{n} R(\hat{V}_i - V_i) \Big) \Big],    (5)
where N_m and N_{all} are the numbers of mixed patches and all patches, respectively, and n is the length of the patch DCT vectors. If the k-th patch is a mixed patch, p_k = 1; otherwise p_k = 0, indicating that only DCT vectors of mixed patches are regressed.
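A PyTorch-style sketch of Equations 3–5 for a single refinement stage, with R taken as the L1 loss (tensor shapes and argument names are our assumptions; the mixed-patch indicator p_k is passed as a float mask, and Eq. 5 is pooled over the batch for simplicity):

import torch
import torch.nn.functional as F

def mask_branch_loss(v_pred, v_gt, cls_logits, cls_gt, pv_pred, pv_gt, mixed,
                     lam0=1.0, lam1=1.0):
    # v_pred/v_gt: (B, N) global DCT vectors -> Eq. 4 with R = L1.
    l_dct_N = F.l1_loss(v_pred, v_gt)
    # cls_logits: (B, P, 3), cls_gt: (B, P) in {0, 1, 2} -> cross-entropy term of Eq. 3.
    l_cls = F.cross_entropy(cls_logits.flatten(0, 1), cls_gt.flatten())
    # pv_pred/pv_gt: (B, P, n) patch DCT vectors; mixed: (B, P) float p_k -> Eq. 5.
    per_patch = (pv_pred - pv_gt).abs().mean(dim=-1)   # (1/n) sum_i R(Vhat_i - V_i)
    l_dct_n = (per_patch * mixed).sum() / mixed.sum().clamp(min=1.0)
    return lam0 * l_dct_N + lam1 * (l_cls + l_dct_n)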
4 EXPERIMENTS
4.1 DATASETS
We evaluate our method on two standard instance segmentation datasets: COCO (Lin et al., 2014) and Cityscapes (Cordts et al., 2016). COCO provides 80 categories with instance-level annotations. Cityscapes is a dataset focused on urban street scenes. It contains 8 categories for instance segmentation, providing 2,975, 500 and 1,525 high-resolution images (1, 024 × 2, 048) for training, validation, and test respectively.
We report the standard mask AP metric and the Boundary AP (Cheng et al., 2021) metric (APB), the latter focusing on evaluating the boundary quality. Following (Kirillov et al., 2020), we also report AP∗ and AP∗B , which evaluate COCO val2017 with high-quality annotations provided by LVIS (Gupta et al., 2019). Note that for AP∗ and AP∗B , models are still trained on COCO train2017.
4.2 IMPLEMENT DETAILS
We build the model based on DCT-Mask (Shen et al., 2021). We first decode the 300-dimensional DCT vector to obtain a 112 × 112 mask. This mask is then fed into PatchDCT, together with a 42 × 42 feature map cropped from FPN-P2 (Lin et al., 2017). PatchDCT refines each patch of the mask and outputs a 112 × 112 mask. We set the patch size to 8 and each patch is represented by a 6-dimensional DCT vector. Our model is class-specific by default, i.e. one mask per class. L1 loss and cross-entropy loss are used for DCT vector regression and patch classification respectively. By default, only one PatchDCT module is used, and both λ0 and λ1 are set to 1. We implement our algorithm based on Detectron2 (Wu et al., 2019), and all hyperparameters remain the same as Mask-RCNN in Detectron2. Unless otherwise stated, 1× learning schedule is used.
4.3 MAIN RESULTS
Results on COCO. We compare PatchDCT with Mask-RCNN and DCT-Mask over different backbones. As shown in Table 2, on COCO val2017 with R50-FPN, PatchDCT improves 2.0% AP and 3.4% APB over Mask-RCNN. Compared with DCT-Mask, PatchDCT also achieves 0.7% AP and 0.9% APB improvements. When evaluated with LVIS annotations, PatchDCT yields significant gains of 3.2% AP∗ and 5.3% AP∗B over Mask-RCNN, and 1.1% AP∗ and 1.7% AP∗B over DCT-Mask. Consistent improvements are observed on R101-FPN and RX101-FPN. Since AP∗ and AP∗B are evaluated with high-quality annotations, the significant improvements on these two metrics emphasize the superiority of our model. In addition, considering the improvement in mask quality, the cost in runtime is almost negligible, i.e., about 1.5 FPS degradation on an A100 GPU.
We also compare the performance of PatchDCT with state-of-the-art instance segmentation methods on COCO test-dev2017. With the RX101 backbone, PatchDCT surpasses PointRend (Kirillov et al., 2020) and RefineMask (Zhang et al., 2021), which are both multi-stage refinement methods based on binary grid masks, by 0.8% and 0.4%. PatchDCT also achieves comparable performance with Mask Transfiner (Ke et al., 2022) with the R101 backbone. However, Mask Transfiner runs at 5.5 FPS on the A100 GPU, which is almost two times slower than PatchDCT. With the Swin-B backbone, PatchDCT outperforms Mask Transfiner (Ke et al., 2022) by 0.7% AP. It is worth noting that PatchDCT is faster than most multi-stage refinement methods since only one refinement process is required. These results demonstrate the effectiveness of PatchDCT in generating high-quality masks.
Results on Cityscapes. We also report results on the Cityscapes val set in Table 3. In comparison with Mask-RCNN, PatchDCT obtains 4.5% AP and 7.0% APB improvements. It also outperforms DCT-Mask by 1.3% AP and 4.2% APB. Compared with other SOTA methods, PatchDCT is still competitive. PatchDCT achieves 0.8%, 1.4%, and 2.1% APB gains over Mask Transfiner (Ke et al., 2022), RefineMask (Zhang et al., 2021), and PointRend (Kirillov et al., 2020), respectively. The large difference in APB highlights the ability of PatchDCT to generate masks with more detailed borders.
4.4 ABLATION EXPERIMENTS
We conduct extensive ablation experiments to further analyze PatchDCT. We adopt R50-FPN as the backbone and evaluate the performance on COCO val2017.
Simply refining DCT vectors. Simply refining the global DCT vectors does not succeed. To demonstrate this, we design a model named ‘Two-stage DCT’, which regresses a new 300-dimensional DCT vector after fusing the initial mask with a 42×42 feature map from FPN-P2. The refined mask is decoded from the final DCT vector. From Table 5, Two-stage DCT achieves only a marginal improvement over DCT-Mask, since changes in some elements of the global DCT vector may affect the entire mask, even in correctly segmented areas. PatchDCT leverages the patching mechanism to overcome this issue and outperforms Two-stage DCT by 1.0% AP∗B.
Binary grid refinement. Refining masks with the binary grid mask representation can be considered the extreme patching mechanism, which treats each pixel as a patch. However, simply refining high-resolution masks with the binary grid mask representation introduces performance degradation. We construct an experiment named ‘binary grid refinement’, which predicts another 112×112 mask with the binary grid mask representation after fusing the initial mask as well as a 56×56 feature map from FPN-P2. Experimental results in Table 5 show that the performance of binary grid refinement is worse than that of PatchDCT, and even DCT-Mask. This is because binary grid refinement requires the refinement module to learn 12544 (112×112) outputs, while PatchDCT only needs to learn at most 1176 (14×14×6) outputs, which reduces the training complexity.

Effectiveness of three-class classifier. In addition to identifying mixed patches, a more important role of the three-class classifier is to correct previously mispredicted foreground and background patches. To validate the effectiveness of refining non-mixed patches (i.e., foreground and background patches), we construct a binary-class classifier, which only classifies patches as mixed or non-mixed and keeps masks of non-mixed patches unchanged. As shown in Table 6, the binary-class classifier is inferior to our three-class classifier by 0.3% AP and 0.4% AP∗, since the refinement of previously incorrectly predicted foreground and background patches is ignored.
Refinement of foreground and background patches can also be accomplished with the DCT vector regressor. However, as discussed in Sec. 3.2, the DCT vector elements of the non-mixed patches
Table 7: Mask AP obtained by PatchDCT with regressor focusing on all patches and mixed patches on val2017. The best results are obtained by regressing only the mixed patches.
Regressor   AP    APS   APM   APL   APB   AP∗   AP∗B
all         36.6  17.7  39.5  52.2  23.6  39.6  28.6
mixed       37.2  18.3  39.5  54.2  24.5  40.8  30.1
Table 9: Mask AP obtained by models with different dimensions of patch DCT vectors on COCO val2017. Model with 6-dimensional vectors achieves the best performance.
Patch Dim.  AP    APS   APM   APL   APB   AP∗   AP∗B
3           36.8  17.6  39.2  53.5  24.0  40.5  29.5
6           37.2  18.3  39.5  54.1  24.5  40.8  30.1
9           36.9  17.1  39.3  53.3  24.3  40.6  30.1
only involve zero and m, making it ineffective to learn the DCT vectors of all patches directly. As shown in Table 7, the performance of the method refining non-mixed regions with the DCT vector regressor is lower than that of the method using a three-class classifier by 0.6% AP and 1.2% AP∗. Note that APB and AP∗B decrease by 0.9% and 1.5% respectively, reflecting that learning to regress non-mixed patches also harms the prediction of boundaries.
Effectiveness of the regressor. The regressor is actually a boundary attention module that generates finer boundaries. As shown in Table 8, after removing the regressor and keeping only the classifier, the overall AP decreases by only 0.5%, but APB and AP∗B decrease by 1.2% and 3.0% respectively. This phenomenon demonstrates the importance of the regressor for generating finer boundaries.
Dimension of PatchDCT vectors. We look for an appropriate patch DCT vector length to encode each mixed patch. Results in Table 9 show that the model with 6-dimensional patch DCT vectors obtains the best performance. As also shown in Table 1, the 6-dimensional patch DCT vector already contains most of the ground-truth information. As more elements bring only very little incremental information, regressing these elements does not improve the prediction.
Multi-stage PatchDCT. We compare the performance of the multi-stage procedure in Table 10. One-stage PatchDCT already provides high-quality masks, while two-stage PatchDCT further improves the prediction. However, the computational cost of the mask branch nearly doubles for only tiny improvements in mask quality, so we use one-stage PatchDCT in our paper.
Size of the patch. We evaluate the influence of patch size in Table 11. We keep the resolution of the mask and the size of the input feature map unchanged and compare the model performance with different patch sizes. PatchDCT with 8×8 patches performs better than other settings.

Size of the feature map. We compare the model with different sizes of the feature map used in PatchDCT. Table 12 illustrates that the performance saturates with the 42×42 feature map.

Feature map from FPN. We evaluate PatchDCT with the feature map cropped from all pyramid levels or from P2 only. Table 13 shows that PatchDCT benefits from the finer feature map of P2.
4.5 QUALITATIVE RESULTS
In Figure 4 we visualize some outputs of PatchDCT on COCO val2017. PatchDCT generates finer boundaries among different instances, such as the shoulder of the person (the first column), the contour of the kite (the third column), and the arm of the girl (the fourth column). PatchDCT obtains masks of higher quality in comparison with Mask-RCNN and DCT-Mask.
5 CONCLUSIONS
In this work, we propose PatchDCT, a compressed-vector-based method towards high-quality instance segmentation. In contrast to previous methods, PatchDCT refines each patch of the mask respectively and utilizes patch DCT vectors to compress boundaries that are full of details. By using a classifier to refine foreground and background patches, and predicting an informative low-dimensional DCT vector for each mixed patch, PatchDCT generates a high-resolution mask with fine boundaries. PatchDCT is designed with a simple and clean structure, which allows the method to obtain high-quality segmentation with almost negligible cost in speed compared to Mask-RCNN and DCT-Mask. We hope that our approach will benefit future studies in instance segmentation.
A MORE QUALITATIVE RESULTS
A.1 TWO-STAGE DCT
We visualize some outputs of two-stage DCT and compare them with DCT-Mask to demonstrate the disadvantages of simply combining DCT-Mask with a multi-stage process.
As shown in Figure 5, in two-stage DCT, areas that were previously correctly predicted may be adversely affected during refinement. This phenomenon further demonstrates the difficulty of refining DCT vectors directly.
A.2 QUALITATIVE RESULTS ON CITYSCAPES
We show some qualitative results on Cityscapes in Figure 6. In comparison with Mask-RCNN and DCT-Mask, PatchDCT generates finer boundaries that greatly improve the quality of masks.
B MORE TECHNICAL DETAILS
We prove that, for foreground patches, all elements of the DCT vector except the DCC are zero.

Since M_{m\times m}(x, y) = 1 everywhere in a foreground patch, the DCC equals the patch size m:

\mathrm{DCC} = \frac{1}{m} \sum_{x=0}^{m-1} \sum_{y=0}^{m-1} M_{m\times m}(x, y) = m,    (6)
Note that for an m×m foreground patch, Equation 1 factorizes as

M^f_{m\times m}(u, v) = \frac{2}{m} C(u)C(v) \Big( \sum_{x=0}^{m-1} A(x, u) \Big) \Big( \sum_{y=0}^{m-1} A(y, v) \Big),    (7)

where A(a, b) = \cos\frac{(2a+1)b\pi}{2m}.
If u is odd,

A(m-1-x, u) = \cos\frac{(2(m-1-x)+1)u\pi}{2m} = \cos\Big( -\frac{(2x+1)u\pi}{2m} + u\pi \Big) = -A(x, u),    (8)

so the terms of \sum_{x=0}^{m-1} A(x, u) cancel in pairs and the sum vanishes.
If u is even and larger than zero, then by Euler's formula

e^{i\theta} = \cos\theta + i\sin\theta,    (9)

we have

\sum_{x=0}^{m-1} A(x, u) = \sum_{x=0}^{m-1} \cos\frac{(2x+1)u\pi}{2m} = \mathrm{Re}\Big( \sum_{x=0}^{m-1} e^{\frac{(2x+1)u\pi i}{2m}} \Big) = \mathrm{Re}\Big( e^{\frac{u\pi i}{2m}} \cdot \frac{1 - e^{u\pi i}}{1 - e^{\frac{u\pi i}{m}}} \Big) = 0,    (10)

since for even u,

e^{u\pi i} = \cos(u\pi) + i\sin(u\pi) = 1.    (11)

Combining the odd and even cases, we obtain

\sum_{x=0}^{m-1} A(x, u) = 0, \quad \forall u \neq 0.    (12)
Therefore, for foreground patches,

M^f_{m\times m}(u, v) = \begin{cases} m, & u = 0,\ v = 0, \\ 0, & \text{otherwise}. \end{cases}    (13)
This shows that, except for the DCC, all elements of the DCT vectors of foreground patches are zero.
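The claim is also easy to check numerically under the orthonormal DCT of Equation 1 (a quick sanity check for m = 8):

import numpy as np
from scipy.fft import dctn

m = 8
fg = np.ones((m, m))                         # an all-foreground patch
freq = dctn(fg, norm='ortho')                # orthonormal DCT-II, as in Eq. 1
assert np.isclose(freq[0, 0], m)             # the DCC equals the patch size m
assert np.allclose(freq.flat[1:], 0.0)       # every other element is zero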
C LIMITATIONS AND FUTURE OUTLOOK
In the process of visualization, we observe that the model may generate masks with holes. These problems usually occur in semantically ambiguous areas, and rarely in the center of the mask where the semantic information is very clear. We demonstrate some typical bad cases in Figure 7. In these cases, the model either misclassifies these patches or generates imprecise patch DCT vectors, resulting in disconnected masks. We leave improving the classification and the regressed vectors for future work. In addition, we also plan to carry out further verification in other, more challenging domains, such as aerial images, medical images, etc. Taking aerial images as an example, this field still focuses on the research of object detection (Yang et al., 2019; 2021a;b;c; 2023), especially oriented object detection (Yang & Yan, 2022; Zhou et al., 2022; Yang et al., 2022), and lacks the exploration of more precise localization tasks, i.e., instance segmentation. | 1. What is the focus and contribution of the paper regarding semantic correspondence?
2. What are the strengths of the proposed approach, particularly in terms of neural representation?
3. What are the weaknesses of the paper, especially for the experiment section?
4. Do you have any concerns about the semantic correspondence representation?
5. What are the limitations regarding the NeMF approach?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
7. What is the main contribution of the paper on dictionary learning?
8. What are the strengths of the paper, especially in theoretical analysis?
9. Do you have any questions regarding the paper?
10. What is the focus and contribution of the paper regarding patching techniques for mask generation?
11. What are the strengths of the proposed method, particularly in improving boundary segmentation performance?
12. What are the weaknesses of the paper, such as limited backbone models?
13. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper adds a patching technique to the DCT-Mask model and adopts a refinement technique for each patch, so that high-resolution masks can be achieved. The patching technique can produce better boundaries than the DCT-Mask model itself, as element changes in DCT vectors can be limited to the patch level rather than the entire mask. The paper also provides many experiments to verify the capability of the proposed method and compare it with state-of-the-art methods. Ablation experiments are also provided to discuss the efficacy of the designed framework.
Strengths And Weaknesses
Strengths:
Introducing the patching technique to refine the generated masks improves boundary segmentation performance, and this idea is of good interest.
As patching is the key technique in the paper, the patch size is analyzed by experiments, providing a suggested 8×8 size for users. Also, other hyperparameters, such as the dimension of patch DCT vectors and the number of stages for PatchDCT, are clearly given and discussed.
Many experiments are done to support the design.
Weaknesses:
The backbone is limited to CNN-based models. Vision-transformer-based models, which also use a patching technique, would be of interest, to see whether they make any difference to the conclusions.
Clarity, Quality, Novelty And Reproducibility
Clarity: In general, it is clear. However, in Section 4.3, the second paragraph does not clearly reference its data points. Further, it mentions that "Mask Transfiner runs at 5.5 FPS on the A100 GPU, which is almost two times slower than PatchDCT"; however, the paper does not provide concrete data points.
Quality: In general, the paper is well-written.
Novelty: The reviewer believes that the novelty of this paper is good. |
ICLR | Title
Multi-Agent Sequential Decision-Making via Communication
Abstract
Communication helps agents obtain information about others so that better coordinated behavior can be learned. Some existing work communicates predicted future trajectories with others, hoping to provide clues about what others will do for better coordination. However, circular dependencies can sometimes occur when agents are treated synchronously, making it hard to coordinate decision-making. In this paper, we propose a novel communication scheme, Sequential Communication (SeqComm). SeqComm treats agents asynchronously (the upper-level agents make decisions before the lower-level ones) and has two communication phases. In the negotiation phase, agents determine the priority of decision-making by communicating hidden states of observations and comparing the values of intentions, which are obtained by modeling the environment dynamics. In the launching phase, the upper-level agents take the lead in making decisions and communicate their actions to the lower-level agents. Theoretically, we prove that the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we show that SeqComm outperforms existing methods in various multi-agent cooperative tasks.
1 INTRODUCTION
The partial observability and stochasticity inherent to the nature of multi-agent systems can easily impede the cooperation among agents and lead to catastrophic miscoordination (Ding et al., 2020). Communication has been exploited to help agents obtain extra information during both training and execution to mitigate such problems (Foerster et al., 2016; Sukhbaatar et al., 2016; Peng et al., 2017). Specifically, agents can share their information with others via a trainable communication channel.
Centralized training with decentralized execution (CTDE) is a popular learning paradigm in cooperative multi-agent reinforcement learning (MARL). Although a centralized value function can be learned to evaluate the joint policy of agents, the decentralized policies of agents are essentially independent. Therefore, a coordination problem arises. That is, agents may take sub-optimal actions by mistakenly assuming others' actions when there exist multiple optimal joint actions (Busoniu et al., 2008). Communication allows agents to obtain information about others to avoid miscoordination. However, most existing work only focuses on communicating messages, e.g., the information of agents' current observations or historical trajectories (Jiang & Lu, 2018; Singh et al., 2019; Das et al., 2019; Ding et al., 2020). It is impossible for an agent to acquire others' actions before making decisions, since the game model is usually synchronous, i.e., agents make decisions and execute actions simultaneously. Recently, intention or imagination, depicted by a combination of predicted actions and observations over many future steps, has been proposed as part of the messages (Kim et al., 2021; Pretorius et al., 2021). However, circular dependencies can still occur, so it may be hard to coordinate decision-making under synchronous settings.
A general approach to solving the coordination problem is to make sure that ties between equally good actions are broken by all agents in the same way. One simple mechanism for doing so is to know exactly what others will do and adjust one's behavior accordingly under a unique ordering of agents and actions (Busoniu et al., 2008). Inspired by this, we reconsider the cooperative game from an asynchronous perspective. In other words, each agent is assigned a priority (i.e., order) of decision-making at each step in both training and execution, thus the Stackelberg equilibrium (SE) (Von Stackelberg, 2010) is naturally set up as the learning objective. Specifically, the upper-level agents make decisions before the lower-level agents. Therefore, the lower-level agents can acquire the actual actions of the upper-level agents by
communication and make their decisions conditioned on what the upper-level agents will do. Under this setting, the SE is likely to be Pareto superior to the average Nash equilibrium (NE) in games that require a high cooperation level (Zhang et al., 2020). However, is it necessary to decide a specific priority of decision-making for each agent? Ideally, the optimal joint policy can be decomposed in any order (Wen et al., 2019), e.g., \pi^*(a^1, a^2|s) = \pi^*(a^1|s)\pi^*(a^2|s, a^1) = \pi^*(a^2|s)\pi^*(a^1|s, a^2). But during the learning process, it is unlikely for agents to use the optimal actions of other agents for gradient calculation, which leaves learning still vulnerable to the relative overgeneralization problem (Wei et al., 2018). Overall, there is no guarantee that the above equation will hold during learning, thus the ordering should be carefully considered.
In this paper, we propose a novel model-based multi-round communication scheme for cooperative MARL, Sequential Communication (SeqComm), to enable agents to explicitly coordinate with each other. Specifically, SeqComm has two-phase communication, negotiation phase and launching phase. In the negotiation phase, agents communicate their hidden states of observations with others simultaneously. Then they are able to generate multiple predicted trajectories, called intention, by modeling the environmental dynamics and other agents’ actions. In addition, the priority of decision-making is determined by communicating and comparing the corresponding values of agents’ intentions. The value of each intention represents the rewards obtained by letting that agent take the upper-level position of the order sequence. The sequence of others follows the same procedure as aforementioned with the upper-level agents fixed. In the launching phase, the upper-level agents take the lead in decision-making and communicate their actual actions with the lower-level agents. Note that the actual actions will be executed simultaneously in the environment without any changes.
SeqComm is currently built on MAPPO (Yu et al., 2021). Theoretically, we prove the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we evaluate SeqComm on a set of tasks in multi-agent particle environment (MPE) (Lowe et al., 2017) and StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). In all these tasks, we demonstrate that SeqComm outperforms prior communication-free and communication-based methods. By ablation studies, we confirm that treating agents asynchronously is a more effective way to promote coordination and SeqComm can provide the proper priority of decision-making for agents to develop better coordination.
2 RELATED WORK
Communication. Existing studies (Jiang & Lu, 2018; Kim et al., 2019; Singh et al., 2019; Das et al., 2019; Zhang et al., 2019; Jiang et al., 2020; Ding et al., 2020; Konan et al., 2022) in this realm mainly focus on how to extract valuable messages. ATOC (Jiang & Lu, 2018) and IC3Net (Singh et al., 2019) utilize gate mechanisms to decide when to communicate with other agents. Many works (Das et al., 2019; Konan et al., 2022) employ multi-round communication to fully reason about the intentions of others and establish complex collaboration strategies. Social influence (Jaques et al., 2019) uses communication to influence the behaviors of others. I2C (Ding et al., 2020) only communicates with agents that are relevant and influential, as determined by causal inference. However, all these methods focus on how to effectively and properly exploit valuable information from current or past partial observations. More recently, some studies (Kim et al., 2021; Du et al., 2021; Pretorius et al., 2021) have begun to answer the question: can we favor cooperation beyond sharing partial observations? They allow agents to imagine their future states with a world model and communicate those with others. IS (Pretorius et al., 2021), as a representative of this line of research, enables each agent to share its intention with other agents in the form of an encoded imagined trajectory, and uses an attention module to figure out the importance of the received intentions. However, two concerns arise. On one hand, circular dependencies can lead to inaccurate predicted future trajectories as long as the multi-agent system treats agents synchronously. On the other hand, MARL struggles to extract useful information from numerous messages, not to mention more complex and dubious messages, i.e., predicted future trajectories.
Unlike these works, we treat agents from an asynchronous perspective; therefore, circular dependencies can be naturally resolved. Furthermore, agents only send actions to lower-level agents besides partial observations, ensuring the messages are compact as well as informative.
Coordination. The agents are essentially independent decision makers in execution and may break ties between equally good actions randomly. Thus, in the absence of additional mechanisms, different
agents may break ties in different ways, and the resulting joint actions may be suboptimal. Coordination graphs (Guestrin et al., 2002; Böhmer et al., 2020; Wang et al., 2021b) simplify the coordination when the global Q-function can be additively decomposed into local Q-functions that only depend on the actions of a subset of agents. Typically, a coordination graph expresses a higher-order value decomposition among agents. This improves the representational capacity to distinguish other agents’ effects on local utility functions, which addresses the miscoordination problems caused by partial observability. Another general approach to solving the coordination problem is to make sure that ties are broken by all agents in the same way, requiring that random action choices are somehow coordinated or negotiated. Social conventions (Boutilier, 1996) or role assignments (Prasad et al., 1998) encode prior preferences towards certain joint actions and help break ties during action selection. Communication (Fischer et al., 2004; Vlassis, 2007) can be used to negotiate action choices, either alone or in combination with the aforementioned techniques. Our method follows this line of research by utilizing the ordering of agents and actions to break the ties, other than the enhanced representational capacity of the local value function.
3 PROBLEM FORMULATION
Cost-Free Communication. The decentralized partially observable Markov decision process (DecPOMDP) can be extended to explicitly incorporate broadcasting observations. The resulting model is called multi-agent POMDP (Oliehoek et al., 2016).
Pynadath & Tambe (2002) showed that under cost-free communication, a joint communication policy that shares local observations at each stage is optimal. Many studies have also investigated sharing local observations in models similar to multi-agent POMDP (Pynadath & Tambe, 2002; Ooi & Wornell, 1996; Nair et al., 2004; Roth et al., 2005a;b; Spaan et al., 2006; Oliehoek et al., 2007; Becker et al., 2004). These works focus on issues other than communication cost, while we focus on the coordination problem. Note that even under multi-agent POMDP, where agents can get joint observations, the coordination problem can still arise (Busoniu et al., 2008). Suppose the centralized critic has learned that action pairs [a1, a2] and [b1, b2] are equally optimal. Without any prior information, the individual policies π1 and π2 learned from the centralized critic may break the tie randomly and choose a1 and b2, respectively.
Multi-Agent Sequential Decision-Making. We consider fully cooperative multi-agent tasks that are modeled as multi-agent POMDP, where n agents interact with the environment according to the following procedure, which we refer to as multi-agent sequential decision-making.
At each timestep t, assume the priority (i.e., order) of decision-making for all agents is given and each priority level has only one agent (i.e., agents make decisions one by one). Note that the smaller the level index, the higher the priority of decision-making. The agent at each level k gets its own observation o^k_t drawn from the state s_t, and receives messages m^{-k}_t from all other agents, where m^{-k}_t \triangleq \{\{o^1_t, a^1_t\}, \ldots, \{o^{k-1}_t, a^{k-1}_t\}, o^{k+1}_t, \ldots, o^n_t\}. Equivalently, m^{-k}_t can be written as \{\mathbf{o}^{-k}_t, \mathbf{a}^{1:k-1}_t\}, where \mathbf{o}^{-k}_t denotes the joint observations of all agents except k, and \mathbf{a}^{1:k-1}_t denotes the joint actions of agents 1 to k-1. For the agent at the first level (i.e., k = 1), \mathbf{a}^{1:k-1}_t = \emptyset. Then, the agent determines its action a^k_t sampled from its policy \pi^k(\cdot|o^k_t, m^{-k}_t), or equivalently \pi^k(\cdot|\mathbf{o}_t, \mathbf{a}^{1:k-1}_t), and sends it to the lower-level agents. After all agents have determined their actions, they perform the joint action \mathbf{a}_t, which can be seen as sampled from the joint policy \pi(\cdot|s_t) factorized as \prod_{k=1}^{n} \pi^k(\cdot|\mathbf{o}_t, \mathbf{a}^{1:k-1}_t), in the environment, get a shared reward r(s_t, \mathbf{a}_t), and the state transitions to the next state s' according to the transition probability p(s'|s_t, \mathbf{a}_t). All agents aim to maximize the expected return \sum_{t=0}^{\infty} \gamma^t r_t, where \gamma is the discount factor. The state-value function and action-value function of the level-k agent are defined as follows:
V_{\pi^k}(s, \mathbf{a}^{1:k-1}) \triangleq \mathbb{E}_{s_{1:\infty},\, \mathbf{a}^{k:n}_0 \sim \pi^{k:n},\, \mathbf{a}_{1:\infty} \sim \pi} \Big[ \sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s,\ \mathbf{a}^{1:k-1}_0 = \mathbf{a}^{1:k-1} \Big],

Q_{\pi^k}(s, \mathbf{a}^{1:k}) \triangleq \mathbb{E}_{s_{1:\infty},\, \mathbf{a}^{k+1:n}_0 \sim \pi^{k+1:n},\, \mathbf{a}_{1:\infty} \sim \pi} \Big[ \sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s,\ \mathbf{a}^{1:k}_0 = \mathbf{a}^{1:k} \Big].
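In pseudocode, one environment step of this procedure looks as follows (a schematic sketch; `policies[k]` stands for the level-k policy and the order is assumed given):

def sequential_step(env, obs, policies, order):
    # Higher-priority agents decide first; each lower-level agent conditions
    # on the actual upper-level actions a^{1:k-1}.
    actions, upper = {}, []
    for k in order:
        a_k = policies[k](obs, upper)        # a^k_t ~ pi^k(. | o_t, a^{1:k-1}_t)
        actions[k] = a_k
        upper.append(a_k)                    # communicated to lower levels
    return env.step(actions)                 # joint action executed simultaneously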
For the setting of multi-agent sequential decision-making discussed above, we have the following proposition. Proposition 1. If all agents update their policies with individual TRPO (Schulman et al., 2015) sequentially in multi-agent sequential decision-making, then the joint policy of all agents is guaranteed to improve monotonically and converge.
Proof. The proof is given in Appendix A.
Proposition 1 indicates that SeqComm has a performance guarantee regardless of the priority of decision-making in multi-agent sequential decision-making. However, the priority of decision-making does affect the optimality of the converged joint policy, and we have the following claim. Claim 1. Different priorities of decision-making affect the optimality of the converged joint policy, due to the relative overgeneralization problem.
We use a one-step matrix game as an example, as illustrated in Figure 1(a), to demonstrate the influence of the priority of decision-making on the learning process. Due to relative overgeneralization (Wei et al., 2018), agent B tends to choose b2 or b3. Specifically, b2 or b3 in the suboptimal equilibrium is a better choice than b1 in the optimal equilibrium when matched with arbitrary actions from agent A. Therefore, as shown in Figure 1(b), B → A (i.e., agent B makes decisions before A, and A’s policy conditions on the action of B) and Simultaneous (i.e., two agents make decisions simultaneously and independently) are easily trapped into local optima. However, things can be different if agent A goes
first, as A → B achieves the optimum. As long as agent A does not suffer from relative overgeneralization, it can help agent B get rid of local optima by narrowing down the search space of B. Besides, a policy that determines the priority of decision-making can be learned under the guidance of the state-value function, denoted as Learned. It obtains better performance than B → A and Simultaneous, which indicates that dynamically determining the order during policy learning can be beneficial as we do not know the optimal priority in advance.
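Relative overgeneralization of this kind can be reproduced with the classic climbing game (shown below purely for illustration; this payoff matrix is an assumption, not necessarily the one in Figure 1(a)):

import numpy as np

# Climbing game: the optimal joint action (a1, b1) is fenced by penalties.
R = np.array([[ 11., -30.,   0.],    # rows: agent A's actions a1..a3
              [-30.,   7.,   6.],    # cols: agent B's actions b1..b3
              [  0.,   0.,   5.]])

# Against a uniformly exploring agent A, b1 looks worst despite being part of
# the optimal equilibrium, so agent B drifts to b3.
print(R.mean(axis=0))                # [-6.33 -7.67  3.67]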
Remark 1. The priority (i.e., order) of decision-making affects the optimality of the converged joint policy in multi-agent sequential decision-making, thus it is critical to determine the order well. However, learning the order directly requires an additional centralized policy in execution, which does not generalize to scenarios where the number of agents varies. Moreover, its learning complexity increases exponentially with the number of agents, making it infeasible in many cases.
4 SEQUENTIAL COMMUNICATION
In this paper, we cast our eyes in another direction and resort to the world model. Ideally, we can randomly sample candidate order sequences, evaluate them under the world model (see Section 4.1), and choose the order sequence that is deemed the most promising under the true dynamics. SeqComm is designed based on this principle to determine the priority of decision-making via communication.
SeqComm adopts a multi-round communication mechanism, i.e., agents are allowed to communicate with others in multiple rounds. Importantly, communication is separated into phases serving different purposes. One is the negotiation phase, in which agents determine the priority of decision-making. The other is the launching phase, in which agents act conditioned on the actual actions the upper-level agents will take, implementing explicit coordination via communication. The overview of SeqComm is illustrated in Figure 2. Each SeqComm agent consists of a policy, a critic, and a world model, as illustrated in Figure 3, and the parameters of all networks are shared across agents (Gupta et al., 2017).
[Figures 2 and 3 appear here. Recoverable panel labels include agents exchanging requests and replies (e.g., "Agent 1 chooses to send request to agent 2 and ignore agent 3", "Agent 1 chooses to send request to agent 2, 3, 4" at timesteps t and t+1), an order set with intention rewards r1, r2, and agents A, B, C selecting actions aA, aB, aC level by level given states sA, sB, sC.]
4.1 NEGOTIATION PHASE
In the negotiation phase, each agent's observation encoder first takes its observation o_t as input and outputs a hidden state h_t, which is used to communicate with others. Agents then determine the priority of decision-making by their intentions, which are established and evaluated based on the world model.
World Model. The world model is needed to predict and evaluate future trajectories. Unlike previous works (Kim et al., 2021; Du et al., 2021; Pretorius et al., 2021), SeqComm can utilize the hidden states received from other agents in the first round of communication to model more precise environment dynamics for explicit coordination in the next round of communication. Once an agent can access other agents' hidden states, it has adequate information to estimate their actions, since all agents are homogeneous and share parameters. Therefore, the world model \mathcal{M}^i(\cdot) takes as input the joint hidden states \mathbf{h}_t = \{h^1_t, \ldots, h^n_t\} and actions \mathbf{a}_t, and predicts the next joint observations and reward,

\hat{\mathbf{o}}_{t+1}, \hat{r}_{t+1} = \mathcal{M}^i(\mathrm{AM}_w(\mathbf{h}_t, \mathbf{a}_t)),

where \mathrm{AM}_w is an attention module. We adopt the attention module so that the world model generalizes to scenarios where additional agents are introduced or existing agents are removed.
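A minimal PyTorch sketch of such an attention-based world model (layer sizes and the exact fusion of hidden states and actions are our assumptions):

import torch
import torch.nn as nn

class WorldModel(nn.Module):
    def __init__(self, h_dim, a_dim, o_dim, heads=4):
        super().__init__()
        self.proj = nn.Linear(h_dim + a_dim, h_dim)
        self.attn = nn.MultiheadAttention(h_dim, heads, batch_first=True)
        self.obs_head = nn.Linear(h_dim, o_dim)   # predicts o_hat_{t+1} per agent
        self.rew_head = nn.Linear(h_dim, 1)       # predicts the shared r_hat_{t+1}

    def forward(self, h, a):
        # h: (B, n, h_dim) joint hidden states; a: (B, n, a_dim) joint actions.
        x = self.proj(torch.cat([h, a], dim=-1))
        x, _ = self.attn(x, x, x)                 # AM_w: agents attend to each other
        return self.obs_head(x), self.rew_head(x.mean(dim=1))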
Priority of Decision-Making. The intention is the key element to determine the priority of decision-making. The notion of intention is described as an agent’s future behavior in previous works (Rabinowitz et al., 2018; Raileanu et al., 2018; Kim et al., 2021). However, we define the intention as an agent’s future behavior without considering others.
As mentioned before, an agent's intention that considers others can lead to circular dependencies and cause miscoordination. By our definition, the intention of an agent should be depicted as all future trajectories that consider that agent as the first-mover and ignore the others. However, there are many possible future trajectories, as the priority of the remaining agents is unfixed. In practice, we use the Monte Carlo method to evaluate intention.
Taking agent $i$ at timestep $t$ to illustrate: it first considers itself as the first-mover and produces its action based only on the joint hidden states, $\hat{a}^i_t \sim \pi^i(\cdot \,|\, \mathrm{AM}_a(\boldsymbol{h}_t))$, where we again use an attention module $\mathrm{AM}_a$ to handle the input. For the order sequence of lower-level agents, we randomly sample a set of order sequences over the unfixed agents. Assuming agent $j$ is the second-mover, agent $i$ models $j$'s action by conditioning on the upper-level action and following its own policy, $\hat{a}^j_t \sim \pi^i(\cdot \,|\, \mathrm{AM}_a(\boldsymbol{h}_t, \hat{a}^i_t))$. The same procedure is applied to predict the actions of all other agents following the sampled order sequence. Based on the joint hidden states and predicted actions, the next joint observations $\hat{\boldsymbol{o}}_{t+1}$ and the corresponding reward $\hat{r}_{t+1}$ are predicted by the world model. Repeating this procedure for $H$ steps yields a predicted future trajectory $\tau_t = \{\hat{\boldsymbol{o}}_{t+1}, \hat{\boldsymbol{a}}_{t+1}, \ldots, \hat{\boldsymbol{o}}_{t+H}, \hat{\boldsymbol{a}}_{t+H}\}$, whose value is defined as its normalized return, $v_{\tau_t} = \sum_{t'=t+1}^{t+H} \gamma^{t'-t-1} \hat{r}_{t'} / H$. The intention value is then defined as the average value of $F$ future trajectories with different sampled order sequences. The choice of $F$ is a trade-off between the computation overhead and the accuracy of the estimation.
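The Monte Carlo estimate above can be sketched as follows; this is a schematic under our own assumptions, where `policy(h, upper)` and `world_model(h, a)` stand in for the paper's shared networks, and predicted observations are reused as the next hidden states for brevity.

```python
import numpy as np

def intention_value(i, h, policy, world_model, n, H=5, F=1, gamma=0.99, seed=0):
    """Estimate agent i's intention value: the average normalized H-step return
    of F imagined rollouts in which i moves first and the order of the
    remaining agents is sampled uniformly at random."""
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(F):
        order = [i] + list(rng.permutation([j for j in range(n) if j != i]))
        h_t, ret = h, 0.0
        for step in range(H):
            acts, upper = {}, []
            for k in order:                      # sequential action prediction
                acts[k] = policy(h_t, upper)     # conditioned on upper-level acts
                upper.append(acts[k])
            a_t = np.stack([acts[k] for k in range(n)])
            o_next, r_hat = world_model(h_t, a_t)
            ret += gamma ** step * float(np.mean(r_hat))
            h_t = o_next                         # simplification: skip re-encoding
        values.append(ret / H)                   # v_tau = (sum_t gamma^... r_t) / H
    return float(np.mean(values))
```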
After all the agents have computed their own intentions and the corresponding values, they communicate the intention values to each other. Agents then compare these values and choose the agent with the highest intention value as the first-mover. The priority of lower-level decision-making follows the same procedure with the upper-level agents fixed. Note that agents may need to communicate intention values multiple times until the priority of decision-making is finally determined.
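Put together, the negotiation reduces to a greedy, level-by-level selection; a minimal sketch, where the hypothetical `value_fn(i, fixed)` stands in for one round of intention evaluation and value communication:

```python
def determine_order(agents, value_fn):
    """Greedy negotiation: at each level, the unfixed agent with the highest
    intention value (given the already-fixed upper levels) acts next."""
    order, remaining = [], set(agents)
    while remaining:
        best = max(remaining, key=lambda i: value_fn(i, tuple(order)))
        order.append(best)
        remaining.remove(best)
    return order
```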
4.2 LAUNCHING PHASE
In the launching phase, agents communicate to obtain additional information for decision-making. Apart from the hidden states received in the last phase, we allow agents to obtain the actual actions that the upper-level agents will take in execution, whereas other studies can only infer others' actions by opponent modeling (Rabinowitz et al., 2018; Raileanu et al., 2018) or by communicating intentions (Kim et al., 2021). Therefore, miscoordination can be naturally avoided and a better cooperation strategy is possible, since lower-level agents can adjust their behaviors accordingly. A lower-level agent $i$ makes a decision following the policy $\pi^i(\cdot \,|\, \mathrm{AM}_a(\boldsymbol{h}_t, \boldsymbol{a}^{\text{upper}}_t))$, where $\boldsymbol{a}^{\text{upper}}_t$ denotes the actual actions received from all upper-level agents. As soon as an agent has decided its action, it sends the action to all lower-level agents through the communication channel. Note that, although agents make decisions sequentially, the actions are executed simultaneously and distributedly in execution.
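In code, the launching phase is a single pass over the negotiated order (a sketch with illustrative names; `policies[k]` denotes the shared policy evaluated for agent k):

```python
def launching_phase(order, h, policies):
    """Sequential decision, simultaneous execution: agent k acts on the joint
    hidden states plus the actual actions already broadcast by its upper
    levels, i.e., pi_k(. | AM_a(h_t, a_t^upper))."""
    upper_actions, joint_action = [], {}
    for k in order:
        a_k = policies[k](h, upper_actions)
        joint_action[k] = a_k
        upper_actions.append(a_k)     # broadcast to the remaining lower levels
    return joint_action               # executed simultaneously in the env
```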
Communication Overhead. The two communication phases alternate until all agents have determined their levels and obtained the upper-level actions. Note that many previous works also adopt a multi-round communication scheme (Das et al., 2019; Singh et al., 2019). In practice, compared with communicating high-dimensional hidden states/observations over multiple rounds (Das et al., 2019; Singh et al., 2019) or transferring multi-step trajectories (Kim et al., 2021), SeqComm needs more rounds but transmits the hidden states only once. In the remaining $n-1$ rounds of communication, with $(n-1)/2$ broadcasts per agent in total, only a single intention value and an action are exchanged. Considering that there are $n!$ permutations of order choices for $n$ agents, our method greatly reduces the computation overhead, since each agent needs to compute at most $n$ intention values to search for a satisfying order. Although SeqComm is more suitable for latency-tolerant MARL tasks, e.g., power dispatch (minutes) (Wang et al., 2021a), inventory management (hours) (Feng et al., 2021), and maritime transportation (days) (Li et al., 2019), it could have a wider range of applications given the rapid development of communication technology, e.g., 5G.
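As a back-of-the-envelope check of these counts (our own accounting of the tallies stated in this paragraph, not numbers from the paper):

```python
import math

def comm_tallies(n):
    """Rough per-timestep message counts for n agents under SeqComm."""
    hidden_broadcasts = n                  # each agent broadcasts h_t once
    extra_rounds = n - 1                   # negotiation rounds after the first
    value_broadcasts = n * (n - 1) // 2    # (n-1)/2 scalar broadcasts per agent
    return hidden_broadcasts, extra_rounds, value_broadcasts, math.factorial(n)

for n in (3, 5, 10):
    h, r, v, perms = comm_tallies(n)
    print(f"n={n}: {h} hidden-state msgs, {r} extra rounds, {v} intention "
          f"values exchanged, vs {perms} orderings to enumerate exhaustively")
```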
4.3 THEORETICAL ANALYSIS
As the priority of decision-making is determined by intention values, SeqComm is likely to choose different orders at different timesteps during training. However, we have the following proposition that theoretically guarantees the performance of the learned joint policy under SeqComm.
Proposition 2. The monotonic improvement and convergence of the joint policy in SeqComm are independent of the priority of decision-making of agents at each timestep.
Proof. The proof is given in Appendix A.
The priority of decision-making is chosen under the world model, thus the compounding errors in the world model can result in discrepancies between the predicted returns of the same order under the world model and under the true dynamics. We then analyze the monotonic improvement of the joint policy under the world model, following Janner et al. (2019).

Theorem 1. Let the expected total variation between the two transition distributions be bounded at each timestep as $\max_t \mathbb{E}_{s \sim \pi_{\beta,t}}[D_{TV}(p(s'|s,\boldsymbol{a}) \,\|\, \hat{p}(s'|s,\boldsymbol{a}))] \le \epsilon_m$, and the policy divergences at level $k$ be bounded as $\max_{s,\boldsymbol{a}^{1:k-1}} D_{TV}(\pi_{\beta,k}(a^k|s,\boldsymbol{a}^{1:k-1}) \,\|\, \pi_k(a^k|s,\boldsymbol{a}^{1:k-1})) \le \epsilon_{\pi_k}$, where $\pi_\beta$ is the data-collecting policy for the model and $\hat{p}(s'|s,\boldsymbol{a})$ is the transition distribution under the model. Then the model return $\hat{\eta}$ and the true return $\eta$ of the policy $\pi$ are bounded as:
$$\hat{\eta}[\pi] \ge \eta[\pi] - \underbrace{\left[\frac{2\gamma r_{\max}\big(\epsilon_m + 2\sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{4 r_{\max} \sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma}\right]}_{C(\epsilon_m,\, \epsilon_{\pi_{1:n}})}.$$
Proof. The proof is given in Appendix B.
Remark 2. Theorem 1 provides a useful relationship between the compounding errors and the policy update. As long as we improve the return under the true dynamics by more than the gap $C(\epsilon_m, \epsilon_{\pi_{1:n}})$, we can guarantee the policy improvement under the world model. If no such policy exists to overcome the gap, the model error is too high, i.e., there is a large discrepancy between the world model and the true dynamics, and the order sequence obtained under the world model is not reliable: it is almost the same as a random one. Though a random order sequence also enjoys the theoretical guarantee of Proposition 2, we show in Section 5.2 that a random order sequence empirically leads to a poor local optimum.
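To get a feel for the magnitude of the gap, the bound can be evaluated numerically; the constants below are hypothetical and chosen only to illustrate the $(1-\gamma)^{-2}$ amplification of the model error:

```python
def gap(gamma, r_max, eps_m, eps_pi):
    """C(eps_m, eps_pi_{1:n}) from Theorem 1; eps_pi lists per-level divergences."""
    s = sum(eps_pi)
    return (2 * gamma * r_max * (eps_m + 2 * s) / (1 - gamma) ** 2
            + 4 * r_max * s / (1 - gamma))

# e.g., n = 5 agents, gamma = 0.99, rewards bounded by 1, small errors:
print(gap(gamma=0.99, r_max=1.0, eps_m=1e-3, eps_pi=[1e-3] * 5))  # ~219.8
```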
5 EXPERIMENTS
Sequential communication (SeqComm) is currently instantiated based on MAPPO (Yu et al., 2021). We evaluate SeqComm on three tasks in multi-agent particle environment (MPE) (Lowe et al., 2017) and four maps in StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019).
For these experiments, we compare SeqComm against the following communication-free and communication-based baselines: MAPPO (Yu et al., 2021), QMIX (Rashid et al., 2018), IS (Kim et al., 2021), TarMAC (Das et al., 2019), and I2C (Ding et al., 2020). In more detail, IS communicates predicted future trajectories (observations and actions), where the predictions are made by an environment model. TarMAC uses an attention model to focus on the more important incoming messages (the hidden states of observations); we reproduce TarMAC on top of MAPPO instead of A2C as in the original paper, for better performance. I2C infers one-to-one communication to reduce the redundancy of messages (also conditioning on observations).
In the experiments, all the methods use parameter sharing for fast convergence. We have fine-tuned the baselines for a fair comparison. Please refer to Appendix E for experimental settings and Appendix F for implementation details. All results are presented in terms of the mean and standard deviation of five runs with different random seeds.
5.1 RESULTS
MPE. We experiment on predator-prey (PP), cooperative navigation (CN), and keep-away (KA) in MPE. In PP, five predators (agents) try to capture three prey. In CN, five agents try to occupy five landmarks. In KA, three attackers (agents) try to occupy three landmarks; however, there are three defenders to push them away. In all three tasks, the size of agents is set to be larger than in the original settings so that collisions occur more easily, following the settings in Kim et al. (2021). In addition, agents cannot observe any other agents, which makes the tasks more difficult and communication more important. Similar modifications appear in previous works (Foerster et al., 2016; Ding et al., 2020). After all, we want to demonstrate superiority over communication-based baselines, and communication-based methods are more suitable for scenarios with limited vision. More details about experimental settings are available in Appendix E.
Figure 4 shows the learning curves of all the methods in terms of the mean reward averaged over timesteps in PP, CN, and KA. SeqComm converges to the highest mean reward compared with all the baselines, demonstrating its superiority. In more detail, all communication-based methods outperform MAPPO, indicating the necessity of communication in these difficult tasks. Apart from MAPPO, IS performs the worst, since it may access inaccurate predicted information due to the circular dependencies. The substantial improvement of SeqComm over I2C and TarMAC is attributed to SeqComm allowing agents to obtain more valuable action information for explicit coordination. The agents learned by SeqComm show sophisticated coordination strategies induced by the priority of decision-making, as witnessed in the visualization of agent behaviors; more details are given in Appendix C. Note that QMIX is omitted from this comparison for clearer presentation, since Yu et al. (2021) have shown that QMIX and MAPPO exhibit similar performance in various MPE tasks.
SMAC. We also evaluate SeqComm against the baselines on four customized maps in SMAC: 6h vs 8z, MMM2, 10m vs 11m, and 8m vs 9m, where we have made minor changes to the observation part of agents to increase the difficulty. Specifically, the sight range of agents is reduced from 9 to 2, and agents cannot perceive any information about their allies even if they are within the sight range. NDQ (Wang et al., 2020) adopts a similar change to increase the difficulty of action coordination and demonstrates that the miscoordination problem is widespread in multi-agent learning. The rest of the settings remain at the defaults.
The learning curves of SeqComm and the baselines in terms of the win rate are illustrated in Figure 5. IS and I2C fail on these maps and obtain a zero win rate because the two methods are built on MADDPG, which does not work well in SMAC, especially when the sight range of agents is reduced; this is also supported by other studies (Papoudakis et al., 2021). SeqComm and TarMAC converge to better performance than MAPPO and QMIX, which demonstrates the benefit of communication. Moreover, SeqComm outperforms TarMAC, which again verifies the gain of explicit action coordination.
5.2 ABLATION STUDIES
Priority of Decision-Making. We compare SeqComm with two ablation baselines that differ only in how the priority of decision-making is set: Fix-C, where the priority is fixed throughout an episode, and Random-C, where the priority is determined randomly at each timestep. TarMAC is also compared as a reference without explicit action coordination.
As depicted in Figure 6, SeqComm achieves a higher mean reward or win rate than Fix-C, Random-C, and TarMAC in all the tasks. These results verify the importance of the priority of decision-making and the necessity to continuously adjust it during one episode. It is also demonstrated that SeqComm can provide a proper priority of decision-making. As discussed in Section 4.3, although Fix-C and Random-C also have the theoretical guarantee, they converge to poor local optima in practice. Moreover, Fix-C and Random-C show better performance than TarMAC in most tasks. This result accords with the hypothesis that the SE is likely to be Pareto superior to the average NE in games with a high cooperation level. Additionally, the learned policy of SeqComm can generalize well to the same task with a different number of agents in MPE, which is detailed in Appendix C.
Communication Range. We also carry out ablation studies on communication range in MPE tasks. Note that communication range means how many nearest neighbors each agent is allowed to communicate with, following the setting in Ding et al. (2020). We reduce the communication range of SeqComm from 4 to 2 and 0. As there are only three agents in KA, it is omitted in this study. The results are shown in Figure 7. Communication-based agents perform better than communication-free agents, which accords with the results of many previous studies. More importantly, the superiority of SeqComm with communication range 2 over the corresponding TarMAC again demonstrates the effectiveness of sequential communication even in reduced communication ranges.
However, as the communication range decreases from 4 to 2, there is no performance reduction in these two MPE tasks. On the contrary, the agents with communication range 2 perform the best. This accords with the results in I2C (Ding et al., 2020) and ATOC (Jiang & Lu, 2018) that redundant information can sometimes impair the learning process; in other settings, this conclusion might not hold. Moreover, since under our communication scheme agents can obtain more information, i.e., the actual actions of others, it is reasonable that SeqComm still outperforms other methods in reduced communication ranges.
6 CONCLUSIONS
We have proposed SeqComm, which enables agents to explicitly coordinate with each other. SeqComm takes an asynchronous perspective and allows agents to make decisions sequentially. A two-phase communication scheme is adopted for determining the priority of decision-making and communicating messages accordingly. Theoretically, we prove that the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we demonstrate that SeqComm outperforms the baselines in a variety of cooperative multi-agent tasks and provides a proper priority of decision-making.
A PROOFS OF PROPOSITION 1 AND PROPOSITION 2
Lemma 1 (Agent-by-Agent PPO). If we update the policy of each agent $i$ with TRPO (Schulman et al., 2015) (or approximately, PPO) while fixing all the other agents' policies, then the joint policy improves monotonically.
Proof. We consider the joint surrogate objective in TRPO, $L_{\pi_{\text{old}}}(\pi_{\text{new}})$, where $\pi_{\text{old}}$ is the joint policy before updating and $\pi_{\text{new}}$ is the joint policy after updating.

Given that $\pi^{-i}_{\text{new}} = \pi^{-i}_{\text{old}}$, we have:
$$\begin{aligned}
L_{\pi_{\text{old}}}(\pi_{\text{new}}) &= \mathbb{E}_{\boldsymbol{a} \sim \pi_{\text{new}}}[A_{\pi_{\text{old}}}(s, \boldsymbol{a})] \\
&= \mathbb{E}_{\boldsymbol{a} \sim \pi_{\text{old}}}\left[\frac{\pi_{\text{new}}(\boldsymbol{a}|s)}{\pi_{\text{old}}(\boldsymbol{a}|s)} A_{\pi_{\text{old}}}(s, \boldsymbol{a})\right] \\
&= \mathbb{E}_{\boldsymbol{a} \sim \pi_{\text{old}}}\left[\frac{\pi^i_{\text{new}}(a^i|s)}{\pi^i_{\text{old}}(a^i|s)} A_{\pi_{\text{old}}}(s, \boldsymbol{a})\right] \\
&= \mathbb{E}_{a^i \sim \pi^i_{\text{old}}}\left[\frac{\pi^i_{\text{new}}(a^i|s)}{\pi^i_{\text{old}}(a^i|s)} \mathbb{E}_{a^{-i} \sim \pi^{-i}_{\text{old}}}[A_{\pi_{\text{old}}}(s, a^i, a^{-i})]\right] \\
&= \mathbb{E}_{a^i \sim \pi^i_{\text{old}}}\left[\frac{\pi^i_{\text{new}}(a^i|s)}{\pi^i_{\text{old}}(a^i|s)} A^i_{\pi_{\text{old}}}(s, a^i)\right] = L_{\pi^i_{\text{old}}}(\pi^i_{\text{new}}),
\end{aligned}$$
where $A^i_{\pi_{\text{old}}}(s, a^i) = \mathbb{E}_{a^{-i} \sim \pi^{-i}_{\text{old}}}[A_{\pi_{\text{old}}}(s, a^i, a^{-i})]$ is the individual advantage of agent $i$, and the third equality follows from the condition $\pi^{-i}_{\text{new}} = \pi^{-i}_{\text{old}}$.

With the result of TRPO, we have the following conclusion:
$$J(\pi_{\text{new}}) - J(\pi_{\text{old}}) \ge L_{\pi_{\text{old}}}(\pi_{\text{new}}) - C D^{\max}_{\mathrm{KL}}(\pi_{\text{new}} \,\|\, \pi_{\text{old}}) = L_{\pi^i_{\text{old}}}(\pi^i_{\text{new}}) - C D^{\max}_{\mathrm{KL}}(\pi^i_{\text{new}} \,\|\, \pi^i_{\text{old}}) \quad (\text{from } \pi^{-i}_{\text{new}} = \pi^{-i}_{\text{old}}).$$
This means the individual objective coincides with the joint objective, so the monotonic improvement is guaranteed.
Then we can show the proof of Proposition 1.
Proof. We will build a new MDP $\tilde{M}$ based on the original MDP. We keep the action space $\tilde{\mathcal{A}} = \mathcal{A} = \times_{i=1}^n \mathcal{A}^i$, where $\mathcal{A}^i$ is the original action space of agent $i$. The new state space contains multiple layers. We define $\tilde{\mathcal{S}}_k = \mathcal{S} \times (\times_{i=1}^k \mathcal{A}^i)$ for $k = 1, 2, \cdots, n-1$ and $\tilde{\mathcal{S}}_0 = \mathcal{S}$, where $\mathcal{S}$ is the original state space. A new state $\tilde{s}_k \in \tilde{\mathcal{S}}_k$ thus takes the form $\tilde{s}_k = (s, a^1, a^2, \cdots, a^k)$. The total new state space is $\tilde{\mathcal{S}} = \cup_{i=0}^{n-1} \tilde{\mathcal{S}}_i$. Next we define the transition probability $\tilde{P}$ as follows:
$$\tilde{P}(\tilde{s}'|\tilde{s}_k, a^{k+1}, a^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' = (\tilde{s}_k, a^{k+1})\big), \quad k < n-1,$$
$$\tilde{P}(\tilde{s}'|\tilde{s}_k, a^{k+1}, a^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' \in \tilde{\mathcal{S}}_0\big) \, P(\tilde{s}'|\tilde{s}_k, a^{k+1}), \quad k = n-1.$$
That is, a state in layer $k$ can only transition to the state in layer $k+1$ obtained by appending the corresponding action, and a state in layer $n-1$ transitions back to layer $0$ with the probability $P$ of the original MDP. The reward function $\tilde{r}$ is defined as:
$$\tilde{r}(\tilde{s}, \boldsymbol{a}) = \mathbb{1}\big(\tilde{s} \in \tilde{\mathcal{S}}_0\big) \, r(\tilde{s}, \boldsymbol{a}).$$
That is, a reward is obtained only when the state is in layer $0$, and its value equals the original reward function. This completes the definition of the new MDP $\tilde{M} = \{\tilde{\mathcal{S}}, \tilde{\mathcal{A}}, \tilde{P}, \tilde{r}, \gamma\}$. We then claim that if all agents learn in multi-agent sequential decision-making by PPO, they are in fact performing agent-by-agent PPO in the new MDP $\tilde{M}$. To be precise, one update of multi-agent sequential decision-making in the original MDP $M$ is equivalent to a round of updates from agent $1$ to agent $n$ by agent-by-agent PPO in the new MDP $\tilde{M}$. Moreover, the total reward of a round in the new MDP $\tilde{M}$ is the same as the reward of one timestep in the original MDP $M$. With this conclusion and Lemma 1, we complete the proof.
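The construction can also be mirrored programmatically. Below is a sketch of a wrapper that expands each joint step of $M$ into $n$ sequential steps of $\tilde{M}$, under the assumption of a toy environment exposing `reset()` and `step(joint_action) -> (state, reward, done)`:

```python
class LayeredMDP:
    """Sketch of the auxiliary MDP built in the proof: states in layer k are
    (s, a^1, ..., a^k); reward is paid only when the last layer transitions
    back to layer 0 through the original dynamics."""
    def __init__(self, env, n_agents):
        self.env, self.n = env, n_agents
        self.s, self.partial = None, []

    def reset(self):
        self.s, self.partial = self.env.reset(), []
        return (self.s, tuple(self.partial))                  # layer-0 state

    def step(self, a_k):
        self.partial.append(a_k)
        if len(self.partial) < self.n:                        # move to layer k+1
            return (self.s, tuple(self.partial)), 0.0, False
        s_next, r, done = self.env.step(tuple(self.partial))  # layer n-1 -> 0
        self.s, self.partial = s_next, []
        return (self.s, ()), r, done
```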
The proof of Proposition 2 can be seen as a corollary of the proof of Proposition 1.

Proof. From Lemma 1 we know that the monotonic improvement of the joint policy in the new MDP $\tilde{M}$ is guaranteed for each update of a single agent's policy. Hence, even if different rounds of updates in the new MDP $\tilde{M}$ use different orders of decision-making, the monotonic improvement of the joint policy is still guaranteed. Finally, from the proof of Proposition 1, the monotonic improvement in the new MDP $\tilde{M}$ is equivalent to the monotonic improvement in the original MDP $M$. This completes the proof.
B PROOF OF THEOREM 1
Lemma 2 (TVD of the joint distributions). Suppose we have two distributions $p_1(x, y) = p_1(x)\, p_1(y|x)$ and $p_2(x, y) = p_2(x)\, p_2(y|x)$. The total variation distance of the joint distributions can be bounded as:
$$D_{TV}(p_1(x,y) \,\|\, p_2(x,y)) \le D_{TV}(p_1(x) \,\|\, p_2(x)) + \max_x D_{TV}(p_1(y|x) \,\|\, p_2(y|x)).$$
Proof. See Janner et al. (2019), Lemma B.1.
Lemma 3 (Markov chain TVD bound, time-varying). Suppose the expected KL-divergence between two transitions is bounded as $\max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{KL}(p_1(s'|s) \,\|\, p_2(s'|s)) \le \delta$, and the initial state distributions are the same, $p_{1,t=0}(s) = p_{2,t=0}(s)$. Then the distance in the state marginals is bounded as:
$$D_{TV}(p_{1,t}(s) \,\|\, p_{2,t}(s)) \le t\delta.$$
Proof. See Janner et al. (2019), Lemma B.2.
Lemma 4 (Branched Returns Bound). Suppose the expected total variation between two dynamics distributions is bounded as $\max_t \mathbb{E}_{s \sim p_{1,t}(s)}[D_{TV}(p_1(s'|s,\boldsymbol{a}) \,\|\, p_2(s'|s,\boldsymbol{a}))] \le \epsilon_m$, and the policy divergences at level $k$ are bounded as $\max_{s,\boldsymbol{a}^{1:k-1}} D_{TV}(\pi_1(a^k|s,\boldsymbol{a}^{1:k-1}) \,\|\, \pi_2(a^k|s,\boldsymbol{a}^{1:k-1})) \le \epsilon_{\pi_k}$. Then the returns are bounded as:
$$|\eta_1 - \eta_2| \le \frac{2 r_{\max} \gamma \big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{2 r_{\max} \sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma},$$
where $r_{\max}$ is the upper bound of the reward function.
Proof. Here, $\eta_1$ denotes the return of $\pi_1$ under dynamics $p_1(s'|s,\boldsymbol{a})$, and $\eta_2$ denotes the return of $\pi_2$ under dynamics $p_2(s'|s,\boldsymbol{a})$. Then we have
$$\begin{aligned}
|\eta_1 - \eta_2| &= \Big|\sum_{s,\boldsymbol{a}} (p_1(s,\boldsymbol{a}) - p_2(s,\boldsymbol{a}))\, r(s,\boldsymbol{a})\Big| \\
&= \Big|\sum_t \sum_{s,\boldsymbol{a}} \gamma^t (p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a}))\, r(s,\boldsymbol{a})\Big| \\
&\le \sum_t \sum_{s,\boldsymbol{a}} \gamma^t |p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a})|\, r(s,\boldsymbol{a}) \\
&\le r_{\max} \sum_t \sum_{s,\boldsymbol{a}} \gamma^t |p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a})|.
\end{aligned}$$
By Lemma 2, we get
$$\begin{aligned}
\max_s D_{TV}(\pi_1(\boldsymbol{a}|s) \,\|\, \pi_2(\boldsymbol{a}|s)) &\le \max_{s,a^1} D_{TV}(\pi_1(\boldsymbol{a}^{-1}|s,a^1) \,\|\, \pi_2(\boldsymbol{a}^{-1}|s,a^1)) + \max_s D_{TV}(\pi_1(a^1|s) \,\|\, \pi_2(a^1|s)) \\
&\le \cdots \\
&\le \sum_{k=1}^n \max_{s,\boldsymbol{a}^{1:k-1}} D_{TV}(\pi_1(a^k|s,\boldsymbol{a}^{1:k-1}) \,\|\, \pi_2(a^k|s,\boldsymbol{a}^{1:k-1})) \\
&\le \sum_{k=1}^n \epsilon_{\pi_k}.
\end{aligned}$$
We then apply Lemma 3 with $\delta = \epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}$ (via Lemmas 3 and 2) to get
$$\begin{aligned}
D_{TV}(p_{1,t}(s) \,\|\, p_{2,t}(s)) &\le t \max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s) \,\|\, p_{2,t}(s'|s)) \\
&\le t \max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{TV}(p_{1,t}(s',\boldsymbol{a}|s) \,\|\, p_{2,t}(s',\boldsymbol{a}|s)) \\
&\le t \Big( \max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s,\boldsymbol{a}) \,\|\, p_{2,t}(s'|s,\boldsymbol{a})) + \max_t \mathbb{E}_{s \sim p_{1,t}(s)} \max_s D_{TV}(\pi_{1,t}(\boldsymbol{a}|s) \,\|\, \pi_{2,t}(\boldsymbol{a}|s)) \Big) \\
&\le t \Big( \epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k} \Big).
\end{aligned}$$
By Lemma 2 we also get $D_{TV}(p_{1,t}(s,\boldsymbol{a}) \,\|\, p_{2,t}(s,\boldsymbol{a})) \le t(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}) + \sum_{k=1}^n \epsilon_{\pi_k}$. Thus, plugging this back, we get:
$$\begin{aligned}
|\eta_1 - \eta_2| &\le r_{\max} \sum_t \sum_{s,\boldsymbol{a}} \gamma^t |p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a})| \\
&\le 2 r_{\max} \sum_t \gamma^t \Big( t\big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big) + \sum_{k=1}^n \epsilon_{\pi_k} \Big) \\
&\le 2 r_{\max} \left( \frac{\gamma \big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{\sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma} \right).
\end{aligned}$$
Then we can show the proof of Theorem 1.
Proof. Let $\pi_\beta$ denote the data-collecting policy. We use Lemma 4 to bound the returns, but it requires bounded model error under the new policy $\pi$. Thus, we introduce $\pi_\beta$ by adding and subtracting $\eta[\pi_\beta]$, to get:
$$\hat{\eta}[\pi] - \eta[\pi] = \underbrace{\hat{\eta}[\pi] - \eta[\pi_\beta]}_{L_1} + \underbrace{\eta[\pi_\beta] - \eta[\pi]}_{L_2}.$$
We can bound $L_1$ and $L_2$ using Lemma 4 with $\delta = \sum_{k=1}^n \epsilon_{\pi_k}$ and $\delta = \epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}$, respectively, and obtain:
$$L_1 \ge -\frac{2\gamma r_{\max} \sum_{k=1}^n \epsilon_{\pi_k}}{(1-\gamma)^2} - \frac{2 r_{\max} \sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma},$$
$$L_2 \ge -\frac{2\gamma r_{\max} \big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} - \frac{2 r_{\max} \sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma}.$$
Adding these two bounds together yields the conclusion.
[Figure 8: the learned priority of decision-making; upper panels (a-e): PP, lower panels (a-e): CN, with agent levels annotated.]
C ADDITIONAL EXPERIMENTS
C.1 ILLUSTRATION OF LEARNED PRIORITY OF DECISION-MAKING
Figure 8 (upper panel, a to e) shows the priority order of decision-making determined by SeqComm in PP. Agent 2, which is far away from the preys and the other predators, is chosen to be the first-mover. If agents want to encircle and capture the preys, the agents on the periphery of the encircling circle (e.g., agents 2 and 5) should hold upper-level positions, since they are able to decide how to narrow the encirclement. In addition, agent 3 makes decisions prior to agent 5 so that a collision can be avoided once agent 5 obtains the intention of agent 3.

For CN, as illustrated in Figure 8 (lower panel, a to e), agent 2 is far away from all the landmarks, and all other agents are better positioned to occupy landmarks. Therefore, agent 2 is chosen to be the first-mover, similar to the phenomenon observed in PP. Once it has determined the target to occupy, the other agents (agents 5 and 3) can adjust their actions accordingly and avoid conflicting goals. Otherwise, if agent 5 made a decision first and chose to occupy the closest landmark, agent 2 would have to approach a farther landmark, which would take more steps.
C.2 GENERALIZATION
Generalization to different numbers of agents has always been a key problem in MARL. For most communication algorithms, once the model is trained in one scenario, agents are unlikely to maintain competitive performance in other scenarios with different numbers of agents. However, since we employ attention modules to process communicated messages, agents can handle messages of different lengths. In addition, the module used to determine the priority of decision-making is not restricted by the number of agents. Thus, we investigate whether SeqComm generalizes well to different numbers of agents in CN and PP.
For both tasks, SeqComm is trained in the 5-agent setting. Then, we test SeqComm in the 3-agent and 7-agent settings of CN and the 7-agent setting of PP. We use Fix-C trained directly on these test tasks as a reference for the performance of SeqComm. Note that the number of landmarks and preys is adjusted according to the number of agents in CN and PP. The test results are shown in Table 1. SeqComm exhibits superiority in CN and PP, demonstrating that SeqComm may generalize well to the number of agents. A thorough study of the generalization of SeqComm is left to future work.
C.3 MORE SMAC MAPS
We have evaluated our method on two additional maps, i.e., 3s vs 4z and corridor. As illustrated in Figure 9, we can draw conclusions similar to those in Section 5.1.
D ADDITIONAL RELATED WORK
Multi-Agent Path Finding (MAPF). MAPF aims to plan collision-free paths for multiple agents on a given graph from their given start vertices to target vertices. In MAPF, prioritized planning is deeply coupled with collision avoidance (Van Den Berg & Overmars, 2005; Ma et al., 2019), where collision is used to design constraints or heuristics for planning. Unlike MAPF, our method couples the priority of decision-making with the learning objective and thus is more general. In addition, the different motivations and problem settings may lead to the incompatibility of the methods in the two fields.
Reinforcement Learning in Stackelberg Games. Many studies (Könönen, 2004; Sodomka et al., 2013; Greenwald et al., 2003; Zhang et al., 2020) have investigated reinforcement learning for finding the Stackelberg equilibrium. Bi-AC (Zhang et al., 2020) is a bi-level actor-critic method that allows agents to have different knowledge bases so that the Stackelberg equilibrium (SE) can be found, while actions can still be executed simultaneously and distributedly; it empirically studies the relationship between the cooperation level and the superiority of the SE over the Nash equilibrium. AQL (Könönen, 2004) updates the Q-value by solving the SE in each iteration and can be regarded as the value-based version of Bi-AC. Existing work mainly focuses on two-agent settings where the order is fixed in advance. However, a fixed order can hardly be an optimal solution, as shown in Section 3. To address this issue, we exploit agents' intentions to dynamically determine the priority of decision-making along the way of interacting with each other.
E EXPERIMENTAL SETTINGS
In cooperative navigation, there are 5 agents and the size of each is 0.15. They need to occupy 5 landmarks with the size of 0.05. The acceleration of agents is 7. In predator-prey, the number of predators (agents) and prey is set to 5 and 3, respectively, and their sizes are 0.15 and 0.05. The acceleration is 5 for predators and 7 for prey. In keep away, the number of attackers (agents) and defenders is set to 3, and their sizes are respectively 0.15 and 0.05. Besides, the acceleration is 6
for attackers and 4 for defenders. The three landmarks are located at $(0.00, 0.30)$, $(0.25, -0.15)$, and $(-0.25, -0.15)$. Note that each agent is allowed to communicate with all other agents in all three tasks. The team reward is similar across tasks; at timestep $t$, it can be written as
$$r^t_{\text{team}} = -\sum_{i=1}^n d^t_i + C^t r_{\text{collision}},$$
where $d^t_i$ is the distance of landmark/prey $i$ to its nearest agent/predator, $C^t$ is the number of collisions (when the distance between two agents is less than the sum of their sizes) occurring at timestep $t$, and $r_{\text{collision}} = -1$. In addition, agents act discretely with 5 actions (stay and move up, down, left, right). The episode length is 20, 30, and 20 in cooperative navigation, predator-prey, and keep-away, respectively.
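A direct transcription of this reward as a sketch (the geometry helpers are our own; positions are NumPy arrays of shape (m, 2) for landmarks/preys and (n, 2) for agents):

```python
import numpy as np

def team_reward(targets, agents, sizes, r_collision=-1.0):
    """r_team^t = -sum_i d_i^t + C^t * r_collision, where d_i^t is the distance
    of target i (landmark/prey) to its nearest agent and C^t counts pairwise
    agent collisions (distance below the sum of the two agents' sizes)."""
    d = np.linalg.norm(targets[:, None, :] - agents[None, :, :], axis=-1)
    nearest = d.min(axis=1).sum()                       # sum_i d_i^t
    diff = np.linalg.norm(agents[:, None] - agents[None, :], axis=-1)
    touch = diff < (sizes[:, None] + sizes[None, :])    # includes the diagonal
    collisions = (touch.sum() - len(agents)) // 2       # drop self, dedupe pairs
    return -nearest + collisions * r_collision
```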
F IMPLEMENTATION DETAILS
F.1 ARCHITECTURE AND HYPERPARAMETERS
Our models, including SeqComm, Fix-C, and Random-C are trained based on MAPPO. The critic and policy network are realized by two fully connected layers. As for the attention module, key, query, and value have one fully connected layer each. The size of hidden layers is 100. Tanh functions are used as nonlinearity. For I2C, we use their official code with default settings of basic hyperparameters and networks. As there is no released code of IS and TarMAC, we implement IS and TarMAC by ourselves, following the instructions mentioned in the original papers (Kim et al., 2021; Das et al., 2019).
For the world model, observations and actions are firstly encoded by a fully connected layer. The output size for the observation encoder is 48, and the output size for the action encoder is 16. Then the outputs of the encoder will be passed into the attention module with the same structure aforementioned. Finally, we use a fully connected layer to decode. In these layers, Tanh is used as the nonlinearity.
Table 2 summarizes the hyperparameters used by SeqComm and the baselines in MPE.
For SMAC, SeqComm, Random-C, and Fix-C share the same architecture, and the hyperparameters stay the same. For MMM2, 6h vs 8z, and 8m vs 9m, the learning rate is 5e−5, while for 10m vs 11m, corridor, and 3s vs 4z, the learning rate is 7e−5. The number of PPO epochs is set to 10 for 6h vs 8z and 5 for the rest of the maps. H and F are set to 5 and 1, respectively; 20 and 2 are better values for H and F if computing resources are sufficient.
For TarMAC, the learning rate is 7e−5 for all maps. The number of PPO epochs is set to 10 for 6h vs 8z and 5 for the rest of the maps.
For MAPPO, the learning rate is 5e−5 for MMM2 and 6z vs 8z, and 7e−5 for 8m vs 9m and 10m vs 11m.
For these four methods, the mini batch is set to 1. As for other hyperparameters, we follow the default settings of the official code (Yu et al., 2021).
For QMIX, the learning rate is 5e−5, the exploration ε is 1, and the batch size is 32. The buffer size is 5e3. For the others, we follow the default settings of https://github.com/starry-sky6688/MARL-Algorithms.git.
F.2 ATTENTION MODULE
Attention module (AM) is applied to process messages in the world model, critic network, and policy network. AM consists of three components: query, key, and values. The output of AM is the weighted sum of values, where the weight of value is determined by the dot product of the query and the corresponding key.
For the AM in the world model, denoted $\mathrm{AM}_w$: agent $i$ receives messages $m^{-i}_t = \boldsymbol{h}^{-i}_t$ from all other agents at timestep $t$ in the negotiation phase, and predicts a query vector $q^i_t$ via $\mathrm{AM}^i_{w,q}(h^i_t)$. The query is used to compute dot products with the keys $\boldsymbol{k}_t = [k^1_t, \cdots, k^n_t]$, where $k^j_t$ is obtained from the message of agent $j$ via $\mathrm{AM}^i_{w,k}(h^j_t)$ for $j \ne i$, and $k^i_t$ from $\mathrm{AM}^i_{w,k}(h^i_t)$. Each dot product is scaled by $1/\sqrt{d_k}$, followed by a softmax to obtain the attention weights $\alpha$ over the value vectors:
$$\alpha^i = \mathrm{softmax}\Bigg[\frac{{q^i_t}^{\!\top} k^1_t}{\sqrt{d_k}}, \cdots, \underbrace{\frac{{q^i_t}^{\!\top} k^j_t}{\sqrt{d_k}}}_{\alpha_{ij}}, \cdots, \frac{{q^i_t}^{\!\top} k^n_t}{\sqrt{d_k}}\Bigg] \quad (1)$$
The output of the attention module is defined as $c^i_t = \sum_{j=1}^n \alpha_{ij} v^j_t$, where $v^j_t$ is obtained from the messages, or from the agent's own hidden state of observation, via $\mathrm{AM}^i_{w,v}(\cdot)$. As for the AM in the policy and critic networks, denoted $\mathrm{AM}_a$, agent $i$ additionally receives messages from the upper-level agents in the launching phase. The messages from upper-level and lower-level agents can be expanded as $m^{\text{upper}}_t = [h^{\text{upper}}_t, a^{\text{upper}}_t]$ and $m^{\text{lower}}_t = [h^{\text{lower}}_t, 0]$, respectively. In addition, the query depends on the agent's own hidden state of observation $h^i_t$, while the keys and values come only from the messages of other agents.
F.3 TRAINING
The training of SeqComm is an extension of MAPPO. The observation encoder e, the critic V , and the policy π are respectively parameterized by θe, θv, θπ. Besides, the attention module AMa is parameterized by θa and takes as input the agent’s hidden state, the messages (hidden states of other agents) in the negotiation phase, and the messages (the actions of upper-level agents) in launching phase. Let D = {τk}Kk=1 be a set of trajectories by running policy in the environment. Note that we drop time t in the following notations for simplicity.
The value function is fitted by regression on mean-squared error:
$$L(\theta_v, \theta_a, \theta_e) = \frac{1}{KT} \sum_{\tau \in D} \sum_{t=0}^{T-1} \Big\| V\big(\mathrm{AM}_a(e(o), \boldsymbol{a}^{\text{upper}})\big) - \hat{R} \Big\|_2^2 \quad (2)$$
where $\hat{R}$ is the discounted rewards-to-go.
We update the policy by maximizing the PPO-Clip objective:
$$L(\theta_\pi, \theta_a, \theta_e) = \frac{1}{KT} \sum_{\tau \in D} \sum_{t=0}^{T-1} \min\Bigg( \frac{\pi\big(a \,|\, \mathrm{AM}_a(e(o), \boldsymbol{a}^{\text{upper}})\big)}{\pi_{\text{old}}\big(a \,|\, \mathrm{AM}_a(e(o), \boldsymbol{a}^{\text{upper}})\big)} A_{\pi_{\text{old}}},\; g(\epsilon, A_{\pi_{\text{old}}}) \Bigg) \quad (3)$$
where $g(\epsilon, A) = \begin{cases} (1+\epsilon)A & A \ge 0 \\ (1-\epsilon)A & A < 0 \end{cases}$, and $A_{\pi_{\text{old}}}(o, \boldsymbol{a}^{\text{upper}}, a)$ is computed using the GAE method.
The world model $\mathcal{M}$, parameterized by $\theta_w$, is trained as a regression model on the training dataset $S$. It is updated with the loss:
$$L(\theta_w) = \frac{1}{|S|} \sum_{(o,\boldsymbol{a},o',r) \in S} \Big\| (o', r) - \mathcal{M}\big(\mathrm{AM}_w(e(o), \boldsymbol{a})\big) \Big\|_2^2. \quad (4)$$
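The three objectives (Eqs. 2-4) can be written compactly. Below is a NumPy sketch over pre-computed batches, with the encoder and attention passes abstracted away as inputs (an illustration, not the authors' code):

```python
import numpy as np

def value_loss(v_pred, returns):
    """Eq. (2): mean squared error between V(AM_a(...)) and rewards-to-go."""
    return np.mean((v_pred - returns) ** 2)

def ppo_clip_loss(logp_new, logp_old, adv, eps=0.2):
    """Eq. (3): clipped surrogate; we maximize it, so return the negative mean."""
    ratio = np.exp(logp_new - logp_old)
    clipped = np.where(adv >= 0, (1 + eps) * adv, (1 - eps) * adv)  # g(eps, A)
    return -np.mean(np.minimum(ratio * adv, clipped))

def world_model_loss(pred_obs, pred_r, next_obs, rewards):
    """Eq. (4): regression of M(AM_w(e(o), a)) onto the targets (o', r)."""
    return np.mean(np.sum((pred_obs - next_obs) ** 2, axis=-1)
                   + (pred_r - rewards) ** 2)
```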
We trained our model on one GeForce GTX 1050 Ti and Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz. | 1. What is the focus of the paper regarding multi-agent communication schemes?
2. What are the strengths and weaknesses of the proposed approach, particularly in its setting and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Summary
The paper presents SeqComm, a multi-agent communication scheme allowing agents to condition on one another's actions by imposing ordering over the agents. The paper introduces multi-agent sequential decision and demonstrates that ordering in this paradigm can affect the optimality of the learnt policy.
The authors then present SeqComm. Each agent in SeqComm learns a policy which conditions on the joint hidden state and other agents' actions via an attention mechanism. The ordering is chosen by weighing the value of each agent's intention, which is the paper defines as the agent's future behaviour without considering the action of others. In the second phase of communication, the agents the produce a joint action. The authors then prove that monotonic improvement of the policy is independent of priority, and provide a bound on the performance loss associated with using a world model for the ordering.
The authors then evaluate SeqComm on MPE and SMAC tasks.
Strengths And Weaknesses
Strengths
The setting of adding some communication to CTDE is an interesting way to alleviate miscoordination and work in this area is welcome.
The paper is quite clearly written.
The authors include ablation studies to evaluate why their method works.
The matrix game in Figure 1 nicely illustrates why an order is theoretically useful.
Weaknesses
I'm not sure that I understand the setting. If agents can broadcast functions of their observations to all other agents, then how is that different from joint learning, where all agents can view the same joint observation? This seems to not be the CTDE setting at all to me, but instead joint learning. I understand that TarMAC [1] adopts a similar setting, but I would appreciate some clarification from the authors on how this differs from joint learning.
The communication-free baselines MAPPO and IPPO are not a fair comparison, and it is not clear that SeqComm's performance is better TarMAC or the random-priority ablation. The authors claim that their method clearly outperforms the ablations and TarMAC, but the gap is only slight and seems to be mostly within a standard deviation.
The authors do not compare with a centralised method such as PPO (but conditioning the policy on the joint observation and outputting the joint action). This seems strange given this is an obvious alternative to the method, and would require approximately the same communication.
The empirical evaluation is only over 4 SMAC maps, and no further results are included. This does not seem enough to provide convincing evidence of outperformance.
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is fairly clearly written. I do not believe that the authors open-source their code, which makes reproducibility difficult. It is also not clear to me which implementation of the QMIX and MAPPO baselines is used or how hyperparameters were chosen for the baselines. |
In this paper, we cast our eyes in another direction and resort to the world model. Ideally, we can randomly sample candidate order sequences, evaluate them under the world model (see Section 4.1), and choose the order sequence that is deemed the most promising under the true dynamic. SeqComm is designed based on this principle to determine the priority of decision-making via communication.
SeqComm adopts a multi-round communication mechanism, i.e., agents are allowed to communicate with others in multiple rounds. Importantly, communication is separated into phases serving different purposes. One is the negotiation phase for agents to determine the priority of decision-making. Another is the launching phase for agents to act conditioning on actual actions upper-level agents will take to implement explicit coordination via communication. The overview of SeqComm is illustrated in Figure 2. Each SeqComm agent consists of a policy, a critic, and a world model, as illustrated in Figure 3, and the parameters of all networks are shared across agents (Gupta et al., 2017).
Under review as a conference paper at ICLR 2023
agent 1 agent 1’s obs agent 2 agent 3 agent 4 re qu es t re pl y
agent 1a gent 1’s ob s agent 2agent 3 agent 4 reply re qu es t request request re pl yreply Agent 1 chooses to send request to agent 2 and ignore
agent 3
1
2 3 4
1 2 3 4 Agent 1 chooses to send request to agent 2, 3, 4 t t+1
B C
BC
order set
AsA CsC BsB B C 1 BC 2
r1 r2 intention reward
A
C
B
2
A C B
1
aC
A C B
2
aA A C
B
3
aB
aA aC
A
C
Baction stateagent action stateagent
4.1 NEGOTIATION PHASE
In the negotiation phase, the observation encoder first takes ot as input and outputs a hidden state ht, which is used to communicate with others. Agents then determine the priority of decision-making by intention which is established and evaluated based on the world model.
World Model. The world model is needed to predict and evaluate future trajectories. SeqComm, unlike previous works (Kim et al., 2021; Du et al., 2021; Pretorius et al., 2021), can utilize received hidden states of other agents in the first round of communication to model more precise environment dynamics for the explicit coordination in the next round of communication. Once an agent can access other agents’ hidden states, it shall have adequate information to estimate their actions since all agents are homogeneous and parameter-sharing. Therefore, the world modelM(·) takes as input the joint hidden states ht = {h1t , . . . , hnt } and actions at, and predicts the next joint observations and reward,
ôt+1, r̂t+1 =Mi(AMw(ht,at)), where AMw is the attention module. The reason that we adopt the attention module is to entitle the world model to be generalizable in the scenarios where additional agents are introduced or existing agents are removed.
Priority of Decision-Making. The intention is the key element to determine the priority of decision-making. The notion of intention is described as an agent’s future behavior in previous works (Rabinowitz et al., 2018; Raileanu et al., 2018; Kim et al., 2021). However, we define the intention as an agent’s future behavior without considering others.
As mentioned before, an agent’s intention considering others can lead to circular dependencies and cause miscoordination. By our definition, the intention of an agent should be depicted as all future trajectories considering that agent as the first-mover and ignoring the others. However, there are many possible future trajectories as the priority of the rest agents is unfixed. In practice, we use the Monte Carlo method to evaluate intention.
Taking agent i at timestep t to illustrate, it firstly considers itself as the first-mover and produces its action only based on the joint hidden states, âit ∼ πi(·|AMa(ht)), where we again use an
attention module AMa to handle the input. For the order sequence of lower-level agents, we randomly sample a set of order sequences from unfixed agents. Assume agent j is the second-mover, agent i models j’s action by considering the upper-level action following its own policy âjt ∼ πi(·|AMa(ht, âit)). The same procedure is applied to predict the actions of all other agents following the sampled order sequence. Based on the joint hidden states and predicted actions, the next joint observations ôt+1 and corresponding reward r̂t+1 can be predicted by the world model. The length of the predicted future trajectory isH and it can then be written as τ t = {ôt+1, ât+1, . . . , ôt+H , ât+H} by repeating the procedure aforementioned and the value of one trajectory is defined as the return of that trajectory vτt = ∑t+H t′=t+1 γ
t′−t−1r̂t′/H . In addition, the intention value is defined as the average value of F future trajectories with different sampled order sequences. The choice of F is a tradeoff between the computation overhead and the accuracy of the estimation.
After all the agents have computed their own intention and the corresponding value, they again communicate their intention values to others. Then agents would compare and choose the agent with the highest intention value to be the first-mover. The priority of lower-level decision-making follows the same procedure with the upper-level agents fixed. Note that some agents are required to communicate intention values with others multiple times until the priority of decision-making is finally determined.
4.2 LAUNCHING PHASE
As for the launching phase, agents communicate for obtaining additional information to make decisions. Apart from the received hidden states from the last phase, we allow agents to get what actual actions the upper-level agents will take in execution, while other studies can only infer others’ actions by opponent modeling (Rabinowitz et al., 2018; Raileanu et al., 2018) or communicating intentions (Kim et al., 2021). Therefore, miscoordination can be naturally avoided and a better cooperation strategy is possible since lower-level agents can adjust their behaviors accordingly. A lower-level agent i make a decision following the policy πi(·|AMa(ht,auppert )), where a upper t means received actual actions from all upper-level agents. As long as the agent has decided its action, it will send its action to all other lower-level agents by the communication channel. Note that the actions are executed simultaneously and distributedly in execution, though agents make decisions sequentially.
Communication Overhead. The two communication phases alternate until all agents have determined their levels and obtained the upper-level actions. Note that many previous works also adopt a multi-round communication scheme (Das et al., 2019; Singh et al., 2019). In practice, compared with communicating high-dimensional hidden states/observations over multiple rounds (Das et al., 2019; Singh et al., 2019) or transferring multi-step trajectories (Kim et al., 2021), SeqComm needs more rounds but transmits the hidden states only once. In the remaining $n-1$ rounds of communication, with $(n-1)/2$ broadcasts per agent on average, only a single intention value and an action are exchanged. Considering that there are $n!$ permutations of order choices for $n$ agents, our method greatly reduces computation overhead, since each agent needs to evaluate at most $n$ candidates to settle on a satisfying order. Although SeqComm is more suitable for latency-tolerant MARL tasks, e.g., power dispatch (minutes) (Wang et al., 2021a), inventory management (hours) (Feng et al., 2021), and maritime transportation (days) (Li et al., 2019), a wider range of applications is possible given the rapid development of communication technology, e.g., 5G.
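As a back-of-the-envelope illustration of the quantities above (our numbers, purely illustrative):

```python
import math

n = 5  # number of agents
print("order permutations to search exhaustively:", math.factorial(n))  # 120
print("intention evaluations per agent (at most):", n)                  # 5
print("rounds after hidden states are shared once:", n - 1)             # 4
print("average value broadcasts per agent:", (n - 1) / 2)               # 2.0
```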
4.3 THEORETICAL ANALYSIS
As the priority of decision-making is determined by intention values, SeqComm is likely to choose different orders at different timesteps during training. However, we have the following proposition that theoretically guarantees the performance of the learned joint policy under SeqComm.
Proposition 2. The monotonic improvement and convergence of the joint policy in SeqComm are independent of the priority of decision-making of agents at each timestep.
Proof. The proof is given in Appendix A.
The priority of decision-making is chosen under the world model, thus the compounding errors in the world model can result in discrepancies between the predicted returns of the same order under the world model and under the true dynamics. We then analyze the monotonic improvement of the joint policy under the world model, following Janner et al. (2019).
Theorem 1. Let the expected total variation between two transition distributions be bounded at each timestep as $\max_t \mathbb{E}_{s\sim\pi_{\beta,t}}[D_{TV}(p(s'|s,\mathbf{a})\,\|\,\hat{p}(s'|s,\mathbf{a}))] \le \epsilon_m$, and the policy divergence at level $k$ be bounded as $\max_{s,\mathbf{a}^{1:k-1}} D_{TV}(\pi_{\beta,k}(a^k|s,\mathbf{a}^{1:k-1})\,\|\,\pi_k(a^k|s,\mathbf{a}^{1:k-1})) \le \epsilon_{\pi_k}$, where $\pi_\beta$ is the data-collecting policy for the model and $\hat{p}(s'|s,\mathbf{a})$ is the transition distribution under the model. Then the model return $\hat{\eta}$ and the true return $\eta$ of the policy $\pi$ are bounded as:
$$\hat{\eta}[\pi] \ge \eta[\pi] - \underbrace{\left[\frac{2\gamma r_{\max}\big(\epsilon_m + 2\sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{4 r_{\max}\sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma}\right]}_{C(\epsilon_m,\;\epsilon_{\pi_{1:n}})}$$
Proof. The proof is given in Appendix B.
Remark 2. Theorem 1 provides a useful relationship between the compounding errors and the policy update. As long as we improve the return under the true dynamics by more than the gap $C(\epsilon_m, \epsilon_{\pi_{1:n}})$, we can guarantee policy improvement under the world model. If no such policy exists to overcome the gap, the model error is too high, i.e., there is a large discrepancy between the world model and the true dynamics, and the order sequence obtained under the world model is not reliable; such an order sequence is almost the same as a random one. Though a random order sequence also enjoys the theoretical guarantee of Proposition 2, we show in Section 5.2 that a random order sequence leads to a poor local optimum empirically.
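To get a numeric feel for the gap $C(\epsilon_m, \epsilon_{\pi_{1:n}})$, here is a small calculation under assumed values (all numbers are illustrative, not taken from the paper); note how quickly the gap grows as $\gamma \to 1$:

```python
gamma, r_max, n = 0.99, 1.0, 5
eps_m = 0.01                 # assumed model error
eps_pi = [0.005] * n         # assumed per-level policy divergences
s = sum(eps_pi)
C = (2 * gamma * r_max * (eps_m + 2 * s) / (1 - gamma) ** 2
     + 4 * r_max * s / (1 - gamma))
print(f"C = {C:.1f}")        # 1198.0: the true-return improvement must exceed C
```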
5 EXPERIMENTS
Sequential communication (SeqComm) is currently instantiated based on MAPPO (Yu et al., 2021). We evaluate SeqComm on three tasks in multi-agent particle environment (MPE) (Lowe et al., 2017) and four maps in StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019).
For these experiments, we compare SeqComm against the following communication-free and communication-based baselines: MAPPO (Yu et al., 2021), QMIX (Rashid et al., 2018), IS (Kim et al., 2021), TarMAC (Das et al., 2019), and I2C (Ding et al., 2020). In more detail, IS communicates predicted future trajectories (observations and actions), and predictions are made by the environment model. TarMAC uses the attention model to focus more on important incoming messages (the hidden states of observations). TarMAC is reproduced based on MAPPO instead of A2C in the original paper for better performance. I2C infers one-to-one communication to reduce the redundancy of messages (also conditioning on observations).
In the experiments, all the methods are parameter-sharing for fast convergence. We have fine-tuned the baselines for a fair comparison. Please refer to Appendix E for experimental settings and Appendix F for implementation details. All results are presented in terms of the mean and standard deviation of five runs with different random seeds.
5.1 RESULTS
MPE. We experiment on predator-prey (PP), cooperative navigation (CN), and keep-away (KA) in MPE. In PP, five predators (agents) try to capture three prey. In CN, five agents try to occupy five landmarks. In KA, three attackers (agents) try to occupy three landmarks while three defenders try to push them away. In all three tasks, the size of the agents is set larger than in the original settings so that collisions occur more easily, following the settings in Kim et al. (2021). In addition, agents cannot observe any other agents, which makes the tasks more difficult and communication more important. Similar modifications appear in previous works (Foerster et al., 2016; Ding et al., 2020). After all, we want to demonstrate superiority over communication-based baselines, and communication-based methods are better suited to scenarios with limited vision. More details about experimental settings are available in Appendix E.
Figure 4 shows the learning curves of all the methods in terms of the mean reward averaged over timesteps in PP, CN, and KA. SeqComm converges to the highest mean reward among all the baselines, demonstrating its superiority. In more detail, all communication-based methods outperform MAPPO, indicating the necessity of communication in these difficult tasks. Apart from MAPPO, IS performs the worst since it may access inaccurate predicted information due to circular dependencies. The substantial improvement of SeqComm over I2C and TarMAC is attributed to the fact that SeqComm allows agents to obtain more valuable action information for explicit coordination. The agents learned by SeqComm show sophisticated coordination strategies induced by the priority of decision-making, as witnessed by the visualization of agent behaviors; more details are given in Appendix C. Note that QMIX is omitted from this comparison for clear presentation, since Yu et al. (2021) have shown that QMIX and MAPPO exhibit similar performance in various MPE tasks.
SMAC. We also evaluate SeqComm against the baselines on four customized maps in SMAC: 6h_vs_8z, MMM2, 10m_vs_11m, and 8m_vs_9m, where we make minor changes to the agents' observations to increase the difficulty. Specifically, the sight range of agents is reduced from 9 to 2, and agents cannot perceive any information about their allies even within the sight range. NDQ (Wang et al., 2020) adopts a similar change to increase the difficulty of action coordination and demonstrates that the miscoordination problem is widespread in multi-agent learning. The remaining settings are the defaults.
The learning curves of SeqComm and the baselines in terms of the win rate are illustrated in Figure 5. IS and I2C fail on these maps and obtain a zero win rate because both methods are built on MADDPG, which does not work well in SMAC, especially when the sight range of agents is reduced, as also reported in other studies (Papoudakis et al., 2021). SeqComm and TarMAC converge to better performance than MAPPO and QMIX, which demonstrates the benefit of communication. Moreover, SeqComm outperforms TarMAC, which again verifies the gain of explicit action coordination.
5.2 ABLATION STUDIES
Priority of Decision-Making. We compare SeqComm with two ablation baselines that differ only in the priority of decision-making: Fix-C, where the priority is fixed throughout an episode, and Random-C, where the priority is determined randomly at each timestep. TarMAC is also compared as a reference without explicit action coordination.
As depicted in Figure 6, SeqComm achieves a higher mean reward or win rate than Fix-C, Random-C, and TarMAC in all the tasks. These results verify the importance of the priority of decision-making and the necessity to continuously adjust it during one episode. It is also demonstrated that SeqComm can provide a proper priority of decision-making. As discussed in Section 4.3, although Fix-C and Random-C also have the theoretical guarantee, they converge to poor local optima in practice. Moreover, Fix-C and Random-C show better performance than TarMAC in most tasks. This result accords with the hypothesis that the SE is likely to be Pareto superior to the average NE in games with a high cooperation level. Additionally, the learned policy of SeqComm can generalize well to the same task with a different number of agents in MPE, which is detailed in Appendix C.
Communication Range. We also carry out ablation studies on the communication range in MPE tasks. Note that the communication range means how many nearest neighbors each agent is allowed to communicate with, following the setting in Ding et al. (2020). We reduce the communication range of SeqComm from 4 to 2 and 0. As there are only three agents in KA, it is omitted from this study. The results are shown in Figure 7. Communication-based agents perform better than communication-free agents, in accordance with many previous studies. More importantly, the superiority of SeqComm with communication range 2 over the corresponding TarMAC again demonstrates the effectiveness of sequential communication even under reduced communication ranges.
However, as the communication range decreases from 4 to 2, there is no performance reduction in these two MPE tasks. On the contrary, the agents with communication range 2 perform the best. This accords with the observations in I2C (Ding et al., 2020) and ATOC (Jiang & Lu, 2018) that redundant information can sometimes impair the learning process, although this conclusion might not hold in other settings. Moreover, since agents can obtain more information under our communication scheme, i.e., the actual actions of others, it is reasonable that SeqComm still outperforms other methods under reduced communication ranges.
6 CONCLUSIONS
We have proposed SeqComm, which enables agents to coordinate with each other explicitly. SeqComm takes an asynchronous perspective and allows agents to make decisions sequentially. A two-phase communication scheme is adopted to determine the priority of decision-making and to communicate messages accordingly. Theoretically, we prove that the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we demonstrate that SeqComm outperforms baselines in a variety of cooperative multi-agent tasks and provides a proper priority of decision-making.
A PROOFS OF PROPOSITION 1 AND PROPOSITION 2
Lemma 1 (Agent-by-Agent PPO). If we update the policy of each agent $i$ with TRPO (Schulman et al., 2015) (or approximately, PPO) while fixing all the other agents' policies, then the joint policy improves monotonically.
Proof. We consider the joint surrogate objective in TRPO, $L_{\pi_{\mathrm{old}}}(\pi_{\mathrm{new}})$, where $\pi_{\mathrm{old}}$ is the joint policy before the update and $\pi_{\mathrm{new}}$ is the joint policy after the update.
Given that $\pi^{-i}_{\mathrm{new}} = \pi^{-i}_{\mathrm{old}}$, we have:
$$\begin{aligned}
L_{\pi_{\mathrm{old}}}(\pi_{\mathrm{new}}) &= \mathbb{E}_{\mathbf{a}\sim\pi_{\mathrm{new}}}\big[A_{\pi_{\mathrm{old}}}(s,\mathbf{a})\big] \\
&= \mathbb{E}_{\mathbf{a}\sim\pi_{\mathrm{old}}}\Big[\frac{\pi_{\mathrm{new}}(\mathbf{a}|s)}{\pi_{\mathrm{old}}(\mathbf{a}|s)}\, A_{\pi_{\mathrm{old}}}(s,\mathbf{a})\Big] \\
&= \mathbb{E}_{\mathbf{a}\sim\pi_{\mathrm{old}}}\Big[\frac{\pi^i_{\mathrm{new}}(a^i|s)}{\pi^i_{\mathrm{old}}(a^i|s)}\, A_{\pi_{\mathrm{old}}}(s,\mathbf{a})\Big] \\
&= \mathbb{E}_{a^i\sim\pi^i_{\mathrm{old}}}\Big[\frac{\pi^i_{\mathrm{new}}(a^i|s)}{\pi^i_{\mathrm{old}}(a^i|s)}\, \mathbb{E}_{\mathbf{a}^{-i}\sim\pi^{-i}_{\mathrm{old}}}\big[A_{\pi_{\mathrm{old}}}(s, a^i, \mathbf{a}^{-i})\big]\Big] \\
&= \mathbb{E}_{a^i\sim\pi^i_{\mathrm{old}}}\Big[\frac{\pi^i_{\mathrm{new}}(a^i|s)}{\pi^i_{\mathrm{old}}(a^i|s)}\, A^i_{\pi_{\mathrm{old}}}(s, a^i)\Big] = L_{\pi^i_{\mathrm{old}}}(\pi^i_{\mathrm{new}}),
\end{aligned}$$
where $A^i_{\pi_{\mathrm{old}}}(s, a^i) = \mathbb{E}_{\mathbf{a}^{-i}\sim\pi^{-i}_{\mathrm{old}}}[A_{\pi_{\mathrm{old}}}(s, a^i, \mathbf{a}^{-i})]$ is the individual advantage of agent $i$, and the third equality follows from the condition $\pi^{-i}_{\mathrm{new}} = \pi^{-i}_{\mathrm{old}}$.
With the result of TRPO, we have the following conclusion:
$$\begin{aligned}
J(\pi_{\mathrm{new}}) - J(\pi_{\mathrm{old}}) &\ge L_{\pi_{\mathrm{old}}}(\pi_{\mathrm{new}}) - C\, D^{\max}_{KL}(\pi_{\mathrm{new}}\|\pi_{\mathrm{old}}) \\
&= L_{\pi^i_{\mathrm{old}}}(\pi^i_{\mathrm{new}}) - C\, D^{\max}_{KL}(\pi^i_{\mathrm{new}}\|\pi^i_{\mathrm{old}}) \quad (\text{from } \pi^{-i}_{\mathrm{new}} = \pi^{-i}_{\mathrm{old}})
\end{aligned}$$
This means the individual objective coincides with the joint objective, so monotonic improvement is guaranteed.
Then we can show the proof of Proposition 1.
Proof. We build a new MDP $\tilde{M}$ based on the original MDP. We keep the action space $\tilde{A} = A = \times_{i=1}^n A^i$, where $A^i$ is the original action space of agent $i$. The new state space contains multiple layers. We define $\tilde{S}^k = S \times (\times_{i=1}^k A^i)$ for $k = 1, 2, \ldots, n-1$ and $\tilde{S}^0 = S$, where $S$ is the original state space. A new state $\tilde{s}^k \in \tilde{S}^k$ thus takes the form $\tilde{s}^k = (s, a^1, a^2, \ldots, a^k)$. The total new state space is $\tilde{S} = \cup_{k=0}^{n-1}\tilde{S}^k$. Next we define the transition probability $\tilde{P}$ as follows:
$$\tilde{P}(\tilde{s}'|\tilde{s}^k, a^{k+1}, \mathbf{a}^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' = (\tilde{s}^k, a^{k+1})\big), \quad k < n-1,$$
$$\tilde{P}(\tilde{s}'|\tilde{s}^k, a^{k+1}, \mathbf{a}^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' \in \tilde{S}^0\big)\, P(\tilde{s}'|\tilde{s}^k, a^{k+1}), \quad k = n-1.$$
That is, a state in layer $k$ can only transition to the state in layer $k+1$ obtained by appending the corresponding action, and a state in layer $n-1$ transitions back to layer $0$ with the probability $P$ of the original MDP. The reward function $\tilde{r}$ is defined as:
$$\tilde{r}(\tilde{s}, \mathbf{a}) = \mathbb{1}\big(\tilde{s} \in \tilde{S}^0\big)\, r(\tilde{s}, \mathbf{a}).$$
That is, the reward is only obtained when the state is in layer $0$, and its value equals that of the original reward function. We thus obtain the full definition of the new MDP $\tilde{M} = \{\tilde{S}, \tilde{A}, \tilde{P}, \tilde{r}, \gamma\}$. We then claim that if all agents learn in multi-agent sequential decision-making by PPO, they are actually performing agent-by-agent PPO in the new MDP $\tilde{M}$. To be precise, one update of multi-agent sequential decision-making in the original MDP $M$ is equivalent to a round of updates from agent $1$ to agent $n$ by agent-by-agent PPO in the new MDP $\tilde{M}$. Moreover, the total reward of a round in the new MDP $\tilde{M}$ is the same as the reward of one timestep in the original MDP $M$. With this conclusion and Lemma 1, we complete the proof.
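The layered construction can be made concrete with a small sketch (ours, with hypothetical `P` and `r` callables for the original MDP's transition sampler and reward function). Reward is accounted once per round of $n$ micro-steps, matching the per-timestep reward of the original MDP as claimed above:

```python
def layered_mdp_step(state_tilde, action, n_agents, P, r):
    """One micro-step of the new MDP: state_tilde = (s, (a1, ..., ak)).
    Layers k < n-1 deterministically append the chosen action; layer n-1
    applies the original transition P and pays the original reward r."""
    s, prefix = state_tilde
    if len(prefix) < n_agents - 1:
        return (s, prefix + (action,)), 0.0       # layer k -> layer k+1
    joint = prefix + (action,)                    # complete joint action
    return (P(s, joint), ()), r(s, joint)         # back to layer 0
```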
The proof of Proposition 2 can be seen as a corollary of the proof of Proposition 1.
Proof. From Lemma 1 we know that monotonic improvement of the joint policy in the new MDP $\tilde{M}$ is guaranteed for each update of a single agent's policy. Hence, even if different rounds of updates in the new MDP $\tilde{M}$ use different orders of decision-making, the monotonic improvement of the joint policy still holds. Finally, from the proof of Proposition 1, monotonic improvement in the new MDP $\tilde{M}$ is equivalent to monotonic improvement in the original MDP $M$. This completes the proof.
B PROOF OF THEOREM 1
Lemma 2 (TVD of the joint distributions). Suppose we have two distributions $p_1(x, y) = p_1(x)\,p_1(y|x)$ and $p_2(x, y) = p_2(x)\,p_2(y|x)$. We can bound the total variation distance of the joint distributions as:
$$D_{TV}(p_1(x,y)\,\|\,p_2(x,y)) \le D_{TV}(p_1(x)\,\|\,p_2(x)) + \max_x D_{TV}(p_1(y|x)\,\|\,p_2(y|x)).$$
Proof. See (Janner et al., 2019) (Lemma B.1).
Lemma 3 (Markov chain TVD bound, time-varying). Suppose the expected total variation distance between two transition distributions is bounded as $\max_t \mathbb{E}_{s\sim p_{1,t}(s)}[D_{TV}(p_1(s'|s)\,\|\,p_2(s'|s))] \le \delta$, and the initial state distributions are the same, $p_{1,t=0}(s) = p_{2,t=0}(s)$. Then the distance between the state marginals is bounded as:
$$D_{TV}(p_{1,t}(s)\,\|\,p_{2,t}(s)) \le t\delta.$$
Proof. See (Janner et al., 2019) (Lemma B.2).
Lemma 4 (Branched Returns Bound). Suppose the expected total variation distance between two dynamics distributions is bounded as $\max_t \mathbb{E}_{s\sim p_{1,t}(s)}[D_{TV}(p_1(s'|s,\mathbf{a})\,\|\,p_2(s'|s,\mathbf{a}))] \le \epsilon_m$, and the policy divergence at level $k$ is bounded as $\max_{s,\mathbf{a}^{1:k-1}} D_{TV}(\pi_1(a^k|s,\mathbf{a}^{1:k-1})\,\|\,\pi_2(a^k|s,\mathbf{a}^{1:k-1})) \le \epsilon_{\pi_k}$. Then the returns are bounded as:
$$|\eta_1 - \eta_2| \le \frac{2 r_{\max}\gamma\big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{2 r_{\max}\sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma},$$
where $r_{\max}$ is the upper bound of the reward function.
Proof. Here, $\eta_1$ denotes the return of $\pi_1$ under dynamics $p_1(s'|s,\mathbf{a})$, and $\eta_2$ denotes the return of $\pi_2$ under dynamics $p_2(s'|s,\mathbf{a})$. Then we have:
$$\begin{aligned}
|\eta_1 - \eta_2| &= \Big|\sum_{s,\mathbf{a}} \big(p_1(s,\mathbf{a}) - p_2(s,\mathbf{a})\big)\, r(s,\mathbf{a})\Big| \\
&= \Big|\sum_t \sum_{s,\mathbf{a}} \gamma^t \big(p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a})\big)\, r(s,\mathbf{a})\Big| \\
&\le \sum_t \sum_{s,\mathbf{a}} \gamma^t \big|p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a})\big|\, r(s,\mathbf{a}) \\
&\le r_{\max} \sum_t \sum_{s,\mathbf{a}} \gamma^t \big|p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a})\big|.
\end{aligned}$$
By Lemma 2, we get:
$$\begin{aligned}
\max_s D_{TV}(\pi_1(\mathbf{a}|s)\,\|\,\pi_2(\mathbf{a}|s)) &\le \max_{s,a^1} D_{TV}(\pi_1(\mathbf{a}^{-1}|s,a^1)\,\|\,\pi_2(\mathbf{a}^{-1}|s,a^1)) + \max_s D_{TV}(\pi_1(a^1|s)\,\|\,\pi_2(a^1|s)) \\
&\le \cdots \\
&\le \sum_{k=1}^n \max_{s,\mathbf{a}^{1:k-1}} D_{TV}(\pi_1(a^k|s,\mathbf{a}^{1:k-1})\,\|\,\pi_2(a^k|s,\mathbf{a}^{1:k-1})) \\
&\le \sum_{k=1}^n \epsilon_{\pi_k}.
\end{aligned}$$
We then apply Lemma 3, using $\delta = \epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}$ (via Lemmas 3 and 2), to get:
$$\begin{aligned}
D_{TV}(p_{1,t}(s)\,\|\,p_{2,t}(s)) &\le t \max_t \mathbb{E}_{s\sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s)\,\|\,p_{2,t}(s'|s)) \\
&\le t \max_t \mathbb{E}_{s\sim p_{1,t}(s)} D_{TV}(p_{1,t}(s',\mathbf{a}|s)\,\|\,p_{2,t}(s',\mathbf{a}|s)) \\
&\le t\Big(\max_t \mathbb{E}_{s\sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s,\mathbf{a})\,\|\,p_{2,t}(s'|s,\mathbf{a})) + \max_t \mathbb{E}_{s\sim p_{1,t}(s)} \max_s D_{TV}(\pi_{1,t}(\mathbf{a}|s)\,\|\,\pi_{2,t}(\mathbf{a}|s))\Big) \\
&\le t\Big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\Big).
\end{aligned}$$
We also get $D_{TV}(p_{1,t}(s,\mathbf{a})\,\|\,p_{2,t}(s,\mathbf{a})) \le t(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}) + \sum_{k=1}^n \epsilon_{\pi_k}$ by Lemma 2. Plugging this back, we get:
$$\begin{aligned}
|\eta_1 - \eta_2| &\le r_{\max} \sum_t \sum_{s,\mathbf{a}} \gamma^t \big|p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a})\big| \\
&\le 2 r_{\max} \sum_t \gamma^t \Big(t\big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big) + \sum_{k=1}^n \epsilon_{\pi_k}\Big) \\
&\le 2 r_{\max}\Big(\frac{\gamma\big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{\sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma}\Big).
\end{aligned}$$
Then we can show the proof of Theorem 1.
Proof. Let $\pi_\beta$ denote the data-collecting policy. We use Lemma 4 to bound the returns, but it requires a bounded model error under the new policy $\pi$. Thus, we introduce $\pi_\beta$ by adding and subtracting $\eta[\pi_\beta]$, to get:
$$\hat{\eta}[\pi] - \eta[\pi] = \underbrace{\hat{\eta}[\pi] - \eta[\pi_\beta]}_{L_1} + \underbrace{\eta[\pi_\beta] - \eta[\pi]}_{L_2}.$$
We can bound $L_1$ and $L_2$ using Lemma 4 with $\delta = \sum_{k=1}^n \epsilon_{\pi_k}$ and $\delta = \epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}$, respectively, and obtain:
$$L_1 \ge -\frac{2\gamma r_{\max}\sum_{k=1}^n \epsilon_{\pi_k}}{(1-\gamma)^2} - \frac{2 r_{\max}\sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma},$$
$$L_2 \ge -\frac{2\gamma r_{\max}\big(\epsilon_m + \sum_{k=1}^n \epsilon_{\pi_k}\big)}{(1-\gamma)^2} - \frac{2 r_{\max}\sum_{k=1}^n \epsilon_{\pi_k}}{1-\gamma}.$$
Adding these two bounds together yields the conclusion.
C ADDITIONAL EXPERIMENTS
C.1 ILLUSTRATION OF LEARNED PRIORITY OF DECISION-MAKING
Figure 8 (upper panel, a to e) shows the priority order of decision-making determined by SeqComm in PP. Agent 2, which is far away from the other prey and predators, is chosen as the first-mover. If the agents want to encircle and capture the prey, the agents on the periphery of the encircling circle (e.g., agents 2 and 5) should hold upper-level positions, since they are able to decide how to narrow the encirclement. In addition, agent 3 makes decisions prior to agent 5, so that a collision can be avoided once agent 5 obtains the intention of agent 3.
For CN, as illustrated in Figure 8 (lower panel, a to e), agent 2 is far away from all the landmarks, and all other agents are in better positions to occupy landmarks. Therefore, agent 2 is chosen as the first-mover, similar to the phenomenon observed in PP. Once it has determined the target to occupy, the other agents (agents 5 and 3) can adjust their actions accordingly and avoid conflicting goals. Otherwise, if agent 5 made a decision first and chose to occupy the closest landmark, agent 2 would have to approach a farther landmark, which would take more steps.
C.2 GENERALIZATION
Generalization to different numbers of agents has always been a key problem in MARL. For most communication algorithms, once the model is trained in one scenario, agents are unlikely to maintain competitive performance in other scenarios with different numbers of agents. However, since we employ attention modules to process communicated messages, agents can handle messages of different lengths. In addition, the module used to determine the priority of decision-making is not restricted by the number of agents. Thus, we investigate whether SeqComm generalizes well to different numbers of agents in CN and PP.
For both tasks, SeqComm is trained in the 5-agent setting. Then, we test SeqComm in the 3-agent and 7-agent settings of CN and the 7-agent setting of PP. We use Fix-C trained directly on these test tasks as a reference for the performance of SeqComm. Note that the numbers of landmarks and prey are adjusted according to the number of agents in CN and PP. The test results are shown in Table 1. SeqComm exhibits superiority in CN and PP, demonstrating that SeqComm may generalize well to the number of agents. A thorough study of the generalization of SeqComm is left to future work.
C.3 MORE SMAC MAPS
We have evaluated our method on two additional maps, i.e., 3s_vs_4z and corridor. As illustrated in Figure 9, we reach conclusions similar to those in Section 5.1.
D ADDITIONAL RELATED WORK
Multi-Agent Path Finding (MAPF). MAPF aims to plan collision-free paths for multiple agents on a given graph from their given start vertices to target vertices. In MAPF, prioritized planning is deeply coupled with collision avoidance (Van Den Berg & Overmars, 2005; Ma et al., 2019), where collision is used to design constraints or heuristics for planning. Unlike MAPF, our method couples the priority of decision-making with the learning objective and thus is more general. In addition, the different motivations and problem settings may lead to the incompatibility of the methods in the two fields.
Reinforcement Learning in Stackelberg Games. Many studies (Könönen, 2004; Sodomka et al., 2013; Greenwald et al., 2003; Zhang et al., 2020) have investigated reinforcement learning for finding the Stackelberg equilibrium. Bi-AC (Zhang et al., 2020) is a bi-level actor-critic method that allows agents to have different knowledge bases so that the Stackelberg equilibrium (SE) can be found; the actions can still be executed simultaneously and distributedly. It empirically studies the relationship between the cooperation level and the superiority of the SE over the Nash equilibrium. AQL (Könönen, 2004) updates the Q-value by solving the SE in each iteration and can be regarded as the value-based version of Bi-AC. Existing work mainly focuses on two-agent settings with an order that is fixed in advance. However, a fixed order can hardly be an optimal solution, as shown in Section 3. To address this issue, we exploit agents' intentions to dynamically determine the priority of decision-making along the way of interacting with each other.
E EXPERIMENTAL SETTINGS
In cooperative navigation, there are 5 agents, each of size 0.15. They need to occupy 5 landmarks of size 0.05. The acceleration of agents is 7. In predator-prey, the numbers of predators (agents) and prey are set to 5 and 3, respectively, and their sizes are 0.15 and 0.05. The acceleration is 5 for predators and 7 for prey. In keep-away, the numbers of attackers (agents) and defenders are both set to 3, and their sizes are 0.15 and 0.05, respectively. Besides, the acceleration is 6 for attackers and 4 for defenders. The three landmarks are located at (0.00, 0.30), (0.25, −0.15), and (−0.25, −0.15). Note that each agent is allowed to communicate with all other agents in all three tasks. The team reward is similar across tasks. At timestep $t$, it can be written as
$$r^t_{\mathrm{team}} = -\sum_{i=1}^n d^t_i + C^t\, r_{\mathrm{collision}},$$
where $d^t_i$ is the distance from landmark/prey $i$ to its nearest agent/predator, $C^t$ is the number of collisions (when the distance between two agents is less than the sum of their sizes) occurring at timestep $t$, and $r_{\mathrm{collision}} = -1$. In addition, agents act discretely and have 5 actions (stay and move up, down, left, right). The episode length is 20, 30, and 20 in cooperative navigation, predator-prey, and keep-away, respectively.
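A sketch of this reward rule in Python (our reading of the description above; positions are 2D tuples and `sizes` holds the agents' radii):

```python
import math

def team_reward(landmarks, agents, sizes, r_collision=-1.0):
    """Shared reward at one timestep: negative sum of each landmark's
    (or prey's) distance to its nearest agent, plus a penalty for every
    colliding pair of agents."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    coverage = -sum(min(dist(l, a) for a in agents) for l in landmarks)
    collisions = sum(1 for i in range(len(agents))
                     for j in range(i + 1, len(agents))
                     if dist(agents[i], agents[j]) < sizes[i] + sizes[j])
    return coverage + collisions * r_collision
```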
F IMPLEMENTATION DETAILS
F.1 ARCHITECTURE AND HYPERPARAMETERS
Our models, including SeqComm, Fix-C, and Random-C, are trained based on MAPPO. The critic and policy networks are realized by two fully connected layers each. As for the attention module, the query, key, and value each have one fully connected layer. The size of the hidden layers is 100, and Tanh is used as the nonlinearity. For I2C, we use the official code with the default basic hyperparameters and networks. As there is no released code for IS and TarMAC, we implement them ourselves, following the instructions in the original papers (Kim et al., 2021; Das et al., 2019).
For the world model, observations and actions are first encoded by a fully connected layer each. The output size of the observation encoder is 48, and that of the action encoder is 16. The outputs of the encoders are then passed into an attention module with the aforementioned structure. Finally, we use a fully connected layer to decode. In these layers, Tanh is used as the nonlinearity.
Table 2 summarizes the hyperparameters used by SeqComm and the baselines in MPE.
For SMAC, SeqComm, Random-C, and Fix-C are based on the same architecture and share the same hyperparameters. For MMM2, 6h_vs_8z, and 8m_vs_9m, the learning rate is 5e−5, while for 10m_vs_11m, corridor, and 3s_vs_4z, the learning rate is 7e−5. The PPO epoch is set to 10 for 6h_vs_8z and 5 for the rest of the maps. H and F are set to 5 and 1, respectively; however, 20 and 2 are better values for H and F if computing resources are sufficient.
For TarMAC, the learning rate is 7e−5 for all maps. The PPO epoch is set to 10 for 6h_vs_8z and 5 for the rest of the maps.
For MAPPO, the learning rate is 5e−5 for MMM2 and 6h_vs_8z, and 7e−5 for 8m_vs_9m and 10m_vs_11m.
For these four methods, the mini-batch is set to 1. As for other hyperparameters, we follow the default settings of the official code (Yu et al., 2021).
For QMIX, the learning rate is 5e−5. The $\epsilon$ is 1 and the batch size is 32. The buffer size is 5e3. For the rest, we follow the default settings of https://github.com/starry-sky6688/MARL-Algorithms.git.
F.2 ATTENTION MODULE
The attention module (AM) is applied to process messages in the world model, the critic network, and the policy network. AM consists of three components: queries, keys, and values. The output of AM is a weighted sum of the values, where each weight is determined by the dot product of the query and the corresponding key.
For the AM in the world model, denoted $\mathrm{AM}_w$: agent $i$ receives messages $m^{-i}_t = \mathbf{h}^{-i}_t$ from all other agents at timestep $t$ in the negotiation phase, and predicts a query vector $q^i_t$ via $\mathrm{AM}^i_{w,q}(h^i_t)$. The query is used to compute dot products with the keys $\mathbf{k}_t = [k^1_t, \cdots, k^n_t]$, where $k^j_t$ is obtained from the message of agent $j$ via $\mathrm{AM}^i_{w,k}(h^j_t)$ for $j \neq i$, and $k^i_t$ from $\mathrm{AM}^i_{w,k}(h^i_t)$. Each dot product is scaled by $1/\sqrt{d_k}$, followed by a softmax to obtain the attention weights $\alpha$ over the value vectors:
$$\alpha_i = \mathrm{softmax}\Big[\frac{q^{i\top}_t k^1_t}{\sqrt{d_k}}, \cdots, \underbrace{\frac{q^{i\top}_t k^j_t}{\sqrt{d_k}}}_{\alpha_{ij}}, \cdots, \frac{q^{i\top}_t k^n_t}{\sqrt{d_k}}\Big] \quad (1)$$
The output of the attention module is defined as $c^i_t = \sum_{j=1}^n \alpha_{ij} v^j_t$, where $v^j_t$ is obtained from the messages or the agent's own hidden state via $\mathrm{AM}^i_{w,v}(\cdot)$. As for the AM in the policy and critic networks, denoted $\mathrm{AM}_a$, agent $i$ additionally receives messages from upper-level agents in the launching phase. The messages from upper-level and lower-level agents can be expanded as $m^{\mathrm{upper}}_t = [\mathbf{h}^{\mathrm{upper}}_t, \mathbf{a}^{\mathrm{upper}}_t]$ and $m^{\mathrm{lower}}_t = [\mathbf{h}^{\mathrm{lower}}_t, \mathbf{0}]$, respectively. In addition, the query depends on the agent's own hidden state $h^i_t$, while the keys and values come only from the messages of other agents.
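A PyTorch-style sketch of this attention module (our illustration; layer sizes and shapes are assumptions, and the per-agent superscripts are dropped since parameters are shared):

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Scaled dot-product attention with one linear layer each for the
    query, keys, and values, as described above."""
    def __init__(self, d_in, d_k):
        super().__init__()
        self.q = nn.Linear(d_in, d_k)
        self.k = nn.Linear(d_in, d_k)
        self.v = nn.Linear(d_in, d_k)
        self.d_k = d_k

    def forward(self, h_own, h_all):
        # h_own: (batch, d_in), agent i's hidden state
        # h_all: (batch, n, d_in), messages including the agent's own state
        q = self.q(h_own).unsqueeze(1)                      # (batch, 1, d_k)
        k, v = self.k(h_all), self.v(h_all)                 # (batch, n, d_k)
        alpha = torch.softmax((q @ k.transpose(1, 2))
                              / self.d_k ** 0.5, dim=-1)    # (batch, 1, n)
        return (alpha @ v).squeeze(1)                       # (batch, d_k)
```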
F.3 TRAINING
The training of SeqComm is an extension of MAPPO. The observation encoder $e$, the critic $V$, and the policy $\pi$ are parameterized by $\theta_e$, $\theta_v$, and $\theta_\pi$, respectively. Besides, the attention module $\mathrm{AM}_a$ is parameterized by $\theta_a$ and takes as input the agent's hidden state, the messages (hidden states of other agents) from the negotiation phase, and the messages (the actions of upper-level agents) from the launching phase. Let $\mathcal{D} = \{\tau_k\}_{k=1}^K$ be a set of trajectories collected by running the policy in the environment. Note that we drop the time index $t$ in the following notations for simplicity.
The value function is fitted by regression on the mean-squared error:
$$L(\theta_v, \theta_a, \theta_e) = \frac{1}{KT}\sum_{\tau\in\mathcal{D}}\sum_{t=0}^{T-1}\Big\|V\big(\mathrm{AM}_a(e(\mathbf{o}), \mathbf{a}^{\mathrm{upper}})\big) - \hat{R}\Big\|^2_2, \quad (2)$$
where $\hat{R}$ is the discounted rewards-to-go.
We update the policy by maximizing the PPO-Clip objective:
$$L(\theta_\pi, \theta_a, \theta_e) = \frac{1}{KT}\sum_{\tau\in\mathcal{D}}\sum_{t=0}^{T-1} \min\Big(\frac{\pi\big(a|\mathrm{AM}_a(e(\mathbf{o}), \mathbf{a}^{\mathrm{upper}})\big)}{\pi_{\mathrm{old}}\big(a|\mathrm{AM}_a(e(\mathbf{o}), \mathbf{a}^{\mathrm{upper}})\big)}\, A_{\pi_{\mathrm{old}}},\; g(\epsilon, A_{\pi_{\mathrm{old}}})\Big), \quad (3)$$
where
$$g(\epsilon, A) = \begin{cases} (1+\epsilon)A & A \ge 0 \\ (1-\epsilon)A & A < 0 \end{cases}$$
and $A_{\pi_{\mathrm{old}}}(\mathbf{o}, \mathbf{a}^{\mathrm{upper}}, a)$ is computed using the GAE method.
The world model $\mathcal{M}$, parameterized by $\theta_w$, is trained as a regression model on the training dataset $\mathcal{S}$, updated with the loss:
$$L(\theta_w) = \frac{1}{|\mathcal{S}|}\sum_{(\mathbf{o},\mathbf{a},\mathbf{o}',r)\in\mathcal{S}}\Big\|(\mathbf{o}', r) - \mathcal{M}\big(\mathrm{AM}_w(e(\mathbf{o}), \mathbf{a})\big)\Big\|^2_2. \quad (4)$$
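A sketch of the clip term $g$ and the world-model regression in Eq. (3) and Eq. (4), with the network forward passes factored out; $\epsilon = 0.2$ is a common default, not necessarily the value used in the paper:

```python
import torch

def g(eps, adv):
    """Clip term of Eq. (3): (1 + eps) * A if A >= 0, else (1 - eps) * A."""
    return torch.where(adv >= 0, (1 + eps) * adv, (1 - eps) * adv)

def ppo_clip_objective(ratio, adv, eps=0.2):
    # maximize min(ratio * A, g(eps, A)); return the negated mean as a loss
    return -torch.min(ratio * adv, g(eps, adv)).mean()

def world_model_loss(pred_obs, pred_rew, next_obs, rew):
    """Eq. (4): squared error between predicted and actual
    (next observation, reward), averaged over the batch."""
    return (((pred_obs - next_obs) ** 2).sum(-1)
            + (pred_rew - rew) ** 2).mean()
```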
We trained our model on one GeForce GTX 1050 Ti and Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz.
Summary Of The Paper
Authors present a sequential communication framework to address the relative overgeneralization problem in multi-agent reinforcement learning and test it against a number of communication-free and communication-based baselines. Performance figures drawn against the number of training steps show higher average rewards in the MPE and SMAC environments.
Strengths And Weaknesses
Introducing sequential decision making may prove both a valuable and feasible approach to some MARL tasks.
However, the domain on which the proposed algorithm would successfully operate seems largely exaggerated. The underlying game-theoretic construct of MARL tasks should most definitely be considered as a given, and not as a control knob. Whether a game will play out sequentially or simultaneously is -- at least within the approach and scope of building a novel MARL algorithm -- the innate characteristic of the game and not a choice for the algorithm designer. SeqComm may function correctly in MARL tasks that naturally and inherently involve sequentiality, but such a demonstration is missing. This lack significantly undermines the applicability of the algorithm.
Even if there is some justification as to the applicability of SeqComm on arbitrary MARL tasks, the performance gain comes at a huge cost of communicating n-1 rounds. Actual performance may deteriorate from this best case (n-1 rounds), as "some agents are required to communicate intention values with others multiple times until the priority of decision-making is finally determined." Furthermore, each of those rounds involves Monte Carlo sampling. This means that every single tick in the x-axis (steps) actually involves, at best n-1 "sub-steps", each of which carries out Monte Carlo sampling. Since every figure in the paper is plotted against the number of training steps, not a single figure pictures a fair comparison between SeqComm and the baselines.
Moreover, the paper needs a much better foundation in terms of its connection to game theory. Simultaneous games are prone to non-stationarity precisely because the players' information sets are limited. Making a distinction between the decision-making phase and the action execution phase does not change the fact SeqComm alters information sets. There is no discussion whatsoever in this regard (e.g., information sets, bounded rationality, non-stationarity), and SeqComm proceeds straight to interfere with the players' information sets, which are dictated by the nature of the game and not by the algorithm's working mechanism.
Clarity, Quality, Novelty And Reproducibility
Clarity: It is unclear how SeqComm would be applicable to arbitrary MARL tasks. Even if applicable, the performance gain must be normalized against the additional computation.
Quality: Plots against training steps are misleading. "Miscoordination" on page 6 is not defined, so its avoidance cannot be measured. The latency tolerance disclaimer on page 6 is unsupported. Improvements on SeqComm runtime on real-world communication technology (e.g., 5G) cannot be measured/claimed unless accompanied by training (wall clock) times in seconds.
Originality: To the best of my knowledge, SeqComm is a novel algorithm.
ICLR | Title
Multi-Agent Sequential Decision-Making via Communication
Abstract
Communication helps agents to obtain information about others so that better coordinated behavior can be learned. Some existing work communicates predicted future trajectory with others, hoping to get clues about what others would do for better coordination. However, circular dependencies sometimes can occur when agents are treated synchronously so it is hard to coordinate decision-making. In this paper, we propose a novel communication scheme, Sequential Communication (SeqComm). SeqComm treats agents asynchronously (the upper-level agents make decisions before the lower-level ones) and has two communication phases. In negotiation phase, agents determine the priority of decision-making by communicating hidden states of observations and comparing the value of intention, which is obtained by modeling the environment dynamics. In launching phase, the upper-level agents take the lead in making decisions and communicate their actions with the lower-level agents. Theoretically, we prove the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we show that SeqComm outperforms existing methods in various multi-agent cooperative tasks.
1 INTRODUCTION
The partial observability and stochasticity inherent to the nature of multi-agent systems can easily impede the cooperation among agents and lead to catastrophic miscoordination (Ding et al., 2020). Communication has been exploited to help agents obtain extra information during both training and execution to mitigate such problems (Foerster et al., 2016; Sukhbaatar et al., 2016; Peng et al., 2017). Specifically, agents can share their information with others via a trainable communication channel.
Centralized training with decentralized execution (CTDE) is a popular learning paradigm in cooperative multi-agent reinforcement learning (MARL). Although the centralized value function can be learned to evaluate the joint policy of agents, the decentralized policies of agents are essentially independent. Therefore, a coordination problem arises. That is, agents may make sub-optimal actions by mistakenly assuming others’ actions when there exist multiple optimal joint actions (Busoniu et al., 2008). Communication allows agents to obtain information about others to avoid miscoordination. However, most existing work only focuses on communicating messages, e.g., the information of agents’ current observation or historical trajectory (Jiang & Lu, 2018; Singh et al., 2019; Das et al., 2019; Ding et al., 2020). It is impossible for an agent to acquire other’s actions before making decisions since the game model is usually synchronous, i.e., agents make decisions and execute actions simultaneously. Recently, intention or imagination, depicted by a combination of predicted actions and observations of many future steps, has been proposed as part of messages (Kim et al., 2021; Pretorius et al., 2021). However, circular dependencies can still occur, so it may be hard to coordinate decision-making under synchronous settings.
A general approach to solving the coordination problem is to make sure that ties between equally good actions are broken by all agents. One simple mechanism for doing so is to know exactly what others will do and adjust the behavior accordingly under a unique ordering of agents and actions (Busoniu et al., 2008). Inspired by this, we reconsider the cooperative game from an asynchronous perspective. In other words, each agent is assigned a priority (i.e., order) of decision-making each step in both training and execution, thus the Stackelberg equilibrium (SE) (Von Stackelberg, 2010) is naturally set up as the learning objective. Specifically, the upper-level agents make decisions before the lower-level agents. Therefore, the lower-level agents can acquire the actual actions of the upper-level agents by
communication and make their decisions conditioned on what the upper-level agents would do. Under this setting, the SE is likely to be Pareto superior to the average Nash equilibrium (NE) in games that require a high cooperation level (Zhang et al., 2020). However, is it necessary to decide a specific priority of decision-making for each agent? Ideally, the optimal joint policy can be decomposed by any orders (Wen et al., 2019), e.g., π∗(a1, a2|s) = π∗(a1|s)π∗(a2|s, a1) = π∗(a2|s)π∗(a1|s, a2). But during the learning process, it is unlikely for agents to use the optimal actions of other agents for gradient calculation, making it still vulnerable to the relative overgeneralization problem (Wei et al., 2018). Overall, there is no guarantee that the above equation will hold in the learning process, thus ordering should be carefully concerned.
In this paper, we propose a novel model-based multi-round communication scheme for cooperative MARL, Sequential Communication (SeqComm), to enable agents to explicitly coordinate with each other. Specifically, SeqComm has two-phase communication, negotiation phase and launching phase. In the negotiation phase, agents communicate their hidden states of observations with others simultaneously. Then they are able to generate multiple predicted trajectories, called intention, by modeling the environmental dynamics and other agents’ actions. In addition, the priority of decision-making is determined by communicating and comparing the corresponding values of agents’ intentions. The value of each intention represents the rewards obtained by letting that agent take the upper-level position of the order sequence. The sequence of others follows the same procedure as aforementioned with the upper-level agents fixed. In the launching phase, the upper-level agents take the lead in decision-making and communicate their actual actions with the lower-level agents. Note that the actual actions will be executed simultaneously in the environment without any changes.
SeqComm is currently built on MAPPO (Yu et al., 2021). Theoretically, we prove the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we evaluate SeqComm on a set of tasks in multi-agent particle environment (MPE) (Lowe et al., 2017) and StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). In all these tasks, we demonstrate that SeqComm outperforms prior communication-free and communication-based methods. By ablation studies, we confirm that treating agents asynchronously is a more effective way to promote coordination and SeqComm can provide the proper priority of decision-making for agents to develop better coordination.
2 RELATED WORK
Communication. Existing studies (Jiang & Lu, 2018; Kim et al., 2019; Singh et al., 2019; Das et al., 2019; Zhang et al., 2019; Jiang et al., 2020; Ding et al., 2020; Konan et al., 2022) in this realm mainly focus on how to extract valuable messages. ATOC (Jiang & Lu, 2018) and IC3Net (Singh et al., 2019) utilize gate mechanisms to decide when to communicate with other agents. Many works (Das et al., 2019; Konan et al., 2022) employ multi-round communication to fully reason the intentions of others and establish complex collaboration strategies. Social influence (Jaques et al., 2019) uses communication to influence the behaviors of others. I2C (Ding et al., 2020) only communicates with agents that are relevant and influential which are determined by causal inference. However, all these methods focus on how to exploit valuable information from current or past partial observations effectively and properly. More recently, some studies (Kim et al., 2021; Du et al., 2021; Pretorius et al., 2021) begin to answer the question: can we favor cooperation beyond sharing partial observation? They allow agents to imagine their future states with a world model and communicate those with others. IS (Pretorius et al., 2021), as the representation of this line of research, enables each agent to share its intention with other agents in the form of the encoded imagined trajectory and use the attention module to figure out the importance of the received intention. However, two concerns arise. On one hand, circular dependencies can lead to inaccurate predicted future trajectories as long as the multi-agent system treats agents synchronously. On the other hand, MARL struggles in extracting useful information from numerous messages, not to mention more complex and dubious messages, i.e., predicted future trajectories.
Unlike these works, we treat the agents from an asynchronously perspective therefore circular dependencies can be naturally resolved. Furthermore, agents only send actions to lower-level agents besides partial observations to make sure the messages are compact as well as informative.
Coordination. The agents are essentially independent decision makers in execution and may break ties between equally good actions randomly. Thus, in the absence of additional mechanisms, different
agents may break ties in different ways, and the resulting joint actions may be suboptimal. Coordination graphs (Guestrin et al., 2002; Böhmer et al., 2020; Wang et al., 2021b) simplify the coordination when the global Q-function can be additively decomposed into local Q-functions that only depend on the actions of a subset of agents. Typically, a coordination graph expresses a higher-order value decomposition among agents. This improves the representational capacity to distinguish other agents’ effects on local utility functions, which addresses the miscoordination problems caused by partial observability. Another general approach to solving the coordination problem is to make sure that ties are broken by all agents in the same way, requiring that random action choices are somehow coordinated or negotiated. Social conventions (Boutilier, 1996) or role assignments (Prasad et al., 1998) encode prior preferences towards certain joint actions and help break ties during action selection. Communication (Fischer et al., 2004; Vlassis, 2007) can be used to negotiate action choices, either alone or in combination with the aforementioned techniques. Our method follows this line of research by utilizing the ordering of agents and actions to break the ties, other than the enhanced representational capacity of the local value function.
3 PROBLEM FORMULATION
Cost-Free Communication. The decentralized partially observable Markov decision process (DecPOMDP) can be extended to explicitly incorporate broadcasting observations. The resulting model is called multi-agent POMDP (Oliehoek et al., 2016).
Pynadath & Tambe (2002) showed that under cost-free communication, a joint communication policy that shares local observations at each stage is optimal. Many studies have also investigated sharing local observations in models that are similar to multi-agent POMDP (Pynadath & Tambe, 2002; Ooi & Wornell, 1996; Nair et al., 2004; Roth et al., 2005a;b; Spaan et al., 2006; Oliehoek et al., 2007; Becker et al., 2004). These works focus on issues other than communication cost and we foucs on the coordination problem. Note that even under multi-agent POMDP where agents can get joint observations, coordination problem can still arise (Busoniu et al., 2008). Suppose the centralized critic has learnt actions pairs [a1, a2] and [b1, b2] are equally optimal. Without any prior information, the individual policies π1 and π2 learnt from the centralized critic can break the ties randomly and may choose a1 and b2, respectively.
Multi-Agent Sequential Decision-Making. We consider fully cooperative multi-agent tasks that are modeled as multi-agent POMDP, where n agents interact with the environment according to the following procedure, which we refer to as multi-agent sequential decision-making.
At each timestep t, assume the priority (i.e., order) of decision-making for all agents is given and each priority level has only one agent (i.e., agents make decisions one by one). Note that the smaller the level index, the higher priority of decision-making is. The agent at each level k gets its own observation okt drawn from the state st, and receives messagesm −k t from all other agents, where m−kt , {{o1t , a1t}, . . . , {ok−1t , ak−1t }, ok+1t , . . . , ont }. Equivalently, m−kt can be written as {o−kt ,a 1:k−1 t }, where o−kt denotes the joint observations of all agents except k, and a 1:k−1 t denotes the joint actions of agents 1 to k − 1. For the agent at the first level (i.e., k = 1), a1:k−1t = ∅. Then, the agent determines its action akt sampled from its policy πk(·|okt ,m−kt ) or equivalently πk(·|ot,a1:k−1t ) and sends it to the lower-level agents. After all agents have determined their actions, they perform the joint actions at, which can be seen as sampled from the joint policy π(·|st) factorized as ∏n k=1 πk(·|ot,a 1:k−1 t ), in the environment and get a shared reward r(st,at) and the state transitions to next state s′ according to the transition probability p(s′|st,at). All agents aim to maximize the expected return ∑∞ t=0 γ
trt, where γ is the discount factor. The state-value function and action-value function of the level-k agent are defined as follows:
Vπk(s,a 1:k−1) , E
s1:∞ ak:n0 ∼πk:n a1:∞∼π
[ ∞∑ t=0 γtrt|s0 = s,a1:k−10 = a1:k−1]
Qπk(s,a 1:k) , E
s1:∞ ak+1:n0 ∼πk+1:n
a1:∞∼π
[ ∞∑ t=0 γtrt|s0 = s,a1:k0 = a1:k].
For the setting of multi-agent sequential decision-making discussed above, we have the following proposition. Proposition 1. If all the agents update its policy with individual TRPO (Schulman et al., 2015) sequentially in multi-agent sequential decision-making, then the joint policy of all agents is guaranteed to improve monotonically and converge.
Proof. The proof is given in Appendix A.
Proposition 1 indicates that SeqComm has the performance guarantee regardless of the priority of decision-making in multi-agent sequential decision-making. However, the priority of decision-making indeed affects the optimality of the converged joint policy, and we have the following claim. Claim 1. The different priorities of decision-making affect the optimality of the convergence of the learning algorithm due to the relative overgeneralization problem.
We use a one-step matrix game as an example, as illustrated in Figure 1(a), to demonstrate the influence of the priority of decision-making on the learning process. Due to relative overgeneralization (Wei et al., 2018), agent B tends to choose b2 or b3. Specifically, b2 or b3 in the suboptimal equilibrium is a better choice than b1 in the optimal equilibrium when matched with arbitrary actions from agent A. Therefore, as shown in Figure 1(b), B → A (i.e., agent B makes decisions before A, and A’s policy conditions on the action of B) and Simultaneous (i.e., two agents make decisions simultaneously and independently) are easily trapped into local optima. However, things can be different if agent A goes
first, as A → B achieves the optimum. As long as agent A does not suffer from relative overgeneralization, it can help agent B get rid of local optima by narrowing down the search space of B. Besides, a policy that determines the priority of decision-making can be learned under the guidance of the state-value function, denoted as Learned. It obtains better performance than B → A and Simultaneous, which indicates that dynamically determining the order during policy learning can be beneficial as we do not know the optimal priority in advance.
Remark 1. The priority (i.e., order) of decision-making affects the optimality of the converged joint policy in multi-agent sequential decision-making, thus it is critical to determine the order. However, learning the order directly requires an additional centralized policy in execution, which is not generalizable in the scenario where the number of agents varies. Moreover, its learning complexity exponentially increases with the number of agents, making it infeasible in many cases.
4 SEQUENTIAL COMMUNICATION
In this paper, we cast our eyes in another direction and resort to the world model. Ideally, we can randomly sample candidate order sequences, evaluate them under the world model (see Section 4.1), and choose the order sequence that is deemed the most promising under the true dynamic. SeqComm is designed based on this principle to determine the priority of decision-making via communication.
SeqComm adopts a multi-round communication mechanism, i.e., agents are allowed to communicate with others in multiple rounds. Importantly, communication is separated into phases serving different purposes. One is the negotiation phase for agents to determine the priority of decision-making. Another is the launching phase for agents to act conditioning on actual actions upper-level agents will take to implement explicit coordination via communication. The overview of SeqComm is illustrated in Figure 2. Each SeqComm agent consists of a policy, a critic, and a world model, as illustrated in Figure 3, and the parameters of all networks are shared across agents (Gupta et al., 2017).
Under review as a conference paper at ICLR 2023
agent 1 agent 1’s obs agent 2 agent 3 agent 4 re qu es t re pl y
agent 1a gent 1’s ob s agent 2agent 3 agent 4 reply re qu es t request request re pl yreply Agent 1 chooses to send request to agent 2 and ignore
agent 3
1
2 3 4
1 2 3 4 Agent 1 chooses to send request to agent 2, 3, 4 t t+1
B C
BC
order set
AsA CsC BsB B C 1 BC 2
r1 r2 intention reward
A
C
B
2
A C B
1
aC
A C B
2
aA A C
B
3
aB
aA aC
A
C
Baction stateagent action stateagent
4.1 NEGOTIATION PHASE
In the negotiation phase, the observation encoder first takes ot as input and outputs a hidden state ht, which is used to communicate with others. Agents then determine the priority of decision-making by intention which is established and evaluated based on the world model.
World Model. The world model is needed to predict and evaluate future trajectories. SeqComm, unlike previous works (Kim et al., 2021; Du et al., 2021; Pretorius et al., 2021), can utilize received hidden states of other agents in the first round of communication to model more precise environment dynamics for the explicit coordination in the next round of communication. Once an agent can access other agents’ hidden states, it shall have adequate information to estimate their actions since all agents are homogeneous and parameter-sharing. Therefore, the world modelM(·) takes as input the joint hidden states ht = {h1t , . . . , hnt } and actions at, and predicts the next joint observations and reward,
ôt+1, r̂t+1 =Mi(AMw(ht,at)), where AMw is the attention module. The reason that we adopt the attention module is to entitle the world model to be generalizable in the scenarios where additional agents are introduced or existing agents are removed.
Priority of Decision-Making. The intention is the key element to determine the priority of decision-making. The notion of intention is described as an agent’s future behavior in previous works (Rabinowitz et al., 2018; Raileanu et al., 2018; Kim et al., 2021). However, we define the intention as an agent’s future behavior without considering others.
As mentioned before, an agent’s intention considering others can lead to circular dependencies and cause miscoordination. By our definition, the intention of an agent should be depicted as all future trajectories considering that agent as the first-mover and ignoring the others. However, there are many possible future trajectories as the priority of the rest agents is unfixed. In practice, we use the Monte Carlo method to evaluate intention.
Taking agent i at timestep t to illustrate, it firstly considers itself as the first-mover and produces its action only based on the joint hidden states, âit ∼ πi(·|AMa(ht)), where we again use an
attention module AMa to handle the input. For the order sequence of lower-level agents, we randomly sample a set of order sequences from unfixed agents. Assume agent j is the second-mover, agent i models j’s action by considering the upper-level action following its own policy âjt ∼ πi(·|AMa(ht, âit)). The same procedure is applied to predict the actions of all other agents following the sampled order sequence. Based on the joint hidden states and predicted actions, the next joint observations ôt+1 and corresponding reward r̂t+1 can be predicted by the world model. The length of the predicted future trajectory isH and it can then be written as τ t = {ôt+1, ât+1, . . . , ôt+H , ât+H} by repeating the procedure aforementioned and the value of one trajectory is defined as the return of that trajectory vτt = ∑t+H t′=t+1 γ
t′−t−1r̂t′/H . In addition, the intention value is defined as the average value of F future trajectories with different sampled order sequences. The choice of F is a tradeoff between the computation overhead and the accuracy of the estimation.
After all the agents have computed their own intention and the corresponding value, they again communicate their intention values to others. Then agents would compare and choose the agent with the highest intention value to be the first-mover. The priority of lower-level decision-making follows the same procedure with the upper-level agents fixed. Note that some agents are required to communicate intention values with others multiple times until the priority of decision-making is finally determined.
4.2 LAUNCHING PHASE
As for the launching phase, agents communicate for obtaining additional information to make decisions. Apart from the received hidden states from the last phase, we allow agents to get what actual actions the upper-level agents will take in execution, while other studies can only infer others’ actions by opponent modeling (Rabinowitz et al., 2018; Raileanu et al., 2018) or communicating intentions (Kim et al., 2021). Therefore, miscoordination can be naturally avoided and a better cooperation strategy is possible since lower-level agents can adjust their behaviors accordingly. A lower-level agent i make a decision following the policy πi(·|AMa(ht,auppert )), where a upper t means received actual actions from all upper-level agents. As long as the agent has decided its action, it will send its action to all other lower-level agents by the communication channel. Note that the actions are executed simultaneously and distributedly in execution, though agents make decisions sequentially.
Communication Overhead. Two communication phases alternate until all agents determine their levels and get upper-level actions. Note that many previous works also adopt the multi-round communication scheme (Das et al., 2019; Singh et al., 2019). As for implementation in practice, compared with communicating high-dimensional hidden states/observations by multiple rounds (Das et al., 2019; Singh et al., 2019), or transferring multi-step trajectory (Kim et al., 2021), SeqComm needs more rounds, but it only transmits hidden states for one time. For the rest n − 1 round communication with total (n− 1)/2 broadcasts per agent, only a single intention value and an action will be exchanged. Considering there are n! permutations of different order choices for n agents, our method has greatly reduced computation overhead since each agent needs to calculate up to n times to search for a satisfying order. Although SeqComm is more suitable for latency-tolerate MARL tasks, e.g., power dispatch (minutes) (Wang et al., 2021a), inventory management (hours) (Feng et al., 2021), maritime transportation (days) (Li et al., 2019), it is possible for SeqComm to have a wider range of applications given the rapid development of the communication technology, e.g., 5G.
4.3 THEORETICAL ANALYSIS
As the priority of decision-making is determined by intention values, SeqComm is likely to choose different orders at different timesteps during training. However, we have the following proposition that theoretically guarantees the performance of the learned joint policy under SeqComm.
Proposition 2. The monotonic improvement and convergence of the joint policy in SeqComm are independent of the priority of decision-making of agents at each timestep.
Proof. The proof is given in Appendix A.
The priority of decision-making is chosen under the world model, thus the compounding errors in the world model can result in discrepancies between the predicted returns of the same order under the world model and the true dynamics. We then analyze the monotonic improvement for the joint policy under the world model based on Janner et al. (2019).

Theorem 1. Let the expected total variation between two transition distributions be bounded at each timestep as $\max_t \mathbb{E}_{s\sim\pi_{\beta,t}}[D_{TV}(p(s'|s,\boldsymbol{a})\|\hat{p}(s'|s,\boldsymbol{a}))] \le \epsilon_m$, and the policy divergence at level $k$ be bounded as $\max_{s,\boldsymbol{a}^{1:k-1}} D_{TV}(\pi_{\beta,k}(a^k|s,\boldsymbol{a}^{1:k-1})\|\pi_k(a^k|s,\boldsymbol{a}^{1:k-1})) \le \epsilon_{\pi_k}$, where $\pi_\beta$ is the data collecting policy for the model and $\hat{p}(s'|s,\boldsymbol{a})$ is the transition distribution under the model. Then the model return $\hat{\eta}$ and true return $\eta$ of the policy $\pi$ are bounded as:
$$\hat{\eta}[\pi] \ge \eta[\pi] - \underbrace{\left[\frac{2\gamma r_{\max}\big(\epsilon_m + 2\sum_{k=1}^{n}\epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{4 r_{\max}\sum_{k=1}^{n}\epsilon_{\pi_k}}{1-\gamma}\right]}_{C(\epsilon_m,\,\epsilon_{\pi_{1:n}})}.$$
Proof. The proof is given in Appendix B.
Remark 2. Theorem 1 provides a useful relationship between the compounding errors and the policy update. As long as we improve the return under the true dynamics by more than the gap $C(\epsilon_m, \epsilon_{\pi_{1:n}})$, we can guarantee the policy improvement under the world model. If no such policy exists to overcome the gap, it implies the model error is too high, that is, there is a large discrepancy between the world model and the true dynamics. Thus the order sequence obtained under the world model is not reliable; such an order sequence is almost the same as a random one. Though a random order sequence also has the theoretical guarantee of Proposition 2, we will show in Section 5.2 that a random order sequence leads to a poor local optimum empirically.
5 EXPERIMENTS
Sequential communication (SeqComm) is currently instantiated based on MAPPO (Yu et al., 2021). We evaluate SeqComm on three tasks in multi-agent particle environment (MPE) (Lowe et al., 2017) and four maps in StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019).
For these experiments, we compare SeqComm against the following communication-free and communication-based baselines: MAPPO (Yu et al., 2021), QMIX (Rashid et al., 2018), IS (Kim et al., 2021), TarMAC (Das et al., 2019), and I2C (Ding et al., 2020). In more detail, IS communicates predicted future trajectories (observations and actions), where the predictions are made by the environment model. TarMAC uses the attention model to focus more on important incoming messages (the hidden states of observations). TarMAC is reproduced on top of MAPPO instead of the A2C used in the original paper, for better performance. I2C infers one-to-one communication to reduce the redundancy of messages (also conditioning on observations).
In the experiments, all the methods are parameter-sharing for fast convergence. We have fine-tuned the baselines for a fair comparison. Please refer to Appendix E for experimental settings and Appendix F for implementation details. All results are presented in terms of the mean and standard deviation of five runs with different random seeds.
5.1 RESULTS
MPE. We experiment on predator-prey (PP), cooperative navigation (CN), and keep-away (KA) in MPE. In PP, five predators (agents) try to capture three prey. In CN, five agents try to occupy five landmarks. In KA, three attackers (agents) try to occupy three landmarks, while three defenders try to push them away. In all three tasks, the size of agents is set larger than in the original settings so that collisions occur more easily, following the settings in Kim et al. (2021). In addition, agents cannot observe any other agents, which makes the tasks more difficult and communication more important. Similar modifications can be observed in previous works (Foerster et al., 2016; Ding et al., 2020). After all, we want to demonstrate the superiority over communication-based baselines, and communication-based methods are more suitable for scenarios with limited vision. More details about experimental settings are available in Appendix E.
Figure 4 shows the learning curves of all the methods in terms of the mean reward averaged over timesteps in PP, CN, and KA. We can see that SeqComm converges to the highest mean reward compared with all the baselines. The results demonstrate the superiority of SeqComm. In more detail, all communication-based methods outperform MAPPO, indicating the necessity of communication in these difficult tasks. Apart from MAPPO, IS performs the worst, since it may access inaccurate predicted information due to the circular dependencies. The substantial improvement of SeqComm over I2C and TarMAC is attributed to the fact that SeqComm allows agents to obtain more valuable action information for explicit coordination. The agents learned by SeqComm show sophisticated coordination strategies induced by the priority of decision-making, which can be witnessed in the visualization of agent behaviors. More details are given in Appendix C. Note that QMIX is omitted from the comparison for clear presentation, since Yu et al. (2021) have shown that QMIX and MAPPO exhibit similar performance in various MPE tasks.
SMAC. We also evaluate SeqComm against the baselines on four customized maps in SMAC: 6h vs 8z, MMM2, 10m vs 11m, and 8m vs 9m, where we have made some minor changes to the observation part of agents to make it more difficult. Specifically, the sight range of agents is reduced from 9 to 2, and agents cannot perceive any information about their allies even if they are within the sight range. NDQ (Wang et al., 2020) adopts a similar change to increase the difficulty of action coordination and demonstrates that the miscoordination problem is widespread in multi-agent learning. The rest settings remain the same as the default.
The learning curves of SeqComm and the baselines in terms of the win rate are illustrated in Figure 5. IS and I2C fail on these maps and get a zero win rate because the two methods are built on MADDPG, which cannot work well in SMAC, especially when the sight range of agents is reduced; this is also supported by other studies (Papoudakis et al., 2021). SeqComm and TarMAC converge to better performance than MAPPO and QMIX, which demonstrates the benefit of communication. Moreover, SeqComm outperforms TarMAC, which again verifies the gain of explicit action coordination.
5.2 ABLATION STUDIES
Priority of Decision-Making. We compare SeqComm with two ablation baselines with only a difference in the priority of decision-making: the priority of decision-making is fixed throughout one episode, denoted as Fix-C, and the priority of decision-making is determined randomly at each timestep, denoted as Random-C. TarMAC is also compared as a reference without explicit action coordination.
As depicted in Figure 6, SeqComm achieves a higher mean reward or win rate than Fix-C, Random-C, and TarMAC in all the tasks. These results verify the importance of the priority of decision-making and the necessity to continuously adjust it during one episode. It is also demonstrated that SeqComm can provide a proper priority of decision-making. As discussed in Section 4.3, although Fix-C and Random-C also have the theoretical guarantee, they converge to poor local optima in practice. Moreover, Fix-C and Random-C show better performance than TarMAC in most tasks. This result accords with the hypothesis that the SE is likely to be Pareto superior to the average NE in games with a high cooperation level. Additionally, the learned policy of SeqComm can generalize well to the same task with a different number of agents in MPE, which is detailed in Appendix C.
Communication Range. We also carry out ablation studies on communication range in MPE tasks. Note that communication range means how many nearest neighbors each agent is allowed to communicate with, following the setting in Ding et al. (2020). We reduce the communication range of SeqComm from 4 to 2 and 0. As there are only three agents in KA, it is omitted in this study. The results are shown in Figure 7. Communication-based agents perform better than communication-free agents, which accords with the results of many previous studies. More importantly, the superiority of SeqComm with communication range 2 over the corresponding TarMAC again demonstrates the effectiveness of sequential communication even in reduced communication ranges.
However, as the communication range decreases from 4 to 2, there is no performance reduction in these two MPE tasks. On the contrary, the agents with communication range 2 perform the best. This accords with the results in I2C (Ding et al., 2020) and ATOC (Jiang & Lu, 2018) that redundant information can sometimes impair the learning process, although this conclusion might not hold in other settings. Moreover, since under our communication scheme agents can obtain more information, i.e., the actual actions of others, it is reasonable that SeqComm still outperforms other methods with reduced communication ranges.
6 CONCLUSIONS
We have proposed SeqComm, which enables agents to explicitly coordinate with each other. Taking an asynchronous perspective, SeqComm allows agents to make decisions sequentially. A two-phase communication scheme is adopted for determining the priority of decision-making and communicating messages accordingly. Theoretically, we prove that the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, SeqComm outperforms baselines in a variety of cooperative multi-agent tasks, and SeqComm can provide a proper priority of decision-making.
A PROOFS OF PROPOSITION 1 AND PROPOSITION 2
Lemma 1 (Agent-by-Agent PPO). If we update the policy of each agent $i$ with TRPO (Schulman et al., 2015) (or approximately, PPO) while fixing all the other agents' policies, then the joint policy improves monotonically.

Proof. We consider the joint surrogate objective in TRPO, $L_{\pi_{old}}(\pi_{new})$, where $\pi_{old}$ is the joint policy before updating and $\pi_{new}$ is the joint policy after updating.
Given that $\pi^{-i}_{new} = \pi^{-i}_{old}$, we have:
$$\begin{aligned}
L_{\pi_{old}}(\pi_{new}) &= \mathbb{E}_{\boldsymbol{a}\sim\pi_{new}}[A_{\pi_{old}}(s,\boldsymbol{a})] \\
&= \mathbb{E}_{\boldsymbol{a}\sim\pi_{old}}\Big[\frac{\pi_{new}(\boldsymbol{a}|s)}{\pi_{old}(\boldsymbol{a}|s)}A_{\pi_{old}}(s,\boldsymbol{a})\Big] \\
&= \mathbb{E}_{\boldsymbol{a}\sim\pi_{old}}\Big[\frac{\pi^{i}_{new}(a^{i}|s)}{\pi^{i}_{old}(a^{i}|s)}A_{\pi_{old}}(s,\boldsymbol{a})\Big] \\
&= \mathbb{E}_{a^{i}\sim\pi^{i}_{old}}\Big[\frac{\pi^{i}_{new}(a^{i}|s)}{\pi^{i}_{old}(a^{i}|s)}\mathbb{E}_{a^{-i}\sim\pi^{-i}_{old}}[A_{\pi_{old}}(s,a^{i},a^{-i})]\Big] \\
&= \mathbb{E}_{a^{i}\sim\pi^{i}_{old}}\Big[\frac{\pi^{i}_{new}(a^{i}|s)}{\pi^{i}_{old}(a^{i}|s)}A^{i}_{\pi_{old}}(s,a^{i})\Big] = L_{\pi^{i}_{old}}(\pi^{i}_{new}),
\end{aligned}$$
where $A^{i}_{\pi_{old}}(s,a^{i}) = \mathbb{E}_{a^{-i}\sim\pi^{-i}_{old}}[A_{\pi_{old}}(s,a^{i},a^{-i})]$ is the individual advantage of agent $i$, and the third equality is from the condition $\pi^{-i}_{new} = \pi^{-i}_{old}$.
With the result of TRPO, we have the following conclusion:
$$\begin{aligned}
J(\pi_{new}) - J(\pi_{old}) &\ge L_{\pi_{old}}(\pi_{new}) - C D^{\max}_{KL}(\pi_{new}\|\pi_{old}) \\
&= L_{\pi^{i}_{old}}(\pi^{i}_{new}) - C D^{\max}_{KL}(\pi^{i}_{new}\|\pi^{i}_{old}) \quad (\text{from } \pi^{-i}_{new} = \pi^{-i}_{old}).
\end{aligned}$$
This means the individual objective is the same as the joint objective, so the monotonic improvement is guaranteed.
Then we can show the proof of Proposition 1.
Proof. We will build a new MDP $\tilde{M}$ based on the original MDP. We keep the action space $\tilde{A} = A = \times_{i=1}^{n} A^{i}$, where $A^{i}$ is the original action space of agent $i$. The new state space contains multiple layers. We define $\tilde{S}_k = S \times (\times_{i=1}^{k} A^{i})$ for $k = 1, 2, \cdots, n-1$ and $\tilde{S}_0 = S$, where $S$ is the original state space. A new state $\tilde{s}_k \in \tilde{S}_k$ thus takes the form $\tilde{s}_k = (s, a^1, a^2, \cdots, a^k)$. The total new state space is defined as $\tilde{S} = \cup_{i=0}^{n-1}\tilde{S}_i$. Next we define the transition probability $\tilde{P}$ as follows:
$$\tilde{P}(\tilde{s}'|\tilde{s}_k, a^{k+1}, a^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' = (\tilde{s}_k, a^{k+1})\big), \quad k < n-1,$$
$$\tilde{P}(\tilde{s}'|\tilde{s}_k, a^{k+1}, a^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' \in \tilde{S}_0\big)\, P(\tilde{s}'|\tilde{s}_k, a^{k+1}), \quad k = n-1.$$
This means that a state in layer $k$ can only transition to the state in layer $k+1$ appended with the corresponding action, and a state in layer $n-1$ transitions to layer $0$ with the probability $P$ of the original MDP. The reward function $\tilde{r}$ is defined as follows:
$$\tilde{r}(\tilde{s},\boldsymbol{a}) = \mathbb{1}\big(\tilde{s} \in \tilde{S}_0\big)\, r(\tilde{s},\boldsymbol{a}).$$
This means the reward is only obtained when the state is in layer $0$, and its value is the same as in the original reward function. Now we obtain the full definition of the new MDP $\tilde{M} = \{\tilde{S}, \tilde{A}, \tilde{P}, \tilde{r}, \gamma\}$. We then claim that if all agents learn in multi-agent sequential decision-making by PPO, they are actually performing agent-by-agent PPO in the new MDP $\tilde{M}$. To be precise, one update of multi-agent sequential decision-making in the original MDP $M$ is equivalent to a round of updates from agent $1$ to agent $n$ by agent-by-agent PPO in the new MDP $\tilde{M}$. Moreover, the total reward of a round in the new MDP $\tilde{M}$ is the same as the reward of one timestep in the original MDP $M$. With this conclusion and Lemma 1, we complete the proof.
The proof of Proposition 2 can be seen as a corollary of the proof of Proposition 1.
Proof. From Lemma 1, we know that the monotonic improvement of the joint policy in the new MDP $\tilde{M}$ is guaranteed for each update of a single agent's policy. So even if different rounds of updates in the new MDP $\tilde{M}$ use different orders of decision-making, the monotonic improvement of the joint policy is still guaranteed. Finally, from the proof of Proposition 1, we know that the monotonic improvement in the new MDP $\tilde{M}$ is equivalent to the monotonic improvement in the original MDP $M$. This completes the proof.
B PROOF OF THEOREM 1
Lemma 2 (TVD of the joint distributions). Suppose we have two distributions $p_1(x, y) = p_1(x)p_1(y|x)$ and $p_2(x, y) = p_2(x)p_2(y|x)$. We can bound the total variation distance of the joint as:
$$D_{TV}(p_1(x,y)\|p_2(x,y)) \le D_{TV}(p_1(x)\|p_2(x)) + \max_{x} D_{TV}(p_1(y|x)\|p_2(y|x)).$$
Proof. See (Janner et al., 2019) (Lemma B.1).
Lemma 3 (Markov chain TVD bound, time-varying). Suppose the expected total variation between two transition distributions is bounded as $\max_t \mathbb{E}_{s\sim p_{1,t}(s)}[D_{TV}(p_1(s'|s)\|p_2(s'|s))] \le \delta$, and the initial state distributions are the same, $p_{1,t=0}(s) = p_{2,t=0}(s)$. Then the distance in the state marginals is bounded as:
$$D_{TV}(p_{1,t}(s)\|p_{2,t}(s)) \le t\delta.$$
Proof. See (Janner et al., 2019) (Lemma B.2).
Lemma 4 (Branched Returns Bound). Suppose the expected total variation between two dynamics distributions is bounded as $\max_t \mathbb{E}_{s\sim p_{1,t}(s)}[D_{TV}(p_1(s'|s,\boldsymbol{a})\|p_2(s'|s,\boldsymbol{a}))] \le \epsilon_m$, and the policy divergences at level $k$ are bounded as $\max_{s,\boldsymbol{a}^{1:k-1}} D_{TV}(\pi_1(a^k|s,\boldsymbol{a}^{1:k-1})\|\pi_2(a^k|s,\boldsymbol{a}^{1:k-1})) \le \epsilon_{\pi_k}$. Then the returns are bounded as:
$$|\eta_1 - \eta_2| \le \frac{2 r_{\max}\gamma\big(\epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{2 r_{\max}\sum_{k=1}^{n}\epsilon_{\pi_k}}{1-\gamma},$$
where $r_{\max}$ is the upper bound of the reward function.
Proof. Here, η1 denotes the returns of π1 under dynamics p1(s′|s,a), and η2 denotes the returns of π2 under dynamics p2(s′|s,a). Then we have
$$\begin{aligned}
|\eta_1 - \eta_2| &= \Big|\sum_{s,\boldsymbol{a}}\big(p_1(s,\boldsymbol{a}) - p_2(s,\boldsymbol{a})\big)r(s,\boldsymbol{a})\Big| \\
&= \Big|\sum_{t}\sum_{s,\boldsymbol{a}}\gamma^{t}\big(p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a})\big)r(s,\boldsymbol{a})\Big| \\
&\le \sum_{t}\sum_{s,\boldsymbol{a}}\gamma^{t}\,|p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a})|\,r(s,\boldsymbol{a}) \\
&\le r_{\max}\sum_{t}\sum_{s,\boldsymbol{a}}\gamma^{t}\,|p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a})|.
\end{aligned}$$
By Lemma 2, we get
$$\begin{aligned}
\max_{s} D_{TV}(\pi_1(\boldsymbol{a}|s)\|\pi_2(\boldsymbol{a}|s)) &\le \max_{s,a^1} D_{TV}(\pi_1(\boldsymbol{a}^{-1}|s,a^1)\|\pi_2(\boldsymbol{a}^{-1}|s,a^1)) + \max_{s} D_{TV}(\pi_1(a^1|s)\|\pi_2(a^1|s)) \\
&\le \cdots \\
&\le \sum_{k=1}^{n}\max_{s,\boldsymbol{a}^{1:k-1}} D_{TV}(\pi_1(a^k|s,\boldsymbol{a}^{1:k-1})\|\pi_2(a^k|s,\boldsymbol{a}^{1:k-1})) \\
&\le \sum_{k=1}^{n}\epsilon_{\pi_k}.
\end{aligned}$$
We then apply Lemma 3, using $\delta = \epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}$ (via Lemmas 3 and 2), to get
$$\begin{aligned}
D_{TV}(p_{1,t}(s)\|p_{2,t}(s)) &\le t\max_{t}\mathbb{E}_{s\sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s)\|p_{2,t}(s'|s)) \\
&\le t\max_{t}\mathbb{E}_{s\sim p_{1,t}(s)} D_{TV}(p_{1,t}(s',\boldsymbol{a}|s)\|p_{2,t}(s',\boldsymbol{a}|s)) \\
&\le t\Big(\max_{t}\mathbb{E}_{s\sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s,\boldsymbol{a})\|p_{2,t}(s'|s,\boldsymbol{a})) + \max_{t}\mathbb{E}_{s\sim p_{1,t}(s)}\max_{s} D_{TV}(\pi_{1,t}(\boldsymbol{a}|s)\|\pi_{2,t}(\boldsymbol{a}|s))\Big) \\
&\le t\Big(\epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}\Big).
\end{aligned}$$
We also get $D_{TV}(p_{1,t}(s,\boldsymbol{a})\|p_{2,t}(s,\boldsymbol{a})) \le t(\epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}) + \sum_{k=1}^{n}\epsilon_{\pi_k}$ by Lemma 2. Thus, plugging this back, we get:
$$\begin{aligned}
|\eta_1 - \eta_2| &\le r_{\max}\sum_{t}\sum_{s,\boldsymbol{a}}\gamma^{t}\,|p_{1,t}(s,\boldsymbol{a}) - p_{2,t}(s,\boldsymbol{a})| \\
&\le 2 r_{\max}\sum_{t}\gamma^{t}\Big(t\big(\epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}\big) + \sum_{k=1}^{n}\epsilon_{\pi_k}\Big) \\
&\le 2 r_{\max}\Big(\frac{\gamma\big(\epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{\sum_{k=1}^{n}\epsilon_{\pi_k}}{1-\gamma}\Big).
\end{aligned}$$
Then we can show the proof of Theorem 1.
Proof. Let $\pi_\beta$ denote the data collecting policy. We use Lemma 4 to bound the returns, but it requires a bounded model error under the new policy $\pi$. Thus, we introduce $\pi_\beta$ by adding and subtracting $\eta[\pi_\beta]$, to get:
$$\hat{\eta}[\pi] - \eta[\pi] = \underbrace{\hat{\eta}[\pi] - \eta[\pi_\beta]}_{L_2} + \underbrace{\eta[\pi_\beta] - \eta[\pi]}_{L_1}.$$
We can bound $L_1$ and $L_2$ both using Lemma 4, with $\delta = \sum_{k=1}^{n}\epsilon_{\pi_k}$ and $\delta = \epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}$ respectively, and obtain:
$$L_1 \ge -\frac{2\gamma r_{\max}\sum_{k=1}^{n}\epsilon_{\pi_k}}{(1-\gamma)^2} - \frac{2 r_{\max}\sum_{k=1}^{n}\epsilon_{\pi_k}}{1-\gamma},$$
$$L_2 \ge -\frac{2\gamma r_{\max}\big(\epsilon_m + \sum_{k=1}^{n}\epsilon_{\pi_k}\big)}{(1-\gamma)^2} - \frac{2 r_{\max}\sum_{k=1}^{n}\epsilon_{\pi_k}}{1-\gamma}.$$
Adding these two bounds together yields the conclusion.
C ADDITIONAL EXPERIMENTS
C.1 ILLUSTRATION OF LEARNED PRIORITY OF DECISION-MAKING
Figure 8 (upper panel, a to e) shows the priority order of decision-making determined by SeqComm in PP. Agent 2, which is far away from the other prey and predators, is chosen to be the first-mover. If agents want to encircle and capture the prey, the agents (e.g., agents 2 and 5) on the periphery of the encircling circle should hold upper-level positions, since they are able to decide how to narrow the encirclement. In addition, agent 3 makes decisions prior to agent 5 so that a collision can be avoided once agent 5 obtains the intention of agent 3.
For CN, as illustrated in Figure 8 (lower panel, a to e), agent 2 is far away from all the landmarks, and all other agents are in a better position to occupy landmarks. Therefore, agent 2 is chosen to be the first-mover, similar to the phenomenon observed in PP. Once it has determined the target to occupy, other agents (agents 5 and 3) can adjust their actions accordingly and avoid conflicting goals. Otherwise, if agent 5 made a decision first and chose to occupy the closest landmark, agent 2 would have to approach a farther landmark, which would take more steps.
C.2 GENERALIZATION
Generalization to different numbers of agents has always been a key problem in MARL. For most communication algorithms, once the model is trained in one scenario, it is unlikely for agents to maintain relatively competitive performance in other scenarios with different numbers of agents. However, since we employ attention modules to process communicated messages, agents can handle messages of different lengths. In addition, the module used to determine the priority of decision-making is not restricted by the number of agents. Thus, we investigate whether SeqComm generalizes well to different numbers of agents in CN and PP.
For both tasks, SeqComm is trained in the 5-agent setting. Then, we test SeqComm in the 3-agent and 7-agent settings of CN and the 7-agent setting of PP. We use Fix-C trained directly on these test tasks as a reference for the performance of SeqComm. Note that the quantity of both landmarks and prey is adjusted according to the number of agents in CN and PP. The test results are shown in Table 1. SeqComm exhibits superiority in CN and PP, demonstrating that SeqComm may generalize well to different numbers of agents. A thorough study of the generalization of SeqComm is left to future work.
C.3 MORE SMAC MAPS
We have evaluated our method on two additional maps, i.e., 3s vs 4z and corridor. As illustrated in Figure 9, we can draw conclusions similar to those in Section 5.1.
D ADDITIONAL RELATED WORK
Multi-Agent Path Finding (MAPF). MAPF aims to plan collision-free paths for multiple agents on a given graph from their given start vertices to target vertices. In MAPF, prioritized planning is deeply coupled with collision avoidance (Van Den Berg & Overmars, 2005; Ma et al., 2019), where collision is used to design constraints or heuristics for planning. Unlike MAPF, our method couples the priority of decision-making with the learning objective and thus is more general. In addition, the different motivations and problem settings may lead to the incompatibility of the methods in the two fields.
Reinforcement Learning in Stackelberg Game. Many studies (Könönen, 2004; Sodomka et al., 2013; Greenwald et al., 2003; Zhang et al., 2020) have investigated reinforcement learning for finding the Stackelberg equilibrium. Bi-AC (Zhang et al., 2020) is a bi-level actor-critic method that allows agents to have different knowledge bases so that the Stackelberg equilibrium (SE) is possible to find. The actions can still be executed simultaneously and distributedly. It empirically studies the relationship between the cooperation level and the superiority of the SE over the Nash equilibrium. AQL (Könönen, 2004) updates the Q-value by solving the SE in each iteration and can be regarded as the value-based version of Bi-AC. Existing work mainly focuses on two-agent settings where the order is fixed in advance. However, a fixed order can hardly be an optimal solution, as we have shown in Section 3. To address this issue, we exploit agents' intentions to dynamically determine the priority of decision-making along the way of interacting with each other.
E EXPERIMENTAL SETTINGS
In cooperative navigation, there are 5 agents, each of size 0.15. They need to occupy 5 landmarks of size 0.05. The acceleration of agents is 7. In predator-prey, the numbers of predators (agents) and prey are set to 5 and 3, respectively, and their sizes are 0.15 and 0.05. The acceleration is 5 for predators and 7 for prey. In keep-away, the numbers of attackers (agents) and defenders are both set to 3, and their sizes are respectively 0.15 and 0.05. Besides, the acceleration is 6 for attackers and 4 for defenders. The three landmarks are located at (0.00, 0.30), (0.25, −0.15), and (−0.25, −0.15). Note that each agent is allowed to communicate with all other agents in all three tasks. The team reward is similar across tasks. At a timestep $t$, it can be written as
$$r^{t}_{team} = -\sum_{i=1}^{n} d^{t}_{i} + C^{t}\, r_{collision},$$
where $d^{t}_{i}$ is the distance of landmark/prey $i$ to its nearest agent/predator, $C^{t}$ is the number of collisions (when the distance between two agents is less than the sum of their sizes) occurring at timestep $t$, and $r_{collision} = -1$. In addition, agents act discretely and have 5 actions (stay and move up, down, left, right). The episode length is 20, 30, and 20 in cooperative navigation, predator-prey, and keep-away, respectively.
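As a concrete illustration, the team reward above can be computed as in the following sketch; the array shapes and function name are our own, and all agents are assumed to share one size, as in these tasks.

```python
import numpy as np

def team_reward(target_pos, agent_pos, agent_size=0.15, r_collision=-1.0):
    """Sketch of r_t = -sum_i d_i + C_t * r_collision.

    target_pos: (m, 2) positions of landmarks/prey;
    agent_pos:  (n, 2) positions of agents/predators.
    d_i is the distance of target i to its nearest agent; C_t counts agent
    pairs closer than the sum of their sizes (equal sizes assumed here).
    """
    dists = np.linalg.norm(target_pos[:, None] - agent_pos[None, :], axis=-1)
    distance_term = -dists.min(axis=1).sum()

    n = len(agent_pos)
    collisions = sum(
        np.linalg.norm(agent_pos[i] - agent_pos[j]) < 2 * agent_size
        for i in range(n) for j in range(i + 1, n)
    )
    return distance_term + collisions * r_collision
```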
F IMPLEMENTATION DETAILS
F.1 ARCHITECTURE AND HYPERPARAMETERS
Our models, including SeqComm, Fix-C, and Random-C are trained based on MAPPO. The critic and policy network are realized by two fully connected layers. As for the attention module, key, query, and value have one fully connected layer each. The size of hidden layers is 100. Tanh functions are used as nonlinearity. For I2C, we use their official code with default settings of basic hyperparameters and networks. As there is no released code of IS and TarMAC, we implement IS and TarMAC by ourselves, following the instructions mentioned in the original papers (Kim et al., 2021; Das et al., 2019).
For the world model, observations and actions are firstly encoded by a fully connected layer. The output size for the observation encoder is 48, and the output size for the action encoder is 16. Then the outputs of the encoder will be passed into the attention module with the same structure aforementioned. Finally, we use a fully connected layer to decode. In these layers, Tanh is used as the nonlinearity.
Table 2 summarize the hyperparameters used by SeqComm and the baselines in the MPE.
For SMAC, SeqComm, Random-C, and Fix-C are based on the same architecture, and the hyperparameters stay the same. For MMM2, 6h vs 8z, and 8m vs 9m, the learning rate is 5e−5, while for 10m vs 11m, corridor, and 3s vs 4z, the learning rate is 7e−5. The PPO epoch is set to 10 for 6h vs 8z, and to 5 for the rest of the maps. H and F are set to 5 and 1, respectively; 20 and 2 are better values of H and F if computing resources are sufficient.
For TarMAC, the learning rate is 7e−5 for all maps. The PPO epoch is set to 10 for 6h vs 8z, and to 5 for the rest of the maps.
For MAPPO, the learning rate is 5e−5 for MMM2 and 6h vs 8z, and 7e−5 for 8m vs 9m and 10m vs 11m.
For these four methods, the mini batch is set to 1. As for other hyperparameters, we follow the default settings of the official code (Yu et al., 2021).
For QMIX, the learning rate is 5e−5. The ε (for ε-greedy exploration) is 1 and the batch size is 32. The buffer size is 5e3. For the rest, we follow the default settings of https://github.com/starry-sky6688/MARL-Algorithms.git.
F.2 ATTENTION MODULE
Attention module (AM) is applied to process messages in the world model, critic network, and policy network. AM consists of three components: query, key, and values. The output of AM is the weighted sum of values, where the weight of value is determined by the dot product of the query and the corresponding key.
For the AM in the world model, denoted as $AM_w$, agent $i$ gets messages $m^{-i}_t = h^{-i}_t$ from all other agents at timestep $t$ in the negotiation phase, and predicts a query vector $q^{i}_t$ following $AM^{i}_{w,q}(h^{i}_t)$. The query is used to compute a dot product with the keys $k_t = [k^{1}_t, \cdots, k^{n}_t]$, where $k^{j}_t$ is obtained from the message of agent $j$ following $AM^{i}_{w,k}(h^{j}_t)$ for $j \neq i$, and $k^{i}_t$ is from $AM^{i}_{w,k}(h^{i}_t)$. Each dot product is scaled by $1/\sqrt{d_k}$, followed by a softmax to obtain the attention weights $\alpha$ for each value vector:
$$\alpha^{i} = \mathrm{softmax}\Big[\frac{q^{i\top}_t k^{1}_t}{\sqrt{d_k}}, \cdots, \underbrace{\frac{q^{i\top}_t k^{j}_t}{\sqrt{d_k}}}_{\alpha_{ij}}, \cdots, \frac{q^{i\top}_t k^{n}_t}{\sqrt{d_k}}\Big]. \tag{1}$$
The output of the attention module is defined as $c^{i}_t = \sum_{j=1}^{n}\alpha_{ij} v^{j}_t$, where $v^{j}_t$ is obtained from the messages or the agent's own hidden state of observation following $AM^{i}_{w,v}(\cdot)$. As for the AM in the policy and critic network, denoted as $AM_a$, agent $i$ gets additional messages from upper-level agents in the launching phase. The messages from an upper-level and a lower-level agent can be expanded as $m^{upper}_t = [h^{upper}_t, \boldsymbol{a}^{upper}_t]$ and $m^{lower}_t = [h^{lower}_t, 0]$, respectively. In addition, the query depends on the agent's own hidden state of observation $h^{i}_t$, but keys and values are only from the messages of other agents.
F.3 TRAINING
The training of SeqComm is an extension of MAPPO. The observation encoder $e$, the critic $V$, and the policy $\pi$ are parameterized by $\theta_e$, $\theta_v$, and $\theta_\pi$, respectively. Besides, the attention module $AM_a$ is parameterized by $\theta_a$ and takes as input the agent's hidden state, the messages (hidden states of other agents) received in the negotiation phase, and the messages (the actions of upper-level agents) received in the launching phase. Let $D = \{\tau_k\}_{k=1}^{K}$ be a set of trajectories collected by running the policy in the environment. Note that we drop the timestep $t$ in the following notations for simplicity.
The value function is fitted by regression on the mean-squared error:
$$L(\theta_v, \theta_a, \theta_e) = \frac{1}{KT}\sum_{\tau\in D}\sum_{t=0}^{T-1}\Big\|V\big(AM_a(e(o), \boldsymbol{a}^{upper})\big) - \hat{R}\Big\|^{2}_{2}, \tag{2}$$
where $\hat{R}$ is the discounted rewards-to-go.
We update the policy by maximizing the PPO-Clip objective:
$$L(\theta_\pi, \theta_a, \theta_e) = \frac{1}{KT}\sum_{\tau\in D}\sum_{t=0}^{T-1}\min\Big(\frac{\pi(a|AM_a(e(o), \boldsymbol{a}^{upper}))}{\pi_{old}(a|AM_a(e(o), \boldsymbol{a}^{upper}))}A_{\pi_{old}},\; g(\epsilon, A_{\pi_{old}})\Big), \tag{3}$$
where
$$g(\epsilon, A) = \begin{cases} (1+\epsilon)A & A \ge 0 \\ (1-\epsilon)A & A < 0 \end{cases},$$
and $A_{\pi_{old}}(o, \boldsymbol{a}^{upper}, a)$ is computed using the GAE method.
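A minimal sketch of this objective is shown below; clamping the probability ratio to $[1-\epsilon, 1+\epsilon]$ realizes $g(\epsilon, A)$ for both signs of $A$. The value $\epsilon = 0.2$ is a common PPO default, not necessarily the one used in our experiments.

```python
import torch

def ppo_clip_loss(logp, logp_old, adv, eps=0.2):
    """Sketch of Eq. (3). logp / logp_old are log-probabilities of the
    taken actions under pi and pi_old, with policy input AM_a(e(o),
    a_upper); adv is the GAE advantage. Negated for gradient descent."""
    ratio = torch.exp(logp - logp_old)
    return -torch.min(ratio * adv,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()
```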
The world model $\mathcal{M}$, parameterized by $\theta_w$, is trained as a regression model using the training data set $S$. It is updated with the loss:
$$L(\theta_w) = \frac{1}{|S|}\sum_{o,\boldsymbol{a},o',r\in S}\Big\|(o', r) - \mathcal{M}\big(AM_w(e(o), \boldsymbol{a})\big)\Big\|^{2}_{2}. \tag{4}$$
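Equation (4) can be sketched in code as follows; `model`, `am_w`, and `encoder` stand for $\mathcal{M}$, $AM_w$, and $e$, and are assumed interfaces returning tensors of matching shapes.

```python
def world_model_loss(model, am_w, encoder, batch):
    """Sketch of the regression loss in Eq. (4): predict the next joint
    observation and reward from attention-processed encodings of the
    current joint observation and joint actions."""
    o, a, o_next, r = batch  # tuples sampled from the data set S
    pred_o, pred_r = model(am_w(encoder(o), a))
    return (((pred_o - o_next) ** 2).sum(-1) + (pred_r - r) ** 2).mean()
```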
We trained our model on one GeForce GTX 1050 Ti and Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz. | 1. What is the focus of the paper in cooperative multi-agent reinforcement learning?
2. What are the strengths of the proposed approach, particularly regarding its theoretical analysis and performance improvement?
3. Do you have any concerns or suggestions regarding the paper's title and its representation of the main contribution?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a novel communication scheme, Sequential Communication (SeqComm), for cooperative multi-agent reinforcement learning (MARL). In communication for cooperation in MARL, circular dependencies can sometimes occur. This is caused by synchronization in communication. The proposed model assumes asynchronous communication. The upper-level agents make decisions before the lower-level ones. They, theoretically, show that policies learned by SeqComm are guaranteed to improve performance monotonically and converge. Also, the empirical performance is shown by comparing the proposal with other existing methods.
Strengths And Weaknesses
Strength
The theoretical proof is provided.
The performance is empirically shown in the experiment.
I think the title does not sufficiently represent this paper's main characteristics. "Asynchronicity" seems to be the keyword of the paper.
Clarity, Quality, Novelty And Reproducibility
The paper's quality and clarity are satisfactory. The theoretical proposal has a certain amount of novelty. Information for implementation is provided to reproduce the experiment. |
ICLR | Title
Multi-Agent Sequential Decision-Making via Communication
Abstract
Communication helps agents to obtain information about others so that better coordinated behavior can be learned. Some existing work communicates predicted future trajectories with others, hoping to get clues about what others would do for better coordination. However, circular dependencies can sometimes occur when agents are treated synchronously, making it hard to coordinate decision-making. In this paper, we propose a novel communication scheme, Sequential Communication (SeqComm). SeqComm treats agents asynchronously (the upper-level agents make decisions before the lower-level ones) and has two communication phases. In the negotiation phase, agents determine the priority of decision-making by communicating hidden states of observations and comparing the values of intentions, which are obtained by modeling the environment dynamics. In the launching phase, the upper-level agents take the lead in making decisions and communicate their actions with the lower-level agents. Theoretically, we prove that the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we show that SeqComm outperforms existing methods in various multi-agent cooperative tasks.
1 INTRODUCTION
The partial observability and stochasticity inherent to the nature of multi-agent systems can easily impede the cooperation among agents and lead to catastrophic miscoordination (Ding et al., 2020). Communication has been exploited to help agents obtain extra information during both training and execution to mitigate such problems (Foerster et al., 2016; Sukhbaatar et al., 2016; Peng et al., 2017). Specifically, agents can share their information with others via a trainable communication channel.
Centralized training with decentralized execution (CTDE) is a popular learning paradigm in cooperative multi-agent reinforcement learning (MARL). Although the centralized value function can be learned to evaluate the joint policy of agents, the decentralized policies of agents are essentially independent. Therefore, a coordination problem arises. That is, agents may make sub-optimal actions by mistakenly assuming others' actions when there exist multiple optimal joint actions (Busoniu et al., 2008). Communication allows agents to obtain information about others to avoid miscoordination. However, most existing work only focuses on communicating messages, e.g., the information of agents' current observation or historical trajectory (Jiang & Lu, 2018; Singh et al., 2019; Das et al., 2019; Ding et al., 2020). It is impossible for an agent to acquire others' actions before making decisions since the game model is usually synchronous, i.e., agents make decisions and execute actions simultaneously. Recently, intention or imagination, depicted by a combination of predicted actions and observations of many future steps, has been proposed as part of messages (Kim et al., 2021; Pretorius et al., 2021). However, circular dependencies can still occur, so it may be hard to coordinate decision-making under synchronous settings.
A general approach to solving the coordination problem is to make sure that ties between equally good actions are broken by all agents. One simple mechanism for doing so is to know exactly what others will do and adjust the behavior accordingly under a unique ordering of agents and actions (Busoniu et al., 2008). Inspired by this, we reconsider the cooperative game from an asynchronous perspective. In other words, each agent is assigned a priority (i.e., order) of decision-making each step in both training and execution, thus the Stackelberg equilibrium (SE) (Von Stackelberg, 2010) is naturally set up as the learning objective. Specifically, the upper-level agents make decisions before the lower-level agents. Therefore, the lower-level agents can acquire the actual actions of the upper-level agents by
communication and make their decisions conditioned on what the upper-level agents would do. Under this setting, the SE is likely to be Pareto superior to the average Nash equilibrium (NE) in games that require a high cooperation level (Zhang et al., 2020). However, is it necessary to decide a specific priority of decision-making for each agent? Ideally, the optimal joint policy can be decomposed in any order (Wen et al., 2019), e.g., $\pi^*(a^1, a^2|s) = \pi^*(a^1|s)\pi^*(a^2|s, a^1) = \pi^*(a^2|s)\pi^*(a^1|s, a^2)$. But during the learning process, it is unlikely for agents to use the optimal actions of other agents for gradient calculation, which leaves learning still vulnerable to the relative overgeneralization problem (Wei et al., 2018). Overall, there is no guarantee that the above equation holds during the learning process, thus the ordering should be carefully considered.
In this paper, we propose a novel model-based multi-round communication scheme for cooperative MARL, Sequential Communication (SeqComm), to enable agents to explicitly coordinate with each other. Specifically, SeqComm has two-phase communication, negotiation phase and launching phase. In the negotiation phase, agents communicate their hidden states of observations with others simultaneously. Then they are able to generate multiple predicted trajectories, called intention, by modeling the environmental dynamics and other agents’ actions. In addition, the priority of decision-making is determined by communicating and comparing the corresponding values of agents’ intentions. The value of each intention represents the rewards obtained by letting that agent take the upper-level position of the order sequence. The sequence of others follows the same procedure as aforementioned with the upper-level agents fixed. In the launching phase, the upper-level agents take the lead in decision-making and communicate their actual actions with the lower-level agents. Note that the actual actions will be executed simultaneously in the environment without any changes.
SeqComm is currently built on MAPPO (Yu et al., 2021). Theoretically, we prove the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we evaluate SeqComm on a set of tasks in multi-agent particle environment (MPE) (Lowe et al., 2017) and StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). In all these tasks, we demonstrate that SeqComm outperforms prior communication-free and communication-based methods. By ablation studies, we confirm that treating agents asynchronously is a more effective way to promote coordination and SeqComm can provide the proper priority of decision-making for agents to develop better coordination.
2 RELATED WORK
Communication. Existing studies (Jiang & Lu, 2018; Kim et al., 2019; Singh et al., 2019; Das et al., 2019; Zhang et al., 2019; Jiang et al., 2020; Ding et al., 2020; Konan et al., 2022) in this realm mainly focus on how to extract valuable messages. ATOC (Jiang & Lu, 2018) and IC3Net (Singh et al., 2019) utilize gate mechanisms to decide when to communicate with other agents. Many works (Das et al., 2019; Konan et al., 2022) employ multi-round communication to fully reason the intentions of others and establish complex collaboration strategies. Social influence (Jaques et al., 2019) uses communication to influence the behaviors of others. I2C (Ding et al., 2020) only communicates with agents that are relevant and influential which are determined by causal inference. However, all these methods focus on how to exploit valuable information from current or past partial observations effectively and properly. More recently, some studies (Kim et al., 2021; Du et al., 2021; Pretorius et al., 2021) begin to answer the question: can we favor cooperation beyond sharing partial observation? They allow agents to imagine their future states with a world model and communicate those with others. IS (Pretorius et al., 2021), as the representation of this line of research, enables each agent to share its intention with other agents in the form of the encoded imagined trajectory and use the attention module to figure out the importance of the received intention. However, two concerns arise. On one hand, circular dependencies can lead to inaccurate predicted future trajectories as long as the multi-agent system treats agents synchronously. On the other hand, MARL struggles in extracting useful information from numerous messages, not to mention more complex and dubious messages, i.e., predicted future trajectories.
Unlike these works, we treat the agents from an asynchronous perspective, so circular dependencies can be naturally resolved. Furthermore, agents only send actions to lower-level agents besides partial observations, which keeps the messages compact as well as informative.
Coordination. The agents are essentially independent decision makers in execution and may break ties between equally good actions randomly. Thus, in the absence of additional mechanisms, different
agents may break ties in different ways, and the resulting joint actions may be suboptimal. Coordination graphs (Guestrin et al., 2002; Böhmer et al., 2020; Wang et al., 2021b) simplify the coordination when the global Q-function can be additively decomposed into local Q-functions that only depend on the actions of a subset of agents. Typically, a coordination graph expresses a higher-order value decomposition among agents. This improves the representational capacity to distinguish other agents' effects on local utility functions, which addresses the miscoordination problems caused by partial observability. Another general approach to solving the coordination problem is to make sure that ties are broken by all agents in the same way, requiring that random action choices are somehow coordinated or negotiated. Social conventions (Boutilier, 1996) or role assignments (Prasad et al., 1998) encode prior preferences towards certain joint actions and help break ties during action selection. Communication (Fischer et al., 2004; Vlassis, 2007) can be used to negotiate action choices, either alone or in combination with the aforementioned techniques. Our method follows this line of research by utilizing the ordering of agents and actions to break the ties, rather than relying on the enhanced representational capacity of the local value function.
3 PROBLEM FORMULATION
Cost-Free Communication. The decentralized partially observable Markov decision process (DecPOMDP) can be extended to explicitly incorporate broadcasting observations. The resulting model is called multi-agent POMDP (Oliehoek et al., 2016).
Pynadath & Tambe (2002) showed that under cost-free communication, a joint communication policy that shares local observations at each stage is optimal. Many studies have also investigated sharing local observations in models similar to multi-agent POMDP (Pynadath & Tambe, 2002; Ooi & Wornell, 1996; Nair et al., 2004; Roth et al., 2005a;b; Spaan et al., 2006; Oliehoek et al., 2007; Becker et al., 2004). These works focus on issues other than communication cost, while we focus on the coordination problem. Note that even under multi-agent POMDP, where agents can get joint observations, the coordination problem can still arise (Busoniu et al., 2008). Suppose the centralized critic has learned that the action pairs [a1, a2] and [b1, b2] are equally optimal. Without any prior information, the individual policies π1 and π2 learned from the centralized critic may break the tie randomly and choose a1 and b2, respectively.
Multi-Agent Sequential Decision-Making. We consider fully cooperative multi-agent tasks that are modeled as multi-agent POMDP, where n agents interact with the environment according to the following procedure, which we refer to as multi-agent sequential decision-making.
At each timestep $t$, assume the priority (i.e., order) of decision-making for all agents is given and each priority level has only one agent (i.e., agents make decisions one by one). Note that the smaller the level index, the higher the priority of decision-making. The agent at each level $k$ gets its own observation $o^{k}_t$ drawn from the state $s_t$, and receives messages $m^{-k}_t$ from all other agents, where $m^{-k}_t \triangleq \{\{o^{1}_t, a^{1}_t\}, \ldots, \{o^{k-1}_t, a^{k-1}_t\}, o^{k+1}_t, \ldots, o^{n}_t\}$. Equivalently, $m^{-k}_t$ can be written as $\{\boldsymbol{o}^{-k}_t, \boldsymbol{a}^{1:k-1}_t\}$, where $\boldsymbol{o}^{-k}_t$ denotes the joint observations of all agents except $k$, and $\boldsymbol{a}^{1:k-1}_t$ denotes the joint actions of agents $1$ to $k-1$. For the agent at the first level (i.e., $k=1$), $\boldsymbol{a}^{1:k-1}_t = \emptyset$. Then, the agent determines its action $a^{k}_t$ sampled from its policy $\pi_k(\cdot|o^{k}_t, m^{-k}_t)$, or equivalently $\pi_k(\cdot|\boldsymbol{o}_t, \boldsymbol{a}^{1:k-1}_t)$, and sends it to the lower-level agents. After all agents have determined their actions, they perform the joint actions $\boldsymbol{a}_t$, which can be seen as sampled from the joint policy $\pi(\cdot|s_t)$ factorized as $\prod_{k=1}^{n}\pi_k(\cdot|\boldsymbol{o}_t, \boldsymbol{a}^{1:k-1}_t)$, in the environment, get a shared reward $r(s_t, \boldsymbol{a}_t)$, and the state transitions to the next state $s'$ according to the transition probability $p(s'|s_t, \boldsymbol{a}_t)$. All agents aim to maximize the expected return $\sum_{t=0}^{\infty}\gamma^{t} r_t$, where $\gamma$ is the discount factor. The state-value function and action-value function of the level-$k$ agent are defined as follows:
$$V_{\pi_k}(s, \boldsymbol{a}^{1:k-1}) \triangleq \mathbb{E}_{\substack{s_{1:\infty} \\ \boldsymbol{a}^{k:n}_0\sim\pi_{k:n} \\ \boldsymbol{a}_{1:\infty}\sim\pi}}\Big[\sum_{t=0}^{\infty}\gamma^{t} r_t \,\Big|\, s_0 = s,\, \boldsymbol{a}^{1:k-1}_0 = \boldsymbol{a}^{1:k-1}\Big],$$
$$Q_{\pi_k}(s, \boldsymbol{a}^{1:k}) \triangleq \mathbb{E}_{\substack{s_{1:\infty} \\ \boldsymbol{a}^{k+1:n}_0\sim\pi_{k+1:n} \\ \boldsymbol{a}_{1:\infty}\sim\pi}}\Big[\sum_{t=0}^{\infty}\gamma^{t} r_t \,\Big|\, s_0 = s,\, \boldsymbol{a}^{1:k}_0 = \boldsymbol{a}^{1:k}\Big].$$
For the setting of multi-agent sequential decision-making discussed above, we have the following proposition. Proposition 1. If all the agents update their policies with individual TRPO (Schulman et al., 2015) sequentially in multi-agent sequential decision-making, then the joint policy of all agents is guaranteed to improve monotonically and converge.
Proof. The proof is given in Appendix A.
Proposition 1 indicates that SeqComm has the performance guarantee regardless of the priority of decision-making in multi-agent sequential decision-making. However, the priority of decision-making indeed affects the optimality of the converged joint policy, and we have the following claim. Claim 1. The different priorities of decision-making affect the optimality of the convergence of the learning algorithm due to the relative overgeneralization problem.
We use a one-step matrix game as an example, as illustrated in Figure 1(a), to demonstrate the influence of the priority of decision-making on the learning process. Due to relative overgeneralization (Wei et al., 2018), agent B tends to choose b2 or b3. Specifically, b2 or b3 in the suboptimal equilibrium is a better choice than b1 in the optimal equilibrium when matched with arbitrary actions from agent A. Therefore, as shown in Figure 1(b), B → A (i.e., agent B makes decisions before A, and A’s policy conditions on the action of B) and Simultaneous (i.e., two agents make decisions simultaneously and independently) are easily trapped into local optima. However, things can be different if agent A goes
first, as A → B achieves the optimum. As long as agent A does not suffer from relative overgeneralization, it can help agent B get rid of local optima by narrowing down the search space of B. Besides, a policy that determines the priority of decision-making can be learned under the guidance of the state-value function, denoted as Learned. It obtains better performance than B → A and Simultaneous, which indicates that dynamically determining the order during policy learning can be beneficial as we do not know the optimal priority in advance.
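Claim 1 can be reproduced with a toy payoff matrix of the kind used in Figure 1(a); the numbers below are hypothetical (the figure's exact payoffs are not reproduced here) but exhibit the same relative-overgeneralization structure.

```python
import numpy as np

# Rows: agent A's actions a1..a3; columns: agent B's actions b1..b3.
# (a1, b1) is the unique optimum, yet b1 has the lowest column average,
# so B drifts toward b2/b3 under simultaneous or B-first learning.
R = np.array([[12., -10., -10.],
              [-20.,  6.,   5.],
              [-20.,  5.,   6.]])

print(R.mean(axis=0))      # [-9.33, 0.33, 0.33]: b1 looks worst to B
# If A commits to a1 first and communicates it (A -> B), B's best
# response recovers b1 and thus the optimal joint action.
print(int(R[0].argmax()))  # 0, i.e., b1
```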
Remark 1. The priority (i.e., order) of decision-making affects the optimality of the converged joint policy in multi-agent sequential decision-making, thus it is critical to determine the order. However, learning the order directly requires an additional centralized policy in execution, which is not generalizable in the scenario where the number of agents varies. Moreover, its learning complexity exponentially increases with the number of agents, making it infeasible in many cases.
4 SEQUENTIAL COMMUNICATION
In this paper, we cast our eyes in another direction and resort to the world model. Ideally, we can randomly sample candidate order sequences, evaluate them under the world model (see Section 4.1), and choose the order sequence that is deemed the most promising under the true dynamic. SeqComm is designed based on this principle to determine the priority of decision-making via communication.
SeqComm adopts a multi-round communication mechanism, i.e., agents are allowed to communicate with others in multiple rounds. Importantly, communication is separated into phases serving different purposes. One is the negotiation phase for agents to determine the priority of decision-making. Another is the launching phase for agents to act conditioning on actual actions upper-level agents will take to implement explicit coordination via communication. The overview of SeqComm is illustrated in Figure 2. Each SeqComm agent consists of a policy, a critic, and a world model, as illustrated in Figure 3, and the parameters of all networks are shared across agents (Gupta et al., 2017).
[Figure 2 and Figure 3 appear here: an overview of the SeqComm communication scheme (agents exchange observations/hidden states, compare intention rewards to build the order set, then communicate actions level by level) and the architecture of a SeqComm agent.]
4.1 NEGOTIATION PHASE
In the negotiation phase, the observation encoder first takes $o_t$ as input and outputs a hidden state $h_t$, which is used to communicate with others. Agents then determine the priority of decision-making by intention, which is established and evaluated based on the world model.
World Model. The world model is needed to predict and evaluate future trajectories. SeqComm, unlike previous works (Kim et al., 2021; Du et al., 2021; Pretorius et al., 2021), can utilize the hidden states of other agents received in the first round of communication to model more precise environment dynamics for the explicit coordination in the next round of communication. Once an agent can access other agents' hidden states, it has adequate information to estimate their actions, since all agents are homogeneous and parameter-sharing. Therefore, the world model $\mathcal{M}(\cdot)$ takes as input the joint hidden states $h_t = \{h^1_t, \ldots, h^n_t\}$ and actions $\boldsymbol{a}_t$, and predicts the next joint observations and reward, $\hat{\boldsymbol{o}}_{t+1}, \hat{r}_{t+1} = \mathcal{M}^i(AM_w(h_t, \boldsymbol{a}_t))$, where $AM_w$ is the attention module. The reason we adopt the attention module is to make the world model generalizable to scenarios where additional agents are introduced or existing agents are removed.
Priority of Decision-Making. The intention is the key element to determine the priority of decision-making. The notion of intention is described as an agent’s future behavior in previous works (Rabinowitz et al., 2018; Raileanu et al., 2018; Kim et al., 2021). However, we define the intention as an agent’s future behavior without considering others.
As mentioned before, an agent's intention that considers others can lead to circular dependencies and cause miscoordination. By our definition, the intention of an agent is depicted by its future trajectories with that agent as the first-mover and the others ignored. However, there are many possible future trajectories, as the priority of the remaining agents is unfixed. In practice, we use the Monte Carlo method to evaluate intentions.
Taking agent $i$ at timestep $t$ as an example, it first considers itself as the first-mover and produces its action based only on the joint hidden states, $\hat{a}^{i}_t \sim \pi_i(\cdot|AM_a(h_t))$, where we again use an attention module $AM_a$ to handle the input. For the order sequence of lower-level agents, we randomly sample a set of order sequences over the unfixed agents. Assume agent $j$ is the second-mover; agent $i$ models $j$'s action by considering the upper-level action following its own policy, $\hat{a}^{j}_t \sim \pi_i(\cdot|AM_a(h_t, \hat{a}^{i}_t))$. The same procedure is applied to predict the actions of all other agents following the sampled order sequence. Based on the joint hidden states and predicted actions, the next joint observations $\hat{\boldsymbol{o}}_{t+1}$ and the corresponding reward $\hat{r}_{t+1}$ can be predicted by the world model. The predicted future trajectory has length $H$ and can be written as $\tau_t = \{\hat{\boldsymbol{o}}_{t+1}, \hat{\boldsymbol{a}}_{t+1}, \ldots, \hat{\boldsymbol{o}}_{t+H}, \hat{\boldsymbol{a}}_{t+H}\}$ by repeating the procedure above, and the value of one trajectory is defined as the normalized return of that trajectory, $v_{\tau_t} = \sum_{t'=t+1}^{t+H}\gamma^{t'-t-1}\hat{r}_{t'}/H$. In addition, the intention value is defined as the average value of $F$ future trajectories with different sampled order sequences. The choice of $F$ is a tradeoff between the computation overhead and the accuracy of the estimation.
After all the agents have computed their own intention and the corresponding value, they again communicate their intention values to others. Then agents would compare and choose the agent with the highest intention value to be the first-mover. The priority of lower-level decision-making follows the same procedure with the upper-level agents fixed. Note that some agents are required to communicate intention values with others multiple times until the priority of decision-making is finally determined.
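The negotiation phase can be summarized in the following sketch. `policy.act` and `world_model.predict` are assumed interfaces standing for the shared policy with $AM_a$ and the learned world model with $AM_w$; the defaults H=5 and F=1 follow Appendix F.

```python
import numpy as np

def rollout_return(order, h, policy, world_model, H, gamma):
    """Return of one H-step rollout under the world model, with agents
    predicted to act level by level in the given order at every step."""
    ret = 0.0
    for t in range(H):
        actions = []
        for k in order:  # each action conditions on upper-level actions
            actions.append(policy.act(k, h, actions))   # assumed interface
        h, r = world_model.predict(h, actions)          # assumed interface
        ret += gamma ** t * r
    return ret / H      # trajectory value as defined in Section 4.1

def determine_priority(agents, h, policy, world_model, H=5, F=1, gamma=0.99):
    """One level is fixed per round: every unfixed agent evaluates its
    intention value (average return over F rollouts with itself as the
    next mover and a random order for the rest), the values are
    communicated and compared, and the highest-valued agent takes the
    next level."""
    order, remaining = [], list(agents)
    while remaining:
        values = {}
        for i in remaining:
            vals = []
            for _ in range(F):
                rest = [j for j in remaining if j != i]
                np.random.shuffle(rest)
                vals.append(rollout_return(order + [i] + rest, h,
                                           policy, world_model, H, gamma))
            values[i] = float(np.mean(vals))
        best = max(values, key=values.get)
        order.append(best)
        remaining.remove(best)
    return order
```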
4.2 LAUNCHING PHASE
As for the launching phase, agents communicate to obtain additional information for decision-making. Apart from the hidden states received in the last phase, we allow agents to get the actual actions the upper-level agents will take in execution, while other studies can only infer others' actions by opponent modeling (Rabinowitz et al., 2018; Raileanu et al., 2018) or by communicating intentions (Kim et al., 2021). Therefore, miscoordination can be naturally avoided and a better cooperation strategy is possible, since lower-level agents can adjust their behaviors accordingly. A lower-level agent $i$ makes a decision following the policy $\pi_i(\cdot|AM_a(h_t, \boldsymbol{a}^{upper}_t))$, where $\boldsymbol{a}^{upper}_t$ denotes the actual actions received from all upper-level agents. Once the agent has decided its action, it sends the action to all other lower-level agents through the communication channel. Note that the actions are executed simultaneously and distributedly in execution, though agents make decisions sequentially.
Communication Overhead. The two communication phases alternate until all agents have determined their levels and obtained the upper-level actions. Note that many previous works also adopt a multi-round communication scheme (Das et al., 2019; Singh et al., 2019). In practice, compared with communicating high-dimensional hidden states/observations over multiple rounds (Das et al., 2019; Singh et al., 2019) or transferring multi-step trajectories (Kim et al., 2021), SeqComm needs more rounds but transmits the hidden states only once. In the remaining $n-1$ rounds of communication, with $(n-1)/2$ broadcasts per agent in total, only a single intention value and an action are exchanged. Considering that there are $n!$ permutations of order choices for $n$ agents, our method greatly reduces the computation overhead, since each agent needs to perform at most $n$ evaluations to search for a satisfying order. Although SeqComm is more suitable for latency-tolerant MARL tasks, e.g., power dispatch (minutes) (Wang et al., 2021a), inventory management (hours) (Feng et al., 2021), and maritime transportation (days) (Li et al., 2019), it is possible for SeqComm to have a wider range of applications given the rapid development of communication technology, e.g., 5G.
4.3 THEORETICAL ANALYSIS
As the priority of decision-making is determined by intention values, SeqComm is likely to choose different orders at different timesteps during training. However, we have the following proposition that theoretically guarantees the performance of the learned joint policy under SeqComm.
Proposition 2. The monotonic improvement and convergence of the joint policy in SeqComm are independent of the priority of decision-making of agents at each timestep.
Proof. The proof is given in Appendix A.
The priority of decision-making is chosen under the world model, thus the compounding errors in the world model can result in discrepancies between the predicted returns of the same order under the world model and the true dynamics. We then analyze the monotonic improvement for the joint policy under the world model based on Janner et al. (2019).

Theorem 1. Let the expected total variation between two transition distributions be bounded at each timestep as $\max_t \mathbb{E}_{s\sim\pi_{\beta,t}}[D_{TV}(p(s'|s,\boldsymbol{a})\|\hat{p}(s'|s,\boldsymbol{a}))] \le \epsilon_m$, and the policy divergence at level $k$ be bounded as $\max_{s,\boldsymbol{a}^{1:k-1}} D_{TV}(\pi_{\beta,k}(a^k|s,\boldsymbol{a}^{1:k-1})\|\pi_k(a^k|s,\boldsymbol{a}^{1:k-1})) \le \epsilon_{\pi_k}$, where $\pi_\beta$ is the data collecting policy for the model and $\hat{p}(s'|s,\boldsymbol{a})$ is the transition distribution under the model. Then the model return $\hat{\eta}$ and true return $\eta$ of the policy $\pi$ are bounded as:
$$\hat{\eta}[\pi] \ge \eta[\pi] - \underbrace{\left[\frac{2\gamma r_{\max}\big(\epsilon_m + 2\sum_{k=1}^{n}\epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{4 r_{\max}\sum_{k=1}^{n}\epsilon_{\pi_k}}{1-\gamma}\right]}_{C(\epsilon_m,\,\epsilon_{\pi_{1:n}})}.$$
Proof. The proof is given in Appendix B.
Remark 2. Theorem 1 provides a useful relationship between the compounding errors and the policy update. As long as we improve the return under the true dynamics by more than the gap $C(\epsilon_m, \epsilon_{\pi_{1:n}})$, we can guarantee the policy improvement under the world model. If no such policy exists to overcome the gap, it implies the model error is too high, that is, there is a large discrepancy between the world model and the true dynamics. Thus the order sequence obtained under the world model is not reliable; such an order sequence is almost the same as a random one. Though a random order sequence also has the theoretical guarantee of Proposition 2, we will show in Section 5.2 that a random order sequence leads to a poor local optimum empirically.
5 EXPERIMENTS
Sequential communication (SeqComm) is currently instantiated based on MAPPO (Yu et al., 2021). We evaluate SeqComm on three tasks in multi-agent particle environment (MPE) (Lowe et al., 2017) and four maps in StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019).
For these experiments, we compare SeqComm against the following communication-free and communication-based baselines: MAPPO (Yu et al., 2021), QMIX (Rashid et al., 2018), IS (Kim et al., 2021), TarMAC (Das et al., 2019), and I2C (Ding et al., 2020). In more detail, IS communicates predicted future trajectories (observations and actions), where the predictions are made by the environment model. TarMAC uses the attention model to focus more on important incoming messages (the hidden states of observations). TarMAC is reproduced on top of MAPPO instead of the A2C used in the original paper, for better performance. I2C infers one-to-one communication to reduce the redundancy of messages (also conditioning on observations).
In the experiments, all the methods are parameter-sharing for fast convergence. We have fine-tuned the baselines for a fair comparison. Please refer to Appendix E for experimental settings and Appendix F for implementation details. All results are presented in terms of the mean and standard deviation of five runs with different random seeds.
5.1 RESULTS
MPE. We experiment on predator-prey (PP), cooperative navigation (CN), and keep-away (KA) in MPE. In PP, five predators (agents) try to capture three prey. In CN, five agents try to occupy five landmarks. In KA, three attackers (agents) try to occupy three landmarks, while three defenders try to push them away. In all three tasks, the size of agents is set larger than in the original settings so that collisions occur more easily, following the settings in Kim et al. (2021). In addition, agents cannot observe any other agents, which makes the tasks more difficult and communication more important. Similar modifications can be observed in previous works (Foerster et al., 2016; Ding et al., 2020). After all, we want to demonstrate the superiority over communication-based baselines, and communication-based methods are more suitable for scenarios with limited vision. More details about experimental settings are available in Appendix E.
Figure 4 shows the learning curves of all the methods in terms of the mean reward averaged over timesteps in PP, CN, and KA. We can see that SeqComm converges to the highest mean reward compared with all the baselines. The results demonstrate the superiority of SeqComm. In more detail, all communication-based methods outperform MAPPO, indicating the necessity of communication in these difficult tasks. Apart from MAPPO, IS performs the worst, since it may access inaccurate predicted information due to the circular dependencies. The substantial improvement of SeqComm over I2C and TarMAC is attributed to the fact that SeqComm allows agents to obtain more valuable action information for explicit coordination. The agents learned by SeqComm show sophisticated coordination strategies induced by the priority of decision-making, which can be witnessed in the visualization of agent behaviors. More details are given in Appendix C. Note that QMIX is omitted from the comparison for clear presentation, since Yu et al. (2021) have shown that QMIX and MAPPO exhibit similar performance in various MPE tasks.
SMAC. We also evaluate SeqComm against the baselines on four customized maps in SMAC: 6h vs 8z, MMM2, 10m vs 11m, and 8m vs 9m, where we have made some minor changes to the observation part of agents to make it more difficult. Specifically, the sight range of agents is reduced from 9 to 2, and agents cannot perceive any information about their allies even if they are within the sight range. NDQ (Wang et al., 2020) adopts a similar change to increase the difficulty of action coordination and demonstrates that the miscoordination problem is widespread in multi-agent learning. The rest settings remain the same as the default.
The learning curves of SeqComm and the baselines in terms of win rate are illustrated in Figure 5. IS and I2C fail on these maps and get a zero win rate because both methods are built on MADDPG. However, MADDPG does not work well in SMAC, especially when we reduce the sight range of agents, which is also supported by other studies (Papoudakis et al., 2021). SeqComm and TarMAC converge to better performance than MAPPO and QMIX, which demonstrates the benefit of communication. Moreover, SeqComm outperforms TarMAC, which again verifies the gain of explicit action coordination.
5.2 ABLATION STUDIES
Priority of Decision-Making. We compare SeqComm with two ablation baselines that differ only in the priority of decision-making: in one, the priority of decision-making is fixed throughout an episode (denoted Fix-C); in the other, the priority of decision-making is determined randomly at each timestep (denoted Random-C). TarMAC is also compared as a reference without explicit action coordination.
As depicted in Figure 6, SeqComm achieves a higher mean reward or win rate than Fix-C, Random-C, and TarMAC in all the tasks. These results verify the importance of the priority of decision-making and the necessity to continuously adjust it during one episode. It is also demonstrated that SeqComm can provide a proper priority of decision-making. As discussed in Section 4.3, although Fix-C and Random-C also have the theoretical guarantee, they converge to poor local optima in practice. Moreover, Fix-C and Random-C show better performance than TarMAC in most tasks. This result accords with the hypothesis that the SE is likely to be Pareto superior to the average NE in games with a high cooperation level. Additionally, the learned policy of SeqComm can generalize well to the same task with a different number of agents in MPE, which is detailed in Appendix C.
Communication Range. We also carry out ablation studies on communication range in MPE tasks. Note that communication range means how many nearest neighbors each agent is allowed to communicate with, following the setting in Ding et al. (2020). We reduce the communication range of SeqComm from 4 to 2 and 0. As there are only three agents in KA, it is omitted in this study. The results are shown in Figure 7. Communication-based agents perform better than communication-free agents, which accords with the results of many previous studies. More importantly, the superiority of SeqComm with communication range 2 over the corresponding TarMAC again demonstrates the effectiveness of sequential communication even in reduced communication ranges.
However, as the communication range decreases from 4 to 2, there is no performance reduction in these two MPE tasks. On the contrary, the agents with communication range 2 perform the best. This accords with the results in I2C (Ding et al., 2020) and ATOC (Jiang & Lu, 2018) that redundant information can sometimes impair the learning process, though this conclusion might not hold in other settings. Moreover, since under our communication scheme agents can obtain more information, i.e., the actual actions of others, it is reasonable that SeqComm can still outperform other methods with reduced communication ranges.
6 CONCLUSIONS
We have proposed SeqComm, which enables agents to coordinate explicitly with each other. SeqComm takes an asynchronous perspective and allows agents to make decisions sequentially. A two-phase communication scheme is adopted for determining the priority of decision-making and for communicating messages accordingly. Theoretically, we prove that the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we demonstrate that SeqComm outperforms baselines in a variety of cooperative multi-agent tasks and that SeqComm provides a proper priority of decision-making.
A PROOFS OF PROPOSITION 1 AND PROPOSITION 2
Lemma 1 (Agent-by-Agent PPO). If we update the policy of each agent $i$ with TRPO (Schulman et al., 2015) (or, approximately, PPO) while fixing all the other agents' policies, then the joint policy will improve monotonically.
Proof. We consider the joint surrogate objective in TRPO, $L_{\pi_{old}}(\pi_{new})$, where $\pi_{old}$ is the joint policy before updating and $\pi_{new}$ is the joint policy after updating.
Given that $\pi^{-i}_{new} = \pi^{-i}_{old}$, we have:

$$
\begin{aligned}
L_{\pi_{old}}(\pi_{new}) &= \mathbb{E}_{\mathbf{a} \sim \pi_{new}}[A_{\pi_{old}}(s, \mathbf{a})] \\
&= \mathbb{E}_{\mathbf{a} \sim \pi_{old}}\Big[ \frac{\pi_{new}(\mathbf{a}|s)}{\pi_{old}(\mathbf{a}|s)} A_{\pi_{old}}(s, \mathbf{a}) \Big] \\
&= \mathbb{E}_{\mathbf{a} \sim \pi_{old}}\Big[ \frac{\pi^i_{new}(a^i|s)}{\pi^i_{old}(a^i|s)} A_{\pi_{old}}(s, \mathbf{a}) \Big] \\
&= \mathbb{E}_{a^i \sim \pi^i_{old}}\Big[ \frac{\pi^i_{new}(a^i|s)}{\pi^i_{old}(a^i|s)} \mathbb{E}_{a^{-i} \sim \pi^{-i}_{old}}[A_{\pi_{old}}(s, a^i, a^{-i})] \Big] \\
&= \mathbb{E}_{a^i \sim \pi^i_{old}}\Big[ \frac{\pi^i_{new}(a^i|s)}{\pi^i_{old}(a^i|s)} A^i_{\pi_{old}}(s, a^i) \Big] = L_{\pi^i_{old}}(\pi^i_{new}),
\end{aligned}
$$

where $A^i_{\pi_{old}}(s, a^i) = \mathbb{E}_{a^{-i} \sim \pi^{-i}_{old}}[A_{\pi_{old}}(s, a^i, a^{-i})]$ is the individual advantage of agent $i$, and the third equality follows from the condition $\pi^{-i}_{new} = \pi^{-i}_{old}$.
With the result of TRPO, we have the following conclusion:
$$
\begin{aligned}
J(\pi_{new}) - J(\pi_{old}) &\ge L_{\pi_{old}}(\pi_{new}) - C D^{\max}_{KL}(\pi_{new} \| \pi_{old}) \\
&= L_{\pi^i_{old}}(\pi^i_{new}) - C D^{\max}_{KL}(\pi^i_{new} \| \pi^i_{old}) \quad (\text{from } \pi^{-i}_{new} = \pi^{-i}_{old}).
\end{aligned}
$$
This means the individual objective coincides with the joint objective, so the monotonic improvement of the joint policy is guaranteed.
Then we can show the proof of Proposition 1.
Proof. We build a new MDP $\tilde{M}$ based on the original MDP. We keep the action space $\tilde{A} = A = \times_{i=1}^{n} A_i$, where $A_i$ is the original action space of agent $i$. The new state space contains multiple layers. We define $\tilde{S}_k = S \times (\times_{i=1}^{k} A_i)$ for $k = 1, 2, \cdots, n-1$ and $\tilde{S}_0 = S$, where $S$ is the original state space. A new state $\tilde{s}_k \in \tilde{S}_k$ thus takes the form $\tilde{s}_k = (s, a^1, a^2, \cdots, a^k)$. The total new state space is defined as $\tilde{S} = \cup_{i=0}^{n-1} \tilde{S}_i$. Next we define the transition probability $\tilde{P}$ as follows:

$$\tilde{P}(\tilde{s}' \,|\, \tilde{s}_k, a^{k+1}, a^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' = (\tilde{s}_k, a^{k+1})\big), \quad k < n-1,$$
$$\tilde{P}(\tilde{s}' \,|\, \tilde{s}_k, a^{k+1}, a^{-(k+1)}) = \mathbb{1}\big(\tilde{s}' \in \tilde{S}_0\big)\, P(\tilde{s}' \,|\, \tilde{s}_k, a^{k+1}), \quad k = n-1.$$

This means that a state in layer $k$ can only transition to the corresponding state in layer $k+1$ (obtained by appending the chosen action), and a state in layer $n-1$ transitions back to layer $0$ with the probability $P$ of the original MDP. The reward function $\tilde{r}$ is defined as:

$$\tilde{r}(\tilde{s}, \mathbf{a}) = \mathbb{1}\big(\tilde{s} \in \tilde{S}_0\big)\, r(\tilde{s}, \mathbf{a}).$$
This means the reward is only obtained when the state is in layer $0$, and its value equals the original reward. Now we have the full definition of the new MDP $\tilde{M} = \{\tilde{S}, \tilde{A}, \tilde{P}, \tilde{r}, \gamma\}$. We claim that if all agents learn in multi-agent sequential decision-making by PPO, they are effectively performing agent-by-agent PPO in the new MDP $\tilde{M}$. To be precise, one update of multi-agent sequential decision-making in the original MDP $M$ is equivalent to a round of updates from agent $1$ to agent $n$ by agent-by-agent PPO in the new MDP $\tilde{M}$. Moreover, the total reward of a round in the new MDP $\tilde{M}$ equals the reward of one timestep in the original MDP $M$. With this conclusion and Lemma 1, we complete the proof.
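To make the layered construction concrete, here is a minimal Python sketch of $\tilde{M}$ (class and function names are ours, purely illustrative of the proof's construction, not an implementation used in the paper):

```python
class LayeredMDP:
    """Sketch of the auxiliary MDP M~ from the proof of Proposition 1.
    A state in layer k is (s, a^1, ..., a^k); only the (k+1)-th component
    of the joint action matters there, and reward is emitted when the
    round of n decisions resolves back to layer 0."""

    def __init__(self, base_step, base_reward, n_agents):
        self.base_step = base_step      # samples s' ~ P(.|s, joint_action) of the original MDP
        self.base_reward = base_reward  # r(s, joint_action) of the original MDP
        self.n = n_agents

    def step(self, state, joint_action):
        s, prefix = state               # prefix = actions already committed this round
        k = len(prefix)
        a = joint_action[k]             # only agent k+1's action is used at layer k
        if k < self.n - 1:
            return (s, prefix + (a,)), 0.0   # move one layer deeper, no reward yet
        joint = prefix + (a,)
        s_next = self.base_step(s, joint)    # layer n-1: resolve in the original MDP
        return (s_next, ()), self.base_reward(s, joint)
```

One episode step of the original MDP thus corresponds to `n` calls of `step` here, with reward paid only on the last call, matching the claim that a round of agent-by-agent updates in $\tilde{M}$ equals one update of sequential decision-making in $M$.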
The proof of Proposition 2 can be seen as a corollary of the proof of Proposition 1.
Proof. From Lemma 1 we know that the monotonic improvement of the joint policy in the new MDP $\tilde{M}$ is guaranteed for each update of a single agent's policy. So even if different rounds of updates in the new MDP $\tilde{M}$ use different orders of decision-making, the monotonic improvement of the joint policy is still guaranteed. Finally, from the proof of Proposition 1, we know that monotonic improvement in the new MDP $\tilde{M}$ is equivalent to monotonic improvement in the original MDP $M$. This completes the proof.
B PROOFS OF THEOREM 1
Lemma 2 (TVD of the joint distributions). Suppose we have two distributions $p_1(x, y) = p_1(x)p_1(y|x)$ and $p_2(x, y) = p_2(x)p_2(y|x)$. We can bound the total variation distance of the joint as:

$$D_{TV}(p_1(x,y) \,\|\, p_2(x,y)) \le D_{TV}(p_1(x) \,\|\, p_2(x)) + \max_{x} D_{TV}(p_1(y|x) \,\|\, p_2(y|x)).$$
Proof. See (Janner et al., 2019) (Lemma B.1).
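As a quick empirical sanity check of this bound (a hypothetical script of our own, not part of the proof), one can sample random discrete joint distributions and verify the inequality numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def tvd(p, q):
    return 0.5 * np.abs(p - q).sum()

for _ in range(1000):
    # random joints p1(x, y), p2(x, y) over a 4x5 grid
    p1 = rng.random((4, 5)); p1 /= p1.sum()
    p2 = rng.random((4, 5)); p2 /= p2.sum()
    lhs = tvd(p1, p2)
    # marginals p(x) and conditionals p(y|x)
    m1, m2 = p1.sum(1), p2.sum(1)
    cond_gap = max(tvd(p1[x] / m1[x], p2[x] / m2[x]) for x in range(4))
    assert lhs <= tvd(m1, m2) + cond_gap + 1e-12
```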
Lemma 3 (Markov chain TVD bound, time-varying). Suppose the expected KL-divergence between two transition distributions is bounded as $\max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{KL}(p_1(s'|s) \,\|\, p_2(s'|s)) \le \delta$, and the initial state distributions are the same, $p_{1,t=0}(s) = p_{2,t=0}(s)$. Then the distance in the state marginals is bounded as:

$$D_{TV}(p_{1,t}(s) \,\|\, p_{2,t}(s)) \le t\delta.$$
Proof. See (Janner et al., 2019) (Lemma B.2).
Lemma 4 (Branched Returns Bound). Suppose the expected total variation distance between two dynamics distributions is bounded as $\max_t \mathbb{E}_{s \sim p_{1,t}(s)}[D_{TV}(p_1(s'|s,\mathbf{a}) \,\|\, p_2(s'|s,\mathbf{a}))] \le \epsilon_m$, and the policy divergences at level $k$ are bounded as $\max_{s, a^{1:k-1}} D_{TV}(\pi_1(a^k|s, a^{1:k-1}) \,\|\, \pi_2(a^k|s, a^{1:k-1})) \le \epsilon_{\pi_k}$. Then the returns are bounded as:

$$|\eta_1 - \eta_2| \le \frac{2 r_{\max} \gamma \big(\epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{2 r_{\max} \sum_{k=1}^{n} \epsilon_{\pi_k}}{1-\gamma},$$

where $r_{\max}$ is the upper bound of the reward function.
Proof. Here, $\eta_1$ denotes the returns of $\pi_1$ under dynamics $p_1(s'|s,\mathbf{a})$, and $\eta_2$ denotes the returns of $\pi_2$ under dynamics $p_2(s'|s,\mathbf{a})$. Then we have

$$
\begin{aligned}
|\eta_1 - \eta_2| &= \Big|\sum_{s,\mathbf{a}} (p_1(s,\mathbf{a}) - p_2(s,\mathbf{a}))\, r(s,\mathbf{a})\Big| \\
&= \Big|\sum_t \sum_{s,\mathbf{a}} \gamma^t (p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a}))\, r(s,\mathbf{a})\Big| \\
&\le \sum_t \sum_{s,\mathbf{a}} \gamma^t |p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a})|\, r(s,\mathbf{a}) \\
&\le r_{\max} \sum_t \sum_{s,\mathbf{a}} \gamma^t |p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a})|.
\end{aligned}
$$
By Lemma 2, we get

$$
\begin{aligned}
\max_s D_{TV}(\pi_1(\mathbf{a}|s) \,\|\, \pi_2(\mathbf{a}|s)) &\le \max_{s, a^1} D_{TV}(\pi_1(a^{-1}|s, a^1) \,\|\, \pi_2(a^{-1}|s, a^1)) + \max_s D_{TV}(\pi_1(a^1|s) \,\|\, \pi_2(a^1|s)) \\
&\le \cdots \\
&\le \sum_{k=1}^{n} \max_{s, a^{1:k-1}} D_{TV}(\pi_1(a^k|s, a^{1:k-1}) \,\|\, \pi_2(a^k|s, a^{1:k-1})) \\
&\le \sum_{k=1}^{n} \epsilon_{\pi_k}.
\end{aligned}
$$
We then apply Lemma 3, using $\delta = \epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k}$ (via Lemmas 2 and 3), to get

$$
\begin{aligned}
D_{TV}(p_{1,t}(s) \,\|\, p_{2,t}(s)) &\le t \max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s) \,\|\, p_{2,t}(s'|s)) \\
&\le t \max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{TV}(p_{1,t}(s', \mathbf{a}|s) \,\|\, p_{2,t}(s', \mathbf{a}|s)) \\
&\le t \Big( \max_t \mathbb{E}_{s \sim p_{1,t}(s)} D_{TV}(p_{1,t}(s'|s,\mathbf{a}) \,\|\, p_{2,t}(s'|s,\mathbf{a})) + \max_t \mathbb{E}_{s \sim p_{1,t}(s)} \max_s D_{TV}(\pi_{1,t}(\mathbf{a}|s) \,\|\, \pi_{2,t}(\mathbf{a}|s)) \Big) \\
&\le t \Big( \epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k} \Big).
\end{aligned}
$$
We also get $D_{TV}(p_{1,t}(s,\mathbf{a}) \,\|\, p_{2,t}(s,\mathbf{a})) \le t\big(\epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k}\big) + \sum_{k=1}^{n} \epsilon_{\pi_k}$ by Lemma 2. Thus, by plugging this back, we get:

$$
\begin{aligned}
|\eta_1 - \eta_2| &\le r_{\max} \sum_t \sum_{s,\mathbf{a}} \gamma^t |p_{1,t}(s,\mathbf{a}) - p_{2,t}(s,\mathbf{a})| \\
&\le 2 r_{\max} \sum_t \gamma^t \Big( t \big(\epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k}\big) + \sum_{k=1}^{n} \epsilon_{\pi_k} \Big) \\
&\le 2 r_{\max} \Big( \frac{\gamma \big(\epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k}\big)}{(1-\gamma)^2} + \frac{\sum_{k=1}^{n} \epsilon_{\pi_k}}{1-\gamma} \Big).
\end{aligned}
$$
Then we can show the proof of Theorem 1.
Proof. Let $\pi_\beta$ denote the data-collecting policy. We use Lemma 4 to bound the returns, but it requires a bounded model error under the new policy $\pi$. Thus, we introduce $\pi_\beta$ by adding and subtracting $\eta[\pi_\beta]$, to get:

$$\hat{\eta}[\pi] - \eta[\pi] = \underbrace{\hat{\eta}[\pi] - \eta[\pi_\beta]}_{L_1} + \underbrace{\eta[\pi_\beta] - \eta[\pi]}_{L_2}.$$
We can bound $L_1$ and $L_2$ using Lemma 4, with $\delta = \sum_{k=1}^{n} \epsilon_{\pi_k}$ and $\delta = \epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k}$ respectively, and obtain:

$$L_1 \ge -\frac{2 \gamma r_{\max} \sum_{k=1}^{n} \epsilon_{\pi_k}}{(1-\gamma)^2} - \frac{2 r_{\max} \sum_{k=1}^{n} \epsilon_{\pi_k}}{1-\gamma},$$

$$L_2 \ge -\frac{2 \gamma r_{\max} \big(\epsilon_m + \sum_{k=1}^{n} \epsilon_{\pi_k}\big)}{(1-\gamma)^2} - \frac{2 r_{\max} \sum_{k=1}^{n} \epsilon_{\pi_k}}{1-\gamma}.$$
Adding these two bounds together yields the conclusion.
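For intuition about how the two error sources trade off, the bound of Lemma 4 (reused here in Theorem 1) can be evaluated with a small helper; this is an illustrative script with names of our own choosing, not part of the paper's code:

```python
def branched_return_gap(r_max, gamma, eps_m, eps_pi):
    """Upper bound on |eta_1 - eta_2| from Lemma 4.
    eps_pi is a list of per-level policy TV divergences eps_{pi_k}."""
    s = sum(eps_pi)
    return (2 * r_max * gamma * (eps_m + s) / (1 - gamma) ** 2
            + 2 * r_max * s / (1 - gamma))

# e.g., the bound tightens as the world model improves (eps_m -> 0)
print(branched_return_gap(r_max=1.0, gamma=0.99, eps_m=0.05, eps_pi=[0.01] * 5))
print(branched_return_gap(r_max=1.0, gamma=0.99, eps_m=0.00, eps_pi=[0.01] * 5))
```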
[Figure 8: the learned priority of decision-making in PP (upper panel) and CN (lower panel).]
C ADDITIONAL EXPERIMENTS
C.1 ILLUSTRATION OF LEARNED PRIORITY OF DECISION-MAKING
Figure 8 (upper panel, a to e) shows the priority order of decision-making determined by SeqComm in PP. Agent 2, which is far away from the prey and the other predators, is chosen to be the first-mover. If agents want to encircle and capture the prey, the agents on the periphery of the encircling circle (e.g., agents 2 and 5) should hold upper-level positions, since they are able to decide how to narrow the encirclement. In addition, agent 3 makes decisions prior to agent 5 so that a collision can be avoided once agent 5 obtains the intention of agent 3.
For CN, as illustrated in Figure 8 (lower panel, a to e), agent 2 is far away from all the landmarks, and all other agents are in better positions to occupy landmarks. Therefore, agent 2 is chosen to be the first-mover, similar to the phenomenon observed in PP. Once it has determined the target to occupy, other agents (agents 5 and 3) can adjust their actions accordingly and avoid conflicting goals. Otherwise, if agent 5 made a decision first and chose to occupy the closest landmark, agent 2 would have to approach a farther landmark, which would take more steps.
C.2 GENERALIZATION
Generalization to different numbers of agents has always been a key problem in MARL. For most communication algorithms, once the model is trained in one scenario, it is unlikely that agents maintain relatively competitive performance in other scenarios with different numbers of agents. However, since we employ attention modules to process communicated messages, agents can handle messages of different lengths. In addition, the module used to determine the priority of decision-making is also not restricted by the number of agents. Thus, we investigate whether SeqComm generalizes well to different numbers of agents in CN and PP.
For both tasks, SeqComm is trained in 5-agent settings. Then, we test SeqComm in the 3-agent and 7-agent settings of CN and the 7-agent setting of PP. We use Fix-C trained directly on these test tasks as a reference for the performance of SeqComm. Note that the numbers of landmarks and prey are adjusted according to the number of agents in CN and PP. The test results are shown in Table 1. SeqComm exhibits superiority in CN and PP, demonstrating that SeqComm may generalize well to the number of agents. A thorough study of the generalization of SeqComm is left to future work.
C.3 MORE SMAC MAPS
We have evaluated our method on two additional maps, i.e., 3s vs 4z and corridor. As illustrated in Figure 9, we can draw conclusions similar to those in Section 5.1.
D ADDITIONAL RELATED WORK
Multi-Agent Path Finding (MAPF). MAPF aims to plan collision-free paths for multiple agents on a given graph from their given start vertices to target vertices. In MAPF, prioritized planning is deeply coupled with collision avoidance (Van Den Berg & Overmars, 2005; Ma et al., 2019), where collision is used to design constraints or heuristics for planning. Unlike MAPF, our method couples the priority of decision-making with the learning objective and thus is more general. In addition, the different motivations and problem settings may lead to the incompatibility of the methods in the two fields.
Reinforcement Learning in Stackelberg Games. Many studies (Könönen, 2004; Sodomka et al., 2013; Greenwald et al., 2003; Zhang et al., 2020) have investigated reinforcement learning for finding the Stackelberg equilibrium. Bi-AC (Zhang et al., 2020) is a bi-level actor-critic method that allows agents to have different knowledge bases so that the Stackelberg equilibrium (SE) can be found; the actions can still be executed simultaneously and in a distributed manner. It empirically studies the relationship between the cooperation level and the superiority of the SE over the Nash equilibrium. AQL (Könönen, 2004) updates the Q-value by solving the SE in each iteration and can be regarded as the value-based version of Bi-AC. Existing work mainly focuses on two-agent settings where the order is fixed in advance. However, a fixed order can hardly be an optimal solution, as our ablation studies (Section 5.2) show. To address this issue, we exploit agents' intentions to dynamically determine the priority of decision-making along the way of interacting with each other.
E EXPERIMENTAL SETTINGS
In cooperative navigation, there are 5 agents and the size of each is 0.15. They need to occupy 5 landmarks with the size of 0.05. The acceleration of agents is 7. In predator-prey, the number of predators (agents) and prey is set to 5 and 3, respectively, and their sizes are 0.15 and 0.05. The acceleration is 5 for predators and 7 for prey. In keep away, the number of attackers (agents) and defenders is set to 3, and their sizes are respectively 0.15 and 0.05. Besides, the acceleration is 6
for attackers and 4 for defenders. The three landmarks are located at $(0.00, 0.30)$, $(0.25, -0.15)$, and $(-0.25, -0.15)$. Note that each agent is allowed to communicate with all other agents in all three tasks. The team reward is similar across tasks. At a timestep $t$, it can be written as

$$r^t_{team} = -\sum_{i=1}^{n} d^t_i + C^t r_{collision},$$

where $d^t_i$ is the distance of landmark/prey $i$ to its nearest agent/predator, $C^t$ is the number of collisions (a collision occurs when the distance between two agents is less than the sum of their sizes) at timestep $t$, and $r_{collision} = -1$. In addition, agents act discretely and have 5 actions (stay and move up, down, left, right). The length of each episode is 20, 30, and 20 timesteps in cooperative navigation, predator-prey, and keep-away, respectively.
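For reference, this team reward reduces to a few lines of NumPy; the helper below is our own hypothetical sketch (the actual environment code may differ):

```python
import numpy as np

def team_reward(targets, agents, sizes, r_collision=-1.0):
    """r_team^t = -sum_i d_i^t + C^t * r_collision for one timestep.
    targets: (m, 2) positions of landmarks/prey;
    agents: (n, 2) positions of agents/predators; sizes: (n,) agent radii."""
    dists = np.linalg.norm(targets[:, None, :] - agents[None, :, :], axis=-1)
    nearest = dists.min(axis=1).sum()          # sum_i d_i^t over landmarks/prey
    # count collisions: agent pairs closer than the sum of their sizes
    pair = np.linalg.norm(agents[:, None, :] - agents[None, :, :], axis=-1)
    radii = sizes[:, None] + sizes[None, :]
    c = ((pair < radii) & ~np.eye(len(agents), dtype=bool)).sum() // 2
    return -nearest + c * r_collision
```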
F IMPLEMENTATION DETAILS
F.1 ARCHITECTURE AND HYPERPARAMETERS
Our models, including SeqComm, Fix-C, and Random-C, are trained based on MAPPO. The critic and policy networks are each realized by two fully connected layers. As for the attention module, the key, query, and value each have one fully connected layer. The size of the hidden layers is 100. Tanh is used as the nonlinearity. For I2C, we use the official code with the default settings of basic hyperparameters and networks. As there is no released code for IS and TarMAC, we implement them ourselves, following the instructions in the original papers (Kim et al., 2021; Das et al., 2019).
For the world model, observations and actions are first encoded by a fully connected layer. The output size of the observation encoder is 48, and that of the action encoder is 16. The encoder outputs are then passed into an attention module with the same structure as aforementioned. Finally, we use a fully connected layer to decode. In these layers, Tanh is used as the nonlinearity.
Table 2 summarizes the hyperparameters used by SeqComm and the baselines in the MPE.
For SMAC, SeqComm, Random-C, and Fix-C are based on the same architecture, and the hyperparameters stay the same. For MMM2, 6h vs 8z, and 8m vs 9m, the learning rate is 5e−5, while for 10m vs 11m, corridor, and 3s vs 4z, the learning rate is 7e−5. The PPO epoch is set to 10 for 6h vs 8z, and to 5 for the remaining maps. H and F are set to 5 and 1, respectively; however, 20 and 2 are better values for H and F if computing resources are sufficient.
For TarMAC, the learning rate is 7e−5 for all maps. The PPO epoch is set to 10 for 6h vs 8z, and to 5 for the remaining maps.
For MAPPO, the learning rate is 5e−5 for MMM2 and 6h vs 8z, and 7e−5 for 8m vs 9m and 10m vs 11m.
For these four methods, the mini batch is set to 1. As for other hyperparameters, we follow the default settings of the official code (Yu et al., 2021).
For QMIX, the learning rate is 5e−5, the $\epsilon$ is 1, and the batch size is 32. The buffer size is 5e3. For other hyperparameters, we follow the default settings of https://github.com/starry-sky6688/MARL-Algorithms.git.
F.2 ATTENTION MODULE
The attention module (AM) is applied to process messages in the world model, the critic network, and the policy network. AM consists of three components: queries, keys, and values. The output of AM is the weighted sum of the values, where the weight of each value is determined by the dot product of the query and the corresponding key.
For the AM in the world model, denoted as $AM_w$, agent $i$ gets messages $m^{-i}_t = h^{-i}_t$ from all other agents at timestep $t$ in the negotiation phase, and predicts a query vector $q^i_t$ following $AM^i_{w,q}(h^i_t)$. The query is used to compute a dot product with the keys $k_t = [k^1_t, \cdots, k^n_t]$. Note that $k^j_t$ is obtained from the message of agent $j$ following $AM^i_{w,k}(h^j_t)$ for $j \neq i$, and $k^i_t$ is from $AM^i_{w,k}(h^i_t)$. Besides, the dot product is scaled by $1/\sqrt{d_k}$, followed by a softmax to obtain an attention weight $\alpha_{ij}$ for each value vector:

$$\alpha_i = \mathrm{softmax}\Big[ \frac{q^{i\top}_t k^1_t}{\sqrt{d_k}}, \cdots, \underbrace{\frac{q^{i\top}_t k^j_t}{\sqrt{d_k}}}_{\alpha_{ij}}, \cdots, \frac{q^{i\top}_t k^n_t}{\sqrt{d_k}} \Big]. \quad (1)$$

The output of the attention module is defined as $c^i_t = \sum_{j=1}^{n} \alpha_{ij} v^j_t$, where $v^j_t$ is obtained from the messages or the agent's own hidden state of observation following $AM^i_{w,v}(\cdot)$. As for the AM in the policy and critic networks, denoted as $AM_a$, agent $i$ additionally gets messages from upper-level agents in the launching phase. The messages from upper-level and lower-level agents can be expanded as $m^{upper}_t = [h^{upper}_t, \mathbf{a}^{upper}_t]$ and $m^{lower}_t = [h^{lower}_t, 0]$, respectively. In addition, the query depends on the agent's own hidden state of observation $h^i_t$, but the keys and values come only from the messages of other agents.
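A minimal PyTorch sketch of such an attention module (our own illustrative code, with simple linear encoders standing in for the AM components) might look like:

```python
import torch
import torch.nn as nn

class MessageAttention(nn.Module):
    """Scaled dot-product attention over messages, in the spirit of Eq. (1)."""

    def __init__(self, msg_dim, d_k):
        super().__init__()
        self.q = nn.Linear(msg_dim, d_k)   # query from own hidden state
        self.k = nn.Linear(msg_dim, d_k)   # keys from messages
        self.v = nn.Linear(msg_dim, d_k)   # values from messages
        self.d_k = d_k

    def forward(self, h_self, messages):
        # h_self: (B, msg_dim); messages: (B, n, msg_dim)
        q = self.q(h_self).unsqueeze(1)                 # (B, 1, d_k)
        k, v = self.k(messages), self.v(messages)       # (B, n, d_k)
        logits = (q * k).sum(-1) / self.d_k ** 0.5      # (B, n) scaled dot products
        alpha = torch.softmax(logits, dim=-1)           # attention weights alpha_ij
        return (alpha.unsqueeze(-1) * v).sum(1)         # c_t^i = sum_j alpha_ij v_t^j
```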
F.3 TRAINING
The training of SeqComm is an extension of MAPPO. The observation encoder $e$, the critic $V$, and the policy $\pi$ are parameterized by $\theta_e$, $\theta_v$, and $\theta_\pi$, respectively. Besides, the attention module $AM_a$ is parameterized by $\theta_a$ and takes as input the agent's hidden state, the messages (hidden states of other agents) from the negotiation phase, and the messages (the actions of upper-level agents) from the launching phase. Let $D = \{\tau_k\}_{k=1}^{K}$ be a set of trajectories collected by running the policy in the environment. Note that we drop the time index $t$ in the following notations for simplicity.
The value function is fitted by regression on the mean-squared error:

$$L(\theta_v, \theta_a, \theta_e) = \frac{1}{KT} \sum_{\tau \in D} \sum_{t=0}^{T-1} \Big\| V\big(AM_a(e(o), \mathbf{a}^{upper})\big) - \hat{R} \Big\|_2^2, \quad (2)$$

where $\hat{R}$ is the discounted rewards-to-go.
We update the policy by maximizing the PPO-Clip objective:

$$L(\theta_\pi, \theta_a, \theta_e) = \frac{1}{KT} \sum_{\tau \in D} \sum_{t=0}^{T-1} \min\Big( \frac{\pi(a \,|\, AM_a(e(o), \mathbf{a}^{upper}))}{\pi_{old}(a \,|\, AM_a(e(o), \mathbf{a}^{upper}))} A_{\pi_{old}},\; g(\epsilon, A_{\pi_{old}}) \Big), \quad (3)$$

where

$$g(\epsilon, A) = \begin{cases} (1+\epsilon)A & A \ge 0 \\ (1-\epsilon)A & A < 0, \end{cases}$$

and $A_{\pi_{old}}(o, \mathbf{a}^{upper}, a)$ is computed using the GAE method.
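The clipped objective of Eq. (3) reduces to a few lines in PyTorch; below is a minimal sketch of our own, assuming log-probabilities and GAE advantages are precomputed (it uses the standard identity that $\min(rA, \mathrm{clip}(r, 1-\epsilon, 1+\epsilon)A) = \min(rA, g(\epsilon, A))$):

```python
import torch

def ppo_clip_loss(logp, logp_old, adv, eps=0.2):
    """Negative PPO-Clip objective of Eq. (3) for one minibatch."""
    ratio = torch.exp(logp - logp_old.detach())
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    # min(ratio * A, clip(ratio) * A) equals min(ratio * A, g(eps, A))
    return -torch.min(ratio * adv, clipped * adv).mean()
```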
The world model $M$, parameterized by $\theta_w$, is trained as a regression model using the training dataset $S$. It is updated with the loss:

$$L(\theta_w) = \frac{1}{|S|} \sum_{(o, \mathbf{a}, o', r) \in S} \Big\| (o', r) - M\big(AM_w(e(o), \mathbf{a})\big) \Big\|_2^2. \quad (4)$$
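Eq. (4) can similarly be sketched as follows, assuming a hypothetical world-model interface that returns both the observation and reward predictions:

```python
import torch
import torch.nn.functional as F

def world_model_loss(model, obs, act, next_obs, rew):
    """MSE between predicted and actual (o', r), as in Eq. (4)."""
    pred_obs, pred_rew = model(obs, act)   # assumed to return both predictions
    target = torch.cat([next_obs, rew.unsqueeze(-1)], dim=-1)
    pred = torch.cat([pred_obs, pred_rew.unsqueeze(-1)], dim=-1)
    return F.mse_loss(pred, target)
```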
We trained our models on one GeForce GTX 1050 Ti GPU and an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz.
2. What are the strengths of the proposed approach, particularly in terms of communication mechanism?
3. What are the weaknesses of the paper regarding the comparison with other works and the advantage of the proposed approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper studies a multiagent POMDP and proposes a communication mechanism for the agents to exchange information about their decision-making. In the process, the agents have the same objective, aiming to find a joint policy that maximizes their utility. The authors introduce a communication mechanism that consists of a negotiation phase and a launching phase. In the negotiation phase, the agents communicate their observations, and in the launching phase, they communicate their decision-making. The latter is further implemented according to a hierarchical structure, where the agent at each level k sends information to the agent at level k+1. The paper presents both theoretical results about the convergence of the agents' strategies to an equilibrium and empirical results about the performance of the proposed approach.
Strengths And Weaknesses
Strengths:
The problem is well-motivated. The paper is overall clearly written and well organized. The idea of introducing communication into the process is natural, and the proposed hierarchical communication approach is also novel and looks interesting.
Weaknesses:
Though the idea of hierarchical communication is interesting, it is unclear what its advantages are compared with letting one agent make a centralized decision and broadcast it (especially given that the agents' objectives are the same and their policy space is already broken down through factorization).
Clarity, Quality, Novelty And Reproducibility
The experiments are comprehensive, and the results look sound and are presented in sufficient detail and clarity. Experiment settings are described clearly.
ICLR | Title
CO2: Consistent Contrast for Unsupervised Visual Representation Learning
Abstract
Contrastive learning has been adopted as a core method for unsupervised visual representation learning. Without human annotation, the common practice is to perform an instance discrimination task: Given a query image crop, this task labels crops from the same image as positives, and crops from other randomly sampled images as negatives. An important limitation of this label assignment strategy is that it can not reflect the heterogeneous similarity between the query crop and each crop from other images, taking them as equally negative, while some of them may even belong to the same semantic class as the query. To address this issue, inspired by consistency regularization in semi-supervised learning on unlabeled data, we propose Consistent Contrast (CO2), which introduces a consistency regularization term into the current contrastive learning framework. Regarding the similarity of the query crop to each crop from other images as “unlabeled”, the consistency term takes the corresponding similarity of a positive crop as a pseudo label, and encourages consistency between these two similarities. Empirically, CO2 improves Momentum Contrast (MoCo) by 2.9% top-1 accuracy on ImageNet linear protocol, 3.8% and 1.1% top-5 accuracy on 1% and 10% labeled semi-supervised settings. It also transfers to image classification, object detection, and semantic segmentation on PASCAL VOC. This shows that CO2 learns better visual representations for these downstream tasks.
1 INTRODUCTION
Unsupervised visual representation learning has attracted increasing research interests for it unlocks the potential of large-scale pre-training for vision models without human annotation. Most of recent works learn representations through one or more pretext tasks, in which labels are automatically generated from image data itself. Several early methods propose pretext tasks that explore the inherent structures within a single image. For example, by identifying spatial arrangement (Doersch et al., 2015), orientation (Gidaris et al., 2018), or chromatic channels (Zhang et al., 2016), models learn useful representations for downstream tasks. Recently, another line of works (Wu et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; He et al., 2020; Misra & van der Maaten, 2020; Chen et al., 2020a), e.g. Momentum Contrast (MoCo), falls within the framework of contrastive learning (Hadsell et al., 2006), which directly learns relations of images as the pretext task. In practice, contrastive learning methods show better generalization in downstream tasks.
Although designed differently, most contrastive learning methods perform an instance discrimination task, i.e., contrasting between image instances. Specifically, given a query crop from one image, a positive sample is an image crop from the same image; negative samples are crops randomly sampled from other images in the training set. Thus, the label for instance discrimination is a one-hot encoding over the positive and negative samples. This objective is to bring together crops from the same image and keep away crops from different images in the feature space, forming an instance discrimination task.
However, the one-hot label used by instance discrimination might be problematic, since it takes all the crops from other images as equally negative, which cannot reflect the heterogeneous similarities between the query crop and each of them. For example, some “negative” samples are semantically similar to the query, or even belong to the same semantic class as the query. This is referred to as
∗corresponding author
“class collision” in Saunshi et al. (2019) and “sampling bias” in Chuang et al. (2020). The ignorance of the heterogeneous similarities between the query crop and the crops from other images can thus raise an obstacle for contrastive methods to learn a good representation. A recent work, supervised contrastive learning (Khosla et al., 2020), fixes this problem by using human annotated class labels and achieves strong classification performance. However, in unsupervised representation learning, the human annotated class labels are unavailable, and thus it is more challenging to capture the similarities between crops.
In this paper, we propose to view this instance discrimination task from the perspective of semisupervised learning. The positive crop should be similar to the query for sure since they are from the same image, and thus can be viewed as labeled. On the contrary, the similarity between the query and each crop from other images is unknown, or unlabeled. With the viewpoint of semi-supervised learning, we introduce Consistent Contrast (CO2), a consistency regularization method which fits into current contrastive learning framework. Consistency regularization (Sajjadi et al., 2016) is at the core of many state-of-the-art semi-supervised learning algorithms (Xie et al., 2019; Berthelot et al., 2019b; Sohn et al., 2020). It generates pseudo labels for unlabeled data by relying on the assumption that a good model should output similar predictions on perturbed versions of the same image. Similarly, in unsupervised contrastive learning, since the query crop and the positive crop naturally form two perturbed versions of the same image, we encourage them to have consistent similarities to each crop from other images. Specifically, the similarity of the positive sample predicted by the model is taken as a pseudo label for that of the query crop.
Our model is trained with both the original instance discrimination loss term and the introduced consistency regularization term. The instance discrimination label and the pseudo similarity label jointly construct a virtual soft label on-the-fly, and the soft label further guides the model itself in a bootstrap manner. In this way, CO2 exploits the consistency assumption on unlabeled data, mitigates the “class collision” effect introduced by the one-hot labels, and results in a better visual representation. More importantly, our work brings a new perspective of unsupervised visual representation learning. It relaxes the stereotype that the pretext task can only be self-supervised which aims to construct artificial labels for every sample, e.g., a specific degree of rotation (Gidaris et al., 2018), a configuration of jigsaw puzzle (Noroozi & Favaro, 2016), and a one-hot label that indicates whether a crop comes from the same instance or not (Wu et al., 2018). In contrast, the pretext task can also be self-semi-supervised, allowing the task itself to be partially labeled. This relaxation is especially helpful when information for artificial label construction is not enough and imposing a label is harmful, such as the case of imposing the one-hot labels in instance discrimination.
This simple modification brings consistent gains on various evaluation protocols. We first benchmark CO2 on ImageNet (Deng et al., 2009) linear classification protocol. CO2 improves MoCo by 2.9% on top-1 accuracy. It also provides 3.8% and 1.1% top-5 accuracy gains under the semisupervised setting on ImageNet with 1% and 10% labels respectively, showing the effectiveness of the introduced consistency regularization. We also evaluate the transfer ability of the learned representations on three different downstream tasks: image classification, object detection and semantic segmentation. CO2 models consistently surpass their MoCo counterparts, showing that CO2 can improve the generalization ability of learned representation. Besides, our experiments on ImageNet100 (Tian et al., 2019) demonstrate the efficacy of CO2 on SimCLR (Chen et al., 2020a), showing the generality of our method on different contrastive learning frameworks.
2 METHOD
In this section, we begin by formulating current unsupervised contrastive learning as an instance discrimination task. Then, we propose our consistency regularization term which addresses the ignorance of the heterogeneous similarity between the query crop and each crop of other images in the instance discrimination task.
2.1 CONTRASTIVE LEARNING
Contrastive learning (Hadsell et al., 2006) has recently been adopted as an objective for unsupervised learning of visual representations. Its goal is to find a parametric function $f_\theta: \mathbb{R}^D \rightarrow \mathbb{R}^d$ that maps an input vector $x$ to a feature vector $f_\theta(x) \in \mathbb{R}^d$ with $D \gg d$, such that a simple distance measure (e.g., cosine distance) in the low-dimensional feature space can reflect complex similarities in the high-dimensional input space.
For each input vector xi in the training set S, the similarity measure in the input space is defined by a subset of training vectors Si ⊂ S, called similarity set. The sample xi is deemed similar to samples in the similarity set Si, but dissimilar to samples in S \ Si. Then, the contrastive objective encourages fθ(xj) to be close to fθ(xi) in the feature space if xj ∈ Si, and otherwise to be distant. By training with contrastive loss, the similarities defined by the similarity set determine characteristics of the learned representation and the mapping function fθ. For example, if the similarity is defined as samples from the same semantic class, then fθ will probably learn invariances to other factors, e.g., object deformation. In the supervised setting, this definition of similarity requires a large amount of human labeling. On the contrary, unsupervised contrastive learning exploits similarities with no need of human labels. One natural definition of unsupervised similarity is multiple views of an image, as explored by many recent methods. For example, random augmented crops (Wu et al., 2018; Ye et al., 2019; He et al., 2020; Chen et al., 2020a;b) of an image could be defined as a similarity set. In this case, the contrastive objective is effectively solving an instance discrimination task (Wu et al., 2018) as illustrated in Figure 1a.
The training of this instance discriminator involves randomly sampling a query crop xq ∈ Si, a positive crop xp ∈ Si from the same image, and K negative crops {xk ∈ S \ Si}Kk=1 from other images. These K + 2 crops (the query, the positive, and K negatives) are encoded with fθ respectively, q = fθ(xq),p = fθ(xp),nk = fθ(xk). Then, an effective contrastive loss function, InfoNCE (Hjelm et al., 2018), is written as:
$$\mathcal{L}_{ins} = -\log \frac{\exp(q \cdot p / \tau_{ins})}{\exp(q \cdot p / \tau_{ins}) + \sum_{k=1}^{K} \exp(q \cdot n_k / \tau_{ins})}, \quad (1)$$
where τins is a temperature hyper-parameter (Hinton et al., 2015). This loss can be interpreted as a cross entropy loss that trains the model to discriminate the positive crop (labeled as 1) from negative crops (labeled as 0) given the query crop. We denote this loss as Lins as it performs an instance discrimination task. One direct instantiation of InfoNCE loss, represented by SimCLR (Chen et al., 2020a), formulates fθ as an end-to-end encoder. In this case, two crops of the same image are exchangeable or symmetric to each other as both are encoded by fθ. The final loss is also symmetric
with either one of the two crops as the query and the other crop as the positive. Another popular instantiation, represented by MoCo (He et al., 2020), encodes the query with fθ and encodes the positive and the negatives with fθ′ which is the moving average of fθ. In this case, only q can propagate gradients, which causes Lins to be asymmetric.
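For concreteness, Eq. (1) is typically implemented as a cross entropy over $K+1$ logits; a minimal PyTorch sketch in the spirit of the MoCo pseudocode (variable names are ours):

```python
import torch
import torch.nn.functional as F

def info_nce(q, p, negatives, tau=0.07):
    """q, p: (B, d) L2-normalized features; negatives: (K, d) L2-normalized."""
    l_pos = (q * p).sum(-1, keepdim=True)          # (B, 1) positive similarity
    l_neg = q @ negatives.t()                      # (B, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)         # positive is class 0
```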
2.2 CONSISTENT CONTRAST
The one-hot labels used by the InfoNCE loss are effective, showing good generalization ability across tasks and datasets (Chen et al., 2020b;a). Nevertheless, we argue that the hard, zero-one labels are uninformative. Specifically, crops from other images are taken as equally negative, as they are all labeled 0. This contradicts the fact that some so-called "negative" crops can be similar or even belong to the same semantic class, especially when K is large. For example, SimCLR (Chen et al., 2020a) uses 16,382 negative samples in a batch, and MoCo (He et al., 2020; Chen et al., 2020b) uses a memory bank of 65,536 features as negative samples. Even worse, the current objective forces negatives to be as far from the query as possible, with larger weights for closer negatives since they are "hard negatives". However, these "hard negative" crops in fact tend to be semantically close. These issues impair good representation learning because the one-hot labels cannot faithfully reflect the heterogeneous similarities between the query crop and the crops from other images.
Although generating labels based on instance discrimination is trivial, revealing the similarity between two arbitrary crops is exactly what we want to learn from unsupervised pre-training. Therefore, there is little hope of obtaining labels for the similarity between the query crop and each crop from other images. This situation is similar to the usage of unlabeled data in the semi-supervised learning setting, in which consistency regularization is widely used to propagate knowledge from labeled data to discover structure in unlabeled data. Inspired by this, we propose to encourage consistency between the similarities of crops from the same image, i.e., the query crop and the positive crop. We illustrate the consistency regularization in Figure 1b.
First, we denote the similarity between the query $q$ and the negatives $n_i$ ($i \in \{1, \dots, K\}$) as:

$$Q(i) = \frac{\exp(q \cdot n_i / \tau_{con})}{\sum_{k=1}^{K} \exp(q \cdot n_k / \tau_{con})}, \quad (2)$$
where τcon is also a temperature hyper-parameter. Q(i) is the probability that the query q selects ni as its match from {nk}Kk=1. Similarly, the similarity between the positive p and the negatives is written as:
$$P(i) = \frac{\exp(p \cdot n_i / \tau_{con})}{\sum_{k=1}^{K} \exp(p \cdot n_k / \tau_{con})}. \quad (3)$$
We impose consistency between the probability distributions $P$ and $Q$ by using the symmetric Kullback-Leibler (KL) divergence as the measure of disagreement:

$$\mathcal{L}_{con} = \frac{1}{2} D_{KL}(P \,\|\, Q) + \frac{1}{2} D_{KL}(Q \,\|\, P). \quad (4)$$
When $p$ and $q$ are encoded by the same end-to-end encoder $f_\theta$, it is natural to use the symmetric KL as their disagreement measure, since $p$ and $q$ are exchangeable. Even when $p$ and $n_i$ are encoded by the momentum encoder $f_{\theta'}$, the symmetric KL empirically works as well as the forward KL, i.e., $D_{KL}(P \,\|\, Q)$, as shown in Section 3.5. Thus, we use the symmetric KL as a unified objective for both cases.
The total loss is a weighted average of the original instance discrimination loss term and the consistency regularization term:

$$\mathcal{L} = \mathcal{L}_{ins} + \alpha \mathcal{L}_{con}, \quad (5)$$

where $\alpha$ denotes the coefficient balancing the two terms. It is possible to merge the two terms by creating a unique label containing information from both the one-hot label and the pseudo similarity label, but we find that the weighted average already achieves good performance and is easy to control.
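Putting Eqs. (2)-(5) together, a minimal, self-contained PyTorch sketch of the CO2 objective (our own code; in the MoCo instantiation, $p$ and the negatives come from the momentum encoder and carry no gradient, a detail omitted here):

```python
import torch
import torch.nn.functional as F

def co2_loss(q, p, negatives, tau_ins=0.07, tau_con=0.04, alpha=10.0):
    """q, p: (B, d) normalized features; negatives: (K, d) normalized."""
    # instance discrimination term, Eq. (1)
    logits = torch.cat([(q * p).sum(-1, keepdim=True), q @ negatives.t()], 1) / tau_ins
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    l_ins = F.cross_entropy(logits, labels)
    # similarity distributions over the negatives, Eqs. (2) and (3)
    log_Q = F.log_softmax(q @ negatives.t() / tau_con, dim=1)
    log_P = F.log_softmax(p @ negatives.t() / tau_con, dim=1)
    # symmetric KL of Eq. (4); F.kl_div(input, target) computes D_KL(target || exp(input))
    kl_pq = F.kl_div(log_Q, log_P.exp(), reduction='batchmean')  # D_KL(P||Q)
    kl_qp = F.kl_div(log_P, log_Q.exp(), reduction='batchmean')  # D_KL(Q||P)
    l_con = 0.5 * (kl_pq + kl_qp)
    return l_ins + alpha * l_con                                 # Eq. (5)
```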
The pseudo label is informative in revealing the similarity between the query $q$ and each $n_i$, while the one-hot label is unable to provide such information, since it only describes co-occurrence within one image. Note that the pseudo label is also dynamic, since the embedding function $f_\theta$ is updated at every training step and thus generates better pseudo labels as training proceeds. This indicates that the unsupervised embedding function and the soft similarity labels give positive feedback to each other.
Our method is simple and low-cost. It captures the similarity to each $n_i$ while introducing negligible computational overhead, with only one extra loss term to compute. This is unlike clustering-based unsupervised learning methods, which are costly since they explicitly compute the similarity sets over the training set after every training epoch (Caron et al., 2018; Zhuang et al., 2019; Li et al., 2020; Caron et al., 2020).
3 EXPERIMENTS
Herein, we first report our implementation details and benchmark the learned representations on ImageNet. Next, we examine how the unsupervised pre-trained models transfer to other datasets and tasks. We then analyze the characteristics of our proposed method.
3.1 EXPERIMENTAL SETUP
Setup We mainly evaluate CO2 based on MoCo (He et al., 2020) and MoCo v2 (Chen et al., 2020b). Both of them use instance discrimination as the pretext task, while MoCo v2 adopts more sophisticated design choices for the projection head architecture, learning rate schedule, and data augmentation strategy. We test CO2 on MoCo for its representativeness and simplicity. On MoCo v2, we evaluate how well CO2 is compatible with these advanced design choices.
The unsupervised training is performed on the train split of ImageNet-1K (Deng et al., 2009) without using label information. We keep every detail aligned with our baseline MoCo to effectively pinpoint the contribution of our approach, except the number of GPUs (MoCo uses 8 GPUs while we use 4). A further search over MoCo-related hyper-parameters might lead to better results for our
method. For the hyper-parameters of CO2, we set τcon as 0.04, α as 10 for MoCo-based CO2, and τcon as 0.05, α as 0.3 for MoCo v2-based CO2. Please refer to the appendix for more detailed implementation description.
3.2 LINEAR CLASSIFICATION
We first benchmark the learned representations on the common linear classification protocol. After the unsupervised pre-training stage, we freeze the backbone network, including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer and a softmax layer on the 2048-D features following the global average pooling layer. Table 1 summarizes the single-crop top-1 classification accuracy on the validation set of ImageNet-1K. Our method consistently improves, by 2.9% on MoCo and by 0.5% on MoCo v2. We also list several top-performing methods in the table for reference. These results indicate that the representation is more linearly separable on ImageNet with consistency regularization, since the consistency regularization mitigates the "class collision" effect caused by semantically similar negative samples.
3.3 SEMI-SUPERVISED LEARNING
We next perform semi-supervised learning on ImageNet to evaluate the effectiveness of the pre-trained network in data-efficient settings. Following (Wu et al., 2018; Misra & van der Maaten, 2020; Chen et al., 2020a), we finetune the whole pre-trained networks with only 1% and 10% of the labels, which are sampled in a class-balanced way. Table 2 summarizes the mean top-5 accuracy on the validation set of ImageNet-1K over three runs. The results for MoCo and MoCo v2 are produced by us using their officially released models. The proposed consistency regularization term provides 3.8% and 1.1% top-5 accuracy gains for MoCo with 1% and 10% labels, respectively. CO2 also improves over MoCo v2 by 1.1% top-5 accuracy with 1% labels, and by 0.3% with 10% labels.
3.4 TRANSFER LEARNING
To further investigate the generalization ability of our models across different datasets and tasks, we evaluate the transfer learning performance on PASCAL VOC (Everingham et al., 2015) with three typical visual recognition tasks, i.e., image classification, object detection, and semantic segmentation. Table 3 reports the transfer learning performance compared with other methods using ResNet-50. CO2 shows competitive or better performance compared with the corresponding baselines. In addition, it achieves better performance than state-of-the-art unsupervised representation learning methods.
Image Classification Following the evaluation setup in Goyal et al. (2019), we train a linear SVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pooling layer. The results of MoCo are produced by us with their official models. In this case, CO2 is 2.9% better than MoCo, and 0.2% better than MoCo v2.
Object Detection Following the detection benchmark set up in He et al. (2020), we use the Faster R-CNN (Ren et al., 2015) object detector with a ResNet-50 C4 (He et al., 2017) backbone, and all layers are finetuned, including the batch normalization parameters. The numbers of our method are averaged over three runs. Our reproduced results for MoCo are also listed in the table for reference. CO2 provides 0.3% AP50 gains on both MoCo and MoCo v2.
Semantic Segmentation We follow the settings in He et al. (2020) for semantic segmentation. Results are averaged over three runs. Similarly, we include our reproduced results of MoCo as a reference. The result of MoCo v2 is produced by us using its officially released model. CO2 gives a 0.9% mIoU improvement upon MoCo and 0.5% upon MoCo v2, finally surpassing its supervised counterpart.
The overall transfer learning improvements, though consistent, are smaller than those on linear classification and semi-supervised learning. Similar observations have also been made in Chen et al. (2020b). We hypothesize that the current unsupervised contrastive methods, which bring close different crops from the same image, reduce the representation's sensitivity to location, which is useful for tasks like detection. It is still an open question which properties of an unsupervised representation benefit the transfer ability to various downstream tasks.
3.5 ANALYSIS
In this section, we study the characteristics of the proposed method on a smaller backbone ResNet18 and a smaller dataset ImageNet-100 due to the consideration of the computational resource. ImageNet-100 is firstly used in Tian et al. (2019) and consists of 100 randomly selected classes from all 1, 000 classes of ImageNet.
Hyper-parameter Our method introduces two new hyper-parameters: the coefficient of the consistency regularization term, $\alpha$, and its temperature, $\tau_{con}$. In Figure 2, we show the top-1 accuracy of a linear classifier on models pre-trained by CO2 with different hyper-parameters. In Figure 2a, we fix the temperature $\tau_{con}$ at 0.04 and vary the coefficient $\alpha$. The best coefficient is 10. We see that with the consistency regularization term, the linear classification accuracy is boosted from 63.6% to 69.2%. Increasing $\alpha$ to 20 and beyond causes performance degradation. We hypothesize that the model becomes over-regularized by the consistency loss, and thus it loses some discrimination among different instances. In Figure 2b, we fix the coefficient at 10 and vary the temperature. As in other consistency regularization methods (e.g., Berthelot et al. (2019b)), the temperature $\tau_{con}$ strongly influences the quality of the learned representation, and the best value is 0.04.
Training Curves In Figure 3 we show the training curves of the instance discrimination loss Lins, the consistency loss Lcon, and the instance discrimination accuracy. Instance discrimination accuracy is the percentage of query crops that successfully select their corresponding positive crops, i.e., successfully identify their instances. MoCo is trained with Lins only, and its Lcon is merely computed for comparison. We see that Lins of MoCo drops quickly from the beginning at the cost of a jump in Lcon. As training proceeds, Lcon of MoCo decreases spontaneously, possibly because more semantic knowledge has been learned, but it remains relatively high. When training with Lcon and Lins together, i.e., MoCo + CO2, Lcon is kept very low from the beginning and increases only gradually, since the model is simultaneously trained to discriminate between images. At the end of training, its Lcon stays much lower than that of MoCo. We also notice that with CO2, the instance discrimination accuracy drops from 97.57% to 95.26%. Although CO2 results in lower instance discrimination accuracy, it still does better on the downstream classification task: the linear classification accuracy improves from 63.6% to 69.2%, as shown in Figure 2a. This again suggests that there is a gap between instance discrimination and the downstream tasks.
Comparison with Label Smoothing With the consistency regularization term, our approach assigns soft pseudo labels to crops from other images. This looks similar to label smoothing regularization in supervised classification (Szegedy et al., 2016), a useful trick that assigns a small constant value to the labels of all negative classes to avoid overconfidence. We equip MoCo with label smoothing, i.e., assigning a small constant value to crops from other images (the "negatives"). Surprisingly, it achieves 61.2% linear classification accuracy, 2.4% lower than MoCo alone. This suggests that assigning a constant value, as label smoothing does, can be harmful for unsupervised contrastive learning, since it ignores the heterogeneous similarity relationships; it is better to assign labels according to the similarities, as our consistency regularization does.
End-to-End Encoder To further verify the effectiveness of the proposed consistency regularization term on different contrastive learning frameworks, we apply CO2 to SimCLR (Chen et al., 2020a), a representative method with an end-to-end encoder (without a momentum encoder). The results are presented in Table 4. On ImageNet-100 (Tian et al., 2019) with a ResNet-18, SimCLR obtains 68.9% top-1 linear classification accuracy with batch size 128 and temperature τins 0.1. Equipped with CO2 whose coefficient α is 0.07 and temperature τcon is 1.0, the linear classification accuracy is boosted to 72.3%. The improvement demonstrates that CO2 can be applied to different unsupervised contrastive frameworks and improve the quality of the learned representation regardless of whether using a momentum encoder or not.
Varying the choices of Lcon We ablate different variants of Lcon (Eq. 4) on MoCo, including the forward KL (DKL(P‖Q)), the reverse KL (DKL(Q‖P)), and the objective of CO2, i.e., the symmetric KL. Each model uses a coefficient α of 10 and a temperature τcon of 0.04. We present the linear classification accuracy in Table 4. Our CO2 (symmetric KL) improves over the baseline MoCo by a large margin, from 63.1% to 69.7%. The forward KL alone improves similarly, to 69.6%, and the reverse KL alone also provides a nontrivial 2.0% gain in accuracy.
4 RELATED WORK
Our method falls in the area of unsupervised visual representation learning, especially for image data. In this section, we first revisit various design strategies of pretext tasks for unsupervised learning. Then we elaborate on the pretext tasks based on contrastive learning, which is the focus of our work. Next, we review the methods using consistency regularization in semi-supervised learning, which inspire our work.
Unsupervised Learning and Pretext Tasks To learn from unlabeled image data, a wide range of pretext tasks have been established. The models can be taught to specify the relative position of a patch (Doersch et al., 2015), solve spatial jigsaw puzzles (Noroozi & Favaro, 2016; Wei et al.,
2019), colorize gray scale images (Zhang et al., 2016; Larsson et al., 2017), inpaint images (Pathak et al., 2016), count objects (Noroozi et al., 2017), discriminate orientation (Gidaris et al., 2018), iteratively cluster (Caron et al., 2018; Zhuang et al., 2019; Asano et al., 2019; Zhong et al., 2020), generate images (Donahue et al., 2016; Donahue & Simonyan, 2019), etc. Doersch & Zisserman (2017) evaluates the combination of different pretext tasks. Kolesnikov et al. (2019) and Goyal et al. (2019) revisit and benchmark different pretext tasks.
Contrastive Learning Contrastive learning (Hadsell et al., 2006) recently puts a new perspective on the design of pretext task and holds the key to most state-of-the-art methods. Most of them perform an instance discrimination task while differ in i) the strategies to synthesize positives and negatives, and ii) the mechanisms to manage a large amount of negatives. The synthesizing can base on context with patches (Hjelm et al., 2018; 2019), random resized crops with data augmentation (Wu et al., 2018; Ye et al., 2019; Bachman et al., 2019; He et al., 2020; Chen et al., 2020a), jigsaw puzzle transformation (Misra & van der Maaten, 2020) or luminance-chrominance decomposition (Tian et al., 2019). Regarding the mechanisms to maintain negative features, some methods (Wu et al., 2018; Misra & van der Maaten, 2020) keep tracking the features of all images, some directly utilize the samples within the minibatch (Chen et al., 2020a; Tian et al., 2019; Ye et al., 2019), and He et al. (2020) proposes to use a momentum encoder. Grill et al. (2020) recently proposes to only use positive examples without negatives. Recently, Li et al. (2020) argues that the lack of semantic structure is one fundamental weakness of instance discrimination, and proposes to generate prototypes by k-means clustering. However, the computational overhead and the degeneration introduced by clustering are difficult to address. Chuang et al. (2020) also points out the possible sampling bias of instance discrimination, and proposes a debiased objective.
Consistency Regularization Consistency regularization is an important component of many successful semi-supervised learning methods. It is firstly proposed in Sajjadi et al. (2016), encouraging similar predictions on perturbed versions of the same image. Besides the consistency regularization on unlabeled data, the model is simultaneously trained with a supervised loss on a small set of labeled data. Several works made improvements on the way of perturbation, including using an adversarial transformation (Miyato et al., 2018), using the prediction of a moving average or previous model (Tarvainen & Valpola, 2017; Laine & Aila, 2017), and using strong data augmentation (Xie et al., 2019). Recently, several larger pipelines are proposed (Berthelot et al., 2019b;a; Sohn et al., 2020), in which consistency regularization still serves as a core component.
The instance discrimination loss in unsupervised contrastive learning is analogous to the supervised loss in semi-supervised learning, as both rely on some concrete information, i.e., co-occurrence in one image and human annotation, respectively. Meanwhile, CO2 on the similarities between crops is analogous to consistency regularization on unlabeled samples in semi-supervised methods, as their labels are both unknown. The main difference, however, is that semi-supervised methods crucially rely on the supervised loss to warm up the model, while there is no human annotation at all in unsupervised contrastive learning. Our work presents an example in which a model learned completely without human annotation can generate surprisingly effective pseudo labels for the similarities between different crops and benefit from consistency regularization.
5 DISCUSSION
Unsupervised visual representation learning has shown encouraging progress recently, thanks to the introduction of instance discrimination and the contrastive learning framework. However, in this paper, we point out that instance discrimination is ignorant of the heterogeneous similarities between image crops. We address this issue with a consistency regularization term on the similarities between crops, inspired by semi-supervised learning methods which impose consistency regularization on unlabeled data. In such a simple way, the proposed CO2 consistently improves on supervised and semi-supervised image classification. It also transfers to other datasets and downstream tasks.
More broadly, we encourage researchers to rethink label correctness in existing pretext tasks. Taking instance discrimination as an example, we show that a pretext task itself could, in fact, be a semi-supervised learning task. It might be harmful to treat the pretext task as a purely supervised task by assuming the unknown labels are negatives. In addition, our work relaxes the stereotyped restriction that pretext task labels should always be known and clean. We hope this relaxation can give rise to novel pretext tasks that exploit noisy labels or partially available labels, making better usage of data without human annotation.
A APPENDIX
A.1 IMPLEMENTATION DETAILS OF CONTRASTIVE PRE-TRAINING
We evaluate our approach based on MoCo (He et al., 2020). MoCo has two different encoders to encode queries and keys, respectively. The query encoder is updated with respect to the loss function, while the key encoder is an exponential moving average of the query encoder. The keys are stored in a dynamic memory bank, whose entries are updated at every training step, with the current mini-batch enqueued and the oldest mini-batch dequeued. The backbone is a standard ResNet-50 (He et al., 2016), and features after the global average pooling layer are projected to 128-D vectors (Wu et al., 2018), normalized by the $\ell_2$ norm. The size of the memory bank (i.e., the number of negative samples) is 65,536, and the momentum to update the key encoder is 0.999. $\tau_{ins}$ is 0.07 for the MoCo variants and 0.2 for the MoCo v2 variants, which are the default settings of these two methods.
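For reference, the momentum update and the queue maintenance reduce to a few lines; this is a simplified sketch in the spirit of the MoCo pseudocode, not the released implementation:

```python
import torch

@torch.no_grad()
def momentum_update(f_q, f_k, m=0.999):
    """Key encoder f_k is an exponential moving average of the query encoder f_q."""
    for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

@torch.no_grad()
def dequeue_and_enqueue(queue, ptr, keys):
    """Replace the oldest minibatch of keys in the memory bank."""
    b = keys.size(0)
    queue[ptr:ptr + b] = keys          # assumes the queue size is divisible by b
    return (ptr + b) % queue.size(0)
```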
We use momentum SGD with momentum 0.9 and weight decay 1e-4. The batch size is 256 on 4 GPUs. To prevent potential information leaks with Batch Normalization (BN) (Ioffe & Szegedy, 2015), shuffling BN (He et al., 2020) is performed. The model is trained for 200 epochs with an initial learning rate of 0.03. The learning rate is multiplied by 0.1 after 120 and 160 epochs for MoCo v1, and cosine decayed (Loshchilov & Hutter, 2016) for MoCo v2. We keep all training details aligned with MoCo except the number of GPUs. This could be problematic since it changes the per-worker minibatch size, which is related to the potential information leaks pointed out by He et al. (2020). However, we do not notice much difference when reproducing MoCo with 4 GPUs. Our reproduced MoCo v2 with 4 GPUs reaches an accuracy of 67.6% on the linear classification protocol, 0.1% higher than the 67.5% reported in its paper. For the hyper-parameters of the proposed consistency term, we set $\tau_{con}$ as 0.04 and $\alpha$ as 10 for the MoCo v1-based CO2, and $\tau_{con}$ as 0.05 and $\alpha$ as 0.3 for the MoCo v2-based variant.
A.2 IMPLEMENTATION DETAILS OF DOWNSTREAM TASKS
Linear Classification We freeze the backbone network, including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer followed by softmax on the 2048-D features following the global average pooling layer. We train for 100 epochs. The learning rate is initialized to 15 and decayed by 0.1 every 20 epochs after the first 60 epochs. We set weight decay to 0 and momentum to 0.9. Only random cropping with random horizontal flipping is used as data augmentation.
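A minimal sketch of this linear evaluation protocol, assuming a toy stand-in for the frozen backbone (the names `backbone`, `classifier`, and `linear_probe_step` are ours):

```python
import torch
import torch.nn as nn

# Toy stand-in for the frozen backbone: anything producing 2048-D pooled features.
backbone = nn.Sequential(nn.Conv2d(3, 2048, 7), nn.AdaptiveAvgPool2d(1), nn.Flatten())
backbone.eval()                       # BN statistics stay frozen as well
for p in backbone.parameters():
    p.requires_grad = False

classifier = nn.Linear(2048, 1000)    # the only trainable part
optimizer = torch.optim.SGD(classifier.parameters(), lr=15.0,
                            momentum=0.9, weight_decay=0.0)
criterion = nn.CrossEntropyLoss()     # cross-entropy subsumes the softmax

def linear_probe_step(images, labels):
    with torch.no_grad():             # no gradients through the backbone
        feats = backbone(images)
    loss = criterion(classifier(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```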
Semi-Supervised Learning We finetune the pre-trained model for 20 epochs, with the learning rate starting from 0.01 for the base model and 1.0 for the randomly initialized classification head, decayed by 0.2 after 12 and 16 epochs. Momentum is set to 0.9. Weight decay is 5e-4 for MoCo v1 and 1e-4 for MoCo v2. Only random cropping with random horizontal flipping is used as data augmentation.
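The two learning rates can be realized as optimizer parameter groups; a sketch under the stated hyper-parameters, with toy `base`/`head` modules standing in for the real network:

```python
import torch
import torch.nn as nn

base = nn.Linear(2048, 2048)   # stand-in for the pre-trained backbone
head = nn.Linear(2048, 1000)   # randomly initialized classification head

optimizer = torch.optim.SGD(
    [
        {"params": base.parameters(), "lr": 0.01},  # small LR for pre-trained weights
        {"params": head.parameters(), "lr": 1.0},   # large LR for the fresh head
    ],
    momentum=0.9,
    weight_decay=5e-4,  # 1e-4 for the MoCo v2-based variant
)
# decay both learning rates by 0.2 after epochs 12 and 16 (20 epochs in total)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[12, 16], gamma=0.2)
```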
Classification on PASCAL VOC Following the evaluation setup in Goyal et al. (2019), we train a linear SVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pooling layer. The models are trained on the trainval2007 split and tested on test2007. The hyper-parameters are selected based on a held-out subset of the training set.
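A sketch of this per-class linear SVM evaluation with scikit-learn, using random stand-in features (the array shapes and the `C` value here are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Random stand-ins for the frozen 2048-D features and one class's binary labels;
# VOC trainval2007 contains 5,011 images.
feats = np.random.randn(5011, 2048)
labels = np.random.randint(0, 2, size=5011)   # 1 if the class is present

# One one-vs-rest linear SVM per class; C would be tuned on a held-out split.
svm = LinearSVC(C=1.0, loss="squared_hinge")
svm.fit(feats, labels)
scores = svm.decision_function(feats)         # ranking scores used to compute AP
```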
Detection on PASCAL VOC Following the detection benchmark set up in He et al. (2020), we use a Faster R-CNN (Ren et al., 2015) object detector with a ResNet-50 C4 (He et al., 2017) backbone, implemented in Detectron2 (Wu et al., 2019). We finetune all the layers, including the batch normalization parameters, for 24k iterations on the trainval07+12 split and test on the test2007 set. The hyper-parameters are the same as the counterparts with supervised ImageNet initialization and MoCo. To calibrate the small feature magnitude due to the output normalization in the unsupervised pre-training stage, two extra batch normalization layers are introduced: one follows the region proposal head, whose gradients are divided by 10, and the other follows the box prediction head.
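One way to realize the gradient scaling on the extra batch normalization layer is a custom autograd function. The following is our interpretation of the description above, not code from Detectron2, and the 1024-channel width assumes ResNet-50 C4 features:

```python
import torch
import torch.nn as nn

class GradScale(torch.autograd.Function):
    """Identity in the forward pass; scales gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * ctx.scale, None

# Extra BN on the 1024-channel C4 features feeding the region proposal head;
# its gradients are divided by 10 as described above.
rpn_bn = nn.BatchNorm2d(1024)

def calibrated_rpn_input(features):
    return GradScale.apply(rpn_bn(features), 0.1)
```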
Segmentation on PASCAL VOC Following the setup in He et al. (2020), an FCN-based (Long et al., 2015) architecture with atrous convolutions (Chen et al., 2017) is used, and ResNet-50 is the backbone. The training set is train_aug2012 (Hariharan et al., 2011) and the testing set is val2012. Initialized with CO2 models, we finetune all layers for 50 epochs (∼33k iterations) with batch size 16, initial learning rate 0.003, weight decay 1e-4, and momentum 0.9.

Review 1

1. What is the focus of the paper regarding self-supervised visual representation learning?
2. What is the novelty of the proposed approach, particularly in combining consistency regularization loss with the standard instance discrimination loss?
3. How does the reviewer assess the significance and impact of the paper's contribution?
4. What are the limitations of the paper, including its comparison with other works and the magnitude of improvement shown in experiments?
This paper proposes to add a new consistency loss term to the momentum contrast (MoCo) framework for self-supervised visual representation learning. A common strategy for self-supervised learning, employed by MoCo as well as others, is to learn invariance to a class of transforms. Here, a deep network is trained on an instance discrimination task: among distractor images and a transformed (e.g. via data augmentation) version of the input (query) image, correctly identify the transformed example (a classification problem).
Consistency regularization formulates an alternate objective: learn a similarity function between images and, given a gallery of images, encourage the input query and positive example (transformed variant) to have a similar distribution of similarity over the gallery. The similarity distribution can be treated as a pseudo-label.
The paper trains a variant of MoCo that combines the standard instance discrimination loss with this consistency regularization loss. With optimal hyperparameters for this combination, experiments show small but consistent accuracy gains over the baseline MoCo in multiple scenarios: classification and semi-supervised learning on ImageNet, as well as classification, object detection, and semantic segmentation on PASCAL.
In terms of overall impact, the contribution of this paper appears to be an incremental improvement to the current self-supervised learning paradigm. The consistency loss itself is not a new idea, as the paper cites Sajjadi et al. (2016). In fact, Sajjadi et al. examine consistency loss in the context of self-supervised learning, which appears to further limit the novelty of the current paper to the specific contribution of doing so with MoCo.
As another limitation, the overall gains are small and it is not clear that they necessarily make MoCo+CO2 the top method. For example, on ImageNet classification, SimCLR outperforms MoCov2+CO2 (69.3 to 68.0 in Table 1), though SimCLR is trained for more epochs. |
Review 2

1. What is the main contribution of the paper?
2. How does the reviewer assess the technical novelty of the proposed approach?
3. What are the strengths and weaknesses of the experimental evaluation?
4. Does the reviewer think the proposed method is generally applicable to other contrastive learning methods?
5. What is the final decision of the reviewer regarding the paper's acceptance?
This paper addresses the problem of unsupervised contrastive learning for visual representation. Its key idea is to use a consistency regularization method to resolve the issue of one-hot labels for instance discrimination, on which most previous work has relied. The proposed CO2 method is implemented on top of MoCo and MoCo v2 and improves their representation performance on multiple classification and detection tasks.
It borrows a simple consistency regularization method from the semi-supervised learning literature to tackle one issue in the recent practice of contrastive learning: the one-hot label cannot discriminate how semantically close the different negative samples are to a query.
The proposed approach is applied to recent SOTA MoCo methods and further improves their performance on image classification, detection and semantic segmentation tasks.
Although this paper clearly alleviates one issue in the recent practice of contrastive learning, its technical novelty is limited.
(1) This paper proposes a new use of an existing technique (consistency regularization) for a new problem (unsupervised contrastive learning). Given that the technique is basic and well known in the semi-supervised learning literature, proposing its straightforward use (with no extension) bears little technical novelty.
(2) In my opinion, the proposal could be a good practice for unsupervised contrastive learning, but its contribution may not be sufficient to make this work a legitimate full paper at ICLR, one of the top premier ML conferences.
Experimental evaluation should be improved.
(1) The proposed approach shows SOTA performance on multiple computer vision tasks, but this is largely attributable to the strong performance of the MoCo variants.
(2) The performance gain of the proposed method over MoCo v2 is rather marginal, as shown in Tables 2 and 3.
(3) The proposed CO2 is only implemented on MoCo variants. To show the generality of the proposed method, it should be tested with multiple contrastive learning methods to see whether it can consistently improve them.
My initial decision is ‘reject’ mainly because the contribution is somewhat limited and more empirical justification for the method is required. |
ICLR
Title
CO2: Consistent Contrast for Unsupervised Visual Representation Learning
Abstract
Contrastive learning has been adopted as a core method for unsupervised visual representation learning. Without human annotation, the common practice is to perform an instance discrimination task: Given a query image crop, this task labels crops from the same image as positives, and crops from other randomly sampled images as negatives. An important limitation of this label assignment strategy is that it cannot reflect the heterogeneous similarity between the query crop and each crop from other images, taking them as equally negative, while some of them may even belong to the same semantic class as the query. To address this issue, inspired by consistency regularization in semi-supervised learning on unlabeled data, we propose Consistent Contrast (CO2), which introduces a consistency regularization term into the current contrastive learning framework. Regarding the similarity of the query crop to each crop from other images as “unlabeled”, the consistency term takes the corresponding similarity of a positive crop as a pseudo label, and encourages consistency between these two similarities. Empirically, CO2 improves Momentum Contrast (MoCo) by 2.9% top-1 accuracy on the ImageNet linear protocol, and by 3.8% and 1.1% top-5 accuracy on the 1% and 10% labeled semi-supervised settings. It also transfers to image classification, object detection, and semantic segmentation on PASCAL VOC. This shows that CO2 learns better visual representations for these downstream tasks.
1 INTRODUCTION
Unsupervised visual representation learning has attracted increasing research interest, as it unlocks the potential of large-scale pre-training for vision models without human annotation. Most recent works learn representations through one or more pretext tasks, in which labels are automatically generated from image data itself. Several early methods propose pretext tasks that explore the inherent structures within a single image. For example, by identifying spatial arrangement (Doersch et al., 2015), orientation (Gidaris et al., 2018), or chromatic channels (Zhang et al., 2016), models learn useful representations for downstream tasks. Recently, another line of works (Wu et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; He et al., 2020; Misra & van der Maaten, 2020; Chen et al., 2020a), e.g. Momentum Contrast (MoCo), falls within the framework of contrastive learning (Hadsell et al., 2006), which directly learns relations between images as the pretext task. In practice, contrastive learning methods show better generalization in downstream tasks.
Although designed differently, most contrastive learning methods perform an instance discrimination task, i.e., contrasting between image instances. Specifically, given a query crop from one image, a positive sample is an image crop from the same image; negative samples are crops randomly sampled from other images in the training set. Thus, the label for instance discrimination is a one-hot encoding over the positive and negative samples. This objective is to bring together crops from the same image and keep away crops from different images in the feature space, forming an instance discrimination task.
However, the one-hot label used by instance discrimination might be problematic, since it takes all the crops from other images as equally negative, which cannot reflect the heterogeneous similarities between the query crop and each of them. For example, some “negative” samples are semantically similar to the query, or even belong to the same semantic class as the query. This is referred to as
“class collision” in Saunshi et al. (2019) and “sampling bias” in Chuang et al. (2020). Ignoring the heterogeneous similarities between the query crop and the crops from other images thus raises an obstacle for contrastive methods to learn a good representation. A recent work, supervised contrastive learning (Khosla et al., 2020), fixes this problem by using human-annotated class labels and achieves strong classification performance. However, in unsupervised representation learning, human-annotated class labels are unavailable, and it is thus more challenging to capture the similarities between crops.
In this paper, we propose to view this instance discrimination task from the perspective of semi-supervised learning. The positive crop is surely similar to the query, since they come from the same image, and this similarity can thus be viewed as labeled. On the contrary, the similarity between the query and each crop from other images is unknown, or unlabeled. From this semi-supervised viewpoint, we introduce Consistent Contrast (CO2), a consistency regularization method which fits into the current contrastive learning framework. Consistency regularization (Sajjadi et al., 2016) is at the core of many state-of-the-art semi-supervised learning algorithms (Xie et al., 2019; Berthelot et al., 2019b; Sohn et al., 2020). It generates pseudo labels for unlabeled data by relying on the assumption that a good model should output similar predictions on perturbed versions of the same image. Similarly, in unsupervised contrastive learning, since the query crop and the positive crop naturally form two perturbed versions of the same image, we encourage them to have consistent similarities to each crop from other images. Specifically, the similarity of the positive sample predicted by the model is taken as a pseudo label for that of the query crop.
Our model is trained with both the original instance discrimination loss term and the introduced consistency regularization term. The instance discrimination label and the pseudo similarity label jointly construct a virtual soft label on-the-fly, and the soft label further guides the model itself in a bootstrap manner. In this way, CO2 exploits the consistency assumption on unlabeled data, mitigates the “class collision” effect introduced by the one-hot labels, and results in a better visual representation. More importantly, our work brings a new perspective to unsupervised visual representation learning. It relaxes the stereotype that the pretext task can only be self-supervised, i.e., constructing artificial labels for every sample, e.g., a specific degree of rotation (Gidaris et al., 2018), a configuration of a jigsaw puzzle (Noroozi & Favaro, 2016), or a one-hot label that indicates whether a crop comes from the same instance or not (Wu et al., 2018). Instead, the pretext task can also be self-semi-supervised, allowing the task itself to be partially labeled. This relaxation is especially helpful when there is not enough information to construct an artificial label and imposing one is harmful, as is the case with the one-hot labels in instance discrimination.
This simple modification brings consistent gains on various evaluation protocols. We first benchmark CO2 on the ImageNet (Deng et al., 2009) linear classification protocol, where CO2 improves MoCo by 2.9% in top-1 accuracy. It also provides 3.8% and 1.1% top-5 accuracy gains under the semi-supervised setting on ImageNet with 1% and 10% labels respectively, showing the effectiveness of the introduced consistency regularization. We also evaluate the transferability of the learned representations on three different downstream tasks: image classification, object detection, and semantic segmentation. CO2 models consistently surpass their MoCo counterparts, showing that CO2 can improve the generalization ability of the learned representation. Besides, our experiments on ImageNet-100 (Tian et al., 2019) demonstrate the efficacy of CO2 on SimCLR (Chen et al., 2020a), showing the generality of our method across different contrastive learning frameworks.
2 METHOD
In this section, we begin by formulating current unsupervised contrastive learning as an instance discrimination task. Then, we propose our consistency regularization term, which addresses the instance discrimination task's ignorance of the heterogeneous similarities between the query crop and each crop from other images.
2.1 CONTRASTIVE LEARNING
Contrastive learning (Hadsell et al., 2006) has recently been adopted as an objective for unsupervised learning of visual representations. Its goal is to find a parametric function fθ : RD → Rd that maps an input vector x to a feature vector fθ(x) ∈ Rd with D ≫ d, such that a simple distance measure (e.g., cosine distance) in the low-dimensional feature space can reflect complex similarities in the high-dimensional input space.
For each input vector xi in the training set S, the similarity measure in the input space is defined by a subset of training vectors Si ⊂ S, called the similarity set. The sample xi is deemed similar to samples in the similarity set Si, but dissimilar to samples in S \ Si. Then, the contrastive objective encourages fθ(xj) to be close to fθ(xi) in the feature space if xj ∈ Si, and otherwise to be distant. By training with the contrastive loss, the similarities defined by the similarity set determine the characteristics of the learned representation and of the mapping function fθ. For example, if similarity is defined over samples from the same semantic class, then fθ will probably learn invariances to other factors, e.g., object deformation. In the supervised setting, this definition of similarity requires a large amount of human labeling. On the contrary, unsupervised contrastive learning exploits similarities without the need for human labels. One natural definition of unsupervised similarity is multiple views of an image, as explored by many recent methods. For example, randomly augmented crops (Wu et al., 2018; Ye et al., 2019; He et al., 2020; Chen et al., 2020a;b) of an image can be defined as a similarity set. In this case, the contrastive objective effectively solves an instance discrimination task (Wu et al., 2018), as illustrated in Figure 1a.
The training of this instance discriminator involves randomly sampling a query crop xq ∈ Si, a positive crop xp ∈ Si from the same image, and K negative crops {xk ∈ S \ Si}, k = 1, . . . , K, from other images. These K + 2 crops (the query, the positive, and the K negatives) are encoded with fθ respectively: q = fθ(xq), p = fθ(xp), nk = fθ(xk). Then, an effective contrastive loss function, InfoNCE (Hjelm et al., 2018), is written as:
$$\mathcal{L}_{\mathrm{ins}} = -\log \frac{\exp(q \cdot p / \tau_{\mathrm{ins}})}{\exp(q \cdot p / \tau_{\mathrm{ins}}) + \sum_{k=1}^{K} \exp(q \cdot n_k / \tau_{\mathrm{ins}})} \qquad (1)$$
where τins is a temperature hyper-parameter (Hinton et al., 2015). This loss can be interpreted as a cross-entropy loss that trains the model to discriminate the positive crop (labeled as 1) from the negative crops (labeled as 0) given the query crop. We denote this loss as Lins as it performs an instance discrimination task. One direct instantiation of the InfoNCE loss, represented by SimCLR (Chen et al., 2020a), formulates fθ as an end-to-end encoder. In this case, two crops of the same image are exchangeable or symmetric to each other, as both are encoded by fθ. The final loss is also symmetric, with either one of the two crops as the query and the other crop as the positive. Another popular instantiation, represented by MoCo (He et al., 2020), encodes the query with fθ and encodes the positive and the negatives with fθ′, which is the moving average of fθ. In this case, only q can propagate gradients, which causes Lins to be asymmetric.
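To make Eq. 1 concrete, here is a minimal PyTorch-style sketch of the InfoNCE loss. The tensor shapes are our assumption: ℓ2-normalized queries and positives of shape [N, d] and negatives of shape [K, d]:

```python
import torch
import torch.nn.functional as F

def info_nce(q, p, n, tau_ins=0.07):
    """Instance discrimination loss of Eq. 1, averaged over N queries.

    q: queries [N, d]; p: positives [N, d]; n: negatives [K, d];
    all features are assumed to be L2-normalized.
    """
    l_pos = (q * p).sum(dim=1, keepdim=True)   # [N, 1] similarity to the positive
    l_neg = q @ n.t()                          # [N, K] similarities to the negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / tau_ins
    labels = torch.zeros(q.shape[0], dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)
```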
2.2 CONSISTENT CONTRAST
The one-hot labels used by the InfoNCE loss are effective, showing good generalization ability across tasks and datasets (Chen et al., 2020b;a). Nevertheless, we argue that the hard, zero-one labels are uninformative. Specifically, crops from other images are taken as equally negative, as they are all labeled as 0. This contradicts the fact that some so-called “negative” crops can be similar or even in the same semantic class, especially when K is large. For example, SimCLR (Chen et al., 2020a) uses 16,382 negative samples in a batch, and MoCo (He et al., 2020; Chen et al., 2020b) uses a memory bank of 65,536 features as negative samples. Even worse, the current objective forces negatives to be as far from the query as possible, with larger weights for closer negatives since they are “hard negatives”. However, these “hard negative” crops in fact tend to be semantically close. These issues impair good representation learning because the one-hot labels cannot faithfully reflect the heterogeneous similarities between the query crop and the crops from other images.
Although generating labels based on instance discrimination is trivial, revealing the similarity between two arbitrary crops is exactly what we want to learn from unsupervised pre-training. Therefore, there is little hope of obtaining labels for the similarity between the query crop and each crop from other images. This situation is similar to the usage of unlabeled data in the semi-supervised learning setting, in which consistency regularization is widely used to propagate knowledge from labeled data to discover the structures in unlabeled data. Inspired by this, we propose to encourage the consistency between the similarities of crops from the same image, i.e., the query crop and the positive crop. We illustrate the consistency regularization in Figure 1b.
First, we denote the similarity between the query q and the negatives ni (i ∈ {1, . . . , K}) as:

$$Q(i) = \frac{\exp(q \cdot n_i/\tau_{\mathrm{con}})}{\sum_{k=1}^{K} \exp(q \cdot n_k/\tau_{\mathrm{con}})}\,, \quad (2)$$

where τcon is also a temperature hyper-parameter. Q(i) is the probability that the query q selects ni as its match from the K negatives. Similarly, the similarity between the positive p and the negatives is written as:

$$P(i) = \frac{\exp(p \cdot n_i/\tau_{\mathrm{con}})}{\sum_{k=1}^{K} \exp(p \cdot n_k/\tau_{\mathrm{con}})}\,. \quad (3)$$
We impose consistency between the probability distributions P and Q by using the symmetric Kullback-Leibler (KL) divergence as the measure of disagreement:

$$\mathcal{L}_{\mathrm{con}} = \frac{1}{2} D_{\mathrm{KL}}(P\,\|\,Q) + \frac{1}{2} D_{\mathrm{KL}}(Q\,\|\,P)\,. \quad (4)$$
When p and q are encoded by the same end-to-end encoder fθ, it is natural to use the symmetric KL as their disagreement measure, since p and q are exchangeable. Even when p and ni are encoded by the momentum encoder fθ′, the symmetric KL empirically works as well as the forward KL, i.e., DKL(P‖Q), as shown in Section 3.5. Thus, we use the symmetric KL as a unified objective for both cases.
The total loss is a weighted average of the original instance discrimination loss term and the consistency regularization term:

$$\mathcal{L} = \mathcal{L}_{\mathrm{ins}} + \alpha \mathcal{L}_{\mathrm{con}}\,, \quad (5)$$

where α denotes the coefficient balancing the two terms. It is possible to merge the two terms by creating a single label containing the information of both the one-hot label and the pseudo similarity label, but we find that the weighted average already achieves good performance and is easy to control.
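To make Eqs. (2)-(5) concrete, the sketch below implements the consistency term and the total loss on top of the `info_nce` helper from the sketch in Section 2.1 (and reuses its imports); the detaching convention and the default hyper-parameter values follow the MoCo-based setting reported later, and everything else is our own convention.

```python
def co2_loss(q, p, negatives, tau_ins=0.07, tau_con=0.04, alpha=10.0):
    """q: (B, d) queries; p: (B, d) positives; negatives: (K, d).
    In the MoCo instantiation, p and negatives come from the momentum
    encoder and carry no gradient; in SimCLR all inputs are differentiable."""
    # Similarity distributions over the K negatives, Eqs. (2) and (3).
    Q = F.softmax(torch.einsum('bd,kd->bk', q, negatives) / tau_con, dim=1)
    P = F.softmax(torch.einsum('bd,kd->bk', p, negatives) / tau_con, dim=1)
    # Symmetric KL divergence, Eq. (4); F.kl_div(log_q, p) computes D_KL(p || q).
    l_con = 0.5 * F.kl_div(Q.log(), P, reduction='batchmean') \
          + 0.5 * F.kl_div(P.log(), Q, reduction='batchmean')
    # Weighted total loss, Eq. (5).
    return info_nce(q, p, negatives, tau_ins) + alpha * l_con
```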
The pseudo label is informative in revealing the similarity between the query q and each ni, while the one-hot label is unable to provide such information, since it only describes co-occurrence within one image. Note that the pseudo label is also dynamic, since the embedding function fθ is updated at every training step and thus generates better pseudo labels as training proceeds. This indicates that the unsupervised embedding function and the soft similarity labels give positive feedback to each other.
Our method is simple and low-cost. It captures the similarity to each ni while introducing negligible computational overhead, with only one extra loss term computed. This is unlike clustering-based unsupervised learning methods, which are costly since they explicitly compute the similarity sets over the training set after every training epoch (Caron et al., 2018; Zhuang et al., 2019; Li et al., 2020; Caron et al., 2020).
3 EXPERIMENTS
Herein, we first report our implementation details and benchmark the learned representations on ImageNet. Next, we examine how the unsupervised pre-trained models transfer to other datasets and tasks. We then analyze the characteristics of our proposed method.
3.1 SETUP
We mainly evaluate CO2 based on MoCo (He et al., 2020) and MoCo v2 (Chen et al., 2020b). Both use instance discrimination as the pretext task, while MoCo v2 adopts more sophisticated design choices for the projection head architecture, learning rate schedule, and data augmentation strategy. We test CO2 on MoCo for its representativeness and simplicity. On MoCo v2, we evaluate how CO2 is compatible with advanced design choices. We also demonstrate the impact of CO2 on the end-to-end contrastive framework in Section 3.5.
The unsupervised training is performed on the train split of ImageNet-1K (Deng et al., 2009) without using label information. We keep every detail aligned with our baseline MoCo to effectively pinpoint the contribution of our approach, except the number of GPUs (MoCo uses 8 GPUs while we use 4). A further search over MoCo-related hyper-parameters might lead to better results for our method.
For the hyper-parameters of CO2, we set τcon to 0.04 and α to 10 for MoCo-based CO2, and τcon to 0.05 and α to 0.3 for MoCo v2-based CO2. Please refer to the appendix for a more detailed implementation description.
3.2 LINEAR CLASSIFICATION
We first benchmark the learned representations on the common linear classification protocol. After the unsupervised pre-training stage, we freeze the backbone network including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer and a softmax layer on the 2048-D features following the global average pooling layer. Table 1 summarizes the single-crop top-1 classification accuracy on the validation set of ImageNet-1K. Our method consistently improves, by 2.9% on MoCo and by 0.5% on MoCo v2. We also list several top-performing methods in the table for reference. These results indicate that the representation is more linearly separable on ImageNet with consistency regularization, since the consistency regularization mitigates the “class collision” effect caused by semantically similar negative samples.
3.3 SEMI-SUPERVISED LEARNING
We next perform semi-supervised learning on ImageNet to evaluate the effectiveness of the pre-trained network in data-efficient settings. Following (Wu et al., 2018; Misra & van der Maaten, 2020; Chen et al., 2020a), we finetune the whole pre-trained networks with only 1% and 10% of the labels, which are sampled in a class-balanced way. Table 2 summarizes the mean top-5 accuracy on the validation set of ImageNet-1K over three runs. The results for MoCo and MoCo v2 are produced by us using their officially released models. The proposed consistency regularization term provides 3.8% and 1.1% top-5 accuracy gains for MoCo with 1% and 10% labels respectively. CO2 also improves over MoCo v2 by 1.1% top-5 accuracy with 1% labels, and by 0.3% with 10% labels.
3.4 TRANSFER LEARNING
To further investigate the generalization ability of our models across different datasets and tasks, we evaluate the transfer learning performance on PASCAL VOC (Everingham et al., 2015) with three typical visual recognition tasks, i.e., image classification, object detection and semantic segmentation. Table 3 reports the transfer learning performance compared with other methods using ResNet-50. CO2 shows competitive or better performance compared with the corresponding baselines. In addition, it achieves better performance than state-of-the-art unsupervised representation learning methods.
Image Classification Following the evaluation setup in Goyal et al. (2019), we train a linear SVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pooling layer. The results of MoCo are produced by us with their official models. In this case, CO2 is 2.9% better than MoCo, and 0.2% better than MoCo v2.
Object Detection Following the detection benchmark set up in He et al. (2020), we use the Faster R-CNN (Ren et al., 2015) object detector with a ResNet-50 C4 (He et al., 2017) backbone, and all the layers are finetuned, including the batch normalization parameters. The numbers for our method are averaged over three runs. Our reproduced results for MoCo are also listed in the table for reference. CO2 provides 0.3% AP50 gains on both MoCo and MoCo v2.
Semantic Segmentation We follow the settings in He et al. (2020) for semantic segmentation. Results are averaged over three runs. Similarly, we include our reproduced results of MoCo as a reference. The result of MoCo v2 is produced by us using its officially released model. CO2 gives a 0.9% mIoU improvement upon MoCo, and 0.5% upon MoCo v2, which finally surpasses its supervised counterpart.
The overall transfer learning improvements, though consistent, are smaller than those on linear classification and semi-supervised learning. Similar observations have also been made in Chen et al. (2020b). We hypothesize that current unsupervised contrastive methods, which bring different crops from the same image close together, reduce the representation’s sensitivity to location, which is useful for tasks like detection. It is still an open question which properties of an unsupervised representation benefit the transfer ability to various downstream tasks.
3.5 ANALYSIS
In this section, we study the characteristics of the proposed method with a smaller backbone, ResNet-18, and a smaller dataset, ImageNet-100, due to computational resource considerations. ImageNet-100 was first used in Tian et al. (2019) and consists of 100 randomly selected classes from all 1,000 classes of ImageNet.
Hyper-parameter Our method introduces two new hyper-parameters: the coefficient of the consistency regularization term α, and its temperature τcon. In Figure 2, we show the top-1 accuracy of a linear classifier on models pre-trained by CO2 with different hyper-parameters. In Figure 2a, we fix the temperature τcon at 0.04 and vary the coefficient α. The best coefficient is 10. We see that by using the consistency regularization term, the linear classification accuracy can be boosted from 63.6% to 69.2%. Increasing α to 20 and beyond causes performance degradation. We hypothesize that the model is over-regularized by the consistency loss, and thus loses some discrimination among different instances. In Figure 2b, we fix the coefficient at 10 and vary the temperature. As in other consistency regularization methods (e.g., Berthelot et al. (2019b)), the temperature τcon strongly influences the quality of the learned representation, and the best value is 0.04.
Training Curves In Figure 3 we show the training curves of the instance discrimination loss Lins, the consistency loss Lcon, and the instance discrimination accuracy. Instance discrimination accuracy represents the percentage of query crops that successfully select their corresponding positive crops, i.e., successfully identify their instances. MoCo is trained with Lins only, and its Lcon is computed only for comparison. We see that Lins of MoCo drops quickly from the beginning at the cost of a jump in Lcon. As training proceeds, Lcon of MoCo decreases spontaneously, possibly because more semantic knowledge has been learned, but it remains relatively high. When training with Lcon and Lins together, i.e., MoCo + CO2, Lcon is kept very low from the beginning and increases only gradually, since the model is simultaneously trained to discriminate between images. At the end of training, Lcon stays much lower than that of MoCo. We also notice that with CO2, the instance discrimination accuracy drops from 97.57% to 95.26%. Although CO2 results in lower instance discrimination accuracy, it still does better on the downstream classification task: the linear classification accuracy improves from 63.6% to 69.2%, as shown in Figure 2a. This suggests again that there is a gap between instance discrimination and the downstream tasks.
Comparison with Label Smoothing With the consistency regularization term, our approach assigns soft pseudo labels to crops from other images. This looks similar to label smoothing regularization in supervised classification (Szegedy et al., 2016), a useful trick which assigns a small constant value to the labels of all the negative classes to avoid overconfidence. We equip MoCo with label smoothing, i.e., assigning a small constant value to crops from other images (the “negatives”). Surprisingly, it achieves only 61.2% linear classification accuracy, 2.4% lower than MoCo alone. This suggests that assigning a constant value as in label smoothing can be harmful for unsupervised contrastive learning, since it ignores the heterogeneous similarity relationship, and that it is better to assign labels according to the similarities, as our consistency regularization does.
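The contrast between the two kinds of soft labels can be seen in a few lines; the snippet below, with an illustrative smoothing value eps and toy features, shows that the label-smoothing target spreads a constant mass over the “negatives”, whereas the CO2 pseudo label of Eq. (3) is similarity-aware.

```python
import torch
import torch.nn.functional as F

K, eps, tau_con = 8, 0.1, 0.04
p = F.normalize(torch.randn(128), dim=0)             # a positive-crop feature
negatives = F.normalize(torch.randn(K, 128), dim=1)  # features of crops from other images

# Label smoothing: the same constant mass eps/K on every crop from other images.
smooth = torch.full((K,), eps / K)
# CO2 pseudo label: mass proportional to each crop's similarity to the positive.
pseudo = F.softmax(negatives @ p / tau_con, dim=0)
print(smooth)  # uniform over the K crops
print(pseudo)  # heterogeneous, reflecting the similarity structure
```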
End-to-End Encoder To further verify the effectiveness of the proposed consistency regularization term on different contrastive learning frameworks, we apply CO2 to SimCLR (Chen et al., 2020a), a representative method with an end-to-end encoder (without a momentum encoder). The results are presented in Table 4. On ImageNet-100 (Tian et al., 2019) with a ResNet-18, SimCLR obtains 68.9% top-1 linear classification accuracy with batch size 128 and temperature τins 0.1. Equipped with CO2 whose coefficient α is 0.07 and temperature τcon is 1.0, the linear classification accuracy is boosted to 72.3%. The improvement demonstrates that CO2 can be applied to different unsupervised contrastive frameworks and improve the quality of the learned representation regardless of whether using a momentum encoder or not.
Varying the choices of Lcon We ablate different variants of Lcon (Eq. 4) on MoCo, including forward KL (DKL(P‖Q)), reverse KL (DKL(Q‖P)), and the objective of CO2, i.e., symmetric KL. Each model uses a coefficient α of 10 and a temperature τcon of 0.04. We present the linear classification accuracy in Table 4. Our CO2 (symmetric KL) improves over the baseline MoCo by a large margin, from 63.1% to 69.7%. Forward KL alone improves similarly, to 69.6%. And reverse KL alone can also provide a nontrivial 2.0% gain in accuracy.
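For completeness, the three ablated variants can be expressed with a single helper; this is a sketch with our own `variant` flag, not code from the paper.

```python
import torch.nn.functional as F

def consistency(P, Q, variant='symmetric'):
    """P, Q: (B, K) probability distributions from Eqs. (2) and (3)."""
    fwd = F.kl_div(Q.log(), P, reduction='batchmean')  # forward KL, D_KL(P || Q)
    rev = F.kl_div(P.log(), Q, reduction='batchmean')  # reverse KL, D_KL(Q || P)
    if variant == 'forward':
        return fwd
    if variant == 'reverse':
        return rev
    return 0.5 * (fwd + rev)                           # symmetric KL, as in CO2
```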
4 RELATED WORK
Our method falls in the area of unsupervised visual representation learning, especially for image data. In this section, we first revisit various design strategies of pretext tasks for unsupervised learning. Then we elaborate on the pretext tasks based on contrastive learning, which is the focus of our work. Next, we review the methods using consistency regularization in semi-supervised learning, which inspire our work.
Unsupervised Learning and Pretext Tasks To learn from unlabeled image data, a wide range of pretext tasks have been established. The models can be taught to specify the relative position of a patch (Doersch et al., 2015), solve spatial jigsaw puzzles (Noroozi & Favaro, 2016; Wei et al.,
2019), colorize grayscale images (Zhang et al., 2016; Larsson et al., 2017), inpaint images (Pathak et al., 2016), count objects (Noroozi et al., 2017), discriminate orientation (Gidaris et al., 2018), iteratively cluster (Caron et al., 2018; Zhuang et al., 2019; Asano et al., 2019; Zhong et al., 2020), generate images (Donahue et al., 2016; Donahue & Simonyan, 2019), etc. Doersch & Zisserman (2017) evaluates the combination of different pretext tasks. Kolesnikov et al. (2019) and Goyal et al. (2019) revisit and benchmark different pretext tasks.
Contrastive Learning Contrastive learning (Hadsell et al., 2006) recently puts a new perspective on the design of pretext tasks and holds the key to most state-of-the-art methods. Most of them perform an instance discrimination task while differing in i) the strategies to synthesize positives and negatives, and ii) the mechanisms to manage a large number of negatives. The synthesis can be based on context with patches (Hjelm et al., 2018; 2019), random resized crops with data augmentation (Wu et al., 2018; Ye et al., 2019; Bachman et al., 2019; He et al., 2020; Chen et al., 2020a), jigsaw puzzle transformations (Misra & van der Maaten, 2020) or luminance-chrominance decomposition (Tian et al., 2019). Regarding the mechanisms to maintain negative features, some methods (Wu et al., 2018; Misra & van der Maaten, 2020) keep tracking the features of all images, some directly utilize the samples within the mini-batch (Chen et al., 2020a; Tian et al., 2019; Ye et al., 2019), and He et al. (2020) proposes to use a momentum encoder. Grill et al. (2020) recently proposes to use only positive examples without negatives. Recently, Li et al. (2020) argues that the lack of semantic structure is one fundamental weakness of instance discrimination, and proposes to generate prototypes by k-means clustering. However, the computational overhead and the degeneration introduced by clustering are difficult to address. Chuang et al. (2020) also points out the possible sampling bias of instance discrimination, and proposes a debiased objective.
Consistency Regularization Consistency regularization is an important component of many successful semi-supervised learning methods. It was first proposed in Sajjadi et al. (2016), encouraging similar predictions on perturbed versions of the same image. Besides the consistency regularization on unlabeled data, the model is simultaneously trained with a supervised loss on a small set of labeled data. Several works have improved the way of perturbation, including using an adversarial transformation (Miyato et al., 2018), using the prediction of a moving-average or previous model (Tarvainen & Valpola, 2017; Laine & Aila, 2017), and using strong data augmentation (Xie et al., 2019). Recently, several larger pipelines have been proposed (Berthelot et al., 2019b;a; Sohn et al., 2020), in which consistency regularization still serves as a core component.
The instance discrimination loss in unsupervised contrastive learning is analogous to the supervised loss in semi-supervised learning, as both rely on some concrete information, i.e., co-occurrence in one image and human annotation, respectively. Meanwhile, CO2 on the similarities between crops is analogous to the consistency regularization on unlabeled samples in semi-supervised methods, as their labels are both unknown. The main difference, however, is that semi-supervised methods crucially rely on the supervised loss to warm up the model, while there is no human annotation at all in unsupervised contrastive learning. Our work presents an example that a model learned completely without human annotations can also generate surprisingly effective pseudo labels for similarities between different crops and benefit from consistency regularization.
5 DISCUSSION
Unsupervised visual representation learning has shown encouraging progress recently, thanks to the introduction of instance discrimination and the contrastive learning framework. However, in this paper, we point out that instance discrimination is ignorant of the heterogeneous similarities between image crops. We address this issue with a consistency regularization term on the similarities between crops, inspired by semi-supervised learning methods which impose consistency regularization on unlabeled data. In such a simple way, the proposed CO2 consistently improves linear and semi-supervised image classification. It also transfers to other datasets and downstream tasks.
More broadly, we encourage researchers to rethink label correctness in existing pretext tasks. Taking instance discrimination as an example, we show that a pretext task itself could be, in fact, a semi-supervised learning task. It might be harmful to treat the pretext task as a simple pure supervised task by assuming the unknown labels are negatives. In addition, our work relaxes the stereotypical restriction that pretext task labels should always be known and clean. We hope this relaxation can give rise to novel pretext tasks which exploit noisy labels or partially-available labels, making better use of the data without human annotation.
A APPENDIX
A.1 IMPLEMENTATION DETAILS OF CONTRASTIVE PRE-TRAINING
We evaluate our approach based on MoCo (He et al., 2020). MoCo has two different encoders to encode queries and keys respectively. The query encoder is updated with respect to the loss function, while the key encoder is an exponential moving average of the query encoder. The keys are stored in a dynamic memory bank, whose entries are updated at every training step with the current mini-batch enqueued and the oldest mini-batch dequeued. The backbone is a standard ResNet-50 (He et al., 2016), and features after the global average pooling layer are projected to 128-D vectors (Wu et al., 2018), normalized by the ℓ2 norm. The size of the memory bank (i.e., the number of negative samples) is 65,536 and the momentum to update the key encoder is 0.999. τins is 0.07 for MoCo variants and 0.2 for MoCo v2 variants, which are the default settings of these two methods.
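A minimal sketch of the two MoCo mechanisms described above, the exponential-moving-average update of the key encoder and the FIFO memory bank, is given below; the function names are ours, and as in the MoCo implementation the queue size is assumed to be divisible by the batch size.

```python
import torch

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """EMA update: the key encoder slowly tracks the query encoder."""
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)

@torch.no_grad()
def dequeue_and_enqueue(queue, ptr, keys):
    """queue: (d, 65536) memory bank; keys: (B, d) keys of the current mini-batch.
    Enqueue the new keys at the pointer, overwriting (dequeuing) the oldest entries."""
    b = keys.size(0)
    queue[:, ptr:ptr + b] = keys.t()
    return (ptr + b) % queue.size(1)   # advance the pointer cyclically
```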
We use momentum SGD with momentum 0.9 and weight decay 1e-4. The batch size is 256 on 4 GPUs. To prevent potential information leaks with Batch Normalization (BN) (Ioffe & Szegedy, 2015), shuffling BN (He et al., 2020) is performed. The model is trained for 200 epochs with an initial learning rate of 0.03. The learning rate is multiplied by 0.1 after 120 and 160 epochs for MoCo v1, and cosine decayed (Loshchilov & Hutter, 2016) for MoCo v2. We keep all training details aligned with MoCo except the number of GPUs. This could be problematic since it changes the per-worker mini-batch size, which is related to the potential information leaks pointed out by He et al. (2020). However, we do not notice much difference when reproducing MoCo with 4 GPUs: our reproduced MoCo v2 with 4 GPUs reaches an accuracy of 67.6% on the linear classification protocol, 0.1% higher than the 67.5% reported in its paper. For the hyper-parameters of the proposed consistency term, we set τcon as 0.04 and α as 10 for the MoCo v1-based CO2, and τcon as 0.05 and α as 0.3 for the MoCo v2-based variant.
A.2 IMPLEMENTATION DETAILS OF DOWNSTREAM TASKS
Linear Classification We freeze the backbone network including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer followed by softmax on the 2048-D features following the global average pooling layer. We train for 100 epochs. The learning rate is initialized to 15 and decayed by 0.1 every 20 epochs after the first 60 epochs. We set weight decay to 0 and momentum to 0.9. Only random cropping with random horizontal flipping is used as data augmentation.
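A sketch of this linear evaluation setup is shown below, assuming a torchvision ResNet-50 whose classifier head is replaced so that the 2048-D pooled features are exposed; loading the pre-trained weights is elided.

```python
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet50()
backbone.fc = nn.Identity()      # expose the 2048-D globally pooled features
# (the unsupervised pre-trained weights would be loaded into `backbone` here)
for param in backbone.parameters():
    param.requires_grad = False
backbone.eval()                  # eval mode also freezes the BN running statistics

classifier = nn.Linear(2048, 1000)
optimizer = torch.optim.SGD(classifier.parameters(), lr=15.0,
                            momentum=0.9, weight_decay=0.0)
# Decay by 0.1 every 20 epochs after the first 60 epochs of the 100-epoch schedule.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 80], gamma=0.1)
```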
Semi-Supervised Learning We finetune the pre-trained model for 20 epochs with learning rate starting from 0.01 for the base model and 1.0 for the randomly initialized classification head, decayed by 0.2 after 12 and 16 epochs. Momentum is set to 0.9. Weight decay is 5e-4 for MoCo v1 and 1e-4 for MoCo v2. Only random cropping with random horizontal flipping is used as data augmentation.
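The two learning rates can be realized with parameter groups; below is a sketch reusing the `backbone` and `classifier` names from the previous snippet, with the MoCo v1 weight decay.

```python
for p in backbone.parameters():
    p.requires_grad = True   # the whole network is finetuned in this setting

optimizer = torch.optim.SGD(
    [{'params': backbone.parameters(), 'lr': 0.01},    # pre-trained base model
     {'params': classifier.parameters(), 'lr': 1.0}],  # randomly initialized head
    momentum=0.9, weight_decay=5e-4)                   # 1e-4 for the MoCo v2 variant
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[12, 16], gamma=0.2)
```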
Classification on PASCAL VOC Following the evaluation setup in Goyal et al. (2019), we train a linear SVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pooling layer. The models are trained on the trainval2007 split and tested on test2007. The hyper-parameters are selected based on a held-out subset of the training set.
Detection on PASCAL VOC Following the detection benchmark set up in He et al. (2020), we use the Faster R-CNN (Ren et al., 2015) object detector with a ResNet-50 C4 (He et al., 2017) backbone, implemented in Detectron2 (Wu et al., 2019). We finetune all the layers, including the batch normalization parameters, for 24k iterations on the trainval07+12 split and test on the test2007 set. The hyper-parameters are the same as the counterparts with supervised ImageNet initialization and MoCo. To calibrate the small feature magnitude due to the output normalization in the unsupervised pre-training stage, two extra batch normalization layers are introduced: one follows the region proposal head, whose gradients are divided by 10, and the other follows the box prediction head.
Segmentation on PASCAL VOC Following the setup in He et al. (2020), an FCN-based (Long et al., 2015) architecture with atrous convolutions (Chen et al., 2017) is used, with ResNet-50 as the backbone. The training set is train_aug2012 (Hariharan et al., 2011) and the testing set is val2012. Initialized with CO2 models, we finetune all layers for 50 epochs (about 33k iterations) with batch size 16, initial learning rate 0.003, weight decay 1e-4 and momentum 0.9.
Review
Summary This paper proposes an extension, coined CO2, to the InfoNCE contrastive loss used in semi/unsupervised methods. CO2 is based on the premise that the query-negative crop similarity distribution and the positive-negative crop similarity distribution should be alike. The proposed method yields significant improvements on the linear classification protocol using ImageNet, while the improvements on downstream transfer learning tasks such as object detection are marginal. Aside from the performance improvements, the paradigm where the researchers think of the pretext task as a downstream task and improve the pseudo labels by using pseudo-pseudo labels is very interesting. The proposed CO2 method provides a relatively simple way to achieve this.
The authors build upon the intuition that, among the many crops the algorithm uses as negatives, it is highly likely that at least a few positive samples exist. These unknown “positive” crops should yield a high similarity to the query, but this is not possible to enforce as these crops’ labels are, by definition, unknown. Instead, the authors suggest that positive-negative similarities and query-negative similarities should be alike. A KL divergence term (between the positive-negative similarity distribution and the query-negative distribution) is used to implement this constraint.
Novelty To the best of my knowledge this is a novel approach. It is also surprising that using pseudo-pseudo labels to correct the wrong assumptions of pseudo labels is working reasonably well.
Impact I believe this paper will have a significant impact. Accuracy improvements on downstream tasks are diminished when the authors use MoCo-v2, suggesting that their method may not always yield significant benefits. However, aside from the numeric accuracy improvements on the downstream tasks, the proposed idea is very simple, seems easy to incorporate in other methods and likely opens new research directions.
Clarity The paper is well written and easy to follow. It is also generally clear but for some of my questions/comments please see below.
Evaluation The authors use their loss function with MoCo and MoCo v2 and report relatively small improvements over MoCo v2. It is an open question whether the proposed loss function would result in large performance gains for other methods. On the transfer learning side, the authors report marginal improvements for image classification, object detection and semantic segmentation tasks. It is not possible to form an opinion on how statistically significant these improvements are. Still, the authors provide a comparison with label smoothing which implies that CO2 is a beneficial addition.
Strengths (Reasons to accept) This is a relatively simple loss function extension, applicable to other methods. The experiments imply that the method works well and improves upon the state of the art (under reasonable resource constraints). As pointed out in the discussion, the paradigm of relaxing the pretext task’s label constraints (in a way, a learned label smoothing) is likely to open new research directions.
Weaknesses (Reasons to reject) Transfer learning improvements are marginal (and in the object detection case CO2 results in an unexplained 0.2% drop in AP). I would expect to see a discussion of the reasons behind the discrepancy between the large improvements for semi-supervised learning (or linear classification) and the marginal improvements for transfer learning. Only the object detection and semantic segmentation experiments have been repeated (3 times); the rest of the experiments are, I believe, single runs. This makes the reader question the significance of the reported results. A more in-depth explanation of the choice of the particular loss in Eq. 4 would benefit the reader. Why use a symmetric loss? Please see my questions below.
Questions and other comments to the authors In Figure 1, I would like to see where the labels (both one-hot and pseudo) come from, at least in the caption. The same is true for the similarity graphs, and the authors should consider adding axis labels and named ticks. Finally, increasing the arrowhead sizes would make everything easier to follow. Even though we cannot expect the authors to reproduce the 1000-epoch and >4000 batch-size methods common to recent semi-supervised techniques, I would still like to see the impact of CO2 using a vanilla ResNet backbone (without the momentum encoder). I believe that in Eq. 4, the first term alone should be enough to ensure that the “learned” query extractor distribution (Q) matches the “ground truth” distribution (P). In the next few paragraphs the authors state P to be dynamic, but with just the first term P would still be dynamic, as both Q and P depend on the same network. As such, it is not clear to me why the authors chose a symmetric divergence. Finally, the authors should refrain from using the letter P both for the distribution and for the pseudo label. This is an unnecessary overloading of notation.
ICLR | Title
CO2: Consistent Contrast for Unsupervised Visual Representation Learning
Abstract
Contrastive learning has been adopted as a core method for unsupervised visual representation learning. Without human annotation, the common practice is to perform an instance discrimination task: Given a query image crop, this task labels crops from the same image as positives, and crops from other randomly sampled images as negatives. An important limitation of this label assignment strategy is that it can not reflect the heterogeneous similarity between the query crop and each crop from other images, taking them as equally negative, while some of them may even belong to the same semantic class as the query. To address this issue, inspired by consistency regularization in semi-supervised learning on unlabeled data, we propose Consistent Contrast (CO2), which introduces a consistency regularization term into the current contrastive learning framework. Regarding the similarity of the query crop to each crop from other images as “unlabeled”, the consistency term takes the corresponding similarity of a positive crop as a pseudo label, and encourages consistency between these two similarities. Empirically, CO2 improves Momentum Contrast (MoCo) by 2.9% top-1 accuracy on ImageNet linear protocol, 3.8% and 1.1% top-5 accuracy on 1% and 10% labeled semi-supervised settings. It also transfers to image classification, object detection, and semantic segmentation on PASCAL VOC. This shows that CO2 learns better visual representations for these downstream tasks.
1 INTRODUCTION
Unsupervised visual representation learning has attracted increasing research interests for it unlocks the potential of large-scale pre-training for vision models without human annotation. Most of recent works learn representations through one or more pretext tasks, in which labels are automatically generated from image data itself. Several early methods propose pretext tasks that explore the inherent structures within a single image. For example, by identifying spatial arrangement (Doersch et al., 2015), orientation (Gidaris et al., 2018), or chromatic channels (Zhang et al., 2016), models learn useful representations for downstream tasks. Recently, another line of works (Wu et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; He et al., 2020; Misra & van der Maaten, 2020; Chen et al., 2020a), e.g. Momentum Contrast (MoCo), falls within the framework of contrastive learning (Hadsell et al., 2006), which directly learns relations of images as the pretext task. In practice, contrastive learning methods show better generalization in downstream tasks.
Although designed differently, most contrastive learning methods perform an instance discrimination task, i.e., contrasting between image instances. Specifically, given a query crop from one image, a positive sample is an image crop from the same image; negative samples are crops randomly sampled from other images in the training set. Thus, the label for instance discrimination is a one-hot encoding over the positive and negative samples. This objective is to bring together crops from the same image and keep away crops from different images in the feature space, forming an instance discrimination task.
However, the one-hot label used by instance discrimination might be problematic, since it takes all the crops from other images as equally negative, which cannot reflect the heterogeneous similarities between the query crop and each of them. For example, some “negative” samples are semantically similar to the query, or even belong to the same semantic class as the query. This is referred to as
∗corresponding author
“class collision” in Saunshi et al. (2019) and “sampling bias” in Chuang et al. (2020). The ignorance of the heterogeneous similarities between the query crop and the crops from other images can thus raise an obstacle for contrastive methods to learn a good representation. A recent work, supervised contrastive learning (Khosla et al., 2020), fixes this problem by using human annotated class labels and achieves strong classification performance. However, in unsupervised representation learning, the human annotated class labels are unavailable, and thus it is more challenging to capture the similarities between crops.
In this paper, we propose to view this instance discrimination task from the perspective of semisupervised learning. The positive crop should be similar to the query for sure since they are from the same image, and thus can be viewed as labeled. On the contrary, the similarity between the query and each crop from other images is unknown, or unlabeled. With the viewpoint of semi-supervised learning, we introduce Consistent Contrast (CO2), a consistency regularization method which fits into current contrastive learning framework. Consistency regularization (Sajjadi et al., 2016) is at the core of many state-of-the-art semi-supervised learning algorithms (Xie et al., 2019; Berthelot et al., 2019b; Sohn et al., 2020). It generates pseudo labels for unlabeled data by relying on the assumption that a good model should output similar predictions on perturbed versions of the same image. Similarly, in unsupervised contrastive learning, since the query crop and the positive crop naturally form two perturbed versions of the same image, we encourage them to have consistent similarities to each crop from other images. Specifically, the similarity of the positive sample predicted by the model is taken as a pseudo label for that of the query crop.
Our model is trained with both the original instance discrimination loss term and the introduced consistency regularization term. The instance discrimination label and the pseudo similarity label jointly construct a virtual soft label on-the-fly, and the soft label further guides the model itself in a bootstrap manner. In this way, CO2 exploits the consistency assumption on unlabeled data, mitigates the “class collision” effect introduced by the one-hot labels, and results in a better visual representation. More importantly, our work brings a new perspective of unsupervised visual representation learning. It relaxes the stereotype that the pretext task can only be self-supervised which aims to construct artificial labels for every sample, e.g., a specific degree of rotation (Gidaris et al., 2018), a configuration of jigsaw puzzle (Noroozi & Favaro, 2016), and a one-hot label that indicates whether a crop comes from the same instance or not (Wu et al., 2018). In contrast, the pretext task can also be self-semi-supervised, allowing the task itself to be partially labeled. This relaxation is especially helpful when information for artificial label construction is not enough and imposing a label is harmful, such as the case of imposing the one-hot labels in instance discrimination.
This simple modification brings consistent gains on various evaluation protocols. We first benchmark CO2 on ImageNet (Deng et al., 2009) linear classification protocol. CO2 improves MoCo by 2.9% on top-1 accuracy. It also provides 3.8% and 1.1% top-5 accuracy gains under the semisupervised setting on ImageNet with 1% and 10% labels respectively, showing the effectiveness of the introduced consistency regularization. We also evaluate the transfer ability of the learned representations on three different downstream tasks: image classification, object detection and semantic segmentation. CO2 models consistently surpass their MoCo counterparts, showing that CO2 can improve the generalization ability of learned representation. Besides, our experiments on ImageNet100 (Tian et al., 2019) demonstrate the efficacy of CO2 on SimCLR (Chen et al., 2020a), showing the generality of our method on different contrastive learning frameworks.
2 METHOD
In this section, we begin by formulating current unsupervised contrastive learning as an instance discrimination task. Then, we propose our consistency regularization term which addresses the ignorance of the heterogeneous similarity between the query crop and each crop of other images in the instance discrimination task.
2.1 CONTRASTIVE LEARNING
Contrastive learning (Hadsell et al., 2006) is recently adopted as an objective for unsupervised learning of visual representations. Its goal is to find a parametric function fθ : RD → Rd that maps an input vector x to a feature vector fθ(x) ∈ Rd with D d, such that a simple distance measure (e.g., cosine distance) in the low-dimensional feature space can reflect complex similarities in the high-dimensional input space.
For each input vector xi in the training set S, the similarity measure in the input space is defined by a subset of training vectors Si ⊂ S, called similarity set. The sample xi is deemed similar to samples in the similarity set Si, but dissimilar to samples in S \ Si. Then, the contrastive objective encourages fθ(xj) to be close to fθ(xi) in the feature space if xj ∈ Si, and otherwise to be distant. By training with contrastive loss, the similarities defined by the similarity set determine characteristics of the learned representation and the mapping function fθ. For example, if the similarity is defined as samples from the same semantic class, then fθ will probably learn invariances to other factors, e.g., object deformation. In the supervised setting, this definition of similarity requires a large amount of human labeling. On the contrary, unsupervised contrastive learning exploits similarities with no need of human labels. One natural definition of unsupervised similarity is multiple views of an image, as explored by many recent methods. For example, random augmented crops (Wu et al., 2018; Ye et al., 2019; He et al., 2020; Chen et al., 2020a;b) of an image could be defined as a similarity set. In this case, the contrastive objective is effectively solving an instance discrimination task (Wu et al., 2018) as illustrated in Figure 1a.
The training of this instance discriminator involves randomly sampling a query crop xq ∈ Si, a positive crop xp ∈ Si from the same image, and K negative crops {xk ∈ S \ Si}Kk=1 from other images. These K + 2 crops (the query, the positive, and K negatives) are encoded with fθ respectively, q = fθ(xq),p = fθ(xp),nk = fθ(xk). Then, an effective contrastive loss function, InfoNCE (Hjelm et al., 2018), is written as:
Lins = − log exp(q · p/τins) exp(q · p/τins) + ∑K k=1 exp(q · nk/τins) , (1)
where τins is a temperature hyper-parameter (Hinton et al., 2015). This loss can be interpreted as a cross entropy loss that trains the model to discriminate the positive crop (labeled as 1) from negative crops (labeled as 0) given the query crop. We denote this loss as Lins as it performs an instance discrimination task. One direct instantiation of InfoNCE loss, represented by SimCLR (Chen et al., 2020a), formulates fθ as an end-to-end encoder. In this case, two crops of the same image are exchangeable or symmetric to each other as both are encoded by fθ. The final loss is also symmetric
with either one of the two crops as the query and the other crop as the positive. Another popular instantiation, represented by MoCo (He et al., 2020), encodes the query with fθ and encodes the positive and the negatives with fθ′ which is the moving average of fθ. In this case, only q can propagate gradients, which causes Lins to be asymmetric.
2.2 CONSISTENT CONTRAST
The one-hot labels used by InfoNCE loss is effective, showing good generalization ability across tasks and datasets (Chen et al., 2020b;a). Nevertheless, we argue that the hard, zero-one labels is uninformative. Specifically, crops from other images are taken as equally negative as they are all labeled as 0. This is contradictory to the fact that some so-called “negative” crops can be similar or even in the same semantic class, especially when K is large. For example, SimCLR (Chen et al., 2020a) uses 16,382 negative samples in a batch, and MoCo (He et al., 2020; Chen et al., 2020b) uses a memory bank of 65,536 features as negative samples. Even worse, the current objective forces negatives to be as far from the query as possible, with larger weights for closer negatives since they are “hard negatives”. However, these “hard negative” crops in fact tend to be semantically close. These issues impair good representation learning because the one-hot labels can not faithfully reflect the heterogeneous similarities between the query crop and the crops from other images.
Although generating labels based on instance discrimination is trivial, revealing the similarity between two arbitrary crops is exactly what we want to learn from unsupervised pre-training. Therefore, the label of the similarity between the query crop and each crop from other images is of little hope to get. This situation is similar to the usage of unlabeled data in semi-supervised learning setting, in which consistency regularization is widely used to propagate knowledge from labeled data to discover the structures in unlabeled data. Inspired by this, we propose to encourage the consistency between the similarities of crops from the same image, i.e., the query crop and the positive crop. We illustrate the consistency regularization in Figure 1b.
First, we denote the similarity between the query q and the negatives ni(i ∈ {1, . . . ,K}) as:
Q(i) = exp(q · ni/τcon)∑K k=1 exp(q · nk/τcon) , (2)
where τcon is also a temperature hyper-parameter. Q(i) is the probability that the query q selects ni as its match from {nk}Kk=1. Similarly, the similarity between the positive p and the negatives is written as:
P (i) = exp(p · ni/τcon)∑K k=1 exp(p · nk/τcon) . (3)
We impose the consistency between the probability distributions P and Q by using symmetric Kullback-Leibler (KL) Divergence as the measure of disagreement:
Lcon = 1
2 DKL(P‖Q) +
1 2 DKL(Q‖P ) . (4)
When p and q are encoded by the same end-to-end encoder fθ, it is natural to use symmetric KL as their disagreement measure, since p and q are exchangeable. Even when p and ni are encoded by the momentum encoder f ′θ, symmetric KL empirically works as well as forward KL, i.e., DKL(P‖Q), as shown in Section 3.5. Thus, we use symmetric KL as a unified objective for both cases.
The total loss is a weighted average of the original instance discrimination loss term and the consistency regularization term: L = Lins + αLcon , (5) where α denotes the coefficient to balance the two terms. It is possible to merge the two terms by creating a unique label containing information of both the one-hot label and the pseudo similarity label, but we find the weighted average can already get good performance and is easy to control.
The pseudo label is informative to reveal the similarity between the query q and each ni, while the one-hot label is unable to provide such information, since it only describe co-occurrence within one image. Note that, the pseudo label is also dynamic since the embedding function fθ is updated in every training step, and thus generating better pseudo labels during training. It indicates that the unsupervised embedding function and the soft similarity labels give positive feedback to each other.
Our method is simple and low-cost. It captures the similarity to each ni while introducing unnoticeable computational overhead with only one extra loss term computed. This is unlike clustering based unsupervised learning methods, which are costly, since they explicitly compute the similarity sets in the training set after every training epoch (Caron et al., 2018; Zhuang et al., 2019; Li et al., 2020; Caron et al., 2020).
3 EXPERIMENTS
Herein, we first report our implementation details and benchmark the learned representations on ImageNet. Next, we examine how the unsupervised pre-trained models transfer to other datasets and tasks. We then analyze the characteristics of our proposed method.
3.1 LINEAR CLASSIFICATION
Setup We mainly evaluate CO2 based on MoCo (He et al., 2020) and MoCo v2 (Chen et al., 2020b). Both of them use instance discrimination as pretext task, while MoCo v2 adopts more sophisticated design choices on projection head architecture, learning rate schedule and data augmentation strategy. We test CO2 on MoCo for its representativeness and simplicity. On MoCo v2, we evaluate how CO2 is compatible with advanced design choices. We also demonstrate the impact of CO2 on the end-to-end contrastive framework in Section 3.5.
The unsupervised training is performed on the train split of ImageNet-1K (Deng et al., 2009) without using label information. We keep aligned every detail with our baseline MoCo to effectively pinpoint the contribution of our approach, except the number of GPUs (MoCo uses 8 GPUs while we use 4). A further search on MoCo-related hyper-parameters might lead to better results of our
* Results reported in Chen & He (2020).
method. For the hyper-parameters of CO2, we set τcon as 0.04, α as 10 for MoCo-based CO2, and τcon as 0.05, α as 0.3 for MoCo v2-based CO2. Please refer to the appendix for more detailed implementation description.
3.2 LINEAR CLASSIFICATION
We first benchmark the learned representations on the common linear classification protocol. After the unsupervised pre-training stage, we freeze the backbone network including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer and a softmax layer on the 2048-D features following the global average pooling layer. Table 1 summaries the singlecrop top-1 classification accuracy on the validation set of ImageNet-1K. Our method consistently improves by 2.9% on MoCo and by 0.5% on MoCo v2. We also list several top-performing methods in the table for reference. These results indicate that the representation is more linearly separable on ImageNet with consistency regularization, since the consistency regularization mitigates the “class collision” effect caused by semantically similar negative samples.
3.3 SEMI-SUPERVISED LEARNING
We next perform semi-supervised learning on ImageNet to evaluate the effectiveness of the pretrained network in data-efficient settings. Following (Wu et al., 2018; Misra & van der Maaten, 2020; Chen et al., 2020a), we finetune the whole pre-trained networks with only 1% and 10% labels which are sampled in a class-balanced way. Table 2 summaries the mean of the top-5 accuracy on the validation set of ImageNet-1K over three runs. The results for MoCo and MoCo v2 are produced by us using their officially released models. The proposed consistency regularization term can provide 3.8% and 1.1% top-5 accuracy gains for MoCo with 1% and 10% labels respectively. CO2 also improves from MoCo v2 by 1.1% top-5 accuracy with 1% labels, and by 0.3% with 10% labels.
3.4 TRANSFER LEARNING
To further investigate the generalization ability of our models across different datasets and tasks, we evaluate the transfer learning performance on PASCAL VOC (Everingham et al., 2015) with three typical visual recognition tasks, i.e., image classification, object detection and semantic segmentation. Table 3 reports the transfer learning performance comparing with other methods using ResNet-50. CO2 shows competitive or better performance comparing with the corresponding baselines, In addition, it achieves better performance than state-of-the-art unsupervised representation learning methods.
Image Classification Following the evaluation setup in Goyal et al. (2019), we train a linear SVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pool-
ing layer. The results of MoCo are produced by us with their official models. In this case, CO2 is 2.9% better than MoCo, and 0.2% than MoCo v2.
Object Detection Following the detection benchmark set up in He et al. (2020), we use Faster R-CNN (Ren et al., 2015) object detector and ResNet-50 C4 (He et al., 2017) backbone, and all the layers are finetuned including the batch normalization parameters. The numbers of our method are averaged over three runs. Our reproduced results for MoCo are also listed in the table for reference. CO2 provides 0.3% AP50 gains on both MoCo and MoCo v2.
Semantic Segmentation We follow the settings in He et al. (2020) for semantic segmentation. Results are average over three runs. Similarly, we include our reproduced results of MoCo as a reference. The result of MoCo v2 is produced by us using its officially released model. CO2 gives 0.9% mIoU improvement upon MoCo, and 0.5% upon MoCo v2, which finally surpasses its supervised counterpart.
The overall transfer learning improvements, though consistent, are smaller than linear classification and semi-supervised learning. Similar observations have also been made in Chen et al. (2020b). We hypothesize that the current unsupervised contrastive methods, which bring close different crops from the same image, reduce the representation’s sensitivity to location which is useful for tasks like detection. It is still an open question which properties of an unsupervised representation benefit the transfer ability to various downstream tasks.
3.5 ANALYSIS
In this section, we study the characteristics of the proposed method on a smaller backbone ResNet18 and a smaller dataset ImageNet-100 due to the consideration of the computational resource. ImageNet-100 is firstly used in Tian et al. (2019) and consists of 100 randomly selected classes from all 1, 000 classes of ImageNet.
Hyper-parameter Our method introduces two new hyper-parameters, the coefficient of consistency regularization term α, and its temperature τcon. In Figure 2, we show the top-1 accuracy of a linear classifier on models pre-trained by CO2 with different hyper-parameters. In Figure 2a, we fix the temperature τcon as 0.04 and vary the coefficient α. The best coefficient is 10. We see that by using the consistency regularization term, the linear classification accuracy can be boosted from 63.6% to 69.2%. Increasing α to 20 and beyond causes performance degeneration. We hypothesize that the model is over-regularized by the consistency loss, and thus it loses some discrimination among different instances. In Figure 2b, we fix the coefficient to be 10 and varying the temperature. As other consistency regularization methods (e.g., Berthelot et al. (2019b)), temperature τcon effectively influences the quality of the learned representation, and the best to use is 0.04.
Training Curves In Figure 3 we show the training curves of the instance discrimination loss Lins, the consistency loss Lcon and the instance discrimination accuracy. Instance discrimination accuracy represents the percent of query crops which successfully select their corresponding positive crops, i.e., successfully identify their instances. MoCo is trained with Lins only and its Lcon is just calculated out for comparison. We see that Lins of MoCo drops quickly from the beginning at the cost of a jump of Lcon. As the training proceeds, Lcon of MoCo decreases spontaneously, possibly because more semantic knowledge has been learned, but it is still relatively high. Training with Lcon and Lins together, i.e., MoCo + CO2, Lcon is kept very low from beginning, and Lcon increases gradually since the model is trained to discriminate between images at the same time. At the end of the training, Lcon stays much lower than Lcon of MoCo. We also notice that with CO2, the instance discrimination accuracy drops from 97.57% to 95.26%. Although CO2 results in lower instance discrimination accuracy, it still does better in the downstream classification task. The linear classification accuracy improves from 63.6% to 69.2%, as shown in Figure 2a. It suggests again that there is a gap between instance discrimination and the downstream tasks.
Comparison with Label Smoothing With the consistency regularization term, our approach assigns soft pseudo labels to crops from other images. This looks similar to label smoothing regularization on supervised classification (Szegedy et al., 2016), a useful trick which assigns a small constant value to the labels of all the negative classes to avoid overconfidence. We equip MoCo with label smoothing, i.e., assigning a small constant value to crops from other images (the “negatives”). Surprisingly, it reports 61.2% linear classification accuracy, 2.4% lower than MoCo alone. This suggests that assigning a constant value as label smoothing can be harmful for unsupervised contrastive learning, since it ignores the heterogeneous similarity relationship. And it is better to assign labels according to the similarities as our consistency regularization.
End-to-End Encoder To further verify the effectiveness of the proposed consistency regularization term on different contrastive learning frameworks, we apply CO2 to SimCLR (Chen et al., 2020a), a representative method with an end-to-end encoder (without a momentum encoder). The results are presented in Table 4. On ImageNet-100 (Tian et al., 2019) with a ResNet-18, SimCLR obtains 68.9% top-1 linear classification accuracy with batch size 128 and temperature τins 0.1. Equipped with CO2 whose coefficient α is 0.07 and temperature τcon is 1.0, the linear classification accuracy is boosted to 72.3%. The improvement demonstrates that CO2 can be applied to different unsupervised contrastive frameworks and improve the quality of the learned representation regardless of whether using a momentum encoder or not.
Varying the choices of Lcon We ablate on different variants of Lcon (Eq. 4) on MoCo including forward KL (DKL(P‖Q)), reverse KL (DKL(Q‖P )), and the objective of CO2, i.e., symmetric KL. Each of models uses a coefficient α of 10 and a temperature τcon of 0.04. We present the linear classification accuracy in Table 4. Our CO2 (symmetric KL) improves over the baseline MoCo by a large margin, from 63.1% to 69.7%. Forward KL alone improves similarly to 69.6%. And reserve KL alone can also provide a nontrivial 2.0% gain in accuracy.
4 RELATED WORK
Our method falls in the area of unsupervised visual representation learning, especially for image data. In this section, we first revisit various design strategies of pretext tasks for unsupervised learning. Then we elaborate on the pretext tasks based on contrastive learning, which is the focus of our work. Next, we review the methods using consistency regularization in semi-supervised learning, which inspire our work.
Unsupervised Learning and Pretext Tasks To learn from unlabeled image data, a wide range of pretext tasks have been established. The models can be taught to specify the relative position of a patch (Doersch et al., 2015), solve spatial jigsaw puzzles (Noroozi & Favaro, 2016; Wei et al.,
2019), colorize gray scale images (Zhang et al., 2016; Larsson et al., 2017), inpaint images (Pathak et al., 2016), count objects (Noroozi et al., 2017), discriminate orientation (Gidaris et al., 2018), iteratively cluster (Caron et al., 2018; Zhuang et al., 2019; Asano et al., 2019; Zhong et al., 2020), generate images (Donahue et al., 2016; Donahue & Simonyan, 2019), etc. Doersch & Zisserman (2017) evaluates the combination of different pretext tasks. Kolesnikov et al. (2019) and Goyal et al. (2019) revisit and benchmark different pretext tasks.
Contrastive Learning Contrastive learning (Hadsell et al., 2006) recently puts a new perspective on the design of pretext task and holds the key to most state-of-the-art methods. Most of them perform an instance discrimination task while differ in i) the strategies to synthesize positives and negatives, and ii) the mechanisms to manage a large amount of negatives. The synthesizing can base on context with patches (Hjelm et al., 2018; 2019), random resized crops with data augmentation (Wu et al., 2018; Ye et al., 2019; Bachman et al., 2019; He et al., 2020; Chen et al., 2020a), jigsaw puzzle transformation (Misra & van der Maaten, 2020) or luminance-chrominance decomposition (Tian et al., 2019). Regarding the mechanisms to maintain negative features, some methods (Wu et al., 2018; Misra & van der Maaten, 2020) keep tracking the features of all images, some directly utilize the samples within the minibatch (Chen et al., 2020a; Tian et al., 2019; Ye et al., 2019), and He et al. (2020) proposes to use a momentum encoder. Grill et al. (2020) recently proposes to only use positive examples without negatives. Recently, Li et al. (2020) argues that the lack of semantic structure is one fundamental weakness of instance discrimination, and proposes to generate prototypes by k-means clustering. However, the computational overhead and the degeneration introduced by clustering are difficult to address. Chuang et al. (2020) also points out the possible sampling bias of instance discrimination, and proposes a debiased objective.
Consistency Regularization Consistency regularization is an important component of many successful semi-supervised learning methods. It was first proposed by Sajjadi et al. (2016), encouraging similar predictions on perturbed versions of the same image. Besides the consistency regularization on unlabeled data, the model is simultaneously trained with a supervised loss on a small set of labeled data. Several works have improved the perturbation scheme, including using an adversarial transformation (Miyato et al., 2018), using the prediction of a moving-average or previous model (Tarvainen & Valpola, 2017; Laine & Aila, 2017), and using strong data augmentation (Xie et al., 2019). Recently, several more elaborate pipelines have been proposed (Berthelot et al., 2019b;a; Sohn et al., 2020), in which consistency regularization still serves as a core component.
The instance discrimination loss in unsupervised contrastive learning is analogous to the supervised loss in semi-supervised learning, as both rely on some concrete information, i.e., co-occurrence in one image and human annotation, respectively. Meanwhile, the CO2 term on the similarities between crops is analogous to the consistency regularization on unlabeled samples in semi-supervised methods, as their labels are both unknown. The main difference, however, is that semi-supervised methods crucially rely on the supervised loss to warm up the model, while there is no human annotation at all in unsupervised contrastive learning. Our work presents an example showing that a model learned completely without human annotations can also generate surprisingly effective pseudo labels for similarities between different crops, and benefit from consistency regularization.
5 DISCUSSION
Unsupervised visual representation learning has shown encouraging progress recently, thanks to the introduction of instance discrimination and the contrastive learning framework. However, in this paper, we point out that instance discrimination is ignorant of the heterogeneous similarities between image crops. We address this issue with a consistency regularization term on the similarities between crops, inspired by semi-supervised learning methods which impose consistency regularization on unlabeled data. In such a simple way, the proposed CO2 consistently improves on supervised and semi-supervised image classification. It also transfers to other datasets and downstream tasks.
More broadly, we encourage researchers to rethink label correctness in existing pretext tasks. Taking instance discrimination as an example, we show that a pretext task itself could be, in fact, a semi-supervised learning task. It might be harmful to think of the pretext task as a pure supervised task by assuming the unknown labels are negatives. In addition, our work relaxes the stereotypical restriction that pretext task labels should always be known and clean. We hope this relaxation can give rise to novel pretext tasks which exploit noisy labels or partially available labels, making better use of the data without human annotation.
A APPENDIX
A.1 IMPLEMENTATION DETAILS OF CONTRASTIVE PRE-TRAINING
We evaluate our approach based on MoCo (He et al., 2020). MoCo has two different encoders to encode queries and keys, respectively. The query encoder is updated with respect to the loss function, while the key encoder is an exponential moving average of the query encoder. The keys are stored in a dynamic memory bank, whose entries are updated at every training step with the current mini-batch enqueued and the oldest mini-batch dequeued. The backbone is a standard ResNet-50 (He et al., 2016), and features after the global average pooling layer are projected to 128-D vectors (Wu et al., 2018) and normalized by their ℓ2 norm. The size of the memory bank (i.e., the number of negative samples) is 65,536 and the momentum to update the key encoder is 0.999. τins is 0.07 for MoCo variants and 0.2 for MoCo v2 variants, which are the default settings of these two methods.
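As a rough illustration of the two mechanisms described above (our own sketch with assumed variable names, not MoCo's actual code), the momentum update and the queue can be implemented along these lines:

    import torch

    @torch.no_grad()
    def momentum_update(query_encoder, key_encoder, m=0.999):
        # The key encoder is an exponential moving average of the query encoder.
        for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
            k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

    @torch.no_grad()
    def dequeue_and_enqueue(queue, queue_ptr, keys):
        # queue: (K, 128) memory bank of key features; keys: (B, 128) current batch.
        # The oldest mini-batch is overwritten in place (assumes K % B == 0).
        batch_size = keys.shape[0]
        ptr = int(queue_ptr)
        queue[ptr:ptr + batch_size] = keys
        queue_ptr[0] = (ptr + batch_size) % queue.shape[0]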
We use momentum SGD with momentum 0.9 and weight decay 1e-4. The batch size is 256 on 4 GPUs. To prevent a potential information leak through Batch Normalization (BN) (Ioffe & Szegedy, 2015), shuffling BN (He et al., 2020) is performed. The model is trained for 200 epochs with an initial learning rate of 0.03. The learning rate is multiplied by 0.1 after 120 and 160 epochs for MoCo v1, while it is cosine decayed (Loshchilov & Hutter, 2016) for MoCo v2. We keep all training details aligned with MoCo except the number of GPUs. This could be problematic since it changes the per-worker mini-batch size, which is related to the potential information leaks pointed out by He et al. (2020). However, we do not notice much difference when reproducing MoCo with 4 GPUs: our reproduced MoCo v2 with 4 GPUs reaches an accuracy of 67.6% on the linear classification protocol, 0.1% higher than the 67.5% reported in its paper. For the hyper-parameters of the proposed consistency term, we set τcon to 0.04 and α to 10 for the MoCo v1-based CO2, and τcon to 0.05 and α to 0.3 for the MoCo v2-based variant.
A.2 IMPLEMENTATION DETAILS OF DOWNSTREAM TASKS
Linear Classification We freeze the backbone network, including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer followed by softmax on the 2048-D features following the global average pooling layer. We train for 100 epochs. The learning rate is initialized as 15 and decayed by 0.1 every 20 epochs after the first 60 epochs. We set weight decay to 0 and momentum to 0.9. Only random cropping with random horizontal flipping is used as data augmentation.
Semi-Supervised Learning We finetune the pre-trained model for 20 epochs with learning rate starting from 0.01 for the base model and 1.0 for the randomly initialized classification head, decayed by 0.2 after 12 and 16 epochs. Momentum is set to 0.9. Weight decay is 5e-4 for MoCo v1 and 1e-4 for MoCo v2. Only random cropping with random horizontal flipping is used as data augmentation.
Classification on PASCAL VOC Following the evaluation setup in Goyal et al. (2019), we train a linear SVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pooling layer. The models are trained on the trainval2007 split and tested on test2007. The hyper-parameters are selected based on a held-out subset of the training set.
Detection on PASCAL VOC Following the detection benchmark set up in He et al. (2020), we use a Faster R-CNN (Ren et al., 2015) object detector with a ResNet-50-C4 (He et al., 2017) backbone, implemented in Detectron2 (Wu et al., 2019). We finetune all the layers, including the batch normalization parameters, for 24k iterations on the trainval07+12 split and test on the test2007 set. The hyper-parameters are the same as for the counterparts with supervised ImageNet initialization and MoCo. To calibrate the small feature magnitude caused by the output normalization in the unsupervised pre-training stage, two extra batch normalization layers are introduced: one follows the region proposal head, whose gradients are divided by 10, and the other follows the box prediction head.
Segmentation on PASCAL VOC Following the setup in He et al. (2020), an FCN-based (Long et al., 2015) architecture with atrous convolutions (Chen et al., 2017) is used and ResNet-50 is the backbone. The training set is train_aug2012 (Hariharan et al., 2011) and the testing set is val2012. Initialized with CO2 models, we finetune all layers for 50 epochs (~33k iterations) with batch size 16, initial learning rate 0.003, weight decay 1e-4 and momentum 0.9. | 1. What is the focus of the paper regarding unsupervised visual representation learning?
2. What is the proposed method for contrastive regularization, and how does it differ from clustering-based methods?
3. How does the performance of the proposed method compare to MoCo and MoCo v2?
4. Why was the symmetric KL divergence used instead of an asymmetric KL divergence in Equation 4?
5. Is there any difference in considering the similarity between the query and positive samples in addition to negative samples?
6. Can you provide further explanation of the sharpness encouraged by minimizing Q(i) log Q(i)?
7. Are there any differences if q, p, and nk share the same encoder?
8. What is the limitation of the improvement of CO2 on MoCo v2? | Review | Review
##########################################################################
Summary:
This paper proposed a consistency regularization for unsupervised visual representation learning. It argues that the instance discrimination task performed by most contrastive learning methods merely uses one-hot labels, which cannot reflect the similarities between the query sample and the negative samples. To tackle this problem, the paper proposes a consistency regularization method that generates pseudo labels and encourages the query sample and its positive sample to have consistent similarities to the negative samples. The proposed consistency regularization method is simple and low-cost compared to clustering-based methods.
##########################################################################
Reasons for score:
Overall, I vote for acceptance, but some concerns about the implementation lower my score. The idea of consistency regularization is simple and low-cost, yet achieves a significant improvement on the popular MoCo baseline. My main concerns are: (1) the improvement on MoCo v2 is much smaller than on MoCo, while the weight of the consistency regularization term differs greatly (10 for MoCo, 0.3 for MoCo v2); (2) the usage of the symmetric KL divergence; (3) the discussion about Mean Teacher [1]. I will change my rating depending on the feedback.
##########################################################################
Pros:
P1: The research problem, contrastive learning for unsupervised visual representation learning, is popular and important in the CV community.
P2: The proposed consistent contrast (CO2) method is simple but effective. Compared to clustering-based methods, CO2 is less time-consuming.
P3: The performance on linear classification, semi-supervised learning, and transfer learning is reported.
P4: The analysis with a smaller backbone and dataset is provided as an ablation study.
##########################################################################
Cons:
C1: Section 3.1 states that CO2 can be easily applied to other contrastive learning mechanisms. It is acceptable to me that CO2 is evaluated with MoCo. However, it is not convincing that applying it to other methods is easy. The reason is that MoCo and SimCLR have different designs for the key encoder. CO2 works well with the momentum-updated encoder; however, there is no evidence that CO2 also works well with an end-to-end encoder, since the query and key encoders are updated simultaneously. I suspect that this conclusion is misleading and overstated.
C2: In Section 3.1, α is set to 10 for MoCo but 0.3 for MoCo v2. The improvement of CO2 over MoCo is 2.9%, while it is only 0.5% for MoCo v2. It is not clear why α is set at such different levels and why the improvement over MoCo v2 is not significant.
C3: This paper is close to Mean Teacher [Tarvainen & Valpola, 2017]. The contrastive loss (instance discrimination) is like the classification cost in Mean Teacher, while the symmetric KL divergence (consistency regularization) is like the consistency cost. The comparison, especially the difference, between CO2 and Mean Teacher should be discussed.
##########################################################################
Questions during the rebuttal period:
Q1: It is not clear to me why a symmetric KL divergence (i.e., DKL(P‖Q) + DKL(Q‖P)) is used in Eq. (4) rather than an asymmetric KL divergence (i.e., DKL(Q‖P)). In MoCo, the encoder for computing keys is momentum-updated rather than updated by back-propagation. Therefore, Eq. (4) becomes

L_con = (1/2) Σ_{i=1}^{K} [ −P(i) log Q(i) + Q(i) log Q(i) − Q(i) log P(i) ].

Since the gradient of P(i) is stopped, P(i) log P(i) is ignored. If the gradient of Q in DKL(Q‖P) is stopped, then DKL(Q‖P) does not contribute to back-propagation. If not, Q(i) log Q(i) works like a regularization term, as minimizing Q(i) log Q(i) may encourage a sharp Q(i). I wonder whether this sharpness implicitly boosts the performance. I hope for more explanation about the symmetric KL divergence, especially: (1) why "symmetric" rather than only DKL(Q‖P) — if "symmetric" is necessary, additional ablations and references/citations would help; (2) whether Q(i) log Q(i) is minimized — if so, more explanation is expected.
Q2: In Section 2.2, Eq. (2) and Eq. (3) only consider the similarity between the query and the negatives. I wonder why the similarity between the query and the positive is ignored. Although the latter is used in Eq. (1), I wonder whether there would be any difference if the similarity between the query and the positive were also considered in Eq. (2) and Eq. (3).
#########################################################################
Suggestions:
S1: On Page 3, this paper states that q, p and n_k share the same encoder f_θ. However, q, p and n_k are obtained with different encoders when using MoCo. This part could be carefully revised to be more general.
#########################################################################
References:
[1] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, 2017.
I thank the authors for their good work during the rebuttal.
Most of my questions have been addressed. However, my main concern is still that the improvement of CO2 on MoCo v2 is incremental. During the rebuttal stage, the authors did not include more empirical evidence on whether CO2 can improve MoCo v2 well, or a theoretical analysis of why the improvement is incremental, either of which would have been acceptable to me. This severely limits the contribution of this paper. In all, I would not change my score.
ICLR | Title
Domain Adversarial Training: A Game Perspective
Abstract
The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game theoretical perspective. Defining optimal solutions in domain-adversarial training as local Nash equilibria, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (i.e., Runge–Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that, in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement with less than half the training iterations. Our optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework.
1 INTRODUCTION
Unsupervised domain adaptation (UDA) deals with the lack of labeled data in a target domain by transferring knowledge from a labeled source domain (i.e., a related dataset with different distribution where abundant labeled data already exists). The paramount importance of this paradigm has led to remarkable advances in the field in terms of both theory and algorithms (Ben-David et al., 2007; 2010a;b; Mansour et al., 2009). Several state-of-the-art algorithms tackle UDA by learning domaininvariant representations in an adversarial fashion (Shu et al., 2018; Long et al., 2018; Saito et al., 2018; Hoffman et al., 2018; Zhang et al., 2019; Acuna et al., 2021). Their goal is to fool an auxiliary classifier that operates in a representation space and aims to classify whether the datapoint belongs to either the source or the target domain. This idea, called Domain-Adversarial Learning (DAL), was introduced by Ganin et al. (2016) and can be more formally understood as minimizing the discrepancy between source and target domain in a representation space (Acuna et al., 2021).
Despite DAL being a dominant approach for UDA, alternative solutions have been sought, as DAL is noticeably unstable and difficult to train in practice (Sener et al., 2016; Sun et al., 2019; Chang et al., 2019). One major cause of instability is the adversarial nature of the learning algorithm, which results from the introduction of the Gradient Reversal Layer (GRL, Ganin et al., 2016) (Figure 1). GRL flips the sign of the gradient during the backward pass, which has profound implications for the training dynamics and asymptotic behavior of the learning algorithm. Indeed, GRL transforms gradient descent into a competitive gradient-based algorithm, which may converge to periodic orbits and other non-trivial limiting behavior that arises, for instance, in chaotic systems (Mazumdar et al., 2020). Surprisingly, little attention has been paid to this fact, and specifically to the adversarial component and the interaction among the three different networks in the algorithm. In particular, three fundamental questions have not been answered from an algorithmic point of view: 1) What is optimality in DAL? 2) What makes DAL difficult to train? 3) How can we mitigate this problem?
In this work, we aim to answer these questions by interpreting the DAL framework through the lens of game theory. Specifically, we use tools developed by the game theoretical community in Başar & Olsder (1998); Letcher et al. (2019); Mazumdar et al. (2020) and draw inspiration from the existing two-player zero-sum game interpretations of Generative Adversarial Networks (GANs)
(Goodfellow et al., 2014). We emphasize that in DAL, however, we have three rather than two networks interacting with each other, with partial cooperation and competition. We propose a natural three-player game interpretation for DAL, which we coin the Domain-Adversarial Game; this interpretation is not necessarily equivalent to two-player zero-sum game interpretations (see Example 1). We also propose to interpret and characterize optimal solutions in DAL as local Nash Equilibria (see Section 3). This characterization introduces a proper mathematical definition of algorithmic optimality for DAL. It also provides sufficient conditions for optimality that drive the algorithmic analysis.
With our proposed game perspective in mind, a simple optimization solution would be to use the Gradient Descent (GD) algorithm, which is the de facto solution but known to be unstable. Alternatively, we could also use other popular gradient based optimizers proposed in the context of differentiable games (e.g. Korpelevich, 1976; Mescheder et al., 2017). However, we notice that these do not outperform GD in practice (see § 6). To understand why, we analyze the asymptotic behavior of gradient-based algorithms in the proposed domain-adversarial game (§ 4). The main result of § 4.2 (Theorem 2) shows that GD with GRL (i.e., the existing solution for DAL) violates the asymptotic convergence guarantees to local NE unless an upper bound is placed on the learning rate, which may explain its training instability and sensitivity to optimizer parameters. In § 4.3, Appendix B.2 and Appendix E, we also provide a similar analysis for the popular game optimization algorithms mentioned above. We emphasize however that while some of our results may be of independent interest for learning in general games, our focus is DAL. § 4.3 and § 6 show both theoretically and experimentally that the limitations mentioned above disappear if standard optimizers are replaced with ODE solvers of at least second order. These are straightforward to implement as drop-in replacements to existing optimizers. They also lead to more stable algorithms, allow for more aggressive learning rates and provide notable performance gains.
2 PRELIMINARIES

Figure 1: We study domain-adversarial training from a game perspective. In DAL (Ganin et al., 2016), three networks interact with each other: the feature extractor (g), the domain classifier (ĥ′) and the classifier (ĥ). During backpropagation, the GRL flips the sign of the gradient with respect to g.
We focus on the UDA scenario and follow the formulation from Acuna et al. (2021). This makes our analysis general and applicable to most state-of-the-art DAL algorithms (e.g., Ganin et al. (2016); Saito et al. (2018); Zhang et al. (2019)). We assume that the learner has access to a source dataset (S) with labeled examples and a target dataset (T) with unlabeled examples, where the source inputs x_i^s are sampled i.i.d. from a (source) distribution P_s and the target inputs x_i^t are sampled i.i.d. from a (target) distribution P_t, both over X. We have Y = {0, 1} for binary classification, and Y = {1, ..., k} in the multiclass case. The risk of a hypothesis h : X → Y w.r.t. the labeling function f, using a loss function ℓ : Y × Y → R_+ under distribution D, is defined as R_D^ℓ(h, f) := E_D[ℓ(h(x), f(x))]. For simplicity, we define R_S^ℓ(h) := R_{P_s}^ℓ(h, f_s) and R_T^ℓ(h) := R_{P_t}^ℓ(h, f_t). The hypothesis class of h is denoted by H.
UDA aims to minimize the risk in the target domain while only having access to labeled data in the source domain. This risk is upper bounded in terms of the risk of the source domain, the discrepancy between the two distributions, and the joint hypothesis error λ*:

Theorem 1. (Acuna et al. (2021)) Let ℓ : Y × Y → [0, 1], λ* := min_{h∈H} R_S^ℓ(h) + R_T^ℓ(h), and D_{h,H}^φ(P_s‖P_t) := sup_{h′∈H} | E_{x∼P_s}[ℓ(h(x), h′(x))] − E_{x∼P_t}[φ*(ℓ(h(x), h′(x)))] |. We have:

R_T^ℓ(h) ≤ R_S^ℓ(h) + D_{h,H}^φ(P_s‖P_t) + λ*.    (1)
The function φ : R_+ → R defines a particular f-divergence and φ* is its (Fenchel) conjugate. As is typical in UDA, we assume that the hypothesis class is complex enough and that f_s and f_t are similar enough that the non-estimable term (λ*) is negligible and can be ignored.
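As a concrete instance (a standard conjugate pair stated here for illustration, not quoted from this paper): the Pearson χ² divergence corresponds to φ(t) = (t − 1)², whose Fenchel conjugate is φ*(u) = u + u²/4; this is the choice underlying the f-DAL Pearson variant used in § 6.2.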
Domain-Adversarial Training (see Figure 1) aims to find a hypothesis h ∈ H that jointly minimizes the first two terms of Theorem 1. To this end, the hypothesis h is interpreted as the composition of h = ĥ ◦ g with g : X → Z , and ĥ : Z → Y . Another function class Ĥ is then defined to formulate H := {ĥ ◦ g : ĥ ∈ Ĥ, g ∈ G}. The algorithm tries to find the function g ∈ G such that ĥ ◦ g minimizes the risk of the source domain (i.e. the first term in Theorem 1), and its composition with ĥ and ĥ′ minimizes the divergence of the two distributions (i.e. the second term in Theorem 1).
Algorithmically, the divergence function in Theorem 1 is estimated by a so-called domain classifier ĥ′ ∈ Ĥ whose role is to detect whether the datapoint g(x_i) ∈ Z belongs to the source or to the target domain. When there does not exist a function ĥ′ ∈ Ĥ that can properly distinguish between g(x_i^s) and g(x_i^t), g is said to be invariant to the domains.
Learning is performed using GD and the GRL (denoted by Rλ) on the following objective:
min_{ĥ∈Ĥ, g∈G, ĥ′∈Ĥ}  E_{x∼p_s}[ℓ(ĥ ◦ g, y)] − α d_{s,t}(ĥ, ĥ′, R_λ(g)),    (2)

where d_{s,t}(ĥ, ĥ′, g) := E_{x∼p_s}[ℓ̂(ĥ′ ◦ g, ĥ ◦ g)] − E_{x∼p_t}[(φ* ◦ ℓ̂)(ĥ′ ◦ g, ĥ ◦ g)]. Mathematically, the GRL R_λ is treated as a "pseudo-function" defined by two (incompatible) equations describing its forward and back-propagation behavior (Ganin & Lempitsky, 2015; Ganin et al., 2016). Specifically,

R_λ(x) := x  and  dR_λ(x)/dx := −λ,    (3)

where λ and α are hyper-parameters that control the tradeoff between achieving small source error and learning an invariant representation. The surrogate loss ℓ : Y × Y → R (e.g., cross-entropy) is used to minimize the empirical risk in the source domain. The choice of the function ℓ̂ : Y × Y → R and of the conjugate φ* of the f-divergence defines the particular algorithm (Ganin et al., 2016; Saito et al., 2018; Zhang et al., 2019; Acuna et al., 2021). From Eq. 2, we can see that the GRL introduces an adversarial scheme. We next interpret Eq. 2 as a three-player game where the players are ĥ, ĥ′ and g, and study its continuous gradient dynamics.
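For reference, the two-equation definition in Equation (3) is typically realized with a custom autograd function; below is a minimal PyTorch sketch (the class and argument names are our own):

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; scales the gradient by -lambda backward.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)  # R_lambda(x) := x

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None  # dR_lambda(x)/dx := -lambda

    # Usage: the feature extractor's output is passed through the GRL before the
    # domain classifier, e.g., domain_logits = h_prime(GradReverse.apply(g(x), lambd)).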
3 A GAME PERSPECTIVE ON DAL
We now interpret DAL from a game-theoretical perspective. In § 3.1, we rewrite the DAL objective as a three-player game. In this view, the feature extractor and the two classifiers are each a player. This allows us to define optimality in terms of a local Nash Equilibrium (see Def. 2 in the Appendices). In § 3.2, we introduce the vector field, the game Hessian and the tools that allow us to characterize local NEs for the players. This characterization leads to our analysis of the continuous dynamics in § 4.
3.1 DOMAIN-ADVERSARIAL GAME
We now rewrite and analyze the DAL problem in Eq. 2 as a three-player game. Let Ĥ, Ĥ′ and G be classes of neural network functions and define ω_1 ⊆ Ω_1 and ω_3 ⊆ Ω_3 as the vectors of parameters of the classifier and domain classifier networks ĥ ∈ Ĥ and ĥ′ ∈ Ĥ′, respectively. Similarly, let ω_2 ⊆ Ω_2 be the parameters of the feature extractor network g ∈ G. Their joint domain is denoted by Ω = Ω_1 × Ω_2 × Ω_3 and the joint parameter set is ω = (ω_1, ω_2, ω_3). Let each neural network be a player and its parameter choice be its individual strategy (here continuous). The goal of each player is then to selfishly minimize its own cost function J_i : Ω → R. We use the subscript −i to refer to all parameters/players but i. With the notation introduced, we can now formally define the Domain-Adversarial Game as the three-player game G(I, Ω_i, J_i) where I := {1, 2, 3}, dim(Ω) = Σ_{i=1}^{3} dim(Ω_i) = d, Ω_i ⊆ R^{d_i}, and:

J_1(ω_1, ω_{-1}) := ℓ(ω_1, ω_2) + α d_{s,t}(ω),
J_2(ω_2, ω_{-2}) := ℓ(ω_1, ω_2) + αλ d_{s,t}(ω),
J_3(ω_3, ω_{-3}) := −α d_{s,t}(ω).    (4)
We use the shorthand ℓ(ω_1, ω_2) for E_{x,y∼p_s}[ℓ(ω_1 ◦ ω_2(x), y)], where the ω_i's refer to the feature extractor g and the classifiers (ĥ and ĥ′). Similar notation follows for d_{s,t}. Here, we assume that each J_i is smooth in each of its arguments ω_i ∈ Ω_i. The gradient field of Equation (2) and the game's vector field (see § 3.2) are equivalent, making the original interpretation of DAL and our three-player formulation equivalent. It is worth noting, however, that our interpretation does not explicitly require the use of R_λ in d_{s,t} in Equation (4). We can write optimality conditions for the above problem through the concept of Nash Equilibrium: Definition 1. (Nash Equilibrium (NE)) A point ω* ∈ Ω is said to be a Nash Equilibrium of the Domain-Adversarial Game if ∀i ∈ {1, 2, 3}, ∀ω_i ∈ Ω_i, we have J_i(ω_i^*, ω_{-i}^*) ≤ J_i(ω_i, ω_{-i}^*). In our scenario, the losses are not convex/concave. A NE then does not necessarily exist and, in general, finding a NE is analogous to, but much harder than, finding global minima in neural networks – which
is unrealistic using gradient-based methods (Letcher et al., 2019). Thus, we focus on the local NE, which relaxes the NE to a local neighborhood B(ω*, δ) := {‖ω − ω*‖ < δ} with δ > 0 (see Definition 2). Intuitively, a NE means that no player has an incentive to change its own strategy (here, the parameters of its neural network), because doing so will not generate any additional payoff (here, it will not further minimize its cost function). We emphasize that each player only has access to its own strategy set. In other words, player J_1 cannot change the parameters ω_2, ω_3; it only has access to ω_1 ∈ Ω_1. While the motivation for the three-player game follows naturally from the original formulation of DAL, where three networks interact with each other (see Figure 1), the optimization problem (2) could also be interpreted as the minimax objective of a two-player zero-sum game. Thus, a natural question arises: can we interpret the domain-adversarial game as a two-player zero-sum game? This can be done, for example, by defining ω_{12}^* := (ω_1^*, ω_2^*) and considering the costs of the two players (ω_{12}, ω_3) as J_{12} = J and J_3 = −J, where J(ω_{12}, ω_3) := E_{p_s}[ℓ(ω_1, ω_2)] + d_{s,t}(ω). In general, however, the solution of the two-player game (ω_{12}^*, ω_3^*) is not equal to the NE solution of the three-player game (ω_1^*, ω_2^*, ω_3^*). This is because, in general, the team optimal solution ω_{12}^* ≠ (ω_1^*, ω_2^*). We illustrate this in the following counterexample (see Başar & Olsder (1998) for more details): Example 1. Let J(ω) := (1/2)(ω_1² + 4ω_1ω_2 + ω_2² − ω_3²). (a) Suppose the three-player game ω = (ω_1, ω_2, ω_3) with J_1 = J_2 = J and J_3 = −J. Each J_i is strictly convex in ω_i. The NE solution of the game, ω* = (0, 0, 0), is unique. (b) Suppose the two-player game ω = (ω_{12}, ω_3) with J_{12} = J and J_3 = −J. The solution ω* from (a) is not a NE solution. To see this, let ω̂ := (−1, 1, 0). We have J_{12}(ω̂) = −1 < J_{12}(ω*) = 0, which contradicts Definition 1. One can verify that there is no NE in this two-player scenario.
3.2 CHARACTERIZATION OF THE DOMAIN-ADVERSARIAL GAME
We now introduce the game’s vector field (also called pseudo-gradient) and the pseudo-gradient’s Jacobian. We also provide a characterization of local NE based on them (see § 3). These are the core concepts used in our analysis (§ 4). We first define the game’s vector field v(w), and its Jacobian H(ω) (also called the game Hessian (Letcher et al., 2019)):
v(ω) := (∇_{ω_1}J_1, ∇_{ω_2}J_2, ∇_{ω_3}J_3) ∈ R^d,    H(ω) := ∇v(ω) ∈ R^{d×d}.    (5)

Note that the vector field v(ω) and the three-player formulation naturally capture the behavior introduced by the GRL in the original formulation. Specifically, v(ω) is identical to the gradient, with respect to the parameters, of the original DAL objective with the GRL (Equation (2)). Therefore, in both cases the behavior of GD is identical: assuming the same initial conditions, they will reach the same solution. This shows the equivalence between our perspective and the original DAL formulation. We emphasize that by equivalence, we mean the same dynamics, and the same intermediate and final solutions. Another fact worth emphasizing is that H(ω) is asymmetric. This is in contrast with the Hessian in supervised learning. Before proceeding with a characterization of local NEs in terms of v(ω) and H(ω), we first state necessary and sufficient conditions for a local NE:

Proposition 1. (Necessary condition). Suppose each J_i is twice continuously differentiable in each ω_i. Any local NE ω* satisfies: i) ∇_{ω_i}J_i(ω*) = 0 and ii) ∀i ∈ {1, 2, 3}, ∇²_{ω_i,ω_i}J_i(ω*) ⪰ 0.

Proposition 2. (Sufficient condition). Suppose each J_i is twice continuously differentiable in each ω_i. ω* is a local NE if i) ∇_{ω_i}J_i(ω*) = 0 and ii) ∀i, ∇²_{ω_i,ω_i}J_i(ω*) ≻ 0.
The necessary and sufficient conditions from Propositions 1 and 2 are reminiscent of the conditions for local optimality in continuous optimization (Nocedal & Wright, 2006). Similar conditions were also proposed in Ratliff et al. (2016), where the sufficient condition defines the differential Nash equilibrium. We can now characterize a local NE in terms of v(ω) and H(ω):

Proposition 3. (Strict Local NE) ω is a strict local NE if v(ω) = 0 and H(ω) + H(ω)^⊤ ≻ 0.

The sufficient condition implies that the NE is structurally stable (Ratliff et al., 2016). Structural stability is important as it implies that slightly biased estimators of the gradient (e.g., due to sampling noise) will not have vastly different behaviors in neighborhoods of equilibria (Mazumdar et al., 2020). In the following, we focus on the strict local NE (i.e., ω* for which Proposition 3 is satisfied).
4 LEARNING ALGORITHMS
We defined optimality as the local NE and provided sufficient conditions in terms of the pseudogradient and its Jacobian. In this section, we assume the existence of the strict local NE (Prop. 3)
in the neighborhood of the current point (e.g., initialization), and analyze the continuous gradient dynamics of the Domain-Adversarial Game (eq. 4 and eq. 5). We show that given the sufficient conditions from Prop. 3, asymptotic convergence to a local NE is guaranteed through an application of Hurwitz condition (Khalil, 2002). Most importantly, we show that using GD with the GRL could violate those guarantees unless its learning rate is upper bounded (see Thm. 2 an Cor. 1). This is in sharp contrast with known results from supervised learning where the implicit regularization introduced by GD has been shown to be desirable (Barrett & Dherin, 2021). We also analyze the use of higher-order ODE solvers for DAL and show that the above restrictions are not required if GD is replaced with them. Finally, we compare our resulting optimizers with recently algorithms in the context of games.
Our algorithmic analysis is based on the continuous gradient-play dynamics and the derivation of the modified or high-resolution ODE of popular integrators (e.g., GD/Euler Method and Runge-Kutta). This type of analysis is also known in the numerical integration community as backward error analysis (Hairer et al., 2006) and has recently been used to understand the implicit regularization effect of GD in supervised learning (Barrett & Dherin, 2021). High resolution ODEs have also been used in Shi et al. (2018) to understand the acceleration effect of optimization algorithms, and more recently in Lu (2020). As in Shi et al. (2018); Lu (2020); Barrett & Dherin (2021), our derivation of the high resolution ODEs is in the full-batch setting. The derivation of the stochastic dynamics of stochastic discrete time algorithms is significantly more complicated and is beyond the scope of this work.
We experimentally demonstrate that our results are also valid when there is stochasticity due to sampling noise in the mini-batch. We emphasize that our analysis does not put any constraint or structure on the players’ cost functions as opposed to Azizian et al. (2020); Zhang & Yu (2020). In our problem, the game is neither bilinear nor necessarily strongly monotone. See proofs in appendices.
4.1 CONTINUOUS GRADIENT DYNAMICS
Given v(ω) the continuous gradient dynamics can be written as:
ω̇(t) = −v(ω).    (6)

For later reasons, and to distinguish between Eq. 6 and the gradient flow, we will refer to these as the gradient-play dynamics, as in Başar & Olsder (1998); Mazumdar et al. (2020). These dynamics are well studied and understood when the game is either a potential or a purely adversarial game (see definitions in the appendices). While Eq. 2 may look like a single objective, the introduction of the GRL (R_λ) makes a fundamental difference between our case and the dynamics analyzed in the single-objective gradient-based learning and optimization literature. We summarize this below:

Proposition 4. The domain-adversarial game is neither a potential nor necessarily a purely adversarial game. Moreover, its gradient dynamics are not equivalent to the gradient flow.
Fortunately, we can directly apply the Hurwitz condition (Khalil, 2002) (also known as the condition for asymptotic stability, see Appendix A.1) to derive sufficient conditions for which the continuous dynamics of the gradient play would converge. Lemma 1. (Hurwitz condition) Let ∇v(w∗) be the Jacobian of the vector field at a stationary point w∗ where v(w∗) = 0. If the real part of every eigenvalue λ of ∇v(w∗) (i.e. in the spectrum Sp(∇v(w∗))) is positive then the continuous gradient dynamics are asymptotically stable. In this work, we assume the algorithms are initialized in a neighborhood of a strict local NE ω∗. Therefore, Lemma 1 provides sufficient conditions for the asymptotic convergence of the gradientplay dynamics to a local NE. In practice this assumption may not hold, and it is computationally hard to verify. Despite this, our experiments show noticeable performance gains in several tasks, benchmarks and network architectures (see § 6).
4.2 ANALYSIS OF GD WITH THE GRL
We showed above that given the existence of a strict local NE, the gradient-play dynamics are attracted to the strict local NE. A natural question then arises: If under this assumption local asymptotic convergence is guaranteed, what could make DAL notoriously hard to train and unstable? In practice, we do not have access to an explicit solution of the ODE. Thus, we rely on integration algorithms to approximate the solution. One simple approach is to use the Euler method:
w^+ = w − η v(w).    (7)
This is commonly known as GD. The equivalence between v(w) (game’s vector field) and the gradient of Equation (2) (original DAL formulation) follows from the use of the GRL (Rλ). We remind the reader that the GRL is a “pseudo-function” defined by two (incompatible) equations describing its forward and back-propagation behavior, i.e., a flip in the gradient’s sign for the backward pass (see Figure 1, Section 2 and Ganin et al. (2016)). Equation (7) is then the default algorithm used in DAL. Now, to provide an answer to the motivating question of this section, we propose to analyze the high-resolution ODE of this numerical integrator (i.e., Euler) and in turn its asymptotic behavior. This is similar to deriving the modified continuous dynamics for which the integrator produces the exact solution (Hairer et al., 2006) and applying Hurwitz condition on the high-resolution ODE. Theorem 2. The high resolution ODE of GD with the GRL up to O(η) is:
ẇ = −v(w) − (η/2) ∇v(w) v(w).    (8)

Moreover, this is asymptotically stable (see Appendix A.1) at a stationary point w* (Proposition 3) iff, for every eigenvalue λ = a + ib ∈ Sp(−∇v(w*)), we have 0 > η(a² − b²)/2 > a.

A striking difference between Equation (6) and Equation (8) is made clear (the additional second term). This additional term is a result of the discretization of the gradient-play dynamics using Euler's method (i.e., GD) and leads to a different Jacobian of the dynamics. This term was recently shown to be beneficial for standard supervised learning (Barrett & Dherin, 2021), where ∇v(ω*) is symmetric and thus only has real eigenvalues. In our scenario, this term is undesirable; in fact, it puts an upper bound on the learning rate η. The following corollary formalizes this:

Corollary 1. The high-resolution ODE of GD with the GRL in Equation (8) is asymptotically stable only if the learning rate η is in the interval 0 < η < −2a/(b² − a²), for all λ = a + ib ∈ Sp(−∇v(w*)) with large imaginary part (i.e., such that |a| < |b|).

To have good convergence properties, the imaginary part of the eigenvalues of −∇v(w*) must be small enough. Therefore, if some eigenvalue λ = a + ib satisfies a < 0 and b² ≫ a², −2a, the learning rate should be chosen to be very small. This is verified in Section 6 and in Example 2.

Example 2. Consider the three-player game where ℓ(w_1, w_2) = w_1² + 2w_1w_2 + w_2², λ = 1 and d_{s,t}(w_2, w_3) = w_2² + 99w_2w_3 − w_3². Then ẇ = −v(w) becomes ẇ = Aw with

A = [ −2  −2    0
      −2  −4  −99
       0   99   −2 ].

The
eigenvalues of A are −2 and −3 ± 2i√2449. From Corollary 1, η must satisfy 0 < η < −2a/(b² − a²) = 6/9787 ≈ 6.1 × 10⁻⁴.
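Example 2 is easy to check numerically; the following NumPy sketch (ours, not from the paper) recovers the eigenvalues and the learning-rate bound of Corollary 1:

    import numpy as np

    # Jacobian of the gradient-play dynamics w_dot = A w from Example 2.
    A = np.array([[-2.0, -2.0,   0.0],
                  [-2.0, -4.0, -99.0],
                  [ 0.0, 99.0,  -2.0]])
    eigvals = np.linalg.eigvals(A)  # -2 and -3 +/- 2i*sqrt(2449)

    # Corollary 1: eta < -2a / (b^2 - a^2) for each eigenvalue a + ib with |a| < |b|.
    bounds = [-2 * lam.real / (lam.imag ** 2 - lam.real ** 2)
              for lam in eigvals if abs(lam.real) < abs(lam.imag)]
    print(eigvals, min(bounds))  # min(bounds) = 6/9787 ~ 6.1e-4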
4.3 HIGHER ORDER ODE SOLVERS
The limitation described above exists because GD with the GRL can be understood as a discretization of the gradient-play dynamics using Euler’s Method. Thus, it only approximates the continuous dynamics up to O(η). To obtain a better approximation, we consider Runge-Kutta (RK) methods of order two and beyond (Butcher, 1996). For example, take the improved Euler’s method (a particular RK method of second order) that can be written as:
w^+ = w − (η/2) (v(w) + v(w − η v(w))).    (9)

Comparing Equation (9) (i.e., the update rule of RK2) with Equation (7) (i.e., the update rule of GD), one can see that the RK2 method is straightforward to implement in standard deep learning frameworks. Moreover, it does not introduce additional hyper-parameters. More importantly, such discrete dynamics approximate the continuous ODE of Equation (6) to a higher precision. In Appendix C, we provide asymptotic guarantees for the high-resolution ODE of general RK methods, their generalized expression and the algorithm pseudo-code. See also the PyTorch pseudo-code in Appendix F.
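To make the extra-step structure of Equation (9) concrete, here is a minimal PyTorch-style sketch of one RK2 (improved Euler) update; the callable pseudo_grad, which re-evaluates the three players' losses at a given point and returns v there, is an assumption of ours rather than the paper's exact API:

    import torch

    def rk2_step(params, pseudo_grad, lr):
        # One update of Eq. (9): w+ = w - (lr/2) * (v(w) + v(w - lr * v(w))).
        v1 = pseudo_grad(params)                           # v(w)
        trial = [p - lr * g for p, g in zip(params, v1)]   # Euler trial point
        v2 = pseudo_grad(trial)                            # v(w - lr * v(w))
        with torch.no_grad():
            for p, g1, g2 in zip(params, v1, v2):
                p -= 0.5 * lr * (g1 + g2)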
Limitation. A disadvantage of using high-order solvers is that they require extra gradient evaluations: specifically, one extra evaluation in the case of RK2 (the computation of the additional second term in Equation (9)). In our implementation, however, this was less than 2x slower in wall-clock time (see Appendix E.5 for more details and a wall-clock comparison). Moreover, if not initialized in the neighborhood of a local NE, high-order solvers and gradient-based methods might also converge to a non-NE, as described in Mazumdar et al. (2019), although this is likely a rare case.
Comparison vs other game optimization algorithms. DAL has not previously been interpreted from a game perspective. Our interpretation allows us to bring recently proposed algorithms from the context of differentiable games (Zhang & Yu, 2020; Azizian et al., 2020) to DAL. Classic examples are the Extra-Gradient (EG) method (Korpelevich, 1976) and Consensus Optimization (CO) (Mescheder et al., 2017). In Appendix B.2 we analyze the continuous dynamics of the EG method, and show that
we cannot take the learning rate of EG to be large either. Thus, we obtain a conclusion similar to Corollary 1: in practice, for DAL, stability for EG comes at the price of slow convergence due to the use of small learning rates. We show this experimentally in Figure 3. With respect to CO, we show in Appendix C that this algorithm can be interpreted in the limit as an approximation of the RK2 solver. In practice, if its additional hyper-parameter (γ) is tuned thoroughly, CO may approximate the continuous dynamics better than GD and EG. We believe this may be the reason why CO slightly outperforms GD and EG (see Appendix E.4). In all cases, RK solvers outperform GD, EG and CO. This is in line with our theoretical analysis, since they better approximate the continuous dynamics (Hairer et al., 2006). It is worth noting that many other optimizers have recently been proposed in the context of games, e.g., Gidel et al. (2019a); Hsieh et al. (2020); Lorraine et al. (2021a;b). Some of them are modifications of the EG method that we compared to, e.g., Extra-Adam (Gidel et al., 2019a) or double step-size EG (Hsieh et al., 2020). More practical modifications in terms of adaptive step sizes could also be applied on top of RK solvers, as done in Qin et al. (2020). A comparison of all existing game optimizers in DAL, and a better theoretical understanding of such modifications on RK solvers, are beyond the scope of this work. However, we believe this is an interesting and unexplored research direction that our game perspective on DAL enables.
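For reference, the EG update mentioned above takes an extrapolation step followed by a gradient step evaluated at the extrapolated point (a standard formulation, restated here for context rather than quoted from this paper):

w_{k+1/2} = w_k − η v(w_k),    w_{k+1} = w_k − η v(w_{k+1/2}).

Structurally this resembles Equation (9), except that RK2 averages v(w_k) and v(w_{k+1/2}) instead of using only the latter.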
5 RELATED WORK
To the best of our knowledge, DAL has not been previously analyzed from a game perspective. Moreover, the stability of the optimizer and the implications of introducing the GRL has not been analyzed either. Here, we compare our results with the general literature.
Gradient-Based Learning in Games. Ratliff et al. (2016) proposed a characterization of local Nash Equilibrium providing sufficient and necessary conditions for its existence. Mazumdar et al. (2020) proposed a general framework to analyze the limiting behavior of the gradient-play algorithms in games using tools from dynamical systems. Our work builds on top of this characterization but specializes them to the domain-adversarial problem. We propose a more stable learning algorithm that better approximates the gradient-play dynamics. Our resulting algorithm does not introduce explicit adjustments or modify the learning dynamics, nor does it require the computation of the several Hessian vector products or new hyperparameters. This is in contrast with general algorithms previously analyzed in the context of differentiable games (Azizian et al., 2020; Letcher et al., 2019).
Integration Methods and ML. Scieur et al. (2017) showed that accelerated optimization methods can be interpreted as integration schemes of the gradient flow equation. Zhang et al. (2018) showed that the use of high-order RK integrators can achieve acceleration on convex functions. In the context of two-player games (i.e., GANs), Gemp & Mahadevan (2018) consider using a second-order ODE integrator. More recently, Qin et al. (2020) proposed to combine RK solvers with regularization of the generator's gradient norm. Chen et al. (2018) interpreted the residual connections in modern networks as the Euler integration of a continuous system. In our case, we notice that the combination of GD with the GRL can be interpreted as the Euler discretization of the continuous gradient-play dynamics, which can forfeit asymptotic convergence guarantees. We then study the discretization step of popular ODE solvers and provide simple guarantees for stability. Moreover, our analysis is based on a novel three-player game interpretation of the domain-adaptation problem. This is also different from a single potential function or two-player games (i.e., GANs).
Two-Player Zero-Sum Games have recently received significant attention in the machine learning literature due to the popularity of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). For example, several algorithms have been proposed and analyzed (Mescheder et al., 2017; Mertikopoulos et al., 2019; Gidel et al., 2019a;b; Zhang & Yu, 2020; Hsieh et al., 2020), in both deterministic and stochastic scenarios. In our problem, we have a general three-player game resulting from a novel game interpretation of the domain-adversarial problem. It is worth noting that while Gidel et al. (2019a) focused on GANs, their convergence theory and methods for stochastic variational inequalities could also be applied to three-player games, and thus to DAL using our perspective.
6 EXPERIMENTAL RESULTS
We conduct an extensive experimental analysis. We compare with default optimizers used in domainadversarial training such as GD, GD with Nesterov Momentum (GD-NM) (as in Sutskever et al. (2013)) and Adam (Kingma & Ba, 2014). We also compare against recently proposed optimizers in the context of differentiable games such as EG (Korpelevich, 1976) and CO (Mescheder et al., 2017). We focus our experimental analysis on the original domain-adversarial framework of Ganin et al.
(2016) (DANN). However, in section 6.2, we also show the versatility and efficacy of our approach improving the performance of recently proposed SoTA DAL framework (e.g., f -DAL (Acuna et al., 2021) combined with Implicit Alignment (Jiang et al., 2020)).
6.1 EXPERIMENTAL ANALYSIS ON DIGITS
Implementation Details. Our first experimental analysis is based on the digits benchmark with models trained from scratch (i.e., with random initialization). This benchmark consists of two digit datasets, MNIST (CC BY-SA 3.0) and USPS (LeCun et al., 1998; Long et al., 2018), with two transfer tasks (M → U and U → M). We adopt the splits and evaluation protocol from Long et al. (2018) and follow their standard implementation details.
For GD-NM, we use the default momentum value (0.9). We follow the same approach for the additional hyper-parameters of Adam. Hyper-parameters such as the learning rate, the learning rate schedule and the adaptation coefficient (λ) are determined for all optimizers by running a dense grid search and selecting the best hyper-parameters on the transfer task M→U. As is usual in UDA, the selection criterion is the best transfer accuracy. The same parameters are then used for the other task (i.e., U→M). We use the same seed and identically initialize the network weights for all optimizers. This analysis is conducted in Jax (Bradbury et al., 2018) (see Appendix D).
Comparison vs optimizers used in DAL. Figure 2 (top) illustrates the training dynamics for the loss in the target domain and the transfer performance. As expected, our optimizer converges faster and achieves noticeable performance gains. A core idea of DAL is to learn domain-invariant representations; thus, we plot in Figure 2 (bottom) t-SNE (Van der Maaten & Hinton, 2008) visualizations of the last-layer features of the network. We show this over a sequence of epochs for GD with the GRL vs. RK2. A different color is used for the source and target datasets. In the comparison vs. Adam, we emphasize that Adam computes adaptive learning rates, which our method does not. That said, Figure 2 shows that our two methods, RK2 and RK4, outperform all baselines in terms of both convergence and transfer performance. In Figure 7, we show how unstable these standard optimizers are when more aggressive step sizes are used. This is in line with our theoretical analysis. Experimentally, it can be seen that in DAL, GD is more stable than GD-NM and Adam, with the latter being the most unstable. This sheds light on why well-tuned GD-NM is often preferred over Adam in DAL.
Comparison vs game optimization algorithms. We now compare RK solvers against other recently proposed game optimization algorithms. Specifically, we compare against the EG method (Korpelevich, 1976) and CO (Mescheder et al., 2017). In every case, we perform a dense grid search under the same budget for all optimizers and report the best selection (see Appendix E for details). In line with our theoretical analysis of the continuous dynamics of EG, we notice that the EG method is not able to train with learning rates larger than 0.006; as a result, it performs significantly worse than any other optimizer (including simple GD). Also in line with our theoretical analysis, CO performs better than EG and all other popular gradient algorithms used in DAL. This is because CO can be seen as an approximation of Heun's method (RK2). More details on this are in the supplementary.
Robustness to hyper-parameters. Figure 4 shows transfer performance of our method for different choices of hyper-parameters while highlighting (green line) the best score of the best performing GD hyperparameters on the same dataset. Our method is robust to a wide variety of hyperparameters.
Figure 4: Robustness to hyper-parameters. We compare the transfer performance of our method for different hyper-parameters in the task M→U on the Digits benchmark. The green line shows the best score for the best-performing hyper-parameters of GD. The blue star corresponds to the best solution. Our method performs well for a wide variety of hyper-parameters.
Figure 6: Transfer performance on Visda (DANN): Grad. Descent (Nesterov) vs. Ours (RK2).

Figure 7: Stability analysis on Digits: target task loss at the most aggressive step size before divergence (Grad. Descent LR 0.3, Nesterov momentum LR 0.1). Adam diverges for η > 0.001.
6.2 COMPARISON IN COMPLEX ADAPTATION TASKS

Table 1: Accuracy (DANN) on Visda 2017 with ResNet-50.

Method      Sim→Real
GD-NM       71.7 ± 0.7
Ours (RK2)  73.8 ± 0.3
We evaluate the performance of our algorithm with ResNet-50 (He et al., 2016) on more challenging adaptation benchmarks. Specifically, this analysis is conducted on the Visda-2017 benchmark (Peng et al., 2017). This is a simulation-to-real dataset with two different domains: (S) synthetic renderings of 3D models and (R) real images. For this experiment, we use PyTorch (Paszke et al., 2019); our evaluation protocol follows Zhang et al. (2019) and uses ResNet-50 as the backbone network. For the optimizer parameters, we thoroughly tune GD-NM, which is the optimizer used in this setting (Long et al., 2018; Zhang et al., 2019; Jiang et al., 2020; Acuna et al., 2021). For ours, we keep the same hyper-parameters but increase the learning rate to 0.2 and the batch size to 128. In this task, our approach corresponds to the improved Euler's method (RK2). Table 1 shows the comparison, and Figure 6 compares the training dynamics of our method vs. GD-NM. In Figure 5, we evaluate the sensitivity of our method (in terms of transfer performance) to the sampling noise controlled by the batch size.
Improving SoTA DAL frameworks. We use this complex visual adaptation task to showcase the applicability of our method to SoTA DAL frameworks. Specifically, we let the DA method be f-DAL Pearson, as in Acuna et al. (2021), with Implicit Alignment (Jiang et al., 2020). We use the tuned parameters and optimizer from Acuna et al. (2021); Jiang et al. (2020) as the baseline. In our case, we only increase the learning rate (to 0.2). Table 2 shows that our method achieves peak results (+3.5%) in 10.5K iterations (vs. 29.5K iterations for GD-NM).

Natural Language Processing Tasks. We also evaluate our approach on natural language processing tasks on the Amazon product reviews dataset (Blitzer et al., 2006). We observe noticeable gains by replacing GD with either RK2 or RK4. Results and details can be found in Appendix E.1.
7 CONCLUSIONS
We analyzed DAL from a game-theoretical perspective where optimality is defined as local NE. From this view, we showed that standard optimizers in DAL can violate the asymptotic guarantees of the gradient-play dynamics, requiring careful tuning and small learning rates. Based on our analysis, we proposed to replace existing optimizers with higher-order ODE solvers. We showed both theoretically and experimentally that these are more stable and allow for higher learning rates, leading to noticeable improvements in terms of the transfer performance and the number of training iterations. We showed that these ODE solvers can be used as a drop-in replacement and outperformed strong baselines.
Acknowledgements. We would like to thank James Lucas, Jonathan Lorraine, Tianshi Cao, Rafid Mahmood, Mark Brophy and the anonymous reviewers for feedback on earlier versions of this work.
SUPPLEMENTARY MATERIAL
CONTENTS

1 Introduction
2 Preliminaries
3 A Game Perspective on DAL
  3.1 Domain-Adversarial Game
  3.2 Characterization of the Domain-Adversarial Game
4 Learning Algorithms
  4.1 Continuous Gradient Dynamics
  4.2 Analysis of GD with the GRL
  4.3 Higher Order ODE Solvers
5 Related Work
6 Experimental Results
  6.1 Experimental Analysis on Digits
  6.2 Comparison in Complex Adaptation Tasks
7 Conclusions
A Concepts in Game Theory
  A.1 Definitions
  A.2 Games Characterizations
  A.3 Case Study in DANN: Original Formulation from Ganin et al. (2016)
B Derivation of High-Resolution ODEs
  B.1 High-Resolution ODE of the Second-Order Runge–Kutta Method
  B.2 Continuous Dynamics of Extra-Gradient (EG)
  B.3 High-Resolution ODE of the Classic Fourth-Order Runge–Kutta Method (RK4)
C Proofs and Additional Theoretical Results
  C.1 Proposed Learning Algorithm
  C.2 CO Approximates RK2 (Heun's Method)
D Experimental Setup Additional Details
E Additional Experiments
  E.1 Natural Language Processing Tasks
  E.2 Sensitivity to Sampling Noise
  E.3 Additional Comparison vs Game Optimization Algorithms
  E.4 CO vs Gradient Descent and Extra-Gradient Algorithms
  E.5 Wall-Clock Comparison
F PyTorch PseudoCode of RK2 Solver
A CONCEPTS IN GAME THEORY
A.1 DEFINITIONS
Definition 2. (Local Nash Equilibrium) A point (ω∗i, ω∗−i) ∈ Ω is said to be a local Nash Equilibrium of the domain-adversarial game if there exists some δ > 0 such that:

∀i ∈ {1, 2, 3}: Ji(ω∗i, ω∗−i) ≤ Ji(ωi, ω∗−i) for all ωi with ||ωi − ω∗i||2 < δ. (10)

Intuitively, this restricts the concept of NE to a local neighborhood B(ω∗, δ) := {ω : ||ω − ω∗||2 < δ} with δ > 0.
A more practical characterization of the NE can be given in terms of the Best Response Map of each player, which we now define.

Definition 3. (Best Response Map (BR)) The best response map BRi : Ω−i ⇒ Ωi of player i is defined as:

BRi(ω−i) := argmin_{ωi∈Ωi} Ji(ωi, ω−i), (11)

where the symbol ⇒ emphasizes that the best response map is in general a set-valued map and not a singleton; thus, it is not a function in general. In other words, there may be a subset of elements in Ωi for which Ji(·, ω−i) attains its minimum.
The notion of NE can be defined in terms of the generalized map BR : Ω ⇒ Ω. This can be thought of as a stacked vector whose i-th element is BRi(ω−i).

Proposition 5. A point ω∗ ∈ Ω is a NE of the game if it is a fixed point of the generalized map BR : Ω ⇒ Ω. That is,

ω∗ ∈ BR(ω∗) =⇒ ∀i ∈ {1, 2, 3}, ω∗i ∈ BRi(ω∗−i). (12)

Proof. This follows from the definitions of the BR map and NE.
Definition 4. (Asymptotically Stable) A point ω is said to be a locally asymptotically stable point of the continuous dynamics ω̇ = f(ω) if Re(λ) < 0 for all λ ∈ Sp(∇f(ω)), where Sp(∇f(ω)) is the spectrum of ∇f(ω).
Definition 4 is also known as the Hurwitz condition Khalil (2002).
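Numerically, Definition 4 amounts to inspecting the spectrum of the Jacobian of the dynamics. A minimal NumPy sketch (the helper name is ours, for illustration):

```python
import numpy as np

def is_asymptotically_stable(jac: np.ndarray) -> bool:
    # Hurwitz condition: every eigenvalue of the Jacobian of the
    # dynamics w' = f(w) at the stationary point has negative real part.
    return bool(np.all(np.linalg.eigvals(jac).real < 0))

# Example: w' = -w is stable, w' = w is not.
print(is_asymptotically_stable(np.array([[-1.0]])))  # True
print(is_asymptotically_stable(np.array([[1.0]])))   # False
```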
Definition 5. A stationary point x∗ of a C² function φ : ℝⁿ → ℝ is said to be a strict saddle point if:

• λmin(∇²xxφ(x∗)) < 0, and
• λ > 0 for every other λ ∈ Sp(∇²xxφ(x∗)).
A.2 GAMES CHARACTERIZATIONS
Potential Games. Potential Games were introduced in Monderer & Shapley (1996) and can be defined as a type of game for which there exists an implicit potential function φ : Ω → R such that ∇φ(ω) = v(ω). Consequently, a necessary and sufficient condition for the game to be potential is the Jacobian of the vector field ∇v(ω) being symmetric (see 3.3 in Mazumdar et al. (2020) and Monderer & Shapley (1996)).
Purely Adversarial Games. This particular type of game refers to the other extreme in which H(ω) is a non-symmetric matrix with purely imaginary eigenvalues. If the game Hessian is skew-symmetric these have also been called Hamiltonian Games Letcher et al. (2019).
A.3 CASE STUDY IN DANN: ORIGINAL FORMULATION FROM GANIN ET AL. (2016)
As mentioned in the main text (Section 2), our analysis is compatible with both the original and the more recent formulations of domain-adversarial training, such as Zhang et al. (2019); Acuna et al. (2021). In this section, we specifically derive additional results for DANN (Ganin et al., 2016).
In order to obtain the original formulation of DANN, let us define ℓ̂(·, b) := log(σ(b)) and φ∗(t) := −log(1 − e^t) in Equation (2). This corresponds to the Jensen–Shannon divergence (JS) (up to a constant shift that does not affect optimization). We can then rewrite ds,t as:

ds,t = E_{x∼ps} log(σ ◦ ĥ′ ◦ g(x)) + E_{x∼pt} log(1 − σ ◦ ĥ′ ◦ g(x)), (13)

where σ(x) := 1/(1 + e^{−x}). To simplify the notation, we write H := σ ◦ H.
We can now re-define the pseudo-gradient v(ω) of the game as the gradient of each player's loss with respect to its own parameters. Letting α = 1, we obtain from Equation (4):

v(ω) := (∇ω1ℓ, ∇ω2(ℓ + λds,t), −∇ω3ds,t) ∈ ℝᵈ. (14)
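To make Equation (14) concrete, the pseudo-gradient can be assembled with automatic differentiation, one player at a time. A hedged PyTorch sketch (the function name and argument grouping are our own; the two scalar losses are assumed to be computed on the same mini-batch with the graph kept alive):

```python
import torch

def pseudo_gradient(task_loss, d_st, w1, w2, w3, lam=1.0):
    """Assemble v(w) of Eq. (14) for the three players.
    task_loss: scalar source risk; d_st: scalar divergence estimate;
    w1, w2, w3: tuples of parameters of the classifier, the feature
    extractor, and the domain classifier, respectively."""
    g1 = torch.autograd.grad(task_loss, w1, retain_graph=True)
    g2 = torch.autograd.grad(task_loss + lam * d_st, w2, retain_graph=True)
    g3 = tuple(-g for g in torch.autograd.grad(d_st, w3))  # player 3 ascends
    return g1, g2, g3
```

Note that this three-player assembly needs no gradient reversal layer: the sign flip is explicit in the third component.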
The following propositions characterize local NE in terms of the pseudo-gradient v(ω) and its Jacobian H(ω).

Proposition 6. (Local NE) Suppose v(ω) = 0 and

[ ∇²ω1ℓ       ∇²ω1,ω2ℓ            ]
[ ∇²ω1,ω2ℓ    ∇²ω2(ℓ + λds,t)     ]  ≻ 0,    ∇²ω3ds,t ≺ 0, (15)

then ω is an isolated local NE.

The proof is simple and follows from Propositions 1 and 2, the definition of the vector field v(ω), and the condition H + H⊤ ≻ 0.
Cooperation with Competition. By examining the matrix H(ω), one can see that, in our scenario, the game is neither a potential game nor a purely adversarial game. However, we can write the vector field in the following form:

v(ω) = (∇ω1ℓ, ∇ω2ℓ, 0) + (0, λ∇ω2dst, −∇ω3dst) =: ∇φ(ω) + v̂(ω), (16)

where the first part corresponds to the gradient of the potential function φ(ω) = ℓ(ω1, ω2). The second part, on the other hand, corresponds to a function v̂(ω) whose Jacobian is a non-symmetric matrix. Analyzing them separately leads to either a potential or an adversarial game, respectively. We define this particular type of game as cooperation (i.e., the potential term) with competition (i.e., the adversarial term).
It is worth noting that, while the spectrum of the game Hessian for the first term has only real eigenvalues, the second term can have complex eigenvalues with a large imaginary component. Indeed, it can be shown that this second term approximates the one obtained for a GAN using the non-saturating loss proposed by Goodfellow et al. (2014) (e.g., λ = 1). In other words, the second term can be written as the pseudo-gradient of the two-player zero-sum game min_{ω2} max_{ω3} dst. Building on this key observation and the work of Mescheder et al. (2017); Berard et al. (2020) (Figure 4), where it was experimentally shown that the eigenvalues of the game Hessian for GANs indeed have a large imaginary component around stationary points, we can assume that the spectrum of the game Hessian in our case also has eigenvalues with a large imaginary component around the stationary points. This observation can also be used with Corollary 1 to further motivate the use of higher-order ODE solvers instead of GD with the GRL.

Example 3. Consider the three-player game of Equation (16) where ℓ(w1, w2) = w1² + 2w1w2 + w2², λ = 1 and ds,t(w2, w3) = w2² + 99w2w3 − w3². The gradient-play dynamics ẇ = −v(w) become ẇ = Aw with

A = [−2 −2 0; −2 −4 −99; 0 99 −2].

The eigenvalues of A are −2 and −3 ± 2i√2449. From Corollary 1, η should satisfy 0 < η < 6.2 · 10⁻³.
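A quick numerical check of Example 3 (a NumPy sketch; the learning rate, step count and initial point are arbitrary choices of ours): at this η, the Euler discretization (i.e., GD with the GRL) of these stable dynamics sits outside its stability region and diverges, while the RK2 discretization of Equation (9) still contracts.

```python
import numpy as np

# Gradient-play dynamics of Example 3: w' = -v(w) = A w.
A = np.array([[-2.,  -2.,   0.],
              [-2.,  -4., -99.],
              [ 0.,  99.,  -2.]])
print(np.round(np.linalg.eigvals(A), 3))  # -2 and -3 +/- 2i*sqrt(2449): stable ODE

v = lambda w: -A @ w                                               # pseudo-gradient
euler = lambda w, eta: w - eta * v(w)                              # GD with the GRL
rk2   = lambda w, eta: w - 0.5 * eta * (v(w) + v(w - eta * v(w)))  # Equation (9)

eta, w_gd, w_rk = 2e-3, np.ones(3), np.ones(3)
for _ in range(5000):
    w_gd, w_rk = euler(w_gd, eta), rk2(w_rk, eta)
print(np.linalg.norm(w_gd), np.linalg.norm(w_rk))  # GD blows up; RK2 decays
```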
Is the three-player game formulation desired? In domain adaptation, optimization is a means to an end. The final goal is to minimize the upper bound from Theorem 1 to ensure better performance in the target domain. One might then wonder whether interpreting optimality in terms of NE is desirable. In our context, NE means finding the optimal g∗, ĥ∗ and ĥ′∗ of the cost functions defined in Equation (4). This in turn leads to minimizing the upper bound in Theorem 1.

Remark on sequential games: Recently, Jin et al. (2020) introduced a notion of local min-max optimality for two-player games, exploiting the sequential nature of some problems in adversarial ML (i.e., GANs). In domain-adversarial learning, updates are usually performed in a simultaneous manner using the GRL. Thus, we focus here on the general case where the order of the players is not known.
B DERIVATION OF HIGH-RESOLUTION ODES
Lemma 2. The high-resolution ODE resulting from the GD algorithm with the GRL is:

ẇ = −v(w) − (η/2)∇v(w)v(w) + O(η²). (17)

Proof. This follows from Corollary 1 of Lu (2020).
B.1 HIGH-RESOLUTION ODE OF SECOND-ORDER RUNGE–KUTTA METHOD
The high-resolution ODE was discussed in Shi et al. (2018); Lu (2020). For discrete algorithms with the following update:

w⁺ = w + f(η, w), (18)

we can think of the trajectory as a discretization of the continuous dynamics w : [0, +∞) → ℝᵈ, where in Equation (18) we have w = w(t) and w⁺ = w(t + η). Here, with a slight abuse of notation, we also use w for the continuous function of the dynamics.

We derive the high-resolution ODE of the second-order Runge–Kutta method:

w_{k+1/2} = w_k − (η/2α) v(w_k),    w_{k+1} = w_k − η((1 − α)v(w_k) + αv(w_{k+1/2})),

where 0 < α ≤ 1 is a constant. If α = 1/2, we obtain Heun's method; if α = 1, we obtain the midpoint method; if α = 2/3, we obtain Ralston's method. Combining the two equations, we have:

(w_{k+1} − w_k)/η = −(1 − α)v(w_k) − αv(w_k − (η/2α)v(w_k)). (19)

Using the Taylor expansion:

v(w_k − (η/2α)v(w_k)) = v(w_k) − (η/2α)∇v(w_k)⊤v(w_k) + O(η²).

Plugging this back into Equation (19) and using the Taylor expansion w_{k+1} = w_k + ηẇ + η²ẅ/2, we have:

ẇ + (η/2)ẅ = −v(w) + (η/2)∇v(w)⊤v(w) + O(η²). (20)

Now assume that the high-resolution ODE has the form:

ẇ = f₀(w) + ηf₁(w) + O(η²). (21)

Taking the derivative over t, we have:

ẅ = ∇f₀(w)f₀(w) + O(η). (22)

Combining Equation (20), Equation (21) and Equation (22), we obtain:

f₀(w) = −v(w),    f₁(w) = 0, (23)

i.e., the high-resolution ODE of the second-order Runge–Kutta method is:

ẇ = −v(w) + O(η²). (24)
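The difference between Equations (17) and (24) can also be checked empirically: on a linear system, halving η roughly halves the global error of GD but quarters that of RK2. A small NumPy/SciPy sketch (the test matrix is a hypothetical choice of ours with Re(Sp(A)) > 0, so that ẇ = −v(w) = −Aw is stable):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2., 1.],
              [-1., 2.]])      # hypothetical v(w) = A w, Re(eig(A)) > 0
w0, T = np.array([1., 1.]), 1.0
exact = expm(-A * T) @ w0      # solution of w' = -A w at time T

for n in (25, 50, 100, 200):
    eta = T / n
    w_e, w_r = w0.copy(), w0.copy()
    for _ in range(n):
        w_e = w_e - eta * (A @ w_e)                                      # Euler / GD
        w_r = w_r - 0.5 * eta * (A @ w_r + A @ (w_r - eta * (A @ w_r)))  # RK2, Eq. (9)
    print(n, np.linalg.norm(w_e - exact), np.linalg.norm(w_r - exact))
# Euler's error shrinks like eta (order 1); RK2's like eta^2 (order 2).
```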
B.2 CONTINUOUS DYNAMICS OF EXTRA-GRADIENT (EG)
The continuous dynamics of Gradient Descent Ascent (GDA), Extra-Gradient (EG) and Heun's method can be summarized as follows:

ẇ = v(w) + α∇v(w)v(w).

For GDA, we have α = −η/2; for EG, we have α = η/2 (Lu, 2020); for Heun's method, ẇ = v(w) + O(η²). The Jacobian of the dynamics at a stationary point is ∇v(w) + α∇v(w)². Take λ = a + ib ∈ Sp(∇v(w)). The corresponding eigenvalue of the Jacobian of the dynamics is:

α(a + ib)² + a + ib = a + α(a² − b²) + i(b + 2αab). (25)

We want the real part to be negative, i.e.:

a + α(a² − b²) < 0, (26)

and thus:

a(1 + αa) < αb². (27)

For EG, α = η/2 and the dynamics diverge if a(1 + (η/2)a) ≥ ηb²/2. When η is large and η(a² − b²)/2 ≥ −a, the dynamics diverge. However, the high-resolution ODE of second-order Runge–Kutta methods only requires a < 0.
B.3 HIGH-RESOLUTION ODE OF CLASSIC FOURTH-ORDER RUNGE–KUTTA METHOD (RK4)
In this subsection, we derive the high-resolution ODE of the classic fourth-order Runge–Kutta method. We prove the following result:

Theorem 3. The high-resolution ODE of the classic fourth-order Runge–Kutta method (RK4):

w⁺ = w − (η/6)(v(w) + 2v₂(w) + 2v₃(w) + v₄(w)), (28)

where

v₂(w) = v(w − (η/2)v(w)),    v₃(w) = v(w − (η/2)v₂(w)),    v₄(w) = v(w − ηv₃(w)), (29)

is

ẇ = −v(w) + O(η⁴). (30)
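Before the proof, for concreteness, the update (28)–(29) reads as follows in code (a generic Python sketch; v is any callable returning the pseudo-gradient, and w may be a NumPy array or a torch tensor):

```python
def rk4_step(w, v, eta):
    # Classic RK4 discretization of w' = -v(w), Equations (28)-(29).
    k1 = v(w)
    k2 = v(w - 0.5 * eta * k1)   # v2(w)
    k3 = v(w - 0.5 * eta * k2)   # v3(w)
    k4 = v(w - eta * k3)         # v4(w)
    return w - (eta / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```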
Proof. We use the following Taylor expansion:

v(w + δ) = v(w) + ∇v(w)δ + (1/2)∇²v(w)(δ, δ) + (1/6)∇³v(w)(δ, δ, δ) + O(‖δ‖⁴), (31)

where ∇²v(w) : ℝᵈ × ℝᵈ → ℝᵈ is a symmetric bilinear form and ∇³v(w) : ℝᵈ × ℝᵈ × ℝᵈ → ℝᵈ is a symmetric trilinear form. With this formula we have:

v₄(w) = v(w) − η∇v(w)v₃(w) + (η²/2)∇²v(w)(v₃(w), v₃(w)) − (η³/6)∇³v(w)(v₃(w), v₃(w), v₃(w)) + O(η⁴), (32)

v₃(w) = v(w) − (η/2)∇v(w)v₂(w) + (η²/8)∇²v(w)(v₂(w), v₂(w)) − (η³/48)∇³v(w)(v₂(w), v₂(w), v₂(w)) + O(η⁴), (33)

v₂(w) = v(w) − (η/2)∇v(w)v(w) + (η²/8)∇²v(w)(v(w), v(w)) − (η³/48)∇³v(w)(v(w), v(w), v(w)) + O(η⁴). (34)

Putting them together we have:

v₄(w) + 2v₃(w) + 2v₂(w) + v(w) = 6v(w) − η∇v(w)(v₃(w) + v₂(w) + v(w)) + (η²/2)(∇²v(w)(v₃(w), v₃(w)) + (1/2)∇²v(w)(v₂(w), v₂(w)) + (1/2)∇²v(w)(v(w), v(w))) − (η³/4)∇³v(w)(v(w), v(w), v(w)) + O(η⁴), (35)

v₃(w) + v₂(w) + v(w) = 3v(w) − (η/2)∇v(w)(v₂(w) + v(w)) + (η²/4)∇²v(w)(v(w), v(w)) + O(η³), (36)

v₂(w) + v(w) = 2v(w) − (η/2)∇v(w)v(w) + O(η²). (37)

Bringing Equation (37) into Equation (36), we obtain:

v₃(w) + v₂(w) + v(w) = 3v(w) − η∇v(w)v(w) + (η²/4)∇²v(w)(v(w), v(w)) + (η²/4)(∇v(w))²v(w) + O(η³). (38)
Putting Equation (38) into Equation (35) and matching powers of η as in Appendix B.1 shows that all correction terms up to order η³ vanish, yielding ẇ = −v(w) + O(η⁴) and completing the proof.

1. What is the main contribution of the paper in the context of unsupervised domain adaptation?
2. What is the significance of the game-theoretical formulation proposed in the paper?
3. What are the strengths and weaknesses of the paper regarding its experimental results and practical implications?
4. How does the reviewer assess the novelty and technical aspects of the paper's content?

Summary Of The Paper
The setting in the paper is the classic unsupervised domain adaptation problem, where we are given a labeled sample from a source distribution and an unlabeled sample from a target distribution. The goal is to minimize the risk on the target distribution. Theoretical results led to a breakthrough in practice - the Domain Adversarial Learning architecture (Ganin et al., 2016).
The paper suggests looking at this problem from a game-theoretic perspective. This is natural, as the objective is to minimize the loss on the source distribution while maximizing the distinction between the distributions. The optimal solutions of the game are characterized by local NE.
Motivated by the results from game theory, the authors suggest replacing Gradient Descent (due to its limitations in this optimization problem) with other optimizers: ODE (ordinary differential equation) solvers.
Review
Strengths:
The game-theoretical formulation is natural for this important problem.
The experiments seem to support the asymptotic convergence guarantees of the optimizer.
The empirical results look somewhat significant.
The practical significance of this work is in showing that using high-order solvers instead of Gradient Descent (GD) in this setting leads to better results.

Weaknesses: The technical novelty is marginal, and the approach is quite straightforward.
ICLR | Title
Domain Adversarial Training: A Game Perspective
Abstract
The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game theoretical perspective. Defining optimal solutions in domain-adversarial training as local Nash equilibria, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (i.e., Runge–Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement with less than half of training iterations. Our optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework.
1 INTRODUCTION
Unsupervised domain adaptation (UDA) deals with the lack of labeled data in a target domain by transferring knowledge from a labeled source domain (i.e., a related dataset with a different distribution where abundant labeled data already exists). The paramount importance of this paradigm has led to remarkable advances in the field in terms of both theory and algorithms (Ben-David et al., 2007; 2010a;b; Mansour et al., 2009). Several state-of-the-art algorithms tackle UDA by learning domain-invariant representations in an adversarial fashion (Shu et al., 2018; Long et al., 2018; Saito et al., 2018; Hoffman et al., 2018; Zhang et al., 2019; Acuna et al., 2021). Their goal is to fool an auxiliary classifier that operates in a representation space and aims to classify whether the datapoint belongs to either the source or the target domain. This idea, called Domain-Adversarial Learning (DAL), was introduced by Ganin et al. (2016) and can be more formally understood as minimizing the discrepancy between source and target domain in a representation space (Acuna et al., 2021).
Despite DAL being a dominant approach for UDA, alternative solutions have been sought as DAL is noticeably unstable and difficult to train in practice (Sener et al., 2016; Sun et al., 2019; Chang et al., 2019). One major cause of instability is the adversarial nature of the learning algorithm, which results from the introduction of the Gradient Reversal Layer (GRL, Ganin et al., 2016) (Figure 1). GRL flips the sign of the gradient during the backward pass, which has profound implications on the training dynamics and asymptotic behavior of the learning algorithm. Indeed, GRL transforms gradient descent into a competitive gradient-based algorithm which may converge to periodic orbits and other non-trivial limiting behavior that arises, for instance, in chaotic systems (Mazumdar et al., 2020). Surprisingly, little attention has been paid to this fact, and specifically to the adversarial component and the interaction among the three different networks in the algorithm. In particular, three fundamental questions have not been answered from an algorithmic point of view: 1) What is optimality in DAL? 2) What makes DAL difficult to train? and 3) How can we mitigate this problem?
In this work, we aim to answer these questions by interpreting the DAL framework through the lens of game theory. Specifically, we use tools developed by the game theoretical community in Başar & Olsder (1998); Letcher et al. (2019); Mazumdar et al. (2020) and draw inspiration from the existing two-player zero-sum game interpretations of Generative Adversarial Networks (GANs)
(Goodfellow et al., 2014). We emphasize that in DAL, however, we have three rather than two networks interacting with each other, with partial cooperation and competition. We propose a natural three-player game interpretation for DAL, which is not necessarily equivalent to two-player zero-sum game interpretations (see Example 1), and which we coin the Domain-Adversarial Game. We also propose to interpret and characterize optimal solutions in DAL as local Nash Equilibria (see Section 3). This characterization introduces a proper mathematical definition of algorithmic optimality for DAL. It also provides sufficient conditions for optimality that drive the algorithmic analysis.
With our proposed game perspective in mind, a simple optimization solution would be to use the Gradient Descent (GD) algorithm, which is the de facto solution but known to be unstable. Alternatively, we could also use other popular gradient based optimizers proposed in the context of differentiable games (e.g. Korpelevich, 1976; Mescheder et al., 2017). However, we notice that these do not outperform GD in practice (see § 6). To understand why, we analyze the asymptotic behavior of gradient-based algorithms in the proposed domain-adversarial game (§ 4). The main result of § 4.2 (Theorem 2) shows that GD with GRL (i.e., the existing solution for DAL) violates the asymptotic convergence guarantees to local NE unless an upper bound is placed on the learning rate, which may explain its training instability and sensitivity to optimizer parameters. In § 4.3, Appendix B.2 and Appendix E, we also provide a similar analysis for the popular game optimization algorithms mentioned above. We emphasize however that while some of our results may be of independent interest for learning in general games, our focus is DAL. § 4.3 and § 6 show both theoretically and experimentally that the limitations mentioned above disappear if standard optimizers are replaced with ODE solvers of at least second order. These are straightforward to implement as drop-in replacements to existing optimizers. They also lead to more stable algorithms, allow for more aggressive learning rates and provide notable performance gains.
2 PRELIMINARIES
Figure 1: We study domainadversarial training from a game perspective. In DAL (Ganin et al. (2016)), three networks interact with each other: the feature extractor (g), the domain classifier (ĥ′) and the classifier (ĥ). During backpropagation, the GRL flips the sign of the gradient with respect to g.
We focus on the UDA scenario and follow the formulation from Acuna et al. (2021). This makes our analysis general and applicable to most state-of-the-art DAL algorithms (e.g., Ganin et al. (2016); Saito et al. (2018); Zhang et al. (2019)). We assume that the learner has access to a source dataset (S) with labeled examples and a target dataset (T) with unlabeled examples, where the source inputs x_i^s are sampled i.i.d. from a (source) distribution Ps and the target inputs x_i^t are sampled i.i.d. from a (target) distribution Pt, both over X. We have Y = {0, 1} for binary classification, and Y = {1, ..., k} in the multiclass case. The risk of a hypothesis h : X → Y w.r.t. the labeling function f, using a loss function ℓ : Y × Y → ℝ₊, under distribution D is defined as R_D^ℓ(h, f) := E_D[ℓ(h(x), f(x))]. For simplicity, we define R_S^ℓ(h) := R_{Ps}^ℓ(h, fs) and R_T^ℓ(h) := R_{Pt}^ℓ(h, ft). The hypothesis class of h is denoted by H.
UDA aims to minimize the risk in the target domain while only having access to labeled data in the source domain. This risk is upper bounded in terms of the risk in the source domain, the discrepancy between the two distributions, and the joint hypothesis error λ∗:

Theorem 1. (Acuna et al. (2021)) Let ℓ : Y × Y → [0, 1], λ∗ := min_{h∈H} R_S^ℓ(h) + R_T^ℓ(h), and D_{h,H}^φ(Ps||Pt) := sup_{h′∈H} |E_{x∼Ps}[ℓ(h(x), h′(x))] − E_{x∼Pt}[φ∗(ℓ(h(x), h′(x)))]|. We have:

R_T^ℓ(h) ≤ R_S^ℓ(h) + D_{h,H}^φ(Ps||Pt) + λ∗. (1)
The function φ : R+ → R defines a particular f -divergence and φ∗ is its (Fenchel) conjugate. As is typical in UDA, we assume that the hypothesis class is complex enough and both fs and ft are similar in such a way that the non-estimable term (λ∗) is negligible and can be ignored.
Domain-Adversarial Training (see Figure 1) aims to find a hypothesis h ∈ H that jointly minimizes the first two terms of Theorem 1. To this end, the hypothesis h is interpreted as the composition of h = ĥ ◦ g with g : X → Z , and ĥ : Z → Y . Another function class Ĥ is then defined to formulate H := {ĥ ◦ g : ĥ ∈ Ĥ, g ∈ G}. The algorithm tries to find the function g ∈ G such that ĥ ◦ g minimizes the risk of the source domain (i.e. the first term in Theorem 1), and its composition with ĥ and ĥ′ minimizes the divergence of the two distributions (i.e. the second term in Theorem 1).
Algorithmically, the computation of the divergence function in Theorem 1 is estimated by a so-called domain classifier ĥ′ ∈ Ĥ whose role is to detect whether the datapoint g(xi) ∈ Z belongs to the source or to the target domain. When there does not exist a function ĥ′ ∈ Ĥ that can properly distinguish between g(xsi ) and g(x t i), g is said to be invariant to the domains.
Learning is performed using GD and the GRL (denoted by Rλ) on the following objective:
min_{ĥ∈Ĥ, g∈G, ĥ′∈Ĥ}  E_{x∼ps}[ℓ(ĥ ◦ g, y)] − α ds,t(ĥ, ĥ′, Rλ(g)), (2)

where ds,t(ĥ, ĥ′, g) := E_{x∼ps}[ℓ̂(ĥ′ ◦ g, ĥ ◦ g)] − E_{x∼pt}[(φ∗ ◦ ℓ̂)(ĥ′ ◦ g, ĥ ◦ g)]. Mathematically, the GRL Rλ is treated as a “pseudo-function” defined by two (incompatible) equations describing its forward and back-propagation behavior (Ganin & Lempitsky, 2015; Ganin et al., 2016). Specifically,

Rλ(x) := x   and   dRλ(x)/dx := −λ, (3)

where λ and α are hyper-parameters that control the tradeoff between achieving small source error and learning an invariant representation. The surrogate loss ℓ : Y × Y → ℝ (e.g., cross-entropy) is used to minimize the empirical risk in the source domain. The choice of the function ℓ̂ : Y × Y → ℝ and of the conjugate φ∗ of the f-divergence defines the particular algorithm (Ganin et al., 2016; Saito et al., 2018; Zhang et al., 2019; Acuna et al., 2021). From eq. 2, we can notice that the GRL introduces an adversarial scheme. We next interpret eq. 2 as a three-player game where the players are ĥ, ĥ′ and g, and study its continuous gradient dynamics.
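Equation (3) is commonly realized as a custom autograd function whose forward pass is the identity and whose backward pass multiplies the incoming gradient by −λ. A PyTorch sketch (the class name is ours):

```python
import torch

class GradientReversal(torch.autograd.Function):
    """R_lambda of Eq. (3): identity forward, gradient scaled by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient w.r.t. lam

# usage: reversed_features = GradientReversal.apply(g(x), lam)
```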
3 A GAME PERSPECTIVE ON DAL
We now interpret DAL from a game-theoretical perspective. In § 3.1, we rewrite the DAL objective as a three-player game. In this view, each of the feature extractor and two classifiers is a player. This allows us to define optimality in terms of local Nash Equilibrium (see Def. 2 in Appendices). In § 3.2, we introduce the vector field, the game Hessian and the tools that allow us to characterize local NE for the players. This characterization leads to our analysis of the continuous dynamics in § 4.
3.1 DOMAIN-ADVERSARIAL GAME
We now rewrite and analyze the DAL problem in eq. 2 as a three-player game. Let Ĥ, Ĥ′ and G be classes of neural network functions, and define ω1 ∈ Ω1 and ω3 ∈ Ω3 as the vectors of parameters of the classifier and domain-classifier networks ĥ ∈ Ĥ and ĥ′ ∈ Ĥ, respectively. Similarly, let ω2 ∈ Ω2 be the parameters of the feature extractor network g ∈ G. Their joint domain is denoted by Ω = Ω1 × Ω2 × Ω3 and the joint parameter set is ω = (ω1, ω2, ω3). Let each neural network be a player and its parameter choice be its individual strategy (here continuous). The goal of each player is then to selfishly minimize its own cost function Ji : Ω → ℝ. We use the subscript −i to refer to all parameters/players but i. With the notation introduced, we can now formally define the Domain-Adversarial Game as the three-player game G(I, Ωi, Ji) where I := {1, 2, 3}, dim(Ω) = Σ_{i=1}^{3} dim(Ωi) = d, Ωi ⊆ ℝ^{di} and:

J1(ω1, ω−1) := ℓ(ω1, ω2) + α ds,t(ω),
J2(ω2, ω−2) := ℓ(ω1, ω2) + αλ ds,t(ω),
J3(ω3, ω−3) := −α ds,t(ω). (4)
We use the shorthand ℓ(ω1, ω2) for E_{x,y∼ps}[ℓ(ω1 ◦ ω2(x), y)], and the ωi's refer to the feature extractor g and the classifiers (ĥ and ĥ′). Similar notation follows for ds,t. Here, we assume that each Ji is smooth in each of its arguments ωi ∈ Ωi. The gradient field of Equation (2) and the game's vector field (see § 3.2) are equivalent, making the original interpretation of DAL and our three-player formulation equivalent. However, it is worth noting that our interpretation does not explicitly require the use of Rλ in ds,t in Equation (4). We can write optimality conditions of the above problem through the concept of Nash Equilibrium:

Definition 1. (Nash Equilibrium (NE)) A point ω∗ ∈ Ω is said to be a Nash Equilibrium of the Domain-Adversarial Game if ∀i ∈ {1, 2, 3}, ∀ωi ∈ Ωi, we have: Ji(ω∗i, ω∗−i) ≤ Ji(ωi, ω∗−i).

In our scenario, the losses are not convex/concave. A NE then does not necessarily exist and, in general, finding a NE is analogous to, but much harder than, finding global minima in neural networks, which is unrealistic using gradient-based methods (Letcher et al., 2019). Thus, we focus on the local NE, which relaxes the NE to a local neighborhood B(ω∗, δ) := {||ω − ω∗|| < δ} with δ > 0 (see Definition 2). Intuitively, a NE means that no player has an incentive to change its own strategy (here, the parameters of the neural network) because doing so will not generate any additional payoff (here, it will not further minimize its cost function). We emphasize that each player only has access to its own strategy set. In other words, player J1 cannot change the parameters ω2, ω3; it only has access to ω1 ∈ Ω1. While the motivation of the three-player game follows naturally from the original formulation of DAL, where three networks interact with each other (see Figure 1), the optimization problem (2) could also be interpreted as the minimax objective of a two-player zero-sum game. Thus, a natural question arises: can we interpret the domain-adversarial game as a two-player zero-sum game? This can be done, for example, by defining ω∗12 := (ω∗1, ω∗2) and considering the costs of the two players (ω12, ω3) as J12 = J and J3 = −J, where J(ω12, ω3) := E_{ps}[ℓ(ω1, ω2)] + ds,t(ω). In general, however, the solution of the two-player game (ω∗12, ω∗3) is not equal to the NE solution of the three-player game (ω∗1, ω∗2, ω∗3). This is because the team optimal solution ω∗12 ≠ (ω∗1, ω∗2) in general. We illustrate this in the following counterexample (see Başar & Olsder (1998) for more details), and verify it numerically in the short check below:

Example 1. Let J(ω) := ½(ω1² + 4ω1ω2 + ω2² − ω3²). (a) Consider the three-player game ω = (ω1, ω2, ω3) with J1 = J2 = J and J3 = −J. Each Ji is strictly convex in ωi. The NE solution of the game, ω∗ = (0, 0, 0), is unique. (b) Consider the two-player game ω = (ω12, ω3) with J12 = J and J3 = −J. The solution ω∗ from (a) is not a NE solution. To see this, let ω̂ := (−1, 1, 0). We have J12(ω̂) = −1 < J12(ω∗) = 0, which contradicts Definition 1. One can verify that there is no NE in this two-player scenario.
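The short check promised above (a three-line NumPy verification of the points used in Example 1):

```python
import numpy as np

# J from Example 1: J(w) = 0.5 * (w1^2 + 4*w1*w2 + w2^2 - w3^2)
J = lambda w: 0.5 * (w[0]**2 + 4 * w[0] * w[1] + w[1]**2 - w[2]**2)
print(J(np.zeros(3)))              # 0.0 at the three-player NE (0, 0, 0)
print(J(np.array([-1., 1., 0.])))  # -1.0 < 0, so (0, 0) is not optimal for w12
```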
3.2 CHARACTERIZATION OF THE DOMAIN-ADVERSARIAL GAME
We now introduce the game's vector field (also called the pseudo-gradient) and the pseudo-gradient's Jacobian. We also provide a characterization of local NE based on them (see § 3). These are the core concepts used in our analysis (§ 4). We first define the game's vector field v(ω) and its Jacobian H(ω) (also called the game Hessian (Letcher et al., 2019)):

v(ω) := (∇ω1J1, ∇ω2J2, ∇ω3J3) ∈ ℝᵈ,    H(ω) := ∇v(ω) ∈ ℝ^{d×d}. (5)

Note that the vector field v(ω) and the three-player formulation naturally capture the behavior introduced by the GRL in the original formulation. Specifically, v(ω) is identical to the gradient with respect to the parameters of the original DAL objective with the GRL (Equation (2)). Therefore, in both cases the behavior of GD is identical: assuming the same initial conditions, they will reach the same solution. This shows the equivalence between our perspective and the original DAL formulation. We emphasize that by equivalence, we mean the same dynamics, and the same intermediate and final solutions. Another fact worth emphasizing is that H(ω) is asymmetric. This is in contrast with the Hessian in supervised learning. Before proceeding with a characterization of local NE in terms of v(ω) and H(ω), we first state necessary and sufficient conditions for local NE:

Proposition 1. (Necessary condition) Suppose each Ji is twice continuously differentiable in each ωi. Any local NE ω∗ satisfies: i) ∇ωiJi(ω∗) = 0 and ii) ∀i ∈ {1, 2, 3}, ∇²ωi,ωiJi(ω∗) ⪰ 0.

Proposition 2. (Sufficient condition) Suppose each Ji is twice continuously differentiable in each ωi. ω∗ is a local NE if i) ∇ωiJi(ω∗) = 0 and ii) ∀i, ∇²ωi,ωiJi(ω∗) ≻ 0.

The necessary and sufficient conditions of Propositions 1 and 2 are reminiscent of the conditions for local optimality in continuous optimization (Nocedal & Wright, 2006). Similar conditions were also proposed in Ratliff et al. (2016), where the sufficient condition defines the differential Nash equilibrium. We can now characterize a local NE in terms of v(ω) and H(ω):

Proposition 3. (Strict Local NE) ω is a strict local NE if v(ω) = 0 and H(ω) + H(ω)⊤ ≻ 0.

The sufficient condition implies that the NE is structurally stable (Ratliff et al., 2016). Structural stability is important as it implies that slightly biased estimators of the gradient (e.g., due to sampling noise) will not have vastly different behaviors in neighborhoods of equilibria (Mazumdar et al., 2020). In the following, we focus on the strict local NE (i.e., ω∗ for which Proposition 3 is satisfied).
4 LEARNING ALGORITHMS
We defined optimality as the local NE and provided sufficient conditions in terms of the pseudo-gradient and its Jacobian. In this section, we assume the existence of the strict local NE (Prop. 3)
in the neighborhood of the current point (e.g., initialization), and analyze the continuous gradient dynamics of the Domain-Adversarial Game (eq. 4 and eq. 5). We show that given the sufficient conditions from Prop. 3, asymptotic convergence to a local NE is guaranteed through an application of the Hurwitz condition (Khalil, 2002). Most importantly, we show that using GD with the GRL could violate those guarantees unless its learning rate is upper bounded (see Thm. 2 and Cor. 1). This is in sharp contrast with known results from supervised learning, where the implicit regularization introduced by GD has been shown to be desirable (Barrett & Dherin, 2021). We also analyze the use of higher-order ODE solvers for DAL and show that the above restrictions are not required if GD is replaced with them. Finally, we compare our resulting optimizers with recently proposed algorithms in the context of games.
Our algorithmic analysis is based on the continuous gradient-play dynamics and the derivation of the modified or high-resolution ODE of popular integrators (e.g., GD/Euler Method and Runge-Kutta). This type of analysis is also known in the numerical integration community as backward error analysis (Hairer et al., 2006) and has recently been used to understand the implicit regularization effect of GD in supervised learning (Barrett & Dherin, 2021). High resolution ODEs have also been used in Shi et al. (2018) to understand the acceleration effect of optimization algorithms, and more recently in Lu (2020). As in Shi et al. (2018); Lu (2020); Barrett & Dherin (2021), our derivation of the high resolution ODEs is in the full-batch setting. The derivation of the stochastic dynamics of stochastic discrete time algorithms is significantly more complicated and is beyond the scope of this work.
We experimentally demonstrate that our results are also valid when there is stochasticity due to sampling noise in the mini-batch. We emphasize that our analysis does not put any constraint or structure on the players’ cost functions as opposed to Azizian et al. (2020); Zhang & Yu (2020). In our problem, the game is neither bilinear nor necessarily strongly monotone. See proofs in appendices.
4.1 CONTINUOUS GRADIENT DYNAMICS
Given v(ω) the continuous gradient dynamics can be written as:
ω̇(t) = −v(ω). (6)

For later reasons, and to distinguish between eq. 6 and the gradient flow, we will refer to these as the gradient-play dynamics, as in Başar & Olsder (1998); Mazumdar et al. (2020). These dynamics are well studied and understood when the game is either a potential or a purely adversarial game (see definitions in the appendices). While eq. 2 may look like a single objective, the introduction of the GRL (Rλ) makes a fundamental difference between our case and the dynamics analyzed in the single-objective gradient-based learning and optimization literature. We summarize this below:

Proposition 4. The domain-adversarial game is neither a potential nor necessarily a purely adversarial game. Moreover, its gradient dynamics are not equivalent to the gradient flow.
Fortunately, we can directly apply the Hurwitz condition (Khalil, 2002) (also known as the condition for asymptotic stability, see Appendix A.1) to derive sufficient conditions under which the continuous dynamics of the gradient play converge. Lemma 1. (Hurwitz condition) Let ∇v(w∗) be the Jacobian of the vector field at a stationary point w∗ where v(w∗) = 0. If the real part of every eigenvalue λ of ∇v(w∗) (i.e., in the spectrum Sp(∇v(w∗))) is positive, then the continuous gradient dynamics are asymptotically stable. In this work, we assume the algorithms are initialized in a neighborhood of a strict local NE ω∗. Therefore, Lemma 1 provides sufficient conditions for the asymptotic convergence of the gradient-play dynamics to a local NE. In practice, this assumption may not hold, and it is computationally hard to verify. Despite this, our experiments show noticeable performance gains in several tasks, benchmarks and network architectures (see § 6).
4.2 ANALYSIS OF GD WITH THE GRL
We showed above that given the existence of a strict local NE, the gradient-play dynamics are attracted to the strict local NE. A natural question then arises: If under this assumption local asymptotic convergence is guaranteed, what could make DAL notoriously hard to train and unstable? In practice, we do not have access to an explicit solution of the ODE. Thus, we rely on integration algorithms to approximate the solution. One simple approach is to use the Euler method:
w+ = w − ηv(w). (7)
This is commonly known as GD. The equivalence between v(w) (the game's vector field) and the gradient of Equation (2) (the original DAL formulation) follows from the use of the GRL (Rλ). We remind the reader that the GRL is a "pseudo-function" defined by two (incompatible) equations describing its forward and back-propagation behavior, i.e., a flip in the gradient's sign for the backward pass (see Figure 1, Section 2 and Ganin et al. (2016)). Equation (7) is then the default algorithm used in DAL. Now, to answer the motivating question of this section, we propose to analyze the high-resolution ODE of this numerical integrator (i.e., Euler) and in turn its asymptotic behavior. This is similar to deriving the modified continuous dynamics for which the integrator produces the exact solution (Hairer et al., 2006) and applying the Hurwitz condition to the high-resolution ODE.

Theorem 2. The high-resolution ODE of GD with the GRL up to O(η) is:

ẇ = −v(w) − (η/2)∇v(w)v(w). (8)

Moreover, this is asymptotically stable (see Appendix A.1) at a stationary point w∗ (Proposition 3) iff for every eigenvalue λ = a + ib ∈ Sp(−∇v(w∗)), we have 0 > η(a² − b²)/2 > a.

A striking difference between Equation (6) and Equation (8) is now clear: the additional second term. This term is a result of the discretization of the gradient-play dynamics using Euler's method (i.e., GD) and leads to a different Jacobian of the dynamics. This term was recently shown to be beneficial for standard supervised learning (Barrett & Dherin, 2021), where ∇v(ω∗) is symmetric and thus only has real eigenvalues. In our scenario, this term is undesirable. In fact, it puts an upper bound on the learning rate η. The following corollary formalizes this:

Corollary 1. The high-resolution ODE of GD with the GRL in Equation (8) is asymptotically stable only if the learning rate η is in the interval 0 < η < −2a/(b² − a²), for all λ = a + ib ∈ Sp(−∇v(w∗)) with large imaginary part (i.e., such that |a| < |b|).

To have good convergence properties, the imaginary part of the eigenvalues of −∇v(w∗) must be small enough. Therefore, if some eigenvalue λ = a + ib satisfies a < 0 and b² ≫ a², −2a, the learning rate should be chosen to be very small. This is verified in Section 6 and in Example 2.

Example 2. Consider the three-player game where ℓ(w1, w2) = w1² + 2w1w2 + w2², λ = 1 and ds,t(w2, w3) = w2² + 99w2w3 − w3². Then ẇ = −v(w) becomes ẇ = Aw with

A = [−2 −2 0; −2 −4 −99; 0 99 −2].

The eigenvalues of A are −2 and −3 ± 2i√2449. From Corollary 1, η should satisfy 0 < η < 6.2 × 10⁻³.
4.3 HIGHER ORDER ODE SOLVERS
The limitation described above exists because GD with the GRL can be understood as a discretization of the gradient-play dynamics using Euler’s Method. Thus, it only approximates the continuous dynamics up to O(η). To obtain a better approximation, we consider Runge-Kutta (RK) methods of order two and beyond (Butcher, 1996). For example, take the improved Euler’s method (a particular RK method of second order) that can be written as:
w⁺ = w − (η/2)(v(w) + v(w − ηv(w))). (9)

Comparing Equation (9) (i.e., the update rule of RK2) with Equation (7) (i.e., the update rule of GD), one can see that the RK2 method is straightforward to implement in standard deep learning frameworks. Moreover, it does not introduce additional hyper-parameters. More importantly, such discrete dynamics approximate the continuous ODE of Equation (6) to a higher precision. In Appendix C, we provide asymptotic guarantees for the high-resolution ODE of general RK methods, their generalized expression and the algorithm pseudo-code. See also the PyTorch pseudo-code in Appendix E.
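For illustration, a minimal PyTorch sketch of one update of Equation (9) (our own illustrative code rather than the appendix pseudo-code; closure() is assumed to recompute the full objective, including the GRL, on the current mini-batch, and params are leaf tensors requiring gradients):

```python
import torch

def rk2_step(params, closure, eta):
    # w+ = w - (eta/2) * (v(w) + v(w - eta * v(w)))   [Equation (9)]
    g1 = torch.autograd.grad(closure(), params)       # v(w)
    w0 = [p.detach().clone() for p in params]
    with torch.no_grad():                             # move to w - eta * v(w)
        for p, g in zip(params, g1):
            p.sub_(eta * g)
    g2 = torch.autograd.grad(closure(), params)       # v(w - eta * v(w))
    with torch.no_grad():                             # restore and apply the update
        for p, w, a, b in zip(params, w0, g1, g2):
            p.copy_(w - 0.5 * eta * (a + b))
```

The second gradient evaluation is the only overhead relative to GD, matching the wall-clock discussion below.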
Limitation. A disadvantage of using high-order solvers is that they require extra gradient evaluations per update: one extra evaluation in the case of RK2 (the computation of the additional second term in Equation (9)). In our implementation, however, this was less than 2x slower in wall-clock time (see Appendix E.5 for more details and a wall-clock comparison). Moreover, if not initialized in the neighborhood of a local NE, high-order solvers and gradient-based methods might also converge to a non-NE as described in Mazumdar et al. (2019), although this is likely a rare case.
Comparison vs other game optimization algorithms. DAL has not been previously interpreted from a game perspective. Our interpretation allows us to bring recently proposed algorithms from the context of differentiable games (Zhang & Yu, 2020; Azizian et al., 2020) to DAL. Classic examples are the Extra-Gradient (EG) method (Korpelevich, 1976) and Consensus Optimization (CO) (Mescheder et al., 2017). In Appendix B.2 we analyze the continuous dynamics of the EG method, and show that
we cannot take the learning rate of EG to be large either. Thus, we obtain a conclusion similar to Corollary 1. In practice for DAL, then, stability for EG comes at the price of slow convergence due to the use of small learning rates. We experimentally show this in Figure 3. With respect to CO, we show in Appendix C that this algorithm can be interpreted in the limit as an approximation of the RK2 solver. In practice, if its additional hyper-parameter (γ) is tuned thoroughly, CO may approximate the continuous dynamics better than GD and EG. We believe this may be the reason why CO slightly outperforms GD and EG (see Appendix E.4). In all cases, RK solvers outperform GD, EG and CO. This is in line with our theoretical analysis since they better approximate the continuous dynamics (Hairer et al., 2006). It is worth noting that many other optimizers have recently been proposed in the context of games, e.g., Gidel et al. (2019a); Hsieh et al. (2020); Lorraine et al. (2021a;b). Some of them are modifications of the EG method that we compared to, e.g., Extra-Adam (Gidel et al., 2019a) or double step-size EG (Hsieh et al., 2020). More practical modifications in terms of adaptive step size could also be applied on top of RK solvers as done in Qin et al. (2020). A comparison of all existing game optimizers in DAL, and a better theoretical understanding of such modifications of RK solvers, are beyond the scope of this work. However, we believe it is an interesting and unexplored research direction that our game perspective on DAL enables.
5 RELATED WORK
To the best of our knowledge, DAL has not been previously analyzed from a game perspective. Moreover, the stability of the optimizer and the implications of introducing the GRL have not been analyzed either. Here, we compare our results with the general literature.
Gradient-Based Learning in Games. Ratliff et al. (2016) proposed a characterization of local Nash Equilibrium, providing sufficient and necessary conditions for its existence. Mazumdar et al. (2020) proposed a general framework to analyze the limiting behavior of gradient-play algorithms in games using tools from dynamical systems. Our work builds on these characterizations but specializes them to the domain-adversarial problem. We propose a more stable learning algorithm that better approximates the gradient-play dynamics. Our resulting algorithm does not introduce explicit adjustments or modify the learning dynamics, nor does it require the computation of several Hessian-vector products or new hyperparameters. This is in contrast with general algorithms previously analyzed in the context of differentiable games (Azizian et al., 2020; Letcher et al., 2019).
Integration Methods and ML. Scieur et al. (2017) showed that accelerated optimization methods can be interpreted as integration schemes of the gradient flow equation. Zhang et al. (2018) showed that the use of high-order RK integrators can achieve acceleration for convex functions. In the context of two-player games (i.e., GANs), Gemp & Mahadevan (2018) considered using a second-order ODE integrator. More recently, Qin et al. (2020) proposed to combine RK solvers with regularization on the generator's gradient norm. Chen et al. (2018) interpreted the residual connection in modern networks as the Euler integration of a continuous system. In our case, we notice that the combination of GD with the GRL can be interpreted as the Euler discretization of the continuous gradient-play dynamics, which could prevent asymptotic convergence guarantees. We then study the discretization step of popular ODE solvers and provide simple guarantees for stability. Moreover, our analysis is based on a novel three-player game interpretation of the domain-adaptation problem. This is also different from a single potential function or two-player games (i.e., GANs).
Two-Player Zero-Sum Games have recently received significant attention in the machine learning literature due to the popularity of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). For example, several algorithms have been proposed and analyzed (Mescheder et al., 2017; Mertikopoulos et al., 2019; Gidel et al., 2019a;b; Zhang & Yu, 2020; Hsieh et al., 2020), in both deterministic and stochastic scenarios. In our problem, we have a general three-player game resulting from a novel game interpretation of the domain-adversarial problem. It is worth noting that while Gidel et al. (2019a) focused on GANs, their convergence theory and methods for stochastic variational inequalities could also be applied to three-player games and thus to DAL using our perspective.
6 EXPERIMENTAL RESULTS
We conduct an extensive experimental analysis. We compare with default optimizers used in domain-adversarial training such as GD, GD with Nesterov Momentum (GD-NM) (as in Sutskever et al. (2013)) and Adam (Kingma & Ba, 2014). We also compare against recently proposed optimizers in the context of differentiable games such as EG (Korpelevich, 1976) and CO (Mescheder et al., 2017). We focus our experimental analysis on the original domain-adversarial framework of Ganin et al.
(2016) (DANN). However, in section 6.2, we also show the versatility and efficacy of our approach improving the performance of recently proposed SoTA DAL framework (e.g., f -DAL (Acuna et al., 2021) combined with Implicit Alignment (Jiang et al., 2020)).
6.1 EXPERIMENTAL ANALYSIS ON DIGITS
Implementation Details. Our first experimental analysis is based on the digits benchmark with models trained from scratch (i.e., with random initialization). This benchmark consists of two digit datasets, MNIST (CC BY-SA 3.0) and USPS (LeCun et al., 1998; Long et al., 2018), with two transfer tasks (M→U and U→M). We adopt the splits and evaluation protocol from Long et al. (2018) and follow their standard implementation details.
For GD-NM, we use the default momentum value (0.9). We follow the same approach for the additional hyper-parameters of Adam. Hyperparameters such as the learning rate, the learning schedule and the adaptation coefficient (λ) are determined for all optimizers by running a dense grid search and selecting the best hyper-parameters on the transfer task M→U. As usual in UDA, model selection is based on the best transfer accuracy. The same parameters are then used
for the other task (i.e., U→M). We use the same seed and identically initialize the network weights for all optimizers. This analysis is conducted on Jax (Bradbury et al., 2018) (see Appendix D).
Comparison vs optimizers used in DAL. Figure 2 (top) illustrates the training dynamics for the loss in the target domain and the transfer performance. As expected, our optimizer converges faster and achieves noticeable performance gains. A core idea of DAL is to learn domain-invariant representations; thus, we plot in Figure 2 (bottom) t-SNE (Van der Maaten & Hinton, 2008) visualizations of the last-layer features of the network. We show this over a sequence of epochs for GD with the GRL vs RK2. A different color is used for the source and target datasets. In the comparison vs Adam, we emphasize that Adam computes adaptive learning rates, which our method does not. That said, Figure 2 shows that our two methods RK2 and RK4
outperform all baselines in terms of both convergence and transfer performance. In Figure 7, we show how unstable these standard optimizers are when more aggressive step sizes are used. This is in line with our theoretical analysis. Experimentally, it can be seen that in DAL, GD is more stable than GD-NM and Adam, with the latter being the most unstable. This sheds light on why well-tuned GD-NM is often preferred over Adam in DAL.
Comparison vs game optimization algorithms. We now compare RK solvers vs other recently proposed game optimization algorithms. Specifically, we compare vs the EG method (Korpelevich, 1976) and CO (Mescheder et al., 2017). In every case, we perform a dense grid search under the same budget for all optimizers and report the best selection (see Appendix E for details). In line with our theoretical analysis of the continuous dynamics of EG, we notice that the EG method is not able to train with learning rates bigger than 0.006; as a result, it performs significantly worse than any other optimizer (including simple GD). Also in line with our theoretical analysis, CO performs better than EG and all other popular gradient algorithms used in DAL. This is because CO can be seen as an approximation of Heun's method (RK2). More details are in the supplementary material.
Robustness to hyper-parameters. Figure 4 shows the transfer performance of our method for different choices of hyper-parameters while highlighting (green line) the best score of the best-performing GD hyperparameters on the same dataset. Our method is robust to a wide variety of hyperparameters.
0.1 0.3 0.4
92
94
96 LR
const poly
LR Schedule
RK2 RK4
Method
0.1 0.5 1.0
adapt
92 94 96
92
94
96 Transfer Acc.
Figure 4: Robustness to hyperparameters. We compare the transfer performance of our method for different hyperarameters in the task M→ U in the Digits benchmark. Green line shows the best score for the best performing hyperparameters of GD. Blue star corresponds to the best solution. Our method performs well for a wide variety of hyperparameters.
0 8000 16000 24000 # of Iterations
65.0
67.5
70.0
72.5
Tr an
sf er
P er
fo rm
an ce
Grad. Descent (Nesterov) Ours (RK2)
Figure 6: Transfer Performance on Visda (DANN). 0 6000 12000 18000 24000 # of Iteration(s)
1.25
1.50
1.75
2.00
2.25
Ta rg
et T
as k
Lo ss
Grad. Descent LR:0.3 Nesterov Momentun LR:0.1
Figure 7: Stability anal. on Digits. Most aggressive step size before divergence. Adam diverges for η > 0.001.
6.2 COMPARISON IN COMPLEX ADAPTATION TASKS Method Sim→Real GD-NM 71.7 ± 0.7
Ours(RK2) 73.8 ± 0.3
Table 1: Accuracy (DANN) on Visda 2017 with ResNet-50.
We evaluate the performance of our algorithm with Resnet-50 (He et al., 2016) on more challenging adaptation benchmarks. Specifically, this analysis is conducted on Visda-2017 benchmark (Peng et al., 2017). This is a simulation-to-real dataset with two different domains: (S) synthetic renderings of 3D models and (R) real images. For this experiment, we use PyTorch (Paszke et al., 2019), our evaluation protocol follows Zhang et al. (2019) and uses ResNet-50 as the backbone network. For the optimizer parameters, we tune
thoroughly GD-NM, which is the optimizer used in this setting (Long et al., 2018; Zhang et al., 2019; Jiang et al., 2020; Acuna et al., 2021). For ours, we keep the hyper-parameters, but increase the learning rate (0.2), and the batch size to 128. In this task, our approach corresponds to the improved Euler’s method (RK2). Table 1 shows the comparison. Figure 6 compares the training dynamics of our method vs GD-NM. In Figure 5, we evaluate the sensitivity of our method (in terms of transfer performance) to sampling noise as controlled by the batch size.
Improving SoTA DAL frameworks. We use this complex visual adaptation task to showcase the applicability of our method to SoTA DAL frameworks. Specifically, we let the DA method be f-DAL Pearson as in Acuna et al. (2021) with Implicit Alignment (Jiang et al., 2020). We use the tuned parameters and optimizer from Acuna et al. (2021); Jiang et al. (2020) as the baseline. In our case, we only increase the learning rate (0.2). Table 2 shows that our method achieves peak results (+3.5%) in 10.5K iterations (vs. 29.5K iterations for GD-NM).
Natural Language Processing Tasks. We also evaluate our approach on natural language processing tasks on the Amazon product reviews dataset (Blitzer et al., 2006). We show noticeable gains by replacing GD with either RK2 or RK4. Results and details can be found in Appendix E.1.
7 CONCLUSIONS
We analyzed DAL from a game-theoretical perspective where optimality is defined as local NE. From this view, we showed that standard optimizers in DAL can violate the asymptotic guarantees of the gradient-play dynamics, requiring careful tuning and small learning rates. Based on our analysis, we proposed to replace existing optimizers with higher-order ODE solvers. We showed both theoretically and experimentally that these are more stable and allow for higher learning rates, leading to noticeable improvements in terms of the transfer performance and the number of training iterations. We showed that these ODE solvers can be used as a drop-in replacement and outperformed strong baselines.
Acknowledgements. We would like to thank James Lucas, Jonathan Lorraine, Tianshi Cao, Rafid Mahmood, Mark Brophy and the anonymous reviewers for feedback on earlier versions of this work.
SUPPLEMENTARY MATERIAL
CONTENTS
1 Introduction
2 Preliminaries
3 A Game Perspective on DAL
3.1 Domain-Adversarial Game
3.2 Characterization of the Domain-Adversarial Game
4 Learning Algorithms
4.1 Continuous Gradient Dynamics
4.2 Analysis of GD with the GRL
4.3 Higher order ODE Solvers
5 Related Work
6 Experimental Results
6.1 Experimental Analysis on Digits
6.2 Comparison in complex adaptation tasks
7 Conclusions
A Concepts in Game Theory
A.1 Definitions
A.2 Games Characterizations
A.3 Case of Study in DANN. Original Formulation from Ganin et al. (2016)
B Derivation of high-resolution ODEs
B.1 High-resolution ODE of second-order Runge–Kutta method
B.2 Continuous dynamics of Extra-Gradient (EG)
B.3 High-resolution ODE of classic fourth-order Runge–Kutta method (RK4)
C Proofs and additional theoretical results
C.1 Proposed Learning Algorithm
C.2 CO approximates RK2 (Heun's Method)
D Experimental Setup Additional Details
E Additional Experiments
E.1 Natural Language Processing Tasks
E.2 Sensitivity to Sampling Noise
E.3 Additional Comparison vs Game Optimization Algorithms
E.4 CO vs Gradient Descent and Extra-Gradient Algorithms
E.5 Wall-Clock Comparison
F PyTorch PseudoCode of RK2 Solver
A CONCEPTS IN GAME THEORY
A.1 DEFINITIONS
Definition 2. (Local Nash Equilibrium) A point (ω∗i , ω∗−i) ∈ Ω is said to be a local Nash Equilibrium of the domain-adversarial game if there exists some δ > 0 such that:
∀i ∈ {1, 2, 3}, Ji(ω∗i , ω∗−i) ≤ Ji(ωi, ω∗−i) for all ωi with ‖ωi − ω∗i‖2 < δ. (10)
Intuitively, this is restricting the concept of NE to a local neighborhood B(x∗, δ) := {||x− x∗||2 < δ} with δ > 0.
A more practical characterization of the NE can be given in terms of the Best Response Map of each player which we now define.
Definition 3. (Best Response Map (BR)) The best response map BRi : Ω−i ⇒ Ωi of player i is defined as:
BRi(ω−i) := arg min ωi∈Ωi Ji(ωi, ω−i), (11)
here the symbol ⇒ emphasizes that the best response map is generally a set-valued map and not a singleton; thus it is not a function in general. In other words, there may be a subset of elements in Ωi for which Ji(·, ω−i) is a minimum.
The notion of NE can be defined in terms of the generalized map BR : Ω ⇒ Ω. This can be thought of as a stacked vector whose i-th element is BRi(ω−i). Proposition 5. A point ω∗ ∈ Ω is an NE of the game if it is a fixed point of the generalized BR : Ω ⇒ Ω map. That is,
ω∗ ∈ BR(ω∗) =⇒ ∀i ∈ {1, 2, 3}, ω∗i ∈ BRi(ω∗−i) (12)
Proof. This follows from the definitions of BR map and NE.
Definition 4. (Asymptotically Stable) A point ω is said to be a locally asymptotically stable point of the continuous dynamics ω̇ = f(ω) if Re(λ) < 0 for all λ ∈ Sp(∇f(ω)), where Sp(∇f(ω)) is the spectrum of ∇f(ω).
Definition 4 is also known as the Hurwitz condition Khalil (2002).
Definition 5. A stationary point x∗ of a C2 function φ : Rn → R is said to be a strict saddle point if:
• λmin(∇2φ(x∗)) < 0 and,
• λ > 0 for every other λ ∈ Sp(∇2φ(x∗))
A.2 GAMES CHARACTERIZATIONS
Potential Games. Potential Games were introduced in Monderer & Shapley (1996) and can be defined as a type of game for which there exists an implicit potential function φ : Ω → R such that ∇φ(ω) = v(ω). Consequently, a necessary and sufficient condition for the game to be potential is the Jacobian of the vector field ∇v(ω) being symmetric (see 3.3 in Mazumdar et al. (2020) and Monderer & Shapley (1996)).
Purely Adversarial Games. This particular type of game refers to the other extreme in which H(ω) is a non-symmetric matrix with purely imaginary eigenvalues. If the game Hessian is skew-symmetric these have also been called Hamiltonian Games Letcher et al. (2019).
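These two characterizations can be probed numerically from the (constant) Jacobian of a quadratic game's vector field; a minimal sketch (the helper name is ours):

import numpy as np

def classify_game(H, tol=1e-9):
    # H: Jacobian of the pseudo-gradient (the game Hessian).
    if np.allclose(H, H.T, atol=tol):
        return "potential"            # symmetric Jacobian
    if np.allclose(H, -H.T, atol=tol):
        return "hamiltonian"          # skew-symmetric Jacobian
    return "mixed"                    # cooperation with competition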
A.3 CASE OF STUDY IN DANN. ORIGINAL FORMULATION FROM GANIN ET AL. (2016)
As mentioned in the main text (Section 2), our analysis is compatible with both the original and more recent formulation of domain-adversarial training such as Zhang et al. (2019); Acuna et al. (2021). In this section, we specifically derive additional results for DANN Ganin et al. (2016).
In order to obtain the original formulation of DANN, let us define ℓ̂(·, b) = log(σ(b)) and φ∗(t) = −log(1 − e^t) in Equation (2). This corresponds to the Jensen-Shannon divergence (JS) (up to a constant shift that does not affect optimization). We can then rewrite ds,t as:
ds,t = Ex∼ps [log σ(ĥ′ ◦ g(x))] + Ex∼pt [log(1 − σ(ĥ′ ◦ g(x)))] (13)
where σ(x) := 1/(1 + e^−x). To simplify the notation, we write H := σ ◦ H.
We can now re-define the pseudo-gradient v(ω) of the game as the gradient of each player's loss with respect to its parameters. Letting α = 1, we get from Equation (4):
v(ω) := (∇ω1ℓ, ∇ω2(ℓ + λds,t), −∇ω3ds,t) ∈ Rd. (14)
The following propositions characterize local NE in terms of the pseudo-gradient v(ω) and its Jacobian H(ω).
Proposition 6. (Local NE) Suppose v(ω) = 0 and
( ∇2ω1ℓ       ∇2ω1,ω2ℓ
  ∇2ω1,ω2ℓ    ∇2ω2(ℓ + λds,t) ) ≻ 0,   ∇2ω3ds,t ≺ 0, (15)
then ω is an isolated local NE.
The proof is simple and follows from Propositions 1 and 2, the definition of the vector field v(ω), and the condition H + H⊤ ≻ 0.
Cooperation with Competition. By examining the matrix H(ω), one can see that, in our scenario, the game is neither a potential game nor a purely adversarial game. However, we can write the vector field in the following form:
v(ω) = (∇ω1ℓ, ∇ω2ℓ, 0) [= ∇φ(ω)] + (0, λ∇ω2ds,t, −∇ω3ds,t) [= v̂(ω)] (16)
where the first part corresponds to the gradient of the potential function φ(ω) = `(ω1, ω2). The second part, on the other hand, corresponds to a function v̂(w) whose Jacobian is a non-symmetric matrix. Analyzing them separately leads to either a potential or an adversarial game respectively. We define this particular type of game as cooperation (i.e., in the potential term) with competition (i.e., the adversarial term).
It is worth noting that, while the spectrum of the game Hessian for the first term has only real eigenvalues, the second term can have complex eigenvalues with a large imaginary component. Indeed, it can be shown that this second term approximates the one obtained for a GAN using the non-saturating loss proposed by Goodfellow et al. (2014) (e.g., λ = 1). In other words, the second term can be written as the pseudo-gradient of the two-player zero-sum game minω2 maxω3 ds,t. Building on this key observation and the work of Mescheder et al. (2017); Berard et al. (2020) (Figure 4), where it was experimentally shown that the eigenvalues of the game Hessian for GANs indeed have a large imaginary component around stationary points, we can assume that the spectrum of the game Hessian in our case also has eigenvalues with a large imaginary component around the stationary points. This observation can also be used with Corollary 1 to further motivate the use of higher-order ODE solvers instead of GD with the GRL.
Example 3. Consider the three-player game in Equation (16) where ℓ(w1, w2) = w1² + 2w1w2 + w2², λ = 1 and ds,t(w2, w3) = w2² + 99w2w3 − w3². The gradient-play dynamics ẇ = −v(w) become:
ẇ = Aw,  A = ( −2  −2    0
               −2  −4  −99
                0   99   −2 ).
The eigenvalues of A are −2 and −3 ± 2i√2449. From Corollary 1, η should satisfy 0 < η < 6.2 · 10−3.
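These eigenvalues are easy to verify numerically; a small NumPy check (the matrix is copied from Example 3):

import numpy as np

A = np.array([[-2.0, -2.0,   0.0],
              [-2.0, -4.0, -99.0],
              [ 0.0, 99.0,  -2.0]])
print(np.linalg.eigvals(A))  # approx. -2 and -3 +/- 98.97j, i.e. -3 +/- 2i*sqrt(2449)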
Is the three-player game formulation desired? In domain adaptation, optimization is a means to an end. The final goal is to minimize the upper bound from Theorem 1 to ensure better performance in the target domain. One might then wonder whether interpreting optimality in terms of NE is desirable. In our context, NE means finding the optimal g∗, ĥ∗ and ĥ′∗ of the cost functions defined in Equation (4). This in turn leads to minimizing the upper bound in Theorem 1.
Remark on sequential games: Recently, Jin et al. (2020) introduced a notion of local min-max optimality for two-player games, exploiting the sequential nature of some problems in adversarial ML (i.e., GANs). In domain-adversarial learning, updates are usually performed simultaneously using the GRL. Thus, we focus here on the general case where the order of the players is not known.
B DERIVATION OF HIGH-RESOLUTION ODES
Lemma 2. The high-resolution ODE resulting from the GD algorithm with the GRL is:
ẇ = −v(w) − (η/2)∇v(w)v(w) + O(η²), (17)
Proof. This follows from Corollary 1 of Lu (2020).
B.1 HIGH-RESOLUTION ODE OF SECOND-ORDER RUNGE–KUTTA METHOD
The high-resolution ODE was discussed in Shi et al. (2018); Lu (2020). For discrete algorithms with the following update:
w+ = w + f(η, w), (18)
we can think of the trajectory as a discretization of the continuous dynamics w : [0,+∞) → Rd, and in Equation (18), we have w = w(t), w+ = w(t+ η). Here, with slight abuse of notation we also use w for the continuous function of dynamics.
We derive the high-resolution ODE of the second-order Runge–Kutta method:
wk+1/2 = wk − (η/2α) v(wk),   wk+1 = wk − η((1 − α)v(wk) + αv(wk+1/2)),
where 0 < α ≤ 1 and α is a constant. If α = 1/2, we obtain Heun's method; if α = 1, we obtain the midpoint method; if α = 2/3, we obtain Ralston's method. Combining the two equations, we have:
(wk+1 − wk)/η = −(1 − α)v(wk) − αv(wk − (η/2α)v(wk)). (19)
Using the Taylor expansion:
v(wk − (η/2α)v(wk)) = v(wk) − (η/2α)∇v(wk)v(wk) + O(η²).
Plugging it back into Equation (19) and using the Taylor expansion wk+1 = wk + ηẇ + η2ẅ/2, we have:
ẇ + (η/2)ẅ = −v(w) + (η/2)∇v(w)v(w) + O(η²). (20)
Now we make the assumption that we have the high-resolution ODE that:
ẇ = f0(w) + ηf1(w) + O(η²). (21)
Taking the derivative over t we have:
ẅ = ∇f0(w)f0(w) + O(η). (22)
Combining Equation (20), Equation (21) and Equation (22), we obtain:
f0(w) = −v(w), f1(w) = 0, (23)
i.e., the high-resolution ODE of the second-order Runge–Kutta method is:
ẇ = −v(w) + O(η²). (24)
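The α-family above is immediate to implement as a drop-in optimizer step; a minimal sketch, assuming a callable v that returns the stacked pseudo-gradient (function and argument names are ours):

def rk2_alpha_step(w, v, eta, alpha=0.5):
    # Generic second-order Runge-Kutta step for the dynamics w' = -v(w).
    # alpha = 1/2: Heun's method; alpha = 1: midpoint; alpha = 2/3: Ralston's.
    w_mid = w - (eta / (2.0 * alpha)) * v(w)
    return w - eta * ((1.0 - alpha) * v(w) + alpha * v(w_mid))

With α = 1/2 this is exactly Heun's method analyzed above.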
B.2 CONTINUOUS DYNAMICS OF EXTRA-GRADIENT (EG)
The continuous dynamics of Gradient Descent Ascent (GDA), Extra-Gradient (EG) and Heun's method can be summarized as follows: ẇ = v(w) + α∇v(w)v(w). For GDA, we have α = −η/2; for EG, we have α = η/2 (Lu, 2020); for Heun's method, ẇ = v(w) + O(η²). The Jacobian of the dynamics at the stationary point is ∇v(w) + α∇v(w)². Take λ = a + ib ∈ Sp(∇v(w)). The corresponding eigenvalue of the Jacobian of the dynamics is:
α(a + ib)² + a + ib = a + α(a² − b²) + i(b + 2abα). (25)
We want the real part to be negative, i.e.:
a + α(a² − b²) < 0, (26)
and thus:
a(1 + αa) < αb². (27)
For EG, α = η/2 and the dynamics diverge if a(1 + (η/2)a) ≥ ηb²/2; that is, when η is large and η(a² − b²)/2 ≥ −a, EG diverges. However, the high-resolution ODE of second-order Runge–Kutta methods only requires a < 0.
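The condition in Equations (25)–(27) can be probed directly; a small sketch (names are ours) evaluating the real part for an eigenvalue λ = a + ib of ∇v(w):

def modified_real_part(a, b, alpha):
    # Real part of alpha*(a + ib)^2 + (a + ib); stability requires it < 0 (Eq. (26)).
    return a + alpha * (a ** 2 - b ** 2)

a, b, eta = -3.0, 99.0, 0.01
print(modified_real_part(a, b, -eta / 2))  # GDA-like coefficient: positive here, unstable
print(modified_real_part(a, b, +eta / 2))  # EG coefficient: negative here, stable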
B.3 HIGH-RESOLUTION ODE OF CLASSIC FOURTH-ORDER RUNGE–KUTTA METHOD (RK4)
In this subsection, we derive the high-resolution ODE of the classic fourth-order Runge–Kutta method. We prove the following result:
Theorem 3. The high-resolution ODE of the classic fourth-order Runge–Kutta method (RK4):
w+ = w − (η/6)(v(w) + 2v2(w) + 2v3(w) + v4(w)), (28)
where
v2(w) = v(w − (η/2)v(w)),  v3(w) = v(w − (η/2)v2(w)),  v4(w) = v(w − ηv3(w)), (29)
is
ẇ = −v(w) + O(η⁴). (30)
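Before turning to the proof, a minimal sketch of the update in Equations (28)–(29), again assuming a callable v for the pseudo-gradient (names are ours):

def rk4_step(w, v, eta):
    # Classic fourth-order Runge-Kutta step for the dynamics w' = -v(w).
    k1 = v(w)
    k2 = v(w - 0.5 * eta * k1)
    k3 = v(w - 0.5 * eta * k2)
    k4 = v(w - eta * k3)
    return w - (eta / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)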
Proof. We use the following Taylor expansion:
v(w + δ) = v(w) + ∇v(w)δ + (1/2)∇²v(w)(δ, δ) + (1/6)∇³v(w)(δ, δ, δ) + O(‖δ‖⁴), (31)
where ∇2v(w) : Rd × Rd → Rd is a symmetric bilinear form, and ∇3v(w) : Rd × Rd × Rd → Rd is a symmetric trilinear form. With the formula we have:
v4(w) = v(w) − η∇v(w)v3(w) + (η²/2)∇²v(w)(v3(w), v3(w)) − (η³/6)∇³v(w)(v3(w), v3(w), v3(w)) + O(η⁴), (32)
v3(w) = v(w) − (η/2)∇v(w)v2(w) + (η²/8)∇²v(w)(v2(w), v2(w)) − (η³/48)∇³v(w)(v2(w), v2(w), v2(w)) + O(η⁴), (33)
v2(w) = v(w) − (η/2)∇v(w)v(w) + (η²/8)∇²v(w)(v(w), v(w)) − (η³/48)∇³v(w)(v(w), v(w), v(w)) + O(η⁴). (34)
Putting them together we have:
v4(w) + 2v3(w) + 2v2(w) + v(w) = 6v(w) − η∇v(w)(v3(w) + v2(w) + v(w)) + (η²/2)(∇²v(w)(v3(w), v3(w)) + (1/2)∇²v(w)(v2(w), v2(w)) + (1/2)∇²v(w)(v(w), v(w))) − (η³/4)∇³v(w)(v(w), v(w), v(w)) + O(η⁴), (35)
v3(w) + v2(w) + v(w) = 3v(w) − (η/2)∇v(w)(v2(w) + v(w)) + (η²/4)∇²v(w)(v(w), v(w)) + O(η³), (36)
v2(w) + v(w) = 2v(w) − (η/2)∇v(w)v(w) + O(η²). (37)
Bringing Equation (37) into Equation (36), we obtain:
v3(w) + v2(w) + v(w) = 3v(w) − η∇v(w)v(w) + (η²/4)∇²v(w)(v(w), v(w)) + (η²/4)(∇v(w))²v(w) + O(η³). (38)
Putting Equation (38) into Equation (35), and then Equation (35) into the update in Equation (28), the correction terms cancel and matching the Taylor expansion of wk+1 as in Appendix B.1 gives ẇ = −v(w) + O(η⁴), which completes the proof. | 1. What is the focus of the paper regarding domain-adversarial training?
2. What are the strengths of the proposed approach, especially in terms of game theory and numerical analysis?
3. Do you have any concerns or questions regarding the novelty and impact of the work in the field of domain-adversarial learning?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are the conclusions drawn by the reviewer regarding the paper's contribution and potential for opening new research directions? | Summary Of The Paper
Review | Summary Of The Paper
The authors exhibit a strong link between game theory and domain-adversarial training. They show the optimal point in the latter is a Nash equilibrium of a three players game. From this perspective, the authors show that standard approaches, like gradient descent, cannot work in this setting as the method is known to be divergent in such a case. Instead, they propose to use Runge-Kutta methods (for example) to discretize the ODE, which gives insights for novel algorithms with better convergence guarantees.
Review
(I am not a specialist in domain adversarial learning. Therefore I am not entirely sure about the novelty and impact of the work in this field).
The paper is very well written. The introduction of both fields of domain adversarial learning (sec 2) and game theory (sec 3) is done properly, with an adequate study of the related work. Moreover, the authors did a good job convincing about the complexity of the three-players game setting.
From this perspective, using known tools from game theory / variationally inequalities / numerical analysis for ODE discretization, the authors show necessary and sufficient conditions for the existence of local Nash equilibrium (prop. 1 and 2), showed that a modified dynamic of the gradient flow (equation (8) ) ensure global convergence of the Euler discretization, and developed an algorithm (RK-2, generalization of extra-gradient, equation (9)).
Finally, they discussed several approaches with solid numerical experiments on various problems.
In terms of novelty, there are no "new" theoretical results, properly speaking. The novelty here is the derivation of the domain adversarial learning into a game, which is not straightforward but simplifies its analysis. Thanks to this new perspective, I believe this could open new doors and research direction in this field. For this reason, and because the paper is particularly well written, I recommend the paper to be accepted.
*** Post rebuttal
I have read the authors' answer and decided to keep my score. |
ICLR | Title
Domain Adversarial Training: A Game Perspective
Abstract
The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game theoretical perspective. Defining optimal solutions in domain-adversarial training as local Nash equilibria, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (i.e., Runge–Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement with less than half of training iterations. Our optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework.
1 INTRODUCTION
Unsupervised domain adaptation (UDA) deals with the lack of labeled data in a target domain by transferring knowledge from a labeled source domain (i.e., a related dataset with different distribution where abundant labeled data already exists). The paramount importance of this paradigm has led to remarkable advances in the field in terms of both theory and algorithms (Ben-David et al., 2007; 2010a;b; Mansour et al., 2009). Several state-of-the-art algorithms tackle UDA by learning domaininvariant representations in an adversarial fashion (Shu et al., 2018; Long et al., 2018; Saito et al., 2018; Hoffman et al., 2018; Zhang et al., 2019; Acuna et al., 2021). Their goal is to fool an auxiliary classifier that operates in a representation space and aims to classify whether the datapoint belongs to either the source or the target domain. This idea, called Domain-Adversarial Learning (DAL), was introduced by Ganin et al. (2016) and can be more formally understood as minimizing the discrepancy between source and target domain in a representation space (Acuna et al., 2021).
Despite DAL being a dominant approach for UDA, alternative solutions have been sought as DAL is noticeably unstable and difficult to train in practice (Sener et al., 2016; Sun et al., 2019; Chang et al., 2019). One major cause of instability is the adversarial nature of the learning algorithm which results from the introduction of the Gradient Reversal Layer (GRL, Ganin et al., 2016) (Figure 1). GRL flips the sign of the gradient during the backward pass, which has profound implications on the training dynamics and asymptotic behavior of the learning algorithm. Indeed, GRL transforms gradient descent into a competitive gradient-based algorithm which may converge to periodic orbits and other non-trivial limiting behavior that arise for instance in chaotic systems (Mazumdar et al., 2020). Surprisingly, little attention has been paid to this fact, and specifically to the adversarial component and interaction among the three different networks in the algorithm. In particular, three fundamental questions have not been answered from an algorithmic point of view, 1) What is optimality in DAL? 2) What makes DAL difficult to train and 3) How can we mitigate this problem?
In this work, we aim to answer these questions by interpreting the DAL framework through the lens of game theory. Specifically, we use tools developed by the game theoretical community in Başar & Olsder (1998); Letcher et al. (2019); Mazumdar et al. (2020) and draw inspiration from the existing two-player zero-sum game interpretations of Generative Adversarial Networks (GANs)
(Goodfellow et al., 2014). We emphasize that in DAL, however, we have three rather than two networks interacting with each other, with partial cooperation and competition. We propose a natural three-player game interpretation for DAL, which is not necessarily equivalent to two-player zero-sum game interpretations (see Example 1), which we coin as the Domain-Adversarial Game. We also propose to interpret and characterize optimal solutions in DAL as local Nash Equilibria (see Section 3). This characterization introduces a proper mathematical definition of algorithmic optimality for DAL. It also provides sufficient conditions for optimality that drives the algorithmic analysis.
With our proposed game perspective in mind, a simple optimization solution would be to use the Gradient Descent (GD) algorithm, which is the de facto solution but known to be unstable. Alternatively, we could also use other popular gradient based optimizers proposed in the context of differentiable games (e.g. Korpelevich, 1976; Mescheder et al., 2017). However, we notice that these do not outperform GD in practice (see § 6). To understand why, we analyze the asymptotic behavior of gradient-based algorithms in the proposed domain-adversarial game (§ 4). The main result of § 4.2 (Theorem 2) shows that GD with GRL (i.e., the existing solution for DAL) violates the asymptotic convergence guarantees to local NE unless an upper bound is placed on the learning rate, which may explain its training instability and sensitivity to optimizer parameters. In § 4.3, Appendix B.2 and Appendix E, we also provide a similar analysis for the popular game optimization algorithms mentioned above. We emphasize however that while some of our results may be of independent interest for learning in general games, our focus is DAL. § 4.3 and § 6 show both theoretically and experimentally that the limitations mentioned above disappear if standard optimizers are replaced with ODE solvers of at least second order. These are straightforward to implement as drop-in replacements to existing optimizers. They also lead to more stable algorithms, allow for more aggressive learning rates and provide notable performance gains.
2 PRELIMINARIES
[Figure 1 diagram: feature extractor g producing representation Z, task classifier ĥ, domain classifier ĥ′, and the gradient reversal layer R (GRL) acting on the gradients.]
Figure 1: We study domain-adversarial training from a game perspective. In DAL (Ganin et al. (2016)), three networks interact with each other: the feature extractor (g), the domain classifier (ĥ′) and the classifier (ĥ). During backpropagation, the GRL flips the sign of the gradient with respect to g.
We focus on the UDA scenario and follow the formulation from Acuna et al. (2021). This makes our analysis general and applicable to most state-of-the-art DAL algorithms (e.g., Ganin et al. (2016); Saito et al. (2018); Zhang et al. (2019)). We assume that the learner has access to a source dataset (S) with labeled examples and a target dataset (T) with unlabeled examples, where the source inputs xsi are sampled i.i.d. from a (source) distribution Ps and the target inputs xti are sampled i.i.d. from a (target) distribution Pt, both over X . We have Y = {0, 1} for binary classification, and Y = {1, ..., k} in the multiclass case. The risk of a hypothesis h : X → Y w.r.t. the labeling function f , using a loss function ` : Y×Y → R+ under distribution D is defined as: R`D(h, f) := ED[`(h(x), f(x))]. For simplicity, we define R`S(h) := R ` Ps (h, fs) and R`T (h) := R ` Pt
(h, ft). The hypothesis class of h is denoted by H.
UDA aims to minimize the risk in the target domain while only having access to labeled data in the source domain. This risk is upper bounded in terms of the risk of the source domain, the discrepancy between the two distributions and the joint hypothesis error λ∗: Theorem 1. (Acuna et al. (2021)) Let us note ` : Y × Y → [0, 1], λ∗ := minh∈HR`S(h) +R`T (h), and Dφh,H(Ps||Pt) := suph′∈H |Ex∼Ps [`(h(x), h′(x))]− Ex∼Pt [φ∗(`(h(x), h′(x)))|. We have:
R`T (h) ≤ R`S(h) + Dφh,H(Ps||Pt) + λ∗. (1)
The function φ : R+ → R defines a particular f -divergence and φ∗ is its (Fenchel) conjugate. As is typical in UDA, we assume that the hypothesis class is complex enough and both fs and ft are similar in such a way that the non-estimable term (λ∗) is negligible and can be ignored.
Domain-Adversarial Training (see Figure 1) aims to find a hypothesis h ∈ H that jointly minimizes the first two terms of Theorem 1. To this end, the hypothesis h is interpreted as the composition of h = ĥ ◦ g with g : X → Z , and ĥ : Z → Y . Another function class Ĥ is then defined to formulate H := {ĥ ◦ g : ĥ ∈ Ĥ, g ∈ G}. The algorithm tries to find the function g ∈ G such that ĥ ◦ g minimizes the risk of the source domain (i.e. the first term in Theorem 1), and its composition with ĥ and ĥ′ minimizes the divergence of the two distributions (i.e. the second term in Theorem 1).
Algorithmically, the computation of the divergence function in Theorem 1 is estimated by a so-called domain classifier ĥ′ ∈ Ĥ whose role is to detect whether the datapoint g(xi) ∈ Z belongs to the source or to the target domain. When there does not exist a function ĥ′ ∈ Ĥ that can properly distinguish between g(xsi ) and g(x t i), g is said to be invariant to the domains.
Learning is performed using GD and the GRL (denoted by Rλ) on the following objective:
min over ĥ∈Ĥ, g∈G, ĥ′∈Ĥ of  Ex∼ps [ℓ(ĥ ◦ g, y)] − α ds,t(ĥ, ĥ′, Rλ(g)), (2)
where ds,t(ĥ, ĥ′, g) := Ex∼ps [ˆ̀(ĥ′ ◦ g, ĥ ◦ g)]− Ex∼pt [(φ∗ ◦ ˆ̀)(ĥ′ ◦ g, ĥ ◦ g)]. Mathematically, the GRL Rλ is treated as a “pseudo-function” defined by two (incompatible) equations describing its forward and back-propagation behavior (Ganin & Lempitsky, 2015; Ganin et al., 2016). Specifically,
Rλ(x) := x and dRλ(x)/dx := −λ, (3) where λ and α are hyper-parameters that control the tradeoff between achieving small source error and learning an invariant representation. The surrogate loss ` : Y × Y → R (e.g., cross-entropy) is used to minimize the empirical risk in the source domain. The choice of function ˆ̀ : Y × Y → R and of conjugate φ∗ of the f -divergence defines the particular algorithm (Ganin et al., 2016; Saito et al., 2018; Zhang et al., 2019; Acuna et al., 2021). From eq. 2, we can notice that GRL introduces an adversarial scheme. We next interpret eq. 2 as a three-player game where the players are ĥ, ĥ′ and g, and study its continuous gradient dynamics.
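The pseudo-function Rλ in Equation (3) is typically realized with a custom autograd function; a minimal PyTorch sketch (class and function names are ours, not from the paper):

import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; scales the gradient by -lambd in the backward pass.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # Eq. (3); None is the grad for lambd

def grl(x, lambd=1.0):
    return GradReverse.apply(x, lambd)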
3 A GAME PERSPECTIVE ON DAL
We now interpret DAL from a game-theoretical perspective. In § 3.1, we rewrite the DAL objective as a three-player game. In this view, each of the feature extractor and two classifiers is a player. This allows us to define optimality in terms of local Nash Equilibrium (see Def. 2 in Appendices). In § 3.2, we introduce the vector field, the game Hessian and the tools that allow us to characterize local NE for the players. This characterization leads to our analysis of the continuous dynamics in § 4.
3.1 DOMAIN-ADVERSARIAL GAME
We now rewrite and analyze the DAL problem in eq. 2 as a three-player game. Let Ĥ, Ĥ′ and G be classes of neural network functions and define ω1 ⊆ Ω1 and ω3 ⊆ Ω3 as a vector composed of the parameters of the classifier and domain classifier networks ĥ ∈ Ĥ and ĥ′ ∈ Ĥ, respectively. Similarly, let ω2 ⊆ Ω2 be the parameters of the feature extractor network g ∈ G. Their joint domain is denoted by Ω = Ω1 × Ω2 × Ω3 and the joint parameter set is ω = (ω1, ω2, ω3). Let each neural network be a player and its parameter choice to be its individual strategy (here continuous). The goal of each player is then to selfishly minimize its own cost function Ji : Ω→ R. We use the subscript −i to refer to all parameters/players but i. With the notation introduced, we can now formally define the Domain-Adversarial Game as the three-player game G(I,Ωi, Ji) where I := {1, 2, 3}, dim(Ω) = ∑3 i=1 dim(Ωi) = d, Ωi ⊆ Rdi and:
J1(ω1, ω−1) := ℓ(ω1, ω2) + α ds,t(ω),
J2(ω2, ω−2) := ℓ(ω1, ω2) + αλ ds,t(ω),
J3(ω3, ω−3) := −α ds,t(ω), (4)
We use the shorthand `(ω1, ω2) for Ex,y∼ps [`(ω1 ◦ ω2(x), y)], and ωi’s refer to the feature extractor g and the classifiers (ĥ and ĥ′). Similar notation follows for ds,t. Here, we assume that each Ji is smooth in each of its arguments ωi ∈ Ωi. The gradient field of Equation (2) and the game’s vector field (see § 3.2) are equivalent, making the original interpretation of DAL and our three-player formulation equivalent. However, it is worth noting that our intepretation does not explicitly require the use of Rλ in ds,t in Equation (4). We can write optimality conditions of the above problem through the concept of Nash Equilibrium: Definition 1. (Nash Equilibrium (NE)) A point ω∗ ∈ Ω is said to be a Nash Equilibrium of the Domain-Adversarial Game if ∀i ∈ {1, 2, 3},∀ωi ∈ Ωi, we have: Ji(ω∗i , ω∗−i) ≤ Ji(ωi, ω∗−i). In our scenario, the losses are not convex/concave. NE then does not necessarily exist and, in general, finding NE is analogous to, but much harder than, finding global minima in neural networks – which
is unrealistic using gradient-based methods (Letcher et al., 2019). Thus, we focus on local NE which relaxes the NE to a local neighborhood B(w∗, δ) := {||w − w∗|| < δ} with δ > 0 (see Definition 2). Intuitively, a NE means that no player has the incentive to change its own strategy (here parameters of the neural network) because it will not generate any additional pay off (here it will not minimize its cost function). We emphasize that each player only has access to its own strategy set. In other words, the player J1 cannot change the parameters ω2, ω3. It only has access to ω1 ∈ Ω1. While the motivation of the three-player game follows naturally from the original formulation of DAL where three networks interact with each other (see Figure 1), the optimization problem (2) could also be interpreted as the minimax objective of a two-player zero-sum game. Thus, a natural question arises: can we interpret the domain-adversarial game as a two player zero-sum game? This can be done for example by defining ω∗12 := (ω ∗ 1 , ω ∗ 2), and considering the cost of the two players (ω12, ω3) as J12 = J and J3 = −J where J(ω12, ω3) := Eps [`(ω1, ω2)] + ds,t(ω). In general, however, the solution of the two-player game (ω∗12, ω ∗ 3) is not equal to the NE solution of the three-player game (ω∗1 , ω ∗ 2 , ω ∗ 3). This is because the team optimal solution ω ∗ 12 6= (ω∗1 , ω∗2) in general. We illustrate this in the following counterexample (see Başar & Olsder (1998) for more details): Example 1. Let the function J(ω) := 12 ( ω21 + 4ω1ω2 + ω 2 2 − ω23 ) . (a) Suppose the three-player game ω = (ω1, ω2, ω3) with J1 = J2 = J and J3 = −J . Each Ji is strictly convex in ωi. The NE solution of the game ω∗ = (0, 0, 0) is unique. (b) Suppose the two-player game ω = (ω12, ω3) with J12 = J and J3 = −J . The solution ω∗ from (a) is not a NE solution. To see this, let ω̂ := (−1, 1, 0). We have that J12(ω̂) = −1 < J12(ω∗) = 0. This contradicts Definition 1. One can verify that there is no NE in this two-player scenario.
3.2 CHARACTERIZATION OF THE DOMAIN-ADVERSARIAL GAME
We now introduce the game’s vector field (also called pseudo-gradient) and the pseudo-gradient’s Jacobian. We also provide a characterization of local NE based on them (see § 3). These are the core concepts used in our analysis (§ 4). We first define the game’s vector field v(w), and its Jacobian H(ω) (also called the game Hessian (Letcher et al., 2019)):
v(ω) := (∇ω1J1,∇ω2J2,∇ω3J3) ∈ Rd, H(ω) := ∇v(ω) ∈ Rd×d (5) Note that the vector field v(w) and the three-player formulation naturally capture the behavior introduced by the GRL in the original formulation. Specifically, v(ω) is identical to the gradient with respect to the parameters of the original DAL objective with GRL (Equation (2)). Therefore, in both cases the behavior of GD is identical. Assuming the same initial conditions, they will reach the same solution. This shows the equivalence between our perspective and the original DAL formulation. We emphasize that by equivalence, we mean the same dynamics, and the same intermediate and final solutions. Another fact worth emphasizing is that H(ω) is asymmetric. This is in contrast with the Hessian in supervised learning. Before proceeding with a characterization of local NE in terms of v(w) and H(ω), we first define sufficient and necessary conditions for local NEs: Proposition 1. (Necessary condition). Suppose each Ji is twice continuously differentiable in each ωi, any local NE ω∗ satisfies: i)∇ωiJi(ω∗) = 0 and ii) ∀i ∈ {1, 2, 3},∇2ωi,ωiJi(ω∗) 0. Proposition 2. (Sufficient condition). Suppose each Ji is twice continuously differentiable in each ωi. ω∗i is a local NE if i) ∇ωiJi(ω∗) = 0 and ii) ∀i,∇2ωi,ωiJi(ω∗) 0.
The necessary and sufficient conditions from Propositions 1 and 2 are reminiscent of conditions for local optimality in continuous optimization (Nocedal & Wright, 2006). Similar conditions were also proposed in Ratliff et al. (2016) where the sufficient condition defines the differential Nash equilibrium. We can now characterize a local NE in terms of v(w) and H(ω): Proposition 3. (Strict Local NE) w is a strict local NE if v(w) = 0 and H(ω) +H(ω)> 0. The sufficient condition implies that the NE is structurally stable (Ratliff et al., 2016). Structural stability is important as it implies that slightly biased estimators of the gradient (e.g., due to sampling noise) will not have vastly different behaviors in neighborhoods of equilibria (Mazumdar et al., 2020). In the following, we focus on the strict local NE (i.e., ω∗ for which Proposition 3 is satisfied).
4 LEARNING ALGORITHMS
We defined optimality as the local NE and provided sufficient conditions in terms of the pseudogradient and its Jacobian. In this section, we assume the existence of the strict local NE (Prop. 3)
in the neighborhood of the current point (e.g., initialization), and analyze the continuous gradient dynamics of the Domain-Adversarial Game (eq. 4 and eq. 5). We show that given the sufficient conditions from Prop. 3, asymptotic convergence to a local NE is guaranteed through an application of Hurwitz condition (Khalil, 2002). Most importantly, we show that using GD with the GRL could violate those guarantees unless its learning rate is upper bounded (see Thm. 2 an Cor. 1). This is in sharp contrast with known results from supervised learning where the implicit regularization introduced by GD has been shown to be desirable (Barrett & Dherin, 2021). We also analyze the use of higher-order ODE solvers for DAL and show that the above restrictions are not required if GD is replaced with them. Finally, we compare our resulting optimizers with recently algorithms in the context of games.
Our algorithmic analysis is based on the continuous gradient-play dynamics and the derivation of the modified or high-resolution ODE of popular integrators (e.g., GD/Euler Method and Runge-Kutta). This type of analysis is also known in the numerical integration community as backward error analysis (Hairer et al., 2006) and has recently been used to understand the implicit regularization effect of GD in supervised learning (Barrett & Dherin, 2021). High resolution ODEs have also been used in Shi et al. (2018) to understand the acceleration effect of optimization algorithms, and more recently in Lu (2020). As in Shi et al. (2018); Lu (2020); Barrett & Dherin (2021), our derivation of the high resolution ODEs is in the full-batch setting. The derivation of the stochastic dynamics of stochastic discrete time algorithms is significantly more complicated and is beyond the scope of this work.
We experimentally demonstrate that our results are also valid when there is stochasticity due to sampling noise in the mini-batch. We emphasize that our analysis does not put any constraint or structure on the players’ cost functions as opposed to Azizian et al. (2020); Zhang & Yu (2020). In our problem, the game is neither bilinear nor necessarily strongly monotone. See proofs in appendices.
4.1 CONTINUOUS GRADIENT DYNAMICS
Given v(ω) the continuous gradient dynamics can be written as:
ω̇(t) = −v(ω). (6) For later reasons and to distinguish between eq. 6 and the gradient flow, we will refer to these as the gradient-play dynamics as in Başar & Olsder (1998); Mazumdar et al. (2020). These dynamics are well studied and understood when the game is either a potential or a purely adversarial game (see definitions in appendices). While eq. 2 may look like a single objective, the introduction of the GRL (Rλ), makes a fundamental difference between our case and the dynamics that are analyzed in the single-objective gradient-based learning and optimization literature. We summarize this below: Proposition 4. The domain-adversarial game is neither a potential nor necessarily a purely adversarial game. Moreover, its gradient dynamics are not equivalent to the gradient flow.
Fortunately, we can directly apply the Hurwitz condition (Khalil, 2002) (also known as the condition for asymptotic stability, see Appendix A.1) to derive sufficient conditions for which the continuous dynamics of the gradient play would converge. Lemma 1. (Hurwitz condition) Let ∇v(w∗) be the Jacobian of the vector field at a stationary point w∗ where v(w∗) = 0. If the real part of every eigenvalue λ of ∇v(w∗) (i.e. in the spectrum Sp(∇v(w∗))) is positive then the continuous gradient dynamics are asymptotically stable. In this work, we assume the algorithms are initialized in a neighborhood of a strict local NE ω∗. Therefore, Lemma 1 provides sufficient conditions for the asymptotic convergence of the gradientplay dynamics to a local NE. In practice this assumption may not hold, and it is computationally hard to verify. Despite this, our experiments show noticeable performance gains in several tasks, benchmarks and network architectures (see § 6).
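Given (an estimate of) the Jacobian of v at a stationary point, Lemma 1 reduces to an eigenvalue check; a minimal sketch (the helper name is ours):

import numpy as np

def satisfies_hurwitz(jac_v):
    # Lemma 1: the gradient-play dynamics w' = -v(w) are asymptotically
    # stable at w* if every eigenvalue of grad v(w*) has positive real part.
    return bool(np.all(np.linalg.eigvals(jac_v).real > 0))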
4.2 ANALYSIS OF GD WITH THE GRL
We showed above that given the existence of a strict local NE, the gradient-play dynamics are attracted to the strict local NE. A natural question then arises: If under this assumption local asymptotic convergence is guaranteed, what could make DAL notoriously hard to train and unstable? In practice, we do not have access to an explicit solution of the ODE. Thus, we rely on integration algorithms to approximate the solution. One simple approach is to use the Euler method:
w+ = w − ηv(w). (7)
This is commonly known as GD. The equivalence between v(w) (game’s vector field) and the gradient of Equation (2) (original DAL formulation) follows from the use of the GRL (Rλ). We remind the reader that the GRL is a “pseudo-function” defined by two (incompatible) equations describing its forward and back-propagation behavior, i.e., a flip in the gradient’s sign for the backward pass (see Figure 1, Section 2 and Ganin et al. (2016)). Equation (7) is then the default algorithm used in DAL. Now, to provide an answer to the motivating question of this section, we propose to analyze the high-resolution ODE of this numerical integrator (i.e., Euler) and in turn its asymptotic behavior. This is similar to deriving the modified continuous dynamics for which the integrator produces the exact solution (Hairer et al., 2006) and applying Hurwitz condition on the high-resolution ODE. Theorem 2. The high resolution ODE of GD with the GRL up to O(η) is:
ẇ = −v(w) − (η/2)∇v(w)v(w) (8)
Moreover, this is asymptotically stable (see Appendix A.1) at a stationary point w∗ (Proposition 3) iff for every eigenvalue written as λ = a + ib ∈ Sp(−∇v(w∗)), we have 0 > η(a² − b²)/2 > a. A striking difference between Equation (6) and Equation (8) is made clear (additional term marked in red). This additional term is a result of the discretization of the gradient-play dynamics using Euler's method (i.e., GD) and leads to a different Jacobian of the dynamics. This term was recently shown to be beneficial for standard supervised learning (Barrett & Dherin, 2021), where ∇v(ω∗) is symmetric and thus only has real eigenvalues. In our scenario, this term is undesirable. In fact, this additional term puts an upper bound on the learning rate η. The following corollary formalizes this:
Corollary 1. The high-resolution ODE of GD with GRL in Equation (8) is asymptotically stable only if the learning rate η is in the interval 0 < η < −2a/(b² − a²), for all λ = a + ib ∈ Sp(−∇v(w∗)) with large imaginary part (i.e., such that |a| < |b|). To have good convergence properties, the imaginary part of the eigenvalues of −∇v(w∗) must be small enough. Therefore, if some eigenvalue λ = a + ib satisfies a < 0 and b² ≫ a², −2a, the learning rates should be chosen to be very small. This is verified in Section 6 and in Example 2.
Example 2. Consider the three-player game where ℓ(w1, w2) = w1² + 2w1w2 + w2², λ = 1 and ds,t(w2, w3) = w2² + 99w2w3 − w3². Then ẇ = −v(w) becomes:
ẇ = Aw,  A = ( −2  −2    0
               −2  −4  −99
                0   99   −2 ).
The eigenvalues of A are −2 and −3 ± 2i√2449. From Corollary 1, η should be 0 < η < 6.2 × 10−3.
4.3 HIGHER ORDER ODE SOLVERS
The limitation described above exists because GD with the GRL can be understood as a discretization of the gradient-play dynamics using Euler’s Method. Thus, it only approximates the continuous dynamics up to O(η). To obtain a better approximation, we consider Runge-Kutta (RK) methods of order two and beyond (Butcher, 1996). For example, take the improved Euler’s method (a particular RK method of second order) that can be written as:
w+ = w − (η/2)(v(w) + v(w − ηv(w))). (9)
Comparing Equation (9) (i.e., the update rule of RK2) with Equation (7) (i.e., the update rule of GD), one can see that the RK2 method is straightforward to implement in standard deep learning frameworks. Moreover, it does not introduce additional hyper-parameters. More importantly, such discrete dynamics approximate the continuous ODE of Equation (6) to a higher precision. In Appendix C, we provide asymptotic guarantees for the high-resolution ODE of general RK methods, their generalized expression and the algorithm pseudo-code. See also the PyTorch pseudo-code in Appendix E.
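As an illustration of how Equation (9) can drop into a standard training loop, a simplified PyTorch-style sketch; the helper pseudo_grad, which re-evaluates the game's vector field and returns detached gradient copies, is an assumption of ours, as are all names:

import torch

@torch.no_grad()
def _axpy(params, grads, alpha):
    # params <- params + alpha * grads, in place
    for p, g in zip(params, grads):
        p.add_(g, alpha=alpha)

def rk2_update(params, pseudo_grad, eta):
    # Improved Euler's method (Eq. (9)): w+ = w - (eta/2)(v(w) + v(w - eta v(w))).
    g1 = pseudo_grad()        # v(w), as detached copies
    _axpy(params, g1, -eta)   # tentative point w - eta * v(w)
    g2 = pseudo_grad()        # v(w - eta * v(w))
    _axpy(params, g1, +eta)   # restore w
    with torch.no_grad():
        for p, a, b in zip(params, g1, g2):
            p.sub_((eta / 2.0) * (a + b))

The cost is one extra evaluation of the pseudo-gradient per iteration, consistent with the wall-clock discussion in Appendix E.5.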
Limitation. A disadvantage of using high-order solvers is that they require additional extra steps. Specifically, one extra step in the case of RK2 (computation of the additional second term in Equation (9)). In our implementation, however, this was less than 2x slower in wall-clock time (see Appendix E.5 for more details and wall-clock comparison). Moreover, if not initialized in the neighborhood of a local NE, high-order solvers and gradient-based methods might also converge to a non-NE as described in Mazumdar et al. (2019) although this is likely a rare case.
Comparison vs other game optimization algorithms. DAL has not been previously interpreted from a game perspective. Our interpretation allows us to bring recently proposed algorithms to the context of differentiable games (Zhang & Yu, 2020; Azizian et al., 2020) to DAL. Classic examples are the Extra-Gradient (EG) method (Korpelevich, 1976) and Consensus Optimization (CO) (Mescheder et al., 2017). In Appendix B.2 we analyze the continuous dynamics of the EG method, and show that
we cannot take the learning rate of EG to be large either. Thus, we obtain a similar conclusion as Corollary 1. Then, in practice for DAL, stability for EG comes at the price of slow convergence due to the use of small learning rates. We experimentally show this in Figure 3. With respect to CO, we show in Appendix C that this algorithm can be interpreted in the limit as an approximation of the RK2 solver. In practice, if its additional hyper-parameter (γ) is tuned thoroughly, CO may approximate the continuous dynamics better than GD and EG. We believe this may be the reason why CO slightly outperforms GD and EG (see Appendix E.4). In all cases, RK solvers outperform GD, EG and CO. This is in line with our theoretical analysis since they better approximate the continuous dynamics (Hairer et al., 2006). It is worth noting that many other optimizers have recently been proposed in the context of games e.g., Gidel et al. (2019a); Hsieh et al. (2020); Lorraine et al. (2021a;b). Some of them are modifications of the EG method that we compared to e.g. Extra-Adam (Gidel et al., 2019a) or double step-size EG (Hsieh et al., 2020). More practical modifications in terms of adaptive step size could also be applied on top of RK solvers as done in Qin et al. (2020). A comparison of all existing game optimizers in DAL, and a better theoretical understanding of such modification on RK solvers are beyond the scope of this work. However, we believe it is an interesting and unexplored research direction that our game perspective on DAL enables.
5 RELATED WORK
To the best of our knowledge, DAL has not been previously analyzed from a game perspective. Moreover, the stability of the optimizer and the implications of introducing the GRL has not been analyzed either. Here, we compare our results with the general literature.
Gradient-Based Learning in Games. Ratliff et al. (2016) proposed a characterization of local Nash Equilibrium providing sufficient and necessary conditions for its existence. Mazumdar et al. (2020) proposed a general framework to analyze the limiting behavior of the gradient-play algorithms in games using tools from dynamical systems. Our work builds on top of this characterization but specializes them to the domain-adversarial problem. We propose a more stable learning algorithm that better approximates the gradient-play dynamics. Our resulting algorithm does not introduce explicit adjustments or modify the learning dynamics, nor does it require the computation of the several Hessian vector products or new hyperparameters. This is in contrast with general algorithms previously analyzed in the context of differentiable games (Azizian et al., 2020; Letcher et al., 2019).
Integration Methods and ML. Scieur et al. (2017) showed that accelerated optimization methods can be interpreted as integration schemes of the gradient flow equation. Zhang et al. (2018) showed that the use of high order RK integrators can achieve acceleration in convex functions. In the context of two-players game (i.e GANs), Gemp & Mahadevan (2018) consider using a second-order ODE integrator. More recently, Qin et al. (2020) proposed to combine RK solvers with regularization on the generators’ gradient norm. Chen et al. (2018) interpreted the residual connection in modern networks as the Euler’s integration of a continuous systems. In our case, we notice that the combination of GD with the GRL can be interpreted as the Euler’s discretization of the continuous gradient play dynamics, which could prevent asymptotic convergence guarantees. We then study the discretization step of popular ODE solvers and provide simple guarantees for stability. Moreover, our analysis is based on a novel three-player game interpretation of the domain-adaptation problem. This is also different from a single potential function or two-player games (i.e. GANs).
Two-Player Zero-Sum Games have recently received significant attention in the machine learning literature due to the popularity of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). For example, several algorithms have been proposed and analyzed (Mescheder et al., 2017; Mertikopoulos et al., 2019; Gidel et al., 2019a;b; Zhang & Yu, 2020; Hsieh et al., 2020), in both deterministic and stochastic scenarios. In our problem, we have a general three-player games resulting of a novel game interpretation of the domain-adversarial problem. It is worth noting that while Gidel et al. (2019a) focused on GANs, their convergence theory and methods for stochastic variational inequalities could also be applied to three-players games and thus DAL using our perspective.
6 EXPERIMENTAL RESULTS
We conduct an extensive experimental analysis. We compare with default optimizers used in domainadversarial training such as GD, GD with Nesterov Momentum (GD-NM) (as in Sutskever et al. (2013)) and Adam (Kingma & Ba, 2014). We also compare against recently proposed optimizers in the context of differentiable games such as EG (Korpelevich, 1976) and CO (Mescheder et al., 2017). We focus our experimental analysis on the original domain-adversarial framework of Ganin et al.
(2016) (DANN). However, in section 6.2, we also show the versatility and efficacy of our approach improving the performance of recently proposed SoTA DAL framework (e.g., f -DAL (Acuna et al., 2021) combined with Implicit Alignment (Jiang et al., 2020)).
6.1 EXPERIMENTAL ANALYSIS ON DIGITS
Implementation Details. Our first experimental analysis is based on the digits benchmark with models trained from scratch (i.e., with random initialization). This benchmark constitutes of two digits datasets MNIST (CC BY-SA 3.0) and USPS (LeCun et al., 1998; Long et al., 2018) with two transfer tasks (M → U and U →M). We adopt the splits and evaluation protocol from Long et al. (2018) and follow their standard implementation details.
For GD-NM, we use the default momentum value (0.9). We follow the same approach for the additional hyper-parameters of Adam. Hyperparameters such as learning rate, learning schedule and adaptation coefficient (λ) are determined for all optimizers by running a dense grid search and selecting the best hyper-parameters on the transfer task M→U. As usual in UDA, the best criteria are determined based on best transfer accuracy. The same parameters are then used
for the other task (i.e., U→M). We use the same seed and identically initialize the network weights for all optimizers. This analysis is conducted on Jax (Bradbury et al., 2018) (see Appendix D).
Comparison vs optimizers used in DAL. Figure 2 (top) illustrates the training dynamics for the loss in the target domain and the performance transfer. As expected, our optimizer converges faster and achieves noticeable performance gains. A core idea of DAL is to learn domain-invariant representations, thus we plot in Figure 2 (bottom) t-SNE (Van der Maaten & Hinton, 2008) visualizations of the last layer features of the network. We show this over a sequence of epochs for GD with GRL vs RK2. A different color is used for the source and target datasets. In the comparison vs Adam, we emphasize that Adam computes adaptive learning rates which our method does not. That said, Figure 2 shows that our two methods RK2 and RK4
outperform all baselines in terms of both convergence and transfer performance. In Figure 7, we show how unstable these standard optimizers become when more aggressive step sizes are used. This is in line with our theoretical analysis. Experimentally, it can be seen that in DAL, GD is more stable than GD-NM and Adam, with the latter being the most unstable. This sheds light on why well-tuned GD-NM is often preferred over Adam in DAL.
Comparison vs game optimization algorithms. We now compare RK solvers vs other recently proposed game optimization algorithms. Specifically, we compare vs the EG method (Korpelevich, 1976) and CO (Mescheder et al., 2017). In every case, we perform a dense grid search under the same budget for all the optimizers and report the best selection (see Appendix E for details). In line with our theoretical analysis of the continuous dynamics of EG, we notice that the EG method is not able to train with learning rates larger than 0.006; as a result, it performs significantly worse than any other optimizer (including simple GD). Also in line with our theoretical analysis, CO performs better than EG and all other popular gradient algorithms used in DAL. This is because CO can be seen as an approximation of Heun's method (RK2). More details are given in the supplementary material.
Robustness to hyper-parameters. Figure 4 shows the transfer performance of our method for different choices of hyper-parameters, while highlighting (green line) the best score obtained by GD with its best-performing hyper-parameters on the same dataset. Our method is robust to a wide variety of hyper-parameters.
Figure 4: Robustness to hyper-parameters. We compare the transfer performance of our method for different hyper-parameters in the task M→U on the Digits benchmark (panels vary the learning rate, the LR schedule (const/poly), the method (RK2/RK4) and the adaptation coefficient). The green line shows the best score for the best-performing hyper-parameters of GD; the blue star corresponds to the best solution. Our method performs well for a wide variety of hyper-parameters.
Figure 6: Transfer performance on VisDA (DANN) over training iterations: Ours (RK2) vs. GD with Nesterov momentum.

Figure 7: Stability analysis on Digits: target task loss at the most aggressive step size before divergence (GD at LR 0.3, Nesterov momentum at LR 0.1). Adam diverges for η > 0.001.
6.2 COMPARISON IN COMPLEX ADAPTATION TASKS

Table 1: Accuracy (DANN) on VisDA-2017 with ResNet-50.
Method     | Sim→Real
GD-NM      | 71.7 ± 0.7
Ours (RK2) | 73.8 ± 0.3
We evaluate the performance of our algorithm with ResNet-50 (He et al., 2016) on more challenging adaptation benchmarks. Specifically, this analysis is conducted on the VisDA-2017 benchmark (Peng et al., 2017), a simulation-to-real dataset with two different domains: (S) synthetic renderings of 3D models and (R) real images. For this experiment, we use PyTorch (Paszke et al., 2019); our evaluation protocol follows Zhang et al. (2019) and uses ResNet-50 as the backbone network. For the optimizer parameters, we thoroughly tune GD-NM, which is the optimizer used in this setting (Long et al., 2018; Zhang et al., 2019; Jiang et al., 2020; Acuna et al., 2021). For ours, we keep the same hyper-parameters but increase the learning rate to 0.2 and the batch size to 128. In this task, our approach corresponds to the improved Euler's method (RK2). Table 1 shows the comparison. Figure 6 compares the training dynamics of our method vs GD-NM. In Figure 5, we evaluate the sensitivity of our method (in terms of transfer performance) to sampling noise as controlled by the batch size.
Improving SoTA DAL frameworks. We use this complex visual adaptation task to showcase the applicability of our method to SoTA DAL frameworks. Specifically, we let the DA method be f-DAL Pearson as in Acuna et al. (2021) with Implicit Alignment (Jiang et al., 2020). We use the tuned parameters and optimizer from Acuna et al. (2021); Jiang et al. (2020) as the baseline. In our case, we only increase the learning rate (to 0.2). Table 2 shows that our method achieves peak results (+3.5%) in 10.5K iterations (vs 29.5K iterations for GD-NM).

Natural Language Processing Tasks. We also evaluate our approach on natural language processing tasks on the Amazon product reviews dataset (Blitzer et al., 2006). We show noticeable gains by replacing GD with either RK2 or RK4. Results and details can be found in Appendix E.1.
7 CONCLUSIONS
We analyzed DAL from a game-theoretical perspective where optimality is defined as a local NE. From this view, we showed that standard optimizers in DAL can violate the asymptotic guarantees of the gradient-play dynamics, requiring careful tuning and small learning rates. Based on our analysis, we proposed to replace existing optimizers with higher-order ODE solvers. We showed both theoretically and experimentally that these are more stable and allow for higher learning rates, leading to noticeable improvements in terms of the transfer performance and the number of training iterations. We showed that these ODE solvers can be used as a drop-in replacement and outperform strong baselines.
Acknowledgements. We would like to thank James Lucas, Jonathan Lorraine, Tianshi Cao, Rafid Mahmood, Mark Brophy and the anonymous reviewers for feedback on earlier versions of this work.
SUPPLEMENTARY MATERIAL
CONTENTS
1 Introduction
2 Preliminaries
3 A Game Perspective on DAL
  3.1 Domain-Adversarial Game
  3.2 Characterization of the Domain-Adversarial Game
4 Learning Algorithms
  4.1 Continuous Gradient Dynamics
  4.2 Analysis of GD with the GRL
  4.3 Higher order ODE Solvers
5 Related Work
6 Experimental Results
  6.1 Experimental Analysis on Digits
  6.2 Comparison in complex adaptation tasks
7 Conclusions
A Concepts in Game Theory
  A.1 Definitions
  A.2 Games Characterizations
  A.3 Case Study in DANN: Original Formulation from Ganin et al. (2016)
B Derivation of high-resolution ODEs
  B.1 High-resolution ODE of second-order Runge–Kutta method
  B.2 Continuous dynamics of Extra-Gradient (EG)
  B.3 High-resolution ODE of classic fourth-order Runge–Kutta method (RK4)
C Proofs and additional theoretical results
  C.1 Proposed Learning Algorithm
  C.2 CO approximates RK2 (Heun's Method)
D Experimental Setup Additional Details
E Additional Experiments
  E.1 Natural Language Processing Tasks
  E.2 Sensitivity to Sampling Noise
  E.3 Additional Comparison vs Game Optimization Algorithms
  E.4 CO vs Gradient Descent and Extra-Gradient Algorithms
  E.5 Wall-Clock Comparison
F PyTorch PseudoCode of RK2 Solver
A CONCEPTS IN GAME THEORY
A.1 DEFINITIONS
Definition 2. (Local Nash Equilibrium) A point (ω_i*, ω_{-i}*) ∈ Ω is said to be a local Nash Equilibrium of the domain-adversarial game if there exists some δ > 0 such that:

∀i ∈ {1, 2, 3}: J_i(ω_i*, ω_{-i}*) ≤ J_i(ω_i, ω_{-i}*) for all ω_i with ||ω_i − ω_i*||₂ < δ. (10)

Intuitively, this restricts the concept of NE to a local neighborhood B(ω*, δ) := {ω : ||ω − ω*||₂ < δ} with δ > 0.
A more practical characterization of the NE can be given in terms of the Best Response Map of each player which we now define.
Definition 3. (Best Response Map (BR)) The best response map BR_i : Ω_{-i} ⇒ Ω_i of player i is defined as:

BR_i(ω_{-i}) := argmin_{ω_i ∈ Ω_i} J_i(ω_i, ω_{-i}), (11)

where the symbol ⇒ emphasizes that the best response map is in general a set-valued map and not a singleton, and thus not a function. In other words, there may be a subset of elements in Ω_i for which J_i(·, ω_{-i}) attains its minimum.
The notion of NE can be defined in terms of the generalized map BR : Ω ⇒ Ω, which can be thought of as a stacked map whose i-th element is BR_i(ω_{-i}).

Proposition 5. A point ω* ∈ Ω is said to be a NE of the game if it is a fixed point of the generalized map BR : Ω ⇒ Ω. That is,

ω* ∈ BR(ω*) ⟹ ∀i ∈ {1, 2, 3}, ω_i* ∈ BR_i(ω_{-i}*). (12)

Proof. This follows from the definitions of the BR map and of the NE.
Definition 4. (Asymptotically Stable) A point ω is said to be a locally asymptotically stable point of the continuous dynamics ω̇ = f(ω) if Re(λ) < 0 for all λ ∈ Sp(∇f(ω)), where Sp(∇f(ω)) is the spectrum of ∇f(ω).
Definition 4 is also known as the Hurwitz condition (Khalil, 2002).
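As a concrete illustration, a minimal numerical check of this condition might look as follows (a sketch assuming NumPy; `jac` stands for the Jacobian ∇f(ω) evaluated at the point of interest):

```python
import numpy as np

def is_hurwitz(jac):
    # Definition 4: the point is locally asymptotically stable for the
    # dynamics w_dot = f(w) iff every eigenvalue of jac = grad f(w)
    # has strictly negative real part.
    return bool(np.all(np.linalg.eigvals(jac).real < 0))

# A damped rotation is Hurwitz; a pure rotation (Re = 0) is not.
print(is_hurwitz(np.array([[-0.1, 10.0], [-10.0, -0.1]])))  # True
print(is_hurwitz(np.array([[0.0, 1.0], [-1.0, 0.0]])))      # False
```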
Definition 5. A stationary point x* of a C² function φ : Rⁿ → R is said to be a strict saddle point if:

• λ_min(∇²_{xx}φ(x*)) < 0, and

• λ > 0 for every other λ ∈ Sp(∇²_{xx}φ(x*)).
A.2 GAMES CHARACTERIZATIONS
Potential Games. Potential Games were introduced in Monderer & Shapley (1996) and can be defined as a type of game for which there exists an implicit potential function φ : Ω → R such that ∇φ(ω) = v(ω). Consequently, a necessary and sufficient condition for the game to be potential is the Jacobian of the vector field ∇v(ω) being symmetric (see 3.3 in Mazumdar et al. (2020) and Monderer & Shapley (1996)).
Purely Adversarial Games. This particular type of game refers to the other extreme, in which H(ω) is a non-symmetric matrix with purely imaginary eigenvalues. If the game Hessian is skew-symmetric, these have also been called Hamiltonian games (Letcher et al., 2019).
A.3 CASE STUDY IN DANN: ORIGINAL FORMULATION FROM GANIN ET AL. (2016)
As mentioned in the main text (Section 2), our analysis is compatible with both the original and more recent formulations of domain-adversarial training, such as Zhang et al. (2019); Acuna et al. (2021). In this section, we specifically derive additional results for DANN (Ganin et al., 2016).
In order to obtain the original formulation of DANN, let us define ℓ̂(·, b) := log σ(b) (the first argument is ignored) and φ*(t) := −log(1 − e^t) in Equation (2). This corresponds to the Jensen-Shannon divergence (JS) (up to a constant shift that does not affect optimization). We can then rewrite d_{s,t} as:

d_{s,t} = E_{x∼p_s}[log σ(ĥ′ ∘ g(x))] + E_{x∼p_t}[log(1 − σ(ĥ′ ∘ g(x)))] (13)

where σ(x) := 1/(1 + e^{−x}). To simplify the notation, we write H := σ ∘ H.
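As an illustration, an empirical estimator of Equation (13) can be written in a few lines; the following sketch assumes PyTorch, with `logits_s` and `logits_t` denoting the domain-classifier outputs ĥ′ ∘ g(x) on a source and a target batch (the function name is ours):

```python
import torch.nn.functional as F

def js_discrepancy(logits_s, logits_t):
    # Empirical estimate of d_{s,t} in Equation (13):
    # E_s[log sigma(d)] + E_t[log(1 - sigma(d))].
    # log(1 - sigma(x)) = logsigmoid(-x) is used for numerical stability.
    return F.logsigmoid(logits_s).mean() + F.logsigmoid(-logits_t).mean()
```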
We can now re-define the pseudo-gradient v(ω) of the game as the gradient of each player's loss with respect to its own parameters. Letting α = 1, we get from Equation (4):

v(ω) := (∇_{ω1}ℓ, ∇_{ω2}(ℓ + λ d_{s,t}), −∇_{ω3} d_{s,t}) ∈ R^d. (14)
The following propositions characterize local NE in terms of the pseudo-gradient v(ω) and its Jacobian H(ω).

Proposition 6. (Local NE) Suppose v(ω) = 0 and

[ ∇²_{ω1}ℓ        ∇²_{ω1,ω2}ℓ
  ∇²_{ω1,ω2}ℓ     ∇²_{ω2}(ℓ + λ d_{s,t}) ]  ≻ 0,    ∇²_{ω3} d_{s,t} ≺ 0, (15)

then ω is an isolated local NE.

The proof is simple and follows from Propositions 1 and 2, the definition of the vector field v(ω), and the condition H + Hᵀ ≻ 0.
Cooperation with Competition. By examining the matrix H(ω), one can see that, in our scenario, the game is neither a potential game nor a purely adversarial game. However, we can write the vector field in the following form:
v(ω) = (∇_{ω1}ℓ, ∇_{ω2}ℓ, 0) + (0, λ∇_{ω2}d_{s,t}, −∇_{ω3}d_{s,t}) =: ∇φ(ω) + v̂(ω), (16)

where the first part corresponds to the gradient of the potential function φ(ω) = ℓ(ω1, ω2). The second part, on the other hand, corresponds to a function v̂(ω) whose Jacobian is a non-symmetric matrix. Analyzing them separately leads to either a potential or an adversarial game, respectively. We define this particular type of game as cooperation (i.e., the potential term) with competition (i.e., the adversarial term).
It is worth noting that, while the spectrum of the game Hessian for the first term has only real eigenvalues, the second term can have complex eigenvalues with a large imaginary component. Indeed, it can be shown that this second term approximates the one obtained for a GAN using the non-saturating loss proposed by Goodfellow et al. (2014) (e.g., λ = 1). In other words, the second term can be written as the pseudo-gradient of the two-player zero-sum game min_{ω2} max_{ω3} d_{s,t}. Building on this key observation and the work of Mescheder et al. (2017); Berard et al. (2020) (Figure 4), where it was experimentally shown that the eigenvalues of the game Hessian for GANs indeed have a large imaginary component around stationary points, we can assume that the spectrum of the game Hessian in our case also has eigenvalues with a large imaginary component around the stationary points. This observation can also be used with Corollary 1 to further motivate the use of higher-order ODE solvers instead of GD with the GRL.

Example 3. Consider the three-player game of Equation (16) where ℓ(w1, w2) = w1² + 2w1w2 + w2², λ = 1 and d_{s,t}(w2, w3) = w2² + 99w2w3 − w3². The gradient-play dynamics ẇ = −v(w) become:

ẇ = Aw,  A = [−2 −2 0; −2 −4 −99; 0 99 −2].

The eigenvalues of A are −2 and −3 ± 2i√2449. From Corollary 1, η should satisfy 0 < η < 6.2 · 10⁻³.
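The numbers in Example 3 are easy to verify numerically; the following sketch (assuming NumPy) also confirms that the Jacobian of the dynamics is asymmetric, so the game is not a potential game:

```python
import numpy as np

# Dynamics matrix of Example 3: w_dot = A w (so grad v = -A).
A = np.array([[-2.0, -2.0,   0.0],
              [-2.0, -4.0, -99.0],
              [ 0.0, 99.0,  -2.0]])

print(np.linalg.eigvals(A))  # -3 +/- 98.97j and -2, where 98.97 ~ 2*sqrt(2449)
print(np.allclose(A, A.T))   # False: asymmetric Jacobian, not a potential game
```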
Is the three-player game formulation desired? In domain adaptation, optimization is a means to an end. The final goal is to minimize the upper bound from Theorem 1 to ensure better performance in the target domain. One might then wonder whether interpreting optimality in terms of NE is desirable. In our context, NE means finding the optimal g*, ĥ* and ĥ′* of the cost functions defined in Equation (4). This in turn leads to minimizing the upper bound in Theorem 1.

Remark on sequential games. Recently, Jin et al. (2020) introduced a notion of local min-max optimality for two-player games, exploiting the sequential nature of some problems in adversarial ML (i.e., GANs). In domain-adversarial learning, updates are usually performed in a simultaneous manner using the GRL. Thus, we focus here on the general case where the order of the players is not known.
B DERIVATION OF HIGH-RESOLUTION ODES
Lemma 2. The high-resolution ODE resulting from the GD algorithm with the GRL is:

ẇ = −v(w) − (η/2)∇v(w)v(w) + O(η²), (17)
Proof. This follows from Corollary 1 of Lu (2020).
B.1 HIGH-RESOLUTION ODE OF SECOND-ORDER RUNGE–KUTTA METHOD
The high-resolution ODE was discussed in Shi et al. (2018); Lu (2020). For discrete algorithms with the following update:
w+ = w + f(η, w), (18)
we can think of the trajectory as a discretization of the continuous dynamics w : [0,+∞) → Rd, and in Equation (18), we have w = w(t), w+ = w(t+ η). Here, with slight abuse of notation we also use w for the continuous function of dynamics.
We derive the high-resolution ODE of the second-order Runge–Kutta method:

w_{k+1/2} = w_k − (η/(2α)) v(w_k),    w_{k+1} = w_k − η((1 − α) v(w_k) + α v(w_{k+1/2})),

where 0 < α ≤ 1 is a constant. If α = 1/2, we obtain Heun's method; if α = 1, we obtain the midpoint method; if α = 2/3, we obtain Ralston's method. Combining the two equations, we have:

(w_{k+1} − w_k)/η = −(1 − α) v(w_k) − α v(w_k − (η/(2α)) v(w_k)). (19)
Using the Taylor expansion:

v(w_k − (η/(2α)) v(w_k)) = v(w_k) − (η/(2α)) ∇v(w_k) v(w_k) + O(η²).

Plugging it back into Equation (19) and using the Taylor expansion w_{k+1} = w_k + ηẇ + η²ẅ/2, we have:

ẇ + (η/2) ẅ = −v(w) + (η/2) ∇v(w) v(w) + O(η²). (20)
Now we assume that the high-resolution ODE has the form:

ẇ = f₀(w) + η f₁(w) + O(η²). (21)

Taking the derivative over t, we have:

ẅ = ∇f₀(w) f₀(w) + O(η). (22)

Combining Equation (20), Equation (21) and Equation (22), we obtain:

f₀(w) = −v(w),  f₁(w) = 0, (23)

i.e., the high-resolution ODE of the second-order Runge–Kutta method is:

ẇ = −v(w) + O(η²). (24)
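To make the contrast between Equation (17) and Equation (24) concrete, here is a small numerical sketch (assuming NumPy) on a linear vector field whose eigenvalues have a small negative real part and a large imaginary part, the regime highlighted in Corollary 1: at the same step size, the Euler discretization (GD with the GRL) diverges while Heun's method (RK2) converges:

```python
import numpy as np

# Continuous dynamics w_dot = -v(w) = M w with eigenvalues -0.1 +/- 10i:
# asymptotically stable, but the imaginary part dominates the real part.
M = np.array([[-0.1, 10.0], [-10.0, -0.1]])

def v(w):
    return -M @ w

eta = 0.01
w_euler = np.array([1.0, 0.0])
w_heun = np.array([1.0, 0.0])
for _ in range(5000):
    w_euler = w_euler - eta * v(w_euler)                  # Euler / GD step
    half = w_heun - eta * v(w_heun)
    w_heun = w_heun - 0.5 * eta * (v(w_heun) + v(half))   # Heun (RK2) step

print(np.linalg.norm(w_euler))  # ~4e8: Euler has left its stability region
print(np.linalg.norm(w_heun))   # ~7e-3: Heun decays towards the equilibrium
```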
B.2 CONTINUOUS DYNAMICS OF EXTRA-GRADIENT (EG)
The continuous dynamics of Gradient Descent Ascent (GDA), Extra-Gradient (EG) and Heun's method can be summarized as:

ẇ = v(w) + α ∇v(w) v(w).

For GDA, we have α = −η/2; for EG, we have α = η/2 (Lu, 2020); for Heun's method, ẇ = v(w) + O(η²). The Jacobian of the dynamics at a stationary point is ∇v(w) + α ∇v(w)². Take λ = a + ib ∈ Sp(∇v(w)). The corresponding eigenvalue of the Jacobian of the dynamics is:

α(a + ib)² + a + ib = a + α(a² − b²) + i(b + 2abα). (25)

We want the real part to be negative, i.e.:

a + α(a² − b²) < 0, (26)

and thus:

a(1 + αa) < αb². (27)

For EG, α = η/2 and the dynamics diverge if a(1 + (η/2)a) ≥ ηb²/2, i.e., when η is large and η(a² − b²)/2 ≥ −a. In contrast, the high-resolution ODE of second-order Runge–Kutta methods only requires a < 0.
B.3 HIGH-RESOLUTION ODE OF CLASSIC FOURTH-ORDER RUNGE–KUTTA METHOD (RK4)
In this subsection, we derive the high-resolution ODE of the classic fourth-order Runge–Kutta method. We prove the following result:

Theorem 3. The high-resolution ODE of the classic fourth-order Runge–Kutta method (RK4),

w⁺ = w − (η/6)(v(w) + 2v₂(w) + 2v₃(w) + v₄(w)), (28)

where

v₂(w) = v(w − (η/2) v(w)),  v₃(w) = v(w − (η/2) v₂(w)),  v₄(w) = v(w − η v₃(w)), (29)

is

ẇ = −v(w) + O(η⁴). (30)
Proof. We use the following Taylor expansion:

v(w + δ) = v(w) + ∇v(w)δ + (1/2)∇²v(w)(δ, δ) + (1/6)∇³v(w)(δ, δ, δ) + O(‖δ‖⁴), (31)

where ∇²v(w) : R^d × R^d → R^d is a symmetric bilinear form, and ∇³v(w) : R^d × R^d × R^d → R^d is a symmetric trilinear form. With this formula we have:

v₄(w) = v(w) − η∇v(w)v₃(w) + (η²/2)∇²v(w)(v₃(w), v₃(w)) − (η³/6)∇³v(w)(v₃(w), v₃(w), v₃(w)) + O(η⁴), (32)

v₃(w) = v(w) − (η/2)∇v(w)v₂(w) + (η²/8)∇²v(w)(v₂(w), v₂(w)) − (η³/48)∇³v(w)(v₂(w), v₂(w), v₂(w)) + O(η⁴), (33)

v₂(w) = v(w) − (η/2)∇v(w)v(w) + (η²/8)∇²v(w)(v(w), v(w)) − (η³/48)∇³v(w)(v(w), v(w), v(w)) + O(η⁴). (34)
Putting them together we have:

v₄(w) + 2v₃(w) + 2v₂(w) + v(w) = 6v(w) − η∇v(w)(v₃(w) + v₂(w) + v(w)) + (η²/2)(∇²v(w)(v₃(w), v₃(w)) + (1/2)∇²v(w)(v₂(w), v₂(w)) + (1/2)∇²v(w)(v(w), v(w))) − (η³/4)∇³v(w)(v(w), v(w), v(w)) + O(η⁴), (35)

v₃(w) + v₂(w) + v(w) = 3v(w) − (η/2)∇v(w)(v₂(w) + v(w)) + (η²/4)∇²v(w)(v(w), v(w)) + O(η³), (36)

v₂(w) + v(w) = 2v(w) − (η/2)∇v(w)v(w) + O(η²). (37)

Bringing Equation (37) into Equation (36), we obtain:

v₃(w) + v₂(w) + v(w) = 3v(w) − η∇v(w)v(w) + (η²/4)∇²v(w)(v(w), v(w)) + (η²/4)(∇v(w))²v(w) + O(η³). (38)
Putting Equation (38) into Equation (35) and matching the result with the Taylor expansion of w_{k+1} term by term shows that all correction terms of order η, η² and η³ vanish, which yields Equation (30) and completes the proof.

1. What is the focus of the paper regarding adversarial domain learning?
2. What are the issues with the standard optimization method in DAL according to the review?
3. How does the proposed ODE method improve the transfer performance and reduce the number of training iterations?
4. What is the concern regarding the equivalence between the gradient field and the game's vector field?
5. Are there any suggestions for improving the layout of the paper?

Summary Of The Paper
This paper analyzes domain-adversarial learning (DAL) from a game-theoretical perspective, where the optimality condition is defined as reaching a local Nash equilibrium. From this view, the authors show that the standard optimization method in DAL can violate the asymptotic guarantees of the gradient-play dynamics, thus requiring careful tuning and small learning rates. Based on these analyses, this paper proposes to replace the existing optimization method with higher-order ordinary differential equation (ODE) solvers. Both theoretical and experimental results show that the latter ODE method is more stable and allows for higher learning rates, leading to noticeable improvements in transfer performance and the number of training iterations.
Review
The authors claim that "The gradient field of Equation (2) and the game's vector field (see Section 3.2) are equivalent, making the original interpretation of DAL and our three-player formulation equivalent." Why does field equivalence induce formulation equivalence? What is the exact definition of "equivalence" here?
The layout of the paper needs to be improved. For example, Example 2 should be an independent paragraph in the paper rather than a "window" embedded in another section. Some figures and tables have the same issue.
ICLR | Title
Domain Adversarial Training: A Game Perspective
Abstract
The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game theoretical perspective. Defining optimal solutions in domain-adversarial training as local Nash equilibria, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (i.e., Runge–Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement with less than half of training iterations. Our optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework.
1 INTRODUCTION
Unsupervised domain adaptation (UDA) deals with the lack of labeled data in a target domain by transferring knowledge from a labeled source domain (i.e., a related dataset with different distribution where abundant labeled data already exists). The paramount importance of this paradigm has led to remarkable advances in the field in terms of both theory and algorithms (Ben-David et al., 2007; 2010a;b; Mansour et al., 2009). Several state-of-the-art algorithms tackle UDA by learning domaininvariant representations in an adversarial fashion (Shu et al., 2018; Long et al., 2018; Saito et al., 2018; Hoffman et al., 2018; Zhang et al., 2019; Acuna et al., 2021). Their goal is to fool an auxiliary classifier that operates in a representation space and aims to classify whether the datapoint belongs to either the source or the target domain. This idea, called Domain-Adversarial Learning (DAL), was introduced by Ganin et al. (2016) and can be more formally understood as minimizing the discrepancy between source and target domain in a representation space (Acuna et al., 2021).
Despite DAL being a dominant approach for UDA, alternative solutions have been sought, as DAL is noticeably unstable and difficult to train in practice (Sener et al., 2016; Sun et al., 2019; Chang et al., 2019). One major cause of instability is the adversarial nature of the learning algorithm, which results from the introduction of the Gradient Reversal Layer (GRL, Ganin et al., 2016) (Figure 1). GRL flips the sign of the gradient during the backward pass, which has profound implications on the training dynamics and asymptotic behavior of the learning algorithm. Indeed, GRL transforms gradient descent into a competitive gradient-based algorithm which may converge to periodic orbits and other non-trivial limiting behaviors that arise, for instance, in chaotic systems (Mazumdar et al., 2020). Surprisingly, little attention has been paid to this fact, and specifically to the adversarial component and interaction among the three different networks in the algorithm. In particular, three fundamental questions have not been answered from an algorithmic point of view: 1) What is optimality in DAL? 2) What makes DAL difficult to train? and 3) How can we mitigate this problem?
In this work, we aim to answer these questions by interpreting the DAL framework through the lens of game theory. Specifically, we use tools developed by the game theoretical community in Başar & Olsder (1998); Letcher et al. (2019); Mazumdar et al. (2020) and draw inspiration from the existing two-player zero-sum game interpretations of Generative Adversarial Networks (GANs)
(Goodfellow et al., 2014). We emphasize that in DAL, however, we have three rather than two networks interacting with each other, with partial cooperation and competition. We propose a natural three-player game interpretation for DAL, which we coin the Domain-Adversarial Game; it is not necessarily equivalent to two-player zero-sum game interpretations (see Example 1). We also propose to interpret and characterize optimal solutions in DAL as local Nash Equilibria (see Section 3). This characterization introduces a proper mathematical definition of algorithmic optimality for DAL. It also provides sufficient conditions for optimality that drive the algorithmic analysis.
With our proposed game perspective in mind, a simple optimization solution would be to use the Gradient Descent (GD) algorithm, which is the de facto solution but known to be unstable. Alternatively, we could also use other popular gradient based optimizers proposed in the context of differentiable games (e.g. Korpelevich, 1976; Mescheder et al., 2017). However, we notice that these do not outperform GD in practice (see § 6). To understand why, we analyze the asymptotic behavior of gradient-based algorithms in the proposed domain-adversarial game (§ 4). The main result of § 4.2 (Theorem 2) shows that GD with GRL (i.e., the existing solution for DAL) violates the asymptotic convergence guarantees to local NE unless an upper bound is placed on the learning rate, which may explain its training instability and sensitivity to optimizer parameters. In § 4.3, Appendix B.2 and Appendix E, we also provide a similar analysis for the popular game optimization algorithms mentioned above. We emphasize however that while some of our results may be of independent interest for learning in general games, our focus is DAL. § 4.3 and § 6 show both theoretically and experimentally that the limitations mentioned above disappear if standard optimizers are replaced with ODE solvers of at least second order. These are straightforward to implement as drop-in replacements to existing optimizers. They also lead to more stable algorithms, allow for more aggressive learning rates and provide notable performance gains.
2 PRELIMINARIES

Figure 1: We study domain-adversarial training from a game perspective. In DAL (Ganin et al., 2016), three networks interact with each other: the feature extractor (g), the domain classifier (ĥ′) and the classifier (ĥ). During backpropagation, the GRL flips the sign of the gradient with respect to g.
We focus on the UDA scenario and follow the formulation from Acuna et al. (2021). This makes our analysis general and applicable to most state-of-the-art DAL algorithms (e.g., Ganin et al. (2016); Saito et al. (2018); Zhang et al. (2019)). We assume that the learner has access to a source dataset (S) with labeled examples and a target dataset (T) with unlabeled examples, where the source inputs x_i^s are sampled i.i.d. from a (source) distribution P_s and the target inputs x_i^t are sampled i.i.d. from a (target) distribution P_t, both over X. We have Y = {0, 1} for binary classification, and Y = {1, ..., k} in the multiclass case. The risk of a hypothesis h : X → Y w.r.t. the labeling function f, using a loss function ℓ : Y × Y → R₊ under distribution D, is defined as R_D^ℓ(h, f) := E_D[ℓ(h(x), f(x))]. For simplicity, we define R_S^ℓ(h) := R_{P_s}^ℓ(h, f_s) and R_T^ℓ(h) := R_{P_t}^ℓ(h, f_t). The hypothesis class of h is denoted by H.
UDA aims to minimize the risk in the target domain while only having access to labeled data in the source domain. This risk is upper bounded in terms of the risk of the source domain, the discrepancy between the two distributions, and the joint hypothesis error λ*:

Theorem 1. (Acuna et al. (2021)) Let ℓ : Y × Y → [0, 1], λ* := min_{h∈H} R_S^ℓ(h) + R_T^ℓ(h), and D_{h,H}^φ(P_s||P_t) := sup_{h′∈H} |E_{x∼P_s}[ℓ(h(x), h′(x))] − E_{x∼P_t}[φ*(ℓ(h(x), h′(x)))]|. We have:

R_T^ℓ(h) ≤ R_S^ℓ(h) + D_{h,H}^φ(P_s||P_t) + λ*. (1)
The function φ : R+ → R defines a particular f -divergence and φ∗ is its (Fenchel) conjugate. As is typical in UDA, we assume that the hypothesis class is complex enough and both fs and ft are similar in such a way that the non-estimable term (λ∗) is negligible and can be ignored.
Domain-Adversarial Training (see Figure 1) aims to find a hypothesis h ∈ H that jointly minimizes the first two terms of Theorem 1. To this end, the hypothesis h is interpreted as the composition of h = ĥ ◦ g with g : X → Z , and ĥ : Z → Y . Another function class Ĥ is then defined to formulate H := {ĥ ◦ g : ĥ ∈ Ĥ, g ∈ G}. The algorithm tries to find the function g ∈ G such that ĥ ◦ g minimizes the risk of the source domain (i.e. the first term in Theorem 1), and its composition with ĥ and ĥ′ minimizes the divergence of the two distributions (i.e. the second term in Theorem 1).
Algorithmically, the divergence function in Theorem 1 is estimated by a so-called domain classifier ĥ′ ∈ Ĥ whose role is to detect whether the datapoint g(x_i) ∈ Z belongs to the source or to the target domain. When there does not exist a function ĥ′ ∈ Ĥ that can properly distinguish between g(x_i^s) and g(x_i^t), g is said to be invariant to the domains.
Learning is performed using GD and the GRL (denoted by Rλ) on the following objective:
min_{ĥ∈Ĥ, g∈G, ĥ′∈Ĥ}  E_{x,y∼p_s}[ℓ(ĥ ∘ g(x), y)] − α d_{s,t}(ĥ, ĥ′, R_λ(g)), (2)

where d_{s,t}(ĥ, ĥ′, g) := E_{x∼p_s}[ℓ̂(ĥ′ ∘ g, ĥ ∘ g)] − E_{x∼p_t}[(φ* ∘ ℓ̂)(ĥ′ ∘ g, ĥ ∘ g)]. Mathematically, the GRL R_λ is treated as a "pseudo-function" defined by two (incompatible) equations describing its forward and back-propagation behavior (Ganin & Lempitsky, 2015; Ganin et al., 2016). Specifically,

R_λ(x) := x and dR_λ(x)/dx := −λ, (3)

where λ and α are hyper-parameters that control the tradeoff between achieving small source error and learning an invariant representation. The surrogate loss ℓ : Y × Y → R (e.g., cross-entropy) is used to minimize the empirical risk in the source domain. The choice of the function ℓ̂ : Y × Y → R and of the conjugate φ* of the f-divergence defines the particular algorithm (Ganin et al., 2016; Saito et al., 2018; Zhang et al., 2019; Acuna et al., 2021). From eq. 2, we can notice that the GRL introduces an adversarial scheme. We next interpret eq. 2 as a three-player game where the players are ĥ, ĥ′ and g, and study its continuous gradient dynamics.
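For reference, Equation (3) is typically implemented as a custom autograd function; a minimal sketch in PyTorch (class and helper names are ours) could look as follows:

```python
import torch

class GradReverse(torch.autograd.Function):
    # Gradient Reversal Layer (Equation (3)): identity in the forward pass,
    # multiplies the incoming gradient by -lambda in the backward pass.

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient is returned for the hyper-parameter lambd.
        return -ctx.lambd * grad_output, None

def grl(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```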
3 A GAME PERSPECTIVE ON DAL
We now interpret DAL from a game-theoretical perspective. In § 3.1, we rewrite the DAL objective as a three-player game. In this view, each of the feature extractor and two classifiers is a player. This allows us to define optimality in terms of local Nash Equilibrium (see Def. 2 in Appendices). In § 3.2, we introduce the vector field, the game Hessian and the tools that allow us to characterize local NE for the players. This characterization leads to our analysis of the continuous dynamics in § 4.
3.1 DOMAIN-ADVERSARIAL GAME
We now rewrite and analyze the DAL problem in eq. 2 as a three-player game. Let Ĥ, Ĥ′ and G be classes of neural network functions, and define ω1 ∈ Ω1 and ω3 ∈ Ω3 as the vectors composed of the parameters of the classifier ĥ ∈ Ĥ and the domain classifier ĥ′ ∈ Ĥ′, respectively. Similarly, let ω2 ∈ Ω2 be the parameters of the feature extractor network g ∈ G. Their joint domain is denoted by Ω = Ω1 × Ω2 × Ω3 and the joint parameter set is ω = (ω1, ω2, ω3). Let each neural network be a player and its parameter choice be its individual strategy (here continuous). The goal of each player is then to selfishly minimize its own cost function J_i : Ω → R. We use the subscript −i to refer to all parameters/players but i. With the notation introduced, we can now formally define the Domain-Adversarial Game as the three-player game G(I, Ω_i, J_i) where I := {1, 2, 3}, dim(Ω) = Σ_i dim(Ω_i) = d, Ω_i ⊆ R^{d_i}, and:
J1(ω1, ω−1) := ℓ(ω1, ω2) + α d_{s,t}(ω),
J2(ω2, ω−2) := ℓ(ω1, ω2) + αλ d_{s,t}(ω),
J3(ω3, ω−3) := −α d_{s,t}(ω), (4)
We use the shorthand ℓ(ω1, ω2) for E_{x,y∼p_s}[ℓ(ω1 ∘ ω2(x), y)], and the ωi's refer to the feature extractor g and the classifiers (ĥ and ĥ′). Similar notation follows for d_{s,t}. Here, we assume that each Ji is smooth in each of its arguments ωi ∈ Ωi. The gradient field of Equation (2) and the game's vector field (see § 3.2) are equivalent, making the original interpretation of DAL and our three-player formulation equivalent. However, it is worth noting that our interpretation does not explicitly require the use of R_λ in d_{s,t} in Equation (4). We can write optimality conditions of the above problem through the concept of Nash Equilibrium:

Definition 1. (Nash Equilibrium (NE)) A point ω* ∈ Ω is said to be a Nash Equilibrium of the Domain-Adversarial Game if ∀i ∈ {1, 2, 3}, ∀ωi ∈ Ωi, we have: Ji(ωi*, ω_{-i}*) ≤ Ji(ωi, ω_{-i}*).

In our scenario, the losses are not convex/concave. NE then does not necessarily exist and, in general, finding NE is analogous to, but much harder than, finding global minima in neural networks, which is unrealistic using gradient-based methods (Letcher et al., 2019). Thus, we focus on local NE, which relaxes the NE to a local neighborhood B(w*, δ) := {||w − w*|| < δ} with δ > 0 (see Definition 2). Intuitively, a NE means that no player has the incentive to change its own strategy (here the parameters of the neural network) because doing so will not generate any additional payoff (here, it will not further minimize its cost function). We emphasize that each player only has access to its own strategy set. In other words, player J1 cannot change the parameters ω2, ω3; it only has access to ω1 ∈ Ω1. While the motivation of the three-player game follows naturally from the original formulation of DAL, where three networks interact with each other (see Figure 1), the optimization problem (2) could also be interpreted as the minimax objective of a two-player zero-sum game. Thus, a natural question arises: can we interpret the domain-adversarial game as a two-player zero-sum game? This can be done, for example, by defining ω12* := (ω1*, ω2*) and considering the costs of the two players (ω12, ω3) as J12 = J and J3 = −J, where J(ω12, ω3) := E_{p_s}[ℓ(ω1, ω2)] + d_{s,t}(ω). In general, however, the solution of the two-player game (ω12*, ω3*) is not equal to the NE solution of the three-player game (ω1*, ω2*, ω3*). This is because the team optimal solution ω12* ≠ (ω1*, ω2*) in general. We illustrate this in the following counterexample (see Başar & Olsder (1998) for more details):

Example 1. Let J(ω) := (1/2)(ω1² + 4ω1ω2 + ω2² − ω3²). (a) Consider the three-player game ω = (ω1, ω2, ω3) with J1 = J2 = J and J3 = −J. Each Ji is strictly convex in ωi. The NE solution of the game, ω* = (0, 0, 0), is unique. (b) Consider the two-player game ω = (ω12, ω3) with J12 = J and J3 = −J. The solution ω* from (a) is not a NE solution. To see this, let ω̂ := (−1, 1, 0). We have J12(ω̂) = −1 < J12(ω*) = 0, which contradicts Definition 1. One can verify that there is no NE in this two-player scenario.
3.2 CHARACTERIZATION OF THE DOMAIN-ADVERSARIAL GAME
We now introduce the game’s vector field (also called pseudo-gradient) and the pseudo-gradient’s Jacobian. We also provide a characterization of local NE based on them (see § 3). These are the core concepts used in our analysis (§ 4). We first define the game’s vector field v(w), and its Jacobian H(ω) (also called the game Hessian (Letcher et al., 2019)):
v(ω) := (∇_{ω1}J1, ∇_{ω2}J2, ∇_{ω3}J3) ∈ R^d,  H(ω) := ∇v(ω) ∈ R^{d×d} (5)

Note that the vector field v(w) and the three-player formulation naturally capture the behavior introduced by the GRL in the original formulation. Specifically, v(ω) is identical to the gradient with respect to the parameters of the original DAL objective with GRL (Equation (2)). Therefore, in both cases the behavior of GD is identical: assuming the same initial conditions, they will reach the same solution. This shows the equivalence between our perspective and the original DAL formulation. We emphasize that by equivalence, we mean the same dynamics, and the same intermediate and final solutions. Another fact worth emphasizing is that H(ω) is asymmetric. This is in contrast with the Hessian in supervised learning. Before proceeding with a characterization of local NE in terms of v(w) and H(ω), we first define necessary and sufficient conditions for local NEs:

Proposition 1. (Necessary condition) Suppose each Ji is twice continuously differentiable in each ωi. Any local NE ω* satisfies: i) ∇_{ωi}Ji(ω*) = 0 and ii) ∀i ∈ {1, 2, 3}, ∇²_{ωi,ωi}Ji(ω*) ⪰ 0.

Proposition 2. (Sufficient condition) Suppose each Ji is twice continuously differentiable in each ωi. ω* is a local NE if i) ∇_{ωi}Ji(ω*) = 0 and ii) ∀i, ∇²_{ωi,ωi}Ji(ω*) ≻ 0.

The necessary and sufficient conditions from Propositions 1 and 2 are reminiscent of conditions for local optimality in continuous optimization (Nocedal & Wright, 2006). Similar conditions were also proposed in Ratliff et al. (2016), where the sufficient condition defines the differential Nash equilibrium. We can now characterize a local NE in terms of v(w) and H(ω):

Proposition 3. (Strict Local NE) w is a strict local NE if v(w) = 0 and H(ω) + H(ω)ᵀ ≻ 0.

The sufficient condition implies that the NE is structurally stable (Ratliff et al., 2016). Structural stability is important as it implies that slightly biased estimators of the gradient (e.g., due to sampling noise) will not have vastly different behaviors in neighborhoods of equilibria (Mazumdar et al., 2020). In the following, we focus on the strict local NE (i.e., ω* for which Proposition 3 is satisfied).
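In code, computing the pseudo-gradient of Equation (5) amounts to differentiating each player's loss only with respect to that player's parameters; a sketch (assuming PyTorch, with J1, J2, J3 the scalar losses of Equation (4) computed on the current batch, and w1, w2, w3 the corresponding parameter lists):

```python
import torch

def pseudo_gradient(J1, J2, J3, w1, w2, w3):
    # v(w) = (grad_{w1} J1, grad_{w2} J2, grad_{w3} J3), Equation (5).
    # retain_graph=True because the three losses share one computation graph.
    v1 = torch.autograd.grad(J1, w1, retain_graph=True)
    v2 = torch.autograd.grad(J2, w2, retain_graph=True)
    v3 = torch.autograd.grad(J3, w3)
    return v1, v2, v3
```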
4 LEARNING ALGORITHMS
We defined optimality as the local NE and provided sufficient conditions in terms of the pseudo-gradient and its Jacobian. In this section, we assume the existence of a strict local NE (Prop. 3) in the neighborhood of the current point (e.g., initialization), and analyze the continuous gradient dynamics of the Domain-Adversarial Game (eq. 4 and eq. 5). We show that, given the sufficient conditions from Prop. 3, asymptotic convergence to a local NE is guaranteed through an application of the Hurwitz condition (Khalil, 2002). Most importantly, we show that using GD with the GRL could violate those guarantees unless its learning rate is upper bounded (see Thm. 2 and Cor. 1). This is in sharp contrast with known results from supervised learning, where the implicit regularization introduced by GD has been shown to be desirable (Barrett & Dherin, 2021). We also analyze the use of higher-order ODE solvers for DAL and show that the above restrictions are not required if GD is replaced with them. Finally, we compare our resulting optimizers with recently proposed algorithms in the context of games.
Our algorithmic analysis is based on the continuous gradient-play dynamics and the derivation of the modified or high-resolution ODE of popular integrators (e.g., GD/Euler Method and Runge-Kutta). This type of analysis is also known in the numerical integration community as backward error analysis (Hairer et al., 2006) and has recently been used to understand the implicit regularization effect of GD in supervised learning (Barrett & Dherin, 2021). High resolution ODEs have also been used in Shi et al. (2018) to understand the acceleration effect of optimization algorithms, and more recently in Lu (2020). As in Shi et al. (2018); Lu (2020); Barrett & Dherin (2021), our derivation of the high resolution ODEs is in the full-batch setting. The derivation of the stochastic dynamics of stochastic discrete time algorithms is significantly more complicated and is beyond the scope of this work.
We experimentally demonstrate that our results are also valid when there is stochasticity due to sampling noise in the mini-batch. We emphasize that our analysis does not put any constraint or structure on the players’ cost functions as opposed to Azizian et al. (2020); Zhang & Yu (2020). In our problem, the game is neither bilinear nor necessarily strongly monotone. See proofs in appendices.
4.1 CONTINUOUS GRADIENT DYNAMICS
Given v(ω) the continuous gradient dynamics can be written as:
ω̇(t) = −v(ω). (6) For later reasons and to distinguish between eq. 6 and the gradient flow, we will refer to these as the gradient-play dynamics as in Başar & Olsder (1998); Mazumdar et al. (2020). These dynamics are well studied and understood when the game is either a potential or a purely adversarial game (see definitions in appendices). While eq. 2 may look like a single objective, the introduction of the GRL (Rλ), makes a fundamental difference between our case and the dynamics that are analyzed in the single-objective gradient-based learning and optimization literature. We summarize this below: Proposition 4. The domain-adversarial game is neither a potential nor necessarily a purely adversarial game. Moreover, its gradient dynamics are not equivalent to the gradient flow.
Fortunately, we can directly apply the Hurwitz condition (Khalil, 2002) (also known as the condition for asymptotic stability, see Appendix A.1) to derive sufficient conditions for which the continuous dynamics of the gradient play would converge. Lemma 1. (Hurwitz condition) Let ∇v(w∗) be the Jacobian of the vector field at a stationary point w∗ where v(w∗) = 0. If the real part of every eigenvalue λ of ∇v(w∗) (i.e. in the spectrum Sp(∇v(w∗))) is positive then the continuous gradient dynamics are asymptotically stable. In this work, we assume the algorithms are initialized in a neighborhood of a strict local NE ω∗. Therefore, Lemma 1 provides sufficient conditions for the asymptotic convergence of the gradientplay dynamics to a local NE. In practice this assumption may not hold, and it is computationally hard to verify. Despite this, our experiments show noticeable performance gains in several tasks, benchmarks and network architectures (see § 6).
4.2 ANALYSIS OF GD WITH THE GRL
We showed above that given the existence of a strict local NE, the gradient-play dynamics are attracted to the strict local NE. A natural question then arises: If under this assumption local asymptotic convergence is guaranteed, what could make DAL notoriously hard to train and unstable? In practice, we do not have access to an explicit solution of the ODE. Thus, we rely on integration algorithms to approximate the solution. One simple approach is to use the Euler method:
w+ = w − ηv(w). (7)
This is commonly known as GD. The equivalence between v(w) (the game's vector field) and the gradient of Equation (2) (the original DAL formulation) follows from the use of the GRL (R_λ). We remind the reader that the GRL is a "pseudo-function" defined by two (incompatible) equations describing its forward and back-propagation behavior, i.e., a flip in the gradient's sign for the backward pass (see Figure 1, Section 2 and Ganin et al. (2016)). Equation (7) is then the default algorithm used in DAL. Now, to provide an answer to the motivating question of this section, we propose to analyze the high-resolution ODE of this numerical integrator (i.e., Euler) and in turn its asymptotic behavior. This is similar to deriving the modified continuous dynamics for which the integrator produces the exact solution (Hairer et al., 2006) and applying the Hurwitz condition on the high-resolution ODE.

Theorem 2. The high-resolution ODE of GD with the GRL up to O(η) is:

ẇ = −v(w) − (η/2)∇v(w)v(w). (8)

Moreover, this is asymptotically stable (see Appendix A.1) at a stationary point w* (Proposition 3) iff every eigenvalue λ = a + ib ∈ Sp(−∇v(w*)) satisfies 0 > η(a² − b²)/2 > a.

A striking difference between Equation (6) and Equation (8) is now clear: the additional term −(η/2)∇v(w)v(w). This term is a result of the discretization of the gradient-play dynamics using Euler's method (i.e., GD) and leads to a different Jacobian of the dynamics. It was recently shown to be beneficial for standard supervised learning (Barrett & Dherin, 2021), where ∇v(ω*) is symmetric and thus only has real eigenvalues. In our scenario, this term is undesirable. In fact, it puts an upper bound on the learning rate η. The following corollary formalizes this:

Corollary 1. The high-resolution ODE of GD with the GRL in Equation (8) is asymptotically stable only if the learning rate η is in the interval 0 < η < −2a/(b² − a²), for all λ = a + ib ∈ Sp(−∇v(w*)) with large imaginary part (i.e., such that |a| < |b|).

To have good convergence properties, the imaginary parts of the eigenvalues of −∇v(w*) must be small enough. Therefore, if some eigenvalue λ = a + ib satisfies a < 0 and b² − a² ≫ −2a, the learning rate must be chosen very small. This is verified in Section 6 and in Example 2.

Example 2. Consider the three-player game where ℓ(w1, w2) = w1² + 2w1w2 + w2², λ = 1 and d_{s,t}(w2, w3) = w2² + 99w2w3 − w3². Then ẇ = −v(w) becomes ẇ = Aw with

A = [−2 −2 0; −2 −4 −99; 0 99 −2].

The eigenvalues of A are −2 and −3 ± 2i√2449. From Corollary 1, η should satisfy 0 < η < 6.2 × 10⁻³.
4.3 HIGHER ORDER ODE SOLVERS
The limitation described above exists because GD with the GRL can be understood as a discretization of the gradient-play dynamics using Euler’s Method. Thus, it only approximates the continuous dynamics up to O(η). To obtain a better approximation, we consider Runge-Kutta (RK) methods of order two and beyond (Butcher, 1996). For example, take the improved Euler’s method (a particular RK method of second order) that can be written as:
w⁺ = w − (η/2)(v(w) + v(w − ηv(w))). (9)

Comparing Equation (9) (i.e., the update rule of RK2) with Equation (7) (i.e., the update rule of GD), one can see that the RK2 method is straightforward to implement in standard deep learning frameworks. Moreover, it does not introduce additional hyper-parameters. More importantly, such discrete dynamics approximate the continuous ODE of Equation (6) to a higher precision. In Appendix C, we provide asymptotic guarantees for the high-resolution ODE of general RK methods, their generalized expression and the algorithm pseudo-code. See also the PyTorch pseudo-code in Appendix F.
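A schematic drop-in implementation of Equation (9) is sketched below (our naming; `vector_field` is an assumed helper that re-evaluates the game's pseudo-gradient at a given parameter list, e.g., via a functional forward pass):

```python
import torch

def rk2_step(params, vector_field, lr):
    # Improved Euler (Equation (9)): w+ = w - (lr/2)*(v(w) + v(w - lr*v(w))).
    v1 = vector_field(params)
    mid = [(p - lr * g).detach().requires_grad_(True)
           for p, g in zip(params, v1)]
    v2 = vector_field(mid)
    with torch.no_grad():
        for p, g1, g2 in zip(params, v1, v2):
            p.add_(-0.5 * lr * (g1 + g2))
```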
Limitation. A disadvantage of using high-order solvers is that they require additional gradient evaluations per update: specifically, one extra evaluation in the case of RK2 (the second term in Equation (9)). In our implementation, however, this was less than 2x slower in wall-clock time (see Appendix E.5 for more details and a wall-clock comparison). Moreover, if not initialized in the neighborhood of a local NE, high-order solvers and gradient-based methods might also converge to a non-NE point as described in Mazumdar et al. (2019), although this is likely a rare case.
Comparison vs other game optimization algorithms. DAL has not been previously interpreted from a game perspective. Our interpretation allows us to bring recently proposed algorithms to the context of differentiable games (Zhang & Yu, 2020; Azizian et al., 2020) to DAL. Classic examples are the Extra-Gradient (EG) method (Korpelevich, 1976) and Consensus Optimization (CO) (Mescheder et al., 2017). In Appendix B.2 we analyze the continuous dynamics of the EG method, and show that
we cannot take the learning rate of EG to be large either. Thus, we obtain a similar conclusion as Corollary 1. Then, in practice for DAL, stability for EG comes at the price of slow convergence due to the use of small learning rates. We experimentally show this in Figure 3. With respect to CO, we show in Appendix C that this algorithm can be interpreted in the limit as an approximation of the RK2 solver. In practice, if its additional hyper-parameter (γ) is tuned thoroughly, CO may approximate the continuous dynamics better than GD and EG. We believe this may be the reason why CO slightly outperforms GD and EG (see Appendix E.4). In all cases, RK solvers outperform GD, EG and CO. This is in line with our theoretical analysis since they better approximate the continuous dynamics (Hairer et al., 2006). It is worth noting that many other optimizers have recently been proposed in the context of games e.g., Gidel et al. (2019a); Hsieh et al. (2020); Lorraine et al. (2021a;b). Some of them are modifications of the EG method that we compared to e.g. Extra-Adam (Gidel et al., 2019a) or double step-size EG (Hsieh et al., 2020). More practical modifications in terms of adaptive step size could also be applied on top of RK solvers as done in Qin et al. (2020). A comparison of all existing game optimizers in DAL, and a better theoretical understanding of such modification on RK solvers are beyond the scope of this work. However, we believe it is an interesting and unexplored research direction that our game perspective on DAL enables.
5 RELATED WORK
To the best of our knowledge, DAL has not been previously analyzed from a game perspective. Moreover, the stability of the optimizer and the implications of introducing the GRL has not been analyzed either. Here, we compare our results with the general literature.
Gradient-Based Learning in Games. Ratliff et al. (2016) proposed a characterization of local Nash Equilibrium providing sufficient and necessary conditions for its existence. Mazumdar et al. (2020) proposed a general framework to analyze the limiting behavior of the gradient-play algorithms in games using tools from dynamical systems. Our work builds on top of this characterization but specializes them to the domain-adversarial problem. We propose a more stable learning algorithm that better approximates the gradient-play dynamics. Our resulting algorithm does not introduce explicit adjustments or modify the learning dynamics, nor does it require the computation of the several Hessian vector products or new hyperparameters. This is in contrast with general algorithms previously analyzed in the context of differentiable games (Azizian et al., 2020; Letcher et al., 2019).
Integration Methods and ML. Scieur et al. (2017) showed that accelerated optimization methods can be interpreted as integration schemes of the gradient flow equation. Zhang et al. (2018) showed that the use of high-order RK integrators can achieve acceleration on convex functions. In the context of two-player games (i.e., GANs), Gemp & Mahadevan (2018) consider using a second-order ODE integrator. More recently, Qin et al. (2020) proposed to combine RK solvers with regularization of the generator's gradient norm. Chen et al. (2018) interpreted the residual connections in modern networks as Euler integration of a continuous system. In our case, we notice that the combination of GD with the GRL can be interpreted as the Euler discretization of the continuous gradient-play dynamics, which could prevent asymptotic convergence guarantees. We then study the discretization step of popular ODE solvers and provide simple guarantees for stability. Moreover, our analysis is based on a novel three-player game interpretation of the domain-adaptation problem. This is also different from a single potential function or two-player games (i.e., GANs).
Two-Player Zero-Sum Games have recently received significant attention in the machine learning literature due to the popularity of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). For example, several algorithms have been proposed and analyzed (Mescheder et al., 2017; Mertikopoulos et al., 2019; Gidel et al., 2019a;b; Zhang & Yu, 2020; Hsieh et al., 2020), in both deterministic and stochastic scenarios. In our problem, we have a general three-player game resulting from a novel game interpretation of the domain-adversarial problem. It is worth noting that while Gidel et al. (2019a) focused on GANs, their convergence theory and methods for stochastic variational inequalities could also be applied to three-player games and thus to DAL using our perspective.
6 EXPERIMENTAL RESULTS
We conduct an extensive experimental analysis. We compare with default optimizers used in domainadversarial training such as GD, GD with Nesterov Momentum (GD-NM) (as in Sutskever et al. (2013)) and Adam (Kingma & Ba, 2014). We also compare against recently proposed optimizers in the context of differentiable games such as EG (Korpelevich, 1976) and CO (Mescheder et al., 2017). We focus our experimental analysis on the original domain-adversarial framework of Ganin et al.
(2016) (DANN). However, in section 6.2, we also show the versatility and efficacy of our approach improving the performance of recently proposed SoTA DAL framework (e.g., f -DAL (Acuna et al., 2021) combined with Implicit Alignment (Jiang et al., 2020)).
6.1 EXPERIMENTAL ANALYSIS ON DIGITS
Implementation Details. Our first experimental analysis is based on the digits benchmark with models trained from scratch (i.e., with random initialization). This benchmark constitutes of two digits datasets MNIST (CC BY-SA 3.0) and USPS (LeCun et al., 1998; Long et al., 2018) with two transfer tasks (M → U and U →M). We adopt the splits and evaluation protocol from Long et al. (2018) and follow their standard implementation details.
For GD-NM, we use the default momentum value (0.9). We follow the same approach for the additional hyper-parameters of Adam. Hyperparameters such as learning rate, learning schedule and adaptation coefficient (λ) are determined for all optimizers by running a dense grid search and selecting the best hyper-parameters on the transfer task M→U. As usual in UDA, the best criteria are determined based on best transfer accuracy. The same parameters are then used
for the other task (i.e., U→M). We use the same seed and identically initialize the network weights for all optimizers. This analysis is conducted on Jax (Bradbury et al., 2018) (see Appendix D).
Comparison vs optimizers used in DAL. Figure 2 (top) illustrates the training dynamics for the loss in the target domain and the performance transfer. As expected, our optimizer converges faster and achieves noticeable performance gains. A core idea of DAL is to learn domain-invariant representations, thus we plot in Figure 2 (bottom) t-SNE (Van der Maaten & Hinton, 2008) visualizations of the last layer features of the network. We show this over a sequence of epochs for GD with GRL vs RK2. A different color is used for the source and target datasets. In the comparison vs Adam, we emphasize that Adam computes adaptive learning rates which our method does not. That said, Figure 2 shows that our two methods RK2 and RK4
outperform all baselines in terms of both convergence and transfer performance. In Figure 7, we show how unstable these standard optimizers are when more aggressive step sizes are used. This is in line with our theoretical analysis. Experimentally, it can be seen that in DAL, GD is more stable than GD-NM and Adam, with the latter being the most unstable. This sheds light on why well-tuned GD-NM is often preferred over Adam in DAL.
Comparison vs game optimization algorithms. We now compare RK solvers vs other recently proposed game optimization algorithms. Specifically, we compare vs the EG method (Korpelevich, 1976) and CO (Mescheder et al., 2017). In every case, we perform a dense grid search under the same budget for all optimizers and report the best selection (see Appendix E for details). In line with our theoretical analysis of the continuous dynamics of the EG, we notice that the EG method is not able to train with learning rates bigger than 0.006; as a result, it performs significantly worse than any other optimizer (including simple GD). Also in line with our theoretical analysis, CO performs better than EG and all other popular gradient algorithms used in DAL. This is because CO can be seen as an approximation of Heun's Method (RK2). More details are provided in the supplementary material.
Robustness to hyper-parameters. Figure 4 shows transfer performance of our method for different choices of hyper-parameters while highlighting (green line) the best score of the best performing GD hyperparameters on the same dataset. Our method is robust to a wide variety of hyperparameters.
Figure 4: Robustness to hyper-parameters. We compare the transfer performance of our method for different hyper-parameters in the task M→U in the Digits benchmark. The green line shows the best score for the best performing hyper-parameters of GD. The blue star corresponds to the best solution. Our method performs well for a wide variety of hyper-parameters.
Figure 6: Transfer Performance on Visda (DANN).

Figure 7: Stability analysis on Digits. Most aggressive step size before divergence. Adam diverges for η > 0.001.
6.2 COMPARISON IN COMPLEX ADAPTATION TASKS

Table 1: Accuracy (DANN) on Visda 2017 with ResNet-50.
Method      Sim→Real
GD-NM       71.7 ± 0.7
Ours (RK2)  73.8 ± 0.3
We evaluate the performance of our algorithm with ResNet-50 (He et al., 2016) on more challenging adaptation benchmarks. Specifically, this analysis is conducted on the Visda-2017 benchmark (Peng et al., 2017). This is a simulation-to-real dataset with two different domains: (S) synthetic renderings of 3D models and (R) real images. For this experiment, we use PyTorch (Paszke et al., 2019); our evaluation protocol follows Zhang et al. (2019) and uses ResNet-50 as the backbone network. For the optimizer parameters, we thoroughly tune GD-NM, which is the optimizer used in this setting (Long et al., 2018; Zhang et al., 2019; Jiang et al., 2020; Acuna et al., 2021). For ours, we keep the same hyper-parameters but increase the learning rate (to 0.2) and the batch size (to 128). In this task, our approach corresponds to the improved Euler's method (RK2). Table 1 shows the comparison. Figure 6 compares the training dynamics of our method vs GD-NM. In Figure 5, we evaluate the sensitivity of our method (in terms of transfer performance) to sampling noise as controlled by the batch size.
Improving SoTA DAL frameworks. We use this complex visual adaptation task to showcase the applicability of our method to SoTA DAL frameworks. Specifically, we let the DA method be f-DAL Pearson as in Acuna et al. (2021) with Implicit Alignment (Jiang et al., 2020). We use the tuned parameters and optimizer from Acuna et al. (2021); Jiang et al. (2020) as the baseline. In our case, we only increase the learning rate (to 0.2). Table 2 shows that our method achieves peak results (+3.5%) in 10.5K iterations (vs 29.5K iterations for GD-NM).

Natural Language Processing Tasks. We also evaluate our approach on natural language processing tasks on the Amazon product reviews dataset (Blitzer et al., 2006). We show noticeable gains by replacing GD with either RK2 or RK4. Results and details can be found in Appendix E.1.
7 CONCLUSIONS
We analyzed DAL from a game-theoretical perspective where optimality is defined as local NE. From this view, we showed that standard optimizers in DAL can violate the asymptotic guarantees of the gradient-play dynamics, requiring careful tuning and small learning rates. Based on our analysis, we proposed to replace existing optimizers with higher-order ODE solvers. We showed both theoretically and experimentally that these are more stable and allow for higher learning rates, leading to noticeable improvements in terms of the transfer performance and the number of training iterations. We showed that these ODE solvers can be used as a drop-in replacement and outperformed strong baselines.
Acknowledgements. We would like to thank James Lucas, Jonathan Lorraine, Tianshi Cao, Rafid Mahmood, Mark Brophy and the anonymous reviewers for feedback on earlier versions of this work.
SUPPLEMENTARY MATERIAL
CONTENTS
1 Introduction
2 Preliminaries
3 A Game Perspective on DAL
3.1 Domain-Adversarial Game
3.2 Characterization of the Domain-Adversarial Game
4 Learning Algorithms
4.1 Continuous Gradient Dynamics
4.2 Analysis of GD with the GRL
4.3 Higher order ODE Solvers
5 Related Work
6 Experimental Results
6.1 Experimental Analysis on Digits
6.2 Comparison in complex adaptation tasks
7 Conclusions
A Concepts in Game Theory
A.1 Definitions
A.2 Games Characterizations
A.3 Case of Study in DANN. Original Formulation from Ganin et al. (2016)
B Derivation of high-resolution ODEs
B.1 High-resolution ODE of second-order Runge–Kutta method
B.2 Continuous dynamics of Extra-Gradient (EG)
B.3 High-resolution ODE of classic fourth-order Runge–Kutta method (RK4)
C Proofs and additional theoretical results
C.1 Proposed Learning Algorithm
C.2 CO approximates RK2 (Heun's Method)
D Experimental Setup Additional Details
E Additional Experiments
E.1 Natural Language Processing Tasks
E.2 Sensitivity to Sampling Noise
E.3 Additional Comparison vs Game Optimization Algorithms
E.4 CO vs Gradient Descent and Extra-Gradient Algorithms
E.5 Wall-Clock Comparison
F PyTorch PseudoCode of RK2 Solver
A CONCEPTS IN GAME THEORY
A.1 DEFINITIONS
Definition 2. (Local Nash Equilibrium) A point $(\omega_i^*, \omega_{-i}^*) \in \Omega$ is said to be a local Nash Equilibrium of the domain-adversarial game if there exists some $\delta > 0$ such that:
$$\forall i \in \{1, 2, 3\}, \quad J_i(\omega_i^*, \omega_{-i}^*) \leq J_i(\omega_i, \omega_{-i}^*) \quad \text{s.t. } \|\omega_i - \omega_i^*\|_2 < \delta \tag{10}$$
Intuitively, this is restricting the concept of NE to a local neighborhood B(x∗, δ) := {||x− x∗||2 < δ} with δ > 0.
A more practical characterization of the NE can be given in terms of the Best Response Map of each player which we now define.
Definition 3. (Best Response Map (BR)) The best response map $BR_i : \Omega_{-i} \rightrightarrows \Omega_i$ of player $i$ is defined as:
$$BR_i(\omega_{-i}) := \arg\min_{\omega_i \in \Omega_i} J_i(\omega_i, \omega_{-i}), \tag{11}$$
here the symbol $\rightrightarrows$ emphasizes that the best response map is generally a set-valued map and not a singleton, thus it is not a function in general. In other words, there may be a subset of elements in $\Omega_i$ for which $J_i(\cdot, \omega_{-i})$ is a minimum.
The notion of NE can be defined in terms of the generalized $BR : \Omega \rightrightarrows \Omega$ map. This can be thought of as a stacked vector where the $i$-th element of $BR$ is $BR_i(\omega_{-i})$.

Proposition 5. A point $\omega^* \in \Omega$ is said to be a NE of the game if it is a fixed point of the generalized $BR : \Omega \rightrightarrows \Omega$ map. That is,
$$\omega^* \in BR(\omega^*) \implies \forall i \in \{1, 2, 3\}, \ \omega_i^* \in BR_i(\omega_{-i}^*) \tag{12}$$
Proof. This follows from the definitions of BR map and NE.
Definition 4. (Asymptotically Stable) A point ω is said to be a locally asymptotically stable point of the continuous dynamics ω̇ = f(ω) if Re(λ) < 0 for all λ ∈ Sp(∇f(ω)), where Sp(∇f(ω)) is the spectrum of ∇f(ω).
Definition 4 is also known as the Hurwitz condition Khalil (2002).
Definition 5. A stationary point $x^*$ of a $C^2$ function $\varphi : \mathbb{R}^n \to \mathbb{R}$ is said to be a strict saddle point if:

• $\lambda_{\min}(\nabla^2_{xx}\varphi(x^*)) < 0$, and

• $\lambda > 0$ for any other $\lambda \in \mathrm{Sp}(\nabla^2_{xx}\varphi(x^*))$
A.2 GAMES CHARACTERIZATIONS
Potential Games. Potential Games were introduced in Monderer & Shapley (1996) and can be defined as a type of game for which there exists an implicit potential function φ : Ω → R such that ∇φ(ω) = v(ω). Consequently, a necessary and sufficient condition for the game to be potential is the Jacobian of the vector field ∇v(ω) being symmetric (see 3.3 in Mazumdar et al. (2020) and Monderer & Shapley (1996)).
Purely Adversarial Games. This particular type of game refers to the other extreme, in which $H(\omega)$ is a non-symmetric matrix with purely imaginary eigenvalues. If the game Hessian is skew-symmetric, these have also been called Hamiltonian games (Letcher et al., 2019).
A.3 CASE OF STUDY IN DANN. ORIGINAL FORMULATION FROM GANIN ET AL. (2016)
As mentioned in the main text (Section 2), our analysis is compatible with both the original and more recent formulation of domain-adversarial training such as Zhang et al. (2019); Acuna et al. (2021). In this section, we specifically derive additional results for DANN Ganin et al. (2016).
In order to obtain the original formulation of DANN, let us define $\hat\ell(\_, b) = \log(\sigma(b))$ and $\varphi^*(t) = -\log(1 - e^t)$ in Equation (2). This corresponds to the Jensen-Shannon divergence (JS) (up to a constant shift that does not affect optimization). We can then rewrite $d_{s,t}$ as:
$$d_{s,t} = \mathbb{E}_{x \sim p_s}\left[\log \sigma \circ \hat h' \circ g(x)\right] + \mathbb{E}_{x \sim p_t}\left[\log\left(1 - \sigma \circ \hat h' \circ g(x)\right)\right] \tag{13}$$
where $\sigma(x) := \frac{1}{1 + e^{-x}}$. To simplify the notation, we write $\mathcal{H} := \sigma \circ \mathcal{H}$.
We can now re-define the pseudo-gradient $v(\omega)$ of the game as the gradient of each player's loss with respect to its parameters. Letting $\alpha = 1$, we get from Equation (4):
$$v(\omega) := \left(\nabla_{\omega_1}\ell, \ \nabla_{\omega_2}(\ell + \lambda d_{s,t}), \ -\nabla_{\omega_3} d_{s,t}\right) \in \mathbb{R}^d. \tag{14}$$
The following propositions characterize local NE in terms of the pseudo-gradient $v(\omega)$ and its Jacobian $H(\omega)$.

Proposition 6. (Local NE) Suppose $v(\omega) = 0$ and
$$\begin{pmatrix} \nabla^2_{\omega_1}\ell & \nabla^2_{\omega_1,\omega_2}\ell \\ \nabla^2_{\omega_1,\omega_2}\ell & \nabla^2_{\omega_2}(\ell + \lambda d_{s,t}) \end{pmatrix} \succ 0, \qquad \nabla^2_{\omega_3} d_{s,t} \prec 0, \tag{15}$$
then $\omega$ is an isolated local NE.
The proof is simple and follows from Propositions 1 and 2, the definition of the vector field $v(\omega)$ and the condition $H + H^\top \succ 0$.
Cooperation with Competition. By examining the matrix H(ω), one can see that, in our scenario, the game is neither a potential game nor a purely adversarial game. However, we can write the vector field in the following form:
$$v(\omega) = \underbrace{\begin{pmatrix} \nabla_{\omega_1}\ell \\ \nabla_{\omega_2}\ell \\ 0 \end{pmatrix}}_{\nabla\varphi(\omega)} + \underbrace{\begin{pmatrix} 0 \\ \lambda\nabla_{\omega_2}d_{s,t} \\ -\nabla_{\omega_3}d_{s,t} \end{pmatrix}}_{\hat v(\omega)} \tag{16}$$
where the first part corresponds to the gradient of the potential function φ(ω) = `(ω1, ω2). The second part, on the other hand, corresponds to a function v̂(w) whose Jacobian is a non-symmetric matrix. Analyzing them separately leads to either a potential or an adversarial game respectively. We define this particular type of game as cooperation (i.e., in the potential term) with competition (i.e., the adversarial term).
It is worth noting that, while the spectrum of the game Hessian for the first term has only real eigenvalues, the second term can have complex eigenvalues with a large imaginary component. Indeed, it can be shown that this second term approximates the one obtained for a GAN using the non-saturating loss proposed by Goodfellow et al. (2014) (e.g., $\lambda = 1$). In other words, the second term can be written as the pseudo-gradient of the two-player zero-sum game $\min_{\omega_2}\max_{\omega_3} d_{s,t}$. Building on this key observation and the work of Mescheder et al. (2017); Berard et al. (2020) (Figure 4), where it was experimentally shown that the eigenvalues of the game Hessian for GANs indeed have a large imaginary component around stationary points, we can assume that the spectrum of the game Hessian in our case also has eigenvalues with a large imaginary component around the stationary points. This observation can also be used with Corollary 1 to further motivate the use of higher-order ODE solvers instead of GD with the GRL.

Example 3. Consider the three-player game in Equation (16) where $\ell(w_1, w_2) = w_1^2 + 2w_1w_2 + w_2^2$, $\lambda = 1$ and $d_{s,t}(w_2, w_3) = w_2^2 + 99w_2w_3 - w_3^2$. The gradient play dynamics $\dot w = -v(w)$ become:
$$\dot w = Aw = \begin{pmatrix} -2 & -2 & 0 \\ -2 & -4 & -99 \\ 0 & 99 & -2 \end{pmatrix} w.$$
The eigenvalues of $A$ are $-2$ and $-3 \pm 2i\sqrt{2449}$. From Corollary 1, $\eta$ should satisfy $0 < \eta < 6.2 \cdot 10^{-3}$.
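The stability claim in Example 3 is easy to verify numerically. The following is a minimal NumPy sketch (our own illustration, not part of the paper's code) checking that A satisfies the Hurwitz condition of Definition 4:

```python
import numpy as np

# Every eigenvalue of A must have strictly negative real part (Definition 4).
A = np.array([[-2.0,  -2.0,   0.0],
              [-2.0,  -4.0, -99.0],
              [ 0.0,  99.0,  -2.0]])
eigvals = np.linalg.eigvals(A)
print(eigvals)                      # approximately -2 and -3 +/- 2i*sqrt(2449)
assert np.all(eigvals.real < 0)     # asymptotically stable continuous dynamics
```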
Is the three-player game formulation desired? In domain adaptation, optimization is a means to an end. The final goal is to minimize the upper bound from Theorem 1 to ensure better performance in the target domain. One might then wonder whether interpreting optimality in terms of NE is desirable. In our context, NE means finding the optimal g∗, ĥ∗ and ĥ
′∗ of the cost functions defined in Equation (4). This in turns leads to minimizing the upper bound in Theorem 1.
Remark on sequential games: Recently, Jin et al. (2020) introduced a notion of local min-max optimality for two-player’s game exploiting the sequential nature of some problems in adversarial ML (i.e GANs). In domain-adversarial learning, updates are usually performed in a simultaneous manner using the GRL. Thus, we focus here on the general case where the order of the players is not known.
B DERIVATION OF HIGH-RESOLUTION ODES
Lemma 2. The high-resolution ODE resulting from the GD algorithm with the GRL is:
$$\dot w = -v(w) - \frac{\eta}{2}\nabla v(w)v(w) + O(\eta^2), \tag{17}$$
Proof. This follows from Corollary 1 of Lu (2020).
B.1 HIGH-RESOLUTION ODE OF SECOND-ORDER RUNGE–KUTTA METHOD
The high-resolution ODE was discussed in Shi et al. (2018); Lu (2020). For discrete algorithms with the following update:
$$w^+ = w + f(\eta, w), \tag{18}$$
we can think of the trajectory as a discretization of the continuous dynamics w : [0,+∞) → Rd, and in Equation (18), we have w = w(t), w+ = w(t+ η). Here, with slight abuse of notation we also use w for the continuous function of dynamics.
We derive the high-resolution ODE of the second-order Runge–Kutta method:
$$w_{k+1/2} = w_k - \frac{\eta}{2\alpha}v(w_k), \qquad w_{k+1} = w_k - \eta\big((1 - \alpha)v(w_k) + \alpha v(w_{k+1/2})\big),$$
where $0 < \alpha \leq 1$ and $\alpha$ is a constant. If $\alpha = 1/2$, we obtain Heun's method; if $\alpha = 1$, we obtain the midpoint method; if $\alpha = 2/3$, we obtain Ralston's method. Combining the two equations, we have:
$$\frac{w_{k+1} - w_k}{\eta} = -(1 - \alpha)v(w_k) - \alpha v\!\left(w_k - \frac{\eta}{2\alpha}v(w_k)\right). \tag{19}$$
Using the Taylor expansion:
$$v\!\left(w_k - \frac{\eta}{2\alpha}v(w_k)\right) = v(w_k) - \frac{\eta}{2\alpha}\nabla v(w_k)^\top v(w_k) + O(\eta^2)$$
Plugging it back into Equation (19) and using the Taylor expansion $w_{k+1} = w_k + \eta\dot w + \eta^2\ddot w/2$, we have:
$$\dot w + \frac{\eta}{2}\ddot w = -v(w) + \frac{\eta}{2}\nabla v(w)^\top v(w) + O(\eta^2). \tag{20}$$
Now we make the assumption that we have the high-resolution ODE:
$$\dot w = f_0(w) + \eta f_1(w) + O(\eta^2). \tag{21}$$
Taking the derivative over $t$ we have:
$$\ddot w = \nabla f_0(w)f_0(w) + O(\eta). \tag{22}$$
Combining Equation (20), Equation (21) and Equation (22), we obtain:
$$f_0(w) = -v(w), \qquad f_1(w) = 0, \tag{23}$$
i.e., the high-resolution ODE of the second-order Runge–Kutta method is:
$$\dot w = -v(w) + O(\eta^2). \tag{24}$$
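The RK2 update analyzed above is simple to implement. Below is a minimal NumPy-style sketch (our own illustration; Appendix F gives PyTorch pseudocode for the RK2 solver), exposing the α parametrization:

```python
import numpy as np

# Generic second-order Runge-Kutta step for the gradient play dynamics;
# `v` is a callable returning the pseudo-gradient at w.
# alpha=1/2 recovers Heun's method, alpha=1 the midpoint method,
# and alpha=2/3 Ralston's method.
def rk2_step(w, v, eta, alpha=0.5):
    w_half = w - (eta / (2.0 * alpha)) * v(w)
    return w - eta * ((1.0 - alpha) * v(w) + alpha * v(w_half))

# Hypothetical usage on the linear game of Example 3, where w' = -v(w) = A w:
A = np.array([[-2.0, -2.0, 0.0], [-2.0, -4.0, -99.0], [0.0, 99.0, -2.0]])
w = rk2_step(np.ones(3), lambda w: -A @ w, eta=0.05)
```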
B.2 CONTINUOUS DYNAMICS OF EXTRA-GRADIENT (EG)
The continuous dynamics of Gradient Descent Ascent (GDA), Extra-Gradient (EG) and Heun's method can be summarized as follows:
$$\dot w = v(w) + \alpha\nabla v(w)v(w)$$
For GDA, we have $\alpha = -\eta/2$; for EG, we have $\alpha = \eta/2$ (Lu, 2020); for Heun's method, $\dot w = v(w) + O(\eta^2)$. The Jacobian of the dynamics at the stationary point is $\nabla v(w) + \alpha\nabla v(w)^2$. Take $\lambda = a + ib \in \mathrm{Sp}(\nabla v(w))$. The eigenvalue of the Jacobian of the dynamics is:
$$\alpha(a + ib)^2 + a + ib = a + \alpha(a^2 - b^2) + i(b + 2\alpha ab). \tag{25}$$
We want the real part to be negative, i.e.:
$$a + \alpha(a^2 - b^2) < 0, \tag{26}$$
and thus:
$$a(1 + \alpha a) < \alpha b^2. \tag{27}$$
For EG, $\alpha = \eta/2$ and the dynamics diverge if $a(1 + (\eta/2)a) \geq \eta b^2/2$. When $\eta$ is large and $\eta(a^2 - b^2)/2 \geq -a$, it diverges. However, the high-resolution ODE of second-order Runge–Kutta methods only requires $a < 0$.
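A small numeric illustration (our own, not from the paper) of Equations (25)-(27) shows why these discretizations behave differently when the eigenvalue has a large imaginary part:

```python
# Real part of the dynamics' Jacobian eigenvalue, for lambda = a + ib in
# Sp(grad v(w)): a + alpha * (a^2 - b^2), as in Equation (25).
def real_part(a, b, alpha):
    return a + alpha * (a ** 2 - b ** 2)

a, b, eta = -0.1, 10.0, 0.05          # hypothetical eigenvalue, large imaginary part
print(real_part(a, b, -eta / 2.0))    # GDA:  ~2.4 > 0, locally divergent
print(real_part(a, b, eta / 2.0))     # EG:  ~-2.6 < 0, convergent here
print(real_part(a, b, 0.0))           # RK2 (up to O(eta^2)): negative iff a < 0
```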
B.3 HIGH-RESOLUTION ODE OF CLASSIC FOURTH-ORDER RUNGE–KUTTA METHOD (RK4)
In this subsection, we derive the high-resolution ODE of the classic fourth-order Runge–Kutta method. We prove the following result:

Theorem 3. The high-resolution ODE of the classic fourth-order Runge–Kutta method (RK4):
$$w^+ = w - \frac{\eta}{6}\big(v(w) + 2v_2(w) + 2v_3(w) + v_4(w)\big), \tag{28}$$
where
$$v_2(w) = v\!\left(w - \frac{\eta}{2}v(w)\right), \quad v_3(w) = v\!\left(w - \frac{\eta}{2}v_2(w)\right), \quad v_4(w) = v\big(w - \eta v_3(w)\big), \tag{29}$$
is
$$\dot w = -v(w) + O(\eta^4). \tag{30}$$
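Before turning to the proof, the update of Equations (28)-(29) can be written as the following minimal sketch (our own illustration; `v` is a callable returning the pseudo-gradient):

```python
# Classic fourth-order Runge-Kutta step, matching Equations (28)-(29).
def rk4_step(w, v, eta):
    v1 = v(w)
    v2 = v(w - (eta / 2.0) * v1)
    v3 = v(w - (eta / 2.0) * v2)
    v4 = v(w - eta * v3)
    return w - (eta / 6.0) * (v1 + 2.0 * v2 + 2.0 * v3 + v4)
```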
Proof. We use the following Taylor expansion:
$$v(w + \delta) = v(w) + \nabla v(w)\delta + \frac{1}{2}\nabla^2 v(w)(\delta, \delta) + \frac{1}{6}\nabla^3 v(w)(\delta, \delta, \delta) + O(\|\delta\|^4), \tag{31}$$
where $\nabla^2 v(w) : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ is a symmetric bilinear form, and $\nabla^3 v(w) : \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ is a symmetric trilinear form. With this formula we have:
$$v_4(w) = v(w) - \eta\nabla v(w)v_3(w) + \frac{\eta^2}{2}\nabla^2 v(w)(v_3(w), v_3(w)) - \frac{\eta^3}{6}\nabla^3 v(w)(v_3(w), v_3(w), v_3(w)) + O(\eta^4), \tag{32}$$
$$v_3(w) = v(w) - \frac{\eta}{2}\nabla v(w)v_2(w) + \frac{\eta^2}{8}\nabla^2 v(w)(v_2(w), v_2(w)) - \frac{\eta^3}{48}\nabla^3 v(w)(v_2(w), v_2(w), v_2(w)) + O(\eta^4), \tag{33}$$
$$v_2(w) = v(w) - \frac{\eta}{2}\nabla v(w)v(w) + \frac{\eta^2}{8}\nabla^2 v(w)(v(w), v(w)) - \frac{\eta^3}{48}\nabla^3 v(w)(v(w), v(w), v(w)) + O(\eta^4). \tag{34}$$
Putting them together we have:
$$\begin{aligned} v_4(w) + 2v_3(w) + 2v_2(w) + v(w) = \ & 6v(w) - \eta\nabla v(w)\big(v_3(w) + v_2(w) + v(w)\big) \\ & + \frac{\eta^2}{2}\left(\nabla^2 v(w)(v_3(w), v_3(w)) + \frac{1}{2}\nabla^2 v(w)(v_2(w), v_2(w)) + \frac{1}{2}\nabla^2 v(w)(v(w), v(w))\right) \\ & - \frac{\eta^3}{4}\nabla^3 v(w)(v(w), v(w), v(w)) + O(\eta^4), \end{aligned} \tag{35}$$
$$v_3(w) + v_2(w) + v(w) = 3v(w) - \frac{\eta}{2}\nabla v(w)\big(v_2(w) + v(w)\big) + \frac{\eta^2}{4}\nabla^2 v(w)(v(w), v(w)) + O(\eta^3), \tag{36}$$
$$v_2(w) + v(w) = 2v(w) - \frac{\eta}{2}\nabla v(w)v(w) + O(\eta^2). \tag{37}$$
Bringing Equation (37) into Equation (36), we obtain:
$$v_3(w) + v_2(w) + v(w) = 3v(w) - \eta\nabla v(w)v(w) + \frac{\eta^2}{4}\nabla^2 v(w)(v(w), v(w)) + \frac{\eta^2}{4}(\nabla v(w))^2 v(w) + O(\eta^3). \tag{38}$$
Putting Equation (38) into Equation (35) and expanding the update of Equation (28) in powers of $\eta$, one verifies, as in the second-order case, that the correction terms $f_1, f_2, f_3$ of the high-resolution ODE all vanish, which yields $\dot w = -v(w) + O(\eta^4)$ and completes the proof.

1. What is the focus of the paper regarding adversarial domain adaptation training?
2. What are the strengths of the proposed approach, particularly in its theoretical analysis and novelty?
3. What are the weaknesses of the paper, especially regarding its limitations in practical applications?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?

Summary Of The Paper
This manuscript considers the adversarial domain adaptation training problem, specifically the gradient reversal method, from the perspective of game theory. The authors show that gradient-based optimizers without an upper bound on the learning rate violate asymptotic convergence guarantees to local NEs. The authors further show that these constraints can be lifted by higher-order ODE solvers. In the experimental part, the authors evaluate their method, i.e., Runge–Kutta ODE solvers of order 2 and 4, against general-purpose and game-specific gradient-based optimizers on the MNIST/USPS digits benchmark. Furthermore, they show the hyper-parameter robustness of their method, and finally, the method is tested on more complex image and NLP datasets and compared to current SOTA methods. Overall, better results are achieved.
Review
Strengths:
- Well written and understandable
- Theoretically sound in both problem description and solution
- Analyzing domain adaptation learning from a game perspective and analyzing the stability of the optimizer with the gradient reversal layer model is novel
- Relevant related work is considered and compared
- The experimental part is sufficient and supports the theoretical claims
Weaknesses:
- The theoretical analysis is limited to full-batch training, while in practice, as well as in the experiments, mini-batches are used. However, the authors mention this in the paper.
- As with gradient-based methods, this method could also converge to a non-Nash equilibrium, as outlined in [1]; however, this is likely a rare case and not unique to this proposal.
[1] Mazumdar et al. On Finding Local Nash Equilibria (and Only Local Nash Equilibria) in Zero-Sum Games. https://arxiv.org/abs/1901.00838 |
ICLR

Title
Prototype Based Classification from Hierarchy to Fairness
Abstract
Artificial neural nets can represent and classify many types of high-dimensional data but are often tailored to particular applications – e.g., for “fair” or “hierarchical” classification. Once an architecture has been selected, it is often difficult for humans to adjust models for a new task; for example, a hierarchical classifier cannot be easily transformed into a fair classifier that shields a protected field. Our contribution in this work is a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. We demonstrate that CSNs reproduce state-of-the-art results in fair classification when enforcing concept independence, may be transformed into hierarchical classifiers, or may even reconcile fairness and hierarchy within a single classifier. The CSN is inspired by and matches the performance of existing prototype-based classifiers that promote interpretability.
1 INTRODUCTION
Neural networks are able to learn rich representations of data that support highly accurate classification; however, understanding or controlling what neural nets learn remains challenging. Some techniques offer insight into pre-trained models by uncovering directions within latent spaces that correspond to particular concepts, image manipulations, or more (Goetschalckx et al., 2019; Kim et al., 2018), while approaches focused on interpretability provide techniques that are more comprehensible to humans (Li et al., 2018; Chen et al., 2019). While these methods provide insight, they fail to offer control: humans observe learned patterns but are unable to guide models such that learned relationships are useful for a particular setting or task.
Another line of work has advanced the design of models for particular types of classification tasks (such as fair or hierarchical classification) but these techniques are often developed with only one problem in mind (Zemel et al., 2016; Xie et al., 2017; Hase et al., 2019). For example, models built for fair classification (predicting an outcome regardless of information about a protected field) are only used to enforce independence of concepts rather than hierarchy. Thus, humans may exert control over learned representations by selecting an appropriate technique rather than tuning training parameters within the same technique.
We have designed a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. CSNs use prototype-based representations, a technique employed in interpretable neural networks in prior art (Li et al., 2018; Chen et al., 2019; Garnot & Landrieu, 2020). A single CSN uses sets of prototypes in order to simultaneously learn multiple concepts; classification within a single concept (e.g., “type of animal”) is performed by projecting encodings into a concept subspace defined by the prototypes for that concept (e.g., “bird,” “dog,” etc.). Lastly, CSNs use a measure of concept subspace alignment to guide concept relationships such as independence or hierarchy.
In our experiments, CSNs performed comparably to state-of-the art in fair classification, despite prior methods only being designed for this type of problem. In applying CSNs to hierarchical classification tasks, networks automatically deduced interpretable representations of the hierarchical problem structure, allowing them to outperform state-of-the-art, for a given neural network backbone, in terms of both accuracy and average cost of errors on the CIFAR100 dataset. Lastly, in
a human-motion prediction task, we demonstrated how a single CSN could enforce both fairness (to preserve participant privacy) and hierarchy (to exploit a known taxonomy of tasks). Our findings suggest that CSNs may be applied to a wide range of problems that had previously only been addressed individually, or not at all.
2 RELATED WORK
2.1 INTERPRETABILITY AND PROTOTYPE NETWORKS
Numerous post-hoc explanation techniques fit models to pre-trained neural nets; if humans understand these auxiliary models, they can hypothesize about how the neural nets behave (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, techniques in which explanations are decoupled from underlying logic may be susceptible to adversarial attacks or produce misleading explanations (Heo et al., 2019; Slack et al., 2020).
Unlike such decoupled explanations, interpretability research seeks to expose a model’s reasoning. In this work we focus on prototype-based latent representations in neural nets. There is a long history of learning discrete representations in continuous spaces, originating under “vector quantization” literature (Kohonen, 1990; Schneider et al., 2009). More recently, the prototype case network (PCN) comprised an autoencoder model that clustered encodings around understandable, trainable prototypes, with classifications made via a linear weighting of the distances from encodings to prototypes (Li et al., 2018). Further research in image classification extended PCNs to use convolutional filters as prototypes and for hierarchical classification in the hierarchical prototype network (HPN) (Chen et al., 2019; Hase et al., 2019). Lastly, Garnot & Landrieu (2020) use prototypes in Metric-Guided Prototype Learning (MGP) in conjunction with a loss function to cluster prototypes to minimize user-defined costs.
Our model similarly uses trainable prototypes for classification, but differs from prior art in two respects. First, we modify the standard PCN architecture to support other changes, without degrading classification performance. Second, like HPNs (but not PCNs or MGP), CSNs leverage multiple sets of prototypes to enable hierarchical classification but also allow for non-hierarchical concept relationships.
2.2 FAIR AND HIERARCHICAL CLASSIFICATION
AI fairness research considers how to mitigate undesirable patterns or biases in machine learning models. Consider the problem of predicting a person’s credit risk: non-causal correlations between age and risk may lead AI models to inappropriately penalize people according to their age (Zemel et al., 2016). The problem of fair classification is often framed as follows: given inputs, x, which are informative of a protected field, s, and outcome, y, predict y from x without being influenced by s (Zemel et al., 2013). Merely removing s from x (e.g., not including age as an input to a credit predictor) rarely removes all information about s, so researchers have developed a variety of techniques to create representations that “purge” information about s (Zemel et al., 2016; Xie et al., 2017; Jiang et al., 2020).
Hierarchical classification solves a different problem: given a hierarchical taxonomy of classes (e.g., birds vs. dogs at a high level and sparrows vs. toucans at a low level), output the correct label at each classification level. Neural nets using convolution and recurrent layers in specialized designs have achieved remarkable success in hierarchical image classification (Zhu & Bain, 2017; Guo et al., 2018). The hierarchical prototype network (HPN) uses prototypes and a training routine based upon conditional subsets of training data to create hierarchically-organized prototypes (Hase et al., 2019). Garnot & Landrieu (2020) also use prototypes for hierarchical classification in Metric-Guided Prototype Learning (MGP) by adjusting the training loss to guide prototype arrangement. Neither HPN nor MGP explicitly models relationships between multiple subsets of prototypes. Lastly, recent works propose hyperbolic latent spaces as a natural way to model hierarchical data (Dai et al., 2021; Mathieu et al., 2019; Nickel & Kiela, 2017; Liu et al., 2020). Our method, conversely, relies upon concepts from Euclidean geometry. Extending the principle of subspace alignment that we develop to non-Euclidean geometric spaces is a promising direction but is beyond the scope of this work.
3 TECHNICAL APPROACH
In this section, we outlined the design of the CSN, which was inspired by desires for both interpretable representations and explicit concept relationships. First, we wished for interpretable representations, so we built upon the PCN design, with modifications. Second, we explicitly encoded relationships between concepts by introducing multiple sets of prototypes, instead of just one in PCNs. Third, we enabled guidance of the concept relationships by modifying the CSN training loss. Together, these changes supported not only interpretable classification, but also provided a flexible framework for a single model architecture to learn different concept relationships.
3.1 CONCEPT SUBSPACE CLASSIFICATION
A CSN performing a single classification task (e.g., identifying a digit in an image) is defined by three sets of trainable weights. First, an encoder parametrized by weights $\theta$, $e_\theta$, maps from inputs of dimension $X$ to encodings of dimension $Z$: $e_\theta : \mathbb{R}^X \to \mathbb{R}^Z$. Second, a decoder parametrized by weights $\varphi$, $d_\varphi$, maps from encodings to reconstructed inputs: $d_\varphi : \mathbb{R}^Z \to \mathbb{R}^X$. Third, there exists a set of $k$ trainable prototype weights, $\mathbf{p}$, that are each $Z$-dimensional vectors: $p_1, p_2, \ldots, p_k \in \mathbb{R}^Z$. This architecture resembles that of the PCN, but without the additional linear classification layer (Li et al., 2018).
Here, we focus briefly on the set of prototypes, $\mathbf{p}$. Given a set of $k$ prototypes in $\mathbb{R}^Z$, we define a "concept subspace," $C$, as follows:
$$v_i = p_i - p_1 \quad \forall i \in [2, k] \tag{1}$$
$$C = \Big\{x \in \mathbb{R}^Z \ \Big|\ x = p_1 + \sum_{i \in [2,k]} \lambda_i v_i \text{ for } \lambda_i \in \mathbb{R} \ \forall i\Big\} \tag{2}$$
C is the linear subspace in RZ defined by starting at the first prototype and adding linear scalings of vector differences to all other prototypes. We call this subspace a concept subspace because it represents a space of encodings between prototypes defining a single concept (e.g., prototypes for digits 0, 1, 2, etc. define a concept subspace for digit classification).
A CSN’s architecture — consisting of an encoder, a decoder, and a set of prototypes and the associated concept subspace — enables two types of functionality: the encoder and decoder may be composed to reconstruct inputs via their latent representations, and CSNs may perform classification tasks by mapping an input, $x$, to one of $Y$ discrete categories. Classification is performed by first encoding an input into a latent representation, $z = e_\theta(x)$. The squared $l_2$ distance from $z$ to each prototype is then calculated, yielding $k$ distance values: $d_i(z, \mathbf{p}) = \|z - p_i\|_2^2$, $i \in [1, k]$. These distances are mapped to a probability distribution, $P_K(i)$, $i \in [1, k]$, by taking the softmax of their negatives. Lastly, if there are more prototypes than classes (e.g., two prototypes for dogs, two for cats, etc.), the distribution over $k$ is converted to a distribution over $Y$ categories by summing the probabilities for prototypes belonging to the same class.
For single-concept classification, CSNs differ from PCNs primarily by removing the linear layer that PCNs used to transform distances to prototypes into classifications. We found this unnecessary for high classification accuracy (Appendix A) and instead directly used negative distances. Without the linear layer, CSN classification is equivalent to projecting encodings, $z$, onto a concept subspace before calculating distances. The distances between the projected encoding, dubbed $z_{proj}$, and the prototypes induce the same softmax distribution as when the orthogonal component remains. Indeed, we find projection more intuitive, since only the component of $z$ that corresponds to the subspace is used for classification, and we list projection as a standard step in the remainder of this paper. A simple example of projecting an encoding and calculating distances to prototypes is shown in Figure 1a.
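To make the mechanics concrete, the following is a minimal PyTorch-style sketch of this projection-then-softmax classification; the function name and tensor shapes are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def classify(z, prototypes):
    # z: encoding of shape (Z,); prototypes: (k, Z), all in one concept subspace.
    V = (prototypes[1:] - prototypes[0]).T              # difference vectors, (Z, k-1)
    Q, _ = torch.linalg.qr(V)                           # orthonormal basis of the subspace
    z_proj = prototypes[0] + Q @ (Q.T @ (z - prototypes[0]))
    d = ((z_proj - prototypes) ** 2).sum(dim=1)         # squared l2 distances, (k,)
    return F.softmax(-d, dim=0)                         # probability per prototype
```

If several prototypes share a class, the per-prototype probabilities would then be summed per class, as described above.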
For some tasks, we used an encoder design from variational-autoencoders (VAEs) in order to regularize the distribution of encodings to conform to unit Gaussians (Kingma & Welling, 2014). By default, this regularization loss was set to 0, but it sometimes proved useful in some domains to prevent overfitting (as detailed in experiments later). We emphasize that CSNs are discriminative, rather than generative, models, so we did not seek to learn a latent space from which to sample.
3.2 MULTI-CONCEPT LEARNING
We defined the CSN architecture for single classification tasks in the previous section; here, we explain how a CSN may be used for multiple classification tasks. (For example, consider a scenario involving classifying both what type of bird a photo depicts and whether the photo was taken outdoors or indoors.) Extending CSNs to support multiple classifications requires the addition of new sets of prototypes. This is the primary contribution of our work.
Multiple classification tasks are performed by defining a set consisting of sets of prototypes: P = {p1, ...,pc}, with a set of prototypes for each of c classification tasks. A classification task is performed by using the CSN’s encoder to generate an encoding, z, and projecting z into the concept subspace defined by the set of prototypes particular to the given task. Figure 1 (b-d) depicts simplified examples of two concept subspaces. In each example, each concept uses three prototypes, yielding two planar concept spaces (one of which corresponds to the x − y plane for illustrative purposes); z may be projected into either plane depending upon the classification task at hand.
While the prototypes in different sets are separate from each other, correlations present in training data may lead to a range of relationships among prototypes. Returning to the previous example scenario, prototypes of birds may represent canaries and toucans, while prototypes of indoor and outdoor scenes may represent living rooms and jungles; each set of prototypes is independent in principle, but in reality, prototypes may represent canaries in living rooms and toucans in jungles. In fact, two sets of prototypes can exhibit a range of relationships from highly correlated to fully independent, as shown in Figure 1.
We defined a metric, concept subspace alignment, to reflect this range of relationships. Mathematically, the alignment of two subspaces is the mean of the cosine squared of the angle between all pairs of vectors drawn from the basis of each subspace. Given orthonormal bases, efficiently computed via QR factorization, Q1 and Q2, of ranks m and n, we define alignment as follows:
$$a(Q_1, Q_2) = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\big(Q_1^\top Q_2[i,j]\big)^2 \tag{3}$$
Given the range of values for the cosine squared function, alignment values range from 0 to 1 for orthogonal and parallel subspaces, respectively. Intuitively, orthogonality lends itself well to independent concepts and therefore supports fair classification, whereas parallel subspaces naturally correspond to hierarchical classification. We elaborated on this intuition in Section 3.4.
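A minimal sketch of this alignment score, assuming each concept's prototypes are stored as a (k, Z) tensor (an illustration, not the authors' code):

```python
import torch

def alignment(protos_1, protos_2):
    # Orthonormal bases of the two concept subspaces via QR factorization.
    Q1, _ = torch.linalg.qr((protos_1[1:] - protos_1[0]).T)   # (Z, m)
    Q2, _ = torch.linalg.qr((protos_2[1:] - protos_2[0]).T)   # (Z, n)
    # Mean squared cosine between all pairs of basis vectors, Equation (3):
    return ((Q1.T @ Q2) ** 2).mean()    # 0 = orthogonal, 1 = parallel subspaces
```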
3.3 TRAINING PROCEDURE
When training a CSN, we assume access to a set of training data, (X,Y) for Y = (Y1, Y2, ...Yc). For each entry in the dataset, there is an input x and a label yi, for each of c classification tasks.
We trained CSNs in an end-to-end manner to minimize a single loss function, defined in Equation 4. The four terms in the loss function were as follows: 1) reconstruction error; 2) the loss introduced for the PCN, encouraging classification accuracy and the clustering of encodings around prototypes (applied within each concept subspace); 3) a KL divergence regularization term; and 4) a term penalizing alignment between concept subspaces. Each term was weighted by a choice of real-
valued λs. We emphasize that the PCN loss — clustering and classification accuracy, defined in Equation 7 of Li et al. (2018) — is calculated within each concept subspace using the projections of encodings; thus, encodings were encouraged to cluster around prototypes only along dimensions within the subspace. The encoder, decoder, and prototype weights were trained simultaneously.
$$\begin{aligned} l(X, Y, \theta, \varphi, P) = \ & \frac{\lambda_0}{|X|}\sum_{x \in X}\big(d_\varphi(e_\theta(x)) - x\big)^2 \\ & + \sum_{i \in [1,C]} \lambda_{P_i}\,\mathrm{PCN}\big(\mathrm{proj}(e_\theta(X), \mathbf{p}_i), Y_i\big) \\ & + \sum_{i \in [1,C]} \lambda_{KL_i}\,\mathrm{KL}(X, \mathbf{p}_i) \\ & + \sum_{i \in [1,C]} \sum_{j \in [1,C]} \lambda_{A_{ij}}\,a(Q_i, Q_j) \end{aligned} \tag{4}$$
The KL regularization term mimics training losses often used in VAEs that penalize the divergence between the distribution of encodings and a zero-mean unit Gaussian (Kingma & Welling, 2014). In our case, we wished to induce a similar distribution of encodings, but centered around prototypes rather than the origin. Furthermore, rather than induce a Gaussian distribution within a concept subspace (which would dictate classification probabilities and therefore potentially worsen classification accuracy), we wished to regularize the out-of-subspace components of encodings.
Concretely, we implemented this regularization loss in three steps. First, we computed the orthogonal component of an encoding as zorth = z − zproj . We then computed the KL divergence between the distribution of zorth and unit Gaussians centered at each prototype in each subspace. Finally, we took the softmax over distances between encodings and prototypes in order to only select the closest prototype to the encoding; we then multiplied the softmax by the divergences to enforce that encodings were distributed as unit Gaussians around the nearest prototype in each subspace. Together, these operations led the distributions out of each subspace to conform to unit Gaussians around each prototype. As confirmed in later experiments, this component was crucial in training fair classifiers.
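The sketch below shows one plausible reading of these three steps, assuming a VAE-style encoder that outputs a mean and log-variance; all names and shapes are our assumptions rather than the paper's implementation:

```python
import torch

def kl_regularizer(mu, logvar, prototypes, Q):
    # mu, logvar: (B, Z) encoder outputs; prototypes: (k, Z); Q: (Z, m) orthonormal basis.
    z_proj = prototypes[0] + (mu - prototypes[0]) @ (Q @ Q.T)
    z_orth = mu - z_proj                                         # out-of-subspace component
    d = ((mu[:, None, :] - prototypes[None]) ** 2).sum(-1)       # (B, k) distances
    w = torch.softmax(-d, dim=1)                                 # weight on closest prototype
    # Closed-form KL between N(z_orth, diag(exp(logvar))) and a unit Gaussian
    # centered at each prototype:
    diff = (z_orth[:, None, :] - prototypes[None]) ** 2          # (B, k, Z)
    kl = 0.5 * (logvar.exp()[:, None] + diff - 1.0 - logvar[:, None]).sum(-1)
    return (w * kl).sum(dim=1).mean()
```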
3.4 HIERARCHICAL AND FAIR CLASSIFICATION
We conclude this section by demonstrating how CSNs may support hierarchical or fair classifications. Hierarchical and fair classification may be thought of as extremes along a spectrum of concept alignment. In hierarchical classification, concepts are highly aligned and therefore parallel: the difference between a toucan and a Dalmatian is similar to the difference between a generic bird and dog, and so the vector differences between prototypes associated with different classes should also be parallel (e.g., “bird” - “dog” = “toucan” - “Dalmatian.”). In fair classification, concepts are not aligned: switching belief about someone’s sex should not alter predictions about their income. Thus, based on the classification task, moving an encoding relative to one subspace should either affect (for hierarchical) or not affect (for fair) that encoding’s projection onto the other subspace. We provide a geometric interpretation of these two tasks in Figure 1 b and d.
CSNs can be trained to adopt either form of concept relationship by penalizing or encouraging concept subspace alignment (already present as a(Qi, Qj) in the training loss). Our single model reconciles these two types of problems by viewing them as opposite extremes along a spectrum of concept relationships that our technique is able to learn; this is the main contribution of our work.
4 RESULTS
Our experiments were divided into four parts. First, we demonstrated how CSNs matched standard performance on single classification tasks: in other words, that using a CSN did not degrade performance. We omit these unsurprising results from the paper; full details are included in Appendix A. Second, we showed that CSNs matched state-of-the-art performance in two fair classification tasks. Third, we used CSNs for hierarchical classification tasks, exceeding performance demonstrated by prior art along several metrics. Fourth, we showed how CSNs enabled both fair and hierarchical classification in a dataset describing human motion in an assembly task that exploited hierarchical knowledge while preserving participant anonymity. Implementation details of CSNs in all experiments are included in Appendix F.

Table 1: Mean Adult dataset fairness results.
Model     y Acc.  s Acc.  D.I.  DD-0.5
CSN       0.85    0.67    0.83  0.16
Adv.      0.85    0.67    0.87  0.16
VFAE      0.85    0.70    0.82  0.17
FR Train  0.85    0.67    0.83  0.16
Wass. DB  0.81    0.67    0.92  0.08
Random    0.76    0.67

Table 2: Mean German dataset fairness results.
Model     y Acc.  s Acc.  D.I.  DD-0.5
CSN       0.73    0.81    0.70  0.10
Adv.      0.73    0.81    0.63  0.10
VFAE      0.72    0.81    0.47  0.23
FR Train  0.72    0.80    0.55  0.16
Wass DB   0.72    0.81    0.33  0.02
Random    0.70    0.81
4.1 FAIR CLASSIFICATION
We evaluated CSN’s performance in fair classification tasks on the Adult and German datasets. These datasets are commonly used in the fairness literature and contain data that can be used to predict people’s income or credit risks (Dua & Graff, 2017). We compared CSN performance to our implementations of an adversarial purging technique (Adv.), the variational fair autoencoder (VFAE), Wasserstein Fair Classification (Wass. DB), and a mutual-information-based fairness approach (FR Train) (Xie et al., 2017; Zemel et al., 2016; Jiang et al., 2020; Roh et al., 2020). Implementation details of fair classification baselines and full results including standard deviations are included in Appendix G.
For the Adult dataset, the protected attribute was sex, and for the German dataset, the protected attribute was a binary variable indicating whether the person was older than 25 years of age. In evaluation, we measured y Acc., the accuracy of predicting income or credit, s Acc., the accuracy of a linear classifier trained to predict the protected field from the latent space, disparate impact (DI), as defined in Roh et al. (2020), and demographic disparity (DD-0.5), as defined by Jiang et al. (2020).
Mean results over 20 trials for both datasets were included in Tables 1 and 2. In both datasets, we observed that CSNs matched state-of-the-art performance. CSNs produced high y Acc., indicating high task performance for predicting income or credit. Furthermore, fairness measures demonstrate that CSNs purged protected information successfully (low s Acc.) and achieved high D.I. and low DD-0.5, as desired. A visualization of the latent space of a fair classifier, trained on the German dataset, is shown in Figure 2 and confirmed that CSNs learned orthogonal concept subspaces.
In addition to reproducing the state of the art, we conducted an ablation study to demonstrate the importance of two terms in our training loss: the alignment and KL losses. Using the German dataset, we trained 20 CSNs, setting the KL, alignment, or both loss weights to 0. The mean results of these trials are reported in Table 3.
Table 3 demonstrates the necessity of both KL and alignment losses to train fair predictors (with higher disparate impact and lower demographic disparity values). Including both loss terms resulted
in the fairest predictors; removing those losses could enable better classification accuracy, but at the expense of fairness. This confirms geometric intuition: the alignment loss created orthogonal subspaces and the KL regularization created distributional equivalence based on the subspaces. Jointly, these losses therefore produced statistical independence.
Table 3 also includes causal analysis of trained CSNs via the ρ metric. Intuitively, this metric reflected the learned correlation between s and y; it was calculated by updating embeddings in the CSN latent space along the gradient of s and recording the change in prediction over y. We reported the ratio of these changes as ρ; as expected, enforcing orthogonality via alignment loss led to ρ values of 0. This technique is inspired by work in causally probing language models (e.g., Tucker et al. (2021)); full details for calculating ρ are included in Appendix D.
4.2 HIERARCHICAL CLASSIFICATION
We compared CSNs to our implementation of HPNs and results for Metric-Guided Prototype Learning (MGP), reported by Garnot & Landrieu (2020), for hierarchical classification tasks. Our HPN baseline used the same architecture as CSN (same encoder, decoder, and number of prototypes). It differed from CSNs by setting alignment losses to 0 and by adopting the conditional probability training loss introduced by Hase et al. (2019). We further included results of a randomly-initialized CSN under “Init.” in tables. In these experiments, we sought to test the hypothesis that CSNs with highly aligned subspaces would support hierarchical classification, just as orthogonal subspaces enabled fair classification.
In addition to standard accuracy metrics, we measured two aspects of CSNs trained on hierarchical tasks. First, we recorded the "average cost" (AC) of errors. AC is defined as the mean distance between the predicted and true label in a graph of the hierarchical taxonomy (e.g., if the true and predicted label shared a common parent, the cost was 2; if the common ancestor was two levels up, the cost was 4, etc.) (Garnot & Landrieu, 2020). Second, we measured the quality of trees derived from the learned prototypes. After a CSN was trained, we defined a fully-connected graph G = (V, E) with vertices V = P ∪ {0} (the set of all prototypes and a point at the origin) and undirected edges between each pair of nodes with lengths equal to the l2 distance between nodes in the latent space. We recovered the minimum spanning tree, T, from G (which is unique given distinct edge lengths, which we observed in all experiments), and converted all edges to directed edges through a global ordering of nodes. Lastly, we calculated the graph edit distance (ED) between isomorphisms of the recovered tree and the ground-truth hierarchical tree (with edges that obeyed the same ordering constraints) (Abu-Aisheh et al., 2015). Intuitively, this corresponded to counting how many edges had to be deleted or added to the minimum spanning tree to match the taxonomy tree, ignoring edge lengths, with a minimum value of 0 for perfect matches.
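A minimal sketch of the tree-recovery step, assuming the prototypes are stacked into a (num_prototypes, Z) array and using SciPy's minimum-spanning-tree routine (an illustration, not the authors' code):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def recover_tree(prototypes):
    # Complete graph over all prototypes plus the origin, weighted by l2 distance.
    nodes = np.vstack([prototypes, np.zeros((1, prototypes.shape[1]))])
    dist = np.linalg.norm(nodes[:, None] - nodes[None], axis=-1)
    # The MST is unique when all edge lengths are distinct, as observed in experiments.
    return minimum_spanning_tree(dist)
```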
As a basic test of CSNs in hierarchical classification tasks, we created simple hierarchies from the MNIST Digit and Fashion datasets. The Digit dataset used the standard low-level labels of digit, supplemented with high-level labels of parity (two classes); the Fashion dataset used the standard low-level labels for item of clothing, with a ternary label for a high-level classification of "tops" (t-shirts, pullovers, coats, and shirts), "shoes" (sandals, sneakers, and ankle boots), or "other" (trousers, dresses, and bags).
Mean results from 10 trials for both MNIST datasets were included in Tables 4 and 5. The HPN baselines were implemented using the same number of prototypes as the CSNs being compared against. Both tables show that CSNs exhibit comparable or better accuracy than HPNs for both the low-level (Y0) and high-level (Y1) classification tasks. In addition, the average cost (A.C.) and edit distance (E.D.) values show that CSNs recovered minimum spanning trees that nearly perfectly matched the ground truth tree, and that when CSNs did make errors, they were less “costly” than errors made by HPNs (although admittedly, a dominant force in A.C. is classification accuracy alone). A 2D visualization of the latent space of a CSN trained on the Digit task is shown in Figure 3: encodings for particular digits clustered around prototypes for those digits (X), while prototypes for even and odd digits (circles) separated the digit clusters into the left and right halves of the latent space. Visualizations of latent spaces for more fair and hierarchical classification tasks are included in Appendix C; they confirmed the theoretical derivations of orthogonal and parallel subspaces.
Lastly, we trained 10 CSNs and HPNs on the substantially more challenging CIFAR100 dataset. The dataset is inherently hierarchical: the 100 low-level classes are grouped into 20 higher-level
Table 4: MNIST digit hierarchy mean (stdev) over 10 trials. First two columns × 100.

Table 5: MNIST fashion hierarchy mean (stdev) over 10 trials. First two columns × 100.

Figure 3: 2D latent space for hierarchical digit classification creates clusters around even and odd prototypes (circles on the right and left, respectively) and digit prototypes (X).

Table 6: CIFAR100 hierarchy results (top half: two-level hierarchy; bottom half: five-level hierarchy).
       Y0%         Y1%         A.C.         E.D.
CSN    0.76 (0.0)  0.85 (0.0)  0.76 (0.02)  11.2 (7)
HPN    0.71 (0.0)  0.80 (0.0)  0.97 (0.04)  165.0 (3)
Init.  0.01        0.05        3.88         200
CSN    0.78 (0.0)  0.88 (0.0)  0.91 (0.0)   6.0 (8.2)
MGP    0.76                    1.05
Init.  0.01        0.05        7.33         258
classes, each of size 5. Using a resnet18 encoder, pre-trained on ImageNet, in conjunction with 100 prototypes for low-level classification and 20 for high-level, we trained CSNs and HPNs. CSNs additionally used an alignment loss weight of -10 to encourage parallelism between the two concept subspaces. The mean results over 10 trials are shown in the top half of Table 6.
We also compared CSNs to MGP and other hierarchical classifiers using the CIFAR100 dataset and a deeper hierarchy, consisting of 5 levels of sizes 100, 20, 8, 4, and 2, as done by Garnot & Landrieu (2020). The additional information provided by this deeper hierarchy resulted in improved classification performance. Median results (as done by Garnot & Landrieu (2020)) for 10 CSNs using this dataset are shown in the bottom half of Table 6. Changing the hierarchy changed how average cost was calculated, so values from the top and bottom halves of the table should not be compared. Within the bottom half, we note that CSNs outperformed MGP on both A.C. and classification accuracy. Furthermore, according to values generated in the extensive experiments conducted by Garnot & Landrieu (2020), CSNs outperformed numerous other baselines, including HXE and soft-labels (Bertinetto et al. (2020)), YOLO (Redmon et al. (2016)), and a hyperspherical prototype network (Mettes et al. (2019)), all of which were built upon a resnet18 pretrained on ImageNet. In fact, our CSNs achieve SOTA classification accuracy for any classifier built upon a resnet18 backbone, without data augmentation. Furthermore, the decrease in A.C. is especially surprising given that other techniques explicitly optimized for average cost reductions, while CSNs merely trained on classification at each level. Notably, the decrease in A.C. is not fully explained by the increase in accuracy, indicating that CSN not only exhibited higher accuracy but also, when it did make mistakes, those mistakes were less severe.
Lastly, we note that CSNs support a range of learned relationships other than fair or hierarchical. The varying values of ρ in Table 3 indicate that CSNs may learn different relationships when alignment loss is set to 0. However, in general, one could train models to learn desired relationships by penalizing or rewarding alignment relative to some intercept. We trained and evaluated such models in Appendix E and found that models indeed learned the desired alignment.
4.3 FAIR AND HIERARCHICAL CLASSIFICATION
Prior experiments demonstrated how CSNs could solve different classification problems separately; in this section, we applied a single CSN to a task that required it to use both fair and hierarchical classification. Intuitively, fairness was used to protect privacy, while hierarchical structure was used for better performance.
We used a dataset describing human motion in a bolt-placement task. The dataset was gathered from a setup similar to that of Lasota et al. (2014): motion was recorded at 50 Hz, using the 3D location of each of the 8 volunteer participants' gloved right hands as they reached towards one of 8 holes arranged
in a line to place a bolt in the hole. The bolt holes may be thought of hierarchically by dividing destinations into left vs. right (LR) groupings, in addition to the label of the specific hole.
Initial exploration of the dataset showed promising results for prototype-based classification: the target locations were identified with 81% accuracy, and further analysis showed that prototypes corresponded to human-like motions (details in Appendix B). Troublingly, however, a trained CSN could identify the participant with over 60% accuracy, which posed privacy concerns. Nevertheless, from a robotic-safety perspective, it is important for robots to exploit as much information as possible to avoid collisions with humans.
Ultimately, we wished to predict which hole a participant was reaching towards given the past 1 second of their motion, while preserving privacy and exploiting hierarchical structure. Thus, we designed a CSN with three concept subspaces: one for predicting the bolt (8 prototypes), one for predicting the high-level grouping of a left or right destination (2 prototypes), and one for predicting the participant id (8 prototypes). We enforced that the bolt and LR subspaces were parallel while the bolt and participant subspaces were orthogonal.
Results from our experiments are presented in Table 7. Means and standard deviations over 10 trials for each row are reported. (A.C. was calculated only for prediction errors, due to the large differences in accuracy rates across models.) When training a CSN with no constraints on subspace alignment, we found a highly accurate but unfair predictor (81% accuracy for bolt location, but sub-optimal disparate impact and demographic disparity values). Switching the CSNs to be fair classifiers by only enforcing orthogonality between bolts and participants yielded a fair classifier (illustrated by ρ, D.I., and DD-0.5), but with much worse bolt prediction accuracy (44%). However, by using a hierarchical subspace for LR groupings, the final CSN both improved classification accuracy and decreased the average cost of errors, while maintaining desired fairness characteristics.
5 CONTRIBUTIONS
The primary contribution of this work is a new type of model, the Concept Subspace Network, that supports inter-concept relationships. The CSN design, motivated by prior art in interpretable neural network models, uses sets of prototypes to define concept subspaces in neural net latent spaces. The relationships between these subspaces may be controlled during training in order to guide desired model characteristics. Critically, we note that two popular classification problems — fair and hierarchical classification — are located at either end of a spectrum of concept relationships, allowing CSNs to solve each type of problem in a manner on par with techniques that had previously been designed to solve only one. Furthermore, a single CSN may exhibit multiple concept relationships, as demonstrated in a privacy-preserving hierarchical classification task.
While we have demonstrated the utility of CSNs within several domains, numerous extensions could improve their design. First, the idea of subspace alignment could be applied to non-Euclidean geometries like hyperbolic latent spaces that are sometimes used for hierarchical classification. Second, CSNs could additionally benefit from relaxation of some simplifying assumptions: notably, allowing for more complex relationships rather than those defined by subspace cosine similarity, or using adversarial approaches for distributional regularization rather than only supporting unit Gaussians.
Lastly, we note that CSNs, while designed with ethical applications such as fair classification in mind, may lead to undesired consequences. For example, malicious actors could enforce undesirable concept relationships, or simply observing emergent concept relationships within a CSN could reinforce undesirable correlations. In addition, although prototypes encourage interpretability, which we posit can be used for good, the reductive nature of prototypes may be problematic when classifying human-related data (e.g., the COMPAS fair classification task we avoided).
A SINGLE-CONCEPT CLASSIFICATION BASELINES
In addition to the specialized fair and hierarchical classification tasks, we tested CSNs on two standard classification tasks: identifying digits in the MNIST Digit dataset, and identifying one of 100 categories in the CIFAR100 dataset. There were no concept relationships present because this was a single classification task; instead, the tests established that using a CSN did not degrade classification performance relative to PCNs or other neural architectures.
On the Digit dataset, we trained a CSN using the same encoding and decoding layer architectures with 20 prototypes (two for each digit) and applied the same Gaussian distortions to training images as Li et al. (2018). Over five trials, training for 50 epochs with batches of size 250, we achieved the same mean classification accuracy as PCN (99.22%), demonstrating that the use of a CSN did not worsen classification accuracy (Li et al., 2018).
On the CIFAR100 dataset, we extended a resnet18 backbone (pretrained on ImageNet) as our encoder with 100 prototypes, 1 for each class, and trained 10 models for 60 epochs (He et al., 2016). We achieved a mean classification accuracy of 76%, the standard result for networks built upon a resnet18 framework (Hase et al., 2019). Thus, CSNs exhibited high performance in a challenging domain, matching the performance of normal networks, with the benefit of interpretable prototypes.
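A minimal sketch of this setup, assuming PyTorch/torchvision; the class and variable names are illustrative rather than taken from the released code:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PrototypeClassifier(nn.Module):
    """Resnet18 encoder plus trainable prototypes; scores are negative
    squared distances, as in single-concept CSN classification."""
    def __init__(self, latent_dim=100, n_prototypes=100):
        super().__init__()
        backbone = resnet18(pretrained=True)          # ImageNet weights (older torchvision API)
        backbone.fc = nn.Linear(backbone.fc.in_features, latent_dim)
        self.encoder = backbone
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))

    def forward(self, x):
        z = self.encoder(x)                           # (B, latent_dim)
        d = torch.cdist(z, self.prototypes) ** 2      # squared l2 distances
        return torch.softmax(-d, dim=1)               # per-prototype probabilities
```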
B VISUALIZING DECODED PROTOTYPES
Because CSNs are built upon prototype-based classification, they are at least as interpretable as prior art, such as PCNs. In this section, we demonstrate how prototypes may be decoded to visualize their representations. These figures were generated by decoding prototypes from the models used throughout the paper.
Figure 4 shows the first 10 prototypes from the MNIST digit classifier in Appendix A. Unlike PCNs, CSNs benefit from an inductive bias that leads to an equal number of prototypes per class.
Figure 5 depicts the decoded prototypes when training a CSN to predict human reaching motions, as described in Section 3.4. This model was only trained on motion prediction, without using fair or hierarchical training terms. Interestingly, by over-parametrizing the number of prototypes (there were only 8 possible destinations, but twice as many prototypes), the model learned different forms of trajectories that reached towards the same destination: short movements near the targets, and longer loops when reaching from farther away.
C VISUALIZING LEARNED LATENT SPACES
In addition to decoding prototypes, we visualized the latent spaces of trained classifiers. For the purposes of visualization, we trained new models from scratch, using only 2D latent spaces. Encodings and prototypes for both fair classification tasks, as well as the digit and fashion hierarchical classification tasks, are shown in Figure 6.
In all diagrams, encodings of test inputs are denoted by small colored dots. All classification tasks used 2 sets of prototypes: we depicted one set of prototypes as large black dots, and the other as X’s. The arrangement of the prototypes in the latent spaces confirms that CSNs have learned the right concept alignment.
Specifically, for the fair classification tasks, the X’s form a line segment that is orthogonal to the line formed by the black dots. This orthogonality leads to fairness, as discussed in our paper.
In the hierarchical domains, we similarly observed that CSNs had learned the "right" latent structure. In these domains, the black dots denoted prototypes for high-level classification (such as even vs. odd). We observed that the lower-level prototypes (e.g., for digits), denoted by X's, were clustered around the high-level prototypes.
As a whole, these visualizations confirm that CSNs learn the desired latent structure, all controlled by changing the alignment loss weight.
D CALCULATING LEARNED CAUSAL RELATIONSHIPS
In Sections 4.1 and 4.3, we report a metric, ρ, to denote the learned causal relationship between concepts. Here, we explain how we calculate ρ in greater detail.
Intuitively, ρ corresponds to the mean change in belief for one classification task divided by changes in belief for another classification task. For example, in fair classification, prediction of a person’s credit risk should not change based on changes in belief over the person’s age; this notion corresponds to ρ = 0.
We calculate ρ in CSNs using a technique inspired by Tucker et al. (2021). In that work, the authors studied whether a language model's output changed when the model's internal representation was changed according to syntactic principles. In our work, we change latent representations, z, by taking the gradient of z with respect to the loss of one classification task given true label y∗, creating a new z′ by moving z along that gradient, and then calculating the new classification likelihoods using z′.
For simplicity, we limit our analysis to CSNs with two concept subspaces. We denote the encoding of an input, x, as z = eθ(x). Prediction for each of the two tasks may be denoted as prediction functions, predi, for i ∈ [0, 1] indicating the two tasks; the prediction function corresponds to projecting z into the relevant subspace and calculating distances to prototypes, as discussed earlier. Using this notation, we define ρ formally:
z = e_θ(x)    (5)
y_0 = pred_0(z)    (6)
y_1 = pred_1(z)    (7)
z′ = z + ∇_z loss(y_0, y_0∗)    (8)
y′_0 = pred_0(z′)    (9)
y′_1 = pred_1(z′)    (10)
ρ = (y_0 − y′_0) / (y_1 − y′_1)    (11)
Thus, ρ captures how the model's change in belief about one attribute affects its change in belief about another attribute; in other words, it measures the learned causal relationship between prediction tasks.
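As a concrete illustration, the following is a minimal sketch of this procedure, assuming PyTorch. Here `pred0` and `pred1` are assumed helpers that map latent codes to the predicted probability of a reference class for each task, `loss_fn` returns a scalar (e.g., a mean loss), and the unit step size along the gradient is our assumption rather than a detail from the paper:

```python
import torch

def compute_rho(encoder, pred0, pred1, loss_fn, x, y0_true, step=1.0):
    z = encoder(x).detach().requires_grad_(True)     # Eq. 5
    y0, y1 = pred0(z), pred1(z)                      # Eqs. 6-7
    grad = torch.autograd.grad(loss_fn(y0, y0_true), z)[0]
    z_new = z + step * grad                          # Eq. 8: move z along the gradient
    y0_new, y1_new = pred0(z_new), pred1(z_new)      # Eqs. 9-10
    return ((y0 - y0_new) / (y1 - y1_new)).mean()    # Eq. 11, averaged over a batch
```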
E CORRELATED-CONCEPT CLASSIFICATION BASELINES
A single CSN may be used to perform multiple classification tasks simultaneously without explicitly guiding concept relationships. In the Adult and German fair classification domains, CSNs predicted both s, the protected field, and y, the desired final prediction, while we explicitly guided the learned concept relationships to enforce fairness. In a separate set of experiments conducted on the same datasets, we demonstrated how CSNs can learn more complex concept relationships.
In these experiments, we trained CSNs with two subspaces, each with two prototypes, and set both the KL and alignment losses to zero. The CSNs were trained to predict both s and y, using the two subspaces. We recorded prediction accuracy of y and ρ, the learned causal correlation between s and y.
Over 10 trials, for the German and Adult datasets, CSNs achieved mean y classification accuracies of 85% and 74%, on par with prior art on these datasets when not enforcing fairness (Xie et al., 2017). We also found non-zero ρ: for the German dataset, we found a value of 0.20; for the Adult dataset, a mean value of 0.23. An example latent space from a CSN trained on the German dataset in this manner is shown in Figure 7, using the same visualization mechanism as introduced in Appendix C. In this example, the model learned a non-zero correlation between prototypes for credit (Xs) and
for an applicant’s age (circles). This type of learned correlation is undesirable in fair classification domains but may be useful in other scenarios.
As a demonstration of useful learned correlations, we implemented a CSN in a classification task using synthetic data. Consider a simplified weather prediction task in which, given noisy observations of temperature and precipitation, a weather station must classify the day as hot or cold and rainy or sunny. In the artificial world, in the last year of weather data, half of the days are rainy and half are sunny, and all rainy days are cold and all sunny days are hot. Cold days have a true temperature drawn uniformly between 0.0 and 0.2 and warm days have a true temperature drawn uniformly between 0.8 and 1.0. Similarly, sunny days have a precipitation value drawn uniformly between 0.0 and 0.2 and rainy days have precipitation values drawn uniformly between 0.8 and 1.0. Observations of temperature and precipitation are corrupted by zero-mean Gaussian noise with σ = 0.05. Given noisy observations of temperature and precipitation, a model's task is to predict binary labels for whether the day is hot or cold and rainy or sunny.
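The generative process above is simple enough to reproduce directly; below is a sketch, assuming NumPy, with the function name chosen for illustration:

```python
import numpy as np

def sample_weather(n, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rainy = rng.random(n) < 0.5           # roughly half rainy, half sunny
    # rainy days are cold and sunny days are hot
    temp = np.where(rainy, rng.uniform(0.0, 0.2, n), rng.uniform(0.8, 1.0, n))
    precip = np.where(rainy, rng.uniform(0.8, 1.0, n), rng.uniform(0.0, 0.2, n))
    obs = np.stack([temp, precip], axis=1) + rng.normal(0.0, noise, (n, 2))
    return obs, (~rainy).astype(int), rainy.astype(int)   # observations, hot, rainy
```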
Numerous causal paths could explain observational data recorded from this environment in which hot days are sunny and cold days are rainy: rain could cause cold weather, some latent factor like atmospheric pressure could affect both precipitation and temperature, etc. Trained simply from observational data, models are unable to learn the right causal relationship between these variables.
Unlike traditional neural networks, however, CSNs allow humans to encode desirable causal relationships. We designed a CSN with two concept subspaces (for temperature and precipitation), each with two prototypes. We then penalized (a(Q_rs, Q_hc) − √ρ∗)²; that is, we set an intercept of √ρ∗ for the alignment loss between the two subspaces for rainy and sunny (rs) and hot and cold (hc). (We used √ρ∗ as the notation for setting the desired alignment for reasons that will become apparent in the next paragraph.)
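A minimal sketch of this intercept-style penalty, assuming PyTorch and that orthonormal bases Q_rs and Q_hc for the two subspaces have already been computed (e.g., by QR-factorizing each concept's prototype difference vectors, as in Eq. 3):

```python
import torch

def intercept_alignment_loss(Q_rs, Q_hc, sqrt_rho_star=0.5):
    a = ((Q_rs.T @ Q_hc) ** 2).mean()   # Eq. 3: mean squared cosine between bases
    return (a - sqrt_rho_star) ** 2     # zero exactly at the desired intercept
```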
In our experiments, we sought to identify whether CSNs could learn the desired causal relationship between temperature and precipitation. We did so by setting √ρ∗ to some value, training CSNs using standard losses, and then measuring whether the ρ metric we calculated from the trained CSNs matched ρ∗. We trained 10 CSNs with latent dimension 2 with √ρ∗ = 0.5. This corresponds to a cosine value of 0.7, or about 45 degrees. This is intuitively interpreted as meaning that for every percentage increase in the likelihood of the weather being sunny, the likelihood of it being warm should increase by 0.7 percent.
As desired, the trained CSNs had a mean ρ value of 0.72 (standard deviation 0.14). An example latent space from one such CSN is shown in Figure 7: the two subspaces are arranged at roughly 45 degrees. Moving an embedding of a cold and rainy day to increase the likelihood of it being sunny by 1% increases the predicted likelihood of it being warm by 0.7%. This demonstrates that we were able to train CSNs to learn the causal relationship we wished for. Lastly, we note that we repeated these experiments with other values of √ρ∗ and obtained similar results, and that if we did not include the alignment training loss, CSNs learned arbitrary concept relationships.
F CSN IMPLEMENTATIONS
In this section, we include details necessary for replication of experimental results that did not fit in the main paper. 1
In all experiments, we used random seeds ranging from 0 to the number of trials used for that experiment. Although CSNs support several prototypes per class (e.g., 2 prototypes for the digit 0, 2 prototypes for the digit 1, etc.), unless otherwise noted, we used an equal number of prototypes and classes.
In the German fairness experiments, we trained for 30 epochs, with batch size 32, with classification loss weights of 1, alignment loss of 100 between the two subspaces, and KL losses of 0.5. In the Adult fairness experiments, we trained for 20 epochs, with batch size 128, with classification loss weights of [1, 0.1] for y and s, respectively, alignment loss of 100 between the two subspaces, and KL losses of 0.1. For both fair classification tasks, the CSN encoders comprised 2 fully-connected layers with ReLU activations and hidden dimension 128, outputting into a 32-dimensional latent space, and the decoder comprised two 128-dimensional ReLU layers followed by a sigmoid layer. The whole model was trained with an Adam optimizer with default parameters.

1 Anonymized code is available at https://anonymous.4open.science/r/csn-8470/
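For reference, one plausible reading of the architecture described above, assuming PyTorch (the layer counts and names are our interpretation of the text, not a definitive implementation):

```python
import torch.nn as nn

def build_fair_csn_nets(input_dim, latent_dim=32, hidden=128):
    # two ReLU-activated fully-connected layers feeding a 32-d latent space
    encoder = nn.Sequential(
        nn.Linear(input_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, latent_dim),
    )
    # two 128-dimensional ReLU layers followed by a sigmoid output
    decoder = nn.Sequential(
        nn.Linear(latent_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, input_dim), nn.Sigmoid(),
    )
    return encoder, decoder
```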
For the digit and fashion hierarchical classification tasks, CSNs were trained for 20 epochs with batch size 128. CSNs for both datasets used identical architectures to the networks created for the fair classification tasks.
For the CIFAR100 hierarchical classification task, we built upon a ResNet18 backbone, as done by Garnot & Landrieu (2020). The encoder consisted of a ResNet encoder, pre-trained on ImageNet, followed by two fully-connected layers with hidden dimension 4096, feeding into a latent space of dimension 100. The decoder (and decoding training loss) was removed in this domain to reduce training time. The network was trained using an SGD optimizer with learning rate 0.001 and momentum 0.9, with batch size 32. Training terminated after 60 epochs or due to early stopping, with a patience of 10 epochs. Classification loss weights were set to [1, 5] for classifying the high- and low-level categories, respectively. KL losses were set to 0; alignment loss was set to −10 to encourage parallel subspaces.
For the fair and hierarchical classification task with bolts, CSNs used the same architecture as for the fair classification task. Models were trained for 50 epochs, with batch size 256. All classification loss weights were set to 1; alignment loss between bolts and LR was set to −10 and between bolts and participant was set to 100. KL losses were set to 2.
G FAIR CLASSIFICATION BASELINES
In Section 4.1, we compared CSNs to several fair classification baselines on the standard German and Adult datasets. Although the datasets are standard in the literature, there are a wide variety of fairness metrics, only a subset of which each method has published. Therefore, we implemented each fair classification baseline and recorded all metrics of interest for each method. In this section, we demonstrated the soundness of our implementations by comparing to published metrics. Implementations of our baselines are available here: https://anonymous.4open.science/status/fairbaselines-44A7.
Tables 8 and 9 report the recorded metrics for each fairness technique. Values reported in prior literature are included in the table using the technique’s name (e.g., the second row of Table 8, labeled ‘Adv.’ includes the values reported by Xie et al. (2017)). Values that we measured using our implementations of each technique are marked with asterisks. We report means and standard errors for our implementations for each method and compare to the metrics that prior methods published for each dataset.
The bottom halves of Tables 8 and 9 are separated from the top halves to indicate modified datasets. The Wass. DB baseline used the German and Adult datasets but treated protected fields differently (e.g., by creating binary age labels at the cutoff age of 30 instead of 25, as all other techniques did). We therefore evaluated CSNs and our own implementations of Wass. DB on these different datasets as well and reported them below the horizontal line.
Interestingly, while we were able to recreate the Wass. DB results on these modified datasets, the technique, when applied to the standard datasets, demonstrated better fairness than most techniques but worse y Acc. (We repeated the hyperparameter sweeps reported by Jiang et al. (2020) and used the best results.) We attribute the low y Acc. to the fact that Jiang et al. (2020) call for a linear model as opposed to the deeper neural nets used by other approaches. When using the datasets suggested by Wass. DB, we reproduced their published results, as did CSNs trained on the same datasets. We note, however, that predictors on this dataset are of limited use, as both Wass. DB and CSNs fail to outperform random classification accuracy.
As a whole, Tables 8 and 9 give us confidence in our implementations of the fairness baselines. Our implementations were able to match or exceed metrics reported from prior art. This suggests that our underlying implementations were correct and that new metrics we gathered on them were valid.

1. What is the focus and contribution of the paper regarding prototypical classification networks?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, specifically Li et al. 2018?
3. How does the reviewer assess the clarity and quality of the paper's content, particularly in the writing style and explanation of key concepts?
4. What are the concerns regarding the additional value added by the new terms in the proposed method?
5. How does the reviewer evaluate the effectiveness of the proposed approach in achieving fairness-accuracy tradeoffs?
Summary Of The Paper
The paper builds on prior work on prototypical classification networks (more specifically, the work of Li et al. 2018) and additionally tries to include criteria such as orthogonality to enable applications such as fair classification. An application to hierarchical networks is also described though the details are very hard to understand. Experiments show that the resulting models are able to achieve reasonable fairness accuracy tradeoffs.
Review
The paper attempts interesting problems but falls short on two major fronts: (1) it is not clear what the improvement over existing work is, and whether it is significant enough to merit acceptance at ICLR; (2) the writing needs a lot of work to bring out the motivation for different choices.
About the first point, the main contribution seems to lie in Section 3.1, but most of the machinery here seems to be borrowed from the PCN work of Li et al. 2018. The additional contribution seems to be the concept subspace projection (if I understood correctly), whose motivation is not explained very well, and the addition of the alignment term in Eq. 1. The paper does not explain what additional value these terms add over PCN.
Continuing from the previous point, the paper is very hard to understand. In the second paragraph of Section 3.1, there is some departure from PCN where combinations of p_1 and the other prototypes are taken. The process is defined in a very handwavy manner, and it is not clear what formal mathematical operation is performed here. How is this different from PCN, and why was this step needed? Talking about digits and concept subspaces, do we have as many concept subspaces as the number of classes? If not, how is this number picked?
Moving on to the third paragraph of 3.1, the first few lines seem to be quite similar to PCN. However, at some point, a probability distribution is mentioned in connection with traditional softmax probabilities, but then yet another probability distribution is mentioned. It is not clear what the second distribution does. In the absence of formal equations, it is very difficult to understand what each component does. I would highly recommend describing each operation formally (in a sequential manner) and also adding a visualization like Figure 1 in the PCN paper to clearly convey the idea to the reader.
Fourth paragraph of 3.1 mentions two differences from PCN. Again, it is not clear what each of the differences achieves. Moreover, Figure 1 is neither described well in the main text nor in the caption, leaving the reader puzzled over what is happening in the figure. Instead of the regular autoencoder, a variational autoencoder is used, but again, the motivation is not clear. Other important details, like the text above Equation 1 and the usage of the KL divergence regularization term, are skimmed over very quickly. The details of the hierarchical classification setup in 3.4 are also glossed over quickly. The same happens in 4.2. For instance, what is meant by "adopting the conditional probability training loss introduced by Hase et al"?
ICLR

Title
Prototype Based Classification from Hierarchy to Fairness
Abstract
Artificial neural nets can represent and classify many types of high-dimensional data but are often tailored to particular applications – e.g., for “fair” or “hierarchical” classification. Once an architecture has been selected, it is often difficult for humans to adjust models for a new task; for example, a hierarchical classifier cannot be easily transformed into a fair classifier that shields a protected field. Our contribution in this work is a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. We demonstrate that CSNs reproduce state-of-the-art results in fair classification when enforcing concept independence, may be transformed into hierarchical classifiers, or may even reconcile fairness and hierarchy within a single classifier. The CSN is inspired by and matches the performance of existing prototype-based classifiers that promote interpretability.
1 INTRODUCTION
Neural networks are able to learn rich representations of data that support highly accurate classification; however, understanding or controlling what neural nets learn remains challenging. Some techniques offer insight into pre-trained models by uncovering directions within latent spaces that correspond to particular concepts, image manipulations, or more (Goetschalckx et al., 2019; Kim et al., 2018), while approaches focused on interpretability provide techniques that are more comprehensible to humans (Li et al., 2018; Chen et al., 2019). While these methods provide insight, they fail to offer control: humans observe learned patterns but are unable to guide models such that learned relationships are useful for a particular setting or task.
Another line of work has advanced the design of models for particular types of classification tasks (such as fair or hierarchical classification) but these techniques are often developed with only one problem in mind (Zemel et al., 2016; Xie et al., 2017; Hase et al., 2019). For example, models built for fair classification (predicting an outcome regardless of information about a protected field) are only used to enforce independence of concepts rather than hierarchy. Thus, humans may exert control over learned representations by selecting an appropriate technique rather than tuning training parameters within the same technique.
We have designed a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. CSNs use prototype-based representations, a technique employed in interpretable neural networks in prior art (Li et al., 2018; Chen et al., 2019; Garnot & Landrieu, 2020). A single CSN uses sets of prototypes in order to simultaneously learn multiple concepts; classification within a single concept (e.g., “type of animal”) is performed by projecting encodings into a concept subspace defined by the prototypes for that concept (e.g., “bird,” “dog,” etc.). Lastly, CSNs use a measure of concept subspace alignment to guide concept relationships such as independence or hierarchy.
In our experiments, CSNs performed comparably to state-of-the art in fair classification, despite prior methods only being designed for this type of problem. In applying CSNs to hierarchical classification tasks, networks automatically deduced interpretable representations of the hierarchical problem structure, allowing them to outperform state-of-the-art, for a given neural network backbone, in terms of both accuracy and average cost of errors on the CIFAR100 dataset. Lastly, in
a human-motion prediction task, we demonstrated how a single CSN could enforce both fairness (to preserve participant privacy) and hierarchy (to exploit a known taxonomy of tasks). Our findings suggest that CSNs may be applied to a wide range of problems that had previously only been addressed individually, or not at all.
2 RELATED WORK
2.1 INTERPRETABILITY AND PROTOTYPE NETWORKS
Numerous post-hoc explanation techniques fit models to pre-trained neural nets; if humans understand these auxiliary models, they can hypothesize about how the neural nets behave (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, techniques in which explanations are decoupled from underlying logic may be susceptible to adversarial attacks or produce misleading explanations (Heo et al., 2019; Slack et al., 2020).
Unlike such decoupled explanations, interpretability research seeks to expose a model’s reasoning. In this work we focus on prototype-based latent representations in neural nets. There is a long history of learning discrete representations in continuous spaces, originating under “vector quantization” literature (Kohonen, 1990; Schneider et al., 2009). More recently, the prototype case network (PCN) comprised an autoencoder model that clustered encodings around understandable, trainable prototypes, with classifications made via a linear weighting of the distances from encodings to prototypes (Li et al., 2018). Further research in image classification extended PCNs to use convolutional filters as prototypes and for hierarchical classification in the hierarchical prototype network (HPN) (Chen et al., 2019; Hase et al., 2019). Lastly, Garnot & Landrieu (2020) use prototypes in Metric-Guided Prototype Learning (MGP) in conjunction with a loss function to cluster prototypes to minimize user-defined costs.
Our model similarly uses trainable prototypes for classification, but differs from prior art in two respects. First, we modify the standard PCN architecture to support other changes, without degrading classification performance. Second, like HPNs (but not PCNs or MGP), CSNs leverage multiple sets of prototypes to enable hierarchical classification but also allow for non-hierarchical concept relationships.
2.2 FAIR AND HIERARCHICAL CLASSIFICATION
AI fairness research considers how to mitigate undesirable patterns or biases in machine learning models. Consider the problem of predicting a person’s credit risk: non-causal correlations between age and risk may lead AI models to inappropriately penalize people according to their age (Zemel et al., 2016). The problem of fair classification is often framed as follows: given inputs, x, which are informative of a protected field, s, and outcome, y, predict y from x without being influenced by s (Zemel et al., 2013). Merely removing s from x (e.g., not including age as an input to a credit predictor) rarely removes all information about s, so researchers have developed a variety of techniques to create representations that “purge” information about s (Zemel et al., 2016; Xie et al., 2017; Jiang et al., 2020).
Hierarchical classification solves a different problem: given a hierarchical taxonomy of classes (e.g., birds vs. dogs at a high level and sparrows vs. toucans at a low level), output the correct label at each classification level. Neural nets using convolution and recurrent layers in specialized designs have achieved remarkable success in hierarchical image classification (Zhu & Bain, 2017; Guo et al., 2018). The hierarchical prototype network (HPN) uses prototypes and a training routine based upon conditional subsets of training data to create hierarchically-organized prototypes (Hase et al., 2019). Garnot & Landrieu (2020) also use prototypes for hierarchical classification in Metric-Guided Prototype Learning (MGP) by adjusting the training loss to guide prototype arrangement. Neither HPN nor MGP explicitly models relationships between multiple subsets of prototypes. Lastly, recent works propose hyperbolic latent spaces as a natural way to model hierarchical data (Dai et al., 2021; Mathieu et al., 2019; Nickel & Kiela, 2017; Liu et al., 2020). Our method, conversely, relies upon concepts from Euclidean geometry. Extending the principle of subspace alignment that we develop to non-Euclidean geometric spaces is a promising direction but is beyond the scope of this work.
3 TECHNICAL APPROACH
In this section, we outlined the design of the CSN, which was inspired by desires for both interpretable representations and explicit concept relationships. First, we wished for interpretable representations, so we built upon the PCN design, with modifications. Second, we explicitly encoded relationships between concepts by introducing multiple sets of prototypes, instead of just one in PCNs. Third, we enabled guidance of the concept relationships by modifying the CSN training loss. Together, these changes supported not only interpretable classification, but also provided a flexible framework for a single model architecture to learn different concept relationships.
3.1 CONCEPT SUBSPACE CLASSIFICATION
A CSN performing a single classification task (e.g., identifying a digit in an image) is defined by three sets of trainable weights. First, an encoder parametrized by weights θ, e_θ, maps from inputs of dimension X to encodings of dimension Z: e_θ : R^X → R^Z. Second, a decoder parametrized by weights φ, d_φ, performs the decoding function of mapping from encodings to reconstructed inputs: d_φ : R^Z → R^X. Third, there exists a set of k trainable prototype weights, p, that are each Z-dimensional vectors: p_1, p_2, ..., p_k ∈ R^Z. This architecture resembles that of the PCN, but without the additional linear classification layer (Li et al., 2018).
Here, we focus briefly on the set of prototypes, p. Given a set of k prototypes in RZ , we define a “concept subspace,” C as follows:
v_i = p_i − p_1, ∀ i ∈ [2, k]    (1)

C = { x ∈ R^Z | x = p_1 + Σ_{i∈[2,k]} λ_i v_i for λ_i ∈ R }    (2)

C is the linear subspace in R^Z defined by starting at the first prototype and adding linear scalings of the vector differences to all other prototypes. We call this subspace a concept subspace because it represents a space of encodings between prototypes defining a single concept (e.g., prototypes for digits 0, 1, 2, etc. define a concept subspace for digit classification).
A CSN's architecture — consisting of an encoder, a decoder, and a set of prototypes and the associated concept subspace — enables two types of functionality: the encoder and decoder may be composed to reconstruct inputs via their latent representations, and CSNs may perform classification tasks by mapping an input, x, to one of Y discrete categories. Classification is performed by first encoding an input into a latent representation, z = e_θ(x). The l2 distance from z to each prototype is then calculated, yielding k distance values: d_i(z, p) = ||z − p_i||_2^2, i ∈ [1, k]. These distances are mapped to a probability distribution, P_K(i), i ∈ [1, k], by taking the softmax of their negatives. Lastly, if there are more prototypes than classes (e.g., two prototypes for dogs, two for cats, etc.), the distribution over k is converted to a distribution over Y categories by summing the probabilities for prototypes belonging to the same class.

For single-concept classification, CSNs differ from PCNs primarily by removing the linear layer that PCNs used to transform distances to prototypes into classifications. We found this unnecessary for high classification accuracy (Appendix A) and instead directly used negative distances. Without the linear layer, CSN classification is equivalent to projecting encodings, z, onto a concept subspace before calculating distances. The distances between the projected encoding, dubbed z_proj, and the prototypes induce the same softmax distribution as when the orthogonal component remains. Indeed, we find projection more intuitive (only the component of z that corresponds to the subspace is used for classification) and list projection as a standard step in the remainder of this paper. A simple example of projecting an encoding and calculating distances to prototypes is shown in Figure 1a.
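To make the projection step concrete, the following sketch, assuming PyTorch, projects a batch of encodings onto the affine subspace through the prototypes and classifies by softmaxed negative squared distances (it assumes k − 1 ≤ Z, so the QR factorization yields a full basis):

```python
import torch

def classify(z, prototypes):
    # z: (B, Z) encodings; prototypes: (k, Z)
    p1 = prototypes[0]
    V = (prototypes[1:] - p1).T                 # (Z, k-1) difference vectors (Eq. 1)
    Q, _ = torch.linalg.qr(V)                   # orthonormal basis of the subspace
    z_proj = p1 + (z - p1) @ Q @ Q.T            # project onto the affine subspace
    d = torch.cdist(z_proj, prototypes) ** 2    # squared distances to prototypes
    return torch.softmax(-d, dim=1)             # probability mass per prototype
```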
For some tasks, we used an encoder design from variational autoencoders (VAEs) in order to regularize the distribution of encodings to conform to unit Gaussians (Kingma & Welling, 2014). By default, this regularization loss was set to 0, but it sometimes proved useful for preventing overfitting in some domains (as detailed in later experiments). We emphasize that CSNs are discriminative, rather than generative, models, so we did not seek to learn a latent space from which to sample.
3.2 MULTI-CONCEPT LEARNING
We defined the CSN architecture for single classification tasks in the previous section; here, we explain how a CSN may be used for multiple classification tasks. (For example, consider a scenario involving classifying both what type of bird a photo depicts and whether the photo was taken outdoors or indoors.) Extending CSNs to support multiple classifications requires the addition of new sets of prototypes. This is the primary contribution of our work.
Multiple classification tasks are performed by defining a set consisting of sets of prototypes: P = {p_1, ..., p_c}, with a set of prototypes for each of c classification tasks. A classification task is performed by using the CSN's encoder to generate an encoding, z, and projecting z into the concept subspace defined by the set of prototypes particular to the given task. Figure 1 (b-d) depicts simplified examples of two concept subspaces. In each example, each concept uses three prototypes, yielding two planar concept spaces (one of which corresponds to the x–y plane for illustrative purposes); z may be projected into either plane depending upon the classification task at hand.
While the prototypes in different sets are separate from each other, correlations present in training data may lead to a range of relationships among prototypes. Returning to the previous example scenario, prototypes of birds may represent canaries and toucans, while prototypes of indoor and outdoor scenes may represent living rooms and jungles; each set of prototypes is independent in principle, but in reality, prototypes may represent canaries in living rooms and toucans in jungles. In fact, two sets of prototypes can exhibit a range of relationships from highly correlated to fully independent, as shown in Figure 1.
We defined a metric, concept subspace alignment, to reflect this range of relationships. Mathematically, the alignment of two subspaces is the mean of the cosine squared of the angle between all pairs of vectors drawn from the basis of each subspace. Given orthonormal bases, efficiently computed via QR factorization, Q_1 and Q_2, of ranks m and n, we define alignment as follows:

a(Q_1, Q_2) = (1 / mn) Σ_{i=1}^{m} Σ_{j=1}^{n} (Q_1^T Q_2 [i, j])²    (3)
Given the range of values for the cosine squared function, alignment values range from 0 to 1 for orthogonal and parallel subspaces, respectively. Intuitively, orthogonality lends itself well to independent concepts and therefore supports fair classification, whereas parallel subspaces naturally correspond to hierarchical classification. We elaborated on this intuition in Section 3.4.
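A sketch of the alignment metric, assuming PyTorch; `concept_basis` builds an orthonormal basis from a concept's prototypes per Eq. 1, and `alignment` implements Eq. 3 (both names are illustrative):

```python
import torch

def concept_basis(prototypes):
    # prototypes: (k, Z); the basis spans the difference vectors v_i = p_i - p_1
    V = (prototypes[1:] - prototypes[0]).T      # (Z, k-1)
    Q, _ = torch.linalg.qr(V)                   # orthonormal basis, (Z, k-1)
    return Q

def alignment(Q1, Q2):
    cos = Q1.T @ Q2                             # (m, n) pairwise cosines
    return (cos ** 2).mean()                    # 0 = orthogonal, 1 = parallel
```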
3.3 TRAINING PROCEDURE
When training a CSN, we assume access to a set of training data, (X,Y) for Y = (Y1, Y2, ...Yc). For each entry in the dataset, there is an input x and a label yi, for each of c classification tasks.
We trained CSNs in an end-to-end manner to minimize a single loss function, defined in Equation 4. The four terms in the loss function were as follows: 1) reconstruction error; 2) the loss introduced for the PCN, encouraging classification accuracy and the clustering of encodings around prototypes (applied within each concept subspace); 3) a KL divergence regularization term; and 4) a term penalizing alignment between concept subspaces. Each term was weighted by a choice of real-valued λs. We emphasize that the PCN loss — clustering and classification accuracy, defined in Equation 7 of Li et al. (2018) — is calculated within each concept subspace using the projections of encodings; thus, encodings were encouraged to cluster around prototypes only along dimensions within the subspace. The encoder, decoder, and prototype weights were trained simultaneously.
l(X, Y, θ, φ, P) = (λ_0 / |X|) Σ_{x∈X} (d_φ(e_θ(x)) − x)²
    + Σ_{i∈[1,C]} λ_{P_i} PCN(proj(e_θ(X), p_i), Y_i)
    + Σ_{i∈[1,C]} λ_{KL_i} KL(X, p_i)
    + Σ_{i∈[1,C]} Σ_{j∈[1,C]} λ_{A_ij} a(Q_i, Q_j)    (4)
The KL regularization term mimics training losses often used in VAEs that penalize the divergence between the distribution of encodings and a zero-mean unit Gaussian (Kingma & Welling, 2014). In our case, we wished to induce a similar distribution of encodings, but centered around prototypes rather than the origin. Furthermore, rather than induce a Gaussian distribution within a concept subspace (which would dictate classification probabilities and therefore potentially worsen classification accuracy), we wished to regularize the out-of-subspace components of encodings.
Concretely, we implemented this regularization loss in three steps. First, we computed the orthogonal component of an encoding as zorth = z − zproj . We then computed the KL divergence between the distribution of zorth and unit Gaussians centered at each prototype in each subspace. Finally, we took the softmax over distances between encodings and prototypes in order to only select the closest prototype to the encoding; we then multiplied the softmax by the divergences to enforce that encodings were distributed as unit Gaussians around the nearest prototype in each subspace. Together, these operations led the distributions out of each subspace to conform to unit Gaussians around each prototype. As confirmed in later experiments, this component was crucial in training fair classifiers.
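The following sketch shows one plausible implementation of these three steps, assuming PyTorch, a VAE-style encoder that outputs (mu, logvar), and an assumed helper `proj` that projects latent vectors onto the concept subspace; the authors' exact weighting may differ:

```python
import torch

def out_of_subspace_kl(mu, logvar, proj, prototypes):
    z_orth = mu - proj(mu)                                  # step 1: (B, Z)
    p_orth = prototypes - proj(prototypes)                  # (k, Z); ~0 when the
                                                            # prototypes lie in C
    var = logvar.exp()
    diff = z_orth.unsqueeze(1) - p_orth.unsqueeze(0)        # (B, k, Z)
    # step 2: analytic KL between N(z_orth, var) and N(p_orth, I), per prototype
    kl = 0.5 * (var.unsqueeze(1) + diff ** 2
                - 1.0 - logvar.unsqueeze(1)).sum(dim=-1)    # (B, k)
    # step 3: softmax over negative distances emphasizes the nearest prototype
    w = torch.softmax(-torch.cdist(mu, prototypes) ** 2, dim=1)
    return (w * kl).sum(dim=1).mean()
```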
3.4 HIERARCHICAL AND FAIR CLASSIFICATION
We conclude this section by demonstrating how CSNs may support hierarchical or fair classifications. Hierarchical and fair classification may be thought of as extremes along a spectrum of concept alignment. In hierarchical classification, concepts are highly aligned and therefore parallel: the difference between a toucan and a Dalmatian is similar to the difference between a generic bird and dog, and so the vector differences between prototypes associated with different classes should also be parallel (e.g., "bird" − "dog" = "toucan" − "Dalmatian"). In fair classification, concepts are not aligned: switching belief about someone's sex should not alter predictions about their income. Thus, based on the classification task, moving an encoding relative to one subspace should either affect (for hierarchical) or not affect (for fair) that encoding's projection onto the other subspace. We provide a geometric interpretation of these two tasks in Figure 1 b and d.
CSNs can be trained to adopt either form of concept relationship by penalizing or encouraging concept subspace alignment (already present as a(Qi, Qj) in the training loss). Our single model reconciles these two types of problems by viewing them as opposite extremes along a spectrum of concept relationships that our technique is able to learn; this is the main contribution of our work.
4 RESULTS
Our experiments were divided into four parts. First, we demonstrated how CSNs matched standard performance on single classification tasks: in other words, that using a CSN did not degrade performance. We omit these unsurprising results from the paper; full details are included in Appendix A. Second, we showed that CSNs matched state-of-the-art performance in two fair classification tasks. Third, we used CSNs for hierarchical classification tasks, exceeding performance demonstrated by
Table 1: Mean Adult dataset fairness results.
Model     y Acc.   s Acc.   D.I.   DD-0.5
CSN       0.85     0.67     0.83   0.16
Adv.      0.85     0.67     0.87   0.16
VFAE      0.85     0.70     0.82   0.17
FR Train  0.85     0.67     0.83   0.16
Wass. DB  0.81     0.67     0.92   0.08
Random    0.76     0.67
Table 2: Mean German dataset fairness results.
Model     y Acc.   s Acc.   D.I.   DD-0.5
CSN       0.73     0.81     0.70   0.10
Adv.      0.73     0.81     0.63   0.10
VFAE      0.72     0.81     0.47   0.23
FR Train  0.72     0.80     0.55   0.16
Wass. DB  0.72     0.81     0.33   0.02
Random    0.70     0.81
prior art along several metrics. Fourth, we showed how CSNs enabled both fair and hierarchical classification in a dataset describing human motion in an assembly task that exploited hierarchical knowledge while preserving participant anonymity. Implementation details of CSNs in all experiments are included in Appendix F.
4.1 FAIR CLASSIFICATION
We evaluated CSN's performance in fair classification tasks on the Adult and German datasets. These datasets are commonly used in fairness literature and contain data that can be used to predict people's income or credit risk (Dua & Graff, 2017). We compared CSN performance to our implementations of an adversarial purging technique (Adv.), the variational fair autoencoder (VFAE), Wasserstein Fair Classification (Wass. DB), and a mutual-information-based fairness approach (FR Train) (Xie et al., 2017; Zemel et al., 2016; Jiang et al., 2020; Roh et al., 2020). Implementation details of the fair classification baselines and full results including standard deviations are included in Appendix G.
For the Adult dataset, the protected attribute was sex, and for the German dataset, the protected attribute was a binary variable indicating whether the person was older than 25 years of age. In evaluation, we measured y Acc., the accuracy of predicting income or credit, s Acc., the accuracy of a linear classifier trained to predict the protected field from the latent space, disparate impact (DI), as defined in Roh et al. (2020), and demographic disparity (DD-0.5), as defined by Jiang et al. (2020).
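For reference, a sketch of the disparate impact computation under a common formulation (ratio of group positive rates, folded so that 1.0 means perfectly balanced), assuming NumPy; the paper follows the exact definitions of Roh et al. (2020) and Jiang et al. (2020), which may differ in detail:

```python
import numpy as np

def disparate_impact(y_hat, s):
    # y_hat: binary predictions; s: binary protected attribute
    r0 = y_hat[s == 0].mean()         # positive rate for group s = 0
    r1 = y_hat[s == 1].mean()         # positive rate for group s = 1
    ratio = r0 / r1
    return min(ratio, 1.0 / ratio)    # 1.0 indicates balanced positive rates
```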
Mean results over 20 trials for both datasets were included in Tables 1 and 2. In both datasets, we observed that CSNs matched state-of-the-art performance. CSNs produced high y Acc., indicating high task performance for predicting income or credit. Furthermore, fairness measures demonstrate that CSNs purged protected information successfully (low s Acc.) and achieved high D.I. and low DD-0.5, as desired. A visualization of the latent space of a fair classifier, trained on the German dataset, is shown in Figure 2 and confirmed that CSNs learned orthogonal concept subspaces.
In addition to reproducing the state of the art, we conducted an ablation study to demonstrate the importance of two terms in our training loss: the alignment and KL losses. Using the German dataset, we trained 20 CSNs, setting the KL, alignment, or both loss weights to 0. The mean results of these trials are reported in Table 3.
Table 3 demonstrates the necessity of both KL and alignment losses to train fair predictors (with higher disparate impact and lower demographic disparity values). Including both loss terms resulted
in the fairest predictors; removing those losses could enable better classification accuracy, but at the expense of fairness. This confirms geometric intuition: the alignment loss created orthogonal subspaces and the KL regularization created distributional equivalence based on the subspaces. Jointly, these losses therefore produced statistical independence.
Table 3 also includes causal analysis of trained CSNs via the ρ metric. Intuitively, this metric reflected the learned correlation between s and y; it was calculated by updating embeddings in the CSN latent space along the gradient of s and recording the change in prediction over y. We reported the ratio of these changes as ρ; as expected, enforcing orthogonality via alignment loss led to ρ values of 0. This technique is inspired by work in causally probing language models (e.g., Tucker et al. (2021)); full details for calculating ρ are included in Appendix D.
4.2 HIERARCHICAL CLASSIFICATION
We compared CSNs to our implementation of HPNs and results for Metric-Guided Prototype Learning (MGP), reported by Garnot & Landrieu (2020), for hierarchical classification tasks. Our HPN baseline used the same architecture as CSN (same encoder, decoder, and number of prototypes). It differed from CSNs by setting alignment losses to 0 and by adopting the conditional probability training loss introduced by Hase et al. (2019). We further included results of a randomly-initialized CSN under “Init.” in tables. In these experiments, we sought to test the hypothesis that CSNs with highly aligned subspaces would support hierarchical classification, just as orthogonal subspaces enabled fair classification.
In addition to standard accuracy metrics, we measured two aspects of CSNs trained on hierarchical tasks. First, we recorded the “average cost” (AC) of errors. AC is defined as the mean distance between the predicted and true label in a graph of the hierarchical taxonomy (e.g., if true and predicted label shared a common parent, the cost was 2; if the common ancestor was two levels up, the cost was 4, etc.) (Garnot & Landrieu, 2020). Second, we measured the quality of trees derived from the learned prototypes. After a CSN was trained, we defined a fully-connected graph G = (V ,E) with vertices V = P ⋃ {0} (the set of all prototypes and a point at the origin) and undirected edges between each node with lengths equal to the l2 distance between nodes in the latent space. We recovered the minimum spanning tree, T , from G, (which is unique given distinct edge lengths, which we observed in all experiments), and converted all edges to directed edges through a global ordering of nodes. Lastly, we calculated the graph edit distance (ED) between isomorphisms of the recovered tree and the ground-truth hierarchical tree (with edges that obeyed the same ordering constraints) (Abu-Aisheh et al., 2015). Intuitively, this corresponded to counting how many edges had to be deleted or added to the minimum spanning tree to match the taxonomy tree, ignoring edge lengths, with a minimum value of 0 for perfect matches.
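A sketch of this tree-recovery evaluation, assuming NetworkX and a (n, Z) NumPy array of learned prototypes; the function name is illustrative:

```python
import itertools
import networkx as nx
import numpy as np

def prototype_tree(prototypes):
    # node 0 is the origin; nodes 1..n are the prototypes
    points = np.vstack([np.zeros(prototypes.shape[1]), prototypes])
    G = nx.Graph()
    for i, j in itertools.combinations(range(len(points)), 2):
        G.add_edge(i, j, weight=np.linalg.norm(points[i] - points[j]))
    T = nx.minimum_spanning_tree(G)   # unique when edge lengths are distinct
    # direct edges via a global node ordering, as described above
    return nx.DiGraph([(min(u, v), max(u, v)) for u, v in T.edges])

# E.D. is then nx.graph_edit_distance(prototype_tree(P), taxonomy_tree),
# where taxonomy_tree is the ground-truth tree with edges obeying the
# same ordering convention.
```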
As a basic test of CSNs in hierarchical classification tasks, we created simple hierarchies from the MNIST Digit and Fashion datasets. The Digit dataset used the standard low-level labels of digit, supplemented with high-level labels of parity (two classes); the Fashion dataset used the standard low-level labels for item of clothing, with a ternary label for a high-level classification of “tops” (tshirts, pullovers, coats, and shirts), “shoes” (sandals, sneakers, and ankle boots), or “other” (trousers, dresses, and bags).
Mean results from 10 trials for both MNIST datasets were included in Tables 4 and 5. The HPN baselines were implemented using the same number of prototypes as the CSNs being compared against. Both tables show that CSNs exhibit comparable or better accuracy than HPNs for both the low-level (Y0) and high-level (Y1) classification tasks. In addition, the average cost (A.C.) and edit distance (E.D.) values show that CSNs recovered minimum spanning trees that nearly perfectly matched the ground truth tree, and that when CSNs did make errors, they were less “costly” than errors made by HPNs (although admittedly, a dominant force in A.C. is classification accuracy alone). A 2D visualization of the latent space of a CSN trained on the Digit task is shown in Figure 3: encodings for particular digits clustered around prototypes for those digits (X), while prototypes for even and odd digits (circles) separated the digit clusters into the left and right halves of the latent space. Visualizations of latent spaces for more fair and hierarchical classification tasks are included in Appendix C; they confirmed the theoretical derivations of orthogonal and parallel subspaces.
Lastly, we trained 10 CSNs and HPNs on the substantially more challenging CIFAR100 dataset. The dataset is inherently hierarchical: the 100 low-level classes are grouped into 20 higher-level
Table 4: MNIST digit hierarchy mean (stdev) over 10 trials. First two columns × 100.
Table 5: MNIST fashion hierarchy mean (stdev) over 10 trials. First two columns × 100.
Figure 3: 2D latent space for hierarchical digit classification creates clusters around even and odd prototypes (circles on the right and left, respectively) and digit prototypes (X).
Table 6: CIFAR100 hierarchical classification results. Top half: two-level hierarchy, mean (stdev) over 10 trials; bottom half: five-level hierarchy, medians over 10 trials.

Model  Y0%         Y1%         A.C.         E.D.
CSN    0.76 (0.0)  0.85 (0.0)  0.76 (0.02)  11.2 (7)
HPN    0.71 (0.0)  0.80 (0.0)  0.97 (0.04)  165.0 (3)
Init.  0.01        0.05        3.88         200
CSN    0.78 (0.0)  0.88 (0.0)  0.91 (0.0)   6.0 (8.2)
MGP    0.76                    1.05
Init.  0.01        0.05        7.33         258
classes, each of size 5. Using a resnet18 encoder, pre-trained on ImageNet, in conjunction with 100 prototypes for low-level classification and 20 for high-level, we trained CSNs and HPNs. CSNs additionally used an alignment loss weight of -10 to encourage parallelism between the two concept subspaces. The mean results over 10 trials are shown in the top half of Table 6.
We also compared CSNs to MGP and other hierarchical classifiers using the CIFAR100 dataset and a deeper hierarchy, consisting of 5 levels of sizes 100, 20, 8, 4, and 2, as done by Garnot & Landrieu (2020). The additional information provided by this deeper hierarchy resulted in improved classification performance. Median results (as reported by Garnot & Landrieu (2020)) for 10 CSNs using this dataset are shown in the bottom half of Table 6. Changing the hierarchy changed how average cost was calculated, so values from the top and bottom halves of the table should not be compared. Within the bottom half, we note that CSNs outperformed MGP on both A.C. and classification accuracy. Furthermore, according to values generated in the extensive experiments conducted by Garnot & Landrieu (2020), CSNs outperformed numerous other baselines, including HXE and soft labels (Bertinetto et al., 2020), YOLO (Redmon et al., 2016), and a hyperspherical prototype network (Mettes et al., 2019), all of which were built upon a resnet18 pretrained on ImageNet. In fact, our CSNs achieve state-of-the-art classification accuracy for any classifier built upon a resnet18 backbone, without data augmentation. Furthermore, the decrease in A.C. is especially surprising given that other techniques explicitly optimized for average-cost reductions, while CSNs merely trained on classification at each level. Notably, the decrease in A.C. is not fully explained by the increase in accuracy, indicating that CSNs not only exhibited higher accuracy but also, when they did make mistakes, those mistakes were less severe.
Lastly, we note that CSNs support a range of learned relationships other than fair or hierarchical. The varying values of ρ in Table 3 indicate that CSNs may learn different relationships when alignment loss is set to 0. However, in general, one could train models to learn desired relationships by penalizing or rewarding alignment relative to some intercept. We trained and evaluated such models in Appendix E and found that models indeed learned the desired alignment.
4.3 FAIR AND HIERARCHICAL CLASSIFICATION
Prior experiments demonstrated how CSNs could solve different classification problems separately; in this section, we applied a single CSN to a task that required it to use both fair and hierarchical classification. Intuitively, fairness was used to protect privacy, while hierarchical structure was used for better performance.
We used a dataset describing human motion in a bolt-placement task. The dataset was gathered from a setup similar to that of Lasota et al. (2014): motion was recorded at 50 Hz, using the 3D location of each of the 8 volunteer participants' gloved right hands as they reached towards one of 8 holes arranged
in a line to place a bolt in the hole. The bolt holes may be thought of hierarchically by dividing destinations into left vs. right (LR) groupings, in addition to the label of the specific hole.
Initial exploration of the dataset showed promising results for prototype-based classification: the target locations were identified with 81% accuracy, and further analysis showed that prototypes corresponded to human-like motions (details in Appendix B). Troublingly, however, a trained CSN could identify the participant with over 60% accuracy, which posed privacy concerns. Nevertheless, from a robotic safety perspective, it is important for robots to exploit as much information as possible to avoid collisions with humans.
Ultimately, we wished to predict which hole a participant was reaching towards given the past 1 second of their motion, while preserving privacy and exploiting hierarchical structure. Thus, we designed a CSN with three concept subspaces: one for predicting the bolt (8 prototypes), one for predicting the high-level grouping of a left or right destination (2 prototypes), and one for predicting the participant id (8 prototypes). We enforced that the bolt and LR subspaces were parallel while the bolt and participant subspaces were orthogonal.
Results from our experiments are presented in Table 7. Means and standard deviations over 10 trials for each row are reported. (A.C. was calculated only for prediction errors, due to the large differences in accuracy rates across models.) When training a CSN with no constraints on subspace alignment, we found a highly accurate but unfair predictor (81% accuracy for bolt location, but sub-optimal disparate impact and demographic disparity values). Switching the CSNs to be fair classifiers by only enforcing orthogonality between bolts and participants yielded a fair classifier (illustrated by ρ, D.I., and DD-0.5), but with much worse bolt prediction accuracy (44%). However, by using a hierarchical subspace for LR groupings, the final CSN both improved classification accuracy and decreased the average cost of errors, while maintaining desired fairness characteristics.
5 CONTRIBUTIONS
The primary contribution of this work is a new type of model, the Concept Subspace Network, that supports inter-concept relationships. CSNs’ design, motivated by prior art in interpretable neural network models, use sets of prototypes to define concept subspaces in neural net latent spaces. The relationships between these subspaces may be controlled during training in order to guide desired model characteristics. Critically, we note that two popular classification problems — fair and hierarchical classification — are located at either end of a spectrum of concept relationships, allowing CSNs to solve each type of problem in a manner on par with techniques that had previously been designed to solve only one. Furthermore, a single CSN may exhibit multiple concept relationships, as demonstrated in a privacy-preserving hierarchical classification task.
While we have demonstrated the utility of CSNs within several domains, numerous extensions could improve their design. First, the idea of subspace alignment could be applied to non-Euclidean geometries like hyperbolic latent spaces that are sometimes used for hierarchical classification. Second, CSNs could additionally benefit from relaxation of some simplifying assumptions: notably, allowing for more complex relationships rather than those defined by subspace cosine similarity, or using adversarial approaches for distributional regularization rather than only supporting unit Gaussians.
Lastly, we note that CSNs, while designed with ethical applications such as fair classification in mind, may lead to undesired consequences. For example, malicious actors could enforce undesirable concept relationships, or simply observing emergent concept relationships within a CSN could reinforce undesirable correlations. In addition, although prototypes encourage interpretability, which we posit can be used for good, the reductive nature of prototypes may be problematic when classifying human-related data (e.g., the COMPAS fair classification task we avoided).
A SINGLE-CONCEPT CLASSIFICATION BASELINES
In addition to the specialized fair and hierarchical classification tasks, we tested CSNs on two standard classification tasks: identifying digits in the MNIST Digit dataset, and identifying one of 100 categories in the CIFAR100 dataset. There were no concept relationships present because this was a single classification task; instead, the tests established that using a CSN did not degrade classification performance relative to PCNs or other neural architectures.
On the Digit dataset, we trained a CSN using the same encoding and decoding layer architectures with 20 prototypes (two for each digit) and applied the same Gaussian distortions to training images as Li et al. (2018). Over five trials, training for 50 epochs with batches of size 250, we achieved the same mean classification accuracy as PCN (99.22%), demonstrating that the use of a CSN did not worsen classification accuracy (Li et al., 2018).
On the CIFAR100 dataset, we extended a resnet18 backbone (pretrained on ImageNet) as our encoder with 100 prototypes, 1 for each class, and trained 10 models for 60 epochs He et al. (2016). We achieved a mean classification accuracy of 76%, the standard result for networks built upon a resnet18 framework (Hase et al., 2019). Thus, CSNs exhibited high performance in a challenging domain, matching performance of normal networks, with the benefit of interpretable prototypes.
B VISUALIZING DECODED PROTOTYPES
Because CSNs are built upon prototype-based classification, they are at least as interpretable as prior art, such as PCNs. In this section, we demonstrate how prototypes may be decoded to visualize their representations. These figures were generated by decoding prototypes from the models used throughout the paper.
Figure 4 shows the first 10 prototypes from the MNIST digit classifier in Appendix A. Unlike PCNs, CSNs benefit from an inductive bias that leads to an equal number of prototypes per class.
Figure 5 depicts the decoded prototypes when training a CSN to predict human reaching motions, as described in Section 3.4. This model was only trained on motion prediction, without using fair or hierarchical training terms. Interestingly, by over-parametrizing the number of prototypes (there were only 8 possible destinations, but twice as many prototypes), the model learned different forms of trajectories that reached towards the same destination: short movements near the targets, and longer loops when reaching from farther away.
C VISUALIZING LEARNED LATENT SPACES
In addition to decoding prototypes, we visualized the latent spaces of trained classifiers. For the purposes of visualization, we trained new models from scratch, using only 2D latent spaces. Encodings and prototypes for both fair classification tasks, as well as the digit and fashion hierarchical classification tasks, are shown in Figure 6.
In all diagrams, encodings of test inputs are denoted by small colored dots. All classification tasks used 2 sets of prototypes: we depicted one set of prototypes as large black dots, and the other as X’s. The arrangement of the prototypes in the latent spaces confirms that CSNs have learned the right concept alignment.
Specifically, for the fair classification tasks, the X’s form a line segment that is orthogonal to the line formed by the black dots. This orthogonality leads to fairness, as discussed in our paper.
In the hierarchical domains, we similarly observed that CSNs had learned the “right” latent structure. In these domains, the black dots denoted prototypes for high-level classification (such as even vs. odd). We observed that the lower-level prototypes (e.g., for digit), denoted by ‘X’s, were clustered around the high-level prototypes.
As a whole, these visualizations confirm that CSNs learn the desired latent structure, all controlled by changing the alignment loss weight.
D CALCULATING LEARNED CAUSAL RELATIONSHIPS
In Sections 4.1 and 4.3, we report a metric, ρ, to denote the learned causal relationship between concepts. Here, we explain how we calculate ρ in greater detail.
Intuitively, ρ corresponds to the mean change in belief for one classification task divided by changes in belief for another classification task. For example, in fair classification, prediction of a person’s credit risk should not change based on changes in belief over the person’s age; this notion corresponds to ρ = 0.
We calculate ρ in CSNs using a technique inspired by Tucker et al. (2021). In that work, the authors studied if a language model’s output changed when the model’s internal representation changed according to syntactic principles. In our work, we change latent representations, z, by taking the gradient of z with respect to the loss of one classification task given true label y∗, creating a new z′
taken by moving z along that gradient, and then calculating the new classification likelihoods using z′.
For simplicity, we limit our analysis to CSNs with two concept subspaces. We denote the encoding of an input, x, as z = eθ(x). Prediction for each of the two tasks may be denoted as prediction functions, predi, for i ∈ [0, 1] indicating the two tasks; the prediction function corresponds to projecting z into the relevant subspace and calculating distances to prototypes, as discussed earlier. Using this notation, we define ρ formally:
z = e_θ(x)                          (5)
y_0 = pred_0(z)                     (6)
y_1 = pred_1(z)                     (7)
z′ = z + ∇_z loss(y_0, y*_0)        (8)
y′_0 = pred_0(z′)                   (9)
y′_1 = pred_1(z′)                   (10)
ρ = (y_0 − y′_0) / (y_1 − y′_1)     (11)
Thus, ρ captures how the model’s change in belief about one attribute affects its change in belief over another attribute, in other words the causal learned relationship between prediction tasks.
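A minimal PyTorch sketch of this computation is shown below. The model attributes (encode, pred0, pred1) are hypothetical names for the encoder and the two subspace prediction functions, and using absolute changes in class probabilities is one reasonable reading of Equations 6–11, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def causal_rho(model, x, y0_true):
    """Sketch of Eqs. 5-11: perturb the latent code along the gradient of
    task 0's loss, then compare the induced change in both tasks' beliefs."""
    z = model.encode(x).detach().requires_grad_(True)       # Eq. 5
    y0, y1 = model.pred0(z), model.pred1(z)                 # Eqs. 6-7
    grad = torch.autograd.grad(F.cross_entropy(y0, y0_true), z)[0]
    z_prime = z + grad                                      # Eq. 8
    y0p, y1p = model.pred0(z_prime), model.pred1(z_prime)   # Eqs. 9-10
    # Eq. 11: ratio of the changes in predicted class probabilities.
    d0 = (y0.softmax(-1) - y0p.softmax(-1)).abs().sum()
    d1 = (y1.softmax(-1) - y1p.softmax(-1)).abs().sum()
    return (d0 / d1).item()
```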
E CORRELATED-CONCEPT CLASSIFICATION BASELINES
A single CSN may be used to perform multiple classification tasks simultaneously without explicitly guiding concept relationships. In the Adult and German fair classification domains, CSNs predicted both s, the protected field, and y, the desired final prediction, while we explicitly guided the learned concept relationships to enforce fairness. In a separate set of experiments conducted on the same datasets, we demonstrated how CSNs can learn more complex concept relationships.
In these experiments, we trained CSNs with two subspaces, each with two prototypes, and set both the KL and alignment losses to zero. The CSNs were trained to predict both s and y, using the two subspaces. We recorded prediction accuracy of y and ρ, the learned causal correlation between s and y.
Over 10 trials, for the German and Adult datasets, CSNs achieved mean y classification accuracies of 85% and 74%, on par with prior art on these datasets when not enforcing fairness (Xie et al., 2017). We also found non-zero ρ: for the German dataset, a mean value of 0.20; for the Adult dataset, a mean value of 0.23. An example latent space from a CSN trained on the German dataset in this manner is shown in Figure 7, using the same visualization mechanism as introduced in Appendix C. In this example, the model learned a non-zero correlation between prototypes for credit (Xs) and
for an applicant’s age (circles). This type of learned correlation is undesirable in fair classification domains but may be useful in other scenarios.
As a demonstration of useful learned correlations, we implemented a CSN in a classification task using synthetic data. Consider a simplified weather prediction task in which, given noisy observations of temperature and precipitation, a weather station must classify the day as hot or cold and rainy or sunny. In the artificial world, in the last year of weather data, half of the days are rainy and half are sunny, and all rainy days are cold and all sunny days are hot. Cold days have a true temperature uniformly drawn between 0.0 and 0.2 and warm days have a true temperature uniformly drawn between 0.8 and 1.0. Similarly, sunny days have a precipitation value drawn uniformly between 0.0 and 0.2 and rainy days have precipitation values drawn uniformly between 0.8 and 1.0. Observations of temperature and precipitation are corrupted by zero-mean Gaussian noise with σ = 0.05. Given noisy observations of temperature and precipitation, a model’s task is to predict binary labels for whether the day is hot or cold and rainy or sunny.
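This synthetic generator is easy to reproduce; the following numpy sketch (function and variable names are ours, not from the released code) implements the sampling procedure described above.

```python
import numpy as np

def generate_weather_data(n_days=365, noise_sigma=0.05, seed=0):
    """Half the days are rainy (and cold), half sunny (and hot);
    observations are corrupted by zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    rainy = rng.random(n_days) < 0.5
    temp = np.where(rainy, rng.uniform(0.0, 0.2, n_days),    # cold
                           rng.uniform(0.8, 1.0, n_days))    # hot
    precip = np.where(rainy, rng.uniform(0.8, 1.0, n_days),  # rainy
                             rng.uniform(0.0, 0.2, n_days))  # sunny
    obs = np.stack([temp, precip], 1) + rng.normal(0, noise_sigma, (n_days, 2))
    return obs, (~rainy).astype(int), rainy.astype(int)      # hot?, rainy?
```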
Numerous causal paths could explain observational data recorded from this environment in which hot days are sunny and cold days are rainy: rain could cause cold weather, some latent factor like atmospheric pressure could affect both precipitation and temperature, etc. Trained simply from observational data, models are unable to learn the right causal relationship between these variables.
Unlike traditional neural networks, however, CSNs allow humans to encode desirable causal relationships. We designed a CSN with two concept subspaces (for temperature and precipitation), each with two prototypes. We then penalized (a(Q_rs, Q_hc) − √ρ∗)²; that is, we set an intercept of √ρ∗ for the alignment loss between the two subspaces for rainy and sunny (rs) and hot and cold (hc). (We used √ρ∗ as the notation for setting desired alignment for reasons that will become apparent in the next paragraph.)
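In code, this intercepted penalty is a small variation on the usual alignment term; the sketch below assumes orthonormal bases Q_rs and Q_hc for the two subspaces (PyTorch, our names).

```python
import torch

def subspace_alignment(Q1, Q2):
    """Mean squared cosine between orthonormal basis vectors (Eq. 3):
    0 for orthogonal subspaces, 1 for parallel ones."""
    return (Q1.T @ Q2).pow(2).mean()

def alignment_intercept_loss(Q_rs, Q_hc, sqrt_rho_star):
    """Penalize (a(Q_rs, Q_hc) - sqrt(rho*))^2, targeting a desired
    nonzero alignment between the two concept subspaces."""
    return (subspace_alignment(Q_rs, Q_hc) - sqrt_rho_star) ** 2
```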
In our experiments, we sought to identify if CSNs could learn the desired causal relationship between temperature and precipitation. We did so by setting √ρ∗ to some value, training CSNs using standard losses, and then measuring if the ρ metric we calculated from the trained CSNs matched ρ∗. We trained 10 CSNs with latent dimension 2, setting √ρ∗ = 0.5. This corresponds to a cosine value of 0.7, or about 45 degrees, and is intuitively interpreted as meaning that for every percentage increase in the likelihood of the weather being sunny, the likelihood of it being warm should increase by 0.7 percent.
As desired, the trained CSNs had a mean ρ value of 0.72 (standard deviation 0.14). An example latent space from one such CSN is shown in Figure 7: the two subspaces are arranged at roughly 45 degrees. Moving an embedding of a cold and rainy day to increase the likelihood of it being sunny by 1% increases the predicted likelihood of it being warm by 0.7%. This demonstrates that we were able to train CSNs to learn the causal relationship we wished for. Lastly, we note that we repeated these experiments with other values of √ρ∗ and found similar results, and that if we did not include the alignment training loss, CSNs learned arbitrary concept relationships.
F CSN IMPLEMENTATIONS
In this section, we include details necessary for replication of experimental results that did not fit in the main paper.¹
In all experiments, we used random seeds ranging from 0 to the number of trials used for that experiment. Although CSNs support several prototypes per class (e.g., 2 prototypes for the digit 0, 2 prototypes for the digit 1, etc.), unless otherwise noted, we used an equal number of prototypes and classes.
In the German fairness experiments, we trained for 30 epochs, with batch size 32, with classification loss weights of 1, alignment loss of 100 between the two subspaces, and KL losses of 0.5. In the Adult fairness experiments, we trained for 20 epochs, with batch size 128, with classification loss weights of [1, 0.1] for y and s, respectively, alignment loss of 100 between the two subspaces, and KL losses of 0.1. For both fair classification tasks, the CSN encoders comprised 2 fully-connected layers with ReLU activations and hidden dimension 128, outputting into a 32-dimensional latent space, and the decoder comprised 2, 128-dimensional ReLU layers followed by a sigmoid layer. The whole model was trained with an Adam optimizer with default parameters.

¹ Anonymized code is available at https://anonymous.4open.science/r/csn-8470/
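For concreteness, one plausible PyTorch rendering of the fair-classification encoder and decoder described above is sketched below; input_dim is a placeholder for the one-hot-encoded tabular feature size, and the layer counts reflect our reading of the description rather than the released code.

```python
import torch
import torch.nn as nn

input_dim, latent_dim = 61, 32  # input_dim: placeholder feature size

encoder = nn.Sequential(        # 2 fully-connected ReLU layers, hidden 128
    nn.Linear(input_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, latent_dim),
)
decoder = nn.Sequential(        # 2 128-dim ReLU layers + sigmoid output
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, input_dim), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(   # Adam with default parameters
    [*encoder.parameters(), *decoder.parameters()])
```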
For the digit and fashion hierarchical classification tasks, CSNs were trained for 20 epochs with batch size 128. CSNs for both datasets used identical architectures to the networks created for the fair classification tasks.
For the CIFAR100 hierarchical classification task, we built upon a ResNet18 backbone, as done by Garnot & Landrieu (2020). The encoder consisted of a ResNet encoder, pre-trained on ImageNet, followed by two fully-connected layers with hidden dimension 4096, feeding into a latent space of dimension 100. The decoder (and decoding training loss) was removed in this domain to reduce training time. The network was trained using an SGD optimizer with learning rate 0.001 and momentum 0.9, with batch size 32. Training terminated after 60 epochs or due to early stopping, with a patience of 10 epochs. Classification loss weights were set to [1, 5] for classifying the high- and low-level categories, respectively. KL losses were set to 0; alignment loss was set to −10 to encourage parallel subspaces.
For the fair and hierarchical classification task with bolts, CSNs used the same architecture as for the fair classification task. Models were trained for 50 epochs, with batch size 256. All classification loss weights were set to 1; alignment loss between bolts and LR was set to −10 and between bolts and participant was set to 100. KL losses were set to 2.
G FAIR CLASSIFICATION BASELINES
In Section 4.1, we compared CSNs to several fair classification baselines on the standard German and Adult datasets. Although the datasets are standard in the literature, there are a wide variety of fairness metrics, only a subset of which each method has published. Therefore, we implemented each fair classification baseline and recorded all metrics of interest for each method. In this section, we demonstrated the soundness of our implementations by comparing to published metrics. Implementations of our baselines are available here: https://anonymous.4open.science/status/fairbaselines-44A7.
Tables 8 and 9 report the recorded metrics for each fairness technique. Values reported in prior literature are included in the table using the technique’s name (e.g., the second row of Table 8, labeled ‘Adv.’ includes the values reported by Xie et al. (2017)). Values that we measured using our implementations of each technique are marked with asterisks. We report means and standard errors for our implementations for each method and compare to the metrics that prior methods published for each dataset.
The bottom halves of Tables 8 and 9 are separated from the top halves to indicate modified datasets. The Wass. DB baseline used the German and Adult datasets but treated protected fields differently (e.g., by creating binary age labels at the cutoff age of 30 instead of 25, as all other techniques did). We therefore evaluated CSNs and our own implementations of Wass. DB on these different datasets as well and reported them below the horizontal line.
Interestingly, while we were able to recreate the Wass. DB results on these modified datasets, the technique, when applied to the standard datasets, demonstrated better fairness than most techniques but worse y Acc. (We repeated the hyperparameter sweeps reported by Jiang et al. (2020) and used the best results.) We attribute the low y Acc. to the fact that Jiang et al. (2020) call for a linear model as opposed to the deeper neural nets used by other approaches. When using the datasets suggested by Wass. DB, we reproduced their published results, as did CSNs trained on the same datasets. We note, however, that predictors on this dataset are of limited use, as both Wass. DB and CSNs fail to outperform random classification accuracy.
As a whole, Tables 8 and 9 give us confidence in our implementations of the fairness baselines. Our implementations were able to match or exceed metrics reported from prior art. This suggests that our underlying implementations were correct and that new metrics we gathered on them were valid. | 1. What is the focus and contribution of the paper on hierarchical and fair classification?
2. What are the strengths of the proposed approach, particularly in terms of concept subspaces?
3. Do you have any concerns or questions regarding the definition and implementation of concept subspaces?
4. How does the reviewer assess the clarity and presentation of the ideas in the paper?
5. What are the weaknesses of the paper, especially regarding the assumptions made and the comparison with other works? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a novel model — called Concept Subspace Network (CSN) — for both hierarchical and fair classification. The idea behind the network is to use sets of prototypes to define concept subspaces in the latent space defined by the neural network itself. The relationships between the subspaces can be manipulated at training time to enforce concept relationships (i.e., two concept subspaces are orthogonal if the concepts they represent are independent, while they are parallel if the concepts they represent are hierarchically organised).
Review
In this paper, the ideas are quite novel and mostly well presented, and the problem addressed is significant.
I have, though, some questions and some minor comments that I hope will be addressed in the final version:
The way in which concept subspaces are defined is not clear to me. In the paper, the authors write: “Given a set of k prototypes in R^Z, the prototypes define a subspace generated by starting at the first prototype, p_1, and adding all linear combinations of vector differences from p_1 to all other p_i; i ∈ [2, k].” This is not clear to me, and it would be beneficial to have a clearer example than the one given in the paper, in which it is not clear why we should obtain the plane x − y.
Also, in order to get a subspace, do you need the assumption that k < Z?
In equation (2) the term PCN(·) is just defined as the loss introduced for PCN. Where is it defined?
The random baseline seems to achieve very high performance in tables 1 and 2.
At page 7 the authors mention a global ordering of the nodes, how was such ordering decided?
Minor comments:
Z in Figure 1 instead of z
Add upward and downward arrows nearby the metric names to improve readability
“Random” and “Rand” in Tables 1 and 2 |
ICLR | Title
Prototype Based Classification from Hierarchy to Fairness
Abstract
Artificial neural nets can represent and classify many types of high-dimensional data but are often tailored to particular applications – e.g., for “fair” or “hierarchical” classification. Once an architecture has been selected, it is often difficult for humans to adjust models for a new task; for example, a hierarchical classifier cannot be easily transformed into a fair classifier that shields a protected field. Our contribution in this work is a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. We demonstrate that CSNs reproduce state-of-the-art results in fair classification when enforcing concept independence, may be transformed into hierarchical classifiers, or may even reconcile fairness and hierarchy within a single classifier. The CSN is inspired by and matches the performance of existing prototype-based classifiers that promote interpretability.
1 INTRODUCTION
Neural networks are able to learn rich representations of data that support highly accurate classification; however, understanding or controlling what neural nets learn remains challenging. Some techniques offer insight into pre-trained models by uncovering directions within latent spaces that correspond to particular concepts, image manipulations, or more (Goetschalckx et al., 2019; Kim et al., 2018), while approaches focused on interpretability provide techniques that are more comprehensible to humans (Li et al., 2018; Chen et al., 2019). While these methods provide insight, they fail to offer control: humans observe learned patterns but are unable to guide models such that learned relationships are useful for a particular setting or task.
Another line of work has advanced the design of models for particular types of classification tasks (such as fair or hierarchical classification) but these techniques are often developed with only one problem in mind (Zemel et al., 2016; Xie et al., 2017; Hase et al., 2019). For example, models built for fair classification (predicting an outcome regardless of information about a protected field) are only used to enforce independence of concepts rather than hierarchy. Thus, humans may exert control over learned representations by selecting an appropriate technique rather than tuning training parameters within the same technique.
We have designed a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. CSNs use prototype-based representations, a technique employed in interpretable neural networks in prior art (Li et al., 2018; Chen et al., 2019; Garnot & Landrieu, 2020). A single CSN uses sets of prototypes in order to simultaneously learn multiple concepts; classification within a single concept (e.g., “type of animal”) is performed by projecting encodings into a concept subspace defined by the prototypes for that concept (e.g., “bird,” “dog,” etc.). Lastly, CSNs use a measure of concept subspace alignment to guide concept relationships such as independence or hierarchy.
In our experiments, CSNs performed comparably to state-of-the art in fair classification, despite prior methods only being designed for this type of problem. In applying CSNs to hierarchical classification tasks, networks automatically deduced interpretable representations of the hierarchical problem structure, allowing them to outperform state-of-the-art, for a given neural network backbone, in terms of both accuracy and average cost of errors on the CIFAR100 dataset. Lastly, in
a human-motion prediction task, we demonstrated how a single CSN could enforce both fairness (to preserve participant privacy) and hierarchy (to exploit a known taxonomy of tasks). Our findings suggest that CSNs may be applied to a wide range of problems that had previously only been addressed individually, or not at all.
2 RELATED WORK
2.1 INTERPRETABILITY AND PROTOTYPE NETWORKS
Numerous post-hoc explanation techniques fit models to pre-trained neural nets; if humans understand these auxiliary models, they can hypothesize about how the neural nets behave (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, techniques in which explanations are decoupled from underlying logic may be susceptible to adversarial attacks or produce misleading explanations (Heo et al., 2019; Slack et al., 2020).
Unlike such decoupled explanations, interpretability research seeks to expose a model’s reasoning. In this work we focus on prototype-based latent representations in neural nets. There is a long history of learning discrete representations in continuous spaces, originating under “vector quantization” literature (Kohonen, 1990; Schneider et al., 2009). More recently, the prototype case network (PCN) comprised an autoencoder model that clustered encodings around understandable, trainable prototypes, with classifications made via a linear weighting of the distances from encodings to prototypes (Li et al., 2018). Further research in image classification extended PCNs to use convolutional filters as prototypes and for hierarchical classification in the hierarchical prototype network (HPN) (Chen et al., 2019; Hase et al., 2019). Lastly, Garnot & Landrieu (2020) use prototypes in Metric-Guided Prototype Learning (MGP) in conjunction with a loss function to cluster prototypes to minimize user-defined costs.
Our model similarly uses trainable prototypes for classification, but differs from prior art in two respects. First, we modify the standard PCN architecture to support other changes, without degrading classification performance. Second, like HPNs (but not PCNs or MGP), CSNs leverage multiple sets of prototypes to enable hierarchical classification but also allow for non-hierarchical concept relationships.
2.2 FAIR AND HIERARCHICAL CLASSIFICATION
AI fairness research considers how to mitigate undesirable patterns or biases in machine learning models. Consider the problem of predicting a person’s credit risk: non-causal correlations between age and risk may lead AI models to inappropriately penalize people according to their age (Zemel et al., 2016). The problem of fair classification is often framed as follows: given inputs, x, which are informative of a protected field, s, and outcome, y, predict y from x without being influenced by s (Zemel et al., 2013). Merely removing s from x (e.g., not including age as an input to a credit predictor) rarely removes all information about s, so researchers have developed a variety of techniques to create representations that “purge” information about s (Zemel et al., 2016; Xie et al., 2017; Jiang et al., 2020).
Hierarchical classification solves a different problem: given a hierarchical taxonomy of classes (e.g., birds vs. dogs at a high level and sparrows vs. toucans at a low level), output the correct label at each classification level. Neural nets using convolution and recurrent layers in specialized designs have achieved remarkable success in hierarchical image classification (Zhu & Bain, 2017; Guo et al., 2018). The hierarchical prototype network (HPN) uses prototypes and a training routine based upon conditional subsets of training data to create hierarchically-organized prototypes (Hase et al., 2019). Garnot & Landrieu (2020) also use prototypes for hierarchical classification in Metric-Guided Prototype Learning (MGP) by adjusting the training loss to guide prototype arrangement. Neither HPN nor MGP explicitly models relationships between multiple subsets of prototypes. Lastly, recent works propose hyperbolic latent spaces as a natural way to model hierarchical data (Dai et al., 2021; Mathieu et al., 2019; Nickel & Kiela, 2017; Liu et al., 2020). Our method, conversely, relies upon concepts from Euclidean geometry. Extending the principle of subspace alignment that we develop to non-Euclidean geometric spaces is a promising direction but is beyond the scope of this work.
3 TECHNICAL APPROACH
In this section, we outlined the design of the CSN, which was inspired by desires for both interpretable representations and explicit concept relationships. First, we wished for interpretable representations, so we built upon the PCN design, with modifications. Second, we explicitly encoded relationships between concepts by introducing multiple sets of prototypes, instead of just one in PCNs. Third, we enabled guidance of the concept relationships by modifying the CSN training loss. Together, these changes supported not only interpretable classification, but also provided a flexible framework for a single model architecture to learn different concept relationships.
3.1 CONCEPT SUBSPACE CLASSIFICATION
A CSN performing a single classification task (e.g., identifying a digit in an image) is defined by three sets of trainable weights. First, an encoder parametrized by weights θ, e_θ, maps from inputs of dimension X to encodings of dimension Z: e_θ : R^X → R^Z. Second, a decoder parametrized by weights φ, d_φ, maps from encodings to reconstructed inputs: d_φ : R^Z → R^X. Third, there exists a set of k trainable prototype weights, p, that are each Z-dimensional vectors: p_1, p_2, ..., p_k ∈ R^Z. This architecture resembles that of the PCN, but without the additional linear classification layer (Li et al., 2018).
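A minimal PyTorch skeleton of these three weight sets might look as follows (a sketch; names and initialization are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class CSN(nn.Module):
    """Encoder, decoder, and k trainable prototypes in R^Z."""
    def __init__(self, encoder, decoder, k, z_dim):
        super().__init__()
        self.encoder = encoder                     # e_theta: R^X -> R^Z
        self.decoder = decoder                     # d_phi:   R^Z -> R^X
        self.prototypes = nn.Parameter(torch.randn(k, z_dim))  # p_1..p_k

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)
```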
Here, we focus briefly on the set of prototypes, p. Given a set of k prototypes in R^Z, we define a “concept subspace,” C, as follows:

v_i = p_i − p_1   ∀i ∈ [2, k]                                            (1)

C = { x | x ∈ R^Z where x = p_1 + Σ_{i∈[2,k]} λ_i v_i for λ_i ∈ R ∀i }   (2)
C is the linear subspace in R^Z defined by starting at the first prototype and adding linear scalings of vector differences to all other prototypes. We call this subspace a concept subspace because it represents a space of encodings between prototypes defining a single concept (e.g., prototypes for digits 0, 1, 2, etc. define a concept subspace for digit classification).
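Concretely, C can be represented by an orthonormal basis of the difference vectors v_i (obtained, e.g., via QR factorization), and projection onto C is then an affine map. A sketch, assuming prototypes stored as a (k, Z) tensor:

```python
import torch

def concept_basis(prototypes):
    """Orthonormal basis Q for the subspace spanned by the differences
    p_i - p_1 (Eqs. 1-2)."""
    V = (prototypes[1:] - prototypes[0]).T   # Z x (k-1) difference vectors
    Q, _ = torch.linalg.qr(V)                # Z x r orthonormal basis
    return Q

def project_to_subspace(z, prototypes):
    """Affine projection of encodings z (batch x Z) onto the subspace
    through p_1 spanned by the columns of Q."""
    Q, p1 = concept_basis(prototypes), prototypes[0]
    return p1 + (z - p1) @ Q @ Q.T
```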
A CSN’s architecture — consisting of an encoder, a decoder, and a set of prototypes with its associated concept subspace — enables two types of functionality: the encoder and decoder may be composed to reconstruct inputs via their latent representations, and CSNs may perform classification tasks by mapping an input, x, to one of Y discrete categories. Classification is performed by first encoding an input into a latent representation, z = e_θ(x). The l2 distance from z to each prototype is then calculated, yielding k distance values: d_i(z, p) = ||z − p_i||_2^2, i ∈ [1, k]. These distances are mapped to a probability distribution, P_K(i), i ∈ [1, k], by taking the softmax of their negatives. Lastly, if there are more prototypes than classes (e.g., two prototypes for dogs, two for cats, etc.), the distribution over k is converted to a distribution over Y categories by summing the probabilities for prototypes belonging to the same class.
For single-concept classification, CSNs differ from PCNs primarily by removing the linear layer that PCNs used to transform distances to prototypes into classifications. We found this unnecessary for high classification accuracy (Appendix A) and instead directly used negative distances. Without the linear layer, CSN classification is equivalent to projecting encodings, z, onto a concept subspace before calculating distances. The distances between projected encoding, dubbed zproj , and prototypes will induce the same softmax distribution as when the orthogonal component remains. Indeed, we find projection more intuitive - only the component of z that corresponds to the subspace is used for classification - and list projection as a standard step in the remainder of this paper. A simple example of projecting an encoding and calculating distances to prototypes is shown in Figure 1 a.
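Classification thus reduces to softmaxed negative distances between projected encodings and prototypes, with probabilities pooled over prototypes that share a class. A sketch continuing the names above (proto_to_class is our hypothetical prototype-to-class index list):

```python
import torch

def classify(z_proj, prototypes, proto_to_class=None):
    """Softmax over negative squared l2 distances to prototypes; when
    several prototypes share a class, their probabilities are summed."""
    d2 = torch.cdist(z_proj, prototypes).pow(2)    # batch x k distances
    probs = torch.softmax(-d2, dim=-1)
    if proto_to_class is None:
        return probs
    idx = torch.tensor(proto_to_class)             # e.g. [0, 0, 1, 1]
    out = torch.zeros(z_proj.shape[0], int(idx.max()) + 1)
    out.index_add_(1, idx, probs)                  # sum within classes
    return out
```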
For some tasks, we used an encoder design from variational-autoencoders (VAEs) in order to regularize the distribution of encodings to conform to unit Gaussians (Kingma & Welling, 2014). By default, this regularization loss was set to 0, but it sometimes proved useful in some domains to prevent overfitting (as detailed in experiments later). We emphasize that CSNs are discriminative, rather than generative, models, so we did not seek to learn a latent space from which to sample.
3.2 MULTI-CONCEPT LEARNING
We defined the CSN architecture for single classification tasks in the previous section; here, we explain how a CSN may be used for multiple classification tasks. (For example, consider a scenario involving classifying both what type of bird a photo depicts and whether the photo was taken outdoors or indoors.) Extending CSNs to support multiple classifications requires the addition of new sets of prototypes. This is the primary contribution of our work.
Multiple classification tasks are performed by defining a set consisting of sets of prototypes: P = {p1, ...,pc}, with a set of prototypes for each of c classification tasks. A classification task is performed by using the CSN’s encoder to generate an encoding, z, and projecting z into the concept subspace defined by the set of prototypes particular to the given task. Figure 1 (b-d) depicts simplified examples of two concept subspaces. In each example, each concept uses three prototypes, yielding two planar concept spaces (one of which corresponds to the x − y plane for illustrative purposes); z may be projected into either plane depending upon the classification task at hand.
While the prototypes in different sets are separate from each other, correlations present in training data may lead to a range of relationships among prototypes. Returning to the previous example scenario, prototypes of birds may represent canaries and toucans, while prototypes of indoor and outdoor scenes may represent living rooms and jungles; each set of prototypes is independent in principle, but in reality, prototypes may represent canaries in living rooms and toucans in jungles. In fact, two sets of prototypes can exhibit a range of relationships from highly correlated to fully independent, as shown in Figure 1.
We defined a metric, concept subspace alignment, to reflect this range of relationships. Mathematically, the alignment of two subspaces is the mean of the cosine squared of the angle between all pairs of vectors drawn from the basis of each subspace. Given orthonormal bases, efficiently computed via QR factorization, Q1 and Q2, of ranks m and n, we define alignment as follows:
a(Q_1, Q_2) = (1 / mn) Σ_{i=1}^{m} Σ_{j=1}^{n} ( Q_1^T Q_2 [i, j] )²   (3)
Given the range of values for the cosine squared function, alignment values range from 0 to 1 for orthogonal and parallel subspaces, respectively. Intuitively, orthogonality lends itself well to independent concepts and therefore supports fair classification, whereas parallel subspaces naturally correspond to hierarchical classification. We elaborated on this intuition in Section 3.4.
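In code, alignment can be computed directly from each concept's prototypes by orthonormalizing the difference vectors and applying Equation 3 (a sketch with our names):

```python
import torch

def alignment(protos_a, protos_b):
    """Mean squared cosine between basis directions of two concept
    subspaces (Eq. 3): 0 when orthogonal, 1 when parallel."""
    Qa, _ = torch.linalg.qr((protos_a[1:] - protos_a[0]).T)
    Qb, _ = torch.linalg.qr((protos_b[1:] - protos_b[0]).T)
    return (Qa.T @ Qb).pow(2).mean()
```

Penalizing this quantity (positive weight) pushes subspaces toward orthogonality for fairness; rewarding it (negative weight) pushes them toward parallelism for hierarchy.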
3.3 TRAINING PROCEDURE
When training a CSN, we assume access to a set of training data, (X,Y) for Y = (Y1, Y2, ...Yc). For each entry in the dataset, there is an input x and a label yi, for each of c classification tasks.
We trained CSNs in an end-to-end manner to minimize a single loss function, defined in Equation 4. The four terms in the loss function were as follows: 1) reconstruction error; 2) the loss introduced for the PCN, encouraging classification accuracy and the clustering of encodings around prototypes (applied within each concept subspace); 3) a KL divergence regularization term; and 4) a term penalizing alignment between concept subspaces. Each term was weighted by a choice of real-
valued λs. We emphasize that the PCN loss — clustering and classification accuracy, defined in Equation 7 of Li et al. (2018) — is calculated within each concept subspace using the projections of encodings; thus, encodings were encouraged to cluster around prototypes only along dimensions within the subspace. The encoder, decoder, and prototype weights were trained simultaneously.
l(X, Y, θ, φ, P) = (λ_0 / |X|) Σ_{x∈X} ( d_φ(e_θ(x)) − x )²
                 + Σ_{i∈[1,C]} λ_{P_i} PCN( proj(e_θ(X), p_i), Y_i )
                 + Σ_{i∈[1,C]} λ_{KL_i} KL( X, p_i )
                 + Σ_{i∈[1,C]} Σ_{j∈[1,C]} λ_{A_ij} a( Q_i, Q_j )          (4)
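Putting the pieces together, the objective of Equation 4 can be assembled roughly as follows. This sketch reuses the project_to_subspace and alignment helpers sketched earlier; pcn_loss (the clustering/classification loss of Li et al. (2018)) and kl_out_of_subspace (sketched below) are stand-ins, not the authors' exact code.

```python
def csn_loss(x, labels, model, proto_sets, lam):
    """Eq. 4: reconstruction + per-concept PCN losses + per-concept KL
    regularizers + pairwise subspace alignment penalties."""
    z, x_hat = model(x)
    loss = lam["recon"] * ((x_hat - x) ** 2).mean()
    for i, protos in enumerate(proto_sets):
        z_proj = project_to_subspace(z, protos)
        loss = loss + lam["pcn"][i] * pcn_loss(z_proj, protos, labels[i])
        loss = loss + lam["kl"][i] * kl_out_of_subspace(z, protos)
    for i in range(len(proto_sets)):
        for j in range(len(proto_sets)):
            loss = loss + lam["align"][i][j] * alignment(proto_sets[i],
                                                         proto_sets[j])
    return loss
```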
The KL regularization term mimics training losses often used in VAEs that penalize the divergence between the distribution of encodings and a zero-mean unit Gaussian (Kingma & Welling, 2014). In our case, we wished to induce a similar distribution of encodings, but centered around prototypes rather than the origin. Furthermore, rather than induce a Gaussian distribution within a concept subspace (which would dictate classification probabilities and therefore potentially worsen classification accuracy), we wished to regularize the out-of-subspace components of encodings.
Concretely, we implemented this regularization loss in three steps. First, we computed the orthogonal component of an encoding as zorth = z − zproj . We then computed the KL divergence between the distribution of zorth and unit Gaussians centered at each prototype in each subspace. Finally, we took the softmax over distances between encodings and prototypes in order to only select the closest prototype to the encoding; we then multiplied the softmax by the divergences to enforce that encodings were distributed as unit Gaussians around the nearest prototype in each subspace. Together, these operations led the distributions out of each subspace to conform to unit Gaussians around each prototype. As confirmed in later experiments, this component was crucial in training fair classifiers.
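One reading of these three steps in code (PyTorch; project_to_subspace is the earlier sketch, and the moment-matched KL to a unit Gaussian is our interpretation of step 2):

```python
import torch

def kl_out_of_subspace(z, prototypes, eps=1e-8):
    """Push out-of-subspace components toward unit Gaussians around the
    nearest prototype, via soft assignment over prototypes."""
    z_orth = z - project_to_subspace(z, prototypes)                # step 1
    w = torch.softmax(-torch.cdist(z, prototypes).pow(2), dim=-1)  # step 3
    kl = 0.0
    for i in range(prototypes.shape[0]):             # step 2, per prototype
        wi = w[:, i:i + 1]
        mu = (wi * z_orth).sum(0) / wi.sum().clamp_min(eps)
        var = (wi * (z_orth - mu) ** 2).sum(0) / wi.sum().clamp_min(eps)
        kl = kl + 0.5 * (var + mu ** 2 - 1 - (var + eps).log()).sum()
    return kl / prototypes.shape[0]
```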
3.4 HIERARCHICAL AND FAIR CLASSIFICATION
We conclude this section by demonstrating how CSNs may support hierarchical or fair classifications. Hierarchical and fair classification may be thought of as extremes along a spectrum of concept alignment. In hierarchical classification, concepts are highly aligned and therefore parallel: the difference between a toucan and a Dalmatian is similar to the difference between a generic bird and dog, and so the vector differences between prototypes associated with different classes should also be parallel (e.g., “bird” - “dog” = “toucan” - “Dalmatian.”). In fair classification, concepts are not aligned: switching belief about someone’s sex should not alter predictions about their income. Thus, based on the classification task, moving an encoding relative to one subspace should either affect (for hierarchical) or not affect (for fair) that encoding’s projection onto the other subspace. We provide a geometric interpretation of these two tasks in Figure 1 b and d.
CSNs can be trained to adopt either form of concept relationship by penalizing or encouraging concept subspace alignment (already present as a(Qi, Qj) in the training loss). Our single model reconciles these two types of problems by viewing them as opposite extremes along a spectrum of concept relationships that our technique is able to learn; this is the main contribution of our work.
4 RESULTS
Our experiments were divided in four parts. First, we demonstrated how CSNs matched standard performance on single classification tasks: in other words, that using a CSN did not degrade performance. We omit these unsurprising results from the paper; full details are included in Appendix A. Second, we showed that CSNs matched state-of-the-art performance in two fair classification tasks. Third, we used CSNs for hierarchical classification tasks, exceeding performance demonstrated by
Table 1: Mean Adult dataset fairness results.
Model     | y Acc. | s Acc. | D.I. | DD-0.5
CSN       | 0.85   | 0.67   | 0.83 | 0.16
Adv.      | 0.85   | 0.67   | 0.87 | 0.16
VFAE      | 0.85   | 0.70   | 0.82 | 0.17
FR Train  | 0.85   | 0.67   | 0.83 | 0.16
Wass. DB  | 0.81   | 0.67   | 0.92 | 0.08
Random    | 0.76   | 0.67   |      |
Table 2: Mean German dataset fairness results.
Model     | y Acc. | s Acc. | D.I. | DD-0.5
CSN       | 0.73   | 0.81   | 0.70 | 0.10
Adv.      | 0.73   | 0.81   | 0.63 | 0.10
VFAE      | 0.72   | 0.81   | 0.47 | 0.23
FR Train  | 0.72   | 0.80   | 0.55 | 0.16
Wass DB   | 0.72   | 0.81   | 0.33 | 0.02
Random    | 0.70   | 0.81   |      |
prior art along several metrics. Fourth, we showed how CSNs enabled both fair and hierarchical classification in a dataset describing human motion in an assembly task that exploited hierarchical knowledge while preserving participant anonymity. Implementation details of CSNs in all experiments are included in Appendix F.
4.1 FAIR CLASSIFICATION
We evaluated CSN’s performance in fair classification tasks on the Adult and German datasets. These datasets are commonly used in the fairness literature and contain data that can be used to predict people’s income or credit risks (Dua & Graff, 2017). We compared CSN performance to our implementations of an adversarial purging technique (Adv.), the variational fair autoencoder (VFAE), Wasserstein Fair Classification (Wass. DB), and a mutual-information-based fairness approach (FR Train) (Xie et al., 2017; Zemel et al., 2016; Jiang et al., 2020; Roh et al., 2020). Implementation details of fair classification baselines and full results including standard deviations are included in Appendix G.
For the Adult dataset, the protected attribute was sex, and for the German dataset, the protected attribute was a binary variable indicating whether the person was older than 25 years of age. In evaluation, we measured y Acc., the accuracy of predicting income or credit, s Acc., the accuracy of a linear classifier trained to predict the protected field from the latent space, disparate impact (DI), as defined in Roh et al. (2020), and demographic disparity (DD-0.5), as defined by Jiang et al. (2020).
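As one example of how these metrics can be computed, s Acc. amounts to fitting a linear probe on frozen latent codes; a scikit-learn sketch (our names) is below, where near-chance held-out accuracy indicates that s has been purged.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def protected_attribute_accuracy(latents, s):
    """Linear probe predicting the protected field s from latent codes."""
    z_tr, z_te, s_tr, s_te = train_test_split(
        latents, s, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(z_tr, s_tr)
    return probe.score(z_te, s_te)
```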
Mean results over 20 trials for both datasets were included in Tables 1 and 2. In both datasets, we observed that CSNs matched state-of-the-art performance. CSNs produced high y Acc., indicating high task performance for predicting income or credit. Furthermore, fairness measures demonstrate that CSNs purged protected information successfully (low s Acc.) and achieved high D.I. and low DD-0.5, as desired. A visualization of the latent space of a fair classifier, trained on the German dataset, is shown in Figure 2 and confirmed that CSNs learned orthogonal concept subspaces.
In addition to reproducing the state of the art, we conducted an ablation study to demonstrate the importance of two terms in our training loss: the alignment and KL losses. Using the German dataset, we trained 20 CSNs, setting the KL, alignment, or both loss weights to 0. The mean results of these trials are reported in Table 3.
Table 3 demonstrates the necessity of both KL and alignment losses to train fair predictors (with higher disparate impact and lower demographic disparity values). Including both loss terms resulted
in the fairest predictors; removing those losses could enable better classification accuracy, but at the expense of fairness. This confirms geometric intuition: the alignment loss created orthogonal subspaces and the KL regularization created distributional equivalence based on the subspaces. Jointly, these losses therefore produced statistical independence.
Table 3 also includes causal analysis of trained CSNs via the ρ metric. Intuitively, this metric reflected the learned correlation between s and y; it was calculated by updating embeddings in the CSN latent space along the gradient of s and recording the change in prediction over y. We reported the ratio of these changes as ρ; as expected, enforcing orthogonality via alignment loss led to ρ values of 0. This technique is inspired by work in causally probing language models (e.g., Tucker et al. (2021)); full details for calculating ρ are included in Appendix D.
4.2 HIERARCHICAL CLASSIFICATION
We compared CSNs to our implementation of HPNs and results for Metric-Guided Prototype Learning (MGP), reported by Garnot & Landrieu (2020), for hierarchical classification tasks. Our HPN baseline used the same architecture as CSN (same encoder, decoder, and number of prototypes). It differed from CSNs by setting alignment losses to 0 and by adopting the conditional probability training loss introduced by Hase et al. (2019). We further included results of a randomly-initialized CSN under “Init.” in tables. In these experiments, we sought to test the hypothesis that CSNs with highly aligned subspaces would support hierarchical classification, just as orthogonal subspaces enabled fair classification.
In addition to standard accuracy metrics, we measured two aspects of CSNs trained on hierarchical tasks. First, we recorded the “average cost” (AC) of errors. AC is defined as the mean distance between the predicted and true label in a graph of the hierarchical taxonomy (e.g., if true and predicted label shared a common parent, the cost was 2; if the common ancestor was two levels up, the cost was 4, etc.) (Garnot & Landrieu, 2020). Second, we measured the quality of trees derived from the learned prototypes. After a CSN was trained, we defined a fully-connected graph G = (V ,E) with vertices V = P ⋃ {0} (the set of all prototypes and a point at the origin) and undirected edges between each node with lengths equal to the l2 distance between nodes in the latent space. We recovered the minimum spanning tree, T , from G, (which is unique given distinct edge lengths, which we observed in all experiments), and converted all edges to directed edges through a global ordering of nodes. Lastly, we calculated the graph edit distance (ED) between isomorphisms of the recovered tree and the ground-truth hierarchical tree (with edges that obeyed the same ordering constraints) (Abu-Aisheh et al., 2015). Intuitively, this corresponded to counting how many edges had to be deleted or added to the minimum spanning tree to match the taxonomy tree, ignoring edge lengths, with a minimum value of 0 for perfect matches.
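A networkx sketch of this evaluation is below (our names; we omit the directed-edge conversion for brevity). networkx's graph_edit_distance implements the algorithm of Abu-Aisheh et al. (2015) cited above.

```python
import itertools
import networkx as nx
import numpy as np

def recovered_tree(prototypes):
    """Minimum spanning tree over all prototypes plus the origin, with
    edge weights equal to l2 distances in the latent space."""
    pts = np.vstack([prototypes, np.zeros(prototypes.shape[1])])
    G = nx.Graph()
    for i, j in itertools.combinations(range(len(pts)), 2):
        G.add_edge(i, j, weight=np.linalg.norm(pts[i] - pts[j]))
    return nx.minimum_spanning_tree(G)

def tree_edit_distance(T, T_true):
    """Number of edge insertions/deletions separating the recovered
    MST from the ground-truth taxonomy tree (edge lengths ignored)."""
    return nx.graph_edit_distance(T, T_true)
```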
As a basic test of CSNs in hierarchical classification tasks, we created simple hierarchies from the MNIST Digit and Fashion datasets. The Digit dataset used the standard low-level labels of digit, supplemented with high-level labels of parity (two classes); the Fashion dataset used the standard low-level labels for item of clothing, with a ternary label for a high-level classification of “tops” (tshirts, pullovers, coats, and shirts), “shoes” (sandals, sneakers, and ankle boots), or “other” (trousers, dresses, and bags).
Mean results from 10 trials for both MNIST datasets were included in Tables 4 and 5. The HPN baselines were implemented using the same number of prototypes as the CSNs being compared against. Both tables show that CSNs exhibit comparable or better accuracy than HPNs for both the low-level (Y0) and high-level (Y1) classification tasks. In addition, the average cost (A.C.) and edit distance (E.D.) values show that CSNs recovered minimum spanning trees that nearly perfectly matched the ground truth tree, and that when CSNs did make errors, they were less “costly” than errors made by HPNs (although admittedly, a dominant force in A.C. is classification accuracy alone). A 2D visualization of the latent space of a CSN trained on the Digit task is shown in Figure 3: encodings for particular digits clustered around prototypes for those digits (X), while prototypes for even and odd digits (circles) separated the digit clusters into the left and right halves of the latent space. Visualizations of latent spaces for more fair and hierarchical classification tasks are included in Appendix C; they confirmed the theoretical derivations of orthogonal and parallel subspaces.
Lastly, we trained 10 CSNs and HPNs on the substantially more challenging CIFAR100 dataset. The dataset is inherently hierarchical: the 100 low-level classes are grouped into 20 higher-level
Table 4: MNIST digit hierarchy mean (stdev) over 10 trials. First two columns × 100.
Table 5: MNIST fashion hierarchy mean (stdev) over 10 trials. First two columns × 100.
Figure 3: 2D latent space for hierarchical digit classification creates clusters around even and odd prototypes (circles on the right and left, respectively) and digit prototypes (X).
Table 6: CIFAR100 hierarchy results. Top half: 2-level hierarchy (mean over 10 trials); bottom half: 5-level hierarchy (median over 10 trials).

Model | Y0%        | Y1%        | A.C.        | E.D.
CSN   | 0.76 (0.0) | 0.85 (0.0) | 0.76 (0.02) | 11.2 (7)
HPN   | 0.71 (0.0) | 0.80 (0.0) | 0.97 (0.04) | 165.0 (3)
Init. | 0.01       | 0.05       | 3.88        | 200
CSN   | 0.78 (0.0) | 0.88 (0.0) | 0.91 (0.0)  | 6.0 (8.2)
MGP   | 0.76       |            | 1.05        |
Init. | 0.01       | 0.05       | 7.33        | 258
classes, each of size 5. Using a resnet18 encoder, pre-trained on ImageNet, in conjunction with 100 prototypes for low-level classification and 20 for high-level, we trained CSNs and HPNs. CSNs additionally used an alignment loss weight of -10 to encourage parallelism between the two concept subspaces. The mean results over 10 trials are shown in the top half of Table 6.
We also compared CSNs to MGP and other hierarchical classifiers using the CIFAR100 dataset and a deeper hierarchy, consisting of 5 levels of sizes 100, 20, 8, 4, and 2, as done by Garnot & Landrieu (2020). The additional information provided by this deeper hierarchy resulted in improved classification performance. Median results (as done by Garnot & Landrieu (2020)) for 10 CSNs using this dataset are shown in the bottom half of Table 6. Changing the hierarchy changed how average cost was calculated, so values from the top and bottom halves of the table should not be compared. Within the bottom half, we note that CSNs outperformed MGP on both A.C. and classification accuracy. Furthermore, according to values generated in the extensive experiments conducted by Garnot & Landrieu (2020), CSNs outperformed numerous other baselines, including HXE and soft labels (Bertinetto et al., 2020), YOLO (Redmon et al., 2016), and a hyperspherical prototype network (Mettes et al., 2019), all of which were built upon a resnet18 pretrained on ImageNet. In fact, our CSNs achieve SOTA classification accuracy for any classifier built upon a resnet18 backbone, without data augmentation. Furthermore, the decrease in A.C. is especially surprising given that other techniques explicitly optimized for average cost reductions, while CSNs merely trained on classification at each level. Notably, the decrease in A.C. is not fully explained by the increase in accuracy, indicating that CSN not only exhibited higher accuracy but also made less severe mistakes when it did err.
Lastly, we note that CSNs support a range of learned relationships other than fair or hierarchical. The varying values of ρ in Table 3 indicate that CSNs may learn different relationships when alignment loss is set to 0. However, in general, one could train models to learn desired relationships by penalizing or rewarding alignment relative to some intercept. We trained and evaluated such models in Appendix E and found that models indeed learned the desired alignment.
4.3 FAIR AND HIERARCHICAL CLASSIFICATION
Prior experiments demonstrated how CSNs could solve different classification problems separately; in this section, we applied a single CSN to a task that required it to use both fair and hierarchical classification. Intuitively, fairness was used to protect privacy, while hierarchical structure was used for better performance.
We used a dataset describing human motion in a bolt-placement task. The dataset was gathered from a setup similar to that of Lasota et al. (2014): motion was recorded at 50 Hz, using the 3D location of each of the 8 volunteer participants' gloved right hands as they reached towards one of 8 holes arranged
in a line to place a bolt in the hole. The bolt holes may be thought of hierarchically by dividing destinations into left vs. right (LR) groupings, in addition to the label of the specific hole.
Initial exploration of the dataset showed promising results for prototype-based classification: the target locations were identified with 81% accuracy, and further analysis showed that prototypes corresponded to human-like motions (details in Appendix B). Troublingly, however, a trained CSN could identify the participant with over 60% accuracy, which posed privacy concerns. Nevertheless, from a robotic safety perspective, it is important for robots to exploit as much information as possible to avoid collisions with humans.
Ultimately, we wished to predict which hole a participant was reaching towards given the past 1 second of their motion, while preserving privacy and exploiting hierarchical structure. Thus, we designed a CSN with three concept subspaces: one for predicting the bolt (8 prototypes), one for predicting the high-level grouping of a left or right destination (2 prototypes), and one for predicting the participant id (8 prototypes). We enforced that the bolt and LR subspaces were parallel while the bolt and participant subspaces were orthogonal.
Results from our experiments are presented in Table 7. Means and standard deviations over 10 trials for each row are reported. (A.C. was calculated only for prediction errors, due to the large differences in accuracy rates across models.) When training a CSN with no constraints on subspace alignment, we found a highly accurate but unfair predictor (81% accuracy for bolt location, but sub-optimal disparate impact and demographic disparity values). Switching the CSNs to be fair classifiers by only enforcing orthogonality between bolts and participants yielded a fair classifier (illustrated by ρ, D.I., and DD-0.5), but with much worse bolt prediction accuracy (44%). However, by using a hierarchical subspace for LR groupings, the final CSN both improved classification accuracy and decreased the average cost of errors, while maintaining desired fairness characteristics.
5 CONTRIBUTIONS
The primary contribution of this work is a new type of model, the Concept Subspace Network, that supports inter-concept relationships. CSNs’ design, motivated by prior art in interpretable neural network models, use sets of prototypes to define concept subspaces in neural net latent spaces. The relationships between these subspaces may be controlled during training in order to guide desired model characteristics. Critically, we note that two popular classification problems — fair and hierarchical classification — are located at either end of a spectrum of concept relationships, allowing CSNs to solve each type of problem in a manner on par with techniques that had previously been designed to solve only one. Furthermore, a single CSN may exhibit multiple concept relationships, as demonstrated in a privacy-preserving hierarchical classification task.
While we have demonstrated the utility of CSNs within several domains, numerous extensions could improve their design. First, the idea of subspace alignment could be applied to non-Euclidean geometries like hyperbolic latent spaces that are sometimes used for hierarchical classification. Second, CSNs could additionally benefit from relaxation of some simplifying assumptions: notably, allowing for more complex relationships rather than those defined by subspace cosine similarity, or using adversarial approaches for distributional regularization rather than only supporting unit Gaussians.
Lastly, we note that CSNs, while designed with ethical applications such as fair classification in mind, may lead to undesired consequences. For example, malicious actors could enforce undesirable concept relationships, or simply observing emergent concept relationships within a CSN could reinforce undesirable correlations. In addition, although prototypes encourage interpretability, which we posit can be used for good, the reductive nature of prototypes may be problematic when classifying human-related data (e.g., the COMPAS fair classification task we avoided).
A SINGLE-CONCEPT CLASSIFICATION BASELINES
In addition to the specialized fair and hierarchical classification tasks, we tested CSNs on two standard classification tasks: identifying digits in the MNIST Digit dataset, and identifying one of 100 categories in the CIFAR100 dataset. There were no concept relationships present because these were single classification tasks; instead, the tests established that using a CSN did not degrade classification performance relative to PCNs or other neural architectures.
On the Digit dataset, we trained a CSN using the same encoding and decoding layer architectures with 20 prototypes (two for each digit) and applied the same Gaussian distortions to training images as Li et al. (2018). Over five trials, training for 50 epochs with batches of size 250, we achieved the same mean classification accuracy as PCN (99.22%), demonstrating that the use of a CSN did not worsen classification accuracy (Li et al., 2018).
On the CIFAR100 dataset, we extended a resnet18 backbone (pretrained on ImageNet) as our encoder with 100 prototypes, 1 for each class, and trained 10 models for 60 epochs (He et al., 2016). We achieved a mean classification accuracy of 76%, the standard result for networks built upon a resnet18 framework (Hase et al., 2019). Thus, CSNs exhibited high performance in a challenging domain, matching performance of normal networks, with the benefit of interpretable prototypes.
B VISUALIZING DECODED PROTOTYPES
Because CSNs are built upon prototype-based classification, they are at least as interpretable as prior art, such as PCNs. In this section, we demonstrate how prototypes may be decoded to visualize their representations. These figures were generated by decoding prototypes from the models used throughout the paper.
Figure 4 shows the first 10 prototypes from the MNIST digit classifier in Appendix A. Unlike PCNs, CSNs benefit from an inductive bias that leads to an equal number of prototypes per class.
Figure 5 depicts the decoded prototypes when training a CSN to predict human reaching motions, as described in Section 3.4. This model was only trained on motion prediction, without using fair or hierarchical training terms. Interestingly, by over-parametrizing the number of prototypes (there were only 8 possible destinations, but twice as many prototypes), the model learned different forms of trajectories that reached towards the same destination: short movements near the targets, and longer loops when reaching from farther away.
C VISUALIZING LEARNED LATENT SPACES
In addition to decoding prototypes, we visualized the latent spaces of trained classifiers. For the purposes of visualization, we trained new models from scratch, using only 2D latent spaces. Encodings and prototypes for both fair classification tasks, as well as the digit and fashion hierarchical classification tasks, are shown in Figure 6.
In all diagrams, encodings of test inputs are denoted by small colored dots. All classification tasks used 2 sets of prototypes: we depicted one set of prototypes as large black dots, and the other as X’s. The arrangement of the prototypes in the latent spaces confirms that CSNs have learned the right concept alignment.
Specifically, for the fair classification tasks, the X’s form a line segment that is orthogonal to the line formed by the black dots. This orthogonality leads to fairness, as discussed in our paper.
In the hierarchical domains, we similarly observed that CSNs had learned the “right” latent structure. In these domains, the black dots denoted prototypes for high-level classification (such as even vs. odd). We observed that the lower-level prototypes (e.g., for digit), denoted by ‘X’s, were clustered around the high-level prototypes.
As a whole, these visualizations confirm that CSNs learn the desired latent structure, all controlled by changing the alignment loss weight.
D CALCULATING LEARNED CAUSAL RELATIONSHIPS
In Sections 4.1 and 4.3, we report a metric, ρ, to denote the learned causal relationship between concepts. Here, we explain how we calculate ρ in greater detail.
Intuitively, ρ corresponds to the mean change in belief for one classification task divided by changes in belief for another classification task. For example, in fair classification, prediction of a person’s credit risk should not change based on changes in belief over the person’s age; this notion corresponds to ρ = 0.
We calculate ρ in CSNs using a technique inspired by Tucker et al. (2021). In that work, the authors studied if a language model’s output changed when the model’s internal representation changed according to syntactic principles. In our work, we change latent representations, z, by taking the gradient of z with respect to the loss of one classification task given true label y∗, creating a new z′
taken by moving z along that gradient, and then calculating the new classification likelihoods using z′.
For simplicity, we limit our analysis to CSNs with two concept subspaces. We denote the encoding of an input, x, as z = eθ(x). Prediction for each of the two tasks may be denoted as prediction functions, predi, for i ∈ [0, 1] indicating the two tasks; the prediction function corresponds to projecting z into the relevant subspace and calculating distances to prototypes, as discussed earlier. Using this notation, we define ρ formally:
z = e_θ(x)                          (5)
y_0 = pred_0(z)                     (6)
y_1 = pred_1(z)                     (7)
z′ = z + ∇_z loss(y_0, y*_0)        (8)
y′_0 = pred_0(z′)                   (9)
y′_1 = pred_1(z′)                   (10)
ρ = (y_0 − y′_0) / (y_1 − y′_1)     (11)
Thus, ρ captures how the model’s change in belief about one attribute affects its change in belief over another attribute, in other words the causal learned relationship between prediction tasks.
E CORRELATED-CONCEPT CLASSIFICATION BASELINES
A single CSN may be used to perform multiple classification tasks simultaneously without explicitly guiding concept relationships. In the Adult and German fair classification domains, CSNs predicted both s, the protected field, and y, the desired final prediction, while we explicitly guided the learned concept relationships to enforce fairness. In a separate set of experiments conducted on the same datasets, we demonstrated how CSNs can learn more complex concept relationships.
In these experiments, we trained CSNs with two subspaces, each with two prototypes, and set both the KL and alignment losses to zero. The CSNs were trained to predict both s and y, using the two subspaces. We recorded prediction accuracy of y and ρ, the learned causal correlation between s and y.
Over 10 trials, for the German and Adult datasets, CSNs achieved mean y classification accuracies of 85% and 74%, on par with prior art on these datasets when not enforcing fairness (Xie et al., 2017). We also found non-zero ρ: for the German dataset, we found a value of 0.20; for the Adult dataset, a mean value of 0.23. An example latent space from a CSN trained on the German dataset in this manner is shown in Figure 7, using the same visualization mechanism as introduced in Appendix C. In this example, the model learned a non-zero correlation between prototypes for credit (Xs) and
for an applicant’s age (circles). This type of learned correlation is undesirable in fair classification domains but may be useful in other scenarios.
As a demonstration of useful learned correlations, we implemented a CSN in a classification task using synthetic data. Consider a simplified weather prediction task in which, given noisy observations of temperature and precipitation, a weather station must classify the day as hot or cold and rainy or sunny. In the artificial world, in the last year of weather data, half of the days are rainy and half are sunny; all rainy days are cold and all sunny days are hot. Cold days have a true temperature drawn uniformly between 0.0 and 0.2, and warm days have a true temperature drawn uniformly between 0.8 and 1.0. Similarly, sunny days have a precipitation value drawn uniformly between 0.0 and 0.2, and rainy days have precipitation values drawn uniformly between 0.8 and 1.0. Observations of temperature and precipitation are corrupted by zero-mean Gaussian noise with σ = 0.05. Given these noisy observations, a model’s task is to predict binary labels for whether the day is hot or cold and rainy or sunny.
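The data-generating process above can be sketched as follows (the function name and return format are illustrative choices, not taken from our implementation):

```python
# A minimal sketch of the synthetic weather data described above.
import numpy as np

def make_weather_data(n_days=365, noise_sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rainy = rng.random(n_days) < 0.5                   # half rainy, half sunny
    # rainy days are cold (temp in [0, 0.2]); sunny days are hot ([0.8, 1.0])
    temp = np.where(rainy, rng.uniform(0.0, 0.2, n_days),
                           rng.uniform(0.8, 1.0, n_days))
    # sunny days have low precipitation ([0, 0.2]); rainy days high ([0.8, 1.0])
    precip = np.where(rainy, rng.uniform(0.8, 1.0, n_days),
                             rng.uniform(0.0, 0.2, n_days))
    # observations corrupted by zero-mean Gaussian noise with sigma = 0.05
    obs = np.stack([temp, precip], axis=1) + rng.normal(0, noise_sigma, (n_days, 2))
    labels_hot = (~rainy).astype(int)                  # hot if and only if sunny
    labels_rainy = rainy.astype(int)
    return obs, labels_hot, labels_rainy
```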
Numerous causal paths could explain observational data recorded from this environment in which hot days are sunny and cold days are rainy: rain could cause cold weather, some latent factor like atmospheric pressure could affect both precipitation and temperature, etc. Trained simply from observational data, models are unable to learn the right causal relationship between these variables.
Unlike traditional neural networks, however, CSNs allow humans to encode desirable causal relationships. We designed a CSN with two concept subspaces (for temperature and precipitation), each with two prototypes. We then penalized $(a(Q_{rs}, Q_{hc}) - \sqrt{\rho^*})^2$; that is, we set an intercept of $\sqrt{\rho^*}$ for the alignment loss between the two subspaces for rainy and sunny (rs) and hot and cold (hc). (We used $\sqrt{\rho^*}$ as the notation for the desired alignment for reasons that will become apparent in the next paragraph.)
In our experiments, we sought to identify whether CSNs could learn the desired causal relationship between temperature and precipitation. We did so by setting $\sqrt{\rho^*}$ to some value, training CSNs using standard losses, and then measuring whether the ρ metric calculated from the trained CSNs matched $\rho^*$. We trained 10 CSNs with latent dimension 2 and $\sqrt{\rho^*} = 0.5$. This corresponds to a cosine value of 0.7, or about 45 degrees, and is intuitively interpreted as meaning that for every percentage increase in the likelihood of the weather being sunny, the likelihood of it being warm should increase by 0.7 percent.
As desired, the trained CSNs had a mean ρ value of 0.72 (standard deviation 0.14). An example latent space from one such CSN is shown in Figure 7: the two subspaces are arranged at roughly 45 degrees. Moving an embedding of a cold and rainy day to increase the likelihood of it being sunny by 1% increases the predicted likelihood of it being warm by 0.7%. This demonstrates that we were able to train CSNs to learn the causal relationship we wished for. Lastly, we note that we repeated these experiments with other values of $\sqrt{\rho^*}$ and obtained similar results, and that if we did not include the alignment training loss, CSNs learned arbitrary concept relationships.
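A minimal sketch of this intercept penalty, assuming orthonormal bases for the two subspaces (e.g., obtained via QR factorization as in Section 3.2 of the main paper):

```python
# A hedged sketch of the alignment-intercept penalty; Q_rs and Q_hc are
# assumed to be orthonormal bases (e.g., from torch.linalg.qr) of the
# rainy/sunny and hot/cold concept subspaces.
import torch

def alignment(Q1, Q2):
    # mean squared cosine between all pairs of basis vectors (Eq. 3)
    return (Q1.T @ Q2).pow(2).mean()

def intercept_penalty(Q_rs, Q_hc, target=0.5):
    # penalize (a(Q_rs, Q_hc) - sqrt(rho*))^2, with sqrt(rho*) = 0.5 above
    return (alignment(Q_rs, Q_hc) - target) ** 2
```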
F CSN IMPLEMENTATIONS
In this section, we include details necessary for replication of experimental results that did not fit in the main paper. 1
In all experiments, we used random seeds ranging from 0 to the number of trials used for that experiment. Although CSNs support several prototypes per class (e.g., 2 prototypes for the digit 0, 2 prototypes for the digit 1, etc.), unless otherwise noted, we used an equal number of prototypes and classes.
In the German fairness experiments, we trained for 30 epochs, with batch size 32, with classification loss weights of 1, alignment loss of 100 between the two subspaces, and KL losses of 0.5. In the Adult fairness experiments, we trained for 20 epochs, with batch size 128, with classification loss weights of [1, 0.1] for y and s, respectively, alignment loss of 100 between the two subspaces, and KL losses of 0.1. For both fair classification tasks, the CSN encoders comprised 2 fully-connected layers with ReLU activations and hidden dimension 128, outputting into a 32-dimensional latent
1Anonymized code is available at https://anonymous.4open.science/r/csn-8470/
space, and the decoder comprised two 128-dimensional ReLU layers followed by a sigmoid layer. The whole model was trained with an Adam optimizer with default parameters.
For the digit and fashion hierarchical classification tasks, CSNs were trained for 20 epochs with batch size 128. CSNs for both datasets used identical architectures to the networks created for the fair classification tasks.
For the CIFAR100 hierarchical classification task, we built upon a ResNet18 backbone, as done by Garnot & Landrieu (2020). The encoder consisted of a ResNet encoder, pre-trained on ImageNet, followed by two fully-connected layers with hidden dimension 4096, feeding into a latent space of dimension 100. The decoder (and decoding training loss) was removed in this domain to reduce training time. The network was trained using an SGD optimizer with learning rate 0.001 and momentum 0.9, with batch size 32. Training terminated after 60 epochs or due to early stopping, with a patience of 10 epochs. Classification loss weights were set to [1, 5] for classifying the high- and low-level categories, respectively. KL losses were set to 0; alignment loss was set to −10 to encourage parallel subspaces.
For the fair and hierarchical classification task with bolts, CSNs used the same architecture as for the fair classification task. Models were trained for 50 epochs, with batch size 256. All classification loss weights were set to 1; alignment loss between bolts and LR was set to −10 and between bolts and participant was set to 100. KL losses were set to 2.
G FAIR CLASSIFICATION BASELINES
In Section 4.1, we compared CSNs to several fair classification baselines on the standard German and Adult datasets. Although the datasets are standard in the literature, there are a wide variety of fairness metrics, only a subset of which each method has published. Therefore, we implemented each fair classification baseline and recorded all metrics of interest for each method. In this section, we demonstrated the soundness of our implementations by comparing to published metrics. Implementations of our baselines are available here: https://anonymous.4open.science/status/fairbaselines-44A7.
Tables 8 and 9 report the recorded metrics for each fairness technique. Values reported in prior literature are included in the table using the technique’s name (e.g., the second row of Table 8, labeled ‘Adv.’ includes the values reported by Xie et al. (2017)). Values that we measured using our implementations of each technique are marked with asterisks. We report means and standard errors for our implementations for each method and compare to the metrics that prior methods published for each dataset.
The bottom halves of Tables 8 and 9 are separated from the top halves to indicate modified datasets. The Wass. DB baseline used the German and Adult datasets but treated protected fields differently (e.g., by creating binary age labels at the cutoff age of 30 instead of 25, as all other techniques did). We therefore evaluated CSNs and our own implementations of Wass. DB on these different datasets as well and reported them below the horizontal line.
Interestingly, while we were able to recreate the Wass. DB results on these modified datasets, the technique, when applied to the standard datasets, demonstrated better fairness than most techniques but worse y Acc. (We repeated the hyperparameter sweeps reported by Jiang et al. (2020) and used the best results.) We attribute the low y Acc. to the fact that Jiang et al. (2020) call for a linear model, as opposed to the deeper neural nets used by other approaches. When using the datasets suggested by Wass. DB, we reproduced their published results, as did CSNs trained on the same datasets. We note, however, that predictors on this dataset are of limited use, as both Wass. DB and CSNs fail to outperform random classification accuracy.
As a whole, Tables 8 and 9 give us confidence in our implementations of the fairness baselines. Our implementations were able to match or exceed metrics reported from prior art. This suggests that our underlying implementations were correct and that new metrics we gathered on them were valid. | 1. What is the main contribution of the paper regarding prototype-based representation?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its application in fairness and hierarchical classification?
3. Do you have any questions or concerns about the motivation behind the paper, the choice of subspaces, and the relationship between parallel concepts in hierarchical classification?
4. How does the reviewer assess the clarity and quality of the paper's content, including the choice of examples and the explanation of key concepts?
5. Are there any minor suggestions for improving the paper, such as adding a figure to describe the proposed architecture? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a framework (which the authors call the concept subspace network) that uses prototype-based representations and controls the alignment between two subspaces for the purpose of classification (fair or hierarchical classification).
Review
Strength:
The paper proposed a new prototype-based approach considering the relationship between two concepts (classification tasks) for fairness and hierarchical classification.
Weakness
The motivation of this paper is not clear to me. It would be helpful to understand the motivation by giving examples of major applications where both fairness and hierarchical classification should be considered. Also, is there any challenge in training a classifier with a fairness regularization term under existing hierarchical-classifier training methods?
It is not clear that why the two subspaces should be orthogonal and parallel for a fair and hierarchical classifier, respectively. Specifically, in section 3.4:
For fair classification, what are the two subspaces? Is it correct that the two subspaces are for label classification and sensitive-attribute classification (e.g., male vs. female), respectively? Then, does the orthogonal relationship between the prototypes for estimating the sensitive attribute and for label prediction guarantee the independence of the actual sensitive attribute and the label prediction?
“In hierarchical classification, concepts are highly aligned and therefore parallel: the difference between a toucan and a Dalmatian is similar to the difference between a generic bird and dog.” --> In this example, why do two parallel concepts imply that the difference between a toucan and a Dalmatian is similar to the difference between a generic bird and dog? I think that the parallelism of two concepts is not related to the difference relationships among prototypes. It is therefore not clear why two concepts should be parallel in hierarchical classification.
Questions:
How does one train a hierarchical classifier with 3 or more concepts? In hierarchical classification, I think there are at least 3 concepts (e.g., dog vs bird, dog species classification, bird species classification)
In the experiments, how were the lambda parameters (in equation 2) chosen?
Minor feedback:
Adding a figure describing the proposed architecture will help readers understand the framework. |
ICLR | Title
Prototype Based Classification from Hierarchy to Fairness
Abstract
Artificial neural nets can represent and classify many types of high-dimensional data but are often tailored to particular applications – e.g., for “fair” or “hierarchical” classification. Once an architecture has been selected, it is often difficult for humans to adjust models for a new task; for example, a hierarchical classifier cannot be easily transformed into a fair classifier that shields a protected field. Our contribution in this work is a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. We demonstrate that CSNs reproduce state-of-the-art results in fair classification when enforcing concept independence, may be transformed into hierarchical classifiers, or may even reconcile fairness and hierarchy within a single classifier. The CSN is inspired by and matches the performance of existing prototype-based classifiers that promote interpretability.
1 INTRODUCTION
Neural networks are able to learn rich representations of data that support highly accurate classification; however, understanding or controlling what neural nets learn remains challenging. Some techniques offer insight into pre-trained models by uncovering directions within latent spaces that correspond to particular concepts, image manipulations, or more (Goetschalckx et al., 2019; Kim et al., 2018), while approaches focused on interpretability provide techniques that are more comprehensible to humans (Li et al., 2018; Chen et al., 2019). While these methods provide insight, they fail to offer control: humans observe learned patterns but are unable to guide models such that learned relationships are useful for a particular setting or task.
Another line of work has advanced the design of models for particular types of classification tasks (such as fair or hierarchical classification) but these techniques are often developed with only one problem in mind (Zemel et al., 2016; Xie et al., 2017; Hase et al., 2019). For example, models built for fair classification (predicting an outcome regardless of information about a protected field) are only used to enforce independence of concepts rather than hierarchy. Thus, humans may exert control over learned representations by selecting an appropriate technique rather than tuning training parameters within the same technique.
We have designed a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. CSNs use prototype-based representations, a technique employed in interpretable neural networks in prior art (Li et al., 2018; Chen et al., 2019; Garnot & Landrieu, 2020). A single CSN uses sets of prototypes in order to simultaneously learn multiple concepts; classification within a single concept (e.g., “type of animal”) is performed by projecting encodings into a concept subspace defined by the prototypes for that concept (e.g., “bird,” “dog,” etc.). Lastly, CSNs use a measure of concept subspace alignment to guide concept relationships such as independence or hierarchy.
In our experiments, CSNs performed comparably to state-of-the-art in fair classification, despite prior methods being designed only for this type of problem. In applying CSNs to hierarchical classification tasks, networks automatically deduced interpretable representations of the hierarchical problem structure, allowing them to outperform state-of-the-art, for a given neural network backbone, in terms of both accuracy and average cost of errors on the CIFAR100 dataset. Lastly, in
a human-motion prediction task, we demonstrated how a single CSN could enforce both fairness (to preserve participant privacy) and hierarchy (to exploit a known taxonomy of tasks). Our findings suggest that CSNs may be applied to a wide range of problems that had previously only been addressed individually, or not at all.
2 RELATED WORK
2.1 INTERPRETABILITY AND PROTOTYPE NETWORKS
Numerous post-hoc explanation techniques fit models to pre-trained neural nets; if humans understand these auxiliary models, they can hypothesize about how the neural nets behave (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, techniques in which explanations are decoupled from underlying logic may be susceptible to adversarial attacks or produce misleading explanations (Heo et al., 2019; Slack et al., 2020).
Unlike such decoupled explanations, interpretability research seeks to expose a model’s reasoning. In this work we focus on prototype-based latent representations in neural nets. There is a long history of learning discrete representations in continuous spaces, originating under “vector quantization” literature (Kohonen, 1990; Schneider et al., 2009). More recently, the prototype case network (PCN) comprised an autoencoder model that clustered encodings around understandable, trainable prototypes, with classifications made via a linear weighting of the distances from encodings to prototypes (Li et al., 2018). Further research in image classification extended PCNs to use convolutional filters as prototypes and for hierarchical classification in the hierarchical prototype network (HPN) (Chen et al., 2019; Hase et al., 2019). Lastly, Garnot & Landrieu (2020) use prototypes in Metric-Guided Prototype Learning (MGP) in conjunction with a loss function to cluster prototypes to minimize user-defined costs.
Our model similarly uses trainable prototypes for classification, but differs from prior art in two respects. First, we modify the standard PCN architecture to support other changes, without degrading classification performance. Second, like HPNs (but not PCNs or MGP), CSNs leverage multiple sets of prototypes to enable hierarchical classification but also allow for non-hierarchical concept relationships.
2.2 FAIR AND HIERARCHICAL CLASSIFICATION
AI fairness research considers how to mitigate undesirable patterns or biases in machine learning models. Consider the problem of predicting a person’s credit risk: non-causal correlations between age and risk may lead AI models to inappropriately penalize people according to their age (Zemel et al., 2016). The problem of fair classification is often framed as follows: given inputs, x, which are informative of a protected field, s, and outcome, y, predict y from x without being influenced by s (Zemel et al., 2013). Merely removing s from x (e.g., not including age as an input to a credit predictor) rarely removes all information about s, so researchers have developed a variety of techniques to create representations that “purge” information about s (Zemel et al., 2016; Xie et al., 2017; Jiang et al., 2020).
Hierarchical classification solves a different problem: given a hierarchical taxonomy of classes (e.g., birds vs. dogs at a high level and sparrows vs. toucans at a low level), output the correct label at each classification level. Neural nets using convolution and recurrent layers in specialized designs have achieved remarkable success in hierarchical image classification (Zhu & Bain, 2017; Guo et al., 2018). The hierarchical prototype network (HPN) uses prototypes and a training routine based upon conditional subsets of training data to create hierarchically-organized prototypes (Hase et al., 2019). Garnot & Landrieu (2020) also use prototypes for hierarchical classification in Metric-Guided Prototype Learning (MGP) by adjusting the training loss to guide prototype arrangement. Neither HPN nor MGP explicitly models relationships between multiple subsets of prototypes. Lastly, recent works propose hyperbolic latent spaces as a natural way to model hierarchical data (Dai et al., 2021; Mathieu et al., 2019; Nickel & Kiela, 2017; Liu et al., 2020). Our method, conversely, relies upon concepts from Euclidean geometry. Extending the principle of subspace alignment that we develop to non-Euclidean geometric spaces is a promising direction but is beyond the scope of this work.
3 TECHNICAL APPROACH
In this section, we outlined the design of the CSN, which was inspired by desires for both interpretable representations and explicit concept relationships. First, we wished for interpretable representations, so we built upon the PCN design, with modifications. Second, we explicitly encoded relationships between concepts by introducing multiple sets of prototypes, instead of just one in PCNs. Third, we enabled guidance of the concept relationships by modifying the CSN training loss. Together, these changes supported not only interpretable classification, but also provided a flexible framework for a single model architecture to learn different concept relationships.
3.1 CONCEPT SUBSPACE CLASSIFICATION
A CSN performing a single classification task (e.g., identifying a digit in an image) is defined by three sets of trainable weights. First, an encoder parametrized by weights θ, eθ, maps from inputs of dimension X to encodings of dimension Z: eθ : RX → RZ. Second, a decoder parametrized by weights φ, dφ, maps from encodings back to reconstructed inputs: dφ : RZ → RX. Third, there exists a set of k trainable prototype weights, p, that are each Z-dimensional vectors: p1, p2, ..., pk ∈ RZ. This architecture resembles that of the PCN, but without the additional linear classification layer (Li et al., 2018).
Here, we focus briefly on the set of prototypes, p. Given a set of k prototypes in RZ , we define a “concept subspace,” C as follows:
$$v_i = p_i - p_1 \quad \forall i \in [2, k] \tag{1}$$

$$C = \Big\{\, x \;\Big|\; x \in \mathbb{R}^Z \text{ where } x = p_1 + \textstyle\sum_{i \in [2,k]} \lambda_i v_i \text{ for } \lambda_i \in \mathbb{R} \;\forall i \,\Big\} \tag{2}$$
C is the linear subspace in RZ defined by starting at the first prototype and adding linear scalings of vector differences to all other prototypes. We call this subspace a concept subspace because it represents a space of encodings between prototypes defining a single concept (e.g., prototypes for digits 0, 1, 2, etc. define a concept subspace for digit classification).
A CSN’s architecture (an encoder, a decoder, and a set of prototypes with the associated concept subspace) enables two types of functionality: the encoder and decoder may be composed to reconstruct inputs via their latent representations, and CSNs may perform classification tasks by mapping an input, x, to one of Y discrete categories. Classification is performed by first encoding an input into a latent representation, z = eθ(x). The squared l2 distance from z to each prototype is then calculated, yielding k distance values: di(z, p) = ||z − pi||_2^2 for i ∈ [1, k]. These distances are mapped to a probability distribution, PK(i) for i ∈ [1, k], by taking the softmax of their negatives. Lastly, if there are more prototypes than classes (e.g., two prototypes for dogs, two for cats, etc.), the distribution over k is converted to a distribution over Y categories by summing the probabilities for prototypes belonging to the same class.
For single-concept classification, CSNs differ from PCNs primarily by removing the linear layer that PCNs used to transform distances to prototypes into classifications. We found this layer unnecessary for high classification accuracy (Appendix A) and instead used negative distances directly. Without the linear layer, CSN classification is equivalent to projecting encodings, z, onto a concept subspace before calculating distances: the distances between the projected encoding, dubbed zproj, and the prototypes induce the same softmax distribution as when the orthogonal component remains. Indeed, we find projection more intuitive, since only the component of z that corresponds to the subspace is used for classification, and we list projection as a standard step in the remainder of this paper. A simple example of projecting an encoding and calculating distances to prototypes is shown in Figure 1a.
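As an illustrative sketch (with our own tensor conventions: prototypes p of shape (k, Z) and a batch of encodings z of shape (B, Z); not the exact released implementation), classification for one concept can be written as:

```python
# A minimal sketch of projection-based prototype classification.
import torch
import torch.nn.functional as F

def classify(z, p):
    # basis vectors of the concept subspace: differences to the first
    # prototype (Eq. 1), orthonormalized via reduced QR factorization
    V = (p[1:] - p[0]).T                      # (Z, k-1)
    Q, _ = torch.linalg.qr(V)                 # orthonormal basis, (Z, k-1)
    # project encodings into the (affine) concept subspace
    z_proj = (z - p[0]) @ Q @ Q.T + p[0]      # (B, Z)
    # squared l2 distances to each prototype ...
    d = torch.cdist(z_proj, p).pow(2)         # (B, k)
    # ... mapped to class probabilities by a softmax of their negatives
    return F.softmax(-d, dim=1)
```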
For some tasks, we used an encoder design from variational-autoencoders (VAEs) in order to regularize the distribution of encodings to conform to unit Gaussians (Kingma & Welling, 2014). By default, this regularization loss was set to 0, but it sometimes proved useful in some domains to prevent overfitting (as detailed in experiments later). We emphasize that CSNs are discriminative, rather than generative, models, so we did not seek to learn a latent space from which to sample.
3.2 MULTI-CONCEPT LEARNING
We defined the CSN architecture for single classification tasks in the previous section; here, we explain how a CSN may be used for multiple classification tasks. (For example, consider a scenario involving classifying both what type of bird a photo depicts and whether the photo was taken outdoors or indoors.) Extending CSNs to support multiple classifications requires the addition of new sets of prototypes. This is the primary contribution of our work.
Multiple classification tasks are performed by defining a set consisting of sets of prototypes: P = {p1, ...,pc}, with a set of prototypes for each of c classification tasks. A classification task is performed by using the CSN’s encoder to generate an encoding, z, and projecting z into the concept subspace defined by the set of prototypes particular to the given task. Figure 1 (b-d) depicts simplified examples of two concept subspaces. In each example, each concept uses three prototypes, yielding two planar concept spaces (one of which corresponds to the x − y plane for illustrative purposes); z may be projected into either plane depending upon the classification task at hand.
While the prototypes in different sets are separate from each other, correlations present in training data may lead to a range of relationships among prototypes. Returning to the previous example scenario, prototypes of birds may represent canaries and toucans, while prototypes of indoor and outdoor scenes may represent living rooms and jungles; each set of prototypes is independent in principle, but in reality, prototypes may represent canaries in living rooms and toucans in jungles. In fact, two sets of prototypes can exhibit a range of relationships from highly correlated to fully independent, as shown in Figure 1.
We defined a metric, concept subspace alignment, to reflect this range of relationships. Mathematically, the alignment of two subspaces is the mean of the cosine squared of the angle between all pairs of vectors drawn from the basis of each subspace. Given orthonormal bases, efficiently computed via QR factorization, Q1 and Q2, of ranks m and n, we define alignment as follows:
$$a(Q_1, Q_2) = \frac{1}{mn} \sum_{i}^{m} \sum_{j}^{n} \big((Q_1^\top Q_2)[i, j]\big)^2 \tag{3}$$
Given the range of values for the cosine squared function, alignment values range from 0 to 1 for orthogonal and parallel subspaces, respectively. Intuitively, orthogonality lends itself well to independent concepts and therefore supports fair classification, whereas parallel subspaces naturally correspond to hierarchical classification. We elaborated on this intuition in Section 3.4.
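A minimal sketch of Equation 3, with orthonormal bases recovered from the prototypes via QR factorization (tensor shapes are illustrative assumptions):

```python
# A hedged sketch of the concept subspace alignment metric (Eq. 3);
# p1, p2 are prototype tensors of shape (k, Z).
import torch

def subspace_basis(p):
    V = (p[1:] - p[0]).T          # vector differences to the first prototype
    Q, _ = torch.linalg.qr(V)     # orthonormal basis, shape (Z, k-1)
    return Q

def alignment(p1, p2):
    Q1, Q2 = subspace_basis(p1), subspace_basis(p2)
    # mean squared cosine over all pairs of basis vectors:
    # 0 for orthogonal subspaces, 1 for parallel ones
    return (Q1.T @ Q2).pow(2).mean()
```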
3.3 TRAINING PROCEDURE
When training a CSN, we assume access to a set of training data, (X,Y) for Y = (Y1, Y2, ...Yc). For each entry in the dataset, there is an input x and a label yi, for each of c classification tasks.
We trained CSNs in an end-to-end manner to minimize a single loss function, defined in Equation 4. The four terms in the loss function were as follows: 1) reconstruction error; 2) the loss introduced for the PCN, encouraging classification accuracy and the clustering of encodings around prototypes (applied within each concept subspace); 3) a KL divergence regularization term; and 4) a term penalizing alignment between concept subspaces. Each term was weighted by a choice of real-valued λs. We emphasize that the PCN loss (clustering and classification accuracy, as defined in Equation 7 of Li et al. (2018)) is calculated within each concept subspace using the projections of encodings; thus, encodings were encouraged to cluster around prototypes only along dimensions within the subspace. The encoder, decoder, and prototype weights were trained simultaneously.
$$
l(X, Y, \theta, \phi, P) = \frac{\lambda_0}{|X|} \sum_{x \in X} \big(d_\phi(e_\theta(x)) - x\big)^2
+ \sum_{i \in [1,C]} \lambda_{P_i}\, \mathrm{PCN}\big(\mathrm{proj}(e_\theta(X), \mathbf{p}_i), Y_i\big)
+ \sum_{i \in [1,C]} \lambda_{KL_i}\, \mathrm{KL}(X, \mathbf{p}_i)
+ \sum_{i \in [1,C]} \sum_{j \in [1,C]} \lambda_{A_{ij}}\, a(Q_i, Q_j) \tag{4}
$$
The KL regularization term mimics training losses often used in VAEs that penalize the divergence between the distribution of encodings and a zero-mean unit Gaussian (Kingma & Welling, 2014). In our case, we wished to induce a similar distribution of encodings, but centered around prototypes rather than the origin. Furthermore, rather than induce a Gaussian distribution within a concept subspace (which would dictate classification probabilities and therefore potentially worsen classification accuracy), we wished to regularize the out-of-subspace components of encodings.
Concretely, we implemented this regularization loss in three steps. First, we computed the orthogonal component of an encoding as zorth = z − zproj. We then computed the KL divergence between the distribution of zorth and unit Gaussians centered at each prototype in each subspace. Finally, we took the softmax over negative distances between encodings and prototypes in order to softly select the closest prototype to the encoding; we then multiplied the softmax weights by the divergences to enforce that encodings were distributed as unit Gaussians around the nearest prototype in each subspace. Together, these operations led the distributions out of each subspace to conform to unit Gaussians around each prototype. As confirmed in later experiments, this component was crucial in training fair classifiers.
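One plausible reading of these three steps in code is sketched below; the tensor conventions and exact parametrization are our assumptions, not a definitive implementation:

```python
# A hedged sketch of the out-of-subspace KL regularizer; mu, logvar are the
# VAE-style encoder outputs (B, Z), p the prototypes (k, Z), and Q an
# orthonormal subspace basis (Z, m).
import torch
import torch.nn.functional as F

def orth_kl_loss(mu, logvar, p, Q):
    # step 1: orthogonal (out-of-subspace) components of encodings and prototypes
    mu_orth = mu - (mu @ Q) @ Q.T
    p_orth = p - (p @ Q) @ Q.T
    # step 2: analytic KL from N(mu_orth, diag(exp(logvar))) to N(p_orth_i, I)
    diff = mu_orth[:, None, :] - p_orth[None, :, :]                   # (B, k, Z)
    kl = 0.5 * (diff.pow(2).sum(-1)
                + (logvar.exp() - 1 - logvar).sum(-1, keepdim=True))  # (B, k)
    # step 3: softly select the closest prototype via a softmax over
    # negative encoding-prototype distances, then weight the divergences
    w = F.softmax(-torch.cdist(mu, p).pow(2), dim=1)                  # (B, k)
    return (w * kl).sum(1).mean()
```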
3.4 HIERARCHICAL AND FAIR CLASSIFICATION
We conclude this section by demonstrating how CSNs may support hierarchical or fair classifications. Hierarchical and fair classification may be thought of as extremes along a spectrum of concept alignment. In hierarchical classification, concepts are highly aligned and therefore parallel: the difference between a toucan and a Dalmatian is similar to the difference between a generic bird and dog, and so the vector differences between prototypes associated with different classes should also be parallel (e.g., “bird” - “dog” = “toucan” - “Dalmatian.”). In fair classification, concepts are not aligned: switching belief about someone’s sex should not alter predictions about their income. Thus, based on the classification task, moving an encoding relative to one subspace should either affect (for hierarchical) or not affect (for fair) that encoding’s projection onto the other subspace. We provide a geometric interpretation of these two tasks in Figure 1 b and d.
CSNs can be trained to adopt either form of concept relationship by penalizing or encouraging concept subspace alignment (already present as a(Qi, Qj) in the training loss). Our single model reconciles these two types of problems by viewing them as opposite extremes along a spectrum of concept relationships that our technique is able to learn; this is the main contribution of our work.
4 RESULTS
Our experiments were divided in four parts. First, we demonstrated how CSNs matched standard performance on single classification tasks: in other words, that using a CSN did not degrade performance. We omit these unsurprising results from the paper; full details are included in Appendix A. Second, we showed that CSNs matched state-of-the-art performance in two fair classification tasks. Third, we used CSNs for hierarchical classification tasks, exceeding performance demonstrated by
Table 1: Mean Adult dataset fairness results.
Model     y Acc.  s Acc.  D.I.   DD-0.5
CSN       0.85    0.67    0.83   0.16
Adv.      0.85    0.67    0.87   0.16
VFAE      0.85    0.70    0.82   0.17
FR Train  0.85    0.67    0.83   0.16
Wass. DB  0.81    0.67    0.92   0.08
Random    0.76    0.67    --     --
Table 2: Mean German dataset fairness results.
Model     y Acc.  s Acc.  D.I.   DD-0.5
CSN       0.73    0.81    0.70   0.10
Adv.      0.73    0.81    0.63   0.10
VFAE      0.72    0.81    0.47   0.23
FR Train  0.72    0.80    0.55   0.16
Wass DB   0.72    0.81    0.33   0.02
Random    0.70    0.81    --     --
prior art along several metrics. Fourth, we showed how CSNs enabled both fair and hierarchical classification in a dataset describing human motion in an assembly task that exploited hierarchical knowledge while preserving participant anonymity. Implementation details of CSNs in all experiments are included in Appendix F.
4.1 FAIR CLASSIFICATION
We evaluated CSN’s performance in fair classification tasks on the Adult and German datasets. These datasets are commonly used in the fairness literature and contain data that can be used to predict people’s income or credit risk (Dua & Graff, 2017). We compared CSN performance to our implementations of an adversarial purging technique (Adv.), the variational fair autoencoder (VFAE), Wasserstein Fair Classification (Wass. DB), and a mutual-information-based fairness approach (FR Train) (Xie et al., 2017; Zemel et al., 2016; Jiang et al., 2020; Roh et al., 2020). Implementation details of fair classification baselines and full results including standard deviations are included in Appendix G.
For the Adult dataset, the protected attribute was sex, and for the German dataset, the protected attribute was a binary variable indicating whether the person was older than 25 years of age. In evaluation, we measured y Acc., the accuracy of predicting income or credit, s Acc., the accuracy of a linear classifier trained to predict the protected field from the latent space, disparate impact (DI), as defined in Roh et al. (2020), and demographic disparity (DD-0.5), as defined by Jiang et al. (2020).
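For reference, a sketch of these two fairness metrics as commonly formulated; the exact definitions follow Roh et al. (2020) and Jiang et al. (2020), so the forms below should be treated as assumptions:

```python
# A hedged sketch of the fairness metrics; y_pred is a binary numpy array of
# predictions, y_prob the predicted probabilities, s the protected attribute.
import numpy as np

def disparate_impact(y_pred, s):
    # ratio of group-conditional positive rates, folded so the value is <= 1
    # (assumes both groups have a nonzero positive rate)
    r0 = y_pred[s == 0].mean()
    r1 = y_pred[s == 1].mean()
    return min(r0 / r1, r1 / r0)        # 1.0 is perfectly fair

def demographic_disparity(y_prob, s, tau=0.5):
    # mean absolute gap between each group's positive rate at threshold tau
    # and the overall positive rate; 0.0 is perfectly fair
    y_hat = (y_prob > tau).astype(float)
    base = y_hat.mean()
    return float(np.mean([abs(y_hat[s == g].mean() - base) for g in np.unique(s)]))
```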
Mean results over 20 trials for both datasets were included in Tables 1 and 2. In both datasets, we observed that CSNs matched state-of-the-art performance. CSNs produced high y Acc., indicating high task performance for predicting income or credit. Furthermore, fairness measures demonstrate that CSNs purged protected information successfully (low s Acc.) and achieved high D.I. and low DD-0.5, as desired. A visualization of the latent space of a fair classifier, trained on the German dataset, is shown in Figure 2 and confirmed that CSNs learned orthogonal concept subspaces.
In addition to reproducing the state of the art, we conducted an ablation study to demonstrate the importance of two terms in our training loss: the alignment and KL losses. Using the German dataset, we trained 20 CSNs, setting the KL, alignment, or both loss weights to 0. The mean results of these trials are reported in Table 3.
Table 3 demonstrates the necessity of both KL and alignment losses to train fair predictors (with higher disparate impact and lower demographic disparity values). Including both loss terms resulted
in the fairest predictors; removing those losses could enable better classification accuracy, but at the expense of fairness. This confirms geometric intuition: the alignment loss created orthogonal subspaces and the KL regularization created distributional equivalence based on the subspaces. Jointly, these losses therefore produced statistical independence.
Table 3 also includes causal analysis of trained CSNs via the ρ metric. Intuitively, this metric reflected the learned correlation between s and y; it was calculated by updating embeddings in the CSN latent space along the gradient of s and recording the change in prediction over y. We reported the ratio of these changes as ρ; as expected, enforcing orthogonality via alignment loss led to ρ values of 0. This technique is inspired by work in causally probing language models (e.g., Tucker et al. (2021)); full details for calculating ρ are included in Appendix D.
4.2 HIERARCHICAL CLASSIFICATION
We compared CSNs to our implementation of HPNs and results for Metric-Guided Prototype Learning (MGP), reported by Garnot & Landrieu (2020), for hierarchical classification tasks. Our HPN baseline used the same architecture as CSN (same encoder, decoder, and number of prototypes). It differed from CSNs by setting alignment losses to 0 and by adopting the conditional probability training loss introduced by Hase et al. (2019). We further included results of a randomly-initialized CSN under “Init.” in tables. In these experiments, we sought to test the hypothesis that CSNs with highly aligned subspaces would support hierarchical classification, just as orthogonal subspaces enabled fair classification.
In addition to standard accuracy metrics, we measured two aspects of CSNs trained on hierarchical tasks. First, we recorded the “average cost” (AC) of errors. AC is defined as the mean distance between the predicted and true label in a graph of the hierarchical taxonomy (e.g., if the true and predicted labels shared a common parent, the cost was 2; if the common ancestor was two levels up, the cost was 4, etc.) (Garnot & Landrieu, 2020). Second, we measured the quality of trees derived from the learned prototypes. After a CSN was trained, we defined a fully-connected graph G = (V, E) with vertices V = P ∪ {0} (the set of all prototypes and a point at the origin) and undirected edges between each pair of nodes with lengths equal to the l2 distance between nodes in the latent space. We recovered the minimum spanning tree, T, from G (which is unique given distinct edge lengths, as observed in all experiments) and converted all edges to directed edges through a global ordering of nodes. Lastly, we calculated the graph edit distance (ED) between isomorphisms of the recovered tree and the ground-truth hierarchical tree (with edges that obeyed the same ordering constraints) (Abu-Aisheh et al., 2015). Intuitively, this corresponded to counting how many edges had to be deleted or added to the minimum spanning tree to match the taxonomy tree, ignoring edge lengths, with a minimum value of 0 for perfect matches.
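The tree-recovery step can be sketched with networkx as below; the comparison to the ground-truth taxonomy via graph edit distance (e.g., nx.graph_edit_distance) is omitted for brevity:

```python
# A minimal sketch of recovering the prototype tree described above.
import itertools
import numpy as np
import networkx as nx

def prototype_tree(prototypes):
    """prototypes: (n, Z) array of learned prototypes."""
    # vertices: all prototypes plus a point at the origin
    nodes = np.vstack([prototypes, np.zeros(prototypes.shape[1])])
    G = nx.Graph()
    for i, j in itertools.combinations(range(len(nodes)), 2):
        # edge lengths: l2 distance between nodes in the latent space
        G.add_edge(i, j, weight=float(np.linalg.norm(nodes[i] - nodes[j])))
    return nx.minimum_spanning_tree(G)
```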
As a basic test of CSNs in hierarchical classification tasks, we created simple hierarchies from the MNIST Digit and Fashion datasets. The Digit dataset used the standard low-level labels of digit, supplemented with high-level labels of parity (two classes); the Fashion dataset used the standard low-level labels for item of clothing, with a ternary label for a high-level classification of “tops” (tshirts, pullovers, coats, and shirts), “shoes” (sandals, sneakers, and ankle boots), or “other” (trousers, dresses, and bags).
Mean results from 10 trials for both MNIST datasets were included in Tables 4 and 5. The HPN baselines were implemented using the same number of prototypes as the CSNs being compared against. Both tables show that CSNs exhibit comparable or better accuracy than HPNs for both the low-level (Y0) and high-level (Y1) classification tasks. In addition, the average cost (A.C.) and edit distance (E.D.) values show that CSNs recovered minimum spanning trees that nearly perfectly matched the ground truth tree, and that when CSNs did make errors, they were less “costly” than errors made by HPNs (although admittedly, a dominant force in A.C. is classification accuracy alone). A 2D visualization of the latent space of a CSN trained on the Digit task is shown in Figure 3: encodings for particular digits clustered around prototypes for those digits (X), while prototypes for even and odd digits (circles) separated the digit clusters into the left and right halves of the latent space. Visualizations of latent spaces for more fair and hierarchical classification tasks are included in Appendix C; they confirmed the theoretical derivations of orthogonal and parallel subspaces.
Lastly, we trained 10 CSNs and HPNs on the substantially more challenging CIFAR100 dataset. The dataset is inherently hierarchical: the 100 low-level classes are grouped into 20 higher-level
Table 4: MNIST digit hierarchy mean (stdev) over 10 trials. First two columns × 100.
Table 5: MNIST fashion hierarchy mean (stdev) over 10 trials. First two columns × 100.
Figure 3: 2D latent space for hierarchical digit classification creates clusters around even and odd prototypes (circles on the right and left, respectively) and digit prototypes (X).
Table 6: CIFAR100 hierarchy results (top half: 2-level hierarchy vs. HPN; bottom half: 5-level hierarchy vs. MGP).

Model  Y0%          Y1%          A.C.          E.D.
CSN    0.76 (0.0)   0.85 (0.0)   0.76 (0.02)   11.2 (7)
HPN    0.71 (0.0)   0.80 (0.0)   0.97 (0.04)   165.0 (3)
Init.  0.01         0.05         3.88          200
CSN    0.78 (0.0)   0.88 (0.0)   0.91 (0.0)    6.0 (8.2)
MGP    0.76         --           1.05          --
Init.  0.01         0.05         7.33          258
classes, each of size 5. Using a resnet18 encoder, pre-trained on ImageNet, in conjunction with 100 prototypes for low-level classification and 20 for high-level, we trained CSNs and HPNs. CSNs additionally used an alignment loss weight of -10 to encourage parallelism between the two concept subspaces. The mean results over 10 trials are shown in the top half of Table 6.
We also compared CSNs to MGP and other hierarchical classifiers using the CIFAR100 dataset and a deeper hierarchy, consisting of 5 levels of sizes 100, 20, 8, 4, and 2, as done by Garnot & Landrieu (2020). The additional information provided by this deeper hierarchy resulted in improved classification performance. Median results (as done by Garnot & Landrieu (2020)) for 10 CSNs using this dataset are shown in the bottom half of Table 6. Changing the hierarchy changed how average cost was calculated, so values from the top and bottom halves of the table should not be compared. Within the bottom half, we note that CSNs outperformed MGP on both A.C. and classification accuracy. Furthermore, according to values generated in the extensive experiments conducted by Garnot & Landrieu (2020), CSNs outperformed numerous other baselines, including HXE and soft-labels (Bertinetto et al. (2020)), YOLO (Redmon et al. (2016)), and a hyperspherical prototype network (Mettes et al. (2019)), all of which were built upon a resnet18 pretrained on ImageNet. In fact, our CSNs achieve SOTA classification accuracy for any classifier built upon a resnet18 backbone, without data augmentation. Furthermore, the decrease in A.C. is especially surprising given that other techniques explicitly optimized for average cost reductions, while CSNs merely trained on classification at each level. Notably, the decrease in A.C. is not fully explained by the increase in accuracy, indicating that CSN not only exhibited higher accuracy but also, when it did make mistakes, those mistakes were less severe.
Lastly, we note that CSNs support a range of learned relationships other than fair or hierarchical. The varying values of ρ in Table 3 indicate that CSNs may learn different relationships when alignment loss is set to 0. However, in general, one could train models to learn desired relationships by penalizing or rewarding alignment relative to some intercept. We trained and evaluated such models in Appendix E and found that models indeed learned the desired alignment.
4.3 FAIR AND HIERARCHICAL CLASSIFICATION
Prior experiments demonstrated how CSNs could solve different classification problems separately; in this section, we applied a single CSN to a task that required it to use both fair and hierarchical classification. Intuitively, fairness was used to protect privacy, while hierarchical structure was used for better performance.
We used a dataset describing human motion in a bolt-placement task. The dataset was gathered from a setup similar to that of Lasota et al. (2014): motion was recorded at 50 Hz, using the 3D location of each of the 8 volunteer participants' gloved right hands as they reached towards one of 8 holes arranged
in a line to place a bolt in the hole. The bolt holes may be thought of hierarchically by dividing destinations into left vs. right (LR) groupings, in addition to the label of the specific hole.
Initial exploration of the dataset showed promising results for prototype-based classification: the target locations were identified with 81% accuracy, and further analysis showed that prototypes corresponded to human-like motions (details in Appendix B). Troublingly, however, a trained CSN could identify the participant with over 60% accuracy, which posed privacy concerns. Nevertheless, from a robotic safety perspective, it is important for robots to exploit as much information as possible to avoid collisions with humans.
Ultimately, we wished to predict which hole a participant was reaching towards given the past 1 second of their motion, while preserving privacy and exploiting hierarchical structure. Thus, we designed a CSN with three concept subspaces: one for predicting the bolt (8 prototypes), one for predicting the high-level grouping of a left or right destination (2 prototypes), and one for predicting the participant id (8 prototypes). We enforced that the bolt and LR subspaces were parallel while the bolt and participant subspaces were orthogonal.
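A minimal sketch of how these mixed alignment objectives might be assembled, reusing the `alignment` helper sketched in Section 3.2; the weights (+100 for orthogonality, −10 for parallelism) follow the hyperparameters reported in Appendix F:

```python
# A hedged sketch of the mixed alignment objective for the bolt task;
# Q_bolt, Q_lr, Q_pid are the prototype sets (or bases) of the three subspaces.
def bolt_alignment_loss(Q_bolt, Q_lr, Q_pid):
    loss = 100.0 * alignment(Q_bolt, Q_pid)   # bolt vs. participant: orthogonal
    loss += -10.0 * alignment(Q_bolt, Q_lr)   # bolt vs. left/right: parallel
    return loss
```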
Results from our experiments are presented in Table 7. Means and standard deviations over 10 trials for each row are reported. (A.C. was calculated only for prediction errors, due to the large differences in accuracy rates across models.) When training a CSN with no constraints on subspace alignment, we found a highly accurate but unfair predictor (81% accuracy for bolt location, but sub-optimal disparate impact and demographic disparity values). Switching the CSNs to be fair classifiers by only enforcing orthogonality between bolts and participants yielded a fair classifier (illustrated by ρ, D.I., and DD-0.5), but with much worse bolt prediction accuracy (44%). However, by using a hierarchical subspace for LR groupings, the final CSN both improved classification accuracy and decreased the average cost of errors, while maintaining desired fairness characteristics.
5 CONTRIBUTIONS
The primary contribution of this work is a new type of model, the Concept Subspace Network, that supports inter-concept relationships. The CSN design, motivated by prior art in interpretable neural network models, uses sets of prototypes to define concept subspaces in neural net latent spaces. The relationships between these subspaces may be controlled during training in order to guide desired model characteristics. Critically, we note that two popular classification problems, fair and hierarchical classification, are located at either end of a spectrum of concept relationships, allowing CSNs to solve each type of problem in a manner on par with techniques that had previously been designed to solve only one. Furthermore, a single CSN may exhibit multiple concept relationships, as demonstrated in a privacy-preserving hierarchical classification task.
While we have demonstrated the utility of CSNs within several domains, numerous extensions could improve their design. First, the idea of subspace alignment could be applied to non-Euclidean geometries like hyperbolic latent spaces that are sometimes used for hierarchical classification. Second, CSNs could additionally benefit from relaxation of some simplifying assumptions: notably, allowing for more complex relationships rather than those defined by subspace cosine similarity, or using adversarial approaches for distributional regularization rather than only supporting unit Gaussians.
Lastly, we note that CSNs, while designed with ethical applications such as fair classification in mind, may lead to undesired consequences. For example, malicious actors could enforce undesirable concept relationships, or simply observing emergent concept relationships within a CSN could reinforce undesirable correlations. In addition, although prototypes encourage interpretability, which we posit can be used for good, the reductive nature of prototypes may be problematic when classifying human-related data (e.g., the COMPAS fair classification task we avoided).
A SINGLE-CONCEPT CLASSIFICATION BASELINES
In addition to the specialized fair and hierarchical classification tasks, we tested CSNs on two standard classification tasks: identifying digits in the MNIST Digit dataset, and identifying one of 100 categories in the CIFAR100 dataset. There were no concept relationships present because this was a single classification task; instead, the tests established that using a CSN did not degrade classification performance relative to PCNs or other neural architectures.
On the Digit dataset, we trained a CSN using the same encoding and decoding layer architectures with 20 prototypes (two for each digit) and applied the same Gaussian distortions to training images as Li et al. (2018). Over five trials, training for 50 epochs with batches of size 250, we achieved the same mean classification accuracy as PCN (99.22%), demonstrating that the use of a CSN did not worsen classification accuracy (Li et al., 2018).
On the CIFAR100 dataset, we extended a resnet18 backbone (pretrained on ImageNet) as our encoder with 100 prototypes, 1 for each class, and trained 10 models for 60 epochs (He et al., 2016). We achieved a mean classification accuracy of 76%, the standard result for networks built upon a resnet18 framework (Hase et al., 2019). Thus, CSNs exhibited high performance in a challenging domain, matching the performance of standard networks, with the added benefit of interpretable prototypes.
B VISUALIZING DECODED PROTOTYPES
Because CSNs are built upon prototype-based classification, they are at least as interpretable as prior art, such as PCNs. In this section, we demonstrate how prototypes may be decoded to visualize their representations. These figures were generated by decoding prototypes from the models used throughout the paper.
Figure 4 shows the first 10 prototypes from the MNIST digit classifier in Appendix A. Unlike PCNs, CSNs benefit from an inductive bias that leads to an equal number of prototypes per class.
Figure 5 depicts the decoded prototypes when training a CSN to predict human reaching motions, as described in Section 3.4. This model was only trained on motion prediction, without using fair or hierarchical training terms. Interestingly, by over-parametrizing the number of prototypes (there were only 8 possible destinations, but twice as many prototypes), the model learned different forms of trajectories that reached towards the same destination: short movements near the targets, and longer loops when reaching from farther away.
C VISUALIZING LEARNED LATENT SPACES
In addition to decoding prototypes, we visualized the latent spaces of trained classifiers. For the purposes of visualization, we trained new models from scratch, using only 2D latent spaces. Encodings and prototypes for both fair classification tasks, as well as the digit and fashion hierarchical classification tasks, are shown in Figure 6.
In all diagrams, encodings of test inputs are denoted by small colored dots. All classification tasks used 2 sets of prototypes: we depicted one set of prototypes as large black dots, and the other as X’s. The arrangement of the prototypes in the latent spaces confirms that CSNs have learned the right concept alignment.
Specifically, for the fair classification tasks, the X’s form a line segment that is orthogonal to the line formed by the black dots. This orthogonality leads to fairness, as discussed in our paper.
In the hierarchical domains, we similarly observed that CSNs had learned the “right” latent structure. In these domains, the black dots denoted prototypes for high-level classification (such as even vs. odd). We observed that the lower-level prototypes (e.g., for digits), denoted by X’s, were clustered around the high-level prototypes.
As a whole, these visualizations confirm that CSNs learn the desired latent structure, all controlled by changing the alignment loss weight.
D CALCULATING LEARNED CAUSAL RELATIONSHIPS
In Sections 4.1 and 4.3, we report a metric, ρ, to denote the learned causal relationship between concepts. Here, we explain how we calculate ρ in greater detail.
Intuitively, ρ corresponds to the mean change in belief for one classification task divided by changes in belief for another classification task. For example, in fair classification, prediction of a person’s credit risk should not change based on changes in belief over the person’s age; this notion corresponds to ρ = 0.
We calculate ρ in CSNs using a technique inspired by Tucker et al. (2021). In that work, the authors studied whether a language model’s output changed when the model’s internal representation was changed according to syntactic principles. In our work, we change a latent representation, z, by taking the gradient of one classification task’s loss (given true label y∗) with respect to z, creating a new z′ by moving z along that gradient, and then calculating the new classification likelihoods using z′.
For simplicity, we limit our analysis to CSNs with two concept subspaces. We denote the encoding of an input, x, as z = eθ(x). Prediction for each of the two tasks may be denoted as prediction functions, predi, for i ∈ [0, 1] indicating the two tasks; the prediction function corresponds to projecting z into the relevant subspace and calculating distances to prototypes, as discussed earlier. Using this notation, we define ρ formally:
$$
\begin{aligned}
z &= e_\theta(x) && (5) \\
y_0 &= \mathrm{pred}_0(z) && (6) \\
y_1 &= \mathrm{pred}_1(z) && (7) \\
z' &= z + \nabla \mathrm{loss}(y_0, y_0^*) && (8) \\
y_0' &= \mathrm{pred}_0(z') && (9) \\
y_1' &= \mathrm{pred}_1(z') && (10) \\
\rho &= \frac{y_0 - y_0'}{y_1 - y_1'} && (11)
\end{aligned}
$$
Thus, ρ captures how the model’s change in belief about one attribute affects its change in belief over another attribute; in other words, it measures the learned causal relationship between the prediction tasks.
E CORRELATED-CONCEPT CLASSIFICATION BASELINES
A single CSN may be used to perform multiple classification tasks simultaneously without explicitly guiding concept relationships. In the Adult and German fair classification domains, CSNs predicted both s, the protected field, and y, the desired final prediction, while we explicitly guided the learned concept relationships to enforce fairness. In a separate set of experiments conducted on the same datasets, we demonstrated how CSNs can learn more complex concept relationships.
In these experiments, we trained CSNs with two subspaces, each with two prototypes, and set both the KL and alignment losses to zero. The CSNs were trained to predict both s and y, using the two subspaces. We recorded prediction accuracy of y and ρ, the learned causal correlation between s and y.
Over 10 trials, for the German and Adult datasets, CSNs achieved mean y classification accuracies of 85% and 74%, on par with prior art on these datasets when not enforcing fairness (Xie et al., 2017). We also found non-zero ρ: for the German dataset, we found a value of 0.20; for the Adult dataset, a mean value of 0.23. An example latent space from a CSN trained on the German dataset in this manner is shown in Figure 7, using the same visualization mechanism as introduced in Appendix C. In this example, the model learned a non-zero correlation between prototypes for credit (Xs) and
for an applicant’s age (circles). This type of learned correlation is undesirable in fair classification domains but may be useful in other scenarios.
As a demonstration of useful learned correlations, we implemented a CSN in a classification task using synthetic data. Consider a simplified weather prediction task in which, given noisy observations of temperature and precipitation, a weather station must classify the day as hot or cold and rainy or sunny. In the artificial world, in the last year of weather data, half of the days are rainy and half are sunny; all rainy days are cold and all sunny days are hot. Cold days have a true temperature drawn uniformly between 0.0 and 0.2, and warm days have a true temperature drawn uniformly between 0.8 and 1.0. Similarly, sunny days have a precipitation value drawn uniformly between 0.0 and 0.2, and rainy days have precipitation values drawn uniformly between 0.8 and 1.0. Observations of temperature and precipitation are corrupted by zero-mean Gaussian noise with σ = 0.05. Given these noisy observations, a model’s task is to predict binary labels for whether the day is hot or cold and rainy or sunny.
Numerous causal paths could explain observational data recorded from this environment in which hot days are sunny and cold days are rainy: rain could cause cold weather, some latent factor like atmospheric pressure could affect both precipitation and temperature, etc. Trained simply from observational data, models are unable to learn the right causal relationship between these variables.
Unlike traditional neural networks, however, CSNs allow humans to encode desirable causal relationships. We designed a CSN with two concept subspaces (for temperature and precipitation), each with two prototypes. We then penalized (a(Q_rs, Q_hc) − √ρ*)²; that is, we set an intercept of √ρ* for the alignment loss between the two subspaces for rainy and sunny (rs) and hot and cold (hc). (We used √ρ* as the notation for setting desired alignment for reasons that will become apparent in the next paragraph.)
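As an illustration only: the alignment function a(·, ·) is not restated here, so we assume it is the absolute cosine between unit basis vectors of the two subspaces; the basis vectors below are hypothetical.

    import numpy as np

    def alignment(q1, q2):
        # assumed form: mean absolute cosine between unit basis vectors of the subspaces
        return np.abs(q1 @ q2.T).mean()

    q_rs = np.array([[1.0, 0.0]])                    # rainy/sunny subspace basis
    q_hc = np.array([[np.sqrt(0.5), np.sqrt(0.5)]])  # hot/cold basis, ~45 degrees away
    rho_star = 0.5
    align_loss = (alignment(q_rs, q_hc) - np.sqrt(rho_star)) ** 2  # ~0 at 45 degrees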
In our experiments, we sought to identify whether CSNs could learn the desired causal relationship between temperature and precipitation. We did so by setting √ρ* to some value, training CSNs using standard losses, and then measuring whether the ρ metric we calculated from the trained CSNs matched ρ*. We trained 10 CSNs with latent dimension 2 with ρ* = 0.5, i.e. √ρ* ≈ 0.7, which corresponds to a cosine of about 0.7, or roughly 45 degrees. This is intuitively interpreted as meaning that for every percentage-point increase in the likelihood of the weather being sunny, the likelihood of it being hot should increase by 0.7 percentage points.
As desired, the trained CSNs had a mean ρ value of 0.72 (standard deviation 0.14). An example latent space from one such CSN is shown in Figure 7: the two subspaces are arranged at roughly 45 degrees. Moving an embedding of a cold and rainy day to increase the likelihood of it being sunny by 1% increases the predicted likelihood of it being hot by 0.7%. This demonstrates that we were able to train CSNs to learn the causal relationship we wished for. Lastly, we note that we repeated these experiments with other values of √ρ* and found similar results; if we did not include the alignment training loss, CSNs learned arbitrary concept relationships.
F CSN IMPLEMENTATIONS
In this section, we include details necessary for replication of experimental results that did not fit in the main paper. (Anonymized code is available at https://anonymous.4open.science/r/csn-8470/.)
In all experiments, we used random seeds ranging from 0 to the number of trials used for that experiment. Although CSNs support several prototypes per class (e.g., 2 prototypes for the digit 0, 2 prototypes for the digit 1, etc.), unless otherwise noted, we used an equal number of prototypes and classes.
In the German fairness experiments, we trained for 30 epochs, with batch size 32, with classification loss weights of 1, alignment loss of 100 between the two subspaces, and KL losses of 0.5. In the Adult fairness experiments, we trained for 20 epochs, with batch size 128, with classification loss weights of [1, 0.1] for y and s, respectively, alignment loss of 100 between the two subspaces, and KL losses of 0.1. For both fair classification tasks, the CSN encoders comprised 2 fully-connected layers with ReLU activations and hidden dimension 128, outputting into a 32-dimensional latent space, and the decoder comprised 2, 128-dimensional ReLU layers followed by a sigmoid layer. The whole model was trained with an Adam optimizer with default parameters.
For the digit and fashion hierarchical classification tasks, CSNs were trained for 20 epochs with batch size 128. CSNs for both datasets used identical architectures to the networks created for the fair classification tasks.
For the CIFAR100 hierarchical classification task, we built upon a ResNet18 backbone, as done by Garnot & Landrieu (2020). The encoder consisted of a ResNet encoder, pre-trained on ImageNet, followed by two fully-connected layers with hidden dimension 4096, feeding into a latent space of dimension 100. The decoder (and decoding training loss) was removed in this domain to reduce training time. The network was trained using an SGD optimizer with learning rate 0.001 and momentum 0.9, with batch size 32. Training terminated after 60 epochs or due to early stopping, with a patience of 10 epochs. Classification loss weights were set to [1, 5] for classifying the high- and low-level categories, respectively. KL losses were set to 0; alignment loss was set to −10 to encourage parallel subspaces.
For the fair and hierarchical classification task with bolts, CSNs used the same architecture as for the fair classification task. Models were trained for 50 epochs, with batch size 256. All classification loss weights were set to 1; alignment loss between bolts and LR was set to −10 and between bolts and participant was set to 100. KL losses were set to 2.
G FAIR CLASSIFICATION BASELINES
In Section 4.1, we compared CSNs to several fair classification baselines on the standard German and Adult datasets. Although the datasets are standard in the literature, there are a wide variety of fairness metrics, only a subset of which each method has published. Therefore, we implemented each fair classification baseline and recorded all metrics of interest for each method. In this section, we demonstrated the soundness of our implementations by comparing to published metrics. Implementations of our baselines are available here: https://anonymous.4open.science/status/fairbaselines-44A7.
Tables 8 and 9 report the recorded metrics for each fairness technique. Values reported in prior literature are included in the table using the technique’s name (e.g., the second row of Table 8, labeled ‘Adv.’ includes the values reported by Xie et al. (2017)). Values that we measured using our implementations of each technique are marked with asterisks. We report means and standard errors for our implementations for each method and compare to the metrics that prior methods published for each dataset.
The bottom halves of Tables 8 and 9 are separated from the top halves to indicate modified datasets. The Wass. DB baseline used the German and Adult datasets but treated protected fields differently (e.g., by creating binary age labels at the cutoff age of 30 instead of 25, as all other techniques did). We therefore evaluated CSNs and our own implementations of Wass. DB on these different datasets as well and reported them below the horizontal line.
Interestingly, while we were able to recreate the Wass. DB results on these modified datasets, the technique, when applied to the standard datasets, demonstrated better fairness than most techniques but worse y Acc. (We repeated the hyperparameter sweeps reported by Jiang et al. (2020) and used the best results.) We attribute the low y Acc. to the fact that Jiang et al. (2020) call for a linear model as opposed to the deeper neural nets used by other approaches. When using the datasets suggested by Wass. DB, we reproduced their published results, as did CSNs trained on the same datasets. We note, however, that predictors on this dataset are of limited use, as both Wass. DB and CSNs fail to outperform random classification accuracy.
As a whole, Tables 8 and 9 give us confidence in our implementations of the fairness baselines. Our implementations were able to match or exceed metrics reported from prior art. This suggests that our underlying implementations were correct and that new metrics we gathered on them were valid. | 1. What is the main contribution of the paper regarding prototype-based classification?
2. How does the proposed approach support class hierarchies and fairness?
3. What are the strengths and weaknesses of the paper, particularly in terms of its connection to various concepts and experimental material?
4. How does the paper's proposed notion of orthogonality relate to fairness, privacy, and causality?
5. What are some potential improvements for the paper regarding its focus, related work, experiment size, and hyperparameter selection?
6. Could you provide further explanation or clarification regarding the effect of projection into the plane spanned by prototypes on classification?
7. Why did the authors choose to use the parity hierarchy as ground truth for MNIST instead of a hierarchy based on visual similarity? | Summary Of The Paper
Review | Summary Of The Paper
The present paper proposes a novel architecture for prototype-based classification to support class hierarchies and fairness. In particular, hierarchies are supported by training the model for multiple classification problems jointly, each in its own subspace of the feature space, spanned by the respective prototypes. For fairness, the paper proposes to make the subspace for the classification between subgroups orthogonal to all other subspaces, such that any change in subgroup membership does not influence any other classification. In a series of experiments, the paper evaluates hierarchical classification and fairness separately as well as jointly and demonstrates equal or superior results to a state-of-the-art approach from the literature.
Review
The paper's main strengths are:
I found it particularly elegant to phrase both hierarchical classification and fairness in the same language, namely that of classification subspaces which are spanned by prototypes.
The paper connects to a wide range of concepts, namely hierarchical classification, interpretability, and fairness, such that it is of potential interest to a wide range of researchers.
The paper reports a wide range of experiments; so wide, indeed, that much experimental material had to be pushed to the appendix. I particularly appreciate the analysis of the hierarchies discovered by the prototype network and the comparison to the ground-truth hierarchy via edit distance.
The paper is clearly written. I, for one, had no problem following along and would feel well equipped to reproduce the reported results.
The paper's main weaknesses are:
The wide range of concepts discussed results in a certain lack of focus. Fairness, privacy, and causality are all mentioned but only discussed superficially. For fairness, this is particularly dangerous as readers may be misled to believe that the proposed notion of orthogonality is sufficient for fairness. However, fairness has many meanings (as the paper acknowledges in the appendix) and only some of them are related to the proposed notion of orthogonality. Therefore, I would advise to revise references to fairness, privacy, and causality and to mention explicitly that only a narrow notion of these terms is implemented by the proposed model.
The related work fails to mention the historic roots of the prototype concept. I understand that many recent works in prototype networks make the same mistake but I would still advise to not continue it. Prototype-based classification has - to my knowledge - been pioneered by Kohonen in the late 1980s/early 1990s with his work on Learning Vector Quantization (refer to the review by Nova and Estevez, 2014; doi: 10.1007/s00521-013-1535-3) and has since been extended in many directions, such as metric learning (Schneider et al., 2009, doi: 10.1162/neco.2009.11-08-908), or probabilistic classification (Seo and Obermayer, 2003, doi: 10.1162/089976603321891819). The latter extension should be of particular interest because the classification scheme is very similar to the one proposed in this paper.
While the paper reports many different experiments, any single one seems relatively small with few data sets and (for hierarchical classification) few baselines. Further, I could not find information on the hyperparameter selection (e.g. how many prototypes and how strong the regularization strengths lambda were).
Overall, my recommendation is to accept this paper. While the paper could be more focused, make its own contribution and limitations more clearly, and experiments could be extended, I still believe that most flaws could be addressed with minor adjustments and that the core contribution of the paper is interesting enough to a wide range of scholars that publication is warranted.
Nonetheless, I would appreciate if the authors could help me to deepen my understanding of the work by responding to two questions:
I am not fully convinced that the projection into the plane spanned by the prototypes of a classification problem has any effect on the classification itself. If I understand correctly, the paper uses a softmax on the squared distances to all prototypes for classification (which is entirely reasonable). Now, let D² be the squared distance between a point z and a prototype p, let d² be the squared distance between the projected point z̃ and the same prototype p, and let h² be the squared distance between z and z̃. Since the distances form a right-angle triangle, we obtain d² = D² − h². This holds for any prototype in the same classification problem. Accordingly, all projected distances within one classification problem are merely the original distance minus a constant offset. This constant offset gets removed by softmax, anyways. So I would assume that the softmax probabilities are the same - no matter whether a point is projected or not.
Why was the parity hierarchy used as ground truth for MNIST? Garnot et al. use a hierarchy based on visual similarity of the digits (e.g. 3 and 8). Wouldn't that be more natural? |
ICLR | Title
Frequency Decomposition in Neural Processes
Abstract
Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and so far it has been unclear how these representations are learned and what kinds of functions can be represented. We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components, similar to a Fourier transform. In this context, we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce, depending on their representation size. This bound is confirmed empirically. Finally, we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others, which makes them programmable band-pass or band-stop filters.
1 INTRODUCTION
Neural Processes (Garnelo et al., 2018a;b) are a class of models that can learn a distribution over functions, or more generally a function space. In contrast to many other approaches that do the same, for example Bayesian Neural Networks, Neural Processes learn an explicit representation of such a function space, which allows them to condition their predictions on an arbitrary number of observations that are only available at test time. This representation is finite-dimensional, while function spaces are infinite-dimensional, and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so.
Our work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space, and in the process describes constraints and conditions that decide what kinds of function spaces can be represented. We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective, which allows us to derive a theoretical upper bound on the frequencies, i.e. Fourier components, of functions that can be represented. We subsequently confirm this bound empirically, which suggests that the learned representations should contain a notion of frequency. To further investigate this hypothesis, we continue with a visualization of the learned representations, which reveals that Neural Processes can decompose a function space into different frequency components, essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour. As further evidence of this we train Neural Processes to represent only certain frequencies, which results in them suppressing those frequencies that were not observed in the training data. Our contributions can be summarized as follows (the complete source code to reproduce our experiments is available at https://github.com/***):
• We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct. As we show, the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval.
• We investigate learned representations qualitatively, presenting evidence that Neural Processes perform a frequency decomposition of the function space, akin to a Fourier transform. This behaviour is not incentivized externally but rather emerges naturally.
• We show that by choosing the training distribution appropriately, Neural Processes can be made to represent certain frequencies and suppress others, which turns them into programmable band-pass or band-stop filters.
2 BACKGROUND
Neural Processes (Garnelo et al., 2018a;b) are maps P : C, X → Y, where C is a set of tuples {(x_c, f(x_c))}_{c=1}^{N} =: (x_c, f(x_c)) (we use boldface as a shorthand for sets, not vectors) with arbitrary but positive cardinality N, and f ∈ F : X → Y. C is often called the context, because Neural Processes perform predictions for values x_t ∈ X (t for target), conditioned on these points. F is the function space we would like to find a representation of. Note that some sources define function spaces as any set of functions with a shared domain and co-domain, while others require them to be vector spaces as well. We don't concern ourselves with this distinction and further restrict our work to X = Y = R, because it allows us to visualize learned representations. We only look at the original Neural Processes, namely the deterministic Conditional Neural Processes (CNP) (Garnelo et al., 2018a) and the variational Neural Processes (NP) (Garnelo et al., 2018b), because newer contributions in the field work in ways that preclude them from being analyzed in the same way. We discuss this further in Section 5. In CNPs and NPs, the map P is separated into two parts, a so-called encoding E : C → Z and a decoding or generating part G : Z, X → Y. Z is referred to as the representation or latent space. To allow Neural Processes to approximate arbitrary function spaces F (this will depend on the implementation of E and G; for neural networks, F is practically restricted to continuous and differentiable functions), E and G are typically chosen to be powerful approximators, specifically neural networks, as the name suggests.
The defining characteristic of CNPs and NPs is that E encodes individual pairs (x, f(x)) from the context separately, and the resulting representations are averaged to form a global representation, meaning one that is independent of the target points xt at which we then evaluate the Neural Process. This is often not the case in later work, for example in Attentive Neural Processes (Kim et al., 2019), where the individual representations are instead aggregated using an attention mechanism that depends on xt. In CNPs the representations are deterministic, while in NPs they parametrize mean and (log-)variance of a Gaussian distribution, so the latter are trained using variational inference. For details on implementation and training we refer to Appendix A.1. Our work will investigate how these global representations, which are finite-dimensional, represent infinite-dimensional function spaces.
As stated above, E, and by extension the Neural Process P, acts on set-valued inputs. This is contrary to the vast majority of machine learning work where inputs are vectors of fixed dimension and ordering. Recall that sets are permutation invariant, so we must ensure that the same is true for the output of E. It is easy to see that this is given when we average individual encodings, but Zaheer et al. (2017) show that it is in fact the only way to ensure it: E is permutation-invariant if and only if it has a so-called sum-decomposition, i.e. it can be represented in the form
E(x) = ρ( Σ_{i=1}^{N} φ(x_i) )   (1)
where ρ, φ are appropriately chosen functions. Wagstaff et al. (2019) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N , the dimension of the image Z must at least be N . This will become relevant in the following section.
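A minimal numpy sketch of an encoder in this sum-decomposition form (toy φ and ρ; the CNPs analyzed here implement both as MLPs, see Appendix A.1):

    import numpy as np

    def phi(x, y):                    # encode a single (x, f(x)) tuple
        return np.tanh(np.array([x, y, x * y]))

    def rho(s):                       # map the pooled encoding to the representation
        return np.tanh(s)

    def encode(context):              # permutation-invariant by construction
        return rho(np.sum([phi(x, y) for x, y in context], axis=0))

    print(encode([(0.1, 0.5), (-0.3, 0.2)]))
    print(encode([(-0.3, 0.2), (0.1, 0.5)]))  # identical: order does not matter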
3 AN UPPER BOUND ON SIGNAL FREQUENCIES
We mentioned in the previous section that the encoder E in a Neural Process should have a sumdecomposition, so that the global representations are permutation-invariant, as shown in Zaheer et al. (2017). Expanding on this, Wagstaff et al. (2019) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N . What these works do not consider are the implications for situations where the elements of
the sets are input-output tuples of some function f, as is typically the case in Neural Processes. We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process. In order to do this, we must first define what it means to successfully learn a representation of a function space. Definition 3.1 (Representation of Function Spaces in Neural Processes). We say that a Neural Process P has learned a representation of a function space F, defined on an interval [a, b] ⊂ R, if, for some error tolerance ε, it holds for all x ∈ [a, b] and for all f ∈ F, represented as a suitable set of discrete measurements (x_f, f(x_f)), that |P((x_f, f(x_f)), x) − f(x)| < ε.
That means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance. The choice of this tolerance is essentially arbitrary, but should reflect that for g ∉ F the reconstructions should generally not be accurate within ε. We also write that f is represented as a suitable set of discrete measurements, by which we mean that it must be possible to reconstruct f from those measurements.
Switching to signal-processing terminology, we know that to represent a continuous signal as a set of discrete measurements, we need to sample it at points with a distance of at most τ = 1/(2νmax), where νmax is the maximum frequency component of the signal. This is most commonly known as the Nyquist-Shannon sampling theorem (Whittaker, 1915; Kotelnikov, 1933; Shannon, 1949). For any finite real interval [a, b], this translates to a number of sampling points N > 2|b − a|νmax. The latter allows us to make a connection to the findings by Wagstaff et al. (2019), so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct. Theorem 3.1 (Maximum Frequency in Neural Process Representations). A Neural Process P with latent dimension Dr can only learn a representation of some function space F defined on a finite interval [a, b] ⊂ R if for all f ∈ F with a maximum frequency content νmax,f it holds that:
ν_max,f < D_r / (2 |b − a|)   (2)
Note that this means we should in theory be able to represent any function space that obeys Eq. (2) to within arbitrarily small ε. In practice, we will typically have less control over F, and we only find approximate representations. Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq. (2). It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points. During training, we work with randomly sampled inputs, but at test time equidistant points are used, as we outline in Appendix A.2.
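Eq. (2) can be read either as a cap on frequency for a fixed interval or as a cap on the interval length for a fixed frequency; a small helper (our own, in Python) makes both readings explicit and reproduces the sanity-check numbers used in Section 4.1:

    import numpy as np

    def max_frequency(d_r, interval_length):
        return d_r / (2.0 * interval_length)   # bound on nu_max from Eq. (2)

    def max_interval(d_r, nu_max):
        return d_r / (2.0 * nu_max)            # equivalent bound on |b - a|

    # Fourier series with maximum angular frequency K = 19, i.e. nu_max = 19 / (2*pi):
    print(max_interval(32, 19 / (2 * np.pi)))  # ~5.29 for D_r = 32
    print(max_interval(16, 19 / (2 * np.pi)))  # ~2.65 for D_r = 16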
4 EXPERIMENTS & RESULTS
4.1 VALIDATION OF THE FREQUENCY BOUND
Our experiments are grouped into three parts. The first experiment seeks to test the validity of the bound we just derived in Eq. (2). In particular, we train Neural Processes with varying representation sizes on two exemplary function spaces, so that for some models the representation size is insufficient to represent all frequencies. The function spaces we base our experiments on are those defined by Gaussian Process priors (for an introduction see for example Rasmussen & Williams (2006)) with an exponentiated-quadratic (EQ) kernel with lengthscale parameter l, as well as those defined by random real-valued Fourier series—for details we refer to Appendix A.2. While the Gaussian Process samples have an average Fourier magnitude that smoothly decays to zero, the distribution of Fourier magnitudes is uniform for the Fourier series, as shown in Fig. A.1. The Fourier series space also grants us precise control over the frequency domain, which will be useful in subsequent experiments.
Figure 1 shows example reconstructions in a deterministic Neural Process (CNP) for samples from a Gaussian Process prior with EQ kernel (l = 0.05) and from a random Fourier series. For the GP example, the CNP essentially acts like a low-pass filter when the representation size is insufficient, which qualitatively confirms the bound we derived in Eq. (2). Interestingly, the bound can also be observed in a different way: for the Fourier series example, the CNP hardly suppresses high
frequencies, but instead limits the effective interval of the signal, simply ignoring the outer regions of it. Both behaviours are in agreement with the bound in Eq. (2). The Fourier example also serves as a good sanity check: with K = 19 (the maximum angular frequency) the data has a maximum frequency of νmax = K/(2π) = 3.02. For Dr = 32 this would limit the size of the interval to |b − a| < 5.29, for Dr = 16 to |b − a| < 2.65. The reconstructed signal regions in Fig. 1 are a bit narrower, and thus in good agreement with the bound we derived. For a variational Neural Process, we observe the same behaviour, but with stronger dampening of high frequencies in both cases, as seen in Fig. A.2. In Fig. A.3 we show the average reconstruction error for CNPs and NPs of different representation sizes, applied to GP examples with varying lengthscale, which results in a smooth decrease in error for larger representations and larger lengthscale parameters, as one would expect.
4.2 HOW DO NEURAL PROCESSES REPRESENT FUNCTION SPACES?
Having found that Neural Processes do indeed observe the bound we derived in Eq. (2), we seek to understand how this happens. To this end, we visualize the learned representations in Neural Processes, which is possible because we restrict ourselves to X = Y = R. Again looking at the two function spaces from the previous experiment, we sample pairs (x, y) on a regular grid (50 × 50) with x ∈ [−3, 3], which is our training input range, and also y ∈ [−3, 3] as it suitably covers the value range of outputs. We then encode each pair individually to a representation, thus constructing a map ri(x, y) for each representation channel. The latter allows us to uncover potential patterns and to gain a better understanding of how Neural Processes learn representations of function spaces.
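In code, the channel maps could be assembled as follows; encode_pair stands in for the trained encoder E applied to a single (x, y) tuple, and we substitute a random toy encoder here only so the sketch runs:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 16))
    encode_pair = lambda x, y: np.tanh(np.array([x, y]) @ W)  # toy stand-in for E

    xs = np.linspace(-3, 3, 50)
    ys = np.linspace(-3, 3, 50)
    maps = np.array([[encode_pair(x, y) for x in xs] for y in ys])
    # maps[:, :, i] is the channel map r_i(x, y) that gets visualized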
Figure 2 presents example representation channels for CNPs and NPs, trained on samples from a Gaussian Process with an EQ-kernel (l = 0.2) and on random Fourier series. The individual channels were selected to illustrate the general patterns of behaviour we observed. First, we find that representations are almost always anti-symmetric across y = 0. This is not surprising, as the function spaces we look at are on average symmetric—in the sense that f and −f will occur with the same probability—so the Neural Process learns the same representation, just with a different sign. More importantly, we find that both NPs and CNPs implicitly form a representation of the input space (i.e. the relevant interval of the function space domain), in the sense that different regions of the input space map to different representation channels. In CNPs this results in an oscillating pattern, with different channels exhibiting different frequencies. In other words, the CNP performs a frequency decomposition of the function space, not unlike a Fourier transform. At the
same time, there is nothing that would enforce orthogonality between the different representation dimensions, and the Fourier series example highlights that we can generally expect a mixture of multiple frequencies for a given dimension. It should be noted that this frequency decomposition emerges naturally and is not incentivized externally (e.g. by a special loss).
Even though NPs behaved very similarly to CNPs in the previous section, their learned representations look vastly different from those in a CNP. Instead of a frequency decomposition, they seem to partition the input space, so that a given representation dimension is written to by a specific, narrow region of the input space. Only for channels with a low average magnitude (i.e. a large index in Fig. 2) do we find behaviour similar to CNPs. We conclude that NPs can in principle learn a frequency decomposition, but their variational formulation—the only difference to CNPs— disincentivizes it. We show more representations for CNPs and NPs trained on GP data in Fig. A.4 and Fig. A.5, and for CNPs and NPs trained on Fourier series data in Fig. A.6 and Fig. A.7, sorting channels by their average magnitude.
4.3 NEURAL PROCESSES AS BAND FILTERS
Our final experiment is designed to show that we can exert more control over the learned representations, and it will serve as additional evidence that deterministic Neural Processes (CNP) perform a frequency decomposition of the function space they represent. At the same time, it suggests a possible practical application of Neural Processes. We saw in Section 4.1 that CNPs sometimes act like low-pass filters, which could be a useful application, but the emergence of that behaviour is not reliable. We now train CNPs with a sufficiently large representation size (Dr = 128) to be band-pass and band-stop filters. To this end, we train the models on the Fourier series defined by Eq. (11), but for the band-stop we set all components a_k to zero for which 5 ≤ k ≤ 14, and likewise set all a_k to zero outside of that range for the band-pass. We then look at the reconstructions of examples from the original series with all components present. For more details on the training procedure and how we sample points for function evaluation, please see Appendix A.1 and Appendix A.2.
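A sketch of how the three training distributions could be sampled from Eq. (11) (numpy; the random seed and evaluation grid are our choices):

    import numpy as np

    rng = np.random.default_rng(0)
    K = 19
    a = rng.uniform(-1, 1, K + 1)                   # a_0 ... a_K (reference data)
    phases = rng.uniform(-1, 1, K + 1)
    band = (np.arange(K + 1) >= 5) & (np.arange(K + 1) <= 14)

    a_stop = np.where(band, 0.0, a)                 # band-stop training data
    a_pass = np.where(band, a, 0.0)                 # band-pass training data

    def series(coeffs, x):
        ks = np.arange(1, K + 1)[:, None]
        terms = coeffs[1:, None] * np.cos(ks * x - phases[1:, None])
        return coeffs[0] + terms.sum(axis=0)

    x = np.linspace(-3, 3, 200)
    f_stop = series(a_stop, x)                      # one band-stop training example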
The average Fourier magnitude of the training functions for the different models is given by the bottom left panel in Fig. 3. In the first model (Reference), all components are allowed; in the second (Band-stop), components in the middle of that range are suppressed; in the third (Band-pass) only components in the middle of the range are allowed. We then apply these models to examples from the reference data distribution, the result of which can be seen in the bottom-right panel of Fig. 3. The models that are only shown certain frequencies during training will suppress those frequencies that were not found in the training data, meaning they effectively become programmable band-stop or band-pass filters. This is confirmed by the example in the top rows of the figure, where we show
both the signal and its Fourier transform magnitude. Note that one needs to adjust the value range of the reference data before passing them through the band filters to prevent gain in the non-suppressed frequency regions. We give more details in Appendix A.2.
Unfortunately, we were only partly able to elicit the same behaviour in variational NPs. While the trained band-stop filter worked exactly like the CNP band-stop, we were not able to train a band-pass filter. The models collapsed during training, meaning the loss plateaued and no meaningful representations were learned. There is no obvious reason why a band-pass shouldn't work when a band-stop does, so we suspect our hyperparameter configuration was not ideal and that with more tuning it would be possible to train a band-pass as well. The NP results are shown in Fig. A.8.
5 RELATED WORK
Neural Processes broadly relate to the topic of learning distributions of functions, even though we speak of the less restrictive term function space in our work. In this context, Bayesian Neural Networks (see for example Neal (1996); Graves (2011); Hernández-Lobato & Adams (2015)) are a popular choice, which place distributions on the weights of a network. However, in doing so they only implicitly represent distributions over functions, while Neural Processes learn an explicit finite-dimensional representation that can be leveraged for predictions, so as to condition on context observations given at test time. Perhaps the most well known class of methods that do the same are Gaussian Processes (for an introduction see Rasmussen & Williams (2006)). These are stochastic processes represented by a joint Gaussian distribution over context and target points, defined via the covariance matrix by a kernel. All flexibility of Gaussian Processes to represent different distributions of functions is decided by this kernel, so many works try to learn it (Yang et al., 2015; Wilson et al., 2016b;a; Tossou et al., 2019; Calandra et al., 2016). Even though Neural Processes were originally motivated by Gaussian Processes, they can be understood as orthogonal methods: Gaussian Processes represent a function space using a (potentially learned) kernel, while Neural Processes represent them in a learned finite-dimensional space.
Neural Processes can also be interpreted from the perspective of deep learning on sets, the earliest work in the field being Zaheer et al. (2017). More theoretical contributions were made by Wagstaff et al. (2019), whose work we use to underpin our finding that the representation size in Neural Processes limits the maximum frequency of signals that can be represented. More applied work in the set-learning context has mostly been performed on point-cloud data (Qi et al., 2017b;a; Wu et al., 2019), which can be interpreted as a higher-dimensional instance of learning function spaces. Validating our findings in higher-dimensional spaces is an important direction for future work.
Neural Processes have inspired a number of follow-up works. Perhaps the most well known addition are Attentive Neural Processes (Kim et al., 2019), which replace the averaging of individual representations with a learned attention mechanism (Vaswani et al., 2017). The aggregate representations are thus no longer independent of the target inputs, and no global representation is learned. This holds true for most follow-up work. Convolutional Conditional Neural Processes (Gordon et al., 2020) propose to no longer learn a finite-dimensional representation at all and instead work in function space by applying a CNN on suitable and variable discretizations of a kernel density estimate. Similar to ANP, Louizos et al. (2019) propose to not merge observations into a global latent space, but instead learn conditional relationships between them. Singh et al. (2019) and Willi et al. (2019) address the problem of overlapping and changing dynamics in time series data. Relating this to our work, it would be possible to test how the original Neural Processes would represent functions where the average frequency content is not constant over the domain. We leave this investigation for future work. Neural Processes have also been extended to scenarios where the function space maps to entire images, in the form of Generative Query Networks (GQN) (Eslami et al., 2018; Kumar et al., 2018). Employing vastly more powerful decoders, they can (re-)construct unseen views in 3D scenes, which relates Neural Processes to the field of 3D scene understanding, an area that has received a lot of attention more recently (Sitzmann et al., 2019; Engelcke et al., 2020; Mildenhall et al., 2020). Sitzmann et al. (2020) show that periodic activation functions make it easier for networks to learn so-called implicit representations—mappings from coordinates to a density, RGB values, etc.. We did in fact try periodic activation functions in our experiments, but found no difference to using tanh-activations. In the same context, Tancik et al. (2020) show that coordinates in Fourier space are often superior to coordinates in signal space to produce fine detail. We interpret this as an indication
that a representation in frequency space is more efficient for many signals, which could explain why Neural Processes implicitly perform a frequency decomposition. Note that the above introduces Fourier features explicitly as a form of inductive bias, while Neural Processes automatically learn this form of representation.
It is well known that neural networks, specifically a MLP with at least one hidden layer, can learn the Fourier transform of an input signal (Gallant & White, 1988). In fact, there have been a multitude of works that exploit this ability in one way or the other, leading to the term Fourier Neural Networks. We refer to the recent review by Zhumekenov et al. (2019) for a comprehensive overview. The difference to Neural Processes is that these works typically apply networks directly to a sequence of points, while NPs learn a mapping that is only applied to individual (x,y) pairs, the representations of which are averaged. We emphasize again that the frequency decomposition occurs naturally in NPs, while these works usually employ direct supervision.
6 DISCUSSION
The goal of this work was to gain a better understanding of the mechanisms that allow Neural Processes to form finite-dimensional representations of infinite-dimensional function spaces. To the best of our knowledge, ours is the first work to investigate this question, and our findings are both surprising and meaningful in this context. We first derived a theoretical upper bound on the frequency of signals that can be represented in Neural Processes with a given representation size. We empirically confirmed that the representation size does indeed pose such a limit and that this can result in Neural Processes acting like low-pass filters. Alternatively, models ignore parts of the signal to keep higher frequencies. Both behaviours are in agreement with the derived bound. We then visualized learned representations to understand how the models incorporate the concept of frequency into them. In all cases the models formed an implicit representation of the input space, in the sense that different x-values are mapped to different representation channels. For CNPs, an oscillating pattern emerges, such that different representation channels correspond to different frequencies, from which we concluded that CNPs perform a frequency decomposition of the function space they learn to represent. It should be noted that this behaviour emerges naturally and is not explicitly encouraged (e.g. by a special loss). In contrast to this, NPs tend to partition the space into more or less disjoint regions. They are still able to learn a frequency decomposition like CNPs, but we assume that the variational training objective makes it harder to do so, as sampling from the representation during training can also be understood as a random perturbation. For VAEs, which are conceptually similar to NPs, it was also suggested that models partition their latent space in a way that maximally spreads representations of individual data points under the prior distribution (Rezende & Viola, 2018). Finally, to further test the models' ability to distinguish frequencies and also as an example of possible practical benefits of our findings, we trained CNPs to be band-pass and band-stop filters. This worked extremely well: the Fourier component magnitudes of the training data are essentially "baked" into the models, and any frequency not found therein is subsequently suppressed in reconstructions from the models. An obvious use case would be programmable frequency filters, when perhaps a more complex frequency response is desired.
Overall, our work offers exciting new insights into the inner workings of Neural Processes and into the learning of representations of function spaces. Many applications of deep learning are concerned with representation learning in some way, and we hope that our findings inspire further research and forge a better understanding of the methods used in the field. Our work also opens up a number of exciting questions for future work. We only look at function spaces with scalar domain, and while we expect that our findings translate to higher dimensions, the same should be validated empirically. Seeing that variational Neural Processes can in principle learn frequency decompositions, it would be interesting to investigate how we can further incentivize this behaviour in them. Likewise, it should be possible to encourage orthogonality between the individual representation dimensions, so that frequencies are more cleanly separated. Further theoretical exploration of the conditions, besides frequency content, that allow function spaces to be represented could also be worthwhile. Finally, it is not immediately obvious how our findings translate to scenarios that disallow a classical definition of frequency, for example when the observations are entire images as in Eslami et al. (2018).
A APPENDIX
A.1 OPTIMIZATION & IMPLEMENTATION
To train Neural Processes, we represent individual examples f ∈ F as sets of randomly sampled evaluations (x, f(x) = y), which we partition into context set (xc,yc) and target set (xt,yt). We further have encoder E and decoder G of a Neural Process implemented as neural networks, for which we summarize the parameters in θ. In our implementation, both are multilayer perceptrons (MLP), meaning simple fully connected networks. Our goal is then to find the optimal set of parameters θ∗ that maximizes the likelihood of yt, given xc, yc and xt, over all f :
θ* = argmax_θ Σ_{f ∈ F} log p_θ(y_t | x_t, x_c, y_c)   (3)
where pθ is a placeholder for some parametrized likelihood function. We introduce the logarithm because we assume the likelihood factorizes across individual f , turning the expression into a sum. So what would this optimization look like in practice? For example, we could minimize the mean squared error between yt and the predictions ŷt from our network. This would implicitly assume a Gaussian likelihood with a fixed variance. However, we would like our model to predict a variance, so that it can indicate how uncertain it is about a prediction, and because Le et al. (2018) found that this results in overall better performance. We achieve this by implementing G as a network that predicts both the mean and the variance of a diagonal Gaussian distribution, and Eq. (3) becomes:
θ* = argmax_θ Σ_{f ∈ F} Σ_t log N( y_t ; G_θ^µ(Z, x_t), G_θ^σ(Z, x_t) )   (4)
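For reference, the per-point summand of Eq. (4) is simply the log-density of a Gaussian with a predicted log-variance; in code (our own helper, plain Python):

    import math

    def gaussian_log_likelihood(y, mu, log_var):
        # log N(y; mu, exp(log_var)), with the variance predicted as a log-variance
        return -0.5 * (math.log(2 * math.pi) + log_var
                       + (y - mu) ** 2 / math.exp(log_var))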
In deterministic Neural Processes (CNP), we can directly optimize this with maximum likelihood training. In variational Neural Processes (NP), Z is also parametrized by a Gaussian, meaning just like G, E predicts mean and variance of a Gaussian with Dr dimensions. In this case, we need to rewrite the summands of Eq. (3):
log p_θ(y_t | x_t, x_c, y_c) = log E_{z ∼ p(Z | x_c, y_c)} [ p_θ(y_t | x_t, z) ]   (5)
Here, p(Z|xc,yc) is not the distribution predicted by our encoder, but some true distribution we don’t have access to. The idea of variational inference (see for example Bishop (2006) for an introduction) is to approximate this p by some other distribution qθ and then to optimize pθ and qθ simultaneously. qθ is what our encoder E predicts, just like pθ is what our decoder G predicts. Continuing from Eq. (5):
LHS = log E_{z ∼ q_θ(Z | x_t, y_t)} [ p_θ(y_t | x_t, z) · p(z | x_c, y_c) / q_θ(z | x_t, y_t) ]   (6)
    ≥ E_{z ∼ q_θ(Z | x_t, y_t)} [ log ( p_θ(y_t | x_t, z) · p(z | x_c, y_c) / q_θ(z | x_t, y_t) ) ]   (7)
    ≈ E_{z ∼ q_θ(Z | x_t, y_t)} [ log ( p_θ(y_t | x_t, z) · q_θ(z | x_c, y_c) / q_θ(z | x_t, y_t) ) ]   (8)
    = E_{z ∼ q_θ(Z | x_t, y_t)} [ log p_θ(y_t | x_t, z) ] − D_KL( q_θ(z | x_t, y_t) || q_θ(z | x_c, y_c) )   (9)
where LHS refers to the left hand side of Eq. (5). In the first line, we have switched the underlying distribution from the true prior—meaning conditioned on the context—to an approximate posterior—meaning conditioned on both context and target, but for notational simplicity we only write out the target set. The second line follows from Jensen’s inequality while in the third line we have replaced the true prior with the approximate prior. Finally, we have rewritten the right hand side using the Kullback-Leibler (KL) divergence, a measure of distance between two distributions. Because we predict Gaussian distributions, the KL divergence has a closed-form expression. Otherwise it would be impractical to use it in an optimization context. The last line is often called the evidence lower bound (ELBO) in variational inference.
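The closed-form KL term in Eq. (9) for two diagonal Gaussians is a standard identity, reproduced here as a helper (not specific to this paper):

    import numpy as np

    def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
        # KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) ), summed over dimensions
        return 0.5 * np.sum(
            logvar_p - logvar_q
            + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
            - 1.0
        )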
Let us put the above into more practical terms. When presented with an example consisting of context and target sets, we first use the encoder network E to encode each context tuple separately. The encoder is an MLP with two input channels (for X and Y), 6 hidden layers with 128 channels, and a final layer mapping to Dr channels, i.e. to the representation. While all hidden layers have a fixed dimension of 128, we vary the representation dimension Dr for our experiments (but never make it larger than 128). For the variational case, the final layer maps to 2Dr channels, half for the mean and half for the variance of the predicted Gaussian (in practice, we predict the log-variance to allow negative values). The individual representations are then averaged, and in the variational case we call this the prior (qθ(z|xc,yc) in Eq. (9)). For the posterior, we also encode the target pairs and then average over all individual representations, including the context. During training forward passes, we sample once from the posterior and use this sample as the representation for the decoder. Ideally, we should sample many times to integrate the expectation in Eq. (9), but for stochastic mini-batch training it was found empirically that a single sample suffices (Jimenez Rezende et al., 2014; Kingma & Welling, 2014). The decoder predicts a Gaussian from the representation and an input xt. It is implemented symmetrically to the encoder, meaning it's an MLP with Dr + 1 input channels, 6 hidden layers with 128 channels, and two output channels for mean and (log-)variance. We use tanh-activations as well. As a loss we directly use the negative log-likelihood, meaning we evaluate the likelihood of a reference point yt under a Gaussian parametrized by the predicted mean and variance. Finally, we average over all predicted points, which are the target points as well as the context points. We use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001, repeatedly decaying it with a factor of 0.995 after 1000 batches. We train with a batch size of 256 for a total of 600 000 batches.
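Putting these pieces together, the following is a compact PyTorch sketch of the CNP described above; it mirrors the stated architecture but is our reconstruction, not the authors' code, and shapes assume batched one-dimensional inputs:

    import math
    import torch
    import torch.nn as nn

    def mlp(d_in, d_out, d_hidden=128, n_hidden=6):
        layers, d = [], d_in
        for _ in range(n_hidden):
            layers += [nn.Linear(d, d_hidden), nn.Tanh()]
            d = d_hidden
        return nn.Sequential(*layers, nn.Linear(d, d_out))

    class CNP(nn.Module):
        def __init__(self, d_r=128):
            super().__init__()
            self.encoder = mlp(2, d_r)        # one (x, y) pair -> representation
            self.decoder = mlp(d_r + 1, 2)    # (r, x_t) -> (mean, log-variance)

        def forward(self, xc, yc, xt):        # xc, yc: (B, N); xt: (B, M)
            r = self.encoder(torch.stack([xc, yc], dim=-1)).mean(dim=-2)  # (B, d_r)
            r = r.unsqueeze(-2).expand(-1, xt.shape[-1], -1)              # (B, M, d_r)
            out = self.decoder(torch.cat([r, xt.unsqueeze(-1)], dim=-1))
            return out[..., 0], out[..., 1]   # predicted mean and log-variance

    model = CNP(d_r=64)
    xc, yc, xt = torch.randn(8, 10), torch.randn(8, 10), torch.randn(8, 50)
    mu, logvar = model(xc, yc, xt)
    yt = torch.randn(8, 50)
    # negative log-likelihood loss, as in Eq. (4):
    loss = 0.5 * (math.log(2 * math.pi) + logvar + (yt - mu) ** 2 / logvar.exp()).mean()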
A.2 EXPERIMENT DETAILS
We conduct our experiments on two kinds of function spaces. The first is defined by a Gaussian Processes prior using an EQ kernel given by:
k(x_1, x_2) = exp( − ||x_1 − x_2||²_2 / (2l) )   (10)
where l is a lengthscale parameter. This example was also used in the original works (Garnelo et al., 2018a;b). The second are random Fourier series, defined by:
f(x) = a_0 + Σ_{k=1}^{K} a_k cos(kx − φ_k),   K = 19   (11)
where we sample φk and ak (including a0) randomly from the interval [−1, 1]. Note that k is an angular frequency, while results are presented for regular frequencies.
To construct training examples, we sample N context inputs and M target input values uniformly from the range [−3, 3]. N is a random integer from the range [3, 100), while M is a random integer from [N, 100). This sampling strategy was adopted from the original works and Le et al. (2018). y-values are generated by evaluating the above functions (or drawing from the distribution in the case of the GP) on the random input values. The models are trained by letting them predict/reconstruct both context and target points, conditioned only on the context. At test time, we are only interested in reconstructions, meaning target points and context points are identical, and we work with 200 equally spaced input values across the full range.
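The input sampling could look like this (numpy; mirrors the ranges stated above):

    import numpy as np

    rng = np.random.default_rng(0)
    n_context = rng.integers(3, 100)           # N in [3, 100)
    n_target = rng.integers(n_context, 100)    # M in [N, 100)
    x_context = rng.uniform(-3, 3, n_context)
    x_target = rng.uniform(-3, 3, n_target)
    x_test = np.linspace(-3, 3, 200)           # equally spaced points at test time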
In the band filter experiment, we train models on Fourier series with some frequencies intentionally left out of the training data. When we train a model on data where some frequency components are blocked, the distribution of y-values a model sees during training becomes narrower. As a result, passing functions from the reference distribution (where no components are blocked) through a band-filter CNP will suppress the desired frequencies, but will also amplify non-blocked frequencies. To counteract this, we have to multiply the y-values of the reference data, which are approximately normally distributed, by σband/σref, i.e. the ratio of standard deviations of the respective y-distributions.
A.3 ADDITIONAL VISUALIZATIONS | 1. What are the main contributions and key findings of the paper regarding Neural Processes?
2. How do the authors infer a maximum theoretical upper bound of frequencies of functions f that can be used to represent the points, based on signal theoretic aspects of discretization?
3. How do the authors use simulations to test the validity of the upper bound, and what are the results?
4. How do the authors claim that NPs behave like a Fourier Transform and decompose the spectrum of the signal, and what are the implications of this finding?
5. What are the limitations and potential applications of using NPs as band-pass/stop filters, and how does their computational complexity affect their ecological validity?
6. How does the paper provide a strong theoretical foundation for the method, and how do the authors support their claims through empirical stimulation?
7. How does the paper address the importance of explainability and interpretability in understanding how methods generate results?
8. What are the potential consequences and future directions for research stemming from the findings of this paper, and how significant are they in the context of neural networks and machine learning? | Review | Review
The work examines properties of Neural Processes (NP). More precisely, of deterministic NPs and how they form finite-dimensional representations of infinite-dimensional function spaces. NPs learn functions f that best represent/fit discrete sets of points in space. Based on signal-theoretic aspects of discretisation, the authors derive a theoretical upper bound on the frequencies of functions f that can be used to represent the points. The bound depends on the latent dimension/representation size and the finite interval spanned by the points. Simulations are computed to test the validity of the upper bound. The authors find that NPs behave like a Fourier transform and decompose the spectrum of the signal. Since the representation during training learns to represent specific frequencies, NPs can be used as band-pass/stop filters.
The paper is well written, and the basic approach is clearly outlined. The quality of the work and the evaluation are good and support the authors' claims. However, it is not fully clear to what extent the claims translate to other data or generalise well. The finding that NPs interpret points in space as signals and implement a frequency decomposition like Fourier/wavelet transforms seems reasonable. Not sure, however, if an application as a filter is ecological in terms of computational complexity.
The paper provides a strong theoretical foundation for the method and the authors support their claims by empirical simulation. Also, explainability and more importantly interpretability of how methods generate results is essential. So, the message the paper sends is relevant. However, the relevance and significance of the findings, and the consequences thereof, are not clear.
ICLR | Title
Frequency Decomposition in Neural Processes
Abstract
Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and so far it has been unclear how these representations are learned and what kinds of functions can be represented. We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components, similar to a Fourier transform. In this context, we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce, depending on their representation size. This bound is confirmed empirically. Finally, we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others, which makes them programmable band-pass or band-stop filters.
1 INTRODUCTION
Neural Processes (Garnelo et al., 2018a;b) are a class of models that can learn a distribution over functions, or more generally a function space. In contrast to many other approaches that do the same, for example Bayesian Neural Networks, Neural Processes learn an explicit representation of such a function space, which allows them to condition their predictions on an arbitrary number of observations that are only available at test time. This representation is finite-dimensional, while function spaces are infinite-dimensional, and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so.
Our work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space, and in the process describes constraints and conditions that decide what kinds of function spaces can be represented. We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective, which allows us to derive a theoretical upper bound on the frequencies, i.e. Fourier components, of functions that can be represented. We subsequently confirm this bound empirically, which suggests that the learned representations should contain a notion of frequency. To further investigate this hypothesis, we continue with a visualization of the learned representations, which reveals that Neural Processes can decompose a function space into different frequency components, essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour. As further evidence of this we train Neural Processes to represent only certain frequencies, which results in them suppressing those frequencies that were not observed in the training data. Our contributions can be summarized as follows (the complete source code to reproduce our experiments is available at https://github.com/***):
• We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct. As we show, the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval.
• We investigate learned representations qualitatively, presenting evidence that Neural Processes perform a frequency decomposition of the function space, akin to a Fourier transform. This behaviour is not incentivized externally but rather emerges naturally.
• We show that by choosing the training distribution appropriately, Neural Processes can be made to represent certain frequencies and suppress others, which turns them into programmable band-pass or band-stop filters.

[1] The complete source code to reproduce our experiments is available at https://github.com/***
2 BACKGROUND
Neural Processes (Garnelo et al., 2018a;b) are maps P : C,X → Y, where C is a set of tuples {(x, f(x))}_{c=1}^{N} =: (x_c, f(x_c)) [2] with arbitrary but positive cardinality N, and f ∈ F : X → Y. C is often called the context, because Neural Processes perform predictions for values x_t ∈ X (t for target), conditioned on these points. F is the function space we would like to find a representation of. Note that some sources define function spaces as any set of functions with a shared domain and co-domain, while others require them to be vector spaces as well. We don't concern ourselves with this distinction and further restrict our work to X = Y = R, because it allows us to visualize learned representations. We only look at the original Neural Processes, namely the deterministic Conditional Neural Processes (CNP) (Garnelo et al., 2018a) and the variational Neural Processes (NP) (Garnelo et al., 2018b), because newer contributions in the field work in ways that preclude them from being analyzed in the same way. We discuss this further in Section 5. In CNPs and NPs, the map P is separated into two parts, a so-called encoding E : C → Z and a decoding or generating part G : Z,X → Y. Z is referred to as the representation or latent space. To allow Neural Processes to approximate arbitrary [3] function spaces F, E and G are typically chosen to be powerful approximators, specifically neural networks, as the name suggests.
The defining characteristic of CNPs and NPs is that E encodes individual pairs (x, f(x)) from the context separately, and the resulting representations are averaged to form a global representation, meaning one that is independent of the target points xt at which we then evaluate the Neural Process. This is often not the case in later work, for example in Attentive Neural Processes (Kim et al., 2019), where the individual representations are instead aggregated using an attention mechanism that depends on xt. In CNPs the representations are deterministic, while in NPs they parametrize mean and (log-)variance of a Gaussian distribution, so the latter are trained using variational inference. For details on implementation and training we refer to Appendix A.1. Our work will investigate how these global representations, which are finite-dimensional, represent infinite-dimensional function spaces.
As stated above, E, and by extension the Neural Process P, acts on set-valued inputs. This is contrary to the vast majority of machine learning work where inputs are vectors of fixed dimension and ordering. Recall that sets are permutation invariant, so we must ensure that the same is true for the output of E. It is easy to see that this is given when we average individual encodings, but Zaheer et al. (2017) show that it is in fact the only way to ensure it: E is permutation-invariant if and only if it has a so-called sum-decomposition, i.e. it can be represented in the form
E(x) = \rho\left( \sum_{i=1}^{N} \phi(x_i) \right) \qquad (1)
where ρ, φ are appropriately chosen functions. Wagstaff et al. (2019) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N , the dimension of the image Z must at least be N . This will become relevant in the following section.
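As a minimal numerical illustration of Eq. (1), the sketch below uses arbitrary toy choices for φ and ρ (any learned encoder would take their place) and checks that the output is unchanged under a reordering of the set:

```python
import numpy as np

def phi(x):                                   # toy per-element encoder
    return np.stack([np.sin(x), np.cos(x), x], axis=-1)

def rho(s):                                   # toy post-aggregation map
    return np.tanh(s)

def E(xs):                                    # Eq. (1): E(x) = rho(sum_i phi(x_i))
    return rho(phi(xs).sum(axis=0))

xs = np.array([0.3, -1.2, 2.0])
assert np.allclose(E(xs), E(xs[::-1]))        # permutation invariance holds
```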
3 AN UPPER BOUND ON SIGNAL FREQUENCIES
We mentioned in the previous section that the encoder E in a Neural Process should have a sum-decomposition, so that the global representations are permutation-invariant, as shown in Zaheer et al. (2017). Expanding on this, Wagstaff et al. (2019) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N. What these works do not consider are the implications for situations where the elements of the sets are input-output tuples of some function f, as is typically the case in Neural Processes. We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process. In order to do this, we must first define what it means to successfully learn a representation of a function space.

[2] We use boldface as a shorthand for sets, not vectors.
[3] This will depend on the implementation of E and G, and for neural networks F is practically restricted to continuous and differentiable functions.

Definition 3.1 (Representation of Function Spaces in Neural Processes). We say that a Neural Process P has learned a representation of a function space F, defined on an interval [a, b] ⊂ R, if, for some error tolerance ε, it holds for all x ∈ [a, b] and for all f ∈ F, represented as a suitable set of discrete measurements (x_f, f(x_f)), that |P((x_f, f(x_f)), x) − f(x)| < ε.
That means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance. The choice of this tolerance is essentially arbitrary, but should reflect that for g ∉ F the reconstructions should generally not be accurate within ε. We also write that f is represented as a suitable set of discrete measurements, by which we mean that it must be possible to reconstruct f from those measurements.
Switching to signal-processing terminology, we know that to represent a continuous signal as a set of discrete measurements, we need to sample it at points with a distance of at most τ = 1/(2ν_max), where ν_max is the maximum frequency component of the signal. This is most commonly known as the Nyquist-Shannon sampling theorem (Whittaker, 1915; Kotelnikov, 1933; Shannon, 1949). For any finite real interval [a, b], this translates to a number of sampling points N > 2|b − a|ν_max. The latter allows us to make a connection to the findings by Wagstaff et al. (2019), so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct.

Theorem 3.1 (Maximum Frequency in Neural Process Representations). A Neural Process P with latent dimension D_r can only learn a representation of some function space F defined on a finite interval [a, b] ⊂ R if for all f ∈ F with a maximum frequency content ν_max,f it holds that:

\nu_{\max,f} < \frac{D_r}{2|b - a|} \qquad (2)
Note that this means we should in theory be able to represent any function space that obeys Eq. (2) to within arbitrarily small ε. In practice, we will typically have less control over F, and we only find approximate representations. Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq. (2). It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points. During training, we work with randomly sampled inputs, but at test time equidistant points are used, as we outline in Appendix A.2.
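The bound and the associated Nyquist spacing are simple to evaluate; the helper below is our own illustration, not part of the paper's released code:

```python
def max_frequency(d_r: int, a: float, b: float) -> float:
    """Upper bound from Eq. (2) on representable frequencies."""
    return d_r / (2 * abs(b - a))

def nyquist_spacing(nu_max: float) -> float:
    """Maximum sampling distance tau = 1 / (2 * nu_max)."""
    return 1 / (2 * nu_max)

# e.g. D_r = 32 on the training interval [-3, 3]:
print(max_frequency(32, -3, 3))   # ~2.67 cycles per unit length
```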
4 EXPERIMENTS & RESULTS
4.1 VALIDATION OF THE FREQUENCY BOUND
Our experiments are grouped into three parts. The first experiment seeks to test the validity of the bound we just derived in Eq. (2). In particular, we train Neural Processes with varying representation sizes on two exemplary function spaces, so that for some models the representation size is insufficient to represent all frequencies. The function spaces we base our experiments on are those defined by Gaussian Process priors (for an introduction see for example Rasmussen & Williams (2006)) with an exponentiated-quadratic (EQ) kernel with lengthscale parameter l, as well as those defined by random real-valued Fourier series—for details we refer to Appendix A.2. While the Gaussian Process samples have an average Fourier magnitude that smoothly decays to zero, the distribution of Fourier magnitudes is uniform for the Fourier series, as shown in Fig. A.1. The Fourier series space also grants us precise control over the frequency domain, which will be useful in subsequent experiments.
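For concreteness, both function spaces can be sampled in a few lines (a sketch following the definitions given in Appendix A.2, Eqs. (10) and (11); the random seed and the covariance jitter are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)

# sample from a GP prior with EQ kernel (Eq. (10)), lengthscale l
l = 0.05
K_mat = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * l))
f_gp = rng.multivariate_normal(np.zeros_like(x), K_mat + 1e-6 * np.eye(len(x)))

# sample a random real-valued Fourier series (Eq. (11)) with K = 19
K = 19
a = rng.uniform(-1, 1, K + 1)                    # coefficients a_0 ... a_K
phases = rng.uniform(-1, 1, K)                   # phases phi_1 ... phi_K
ks = np.arange(1, K + 1)
f_fs = a[0] + (a[1:] * np.cos(np.outer(x, ks) - phases)).sum(axis=1)
```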
Figure 1 shows example reconstructions in a deterministic Neural Process (CNP) for samples from a Gaussian Process prior with EQ kernel (l = 0.05) and from a random Fourier series. For the GP example, the CNP essentially acts like a low-pass filter when the representation size is insufficient, which qualitatively confirms the bound we derived in Eq. (2). Interestingly, the bound can also be observed in a different way: for the Fourier series example, the CNP hardly suppresses high
frequencies, but instead limits the effective interval of the signal, simply ignoring the outer regions of it. Both behaviours are in agreement with the bound in Eq. (2). The Fourier example also serves as a good sanity check: with K = 19 (the maximum angular frequency) the data has a maximum frequency of νmax = K/(2π) = 3.02. For Dr = 32 this would limit the size of the interval to |b − a| < 5.29, for Dr = 16 to |b − a| < 2.65. The reconstructed signal regions in Fig. 1 are a bit narrower, and thus in good agreement with the bound we derived. For a variational Neural Process, we observe the same behaviour, but with stronger dampening of high frequencies in both cases, as seen in Fig. A.2. In Fig. A.3 we show the average reconstruction error for CNPs and NPs of different representation sizes, applied to GP examples with varying lengthscale, which results in a smooth decrease in error for larger representations and larger lengthscale parameters, as one would expect.
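The arithmetic behind this sanity check is easy to reproduce:

```python
import math

nu_max = 19 / (2 * math.pi)            # maximum frequency of the series, ~3.02
for d_r in (32, 16):
    print(d_r, d_r / (2 * nu_max))     # maximum |b - a|: ~5.29 and ~2.65
```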
4.2 HOW DO NEURAL PROCESSES REPRESENT FUNCTION SPACES?
Having found that Neural Processes do indeed observe the bound we derived in Eq. (2), we seek to understand how this happens. To this end, we visualize the learned representations in Neural Processes, which is possible because we restrict ourselves to X = Y = R. Again looking at the two function spaces from the previous experiment, we sample pairs (x, y) on a regular grid (50 × 50) with x ∈ [−3, 3], which is our training input range, and also y ∈ [−3, 3] as it suitably covers the value range of outputs. We then encode each pair individually to a representation, thus constructing a map ri(x, y) for each representation channel. The latter allows us to uncover potential patterns and to gain a better understanding of how Neural Processes learn representations of function spaces.
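Procedurally, the visualization just encodes every grid point individually and regroups the outputs by channel. The sketch below uses a randomly initialized stand-in for the trained encoder (the figures, of course, use the trained one):

```python
import numpy as np

rng = np.random.default_rng(0)
d_r = 32
# stand-in for the trained encoder applied to a single (x, y) pair
w1, b1 = rng.normal(size=(128, 2)), rng.normal(size=128)
w2, b2 = rng.normal(size=(d_r, 128)), rng.normal(size=d_r)
encode = lambda x, y: w2 @ np.tanh(w1 @ np.array([x, y]) + b1) + b2

grid = np.linspace(-3, 3, 50)
r = np.array([[encode(xv, yv) for yv in grid] for xv in grid])  # (50, 50, d_r)
r_maps = np.moveaxis(r, -1, 0)    # r_maps[i] is the channel map r_i(x, y)
```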
Figure 2 presents example representation channels for CNPs and NPs, trained on samples from a Gaussian Process with an EQ-kernel (l = 0.2) and on random Fourier series. The individual channels were selected to illustrate the general patterns of behaviour we observed. First, we find that representations are almost always anti-symmetric across y = 0. This is not surprising, as the function spaces we look at are on average symmetric—in the sense that f and −f will occur with the same probability—so the Neural Process learns the same representation, just with a different sign. More importantly, we find that both NPs and CNPs implicitly form a representation of the input space (i.e. the relevant interval of the function space domain), in the sense that different regions of the input space map to different representation channels. In CNPs this results in an oscillating pattern, with different channels exhibiting different frequencies. In other words, the CNP performs a frequency decomposition of the function space, not unlike a Fourier transform. At the
same time, there is nothing that would enforce orthogonality between the different representation dimensions, and the Fourier series example highlights that we can generally expect a mixture of multiple frequencies for a given dimension. It should be noted that this frequency decomposition emerges naturally and is not incentivized externally (e.g. by a special loss).
Even though NPs behaved very similarly to CNPs in the previous section, their learned representations look vastly different from those in a CNP. Instead of a frequency decomposition, they seem to partition the input space, so that a given representation dimension is written to by a specific, narrow region of the input space. Only for channels with a low average magnitude (i.e. a large index in Fig. 2) do we find behaviour similar to CNPs. We conclude that NPs can in principle learn a frequency decomposition, but their variational formulation—the only difference to CNPs— disincentivizes it. We show more representations for CNPs and NPs trained on GP data in Fig. A.4 and Fig. A.5, and for CNPs and NPs trained on Fourier series data in Fig. A.6 and Fig. A.7, sorting channels by their average magnitude.
4.3 NEURAL PROCESSES AS BAND FILTERS
Our final experiment is designed to show that we can exert more control over the learned representations, and it will serve as additional evidence that deterministic Neural Processes (CNP) perform a frequency decomposition of the function space they represent. At the same time, it suggests a possible practical application of Neural Processes. We saw in Section 4.1 that CNPs sometimes act like low-pass filters, which could be a useful application, but the emergence of that behaviour is not reliable. We now train CNPs with a sufficiently large representation size (Dr = 128) to be band-pass and band-stop filters. To this end, we train the models on the Fourier series defined by Eq. (11), but for the band-stop we set all components ak to zero for which 5 ≤ k ≤ 14, and likewise set all ak to zero outside of that range for the band-pass. We then look at the reconstructions of examples from the original series with all components present. For more details on the training procedure and how we sample points for function evaluation, please see Appendix A.1 and Appendix A.2.
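Constructing the band-filtered training distributions amounts to zeroing coefficients of Eq. (11) before sampling; a sketch (how a_0 is treated in the band-pass case is our assumption):

```python
import numpy as np

def fourier_sample(rng, mode="reference", K=19):
    a = rng.uniform(-1, 1, K + 1)            # coefficients a_0 ... a_K
    phases = rng.uniform(-1, 1, K + 1)       # phases (index 0 unused)
    band = (np.arange(K + 1) >= 5) & (np.arange(K + 1) <= 14)
    if mode == "band-stop":
        a[band] = 0.0                        # suppress 5 <= k <= 14
    elif mode == "band-pass":
        a[~band] = 0.0                       # keep only 5 <= k <= 14 (drops a_0 too)
    return lambda x: a[0] + sum(a[k] * np.cos(k * x - phases[k])
                                for k in range(1, K + 1))
```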
The average Fourier magnitude of the training functions for the different models is given by the bottom left panel in Fig. 3. In the first model (Reference), all components are allowed; in the second (Band-stop), components in the middle of that range are suppressed; in the third (Band-pass) only components in the middle of the range are allowed. We then apply these models to examples from the reference data distribution, the result of which can be seen in the bottom-right panel of Fig. 3. The models that are only shown certain frequencies during training will suppress those frequencies that were not found in the training data, meaning they effectively become programmable band-stop or band-pass filters. This is confirmed by the example in the top rows of the figure, where we show
both the signal and its Fourier transform magnitude. Note that one needs to adjust the value range of the reference data before passing them through the band filters to prevent gain in the non-suppressed frequency regions. We give more details in Appendix A.2.
Unfortunately, we were only partly able to elicit the same behaviour in variational NPs. While the trained band-stop filter worked exactly like the CNP band-stop, we were not able to train a band-pass filter. The models collapsed during training, meaning the loss plateaued and no meaningful representations were learned. There is no obvious reason why a band-pass shouldn't work when a band-stop does, so we suspect our hyperparameter configuration was not ideal and that with more tuning it would be possible to train a band-pass as well. The NP results are shown in Fig. A.8.
5 RELATED WORK
Neural Processes broadly relate to the topic of learning distributions of functions, even though we speak of the less restrictive term function space in our work. In this context, Bayesian Neural Networks (see for example Neal (1996); Graves (2011); Hernández-Lobato & Adams (2015)) are a popular choice, which place distributions on the weights of a network. However, in doing so they only implicitly represent distributions over functions, while Neural Processes learn an explicit finite-dimensional representation that can be leveraged for predictions, so as to condition on context observations given at test time. Perhaps the most well known class of methods that do the same are Gaussian Processes (for an introduction see Rasmussen & Williams (2006)). These are stochastic processes represented by a joint Gaussian distribution over context and target points, defined via the covariance matrix by a kernel. All flexibility of Gaussian Processes to represent different distributions of functions is decided by this kernel, so many works try to learn it (Yang et al., 2015; Wilson et al., 2016b;a; Tossou et al., 2019; Calandra et al., 2016). Even though Neural Processes were originally motivated by Gaussian Processes, they can be understood as orthogonal methods: Gaussian Processes represent a function space using a (potentially learned) kernel, while Neural Processes represent them in a learned finite-dimensional space.
Neural Processes can also be interpreted from the perspective of deep learning on sets, the earliest work in the field being Zaheer et al. (2017). More theoretical contributions were made by Wagstaff et al. (2019), whose work we use to underpin our finding that the representation size in Neural Processes limits the maximum frequency of signals that can be represented. More applied work in the set-learning context has mostly been performed on point-cloud data (Qi et al., 2017b;a; Wu et al., 2019), which can be interpreted as a higher-dimensional instance of learning function spaces. Validating our findings in higher-dimensional spaces is an important direction for future work.
Neural Processes have inspired a number of follow-up works. Perhaps the most well known addition are Attentive Neural Processes (Kim et al., 2019), which replace the averaging of individual representations with a learned attention mechanism (Vaswani et al., 2017). The aggregate representations are thus no longer independent of the target inputs, and no global representation is learned. This holds true for most follow-up work. Convolutional Conditional Neural Processes (Gordon et al., 2020) propose to no longer learn a finite-dimensional representation at all and instead work in function space by applying a CNN on suitable and variable discretizations of a kernel density estimate. Similar to ANP, Louizos et al. (2019) propose to not merge observations into a global latent space, but instead learn conditional relationships between them. Singh et al. (2019) and Willi et al. (2019) address the problem of overlapping and changing dynamics in time series data. Relating this to our work, it would be possible to test how the original Neural Processes would represent functions where the average frequency content is not constant over the domain. We leave this investigation for future work. Neural Processes have also been extended to scenarios where the function space maps to entire images, in the form of Generative Query Networks (GQN) (Eslami et al., 2018; Kumar et al., 2018). Employing vastly more powerful decoders, they can (re-)construct unseen views in 3D scenes, which relates Neural Processes to the field of 3D scene understanding, an area that has received a lot of attention more recently (Sitzmann et al., 2019; Engelcke et al., 2020; Mildenhall et al., 2020). Sitzmann et al. (2020) show that periodic activation functions make it easier for networks to learn so-called implicit representations—mappings from coordinates to a density, RGB values, etc.. We did in fact try periodic activation functions in our experiments, but found no difference to using tanh-activations. In the same context, Tancik et al. (2020) show that coordinates in Fourier space are often superior to coordinates in signal space to produce fine detail. We interpret this as an indication
that a representation in frequency space is more efficient for many signals, which could explain why Neural Processes implicitly perform a frequency decomposition. Note that the above introduces Fourier features explicitly as a form of inductive bias, while Neural Processes automatically learn this form of representation.
It is well known that neural networks, specifically a MLP with at least one hidden layer, can learn the Fourier transform of an input signal (Gallant & White, 1988). In fact, there have been a multitude of works that exploit this ability in one way or the other, leading to the term Fourier Neural Networks. We refer to the recent review by Zhumekenov et al. (2019) for a comprehensive overview. The difference to Neural Processes is that these works typically apply networks directly to a sequence of points, while NPs learn a mapping that is only applied to individual (x,y) pairs, the representations of which are averaged. We emphasize again that the frequency decomposition occurs naturally in NPs, while these works usually employ direct supervision.
6 DISCUSSION
The goal of this work was to gain a better understanding of the mechanisms that allow Neural Processes to form finite-dimensional representations of infinite-dimensional function spaces. To the best of our knowledge, ours is the first work to investigate this question, and our findings are both surprising and meaningful in this context. We first derived a theoretical upper bound on the frequency of signals that can be represented in Neural Processes with a given representation size. We empirically confirmed that the representation size does indeed pose such a limit and that this can result in Neural Processes acting like low-pass filters. Alternatively, models ignore parts of the signal to keep higher frequencies. Both behaviours are in agreement with the derived bound. We then visualized learned representations to understand how the models incorporate the concept of frequency into them. In all cases the models formed an implicit representation of the input space, in the sense that different x-values are mapped to different representation channels. For CNPs, an oscillating pattern emerges, such that different representation channels correspond to different frequencies, from which we concluded that CNPs perform a frequency decomposition of the function space they learn to represent. It should be noted that this behaviour emerges naturally and is not explicitly encouraged (e.g. by a special loss). In contrast to this, NPs tend to partition the space into more or less disjunct regions. They are still able to learn a frequency decomposition like CNPs, but we assume that the variational training objective makes it harder to do so, as sampling from the representation during training can also be understood as a random perturbation. For VAEs, which are conceptually similar to NPs, it was also suggested that models partition their latent space in a way that maximally spreads representations of individual data points under the prior distribution (Rezende & Viola, 2018). Finally, to further test the models' ability to distinguish frequencies and also as an example of possible practical benefits of our findings, we trained CNPs to be band-pass and band-stop filters. This worked extremely well: the Fourier component magnitudes of the training data are essentially "baked" into the models, and any frequency not found therein is subsequently suppressed in reconstructions from the models. An obvious use case would be programmable frequency filters, when perhaps a more complex frequency response is desired.
Overall, our work offers exciting new insights into the inner workings of Neural Processes and into the learning of representations of function spaces. Many applications of deep learning are concerned with representation learning in some way, and we hope that our findings inspire further research and forge a better understanding of the methods used in the field. Our work also opens up a number of exciting questions for future work. We only look at function spaces with scalar domain, and while we expect that our findings translate to higher dimensions, the same should be validated empirically. Seeing that variational Neural Processes can in principle learn frequency decompositions, it would be interesting to investigate how we can further incentivize this behaviour in them. Likewise, it should be possible to encourage orthogonality between the individual representation dimensions, so that frequencies are more cleanly separated. Further theoretical exploration of the conditions, besides frequency content, that allow function spaces to be represented could also be worthwhile. Finally, it is not immediately obvious how our findings translate to scenarios that disallow a classical definition of frequency, for example when the observations are entire images as in Eslami et al. (2018).
A APPENDIX
A.1 OPTIMIZATION & IMPLEMENTATION
To train Neural Processes, we represent individual examples f ∈ F as sets of randomly sampled evaluations (x, f(x) = y), which we partition into context set (xc,yc) and target set (xt,yt). We further have encoder E and decoder G of a Neural Process implemented as neural networks, for which we summarize the parameters in θ. In our implementation, both are multilayer perceptrons (MLP), meaning simple fully connected networks. Our goal is then to find the optimal set of parameters θ∗ that maximizes the likelihood of yt, given xc, yc and xt, over all f :
\theta^{*} = \arg\max_{\theta} \sum_{f \in F} \log p_{\theta}(\mathbf{y}_t \mid \mathbf{x}_t, \mathbf{x}_c, \mathbf{y}_c) \qquad (3)
where pθ is a placeholder for some parametrized likelihood function. We introduce the logarithm because we assume the likelihood factorizes across individual f , turning the expression into a sum. So what would this optimization look like in practice? For example, we could minimize the mean squared error between yt and the predictions ŷt from our network. This would implicitly assume a Gaussian likelihood with a fixed variance. However, we would like our model to predict a variance, so that it can indicate how uncertain it is about a prediction, and because Le et al. (2018) found that this results in overall better performance. We achieve this by implementing G as a network that predicts both the mean and the variance of a diagonal Gaussian distribution, and Eq. (3) becomes:
\theta^{*} = \arg\max_{\theta} \sum_{f \in F} \sum_{t} \log \mathcal{N}\!\left(y_t;\, G_{\theta}^{\mu}(Z, x_t),\, G_{\theta}^{\sigma}(Z, x_t)\right) \qquad (4)
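The summands of Eq. (4) are a heteroscedastic Gaussian negative log-likelihood; in PyTorch this could be written, for instance, as:

```python
import math
import torch

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)); cf. Eq. (4)."""
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()
                  + math.log(2 * math.pi)).sum()
```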
In deterministic Neural Processes (CNP), we can directly optimize this with maximum likelihood training. In variational Neural Processes (NP), Z is also parametrized by a Gaussian, meaning just like G, E predicts mean and variance of a Gaussian with Dr dimensions. In this case, we need to rewrite the summands of Eq. (3):
\log p_{\theta}(\mathbf{y}_t \mid \mathbf{x}_t, \mathbf{x}_c, \mathbf{y}_c) = \log \mathbb{E}_{z \sim p(Z \mid \mathbf{x}_c, \mathbf{y}_c)}\, p_{\theta}(\mathbf{y}_t \mid \mathbf{x}_t, z) \qquad (5)
Here, p(Z|xc,yc) is not the distribution predicted by our encoder, but some true distribution we don’t have access to. The idea of variational inference (see for example Bishop (2006) for an introduction) is to approximate this p by some other distribution qθ and then to optimize pθ and qθ simultaneously. qθ is what our encoder E predicts, just like pθ is what our decoder G predicts. Continuing from Eq. (5):
\mathrm{LHS} = \log \mathbb{E}_{z \sim q_{\theta}(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \left[ p_{\theta}(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{p(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_{\theta}(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right] \qquad (6)

\geq \mathbb{E}_{z \sim q_{\theta}(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log \left( p_{\theta}(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{p(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_{\theta}(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right) \qquad (7)

\approx \mathbb{E}_{z \sim q_{\theta}(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log \left( p_{\theta}(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{q_{\theta}(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_{\theta}(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right) \qquad (8)

= \mathbb{E}_{z \sim q_{\theta}(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log p_{\theta}(\mathbf{y}_t \mid \mathbf{x}_t, z) - D_{\mathrm{KL}}\!\left( q_{\theta}(z \mid \mathbf{x}_t, \mathbf{y}_t)\, \|\, q_{\theta}(z \mid \mathbf{x}_c, \mathbf{y}_c) \right) \qquad (9)
where LHS refers to the left hand side of Eq. (5). In the first line, we have switched the underlying distribution from the true prior—meaning conditioned on the context—to an approximate posterior—meaning conditioned on both context and target, but for notational simplicity we only write out the target set. The second line follows from Jensen’s inequality while in the third line we have replaced the true prior with the approximate prior. Finally, we have rewritten the right hand side using the Kullback-Leibler (KL) divergence, a measure of distance between two distributions. Because we predict Gaussian distributions, the KL divergence has a closed-form expression. Otherwise it would be impractical to use it in an optimization context. The last line is often called the evidence lower bound (ELBO) in variational inference.
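For reference, the closed-form KL divergence between the two diagonal Gaussians appearing in Eq. (9) is the standard expression

D_{\mathrm{KL}}\big(\mathcal{N}(\mu_1, \sigma_1^2)\,\big\|\,\mathcal{N}(\mu_2, \sigma_2^2)\big) = \sum_{i=1}^{D_r} \left[ \log\frac{\sigma_{2,i}}{\sigma_{1,i}} + \frac{\sigma_{1,i}^2 + (\mu_{1,i} - \mu_{2,i})^2}{2\,\sigma_{2,i}^2} - \frac{1}{2} \right]

with (\mu_1, \sigma_1) the posterior and (\mu_2, \sigma_2) the prior predicted by the encoder.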
Let us put the above into more practical terms. When presented with an example consisting of context and target sets, we first use the encoder network E to encode each context tuple separately. The encoder is an MLP with two input channels (for X and Y), 6 hidden layers with 128 channels, and a final layer mapping to Dr channels, i.e. to the representation. While all hidden layers have a fixed dimension of 128, we vary the representation dimension Dr for our experiments (but never make it larger than 128). For the variational case, the final layer maps to 2Dr channels, half for the mean and half for the variance of the predicted Gaussian (in practice, we predict the log-variance to allow negative values). The individual representations are then averaged, and in the variational case we call this the prior (qθ(z|xc,yc) in Eq. (9)). For the posterior, we also encode the target pairs and then average over all individual representations, including the context. During training forward passes, we sample once from the posterior and use this sample as the representation for the decoder. Ideally, we should sample many times to integrate the expectation in Eq. (9), but for stochastic mini-batch training it was found empirically that a single sample suffices (Jimenez Rezende et al., 2014; Kingma & Welling, 2014). The decoder predicts a Gaussian from the representation and an input xt. It is implemented symmetrically to the encoder, meaning it is an MLP with Dr + 1 input channels, 6 hidden layers with 128 channels, and two output channels for mean and (log-)variance. We use tanh-activations as well. As a loss we directly use the negative log-likelihood, meaning we evaluate the likelihood of a reference point yt under a Gaussian parametrized by the predicted mean and variance. Finally, we average over all predicted points, which are the target points as well as the context points. We use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001, repeatedly decaying it with a factor of 0.995 after every 1000 batches. We train with a batch size of 256 for a total of 600 000 batches.
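A minimal sketch of the CNP just described, reconstructed from this paragraph rather than taken from the released code; the variational NP would instead let the encoder output 2Dr channels and add the KL term from Eq. (9):

```python
import torch
import torch.nn as nn

class CNP(nn.Module):
    """Minimal deterministic Neural Process: MLP encoder/decoder,
    6 hidden layers of width 128, tanh activations, mean aggregation."""
    def __init__(self, d_r=128, hidden=128, depth=6):
        super().__init__()
        enc = [nn.Linear(2, hidden), nn.Tanh()]
        for _ in range(depth - 1):
            enc += [nn.Linear(hidden, hidden), nn.Tanh()]
        self.encoder = nn.Sequential(*enc, nn.Linear(hidden, d_r))
        dec = [nn.Linear(d_r + 1, hidden), nn.Tanh()]
        for _ in range(depth - 1):
            dec += [nn.Linear(hidden, hidden), nn.Tanh()]
        self.decoder = nn.Sequential(*dec, nn.Linear(hidden, 2))

    def forward(self, xc, yc, xt):
        # encode each context pair separately, then average (global representation)
        z = self.encoder(torch.stack([xc, yc], dim=-1)).mean(dim=0)
        zx = torch.cat([z.expand(len(xt), -1), xt.unsqueeze(-1)], dim=-1)
        mu, log_var = self.decoder(zx).unbind(-1)
        return mu, log_var
```

Training then minimizes the Gaussian negative log-likelihood of Eq. (4) over both context and target points, e.g. with the Adam schedule given above.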
A.2 EXPERIMENT DETAILS
We conduct our experiments on two kinds of function spaces. The first is defined by a Gaussian Process prior using an EQ kernel given by:
k(x_1, x_2) = \exp\!\left( -\frac{\lVert x_1 - x_2 \rVert_2^2}{2l} \right) \qquad (10)
where l is a lengthscale parameter. This example was also used in the original works (Garnelo et al., 2018a;b). The second are random Fourier series, defined by:
f(x) = a_0 + \sum_{k=1}^{K} a_k \cos\left(kx - \varphi_k\right), \quad K = 19 \qquad (11)
where we sample φk and ak (including a0) randomly from the interval [−1, 1]. Note that k is an angular frequency, while results are presented for regular frequencies.
To construct training examples, we sample N context inputs and M target input values uniformly from the range [−3, 3]. N is a random integer from the range [3, 100), while M is a random integer from [N, 100). This sampling strategy was adopted from the original works and Le et al. (2018). y-values are generated by evaluating the above functions (or drawing from the distribution in the case of the GP) on the random input values. The models are trained by letting them predict/reconstruct both context and target points, conditioned only on the context. At test time, we are only interested in reconstructions, meaning target points and context points are identical, and we work with 200 equally spaced input values across the full range.
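A sketch of this sampling procedure (make_example is a hypothetical helper name, and f is assumed to be vectorized):

```python
import numpy as np

def make_example(f, rng):
    n = int(rng.integers(3, 100))      # context size from [3, 100)
    m = int(rng.integers(n, 100))      # target size from [n, 100)
    xc = rng.uniform(-3, 3, n)
    xt = rng.uniform(-3, 3, m)
    # the model reconstructs both sets, conditioned only on the context
    return (xc, f(xc)), (xt, f(xt))
```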
In the band filter experiment, we train models on Fourier series with some frequencies intentionally left out of the training data. When we train a model on data where some frequency components are blocked, the distribution of y-values a model sees during training becomes narrower. As a result, passing functions from the reference distribution (where no components are blocked) through a band-filter CNP will suppress the desired frequencies, but will also amplify non-blocked frequencies. To counteract this, we have to multiply the y-values of the reference data, which are approximately normally distributed, by σband/σref, i.e. the ratio of standard deviations of the relative y-distributions.
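The rescaling itself is a one-liner; the arrays below are placeholders for the actual y-value distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
y_band = rng.normal(scale=0.8, size=10_000)   # placeholder: band-filtered training y's
y_ref = rng.normal(scale=1.2, size=10_000)    # placeholder: reference-distribution y's

y_ref_adjusted = y_ref * (y_band.std() / y_ref.std())   # sigma_band / sigma_ref
```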
A.3 ADDITIONAL VISUALIZATIONS

1. What are the limitations of the experimental approach used in the paper?
2. How does the paper contribute to the understanding of neural processes in the frequency domain?
3. What are the implications of the paper's findings for the broader field of machine learning?
4. Are there any potential applications or use cases where the insights from this paper could be beneficial?

Review
The paper tries to analyze the behavior of Neural Processes in the frequency domain and concludes that such Processes can only represent oscillations up to a certain frequency.
While drawing a parallel between Neural Processes and signal processing, I think that there is some weakness in the experiments of the paper. In particular, the authors only seem to consider the exponential quadratic kernel to generate examples, which would mostly show examples of smooth functions, as would sampling Fourier linear combinations.
I am also unsure how this paper could be helpful to our community in its present form as it sheds some light on the inner workings of Neural Processes but only in a very limited practical setting.
ICLR | Title
Frequency Decomposition in Neural Processes
Abstract
Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and so far it has been unclear how these representations are learned and what kinds of functions can be represented. We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components, similar to a Fourier transform. In this context, we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce, depending on their representation size. This bound is confirmed empirically. Finally, we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others, which makes them programmable band-pass or band-stop filters.
N/A
Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and so far it has been unclear how these representations are learned and what kinds of functions can be represented. We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components, similar to a Fourier transform. In this context, we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce, depending on their representation size. This bound is confirmed empirically. Finally, we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others, which makes them programmable band-pass or band-stop filters.
1 INTRODUCTION
Neural Processes (Garnelo et al., 2018a;b) are a class of models that can learn a distribution over functions, or more generally a function space. In contrast to many other approaches that do the same, for example Bayesian Neural Networks, Neural Processes learn an explicit representation of such a function space, which allows them to condition their predictions on an arbitrary number of observations that are only available at test time. This representation is finite-dimensional, while function spaces are infinite-dimensional, and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so.
Our work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space, and in the process describes constraints and conditions that decide what kinds of function spaces can be represented. We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective, which allows us to derive a theoretical upper bound on the frequencies, i.e. Fourier components, of functions that can be represented. We subsequently confirm this bound empirically, which suggests that the learned representations should contain a notion of frequency. To further investigate this hypothesis, we continue with a visualization of the learned representations, which reveals that Neural Processes can decompose a function space into different frequency components, essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour. As further evidence of this we train Neural Processes to represent only certain frequencies, which results in them suppressing those frequencies that were not observed in the training data. Our contributions can be summarized as follows1:
• We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct. As we show, the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval.
• We investigate learned representations qualitatively, presenting evidence that Neural Processes perform a frequency decomposition of the function space, akin to a Fourier transform. This behaviour is not incentivized externally but rather emerges naturally.
1The complete source code to reproduce our experiments is available at https://github.com/***
• We show that by choosing the training distribution appropriately, Neural Processes can be made to represent certain frequencies and suppress others, which turns them into programmable band-pass or band-stop filters.
2 BACKGROUND
Neural Processes (Garnelo et al., 2018a;b) are maps P : C,X → Y , where C is a set of tuples {(x, f(x))}Nc=1 =: (xc, f(xc))2 with arbitrary but positive cardinality N , and f ∈ F : X → Y . C is often called the context, because Neural Processes perform predictions for values xt ∈ X (t for target), conditioned on these points. F is the function space we would like to find a representation of. Note that some sources define function spaces as any set of functions with a shared domain and co-domain, while others require them to be vector spaces as well. We don’t concern ourselves with this distinction and further restrict our work to X = Y = R, because it allows us to visualize learned representations. We only look at the original Neural Processes, namely the deterministic Conditional Neural Processes (CNP) (Garnelo et al., 2018a) and the variational Neural Processes (NP) (Garnelo et al., 2018b), because newer contributions in the field work in ways that preclude them from being analyzed in the same way. We discuss this further in Section 5. In CNPs and NPs, the map P is separated into two parts, a so called encoding E : C → Z and a decoding or generating part G : Z,X → Y . Z is referred to as the representation or latent space. To allow Neural Processes to approximate arbitrary3 function spaces F , E and G are typically chosen to be powerful approximators, specifically neural networks, as the name suggests.
The defining characteristic of CNPs and NPs is that E encodes individual pairs (x, f(x)) from the context separately, and the resulting representations are averaged to form a global representation, meaning one that is independent of the target points xt at which we then evaluate the Neural Process. This is often not the case in later work, for example in Attentive Neural Processes (Kim et al., 2019), where the individual representations are instead aggregated using an attention mechanism that depends on xt. In CNPs the representations are deterministic, while in NPs they parametrize mean and (log-)variance of a Gaussian distribution, so the latter are trained using variational inference. For details on implementation and training we refer to Appendix A.1. Our work will investigate how these global representations, which are finite-dimensional, represent infinite-dimensional function spaces.
As stated above,E and by extension the Neural Process P acts on set-valued inputs. This is contrary to the vast majority of machine learning work where inputs are vectors of fixed dimension and ordering. Recall that sets are permutation invariant, so we must ensure that the same is true for the output of E. It is easy to see that this is given when we average individual encodings, but Zaheer et al. (2017) show that it is in fact the only way to ensure it: E is permutation-invariant if and only if it has a so-called sum-decomposition, i.e. it can be represented in the form
E(x) = ρ ( N∑ i=1 φ(xi) ) (1)
where ρ, φ are appropriately chosen functions. Wagstaff et al. (2019) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N , the dimension of the image Z must at least be N . This will become relevant in the following section.
3 AN UPPER BOUND ON SIGNAL FREQUENCIES
We mentioned in the previous section that the encoder E in a Neural Process should have a sumdecomposition, so that the global representations are permutation-invariant, as shown in Zaheer et al. (2017). Expanding on this, Wagstaff et al. (2019) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N . What these works do not consider are the implications for situations where the elements of
2We use boldface as a shorthand for sets, not vectors. 3This will depend on the implementation of E and G, and for neural networks F is practically restricted to
continuous and differentiable functions.
the sets are input-output tuples of some function f , as it is typically the case in Neural Processes. We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process. In order to do this, we must first define what it means to successfully learn a representation of a function space. Definition 3.1 (Representation of Function Spaces in Neural Processes). We say that a Neural Processes P has learned a representation of a function space F , defined on an interval [a, b] ⊂ R, if, for some error tolerance , it holds for all x ∈ [a, b] and for all f ∈ F , represented as a suitable set of discrete measurements (xf , f(xf )), that |P ((xf , f(xf )), x)− f(x)| < .
That means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance. The choice of this tolerance is essentially arbitrary, but should reflect that for g /∈ F the reconstructions should generally not be accurate within . We also write that f is represented as a suitable set of discrete measurements, by which we mean that it must be possible to reconstruct f from those measurements.
Switching to signal-processing terminology, we know that to represent a continuous signal as a set of discrete measurements, we need to sample it at points with a distance of at most τ = 1/(2νmax), where νmax is the maximum frequency component of the signal. This is most commonly known as the Nyquist-Shannon sampling theorem (Whittaker, 1915; Kotelnikov, 1933; Shannon, 1949). For any finite real interval [a, b], this translates to a number of sampling points N > 2|b − a|νmax. The latter allows us to make a connection to the findings by Wagstaff et al. (2019), so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct. Theorem 3.1 (Maximum Frequency in Neural Process Representations). A Neural Process P with latent dimension Dr can only learn a representation of some function space F defined on a finite interval [a, b] ⊂ R if for all f ∈ F with a maximum frequency content νmax,f it holds that:
νmax,f < Dr
2|b− a| (2)
Note that this means we should in theory be able to represent any function space that obeys Eq. (2) to within arbitrarily small . In practice, we will typically have less control over F , and we only find approximate representations. Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq. (2). It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points. During training, we work with randomly sampled inputs, but at test time equidistant points are used, as we outline in Appendix A.2.
4 EXPERIMENTS & RESULTS
4.1 VALIDATION OF THE FREQUENCY BOUND
Our experiments are grouped into three parts. The first experiment seeks to test the validity of the bound we just derived in Eq. (2). In particular, we train Neural Processes with varying representation sizes on two exemplary function spaces, so that for some models the representation size is insufficient to represent all frequencies. The function spaces we base our experiments on are those defined by Gaussian Process priors (for an introduction see for example Rasmussen & Williams (2006)) with an exponentiated-quadratic (EQ) kernel with lengthscale parameter l, as well as those defined by random real-valued Fourier series—for details we refer to Appendix A.2. While the Gaussian Process samples have an average Fourier magnitude that smoothly decays to zero, the distribution of Fourier magnitudes is uniform for the Fourier series, as shown in Fig. A.1. The Fourier series space also grants us precise control over the frequency domain, which will be useful in subsequent experiments.
Figure 1 shows example reconstructions in a deterministic Neural Process (CNP) for samples from a Gaussian Process prior with EQ kernel (l = 0.05) and from a random Fourier series. For the GP example, the CNP essentially acts like a low-pass filter when the representation size is insufficient, which qualitatively confirms the bound we derived in Eq. (2). Interestingly, the bound can also be observed in a different way: for the Fourier series example, the CNP hardly suppresses high
frequencies, but instead limits the effective interval of the signal, simply ignoring the outer regions of it. Both behaviours are in agreement with the bound in Eq. (2). The Fourier example also serves as a good sanity check: with K = 19 (the maximum angular frequency) the data has a maximum frequency of νmax = K/(2π) = 3.02. For Dr = 32 this would limit the size of the interval to |b − a| < 5.29, for Dr = 16 to |b − a| < 2.65. The reconstructed signal regions in Fig. 1 are a bit narrower, and thus in good agreement with the bound we derived. For a variational Neural Process, we observe the same behaviour, but with stronger dampening of high frequencies in both cases, as seen in Fig. A.2. In Fig. A.3 we show the average reconstruction error for CNPs and NPs of different representation sizes, applied to GP examples with varying lengthscale, which results in a smooth decrease in error for larger representations and larger lengthscale parameters, as one would expect.
4.2 HOW DO NEURAL PROCESSES REPRESENT FUNCTION SPACES?
Having found that Neural Processes do indeed observe the bound we derived in Eq. (2), we seek to understand how this happens. To this end, we visualize the learned representations in Neural Processes, which is possible because we restrict ourselves to X = Y = R. Again looking at the two function spaces from the previous experiment, we sample pairs (x, y) on a regular grid (50 × 50) with x ∈ [−3, 3], which is our training input range, and also y ∈ [−3, 3] as it suitably covers the value range of outputs. We then encode each pair individually to a representation, thus constructing a map ri(x, y) for each representation channel. The latter allows us to uncover potential patterns and to gain a better understanding of how Neural Processes learn representations of function spaces.
Figure 2 presents example representation channels for CNPs and NPs, trained on samples from a Gaussian Process with an EQ-kernel (l = 0.2) and on random Fourier series. The individual channels were selected to illustrate the general patterns of behaviour we observed. First, we find that representations are almost always anti-symmetric across y = 0. This is not surprising, as the function spaces we look at are on average symmetric—in the sense that f and −f will occur with the same probability—so the Neural Process learns the same representation, just with a different sign. More importantly, we find that both NPs and CNPs implicitly form a representation of the input space (i.e. the relevant interval of the function space domain), in the sense that different regions of the input space map to different representation channels. In CNPs this results in an oscillating pattern, with different channels exhibiting different frequencies. In other words, the CNP performs a frequency decomposition of the function space, not unlike a Fourier transform. At the
same time, there is nothing that would enforce orthogonality between the different representation dimensions, and the Fourier series example highlights that we can generally expect a mixture of multiple frequencies for a given dimension. It should be noted that this frequency decomposition emerges naturally and is not incentivized externally (e.g. by a special loss).
Even though NPs behaved very similarly to CNPs in the previous section, their learned representations look vastly different from those in a CNP. Instead of a frequency decomposition, they seem to partition the input space, so that a given representation dimension is written to by a specific, narrow region of the input space. Only for channels with a low average magnitude (i.e. a large index in Fig. 2) do we find behaviour similar to CNPs. We conclude that NPs can in principle learn a frequency decomposition, but their variational formulation—the only difference to CNPs— disincentivizes it. We show more representations for CNPs and NPs trained on GP data in Fig. A.4 and Fig. A.5, and for CNPs and NPs trained on Fourier series data in Fig. A.6 and Fig. A.7, sorting channels by their average magnitude.
4.3 NEURAL PROCESSES AS BAND FILTERS
Our final experiment is designed to show that we can exert more control over the learned representations, and it will serve as additional evidence that deterministic Neural Processes (CNP) perform a frequency decomposition of the function space they represent. At the same time, it suggests a possible practical application of Neural Processes. We saw in Section 4.1 that CNPs sometimes act like low-pass filters, which could be a useful application, but the emergence of that behaviour is not reliable. We now train CNPs with a sufficiently large representation size (Dr = 128) to be bandpass and band-stop filters. To this end, we train the models on the Fourier series defined by Eq. (11), but for the band-stop we set all components ak to zero for which 5 ≤ k ≤ 14, and likewise set all ak to zero outside of that range for the band-pass. We then look at the reconstructions of examples from the original series with all components present. For more details on the training procedure and how we sample points for function evaluation, please see Appendix A.1 and Appendix A.2.
The average Fourier magnitude of the training functions for the different models is given by the bottom left panel in Fig. 3. In the first model (Reference), all components are allowed; in the second (Band-stop), components in the middle of that range are suppressed; in the third (Band-pass) only components in the middle of the range are allowed. We then apply these models to examples from the reference data distribution, the result of which can be seen in the bottom-right panel of Fig. 3. The models that are only shown certain frequencies during training will suppress those frequencies that were not found in the training data, meaning they effectively become programmable band-stop or band-pass filters. This is confirmed by the example in the top rows of the figure, where we show
both the signal and its Fourier transform magnitude. Note that one needs to adjust the value range of the reference data before passing them through the band filters to prevent gain in the non-suppressed frequency regions. We give more details in Appendix A.2.
Unfortunately, we were only partly able to elicit the same behaviour in variational NPs. While the trained band-stop filter worked exactly like the CNP band-stop, we were not able to train a bandpass filter. The models collapsed during training, meaning the loss plateaued and no meaningful representations were learned. There is no obvious reason why a band-pass shouldn’t work when a band-stop does, so we suspect our hyperparameter configuration was not ideal and that with more tuning it would be possible to train a band-pass as well. The NP results are shown in Fig. A.8.
5 RELATED WORK
Neural Processes broadly relate to the topic of learning distributions of functions, even though we speak of the less restrictive term function space in our work. In this context, Bayesian Neural Networks (see for example Neal (1996); Graves (2011); Hernández-Lobato & Adams (2015)) are a popular choice, which place distributions on the weights of a network. However, in doing so they only implicitly represent distributions over functions, while Neural Processes learn an explicit finite-dimensional representation that can be leveraged for predictions, so as to condition on context observations given at test time. Perhaps the most well known class of methods that do the same are Gaussian Processes (for an introduction see Rasmussen & Williams (2006)). These are stochastic processes represented by a joint Gaussian distribution over context and target points, defined via the covariance matrix by a kernel. All flexibility of Gaussian Processes to represent different distributions of functions is decided by this kernel, so many works try to learn it (Yang et al., 2015; Wilson et al., 2016b;a; Tossou et al., 2019; Calandra et al., 2016). Even though Neural Processes were originally motivated by Gaussian Processes, they can be understood as orthogonal methods: Gaussian Processes represent a function space using a (potentially learned) kernel, while Neural Processes represent them in a learned finite-dimensional space.
Neural Processes can also be interpreted from the perspective of deep learning on sets, the earliest work in the field being Zaheer et al. (2017). More theoretical contributions were made by Wagstaff et al. (2019), whose work we use to underpin our finding that the representation size in Neural Processes limits the maximum frequency of signals that can be represented. More applied work in the set-learning context has mostly been performed on point-cloud data (Qi et al., 2017b;a; Wu et al., 2019), which can be interpreted as a higher-dimensional instance of learning function spaces. Validating our findings in higher-dimensional spaces is an important direction for future work.
Neural Processes have inspired a number of follow-up works. Perhaps the most well known addition are Attentive Neural Processes (Kim et al., 2019), which replace the averaging of individual representations with a learned attention mechanism (Vaswani et al., 2017). The aggregate representations are thus no longer independent of the target inputs, and no global representation is learned. This holds true for most follow-up work. Convolutional Conditional Neural Processes (Gordon et al., 2020) propose to no longer learn a finite-dimensional representation at all and instead work in function space by applying a CNN on suitable and variable discretizations of a kernel density estimate. Similar to ANP, Louizos et al. (2019) propose to not merge observations into a global latent space, but instead learn conditional relationships between them. Singh et al. (2019) and Willi et al. (2019) address the problem of overlapping and changing dynamics in time series data. Relating this to our work, it would be possible to test how the original Neural Processes would represent functions where the average frequency content is not constant over the domain. We leave this investigation for future work. Neural Processes have also been extended to scenarios where the function space maps to entire images, in the form of Generative Query Networks (GQN) (Eslami et al., 2018; Kumar et al., 2018). Employing vastly more powerful decoders, they can (re-)construct unseen views in 3D scenes, which relates Neural Processes to the field of 3D scene understanding, an area that has received a lot of attention more recently (Sitzmann et al., 2019; Engelcke et al., 2020; Mildenhall et al., 2020). Sitzmann et al. (2020) show that periodic activation functions make it easier for networks to learn so-called implicit representations—mappings from coordinates to a density, RGB values, etc.. We did in fact try periodic activation functions in our experiments, but found no difference to using tanh-activations. In the same context, Tancik et al. (2020) show that coordinates in Fourier space are often superior to coordinates in signal space to produce fine detail. We interpret this as an indication
that a representation in frequency space is more efficient for many signals, which could explain why Neural Processes implicitly perform a frequency decomposition. Note that the above introduces Fourier features explicitly as a form of inductive bias, while Neural Processes automatically learn this form of representation.
It is well known that neural networks, specifically an MLP with at least one hidden layer, can learn the Fourier transform of an input signal (Gallant & White, 1988). In fact, there have been a multitude of works that exploit this ability in one way or another, leading to the term Fourier Neural Networks. We refer to the recent review by Zhumekenov et al. (2019) for a comprehensive overview. The difference from Neural Processes is that these works typically apply networks directly to a sequence of points, while NPs learn a mapping that is only applied to individual (x, y) pairs, the representations of which are averaged. We emphasize again that the frequency decomposition occurs naturally in NPs, while these works usually employ direct supervision.
6 DISCUSSION
The goal of this work was to gain a better understanding of the mechanisms that allow Neural Processes to form finite-dimensional representations of infinite-dimensional function spaces. To the best of our knowledge, ours is the first work to investigate this question, and our findings are both surprising and meaningful in this context. We first derived a theoretical upper bound on the frequency of signals that can be represented in Neural Processes with a given representation size. We empirically confirmed that the representation size does indeed pose such a limit and that this can result in Neural Processes acting like low-pass filters. Alternatively, models ignore parts of the signal to keep higher frequencies. Both behaviours are in agreement with the derived bound. We then visualized learned representations to understand how the models incorporate the concept of frequency into them. In all cases the models formed an implicit representation of the input space, in the sense that different x-values are mapped to different representation channels. For CNPs, an oscillating pattern emerges, such that different representation channels correspond to different frequencies, from which we concluded that CNPs perform a frequency decomposition of the function space they learn to represent. It should be noted that this behaviour emerges naturally and is not explicitly encouraged (e.g. by a special loss). In contrast to this, NPs tend to partition the space into more or less disjoint regions. They are still able to learn a frequency decomposition like CNPs, but we assume that the variational training objective makes it harder to do so, as sampling from the representation during training can also be understood as a random perturbation. For VAEs, which are conceptually similar to NPs, it was also suggested that models partition their latent space in a way that maximally spreads representations of individual data points under the prior distribution (Rezende & Viola, 2018). Finally, to further test the models’ ability to distinguish frequencies and also as an example of possible practical benefits of our findings, we trained CNPs to be band-pass and band-stop filters. This worked extremely well: the Fourier component magnitudes of the training data are essentially “baked” into the models, and any frequency not found therein is subsequently suppressed in reconstructions from the models. An obvious use case would be programmable frequency filters, for instance when a more complex frequency response is desired.
Overall, our work offers exciting new insights into the inner workings of Neural Processes and into the learning of representations of function spaces. Many applications of deep learning are concerned with representation learning in some way, and we hope that our findings inspire further research and forge a better understanding of the methods used in the field. Our work also opens up a number of exciting questions for future work. We only look at function spaces with scalar domain, and while we expect that our findings translate to higher dimensions, the same should be validated empirically. Seeing that variational Neural Processes can in principle learn frequency decompositions, it would be interesting to investigate how we can further incentivize this behaviour in them. Likewise, it should be possible to encourage orthogonality between the individual representation dimensions, so that frequencies are more cleanly separated. Further theoretical exploration of the conditions, besides frequency content, that allow function spaces to be represented could also be worthwhile. Finally, it is not immediately obvious how our findings translate to scenarios that disallow a classical definition of frequency, for example when the observations are entire images as in Eslami et al. (2018).
A APPENDIX
A.1 OPTIMIZATION & IMPLEMENTATION
To train Neural Processes, we represent individual examples f ∈ F as sets of randomly sampled evaluations (x, f(x) = y), which we partition into context set (xc,yc) and target set (xt,yt). We further have encoder E and decoder G of a Neural Process implemented as neural networks, for which we summarize the parameters in θ. In our implementation, both are multilayer perceptrons (MLP), meaning simple fully connected networks. Our goal is then to find the optimal set of parameters θ∗ that maximizes the likelihood of yt, given xc, yc and xt, over all f :
\theta^* = \arg\max_\theta \sum_{f \in F} \log p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, \mathbf{x}_c, \mathbf{y}_c) \quad (3)
where pθ is a placeholder for some parametrized likelihood function. We introduce the logarithm because we assume the likelihood factorizes across individual f, turning the expression into a sum. So what would this optimization look like in practice? For example, we could minimize the mean squared error between yt and the predictions ŷt from our network. This would implicitly assume a Gaussian likelihood with a fixed variance. However, we would like our model to predict a variance, so that it can indicate how uncertain it is about a prediction, and because Le et al. (2018) found that this results in overall better performance. We achieve this by implementing G as a network that predicts both the mean and the variance of a diagonal Gaussian distribution, and Eq. (3) becomes:
\theta^* = \arg\max_\theta \sum_{f \in F} \sum_t \log \mathcal{N}\!\left(y_t;\, G^{\mu}_{\theta}(Z, x_t),\, G^{\sigma}_{\theta}(Z, x_t)\right) \quad (4)
In deterministic Neural Processes (CNP), we can directly optimize this with maximum likelihood training. In variational Neural Processes (NP), Z is also parametrized by a Gaussian, meaning just like G, E predicts mean and variance of a Gaussian with Dr dimensions. In this case, we need to rewrite the summands of Eq. (3):
\log p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, \mathbf{x}_c, \mathbf{y}_c) = \log \mathbb{E}_{z \sim p(Z \mid \mathbf{x}_c, \mathbf{y}_c)}\, p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z) \quad (5)
Here, p(Z|xc,yc) is not the distribution predicted by our encoder, but some true distribution we don’t have access to. The idea of variational inference (see for example Bishop (2006) for an introduction) is to approximate this p by some other distribution qθ and then to optimize pθ and qθ simultaneously. qθ is what our encoder E predicts, just like pθ is what our decoder G predicts. Continuing from Eq. (5):
\mathrm{LHS} = \log \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \left[ p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{p(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right] \quad (6)

\geq \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log \left( p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{p(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right) \quad (7)

\approx \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log \left( p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{q_\theta(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right) \quad (8)

= \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z) - D_{\mathrm{KL}}\!\left(q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t) \,\|\, q_\theta(z \mid \mathbf{x}_c, \mathbf{y}_c)\right) \quad (9)
where LHS refers to the left hand side of Eq. (5). In the first line, we have switched the underlying distribution from the true prior—meaning conditioned on the context—to an approximate posterior—meaning conditioned on both context and target, but for notational simplicity we only write out the target set. The second line follows from Jensen’s inequality while in the third line we have replaced the true prior with the approximate prior. Finally, we have rewritten the right hand side using the Kullback-Leibler (KL) divergence, a measure of distance between two distributions. Because we predict Gaussian distributions, the KL divergence has a closed-form expression. Otherwise it would be impractical to use it in an optimization context. The last line is often called the evidence lower bound (ELBO) in variational inference.
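Since the prior and the approximate posterior above are both diagonal Gaussians, the KL term in Eq. (9) has a closed-form expression. A minimal sketch (our own helper, with names of our choosing, not from the paper's code):

```python
import torch

def gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    # D_KL(q || p) between two diagonal Gaussians, summed over dimensions
    var_q, var_p = torch.exp(log_var_q), torch.exp(log_var_p)
    return 0.5 * torch.sum(log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
```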
Let us put the above into more practical terms. When presented with an example consisting of context and target sets, we first use the encoder network E to encode each context tuple separately. The encoder is an MLP with two input channels (for X and Y), 6 hidden layers with 128 channels, and a final layer mapping to Dr channels, i.e. to the representation. While all hidden layers have a fixed dimension of 128, we vary the representation dimension Dr for our experiments (but never make it larger than 128). For the variational case, the final layer maps to 2Dr channels, half for the mean and half for the variance of the predicted Gaussian (in practice, we predict the log-variance to allow negative values). The individual representations are then averaged, and in the variational case we call this the prior (qθ(z|xc,yc) in Eq. (9)). For the posterior, we also encode the target pairs and then average over all individual representations, including the context. During training forward passes, we sample once from the posterior and use this sample as the representation for the decoder. Ideally, we should sample many times to integrate the expectation in Eq. (9), but for stochastic mini-batch training it was found empirically that a single sample suffices (Jimenez Rezende et al., 2014; Kingma & Welling, 2014). The decoder predicts a Gaussian from the representation and an input xt. It is implemented symmetrically to the encoder, meaning it is an MLP with Dr + 1 input channels, 6 hidden layers with 128 channels, and two output channels for mean and (log-)variance. We use tanh-activations as well. As a loss we directly use the negative log-likelihood, meaning we evaluate the likelihood of a reference point yt under a Gaussian parametrized by the predicted mean and variance. Finally, we average over all predicted points, which are the target points as well as the context points. We use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001, repeatedly decaying it with a factor of 0.995 every 1000 batches. We train with a batch size of 256 for a total of 600,000 batches.
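For concreteness, below is a minimal PyTorch-style sketch of the deterministic (CNP) encoder and decoder just described. It is our own illustration rather than the authors' released code, and the module and function names are ours:

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, d_hidden=128, n_hidden=6):
    # 6 hidden layers of width 128 with tanh activations, as described above
    layers, d = [], d_in
    for _ in range(n_hidden):
        layers += [nn.Linear(d, d_hidden), nn.Tanh()]
        d = d_hidden
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)

class CNP(nn.Module):
    def __init__(self, d_r=128):
        super().__init__()
        self.encoder = mlp(2, d_r)        # input: one (x, y) pair
        self.decoder = mlp(d_r + 1, 2)    # input: representation plus target x

    def forward(self, xc, yc, xt):
        # xc, yc: (N, 1) context; xt: (M, 1) target inputs
        r = self.encoder(torch.cat([xc, yc], dim=-1)).mean(dim=0)  # average -> global representation
        h = self.decoder(torch.cat([r.expand(xt.shape[0], -1), xt], dim=-1))
        mu, log_var = h.chunk(2, dim=-1)  # predicted mean and log-variance
        return mu, log_var

def nll_loss(mu, log_var, y):
    # negative log-likelihood of y under the predicted diagonal Gaussian
    dist = torch.distributions.Normal(mu, torch.exp(0.5 * log_var))
    return -dist.log_prob(y).mean()
```

The variational (NP) variant would additionally predict a mean and log-variance for the representation itself and sample once from the resulting Gaussian, as described above.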
A.2 EXPERIMENT DETAILS
We conduct our experiments on two kinds of function spaces. The first is defined by a Gaussian Process prior using an EQ kernel given by:
k(x_1, x_2) = \exp\left(-\frac{\|x_1 - x_2\|_2^2}{2l}\right) \quad (10)
where l is a lengthscale parameter. This example was also used in the original works (Garnelo et al., 2018a;b). The second are random Fourier series, defined by:
f(x) = a_0 + \sum_{k=1}^{K} a_k \cos\left(kx - \varphi_k\right), \quad K = 19 \quad (11)
where we sample φk and ak (including a0) randomly from the interval [−1, 1]. Note that k is an angular frequency, while results are presented for regular frequencies.
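For illustration, both function spaces can be sampled with a few lines of NumPy. This sketch is our own, and the small jitter added before the Cholesky factorization is a standard numerical-stability assumption rather than part of the paper:

```python
import numpy as np

def sample_gp(x, l=0.2):
    # draw one function from a GP prior with the EQ kernel of Eq. (10)
    K = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * l))
    L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))  # jitter for stability
    return L @ np.random.randn(len(x))

def sample_fourier(x, K=19):
    # draw one random Fourier series as in Eq. (11)
    a = np.random.uniform(-1, 1, K + 1)    # a_0 ... a_K
    phi = np.random.uniform(-1, 1, K)
    ks = np.arange(1, K + 1)
    return a[0] + np.sum(a[1:, None] * np.cos(ks[:, None] * x[None, :] - phi[:, None]), axis=0)
```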
To construct training examples, we sample N context inputs and M target input values uniformly from the range [−3, 3]. N is a random integer from the range [3, 100), while M is a random integer from [N, 100). This sampling strategy was adopted from the original works and Le et al. (2018). y-values are generated by evaluating the above functions (or drawing from the distribution in the case of the GP) on the random input values. The models are trained by letting them predict/reconstruct both context and target points, conditioned only on the context. At test time, we are only interested in reconstructions, meaning target points and context points are identical, and we work with 200 equally spaced input values across the full range.
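A sketch of this example construction (our own helper; for GP data the y-values would be drawn jointly at the union of context and target inputs rather than by evaluating a fixed f):

```python
import numpy as np

def make_example(f, n_max=100):
    # N context points and M >= N target points, inputs uniform on [-3, 3]
    N = np.random.randint(3, n_max)
    M = np.random.randint(N, n_max)
    xc, xt = np.random.uniform(-3, 3, N), np.random.uniform(-3, 3, M)
    return (xc, f(xc)), (xt, f(xt))
```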
In the band filter experiment, we train models on Fourier series with some frequencies intentionally left out of the training data. When we train a model on data where some frequency components are blocked, the distribution of y-values a model sees during training becomes narrower. As a result, passing functions from the reference distribution (where no components are blocked) through a band-filter CNP will suppress the desired frequencies, but will also amplify non-blocked frequencies. To counteract this, we have to multiply the y-values of the reference data, which are approximately normally distributed, by σband/σref, i.e. the ratio of standard deviations of the respective y-distributions.
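In code, this correction is a single rescaling (variable names are ours):

```python
import numpy as np

sigma_band = np.std(y_band_train)  # y-values seen by the band-filter model during training
sigma_ref = np.std(y_ref)          # y-values of the unfiltered reference distribution
y_ref_scaled = y_ref * sigma_band / sigma_ref
```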
A.3 ADDITIONAL VISUALIZATIONS | 1. What is the focus of the paper, and how does it contribute to understanding Neural Processes?
2. What are the strengths and weaknesses of the paper's empirical observations?
3. Do you have any concerns regarding the main claims of the paper, particularly about the "frequency decomposition" observation and the theoretical upper bound on frequency?
4. How do you suggest the authors improve their work to better support their claims? | Review | Review
This paper addresses an interesting and timely problem, which is to understand how Neural Processes work to learn a representation of a function space. Offering a closer investigation into a recently introduced framework, this work will likely be of interest to the ICLR community. The work focuses on the 1-dimensional case and tries to analyze the simplest case in a rigorous way, which I think is a good approach in general.
However, I have some concerns about the main claims of this paper, as listed below:
One of the main findings of the paper is an observation that Neural Processes perform a "frequency decomposition". However, I think this is an insufficiently supported, and even misleading, overstatement. Indeed, Figure 2 shows that there are different modes dominated by varying characteristic frequencies, where a higher-rank mode shows a more slowly varying feature; but there is no further evidence that the decomposition is actually based on the frequency of the signal. One would get a similar result by simply doing a Principal Component Analysis. When you say "frequency decomposition" it carries a clear mathematical meaning, and it is a much stronger statement than what the paper reports empirically.
That said, I agree that the empirical observations are interesting. Perhaps the observations in the paper's experiments may be better described in a frame of global mode decomposition (CNP) vs. local feature detection (NP)?
I also think that the claim about the theoretical upper bound on the frequency is overstated, the way it is stated currently. The validity of the statement (Theorem 3.1) really depends on the assumption of uniform sampling, which is mentioned as a note after Theorem 3.1. Of course, I fully agree that it is an important starting step to get rigorous results in simplified conditions. But those conditions should be mentioned as part of the statement, especially when it is highly likely that the conditions are not met in the use case (there is no reason to expect that the x values in the context set are close to uniform). For example, it is possible to encode functions with a localized feature whose (local) frequency is higher than your derived bound, by using more samples around that high-frequency feature.
This paper will get views, partly because it is actually asking an interesting question, and partly because of the boldness and attractiveness of the claims made. How exciting is it to discover a naturally emerging Fourier transform? Except... that's not exactly what one can say just yet (I think). I believe the authors should either support the paper's claims by further work, or tone down their overall framing — major changes either way. While I think this work is headed to a promising direction, given the concerns described above, I recommend a rejection at this time.
UPDATE: I appreciate the authors' responses and the engaged discussion. However, I still think that the claims of the paper are not sufficiently supported by the presented results, and maintain my original rating. |
ICLR | Title
Frequency Decomposition in Neural Processes
Abstract
Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and so far it has been unclear how these representations are learned and what kinds of functions can be represented. We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components, similar to a Fourier transform. In this context, we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce, depending on their representation size. This bound is confirmed empirically. Finally, we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others, which makes them programmable band-pass or band-stop filters.
1 INTRODUCTION
Neural Processes (Garnelo et al., 2018a;b) are a class of models that can learn a distribution over functions, or more generally a function space. In contrast to many other approaches that do the same, for example Bayesian Neural Networks, Neural Processes learn an explicit representation of such a function space, which allows them to condition their predictions on an arbitrary number of observations that are only available at test time. This representation is finite-dimensional, while function spaces are infinite-dimensional, and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so.
Our work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space, and in the process describes constraints and conditions that decide what kinds of function spaces can be represented. We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective, which allows us to derive a theoretical upper bound on the frequencies, i.e. Fourier components, of functions that can be represented. We subsequently confirm this bound empirically, which suggests that the learned representations should contain a notion of frequency. To further investigate this hypothesis, we continue with a visualization of the learned representations, which reveals that Neural Processes can decompose a function space into different frequency components, essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour. As further evidence of this we train Neural Processes to represent only certain frequencies, which results in them suppressing those frequencies that were not observed in the training data. Our contributions can be summarized as follows¹:
• We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct. As we show, the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval.
• We investigate learned representations qualitatively, presenting evidence that Neural Processes perform a frequency decomposition of the function space, akin to a Fourier transform. This behaviour is not incentivized externally but rather emerges naturally.
• We show that by choosing the training distribution appropriately, Neural Processes can be made to represent certain frequencies and suppress others, which turns them into programmable band-pass or band-stop filters.

¹ The complete source code to reproduce our experiments is available at https://github.com/***
2 BACKGROUND
Neural Processes (Garnelo et al., 2018a;b) are maps P : C,X → Y, where C is a set of tuples \{(x, f(x))\}_{c=1}^{N} =: (\mathbf{x}_c, f(\mathbf{x}_c))² with arbitrary but positive cardinality N, and f ∈ F : X → Y. C is often called the context, because Neural Processes perform predictions for values x_t ∈ X (t for target), conditioned on these points. F is the function space we would like to find a representation of. Note that some sources define function spaces as any set of functions with a shared domain and co-domain, while others require them to be vector spaces as well. We don't concern ourselves with this distinction and further restrict our work to X = Y = ℝ, because it allows us to visualize learned representations. We only look at the original Neural Processes, namely the deterministic Conditional Neural Processes (CNP) (Garnelo et al., 2018a) and the variational Neural Processes (NP) (Garnelo et al., 2018b), because newer contributions in the field work in ways that preclude them from being analyzed in the same way. We discuss this further in Section 5. In CNPs and NPs, the map P is separated into two parts, a so-called encoding E : C → Z and a decoding or generating part G : Z,X → Y. Z is referred to as the representation or latent space. To allow Neural Processes to approximate arbitrary³ function spaces F, E and G are typically chosen to be powerful approximators, specifically neural networks, as the name suggests.
The defining characteristic of CNPs and NPs is that E encodes individual pairs (x, f(x)) from the context separately, and the resulting representations are averaged to form a global representation, meaning one that is independent of the target points xt at which we then evaluate the Neural Process. This is often not the case in later work, for example in Attentive Neural Processes (Kim et al., 2019), where the individual representations are instead aggregated using an attention mechanism that depends on xt. In CNPs the representations are deterministic, while in NPs they parametrize mean and (log-)variance of a Gaussian distribution, so the latter are trained using variational inference. For details on implementation and training we refer to Appendix A.1. Our work will investigate how these global representations, which are finite-dimensional, represent infinite-dimensional function spaces.
As stated above, E, and by extension the Neural Process P, acts on set-valued inputs. This is contrary to the vast majority of machine learning work, where inputs are vectors of fixed dimension and ordering. Recall that sets are permutation invariant, so we must ensure that the same is true for the output of E. It is easy to see that this is given when we average individual encodings, but Zaheer et al. (2017) show that it is in fact the only way to ensure it: E is permutation-invariant if and only if it has a so-called sum-decomposition, i.e. it can be represented in the form
E(\mathbf{x}) = \rho\!\left(\sum_{i=1}^{N} \phi(x_i)\right) \quad (1)
where ρ, φ are appropriately chosen functions. Wagstaff et al. (2019) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N , the dimension of the image Z must at least be N . This will become relevant in the following section.
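For illustration, such a sum-decomposition is straightforward to write down. The following PyTorch sketch is our own, and it is permutation-invariant by construction because summation is commutative:

```python
import torch
import torch.nn as nn

class SumDecomposition(nn.Module):
    # E(x) = rho(sum_i phi(x_i)), the form of Eq. (1)
    def __init__(self, d_in, d_latent, d_out):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_latent), nn.Tanh())
        self.rho = nn.Linear(d_latent, d_out)

    def forward(self, xs):           # xs: (N, d_in), a set of N elements
        return self.rho(self.phi(xs).sum(dim=0))
```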
3 AN UPPER BOUND ON SIGNAL FREQUENCIES
We mentioned in the previous section that the encoder E in a Neural Process should have a sum-decomposition, so that the global representations are permutation-invariant, as shown in Zaheer et al. (2017). Expanding on this, Wagstaff et al. (2019) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N. What these works do not consider are the implications for situations where the elements of the sets are input-output tuples of some function f, as is typically the case in Neural Processes. We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process. In order to do this, we must first define what it means to successfully learn a representation of a function space.

Definition 3.1 (Representation of Function Spaces in Neural Processes). We say that a Neural Process P has learned a representation of a function space F, defined on an interval [a, b] ⊂ ℝ, if, for some error tolerance ε, it holds for all x ∈ [a, b] and for all f ∈ F, represented as a suitable set of discrete measurements (\mathbf{x}_f, f(\mathbf{x}_f)), that |P((\mathbf{x}_f, f(\mathbf{x}_f)), x) − f(x)| < ε.

² We use boldface as a shorthand for sets, not vectors.
³ This will depend on the implementation of E and G, and for neural networks F is practically restricted to continuous and differentiable functions.
That means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance. The choice of this tolerance is essentially arbitrary, but should reflect that for g ∉ F the reconstructions should generally not be accurate within ε. We also write that f is represented as a suitable set of discrete measurements, by which we mean that it must be possible to reconstruct f from those measurements.
Switching to signal-processing terminology, we know that to represent a continuous signal as a set of discrete measurements, we need to sample it at points with a distance of at most τ = 1/(2ν_max), where ν_max is the maximum frequency component of the signal. This is most commonly known as the Nyquist-Shannon sampling theorem (Whittaker, 1915; Kotelnikov, 1933; Shannon, 1949). For any finite real interval [a, b], this translates to a number of sampling points N > 2|b − a|ν_max. The latter allows us to make a connection to the findings by Wagstaff et al. (2019), so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct.

Theorem 3.1 (Maximum Frequency in Neural Process Representations). A Neural Process P with latent dimension D_r can only learn a representation of some function space F defined on a finite interval [a, b] ⊂ ℝ if for all f ∈ F with a maximum frequency content ν_{max,f} it holds that:

\nu_{\max,f} < \frac{D_r}{2|b - a|} \quad (2)
Note that this means we should in theory be able to represent any function space that obeys Eq. (2) to within arbitrarily small ε. In practice, we will typically have less control over F, and we only find approximate representations. Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq. (2). It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points. During training, we work with randomly sampled inputs, but at test time equidistant points are used, as we outline in Appendix A.2.
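The bound is trivial to evaluate numerically. The following sketch (ours) reproduces the interval sizes quoted in Section 4.1 for the Fourier series data:

```python
import math

def max_frequency(d_r, interval):
    # Eq. (2): largest representable frequency for representation size d_r
    return d_r / (2 * interval)

def max_interval(d_r, nu_max):
    # the same bound, solved for the interval length |b - a|
    return d_r / (2 * nu_max)

nu = 19 / (2 * math.pi)  # Fourier series with K = 19, so nu_max ≈ 3.02
print(max_interval(32, nu), max_interval(16, nu))  # ≈ 5.29 and ≈ 2.65
```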
4 EXPERIMENTS & RESULTS
4.1 VALIDATION OF THE FREQUENCY BOUND
Our experiments are grouped into three parts. The first experiment seeks to test the validity of the bound we just derived in Eq. (2). In particular, we train Neural Processes with varying representation sizes on two exemplary function spaces, so that for some models the representation size is insufficient to represent all frequencies. The function spaces we base our experiments on are those defined by Gaussian Process priors (for an introduction see for example Rasmussen & Williams (2006)) with an exponentiated-quadratic (EQ) kernel with lengthscale parameter l, as well as those defined by random real-valued Fourier series—for details we refer to Appendix A.2. While the Gaussian Process samples have an average Fourier magnitude that smoothly decays to zero, the distribution of Fourier magnitudes is uniform for the Fourier series, as shown in Fig. A.1. The Fourier series space also grants us precise control over the frequency domain, which will be useful in subsequent experiments.
Figure 1 shows example reconstructions in a deterministic Neural Process (CNP) for samples from a Gaussian Process prior with EQ kernel (l = 0.05) and from a random Fourier series. For the GP example, the CNP essentially acts like a low-pass filter when the representation size is insufficient, which qualitatively confirms the bound we derived in Eq. (2). Interestingly, the bound can also be observed in a different way: for the Fourier series example, the CNP hardly suppresses high
frequencies, but instead limits the effective interval of the signal, simply ignoring the outer regions of it. Both behaviours are in agreement with the bound in Eq. (2). The Fourier example also serves as a good sanity check: with K = 19 (the maximum angular frequency) the data has a maximum frequency of νmax = K/(2π) = 3.02. For Dr = 32 this would limit the size of the interval to |b − a| < 5.29, for Dr = 16 to |b − a| < 2.65. The reconstructed signal regions in Fig. 1 are a bit narrower, and thus in good agreement with the bound we derived. For a variational Neural Process, we observe the same behaviour, but with stronger dampening of high frequencies in both cases, as seen in Fig. A.2. In Fig. A.3 we show the average reconstruction error for CNPs and NPs of different representation sizes, applied to GP examples with varying lengthscale, which results in a smooth decrease in error for larger representations and larger lengthscale parameters, as one would expect.
4.2 HOW DO NEURAL PROCESSES REPRESENT FUNCTION SPACES?
Having found that Neural Processes do indeed observe the bound we derived in Eq. (2), we seek to understand how this happens. To this end, we visualize the learned representations in Neural Processes, which is possible because we restrict ourselves to X = Y = R. Again looking at the two function spaces from the previous experiment, we sample pairs (x, y) on a regular grid (50 × 50) with x ∈ [−3, 3], which is our training input range, and also y ∈ [−3, 3] as it suitably covers the value range of outputs. We then encode each pair individually to a representation, thus constructing a map ri(x, y) for each representation channel. The latter allows us to uncover potential patterns and to gain a better understanding of how Neural Processes learn representations of function spaces.
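In code, this visualization procedure looks roughly as follows, assuming a trained per-pair encoder encode and a channel index i, both of which are placeholders of our own:

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-3, 3, 50)
X, Y = np.meshgrid(xs, xs)                        # 50 x 50 grid of (x, y) pairs
pairs = np.stack([X.ravel(), Y.ravel()], axis=-1)
R = encode(pairs)                                 # (2500, D_r); `encode` is the trained encoder
plt.imshow(R[:, i].reshape(50, 50), extent=(-3, 3, -3, 3), origin="lower")
plt.xlabel("x"); plt.ylabel("y"); plt.title(f"representation channel r_{i}(x, y)")
plt.show()
```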
Figure 2 presents example representation channels for CNPs and NPs, trained on samples from a Gaussian Process with an EQ-kernel (l = 0.2) and on random Fourier series. The individual channels were selected to illustrate the general patterns of behaviour we observed. First, we find that representations are almost always anti-symmetric across y = 0. This is not surprising, as the function spaces we look at are on average symmetric—in the sense that f and −f will occur with the same probability—so the Neural Process learns the same representation, just with a different sign. More importantly, we find that both NPs and CNPs implicitly form a representation of the input space (i.e. the relevant interval of the function space domain), in the sense that different regions of the input space map to different representation channels. In CNPs this results in an oscillating pattern, with different channels exhibiting different frequencies. In other words, the CNP performs a frequency decomposition of the function space, not unlike a Fourier transform. At the
same time, there is nothing that would enforce orthogonality between the different representation dimensions, and the Fourier series example highlights that we can generally expect a mixture of multiple frequencies for a given dimension. It should be noted that this frequency decomposition emerges naturally and is not incentivized externally (e.g. by a special loss).
Even though NPs behaved very similarly to CNPs in the previous section, their learned representations look vastly different from those in a CNP. Instead of a frequency decomposition, they seem to partition the input space, so that a given representation dimension is written to by a specific, narrow region of the input space. Only for channels with a low average magnitude (i.e. a large index in Fig. 2) do we find behaviour similar to CNPs. We conclude that NPs can in principle learn a frequency decomposition, but their variational formulation—the only difference from CNPs—disincentivizes it. We show more representations for CNPs and NPs trained on GP data in Fig. A.4 and Fig. A.5, and for CNPs and NPs trained on Fourier series data in Fig. A.6 and Fig. A.7, sorting channels by their average magnitude.
4.3 NEURAL PROCESSES AS BAND FILTERS
Our final experiment is designed to show that we can exert more control over the learned representations, and it will serve as additional evidence that deterministic Neural Processes (CNP) perform a frequency decomposition of the function space they represent. At the same time, it suggests a possible practical application of Neural Processes. We saw in Section 4.1 that CNPs sometimes act like low-pass filters, which could be a useful application, but the emergence of that behaviour is not reliable. We now train CNPs with a sufficiently large representation size (D_r = 128) to be band-pass and band-stop filters. To this end, we train the models on the Fourier series defined by Eq. (11), but for the band-stop we set all components a_k to zero for which 5 ≤ k ≤ 14, and likewise set all a_k to zero outside of that range for the band-pass. We then look at the reconstructions of examples from the original series with all components present. For more details on the training procedure and how we sample points for function evaluation, please see Appendix A.1 and Appendix A.2.
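A sketch of how such band-limited training functions could be generated (our own code; whether the constant term a_0 is also zeroed for the band-pass is our reading of the setup):

```python
import numpy as np

def sample_band_fourier(x, mode="stop", K=19):
    a = np.random.uniform(-1, 1, K + 1)   # a_0 ... a_K
    phi = np.random.uniform(-1, 1, K)
    ks = np.arange(1, K + 1)
    band = (ks >= 5) & (ks <= 14)
    a[1:][band if mode == "stop" else ~band] = 0.0  # zero the held-out components
    if mode == "pass":
        a[0] = 0.0  # the constant term also lies outside the pass band
    return a[0] + np.sum(a[1:, None] * np.cos(ks[:, None] * x[None, :] - phi[:, None]), axis=0)
```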
The average Fourier magnitude of the training functions for the different models is given by the bottom left panel in Fig. 3. In the first model (Reference), all components are allowed; in the second (Band-stop), components in the middle of that range are suppressed; in the third (Band-pass) only components in the middle of the range are allowed. We then apply these models to examples from the reference data distribution, the result of which can be seen in the bottom-right panel of Fig. 3. The models that are only shown certain frequencies during training will suppress those frequencies that were not found in the training data, meaning they effectively become programmable band-stop or band-pass filters. This is confirmed by the example in the top rows of the figure, where we show
both the signal and its Fourier transform magnitude. Note that one needs to adjust the value range of the reference data before passing them through the band filters to prevent gain in the non-suppressed frequency regions. We give more details in Appendix A.2.
Unfortunately, we were only partly able to elicit the same behaviour in variational NPs. While the trained band-stop filter worked exactly like the CNP band-stop, we were not able to train a band-pass filter. The models collapsed during training, meaning the loss plateaued and no meaningful representations were learned. There is no obvious reason why a band-pass shouldn't work when a band-stop does, so we suspect our hyperparameter configuration was not ideal and that with more tuning it would be possible to train a band-pass as well. The NP results are shown in Fig. A.8.
1. What is the main contribution of the paper regarding neural processes in signal processing?
2. What are the weaknesses of the paper, particularly in its analysis and experimental design?
3. Do you have any concerns about the definitions and terminology used in the paper?
4. How does the reviewer assess the clarity and quality of the writing in the paper?
5. Are there any specific questions or points raised by the reviewer that require further explanation or justification from the authors? | Review | Review
This paper presents an analysis on the neural processes in the signal processing point of view and gives a bound on the highest frequency of the function that a neural process can represent.
I recommend rejecting this manuscript. My comments are below.
The key point of this work is Theorem 3.1. However, the theorem itself is just a direct outcome of the Nyquist–Shannon sampling theorem, and it applies not only to neural processes but to all other approaches as well. Meanwhile, the authors did not quantitatively discuss the relationship between representability and the error tolerance in Definition 3.1. In addition, the analysis is limited to scalar-valued functions on a 1D interval. The writing could also be improved.
Concerns:
The definition of neural processes in the background section is confusing. Although P is presented as a map, it is a mathematical object defined by both a set of tuples and a map, meaning that neural processes are also defined by data. In the original paper, however, neural processes were defined as random functions.
In the background section, the words say 'some sources define ...'. Could the authors give the sources?
In Def 3.1, what do the authors mean by 'discrete measurements'?
In the experiment section, do the authors mean sampling from a Gaussian process when saying 'GP prior'? I don't see how a GP plays the role of a prior in terms of Bayesian inference.
The examples given in the experiment section lack quantitative results. It would be better to evaluate the reconstructions by showing the posterior or predictive distribution instead of single reconstructions.
In Sec. 4.2, how did the authors sample a regular grid on the 2D plane, given that y is determined by x?
Eq. (11) is defined in the appendix. It would be better to use separate numbering for appendix equations.
ICLR | Title
Learning A Minimax Optimizer: A Pilot Study
Abstract
Solving continuous minimax optimization is of extensive practical interest, yet notoriously unstable and difficult. This paper introduces the learning to optimize (L2O) methodology to minimax problems for the first time and addresses its accompanying unique challenges. We first present Twin-L2O, the first dedicated minimax L2O framework consisting of two LSTMs for updating min and max variables separately. The decoupled design is found to facilitate learning, particularly when the min and max variables are highly asymmetric. Empirical experiments on a variety of minimax problems corroborate the effectiveness of Twin-L2O. We then discuss a crucial concern of Twin-L2O, i.e., its inevitably limited generalizability to unseen optimizees. To address this issue, we present two complementary strategies. Our first solution, Enhanced Twin-L2O, is empirically applicable to general minimax problems, improving L2O training through curriculum learning. Our second alternative, called Safeguarded Twin-L2O, is a preliminary theoretical exploration showing that under some strong assumptions, it is possible to theoretically establish the convergence of Twin-L2O. We benchmark our algorithms on several testbed problems and compare against state-of-the-art minimax solvers. The code is available at: https://github.com/VITA-Group/L2O-Minimax.
1 INTRODUCTION
Many popular applications can be formulated as continuous minimax optimization problems, such as generative adversarial networks (GAN) (Goodfellow et al., 2014), distributionally robust learning (Globerson & Roweis, 2006), domain adaptation (Ganin & Lempitsky, 2014), distributed computing (Shamma, 2008; Mateos et al., 2010), privacy protection (Wu et al., 2018; 2020), among many more. This paper studies such problems: we consider a cost function f : ℝ^m × ℝ^n → ℝ and the min-max game min_x max_y f(x, y). We aim to find the saddle point (x*, y*) of f:
f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗), ∀(x, y) ∈ X × Y, (1)
where X ⊂ R^m and Y ⊂ R^n. If X = R^m and Y = R^n, (x^*, y^*) is called a global saddle point; if X × Y is a neighborhood near (x^*, y^*), (x^*, y^*) is a local saddle point. The main challenge in solving problem (1) is the unstable dynamics of iterative algorithms. The simplest algorithms, such as gradient descent ascent (GDA), can cycle around the saddle point or even diverge (Benaïm & Hirsch, 1999; Mertikopoulos et al., 2018b; Lin et al., 2019). Plenty of works have been developed recently to address this issue (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Mertikopoulos et al., 2018a; Gidel et al., 2018; Mokhtari et al., 2019). However, the convergence is still sensitive to the parameters in these algorithms. Even if the cost function is only changed by scaling, those parameters have to be re-tuned to ensure convergence.
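To make this instability concrete, the following self-contained numpy sketch (ours, not from any cited work) runs simultaneous GDA on the bilinear game f(x, y) = xy, whose unique saddle point is (0, 0); the iterates spiral outward instead of converging:

import numpy as np

# Simultaneous GDA on f(x, y) = x * y: grad_x f = y, grad_y f = x.
eta = 0.1
x, y = 1.0, 1.0
for t in range(100):
    gx, gy = y, x
    x, y = x - eta * gx, y + eta * gy  # descent on x, ascent on y
print(np.hypot(x, y))  # distance to the saddle (0, 0) grows: ~2.33 here

Each step multiplies the squared distance to the saddle by exactly (1 + eta^2), so no step size fixes the divergence.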
A recent trend of learning to optimize (L2O) parameterizes training algorithms to be learnable from data, such that the meta-learned optimizers can be adapted to a special class of functions and outperform general-purpose optimizers. That is particularly meaningful when one has to solve a large number of similar optimization problems repeatedly and quickly. Specifically, for existing L2O methods that operate in the space of continuous optimization, almost all of them solve some
∗Equal Contribution.
minimization problem (Andrychowicz et al., 2016; Chen et al., 2017; Li & Malik, 2016), leveraging an LSTM or a reinforcement learner to model their optimizer. Different from classic optimization results that often provide worst-case convergence, most L2O methods have little or no convergence guarantees, especially on problem or data instances distinct from what is seen in training, leaving their generalizability in practice questionable (Heaton et al., 2020). Motivated by L2O’s success in learning efficient minimization solvers from data, this paper seeks to answer: whether we could accomplish strong minimax L2O solvers as well; and if yes, how generalizable they could be?
As it might look straightforward at first glance, such an extension is highly nontrivial due to several unique challenges. Firstly, while continuous minimization has a multitude of mature and empirically stable solvers, for general minimax optimization even state-of-the-art analytical algorithms can exhibit instability or divergence. To the best of our knowledge, most state-of-the-art convergence analysis of minimax optimization is built on the convex-concave assumption (Gidel et al., 2018; Mokhtari et al., 2019; Ryu et al., 2019), and some recent works relax the assumption to nonconvex-concave (Lin et al., 2019; 2020). Convergence for general minimax problems is still open. That raises a prominent concern about whether a stable minimax L2O is feasible. Secondly, given the two groups of min and max variables simultaneously, it is unclear to what extent their optimization strategies can be modeled and interact within one unified framework – a new question that would never be met in minimization. Thirdly, the noisy and sometimes cyclic dynamics of minimax optimization provide noisier guidance (e.g., reward) to L2O; not to mention that it is not immediately clear how to define the reward: for minimization, the reward is typically defined as the negative cumulative objective values along the history (Li & Malik, 2016). However, for minimax optimization the objective cannot simply be decreased or increased monotonically.
Contribution: This paper is a pilot study into minimax L2O. We start by establishing the first dedicated minimax L2O framework, called Twin-L2O. It is composed of two LSTMs sharing one objective-based reward, separately responsible for updating the min and max variables. By ablating the design options, we find this decoupled design facilitates meta-learning most, particularly when the min and max updates are highly non-symmetric. We demonstrate the superior convergence of Twin-L2O on several testbed problems, compared against a number of analytical solvers.
On top of that, we further investigate how to enhance the generalizability of the learned minimax solver¹, and discuss two complementary alternatives with experimental validations. The first alternative is an empirical toolkit applicable to general minimax L2O. We introduce curriculum learning to L2O model training for the first time, by recognizing that not all problem instances are equally difficult to learn to solve. After plugging in that idea, we show that Twin-L2O can be trained to stably solve an order of magnitude more problem instances (in terms of the parameter varying range). The second alternative explores a theoretical mechanism called safeguarding, particularly for the important special case of convex-concave problems. When solving a testing instance, safeguarding identifies when an L2O failure would occur and provides an analytical fallback option (Diakonikolas, 2020). That guarantees convergence for convex-concave problems and, in practice, converges faster even when the problem parameters are drawn from a distribution different from training.
2 RELATED WORK
2.1 MINIMAX OPTIMIZATION
Following (Neumann, 1928), problem (1) has been studied for decades due to its wide applicability. Simultaneous gradient descent (SimGD) or gradient descent ascent (GDA) (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) is one of the simplest minimax algorithms, conducting gradient descent over variable x and gradient ascent over variable y. However, the dynamics of SimGD or GDA can converge to limit cycles or even diverge (Benaïm & Hirsch, 1999; Mertikopoulos et al., 2018b; Lin et al., 2019). To address this issue, Optimistic gradient descent ascent (OGDA) simply modifies the dynamics of GDA and shows more stable performance (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Mertikopoulos et al., 2018a; Gidel et al., 2018; Mokhtari et al., 2019). OGDA attracts more attention because of its empirical success in training GANs. (Ryu et al., 2019) theoretically studies OGDA by analyzing its continuous-time dynamics and
¹We differentiate the usages of two terms, parameters and variables, throughout the paper. For example, in min_x max_y ax² − by², we call a, b parameters and x, y variables. For simplicity, this paper only discusses the L2O generalizability when the testing instances’ parameter distribution differs from the training.
proposes Anchored simultaneous gradient descent, which shows good performance. Follow-the-Ridge (Wang et al., 2019) also addresses the limit-cycling problem by introducing second-order information into the dynamics of GDA. Lately, K-Beam (Hamm & Noh, 2018) stabilizes the convergence of GDA by duplicating the variable y, yielding strong performance. At each iteration, it performs gradient ascent independently on K copies of y and greedily chooses the copy that leads to the largest function value f; then it updates x based on the selected copy.
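For contrast with plain GDA, the optimistic correction is essentially a one-line change; the sketch below (ours) applies OGDA to the bilinear game f(x, y) = xy used in the earlier sketch, where the iterate norm now shrinks toward the saddle:

import numpy as np

# OGDA on f(x, y) = x * y: extrapolate with the previous gradient.
eta = 0.1
x, y, gx_prev, gy_prev = 1.0, 1.0, 0.0, 0.0
for t in range(500):
    gx, gy = y, x                  # gradients at the current point
    x -= eta * (2 * gx - gx_prev)  # optimistic descent step on x
    y += eta * (2 * gy - gy_prev)  # optimistic ascent step on y
    gx_prev, gy_prev = gx, gy
print(np.hypot(x, y))  # shrinks toward the saddle (0, 0), unlike GDA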
2.2 LEARNING TO OPTIMIZE
As a special instance of meta-learning, L2O has been studied in multiple contexts, with continuous optimization being one of its main playgrounds so far. The first L2O framework is introduced in (Andrychowicz et al., 2016), where both the optimizee’s gradients and loss function values are formulated as the input features for an RNN optimizer. Due to the enormous number of parameters, a coordinate-wise design of RNN optimizer is adopted, where all optimization coordinates share the same updating strategy. (Li & Malik, 2016) uses the gradient history and objective values as observations and step vectors as actions in their reinforcement learning framework. (Chen et al., 2017) leverages RNN to train a meta-optimizer to optimize black-box functions. Two effective training tricks, random scaling and objective convexifying, are presented in (Lv et al., 2017). Wichrowska et al. (2017) presents an optimizer of multi-level hierarchical RNN architecture augmented with additional architectural features. Li et al. (2020) introduces a Jacobian regularization to L2O and enhances the domain adaptation performance of optimizees. Chen et al. (2020a) proposes several improved training techniques to stabilize L2O training and ameliorate performance. You et al. (2020); Chen et al. (2020b;c) extend the application scope of L2O into various practical problems such as graph neural network training, domain generalization, and noisy label training.
The above works address continuous minimization problems using single optimizer models. One exception, (Cao et al., 2019), extends L2O to solving Bayesian swarm optimization. The authors present a novel architecture where multiple LSTMs jointly learn iterative update formulas for the swarm of particles, coordinated by attention mechanisms. We also notice that two recent efforts (Jiang et al., 2018; Xiong & Hsieh, 2020) introduce L2O to adversarial training, a renowned application of minimax optimization. However, both of them merely utilize L2O to solve the inner minimization of their minimax problems (i.e., generating attacks), while the outer maximization is still solved analytically. Neither of the two directly solves the full minimax optimization.
3 METHOD
3.1 MAIN FRAMEWORK: TWIN LEARNABLE OPTIMIZERS (TWIN-L2O)
The main L2O framework we proposed is named Twin-L2O, where we use two learnable optimizers to alternate between min and max updates. Our design adopts the basic idea of (Andrychowicz et al., 2016) to use Long Short-Term Memory (LSTM) to model learnable optimizers, for solving target problems known as optimizees. At each step, LSTM outputs the update of the optimizee variables. The LSTM inputs are typically the current zero-order or first-order information of the optimizee (Andrychowicz et al., 2016; Lee & Choi, 2018), plus the historic optimization trajectory information.
In Twin-L2O, two LSTMs separately update x and y and record the historical trajectory information of their own variables respectively. Formally, we consider the minimax problem min_x max_y f(x, y). We use two LSTM optimizers, LSTM-Min and LSTM-Max, to update the min variable x and the max variable y respectively. LSTM-Min is parameterized by φ_min and LSTM-Max is parameterized by φ_max. At each iteration t, Twin-L2O updates x and y in turns and yields the following rule:
x_{t+1} = x_t + Δx_t, where (Δx_t, h^min_{t+1}) = LSTM-Min([∇_x f(x_t, y_t), ∇_y f(x_t, y_t)], h^min_t, φ_min),
y_{t+1} = y_t + Δy_t, where (Δy_t, h^max_{t+1}) = LSTM-Max([∇_y f(x_{t+1}, y_t), ∇_x f(x_{t+1}, y_t)], h^max_t, φ_max), (2)
where h^min_t and h^max_t are the historical trajectory information of LSTM-Min and LSTM-Max at time step t. This formulation is inspired by the SimGD/GDA-style algorithms (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) that conduct simultaneous/alternating gradient descent over x and ascent over y. Figure A4 (Appendix A1) conceptually illustrates the framework.
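To make the update rule concrete, here is a minimal PyTorch sketch of one Twin-L2O step for scalar x and y. This is our illustration, not the authors' released implementation: the names (LSTMOptimizer, twin_l2o_step) are ours, and the coordinate-wise batching and gradient preprocessing of (Andrychowicz et al., 2016) are omitted for brevity.

import torch

class LSTMOptimizer(torch.nn.Module):
    """One learnable optimizer: maps the two gradients to a variable update."""
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = torch.nn.LSTMCell(2, hidden)  # input: [own grad, other grad]
        self.head = torch.nn.Linear(hidden, 1)    # output: the update Delta

    def forward(self, grads, state):
        h, c = self.cell(grads, state)
        return self.head(h).view(()), (h, c)

def twin_l2o_step(f, x, y, lstm_min, lstm_max, s_min, s_max):
    # Min player's update of x, from gradients at the current (x_t, y_t).
    gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
    dx, s_min = lstm_min(torch.stack([gx, gy]).view(1, 2), s_min)
    x = x + dx
    # Max player's update of y, from gradients at the refreshed (x_{t+1}, y_t).
    gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
    dy, s_max = lstm_max(torch.stack([gy, gx]).view(1, 2), s_max)
    y = y + dy
    # create_graph=True keeps the graph so the meta-loss can backprop
    # through the unrolled updates when training the two LSTMs.
    return x, y, s_min, s_max

# One unrolled step on a Seesaw instance with a = b = 1:
f = lambda x, y: -y * torch.sin(torch.pi * x)
x = torch.tensor(0.3, requires_grad=True)
y = torch.tensor(0.2, requires_grad=True)
lstm_min, lstm_max = LSTMOptimizer(), LSTMOptimizer()
state = lambda: (torch.zeros(1, 20), torch.zeros(1, 20))
x, y, s_min, s_max = twin_l2o_step(f, x, y, lstm_min, lstm_max, state(), state())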
The next question is how to design the L2O reward. To train the LSTM optimizers, the loss function typically penalizes some type of cost accumulated along the optimization trajectory for a horizon of T
steps (also known as the unrolling length for LSTM (Sherstinsky, 2018))
L(φ_min, φ_max) = E_f [ Σ_{t=1}^T w_t R(f, x_t, y_t) ], (3)
where w_t is set to 1 for all t, following the basic setting in (Andrychowicz et al., 2016); it might be tuned for better performance in future work.
As a key design option, R(f) represents the reward that guides the L2O training. In existing L2O methods for continuous minimization (Andrychowicz et al., 2016; Lv et al., 2017), R(f) is usually simply set to R(f, x_t) = f(x_t) to encourage a fast decrease of objective values over time. To extend this reward to the minimax scenario, we cannot directly penalize the overall objective function value either way, since the min and max objectives are entangled. Also, different from pure minimization problems, the Twin-L2O updates (2) consist of two alternating steps governed by two different LSTM optimizers: each accounts for its own subproblem goal (min or max updates), but the two also have to collaborate to explore/exploit the minimax landscape. We specifically design the following reward that implicitly addresses the above issue:
L(φ_min, φ_max) = E_f [ Σ_{t=1}^T { [f(x_t, y_{t−1}) − f(x_t, y_t)] + [f(x_t, y_{t−1}) − f(x_{t−1}, y_{t−1})] } ]. (4)
Analysis of the reward design In Eqn. 4, the first and second terms always characterize the two consecutive max and min updates. In more detail, the value of f(x_t, y_t) − f(x_t, y_{t−1}) solely reflects how effectively the t-step max update increases the objective f, while f(x_t, y_{t−1}) − f(x_{t−1}, y_{t−1}) reflects the effectiveness of the t-step min update in decreasing the objective f. Our goal is then to maximize the weighted accumulated sum of f(x_t, y_t) − f(x_t, y_{t−1}), while minimizing the weighted accumulated sum of f(x_t, y_{t−1}) − f(x_{t−1}, y_{t−1}), t = 1, 2, ..., T. Combining the two sub-goals (with a sign change to turn max into min) yields our reward. One may alternatively interpret Eqn. 4 as penalizing the loss change from f(x_t, y_t) along both the x and y updating directions, which would encourage yielding stationary points.
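In code, the reward in Eqn. 4 (with w_t = 1) is a short accumulation over the unrolled trajectory. The sketch below is our paraphrase of the formula, with xs[t], ys[t] denoting the iterates after step t:

def twin_l2o_loss(f, xs, ys):
    """Eqn. 4 over a trajectory; xs[t], ys[t] are the iterates after step t
    (index 0 holds the initial point). Minimizing this rewards max updates
    that raise f (first bracket) and min updates that lower f (second)."""
    loss = 0.0
    for t in range(1, len(xs)):
        loss = loss + (f(xs[t], ys[t - 1]) - f(xs[t], ys[t]))          # max step
        loss = loss + (f(xs[t], ys[t - 1]) - f(xs[t - 1], ys[t - 1]))  # min step
    return loss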
For the reward design, we provide a more detailed discussion in Appendix A6. Specifically, we compare the objective-based reward in Eqn. 4 with another possible gradient-based reward. The latter was found to be ineffective in solving the problems presented in Section 4.
Rationale of the framework selection Another important design question is to what extent learning the min and max updates should be (dis)entangled: on the one hand, the two steps obviously interact with each other as they jointly explore the minimax landscape; on the other hand, min and max steps commonly have asymmetric difficulty levels, which has been exploited by previous algorithms. For example, (Hamm & Noh, 2018) demonstrates the failure of alternating gradient descent in minimax optimization due to the multiple-solution discontinuity of the inner maximization, and addresses it by simultaneously tracking K candidate solutions for the max step, while the outer minimization still takes one descent step. Besides the joint reward (4), the default Twin-L2O design leverages two independent LSTMs in Eqn. (2), each dedicatedly handling min or max updates. In comparison, we also consider two other more "entangled" designs: (a) fully entangling the two optimizers, i.e., using one LSTM to simultaneously generate min and max outputs; (b) weakly entangling the two optimizers, by using two LSTMs sharing weights, yet allowing each to maintain its own temporal hidden states. Our ablation experiments (see Section 4.1) find that the default decoupled design in Eqn. (2) facilitates the L2O learning most.
3.2 IMPROVING GENERALIZABILITY OF TWIN-L2O
Despite the empirical success of L2O, it is unfortunately impossible to ensure that any L2O algorithm always converges. Even assuming the objective function type stays unchanged, the testing instances’ parameter distribution may differ from that of training, and L2O can catastrophically fail. For Twin-L2O, we discuss two remedies that partially fix this issue and boost its generalizability.
We first propose a curriculum L2O training scheme as a practical training technique such that Twin-L2O can be trained to work on a much wider coverage of problem parameters than its vanilla version. That empirically helps generalizability due to broader coverage by training instances, but can still fail on unseen testing instances. We then present a
preliminary exploration of the safeguard mechanism on minimax under a special case, i.e., solving convex-concave problems. We demonstrate that under such strong assumptions, it is possible to theoretically establish the "perfect" convergence of Twin-L2O on any unseen optimizee.
Curriculum L2O Training When it comes to general minimax problems, no ideal theory is likely to exist that fully ensures Twin-L2O convergence on all instances. Therefore, we seek empirical L2O success on as many instances as possible. Specifically: can we train Twin-L2O better, so that it works on instances over a broader parameter range?
We find a curriculum learning (CL) strategy (Bengio et al., 2009) particularly useful. CL was first adopted to train neural networks by focusing the training on an "easy" training subset (often adaptively selected), which is then gradually grown to the full set. It is known to be effective in stabilizing training, especially when the training set is highly varied or noisy (Jiang et al., 2017). Since minimax optimization is notoriously unstable whether via analytical or learned optimizers, we conjecture that the noisy minimax dynamics might challenge Twin-L2O by providing unreliable guidance and impede its training. Considering that our Twin-L2O is modeled using LSTMs, it is natural to ask whether CL can bring additional gains when applied to meta-training. Previously it was also found effective in L2O for minimization problems (Chen et al., 2020a).
Method 1 Safeguarded Twin-L2O for Convex-Concave Saddle Point Problems
1: Initialize u^1 ∈ R^{m+n}, C ∈ [0,∞), α ∈ (0,∞), {λ_ℓ} = {1/(ℓ+1)}, k ← 2, weights {φ_k}
2: function SADDLEHALPERN(ε)
3:     u^2 ← (1/2)(u^1 + J_{α∂f}(u^1))
4:     while ‖u^k − J_{α∂f}(u^k)‖ > ε do
5:         z^{k+1} ← LSTM(u^k; φ_k)                        ▷ Apply L2O operator
6:         if E_{k+1}(z^{k+1}) ≤ C/(k+1) then              ▷ Verify safeguard condition
7:             u^{k+1} ← z^{k+1}                           ▷ Use L2O update
8:         else
9:             u^{k+1} ← λ_k u^1 + (1 − λ_k) J_{α∂f}(u^k)  ▷ Use fallback update
10:        k ← k + 1
11:    return u^k
12: end function
Specifically, in one epoch, we rank all optimizee instances by their cumulative losses (3) from low to high, and only select the top C instances to count into the total reward. In that way, only the instances that exhibit "good training behaviors" (smaller gradients and more likely to get close to stationary points) are initially used for updating the Twin-L2O. That prevents the learned optimizer from being misled by random failures and outliers, which are commonly found in the early epochs of Twin-L2O training. We by default set the percentage C to start from 20%, then grow linearly every epoch until reaching 100% in the later training stage.
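As a sketch (ours; the exact mask and schedule are stated in Appendix A4), the per-epoch selection can be as simple as:

def select_curriculum(cum_losses, epoch):
    """Keep the top-C 'easiest' optimizee instances, ranked by their
    cumulative losses (Eqn. 3) from low to high; C grows linearly from
    20% toward 100% as training proceeds (+1% per epoch is assumed here)."""
    C = min(0.20 + 0.01 * epoch, 1.0)
    k = max(1, int(C * len(cum_losses)))
    ranked = sorted(range(len(cum_losses)), key=lambda i: cum_losses[i])
    return ranked[:k]  # indices of the instances counted into the reward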
To the best of our knowledge, this is the first effort to incorporate CL into L2O training. We call this Twin-L2O trained with CL the Enhanced Twin-L2O: note that it is the same model structure, just trained in a different and better way. More details can be found in Appendix A4.
Safeguard Twin-L2O: A Preliminary Theoretical Exploration Most L2O methods have little or no convergence guarantees. Very recently, a safeguarding mechanism has been introduced to L2O for convex minimization problems with gradient and/or proximal oracles (Heaton et al., 2020). Conceptually, a safeguard is anything that identifies when a "bad" L2O update would occur and what "fallback" update to apply in place of that bad L2O update. In this section, we establish a safeguarding theory and algorithm specifically for learned convex-concave saddle point algorithms. Here the safeguard takes the form of an energy inequality (cf. Line 6 in Method 1).
In this section, we write u = (x, y) ∈ R^m × R^n and let α > 0. We use the resolvent, defined by
J_{α∂f}(x, y) = (Id + α∂f)^{−1}, (5)
where we note ∂f = (∂_x f, −∂_y f). For simple f (e.g., quadratic functions), a closed formula exists for J_{α∂f}. Otherwise, one may use an iterative method to approximate this quantity. In addition, define the residual operator
F(u) := (1/2) (u − J_{α∂f}(u)), (6)
and, for each k ∈ N, the energy E_k : R^m × R^n → R by
E_k(u) := ‖F(u)‖² − (λ_k / (1 − λ_k)) ⟨F(u), u^1 − u⟩, (7)
where {λ_k} is a sequence of step sizes. The full method is outlined in Method 1, where the L2O update is denoted by LSTM(u^k; φ_k) and the fallback method is a Halpern iteration (Halpern, 1967). Our main result for the minimax safeguarding theory is formally stated below:
Theorem 3.1. If the sequence {u^k} is generated by Method 1, then
‖u^k − J_{α∂f}(u^k)‖ ≤ (1/2) ( d_1/k + √(d_1²/k² + 4C/k) ), for all k ≥ 2, (8)
where d_1 := min{‖u − u^1‖ : 0 ∈ ∂f(u)} is the distance from the initial iterate u^1 to the set of saddle points and C ≥ 0 is an arbitrary constant. In particular, this implies each limit point of {u^k} is a saddle point.
Our proof draws on and integrates two sources of ideas: (1) the safeguarded L2O technique that has recently been introduced for convex minimization (Heaton et al., 2020); and (2) the Halpern iteration (Diakonikolas, 2020) adopted for analytical minimax optimization with favorable theoretical properties. The full proof is provided in Appendix A3. Note that this work is not intended as a theory innovation on (classical) minimax optimization. Instead, our aim is to extend the emerging idea of safeguarded L2O from convex minimization to convex-concave minimax problems of interest, and to show this idea to be helpful for minimax L2O too: see experiments in Section 4.
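For the convex-concave matrix game f(x, y) = x^T A y, the resolvent in Eqn. 5 reduces to a single linear solve, so Method 1 can be sketched end-to-end in a few lines. The numpy code below is an illustration of Eqns. 5–7 and Method 1 (ours, not the authors' implementation); l2o_step stands in for a trained Twin-L2O update, and the λ indexing in the safeguard test reflects our reading of E_{k+1}.

import numpy as np

def make_resolvent(A, alpha):
    """J_{alpha*df} for f(x, y) = x^T A y, where df(x, y) = (A y, -A^T x).
    df is linear, so (Id + alpha*df)^{-1} is one linear solve (Eqn. 5)."""
    m, n = A.shape
    M = np.block([[np.eye(m),    alpha * A],
                  [-alpha * A.T, np.eye(n)]])
    return lambda u: np.linalg.solve(M, u)

def residual(J, u):                    # F(u) = (u - J(u)) / 2, Eqn. 6
    return 0.5 * (u - J(u))

def energy(J, u, u1, lam):             # E_k(u) with lam = lambda_k, Eqn. 7
    Fu = residual(J, u)
    return Fu @ Fu - lam / (1.0 - lam) * (Fu @ (u1 - u))

def safeguarded_halpern(J, l2o_step, u1, C=1.0, tol=1e-9, iters=10_000):
    """Method 1: accept the L2O proposal z^{k+1} only if the energy test
    passes; otherwise take the analytical Halpern fallback. lambda_k = 1/(k+1).
    Runs until the residual tolerance or the iteration cap is reached."""
    u, k = 0.5 * (u1 + J(u1)), 2       # u^2 (line 3 of Method 1)
    while np.linalg.norm(u - J(u)) > tol and k < iters:
        lam = 1.0 / (k + 1)
        z = l2o_step(u)                                     # L2O proposal
        if energy(J, z, u1, 1.0 / (k + 2)) <= C / (k + 1):  # safeguard test
            u = z
        else:
            u = lam * u1 + (1.0 - lam) * J(u)               # Halpern fallback
        k += 1
    return u

# Tiny demo with a dummy "L2O" that simply proposes a resolvent step:
rng = np.random.default_rng(0)
A = rng.binomial(1, 0.5, (5, 5)) * rng.uniform(-1.0, 1.0, (5, 5))
J = make_resolvent(A, alpha=1.0)
u = safeguarded_halpern(J, l2o_step=lambda u: J(u), u1=rng.uniform(-1, 1, 10))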
4 EXPERIMENTS
4.1 ABLATION STUDY ON THE DESIGN OF TWIN-L2O
We first investigate the design choices for Twin-L2O that we discussed in Section 3.1. We mainly investigate two aspects: (i) whether to share the weights in the two LSTM solvers or not; (ii) whether to share the hidden states between the two LSTM solvers or not. That leads us to four options, denoted as (with self-explanatory names): Share-LSTM-Share-Hidden, Share-LSTM-Two-Hidden, Two-LSTM-Share-Hidden, and Two-LSTM-Two-Hidden. We use the seasaw problem, formulated as below, as the testbed for our ablation study (note that the ranges of a, b are picked only to make L2O easy to converge, while more will be investigated in Section 4.3):
Seesaw: min_x max_y −b y sin(aπx), a ∼ U[0.9, 1], b ∼ U[0.9, 1] (Seesaw)
The Seesaw problem is nonconvex-concave, and is considered challenging (Hamm & Noh, 2018) due to its non-differentiability, arising from the fact that the solutions of the state equation or the adjoint state equation are not unique (Danskin, 1966). The L2O training routine follows (Andrychowicz et al., 2016): we use 128 optimizee instances for training; each of them has its parameters i.i.d. sampled, and variables x, y randomly initialized by i.i.d. sampling from U[−0.5, 0.5]. A validation set of 20 optimizees is used with parameters and variables sampled in the same way; similarly, we generate a hold-out testing set of another 100 instances. For each epoch, an L2O optimizer updates the optimizee variables for 1000 iterations, with its unrolling length T = 10. When the next epoch starts, all x, y as well as the LSTM hidden states are reset. We train the L2O solvers for 200 epochs, using Adam with a constant learning rate 10^{−4}. We pick the model checkpoint at the epoch when its validation performance peaks. Figure 1 compares the convergence results of the four options, evaluated on the same testing set. We measure the ℓ2 distances between the solved variables and their corresponding ground-truth solutions (or the closest one, if multiple exist). It is obvious that only Two-LSTM-Two-Hidden successfully converges to the correct solution (x^*, y^*) = (0, 0), which is also the equilibrium. Our major observation from the above experiments is that for minimax L2O optimization, especially for asymmetric problems such as Seesaw, it is a better choice to use two decoupled LSTM solvers and let each keep track of its own trajectory information. We will hence stick to this option and use it as our default Twin-L2O.
All experiments in this and following sections are conducted using the GeForce GTX 1080 Ti GPUs.
4.2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
In this section, we apply Twin-L2O to two more test problems besides Seesaw:
• Rotated Saddle²: min_x max_y ax² − by² + 2xy, a ∼ U[0.9, 1], b ∼ U[0.9, 1]
• Matrix Game: min_x max_y x^T A y, A ∈ R^{5×5}, A_{i,j} ∼ Bernoulli(0.5) · U[−1, 1]
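For concreteness, the three optimizee families can be instantiated as follows (our sketch of the problem definitions above; the seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def sample_seesaw():
    a, b = rng.uniform(0.9, 1.0, 2)
    return lambda x, y: -b * y * np.sin(a * np.pi * x)

def sample_rotated_saddle():
    a, b = rng.uniform(0.9, 1.0, 2)
    return lambda x, y: a * x**2 - b * y**2 + 2 * x * y

def sample_matrix_game(n=5):
    A = rng.binomial(1, 0.5, (n, n)) * rng.uniform(-1.0, 1.0, (n, n))
    return lambda x, y: x @ A @ y  # x, y are length-n vectors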
On all three problems, we compare Twin-L2O with several state-of-the-art algorithms: Gradient Descent Ascent (GDA) (Lin et al., 2019), Optimistic Mirror Descent (OMD) (Daskalakis et al., 2018) and GD with anchoring (GD-Anchoring) (Ryu et al., 2019). On Rotated Saddle and Seesaw we additionally compare with K-beam (Hamm & Noh, 2018). For Matrix Game, we also compare with the standard Halpern iteration (Diakonikolas, 2020) designed for convex-concave minimax problems. For these analytical methods, all parameters are tuned with careful grid search. We train, validate and test Twin-L2O models following the protocol described in Section 4.1.
Figure 2 plots the convergence curves of all methods, averaged across all testing problems (and each with 20 trials of random x, y initialization). Several observations are drawn below:
• L2O does not show superiority over well-tuned analytical algorithms on the simplest Rotated Saddle problem (and similarly Saddle). The problem is very gradient-friendly, and therefore OMD already achieves the best convergence speed as well as solution quality.
• On Matrix Game, Twin-L2O starts to show competitive edges over analytical solvers with faster convergence speed and higher-precision solutions.
• On the Seesaw problem, Twin-L2O largely outperforms all carefully-tuned analytical algorithms, achieving one-magnitude higher-precision solutions with comparable convergence speed. That shows us one take-home message: L2O can work for minimax optimization, and can contribute most significantly to those hard problems. That makes minimax L2O a highly meaningful complement to existing analytical minimax solvers. More analysis on comparing the actual computational costs (MAC numbers) can be found in Appendix A5.
²We also test on the classical Saddle problem, but its behaviors and conclusions are almost identical to Rotated Saddle. We hence report on Rotated Saddle due to the space limit.
4.3 ENHANCED TWIN-L2O: CURRICULUM LEARNING EVALUATION
We again use the Seesaw problem as the example in this section. Its two parameters a and b, i.e., the problem period and scale, are sampled independently from two uniform distributions U[L_a^1, L_a^2] and U[L_b^1, L_b^2]. In Section 4.1, both were chosen as U[0.9, 1] for the ease of L2O convergence. We now stretch both parameter ranges and test whether an L2O model can still solve the resulting broader range of problems. All other training protocols follow Section 4.1 identically.
Sections 4.1 and 4.2 evaluate the average solution distances over the testing set (100 instances), which worked fine in the small [a, b] range then. However, when we extend the [a, b] range, we find that the L2O behaviors can differ vastly across testing instances, i.e., some converging quickly while others suffering from heavy fluctuations or even divergence, which is an artifact of inefficient L2O training that leaves it unable to cover the full large problem range. That motivates us to carefully re-design our evaluation metrics here, to reflect both the solution quality and its variation/stability.
For the p-th testing instance, we record its ℓ2 solution distance D^p_t at iteration t = 1, 2, .... Given two thresholds ε_acc and ε_std (chosen by multi-fold validation; we use ε_acc = 2 × 10^{−2} and ε_std = 10^{−4} by default), we define two forms of success rate (SR):
SR_1 = (1/n) Σ_{p=1}^n I( d̄(D^p) < ε_acc ),  SR_2 = (1/n) Σ_{p=1}^n I( Std(D^p) < ε_std ),
where d̄(D^p) = ( Σ_{t=t_0}^L D^p_t ) / (L − t_0 + 1), Std(D^p) = Std( {D^p_t}_{t=t_0}^L ), and t_0 = 0.8L; n = 100 is the number of testing instances; L = 1000 is the total number of iterations for which each instance (optimizee) is trained by L2O. Intuitively, SR_1 emphasizes the average solution precision over the last 20% of iterations, and SR_2 measures how large a solution variation is seen over the last 20% of iterations.
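The two success rates are straightforward to compute; the sketch below is ours, with the tail window and thresholds as defined above (the exact index convention is our assumption):

import numpy as np

def success_rates(D, eps_acc=2e-2, eps_std=1e-4):
    """SR_1 / SR_2 of Sec. 4.3. D has shape (n, L): D[p, t] is the l2
    distance to the solution of instance p at iteration t + 1."""
    n, L = D.shape
    tail = D[:, int(0.8 * L) - 1:]                     # iterations t0 = 0.8L ... L
    sr1 = float(np.mean(tail.mean(axis=1) < eps_acc))  # average precision test
    sr2 = float(np.mean(tail.std(axis=1) < eps_std))   # stability test
    return sr1, sr2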
Table 1 compares Twin-L2O and Enhanced Twin-L2O at multiple combinations of stretched ranges of a and b, starting from the original [0.9, 1] × [0.9, 1], up to as large as [0, 5] × [0, 2]: the parameter coverage increases by 1,000 times. Adding CL evidently helps Twin-L2O stay effective when trained over a broader instance range, under both SR metrics. Vanilla Twin-L2O performs perfectly at [0.9, 1] × [0.9, 1], yet begins to drop at [0, 1] × [0.9, 1] (mainly showing higher instability, as indicated by lower SR_2), and hardly succeeds beyond [0, 3.5] × [0.9, 1]. In contrast, Enhanced Twin-L2O obtains nontrivial results even at [0, 5] × [0, 1] (tens of times wider than the vanilla range).
4.4 SAFEGUARDED TWIN-L2O EXPERIMENTS
Here we use the matrix game as the example to evaluate the safeguard mechanism established above for convex-concave minimax optimization. We directly take a well-trained Twin-L2O model for the matrix game from Section 4.2, where the matrix A ∈ R^{5×5} with A_{i,j} ∼ Bernoulli(0.5) · U[−1, 1], and the coordinates of the initial optimization variables x and y are independently sampled from U[−1, 1]. During testing, in addition to evaluating the Twin-L2O model on testing data from this seen distribution, we also evaluate it on unseen data, whose A is now sampled from an intentionally very distinct distribution: A_{i,j} ∼ Bernoulli(1.0) · U[−8, 8]; x and y are initialized in the same manner. We compare Safeguarded Twin-L2O (denoted as Safe-Twin-L2O) with the standard Halpern iteration (Diakonikolas, 2020) as the fallback update when the L2O update is disapproved in Method 1. We also compare with OMD and GD-Anchoring on both seen and unseen testing data (GDA fails to converge in both cases, even when we tune its hyperparameters to our best efforts). The results are shown in Figure 3. When tested on the aggressively varied unseen data, the vanilla Twin-L2O model fails and diverges, but Safe-Twin-L2O still converges successfully: even faster than Halpern iteration and OMD, and much better than GD-Anchoring.
5 CONCLUSION
This paper studies L2O for minimax optimization for the first time. We present the Twin-L2O model, and further improve its generalizability by introducing a theoretically grounded safeguarding framework (for convex-concave problems), as well as an empirical curriculum training strategy (for general problems). Extensive simulations endorse the promise of our algorithms. This pilot study suggests and paves the way for extending L2O beyond continuous minimization problems.
Limitation: The entire L2O field faces challenges in scaling up to larger-scale optimization (Andrychowicz et al., 2016), and our study is no exception. Despite very promising gains on challenging cases such as the Seesaw and Matrix Game problems, the current work only proves the first concept of minimax L2O on relatively basic and low-dimensional test problems. Our immediate next step is to scale up Twin-L2O, and to explore its potential in solving minimax application problems of practical interest, such as adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020) and GANs (Gulrajani et al., 2017). A potential idea might leverage the memory-efficient hierarchical RNN structure in (Wichrowska et al., 2017).
A1 TWIN-L2O FRAMEWORK
Figure A4: Architecture of Twin-L2O. We let LSTM-Min and LSTM-Max, parameterized by φmin and φmax, update x and y respectively. As shown by curved dashed lines, Twin-LSTM keeps being updated about the latest variable values of x and y when computing input information and the reward. When constructing the computational graph and training the Twin-LSTM, the solid lines allow gradients to flow while the dashed lines do not pass any gradient (Andrychowicz et al., 2016).
A2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
Figure A5 shows the performance of the y variable in Rotated Saddle, Matrix Game and Seesaw. The analyses of the results generally align with those in the main text.
Figure A5: Convergence comparison of variable y between Twin-L2O and state-of-the-art analytical minimax optimizers (GDA, OMD, GD-Anchoring, and K-beam), for three test problems.
A3 PROOF OF SAFEGUARDING RESULT
Below is a proof of the main result, Theorem 3.1:
Proof. We proceed in the following manner, with much credit due to the analysis in (Diakonikolas, 2020). First we verify an inequality for the energy sequence {E_k(u^k)} (Step 1). This is used to obtain the convergence rate (Step 2). Resulting implications about limit points are established last (Step 3).
Step 1. We claim
E_k(u^k) ≤ C/k, for all k ≥ 2. (9)
We proceed by induction. First note J_{α∂f} is firmly nonexpansive, and so 2F = Id − J_{α∂f} is also firmly nonexpansive (Bauschke et al., 2011), which implies
‖2F(u) − 2F(v)‖² ≤ ⟨2F(u) − 2F(v), u − v⟩, for all u, v ∈ R^m × R^n. (10)
Using (10) with u = u^2 and v = u^1, together with our choice of step sizes {λ_k}, we find
E_2(u^2) = ‖F(u^2)‖² − (λ_1 / (1 − λ_1)) ⟨F(u^2), u^1 − u^2⟩ (11)
= ‖F(u^2)‖² − ⟨F(u^2), u^1 − u^2⟩ (12)
= ⟨F(u^2), F(u^2) − F(u^1)⟩ (13)
= ‖F(u^2) − F(u^1)‖² + ⟨F(u^1), F(u^2) − F(u^1)⟩ (14)
≤ (1/2) ‖2F(u^2) − 2F(u^1)‖² + ⟨F(u^1), F(u^2) − F(u^1)⟩ (15)
≤ ⟨F(u^2) − F(u^1), u^2 − u^1⟩ + ⟨F(u^1), F(u^2) − F(u^1)⟩ (16)
= −⟨F(u^2) − F(u^1), F(u^1)⟩ + ⟨F(u^1), F(u^2) − F(u^1)⟩ (17)
= 0. (18)
Thus, E_2(u^2) ≤ 0 ≤ C/2, and the base case holds. Inductively, suppose (9) holds taking k = n for some n ≥ 2. If u^{n+1} = z^{n+1}, then (9) holds, taking k = n + 1, by the conditional statement in Line 6 of Method 1. Alternatively, suppose u^{n+1} ≠ z^{n+1}. Applying (10) with u = u^{n+1} and v = u^n yields
‖F(u^{n+1}) − F(u^n)‖² ≤ 2‖F(u^{n+1}) − F(u^n)‖² ≤ ⟨F(u^{n+1}) − F(u^n), u^{n+1} − u^n⟩. (19)
Upon expansion of the left-hand side, we discover
‖F(u^{n+1})‖² ≤ ⟨F(u^{n+1}), u^{n+1} − u^n + 2F(u^n)⟩ − ⟨F(u^n), u^{n+1} − u^n + F(u^n)⟩. (20)
Algebraic manipulations of the update formula for u^{n+1} yield the relations
u^{n+1} − u^n + 2F(u^n) = (λ_n / (1 − λ_n)) (u^1 − u^{n+1}), (21a)
u^{n+1} − u^n + F(u^n) = λ_n (u^1 − u^n) − (1 − 2λ_n) F(u^n). (21b)
Substituting (21) into (20) gives
‖F(u^{n+1})‖² ≤ (λ_n / (1 − λ_n)) ⟨F(u^{n+1}), u^1 − u^{n+1}⟩ − λ_n ⟨F(u^n), u^1 − u^n⟩ + (1 − 2λ_n) ‖F(u^n)‖², (22)
and we collect the terms involving F(u^{n+1}) on the left-hand side to obtain
‖F(u^{n+1})‖² − (λ_n / (1 − λ_n)) ⟨F(u^{n+1}), u^1 − u^{n+1}⟩ ≤ (1 − 2λ_n) ‖F(u^n)‖² − λ_n ⟨F(u^n), u^1 − u^n⟩. (24)
Furthermore, by our choice of step size sequence {λ_n},
1 − 2λ_n = (n − 1)/(n + 1) (25)
and, for n ≥ 2,
λ_n = ((n − 1)/(n + 1)) · (1/(n − 1)) = ((n − 1)/(n + 1)) · (λ_{n−1} / (1 − λ_{n−1})). (26)
Combining (24), (25), and (26) with the definition of E_n in (7) yields
E_{n+1}(u^{n+1}) ≤ ((n − 1)/(n + 1)) · E_n(u^n). (27)
Applying the inductive hypothesis, we deduce
E_{n+1}(u^{n+1}) ≤ ((n − 1)/(n + 1)) · (C/n) = ((n − 1)/n) · (C/(n + 1)) ≤ C/(n + 1), (28)
and this inequality closes the induction. Thus, (9) holds by the principle of mathematical induction.
Step 2. Let u^⋆ be the projection of u^1 onto Fix(J_{α∂f}) so that
‖u^1 − u^⋆‖ = min{‖u^1 − u‖ : u ∈ Fix(J_{α∂f})} = d_1. (29)
Note this projection is well defined since the set of saddle points is convex. By (9), for k ≥ 2,
‖F(u^k)‖² ≤ (λ_k / (1 − λ_k)) ⟨F(u^k), u^1 − u^k⟩ + C/k (30)
= (1/k) ( ⟨F(u^k), u^1 − u^⋆⟩ + ⟨F(u^k) − F(u^⋆), u^⋆ − u^k⟩ ) + C/k (31)
≤ (1/k) ⟨F(u^k), u^1 − u^⋆⟩ + C/k (32)
≤ (1/k) ‖F(u^k)‖ ‖u^1 − u^⋆‖ + C/k, (33)
where (31) uses λ_k/(1 − λ_k) = 1/k and F(u^⋆) = 0, and (32) holds since F is monotone. Using the quadratic formula with the fact that ‖F(u^k)‖² ≥ 0, we obtain (8), as desired.
Step 3. Let ũ be a limit point of {u^k}. This implies there exists a subsequence {u^{n_k}} that converges to ũ. Since J_{α∂f} is 1-Lipschitz and norms are continuous, it follows that
0 ≤ ‖ũ − J_{α∂f}(ũ)‖ = lim_{k→∞} ‖u^{n_k} − J_{α∂f}(u^{n_k})‖ ≤ lim_{k→∞} (1/2) ( d_1/n_k + √(d_1²/n_k² + 4C/n_k) ) = 0. (34)
By the squeeze lemma, we deduce ũ ∈ Fix(J_{α∂f}), i.e., ũ is a saddle point of f. Because ũ was an arbitrarily chosen limit point, each limit point of {u^k} is a saddle point of f.
A4 DETAILS ON CURRICULUM LEARNING
In the L2O framework, the reward for training the optimizer is defined as:
L(φ) = E_f [ Σ_{t=1}^T w_t R(f(x_t)) ], (35)
where f follows a distribution of functions. The Enhanced Twin-L2O using Curriculum Learning (CL) selects a portion of instances that demonstrate "good training behaviors" (smaller gradients and more likely to get close to stationary points) to be counted into the reward, with the portion C increasing linearly from 20% to 100% as the training epoch increases. In our experiments, the detailed schedule of C is:
C = min{20 + epoch_index, 100}% (36)
where epoch_index denotes the index of the training epoch, starting from 0 and ending with 199 in our case. When applying CL, the actual reward becomes
L̃(φ) = E_f [ Σ_{t=1}^T w_t q(f) R(f(x_t)) ], (37)
where q(f) = 1 if the value m(f) = Σ_{t=1}^T w_t ‖∇_y f(x_t, y_t)‖² ranks in the top C of all sampled functions, and q(f) = 0 otherwise.
This process does not change the structure of Twin-L2O; it essentially adds masks to those training instances that demonstrate poor behavior and ignores them in the actual training phase. Combining this trick with the existing framework, Twin-L2O can achieve a higher success rate when solving problems with a larger range of parameters.
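Concretely, the mask can be realized as below (our sketch; we read "top C" as the instances with the smallest criterion m(f), matching the "smaller gradients" notion of good behavior):

def curriculum_mask(m_values, epoch):
    """q(f) of Eqn. 37: 1 for instances whose criterion m(f) ranks in the
    top-C fraction (smallest values first), 0 otherwise; C follows Eqn. 36."""
    C = min(0.20 + 0.01 * epoch, 1.0)
    k = max(1, int(C * len(m_values)))
    keep = set(sorted(range(len(m_values)), key=lambda i: m_values[i])[:k])
    return [1.0 if i in keep else 0.0 for i in range(len(m_values))]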
A5 COMPUTATIONAL COST ANALYSIS
We analyze the number of multiply–accumulate operations (MACs) of Twin-L2O and K-beam (Hamm & Noh, 2018) for a Seesaw problem testing instance with 20 trials of random x, y initialization, each trial lasting for 1000 iterations. For K-beam, the numbers of MACs are 2.36M (million), 3.8M, 8.11M, and 15.31M for K = 1, 2, 5, 10, respectively. For Twin-L2O, the total number of MACs is 3.86M.
We use K = 5 in K-beam for the experiments in our paper, which costs 2.1 times the MACs of Twin-L2O, yet its solution quality, in terms of both convergence speed and precision, fails to beat Twin-L2O.
A6 MORE DISCUSSIONS ON THE DESIGN OF TWIN-L2O REWARD
We term the reward function in Eqn. 4 an objective-based reward, since it penalizes the objective change from f(x_t, y_t) along both the x and y updating directions. It naturally inherits and extends the reward functions prevailing in most prior L2O works for minimization (Andrychowicz et al., 2016; Li & Malik, 2016), whose default reward is to minimize a weighted sum of the past function values.
One may also design the following two rewards, which we name gradient-based rewards:
L(φ_min, φ_max) = E_f [ Σ_{t=1}^T ‖∇_x f(x_t, y_t)‖² + ‖∇_y f(x_t, y_t)‖² ], (38)
L(φ_min, φ_max) = E_f [ Σ_{t=1}^T ( (f(x_t, y_{t−1}) − f(x_t, y_t)) / ‖y_t − y_{t−1}‖ )² + ( (f(x_{t−1}, y_{t−1}) − f(x_t, y_t)) / ‖x_t − x_{t−1}‖ )² ]. (39)
Eqn. 39 is the gradient-based Nikaido-Isoda function introduced by Raghunathan et al. (2019).
For minimax optimization, it is not immediately clear whether the objective-based or the gradient-based reward works better in practice. Intuitively, by definition, the former is likely to lead towards a saddle point (defined in Eqn. 1) and the latter towards a stationary point. They do not always coincide in general, e.g., a stationary point might not be a saddle point. But for all the specific test problems we studied in Section 4, a stationary point is also a saddle point.
We try several experiments on the challenging Seesaw problem as a specific example, to provide a close comparison between the gradient-based reward in Eqn. 38 and the objective-based reward. We re-run Twin-L2O, only replacing Eqn. 4 with the gradient-based reward, and our observations are: a) the gradient-based reward solves the Seesaw problem worse than the objective-based one; b) the minimization variable x diverges on testing problem instances; c) the maximization variable y converges to a solution of precision magnitude 0.04 (for reference, y converges to a magnitude less than 0.01 when using the objective-based loss). We further identify one possible cause after analyzing the gradient behaviors. Note that here the gradient-based reward can be expressed as:
‖∇_x f(x, y)‖² + ‖∇_y f(x, y)‖² = a²b²π²y² cos²(aπx) + b² sin²(aπx). (40)
Because a, b ∼ U[0.9, 1], the first term often dominates during training due to the π² multiplier, unless y is sufficiently close to zero. The imbalance could be a cause of instability. For example, this reward could sometimes penalize cos²(aπx) towards zero, which is the opposite direction of the true solution sin²(aπx) = 0. Although this is just a very specific problem example, it reveals that the gradient-based loss may sometimes not work as well as expected, due to the instability or asymmetry of the min/max gradients.
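A quick numerical check of Eqn. 40 (ours; the sample point is arbitrary) also exposes this imbalance:

import numpy as np

a, b, x, y = 0.95, 0.95, 0.1, 0.5  # a Seesaw instance and a test point
gx = -a * b * np.pi * y * np.cos(a * np.pi * x)  # df/dx for f = -b*y*sin(a*pi*x)
gy = -b * np.sin(a * np.pi * x)                  # df/dy
lhs = gx**2 + gy**2
rhs = a**2 * b**2 * np.pi**2 * y**2 * np.cos(a * np.pi * x)**2 \
      + b**2 * np.sin(a * np.pi * x)**2
assert np.isclose(lhs, rhs)  # Eqn. 40 holds
print(gx**2 / gy**2)         # ~23 at this point: the pi^2-scaled term dominates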
Besides, we have also tried the second gradient-based reward in Eqn. 39, and found it ineffective, mainly because the denominator (consecutive variable differences) can become very small, so the loss then explodes and breaks training.
Back to the objective-based reward used in this paper, we have not observed oscillation empirically from all experiments so far. Our hypothesis is that the recurrent structure of the proposed Twin-L2O
framework (shown in Eqn. 2) plays a role here. Although we use two LSTMs for the min and max updates respectively, the LSTM of one variable actually takes in the information of the other LSTM implicitly, because it takes the output of the other as input. When we penalize the objective function value of one LSTM update, all previous min and max updates can (in principle) be taken into account due to the effect of unrolled back-propagation, e.g., the min and max updates each take reference to not only its own, but also the other’s higher-order past trajectory information. While this is a tentative explanation, we think more in-depth analysis of why oscillation may or may not happen in L2O could be a really interesting future work.
Another implicit intuition that leads us to prioritizing the use of objective-based over gradient-based is that, in classic minimization, objective change is summable (i.e., having a finite accumulation), but gradient change is not summable in general (unless with properties such as strong convexity). While summability is itself not a guarantee for good training/testing performance, lack of summability means the loss may have an overly large dynamic range.
To summarize, our objective-based reward naturally extends the previous L2O convention, works better than the other alternatives, and has shown no oscillation so far. However, we emphasize that there is no intention to claim that the current reward in Eqn. 4 is the best choice for minimax L2O - it is one of several plausible options. We do concur that the gradient-based reward designs in Eqn. 38 and Eqn. 39 pose a complicated yet interesting question, especially for more complicated minimax problems. Again, as this paper is intended only as a first work and pilot study towards understanding the profound challenges and rich possibilities of minimax L2O, we believe everything discussed and proposed here, including the loss function, has large room for improvement.
| 1. What is the main contribution of the paper regarding learning to optimize minimax optimization?
2. What are the strengths of the proposed approach, particularly in its design options and extensions?
3. How does the reviewer assess the novelty and significance of the paper's contributions compared to prior works?
4. What are the weaknesses of the paper, and what suggestions does the reviewer have for improvements?
5. How does the reviewer evaluate the clarity, quality, and impact of the paper's content? | Review | Review
This paper studies learning to optimize (L2O) for minimax optimization. Since L2O has been studied in a few works, extending L2O from continuous minimization to minimax is a straightforward idea and not especially novel. But it is also a non-trivial effort, as minimax problems are much harder and more unstable to solve.
The authors proposed to use two LSTMs with one shared reward, for updating the min and max variables respectively. They presented a careful ablation study of design options such as (semi-)weight sharing between the two and their reward function, which is valuable for helping us understand what matters in making L2O work for minimax problems.
The authors then presented two extensions to improve the generalization of Twin-L2O. The first one is based on curriculum learning to focus the meta-training gradually from easy to hard instances. The second one is a minimax safeguard mechanism under a special case of solving convex-concave problems; the theory part seems to be a direct extension of (Heaton et al., 2020).
The following suggestions are for the authors:
It is impressive to see that on relatively challenging minimax problems such as Seesaw, Twin-L2O can achieve one-magnitude higher-precision solutions than carefully tuned analytical algorithms. The number of iterations and MAC counts needed for convergence are also comparable. I wonder whether the authors could also make a fair comparison of wall-clock running time?
One further suggestion: it would be natural (and to the authors' benefit) to combine Enhanced Twin-L2O and Safeguarded Twin-L2O for solving convex-concave problems, so that we can get an impression of how large a benefit combining the best of the two L2O improvement ideas can bring.
I appreciate that the authors clearly and openly discuss the current work's limitations at the end of the paper. Although the paper is positioned as a "proof of concept", it could be strengthened further if some real problem were demonstrated, e.g., training a very simple GAN. |
ICLR | Title
Learning A Minimax Optimizer: A Pilot Study
Abstract
Solving continuous minimax optimization is of extensive practical interest, yet notoriously unstable and difficult. This paper introduces the learning to optimize (L2O) methodology to the minimax problems for the first time and addresses its accompanying unique challenges. We first present Twin-L2O, the first dedicated minimax L2O framework consisting of two LSTMs for updating min and max variables separately. The decoupled design is found to facilitate learning, particularly when the min and max variables are highly asymmetric. Empirical experiments on a variety of minimax problems corroborate the effectiveness of Twin-L2O. We then discuss a crucial concern of Twin-L2O, i.e., its inevitably limited generalizability to unseen optimizees. To address this issue, we present two complementary strategies. Our first solution, Enhanced Twin-L2O, is empirically applicable for general minimax problems, by improving L2O training via leveraging curriculum learning. Our second alternative, called Safeguarded Twin-L2O, is a preliminary theoretical exploration stating that under some strong assumptions, it is possible to theoretically establish the convergence of Twin-L2O. We benchmark our algorithms on several testbed problems and compare against state-of-the-art minimax solvers. The code is available at: https://github.com/VITA-Group/L2O-Minimax.
1 INTRODUCTION
Many popular applications can be formulated into solving continuous minimax optimization, such as generative adversarial networks (GAN) (Goodfellow et al., 2014), distributionally robust learning (Globerson & Roweis, 2006), domain adaptation (Ganin & Lempitsky, 2014), distributed computing (Shamma, 2008; Mateos et al., 2010), privacy protection (Wu et al., 2018; 2020), among many more. This paper studies such problems: we consider a cost function f : Rm × Rn → R and the min-max game minxmaxy f(x, y). We aim to find the saddle point (x∗, y∗) of f :
f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗), ∀(x, y) ∈ X × Y, (1)
where X ⊂ Rm and Y ⊂ Rn. If X = Rm and Y = Rn, (x∗, y∗) is called a global saddle point; if X × Y is a neighborhood near (x∗, y∗), (x∗, y∗) is a local saddle point. The main challenge to solve problem (1) is the unstable dynamics of iterative algorithms. Simplest algorithms such as gradient descent ascent (GDA) can cycle around the saddle point or even diverge (Benaım & Hirsch, 1999; Mertikopoulos et al., 2018b; Lin et al., 2019). Plenty of works have been developed recently to address this issue (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Mertikopoulos et al., 2018a; Gidel et al., 2018; Mokhtari et al., 2019). However, the convergence is still sensitive to the parameters in these algorithms. Even if the cost function is only changed by scaling, those parameters have to be re-tuned to ensure convergence.
A recent trend of learning to optimize (L2O) parameterizes training algorithms to be learnable from data, such that the meta-learned optimizers can be adapted to a special class of functions and outperform general-purpose optimizers. That is particularly meaningful, when one has to solve a large number of yet similar optimization problems repeatedly and quickly. Specifically, for existing L2O methods that operate in the space of continuous optimization, almost all of them solve some
∗Equal Contribution.
1
minimization problem (Andrychowicz et al., 2016; Chen et al., 2017; Li & Malik, 2016), leveraging an LSTM or a reinforcement learner to model their optimizer. Different from classic optimization results that often provide worst-case convergence, most L2O methods have little or no convergence guarantees, especially on problem or data instances distinct from what is seen in training, leaving their generalizability in practice questionable (Heaton et al., 2020). Motivated by L2O’s success in learning efficient minimization solvers from data, this paper seeks to answer: whether we could accomplish strong minimax L2O solvers as well; and if yes, how generalizable they could be?
As it might look straightforward at first glance, such extension is highly nontrivial due to facing several unique challenges. Firstly, while continuous minimization has a magnitude of mature and empirically stable solvers, for general minimax optimization, even state-of-the-art analytical algorithms can exhibit instability or even divergence. To the best of our knowledge, most state-of-theart convergence analysis of minimax optimization is built on the convex-concave assumption (Gidel et al., 2018; Mokhtari et al., 2019; Ryu et al., 2019), and some recent works relax the assumption to nonconvex-concave (Lin et al., 2019; 2020). Convergence for general minimax problems is still open. That makes a prominent concern on whether a stable minimax L2O is feasible. Secondly, given the two groups of min and max variables simultaneously, it is unclear to what extent their optimization strategies can be modeled and interact within one unified framework – a new question that would never be met in minimization. Thirdly, the noisy and sometimes cyclic dynamics of minimax optimization will provide noisier guidance (e.g., reward) to L2O; not to say that, it is not immediately clear how to define the reward: for minimization, the reward is typically defined as the negative cumulative objective values along the history (Li & Malik, 2016). However, for minimax optimization the objective cannot simply be decreased or increased monotonically.
Contribution: This paper is a pilot study into minimax L2O. We start by establishing the first dedicated minimax L2O framework, called Twin-L2O. It is composed of two LSTMs sharing one objective-based reward, separately responsible for updating min and max variables. By ablations of the design options, we find this decoupled design facilitate meta-learning most, particularly when the min and max updates are highly non-symmetric. We demonstrate the superior convergence of Twin-L2O on several testbed problems, compared against a number of analytical solvers.
On top of that, we further investigate how to enhance the generalizability of the learned minimax solver1, and discuss two complementary alternatives with experimental validations. The first alternative is an empirical toolkit that is applicable for general minimax L2O. We introduce curriculum learning to training L2O models for the first time, by recognizing that not all problem instances are the same difficult to learn to solve. After plugging in that idea, we show that Twin-L2O can be trained to stably solve a magnitude more problem instances (in terms of parameter varying range). The second alternative explores a theoretical mechanism called safeguarding, particularly for the important special case of convex-concave problems. When solving a testing instance, safeguarding identifies when an L2O failure would occur and provides an analytical fall-back option (Diakonikolas, 2020). That guarantees convergence for convex-concave problems and, in practice, converges faster even when the problem parameters are drawn from a different distribution from training.
2 RELATED WORK
2.1 MINIMAX OPTIMIZATION
Following (Neumann, 1928), the problem (1) has been studied for decades due to its wide applicability. Simultaneous gradient descent (SimGD) or gradient descent ascent (GDA) (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) is one of the simplest minimax algorithms, conducting gradient descent over variable x and gradient ascent over variable y. However, the dynamics of SimGD or GDA can converge to limit cycles or even diverge (Benaım & Hirsch, 1999; Mertikopoulos et al., 2018b; Lin et al., 2019). To address this issue, Optimistic gradient descent ascent (OGDA) simply modifies the dynamics of GDA and shows more stable performance (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Mertikopoulos et al., 2018a; Gidel et al., 2018; Mokhtari et al., 2019). OGDA attracts more attention because of its empirical success in training GANs. (Ryu et al., 2019) theoretically studies OGDA by analyzing its continuous time dynamic and
1We differentiate the usages of two terms: parameters and variables, throughout the paper. For example, minu maxv ax
2 − by2, we call a, b parameters and x, y variables. For simplicity, this paper only discusses the L2O generalizability when the testing instances’ parameter distribution differs from the training.
2
proposes Anchored simultaneous gradient descent that shows good performance. Follow-the-Ridge (Wang et al., 2019) also addresses the limit cycling problem by introducing second-order information into the dynamic of GDA. Lately, K-Beam (Hamm & Noh, 2018) stabilizes the convergence of GDA by duplicating variable y, yielding strong performance. At each iteration, it performs gradient ascent independently on K copies of y and greedily chooses the copy that leads to a large function value f , then it updates x based on the selected copies.
2.2 LEARNING TO OPTIMIZE
As a special instance of meta-learning, L2O has been studied in multiple contexts, with continuous optimization being one of its main playgrounds so far. The first L2O framework is introduced in (Andrychowicz et al., 2016), where both the optimizee’s gradients and loss function values are formulated as the input features for an RNN optimizer. Due to the enormous number of parameters, a coordinate-wise design of RNN optimizer is adopted, where all optimization coordinates share the same updating strategy. (Li & Malik, 2016) uses the gradient history and objective values as observations and step vectors as actions in their reinforcement learning framework. (Chen et al., 2017) leverages RNN to train a meta-optimizer to optimize black-box functions. Two effective training tricks, random scaling and objective convexifying, are presented in (Lv et al., 2017). Wichrowska et al. (2017) presents an optimizer of multi-level hierarchical RNN architecture augmented with additional architectural features. Li et al. (2020) introduces a Jacobian regularization to L2O and enhances the domain adaptation performance of optimizees. Chen et al. (2020a) proposes several improved training techniques to stabilize L2O training and ameliorate performance. You et al. (2020); Chen et al. (2020b;c) extend the application scope of L2O into various practical problems such as graph neural network training, domain generalization, and noisy label training.
The above works address continuous minimization problems using single optimizer models. One exception, (Cao et al., 2019), extends L2O to solving Bayesian swarm optimization. The author presents a novel architecture where multiple LSTMs jointly learn iterative update formulas for the swarm of particles, coordinated by attention mechanisms. We also notice that two recent efforts (Jiang et al., 2018; Xiong & Hsieh, 2020) introduce L2O to adversarial training, a renowned application of minimax optimization. However, both of them merely utilize L2O to solve the inner minimization of their minimax problems (i.e., generating attacks), while the outer maximization is still solved analytically. Neither of the two directly solves the full minimax optimization.
3 METHOD
3.1 MAIN FRAMEWORK: TWIN LEARNABLE OPTIMIZERS (TWIN-L2O)
The main L2O framework we propose is named Twin-L2O, where we use two learnable optimizers to alternate between min and max updates. Our design adopts the basic idea of (Andrychowicz et al., 2016) to use Long Short-Term Memory (LSTM) networks to model learnable optimizers, for solving target problems known as optimizees. At each step, the LSTM outputs the update of the optimizee variables. The LSTM inputs are typically the current zero-order or first-order information of the optimizee (Andrychowicz et al., 2016; Lee & Choi, 2018), plus the historical optimization trajectory information.
In Twin-L2O, two LSTMs separately update x and y and record the historical trajectory information of their own variables, respectively. Formally, we consider the minimax problem $\min_x \max_y f(x, y)$. We use two LSTM optimizers, LSTM-Min and LSTM-Max, to update the min variable x and the max variable y, respectively. LSTM-Min is parameterized by $\phi_{\min}$ and LSTM-Max by $\phi_{\max}$. At each iteration t, Twin-L2O updates x and y in turn, yielding the following rule:
$$x_{t+1} = x_t + \Delta x_t, \quad \text{where } (\Delta x_t, h^{\min}_{t+1}) = \text{LSTM-Min}\big([\nabla_x f(x_t, y_t), \nabla_y f(x_t, y_t)],\, h^{\min}_t,\, \phi_{\min}\big),$$
$$y_{t+1} = y_t + \Delta y_t, \quad \text{where } (\Delta y_t, h^{\max}_{t+1}) = \text{LSTM-Max}\big([\nabla_y f(x_{t+1}, y_t), \nabla_x f(x_{t+1}, y_t)],\, h^{\max}_t,\, \phi_{\max}\big), \tag{2}$$
where $h^{\min}_t$ and $h^{\max}_t$ are the historical trajectory information of LSTM-Min and LSTM-Max at time step t. This formulation is inspired by the SimGD/GDA-style algorithms (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) that conduct simultaneous/alternative gradient descent over x and ascent over y. Figure A4 (Appendix A1) conceptually illustrates the framework.
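For concreteness, below is a minimal PyTorch-style sketch of one such iteration at inference time. The `lstm_min`/`lstm_max` call signature (features, hidden state) returning (update, new hidden state), and the assumption that x and y are 1-D tensors with `requires_grad=True`, are our own illustrative choices, not the authors' released implementation (for meta-training, the inner gradient calls would additionally need `create_graph=True`).

```python
import torch

def twin_l2o_step(f, x, y, lstm_min, lstm_max, h_min, h_max):
    """One Twin-L2O iteration following Eqn. (2): LSTM-Min updates x,
    then LSTM-Max updates y at the refreshed point (x_{t+1}, y_t)."""
    # Min update: both gradients at (x_t, y_t) are input features for LSTM-Min.
    gx, gy = torch.autograd.grad(f(x, y), (x, y))
    dx, h_min = lstm_min(torch.cat([gx.flatten(), gy.flatten()]), h_min)
    x = x + dx

    # Max update: gradients are recomputed at (x_{t+1}, y_t) for LSTM-Max.
    gx, gy = torch.autograd.grad(f(x, y), (x, y))
    dy, h_max = lstm_max(torch.cat([gy.flatten(), gx.flatten()]), h_max)
    y = y + dy
    return x, y, h_min, h_max
```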
The next question is how to design the L2O reward. To train the LSTM optimizers, the loss function is often designed to penalize some type of cost, accumulated along the optimization trajectory for a horizon of T
steps (also known as the unrolling length for LSTM (Sherstinsky, 2018))
$$L(\phi_{\min}, \phi_{\max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} w_t R(f, x_t, y_t)\right], \tag{3}$$
where $w_t$ is set to 1 for all t, following the basic setting in (Andrychowicz et al., 2016); it might be tuned for better performance in future work.
As a key design option, R(f) represents the reward to guide the L2O training. In existing L2O methods for continuous minimization (Andrychowicz et al., 2016; Lv et al., 2017), R(f) is usually simply set to $R(f, x_t) = f(x_t)$ to encourage a fast decrease of objective values over time. To extend this existing reward to the minimax scenario, we cannot directly penalize the overall objective function value either way, since the min and max objectives are entangled. Also, different from pure minimization problems, the Twin-L2O updates (2) consist of two alternating steps governed by two different LSTM optimizers: each accounts for its own subproblem goal (min or max updates), but the two also have to collaborate to explore/exploit the minimax landscape. We address the above issue implicitly by designing the following new reward function:
$$L(\phi_{\min}, \phi_{\max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} \big\{[f(x_t, y_{t-1}) - f(x_t, y_t)] + [f(x_t, y_{t-1}) - f(x_{t-1}, y_{t-1})]\big\}\right]. \tag{4}$$
Analysis of the reward design In Eqn. 4, the first and second terms always characterize two consecutive min and max updates. In more detail, the value of $f(x_t, y_t) - f(x_t, y_{t-1})$ solely reflects how effectively the t-step max update increases the objective f, while $f(x_t, y_{t-1}) - f(x_{t-1}, y_{t-1})$ reflects the effectiveness of the t-step min update in decreasing the objective f. Our goal is then to maximize the weighted accumulated sum of $f(x_t, y_t) - f(x_t, y_{t-1})$, while minimizing the weighted accumulated sum of $f(x_t, y_{t-1}) - f(x_{t-1}, y_{t-1})$, for $t = 1, 2, \ldots, T$. Combining the two sub-goals (with a sign change to turn max into min) yields our reward. One may also alternatively interpret Eqn. 4 as penalizing the loss change from $f(x_t, y_t)$ along both the x and y updating directions, which encourages yielding stationary points.
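As an illustration, the meta-loss in Eqn. 4 can be accumulated over an unrolled trajectory as sketched below; the list-of-iterates layout and function names are our own simplifying assumptions.

```python
def twin_l2o_meta_loss(f, xs, ys):
    """Eqn. (4) accumulated over a length-T trajectory.

    xs = [x_0, ..., x_T], ys = [y_0, ..., y_T]: iterates produced by
    LSTM-Min and LSTM-Max. Minimizing this loss rewards max updates
    that increase f and min updates that decrease f."""
    total = 0.0
    for t in range(1, len(xs)):
        max_term = f(xs[t], ys[t - 1]) - f(xs[t], ys[t])          # -(max-step gain)
        min_term = f(xs[t], ys[t - 1]) - f(xs[t - 1], ys[t - 1])  # min-step change (negative when effective)
        total = total + max_term + min_term
    return total
```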
For the reward design, we provide a more detailed discussion in Appendix A6. Specifically, we provide a comparison between the objective-based reward in Eqn. 4 and another possible gradient-based reward; the latter was found to be ineffective in solving the problems presented in Section 4.
Rationale of the framework selection Another important design question is to what extent learning the min and max updates should be (dis)entangled: on the one hand, the two steps obviously interact with each other as they jointly explore the minimax landscape; on the other hand, min and max steps commonly have asymmetric difficulty levels, which has been leveraged by previous algorithms. For example, (Hamm & Noh, 2018) demonstrates the failure of alternating gradient descent in minimax optimization due to the multiple-solution discontinuity of the inner maximization, and addresses that by simultaneously tracking K candidate solutions for the max step, while the outer minimization still takes one descent step. Besides the joint reward (4), the default Twin-L2O design leverages two independent LSTMs in Eqn. (2), each dedicatedly handling min or max updates. In comparison, we also consider two other, more "entangled" designs: (a) fully entangling the two optimizers, i.e., using one LSTM to simultaneously generate min and max outputs; (b) weakly entangling the two optimizers, by using two LSTMs sharing weights, yet allowing each to maintain its own temporal hidden states. Our ablation experiments (see Section 4.1) find that the default decoupled design in Eqn. (2) facilitates the L2O learning most.
3.2 IMPROVING GENERALIZABILITY OF TWIN-L2O
Despite the empirical success of L2O, it is unfortunately impossible to ensure that any L2O algorithm always converges. Even if the objective function type stays unchanged, the testing instances' parameter distribution may differ from that of training, and L2O can then catastrophically fail. For Twin-L2O, we discuss two remedies that partially fix this issue and boost its generalizability.
We first propose a curriculum L2O training scheme as a practical training technique, such that Twin-L2O can be trained to work on a much wider coverage of problem parameters than its vanilla version. That empirically helps generalizability due to the broader coverage of training instances, but would still inevitably fail on some unseen testing instances. We then present a
preliminary exploration of the safeguard mechanism for minimax in a special case, i.e., solving convex-concave problems. We demonstrate that under such strong assumptions, it is possible to theoretically establish the "perfect" convergence of Twin-L2O on any unseen optimizee.
Curriculum L2O Training When it comes to general minimax problems, an ideal theory that fully ensures Twin-L2O convergence on all instances is unlikely to exist. Therefore, we seek empirical L2O success on as many instances as possible. Specifically: can we train Twin-L2O better, so that it works on instances over a broader parameter range?
We find a curriculum learning (CL) strategy (Bengio et al., 2009) particularly useful. CL was first adopted to train neural networks by focusing the training on an "easy" training subset (often adaptively selected) that is then gradually grown to the full set. It is known to be effective in stabilizing training, especially when the training set is highly varied or noisy (Jiang et al., 2017). Since minimax optimization is notoriously unstable, whether via analytical or learned optimizers, we conjecture that the noisy minimax dynamics might challenge Twin-L2O by providing unreliable guidance and impede its training. Considering that our Twin-L2O is modeled using LSTMs, it is natural to ask whether CL can bring additional gains when applied to meta-training. Previously, it was also found effective in L2O for minimization problems (Chen et al., 2020a).
Method 1 Safeguarded-Twin-L2O for Convex-Concave Saddle Point Problems

1: Initialize u^1 ∈ R^m × R^n, C ∈ [0, ∞), α ∈ (0, ∞), {λ_ℓ} = {1/(ℓ+1)}, k ← 2, weights {φ_k}
2: function SADDLEHALPERN(ε)
3:     u^2 ← (1/2)(u^1 + J_{α∂f}(u^1))
4:     while ‖u^k − J_{α∂f}(u^k)‖ > ε do
5:         z^{k+1} ← LSTM(u^k; φ_k)                          ▷ Apply L2O operator
6:         if E_{k+1}(z^{k+1}) ≤ C/(k+1) then                ▷ Verify safeguard condition
7:             u^{k+1} ← z^{k+1}                             ▷ Use L2O update
8:         else
9:             u^{k+1} ← λ_k u^1 + (1 − λ_k) J_{α∂f}(u^k)    ▷ Use fallback update
10:        k ← k + 1
11:    return u^k
12: end function
Specifically, in each epoch, we rank all optimizee instances by their cumulative losses (3) from low to high, and only select the top C fraction of instances to count into the total reward. In that way, only the instances that exhibit "good training behaviors" (smaller gradients & more likely to get close to stationary points) are initially used for updating the Twin-L2O. That prevents the learned optimizer from being misled by random failures and outliers, which are commonly found in the early epochs of Twin-L2O training. We by default set the percentage C to start from 20%, then grow it linearly every epoch until reaching 100% in the later training stage.
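A minimal sketch of this selection rule follows; the helper name is ours, and the 1% per-epoch increment matches the schedule in Eqn. 36 of Appendix A4.

```python
import torch

def curriculum_reward(instance_losses, epoch):
    """Keep only the top-C fraction of 'easiest' instances, i.e., those with
    the smallest cumulative losses (3), when forming the total reward."""
    c = min(0.20 + 0.01 * epoch, 1.0)             # C: 20% growing to 100%
    k = max(1, int(c * instance_losses.numel()))
    easiest, _ = torch.topk(instance_losses, k, largest=False)
    return easiest.sum()                          # reward used for the meta-update
```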
To the best of our knowledge, this is the first effort to incorporate CL into L2O training. We call this Twin-L2O trained with CL the Enhanced Twin-L2O: note that it has the same model structure, just trained in a different and better way. More details can be found in Appendix A4.
Safeguard Twin-L2O: A Preliminary Theoretical Exploration Most L2O methods have little or no convergence guarantees. Very recently, a safeguarding mechanism has been introduced to L2O for convex minimization problems with gradient and/or proximal oracles (Heaton et al., 2020). Conceptually, a safeguard is anything that identifies when a "bad" L2O update would occur and what "fallback" update to apply in place of that bad L2O update. In this section, we establish a safeguarding theory and algorithm, specifically for learned convex-concave saddle-point algorithms. Here the safeguard takes the form of an energy inequality (cf. Line 6 in Method 1).
In this section, we write $u = (x, y) \in \mathbb{R}^m \times \mathbb{R}^n$ and let $\alpha > 0$. We use the resolvent, defined by
$$J_{\alpha\partial f} := (\mathrm{Id} + \alpha\partial f)^{-1}, \tag{5}$$
where we note $\partial f = (\partial_x f, -\partial_y f)$. For simple f (e.g., quadratic functions), a closed formula exists for $J_{\alpha\partial f}$. Otherwise, one may use an iterative method to approximate this quantity. In addition, define the residual operator
$$F(u) := \frac{1}{2}\left(u - J_{\alpha\partial f}(u)\right), \tag{6}$$
and, for each $k \in \mathbb{N}$, the energy $E_k : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ by
$$E_k(u) := \|F(u)\|^2 - \frac{\lambda_k}{1-\lambda_k}\,\langle F(u),\, u^1 - u\rangle, \tag{7}$$
where $\{\lambda_k\}$ is a sequence of step sizes. The full method is outlined in Method 1, where the L2O update is denoted by LSTM$(u^k; \phi_k)$ and the fallback method is a Halpern iteration (Halpern, 1967). Our main result for the minimax safeguarding theory is formally stated below:
Theorem 3.1. If the sequence $\{u^k\}$ is generated by Method 1, then
$$\|u^k - J_{\alpha\partial f}(u^k)\| \le \frac{1}{2}\left(\frac{d_1}{k} + \sqrt{\frac{d_1^2}{k^2} + \frac{4C}{k}}\right), \quad \text{for all } k \ge 2, \tag{8}$$
where $d_1 := \min\{\|u - u^1\| : 0 \in \partial f(u)\}$ is the distance from the initial iterate $u^1$ to the set of saddle points and $C \ge 0$ is an arbitrary constant. In particular, this implies each limit point of $\{u^k\}$ is a saddle point.
Our proof draws on and integrates two sources of ideas: (1) the safeguarded L2O technique that has recently been introduced for convex minimization (Heaton et al., 2020); and (2) Halpern iteration (Diakonikolas, 2020), which is adopted for analytical minimax optimization with favorable theoretical properties. The full proof is provided in Appendix A3. Note that this work is not intended as a theory innovation on (classical) minimax optimization. Instead, our aim is to extend the emerging idea of safeguarded L2O from convex minimization to convex-concave minimax problems of interest, and to show this idea to be helpful for minimax L2O too: see the experiments in Section 4.
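To make the mechanism concrete, below is a sketch of one safeguarded step for the bilinear matrix game $f(x, y) = x^T A y$, for which $\partial f(u) = Mu$ with $M = \begin{pmatrix} 0 & A \\ -A^{\top} & 0 \end{pmatrix}$, so the resolvent has a closed form. The NumPy implementation and variable names are our illustrative assumptions, not the authors' code.

```python
import numpy as np

def safeguarded_step(u, u1, z, k, A, alpha, C):
    """One iteration of Method 1 for f(x, y) = x^T A y.

    u: current iterate (x, y) stacked; u1: initial iterate;
    z: tentative L2O (LSTM) update z^{k+1}; k: iteration counter (k >= 2)."""
    m, n = A.shape
    # Monotone operator M(x, y) = (A y, -A^T x); resolvent J = (I + alpha*M)^{-1}.
    M = np.block([[np.zeros((m, m)), A], [-A.T, np.zeros((n, n))]])
    J = lambda v: np.linalg.solve(np.eye(m + n) + alpha * M, v)
    F = lambda v: 0.5 * (v - J(v))                    # residual operator, Eqn. (6)
    # Energy E_{k+1} at the tentative update, Eqn. (7); lambda_l = 1/(l+1)
    # gives lambda_{k+1}/(1 - lambda_{k+1}) = 1/(k+1).
    E = F(z) @ F(z) - (1.0 / (k + 1)) * (F(z) @ (u1 - z))
    if E <= C / (k + 1):                              # safeguard condition holds
        return z                                      # accept the L2O update
    lam = 1.0 / (k + 1)                               # lambda_k
    return lam * u1 + (1 - lam) * J(u)                # fallback: Halpern iteration
```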
4 EXPERIMENTS
4.1 ABLATION STUDY ON THE DESIGN OF TWIN-L2O
We first investigate the design choices for Twin-L2O that we discussed in Section 3.1. We mainly investigate two aspects: (i) whether to share the weights of the two LSTM solvers or not; (ii) whether to share the hidden states between the two LSTM solvers or not. That leads us to four options, denoted as (with self-explanatory names): Share-LSTM-Share-Hidden, Share-LSTM-Two-Hidden, Two-LSTM-Share-Hidden, and Two-LSTM-Two-Hidden. We use the Seesaw problem, formulated below, as the testbed for our ablation study (note that the ranges of a, b are picked only to make L2O easy to converge; more will be investigated in Section 4.3):
Seesaw: $\min_x \max_y\; -by\sin(a\pi x), \quad a \sim U[0.9, 1],\; b \sim U[0.9, 1]$ (Seesaw)
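For reference, a Seesaw optimizee instance can be written out as in the following sketch (our own helper, with analytical gradients derived from the objective above):

```python
import numpy as np

def sample_seesaw(rng):
    """Sample one Seesaw instance f(x, y) = -b*y*sin(a*pi*x), a, b ~ U[0.9, 1]."""
    a, b = rng.uniform(0.9, 1.0, size=2)
    f  = lambda x, y: -b * y * np.sin(a * np.pi * x)
    gx = lambda x, y: -a * b * np.pi * y * np.cos(a * np.pi * x)  # df/dx
    gy = lambda x, y: -b * np.sin(a * np.pi * x)                  # df/dy
    return f, gx, gy
```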
The Seesaw problem is nonconvex-concave and is considered challenging (Hamm & Noh, 2018) due to its non-differentiability, which arises because the solutions of the state equation or the adjoint state equation are not unique (Danskin, 1966). The L2O training routine follows (Andrychowicz et al., 2016): we use 128 optimizee instances for training; each of them has its parameters i.i.d. sampled and its variables x, y randomly initialized by i.i.d. sampling from U[−0.5, 0.5]. A validation set of 20 optimizees is used, with parameters and variables sampled in the same way; similarly, we generate a hold-out testing set of another 100 instances. In each epoch, the L2O optimizer updates the optimizee variables for 1000 iterations, with an unrolling length of T = 10. When the next epoch starts, all x, y as well as the LSTM hidden states are reset. We train the L2O solvers for 200 epochs, using Adam with a constant learning rate of 10−4. We pick the model checkpoint at the epoch when its validation performance reaches the peak.

Figure 1 compares the convergence results of the four options, evaluated on the same testing set. We measure the $\ell_2$ distances between the solved variables and their corresponding ground-truth solutions (or the closest one, if multiple exist). It is obvious that only Two-LSTM-Two-Hidden successfully converges to the correct solution $(x^*, y^*) = (0, 0)$, which is also the equilibrium. Our major observation from the above experiments is that for minimax L2O optimization, especially for asymmetric problems such as Seesaw, it is a better choice to use two decoupled LSTM solvers and let each take care of its own trajectory information. We will hence stick to this option and use it as our default Twin-L2O.
All experiments in this and the following sections are conducted on GeForce GTX 1080 Ti GPUs.
4.2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
In this section, we apply Twin-L2O to two more test problems besides Seesaw:
• Rotated Saddle2: $\min_x \max_y\; ax^2 - by^2 + 2xy$, $a \sim U[0.9, 1]$, $b \sim U[0.9, 1]$
• Matrix Game: $\min_x \max_y\; x^T A y$, $A \in \mathbb{R}^{5\times 5}$, $A_{i,j} \sim \text{Bernoulli}(0.5) \cdot U[-1, 1]$ (a sampling sketch for both follows below)
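The two problem families can be instantiated as in this sketch (our own helpers, assuming a NumPy `Generator` as the randomness source):

```python
import numpy as np

def sample_rotated_saddle(rng):
    """Rotated Saddle instance: f(x, y) = a*x^2 - b*y^2 + 2*x*y."""
    a, b = rng.uniform(0.9, 1.0, size=2)
    return lambda x, y: a * x**2 - b * y**2 + 2 * x * y

def sample_matrix_game(rng, n=5):
    """Matrix Game instance: f(x, y) = x^T A y with sparse-uniform A."""
    mask = rng.random((n, n)) < 0.5                  # Bernoulli(0.5) support
    A = mask * rng.uniform(-1.0, 1.0, size=(n, n))   # elementwise U[-1, 1]
    return A, (lambda x, y: x @ A @ y)
```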
On all three problems, we compare Twin-L2O with several state-of-the-art algorithms: Gradient Descent Ascent (GDA) (Lin et al., 2019), Optimistic Mirror Descent (OMD) (Daskalakis et al., 2018), and GD with anchoring (GD-Anchoring) (Ryu et al., 2019). On Rotated Saddle and Seesaw, we additionally compare with K-beam (Hamm & Noh, 2018). For the Matrix Game, we also compare with the standard Halpern iteration (Diakonikolas, 2020), which is designed for convex-concave minimax problems. For these analytical methods, all parameters are tuned with careful grid search. We train, validate, and test Twin-L2O models following the protocol described in Section 4.1.
Figure 2 plots the convergence curves of all methods, averaged across all testing problems (and each with 20 trials of random x, y initialization). Several observations are drawn below:
• L2O does not show superiority over well-tuned analytical algorithms on the simplest Rotated Saddle problem (and similarly Saddle). The problem is very gradient-friendly, and therefore OMD already achieves the best convergence speed as well as solution quality.
• On Matrix Game, Twin-L2O starts to show competitive edges over analytical solvers with faster convergence speed and higher-precision solutions.
• On the Seesaw problem, Twin-L2O largely outperforms all carefully-tuned analytical algorithms, achieving one-magnitude higher-precision solutions with comparable convergence speed. That shows us one take-home message: L2O can work for minimax optimization, and can contribute most significantly to those hard problems. That makes minimax L2O a highly meaningful complement to existing analytical minimax solvers. More analysis on comparing the actual computational costs (MAC numbers) can be found in Appendix A5.
2We also test on the classical Saddle problem, but its behaviors and conclusions are almost identical to the Rotated Saddle. We hence report on Rotated Saddle due to the space limit.
4.3 ENHANCED TWIN-L2O: CURRICULUM LEARNING EVALUATION
We again use the Seesaw problem as an example in this section. Its two parameters a and b, i.e., the problem period and the scale, are sampled independently from two uniform distributions $U[L^1_a, L^2_a]$ and $U[L^1_b, L^2_b]$. In Section 4.1, both were chosen as U[0.9, 1] for the ease of L2O convergence. We now stretch both parameter ranges and test whether an L2O model can still solve the resulting broader range of problems. All other training protocols follow Section 4.1 identically.
Sections 4.1 and 4.2 evaluate the average solution distance over the testing set (100 instances), which worked well for the small [a, b] ranges used there. However, when we extend the [a, b] ranges, we find that the L2O behaviors can differ vastly across testing instances, i.e., some converge quickly while others suffer from heavy fluctuations or even diverge; this is an artifact of inefficient L2O training that leaves it unable to cover the full, larger problem range. That motivates us to carefully re-design our evaluation metrics here, to reflect both the solution quality and its variation/stability.
For the p-th testing instance, we record its $\ell_2$ solution distance $D^p_t$ at iteration $t = 1, 2, \ldots$. Given two thresholds $\epsilon_{acc}$ and $\epsilon_{std}$ (chosen by multi-fold validation; we use defaults $\epsilon_{acc} = 2\times 10^{-2}$ and $\epsilon_{std} = 10^{-4}$), we define two forms of success rate (SR):
$$SR_1 = \frac{\sum_{p=1}^{n} \mathbb{I}\big(\bar{d}(D^p) < \epsilon_{acc}\big)}{n}, \qquad SR_2 = \frac{\sum_{p=1}^{n} \mathbb{I}\big(\mathrm{Std}(D^p) < \epsilon_{std}\big)}{n},$$
where $\bar{d}(D^p) = \frac{\sum_{t=t_0}^{L} D^p_t}{L - t_0 + 1}$, $\mathrm{Std}(D^p) = \mathrm{Std}\big(\{D^p_t\}_{t=t_0}^{L}\big)$, and $t_0 = 0.8L$; $n = 100$ is the number of testing instances; $L = 1000$ is the total number of iterations for which each instance (optimizee) is trained by L2O. Intuitively, $SR_1$ emphasizes the average solution precision over the last 20% of iterations, and $SR_2$ measures how large a solution variation is seen over the last 20% of iterations.
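A sketch of how the two success rates can be computed from recorded distance trajectories is below (the array layout and function name are our assumptions):

```python
import numpy as np

def success_rates(D, eps_acc=2e-2, eps_std=1e-4):
    """D: array of shape (n, L), D[p, t] = solution distance of instance p
    at iteration t. SR1/SR2 are computed over the last 20% of iterations."""
    n, L = D.shape
    t0 = int(0.8 * L)
    tail = D[:, t0:]                              # iterations t0 .. L-1
    sr1 = np.mean(tail.mean(axis=1) < eps_acc)    # average-precision criterion
    sr2 = np.mean(tail.std(axis=1) < eps_std)     # stability criterion
    return sr1, sr2
```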
Table 1 compares Twin-L2O and Enhanced Twin-L2O at multiple combinations that stretch the ranges of a and b, starting from the original [0.9, 1] × [0.9, 1] and going up to as large as [0, 5] × [0, 2]: the parameter coverage increases by 1,000 times. Adding CL evidently helps Twin-L2O stay effective to train over a broader instance range, under both SR metrics. Vanilla Twin-L2O performs perfectly at [0.9, 1] × [0.9, 1], yet begins to drop at [0, 1] × [0.9, 1] (mainly showing higher instability, as indicated by lower SR2), and hardly succeeds beyond [0, 3.5] × [0.9, 1]. In contrast, Enhanced Twin-L2O obtains nontrivial results even at [0, 5] × [0, 1] (a range tens of times wider than the vanilla one can handle).
4.4 SAFEGUARDED TWIN-L2O EXPERIMENTS
Here we use the Matrix Game as the example to evaluate the above-established safeguard mechanism for convex-concave minimax optimization. We directly take a well-trained Twin-L2O model for the Matrix Game from Section 4.2, where the matrix $A \in \mathbb{R}^{5\times 5}$ and $A_{i,j} \sim \text{Bernoulli}(0.5) \cdot U[-1, 1]$, and the coordinates of the initial optimization variables x and y are independently sampled from U[−1, 1]. During testing, in addition to testing the Twin-L2O model on testing data from this seen distribution, we also evaluate it on unseen data, whose A is now sampled from an intentionally very distinct distribution: $A_{i,j} \sim \text{Bernoulli}(1.0) \cdot U[-8, 8]$; x and y are initialized in the same manner. We compare Safeguarded Twin-L2O (denoted as Safe-Twin-L2O), with standard Halpern iteration (Diakonikolas, 2020) as the fallback update when the L2O update is disapproved in Method 1. We also compare with OMD and GD-Anchoring on both seen and unseen testing data (GDA fails to converge in both cases, even when we tune its hyperparameters to our best efforts). The results are shown in Figure 3. When tested on the aggressively varied unseen data, the vanilla Twin-L2O model fails and diverges, but Safe-Twin-L2O still converges successfully: even faster than Halpern iteration and OMD, and much better than GD-Anchoring.
5 CONCLUSION
This paper studies L2O for minimax optimization for the first time. We present the Twin-L2O model, and further improve its generalizability by introducing a theoretically grounded safeguarding framework (for convex-concave problems), as well as an empirical curriculum training strategy (for general problems). Extensive simulations endorse the promise of our algorithms. This pilot study suggests and paves the way for extending L2O beyond continuous minimization problems.
Limitation: The entire L2O field faces challenges in scaling up to larger optimization problems (Andrychowicz et al., 2016), and our study is no exception. Despite very promising gains on challenging cases such as the Seesaw and Matrix Game problems, the current work only proves the first concept of minimax L2O on relatively basic, low-dimensional test problems. Our immediate next
step is to scale up Twin-L2O, and to explore its potential in solving the minimax application problems of practical interest, such as adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020) and GANs (Gulrajani et al., 2017). A potential idea might leverage the memory-efficient hierarchical RNN structure in (Wichrowska et al., 2017).
A1 TWIN-L2O FRAMEWORK
Figure A4: Architecture of Twin-L2O. We let LSTM-Min and LSTM-Max, parameterized by φmin and φmax, update x and y respectively. As shown by curved dashed lines, Twin-LSTM keeps being updated about the latest variable values of x and y when computing input information and the reward. When constructing the computational graph and training the Twin-LSTM, the solid lines allow gradients to flow while the dashed lines do not pass any gradient (Andrychowicz et al., 2016).
A2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
Figure A5 shows the performance of the y variable on Rotated Saddle, Matrix Game, and Seesaw. The analyses of these results generally align with those in the main text.
[Figure A5 here: distance vs. iteration (log scale, 10^0 to 10^3) for variable y. Panels (a) Rotated Saddle and (c) Seesaw compare GDA, OMD, GD-Anchoring, K-beam, and Twin-L2O; panel (b) Matrix Game compares GDA, OMD, GD-Anchoring, Halpern, and Twin-L2O. The annotated final distances are 5.56e-92, 1.89e-03, and 2.94e-03, respectively.]
Figure A5: Convergence comparison of variable y between Twin-L2O and state-of-the-art analytical minimax optimizers (GDA, OMD, GD-Anchoring, and K-beam), for three test problems.
A3 PROOF OF SAFEGUARDING RESULT
Below is a proof of the main result, Theorem 3.1:
Proof. We proceed in the following manner, with much credit due to the analysis in (Diakonikolas, 2020). First we verify an inequality with the energy sequence {Ek(uk)} (Step 1). This is used to obtain the convergence rate (Step 2). Resulting implications about limit points are established last (Step 3).
Step 1. We claim
$$E_k(u^k) \le \frac{C}{k}, \quad \text{for all } k \ge 2. \tag{9}$$
We proceed by induction. First note that $J_{\alpha\partial f}$ is firmly nonexpansive, and so $2F = \mathrm{Id} - J_{\alpha\partial f}$ is also firmly nonexpansive (Bauschke et al., 2011), which implies
$$\|2F(u) - 2F(v)\|^2 \le \langle 2F(u) - 2F(v),\, u - v\rangle, \quad \text{for all } u, v \in \mathbb{R}^m \times \mathbb{R}^n. \tag{10}$$
Using (10) with $u = u^2$ and $v = u^1$, together with our choice of step sizes $\{\lambda_k\}$, we find
$$\begin{aligned}
E_2(u^2) &= \|F(u^2)\|^2 - \tfrac{\lambda_1}{1-\lambda_1}\langle F(u^2),\, u^1 - u^2\rangle &\text{(11)}\\
&= \|F(u^2)\|^2 - \langle F(u^2),\, u^1 - u^2\rangle &\text{(12)}\\
&= \langle F(u^2),\, F(u^2) - F(u^1)\rangle &\text{(13)}\\
&= \|F(u^2) - F(u^1)\|^2 + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &\text{(14)}\\
&\le \tfrac{1}{2}\|2F(u^2) - 2F(u^1)\|^2 + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &\text{(15)}\\
&\le \langle F(u^2) - F(u^1),\, u^2 - u^1\rangle + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &\text{(16)}\\
&= -\langle F(u^2) - F(u^1),\, F(u^1)\rangle + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &\text{(17)}\\
&= 0. &\text{(18)}
\end{aligned}$$
Thus, $E_2(u^2) \le 0 \le C/2$, and the base case holds. Inductively, suppose (9) holds taking $k = n$ for some $n \ge 2$. If $u^{n+1} = z^{n+1}$, then (9) holds, taking $k = n+1$, by the conditional statement in Line 6 of Method 1. Alternatively, suppose $u^{n+1} \ne z^{n+1}$. Applying (10) with $u = u^{n+1}$ and $v = u^n$ yields
$$\|F(u^{n+1}) - F(u^n)\|^2 \le 2\|F(u^{n+1}) - F(u^n)\|^2 \le \langle F(u^{n+1}) - F(u^n),\, u^{n+1} - u^n\rangle. \tag{19}$$
Upon expansion of the left-hand side, we discover
$$\|F(u^{n+1})\|^2 \le \langle F(u^{n+1}),\, u^{n+1} - u^n + 2F(u^n)\rangle - \langle F(u^n),\, u^{n+1} - u^n + F(u^n)\rangle. \tag{20}$$
Algebraic manipulations of the update formula for $u^{n+1}$ yield the relations
$$u^{n+1} - u^n + 2F(u^n) = \frac{\lambda_n}{1-\lambda_n}(u^1 - u^{n+1}), \tag{21a}$$
$$u^{n+1} - u^n + F(u^n) = \lambda_n(u^1 - u^n) - (1 - 2\lambda_n)F(u^n). \tag{21b}$$
Substituting (21) in (20) gives
$$\begin{aligned}
\|F(u^{n+1})\|^2 \le{}& \frac{\lambda_n}{1-\lambda_n}\langle F(u^{n+1}),\, u^1 - u^{n+1}\rangle &\text{(22)}\\
&- \lambda_n\langle F(u^n),\, u^1 - u^n\rangle + (1 - 2\lambda_n)\|F(u^n)\|^2, &\text{(23)}
\end{aligned}$$
and we collect terms with $F(u^{n+1})$ on the left-hand side to obtain
$$\|F(u^{n+1})\|^2 - \frac{\lambda_n}{1-\lambda_n}\langle F(u^{n+1}),\, u^1 - u^{n+1}\rangle \le (1-2\lambda_n)\|F(u^n)\|^2 - \lambda_n\langle F(u^n),\, u^1 - u^n\rangle. \tag{24}$$
Furthermore, by our choice of step size sequence $\{\lambda_n\}$,
$$1 - 2\lambda_n = \frac{n-1}{n+1} \tag{25}$$
and, for $n \ge 2$,
$$\lambda_n = \frac{n-1}{n+1}\cdot\frac{1}{n-1} = \frac{n-1}{n+1}\cdot\frac{\lambda_{n-1}}{1-\lambda_{n-1}}. \tag{26}$$
Combining (24), (25), and (26) with the definition of $E_n$ in (7) yields
$$E_{n+1}(u^{n+1}) \le \frac{n-1}{n+1}\cdot E_n(u^n). \tag{27}$$
Applying the inductive hypothesis, we deduce
$$E_{n+1}(u^{n+1}) \le \frac{n-1}{n+1}\cdot\frac{C}{n} = \frac{n-1}{n}\cdot\frac{C}{n+1} \le \frac{C}{n+1}, \tag{28}$$
and this inequality closes the induction. Thus, (9) holds by the principle of mathematical induction.
Step 2. Let $u^\star$ be the projection of $u^1$ onto $\mathrm{Fix}(J_{\alpha\partial f})$ so that
$$\|u^1 - u^\star\| = \min\{\|u^1 - u\| : u \in \mathrm{Fix}(J_{\alpha\partial f})\} = d_1. \tag{29}$$
Note this projection is well defined since the set of saddle points is convex. By (9), for $k \ge 2$,
$$\begin{aligned}
\|F(u^k)\|^2 &\le \frac{\lambda_k}{1-\lambda_k}\langle F(u^k),\, u^1 - u^k\rangle + \frac{C}{k} &\text{(30)}\\
&= \underbrace{\frac{\lambda_k}{1-\lambda_k}}_{=1/k}\Big(\langle F(u^k),\, u^1 - u^\star\rangle + \underbrace{\langle F(u^k) - F(u^\star),\, u^\star - u^k\rangle}_{\le\, 0}\Big) + \frac{C}{k} &\text{(31)}\\
&\le \frac{1}{k}\langle F(u^k),\, u^1 - u^\star\rangle + \frac{C}{k} &\text{(32)}\\
&\le \frac{1}{k}\|F(u^k)\|\,\|u^1 - u^\star\| + \frac{C}{k}, &\text{(33)}
\end{aligned}$$
where the braced term in the second line is nonpositive since $F(u^\star) = 0$ and $F$ is monotone. Using the quadratic formula with the fact that $\|F(u^k)\|^2 \ge 0$, we obtain (8), as desired.
Step 3. Let $\tilde{u}$ be a limit point of $\{u^k\}$. This implies there exists a subsequence $\{u^{n_k}\}$ that converges to $\tilde{u}$. Since $J_{\alpha\partial f}$ is 1-Lipschitz and norms are continuous, it follows that
$$0 \le \|\tilde{u} - J_{\alpha\partial f}(\tilde{u})\| = \lim_{k\to\infty}\|u^{n_k} - J_{\alpha\partial f}(u^{n_k})\| \le \lim_{k\to\infty}\frac{1}{2}\left(\frac{d_1}{n_k} + \sqrt{\frac{d_1^2}{n_k^2} + \frac{4C}{n_k}}\right) = 0. \tag{34}$$
By the squeeze lemma, we deduce $\tilde{u} \in \mathrm{Fix}(J_{\alpha\partial f})$, i.e., $\tilde{u}$ is a saddle point of $f$. Because $\tilde{u}$ was an arbitrarily chosen limit point, each limit point of $\{u^k\}$ is a saddle point of $f$.
A4 DETAILS ON CURRICULUM LEARNING
In the L2O framework, the reward for training the optimizer is defined as:
$$L(\phi) = \mathbb{E}_f\left[\sum_{t=1}^{T} w_t R(f(x_t))\right], \tag{35}$$
where the expectation is taken over a distribution of functions f. The Enhanced Twin-L2O using Curriculum Learning (CL) selects a portion of instances that demonstrate "good training behaviors" (smaller gradients & more likely to get close to stationary points) to be counted into the reward, with the portion C increasing linearly from 20% to 100% as training proceeds. In our experiments, the detailed scheme of C is:
C = min{20 + epoch_index, 100}% (36)
where epoch_index denotes the index of the training epoch, starting from 0 and ending at 199 in our case. When applying CL, the actual reward becomes
$$\tilde{L}(\phi) = \mathbb{E}_f\left[\sum_{t=1}^{T} w_t\, q(f)\, R(f(x_t))\right], \tag{37}$$
where $q(f) = 1$ if the value $m(f) = \sum_{t=1}^{T} w_t \|\nabla_y f(x_t, y_t)\|^2$ ranks within the top C of all sampled functions (sorted from small to large, i.e., smaller gradients count as better behavior), and $q(f) = 0$ otherwise.
This process does not change the structure of Twin-L2O; it essentially masks out the training instances that demonstrate poor behavior and ignores them in the actual training phase. Combining this trick with the existing framework, Twin-L2O can achieve a higher success rate when solving problems with a larger range of parameters.
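A sketch of the mask q(f) in Eqn. 37 is below (our own helper; per the "smaller gradients" remark above, we assume smaller m(f) indicates better behavior):

```python
import numpy as np

def curriculum_mask(m_values, epoch):
    """q(f) for Eqn. (37): 1 for instances whose metric m(f) ranks within the
    top C (smallest values), 0 otherwise; C follows the schedule in Eqn. (36)."""
    m_values = np.asarray(m_values)
    c = min(0.20 + 0.01 * epoch, 1.0)
    k = max(1, int(c * len(m_values)))
    cutoff = np.sort(m_values)[k - 1]          # threshold at the C-quantile
    return (m_values <= cutoff).astype(float)
```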
A5 COMPUTATIONAL COST ANALYSIS
We analyze the number of multiply-accumulate operations (MACs) of Twin-L2O and K-beam (Hamm & Noh, 2018) for a Seesaw problem testing instance with 20 trials of random x, y initialization, each trial lasting for 1000 iterations. For K-beam, the MAC counts are 2.36M (million), 3.8M, 8.11M, and 15.31M for K = 1, 2, 5, 10, respectively. For Twin-L2O, the total MAC count is 3.86M.
We use K = 5 for the K-beam experiments in our paper, whose MAC count is 2.1 times that of Twin-L2O; yet its solution quality, in terms of both convergence speed and precision, fails to beat Twin-L2O.
A6 MORE DISCUSSIONS ON THE DESIGN OF TWIN-L2O REWARD
We term the reward function in Eqn. 4 an objective-based reward, since it penalizes the objective change from $f(x_t, y_t)$ along both the x and y updating directions. It naturally inherits and extends the reward functions prevailing in most prior L2O works for minimization (Andrychowicz et al., 2016; Li & Malik, 2016), whose default reward is to minimize a weighted sum of the past function values.
One may also design the following two rewards, which we call gradient-based rewards:
$$L(\phi_{\min}, \phi_{\max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} \|\nabla_x f(x_t, y_t)\|^2 + \|\nabla_y f(x_t, y_t)\|^2\right], \tag{38}$$
$$L(\phi_{\min}, \phi_{\max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} \left(\frac{f(x_t, y_{t-1}) - f(x_t, y_t)}{\|y_t - y_{t-1}\|}\right)^2 + \left(\frac{f(x_{t-1}, y_{t-1}) - f(x_t, y_t)}{\|x_t - x_{t-1}\|}\right)^2\right]. \tag{39}$$
Eqn. 39 is the gradient-based Nikaido-Isoda function introduced by Raghunathan et al. (2019).
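For reference, the first gradient-based reward (Eqn. 38) can be accumulated as in this sketch, written for scalar-valued x, y as in the Seesaw problem (the helper names are ours):

```python
def grad_based_meta_loss(gx, gy, xs, ys):
    """Eqn. (38): sum of squared gradient norms along the trajectory;
    for scalar variables the squared norm reduces to a square."""
    return sum(gx(x, y) ** 2 + gy(x, y) ** 2 for x, y in zip(xs, ys))
```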
For minimax optimization, it is not immediately clear whether the objective-based or the gradient-based reward might work better in practice. Intuitively, by definition, the former is likely to lead towards a saddle point (defined in Eqn. 1) and the latter to a stationary point. The two do not always coincide in general, e.g., a stationary point might not be a saddle point. But for all specific test problems we studied in Section 4, a stationary point is also a saddle point.
We run several experiments on the challenging Seesaw problem as a specific example, to provide a close comparison between the gradient-based reward in Eqn. 38 and the objective-based reward. We re-run Twin-L2O, only replacing Eqn. 4 with the gradient-based reward, and our observations are: a) the gradient-based reward solves the Seesaw problem worse than the objective-based one; b) the minimization variable x diverges on testing problem instances; c) the maximization variable y converges to a solution of precision magnitude 0.04 (for reference, y converges to a magnitude less than 0.01 when using the objective-based loss). We further identify one possible cause after analyzing the gradient behaviors. Note that here the gradient-based reward can be expressed as:
$$\|\nabla_x f(x, y)\|^2 + \|\nabla_y f(x, y)\|^2 = a^2 b^2 \pi^2 y^2 \cos^2(a\pi x) + b^2 \sin^2(a\pi x). \tag{40}$$
Because $a, b \sim U[0.9, 1]$, the first term often dominates during training due to the $\pi^2$ multiplier, unless y is sufficiently close to zero. This imbalance could be a cause of instability. For example, this reward could sometimes push $\cos^2(a\pi x)$ to be close to zero, which is the opposite direction from the true solution $\sin^2(a\pi x) = 0$. Although this is just one specific problem example, it reveals that the gradient-based loss may sometimes not work as well as expected, due to the instability or asymmetry of min/max gradients.
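As a quick numeric check (ours, not from the original text): even at the moderate value $y = 0.5$ with $a = b = 1$, the first term equals $\pi^2 \cdot 0.25 \cdot \cos^2(\pi x) \approx 2.47\cos^2(\pi x)$, already outweighing the second term $\sin^2(\pi x) \le 1$ whenever the two trigonometric factors are comparable.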
Besides, we have also tried the second gradient-based reward in Eqn. 39 and found it ineffective, mainly because the denominators (consecutive variable differences) can become very small, causing the loss to explode and break training.
Back to the objective-based reward used in this paper: we have not observed oscillation empirically in any experiments so far. Our hypothesis is that the recurrent structure of the proposed Twin-L2O
framework (shown in Eqn. 2) plays a role here. Although we use two LSTMs for the min and max updates respectively, the LSTM of one variable actually takes in the information of the other LSTM implicitly, because it takes the other's output as input. When we penalize the objective function value of one LSTM update, all previous min and max updates can (in principle) be taken into account due to the effect of unrolled back-propagation; e.g., the min and max updates each draw on not only their own, but also the other's higher-order past trajectory information. While this is a tentative explanation, we think an in-depth analysis of why oscillation may or may not happen in L2O could be a really interesting future work.
Another implicit intuition that leads us to prioritize the objective-based reward over the gradient-based one is that, in classic minimization, the objective change is summable (i.e., it has a finite accumulation), but the gradient change is not summable in general (absent properties such as strong convexity). While summability is itself not a guarantee of good training/testing performance, a lack of summability means the loss may have an overly large dynamic range.
To summarize, our objective-based reward naturally extends the previous L2O convention, works better than the other alternatives, and has exhibited no oscillation so far. However, we emphasize that there is no intention to claim that the current reward in Eqn. 4 is the best choice for minimax L2O; it is one of several plausible options. We do concur that the gradient-based reward designs in Eqn. 38 and Eqn. 39 pose a complicated yet interesting question, especially when considering more complicated minimax problems. Again, as this paper is intended only as a first work and pilot study towards understanding the profound challenges and rich possibilities of minimax L2O, we believe everything discussed and proposed here, including the loss function, has large room for improvement.
1. What is the main contribution of the paper regarding L2O frameworks?
2. What are the strengths of the proposed minimax L2O design options?
3. What are the weaknesses of the paper, particularly in the theoretical analysis?
4. How does the reviewer assess the convergence guarantees provided by the authors?
5. What questions remain regarding the compatibility of safeguarded L2O with other minimax solvers?

Review
This paper’s main contribution is to extend the L2O framework to solving minimax problems for the first time.
Minimax optimization is in general unstable and harder to solve, challenging whether an L2O model can indeed figure out effective learning rules from data. Further, in order to design L2O for minimax problems, one has to decide to what extent the learning models for min and max updates should be coupled, and what reward shall be used (minimizing the negative cumulative objective value is no longer viable).
By discussing and comparing a number of design options, the authors find that two decoupled LSTMs sharing one variation-based reward is the best empirical design. They show this minimax L2O can display favorable empirical convergence speed on several testbed problems, compared against a number of analytical solvers.
More importantly, most L2O methods have little or no convergence guarantees, which constitutes another roadblock for broadening their practical usage, such as people often questioning whether they will diverge on some even slightly different problem or data. The authors presented Safeguarded Twin L2O, a preliminary theory effort saying that under some strong assumptions, it is possible to theoretically establish the general worst-case convergence of Twin-L2O.
The proof draws on and integrates two sources of ideas: (1) the safeguarded L2O technique recently developed for convex minimization (Heaton et al., 2020); and (2) Halpern iteration. For this part, it is unclear to me why Halpern iteration was chosen as the fallback method in their safeguarded L2O, since it is not a currently popular or fast minimax solver. Is the safeguarded L2O framework also compatible with other convergent minimax solvers?
In Section 4.4, the authors said that on unseen data, their Safe-Twin-L2O still converges successfully and even faster than Halpern iteration. Is this really correct? As far as I understand, on an unseen distribution the optimization should "fall back" to exactly the Halpern iteration; so shouldn't safeguarded L2O behave identically to Halpern on unseen data?
Following (Neumann, 1928), the problem (1) has been studied for decades due to its wide applicability. Simultaneous gradient descent (SimGD) or gradient descent ascent (GDA) (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) is one of the simplest minimax algorithms, conducting gradient descent over variable x and gradient ascent over variable y. However, the dynamics of SimGD or GDA can converge to limit cycles or even diverge (Benaım & Hirsch, 1999; Mertikopoulos et al., 2018b; Lin et al., 2019). To address this issue, Optimistic gradient descent ascent (OGDA) simply modifies the dynamics of GDA and shows more stable performance (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Mertikopoulos et al., 2018a; Gidel et al., 2018; Mokhtari et al., 2019). OGDA attracts more attention because of its empirical success in training GANs. (Ryu et al., 2019) theoretically studies OGDA by analyzing its continuous time dynamic and
1We differentiate the usages of two terms: parameters and variables, throughout the paper. For example, minu maxv ax
2 − by2, we call a, b parameters and x, y variables. For simplicity, this paper only discusses the L2O generalizability when the testing instances’ parameter distribution differs from the training.
2
proposes Anchored simultaneous gradient descent that shows good performance. Follow-the-Ridge (Wang et al., 2019) also addresses the limit cycling problem by introducing second-order information into the dynamic of GDA. Lately, K-Beam (Hamm & Noh, 2018) stabilizes the convergence of GDA by duplicating variable y, yielding strong performance. At each iteration, it performs gradient ascent independently on K copies of y and greedily chooses the copy that leads to a large function value f , then it updates x based on the selected copies.
2.2 LEARNING TO OPTIMIZE
As a special instance of meta-learning, L2O has been studied in multiple contexts, with continuous optimization being one of its main playgrounds so far. The first L2O framework is introduced in (Andrychowicz et al., 2016), where both the optimizee’s gradients and loss function values are formulated as the input features for an RNN optimizer. Due to the enormous number of parameters, a coordinate-wise design of RNN optimizer is adopted, where all optimization coordinates share the same updating strategy. (Li & Malik, 2016) uses the gradient history and objective values as observations and step vectors as actions in their reinforcement learning framework. (Chen et al., 2017) leverages RNN to train a meta-optimizer to optimize black-box functions. Two effective training tricks, random scaling and objective convexifying, are presented in (Lv et al., 2017). Wichrowska et al. (2017) presents an optimizer of multi-level hierarchical RNN architecture augmented with additional architectural features. Li et al. (2020) introduces a Jacobian regularization to L2O and enhances the domain adaptation performance of optimizees. Chen et al. (2020a) proposes several improved training techniques to stabilize L2O training and ameliorate performance. You et al. (2020); Chen et al. (2020b;c) extend the application scope of L2O into various practical problems such as graph neural network training, domain generalization, and noisy label training.
The above works address continuous minimization problems using single optimizer models. One exception, (Cao et al., 2019), extends L2O to solving Bayesian swarm optimization. The author presents a novel architecture where multiple LSTMs jointly learn iterative update formulas for the swarm of particles, coordinated by attention mechanisms. We also notice that two recent efforts (Jiang et al., 2018; Xiong & Hsieh, 2020) introduce L2O to adversarial training, a renowned application of minimax optimization. However, both of them merely utilize L2O to solve the inner minimization of their minimax problems (i.e., generating attacks), while the outer maximization is still solved analytically. Neither of the two directly solves the full minimax optimization.
3 METHOD
3.1 MAIN FRAMEWORK: TWIN LEARNABLE OPTIMIZERS (TWIN-L2O)
The main L2O framework we proposed is named Twin-L2O, where we use two learnable optimizers to alternate between min and max updates. Our design adopts the basic idea of (Andrychowicz et al., 2016) to use Long Short-Term Memory (LSTM) to model learnable optimizers, for solving target problems known as optimizees. At each step, LSTM outputs the update of the optimizee variables. The LSTM inputs are typically the current zero-order or first-order information of the optimizee (Andrychowicz et al., 2016; Lee & Choi, 2018), plus the historic optimization trajectory information.
In Twin-L2O, two LSTMs separately update x and y and record historical trajectory information of their own variables respectively. Formally, we consider the minimax problem minxmaxy f(x, y). We use two LSTM optimizers, LSTM-Min and LSTM-Max, to updates the min variable x and the max variable y respectively. LSTM-Min is parameterized by φmin and LSTM-Max is parameterized by φmax. At each iteration t, Twin-L2O updates x and y in turns and yields the following rule:
xt+1 =xt + ∆xt,where (∆xt, hmint+1) = LSTM-Min ( [∇xf (xt, yt) ,∇yf (xt, yt)], hmint , φmin ) ,
yt+1 =yt + ∆yt,where (∆yt, hmaxt+1 ) = LSTM-Max ( [∇yf (xt+1, yt) ,∇xf (xt+1, yt)], hmaxt , φmax ) , (2)
where hmint and h max t are the historical trajectory information of LSTM-Min and LSTM-Max at time step t. This formulation is inspired by the SimGD/GDA-style algorithms (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) that conduct simultaneous/alternative gradient descent over x and ascent over y. Figure A4 (Appendix A1) conceptually illustrates the framework.
The next question is to design the L2O reward. To train the LSTM optimizers, the loss function is often to penalize some type of cost, accumulated along the optimization trajectory for a horizon of T
3
steps (also known as the unrolling length for LSTM (Sherstinsky, 2018))
L(φmin, φmax) = Ef [ T∑ t=1 wtR(f, xt, yt) ] , (3)
wt is chosen to be all 1 following the basic setting in (Andrychowicz et al., 2016), that might be tuned for better performance in future work.
As a key design option, R(f) represents the reward to guide the L2O training. In existing L2O methods for continuous minimization (Andrychowicz et al., 2016; Lv et al., 2017), R(f) is usually simply set to R(f, xt)) = f (xt) to encourage fast decrease of objective values over time. To extend this existing reward to the minimax scenario, we cannot directly penalize the overall objective function value either way, since the min and max objectives are entangled. Also, different from pure minimization problems, the Twin-L2O updates (2) consist of two alternating steps governed by two different LSTM optimizers: each accounts for its own subproblem goal (min or max updates), but the two also have to collaborate to explore/exploit the minimax landscape. We specifically design the following reward that implicitly addresses the above issue by setting a new reward function:
L(φmin, φmax) = Ef [ T∑ t=1 {[f(xt, yt−1)− f(xt, yt)] + [f(xt, yt−1)− f(xt−1, yt−1)]} ] . (4)
Analysis of the reward design In Eqn. 4, the first and second terms always characterize two consecutive min and max updates. In more details, the value of f (xt, yt)−f (xt, yt−1) solely reflects how effectively the t-step max update increases the objective f , while f (xt, yt−1)− f (xt−1, yt−1) reflects the effectiveness of t-step min update in decreasing the objective f . Our goal is then to maximize the weighted accumulated sum for f (xt, yt)−f (xt, yt−1) , while minimizing the weighted accumulated sum for f (xt, yt)− f (xt, yt−1) , t = 1, 2, ..., T . Combining the two sub-goals together (with a sign change to turn max into min) yields our reward. One may also alternatively interpret Eqn. 4 as penalizing the loss change from f(xt, yt) along both x and y updating directions, which would encourage yielding stationary points.
For the reward design, we provide a more detailed discussion in Appendix A6. Specifically, we provide a comparison between the objective-based reward in Eqn. 4, and another possible gradientbased reward. The latter was found to be ineffective in solving the problems presented in Section 4.
Rationale of the framework selection Another important design question is to what extent learning the min and max updates should be (dis)entangled: on the one hand, the two steps obviously interact with each other as they jointly explore the minimax landscape; on the other hand, min and max steps commonly have asymmetric difficulty levels, that have been leveraged by previous algorithms. For example, (Hamm & Noh, 2018) demonstrates the failure of alternating gradient descent in minimax optimization due to the multiple solution discontinuity of the inner maximization, and addresses that by simultaneously tracking K candidate solutions for the max step, while the outer minimization remains to take one descent step. Besides the joint reward (4), the default Twin-L2O design leverages two independent LSTMs in Eqn. (2), each dedicatedly handling min or max updates. In comparison, we also consider two other more "entangled" ways: (a) fully entangling the two optimizers, i.e. using one LSTM to simultaneously generate min and max outputs; (b) weakly entangling the two optimizers, by using two LSTMs sharing weights, yet allowing either to maintain its own temporal hidden states. Our ablation experiments (see Section 4.1) find that the default decoupled design in Eqn. (2) seems to facilitate the L2O learning most.
3.2 IMPROVING GENERALIZABILITY OF TWIN-L2O
Despite the empirical success of L2O, it is unfortunately impossible to ensure that any L2O algorithm always converges. Assuming the objective function type to keep unchanged, the testing instances’ parameter distribution may differ from the one of training, and L2O can catastrophically fail. For the Twin-L2O, we discuss two remedies to partially fix this issue and boost its generalizability.
We first propose curriculum L2O training scheme as a practical L2O training technique such that Twin-L2O can be trained to work on a much wider coverage of problem parameters than its vanilla versions. That would empirically help the generalizability due to broader coverage by training instance, but would still inevitably fail when meeting unseen testing instances. We then present a
4
preliminary exploration of the safeguard mechanism on minimax under a special case, i.e., solving convex-concave problems. We demonstrate that with such strong assumptions, it is possible to theoretically establish the "perfect" convergence of Twin-L2O on any unseen optimizee.
Curriculum L2O Training When it comes to general minimax problems, it is unlikely to exist an ideal theory to fully ensure Twin-L2O convergence on all instances. Therefore, we seek empirical L2O success of as many instances as possible. Specifically: can we train Twin-L2O better, so that it can work on instances at a broader parameter range?
We find a curriculum learning (CL) strategy (Bengio et al., 2009) particularly useful. CL was first adopted to train neural networks by first focusing the training on an "easy" training subset (often adaptively selected), that is then gradually grown to the full set. It is known to be effective to stabilize training, especially when the training set is highly varied or noisy (Jiang et al., 2017). Since minimax optimization is notoriously unstable no matter via analytical or learned optimizers, we conjecture that the noisy minimax dynamics might challenge Twin-L2O by providing unreliable guidance and impede its training. Considering that our Twin-L2O is modeled using LSTMs, it is natural to think of whether CL can bring additional gains if applied to meta-training. Previously it was also found effective in L2O for minimization problems (Chen et al., 2020a).
Method 1 Safeguarded-Twin-L2O for ConvexConcave Saddle Point Problems
1: Initialize u1 ∈ Rn, C ∈ [0,∞), α ∈ (0,∞), {λ`} = {1/(`+ 1)}, k ← 2, weights {φk} 2: function SADDLEHALPERN(ε) 3: u2 ← 1
2
( u1 + Jα∂f (u 1) ) .
4: while ‖uk − Jα∂f (uk)‖ > ε do 5: zk+1 ← LSTM(uk; φk) 6: C Apply L2O operator
7: if Ek+1(uk+1) ≤ C k + 1 8: C Verify safeguard condition 9: uk+1 ← zk+1
10: C Use L2O update 11: else 12: uk+1 ← λku1 +(1−λk)Jα∂f (uk) 13: C Use fallback update 14: k ← k + 1 15: return uk 16: end function
Specifically, in one epoch, we will rank all optimizee instances by their cumulative losses (3) from low to high, and only select the top C instances to count into the total reward. In that way, only the instances that exhibit "good training behaviors" (smaller gradients & more likely to get close to stationary points) will be initially used for updating the Twin-L2O. That prevents the learned optimizer being misled by random failures and outliers, which are commonly found in the early epochs of Twin-L2O training. We by default set the percentage C to start from 20%, then growing linearly every epoch until reaching 100% in the later training stage.
Up to our best knowledge, this is the first effort to incorporate CL with L2O training. We can this Twin-L2O trained with CL as Enhanced Twin-L2O: note that it is the same model structure, just trained in a different and better way. More details can be found in Appendix A4 .
Safeguard Twin-L2O: A Preliminary Theoretical Exploration Most L2O methods have little or no convergence guarantees. Very recently, a safeguarding mechanism has been introduced to L2O for convex minimization problems with gradient and/or proximal oracles (Heaton et al., 2020). Conceptually, a safeguard is anything that identifies when a "bad" L2O update would occur and what "fallback" update to apply in place of that bad L2O update. In this section, we establish a safeguarding theory and algorithm specifically for learned convex-concave saddle point algorithms. Here the safeguard takes the form of an energy inequality (c.f. Line 6 in Method 1).
In this section, we write $u = (x, y) \in \mathbb{R}^m \times \mathbb{R}^n$ and let $\alpha > 0$. We use the resolvent, defined by
$$J_{\alpha \partial f}(x, y) = (\mathrm{Id} + \alpha \partial f)^{-1}, \qquad (5)$$
where we note $\partial f = (\partial_x f, -\partial_y f)$. For simple $f$ (e.g., quadratic functions), a closed formula exists for $J_{\alpha \partial f}$. Otherwise, one may use an iterative method to approximate this quantity.
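As a concrete illustration, below is a minimal sketch (names are our own) of this resolvent for the bilinear matrix game $f(x, y) = x^T A y$ studied in Section 4, where $\partial f(u) = (Ay, -A^T x)$ is linear in $u$ and the resolvent reduces to one linear solve:

import numpy as np

def resolvent_matrix_game(A, u, alpha):
    """J_{alpha * del_f}(u) for f(x, y) = x^T A y; `u` stacks (x, y)."""
    m, n = A.shape
    # del_f(u) = M u with the skew block matrix M = [[0, A], [-A^T, 0]]
    M = np.block([[np.zeros((m, m)), A],
                  [-A.T, np.zeros((n, n))]])
    # (Id + alpha * del_f)^{-1} u amounts to solving a linear system
    return np.linalg.solve(np.eye(m + n) + alpha * M, u)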
In addition, define the residual operator
$$F(u) := \frac{1}{2}\left(u - J_{\alpha \partial f}(u)\right), \qquad (6)$$
and, for each $k \in \mathbb{N}$, the energy $E_k : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ by
$$E_k(u) := \|F(u)\|^2 - \frac{\lambda_k}{1 - \lambda_k}\,\langle F(u),\, u^1 - u\rangle, \qquad (7)$$
where $\{\lambda_k\}$ is a sequence of step sizes. The full method is outlined in Method 1, where the L2O update is denoted by $\mathrm{LSTM}(u^k; \varphi_k)$ and the fallback method is a Halpern iteration (Halpern, 1967).
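For concreteness, the following is a minimal Python sketch of Method 1 under our own naming: `resolvent(u)` is assumed to compute $J_{\alpha\partial f}(u)$ (e.g., `resolvent_matrix_game` above) and `l2o(u, k)` to return the learned candidate update $z^{k+1}$.

import numpy as np

def safeguarded_twin_l2o(u1, resolvent, l2o, C=1.0, eps=1e-6, max_iter=1000):
    lam = lambda k: 1.0 / (k + 1)                     # step sizes {lambda_k}
    F = lambda u: 0.5 * (u - resolvent(u))            # residual, Eqn. (6)
    E = lambda u, k: (np.linalg.norm(F(u)) ** 2
                      - lam(k) / (1 - lam(k)) * F(u) @ (u1 - u))  # Eqn. (7)
    u = 0.5 * (u1 + resolvent(u1))                    # u^2
    for k in range(2, max_iter):
        if np.linalg.norm(u - resolvent(u)) <= eps:
            break
        z = l2o(u, k)                                 # L2O candidate z^{k+1}
        if E(z, k + 1) <= C / (k + 1):                # safeguard condition
            u = z                                     # accept the L2O update
        else:
            u = lam(k) * u1 + (1 - lam(k)) * resolvent(u)  # Halpern fallback
    return u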
Our main result for the minimax safeguarding theory is formally stated below:
Theorem 3.1. If the sequence $\{u^k\}$ is generated by Method 1, then
$$\|u^k - J_{\alpha \partial f}(u^k)\| \le \frac{1}{2}\left(\frac{d_1}{k} + \sqrt{\frac{d_1^2}{k^2} + \frac{4C}{k}}\right), \quad \text{for all } k \ge 2, \qquad (8)$$
where $d_1 := \min\{\|u - u^1\| : 0 \in \partial f(u)\}$ is the distance from the initial iterate $u^1$ to the set of saddle points and $C \ge 0$ is an arbitrary constant. In particular, this implies each limit point of $\{u^k\}$ is a saddle point.
Our proof draws on and integrates two sources of ideas: (1) the safeguarded L2O technique that was recently introduced for convex minimization (Heaton et al., 2020); and (2) Halpern iteration (Diakonikolas, 2020), which is adopted for analytical minimax optimization with favorable theoretical properties. The full proof is provided in Appendix A3. Note that this work is not intended as a theory innovation on (classical) minimax optimization. Instead, our aim is to extend the emerging idea of safeguarded L2O from convex minimization to convex-concave minimax problems of interest, and to show this idea to be helpful for minimax L2O too: see experiments in Section 4.
4 EXPERIMENTS
4.1 ABLATION STUDY ON THE DESIGN OF TWIN-L2O
We first investigate the design choices for Twin-L2O that we discussed in Section 3.1, mainly along two aspects: (i) whether to share the weights of the two LSTM solvers; (ii) whether to share the hidden states between the two LSTM solvers. That leads us to four options, denoted as (with self-explanatory names): Share-LSTM-Share-Hidden, Share-LSTM-Two-Hidden, Two-LSTM-Share-Hidden, and Two-LSTM-Two-Hidden. We use the Seesaw problem, formulated below, as the testbed for our ablation study (note that the ranges of a, b are picked only to make L2O easy to converge; more will be investigated in Section 4.3):
Seesaw: $\min_x \max_y\; -b\,y\,\sin(a\pi x), \quad a \sim U[0.9, 1],\; b \sim U[0.9, 1]$ (Seesaw)
The Seesaw problem is nonconvex-concave and is considered challenging (Hamm & Noh, 2018) due to its non-differentiability, which arises because the solutions of the state equation or the adjoint state equation are not unique (Danskin, 1966). The L2O training routine follows (Andrychowicz et al., 2016): we use 128 optimizee instances for training; each of them has its parameters i.i.d. sampled, and variables x, y randomly initialized by i.i.d. sampling from U[-0.5, 0.5]. A validation set of 20 optimizees is used, with parameters and variables sampled in the same way; similarly, we generate a hold-out testing set of another 100 instances. In each epoch, an L2O optimizer updates the optimizee parameters for 1000 iterations, with unrolling length T = 10. When the next epoch starts, all x, y as well as the LSTM hidden states are reset. We train the L2O solvers for 200 epochs, using Adam with a constant learning rate of 1e-4. We pick the model checkpoint at the epoch where its validation performance peaks. Figure 1 compares the convergence results of the four options, evaluated on the same testing set. We measure the $\ell_2$ distances between the solved variables and their corresponding ground-truth solutions (or the closest one, if multiple exist). It is obvious that only Two-LSTM-Two-Hidden successfully converges to the correct solution $(x^*, y^*) = (0, 0)$, which is also the equilibrium. Our major observation from the above experiments is that for minimax L2O optimization, especially for asymmetric problems such as Seesaw, it is a better choice to use two decoupled LSTM solvers and let each take care of its own trajectory information. We hence stick to this option and use it as our default Twin-L2O.
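To make the protocol concrete, here is a minimal PyTorch-style sketch of one unrolled meta-training step on a sampled Seesaw instance, with the objective-based reward of Eqn. 4. All names are our own, and `lstm_min`/`lstm_max` are assumed interfaces mapping gradient features and a hidden state to a variable update (as in Eqn. 2):

import torch

def meta_train_step(lstm_min, lstm_max, meta_opt, T=10):
    # Sample one Seesaw optimizee instance and random initial variables
    a = torch.empty(1).uniform_(0.9, 1.0)
    b = torch.empty(1).uniform_(0.9, 1.0)
    f = lambda x, y: (-b * y * torch.sin(a * torch.pi * x)).sum()
    x = torch.empty(1).uniform_(-0.5, 0.5).requires_grad_(True)
    y = torch.empty(1).uniform_(-0.5, 0.5).requires_grad_(True)
    h_min = h_max = None
    loss = 0.0
    for _ in range(T):                         # unrolling length T = 10
        x_old, y_old = x, y
        gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
        dx, h_min = lstm_min(torch.cat([gx, gy]), h_min)   # min step on x
        x = x + dx
        gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
        dy, h_max = lstm_max(torch.cat([gy, gx]), h_max)   # max step on y
        y = y + dy
        # Objective-based reward, Eqn. 4: the first bracket rewards an
        # effective max step, the second an effective min step
        loss = loss + (f(x, y_old) - f(x, y)) + (f(x, y_old) - f(x_old, y_old))
    meta_opt.zero_grad()
    loss.backward()                            # backprop through the unroll
    meta_opt.step()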
All experiments in this and the following sections are conducted on GeForce GTX 1080 Ti GPUs.
4.2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
In this section, we apply Twin-L2O to two more test problems besides Seesaw:
• Rotated Saddle²: $\min_x \max_y\; ax^2 - by^2 + 2xy, \quad a \sim U[0.9, 1],\; b \sim U[0.9, 1]$
• Matrix Game: $\min_x \max_y\; x^T A y, \quad A \in \mathbb{R}^{5 \times 5},\; A_{i,j} \sim \mathrm{Bernoulli}(0.5) \cdot U[-1, 1]$
On all three problems, we compare Twin-L2O with several state-of-the-art algorithms: Gradient Descent Ascent (GDA) (Lin et al., 2019), Optimistic Mirror Descent (OMD) (Daskalakis et al., 2018), and GD with anchoring (GD-Anchoring) (Ryu et al., 2019). On Rotated Saddle and Seesaw we additionally compare with K-beam (Hamm & Noh, 2018). For the matrix game, we also compare with the standard Halpern Iteration (Diakonikolas, 2020) designed for convex-concave minimax problems. For these analytical methods, all parameters are tuned with careful grid search. We train, validate and test Twin-L2O models following the protocol described in Section 4.1.
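As a reference point, below is a hedged sketch of the OMD/optimistic-gradient baseline on the matrix game, using the common two-step-memory form $z_{t+1} = z_t - 2\eta g_t + \eta g_{t-1}$; the step size and function names are our own choices:

import numpy as np

def omd_matrix_game(A, x, y, eta=0.1, iters=1000):
    gx_prev, gy_prev = A @ y, -A.T @ x        # descent direction in x, ascent in y
    for _ in range(iters):
        gx, gy = A @ y, -A.T @ x              # grad_x f = A y; we negate grad_y f
        x = x - 2 * eta * gx + eta * gx_prev  # optimistic step on x (descent)
        y = y - 2 * eta * gy + eta * gy_prev  # since gy = -grad_y f, this ascends
        gx_prev, gy_prev = gx, gy
    return x, y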
Figure 2 plots the convergence curves of all methods, averaged across all testing problems (and each with 20 trials of random x, y initialization). Several observations are drawn below:
• L2O does not show superiority over well-tuned analytical algorithms on the simplest Rotated Saddle problem (and similarly Saddle). The problem is very gradient-friendly, and therefore OMD already achieves the best convergence speed as well as solution quality.
• On Matrix Game, Twin-L2O starts to show competitive edges over analytical solvers with faster convergence speed and higher-precision solutions.
• On the Seesaw problem, Twin-L2O largely outperforms all carefully-tuned analytical algorithms, achieving one-magnitude higher-precision solutions with comparable convergence speed. That shows us one take-home message: L2O can work for minimax optimization, and can contribute most significantly to those hard problems. That makes minimax L2O a highly meaningful complement to existing analytical minimax solvers. More analysis on comparing the actual computational costs (MAC numbers) can be found in Appendix A5.
2We also test on the classical Saddle problem, but its behaviors and conclusions are almost identical to the Rotated Saddle. We hence report on Rotated Saddle due to the space limit.
4.3 ENHANCED TWIN-L2O: CURRICULUM LEARNING EVALUATION
We again use the Seesaw problem as an example in this section. Its two parameters a and b, i.e., the problem period and scale, are sampled independently from two uniform distributions $U[L_a^1, L_a^2]$ and $U[L_b^1, L_b^2]$. In Section 4.1, both were chosen as U[0.9, 1] for ease of L2O convergence. We now stretch both parameter ranges and test whether an L2O model can still solve the resulting broader range of problems. All other training protocols follow Section 4.1 identically.
Sections 4.1 and 4.2 evaluate the average solution distance over the testing set (100 instances), which worked fine for the small [a, b] range used there. However, when we extend the [a, b] range, we find that L2O behaviors can differ vastly across testing instances, i.e., some converge quickly while others suffer from heavy fluctuations or even divergence; this is an artifact of inefficient L2O training that leaves it unable to cover the full large problem range. That motivates us to carefully re-design our evaluation metrics here, to reflect both the solution quality and its variation/stability.
For the p-th testing instance, we record its $\ell_2$ solution distance $D_t^p$ at iteration $t = 1, 2, \ldots$. Given two thresholds $\epsilon_{acc}$ and $\epsilon_{std}$ (chosen by multi-fold validation; we use defaults $\epsilon_{acc} = 2 \times 10^{-2}$ and $\epsilon_{std} = 10^{-4}$), we define two forms of success rate (SR):
$$\mathrm{SR}_1 = \frac{\sum_{p=1}^{n} \mathbb{I}\big(\bar{d}(D^p) < \epsilon_{acc}\big)}{n}, \qquad \mathrm{SR}_2 = \frac{\sum_{p=1}^{n} \mathbb{I}\big(\mathrm{Std}(D^p) < \epsilon_{std}\big)}{n},$$
where $\bar{d}(D^p) = \frac{\sum_{t=t_0}^{L} D_t^p}{L - t_0 + 1}$, $\mathrm{Std}(D^p) = \mathrm{Std}\big(\{D_t^p\}_{t=t_0}^{L}\big)$, and $t_0 = 0.8L$; $n = 100$ is the number of testing instances; $L = 1000$ is the total number of iterations for which each instance (optimizee) is trained by L2O. Intuitively, SR1 emphasizes the average solution precision over the last 20% of iterations, and SR2 measures how large a solution variation is seen over the last 20% of iterations.
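A small sketch of these two metrics, assuming `D` is an (n, L) array of per-instance solution distances:

import numpy as np

def success_rates(D, eps_acc=2e-2, eps_std=1e-4):
    t0 = int(0.8 * D.shape[1])                   # keep the last 20% of iterations
    tail = D[:, t0:]
    sr1 = np.mean(tail.mean(axis=1) < eps_acc)   # SR1: average solution precision
    sr2 = np.mean(tail.std(axis=1) < eps_std)    # SR2: solution stability
    return sr1, sr2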
Table 1 compares Twin-L2O and Enhanced Twin-L2O at multiple combinations of stretched ranges of a and b, starting from the original [0.9, 1] x [0.9, 1], up to as large as [0, 5] x [0, 2]: the parameter coverage increases by 1,000 times. Adding CL evidently helps Twin-L2O stay effective to train over a broader instance range, under both SR metrics. Vanilla Twin-L2O performs perfectly at [0.9, 1] x [0.9, 1], yet begins to drop at [0, 1] x [0.9, 1] (mainly showing higher instability, as indicated by lower SR2), and hardly succeeds beyond [0, 3.5] x [0.9, 1]. In contrast, Enhanced Twin-L2O obtains nontrivial results even at [0, 5] x [0, 1] (tens of times wider than the vanilla one).
4.4 SAFEGUARDED TWIN-L2O EXPERIMENTS
Here we use the matrix game as the example to evaluate the above established safeguard mechanism for convex-concave minimax optimization. We directly take a well-trained Twin-L2O model for the matrix game in Section 4.2, where the matrix $A \in \mathbb{R}^{5 \times 5}$, $A_{i,j} \sim \mathrm{Bernoulli}(0.5) \cdot U[-1, 1]$, and the coordinates of the initial optimization variables x and y are independently sampled from U[-1, 1]. During testing, in addition to testing the Twin-L2O model on the testing data from this seen distribution, we also evaluate it on unseen data, whose A is now sampled from an intentionally very distinct distribution: $A_{i,j} \sim \mathrm{Bernoulli}(1.0) \cdot U[-8, 8]$; x and y are initialized in the same manner. We compare Safeguarded Twin-L2O (denoted as Safe-Twin-L2O), with standard Halpern iteration (Diakonikolas, 2020) as the fallback update when the L2O update is disapproved in Method 1. We also compare with OMD and GD-Anchoring on both seen and unseen testing data (GDA fails to converge in both cases, even when we tune its hyperparameters to our best effort). The results are shown in Figure 3. When tested on the aggressively varied unseen data, the vanilla Twin-L2O model fails and diverges, but Safe-Twin-L2O continues to converge successfully: even faster than Halpern iteration and OMD, and much better than GD-Anchoring.
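For reproducibility, the seen and unseen test instances above can be sampled as in the following short sketch (variable names are ours):

import numpy as np

# Seen: A_ij ~ Bernoulli(0.5) * U[-1, 1]; unseen: Bernoulli(1.0) * U[-8, 8]
A_seen = (np.random.rand(5, 5) < 0.5) * np.random.uniform(-1, 1, (5, 5))
A_unseen = np.random.uniform(-8, 8, (5, 5))   # Bernoulli(1.0) keeps every entry
x0, y0 = np.random.uniform(-1, 1, 5), np.random.uniform(-1, 1, 5)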
5 CONCLUSION
This paper studies L2O for minimax optimization for the first time. We present the Twin-L2O model, and further improve its generalizability by introducing a theoretically grounded safeguarding framework (for convex-concave problems), as well as an empirical curriculum training strategy (for general problems). Extensive simulations endorse the promise of our algorithms. This pilot study suggests and paves the way for extending L2O beyond continuous minimization problems.
Limitation: The entire L2O field faces challenges in scaling up to larger-scale optimization (Andrychowicz et al., 2016), and our study is not yet an exception. Despite very promising gains on challenging cases such as the Seesaw and Matrix Game problems, the current work only proves the first concept of minimax L2O, on relatively basic and low-dimensional test problems. Our immediate next step is to scale up Twin-L2O, and to explore its potential in solving minimax application problems of practical interest, such as adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020) and GANs (Gulrajani et al., 2017). A potential idea might leverage the memory-efficient hierarchical RNN structure in (Wichrowska et al., 2017).
A1 TWIN-L2O FRAMEWORK
Figure A4: Architecture of Twin-L2O. We let LSTM-Min and LSTM-Max, parameterized by φmin and φmax, update x and y respectively. As shown by curved dashed lines, Twin-LSTM keeps being updated about the latest variable values of x and y when computing input information and the reward. When constructing the computational graph and training the Twin-LSTM, the solid lines allow gradients to flow while the dashed lines do not pass any gradient (Andrychowicz et al., 2016).
A2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
Figure A5 shows the performance of the y variable on Rotated Saddle, Matrix Game and Seesaw. The analyses of these results generally align with those in the main text.
[Figure A5 appears here: three panels, (a) Rotated Saddle, (b) Matrix Game, and (c) Seesaw, each plotting the distance of y to the solution against iteration (log scale) for GDA, OMD, GD-Anchoring, K-beam or Halpern, and Twin-L2O, with annotated final distances 5.56e-92, 1.89e-03, and 2.94e-03, respectively.]
Figure A5: Convergence comparison of variable y between Twin-L2O and state-of-the-art analytical minimax optimizers (GDA, OMD, GD-Anchoring, and K-beam), for three test problems.
A3 PROOF OF SAFEGUARDING RESULT
Below is a proof of the main result, Theorem 3.1:
Proof. We proceed in the following manner, with much credit due to the analysis in (Diakonikolas, 2020). First we verify an inequality with the energy sequence {Ek(uk)} (Step 1). This is used to obtain the convergence rate (Step 2). Resulting implications about limit points are established last (Step 3).
Step 1. We claim
$$E_k(u^k) \le \frac{C}{k}, \quad \text{for all } k \ge 2. \qquad (9)$$
We proceed by induction. First note $J_{\alpha \partial f}$ is firmly nonexpansive, and so $2F = \mathrm{Id} - J_{\alpha \partial f}$ is also firmly nonexpansive (Bauschke et al., 2011), which implies
$$\|2F(u) - 2F(v)\|^2 \le \langle 2F(u) - 2F(v),\, u - v\rangle, \quad \text{for all } u, v \in \mathbb{R}^m \times \mathbb{R}^n. \qquad (10)$$
Using (10) with $u = u^2$ and $v = u^1$ together with our choice of step sizes $\{\lambda_k\}$, we find
$$\begin{aligned}
E_2(u^2) &= \|F(u^2)\|^2 - \frac{\lambda_1}{1 - \lambda_1}\langle F(u^2),\, u^1 - u^2\rangle &(11)\\
&= \|F(u^2)\|^2 - \langle F(u^2),\, u^1 - u^2\rangle &(12)\\
&= \langle F(u^2),\, F(u^2) - F(u^1)\rangle &(13)\\
&= \|F(u^2) - F(u^1)\|^2 + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &(14)\\
&\le \tfrac{1}{2}\|2F(u^2) - 2F(u^1)\|^2 + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &(15)\\
&\le \langle F(u^2) - F(u^1),\, u^2 - u^1\rangle + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &(16)\\
&= -\langle F(u^2) - F(u^1),\, F(u^1)\rangle + \langle F(u^1),\, F(u^2) - F(u^1)\rangle &(17)\\
&= 0. &(18)
\end{aligned}$$
Thus, $E_2(u^2) \le 0 \le C/2$, and the base case holds. Inductively, suppose (9) holds for $k = n$ for some $n \ge 2$. If $u^{n+1} = z^{n+1}$, then (9) holds for $k = n + 1$ by the conditional statement in Line 6 of Method 1. Alternatively, suppose $u^{n+1} \neq z^{n+1}$. Applying (10) with $u = u^{n+1}$ and $v = u^n$ yields
$$\|F(u^{n+1}) - F(u^n)\|^2 \le 2\|F(u^{n+1}) - F(u^n)\|^2 \le \langle F(u^{n+1}) - F(u^n),\, u^{n+1} - u^n\rangle. \qquad (19)$$
Upon expansion of the left-hand side, we discover
$$\|F(u^{n+1})\|^2 \le \langle F(u^{n+1}),\, u^{n+1} - u^n + 2F(u^n)\rangle - \langle F(u^n),\, u^{n+1} - u^n + F(u^n)\rangle. \qquad (20)$$
Algebraic manipulations of the update formula for $u^{n+1}$ yield the relations
$$u^{n+1} - u^n + 2F(u^n) = \frac{\lambda_n}{1 - \lambda_n}\,(u^1 - u^{n+1}), \qquad (21a)$$
$$u^{n+1} - u^n + F(u^n) = \lambda_n(u^1 - u^n) - (1 - 2\lambda_n)F(u^n). \qquad (21b)$$
Substituting (21) in (20) gives
$$\begin{aligned}
\|F(u^{n+1})\|^2 &\le \frac{\lambda_n}{1 - \lambda_n}\langle F(u^{n+1}),\, u^1 - u^{n+1}\rangle &(22)\\
&\quad - \lambda_n\langle F(u^n),\, u^1 - u^n\rangle + (1 - 2\lambda_n)\|F(u^n)\|^2, &(23)
\end{aligned}$$
and we collect terms with $F(u^{n+1})$ on the left-hand side to obtain
$$\|F(u^{n+1})\|^2 - \frac{\lambda_n}{1 - \lambda_n}\langle F(u^{n+1}),\, u^1 - u^{n+1}\rangle \le (1 - 2\lambda_n)\|F(u^n)\|^2 - \lambda_n\langle F(u^n),\, u^1 - u^n\rangle. \qquad (24)$$
Furthermore, by our choice of step size sequence $\{\lambda_n\}$,
$$1 - 2\lambda_n = \frac{n - 1}{n + 1} \qquad (25)$$
and, for $n \ge 2$,
$$\lambda_n = \frac{n - 1}{n + 1} \cdot \frac{1}{n - 1} = \frac{n - 1}{n + 1} \cdot \frac{\lambda_{n-1}}{1 - \lambda_{n-1}}. \qquad (26)$$
Combining (24), (25), and (26) with the definition of $E_n$ in (7) yields
$$E_{n+1}(u^{n+1}) \le \frac{n - 1}{n + 1} \cdot E_n(u^n). \qquad (27)$$
Applying the inductive hypothesis, we deduce
$$E_{n+1}(u^{n+1}) \le \frac{n - 1}{n + 1} \cdot \frac{C}{n} = \frac{n - 1}{n} \cdot \frac{C}{n + 1} \le \frac{C}{n + 1}, \qquad (28)$$
and this inequality closes the induction. Thus, (9) holds by the principle of mathematical induction.
Step 2. Let $u^\star$ be the projection of $u^1$ onto $\mathrm{Fix}(J_{\alpha \partial f})$, so that
$$\|u^1 - u^\star\| = \min\{\|u^1 - u\| : u \in \mathrm{Fix}(J_{\alpha \partial f})\} = d_1. \qquad (29)$$
Note this projection is well defined since the set of saddle points is convex. By (9), for $k \ge 2$,
$$\begin{aligned}
\|F(u^k)\|^2 &\le \frac{\lambda_k}{1 - \lambda_k}\langle F(u^k),\, u^1 - u^k\rangle + \frac{C}{k} &(30)\\
&= \underbrace{\frac{\lambda_k}{1 - \lambda_k}}_{=1/k}\Big(\langle F(u^k),\, u^1 - u^\star\rangle + \underbrace{\langle F(u^k) - F(u^\star),\, u^\star - u^k\rangle}_{\le 0}\Big) + \frac{C}{k} &(31)\\
&\le \frac{1}{k}\langle F(u^k),\, u^1 - u^\star\rangle + \frac{C}{k} &(32)\\
&\le \frac{1}{k}\|F(u^k)\|\,\|u^1 - u^\star\| + \frac{C}{k}, &(33)
\end{aligned}$$
where the decomposition in (31) uses $F(u^\star) = 0$, and the monotonicity of $F$ makes the braced term nonpositive, yielding (32). Using the quadratic formula with the fact that $\|F(u^k)\|^2 \ge 0$, we obtain (8), as desired.
Step 3. Let $\tilde{u}$ be a limit point of $\{u^k\}$. This implies there exists a subsequence $\{u^{n_k}\}$ that converges to $\tilde{u}$. Since $J_{\alpha \partial f}$ is 1-Lipschitz and norms are continuous, it follows that
$$0 \le \|\tilde{u} - J_{\alpha \partial f}(\tilde{u})\| = \lim_{k \to \infty} \|u^{n_k} - J_{\alpha \partial f}(u^{n_k})\| \le \lim_{k \to \infty} \frac{1}{2}\left(\frac{d_1}{n_k} + \sqrt{\frac{d_1^2}{n_k^2} + \frac{4C}{n_k}}\right) = 0. \qquad (34)$$
By the squeeze lemma, we deduce $\tilde{u} \in \mathrm{Fix}(J_{\alpha \partial f})$, i.e., $\tilde{u}$ is a saddle point of $f$. Because $\tilde{u}$ was an arbitrarily chosen limit point, each limit point of $\{u^k\}$ is a saddle point of $f$.
A4 DETAILS ON CURRICULUM LEARNING
In the L2O framework, the reward for training the optimizer is defined as:
$$\mathcal{L}(\varphi) = \mathbb{E}_f\left[\sum_{t=1}^{T} w_t\, R(f(x_t))\right] \qquad (35)$$
where $f$ follows a distribution of functions. The Enhanced Twin-L2O using Curriculum Learning (CL) selects a portion of instances that demonstrate "good training behaviors" (smaller gradients, more likely to get close to stationary points) to be counted into the reward, with the portion C increasing linearly from 20% to 100% as training proceeds. In our experiments, the detailed schedule of C is:
$$C = \min\{20 + \text{epoch\_index},\ 100\}\% \qquad (36)$$
where epoch_index denotes the index of the training epoch, starting from 0 and ending at 199 in our case. When applying CL, the actual reward becomes
$$\tilde{\mathcal{L}}(\varphi) = \mathbb{E}_f\left[\sum_{t=1}^{T} w_t\, q(f)\, R(f(x_t))\right] \qquad (37)$$
where $q(f) = 1$ if the value $m(f) = \sum_{t=1}^{T} w_t \|\nabla_y f(x_t, y_t)\|^2$ ranks in the top C of all sampled functions, and $q(f) = 0$ otherwise.
This process does not change the structure of Twin-L2O; it essentially adds masks to the training instances that demonstrate poor behavior and ignores them in the actual training phase. Combining this trick with the existing framework, Twin-L2O can achieve a higher success rate when solving problems with a larger range of parameters.
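Below is a minimal sketch (names are our own) of the curriculum mask of Eqn. 37 applied to one batch of sampled instances:

import torch

def curriculum_reward(rewards, scores, epoch):
    # rewards: per-instance cumulative L2O losses (Eqn. 35), kept in the graph
    # scores:  detached tensor of ranking values m(f) from Eqn. 37
    C = min(0.20 + 0.01 * epoch, 1.0)       # Eqn. 36: +1% per epoch, capped at 100%
    k = max(1, int(C * rewards.numel()))
    keep = torch.argsort(scores)[:k]        # instances with good training behavior
    return rewards[keep].mean()             # masked reward used for meta-training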
A5 COMPUTATIONAL COST ANALYSIS
We analyze the number of multiply-accumulate operations (MACs) of Twin-L2O and K-beam (Hamm & Noh, 2018) for a Seesaw problem testing instance with 20 trials of random x, y initialization, each trial lasting 1000 iterations. For K-beam, the numbers of MACs are 2.36M (million), 3.8M, 8.11M, and 15.31M for K = 1, 2, 5, 10, respectively. For Twin-L2O, the total number of MACs is 3.86M.
We use K = 5 in K-beam for the experiments in our paper; its MAC count is about 2.1 times that of Twin-L2O, yet its solution quality, in terms of both convergence speed and precision, fails to beat Twin-L2O.
A6 MORE DISCUSSIONS ON THE DESIGN OF TWIN-L2O REWARD
We term the reward function in Eqn. 4 an objective-based reward, since it penalizes the objective change from $f(x_t, y_t)$ along both the x and y updating directions. It naturally inherits and extends the reward functions prevailing in most prior L2O works for minimization (Andrychowicz et al., 2016; Li & Malik, 2016), whose default reward is to minimize a weighted sum of the past function values.
One may also design the following two rewards, which we name gradient-based rewards:
$$\mathcal{L}(\varphi_{min}, \varphi_{max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} \|\nabla_x f(x_t, y_t)\|^2 + \|\nabla_y f(x_t, y_t)\|^2\right], \qquad (38)$$
$$\mathcal{L}(\varphi_{min}, \varphi_{max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} \left(\frac{f(x_t, y_{t-1}) - f(x_t, y_t)}{\|y_t - y_{t-1}\|}\right)^2 + \left(\frac{f(x_{t-1}, y_{t-1}) - f(x_t, y_t)}{\|x_t - x_{t-1}\|}\right)^2\right]. \qquad (39)$$
Eqn. 39 is the gradient-based Nikaido-Isoda function introduced by Raghunathan et al. (2019).
For minimax optimization, it is not immediately clear whether the objective-based or the gradient-based reward would work better in practice. Intuitively, by definition, the former is likely to lead towards a saddle point (defined in Eqn. 1) and the latter to a stationary point. The two do not always coincide in general, e.g., a stationary point might not be a saddle point. But for all the specific test problems we studied in Section 4, a stationary point is also a saddle point.
We run several experiments on the challenging Seesaw problem as a specific example, to provide a close comparison between the gradient-based reward in Eqn. 38 and the objective-based reward. We re-run Twin-L2O, only replacing Eqn. 4 with the gradient-based reward, and observe that: (a) the gradient-based reward solves the Seesaw problem worse than the objective-based one; (b) the minimization variable x diverges on testing problem instances; (c) the maximization variable y converges to a solution of precision magnitude 0.04 (for reference, y converges to a magnitude below 0.01 when using the objective-based loss). We further identify one possible cause after analyzing the gradient behaviors. Note that here the gradient-based reward can be expressed as:
$$\|\nabla_x f(x, y)\|^2 + \|\nabla_y f(x, y)\|^2 = a^2 b^2 \pi^2 y^2 \cos^2(a\pi x) + b^2 \sin^2(a\pi x) \qquad (40)$$
Because $a, b \sim U[0.9, 1]$, the first term often dominates during training due to the $\pi^2$ multiplier, unless y is sufficiently close to zero. This imbalance could be a cause of instability. For example, this reward could sometimes penalize $\cos^2(a\pi x)$ towards zero, which is the opposite direction of the true solution $\sin^2(a\pi x) = 0$. Although this is just one specific problem example, it reveals that the gradient-based loss may not always work as well as expected, due to the instability or asymmetry of the min/max gradients.
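A quick numeric illustration of this imbalance (the parameter values are our own picks):

import numpy as np

a = b = 1.0
x, y = 0.25, 1.0                                           # a*pi*x = pi/4, so cos = sin
term1 = (a * b * np.pi * y * np.cos(a * np.pi * x)) ** 2   # from ||grad_x f||^2
term2 = (b * np.sin(a * np.pi * x)) ** 2                   # from ||grad_y f||^2
print(term1 / term2)   # = (pi * y)^2 ~ 9.87: the first term dominates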
Besides, we have also tried the second gradient-based reward in Eqn. 39, and found it ineffective. That is mainly because the denominator (the difference between consecutive variables) can become very small, so the loss explodes and breaks training.
Back to the objective-based reward used in this paper, we have not observed oscillation empirically from all experiments so far. Our hypothesis is that the recurrent structure of the proposed Twin-L2O
framework (shown in Eqn. 2) plays a role here. Although we use two LSTMs for the min and max updates respectively, the LSTM of one variable actually takes in the information of the other LSTM implicitly, because it takes the output of the other as input. When we penalize the objective function value of one LSTM update, all previous min and max updates can (in principle) be taken into account due to the effect of unrolled back-propagation, e.g., the min and max updates each take reference to not only its own, but also the other’s higher-order past trajectory information. While this is a tentative explanation, we think more in-depth analysis of why oscillation may or may not happen in L2O could be a really interesting future work.
Another implicit intuition that leads us to prioritizing the use of objective-based over gradient-based is that, in classic minimization, objective change is summable (i.e., having a finite accumulation), but gradient change is not summable in general (unless with properties such as strong convexity). While summability is itself not a guarantee for good training/testing performance, lack of summability means the loss may have an overly large dynamic range.
To summarize, our objective-based reward naturally extends the previous L2O convention, works better than the alternatives, and has exhibited no oscillation so far. However, we emphasize that we do not intend to claim that the current reward in Eqn. 4 is the best choice for minimax L2O: it is one of several plausible options. We do concur that the gradient-based reward designs in Eqn. 38 and Eqn. 39 pose a complicated yet interesting question, especially for more complicated minimax problems. Again, as this paper is intended only as a first work and pilot study towards understanding the profound challenges and rich possibilities of minimax L2O, we believe everything discussed and proposed here, including the loss function, has large room for improvement.
1. What is the focus of the reviewed paper?
2. What are the strengths and weaknesses of the proposed approach compared to traditional methods?
3. How does the reviewer assess the significance of the contributions made by the paper?
4. Are there any concerns regarding the experimental results presented in the paper?
5. How does the reviewer evaluate the clarity and organization of the paper's content?
Review
Classical iterative minimax optimization algorithms display unstable dynamics. Their convergence is often sensitive to the parameters, which need to be re-tuned for different problems to ensure convergence. Therefore, there is a practical motivation to develop L2O for minimax problems, so that we can meta-learn and adapt optimization rules to a special class of functions.
To extend L2O from minimization to minimax, where two groups of variables need to be updated, the authors designed and explored a variety of model options. They find that using two LSTMs, with only their reward function shared, benefits meta-learning most, particularly when the min and max updates are highly non-symmetric. The decoupled design is aligned with the experience of classical optimizers, e.g., the max step often calls for its own solution tracking. The authors also describe both a curriculum training strategy and a preliminary theory called safeguarding, to make L2O models able to solve a wider range of problems.
This paper's contribution mainly lies on the engineering side, i.e., demonstrating that meta-learning or L2O can handle more complicated tasks/objectives than conventionally solving minimization. It is an interesting empirical study and is also done solidly. I believe this paper could attract interest and generate follow-up ideas from the L2O community.
On the math side, even though the authors tried to motivate their work from the limitations of classical minimax algorithms, I feel its impact may be limited for the optimization field, as it does not reveal many insights on how to design new minimax algorithms or provide better theoretical guarantees.
Regarding the experiments, the authors demonstrated three simple testbed functions. As an empirical paper, it would definitely become stronger if the authors could prove their concept on some real minimax problems such as GANs or robust/private training.
The paper is in general well-written. With a lot of content packed in, the authors managed to organize and lay out their logical flow smoothly and clearly. I found just some typos: meta-learing -> meta-learning, draws and integrate -> draws and integrates, recently just introduce -> recently just introduced.
ICLR | Title
Learning A Minimax Optimizer: A Pilot Study
Abstract
Solving continuous minimax optimization is of extensive practical interest, yet notoriously unstable and difficult. This paper introduces the learning to optimize (L2O) methodology to the minimax problems for the first time and addresses its accompanying unique challenges. We first present Twin-L2O, the first dedicated minimax L2O framework consisting of two LSTMs for updating min and max variables separately. The decoupled design is found to facilitate learning, particularly when the min and max variables are highly asymmetric. Empirical experiments on a variety of minimax problems corroborate the effectiveness of Twin-L2O. We then discuss a crucial concern of Twin-L2O, i.e., its inevitably limited generalizability to unseen optimizees. To address this issue, we present two complementary strategies. Our first solution, Enhanced Twin-L2O, is empirically applicable for general minimax problems, by improving L2O training via leveraging curriculum learning. Our second alternative, called Safeguarded Twin-L2O, is a preliminary theoretical exploration stating that under some strong assumptions, it is possible to theoretically establish the convergence of Twin-L2O. We benchmark our algorithms on several testbed problems and compare against state-of-the-art minimax solvers. The code is available at: https://github.com/VITA-Group/L2O-Minimax.
1 INTRODUCTION
Many popular applications can be formulated into solving continuous minimax optimization, such as generative adversarial networks (GAN) (Goodfellow et al., 2014), distributionally robust learning (Globerson & Roweis, 2006), domain adaptation (Ganin & Lempitsky, 2014), distributed computing (Shamma, 2008; Mateos et al., 2010), privacy protection (Wu et al., 2018; 2020), among many more. This paper studies such problems: we consider a cost function $f : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ and the min-max game $\min_x \max_y f(x, y)$. We aim to find the saddle point $(x^*, y^*)$ of $f$:
f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗), ∀(x, y) ∈ X × Y, (1)
where X ⊂ Rm and Y ⊂ Rn. If X = Rm and Y = Rn, (x∗, y∗) is called a global saddle point; if X × Y is a neighborhood near (x∗, y∗), (x∗, y∗) is a local saddle point. The main challenge to solve problem (1) is the unstable dynamics of iterative algorithms. Simplest algorithms such as gradient descent ascent (GDA) can cycle around the saddle point or even diverge (Benaım & Hirsch, 1999; Mertikopoulos et al., 2018b; Lin et al., 2019). Plenty of works have been developed recently to address this issue (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Mertikopoulos et al., 2018a; Gidel et al., 2018; Mokhtari et al., 2019). However, the convergence is still sensitive to the parameters in these algorithms. Even if the cost function is only changed by scaling, those parameters have to be re-tuned to ensure convergence.
A recent trend of learning to optimize (L2O) parameterizes training algorithms to be learnable from data, such that the meta-learned optimizers can be adapted to a special class of functions and outperform general-purpose optimizers. That is particularly meaningful when one has to solve a large number of similar optimization problems repeatedly and quickly. Specifically, for existing L2O methods that operate in the space of continuous optimization, almost all of them solve some
minimization problem (Andrychowicz et al., 2016; Chen et al., 2017; Li & Malik, 2016), leveraging an LSTM or a reinforcement learner to model their optimizer. Different from classic optimization results that often provide worst-case convergence, most L2O methods have little or no convergence guarantees, especially on problem or data instances distinct from those seen in training, leaving their generalizability in practice questionable (Heaton et al., 2020). Motivated by L2O's success in learning efficient minimization solvers from data, this paper seeks to answer: whether we could accomplish strong minimax L2O solvers as well; and if yes, how generalizable they could be?
As it might look straightforward at first glance, such an extension is highly nontrivial, as it faces several unique challenges. Firstly, while continuous minimization has a magnitude of mature and empirically stable solvers, for general minimax optimization, even state-of-the-art analytical algorithms can exhibit instability or even divergence. To the best of our knowledge, most state-of-the-art convergence analysis of minimax optimization is built on the convex-concave assumption (Gidel et al., 2018; Mokhtari et al., 2019; Ryu et al., 2019), and some recent works relax the assumption to nonconvex-concave (Lin et al., 2019; 2020). Convergence for general minimax problems is still open. That raises a prominent concern about whether a stable minimax L2O is feasible. Secondly, given the two groups of min and max variables simultaneously, it is unclear to what extent their optimization strategies can be modeled and interact within one unified framework, a new question that would never be met in minimization. Thirdly, the noisy and sometimes cyclic dynamics of minimax optimization provide noisier guidance (e.g., reward) to L2O; not to mention that it is not immediately clear how to define the reward: for minimization, the reward is typically defined as the negative cumulative objective values along the history (Li & Malik, 2016). However, for minimax optimization the objective cannot simply be decreased or increased monotonically.
Contribution: This paper is a pilot study of minimax L2O. We start by establishing the first dedicated minimax L2O framework, called Twin-L2O. It is composed of two LSTMs sharing one objective-based reward, separately responsible for updating the min and max variables. Through ablations of the design options, we find this decoupled design facilitates meta-learning most, particularly when the min and max updates are highly non-symmetric. We demonstrate the superior convergence of Twin-L2O on several testbed problems, compared against a number of analytical solvers.
On top of that, we further investigate how to enhance the generalizability of the learned minimax solver, and discuss two complementary alternatives with experimental validations. The first alternative is an empirical toolkit applicable to general minimax L2O. We introduce curriculum learning to L2O training for the first time, recognizing that not all problem instances are equally difficult to learn to solve. After plugging in that idea, we show that Twin-L2O can be trained to stably solve an order of magnitude more problem instances (in terms of the parameter varying range). The second alternative explores a theoretical mechanism called safeguarding, for the important special case of convex-concave problems. When solving a testing instance, safeguarding identifies when an L2O failure would occur and provides an analytical fall-back option (Diakonikolas, 2020). That guarantees convergence for convex-concave problems and, in practice, converges faster even when the problem parameters are drawn from a distribution different from training.
2 RELATED WORK
2.1 MINIMAX OPTIMIZATION
Following (Neumann, 1928), the problem (1) has been studied for decades due to its wide applicability. Simultaneous gradient descent (SimGD) or gradient descent ascent (GDA) (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) is one of the simplest minimax algorithms, conducting gradient descent over variable x and gradient ascent over variable y. However, the dynamics of SimGD or GDA can converge to limit cycles or even diverge (Benaım & Hirsch, 1999; Mertikopoulos et al., 2018b; Lin et al., 2019). To address this issue, Optimistic gradient descent ascent (OGDA) simply modifies the dynamics of GDA and shows more stable performance (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Mertikopoulos et al., 2018a; Gidel et al., 2018; Mokhtari et al., 2019). OGDA attracts more attention because of its empirical success in training GANs. (Ryu et al., 2019) theoretically studies OGDA by analyzing its continuous-time dynamics and
¹We differentiate the usage of two terms, parameters and variables, throughout the paper. For example, in $\min_x \max_y ax^2 - by^2$, we call a, b parameters and x, y variables. For simplicity, this paper only discusses L2O generalizability when the testing instances' parameter distribution differs from the training one.
proposes Anchored simultaneous gradient descent that shows good performance. Follow-the-Ridge (Wang et al., 2019) also addresses the limit cycling problem by introducing second-order information into the dynamic of GDA. Lately, K-Beam (Hamm & Noh, 2018) stabilizes the convergence of GDA by duplicating variable y, yielding strong performance. At each iteration, it performs gradient ascent independently on K copies of y and greedily chooses the copy that leads to a large function value f , then it updates x based on the selected copies.
2.2 LEARNING TO OPTIMIZE
As a special instance of meta-learning, L2O has been studied in multiple contexts, with continuous optimization being one of its main playgrounds so far. The first L2O framework was introduced in (Andrychowicz et al., 2016), where both the optimizee's gradients and loss function values are formulated as the input features for an RNN optimizer. Due to the enormous number of parameters, a coordinate-wise design of the RNN optimizer is adopted, where all optimization coordinates share the same updating strategy. (Li & Malik, 2016) uses the gradient history and objective values as observations and step vectors as actions in their reinforcement learning framework. (Chen et al., 2017) leverages an RNN to train a meta-optimizer to optimize black-box functions. Two effective training tricks, random scaling and objective convexifying, are presented in (Lv et al., 2017). Wichrowska et al. (2017) presents an optimizer with a multi-level hierarchical RNN architecture augmented with additional architectural features. Li et al. (2020) introduces a Jacobian regularization to L2O and enhances the domain adaptation performance of optimizees. Chen et al. (2020a) proposes several improved training techniques to stabilize L2O training and ameliorate performance. You et al. (2020); Chen et al. (2020b;c) extend the application scope of L2O to various practical problems such as graph neural network training, domain generalization, and noisy label training.
The above works address continuous minimization problems using single-optimizer models. One exception, (Cao et al., 2019), extends L2O to Bayesian swarm optimization. The authors present a novel architecture where multiple LSTMs jointly learn iterative update formulas for the swarm of particles, coordinated by attention mechanisms. We also notice that two recent efforts (Jiang et al., 2018; Xiong & Hsieh, 2020) introduce L2O to adversarial training, a renowned application of minimax optimization. However, both of them merely utilize L2O to solve the inner minimization of their minimax problems (i.e., generating attacks), while the outer maximization is still solved analytically. Neither of the two directly solves the full minimax optimization.
3 METHOD
3.1 MAIN FRAMEWORK: TWIN LEARNABLE OPTIMIZERS (TWIN-L2O)
The main L2O framework we proposed is named Twin-L2O, where we use two learnable optimizers to alternate between min and max updates. Our design adopts the basic idea of (Andrychowicz et al., 2016) to use Long Short-Term Memory (LSTM) to model learnable optimizers, for solving target problems known as optimizees. At each step, LSTM outputs the update of the optimizee variables. The LSTM inputs are typically the current zero-order or first-order information of the optimizee (Andrychowicz et al., 2016; Lee & Choi, 2018), plus the historic optimization trajectory information.
In Twin-L2O, two LSTMs separately update x and y and record the historical trajectory information of their own variables respectively. Formally, we consider the minimax problem $\min_x \max_y f(x, y)$. We use two LSTM optimizers, LSTM-Min and LSTM-Max, to update the min variable x and the max variable y respectively. LSTM-Min is parameterized by $\varphi_{min}$ and LSTM-Max is parameterized by $\varphi_{max}$. At each iteration t, Twin-L2O updates x and y in turns and yields the following rule:
$$x_{t+1} = x_t + \Delta x_t, \;\; \text{where } (\Delta x_t,\, h^{min}_{t+1}) = \text{LSTM-Min}\big([\nabla_x f(x_t, y_t), \nabla_y f(x_t, y_t)],\, h^{min}_t,\, \varphi_{min}\big),$$
$$y_{t+1} = y_t + \Delta y_t, \;\; \text{where } (\Delta y_t,\, h^{max}_{t+1}) = \text{LSTM-Max}\big([\nabla_y f(x_{t+1}, y_t), \nabla_x f(x_{t+1}, y_t)],\, h^{max}_t,\, \varphi_{max}\big), \qquad (2)$$
where $h^{min}_t$ and $h^{max}_t$ are the historical trajectory information (hidden states) of LSTM-Min and LSTM-Max at time step t. This formulation is inspired by the SimGD/GDA-style algorithms (Nedić & Ozdaglar, 2009; Du & Hu, 2019; Jin et al., 2019; Lin et al., 2019) that conduct simultaneous/alternating gradient descent over x and ascent over y. Figure A4 (Appendix A1) conceptually illustrates the framework.
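A compact, self-contained sketch of one alternating update of Eqn. 2 (names are our own; each LSTM is assumed to return an update and its new hidden state):

import torch

def twin_step(lstm_min, lstm_max, f, x, y, h_min, h_max):
    gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
    dx, h_min = lstm_min(torch.cat([gx, gy]), h_min)   # LSTM-Min proposes delta-x
    x = x + dx                                         # min update on x
    gx, gy = torch.autograd.grad(f(x, y), (x, y), create_graph=True)
    dy, h_max = lstm_max(torch.cat([gy, gx]), h_max)   # LSTM-Max proposes delta-y
    y = y + dy                                         # max update on y
    return x, y, h_min, h_max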
The next question is how to design the L2O reward. To train the LSTM optimizers, the loss function typically penalizes some type of cost accumulated along the optimization trajectory over a horizon of T
steps (also known as the unrolling length for LSTM (Sherstinsky, 2018))
$$\mathcal{L}(\varphi_{min}, \varphi_{max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} w_t\, R(f, x_t, y_t)\right], \qquad (3)$$
where $w_t$ is chosen to be all 1, following the basic setting in (Andrychowicz et al., 2016); it might be tuned for better performance in future work.
As a key design option, R(f) represents the reward to guide the L2O training. In existing L2O methods for continuous minimization (Andrychowicz et al., 2016; Lv et al., 2017), R(f) is usually simply set to $R(f, x_t) = f(x_t)$ to encourage a fast decrease of objective values over time. To extend this existing reward to the minimax scenario, we cannot directly penalize the overall objective function value either way, since the min and max objectives are entangled. Also, different from pure minimization problems, the Twin-L2O updates (2) consist of two alternating steps governed by two different LSTM optimizers: each accounts for its own subproblem goal (min or max updates), but the two also have to collaborate to explore/exploit the minimax landscape. We specifically design the following reward that implicitly addresses the above issue:
$$\mathcal{L}(\varphi_{min}, \varphi_{max}) = \mathbb{E}_f\left[\sum_{t=1}^{T} \big\{[f(x_t, y_{t-1}) - f(x_t, y_t)] + [f(x_t, y_{t-1}) - f(x_{t-1}, y_{t-1})]\big\}\right]. \qquad (4)$$
Analysis of the reward design In Eqn. 4, the first and second terms always characterize two consecutive max and min updates. In more detail, the value of $f(x_t, y_t) - f(x_t, y_{t-1})$ solely reflects how effectively the t-step max update increases the objective f, while $f(x_t, y_{t-1}) - f(x_{t-1}, y_{t-1})$ reflects the effectiveness of the t-step min update in decreasing the objective f. Our goal is then to maximize the weighted accumulated sum of $f(x_t, y_t) - f(x_t, y_{t-1})$, while minimizing the weighted accumulated sum of $f(x_t, y_{t-1}) - f(x_{t-1}, y_{t-1})$, for $t = 1, 2, \ldots, T$. Combining the two sub-goals (with a sign change to turn max into min) yields our reward. One may alternatively interpret Eqn. 4 as penalizing the loss change from $f(x_t, y_t)$ along both the x and y updating directions, which encourages yielding stationary points.
For the reward design, we provide a more detailed discussion in Appendix A6. Specifically, we provide a comparison between the objective-based reward in Eqn. 4 and another possible gradient-based reward. The latter was found to be ineffective in solving the problems presented in Section 4.
Rationale of the framework selection Another important design question is to what extent learning the min and max updates should be (dis)entangled: on the one hand, the two steps obviously interact with each other as they jointly explore the minimax landscape; on the other hand, min and max steps commonly have asymmetric difficulty levels, which has been leveraged by previous algorithms. For example, (Hamm & Noh, 2018) demonstrates the failure of alternating gradient descent in minimax optimization due to the multiple-solution discontinuity of the inner maximization, and addresses it by simultaneously tracking K candidate solutions in the max step, while the outer minimization still takes one descent step. Besides the joint reward (4), the default Twin-L2O design leverages two independent LSTMs in Eqn. (2), each dedicated to handling the min or max updates. In comparison, we also consider two more "entangled" designs: (a) fully entangling the two optimizers, i.e., using one LSTM to simultaneously generate min and max outputs; (b) weakly entangling the two optimizers, by using two LSTMs sharing weights, yet allowing each to maintain its own temporal hidden states. Our ablation experiments (see Section 4.1) find that the default decoupled design in Eqn. (2) facilitates L2O learning most.
3.2 IMPROVING GENERALIZABILITY OF TWIN-L2O
Despite the empirical success of L2O, it is unfortunately impossible to ensure that any L2O algorithm always converges. Assuming the objective function type to keep unchanged, the testing instances’ parameter distribution may differ from the one of training, and L2O can catastrophically fail. For the Twin-L2O, we discuss two remedies to partially fix this issue and boost its generalizability.
We first propose curriculum L2O training scheme as a practical L2O training technique such that Twin-L2O can be trained to work on a much wider coverage of problem parameters than its vanilla versions. That would empirically help the generalizability due to broader coverage by training instance, but would still inevitably fail when meeting unseen testing instances. We then present a
4
preliminary exploration of the safeguard mechanism on minimax under a special case, i.e., solving convex-concave problems. We demonstrate that with such strong assumptions, it is possible to theoretically establish the "perfect" convergence of Twin-L2O on any unseen optimizee.
Curriculum L2O Training When it comes to general minimax problems, it is unlikely to exist an ideal theory to fully ensure Twin-L2O convergence on all instances. Therefore, we seek empirical L2O success of as many instances as possible. Specifically: can we train Twin-L2O better, so that it can work on instances at a broader parameter range?
We find a curriculum learning (CL) strategy (Bengio et al., 2009) particularly useful. CL was first adopted to train neural networks by first focusing the training on an "easy" training subset (often adaptively selected), that is then gradually grown to the full set. It is known to be effective to stabilize training, especially when the training set is highly varied or noisy (Jiang et al., 2017). Since minimax optimization is notoriously unstable no matter via analytical or learned optimizers, we conjecture that the noisy minimax dynamics might challenge Twin-L2O by providing unreliable guidance and impede its training. Considering that our Twin-L2O is modeled using LSTMs, it is natural to think of whether CL can bring additional gains if applied to meta-training. Previously it was also found effective in L2O for minimization problems (Chen et al., 2020a).
Method 1 Safeguarded-Twin-L2O for ConvexConcave Saddle Point Problems
1: Initialize u1 ∈ Rn, C ∈ [0,∞), α ∈ (0,∞), {λ`} = {1/(`+ 1)}, k ← 2, weights {φk} 2: function SADDLEHALPERN(ε) 3: u2 ← 1
2
( u1 + Jα∂f (u 1) ) .
4: while ‖uk − Jα∂f (uk)‖ > ε do 5: zk+1 ← LSTM(uk; φk) 6: C Apply L2O operator
7: if Ek+1(uk+1) ≤ C k + 1 8: C Verify safeguard condition 9: uk+1 ← zk+1
10: C Use L2O update 11: else 12: uk+1 ← λku1 +(1−λk)Jα∂f (uk) 13: C Use fallback update 14: k ← k + 1 15: return uk 16: end function
Specifically, in one epoch, we will rank all optimizee instances by their cumulative losses (3) from low to high, and only select the top C instances to count into the total reward. In that way, only the instances that exhibit "good training behaviors" (smaller gradients & more likely to get close to stationary points) will be initially used for updating the Twin-L2O. That prevents the learned optimizer being misled by random failures and outliers, which are commonly found in the early epochs of Twin-L2O training. We by default set the percentage C to start from 20%, then growing linearly every epoch until reaching 100% in the later training stage.
Up to our best knowledge, this is the first effort to incorporate CL with L2O training. We can this Twin-L2O trained with CL as Enhanced Twin-L2O: note that it is the same model structure, just trained in a different and better way. More details can be found in Appendix A4 .
Safeguard Twin-L2O: A Preliminary Theoretical Exploration Most L2O methods have little or no convergence guarantees. Very recently, a safeguarding mechanism has been introduced to L2O for convex minimization problems with gradient and/or proximal oracles (Heaton et al., 2020). Conceptually, a safeguard is anything that identifies when a ”bad" L2O update would occur and what ”fallback" update to apply in place of that bad L2O update. In this section, we establish a safeguarding theory and algorithm, specifically for learned convex-concave saddle point algorithms. Here the safeguard takes the form of an energy inequality (c.f. Line 6 in Method 1).
In this section, we write u = (x,y) ∈ Rm × Rn and let α > 0. We use the resolvent, defined by Jα∂f (x,y) = (Id + α∂f) −1, (5)
where we note ∂f = (∂xf,−∂yf). For simple f (e.g., quadratic functions), a closed formula exists for Jα∂f . Otherwise, one may use an iterative method to approximate this quantity. In addition, define the residual operator
F (u) := 1
2 (u− Jα∂f (u)) , (6)
and, for each k ∈ N, the energy Ek : Rm × Rn → R by
Ek(u) := ‖F (u)‖2 − λk
1− λk 〈F (u),u1 − u〉 , (7)
5
where {λk} is a sequence of step sizes. The full method is outlined in the Method 1, where the L2O update is denoted by LSTM(uk;φk) and the fallback method is a Halpern iteration (Halpern, 1967). Our main result for minimax safeguarding theory is formally stated below:
Theorem 3.1. If the sequence {uk} is generated by Algorithm 1, then
‖uk − Jα∂f (uk)‖ ≤ 1
2 ( d1 k + √ d21 k2 + 4C k ) , for all k ≥ 2, (8)
where d1 := min{‖u − u1‖ : 0 ∈ ∂f(u)} is the distance from the initial iterate u1 to the set of saddle points and C ≥ 0 is an arbitrary constant. In particular, this implies each limit point of {uk} is a saddle point.
Our proof draws and integrates two sources of ideas: (1) the safeguarded L2O technique that has recently just been introduced to convex minimization (Heaton et al., 2020); and (2) Halpern iteration (Diakonikolas, 2020) that is adopted for analytical minimax optimization with favorable theoretical properties. The full proof is provided in Appendix A3. Note that this work is not intended as a theory innovation on (classical) minimax optimization. Instead, our aim is to extend the emerging idea of safeguarded L2O from convex minimization to convex-concave minimax problems of interest, and shows this idea to be helpful for minimax L2O too: see experiments in Section 4.
4 EXPERIMENTS
4.1 ABLATION STUDY ON THE DESIGN OF TWIN-L2O
We first investigate the design choices for Twin-L2O that we discussed in Section 3.1. We mainly investigate two aspects: (i) whether to share the weights in the two LSTM solvers or not; (ii) whether to share the hidden states between the two LSTM solvers or not. That leads us to four options, denoted as (with self-explanatory names): Share-LSTM-Share-Hidden, Share-LSTM-Two-Hidden, Two-LSTM-Share-Hidden, and Two-LSTM-Two-Hidden. We use the seasaw problem, formulated as below, as the testbed for our ablation study (note that the ranges of a, b are picked only to make L2O easy to converge, while more will be investigated in Section 4.3):
Seesaw: min x max y −by sin(aπx), a ∼ U [0.9, 1], b ∼ U [0.9, 1] (Seesaw)
The Seesaw problem is nonconvex-concave, and is considered challenging (Hamm & Noh, 2018) due to its non-differentiability arising from that the solutions of the state equation or the adjoint state equation are not unique (Danskin, 1966). The L2O training routine follows (Andrychowicz et al., 2016): we use 128 optimizee instances for training; each of them has its parameters i.i.d. sampled, and variables x, y randomly initialized by i.i.d. sampling from U [−0.5, 0.5]. A validation set of 20 optimizees is used with parameters and variables sampled in the same way; and similarly we generate a hold-out testing set of another 100 instances. For each epoch, an L2O optimizer will update the optimizee parameters for 1000 iterations, with its unrolling length T = 10. When the next epoch starts, all x, y as well as LSTM hidden states are reset. We train the L2O solvers for 200 epochs, using Adam with a constant learning rate 10−4. We pick the model checkpoint at the epoch when its validation performance reaches the peak. Figure 1 compares the convergence results of the four options, evaluated on the same testing set. We measure the `2 distances between the solved variables and their corresponding ground-truth solutions (or the closet one, if multiple exist). It is obvious that only the Two-LSTM-Two-Hidden can successfully converge to the correct solution (x∗, y∗) = (0, 0), which is also the equilibrium. Our major observation from the above experiments is that for minimax L2O optimization, especially for asymmetric problems such as Seesaw, it would be a better choice to use decoupled two LSTM solvers and let them take care of their own trajectory information. We will hence stick to this option and use it as our default Twin-L2O.
All experiments in this and following sections are conducted using the GeForce GTX 1080 Ti GPUs.
4.2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
In this section, we apply Twin-L2O to two more test problems besides Seesaw:
6
• Rotated Saddle2: minxmaxy ax2 − by2 + 2xy, a ∼ U [0.9, 1], b ∼ U [0.9, 1] • Matrix Game: minx maxy xTAy, A ∈ R5×5,Ai,j ∼ Bernoulli(0.5) · U [−1, 1]
On all three problems, we compare Twin-L2O with several state-of-the-art algorithms: Gradient Descent Ascent (GDA) (Lin et al., 2019), Optimistic Mirror Descent (OMD) (Daskalakis et al., 2018) and GD with anchoring (GD-Anchoring) (Ryu et al., 2019). On Rotated Saddle and Seesaw we will compare with K-beam (Hamm & Noh, 2018) in addition. For matrix game, we also compare it with the standard Halpern Iteration (Diakonikolas, 2020) that is designed for convex-concave minimax problems. For these analytical methods, all parameters are tuned with careful grid search. We train, validate and test Twin-L2O models following the protocol described in Section 4.1.
Figure 2 plots the convergence curves of all methods, averaged across all testing problems (and each with 20 trials of random x, y initialization). Several observations are drawn below:
• L2O does not show superiority over well-tuned analytical algorithms on the simplest Rotated Saddle problem (and similarly Saddle). The problem is very gradient-friendly, and therefore OMD already achieves the best convergence speed as well as solution quality.
• On Matrix Game, Twin-L2O starts to show competitive edges over analytical solvers with faster convergence speed and higher-precision solutions.
• On the Seesaw problem, Twin-L2O largely outperforms all carefully-tuned analytical algorithms, achieving one-magnitude higher-precision solutions with comparable convergence speed. That shows us one take-home message: L2O can work for minimax optimization, and can contribute most significantly to those hard problems. That makes minimax L2O a highly meaningful complement to existing analytical minimax solvers. More analysis on comparing the actual computational costs (MAC numbers) can be found in Appendix A5.
2We also test on the classical Saddle problem, but its behaviors and conclusions are almost identical to the Rotated Saddle. We hence report on Rotated Saddle due to the space limit.
7
4.3 ENHANCED TWIN-L2O: CURRICULUM LEARNING EVALUATION
We again use the Seesaw problem as an example in this section. Its two parameters a and b, i.e., the problem period and the scale, are sampled independently from two uniform distributions U [L1a, L2a], U[L1b, L2b]. In Section 4.1, both are chosen as U [0.9, 1] for the ease of L2O convergence. We now stretch both parameter ranges and test whether an L2O model can still solve the resultant broader range of problems. All other training protocols follow Section 4.1 identically.
Sections 4.1 and 4.2 evaluate the average solution distances over the testing set (100 instances), which worked fine in the small [a, b] range then. However, when we extend the [a, b] range, we find that the L2O behaviors can differ vastly across testing instances, i.e., some converging quickly while others suffering from heavy fluctuations or even divergence, which is an artifact of inefficient L2O training that leaves it unable to cover the full large problem range. That motivates us to carefully re-design our evaluation metrics here, to reflect both the solution quality and its variation/stability.
For p-th testing instance, we record its solution distance l2 D p t , at epoch t = 1, 2, .... Given two thresholds acc and std (chosen by multi-fold validation; we use default d = 2× 10−2 and std = 10−4), we define two forms of success rate (SR):
SR1 = ∑n p=1 I(d(D p)< acc) n SR2 = ∑n p=1 I(Std(D p)< std) n
where d(Dp) = ∑L t=t0 Dpt
L−t0+1 , Std(Di) = Std({D p t }Lt=t0), t0 = 0.8L; n = 100 is the number of testing
instances; L = 1000 is the total iteration number that each instance (optimizee) is trained by L2O. Intuitively, SR1 emphasizes the average solution precision from the last 20 iterations; and SR2 measures how large solution variation is seen in the last 20 iterations.
Table 1 compares Twin-L2O and Enhanced Twin-L2O at multiple combinations of stretched ranges of a and b, starting from the original [0.9, 1] × [0.9, 1] and going up to as large as [0, 5] × [0, 2]: the parameter coverage increases by 1,000 times. Adding CL evidently helps Twin-L2O stay effective when trained over a broader instance range, under both SR metrics. Vanilla Twin-L2O performs perfectly at [0.9, 1] × [0.9, 1], yet begins to drop at [0, 1] × [0.9, 1] (mainly showing higher instability, as indicated by lower SR_2), and hardly succeeds beyond [0, 3.5] × [0.9, 1]. In contrast, Enhanced Twin-L2O obtains nontrivial results even at [0, 5] × [0, 1], a range tens of times wider than the vanilla model can handle.
4.4 SAFEGUARDED TWIN-L2O EXPERIMENTS
Here we use the matrix game as the example to evaluate the safeguard mechanism established above for convex-concave minimax optimization. We directly take a well-trained Twin-L2O model for the matrix game from Section 4.2, where A ∈ R^{5×5} with A_{i,j} ∼ Bernoulli(0.5) · U[−1, 1], and the coordinates of the initial optimization variables x and y are independently sampled from U[−1, 1]. During testing, in addition to testing the Twin-L2O model on testing data from this seen distribution, we also evaluate it on unseen data, whose A is sampled from an intentionally very distinct distribution: A_{i,j} ∼ Bernoulli(1.0) · U[−8, 8]; x and y are initialized in the same manner. We compare Safeguarded Twin-L2O (denoted Safe-Twin-L2O), which uses standard Halpern iteration (Diakonikolas, 2020) as the fallback update whenever the L2O update is disapproved in Method 1. We also compare with OMD and GD-Anchoring on both seen and unseen testing data (GDA fails to converge in both cases, even when we tune its hyperparameters to the best of our efforts). The results are shown in Figure 3. When tested on the aggressively varied unseen data, the vanilla Twin-L2O model fails and diverges, but Safe-Twin-L2O still converges successfully: even faster than Halpern iteration and OMD, and much better than GD-Anchoring.
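For concreteness, the sketch below reconstructs what one safeguarded step could look like; the interfaces are our own assumptions, with the step size λ_n = 1/(n+1) and the energy E_k taken from the analysis in Appendix A3, and the Halpern fallback written as an anchored step on u − 2F(u).

```python
import numpy as np

def safeguarded_step(n, u, u1, F, l2o_step, C):
    """One safeguarded update (a sketch of the paper's Method 1, not its
    exact code). n >= 2 is the iteration index, u the current iterate u^n,
    u1 the anchor u^1, F the monotone operator, l2o_step the learned
    update proposal, and C the safeguard constant."""
    lam = 1.0 / (n + 1)                   # assumed step-size schedule

    def energy(k, v):
        # E_k(v) = ||F(v)||^2 - (1/(k-1)) <F(v), u1 - v>, per Appendix A3
        return F(v) @ F(v) - (F(v) @ (u1 - v)) / (k - 1)

    z = u + l2o_step(u)                   # candidate from the L2O model
    if energy(n + 1, z) <= C / (n + 1):   # accept if the energy stays small
        return z
    # Otherwise fall back to one Halpern iteration step anchored at u1.
    return lam * u1 + (1.0 - lam) * (u - 2.0 * F(u))
```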
5 CONCLUSION
This paper studies L2O for minimax optimization for the first time. We present the Twin-L2O model, and further improve its generalizability by introducing a theoretically grounded safeguarding framework (for convex-concave problems), as well as an empirical curriculum training strategy (for general problems). Extensive simulations endorse the promise of our algorithms. This pilot study suggests and paves the way for extending L2O beyond continuous minimization problems.
Limitation: The entire L2O field faces challenges in scaling up to larger-scale optimization (Andrychowicz et al., 2016), and our study is no exception. Despite very promising gains on challenging cases such as the Seesaw and Matrix Game problems, the current work only proves the first concept of minimax L2O, on relatively basic and low-dimensional test problems. Our immediate next step is to scale up Twin-L2O and to explore its potential in solving minimax application problems of practical interest, such as adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020) and GANs (Gulrajani et al., 2017). A potential idea might leverage the memory-efficient hierarchical RNN structure in (Wichrowska et al., 2017).
A1 TWIN-L2O FRAMEWORK
Figure A4: Architecture of Twin-L2O. We let LSTM-Min and LSTM-Max, parameterized by φmin and φmax, update x and y respectively. As shown by curved dashed lines, Twin-LSTM keeps being updated about the latest variable values of x and y when computing input information and the reward. When constructing the computational graph and training the Twin-LSTM, the solid lines allow gradients to flow while the dashed lines do not pass any gradient (Andrychowicz et al., 2016).
A2 COMPARISON WITH STATE-OF-THE-ART ANALYTICAL OPTIMIZERS
Figure A5 shows the performance on the y variable for Rotated Saddle, Matrix Game and Seesaw. The analysis of these results generally aligns with that in the main text.
[Figure A5: convergence of the y variable, plotting solution distance versus iteration (log-scale, up to 10^3) for three problems: (a) Rotated Saddle, comparing GDA, OMD, GD-Anchoring, K-beam and Twin-L2O (annotated final distance 5.56e-92); (b) Matrix Game, comparing GDA, OMD, GD-Anchoring, Halpern and Twin-L2O (annotated final distance 1.89e-03); (c) Seesaw, comparing GDA, OMD, GD-Anchoring, K-beam and Twin-L2O (annotated final distance 2.94e-03).]
Figure A5: Convergence comparison of variable y between Twin-L2O and state-of-the-art analytical minimax optimizers (GDA, OMD, GD-Anchoring, and K-beam), for three test problems.
A3 PROOF OF SAFEGUARDING RESULT
Below is a proof of the main result, Theorem 4.1:
Proof. We proceed in the following manner, with much credit due to the analysis in Diakonikolas (2020). First we verify an inequality for the energy sequence {E_k(u^k)} (Step 1). This is used to obtain the convergence rate (Step 2). Resulting implications about limit points are established last (Step 3).
Step 1. We claim

E_k(u^k) ≤ C/k, for all k ≥ 2.   (9)
We proceed by induction. First note that J_{α∂f} is firmly nonexpansive, and so 2F = Id − J_{α∂f} is also firmly nonexpansive (Bauschke et al., 2011), which implies

‖2F(u) − 2F(v)‖² ≤ ⟨2F(u) − 2F(v), u − v⟩, for all u, v ∈ R^m × R^n.   (10)

Using (10) with u = u^2 and v = u^1, together with our choice of step sizes {λ_k}, we find
E_2(u^2) = ‖F(u^2)‖² − (λ_1 / (1 − λ_1)) ⟨F(u^2), u^1 − u^2⟩   (11)
         = ‖F(u^2)‖² − ⟨F(u^2), u^1 − u^2⟩   (12)
         = ⟨F(u^2), F(u^2) − F(u^1)⟩   (13)
         = ‖F(u^2) − F(u^1)‖² + ⟨F(u^1), F(u^2) − F(u^1)⟩   (14)
         ≤ (1/2) ‖2F(u^2) − 2F(u^1)‖² + ⟨F(u^1), F(u^2) − F(u^1)⟩   (15)
         ≤ ⟨F(u^2) − F(u^1), u^2 − u^1⟩ + ⟨F(u^1), F(u^2) − F(u^1)⟩   (16)
         = −⟨F(u^2) − F(u^1), F(u^1)⟩ + ⟨F(u^1), F(u^2) − F(u^1)⟩   (17)
         = 0.   (18)
Thus, E_2(u^2) ≤ 0 ≤ C/2, and the base case holds. Inductively, suppose (9) holds with k = n for some n ≥ 2. If u^{n+1} = z^{n+1}, then (9) holds with k = n + 1 by the conditional statement in Line 6 of Method 1. Alternatively, suppose u^{n+1} ≠ z^{n+1}. Applying (10) with u = u^{n+1} and v = u^n yields
‖F(u^{n+1}) − F(u^n)‖² ≤ 2‖F(u^{n+1}) − F(u^n)‖² ≤ ⟨F(u^{n+1}) − F(u^n), u^{n+1} − u^n⟩.   (19)

Upon expansion of the left-hand side, we discover

‖F(u^{n+1})‖² ≤ ⟨F(u^{n+1}), u^{n+1} − u^n + 2F(u^n)⟩ − ⟨F(u^n), u^{n+1} − u^n + F(u^n)⟩.   (20)
Algebraic manipulations of the update formula for u^{n+1} yield the relations

u^{n+1} − u^n + 2F(u^n) = (λ_n / (1 − λ_n)) (u^1 − u^{n+1}),   (21a)
u^{n+1} − u^n + F(u^n) = λ_n (u^1 − u^n) − (1 − 2λ_n) F(u^n).   (21b)

Substituting (21) into (20) gives
‖F(u^{n+1})‖² ≤ (λ_n / (1 − λ_n)) ⟨F(u^{n+1}), u^1 − u^{n+1}⟩ − λ_n ⟨F(u^n), u^1 − u^n⟩ + (1 − 2λ_n) ‖F(u^n)‖²,   (22, 23)

and we collect the terms involving F(u^{n+1}) on the left-hand side to obtain

‖F(u^{n+1})‖² − (λ_n / (1 − λ_n)) ⟨F(u^{n+1}), u^1 − u^{n+1}⟩ ≤ (1 − 2λ_n) ‖F(u^n)‖² − λ_n ⟨F(u^n), u^1 − u^n⟩.   (24)
Furthermore, by our choice of step-size sequence {λ_n},

1 − 2λ_n = (n − 1)/(n + 1),   (25)

and, for n ≥ 2,

λ_n = ((n − 1)/(n + 1)) · (1/(n − 1)) = ((n − 1)/(n + 1)) · (λ_{n−1} / (1 − λ_{n−1})).   (26)
Combining (24), (25), and (26) with the definition of E_n in (7) yields

E_{n+1}(u^{n+1}) ≤ ((n − 1)/(n + 1)) · E_n(u^n).   (27)
Applying the inductive hypothesis, we deduce

E_{n+1}(u^{n+1}) ≤ ((n − 1)/(n + 1)) · (C/n) = ((n − 1)/n) · (C/(n + 1)) ≤ C/(n + 1),   (28)
and this inequality closes the induction. Thus, (9) holds by the principle of mathematical induction.
Step 2. Let u⋆ be the projection of u^1 onto Fix(J_{α∂f}), so that

‖u^1 − u⋆‖ = min{ ‖u^1 − u‖ : u ∈ Fix(J_{α∂f}) } = d_1.   (29)

Note this projection is well defined since the set of saddle points is convex. By (9), for k ≥ 2,
‖F(u^k)‖² ≤ (λ_k / (1 − λ_k)) ⟨F(u^k), u^1 − u^k⟩ + C/k   (30)
          = (1/k) [ ⟨F(u^k), u^1 − u⋆⟩ + ⟨F(u^k) − F(u⋆), u⋆ − u^k⟩ ] + C/k   (31)
          ≤ (1/k) ⟨F(u^k), u^1 − u⋆⟩ + C/k   (32)
          ≤ (1/k) ‖F(u^k)‖ ‖u^1 − u⋆‖ + C/k,   (33)

where the second line uses λ_k/(1 − λ_k) = 1/k together with F(u⋆) = 0, and the third line holds since F is monotone, so that ⟨F(u^k) − F(u⋆), u⋆ − u^k⟩ ≤ 0. Using the quadratic formula with the fact that ‖F(u^k)‖ ≥ 0, we obtain (8), as desired.
Step 3. Let ũ be a limit point of {u^k}. Then there exists a subsequence {u^{n_k}} that converges to ũ. Since J_{α∂f} is 1-Lipschitz and norms are continuous, it follows that

0 ≤ ‖ũ − J_{α∂f}(ũ)‖ = lim_{k→∞} ‖u^{n_k} − J_{α∂f}(u^{n_k})‖ ≤ lim_{k→∞} (1/2) ( d_1/n_k + sqrt( d_1²/n_k² + 4C/n_k ) ) = 0.   (34)

By the squeeze lemma, we deduce ũ ∈ Fix(J_{α∂f}), i.e., ũ is a saddle point of f. Because ũ was an arbitrarily chosen limit point, every limit point of {u^k} is a saddle point of f.
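As a hedged numerical cross-check of the algebra above (assuming λ_n = 1/(n+1), the choice consistent with λ_n/(1 − λ_n) = 1/n in the proof), one can verify (25), (26) and the quadratic-formula bound directly:

```python
import numpy as np

lam = lambda k: 1.0 / (k + 1)  # assumed step-size schedule
for n in range(2, 10):
    assert np.isclose(1 - 2 * lam(n), (n - 1) / (n + 1))  # identity (25)
    assert np.isclose(lam(n),
                      (n - 1) / (n + 1) * lam(n - 1) / (1 - lam(n - 1)))  # (26)

# Step 2 ends with r^2 <= (d1/k) r + C/k for r = ||F(u^k)|| >= 0; the
# quadratic formula then gives r <= (d1/k + sqrt(d1^2/k^2 + 4C/k)) / 2,
# i.e., the rate (8) whose right-hand side also drives the limit in (34).
d1, C, k = 1.5, 2.0, 10
bound = 0.5 * (d1 / k + np.sqrt(d1**2 / k**2 + 4 * C / k))
r = np.linspace(0.0, 5.0, 10001)
feasible = r[r**2 <= (d1 / k) * r + C / k]
assert feasible.max() <= bound + 1e-3
print("identities (25)-(26) and the quadratic bound check out")
```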
A4 DETAILS ON CURRICULUM LEARNING
In the L2O framework, the reward for training the optimizer is defined as:

L(φ) = E_f [ Σ_{t=1}^{T} w_t R(f(x_t)) ],   (35)
where f is drawn from a distribution of functions. The Enhanced Twin-L2O using Curriculum Learning (CL) selects a portion of instances that demonstrate "good training behaviors" (smaller gradients and a higher likelihood of getting close to stationary points) to be counted into the reward, with the portion C increasing linearly from 20% to 100% as training progresses. In our experiments, the detailed schedule of C is:
C = min{20 + epoch_index, 100}% (36)
where epoch_index denotes the index of the training epoch, starting from 0 and ending at 199 in our case. When applying CL, the actual reward becomes

L̃(φ) = E_f [ Σ_{t=1}^{T} w_t q(f) R(f(x_t)) ],   (37)

where q(f) = 1 if the value m(f) = Σ_{t=1}^{T} w_t ‖∇_y f(x_t, y_t)‖² ranks in the top C (i.e., smallest values) among all sampled functions, and q(f) = 0 otherwise.
This process does not change the structure of Twin-L2O; it essentially adds masks to those training instances that demonstrate poor behavior, ignoring them in the actual training phase. Combined with the existing framework, this trick lets Twin-L2O achieve a higher success rate when solving problems with a larger range of parameters.
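The masking itself is simple; here is an illustrative sketch (not released code), where we interpret "ranks top C" as the C fraction of optimizees with the smallest m(f), matching the "smaller gradients" criterion stated above.

```python
import numpy as np

def curriculum_mask(grad_norms_y, weights, epoch_index):
    """q(f) in {0, 1} for a batch of optimizees, per Eqns. (36)-(37).

    grad_norms_y: (B, T) array of ||grad_y f(x_t, y_t)|| over T unrolled
    steps for B sampled optimizees; weights: (T,) reward weights w_t."""
    C = min(0.20 + 0.01 * epoch_index, 1.0)       # the schedule in Eqn. (36)
    m = (weights * grad_norms_y**2).sum(axis=1)   # m(f) per optimizee
    cutoff = np.quantile(m, C)                    # keep the best C portion
    return (m <= cutoff).astype(float)            # smaller m(f) = better

# The masked reward is then E_f[sum_t w_t q(f) R(f(x_t))]: poorly behaving
# instances simply contribute nothing to the training signal.
```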
A5 COMPUTATIONAL COST ANALYSIS
We analyze the number of multiply-accumulate operations (MACs) of Twin-L2O and K-beam (Hamm & Noh, 2018) for a Seesaw testing instance with 20 trials of random x, y initialization, each trial lasting 1000 iterations. For K-beam, the numbers of MACs are 2.36M (million), 3.8M, 8.11M, and 15.31M for K = 1, 2, 5, 10, respectively. For Twin-L2O, the total number of MACs is 3.86M.
We use K = 5 in K-beam for the experiments in this paper, which costs 2.1 times more MACs than Twin-L2O, yet its solution quality, in terms of both convergence speed and precision, fails to beat Twin-L2O's.
A6 MORE DISCUSSIONS ON THE DESIGN OF TWIN-L2O REWARD
We term the reward function in Eqn. 4 an objective-based reward, since it penalizes the objective change from f(x_t, y_t) along both the x and y updating directions. It naturally inherits and extends the reward functions prevailing in most prior L2O works for minimization (Andrychowicz et al., 2016; Li & Malik, 2016), whose default reward is to minimize a weighted sum of past function values.
One may also design the following two rewards, which we name as gradient-based rewards:
L(φ_min, φ_max) = E_f [ Σ_{t=1}^{T} ‖∇_x f(x_t, y_t)‖² + ‖∇_y f(x_t, y_t)‖² ],   (38)

L(φ_min, φ_max) = E_f [ Σ_{t=1}^{T} ( (f(x_t, y_{t−1}) − f(x_t, y_t)) / ‖y_t − y_{t−1}‖ )² + ( (f(x_{t−1}, y_{t−1}) − f(x_t, y_t)) / ‖x_t − x_{t−1}‖ )² ].   (39)
Eqn. 39 is the gradient-based Nikaido-Isoda function introduced by Raghunathan et al. (2019).
For minimax optimization, it is not immediately clear whether the objective-based or the gradient-based reward works better in practice. Intuitively, by definition, the former is likely to lead towards a saddle point (defined in Eqn. 1) and the latter towards a stationary point. The two do not always coincide in general, e.g., a stationary point might not be a saddle point; but for all the specific test problems studied in Section 4, a stationary point is also a saddle point.
We ran several experiments on the challenging Seesaw problem as a specific example, to closely compare the gradient-based reward in Eqn. 38 with the objective-based reward. We re-ran Twin-L2O, replacing only Eqn. 4 with the gradient-based reward, and observed that: a) the gradient-based reward solves the Seesaw problem worse than the objective-based one; b) the minimization variable x diverges on testing problem instances; c) the maximization variable y converges to a solution of precision magnitude 0.04 (for reference, y converges to a magnitude below 0.01 when using the objective-based loss). We further identified one possible cause after analyzing the gradient behaviors. Note that the gradient-based reward can here be expressed as:
‖∇_x f(x, y)‖² + ‖∇_y f(x, y)‖² = a²b²π²y² cos²(aπx) + b² sin²(aπx).   (40)
Because a, b ∼ U[0.9, 1], the first term often dominates during training due to the π² multiplier, unless y is already sufficiently close to zero. This imbalance can be a cause of instability: for example, the reward can sometimes penalize cos²(aπx) towards zero, which is the opposite direction of the true solution sin²(aπx) = 0. Although this is just one specific problem, it reveals that the gradient-based loss may not always work as well as expected, due to the instability or asymmetry of the min/max gradients.
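A quick numeric illustration of this imbalance, assuming a Seesaw objective of the form f(x, y) = b · y · sin(aπx) (our assumption, chosen because it reproduces the gradient norms in Eqn. 40):

```python
import numpy as np

a, b = 0.95, 0.95                 # a, b ~ U[0.9, 1]
x, y = 0.3, 1.0
term_x = a**2 * b**2 * np.pi**2 * y**2 * np.cos(a * np.pi * x)**2
term_y = b**2 * np.sin(a * np.pi * x)**2
print(term_x, term_y, term_x / term_y)   # term_x dominates by roughly 5x
# Unless y is already near zero, the pi^2 * y^2 factor lets the first term
# dominate, so the reward mostly pushes cos^2(a*pi*x) toward zero: the
# opposite of the true solution sin^2(a*pi*x) = 0.
```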
Besides, we also tried the second gradient-based reward, in Eqn. 39, and found it ineffective, mainly because the denominator (the difference between consecutive variables) can become very small, causing the loss to explode and break training.
Back to the objective-based reward used in this paper: we have not observed oscillation empirically in any of our experiments so far. Our hypothesis is that the recurrent structure of the proposed Twin-L2O
framework (shown in Eqn. 2) plays a role here. Although we use two LSTMs for the min and max updates respectively, the LSTM of one variable actually takes in the information of the other implicitly, because it takes the other's output as input. When we penalize the objective value after one LSTM's update, all previous min and max updates can (in principle) be taken into account due to the unrolled back-propagation; e.g., the min and max updates each draw on not only their own, but also the other's, higher-order past trajectory information. While this is a tentative explanation, we think a more in-depth analysis of when oscillation does or does not happen in L2O would be an interesting direction for future work.
Another implicit intuition that leads us to prioritize the objective-based reward over the gradient-based one is that, in classic minimization, objective change is summable (i.e., it has a finite accumulation), but gradient change is not summable in general (unless with properties such as strong convexity). While summability is itself not a guarantee of good training/testing performance, a lack of summability means the loss may have an overly large dynamic range.
To summarize, our objective-based reward naturally extends the previous L2O convention, works better than the alternatives, and has exhibited no oscillation so far. However, we do not intend to claim that the current reward in Eqn. 4 is the best choice for minimax L2O; it is one of several plausible options. We concur that the gradient-based reward designs in Eqn. 38 and Eqn. 39 pose a complicated yet interesting question, especially for more complicated minimax problems. Again, as this paper is intended only as a first pilot study of the profound challenges and rich possibilities of minimax L2O, we believe everything discussed and proposed here, including the loss function, has large room for improvement.
1. What is the focus of the paper, and what are the author's contributions to the field of minimax problems?
2. What are the strengths of the paper, particularly in terms of its organization and clarity?
3. What are the weaknesses of the paper, regarding the lack of clear motivation, groundbreaking idea, and convincing loss function definition?
4. How does the reviewer assess the significance of the proposed method in solving minimax problems, and what kind of evidence would be needed to support its utility?
5. What are some minor issues with the paper, such as the computability of the proximal point of the safeguarding mechanism?

Review
Summary
The paper introduces the learning to optimize (L2O) framework into the solution of minimax problems. The base model is composed of two decoupled LSTMs with a shared objective, with the two LSTMs being respectively responsible for the update of the min and max variables. On top of this, the authors further investigate two possible improvements. One consists in applying curriculum learning to improve the generalization capability of the solver while the other uses safeguarding to guarantee convergence in convex-concave problems. Numerical experiments are presented to justify the design choices of the base model and demonstrate the potential of minimax L2O.
Pros
The paper is well-organized, easy to follow and provides a clear context for the problem that is studied. This problem is particularly challenging and the authors manage to obtain some preliminary results.
Score justification
I do not think the paper meets the acceptance criteria mainly due to the following reasons (all together):
Lack of clear motivation.
Lack of groundbreaking idea.
The definition of the loss function is not convincing.
The experiments do not provide strong evidence of the utility of the method either.
Although I fully understand that this paper is intended only as a proof-of-concept study demonstrating the usefulness of L2O in minimax problems, I believe the authors should further justify the framework and their algorithmic choices (as was done for the decoupled design).
In more detail
While finding efficient algorithms for solving minimax problems is without doubt of increasing importance in machine learning today, in the paper there seems to be a lack of justification for the use of L2O in minimax problems. In the literature, the L2O methodology has been mostly applied to relatively well-studied problems, such as sparse coding and function minimization. On the contrary, minimax optimization is much less well understood and there is little consensus on which optimization algorithm should be used when solving a particular minimax problem. In a sense, while meta learning approaches search for a universal solution to different problems, in minimax optimization people have not even agreed on the solution of a single problem. Therefore, it would be beneficial if the authors could provide concrete examples for which they believe minimax L2O can really help.
In terms of the originality of the paper, the proposed models are basically combinations of existing ideas. While minimax L2O poses unprecedented challenges as claimed by the authors, this work does not seem to propose any dedicated solutions to address these challenges.
More importantly, I do not find what the authors propose as the loss function of the solver convincing. The definition of this loss function is probably one of the most important things in the framework. Nonetheless, I fail to see why encouraging stepwise progress in the two variables will necessarily lead to a solution of the problem. In my opinion, the objective (4) may lead to an unstable behavior of the generated iterates.
To finish, the experiments do not provide a strong motivation for the use of minimax L2O either.
A minor point: the proximal point of the safeguarding mechanism is not always computable, so even for convex-concave problems Safeguarded Twin-L2O does not really offer a practical algorithm.
ICLR

Title
C3PO: Learning to Achieve Arbitrary Goals via Massively Entropic Pretraining
Abstract
Given a particular embodiment, we propose a novel method (C3PO) that learns policies able to achieve any arbitrary position and pose. Such a policy would allow for easier control, and would be re-useable as a key building block for downstream tasks. The method is two-fold: First, we introduce a novel exploration algorithm that optimizes for uniform coverage, is able to discover a set of achievable states, and investigates its abilities in attaining both high coverage, and hard-to-discover states; Second, we leverage this set of achievable states as training data for a universal goal-achievement policy, a goal-based SAC variant. We demonstrate the trained policy’s performance in achieving a large number of novel states. Finally, we showcase the influence of massive unsupervised training of a goal-achievement policy with state-of-the-art pose-based control of the Hopper, Walker, Halfcheetah, Humanoid and Ant embodiments.
1 INTRODUCTION
Reinforcement learning (RL) has shown great results in optimizing for single reward functions (Mnih et al., 2013; Silver et al., 2017; Vinyals et al., 2019), that is, when a controller has to solve a specific task and/or the task is known beforehand. If the task is not known a priori, or is likely to be re-configured often, then re-training a new policy from scratch can be very expensive and looks like a waste of resources. In the case of multipurpose systems deployed in contexts where they will likely be required to perform a large range of tasks, investing significant resources into training a high-performance general goal-based controller beforehand makes sense. We propose an approach for training a universal goal-achievement policy, a policy able to attain any arbitrary state the system can take. Goal-conditioned RL is such a setting, where a single policy function can be prompted to aim for a particular goal-state (Kaelbling, 1993; Schaul et al., 2015). One important issue with goal-conditioned RL is that the goals useful for training the policy are generally unknown; even in the case of humans, discovering them is a key part of learning general controllers (Schulz, 2012; Smith and Gasser, 2005). Several approaches exist in the literature. Adversarial methods build out a goal-based curriculum (Mendonca et al., 2021; Eysenbach et al., 2018; OpenAI et al., 2021; Florensa et al., 2018) through various ad-hoc 2-player games. Other recent approaches (Kamienny et al., 2021; Campos et al., 2020) explicitly optimize for uniform state coverage with the goal of learning a general goal-conditioned policy, but are still tied to learning a policy function to actually implement the exploration strategy in the environment. Although not explicitly geared towards goal-based learning, many reward-free RL approaches (Laskin et al., 2021) aim at learning policies that provide good state coverage (Bellemare et al., 2016; Ostrovski et al., 2017; Burda et al., 2018; Houthooft et al., 2016; Badia et al., 2020), however primarily with the intent of fine-tuning the learnt exploration policy rather than leveraging its state coverage. Our proposed approach, Entropy-Based Conditioned Continuous Control Policy Optimization (C3PO), is based on the hypothesis that disentangling the exploration phase from the policy learning phase can lead to simpler and more robust algorithms. It is composed of two steps:
• Goal Discovery: generating a set of achievable states, as diverse as possible to maximize coverage, while being as uniform as possible to facilitate interpolation.
• Goal-Conditioned Training: leveraging these states to learn to reach arbitrary goals.
To address the goal discovery step of C3PO, we propose the Chronological Greedy Entropy Maximization (ChronoGEM) algorithm, designed to exhaustively explore reachable states, even in complex, high-dimensional environments. ChronoGEM does not rely on any form of trained policy and thus doesn't require any interaction with the environment to learn to explore. Instead, it uses a highly parallelized random-branching policy to cover the environment, whose branching tree is iteratively re-pruned to maintain uniform leaf coverage. This iterative pruning process leverages learnt density models and inverse sampling to maintain a set of leaf states that is as uniform as possible over the state space. Training the goal-conditioned policy is then performed by leveraging the uniform states generated by ChronoGEM as a dataset of goals that provides well-distributed coverage over achievable states. We perform two types of experiments to illustrate C3PO's benefits over similar methods. First, we evaluate entropy upper bounds on ChronoGEM's generated state distribution compared to reference exploration methods such as RND (Burda et al., 2018) and SMM (Lee et al., 2019), using the density models described in Section 2.1.2. Second, we compare the full C3PO approach to ablated versions that leverage datasets generated by SMM, RND and a random walk. We do this by cross-validating goal-conditioned policies across methods: by training a policy on one method's dataset and evaluating its goal-achievement capabilities on datasets generated by the other methods, we can observe which method gives rise to the most general policy. Through these two empirical studies, we illustrate the superiority of ChronoGEM compared to RND and SMM. Finally, we investigate C3PO's ability to achieve arbitrary poses on five continuous control environments: Hopper, Walker2d, HalfCheetah, Ant and Humanoid. Videos of the resulting behaviours reaching the goal poses are available as gif files in our supplementary material.
2 CONDITIONED CONTINUOUS CONTROL POLICY OPTIMIZATION (C3PO)
The optimal universal goal-achievement policy for a given embodiment should allow an agent to reach any reachable position and pose in the environment as quickly as possible. Learning such a policy necessarily requires a significant amount of exploration and training, to both find and learn to attain a large enough number of states to generalize across goal-states. However, in the context of a simulator, which allows for both parallelization and arbitrary environment resets, covering a massive portion of the state space is doable. In our case, we consider 2^17 parallel trajectories that are re-sampled at every step to maintain a high coverage of the reachable space. Once such large coverage of the state space is achieved, learning a goal-achievement policy can be done with a relatively straightforward learning algorithm that aims at attaining goals from this high-coverage set of states. If the states are well-enough distributed, and if the learning algorithm is sufficiently efficient, we may expect the final policy to achieve universal goal-achievement.
2.1 MASSIVELY ENTROPIC PRE-TRAINING
As described above, the first step is to discover the set of achievable goals. This collection is key to the effectiveness of the resulting policy: we want it as uniform as possible so that no reachable region is neglected. Therefore, without any prior, the ideal set of goals would be uniformly sampled from the manifold of states that are reachable in a given number of steps (T). Since the shape
of that manifold is totally unknown and can be arbitrarily complex, such sampling is impossible. However, it is possible to approximate it if enough states are simultaneously explored at the previous time step (T − 1). Assume we are able to sample N states that approximate the uniform distribution at time T − 1. Then, from each of these states, playing K uniform actions to obtain NK next states would not lead to a uniform sampling over the possible next states. However, with N large enough, it would at least induce a good coverage of the set of reachable next states. Let ρ_T be the distribution induced by these NK next states. Since the set of states achievable in T steps is necessarily bounded (at least in any realistic environment), and given that we are able to closely estimate ρ_T, we can sub-sample with a probability weighted by the inverse density 1/ρ_T in order to approximate a uniform sampling. We prove in Appendix ?? that such sub-sampling approximates a uniform sampling when the number of sampled states is large enough. This suggests a recursive approach to approximately sample uniformly from the states reachable in T steps: start with states sampled from the environment's initial distribution ρ_0, play uniform actions, sub-sample to get a high-coverage set that approximates a uniform distribution, re-explore actions from that new set, and so on, for T iterations. We call this process ChronoGEM (Chronological Greedy Entropy Maximization) since, at a given step, it only focuses on maximizing the entropy by directly approximating a uniform distribution over the next step, without further planning. Algorithm 1 summarizes ChronoGEM.
Algorithm 1 Chronological Greedy Entropy Maximization (ChronoGEM)
1: Sample N states S_0 = {s_0^i}_{i=1}^{N} ∼ ρ_0.
2: for t = 1 to T do
3:   Sample K uniform actions for each state of S_{t−1}.
4:   Obtain KN next states.
5:   Estimate ρ_t using a density model fitted on the distribution of these KN states.
6:   Sample N states with probabilities p(s) ∝ 1/ρ_t(s) to get S_t.
7: end for
8: return S_T
ChronoGEM requires exactly KNT interactions with the environment, which makes it easily controllable in terms of sample complexity. At each time step, the N sampled states are assumed independent, which also simplifies the implementation: we can parallelize N jobs that each consume only KT interactions, significantly shortening the wall-clock time (which also depends on the density model fitting).
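The following minimal sketch mirrors the loop above; `env.initial_states`, `env.step_from` and `density_fit` are hypothetical interfaces of our own (in the paper the density model is an NSF normalizing flow, left abstract here).

```python
import numpy as np

def chronogem(env, density_fit, N, K, T, rng):
    """Minimal ChronoGEM sketch (our simplification of Algorithm 1)."""
    states = env.initial_states(N)                  # S_0 ~ rho_0
    for _ in range(T):
        # Branch: K uniform actions from each of the N current states.
        branched = np.repeat(states, K, axis=0)     # (N*K, state_dim)
        actions = rng.uniform(-1.0, 1.0, size=(N * K, env.action_dim))
        nxt = env.step_from(branched, actions)      # resettable-state step
        # Fit a density model, then resample inversely to its density,
        # which approximates a uniform draw over the reachable states.
        log_rho = density_fit(nxt)                  # log rho_t at each state
        w = np.exp(-log_rho)                        # p(s) proportional to 1/rho_t(s)
        w /= w.sum()
        idx = rng.choice(N * K, size=N, replace=False, p=w)
        states = nxt[idx]
    return states                                   # approximately uniform S_T
```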
2.1.1 RESETTABLE STATES ASSUMPTION
Like other diffusion-based algorithms (see related work, Section 4.4), ChronoGEM needs to explore many actions from a single given state. While many approaches forbid such an assumption to better approximate a real-world situation, we argue that simulated environments exist precisely for this type of simplification, also allowing us to run many jobs in parallel and to safely explore dangerous trajectories. In this paper, we ran every simulation using Brax (Freeman et al., 2021), a physics engine offering a collection of continuous control environments similar to MuJoCo (Todorov et al., 2012). Brax is designed to use acceleration devices and massive parallelization, and allows resetting a trajectory at any given state. In the unfortunate case where this assumption is not available, equivalent alternatives to ChronoGEM can be imagined; with a short horizon and a small branching factor K, the easiest is to start with K^T N states and sub-sample by a factor K at each step, until ending up with N states at time T. In this paper, we stick to the case where the assumption is satisfied.
2.1.2 DENSITY ESTIMATION
Many choices of density estimation in high-dimensional spaces exist, from simple Gaussian estimators to neural network-based methods such as autoregressive models or normalizing flows. The performance of these models may vary with the type of data: some models are more suited to images while others are better for text or lower dimensions. In our case, we wanted to find the most accurate model for the special case of state distributions in continuous control environments.
For this, we implemented 7 candidate models, including Gaussian models (Multivariate, Mixture), autoregressive networks (RNade (Uria et al., 2013), Made (Germain et al., 2015)), and normalizing flows (real-NVP (Dinh et al., 2016), Maf (Papamakarios et al., 2017), NSF (Durkan et al., 2019)). Each of them was trained by maximum likelihood estimation over different continuous control environments, using sets of states obtained from pre-trained agents solving the task. We used various hyperparameter configurations for each model, selected the best ones based on an AUC criterion, and then compared the different models under their best hyperparameters. We found that normalizing flows performed significantly better than the other models, and among normalizing flows NSF worked slightly better than the other flows. This experiment is detailed in Appendix ??.
2.1.3 CONTINUOUS MAZE
We ran ChronoGEM in this maze with N = 2^17 parallel environments and branching factor K = 4. In this setup we know that if T is large enough, the achievable states are exactly all points in the maze, so ChronoGEM can become uniform over the maze given enough time (for instance, T = 1000). However, within that episode length, both RND and SMM failed to explore beyond the first corridor, and a random walk did not even explore the whole first room, as illustrated in Figure 3.
2.2 GOAL-CONDITIONED TRAINING
To build C3PO, we modified Brax's implementation of SAC to take a set of goals as input and train to reach them. The reward is the negative of the maximum Euclidean distance between any body (e.g., an arm, the torso, a leg) and its goal position; as a result, the policy is encouraged to move to the correct location and then match the correct pose. The goal is added to the observation as a position relative to the current position. We say that a goal is reached if the Euclidean distance between the agent's state and the goal is smaller than a tolerance threshold ε; in other terms, an episode E is successful when its closest state to the goal is close enough: success(E | g) ⇔ min_{s∈E} ‖s − g‖_2 < ε. We set the environment to stop an episode as soon as it is successful, or when it exceeds the allowed number of steps. We initially set the tolerance high (1.0) and slowly anneal it down whenever the success rate reaches 90% on the training data. As a result, SAC first learns to move towards the
target and then to match the exact position. We call C3PO the resulting procedure that combines ChronoGEM for the training data collection and goal-conditioned SAC with tolerance annealing, as described in Algorithm 2.
Algorithm 2 C3PO
Require: Initial tolerance ε.
1: Collect training goals G with ChronoGEM.
2: for enough steps do
3:   Draw goals from G.
4:   Run full episode rollouts with the drawn goals, considering an episode successful if the distance to the goal falls below ε.
5:   Add rollouts to the replay buffer.
6:   If the success rate is above 90%, multiply ε by 0.99.
7:   Train networks using SAC losses on transitions drawn from the replay buffer.
8: end for
9: return Trained policy.
3 EXPERIMENTS
This section serves two purposes: 1) to quantify the superiority of C3PO over baselines whose training data was collected using different exploration methods (SMM, RND and a random walk), and 2) to illustrate the accuracy of the resulting goal-achieving policy after reaching asymptotic training performance, on various continuous control tasks, including Humanoid. To collect training data, ChronoGEM was run with N = 2^17 parallel environments and branching factor K = 4 in all following experiments, except for Humanoid, where N = 2^15 and K = 64. We detail the implementations of C3PO and all baselines (SAC+RND, SAC+SMM and SAC+random walk) in Appendix ??. For each baseline, we separately tuned the hyperparameters to obtain the best performance in each environment.
3.1 CONTINUOUS CONTROL TASKS
We used the control tasks from Brax (Freeman et al., 2021) as high-dimensional environments. To compare the entropy and the richness of the obtained set of states with bonus-based exploration baselines, we focused on four classical tasks: Hopper, Walker2d, HalfCheetah and Ant. Since we focus on achieving arbitrary poses and positions of the embodied agents, we modified the environment observations so that they contain the (x, y, z) positions of all body parts. All measures (cross entropy in Section 3.2 and reaching distances in Section 3.3) are based on this type of state. To obtain reasonable trajectories (as opposed to trajectories where HalfCheetah jumps to the sky), we explore the environment within a low-energy regime by putting a multiplier on the maximum action: 0.1 for Hopper, 0.1 for Walker, 0.01 for HalfCheetah and 1.0 for Ant. In the two following subsections, we consider episodes of length T = 128. So that the physical time horizon is similar across tasks, we added an action repeat of 6 for Hopper and Walker. All episode-termination conditions (e.g., because the torso is too low or too high) have been removed, so no prior is imposed.
3.2 ENTROPY UPPER-BOUND
Given a set of points x_1, ..., x_N sampled from a distribution with an unknown density ρ, one can estimate an upper bound on the entropy of ρ given by the cross entropy H(ρ, ρ̂), where ρ̂ is an estimate of ρ:

H(ρ, ρ̂) = −E_{x∼ρ}[log ρ̂(x)] = H(ρ) + KL(ρ ‖ ρ̂) ≥ H(ρ).
Since the estimate ρ̂ is trained by maximum likelihood specifically on the given set of points, it directly minimizes the cross entropy and closely approximates the true entropy: the KL term becomes negligible and only depends on the accuracy of the trained model on the observed set of points, which should not differ much across the exploration methods that generated the points. Consequently, comparing cross entropies amounts to comparing the entropies of the distributions induced by the exploration methods. In this experiment, we used this upper bound to compare the efficiency of ChronoGEM against RND, SMM and a random walk. Figure 4 displays box plots over 10 seeds of the resulting cross entropy measured on the sets of states induced by the different algorithms, on the 4 continuous control tasks. As expected, the random walk has the lowest entropy, while RND and SMM have, on average over the environments, similar performance. ChronoGEM has the highest entropy on all environments, especially on HalfCheetah, where it is the only method that manages to explore while the actions are drastically reduced by the low multiplier (see Section 3.1). To illustrate that ChronoGEM induces a distribution close to uniform, we measured the spatial coverage based on a discrete grid over the x-y plane: if the distribution is uniform over both the possible poses and positions, it should in particular be uniform over positions. Figure 5 shows the resulting log-frequency of x-y grid visitations; while ChronoGEM is not the method that induces the largest scope of exploration, it has the most uniform coverage. We also report in Appendix ?? the x grid visitations in the 2D environments (Hopper, Walker2d and HalfCheetah).
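The estimator itself is a one-liner; the sketch below (ours) uses a Gaussian with known density in place of the paper's NSF model, just to show that the bound is tight when ρ̂ matches ρ:

```python
import numpy as np

def entropy_upper_bound(log_prob_fn, states):
    """Monte-Carlo estimate of H(rho, rho_hat) = -E_{x~rho}[log rho_hat(x)],
    an upper bound on H(rho). log_prob_fn is the log-density of a model
    fit by maximum likelihood on the exploration states."""
    return float(-np.mean(log_prob_fn(states)))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(100000, 1))            # samples from rho
log_prob = lambda s: -0.5 * (s**2 + np.log(2 * np.pi)).sum(axis=1)
print(entropy_upper_bound(log_prob, x))  # ~0.5*log(2*pi*e) = 1.4189...
```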
3.3 COMPARISON OF EXPLORATION METHODS VIA GOAL-CONDITIONED TRAINING.
If an exploration method is good, drawing from the states it explored should be a good approximation of drawing from all achievable states. The state distribution induced by an exploration algorithm can thus be used both as a training set of goals and as an evaluation set of goals. In the next experiment, for each environment, we ran each of the four examined exploration methods (ChronoGEM, Random Walk, SMM and RND) with 3 seeds to build 3 training goal sets and 3 evaluation goal sets per method. Training goal sets have 4096 goals and evaluation goal sets have 128 goals. We plot the success rate as a function of the tolerance, for each environment and evaluation goal set. Figure 6 shows that, when evaluated on ChronoGEM goals, only C3PO (which is trained on ChronoGEM goals) obtains good results, while policies trained on the other methods' goals largely fail on them. This is a good hint that the diversity of ChronoGEM goals is higher than that of the other exploration methods. C3PO also performs well on the other evaluation sets, in particular in the low-distance-threshold regime (see Hopper and Walker). This can be explained by the fact that C3PO learns to reach a high variety of poses, and being able to achieve poses with high fidelity is what matters in the low-threshold regime. However, these achievement rates alone are still hard to interpret: for example, being good at reaching goals generated by the random walk matters less than achieving harder goals, especially those from highly entropic distributions (like ChronoGEM goals on HalfCheetah or SMM goals on Walker). We hence summarized the results by collecting all the areas under the curve (AUC) and weighting
them proportionally to the exponential of the evaluation goals' entropy (Figure 7). Indeed, if a set is very diverse, being able to achieve all of its goals means more, and vice versa: if a set is not diverse, we do not want to give it too much importance, as always achieving the same goal is not very interesting. The exponential of the entropy quantifies the effective number of states in the distribution. We call this metric Entropy Weighted Goal Achievement (EWGA):
EWGA(method) = ( Σ_{s ∈ evaluation sets} exp(entropy(s)) · AUC(method on s) ) / ( Σ_{s ∈ evaluation sets} exp(entropy(s)) )
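Computing the metric is straightforward; a sketch (ours):

```python
import numpy as np

def ewga(auc_per_set, entropy_per_set):
    """Entropy Weighted Goal Achievement: AUCs over the evaluation goal
    sets, weighted by exp(entropy) of each set."""
    w = np.exp(np.asarray(entropy_per_set, dtype=float))
    return float((w * np.asarray(auc_per_set, dtype=float)).sum() / w.sum())

# A diverse set (entropy 3.0) counts far more than a narrow one (0.5):
print(ewga(auc_per_set=[0.9, 0.4], entropy_per_set=[0.5, 3.0]))  # ~0.44
```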
3.4 MASSIVE GOAL-CONDITIONED TRAINING.
Now that we have established that ChronoGEM is the best exploration method for producing training goals in a goal-conditioned setup, we use this method only. We now allow ourselves to train for a massive number of steps and see what is the best policy we can achieve. Thanks to Brax's high parallelization and efficient infrastructure, it is possible to run 30G steps in a couple of days. We also add Humanoid to our set of environments. By default, ChronoGEM would mostly explore positions where the humanoid is on the floor. However, it is simple to modulate the algorithm to only explore uniformly in the space of states where the humanoid is standing: for example, one can just assign zero weight to undesired states during the re-sampling step. That way, we avoided states in which the torso goes below an altitude of 0.8 (the default failure condition). ChronoGEM is thus modified to never draw states where the humanoid is too low, and the goal-conditioned learner receives a large penalty for going too low as well. The visual results of a policy able to achieve 90% success at a tolerance of 0.25 are presented in Figure 8. This shows that when we do have a prior, we can leverage it to steer both the exploration and the policy learning.
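The re-weighting that keeps only standing states is a one-line change to the resampling step; an illustrative sketch (ours), with the 0.8 torso-altitude threshold mentioned above:

```python
import numpy as np

def standing_weights(weights, torso_z, z_min=0.8):
    """Zero out the resampling weight of states whose torso altitude is
    below z_min, so ChronoGEM never selects fallen Humanoid states."""
    w = np.where(torso_z >= z_min, weights, 0.0)
    return w / w.sum()

# Example: three candidate states, the middle one has fallen.
print(standing_weights(np.array([0.3, 0.4, 0.3]),
                       np.array([1.1, 0.5, 0.9])))   # [0.5, 0.0, 0.5]
```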
4 RELATED WORKS
This work is situated between various fields. Although effectively a goal-conditioned policy optimization algorithm, C3PO is enabled by the ChronoGEM exploration algorithm. We will first look at similar goal-conditioned learning setups, and then discuss more in-depth related works in the domain of exploration.
4.1 GOAL-CONDITIONED REINFORCEMENT LEARNING
Goal-conditioned RL (Kaelbling, 1993; Schaul et al., 2015) is the general setup of learning a goal-conditioned policy instead of a specialized one. We are particularly interested in goal-based setups where there is no a-priori reward function. Although well-known works such as HER (Andrychowicz et al., 2017) demonstrate methods for learning goal-conditioned policies with minimal explicit exploration, more recent works (Pitis et al., 2020; OpenAI et al., 2021; Mendonca et al., 2021) demonstrate the importance of having a good curriculum of goals to train from. MEGA (Pitis et al., 2020) extends HER-style relabeling and answers the exploration problem by iteratively sampling goals according to a learnt density model of previous goals. ABC (OpenAI et al., 2021) demonstrates the importance of an adversarial curriculum for learning more complex goal-conditioned tasks, but concentrates on specific tasks instead of arbitrary goal achievement. LEXA (Mendonca et al., 2021) builds on Plan2Explore (Sekar et al., 2020) and demonstrates the importance both of a good exploration mechanism and of significant amounts of (imagined) data for learning an arbitrary goal-achievement policy. DIAYN (Eysenbach et al., 2018) uses a two-part mechanism that encourages the agent to explore novel areas for a given latent goal, while at the same time learning goal embeddings for different areas of the state space. While some of the above methods consider notions of density for exploration (Eysenbach et al., 2018), C3PO uses a more principled exploration mechanism and is particularly interested in high-precision goal achievement from full states.
4.2 BONUS-BASED EXPLORATION
Although generally not concerned with goal-conditioned RL, there is a large family of exploration methods that manifest as reward bonuses, with the intent of training a policy faster or making it more performant. One family of approaches uses (possibly approximate) state-visitation counts to create an associated bonus for rarely visited states (Bellemare et al., 2016; Ostrovski et al., 2017). Prediction-error bonuses use the residual error of predictions of future states as a reward signal approximating novelty; these include methods such as RND (Burda et al., 2018), which leverages the prediction error of random projections of future states, and SPR (Schwarzer et al., 2020) and BYOL-Explore (Guo et al., 2022), which make use of the self-prediction error of the network against a frozen version of itself. Model-based methods often optimize for next-state novelty, either by looking at the KL divergence between sampled and likely states, as in VIME (Houthooft et al., 2016), or by explicitly driving towards states with high model-ensemble disagreement, as in Plan2Explore (Sekar et al., 2020). RIDE (Raileanu and Rocktäschel, 2020) and NGU (Badia et al., 2020) use episodic memory in which the bonus reflects the variety of different states covered in a single trajectory.
4.3 ENTROPY MAXIMISATION
Some exploration algorithms, like ChronoGEM, are constructed to maximize the entropy of the state-visitation distribution. Most of them, however, focus on the distribution induced by the whole history buffer (instead of just the T-th states of episodes, as in ChronoGEM), generally based on the behavior of a trained policy. This is the case for MaxEnt (Hazan et al., 2019), GEM (Guo et al., 2021), SMM (Lee et al., 2019) and CURL (Geist et al., 2021). APT (Liu and Abbeel, 2021b), instead of using a density model to estimate the entropy, uses a non-parametric approach based on the distance to the K nearest neighbors in a latent representation of the state space. APS (Liu and Abbeel, 2021a) combines APT's entropy bonus with an estimation of the cross entropy based on successor features, to maximize the mutual information I(w; s) between a latent skill representation w and states.
4.4 DIFFUSION-BASED EXPLORATION
ChronoGEM is based on a tree-structured diffusion that selects a set of states, randomly explores from these states, re-selects states, and so on. Go-Explore (Ecoffet et al., 2019) shares the same approach: it runs a random policy for some steps, selects a set of 'interesting' states, goes back to these states and starts new random explorations from them. The main differences with ChronoGEM are that we skip the 'go back' part and only perform one step of random actions before the next state selection; moreover, the selection of states in ChronoGEM provably converges to a uniform distribution over achievable goals and does not need any additional prior about state importance. Another close work using a diffusion approach is UPSIDE (Kamienny et al., 2021): it finds a set of nodes along with a set of policies connecting each node to its closest ones, looks for new nodes by random exploration from the existing ones, and removes unnecessary nodes reached by the least discriminable policies. UPSIDE converges to a network of nodes that efficiently covers the state space.
5 CONCLUSION
We designed ChronoGEM, an exploration method that generates high-entropy behaviors, in theory (Theorem ??) and in practice (Figure 4), outperforming baseline algorithms. All the skills discovered by an exploration algorithm can be used to train a goal-conditioned policy, and we showed that training on ChronoGEM goals results in the most potent policies compared to other exploration methods. On Hopper, Walker, HalfCheetah, Ant and Humanoid, visuals and metrics show that the trained policy is able to achieve a large variety of goals, by moving to the correct position and then matching the pose, with high fidelity.

1. What is the main contribution of the paper regarding unsupervised reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its comparison to other methods?
3. Do you have any questions or concerns about the methodology, analysis, or results presented in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any additional works or citations that the reviewer thinks would be relevant to include in the paper?

Summary Of The Paper
This paper proposes a new unsupervised reinforcement learning scheme for training a more taskable agent that can reach a large base of goals in the environment. The motivation for this work is to not only collect data and have the agent learn to explore, but at the same time to train a goal-conditioned policy, so that if we want the agent to reach a particular state in the world, we can specify that at test time. The method performs a type of incremental search outward from the initial state distribution, slowly adding states that the agent can learn how to reach, in an effort to sample from the state distribution such that goals can be reached uniformly across the state space. The paper performs some analysis showing that, qualitatively, the agents appear to display a more uniform visitation strategy across the state space while visiting goals, and the method is compared across a handful of MuJoCo-style robotic simulations.
Strengths And Weaknesses
pros
The paper proposes training a more helpful policy inside the environment than prior methods provide. Most prior curiosity-based methods do help agents discover harder-to-reach states via exploration bonuses, but this method also trains a goal-conditioned policy, making the agent more taskable than prior methods.
They are also able to show, through some illustrative examples, that the proposed method can, or at least appears to, explore the environment in a more uniform manner.
cons
There are some limitations in the comparison and analysis that should be addressed. In particular, some prior methods mentioned in the paper would be important to compare against, to better understand whether the proposed method improves over prior work. This is especially relevant because the method trains a goal-conditioned policy to explore the state distribution, rather than just using a curiosity-based bonus.
This method appears only to be applicable to simulators. As the authors note, it's also extremely data-hungry.
Clarity, Quality, Novelty And Reproducibility
Additional Comments on the paper:
It's not discussed very clearly in the paper, but there are obvious practical challenges in creating a policy that can reach all states with a very uniform distribution: for the trained RL policy, there will be more probability mass on states that are closer to the initial state distribution.
What are the assumptions that go along with the proposed algorithm in Section 2.1? From this somewhat difficult-to-understand analysis, it seems that one additional assumption is that the environment is deterministic and can reach these states given similar paths and actions. Generally, the assumption that these methods will work well in simulation is a problematic hypothesis, and implies that the method may not be helpful on real-world problems with data limitations, stochasticity and partial observability.
It's not clear what the goal of Algorithm 1 is. The first sentence after the algorithm states that it only requires KNT samples, but it doesn't say what these samples are required for.
The stopping condition for individual episodes depends on ε. How was ε chosen for each of the environments? In particular, how was it chosen such that it does not bias the method proposed in the paper compared to other methods?
The appendix links in the paper appear to be broken.
Why are the XYZ positions of every body part added to the state in the simulation, rather than just the position information for the center of the agent? For example, the center of mass or the position of the agent's root link could be used.
The beginning of section 3.2 on entropy upper bounds needs citations to inform the reader where these axioms are derived from.
The results in Figure 4, which compare different methods on the proposed entropy measure over a prior distribution, appear somewhat self-fulfilling: the method proposed in this paper optimizes almost exactly this objective, while prior methods do not. How can we be sure that this is the most critical form of the objective to optimize, and that the other methods aren't performing much better under some other analysis of the entropy over the state space?
Why is this method not compared to APT and APS? Both of those methods perform types of entropy maximization to explore states in the environment, which makes them very likely candidates for solving a similar problem.
In addition, this method should also be compared to the UPSIDE algorithm.
Additional works that should be cited in the paper:
Skew-Fit: State-Covering Self-Supervised Reinforcement Learning, Vitchyr H. Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, Sergey Levine
ICLR

Title
C3PO: Learning to Achieve Arbitrary Goals via Massively Entropic Pretraining
Abstract
Given a particular embodiment, we propose a novel method (C3PO) that learns policies able to achieve any arbitrary position and pose. Such a policy would allow for easier control, and would be re-useable as a key building block for downstream tasks. The method is two-fold: First, we introduce a novel exploration algorithm that optimizes for uniform coverage, is able to discover a set of achievable states, and investigates its abilities in attaining both high coverage, and hard-to-discover states; Second, we leverage this set of achievable states as training data for a universal goal-achievement policy, a goal-based SAC variant. We demonstrate the trained policy’s performance in achieving a large number of novel states. Finally, we showcase the influence of massive unsupervised training of a goal-achievement policy with state-of-the-art pose-based control of the Hopper, Walker, Halfcheetah, Humanoid and Ant embodiments.
1 INTRODUCTION
Reinforcement learning (RL) has shown great results in optimizing for single reward functions (Mnih et al., 2013; Silver et al., 2017; Vinyals et al., 2019), that is, when a controller has to solve a specific task and/or the task is known beforehand. If the task is not known a priori, or is likely to be re-configured often, then re-training a new policy from scratch can be very expensive and looks like a waste of resources. In the case of multipurpose systems deployed in contexts where they will likely be required to perform a large range of tasks, investing significant resources into training a high-performance general goal-based controller beforehand makes sense. We propose an approach for training a universal goal-achievement policy, a policy able to attain any arbitrary state the system can take. Goal-conditioned RL is such a setting, where a single policy function can be prompted to aim for a particular goal-state (Kaelbling, 1993; Schaul et al., 2015). One important issue with goal-conditioned RL is that the goals useful for training the policy are generally unknown; even in the case of humans, discovering them is a key part of learning general controllers (Schulz, 2012; Smith and Gasser, 2005). Several approaches exist in the literature. Adversarial methods build out a goal-based curriculum (Mendonca et al., 2021; Eysenbach et al., 2018; OpenAI et al., 2021; Florensa et al., 2018) through various ad-hoc 2-player games. Other recent approaches (Kamienny et al., 2021; Campos et al., 2020) explicitly optimize for uniform state coverage with the goal of learning a general goal-conditioned policy, but are still tied to learning a policy function to actually implement the exploration strategy in the environment. Although not explicitly geared towards goal-based learning, many reward-free RL approaches (Laskin et al., 2021) aim at learning policies that provide good state coverage (Bellemare et al., 2016; Ostrovski et al., 2017; Burda et al., 2018; Houthooft et al., 2016; Badia et al., 2020), however primarily with the intent of fine-tuning the learnt exploration policy rather than leveraging its state coverage. Our proposed approach, Entropy-Based Conditioned Continuous Control Policy Optimization (C3PO), is based on the hypothesis that disentangling the exploration phase from the policy learning phase can lead to simpler and more robust algorithms. It is composed of two steps:
• Goal Discovery: generating a set of achievable states, as diverse as possible to maximize coverage, while being as uniform as possible to facilitate interpolation.
• Goal-Conditioned Training: leveraging these states to learn to reach arbitrary goals.
To address the goal discovery step in C3PO, we propose the Chronological Greedy Entropy Maximization (ChronoGEM) algorithm, designed to exhaustively explore reachable states, even in complex high-dimensional environments. ChronoGEM does not rely on any form of trained policy and thus does not require any interaction with the environment to learn to explore. Instead, it uses a highly-parallelized random-branching policy to cover the environment, whose branching tree is iteratively re-pruned to maintain uniform leaf coverage. This iterative pruning process leverages learnt density models and inverse sampling to maintain a set of leaf states that are as uniform as possible over the state space. Training the goal-conditioned policy is then performed by leveraging the uniform states generated by ChronoGEM as a dataset of goals that provide well-distributed coverage over achievable states. We perform two types of experiments to illustrate C3PO’s benefits over similar methods: First, we evaluate entropy upper bounds on ChronoGEM’s generated state distribution compared to other reference exploration methods such as RND (Burda et al., 2018) and SMM (Lee et al., 2019), as described in Section 2.1.2. Second, we compare the full C3PO approach to ablated versions that leverage datasets generated by SMM, RND and a random walk. We do this by cross-validating goal-conditioned policies across methods: by training a policy on one method’s dataset and evaluating its goal-achievement capabilities on datasets generated by other methods, we can observe which of the methods gives rise to the most general policy. Through these two empirical studies, we illustrate the superiority of ChronoGEM compared to RND and SMM. Finally, we investigate C3PO’s abilities in achieving arbitrary poses on five continuous control environments: Hopper, Walker2d, HalfCheetah, Ant and Humanoid. Videos of the resulting behaviours reaching the goal poses are available as gif files in our supplementary material.
2 CONDITIONED CONTINUOUS CONTROL POLICY OPTIMIZATION (C3PO)
The optimal universal goal-achievement policy for a given embodiment should allow an agent to achieve any reachable position and pose in the environment as quickly as possible. Learning such a policy necessarily requires a significant amount of exploration and training, both to find and to learn to attain a large enough number of states to generalize across goal-states. However, in the context of a simulator, which allows for both parallelization and arbitrary environment resets, covering a massive amount of the state space is doable. In our case, we consider 2^17 parallel trajectories that are re-sampled at every step to maintain a high coverage of the reachable space. Once such large coverage of the state space is achieved, learning a goal-achievement policy can be done with a relatively straightforward learning algorithm that aims at attaining goals from this high-coverage set of states. If the states are well-enough distributed, and if the learning algorithm is sufficiently efficient, we may expect the final policy to achieve universal goal-achievement.
2.1 MASSIVELY ENTROPIC PRE-TRAINING
As described above, the first step is to discover the set of achievable goals. This collection will be key to the effectiveness of the resulting policy: we want it as uniform as possible such that no reachable region is neglected. Therefore, without any prior, the ideal set of goals should be uniformly sampled from the manifold of states that are reachable in a given number of steps (T). Since the shape
of that manifold is totally unknown and can be arbitrarily complex, such sampling is impossible. However, it is possible to approximate such a sampling if enough states are simultaneously explored at the previous time step (T − 1). Assume we are able to sample N states that approximate the uniform distribution at time T − 1. Then, from each of these states, playing K uniform actions to obtain NK next states would not lead to a uniform sampling over the possible next states. However, with N large enough, it would at least induce a good coverage of the set of reachable next states. Let ρ_T be the distribution induced by these NK next states. Since the set of achievable states in T steps is necessarily bounded (at least in any realistic environment), and given that we are able to closely estimate ρ_T, we can sub-sample with a probability weighted by the inverse of the density, 1/ρ_T, in order to approximate a uniform sampling. We prove in Appendix ?? that such a sub-sampling approximates a uniform sampling when the number of sampled states is large enough. This suggests a recursive approach to approximate a uniform sampling of the reachable states in T steps: starting with states sampled from the environment’s initial distribution ρ_0, playing uniform actions, sub-sampling to get a highly covering set that approximates a uniform distribution, re-exploring actions from that new set, and so on, for T iterations. We call this process ChronoGEM (for Chronological Greedy Entropy Maximization) since, at a given step, it only focuses on maximizing the entropy by directly approximating a uniform distribution over the next step, without further planning. Algorithm 1 summarizes ChronoGEM.
Algorithm 1 Chronological Greedy Entropy Maximization (ChronoGEM)
1: Sample N states S_0 = {s_0^i}_{i=1}^N ∼ ρ_0.
2: for t = 1 to T do
3:   Sample K uniform actions for each state of S_{t−1}.
4:   Obtain KN next states.
5:   Estimate ρ_t using a density model fitted on the distribution of these KN states.
6:   Sample N states with probabilities p(s) ∝ 1/ρ_t(s) to get S_t.
7: end for
8: return S_T
ChronoGEM requires exactly KNT interactions with the environment, which makes it easily controllable in terms of sample complexity. At each time step, the N sampled states are assumed to be independent, which also simplifies the implementation, allowing us to parallelize N jobs that only consume KT interactions each, significantly shortening the time complexity (which also depends on the density model fitting).
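The following is a minimal Python sketch of Algorithm 1, assuming a simulator with resettable states (see Section 2.1.1). The environment accessors (`step_from`, `action_low`, `action_high`) are hypothetical stand-ins for Brax’s API, and a scikit-learn Gaussian mixture stands in for the NSF normalizing flow used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def chronogem(env, n_states, k_actions, horizon, rng):
    """Approximate a uniform sample over states reachable in `horizon` steps."""
    states = np.stack([env.reset() for _ in range(n_states)])  # S_0 ~ rho_0
    for _ in range(horizon):
        # Branch: K uniform actions from each of the N current states.
        candidates = np.stack([
            env.step_from(s, rng.uniform(env.action_low, env.action_high))
            for s in states for _ in range(k_actions)
        ])  # KN next states (step_from relies on the resettable-state assumption)
        # Fit a density model rho_t on the KN candidates.
        density = GaussianMixture(n_components=16).fit(candidates)
        log_rho = density.score_samples(candidates)
        # Re-sample N states with p(s) proportional to 1 / rho_t(s).
        w = np.exp(-(log_rho - log_rho.min()))  # numerically stable 1/rho weights
        idx = rng.choice(len(candidates), size=n_states, replace=False, p=w / w.sum())
        states = candidates[idx]
    return states
```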
2.1.1 RESETTABLE STATES ASSUMPTION
Like other diffusion-based algorithms (see related works, Section 4.4), ChronoGEM needs to explore many actions from a single given state. While many approaches would forbid such an assumption to better approximate a real-world situation, we argue that in our case, simulated environments are used precisely for these types of simplification, also allowing us to run many jobs in parallel and to safely explore dangerous trajectories. In this paper, we ran all simulations using Brax (Freeman et al., 2021), a physics engine offering a collection of continuous control environments similar to MuJoCo (Todorov et al., 2012). Brax is designed to use acceleration devices and massive parallelization, and allows resetting a trajectory at any given state. In the unfortunate case where this assumption is not available, equivalent alternatives to ChronoGEM could be imagined. With a short horizon and a small branching factor K, the easiest one would be to start with K^T · N states and sub-sample by a factor of K at each step, until ending up with N states at time T. In this paper, we stick to the case where this assumption is satisfied.
2.1.2 DENSITY ESTIMATION
Many choices of density estimation in high-dimensional spaces exist, from simple Gaussian estimators to neural-network-based methods such as autoregressive models or normalizing flows. The performance of these models may vary given the type of data: some models are more suited for images while others are better for text or lower dimensions. In our case, we wanted to find the most accurate model for the special case of state distributions in continuous control environments.

For this, we implemented 7 candidate models, including Gaussian models (multivariate, mixture), autoregressive networks (RNade (Uria et al., 2013), Made (Germain et al., 2015)), and normalizing flows (real-NVP (Dinh et al., 2016), Maf (Papamakarios et al., 2017), NSF (Durkan et al., 2019)). Each of them was trained by maximum likelihood estimation over different continuous control environments, using sets of states obtained from pre-trained agents solving the task. We used various hyperparameter configurations for each model and selected the best ones based on AUC criteria; we then compared the different models when trained with their best hyperparameters. We found that normalizing flows performed significantly better than other models, and among normalizing flows, NSF worked slightly better than the other flows. This experiment is detailed in Appendix ??.
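Below is a hedged sketch of this model-selection protocol: each candidate is fit by maximum likelihood on a training split of states and scored by held-out log-likelihood. Only the simple Gaussian candidates are shown (via scikit-learn); the flow models would be trained the same way with a normalizing-flow library, and the data file name is purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def heldout_loglik(model, train, test):
    model.fit(train)           # maximum likelihood estimation
    return model.score(test)   # mean log-likelihood per held-out state

states = np.load("pretrained_agent_states.npy")  # hypothetical dump of visited states
rng = np.random.default_rng(0)
perm = rng.permutation(len(states))
train, test = states[perm[: len(perm) // 2]], states[perm[len(perm) // 2 :]]

candidates = {
    "multivariate_gaussian": GaussianMixture(n_components=1),
    "gaussian_mixture": GaussianMixture(n_components=16),
}
for name, model in candidates.items():
    print(name, heldout_loglik(model, train, test))
```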
2.1.3 CONTINUOUS MAZE
We ran ChronoGEM on a continuous maze with N = 2^17 parallel environments and branching factor K = 4. In this setup, we know that if T is large enough, the achievable states are simply all points in the maze, so ChronoGEM should be uniform over the maze given enough time (for instance, T = 1000). However, within that episode length, both RND and SMM failed at exploring beyond the first corridor, and a random walk did not even explore the whole first room, as illustrated in Figure 3.
2.2 GOAL-CONDITIONED TRAINING
To build C3PO, we modified Brax’s implementation of SAC to take goals as input and train to reach them. The reward is the opposite of the maximum of the Euclidean distances between each body (e.g. an arm, the torso, a leg) and its goal position. As a result, the policy is encouraged to move to the correct location and then match the correct pose. The goal is added to the observation as a relative position to the current position. We say that a goal is reached if the Euclidean distance between the agent’s state and the goal is smaller than a tolerance threshold ε. In other terms, an episode E is successful when its closest state to the goal is close enough: success(E|g) ⇔ min_{s∈E} ||s − g||_2 < ε. We set the environment to stop an episode as soon as it is successful, or when it exceeds the number of allowed steps. We initially set the tolerance ε to be high (1.0) and slowly anneal it down when the success rate reaches 90% on the training data. As a result, SAC first learns to move towards the target and then to match the exact position. We call C3PO the resulting procedure that combines ChronoGEM for the training data collection and goal-conditioned SAC with tolerance annealing, as described in Algorithm 2.
Algorithm 2 C3PO
Require: Initial tolerance ε.
1: Collect training goals G with ChronoGEM.
2: for enough steps do
3:   Draw goals from G.
4:   Run full episode rollouts with the drawn goals, considering an episode successful if the distance to the goal is less than ε.
5:   Add rollouts to the replay buffer.
6:   If the success rate is above 90%, multiply ε by 0.99.
7:   Train networks using SAC losses on transitions drawn from the replay buffer.
8: end for
9: return Trained policy.
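A minimal sketch of the reward, success test, and tolerance annealing used around SAC is given below. Body positions are assumed to be stacked as a (num_bodies, 3) array; the names and shapes are illustrative, not Brax’s actual API.

```python
import numpy as np

def reward(body_positions, goal_positions):
    # Negative worst per-body Euclidean distance: the policy must move to the
    # right location and match the full pose, not just get one body part close.
    return -np.max(np.linalg.norm(body_positions - goal_positions, axis=-1))

def is_success(episode_states, goal, tol):
    # success(E|g) <=> min_{s in E} ||s - g||_2 < tol
    return min(np.linalg.norm(s - goal) for s in episode_states) < tol

def anneal_tolerance(tol, success_rate, threshold=0.9, decay=0.99):
    # Shrink the tolerance once the policy reaches 90% success on training goals.
    return tol * decay if success_rate > threshold else tol
```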
3 EXPERIMENTS
This section has two functions: 1) to quantify the superiority of C3PO compared to baselines in which the training data was collected using different exploration methods (SMM, RND and a random walk), and 2) to illustrate the accuracy of the resulting goal-achieving policy after reaching asymptotic training performance, on various continuous control tasks, including Humanoid. To collect training data, ChronoGEM was run with N = 2^17 parallel environments and branching factor K = 4 in all following experiments, except for Humanoid, in which N = 2^15 and K = 64. We detail the implementations of C3PO and all baselines (SAC+RND, SAC+SMM and SAC+random walk) in Appendix ??. For each baseline, we separately tuned the hyperparameters in order to obtain the best performance in all different environments.
3.1 CONTINUOUS CONTROL TASKS
We used the control tasks from Brax (Freeman et al., 2021) as high-dimensional environments. To compare the entropy and the richness of the obtained set of states with bonus-based exploration baselines, we focused on four classical tasks: Hopper, Walker2d, HalfCheetah and Ant. Since we focus on achieving arbitrary poses and positions of the embodied agents, we modified the environments’ observations so they contain the (x, y, z) positions of all body parts. All measures (cross entropy in Section 3.2 and reaching distances in Section 3.3) are based on that type of state. To get reasonable trajectories (as opposed to trajectories where HalfCheetah jumps to the sky), we explore the environment within a low-energy regime by putting a multiplier on the maximum action. The multiplier is 0.1 for Hopper, 0.1 for Walker2d, 0.01 for HalfCheetah and 1.0 for Ant. In the two following subsections, we consider episodes of length T = 128. So that the physical time horizon is similar on all tasks, we added an action repeat of 6 for Hopper and Walker2d. All episode end conditions (e.g. because the torso is too low or too high) have been removed, so we have no prior.
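A hedged sketch of this environment modification follows; the wrapper below assumes a gym-style `step()` interface, which is an illustrative simplification of Brax’s functional API.

```python
class ScaledRepeatWrapper:
    """Applies the low-energy action multiplier and a fixed action repeat."""

    def __init__(self, env, action_multiplier=0.1, action_repeat=1):
        self.env = env
        self.multiplier = action_multiplier  # e.g. 0.01 for HalfCheetah
        self.repeat = action_repeat          # e.g. 6 for Hopper and Walker2d

    def step(self, action):
        total_reward, done, obs, info = 0.0, False, None, {}
        for _ in range(self.repeat):
            obs, rew, done, info = self.env.step(self.multiplier * action)
            total_reward += rew
            if done:
                break
        return obs, total_reward, done, info
```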
3.2 ENTROPY UPPER-BOUND
Given a set of points x_1, ..., x_N sampled from a distribution with an unknown density ρ, one can estimate an upper bound on the entropy of ρ given by the cross entropy H(ρ, ρ̂), where ρ̂ is an estimation of ρ:

H(ρ, ρ̂) = −E_{x∼ρ}[log ρ̂(x)] = H(ρ) + KL(ρ || ρ̂) ≥ H(ρ).
The estimation ρ̂ being trained by maximum likelihood specifically on the set of points, it directly minimises the cross entropy and closely approximates the true entropy. The KL term becomes negligible and only depends on the accuracy of the trained model on the observed set of points, which supposedly does not differ across the different exploration methods that generated the points. Consequently, comparing the cross entropies is similar to comparing the entropies of the distributions induced by the exploration methods. In this experiment, we used this upper bound to compare the efficiency of ChronoGEM against RND, SMM and a random walk. Figure 4 displays box plots over 10 seeds of the resulting cross entropy measured on the sets of states induced by the different algorithms, on the 4 continuous control tasks. As expected, the random walk has the lowest entropy, and RND and SMM have, on average over the environments, similar performance. ChronoGEM has the highest entropy on all environments, especially on HalfCheetah, where it was the only method to manage exploration while the actions were drastically reduced by the low multiplier (see Section 3.1). In order to illustrate the fact that ChronoGEM induces a distribution that is close to uniform, we measured the spatial coverage based on a discrete grid of the x-y plane: if the distribution is uniform over both the possible poses and positions, it should in particular be uniform over the positions. Figure 5 shows the resulting log-frequency of x-y grid visitations: while ChronoGEM is not the method that induces the largest scope of exploration, it has the most uniform coverage. We also report in Appendix ?? the x grid visitation in the 2D environments (Hopper, Walker2d and HalfCheetah).
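A sketch of this estimator is shown below: the upper bound is simply the negative mean log-likelihood of a density model fit by maximum likelihood on the explored states. As before, a scikit-learn Gaussian mixture is an illustrative stand-in for the NSF flow.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cross_entropy_upper_bound(states, n_components=16, seed=0):
    """Return H(rho, rho_hat) = -mean log rho_hat(x), an upper bound on H(rho)."""
    model = GaussianMixture(n_components=n_components, random_state=seed)
    model.fit(states)              # maximum likelihood on the explored states
    return -model.score(states)   # negative mean log-likelihood
```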
3.3 COMPARISON OF EXPLORATION METHODS VIA GOAL-CONDITIONED TRAINING.
If an exploration method is good, drawing from the states it explored should be a good approximation of drawing from all achievable states. The state distribution induced by an exploration algorithm can thus be used both as a training set of goals and as an evaluation set of goals. In the next experiment, for each environment, we ran each of the four examined exploration methods (ChronoGEM, Random Walk, SMM and RND) with 3 seeds, to build 3 training goal sets and 3 evaluation goal sets per method. Training goal sets have 4096 goals and evaluation goal sets have 128 goals. We plot the success rate with regard to the tolerance, for each environment and evaluation goal set. Figure 6 shows that, when evaluated on ChronoGEM goals, only C3PO, which is trained on ChronoGEM goals, gets good results. This is a good hint that the diversity of ChronoGEM goals is higher than that of the other exploration methods. C3PO performs well on the other evaluation sets as well, in particular in the low distance-threshold regime (see Hopper and Walker2d). This can be explained by the fact that C3PO learns to reach a high variety of poses, and being able to achieve poses with high fidelity is what matters in the low distance-threshold regime. However, these achievement rates alone are still hardly interpretable: for example, being good at reaching goals generated by the random walk is less important than achieving harder goals, especially those from highly entropic distributions (like ChronoGEM goals on HalfCheetah or SMM goals on Walker2d). We hence summarized the results by collecting all the areas under the curve (AUC) and weighting
them proportionally to the exponential of the evaluation goals’ entropy in Figure 7. Indeed, if a set is very diverse, being able to achieve all of its goals means more; and vice-versa, if a set is not diverse we do not want to give too much importance to it, as always achieving the same goal is not so interesting. The exponential of the entropy quantifies the number of states in the distribution. We call this metric Entropy Weighted Goal Achievement (EWGA):
EWGA(method) = ( Σ_{s ∈ evaluation sets} exp(entropy(s)) · AUC(method on s) ) / ( Σ_{s ∈ evaluation sets} exp(entropy(s)) )
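A direct transcription of this metric, assuming per-set entropy estimates and per-set AUCs have already been computed:

```python
import numpy as np

def ewga(entropies, aucs):
    """Entropy Weighted Goal Achievement for one method.

    entropies: dict mapping evaluation-set name -> estimated entropy of the set
    aucs:      dict mapping evaluation-set name -> AUC of the method on that set
    """
    names = list(entropies)
    weights = np.exp(np.array([entropies[n] for n in names]))
    scores = np.array([aucs[n] for n in names])
    return float(np.sum(weights * scores) / np.sum(weights))
```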
3.4 MASSIVE GOAL-CONDITIONED TRAINING.
Now that we have established that ChronoGEM is the best exploration method for the purpose of producing training goals for a goal-conditioned setup, we only use this method. We now allow ourselves to train for a massive number of steps and see what is the best policy we can achieve. Thanks to Brax’s high parallelization and efficient infrastructure, it is possible to run 30G steps in a couple of days. We also add Humanoid to our set of environments. By default, ChronoGEM would mostly explore positions where the humanoid is on the floor. However, it was simple to modulate the algorithm to only explore uniformly in the space of states where the humanoid is standing: for example, one can simply assign zero weight to undesired states during the re-sampling step. That way, we avoided states in which the torso goes below an altitude of 0.8 (the default failure condition). ChronoGEM is thus modified to not draw states where the humanoid is too low, and the goal-conditioned learner also gets a high penalty for going too low. The visual results of a policy able to achieve 90% success at a tolerance of 0.25 are represented in Figure 8. This shows that when we do have a prior, we can leverage it to steer both the exploration and the policy learning.
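A minimal sketch of this prior-steered re-sampling step, assuming a hypothetical `torso_heights` array extracted from the candidate states:

```python
import numpy as np

def filtered_sampling_weights(log_rho, torso_heights, min_height=0.8):
    # Inverse-density weights, as in ChronoGEM's re-sampling step...
    w = np.exp(-(log_rho - log_rho.min()))
    # ...but states with a fallen torso get zero weight and are never kept.
    w[torso_heights < min_height] = 0.0
    return w / w.sum()
```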
4 RELATED WORKS
This work sits at the intersection of several fields. Although effectively a goal-conditioned policy optimization algorithm, C3PO is enabled by the ChronoGEM exploration algorithm. We will first look at similar goal-conditioned learning setups, and then discuss related work in the domain of exploration in more depth.
4.1 GOAL-CONDITIONED REINFORCEMENT LEARNING
Goal-conditioned RL (Kaelbling, 1993; Schaul et al., 2015) is the general setup of learning a goal-conditioned policy instead of a specialized policy. We are particularly interested in goal-based setups where there is no a-priori reward function. Although well-known works such as HER (Andrychowicz et al., 2017) demonstrate methods for learning goal-conditioned policies with minimal explicit exploration, more recent works (Pitis et al., 2020; OpenAI et al., 2021; Mendonca et al., 2021) demonstrate the importance of having a good curriculum of goals to train from. MEGA (Pitis et al., 2020) extends HER-style relabeling and answers the exploration problem by iteratively sampling goals according to a learnt density model of previous goals. ABC (OpenAI et al., 2021) demonstrates the importance of an adversarial curriculum for learning more complex goal-conditioned tasks, but is concentrated on specific tasks instead of arbitrary goal achievement. LEXA (Mendonca et al., 2021) builds on Plan2Explore (Sekar et al., 2020), and demonstrates the importance both of a good exploration mechanism and of the use of significant amounts of (imagined) data for learning an arbitrary goal-achievement policy. DIAYN (Eysenbach et al., 2018) uses a two-part mechanism that encourages the agent to explore novel areas for a given latent goal, while at the same time learning goal embeddings for different areas of the state space. While some of the above methods consider notions of density for exploration (Eysenbach et al., 2018), C3PO uses a more principled exploration mechanism, and is particularly interested in high-precision goal-achievement from full states.
4.2 BONUS-BASED EXPLORATION
Although generally not concerned with goal-conditioned RL, there is a large family of exploration methods that manifest as reward bonuses, with the intent of training a policy faster or making it more performant. One family of approaches uses state-visitation counts, which can be approximated to create an associated bonus for rarely-visited states (Bellemare et al., 2016; Ostrovski et al., 2017). Prediction-error bonuses use the residual error of predictions of future states as a reward signal to approximate novel states; these include methods such as RND (Burda et al., 2018), which leverages the prediction error of random projections of future states, and SPR (Schwarzer et al., 2020) and BYOL-Explore (Guo et al., 2022), which make use of the self-prediction error of the network with a frozen version of itself. Model-based methods often optimise for next-state novelty, either by looking at the KL-divergence between sampled states and likely states, as in VIME (Houthooft et al., 2016), or by explicitly driving towards states with high model-ensemble disagreement, as in Plan2Explore (Sekar et al., 2020). RIDE (Raileanu and Rocktäschel, 2020) and NGU (Badia et al., 2020) use episodic memory in which the bonus reflects the variety of different states covered in a single trajectory.
4.3 ENTROPY MAXIMISATION
Some exploration algorithms, such as ChronoGEM, are constructed in order to maximize the entropy of the state visitation distribution. Most of them, however, focus on the distribution induced by the whole history buffer (instead of just the T-th states of episodes, as in ChronoGEM), generally based on the behavior of a trained policy. This is the case for MaxEnt (Hazan et al., 2019), GEM (Guo et al., 2021), SMM (Lee et al., 2019) and CURL (Geist et al., 2021). In APT (Liu and Abbeel, 2021b), instead of using a density model to estimate the entropy, they use a non-parametric approach based on the distance to the K nearest neighbors in a latent representation of the state space. APS (Liu and Abbeel, 2021a) combines APT’s entropy bonus with an estimation of the cross entropy based on successor features to maximize the mutual information I(w; s) between a latent skill representation w and states.
4.4 DIFFUSION-BASED EXPLORATION
ChronoGEM is based on a tree-structured diffusion that makes a selection of states, randomly explores from these states, then re-selects states, and so on. Go-Explore (Ecoffet et al., 2019) shares the same approach, running a random policy for some steps, then selecting a set of ‘interesting’ states, then going back to these states and starting random explorations from them again. The main difference with ChronoGEM is that we skip the ‘go back’ part and only perform one step of random actions before the new state selection. Also, the selection of states in ChronoGEM provably converges to a uniform distribution over achievable goals, and does not need any additional prior about state importance. Another close work also using a diffusion approach is UPSIDE (Kamienny et al., 2021). It finds a set of nodes along with a set of policies that connect any node to the closest ones, looks for new nodes by random exploration from the existing ones, and removes unnecessary nodes that are reached by the least discriminable policies. UPSIDE converges to a network of nodes that efficiently covers the state space.
5 CONCLUSION
We designed ChronoGEM, an exploration method that generates high-entropy behaviors, in theory (Theorem ??) and in practice (Figure 4), outperforming baseline algorithms. All the skills discovered by an exploration algorithm can be used to train a goal-conditioned policy. We showed that training on ChronoGEM goals results in the most potent policies compared to other exploration methods. On Hopper, Walker2d, HalfCheetah, Ant and Humanoid, visuals and metrics show that the policy we trained is able to achieve a large variety of goals, by moving to the correct position and then matching the pose, with high fidelity.

1. What is the main contribution of the paper regarding goal-conditioned policy learning?
2. What are the strengths and weaknesses of the proposed method, particularly in its exploration strategy and theoretical analysis?
3. Do you have any concerns or suggestions regarding the comparison with other methods, ablation studies, or evaluation metrics?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any typos or minor issues that could be improved in the paper?
Summary Of The Paper
This paper presents a method called Entropy-Based Conditioned Continuous Control Policy Optimization (C3PO) that tackles the problem of learning a general goal-conditioned policy in two stages. In the first stage, using an exploration algorithm called ChronoGEM, an exploration dataset consisting of diverse goals is collected. In the second stage, the goals collected in the first phase are used as goal targets in goal-conditioned RL with soft actor-critic (SAC) as the base algorithm. The paper demonstrates the method’s effectiveness on Gym environments: walker, hopper, halfcheetah, and ant. It demonstrates that C3PO is not only able to reach a wider distribution of states compared to RND and SMM, training goal-conditioned policies on these states improves the ability of these policies to solve a range of goals.
Strengths And Weaknesses
Strengths:
The ChronoGEM algorithm appears empirically to generate more uniform state distributions compared to random walks, SMM, and RND exploration strategies on the maze and ant environments. It is also conceptually simple to understand and implement. The experimental results training goal-conditioned RL on the goals collected by ChronoGEM outperforms policies trained on the other goal distributions, which is a positive signal that the distribution of goals for ChronoGEM is relatively wide.
Weaknesses:
The theoretical argument in section 2.1 does not seem convincing to me. The statement seems to be that the state visitation will eventually converge to be uniform over the entire state space, but the theorem in Appendix A only demonstrates that the states selected for the next step of the greedy procedure can be approximately uniform from KN sampled states. As the authors mention, playing uniform actions would not necessarily lead to a uniform sampling over the possible next states. So it seems like the “inductive” step here is missing. So I don’t agree with the statement in the conclusion that the method generates “high entropy behaviors, in theory (Theorem 1)”.
I think it would help the paper to have a comparison to a method like Skew-Fit[1] that has a similar exploration strategy to the one proposed in the paper by rebalancing the distribution of states to set as goals.
There isn’t an ablation study conducted on the impact of the tolerance annealing component from SAC to see how much of the improved performance comes from that compared to the goals from ChronoGEM.
The EWGA metric is an interesting way to evaluate the performance across evaluation sets that may have varying difficulties, but why not evaluate using another strategy like uniformly sampling goals across the [x, y, z] space? That seems like it would be less noisy, and that observation space has already been constructed.
[1] Skew-Fit: State-Covering Self-Supervised Reinforcement Learning (Pong et. al, 2019)
Clarity, Quality, Novelty And Reproducibility
The paper is generally clearly written, and experiments are conducted thoroughly with multiple random seeds. The method is described with sufficient implementation details that it seems likely to be reproducible.
Questions:
Given that the entropy needed to be upper bounded in section 3.2, how is the entropy used for the computation of the EWGA metric?
Is the entire observation of each environment replaced by the xyz positions, or are they appended to the default state spaces?
Other comments/nitpicks/typos:
HalfCheetah has inconsistent capitalization throughout the manuscript
I think the first sentence in Section 3.4 is overstated: while it does seem that ChronoGEM performs the best for the studied environments, the statement is rather general and this hasn’t been shown for all goal-conditioned setups. |
ICLR | Title
C3PO: Learning to Achieve Arbitrary Goals via Massively Entropic Pretraining
Abstract
Given a particular embodiment, we propose a novel method (C3PO) that learns policies able to achieve any arbitrary position and pose. Such a policy would allow for easier control, and would be re-useable as a key building block for downstream tasks. The method is two-fold: First, we introduce a novel exploration algorithm that optimizes for uniform coverage, is able to discover a set of achievable states, and investigates its abilities in attaining both high coverage, and hard-to-discover states; Second, we leverage this set of achievable states as training data for a universal goal-achievement policy, a goal-based SAC variant. We demonstrate the trained policy’s performance in achieving a large number of novel states. Finally, we showcase the influence of massive unsupervised training of a goal-achievement policy with state-of-the-art pose-based control of the Hopper, Walker, Halfcheetah, Humanoid and Ant embodiments.
N/A
Given a particular embodiment, we propose a novel method (C3PO) that learns policies able to achieve any arbitrary position and pose. Such a policy would allow for easier control, and would be re-useable as a key building block for downstream tasks. The method is two-fold: First, we introduce a novel exploration algorithm that optimizes for uniform coverage, is able to discover a set of achievable states, and investigates its abilities in attaining both high coverage, and hard-to-discover states; Second, we leverage this set of achievable states as training data for a universal goal-achievement policy, a goal-based SAC variant. We demonstrate the trained policy’s performance in achieving a large number of novel states. Finally, we showcase the influence of massive unsupervised training of a goal-achievement policy with state-of-the-art pose-based control of the Hopper, Walker, Halfcheetah, Humanoid and Ant embodiments.
1 INTRODUCTION
Reinforcement learning (RL) has shown great results in optimizing for single reward functions (Mnih et al., 2013; Silver et al., 2017; Vinyals et al., 2019), that is when a controller has to solve a specific task and/or the task is known beforehand. If the task is not known a priori, or is likely to be often re-configured, then re-training a new policy from scratch can be very expensive and looks as a waste of resources. In the case of multipurpose systems deployed in contexts where they will likely be required to perform a large range of tasks, investing significant resources into training a high-performance general goal-based controller beforehand makes sense. We propose an approach allowing for training a universal goal achievement policy, a policy able to attain any arbitrary state the system can take. Goal-conditioned RL is such a setting where a single policy function can be prompted to aim for a particular goal-state (Kaelbling, 1993; Schaul et al., 2015). One important issue with goal-conditioned RL is that goals that are useful for training the policy are generally unknown, and even in the case of humans this is a key part of learning general controllers (Schulz, 2012; Smith and Gasser, 2005). Several approaches exist in the literature. Adversarial methods build out a goal-based curriculum (Mendonca et al., 2021; Eysenbach et al., 2018; OpenAI et al., 2021; Florensa et al., 2018) through various ad-hoc 2-player games. Other recent approaches (Kamienny et al., 2021; Campos et al., 2020) explicitly optimize for uniform state coverage with the goal of learning a general goal-conditioned policy, but are still tied to learning a policy function to actually implement the exploration strategy in the environment. Although not explicitly geared towards goal-based learning, many reward-free RL (Laskin et al., 2021) approaches are geared towards learning policies that provide good state coverage (Bellemare et al., 2016; Ostrovski et al., 2017; Burda et al., 2018; Houthooft et al., 2016; Badia et al., 2020), however primarily with the intent of fine-tuning the learnt exploration policy rather than leveraging its state coverage. Our proposed approach, Entropy-Based Conditioned Continuous Control Policy Optimization (C3PO), is based on the hypothesis that disentangling the exploration phase from the policy learning phase can lead to simpler and more robust algorithms. It is composed of two steps:
• Goal Discovery: generating a set of achievable states, as diverse as possible to maximize coverage, while being as uniform as possible to facilitate interpolation.
• Goal-Conditioned Training: leveraging these states to learn to reach arbitrary goals.
To address the goal discovery step in C3PO, we propose the Chronological Greedy Entropy Maximization (ChronoGEM) algorithm, designed to exhaustively explore reachable states, even in complex high dimensional environments. ChronoGEM does not rely on any form of trained policy and thus doesn’t require any interaction with the environment to learn to explore. Instead it uses a highlyparallelized random-branching policy to cover the environment, whose branching tree is iteratively re-pruned to maintain uniform leaf coverage. This iterative pruning process leverages learnt density models and inverse sampling to maintain a set of leaf states that are as uniform as possible over the state space. Training the goal-conditioned policy is then performed by leveraging the uniform states generated by ChronoGEM as a dataset of goals that provide well-distributed coverage over achievable states. We perform two types of experiments to illustrate C3PO’s benefits over similar methods: First, we evaluate entropy upper bounds on ChronoGEM’s generated state distribution compared to other reference exploration methods such as RND (Burda et al., 2018) and SMM (Lee et al., 2019) as described in Section 2.1.2. Second, we compare the full C3PO approach to ablated versions that leverage datasets generated by SMM and RND and a random walk. We do this by performing cross-validating of goal-conditioned policies across methods. By training a policy on one method’s dataset and evaluating its goal-achievement capabilities on datasets generated by other methods, we can observe which of the methods gives rise to the most general policy. Through these two empirical studies, we illustrate the superiority of ChronoGEM compared to RND and SMM. Finally, we investigate C3PO’s abilities in achieving arbitrary poses on five continuous control environments: Hopper, Walker2d, HalfCheetah, Ant and Humanoid. Videos of the resulting behaviours reaching the goal poses are available as gif files in our supplementary material.
2 CONDITIONED CONTINUOUS CONTROL POLICY OPTIMIZATION (C3PO)
The optimal universal goal-achievement policy for a given embodiment should allow an agent to achieve any reachable position and pose in the environment as quickly as possible. Learning such a policy necessarily requires a significant amount of exploration and training to both find and learn to attain a large enough number of states to generalize across goal-states. However, in the context of a simulator, which allows for both parallelization and arbitrary environment resets, covering a massive amount of the state space is doable. In our case, we consider 217 parallel trajectories, that are re-sampled every step to maintain an high coverage of the reachable space. Once such large coverage of the state space is achieved, learning a goal-achievement policy can be done with a relatively straight-forward learning algorithm that aims at attaining goals from this high-coverage set of states. If the states are well-enough distributed, and if the learning algorithm is sufficiently efficient, we may expect the final policy to achieve universal goal-achievement.
2.1 MASSIVELY ENTROPIC PRE-TRAINING
As described above, the first step is to discover the set of achievable goals. This collection will be the key of the effectiveness of the resulting policy: We want it as uniform as possible such that no reachable region is neglected. Therefore, without any prior, the ideal set of goals should be uniformly sampled from the manifold of states that are reachable in a given number of steps (T ). Since the shape
of that manifold is totally unknown and can be arbitrarily complex, such sampling is impossible. However, it is possible to approximate such a sampling if enough states are simultaneously explored at the previous time step (T − 1). Assume we are able to sample N states that approximate the uniform distribution at time T − 1. Then, from each of these states, playing K uniform actions to obtain NK next states would not lead to a uniform sampling over the possible next states. However, with N large enough, it would at least induce a good coverage of the set of reachable next states. Let ρT be the distribution induced by these NK next states. Since the set of achievable states in T steps is necessarily bounded (at least in any realistic environment), and given that we are able to closely estimate ρT , we can sub-sample with a probability weighted by the inverse of the density 1ρT in order to approximate a uniform sampling. We prove in Appendix ?? that such a sub-sampling approximates a uniform sampling when the number of sampled states is large enough. This suggests a recursive approach to approximate a uniform sampling of the reachable states in T steps, by starting with states sampled from the environment’s initial distribution ρ0, playing uniform actions, sub-sampling to get an highly covering set that approximates a uniform distribution, re-exploring actions from that new set, and then again, for T iterations. We call this process ChronoGEM (for Chronological Greedy Entropy Maximization) since at a given step, it only focus on maximizing the entropy by directly approximating a uniform distribution over the next step, without further planning. Algo 1 summarizes ChronoGEM.
Algorithm 1 Chronological Greedy Entropy Maximization (ChronoGEM)
1: Sample N states S0 = {si0}Ni=1 ∼ ρ0. 2: for t = 1 to T do 3: Sample K uniform actions for each state of St−1. 4: Obtain KN next states. 5: Estimate ρt using a density model fitted on the distribution of these KN states. 6: Sample N states with probabilities p(s) ∝ 1ρt(s) to get St. 7: end for 8: return ST
ChronoGEM requires exactly KNT interactions with the environment, which makes easily controllable in term of sample complexity. At each time step, the N sampled states being supposed to be independent also simplifies the implementation, allowing to parallelize N jobs that only consume KT interactions, significantly shortening the time complexity (that also depend on the density model fitting).
2.1.1 RESETTABLE STATES ASSUMPTION
Like other diffusion-based algorithms (see related works 4.4), ChronoGEM needs to explore many actions from a single given state. While many approaches would forbid such assumptions to better approximate a real-world situation, we argue that in our case, simulated environments are used for these precise types of simplification, also allowing to run many jobs in parallel and to safely explore dangerous trajectories. In this paper, we ran every simulations using Brax (Freeman et al., 2021), a physics engine offering a collection of continuous control environments similar to MuJoCo (Todorov et al., 2012). Brax is designed to use acceleration devices and massive parallelization, and allows resetting a trajectory at any given state. In the unfortunate case where this assumption would not be available, different equivalent alternative of ChronoGEM could be imagined. With short horizon and small branching factor K, the easiest one being to start with KTN states, and sub-sampling with a factor K at each step, until it ends up with N states at time T . In this paper, we stick to the case where this assumption is satisfied.
2.1.2 DENSITY ESTIMATION
Many choices of density estimation in high dimensional space exist, from simple Gaussian estimators to neural networks-based methods such as autoregressive models or normalizing flows. The performance of these models may vary given the type of data: some models are more suited for images while others are better for text or lower dimensions. In our case, we wanted to find the most accurate model for the special case of state distribution in continuous control environments.
For this, we implemented 7 candidate models, including Gaussian models (Multivariate, Mixture), autoregressive networks (RNade (Uria et al., 2013), Made (Germain et al., 2015)), and normalizing flows (real-NVP (Dinh et al., 2016), Maf (Papamakarios et al., 2017), NSF (Durkan et al., 2019)). Each of them was trained by maximum likelihood estimation over different continuous control environments, using sets of states obtained from pre-trained agents solving the task. We used various hyper parameters configuration for each model, and selected the best ones based on AUC criteria, then we compared the different models when trained with their best hyper parameters. We found that normalizing flows performed significantly better than other models, and among normalizing flows NSF worked slightly better than other flows. This experiment is detailed in Appendix ??.
2.1.3 CONTINUOUS MAZE
(a) (b)
ChronoGEM with N = 217 paralleled environments and branching factor K = 4. In this setup we know that if T is large enough, all achievable states are just every point in the maze, so ChronoGEM could be uniform on the maze given that we let it run for enough time (for instance, T = 1000). However, in that given episode length, both RND and SMM failed at exploring beyond the first corridor, and a random walk did not even explore the whole first room, as illustrated in Figure 3.
2.2 GOAL-CONDITIONED TRAINING
To build C3PO, we modified Brax’ implementation of SAC to take a set of goal as input and train to reach them. The reward is the opposite of the maximum of the euclidean distance between a body (e.g. an arm, the torso, a leg) and its goal position. As a result the policy is encouraged to move to the correct location and then match the correct pose. The goal is added to the observation as a relative position to the current position. we say that a goal is reached if the Euclidean distance between the agent’s state and the goal is smaller that a tolerance threshold . In other terms, an episode E is successful when its closest state to the goal is close enough: success(E|g)⇔ mins∈E ||s− g||2 < . We set the environment to stop an episode as soon as it is successful, or when it exceeds the number of allowed steps. We initially set the tolerance to be high (1.) and slowly anneal it down when the success rate reaches 90% on the training data. As a result SAC learns first to move towards the
target and then to match the exact position. We call C3PO the resulting procedure that combines ChronoGEM for the training data collection and goal-conditioned SAC with tolerance annealing, as described in Algorithm 2.
Algorithm 2 C3PO Require: Initial tolerance .
1: Collect training goals G with ChronoGEM. 2: for enough steps do 3: Draw goals from G. 4: Run full episode rollout with drawn goals, considering an episode is successful if the distance to the goal is less than . 5: Add rollouts to the replay buffer. 6: If the success rate is above 90%, multiply by .99 7: Train networks using SAC losses on transitions drawn from the replay buffer. 8: end for 9: return Trained policy.
3 EXPERIMENTS
This section has two functions: 1) to quantify the superiority of C3PO compared to baselines in which the training data was collected using different exploration methods (SMM, RND and a random walk) and 2) to illustrate the accuracy of the resulting goal-achieving policy after reaching asymptotical training performances, on various continuous control tasks, including Humanoid. To collect training data, ChronoGEM was run with N = 217 paralleled environments and branching factor K = 4 in all following experiments, except for Humanoid in which N = 215 and K = 64. We detail C3PO and all baselines (SAC+RND, SAC+SMM and SAC+random walk) implementations in Appendix ??. For each baseline, we separately tuned the hyper parameters in order to obtain the best performance in all different environments.
3.1 CONTINUOUS CONTROL TASKS
We used the control tasks from Brax (Freeman et al., 2021) as high dimensional environments. To compare the entropy and the richness of the obtained set of states with bonus-based exploration baselines, we focused on four classical tasks: Hopper, Walker2d, Halfcheetah and Ant. Since we focus on achieving arbitrary poses and positions of the embodied agents, we modified the environments observations so they contain the (x, y, z) positions of all body parts. All measures (cross entropy in section 3.2 and reaching distances in section 3.3) are based on that type of state. To get reasonable trajectories (as opposed to trajectories where HalfCheetah jumps to the sky), we explore the environment within the low energy regime by putting a multiplier on the maximum action. The multiplier is .1 for Hopper, .1 for Walker, .01 for HalfCheetah and 1. for Ant. In the two following subsections, we considered episodes of length T = 128. So the physical time horizon is similar on all tasks, we added an action repeat of 6 for Hopper and Walker. All episode end conditions (because the torso is too low or too high for example) have been removed, so we have no prior.
3.2 ENTROPY UPPER-BOUND
Given a set of points x1 . . . xN sampled from a distribution with an unknown density ρ, one can estimate an upper bound of the entropy of ρ given by the cross entropy H(ρ, ρ̂) where ρ̂ is an estimation of ρ:
H(ρ, ρ̂) = −Ex∼ρ[log ρ̂(x)] = H(ρ) + KL(ρ||ρ̂) ≥ H(ρ).
The estimation ρ̂ being trained by maximum likelihood specifically on the set of points, it directly minimises the cross entropy and closely approximate the true entropy. The KL term becomes negligible and only depends on the accuracy of the trained model on the observed set of points, which supposedly does not differ given the different exploration method that generated the points. Consequently, comparing the cross entropy is similar to comparing the entropy of the distribution
induced by the exploration. In this experiment, we used this upper-bound to compare the efficiency of ChronoGEM compared to RND, SMM and a random walks. Figure 4 displays box plots over 10 seeds of the resulting cross entropy measured on the sets of states induced by different algorithms, on the 4 continuous control tasks. As expected, the random walk has the lowest entropy, and RND and SMM have, in average over the environments, similar performances. ChronoGEM has the highest entropy on all environments, especially on HalfCheetah, where it was the only method to manage exploration while the actions were drastically reduced by the low multiplier (see 3.1). In order to illustrate the fact that ChronoGEM induces a distribution that is close to the uniform, we measured the spatial coverage based on a discrete gird of the x-y plan: if the distribution is uniform over both the possible poses and positions, it should be in particular uniform over the positions. Figure 5 shows the resulting log-frequency on the x-y grid visitations and if ChronoGEM is not the method that induces the largest scope of exploration, it however has the most uniform coverage. We also report in appendix ?? the x grid visitation in 2D environments (Hopper, Walker2d and Halfcheetah).
3.3 COMPARISON OF EXPLORATION METHODS VIA GOAL-CONDITIONED TRAINING.
If an exploration method is good, drawing from the states it explored should be a good approximation of drawing from all achievable states. The state distribution induced by an exploration algorithm can be used both as a training set of goal, but also as an evaluation set of goals. In the next experiment, for each environment, we ran every of the four examined exploration methods (ChronoGEM, Random Walk, SMM and RND) with 3 seeds to build 3 training goal sets per method and 3 evaluation goal sets per method. Training goal sets have 4096 goals and evaluation goal sets have 128 goals. We plot the success rate with regard to the tolerance, for each environment and evaluation goal set. Figure 6 shows that evaluated on ChronoGEM goals, only C3PO – which is trained on ChronoGEM – gets good results while evaluated on goals from other methods. This is a good hint that the diversity of ChronoGEM goals is higher than other exploration methods. C3PO performs well on other evaluation sets as well, in particular in the low distance threshold regime (see Hopper and Walker). This can be explained by the fact that C3PO learns to reach a high variety of poses, since being able to achieve poses with high fidelity is what matters for low distance threshold regime. However, these achievement rates alone are still hardly interpretable: for example, being good at reaching goals generated by the random walk is less important than achieving harder goals, especially those from the highly entropic distributions (like ChronoGEM goals on Halfcheetah or SMM goals on Walker). We hence summarized the results by collecting all the areas under the curve (AUC), and weighting
them proportionally to the exponential of the evaluation goals entropy in Figure 7. Indeed, if a set is very diverse, it means more to be able to achieve all of its goals, and vice-versa: if a set is not diverse we don’t want to give too much importance to it, as achieving always the same goal is not so interesting. The exponential of the entropy quantifies the number of states in the distribution. We call this metric Entropy Weighted Goal Achievement (EWGA):
EWGA(method) = ∑ s∈evaluation sets exp(entropy(s)) ∗AUC(method on s)∑
s∈evaluation sets exp(entropy(s))
Success rate when evaluated on
3.4 MASSIVE GOAL-CONDITIONED TRAINING.
Now that we established that ChronoGEM is the best exploration method for the purpose of producing training goals for a goal-conditioned setup, we will only use this method. We know allow ourselves to train for massive amount of steps, and see what is the best policy we can achieve. Thanks to Brax’s high parallelization and efficient infrastructure, it is possible to run 30G steps in a couple days. We also add Humanoid to our set environments. By default, ChronoGEM would mostly explore positions where the humanoid is on the floor. However, it was simple to modulate the algorithm to only explore uniformly in the space of state where the humanoid is standing. For example, on can just associate
zero weight to undesired states during the re-sampling step. That way, we avoided states in which the torso go under the altitude of .8 (the default failure condition). ChronoGEM is modified to not draw states where the humanoid is too low. The goal-conditioned learner gets a high penalty for going too low too. The visual results of a policy able to achieve 90% success at .25 tolerance are represented in Figure 8. This shows that when we do have a prior, we can leverage it to steer the exploration and policy learning.
4 RELATED WORKS
This work is situated between various fields. Although effectively a goal-conditioned policy optimization algorithm, C3PO is enabled by the ChronoGEM exploration algorithm. We will first look at similar goal-conditioned learning setups, and then discuss more in-depth related works in the domain of exploration.
4.1 GOAL-CONDITIONED REINFORCEMENT LEARNING
Goal-conditioned RL (Kaelbling, 1993; Schaul et al., 2015) is the general setup of learning a goalconditioned policy instead of a specialized policy. We are particularly interested in goal-based setups where there is no a-priori reward function. Although well known works such as HER (Andrychowicz et al., 2017) demonstrate methods for learning goal-conditioned policies with minimal explicit exploration, more recent works (Pitis et al., 2020; OpenAI et al., 2021; Mendonca et al., 2021) demonstrate the importance of having a good curriculum of goals to train from. MEGA (Pitis et al., 2020) extends HER-style relabeling and answers the exploration problem by iteratively sampling goals according to a learnt density model of previous goals. ABC (OpenAI et al., 2021) demonstrates the importance of an adversarial curriculum for learning more complex goal-conditioned tasks, but is concentrated on specific tasks instead of arbitrary goal achievemnt. LEXA (Mendonca et al., 2021) builds on Plan2Explore (Sekar et al., 2020), and demonstrates the importance both of a good exploration mechanism, as well as the use of significant amounts of (imagined) data for learning an arbitrary goal-achievement policy. DIAYN (Eysenbach et al., 2018) uses a two-part mechanism that encourages the agent to explore novel areas for a given latent goal, while at the same time learning a
goal embeddings for different areas of the state space. While some of the above methods consider notions of density for exploration (Eysenbach et al., 2018), C3PO uses a more principled exploration mechanism, and is particularly interested in high-precision goal-achievement from full states.
4.2 BONUS-BASED EXPLORATION
Although generally not concerned with goal-conditioned RL, there is a large family of exploration methods that are manifest as reward bonuses, with the intent of training a policy faster, or to be more performant. One family of approaches uses state-visitation counts that can be approximate to create an associated bonus for rarely-visited states (Bellemare et al., 2016; Ostrovski et al., 2017). Prediction-error bonuses use the residual error on predictions of future states as a reward signal to approximate novel states, these includes methods such as RND (Burda et al., 2018) which leverages the prediction error of random projections of future states, SPR (Schwarzer et al., 2020) and BYOLExplore (Guo et al., 2022), which make use of the self-prediction error of the network with a frozen version of itself. Model-based methods often optimise for next-state novelty, either by looking at the KL-Divergence between sampled states and likely states, such as in VIME (Houthooft et al., 2016) or by explicitly driving towards states with high model ensemble disagreement such as in Plan2Explore (Sekar et al., 2020). RIDE (Raileanu and Rocktäschel, 2020) and NGU (Badia et al., 2020) use episodic memory in which the bonus reflects the variety of different states covered in a single trajectory.
4.3 ENTROPY MAXIMISATION
Some exploration algorithms, such as ChronoGEM, are constructed in order to maximize the entropy of the state visitation distribution. Most of them however, focus on the distribution induced by the whole history buffer (instead of the just T -th states of episodes in ChronoGEM), generally based on the behavior of a trained policy. This is the case of MaxEnt (Hazan et al., 2019), GEM (Guo et al., 2021), SMM (Lee et al., 2019) and CURL (Geist et al., 2021). In APT (Liu and Abbeel, 2021b), instead of using a density model to estimate the entropy, they use a non-parametric approach based on the distance with the K nearest neighbors in a latent representation of the state space. APS (Liu and Abbeel, 2021a) combines APT’s entropy bonus with an estimation of the cross-entropy based on successor features to maximize the mutual information I(w; s) between a latent skill representations w and states.
4.4 DIFFUSION-BASED EXPLORATION
ChronoGEM is based on a tree-structured diffusion, that makes a selection of states, randomly explore from these states and then reselect states, etc. Go-Explore (Ecoffet et al., 2019), share the same approach, by running a random policy for some steps, then selecting a set of ‘interesting’ states, then going back in these states and start again random explorations from them. The main difference with ChronoGEM is that we skip the ’go back’ part and we only perform one step of random actions before the new state selection. Also, the selection of states in ChronoGEM is provably converging to a uniform distribution over achievable goals, and does not need any additive prior about the state importance. Another close work also using a diffusion approach is UPSIDE (Kamienny et al., 2021). It finds a set of nodes along with a set of policies that connect any node to the closest ones, looks for new nodes by random exploration from the existing ones, and remove non necessary nodes that are reached by the less discriminable policies. UPSIDE converges to a network of nodes that efficiently covers the state space.
5 CONCLUSION
We designed ChronoGEM, an exploration method that generates high-entropy behaviors in theory (Theorem ??) and in practice (Figure 4), outperforming baseline algorithms. All the skills discovered by an exploration algorithm can be used to train a goal-conditioned policy. We showed that training on ChronoGEM goals results in the most potent policies compared to other exploration methods. On Hopper, Walker, HalfCheetah, Ant and Humanoid, visuals and metrics show that the policy we trained is able to achieve a large variety of goals - by moving to the correct position and then matching the pose - with high fidelity. | 1. What is the focus and contribution of the paper regarding exploration methods for simulated environments?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its sample efficiency and applicability to various tasks?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns regarding the framing and claims made in the paper, especially regarding its real-world applications?
5. What are some minor details that the reviewer would like to bring to the author's attention? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present a method that performs massive uniform exploration of simulated environments and then trains goal-conditioned SAC with an annealed success condition to achieve states discovered during exploration. They demonstrate this exploration procedure to be superior to other methods on locomotion environments.
Strengths And Weaknesses
Strengths
Clarity:
The figures are, for the most part, very nice and clear.
Experiments/Results:
ChronoGEM does work well as an exploration procedure, and GIFs in the supplementary demonstrate that their goal-conditioned policy can reach many arbitrary poses.
C3PO is able to do well on goal distributions generated by other methods, verifying the fact that ChronoGEM is a good exploration algorithm.
Weaknesses
Clarity:
Quite a few grammatical hiccups throughout. Please fix (some that I found are listed under the minor details heading)
Framing:
I somewhat doubt the utility of this work. It requires a specific setting (a cheap simulator that can be sampled en masse in parallel, and reset to arbitrary states), and has extremely poor sample efficiency due to this assumption. Essentially, ChronoGEM tries to visit all of the states in the simulator. In anything other than locomotion tasks (e.g., robotic manipulation), these states will mostly be useless, and having so many states to try to achieve can hurt learning.
What is the point of being able to achieve arbitrary positions and poses? The authors state in the abstract that it would allow for easier control and be re-usable as a key building block for downstream tasks. But they don’t show this. The experiments simply demonstrate that goal-conditioned SAC can reach the positions and poses given.
Conclusion: “In the real world, no reward function is provided.” This is true, however in the real world we also can’t run ChronoGEM. This goes back to how I think the paper is not framed too well.
Experiments:
Related to the above point about the framing of achieving arbitrary positions and poses, the authors should show experiments where their method allows for better finetuning performance on some downstream tasks (maybe goal-conditioned, maybe not) to validate the claims made in the abstract and intro.
Minor details:
Grammar
Intro last paragraph: “its’ → its”
Sec 2.1: “necessary bounded” → “necessarily bounded”
“up-left room” → “top-left room”
“let it enough time” → “let it run for enough time”
“we says” → “we say”
Questions
In ChronoGEM, how exactly do you sample the next states? Do you just reset the BRAX simulation to those exact states and continue from there?
Formatting:
The formatting is incorrect. The margins are way too small. This is unfair to other papers that fit everything into the page limit without shrinking the margins. Regardless of whether this was intentional or accidental, I am giving a strong reject no matter what, until the authors fix this issue during the rebuttal.
Clarity, Quality, Novelty And Reproducibility
The paper is mostly clear, of OK quality, incrementally novel, and seemingly reproducible provided sufficient computational resources. |
ICLR | Title
SPC-Net: A New Scalable Point Cloud Compression Framework for Both Machine and Human Vision Tasks
Abstract
Recently, point cloud process and analysis have attracted increasing attention in various machine vision tasks. Therefore, some point cloud compression algorithms are developed. However, such compression algorithms are developed for human vision while most of the point cloud data will be used for automated point cloud analysis (e.g., detection of abnormal event and early warning in autonomous driving) and may not be seen by humans. To this end, we design a new scalable point cloud compression framework (SPC-Net) for both machine and human vision tasks, in which a scalable bit-stream will be used to describe the point cloud for both machine vision and human vision tasks. For machine vision tasks, only part of the bit-stream will be transmitted for bit-rate saving, while the full bitstream will be transmitted when used for the human vision task. Additionally, we propose a new octree depth level predictor to automatically predict the optimal depth level in order to control the bit-rate cost for the machine vision tasks. As a result, for simple objects/scenarios, we will use fewer depth levels with less bits for the machine tasks, while for complex objects/scenarios, we prefer deeper depth levels of octree with more bits for machine tasks comprehensive. Experimental results on different datasets (e.g., ModelNet10, ModelNet40, ShapeNet and ScanNet) demonstrate that our proposed scalable point could compression framework SPC-Net achieves better performance on the machine vision tasks (e.g., classification, segmentation and detection) without degrading the performance of the human vision task.
N/A
Recently, point cloud process and analysis have attracted increasing attention in various machine vision tasks. Therefore, some point cloud compression algorithms are developed. However, such compression algorithms are developed for human vision while most of the point cloud data will be used for automated point cloud analysis (e.g., detection of abnormal event and early warning in autonomous driving) and may not be seen by humans. To this end, we design a new scalable point cloud compression framework (SPC-Net) for both machine and human vision tasks, in which a scalable bit-stream will be used to describe the point cloud for both machine vision and human vision tasks. For machine vision tasks, only part of the bit-stream will be transmitted for bit-rate saving, while the full bitstream will be transmitted when used for the human vision task. Additionally, we propose a new octree depth level predictor to automatically predict the optimal depth level in order to control the bit-rate cost for the machine vision tasks. As a result, for simple objects/scenarios, we will use fewer depth levels with less bits for the machine tasks, while for complex objects/scenarios, we prefer deeper depth levels of octree with more bits for machine tasks comprehensive. Experimental results on different datasets (e.g., ModelNet10, ModelNet40, ShapeNet and ScanNet) demonstrate that our proposed scalable point could compression framework SPC-Net achieves better performance on the machine vision tasks (e.g., classification, segmentation and detection) without degrading the performance of the human vision task.
1 INTRODUCTION
With the development of advanced 3D technologies, it has become easier to collect point clouds by using various types of 3D scanners including LiDARs and RGB-D cameras. Therefore, a huge amount of point cloud data has been collected, and various point cloud related machine vision tasks like classification, segmentation and detection have attracted increasing attention. However, most point cloud analysis tasks take raw point cloud data as the input, which requires large bandwidth/storage for transmitting/storing such massive point cloud data.
Recently, some point cloud compression frameworks Huang et al. (2020); Que et al. (2021) were proposed to save the bandwidth/storage for point cloud transmitting/storing. However, those point cloud compression frameworks are designed for human vision and will thus degrade the performance in the machine vision tasks. Currently, the existing point cloud compression frameworks are not designed for the machine vision tasks. Some recent works Yang et al. (2021); Le et al. (2021); Song et al. (2021); Torfason et al. (2018) have explored image coding for machine tasks by optimizing the network with additional loss functions for the machine vision tasks. However, state-of-the-art point cloud compression algorithms like VoxelContext-Net Que et al. (2021) need to construct the octree and then compress it, in which the octree construction procedure is non-differentiable; thus we cannot directly add a loss function to improve the coding performance for the machine vision tasks. Therefore, it is necessary to design a new point cloud compression framework for both human and machine vision tasks.
In this work, we propose the first point cloud compression framework for both human and machine vision. Our framework follows the scalable coding paradigm, in which the full bit-stream will be used for the human vision task, while only part of the bit-stream will be used for the machine vision tasks. For the human vision task, we take the state-of-the-art method VoxelContext-Net Que et al. (2021) as an example to compress the point cloud, in which the octrees are constructed and then compressed into bit-streams. For the machine vision tasks, we only transmit part of the bit-stream to reconstruct the first few depth levels of the octrees for bit-rate saving. Additionally, we propose the octree depth level predictor to predict the optimal depth level of the octree for different scenarios when coding for machine vision tasks. As a result, for simple objects/scenarios, we will use fewer depth levels with fewer bits for bit-rate saving, while for complex objects/scenarios, we prefer deeper octree depth levels for more accurate prediction. Experimental results demonstrate that our proposed framework SPC-Net achieves promising results on various machine vision tasks without sacrificing the coding performance for the human vision task.
• In this work, we propose a new scalable point cloud compression framework for both machine vision and human vision tasks. To the best of our knowledge, this is the first point cloud compression method for both machine and human vision.
• We propose a new octree depth level predictor to predict the optimal depth of the octree used for the machine vision tasks, where a deeper octree will be used for complex objects/scenarios to achieve more accurate prediction, while a shallower octree will be used for simple objects/scenarios to achieve lower bit-rate cost.
• Comprehensive experimental results demonstrate that our proposed scalable point cloud coding framework achieves promising results without sacrificing the coding performance of the human vision task.
2 RELATED WORK
2.1 POINT CLOUD COMPRESSION FOR HUMAN VISION
In the past few years, hand-crafted and learning-based point cloud compression methods Group (2021); Wang et al. (2021b); Biswas et al. (2020); Huang et al. (2020); Zhu et al. (2020); Que et al. (2021) have been proposed by transforming the point cloud data into tree representations for better compression.
Specifically, a few hand-crafted point cloud compression methods Group (2021); Schwarz et al. (2018); Google (2022) have been proposed. For example, the standard point cloud compression method G-PCC (geometry based point cloud compression) Group (2021), proposed by the MPEG group, transforms point cloud data into an octree structure before performing static point cloud compression.
In recent years, some learning-based point cloud compression methods Huang & Liu (2019); Zhu et al. (2020); Huang et al. (2020); Biswas et al. (2020); Que et al. (2021); Wang et al. (2021b;a) have achieved state-of-the-art performance. Huang et al. Huang et al. (2020) and Wang et al. Wang et al. (2021b) followed the learned image compression framework Ballé et al. (2017) to compress the voxelized point clouds. To reduce the bitrate, Biswas et al. Biswas et al. (2020) exploited the spatio-temporal relationships across multiple LiDAR sweeps by using a novel conditional entropy model. Based on Wang et al. (2021b), Wang et al. Wang et al. (2021a) used the losslessly compressed octree and the lossily compressed point features to further improve the coding performance. Que et al. Que et al. (2021) extended the framework by further exploiting the context information among neighbouring nodes and refining the 3D coordinates at the decoder side. Considering that VoxelContext-Net Que et al. (2021) is the state-of-the-art point cloud compression method, we use it as our baseline method for the human vision task.
All the existing methods compress the point cloud data for human perception, which is evaluated by metrics like point-to-point PSNR and point-to-plane PSNR. However, unlike 2D images or videos, most point clouds are not purely collected for human perception. Instead, they are widely used for various real-world machine vision tasks, such as classification, segmentation, and detection, which is unfortunately not considered in their works.
2.2 IMAGE COMPRESSION FOR BOTH MACHINE AND HUMAN VISION TASKS
To the best of our knowledge, there is no existing point cloud compression method for both machine and human vision. In this section, we first discuss the scalable image compression methods for both machine and human vision tasks, and then the other compression methods.
Scalable Methods. Both Choi et al. Choi & Bajić (2022) and Chen et al. Chen et al. (2021) performed scalable image compression by dividing the image bit-streams into different parts and transmitting one or more parts of the bit-streams for machine or human vision tasks. Liu et al. Liu et al. (2021) proposed a scalable image compression method for fine-grained classification at different levels.
Other Methods. Yang et al. Yang et al. (2021) designed the image encoder by using the edge extraction algorithm, and the reconstructed images from the decoder achieve promising performance for both human vision and machine vision tasks. Le et al. Le et al. (2021) directly added the additional machine vision loss to the compression loss functions to improve the reconstructed image quality for the machine vision tasks. Song et al. Song et al. (2021) compressed the source image through a corresponding quality map produced from different machine vision tasks. Torfason et al. Torfason et al. (2018) combined the image compression network with the detection network, and directly extracted the detection related information from bit-stream without using an image decoder.
In summary, the above methods for machine vision are all lossy compression methods. The encoder extracts helpful features from the images, the decoder reconstructs the images based on the encoded features, and the entropy model calculates the number of bits used for the features. Most methods can adjust the various parameters in the encoder and decoder based on the performance in machine vision tasks. Therefore, it is easy for the encoder to learn representative image features for machine vision. However, most learning-based point cloud compression methods use a lossless compression network. Their encoders and decoders cannot be optimized to extract the useful features in the point cloud for machine vision, which hinders the development of point cloud compression methods for the machine vision tasks.
In contrast to these works Liu et al. (2021); Chen et al. (2021); Choi & Bajić (2022); Yang et al. (2021); Le et al. (2021); Song et al. (2021); Torfason et al. (2018), we propose some new modules before and after the compression model to improve the machine vision performance while maintaining the fidelity for human vision by keeping the lossless point cloud compression model unchanged.
3 METHODOLOGY
3.1 THE FRAMEWORK
The overall structure of our scalable point cloud compression framework (SPC-Net) is shown in Figure 1 (b). In this section, we will first introduce our coding strategy, and then introduce each module in our framework.
Scalable Coding Strategy. Point cloud data is commonly used for various machine vision tasks. Therefore, our SPC-Net is always used for machine vision tasks (e.g., abnormal event detection such as detecting collisions between pedestrians and vehicles), and the point cloud information flows along the solid arrows shown in Figure 1 (b). If the human vision task must also be involved (e.g., when the prediction results from machine vision tasks like event detection are abnormal), our framework can provide a high-quality reconstructed point cloud for further human analysis. It should be mentioned that, like scalable coding methods, when reconstructing the point clouds for the human vision task we can reuse the bit-stream generated for the machine vision task, which avoids duplicate bit transmission.
Octree Construction, Encoder, Decoder and Point Cloud Reconstruction. The octree construction module converts the point cloud into an octree. An octree is a tree-like data structure used to describe three-dimensional space. Each node of the octree represents the volume element of a cube, and each non-leaf node has eight child nodes. The volume of the parent node can be obtained by adding together the volume elements represented by its eight child nodes. A black node in Figure 1 (a) means there are points in the corresponding cube, and a white node means an empty cube without any 3D point. Each octree is encoded into a bit-stream by the 3D encoder.
[Figure 1 appears here. Panel (a) illustrates the octree encoding/decoding process across depth levels and the partition of the bit-stream into Bm and Bh; panel (b) gives the framework overview (octree construction, encoder, scalable bit-stream partitioning, decoder, point cloud reconstruction, data processing, and task specific network); panel (c) depicts the octree depth level predictor: random sampling → MLPs (64, 128, 512) → max pooling → FC (128) → ReLU → FC (Nc) → Gumbel Softmax → one-hot vector.]
Figure 1: (a) The encoding and decoding process of the octree. B, Bm and Bh denote the full bit-stream, the bit-stream for the machine vision task, and the bit-stream for the remaining depth levels of the octree, respectively. (b) The overall architecture of our proposed scalable point cloud compression framework SPC-Net, which is designed for both machine vision and human vision. (c) Details of our proposed octree depth level predictor.
The decoder reconstructs the octree from the bit-stream, and the point cloud reconstruction module then restores the point cloud from the octree. In this work, we take VoxelContext-Net Que et al. (2021) as an example and use the same design for all these modules; the details can be found in Appendix A.1.
Scalable Bit-stream Partitioning. Our scalable bit-stream partitioning module splits the full bit-stream into two parts for different tasks. The details are shown in Section 3.2.
Octree Depth Level Predictor. Our octree depth level predictor adaptively chooses the octree depth for the machine vision tasks and guides the splitting of the full bit-stream. The details of this module will be described in Section 3.3.
Data Processing. The role of this module is to process the point cloud data to compensate for the difference between the output of the compression network and the input of the machine task network. The details of this module are shown in Appendix A.1.
Task Specific Network. To adapt to the variety of situations in point cloud based machine vision tasks, this module uses different networks for different machine vision tasks. For the classification and segmentation tasks, PointNet++ Qi et al. (2017) is used. For the detection task, VoteNet Qi et al. (2019) is adopted.
3.2 SCALABLE BIT-STREAM PARTITIONING
Although the reconstructed point cloud often achieves promising performance for the human vision task when using the full bit-stream, it contains plenty of redundant information for the machine vision tasks and is thus less effective in terms of bit-rate cost. Therefore, we design this scalable bit-stream partitioning method to split the bit-stream for both human and machine vision tasks.
Before introducing how to divide the bit-stream, we first introduce how the point cloud bit-stream is generated. Figure 1 (a) shows the encoding and decoding process of the octree. During the encoding process, each octree is encoded from the lower depth levels to the higher depth levels. Therefore, the final full bit-stream can be expressed as B = (b1, b2, ..., bn), where n is the maximum octree depth level and bi represents the bit-stream from the ith depth level. At the decoder side, each octree is reconstructed from the lower depth levels to the higher depth levels. The (i + 1)th depth level of the octree can be reconstructed from the previously reconstructed octree with i depth levels and the extra bits bi+1. For example, with b1 ∪ b2, we can reconstruct the octree with the first two depth levels, and with b1 ∪ b2 ∪ b3 we can reconstruct the octree with the first three depth levels. Based on the above octree encoding and decoding process, we can split the full bit-stream B = (b1, b2, ..., bn) into two parts Bm and Bh according to the octree depth level. Bm = (b1, b2, ..., bi) can be used to reconstruct the octree with the first i depth levels, which will be used for the machine vision tasks. Bh = (bi+1, bi+2, ..., bn) can reconstruct the remaining depth levels of the octree on top of the reconstruction of the first i depth levels, which will be used for the human vision task. The optimal splitting level index i is determined by the octree depth level predictor.
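As a minimal illustration of the partitioning described above, the following sketch splits a depth-ordered bit-stream at the predicted level; the list-of-byte-strings representation is an assumption for clarity, not the actual serialization format of the codec.

```python
def split_bitstream(per_level_bits, machine_depth):
    """Split B = (b1, ..., bn), stored as a list of per-depth-level byte
    strings in encoding order, into Bm = (b1, ..., bi) for the machine
    vision tasks and Bh = (b_{i+1}, ..., bn) for the human vision task."""
    b_m = per_level_bits[:machine_depth]  # rebuilds the first i octree levels
    b_h = per_level_bits[machine_depth:]  # refinement levels for human vision
    return b_m, b_h
```

Because Bh only refines the octree already decoded from Bm, upgrading from the machine vision reconstruction to the human vision one requires transmitting Bh alone.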
3.3 OCTREE DEPTH LEVEL PREDICTOR
The design of our octree depth level predictor is inspired by an observation on well-trained machine vision task networks (i.e., classification, segmentation, detection): we can often achieve reasonable results when using the point cloud reconstructed from a lower-depth octree as the input. Taking the classification results in Figure 2 as an example, some objects with simple shapes like laptops can be easily recognized when using the point cloud reconstructed from the octree with 4 depth levels as the input, while other objects with complex shapes like guitars can only be recognized when using the point cloud reconstructed from the octree with 6 depth levels. Therefore, we can use octrees with lower depth levels to reduce the bit-stream cost and thus save storage space and bandwidth.
To achieve this goal, we propose the octree depth level predictor to decide the optimal depth level of the octree for the machine vision tasks, which can not only achieve reasonable performance for the machine vision tasks but also reduce the bit-rate cost. In addition, the encoder side (e.g., RGB-D cameras or LiDAR sensors) often does not have enough computing power to support complex networks. Therefore, the networks (e.g., PointNet++ and VoteNet) for handling the complex machine vision tasks are placed after the decoder rather than at the encoder side. As shown in Figure 1 (c), our octree depth level predictor consists of a 3-layer MLP and 2 fully connected layers, which is a simple network. To further reduce the computational complexity, we randomly sample 1024 points from the raw point cloud as the input of our octree depth level predictor for the classification and segmentation tasks.
Our octree depth level predictor selects the optimal octree depth level for machine vision tasks from the global feature of the raw point cloud. According to the different characteristics of the machine vision tasks (e.g., the difficulty of classification), our octree depth level predictor generates n probabilities p = {p1, p2, ..., pn} for the n octree depth levels, and then chooses the octree depth level with the highest probability.
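The following PyTorch sketch instantiates the predictor pipeline from Figure 1 (c) (random sampling, point-wise MLPs (64, 128, 512), max pooling, FC (128), ReLU, FC (Nc)); the use of 1×1 convolutions for the shared MLPs and the ReLU placement inside the MLPs are assumptions beyond what the figure specifies.

```python
import torch
import torch.nn as nn

class OctreeDepthLevelPredictor(nn.Module):
    """Sketch of the octree depth level predictor in Figure 1 (c)."""

    def __init__(self, n_levels):
        super().__init__()
        # Shared point-wise MLPs (64, 128, 512), implemented as 1x1 convs.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 512, 1), nn.ReLU(),
        )
        # FC (128) -> ReLU -> FC (Nc), where Nc is the number of depth levels.
        self.head = nn.Sequential(
            nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, n_levels),
        )

    def forward(self, xyz):                    # xyz: (B, 3, 1024) sampled points
        feat = self.point_mlp(xyz)             # (B, 512, 1024) point-wise features
        global_feat = feat.max(dim=2).values   # max pooling -> (B, 512)
        return self.head(global_feat)          # (B, n_levels) per-level scores
```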
However, the process of choosing the octree depth level with the highest probability is non-differentiable, which makes the octree depth level predictor untrainable. Therefore, we adopt the Gumbel Softmax strategy Jang et al. (2017) to address this issue. First, we generate the confidence score set p̂ from the probability set p with Gumbel noise as follows:
p̂i = pi + Gi, i ∈ {0, 1, ..., n} (1)
where Gi = −log(−log ϵ) is the standard Gumbel noise, and ϵ is randomly sampled from a uniform distribution between 0 and 1. Therefore, we can generate the one-hot vector ĥ = [ĥ0, ĥ1, ..., ĥn], where ĥi = 1 if i = argmaxj p̂j, j ∈ {0, 1, ..., n}, and ĥi = 0 otherwise. ĥ is the one-hot vector of the depth level selection result. However, the argmax operation used to generate the one-hot vector is non-differentiable. Therefore, during the backward propagation process, we apply the Gumbel Softmax strategy and relax the one-hot vector ĥ to h̃ = [h̃0, h̃1, ..., h̃n] as follows:
h̃i = exp(p̂i/τ) / Σ_{j=0}^{n} exp(p̂j/τ), i ∈ {0, 1, ..., n} (2)
where τ is the temperature parameter. Using the Gumbel Softmax strategy Jang et al. (2017), we can select the optimal octree depth level for machine tasks based on the argmax function during the forward propagation process, and approximate the gradient of the argmax function by using Eq. (2) in the backward propagation process. During the inference stage, we directly select the depth level with the maximum probability in p.
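A minimal PyTorch sketch of this selection step is given below; it follows Eqs. (1)-(2) with a standard straight-through estimator, and the small epsilon used to stabilise the logarithms is an implementation assumption (PyTorch's built-in `torch.nn.functional.gumbel_softmax` offers equivalent functionality).

```python
import torch
import torch.nn.functional as F

def select_depth_level(p, tau=1.0, eps=1e-10):
    """Straight-through Gumbel-Softmax selection over depth-level scores p.

    Forward pass: hard one-hot choice via argmax of p + Gumbel noise (Eq. (1)).
    Backward pass: gradients flow through the soft relaxation h~ (Eq. (2)).
    """
    u = torch.rand_like(p).clamp_(eps, 1.0 - eps)
    gumbel = -torch.log(-torch.log(u))        # standard Gumbel noise G_i
    p_hat = p + gumbel                        # Eq. (1)
    h_soft = F.softmax(p_hat / tau, dim=-1)   # Eq. (2)
    index = h_soft.argmax(dim=-1, keepdim=True)
    h_hard = torch.zeros_like(h_soft).scatter_(-1, index, 1.0)
    # Straight-through trick: h_hard in the forward value, h_soft in the grad.
    return h_hard + h_soft - h_soft.detach()
```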
3.4 TRAINING STRATEGY
Loss Function. In our SPC-Net, we need to train three modules: the octree depth level predictor, the compression module (i.e., the encoder and the decoder) and the task specific network module. As the encoder and the decoder are the same as in VoxelContext-Net Que et al. (2021), we train the compression module with the same settings as VoxelContext-Net. For the task network module, we train the network with the same settings as PointNet++ Qi et al. (2017) or VoteNet Qi et al. (2019). For our octree depth level predictor, we train it after the other two modules are pretrained. When training the octree depth level predictor, we fix the parameters in the compression module and the task network module. The loss function used for training our octree depth level predictor is shown below:
loss = ∑((λ ∗ bpp + L) ∗ h̃) (3)
Our method selects the optimal depth level from n depth levels. bpp = (bpp1, bpp2, ..., bppn), where bpp means bits per point and measures the length of the bit-stream. bppi (i ∈ {0, 1, ..., n}) represents the bpp for reconstructing the octree with the first i depth levels, which we can obtain from the encoder. L = (L1, L2, ..., Ln), and Li is formulated as follows,
Li = D(f(x̂i), ygt), i ∈ {0, 1, ..., n} (4)
where f is the machine vision task network (i.e., PointNet++ or VoteNet), x̂i is the point cloud reconstructed from the octree with i depth levels, ygt is the ground truth for the machine vision task, and D calculates the loss between f(x̂i) and ygt. h̃ = [h̃0, h̃1, ..., h̃n], where h̃i is defined in Eq. (2). λ is a hyper-parameter used to balance the trade-off between bpp and L.
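Putting Eqs. (3)-(4) together, a sketch of the predictor's training loss is shown below; the assumption that the per-level bit-rates and task losses are precomputed as length-n tensors is for illustration only.

```python
import torch

def predictor_loss(bpp_levels, task_losses, h_tilde, lam):
    """Eq. (3): sum of (lambda * bpp_i + L_i) weighted by the relaxed
    selection vector h~ from the Gumbel-Softmax step.

    bpp_levels : (n,) bits per point when decoding the first i levels
    task_losses: (n,) L_i = D(f(x_i), y_gt) per depth level (Eq. (4))
    h_tilde    : (n,) relaxed one-hot selection (Eq. (2))
    """
    return torch.sum((lam * bpp_levels + task_losses) * h_tilde)
```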
4 EXPERIMENT
4.1 DATASET
ModelNet. ModelNet Wu et al. (2015) is a widely used benchmark for evaluating point cloud classification performance, which contains two datasets named ModelNet40 and ModelNet10. The ModelNet40 dataset is divided into 40 categories and has 9843 point clouds for training and 2468 point clouds for testing. The ModelNet10 dataset is a subset of ModelNet40, which has only 10 categories, with 3991 point clouds for training and 908 point clouds for testing.
ShapeNet. ShapeNet Yi et al. (2016) contains 16881 point clouds from 16 object classes. Each point cloud contains 2-5 parts, with a total of 50 part categories. ShapeNet has 14007 point clouds for training and 2874 point clouds for testing.
ScanNet. ScanNet Dai et al. (2017) is a real-world dataset used for the 3D object detection task, which contains 1201 scans for training and 312 scans for testing. Following VoxelContext-Net Que et al. (2021), we sample 50,000 points from each scan.
4.2 EXPERIMENT DETAILS
Baseline. To the best of our knowledge, this is the first point cloud compression framework for both machine vision and human vision tasks. Therefore, we directly use the encoder and the decoder from VoxelContext-Net Que et al. (2021) as our baseline method, which is the state-of-the-art point cloud compression method designed for the human vision task. We also use the same encoder and decoder for point cloud compression in our proposed framework for a fair comparison with the baseline method.
For the baseline methods for the machine vision tasks, we directly use the reconstructed point clouds from VoxelContext-Net as the input to the networks for the machine vision tasks. PointNet++ Qi et al. (2017) is used for the classification and segmentation tasks. VoteNet Qi et al. (2019) is adopted for the detection task. For the classification task, we use octrees with depth levels of 3, 4 and 5 to compress the input raw point clouds. For the segmentation task, we use octrees with depth levels of 4, 5 and 6 for data compression. For the detection task, we use octrees with depth levels of 7, 8 and 9 for data compression. As suggested in VoxelContext-Net Que et al. (2021), we train the machine vision task networks with the raw point clouds and evaluate the classification/segmentation/detection results based on the reconstructed point clouds.
Evaluation Metric. We use bits per point (bpp) to denote the bit cost in the compression procedure. For the machine vision tasks, accuracy, mean intersection-over-union (mIoU) and mean average precision (mAP) are used to measure the performance of the classification, segmentation and detection tasks, respectively. For the human vision task, point-to-point PSNR Tian et al. (2017) and Chamfer distance (CD) Fan et al. (2017); Huang & Liu (2019) are used to measure the distortion between the reconstructed point cloud and the raw point cloud, which are widely used metrics for measuring compression performance.
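For reference, one common formulation of the Chamfer distance between two point sets is sketched below; the cited papers may differ in whether distances are squared or how the two directions are normalised, so this is an illustrative variant rather than the exact metric used in the experiments.

```python
import torch

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    mean squared nearest-neighbour distance in both directions."""
    d = torch.cdist(p, q)                          # (N, M) pairwise distances
    return (d.min(dim=1).values ** 2).mean() + \
           (d.min(dim=0).values ** 2).mean()
```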
Implementation Details. We train our model in two stages. In the first stage, we only train the encoder, the decoder and the task specific networks. We use the same training strategy as VoxelContext-Net Que et al. (2021) to train the encoder and the decoder. For the different task specific networks (i.e., PointNet++ Qi et al. (2017) and VoteNet Qi et al. (2019)), we follow the settings in their papers. In the second stage, based on the loss function in Eq. (3), we train the octree depth level predictor while fixing the parameters of the encoder, the decoder and the task specific networks. For the classification task, the hyper-parameter λ is set from 0.01 to 16. For the segmentation task, λ is set from 0.02 to 8. For the detection task, λ is set to 0.3, 0.6, 1 and 2.
The whole network is implemented in PyTorch with CUDA support. In the second training stage, we set the batch size to 48. We use the Adam optimizer Kingma & Ba (2015) with a learning rate of 1e-4 for the first 50 epochs, 1e-5 for the next 30 epochs, and 1e-6 for the last 20 epochs.
In our experiments, the maximum depth levels of the octrees are set to 8, 8 and 9 for the human vision task on the ModelNet, ShapeNet and ScanNet datasets, respectively, as these predefined depth levels are sufficient for reconstructing visually high-quality point clouds. For machine vision, the maximum depth levels of the octrees are set to 7, 7 and 9 on the ModelNet, ShapeNet and ScanNet datasets, respectively.
4.3 EXPERIMENT RESULTS
Classification Task. The classification results of our SPC-Net on the ModelNet10, ModelNet40 and ShapeNet datasets are shown in Figure 3 (a), (b) and (c). It is observed that our proposed framework SPC-Net achieves a 1% accuracy improvement at 0.05 bpp on the ModelNet10 dataset when compared with our baseline method. On the ModelNet40 dataset, our SPC-Net achieves about 10% accuracy improvement at 0.056 bpp and saves about 0.8 bpp at 91.8% accuracy. On the ShapeNet dataset, our SPC-Net achieves more than 10% accuracy improvement at 0.01 bpp when compared with our baseline method using 4 octree depth levels. The experimental results demonstrate that our new SPC-Net improves the performance when the input point cloud is compressed for the classification task.
Segmentation Task. The segmentation results of our SPC-Net on the ShapeNet dataset are shown in Figure 3 (d). We observe that our proposed framework SPC-Net achieves a 0.8% mIoU improvement at 0.08 bpp when compared with our baseline method. Our method can also save more than 10% bpp at a similar mIoU when compared with our baseline method. Therefore, our method achieves better performance than the baseline method for the segmentation task.
Detection Task. The detection results of our SPC-Net on the ScanNet dataset are shown in Figure 3 (e) and (f). From Figure 3 (e), we observe that our framework SPC-Net achieves about 0.01 mAP@0.25 improvement at 3 bpp when compared with our baseline method. At the highest bpp, our SPC-Net saves more than 20% bpp when compared with our baseline. From Figure 3 (f), we observe that our SPC-Net improves mAP@0.5 by about 0.08 and saves about 0.3 bpp when compared with our baseline method at 3 bpp. At the highest bpp, our SPC-Net saves more than 15% bpp when compared with our baseline method. The experimental results demonstrate that our proposed framework can also improve the performance of the detection task.
Human Vision Results. Our SPC-Net achieves exactly the same compression performance as our baseline method VoxelContext-Net Que et al. (2021) (please refer to Appendix A.2 for more details). It should be mentioned that in most 2D image compression methods for both machine vision and human vision Choi & Bajić (2022); Yang et al. (2021); Torfason et al. (2018), the compression performance for human vision drops in order to achieve better performance for the machine vision tasks. Therefore, it is an advantage of our proposed framework SPC-Net that it improves the performance for the machine vision tasks without sacrificing the compression results for human vision.
4.4 MODEL ANALYSIS
In order to balance the bit-rate cost and the performance of the machine vision tasks in different scenarios, our proposed octree depth level predictor can dynamically adjust the proportion of point clouds reconstructed from octrees at different depth levels. The selection percentages of the different octree depth levels for different tasks at different λ values are shown in Figure 4. We observe that smaller λ values lead to selecting the higher depth levels more often. As the λ value increases, our octree depth level predictor selects lower depth levels of the octrees. The selection percentages in Figure 4 demonstrate that our SPC-Net can dynamically select the optimal depth level of the octree at different λ values for different machine vision tasks.
In Table 1, we evaluate our SPC-Net for the classification task on the ModelNet40 dataset when setting λ = 0.25. It is observed that lower depth levels of the octree are preferred for the simple categories (e.g., chair, laptop, bed), while still achieving more than 95% accuracy. Therefore, our SPC-Net saves bits while still achieving promising classification performance. For the “complex” categories (e.g., person, curtain, guitar), our octree depth level predictor prefers higher depth levels. As it is hard to recognize objects from the complex categories, our SPC-Net needs to spend more bits on higher depth levels to achieve better classification results. The results demonstrate that our proposed octree depth level predictor can select different depth levels for different input point clouds according to their characteristics (e.g., whether they belong to “simple” or “complex” categories).
5 CONCLUSION
In this work, we have proposed a new scalable point cloud compression framework SPC-Net for both machine vision and human vision tasks. In our SPC-Net, we propose a new scalable bit-stream partitioning method based on the point cloud encoder-decoder structure in order to make the compressed point clouds more suitable for both tasks. Additionally, considering the purposes of different tasks and the characteristics of different point clouds, we design a new octree depth level predictor to guide the division of the bit-stream. The experimental results on four benchmark datasets demonstrate that our SPC-Net achieves promising results for three machine vision tasks (i.e., classification, segmentation, detection) without sacrificing the performance of the human vision task.
A APPENDIX
A.1 THE FRAMEWORK
Octree Construction. As shown in Figure 1 (a), the octree is a point cloud storage structure that is beneficial for compression. To construct the octree, we first surround the point cloud with the smallest enclosing cube. Then this cube is split into eight sub-cubes of the same size. For each sub-cube, if there is no point in the cube, the cube is recorded as empty; otherwise, the cube is recorded as nonempty, which means there are some points in this cube. After that, for each nonempty sub-cube, we repeat the above splitting process to reduce the size of the cubes until the depth of the octree reaches the predefined maximum depth. In the constructed octree, a node stands for one cube, and each nonempty non-leaf node has eight child nodes that stand for its sub-cubes.
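The construction procedure above can be summarised by the short recursive sketch below; the dictionary-based node representation and the half-open cube intervals (so each point falls into exactly one child) are implementation assumptions.

```python
import numpy as np

def build_octree(points, center, half_size, depth, max_depth):
    """Recursively build an octree node over `points` (K, 3) inside the cube
    spanning center +/- half_size per axis."""
    if len(points) == 0:
        return None                      # empty cube: recorded as empty
    node = {"center": center, "children": None}
    if depth == max_depth:
        return node                      # smallest nonempty cube (leaf)
    node["children"] = []
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                c = center + half_size * np.array([dx, dy, dz])
                # Half-open interval per axis, so each point lands in
                # exactly one of the eight sub-cubes.
                mask = np.all((points >= c - half_size / 2) &
                              (points < c + half_size / 2), axis=1)
                node["children"].append(build_octree(
                    points[mask], c, half_size / 2, depth + 1, max_depth))
    return node
```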
Encoder. The encoder compresses the octree into the bit-stream. All octrees are encoded into the bit-stream from the lower depth levels to the higher depth levels, as shown in Figure 1 (a). Therefore, we can divide the full bit-stream into two parts according to the selected octree depth.
Decoder and Point Cloud Reconstruction. The decoder restores the octree from the bit-stream, and the point cloud reconstruction module reconstructs the point cloud coordinates from the octree. The reconstructed point cloud coordinates are the coordinates of the center points of the smallest nonempty cubes. The point cloud coordinates can not only be used in machine vision tasks but also be easily visualized for human vision.
Data Processing. In each octree, all points in one smallest cube are merged into one point, so the reconstructed point cloud has fewer points than the raw point cloud. Additionally, the number of removed points differs across point clouds, so the output point clouds from the point cloud reconstruction module have different numbers of points. However, our framework requires a batch size larger than 1 (e.g., 32 or 48), and point clouds with different numbers of points cannot be directly concatenated into one batch. Therefore, we randomly sample each point cloud to a predefined number of points to unify the point cloud sizes. As mentioned, the octree merges several points into one point, yet each point corresponds to one target in the segmentation task. If all points in the smallest cube have the same label, we use this label for the new merged point; if the points in one smallest cube have different labels, we use the label of the point that is closest to the merged point.
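A minimal sketch of the two operations described in this paragraph is shown below; the sampling-with-replacement policy for point clouds smaller than the target size is an assumption.

```python
import numpy as np

def unify_point_count(points, target_n, rng=None):
    """Randomly sample `points` (K, 3) down (or up, with replacement) to
    `target_n` points so reconstructed clouds can be batched together."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(points), size=target_n,
                     replace=len(points) < target_n)
    return points[idx]

def merged_point_label(cube_points, cube_labels, merged_point):
    """Segmentation label for a point merged from one smallest cube: reuse
    the shared label if all points agree, otherwise take the label of the
    point closest to the merged (center) point."""
    if len(set(cube_labels)) == 1:
        return cube_labels[0]
    d = np.linalg.norm(np.asarray(cube_points) - merged_point, axis=1)
    return cube_labels[int(np.argmin(d))]
```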
A.2 HUMAN VISION RESULT
The experimental results of our SPC-Net for human vision are shown in Table 2. In this table, we observe that our SPC-Net achieves the same performance as VoxelContext-Net.
The visualization results of the segmentation task are shown in Figure 5. From the results for the table and the mug in the first two rows of Figure 5, the point clouds reconstructed from the octrees with 5 depth levels achieve similar segmentation performance compared with the point clouds reconstructed from octrees with more depth levels. Therefore, our octree depth level predictor prefers the octree with 5 depth levels for the segmentation task in these two cases to save bits. From the results for the car and the airplane in the last two rows of Figure 5, the point clouds reconstructed from the octrees with 7 depth levels achieve much better segmentation performance compared with those reconstructed from octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octree with 7 depth levels to achieve better segmentation performance in these two cases.
The visualization results of the detection task are shown in Figure 6. In the first row, the point cloud reconstructed from the octree with 7 depth levels achieves the same mAP@0.25 performance compared with the point clouds reconstructed from octrees with higher depth levels. Therefore, our octree depth level predictor selects 7 depth levels in this case to save bits. In the second row, the point cloud reconstructed from the octree with 9 depth levels has much better mAP@0.25 performance than the point clouds reconstructed from octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octree with 9 depth levels for better detection performance in this case.
It is observed that our proposed octree depth level predictor can select the optimal depth levels of the octrees for different cases, which demonstrates its effectiveness.
2. What are the strengths and weaknesses of the proposed method in representing semantic correspondence?
3. Do you have any concerns about the representation used in the paper?
4. What are the limitations of the NeMF approach?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to include an octree depth predictor, which is a PointNet followed by Gumbel softmax, to choose the depth of the octree. This is then used to split the octree index into 2 parts, one for computer vision tasks with a presumably smaller depth (coarser compression) and another for the human vision task with a deeper octree. Experiment results showed improvements over its baseline VoxelContextNet on machine vision tasks.
Strengths And Weaknesses
+: Performance improvement over its baseline.
+: They showed that for different classes different levels of detail would be needed.
-: They claim to be the first "point cloud compression method for both machine and human vision". This is a strange claim, because they did not propose a method that generates one point cloud compression that is good for both human and machine. What they did was just separate the compression into 2 different compressions, with one for machine vision and another one for human vision. If you use the one that is supposed to be for human vision, then you get bad compression rates. If you use the one for machine vision, then you get bad human vision. This is really very much overclaiming.
-: The contribution is minor to just predict the number of octree levels, they did not really touch anything in the compression algorithm so this is quite superficial.
-: Even for the superficial contribution, I'm not sure if it's presented clearly. It sounds like in order to train their octree depth predictor, they would need to run the point cloud network for each point cloud at each reconstruction level and compute the loss there. Is that correct?
-: Another significant issue I had was the train/test split. Is everything trained on the training data? Seems like the L after training the classifier would be very overfitted to the training set and most of the loss differences might be quite small from different reconstruction levels. That would hardly give meaningful signal to the depth predictor. I am not too sure if this algorithm isn't trained on the testing set.
It's also unclear how this approach would work in realistic scenarios such as ScanNet. ScanNet scenes would simultaneously contain different types of objects. According to Table 1, these may require different levels of detail, so the single-scale prediction of this algorithm may not be useful.
Minor: there are many typos in the paper, e.g. Gumbel in Fig. 1, ground true, calcualte (page 6), point cloudd, hype-parameter (page 7), etc. The authors should do a spell check.
Clarity, Quality, Novelty And Reproducibility
Novelty is minor since it only adds a network to predict the octree depth.
Clarity is not so great; it was painful for me to figure out the details of the training. There are also many typos.
Quality is just OK. I think the paper could have put much more thought into the problem, and the main claim is strange and goes overboard.
ICLR | Title
SPC-Net: A New Scalable Point Cloud Compression Framework for Both Machine and Human Vision Tasks
Abstract
Recently, point cloud process and analysis have attracted increasing attention in various machine vision tasks. Therefore, some point cloud compression algorithms are developed. However, such compression algorithms are developed for human vision while most of the point cloud data will be used for automated point cloud analysis (e.g., detection of abnormal event and early warning in autonomous driving) and may not be seen by humans. To this end, we design a new scalable point cloud compression framework (SPC-Net) for both machine and human vision tasks, in which a scalable bit-stream will be used to describe the point cloud for both machine vision and human vision tasks. For machine vision tasks, only part of the bit-stream will be transmitted for bit-rate saving, while the full bitstream will be transmitted when used for the human vision task. Additionally, we propose a new octree depth level predictor to automatically predict the optimal depth level in order to control the bit-rate cost for the machine vision tasks. As a result, for simple objects/scenarios, we will use fewer depth levels with less bits for the machine tasks, while for complex objects/scenarios, we prefer deeper depth levels of octree with more bits for machine tasks comprehensive. Experimental results on different datasets (e.g., ModelNet10, ModelNet40, ShapeNet and ScanNet) demonstrate that our proposed scalable point could compression framework SPC-Net achieves better performance on the machine vision tasks (e.g., classification, segmentation and detection) without degrading the performance of the human vision task.
N/A
Recently, point cloud process and analysis have attracted increasing attention in various machine vision tasks. Therefore, some point cloud compression algorithms are developed. However, such compression algorithms are developed for human vision while most of the point cloud data will be used for automated point cloud analysis (e.g., detection of abnormal event and early warning in autonomous driving) and may not be seen by humans. To this end, we design a new scalable point cloud compression framework (SPC-Net) for both machine and human vision tasks, in which a scalable bit-stream will be used to describe the point cloud for both machine vision and human vision tasks. For machine vision tasks, only part of the bit-stream will be transmitted for bit-rate saving, while the full bitstream will be transmitted when used for the human vision task. Additionally, we propose a new octree depth level predictor to automatically predict the optimal depth level in order to control the bit-rate cost for the machine vision tasks. As a result, for simple objects/scenarios, we will use fewer depth levels with less bits for the machine tasks, while for complex objects/scenarios, we prefer deeper depth levels of octree with more bits for machine tasks comprehensive. Experimental results on different datasets (e.g., ModelNet10, ModelNet40, ShapeNet and ScanNet) demonstrate that our proposed scalable point could compression framework SPC-Net achieves better performance on the machine vision tasks (e.g., classification, segmentation and detection) without degrading the performance of the human vision task.
1 INTRODUCTION
With the development of advanced 3D technologies, it has become easier to collect point clouds by using various types of 3D scanners including LiDARs and RGB-D cameras. Therefore, a huge amount of point cloud data has been collected and various point cloud related machine vision tasks like classification, segmentation and detection have attracted increasing attention. However, most point cloud analysis tasks take raw point cloud data as the input, which requires large bandwidth/storage for transmitting/storing huge massive point cloud data.
Recently, some point cloud compression frameworks Huang et al. (2020); Que et al. (2021) were proposed to save the bandwidth/storage for point cloud transmitting/storing. However, those point cloud compression frameworks are designed for human vision, which will thus degrade the performance in the machine vision tasks. Currently, the existing point cloud compression framework are not designed for the machine vision tasks. Some recent works Yang et al. (2021); Le et al. (2021); Song et al. (2021); Torfason et al. (2018) have explored the image coding for machine task by optimizing the network with additional loss functions for the machine vision tasks. However, the state-of-the-art point cloud compression algorithms like VoxelContext-NetQue et al. (2021) need to construct the octree and then compress it, in which the octree construction procedure is indifferentiable and thus we cannot directly add the loss function to improve the coding performance for the machine vision tasks. Therefore, it is necessary to design a new point cloud compression framework for both human and machine vision tasks.
In this work, we propose the first point cloud compression framework for both human and machine vision, Our framework follows the scalable coding paradigm, in which the full bit-stream will used for the human vision task, while only part of the bit-stream will be used for the machine vision tasks. For the human vision task, we take the start-of-the-art method VoxelContext-Net Que et al. (2021) as an example to compress the point cloud, in which the octrees are constructed and then compressed into bit-streams. For the machine vision tasks, we only transmit part of the bit-stream to reconstruct the first few depth-level of the octrees for bit-rate saving. Additionally, we propose the octree depth level predictor to predict the optimal depth-level of the octree for different scenarios when for coding for machine vision tasks. As a result, for simple objects/scenarios, we will use less depth level with less bits for bit-rate saving, while for complex objects/scenarios, we prefer deeper depth level of octree for more accurate prediction. Experimental results demonstrate that our proposed framework SPC-Net achieves promising results on various machine vision tasks without sacrificing the coding performance for the human vision task.
• In this work, We propose a new scalable point cloud compression framework for both machine vision and human vision tasks. To the best of our knowledge, this is the first point cloud compression method for both machine and human vision.
• We propose a new octree depth level predictor to predict the optimal depth of the octree used for the machine vision tasks, where deeper octree will be used for complex objects/scenarios for achieving more accurate prediction while shallow octree will be used for simple objects/scenarios for achieving less bit-rate cost.
• Comprehensive experimental results demonstrate that our proposed scalable point cloud coding framework achieves promising results without sacrificing the coding performance of the human vision task.
2 RELATED WORK
2.1 POINT CLOUD COMPRESSION FOR HUMAN VISION
In the past few years, hand-crafted and learning-based point cloud compression methods Group (2021); Wang et al. (2021b); Biswas et al. (2020); Huang et al. (2020); Zhu et al. (2020); Que et al. (2021) have been proposed by transforming the point cloud data into tree representations for better compression.
Specifically, a few hand-crafted point cloud compression methods Group (2021); Schwarz et al. (2018); Google (2022) have been proposed. For example, the standard point cloud compression method G-PCC (geometry based point cloud compression) Group (2021) proposed by the MPEG group, which transforms point cloud data into the octree-structure before performing static point cloud compression.
In recent years, some learning-based point cloud compression methods Huang & Liu (2019); Zhu et al. (2020); Huang et al. (2020); Biswas et al. (2020); Que et al. (2021); Wang et al. (2021b;a) have achieved the state-of-the-arts performance. Huang et al. Huang et al. (2020) and Wang et al. Wang et al. (2021b) followed the learned image compression framework Ballé et al. (2017) to compress the voxelized point clouds. To reduce the bitrate, Biswas et al. Biswas et al. (2020) exploited the spatio-temporal relationships across multiple LiDAR sweeps by using a novel conditional entropy model. Based on Wang et al. (2021b), Wang et al. Wang et al. (2021a) used the lossless compressed octree and the lossy compressed point feature to further improve the coding performance. Que et al. Que et al. (2021) extended the framework by further exploiting the context information among neighbouring nodes and refining the 3D coordinate at the decoder side. Considering that VoxelContext-Net Que et al. (2021) is the state-of-the-art point cloud compression method, we use it as our baseline method for the human vision task.
All the existing methods compress the point cloud data for human perception, which is evaluated by the metrics like point-to-point PSNR and point-to-plane PSNR. However, unlike the 2-D images or videos, most point clouds are not purely collected for human perception. Instead, they are widely used for various real-world machine vision tasks, such as classification, segmentation, and detection, which is unfortunately not considered in there works
2.2 IMAGE COMPRESSION FOR BOTH MACHINE AND HUMAN VISION TASKS
To the best of our knowledge, there is no existing point cloud compression method for both machine and human vision. In this section, we first discuss the scalable image compression methods for both machine and human vision tasks, and then the other compression methods.
Scalable Methods. Both Choi et al. Choi & Bajić (2022)and Chen et al. Chen et al. (2021) performed scalable image compression by dividing the image bit-streams into different parts and transmitting one or more parts of the bit-streams for both machine or human vision tasks. Liu et al. Liu et al. (2021) proposed a scalable image compression method for define grained classification at different levels.
Other Methods. Yang et al. Yang et al. (2021) designed the image encoder by using the edge extraction algorithm, and the reconstructed images from the decoder achieve promising performance for both human vision and machine vision tasks. Le et al. Le et al. (2021) directly added the additional machine vision loss to the compression loss functions to improve the reconstructed image quality for the machine vision tasks. Song et al. Song et al. (2021) compressed the source image through a corresponding quality map produced from different machine vision tasks. Torfason et al. Torfason et al. (2018) combined the image compression network with the detection network, and directly extracted the detection related information from bit-stream without using an image decoder.
In summary, the above methods for machine vision methods are all lossy compression methods. The encoder extracts the helpful features from the images, the decoder reconstructs the images based on the encoded features, and the entropy model calculates the number of bits used for the features. Most methods can adjust the various parameters in the encoder and decoder based on the performance in machine vision tasks. Therefore, it is easy for the encoder to learn the representative image features for machine vision. However, most learning-based point cloud compression methods use a lossless compression network. Their encoders and decoders cannot be optimized to extract the useful features in the point cloud for machine vision, which hinders the development of the point cloud compression methods for the machine vision tasks.
In contrast to these works Liu et al. (2021); Chen et al. (2021); Choi & Bajić (2022); Yang et al. (2021); Le et al. (2021); Song et al. (2021); Torfason et al. (2018), we propose some new modules before and after the compression model to improve the machine vision performance while maintaining the fidelity for human vision by keeping the lossless point cloud compression model unchanged.
3 METHODOLOGY
3.1 THE FRAMEWORK
The overall structure of our scalable point cloud compression framework (SPC-Net) is shown in Figure 1 (b). In this section, we first introduce the coding strategy of our method. Then, each module in our framework is introduced.
Scalable Coding Strategy. The point cloud data is commonly used for various machine vision tasks. Therefore, our SPC-Net is always used for the machine vision tasks (e.g., abnormal event detection such as detecting collisions between pedestrians and vehicles), and the point cloud information is transmitted along the solid arrows shown in Figure 1 (b). If the human vision task must also be involved (e.g., when the prediction results from the machine vision tasks like event detection are abnormal), our framework can provide a high quality reconstructed point cloud for further human analysis. It should be mentioned that, like other scalable coding methods, to reconstruct the point clouds for the human vision task we can reuse the bit-stream generated for the machine vision tasks, which avoids duplicate bit transmission.
Octree Construction, Encoder, Decoder and Point Cloud Reconstruction. The octree construction module converts the point cloud into an octree. An octree is a tree-like data structure used to describe three-dimensional space. Each node of the octree represents a volume element of a cube, and each non-leaf node has eight child nodes. The volume of the parent node can be obtained by adding the volume elements represented by the eight child nodes together. A black node in Figure 1 (a) means that there are points in this cube, and a white node means an empty cube without any 3D point. Each octree is encoded into the bit-stream by the 3D encoder. The decoder reconstructs
Figure 1: (a) The encoding and decoding process of the octree. B, Bm and Bh denote the full bit-stream, the bit-stream for the machine vision tasks and the bit-stream for the remaining depth levels of the octree, respectively. (b) The overall architecture of our proposed scalable point cloud compression framework SPC-Net, which is designed for both machine vision and human vision. (c) Details of our proposed octree depth level predictor.
the bit-stream back into an octree. The point cloud reconstruction module then restores the point cloud from the octree. In this work, we take VoxelContext-Net Que et al. (2021) as an example and use the same design for all these modules; the details can be found in Appendix A.1.
Scalable Bit-stream Partitioning. Our scalable bit-stream partitioning module splits the full bit-stream into two parts for the two types of tasks. The details are described in Section 3.2.
Octree Depth Level Predictor. Our octree depth level predictor is used to adaptively choose the octree depth for the machine vision tasks and to guide the splitting of the full bit-stream. The details of this module are described in Section 3.3.
Data Processing. The role of this module is to process the point cloud data to compensate for the difference between the output of the compression network and the input of the machine vision task network. The details of this module are given in Appendix A.1.
Task Specific Network. To adapt to a variety of situations in point cloud based machine vision tasks, this module uses different networks for different machine vision tasks. For the classification and segmentation tasks, PointNet++ Qi et al. (2017) is used. For the detection task, VoteNet Qi et al. (2019) is adopted.
3.2 SCALABLE BIT-STREAM PARTITIONING
Although the reconstructed point cloud often achieves promising performance for the human vision task when the full bit-stream is used, it contains plenty of information that is redundant for the machine vision tasks, and is thus less efficient in terms of the bit-rate cost. Therefore, we design a scalable bit-stream partitioning method to split the bit-stream for both the human and the machine vision tasks.
Before introducing how to divide the bit-stream, we first introduce how the point cloud bit-stream is generated. Figure 1 (a) shows the encoding and decoding process of the octree. During the encoding process, each octree is encoded from the lower depth levels to the higher depth levels. Therefore, the final full bit-stream can be expressed as B = (b1, b2, ..., bn), where n is the maximum octree depth level and bi represents the bit-stream from the ith depth level. At the decoder side, each octree is reconstructed from the lower depth levels to the higher depth levels. The (i + 1)th depth level of the octree can be reconstructed from the previously reconstructed octree with i depth levels and the extra bits bi+1. For example, with b1 ∪ b2 we can reconstruct the octree with the first two depth levels, and with b1 ∪ b2 ∪ b3 we can reconstruct the octree with the first three depth levels. Based on the above octree encoding and decoding process, we can split the full bit-stream B = (b1, b2, ..., bn) into two parts Bm and Bh according to the octree depth level. Bm = (b1, b2, ..., bi) can be used to reconstruct the octree with the first i depth levels, which is used for the machine vision tasks. Bh = (bi+1, bi+2, ..., bn) can reconstruct the remaining depth levels of the octree based on the reconstruction of the first i depth levels, which is used for the human vision task. The optimal splitting level index i is determined by the octree depth level predictor described in Section 3.3.
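To make the partitioning concrete, the following is a minimal sketch. It assumes the encoder returns the per-level bit-streams b1, ..., bn as a Python list of byte strings; the function name is our own and used for illustration only.

```python
def partition_bitstream(bitstreams, i):
    """Split B = (b1, ..., bn) into Bm = (b1, ..., bi) for the machine
    vision tasks and Bh = (b_{i+1}, ..., bn) for the human vision task."""
    b_m = bitstreams[:i]  # enough to reconstruct the first i depth levels
    b_h = bitstreams[i:]  # refines the octree from level i+1 up to level n
    return b_m, b_h
```

For the machine vision tasks only Bm is transmitted; when the human vision task is later required, Bh can be sent on top of the already transmitted Bm.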
3.3 OCTREE DEPTH LEVEL PREDICTOR
The design of our octree depth level predictor is inspired by the observation that well-trained machine vision task networks (i.e., for classification, segmentation and detection) can often achieve reasonable results when taking the point cloud reconstructed from a lower-depth-level octree as the input. Taking the classification results in Figure 2 as an example, some objects with simple shapes, like a laptop, can be easily recognized from the point cloud reconstructed from the octree with 4 depth levels, while other objects with complex shapes, like a guitar, can only be recognized from the point cloud reconstructed from the octree with 6 depth levels. Therefore, we can use octrees with lower depth levels to reduce the bit-stream cost and thus save storage space and bandwidth.
To achieve this goal, we propose the octree depth level predictor to decide the optimal depth level of the octree for the machine vision tasks, which can not only achieve reasonable performance for the machine vision tasks but also reduce the bit-rate cost. In addition, the encoder side (e.g., RGB-D cameras or LiDAR sensors) often does not have enough computing power to support complex networks. Therefore, the networks (e.g., PointNet++ and VoteNet) for handling the complex machine vision tasks are placed after the decoder rather than at the encoder side. As shown in Figure 1 (c), our octree depth level predictor is a simple network consisting of a 3-layer MLP and 2 fully connected layers. To further reduce the computational complexity, we randomly sample 1024 points from the raw point cloud as the input of our octree depth level predictor for the classification and segmentation tasks.
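A minimal PyTorch sketch of this predictor is given below. The layer arrangement follows Figure 1 (c); the exact activation functions, the absence of normalization layers, and the class name are our own assumptions, and the Gumbel-Softmax selection described next is applied on top of the returned probabilities.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctreeDepthLevelPredictor(nn.Module):
    """Point-wise MLPs (64, 128, 512), max pooling to a global feature,
    then FC(128) + ReLU and FC(n), producing one probability per
    candidate octree depth level."""
    def __init__(self, num_levels):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 512, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, num_levels),
        )

    def forward(self, xyz):            # xyz: (batch, 3, 1024) sampled points
        feat = self.point_mlp(xyz)     # (batch, 512, 1024) point features
        feat = feat.max(dim=2).values  # max pooling -> (batch, 512)
        return F.softmax(self.head(feat), dim=-1)  # probabilities p
```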
Our octree depth level predictor selects the optimal octree depth level for the machine vision tasks from the global feature of the raw point cloud. According to the characteristics of the machine vision tasks (e.g., the difficulty of classification), our octree depth level predictor generates n probabilities p = {p1, p2, ..., pn} for the n octree depth levels, and then chooses the octree depth level with the highest probability.
However, choosing the octree depth level with the highest probability is non-differentiable, which makes the octree depth level predictor untrainable. Therefore, we adopt the Gumbel-Softmax strategy Jang et al. (2017) to address this issue. First, we generate the confidence score set p̂ from the probability set p with Gumbel noise as follows:
$$\hat{p}_i = p_i + G_i, \quad i \in \{1, 2, \ldots, n\} \qquad (1)$$

where $G_i = -\log(-\log \epsilon)$ is the standard Gumbel noise, and $\epsilon$ is randomly sampled from a uniform distribution between 0 and 1. We then generate the one-hot vector $\hat{h} = [\hat{h}_1, \hat{h}_2, \ldots, \hat{h}_n]$, where $\hat{h}_i = 1$ if $i = \arg\max_j \hat{p}_j$ for $j \in \{1, 2, \ldots, n\}$, and $\hat{h}_i = 0$ otherwise. $\hat{h}$ is the one-hot vector of the depth level selection result. However, the argmax operation used to generate the one-hot vector is non-differentiable. Therefore, during the backward propagation process, we apply the Gumbel-Softmax strategy and relax the one-hot vector $\hat{h}$ to $\tilde{h} = [\tilde{h}_1, \tilde{h}_2, \ldots, \tilde{h}_n]$ as follows:
$$\tilde{h}_i = \frac{\exp(\hat{p}_i/\tau)}{\sum_{j=1}^{n} \exp(\hat{p}_j/\tau)}, \quad i \in \{1, 2, \ldots, n\} \qquad (2)$$
where τ is the temperature parameter. Using the Gumbel-Softmax strategy Jang et al. (2017), we select the optimal octree depth level for the machine vision tasks with the argmax function during the forward propagation process, and approximate the gradient of the argmax function by using Eq. (2) in the backward propagation process. During the inference stage, we directly select the depth level with the maximum probability in p.
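This selection step can be sketched as follows, assuming p holds the predictor outputs of shape (batch, n); the helper name is hypothetical. The forward pass returns the hard one-hot vector ĥ, while gradients flow through the relaxed vector h̃.

```python
import torch
import torch.nn.functional as F

def select_depth_level(p, tau=1.0):
    """Straight-through Gumbel selection over the depth-level
    probabilities p, following Eq. (1) and Eq. (2)."""
    eps = torch.rand_like(p).clamp_(1e-20, 1.0 - 1e-7)
    g = -torch.log(-torch.log(eps))          # standard Gumbel noise G_i
    p_hat = p + g                            # Eq. (1): perturbed scores
    h_soft = F.softmax(p_hat / tau, dim=-1)  # Eq. (2): relaxed vector h~
    h_hard = F.one_hot(p_hat.argmax(dim=-1), p.size(-1)).to(h_soft.dtype)
    return h_hard - h_soft.detach() + h_soft  # hard forward, soft backward
```

At inference time, the noise and relaxation are skipped and the level with the maximum probability in p is taken directly.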
3.4 TRAINING STRATEGY
Loss Function. In our SPC-Net, we need to train three modules: the octree depth level predictor, the compression module (i.e., the encoder and the decoder) and the task specific network module. As the encoder and the decoder are the same as in VoxelContext-Net Que et al. (2021), we train the compression module with the same settings as VoxelContext-Net. For the task network module, we train the network with the same settings as PointNet++ Qi et al. (2017) or VoteNet Qi et al. (2019). We train our octree depth level predictor after the other two modules are pretrained, fixing the parameters of the compression module and the task network module. The loss function used for training our octree depth level predictor is:
$$\mathrm{loss} = \sum_{i=1}^{n} \left(\lambda \cdot \mathrm{bpp}_i + L_i\right) \cdot \tilde{h}_i \qquad (3)$$
Our method selects the optimal depth level from n candidate depth levels. $\mathrm{bpp} = (\mathrm{bpp}_1, \mathrm{bpp}_2, \ldots, \mathrm{bpp}_n)$, where bpp means bits per point and denotes the length of the bit-stream; $\mathrm{bpp}_i$ ($i \in \{1, 2, \ldots, n\}$) represents the bpp for reconstructing the octree with the first i depth levels. We obtain the bpp values from the encoder. $L = (L_1, L_2, \ldots, L_n)$, and $L_i$ is formulated as follows:
$$L_i = D(f(\hat{x}_i), y_{gt}), \quad i \in \{1, 2, \ldots, n\} \qquad (4)$$

where f is the machine vision task network (i.e., PointNet++ or VoteNet), $\hat{x}_i$ is the point cloud reconstructed from the octree with i depth levels, $y_{gt}$ is the ground truth for the machine vision task, and D calculates the loss between $f(\hat{x}_i)$ and $y_{gt}$. $\tilde{h} = [\tilde{h}_1, \tilde{h}_2, \ldots, \tilde{h}_n]$, where $\tilde{h}_i$ is defined in Eq. (2). $\lambda$ is a hyper-parameter used to balance the trade-off between the bpp and L terms.
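Under the notation above, the training objective of the predictor can be written as a short sketch; the function name and the averaging over the batch are our own assumptions.

```python
def predictor_loss(bpp, task_loss, h_tilde, lam):
    """Eq. (3): sum over the n candidate levels of (lam * bpp_i + L_i) * h~_i.
    bpp, task_loss and h_tilde are tensors of shape (batch, n)."""
    return ((lam * bpp + task_loss) * h_tilde).sum(dim=-1).mean()
```

Since h̃ is close to one-hot, the loss is dominated by the bpp and task loss of the selected depth level, while still providing gradients to the predictor.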
4 EXPERIMENT
4.1 DATASET
ModelNet. ModelNet Wu et al. (2015) is a widely used benchmark for evaluating point cloud classification performance, and contains two datasets named ModelNet40 and ModelNet10. The ModelNet40 dataset is divided into 40 categories, with 9,843 point clouds for training and 2,468 point clouds for testing. The ModelNet10 dataset is a subset of ModelNet40, which only has 10 categories, with 3,991 point clouds for training and 908 point clouds for testing.
ShapeNet. ShapeNet Yi et al. (2016) contains 16,881 point clouds from 16 object classes. Each point cloud contains 2-5 parts, with a total of 50 part categories. ShapeNet has 14,007 point clouds for training and 2,874 point clouds for testing.
ScanNet. ScanNet Dai et al. (2017) is a real-world dataset used for the 3D object detection task, which contains 1,201 scans for training and 312 scans for testing. Following VoxelContext-Net Que et al. (2021), we sample 50,000 points from each scan.
4.2 EXPERIMENT DETAILS
Baseline. To the best of our knowledge, this is the first point cloud compression framework for both machine vision and human vision tasks. Therefore, we directly use the encoder and the decoder from VoxelContext-Net Que et al. (2021) as our baseline method, which is the state-of-the-art point cloud compression method designed for the human vision task. We also use the same encoder and decoder for point cloud compression in our proposed framework for a fair comparison with the baseline method.
For the baseline methods for the machine vision tasks, we directly use the reconstructed point clouds from VoxelContext-Net as the input of the machine vision task networks. PointNet++ Qi et al. (2017) is used for the classification and segmentation tasks, and VoteNet Qi et al. (2019) is adopted for the detection task. For the classification task, we use octrees with depth levels of 3, 4 and 5 to compress the input raw point clouds. For the segmentation task, we use octrees with depth levels of 4, 5 and 6 for data compression. For the detection task, we use octrees with depth levels of 7, 8 and 9 for data compression. As suggested in VoxelContext-Net Que et al. (2021), we train the machine vision task networks on the raw point clouds and evaluate the classification/segmentation/detection results based on the reconstructed point clouds.
Evaluation Metric. We use bits per point (bpp) to denote the bit cost in the compression procedure. For the machine vision tasks, accuracy, mean intersection-over-union (mIoU) and mean average precision (mAP) are used to measure the performance of the classification, segmentation and detection tasks, respectively. For the human vision task, Point-to-Point PSNR Tian et al. (2017) and Chamfer distance (CD) Fan et al. (2017); Huang & Liu (2019) are used to measure the distortion between the reconstructed point cloud and the raw point cloud, which are widely used metrics for measuring compression performance.
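For reference, a minimal sketch of the Chamfer distance between two point sets is given below. Conventions differ across papers (squared vs. unsquared distances, sum vs. mean); this sketch uses the mean of squared nearest-neighbour distances in both directions, which is one common choice and not necessarily the exact variant used in the cited works.

```python
import torch

def chamfer_distance(x, y):
    """Symmetric Chamfer distance between point sets x (N, 3) and y (M, 3)."""
    d = torch.cdist(x, y) ** 2           # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```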
Implementation Details. We train our model in two stages. In the first stage, we only train the encoder, the decoder and the task specific networks. We use the same training strategy as VoxelContext-Net Que et al. (2021) to train the encoder and the decoder. For the different task specific networks (i.e., PointNet++ Qi et al. (2017) and VoteNet Qi et al. (2019)), we follow the settings in their original works. In the second stage, based on the loss function in Eq. (3), we train the octree depth level predictor while fixing the parameters of the encoder, the decoder and the task specific networks. For the classification task, the hyper-parameter λ is set from 0.01 to 16. For the segmentation task, λ is set from 0.02 to 8. For the detection task, λ is set to 0.3, 0.6, 1 and 2.
The whole network is implemented in PyTorch with CUDA support. In the second training stage, we set the batch size to 48. We use the Adam optimizer Kingma & Ba (2015) with a learning rate of 1e-4 for the first 50 epochs, 1e-5 for the next 30 epochs, and 1e-6 for the last 20 epochs.
In our experiments, the maximum depth levels of the octrees are set to 8, 8 and 9 for the human vision task on the ModelNet, ShapeNet and ScanNet datasets, respectively, as these predefined depth levels are sufficient for reconstructing high quality point clouds with a good visual experience. For machine vision, the maximum depth levels of the octrees are set to 7, 7 and 9 on the ModelNet, ShapeNet and ScanNet datasets, respectively.
4.3 EXPERIMENT RESULTS
Classification Task. The classification results of our SPC-Net on the ModelNet10, ModelNet40 and ShapeNet datasets are shown in Figure 3 (a), (b) and (c). It is observed that our proposed
framework SPC-Net achieves a 1% accuracy improvement at 0.05 bpp on the ModelNet10 dataset when compared with our baseline method. On the ModelNet40 dataset, our SPC-Net achieves about 10% accuracy improvement at 0.056 bpp and saves about 0.8 bpp at 91.8% accuracy. On the ShapeNet dataset, our SPC-Net achieves more than 10% accuracy improvement at 0.01 bpp when compared with our baseline method using 4 octree depth levels. The experimental results demonstrate that our new SPC-Net improves the performance when the input point cloud is compressed for the classification task.
Segmentation Task. The segmentation results of our SPC-Net on the ShapeNet dataset are shown in Figure 3 (d). We observe that our proposed framework SPC-Net achieves a 0.8% mIoU improvement at 0.08 bpp when compared with our baseline method. Our method saves more than 10% bpp at a similar mIoU when compared with our baseline method. Therefore, our method achieves better performance than the baseline method for the segmentation task.
Detection Task. The detection results of our SPC-Net on the ScanNet dataset are shown in Figure 3 (e) and (f). From Figure 3 (e), we observe that our framework SPC-Net achieves about 0.01 mAP@0.25 improvement at 3 bpp when compared with our baseline method. At the highest bpp, our SPC-Net saves more than 20% bpp when compared with our baseline. From Figure 3 (f), we observe that our SPC-Net improves mAP@0.5 by about 0.08 and saves about 0.3 bpp when compared with our baseline method at 3 bpp. At the highest bpp, our SPC-Net saves more than 15% bpp when compared with our baseline method. The experimental results demonstrate that our proposed framework also improves the performance of the detection task.
Human Vision Results. Our SPC-Net achieves exactly the same compression performance as our baseline method VoxelContext-Net Que et al. (2021) (please refer to Appendix A.2 for more details). It should be mentioned that in most 2D image compression methods for both machine vision and human vision Choi & Bajić (2022); Yang et al. (2021); Torfason et al. (2018), the compression performance for human vision always drops in order to achieve better performance for the machine vision tasks. Therefore, it is an advantage that our proposed framework SPC-Net can improve the performance for the machine vision tasks without sacrificing the compression results for human vision.
4.4 MODEL ANALYSIS
In order to balance the bit-rate cost and the performance of the machine vision tasks in different scenarios, our proposed octree depth level predictor can dynamically adjust the number of point clouds reconstructed from octrees at different depth levels. The selection percentages of the different octree depth levels for different tasks at different λ values are shown in Figure 4. We observe that smaller λ values lead to higher depth levels being selected more often. As the λ value increases, our octree depth level predictor selects lower depth levels of the octrees. The selection percentages in Figure 4 demonstrate that our SPC-Net can dynamically select the optimal depth level of the octree at different λ values for different machine vision tasks.
In Table 1, we evaluate our SPC-Net for the classification task on the ModelNet40 dataset when setting λ = 0.25. It is observed that lower depth levels of the octree are preferred for the simple categories (e.g., chair, laptop, bed), where more than 95% accuracy can still be achieved. Therefore, our SPC-Net saves bits while still achieving promising classification performance. For the "complex" categories (e.g., person, curtain, guitar), our octree depth level predictor prefers higher depth levels. As it is hard to recognize objects from the complex categories, our SPC-Net needs to spend more bits on higher depth levels to achieve better classification results. The results demonstrate that our proposed octree depth level predictor can select different depth levels for different input point clouds according to their characteristics (e.g., their relatively "simple" or "complex" categories).
5 CONCLUSION
In this work, we have proposed a new scalable point cloud compression framework, SPC-Net, for both machine vision and human vision tasks. In our SPC-Net, we propose a new scalable bit-stream partitioning method based on the point cloud encoder-decoder structure in order to make the compressed point clouds more suitable for both tasks. Additionally, considering the purposes of different tasks and the characteristics of different point clouds, we design a new octree depth level predictor to guide the division of the bit-stream. The experimental results on four benchmark datasets demonstrate that our SPC-Net achieves promising results for three machine vision tasks (i.e., classification, segmentation and detection) without sacrificing the performance of the human vision task.
A APPENDIX
A.1 THE FRAMEWORK
Octree Construction. As shown in Figure 1 (a), the octree is a point cloud storage structure that is beneficial for compression. To construct the octree, we first surround the point cloud with the smallest enclosing cube. This cube is then split into eight sub-cubes of the same size. For each sub-cube, if there is no point in the cube, the cube is recorded as empty; otherwise, the cube is recorded as nonempty, which means there are points in this cube. After that, each nonempty sub-cube is split in the same way to reduce the size of the cubes until the depth of the octree reaches the predefined maximum depth. In the constructed octree, a non-leaf node stands for one cube, and each nonempty non-leaf node has eight child nodes that stand for its sub-cubes.
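To make the construction concrete, here is a minimal NumPy sketch under our own simplifications: the function name is illustrative, the node ordering within a depth level follows the recursion rather than the breadth-first serialization an actual codec would use, and the entropy coding of the occupancy codes is omitted.

```python
import numpy as np

def build_octree(points, max_depth):
    """Recursively split every nonempty cube into eight equal sub-cubes
    until max_depth, recording an 8-bit occupancy code (one bit per
    child cube) for each split node, grouped by depth level."""
    lo = points.min(axis=0)
    size = (points.max(axis=0) - lo).max() + 1e-6  # smallest enclosing cube
    levels = [[] for _ in range(max_depth)]

    def split(pts, origin, size, depth):
        if depth == max_depth:
            return
        half = size / 2.0
        code, nonempty = 0, []
        for child in range(8):
            offset = np.array([(child >> 2) & 1, (child >> 1) & 1, child & 1])
            o = origin + offset * half
            mask = np.all((pts >= o) & (pts < o + half), axis=1)
            if mask.any():
                code |= 1 << child          # mark this sub-cube as nonempty
                nonempty.append((pts[mask], o))
        levels[depth].append(code)
        for child_pts, o in nonempty:
            split(child_pts, o, half, depth + 1)

    split(points, lo, size, 0)
    return levels
```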
Encoder. The encoder compresses the octree into the bit-stream. Each octree is encoded into the bit-stream from the low depth levels to the high depth levels, as shown in Figure 1 (a). Therefore, we can divide the full bit-stream into two parts according to the selected octree depth level.
Decoder and Point Cloud Reconstruction. The decoder restores the octree from the bit-stream, and the point cloud reconstruction module reconstructs the point cloud coordinates from the octree. The reconstructed point cloud coordinates are the coordinates of the center points of the smallest nonempty cubes. The point cloud coordinates can not only be used in machine vision tasks but can also be easily visualized for human vision.
Data Processing. In each octree, all points in one smallest cube are merged into one point, so the reconstructed point cloud has fewer points than the raw point cloud. Additionally, the number of removed points differs across point clouds, so the output point clouds from the point cloud reconstruction module have different numbers of points. However, our framework requires a batch size larger than 1 (e.g., 32 or 48), and point clouds with different numbers of points cannot be directly concatenated into one batch. Therefore, we randomly sample each point cloud to a predefined number of points to unify the point cloud size. As mentioned above, the octree merges several points into one point; however, in the segmentation task each point corresponds to one target. If all points in the smallest cube have the same label, we use this label for the new merged point. If the points in one smallest cube have different labels, we use the label of the point that is closest to the merged point.
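A minimal sketch of the size-unification step is given below; sampling with replacement for undersized point clouds and the function name are our own assumptions, and the label-merging rule described above is applied when points are combined, before this sampling.

```python
import numpy as np

def unify_point_count(points, num_points, labels=None):
    """Randomly sample a fixed number of points so that reconstructed
    point clouds of different sizes can be batched together; for the
    segmentation task the labels are gathered with the same indices."""
    replace = points.shape[0] < num_points  # resample if too few points
    idx = np.random.choice(points.shape[0], num_points, replace=replace)
    if labels is None:
        return points[idx]
    return points[idx], labels[idx]
```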
A.2 HUMAN VISION RESULT
The experimental results of our SPC-Net for human vision are shown in Table 2. In this table, we observe that our SPC-Net achieves the same performance as VoxelContext-Net.
The visualization results of the segmentation task are shown in Figure 5. From the results of the table and the mug in the first two rows of Figure 5, point clouds reconstructed from the octree with 5 depth levels can achieve similar segmentation performance when compared with the point clouds reconstructed from the octree with more depth levels. Therefore, our octree depth level predictor
prefers the octree with 5 depth levels for the segmentation task in these two cases to save bits. From the results of the car and the airplane in the last two rows of Figure 5, the point clouds reconstructed from the octrees with 7 depth levels achieve much better segmentation performance when compared with those from octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octree with 7 depth levels for achieving better segmentation performance in these two cases.
The visualization results of the detection task are shown in Figure 6. In the first row, the point cloud reconstructed from the octree with 7 depth levels achieves the same mAP@0.25 performance as the point clouds reconstructed from the octrees with higher depth levels. Therefore, our octree depth level predictor selects 7 depth levels in this case to save bits. In the second row, the point cloud reconstructed from the octree with 9 depth levels has much better mAP@0.25 performance than the point clouds reconstructed from the octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octree with 9 depth levels for better detection performance in this case.
It is observed that our proposed octree depth level predictor can select the optimal depth levels of the octrees for different cases, which demonstrates its effectiveness.
Summary Of The Paper
This paper proposed a scalable point cloud compression method that aims to support both machine vision and human vision tasks. The proposed method uses fewer points for the machine vision tasks, while the full point cloud is exploited for the final "human" evaluation. The paper proposes a prediction network to decide the octree depth for the machine vision tasks. The results on several benchmark datasets demonstrate the superiority of the proposed method.
Strengths And Weaknesses
Strength: The paper provides some insight into the impact of point cloud compression on the downstream machine vision tasks.
Weaknesses
Over-claimed contribution. This paper claims that it is the first point cloud compression method for both machine and human vision. I am afraid I cannot agree with that. The authors also claim that the existing point cloud compression methods are only designed for human vision, which is also not correct. Most existing learned point cloud compression methods, like OctSqueeze and VoxelContext, evaluate their approaches on downstream tasks, like point cloud detection. Therefore, we cannot say they are only designed for human vision, since their results show that these compression methods improve both the PSNR results and the machine vision metrics over G-PCC or other learned PCC methods. In fact, most learned PCC methods provide results on downstream tasks, and we cannot deny that their compression approaches are beneficial for machine vision tasks.
Following Q1, there are also several recent point cloud compression methods that this paper does not discuss or compare with [1,2,3,4]. I provide the following papers for your reference; many of them have provided the corresponding downstream machine vision results, which this paper should cite and compare with.
[1] Chen, Zhili, Zian Qian, Sukai Wang, and Qifeng Chen. 2022. “Point Cloud Compression with Sibling Context and Surface Priors.”
[2] Zhou, Xuanyu, Charles R Qi, Yin Zhou, and Dragomir Anguelov. n.d. “RIDDLE: Lidar Data Compression With Range Image Deep Delta Encoding”.
[3] Fu, Chunyang, Ge Li, Rui Song, Wei Gao, and Shan Liu. 2022. “OctAttention: Octree-Based Large-Scale Contexts Model for Point Cloud Compression.”
[4] He, Yun, Xinlin Ren, Danhang Tang, Yinda Zhang, Xiangyang Xue, and Yanwei Fu. 2022. “Density-Preserving Deep Point Cloud Compression.”
For the baseline methods, the paper only evaluates the proposed framework with VoxelContext-Net, while there are a lot of new PCC methods, as I mentioned before, which should be discussed. Moreover, the classification backbone PointNet++ (2017) may be out-of-date. There are several more advanced classification backbones, and the corresponding performance drop from the downsampled point cloud may be alleviated with them.
Even for the PointNet++ backbone, this paper uses PointNet++ pretrained on the raw point cloud without downsampling, while the input to PointNet++ is the downsampled version, which is not fair. The performance on the downstream tasks could easily be boosted by re-training PointNet++ or even only some of its layers.
Several important references are missing, as I mentioned before. There are also some VCM papers that should be cited, like [5,6].
[5] Wang, Shurun, Shiqi Wang, Wenhan Yang, Xinfeng Zhang, Shanshe Wang, Siwei Ma, and Wen Gao. 2021. "Towards Analysis-Friendly Face Representation with Scalable Feature and Texture Compression."
[6] Bai, Yuanchao, Xu Yang, Xianming Liu, Junjun Jiang, Yaowei Wang, Xiangyang Ji, and Wen Gao. 2021. "Towards End-to-End Image Compression and Analysis with Transformers."
Scalable coding for machine vision has been investigated by many image and video compression methods, and using scalable coding for point cloud compression is not new.
How about the computational complexity of the predictor network? According to the settings in this paper, the predictor selects one optimal depth level from three possible candidates. It seems that the range is too limited, and the authors do not provide results showing that the selected depth is optimal; the gains over a random selection from the 3 candidates or over a fixed depth level are not available.
There are several typos in this paper. For example, Page 2, "for both human and machine, Our framework ...". Page 2, "In this work, We ...."
Another key point is that the authors claim the PCC is for both human and machine vision. However, the results are evaluated on LiDAR datasets, which are sparse and usually used for autonomous driving. In this scenario, I am not sure whether people will view these point clouds themselves. The authors should give some references showing that these sparse point clouds are viewed by both humans and machines.
Clarity, Quality, Novelty And Reproducibility
The novelty is not significant enough and the experimental results are not convincing.
In recent years, some learning-based point cloud compression methods Huang & Liu (2019); Zhu et al. (2020); Huang et al. (2020); Biswas et al. (2020); Que et al. (2021); Wang et al. (2021b;a) have achieved the state-of-the-arts performance. Huang et al. Huang et al. (2020) and Wang et al. Wang et al. (2021b) followed the learned image compression framework Ballé et al. (2017) to compress the voxelized point clouds. To reduce the bitrate, Biswas et al. Biswas et al. (2020) exploited the spatio-temporal relationships across multiple LiDAR sweeps by using a novel conditional entropy model. Based on Wang et al. (2021b), Wang et al. Wang et al. (2021a) used the lossless compressed octree and the lossy compressed point feature to further improve the coding performance. Que et al. Que et al. (2021) extended the framework by further exploiting the context information among neighbouring nodes and refining the 3D coordinate at the decoder side. Considering that VoxelContext-Net Que et al. (2021) is the state-of-the-art point cloud compression method, we use it as our baseline method for the human vision task.
All the existing methods compress the point cloud data for human perception, which is evaluated by the metrics like point-to-point PSNR and point-to-plane PSNR. However, unlike the 2-D images or videos, most point clouds are not purely collected for human perception. Instead, they are widely used for various real-world machine vision tasks, such as classification, segmentation, and detection, which is unfortunately not considered in there works
2.2 IMAGE COMPRESSION FOR BOTH MACHINE AND HUMAN VISION TASKS
To the best of our knowledge, there is no existing point cloud compression method for both machine and human vision. In this section, we first discuss the scalable image compression methods for both machine and human vision tasks, and then the other compression methods.
Scalable Methods. Both Choi et al. Choi & Bajić (2022)and Chen et al. Chen et al. (2021) performed scalable image compression by dividing the image bit-streams into different parts and transmitting one or more parts of the bit-streams for both machine or human vision tasks. Liu et al. Liu et al. (2021) proposed a scalable image compression method for define grained classification at different levels.
Other Methods. Yang et al. Yang et al. (2021) designed the image encoder by using the edge extraction algorithm, and the reconstructed images from the decoder achieve promising performance for both human vision and machine vision tasks. Le et al. Le et al. (2021) directly added the additional machine vision loss to the compression loss functions to improve the reconstructed image quality for the machine vision tasks. Song et al. Song et al. (2021) compressed the source image through a corresponding quality map produced from different machine vision tasks. Torfason et al. Torfason et al. (2018) combined the image compression network with the detection network, and directly extracted the detection related information from bit-stream without using an image decoder.
In summary, the above methods for machine vision methods are all lossy compression methods. The encoder extracts the helpful features from the images, the decoder reconstructs the images based on the encoded features, and the entropy model calculates the number of bits used for the features. Most methods can adjust the various parameters in the encoder and decoder based on the performance in machine vision tasks. Therefore, it is easy for the encoder to learn the representative image features for machine vision. However, most learning-based point cloud compression methods use a lossless compression network. Their encoders and decoders cannot be optimized to extract the useful features in the point cloud for machine vision, which hinders the development of the point cloud compression methods for the machine vision tasks.
In contrast to these works Liu et al. (2021); Chen et al. (2021); Choi & Bajić (2022); Yang et al. (2021); Le et al. (2021); Song et al. (2021); Torfason et al. (2018), we propose some new modules before and after the compression model to improve the machine vision performance while maintaining the fidelity for human vision by keeping the lossless point cloud compression model unchanged.
3 METHODOLOGY
3.1 THE FRAMEWORK
The overall structure of our scalable point cloud compression framework (SPC-Net) is shown in Figure 1(b). In this section, we will first introduce our method coding strategy. And then, each module in our framework will be introduced.
Scalable Coding Strategy. The point cloud data is commonly used for various machine vision tasks. Therefore, our SPC-Net is always used for machine vision tasks (e.g., abnormal event detections to detect collision between the pedestrians and the vehicles) and the point cloud information is transformed along the solid arrows as shown in Figure 1 (b). If the human vision task must also be involved (e.g., when the prediction results from the machine vision tasks like event detection are abnormal), our framework can provide a high quality reconstructed point cloud for humans further analysis. It should be mentioned that like the scalable coding method, to reconstruct the point clouds for the human vision task, we can reuse the bit-stream generated for the machine vision task, which can avoid duplicate bit transmission.
Octree Construction, Encoder, Decoder and Point Cloud Reconstruction. The octree construction module constructs the point cloud to octree. Octree is a tree-like data structure used to describe three-dimensional space. Each node of the octree represents a volume element of a cube, and each non-leaf node has eight child nodes. The volume of the parent node can be obtained by adding the volume elements represented by the eight child nodes together. And the black node in Figure 1 (a) means there are points in this cube, and the white node means empty cube without having any 3D point. Each octree is encoded as the bit-stream by using the 3D encoder. The decoder reconstructs
Raw point cloud
Octree Depth Level Predictor
Predicted depth level 1 2 3 ....
❎ ✔ ❎ .... Encoder
Point Cloud Reconstruction
Data Processing
Task Specific Network
Octree Construction
Results of the machine task
Results of the human vision
Raw point cloud
Max Pooling
MLPs (64,128,512)
FC (128)
ReLU
FC (Nc)
Gubmel Softmax
Random Sampling
one hot vector
(c) Octree depth level predictor(b) Overview
0929 修改版
octree with 3 depth levels
...
...
(a) The process of encode and decode the octree
depth level 1
... ...
octree with 2 depth levels for the machine vision task
... octree with 3 depth
levels for the human vision task
Partition Scalable Bit-stream Partitioning
Decoder
depth level 2
depth level 3
Encoder
Decoder
Predicted depth level
Bm Bh
B
Figure 1: (a) The encoding and decoding process of the octree. B,Bm and Bh denote the full bitstream, the bit-stream for the machine vision task and the bit-stream for the rest depth level of the octree, respectively. (b) The overall architecture of our proposed scalable point cloud compression framework SPC-Net, which is designed for both machine vision and human vision. (c) Details of our proposed octree depth level predictor.
the bit-stream to octree. The point cloud reconstruction module then restore the point cloud from the octree. In this work, we task VoxelContext-Net Que et al. (2021) as an example and use the same design for all those modules, the details can be found in Appendix A.1.
Scalable Bit-stream Partitioning. Our scalable bit-stream partitioning module can split the full bit-stream to two parts bit-stream for different tasks. The details is shown in section 3.2.
Octree Depth level Predictor. Our octree depth level predictor is used to adaptively choose the octree depth for the machine vision tasks and can guide the full bit-steam splitting. The details of this module will be described in section 3.3.
Data Processing. The role of this module is to process point cloud data to compensate for the data difference between the output of the compressed network and the input of the machine task network. The details about this module are shown in Appendix A.1.
Task Specific Network. To adapt to a variety of situations in the point cloud based machine vision tasks, this module will use different networks for different machine vision tasks. For the classification task and the segmentation task, PointNet++ Qi et al. (2017) will be used in this module. For the detection task, VoteNet Qi et al. (2019) is adopted.
3.2 SCALABLE BIT-STREAM PARTITIONING
Although the reconstructed point cloud often achieves promising performance for the human vision task when using full bit-stream, it has plenty of redundant information for the machine vision tasks and thus it is less effective in terms of the bit-rate cost. Therefore, we design this scalable bit-stream partitioning method to split the bit-stream for both human and machine vision tasks.
Before introducing how to divide the bit-stream, we first introduce how to generate the point cloud bit-stream. Figure 1(a) shows the encoding and decoding process of the octree. During the encoding process, each octree is encoded from the lower depth level to the higher depth level. Therefore, the final full bit-stream can be expressed as B = (b1, b2, ..., bn), where n is the maximum octree depth level and bi represents the bit-stream from the ith depth level. At the decoder side, each octree will be reconstructed from the lower depth level to the higher depth level. The (i + 1)th depth level of the octree can be reconstructed with the previously reconstructed octree which has i depth levels and the extra bits bi+1. For example, with b1 ∪ b2, we can reconstruct the octree with the first two depth levels, and with b1 ∪ b2 ∪ b3 we can reconstruct the octree with the first three depth levels. Based on the above octree encoding and decoding process, we can split the full bit-stream B = (b1, b2, ..., bn) into two parts Bm and Bh according to the octree depth level. Bm = (b1, b2, ..., bi) can be used to reconstruct the octree with the first i depth levels, which will be used for the machine vision tasks. Bh = (bi+1, bi+2, ..., bn) can reconstruct the rest depth levels of the octree based on the reconstruction of the first i depth levels, which will be used for the human vision task. And the optimal splitting level index i is determined by the octree depth predictor for scalable bit-stream partitioning.
3.3 OCTREE DEPTH LEVEL PREDICTOR
The design of our octree depth level predictor is inspired by the well-trained machine vision tasks (i.e., classification, segmentation, detection). We can often achieve reasonable results when using the reconstructed point cloud from the lower depth level octree as the input. Taking the classification results in Figure 2 as an example, some objects with simple shapes like laptop can be easily recognized when using the reconstructed point cloud reconstructed from the octree with 4 depth levels as the input, while other objects with complex shapes like guitar can only be recognized when using the reconstructed point cloud from the octree with 6 depth levels. Therefore, we can use the octree with lower depth levels to reduce bit-stream cost and thus we can save the storage space and the bandwidth.
To achieve this goal, we propose the octree depth predictor to decide the optimal depth level of the octree for the machine vision tasks, which can not only achieve the reasonable performance for the machine vision tasks but also reduce the bit-rate cost. In addition, the encoder side (e.g., the RGB-D cameras or the LiDAR sensors) always do not have enough computing power and can not support the complex networks. Therefore, the networks (e.g., PointNet++ and VoteNet) for handling the complex machine vision tasks are placed behind the decoder and not in the encoder side. As shown in Figure 1 (c), our octree depth level predictor is designed by using 3 layers MLP, and 2 fully connected layers, which is a simple network. To future reduce the computational complexity, we random sample 1024 points from the raw point cloud as the input of our octree depth level predictor for classification and segmentation tasks.
Our octree depth level predictor can select the optimal octree depth level for machine vision tasks from the raw point cloud global feature. According to the different characters in machine vision tasks (e.g., the difficulty of classification), our octree depth predictor can generate n probabilities p = {p1, p2, ..., pn} for n octree depth levels, and then choose the octree depth level with the highest probabilities.
However, the process of choosing the depth level of octree with the highest probability is nondifferentiable, which makes the octree depth predictor unable to train. Therefore, we adopt the Gumbel Softmax Strategy Jang et al. (2017) to address this issue. First, we generate confidence score set p̂ from the probability set p with Gumbel noise as follows:
p̂i = pi +Gi, i ∈ {0, 1, ..., n} (1) where Gi = − log(− log ϵ) is the standard gumbel noise, and ϵ is randomly sampled from a uniform distribution between 0 and 1. Therefore, we can generate the one-hot vector ĥ = [ĥ0, ĥ1, ..., ĥn], where ĥi = 1 if i = argmaxj p̂j , j ∈ {0, 1, ..., n}. Otherwise ĥi = 0. ĥ is the one hot vector of the depth level selection results. However, the argmax operation when generating the one hot vector will led to non-differentiable. Therefore, during the backward propagation process, we apply the Gumbel Softmax Strategy and relax the one-hot vector ĥ to h̃ = [h̃0, h̃1, ..., h̃n] as follows:
h̃i = exp(p̂i/τ))∑7 j=0 exp(p̂j/τ) , i ∈ {0, 1, ..., n} (2)
where τ is the temperature parameter. Using the Gumbel softmax Strategy Jang et al. (2017), we can select the optimal depth-level of octree for machine tasks based on the argmax function during forward propagation process and approximate the gradient of the argmax function by using Eq. (2) in the back propagation process. During the inference stage, we directly select the depth level with the maximum probability in p.
3.4 TRAINING STRATEGY
Loss Function. In our SPC-Net, we need to train three modules including the octree depth level predictor, the compression module (i.e., the encoder and the decoder) and the task specific network module. As the encoder and the decoder is the same as the VoxelContext-Net Que et al. (2021), we train the compression module based on the same setting as VoxelContext-Net. For the task network module, we train the network based on the same setting as PointNet++ Qi et al. (2017) or VoteNet Qi et al. (2019). For our octree depth level predictor, we train it after the other two modules are pretrained. When training the octree depth level predictor, we fix the parameters in the compression module and the task network module. And the loss function function used for training our octree depth level predictor is shown here,
loss = ∑ ((λ ∗ bpp+ L) ∗ h̃) (3)
Our method selects the optimal depth level from n depth levels. bpp = (bpp1,bpp2, ...,bppn), bpp means bits per points, which denotes the length of the bit-stream. bppi(i ∈ {0, 1, ...,n}) represents the bpp for constructing the octree at the ith depth levels. We can obtain bpp from the encoder. L = (L1, L2, ..., Ln), and Li are formulated as follows,
Li = D(f(x̂i), ygt), i ∈ {0, 1, ..., n} (4) where f is the machine vision task network (i.e., PointNet++ or VoteNet). x̂i is the reconstructed point cloud from the octree with i depth levels. ygt is the ground true for the machine vision tasks. And D can calcualte the loss between the f(x̂i) and the ygt. h̃ = [h̃0, h̃1, ..., h̃n], and h̃i is designed in equation 2. λ is a hyper-parameter, which is used to balance the trade off of bpp and L.
4 EXPERIMENT
4.1 DATASET
ModelNet. ModelNet Wu et al. (2015) is a widely used benchmark to evaluate the point cloud classification performance, which contains two datasets named ModelNet40 and ModelNet10. ModelNet40 dataset is divided into 40 categories, which has 9843 point cloud data for training and 2468
point cloud data for testing. The ModelNet10 dataset is a subset of the ModelNet40, which only has 10 categories with 3991 point cloudd data for training and 908 point cloud data for testing.
ShapeNet. ShapeNet Yi et al. (2016) contains 16881 point cloud data from 16 object classes. Each point cloud data contains 2-5 parts, with a total of 50 part categures. ShapeNet has 14007 point cloud data for training and 2874 point cloud data for testing.
ScanNet. ScanNet Dai et al. (2017) is a physically available dataset used for the 3D object detection task, which contains 1201 scans for training and 312 scans for testing. Following the VoxelContextNet Que et al. (2021), we sample 50,000 points from each scan.
4.2 EXPERIMENT DETAILS
Baseline. To the best of our knowledge, this is the first point cloud compression framework for both machine vision and human vision tasks. Therefore, we directly use the encoder and the decoder from VoxelContext-Net Que et al. (2021) as our baseline method, which is the start-of-the-art point cloud compression method designed for the human vision task. We also use the same encoder and decoder for point cloud compression in our proposed framework for a fair comparison with the baseline method.
For the machine vision baselines, we directly use the point clouds reconstructed by VoxelContext-Net as the input of the machine vision task networks. PointNet++ Qi et al. (2017) is used for the classification and segmentation tasks, and VoteNet Qi et al. (2019) is adopted for the detection task. For the classification task, we use octrees with depth levels of 3, 4 and 5 to compress the input raw point clouds. For the segmentation task, we use octrees with depth levels of 4, 5 and 6. For the detection task, we use octrees with depth levels of 7, 8 and 9. As suggested in VoxelContext-Net Que et al. (2021), we train the machine vision task networks with the raw point clouds and evaluate the classification/segmentation/detection results on the reconstructed point clouds.
Evaluation Metric. We use bits per point (bpp) to denote the bit cost in the compression procedure. For the machine vision tasks, accuracy, mean intersection-over-union (mIoU) and mean average precision (mAP) are used to measure the performance of the classification, segmentation and detection tasks, respectively. For the human vision task, point-to-point PSNR Tian et al. (2017) and Chamfer distance (CD) Fan et al. (2017); Huang & Liu (2019) are used to measure the distortion between the reconstructed point cloud and the raw point cloud; both are widely used metrics for measuring compression performance.
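For reference, a minimal PyTorch sketch of these distortion metrics is given below. The symmetric Chamfer distance follows the standard definition; for point-to-point PSNR, the convention for the peak value r (e.g., a fixed signal peak or the bounding-box diagonal) varies between papers, so it is passed in as an assumed parameter.

```python
import torch

def chamfer_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets x (N, 3) and y (M, 3)."""
    d2 = torch.cdist(x, y) ** 2  # pairwise squared distances, shape (N, M)
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()

def point_to_point_psnr(x: torch.Tensor, y: torch.Tensor, peak: float) -> torch.Tensor:
    """Point-to-point (D1-style) PSNR with an assumed peak value r = `peak`."""
    d2 = torch.cdist(x, y) ** 2
    mse = torch.maximum(d2.min(dim=1).values.mean(), d2.min(dim=0).values.mean())
    return 10.0 * torch.log10(torch.tensor(peak) ** 2 / mse)
```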
Implementation Details. We train our model in two stages. In the first stage, we only train the encoder, the decoder and the task specific networks. We use the same training strategy as VoxelContext-Net Que et al. (2021) to train the encoder and the decoder. For the different task specific networks (i.e., PointNet++ Qi et al. (2017) and VoteNet Qi et al. (2019)), we follow the settings in their original works. In the second stage, based on the loss function in Eq. (3), we train the octree depth level predictor while fixing the parameters of the encoder, the decoder and the task specific networks. For the classification task, the hyper-parameter λ is set from 0.01 to 16. For the segmentation task, λ is set from 0.02 to 8. For the detection task, λ is set as 0.3, 0.6, 1 and 2.
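A minimal sketch of this second-stage setup is given below (the module names are hypothetical placeholders): the pretrained compression and task modules are frozen so that gradients only update the octree depth level predictor.

```python
import torch

def build_stage2_optimizer(depth_predictor, encoder, decoder, task_network, lr=1e-4):
    """Freeze the pretrained modules and optimize only the depth level predictor."""
    for module in (encoder, decoder, task_network):
        for param in module.parameters():
            param.requires_grad_(False)
        module.eval()  # also keep normalization statistics fixed
    return torch.optim.Adam(depth_predictor.parameters(), lr=lr)
```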
The whole network is implemented in PyTorch with CUDA support. In the second training stage, we set the batch size to 48. We use the Adam optimizer Kingma & Ba (2015) with a learning rate of 1e-4 for the first 50 epochs, 1e-5 for the next 30 epochs, and 1e-6 for the last 20 epochs.
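This staged schedule is equivalent to a multi-step decay by a factor of 10 at epochs 50 and 80. A sketch using PyTorch's built-in scheduler (with a stand-in module purely for illustration) follows:

```python
import torch

model = torch.nn.Linear(3, 8)  # stand-in for the octree depth level predictor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# 1e-4 for epochs 0-49, 1e-5 for epochs 50-79, 1e-6 for epochs 80-99.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 80], gamma=0.1)

for epoch in range(100):
    # ... one epoch of training over batches of size 48 ...
    scheduler.step()
```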
In our experiments, the maximum depth levels of the octrees are set to 8, 8 and 9 for the human vision task on the ModelNet, ShapeNet and ScanNet datasets, respectively, as these predefined depth levels are sufficient for reconstructing high quality point clouds with a satisfactory visual experience. For machine vision, the maximum depth levels of the octrees are set to 7, 7 and 9 on the ModelNet, ShapeNet and ScanNet datasets, respectively.
4.3 EXPERIMENT RESULTS
Classification Task. The classification results of our SPC-Net on the ModelNet10, ModelNet40 and ShapeNet datasets are shown in Figure 3 (a), (b) and (c). It is observed that our proposed framework SPC-Net achieves a 1% accuracy improvement at 0.05 bpp on the ModelNet10 dataset when compared with our baseline method. On the ModelNet40 dataset, our SPC-Net achieves about a 10% accuracy improvement at 0.056 bpp and saves about 0.8 bpp at 91.8% accuracy. On the ShapeNet dataset, our SPC-Net achieves more than a 10% accuracy improvement at 0.01 bpp when compared with our baseline method using 4 octree depth levels. The experimental results demonstrate that our SPC-Net can improve the performance when the input point cloud is compressed for the classification task.
Segmentation Task. The segmentation results of our SPC-Net on the ShapeNet dataset are shown in Figure 3 (d). We observe that our proposed framework SPC-Net achieves a 0.8% mIoU improvement at 0.08 bpp when compared with our baseline method, and saves more than 10% bpp at a similar mIoU. Therefore, our method achieves better performance than the baseline method on the segmentation task.
Detection Task. The detection results of our SPC-Net on the ScanNet dataset are shown in Figure 3 (e) and (f). From Figure 3 (e), we observe that our framework SPC-Net achieves about a 0.01 mAP@0.25 improvement at 3 bpp when compared with our baseline method, and at the highest bpp, it saves more than 20% bpp. From Figure 3 (f), we observe that our SPC-Net improves mAP@0.5 by about 0.08 and saves about 0.3 bpp when compared with our baseline method at 3 bpp; at the highest bpp, it saves more than 15% bpp. The experimental results demonstrate that our proposed framework can also improve the performance of the detection task.
Human Vision Results. Our SPC-Net achieves exactly the same compression performance as our baseline method VoxelContext-Net Que et al. (2021) (please refer to Appendix A.2 for more details). It should be mentioned that in most 2D image compression methods for both machine vision and human vision Choi & Bajić (2022); Yang et al. (2021); Torfason et al. (2018), the compression performance for human vision drops in order to achieve better performance on the machine vision tasks. In contrast, our proposed framework SPC-Net improves the performance on the machine vision tasks without sacrificing the compression results for human vision.
4.4 MODEL ANALYSIS
In order to balance the bit-rate cost and the performance of the machine vision tasks in different scenarios, our proposed octree depth level predictor can dynamically adjust the depth level at which each point cloud is reconstructed from its octree. The selection percentages of the different octree depth levels for different tasks at different λ values are shown in Figure 4. We observe that smaller λ values lead to higher depth levels being selected more often, while with increasing λ values, our octree depth level predictor selects lower depth levels. The selection percentages in Figure 4 demonstrate that our SPC-Net can dynamically select the optimal octree depth level at different λ values for different machine vision tasks.
In Table 1, we evaluate our SPC-Net on the classification task on the ModelNet40 dataset when setting λ = 0.25. It is observed that lower octree depth levels are preferred for the simple categories (e.g., chair, laptop, bed), while still achieving more than 95% accuracy. Therefore, our SPC-Net saves bits while still achieving promising classification performance. For the “complex” categories (e.g., person, curtain, guitar), our octree depth level predictor prefers higher depth levels. As it is hard to recognize objects from the complex categories, our SPC-Net needs to spend more bits on higher depth levels to achieve better classification results. The results demonstrate that our proposed octree depth level predictor can select different depth levels for different input point clouds according to their characteristics (e.g., belonging to “simple” or “complex” categories).
5 CONCLUSION
In this work, we have proposed a new scalable point cloud compression framework SPC-Net for both machine vision and human vision tasks. In our SPC-Net, we propose a new scalable bit-stream partitioning method based on the point cloud encoder-decoder structure in order to make the compressed point clouds more suitable for both tasks. Additionally, considering the purpose of different tasks and the characteristics of different point clouds, we design a new octree depth level predictor to guide the division of the bit-stream. The experimental results on four benchmark datasets demonstrate that our SPC-Net achieves promising results on three machine vision tasks (i.e., classification, segmentation and detection) without sacrificing the performance of the human vision task.
A APPENDIX
A.1 THE FRAMEWORK
Octree Construction. As shown in Figure 1 (a), the octree is a storage structure for point clouds, which is beneficial for compression. To construct the octree, we first surround the point cloud with the smallest enclosing cube. This cube is then split into eight sub-cubes of the same size. For each sub-cube, if there is no point inside it, the cube is recorded as empty; otherwise, the cube is recorded as nonempty, which means that some points fall in this cube. After that, for each nonempty sub-cube, we repeat the above splitting process to reduce the cube size until the depth of the octree reaches the predefined maximum depth value. In the constructed octree, a non-leaf node stands for one cube, and each nonempty non-leaf node has eight child nodes that stand for its sub-cubes.
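The construction can be sketched compactly by recording, at each depth, the set of nonempty cubes; this flat occupancy view is equivalent to the recursive tree description above. The sketch below is a simplified illustration of ours, not the exact implementation used in VoxelContext-Net.

```python
import numpy as np

def build_octree_occupancy(points: np.ndarray, max_depth: int):
    """Record the nonempty cubes of a point cloud (N, 3) at every depth level.

    Returns the cube origin, the cube size, and one set of occupied integer
    cell indices (ix, iy, iz) per depth level d = 1, ..., max_depth.
    """
    lo = points.min(axis=0)
    size = float((points.max(axis=0) - lo).max()) + 1e-9  # smallest enclosing cube
    levels = []
    for d in range(1, max_depth + 1):
        cells = np.floor((points - lo) / size * (2 ** d)).astype(np.int64)
        cells = np.clip(cells, 0, 2 ** d - 1)   # guard points on the far boundary
        levels.append(set(map(tuple, cells)))   # nonempty cubes at depth d
    return lo, size, levels
```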
Encoder. The encoder compresses the octree into the bit-stream. Each octree is encoded into the bit-stream from the low depth levels to the high depth levels, as shown in Figure 1 (a). Therefore, we can divide the full bit-stream into two parts according to the selected octree depth.
Decoder and Point Cloud Reconstruction. The decoder restores the octree from the bit-stream, and the point cloud reconstruction module then reconstructs the point cloud coordinates from the octree. The reconstructed point cloud coordinates are the coordinates of the center points of the smallest nonempty cubes. The point cloud coordinates can not only be used in the machine vision tasks, but can also be easily visualized for human vision.
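Continuing the sketch above, decoding a point cloud at a chosen depth simply places one point at the center of every nonempty cube of that level. This mirrors the description of the reconstruction module, and is again only an assumed, simplified implementation.

```python
import numpy as np

def reconstruct_points(lo: np.ndarray, size: float, levels: list, depth: int) -> np.ndarray:
    """Reconstruct the point cloud at a given depth from the occupancy sets
    produced by build_octree_occupancy: one point per nonempty cube, placed
    at the cube center."""
    cell = size / (2 ** depth)
    occupied = sorted(levels[depth - 1])
    return np.stack([lo + (np.asarray(c) + 0.5) * cell for c in occupied])
```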
Data Processing. In each octree, all points falling in one smallest cube are combined into one point, so the reconstructed point cloud has fewer points than the raw point cloud. Additionally, the number of removed points differs between point clouds, so the point clouds output by the point cloud reconstruction module have different numbers of points. However, our framework requires a batch size larger than 1 (e.g., 32 or 48), and point clouds with different numbers of points cannot be directly concatenated into one batch. Therefore, we randomly sample each point cloud to a predefined number of points to unify the point cloud size. As mentioned above, the octree combines several points into one point, while each point corresponds to one target in the segmentation task. Therefore, if all points in the smallest cube have the same label, we use this label for the newly combined point; if the points in one smallest cube have different labels, we use the label of the point that is closest to the combined point.
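Both steps can be sketched as follows; the fixed resampling size and the use of sampling with replacement for clouds that are too small are our own assumptions.

```python
import numpy as np

def resample_fixed_size(points: np.ndarray, num_points: int, rng=None) -> np.ndarray:
    """Randomly resample a reconstructed point cloud to a fixed size so that
    clouds with different point counts can be stacked into one batch."""
    rng = rng if rng is not None else np.random.default_rng()
    replace = points.shape[0] < num_points  # sample with replacement if too small
    idx = rng.choice(points.shape[0], size=num_points, replace=replace)
    return points[idx]

def merged_point_label(cube_points: np.ndarray, cube_labels: np.ndarray,
                       center: np.ndarray) -> int:
    """Segmentation label for a point merged from one smallest cube: the common
    label if the cube is pure, otherwise the label of the raw point closest to
    the cube center."""
    if np.unique(cube_labels).size == 1:
        return int(cube_labels[0])
    d = np.linalg.norm(cube_points - center, axis=1)
    return int(cube_labels[int(d.argmin())])
```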
A.2 HUMAN VISION RESULT
The experimental results of our SPC-Net for human vision are shown in Table 2. We observe that our SPC-Net achieves the same performance as VoxelContext-Net.
The visualization results of the segmentation task are shown in Figure 5. From the results of the table and the mug in the first two rows of Figure 5, the point clouds reconstructed from the octrees with 5 depth levels achieve segmentation performance similar to the point clouds reconstructed from the octrees with more depth levels. Therefore, our octree depth level predictor prefers the octrees with 5 depth levels for the segmentation task in these two cases to save bits. From the results of the car and the airplane in the last two rows of Figure 5, the point clouds reconstructed from the octrees with 7 depth levels achieve much better segmentation performance than those from the octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octrees with 7 depth levels to achieve better segmentation performance in these two cases.
The visualization results of the detection task are shown in Figure 6. In the first row, the point cloud reconstructed from the octree with 7 depth levels achieves the same mAP@0.25 performance as the point clouds reconstructed from the octrees with higher depth levels. Therefore, our octree depth level predictor selects 7 depth levels in this case to save bits. In the second row, the point cloud reconstructed from the octree with 9 depth levels has much better mAP@0.25 performance than the point clouds reconstructed from the octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octree with 9 depth levels for better detection performance in this case.
It is observed that our proposed octree depth level predictor can select the optimal octree depth levels for different cases, which demonstrates its effectiveness.
Summary Of The Paper
This paper proposes a new scalable point cloud compression framework for both machine and human vision tasks. Unlike many previous works, this paper considers the purpose of different tasks and the characteristics of different point clouds and designs a new depth level predictor to guide the division of the bit-stream.
Strengths And Weaknesses
Paper Strengths:
1. This paper proposes a new scalable bit-stream partitioning method based on the point cloud encoder-decoder structure to make the compressed point clouds more suitable for both machine vision and human vision tasks, which is interesting and novel.
2. The authors have conducted extensive experiments, and the experimental workload is substantial.
Paper Weaknesses:
1. The octree depth level predictor in the paper is used to guide the division of the bit-stream. It seems that it can only work in point cloud compression frameworks constructed based on octrees.
2. The octree depth level predictor is one of the important contributions of this paper, but it is only a simple selector based on the Gumbel Softmax strategy, which lacks novelty.
Clarity, Quality, Novelty And Reproducibility
This paper proposes the first point cloud compression method for both machine and human vision, which is an original work.
Summary Of The Paper
The paper presents a scalable PCC framework for both machine vision and human vision tasks. It mainly proposes a novel octree depth level predictor to predict the optimal octree depth used for machine vision tasks. Experimental results demonstrate its superiority compared with a fixed depth level strategy.
Strengths And Weaknesses
Strengths:
1. In this paper, the authors study the influence of octree depth levels on the performance of machine vision tasks. The topic is interesting and promising.
2. The experimental results demonstrate that the proposed method can reduce the bit-rates of point clouds in machine vision tasks.
Weaknesses:
1. It is hard to claim this work to be a new scalable PCC framework. The VoxelContext-Net used for compression and the PointNet++ used for the machine vision tasks are both existing techniques, so the novelty is limited. The work is actually studying the relationship between the performance of machine vision tasks and the octree depth level.
2. The datasets selected in the experiments are limited. I suggest the authors conduct extensive experiments on LiDAR point clouds, which would be more convincing.
3. The work focuses on machine vision tasks, while for human vision, the authors only test some objective metrics such as point-to-point distance. At the same time, the proposed method did not improve these human vision metrics according to the experiments.
Clarity, Quality, Novelty And Reproducibility
The writing of the paper is clear and the proposed method is sound, but the novelty of the work is limited.
ICLR | Title
SPC-Net: A New Scalable Point Cloud Compression Framework for Both Machine and Human Vision Tasks
Abstract
Recently, point cloud process and analysis have attracted increasing attention in various machine vision tasks. Therefore, some point cloud compression algorithms are developed. However, such compression algorithms are developed for human vision while most of the point cloud data will be used for automated point cloud analysis (e.g., detection of abnormal event and early warning in autonomous driving) and may not be seen by humans. To this end, we design a new scalable point cloud compression framework (SPC-Net) for both machine and human vision tasks, in which a scalable bit-stream will be used to describe the point cloud for both machine vision and human vision tasks. For machine vision tasks, only part of the bit-stream will be transmitted for bit-rate saving, while the full bitstream will be transmitted when used for the human vision task. Additionally, we propose a new octree depth level predictor to automatically predict the optimal depth level in order to control the bit-rate cost for the machine vision tasks. As a result, for simple objects/scenarios, we will use fewer depth levels with less bits for the machine tasks, while for complex objects/scenarios, we prefer deeper depth levels of octree with more bits for machine tasks comprehensive. Experimental results on different datasets (e.g., ModelNet10, ModelNet40, ShapeNet and ScanNet) demonstrate that our proposed scalable point could compression framework SPC-Net achieves better performance on the machine vision tasks (e.g., classification, segmentation and detection) without degrading the performance of the human vision task.
N/A
Recently, point cloud process and analysis have attracted increasing attention in various machine vision tasks. Therefore, some point cloud compression algorithms are developed. However, such compression algorithms are developed for human vision while most of the point cloud data will be used for automated point cloud analysis (e.g., detection of abnormal event and early warning in autonomous driving) and may not be seen by humans. To this end, we design a new scalable point cloud compression framework (SPC-Net) for both machine and human vision tasks, in which a scalable bit-stream will be used to describe the point cloud for both machine vision and human vision tasks. For machine vision tasks, only part of the bit-stream will be transmitted for bit-rate saving, while the full bitstream will be transmitted when used for the human vision task. Additionally, we propose a new octree depth level predictor to automatically predict the optimal depth level in order to control the bit-rate cost for the machine vision tasks. As a result, for simple objects/scenarios, we will use fewer depth levels with less bits for the machine tasks, while for complex objects/scenarios, we prefer deeper depth levels of octree with more bits for machine tasks comprehensive. Experimental results on different datasets (e.g., ModelNet10, ModelNet40, ShapeNet and ScanNet) demonstrate that our proposed scalable point could compression framework SPC-Net achieves better performance on the machine vision tasks (e.g., classification, segmentation and detection) without degrading the performance of the human vision task.
1 INTRODUCTION
With the development of advanced 3D technologies, it has become easier to collect point clouds using various types of 3D scanners, including LiDARs and RGB-D cameras. Therefore, a huge amount of point cloud data has been collected, and various point cloud related machine vision tasks like classification, segmentation and detection have attracted increasing attention. However, most point cloud analysis tasks take raw point cloud data as the input, which requires large bandwidth/storage for transmitting/storing massive point cloud data.
Recently, some point cloud compression frameworks Huang et al. (2020); Que et al. (2021) were proposed to save the bandwidth/storage for point cloud transmitting/storing. However, those point cloud compression frameworks are designed for human vision, which thus degrades the performance in the machine vision tasks. Currently, the existing point cloud compression frameworks are not designed for the machine vision tasks. Some recent works Yang et al. (2021); Le et al. (2021); Song et al. (2021); Torfason et al. (2018) have explored image coding for machine tasks by optimizing the network with additional loss functions for the machine vision tasks. However, state-of-the-art point cloud compression algorithms like VoxelContext-Net Que et al. (2021) need to construct the octree and then compress it, in which the octree construction procedure is non-differentiable; thus we cannot directly add a loss function to improve the coding performance for the machine vision tasks. Therefore, it is necessary to design a new point cloud compression framework for both human and machine vision tasks.
In this work, we propose the first point cloud compression framework for both human and machine vision. Our framework follows the scalable coding paradigm, in which the full bit-stream is used for the human vision task, while only part of the bit-stream is used for the machine vision tasks. For the human vision task, we take the state-of-the-art method VoxelContext-Net Que et al. (2021) as an example to compress the point cloud, in which the octrees are constructed and then compressed into bit-streams. For the machine vision tasks, we only transmit part of the bit-stream to reconstruct the first few depth levels of the octrees for bit-rate saving. Additionally, we propose the octree depth level predictor to predict the optimal depth level of the octree for different scenarios when coding for machine vision tasks. As a result, for simple objects/scenarios we use fewer depth levels with fewer bits for bit-rate saving, while for complex objects/scenarios we prefer deeper depth levels of the octree for more accurate prediction. Experimental results demonstrate that our proposed framework SPC-Net achieves promising results on various machine vision tasks without sacrificing the coding performance for the human vision task.
• In this work, we propose a new scalable point cloud compression framework for both machine vision and human vision tasks. To the best of our knowledge, this is the first point cloud compression method for both machine and human vision.
• We propose a new octree depth level predictor to predict the optimal depth of the octree used for the machine vision tasks, where a deeper octree is used for complex objects/scenarios to achieve more accurate prediction, while a shallower octree is used for simple objects/scenarios to achieve a lower bit-rate cost.
• Comprehensive experimental results demonstrate that our proposed scalable point cloud coding framework achieves promising results without sacrificing the coding performance of the human vision task.
2 RELATED WORK
2.1 POINT CLOUD COMPRESSION FOR HUMAN VISION
In the past few years, hand-crafted and learning-based point cloud compression methods Group (2021); Wang et al. (2021b); Biswas et al. (2020); Huang et al. (2020); Zhu et al. (2020); Que et al. (2021) have been proposed by transforming the point cloud data into tree representations for better compression.
Specifically, a few hand-crafted point cloud compression methods Group (2021); Schwarz et al. (2018); Google (2022) have been proposed. For example, the MPEG group proposed the standard point cloud compression method G-PCC (geometry-based point cloud compression) Group (2021), which transforms point cloud data into the octree structure before performing static point cloud compression.
In recent years, some learning-based point cloud compression methods Huang & Liu (2019); Zhu et al. (2020); Huang et al. (2020); Biswas et al. (2020); Que et al. (2021); Wang et al. (2021b;a) have achieved state-of-the-art performance. Huang et al. Huang et al. (2020) and Wang et al. Wang et al. (2021b) followed the learned image compression framework Ballé et al. (2017) to compress the voxelized point clouds. To reduce the bitrate, Biswas et al. Biswas et al. (2020) exploited the spatio-temporal relationships across multiple LiDAR sweeps by using a novel conditional entropy model. Based on Wang et al. (2021b), Wang et al. Wang et al. (2021a) used the lossless compressed octree and the lossy compressed point feature to further improve the coding performance. Que et al. Que et al. (2021) extended the framework by further exploiting the context information among neighbouring nodes and refining the 3D coordinates at the decoder side. Considering that VoxelContext-Net Que et al. (2021) is the state-of-the-art point cloud compression method, we use it as our baseline method for the human vision task.
All the existing methods compress the point cloud data for human perception, which is evaluated by metrics like point-to-point PSNR and point-to-plane PSNR. However, unlike 2D images or videos, most point clouds are not purely collected for human perception. Instead, they are widely used for various real-world machine vision tasks, such as classification, segmentation, and detection, which is unfortunately not considered in these works.
2.2 IMAGE COMPRESSION FOR BOTH MACHINE AND HUMAN VISION TASKS
To the best of our knowledge, there is no existing point cloud compression method for both machine and human vision. In this section, we first discuss the scalable image compression methods for both machine and human vision tasks, and then the other compression methods.
Scalable Methods. Both Choi et al. Choi & Bajić (2022) and Chen et al. Chen et al. (2021) performed scalable image compression by dividing the image bit-streams into different parts and transmitting one or more parts of the bit-streams for the machine or human vision tasks. Liu et al. Liu et al. (2021) proposed a scalable image compression method for fine-grained classification at different levels.
Other Methods. Yang et al. Yang et al. (2021) designed the image encoder by using the edge extraction algorithm, and the reconstructed images from the decoder achieve promising performance for both human vision and machine vision tasks. Le et al. Le et al. (2021) directly added the additional machine vision loss to the compression loss functions to improve the reconstructed image quality for the machine vision tasks. Song et al. Song et al. (2021) compressed the source image through a corresponding quality map produced from different machine vision tasks. Torfason et al. Torfason et al. (2018) combined the image compression network with the detection network, and directly extracted the detection related information from bit-stream without using an image decoder.
In summary, the above methods for machine vision are all lossy compression methods. The encoder extracts helpful features from the images, the decoder reconstructs the images based on the encoded features, and the entropy model calculates the number of bits used for the features. Most methods can adjust the various parameters in the encoder and decoder based on the performance in machine vision tasks. Therefore, it is easy for the encoder to learn representative image features for machine vision. However, most learning-based point cloud compression methods use a lossless compression network. Their encoders and decoders cannot be optimized to extract useful features from the point cloud for machine vision, which hinders the development of point cloud compression methods for the machine vision tasks.
In contrast to these works Liu et al. (2021); Chen et al. (2021); Choi & Bajić (2022); Yang et al. (2021); Le et al. (2021); Song et al. (2021); Torfason et al. (2018), we propose some new modules before and after the compression model to improve the machine vision performance while maintaining the fidelity for human vision by keeping the lossless point cloud compression model unchanged.
3 METHODOLOGY
3.1 THE FRAMEWORK
The overall structure of our scalable point cloud compression framework (SPC-Net) is shown in Figure 1 (b). In this section, we first introduce our coding strategy, and then describe each module in our framework.
Scalable Coding Strategy. Point cloud data is commonly used for various machine vision tasks. Therefore, our SPC-Net is always used for machine vision tasks (e.g., abnormal event detection, such as detecting collisions between pedestrians and vehicles), and the point cloud information flows along the solid arrows as shown in Figure 1 (b). If the human vision task must also be involved (e.g., when the prediction results from the machine vision tasks like event detection are abnormal), our framework can provide a high-quality reconstructed point cloud for further human analysis. It should be mentioned that, like other scalable coding methods, to reconstruct the point clouds for the human vision task we can reuse the bit-stream generated for the machine vision task, which avoids duplicate bit transmission.
Octree Construction, Encoder, Decoder and Point Cloud Reconstruction. The octree construction module converts the point cloud into an octree. An octree is a tree-like data structure used to describe three-dimensional space. Each node of the octree represents a volume element of a cube, and each non-leaf node has eight child nodes. The volume of the parent node can be obtained by adding the volume elements represented by the eight child nodes together. The black nodes in Figure 1 (a) mean there are points in the corresponding cubes, while the white nodes denote empty cubes without any 3D point. Each octree is encoded into a bit-stream by the 3D encoder.
Figure 1: (a) The encoding and decoding process of the octree. B, Bm and Bh denote the full bit-stream, the bit-stream for the machine vision tasks and the bit-stream for the remaining depth levels of the octree, respectively. (b) The overall architecture of our proposed scalable point cloud compression framework SPC-Net, which is designed for both machine vision and human vision. (c) Details of our proposed octree depth level predictor.
The decoder reconstructs the octree from the bit-stream, and the point cloud reconstruction module then restores the point cloud from the octree. In this work, we take VoxelContext-Net Que et al. (2021) as an example and use the same design for all these modules; the details can be found in Appendix A.1.
Scalable Bit-stream Partitioning. Our scalable bit-stream partitioning module splits the full bit-stream into two parts for different tasks. The details are described in Section 3.2.
Octree Depth Level Predictor. Our octree depth level predictor adaptively chooses the octree depth for the machine vision tasks and guides the splitting of the full bit-stream. The details of this module are described in Section 3.3.
Data Processing. The role of this module is to process the point cloud data to compensate for the difference between the output of the compression network and the input of the machine vision task network. The details of this module are provided in Appendix A.1.
Task Specific Network. To adapt to a variety of situations in the point cloud based machine vision tasks, this module will use different networks for different machine vision tasks. For the classification task and the segmentation task, PointNet++ Qi et al. (2017) will be used in this module. For the detection task, VoteNet Qi et al. (2019) is adopted.
3.2 SCALABLE BIT-STREAM PARTITIONING
Although the reconstructed point cloud often achieves promising performance for the human vision task when using the full bit-stream, it contains plenty of redundant information for the machine vision tasks and is thus less effective in terms of the bit-rate cost. Therefore, we design the scalable bit-stream partitioning method to split the bit-stream for both human and machine vision tasks.
Before introducing how to divide the bit-stream, we first introduce how the point cloud bit-stream is generated. Figure 1 (a) shows the encoding and decoding process of the octree. During the encoding process, each octree is encoded from the lower depth levels to the higher depth levels. Therefore, the final full bit-stream can be expressed as B = (b1, b2, ..., bn), where n is the maximum octree depth level and bi represents the bit-stream from the ith depth level. At the decoder side, each octree is reconstructed from the lower depth levels to the higher depth levels. The (i + 1)th depth level of the octree can be reconstructed from the previously reconstructed octree with i depth levels and the extra bits bi+1. For example, with b1 ∪ b2 we can reconstruct the octree with the first two depth levels, and with b1 ∪ b2 ∪ b3 we can reconstruct the octree with the first three depth levels. Based on the above octree encoding and decoding process, we can split the full bit-stream B = (b1, b2, ..., bn) into two parts Bm and Bh according to the octree depth level. Bm = (b1, b2, ..., bi) can be used to reconstruct the octree with the first i depth levels, which is used for the machine vision tasks. Bh = (bi+1, bi+2, ..., bn) reconstructs the remaining depth levels of the octree based on the reconstruction of the first i depth levels, which is used for the human vision task. The optimal splitting level index i is determined by the octree depth level predictor for scalable bit-stream partitioning.
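As a minimal illustration of this partitioning rule, the sketch below splits a per-level bit-stream list at a predicted depth level. The names (split_bitstream, byte strings standing in for per-level bits) are illustrative and not part of the actual codec.

```python
from typing import List, Tuple

def split_bitstream(per_level_bits: List[bytes],
                    machine_depth: int) -> Tuple[List[bytes], List[bytes]]:
    """Split the full bit-stream B = (b1, ..., bn) into Bm and Bh.

    per_level_bits: bits for each octree depth level, ordered from the
        lowest (coarsest) to the highest (finest) level.
    machine_depth: the depth level i predicted for the machine task.
    """
    b_m = per_level_bits[:machine_depth]   # b1..bi, sent for machine vision
    b_h = per_level_bits[machine_depth:]   # b(i+1)..bn, sent on demand for human vision
    return b_m, b_h

# Reusing Bm: the human-vision decoder consumes b_m + b_h, so the bits
# already transmitted for the machine task are never re-sent.
b_m, b_h = split_bitstream([b"\x01", b"\x23", b"\x45", b"\x67"], machine_depth=2)
```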
3.3 OCTREE DEPTH LEVEL PREDICTOR
The design of our octree depth level predictor is inspired by an observation on well-trained machine vision models (for classification, segmentation, and detection): we can often achieve reasonable results when using the point cloud reconstructed from a lower-depth-level octree as the input. Taking the classification results in Figure 2 as an example, some objects with simple shapes like a laptop can be easily recognized from the point cloud reconstructed from the octree with 4 depth levels, while other objects with complex shapes like a guitar can only be recognized from the point cloud reconstructed from the octree with 6 depth levels. Therefore, we can use octrees with lower depth levels to reduce the bit-stream cost and thus save storage space and bandwidth.
To achieve this goal, we propose the octree depth level predictor to decide the optimal depth level of the octree for the machine vision tasks, which can not only achieve reasonable performance for the machine vision tasks but also reduce the bit-rate cost. In addition, the encoder side (e.g., RGB-D cameras or LiDAR sensors) often does not have enough computing power to support complex networks. Therefore, the networks (e.g., PointNet++ and VoteNet) for handling the complex machine vision tasks are placed after the decoder rather than at the encoder side. As shown in Figure 1 (c), our octree depth level predictor is a simple network consisting of a 3-layer MLP and 2 fully-connected layers. To further reduce the computational complexity, we randomly sample 1024 points from the raw point cloud as the input of our octree depth level predictor for the classification and segmentation tasks.
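A minimal PyTorch sketch of this predictor is given below, assuming the layer widths shown in Figure 1 (c) (point-wise MLPs of width 64/128/512, max pooling, an FC layer of width 128, and an FC layer producing one logit per candidate depth level); this is an illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class OctreeDepthLevelPredictor(nn.Module):
    """Predicts a probability distribution over n candidate octree depth levels."""

    def __init__(self, num_levels: int):
        super().__init__()
        # Point-wise MLPs (shared across points), widths 64 -> 128 -> 512.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 512, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, num_levels),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 1024, 3) randomly sampled input points.
        feat = self.point_mlp(xyz.transpose(1, 2))      # (batch, 512, 1024)
        global_feat = feat.max(dim=2).values            # max pooling over points
        return self.head(global_feat).softmax(dim=-1)   # p = (p1, ..., pn)
```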
Our octree depth level predictor selects the optimal octree depth level for machine vision tasks from the global feature of the raw point cloud. According to the different characteristics of the machine vision tasks (e.g., the difficulty of classification), our octree depth level predictor generates n probabilities p = {p1, p2, ..., pn} for the n octree depth levels, and then chooses the octree depth level with the highest probability.
However, the process of choosing the octree depth level with the highest probability is non-differentiable, which prevents the octree depth level predictor from being trained. Therefore, we adopt the Gumbel Softmax strategy Jang et al. (2017) to address this issue. First, we generate the confidence score set p̂ from the probability set p with Gumbel noise as follows:
$\hat{p}_i = p_i + G_i, \quad i \in \{1, 2, ..., n\} \quad (1)$

where $G_i = -\log(-\log \epsilon)$ is the standard Gumbel noise, and $\epsilon$ is randomly sampled from a uniform distribution between 0 and 1. We then generate the one-hot vector $\hat{h} = [\hat{h}_1, \hat{h}_2, ..., \hat{h}_n]$, where $\hat{h}_i = 1$ if $i = \arg\max_j \hat{p}_j$ with $j \in \{1, 2, ..., n\}$, and $\hat{h}_i = 0$ otherwise. $\hat{h}$ is the one-hot vector of the depth level selection result. However, the argmax operation used to generate the one-hot vector is non-differentiable. Therefore, during the backward propagation process, we apply the Gumbel Softmax strategy and relax the one-hot vector $\hat{h}$ to $\tilde{h} = [\tilde{h}_1, \tilde{h}_2, ..., \tilde{h}_n]$ as follows:
$\tilde{h}_i = \frac{\exp(\hat{p}_i/\tau)}{\sum_{j=1}^{n} \exp(\hat{p}_j/\tau)}, \quad i \in \{1, 2, ..., n\} \quad (2)$
where $\tau$ is the temperature parameter. Using the Gumbel Softmax strategy Jang et al. (2017), we select the optimal octree depth level for machine tasks with the argmax function during the forward propagation process, and approximate the gradient of the argmax function with Eq. (2) in the backward propagation process. During the inference stage, we directly select the depth level with the maximum probability in p.
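The straight-through selection described above can be written in a few lines of PyTorch; the sketch below is a generic implementation of Eqs. (1)-(2) (hard argmax in the forward pass, softmax gradient in the backward pass), not the authors' code.

```python
import torch

def gumbel_select(p: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Straight-through Gumbel selection over depth-level probabilities.

    p: (batch, n) probabilities from the octree depth level predictor.
    Returns a (batch, n) one-hot selection in the forward pass whose
    gradient flows through the relaxed softmax of Eq. (2).
    """
    eps = torch.rand_like(p).clamp_(1e-10, 1.0)
    g = -torch.log(-torch.log(eps))               # standard Gumbel noise
    p_hat = p + g                                 # Eq. (1)
    h_soft = torch.softmax(p_hat / tau, dim=-1)   # Eq. (2), used in backward
    idx = p_hat.argmax(dim=-1, keepdim=True)
    h_hard = torch.zeros_like(p).scatter_(-1, idx, 1.0)
    # Forward pass uses h_hard; backward pass uses the gradient of h_soft.
    return h_hard + h_soft - h_soft.detach()
```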
3.4 TRAINING STRATEGY
Loss Function. In our SPC-Net, we need to train three modules: the octree depth level predictor, the compression module (i.e., the encoder and the decoder) and the task specific network module. As the encoder and the decoder are the same as in VoxelContext-Net Que et al. (2021), we train the compression module with the same setting as VoxelContext-Net. For the task network module, we train the network with the same setting as PointNet++ Qi et al. (2017) or VoteNet Qi et al. (2019). We train our octree depth level predictor after the other two modules are pretrained, fixing the parameters in the compression module and the task network module. The loss function used for training our octree depth level predictor is defined as:
$\mathrm{loss} = \sum_{i=1}^{n} (\lambda \cdot \mathrm{bpp}_i + L_i) \cdot \tilde{h}_i \quad (3)$
Our method selects the optimal depth level from $n$ depth levels. $\mathbf{bpp} = (\mathrm{bpp}_1, \mathrm{bpp}_2, ..., \mathrm{bpp}_n)$, where bpp stands for bits per point and measures the length of the bit-stream. $\mathrm{bpp}_i$ ($i \in \{1, 2, ..., n\}$) represents the bpp for coding the octree with $i$ depth levels, and is obtained from the encoder. $\mathbf{L} = (L_1, L_2, ..., L_n)$, and $L_i$ is formulated as follows,
$L_i = D(f(\hat{x}_i), y_{gt}), \quad i \in \{1, 2, ..., n\} \quad (4)$
where $f$ is the machine vision task network (i.e., PointNet++ or VoteNet), $\hat{x}_i$ is the point cloud reconstructed from the octree with $i$ depth levels, and $y_{gt}$ is the ground truth for the machine vision task. $D$ calculates the loss between $f(\hat{x}_i)$ and $y_{gt}$. $\tilde{h} = [\tilde{h}_1, \tilde{h}_2, ..., \tilde{h}_n]$, where $\tilde{h}_i$ is defined in Eq. (2). $\lambda$ is a hyper-parameter used to balance the trade-off between bpp and $L$.
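A sketch of this training objective is given below, assuming the per-level bpp values and task losses have been precomputed for a batch; the function names are illustrative.

```python
import torch

def depth_predictor_loss(h_tilde: torch.Tensor,
                         bpp: torch.Tensor,
                         task_loss: torch.Tensor,
                         lam: float) -> torch.Tensor:
    """Eq. (3): expected rate-performance cost under the relaxed selection.

    h_tilde:   (batch, n) relaxed one-hot selection from Eq. (2).
    bpp:       (batch, n) bits per point when coding i depth levels.
    task_loss: (batch, n) machine-task loss L_i from Eq. (4).
    """
    per_level_cost = lam * bpp + task_loss   # (batch, n)
    return (per_level_cost * h_tilde).sum()
```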
4 EXPERIMENT
4.1 DATASET
ModelNet. ModelNet Wu et al. (2015) is a widely used benchmark to evaluate the point cloud classification performance, which contains two datasets named ModelNet40 and ModelNet10. ModelNet40 dataset is divided into 40 categories, which has 9843 point cloud data for training and 2468
point cloud data for testing. The ModelNet10 dataset is a subset of ModelNet40, which has only 10 categories, with 3991 point cloud data for training and 908 point cloud data for testing.
ShapeNet. ShapeNet Yi et al. (2016) contains 16881 point cloud data from 16 object classes. Each point cloud data contains 2-5 parts, with a total of 50 part categories. ShapeNet has 14007 point cloud data for training and 2874 point cloud data for testing.
ScanNet. ScanNet Dai et al. (2017) is a real-world dataset used for the 3D object detection task, which contains 1201 scans for training and 312 scans for testing. Following VoxelContext-Net Que et al. (2021), we sample 50,000 points from each scan.
4.2 EXPERIMENT DETAILS
Baseline. To the best of our knowledge, this is the first point cloud compression framework for both machine vision and human vision tasks. Therefore, we directly use the encoder and the decoder from VoxelContext-Net Que et al. (2021) as our baseline method, which is the state-of-the-art point cloud compression method designed for the human vision task. We also use the same encoder and decoder for point cloud compression in our proposed framework for a fair comparison with the baseline method.
For the baseline methods for the machine vision tasks, we directly use the reconstructed point clouds from VoxelContext-Net as the input of the networks for the machine vision tasks. PointNet++ Qi et al. (2017) is used for the classification task and segmentation task. VoteNet Qi et al. (2019) is adopted for the detection task. For the classification task, we use the octree with the depth levels of 3,4,5 to compress the input raw point clouds. For the segmentation task, we use the octree with the depth levels of 4,5,6 for data compression. For the detection task, we use the octree with the depth levels of 7,8,9 for data compression. As suggested in VoxelContext-Net Que et al. (2021), we train the machine vision task networks with the raw point clouds and evaluate the classification/segmentation/detection results based on the reconstructed point clouds.
Evaluation Metric. We use bit per point (bpp) to denote the bit cost in the compression procedure. For the machine vision tasks, accuracy, mean intersection-over-union (mIoU) and mean average precision (mAP) are used to measure the performance for the classification, segmentation and detection tasks, respectively. For the human vision task, Point-to-Point PSNR Tian et al. (2017) and Chamfer distance (CD) Fan et al. (2017); Huang & Liu (2019) are used to measure the distortion between the reconstructed point cloud and the raw point cloud, which are widely used metrics for measuring compression performance.
Implementation Details. We train our model in two stages. At the first stage, we only train the encoder, the decoder and the task specific networks. We use the same training strategy as VoxelContext-Net Que et al. (2021) to train the encoder and the decoder. For the different task specific networks (i.e., PointNet++ Qi et al. (2017) and VoteNet Qi et al. (2019)), we follow the settings in their works. At the second stage, based on the loss function in Eq. (3), we train the octree depth level predictor while fixing the parameters in the encoder, the decoder and the task specific networks. For the classification task, the hyper-parameter λ is set from 0.01 to 16. For the segmentation task, the hyper-parameter λ is set from 0.02 to 8. For the detection task, the hyper-parameter is set as 0.3, 0.6, 1 and 2.
The whole network is implemented in PyTorch with CUDA support. At the second training stage, we set the batch size as 48. We use the Adam optimizer Kingma & Ba (2015) with a learning rate of 1e-4 for the first 50 epochs, 1e-5 for the next 30 epochs, and 1e-6 for the last 20 epochs.
In our experiments, the maximum depth levels of the octrees are set as 8, 8 and 9 for the human vision task on the ModelNet, ShapeNet and ScanNet datasets, respectively, as these predefined depth levels are sufficient for reconstructing high-quality point clouds for human viewing. For machine vision, the maximum depth levels of the octree are set as 7, 7 and 9 on the ModelNet, ShapeNet and ScanNet datasets, respectively.
4.3 EXPERIMENT RESULTS
Classification Task. The classification results of our SPC-Net on the ModelNet10, ModelNet40 and ShapeNet datasets are shown in Figure 3 (a), (b) and (c). It is observed that our proposed
framework SPC-Net achieves a 1% accuracy improvement at 0.05 bpp on the ModelNet10 dataset when compared with our baseline method. On the ModelNet40 dataset, our SPC-Net achieves about 10% accuracy improvement at 0.056 bpp and saves about 0.8 bpp at 91.8% accuracy. On the ShapeNet dataset, our SPC-Net achieves more than 10% accuracy improvement at 0.01 bpp when compared with our baseline method using 4 octree depth levels. The experimental results demonstrate that our SPC-Net improves the performance when the input point cloud is compressed for the classification task.
Segmentation Task. The segmentation results of our SPC-Net on the ShapeNet dataset are shown in Figure 3 (d). We observe that our proposed framework SPC-Net achieves a 0.8% mIoU improvement at 0.08 bpp when compared with our baseline method. Our method saves more than 10% bpp at a similar mIoU when compared with our baseline method. Therefore, our method achieves better performance than the baseline method for the segmentation task.
Detection Task. The detection results of our SPC-Net on the ScanNet dataset are shown in Figure 3 (e) and (f). From Figure 3 (e), we observe that our framework SPC-Net achieves about 0.01 mAP@0.25 improvement at 3 bpp when compared with our baseline method. At the highest bpp, our SPC-Net saves more than 20% bpp when compared with our baseline. From Figure 3 (f), we observe that our SPC-Net improves mAP@0.5 by about 0.08 and saves about 0.3 bpp when compared with our baseline method at 3 bpp. At the highest bpp, our SPC-Net saves more than 15% bpp when compared with our baseline method. The experimental results demonstrate that our proposed framework can also improve the performance of the detection task.
Human Vision Results. Our SPC-Net achieves exactly the same compression performance as our baseline method VoxelContext-Net Que et al. (2021) (please refer to Appendix A.2 for more details). It should be mentioned that in most 2D image compression methods for both machine and human vision Choi & Bajić (2022); Yang et al. (2021); Torfason et al. (2018), the compression performance for human vision usually drops in order to achieve better performance for the machine vision tasks. A key advantage of our proposed framework SPC-Net is therefore that it improves the performance for the machine vision tasks without sacrificing the compression results for human vision.
4.4 MODEL ANALYSIS
In order to balance the bit-rate cost and the performance of the machine vision tasks in different scenarios, our proposed octree depth level predictor can dynamically adjust the number of point clouds reconstructed from octrees at different depth levels. The selection percentages of the different octree depth levels for different tasks at different λ values are shown in Figure 4. We observe that smaller λ values lead to more selections of higher depth levels. As the λ value increases, our octree depth level predictor selects lower depth levels of the octrees. The selection percentages in Figure 4 demonstrate that our SPC-Net can dynamically select the optimal depth level of the octree at different λ values for different machine vision tasks.
In Table 1, we evaluate our SPC-Net for the classification task on the ModelNet40 dataset when setting λ = 0.25. It is observed that lower depth levels of the octree are preferred for the simple categories (e.g., chair, laptop, bed), for which it can still achieve more than 95% accuracy. Therefore, our SPC-Net saves bits while still achieving promising classification performance. For the complex categories (e.g., person, curtain, guitar), our octree depth level predictor prefers higher depth levels. As it is hard to recognize objects from the complex categories, our SPC-Net needs to spend more bits on higher depth levels to achieve better classification results. The results demonstrate that our proposed octree depth level predictor can select different depth levels for different input point clouds according to their characteristics (e.g., whether they belong to "simple" or "complex" categories).
5 CONCLUSION
In this work, we have proposed a new scalable point cloud compression framework SPC-Net for both machine vision and human vision tasks. In our SPC-Net, we propose a new scalable bit-stream partitioning method based on the point cloud encoder-decoder structure in order to make the compressed point clouds more suitable for both tasks. Additionally, considering the purposes of different tasks and the characteristics of different point clouds, we design a new octree depth level predictor to guide the division of the bit-stream. The experimental results on four benchmark datasets demonstrate that our SPC-Net achieves promising results for three machine vision tasks (i.e., classification, segmentation, detection) without sacrificing the performance of the human vision task.
A APPENDIX
A.1 THE FRAMEWORK
Octree Construction. As shown in Figure 1 (a), the octree is a point cloud storage structure that is beneficial for compression. To construct the octree, we first surround the point cloud with the smallest enclosing cube. Then this cube is split into eight sub-cubes of the same size. Each sub-cube is recorded as empty if it contains no point, and as nonempty otherwise (i.e., there are some points in the cube). After that, for each nonempty sub-cube, we repeat the above splitting process to reduce the size of the cubes until the depth of the octree reaches the predefined maximum depth value. In the constructed octree, a non-leaf node stands for one cube, and each nonempty non-leaf node has eight child nodes that stand for its sub-cubes. A simplified construction procedure is sketched below.
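The following sketch is a simplified recursive construction following this description (bounding cube, eight-way split, recursion into nonempty sub-cubes up to a maximum depth); it is for illustration only and omits the serialization used by the actual codec.

```python
import numpy as np

def build_octree(points: np.ndarray, center: np.ndarray, half: float,
                 depth: int, max_depth: int) -> dict:
    """Recursively build an octree node over `points` (N x 3).

    Returns a dict with the node's occupancy and, for nonempty
    non-leaf nodes, its eight children.
    """
    node = {"center": center, "half": half, "empty": len(points) == 0}
    if node["empty"] or depth == max_depth:
        return node
    children = []
    for octant in range(8):  # one child per sub-cube
        sign = np.array([(octant >> k) & 1 for k in range(3)]) * 2 - 1
        child_center = center + sign * half / 2
        # Points falling into this sub-cube (boundary points may be
        # assigned to more than one child in this simplified version).
        mask = np.all(np.sign(points - center) * sign >= 0, axis=1)
        children.append(build_octree(points[mask], child_center,
                                     half / 2, depth + 1, max_depth))
    node["children"] = children
    return node

pts = np.random.rand(1000, 3)
root = build_octree(pts, center=np.array([0.5, 0.5, 0.5]), half=0.5,
                    depth=0, max_depth=4)
```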
Encoder. The encoder compresses the octree into the bit-stream. All octrees are encoded into the bit-stream from the low depth levels to the high depth levels, as shown in Figure 1 (a). Therefore, we can divide the full bit-stream into two parts according to the selected octree depth.
Decoder and Point Cloud Reconstruction. The decoder restores the octree from the bit-stream, and the point cloud reconstruction module reconstructs the point cloud coordinates from the octree. The reconstructed point coordinates are the coordinates of the center points of the smallest nonempty cubes. The point cloud coordinates can not only be used in machine vision tasks but also be easily visualized for human vision.
Data Processing. In each octree, all points within one smallest cube are combined into one point, so the reconstructed point cloud has fewer points than the raw point cloud. Additionally, the number of removed points differs across point clouds, so the output point clouds from the point cloud reconstruction module have different numbers of points. However, our framework requires a batch size larger than 1 (e.g., 32 or 48), and point clouds with different numbers of points cannot be directly concatenated into one batch. Therefore, we randomly sample each point cloud to a predefined number of points to unify the point cloud size. As noted above, the octree combines several points into one point, while each point corresponds to one target in the segmentation task. If all points in the smallest cube have the same label, we use this label for the new combined point; if the points in one smallest cube have different labels, we use the label of the point that is closest to the combined point.
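The sketch below illustrates these two steps (resampling to a fixed size for batching, and assigning a label to each combined point); the helper names are hypothetical.

```python
import numpy as np

def resample(points: np.ndarray, num_points: int) -> np.ndarray:
    """Randomly sample `points` (N x 3) to a fixed size for batching."""
    idx = np.random.choice(len(points), num_points,
                           replace=len(points) < num_points)
    return points[idx]

def combined_label(cube_points: np.ndarray, cube_labels: np.ndarray,
                   center: np.ndarray) -> int:
    """Label for the point that replaces all points in one smallest cube."""
    if len(set(cube_labels.tolist())) == 1:
        return int(cube_labels[0])            # all points agree
    # Otherwise use the label of the point closest to the combined point.
    d = np.linalg.norm(cube_points - center, axis=1)
    return int(cube_labels[np.argmin(d)])
```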
A.2 HUMAN VISION RESULT
The experimental results of our SPC-Net for human vision are shown in Table 2. In this table, we observe that our SPC-Net achieves the same performance as VoxelContext-Net.
The visualization results of the segmentation task are shown in Figure 5. From the results of the table and the mug in the first two rows of Figure 5, the point clouds reconstructed from the octree with 5 depth levels achieve similar segmentation performance to the point clouds reconstructed from octrees with more depth levels. Therefore, our octree depth level predictor
prefers the octree with 5 depth levels for the segmentation task in these two cases to save bits. From the results of the car and the airplane in the last two rows of Figure 5, the point clouds reconstructed from the octrees with 7 depth levels achieve much better segmentation performance than those from octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octree with 7 depth levels to achieve better segmentation performance in these two cases.
The visualization results of the detection task are shown in Figure 6. In the first row, the point cloud reconstructed from the octree with 7 depth levels achieves the same mAP@0.25 performance as the point clouds reconstructed from the octrees with higher depth levels. Therefore, our octree depth level predictor selects 7 depth levels in this case to save bits. In the second row, the point cloud reconstructed from the octree with 9 depth levels achieves much better mAP@0.25 performance than the point clouds reconstructed from the octrees with fewer depth levels. Therefore, our octree depth level predictor selects the octree with 9 depth levels for better detection performance in this case.
It is observed that our proposed octree depth level predictor can select the optimal depth levels of the octrees for different cases, which demonstrates its effectiveness. | 1. What is the focus and contribution of the paper on point cloud compression?
2. What are the strengths of the proposed approach, particularly in its application to various tasks?
3. What are the weaknesses of the paper, especially regarding its limitation to a single baseline compression approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes SPC-Net, a new compression method for point clouds. The paper claims it is the first to address point cloud compression for both ML tasks and human vision. The key idea of SPC-Net is to train a small neural network to select the octree depth (~= the resolution of the point cloud) to use for each object and ML task. The bit-stream is then truncated to that depth. For human vision, no selection is made and the full octree is used. This idea is then combined with a state-of-the-art point cloud compression architecture: VoxelContext-Net (which is retrained). This combined model is evaluated on rate/distortion (as a proxy for human vision) and on three ML tasks: classification, segmentation and detection. 4 datasets are considered, but not for all tasks. Results show no loss of performance for human vision and improvements over the base model for the ML tasks.
Strengths And Weaknesses
Strength
Selecting the octree level is potentially applicable to other approaches.
Evaluation on three different machine learning tasks.
Performance is good. There is no loss of performance for human vision and improvements in these three tasks.
Weaknesses
The idea is only evaluated on one baseline compression approach.
No code is provided (though lots of experimental details are).
The idea is really specific to point cloud compression. If I am wrong, I would suggest highlighting other application domains.
Clarity, Quality, Novelty And Reproducibility
The paper is well organized but would benefit from proofreading.
A lot of implementation details are given, so I think reproducibility is possible.
As far as I know the idea is novel.
On the other hand, it is simple but potentially useful. |
ICLR | Title
UNITER: Learning UNiversal Image-TExt Representations
Abstract
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodal inputs are jointly processed for visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design three pre-training tasks: Masked Language Modeling (MLM), Image-Text Matching (ITM), and Masked Region Modeling (MRM, with three variants). Different from concurrent work on multimodal pre-training that applies joint random masking to both modalities, we use conditioned masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). Comprehensive analysis shows that conditioned masking yields better performance than unconditioned masking. We also conduct a thorough ablation study to find an optimal setting for the combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2.
1 INTRODUCTION
Most Vision-and-Language tasks rely on joint multimodal embeddings to bridge the semantic gap between visual and textual clues in images and text, although such representations are usually tailored for specific tasks. For example, MCB (Fukui et al., 2017), BAN (Kim et al., 2018), and DFAF (Gao et al., 2019) proposed advanced multimodal fusion methods for Visual Question Answering (VQA) (Antol et al., 2015). SCAN (Lee et al., 2018) and MAttNet (Yu et al., 2018) studied learning latent alignment between words and image regions for Image-Text Retrieval (Wang et al., 2016) and Referring Expression Comprehension (Kazemzadeh et al., 2014) tasks. While each of these proposed models has pushed the state of the art on respective benchmarks, their architectures are diverse and the learned representations are highly task-specific, preventing them from being generalized to other tasks. This raises a million-dollar question: can we learn a universal image-text representation for all V+L tasks?
To answer this question, we introduce UNiversal Image-TExt Representations (UNITER), a largescale pre-trained model for multimodal embedding. We adopt Transformer (Vaswani et al., 2017) as the core of our model, to leverage its elegant self-attention mechanism designed for learning contextualized representations. Inspired by BERT (Devlin et al., 2019), which has successfully applied Transformer to NLP tasks through large-scale language modeling, we pre-train UNITER through three pre-training tasks: (i) Masked Language Modeling (MLM) conditioned on image; (ii) Masked Region Modeling (MRM) conditioned on text; and (iii) joint Image-Text Matching (ITM). To further investigate the effectiveness of MRM, we propose three MRM variants: (i) Masked Region Classification (MRC); (ii) Masked Region Feature Regression (MRFR); and (iii) Masked Region Classification with KL-divergence (MRC-kl).
As shown in Figure 1, UNITER first encodes image regions (visual features and bounding box features) and textual words (tokens and positions) into a common embedding space with Image Embedder and Text Embedder, then applies a Transformer module to learn generalizable contextualized
embeddings for each region and word through the aforementioned pre-training tasks. Compared with LXMERT (Tan & Bansal, 2019) and ViLBERT (Lu et al., 2019) that use two streams (one Transformer for each modality), our UNITER model can learn joint contextualized representations for image regions and textual words through a single Transformer. Besides, our masked language/region modeling is conditioned on full observation of image/text, different from other concurrent pre-trained models that apply joint random masking to both modalities. We show that the conditional masking strategy can successfully ease the misalignment between images and text, and obtain better joint embeddings for downstream tasks. Detailed ablation study also demonstrates that the combination of MLM+ITM+MRC-kl+MRFR yields the best pre-training performance.
To demonstrate the power of UNITER, we evaluate on six V+L tasks across nine datasets, including: (i) VQA; (ii) Visual Commonsense Reasoning (VCR) (Zellers et al., 2019); (iii) NLVR2 (Suhr et al., 2019); (iv) Visual Entailment (Xie et al., 2019); (v) Image-Text Retrieval (including zero-shot setting) (Lee et al., 2018); and (vi) Referring Expression Comprehension. Our UNITER model is trained on a large-scale V+L dataset composed of four subsets: (i) COCO (Lin et al., 2014); (ii) Visual Genome (VG) (Krishna et al., 2017); (iii) Conceptual Captions (CC) (Sharma et al., 2018); and (iv) SBU Captions (Ordonez et al., 2011). Experiments show that UNITER achieves new state of the art with significant performance boost across all six downstream tasks. Moreover, training on additional CC and SBU data (containing unseen images/text in downstream tasks) further boosts model performance over training on COCO and VG only.
Our contributions can be summarized as follows: (i) We introduce UNITER, a powerful UNiversal Image-TExt Representations for Vision-and-Language tasks. (ii) We achieve new state of the art (SOTA) on multiple V+L benchmarks, outperforming existing SOTA and concurrent multimodal pre-training methods by a large margin. (iii) We present extensive experiments and analysis to provide useful insights on the effectiveness of each pre-training task/dataset for multimodal encoder training.
2 RELATED WORK
Self-supervised learning utilizes original data as its own source of supervision, which has been applied to many Computer Vision tasks, such as image colorization (Zhang et al., 2016), solving jigsaw puzzles (Noroozi & Favaro, 2016; Trinh et al., 2019), inpainting (Pathak et al., 2016), rotation prediction (Gidaris et al., 2018), and relative location prediction (Doersch et al., 2015). Recently, pre-trained language models such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), GPT2 (Radford et al., 2019), and XLNet (Yang et al., 2019) have shown great advances for NLP tasks. There are two keys to their success: effective pre-training tasks over large language corpus, and the use of Transformer (Vaswani et al., 2017) for learning contextualized text representations.
More recently, there has been some concurrent work on self-supervised learning for multimodal tasks, by pre-training on large-scale image/video and text pairs, then finetuning on downstream tasks. For example, VideoBERT (Sun et al., 2019) applied BERT to learn a bidirectional joint distribution over quantized video frame features and linguistic tokens from video-text pairs. ViLBERT (Lu
et al., 2019) and LXMERT (Tan & Bansal, 2019) introduced the two-stream architecture, where two Transformers are applied to images and text independently, which are fused by a third Transformer at a later stage. On the other hand, VisualBERT (Li et al., 2019b), Unicoder-VL (Li et al., 2019a), VL-BERT (Su et al., 2019) and B2T2 (Alberti et al., 2019) proposed the single-stream architecture, where a single Transformer is applied to both image and text. Specifically, the LXMERT model was pre-trained with downstream tasks such as VQA (Antol et al., 2015) and GQA (Hudson & Manning, 2019), while the others were pre-trained on image-text pairs only. Our UNITER model belongs to the second family. One key difference between UNITER and the other methods is the masking approach in the pre-training tasks. Instead of randomly masking both image regions and sentence words, we use conditional masking, i.e., masking only one modality while keeping the other untainted. In addition, we examine the best combination of pre-training tasks through a thorough ablation study on the effects of each pre-training task and dataset on downstream tasks.
Another related work is DFAF (Gao et al., 2019), which proposed a novel architecture of inter-modality and intra-modality attention modules to learn the latent alignment between two modalities for VQA. Compared with Gao et al. (2019), UNITER learns a relatively more generic V+L representation via pre-training.
3 UNIVERSAL IMAGE-TEXT REPRESENTATIONS
In this section, we first introduce the model architecture of UNITER (Section 3.1), then describe the designed pre-training tasks and V+L datasets used for pre-training (Section 3.2 and 3.3).
3.1 MODEL OVERVIEW
The model architecture of UNITER is illustrated in Figure 1. Given a pair of an image and a sentence, UNITER takes the visual regions of the image and the textual tokens of the sentence as the input. We design an Image Embedder and a Text Embedder to extract their respective embeddings. These embeddings are then fed into a multi-layer self-attention Transformer to learn a cross-modality contextualized embedding between visual regions and textual tokens. Note that the self-attention mechanism in Transformer is order-less; thus, it is necessary to explicitly encode the positions/locations of tokens/regions as additional inputs.
Specifically, in Image Embedder, we first use Faster R-CNN1 to extract the visual features (pooled ROI features) for each region. We also encode the location features for each region via a 7- dimensional vector2. Both visual and location features are then fed through a fully-connected (FC) layer, to be projected into the same embedding space. The final visual embedding for each region is obtained by summing up the two FC outputs and then passing through a layer normalization (LN) layer. For Text Embedder, we follow BERT (Devlin et al., 2019) and tokenize the input sentence into WordPieces (Wu et al., 2016). The final representation for each sub-word token3 is obtained via summing up its word embedding and position embedding, followed by another LN layer4.
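A minimal PyTorch sketch of the Image Embedder described above (two FC projections, summation, and layer normalization); the dimensions are illustrative, and the modality embedding from footnote 4 is omitted.

```python
import torch
import torch.nn as nn

class ImageEmbedder(nn.Module):
    """Projects ROI features and 7-d location features into a joint space."""

    def __init__(self, visual_dim: int = 2048, hidden: int = 768):
        super().__init__()
        self.visual_fc = nn.Linear(visual_dim, hidden)
        self.loc_fc = nn.Linear(7, hidden)   # [x1, y1, x2, y2, w, h, w*h]
        self.ln = nn.LayerNorm(hidden)

    def forward(self, roi_feat: torch.Tensor, loc_feat: torch.Tensor):
        # roi_feat: (batch, K, visual_dim); loc_feat: (batch, K, 7)
        return self.ln(self.visual_fc(roi_feat) + self.loc_fc(loc_feat))
```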
We introduce three main tasks to pre-train our model: Masked Language Modeling conditioned on image regions (MLM), Masked Region Modeling conditioned on input text (with three variants) (MRM), and Image-Text Matching (ITM). As shown in Figure 1, our MRM and MLM are in analogy to BERT, where we randomly mask some words or regions from the input and learn to recover the words or regions as the output of Transformer. Specifically, word masking is realized by replacing the token with a special token [MASK], and region masking is implemented by replacing the visual feature vector with all zeros. Note that each time we only mask one modality while keeping the other modality intact, instead of randomly masking both modalities like ViLBERT and LXMERT. This prevents potential miss-alignment when a masked region happens to be described by a masked word. Empirically, we show that with conditional masking, our model is able to learn better embeddings
1Our Faster R-CNN was pre-trained on Visual Genome object+attribute data (Anderson et al., 2018). 2[x1, y1, x2, y2, w, h, w ∗ h] (normalized top/left/bottom/right coordinates, width, height, and area.) 3We use word/sub-word and token interchangeably throughout the rest of the paper. 4We also use a special modality embedding to help the model distinguish between textual and visual input, which is similar to the ‘segment embedding’ in BERT. This embedding is also summed before the LN layer in each embedder. For simplicity, this modality embedding is omitted in Figure 1.
(in Section 4.2). Lastly, we also learn an instance-level alignment (rather than token/region-level) between the whole image and the sentence via ITM. During training, we sample both positive and negative image-sentence pairs and learn their matching scores.
To pre-train UNITER with the aforementioned different tasks, we randomly sample one pre-training task for each mini-batch and train on only one objective per SGD update.
3.2 PRE-TRAINING TASKS
Masked Language Modeling (MLM) We denote the image regions as $\mathbf{v} = \{v_1, ..., v_K\}$, the input words as $\mathbf{w} = \{w_1, ..., w_T\}$, and the mask indices as $\mathbf{m} \in \mathbb{N}^M$.5 In MLM, we randomly mask out the input words with a probability of 15%, and replace the masked ones $\mathbf{w}_{\mathbf{m}}$ with the special token [MASK]6. The goal is to predict these masked words based on the observation of their surrounding words $\mathbf{w}_{\setminus \mathbf{m}}$ and all image regions $\mathbf{v}$, by minimizing the negative log-likelihood:
$\mathcal{L}_{\mathrm{MLM}}(\theta) = -\mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\, \log P_\theta(\mathbf{w}_{\mathbf{m}} \mid \mathbf{w}_{\setminus \mathbf{m}}, \mathbf{v}). \quad (1)$
where $\theta$ denotes the trainable parameters. Each pair $(\mathbf{w}, \mathbf{v})$ is sampled from the whole training set $D$.
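A sketch of the conditional masking used for MLM, assuming BERT-style token handling; only the text modality is masked while all image regions stay intact, and the 15% masking rate with the 80/10/10 split from footnote 6 is applied. This is an illustration, not the authors' code.

```python
import torch

def conditional_mlm_mask(tokens: torch.Tensor, mask_id: int, vocab_size: int,
                         p: float = 0.15):
    """Mask text tokens only; image regions are passed through untouched.

    tokens: (batch, T) token ids. Returns masked tokens and MLM labels
    (-100 marks positions that do not contribute to the loss).
    """
    labels = tokens.clone()
    masked = torch.rand_like(tokens, dtype=torch.float) < p
    labels[~masked] = -100
    out = tokens.clone()
    r = torch.rand_like(tokens, dtype=torch.float)
    out[masked & (r < 0.8)] = mask_id                 # 80% -> [MASK]
    rand_ids = torch.randint_like(tokens, vocab_size)
    sel = masked & (r >= 0.8) & (r < 0.9)             # 10% -> random word
    out[sel] = rand_ids[sel]                          # remaining 10% unchanged
    return out, labels
```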
Image-Text Matching (ITM) In ITM, an additional special token [CLS] is fed into our model, which indicates the fused representation of both modalities. The inputs to ITM are a sentence and a set of image regions, and the output is a binary label (0 for a negative match, and 1 for a positive match). We extract the representation of the [CLS] token as the joint representation of the input text and image, which is then fed into a fully-connected layer and a sigmoid function to predict a score between 0 and 1. We denote the output score as $s_\theta(\mathbf{w},\mathbf{v})$. The ITM supervision is over the [CLS] token.7 During training, we sample a positive or negative pair $(\mathbf{w},\mathbf{v})$ from the dataset $D$ at each step. The negative pair is created by replacing the image or text in a paired sample with a randomly-selected one from other samples. We denote the label as $y \in \{0, 1\}$, indicating whether the sampled pair is a match. Then we apply a binary cross-entropy loss for optimization:
$\mathcal{L}_{\mathrm{ITM}}(\theta) = -\mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\,[\, y \log s_\theta(\mathbf{w},\mathbf{v}) + (1-y) \log(1 - s_\theta(\mathbf{w},\mathbf{v})) \,]. \quad (2)$
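A sketch of the ITM head on the [CLS] representation, with the BCE loss of Eq. (2); sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class ITMHead(nn.Module):
    """Binary matching score from the [CLS] token representation."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        self.fc = nn.Linear(hidden, 1)

    def forward(self, cls_repr: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(cls_repr)).squeeze(-1)  # s_theta in [0, 1]

def itm_loss(score: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """Eq. (2): binary cross-entropy over matched (1) / mismatched (0) pairs."""
    return nn.functional.binary_cross_entropy(score, label.float())
```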
Masked Region Modeling (MRM) Similar to MLM, we also sample image regions and mask their visual features with a probability of 15%. The model is trained to reconstruct the masked regions $\mathbf{v}_{\mathbf{m}}$ given the remaining regions $\mathbf{v}_{\setminus \mathbf{m}}$ and all the words $\mathbf{w}$. The visual features of the masked regions are replaced by zeros. Unlike textual tokens that are represented as discrete labels, visual features are high-dimensional and continuous, and thus cannot be supervised via a class likelihood. Instead, we propose three variants of Masked Region Modeling, which share the same objective base:
$\mathcal{L}_{\mathrm{MRM}}(\theta) = \mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\, f_\theta(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\setminus \mathbf{m}}, \mathbf{w}). \quad (3)$
1) Masked Region Feature Regression (MRFR) MRFR learns to regress the Transformer output of each masked region $v_m^{(i)}$ to its visual features. Specifically, we apply an FC layer to convert its Transformer output into a vector $h_\theta(v_m^{(i)})$ of the same dimension as the input ROI pooled feature $r(v_m^{(i)})$. Then we apply L2 regression between the two: $f_\theta(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\setminus \mathbf{m}}, \mathbf{w}) = \sum_{i=1}^{M} \|h_\theta(v_m^{(i)}) - r(v_m^{(i)})\|_2^2$.
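A sketch of the MRFR objective; the hidden and ROI feature dimensions are illustrative.

```python
import torch
import torch.nn as nn

class MRFRHead(nn.Module):
    """Regress Transformer outputs of masked regions back to ROI features."""

    def __init__(self, hidden: int = 768, roi_dim: int = 2048):
        super().__init__()
        self.fc = nn.Linear(hidden, roi_dim)

    def forward(self, masked_out: torch.Tensor, roi_feat: torch.Tensor):
        # masked_out: (M, hidden) Transformer outputs at masked positions.
        # roi_feat:   (M, roi_dim) original ROI pooled features r(v_m).
        # Sum of squared L2 distances over the M masked regions.
        return ((self.fc(masked_out) - roi_feat) ** 2).sum(dim=-1).sum()
```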
2) Masked Region Classification (MRC) MRC learns to predict the object semantic class for each masked region. We first feed the Transformer output of the masked region $v_m^{(i)}$ into an FC layer to predict the scores of $K$ object classes, which further go through a softmax function to be transformed into a normalized distribution $g_\theta(v_m^{(i)}) \in \mathbb{R}^K$. Note that there is no ground-truth label, as the object categories are not provided. Thus, we use the object detection output from Faster R-CNN, and take the detected object category (with the highest confidence score) as the label of the masked region, which is converted into a one-hot vector $c(v_m^{(i)}) \in \mathbb{R}^K$. The final objective minimizes the cross-entropy (CE) loss: $f_\theta(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\setminus \mathbf{m}}, \mathbf{w}) = \sum_{i=1}^{M} \mathrm{CE}(c(v_m^{(i)}), g_\theta(v_m^{(i)}))$.
5 N denotes the natural numbers, M is the number of masked tokens, and m is the set of masked indices. 6 Following BERT, we decompose this 15% into 10% random word, 10% unchanged, and 80% [MASK]. 7 The supervision over the [CLS] token in pretraining also alleviates the input mismatch problem between pretraining tasks and downstream finetuning tasks, since most of the downstream tasks take the representation of the [CLS] token as the joint representation.
3) Masked Region Classification with KL-Divergence (MRC-kl) MRC takes the most likely object class from the object detection model as the hard label (w.p. 0 or 1), which assumes the detected object class is the ground-truth label for the region. However, this may not be true, as no ground-truth label is provided for the detected region. Thus, in MRC-kl, we avoid this assumption by using a soft label as the supervision signal, which is the raw output from the detector (i.e., a distribution of object classes $\tilde{c}(v_m^{(i)})$). MRC-kl aims to distill such knowledge into UNITER as in Hinton et al. (2015), by minimizing the KL divergence between the two distributions: $f_\theta(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\setminus \mathbf{m}}, \mathbf{w}) = \sum_{i=1}^{M} D_{KL}(\tilde{c}(v_m^{(i)}) \,\|\, g_\theta(v_m^{(i)}))$.
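A sketch of the MRC-kl objective using the detector's soft class distribution as the target; shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def mrc_kl_loss(region_logits: torch.Tensor,
                detector_probs: torch.Tensor) -> torch.Tensor:
    """KL(detector soft labels || predicted class distribution).

    region_logits:  (M, K) class scores for the masked regions.
    detector_probs: (M, K) soft labels c~(v_m) from Faster R-CNN.
    """
    log_g = F.log_softmax(region_logits, dim=-1)
    # F.kl_div expects log-probabilities as input and the target distribution.
    return F.kl_div(log_g, detector_probs, reduction="sum")
```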
3.3 PRE-TRAINING DATASETS
We construct our pre-training dataset based on four existing V+L datasets: COCO (Lin et al., 2014), Visual Genome (VG) (Krishna et al., 2017), Conceptual Captions (CC) (Sharma et al., 2018), and SBU Captions (Ordonez et al., 2011). Only image and sentence pairs are used for our pre-training purpose, which makes the model framework more scalable, as additional image-sentence pairs are easy to harvest for further pre-training.
To study the effects of different datasets on pre-training, we divide the four datasets into two categories. The first one consists of image captioning data from COCO and dense captioning data from VG. We call it “In-domain” data, as most V+L tasks are built on top of these two datasets. To obtain a ‘fair’ data split, we merge the raw training and validation splits from COCO, and exclude all validation and test images that appear in downstream tasks. We also exclude all co-occurring Flickr30K (Plummer et al., 2015) images via URL matching, as both COCO and Flickr30K images were crawled from Flickr and may have overlaps8. The same rule was applied to Visual Genome as well. In this way, we obtain 5.6M image-text pairs for training and 131K image-text pairs for our internal validation, which is half the size of the dataset used in LXMERT (Tan & Bansal, 2019), due to the filtering of overlapping images and the use of image-text pairs only. We also use additional Out-of-domain data from Conceptual Captions (Sharma et al., 2018) and SBU Captions (Ordonez et al., 2011) for model training9. The statistics on the cleaned splits are provided in Table 1.
4 EXPERIMENTS
We evaluate UNITER on six V+L tasks (listed in Table 2), by transferring the pre-trained model to each target task and finetuning through end-to-end training. We report experimental results on two model sizes: UNITER-base with 12 layers and UNITER-large with 24 layers10.
8A total of 222 images were eliminated through this process. 9We apply the same URL matching method, excluding 109 images from the training set.
10UNITER-base: L=12, H=768, A=12, Total Parameters=86M. UNITER-large: L=24, H=1024, A=16, Total Parameters=303M (L: number of stacked Transformer blocks; H: hidden activation dimension; A: number of attention heads). 882 and 3645 V100 GPU hours were used for pre-training UNITER-base and UNITER-large.
4.1 DOWNSTREAM TASKS
In VQA, VCR and NLVR2 tasks, given an input image (or a pair of images) and a natural language question (or description), the model predicts an answer (or judges the correctness of the description) based on the visual content in the image. For Visual Entailment, we evaluate on the SNLI-VE dataset. The goal is to predict whether a given image semantically entails an input sentence. Classification accuracy over three classes (“Entailment”, “Neutral” and “Contradiction”) is used to measure model performance. For Image-Text Retrieval, we consider two datasets (COCO and Flickr30K) and evaluate the model in two settings: Image Retrieval (IR) and Text Retrieval (TR). Referring Expression (RE) Comprehension requires the model to select the target from a set of image region proposals given the query description. Models are evaluated on both ground-truth objects and detected proposals11 (MAttNet (Yu et al., 2018)).
For VQA, VCR, NLVR2, Visual Entailment and Image-Text Retrieval, we extract the joint embedding of the input image-text pair from the representation of the [CLS] token via a multi-layer perceptron (MLP). For RE Comprehension, we use the MLP to compute the region-wise alignment scores. These MLP layers are learned during the finetuning stage. Specifically, we formulate VQA, VCR, NLVR2, Visual Entailment and RE Comprehension as classification problems and minimize the cross-entropy loss over the ground-truth answers/responses. For Image-Text Retrieval, we formulate it as a ranking problem. During finetuning, we sample three pairs of image and text: one positive pair from the dataset and two negative pairs obtained by randomly replacing its sentence/image with others. We compute the similarity scores (based on the joint embedding) for both positive and negative pairs, and maximize the margin between them through a triplet loss, sketched below.
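A sketch of the finetuning ranking objective described above; the margin value is illustrative.

```python
import torch

def retrieval_triplet_loss(pos_score: torch.Tensor,
                           neg_scores: torch.Tensor,
                           margin: float = 0.2) -> torch.Tensor:
    """Margin-based ranking loss over one positive and two negative pairs.

    pos_score:  (batch,) similarity of the matched image-text pair.
    neg_scores: (batch, 2) similarities of the two mismatched pairs.
    """
    gap = margin - pos_score.unsqueeze(-1) + neg_scores
    return gap.clamp(min=0).sum()
```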
4.2 EVALUATION ON PRE-TRAINING TASKS
We analyze the effectiveness of different pre-training settings through ablation studies over VQA, NLVR2, Flickr30K and RefCOCO+ as representative V+L benchmarks. In addition to standard metrics for each benchmark (listed in Table 2), we also use Meta-Sum (the sum of all scores across all benchmarks) as a global metric.
Firstly, we establish two baselines: Line 1 (L1) in Table 3 indicates no pre-training is involved, and L2 shows the results from MLM initialized with pre-trained weights from Devlin et al. (2019). Although MLM trained on text only did not absorb any image information during pre-training, we see a gain of approximately +30 on Meta-Sum over L1. Hence, we use the pre-trained weights in L2 to initialize our model for the following experiments.
11The evaluation splits of RE comprehension using detected proposals are denoted as vald, testd, etc.
Secondly, we validate the effectiveness of each pre-training task through a thorough ablation study. Comparing L2 and L3, MRFR (L3) achieves better results than MLM (L2) only on NLVR2. On the other hand, when pre-trained on ITM (L4) or MLM (L5) only, we observe a significant improvement across all the tasks over the L1 and L2 baselines. When combining different pre-training tasks, MLM + ITM (L6) improves over single ITM (L4) or MLM (L5). When MLM, ITM and MRM are jointly trained (L7-L10), we observe consistent performance gains across all the benchmarks. Among the three variants of MRM (L7-L9), MRC-kl (L9) achieves the best performance (397.09) when combined with MLM + ITM, while MRC (L7) performs the worst (393.97). When combining MRC-kl and MRFR together with MLM and ITM (L10), we find that they are complementary to each other, which leads to the highest Meta-Sum score. We use this as the optimal pre-training setting for further experiments.
Additionally, we validate the contribution of conditional masking through a comparison study. When we perform random masking on both modalities simultaneously during pre-training, i.e., w/o conditional masking (L11), we observe a decrease in Meta-Sum score (396.51) compared to that with conditional masking (399.97). This indicates that the conditional masking strategy enables the model to learn better joint image-text representations.
Lastly, we study the effects of pre-training datasets. Our experiments so far have been focused on In-domain data. In this study, we pre-train our model on Out-of-domain data (Conceptual Captions
+ SBU Captions). A performance drop (395.45 in L12) from the model trained on In-domain data (COCO + Visual Genome) (399.97 in L10) shows that although Out-of-domain data contain more images, the model still benefits more from being exposed to similar downstream images during pretraining. We further pre-train our model on both In-domain and Out-of-domain data. With doubled data size, the model continues to improve (402.50 in L13).
4.3 RESULTS ON DOWNSTREAM TASKS
Table 4 presents the results of UNITER on all downstream tasks. Both our base and large models are pre-trained on In-domain+Out-of-domain datasets, with the optimal pre-training setting: MLM+ITM+MRC-kl+MRFR. The implementation details of each task are provided in Appendix A.2. We compare with both task-specific models and concurrent pre-trained models on each downstream task. SOTA task-specific models include: MCAN (Yu et al., 2019) for VQA, MaxEnt (Suhr et al., 2019) for NLVR2, B2T2 (Alberti et al., 2019) for VCR, SCAN (Lee et al., 2018) for Image-Text Retrieval, EVE-Image (Xie et al., 2019) for SNLI-VE, and MAttNet for RE Comprehension (RefCOCO, RefCOCO+ and RefCOCOg)12. Concurrent pre-trained models include: ViLBERT, LXMERT, Unicoder-VL, VisualBERT and VLBERT.
Results show that our UNITER-large model achieves new state of the art across all the benchmarks. UNITER-base model also outperforms the others by a large margin across all tasks except VQA. Specifically, our UNITER-base model outperforms SOTA by approximately +2.8% for VCR on Q→AR, +2.5% for NLVR2, +7% for SNLI-VE, +4% on R@1 for Image-Text Retrieval (+15% for zero-shot setting), and +2% for RE Comprehension.
Note that LXMERT pre-trains with downstream VQA (+VG+GQA) data, which may help adapt the model to VQA task. However, when evaluated on unseen tasks such as NLVR2, UNITER-base achieves 3% gain over LXMERT. In addition, among all the models pre-trained on image-text pairs only, our UNITER-base outperforms the others by >1.5% on VQA.
It is also worth mentioning that both ViLBERT and LXMERT observed that a two-stream model outperforms a single-stream one, while our results show empirically that, with our pre-training setting, a single-stream model can achieve new state-of-the-art results with far fewer parameters (UNITER-base: 86M, LXMERT: 183M, ViLBERT: 221M)13.
For VCR, we propose a two-stage pre-training approach: (i) pre-train on the standard pre-training datasets; and then (ii) pre-train on the downstream VCR dataset. Interestingly, while VLBERT and B2T2 observed that pre-training is not very helpful on VCR, we find that the second-stage pre-training significantly boosts model performance, while the first-stage pre-training helps only to a limited extent (results shown in Table 5). This indicates that the proposed two-stage approach is highly effective for adapting our pre-trained model to new data unseen during pre-training.
Different from the other tasks, NLVR2 takes two images as input. Thus, directly finetuning UNITER pre-trained with image-sentence pairs might not lead to optimal performance, as the interactions between paired images are not learned during the pre-training stage. We therefore experimented with three modified settings on NLVR2: (i) Triplet: a joint embedding of the image pair and the query caption; (ii) Pair: individual embeddings of each image and the query caption; and (iii) Pair-biattn: a bidirectional attention added to the Pair model to learn the interactions between the paired images.
Comparison results are presented in Table 6. The Pair setting achieves better performance than the Triplet setting even without cross-attention between the image pair. We hypothesize that this is because UNITER is pre-trained with image-text pairs, making it difficult to finetune a pair-based pre-trained model on triplet input. The bidirectional attention mechanism in the Pair-biattn setting, however, compensates for the lack of cross-attention between images, hence yielding the best performance by a large margin. This shows that with minimal surgery on the top layer of UNITER, our pre-trained model can adapt to new tasks that are very different from the pre-training tasks.
12 MAttNet results are updated using the same features as the others. More details are provided in the Appendix.
13 The word embedding layer contains many rare words and is thus excluded from the parameter counts.
Setting       dev    test-P
Triplet       72.76  73.55
Pair          75.37  75.97
Pair-biattn   77.14  77.87

Table 6: Experiments on three modified settings for NLVR2. All models use pre-trained UNITER-base.
5 CONCLUSION
In this paper, we present UNITER, a large-scale pre-trained model providing UNiversal Image-TExt Representations for Vision-and-Language tasks. Three main pre-training tasks are proposed and evaluated through extensive ablation studies. Trained with both in-domain and out-of-domain datasets, UNITER outperforms state-of-the-art models over multiple V+L tasks by a significant margin. Future work includes studying early interaction between raw image pixels and sentence tokens, as well as developing more effective pre-training tasks.
A APPENDIX
A.1 DATASET COLLECTION
As introduced, our full dataset is composed of four existing V+L datasets: COCO, Visual Genome, Conceptual Captions, and SBU Captions. Collecting the dataset is not simply a matter of combining them, as we need to make sure none of the downstream evaluation images are seen during pre-training. Among them, COCO is the trickiest to clean, as several downstream tasks are built on it. Figure 2 lists the splits from VQA, Image-Text Retrieval, COCO Captioning, RefCOCO/RefCOCO+/RefCOCOg, and the bottom-up top-down (BUTD) detection (Anderson et al., 2018), all from COCO images.
As observed, the validation and test splits of different tasks are scattered across the raw COCO splits. Therefore, we exclude all those evaluation images that appeared in the downstream tasks. In addition, we also exclude all co-occurring Flickr30K images via URL matching, making sure the zero-shot image-text retrieval evaluation on Flickr is fair. The remaining images become the COCO subset within our full dataset, as shown in Figure 2 bottom row. We apply the same rules to Visual Genome, Conceptual Captions, and SBU Captions.
A.2 IMPLEMENTATION DETAILS
Our models are implemented based on PyTorch14 (Paszke et al., 2017). To speed up training, we use Nvidia Apex15 for mixed-precision training. All pre-training experiments are run on Nvidia V100 GPUs (16GB VRAM; PCIe connection). Finetuning experiments are run on the same hardware or Titan RTX GPUs (24GB VRAM). To further speed up training, we implement dynamic sequence length to reduce padding, and batch examples by the number of input units (text tokens + image regions). For large pre-training experiments, we use Horovod16 + NCCL17 for multi-node communication (over TCP connections through Ethernet) with up to 4 nodes of 4x V100 servers. Gradient accumulation (Ott et al., 2018) is also applied to reduce multi-GPU communication overheads.
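A simple way to realize batching by input units is to greedily pack length-sorted examples until a per-batch unit budget is exhausted. The sketch below is our own reading of the idea, not the actual implementation; the `num_tokens` and `num_regions` attributes are assumed:

```python
def batch_by_units(examples, max_units):
    """Pack examples so each batch holds at most `max_units` total input
    units (text tokens + image regions). Sorting by length first keeps
    similarly sized examples together, which reduces padding."""
    examples = sorted(examples, key=lambda ex: ex.num_tokens + ex.num_regions)
    batches, batch, units = [], [], 0
    for ex in examples:
        cost = ex.num_tokens + ex.num_regions
        if batch and units + cost > max_units:
            batches.append(batch)
            batch, units = [], 0
        batch.append(ex)
        units += cost
    if batch:
        batches.append(batch)
    return batches
```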
Visual Question Answering (VQA) We follow Yu et al. (2019) to take the 3,129 most frequent answers as answer candidates, and assign a soft target score to each candidate based on its relevancy to the 10 human responses. To finetune on the VQA dataset, we use a binary cross-entropy loss to train a multi-label classifier with a batch size of 10,240 input units over a maximum of 5K steps. We use the AdamW optimizer (Loshchilov & Hutter, 2019) with a learning rate of 3e-4 and weight decay of 0.01. At inference time, the max-probable answer is selected as the predicted answer. For results on the test-dev and test-std splits, both training and validation sets are used for training, and additional question-answer pairs from Visual Genome are used for data augmentation as in Yu et al. (2019).
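The soft-target multi-label objective reduces to a binary cross-entropy over the 3,129 answer candidates (a sketch; `logits` come from the MLP over the [CLS] embedding described in Section 4.1):

```python
import torch.nn.functional as F

def vqa_loss(logits, soft_targets):
    """logits: (batch, 3129) raw answer scores.
    soft_targets: (batch, 3129) scores in [0, 1] derived from the
    relevancy of each candidate to the 10 human responses."""
    return F.binary_cross_entropy_with_logits(logits, soft_targets)

# At inference time, the max-probable answer is the prediction:
# pred_answer = logits.argmax(dim=-1)
```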
14 https://pytorch.org/
15 https://github.com/NVIDIA/apex
16 https://github.com/horovod/horovod
17 https://github.com/NVIDIA/nccl
Visual Commonsense Reasoning (VCR) VCR can be decomposed into two multiple-choice subtasks: a question-answering task (Q → A) and an answer-justification task (QA → R). In the holistic setting (Q → AR), a model needs to first choose an answer from the answer choices, then select a supporting rationale from the rationale choices if the chosen answer is correct. We train our model in the two settings simultaneously. When testing in the holistic setting, we first apply the model to predict an answer, then obtain the rationale from the same model based on the given question and the predicted answer. To finetune on the VCR dataset, we concatenate the question (the question and the ground-truth answer) with each answer (rationale) choice from the four possible answer (rationale) candidates. The ‘modality embedding’ is extended to help distinguish question, answer and rationale. Cross-entropy loss is used to train a classifier over two classes (‘‘right’’ or ‘‘wrong’’) for each question-answer pair (question-answer-rationale triplet) with a batch size of 4096 input units over a maximum of 5K steps. We use the AdamW optimizer with a learning rate of 1e-4 and weight decay of 0.01.
Since the images and text in the VCR dataset are very different from our pre-training data, we further pre-train our model on VCR, using MLM, MRFR and MRC-kl as the pre-training tasks. ITM is discarded because the text in VCR does not explicitly describe the image. The results of both pre-training stages on VCR are reported in Table 5 and discussed in the main text. In conclusion, for downstream tasks whose data are very different from the pre-training datasets, second-stage pre-training helps further boost the performance.
In our implementation, the second-stage pre-training uses a batch size of 4096 input units, a learning rate of 3e-4 and a weight decay of 0.01 over a maximum of 60K steps. After second-stage pre-training, we finetune our model with a learning rate of 6e-5 over a maximum of 8K steps.
Natural Language for Visual Reasoning for Real (NLVR2) NLVR2 is a new challenging task for visual reasoning. The goal is to determine whether a natural language statement is true about a given image pair. Here we discuss the three architecture variants of NLVR2 finetuning in detail. Since UNITER only handles one image and one text input at pre-training, the ‘modality embedding’ is extended to help distinguish the additional image presented in the NLVR2 task. For the Triplet setup, we concatenate the image regions and then feed them into the UNITER model; an MLP transform is applied on the [CLS] output for binary classification. For the Pair setup, we treat one input example as two text-image pairs by repeating the text. The two [CLS] outputs from UNITER are then depth-concatenated as the joint embedding for the example, and another MLP further transforms this embedding for the final classification. For the Pair-biattn setup, the input format is the same as the Pair setup. For the joint representation, instead of relying on only the two [CLS] outputs, we apply a multi-head attention layer (Vaswani et al., 2017) on one sequence of joint image-text embeddings to attend to the other sequence of embeddings, and vice versa. After these ‘bidirectional’ attention interactions, a simple attentional pooling is applied to each output sequence, and a final concat+MLP layer transforms the cross-attended joint representation for true/false classification.
We finetune UNITER on NLVR2 for 8K steps with a batch size of 10K input units. The AdamW optimizer is used with a learning rate of 1e-4 and weight decay of 0.01.
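One possible shape for the Pair-biattn head described above is sketched below; the hidden size, head count, and the exact form of the attentional pooling are assumptions on our part rather than paper specifications:

```python
import torch
import torch.nn as nn

class PairBiAttnHead(nn.Module):
    """Bidirectional cross-attention over the two joint image-text
    sequences, attentional pooling of each, then concat + MLP."""
    def __init__(self, hidden=768, heads=12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.pool = nn.Linear(hidden, 1)  # simple attentional pooling
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def attend_and_pool(self, q_seq, kv_seq):
        out, _ = self.cross_attn(q_seq, kv_seq, kv_seq)  # q attends to kv
        weights = torch.softmax(self.pool(out), dim=1)   # (B, L, 1)
        return (weights * out).sum(dim=1)                # (B, hidden)

    def forward(self, seq1, seq2):
        # seq1, seq2: joint image-text embeddings for the two pairs.
        v1 = self.attend_and_pool(seq1, seq2)
        v2 = self.attend_and_pool(seq2, seq1)
        return self.classifier(torch.cat([v1, v2], dim=-1))  # true/false logits
```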
Image-Text Retrieval Two datasets are considered for this task: COCO and Flickr30K. COCO consists of 123K images, each accompanied by five human-written captions. We follow Karpathy & Fei-Fei (2015) to split the data into 82K/5K/5K training/validation/test images. An additional 30K images from the MSCOCO validation set are also included to improve training, as in Lee et al. (2018). The Flickr30K dataset contains 31K images collected from the Flickr website, with five textual descriptions per image. We follow Karpathy & Fei-Fei (2015) to split the data into 30K/1K/1K training/validation/test splits. During finetuning, we sample two negative image-text pairs per positive sample from the image and text sides, respectively. For COCO, we use a batch size of 60 examples, a learning rate of 2e-5, and finetune our model for 20K steps. For Flickr30K, we finetune our model with a batch size of 120 examples and a learning rate of 5e-5 over a maximum of 16K steps. To obtain the final results in Table 4, we further sample hard negatives to facilitate the finetuning. Every N steps, we randomly sample 128 negative images per text input and obtain a sparse scoring matrix for the whole training set. For each image, we choose the top 20 ranked negative sentences as hard negative samples. Similarly, we get 20 hard negative images for each sentence according to their scores. The hard negatives are sent to the model as additional negative samples.
In the end, we have two randomly sampled negatives and two hard negative samples per positive sample. N is set to 4000 for COCO and 2500 for Flickr30K.
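The text-to-image direction of this mining step can be sketched as follows (image-to-text mining is symmetric); `model.score` and the `positive_of` mapping are assumed helpers, not names from our codebase:

```python
import torch

@torch.no_grad()
def mine_hard_negatives(model, images, texts, positive_of,
                        num_candidates=128, top_k=20):
    """Every N steps: score random candidate images per text with the
    current model and keep the top-ranked ones as hard negatives."""
    hard = {}
    for t, text in enumerate(texts):
        cand = [i for i in torch.randperm(len(images)).tolist()
                if i != positive_of[t]][:num_candidates]
        scores = torch.tensor([float(model.score(images[i], text)) for i in cand])
        top = scores.topk(min(top_k, len(cand))).indices
        hard[t] = [cand[j] for j in top.tolist()]  # hardest negative images
    return hard
```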
Visual Entailment (SNLI-VE) Visual Entailment is a task derived from Flickr30K images and the Stanford Natural Language Inference (SNLI) dataset, where the goal is to determine the logical relationship between a natural language statement and an image. Similar to BERT for Natural Language Inference (NLI), we treat SNLI-VE as a three-way classification problem and apply an MLP transform on the [CLS] output. The UNITER model is finetuned using cross-entropy loss. The batch size is set to 10K input units, and we use AdamW with a learning rate of 8e-5 to train for 3K steps.
Referring Expression Comprehension We use three referring expression datasets for evaluation: RefCOCO, RefCOCO+, and RefCOCOg, all collected on COCO images. To finetune UNITER on this task, we add an MLP layer on top of the region outputs from the Transformer to compute the alignment score between the query phrase/sentence and each region. Since only one object is paired with the query phrase/sentence, we apply cross-entropy loss on the normalized alignment scores. The finetuning is efficient: we train the model with a batch size of 64 examples and a learning rate of 5e-5 for only 5 epochs, and achieve state-of-the-art performance. Note that all works, including ours, use off-the-shelf object detectors trained on COCO (and Visual Genome) to extract the visual features. While this does not affect other downstream tasks, it raises an issue for RE comprehension, as the val/test images of RefCOCO, RefCOCO+, and RefCOCOg are a subset of COCO’s training split. Strictly speaking, our object detector should not be trained on these val/test images. However, for a “fair” comparison with concurrent works, we ignore this issue and use the same features (Anderson et al., 2018) as the others. We also update the results of MAttNet using these “contaminated” features, whose accuracy is 1.5% higher than originally reported. As mentioned earlier, the interaction between sentence and image could start from tokens and pixels instead of extracted features. We leave this study, and RE comprehension with strictly correct features, to future work.
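The region-wise alignment head amounts to an MLP over the per-region Transformer outputs with a cross-entropy over regions (a sketch; layer sizes are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class RegionAlignmentHead(nn.Module):
    """Scores each region against the query; the target is the index of
    the single ground-truth region paired with the query."""
    def __init__(self, hidden=768):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, region_states, target_region):
        # region_states: (B, num_regions, hidden); target_region: (B,)
        scores = self.mlp(region_states).squeeze(-1)    # (B, num_regions)
        return F.cross_entropy(scores, target_region)   # normalized alignment
```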
A.3 VISUALIZATION
Similar to Kovaleva et al. (2019), we observe several patterns in the attention maps of the UNITER model, as shown in Fig. 3. Note that different from Kovaleva et al. (2019), our attention mechanism
operates in both inter- and intra-modality manners. For completeness, we briefly discuss each pattern here:
• Vertical: attention to special tokens [CLS] or [SEP];
• Diagonal: attention to the token/region itself or preceding/following tokens/regions;
• Vertical + Diagonal: mixture of vertical and diagonal;
• Block: intra-modality attention, i.e., textual self-attention and visual self-attention;
• Heterogeneous: diverse attention that cannot be categorized and is highly dependent on the actual input;
• Reversed Block: inter-modality attention, i.e., text-to-image and image-to-text attention.
Note that Reversed Block (Fig. 3f) shows cross-modality alignment between tokens and regions. In Fig. 4, 5, and 6, we visualize several examples of text-to-image attention to demonstrate the local cross-modality alignment between regions and tokens.
A.4 CONDITIONAL MASKING VS. JOINT RANDOM MASKING
We further discuss the advantage of our proposed conditional masking over the joint random masking used in Tan & Bansal (2019) and Lu et al. (2019). Intuitively, our conditional masking learns better latent alignment of entities (regions and words) across the two modalities. Fig. 7 shows an example image with “man with his dog and cat sitting on a sofa”. With conditional masking, when the region of dog is masked, our model should be able to infer that the region is dog, based on the context of both the surrounding regions and the full sentence (Fig. 7(a)), and vice versa. With the joint masking implementation, however, it can happen that both the region of dog and the word dog are
masked (Fig. 7(b)). In such a case, the model has to make the prediction blindly, which might lead to misalignment.
To verify this intuition, we show the validation curves during pre-training of MLM and MRC-kl in Fig. 8. Each sub-figure shows a comparison between applying conditional masking and joint random masking during the pre-training of UNITER. The MLM accuracy measures how well UNITER can reconstruct the masked words, and MRC-kl accuracy18 measures how well UNITER can classify the masked regions. In both cases, as shown in Fig. 8, our conditional masking converges faster and achieves higher final accuracy than joint random masking. In addition, Table 3 (row 10 & 11) shows our conditional masking also performs better on fine-tuned downstream tasks.
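The contrast between the two strategies can be made concrete with a small sampler. This is an illustration only: in actual pre-training the masked modality is determined by the task drawn for the mini-batch, for which the coin flip below merely stands in:

```python
import random

def draw_masks(num_tokens, num_regions, conditional, p=0.15):
    """With conditional masking one modality stays fully observed; with
    joint random masking both may be corrupted, so e.g. the region `dog`
    and the word `dog` can end up masked together."""
    word_mask = [random.random() < p for _ in range(num_tokens)]
    region_mask = [random.random() < p for _ in range(num_regions)]
    if conditional:
        if random.random() < 0.5:                 # keep the visual side intact
            region_mask = [False] * num_regions
        else:                                     # keep the textual side intact
            word_mask = [False] * num_tokens
    return word_mask, region_mask
```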
A.5 MORE RESULTS ON VCR AND NLVR2
Following the VCR setup in Table 5, we further construct an ensemble model using 10 UNITER-large models. Table 7 shows the comparison among VLBERT, ViLBERT and UNITER on VCR. The Q → AR accuracy of our ensemble model outperforms the ViLBERT (Lu et al., 2019) ensemble by a large margin of 7.0%. Note that even a single UNITER-large already outperforms the ViLBERT ensemble and VLBERT-large by 3.0%.
Besides, we also compare our UNITER-large with LXMERT (Tan & Bansal, 2019) and VisualBERT (Li et al., 2019b) on an additional testing split of NLVR2 in Table 8. Our results consistently outperform the previous SOTA on all metrics19 by a large margin of ∼4.0%.
A.6 DIRECT COMPARISON TO VLBERT AND VILBERT
To further demonstrate our idea, we conduct a direct comparison to ViLBERT (Lu et al., 2019) and VLBERT (Su et al., 2019), both trained on Conceptual Captions (Sharma et al., 2018). We pre-train UNITER on Conceptual Captions only (instead of the four datasets in Section 3.3), using our proposed conditional masking and the best pre-training tasks (MLM + ITM + MRC-kl + MRFR). Table 9 shows that
18When validating on MRC-kl accuracy, we simply pick the most confident category from the predicted probability and measure its correctness.
19The balanced and unbalanced evaluations were introduced in Suhr & Artzi (2019).
UNITER still consistently outperforms the other models by a visible margin on VQA and RefCOCO+. | 1. What is the focus of the paper regarding image-text representations, and what are the strengths and weaknesses of the proposed approach?
2. Are there any concerns regarding the clarity and motivation of certain parts of the method, and how can they be addressed?
3. How does the novelty of the paper compare to prior works, and what's missing in terms of understanding and intuition about the conditioned masking idea?
4. What are the advantages and disadvantages of the experimental analysis conducted in the paper? | Review | Review
# 1. Summary
The authors introduce a new pre-training procedure for image-text representations. The idea is to train the model on a huge collection of different image-text datasets and then use the model for downstream tasks. The difference between this proposal and the concurrent work is that conditioned masking is used: (i) Masked Language Modeling (MLM) conditioned on image; (ii) Masked Region Modeling (MRM) conditioned on text; and (iii) joint Image-Text Matching (ITM).
I am on the fence about this paper given the balance between the strengths and weaknesses listed below. I am conservative here and decide on weak reject, but I am open to discussion if the authors answer my concerns detailed below.
Strengths:
* State-of-the-art results on several downstream vision-language tasks
* Empirical work to investigate different ways to perform conditioned masking
Weaknesses:
* Some parts of the method need clarification (see point 2 below) to better understand the details and practical advantages of the method.
* Limited novelty: the paper is an extension of BERT to the visual domain (see point 3 below)
# 2. Clarity and Motivation
The paper reads quite well, although some points need to be improved:
* "Compared with LXMERT (Tan & Bansal, 2019) and ViLBERT (Lu et al., 2019) that use two streams (one Transformer for each modality), our UNITER model can learn joint contextualized ...", why is this an advantage? Using two streams might also lead to learning context? Maybe an example can clarify my question.
* End of Sec. 3.1 (and paragraph in Sec. 3.2): not clear how the model is trained for ITM. What's the input and output? Why do you need a new symbol [CLS]?
* Sec. 3.2 ITM: "an additional special token [CLS] is fed into our model, which indicates the fused representation of both modalities" - This is not clear. Why is this special token needed? Why is it not needed in MLM and MRM?
* "The scoring function is denoted as s" -> please indicate in the main text what function you used
* MRFR and MRC are clear; however, the intuition behind MRC-kl is missing. Why is this needed? What does it mean in practice to minimize such a divergence (provide a practical example)?
* Combination of tasks (MLM + ITM + MRC-kl + MRFR) -> it is not clear how this is done in practice. Is the loss function composed (summed)? Or does the method randomly choose which task to apply (e.g., MLM) for each mini-batch or sample? This should be clarified in the main text of the paper.
# 3. Novelty
The novelty of the paper is quite limited since it is an extension of BERT to the visual domain. The authors propose an empirical analysis of different ways to mask the visual input; however, this might not be a substantial extension of previous work. In fact, recently there are many other papers (ViLBERT, VisualBERT, LXMERT, ...) working on a similar topic with small differences. What is missing in this paper is an understanding of, and intuition for, the reasons why the conditioned masking idea should be better than the other visual masking ideas proposed in previous work.
# 4. Experimentation
The main advantage of this paper lies in the extensive experimental analysis done on many challenging datasets, reaching the state of the art on several downstream tasks.
The evaluation on both pre-training tasks and downstream tasks shows that the method works well in practice.
ICLR | Title
UNITER: Learning UNiversal Image-TExt Representations
Abstract
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodal inputs are jointly processed for visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design three pre-training tasks: Masked Language Modeling (MLM), Image-Text Matching (ITM), and Masked Region Modeling (MRM, with three variants). Different from concurrent work on multimodal pre-training that applies joint random masking to both modalities, we use conditioned masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). Comprehensive analysis shows that conditioned masking yields better performance than unconditioned masking. We also conduct a thorough ablation study to find an optimal setting for the combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2.
1 INTRODUCTION
Most Vision-and-Language tasks rely on joint multimodal embeddings to bridge the semantic gap between visual and textual clues in images and text, although such representations are usually tailored for specific tasks. For example, MCB (Fukui et al., 2017), BAN (Kim et al., 2018), and DFAF (Gao et al., 2019) proposed advanced multimodal fusion methods for Visual Question Answering (VQA) (Antol et al., 2015). SCAN (Lee et al., 2018) and MAttNet (Yu et al., 2018) studied learning latent alignment between words and image regions for Image-Text Retrieval (Wang et al., 2016) and Referring Expression Comprehension (Kazemzadeh et al., 2014) tasks. While each of these proposed models has pushed the state of the art on respective benchmarks, their architectures are diverse and the learned representations are highly task-specific, preventing them from being generalized to other tasks. This raises a million-dollar question: can we learn a universal image-text representation for all V+L tasks?
To answer this question, we introduce UNiversal Image-TExt Representations (UNITER), a largescale pre-trained model for multimodal embedding. We adopt Transformer (Vaswani et al., 2017) as the core of our model, to leverage its elegant self-attention mechanism designed for learning contextualized representations. Inspired by BERT (Devlin et al., 2019), which has successfully applied Transformer to NLP tasks through large-scale language modeling, we pre-train UNITER through three pre-training tasks: (i) Masked Language Modeling (MLM) conditioned on image; (ii) Masked Region Modeling (MRM) conditioned on text; and (iii) joint Image-Text Matching (ITM). To further investigate the effectiveness of MRM, we propose three MRM variants: (i) Masked Region Classification (MRC); (ii) Masked Region Feature Regression (MRFR); and (iii) Masked Region Classification with KL-divergence (MRC-kl).
As shown in Figure 1, UNITER first encodes image regions (visual features and bounding box features) and textual words (tokens and positions) into a common embedding space with an Image Embedder and a Text Embedder, then applies a Transformer module to learn generalizable contextualized embeddings for each region and word through the aforementioned pre-training tasks. Compared with LXMERT (Tan & Bansal, 2019) and ViLBERT (Lu et al., 2019), which use two streams (one Transformer for each modality), our UNITER model can learn joint contextualized representations for image regions and textual words through a single Transformer. Besides, our masked language/region modeling is conditioned on full observation of the image/text, different from other concurrent pre-trained models that apply joint random masking to both modalities. We show that the conditional masking strategy successfully eases the misalignment between images and text and yields better joint embeddings for downstream tasks. A detailed ablation study also demonstrates that the combination of MLM+ITM+MRC-kl+MRFR yields the best pre-training performance.
To demonstrate the power of UNITER, we evaluate on six V+L tasks across nine datasets, including: (i) VQA; (ii) Visual Commonsense Reasoning (VCR) (Zellers et al., 2019); (iii) NLVR2 (Suhr et al., 2019); (iv) Visual Entailment (Xie et al., 2019); (v) Image-Text Retrieval (including the zero-shot setting) (Lee et al., 2018); and (vi) Referring Expression Comprehension. Our UNITER model is trained on a large-scale V+L dataset composed of four subsets: (i) COCO (Lin et al., 2014); (ii) Visual Genome (VG) (Krishna et al., 2017); (iii) Conceptual Captions (CC) (Sharma et al., 2018); and (iv) SBU Captions (Ordonez et al., 2011). Experiments show that UNITER achieves new state of the art with a significant performance boost across all six downstream tasks. Moreover, training on additional CC and SBU data (containing images/text unseen in downstream tasks) further boosts model performance over training on COCO and VG only.
Our contributions can be summarized as follows: (i) We introduce UNITER, a powerful UNiversal Image-TExt Representations for Vision-and-Language tasks. (ii) We achieve new state of the art (SOTA) on multiple V+L benchmarks, outperforming existing SOTA and concurrent multimodal pre-training methods by a large margin. (iii) We present extensive experiments and analysis to provide useful insights on the effectiveness of each pre-training task/dataset for multimodal encoder training.
2 RELATED WORK
Self-supervised learning utilizes original data as its own source of supervision, which has been applied to many Computer Vision tasks, such as image colorization (Zhang et al., 2016), solving jigsaw puzzles (Noroozi & Favaro, 2016; Trinh et al., 2019), inpainting (Pathak et al., 2016), rotation prediction (Gidaris et al., 2018), and relative location prediction (Doersch et al., 2015). Recently, pre-trained language models such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), GPT2 (Radford et al., 2019), and XLNet (Yang et al., 2019) have shown great advances for NLP tasks. There are two keys to their success: effective pre-training tasks over large language corpus, and the use of Transformer (Vaswani et al., 2017) for learning contextualized text representations.
More recently, there has been some concurrent work on self-supervised learning for multimodal tasks, pre-training on large-scale image/video and text pairs and then finetuning on downstream tasks. For example, VideoBERT (Sun et al., 2019) applied BERT to learn a bidirectional joint distribution over quantized video frame features and linguistic tokens from video-text pairs. ViLBERT (Lu
et al., 2019) and LXMERT (Tan & Bansal, 2019) introduced the two-stream architecture, where two Transformers are applied to images and text independently, which are fused by a third Transformer at a later stage. On the other hand, VisualBERT (Li et al., 2019b), Unicoder-VL (Li et al., 2019a), VL-BERT (Su et al., 2019) and B2T2 (Alberti et al., 2019) proposed the single-stream architecture, where a single Transformer is applied to both image and text. Specifically, the LXMERT model was pre-trained with downstream tasks such as VQA (Antol et al., 2015) and GQA (Hudson & Manning, 2019), while the others were pre-trained on image-text pairs only. Our UNITER model belongs to the second family. One key difference between UNITER and the other methods is the masking approach in the pre-training tasks. Instead of randomly masking both image regions and sentence words, we use conditional masking, i.e., masking only one modality while keeping the other untainted. In addition, we examine the best combination of pre-training tasks through a thorough ablation study on the effects of each pre-training task and dataset on downstream tasks.
Another related work is DFAF (Gao et al., 2019), which proposed a novel architecture of inter-modality and intra-modality attention modules to learn the latent alignment between the two modalities for VQA. Compared with Gao et al. (2019), UNITER learns a relatively more generic V+L representation via pre-training.
3 UNIVERSAL IMAGE-TEXT REPRESENTATIONS
In this section, we first introduce the model architecture of UNITER (Section 3.1), then describe the designed pre-training tasks and V+L datasets used for pre-training (Section 3.2 and 3.3).
3.1 MODEL OVERVIEW
The model architecture of UNITER is illustrated in Figure 1. Given a pair of image and sentence, UNITER takes the visual regions of the image and the textual tokens of the sentence as input. We design an Image Embedder and a Text Embedder to extract their respective embeddings. These embeddings are then fed into a multi-layer self-attention Transformer to learn a cross-modality contextualized embedding between visual regions and textual tokens. Note that the self-attention mechanism in the Transformer is order-less; thus, it is necessary to explicitly encode the positions/locations of tokens/regions as additional inputs.
Specifically, in the Image Embedder, we first use Faster R-CNN1 to extract the visual features (pooled ROI features) for each region. We also encode the location features for each region via a 7-dimensional vector2. Both visual and location features are then fed through a fully-connected (FC) layer, to be projected into the same embedding space. The final visual embedding for each region is obtained by summing up the two FC outputs and then passing through a layer normalization (LN) layer. For the Text Embedder, we follow BERT (Devlin et al., 2019) and tokenize the input sentence into WordPieces (Wu et al., 2016). The final representation for each sub-word token3 is obtained by summing up its word embedding and position embedding, followed by another LN layer4.
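The two embedders reduce to a few lines each. The sketch below is illustrative: the 2048-dim ROI features and the BERT vocabulary size are assumptions, and the modality embedding from footnote 4 is omitted for brevity:

```python
import torch
import torch.nn as nn

class ImageEmbedder(nn.Module):
    """Region embedding = LN(FC(ROI feature) + FC(7-dim location))."""
    def __init__(self, feat_dim=2048, hidden=768):
        super().__init__()
        self.feat_fc = nn.Linear(feat_dim, hidden)
        self.loc_fc = nn.Linear(7, hidden)  # [x1, y1, x2, y2, w, h, w*h]
        self.ln = nn.LayerNorm(hidden)

    def forward(self, roi_feats, locations):
        return self.ln(self.feat_fc(roi_feats) + self.loc_fc(locations))

class TextEmbedder(nn.Module):
    """Token embedding = LN(word embedding + position embedding)."""
    def __init__(self, vocab_size=30522, hidden=768, max_len=512):
        super().__init__()
        self.word = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_len, hidden)
        self.ln = nn.LayerNorm(hidden)

    def forward(self, token_ids):  # token_ids: (B, L)
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.ln(self.word(token_ids) + self.pos(positions))
```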
We introduce three main tasks to pre-train our model: Masked Language Modeling conditioned on image regions (MLM), Masked Region Modeling conditioned on input text (with three variants) (MRM), and Image-Text Matching (ITM). As shown in Figure 1, our MRM and MLM are in analogy to BERT, where we randomly mask some words or regions from the input and learn to recover the words or regions as the output of the Transformer. Specifically, word masking is realized by replacing the token with a special token [MASK], and region masking is implemented by replacing the visual feature vector with all zeros. Note that each time we only mask one modality while keeping the other modality intact, instead of randomly masking both modalities as in ViLBERT and LXMERT. This prevents potential misalignment when a masked region happens to be described by a masked word. Empirically, we show that with conditional masking, our model is able to learn better embeddings
1 Our Faster R-CNN was pre-trained on Visual Genome object+attribute data (Anderson et al., 2018).
2 [x1, y1, x2, y2, w, h, w ∗ h] (normalized top/left/bottom/right coordinates, width, height, and area).
3 We use word/sub-word and token interchangeably throughout the rest of the paper.
4 We also use a special modality embedding to help the model distinguish between textual and visual input, similar to the ‘segment embedding’ in BERT. This embedding is also summed before the LN layer in each embedder. For simplicity, this modality embedding is omitted in Figure 1.
(in Section 4.2). Lastly, we also learn an instance-level alignment (rather than token/region-level) between the whole image and the sentence via ITM. During training, we sample both positive and negative image-sentence pairs and learn their matching scores.
To pre-train UNITER with the aforementioned different tasks, we randomly sample one pre-training task for each mini-batch and train on only one objective per SGD update.
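Schematically, one SGD update then looks as follows; the uniform task distribution and the `compute_loss` helper are assumptions made for illustration:

```python
import random

TASKS = ["MLM", "ITM", "MRFR", "MRC", "MRC-kl"]

def training_step(model, batch, optimizer):
    """Draw one pre-training task for the mini-batch and optimize
    only that objective for this update."""
    task = random.choice(TASKS)
    loss = model.compute_loss(batch, task=task)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return task, loss.item()
```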
3.2 PRE-TRAINING TASKS
Masked Language Modeling (MLM) We denote the image regions as $\mathbf{v} = \{v_1, \dots, v_K\}$, the input words as $\mathbf{w} = \{w_1, \dots, w_T\}$, and the mask indices as $\mathbf{m} \in \mathbb{N}^M$.5 In MLM, we randomly mask out the input words with a probability of 15%, and replace the masked ones $\mathbf{w}_{\mathbf{m}}$ with the special token [MASK]6. The goal is to predict these masked words based on the observation of their surrounding words $\mathbf{w}_{\backslash \mathbf{m}}$ and all image regions $\mathbf{v}$, by minimizing the negative log-likelihood:
$$\mathcal{L}_{\mathrm{MLM}}(\theta) = -\mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\,\log P_{\theta}(\mathbf{w}_{\mathbf{m}} \mid \mathbf{w}_{\backslash \mathbf{m}}, \mathbf{v}), \tag{1}$$
where $\theta$ denotes the trainable parameters. Each pair $(\mathbf{w}, \mathbf{v})$ is sampled from the whole training set $D$.
Image-Text Matching (ITM) In ITM, an additional special token [CLS] is fed into our model, which indicates the fused representation of both modalities. The inputs to ITM are a sentence and a set of image regions, and the output is a binary label (0 for a negative match, 1 for a positive match). We extract the representation of the [CLS] token as the joint representation of the input text and image, then feed it into a fully-connected layer and a sigmoid function to predict a score between 0 and 1. We denote the output score as $s_{\theta}(\mathbf{w}, \mathbf{v})$. The ITM supervision is over the [CLS] token.7 During training, we sample a positive or negative pair $(\mathbf{w}, \mathbf{v})$ from the dataset $D$ at each step. The negative pair is created by replacing the image or text in a paired sample with one randomly selected from other samples. We denote the label as $y \in \{0, 1\}$, indicating whether the sampled pair is a match. We then apply a binary cross-entropy loss for optimization:
$$\mathcal{L}_{\mathrm{ITM}}(\theta) = -\mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\left[\,y \log s_{\theta}(\mathbf{w},\mathbf{v}) + (1-y)\log\big(1 - s_{\theta}(\mathbf{w},\mathbf{v})\big)\right]. \tag{2}$$
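In code, the ITM objective is a binary cross-entropy over the [CLS]-derived score (a sketch; `fc` is the single-output fully-connected layer described above):

```python
import torch.nn.functional as F

def itm_loss(cls_embed, label, fc):
    """cls_embed: (batch, hidden) [CLS] representation of the (w, v) pair.
    label: (batch,) 1 for a matched pair, 0 for a sampled negative.
    fc: linear layer producing the logit of s_theta(w, v)."""
    logit = fc(cls_embed).squeeze(-1)
    return F.binary_cross_entropy_with_logits(logit, label.float())
```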
Masked Region Modeling (MRM) Similar to MLM, we also sample image regions and mask their visual features with a probability of 15%. The model is trained to reconstruct the masked regions $\mathbf{v}_{\mathbf{m}}$ given the remaining regions $\mathbf{v}_{\backslash \mathbf{m}}$ and all the words $\mathbf{w}$. The visual features of the masked regions are replaced by zeros. Unlike textual tokens, which are represented as discrete labels, visual features are high-dimensional and continuous, and thus cannot be supervised via a class likelihood. Instead, we propose three variants for Masked Region Modeling, which share the same objective base:
$$\mathcal{L}_{\mathrm{MRM}}(\theta) = \mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\, f_{\theta}(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\backslash \mathbf{m}}, \mathbf{w}). \tag{3}$$
1) Masked Region Feature Regression (MRFR) MRFR learns to regress the Transformer output of each masked region $v_{\mathbf{m}}^{(i)}$ to its visual features. Specifically, we apply an FC layer to convert the Transformer output into a vector $h_{\theta}(v_{\mathbf{m}}^{(i)})$ of the same dimension as the input ROI-pooled feature $r(v_{\mathbf{m}}^{(i)})$. Then we apply L2 regression between the two: $f_{\theta}(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\backslash \mathbf{m}}, \mathbf{w}) = \sum_{i=1}^{M} \big\lVert h_{\theta}(v_{\mathbf{m}}^{(i)}) - r(v_{\mathbf{m}}^{(i)}) \big\rVert_2^2$.
2) Masked Region Classification (MRC) MRC learns to predict the object semantic class for each masked region. We first feed the Transformer output of the masked region $v_{\mathbf{m}}^{(i)}$ into an FC layer to predict the scores of $K$ object classes, which further go through a softmax function to be transformed into a normalized distribution $g_{\theta}(v_{\mathbf{m}}^{(i)}) \in \mathbb{R}^K$. Note that there is no ground-truth label, as the object categories are not provided. Thus, we use the object detection output from Faster R-CNN, and take the detected object category (with the highest confidence score) as the label of the masked region, which is converted into a one-hot vector $c(v_{\mathbf{m}}^{(i)}) \in \mathbb{R}^K$. The final objective minimizes the cross-entropy (CE) loss: $f_{\theta}(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\backslash \mathbf{m}}, \mathbf{w}) = \sum_{i=1}^{M} \mathrm{CE}\big(c(v_{\mathbf{m}}^{(i)}), g_{\theta}(v_{\mathbf{m}}^{(i)})\big)$.
5 $\mathbb{N}$ denotes the natural numbers, $M$ is the number of masked tokens, and $\mathbf{m}$ is the set of masked indices.
6 Following BERT, we decompose this 15% into 10% random word, 10% unchanged, and 80% [MASK].
7 The supervision over the [CLS] token in pre-training also alleviates the input mismatch between pre-training tasks and downstream finetuning tasks, since most downstream tasks take the representation of the [CLS] token as the joint representation.
3) Masked Region Classification with KL-Divergence (MRC-kl) MRC takes the most likely object class from the object detection model as the hard label (with probability 0 or 1), which assumes the detected object class is the ground-truth label for the region. However, this may not be true, as no ground-truth label is provided for the detected region. Thus, in MRC-kl, we avoid this assumption by using the soft label as the supervision signal, which is the raw output from the detector (i.e., a distribution over object classes $\tilde{c}(v_{\mathbf{m}}^{(i)})$). MRC-kl aims to distill such knowledge into UNITER, following Hinton et al. (2015), by minimizing the KL divergence between the two distributions: $f_{\theta}(\mathbf{v}_{\mathbf{m}} \mid \mathbf{v}_{\backslash \mathbf{m}}, \mathbf{w}) = \sum_{i=1}^{M} D_{\mathrm{KL}}\big(\tilde{c}(v_{\mathbf{m}}^{(i)}) \,\Vert\, g_{\theta}(v_{\mathbf{m}}^{(i)})\big)$.
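The three variants differ only in the per-region term $f_{\theta}$. A sketch of each, assuming the Transformer outputs and detector labels have already been gathered for the masked regions and projected to matching dimensions (the averaging over regions is our simplification of the sums above):

```python
import torch.nn.functional as F

def mrfr_term(h_out, roi_feat):
    """L2 regression of masked-region outputs to their ROI features."""
    return ((h_out - roi_feat) ** 2).sum(dim=-1).mean()

def mrc_term(logits, detector_hard_label):
    """Cross-entropy to the detector's most confident class per region."""
    return F.cross_entropy(logits, detector_hard_label)

def mrc_kl_term(logits, detector_soft_label):
    """KL(detector distribution || model distribution) per masked region."""
    log_g = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_g, detector_soft_label, reduction="batchmean")
```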
3.3 PRE-TRAINING DATASETS
We construct our pre-training dataset based on four existing V+L datasets: COCO (Lin et al., 2014), Visual Genome (VG) (Krishna et al., 2017), Conceptual Captions (CC) (Sharma et al., 2018), and SBU Captions (Ordonez et al., 2011). Only image and sentence pairs are used for our pre-training purpose, which makes the model framework more scalable, as additional image-sentence pairs are easy to harvest for further pre-training.
Image-Text Retrieval Two datasets are considered for this task: COCO and Flickr30K. COCO consists of 123K images, each accompanied with five human-written captions. We follow Karpathy & Fei-Fei (2015) to split the data into 82K/5K/5K training/validation/test images. Additional 30K images from MSCOCO validation set are also included to improve training as in Lee et al. (2018). Flickr30K dataset contains 31K images collected from the Flickr website, with five textual descriptions per image. We follow Karpathy & Fei-Fei (2015) to split the data into 30K/1K/1K training/validation/test splits. During finetuning, we sample two negative image-text pairs per positive sample from image and text sides, respectively. For COCO, we use batch size of 60 examples, learning rate of 2e− 5 and finetune our model for 20K steps. For Flickr30K, we finetune our model with a batch size of 120 examples and a learning rate of 5e− 5 over maximum 16K steps. To obtain the final results in Table 4, we further sample hard negatives to facilitate the finetuning. For every N steps, we randomly sample 128 negative images per text input and obtain a sparse scoring matrix for the whole training set. For each image, we choose the top 20 ranked negative sentences as hard negative samples. Similarly, we get 20 hard negative images for each sentence according to their scores. The hard negatives are sent to the model as additional negative samples.
In the end, we have two randomly sampled negatives and two hard negative samples per positive sample. N is set to 4000 for COCO and 2500 for Flickr30K.
Visual Entailment (SNLI-VE) Visual Entailment is a task derived from Flickr30K images and Stanford Natural Language Inference (SNLI) dataset, where the goal is to determine the logical relationship between a natural language statement and an image. Similar to BERT for Natural Language Inference (NLI), we treat SNLI-VE as a three-way classification problem and apply an MLP Transform on [CLS] output. The UNITER model is finetuned using cross-entropy loss. The batch size is set to 10K input units and we use AdamW with learning rate of 8e − 5 to train for 3K steps.
Referring Expression Comprehension We use three referring expression datasets: RefCOCO, RefCOCO+, and RefCOCOg for the evaluation, all collected on COCO images. To finetune UNITER on this task, we add a MLP layer on top of the region outputs from Transformer, to compute the alignment score between the query phrase/sentence and each region. Since only one object is paired with the query phrase/sentence, we apply cross-entropy loss on the normalized alignment scores. The finetuning is efficient - we train the model with a batch size of 64 examples and a learning rate of 5e− 5 for only 5 epochs, and achieve state-of-the-art performance. Note all works including ours use off-the-shelf object detectors trained on COCO (and Visual Genome) to extract the visual features. While this does not affect other downstream tasks, it raises an issue for RE comprehension, as the val/test images of RefCOCO, RefCOCO+, and RefCOCOg are a subset of COCO’s training split. Strictly, our object detector is not allowed to train with these val/test images. However, just for a “fair” comparison with concurrent works, we ignore this issue and use the same features (Anderson et al., 2018) as the others. We also update the results of MAttNet using this ”contaminated” features, whose accuracy is 1.5% higher than the original one. As aforementioned, the interaction between sentence and image could start from tokens and pixels instead of the extracted features. We leave this study and RE comprehension with strictly correct features to future work.
A.3 VISUALIZATION
Similar to Kovaleva et al. (2019), we observe several patterns in the attention maps of the UNITER model, as shown in Fig. 3. Note that different from Kovaleva et al. (2019), our attention mechanism
operates in both inter- and intra-modalitiy manners. For completeness, we briefly discuss each pattern here:
• Vertical: attention to special tokens [CLS] or [SEP];
• Diagonal: attention to the token/region itself or preceding/following tokens/regions;
• Vertical + Diagonal: mixture of vertical and diagonal;
• Block: intra-modality attention, i.e., textual self-attention and visual self-attention;
• Heterogeneous: diverse attentions that cannot be categorized and is highly dependent on actual input;
• Reversed Block: inter-modality attention, i.e., text-to-image and image-to-text attention.
Note that Reversed Block (Fig. 3f) shows cross-modality alignment between tokens and regions. In Fig. 4, 5, and 6, we visualize several examples of text-to-image attention to demonstrate the local cross-modality alignment between regions and tokens.
A.4 CONDITIONAL MASKING VS. JOINT RANDOM MASKING
We further discuss the advantage of our proposed conditional masking over joint random masking used in (Tan & Bansal, 2019; Lu et al., 2019). Intuitively, our conditional masking learns better latent alignment of entities (regions and words) across two modalities. Fig. 7 shows an example image with “man with his dog and cat sitting on a sofa”. With conditional masking, when the region of dog is masked, our model should be able to infer that the region is dog, based on the context of both surrounding regions and the full sentence (Fig. 7(a)), and vice versa. However, for the joint masking implementation, it could happen when both the region of dog and the word dog are
masked (Fig. 7(b)). In such case, the model has to make the prediction blindly, which might lead to mis-alignment.
To verify this intuition, we show the validation curves during pre-training of MLM and MRC-kl in Fig. 8. Each sub-figure shows a comparison between applying conditional masking and joint random masking during the pre-training of UNITER. The MLM accuracy measures how well UNITER can reconstruct the masked words, and MRC-kl accuracy18 measures how well UNITER can classify the masked regions. In both cases, as shown in Fig. 8, our conditional masking converges faster and achieves higher final accuracy than joint random masking. In addition, Table 3 (row 10 & 11) shows our conditional masking also performs better on fine-tuned downstream tasks.
A.5 MORE RESULTS ON VCR AND NLVR2
Following the VCR setup in Table. 5, we further construct an ensemble model using 10 UNITERlarge. Table. 7 shows the comparison between VLBERT, ViLBERT and UNITER on VCR. The Q → AR accuracy of our ensemble model outperforms ViLBERT (Lu et al., 2019) ensemble by a large margin of 7.0%. Note even single UNITER-large already outperforms ViLBERT ensemble and VLBERT-large by 3.0%.
Besides, we also compare our UNITER-large with LXMERT (Tan & Bansal, 2019) and VisualBERT (Li et al., 2019b) on an additional testing split of NLVR2 in Table. 8. Our results consistently outperform the previous SOTA on all metrics19 by a large margin of ∼4.0%.
A.6 DIRECT COMPARISON TO VLBERT AND VILBERT
To further demonstrate our idea, we conduct a direct comparison to ViLBERT (Lu et al., 2019) and VLBERT (Su et al., 2019), trained on Conceptual Captions (Sharma et al., 2018). We pre-train UNITER on Conceptual Captions only (instead of 4 datasets in 3.3) using our proposed conditional masking and the best pre-training tasks (MLM + ITM + MRC-kl + MRFR). Table. 9 shows that
18When validating on MRC-kl accuracy, we simply pick the most confident category from the predicted probability and measure its correctness.
19The balanced and unbalanced evaluations were introduced in Suhr & Artzi (2019).
1. What are the advantages of using a single-stream transformer over a two-stream transformer?
2. Can the authors provide visualizations of attention weights to help understand the model's behavior?
3. What is the significance of the modification made to the existing pre-training procedure by conditional masking?
4. How does the reviewer assess the technical contribution of the paper, particularly in comparison to prior works such as VQA?
5. Are the proposed modifications to the BERT training procedure (MLM and MRM) truly novel, or have they been explored before in other studies?

Review
This paper presents a novel method for image-text representations called UNITER. The proposed method has been subsequently tested in many downstream tasks. A detailed ablation study helps to understand the role of each pretrained task in the proposed model.
Although the empirical results are nice, and performing the intensive set of experiments on many different tasks is definitely time-consuming and needs a lot of engineering effort, the technical contribution does not seem significant to me. The paper modifies an existing pre-training procedure with conditional masking (Section 2). I agree this is well-motivated, but it has little novelty, and a similar idea appears in VQA work (see “Dynamic fusion with intra- and inter-modality attention flow for visual question answering”). MLM and MRM are not new training procedures either; they basically extend BERT’s training procedure with the consideration of multiple modalities.
I have some questions for the authors:
(1) What are the advantages of using a single-stream transformer over a two-stream transformer (page 2)? I guess it leads to fewer parameters, but I don’t think this is a big problem.
(2) Some visualization of attention weights would be helpful.
Minor
• In “m ∈ N^M” (Equation 1), what are N and M?
ICLR

Title
UNITER: Learning UNiversal Image-TExt Representations
Abstract
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are jointly processed for visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design three pre-training tasks: Masked Language Modeling (MLM), Image-Text Matching (ITM), and Masked Region Modeling (MRM, with three variants). Different from concurrent work on multimodal pre-training that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). Comprehensive analysis shows that conditional masking yields better performance than unconditional masking. We also conduct a thorough ablation study to find an optimal setting for the combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2.
1 INTRODUCTION
Most Vision-and-Language tasks rely on joint multimodal embeddings to bridge the semantic gap between visual and textual clues in images and text, although such representations are usually tailored for specific tasks. For example, MCB (Fukui et al., 2017), BAN (Kim et al., 2018), and DFAF (Gao et al., 2019) proposed advanced multimodal fusion methods for Visual Question Answering (VQA) (Antol et al., 2015). SCAN (Lee et al., 2018) and MAttNet (Yu et al., 2018) studied learning latent alignment between words and image regions for the Image-Text Retrieval (Wang et al., 2016) and Referring Expression Comprehension (Kazemzadeh et al., 2014) tasks. While each of these proposed models has pushed the state of the art on respective benchmarks, their architectures are diverse and the learned representations are highly task-specific, preventing them from being generalized to other tasks. This raises a million-dollar question: can we learn a universal image-text representation for all V+L tasks?
To answer this question, we introduce UNiversal Image-TExt Representations (UNITER), a large-scale pre-trained model for multimodal embedding. We adopt the Transformer (Vaswani et al., 2017) as the core of our model, to leverage its elegant self-attention mechanism designed for learning contextualized representations. Inspired by BERT (Devlin et al., 2019), which has successfully applied the Transformer to NLP tasks through large-scale language modeling, we pre-train UNITER through three pre-training tasks: (i) Masked Language Modeling (MLM) conditioned on image; (ii) Masked Region Modeling (MRM) conditioned on text; and (iii) joint Image-Text Matching (ITM). To further investigate the effectiveness of MRM, we propose three MRM variants: (i) Masked Region Classification (MRC); (ii) Masked Region Feature Regression (MRFR); and (iii) Masked Region Classification with KL-divergence (MRC-kl).
As shown in Figure 1, UNITER first encodes image regions (visual features and bounding box features) and textual words (tokens and positions) into a common embedding space with an Image Embedder and a Text Embedder, then applies a Transformer module to learn generalizable contextualized embeddings for each region and word through the aforementioned pre-training tasks. Compared with LXMERT (Tan & Bansal, 2019) and ViLBERT (Lu et al., 2019), which use two streams (one Transformer for each modality), our UNITER model can learn joint contextualized representations for image regions and textual words through a single Transformer. Besides, our masked language/region modeling is conditioned on full observation of the image/text, different from other concurrent pre-trained models that apply joint random masking to both modalities. We show that the conditional masking strategy can successfully ease the misalignment between images and text, and obtain better joint embeddings for downstream tasks. A detailed ablation study also demonstrates that the combination of MLM+ITM+MRC-kl+MRFR yields the best pre-training performance.
To demonstrate the power of UNITER, we evaluate on six V+L tasks across nine datasets, including: (i) VQA; (ii) Visual Commonsense Reasoning (VCR) (Zellers et al., 2019); (iii) NLVR2 (Suhr et al., 2019); (iv) Visual Entailment (Xie et al., 2019); (v) Image-Text Retrieval (including zero-shot setting) (Lee et al., 2018); and (vi) Referring Expression Comprehension. Our UNITER model is trained on a large-scale V+L dataset composed of four subsets: (i) COCO (Lin et al., 2014); (ii) Visual Genome (VG) (Krishna et al., 2017); (iii) Conceptual Captions (CC) (Sharma et al., 2018); and (iv) SBU Captions (Ordonez et al., 2011). Experiments show that UNITER achieves new state of the art with significant performance boost across all six downstream tasks. Moreover, training on additional CC and SBU data (containing unseen images/text in downstream tasks) further boosts model performance over training on COCO and VG only.
Our contributions can be summarized as follows: (i) We introduce UNITER, a powerful UNiversal Image-TExt Representation for Vision-and-Language tasks. (ii) We achieve new state of the art (SOTA) on multiple V+L benchmarks, outperforming existing SOTA and concurrent multimodal pre-training methods by a large margin. (iii) We present extensive experiments and analysis to provide useful insights on the effectiveness of each pre-training task/dataset for multimodal encoder training.
2 RELATED WORK
Self-supervised learning utilizes original data as its own source of supervision, which has been applied to many Computer Vision tasks, such as image colorization (Zhang et al., 2016), solving jigsaw puzzles (Noroozi & Favaro, 2016; Trinh et al., 2019), inpainting (Pathak et al., 2016), rotation prediction (Gidaris et al., 2018), and relative location prediction (Doersch et al., 2015). Recently, pre-trained language models such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), GPT2 (Radford et al., 2019), and XLNet (Yang et al., 2019) have shown great advances for NLP tasks. There are two keys to their success: effective pre-training tasks over large language corpus, and the use of Transformer (Vaswani et al., 2017) for learning contextualized text representations.
More recently, there has been some concurrent work on self-supervised learning for multimodal tasks, by pre-training on large-scale image/video and text pairs and then finetuning on downstream tasks. For example, VideoBERT (Sun et al., 2019) applied BERT to learn a bidirectional joint distribution over quantized video frame features and linguistic tokens from video-text pairs. ViLBERT (Lu
et al., 2019) and LXMERT (Tan & Bansal, 2019) introduced the two-stream architecture, where two Transformers are applied to images and text independently, which are then fused by a third Transformer at a later stage. On the other hand, VisualBERT (Li et al., 2019b), Unicoder-VL (Li et al., 2019a), VL-BERT (Su et al., 2019) and B2T2 (Alberti et al., 2019) proposed the single-stream architecture, where a single Transformer is applied to both image and text. Specifically, the LXMERT model was pre-trained with downstream tasks such as VQA (Antol et al., 2015) and GQA (Hudson & Manning, 2019), while the others were pre-trained on image-text pairs only. Our UNITER model belongs to the second family. One key difference between UNITER and the other methods is the masking approach in the pre-training tasks. Instead of randomly masking both image regions and sentence words, we use conditional masking, i.e., masking only one modality while keeping the other untainted. In addition, we examine the best combination of pre-training tasks through a thorough ablation study on the effects of each pre-training task and dataset on downstream tasks.
Another related work is DFAF (Gao et al., 2019), which proposed a novel architecture of inter-modality and intra-modality attention modules to learn the latent alignment between the two modalities for VQA. Compared with Gao et al. (2019), UNITER learns a relatively more generic V+L representation via pre-training.
3 UNIVERSAL IMAGE-TEXT REPRESENTATIONS
In this section, we first introduce the model architecture of UNITER (Section 3.1), then describe the designed pre-training tasks and V+L datasets used for pre-training (Section 3.2 and 3.3).
3.1 MODEL OVERVIEW
The model architecture of UNITER is illustrated in Figure 1. Given a pair of an image and a sentence, UNITER takes the visual regions of the image and the textual tokens of the sentence as input. We design an Image Embedder and a Text Embedder to extract their respective embeddings. These embeddings are then fed into a multi-layer self-attention Transformer to learn a cross-modality contextualized embedding between visual regions and textual tokens. Note that the self-attention mechanism in the Transformer is order-less; thus, it is necessary to explicitly encode the positions/locations of tokens/regions as additional inputs.
Specifically, in the Image Embedder, we first use Faster R-CNN1 to extract the visual features (pooled ROI features) for each region. We also encode the location features for each region via a 7-dimensional vector.2 Both visual and location features are then fed through a fully-connected (FC) layer, to be projected into the same embedding space. The final visual embedding for each region is obtained by summing up the two FC outputs and then passing through a layer normalization (LN) layer. For the Text Embedder, we follow BERT (Devlin et al., 2019) and tokenize the input sentence into WordPieces (Wu et al., 2016). The final representation for each sub-word token3 is obtained by summing up its word embedding and position embedding, followed by another LN layer.4
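For concreteness, a minimal PyTorch sketch of the two embedders follows; the hidden size (768), ROI feature dimension (2048), vocabulary size, and module names are assumptions matching typical base-size settings, not the exact released implementation.

```python
import torch
import torch.nn as nn

class ImageEmbedder(nn.Module):
    """Projects pooled ROI features and 7-d location features into a joint space."""
    def __init__(self, img_dim=2048, pos_dim=7, hidden=768):
        super().__init__()
        self.img_fc = nn.Linear(img_dim, hidden)  # FC over pooled ROI features
        self.pos_fc = nn.Linear(pos_dim, hidden)  # FC over [x1,y1,x2,y2,w,h,w*h]
        self.layer_norm = nn.LayerNorm(hidden)

    def forward(self, roi_feats, roi_boxes):
        # Sum the two projections, then layer-normalize.
        return self.layer_norm(self.img_fc(roi_feats) + self.pos_fc(roi_boxes))

class TextEmbedder(nn.Module):
    """WordPiece token embedding + position embedding, followed by LayerNorm."""
    def __init__(self, vocab_size=30522, max_len=512, hidden=768):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.pos_emb = nn.Embedding(max_len, hidden)
        self.layer_norm = nn.LayerNorm(hidden)

    def forward(self, token_ids):
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.layer_norm(self.tok_emb(token_ids) + self.pos_emb(pos))
```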
We introduce three main tasks to pre-train our model: Masked Language Modeling conditioned on image regions (MLM), Masked Region Modeling conditioned on input text (with three variants) (MRM), and Image-Text Matching (ITM). As shown in Figure 1, our MRM and MLM are in analogy to BERT, where we randomly mask some words or regions from the input and learn to recover the words or regions as the output of the Transformer. Specifically, word masking is realized by replacing the token with a special token [MASK], and region masking is implemented by replacing the visual feature vector with all zeros. Note that each time we only mask one modality while keeping the other modality intact, instead of randomly masking both modalities like ViLBERT and LXMERT. This prevents potential misalignment when a masked region happens to be described by a masked word. Empirically, we show that with conditional masking, our model is able to learn better embeddings
1Our Faster R-CNN was pre-trained on Visual Genome object+attribute data (Anderson et al., 2018). 2[x1, y1, x2, y2, w, h, w ∗ h] (normalized top/left/bottom/right coordinates, width, height, and area.) 3We use word/sub-word and token interchangeably throughout the rest of the paper. 4We also use a special modality embedding to help the model distinguish between textual and visual input, which is similar to the ‘segment embedding’ in BERT. This embedding is also summed before the LN layer in each embedder. For simplicity, this modality embedding is omitted in Figure 1.
(in Section 4.2). Lastly, we also learn an instance-level alignment (rather than token/region-level) between the whole image and the sentence via ITM. During training, we sample both positive and negative image-sentence pairs and learn their matching scores.
To pre-train UNITER with the aforementioned different tasks, we randomly sample one pre-training task for each mini-batch and train on only one objective per SGD update.
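A minimal sketch of this sampling loop together with the conditional masking strategy described above; the [MASK] token id and all function/task names are illustrative assumptions.

```python
import random
import torch

MASK_PROB = 0.15  # masking rate used for both MLM and MRM
MASK_ID = 103     # BERT's [MASK] token id (assumption)

def conditional_mask(token_ids, region_feats, task):
    """Mask exactly one modality; the other stays fully observed."""
    token_ids, region_feats = token_ids.clone(), region_feats.clone()
    mask = None
    if task == "mlm":                        # mask words, keep all regions
        mask = torch.rand(token_ids.shape, device=token_ids.device) < MASK_PROB
        token_ids[mask] = MASK_ID
    elif task in ("mrfr", "mrc", "mrc_kl"):  # mask regions, keep all words
        mask = torch.rand(region_feats.shape[:2],
                          device=region_feats.device) < MASK_PROB
        region_feats[mask] = 0.0             # masked visual features -> zeros
    return token_ids, region_feats, mask

# One task is sampled per mini-batch; each SGD step optimizes one objective.
task = random.choice(["mlm", "itm", "mrfr", "mrc_kl"])
```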
3.2 PRE-TRAINING TASKS
Masked Language Modeling (MLM) We denote the image regions as $\mathbf{v} = \{v_1, \dots, v_K\}$, the input words as $\mathbf{w} = \{w_1, \dots, w_T\}$, and the mask indices as $\mathbf{m} \in \mathbb{N}^M$.5 In MLM, we randomly mask out the input words with a probability of 15%, and replace the masked ones $\mathbf{w}_m$ with the special token [MASK].6 The goal is to predict these masked words based on the observation of their surrounding words $\mathbf{w}_{\setminus m}$ and all image regions $\mathbf{v}$, by minimizing the negative log-likelihood:
$$\mathcal{L}_{\mathrm{MLM}}(\theta) = -\,\mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\; \log P_\theta(\mathbf{w}_m \mid \mathbf{w}_{\setminus m}, \mathbf{v}) \qquad (1)$$
where $\theta$ denotes the trainable parameters. Each pair $(\mathbf{w}, \mathbf{v})$ is sampled from the whole training set $D$.
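A minimal sketch of this objective, assuming the Transformer has already produced per-token logits over the vocabulary; names and shapes are assumptions.

```python
import torch.nn.functional as F

def mlm_loss(logits, target_ids, mask):
    """Negative log-likelihood over masked positions only (Eq. 1).

    logits:     (B, T, vocab) Transformer outputs at text positions
    target_ids: (B, T) original (unmasked) token ids
    mask:       (B, T) boolean mask of positions replaced by [MASK]
    """
    return F.cross_entropy(logits[mask], target_ids[mask])
```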
Image-Text Matching (ITM) In ITM, an additional special token [CLS] is fed into our model, which indicates the fused representation of both modalities. The inputs to ITM are a sentence and a set of image regions, and the output is a binary label (0 for negative match, and 1 for positive match). We extract the representation of the [CLS] token as the joint representation of the input text and image, which is then fed into a fully-connected layer and a sigmoid function to predict a score between 0 and 1. We denote the output score as $s_\theta(\mathbf{w},\mathbf{v})$. The ITM supervision is over the [CLS] token.7 During training, we sample a positive or negative pair $(\mathbf{w},\mathbf{v})$ from the dataset $D$ at each step. The negative pair is created by replacing the image or text in a paired sample with a randomly selected one from other samples. We denote the label as $y \in \{0, 1\}$, indicating whether the sampled pair is a match. Then we apply a binary cross-entropy loss for optimization:
$$\mathcal{L}_{\mathrm{ITM}}(\theta) = -\,\mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\left[\, y \log s_\theta(\mathbf{w},\mathbf{v}) + (1-y)\log\big(1 - s_\theta(\mathbf{w},\mathbf{v})\big) \right] \qquad (2)$$
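A sketch of the match-score head and the loss in Eq. (2); the hidden size is an assumption.

```python
import torch
import torch.nn as nn

class ITMHead(nn.Module):
    """Binary match score s_theta(w, v) computed from the [CLS] state."""
    def __init__(self, hidden=768):
        super().__init__()
        self.fc = nn.Linear(hidden, 1)

    def forward(self, cls_state):
        return torch.sigmoid(self.fc(cls_state)).squeeze(-1)

def itm_loss(score, label):
    # label is 1 for a matched pair, 0 for a sampled negative pair
    return nn.functional.binary_cross_entropy(score, label.float())
```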
Masked Region Modeling (MRM) Similar to MLM, we also sample image regions and mask their visual features with a probability of 15%. The model is trained to reconstruct the masked regions $\mathbf{v}_m$ given the remaining regions $\mathbf{v}_{\setminus m}$ and all the words $\mathbf{w}$. The visual features of the masked regions are replaced by zeros. Unlike textual tokens that are represented as discrete labels, visual features are high-dimensional and continuous, and thus cannot be supervised via class likelihood. Instead, we propose three variants of Masked Region Modeling, which share the same objective base:
$$\mathcal{L}_{\mathrm{MRM}}(\theta) = \mathbb{E}_{(\mathbf{w},\mathbf{v})\sim D}\; f_\theta(\mathbf{v}_m \mid \mathbf{v}_{\setminus m}, \mathbf{w}) \qquad (3)$$
1) Masked Region Feature Regression (MRFR) MRFR learns to regress the Transformer output of each masked region $v_m^{(i)}$ to its visual features. Specifically, we apply an FC layer to convert the Transformer output into a vector $h_\theta(v_m^{(i)})$ of the same dimension as the input ROI-pooled feature $r(v_m^{(i)})$. We then apply an L2 regression between the two: $f_\theta(\mathbf{v}_m \mid \mathbf{v}_{\setminus m}, \mathbf{w}) = \sum_{i=1}^{M} \big\lVert h_\theta(v_m^{(i)}) - r(v_m^{(i)}) \big\rVert_2^2$.
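A sketch of the MRFR head, with dimensions as assumptions.

```python
import torch.nn as nn

class MRFRHead(nn.Module):
    """Regresses the Transformer output of each masked region to its ROI feature."""
    def __init__(self, hidden=768, img_dim=2048):
        super().__init__()
        self.fc = nn.Linear(hidden, img_dim)

    def forward(self, masked_states, target_roi_feats):
        pred = self.fc(masked_states)  # h_theta(v_m), same dim as r(v_m)
        return ((pred - target_roi_feats) ** 2).sum(-1).mean()  # L2 regression
```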
2) Masked Region Classification (MRC) MRC learns to predict the object semantic class for each masked region. We first feed the Transformer output of the masked region $v_m^{(i)}$ into an FC layer to predict the scores of K object classes, which further go through a softmax function to be transformed into a normalized distribution $g_\theta(v_m^{(i)}) \in \mathbb{R}^K$. Note that there is no ground-truth label, as the object categories are not provided. Thus, we use the object detection output from Faster R-CNN, and take the detected object category (with the highest confidence score) as the label of the masked region, which is converted into a one-hot vector $c(v_m^{(i)}) \in \mathbb{R}^K$. The final objective minimizes the cross-entropy (CE) loss: $f_\theta(\mathbf{v}_m \mid \mathbf{v}_{\setminus m}, \mathbf{w}) = \sum_{i=1}^{M} \mathrm{CE}\big(c(v_m^{(i)}), g_\theta(v_m^{(i)})\big)$.
5 $\mathbb{N}$ denotes the natural numbers, M is the number of masked tokens, and $\mathbf{m}$ is the set of masked indices.
6 Following BERT, we decompose this 15% into 10% random words, 10% unchanged, and 80% [MASK].
7 The supervision over the [CLS] token in pre-training also alleviates the input mismatch problem between pre-training tasks and downstream finetuning tasks, since most of the downstream tasks take the representation of the [CLS] token as the joint representation.
3) Masked Region Classification with KL-Divergence (MRC-kl) MRC takes the most likely object class from the object detection model as the hard label (with probability 0 or 1), which assumes the detected object class is the ground-truth label for the region. However, this may not be true, as no ground-truth label is provided for the detected region. Thus, in MRC-kl, we avoid this assumption by using a soft label as the supervision signal, which is the raw output from the detector (i.e., a distribution over object classes $\tilde{c}(v_m^{(i)})$). MRC-kl aims to distill such knowledge into UNITER as in Hinton et al. (2015), by minimizing the KL divergence between the two distributions: $f_\theta(\mathbf{v}_m \mid \mathbf{v}_{\setminus m}, \mathbf{w}) = \sum_{i=1}^{M} D_{\mathrm{KL}}\big(\tilde{c}(v_m^{(i)}) \,\big\|\, g_\theta(v_m^{(i)})\big)$.
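A sketch covering both classification variants; the number of detector classes (1601, the label-set size commonly used with the Visual Genome detector) is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class MRCHead(nn.Module):
    """Predicts scores over K detector classes for each masked region."""
    def __init__(self, hidden=768, num_classes=1601):
        super().__init__()
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, masked_states):
        return self.fc(masked_states)  # unnormalized class scores

def mrc_loss(logits, detector_hard_labels):
    # MRC: hard label = detector's most confident class per region
    return F.cross_entropy(logits, detector_hard_labels)

def mrc_kl_loss(logits, detector_soft_labels):
    # MRC-kl: distill the detector's full class distribution
    return F.kl_div(F.log_softmax(logits, dim=-1),
                    detector_soft_labels, reduction="batchmean")
```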
3.3 PRE-TRAINING DATASETS
We construct our pre-training dataset based on four existing V+L datasets: COCO (Lin et al., 2014), Visual Genome (VG) (Krishna et al., 2017), Conceptual Captions (CC) (Sharma et al., 2018), and SBU Captions (Ordonez et al., 2011). Only image and sentence pairs are used for our pre-training purpose, which makes the model framework more scalable, as additional image-sentence pairs are easy to harvest for further pre-training.
To study the effects of different datasets on pre-training, we divide the four datasets into two categories. The first one consists of image captioning data from COCO and dense captioning data from VG. We call it “In-domain” data, as most V+L tasks are built on top of these two datasets. To obtain a ‘fair’ data split, we merge the raw training and validation splits from COCO, and exclude all validation and test images that appear in downstream tasks. We also exclude all co-occurring Flickr30K (Plummer et al., 2015) images via URL matching, as both COCO and Flickr30K images were crawled from Flickr and may have overlaps8. The same rule was applied to Visual Genome as well. In this way, we obtain 5.6M image-text pairs for training and 131K image-text pairs for our internal validation, which is half the size of the dataset used in LXMERT (Tan & Bansal, 2019), due to the filtering of overlapping images and the use of image-text pairs only. We also use additional Out-of-domain data from Conceptual Captions (Sharma et al., 2018) and SBU Captions (Ordonez et al., 2011) for model training9. The statistics on the cleaned splits are provided in Table 1.
4 EXPERIMENTS
We evaluate UNITER on six V+L tasks (listed in Table 2), by transferring the pre-trained model to each target task and finetuning through end-to-end training. We report experimental results on two model sizes: UNITER-base with 12 layers and UNITER-large with 24 layers10.
8A total of 222 images were eliminated through this process. 9We apply the same URL matching method, excluding 109 images from the training set.
10UNITER-base: L=12, H=768, A=12, Total Parameters=86M. UNITER-large: L=24, H=1024, A=16, Total Parameters=303M (L: number of stacked Transformer blocks; H: hidden activation dimension; A: number of attention heads). 882 and 3645 V100 GPU hours were used for pre-training UNITER-base and UNITER-large.
4.1 DOWNSTREAM TASKS
In VQA, VCR and NLVR2 tasks, given an input image (or a pair of images) and a natural language question (or description), the model predicts an answer (or judges the correctness of the description) based on the visual content in the image. For Visual Entailment, we evaluate on the SNLI-VE dataset. The goal is to predict whether a given image semantically entails an input sentence. Classification accuracy over three classes (“Entailment”, “Neutral” and “Contradiction”) is used to measure model performance. For Image-Text Retrieval, we consider two datasets (COCO and Flickr30K) and evaluate the model in two settings: Image Retrieval (IR) and Text Retrieval (TR). Referring Expression (RE) Comprehension requires the model to select the target from a set of image region proposals given the query description. Models are evaluated on both ground-truth objects and detected proposals11 (MAttNet (Yu et al., 2018)).
For VQA, VCR, NLVR2, Visual Entailment and Image-Text Retrieval, we extract the joint embedding of the input image-text pairs via a multi-layer perceptron (MLP) from the representation of the [CLS] token. For RE Comprehension, we use the MLP to compute the region-wise alignment scores. These MLP layers are learned during the finetuning stage. Specifically, we formulate VQA, VCR, NLVR2, Visual Entailment and RE Comprehension as classification problems and minimize the cross-entropy loss over the ground-truth answers/responses. For Image-Text Retrieval, we formulate it as a ranking problem. During finetuning, we sample three pairs of image and text, one positive pair from the dataset and two negative pairs by randomly replacing its sentence/image with others. We compute the similarity scores (based on the joint embedding) for both positive and negative pairs, and maximize the margin between them through triplet loss.
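As an illustration, a sketch of the margin-based ranking objective used for retrieval finetuning; the scoring head and margin value are assumptions not specified at this level of detail in the text.

```python
import torch
import torch.nn as nn

# MLP mapping the [CLS] joint embedding of an image-text pair to a score.
score_head = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 1))

def triplet_loss(pos_cls, neg_cls, margin=0.2):  # margin is an assumption
    """Maximize the margin between positive- and negative-pair scores."""
    s_pos = score_head(pos_cls)  # (B, 1) scores of matched pairs
    s_neg = score_head(neg_cls)  # (B, 1) scores of mismatched pairs
    return torch.clamp(margin - s_pos + s_neg, min=0).mean()
```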
4.2 EVALUATION ON PRE-TRAINING TASKS
We analyze the effectiveness of different pre-training settings through ablation studies over VQA, NLVR2, Flickr30K and RefCOCO+ as representative V+L benchmarks. In addition to standard metrics for each benchmark (listed in Table 2), we also use Meta-Sum (the sum of all the scores across all the benchmarks) as a global metric.
Firstly, we establish two baselines: Line 1 (L1) in Table 3 indicates no pre-training is involved, and L2 shows the results from MLM initialized with pre-trained weights from Devlin et al. (2019). Although MLM trained on text only did not absorb any image information during pre-training, we see a gain of approximately +30 on Meta-Sum over L1. Hence, we use the pre-trained weights in L2 to initialize our model for the following experiments.
11The evaluation splits of RE comprehension using detected proposals are denoted as vald, testd, etc.
Secondly, we validate the effectiveness of each pre-training task through a thorough ablation study. Comparing L2 and L3, MRFR (L3) achieves better results than MLM (L2) only on NLVR2. On the other hand, when pre-trained on ITM (L4) or MLM (L5) only, we observe a significant improvement across all the tasks over the L1 and L2 baselines. When combining different pre-training tasks, MLM + ITM (L6) improves over single ITM (L4) or MLM (L5). When MLM, ITM and MRM are jointly trained (L7-L10), we observe consistent performance gains across all the benchmarks. Among the three variants of MRM (L7-L9), we observe that MRC-kl (L9) achieves the best performance (397.09) when combined with MLM + ITM, while MRC (L7) performs the worst (393.97). When combining MRC-kl and MRFR together with MLM and ITM (L10), we find that they are complementary to each other, which leads to the highest Meta-Sum score. We use this as the optimal pre-training setting for further experiments.
Additionally, we validate the contributions of conditional masking through a comparison study. When we perform random masking on both modalities simultaneously during pre-training, i.e., w/o conditional masking (L11), we observe a decrease in Meta-Sum score (396.51) compared to that with conditional masking (399.97). This indicates that the conditional masking strategy enables the model to learn better joint image-text representations effectively.
Lastly, we study the effects of pre-training datasets. Our experiments so far have been focused on In-domain data. In this study, we pre-train our model on Out-of-domain data (Conceptual Captions
+ SBU Captions). A performance drop (395.45 in L12) from the model trained on In-domain data (COCO + Visual Genome) (399.97 in L10) shows that although Out-of-domain data contain more images, the model still benefits more from being exposed to similar downstream images during pretraining. We further pre-train our model on both In-domain and Out-of-domain data. With doubled data size, the model continues to improve (402.50 in L13).
4.3 RESULTS ON DOWNSTREAM TASKS
Table 4 presents the results of UNITER on all downstream tasks. Both our base and large models are pre-trained on In-domain+Out-of-domain datasets, with the optimal pre-training setting: MLM+ITM+MRC-kl+MRFR. The implementation details of each task are provided in Appendix A.2. We compare with both task-specific models and concurrent pre-trained models on each downstream task. SOTA task-specific models include: MCAN (Yu et al., 2019) for VQA, MaxEnt (Suhr et al., 2019) for NLVR2, B2T2 (Alberti et al., 2019) for VCR, SCAN (Lee et al., 2018) for Image-Text Retrieval, EVE-Image (Xie et al., 2019) for SNLI-VE, and MAttNet for RE Comprehension (RefCOCO, RefCOCO+ and RefCOCOg)12. Concurrent pre-trained models include: ViLBERT, LXMERT, Unicoder-VL, VisualBERT and VLBERT.
Results show that our UNITER-large model achieves new state of the art across all the benchmarks. UNITER-base model also outperforms the others by a large margin across all tasks except VQA. Specifically, our UNITER-base model outperforms SOTA by approximately +2.8% for VCR on Q→AR, +2.5% for NLVR2, +7% for SNLI-VE, +4% on R@1 for Image-Text Retrieval (+15% for zero-shot setting), and +2% for RE Comprehension.
Note that LXMERT pre-trains with downstream VQA (+VG+GQA) data, which may help adapt the model to VQA task. However, when evaluated on unseen tasks such as NLVR2, UNITER-base achieves 3% gain over LXMERT. In addition, among all the models pre-trained on image-text pairs only, our UNITER-base outperforms the others by >1.5% on VQA.
It is also worth mentioning that both ViLBERT and LXMERT observed that the two-stream model outperforms the single-stream model, while our results show empirically that with our pre-training setting, a single-stream model can achieve new state-of-the-art results, with much fewer parameters (UNITER-base: 86M, LXMERT: 183M, ViLBERT: 221M)13.
For VCR, we propose a two-stage pre-training approach: (i) pre-train on standard pre-training datasets; and then (ii) pre-train on downstream VCR dataset. Interestingly, while VLBERT and B2T2 observed that pre-training is not very helpful on VCR, we find that the second-stage pretraining can significantly boost model performance, while the first-stage pre-training still helps but with limited effects (results shown in Table 5). This indicates that the proposed two-stage approach is highly effective in our pre-trained model over new data that are unseen in pre-training datasets.
Different from other tasks, NLVR2 takes two images as input. Thus, directly finetuning UNITER pre-trained with image-sentence pairs might not lead to optimal performance, as the interactions between paired images are not learned during the pre-training stage. Thus, we experimented with three modified settings on NLVR2: (i) Triplet: joint embedding of images pairs and query captions; (ii) Pair: individual embedding of each image and each query caption; and (iii) Pair-biattn: a bidirectional attention is added to the Pair model to learn the interactions between the paired images.
Comparison results are presented in Table 6. The Pair setting achieves better performance than the Triplet setting, even without cross-attention between the image pairs. We hypothesize that this is due to the fact that our UNITER is pre-trained with image-text pairs; thus, it is difficult to finetune a pair-based pre-trained model on triplet input. The bidirectional attention mechanism in the Pair-biattn setting, however, compensates for the lack of cross-attention between images, hence yielding the best performance by a large margin. This shows that with minimal surgery on the top layer of UNITER, our pre-trained model can adapt to new tasks that are very different from the pre-training tasks.
12MAttNet results are updated using the same features as the others. More details are provided in Appendix. 13The word embedding layer contains excessive rare words, thus excluded from the parameter counts.
Setting       dev     test-P
Triplet       72.76   73.55
Pair          75.37   75.97
Pair-biattn   77.14   77.87
Table 6: Experiments on three modified settings for NLVR2. All models use pre-trained UNITER-base.
5 CONCLUSION
In this paper, we present UNITER, a large-scale pre-trained model providing UNiversal Image-TExt Representations for Vision-and-Language tasks. Three main pre-training tasks are proposed and evaluated through extensive ablation studies. Trained with both in-domain and out-of-domain datasets, UNITER outperforms state-of-the-art models over multiple V+L tasks by a significant margin. Future work includes studying early interaction between raw image pixels and sentence tokens, as well as developing more effective pre-training tasks.
A APPENDIX
A.1 DATASET COLLECTION
As introduced, our full dataset is composed of four existing V+L datasets: COCO, Visual Genome, Conceptual Captions, and SBU Captions. The dataset collection is not simply combining them, as we need to make sure none of the downstream evaluation images are seen during pre-training. Among them, COCO is the most tricky one to clean, as several downstream tasks are built based on it. Figure 2 lists the splits from VQA, Image-Text Retrieval, COCO Captioning, RefCOCO/RefCOCO+/RefCOCOg, and the bottom-up top-down (BUTD) detection (Anderson et al., 2018), all from COCO images.
As observed, the validation and test splits of different tasks are scattered across the raw COCO splits. Therefore, we exclude all those evaluation images that appeared in the downstream tasks. In addition, we also exclude all co-occurring Flickr30K images via URL matching, making sure the zero-shot image-text retrieval evaluation on Flickr is fair. The remaining images become the COCO subset within our full dataset, as shown in Figure 2 bottom row. We apply the same rules to Visual Genome, Conceptual Captions, and SBU Captions.
A.2 IMPLEMENTATION DETAILS
Our models are implemented based on PyTorch14 (Paszke et al., 2017). To speed up training, we use Nvidia Apex15 for mixed precision training. All pre-training experiments are run on Nvidia V100 GPUs (16GB VRAM; PCIe connection). Finetuning experiments are implemented on the same hardware or Titan RTX GPUs (24GB VRAM). To further speed up training, we implement dynamic sequence length to reduce padding and batch examples by number of input units (text tokens + image regions). For large pre-training experiments, we use Horovod16 + NCCL17 for multi-node communications (on TCP connections through ethernet) with up to 4 nodes of 4x V100 server. Gradient accumulation (Ott et al., 2018) is also applied to reduce multi-GPU communication overheads.
Visual Question Answering (VQA) We follow Yu et al. (2019) to take the 3129 most frequent answers as answer candidates, and assign a soft target score to each candidate based on its relevancy to the 10 human responses. To finetune on the VQA dataset, we use a binary cross-entropy loss to train a multi-label classifier with a batch size of 10240 input units over a maximum of 5K steps. We use the AdamW optimizer (Loshchilov & Hutter, 2019) with a learning rate of 3e-4 and a weight decay of 0.01. At inference time, the most probable answer is selected as the predicted answer. For results on the test-dev and test-std splits, both training and validation sets are used for training, and additional question-answer pairs from Visual Genome are used for data augmentation as in Yu et al. (2019).
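A sketch of the corresponding finetuning head and loss; the two-layer head and its width are assumptions.

```python
import torch.nn as nn

NUM_ANSWERS = 3129
vqa_head = nn.Sequential(nn.Linear(768, 1536), nn.GELU(),
                         nn.Linear(1536, NUM_ANSWERS))
bce = nn.BCEWithLogitsLoss()

def vqa_step(cls_state, soft_targets):
    """Multi-label classification against soft target scores in [0, 1]."""
    logits = vqa_head(cls_state)  # (B, 3129)
    return bce(logits, soft_targets)

# Inference: predicted answer index = vqa_head(cls_state).argmax(dim=-1)
```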
14 https://pytorch.org/
15 https://github.com/NVIDIA/apex
16 https://github.com/horovod/horovod
17 https://github.com/NVIDIA/nccl
Visual Commonsense Reasoning (VCR) VCR can be decomposed into two multiple-choice subtasks: a question-answering task (Q → A) and an answer-justification task (QA → R). In the holistic setting (Q → AR), a model needs to first choose an answer from the answer choices, then select a supporting rationale from the rationale choices if the chosen answer is correct. We train our model in the two settings simultaneously. When testing in the holistic setting, we first apply the model to predict an answer, then obtain the rationale from the same model based on the given question and the predicted answer. To finetune on the VCR dataset, we concatenate the question (the question and the ground-truth answer) and each answer (rationale) choice from the four possible answer (rationale) candidates. The ‘modality embedding’ is extended to help distinguish question, answer and rationale. A cross-entropy loss is used to train a classifier over two classes (‘‘right’’ or ‘‘wrong’’) for each question-answer pair (question-answer-rationale triplet) with a batch size of 4096 input units over a maximum of 5K steps. We use the AdamW optimizer with a learning rate of 1e-4 and a weight decay of 0.01.
Since the images and text in the VCR dataset are very different from our pre-training dataset, we further pre-train our model on VCR, using MLM, MRFR and MRC-kl as the pre-training tasks. ITM is discarded because the text in VCR does not explicitly describe the image. The results of both pre-trainings on VCR are reported in Table 5 and discussed in the main text. In conclusion, for downstream tasks that contain new data which is very different from the pre-training datasets, second-stage pre-training helps further boost the performance.
In our implementation, the second-stage pre-training is implemented with a batch size of 4096 input units, a learning rate of 3e-4 and a weight decay of 0.01 over a maximum of 60K steps. After second-stage pre-training, we finetune our model with a learning rate of 6e-5 over a maximum of 8K steps.
Natural Language for Visual Reasoning for Real (NLVR2) NLVR2 is a new and challenging task for visual reasoning. The goal is to determine whether a natural language statement is true about a given image pair. Here we discuss the three architecture variants of NLVR2 finetuning in detail. Since UNITER only handles one image and one text input during pre-training, the ‘modality embedding’ is extended to help distinguish the additional image presented in the NLVR2 task. For the Triplet setup, we concatenate the image regions and then feed them into the UNITER model. An MLP transform is applied on the [CLS] output for binary classification. For the Pair setup, we treat one input example as two text-image pairs by repeating the text. The two [CLS] outputs from UNITER are then depth-concatenated as the joint embedding for the example. Another MLP further transforms this embedding for the final classification. For the Pair-biattn setup, the input format is the same as in the Pair setup. As for the joint representation, instead of relying on only the two [CLS] outputs, we apply a multi-head attention layer (Vaswani et al., 2017) on one sequence of joint image-text embeddings to attend to the other sequence of embeddings, and vice versa. After these ‘bidirectional’ attention interactions, a simple attentional pooling is applied on each output sequence, and a final concat+MLP layer transforms the cross-attended joint representation for true/false classification.
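A sketch of the Pair-biattn variant follows; it relies on PyTorch's built-in multi-head attention, and the attentional-pooling and classifier details are assumptions.

```python
import torch
import torch.nn as nn

class PairBiAttn(nn.Module):
    """Bidirectional attention between the two image-text sequences of NLVR2."""
    def __init__(self, hidden=768, heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.pool = nn.Linear(hidden, 1)  # attentional pooling weights
        self.cls = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 2))

    def _attend_and_pool(self, query_seq, context_seq):
        out, _ = self.attn(query_seq, context_seq, context_seq)
        w = torch.softmax(self.pool(out), dim=1)  # (B, T, 1)
        return (w * out).sum(dim=1)               # (B, hidden)

    def forward(self, seq_a, seq_b):
        pooled_a = self._attend_and_pool(seq_a, seq_b)  # a attends to b
        pooled_b = self._attend_and_pool(seq_b, seq_a)  # b attends to a
        return self.cls(torch.cat([pooled_a, pooled_b], dim=-1))
```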
We finetune UNITER on NLVR2 for 8K steps with a batch size of 10K input units. The AdamW optimizer is used with a learning rate of 1e-4 and a weight decay of 0.01.
Image-Text Retrieval Two datasets are considered for this task: COCO and Flickr30K. COCO consists of 123K images, each accompanied by five human-written captions. We follow Karpathy & Fei-Fei (2015) to split the data into 82K/5K/5K training/validation/test images. An additional 30K images from the MSCOCO validation set are also included to improve training, as in Lee et al. (2018). The Flickr30K dataset contains 31K images collected from the Flickr website, with five textual descriptions per image. We follow Karpathy & Fei-Fei (2015) to split the data into 30K/1K/1K training/validation/test splits. During finetuning, we sample two negative image-text pairs per positive sample from the image and text sides, respectively. For COCO, we use a batch size of 60 examples, a learning rate of 2e-5 and finetune our model for 20K steps. For Flickr30K, we finetune our model with a batch size of 120 examples and a learning rate of 5e-5 over a maximum of 16K steps. To obtain the final results in Table 4, we further sample hard negatives to facilitate the finetuning. Every N steps, we randomly sample 128 negative images per text input and obtain a sparse scoring matrix for the whole training set. For each image, we choose the top 20 ranked negative sentences as hard negative samples. Similarly, we get 20 hard negative images for each sentence according to their scores. The hard negatives are sent to the model as additional negative samples.
In the end, we have two randomly sampled negatives and two hard negative samples per positive sample. N is set to 4000 for COCO and 2500 for Flickr30K.
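The hard-negative selection step can be sketched as follows, assuming the pairwise scores have already been computed and known positives masked out (e.g., set to -inf).

```python
import torch

def mine_hard_negatives(score_matrix, k=20):
    """Pick the top-k highest-scoring negatives per query.

    score_matrix: (num_queries, num_candidates) model scores with
    positives masked out beforehand.
    """
    return torch.topk(score_matrix, k, dim=1).indices  # (num_queries, k)
```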
Visual Entailment (SNLI-VE) Visual Entailment is a task derived from Flickr30K images and the Stanford Natural Language Inference (SNLI) dataset, where the goal is to determine the logical relationship between a natural language statement and an image. Similar to BERT for Natural Language Inference (NLI), we treat SNLI-VE as a three-way classification problem and apply an MLP transform on the [CLS] output. The UNITER model is finetuned using a cross-entropy loss. The batch size is set to 10K input units, and we use AdamW with a learning rate of 8e-5 to train for 3K steps.
Referring Expression Comprehension We use three referring expression datasets: RefCOCO, RefCOCO+, and RefCOCOg for the evaluation, all collected on COCO images. To finetune UNITER on this task, we add an MLP layer on top of the region outputs from the Transformer, to compute the alignment score between the query phrase/sentence and each region. Since only one object is paired with the query phrase/sentence, we apply a cross-entropy loss on the normalized alignment scores. The finetuning is efficient: we train the model with a batch size of 64 examples and a learning rate of 5e-5 for only 5 epochs, and achieve state-of-the-art performance. Note that all works, including ours, use off-the-shelf object detectors trained on COCO (and Visual Genome) to extract the visual features. While this does not affect other downstream tasks, it raises an issue for RE comprehension, as the val/test images of RefCOCO, RefCOCO+, and RefCOCOg are a subset of COCO’s training split. Strictly speaking, our object detector should not be trained with these val/test images. However, just for a “fair” comparison with concurrent works, we ignore this issue and use the same features (Anderson et al., 2018) as the others. We also update the results of MAttNet using these “contaminated” features, whose accuracy is 1.5% higher than the original one. As aforementioned, the interaction between sentence and image could start from tokens and pixels instead of the extracted features. We leave this study, and RE comprehension with strictly correct features, to future work.
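A sketch of this alignment head; the MLP width is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

align_head = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 1))

def re_loss(region_states, gt_region_idx):
    """Cross-entropy over normalized region-query alignment scores."""
    scores = align_head(region_states).squeeze(-1)  # (B, num_regions)
    return F.cross_entropy(scores, gt_region_idx)   # one target region per query
```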
A.3 VISUALIZATION
Similar to Kovaleva et al. (2019), we observe several patterns in the attention maps of the UNITER model, as shown in Fig. 3. Note that different from Kovaleva et al. (2019), our attention mechanism
operates in both inter- and intra-modality manners. For completeness, we briefly discuss each pattern here:
• Vertical: attention to special tokens [CLS] or [SEP];
• Diagonal: attention to the token/region itself or preceding/following tokens/regions;
• Vertical + Diagonal: mixture of vertical and diagonal;
• Block: intra-modality attention, i.e., textual self-attention and visual self-attention;
• Heterogeneous: diverse attentions that cannot be categorized and are highly dependent on the actual input;
• Reversed Block: inter-modality attention, i.e., text-to-image and image-to-text attention.
Note that Reversed Block (Fig. 3f) shows cross-modality alignment between tokens and regions. In Fig. 4, 5, and 6, we visualize several examples of text-to-image attention to demonstrate the local cross-modality alignment between regions and tokens.
A.4 CONDITIONAL MASKING VS. JOINT RANDOM MASKING
We further discuss the advantage of our proposed conditional masking over the joint random masking used in (Tan & Bansal, 2019; Lu et al., 2019). Intuitively, our conditional masking learns better latent alignment of entities (regions and words) across the two modalities. Fig. 7 shows an example image with “man with his dog and cat sitting on a sofa”. With conditional masking, when the region of dog is masked, our model should be able to infer that the region is dog, based on the context of both the surrounding regions and the full sentence (Fig. 7(a)), and vice versa. However, with the joint masking implementation, it could happen that both the region of dog and the word dog are masked (Fig. 7(b)). In such a case, the model has to make the prediction blindly, which might lead to misalignment.
To verify this intuition, we show the validation curves during pre-training of MLM and MRC-kl in Fig. 8. Each sub-figure shows a comparison between applying conditional masking and joint random masking during the pre-training of UNITER. The MLM accuracy measures how well UNITER can reconstruct the masked words, and MRC-kl accuracy18 measures how well UNITER can classify the masked regions. In both cases, as shown in Fig. 8, our conditional masking converges faster and achieves higher final accuracy than joint random masking. In addition, Table 3 (row 10 & 11) shows our conditional masking also performs better on fine-tuned downstream tasks.
A.5 MORE RESULTS ON VCR AND NLVR2
Following the VCR setup in Table 5, we further construct an ensemble model using 10 UNITER-large models. Table 7 shows the comparison between VLBERT, ViLBERT and UNITER on VCR. The Q → AR accuracy of our ensemble model outperforms the ViLBERT (Lu et al., 2019) ensemble by a large margin of 7.0%. Note that even a single UNITER-large already outperforms the ViLBERT ensemble and VLBERT-large by 3.0%.
Besides, we also compare our UNITER-large with LXMERT (Tan & Bansal, 2019) and VisualBERT (Li et al., 2019b) on an additional testing split of NLVR2 in Table 8. Our results consistently outperform the previous SOTA on all metrics19 by a large margin of ∼4.0%.
A.6 DIRECT COMPARISON TO VLBERT AND VILBERT
To further demonstrate our idea, we conduct a direct comparison to ViLBERT (Lu et al., 2019) and VLBERT (Su et al., 2019), trained on Conceptual Captions (Sharma et al., 2018). We pre-train UNITER on Conceptual Captions only (instead of the four datasets in Section 3.3) using our proposed conditional masking and the best pre-training tasks (MLM + ITM + MRC-kl + MRFR). Table 9 shows that
18When validating on MRC-kl accuracy, we simply pick the most confident category from the predicted probability and measure its correctness.
19The balanced and unbalanced evaluations were introduced in Suhr & Artzi (2019).
UNITER still consistently outperforms the other models by a visible margin on VQA and RefCOCO+.

1. What is the focus and contribution of the paper on transformer-based image and text representation?
2. What are the strengths of the proposed approach, particularly in terms of its performance in various tasks?
3. What are the weaknesses of the paper, especially regarding the lack of understanding of the pre-trained network's representations?
4. How does the reviewer assess the significance and impact of the paper's findings?
5. Are there any concerns or questions regarding the methodology, experiments, or conclusions drawn by the authors?

Review
This is an impressive paper. Like BERT, it proposes a transformer-based approach to derive a pre-trained network for representing images and text. The resulting pre-trained network, used in 9 different tasks, advances the SOTA on all the tasks.
The major limitation of this paper is the unanswered question of why. Why does this happen? How can these results be achieved? What exactly is represented in this pre-trained network? Why do the tasks used for pre-training build a network that is so informative?
This is really the major obscure point of this impressive paper.
ICLR

Title
An evaluation of quality and robustness of smoothed explanations
Abstract
Explanation methods play a crucial role in helping to understand the decisions of deep neural networks (DNNs) and to develop the trust that is critical for the adoption of predictive models. However, explanation methods are easily manipulated through visually imperceptible perturbations that generate misleading explanations. The geometry of the decision surface of DNNs has been identified as the main cause of this phenomenon, and several smoothing approaches have been proposed to build more robust explanations. In this work, we provide a thorough evaluation of the quality and robustness of the explanations derived by smoothing approaches. Their different properties are evaluated with extensive experiments, which reveal the settings where the smoothed explanations are better, and also worse, than the explanations derived by the common Gradient method. By making the connection with the literature on adversarial attacks, we further show that such smoothed explanations are robust primarily against additive ℓp-norm attacks. However, a combination of additive and non-additive attacks can still manipulate these explanations, which reveals shortcomings in their robustness properties.
1 INTRODUCTION
Explanation methods attribute a numerical value to each data feature in order to quantify its relative importance towards the model’s prediction. Such attributions help to better understand and trust complex models like deep neural networks (DNNs). In safety-critical tasks, such an understanding is a prerequisite to the deployment of DNNs, because a domain expert will never make important decisions based on a model’s prediction unless that model is trustworthy. Moreover, explanations can help to understand the reasons behind the decision of a model, and when it comes to model debugging, they can reveal the presence of any spurious data correlations that may lead to faulty predictions during inference (Ribeiro et al., 2016).
In the context of image classification with deep neural networks, several explanation methods have been proposed based on the gradient with respect to the input, also called gradient-based explanations (Baehrens et al., 2010; Bach et al., 2015; Selvaraju et al., 2017; Sundararajan et al., 2017; Springenberg et al., 2015). The explanation generated by these methods, a saliency map, highlights the parts of the image that contributed to the prediction. Recent work has shown that gradient-based explanations of neural networks can be fragile and can be easily manipulated via adversarially perturbed inputs (Ghorbani et al., 2019; Dombrowski et al., 2019; Heo et al., 2019; Viering et al., 2019; Kindermans et al., 2019). That is, one can find a small-norm (often imperceptible) perturbation to be added to an input, such that the focus of the explanation changes towards irrelevant features while the model’s output remains unchanged. This, in turn, can make explanations inappropriate to help end-users gain trust in a model’s prediction.
The large curvature of the decision surface of neural networks has been identified as one of the causes of fragility for gradient-based explanations (Ghorbani et al., 2019; Dombrowski et al., 2019; Wang et al., 2020). To make explanations more robust, a class of approaches proposed smoothing the explanation or making the decision surface of neural networks more smooth (Wang et al., 2020; Dombrowski et al., 2019; Ivankay et al., 2020). We refer to these approaches as smoothing approaches. It is worth mentioning that similar methods have been proposed in the context of adversarial robustness, with the aim of flattening the decision surface of neural networks in order to reach more robust predictions (Moosavi-Dezfooli et al., 2019; Qin et al., 2019).
Here, we provide a thorough investigation of the explanations derived by smoothing approaches in terms of explanation quality and robustness. We employ various tests to assess the quality of these explanations. Each test evaluates a desirable property of explanations, such as sensitivity to changes in the model or fidelity to the predictor function. In terms of robustness, we show that explanations derived by smoothing approaches only provide robustness against additive ℓp-norm attacks. Specifically, we show that, compared to purely additive attacks, attacks that combine spatial transformations (Xiao et al., 2018) and/or color transformations (Laidlaw & Feizi, 2019) with additive perturbations are more effective in manipulating these explanations. Our contributions can be summarized as follows:
• We study the effectiveness of smoothing approaches to achieve robust explanations. We present results on evaluating both the quality and robustness properties of smoothed explanations.
• We assess the quality of smoothed explanations via presenting the results of various quality tests. Our results demonstrate the pros and cons of smoothed explanations with respect to the following quality aspects: sensitivity to model parameters, class discriminativeness, Infidelity, and sparseness.
• We present results for different combinations of additive and non-additive attacks, and show that they are able to manipulate explanations derived by smoothing approaches more successfully. Combining different types of perturbations to achieve stronger attacks has been a topic of investigation in the context of adversarial examples (Jordan et al., 2019). To the best of our knowledge, this is the first time such attacks have been used in the context of explanations.
Related works. There have been several works aiming to make explanations more robust. These works mostly focus on either modifying the explanation method itself or modifying the predictor model to achieve robust explanations. Wang et al. (2020) introduced Uniform Gradient, which is similar to Smooth Gradient except that it uses uniform noise, and showed that it can hardly be manipulated by additive attacks. Dombrowski et al. (2019) proved that a network with soft-plus activations has a more robust Gradient explanation than a ReLU network, given that the parameter β of the soft-plus function is chosen to be sufficiently small. Consequently, they proposed the β-smoothing approach, in which they substitute the ReLU activations of a trained network by soft-plus functions with a small β parameter. Wang et al. (2020) introduced a regularization term called Smooth Surface Regularization (SSR) to the training objective of a DNN. This training objective penalizes the large curvature of a DNN by regularizing the eigenvalue of the input Hessian with the maximum absolute value. Moreover, they showed that adversarial training (Madry et al., 2018) also leads to more robust explanations. This fact can also be deduced from the results of (Moosavi-Dezfooli et al., 2019), as they showed that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to the inputs. Anders et al. (2020) proposed an attack in which they adversarially manipulate the model instead of the input in order to manipulate the explanation. They then propose a modification to existing explanation methods to make them more robust against such manipulated models. Lakkaraju et al. (2020) proposed a framework for generating robust and stable black-box explanations based on adversarial training. Chen et al. (2019) introduced a regularization term to the training objective of neural networks to achieve robust Integrated Gradient explanations. Finally, Dombrowski et al. (2020) developed a theoretical framework to derive bounds on the maximum manipulability of explanations and proposed three different techniques to boost the robustness of explanations. In this work, we show that the robustness of smoothed explanations can be affected by employing a combination of additive and non-additive attacks. Furthermore, we present a thorough evaluation of the different quality aspects of smoothed explanations.
2 BACKGROUND
First, we provide the definition of an explanation map and briefly describe the explanation methods used in this paper. We then introduce the attacks on explanations and the smoothing approaches that we study in this paper.
Consider a model $f : \mathbb{R}^d \rightarrow \mathbb{R}^K$ which classifies an input $x \in \mathbb{R}^d$ into one of $K$ classes. An explanation map, denoted by $h_f(x) : \mathbb{R}^d \rightarrow \mathbb{R}^d$, associates a score to each feature of the input, indicating the relevance of that feature towards the model’s prediction. For instance, in the context of image classification, saliency maps associate a score to each pixel of the input image, resulting in a heatmap that highlights important regions of the image leading to the model prediction. In this work, we focus on gradient-based explanations and mainly on the Gradient method. Given a model $f$ and an input $x$, the Gradient explanation is defined as $\nabla_x f(x)$. Since other gradient-based explanation methods make use of the gradients with respect to the input, we argue that our results could be extended to those explanation methods as well. We will also consider two smoothed variants, namely the Smooth (Smilkov et al., 2017) and Uniform Gradient (Wang et al., 2020) methods.
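As a concrete reference, the following is a minimal PyTorch sketch of the three explanation maps; the noise scale, radius, and sample count are free hyper-parameters rather than values prescribed here:

```python
import torch

def gradient_explanation(model, x, target):
    # Gradient method: saliency is the input gradient of the target logit.
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target]
    return torch.autograd.grad(score, x)[0]

def smooth_gradient(model, x, target, sigma=0.1, n=50):
    # Smooth Gradient: average input gradients over Gaussian-perturbed inputs.
    grads = [gradient_explanation(model, x + sigma * torch.randn_like(x), target)
             for _ in range(n)]
    return torch.stack(grads).mean(dim=0)

def uniform_gradient(model, x, target, radius=0.1, n=50):
    # Uniform Gradient: identical to Smooth Gradient except for uniform noise.
    grads = [gradient_explanation(model,
                                  x + radius * (2 * torch.rand_like(x) - 1),
                                  target)
             for _ in range(n)]
    return torch.stack(grads).mean(dim=0)
```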
2.1 ATTACKS TO MANIPULATE EXPLANATIONS
Similarly to common adversarial attacks (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2014), recent work has shown that explanations can also be manipulated by adding a small and almost imperceptible perturbation to the input (Ghorbani et al., 2019; Dombrowski et al., 2019). We refer to this class of attacks as explanation attacks. There have been various formulations for explanation attacks (Ghorbani et al., 2019; Dombrowski et al., 2019). In this work, we will use the formulation introduced by Dombrowski et al. (2019). In this attack, the attacker tries to find a perturbed input for which the explanation is manipulated to be very similar to a given target explanation map while the output of the model remains approximately unchanged. Note that the target map could be any heatmap in general; however, we used the explanation of a target image as a target map in this work. Below, we will give a formal definition of this attack.
Definition 1 (Targeted manipulation attack). An explanation $h_f(x)$ for a model $f(x)$ is vulnerable to attack at input $x$ if there exists a perturbed input $x_{adv}$ such that $h_f(x_{adv})$ is similar to a given target map $h_t$ while the model’s output remains unchanged. An attacker finds $x_{adv}$ by minimizing the following objective function:
$$\mathcal{L} = \left\lVert h_f(x_{adv}) - h_t \right\rVert^2 + \gamma_1 \left\lVert f(x_{adv}) - f(x) \right\rVert^2 + \gamma_2 \, \mathcal{L}_{reg}(x, x_{adv}) \tag{1}$$
The first term in (1) ensures the similarity of the manipulated explanation to the target map, the second term ensures the similarity between the model output for the original and perturbed inputs, and the third term regularizes the perturbation to ensure perceptual similarity between the original and perturbed images. Note that $\mathcal{L}_{reg}$ is defined by the attacker according to the type of the perturbation. The relative weighting of the terms in (1) is controlled by the hyper-parameters $\gamma_1$ and $\gamma_2$.
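A minimal sketch of objective (1) for a purely additive perturbation follows, assuming the regularizer is the squared ℓ2 norm of the perturbation; the weights gamma1 and gamma2 must be tuned per attack and are not values from the paper:

```python
import torch

def attack_loss(model, explain, x, x_adv, h_target, gamma1=1.0, gamma2=1.0):
    # First term of (1): match the manipulated explanation to the target map.
    # Note: differentiating through a gradient-based `explain` requires its
    # internal autograd.grad call to use create_graph=True.
    expl_term = (explain(model, x_adv) - h_target).pow(2).sum()
    # Second term: keep the model output close to the original prediction.
    out_term = (model(x_adv) - model(x)).pow(2).sum()
    # Third term: an illustrative L_reg for additive perturbations.
    reg_term = (x_adv - x).pow(2).sum()
    return expl_term + gamma1 * out_term + gamma2 * reg_term
```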
2.2 TOWARDS ROBUST EXPLANATIONS
Recent works have tried to define the robustness of explanations in terms of the sensitivity of input gradients to changes in the input data (Wang et al., 2020; Dombrowski et al., 2019). Wang et al. (2020) define the robustness of explanations by the Lipschitz continuity coefficient of the input gradients; a smaller coefficient means that the explanation is less sensitive to changes in the input and hence more robust. In this regard, a class of approaches to generate robust explanations has been proposed in recent works; these approaches are either based on smoothing out the explanation maps or on flattening the decision boundary of the model itself. Broadly, they can be classified into two categories: (1) Post-hoc approaches do not require retraining of the network and can be applied as a post-processing step. (2) Ad-hoc approaches to robust explanations require retraining of the network and hence are more costly.
In this work, we consider Smooth Gradient (Smilkov et al., 2017), Uniform Gradient (Wang et al., 2020), and β-smoothing (Dombrowski et al., 2019) as post-hoc approaches. The first two methods involve smoothing the explanation map, while the third one smooths the decision surface of the model. All three approaches act on pre-trained models, and hence are characterized as post-hoc. Among the ad-hoc methods, we study the explanations generated by adversarially trained networks, and networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019), which is a similar approach to SSR (Wang et al., 2020).¹

¹ We experiment only with CURE, because with the publicly available code of SSR we were not able to reproduce the results in (Wang et al., 2020).
3 EVALUATING POST-HOC APPROACHES
Here, we begin by evaluating the quality of explanations derived by post-hoc approaches that do not require retraining of the network. Then, we evaluate the robustness of these explanations by presenting results on effective non-additive attacks to manipulate them. For all of the experiments in this section, we used a VGG-16 network trained on ImageNet (Russakovsky et al., 2015), and for generating the explanation maps we used the Captum (Kokhlikyan et al., 2020) package. Moreover, for the β-smoothing approach we always set β = 0.8 as suggested in (Dombrowski et al., 2019).
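Since β-smoothing only swaps activation functions in a pretrained network, it can be written as a short post-hoc module replacement; a sketch assuming a standard PyTorch model (the recursion covers nested containers such as the VGG-16 feature stack):

```python
import torch.nn as nn

def beta_smooth(model, beta=0.8):
    # Recursively replace every ReLU in a pretrained network with a
    # soft-plus of the given beta; no retraining is required.
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, nn.Softplus(beta=beta))
        else:
            beta_smooth(child, beta)
    return model
```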
3.1 QUALITY OF EXPLANATIONS OF POST-HOC APPROACHES
To evaluate and compare the quality of the explanations, we use various quality tests presented in the literature. In general, assessing the quality of an explanation is a challenging task and each quality test only evaluates a specific quality aspect of an explanation. Therefore the assemblage of quality tests helps to understand which quality aspects of the explanations are improved and which are deteriorated by the smoothing approaches.
Cascade randomization of model parameters. Adebayo et al. (2018) argued that it is desired for an explanation to be sensitive to the changes in the model parameters. They proposed a model parameter randomization test to assess this sensitivity. In this test, the parameters of a model are progressively randomized from the top layer (logits) to the bottom layers. In each step of randomization, the explanation from the resulting model is compared against the explanation from the original model. Randomizing the model parameters means losing what the model has learned from the data during training. Therefore, we expect a “good” explanation to be destroyed in this process. However, if an explanation is insensitive to the randomization of the model parameters, then it is not deemed appropriate for debugging the model under erroneous predictions.
The visual results of this test for the Gradient explanation and post-hoc approaches are shown in Figure 1. More examples of this test can be found in the Appendix. One can observe that the explanations derived from post-hoc approaches show less sensitivity to the randomization of model parameters than the Gradient method. This can also be verified by the Spearman rank correlation between the original and randomized explanations shown in Figure 2. We observe that for the smoothed explanation methods, the original and randomized explanations have a high rank correlation after the randomization of the top layers of the network. These results highlight that using Smooth Gradient, Uniform Gradient, and β-smoothing to achieve a more robust explanation can come at the expense of having explanations that are less sensitive to model parameters.
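A sketch of this test follows, assuming a PyTorch model whose `named_children` are ordered from bottom to top so that iterating in reverse starts at the logits; `explain` is any attribution function, and the reinitialization scheme is an assumption:

```python
import copy
import torch.nn as nn
from scipy.stats import spearmanr

def cascade_randomization(model, explain, x, target):
    model = copy.deepcopy(model)  # keep the original network intact
    reference = explain(model, x, target).flatten().detach().cpu().numpy()
    correlations = []
    # Randomize blocks from the top (logits) towards the bottom layers;
    # the randomization accumulates across iterations.
    for _, block in reversed(list(model.named_children())):
        for m in block.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                m.reset_parameters()
        randomized = explain(model, x, target).flatten().detach().cpu().numpy()
        correlations.append(spearmanr(reference, randomized).correlation)
    return correlations
```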
Class sensitivity of explanations. A good visual explanation should be able to localize the image regions relevant to the target category, i.e., it should be class discriminative (Selvaraju et al., 2017). This is particularly significant when dealing with images containing more than one object. To assess the class discriminativeness of an explanation we used a quality test equivalent to the pointing game (Zhang et al., 2016). We sampled images from the MS COCO dataset (Lin et al., 2014), containing two objects that are also present among the ImageNet class labels. For this test we only keep
the samples for which one of the objects in the image is the top predicted class by the network and the other object is among the top 20 predicted classes by the network. We compute the explanation maps for each of the class labels corresponding to the objects. Using the segmentation mask of the objects provided in the dataset as ground truth, we compute what percentage of the top-20 values in the explanation maps generated for each target category are inside the corresponding segmentation masks. The results of this test are shown in Table 1, and a visual depiction of this test is given in Figure 3. These results indicate that the smoothed explanation methods are less discriminative when generated for the target class label that has a lower probability. This suggests that in terms of class discriminativeness of explanations, the post-hoc smoothing approaches investigated in this paper are inferior to the Gradient method.
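The localization score described above reduces to a few lines; in this sketch, `attribution` and `mask` are assumed to be arrays of the same spatial size:

```python
import numpy as np

def top_k_in_mask(attribution, mask, k=20):
    # Fraction of the k largest attribution values that fall inside the
    # ground-truth segmentation mask of the target object.
    attribution = np.abs(attribution).flatten()
    mask = mask.astype(bool).flatten()
    top_idx = np.argsort(attribution)[-k:]
    return mask[top_idx].mean()
```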
Sparseness of explanations. To create explanations that are human-accessible, it is advantageous to have a sparse explanation map (Molnar, 2019), i.e., only the features that are truly predictive of the model output should have significant contributions, and irrelevant features should have negligible contributions. Sparse explanations are more concise because they only include features with a significant contribution, making it simpler for end-users to understand the reasons for a specific prediction of the model (Chalasani et al., 2020). To measure the sparseness of an explanation map, we applied the Gini Index to the absolute value of the flattened explanation maps. The Gini Index is a metric that measures the sparseness of a vector with non-negative values (Hurley & Rickard, 2009). By definition, the Gini Index takes values in [0, 1], with higher values indicating more sparseness. Table 2 shows the average Gini Index of Gradient, Smooth Gradient, Uniform Gradient, and β-smoothing computed for 1000 randomly sampled images from ImageNet. The results show that compared to the Gradient method, Smooth Gradient and Uniform Gradient provide less concise explanations, whereas β-smoothing actually improves the sparseness of the explanations as compared to the Gradient method.
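A sketch of the Gini Index on a flattened attribution map, following the standard sorted-vector formula of Hurley & Rickard (2009):

```python
import numpy as np

def gini_index(attribution):
    # Sparseness of a non-negative vector: 0 for a perfectly uniform
    # vector, approaching 1 for a single non-zero entry.
    v = np.sort(np.abs(attribution).flatten())
    n = v.size
    ranks = np.arange(1, n + 1)
    return 1 - 2 * np.sum(v * (n - ranks + 0.5) / n) / np.sum(v)
```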
Explanation Infidelity. Introduced in Yeh et al. (2019), this metric captures how the predictor function changes in response to significant perturbations to the input and is defined as the expected difference between the two terms: 1) the dot product of the input perturbation and the explanation and 2) the difference between function values after significant perturbations to the input. The metric generalizes the completeness axiom (Shrikumar et al., 2017; Sundararajan et al., 2017) because it allows for different types of perturbations which could be of interest depending on the problem and the dataset. We use the infidelity metric to compare the effect of post-hoc smoothing approaches on the fidelity of explanations to the predictor function. As suggested in (Yeh et al., 2019), we used the square removal perturbation to compute the infidelity of explanations for randomly selected images from ImageNet. Table 3 shows the results for the post-hoc approaches. A lower infidelity value indicates better fidelity of the explanation to the predictor function. The results suggest that the degree of smoothing used to robustify explanations, also improves their infidelity. Therefore with respect to the Infidelity metric, all of the smoothed explanations investigated in this section are superior to the Gradient method. This finding is also in line with the results of Yeh et al. (2019), i.e., that modest smoothing improves the infidelity of explanations.
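A Monte Carlo estimate of the metric is straightforward; in the sketch below, `sample_perturbation` stands in for the square-removal distribution used in the paper and is assumed to return perturbations of the same shape as the input:

```python
import torch

@torch.no_grad()
def infidelity(model, x, attribution, sample_perturbation, target, n=100):
    # INFD = E_I [ (I . attribution - (f(x) - f(x - I)))^2 ]
    errs = []
    fx = model(x)[0, target]
    for _ in range(n):
        I = sample_perturbation(x)            # significant perturbation
        dot = (I * attribution).sum()         # first term
        drop = fx - model(x - I)[0, target]   # second term
        errs.append((dot - drop).pow(2))
    return torch.stack(errs).mean()
```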
3.2 ROBUSTNESS OF EXPLANATIONS OF POST-HOC APPROACHES
Now, we will evaluate the robustness of Smooth Gradient, Uniform Gradient, and β-smoothing explanations. We present attacks composed of additive and non-additive perturbations, and show that they are more effective than additive attacks at manipulating explanations. The non-additive attacks we employed are spatial transformation attacks (Xiao et al., 2018) and recoloring attacks (Laidlaw & Feizi, 2019). See Appendix B for a brief description of each of these attacks. In the rest of this paper, we refer to the additive attack as Delta, the spatial transformation attack as StAdv, and the recoloring attack as Recolor.
We used the projected gradient descent (PGD) algorithm to optimize the objective function (1).² In our experiments, we evaluate three combinations of attacks, namely Delta, Delta+StAdv, and Delta+StAdv+Recolor, against the explanations of a VGG-16 network trained on ImageNet (Russakovsky et al., 2015). See Appendix C.1 for the details about the ℓ∞ norm for each type of perturbation and the hyper-parameters used in each attack setting.
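A minimal PGD loop for the purely additive (Delta) case is sketched below; the ℓ∞ budget, step size, and iteration count are illustrative, and the StAdv and Recolor variants would optimize flow-field and color-transform parameters in the same loop:

```python
import torch

def pgd_explanation_attack(loss_fn, x, eps=8/255, step=1/255, n_steps=200):
    # loss_fn(x_adv) evaluates objective (1); here only an additive
    # perturbation delta with an l_inf budget of eps is optimized.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(x + delta)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= step * grad.sign()               # gradient descent step
            delta.clamp_(-eps, eps)                   # project onto l_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep a valid image
    return (x + delta).detach()
```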
We use two metrics to evaluate the attacks: (1) the Cosine Distance metric (cosd) to evaluate the similarity between the target and manipulated explanations (Wang et al., 2020). A lower cosine distance corresponds to a lower ℓ2 distance between the target and manipulated explanations, indicating higher similarity; cosd takes values between 0 and 1. (2) The LPIPS metric for quantifying the perceptual similarity between images (Zhang et al., 2018). A lower LPIPS value indicates higher similarity.

² As discussed in (Ghorbani et al., 2019; Dombrowski et al., 2019), to avoid zero-valued gradients when optimizing (1), we have to replace the ReLU activation with its smooth approximation. In this work, we used a soft-plus function with β = 100.
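Both metrics are simple to compute; a sketch using the `lpips` package, where the AlexNet backbone is an assumption:

```python
import torch
import lpips

def cosd(h_adv, h_target):
    # Cosine distance between flattened explanation maps, in [0, 1]
    # for non-negatively correlated maps.
    a, b = h_adv.flatten(), h_target.flatten()
    return 1 - torch.dot(a, b) / (a.norm() * b.norm())

perceptual = lpips.LPIPS(net='alex')  # lower output = more similar images

def lpips_distance(x, x_adv):
    # Expects images scaled to [-1, 1], shape (N, 3, H, W).
    return perceptual(x, x_adv)
```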
Figure 4 shows the cosine distance between the target and manipulated explanations, and the perceptual similarity (LPIPS) between the perturbed and original images for each attack setting. We can observe that the Delta+StAdv and Delta+StAdv+Recolor attacks are more effective than Delta attacks at manipulating β-smoothing explanations, i.e., with a less perceptible perturbation (lower LPIPS value), we can reach a cosd value between the manipulated and target explanations very close to the cosd value obtained when attacking the Gradient method. The effect of the non-additive attacks is less significant on the Smooth and Uniform Gradient methods; however, we can still observe improvements in the cosd values under these attacks. Taken together, these results show that Smooth Gradient, Uniform Gradient, and β-smoothing explanations are more vulnerable to non-additive attacks, and hence such attacks should be considered a threat to the robustness of these methods. As an example, we can visually see the effectiveness of the Delta+StAdv+Recolor attack against different explanation methods in Figure 5.
4 EVALUATING AD-HOC APPROACHES
Here, we recreate the experiments of Section 3 for the ad-hoc approaches. We study the explanations of networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019),
and adversarial training (Madry et al., 2018). Training with CURE regularizes the eigenvalue of the input Hessian with the maximum absolute value and is similar to SSR, which was shown to improve the robustness of explanations against additive attacks (Wang et al., 2020). Adversarial training also smooths the decision surface and can provide more robust explanations.
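A sketch of the CURE objective under common assumptions: the curvature penalty is a finite-difference estimate along the normalized gradient-sign direction, in the spirit of Moosavi-Dezfooli et al. (2019), and the step size h and weight gamma are illustrative hyper-parameters:

```python
import torch
import torch.nn.functional as F

def cure_loss(model, x, y, h=1.5, gamma=4.0):
    # Assumes batched 4D image inputs x of shape (N, C, H, W).
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x, create_graph=True)[0]
    # Direction z: normalized gradient sign, treated as a constant.
    z = g.sign().detach()
    z = h * z / (z.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
    loss_z = F.cross_entropy(model(x + z), y)
    g_z = torch.autograd.grad(loss_z, x, create_graph=True)[0]
    # Finite-difference curvature penalty along z.
    curvature = ((g_z - g).flatten(1).norm(dim=1) ** 2).mean()
    return loss + gamma * curvature
```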
For the experiments in this section, we used a ResNet-18 network trained with CURE and an adversarially trained ResNet-18 trained on adversarial examples with the ℓ∞ norm of the perturbations upper bounded by 8/255 (Engstrom et al., 2019). Both networks are trained on the CIFAR-10 dataset (Krizhevsky, 2012).
4.1 QUALITY OF EXPLANATIONS OF AD-HOC APPROACHES
Cascade randomization of model parameters. We evaluate the sensitivity of explanations of the networks trained with CURE and adversarial training using the cascade randomization of model parameters test.
The Spearman rank correlation between the original and randomized explanations is shown in Figure 6. These results show that the explanation of an adversarially trained network is less sensitive to the model parameters. This suggests that the explanation of an adversarially trained network cannot be helpful for debugging a model when it is making a wrong prediction.
Sparseness of explanations. We compare the sparseness of the explanations derived by ad-hoc approaches using the Gini Index metric. Table 4 compares the Gini Index for the explanations of networks trained with different training objectives. These results show that adversarial training helps to improve the sparseness of explanations as compared to standard training. Hence, the explanations of an adversarially trained network are more concise. This is in line with the results of Chalasani et al. (2020) as well. However, the results of Table 4 indicate that training a network with CURE does not help to improve the sparseness of explanations as compared to standard training.
Explanation Infidelity. To compare the fidelity of explanations derived by ad-hoc approaches to the predictor function, we used the Infidelity metric with square perturbation (Yeh et al., 2019). Table 5 shows the results for randomly selected images from CIFAR-10. A lower infidelity value indicates better fidelity of the explanation to the predictor function. From these results, we can observe that training a network with CURE and adversarial training helps to improve the explanation Infidelity. Therefore with respect to the Infidelity metric, the ad-hoc smoothing approaches investigated in this section improve the explanation Infidelity as compared to standard training.
4.2 ROBUSTNESS OF EXPLANATIONS OF AD-HOC APPROACHES
Now, we evaluate the improvement of robustness via ad-hoc approaches. We present results for Delta, Delta+StAdv, and Delta+StAdv+Recolor attacks against explanations of the networks trained with CURE and adversarial training. Figure 7 shows the results of these attacks. For the adversarially trained network, we can observe that non-additive attacks can manipulate explanations more effectively than additive attacks. However, even with the strongest attack setting we still cannot get close to the cosd value reached by attacking the explanation of the network trained in the standard way. For the attacks against the explanation of the network trained with CURE, the effect of the non-additive attacks is less significant in terms of the cosd value; however, we can still observe that such attacks can reach similar cosd values with perceptually less visible perturbations.
5 CONCLUSION
We have evaluated two aspects of smoothed explanations: a) explanation quality, and b) robustness of explanation. In terms of explanation quality, we performed a thorough evaluation of four quality aspects: sensitivity to model parameters, class discriminativeness, sparseness, and infidelity. Our results show that the smoothed explanations investigated in this paper perform worse than those of the Gradient method in terms of sensitivity to model parameters and class discriminativeness. On the other hand, we show that using such smoothing methods helps to improve explanation Infidelity and sparseness.
We further looked at the robustness of explanations when inputs are perturbed by a combination of additive and non-additive attacks. To the best of our knowledge, this is the first time such attacks are used to manipulate explanations. Our experimental results highlighted the fact that non-additive attacks are still a threat to explanation methods, including the smoothed ones. These results also point us to the fact that many problems in explanation robustness can be addressed by making analogies with the area of prediction robustness. As these two areas are closely related, the solutions already explored in prediction robustness can be potentially helpful to study explanation robustness. This will be the focus of our future work.

1. What are the key contributions and findings of the paper regarding explanation quality and robustness?
2. What are the strengths of the paper in terms of its writing style, approach, and positives?
3. What are the weaknesses of the paper regarding its ability to generalize beyond specific networks and lack of formalization of the problem?
4. How does the reviewer assess the significance and relevance of the paper's content?

Summary Of The Paper
The paper presents experiments for both post-hoc and ad-hoc explainers to better understand their quality and robustness.
Review
The paper touches on a very important problem - the quality and understanding of (smoothed) explanations. It is very well written and easy to follow.
It has many positives. I particularly liked the approaches to assess robustness, as they seem very reasonable and widely applicable, for example using LPIPS for perceptual similarity, which gives an interesting new perspective. The main two areas for improvement / clarification are as follows.
Generalization. How do the results extend beyond the specific networks chosen? It is difficult to understand how relevant the conclusions are in general and whether the example configuration is influencing the conclusions too much.
Formalization. For me, the paper lacks a formalization of the problem. After reading the paper I don't have a fully clear notion of what makes an explanation of this form high quality and of the exact properties that one should be looking for to assess it. Of course, the metrics reported are proxies for it, but the actual objective is not clear to me.
ICLR | Title
An evaluation of quality and robustness of smoothed explanations
Abstract
Explanation methods play a crucial role in helping to understand the decisions of deep neural networks (DNNs) to develop trust that is critical for the adoption of predictive models. However, explanation methods are easily manipulated through visually imperceptible perturbations that generate misleading explanations. The geometry of the decision surface of the DNNs has been identified as the main cause of this phenomenon and several smoothing approaches have been proposed to build more robust explanations. In this work, we provide a thorough evaluation of the quality and robustness of the explanations derived by smoothing approaches. Their different properties are evaluated with extensive experiments, which reveal the settings where the smoothed explanations are better, and also worse than the explanations derived by the common Gradient method. By making the connection with the literature on adversarial attacks, we further show that such smoothed explanations are robust primarily against additive `p-norm attacks. However, a combination of additive and non-additive attacks can still manipulate these explanations, which reveals shortcomings in their robustness properties.
1 INTRODUCTION
Explanation methods attribute a numerical value to each data feature in order to quantify its relative importance towards the model’s prediction. Such attributions help to better understand and trust complex models like deep neural networks (DNNs). In safety-critical tasks, such an understanding is a prerequisite to the deployment of DNNs, because a domain expert will never make important decisions based on a model’s prediction unless that model is trustworthy. Moreover, explanations can help to understand the reasons behind the decision of a model, and when it comes to model debugging, they can reveal the presence of any spurious data correlations that may lead to faulty predictions during inference (Ribeiro et al., 2016).
In the context of image classification with deep neural networks, several explanation methods have been proposed based on the gradient with respect to input, also called gradient-based explanations (Baehrens et al., 2010; Bach et al., 2015; Selvaraju et al., 2017; Sundararajan et al., 2017; Springenberg et al., 2015). The explanation generated by these methods, a saliency map, highlights the parts of the image that contributed to the prediction. Recent work has shown that gradient-based explanations of neural networks can be fragile and can be easily manipulated via adversarially perturbed inputs (Ghorbani et al., 2019; Dombrowski et al., 2019; Heo et al., 2019; Viering et al., 2019; Kindermans et al., 2019). That is, one can find a small-norm perturbation to be added to an input ( often imperceptible), such that the focus of the explanation changes towards irrelevant features while the model’s output remains unchanged. This, in turn, can make explanations inappropriate to help end-users gain trust in a model’s prediction.
The large curvature of the decision surface of neural networks has been identified as one of the causes of fragility for gradient-based explanations (Ghorbani et al., 2019; Dombrowski et al., 2019; Wang et al., 2020). To make explanations more robust, a class of approaches proposed smoothing the explanation or making the decision surface of neural networks more smooth (Wang et al., 2020; Dombrowski et al., 2019; Ivankay et al., 2020). We refer to these approaches as smoothing approaches. It is worth mentioning that similar methods have been proposed in the context of adversarial robustness, with the aim of flattening the decision surface of neural networks in order to reach more robust predictions (Moosavi-Dezfooli et al., 2019; Qin et al., 2019).
Here, we provide a thorough investigation of the explanations derived by smoothing approaches in terms of explanation quality and robustness. We employ various tests to assess the quality of these explanations. Each test evaluates a desirable property for explanations, such as: sensitivity to changes in the model, fidelity to the predictor function, etc. In terms of robustness, we show that explanations derived by smoothing approaches only provide robustness against additive `p norm attacks. Specifically, in this work, we show that compared to additive attacks, attacks based on the combination of spatial transformation (Xiao et al., 2018) and/or color transformation (Laidlaw & Feizi, 2019) together with additive perturbations are more effective in manipulating these explanations. Our contributions can be summarized as follows:
• We study the effectiveness of smoothing approaches to achieve robust explanations. We present results on evaluating both the quality and robustness properties of smoothed explanations.
• We assess the quality of smoothed explanations via presenting the results of various quality tests. Our results demonstrate the pros and cons of smoothed explanations with respect to the following quality aspects: sensitivity to model parameters, class discriminativeness, Infidelity, and sparseness.
• We present results for different combination of additive and non-additive attacks, and show that they are able to manipulate explanations derived by smoothing approaches more successfully. Combining different types of perturbations to achieve stronger attacks has been a topic of investigation in the context of adversarial examples (Jordan et al., 2019). To the best of our knowledge, this is the first time such attacks have been used in the context of explanations.
Related works. There have been several works aiming to make explanations more robust. These works mostly focused on either modifying the explanation method itself or modifying the predictor model to achieve robust explanations. Wang et al. (2020) introduced Uniform Gradient, which is similar to Smooth Gradient unless it uses Uniform noise, and showed that it can hardly be manipulated by additive attacks. Dombrowski et al. (2019) proved that a network with soft-plus activations has a more robust Gradient explanation compared to a ReLU network, given that the parameter β of the soft-plus function is chosen to be sufficiently small. Consequently, they proposed the β-smoothing approach in which they substitute the ReLU activations of a trained network by softplus functions with a small β parameter. Wang et al. (2020) introduced a regularization term called Smooth Surface Regularization (SSR) to the training objective of a DNN. This training objective penalizes the large curvature of a DNN by regularizing the eigenvalue of the input hessian with the maximum absolute value. Moreover, they showed that adversarial training (Madry et al., 2018) also leads to more robust explanations. This fact can also be deduced from the results of (MoosaviDezfooli et al., 2019) as they showed that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs. Anders et al. (2020) proposed an attack in which they adversarially manipulate the model instead of the input in order to manipulate the explanation. Then they propose a modification to the existing explanation methods to make them more robust against such manipulated models. Lakkaraju et al. (2020) proposed a framework for generating robust and stable black box explanations based on adversarial training. Chen et al. (2019) introduced a regularization term to the training objective of neural networks to achieve robust Integrated Gradient explanations. Finally, Dombrowski et al. (2020) developed a theoretical framework to derive bounds on the maximum manipubality of explanations and proposed three different techniques to boost the robustness of explanations. In this work, we show that the robustness of smoothed explanations can be affected by employing a combination of additive and non-additive attacks. Furthermore, we present a through evaluation of the different quality aspects of smoothed explanations.
2 BACKGROUND
First, we provide the definition of an explanation map and then briefly describe the explanation methods we used in this paper. Then we continue with introducing the attacks to explanations and the smoothing approaches we are going to study in this paper.
Consider a model f : Rd → RK which classifies an input x ∈ Rd into one of the K classes. An explanation map, denoted by hf (x) : Rd → Rd, associates a score to each feature of the input
indicating the relevance of that feature towards the model’s prediction. For instance, in the context of image classification, saliency maps associate a score to each pixel of the input image resulting in a heatmap that highlights important regions of the image leading to the model prediction. In this work, we focus on the gradient-based explanations and mainly on the Gradient method. Given a model f and an input x, the Gradient explanation is defined as∇xf(x). Since other gradient-based explanation methods make use of the gradients with respect to input, we argue that our results could be extended to those explanation methods as well. We will also consider two smoothed variants, namely Smooth (Smilkov et al., 2017) and Uniform Gradient (Wang et al., 2020) methods.
2.1 ATTACKS TO MANIPULATE EXPLANATIONS
Similarly to common adversarial attacks (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2014), recent work has shown that explanations can also be manipulated by adding a small and almost imperceptible perturbation to the input (Ghorbani et al., 2019; Dombrowski et al., 2019). We refer to this class of attacks as explanation attacks. There have been various formulations for explanation attacks (Ghorbani et al., 2019; Dombrowski et al., 2019). In this work, we will use the formulation introduced by Dombrowski et al. (2019). In this attack, the attacker tries to find a perturbed input for which the explanation is manipulated to be very similar to a given target explanation map while the output of the model remains approximately unchanged. Note that the target map could be any heatmap in general; however, we used the explanation of a target image as a target map in this work. Below, we will give a formal definition of this attack.
Definition 1 (Targeted manipulation attack). An explanation hf (x) for model f(x) is vulnerable to attack at input x if there exist a perturbed input xadv , such that hf (xadv) is similar to a given target map ht but the model’s output remains unchanged. An attacker finds xadv by minimizing the following objective function:
L = ∥∥hf (xadv)− ht∥∥2 + γ1 ‖f(xadv)− f(x)‖2 + γ2Lreg(x,xadv) (1)
The first term in (1) ensures the similarity of the manipulated explanation to the target map, the second term ensures the similarity between the model output for the original and perturbed inputs, and the third term regularizes the perturbation to ensure perceptual similarity between the original and perturbed images. Note that Lreg is defined by the attacker according to the type of the perturbation. The relative weighting of the terms in (1) is controlled by the hyper-parameters γ1 and γ2.
2.2 TOWARDS ROBUST EXPLANATIONS
Recent works have tried to define the robustness of explanations in terms of the sensitivity of input gradients to changes in the input data (Wang et al., 2020; Dombrowski et al., 2019). Wang et al. (2020) define the robustness of explanations by the Lipschitz continuity coefficient of the input gradients; a smaller coefficient means that the explanation is less sensitive to the changes in the input and hence more robust. In this regard, a class of approaches to generate robust explanations have been proposed in the recent works, which are either based on smoothing out the explanation maps or flattening the decision boundary of the model itself. Broadly, these approaches can be classified into two categories: (1) Post-hoc approaches do not require retraining of the network and can be applied as a post-processing step. (2) Ad-hoc approaches to robust explanations require retraining of the network and hence are more costly.
In this work, we consider Smooth Gradient (Smilkov et al., 2017), Uniform Gradient (Wang et al., 2020), and β-smoothing (Dombrowski et al., 2019) as post-hoc approaches. The first two methods involve smoothing the explanation map, while the third one smooths the decision surface of the model. All three approaches act on pre-trained models, and hence are characterized as post-hoc. Among the ad-hoc methods, we study the explanations generated by adversarially trained networks, and networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019), which is a similar approach to SSR (Wang et al., 2020)1.
1We experiment only with CURE, because with the publicly available code of SSR we were not able to reproduce the results in (Wang et al., 2020).
3 EVALUATING POST-HOC APPROACHES
Here, we begin by evaluating the quality of explanations derived by post-hoc approaches that do not require retraining of the network. Then, we evaluate the robustness of these explanations by presenting results on effective non-additive attacks to manipulate them. For all of the experiments in this section, we used a VGG-16 network trained on ImageNet (Russakovsky et al., 2015), and for generating the explanation maps we used the Captum (Kokhlikyan et al., 2020) package. Moreover, for the β-smoothing approach we always set β = 0.8 as suggested in (Dombrowski et al., 2019).
3.1 QUALITY OF EXPLANATIONS OF POST-HOC APPROACHES
To evaluate and compare the quality of the explanations, we use various quality tests presented in the literature. In general, assessing the quality of an explanation is a challenging task and each quality test only evaluates a specific quality aspect of an explanation. Therefore the assemblage of quality tests helps to understand which quality aspects of the explanations are improved and which are deteriorated by the smoothing approaches.
Cascade randomization of model parameters. Adebayo et al. (2018) argued that it is desired for an explanation to be sensitive to the changes in the model parameters. They proposed a model parameter randomization test to assess this sensitivity. In this test, the parameters of a model are progressively randomized from the top layer (logits) to the bottom layers. In each step of randomization, the explanation from the resulting model is compared against the explanation from the original model. Randomizing the model parameters means losing what the model has learned from the data during training. Therefore, we expect a ”good” explanation to be destroyed in this process. However, if an explanation is insensitive to the randomization of the model parameters, then it is not deemed appropriate for debugging the model under erroneous predictions.
The visual results of this test for Gradient explanation and post-hoc approaches are shown in Figure 1. More examples of this test can be found in the Appendix. One can observe that the explanations derived from post-hoc approaches show less sensitivity to the randomization of model parameters than compared to the Gradient method. This can also be verified by the Spearman rank correlation between the original and randomized explanations shown in Figure 2. We observe that for the smoothed explanation methods, the original and randomized explanations have a high rank correlation after the randomization of the top layers of the network. These results highlight that using Smooth Gradient, Uniform Gradient, and β-smoothing to achieve a more robust explanation can come at the expense of having explanations that are less sensitive to model parameters.
Class sensitivity of explanations. A good visual explanation should be able to localize the image regions relevant to the target category, i.e., it should be class discriminative (Selvaraju et al., 2017). This is particularly significant when dealing with images containing more than one object. To assess the class discriminativeness of an explanation we used a quality test equivalent to the pointing game (Zhang et al., 2016). We sampled images from the MS COCO dataset (Lin et al., 2014), containing two objects that are also present among the ImageNet class labels. For this test we only keep
the samples for which one of the objects in the image is the top predicted class by the network and the other object is among the top 20 predicted classes by the network. We compute the explanation maps for each of the class labels corresponding to the objects. Using the segmentation mask of the objects provided in the dataset as ground truth, we compute what percentage of the top-20 values in the explanation maps generated for each target category are inside the corresponding segmentation masks. The results of this test are shown in table 1 and a visual depiction of this test is given in Figure 3. These results indicate that the smoothed explanation methods are less discriminatory when generated for the target class label that has a lower probability. This suggests that in terms of class discriminativeness of explanations, the post-hoc smoothing approaches investigated in this paper are inferior to the Gradient method.
Sparseness of explanations. To create explanations that are human-accessible, it is advantageous to have a sparse explanation map (Molnar, 2019), i.e, only the features that are truly predictive of the model output should have significant contributions, and irrelevant features should have negligible contributions. Sparse explanations are more concise because they only include features with significant contribution making it simpler for end-users to understand the reasons for a specific prediction of the model (Chalasani et al., 2020). To measure the sparseness of an explanation map, we applied the Gini Index on the absolute value of the flattened explanation maps. The Gini Index is a metric that measures the sparseness of a vector with non-negative values (Hurley & Rickard, 2009). By definition, the Gini Index take values in [0, 1] with higher values indicating more sparseness. Table 2 shows the average Gini Index of the Gradient, Smooth Gradient, Uniform Gradient, and
β-smoothing computed for 1000 randomly sampled images from ImageNet. The results show that compared to the Gradient method, Smooth Gradient and Uniform Gradient provide less concise explanations, whereas β-smoothing actually improves the sparseness of the explanations as compared to the Gradient method.
Explanation Infidelity. Introduced in Yeh et al. (2019), this metric captures how the predictor function changes in response to significant perturbations to the input and is defined as the expected difference between the two terms: 1) the dot product of the input perturbation and the explanation and 2) the difference between function values after significant perturbations to the input. The metric generalizes the completeness axiom (Shrikumar et al., 2017; Sundararajan et al., 2017) because it allows for different types of perturbations which could be of interest depending on the problem and the dataset. We use the infidelity metric to compare the effect of post-hoc smoothing approaches on the fidelity of explanations to the predictor function. As suggested in (Yeh et al., 2019), we used the square removal perturbation to compute the infidelity of explanations for randomly selected images from ImageNet. Table 3 shows the results for the post-hoc approaches. A lower infidelity value indicates better fidelity of the explanation to the predictor function. The results suggest that the degree of smoothing used to robustify explanations, also improves their infidelity. Therefore with respect to the Infidelity metric, all of the smoothed explanations investigated in this section are superior to the Gradient method. This finding is also in line with the results of Yeh et al. (2019), i.e., that modest smoothing improves the infidelity of explanations.
3.2 ROBUSTNESS OF EXPLANATIONS OF POST-HOC APPROACHES
Now, we will evaluate the robustness of Smooth Gradient, Uniform Gradient, and β-smoothing explanations. We present attacks composed of additive and non-additive perturbations, and show that they are more effective than additive attacks to manipulate explanations. The non-additive attacks we employed are spatial transformation attacks (Xiao et al., 2018), and recoloring attacks (Laidlaw & Feizi, 2019). See the Appendix B for a brief description of each of these attacks. In the rest of this paper, we refer to the additive attack as Delta, spatial transformation attack as StAdv, and recoloring attack as Recolor.
We used the projected gradient descent (PGD) algorithm to optimize the objective function (1)2. In our experiments, we evaluate three combinations of attacks, namely Delta, Delta+StAdv, and Delta+StAdv+Recolor, against the explanation of a VGG-16 network trained on ImageNet (Russakovsky et al., 2015). See Appendix C.1 for the details about the `∞ norm for each type of the perturbations and the hyper-parameters used in each attack setting.
We use two metrics to evaluate the attacks: (1) The Cosine Distance metric (cosd) to evaluate the similarity between the target and manipulated explanations (Wang et al., 2020). A lower cosine distance corresponds to a lower `2 distance between the target and manipulated explanations indicating a higher similarity. The range of the values for cosd is between 0 and 1. (2) The LPIPS metric for
2As discussed in (Ghorbani et al., 2019; Dombrowski et al., 2019), to avoid zero-valued gradients when optimizing (1), we have to replace the ReLU activation with its smooth approximation. In this work, we used a soft-plus function with β = 100.
quantifying the perceptual similarity between images (Zhang et al., 2018). A lower LPIPS value indicates higher similarity.
Figure 4 shows the cosine distance between the target and manipulated explanations, and the perceptual similarity (LPIPS) between the perturbed and original images for each attack setting. We can observe that Delta+StAdv, and Delta+StAdv+Recolor attacks are more effective than Delta attacks to manipulate β-smoothing explanations, i.e, with a less perceptible perturbation (lower LPIPS value), we can reach a cosd value between manipulated and target explanations very close to the cosd value when attacking the Gradient method. The effect of the non-additive attacks is less significant on the Smooth and Uniform Gradient methods, however we can still observe improvements in the cosd values under these attacks. Taken together, these results show that Smooth Gradient, Uniform Gradient, and β-smoothing explanations are more vulnurale to non-additive attacks and hence such attacks should be considered as a threat to the robustness of these methods. As an example, we can visually see the effectiveness of Delta+StAdv+Recolor attack against differnt explanation methods in Figure 5.
4 EVALUATING AD-HOC APPROACHES
Here, we recreate the experiments of Section 3 for the ad-hoc approaches. We study the explanations of networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019),
and adversarial training (Madry et al., 2018). Training with CURE, regularizes the eigenvalue of the input hessian with maximum absolute value and is similar to SSR, which was shown to improve the robustness of explanations against additive attacks (Wang et al., 2020). Adversarial training also smooths the decision surface and can provide more robust explanations.
For the experiments in this section, we used a ResNet-18 network trained with CURE and an adversarially trained ResNet-18 network trained on adversarial examples with `∞ norm of the perturbations upper bounded by 8/255 (Engstrom et al., 2019). Both networks are trained on CIFAR-10 dataset (Krizhevsky, 2012).
4.1 QUALITY OF EXPLANATIONS OF AD-HOC APPROACHES
Cascade randomization of model parameters. We evaluate the sensitivity of explanations of the networks trained with CURE and adversarial training using the cascade randomization of model parameters test.
The Spearman rank correlation between the original and randomized explanations is shown in Figure 6. These Results show the explanation of an adversarially trained network is less sensitive to model parameters. This suggests that the explanation of an adversarially trained network cannot be helpful to debug a model when it is making a wrong prediction.
Sparseness of explanations. We compare the sparseness of the explanations derived by ad-hoc approaches, using the Gini Index metric. Table 4 compares the Gini Index for the explanations of networks trained with different training objectives. These results show that adversarial training helps to improve the sparseness of explanations as compared to standard training. Hence the explanations of an adversarially trained network are more concise. This is in line with the results of Chalasani et al. (2020) as well. However, the rsults of Table 4 indicates that training a network with CURE does not help to improve the sparseness of explanations as compared to standard training.
Explanation Infidelity. To compare the fidelity of explanations derived by ad-hoc approaches to the predictor function, we used the Infidelity metric with square perturbation (Yeh et al., 2019). Table 5 shows the results for randomly selected images from CIFAR-10. A lower infidelity value indicates better fidelity of the explanation to the predictor function. From these results, we can observe that training a network with CURE and adversarial training helps to improve the explanation Infidelity. Therefore with respect to the Infidelity metric, the ad-hoc smoothing approaches investigated in this section improve the explanation Infidelity as compared to standard training.
4.2 ROBUSTNESS OF EXPLANATIONS OF AD-HOC APPROACHES
Now, we evaluate the improvement of robustness via ad-hoc approaches. We present results for Delta, Delta+StAdv, and Delta+StAdv+Recolor attacks against explanations of the networks trained with CURE and adversarial training. Figure 7 shows the results of these attacks. For the adversarially trained network, we can observe that non-additive attacks can more effectively manipulate explanations compared to the additive attacks. However, even with the strongest attack setting we still cannot get close to the cosd value reached by attacking the explanation of the network trained in standard way. For the attacks against the explanation of the network trained with CURE, the effect of non-additive attacks are less significant in terms of the cosd value, however we can still observe that such attacks can reach similar cosd values with perceptually less visible perturbations.
5 CONCLUSION
We have evaluated two aspects of smoothed explanations: a) explanation quality, and b) robustness of explanation. In terms of explanation quality, we performed a thorough evaluation of four quality aspects: sensitivity to model parameters, class discriminativeness, sparseness, and infidelity. Our results show that the smoothed explanations investigated in this paper perform worse than those of the Gradient method in terms of sensitivity to model parameters and class discriminativeness. On the other hand, we show that using such smoothing methods helps to improve explanation Infidelity and sparseness.
We further looked at the robustness of explanations when inputs are perturbed by a combination of additive and non-additive attacks. To the best of our knowledge, this is the first time such attacks are used to manipulate explanations. Our experimental results highlighted the fact that non-additive attacks are still a threat to explanation methods, including the smoothed ones. These results also point us to the fact that many problems in explanation robustness can be addressed by making analogies with the area of prediction robustness. As these two areas are closely related, the solutions already explored in prediction robustness can be potentially helpful to study explanation robustness. This will be the focus of our future work. | 1. What is the main contribution of the paper regarding smoothed attribution methods?
2. What are the strengths and weaknesses of the proposed non-Lp attacks compared to prior works?
3. How does the reviewer assess the choice of Gini Index for evaluating sparsity in attributions?
4. Are there any related works that should be included and discussed in the paper regarding smoothing techniques and explanation faithfulness?
5. What are the limitations of the paper's focus on sparsity measurement in attributions? | Summary Of The Paper
Review | Summary Of The Paper
The main contributions of this paper are a series of empirical results on smoothed attribution methods, designed to show that several smoothing techniques from the previous literature may produce worse explanations and are also not robust to non-Lp attacks.
Review
Strengths
This paper presents sanity checks, non-Lp adversarial attacks, and a sparsity measurement on SmoothGrad, Uniform Gradient, and β-smoothing. The results are generally interesting to the community. Using the Gini Index to measure sparsity is an interesting idea that might be useful for follow-up works. However, I have several questions regarding this metric, which I elaborate on in the next subsection.
Weaknesses
It is not surprising to the community that Lp-based defenses (smoothing) are not robust to non-Lp threat models. The authors aim to present evidence that smoothing techniques are not robust enough because the proposed non-Lp attacks can break the explanations, which by itself seems to be an unfair comparison for the prior work. In fact, showing that the proposed non-Lp attacks can break an Lp defense is not a new observation.
The proposed attacks do not seem to produce very different results. In Fig 5, the adversarial examples resulting from the new techniques remain very similar to those of the Delta attack. In Fig 4, it seems that the new attacks only make more than a 0.05 difference over the Delta attack on β-smoothing in cosine distance. On the other attributions, the improvement seems to be minimal.
Gini Index for evaluating sparseness. The motivation to use the Gini Index seems fair; however, a lot of prior works have proposed several different metrics [1, 2, 3, 4] for measuring the sparseness (or the concentration and localization on the relevant features) of attributions. At least some discussion and justification of the chosen metric should be included. The robustness-related and fidelity-related evaluations are motivated by the paper's main idea, understanding whether smoothing techniques provide faithful explanations; however, the transition to studying sparsity is somewhat sudden to me and seems unconnected from the previous content, since the only reason the authors provide is: "To create explanations that are human-accessible, it is advantageous to have a sparse explanation map".
Some related prior work, i.e., ROAR [5], which finds that smoothing techniques do not significantly degrade explanations, should be included and discussed, especially when the paper is trying to establish the shortcomings of the smoothing techniques.
[1] A. Chattopadhay, A. Sarkar, P. Howlader and V. N. Balasubramanian, "Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks," 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 2018, pp. 839-847, doi: 10.1109/WACV.2018.00097.
[2] Poppi, S., Cornia, M., Baraldi, L., & Cucchiara, R. (2021). Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2299-2304.
[3] Wang, H., Wang, Z., Du, M., Yang, F., Zhang, Z., Ding, S., Mardziel, P., & Hu, X. (2020). Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 111-119.
[4] Fong, R., Patrick, M., & Vedaldi, A. (2019). Understanding Deep Networks via Extremal Perturbations and Smooth Masks. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2950-2958.
[5] Hooker, Sara et al. “A Benchmark for Interpretability Methods in Deep Neural Networks.” NeurIPS(2019). |
ICLR | Title
An evaluation of quality and robustness of smoothed explanations
Abstract
Explanation methods play a crucial role in helping to understand the decisions of deep neural networks (DNNs) and to develop the trust that is critical for the adoption of predictive models. However, explanation methods are easily manipulated through visually imperceptible perturbations that generate misleading explanations. The geometry of the decision surface of the DNNs has been identified as the main cause of this phenomenon, and several smoothing approaches have been proposed to build more robust explanations. In this work, we provide a thorough evaluation of the quality and robustness of the explanations derived by smoothing approaches. Their different properties are evaluated with extensive experiments, which reveal the settings where the smoothed explanations are better, and also those where they are worse, than the explanations derived by the common Gradient method. By making the connection with the literature on adversarial attacks, we further show that such smoothed explanations are robust primarily against additive ℓp-norm attacks. However, a combination of additive and non-additive attacks can still manipulate these explanations, which reveals shortcomings in their robustness properties.
1 INTRODUCTION
Explanation methods attribute a numerical value to each data feature in order to quantify its relative importance towards the model’s prediction. Such attributions help to better understand and trust complex models like deep neural networks (DNNs). In safety-critical tasks, such an understanding is a prerequisite to the deployment of DNNs, because a domain expert will never make important decisions based on a model’s prediction unless that model is trustworthy. Moreover, explanations can help to understand the reasons behind the decision of a model, and when it comes to model debugging, they can reveal the presence of any spurious data correlations that may lead to faulty predictions during inference (Ribeiro et al., 2016).
In the context of image classification with deep neural networks, several explanation methods have been proposed based on the gradient with respect to input, also called gradient-based explanations (Baehrens et al., 2010; Bach et al., 2015; Selvaraju et al., 2017; Sundararajan et al., 2017; Springenberg et al., 2015). The explanation generated by these methods, a saliency map, highlights the parts of the image that contributed to the prediction. Recent work has shown that gradient-based explanations of neural networks can be fragile and can be easily manipulated via adversarially perturbed inputs (Ghorbani et al., 2019; Dombrowski et al., 2019; Heo et al., 2019; Viering et al., 2019; Kindermans et al., 2019). That is, one can find a small-norm perturbation to be added to an input (often imperceptible), such that the focus of the explanation changes towards irrelevant features while the model’s output remains unchanged. This, in turn, can make explanations inappropriate to help end-users gain trust in a model’s prediction.
The large curvature of the decision surface of neural networks has been identified as one of the causes of fragility for gradient-based explanations (Ghorbani et al., 2019; Dombrowski et al., 2019; Wang et al., 2020). To make explanations more robust, a class of approaches proposed smoothing the explanation or making the decision surface of neural networks more smooth (Wang et al., 2020; Dombrowski et al., 2019; Ivankay et al., 2020). We refer to these approaches as smoothing approaches. It is worth mentioning that similar methods have been proposed in the context of adversarial robustness, with the aim of flattening the decision surface of neural networks in order to reach more robust predictions (Moosavi-Dezfooli et al., 2019; Qin et al., 2019).
Here, we provide a thorough investigation of the explanations derived by smoothing approaches in terms of explanation quality and robustness. We employ various tests to assess the quality of these explanations. Each test evaluates a desirable property for explanations, such as: sensitivity to changes in the model, fidelity to the predictor function, etc. In terms of robustness, we show that explanations derived by smoothing approaches only provide robustness against additive ℓp-norm attacks. Specifically, in this work, we show that compared to additive attacks, attacks based on the combination of spatial transformation (Xiao et al., 2018) and/or color transformation (Laidlaw & Feizi, 2019) together with additive perturbations are more effective in manipulating these explanations. Our contributions can be summarized as follows:
• We study the effectiveness of smoothing approaches to achieve robust explanations. We present results on evaluating both the quality and robustness properties of smoothed explanations.
• We assess the quality of smoothed explanations via presenting the results of various quality tests. Our results demonstrate the pros and cons of smoothed explanations with respect to the following quality aspects: sensitivity to model parameters, class discriminativeness, Infidelity, and sparseness.
• We present results for different combinations of additive and non-additive attacks, and show that they are able to manipulate explanations derived by smoothing approaches more successfully. Combining different types of perturbations to achieve stronger attacks has been a topic of investigation in the context of adversarial examples (Jordan et al., 2019). To the best of our knowledge, this is the first time such attacks have been used in the context of explanations.
Related works. There have been several works aiming to make explanations more robust. These works mostly focused on either modifying the explanation method itself or modifying the predictor model to achieve robust explanations. Wang et al. (2020) introduced Uniform Gradient, which is similar to Smooth Gradient except that it uses uniform noise, and showed that it can hardly be manipulated by additive attacks. Dombrowski et al. (2019) proved that a network with soft-plus activations has a more robust Gradient explanation compared to a ReLU network, given that the parameter β of the soft-plus function is chosen to be sufficiently small. Consequently, they proposed the β-smoothing approach in which they substitute the ReLU activations of a trained network by soft-plus functions with a small β parameter. Wang et al. (2020) introduced a regularization term called Smooth Surface Regularization (SSR) to the training objective of a DNN. This training objective penalizes the large curvature of a DNN by regularizing the eigenvalue of the input Hessian with the maximum absolute value. Moreover, they showed that adversarial training (Madry et al., 2018) also leads to more robust explanations. This fact can also be deduced from the results of (Moosavi-Dezfooli et al., 2019), as they showed that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs. Anders et al. (2020) proposed an attack in which they adversarially manipulate the model instead of the input in order to manipulate the explanation. They then proposed a modification to the existing explanation methods to make them more robust against such manipulated models. Lakkaraju et al. (2020) proposed a framework for generating robust and stable black-box explanations based on adversarial training. Chen et al. (2019) introduced a regularization term to the training objective of neural networks to achieve robust Integrated Gradient explanations. Finally, Dombrowski et al. (2020) developed a theoretical framework to derive bounds on the maximum manipulability of explanations and proposed three different techniques to boost the robustness of explanations. In this work, we show that the robustness of smoothed explanations can be affected by employing a combination of additive and non-additive attacks. Furthermore, we present a thorough evaluation of the different quality aspects of smoothed explanations.
2 BACKGROUND
First, we provide the definition of an explanation map and then briefly describe the explanation methods we used in this paper. Then we continue with introducing the attacks to explanations and the smoothing approaches we are going to study in this paper.
Consider a model f : R^d → R^K which classifies an input x ∈ R^d into one of the K classes. An explanation map, denoted by h_f(x) : R^d → R^d, associates a score to each feature of the input indicating the relevance of that feature towards the model’s prediction. For instance, in the context of image classification, saliency maps associate a score to each pixel of the input image, resulting in a heatmap that highlights important regions of the image leading to the model prediction. In this work, we focus on gradient-based explanations and mainly on the Gradient method. Given a model f and an input x, the Gradient explanation is defined as ∇_x f(x). Since other gradient-based explanation methods make use of the gradients with respect to the input, we argue that our results could be extended to those explanation methods as well. We will also consider two smoothed variants, namely the Smooth (Smilkov et al., 2017) and Uniform Gradient (Wang et al., 2020) methods.
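As a concrete illustration of these definitions, a minimal PyTorch sketch of the Gradient and Smooth Gradient maps might look as follows; the noise level and sample count are illustrative choices rather than the paper's settings:

```python
import torch

def gradient_explanation(model, x, target):
    """Gradient explanation: the derivative of the target-class logit
    with respect to the input, for a single (1, C, H, W) image."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target]  # logit of the target class
    score.backward()
    return x.grad.detach()

def smooth_gradient(model, x, target, n=50, sigma=0.1):
    """Smooth Gradient: average gradients over Gaussian-perturbed copies
    of the input (Smilkov et al., 2017); Uniform Gradient replaces the
    Gaussian noise with uniform noise (Wang et al., 2020)."""
    grads = torch.zeros_like(x)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        grads += gradient_explanation(model, noisy, target)
    return grads / n
```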
2.1 ATTACKS TO MANIPULATE EXPLANATIONS
Similarly to common adversarial attacks (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2014), recent work has shown that explanations can also be manipulated by adding a small and almost imperceptible perturbation to the input (Ghorbani et al., 2019; Dombrowski et al., 2019). We refer to this class of attacks as explanation attacks. There have been various formulations for explanation attacks (Ghorbani et al., 2019; Dombrowski et al., 2019). In this work, we will use the formulation introduced by Dombrowski et al. (2019). In this attack, the attacker tries to find a perturbed input for which the explanation is manipulated to be very similar to a given target explanation map while the output of the model remains approximately unchanged. Note that the target map could be any heatmap in general; however, we used the explanation of a target image as a target map in this work. Below, we will give a formal definition of this attack.
Definition 1 (Targeted manipulation attack). An explanation h_f(x) for a model f(x) is vulnerable to attack at input x if there exists a perturbed input x_adv such that h_f(x_adv) is similar to a given target map h_t but the model’s output remains unchanged. An attacker finds x_adv by minimizing the following objective function:
L = ‖h_f(x_adv) − h_t‖² + γ_1 ‖f(x_adv) − f(x)‖² + γ_2 L_reg(x, x_adv)    (1)
The first term in (1) ensures the similarity of the manipulated explanation to the target map, the second term ensures the similarity between the model output for the original and perturbed inputs, and the third term regularizes the perturbation to ensure perceptual similarity between the original and perturbed images. Note that L_reg is defined by the attacker according to the type of the perturbation. The relative weighting of the terms in (1) is controlled by the hyper-parameters γ_1 and γ_2.
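To make the optimization concrete, the following is a minimal PGD-style sketch of the additive (Delta) instance of this attack. It is an illustration under stated assumptions, not the authors' implementation: explain_fn is assumed to return an explanation map that is differentiable with respect to its input (e.g., a gradient map built with create_graph=True on a soft-plus network, cf. footnote 2), and here the ℓ∞ projection takes the place of the regularizer L_reg:

```python
import torch

def manipulate_explanation(model, explain_fn, x, h_target, target_cls,
                           eps=8/255, alpha=1/255, steps=200, gamma1=1e6):
    """PGD-style sketch of the targeted manipulation attack in Eq. (1),
    restricted to an additive (Delta) perturbation."""
    f_x = model(x).detach()  # original prediction, kept fixed
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        h_adv = explain_fn(model, x_adv, target_cls)
        loss = ((h_adv - h_target) ** 2).sum() \
             + gamma1 * ((model(x_adv) - f_x) ** 2).sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # descend on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to l_inf ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()
```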
2.2 TOWARDS ROBUST EXPLANATIONS
Recent works have tried to define the robustness of explanations in terms of the sensitivity of input gradients to changes in the input data (Wang et al., 2020; Dombrowski et al., 2019). Wang et al. (2020) define the robustness of explanations by the Lipschitz continuity coefficient of the input gradients; a smaller coefficient means that the explanation is less sensitive to the changes in the input and hence more robust. In this regard, a class of approaches to generate robust explanations have been proposed in the recent works, which are either based on smoothing out the explanation maps or flattening the decision boundary of the model itself. Broadly, these approaches can be classified into two categories: (1) Post-hoc approaches do not require retraining of the network and can be applied as a post-processing step. (2) Ad-hoc approaches to robust explanations require retraining of the network and hence are more costly.
In this work, we consider Smooth Gradient (Smilkov et al., 2017), Uniform Gradient (Wang et al., 2020), and β-smoothing (Dombrowski et al., 2019) as post-hoc approaches. The first two methods involve smoothing the explanation map, while the third one smooths the decision surface of the model. All three approaches act on pre-trained models, and hence are characterized as post-hoc. Among the ad-hoc methods, we study the explanations generated by adversarially trained networks, and networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019), which is a similar approach to SSR (Wang et al., 2020)1.
1We experiment only with CURE, because with the publicly available code of SSR we were not able to reproduce the results in (Wang et al., 2020).
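For illustration, the ReLU-to-soft-plus substitution behind the β-smoothing approach described above can be sketched as a generic PyTorch module swap. This is a minimal sketch, not the authors' exact implementation, and it assumes the network registers its ReLUs as modules rather than calling them functionally:

```python
import torch.nn as nn

def beta_smooth(model: nn.Module, beta: float = 0.8) -> nn.Module:
    """Replace every ReLU in a pretrained network with a soft-plus of the
    given beta (the beta-smoothing of Dombrowski et al., 2019). Smaller
    beta yields smoother input gradients, while larger beta approaches
    the original ReLU behaviour."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, nn.Softplus(beta=beta))
        else:
            beta_smooth(child, beta)  # recurse into nested submodules
    return model
```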
3 EVALUATING POST-HOC APPROACHES
Here, we begin by evaluating the quality of explanations derived by post-hoc approaches that do not require retraining of the network. Then, we evaluate the robustness of these explanations by presenting results on effective non-additive attacks to manipulate them. For all of the experiments in this section, we used a VGG-16 network trained on ImageNet (Russakovsky et al., 2015), and for generating the explanation maps we used the Captum (Kokhlikyan et al., 2020) package. Moreover, for the β-smoothing approach we always set β = 0.8 as suggested in (Dombrowski et al., 2019).
3.1 QUALITY OF EXPLANATIONS OF POST-HOC APPROACHES
To evaluate and compare the quality of the explanations, we use various quality tests presented in the literature. In general, assessing the quality of an explanation is a challenging task and each quality test only evaluates a specific quality aspect of an explanation. Therefore the assemblage of quality tests helps to understand which quality aspects of the explanations are improved and which are deteriorated by the smoothing approaches.
Cascade randomization of model parameters. Adebayo et al. (2018) argued that it is desired for an explanation to be sensitive to the changes in the model parameters. They proposed a model parameter randomization test to assess this sensitivity. In this test, the parameters of a model are progressively randomized from the top layer (logits) to the bottom layers. In each step of randomization, the explanation from the resulting model is compared against the explanation from the original model. Randomizing the model parameters means losing what the model has learned from the data during training. Therefore, we expect a "good" explanation to be destroyed in this process. However, if an explanation is insensitive to the randomization of the model parameters, then it is not deemed appropriate for debugging the model under erroneous predictions.
The visual results of this test for the Gradient explanation and post-hoc approaches are shown in Figure 1. More examples of this test can be found in the Appendix. One can observe that the explanations derived from post-hoc approaches show less sensitivity to the randomization of model parameters than the Gradient method. This can also be verified by the Spearman rank correlation between the original and randomized explanations shown in Figure 2. We observe that for the smoothed explanation methods, the original and randomized explanations have a high rank correlation after the randomization of the top layers of the network. These results highlight that using Smooth Gradient, Uniform Gradient, and β-smoothing to achieve a more robust explanation can come at the expense of having explanations that are less sensitive to model parameters.
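A minimal sketch of how such a cascading randomization test could be implemented is given below; the Gaussian re-initialization and the use of modules() ordering as a proxy for layer depth are simplifying assumptions, and explain_fn is a placeholder for any of the explanation methods above:

```python
import copy
import torch
from scipy.stats import spearmanr

def cascade_randomization(model, x, target, explain_fn):
    """Sketch of the cascading randomization test (Adebayo et al., 2018):
    re-initialize parameterized layers from the top (logits) down and
    record the Spearman rank correlation between the randomized and the
    original explanation after every step."""
    reference = explain_fn(model, x, target).detach().flatten()
    randomized = copy.deepcopy(model)
    layers = [m for m in randomized.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    correlations = []
    for layer in reversed(layers):           # logits first, input last
        torch.nn.init.normal_(layer.weight)  # destroy the learned weights
        if layer.bias is not None:
            torch.nn.init.zeros_(layer.bias)
        h = explain_fn(randomized, x, target).detach().flatten()
        rho, _ = spearmanr(reference.abs().numpy(), h.abs().numpy())
        correlations.append(rho)
    return correlations
```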
Class sensitivity of explanations. A good visual explanation should be able to localize the image regions relevant to the target category, i.e., it should be class discriminative (Selvaraju et al., 2017). This is particularly significant when dealing with images containing more than one object. To assess the class discriminativeness of an explanation we used a quality test equivalent to the pointing game (Zhang et al., 2016). We sampled images from the MS COCO dataset (Lin et al., 2014), containing two objects that are also present among the ImageNet class labels. For this test we only keep
the samples for which one of the objects in the image is the top predicted class by the network and the other object is among the top 20 predicted classes by the network. We compute the explanation maps for each of the class labels corresponding to the objects. Using the segmentation mask of the objects provided in the dataset as ground truth, we compute what percentage of the top-20 values in the explanation maps generated for each target category are inside the corresponding segmentation masks. The results of this test are shown in Table 1 and a visual depiction of this test is given in Figure 3. These results indicate that the smoothed explanation methods are less discriminatory when generated for the target class label that has a lower probability. This suggests that in terms of class discriminativeness of explanations, the post-hoc smoothing approaches investigated in this paper are inferior to the Gradient method.
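The per-image score of this test reduces to a few lines; the sketch below assumes the attribution has been collapsed to a single channel and the mask is binary:

```python
import numpy as np

def top_k_inside_mask(attribution: np.ndarray, mask: np.ndarray, k: int = 20):
    """Fraction of the k highest-attribution pixels that fall inside the
    object's segmentation mask (the class-sensitivity test above)."""
    flat = np.abs(attribution).ravel()
    top_idx = np.argpartition(flat, -k)[-k:]    # indices of the k largest values
    return float(mask.ravel()[top_idx].mean())  # mask is binary {0, 1}
```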
Sparseness of explanations. To create explanations that are human-accessible, it is advantageous to have a sparse explanation map (Molnar, 2019), i.e., only the features that are truly predictive of the model output should have significant contributions, and irrelevant features should have negligible contributions. Sparse explanations are more concise because they only include features with significant contributions, making it simpler for end-users to understand the reasons for a specific prediction of the model (Chalasani et al., 2020). To measure the sparseness of an explanation map, we applied the Gini Index on the absolute value of the flattened explanation maps. The Gini Index is a metric that measures the sparseness of a vector with non-negative values (Hurley & Rickard, 2009). By definition, the Gini Index takes values in [0, 1] with higher values indicating more sparseness. Table 2 shows the average Gini Index of the Gradient, Smooth Gradient, Uniform Gradient, and
β-smoothing computed for 1000 randomly sampled images from ImageNet. The results show that compared to the Gradient method, Smooth Gradient and Uniform Gradient provide less concise explanations, whereas β-smoothing actually improves the sparseness of the explanations as compared to the Gradient method.
Explanation Infidelity. Introduced in Yeh et al. (2019), this metric captures how the predictor function changes in response to significant perturbations to the input and is defined as the expected squared difference between two terms: 1) the dot product of the input perturbation and the explanation and 2) the difference between function values after significant perturbations to the input. The metric generalizes the completeness axiom (Shrikumar et al., 2017; Sundararajan et al., 2017) because it allows for different types of perturbations which could be of interest depending on the problem and the dataset. We use the Infidelity metric to compare the effect of post-hoc smoothing approaches on the fidelity of explanations to the predictor function. As suggested in (Yeh et al., 2019), we used the square removal perturbation to compute the Infidelity of explanations for randomly selected images from ImageNet. Table 3 shows the results for the post-hoc approaches. A lower Infidelity value indicates better fidelity of the explanation to the predictor function. The results suggest that the degree of smoothing used to robustify explanations also improves their Infidelity. Therefore, with respect to the Infidelity metric, all of the smoothed explanations investigated in this section are superior to the Gradient method. This finding is also in line with the results of Yeh et al. (2019), i.e., that modest smoothing improves the Infidelity of explanations.
3.2 ROBUSTNESS OF EXPLANATIONS OF POST-HOC APPROACHES
Now, we will evaluate the robustness of Smooth Gradient, Uniform Gradient, and β-smoothing explanations. We present attacks composed of additive and non-additive perturbations, and show that they are more effective than additive attacks at manipulating explanations. The non-additive attacks we employed are spatial transformation attacks (Xiao et al., 2018) and recoloring attacks (Laidlaw & Feizi, 2019). See Appendix B for a brief description of each of these attacks. In the rest of this paper, we refer to the additive attack as Delta, the spatial transformation attack as StAdv, and the recoloring attack as Recolor.
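For intuition, the core of a spatial transformation attack is a differentiable warp: instead of an additive perturbation, the attacker optimizes a per-pixel flow field. A minimal sketch of such a warp (our illustration, not the authors' code) is:

```python
import torch
import torch.nn.functional as F

def spatial_transform(x: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Apply a per-pixel flow field to an image, as in the StAdv attack
    (Xiao et al., 2018). x: (1, C, H, W); flow: (1, H, W, 2) displacements
    in normalized [-1, 1] coordinates. The flow is the variable the
    attacker optimizes instead of an additive perturbation."""
    _, _, H, W = x.shape
    ys = torch.linspace(-1, 1, H)
    xs = torch.linspace(-1, 1, W)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    base_grid = torch.stack((gx, gy), dim=-1).unsqueeze(0)  # identity grid
    return F.grid_sample(x, base_grid + flow, align_corners=True)
```

The Recolor attack analogously optimizes a per-pixel color transformation rather than a spatial one.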
We used the projected gradient descent (PGD) algorithm to optimize the objective function (1)2. In our experiments, we evaluate three combinations of attacks, namely Delta, Delta+StAdv, and Delta+StAdv+Recolor, against the explanation of a VGG-16 network trained on ImageNet (Russakovsky et al., 2015). See Appendix C.1 for the details about the ℓ∞ norm for each type of the perturbations and the hyper-parameters used in each attack setting.
We use two metrics to evaluate the attacks: (1) The Cosine Distance metric (cosd) to evaluate the similarity between the target and manipulated explanations (Wang et al., 2020). A lower cosine distance corresponds to a lower ℓ2 distance between the target and manipulated explanations indicating a higher similarity. The range of the values for cosd is between 0 and 1. (2) The LPIPS metric for
2As discussed in (Ghorbani et al., 2019; Dombrowski et al., 2019), to avoid zero-valued gradients when optimizing (1), we have to replace the ReLU activation with its smooth approximation. In this work, we used a soft-plus function with β = 100.
quantifying the perceptual similarity between images (Zhang et al., 2018). A lower LPIPS value indicates higher similarity.
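As an illustration, cosd can be computed directly from the flattened maps; the sketch below additionally takes absolute values so that the distance stays in [0, 1] as stated above. Perceptual similarity, in turn, is typically computed with a learned metric such as the publicly available lpips package.

```python
import torch
import torch.nn.functional as F

def cosine_distance(h_manip: torch.Tensor, h_target: torch.Tensor) -> float:
    """cosd between two explanation maps: one minus the cosine similarity
    of the flattened (absolute-valued) maps. Lower values mean the
    manipulated explanation is closer to the target."""
    a = h_manip.flatten().abs()
    b = h_target.flatten().abs()
    return float(1.0 - F.cosine_similarity(a, b, dim=0))
```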
Figure 4 shows the cosine distance between the target and manipulated explanations, and the perceptual similarity (LPIPS) between the perturbed and original images for each attack setting. We can observe that Delta+StAdv and Delta+StAdv+Recolor attacks are more effective than Delta attacks at manipulating β-smoothing explanations, i.e., with a less perceptible perturbation (lower LPIPS value), we can reach a cosd value between manipulated and target explanations very close to the cosd value when attacking the Gradient method. The effect of the non-additive attacks is less significant on the Smooth and Uniform Gradient methods; however, we can still observe improvements in the cosd values under these attacks. Taken together, these results show that Smooth Gradient, Uniform Gradient, and β-smoothing explanations are more vulnerable to non-additive attacks and hence such attacks should be considered as a threat to the robustness of these methods. As an example, we can visually see the effectiveness of the Delta+StAdv+Recolor attack against different explanation methods in Figure 5.
4 EVALUATING AD-HOC APPROACHES
Here, we recreate the experiments of Section 3 for the ad-hoc approaches. We study the explanations of networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019),
and adversarial training (Madry et al., 2018). Training with CURE regularizes the eigenvalue of the input Hessian with the maximum absolute value and is similar to SSR, which was shown to improve the robustness of explanations against additive attacks (Wang et al., 2020). Adversarial training also smooths the decision surface and can provide more robust explanations.
For the experiments in this section, we used a ResNet-18 network trained with CURE and an adversarially trained ResNet-18 network trained on adversarial examples with the ℓ∞ norm of the perturbations upper bounded by 8/255 (Engstrom et al., 2019). Both networks are trained on the CIFAR-10 dataset (Krizhevsky, 2012).
4.1 QUALITY OF EXPLANATIONS OF AD-HOC APPROACHES
Cascade randomization of model parameters. We evaluate the sensitivity of explanations of the networks trained with CURE and adversarial training using the cascade randomization of model parameters test.
The Spearman rank correlation between the original and randomized explanations is shown in Figure 6. These results show that the explanation of an adversarially trained network is less sensitive to model parameters. This suggests that the explanation of an adversarially trained network cannot be helpful for debugging the model when it makes a wrong prediction.
Sparseness of explanations. We compare the sparseness of the explanations derived by ad-hoc approaches, using the Gini Index metric. Table 4 compares the Gini Index for the explanations of networks trained with different training objectives. These results show that adversarial training helps to improve the sparseness of explanations as compared to standard training. Hence the explanations of an adversarially trained network are more concise. This is in line with the results of Chalasani et al. (2020). However, the results of Table 4 indicate that training a network with CURE does not help to improve the sparseness of explanations as compared to standard training.
Explanation Infidelity. To compare the fidelity of explanations derived by ad-hoc approaches to the predictor function, we used the Infidelity metric with the square perturbation (Yeh et al., 2019). Table 5 shows the results for randomly selected images from CIFAR-10. A lower Infidelity value indicates better fidelity of the explanation to the predictor function. From these results, we can observe that training a network with CURE or with adversarial training helps to improve the explanation Infidelity. Therefore, with respect to the Infidelity metric, the ad-hoc smoothing approaches investigated in this section improve the explanation Infidelity as compared to standard training.
4.2 ROBUSTNESS OF EXPLANATIONS OF AD-HOC APPROACHES
Now, we evaluate the improvement of robustness via ad-hoc approaches. We present results for Delta, Delta+StAdv, and Delta+StAdv+Recolor attacks against explanations of the networks trained with CURE and adversarial training. Figure 7 shows the results of these attacks. For the adversarially trained network, we can observe that non-additive attacks can manipulate explanations more effectively than the additive attacks. However, even with the strongest attack setting, we still cannot get close to the cosd value reached by attacking the explanation of the network trained in the standard way. For the attacks against the explanation of the network trained with CURE, the effect of non-additive attacks is less significant in terms of the cosd value; however, we can still observe that such attacks can reach similar cosd values with perceptually less visible perturbations.
5 CONCLUSION
We have evaluated two aspects of smoothed explanations: a) explanation quality, and b) robustness of explanations. In terms of explanation quality, we performed a thorough evaluation of four quality aspects: sensitivity to model parameters, class discriminativeness, sparseness, and Infidelity. Our results show that the smoothed explanations investigated in this paper perform worse than those of the Gradient method in terms of sensitivity to model parameters and class discriminativeness. On the other hand, we show that using such smoothing methods helps to improve explanation Infidelity and sparseness.
We further looked at the robustness of explanations when inputs are perturbed by a combination of additive and non-additive attacks. To the best of our knowledge, this is the first time such attacks are used to manipulate explanations. Our experimental results highlighted the fact that non-additive attacks are still a threat to explanation methods, including the smoothed ones. These results also point us to the fact that many problems in explanation robustness can be addressed by making analogies with the area of prediction robustness. As these two areas are closely related, the solutions already explored in prediction robustness can be potentially helpful to study explanation robustness. This will be the focus of our future work. | 1. What is the focus of the paper, and what contributions does it make to the field of explainable AI?
2. What are the strengths and weaknesses of the paper regarding its experimental design and evaluation metrics?
3. How does the reviewer assess the quality and robustness of the proposed approaches?
4. What are some concerns or questions raised by the reviewer regarding the paper's methodology and results?
5. How might the authors address these concerns and improve their work? | Summary Of The Paper
Review | Summary Of The Paper
The authors empirically evaluate the quality and robustness of 3 post-hoc and 2 ad-hoc approaches for robustifying gradient-based attributions under a combination of 3 adversarial attacks which they use to target the attributions. They evaluate the robustness via cosine distance and LPIPS to a target heatmap, and the quality via Spearman correlation to the original explanation under cascade randomization of model parameters, the sparseness of the attribution maps using the Gini Index, and the explanation Infidelity.
Review
Strengths
a broad set of evaluation metrics is used to measure the quality
hyper-parameters are well-documented, reproducibility is high
methods from prior work are studied thoroughly for their application
the paper is structured and written well
Weaknesses
the sample set of almost all experiments is very small and should be increased
currently for post-hoc approaches on ImageNet:
128 samples for the similarity to the target map
100 samples for cascade randomization
20 (!) samples for the segmentation
100 samples for sparseness with the GINI index
100 samples for infidelity
currently for the ad-hoc approaches on CIFAR-10:
320 samples for the similarity to the target map
100 samples for cascade randomization
100 samples for sparseness with the GINI index
100 samples for infidelity
errors should be reported for the segmentation experiment in table 1, the cascade randomization (Figure 2 & 6), the GINI index (Table 2 & 4) and the infidelity (Table 3 & 5)
only a single model on a single dataset is used for each experiment, which should be increased
VGG16 on ImageNet for post-hoc approaches
ResNet-18 on CIFAR-10 for ad-hoc approaches
only a single run of the experiments is conducted with only a single seed (model parameters)
to test for statistical significance, the experiments should, in addition to more models and a higher sample size, be conducted multiple times with multiple seeds
are the error bars in Figures 4 and 7 percentiles or the standard deviation?
the errors in Figures 4 and 7 are very large ("compatible results" most of the time), but this is not discussed
in the segmentation experiment in Table 1, only the 20 pixels with the highest attribution values are investigated
why do the authors not use all pixels' attribution values?
if there is any reason behind this, it should be discussed, otherwise all pixels should be used
this approach measures spiky attribution maps in a different way than smooth ones, since the spiky attribution maps will have more of their total attribution scores measured
in Table 3 the infidelity for Smooth and Uniform is only marginally better (1.43 vs. 1.42), yet in the text this is simply described as an improvement (this marginal improvement will probably even become less significant with errors reported)
it seems a little suspicious that Smooth and Uniform Gradient have the exact same values for GINI Index and infidelity in Tables 2 & 3, though this may just be by chance
for the similarity in Figures 4 & 7, it may give more insights to also compute the distance to the original, i.e. dist(h(x_{adv}), h(x))
why is the segmentation experiment not conducted for the ad-hoc methods?
minor: a label "higher is better" for figures 4 & 7 could improve readability |
ICLR | Title
An evaluation of quality and robustness of smoothed explanations
Abstract
Explanation methods play a crucial role in helping to understand the decisions of deep neural networks (DNNs) and to develop the trust that is critical for the adoption of predictive models. However, explanation methods are easily manipulated through visually imperceptible perturbations that generate misleading explanations. The geometry of the decision surface of the DNNs has been identified as the main cause of this phenomenon, and several smoothing approaches have been proposed to build more robust explanations. In this work, we provide a thorough evaluation of the quality and robustness of the explanations derived by smoothing approaches. Their different properties are evaluated with extensive experiments, which reveal the settings where the smoothed explanations are better, and also those where they are worse, than the explanations derived by the common Gradient method. By making the connection with the literature on adversarial attacks, we further show that such smoothed explanations are robust primarily against additive ℓp-norm attacks. However, a combination of additive and non-additive attacks can still manipulate these explanations, which reveals shortcomings in their robustness properties.
1 INTRODUCTION
Explanation methods attribute a numerical value to each data feature in order to quantify its relative importance towards the model’s prediction. Such attributions help to better understand and trust complex models like deep neural networks (DNNs). In safety-critical tasks, such an understanding is a prerequisite to the deployment of DNNs, because a domain expert will never make important decisions based on a model’s prediction unless that model is trustworthy. Moreover, explanations can help to understand the reasons behind the decision of a model, and when it comes to model debugging, they can reveal the presence of any spurious data correlations that may lead to faulty predictions during inference (Ribeiro et al., 2016).
In the context of image classification with deep neural networks, several explanation methods have been proposed based on the gradient with respect to input, also called gradient-based explanations (Baehrens et al., 2010; Bach et al., 2015; Selvaraju et al., 2017; Sundararajan et al., 2017; Springenberg et al., 2015). The explanation generated by these methods, a saliency map, highlights the parts of the image that contributed to the prediction. Recent work has shown that gradient-based explanations of neural networks can be fragile and can be easily manipulated via adversarially perturbed inputs (Ghorbani et al., 2019; Dombrowski et al., 2019; Heo et al., 2019; Viering et al., 2019; Kindermans et al., 2019). That is, one can find a small-norm perturbation to be added to an input (often imperceptible), such that the focus of the explanation changes towards irrelevant features while the model’s output remains unchanged. This, in turn, can make explanations inappropriate to help end-users gain trust in a model’s prediction.
The large curvature of the decision surface of neural networks has been identified as one of the causes of fragility for gradient-based explanations (Ghorbani et al., 2019; Dombrowski et al., 2019; Wang et al., 2020). To make explanations more robust, a class of approaches proposed smoothing the explanation or making the decision surface of neural networks more smooth (Wang et al., 2020; Dombrowski et al., 2019; Ivankay et al., 2020). We refer to these approaches as smoothing approaches. It is worth mentioning that similar methods have been proposed in the context of adversarial robustness, with the aim of flattening the decision surface of neural networks in order to reach more robust predictions (Moosavi-Dezfooli et al., 2019; Qin et al., 2019).
Here, we provide a thorough investigation of the explanations derived by smoothing approaches in terms of explanation quality and robustness. We employ various tests to assess the quality of these explanations. Each test evaluates a desirable property for explanations, such as: sensitivity to changes in the model, fidelity to the predictor function, etc. In terms of robustness, we show that explanations derived by smoothing approaches only provide robustness against additive ℓp-norm attacks. Specifically, in this work, we show that compared to additive attacks, attacks based on the combination of spatial transformation (Xiao et al., 2018) and/or color transformation (Laidlaw & Feizi, 2019) together with additive perturbations are more effective in manipulating these explanations. Our contributions can be summarized as follows:
• We study the effectiveness of smoothing approaches to achieve robust explanations. We present results on evaluating both the quality and robustness properties of smoothed explanations.
• We assess the quality of smoothed explanations via presenting the results of various quality tests. Our results demonstrate the pros and cons of smoothed explanations with respect to the following quality aspects: sensitivity to model parameters, class discriminativeness, Infidelity, and sparseness.
• We present results for different combinations of additive and non-additive attacks, and show that they are able to manipulate explanations derived by smoothing approaches more successfully. Combining different types of perturbations to achieve stronger attacks has been a topic of investigation in the context of adversarial examples (Jordan et al., 2019). To the best of our knowledge, this is the first time such attacks have been used in the context of explanations.
Related works. There have been several works aiming to make explanations more robust. These works mostly focused on either modifying the explanation method itself or modifying the predictor model to achieve robust explanations. Wang et al. (2020) introduced Uniform Gradient, which is similar to Smooth Gradient except that it uses uniform noise, and showed that it can hardly be manipulated by additive attacks. Dombrowski et al. (2019) proved that a network with soft-plus activations has a more robust Gradient explanation compared to a ReLU network, given that the parameter β of the soft-plus function is chosen to be sufficiently small. Consequently, they proposed the β-smoothing approach in which they substitute the ReLU activations of a trained network by soft-plus functions with a small β parameter. Wang et al. (2020) introduced a regularization term called Smooth Surface Regularization (SSR) to the training objective of a DNN. This training objective penalizes the large curvature of a DNN by regularizing the eigenvalue of the input Hessian with the maximum absolute value. Moreover, they showed that adversarial training (Madry et al., 2018) also leads to more robust explanations. This fact can also be deduced from the results of (Moosavi-Dezfooli et al., 2019), as they showed that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs. Anders et al. (2020) proposed an attack in which they adversarially manipulate the model instead of the input in order to manipulate the explanation. They then proposed a modification to the existing explanation methods to make them more robust against such manipulated models. Lakkaraju et al. (2020) proposed a framework for generating robust and stable black-box explanations based on adversarial training. Chen et al. (2019) introduced a regularization term to the training objective of neural networks to achieve robust Integrated Gradient explanations. Finally, Dombrowski et al. (2020) developed a theoretical framework to derive bounds on the maximum manipulability of explanations and proposed three different techniques to boost the robustness of explanations. In this work, we show that the robustness of smoothed explanations can be affected by employing a combination of additive and non-additive attacks. Furthermore, we present a thorough evaluation of the different quality aspects of smoothed explanations.
2 BACKGROUND
First, we provide the definition of an explanation map and then briefly describe the explanation methods we used in this paper. Then we continue with introducing the attacks to explanations and the smoothing approaches we are going to study in this paper.
Consider a model f : R^d → R^K which classifies an input x ∈ R^d into one of the K classes. An explanation map, denoted by h_f(x) : R^d → R^d, associates a score to each feature of the input indicating the relevance of that feature towards the model’s prediction. For instance, in the context of image classification, saliency maps associate a score to each pixel of the input image, resulting in a heatmap that highlights important regions of the image leading to the model prediction. In this work, we focus on gradient-based explanations and mainly on the Gradient method. Given a model f and an input x, the Gradient explanation is defined as ∇_x f(x). Since other gradient-based explanation methods make use of the gradients with respect to the input, we argue that our results could be extended to those explanation methods as well. We will also consider two smoothed variants, namely the Smooth (Smilkov et al., 2017) and Uniform Gradient (Wang et al., 2020) methods.
2.1 ATTACKS TO MANIPULATE EXPLANATIONS
Similarly to common adversarial attacks (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Szegedy et al., 2014), recent work has shown that explanations can also be manipulated by adding a small and almost imperceptible perturbation to the input (Ghorbani et al., 2019; Dombrowski et al., 2019). We refer to this class of attacks as explanation attacks. There have been various formulations for explanation attacks (Ghorbani et al., 2019; Dombrowski et al., 2019). In this work, we will use the formulation introduced by Dombrowski et al. (2019). In this attack, the attacker tries to find a perturbed input for which the explanation is manipulated to be very similar to a given target explanation map while the output of the model remains approximately unchanged. Note that the target map could be any heatmap in general; however, we used the explanation of a target image as a target map in this work. Below, we will give a formal definition of this attack.
Definition 1 (Targeted manipulation attack). An explanation h_f(x) for a model f(x) is vulnerable to attack at input x if there exists a perturbed input x_adv such that h_f(x_adv) is similar to a given target map h_t but the model’s output remains unchanged. An attacker finds x_adv by minimizing the following objective function:
L = ‖h_f(x_adv) − h_t‖² + γ_1 ‖f(x_adv) − f(x)‖² + γ_2 L_reg(x, x_adv)    (1)
The first term in (1) ensures the similarity of the manipulated explanation to the target map, the second term ensures the similarity between the model output for the original and perturbed inputs, and the third term regularizes the perturbation to ensure perceptual similarity between the original and perturbed images. Note that L_reg is defined by the attacker according to the type of the perturbation. The relative weighting of the terms in (1) is controlled by the hyper-parameters γ_1 and γ_2.
2.2 TOWARDS ROBUST EXPLANATIONS
Recent works have tried to define the robustness of explanations in terms of the sensitivity of input gradients to changes in the input data (Wang et al., 2020; Dombrowski et al., 2019). Wang et al. (2020) define the robustness of explanations by the Lipschitz continuity coefficient of the input gradients; a smaller coefficient means that the explanation is less sensitive to the changes in the input and hence more robust. In this regard, a class of approaches to generate robust explanations have been proposed in the recent works, which are either based on smoothing out the explanation maps or flattening the decision boundary of the model itself. Broadly, these approaches can be classified into two categories: (1) Post-hoc approaches do not require retraining of the network and can be applied as a post-processing step. (2) Ad-hoc approaches to robust explanations require retraining of the network and hence are more costly.
In this work, we consider Smooth Gradient (Smilkov et al., 2017), Uniform Gradient (Wang et al., 2020), and β-smoothing (Dombrowski et al., 2019) as post-hoc approaches. The first two methods involve smoothing the explanation map, while the third one smooths the decision surface of the model. All three approaches act on pre-trained models, and hence are characterized as post-hoc. Among the ad-hoc methods, we study the explanations generated by adversarially trained networks, and networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019), which is a similar approach to SSR (Wang et al., 2020)1.
1We experiment only with CURE, because with the publicly available code of SSR we were not able to reproduce the results in (Wang et al., 2020).
3 EVALUATING POST-HOC APPROACHES
Here, we begin by evaluating the quality of explanations derived by post-hoc approaches that do not require retraining of the network. Then, we evaluate the robustness of these explanations by presenting results on effective non-additive attacks to manipulate them. For all of the experiments in this section, we used a VGG-16 network trained on ImageNet (Russakovsky et al., 2015), and for generating the explanation maps we used the Captum (Kokhlikyan et al., 2020) package. Moreover, for the β-smoothing approach we always set β = 0.8 as suggested in (Dombrowski et al., 2019).
3.1 QUALITY OF EXPLANATIONS OF POST-HOC APPROACHES
To evaluate and compare the quality of the explanations, we use various quality tests presented in the literature. In general, assessing the quality of an explanation is a challenging task and each quality test only evaluates a specific quality aspect of an explanation. Therefore the assemblage of quality tests helps to understand which quality aspects of the explanations are improved and which are deteriorated by the smoothing approaches.
Cascade randomization of model parameters. Adebayo et al. (2018) argued that it is desired for an explanation to be sensitive to the changes in the model parameters. They proposed a model parameter randomization test to assess this sensitivity. In this test, the parameters of a model are progressively randomized from the top layer (logits) to the bottom layers. In each step of randomization, the explanation from the resulting model is compared against the explanation from the original model. Randomizing the model parameters means losing what the model has learned from the data during training. Therefore, we expect a "good" explanation to be destroyed in this process. However, if an explanation is insensitive to the randomization of the model parameters, then it is not deemed appropriate for debugging the model under erroneous predictions.
The visual results of this test for the Gradient explanation and post-hoc approaches are shown in Figure 1. More examples of this test can be found in the Appendix. One can observe that the explanations derived from post-hoc approaches show less sensitivity to the randomization of model parameters than the Gradient method. This can also be verified by the Spearman rank correlation between the original and randomized explanations shown in Figure 2. We observe that for the smoothed explanation methods, the original and randomized explanations have a high rank correlation after the randomization of the top layers of the network. These results highlight that using Smooth Gradient, Uniform Gradient, and β-smoothing to achieve a more robust explanation can come at the expense of having explanations that are less sensitive to model parameters.
Class sensitivity of explanations. A good visual explanation should be able to localize the image regions relevant to the target category, i.e., it should be class discriminative (Selvaraju et al., 2017). This is particularly significant when dealing with images containing more than one object. To assess the class discriminativeness of an explanation we used a quality test equivalent to the pointing game (Zhang et al., 2016). We sampled images from the MS COCO dataset (Lin et al., 2014), containing two objects that are also present among the ImageNet class labels. For this test we only keep
the samples for which one of the objects in the image is the top predicted class by the network and the other object is among the top 20 predicted classes by the network. We compute the explanation maps for each of the class labels corresponding to the objects. Using the segmentation mask of the objects provided in the dataset as ground truth, we compute what percentage of the top-20 values in the explanation maps generated for each target category are inside the corresponding segmentation masks. The results of this test are shown in Table 1 and a visual depiction of this test is given in Figure 3. These results indicate that the smoothed explanation methods are less discriminatory when generated for the target class label that has a lower probability. This suggests that in terms of class discriminativeness of explanations, the post-hoc smoothing approaches investigated in this paper are inferior to the Gradient method.
Sparseness of explanations. To create explanations that are human-accessible, it is advantageous to have a sparse explanation map (Molnar, 2019), i.e., only the features that are truly predictive of the model output should have significant contributions, and irrelevant features should have negligible contributions. Sparse explanations are more concise because they only include features with significant contributions, making it simpler for end-users to understand the reasons for a specific prediction of the model (Chalasani et al., 2020). To measure the sparseness of an explanation map, we applied the Gini Index on the absolute value of the flattened explanation maps. The Gini Index is a metric that measures the sparseness of a vector with non-negative values (Hurley & Rickard, 2009). By definition, the Gini Index takes values in [0, 1] with higher values indicating more sparseness. Table 2 shows the average Gini Index of the Gradient, Smooth Gradient, Uniform Gradient, and
β-smoothing computed for 1000 randomly sampled images from ImageNet. The results show that compared to the Gradient method, Smooth Gradient and Uniform Gradient provide less concise explanations, whereas β-smoothing actually improves the sparseness of the explanations as compared to the Gradient method.
Explanation Infidelity. Introduced in Yeh et al. (2019), this metric captures how the predictor function changes in response to significant perturbations to the input and is defined as the expected squared difference between two terms: 1) the dot product of the input perturbation and the explanation and 2) the difference between function values after significant perturbations to the input. The metric generalizes the completeness axiom (Shrikumar et al., 2017; Sundararajan et al., 2017) because it allows for different types of perturbations which could be of interest depending on the problem and the dataset. We use the Infidelity metric to compare the effect of post-hoc smoothing approaches on the fidelity of explanations to the predictor function. As suggested in (Yeh et al., 2019), we used the square removal perturbation to compute the Infidelity of explanations for randomly selected images from ImageNet. Table 3 shows the results for the post-hoc approaches. A lower Infidelity value indicates better fidelity of the explanation to the predictor function. The results suggest that the degree of smoothing used to robustify explanations also improves their Infidelity. Therefore, with respect to the Infidelity metric, all of the smoothed explanations investigated in this section are superior to the Gradient method. This finding is also in line with the results of Yeh et al. (2019), i.e., that modest smoothing improves the Infidelity of explanations.
3.2 ROBUSTNESS OF EXPLANATIONS OF POST-HOC APPROACHES
Now, we will evaluate the robustness of Smooth Gradient, Uniform Gradient, and β-smoothing explanations. We present attacks composed of additive and non-additive perturbations, and show that they are more effective than additive attacks to manipulate explanations. The non-additive attacks we employed are spatial transformation attacks (Xiao et al., 2018), and recoloring attacks (Laidlaw & Feizi, 2019). See the Appendix B for a brief description of each of these attacks. In the rest of this paper, we refer to the additive attack as Delta, spatial transformation attack as StAdv, and recoloring attack as Recolor.
We used the projected gradient descent (PGD) algorithm to optimize the objective function (1)2. In our experiments, we evaluate three combinations of attacks, namely Delta, Delta+StAdv, and Delta+StAdv+Recolor, against the explanation of a VGG-16 network trained on ImageNet (Russakovsky et al., 2015). See Appendix C.1 for the details about the `∞ norm for each type of the perturbations and the hyper-parameters used in each attack setting.
We use two metrics to evaluate the attacks: (1) The Cosine Distance metric (cosd) to evaluate the similarity between the target and manipulated explanations (Wang et al., 2020). A lower cosine distance corresponds to a lower `2 distance between the target and manipulated explanations indicating a higher similarity. The range of the values for cosd is between 0 and 1. (2) The LPIPS metric for
2As discussed in (Ghorbani et al., 2019; Dombrowski et al., 2019), to avoid zero-valued gradients when optimizing (1), we have to replace the ReLU activation with its smooth approximation. In this work, we used a soft-plus function with β = 100.
quantifying the perceptual similarity between images (Zhang et al., 2018). A lower LPIPS value indicates higher similarity.
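A minimal sketch of the cosd computation (the function name is ours; maps are flattened before comparison, and for non-negative maps the value lies in [0, 1]):

import numpy as np

def cosd(a, b):
    a, b = a.ravel(), b.ravel()
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(1.0 - sim)  # lower value = more similar explanations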
Figure 4 shows the cosine distance between the target and manipulated explanations, and the perceptual similarity (LPIPS) between the perturbed and original images for each attack setting. We can observe that Delta+StAdv and Delta+StAdv+Recolor attacks are more effective than Delta attacks at manipulating β-smoothing explanations, i.e., with a less perceptible perturbation (lower LPIPS value), we can reach a cosd value between the manipulated and target explanations very close to the cosd value when attacking the Gradient method. The effect of the non-additive attacks is less significant on the Smooth and Uniform Gradient methods; however, we can still observe improvements in the cosd values under these attacks. Taken together, these results show that Smooth Gradient, Uniform Gradient, and β-smoothing explanations are more vulnerable to non-additive attacks, and hence such attacks should be considered a threat to the robustness of these methods. As an example, we can visually see the effectiveness of the Delta+StAdv+Recolor attack against different explanation methods in Figure 5.
4 EVALUATING AD-HOC APPROACHES
Here, we recreate the experiments of Section 3 for the ad-hoc approaches. We study the explanations of networks trained with curvature regularization (CURE) (Moosavi-Dezfooli et al., 2019),
and adversarial training (Madry et al., 2018). Training with CURE regularizes the eigenvalue of the input Hessian with the maximum absolute value and is similar to SSR, which was shown to improve the robustness of explanations against additive attacks (Wang et al., 2020). Adversarial training also smooths the decision surface and can provide more robust explanations.
For the experiments in this section, we used a ResNet-18 network trained with CURE and an adversarially trained ResNet-18 network trained on adversarial examples with `∞ norm of the perturbations upper bounded by 8/255 (Engstrom et al., 2019). Both networks are trained on CIFAR-10 dataset (Krizhevsky, 2012).
4.1 QUALITY OF EXPLANATIONS OF AD-HOC APPROACHES
Cascade randomization of model parameters. We evaluate the sensitivity of explanations of the networks trained with CURE and adversarial training using the cascade randomization of model parameters test.
The Spearman rank correlation between the original and randomized explanations is shown in Figure 6. These results show that the explanation of an adversarially trained network is less sensitive to model parameters. This suggests that the explanation of an adversarially trained network may not be helpful for debugging the model when it is making a wrong prediction.
Sparseness of explanations. We compare the sparseness of the explanations derived by ad-hoc approaches, using the Gini Index metric. Table 4 compares the Gini Index for the explanations of networks trained with different training objectives. These results show that adversarial training helps to improve the sparseness of explanations as compared to standard training. Hence the explanations of an adversarially trained network are more concise. This is in line with the results of Chalasani et al. (2020) as well. However, the results of Table 4 indicate that training a network with CURE does not help to improve the sparseness of explanations as compared to standard training.
Explanation Infidelity. To compare the fidelity of explanations derived by ad-hoc approaches to the predictor function, we used the Infidelity metric with square perturbation (Yeh et al., 2019). Table 5 shows the results for randomly selected images from CIFAR-10. A lower infidelity value indicates better fidelity of the explanation to the predictor function. From these results, we can observe that training a network with CURE and adversarial training helps to improve the explanation Infidelity. Therefore with respect to the Infidelity metric, the ad-hoc smoothing approaches investigated in this section improve the explanation Infidelity as compared to standard training.
4.2 ROBUSTNESS OF EXPLANATIONS OF AD-HOC APPROACHES
Now, we evaluate the improvement of robustness via ad-hoc approaches. We present results for Delta, Delta+StAdv, and Delta+StAdv+Recolor attacks against explanations of the networks trained with CURE and adversarial training. Figure 7 shows the results of these attacks. For the adversarially trained network, we can observe that non-additive attacks can more effectively manipulate explanations compared to the additive attacks. However, even with the strongest attack setting we still cannot get close to the cosd value reached by attacking the explanation of the network trained in the standard way. For the attacks against the explanation of the network trained with CURE, the effect of non-additive attacks is less significant in terms of the cosd value; however, we can still observe that such attacks can reach similar cosd values with perceptually less visible perturbations.
5 CONCLUSION
We have evaluated two aspects of smoothed explanations: a) explanation quality, and b) robustness of explanation. In terms of explanation quality, we performed a thorough evaluation of four quality aspects: sensitivity to model parameters, class discriminativeness, sparseness, and infidelity. Our results show that the smoothed explanations investigated in this paper perform worse than those of the Gradient method in terms of sensitivity to model parameters and class discriminativeness. On the other hand, we show that using such smoothing methods helps to improve explanation Infidelity and sparseness.
We further looked at the robustness of explanations when inputs are perturbed by a combination of additive and non-additive attacks. To the best of our knowledge, this is the first time such attacks are used to manipulate explanations. Our experimental results highlighted the fact that non-additive attacks are still a threat to explanation methods, including the smoothed ones. These results also point us to the fact that many problems in explanation robustness can be addressed by making analogies with the area of prediction robustness. As these two areas are closely related, the solutions already explored in prediction robustness can be potentially helpful to study explanation robustness. This will be the focus of our future work.

1. What is the focus of the paper regarding evaluation and explanation approaches?
2. What are the strengths and weaknesses of the paper's experimental analysis?
3. Do you have any concerns about the paper's claims and conclusions?
4. How does the reviewer assess the novelty, significance, technical soundness, clarity, and quality of the paper?
5. Are there any suggestions for additional experiments or improvements to the current research?
Summary Of The Paper
This paper evaluates the quality and robustness of explanations of three post-hoc smoothing approaches (Smooth Gradient, Uniform Gradient, β-smoothing), and two ad-hoc smoothing approaches (CURE, Adv). It evaluates the quality of explanations based on model parameter sensitivity, class sensitivity, sparseness, and infidelity. It also evaluates the robustness of explanations to combinations of additive, spatial transformation, and recoloring attacks by comparing similarities between target and explanation maps. All evaluation is performed on publicly available benchmark datasets such as ImageNet and MS COCO. Based on their experimental results, the authors made several claims about the quality and robustness compared to the vanilla gradient method. For example, the authors claim that most of the smoothing methods are less sensitive to perturbation of model parameters so they may not be helpful to debug a model, and they may not be "robust" to non-additive types of attacks.
Review
Novelty and Significance: The novelty and significance of this paper lie in the experimental evaluation of "smoothing explanation approaches" using diverse evaluation measures. The major novelty and significance of this paper could come from Section 3.2, which aims to show the non-robustness of the smoothing methods. However, the experimental results do not show a significant difference between the three attacks, which degrades the novelty and significance. The Section 3.1 experiments and results are similar to those in prior works, but can still provide somewhat meaningful observations (i.e., insensitivity) about the smoothing approaches if error bars are provided.
Technical soundness: The experimental evidence does not fully support the authors' claims. Additional experiments and in-depth discussion are needed.
Table 1-3, 5: Please provide error bars (i.e., empirical confidence intervals). If the error bars are not small enough to distinguish the difference, please use more samples for evaluation.
Figure 4, 7. The two metrics are not consistent with each other. For example, in Figure 4, the non-additive attack is shown as effective for Smooth Gradient and β-smoothing in the left figure but not in the right figure. Why do those two metrics show different results?
Figure 4, 7. Each smoothing approach shows no significant differences between the three attacks. It is quite hard to say that those methods are less robust to other types of attacks based on this experiment. Please provide more experimental evidence or use more samples to reduce the error bar.
Writing and Clarity: Minor corrections are needed.
Page 1. The first sentence 'Explanation methods attribute a numerical value ... ' is not always true. There are other explanation methods not using attribution scores. Please modify this sentence.
Page 2. Infidelity -> infidelity |
ICLR | Title
On the Necessity of Disentangled Representations for Downstream Tasks
Abstract
A disentangled representation encodes generative factors of data in a separable and compact pattern. Thus it is widely believed that such a representation format benefits downstream tasks. In this paper, we challenge the necessity of disentangled representation in downstream applications. Specifically, we show that dimension-wise disentangled representations are not necessary for downstream tasks using neural networks that take learned representations as input. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Moreover, our study reveals that informativeness of representations best accounts for downstream performance. The positive correlation between the informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works.
1 INTRODUCTION
Disentanglement has been considered an essential property of representation learning (Bengio et al., 2013; Peters et al., 2017; Goodfellow et al., 2016; Bengio et al., 2007; Schmidhuber, 1992; Lake et al., 2017; Tschannen et al., 2018). Though there is no widely accepted formal definition yet, the fundamental intuition is that a disentangled representation should separately and distinctly capture information from generative data factors (Bengio et al., 2013). In practice, disentanglement is often implemented to emphasize a dimension-wise relationship, i.e., a representation dimension should capture information from exactly one factor and vice versa (Locatello et al., 2019b; Higgins et al., 2016; Kim & Mnih, 2018; Chen et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018; Kumar et al., 2017; Do & Tran, 2019). Disentangled representations offer human-interpretable factor dependencies. Therefore, in theory, they are robust to variations in the natural data and are expected to benefit downstream performances (Bengio et al., 2013).
Researchers are interested in empirically verifying these purported advantages. Especially, they focus on the following two-staged tasks: (1) extracting representations in an unsupervised manner from data, (2) then performing downstream neural networks training based on learned representations (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020). Among various downstream tasks, except the ones that explicitly require disentanglement (Higgins et al., 2018b; Gabbay & Hoshen, 2021; Schölkopf et al., 2021), abstract visual reasoning is widely recognized as a popular testbed (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021). The premise behind it aligns with the goals of machine intelligence (Snow et al., 1984; Carpenter et al., 1990). Moreover, its mechanism ensures valid measurement of representations downstream performance (Fleuret et al., 2011; Barrett et al., 2018).
In the abstract visual reasoning task, intelligent agents are asked to take human IQ tests, i.e., predict the missing panel of Raven’s Progressive Matrices (RPMs) (Raven, 1941). Indeed it is a challenging task for representation learning (Barrett et al., 2018; van Steenkiste et al., 2019). Disentanglement literature often takes this task as an encouraging example to show that disentanglement leads to quicker learning and better final performance (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021).
However, on the abstract visual reasoning task, we find that rotating disentangled representations, i.e., multiplying the representations by an orthonormal matrix, has no impact on sample efficiency and final accuracy. We construct the most disentangled representations, i.e., normalized true factors.
Then we solve the downstream tasks from them and their rotated variants. As shown in Figure 2a, there is little difference between the accuracy curves of the original and rotated representations throughout the learning process. On one hand, this phenomenon is surprising since the rotation decreases dimension-wise disentanglement by destroying axis alignment (Locatello et al., 2019b). Indeed, in Figure 2b we can observe notable drops in disentanglement metric scores (first 5 columns). Our finding demonstrates that disentanglement does not affect the downstream learning trajectory, which contradicts the commonly believed usefulness of disentanglement. On the other hand, it is not surprising since we apply an invertible linear transform. We can observe that Logistic Regression (LR) accuracy remains 100% before and after rotation, indicating that a simple linear layer could eliminate the effects of rotation.
Per such facts, some questions arise: Are disentangled representations necessary for two-staged tasks? If not, which property matters? To address them, we conduct an extensive empirical study based on abstract reasoning tasks. Our contributions are as follows.
• We challenge the necessity of disentanglement for abstract reasoning tasks. We find that (1) entangling representations by random rotation has little impact, and (2) general-purpose representation learning methods can reach performance better than or comparable to that of disentanglement methods.
• Following Eastwood & Williams (2018), we term what information the representation has learned as informativeness. We show that informativeness matters most for downstream performance. (1) Logistic regression (LR) accuracy on factor classification correlates most with downstream performance, compared with disentanglement metrics. (2) Conditioned on close LR accuracy, disentanglement correlates only mildly. (3) Informativeness is behind the previously argued usefulness of disentanglement, since we observe a positive correlation between LR and disentanglement metrics.
• We conduct a large-scale empirical study supporting our claim. We train 720 representation learning models covering two datasets, including both disentanglement and general-purpose methods. Then we train 5 WReNs (Barrett et al., 2018) and 5 Transformers (Vaswani et al., 2017; Hahne et al., 2019) using the outputs of each representation learning model to perform abstract reasoning, yielding a total of 7200 abstract reasoning models.
2 RELATED WORK
Disentangled representation learning. There is no agreed-upon formal definition of disentanglement. Therefore, in practice, disentanglement is often interpreted as a one-to-one mapping between representation dimensions and generative factors of data, which we term “dimension-wise disentanglement”. It requires that a representation dimension encode only one factor and vice versa (Locatello et al., 2019b; Eastwood & Williams, 2018; Kumar et al., 2017; Do & Tran, 2019). Besides dimension-wise disentanglement, Higgins et al. (2018a) propose a definition from the group theory perspective. However, its requirement of interaction with the environment precludes applicable learning methods on existing disentanglement benchmarks (Caselles-Dupré et al., 2019).
Adopting the dimension-wise definition, researchers develop methods and metrics. SOTA disentanglement methods are mainly variants of generative methods (Higgins et al., 2016; Kim & Mnih, 2018; Burgess et al., 2018; Kumar et al., 2017; Chen et al., 2018; 2016; Jeon et al., 2018; Lin et al., 2020). Corresponding metrics are designed in the following ways (Zaidi et al., 2020): intervening on factors (Higgins et al., 2016; Kim & Mnih, 2018), estimating mutual information (Chen et al., 2018), and developing classifiers (Eastwood & Williams, 2018; Kumar et al., 2017). Another line of work related to disentangled representation learning is Independent Component Analysis (ICA) (Comon, 1994). ICA aims to recover independent components of the data, using the mean correlation coefficient (MCC) as the metric. However, ICA models require access to auxiliary variables (Hyvarinen et al., 2019), leading to inevitable supervision when training on image datasets (Hyvarinen & Morioka, 2016; Khemakhem et al., 2020a;b; Klindt et al., 2020). In this paper, we focus on the downstream performance of unsupervised representation learning.
Downstream tasks. It is widely believed that disentangled representations benefit downstream tasks. Intuitively, they offer a human-understandable structure with ready access to salient factors, hence should enjoy robust generalization capacity (Bengio et al., 2013; Do & Tran, 2019). Several works conduct empirical studies on downstream tasks to support the notions above, including abstract reasoning (van Steenkiste et al., 2019), fairness (Locatello et al., 2019a), and sim2real transfer (Dittadi et al., 2020). Among these works, van Steenkiste et al. (2019) provide the most encouraging evidence from abstract reasoning tasks. We adopt their settings and investigate the same tasks. However, their results are questionable. Firstly, their study underestimates the factors' linear classification accuracy, yielding a weaker correlation between informativeness and downstream performance (see Figure 9 in Appendix A.3). Moreover, only variants of VAEs are considered. We address these issues and reach opposite conclusions.
Abstract visual reasoning has been a popular benchmark to measure the representation's downstream performance, especially in the disentanglement literature (Steenbrugge et al., 2018; van Steenkiste et al., 2019; Dittadi et al., 2020; Locatello et al., 2020; Schölkopf et al., 2021). The most common type is Raven's Progressive Matrices (RPMs) (Raven, 1941), which highly emphasize abstract and relational reasoning capacities and effectively represent human intelligence (Snow et al., 1984; Carpenter et al., 1990). To solve RPMs, one is asked to complete the missing panel of a 3× 3 grid by exploring the logical relationships of 8 context panels. Moreover, abstract visual reasoning is a well-developed benchmark for representation learning. Given that it is coupled with a principled treatment of generalization (Fleuret et al., 2011), a neural network cannot solve reasoning tasks by simply memorizing superficial statistical features. Besides, it can avoid pitfalls where test-specific heuristics learned by downstream models obscure the original properties of representations (Barrett et al., 2018). To summarize, (1) the goal of abstract visual reasoning highlights our requirements for representation learning, and (2) its mechanism ensures valid measurements. For these reasons, we focus on the necessity of disentanglement for the abstract reasoning task.
3 DOWNSTREAM BENCHMARK: ABSTRACT VISUAL REASONING
This section contains background on the downstream benchmark framework. We first introduce the definition of the abstract visual reasoning task. Then we present the framework’s ingredients: representation learning methods, metrics, and abstract reasoning models.
3.1 ABSTRACT VISUAL REASONING AS A TWO-STAGED TASK
The abstract visual reasoning tasks are highly inspired by the famous human IQ test, Raven's Progressive Matrices (RPMs) (Raven, 1941). Figure 1 shows an RPM question in our evaluation dataset. There are eight context panels and one missing panel in the left part of the figure. The context panels are arranged following some logical rules across rows. During the test, the subject must pick one of the six candidates listed in the right part to fill the missing panel. The goal is to maintain the logical relationships given by the contexts. More details of RPMs are available in Appendix A.4.
We adopt RPMs as a downstream benchmark following van Steenkiste et al. (2019). To measure the necessity of disentanglement for downstream tasks, we separate the evaluation process into two stages: (1) In Stage-1, representation learning models extract representations from images of which RPMs consist, and (2) in Stage-2, abstract reasoning models predict the missing panels from the frozen representations of contexts and answer candidates. Correspondingly, we denote representation learning models as Stage-1 models while abstract reasoning models as Stage-2 models. For Stage-1, we measure the disentanglement properties of the representations. A diverse set of Stage-1 and Stage-2 models are trained, yielding multiple samples from the joint distribution of representation metric scores and downstream accuracy. Finally, we study the relationships between representation qualities and downstream performance. We aim to investigate whether more disentangled representations perform better on abstract reasoning tasks.
The two-staged framework leverages large-scale experiments to reveal connections between the disentanglement of representations and their downstream performance. It provides a precise measurement of the importance of disentanglement. Therefore the two-staged framework is widely-accepted (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020).
3.2 BACKGROUND OF REPRESENTATION LEARNING
Disentangled representation learning methods. The seminal works of Higgins et al. (2016) and Chen et al. (2016) embody disentanglement by augmenting deep generative models (Kingma & Welling, 2013; Goodfellow et al., 2014). For disentangled representation learning methods, we focus on variants of VAE. Namely, β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE (Kumar et al., 2017). They achieve disentanglement mainly by encouraging independence between representation dimensions. Please refer to Appendix A.2 for details.
General-purpose representation learning methods. In our study, methods not (explicitly) encouraging disentanglement are called general-purpose methods. We take BYOL (Grill et al., 2020) as a representative. BYOL is a negative-free contrastive learning method. It creates different “views” of an image by data augmentation and pulls their representations together in the representation space. To avoid collapsing to trivial representations, a predictor appended to one of the siamese encoders and an exponential moving average update strategy (He et al., 2020) are employed. It does not encourage disentanglement due to the lack of regularizers. Indeed, the empirical evidence in Cao et al. (2022) demonstrates that representations learned by BYOL have weak disentanglement.
Representation property metrics. Considered properties of representations cover two axes of metrics: disentanglement metrics and informativeness metrics (Eastwood & Williams, 2018). We include the BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). Locatello et al. (2019b) prove their agreement on VAE methods with extensive experiments. Though their measurements are different, their results are positively correlated. On the other hand, informativeness requires representations to encode enough information about the factors. In this work, we employ Logistic Regression (LR). It is a favorable metric adopted in the unsupervised pretraining literature (He et al., 2020; Grill et al., 2020; Caron et al., 2021). Given the weak capacity of linear models, a higher LR accuracy ensures that sufficient information is explicitly encoded. However, it does not emphasize a dimension-wise encoding pattern like disentanglement. To distinguish the two, we term the property indicated by LR as informativeness.
3.3 BACKGROUND OF METHODS FOR ABSTRACT REASONING
In Stage-1, we extract representations of eight context panels (the left part of Figure 1) and six answer candidates (the right part of Figure 1). Then in Stage-2, downstream models perform abstract reasoning from the (frozen) representations. Abstract reasoning models evaluate whether filling the blank panel with a candidate follows the logical rules given by the contexts. For a trial T_i of one candidate a_i ∈ A = {a_1, ..., a_6} and eight context panels C = {c_1, ..., c_8}, its score is calculated as follows:

Y_i = Stage2(Stage1(T_i)), Stage1(T_i) = {Stage1(c_1), ..., Stage1(c_8)} ∪ {Stage1(a_i)}, (1)
where Y_i is the score of trial T_i, Stage1(·) and Stage2(·) denote the forward processes of the Stage-1 and Stage-2 models, and Stage1(T_i) is the set of representations of the contexts and candidate a_i. After evaluating all trials {T_1, T_2, ..., T_6}, the output answer is argmax_i Y_i. We implement two different well-defined structures of Stage-2 models, namely, WReN (Barrett et al., 2018) and Transformer (Vaswani et al., 2017; Hahne et al., 2019). First, they employ an MLP or a Transformer to embed an RPM trial. Then an MLP head predicts a scalar score from the embeddings.
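As an illustration of Eq. (1), a minimal PyTorch-style sketch of the two-stage answering procedure (the function and argument names are ours; stage1 and stage2 stand for trained, frozen Stage-1 and Stage-2 models):

import torch

def answer_rpm(stage1, stage2, contexts, candidates):
    # contexts: 8 panel images; candidates: 6 answer panels.
    with torch.no_grad():
        ctx_reps = [stage1(c) for c in contexts]             # frozen representations
        scores = []
        for a in candidates:                                 # one trial T_i per candidate
            trial = torch.stack(ctx_reps + [stage1(a)])      # 9 x D representation set
            scores.append(stage2(trial))                     # scalar score Y_i
        return int(torch.argmax(torch.stack(scores)))        # argmax_i Y_i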
4 EXPERIMENTS
In this Section, we conduct a systematic empirical study about representation properties’ impacts on downstream performance. First, we introduce our experimental conditions in Section 4.1. Then we provide empirical evidence to challenge the necessity of disentanglement (Section 4.2) and to tell which property matters (Section 4.3).
4.1 EXPERIMENTS SETUP
We build upon the experimental conditions of van Steenkiste et al. (2019). Abstract visual reasoning tasks, i.e., RPMs, are solved through a two-stage process: data →(Stage-1)→ representations →(Stage-2)→ RPM answers. We first train Stage-1 models in an unsupervised manner and evaluate their disentanglement and informativeness. Then Stage-2 models are trained and evaluated on downstream tasks, yielding an abstract reasoning accuracy for each representation. Provided with a large number of (representation property score, downstream performance) pairs, we conduct a systematic study to investigate the necessity of disentanglement. More implementation details are available in Appendix A.
Datasets. We replicate the RPM generation protocol in van Steenkiste et al. (2019). The panel images are drawn from disentanglement benchmark image datasets, namely, Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019) and 3DShapes (Burgess & Kim, 2018). The rows of RPMs are arranged following the logical AND of ground truth factors. As for hardness, we only keep the hard-mixed setting, whose contexts and candidates are more confusing. According to the generation process, the size of the set of generated RPMs is sufficiently large (about 10^144), allowing us to produce fresh samples throughout training.
Reference models. Stage-1 models extract representations from RPM’s panels. To ensure the generalizability of the results, we include 360 disentangled VAEs (denoted as DisVAEs) and 360 BYOLs. Our choices of Stage-1 models cover both disentangled and general-purpose representation learning methods. Moreover, we are interested in the overall relationship between representation properties and downstream performance. Therefore we need to study the correlation between two distributions, i.e., representation metric scores and downstream performance. For this, we include various samples for both Stage-1 and Stage-2 to ensure they are representative enough. For Stage-1, a diverse set of configurations are included for each type of representation learning model. According to the histograms in Appendix C.4, our choices span various disentanglement and informativeness scores. For Stage-2, to better estimate the downstream performance distribution, we use multiple Stage-2 configurations for each representation instead of searching for the best one. Specifically, we train 10 Stage-2 models (5 WReNs and 5 Transformers) for every Stage-1 model. Stage-2 configurations are randomly sampled from a search space described in Appendix A.3 and shared across Stage-1 models. By this, we ensure fair comparisons across representations.
Training protocol. Training is conducted two-staged. Firstly, we train Stage-1 models in an unsupervised manner on the dataset consisting of RPMs’ panels, i.e., Abstract dSprites or 3DShapes. For DisVAE models, we use the training protocol of van Steenkiste et al. (2019), while for BYOL models, we follow Cao et al. (2022). In Stage-2, all models are trained for 10K iterations with a batch size of 32. After every 100 iterations, we evaluate the accuracy on newly generated 50 mini-batches of unseen RPM samples for validation and another 50 mini-batches for testing.
Evaluation protocol. We first evaluate the two stages separately. Then we analyze the relationship between the two stages, i.e., representation properties and downstream performance. Specifically, to challenge the necessity of disentanglement, we are interested in whether more disentangled representations lead to better downstream performance. Further, if it turns out that disentanglement is of limited importance, can we find another metric that better accounts for downstream performance? Therefore, for Stage-1, we employ representation metrics described in Section 3.2 to measure two aspects: disentanglement and informativeness. For all Stage-1 models, we compute the following metric scores: BetaVAE score, FactorVAE score, MIG, SAP, and LR accuracy. DCI Disentanglement is only evaluated for DisVAEs since it takes hours to develop the Gradient Boosting Trees required during the evaluation process on high-dimensional representations of BYOLs (Cao et al., 2022). For Stage-2, we inspect accuracy on newly generated test sets every 100 iterations, yielding accuracy for multiple training steps. Since every step sees fresh samples, we employ training curves to measure sample efficiency. We also report accuracy-#samples curves in Appendix C.2 .
To summarize the downstream performance of a Stage-1 model, over 5 WReNs or 5 Transformers in Stage-2, we report the mean accuracy denoted as WReN or Trans., and max accuracy denoted as WReN⋆ or Trans.⋆. Finally, we calculate the rank correlation (Spearman) between the mean performance of Stage-1 models (WReN and Trans.) at certain Stage-2 steps and their Stage-1 metric scores. A larger correlation indicates a higher significance of the representation property on downstream performance.
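The correlation analysis itself is straightforward; a sketch with placeholder arrays (in our experiments each array has one entry per Stage-1 model):

import numpy as np
from scipy.stats import spearmanr

metric_scores = np.random.rand(720)  # e.g., LR accuracy of each Stage-1 model
mean_accuracy = np.random.rand(720)  # WReN: mean accuracy over 5 Stage-2 models

rho, pval = spearmanr(metric_scores, mean_accuracy)
print(f"Spearman rank correlation: {rho:.3f} (p = {pval:.3g})")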
4.2 ARE DISENTANGLED REPRESENTATIONS NECESSARY?
Hereafter we challenge the necessity of disentanglement. We begin by comparing the downstream performance of a disentangled representation versus a deliberately designed, entangled representation. Then we discuss the necessity of the disentanglement inductive bias by evaluating the performance of disentanglement and general-purpose representation learning methods.
Effects of attenuating disentanglement. We first construct the most disentangled representations, i.e., the normalized true factor values. We normalize the true factor values to have zero means and unit standard deviations, yielding 6-d representations (note that Abstract dSprites and 3DShapes are both labeled with 6 ground truth factors). Then we rotate the constructed representations by multiplying them with randomly generated orthonormal matrices. Afterward, each dimension of the rotated feature captures a combination of factors, thus destroying disentanglement. Finally, we perform abstract reasoning training from the true factors before and after rotation. We also conduct rotations on representations learned by DisVAEs.
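A minimal sketch of this construction (names are ours; raw_factors stands in for the ground-truth factor labels, and the orthonormal matrix is drawn via the QR decomposition of a Gaussian matrix):

import numpy as np

def random_rotation(d, seed=0):
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.normal(size=(d, d)))
    return q * np.sign(np.diag(r))  # sign fix makes Q Haar-distributed

raw_factors = np.random.randint(0, 10, size=(1000, 6)).astype(float)  # placeholder labels
z = (raw_factors - raw_factors.mean(0)) / raw_factors.std(0)  # normalized true factors
z_rot = z @ random_rotation(z.shape[1])  # each dimension now mixes all factors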
We run 5 seeds defining the randomly generated rotation matrices and Stage-2 model configurations. We report results on 3DShapes with original/rotated true factors as representations and WReNs as Stage-2 models in Figure 2. As depicted in Figure 2a, there is little difference between performance before and after rotation throughout the training process. Yet Figure 2b shows significant drops in disentanglement metric scores. This surprising phenomenon suggests that even though we drastically entangle the representations, the downstream performance remains unchanged, firmly against the necessity of disentanglement. However, we can see from Figure 2b that LR scores are 100% before and after rotation. It is easy to understand because the rotation we applied
is just an invertible linear transform, which a simple LR can recover, not to mention more capable Stage-2 models. Moreover, we observe similar results for learned representations (Figure 3). We select the most disentangled DisVAE measured by FactorVAE score among the 180 DisVAE models trained on 3DShapes (recall Section 4.1). As shown in Figure 3, rotation does not hurt the performance of representations learned by DisVAEs, backing up our claim that disentanglement representations might not be necessary to achieve good downstream performance. More results of rotation experiments on other datasets are reported in Appendix C.3.
Summary: Destroying disentanglement (by random rotation) in representations does not have a noticeable impact on downstream performance throughout training.
Advantages of disentanglement inductive bias. From previous results, we demonstrate that both high performance and high sample efficiency can be achieved even if we deliberately destroy disentanglement. Further, we are interested in the inductive biases of Stage-1 models: Do disentangled representation learning models have absolute advantages in downstream performance over general-purpose models? For this, we compare the downstream performance of the different families of learning models described in Section 4.1, including BYOL, β-VAE, AnnealedVAE, β-TCVAE, FactorVAE, DIP-VAE-I, and DIP-VAE-II. Among them, BYOL does not explicitly encourage disentanglement. On the other hand, all DisVAEs are disentangled representation learning methods. From a large pool of 7200 checkpoints, we report the best performance for each model family.
Figure 4 shows overviews of the training trajectories of the Stage-1 models with the highest-performing WReN and Trans. on 3DShapes for multiple training steps. For WReN as Stage-2 models (Figure 4a), BYOL leads at the beginning, then DisVAEs catch up, and finally, BYOL converges at a higher accuracy. In contrast, when Stage-2 models are Transformers, BYOL's curve grows faster, but DisVAEs and BYOL converge with comparable performance. In general, the two curves evolve in almost identical patterns with small gaps, indicating that the disentanglement inductive bias is of limited utility in improving downstream sample efficiency. A corresponding analysis on Abstract dSprites is available in Appendix C.3, where we reach the same conclusions. As for final performance, we report the maximal WReN, WReN⋆, Trans., and Trans.⋆ across different Stage-2 models and datasets in Table 1. We select checkpoints to evaluate based on validation accuracy. In particular, the best WReN and Trans. of BYOL are higher than those of DisVAEs. In addition, it appears that BYOL performs better than or on par with DisVAEs in terms of WReN⋆ and Trans.⋆. Especially, BYOL outperforms DisVAEs on Abstract dSprites by a considerable margin.
Summary: Models not intended for disentangled representation learning can reach superior or comparable downstream performance. Therefore disentanglement inductive bias does not necessarily lead to better sample efficiency or final accuracy.
4.3 WHICH PROPERTY MATTERS DOWNSTREAM PERFORMANCE?
The results in Section 4.2 provide encouraging cases against the necessity of disentanglement. Additionally, we are interested in several further issues: (1) Which property matters most for downstream performance? (2) How can we interpret the previously claimed benefits of disentanglement (Bengio et al., 2013; Higgins et al., 2016; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020)? On account of these questions, we start by investigating how different representation properties influence downstream accuracy. We include informativeness and various disentanglement metrics.
Recall that we train 720 Stage-1 and 7200 Stage-2 models (see Section 4.1). By taking WReN and Trans. as measurements (average reasoning accuracy over 5 WReNs or 5 Transformers), we yield 720 representations paired with their downstream performance. Generally, our analysis is based on rank correlation (Spearman) between representation metric scores and downstream performance. If the correlation score is high, we can conclude that the representation property measured by the considered metric score is significant to downstream performance.
The representation property of the most significance. We calculate the rank correlation between downstream accuracy and disentanglement and informativeness scores. Meanwhile, we report rank correlation at steps 1K, 2K, 5K, and 10K, and the step with the highest validation accuracy. From correlations at different training steps, we can tell how a representation property affects sample efficiency.
Figure 5 displays rank correlations between representation metric scores and abstract reasoning test accuracy on 3DShapes. Firstly, we can find that Logistic Regression accuracy (LR) correlates most with downstream performance. The strong correlation is exhibited for all considered models at multiple steps. Since LR requires sufficient information to be captured and easily extracted from representations, we can conclude that informativeness matters most in broad conditions. In contrast, we observe that the importance of disentanglement varies among Stage-1 model families. Disentangled representation learning models (DisVAEs) exhibit strong positive correlations for several disentanglement metrics (but weaker than LR), such as the FactorVAE score and DCI Disentanglement. However, their significance does not apply to BYOL, where the correlation of disentanglement is mild or even negative. In Figure 6 we plot the (WReN, metric score) pairs at step 10000. Indeed, for BYOL-WReN on 3DShapes, we can see that the linear regression provides a good fit of downstream accuracy against the informativeness metrics. As for disentanglement metrics, we can see that the BetaVAE score and FactorVAE score suffer from narrow spreads. For MIG and SAP, the regression lines have negative slopes. We conduct a similar analysis on Abstract dSprites and make the same observations. Please refer to Appendix C.4 for more details.
Summary: The informativeness influences downstream performance most. The results are consistent across datasets and model structures.
Understanding for the previously claimed success of disentanglement. Previous works (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020) have reported empirical evidence backing up the advantages of disentangled representations. Consistently, we observe relatively strong correlations with disentanglement metrics, especially when Stage-1 models are DisVAEs in Figure 5. Based on our conclusion on the significance of the informativeness, we study the DisVAE-WReN case and provide some insights to explain why the disentanglement metrics have a high correlation to downstream performance in some cases.
We compute the overall correlations between metrics. The results are shown in Figure 7. For DisVAEs, we find that informativeness and disentanglement have high correlation scores. In particular, we can observe relatively strong correlations between LR and the FactorVAE score and BetaVAE score. Accordingly, these disentanglement metrics exhibit relatively strong correlations with downstream performance in Figure 5a. In contrast, other disentanglement metrics correlate mildly with LR, and they are not predictive of downstream performance. Therefore, disentanglement metrics are not truly predictive of downstream performance, but LR is.
To “purify” the effect of disentanglement, a natural question is: if two representations are of close informativeness, is the more disentangled one more helpful for downstream tasks? For this, we employ the adjusted metrics of Locatello et al. (2019a):
Adj. Metric = Metric − (1/5) Σ_{i∈N(LR)} Metric_i, (2)
For a representation and a certain metric (we care more about disentanglement metrics), we denote its original metric score as Metric. Then we find its 5 nearest neighbors in terms of LR, which we write as N(LR). Finally, the difference between the original metric score and the mean score of the nearest neighbors is reported as the adjusted metric. Intuitively, we calculate the relative disentanglement for representations with close LR.
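A minimal sketch of Eq. (2) (the function name is ours; metric and lr are arrays with one score per representation):

import numpy as np

def adjusted_metric(metric, lr, k=5):
    adj = np.empty_like(metric, dtype=float)
    for i in range(len(metric)):
        dist = np.abs(lr - lr[i])
        dist[i] = np.inf                  # exclude the representation itself
        nn = np.argsort(dist)[:k]         # k nearest neighbours in LR accuracy
        adj[i] = metric[i] - metric[nn].mean()
    return adj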
Figure 7b displays correlations between adjusted metrics and downstream performance. We can find that all adjusted disentanglement metrics correlate mildly with downstream performance. From this, we can see that when informativeness is close, being disentangled contributes only a small portion to the downstream performance when the downstream training steps are limited (in our case, less than or equal to 2000 steps; see Figure 4 and Figure 7).
Summary: The informativeness is the most predictable metric for downstream performance. Disentanglement only brings small extra benefits at the very beginning of downstream training.
5 CONCLUSION
In this paper, we challenge the necessity of dimension-wise disentanglement for downstream tasks. We conduct a large-scale empirical study on the abstract visual reasoning task. We start by showing that high downstream performance can be achieved by less disentangled representations. In addition, we identify that informativeness is of the most significance. Finally, we conclude that dimension-wise disentanglement is unnecessary for downstream tasks using deep neural networks with learned representations as input.
REPRODUCIBILITY STATEMENT
We provide information to reproduce our results in Appendix A. We commit to making our codes publicly available.
A REPRODUCIBILITY
In this Section, we provide implementation details to ensure reproducibility. In addition, we commit to making our codes, configurations, and running logs publicly available. All experiments are run on a machine with 2 Intel Xeon Gold 5218R 20-core processors and 4 Nvidia GeForce RTX 3090 GPUs.
A.1 REPRESENTATION LEARNING METHODS
We include both disentangled representation learning methods and general-purpose representation learning methods. i.e., DisVAEs and BYOL (Grill et al., 2020).
DisVAEs implementation. The DisVAEs include β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE-I and DIP-VAE-II (Kumar et al., 2017). We use the output of the encoder, the mean of qϕ(z|x), as representations. Hereafter, we introduce details for each method. The above methods encourage disentanglement by adding regularizers to ELBO. Adopting the notation in Tschannen et al. (2018), their objectives can be written in the following unified form:
E_{p(x)}[E_{q_ϕ(z|x)}[−log p_θ(x|z)]] + λ_1 E_{p(x)}[R_1(q_ϕ(z|x))] + λ_2 R_2(q_ϕ(z)), (3)

where q_ϕ(z|x) is the posterior parameterized by the output of the encoder, p_θ(x|z) is induced by the decoder output, R_1, R_2 are the regularizers applied to the posterior and the aggregate posterior, and λ_1, λ_2 are the coefficients controlling regularization. In the objective of β-VAE, β = λ_1 > 1 and λ_2 = 0. Taking R_1(q_ϕ(z|x)) := D_KL[q_ϕ(z|x) || p(z)] forces the posterior to be close to the prior (usually a unit Gaussian), hence penalizing the capacity of the information bottleneck and encouraging disentanglement. FactorVAE and β-TCVAE take λ_1 = 0, λ_2 = 1. With R_2(q_ϕ(z)) := TC(q_ϕ(z)), they penalize the Total Correlation (TC) (Watanabe, 1960). FactorVAE estimates TC by adversarial training, while β-TCVAE estimates TC by biased Monte Carlo sampling. Finally, DIP-VAE-I and DIP-VAE-II take λ_1 = 0, λ_2 ≥ 1 and R_2(q_ϕ(z)) := ||Cov_{q_ϕ(z)} − I||_F^2, penalizing the distance between the aggregated posterior and a factorized prior.
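For concreteness, a minimal PyTorch sketch of Eq. (3) instantiated as the β-VAE objective (λ_1 = β, R_1 = KL to the unit-Gaussian prior, λ_2 = 0); the function name and the Bernoulli reconstruction term are our illustrative choices:

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: E_q[-log p_theta(x|z)] with a Bernoulli decoder.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    # KL(q_phi(z|x) || N(0, I)) for a diagonal-Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl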
We use the code and configurations from the DisLib 1 (Locatello et al., 2019b). As for parameters, we use the same sweep as van Steenkiste et al. (2019): for each one of the 6 DisVAEs, we use 6 configurations. We train each model using 5 different random seeds. Since we consider 2 datasets (3DShapes and Abstract dSprites), finally, we yield 6 ∗ 6 ∗ 5 ∗ 2 = 360 DisVAE checkpoints.
BYOL implementation. BYOL (Grill et al., 2020) is a contrastive learning method. Figure 8 shows its pipeline. For each image x, we first create two “views” of it by data augmentation, i.e., x1 and x2. Then they are input to the siamese encoders: the online encoder and the target encoder. Specifically, x1 is fed to the online encoder, while x2 is fed to the target encoder, yielding the output
1https://github.com/google-research/disentanglement_lib.git
z1 and z2, respectively. As for architectures, both encoders share the same representation network and projection MLP. The prediction MLP is appended to the online encoder in order to avoid BYOL learning trivial representations. The objective of BYOL is
L = − ⟨z_1, z_2⟩ / (||z_1||_2 ||z_2||_2). (4)
We pull the representations of the two “views” close together. During training, the online encoder's parameters are updated by gradient descent, while the target encoder's parameters are updated as an Exponential Moving Average (EMA) of the online parameters (He et al., 2020). After training, we only keep the online encoder and use the output of the representation network as representations.
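A minimal sketch of the training signal (our own simplification: Eq. (4) is applied to the online prediction and the detached target projection, and the symmetric term over swapped views is omitted):

import torch
import torch.nn.functional as F

def byol_loss(p1, z2):
    p1 = F.normalize(p1, dim=-1)
    z2 = F.normalize(z2.detach(), dim=-1)  # no gradient through the target branch
    return -(p1 * z2).sum(dim=-1).mean()   # negative cosine similarity

def ema_update(target, online, tau=0.99):
    # Target parameters track an exponential moving average of the online ones.
    for t, o in zip(target.parameters(), online.parameters()):
        t.data.mul_(tau).add_(o.data, alpha=1.0 - tau)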
We use the PyTorch implementation of BYOL 2. We use the representation network architecture as shown in Table 2, where the representation dimension D is a parameter to be set. Except for normalization and output dimensions, the representation network architecture of BYOL and the encoder architecture of DisVAEs are similar. As for predictor and projector, we use the pipeline Linear→ BN → ReLU → Linear with 256 hidden neurons. We train the BYOLs for 105 epochs using the Adam optimizer with β1 = 0.9, β2 = 0.999, ϵ = 10−8, and learning rate (lr) as a variable parameter. For augmentation, we use the pipeline of Cao et al. (2022) (in PyTorch-style):
1. RandomApply(transforms.ColorJitter(xjit, xjit, xjit, 0.2), p=0.8) 2. RandomGrayScale(p=pgray) 3. RandomHorizontalFlip() 4. RandomApply(transforms.GaussianBlur((3,3), (1.0, 2.0)), p=0.2) 5. RandomResizeCrop(size=(64, 64), scale=(xcrop, 1.0))
The xjit, pgray, and xcrop are parameters to be set. xjit controls how much to jitter brightness, contrast, and saturation. pgray controls the probability to convert the image to grayscale. xcrop defines the lower bound for the random area of the crop.
We perform a parameter sweep on the cross product of intervals of parameters D, norm, lr, xjit, pgray, and xcrop. On 3DShapes, we use the following parameter grid (in scikit-learn style):
[ {’D’: [32, 64, 128], ’lr’: [3e-2, 3e-3], ’norm’: [BatchNorm()], ’x_jit’: [0.6, 0.8], ’p_gray’: [0.5, 0.7, 0.9], ’x_crop’: [1.0]}, {’D’: [256], ’lr’: [3e-4, 3e-5], ’norm’: [BatchNorm(), GroupNorm(num_groups=4)], ’x_jit’: [0.4, 0.8], ’p_gray’: [0.3, 0.5, 0.7], ’x_crop’: [1.0]} ]
On Abstract dSprites, we use the following parameter grid:
2https://github.com/lucidrains/byol-pytorch.git
[ {’D’: [32, 64, 128], ’lr’: [3e-3, 3e-4], ’norm’: [BatchNorm()], ’x_jit’: [0.6, 0.8], ’p_gray’: [0.0, 0.1, 0.2], ’x_crop’: [0.6]}, {’D’: [256], ’lr’: [3e-4, 3e-5], ’norm’: [BatchNorm(), GroupNorm(num_groups=4)], ’x_jit’: [0.4, 0.8], ’p_gray’: [0.0, 0.1, 0.2], ’x_crop’: [0.6]} ]
For each parameter configuration, we run it with 3 random seeds. Finally, we trained 360 BYOLs in total.
A.2 ABSTRACT REASONING METHODS
We include two abstract reasoning network architectures: WReN (Barrett et al., 2018; van Steenkiste et al., 2019) and Transformer (Vaswani et al., 2017; Hahne et al., 2019).
WReN implementation. WReN consists of two parts: a graph MLP and an edge MLP. Here we use the same notations as in Section 3.3. For the representations Stage1(T_i) of a trial, the edge MLP takes a pair of representations in Stage1(T_i) as input and embeds them into edge embeddings. Then all edge embeddings of Stage1(T_i) (C(9,2) = 36 in total) are summed and input to the graph MLP. Finally, the graph MLP outputs a scalar score, predicting the correctness of the trial T_i.
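A minimal sketch of this forward pass for a single trial (names are ours; edge_mlp and graph_mlp stand for the two MLPs described above):

import itertools
import torch

def wren_score(edge_mlp, graph_mlp, reps):
    # reps: (9, D) representations of 8 contexts plus 1 candidate.
    pairs = [torch.cat([reps[i], reps[j]])
             for i, j in itertools.combinations(range(9), 2)]  # C(9,2) = 36 pairs
    edge_emb = edge_mlp(torch.stack(pairs))   # (36, E) edge embeddings
    return graph_mlp(edge_emb.sum(dim=0))     # summed embedding -> scalar score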
We use the code (van Steenkiste et al., 2019) to implement WReN. And we use the same parameter searching spaces as them. All WReNs are trained in 10K steps with a batch size of 32. The learning rate for the Adam optimizer is sampled from the set {0.01, 0.001, 0.0001} while β1 = 0.9, β2 = 0.999, and ϵ = 10−8. For the edge MLP in the WReN model, we uniformly sample its hidden units in 256 or 512, and we uniformly choose its number of hidden layers in 2, 3, or 4. Similarly, for the graph MLP in the WReN model, we uniformly sample its hidden units in 128 or 512, and we uniformly choose its number of hidden layers in 1 or 2 before the final linear layer to predict the final score. We also uniformly sample whether we apply no dropout, dropout of 0.25, dropout of 0.5, or dropout of 0.75 to units before this last layer.
Transformer implementation. We simplify the architecture of Hahne et al. (2019). Here we treat Stage1(Ti) as a sequence. We first linear project all representations and prepend them with a learnable [class] token. We add them with learnable positional embeddings. Then they are input into a stack of Transformer blocks (Vaswani et al., 2017). Finally, an MLP predicts a scalar score from the class embedding of the final Transformer block.
We implement the Transformer architecture ourselves with utilities of the DisLib code base. All Transformers are trained for the same number of steps and the same batch size as WReN, i.e., 10K steps with a batch size of 32. We use the Adam optimizer with weight decay and a cosine learning rate scheduler. The learning rate for the Adam optimizer is uniformly selected from {5e− 4, 6e− 4, 7e− 4}. The depth of Transformer blocks is uniformly set to be 2, 3, or 4. The dimensions of q, k, v of the self-attention model are uniformly 32 or 64. The MLP head uses the same architecture and parameter space as the graph MLP in WReN. For other fixed parameters, please refer to our codes for details.
A.3 REPRESENTATION METRICS
In the main text, we employ disentanglement and informativeness metrics to measure the properties of representations. Here we provide more details.
Disentanglement metrics. We use the setup and implementation of Locatello et al. (2019b). Here we briefly introduce the details of our considered metrics. Namely, BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018) , SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). The BetaVAE score and the FactorVAE score predict the intervened factor from representations to measure disentanglement. The Mutual Information Gap and SAP compute the gap in response for each factor between the two highest representation dimensions. The difference is that MIG measures mutual information while SAP measures classification accuracy. The DCI Disentanglement calculates the entropy of the relative importance of a latent dimension in predicting factors. We follow previous studies (Locatello et al., 2019b; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020) to develop a Gradient Boosting Tree (GBT) for prediction during the DCI Disentanglement evaluation.
Any classifier could in principle be used (Eastwood & Williams, 2018). However, as reported by Cao et al. (2022), the GBT takes hours to train on the high-dimensional representations learned by BYOL. Thus we only report the DCI Disentanglement score for DisVAEs.
Informativeness metrics. We use LR to measure the informativeness of representations. We train a Logistic Regression model to predict factor values from representations. We use 10000 samples to train the LR. Unlike van Steenkiste et al. (2019), we use “multinomial” instead of “one-vs-rest” as the multi-class classification scheme. As shown in Figure 9a, for the same set of representations, the one-vs-rest LR has inferior prediction accuracy. Moreover, ranking by the scores of these two LRs yields different results. In Figure 9b, we can observe different correlations for the one-vs-rest LR. To better estimate informativeness, we use the multinomial LR as the measurement.
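A minimal sketch of this LR evaluation (our own illustration using scikit-learn; reps are (N, D) representations, factors are (N, K) integer factor labels, and accuracy is averaged over the K factors):

import numpy as np
from sklearn.linear_model import LogisticRegression

def lr_informativeness(reps, factors):
    accs = []
    for k in range(factors.shape[1]):
        clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
        clf.fit(reps, factors[:, k])            # 10000 training samples in our setup
        accs.append(clf.score(reps, factors[:, k]))
    return float(np.mean(accs))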
A.4 ABSTRACT VISUAL REASONING DATASETS
We use the two abstract visual reasoning datasets developed by van Steenkiste et al. (2019). i.e., Ravens’ Progressive Matrices created from 3DShapes (Burgess & Kim, 2018) and Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019).
We sketch the rules here by taking the RPM in Figure 1 as an example. The reasoning attributes are the ground truth factors of 3DShapes, i.e., floor hue, wall hue, object hue, scale, shape, and orientation. Each row in the 3 × 3 matrix has 1, 2, or 3 ground truth factors taking a fixed value. And the 3 rows have the same fixed ground truth factors, though they might take different values. From the context panels, one should discover the underlying logical relationship. Finally, one is asked to fill the missing panel with one of the candidates. For the RPM in Figure 1, from the contexts, we can infer that the fixed factors are: wall hue, shape, and orientation. Then for the third row, from the first 2 panels, we know that the values for the shared factors are: the wall hue is blue, the shape is cylinder, and the orientation is the azimuth that makes the wall corner appear in the right part of the image. So we choose the candidate with these factor values as the solution, as shown in Figure 10a. Figure 10b shows a sample of RPMs with answers on Abstract dSprites.
B ABLATIONS ON GENERAL-PURPOSE REPRESENTATION LEARNING METHODS
In the main text, we use BYOL as a representative of general-purpose representation learning methods. For completeness, here we introduce another general-purpose method, SimSiam (Chen & He, 2021). We modify the code of BYOL 3 to train SimSiams on 3DShapes with the following parameter grid:
[ {'D': [512], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm()], 'x_jit': [0.4, 0.8],
   'p_gray': [0.3, 0.5, 0.7], 'x_crop': [0.6, 1.0]} ]

3https://github.com/lucidrains/byol-pytorch.git
For each configuration, we run with 3 seeds, so finally we yield 72 SimSiams, as in the sketch below. Then we use the same WReN Stage-2 models as for DisVAEs and BYOLs.
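The sweep above can be expanded with scikit-learn's ParameterGrid; `train_simsiam` below is a hypothetical stand-in for the modified BYOL training entry point, not our actual code.

from sklearn.model_selection import ParameterGrid

def train_simsiam(config, seed):  # hypothetical stand-in for the training code
    print(f"training SimSiam with {config}, seed={seed}")

grid = [{'D': [512], 'lr': [3e-4, 3e-5], 'norm': ['BatchNorm'],
         'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [0.6, 1.0]}]

for config in ParameterGrid(grid):  # 2 * 2 * 3 * 2 = 24 configurations
    for seed in range(3):           # 24 * 3 = 72 SimSiams in total
        train_simsiam(config, seed)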
The results of SimSiam-WReN agree with our conclusions in the main text. As for the best performance, we have WReN=85.1% and WReN⋆=94.1%, both better than those of DisVAEs. Figure 11 shows the correlations between downstream performance and representation properties. LR still correlates most strongly at all considered steps.
C ADDITIONAL RESULTS
Figure 13: Accuracy v.s. #samples curves of the most disentangled DisVAEs before and after rotation. It is consistent with Figure 3.

[Figure 14 panels: (a) Stage-2 = WReN; (b) Stage-2 = Transformer. Each panel plots accuracy (0.2–1.0) against sample size (#batches, 0–10000) for BYOL and DisVAEs.]

Figure 14: Accuracy v.s. #samples curves of the Stage-1 models with the best WReN or Trans. It is consistent with Figure 4.
C.1 ADDITIONAL RESULTS OF FINAL PERFORMANCE
In Table 1 we report the best final performance of DisVAEs and BYOLs. Here we provide more details on which type of DisVAEs at which steps achieves the performance reported in Table 1. We can observe that the best DisVAEs vary with different datasets and Stage-2 models. As for the best steps, except for 3DShapes-WReN, BYOL achieves its best performance earlier than DisVAEs.
C.2 ACCURACY-#SAMPLES CURVES
We employ training curves (accuracy-step) in the main text to evaluate sample efficiency following van Steenkiste et al. (2019). For completeness, here we show accuracy-#samples curves.
We present the accuracy-#samples versions of Figure 3 and Figure 4, i.e., Figure 13 and Figure 14. We train the same models as in the main text until convergence with fixed training data sizes of 100, 1000, 5000, 7000, and 10000 batches. Then for each sample size, we plot the test performance at the
step with the highest validation accuracy. We can see that the rankings of representations and the evolving patterns of both types of curves agree well.
C.3 ADDITIONAL RESULTS OF RANDOM ROTATION EXPERIMENTS
This section contains additional results of the random rotation experiments. Here we report the downstream performance of deliberately entangled (by random rotation) representations.
Figure 12 shows the same experiments as Figure 2 on Abstract dSprites. We can observe that the two curves in Figure 12a are almost identical. And in Figure 12b, we can observe that disentanglement metric scores drop drastically while LR remains the same. We notice that LR is not 100%. This is because some factors of Abstract dSprites have too many support values, e.g., the x and y positions both have 32 possible values. However, our conclusion in the main text still holds, as we observe that LR is invariant to random rotation. On Abstract dSprites, we also randomly rotate the most disentangled representations from DisVAEs (measured by FactorVAE score). In Figure 15, we can see that rotation has little impact on the training trajectories, so our conclusions hold across datasets.
C.4 ADDITIONAL RESULTS OF CORRELATIONS
In this part, we report additional results related to the correlations between representation metrics and downstream performance.
Absolute values of metric scores and downstream accuracy. We show the histograms as a sanity check of the distribution of metric scores and downstream accuracy. Figure 16 presents the score distributions of each metric. We report the mean metric scores with STDs to depict the overall properties for Stage-1 models in Table 4. Figure 17 and Figure 18 display the distributions of downstream performance.
Rank correlations. This part contains additional results of rank correlations. On 3DShapes, Figure 19 displays rank correlations between adjusted metrics and downstream accuracy, Figure 20 shows the overall correlation between metrics. On Abstract dSprites, Figure 21 shows correlations between metrics and downstream performance. Then Figure 22 presents correlations between ad-
justed metrics and downstream performance. Finally, Figure 23 displays the overall correlations between metrics.
Plots of (metric score, downstream accuracy) pairs. Figures 24, 25, 26, 27, 28, 29, 30, and 31 provide an in-depth view of the correlations, where we plot (metrics, downstream accuracy) pairs. | 1. What is the main contribution of the paper regarding dimension-wise disentanglement scores and downstream performance?
2. What are the strengths and weaknesses of the paper's investigation of the correlation between disentanglement and downstream performance?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some minor issues mentioned by the reviewer regarding the paper's writing and citations? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates the correlation between dimension-wise disentanglement scores and downstream performance. In particular, it does so when using MLPs or Transformers to perform the task of abstract visual reasoning using the learned representation. After observing a poor correlation on this task, it concludes that disentanglement is not necessary for good downstream performance.
Strengths And Weaknesses
Strengths:
Clear writing: this paper was easy to read and understand, communication was clear.
Important question/investigation: The usefulness of disentanglement for downstream tasks is an open and important question.
Weaknesses:
Not clear who claimed disentanglement is necessary for downstream tasks:
I agree with the authors that it is often said/believed that disentangled representations can be beneficial for downstream tasks.
However, throughout this paper, including in the title, the authors speak of the necessity of disentangled representations for downstream tasks, claiming that they "challenge the necessity of disentanglement for downstream tasks". It is not clear to me who has claimed that disentanglement is necessary, i.e. why it should be surprising that there exists a downstream task for which disentanglement does not help---this seems like a trivial statement.
This is a major weakness since "challenging the necessity of disentanglement" is perhaps the central contribution of this paper.
A more interesting and non-trivial alternative question could be: when does disentanglement help, and when does it not? This is what prior works investigated [2,3,4]. Naturally, there will be tasks for which it helps and tasks for which it does not.
Incorrect evaluation of sample efficiency:
The authors use learning curves (gradient step vs. accuracy) to evaluate "sample efficiency". While each step sees new samples, this is fundamentally flawed since it conflates sample efficiency (the performance with N samples) and update/step efficiency (the performance with M gradient updates).
This is a major weakness since sample efficiency is one of the most commonly-purported downstream benefits of disentanglement [1,2,3,4], making it central to the authors' claims.
Insufficient comparison to related studies on disentanglement and downstream performance:
Many works have thoroughly investigated the correlation between disentanglement and downstream performance [2,3,4]. These works were much wider in their scope (tasks, datasets, representations) and reached different conclusions. Despite the attempt to undermine these studies in the related work and on page 9, I was left unconvinced that the results in this paper are novel or should undermine/question those that came before. A better comparison and explanation would help, or perhaps a more specific claim could be made (e.g. relating to this one reasoning task with neural-net architectures).
Minor:
Rotation-of-factors issue is well-known and studied: The authors claim to "find that rotating disentangled representations [...] has no impact on [...] final accuracy". This seems unsurprising given the well-known issue of rotation of factors in a linear factor-analysis model [5, sec. 9.6], which also leads to the condition in independent component analysis (ICA) that at most one of the factors can be Gaussian [6]. It was also discussed in [7]. Finally, note that it has also been investigated by a very recent (perhaps concurrent) work [8] which proposes a new notion of disentanglement that is unaffected by such rotations [8].
The term "informativeness" is either overloaded or not cited: The authors seem to use the term "informativeness" to refer to the linear-classifier performance in classifying the ground-truth factors from the learned (disentangled) representation. If so, this is precisely the definition of the "informativeness" metric in [7], but the authors never mention this relation. If this is in fact the same metric, the authors should appropriately cite, or if not, they should use a different name/term to avoid overloading an existing metric for evaluating disentangled representations.
Unclear to me why disentanglement should help the specific abstract-reasoning task used: Are only a subset of the factors needed? Do only a subset of the factors change? Is it surprising that disentanglement does not help?
Incorrect and imprecise statements:
In the second paragraph of the introduction, the authors claim that the abstract-reasoning task is general and widely-adopted, while other downstream evaluation tasks are "trivial or domain-specific", citing many past evaluations. I have to disagree with this presentation of abstract visual-reasoning as the holy-grail of downstream evaluations, and suggest that the wording be toned down.
"Locatello et al. (2019b) proves their agreement on VAE methods" -- Locatello et al. do not prove the agreement of different disentanglement metrics -- many of them measure different things.
"it takes hours to develop the Gradient Boosting Trees required [to evaluate DCI disentanglement]": GBTs are not required to evaluate DCI disentanglement---any classifier can be used, including those with a lower cost (e.g. random forests).
[1] Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1798-1828.
[2] Locatello, F., Bauer, S., Lucic, M., Rätsch, G., Gelly, S., Schölkopf, B., & Bachem, O. (2020). A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation. Journal of Machine Learning Research, 21, 1-62.
[3] Van Steenkiste, S., Locatello, F., Schmidhuber, J., & Bachem, O. (2019). Are disentangled representations helpful for abstract visual reasoning?. Advances in Neural Information Processing Systems, 32.
[4] Dittadi, A., Träuble, F., Locatello, F., Wüthrich, M., Agrawal, V., Winther, O., ... & Schölkopf, B. (2021). On the Transfer of Disentangled Representations in Realistic Settings. In International Conference on Learning Representations.
[5] Mardia, K. V., Kent, J. T., & Bibby, J. M. (1979). Multivariate Analysis. Academic Press, London.
[6] Hyvarinen, A., Karhunen, J., & Oja, E. (2001). Independent Component Analysis. Wiley.
[7] Eastwood, C., & Williams, C. K. I. (2018). A framework for the quantitative evaluation of disentangled representations. In International Conference on Learning Representations.
[8] Eastwood, C., Nicolicioiu, A. L., von Kügelgen, J., Kekić, A., Träuble, F., Dittadi, A., & Schölkopf, B. (2022). DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability. arXiv preprint arXiv:2210.00364.
Clarity, Quality, Novelty And Reproducibility
Clarity: Good. (see strength above)
Quality: Poor.
Main contribution of "challenging the necessity of disentangled representations" is questionable (see weaknesses above).
Incorrect evaluation of sample efficiency (see weaknesses above).
Novelty: Poor/limited.
Many works have thoroughly investigated the correlation between disentanglement and downstream performance. Novelty of this work is unclear in relation to those works, except for the questionable focus on "necessity". |
ICLR | Title
On the Necessity of Disentangled Representations for Downstream Tasks
Abstract
A disentangled representation encodes generative factors of data in a separable and compact pattern. Thus it is widely believed that such a representation format benefits downstream tasks. In this paper, we challenge the necessity of disentangled representations in downstream applications. Specifically, we show that dimension-wise disentangled representations are not necessary for downstream tasks using neural networks that take learned representations as input. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Moreover, our study reveals that the informativeness of representations best accounts for downstream performance. The positive correlation between informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works.
1 INTRODUCTION
Disentanglement has been considered an essential property of representation learning (Bengio et al., 2013; Peters et al., 2017; Goodfellow et al., 2016; Bengio et al., 2007; Schmidhuber, 1992; Lake et al., 2017; Tschannen et al., 2018). Though there is no widely accepted formal definition yet, the fundamental intuition is that a disentangled representation should separately and distinctly capture information from generative data factors (Bengio et al., 2013). In practice, disentanglement is often implemented to emphasize a dimension-wise relationship, i.e., a representation dimension should capture information from exactly one factor and vice versa (Locatello et al., 2019b; Higgins et al., 2016; Kim & Mnih, 2018; Chen et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018; Kumar et al., 2017; Do & Tran, 2019). Disentangled representations offer human-interpretable factor dependencies. Therefore, in theory, they are robust to variations in the natural data and are expected to benefit downstream performance (Bengio et al., 2013).
Researchers are interested in empirically verifying these purported advantages. Especially, they focus on the following two-staged tasks: (1) extracting representations in an unsupervised manner from data, (2) then performing downstream neural network training based on learned representations (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020). Among various downstream tasks, except the ones that explicitly require disentanglement (Higgins et al., 2018b; Gabbay & Hoshen, 2021; Schölkopf et al., 2021), abstract visual reasoning is widely recognized as a popular testbed (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021). The premise behind it aligns with the goals of machine intelligence (Snow et al., 1984; Carpenter et al., 1990). Moreover, its mechanism ensures valid measurement of representations’ downstream performance (Fleuret et al., 2011; Barrett et al., 2018).
In the abstract visual reasoning task, intelligent agents are asked to take human IQ tests, i.e., predict the missing panel of Raven’s Progressive Matrices (RPMs) (Raven, 1941). Indeed it is a challenging task for representation learning (Barrett et al., 2018; van Steenkiste et al., 2019). Disentanglement literature often takes this task as an encouraging example to show that disentanglement leads to quicker learning and better final performance (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021).
However, on the abstract visual reasoning task, we find that rotating disentangled representations, i.e., multiplying the representations by an orthonormal matrix, has no impact on sample efficiency and final accuracy. We construct the most disentangled representations, i.e., normalized true factors.
Then we solve the downstream tasks from them and their rotated variants. As shown in Figure 2a, there is little difference between the accuracy curves of original and rotated representations throughout the learning process. On one hand, this phenomenon is surprising since the rotation decreases dimension-wise disentanglement by destroying axis alignment (Locatello et al., 2019b). Indeed, in Figure 2b we can observe notable drops in disentanglement metric scores (first 5 columns). Our finding demonstrates that disentanglement does not affect the downstream learning trajectory, which is against the commonly believed usefulness of disentanglement. On the other hand, it is not surprising since we apply an invertible linear transform. We can observe that Logistic Regression (LR) accuracy remains 100% before and after rotation, indicating that a simple linear layer could eliminate the effects of rotation.
Per such facts, some questions arise: Are disentangled representations necessary for two-staged tasks? If not, which property matters? To address them, we conduct an extensive empirical study based on abstract reasoning tasks. Our contributions are as follows.
• We challenge the necessity of disentanglement for abstract reasoning tasks. We find that (1) entangling representations by random rotation has little impact, and (2) general-purpose representation learning methods could reach better or competitive performance than disentanglement methods.
• Following Eastwood & Williams (2018), we term what information the representation has learned as informativeness. We show that informativeness matters most for downstream performance. (1) Logistic regression (LR) accuracy on factor classification correlates most with downstream performance, compared with disentanglement metrics. (2) Conditioned on close LR accuracy, disentanglement correlates only mildly. (3) The informativeness is behind the previously argued usefulness of disentanglement, since we observe a positive correlation between LR and disentanglement metrics.
• We conduct a large-scale empirical study supporting our claim. We train 720 representation learning models covering two datasets, including both disentanglement and general-purpose methods. Then we train 5 WReNs (Barrett et al., 2018) and 5 Transformers (Vaswani et al., 2017; Hahne et al., 2019) using the outputs of each representation learning model to perform abstract reasoning, yielding a total of 7200 abstract reasoning models.
2 RELATED WORK
Disentangled representation learning. There is no agreed-upon formal definition of disentanglement. Therefore, in practice, disentanglement is often interpreted as a one-to-one mapping between representation dimensions and generative factors of data, which we term “dimension-wise disentanglement”. It requires that the representation dimension encode only one factor and vice versa (Locatello et al., 2019b; Eastwood & Williams, 2018; Kumar et al., 2017; Do & Tran, 2019). Besides dimension-wise disentanglement, Higgins et al. (2018a) propose a definition from the group theory perspective. However, its requirement of interaction with the environment precludes applicable learning methods on existing disentanglement benchmarks (Caselles-Dupré et al., 2019).
Adopting the dimension-wise definition, researchers develop methods and metrics. SOTA disentanglement methods are mainly variants of generative methods (Higgins et al., 2016; Kim & Mnih, 2018; Burgess et al., 2018; Kumar et al., 2017; Chen et al., 2018; 2016; Jeon et al., 2018; Lin et al., 2020). Corresponding metrics are designed in the following ways (Zaidi et al., 2020): intervening factors (Higgins et al., 2016; Kim & Mnih, 2018), estimating mutual information (Chen et al., 2018), and developing classifiers (Eastwood & Williams, 2018; Kumar et al., 2017). Another line of work related to disentangled representation learning is the Independent Component Analysis (ICA) (Comon, 1994). ICA aims to recover independent components of the data, using the mean correlation coefficient (MCC) as the metric. However, ICA models require access to auxiliary variables (Hyvarinen et al., 2019), leading to inevitable supervision for image datasets training (Hyvarinen & Morioka, 2016; Khemakhem et al., 2020a;b; Klindt et al., 2020). In this paper, we focus on the downstream performance of unsupervised representation learning.
Downstream tasks. It is widely believed that disentangled representations benefit downstream tasks. Intuitively, they offer a human-understandable structure with ready access to salient factors, hence should be enjoying robust generalization capacity (Bengio et al., 2013; Do & Tran, 2019). Several works conduct empirical studies on downstream tasks to support the notions above, includ-
ing abstract reasoning (van Steenkiste et al., 2019), fairness (Locatello et al., 2019a), and sim2real transfer (Dittadi et al., 2020). Among these works, van Steenkiste et al. (2019) provide the most encouraging evidence from abstract reasoning tasks. We adopt their settings and investigate the same tasks. However, their results are questionable. Firstly, their evaluation underestimates the factors’ linear classification accuracy, yielding a weaker correlation between informativeness and downstream performance (see Figure 9 in Appendix A.3). Moreover, only variants of VAEs are considered. We address these issues and reach opposite conclusions.
Abstract visual reasoning has been a popular benchmark to measure the representation’s downstream performance, especially in disentanglement literature (Steenbrugge et al., 2018; van Steenkiste et al., 2019; Dittadi et al., 2020; Locatello et al., 2020; Schölkopf et al., 2021). The most common type is the Raven’s Progressive Matrices (RPMs) (Raven, 1941), which highly emphasize abstract and relational reasoning capacities and effectively represent human intelligence (Snow et al., 1984; Carpenter et al., 1990). To solve RPMs, one is asked to complete the missing panel of a 3× 3 grid by exploring the logical relationships of 8 context panels. Moreover, abstract visual reasoning is a well-developed benchmark for representation learning. Given that it is coupled with a principled treatment of generalization (Fleuret et al., 2011), a neural network cannot solve reasoning tasks by simply memorizing superficial statistical features. Besides, it can avoid pitfalls where test-specific heuristics learned by downstream models obscure the original properties of representations (Barrett et al., 2018). To summarize, (1) the goal of abstract visual reasoning highlights our requirements for representation learning, and (2) its mechanism ensures valid measurements. For these reasons, we focus on the necessity of disentanglement for the abstract reasoning task.
3 DOWNSTREAM BENCHMARK: ABSTRACT VISUAL REASONING
This section contains background on the downstream benchmark framework. We first introduce the definition of the abstract visual reasoning task. Then we present the framework’s ingredients: representation learning methods, metrics, and abstract reasoning models.
3.1 ABSTRACT VISUAL REASONING AS A TWO-STAGED TASK
The abstract visual reasoning tasks are highly inspired by the famous human IQ test, Raven’s Progressive Matrices (RPMs) (Raven, 1941). Figure 1 shows an RPM question in our evaluation dataset. There are eight context panels and one missing panel in the left part of the figure. The context panels are arranged following some logical rules across rows. During the test, the subject must pick one of the six candidates listed in the right part to fill the missing panel. The goal is to maintain the logical relationships given by the contexts. More details of RPMs are available in Appendix A.4.
We adopt RPMs as a downstream benchmark following van Steenkiste et al. (2019). To measure the necessity of disentanglement for downstream tasks, we separate the evaluation process into two stages: (1) In Stage-1, representation learning models extract representations from images of which RPMs consist, and (2) in Stage-2, abstract reasoning models predict the missing panels from the frozen representations of contexts and answer candidates. Correspondingly, we denote representation learning models as Stage-1 models while abstract reasoning models as Stage-2 models. For Stage-1, we measure the disentanglement properties of the representations. A diverse set of Stage-1 and Stage-2 models are trained, yielding multiple samples from the joint distribution of representation metric scores and downstream accuracy. Finally, we study the relationships between representation qualities and downstream performance. We aim to investigate whether more disentangled representations perform better on abstract reasoning tasks.
The two-staged framework leverages large-scale experiments to reveal connections between the disentanglement of representations and their downstream performance. It provides a precise measurement of the importance of disentanglement. Therefore the two-staged framework is widely-accepted (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020).
3.2 BACKGROUND OF REPRESENTATION LEARNING
Disentangled representation learning methods. The seminal works of Higgins et al. (2016) and Chen et al. (2016) embody disentanglement by augmenting deep generative models (Kingma & Welling, 2013; Goodfellow et al., 2014). For disentangled representation learning methods, we focus on variants of VAE. Namely, β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE (Kumar et al., 2017). They achieve disentanglement mainly by encouraging independence between representation dimensions. Please refer to Appendix A.2 for details.
General-purpose representation learning methods. In our study, methods not (explicitly) encouraging disentanglement are called general-purpose methods. We take BYOL (Grill et al., 2020) as a representative. BYOL is a negative-free contrastive learning method. It creates different “views” of an image by data augmentation and pulls their representations together in representation space. To avoid collapsing to trivial representations, a predictor appended to one of the siamese encoders and an exponential moving average update strategy (He et al., 2020) are employed. It does not encourage disentanglement due to the lack of regularizers. Indeed, the empirical evidence in Cao et al. (2022) demonstrates that representations learned by BYOL have weak disentanglement.
Representation property metrics. Considered properties of representations cover two axes of metrics: disentanglement metrics and informativeness metrics (Eastwood & Williams, 2018). We include BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018) , SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). Locatello et al. (2019b) proves their agreement on VAE methods with extensive experiments. Though their measurements are different, their results are positively correlated. On the other hand, informativeness requires representations to encode enough information about factors. In this work, we employ Logistic Regression (LR). It is a favorable metric adopted by unsupervised pretraining literature (He et al., 2020; Grill et al., 2020; Caron et al., 2021). Given the weak capacity of linear models, a higher LR accuracy ensures that sufficient information is explicitly encoded. However, it does not emphasize a dimension-wise encoding pattern like disentanglement. To distinguish, we term the property indicated by LR as informativeness.
3.3 BACKGROUND OF METHODS FOR ABSTRACT REASONING
In Stage-1, we extract representations of eight context panels (the left part of Figure 1) and six answer candidates (the right part of Figure 1). Then in Stage-2, downstream models perform abstract reasoning from the (frozen) representations. Abstract reasoning models evaluate whether filling the blank panel with a candidate follows the logical rules given by the contexts. For a trial $T_i$ of one candidate $a_i \in A = \{a_1, \ldots, a_6\}$ and eight context panels $C = \{c_1, \ldots, c_8\}$, its score is calculated as follows:

$$Y_i = \mathrm{Stage2}(\mathrm{Stage1}(T_i)), \qquad \mathrm{Stage1}(T_i) = \{\mathrm{Stage1}(c_1), \ldots, \mathrm{Stage1}(c_8)\} \cup \{\mathrm{Stage1}(a_i)\}, \tag{1}$$
where $Y_i$ is the score of trial $T_i$, $\mathrm{Stage1}(\cdot)$ and $\mathrm{Stage2}(\cdot)$ denote the forward processes of the Stage-1 and Stage-2 models, and $\mathrm{Stage1}(T_i)$ is the set of representations of the contexts and the candidate $a_i$. After evaluating all trials $\{T_1, T_2, \ldots, T_6\}$, the output answer is $\arg\max_i Y_i$. We implement two different well-defined structures of Stage-2 models, namely, WReN (Barrett et al., 2018) and Transformer (Vaswani et al., 2017; Hahne et al., 2019). First, they employ an MLP or a Transformer to embed an RPM trial. Then an MLP head predicts a scalar score from the embeddings.
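A minimal sketch of this answer-selection procedure is given below; `stage1` and `stage2` stand in for trained Stage-1 and Stage-2 models and are assumptions rather than our exact interfaces.

import torch

def answer_rpm(stage1, stage2, contexts, candidates):
    # contexts: 8 context panel images; candidates: 6 answer panel images
    context_reps = [stage1(c) for c in contexts]
    scores = []
    for a in candidates:
        trial = torch.stack(context_reps + [stage1(a)])  # Stage1(T_i): 9 representations
        scores.append(stage2(trial))                     # scalar score Y_i
    return int(torch.argmax(torch.stack(scores)))        # argmax_i Y_i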
4 EXPERIMENTS
In this Section, we conduct a systematic empirical study about representation properties’ impacts on downstream performance. First, we introduce our experimental conditions in Section 4.1. Then we provide empirical evidence to challenge the necessity of disentanglement (Section 4.2) and to tell which property matters (Section 4.3).
4.1 EXPERIMENTS SETUP
We build upon the experiment conditions of van Steenkiste et al. (2019). Abstract visual reasoning tasks, i.e., RPMs, are solved through a two-stage process: data $\xrightarrow{\text{Stage-1}}$ representations $\xrightarrow{\text{Stage-2}}$ RPM answers. We first train Stage-1 models in an unsupervised manner and evaluate their disentanglement and informativeness. Then Stage-2 models are trained and evaluated on downstream tasks, yielding an abstract reasoning accuracy of a representation. Provided with a large amount of (representation property score, downstream performance) pairs, we conduct a systematic study to investigate the necessity of disentanglement. More implementation details are available in Appendix A.
Datasets. We replicate the RPM generation protocol in van Steenkiste et al. (2019). The panel images consist of disentanglement benchmark image datasets, namely, Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019) and 3DShapes (Burgess & Kim, 2018). The rows of RPMs are arranged following the logical AND of ground truth factors. As for hardness, we only retain hard-mixed, whose contexts and candidates are more confusing. According to the generation process, the number of possible RPMs is sufficiently large (about $10^{144}$), allowing us to produce fresh samples throughout training.
Reference models. Stage-1 models extract representations from RPM’s panels. To ensure the generalizability of the results, we include 360 disentangled VAEs (denoted as DisVAEs) and 360 BYOLs. Our choices of Stage-1 models cover both disentangled and general-purpose representation learning methods. Moreover, we are interested in the overall relationship between representation properties and downstream performance. Therefore we need to study the correlation between two distributions, i.e., representation metric scores and downstream performance. For this, we include various samples for both Stage-1 and Stage-2 to ensure they are representative enough. For Stage-1, a diverse set of configurations are included for each type of representation learning model. According to the histograms in Appendix C.4, our choices span various disentanglement and informativeness scores. For Stage-2, to better estimate the downstream performance distribution, we use multiple Stage-2 configurations for each representation instead of searching for the best one. Specifically, we train 10 Stage-2 models (5 WReNs and 5 Transformers) for every Stage-1 model. Stage-2 configurations are randomly sampled from a search space described in Appendix A.3 and shared across Stage-1 models. By this, we ensure fair comparisons across representations.
Training protocol. Training is conducted two-staged. Firstly, we train Stage-1 models in an unsupervised manner on the dataset consisting of RPMs’ panels, i.e., Abstract dSprites or 3DShapes. For DisVAE models, we use the training protocol of van Steenkiste et al. (2019), while for BYOL models, we follow Cao et al. (2022). In Stage-2, all models are trained for 10K iterations with a batch size of 32. After every 100 iterations, we evaluate the accuracy on newly generated 50 mini-batches of unseen RPM samples for validation and another 50 mini-batches for testing.
Evaluation protocol. We first evaluate the two stages separately. Then we analyze the relationship between the two stages, i.e., representation properties and downstream performance. Specifically, to challenge the necessity of disentanglement, we are interested in whether more disentangled representations lead to better downstream performance. Further, if it turns out that disentanglement is of limited importance, can we find another metric that better accounts for downstream performance? Therefore, for Stage-1, we employ representation metrics described in Section 3.2 to measure two aspects: disentanglement and informativeness. For all Stage-1 models, we compute the following metric scores: BetaVAE score, FactorVAE score, MIG, SAP, and LR accuracy. DCI Disentanglement is only evaluated for DisVAEs since it takes hours to develop the Gradient Boosting Trees required during the evaluation process on high-dimensional representations of BYOLs (Cao et al., 2022). For Stage-2, we inspect accuracy on newly generated test sets every 100 iterations, yielding accuracy for multiple training steps. Since every step sees fresh samples, we employ training curves to measure sample efficiency. We also report accuracy-#samples curves in Appendix C.2 .
To summarize the downstream performance of a Stage-1 model, over 5 WReNs or 5 Transformers in Stage-2, we report the mean accuracy denoted as WReN or Trans., and max accuracy denoted as WReN⋆ or Trans.⋆. Finally, we calculate the rank correlation (Spearman) between the mean performance of Stage-1 models (WReN and Trans.) at certain Stage-2 steps and their Stage-1 metric scores. A larger correlation indicates a higher significance of the representation property on downstream performance.
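Concretely, each correlation entry is computed as in the following sketch (SciPy); the arrays are random placeholders for the actual per-model scores.

import numpy as np
from scipy.stats import spearmanr

metric_scores = np.random.rand(720)  # stand-in: one metric score per Stage-1 model
mean_accuracy = np.random.rand(720)  # stand-in: WReN or Trans. at a chosen step
rho, pval = spearmanr(metric_scores, mean_accuracy)
print(rho, pval)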
4.2 ARE DISENTANGLED REPRESENTATIONS NECESSARY?
Hereafter we challenge the necessity of disentanglement. We begin by comparing the downstream performance of a disentangled representation with that of a deliberately designed, entangled representation. Then we discuss the necessity of the disentanglement inductive bias by evaluating the performance of disentanglement and general-purpose representation learning methods.
Effects of attenuating disentanglement. We first construct the most disentangled representations, i.e., the normalized true factor values. We normalize the true factor values to have zero means and unit standard deviations, yielding 6-d representations (note that Abstract dSprites and 3DShapes are both labeled with 6 ground truth factors). Then we rotate the constructed representations by multiplying randomly generated orthonormal matrices. Afterward, each dimension of the rotated feature captures a combination of factors, thus destroying disentanglement. Finally, we perform abstract reasoning training from true factors before and after rotations. We also conduct rotations on representations learned by DisVAEs.
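The construction can be summarized by the following NumPy sketch, where the factor matrix is a random stand-in for the true factor labels.

import numpy as np

factors = np.random.randint(0, 8, size=(10000, 6)).astype(float)  # stand-in factor values
reps = (factors - factors.mean(axis=0)) / factors.std(axis=0)     # zero mean, unit std

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((6, 6)))  # random orthonormal matrix
rotated_reps = reps @ q                           # entangled, but linearly equivalent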
We run 5 seeds defining the randomly generated rotation matrices and Stage-2 model configurations. We report results on 3DShapes with original/rotated true factors as representations and WReNs as Stage-2 models in Figure 2. As depicted in Figure 2a, there is little difference between performance before and after rotation throughout the training process. Yet Figure 2b shows significant drops in disentanglement metric scores. This surprising phenomenon suggests that even though we drastically entangle the representations, the downstream performance remains unchanged, arguing firmly against the necessity of disentanglement. However, we can see from Figure 2b that LR scores are 100% before and after rotation. It is easy to understand because the rotation we applied
is just an invertible linear transform, which a simple LR can recover, not to mention more capable Stage-2 models. Moreover, we observe similar results for learned representations (Figure 3). We select the most disentangled DisVAE measured by FactorVAE score among the 180 DisVAE models trained on 3DShapes (recall Section 4.1). As shown in Figure 3, rotation does not hurt the performance of representations learned by DisVAEs, backing up our claim that disentanglement representations might not be necessary to achieve good downstream performance. More results of rotation experiments on other datasets are reported in Appendix C.3.
Summary: Destroying disentanglement (by random rotation) in representations does not have a noticeable impact on downstream performance throughout training.
Advantages of disentanglement inductive bias. From previous results, we demonstrate that both high performance and high sample efficiency can be achieved even if we deliberately destroy disen-
tanglement. Further, we are interested in the inductive biases of Stage-1 models: Do disentangled representation learning models have absolute advantages on downstream performance over generalpurpose models? For this, we compare the downstream performance of different families of learning models described in Section 4.1, including BYOL, β-VAE, AnnealedVAE, β-TCVAE, FactorVAE, DIP-VAE-I, and DIP-VAE-II. Among them, BYOL does not explicitly encourage disentanglement. On the other hand, all DisVAEs are disentangled representation learning methods. From a large pool of 7200 checkpoints, we report the best performance for each model family.
Figure 4 shows overviews of training trajectories of Stage-1 models with the highest-performing WReN and Trans. on 3DShapes for multiple training steps. For WReN as Stage-2 models (Figure 4a), BYOL leads at the beginning, then DisVAEs catch up, and finally, BYOL converges at a higher accuracy. In contrast, when Stage-2 models are Transformers, BYOL’s curve grows faster, but DisVAEs and BYOL converge with comparable performance. In general, the two curves evolve in almost identical patterns with small gaps, indicating that disentanglement inductive bias is of limited utility in improving downstream sample efficiency. Corresponding analysis on Abstract dSprites is available in Appendix C.3, where we reach the same conclusions. As for final performance, we report maximal WReN, WReN⋆, Trans. and Trans.⋆ across different Stage-2 models and datasets in Table 1. We select checkpoints to evaluate based on validation accuracy. In particular, the best WReN and Trans. of BYOL are higher than those of DisVAEs. In addition, it appears that BYOL performs better than or on par with DisVAEs in terms of WReN⋆ and Trans.⋆. Especially, BYOL outperforms DisVAEs on Abstract dSprites with a considerable margin.
Summary: Models not intended for disentangled representation learning can reach superior or comparable downstream performance. Therefore disentanglement inductive bias does not necessarily lead to better sample efficiency or final accuracy.
4.3 WHICH PROPERTY MATTERS FOR DOWNSTREAM PERFORMANCE?
The results in Section 4.2 provide encouraging cases against the necessity of disentanglement. Additionally, we are interested in several further issues: (1) Which property matters most for downstream performance? (2) How can we interpret the previously claimed benefits of disentanglement (Bengio et al., 2013; Higgins et al., 2016; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020)? On account of these questions, we start by investigating how different representation properties influence downstream accuracy. We include informativeness and various disentanglement metrics.
Recall that we train 720 Stage-1 and 7200 Stage-2 models (see Section 4.1). By taking WReN and Trans. as measurements (average reasoning accuracy over 5 WReNs or 5 Transformers), we yield 720 representations paired with their downstream performance. Generally, our analysis is based on rank correlation (Spearman) between representation metric scores and downstream performance. If the correlation score is high, we can conclude that the representation property measured by the considered metric score is significant to downstream performance.
The representation property of the most significance. We calculate the rank correlation between downstream accuracy and disentanglement and informativeness scores. Meanwhile, we report rank correlation at steps 1K, 2K, 5K, and 10K, and the step with the highest validation accuracy. From correlations at different training steps, we can tell how a representation property affects sample efficiency.
Figure 5 displays rank correlations between representation metric scores and abstract reasoning test accuracy on 3DShapes. Firstly we can find that Logistic Regression accuracy (LR) correlates most with downstream performance. The strong correlation is exhibited for all considered models at multiple steps. Since LR requires sufficient information to be captured and easily extracted from representations, we can conclude that the informativeness matters most in broad conditions. In contrast, we observe that the importance of disentanglement varies among Stage-1 model families. Disentangled representation learning models (DisVAEs) exhibit strong positive correlations for several disentanglement metrics (but weaker than LR), such as FactorVAE score and DCI Disentanglement. However, their significance does not apply to BYOL, where the correlation of disentanglement is mild or even negative. In Figure 6 we plot the (WReN, metric score) pairs at step 10000. Indeed, for BYOL-WReN on 3DShapes, we can see the linear regression provides a good fit between downstream accuracy and informativeness metrics. As for disentanglement metrics, we can see that BetaVAE score and FactorVAE score suffer from narrow spreads. For MIG and SAP, the regression lines have negative slopes. We conduct a similar analysis on Abstract dSprites and make the same observations. Please refer to Appendix C.4 for more details.
Summary: The informativeness influences downstream performance most. The results are consistent across datasets and model structures.
Understanding for the previously claimed success of disentanglement. Previous works (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020) have reported empirical evidence backing up the advantages of disentangled representations. Consistently, we observe relatively strong correlations with disentanglement metrics, especially when Stage-1 models are DisVAEs in Figure 5. Based on our conclusion on the significance of the informativeness, we study the DisVAE-WReN case and provide some insights to explain why the disentanglement metrics have a high correlation to downstream performance in some cases.
We compute the overall correlations between metrics. The results are shown in Figure 7. For DisVAEs, we find that informativeness and disentanglement have high correlation scores. In particular, we can observe relatively strong correlations between LR and FactorVAE score and BetaVAE score. Accordingly, these disentanglement metrics exhibit relatively strong correlations with downstream performance in Figure 5a. In contrast, other disentanglement metrics correlate mildly with LR. And they are ineffective for downstream performance. Therefore, disentanglement metrics are not truly predictive of downstream performance, but LR is.
To “purify” the effect of disentanglement, a natural question is: If two representations are of close informativeness, is the more disentangled one more helpful for downstream tasks? For this, we employ the adjusted metrics in Locatello et al. (2019a):
$$\text{Adj. Metric} = \text{Metric} - \frac{1}{5}\sum_{i \in N(\mathrm{LR})} \text{Metric}_i, \tag{2}$$
For a representation and a certain metric (we care more about disentanglement metrics), we denote its original metric score as Metric. Then we find its 5 nearest neighbors in terms of LR, which we write as N(LR). Finally, the difference between the original metric score and the mean score of the nearest neighbors is reported as adjusted metrics. Intuitively, we calculate the relative disentanglement for representations with close LR.
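A direct NumPy sketch of Eq. (2) is shown below; `metric` and `lr` are per-model score vectors standing in for our recorded results.

import numpy as np

def adjusted_metric(metric, lr):
    # metric, lr: arrays of shape (num_models,)
    adj = np.empty_like(metric)
    for i in range(len(metric)):
        dist = np.abs(lr - lr[i])
        dist[i] = np.inf                  # exclude the model itself
        neighbors = np.argsort(dist)[:5]  # 5 nearest neighbors in LR accuracy
        adj[i] = metric[i] - metric[neighbors].mean()
    return adj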
Figure 7b displays correlations between adjusted metrics and downstream performance. We can find that all adjusted disentanglement metrics correlate mildly with downstream performance. From this, we can see that when informativeness is close, being disentangled contributes only a small portion to the downstream performance when the downstream training steps are limited (in our case, less than or equal to 2000 steps; see Figure 4 and Figure 7).
Summary: The informativeness is the most predictable metric for downstream performance. Disentanglement only brings small extra benefits at the very beginning of downstream training.
5 CONCLUSION
In this paper, we challenge the necessity of dimension-wise disentanglement for downstream tasks. We conduct a large-scale empirical study on the abstract visual reasoning task. We start by showing that high downstream performance can be achieved by less disentangled representations. In addition, we identify that the informativeness is of the most significance. Finally, we conclude that dimension-wise disentanglement is unnecessary for downstream tasks using deep neural networks with learned representations as input.
REPRODUCIBILITY STATEMENT
We provide information to reproduce our results in Appendix A. We commit to making our codes publicly available.
A REPRODUCIBILITY
In this Section, we provide implementation details to ensure reproducibility. In addition, we commit to making our codes, configurations, and running logs publicly available. All experiments are run on a machine with 2 Intel Xeon Gold 5218R 20-core processors and 4 Nvidia GeForce RTX 3090 GPUs.
A.1 REPRESENTATION LEARNING METHODS
We include both disentangled representation learning methods and general-purpose representation learning methods. i.e., DisVAEs and BYOL (Grill et al., 2020).
DisVAEs implementation. The DisVAEs include β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE-I and DIP-VAE-II (Kumar et al., 2017). We use the output of the encoder, the mean of qϕ(z|x), as representations. Hereafter, we introduce details for each method. The above methods encourage disentanglement by adding regularizers to ELBO. Adopting the notation in Tschannen et al. (2018), their objectives can be written in the following unified form:
$$\mathbb{E}_{p(x)}\big[\mathbb{E}_{q_\phi(z|x)}[-\log p_\theta(x|z)]\big] + \lambda_1\,\mathbb{E}_{p(x)}\big[R_1(q_\phi(z|x))\big] + \lambda_2\,R_2(q_\phi(z)), \tag{3}$$
where qϕ(z|x) is the posterior parameterized by the output of the encoder, pθ(x|z) is induced by the decoder output, R1, R2 are the regularizers applying to the posterior and the aggregate posterior, and λ1, λ2 are the coefficients controlling regularization. In the objective of β-VAE, β = λ1 > 1, λ2 = 0. Taking R1(qϕ(z|x)) := DKL[qϕ(z|x)||p(z)] forces the posterior to be close to the prior (usually a unit Gaussian), hence penalizing the capacity of the information bottleneck and encouraging disentanglement. FactorVAE and β-TCVAE take λ1 = 0, λ2 = 1. With R2(qϕ(z)) := TC(qϕ(z)), they penalize the Total Correlation (TC) (Watanabe, 1960). FactorVAE estimates TC by adversarial training, while β-TCVAE estimates TC by biased Monte Carlo sampling. Finally, DIP-VAE-I and DIP-VAE-II take λ1 = 0, λ2 ≥ 1 and R2(qϕ(z)) := ||Covqϕ(z) − I||2F , penalizing the distance between the aggregated posterior and the factorized prior.
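As an example instance of Eq. (3), a β-VAE loss (λ1 = β, R1 = KL to the prior, λ2 = 0) can be sketched in PyTorch as follows; the Gaussian-decoder reconstruction term is written up to a constant, and the tensor shapes are illustrative.

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")               # -log p(x|z) up to a constant
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL[q(z|x) || N(0, I)]
    return recon + beta * kl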
We use the code and configurations from the DisLib 1 (Locatello et al., 2019b). As for parameters, we use the same sweep as van Steenkiste et al. (2019): for each one of the 6 DisVAEs, we use 6 configurations. We train each model using 5 different random seeds. Since we consider 2 datasets (3DShapes and Abstract dSprites), finally, we yield 6 ∗ 6 ∗ 5 ∗ 2 = 360 DisVAE checkpoints.
BYOL implementation. BYOL (Grill et al., 2020) is a contrastive learning method. Figure 8 shows its pipeline. For each image x, we first create two “views” of it by data augmentation, i.e., x1 and x2. Then they are input to the siamese encoders: the online encoder and the target encoder. Specifically, x1 is fed to the online encoder, while x2 is fed to the target encoder, yielding the output
1https://github.com/google-research/disentanglement_lib.git
z1 and z2, respectively. As for architectures, both encoders share the same representation network and projection MLP. The prediction MLP is appended to the online encoder in order to avoid BYOL learning trivial representations. The objective of BYOL is
$$\mathcal{L} = -\,\frac{\langle z_1, z_2\rangle}{\|z_1\|_2\,\|z_2\|_2}. \tag{4}$$
This pulls the representations of the two “views” together. While training, the online encoder’s parameters are updated by gradient descent. However, the target encoder’s parameters are updated as an Exponential Moving Average (EMA) of the online parameters (He et al., 2020). After training, we only keep the online encoder and use the output of the representation network as representations.
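A sketch of the loss in Eq. (4) together with the EMA update is given below (PyTorch); the symmetrized variant and architectural details are omitted, and the function signatures are ours for illustration.

import torch
import torch.nn.functional as F

def byol_loss(z1, z2):
    # negative cosine similarity between online prediction z1 and target projection z2
    return -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()

@torch.no_grad()
def ema_update(online, target, tau=0.99):
    for p_online, p_target in zip(online.parameters(), target.parameters()):
        p_target.mul_(tau).add_(p_online, alpha=1 - tau)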
We use the PyTorch implementation of BYOL 2. We use the representation network architecture as shown in Table 2, where the representation dimension D is a parameter to be set. Except for normalization and output dimensions, the representation network architecture of BYOL and the encoder architecture of DisVAEs are similar. As for predictor and projector, we use the pipeline Linear→ BN → ReLU → Linear with 256 hidden neurons. We train the BYOLs for 105 epochs using the Adam optimizer with β1 = 0.9, β2 = 0.999, ϵ = 10−8, and learning rate (lr) as a variable parameter. For augmentation, we use the pipeline of Cao et al. (2022) (in PyTorch-style):
1. RandomApply(transforms.ColorJitter(xjit, xjit, xjit, 0.2), p=0.8) 2. RandomGrayScale(p=pgray) 3. RandomHorizontalFlip() 4. RandomApply(transforms.GaussianBlur((3,3), (1.0, 2.0)), p=0.2) 5. RandomResizeCrop(size=(64, 64), scale=(xcrop, 1.0))
The xjit, pgray, and xcrop are parameters to be set. xjit controls how much to jitter brightness, contrast, and saturation. pgray controls the probability to convert the image to grayscale. xcrop defines the lower bound for the random area of the crop.
We perform a parameter sweep on the cross product of intervals of parameters D, norm, lr, xjit, pgray, and xcrop. On 3DShapes, we use the following parameter grid (in scikit-learn style):
[ {'D': [32, 64, 128], 'lr': [3e-2, 3e-3], 'norm': [BatchNorm()], 'x_jit': [0.6, 0.8], 'p_gray': [0.5, 0.7, 0.9], 'x_crop': [1.0]},
  {'D': [256], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm(), GroupNorm(num_groups=4)], 'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [1.0]} ]
On Abstract dSprites, we use the following parameter grid:
2https://github.com/lucidrains/byol-pytorch.git
[ {'D': [32, 64, 128], 'lr': [3e-3, 3e-4], 'norm': [BatchNorm()], 'x_jit': [0.6, 0.8], 'p_gray': [0.0, 0.1, 0.2], 'x_crop': [0.6]},
  {'D': [256], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm(), GroupNorm(num_groups=4)], 'x_jit': [0.4, 0.8], 'p_gray': [0.0, 0.1, 0.2], 'x_crop': [0.6]} ]
For each parameter configuration, we run it with 3 random seeds. Finally, we trained 360 BYOLs in total.
A.2 ABSTRACT REASONING METHODS
We include two abstract reasoning network architectures: WReN (Barrett et al., 2018; van Steenkiste et al., 2019) and Transformer (Vaswani et al., 2017; Hahne et al., 2019).
WReN implementation. WReN consists of two parts: a graph MLP and an edge MLP. Here we use the same notations as in Section 3.3. For the representations of a trial Stage1(Ti), the edge MLP takes a pair of representations in Stage1(Ti) as input and embeds them into edge embeddings. Then all edge embeddings of Stage1(Ti) (in total $\binom{9}{2} = 36$) are added up and input to the graph MLP. Finally, the graph MLP outputs a scalar score, predicting the correctness of the trial Ti.
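The forward pass can be sketched as follows (PyTorch); `edge_mlp` and `graph_mlp` are placeholder modules with the hyperparameters described below, not our exact implementation.

import itertools
import torch

def wren_score(panel_reps, edge_mlp, graph_mlp):
    # panel_reps: (9, D) representations of 8 context panels + 1 candidate
    pairs = [torch.cat([panel_reps[i], panel_reps[j]])
             for i, j in itertools.combinations(range(9), 2)]  # C(9,2) = 36 pairs
    edges = edge_mlp(torch.stack(pairs))                       # (36, E) edge embeddings
    return graph_mlp(edges.sum(dim=0))                         # scalar trial score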
We use the code of van Steenkiste et al. (2019) to implement WReN, and we use the same parameter search spaces as them. All WReNs are trained in 10K steps with a batch size of 32. The learning rate for the Adam optimizer is sampled from the set {0.01, 0.001, 0.0001} while β1 = 0.9, β2 = 0.999, and ϵ = 10−8. For the edge MLP in the WReN model, we uniformly sample its hidden units in 256 or 512, and we uniformly choose its number of hidden layers in 2, 3, or 4. Similarly, for the graph MLP in the WReN model, we uniformly sample its hidden units in 128 or 512, and we uniformly choose its number of hidden layers in 1 or 2 before the final linear layer to predict the final score. We also uniformly sample whether we apply no dropout, dropout of 0.25, dropout of 0.5, or dropout of 0.75 to units before this last layer.
Transformer implementation. We simplify the architecture of Hahne et al. (2019). Here we treat Stage1(Ti) as a sequence. We first linearly project all representations and prepend a learnable [class] token. We add them with learnable positional embeddings. Then they are input into a stack of Transformer blocks (Vaswani et al., 2017). Finally, an MLP predicts a scalar score from the class embedding of the final Transformer block.
We implement the Transformer architecture ourselves with utilities of the DisLib code base. All Transformers are trained for the same number of steps and the same batch size as WReN, i.e., 10K steps with a batch size of 32. We use the Adam optimizer with weight decay and a cosine learning rate scheduler. The learning rate for the Adam optimizer is uniformly selected from {5e− 4, 6e− 4, 7e− 4}. The depth of Transformer blocks is uniformly set to be 2, 3, or 4. The dimensions of q, k, v of the self-attention module are uniformly 32 or 64. The MLP head uses the same architecture and parameter space as the graph MLP in WReN. For other fixed parameters, please refer to our codes for details.
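The simplified architecture can be sketched as follows (PyTorch); the dimensions and defaults are illustrative rather than our exact configuration.

import torch
import torch.nn as nn

class ReasoningTransformer(nn.Module):
    def __init__(self, rep_dim, dim=64, depth=3, heads=4):
        super().__init__()
        self.proj = nn.Linear(rep_dim, dim)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, 10, dim))  # [class] token + 9 panels
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, 1)

    def forward(self, panel_reps):                # (B, 9, rep_dim)
        x = self.proj(panel_reps)
        cls = self.cls.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos
        return self.head(self.encoder(x)[:, 0])  # scalar score per trial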
A.3 REPRESENTATION METRICS
In the main text, we employ disentanglement and informativeness metrics to measure the properties of representations. Here we provide more details.
Disentanglement metrics. We use the setup and implementation of Locatello et al. (2019b). Here we briefly introduce the details of our considered metrics, namely, BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). The BetaVAE score and the FactorVAE score predict the intervened factor from representations to measure disentanglement. The Mutual Information Gap and SAP compute, for each factor, the gap between the two representation dimensions with the highest response. The difference is that MIG measures mutual information while SAP measures classification accuracy. The DCI Disentanglement calculates the entropy of the relative importance of a latent dimension in predicting factors. We follow previous studies (Locatello et al., 2019b; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020) and train a Gradient Boosting Tree (GBT) as the predictor during the DCI Disentanglement evaluation,
though, according to Eastwood & Williams (2018), any classifier could be used. As reported by Cao et al. (2022), the GBT takes hours to train on the high-dimensional representations learned by BYOL. Thus we only report the DCI Disentanglement score for DisVAEs.
Informativeness metrics. We use LR to measure the informativeness of representations. We train a Logistic Regression model to predict factor values from representations. We use 10000 samples to train LR. Unlike van Steenkiste et al. (2019), we use “multinomial” instead of “one v.s. rest” as the multi-class classification scheme. As shown in Figure 9a, for the same set of representations, the “one v.s. rest” LR has inferior prediction accuracy. Moreover, ranking by the scores of these two LRs yields different results. In Figure 9b, we can observe different correlations for the “one v.s. rest” LR. To better estimate informativeness, we use “multinomial” LR as the measurement.
A.4 ABSTRACT VISUAL REASONING DATASETS
We use the two abstract visual reasoning datasets developed by van Steenkiste et al. (2019), i.e., Raven’s Progressive Matrices created from 3DShapes (Burgess & Kim, 2018) and Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019).
We sketch the rules here by taking the RPM in Figure 1 as an example. The reasoning attributes are the ground truth factors of 3DShapes, i.e., floor hue, wall hue, object hue, scale, shape, and orientation. Each row in the 3 × 3 matrix has 1, 2, or 3 ground truth factors taking a fixed value. And the 3 rows have the same fixed ground truth factors, though they might take different values. From the context panels, one should discover the underlying logical relationship. Finally, one is asked to fill the missing panel with one of the candidates. For the RPM in Figure 1, from the contexts, we can infer that the fixed factors are: wall hue, shape, and orientation. Then for the third row, from the first 2 panels, we know that the values for the shared factors are: the wall hue is blue, the shape is a cylinder, and the orientation is the azimuth that makes the wall corner appear in the right part of the image. So we choose the candidate with these factor values as the solution, as shown in Figure 10a. Figure 10b shows a sample of RPMs with answers on Abstract dSprites.
B ABLATIONS ON GENERAL-PURPOSE REPRESENTATION LEARNING METHODS
In the main text, we use BYOL as a representative of general-purpose representation learning methods. For completeness, here we introduce another general-purpose method, SimSiam (Chen & He, 2021). We modify the code of BYOL 3 to train SimSiams on 3DShapes with the following parameter grid:
[
  {'D': [512], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm()],
   'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [0.6, 1.0]}
]

3https://github.com/lucidrains/byol-pytorch.git
For each configuration, we run with 3 seeds, finally yielding 72 SimSiams. Then, as Stage-2 models, we use the same WReNs as for DisVAEs and BYOLs.
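As a sanity check of the count, the grid can be expanded with scikit-learn (a sketch with 'BatchNorm' as a string placeholder for the actual normalization layer):

from sklearn.model_selection import ParameterGrid

grid = [{'D': [512], 'lr': [3e-4, 3e-5], 'norm': ['BatchNorm'],
         'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [0.6, 1.0]}]
configs = list(ParameterGrid(grid))  # 1 * 2 * 1 * 2 * 3 * 2 = 24 configurations
assert len(configs) * 3 == 72        # 3 seeds per configuration -> 72 SimSiams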
The results of SimSiam-WReN agree with our conclusions in the main text. As for the best performance, we have WReN = 85.1% and WReN⋆ = 94.1%, which is better than DisVAEs'. Figure 11 shows the correlations between downstream performance and representation properties. LR still correlates most strongly at all considered steps.
C ADDITIONAL RESULTS
Figure 13: Accuracy v.s. #samples curves of the most disentangled DisVAEs before and after rotation. It is consistent with Figure 3.

Figure 14: Accuracy v.s. #samples curves of the Stage-1 models with the best WReN or Trans. (panels: (a) Stage-2 = WReN, (b) Stage-2 = Transformer; axes: accuracy v.s. sample size in #batches; curves: BYOL and DisVAEs). It is consistent with Figure 4.
C.1 ADDITIONAL RESULTS OF FINAL PERFORMANCE
In Table 1 we report the best final performance of DisVAEs and BYOLs. Here we provide more details on which type of DisVAE, and at which steps, achieves the reported performance in Table 1. We can observe that the best DisVAEs vary across datasets and Stage-2 models. As for the best steps, except for 3DShapes-WReN, BYOL achieves its best performance earlier than DisVAEs.
C.2 ACCURACY-#SAMPLES CURVES
We employ training curves (accuracy-step) in the main text to evaluate sample efficiency following van Steenkiste et al. (2019). For completeness, here we show accuracy-#samples curves.
We present the accuracy-#samples versions of Figure 3 and Figure 4, i.e., Figure 13 and Figure 14. We train the same models as in the main text until convergence with fixed training data sizes of 100, 1000, 5000, 7000, and 10000 batches. Then for each sample size, we plot the test performance at the
step with the highest validation accuracy. We can see the ranking of representations and evolving patterns of both types of curves agree well.
C.3 ADDITIONAL RESULTS OF RANDOM ROTATION EXPERIMENTS
This section contains additional results of the random rotation experiments. Here we report the downstream performance of deliberately entangled (by random rotation) representations.
Figure 12 shows the same experiments as Figure 2 on Abstract dSprites. We can observe that the two curves in Figure 12a are almost identical, and in Figure 12b that disentanglement metric scores drop drastically while LR remains the same. We note that LR is not 100%. This is because some factors of Abstract dSprites have many support values, e.g., the x and y positions both have 32 possible values. However, our conclusion in the main text still holds, as we observe that LR is invariant to random rotation. On Abstract dSprites, we also randomly rotate the most disentangled representations from DisVAEs (measured by FactorVAE score). In Figure 15, we can see that rotation has little impact on the training trajectories, so our conclusions hold across datasets.
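For concreteness, such rotations can be produced from the QR decomposition of a Gaussian matrix; the following minimal NumPy sketch is our own illustrative code (random_rotation is a hypothetical name), assuming codes is an (N, D) array of representations:

import numpy as np

def random_rotation(codes, seed=0):
    """Entangle representations by multiplying with a random orthonormal matrix."""
    rng = np.random.default_rng(seed)
    d = codes.shape[1]
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    q = q * np.sign(np.diag(r))  # fix signs so Q is uniformly (Haar) distributed
    return codes @ q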
C.4 ADDITIONAL RESULTS OF CORRELATIONS
In this part, we report additional results related to the correlations between representation metrics and downstream performance.
Absolute values of metric scores and downstream accuracy. We show the histograms as a sanity check of the distribution of metric scores and downstream accuracy. Figure 16 presents the score distributions of each metric. We report the mean metric scores with STDs to depict the overall properties for Stage-1 models in Table 4. Figure 17 and Figure 18 display the distributions of downstream performance.
Rank correlations. This part contains additional results of rank correlations. On 3DShapes, Figure 19 displays rank correlations between adjusted metrics and downstream accuracy, Figure 20 shows the overall correlation between metrics. On Abstract dSprites, Figure 21 shows correlations between metrics and downstream performance. Then Figure 22 presents correlations between adjusted metrics and downstream performance. Finally, Figure 23 displays the overall correlations between metrics.
Plots of (metric score, downstream accuracy) pairs. Figures 24, 25, 26, 27, 28, 29, 30, and 31 provide an in-depth view of the correlations, where we plot (metrics, downstream accuracy) pairs. | 1. What is the main contribution of the paper regarding downstream tasks and disentangled representation?
2. What are the strengths of the proposed approach, particularly in terms of experiment design?
3. What are the weaknesses of the paper, especially regarding the inclusion of certain statements and their implications?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors call into question the conventional thinking that downstream tasks (that take a representation as input) benefit from a disentangled representation. Their results show that the informativeness of the representation, not its disentangled nature, is what results in improved downstream performance.
Strengths And Weaknesses
As I was reading the paper I was confused by these two statements:
"However, on the abstract visual reasoning task, we find that rotating disentangled representations, i.e., multiplying the representations by an orthonormal matrix, has no impact on sample efficiency and final accuracy."
-and-
"Our finding demonstrates that disentanglement does not affect the downstream learning trajectory, which is against the commonly believed usefulness of disentanglement. On the other hand, it is not surprising since we apply an invertible linear transform. We can observe that Logistic Regression (LR) accuracy remains 100% before and after rotation, indicating that a simple linear layer could eliminate the effects of rotation."
It appeared the authors believed that the multiplication would destroy the disentangled representation, and used this to prove their thesis. But they later acknowledged that LR is capable of reversing the multiplication. I am at a loss as to why this is included in the paper, and why it appears so early in the text.
The authors have a good experiment design, using learners that provide a disentangled representation (various VAE-based approaches) and comparing against learned representations created by BYOL (which makes no claim to disentanglement). These representations are used to perform abstract visual reasoning tasks. They use the accuracy of a logistic regression on the learned representation (informativeness) as a metric to show the usefulness of the representation.
Clarity, Quality, Novelty And Reproducibility
The paper is well written. The experiment is very thorough. A researcher should be able to reproduce the results. |
ICLR | Title
On the Necessity of Disentangled Representations for Downstream Tasks
Abstract
A disentangled representation encodes generative factors of data in a separable and compact pattern. Thus it is widely believed that such a representation format benefits downstream tasks. In this paper, we challenge the necessity of disentangled representation in downstream applications. Specifically, we show that dimension-wise disentangled representations are not necessary for downstream tasks using neural networks that take learned representations as input. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Moreover, our study reveals that the informativeness of representations best accounts for downstream performance. The positive correlation between informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works.
1 INTRODUCTION
Disentanglement has been considered an essential property of representation learning (Bengio et al., 2013; Peters et al., 2017; Goodfellow et al., 2016; Bengio et al., 2007; Schmidhuber, 1992; Lake et al., 2017; Tschannen et al., 2018). Though there is no widely accepted formal definition yet, the fundamental intuition is that a disentangled representation should separately and distinctly capture information from generative data factors (Bengio et al., 2013). In practice, disentanglement is often implemented to emphasize a dimension-wise relationship, i.e., a representation dimension should capture information from exactly one factor and vice versa (Locatello et al., 2019b; Higgins et al., 2016; Kim & Mnih, 2018; Chen et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018; Kumar et al., 2017; Do & Tran, 2019). Disentangled representations offer human-interpretable factor dependencies. Therefore, in theory, they are robust to variations in the natural data and are expected to benefit downstream performances (Bengio et al., 2013).
Researchers are interested in empirically verifying these purported advantages. In particular, they focus on the following two-staged tasks: (1) extracting representations from data in an unsupervised manner, and (2) training downstream neural networks based on the learned representations (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020). Among various downstream tasks, except the ones that explicitly require disentanglement (Higgins et al., 2018b; Gabbay & Hoshen, 2021; Schölkopf et al., 2021), abstract visual reasoning is widely recognized as a popular testbed (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021). The premise behind it aligns with the goals of machine intelligence (Snow et al., 1984; Carpenter et al., 1990). Moreover, its mechanism ensures valid measurement of representations' downstream performance (Fleuret et al., 2011; Barrett et al., 2018).
In the abstract visual reasoning task, intelligent agents are asked to take human IQ tests, i.e., predict the missing panel of Raven’s Progressive Matrices (RPMs) (Raven, 1941). Indeed it is a challenging task for representation learning (Barrett et al., 2018; van Steenkiste et al., 2019). Disentanglement literature often takes this task as an encouraging example to show that disentanglement leads to quicker learning and better final performance (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021).
However, on the abstract visual reasoning task, we find that rotating disentangled representations, i.e., multiplying the representations by an orthonormal matrix, has no impact on sample efficiency and final accuracy. We construct the most disentangled representations, i.e., normalized true factors.
Then we solve the downstream tasks from them and their rotated variants. As shown in Figure 2a, there is little difference between the accuracy curves of original and rotated representations throughout the learning process. On one hand, this phenomenon is surprising since the rotation decreases dimension-wise disentanglement by destroying axis alignment (Locatello et al., 2019b). Indeed, in Figure 2b we can observe notable drops in disentanglement metric scores (first 5 columns). Our finding demonstrates that disentanglement does not affect the downstream learning trajectory, which is against the commonly believed usefulness of disentanglement. On the other hand, it is not surprising since we apply an invertible linear transform. We can observe that Logistic Regression (LR) accuracy remains 100% before and after rotation, indicating that a simple linear layer could eliminate the effects of rotation.
Given these facts, some questions arise: Are disentangled representations necessary for two-staged tasks? If not, which property matters? To address them, we conduct an extensive empirical study based on abstract reasoning tasks. Our contributions are as follows.
• We challenge the necessity of disentanglement for abstract reasoning tasks. We find that (1) entangling representations by random rotation has little impact, and (2) general-purpose representation learning methods could reach better or competitive performance than disentanglement methods.
• Following Eastwood & Williams (2018), we term what information the representation has learned its informativeness. We show that informativeness matters most for downstream performance. (1) Logistic regression (LR) accuracy on factor classification correlates most with downstream performance, compared with disentanglement metrics. (2) Conditioned on close LR accuracy, disentanglement still correlates only mildly. (3) Informativeness is behind the previously argued usefulness of disentanglement, since we observe a positive correlation between LR and disentanglement metrics.
• We conduct a large-scale empirical study supporting our claim. We train 720 representation learning models covering two datasets, including both disentanglement and general-purpose methods. Then we train 5 WReNs (Barrett et al., 2018) and 5 Transformers (Vaswani et al., 2017; Hahne et al., 2019) using the outputs of each representation learning model to perform abstract reasoning, yielding a total of 7200 abstract reasoning models.
2 RELATED WORK
Disentangled representation learning. There is no agreed-upon formal definition of disentanglement. Therefore, in practice, disentanglement is often interpreted as a one-to-one mapping between representation dimensions and generative factors of data, which we term “dimension-wise disentanglement”. It requires that a representation dimension encode only one factor and vice versa (Locatello et al., 2019b; Eastwood & Williams, 2018; Kumar et al., 2017; Do & Tran, 2019). Besides dimension-wise disentanglement, Higgins et al. (2018a) propose a definition from the group theory perspective. However, its requirement of interaction with the environment prevents applicable learning methods for existing disentanglement benchmarks (Caselles-Dupré et al., 2019).
Adopting the dimension-wise definition, researchers develop methods and metrics. SOTA disentanglement methods are mainly variants of generative methods (Higgins et al., 2016; Kim & Mnih, 2018; Burgess et al., 2018; Kumar et al., 2017; Chen et al., 2018; 2016; Jeon et al., 2018; Lin et al., 2020). Corresponding metrics are designed in the following ways (Zaidi et al., 2020): intervening factors (Higgins et al., 2016; Kim & Mnih, 2018), estimating mutual information (Chen et al., 2018), and developing classifiers (Eastwood & Williams, 2018; Kumar et al., 2017). Another line of work related to disentangled representation learning is Independent Component Analysis (ICA) (Comon, 1994). ICA aims to recover independent components of the data, using the mean correlation coefficient (MCC) as the metric. However, ICA models require access to auxiliary variables (Hyvarinen et al., 2019), leading to inevitable supervision for training on image datasets (Hyvarinen & Morioka, 2016; Khemakhem et al., 2020a;b; Klindt et al., 2020). In this paper, we focus on the downstream performance of unsupervised representation learning.
Downstream tasks. It is widely believed that disentangled representations benefit downstream tasks. Intuitively, they offer a human-understandable structure with ready access to salient factors, and hence should enjoy robust generalization capacity (Bengio et al., 2013; Do & Tran, 2019). Several works conduct empirical studies on downstream tasks to support the notions above, including abstract reasoning (van Steenkiste et al., 2019), fairness (Locatello et al., 2019a), and sim2real transfer (Dittadi et al., 2020). Among these works, van Steenkiste et al. (2019) provide the most encouraging evidence from abstract reasoning tasks. We adopt their settings and investigate the same tasks. However, their results are questionable. Firstly, their evaluation underestimates the factors' linear classification accuracy, yielding a weaker correlation between informativeness and downstream performance (see Figure 9 in Appendix A.3). Moreover, only variants of VAEs are considered. We address these issues and reach opposite conclusions.
Abstract visual reasoning has been a popular benchmark to measure the representation's downstream performance, especially in disentanglement literature (Steenbrugge et al., 2018; van Steenkiste et al., 2019; Dittadi et al., 2020; Locatello et al., 2020; Schölkopf et al., 2021). The most common type is the Raven's Progressive Matrices (RPMs) (Raven, 1941), which highly emphasize abstract and relational reasoning capacities and effectively represent human intelligence (Snow et al., 1984; Carpenter et al., 1990). To solve RPMs, one is asked to complete the missing panel of a 3 × 3 grid by exploring the logical relationships of 8 context panels. Moreover, abstract visual reasoning is a well-developed benchmark for representation learning. Given that it is coupled with a principled treatment of generalization (Fleuret et al., 2011), a neural network cannot solve reasoning tasks by simply memorizing superficial statistical features. Besides, it can avoid pitfalls where test-specific heuristics learned by downstream models obscure the original properties of representations (Barrett et al., 2018). To summarize, (1) the goal of abstract visual reasoning highlights our requirements for representation learning, and (2) its mechanism ensures valid measurements. For these reasons, we focus on the necessity of disentanglement for the abstract reasoning task.
3 DOWNSTREAM BENCHMARK: ABSTRACT VISUAL REASONING
This section contains background on the downstream benchmark framework. We first introduce the definition of the abstract visual reasoning task. Then we present the framework’s ingredients: representation learning methods, metrics, and abstract reasoning models.
3.1 ABSTRACT VISUAL REASONING AS A TWO-STAGED TASK
The abstract visual reasoning tasks are highly inspired by the famous human IQ test, Raven's Progressive Matrices (RPMs) (Raven, 1941). Figure 1 shows an RPM question in our evaluation dataset. There are eight context panels and one missing panel in the left part of the figure. The context panels are arranged following some logical rules across rows. During the test, the subject must pick one of the six candidates listed in the right part to fill the missing panel. The goal is to maintain the logical relationships given by the contexts. More details of RPMs are available in Appendix A.4.
We adopt RPMs as a downstream benchmark following van Steenkiste et al. (2019). To measure the necessity of disentanglement for downstream tasks, we separate the evaluation process into two stages: (1) in Stage-1, representation learning models extract representations from the images of which RPMs consist, and (2) in Stage-2, abstract reasoning models predict the missing panels from the frozen representations of contexts and answer candidates. Correspondingly, we denote representation learning models as Stage-1 models and abstract reasoning models as Stage-2 models. For Stage-1, we measure the disentanglement properties of the representations. A diverse set of Stage-1 and Stage-2 models are trained, yielding multiple samples from the joint distribution of representation metric scores and downstream accuracy. Finally, we study the relationships between representation qualities and downstream performance. We aim to investigate whether more disentangled representations perform better on abstract reasoning tasks.
The two-staged framework leverages large-scale experiments to reveal connections between the disentanglement of representations and their downstream performance. It provides a precise measurement of the importance of disentanglement. Therefore the two-staged framework is widely-accepted (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020).
3.2 BACKGROUND OF REPRESENTATION LEARNING
Disentangled representation learning methods. The seminal works of Higgins et al. (2016) and Chen et al. (2016) embody disentanglement by augmenting deep generative models (Kingma & Welling, 2013; Goodfellow et al., 2014). For disentangled representation learning methods, we focus on variants of VAE. Namely, β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE (Kumar et al., 2017). They achieve disentanglement mainly by encouraging independence between representation dimensions. Please refer to Appendix A.2 for details.
General-purpose representation learning methods. In our study, methods not (explicitly) encouraging disentanglement are called general-purpose methods. We take BYOL (Grill et al., 2020) as a representative. BYOL is a negative-free contrastive learning method. It creates different “views” of an image by data augmentation and pulls their representations close together in representation space. To avoid collapsing to trivial representations, a predictor appended to one of the siamese encoders and an exponential moving average update strategy (He et al., 2020) are employed. It does not encourage disentanglement due to the lack of regularizers. Indeed, the empirical evidence in Cao et al. (2022) demonstrates that representations learned by BYOL have weak disentanglement.
Representation property metrics. The considered properties of representations cover two axes of metrics: disentanglement metrics and informativeness metrics (Eastwood & Williams, 2018). We include the BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). Locatello et al. (2019b) prove their agreement on VAE methods with extensive experiments. Though their measurements are different, their results are positively correlated. On the other hand, informativeness requires representations to encode enough information about factors. In this work, we employ Logistic Regression (LR). It is a favorable metric adopted by unsupervised pretraining literature (He et al., 2020; Grill et al., 2020; Caron et al., 2021). Given the weak capacity of linear models, a higher LR accuracy ensures that sufficient information is explicitly encoded. However, it does not emphasize a dimension-wise encoding pattern like disentanglement. To distinguish, we term the property indicated by LR as informativeness.
3.3 BACKGROUND OF METHODS FOR ABSTRACT REASONING
In Stage-1, we extract representations of eight context panels (the left part of Figure 1) and six answer candidates (the right part of Figure 1). Then in Stage-2, downstream models perform abstract reasoning from the (frozen) representations. Abstract reasoning models evaluate whether filling the blank panel with a candidate follows the logical rules given by the contexts. For a trial $T_i$ of one candidate $a_i \in A = \{a_1, \ldots, a_6\}$ and eight context panels $C = \{c_1, \ldots, c_8\}$, its score is calculated as follows:
$$Y_i = \mathrm{Stage2}(\mathrm{Stage1}(T_i)), \quad \mathrm{Stage1}(T_i) = \{\mathrm{Stage1}(c_1), \ldots, \mathrm{Stage1}(c_8)\} \cup \{\mathrm{Stage1}(a_i)\}, \qquad (1)$$
where $Y_i$ is the score of trial $T_i$, $\mathrm{Stage1}(\cdot)$ and $\mathrm{Stage2}(\cdot)$ denote the forward processes of the Stage-1 and Stage-2 models, and $\mathrm{Stage1}(T_i)$ is the set of representations of the contexts and candidate $a_i$. After evaluating all trials $\{T_1, T_2, \ldots, T_6\}$, the output answer is $\arg\max_i Y_i$. We implement two different well-defined structures of Stage-2 models, namely, WReN (Barrett et al., 2018) and Transformer (Vaswani et al., 2017; Hahne et al., 2019). First, they employ an MLP or a Transformer to embed an RPM trial. Then an MLP head predicts a scalar score from the embeddings.
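A minimal sketch of this scoring loop (our own illustrative code; the actual input format of the Stage-2 models differs, so answer_rpm and the list-based interface are assumptions):

import torch

def answer_rpm(stage1, stage2, contexts, candidates):
    """Score all 6 trials of Eq. (1) and return the index of the argmax candidate."""
    ctx = [stage1(c) for c in contexts]  # representations of the 8 context panels
    scores = torch.stack([stage2(ctx + [stage1(a)]) for a in candidates])
    return int(scores.argmax())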
4 EXPERIMENTS
In this Section, we conduct a systematic empirical study about representation properties’ impacts on downstream performance. First, we introduce our experimental conditions in Section 4.1. Then we provide empirical evidence to challenge the necessity of disentanglement (Section 4.2) and to tell which property matters (Section 4.3).
4.1 EXPERIMENTS SETUP
We build upon the experiment conditions of van Steenkiste et al. (2019). Abstract visual reasoning tasks, i.e., RPMs, are solved through a two-stage process: data $\xrightarrow{\text{Stage-1}}$ representations $\xrightarrow{\text{Stage-2}}$
RPM answers. We first train Stage-1 models in an unsupervised manner and evaluate their disentanglement and informativeness. Then Stage-2 models are trained and evaluated on downstream tasks, yielding an abstract reasoning accuracy of a representation. Provided with a large amount of (representation property score, downstream performance) pairs, we conduct a systematic study to investigate the necessity of disentanglement. More implementation details are available in Appendix A.
Datasets. We replicate the RPM generation protocol in van Steenkiste et al. (2019). The panel images come from disentanglement benchmark image datasets, namely, Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019) and 3DShapes (Burgess & Kim, 2018). The rows of RPMs are arranged following the logical AND of ground truth factors. As for hardness, we only retain the hard-mixed setting, whose contexts and candidates are more confusing. According to the generation process, the number of possible RPMs is sufficiently large (about $10^{144}$), allowing us to produce fresh samples throughout training.
Reference models. Stage-1 models extract representations from RPM’s panels. To ensure the generalizability of the results, we include 360 disentangled VAEs (denoted as DisVAEs) and 360 BYOLs. Our choices of Stage-1 models cover both disentangled and general-purpose representation learning methods. Moreover, we are interested in the overall relationship between representation properties and downstream performance. Therefore we need to study the correlation between two distributions, i.e., representation metric scores and downstream performance. For this, we include various samples for both Stage-1 and Stage-2 to ensure they are representative enough. For Stage-1, a diverse set of configurations are included for each type of representation learning model. According to the histograms in Appendix C.4, our choices span various disentanglement and informativeness scores. For Stage-2, to better estimate the downstream performance distribution, we use multiple Stage-2 configurations for each representation instead of searching for the best one. Specifically, we train 10 Stage-2 models (5 WReNs and 5 Transformers) for every Stage-1 model. Stage-2 configurations are randomly sampled from a search space described in Appendix A.3 and shared across Stage-1 models. By this, we ensure fair comparisons across representations.
Training protocol. Training is conducted two-staged. Firstly, we train Stage-1 models in an unsupervised manner on the dataset consisting of RPMs’ panels, i.e., Abstract dSprites or 3DShapes. For DisVAE models, we use the training protocol of van Steenkiste et al. (2019), while for BYOL models, we follow Cao et al. (2022). In Stage-2, all models are trained for 10K iterations with a batch size of 32. After every 100 iterations, we evaluate the accuracy on newly generated 50 mini-batches of unseen RPM samples for validation and another 50 mini-batches for testing.
Evaluation protocol. We first evaluate the two stages separately. Then we analyze the relationship between the two stages, i.e., representation properties and downstream performance. Specifically, to challenge the necessity of disentanglement, we are interested in whether more disentangled representations lead to better downstream performance. Further, if it turns out that disentanglement is of limited importance, can we find another metric that better accounts for downstream performance? Therefore, for Stage-1, we employ representation metrics described in Section 3.2 to measure two aspects: disentanglement and informativeness. For all Stage-1 models, we compute the following metric scores: BetaVAE score, FactorVAE score, MIG, SAP, and LR accuracy. DCI Disentanglement is only evaluated for DisVAEs since it takes hours to develop the Gradient Boosting Trees required during the evaluation process on high-dimensional representations of BYOLs (Cao et al., 2022). For Stage-2, we inspect accuracy on newly generated test sets every 100 iterations, yielding accuracy for multiple training steps. Since every step sees fresh samples, we employ training curves to measure sample efficiency. We also report accuracy-#samples curves in Appendix C.2 .
To summarize the downstream performance of a Stage-1 model over the 5 WReNs or 5 Transformers in Stage-2, we report the mean accuracy, denoted WReN or Trans., and the max accuracy, denoted WReN⋆ or Trans.⋆. Finally, we calculate the rank correlation (Spearman) between the mean performance of Stage-1 models (WReN and Trans.) at certain Stage-2 steps and their Stage-1 metric scores. A larger correlation indicates a higher significance of the representation property for downstream performance.
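Computationally, this is a one-liner with SciPy (a sketch; metric_predictiveness is our own name, and both inputs are sequences with one entry per Stage-1 model):

from scipy.stats import spearmanr

def metric_predictiveness(metric_scores, mean_accuracy):
    """Spearman rho between a representation metric and mean downstream accuracy."""
    rho, _ = spearmanr(metric_scores, mean_accuracy)
    return rho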
4.2 ARE DISENTANGLED REPRESENTATIONS NECESSARY?
Hereafter we challenge the necessity of disentanglement. We begin by comparing a disentangled representation v.s. a deliberately designed, entangled representation on the downstream performance. Then we discuss the necessity of disentanglement inductive bias by evaluating the performance of disentanglement and general-purpose representation learning methods.
Effects of attenuating disentanglement. We first construct the most disentangled representations, i.e., the normalized true factor values. We normalize the true factor values to have zero means and unit standard deviations, yielding 6-d representations (note that Abstract dSprites and 3DShapes are both labeled with 6 ground truth factors). Then we rotate the constructed representations by multiplying randomly generated orthonormal matrices. Afterward, each dimension of the rotated feature captures a combination of factors, thus destroying disentanglement. Finally, we perform abstract reasoning training from true factors before and after rotations. We also conduct rotations on representations learned by DisVAEs.
We run 5 seeds defining the randomly generated rotation matrices and Stage-2 model configurations. We report results on 3DShapes with original/rotated true factors as representations and WReNs as Stage-2 models in Figure 2. As depicted in Figure 2a, there is little difference between performance before and after rotation throughout the training process. Yet Figure 2b shows significant drops in disentanglement metric scores. This surprising phenomenon suggests that even though we drastically entangle the representations, the downstream performance remains unchanged, firmly against the necessity of disentanglement. However, we can see from Figure 2b that LR scores are 100% before and after rotation. It is easy to understand because the rotation we applied
is just an invertible linear transform, which a simple LR can recover, not to mention more capable Stage-2 models. Moreover, we observe similar results for learned representations (Figure 3). We select the most disentangled DisVAE measured by FactorVAE score among the 180 DisVAE models trained on 3DShapes (recall Section 4.1). As shown in Figure 3, rotation does not hurt the performance of representations learned by DisVAEs, backing up our claim that disentangled representations might not be necessary to achieve good downstream performance. More results of rotation experiments on other datasets are reported in Appendix C.3.
Summary: Destroying disentanglement (by random rotation) in representations does not have a noticeable impact on downstream performance throughout training.
Advantages of disentanglement inductive bias. From previous results, we demonstrate that both high performance and high sample efficiency can be achieved even if we deliberately destroy disentanglement. Further, we are interested in the inductive biases of Stage-1 models: Do disentangled representation learning models have absolute advantages in downstream performance over general-purpose models? For this, we compare the downstream performance of different families of learning models described in Section 4.1, including BYOL, β-VAE, AnnealedVAE, β-TCVAE, FactorVAE, DIP-VAE-I, and DIP-VAE-II. Among them, BYOL does not explicitly encourage disentanglement. On the other hand, all DisVAEs are disentangled representation learning methods. From a large pool of 7200 checkpoints, we report the best performance for each model family.
Figure 4 shows overviews of training trajectories of Stage-1 models with the highest performing WReN and Trans. on 3DShapes for multiple training steps. For WReN as Stage-2 models (Figure 4a), BYOL leads at the beginning, then DisVAEs catch up, and finally, BYOL converges at a higher accuracy. In contrast, when Stage-2 models are Transformers, BYOL's curve grows faster, but DisVAEs and BYOL converge with comparable performance. In general, the two curves evolve in almost identical patterns with small gaps, indicating that disentanglement inductive bias is of limited utility in improving downstream sample efficiency. Corresponding analysis on Abstract dSprites is available in Appendix C.3, where we reach the same conclusions. As for final performance, we report maximal WReN, WReN⋆, Trans. and Trans.⋆ across different Stage-2 models and datasets in Table 1. We select checkpoints to evaluate based on validation accuracy. In particular, the best WReN and Trans. of BYOL are higher than those of DisVAEs. In addition, it appears that BYOL performs better than or on par with DisVAEs in terms of WReN⋆ and Trans.⋆. Especially, BYOL outperforms DisVAEs on Abstract dSprites with a considerable margin.
Summary: Models not intended for disentangled representation learning can reach superior or comparable downstream performance. Therefore disentanglement inductive bias does not necessarily lead to better sample efficiency or final accuracy.
4.3 WHICH PROPERTY MATTERS DOWNSTREAM PERFORMANCE?
The results in Section 4.2 provide encouraging cases against the necessity of disentanglement. Additionally, we are interested in several further issues: (1) Which property matters most for downstream performance? (2) How can we interpret the previously claimed benefits of disentanglement (Bengio et al., 2013; Higgins et al., 2016; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020)? On account of these questions, we start by investigating how different representation properties influence downstream accuracy. We include informativeness and various disentanglement metrics.
Recall that we train 720 Stage-1 and 7200 Stage-2 models (see Section 4.1). By taking WReN and Trans. as measurements (average reasoning accuracy over 5 WReNs or 5 Transformers), we yield 720 representations paired with their downstream performance. Generally, our analysis is based on rank correlation (Spearman) between representation metric scores and downstream performance. If the correlation score is high, we can conclude that the representation property measured by the considered metric score is significant to downstream performance.
The representation property of the most significance. We calculate the rank correlation between downstream accuracy and disentanglement and informativeness scores. Meanwhile, we report rank correlation at steps 1K, 2K, 5K, and 10K, and the step with the highest validation accuracy. From correlations at different training steps, we can tell how a representation property affects sample efficiency.
Figure 5 displays rank correlations between representation metric scores and abstract reasoning test accuracy on 3DShapes. Firstly, we find that Logistic Regression accuracy (LR) correlates most with downstream performance. The strong correlation is exhibited for all considered models at multiple steps. Since LR requires sufficient information to be captured and extracted easily from representations, we can conclude that informativeness matters most in broad conditions. In contrast, we observe that the importance of disentanglement varies among Stage-1 model families. Disentangled representation learning models (DisVAEs) exhibit strong positive correlations for several disentanglement metrics (but weaker than LR), such as FactorVAE score and DCI Disentanglement. However, their significance does not apply to BYOL, where the correlation of disentanglement is mild or even negative. In Figure 6 we plot the (WReN, metric score) pairs at step 10000. Indeed, for BYOL-WReN on 3DShapes, we can see the linear regression provides a good fit of downstream accuracy and informativeness metrics. As for disentanglement metrics, we can see that the BetaVAE score and FactorVAE score suffer from narrow spreads. For MIG and SAP, the regression lines have negative slopes. We conduct a similar analysis on Abstract dSprites and make the same observations. Please refer to Appendix C.4 for more details.
Summary: The informativeness influences downstream performance most. The results are consistent across datasets and model structures.
Understanding for the previously claimed success of disentanglement. Previous works (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020) have reported empirical evidence backing up the advantages of disentangled representations. Consistently, we observe relatively strong correlations with disentanglement metrics, especially when Stage-1 models are DisVAEs in Figure 5. Based on our conclusion on the significance of the informativeness, we study the DisVAE-WReN case and provide some insights to explain why the disentanglement metrics have a high correlation to downstream performance in some cases.
We compute the overall correlations between metrics. The results are shown in Figure 7. For DisVAEs, we find that informativeness and disentanglement have high correlation scores. In particular, we can observe relatively strong correlations between LR and the FactorVAE score and BetaVAE score. Accordingly, these disentanglement metrics exhibit relatively strong correlations with downstream performance in Figure 5a. In contrast, other disentanglement metrics correlate only mildly with LR, and they are ineffective for downstream performance. Therefore, disentanglement metrics are not truly predictive of downstream performance, but LR is.
To “purify” the effect of disentanglement, a natural question is: If two representations are of close informativeness, is the more disentangled one more helpful for downstream tasks? For this, we employ the adjusted metrics of Locatello et al. (2019a):
$$\text{Adj. Metric} = \text{Metric} - \frac{1}{5}\sum_{i \in N(\text{LR})} \text{Metric}_i, \qquad (2)$$
For a representation and a certain metric (we care more about disentanglement metrics), we denote its original metric score as $\text{Metric}$. Then we find its 5 nearest neighbors in terms of LR, which we write as $N(\text{LR})$. Finally, the difference between the original metric score and the mean score of the nearest neighbors is reported as the adjusted metric. Intuitively, we calculate the relative disentanglement for representations with close LR.
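A minimal NumPy sketch of Eq. (2) (our own illustrative code; adjusted_metric is a hypothetical name):

import numpy as np

def adjusted_metric(metric, lr, n_neighbors=5):
    """Subtract the mean metric score of the n nearest neighbors in LR accuracy."""
    metric, lr = np.asarray(metric, dtype=float), np.asarray(lr, dtype=float)
    adj = np.empty_like(metric)
    for i in range(len(metric)):
        order = np.argsort(np.abs(lr - lr[i]))
        nn = order[order != i][:n_neighbors]  # exclude the representation itself
        adj[i] = metric[i] - metric[nn].mean()
    return adj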
Figure 7b displays correlations between adjusted metrics and downstream performance. We can find that all adjusted disentanglement metrics correlate only mildly with downstream performance. From this, we can see that when informativeness is close, being disentangled contributes only a small portion to the downstream performance when the downstream training steps are limited (in our case, at most 2000 steps; see Figure 4 and Figure 7).
Summary: Informativeness is the metric most predictive of downstream performance. Disentanglement brings only small extra benefits at the very beginning of downstream training.
5 CONCLUSION
In this paper, we challenge the necessity of dimension-wise disentanglement for downstream tasks. We conduct a large-scale empirical study on the abstract visual reasoning task. We start by showing that high downstream performance can be achieved by less disentangled representations. In addition, we identify that informativeness is of the most significance. Finally, we conclude that dimension-wise disentanglement is unnecessary for downstream tasks using deep neural networks with learned representations as input.
REPRODUCIBILITY STATEMENT
We provide information to reproduce our results in Appendix A. We commit to making our codes publicly available.
A REPRODUCIBILITY
In this Section, we provide implementation details to ensure reproducibility. In addition, we commit to making our codes, configurations, and running logs publicly available. All experiments are run on a machine with 2 Intel Xeon Gold 5218R 20-core processors and 4 Nvidia GeForce RTX 3090 GPUs.
A.1 REPRESENTATION LEARNING METHODS
We include both disentangled representation learning methods and general-purpose representation learning methods, i.e., DisVAEs and BYOL (Grill et al., 2020).
DisVAEs implementation. The DisVAEs include β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE-I and DIP-VAE-II (Kumar et al., 2017). We use the output of the encoder, the mean of $q_\phi(z|x)$, as representations. Hereafter, we introduce details for each method. The above methods encourage disentanglement by adding regularizers to the ELBO. Adopting the notation in Tschannen et al. (2018), their objectives can be written in the following unified form:
$$\mathbb{E}_{p(x)}\big[\mathbb{E}_{q_\phi(z|x)}[-\log p_\theta(x|z)]\big] + \lambda_1 \mathbb{E}_{p(x)}[R_1(q_\phi(z|x))] + \lambda_2 R_2(q_\phi(z)), \qquad (3)$$
where $q_\phi(z|x)$ is the posterior parameterized by the output of the encoder, $p_\theta(x|z)$ is induced by the decoder output, $R_1, R_2$ are the regularizers applied to the posterior and the aggregate posterior, and $\lambda_1, \lambda_2$ are the coefficients controlling regularization. In the objective of β-VAE, $\beta = \lambda_1 > 1$ and $\lambda_2 = 0$. Taking $R_1(q_\phi(z|x)) := D_{\mathrm{KL}}[q_\phi(z|x)\,\|\,p(z)]$ forces the posterior to be close to the prior (usually a unit Gaussian), hence penalizing the capacity of the information bottleneck and encouraging disentanglement. FactorVAE and β-TCVAE take $\lambda_1 = 0, \lambda_2 = 1$. With $R_2(q_\phi(z)) := \mathrm{TC}(q_\phi(z))$, they penalize the Total Correlation (TC) (Watanabe, 1960). FactorVAE estimates TC by adversarial training, while β-TCVAE estimates TC by biased Monte Carlo sampling. Finally, DIP-VAE-I and DIP-VAE-II take $\lambda_1 = 0, \lambda_2 \geq 1$ and $R_2(q_\phi(z)) := \|\mathrm{Cov}_{q_\phi(z)} - I\|_F^2$, penalizing the distance between the aggregated posterior and a factorized prior.
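For instance, the β-VAE case of Eq. (3) can be sketched in PyTorch as follows (our own illustrative code, assuming Bernoulli decoder outputs and a diagonal Gaussian posterior; beta_vae_loss is a hypothetical name):

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Eq. (3) with lambda1 = beta, lambda2 = 0 and R1 = KL(q(z|x) || N(0, I))."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl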
We use the code and configurations from the DisLib 1 (Locatello et al., 2019b). As for parameters, we use the same sweep as van Steenkiste et al. (2019): for each one of the 6 DisVAEs, we use 6 configurations. We train each model using 5 different random seeds. Since we consider 2 datasets (3DShapes and Abstract dSprites), finally, we yield 6 ∗ 6 ∗ 5 ∗ 2 = 360 DisVAE checkpoints.
BYOL implementation. BYOL (Grill et al., 2020) is a contrastive learning method. Figure 8 shows its pipeline. For each image x, we first create two “views” of it by data augmentation, i.e., x1 and x2. Then they are input to the siamese encoders: the online encoder and the target encoder. Specifically, x1 is fed to the online encoder, while x2 is fed to the target encoder, yielding the output
1https://github.com/google-research/disentanglement_lib.git
z1 and z2, respectively. As for architectures, both encoders share the same representation network and projection MLP. The prediction MLP is appended to the online encoder in order to avoid BYOL learning trivial representations. The objective of BYOL is
$$\mathcal{L} = -\frac{\langle z_1, z_2 \rangle}{\|z_1\|_2 \|z_2\|_2}. \qquad (4)$$
This objective pulls the representations of the two “views” close together. During training, the online encoder's parameters are updated by gradient descent, while the target encoder's parameters are updated as an Exponential Moving Average (EMA) of the online parameters (He et al., 2020). After training, we keep only the online encoder and use the output of its representation network as representations.
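A minimal PyTorch sketch of the loss and the EMA update (our own illustrative code; function names are hypothetical, and the loss equals Eq. (4) up to batch averaging):

import torch.nn.functional as F

def byol_loss(p_online, z_target):
    """Negative cosine similarity; the target branch receives no gradient."""
    return -F.cosine_similarity(p_online, z_target.detach(), dim=-1).mean()

def ema_update(target_net, online_net, tau=0.99):
    """Update the target encoder as an EMA of the online encoder."""
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.data.mul_(tau).add_(o.data, alpha=1.0 - tau)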
We use the PyTorch implementation of BYOL 2. We use the representation network architecture as shown in Table 2, where the representation dimension D is a parameter to be set. Except for normalization and output dimensions, the representation network architecture of BYOL and the encoder architecture of DisVAEs are similar. As for the predictor and projector, we use the pipeline Linear → BN → ReLU → Linear with 256 hidden neurons. We train the BYOLs for 105 epochs using the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and learning rate (lr) as a variable parameter. For augmentation, we use the pipeline of Cao et al. (2022) (in PyTorch-style):
1. RandomApply(transforms.ColorJitter(xjit, xjit, xjit, 0.2), p=0.8) 2. RandomGrayScale(p=pgray) 3. RandomHorizontalFlip() 4. RandomApply(transforms.GaussianBlur((3,3), (1.0, 2.0)), p=0.2) 5. RandomResizeCrop(size=(64, 64), scale=(xcrop, 1.0))
Here $x_{\text{jit}}$, $p_{\text{gray}}$, and $x_{\text{crop}}$ are parameters to be set. $x_{\text{jit}}$ controls how much to jitter brightness, contrast, and saturation; $p_{\text{gray}}$ controls the probability of converting the image to grayscale; $x_{\text{crop}}$ defines the lower bound for the random area of the crop.
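The listed pipeline corresponds to the following torchvision composition (a sketch using the actual torchvision transform names, e.g., RandomResizedCrop for the RandomResizeCrop above; byol_augmentation is a hypothetical name):

from torchvision import transforms as T

def byol_augmentation(x_jit, p_gray, x_crop):
    return T.Compose([
        T.RandomApply([T.ColorJitter(x_jit, x_jit, x_jit, 0.2)], p=0.8),
        T.RandomGrayscale(p=p_gray),
        T.RandomHorizontalFlip(),
        T.RandomApply([T.GaussianBlur((3, 3), (1.0, 2.0))], p=0.2),
        T.RandomResizedCrop(size=(64, 64), scale=(x_crop, 1.0)),
    ])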
We perform a parameter sweep on the cross product of intervals of the parameters D, norm, lr, $x_{\text{jit}}$, $p_{\text{gray}}$, and $x_{\text{crop}}$. On 3DShapes, we use the following parameter grid (in scikit-learn style):
[
  {'D': [32, 64, 128], 'lr': [3e-2, 3e-3], 'norm': [BatchNorm()],
   'x_jit': [0.6, 0.8], 'p_gray': [0.5, 0.7, 0.9], 'x_crop': [1.0]},
  {'D': [256], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm(), GroupNorm(num_groups=4)],
   'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [1.0]}
]
On Abstract dSprites, we use the following parameter grid:
[
  {'D': [32, 64, 128], 'lr': [3e-3, 3e-4], 'norm': [BatchNorm()],
   'x_jit': [0.6, 0.8], 'p_gray': [0.0, 0.1, 0.2], 'x_crop': [0.6]},
  {'D': [256], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm(), GroupNorm(num_groups=4)],
   'x_jit': [0.4, 0.8], 'p_gray': [0.0, 0.1, 0.2], 'x_crop': [0.6]}
]

2https://github.com/lucidrains/byol-pytorch.git
For each parameter configuration, we run it with 3 random seeds. Finally, we trained 360 BYOLs in total.
A.2 ABSTRACT REASONING METHODS
We include two abstract reasoning network architectures: WReN (Barrett et al., 2018; van Steenkiste et al., 2019) and Transformer (Vaswani et al., 2017; Hahne et al., 2019).
WReN implementation. WReN consists of two parts: a graph MLP and an edge MLP. Here we use the same notation as in Section 3.3. For the representations of a trial $\mathrm{Stage1}(T_i)$, the edge MLP takes a pair of representations in $\mathrm{Stage1}(T_i)$ as input and embeds them into edge embeddings. Then all edge embeddings of $\mathrm{Stage1}(T_i)$ ($\binom{9}{2} = 36$ in total) are added up and input to the graph MLP. Finally, the graph MLP outputs a scalar score, predicting the correctness of the trial $T_i$.
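A minimal sketch of this forward pass (our own illustrative code; wren_score is a hypothetical name, and panels holds the 9 representation vectors of a trial):

import itertools
import torch

def wren_score(edge_mlp, graph_mlp, panels):
    """Sum edge embeddings over all panel pairs, then score with the graph MLP."""
    pairs = [torch.cat([panels[i], panels[j]])
             for i, j in itertools.combinations(range(9), 2)]  # C(9, 2) = 36 pairs
    edge_sum = torch.stack([edge_mlp(p) for p in pairs]).sum(dim=0)
    return graph_mlp(edge_sum)  # scalar score for the trial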
We use the code of van Steenkiste et al. (2019) to implement WReN, with the same parameter search spaces. All WReNs are trained for 10K steps with a batch size of 32. The learning rate for the Adam optimizer is sampled from {0.01, 0.001, 0.0001}, while $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. For the edge MLP in the WReN model, we uniformly sample its number of hidden units from {256, 512} and its number of hidden layers from {2, 3, 4}. Similarly, for the graph MLP in the WReN model, we uniformly sample its number of hidden units from {128, 512} and its number of hidden layers from {1, 2} before the final linear layer that predicts the score. We also uniformly sample whether to apply no dropout, or dropout of 0.25, 0.5, or 0.75 to units before this last layer.
Transformer implementation. We simplify the architecture of Hahne et al. (2019). Here we treat $\mathrm{Stage1}(T_i)$ as a sequence. We first linearly project all representations and prepend a learnable [class] token. We then add learnable positional embeddings. The sequence is input into a stack of Transformer blocks (Vaswani et al., 2017). Finally, an MLP predicts a scalar score from the class embedding of the final Transformer block.
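A minimal PyTorch sketch of this scorer (our own illustrative code with assumed default sizes; the actual implementation follows the parameter space described below):

import torch
import torch.nn as nn

class ReasoningTransformer(nn.Module):
    """Project panel representations, prepend a [class] token, encode, and score."""
    def __init__(self, rep_dim, d_model=64, depth=3, heads=4):
        super().__init__()
        self.proj = nn.Linear(rep_dim, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, 10, d_model))  # [class] + 9 panels
        layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(d_model, 1)

    def forward(self, reps):  # reps: (B, 9, rep_dim)
        x = self.proj(reps)
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1) + self.pos
        return self.head(self.blocks(x)[:, 0])  # score from the class embedding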
We implement the Transformer architecture ourselves with utilities of the DisLib code base. All Transformers are trained for the same number of steps and the same batch size as WReN, i.e., 10K steps with a batch size of 32. We use the Adam optimizer with weight decay and a cosine learning rate scheduler. The learning rate for the Adam optimizer is uniformly selected from {5e-4, 6e-4, 7e-4}. The depth of Transformer blocks is uniformly set to 2, 3, or 4. The dimensions of q, k, v of the self-attention module are uniformly 32 or 64. The MLP head uses the same architecture and parameter space as the graph MLP in WReN. For other fixed parameters, please refer to our codes for details.
A.3 REPRESENTATION METRICS
In the main text, we employ disentanglement and informativeness metrics to measure the properties of representations. Here we provide more details.
Disentanglement metrics. We use the setup and implementation of Locatello et al. (2019b). Here we briefly introduce our considered metrics, namely, the BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). The BetaVAE score and the FactorVAE score predict the intervened factor from representations to measure disentanglement. The Mutual Information Gap and SAP compute, for each factor, the gap in response between the two highest representation dimensions; the difference is that MIG measures mutual information while SAP measures classification accuracy. The DCI Disentanglement calculates the entropy of the relative importance of a latent dimension in predicting factors. We follow previous studies (Locatello et al., 2019b; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020) and develop a Gradient Boosting Tree (GBT) for prediction during the DCI Disentanglement evaluation.
Though, according to Eastwood & Williams (2018), any classifier could be used, the GBT takes hours to train on the high-dimensional representations learned by BYOL, as reported by Cao et al. (2022). Thus we only report the DCI Disentanglement score for DisVAEs.
Informativeness metrics. We use LR to measure the informativeness of representations. We train a Logistic Regression model to predict factor values from representations, using 10000 samples for training. Unlike van Steenkiste et al. (2019), we use the “multinomial” scheme instead of “one v.s. rest” for multi-class classification. As shown in Figure 9a, for the same set of representations, “one v.s. rest” LR has inferior prediction accuracy. Moreover, ranking by the scores of these two LRs yields different results: in Figure 9b we can observe different correlations for the “one v.s. rest” LR. To better estimate informativeness, we use “multinomial” LR as the measurement.
A.4 ABSTRACT VISUAL REASONING DATASETS
We use the two abstract visual reasoning datasets developed by van Steenkiste et al. (2019), i.e., Raven's Progressive Matrices created from 3DShapes (Burgess & Kim, 2018) and Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019).
We sketch the rules here by taking the RPM in Figure 1 as an example. The reasoning attributes are the ground truth factors of 3DShapes, i.e., floor hue, wall hue, object hue, scale, shape, and orientation. Each row in the 3 × 3 matrix has 1, 2, or 3 ground truth factors taking a fixed value, and the 3 rows have the same fixed ground truth factors, though they might take different values. From the context panels, one should discover the underlying logical relationship. Finally, one is asked to fill the missing panel with one of the candidates. For the RPM in Figure 1, from the contexts, we can infer that the fixed factors are: wall hue, shape, and orientation. Then for the third row, from the first 2 panels, we know the values of the shared factors: the wall hue is blue, the shape is a cylinder, and the orientation is the azimuth that makes the wall corner appear in the right part of the image. So we choose the candidate with these factor values as the solution, as shown in Figure 10a. Figure 10b shows a sample of RPMs with answers on Abstract dSprites.
B ABLATIONS ON GENERAL-PURPOSE REPRESENTATION LEARNING METHODS
In the main text, we use BYOL as a representative of general-purpose representation learning methods. For completeness, here we introduce another general-purpose method, SimSiam (Chen & He, 2021). We modify the code of BYOL 3 to train SimSiams on 3DShapes with the following parameter grid:
[
  {'D': [512], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm()],
   'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [0.6, 1.0]}
]

3https://github.com/lucidrains/byol-pytorch.git
For each configuration, we run with 3 seeds, finally yielding 72 SimSiams. Then, as Stage-2 models, we use the same WReNs as for DisVAEs and BYOLs.
The results of SimSiam-WReN agree with our conclusions in the main text. As for the best performance, we have WReN = 85.1% and WReN⋆ = 94.1%, which is better than DisVAEs'. Figure 11 shows the correlations between downstream performance and representation properties. LR still correlates most strongly at all considered steps.
C ADDITIONAL RESULTS
Figure 13: Accuracy v.s. #samples curves of the most disentangled DisVAEs before and after rotation. It is consistent with Figure 3.

Figure 14: Accuracy v.s. #samples curves of the Stage-1 models with the best WReN or Trans. (panels: (a) Stage-2 = WReN, (b) Stage-2 = Transformer; axes: accuracy v.s. sample size in #batches; curves: BYOL and DisVAEs). It is consistent with Figure 4.
C.1 ADDITIONAL RESULTS OF FINAL PERFORMANCE
In Table 1 we report the best final performance of DisVAEs and BYOLs. Here we provide more details on which type of DisVAE, and at which steps, achieves the reported performance in Table 1. We can observe that the best DisVAEs vary across datasets and Stage-2 models. As for the best steps, except for 3DShapes-WReN, BYOL achieves its best performance earlier than DisVAEs.
C.2 ACCURACY-#SAMPLES CURVES
We employ training curves (accuracy-step) in the main text to evaluate sample efficiency following van Steenkiste et al. (2019). For completeness, here we show accuracy-#samples curves.
We present the accuracy-#samples versions of Figure 3 and Figure 4, i.e., Figure 13 and Figure 14. We train the same models as in the main text until convergence with fixed training data sizes of 100, 1000, 5000, 7000, and 10000 batches. Then for each sample size, we plot the test performance at the
step with the highest validation accuracy. We can see the ranking of representations and evolving patterns of both types of curves agree well.
C.3 ADDITIONAL RESULTS OF RANDOM ROTATION EXPERIMENTS
This section contains additional results of the random rotation experiments. Here we report the downstream performance of deliberately entangled (by random rotation) representations.
Figure 12 shows the same experiments as Figure 2 on Abstract dSprites. We can observe that the two curves in Figure 12a are almost identical, and in Figure 12b that disentanglement metric scores drop drastically while LR remains the same. We note that LR is not 100%. This is because some factors of Abstract dSprites have many support values, e.g., the x and y positions both have 32 possible values. However, our conclusion in the main text still holds, as we observe that LR is invariant to random rotation. On Abstract dSprites, we also randomly rotate the most disentangled representations from DisVAEs (measured by FactorVAE score). In Figure 15, we can see that rotation has little impact on the training trajectories, so our conclusions hold across datasets.
C.4 ADDITIONAL RESULTS OF CORRELATIONS
In this part, we report additional results related to the correlations between representation metrics and downstream performance.
Absolute values of metric scores and downstream accuracy. We show the histograms as a sanity check of the distribution of metric scores and downstream accuracy. Figure 16 presents the score distributions of each metric. We report the mean metric scores with STDs to depict the overall properties for Stage-1 models in Table 4. Figure 17 and Figure 18 display the distributions of downstream performance.
Rank correlations. This part contains additional results of rank correlations. On 3DShapes, Figure 19 displays rank correlations between adjusted metrics and downstream accuracy, Figure 20 shows the overall correlation between metrics. On Abstract dSprites, Figure 21 shows correlations between metrics and downstream performance. Then Figure 22 presents correlations between adjusted metrics and downstream performance. Finally, Figure 23 displays the overall correlations between metrics.
Plots of (metric score, downstream accuracy) pairs. Figures 24, 25, 26, 27, 28, 29, 30, and 31 provide an in-depth view of the correlations, where we plot (metrics, downstream accuracy) pairs.

1. What is the main contribution of the paper regarding dimension-wise disentangled representations?
2. What are the strengths of the proposed approach, particularly in terms of its ability to challenge the necessity of disentanglement for downstream tasks?
3. What are the weaknesses of the paper, especially regarding the exploration of tasks and domains?
4. Do you have any concerns about the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper studies dimension-wise disentangled representations for downstream applications. Through extensive experiments, the authors conclude that disentanglement is not a necessity for achieving good performance in downstream tasks, and that general-purpose representation learning methods can achieve better (or at least competitive) performance compared to disentanglement methods.
The authors show that Logistic Regression (LR) accuracy on factor classification correlates well with downstream task performance. The reason disentanglement appears useful for downstream tasks is presumably the positive correlation between LR and disentanglement metrics.
Strengths And Weaknesses
Strength:
--The authors challenge the necessity of dimension-wise disentanglement for downstream tasks via an extensive set of ablation studies. Through experiments, the authors examined 1) effects of attenuating disentanglement, 2) general-purpose vs. disentangled training, 3) how different disentanglement/informativeness metrics correlate with downstream tasks, and 4) the correlation between LR and some disentanglement metrics. Every claim is supported by experiments.
--The authors conduct multiple runs of experiments, covering different kinds of model architectures, disentanglement training methods, and metrics. The exploration is pretty thorough.
Weakness:
--The exploration should be expanded to other tasks and domains.
--There are a lot of general-purpose pre-training algorithms, but the authors mostly focus on BYOL.
--Though two-stage training is acceptable, what if the experiments are conducted in a joint-training setup where disentangling losses are used to regularize the supervised loss?
Clarity, Quality, Novelty And Reproducibility
I don’t have specific concerns on clarity, quality and reproducibility. |
ICLR

Title
On the Necessity of Disentangled Representations for Downstream Tasks
Abstract
A disentangled representation encodes generative factors of data in a separable and compact pattern. Thus it is widely believed that such a representation format benefits downstream tasks. In this paper, we challenge the necessity of disentangled representation in downstream applications. Specifically, we show that dimension-wise disentangled representations are not necessary for downstream tasks using neural networks that take learned representations as input. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Moreover, our study reveals that informativeness of representations best accounts for downstream performance. The positive correlation between the informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works.
1 INTRODUCTION
Disentanglement has been considered an essential property of representation learning (Bengio et al., 2013; Peters et al., 2017; Goodfellow et al., 2016; Bengio et al., 2007; Schmidhuber, 1992; Lake et al., 2017; Tschannen et al., 2018). Though there is no widely accepted formal definition yet, the fundamental intuition is that a disentangled representation should separately and distinctly capture information from generative data factors (Bengio et al., 2013). In practice, disentanglement is often implemented to emphasize a dimension-wise relationship, i.e., a representation dimension should capture information from exactly one factor and vice versa (Locatello et al., 2019b; Higgins et al., 2016; Kim & Mnih, 2018; Chen et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018; Kumar et al., 2017; Do & Tran, 2019). Disentangled representations offer human-interpretable factor dependencies. Therefore, in theory, they are robust to variations in the natural data and are expected to benefit downstream performances (Bengio et al., 2013).
Researchers are interested in empirically verifying these purported advantages. In particular, they focus on the following two-staged tasks: (1) extracting representations in an unsupervised manner from data, (2) then performing downstream neural network training based on the learned representations (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020). Among various downstream tasks, except the ones that explicitly require disentanglement (Higgins et al., 2018b; Gabbay & Hoshen, 2021; Schölkopf et al., 2021), abstract visual reasoning is widely recognized as a popular testbed (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021). The premise behind it aligns with the goals of machine intelligence (Snow et al., 1984; Carpenter et al., 1990). Moreover, its mechanism ensures valid measurement of representations' downstream performance (Fleuret et al., 2011; Barrett et al., 2018).
In the abstract visual reasoning task, intelligent agents are asked to take human IQ tests, i.e., predict the missing panel of Raven’s Progressive Matrices (RPMs) (Raven, 1941). Indeed it is a challenging task for representation learning (Barrett et al., 2018; van Steenkiste et al., 2019). Disentanglement literature often takes this task as an encouraging example to show that disentanglement leads to quicker learning and better final performance (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021).
However, on the abstract visual reasoning task, we find that rotating disentangled representations, i.e., multiplying the representations by an orthonormal matrix, has no impact on sample efficiency and final accuracy. We construct the most disentangled representations, i.e., normalized true factors.
Then we solve the downstream tasks from them and their rotated variants. As shown in Figure 2a, there is little difference between the accuracy curves of original and rotated representations throughout the learning process. On one hand, this phenomenon is surprising since the rotation decreases dimension-wise disentanglement by destroying axis alignment (Locatello et al., 2019b). Indeed, in Figure 2b we can observe notable drops in disentanglement metric scores (first 5 columns). Our finding demonstrates that disentanglement does not affect the downstream learning trajectory, which is against the commonly believed usefulness of disentanglement. On the other hand, it is not surprising since we apply an invertible linear transform. We can observe that Logistic Regression (LR) accuracy remains 100% before and after rotation, indicating that a simple linear layer could eliminate the effects of rotation.
Given these facts, some questions arise: Are disentangled representations necessary for two-staged tasks? If not, which property matters? To address them, we conduct an extensive empirical study based on abstract reasoning tasks. Our contributions are as follows.
• We challenge the necessity of disentanglement for abstract reasoning tasks. We find that (1) entangling representations by random rotation has little impact, and (2) general-purpose representation learning methods can reach performance better than or comparable to that of disentanglement methods.
• Following Eastwood & Williams (2018), we term what information the representation has learned as informativeness. We show that informativeness matters most for downstream performance. (1) Logistic Regression (LR) accuracy on factor classification correlates most with downstream performance, compared with disentanglement metrics. (2) Conditioning on close LR accuracy, disentanglement correlates only mildly. (3) The informativeness is behind the previously argued usefulness of disentanglement, since we observe a positive correlation between LR and disentanglement metrics.
• We conduct a large-scale empirical study supporting our claim. We train 720 representation learning models covering two datasets, including both disentanglement and general-purpose methods. Then we train 5 WReNs (Barrett et al., 2018) and 5 Transformers (Vaswani et al., 2017; Hahne et al., 2019) using the outputs of each representation learning model to perform abstract reasoning, yielding a total of 7200 abstract reasoning models.
2 RELATED WORK
Disentangled representation learning. There is no agreed-upon formal definition of disentanglement. Therefore, in practice, disentanglement is often interpreted as a one-to-one mapping between representation dimensions and generative factors of data, which we term “dimension-wise disentanglement”. It requires that a representation dimension encode only one factor and vice versa (Locatello et al., 2019b; Eastwood & Williams, 2018; Kumar et al., 2017; Do & Tran, 2019). Besides dimension-wise disentanglement, Higgins et al. (2018a) propose a definition from the group theory perspective. However, its requirement of interaction with the environment prevents applicable learning methods for existing disentanglement benchmarks (Caselles-Dupré et al., 2019).
Adopting the dimension-wise definition, researchers develop methods and metrics. SOTA disentanglement methods are mainly variants of generative methods (Higgins et al., 2016; Kim & Mnih, 2018; Burgess et al., 2018; Kumar et al., 2017; Chen et al., 2018; 2016; Jeon et al., 2018; Lin et al., 2020). Corresponding metrics are designed in the following ways (Zaidi et al., 2020): intervening on factors (Higgins et al., 2016; Kim & Mnih, 2018), estimating mutual information (Chen et al., 2018), and developing classifiers (Eastwood & Williams, 2018; Kumar et al., 2017). Another line of work related to disentangled representation learning is Independent Component Analysis (ICA) (Comon, 1994). ICA aims to recover independent components of the data, using the mean correlation coefficient (MCC) as the metric. However, ICA models require access to auxiliary variables (Hyvarinen et al., 2019), leading to inevitable supervision for training on image datasets (Hyvarinen & Morioka, 2016; Khemakhem et al., 2020a;b; Klindt et al., 2020). In this paper, we focus on the downstream performance of unsupervised representation learning.
Downstream tasks. It is widely believed that disentangled representations benefit downstream tasks. Intuitively, they offer a human-understandable structure with ready access to salient factors, hence should enjoy robust generalization capacity (Bengio et al., 2013; Do & Tran, 2019). Several works conduct empirical studies on downstream tasks to support the notions above, including abstract reasoning (van Steenkiste et al., 2019), fairness (Locatello et al., 2019a), and sim2real transfer (Dittadi et al., 2020). Among these works, van Steenkiste et al. (2019) provide the most encouraging evidence from abstract reasoning tasks. We adopt their settings and investigate the same tasks. However, their results are questionable. Firstly, their study underestimates factors' linear classification accuracy, yielding a weaker correlation between informativeness and downstream performance (see Figure 9 in Appendix A.3). Moreover, only variants of VAEs are considered. We address these issues and reach opposite conclusions.
Abstract visual reasoning has been a popular benchmark to measure a representation's downstream performance, especially in the disentanglement literature (Steenbrugge et al., 2018; van Steenkiste et al., 2019; Dittadi et al., 2020; Locatello et al., 2020; Schölkopf et al., 2021). The most common type is Raven's Progressive Matrices (RPMs) (Raven, 1941), which highly emphasize abstract and relational reasoning capacities and effectively represent human intelligence (Snow et al., 1984; Carpenter et al., 1990). To solve RPMs, one is asked to complete the missing panel of a 3×3 grid by exploring the logical relationships of 8 context panels. Moreover, abstract visual reasoning is a well-developed benchmark for representation learning. Given that it is coupled with a principled treatment of generalization (Fleuret et al., 2011), a neural network cannot solve reasoning tasks by simply memorizing superficial statistical features. Besides, it can avoid pitfalls where test-specific heuristics learned by downstream models obscure the original properties of representations (Barrett et al., 2018). To summarize, (1) the goal of abstract visual reasoning highlights our requirements for representation learning, and (2) its mechanism ensures valid measurements. For these reasons, we focus on the necessity of disentanglement for the abstract reasoning task.
3 DOWNSTREAM BENCHMARK: ABSTRACT VISUAL REASONING
This section contains background on the downstream benchmark framework. We first introduce the definition of the abstract visual reasoning task. Then we present the framework’s ingredients: representation learning methods, metrics, and abstract reasoning models.
3.1 ABSTRACT VISUAL REASONING AS A TWO-STAGED TASK
The abstract visual reasoning tasks are highly inspired by the famous human IQ test, Raven's Progressive Matrices (RPMs) (Raven, 1941). Figure 1 shows an RPM question in our evaluation dataset. There are eight context panels and one missing panel in the left part of the figure. The context panels are arranged following some logical rules across rows. During the test, the subject must pick one of the six candidates listed in the right part to fill the missing panel. The goal is to maintain the logical relationships given by the contexts. More details of RPMs are available in Appendix A.4.
We adopt RPMs as a downstream benchmark following van Steenkiste et al. (2019). To measure the necessity of disentanglement for downstream tasks, we separate the evaluation process into two stages: (1) In Stage-1, representation learning models extract representations from images of which RPMs consist, and (2) in Stage-2, abstract reasoning models predict the missing panels from the frozen representations of contexts and answer candidates. Correspondingly, we denote representation learning models as Stage-1 models and abstract reasoning models as Stage-2 models. For Stage-1, we measure the disentanglement properties of the representations. A diverse set of Stage-1 and Stage-2 models are trained, yielding multiple samples from the joint distribution of representation metric scores and downstream accuracy. Finally, we study the relationships between representation qualities and downstream performance. We aim to investigate whether more disentangled representations perform better on abstract reasoning tasks.
The two-staged framework leverages large-scale experiments to reveal connections between the disentanglement of representations and their downstream performance. It provides a precise measurement of the importance of disentanglement. Therefore the two-staged framework is widely-accepted (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020).
3.2 BACKGROUND OF REPRESENTATION LEARNING
Disentangled representation learning methods. The seminal works of Higgins et al. (2016) and Chen et al. (2016) embody disentanglement by augmenting deep generative models (Kingma & Welling, 2013; Goodfellow et al., 2014). For disentangled representation learning methods, we focus on variants of VAE. Namely, β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE (Kumar et al., 2017). They achieve disentanglement mainly by encouraging independence between representation dimensions. Please refer to Appendix A.2 for details.
General-purpose representation learning methods. In our study, methods not (explicitly) encouraging disentanglement are called general-purpose methods. We take BYOL (Grill et al., 2020) as a representative. BYOL is a negative-free contrastive learning method. It creates different “views” of an image by data augmentation and pulls their representations close together in representation space. To avoid collapsing to trivial representations, a predictor appended to one of the siamese encoders and an exponential moving average update strategy (He et al., 2020) are employed. It does not encourage disentanglement due to the lack of regularizers. Indeed, the empirical evidence in Cao et al. (2022) demonstrates that representations learned by BYOL have weak disentanglement.
Representation property metrics. Considered properties of representations cover two axes of metrics: disentanglement metrics and informativeness metrics (Eastwood & Williams, 2018). We include the BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). Locatello et al. (2019b) prove their agreement on VAE methods with extensive experiments. Though their measurements are different, their results are positively correlated. On the other hand, informativeness requires representations to encode enough information about factors. In this work, we employ Logistic Regression (LR). It is a favorable metric adopted by the unsupervised pretraining literature (He et al., 2020; Grill et al., 2020; Caron et al., 2021). Given the weak capacity of linear models, a higher LR accuracy ensures that sufficient information is explicitly encoded. However, it does not emphasize a dimension-wise encoding pattern like disentanglement. To distinguish, we term the property indicated by LR as informativeness.
3.3 BACKGROUND OF METHODS FOR ABSTRACT REASONING
In Stage-1, we extract representations of eight context panels (the left part of Figure 1) and six answer candidates (the right part of Figure 1). Then in Stage-2, downstream models perform abstract reasoning from the (frozen) representations. Abstract reasoning models evaluate whether filling the blank panel by a candidate follows the logical rules given by contexts. For a trial $T_i$ of one candidate $a_i \in A = \{a_1, \ldots, a_6\}$ and eight context panels $C = \{c_1, \ldots, c_8\}$, its score is calculated as follows:
$$Y_i = \mathrm{Stage2}(\mathrm{Stage1}(T_i)), \qquad \mathrm{Stage1}(T_i) = \{\mathrm{Stage1}(c_1), \ldots, \mathrm{Stage1}(c_8)\} \cup \{\mathrm{Stage1}(a_i)\}, \tag{1}$$
where $Y_i$ is the score of trial $T_i$, $\mathrm{Stage1}(\cdot)$ and $\mathrm{Stage2}(\cdot)$ denote the forward processes of the Stage-1 and Stage-2 models, and $\mathrm{Stage1}(T_i)$ is the representations of the contexts and candidate $a_i$. After evaluating all trials $\{T_1, T_2, \ldots, T_6\}$, the output answer is $\arg\max_i Y_i$. We implement two different well-defined structures of Stage-2 models, namely, WReN (Barrett et al., 2018) and Transformer (Vaswani et al., 2017; Hahne et al., 2019). First, they employ an MLP or a Transformer to embed an RPM trial. Then an MLP head predicts a scalar score from the embeddings.
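To make the candidate-ranking procedure of Equation 1 concrete, here is a minimal Python sketch; the stage1 and stage2 callables stand in for the trained networks, and this interface is our assumption rather than the paper's actual code:

import torch

def answer_rpm(stage1, stage2, contexts, candidates):
    # contexts: list of 8 panel images; candidates: list of 6 panel images.
    with torch.no_grad():
        ctx_reps = [stage1(c) for c in contexts]          # frozen Stage-1 representations
        scores = []
        for a in candidates:
            trial = torch.stack(ctx_reps + [stage1(a)])   # Stage1(T_i): 8 contexts + 1 candidate
            scores.append(stage2(trial).squeeze())        # scalar trial score Y_i
        return int(torch.argmax(torch.stack(scores)))     # answer = argmax_i Y_i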
4 EXPERIMENTS
In this Section, we conduct a systematic empirical study about representation properties’ impacts on downstream performance. First, we introduce our experimental conditions in Section 4.1. Then we provide empirical evidence to challenge the necessity of disentanglement (Section 4.2) and to tell which property matters (Section 4.3).
4.1 EXPERIMENTS SETUP
We build upon the experiment conditions of van Steenkiste et al. (2019). Abstract visual reasoning tasks, i.e., RPMs, are solved through a two-stage process: data $\xrightarrow{\text{Stage-1}}$ representations $\xrightarrow{\text{Stage-2}}$ RPM answers. We first train Stage-1 models in an unsupervised manner and evaluate their disentanglement and informativeness. Then Stage-2 models are trained and evaluated on downstream tasks, yielding an abstract reasoning accuracy of a representation. Provided with a large amount of (representation property score, downstream performance) pairs, we conduct a systematic study to investigate the necessity of disentanglement. More implementation details are available in Appendix A.
Datasets. We replicate the RPM generation protocol in van Steenkiste et al. (2019). The panel images consist of disentanglement benchmark image datasets, namely, Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019) and 3DShapes (Burgess & Kim, 2018). The rows of RPMs are arranged following the logical AND of ground truth factors. As for hardness, we only reserve hard-mixed, whose contexts and candidates are more confusing. According to the generation process, the size of generated RPMs is sufficiently large (about $10^{144}$), allowing us to produce fresh samples throughout training.
Reference models. Stage-1 models extract representations from RPM’s panels. To ensure the generalizability of the results, we include 360 disentangled VAEs (denoted as DisVAEs) and 360 BYOLs. Our choices of Stage-1 models cover both disentangled and general-purpose representation learning methods. Moreover, we are interested in the overall relationship between representation properties and downstream performance. Therefore we need to study the correlation between two distributions, i.e., representation metric scores and downstream performance. For this, we include various samples for both Stage-1 and Stage-2 to ensure they are representative enough. For Stage-1, a diverse set of configurations are included for each type of representation learning model. According to the histograms in Appendix C.4, our choices span various disentanglement and informativeness scores. For Stage-2, to better estimate the downstream performance distribution, we use multiple Stage-2 configurations for each representation instead of searching for the best one. Specifically, we train 10 Stage-2 models (5 WReNs and 5 Transformers) for every Stage-1 model. Stage-2 configurations are randomly sampled from a search space described in Appendix A.3 and shared across Stage-1 models. By this, we ensure fair comparisons across representations.
Training protocol. Training is conducted two-staged. Firstly, we train Stage-1 models in an unsupervised manner on the dataset consisting of RPMs’ panels, i.e., Abstract dSprites or 3DShapes. For DisVAE models, we use the training protocol of van Steenkiste et al. (2019), while for BYOL models, we follow Cao et al. (2022). In Stage-2, all models are trained for 10K iterations with a batch size of 32. After every 100 iterations, we evaluate the accuracy on newly generated 50 mini-batches of unseen RPM samples for validation and another 50 mini-batches for testing.
Evaluation protocol. We first evaluate the two stages separately. Then we analyze the relationship between the two stages, i.e., representation properties and downstream performance. Specifically, to challenge the necessity of disentanglement, we are interested in whether more disentangled representations lead to better downstream performance. Further, if it turns out that disentanglement is of limited importance, can we find another metric that better accounts for downstream performance? Therefore, for Stage-1, we employ representation metrics described in Section 3.2 to measure two aspects: disentanglement and informativeness. For all Stage-1 models, we compute the following metric scores: BetaVAE score, FactorVAE score, MIG, SAP, and LR accuracy. DCI Disentanglement is only evaluated for DisVAEs since it takes hours to develop the Gradient Boosting Trees required during the evaluation process on high-dimensional representations of BYOLs (Cao et al., 2022). For Stage-2, we inspect accuracy on newly generated test sets every 100 iterations, yielding accuracy for multiple training steps. Since every step sees fresh samples, we employ training curves to measure sample efficiency. We also report accuracy-#samples curves in Appendix C.2.
To summarize the downstream performance of a Stage-1 model, over 5 WReNs or 5 Transformers in Stage-2, we report the mean accuracy denoted as WReN or Trans., and max accuracy denoted as WReN⋆ or Trans.⋆. Finally, we calculate the rank correlation (Spearman) between the mean performance of Stage-1 models (WReN and Trans.) at certain Stage-2 steps and their Stage-1 metric scores. A larger correlation indicates a higher significance of the representation property on downstream performance.
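As a sketch of this correlation analysis (the score lists are illustrative placeholders, not our measured values), the Spearman rank correlation can be computed with SciPy:

from scipy.stats import spearmanr

# metric_scores[i] and mean_accuracy[i] both belong to the i-th Stage-1 model.
metric_scores = [0.91, 0.75, 0.83, 0.67, 0.88]
mean_accuracy = [0.86, 0.71, 0.80, 0.74, 0.83]

rho, pval = spearmanr(metric_scores, mean_accuracy)
print(f"Spearman rank correlation: {rho:.3f} (p = {pval:.3f})")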
4.2 ARE DISENTANGLED REPRESENTATIONS NECESSARY?
Hereafter we challenge the necessity of disentanglement. We begin by comparing a disentangled representation v.s. a deliberately designed, entangled representation on the downstream performance. Then we discuss the necessity of disentanglement inductive bias by evaluating the performance of disentanglement and general-purpose representation learning methods.
Effects of attenuating disentanglement. We first construct the most disentangled representations, i.e., the normalized true factor values. We normalize the true factor values to have zero means and unit standard deviations, yielding 6-d representations (note that Abstract dSprites and 3DShapes are both labeled with 6 ground truth factors). Then we rotate the constructed representations by multiplying randomly generated orthonormal matrices. Afterward, each dimension of the rotated feature captures a combination of factors, thus destroying disentanglement. Finally, we perform abstract reasoning training from true factors before and after rotations. We also conduct rotations on representations learned by DisVAEs.
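A minimal sketch of this entangling transform (the placeholder factor values and variable names are ours): normalize the factors, then multiply by a random orthonormal matrix obtained from a QR decomposition.

import numpy as np

rng = np.random.default_rng(0)
factors = rng.integers(0, 10, size=(1000, 6)).astype(np.float64)  # placeholder true factor values

# Normalize each factor to zero mean and unit standard deviation.
z = (factors - factors.mean(axis=0)) / factors.std(axis=0)

# Random orthonormal matrix from the QR decomposition of a Gaussian matrix.
q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
z_rot = z @ q  # each rotated dimension now mixes all factors

# The rotation is an invertible linear map, so a linear model can undo it.
assert np.allclose(z, z_rot @ q.T)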
We run 5 seeds defining the randomly generated rotation matrices and Stage-2 model configurations. We report results on 3DShapes with original/rotated true factors as representations and WReNs as Stage-2 models in Figure 2. As depicted in Figure 2a, there is little difference between performance before and after rotation throughout the training process. Yet Figure 2b shows significant drops in disentanglement metric scores. This surprising phenomenon suggests that even though we drastically entangle the representations, the downstream performance remains unchanged, firmly against the necessity of disentanglement. However, we can see from Figure 2b that LR scores are 100% before and after rotation. It is easy to understand because the rotation we applied
is just an invertible linear transform, which a simple LR can recover, not to mention more capable Stage-2 models. Moreover, we observe similar results for learned representations (Figure 3). We select the most disentangled DisVAE measured by FactorVAE score among the 180 DisVAE models trained on 3DShapes (recall Section 4.1). As shown in Figure 3, rotation does not hurt the performance of representations learned by DisVAEs, backing up our claim that disentanglement representations might not be necessary to achieve good downstream performance. More results of rotation experiments on other datasets are reported in Appendix C.3.
Summary: Destroying disentanglement (by random rotation) in representations does not have a noticeable impact on downstream performance throughout training.
Advantages of disentanglement inductive bias. From previous results, we demonstrate that both high performance and high sample efficiency can be achieved even if we deliberately destroy disentanglement. Further, we are interested in the inductive biases of Stage-1 models: Do disentangled representation learning models have absolute advantages in downstream performance over general-purpose models? For this, we compare the downstream performance of different families of learning models described in Section 4.1, including BYOL, β-VAE, AnnealedVAE, β-TCVAE, FactorVAE, DIP-VAE-I, and DIP-VAE-II. Among them, BYOL does not explicitly encourage disentanglement. On the other hand, all DisVAEs are disentangled representation learning methods. From a large pool of 7200 checkpoints, we report the best performance for each model family.
Figure 4 shows overviews of training trajectories of Stage-1 models with the highest performing WReN and Trans. on 3DShapes for multiple training steps. For WReN as Stage-2 models (Figure 4a), BYOL leads at the beginning, then DisVAEs catch up, and finally, BYOL converges at a higher accuracy. In contrast, when Stage-2 models are Transformers, BYOL's curve grows faster, but DisVAEs and BYOL converge with comparable performance. In general, the two curves evolve in almost identical patterns with small gaps, indicating that disentanglement inductive bias is of limited utility in improving downstream sample efficiency. Corresponding analysis on Abstract dSprites is available in Appendix C.3, where we reach the same conclusions. As for final performance, we report maximal WReN, WReN⋆, Trans., and Trans.⋆ across different Stage-2 models and datasets in Table 1. We select checkpoints to evaluate based on validation accuracy. In particular, the best WReN and Trans. of BYOL are higher than those of DisVAEs. In addition, it appears that BYOL performs better than or on par with DisVAEs in terms of WReN⋆ and Trans.⋆. Especially, BYOL outperforms DisVAEs on Abstract dSprites with a considerable margin.
Summary: Models not intended for disentangled representation learning can reach superior or comparable downstream performance. Therefore disentanglement inductive bias does not necessarily lead to better sample efficiency or final accuracy.
4.3 WHICH PROPERTY MATTERS DOWNSTREAM PERFORMANCE?
The results in Section 4.2 provide encouraging cases against the necessity of disentanglement. Additionally, we are interested in several further issues: (1) Which property matters most for downstream performance? (2) How can we interpret the previously claimed benefits of disentanglement (Bengio et al., 2013; Higgins et al., 2016; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020)? On account of these questions, we start by investigating how different representation properties influence downstream accuracy. We include informativeness and various disentanglement metrics.
Recall that we train 720 Stage-1 and 7200 Stage-2 models (see Section 4.1). By taking WReN and Trans. as measurements (average reasoning accuracy over 5 WReNs or 5 Transformers), we yield 720 representations paired with their downstream performance. Generally, our analysis is based on rank correlation (Spearman) between representation metric scores and downstream performance. If the correlation score is high, we can conclude that the representation property measured by the considered metric score is significant to downstream performance.
The representation property of the most significance. We calculate the rank correlation between downstream accuracy and disentanglement and informativeness scores. Meanwhile, we report rank correlation at steps 1K, 2K, 5K, and 10K, and the step with the highest validation accuracy. From correlations at different training steps, we can tell how a representation property affects sample efficiency.
Figure 5 displays rank correlations between representation metric scores and abstract reasoning test accuracy on 3DShapes. Firstly, we can find that Logistic Regression accuracy (LR) correlates most with downstream performance. The strong correlation is exhibited for all considered models at multiple steps. Since LR requires sufficient information to be captured and extracted easily from representations, we can conclude that the informativeness matters most in broad conditions. In contrast, we observe that the importance of disentanglement varies among Stage-1 model families. Disentangled representation learning models (DisVAEs) exhibit strong positive correlations for several disentanglement metrics (but weaker than LR), such as the FactorVAE score and DCI Disentanglement. However, their significance does not apply to BYOL, where the correlation of disentanglement is mild or even negative. In Figure 6 we plot the (WReN, metric score) pairs at step 10000. Indeed, for BYOL-WReN on 3DShapes, we can see that the linear regression provides a good fit of downstream accuracy against the informativeness metrics. As for disentanglement metrics, we can see that the BetaVAE score and FactorVAE score suffer from narrow spreads. For MIG and SAP, the regression lines have negative slopes. We conduct a similar analysis on Abstract dSprites and make the same observations. Please refer to Appendix C.4 for more details.
Summary: The informativeness influences downstream performance most. The results are consistent across datasets and model structures.
Understanding the previously claimed success of disentanglement. Previous works (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020) have reported empirical evidence backing up the advantages of disentangled representations. Consistently, we observe relatively strong correlations with disentanglement metrics, especially when Stage-1 models are DisVAEs in Figure 5. Based on our conclusion on the significance of the informativeness, we study the DisVAE-WReN case and provide some insights to explain why the disentanglement metrics have a high correlation to downstream performance in some cases.
We compute the overall correlations between metrics. The results are shown in Figure 7. For DisVAEs, we find that informativeness and disentanglement have high correlation scores. In particular, we can observe relatively strong correlations between LR and the FactorVAE score and BetaVAE score. Accordingly, these disentanglement metrics exhibit relatively strong correlations with downstream performance in Figure 5a. In contrast, other disentanglement metrics correlate mildly with LR, and they are ineffective for downstream performance. Therefore, disentanglement metrics are not truly predictive of downstream performance, but LR is.
To “purify” the effect of disentanglement, a natural question is: If two representations are of close informativeness, is the more disentangled one more helpful for downstream tasks? For this, we employ the adjusted metrics of Locatello et al. (2019a):
$$\text{Adj. Metric} = \text{Metric} - \frac{1}{5}\sum_{i \in N(\text{LR})} \text{Metric}_i, \tag{2}$$
For a representation and a certain metric (we care more about disentanglement metrics), we denote its original metric score as Metric. Then we find its 5 nearest neighbors in terms of LR, which we write as N(LR). Finally, the difference between the original metric score and the mean score of the nearest neighbors is reported as adjusted metrics. Intuitively, we calculate the relative disentanglement for representations with close LR.
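A minimal sketch of Equation 2 (function and array names are ours), assuming metric and lr hold per-representation metric scores and LR accuracies:

import numpy as np

def adjusted_metric(metric, lr, k=5):
    """Subtract the mean metric score of the k nearest neighbors in LR accuracy."""
    metric, lr = np.asarray(metric, float), np.asarray(lr, float)
    adjusted = np.empty_like(metric)
    for i in range(len(metric)):
        dist = np.abs(lr - lr[i])            # distance in LR accuracy
        dist[i] = np.inf                     # exclude the representation itself
        neighbors = np.argsort(dist)[:k]     # the neighborhood N(LR)
        adjusted[i] = metric[i] - metric[neighbors].mean()
    return adjusted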
Figure 7b displays correlations between adjusted metrics and downstream performance. We can find that all adjusted disentanglement metrics correlate mildly with downstream performance. From this, we can see that when informativeness is close, being disentangled contributes only a small portion to the downstream performance when the downstream training steps are limited (in our case, less than or equal to 2000 steps; see Figure 4 and Figure 7).
Summary: The informativeness is the most predictable metric for downstream performance. Disentanglement only brings small extra benefits at the very beginning of downstream training.
5 CONCLUSION
In this paper, we challenge the necessity of dimension-wise disentanglement for downstream tasks. We conduct a large-scale empirical study on the abstract visual reasoning task. We start by showing that high downstream performance can be achieved by less disentangled representations. In addition, we identify that the informativeness is of the most significance. Finally, we conclude that dimension-wise disentanglement is unnecessary for downstream tasks using deep neural networks with learned representations as input.
REPRODUCIBILITY STATEMENT
We provide information to reproduce our results in Appendix A. We commit to making our codes publicly available.
A REPRODUCIBILITY
In this Section, we provide implementation details to ensure reproducibility. In addition, we commit to making our codes, configurations, and running logs publicly available. All experiments are run on a machine with 2 Intel Xeon Gold 5218R 20-core processors and 4 Nvidia GeForce RTX 3090 GPUs.
A.1 REPRESENTATION LEARNING METHODS
We include both disentangled representation learning methods and general-purpose representation learning methods. i.e., DisVAEs and BYOL (Grill et al., 2020).
DisVAEs implementation. The DisVAEs include β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE-I and DIP-VAE-II (Kumar et al., 2017). We use the output of the encoder, the mean of $q_\phi(z|x)$, as representations. Hereafter, we introduce details for each method. The above methods encourage disentanglement by adding regularizers to the ELBO. Adopting the notation in Tschannen et al. (2018), their objectives can be written in the following unified form:
$$\mathbb{E}_{p(x)}\big[\mathbb{E}_{q_\phi(z|x)}[-\log p_\theta(x|z)]\big] + \lambda_1 \mathbb{E}_{p(x)}[R_1(q_\phi(z|x))] + \lambda_2 R_2(q_\phi(z)), \tag{3}$$
where $q_\phi(z|x)$ is the posterior parameterized by the output of the encoder, $p_\theta(x|z)$ is induced by the decoder output, $R_1$ and $R_2$ are the regularizers applied to the posterior and aggregate posterior, and $\lambda_1, \lambda_2$ are the coefficients controlling regularization. In the objective of β-VAE, $\beta = \lambda_1 > 1$, $\lambda_2 = 0$. Taking $R_1(q_\phi(z|x)) := D_{\mathrm{KL}}[q_\phi(z|x)\,\|\,p(z)]$ forces the posterior to be close to the prior (usually a unit Gaussian), hence penalizing the capacity of the information bottleneck and encouraging disentanglement. FactorVAE and β-TCVAE take $\lambda_1 = 0$, $\lambda_2 = 1$. With $R_2(q_\phi(z)) := \mathrm{TC}(q_\phi(z))$, they penalize the Total Correlation (TC) (Watanabe, 1960). FactorVAE estimates TC by adversarial training, while β-TCVAE estimates TC by biased Monte Carlo sampling. Finally, DIP-VAE-I and DIP-VAE-II take $\lambda_1 = 0$, $\lambda_2 \geq 1$ and $R_2(q_\phi(z)) := \|\mathrm{Cov}_{q_\phi(z)}[z] - I\|_F^2$, penalizing the distance between the aggregated posterior and a factorized prior.
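As a concrete illustration, the following minimal PyTorch sketch instantiates Equation 3 with $\lambda_1 = \beta$ and $\lambda_2 = 0$, i.e., the β-VAE case; the Bernoulli decoder and tensor shapes are our assumptions, not details from the paper:

import torch
import torch.nn.functional as F

def beta_vae_loss(x, recon_logits, mu, logvar, beta=4.0):
    # Reconstruction term: E_q[-log p(x|z)] under a Bernoulli decoder.
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    # Closed-form KL divergence between N(mu, sigma^2) and the unit Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + beta * kl) / x.shape[0]  # average over the batch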
We use the code and configurations from DisLib (https://github.com/google-research/disentanglement_lib.git) (Locatello et al., 2019b). As for parameters, we use the same sweep as van Steenkiste et al. (2019): for each one of the 6 DisVAEs, we use 6 configurations. We train each model using 5 different random seeds. Since we consider 2 datasets (3DShapes and Abstract dSprites), we finally yield 6 × 6 × 5 × 2 = 360 DisVAE checkpoints.
BYOL implementation. BYOL (Grill et al., 2020) is a contrastive learning method. Figure 8 shows its pipeline. For each image $x$, we first create two “views” of it by data augmentation, i.e., $x_1$ and $x_2$. Then they are input to the siamese encoders: the online encoder and the target encoder. Specifically, $x_1$ is fed to the online encoder, while $x_2$ is fed to the target encoder, yielding the outputs $z_1$ and $z_2$, respectively. As for architectures, both encoders share the same representation network and projection MLP. The prediction MLP is appended to the online encoder in order to avoid BYOL learning trivial representations. The objective of BYOL is
$$\mathcal{L} = -\frac{\langle z_1, z_2 \rangle}{\|z_1\|_2\,\|z_2\|_2}. \tag{4}$$
This pulls the representations of the two “views” close together. While training, the online encoder's parameters are updated by gradient descent. However, the target encoder's parameters are updated by the Exponential Moving Average (EMA) of the online parameters (He et al., 2020). After training, we only keep the online encoder and use the output of the representation network as representations.
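A minimal sketch (with our own naming) of the BYOL objective in Equation 4 and the EMA target update:

import torch
import torch.nn.functional as F

def byol_loss(z1, z2):
    # Negative cosine similarity between online prediction z1 and target projection z2.
    return -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.996):
    # Target parameters follow an exponential moving average of the online ones.
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(tau).add_(o, alpha=1.0 - tau)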
We use the PyTorch implementation of BYOL (https://github.com/lucidrains/byol-pytorch.git). We use the representation network architecture as shown in Table 2, where the representation dimension D is a parameter to be set. Except for normalization and output dimensions, the representation network architecture of BYOL and the encoder architecture of DisVAEs are similar. As for the predictor and projector, we use the pipeline Linear → BN → ReLU → Linear with 256 hidden neurons. We train the BYOLs for 105 epochs using the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and the learning rate (lr) as a variable parameter. For augmentation, we use the pipeline of Cao et al. (2022) (in PyTorch style):
1. RandomApply(transforms.ColorJitter(x_jit, x_jit, x_jit, 0.2), p=0.8)
2. RandomGrayscale(p=p_gray)
3. RandomHorizontalFlip()
4. RandomApply(transforms.GaussianBlur((3,3), (1.0, 2.0)), p=0.2)
5. RandomResizedCrop(size=(64, 64), scale=(x_crop, 1.0))
The x_jit, p_gray, and x_crop are parameters to be set. x_jit controls how much to jitter brightness, contrast, and saturation. p_gray controls the probability of converting the image to grayscale. x_crop defines the lower bound for the random area of the crop.
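A sketch of this pipeline with torchvision; the parameter values are placeholders drawn from the grids below:

from torchvision import transforms

x_jit, p_gray, x_crop = 0.6, 0.5, 1.0  # example values from the sweep below

augment = transforms.Compose([
    transforms.RandomApply([transforms.ColorJitter(x_jit, x_jit, x_jit, 0.2)], p=0.8),
    transforms.RandomGrayscale(p=p_gray),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.GaussianBlur((3, 3), (1.0, 2.0))], p=0.2),
    transforms.RandomResizedCrop(size=(64, 64), scale=(x_crop, 1.0)),
])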
We perform a parameter sweep on the cross product of intervals of the parameters D, norm, lr, x_jit, p_gray, and x_crop. On 3DShapes, we use the following parameter grid (in scikit-learn style):
[
  {'D': [32, 64, 128], 'lr': [3e-2, 3e-3], 'norm': [BatchNorm()],
   'x_jit': [0.6, 0.8], 'p_gray': [0.5, 0.7, 0.9], 'x_crop': [1.0]},
  {'D': [256], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm(), GroupNorm(num_groups=4)],
   'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [1.0]}
]
On Abstract dSprites, we use the following parameter grid:
[
  {'D': [32, 64, 128], 'lr': [3e-3, 3e-4], 'norm': [BatchNorm()],
   'x_jit': [0.6, 0.8], 'p_gray': [0.0, 0.1, 0.2], 'x_crop': [0.6]},
  {'D': [256], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm(), GroupNorm(num_groups=4)],
   'x_jit': [0.4, 0.8], 'p_gray': [0.0, 0.1, 0.2], 'x_crop': [0.6]}
]
For each parameter configuration, we run it with 3 random seeds. Finally, we trained 360 BYOLs in total.
A.2 ABSTRACT REASONING METHODS
We include two abstract reasoning network architectures: WReN (Barrett et al., 2018; van Steenkiste et al., 2019) and Transformer (Vaswani et al., 2017; Hahne et al., 2019).
WReN implementation. WReN consists of two parts: an edge MLP and a graph MLP. Here we use the same notations as in Section 3.3. For the representations of a trial $\mathrm{Stage1}(T_i)$, the edge MLP takes a pair of representations in $\mathrm{Stage1}(T_i)$ as input and embeds them into edge embeddings. Then all edge embeddings of $\mathrm{Stage1}(T_i)$ (in total $\binom{9}{2} = 36$) are added up and input to the graph MLP. Finally, the graph MLP outputs a scalar score, predicting the correctness of the trial $T_i$.
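A schematic PyTorch sketch of this forward pass (the hidden sizes are illustrative; the actual configurations are sampled as described below):

import itertools
import torch
import torch.nn as nn

class WReNSketch(nn.Module):
    """Minimal WReN: edge MLP over all panel pairs, summed, then a graph MLP score."""
    def __init__(self, rep_dim=64, edge_dim=256, graph_dim=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * rep_dim, edge_dim), nn.ReLU(),
            nn.Linear(edge_dim, edge_dim), nn.ReLU())
        self.graph_mlp = nn.Sequential(
            nn.Linear(edge_dim, graph_dim), nn.ReLU(),
            nn.Linear(graph_dim, 1))

    def forward(self, reps):                     # reps: (9, rep_dim), 8 contexts + 1 candidate
        pairs = [torch.cat([reps[i], reps[j]])   # all C(9,2) = 36 unordered pairs
                 for i, j in itertools.combinations(range(9), 2)]
        edges = self.edge_mlp(torch.stack(pairs))
        return self.graph_mlp(edges.sum(dim=0))  # scalar trial score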
We use the code of van Steenkiste et al. (2019) to implement WReN, and we use the same parameter search spaces as them. All WReNs are trained for 10K steps with a batch size of 32. The learning rate for the Adam optimizer is sampled from the set {0.01, 0.001, 0.0001}, while $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. For the edge MLP in the WReN model, we uniformly sample its hidden units from {256, 512}, and we uniformly choose its number of hidden layers from {2, 3, 4}. Similarly, for the graph MLP in the WReN model, we uniformly sample its hidden units from {128, 512}, and we uniformly choose its number of hidden layers from {1, 2} before the final linear layer that predicts the final score. We also uniformly sample whether we apply no dropout, dropout of 0.25, dropout of 0.5, or dropout of 0.75 to units before this last layer.
Transformer implementation. We simplify the architecture of Hahne et al. (2019). Here we treat $\mathrm{Stage1}(T_i)$ as a sequence. We first linearly project all representations and prepend a learnable [class] token. We then add learnable positional embeddings. The resulting sequence is input into a stack of Transformer blocks (Vaswani et al., 2017). Finally, an MLP predicts a scalar score from the class embedding of the final Transformer block.
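A simplified PyTorch sketch of this scorer (hyperparameter values are illustrative, not the sampled configurations):

import torch
import torch.nn as nn

class TransformerScorerSketch(nn.Module):
    """Project panel representations, prepend a [class] token, encode, score."""
    def __init__(self, rep_dim=64, d_model=64, depth=2, heads=4, seq_len=9):
        super().__init__()
        self.proj = nn.Linear(rep_dim, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, seq_len + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Sequential(nn.Linear(d_model, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, reps):                     # reps: (batch, 9, rep_dim)
        tokens = self.proj(reps)
        cls = self.cls.expand(tokens.shape[0], -1, -1)
        x = torch.cat([cls, tokens], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])                # score from the class embedding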
We implement the Transformer architecture ourselves with utilities of the DisLib code base. All Transformers are trained for the same number of steps and the same batch size as WReN, i.e., 10K steps with a batch size of 32. We use the Adam optimizer with weight decay and a cosine learning rate scheduler. The learning rate for the Adam optimizer is uniformly selected from {5e-4, 6e-4, 7e-4}. The depth of Transformer blocks is uniformly set to 2, 3, or 4. The dimensions of q, k, v of the self-attention module are uniformly 32 or 64. The MLP head uses the same architecture and parameter space as the graph MLP in WReN. For other fixed parameters, please refer to our codes for details.
A.3 REPRESENTATION METRICS
In the main text, we employ disentanglement and informativeness metrics to measure the properties of representations. Here we provide more details.
Disentanglement metrics. We use the setup and implementation of Locatello et al. (2019b). Here we briefly introduce the details of our considered metrics, namely, the BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). The BetaVAE score and the FactorVAE score predict the intervened factor from representations to measure disentanglement. The Mutual Information Gap and SAP compute the gap in response for each factor between the two highest representation dimensions. The difference is that MIG measures mutual information while SAP measures classification accuracy. The DCI Disentanglement calculates the entropy of the relative importance of a latent dimension in predicting factors. We follow previous studies (Locatello et al., 2019b; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020) and develop a Gradient Boosting Tree (GBT) for prediction during the DCI Disentanglement evaluation, though according to Eastwood & Williams (2018) any classifier could be used. As reported by Cao et al. (2022), the GBT takes hours to train from high-dimensional representations learned by BYOL. Thus we only report the DCI Disentanglement score for DisVAEs.
Informativeness metrics. We use LR to measure the informativeness of representations. We train a Logistic Regression model to predict factor values from representations, using 10000 samples to train the LR. Unlike van Steenkiste et al. (2019), we use “multinomial” instead of “one v.s. rest” as the multi-class classification scheme. As shown in Figure 9a, for the same set of representations, the “one v.s. rest” LR has inferior prediction accuracy. Moreover, ranking by the scores of these two LRs yields different results. In Figure 9b, we can observe different correlations for the “one v.s. rest” LR. To better estimate informativeness, we use the “multinomial” LR as the measurement.
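A sketch of this informativeness evaluation with scikit-learn; the data arrays below are random placeholders standing in for learned representations and factor labels:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
reps = rng.normal(size=(10000, 10))      # placeholder learned representations
factor = rng.integers(0, 8, size=10000)  # placeholder discrete values of one factor

clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
clf.fit(reps, factor)
print("LR accuracy:", clf.score(reps, factor))  # repeated per factor in practice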
A.4 ABSTRACT VISUAL REASONING DATASETS
We use the two abstract visual reasoning datasets developed by van Steenkiste et al. (2019), i.e., Raven's Progressive Matrices created from 3DShapes (Burgess & Kim, 2018) and Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019).
We sketch the rules here by taking the RPM in Figure 1 as an example. The reasoning attributes are the ground truth factors of 3DShapes, i.e., floor hue, wall hue, object hue, scale, shape, and orientation. Each row in the 3 × 3 matrix has 1, 2, or 3 ground truth factors taking a fixed value. And the 3 rows have the same fixed ground truth factors, though they might take different values. From the context panels, one should discover the underlying logical relationship. Finally, one is asked to fill the missing panel with one of the candidates. For the RPM in Figure 1, from the contexts, we can infer that the fixed factors are: wall hue, shape, and orientation. Then for the third row, from the first 2 panels, we know that the values of the shared factors are: the wall hue is blue, the shape is cylinder, and the orientation is the azimuth that makes the wall corner appear in the right part of the image. So we choose the candidate with these factor values as the solution, as shown in Figure 10a. Figure 10b shows a sample of RPMs with answers on Abstract dSprites.
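As a small illustration of the row rule, the following sketch (our own construction) finds which factors stay constant across a row, assuming each panel is described by its vector of ground truth factor values:

import numpy as np

def fixed_factors(row):
    """Indices of factors that take a constant value across a row of 3 panels."""
    row = np.asarray(row)                        # shape (3, num_factors)
    return np.where((row == row[0]).all(axis=0))[0]

# Example: factors 0, 1, 3, 4, and 5 are fixed; factor 2 varies freely.
row = [[2, 1, 3, 0, 4, 5],
       [2, 1, 0, 0, 4, 5],
       [2, 1, 2, 0, 4, 5]]
print(fixed_factors(row))  # -> [0 1 3 4 5]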
B ABLATIONS ON GENERAL-PURPOSE REPRESENTATION LEARNING METHODS
In the main text, we use BYOL as a representative of general-purpose representation learning methods. For completeness, here we introduce another general-purpose method, SimSiam (Chen & He, 2021). We modify the BYOL code (https://github.com/lucidrains/byol-pytorch.git) to train SimSiams on 3DShapes with the following parameter grid:
[
  {'D': [512], 'lr': [3e-4, 3e-5], 'norm': [BatchNorm()],
   'x_jit': [0.4, 0.8], 'p_gray': [0.3, 0.5, 0.7], 'x_crop': [0.6, 1.0]}
]
For each configuration, we run with 3 seeds, so finally we yield 72 SimSiams. Then we use the same WReN Stage-2 configurations as for DisVAEs and BYOLs.
The results of SimSiam-WReN agree with our conclusions in the main text. As for the best performance, we have WReN=85.1% and WReN⋆=94.1%, which is better than DisVAEs’. Figure 11 shows the correlations of downstream performance and representation properties. LR still correlates most for all considered steps.
1. What is the main contribution of the paper, and what are the strengths and weaknesses of the proposed approach?
2. What are the concerns regarding the comparison between true disentangled representations and entangled representations learned by a standard VAE?
3. What are the issues with the reporting of accuracy in Table 1, and what information is missing regarding the speed of convergence and standard deviations of performance?
4. How does the reviewer assess the correlation between informativeness and downstream tasks, and what additional analysis would be necessary to conclude that informativeness is more beneficial than disentanglement?
5. What minor suggestions does the reviewer have regarding the placement of Figure 1 and the need for further clarification on selecting the final DisVAEs representation?
Summary Of The Paper
The paper performs a large-scale empirical study to investigate whether disentangled representations provide a clear benefit for the final performance on downstream tasks. First, the ground-truth disentangled representations (normalized true factors) are compared to a rotated version of the same representations. The authors show that the two types of representations yield no significant difference in final downstream performance. The paper's second contribution compares the final performance of two models (a WReN and a Transformer) on an abstract reasoning task, using representations learned both via disentanglement-oriented learning methods (DisVAEs) and via an entangled representation learning method (BYOL). They report a better downstream performance using representations based on BYOL. Finally, the authors show that Informativeness is the metric that correlates the most with downstream performance on both DisVAE and BYOL representations, with disentanglement bringing only a small extra benefit.
Strengths And Weaknesses
STRENGTHS
(+) The experimental setting is, for the most part, reasonable and well-designed. The paper is well-written and easy to follow. The experiments try to tackle relevant questions in the representation learning community.
WEAKNESSES
The evidence presented is not enough to back up all claims of this paper. In particular, I have the following concerns:
--In the first contribution, the true disentangled representations are compared only with a rotated version of themselves. Even if the rotated representations are not disentangled according to the metrics, I think that they are still much more similar to a disentangled representation than the typical representations learned by deep learning models. Therefore, comparing the two is not enough to claim that disentanglement is not beneficial for downstream performance in general. A fairer comparison would be between true disentangled representations and entangled representations learned by a standard VAE, where \lambda_1 and \lambda_2 of Equation 3 are both set to 0.
(-) Table 1. Two concerns: first, to which DisVAEs model does the reported accuracy refer? Is it an average over the models? Second, reporting the step with the highest validation accuracy for both DisVAE and BYOL can be misleading, since it hides information about when that accuracy is achieved. It might be the case that BYOL achieves an overall better accuracy, but much later than DisVAEs. In that case, using a DisVAE model for representation learning could still be valuable. It is generally fairer to fix a specific number of training steps, or to run an early stopping strategy. More generally, it would be interesting to have more information about the speed of convergence of these models. Furthermore, I would have expected to see the standard deviations of performance over the 5 different runs of WReNs and Transformers reported in Table 1.
(-) The results of Figures 5 and 6 show that informativeness has a strong correlation with downstream tasks. However, it seems that disentanglement somewhat implies informativeness, and that the representations trained with BYOL exhibit high disentanglement scores on dSprites, at least in the case of the beta-VAE and FactorVAE scores. I am not sure that correlations alone are enough to make any conclusions about the usefulness of disentanglement. In order to conclude that informativeness is more beneficial than disentanglement on downstream tasks, I would like to see a comparison of the absolute values of disentanglement scores and informativeness scores for the two representations, showing that BYOL representations have higher informativeness, lower disentanglement, and higher downstream performance than DisVAE representations. Finally, it is still not very clear to me how the authors selected the final DisVAEs representation to be used in the figures.
On a minor note, Figure 1 should be moved closer to Section 4.2, for the sake of readability.
Clarity, Quality, Novelty And Reproducibility
The claims discussed in the paper are relevant for the research community. The experiments seem well-designed, except for the points highlighted in the previous section. The writing style is generally clear and concise. The authors report all the details needed for reproducibility; however, they do not share the code of the experiments (but commit to sharing it after publication). |
ICLR | Title
On the Necessity of Disentangled Representations for Downstream Tasks
Abstract
A disentangled representation encodes generative factors of data in a separable and compact pattern. Thus it is widely believed that such a representation format benefits downstream tasks. In this paper, we challenge the necessity of disentangled representation in downstream applications. Specifically, we show that dimension-wise disentangled representations are not necessary for downstream tasks using neural networks that take learned representations as input. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Moreover, our study reveals that informativeness of representations best accounts for downstream performance. The positive correlation between the informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works.
1 INTRODUCTION
Disentanglement has been considered an essential property of representation learning (Bengio et al., 2013; Peters et al., 2017; Goodfellow et al., 2016; Bengio et al., 2007; Schmidhuber, 1992; Lake et al., 2017; Tschannen et al., 2018). Though there is no widely accepted formal definition yet, the fundamental intuition is that a disentangled representation should separately and distinctly capture information from generative data factors (Bengio et al., 2013). In practice, disentanglement is often implemented to emphasize a dimension-wise relationship, i.e., a representation dimension should capture information from exactly one factor and vice versa (Locatello et al., 2019b; Higgins et al., 2016; Kim & Mnih, 2018; Chen et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018; Kumar et al., 2017; Do & Tran, 2019). Disentangled representations offer human-interpretable factor dependencies. Therefore, in theory, they are robust to variations in the natural data and are expected to benefit downstream performance (Bengio et al., 2013).
Researchers are interested in empirically verifying these purported advantages. In particular, they focus on the following two-staged tasks: (1) extracting representations in an unsupervised manner from data, and (2) performing downstream neural network training based on the learned representations (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020). Among various downstream tasks, except for the ones that explicitly require disentanglement (Higgins et al., 2018b; Gabbay & Hoshen, 2021; Schölkopf et al., 2021), abstract visual reasoning is widely recognized as a popular testbed (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021). The premise behind it aligns with the goals of machine intelligence (Snow et al., 1984; Carpenter et al., 1990). Moreover, its mechanism ensures valid measurement of representations' downstream performance (Fleuret et al., 2011; Barrett et al., 2018).
In the abstract visual reasoning task, intelligent agents are asked to take human IQ tests, i.e., predict the missing panel of Raven’s Progressive Matrices (RPMs) (Raven, 1941). Indeed it is a challenging task for representation learning (Barrett et al., 2018; van Steenkiste et al., 2019). Disentanglement literature often takes this task as an encouraging example to show that disentanglement leads to quicker learning and better final performance (van Steenkiste et al., 2019; Locatello et al., 2020; Schölkopf et al., 2021).
However, on the abstract visual reasoning task, we find that rotating disentangled representations, i.e., multiplying the representations by an orthonormal matrix, has no impact on sample efficiency and final accuracy. We construct the most disentangled representations, i.e., normalized true factors.
Then we solve the downstream tasks from them and their rotated variants. As shown in Figure 2a, there is little difference between the accuracy curves of original and rotated representations throughout the learning process. On one hand, this phenomenon is surprising since the rotation decreases dimension-wise disentanglement by destroying axis alignment (Locatello et al., 2019b). Indeed, in Figure 2b we can observe notable drops in disentanglement metric scores (first 5 columns). Our finding demonstrates that disentanglement does not affect the downstream learning trajectory, which is against the commonly believed usefulness of disentanglement. On the other hand, it is not surprising since we apply an invertible linear transform. We can observe that Logistic Regression (LR) accuracy remains 100% before and after rotation, indicating that a simple linear layer could eliminate the effects of rotation.
Given these facts, two questions arise: Are disentangled representations necessary for two-staged tasks? If not, which property matters? To address them, we conduct an extensive empirical study based on abstract reasoning tasks. Our contributions are as follows.
• We challenge the necessity of disentanglement for abstract reasoning tasks. We find that (1) entangling representations by random rotation has little impact, and (2) general-purpose representation learning methods can reach performance better than or competitive with disentanglement methods.
• Following Eastwood & Williams (2018), we term what information the representation has learned as informativeness. We show that informativeness matters most for downstream performance. (1) Logistic regression (LR) accuracy on factor classification correlates most with downstream performance, compared with disentanglement metrics. (2) Conditioned on close LR accuracy, disentanglement correlates only mildly. (3) Informativeness is behind the previously argued usefulness of disentanglement, since we observe a positive correlation between LR and disentanglement metrics.
• We conduct a large-scale empirical study supporting our claim. We train 720 representation learning models covering two datasets, including both disentanglement and general-purpose methods. Then we train 5 WReNs (Barrett et al., 2018) and 5 Transformers (Vaswani et al., 2017; Hahne et al., 2019) using the outputs of each representation learning model to perform abstract reasoning, yielding a total of 7200 abstract reasoning models.
2 RELATED WORK
Disentangled representation learning. There is no agreed-upon formal definition of disentanglement. Therefore, in practice, disentanglement is often interpreted as a one-to-one mapping between representation dimensions and generative factors of data, which we term “dimension-wise disentanglement”. It requires that each representation dimension encode exactly one factor and vice versa (Locatello et al., 2019b; Eastwood & Williams, 2018; Kumar et al., 2017; Do & Tran, 2019). Besides dimension-wise disentanglement, Higgins et al. (2018a) propose a definition from the group theory perspective. However, its requirement of interaction with the environment prevents applicable learning methods for existing disentanglement benchmarks (Caselles-Dupré et al., 2019).
Adopting the dimension-wise definition, researchers develop methods and metrics. SOTA disentanglement methods are mainly variants of generative methods (Higgins et al., 2016; Kim & Mnih, 2018; Burgess et al., 2018; Kumar et al., 2017; Chen et al., 2018; 2016; Jeon et al., 2018; Lin et al., 2020). Corresponding metrics are designed in the following ways (Zaidi et al., 2020): intervening on factors (Higgins et al., 2016; Kim & Mnih, 2018), estimating mutual information (Chen et al., 2018), and developing classifiers (Eastwood & Williams, 2018; Kumar et al., 2017). Another line of work related to disentangled representation learning is Independent Component Analysis (ICA) (Comon, 1994). ICA aims to recover independent components of the data, using the mean correlation coefficient (MCC) as the metric. However, ICA models require access to auxiliary variables (Hyvarinen et al., 2019), leading to inevitable supervision for training on image datasets (Hyvarinen & Morioka, 2016; Khemakhem et al., 2020a;b; Klindt et al., 2020). In this paper, we focus on the downstream performance of unsupervised representation learning.
Downstream tasks. It is widely believed that disentangled representations benefit downstream tasks. Intuitively, they offer a human-understandable structure with ready access to salient factors, and hence should enjoy robust generalization capacity (Bengio et al., 2013; Do & Tran, 2019). Several works conduct empirical studies on downstream tasks to support the notions above, including abstract reasoning (van Steenkiste et al., 2019), fairness (Locatello et al., 2019a), and sim2real transfer (Dittadi et al., 2020). Among these works, van Steenkiste et al. (2019) provide the most encouraging evidence from abstract reasoning tasks. We adopt their settings and investigate the same tasks. However, their results are questionable. Firstly, they underestimate factors' linear classification accuracy, yielding a weaker correlation between informativeness and downstream performance (see Figure 9 in Appendix A.3). Moreover, only variants of VAEs are considered. We address these issues and reach opposite conclusions.
Abstract visual reasoning has been a popular benchmark to measure a representation's downstream performance, especially in the disentanglement literature (Steenbrugge et al., 2018; van Steenkiste et al., 2019; Dittadi et al., 2020; Locatello et al., 2020; Schölkopf et al., 2021). The most common type is the Raven's Progressive Matrices (RPMs) (Raven, 1941), which highly emphasize abstract and relational reasoning capacities and effectively represent human intelligence (Snow et al., 1984; Carpenter et al., 1990). To solve RPMs, one is asked to complete the missing panel of a 3×3 grid by exploring the logical relationships of 8 context panels. Moreover, abstract visual reasoning is a well-developed benchmark for representation learning. Given that it is coupled with a principled treatment of generalization (Fleuret et al., 2011), a neural network cannot solve reasoning tasks by simply memorizing superficial statistical features. Besides, it avoids pitfalls where test-specific heuristics learned by downstream models obscure the original properties of representations (Barrett et al., 2018). To summarize, (1) the goal of abstract visual reasoning highlights our requirements for representation learning, and (2) its mechanism ensures valid measurements. For these reasons, we focus on the necessity of disentanglement for the abstract reasoning task.
3 DOWNSTREAM BENCHMARK: ABSTRACT VISUAL REASONING
This section contains background on the downstream benchmark framework. We first introduce the definition of the abstract visual reasoning task. Then we present the framework’s ingredients: representation learning methods, metrics, and abstract reasoning models.
3.1 ABSTRACT VISUAL REASONING AS A TWO-STAGED TASK
The abstract visual reasoning tasks are highly inspired by the famous human IQ test, Raven's Progressive Matrices (RPMs) (Raven, 1941). Figure 1 shows an RPM question in our evaluation dataset. There are eight context panels and one missing panel in the left part of the figure. The context panels are arranged following some logical rules across rows. During the test, the subject must pick one of the six candidates listed in the right part to fill the missing panel. The goal is to maintain the logical relationships given by the contexts. More details of RPMs are available in Appendix A.4.
We adopt RPMs as a downstream benchmark following van Steenkiste et al. (2019). To measure the necessity of disentanglement for downstream tasks, we separate the evaluation process into two stages: (1) In Stage-1, representation learning models extract representations from images of which RPMs consist, and (2) in Stage-2, abstract reasoning models predict the missing panels from the frozen representations of contexts and answer candidates. Correspondingly, we denote representation learning models as Stage-1 models while abstract reasoning models as Stage-2 models. For Stage-1, we measure the disentanglement properties of the representations. A diverse set of Stage-1 and Stage-2 models are trained, yielding multiple samples from the joint distribution of representation metric scores and downstream accuracy. Finally, we study the relationships between representation qualities and downstream performance. We aim to investigate whether more disentangled representations perform better on abstract reasoning tasks.
The two-staged framework leverages large-scale experiments to reveal connections between the disentanglement of representations and their downstream performance. It provides a precise measurement of the importance of disentanglement. Therefore the two-staged framework is widely accepted (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020).
3.2 BACKGROUND OF REPRESENTATION LEARNING
Disentangled representation learning methods. The seminal works of Higgins et al. (2016) and Chen et al. (2016) embody disentanglement by augmenting deep generative models (Kingma & Welling, 2013; Goodfellow et al., 2014). For disentangled representation learning methods, we focus on variants of VAE. Namely, β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE (Kumar et al., 2017). They achieve disentanglement mainly by encouraging independence between representation dimensions. Please refer to Appendix A.2 for details.
General-purpose representation learning methods. In our study, methods that do not (explicitly) encourage disentanglement are called general-purpose methods. We take BYOL (Grill et al., 2020) as a representative. BYOL is a negative-free contrastive learning method. It creates different “views” of an image by data augmentation and pulls their representations together in representation space. To avoid collapsing to trivial representations, a predictor appended to one of the siamese encoders and an exponential moving average update strategy (He et al., 2020) are employed. It does not encourage disentanglement due to the lack of regularizers. Indeed, the empirical evidence in Cao et al. (2022) demonstrates that representations learned by BYOL exhibit weak disentanglement.
Representation property metrics. Considered properties of representations cover two axes of metrics: disentanglement metrics and informativeness metrics (Eastwood & Williams, 2018). We include the BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). Locatello et al. (2019b) prove their agreement on VAE methods with extensive experiments. Though their measurements are different, their results are positively correlated. On the other hand, informativeness requires representations to encode enough information about factors. In this work, we employ Logistic Regression (LR). It is a favored metric in the unsupervised pretraining literature (He et al., 2020; Grill et al., 2020; Caron et al., 2021). Given the weak capacity of linear models, a higher LR accuracy ensures that sufficient information is explicitly encoded. However, it does not emphasize a dimension-wise encoding pattern like disentanglement. To distinguish the two, we term the property indicated by LR as informativeness.
3.3 BACKGROUND OF METHODS FOR ABSTRACT REASONING
In Stage-1, we extract representations of eight context panels (the left part of Figure 1) and six answer candidates (the right part of Figure 1). Then in Stage-2, downstream models perform abstract reasoning from the (frozen) representations. Abstract reasoning models evaluate whether filling the blank panel with a candidate follows the logical rules given by the contexts. For a trial $T_i$ of one candidate $a_i \in A = \{a_1, \ldots, a_6\}$ and eight context panels $C = \{c_1, \ldots, c_8\}$, its score is calculated as follows:
$$Y_i = \mathrm{Stage2}(\mathrm{Stage1}(T_i)), \qquad \mathrm{Stage1}(T_i) = \{\mathrm{Stage1}(c_1), \ldots, \mathrm{Stage1}(c_8)\} \cup \{\mathrm{Stage1}(a_i)\}, \qquad (1)$$
where $Y_i$ is the score of trial $T_i$, $\mathrm{Stage1}(\cdot), \mathrm{Stage2}(\cdot)$ denote the forward processes of the Stage-1 and Stage-2 models, and $\mathrm{Stage1}(T_i)$ is the set of representations of the contexts and the candidate $a_i$. After evaluating all trials $\{T_1, T_2, \ldots, T_6\}$, the output answer is $\arg\max_i Y_i$. We implement two different well-defined structures of Stage-2 models, namely, WReN (Barrett et al., 2018) and Transformer (Vaswani et al., 2017; Hahne et al., 2019). First, they employ an MLP or a Transformer to embed an RPM trial. Then an MLP head predicts a scalar score from the embeddings.
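As a concrete illustration of Eq. (1), the sketch below shows the evaluation loop for answering a single RPM; stage1 and stage2 are placeholder callables for the frozen Stage-1 model and the Stage-2 scorer, and the tensor shapes are our assumptions rather than the exact interface of the released code.

import torch

@torch.no_grad()
def answer_rpm(stage1, stage2, contexts, candidates):
    # contexts: (8, C, H, W) context panels; candidates: (6, C, H, W) answer panels
    ctx_reps = stage1(contexts)                      # Stage1(c_1), ..., Stage1(c_8)
    scores = []
    for a in candidates:                             # one trial T_i per candidate a_i
        cand_rep = stage1(a.unsqueeze(0))            # Stage1(a_i)
        trial = torch.cat([ctx_reps, cand_rep], 0)   # Stage1(T_i) as a 9-panel set
        scores.append(stage2(trial.unsqueeze(0)))    # Y_i = Stage2(Stage1(T_i))
    return int(torch.argmax(torch.stack(scores)))    # answer = argmax_i Y_i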
4 EXPERIMENTS
In this Section, we conduct a systematic empirical study about representation properties’ impacts on downstream performance. First, we introduce our experimental conditions in Section 4.1. Then we provide empirical evidence to challenge the necessity of disentanglement (Section 4.2) and to tell which property matters (Section 4.3).
4.1 EXPERIMENTS SETUP
We build upon the experimental conditions of van Steenkiste et al. (2019). Abstract visual reasoning tasks, i.e., RPMs, are solved through a two-stage process: data → (Stage-1) → representations → (Stage-2) → RPM answers. We first train Stage-1 models in an unsupervised manner and evaluate their disentanglement and informativeness. Then Stage-2 models are trained and evaluated on downstream tasks, yielding an abstract reasoning accuracy for each representation. Provided with a large number of (representation property score, downstream performance) pairs, we conduct a systematic study to investigate the necessity of disentanglement. More implementation details are available in Appendix A.
Datasets. We replicate the RPM generation protocol of van Steenkiste et al. (2019). The panel images come from disentanglement benchmark image datasets, namely, Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019) and 3DShapes (Burgess & Kim, 2018). The rows of RPMs are arranged following the logical AND of ground-truth factors. As for hardness, we only keep the hard-mixed setting, whose contexts and candidates are more confusing. According to the generation process, the number of possible RPMs is sufficiently large (about $10^{144}$), allowing us to produce fresh samples throughout training.
Reference models. Stage-1 models extract representations from RPM’s panels. To ensure the generalizability of the results, we include 360 disentangled VAEs (denoted as DisVAEs) and 360 BYOLs. Our choices of Stage-1 models cover both disentangled and general-purpose representation learning methods. Moreover, we are interested in the overall relationship between representation properties and downstream performance. Therefore we need to study the correlation between two distributions, i.e., representation metric scores and downstream performance. For this, we include various samples for both Stage-1 and Stage-2 to ensure they are representative enough. For Stage-1, a diverse set of configurations are included for each type of representation learning model. According to the histograms in Appendix C.4, our choices span various disentanglement and informativeness scores. For Stage-2, to better estimate the downstream performance distribution, we use multiple Stage-2 configurations for each representation instead of searching for the best one. Specifically, we train 10 Stage-2 models (5 WReNs and 5 Transformers) for every Stage-1 model. Stage-2 configurations are randomly sampled from a search space described in Appendix A.3 and shared across Stage-1 models. By this, we ensure fair comparisons across representations.
Training protocol. Training is conducted two-staged. Firstly, we train Stage-1 models in an unsupervised manner on the dataset consisting of RPMs’ panels, i.e., Abstract dSprites or 3DShapes. For DisVAE models, we use the training protocol of van Steenkiste et al. (2019), while for BYOL models, we follow Cao et al. (2022). In Stage-2, all models are trained for 10K iterations with a batch size of 32. After every 100 iterations, we evaluate the accuracy on newly generated 50 mini-batches of unseen RPM samples for validation and another 50 mini-batches for testing.
Evaluation protocol. We first evaluate the two stages separately. Then we analyze the relationship between the two stages, i.e., representation properties and downstream performance. Specifically, to challenge the necessity of disentanglement, we are interested in whether more disentangled representations lead to better downstream performance. Further, if it turns out that disentanglement is of limited importance, can we find another metric that better accounts for downstream performance? Therefore, for Stage-1, we employ representation metrics described in Section 3.2 to measure two aspects: disentanglement and informativeness. For all Stage-1 models, we compute the following metric scores: BetaVAE score, FactorVAE score, MIG, SAP, and LR accuracy. DCI Disentanglement is only evaluated for DisVAEs since it takes hours to develop the Gradient Boosting Trees required during the evaluation process on high-dimensional representations of BYOLs (Cao et al., 2022). For Stage-2, we inspect accuracy on newly generated test sets every 100 iterations, yielding accuracy for multiple training steps. Since every step sees fresh samples, we employ training curves to measure sample efficiency. We also report accuracy-#samples curves in Appendix C.2 .
To summarize the downstream performance of a Stage-1 model over the 5 WReNs or 5 Transformers in Stage-2, we report the mean accuracy, denoted as WReN or Trans., and the max accuracy, denoted as WReN⋆ or Trans.⋆. Finally, we calculate the rank correlation (Spearman) between the mean performance of Stage-1 models (WReN and Trans.) at certain Stage-2 steps and their Stage-1 metric scores. A larger correlation indicates a higher significance of the representation property for downstream performance.
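As a minimal sketch of this evaluation protocol, the snippet below computes the Spearman rank correlation between one Stage-1 metric and the mean downstream accuracy at a fixed step; the array names are hypothetical.

from scipy.stats import spearmanr

def property_significance(metric_scores, downstream_acc):
    # metric_scores: one Stage-1 metric score per representation model
    # downstream_acc: mean Stage-2 accuracy (WReN or Trans.) at a fixed step
    rho, _ = spearmanr(metric_scores, downstream_acc)
    return rho  # larger rho => the property matters more for downstream accuracy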
4.2 ARE DISENTANGLED REPRESENTATIONS NECESSARY?
Hereafter we challenge the necessity of disentanglement. We begin by comparing the downstream performance of a disentangled representation versus a deliberately designed, entangled representation. Then we discuss the necessity of the disentanglement inductive bias by evaluating the performance of disentanglement and general-purpose representation learning methods.
Effects of attenuating disentanglement. We first construct the most disentangled representations, i.e., the normalized true factor values. We normalize the true factor values to have zero means and unit standard deviations, yielding 6-d representations (note that Abstract dSprites and 3DShapes are both labeled with 6 ground truth factors). Then we rotate the constructed representations by multiplying randomly generated orthonormal matrices. Afterward, each dimension of the rotated feature captures a combination of factors, thus destroying disentanglement. Finally, we perform abstract reasoning training from true factors before and after rotations. We also conduct rotations on representations learned by DisVAEs.
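The construction above can be sketched in a few lines; factors is a hypothetical array of ground-truth factor values, and we assume scipy's ortho_group for sampling the random orthonormal matrix.

import numpy as np
from scipy.stats import ortho_group

def true_factor_representations(factors, seed=0):
    # factors: (N, 6) ground-truth factor values of the panel images
    z = (factors - factors.mean(axis=0)) / factors.std(axis=0)  # normalized true factors
    Q = ortho_group.rvs(dim=z.shape[1], random_state=seed)      # random orthonormal matrix
    return z, z @ Q  # disentangled representation and its rotated (entangled) variant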
We run 5 seeds defining the randomly generated rotation matrices and Stage-2 model configurations. We report results on 3DShapes with original/rotated true factors as representations and WReNs as Stage-2 models in Figure 2. As depicted in Figure 2a, there is little difference between performance before and after rotation throughout the training process. Yet Figure 2b shows significant drops in disentanglement metric scores. This surprising phenomenon suggests that even though we drastically entangle the representations, the downstream performance remains unchanged, firmly against the necessity of disentanglement. However, we can see from Figure 2b that LR scores are 100% before and after rotation. It is easy to understand because the rotation we applied
is just an invertible linear transform, which a simple LR can recover, not to mention more capable Stage-2 models. Moreover, we observe similar results for learned representations (Figure 3). We select the most disentangled DisVAE measured by FactorVAE score among the 180 DisVAE models trained on 3DShapes (recall Section 4.1). As shown in Figure 3, rotation does not hurt the performance of representations learned by DisVAEs, backing up our claim that disentangled representations might not be necessary to achieve good downstream performance. More results of rotation experiments on other datasets are reported in Appendix C.3.
Summary: Destroying disentanglement (by random rotation) in representations does not have a noticeable impact on downstream performance throughout training.
Advantages of disentanglement inductive bias. From previous results, we demonstrate that both high performance and high sample efficiency can be achieved even if we deliberately destroy disentanglement. Further, we are interested in the inductive biases of Stage-1 models: Do disentangled representation learning models have absolute advantages in downstream performance over general-purpose models? For this, we compare the downstream performance of the different families of learning models described in Section 4.1, including BYOL, β-VAE, AnnealedVAE, β-TCVAE, FactorVAE, DIP-VAE-I, and DIP-VAE-II. Among them, BYOL does not explicitly encourage disentanglement. On the other hand, all DisVAEs are disentangled representation learning methods. From a large pool of 7200 checkpoints, we report the best performance for each model family.
Figure 4 shows overviews of training trajectories of the Stage-1 models with the highest-performing WReN and Trans. on 3DShapes over multiple training steps. For WReN as Stage-2 models (Figure 4a), BYOL leads at the beginning, then DisVAEs catch up, and finally BYOL converges at a higher accuracy. In contrast, when Stage-2 models are Transformers, BYOL's curve grows faster, but DisVAEs and BYOL converge with comparable performance. In general, the two curves evolve in almost identical patterns with small gaps, indicating that the disentanglement inductive bias is of limited utility in improving downstream sample efficiency. A corresponding analysis on Abstract dSprites is available in Appendix C.3, where we reach the same conclusions. As for final performance, we report maximal WReN, WReN⋆, Trans. and Trans.⋆ across different Stage-2 models and datasets in Table 1. We select checkpoints to evaluate based on validation accuracy. In particular, the best WReN and Trans. of BYOL are higher than those of DisVAEs. In addition, it appears that BYOL performs better than or on par with DisVAEs in terms of WReN⋆ and Trans.⋆. In particular, BYOL outperforms DisVAEs on Abstract dSprites by a considerable margin.
Summary: Models not intended for disentangled representation learning can reach superior or comparable downstream performance. Therefore disentanglement inductive bias does not necessarily lead to better sample efficiency or final accuracy.
4.3 WHICH PROPERTY MATTERS FOR DOWNSTREAM PERFORMANCE?
The results in Section 4.2 provide encouraging cases against the necessity of disentanglement. Additionally, we are interested in several further issues: (1) Which property matters most for downstream performance? (2) How can we interpret the previously claimed benefits of disentanglement (Bengio et al., 2013; Higgins et al., 2016; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020)? On account of these questions, we start by investigating how different representation properties influence downstream accuracy. We include informativeness and various disentanglement metrics.
Recall that we train 720 Stage-1 and 7200 Stage-2 models (see Section 4.1). By taking WReN and Trans. as measurements (average reasoning accuracy over 5 WReNs or 5 Transformers), we yield 720 representations paired with their downstream performance. Generally, our analysis is based on rank correlation (Spearman) between representation metric scores and downstream performance. If the correlation score is high, we can conclude that the representation property measured by the considered metric score is significant to downstream performance.
The representation property of the most significance. We calculate the rank correlation between downstream accuracy and disentanglement and informativeness scores. Meanwhile, we report rank correlation at steps 1K, 2K, 5K, and 10K, and the step with the highest validation accuracy. From correlations at different training steps, we can tell how a representation property affects sample efficiency.
Figure 5 displays rank correlations between representation metric scores and abstract reasoning test accuracy on 3DShapes. Firstly, we find that Logistic Regression accuracy (LR) correlates most with downstream performance. The strong correlation is exhibited for all considered models at multiple steps. Since LR requires sufficient information to be captured and easily extracted from representations, we can conclude that informativeness matters most in broad conditions. In contrast, we observe that the importance of disentanglement varies among Stage-1 model families. Disentangled representation learning models (DisVAEs) exhibit strong positive correlations for several disentanglement metrics (but weaker than LR), such as FactorVAE score and DCI Disentanglement. However, their significance does not apply to BYOL, where the correlation of disentanglement is mild or even negative. In Figure 6 we plot the (WReN, metric score) pairs at step 10000. Indeed, for BYOL-WReN on 3DShapes, we can see the linear regression provides a good fit of downstream accuracy and informativeness metrics. As for disentanglement metrics, we can see that the BetaVAE score and FactorVAE score suffer from narrow spreads. For MIG and SAP, the regression lines have negative slopes. We conduct a similar analysis on Abstract dSprites and make the same observations. Please refer to Appendix C.4 for more details.
Summary: The informativeness influences downstream performance most. The results are consistent across datasets and model structures.
Understanding the previously claimed success of disentanglement. Previous works (van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020; Locatello et al., 2020) have reported empirical evidence backing up the advantages of disentangled representations. Consistently, we observe relatively strong correlations with disentanglement metrics, especially when Stage-1 models are DisVAEs in Figure 5. Based on our conclusion on the significance of informativeness, we study the DisVAE-WReN case and provide some insights to explain why the disentanglement metrics have a high correlation with downstream performance in some cases.
We compute the overall correlations between metrics. The results are shown in Figure 7. For DisVAEs, we find that informativeness and disentanglement have high correlation scores. In particular, we can observe relatively strong correlations between LR and the FactorVAE score and BetaVAE score. Accordingly, these disentanglement metrics exhibit relatively strong correlations with downstream performance in Figure 5a. In contrast, the other disentanglement metrics correlate only mildly with LR, and they are ineffective for downstream performance. Therefore, disentanglement metrics are not truly predictive of downstream performance, but LR is.
To “purify” the effect of disentanglement, a natural question is: If two representations are of close informativeness, is the more disentangled one more helpful for downstream tasks? For this, we employ the adjusted metrics of Locatello et al. (2019a):
$$\text{Adj. Metric} = \text{Metric} - \frac{1}{5}\sum_{i \in N(\text{LR})} \text{Metric}_i, \qquad (2)$$
For a representation and a certain metric (we care more about disentanglement metrics), we denote its original metric score as Metric. Then we find its 5 nearest neighbors in terms of LR, which we write as N(LR). Finally, the difference between the original metric score and the mean score of the nearest neighbors is reported as adjusted metrics. Intuitively, we calculate the relative disentanglement for representations with close LR.
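A minimal sketch of Eq. (2) is given below; whether a model is excluded from its own neighborhood N(LR) is our assumption, and the array names are hypothetical.

import numpy as np

def adjusted_metric(metric, lr_acc, k=5):
    # metric, lr_acc: (num_models,) arrays with one score per representation
    adj = np.empty_like(metric, dtype=float)
    for i in range(len(metric)):
        dist = np.abs(lr_acc - lr_acc[i])
        dist[i] = np.inf                       # exclude the model itself
        nbrs = np.argsort(dist)[:k]            # N(LR): the k models closest in LR
        adj[i] = metric[i] - metric[nbrs].mean()   # Eq. (2)
    return adj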
Figure 7b displays correlations between adjusted metrics and downstream performance. We find that all adjusted disentanglement metrics correlate only mildly with downstream performance. From this, we can see that when informativeness is close, being disentangled contributes only a small portion to downstream performance when the downstream training steps are limited (in our case, less than or equal to 2000 steps; see Figure 4 and Figure 7).
Summary: Informativeness is the metric most predictive of downstream performance. Disentanglement only brings small extra benefits at the very beginning of downstream training.
5 CONCLUSION
In this paper, we challenge the necessity of dimension-wise disentanglement for downstream tasks. We conduct a large-scale empirical study on the abstract visual reasoning task. We start by showing that high downstream performance can be achieved by less disentangled representations. In addition, we identify that informativeness is of the most significance. Finally, we conclude that dimension-wise disentanglement is unnecessary for downstream tasks that use deep neural networks with learned representations as input.
REPRODUCIBILITY STATEMENT
We provide information to reproduce our results in Appendix A. We commit to making our codes publicly available.
A REPRODUCIBILITY
In this Section, we provide implementation details to ensure reproducibility. In addition, we commit to making our codes, configurations, and running logs publicly available. All experiments are run on a machine with 2 Intel Xeon Gold 5218R 20-core processors and 4 Nvidia GeForce RTX 3090 GPUs.
A.1 REPRESENTATION LEARNING METHODS
We include both disentangled representation learning methods and general-purpose representation learning methods. i.e., DisVAEs and BYOL (Grill et al., 2020).
DisVAEs implementation. The DisVAEs include β-VAE (Higgins et al., 2016), AnnealedVAE (Burgess et al., 2018), β-TCVAE (Chen et al., 2018), FactorVAE (Kim & Mnih, 2018), and DIP-VAE-I and DIP-VAE-II (Kumar et al., 2017). We use the output of the encoder, the mean of $q_\phi(z|x)$, as representations. Hereafter, we introduce details for each method. The above methods encourage disentanglement by adding regularizers to the ELBO. Adopting the notation of Tschannen et al. (2018), their objectives can be written in the following unified form:
$$\mathbb{E}_{p(x)}\big[\mathbb{E}_{q_\phi(z|x)}[-\log p_\theta(x|z)]\big] + \lambda_1\,\mathbb{E}_{p(x)}[R_1(q_\phi(z|x))] + \lambda_2\,R_2(q_\phi(z)), \qquad (3)$$
where $q_\phi(z|x)$ is the posterior parameterized by the output of the encoder, $p_\theta(x|z)$ is induced by the decoder output, $R_1, R_2$ are regularizers applied to the posterior and the aggregate posterior, and $\lambda_1, \lambda_2$ are the coefficients controlling regularization. In the objective of β-VAE, $\beta = \lambda_1 > 1$ and $\lambda_2 = 0$. Taking $R_1(q_\phi(z|x)) := D_{KL}[q_\phi(z|x)\,\|\,p(z)]$ forces the posterior to be close to the prior (usually a unit Gaussian), hence penalizing the capacity of the information bottleneck and encouraging disentanglement. FactorVAE and β-TCVAE take $\lambda_1 = 0, \lambda_2 = 1$. With $R_2(q_\phi(z)) := TC(q_\phi(z))$, they penalize the Total Correlation (TC) (Watanabe, 1960). FactorVAE estimates TC by adversarial training, while β-TCVAE estimates TC by biased Monte Carlo sampling. Finally, DIP-VAE-I and DIP-VAE-II take $\lambda_1 = 0, \lambda_2 \geq 1$ and $R_2(q_\phi(z)) := \|\mathrm{Cov}_{q_\phi(z)} - I\|_F^2$, penalizing the distance between the aggregate posterior and a factorized prior.
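As a minimal sketch of the unified objective in Eq. (3), the snippet below instantiates the β-VAE case (λ1 = β, λ2 = 0) with the standard Gaussian-prior KL as R1; the β value and the Bernoulli reconstruction term are illustrative assumptions, and the other DisVAEs would swap in their respective R2 penalties.

import torch
import torch.nn.functional as F

def beta_vae_loss(x, recon, mu, logvar, beta=4.0):
    # Eq. (3) with lambda1 = beta, lambda2 = 0 and
    # R1(q(z|x)) = KL(q(z|x) || N(0, I)); beta = 4.0 is an illustrative value
    rec = F.binary_cross_entropy(recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return rec + beta * kl  # other DisVAEs swap in TC / covariance penalties for R2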
We use the code and configurations from DisLib1 (Locatello et al., 2019b). As for parameters, we use the same sweep as van Steenkiste et al. (2019): for each of the 6 DisVAEs, we use 6 configurations. We train each model with 5 different random seeds. Since we consider 2 datasets (3DShapes and Abstract dSprites), we finally obtain 6 × 6 × 5 × 2 = 360 DisVAE checkpoints.

1https://github.com/google-research/disentanglement_lib.git

BYOL implementation. BYOL (Grill et al., 2020) is a contrastive learning method. Figure 8 shows its pipeline. For each image x, we first create two “views” of it by data augmentation, i.e., x1 and x2. Then they are input to the siamese encoders: the online encoder and the target encoder. Specifically, x1 is fed to the online encoder, while x2 is fed to the target encoder, yielding the outputs z1 and z2, respectively. As for architectures, both encoders share the same representation network and projection MLP. The prediction MLP is appended to the online encoder in order to prevent BYOL from learning trivial representations. The objective of BYOL is
$$\mathcal{L} = -\frac{\langle z_1, z_2\rangle}{\|z_1\|_2\,\|z_2\|_2}. \qquad (4)$$
This loss pulls the representations of the two “views” together. During training, the online encoder's parameters are updated by gradient descent, whereas the target encoder's parameters are updated as an Exponential Moving Average (EMA) of the online parameters (He et al., 2020). After training, we only keep the online encoder and use the output of the representation network as representations.
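A minimal sketch of the objective in Eq. (4) and the EMA update follows; note that in BYOL the similarity is computed between the online branch's prediction (the paper's z1) and the stop-gradient target projection (z2), and the decay rate tau = 0.99 is an illustrative value.

import torch
import torch.nn.functional as F

def byol_loss(p1, z2):
    # Eq. (4): negative cosine similarity between the online branch's prediction
    # p1 (for view x1) and the target branch's projection z2 (gradient stopped)
    return -F.cosine_similarity(p1, z2.detach(), dim=-1).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.99):
    # target parameters track an exponential moving average of the online ones
    for pt, po in zip(target_net.parameters(), online_net.parameters()):
        pt.mul_(tau).add_(po, alpha=1 - tau)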
We use the PyTorch implementation of BYOL2. We use the representation network architecture shown in Table 2, where the representation dimension D is a parameter to be set. Except for normalization and output dimensions, the representation network architecture of BYOL and the encoder architecture of DisVAEs are similar. As for the predictor and projector, we use the pipeline Linear → BN → ReLU → Linear with 256 hidden neurons. We train the BYOLs for 105 epochs using the Adam optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and the learning rate (lr) as a variable parameter. For augmentation, we use the pipeline of Cao et al. (2022) (in PyTorch style):
1. RandomApply(transforms.ColorJitter(xjit, xjit, xjit, 0.2), p=0.8) 2. RandomGrayScale(p=pgray) 3. RandomHorizontalFlip() 4. RandomApply(transforms.GaussianBlur((3,3), (1.0, 2.0)), p=0.2) 5. RandomResizeCrop(size=(64, 64), scale=(xcrop, 1.0))
The x_jit, p_gray, and x_crop are parameters to be set. x_jit controls how much to jitter brightness, contrast, and saturation; p_gray controls the probability of converting the image to grayscale; x_crop defines the lower bound for the random area of the crop.
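For reference, the five-step pipeline above can be assembled as follows with current torchvision transform names (RandomGrayscale, RandomResizedCrop); this is a sketch of the listed pipeline, not the authors' exact code.

from torchvision import transforms

def byol_augmentation(x_jit, p_gray, x_crop):
    # the five listed steps, assembled into a single torchvision Compose
    return transforms.Compose([
        transforms.RandomApply([transforms.ColorJitter(x_jit, x_jit, x_jit, 0.2)], p=0.8),
        transforms.RandomGrayscale(p=p_gray),
        transforms.RandomHorizontalFlip(),
        transforms.RandomApply([transforms.GaussianBlur((3, 3), (1.0, 2.0))], p=0.2),
        transforms.RandomResizedCrop(size=(64, 64), scale=(x_crop, 1.0)),
    ])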
We perform a parameter sweep on the cross product of value sets for the parameters D, norm, lr, x_jit, p_gray, and x_crop. On 3DShapes, we use the following parameter grid (in scikit-learn style):
[ {’D’: [32, 64, 128], ’lr’: [3e-2, 3e-3], ’norm’: [BatchNorm()], ’x_jit’: [0.6, 0.8], ’p_gray’: [0.5, 0.7, 0.9], ’x_crop’: [1.0]}, {’D’: [256], ’lr’: [3e-4, 3e-5], ’norm’: [BatchNorm(), GroupNorm(num_groups=4)], ’x_jit’: [0.4, 0.8], ’p_gray’: [0.3, 0.5, 0.7], ’x_crop’: [1.0]} ]
On Abstract dSprites, we use the following parameter grid:

[ {’D’: [32, 64, 128], ’lr’: [3e-3, 3e-4], ’norm’: [BatchNorm()], ’x_jit’: [0.6, 0.8], ’p_gray’: [0.0, 0.1, 0.2], ’x_crop’: [0.6]}, {’D’: [256], ’lr’: [3e-4, 3e-5], ’norm’: [BatchNorm(), GroupNorm(num_groups=4)], ’x_jit’: [0.4, 0.8], ’p_gray’: [0.0, 0.1, 0.2], ’x_crop’: [0.6]} ]

2https://github.com/lucidrains/byol-pytorch.git
For each parameter configuration, we run it with 3 random seeds. Finally, we trained 360 BYOLs in total.
A.2 ABSTRACT REASONING METHODS
We include two abstract reasoning network architectures: WReN (Barrett et al., 2018; van Steenkiste et al., 2019) and Transformer (Vaswani et al., 2017; Hahne et al., 2019).
WReN implementation. WReN consists of two parts: a graph MLP and an edge MLP. Here we use the same notation as in Section 3.3. For the representations of a trial $\mathrm{Stage1}(T_i)$, the edge MLP takes a pair of representations in $\mathrm{Stage1}(T_i)$ as input and embeds them into an edge embedding. Then all edge embeddings of $\mathrm{Stage1}(T_i)$ (in total $\binom{9}{2} = 36$) are summed and input to the graph MLP. Finally, the graph MLP outputs a scalar score predicting the correctness of the trial $T_i$.
We use the code of van Steenkiste et al. (2019) to implement WReN, with the same parameter search spaces. All WReNs are trained for 10K steps with a batch size of 32. The learning rate for the Adam optimizer is sampled from the set {0.01, 0.001, 0.0001}, while $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. For the edge MLP in the WReN model, we uniformly sample its hidden units from {256, 512} and its number of hidden layers from {2, 3, 4}. Similarly, for the graph MLP in the WReN model, we uniformly sample its hidden units from {128, 512} and its number of hidden layers from {1, 2} before the final linear layer that predicts the final score. We also uniformly sample whether we apply no dropout, dropout of 0.25, dropout of 0.5, or dropout of 0.75 to units before this last layer.
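A minimal sketch of this two-MLP structure is shown below; the hidden sizes are illustrative picks from the search space above, not the selected configuration.

import itertools
import torch
import torch.nn as nn

class WReNSketch(nn.Module):
    # Edge MLP embeds all C(9,2) = 36 representation pairs; their sum feeds
    # a graph MLP that outputs a scalar trial score.
    def __init__(self, rep_dim, edge_dim=512, graph_dim=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * rep_dim, edge_dim), nn.ReLU(),
                                      nn.Linear(edge_dim, edge_dim), nn.ReLU())
        self.graph_mlp = nn.Sequential(nn.Linear(edge_dim, graph_dim), nn.ReLU(),
                                       nn.Linear(graph_dim, 1))

    def forward(self, reps):                     # reps: (B, 9, rep_dim)
        edges = [self.edge_mlp(torch.cat([reps[:, i], reps[:, j]], -1))
                 for i, j in itertools.combinations(range(9), 2)]
        return self.graph_mlp(torch.stack(edges, 1).sum(1))  # (B, 1) trial score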
Transformer implementation. We simplify the architecture of Hahne et al. (2019). Here we treat $\mathrm{Stage1}(T_i)$ as a sequence. We first linearly project all representations and prepend a learnable [class] token. We then add learnable positional embeddings. The sequence is input to a stack of Transformer blocks (Vaswani et al., 2017). Finally, an MLP predicts a scalar score from the class embedding of the final Transformer block.
We implement the Transformer architecture ourselves with utilities of the DisLib code base. All Transformers are trained for the same number of steps and the same batch size as WReN, i.e., 10K steps with a batch size of 32. We use the Adam optimizer with weight decay and a cosine learning rate scheduler. The learning rate for the Adam optimizer is uniformly selected from {5e-4, 6e-4, 7e-4}. The depth of Transformer blocks is uniformly set to 2, 3, or 4. The dimensions of q, k, v of the self-attention module are uniformly 32 or 64. The MLP head uses the same architecture and parameter space as the graph MLP in WReN. For other fixed parameters, please refer to our code for details.
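A minimal sketch of this Stage-2 Transformer follows; it uses PyTorch's built-in encoder layers as a stand-in for the simplified blocks, and all sizes are illustrative picks from the search space above.

import torch
import torch.nn as nn

class TransformerScorerSketch(nn.Module):
    # Project the 9 panel representations, prepend a learnable [class] token,
    # add positional embeddings, run Transformer blocks, and score from the
    # class embedding of the final block.
    def __init__(self, rep_dim, d_model=64, depth=2, heads=4):
        super().__init__()
        self.proj = nn.Linear(rep_dim, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, 10, d_model))
        layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(d_model, 1)

    def forward(self, reps):                       # reps: (B, 9, rep_dim)
        x = self.proj(reps)
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1) + self.pos
        return self.head(self.blocks(x)[:, 0])    # scalar score from [class] token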
A.3 REPRESENTATION METRICS
In the main text, we employ disentanglement and informativeness metrics to measure the properties of representations. Here we provide more details.
Disentanglement metrics. We use the setup and implementation of Locatello et al. (2019b). Here we briefly introduce the details of our considered metrics, namely, the BetaVAE score (Higgins et al., 2016), FactorVAE score (Kim & Mnih, 2018), Mutual Information Gap (Chen et al., 2018), SAP (Kumar et al., 2017), and DCI Disentanglement (Eastwood & Williams, 2018). The BetaVAE score and the FactorVAE score predict the intervened factor from representations to measure disentanglement. The Mutual Information Gap and SAP compute, for each factor, the gap between the two most responsive representation dimensions; the difference is that MIG measures mutual information while SAP measures classification accuracy. DCI Disentanglement calculates the entropy of the relative importance of each latent dimension in predicting factors. We follow previous studies (Locatello et al., 2019b; van Steenkiste et al., 2019; Locatello et al., 2019a; Dittadi et al., 2020) in developing a Gradient Boosting Tree (GBT) for prediction during the DCI Disentanglement evaluation, though according to Eastwood & Williams (2018) any classifier could be used. As reported by Cao et al. (2022), the GBT takes hours to train on the high-dimensional representations learned by BYOL. Thus we only report the DCI Disentanglement score for DisVAEs.
Informativeness metrics. We use LR to measure the informativeness of representations. We train a Logistic Regression model to predict factor values from representations, using 10000 samples to train the LR. Unlike van Steenkiste et al. (2019), we use “multinomial” instead of “one-vs-rest” as the multi-class classification scheme. As shown in Figure 9a, for the same set of representations, the “one-vs-rest” LR has inferior prediction accuracy. Moreover, ranking by the scores of these two LRs yields different results: in Figure 9b we can observe different correlations for the “one-vs-rest” LR. To better estimate informativeness, we use the “multinomial” LR as the measurement.
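A minimal sketch of this measurement with scikit-learn is given below; evaluating on held-out samples and averaging over factors are our assumptions about the exact protocol.

from sklearn.linear_model import LogisticRegression

def lr_informativeness(train_reps, train_labels, test_reps, test_labels):
    # one ground-truth factor at a time; "multinomial" (softmax) rather than
    # "one-vs-rest", per the comparison above
    clf = LogisticRegression(multi_class="multinomial", max_iter=1000)
    clf.fit(train_reps, train_labels)
    return clf.score(test_reps, test_labels)  # averaged over factors to get LR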
A.4 ABSTRACT VISUAL REASONING DATASETS
We use the two abstract visual reasoning datasets developed by van Steenkiste et al. (2019), i.e., Raven's Progressive Matrices created from 3DShapes (Burgess & Kim, 2018) and Abstract dSprites (Matthey et al., 2017; van Steenkiste et al., 2019).
We sketch the rules here by taking the RPM in Figure 1 as an example. The reasoning attributes are the ground-truth factors of 3DShapes, i.e., floor hue, wall hue, object hue, scale, shape, and orientation. Each row in the 3 × 3 matrix has 1, 2, or 3 ground-truth factors taking a fixed value, and the 3 rows share the same fixed factors, though they might take different values. From the context panels, one should discover the underlying logical relationship. Finally, one is asked to fill the missing panel with one of the candidates. For the RPM in Figure 1, from the contexts we can infer that the fixed factors are wall hue, shape, and orientation. Then for the third row, from the first 2 panels, we know the values of the shared factors: the wall hue is blue, the shape is cylinder, and the orientation is the azimuth that makes the wall corner appear in the right part of the image. So we choose the candidate with these factor values as the solution, as shown in Figure 10a. Figure 10b shows a sample RPM with its answer on Abstract dSprites.
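The row rule can be checked mechanically; a sketch, assuming access to the (3, num_factors) factor values of one row, is

import numpy as np

def fixed_factors(row_factors):
    # row_factors: (3, num_factors) factor values of one RPM row; a factor is
    # part of the row's rule if it takes a fixed value across the three panels
    return np.where((row_factors == row_factors[0]).all(axis=0))[0]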
B ABLATIONS ON GENERAL-PURPOSE REPRESENTATION LEARNING METHODS
In the main text, we use BYOL as a representative of general-purpose representation learning methods. For completeness, here we introduce another general-purpose method, SimSiam (Chen & He, 2021). We modify the code of BYOL3 to train SimSiams on 3DShapes with the following parameter grid:
[ {’D’: [512], ’lr’: [3e-4, 3e-5], ’norm’: [BatchNorm()], ’x_jit’: [0.4, 0.8], ’p_gray’: [0.3, 0.5, 0.7], ’x_crop’: [0.6, 1.0]} ]

3https://github.com/lucidrains/byol-pytorch.git
For each configuration, we run with 3 seeds, finally yielding 72 SimSiams. We then use the same WReN configurations as for DisVAEs and BYOLs as Stage-2 models.
The results of SimSiam-WReN agree with our conclusions in the main text. As for the best performance, we have WReN = 85.1% and WReN⋆ = 94.1%, which are better than those of DisVAEs. Figure 11 shows the correlations of downstream performance and representation properties. LR still correlates most strongly at all considered steps.
C ADDITIONAL RESULTS
Figure 13: Accuracy vs. #samples curves of the most disentangled DisVAEs before and after rotation. It is consistent with Figure 3.
[Figure 14: (a) Stage-2=WReN, (b) Stage-2=Transformer. x-axis: Sample Size (#Batches); y-axis: Accuracy; curves: BYOL vs. DisVAEs.]
Figure 14: Accuracy vs. #samples curves of the Stage-1 models with the best WReN or Trans. It is consistent with Figure 4.
C.1 ADDITIONAL RESULTS OF FINAL PERFORMANCE
In Table 1 we report the best final performance of DisVAEs and BYOLs. Here we provide more details on which type of DisVAE, at which step, achieves the performance reported in Table 1. We can observe that the best DisVAEs vary with datasets and Stage-2 models. As for the best steps, except for 3DShapes-WReN, BYOL achieves its best performance earlier than DisVAEs.
C.2 ACCURACY-#SAMPLES CURVES
We employ training curves (accuracy-step) in the main text to evaluate sample efficiency following van Steenkiste et al. (2019). For completeness, here we show accuracy-#samples curves.
We present the accuracy-#samples versions of Figure 3 and Figure 4, i.e., Figure 13 and Figure 14. We train the same models as in the main text until convergence with fixed training data sizes of 100, 1000, 5000, 7000, and 10000 batches. Then for each sample size, we plot the test performance at the
step with the highest validation accuracy. We can see that the ranking of representations and the evolving patterns of both types of curves agree well.
C.3 ADDITIONAL RESULTS OF RANDOM ROTATION EXPERIMENTS
This section contains additional results of the random rotation experiments. Here we report the downstream performance of deliberately entangled (by random rotation) representations.
Figure 12 shows the same experiments as Figure 2 on Abstract dSprites. We can observe that the two curves in Figure 12a are almost identical, and in Figure 12b that disentanglement metric scores drop drastically while LR remains the same. We notice that LR is not 100%. This is because some factors of Abstract dSprites have too many support values, e.g., the x and y positions both have 32 possible values. However, our conclusion in the main text still holds, as we observe that LR is invariant to random rotation. On Abstract dSprites, we randomly rotate the most disentangled representations from DisVAEs (measured by FactorVAE score). In Figure 15, we can see that rotation has little impact on the training trajectories, so our conclusion is consistent across datasets.
C.4 ADDITIONAL RESULTS OF CORRELATIONS
In this part, we report additional results related to the correlations between representation metrics and downstream performance.
Absolute values of metric scores and downstream accuracy. We show the histograms as a sanity check of the distribution of metric scores and downstream accuracy. Figure 16 presents the score distributions of each metric. We report the mean metric scores with STDs to depict the overall properties for Stage-1 models in Table 4. Figure 17 and Figure 18 display the distributions of downstream performance.
Rank correlations. This part contains additional results on rank correlations. On 3DShapes, Figure 19 displays rank correlations between adjusted metrics and downstream accuracy, and Figure 20 shows the overall correlations between metrics. On Abstract dSprites, Figure 21 shows correlations between metrics and downstream performance, Figure 22 presents correlations between adjusted metrics and downstream performance, and Figure 23 displays the overall correlations between metrics.
Plots of (metric score, downstream accuracy) pairs. Figures 24, 25, 26, 27, 28, 29, 30, and 31 provide an in-depth view of the correlations, where we plot (metrics, downstream accuracy) pairs. | 1. What is the focus of the paper regarding disentangled representation and downstream tasks?
2. What are the strengths of the proposed approach, particularly in terms of experimental evaluation and informativeness measurement?
3. What are the weaknesses of the paper, especially regarding the comparison between disentangled representation and deliberate entanglement?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper challenges the common belief that a disentangled representation is useful for downstream tasks. Following up on van Steenkiste et al. (2019) and Locatello et al. (2019b), the authors focus on the informativeness of the representation and its correlation with the performance of downstream tasks.
Strengths And Weaknesses
Strong points:
-extensive experimentation and evaluation
-effective use of logistic regression as a measure of informativeness
-a comparison between the effects of disentangled representation vs. deliberate entanglement of disentangled representations on downstream task
Weak points:
-lack of theoretical contribution
-there are far more types of disentangling VAEs than general-purpose representation learning methods (only BYOL). Since downstream test performance is averaged over different VAEs, is this a fair comparison?
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and clear |
ICLR | Title
Class-wise Visual Explanations for Deep Neural Networks
Abstract
Many explainable AI (XAI) methods have been proposed to interpret neural networks' decisions on why they predict what they predict locally through gradient information. Yet, existing works, mainly designed for local explanation, lack a global view that shows class-wise explanations over the whole training procedure. To fill this gap, we propose to visualize global explanations in the input space for every class learned in the training procedure. Specifically, our solution finds a representation set that demonstrates the learned knowledge for each class. To achieve this goal, we optimize the representation set by imitating the model training procedure over the full dataset. Experimental results show that our method can generate class-wise explanations with high quality on a series of image classification datasets. Using our global explanation, we further analyze the model knowledge in different training procedures, including adversarial training and noisy label learning. Moreover, we illustrate that the generated explanations can lend insights into diagnosing model failures, such as revealing triggers in a backdoored model.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved unprecedented success over various tasks across different domains. However, there is still limited understanding of how to interpret their decisions due to their black-box nature. In other words, current DNNs lack the desirable transparency to explain why they predict what they predict. "Correct" predictions without reasonable explanations from DNNs may raise serious concerns about their reliability, especially in security-sensitive tasks such as medical image analysis (Nwadike et al., 2020) and autopilot in self-driving systems (Wang et al., 2021).
To acquire a better understanding of model predictions, many attempts have been made to interpret DNNs from different perspectives. A major focus has been placed on local explanations, either through saliency maps (Simonyan et al., 2013; Smilkov et al., 2017; Adebayo et al., 2018) and unique identification factors (Zhou et al., 2016; Selvaraju et al., 2017), or by extracting explainable attributions of the model's decision for each input example (Ribeiro et al., 2016; Hinton et al., 2015; Frosst & Hinton, 2017). These techniques can visualize feature representations for debugging trained DNNs in a local manner. Yet, local explanations can only interpret resulting model decisions for a particular input sample and are unable to visualize any intermediate attributions or model knowledge learned over a whole training procedure. Thus, a global explanation for viewing the overall training logic is desirable, i.e., visualizing intermediate explanations for verifying commonly-used hypotheses at different training steps. Apart from that, with respect to diagnosing model predictions, local explanations are further challenged by recent progress in backdoor attacks (Zhao et al., 2021). DNNs have been shown to be vulnerable to backdoor attacks, and local explanations fail to provide helpful (intermediate) explanations for analyzing backdoor patterns given a target label. To our best knowledge, no prior work can reveal complex backdoor patterns when visualizing a whole training process of DNNs.
Existing global explanation methods focus on extracting rules from DNN models (Hara & Hayashi, 2018; Lakkaraju et al., 2017) or distilling black-box models into small and easy-to-explain substitute models (Tan et al., 2019; Bastani et al., 2017). However, the quality of the generated explanations highly depends on the complexity of the designed rules and the substitute model. Besides, the substitute model is only an approximation of the explained model, which can introduce extra bias into the explanations.
Our solution. This paper gives a generic method for global visual explanation, given only a class-wise dataset and the corresponding training procedure, without requiring any additional hypotheses. Our global explanation can be applied to visualize the whole training process of any DNN, providing much richer visual explanations and knowledge than local explanations. Unlike prior global explanations, our method requires neither extracting rules from a black-box model nor distilling knowledge into a simpler model. Instead, it visualizes learned knowledge directly over representative data to attain more comprehensive explanations, rather than processing a simplified model summarized from the original data. Representative data, in the pixel space rather than the feature space, makes it easier to debug a model and study hidden mechanisms in a training procedure. Intrinsically, our superior results benefit from shortening the route of generating visual explanations, namely, distilling knowledge directly from data instead of from a model derived from data.
To be specific, given a training dataset and the corresponding training procedure, we optimize representative data points so that the model learned over the representative data approaches the one trained with the full dataset. By adding more reference snapshots along the training trajectory of the model parameters, our method supports extracting high-quality visualizations that reveal the global class-wise patterns learned under various training manners. Empirical results show desirable class-wise explanations with high fidelity in Sec. 4. Besides, Sec. 5 analyzes debugging a model failure (i.e., an inserted backdoor) that exists in the training procedure. To demonstrate generality, we utilize our class-wise explanation to analyze the feature representations learned by various training procedures in Sec. 6, including training dynamics, adversarial training, and noisy label learning.
Our contributions are summarized below:
• We propose a global visual explanation method in the input space that reveals the key representation points learned with respect to different classes. Our method can generate class-discriminative, natural-looking, and high-quality visual explanations.
• We show that the proposed method can help diagnose several model failures, including backdoor attacks, where local explanations fail completely.
• We show a proof-of-concept of how to understand model knowledge in different training phases and under different training methods. This is critical for understanding the generalization of deep learning and sheds light on how to design better training algorithms.
2 RELATED WORK
2.1 LOCAL EXPLANATION METHODS
Local explanation methods aim to help understand the decision procedure for a specific sample. At the pixel level, the saliency map (Simonyan et al., 2013) was the first to identify the sensitivity of each pixel to the final prediction by highlighting the largest scores in the computed gradients. The vanilla gradient method was later improved via smoothed gradients (Smilkov et al., 2017) and guided back-propagation (Adebayo et al., 2018). Class Activation Mapping (CAM) (Zhou et al., 2016) visualizes discriminative regions by simplifying the model into one without any fully-connected layer; Grad-CAM (Selvaraju et al., 2017) later achieves this without altering the model architecture by incorporating gradient information. Beyond direct pixel-level explanations, LIME (Ribeiro et al., 2016) locally approximates the model with a linear model, relying on instances randomly generated in the neighborhood of the sample to be explained; after fitting this model, interpretable features are projected back into the original feature space to obtain the final explanation. SHAP (Lundberg & Lee, 2017) re-formalizes the additive feature attribution problem as a cooperative game and uses Shapley values to assign each feature an importance score for a particular prediction. Recently, a class-wise local explanation method has been proposed to visualize a sparse representation of model knowledge (Zhao et al., 2021); however, it heavily depends on the given canvas (example). Although our method also focuses on class-wise patterns, our global explanation neither depends on a specific sample nor requires sparsity constraints for generating representations in the input space.
2.2 GLOBAL EXPLANATION METHODS
Different from local explanation methods, global explanation aims to describe the overall logic of the black box, including how the model parameters affect the resulting prediction on average, and what and how the model has learned during the training procedure. Global explanation was first formalized as extracting rules from the black-box model to interpret it (Bastani et al., 2017). As neural networks become more complex and deep, it becomes very hard to extract rules directly from the model. Model distillation (Hinton et al., 2015) is then applied to simplify complex black-box models into smaller yet interpretable ones with performance similar to the original model. Neural networks have been distilled into tree-structured models (Craven & Shavlik, 1995; Frosst & Hinton, 2017) and additive models for global explanation (Tan et al., 2018).
By selecting prototypes or representative samples, example-based explanation methods (Kim et al., 2016; Gurumoorthy et al., 2019) can provide a condensed view of the whole training dataset, selecting a subset of the training set as the global explanation. However, the selected prototype set is usually very large, containing thousands of samples, which makes it hard for users to obtain clear and concise explanations. Besides, example-based methods can only select existing samples from the training dataset, so they cannot represent the knowledge learned in training paradigms such as adversarial training. In contrast, our method needs only a dozen examples to constitute the global explanation, and it can reflect knowledge from various training paradigms by generating explanations directly from the trained models.
Activation maximization (AM) methods (Olah et al., 2017; Yosinski et al., 2015; Nguyen et al., 2017; 2019) visualize the features learned by individual neurons of DNN models in the input space. AM can also be used as a global explanation to visualize the learned class-wise patterns of DNN models by maximizing the output neuron of each class. However, direct maximization usually generates incoherent, high-frequency local patterns. To generate globally coherent and natural visualizations, AM needs to be combined with hand-designed regularization such as Gaussian blur, dropout, mean initialization, or deep generative models (Nguyen et al., 2019). Compared with AM, our method generates high-quality, globally coherent, and diverse class-wise explanations without any prior regularization.
3 METHODOLOGY
Suppose θ^R is the model trained on a dataset R with n samples. Our goal is to find a global explanation set S that contains a class-discriminative explanation S_i for every class i = 0, . . . , C − 1. In the meantime, the explanation set should be much smaller than the original set, i.e., |S| ≪ n. Unlike previous works that extract the set from the model directly (Bastani et al., 2017), we propose a new method that synthesizes S such that the model trained on S matches θ^R, since the model can be expressed as a function of representative samples for every class. Intuitively, this representation can be thought of as extracting critical points around the model's decision boundary. Take the support vector machine (SVM) as an example: the support vectors, whose number is much smaller than the size of the whole training set yet which determine the SVM model, can be thought of as the representation set we aim to extract.¹
3.1 SEARCHING CLASS-WISE EXPLANATION SET
For simplicity, we consider a classification model f_θ : X^m → Y^C that maps an input x in the m-dimensional input space X to a label y in the label space Y with C classes, where θ denotes the model parameters. We are given a training set R consisting of n instances (x_1, y_1), . . . , (x_n, y_n). Given a non-negative real-valued loss function L that penalizes the difference between the prediction f_θ(x) and the true label y, with (x, y) drawn from an unknown data distribution P, (x, y) ∼ P, we aim to find the model θ^R as:
$$\theta^R = \arg\min_{\theta} R(\theta) = \int L(f_\theta(x), y)\, dP(x, y) \approx \frac{1}{n} \sum_{i=1}^{n} L(f_\theta(x_i), y_i) \tag{1}$$
We formulate searching for the class-wise explanation set S as the following bi-level optimization problem:
$$\min_{S}\; D(\theta^S, \theta^R) \quad \text{s.t.} \quad \theta^S = \arg\min_{\theta} L_S(\theta) := \frac{1}{|S|} \sum_{i=1}^{|S|} L(f_\theta(x_i), y_i) \tag{2}$$
where D(·) is a distance metric measuring the distance between the model trained on the dataset R and the model trained on the representation set S; we use the sum of an MSE term and a cosine-similarity term in our implementation. In other words, we aim to find a set of critical representations for each class such that the model θ^S trained on S stays close to the original θ^R.¹
¹In Soudry et al. (2018), the authors theoretically proved that SGD implicitly converges to solutions that maximally separate the dataset. This also implies that some samples are more relevant than others to the decision boundary learned by the model.
By imitating the model parameters θ^R, we can extract class-discriminative features from the dataset R into S. Since the above problem (Eq. 2) is a bi-level optimization problem, we solve it via alternating minimization: in each iteration of the outer loop, we run gradient descent for a fixed number of iterations M to solve the inner problem:
$$\theta^S = \theta^S_M = \mathrm{OPT}(\theta^S_0, S, M), \quad \text{with updates} \quad \theta^S_{t+1} \leftarrow \theta^S_t - \eta_\theta \nabla_\theta L_S(\theta^S_t) \tag{3}$$
where OPT denotes the optimization process (gradient descent) with M iterations of updates, and θ^S_0 is initialized at the same initial point as θ^R. After obtaining θ^S_M, we solve the outer problem by taking one gradient descent step:

$$S_{t+1} \leftarrow S_t - \eta_S \nabla_S D(\theta^S, \theta^R) \tag{4}$$

After each update of S, we project it back into the same pixel value range as the samples in the dataset R, so that the explanation remains directly interpretable in the input space. At the same time, we limit the size of the explanation set S to at most 10 images per class. To make our global explanations independent of the sampling distribution P and reflective of the true model decision, S is always initialized from standard Gaussian noise.
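For concreteness, a minimal PyTorch sketch of the distance metric D and the projection step is given below. The function names are ours, and the (1 − cos) form of the cosine term is an assumption: the text only states that D sums an MSE term and a cosine-similarity term, and this sign convention makes the loss shrink as the two parameter vectors align.

```python
import torch
import torch.nn.functional as F

def parameter_distance(theta_s, theta_r):
    """Distance D between two parameter sets, computed on the flattened
    parameter vectors: an MSE term plus a (1 - cosine similarity) term."""
    s = torch.cat([p.reshape(-1) for p in theta_s])
    r = torch.cat([p.reshape(-1) for p in theta_r])
    return F.mse_loss(s, r) + (1.0 - F.cosine_similarity(s, r, dim=0))

def project_to_pixel_range(S, low=-1.0, high=1.0):
    """Clamp the explanation set S back into the pixel range of R so that
    it stays directly interpretable in the input space."""
    with torch.no_grad():
        S.clamp_(low, high)
    return S
```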
However, the above bi-level optimization is rather hard to solve directly. If the inner loop is unrolled too many times, the computation and memory costs grow rapidly and the problem becomes infeasible. At the same time, since the model parameter space is highly non-convex, it is extremely difficult to accurately approach θ^R with a limited number of inner iterations when starting from a randomly initialized θ^S_0.
3.2 IMITATING TRAINING TRAJECTORIES
To address the aforementioned challenges, recent works on data condensation (Zhao et al., 2021; Cazenavette et al., 2022) have proposed to imitate short ranges of the original training process rather than the final model parameters θ^R directly. Inspired by these works, we imitate short intervals of the whole training process at each iteration of the outer loop. Specifically, after each epoch of training the model on R, we save the model checkpoint θ^R_k, as well as the random initialization θ^R_0. This yields a training trajectory {θ^R_0, θ^R_1, . . . , θ^R_k, . . . , θ^R_T}, where each checkpoint can act as a reference point. For each iteration of the outer loop, we uniformly and randomly choose a reference point θ^R_k as the starting point and take the reference point N epochs later, θ^R_{k+N}, as the end point. With the model θ^S initialized at θ^R_k, we take M gradient descent steps in the inner loop to obtain θ^S_M and push it towards θ^R_{k+N}; we then update S according to the distance loss between θ^S_M and θ^R_{k+N}. To summarize, instead of directly imitating the final parameters θ^R, we imitate short-range training dynamics to obtain S. The original problem thus becomes:
$$\min_{S}\; \mathbb{E}_{k \sim [0, \ldots, T]}\, D(\theta^S_M, \theta^R_{k+N}) \quad \text{s.t.} \quad \theta^S_M = \arg\min_{\theta} L_S(\theta) := \frac{1}{|S|} \sum_{i=1}^{|S|} L(f_\theta(x_i), y_i) \tag{5}$$
where T is the length of the whole training trajectory. To avoid effects from the randomness of the initialization of θ^R, we also optimize the above problem over multiple training trajectories obtained by training θ^R from different initializations.
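A condensed sketch of one outer iteration of Eq. 5 follows, under our own simplifications: a single trajectory, plain SGD in the inner loop, a buffer-free model such as the ConvNet of Sec. 4, and PyTorch ≥ 2.0 for torch.func.functional_call. All names are illustrative, not identifiers from our code.

```python
import torch
from torch.func import functional_call

def outer_step(model, S, y_S, trajectory, M, N, lr_theta, opt_S):
    """One outer iteration: sample a start checkpoint theta^R_k, run M
    differentiable inner SGD steps on the synthetic set S, then move S to
    shrink D(theta^S_M, theta^R_{k+N}). `trajectory` is the list of
    per-epoch parameter dicts {theta^R_0, ..., theta^R_T}."""
    loss_fn = torch.nn.CrossEntropyLoss()
    k = torch.randint(0, len(trajectory) - N, (1,)).item()
    start, target = trajectory[k], trajectory[k + N]
    # initialize theta^S at theta^R_k as fresh leaves of the unrolled graph
    params = {n: p.detach().clone().requires_grad_(True)
              for n, p in start.items()}
    for _ in range(M):  # M differentiable inner SGD steps on S
        inner = loss_fn(functional_call(model, params, (S,)), y_S)
        grads = torch.autograd.grad(inner, list(params.values()),
                                    create_graph=True)  # keep graph to reach S
        params = {n: p - lr_theta * g
                  for (n, p), g in zip(params.items(), grads)}
    # distance D(theta^S_M, theta^R_{k+N}) on the flattened parameters
    s = torch.cat([p.reshape(-1) for p in params.values()])
    r = torch.cat([t.reshape(-1) for t in target.values()])
    cos = torch.nn.functional.cosine_similarity(s, r, dim=0)
    d = torch.mean((s - r) ** 2) + (1.0 - cos)
    opt_S.zero_grad()
    d.backward()        # gradient flows into S through the unrolled inner loop
    opt_S.step()
    with torch.no_grad():
        S.clamp_(-1.0, 1.0)  # project back into the image pixel range
    return d.item()
```

In practice, S is a leaf tensor initialized from standard Gaussian noise with fixed labels y_S and its own optimizer, e.g., S = torch.randn(10 * C, 3, 32, 32, requires_grad=True) and opt_S = torch.optim.SGD([S], lr=1000), matching the hyperparameters of Sec. 4.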
Furthermore, to make the global set independent of the sampling distribution P and reflective of the true model decision, S is initialized from standard Gaussian noise during the optimization.
Fidelity: It is necessary for an explanation method to have high fidelity; in other words, the predictions produced by an explanation should agree with the original model as much as possible. By pushing θ^S towards θ^R, the model θ^S trained on S achieves a similar generalization ability on the distribution P, and on other distributions as well. Here, we report the final MSE loss ∥θ^R_{k+N} − θ^S_M∥₂ obtained by our method, taking the original trajectory difference ∥θ^R_k − θ^R_{k+N}∥₂ as the reference; both are averaged over multiple trajectories with different starting points θ^R_k. Following the experimental settings in Sec. 4, the average values of ∥θ^R_{k+N} − θ^S_M∥₂ and ∥θ^R_k − θ^R_{k+N}∥₂ are 0.35 and 0.76, respectively. The distance between the obtained θ^S_M and the end point θ^R_{k+N} is thus significantly reduced compared with the original trajectory, so θ^S_M is close enough to the reference point θ^R_{k+N} after the update.
Side benefits: Since we keep multiple reference points along the training trajectory rather than only the final parameters θ^R, we can also investigate the feature representations learned during training, which has been an open question in deep learning, with several hypotheses already proposed to study it. By selecting reference model parameters at different stages, we provide a way to visualize the feature characteristics of each training stage. We defer this discussion to Sec. 6.1.
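Recording these reference points is straightforward; a sketch under our assumptions (plain SGD with a fixed learning rate, one snapshot per epoch) is:

```python
import copy
import torch

def record_trajectory(model, loader, epochs, lr=0.01):
    """Train on the full dataset R once, saving the random initialization
    theta^R_0 and one checkpoint per epoch; every snapshot later serves as
    a reference point for trajectory matching."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    trajectory = [copy.deepcopy(model.state_dict())]  # theta^R_0
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        trajectory.append(copy.deepcopy(model.state_dict()))  # theta^R_k
    return trajectory
```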
Although trajectory matching is also used in data condensation (Cazenavette et al., 2022), the goal of our work is entirely different: data condensation aims to improve the data efficiency of training by limiting the number of training samples. With this different objective, data condensation does not require generating highly interpretable explanations, so directly applying data condensation methods is not applicable here. For example, tricks like ZCA transformation, widely used in data condensation (Cazenavette et al., 2022), make the generated samples hard to interpret, and data condensation places no constraint on the pixel range of the generated samples. Besides, we apply our method to different applications, such as backdoor detection in Sec. 5 and visualizing the knowledge of various training paradigms in Sec. 6, which data condensation methods have not explored.
4 CLASS-WISE EXPLANATIONS
In this section, we conduct experiments showing that the proposed method generates high-quality, class-discriminative global explanations, and we compare our method with the activation maximization (AM) method.
Datasets and model architecture: We use four popular datasets: SVHN (Netzer et al., 2011), GTSRB (Stallkamp et al., 2012), CIFAR-10 (Krizhevsky et al., 2009), and Tiny ImageNet (Chrabaszcz et al., 2017). For the model architecture, we choose the simple ConvNet architecture (Gidaris & Komodakis, 2018), AlexNet (Krizhevsky et al., 2012), and VGG-11 (Simonyan & Zisserman, 2015) for illustration. The ConvNet has 3 duplicate convolution blocks followed by a linear classifier; each block consists of 128 filters, average pooling, ReLU activation, and instance normalization. For the large-scale dataset Tiny ImageNet, we use the ConvNet with 4 duplicate convolution blocks. Due to the page limit, we show the experimental results of AlexNet and VGG in Sec. B.1 of the Appendix.
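A sketch of this ConvNet is shown below; the filter count, pooling, activation, and normalization follow the description above, while the 3×3 kernel size and the exact block ordering are our assumptions.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch=128):
    # 128 filters, instance normalization, ReLU, and average pooling
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch, affine=True),
        nn.ReLU(inplace=True),
        nn.AvgPool2d(2),
    )

class ConvNet(nn.Module):
    """Duplicate convolution blocks followed by a linear classifier:
    depth=3 for SVHN/GTSRB/CIFAR-10, depth=4 for Tiny ImageNet."""
    def __init__(self, num_classes=10, in_ch=3, depth=3, img_size=32):
        super().__init__()
        blocks = [conv_block(in_ch)] + [conv_block(128) for _ in range(depth - 1)]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Linear(128 * (img_size // 2 ** depth) ** 2,
                                    num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```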
Implementation details: We set the number of inner loop iterations M to 60 and the interval N to 2 for all datasets when generating the visual explanations. For the reference point used as the starting point, we sample uniformly at random from the 0-20 epoch checkpoints for SVHN, GTSRB, and CIFAR-10, and from the 0-40 epoch checkpoints for Tiny ImageNet. The learning rate η_θ is set to 0.02 for updating θ^S_t in the inner loop, while η_S is 1000 for updating S in the outer loop; the number of outer loop iterations is 5000. We use standard Gaussian noise to initialize our class-wise explanations S. For the activation maximization (AM) baseline, we directly maximize the final output of each class before the softmax layer, and add Gaussian blur regularization to improve the visual quality of its explanations. For a fair comparison, we also limit the size of the AM explanation set to 10 images per class and initialize it with Gaussian noise. Further implementation details are given in the Appendix.
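For reference, the AM baseline can be sketched as follows; the step count, learning rate, and blur schedule are illustrative assumptions rather than the exact settings we used.

```python
import torch
import torchvision.transforms.functional as TF

def activation_maximization(model, class_idx, steps=1000, lr=0.1,
                            shape=(10, 3, 32, 32), blur_every=10):
    """AM baseline: maximize the pre-softmax output of one class starting
    from Gaussian noise, with periodic Gaussian blur as the regularizer."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the input images are optimized
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for t in range(steps):
        opt.zero_grad()
        (-model(x)[:, class_idx].sum()).backward()  # ascend the class logit
        opt.step()
        with torch.no_grad():
            if t % blur_every == 0:
                x.copy_(TF.gaussian_blur(x, kernel_size=3))
            x.clamp_(-1.0, 1.0)  # same pixel range as our explanations
    return x.detach()
```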
As shown in Figure 1, for each class of each dataset, our method generates high-quality class-discriminative explanations, while the AM method generates visually unidentifiable explanations with a lot of high-frequency noise and irregularly colored backgrounds. At the same time, our generated explanations show great diversity, while AM keeps generating very similar patterns. For example, the model trained on GTSRB learns the shape and color of traffic signs (circle, triangle, blue, red) and the content inside the signs (number, light, arrow). Moreover, the generated explanations show apparent number patterns even though the numbers in the SVHN dataset are collected from different sources with different backgrounds. Our class-wise explanations reveal that the model has learned to extract and combine important features from the background information.
To further evaluate whether the generated explanations are highly class-discriminative, we feed the explanations back into the original ConvNet model and examine the classification results. For example, for the generated explanations of the Cat class, we test whether the original ConvNet and other models classify them as Cat: the higher the classification accuracy, the more class-discriminative the generated explanation of each class. Apart from the original ConvNet, we also use other pretrained models with different architectures: ResNet50 (He et al., 2016), WideResNet28-10 (Zagoruyko & Komodakis, 2016), and DenseNet121 (Huang et al., 2017). The classification results are shown in Table 1.
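The check itself reduces to measuring how often a classifier assigns each explanation to its intended class; a minimal sketch (function name ours):

```python
import torch

@torch.no_grad()
def explanation_accuracy(classifier, S, labels):
    """Fraction of explanations in S that a (possibly different-architecture)
    pretrained classifier assigns to their intended class."""
    classifier.eval()
    return (classifier(S).argmax(dim=1) == labels).float().mean().item()
```

For instance, calling it with the Cat explanations and a label vector filled with the Cat index yields the corresponding accuracy.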
5 DIAGNOSING MODEL FAILURES: BACKDOOR ATTACK DETECTION
It has been shown recently that state-of-the-art deep neural networks are vulnerable to backdoor attacks (Gu et al., 2019; Chen et al., 2017). Backdoor attacks embed a hidden pattern during training so that the trained model still performs normally when the backdoor is not activated; otherwise, the prediction is manipulated to the attacker-designated label. The backdoor trigger can be a sparse and simple pattern (Gu et al., 2019) or a more sophisticated pattern, such as the Blended attack ('Hello-Kitty' trigger) (Chen et al., 2017) or the SIG attack (vertical stripe trigger) (Barni et al., 2019). Many detection and defense methods have been proposed to detect whether a model is backdoored (Wang et al., 2019). A major family of defenses uses a find-and-patch strategy, where the defense first finds the exact trigger and then filters that trigger from the dataset. However, existing defenses depend on the assumption that the backdoor trigger is sparse and simple, which makes them unable to defend against complex triggers such as Hello-Kitty or vertical stripes. We show that the proposed method can recover both simple and complex backdoor triggers accurately. Here we apply our method and the AM method to reveal the triggers learned by the ConvNet model on the CIFAR-10 dataset. Specifically, we choose three kinds of backdoor attacks: 1) the Blended attack ('Hello-Kitty' trigger) (Chen et al., 2017); 2) the SIG attack (vertical stripe trigger) (Barni et al., 2019); 3) the BadNet attack (grid trigger) (Gu et al., 2019). For all backdoor attacks, we select Dog as the target class. For the attack setup, we follow previous works (Li et al., 2021b;a) and set the poison rate to 0.05. Please refer to the Appendix for more details on the backdoor attacks. For the visual explanation of the model trained on CIFAR-10 with backdoor attacks, we fix the starting point at the first checkpoint in the expert trajectory and set N = 1. The other parameters for the visual explanation are the same as in Sec. 4.
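As an illustration of the attack setup, BadNet-style poisoning at rate 0.05 can be sketched as below; the trigger size, placement, and checkerboard pattern are our illustrative stand-ins for the exact trigger, and Dog corresponds to class index 5 in CIFAR-10.

```python
import torch

def poison_badnet(images, labels, target=5, rate=0.05, patch=3):
    """Stamp a grid trigger in the bottom-right corner of a random `rate`
    fraction of images and relabel them to the target class."""
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(images.size(0))[: int(rate * images.size(0))]
    trig = torch.zeros(patch, patch)
    trig[::2, ::2] = 1.0
    trig[1::2, 1::2] = 1.0  # checkerboard grid pattern
    images[idx, :, -patch:, -patch:] = trig * images.max()
    labels[idx] = target
    return images, labels
```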
As shown in the left parts of Figure 2, our method successfully reveals all kinds of inserted triggers with high quality, even though the poisoning rate is very low, while keeping the explanations of the other classes natural and clean. In contrast, AM fails to extract clear triggers from the poisoned models, and the recovered triggers differ clearly from the ground truth. Moreover, our method can quickly recover the backdoor trigger, as we only need one reference point, i.e., N = 1. Although only a small number of examples are poisoned, all explanations in the Dog class consistently carry the exact trigger with the same shape and location. One can thus easily notice from the proposed explanation whether a model has been backdoored and find the corresponding trigger. Our method can then be used to filter out examples carrying the revealed trigger and purify the model.
6 VISUALIZING MODEL KNOWLEDGE
In this section, we further demonstrate that the proposed method can be used to analyze the feature representations learned by different training methods and in different training phases.
6.1 TRAINING DYNAMICS
DNNs have shown great success on a variety of tasks. However, it is still a great challenge to understand why a model generalizes well to the test set, so it is important to know what the model has learned intrinsically over the whole training procedure. In this section, we show that the proposed method can be used as a tool to visualize model knowledge at different stages of training. We use the ConvNet model trained on the CIFAR-10 dataset as an example. To reveal the differences among the knowledge at different stages of the training process, we choose the starting points of the trajectory sequentially and set N = 1; that is, we make S imitate the training dynamics for only one epoch. The other parameters are the same as in Sec. 4. The knowledge learned at different stages is shown in Figure 3, with the starting points set sequentially to: (a) the random initialization point; (b) the checkpoint saved after 2 epochs; (c) after 5 epochs; (d) after 10 epochs; (e) after 20 epochs; (f) after 30 epochs.
As shown in Figure 3, in the early stage of training, the knowledge learned by the model carries rich information about the color and rough contour of the class object. For example, the background of the Ship class is always blue, and that of the Horse class is brown; rough shapes, such as the horse's body and the car's body, can be easily identified. As training continues, the model knowledge tends to include clean and sharp local traits of the object, such as the head of the horse and the buckhorn. In the meantime, texture becomes clearer and turns into the dominant feature in the later phases of training. Although the model's performance improves with training, the learned knowledge actually becomes less aligned with human perception. This observation aligns with Kumar et al. (2022), who also observed that representations from underfitting ImageNet models with modest validation accuracy achieve the best perception scores.
6.2 ADVERSARIAL TRAINING
Adversarial training (AT) has been one of the most effective methods to enhance adversarial robustness (Madry et al., 2018). At the same time, an adversarially trained "robust model" tends to produce feature representations with better semantic meaning that align better with human perception (Ilyas et al., 2019). Recently, adversarial perturbations have also been used to improve model generalization in both computer vision (Xie et al., 2020) and natural language processing (Gan et al., 2020). In this section, we study the model knowledge obtained by an adversarially trained model.
In the experiments, we use ℓ∞ PGD-AT (Madry et al., 2018) to train a ConvNet AT model. We use the cross-entropy loss and set the perturbation constraint to ε = 4/255, with 10 iterations of inner maximization and a step size of 2/255. More details of adversarial training are given in the Appendix. The other parameters for the visualizations are the same as those used in Sec. 4.
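With these settings, the PGD inner maximization can be sketched as follows (inputs assumed to lie in [0, 1]); in PGD-AT, each clean batch is replaced by its adversarial counterpart before the standard cross-entropy update.

```python
import torch

def pgd_attack(model, x, y, eps=4/255, alpha=2/255, iters=10):
    """l_inf PGD (Madry et al., 2018): ascend the cross-entropy loss
    within the eps-ball around x."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        grad, = torch.autograd.grad(loss_fn(model(x + delta), y), delta)
        with torch.no_grad():
            delta += alpha * grad.sign()              # gradient ascent step
            delta.clamp_(-eps, eps)                   # project to the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep x + delta valid
    return (x + delta).detach()
```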
From Figure 4, the most obvious difference is that the contrast of the adversarially trained features is much smaller than that of normal training. We find that the representation set from adversarial training has a much wider pixel range, [−5, 5], compared to [−1, 1] in normal training, so we project the representation set into [−1, 1] for visualization. This change of pixel range might be caused by the adversarial training mechanism: the model relies much more heavily on edge cases, since each model update computes the loss on adversarial examples generated in the inner loop. As we approximate this model with our representation set via the same training procedure, the representation set inherits the wider pixel range. Moreover, we can clearly observe that the adversarially trained feature representations align better with human perception and look "cleaner", which is also supported by other works (Ilyas et al., 2019; Xie et al., 2020).
6.3 NOISY LABEL TRAINING
Since the seminal work arguing that learning algorithms should cope with incorrect training examples (Angluin & Laird, 1988), machine learning with noisy labels has become a heated topic, as labels in real-world applications are often noisy and imperfect. It is therefore important to understand how the knowledge of a neural network changes when the labels are noisy. On the other hand, deep learning is well known for its ability to learn very complex features and to overfit; while label noise can be seen as a challenge, a proper noise strength may act as a good regularizer that helps the model generalize better. In this section, we study the differences in model knowledge under different levels of label noise. We again use the ConvNet model trained on the CIFAR-10 dataset as an example, and modify the dataset by adding various levels of noise (25%, 50%, 75%) to the labels of the training set. The noise is added by taking, say in the case of 25% label noise, 25% of the examples at random and randomly permuting their labels. For generating the class-wise explanations, we follow the hyperparameter settings used in Sec. 4.
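This corruption can be sketched as follows (function name ours):

```python
import torch

def add_label_noise(labels, rate=0.25):
    """Pick `rate` of the examples at random and randomly permute their
    labels among themselves, matching the corruption described above."""
    noisy = labels.clone()
    idx = torch.randperm(labels.size(0))[: int(rate * labels.size(0))]
    noisy[idx] = labels[idx[torch.randperm(idx.numel())]]
    return noisy
```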
As shown in Figure 5, a small amount of label noise, such as 25%, does help to achieve more human-like model knowledge. For example, the birds and cats in Figure 5 have much sharper and semantically better features than those from training without any label noise. Our observation is that label noise can act as a regularizer that brings the model knowledge closer to human perception. However, as the noise level increases, the quality drops significantly; when the noise level reaches 75%, the learned knowledge becomes barely recognizable.
7 CONCLUSION
In this paper, we propose a global visual explanation method that generates high-quality, class-discriminative explanations in the input space. We further show that the proposed method can be used to debug model failures, such as revealing backdoor triggers planted by an attack. Finally, we devise a way to study model knowledge under different training mechanisms, which sheds light on building more generalizable and trustworthy machine learning methods.
B FULL VISUALIZATION RESULTS
In Sec. B.1, we show the full visualization results of the class-wise explanations. In Sec. B.2, we show the full class-wise explanations of models poisoned by backdoor attacks on the CIFAR-10 dataset. In Sec. B.3, we show the full class-wise explanations of different phases of the model training process on the CIFAR-10 dataset. We show the full class-wise explanations of the adversarially trained model on the CIFAR-10 dataset in Sec. B.4, and of models trained under different levels of label noise on the CIFAR-10 dataset in Sec. B.5.
2https://github.com/utkuozbulak/pytorch-cnn-visualizations
B.1 THE VISUALIZATION RESULTS OF CLASS-WISE EXPLANATIONS
We show the whole set of class-wise explanations from our method for ConvNet on CIFAR-10, SVHN, GTSRB, and Tiny ImageNet in Figures 6, 7, 8, 9, and 10, respectively. For CIFAR-10, SVHN, and GTSRB, we set the size of the class-wise explanation set to 10 images per class, and to 1 for Tiny ImageNet. For all datasets, we use standard Gaussian noise as the initialization of our explanation set S. For the VGG-11 and AlexNet models, we show the visualizations generated by our method on CIFAR-10 in Figures 11 and 12.
The visualizations generated by the activation maximization method for these four datasets on the ConvNet are shown in Figures 13, 14, 15, 16, and 17.
The class-wise explanations for ConvNet on CIFAR-10 are shown in Figure 6; each row corresponds to one class.
The class-wise explanations for ConvNet on SVHN are shown in Figure 7.
The class-wise explanations for ConvNet on GTSRB are shown in Figures 8 and 9: Figure 8 shows the class-wise visualizations of class labels 0-18, and Figure 9 shows those of class labels 19-42.
Figure 9 below shows the class-wise visualizations of class labels 19-42 on the GTSRB dataset.
For the large-scale dataset Tiny ImageNet, we use the ConvNet with 4 convolution blocks. Due to the computation and memory cost, we set the size of the class-wise explanation set to 1 image per class on Tiny ImageNet. The class-wise explanations for ConvNet on Tiny ImageNet are shown in Figure 10; each subfigure corresponds to one class.
The class-wise explanations for VGG-11 and AlexNet on CIFAR-10 generated by our method are shown in Figures 11 and 12 below; each row corresponds to one class.
Similar to the results on the ConvNet model, our method still generates high-quality visualizations on larger CNN models, and the class-wise explanations show apparent class-wise features for each class. This also verifies that our method generalizes well across network architectures.
The class-wise explanations generated by AM for ConvNet on CIFAR-10 are shown in Figure 13; each row corresponds to one class.
The class-wise explanations generated by AM for ConvNet on SVHN are shown in Figure 14.
The class-wise explanations from the AM method for ConvNet on GTSRB are shown in Figures 15 and 16: Figure 15 shows the class-wise visualizations of class labels 0-18, and Figure 16 shows those of class labels 19-42.
Figure 16 below shows the AM class-wise visualizations of class labels 19-42 on the GTSRB dataset.
For the large-scale dataset Tiny ImageNet, we use the ConvNet with 4 convolution blocks; due to the computation and memory cost, we set the size of the class-wise explanation set to 1 image per class. The class-wise explanations generated by AM for ConvNet on Tiny ImageNet are shown in Figure 17; each subfigure corresponds to one class.
B.2 THE VISUALIZATION RESULTS OF BACKDOOR ATTACKS
We first show the visualizations of the three different triggers obtained by activation maximization in Figure 18. The visualizations of backdoor learning with the three attacks obtained by our method are shown in Figures 19, 21, and 20.
The visualizations of the three different triggers obtained by our method are shown in the following three figures.
B.3 THE VISUALIZATION RESULTS OF TRAINING DYNAMICS
In this section, we show the class-wise visualizations of different phases of the training process obtained with our method.
B.4 THE VISUALIZATION RESULTS OF AT MODEL
In this section, we show the class-wise explanations of the adversarially trained model.
B.5 THE VISUALIZATION RESULTS OF NOISY LABEL TRAINING
In this section, we show the class-wise explanations of models trained with the different levels of label noise in the following figures.

Review Questions

1. What is the main contribution of the paper in terms of learning representations for class explanations?
2. How does the proposed approach differ from other methods such as learning prototypes using auto-encoders or visualizing neurons and layers with tools like Open AI Microscope or Net Dissection?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper aims at learning a representation for each class as a potential explanation; this could be summarized as a global explanation for targeted classes, essentially segmenting the explanations into class descriptions.
Strengths And Weaknesses
Unclear how this is different from learning prototypes from neural network architectures using an auto-encoder branch.
Unclear how this is different from neuron/layer visualization from OpenAI Microscope [1] or other Lucid-based visualizations.
Missing literature on network dissection [2].
[1] https://openai.com/blog/microscope/
[2] http://netdissect.csail.mit.edu/
Clarity, Quality, Novelty And Reproducibility
clear paper
clear motivation
missing experimental results vs. state-of-the-art approaches
no end-user evaluation |
ICLR | Title
Class-wise Visual Explanations for Deep Neural Networks
Abstract
Many explainable AI (XAI) methods have been proposed to interpret neural network’s decisions on why they predict what they predict locally through gradient information. Yet, existing works mainly for local explanation lack a global knowledge to show class-wise explanations in the whole training procedure. To fill this gap, we proposed to visualize global explanation in the input space for every class learned in the training procedure. Specifically, our solution finds a representation set that could demonstrate the learned knowledge for each class. To achieve this goal, we optimize the representation set by imitating the model training procedure over the full dataset. Experimental results show that our method could generate class-wise explanations with high quality in a series of image classification datasets. Using our global explanation, we further analyze the model knowledge in different training procedures, including adversarial training, and noisy label learning. Moreover, we illustrate that the generated explanations could lend insights into diagnosing model failures, such as revealing triggers in a backdoored model.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved unprecedented success over various tasks across different domains. However, there is still limited understanding of how to interpret their decisions due to their black-box nature. In other words, current DNNs lack the desirable transparency ability to explain why they predict what they predict. "Correct" prediction without reasonable explanation from DNNs may bring a huge concern on their prediction’s reliability, especially in security-sensitive tasks such as medical image analysis (Nwadike et al., 2020), auto-pilot in the self-driving system (Wang et al., 2021) etc.
To acquire a better understanding of model prediction, many attempts have been made to interpret DNNs from different perspectives. A major focus have been put on the local explanations, either through the saliency map (Simonyan et al., 2013; Smilkov et al., 2017; Adebayo et al., 2018) and unique identification factors (Zhou et al., 2016; Selvaraju et al., 2017), or extracting explainable attributions on models’ decision for each input example (Ribeiro et al., 2016; Hinton et al., 2015; Frosst & Hinton, 2017). These techniques have a good ability to visualize feature representation utilized for debugging trained DNNs in a local manner. Yet, local explanations can only interpret resulting model decisions for a particular input sample, being unable to visualize any intermediate attributions or model knowledge learned in a whole training procedure. Thus, a global explanation for viewing overall training logic is desirable, i.e., visualizing intermediate explanations for verifying commonly-used hypotheses in different training steps. Apart from that, with respect to diagnosing model prediction, local explanations are further challenged by the recent progress in backdoor attacks (Zhao et al., 2021). DNNs have been shown vulnerable to backdoor attacks. However, local explanations fail to provide helpful (intermediate) explanations for analyzing backdoor patterns given a target label. To our best knowledge, no prior work can reveal complex backdoor patterns when visualizing a whole training process for DNNs.
Existing global explanations methods focus on extracting rules from DNN models (Hara & Hayashi, 2018; Lakkaraju et al., 2017) or distilling black-box models into small and easy-to-explained substitute models (Tan et al., 2019; Bastani et al., 2017). However, the quality of generated explanations highly depends on the complexity of the designed rules and the substitute model. Besides, the substitute model is a approximation of the explained model, which could also introduce extra biases to explanations.
Our solution. This paper gives a generic method for the global visual explanation, given an inputting class-wise dataset and training procedure only, without requiring any additional hypotheses. Our global explanation can be applied to visualize the whole training process of any DNNs, finally providing much richer visual explanations and knowledge than local explanations. Unlike prior global explanations, our method requires neither extracting rules from a black-box model nor distill knowledge to a simpler model. Instead, it visualizes learned knowledge directly over representative data to attain more comprehensive explanations, rather than processing a simplified model summarized from original data. Representative data, in the pixel space rather than feature space, enables debugging a model and studying hidden mechanisms more easily in a training procedure. Intrinsically, our superior results benefit from shortening the route of generating visual explanation, saying, distilling knowledge directly from data instead of from a model derived from data.
To be specific, given a training dataset and corresponding training procedure, we optimize representative data points so that the learned model over representative data could approach the same one trained with the full dataset. By adding more reference snapshots in the training trajectory of model parameters, our method supports extracting a high-quality visualization to reveal global class-wise patterns learned by specifying various training manners. Empirical results on datasets show desirable class-wise explanations with high fidelity in Sec. 4. Besides, Sec. 5 analyzes debugging model failure (i.e.,inserted backdoor) that exists in the training procedure. To demonstrate generality, we utilize our class-wise explanation to analyze feature representation learned by various training procedures in Sec. 6, including training dynamics, adversarial training, and noisy label learning.
Our contributions are summarized below:
• We propose a global visual explanation method in the input space to reveal the key representation points learned with respect to different classes. Our method could generate a class-discriminative, natural-looking, and high-quality visual explanation. • We show the proposed method could help in diagnosing several model failures, including backdoor attacks where the local explanation is completely failed. • We show a proof-of-concept of how to understand the model knowledge in different training phases and different training methods. This is critical for understanding the generalization of deep learning and sheds light on how to design a better training algorithm.
2 RELATED WORK
2.1 LOCAL EXPLANATION METHODS
Local explanation methods aim to help understand the decision procedure for a specific sample. At the pixel-level, saliency map (Simonyan et al., 2013) is the first to identify the sensitivity of each pixel towards the final prediction by highlighting the largest score in the calculated gradients. The vanilla gradient method is later improved via smoothed gradient (Smilkov et al., 2017), and guided back-propagation (Adebayo et al., 2018). Class Activation Mapping (CAM) (Zhou et al., 2016) visualizes discriminative regions by simplifying the model into one without any fully-connected layer. Later, Grad-CAM (Selvaraju et al., 2017) then makes CAM without altering model architecture by incorporating the gradient information. Other than directly pixel-level explanation, LIME (Ribeiro et al., 2016) approximates the model with a linear model locally, which relies on instances randomly generated in the neighborhood of the sample to be explained. After deriving the model, interpretable features are then projected back into the original feature space to get a final explanation. SHAP (Lundberg & Lee, 2017) re-formalizes additive feature attribution problem into a cooperative game and use Shapley value to assign each feature an important score for a particular prediction. Recently, a class-wise local explanation method has been proposed to visualize a sparse representation of model knowledge (Zhao et al., 2021). However, the method heavily depends on the given canvas (example). Although our method also focuses on the class-wise pattern, our global explanation method neither depends on a specific sample nor requires sparse constraints for generating representations in the input space.
2.2 GLOBAL EXPLANATION METHODS
Different from local explanation methods, global explanation aims to describe the overall logic of the black box, including how the model parameter affects the resulting prediction on average, and what and how the model has been learned in the training procedure. The global explanation is first to be formalized as extracting some rules from the black-box model to interpret the model (Bastani
et al., 2017). As neural networks become more complex and deep, it is then very hard to extract rules directly from the model. Model distillation (Hinton et al., 2015) is then applied to simplify complex black-box models to a smaller yet interpretable one with a similar performance to the original model. Neural networks have been distilled into trees structure model (Craven & Shavlik, 1995; Frosst & Hinton, 2017) and an additive model for the global explanation (Tan et al., 2018).
By selecting the prototypes or representative samples, example-based explanation methods (Kim et al., 2016; Gurumoorthy et al., 2019) could provide a condensed view of the whole training dataset and also select a subset of the whole training set as global explanations. However, the selected prototype set is always very large, containing thousands of samples. It’s hard for users to directly obtain clear and concise explanations.Besides, example-based methods can only select existing samples from the training dataset. Therefore, those method are unable to represent the knowledge learned in various training paradigms like adversarial training. However, our method that only need a dozen examples to consist the global explanations could reflect knowledge from various training paradigms by directly generating explanations from trained models.
Activation maximization (AM) method (Olah et al., 2017; Yosinski et al., 2015; Nguyen et al., 2017; 2019) visualize the learned feature of various neurons of DNN models in the input space. It could also be used as a global explanation to visualize learned class-wise patterns of DNN models by maximizing output neuron of each class. However, directly taking maximization always generates less coherent and high-frequency local patterns. To generate the global coherent and natural visualizations, they need to be combined with hand-designed regularization like Gaussian blur, dropout, mean initialization, and deep generative models (Nguyen et al., 2019). Compared with AM method, our method can generate high-quality, global coherent, and diverse class-wise explanations without any prior regularization.
3 METHODOLOGY
Suppose θR is the model trained using the dataset R with n samples, our goal is to find a global explanation set S that contains class-discriminative explanation for every class Si, for i = 0, . . . , C− 1. In the meantime, the explanation set should be much smaller than the original set i.e |S| ≪ n. In this section, unlike the previous works extracting the set from the model directly (Bastani et al., 2017), we propose a new method by synthesizing S such that the model trained on S should be equal to θR, since the model could be expressed as a function of representation samples for every class. Intuitively, this representation could be thought of as extracting critical points around the model’s decision boundary. Let’s take the support vector machine (SVM) as an example. The support vectors, whose size is much smaller than the size of the whole training set, that determine the SVM model could be thought of as the representation set we aim to extract 1.
3.1 SEARCHING CLASS-WISE EXPLANATION SET
For simplicity, we consider a classification model fθ : X (m) → YC which maps input x in the input space Xwith m dimension input to a label y at the label space Y with C class and the model parameter is θ. Given a training set R consists of n instances (x1,y1), . . . , (xn,yn). Consider a non-negative real-valued loss function L that penalize the difference between the prediction fθ(x) and true label y from an unknown data distribution P , (x,y) ∼ P , we aim to find the model θR as:
θR = argmin θ R(θ) =
∫ L(fθ(x),y)dP (x,y) ≈ 1
n n∑ i=1 L(fθ(xi),yi) (1)
We formulate searching the class-wise explanation set S as the below bi-level optimization problem:
min S D(θS ,θR) s.t θS = argmin θ LS(θ) := 1 |S| |S|∑ i=1 L(fθ(xi),yi) (2)
where D(·) is a distance metric to measure the distance between the model trained by the dataset R and representation set S. We use the sum of the MSE loss and the cosine similarity in the implementation. In other words, we aim to find a set of critical representation for each class that let
1In Soudry et al. (2018), the authors theoretically proved that SGD implicitly converges to solutions that maximally separate the dataset. This also implies that some samples are more relevant than others to decision boundary learned by the model.
the model θS be trained on S is close to the original. By imitating model parameters θR, we could then extract class-discriminative features from the dataset R into S. Since the above problem (Eq.2) is a bi-level optimization problem, we can solve it using alternative minimization. Specifically, for each iteration of outer loop, we utilize gradient descent algorithm with a fixed number of iterations M to solve inner problem:
θS = θSM = OPT(θ S 0 ,S,M), the update: θSt+1 ← θSt − ηθ∇θLS(θSt ) (3)
where OPT means optimization process (gradient descent) with M iteration updates. θS0 is initialized with same initial point of θR. After obtaining θSM , we solve the outer problem by taking one step gradient descent: St+1 ← St − ηS∇SD(θS ,θR) (4) After obtaining S , we will project it into the same image pixel value range as samples in the dataset R to make the explanation directly interpretable in the input space. At the same time, we limit the the size of explanation set S to be 10 images per class at most. To make our global explanations independent of the sampling dataset distribution P and reflect the true model decision, S is always initialized from the standard Gaussian Noise.
However, the above bi-level optimization is rather hard to solve directly. If the inner loops are unrolled too many times, the computation and memory occupation will increase exponentially and the problem becomes infeasible. At the same time, since the model parameter space is highly non-convex, it becomes extremely difficult to accurately approach θR with limited inner iterations when starting from a random initialized θS0 .
3.2 IMITATING TRAINING TRAJECTORIES
To address the aforementioned challenges, recent works about data condensation (Zhao et al., 2021; Cazenavette et al., 2022) have proposed to imitate short ranges of original model training process rather than imitating the final model parameter θR directly. Inspired by those works, we propose to imitate short intervals of the whole training process for each iteration of the outer loop. Specifically, after each epoch of training model on R, we save the model checkpoint θRk as well as the random initialization point θR0 . We then get a whole training trajectory as {θR0 ,θR1 , . . . ,θRk , . . . ,θRT } and each checkpoint in the training trajectory could act as a reference point. For each iteration in outer loop, we uniformly and randomly choose a reference point as starting point θRk , and choose the reference point after N epochs θRk+N as the end point. With the model θ
S initialized as θRk , in the inner loop, we use M step gradient descent to obtain the θSM and then make it approach θ R k+N . We then update the S according to the distance loss between θSM and θRk+N . To sum, instead of directly imitating the final parameter θR, we choose to imitate the short range training dynamics to obtain the S. Therefore, we turn the original problem into the following optimization problem:
min S Ek∼[0,...,T ]D(θSM ,θRk+N ) s.t θSM = argmin θ LS(θ) := 1 |S| |S|∑ i=1 L(fθ(xi),yi) (5)
where T is the difference of the whole training trajectory. To avoid affect from randomness about initialization of θR, we also optimize the above problem on multiple training trajectories obtained by training θR with different initializations.
Furthermore, to make the global set independent of the sampling dataset distribution P and reflect the true model decision, during the optimization, S is initialized from the standard Gaussian Noise. Fidelity: It is necessary for the explanation method to have a high fidelity. In other words, the prediction produced by an explanation should agree with the original model as much as possible. By approaching the θS with θR, the model trained θS on S would have achieve a similar generalization ability on both distribution P and other distributions as well. Here, we show the final MSE loss, ∥θRk+N−θSM∥2 obtained by our method. We also take the original trajectory difference ∥θRk −θRk+N∥2 as the reference. We compute their average values over multiple trajectories starting from different starting point θRk . Following experimental settings in Sec. 4, the average values of ∥θRk+N − θSM∥2 and ∥θRk − θRk+N∥2 are 0.35 and 0.76 respectively. We can observe that the MSE between the obtained θSM and end point θ R k+N is significantly reduced compared with the original trajectory so that the θSM is close enough to the reference point θ R k+N after the update.
Side benefits: Since we have multiple reference points of the training trajectory rather than only the final parameter θR, we could also investigate the feature representation learned during the training dynamics, which has been a open question in the deep learning field and several hypothesis have already been proposed to study this problem. By selecting the reference model parameter in different stage, we now provide a way to visualize the feature characteristic of different training stage. We defer the discussion later in Sec. 6.1.
Although the proposed trajectory matching is used in data condensation (Cazenavette et al., 2022), the goal of our work is totally different from data condensation which aims to improve the data efficiency of the training procedure by limiting the number of training samples. With the different objectives, data condensation doesn’t require generating a highly interpretable explanation, so directly applying the data condensation method is not applicable. For example, tricks like ZCA transformation are widely used in data condensation (Cazenavette et al., 2022) however cause the generated samples hard to interpret. Also, there is no constraint on pixel range for generated samples in the data condensation as well. Besides, we also utilize our method for different applications such as backdoor detection in Sec. 5, and visualizing knowledge of various training paradigms in Sec. 6, where data condensation methods haven’t explored.
4 CLASSWISE EXPLANATION
In this section, we conduct experiments to show that the proposed method could generate high-quality, class-discriminative global explanation. And we also take comparisons between our method and Activation maximization (AM) method.
Datasets and model architecture: We use four popular datasets: SVHN (Netzer et al., 2011), GTSRB (Stallkamp et al., 2012), CIFAR-10 (Krizhevsky et al., 2009) and Tiny ImageNet (Chrabaszcz et al., 2017). For the model architecture, we choose the simple ConvNet architecture (Gidaris & Komodakis, 2018), AlexNet (Krizhevsky et al., 2012), and VGG-11 (Simonyan & Zisserman, 2015) for illustration. The ConvNet has 3 duplicate convolution blocks followed by a linear classifier. Each block consists of 128 filters, average pooling, ReLU activation and instance normalization. For the large scale dataset, Tiny ImageNet, we use the ConvNet with 4 duplicate convolution blocks. Due to the page limit, we show the experimental results of AlexNet and VGG in Sec. B.1 of Appendix.
Implementation details: We set inner loop iterations M and N to be 60 and 2 for all datasets to generate the visual explanations. For selecting reference point as the starting point, we uniformly and randomly select from 0-20 epoch checkpoints for SVHN, GTSRB, and CIFAR-10 and 0-40 epoch checkpoints for Tiny ImageNet. The learning rate ηθ is set to be 0.02 for updating the θSt in the inner loop, while ηS is 1000 for updating S in the outer loop. And the number of outer loop iteration is 5000. We use standard Gaussian noise to initialize our class-wise explanations S. For Activation maximization (AM) method, we directly maximize the final output of each class before softmax layer. We also add the Gaussian blur regularization in AM method to have better visual quality on the explanations. To have a fair comparison, we limit the the size of explanation set S to be 10 images per class and initialize S with Gaussian noise. The implementation details are shown in Appendix.
As shown in Figure 1, for each class of each dataset, our method could generate high-quality class-discriminative explanations, while AM method generate a visually unidentifiable explanation with a lot of high-frequency noise and irregular color background. At the same time, our generated explanation shows a great diversity, while AM method keeps generating very similar patterns. For example, the model trained on GTSRB would learn the
shape and color of traffic signs (circle, triangle, blue, red) and the content inside the sign (number, light, arrow). Moreover, it could be observed that the generated explanation has an apparent number pattern although numbers in SVHN dataset are collected from different sources with different backgrounds. Our class-wise explanations reveal that the model has learned to extract and combine important features from the background information.
To further evaluate whether the generated explanation is highly class-discriminative, we put the explanations back into the original ConvNet model to see the classification results. For example, for the generated explanation of Cat class, we test if the original ConvNet and other models would classify them as Cat. The higher the classification accuracy, the highly class-discriminative the generated
explanation of each class. Apart from the original ConvNet, we also utilize other pretrained models with different architectures containing ResNet50 (He et al., 2016), WideResNet28-10 (Zagoruyko & Komodakis, 2016), and DenseNet121 (Huang et al., 2017). The classification results are shown in Table 1.
5 DIAGNOSING MODEL FAILURES: BACKDOOR ATTACK DETECTION
It has been shown recently that the current state-of-art deep neural networks are vulnerable to backdoor attacks (Gu et al., 2019; Chen et al., 2017). Backdoor attacks aim to embed a hidden pattern in the training so that the trained model would still perform normally when the backdoor is not activated; otherwise, the prediction would be manipulated to the attack designated label. The backdoor trigger could be a sparse and simple pattern (Gu et al., 2019) or be a more sophisticated designed pattern like Blended attack (’Hello-Kitty’ trigger) (Chen et al., 2017), SIG attack (vertical stripe trigger) (Barni et al., 2019). A lot of detection and defense methods have thus been proposed to detect whether the model is backdoored (Wang et al., 2019). One of the major requirement of the defenses is to use the find&patch strategy, where the defense method first find the exact trigger and then filter that trigger in the datasets. However, the existing defenses all depend on the assumption that the backdoor trigger has to be sparse and simple, which is unable to defend against some complex triggers such as Hello-Kitty or vertical stripe. We show the proposed method could recover both simple and complex backdoor triggers accurately. Here we apply our method and AM method to reveal the trigger learned by the ConvNet model in the CIFAR-10 dataset. Specifically, we choose three different kinds of backdoor attacks: 1) the Blended attack (’Hello-Kitty’ trigger) (Chen et al.,
2017); 2) the SIG attack (vertical stripe trigger) (Barni et al., 2019); 3) the Badnet attack (grid trigger) (Gu et al., 2019). For all backdoor attacks, we select Dog as the targeted class. For the attack setups, we follow previous works (Li et al., 2021b;a) and set the poison rate to 0.05. Please refer to the Appendix for more details of the backdoor attacks. For the visual explanation of the model trained on CIFAR-10 with backdoor attacks, we fix the starting point at the first checkpoint in the expert trajectory and set N to 1. The other parameters for the visual explanation are the same as in Sec. 4.
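As an illustration of the attack setup, a Badnet-style poisoning step might look as follows; the exact trigger pattern and placement are our assumptions, and we assume CIFAR-10 tensors normalized to [-1, 1] with Dog as class index 5.

```python
import torch

def poison_badnet(images, labels, target_class=5, poison_rate=0.05):
    """Stamp a small grid trigger on a random 5% subset of the training
    images and relabel the poisoned images to the target class (Dog)."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    trigger = torch.ones(3, 3, 3)       # illustrative 3x3 patch per channel
    trigger[:, 1, ::2] = -1             # crude grid / checkerboard pattern
    trigger[:, ::2, 1] = -1
    images[idx, :, -3:, -3:] = trigger  # stamp into the bottom-right corner
    labels[idx] = target_class
    return images, labels
```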
As shown in the left parts of Figure 2, our method successfully reveals all kinds of inserted triggers with high quality, even when the poisoning rate is very low. Simultaneously, our explanation keeps the other classes natural and clean. In contrast, AM fails to extract clear triggers from poisoned models, and its recovered triggers differ clearly from the ground truth. Moreover, our method can recover the backdoor trigger quickly, as we only need one reference point, i.e., N = 1. Even though only a small number of examples are poisoned, all explanations in the Dog class consistently carry the exact trigger with the same shape and location. One can thus easily notice whether a model has been backdoored through the proposed explanation and find the corresponding trigger. Our method could then be used to filter out the examples carrying the revealed trigger and purify the model.
6 VISUALIZING MODEL KNOWLEDGE
In this section, we further demonstrate that the proposed method can be used to analyze the feature representations learned by different training methods and phases.
6.1 TRAINING DYNAMICS
DNNs have shown great success on a variety of tasks. However, it remains a great challenge to understand why a model generalizes well on the test set, so it is important to know what the model has learned intrinsically throughout the training procedure. In this section, we show that the proposed method can be used as a tool to visualize model knowledge at different stages of training. We use the ConvNet model trained on the CIFAR-10 dataset as an example. To reveal the differences among the knowledge at different stages of the whole training process, we choose the starting point of the trajectory sequentially and set N = 1; that is, we make S imitate the training dynamics for only one epoch. The other parameters are the same as in Sec. 4. The knowledge learned at different stages is shown in Figure 3, with the start points set sequentially: (a) the random initialization point; (b) the checkpoint saved after 2 epochs; (c) the checkpoint saved after 5 epochs; (d) the checkpoint saved after 10 epochs; (e) the checkpoint saved after 20 epochs; (f) the checkpoint saved after 30 epochs. A driver for this procedure is sketched below.
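The stage-wise procedure can be driven by a loop of the following shape; `optimize_explanations` is a hypothetical wrapper around the trajectory-matching objective of Sec. 3, and the checkpoint paths are illustrative.

```python
import torch

# Visualize the knowledge at each stage by matching one-epoch intervals
# (N = 1) between successive checkpoints of the expert trajectory.
stage_epochs = [0, 2, 5, 10, 20, 30]  # start points (a)-(f)
stage_explanations = {}
for k in stage_epochs:
    start = torch.load(f"checkpoints/epoch_{k}.pt")    # theta^R_k
    end = torch.load(f"checkpoints/epoch_{k + 1}.pt")  # theta^R_{k+1}
    stage_explanations[k] = optimize_explanations(start, end)  # hypothetical
```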
As shown in Figure 3, in the early stage of training, the knowledge learned by the model carries rich information about the color and rough contour of the class object. For example, the background of the Ship class is always blue, and that of the Horse class is brown. Also, some rough shapes, such as the horse's body and the car's body, can be easily identified. As training continues, the model knowledge tends to include clean and sharp local traits of the object, such as the head of the horse and the buckhorn. In the meantime, texture becomes clearer and turns into the dominant feature in the later phase of training. Although the model attains better performance with training, the learned knowledge is actually less aligned with human perception. This observation is in line with Kumar et al. (2022), who also observed that representations from underfitting ImageNet models with modest validation accuracy achieve the best perception scores.
6.2 ADVERSARIAL TRAINING
Adversarial training (AT) has been one of the most effective methods to enhance adversarial robustness (Madry et al., 2018). At the same time, an adversarially trained "robust model" tends to produce feature representations with better semantic meaning that align better with human perception (Ilyas et al., 2019). Recently, adversarial perturbations have also been used to improve model generalization in both the computer vision (Xie et al., 2020) and natural language processing domains (Gan et al., 2020). In this section, we study the model knowledge obtained by an adversarially trained model.
In the experiment, we use ℓ∞ PGD-AT (Madry et al., 2018) to train a ConvNet AT model. We use the cross-entropy loss and set the perturbation constraint to ϵ = 4/255, the number of inner maximization iterations to 10, and the step size to 2/255. More details of adversarial training are given in the Appendix. The other parameters for conducting the visualizations are the same as those used in Sec. 4.
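For reference, the PGD-AT inner maximization under the stated hyperparameters can be sketched as follows; data normalization and pixel-range clamping details are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4/255, alpha=2/255, steps=10):
    """l_inf PGD inner maximization (Madry et al., 2018)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()  # ascend, then project to the ball
            delta.clamp_(-eps, eps)
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)          # inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```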
From Figure 4, the most obvious difference is that the contrast ratio of the adversarially trained features is much smaller than that of normal training. We find that the representation set from adversarial training has a much wider pixel range of [−5, 5], compared to [−1, 1] in normal training, so we have to project the representation set into the [−1, 1] range for visualization. The wider pixel range might be a consequence of the adversarial training mechanism: the model relies much more heavily on edge cases, since each model update depends on the loss computed on adversarial examples generated in the inner loop. As we approximate the representation set under adversarial training, the representation set therefore exhibits a wider pixel range. Moreover, we can clearly observe that the adversarially trained feature representations align better with human perception and look more "clean", which is also supported by other works (Ilyas et al., 2019; Xie et al., 2020).
6.3 NOISY LABEL TRAINING
Since the seminal work discussing how learning algorithms should cope with incorrect training examples (Angluin & Laird, 1988), machine learning with noisy labels has become a heated topic, as the labels in real-world applications are often noisy and imperfect. It is therefore important to understand how the knowledge of a neural network changes when the labels are noisy. On the other hand, deep learning is well known for its ability to learn very complex features and to overfit. While label noise can be seen as a challenge for current machine learning, a proper noise strength could act as a good regularizer that helps the model generalize better. In this section, we study the differences in model knowledge under different levels of label noise. Here, we again use the ConvNet model trained on the CIFAR-10 dataset as an example. We modify the dataset by adding various levels of noise (25%, 50%, 75%) to the labels of the training set. The noise is added by taking, say in the case of 25% label noise, 25% of the examples at random and randomly permuting their labels. For generating the class-wise explanations, we follow the hyperparameter setting used in Sec. 4.
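The label-noise injection described above amounts to the following small utility (a sketch; the permutation is applied only within the selected subset):

```python
import torch

def add_label_noise(labels, noise_rate):
    """Pick a `noise_rate` fraction of examples at random and randomly
    permute their labels among themselves."""
    noisy = labels.clone()
    n_noisy = int(noise_rate * len(labels))
    idx = torch.randperm(len(labels))[:n_noisy]
    noisy[idx] = labels[idx][torch.randperm(n_noisy)]
    return noisy
```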
As shown in Figure 5, a small amount of label noise such as 25% does help to achieve more human-like model knowledge. For example, the birds and cats in Figure 5 have much sharper and semantically better features than those from training without any label noise. Our observation is that label noise can act as a good regularizer that brings the model knowledge closer to human perception. However, as the noise level increases, the quality drops significantly; when the noise level reaches 75%, the learned knowledge becomes barely recognizable.
7 CONCLUSION
In this paper, we propose a global visual explanation method that generates high-quality and class-discriminative explanations in the input space. We further show that the proposed method can be utilized for debugging model failures, such as revealing backdoor triggers. Finally, we devise a way to study model knowledge under different training mechanisms, which sheds light on building more generalizable and trustworthy machine learning methods.
B FULL VISUALIZATION RESULTS
In Sec. B.1, we show full visualization results of the class-wise explanations. In Sec. B.2, we show full class-wise explanations of model poisoned by backdoor attacks on CIFAR-10 dataset. In Sec. B.3, we show full class-wise explanations of different phases of the model training process on CIFAR-10 dataset. We demonstrate full class-wise explanations of the adversarially trained model on CIFAR-10 dataset in Sec. B.4. We demonstrate full class-wise explanations of the model trained under the different levels of noise on CIFAR-10 dataset in Sec. B.5.
2https://github.com/utkuozbulak/pytorch-cnn-visualizations
B.1 THE VISUALIZATION RESULTS OF CLASSWISE EXPLANATIONS
We demonstrate the whole class-wise explanations from our method for ConvNet on CIFAR-10, SVHN, GTSRB, and Tiny ImageNet in Figures 6, 7, 8-9, and 10, respectively. For CIFAR-10, SVHN, and GTSRB, we set the size of the class-wise explanation set to 10 images per class, and to 1 for Tiny ImageNet. For all datasets, we use standard Gaussian noise as the initialization of our explanation set S. For the VGG-11 and AlexNet models, we show the visualizations generated by our method on CIFAR-10 in Figure 11 and Figure 12.
The visualizations generated by the Activation Maximization method for these four datasets on the ConvNet are shown in Figures 13, 14, 15, 16, and 17.
The class-wise explanations for ConvNet on CIFAR-10 are shown in Figure 6. Each row corresponds to one class.
The class-wise explanations for ConvNet on SVHN are shown in Figure 7.
The class-wise explanations for ConvNet on GTSRB are shown in Figures 8 and 9. Figure 8 shows class-wise visualizations of class labels 0-18, and Figure 9 shows class labels 19-42.
For the large-scale Tiny ImageNet dataset, we use the ConvNet with 4 convolution blocks. Due to the computation and memory cost, we set the size of the class-wise explanation set to 1 image per class on Tiny ImageNet. The class-wise explanations for ConvNet on Tiny ImageNet are shown in Figure 10. Each subfigure corresponds to one class.
The class-wise explanations for VGG-11 and AlexNet on CIFAR-10 generated by our method are shown in Figures 11 and 12. Each row corresponds to one class.
Similar to the results on the ConvNet model, our method still generates high-quality visualizations on larger CNN models, and our class-wise explanations show apparent class-wise features for each class. This also verifies that our method generalizes well across network architectures.
The class-wise explanations generated by AM for ConvNet on CIFAR-10 are shown in Figure 13. Each row corresponds to one class.
The class-wise explanations generated by AM for ConvNet on SVHN are shown in Figure 14.
The class-wise explanations from the AM method for ConvNet on GTSRB are shown in Figures 15 and 16. Figure 15 shows class-wise visualizations of class labels 0-18, and Figure 16 shows class labels 19-42.
The class-wise explanations generated by AM for ConvNet on Tiny ImageNet (1 image per class, as above) are shown in Figure 17. Each subfigure corresponds to one class.
B.2 THE VISUALIZATION RESULTS OF BACKDOOR ATTACKS
We first show the visualizations of the three different triggers obtained by Activation Maximization in Figure 18. The visualizations of backdoor learning under the three attacks obtained by our method are shown in Figures 19, 21, and 20.
The visualizations of the three different triggers obtained by our method are shown in the following 3 figures.
B.3 THE VISUALIZATION RESULTS OF TRAINING DYNAMICS
In this section, we show the class-wise visualizations of different phases of the training process obtained with our method.
B.4 THE VISUALIZATION RESULTS OF AT MODEL
In this section, we show the class-wise explanations of the adversarially trained model.
B.5 THE VISUALIZATION RESULTS OF NOISY LABEL TRAINING
In this section, we show the class-wise explanations of models trained with the different levels of label noise in the following figures.
1. What is the main contribution of the paper regarding dataset distillation?
2. What are the strengths and weaknesses of the proposed approach compared to existing techniques?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some concerns regarding the interpretability and naturalness of the generated images?
5. How does the proposed method compare to other baseline methods in terms of image quality and objective metrics?
Summary Of The Paper
This paper proposes to use dataset distillation method to generate a small representative dataset for each class as a visualization and interpretation technique of neural classifiers.
Strengths And Weaknesses
Strengths
This paper is mostly well-written, and the findings in this paper are interesting and insightful.
Weaknesses
This paper has several major weaknesses that need to be addressed.
To start with, the proposed technique is very similar to the dataset distillation technique. I do notice that the authors have included a discussion about the differentiation from dataset distillation. The first point that the authors made is that the proposed method can generate images that are more interpretable to humans. However, this is merely achieved by removing some tricks and adding pixel range constraints. There is not much original design that truly aims to improve the interpretability and naturalness of the generated images. Therefore, the major contribution, from my perspective, is the use of such an existing technique to model interpretation, which compromises the novelty of this paper.
On a related note, the picture quality of the generated images is not satisfactory. The main paper focuses on cifar-10 interpretation, which is very small and simple. Even on this dataset the generated images are sort of blurry. For larger images like tiny-ImageNet, as shown in the appendix, the generated images are almost un-interpretable to humans. This raises concerns about the applicability of the proposed algorithm.
The paper seems to include a pretty weak baseline for comparison. The AM with Gaussian blurring is known to have pretty poor image quality. As the authors also mentioned, AM with generative model regularization, such as Nguyen (2017) cited in the paper and BigGAN-AM (https://arxiv.org/pdf/1910.04760.pdf), can generate images with much higher quality even on ImageNet, which the proposed algorithm does not seem capable of doing yet. It would be nice to include stronger baselines like these and discuss the advantage of the proposed algorithm over these baselines.
The results in the paper are mostly presented by qualitative analysis, i.e. by showing some representative images. Not many objective metrics are included. I understand that objective evaluation is hard for model interpretation work like this, but there are some objective metrics that are useful. For example, one of the main claims of this paper is that the proposed algorithm can generate good-quality images, it would be nice to include image quality metrics, such as FID. Otherwise, it is sometimes hard to tell the differences in quality, such as in Figure 5 row (a) vs (b) vs (c).
The mathematical expressions are sometimes inaccurate. For example, in equation (2), the authors use
∑
i
=
1
|
S
|
to denote summing over the set
S
. However, this actually means summing the first
|
S
|
elements in the training set, because
(
x
i
,
y
i
)
is already defined as the training set examples. One possible fix is to define
(
x
i
~
,
y
i
)
as the elements in
S
and replace the
x
i
with
x
i
~
. The same applies to equation (5). Also, in equation (5), shouldn't
θ
M
S
be defined as
O
P
T
(
θ
k
R
,
S
,
N
)
? Otherwise equation (5) would mean matching
θ
k
+
N
R
with the minimizer of the loss on
S
, which is in contradiction with the trajectory-matching objective. Finally, the
R
and
R
(script font vs regular font) are used interchangeably to represent the training set. Considering the script font is always used to represent the other sets, e.g.
S
, all the regular
R
should be replaced with script
R
.
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is mostly clear, with minor issues with mathematical notations. See weaknesses for more details.
Quality
The proposed methods are evaluated on a wide variety of tasks, which is good and insightful. However, stronger baselines and more metrics should be included. See weaknesses for more details.
Novelty
The paper proposes to apply a known technique to solve a new task. The novelty is good but not great. See weaknesses for more details.
Reproducibility
The paper contains detailed information to implement the proposed algorithm.
ICLR | Title
Class-wise Visual Explanations for Deep Neural Networks
Abstract
Many explainable AI (XAI) methods have been proposed to interpret neural networks' decisions on why they predict what they predict locally through gradient information. Yet, existing works on local explanation lack the global knowledge to show class-wise explanations over the whole training procedure. To fill this gap, we propose to visualize a global explanation in the input space for every class learned during training. Specifically, our solution finds a representation set that demonstrates the learned knowledge for each class. To achieve this goal, we optimize the representation set by imitating the model training procedure over the full dataset. Experimental results show that our method can generate class-wise explanations of high quality on a series of image classification datasets. Using our global explanation, we further analyze the model knowledge under different training procedures, including adversarial training and noisy label learning. Moreover, we illustrate that the generated explanations lend insights into diagnosing model failures, such as revealing triggers in a backdoored model.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved unprecedented success on various tasks across different domains. However, there is still limited understanding of how to interpret their decisions due to their black-box nature. In other words, current DNNs lack the desirable transparency to explain why they predict what they predict. "Correct" predictions without reasonable explanations from DNNs raise great concerns about their reliability, especially in security-sensitive tasks such as medical image analysis (Nwadike et al., 2020) and auto-pilot in self-driving systems (Wang et al., 2021).
To acquire a better understanding of model predictions, many attempts have been made to interpret DNNs from different perspectives. A major focus has been put on local explanations, either through saliency maps (Simonyan et al., 2013; Smilkov et al., 2017; Adebayo et al., 2018) and unique identification factors (Zhou et al., 2016; Selvaraju et al., 2017), or by extracting explainable attributions of the model's decision for each input example (Ribeiro et al., 2016; Hinton et al., 2015; Frosst & Hinton, 2017). These techniques are good at visualizing the feature representations used for debugging trained DNNs in a local manner. Yet, local explanations can only interpret the resulting model decision for a particular input sample, and are unable to visualize any intermediate attributions or model knowledge learned over a whole training procedure. Thus, a global explanation for viewing the overall training logic is desirable, i.e., visualizing intermediate explanations for verifying commonly-used hypotheses at different training steps. Apart from that, with respect to diagnosing model predictions, local explanations are further challenged by the recent progress in backdoor attacks (Zhao et al., 2021). DNNs have been shown to be vulnerable to backdoor attacks, yet local explanations fail to provide helpful (intermediate) explanations for analyzing backdoor patterns given a target label. To our best knowledge, no prior work can reveal complex backdoor patterns when visualizing the whole training process of DNNs.
Existing global explanation methods focus on extracting rules from DNN models (Hara & Hayashi, 2018; Lakkaraju et al., 2017) or distilling black-box models into small and easy-to-explain substitute models (Tan et al., 2019; Bastani et al., 2017). However, the quality of the generated explanations highly depends on the complexity of the designed rules and the substitute model. Besides, the substitute model is an approximation of the explained model, which can also introduce extra biases into the explanations.
Our solution. This paper gives a generic method for global visual explanation, given only an input class-wise dataset and a training procedure, without requiring any additional hypotheses. Our global explanation can be applied to visualize the whole training process of any DNN, providing much richer visual explanations and knowledge than local explanations. Unlike prior global explanations, our method requires neither extracting rules from a black-box model nor distilling knowledge into a simpler model. Instead, it visualizes the learned knowledge directly through representative data to attain more comprehensive explanations, rather than processing a simplified model summarized from the original data. Representative data in the pixel space, rather than the feature space, makes it easier to debug a model and study hidden mechanisms in a training procedure. Intrinsically, our superior results benefit from shortening the route of generating the visual explanation, namely, distilling knowledge directly from data instead of from a model derived from data.
To be specific, given a training dataset and the corresponding training procedure, we optimize representative data points so that the model learned over the representative data approaches the one trained with the full dataset. By adding more reference snapshots along the training trajectory of the model parameters, our method supports extracting high-quality visualizations to reveal the global class-wise patterns learned under various training manners. Empirical results show desirable class-wise explanations with high fidelity in Sec. 4. Besides, Sec. 5 analyzes debugging a model failure (i.e., an inserted backdoor) that exists in the training procedure. To demonstrate generality, we utilize our class-wise explanation to analyze the feature representations learned by various training procedures in Sec. 6, including training dynamics, adversarial training, and noisy label learning.
Our contributions are summarized below:
• We propose a global visual explanation method in the input space to reveal the key representation points learned with respect to different classes. Our method can generate class-discriminative, natural-looking, and high-quality visual explanations.
• We show that the proposed method helps in diagnosing several model failures, including backdoor attacks, where local explanations fail completely.
• We show a proof-of-concept of how to understand model knowledge in different training phases and under different training methods. This is critical for understanding the generalization of deep learning and sheds light on how to design better training algorithms.
2 RELATED WORK
2.1 LOCAL EXPLANATION METHODS
Local explanation methods aim to help understand the decision procedure for a specific sample. At the pixel level, the saliency map (Simonyan et al., 2013) was the first to identify the sensitivity of each pixel towards the final prediction by highlighting the largest scores in the calculated gradients. The vanilla gradient method was later improved via smoothed gradients (Smilkov et al., 2017) and guided back-propagation (Adebayo et al., 2018). Class Activation Mapping (CAM) (Zhou et al., 2016) visualizes discriminative regions by simplifying the model into one without any fully-connected layer, and Grad-CAM (Selvaraju et al., 2017) later achieved the same without altering the model architecture by incorporating gradient information. Beyond direct pixel-level explanation, LIME (Ribeiro et al., 2016) approximates the model locally with a linear model, relying on instances randomly generated in the neighborhood of the sample to be explained; after deriving the linear model, interpretable features are projected back into the original feature space to obtain the final explanation. SHAP (Lundberg & Lee, 2017) re-formalizes the additive feature attribution problem as a cooperative game and uses the Shapley value to assign each feature an importance score for a particular prediction. Recently, a class-wise local explanation method has been proposed to visualize a sparse representation of model knowledge (Zhao et al., 2021), but it heavily depends on the given canvas (example). Although our method also focuses on class-wise patterns, our global explanation neither depends on a specific sample nor requires sparsity constraints for generating representations in the input space.
2.2 GLOBAL EXPLANATION METHODS
Different from local explanation methods, global explanation aims to describe the overall logic of the black box, including how the model parameter affects the resulting prediction on average, and what and how the model has been learned in the training procedure. The global explanation is first to be formalized as extracting some rules from the black-box model to interpret the model (Bastani
et al., 2017). As neural networks become more complex and deep, it is then very hard to extract rules directly from the model. Model distillation (Hinton et al., 2015) is then applied to simplify complex black-box models to a smaller yet interpretable one with a similar performance to the original model. Neural networks have been distilled into trees structure model (Craven & Shavlik, 1995; Frosst & Hinton, 2017) and an additive model for the global explanation (Tan et al., 2018).
By selecting prototypes or representative samples, example-based explanation methods (Kim et al., 2016; Gurumoorthy et al., 2019) provide a condensed view of the whole training dataset and select a subset of the training set as a global explanation. However, the selected prototype set is always very large, containing thousands of samples, so it is hard for users to directly obtain clear and concise explanations. Besides, example-based methods can only select existing samples from the training dataset, and are therefore unable to represent the knowledge learned under various training paradigms such as adversarial training. In contrast, our method, which needs only a dozen examples to constitute the global explanation, can reflect knowledge from various training paradigms by generating explanations directly from trained models.
Activation maximization (AM) methods (Olah et al., 2017; Yosinski et al., 2015; Nguyen et al., 2017; 2019) visualize the learned features of various neurons of DNN models in the input space. They can also be used as global explanations to visualize the learned class-wise patterns of DNN models by maximizing the output neuron of each class. However, direct maximization always generates less coherent, high-frequency local patterns. To generate globally coherent and natural visualizations, AM needs to be combined with hand-designed regularizations such as Gaussian blur, dropout, mean initialization, and deep generative models (Nguyen et al., 2019). Compared with the AM method, our method generates high-quality, globally coherent, and diverse class-wise explanations without any prior regularization.
3 METHODOLOGY
Suppose θR is the model trained using the dataset R with n samples. Our goal is to find a global explanation set S that contains a class-discriminative explanation Si for every class i = 0, . . . , C − 1, while keeping the explanation set much smaller than the original set, i.e., |S| ≪ n. In this section, unlike previous works that extract the set from the model directly (Bastani et al., 2017), we propose a new method that synthesizes S such that the model trained on S equals θR, since the model can be expressed as a function of the representation samples of every class. Intuitively, this representation can be thought of as extracting the critical points around the model's decision boundary. Take the support vector machine (SVM) as an example: the support vectors, whose number is much smaller than the size of the whole training set and which determine the SVM model, can be thought of as the representation set we aim to extract.¹

¹In Soudry et al. (2018), the authors theoretically proved that SGD implicitly converges to solutions that maximally separate the dataset. This also implies that some samples are more relevant than others to the decision boundary learned by the model.
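To make the SVM analogy concrete, the following scikit-learn snippet (purely illustrative, not part of our method) shows that only a small subset of the training data determines the decision boundary:

```python
from sklearn import datasets, svm

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel="linear").fit(X, y)
# The boundary is determined by far fewer points than the full training set:
print(f"{len(clf.support_)} support vectors out of {len(X)} training samples")
```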
3.1 SEARCHING CLASS-WISE EXPLANATION SET
For simplicity, we consider a classification model $f_\theta: \mathcal{X}^{(m)} \rightarrow \mathcal{Y}^C$ that maps an input $x$ in the $m$-dimensional input space $\mathcal{X}$ to a label $y$ in the label space $\mathcal{Y}$ with $C$ classes, where $\theta$ denotes the model parameters. Given a training set $R$ consisting of $n$ instances $(x_1, y_1), \ldots, (x_n, y_n)$ and a non-negative real-valued loss function $L$ that penalizes the difference between the prediction $f_\theta(x)$ and the true label $y$ for data drawn from an unknown distribution $P$, $(x, y) \sim P$, we aim to find the model $\theta^R$ as:
$$\theta^R = \arg\min_{\theta} R(\theta) = \int L(f_\theta(x), y)\, dP(x, y) \approx \frac{1}{n}\sum_{i=1}^{n} L(f_\theta(x_i), y_i) \quad (1)$$
We formulate the search for the class-wise explanation set S as the following bi-level optimization problem:
$$\min_{S}\, D(\theta^S, \theta^R) \quad \text{s.t.} \quad \theta^S = \arg\min_{\theta} L_S(\theta) := \frac{1}{|S|}\sum_{i=1}^{|S|} L(f_\theta(x_i), y_i) \quad (2)$$
where $D(\cdot, \cdot)$ is a distance metric measuring the distance between the model trained on the dataset R and the one trained on the representation set S; we use the sum of the MSE loss and the cosine similarity in the implementation. In other words, we aim to find a set of critical representations for each class such that the model $\theta^S$ trained on S is close to the original model. By imitating the model parameters $\theta^R$, we can then extract class-discriminative features from the dataset R into S. Since the above problem (Eq. 2) is a bi-level optimization problem, we solve it with alternating minimization. Specifically, for each iteration of the outer loop, we run a gradient descent algorithm with a fixed number of iterations M to solve the inner problem:
$$\theta^S = \theta^S_M = \mathrm{OPT}(\theta^S_0, S, M), \quad \text{with the update} \quad \theta^S_{t+1} \leftarrow \theta^S_t - \eta_\theta \nabla_\theta L_S(\theta^S_t) \quad (3)$$
where OPT denotes the optimization process (gradient descent) with M iterations of updates, and $\theta^S_0$ is initialized at the same initial point as $\theta^R$. After obtaining $\theta^S_M$, we solve the outer problem by taking one gradient descent step:

$$S_{t+1} \leftarrow S_t - \eta_S \nabla_S D(\theta^S, \theta^R) \quad (4)$$

After obtaining S, we project it into the same pixel value range as the samples in the dataset R to make the explanation directly interpretable in the input space. At the same time, we limit the size of the explanation set S to at most 10 images per class. To make our global explanations independent of the sampled data distribution P and reflect the true model decision, S is always initialized from standard Gaussian noise; a sketch of this setup and of the distance D is given below.
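As a concrete illustration, the Gaussian initialization, the pixel-range projection, and one possible rendering of the distance $D$ might look as follows; the equal weighting of the two terms in $D$ is our assumption, since the text only states that $D$ sums the MSE loss and the cosine similarity.

```python
import torch
import torch.nn.functional as F

# Explanation set: Gaussian-noise init, 10 images per class (CIFAR-10 sized),
# assuming data normalized to the [-1, 1] pixel range.
num_classes, per_class = 10, 10
S = torch.randn(num_classes * per_class, 3, 32, 32)
labels = torch.arange(num_classes).repeat_interleave(per_class)
S = S.clamp(-1, 1)  # projection applied after every outer update

def distance(theta_a, theta_b):
    """D(theta_a, theta_b): MSE plus (one minus) cosine similarity between
    flattened parameter vectors -- our rendering of the stated metric."""
    va = torch.cat([p.reshape(-1) for p in theta_a.values()])
    vb = torch.cat([p.reshape(-1) for p in theta_b.values()])
    return F.mse_loss(va, vb) + (1 - F.cosine_similarity(va, vb, dim=0))
```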
However, the above bi-level optimization is rather hard to solve directly. If the inner loop is unrolled too many times, the computation and memory costs grow quickly and the problem becomes infeasible. At the same time, since the model parameter space is highly non-convex, it is extremely difficult to accurately approach $\theta^R$ with limited inner iterations when starting from a randomly initialized $\theta^S_0$.
3.2 IMITATING TRAINING TRAJECTORIES
To address the aforementioned challenges, recent works on data condensation (Zhao et al., 2021; Cazenavette et al., 2022) propose to imitate short ranges of the original model training process rather than imitating the final model parameter $\theta^R$ directly. Inspired by those works, we propose to imitate short intervals of the whole training process in each iteration of the outer loop. Specifically, after each epoch of training the model on R, we save the model checkpoint $\theta^R_k$ as well as the random initialization point $\theta^R_0$. We thereby obtain a whole training trajectory $\{\theta^R_0, \theta^R_1, \ldots, \theta^R_k, \ldots, \theta^R_T\}$, and each checkpoint in the trajectory can act as a reference point. For each iteration of the outer loop, we uniformly and randomly choose a reference point $\theta^R_k$ as the starting point, and the reference point after N epochs, $\theta^R_{k+N}$, as the end point. With the model $\theta^S$ initialized as $\theta^R_k$, we run M steps of gradient descent in the inner loop to obtain $\theta^S_M$ and then make it approach $\theta^R_{k+N}$. We then update S according to the distance loss between $\theta^S_M$ and $\theta^R_{k+N}$ (a sketch of this matching loop follows Eq. 5). In sum, instead of directly imitating the final parameter $\theta^R$, we imitate short-range training dynamics to obtain S. Therefore, we turn the original problem into the following optimization problem:
$$\min_{S}\, \mathbb{E}_{k \sim [0, \ldots, T]}\, D(\theta^S_M, \theta^R_{k+N}) \quad \text{s.t.} \quad \theta^S_M = \arg\min_{\theta} L_S(\theta) := \frac{1}{|S|}\sum_{i=1}^{|S|} L(f_\theta(x_i), y_i) \quad (5)$$
where T is the length of the whole training trajectory. To avoid effects from the randomness in the initialization of $\theta^R$, we also optimize the above problem over multiple training trajectories obtained by training $\theta^R$ from different initializations.
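The short-interval imitation (Eqs. 3-5) can be sketched schematically as follows, assuming PyTorch 2.x for `functional_call`; `trajectory[k]` holds the parameter dict of checkpoint $\theta^R_k$, `distance` is the metric sketched earlier, and the hyperparameter values follow Sec. 4.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def match_trajectory(model, trajectory, S, labels, distance,
                     M=60, N=2, lr_theta=0.02, lr_S=1000, outer_iters=5000):
    """Optimize the explanation set S by imitating short intervals of the
    expert trajectory (Eqs. 3-5)."""
    S = S.clone().requires_grad_(True)
    opt_S = torch.optim.SGD([S], lr=lr_S)
    for _ in range(outer_iters):
        k = torch.randint(len(trajectory) - N, (1,)).item()
        # inner loop (Eq. 3): M unrolled gradient steps from theta^R_k
        theta = {n: p.detach().clone().requires_grad_(True)
                 for n, p in trajectory[k].items()}
        for _ in range(M):
            loss = F.cross_entropy(functional_call(model, theta, (S,)), labels)
            grads = torch.autograd.grad(loss, list(theta.values()),
                                        create_graph=True)
            theta = {n: p - lr_theta * g
                     for (n, p), g in zip(theta.items(), grads)}
        # outer step (Eq. 4): move S to pull theta^S_M toward theta^R_{k+N}
        opt_S.zero_grad()
        distance(theta, trajectory[k + N]).backward()
        opt_S.step()
        with torch.no_grad():
            S.clamp_(-1, 1)  # project back into the data's pixel range
    return S.detach()
```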
Furthermore, to make the global set independent of the sampled data distribution P and reflect the true model decision, S is initialized from standard Gaussian noise during the optimization. Fidelity: It is necessary for an explanation method to have high fidelity; in other words, the predictions produced by an explanation should agree with the original model as much as possible. By making $\theta^S$ approach $\theta^R$, the model trained on S achieves a similar generalization ability on the distribution P as well as on other distributions. Here, we report the final MSE loss $\|\theta^R_{k+N} - \theta^S_M\|_2$ obtained by our method, and take the original trajectory difference $\|\theta^R_k - \theta^R_{k+N}\|_2$ as a reference. We compute their average values over multiple trajectories starting from different starting points $\theta^R_k$. Following the experimental settings in Sec. 4, the average values of $\|\theta^R_{k+N} - \theta^S_M\|_2$ and $\|\theta^R_k - \theta^R_{k+N}\|_2$ are 0.35 and 0.76, respectively. We observe that the MSE between the obtained $\theta^S_M$ and the end point $\theta^R_{k+N}$ is significantly reduced compared with the original trajectory, so $\theta^S_M$ is close enough to the reference point $\theta^R_{k+N}$ after the update.
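The reported numbers can be computed with a small helper of this form, reusing the flattening idiom above (schematic):

```python
import torch

def param_l2(theta_a, theta_b):
    """||theta_a - theta_b||_2 between two flattened parameter dicts."""
    va = torch.cat([p.reshape(-1) for p in theta_a.values()])
    vb = torch.cat([p.reshape(-1) for p in theta_b.values()])
    return torch.linalg.vector_norm(va - vb).item()

# Schematic averages over sampled starting points k:
#   mean_k param_l2(theta_R[k + N], theta_S_M(k))   -> ~0.35
#   mean_k param_l2(theta_R[k],     theta_R[k + N]) -> ~0.76
```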
Side benefits: Since we have multiple reference points along the training trajectory rather than only the final parameter $\theta^R$, we can also investigate the feature representations learned during the training dynamics, which has been an open question in deep learning, with several hypotheses already proposed to study it. By selecting reference model parameters at different stages, we provide a way to visualize the feature characteristics of each training stage. We defer this discussion to Sec. 6.1.
Although the proposed trajectory matching is also used in data condensation (Cazenavette et al., 2022), the goal of our work is entirely different: data condensation aims to improve the data efficiency of training by limiting the number of training samples. With this different objective, data condensation does not require generating highly interpretable explanations, so directly applying data condensation methods is not appropriate. For example, tricks like the ZCA transformation are widely used in data condensation (Cazenavette et al., 2022) but make the generated samples hard to interpret, and there is no constraint on the pixel range of generated samples either. Besides, we utilize our method for applications such as backdoor detection in Sec. 5 and visualizing the knowledge of various training paradigms in Sec. 6, which data condensation methods have not explored.
4 CLASSWISE EXPLANATION
In this section, we conduct experiments to show that the proposed method generates high-quality, class-discriminative global explanations, and we compare our method with the Activation Maximization (AM) method.
Datasets and model architecture: We use four popular datasets: SVHN (Netzer et al., 2011), GTSRB (Stallkamp et al., 2012), CIFAR-10 (Krizhevsky et al., 2009), and Tiny ImageNet (Chrabaszcz et al., 2017). For the model architecture, we choose the simple ConvNet architecture (Gidaris & Komodakis, 2018), AlexNet (Krizhevsky et al., 2012), and VGG-11 (Simonyan & Zisserman, 2015) for illustration. The ConvNet has 3 duplicate convolution blocks followed by a linear classifier; each block consists of 128 filters, average pooling, ReLU activation, and instance normalization. For the large-scale Tiny ImageNet dataset, we use the ConvNet with 4 duplicate convolution blocks. Due to the page limit, we show the experimental results of AlexNet and VGG in Sec. B.1 of the Appendix.
1. What is the main contribution of the paper regarding visualization techniques for DNNs?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application in understanding networks and analyzing backdoor training sets?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any suggestions or alternatives proposed by the reviewer to improve the paper's focus on XAI or its overall impact?
Summary Of The Paper
The paper introduces a novel visualisation of DNNs by learning a small set of training images that leads to a similar set of model weights as those obtained with the full dataset. They demonstrate the utility of this for understanding networks and their classifications, and particularly for analysing networks trained with backdoored training sets.
Strengths And Weaknesses
The paper introduces a novel and interesting visualisation technique that helps characterise the training examples that exemplify classes. Visualisations of DNNs have been of great interest and fascination to many researchers in the field, and this is undoubtedly an interesting addition.
The paper is let down by the writing. This could immediately be improved using a grammar-checking tool, which would eliminate many annoying errors that detract from the basic narrative. It would benefit from a slightly clearer explanation. Personally, I am not convinced it helps that much to concentrate on XAI. Clearly, to some extent all visualisation techniques, such as those in the famous Zeiler and Fergus paper, help explain DNNs, but their goal is often slightly different from that of XAI. However, I accept that the current techniques might be helpful in cases like identifying training sets with backdoor examples (although I suspect there are much easier approaches to doing this, such as visually inspecting the training set).
Clarity, Quality, Novelty And Reproducibility
To me, the clarity of the writing was disappointing because the underlying idea and the work carried out were very good. The method appears novel and I see no issues with reproducibility, although it does look like quite hard work.
ICLR | Title
Class-wise Visual Explanations for Deep Neural Networks
Abstract
Many explainable AI (XAI) methods have been proposed to interpret neural network’s decisions on why they predict what they predict locally through gradient information. Yet, existing works mainly for local explanation lack a global knowledge to show class-wise explanations in the whole training procedure. To fill this gap, we proposed to visualize global explanation in the input space for every class learned in the training procedure. Specifically, our solution finds a representation set that could demonstrate the learned knowledge for each class. To achieve this goal, we optimize the representation set by imitating the model training procedure over the full dataset. Experimental results show that our method could generate class-wise explanations with high quality in a series of image classification datasets. Using our global explanation, we further analyze the model knowledge in different training procedures, including adversarial training, and noisy label learning. Moreover, we illustrate that the generated explanations could lend insights into diagnosing model failures, such as revealing triggers in a backdoored model.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved unprecedented success on a variety of tasks across different domains. However, there is still limited understanding of how to interpret their decisions due to their black-box nature. In other words, current DNNs lack the desirable transparency to explain why they predict what they predict. A "correct" prediction without a reasonable explanation raises serious concerns about the prediction’s reliability, especially in security-sensitive tasks such as medical image analysis (Nwadike et al., 2020) and autopilot in self-driving systems (Wang et al., 2021).
To acquire a better understanding of model predictions, many attempts have been made to interpret DNNs from different perspectives. A major focus has been placed on local explanations, either through saliency maps (Simonyan et al., 2013; Smilkov et al., 2017; Adebayo et al., 2018) and unique identification factors (Zhou et al., 2016; Selvaraju et al., 2017), or by extracting explainable attributions of a model’s decision for each input example (Ribeiro et al., 2016; Hinton et al., 2015; Frosst & Hinton, 2017). These techniques are good at visualizing the feature representations used for debugging trained DNNs in a local manner. Yet, local explanations can only interpret the resulting model decision for a particular input sample; they cannot visualize intermediate attributions or the model knowledge learned over the whole training procedure. Thus, a global explanation for viewing the overall training logic is desirable, i.e., visualizing intermediate explanations for verifying commonly-used hypotheses at different training steps. Apart from that, with respect to diagnosing model predictions, local explanations are further challenged by recent progress in backdoor attacks (Zhao et al., 2021). DNNs have been shown to be vulnerable to backdoor attacks, yet local explanations fail to provide helpful (intermediate) explanations for analyzing backdoor patterns given a target label. To the best of our knowledge, no prior work can reveal complex backdoor patterns when visualizing the whole training process of DNNs.
Existing global explanation methods focus on extracting rules from DNN models (Hara & Hayashi, 2018; Lakkaraju et al., 2017) or distilling black-box models into small, easy-to-explain substitute models (Tan et al., 2019; Bastani et al., 2017). However, the quality of the generated explanations highly depends on the complexity of the designed rules and the substitute model. Besides, the substitute model is an approximation of the explained model, which can also introduce extra biases into the explanations.
Our solution. This paper gives a generic method for global visual explanation, given only a class-wise dataset and its training procedure, without requiring any additional hypotheses. Our global explanation can be applied to visualize the whole training process of any DNN, providing much richer visual explanations and knowledge than local explanations. Unlike prior global explanations, our method requires neither extracting rules from a black-box model nor distilling knowledge into a simpler model. Instead, it visualizes the learned knowledge directly through representative data to attain more comprehensive explanations, rather than processing a simplified model summarized from the original data. Representative data, in pixel space rather than feature space, makes it easier to debug a model and study hidden mechanisms in a training procedure. Intrinsically, our superior results benefit from shortening the route of generating visual explanations, namely, distilling knowledge directly from data instead of from a model derived from data.
To be specific, given a training dataset and the corresponding training procedure, we optimize representative data points so that the model learned over the representative data approaches the one trained with the full dataset. By adding more reference snapshots from the training trajectory of the model parameters, our method supports extracting a high-quality visualization to reveal the global class-wise patterns learned under various training regimes. Empirical results show desirable class-wise explanations with high fidelity in Sec. 4. Besides, Sec. 5 analyzes debugging a model failure (i.e., an inserted backdoor) that exists in the training procedure. To demonstrate generality, we utilize our class-wise explanation to analyze the feature representations learned by various training procedures in Sec. 6, including training dynamics, adversarial training, and noisy-label learning.
Our contributions are summarized below:
• We propose a global visual explanation method in the input space to reveal the key representation points learned with respect to different classes. Our method can generate class-discriminative, natural-looking, and high-quality visual explanations.
• We show the proposed method can help in diagnosing several model failures, including backdoor attacks, where local explanations completely fail.
• We show a proof-of-concept of how to understand the model knowledge in different training phases and under different training methods. This is critical for understanding the generalization of deep learning and sheds light on how to design better training algorithms.
2 RELATED WORK
2.1 LOCAL EXPLANATION METHODS
Local explanation methods aim to help understand the decision procedure for a specific sample. At the pixel level, the saliency map (Simonyan et al., 2013) was the first to identify the sensitivity of each pixel towards the final prediction by highlighting the largest scores in the calculated gradients. The vanilla gradient method was later improved via smoothed gradients (Smilkov et al., 2017) and guided back-propagation (Adebayo et al., 2018). Class Activation Mapping (CAM) (Zhou et al., 2016) visualizes discriminative regions by simplifying the model into one without any fully-connected layer. Grad-CAM (Selvaraju et al., 2017) later extends CAM to work without altering the model architecture by incorporating gradient information. Beyond direct pixel-level explanation, LIME (Ribeiro et al., 2016) approximates the model locally with a linear model, which relies on instances randomly generated in the neighborhood of the sample to be explained. After deriving the model, interpretable features are projected back into the original feature space to get the final explanation. SHAP (Lundberg & Lee, 2017) re-formalizes the additive feature attribution problem as a cooperative game and uses Shapley values to assign each feature an importance score for a particular prediction. Recently, a class-wise local explanation method has been proposed to visualize a sparse representation of model knowledge (Zhao et al., 2021); however, that method heavily depends on the given canvas (example). Although our method also focuses on class-wise patterns, our global explanation method neither depends on a specific sample nor requires sparsity constraints for generating representations in the input space.
2.2 GLOBAL EXPLANATION METHODS
Different from local explanation methods, global explanation aims to describe the overall logic of the black box, including how the model parameters affect the resulting predictions on average, and what and how the model has learned in the training procedure. Global explanation was first formalized as extracting rules from the black-box model to interpret it (Bastani et al., 2017). As neural networks become more complex and deep, it becomes very hard to extract rules directly from the model. Model distillation (Hinton et al., 2015) is then applied to simplify complex black-box models into smaller yet interpretable ones with performance similar to the original model. Neural networks have been distilled into tree-structured models (Craven & Shavlik, 1995; Frosst & Hinton, 2017) and additive models for global explanation (Tan et al., 2018).
By selecting prototypes or representative samples, example-based explanation methods (Kim et al., 2016; Gurumoorthy et al., 2019) can provide a condensed view of the whole training dataset and select a subset of the training set as the global explanation. However, the selected prototype set is often very large, containing thousands of samples, so it is hard for users to directly obtain clear and concise explanations. Besides, example-based methods can only select existing samples from the training dataset; they are therefore unable to represent the knowledge learned under training paradigms like adversarial training. In contrast, our method, which needs only a dozen examples to constitute the global explanation, can reflect knowledge from various training paradigms by directly generating explanations from trained models.
Activation maximization (AM) methods (Olah et al., 2017; Yosinski et al., 2015; Nguyen et al., 2017; 2019) visualize the learned features of various neurons of DNN models in the input space. They can also be used as global explanations to visualize the learned class-wise patterns of DNN models by maximizing the output neuron of each class. However, direct maximization tends to generate less coherent, high-frequency local patterns. To generate globally coherent and natural visualizations, AM methods need to be combined with hand-designed regularization such as Gaussian blur, dropout, mean initialization, or deep generative models (Nguyen et al., 2019). Compared with AM methods, our method can generate high-quality, globally coherent, and diverse class-wise explanations without any prior regularization.
3 METHODOLOGY
Suppose θR is the model trained using the dataset R with n samples. Our goal is to find a global explanation set S that contains a class-discriminative explanation Si for every class i = 0, . . . , C − 1. In the meantime, the explanation set should be much smaller than the original set, i.e., |S| ≪ n. In this section, unlike previous works that extract the set from the model directly (Bastani et al., 2017), we propose a new method that synthesizes S such that the model trained on S equals θR, since the model can be expressed as a function of representative samples for every class. Intuitively, this representation can be thought of as extracting critical points around the model’s decision boundary. Take the support vector machine (SVM) as an example: the support vectors, whose number is much smaller than the size of the whole training set and which determine the SVM model, can be thought of as the representation set we aim to extract.1
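To make the SVM analogy concrete, the following toy sketch (ours, not part of the paper's method; it uses scikit-learn) shows that refitting on the support vectors alone recovers essentially the same decision boundary as training on all points:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.repeat([0, 1], 500)

clf = SVC(kernel="linear").fit(X, y)
print(f"{len(clf.support_)} support vectors out of {len(X)} training points")

# Refitting on the support vectors alone yields (nearly) the same boundary.
clf_sv = SVC(kernel="linear").fit(X[clf.support_], y[clf.support_])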
3.1 SEARCHING CLASS-WISE EXPLANATION SET
For simplicity, we consider a classification model fθ : X ⊆ R^m → Y, which maps an input x in the m-dimensional input space X to a label y in the label space Y with C classes; the model parameters are θ. Given a training set R consisting of n instances (x1,y1), . . . , (xn,yn) and a non-negative real-valued loss function L that penalizes the difference between the prediction fθ(x) and the true label y from an unknown data distribution P , (x,y) ∼ P , we aim to find the model θR as:
\theta^R = \arg\min_\theta R(\theta) = \int L(f_\theta(\mathbf{x}), \mathbf{y}) \, dP(\mathbf{x}, \mathbf{y}) \approx \frac{1}{n} \sum_{i=1}^{n} L(f_\theta(\mathbf{x}_i), \mathbf{y}_i) \quad (1)
We formulate the search for the class-wise explanation set S as the following bi-level optimization problem:
\min_S D(\theta^S, \theta^R) \quad \text{s.t.} \quad \theta^S = \arg\min_\theta L_S(\theta) := \frac{1}{|S|} \sum_{i=1}^{|S|} L(f_\theta(\mathbf{x}_i), \mathbf{y}_i) \quad (2)
where D(·, ·) is a distance metric measuring the distance between the model trained on the dataset R and the model trained on the representation set S; we use the sum of the MSE loss and the cosine similarity in our implementation. In other words, we aim to find a set of critical representations for each class such that
1In Soudry et al. (2018), the authors theoretically proved that SGD implicitly converges to solutions that maximally separate the dataset. This also implies that some samples are more relevant than others to decision boundary learned by the model.
the model θS trained on S is close to the original θR. By imitating the model parameters θR, we can extract class-discriminative features from the dataset R into S. Since the above problem (Eq. 2) is a bi-level optimization problem, we solve it using alternating minimization. Specifically, for each iteration of the outer loop, we utilize a gradient descent algorithm with a fixed number of iterations M to solve the inner problem:
\theta^S = \theta^S_M = \mathrm{OPT}(\theta^S_0, S, M), \quad \text{with updates} \quad \theta^S_{t+1} \leftarrow \theta^S_t - \eta_\theta \nabla_\theta L_S(\theta^S_t) \quad (3)
where OPT denotes the optimization process (gradient descent) with M iterations of updates, and θS0 is initialized at the same initial point as θR. After obtaining θSM , we solve the outer problem by taking one gradient descent step:
S_{t+1} \leftarrow S_t - \eta_S \nabla_S D(\theta^S, \theta^R) \quad (4)
After obtaining S, we project it into the same pixel value range as samples in the dataset R, so that the explanation is directly interpretable in the input space. At the same time, we limit the size of the explanation set S to at most 10 images per class. To make our global explanations independent of the sampling distribution P and reflective of the true model decisions, S is always initialized from standard Gaussian noise.
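As a concrete illustration, below is a minimal PyTorch-style sketch of this alternating scheme (Eqs. 2-4). It is our own simplification, not the paper's code: a linear model stands in for the ConvNet, the reference parameters theta_R are placeholders, and the hyperparameters follow those reported in Sec. 4 (M = 60, eta_theta = 0.02, eta_S = 1000). It requires torch >= 2.0 for torch.func.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for the ConvNet
theta_R = {k: torch.randn_like(v) for k, v in model.state_dict().items()}  # placeholder reference
theta_0 = {k: v.clone() for k, v in model.state_dict().items()}            # shared initialization

S = torch.randn(10 * 10, 3, 32, 32, requires_grad=True)  # 10 Gaussian-noise images per class
y_S = torch.arange(10).repeat_interleave(10)
opt_S = torch.optim.SGD([S], lr=1000.0)                   # eta_S

def dist(p_s, p_r):
    # D in Eq. 2: MSE plus (1 - cosine similarity) over flattened parameters.
    a = torch.cat([v.flatten() for v in p_s.values()])
    b = torch.cat([v.flatten() for v in p_r.values()])
    return F.mse_loss(a, b) + (1 - F.cosine_similarity(a, b, dim=0))

for outer in range(5000):
    params = {k: v.clone().requires_grad_(True) for k, v in theta_0.items()}
    for _ in range(60):                                   # M unrolled inner GD steps (Eq. 3)
        loss = F.cross_entropy(functional_call(model, params, (S,)), y_S)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: p - 0.02 * g for (k, p), g in zip(params.items(), grads)}
    opt_S.zero_grad()
    dist(params, theta_R).backward()                      # one outer step on S (Eq. 4)
    opt_S.step()
    with torch.no_grad():
        S.clamp_(-1, 1)                                   # project back to the pixel range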
However, the above bi-level optimization is hard to solve directly. If the inner loop is unrolled too many times, the computation and memory costs increase dramatically and the problem becomes infeasible. At the same time, since the model parameter space is highly non-convex, it becomes extremely difficult to accurately approach θR within a limited number of inner iterations when starting from a randomly initialized θS0.
3.2 IMITATING TRAINING TRAJECTORIES
To address the aforementioned challenges, recent works on data condensation (Zhao et al., 2021; Cazenavette et al., 2022) propose to imitate short ranges of the original model training process rather than imitating the final model parameters θR directly. Inspired by these works, we imitate short intervals of the whole training process in each iteration of the outer loop. Specifically, after each epoch of training the model on R, we save the model checkpoint θRk as well as the random initialization point θR0. We then obtain a whole training trajectory {θR0, θR1, . . . , θRk, . . . , θRT}, and each checkpoint in the trajectory can act as a reference point. For each iteration of the outer loop, we uniformly at random choose a reference point θRk as the starting point and the reference point N epochs later, θRk+N, as the end point. With the model θS initialized as θRk, in the inner loop we use M gradient descent steps to obtain θSM and then make it approach θRk+N. We then update S according to the distance loss between θSM and θRk+N. In sum, instead of directly imitating the final parameters θR, we imitate short-range training dynamics to obtain S. We therefore turn the original problem into the following optimization problem:
\min_S \mathbb{E}_{k \sim [0, \ldots, T]} \, D(\theta^S_M, \theta^R_{k+N}) \quad \text{s.t.} \quad \theta^S_M = \arg\min_\theta L_S(\theta) := \frac{1}{|S|} \sum_{i=1}^{|S|} L(f_\theta(\mathbf{x}_i), \mathbf{y}_i) \quad (5)
where T is the length of the whole training trajectory. To avoid effects from the randomness of θR’s initialization, we also optimize the above problem over multiple training trajectories obtained by training θR from different initializations.
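The change from the naive scheme above is small; below is a self-contained sketch of the trajectory-matching loop (Eq. 5), again our own simplification with placeholder checkpoints standing in for a saved expert trajectory:

import random
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
T, N, M = 20, 2, 60                                       # trajectory length, epoch gap, inner steps
expert_traj = [{k: torch.randn_like(v) for k, v in model.state_dict().items()}
               for _ in range(T + 1)]                     # placeholder for per-epoch checkpoints

S = torch.randn(100, 3, 32, 32, requires_grad=True)      # Gaussian-noise initialization
y_S = torch.arange(10).repeat_interleave(10)
opt_S = torch.optim.SGD([S], lr=1000.0)

for outer in range(5000):
    k = random.randint(0, T - N)                          # uniformly sampled starting point
    params = {key: v.clone().requires_grad_(True) for key, v in expert_traj[k].items()}
    for _ in range(M):                                    # differentiable inner updates on theta
        loss = F.cross_entropy(functional_call(model, params, (S,)), y_S)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {key: p - 0.02 * g for (key, p), g in zip(params.items(), grads)}
    a = torch.cat([v.flatten() for v in params.values()])
    b = torch.cat([v.flatten() for v in expert_traj[k + N].values()])
    outer_loss = F.mse_loss(a, b) + (1 - F.cosine_similarity(a, b, dim=0))
    opt_S.zero_grad(); outer_loss.backward(); opt_S.step()
    with torch.no_grad():
        S.clamp_(-1, 1)                                   # keep explanations in the pixel range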
Fidelity: It is necessary for an explanation method to have high fidelity; in other words, the predictions produced by an explanation should agree with the original model as much as possible. By making θS approach θR, the model θS trained on S achieves a similar generalization ability on the distribution P as well as on other distributions. Here, we report the final MSE loss \|\theta^R_{k+N} - \theta^S_M\|_2 obtained by our method, taking the original trajectory difference \|\theta^R_k - \theta^R_{k+N}\|_2 as the reference, with both averaged over multiple trajectories from different starting points θRk. Following the experimental settings in Sec. 4, the average values of \|\theta^R_{k+N} - \theta^S_M\|_2 and \|\theta^R_k - \theta^R_{k+N}\|_2 are 0.35 and 0.76, respectively. The MSE between the obtained θSM and the end point θRk+N is significantly reduced compared with the original trajectory, so θSM is close enough to the reference point θRk+N after the update.
Side benefits: Since we have multiple reference points along the training trajectory rather than only the final parameters θR, we can also investigate the feature representations learned during training dynamics, which has been an open question in deep learning, with several hypotheses already proposed to study it. By selecting reference model parameters at different stages, we provide a way to visualize the feature characteristics of different training stages. We defer this discussion to Sec. 6.1.
Although the proposed trajectory matching is also used in data condensation (Cazenavette et al., 2022), the goal of our work is entirely different: data condensation aims to improve the data efficiency of the training procedure by limiting the number of training samples. With this different objective, data condensation does not require generating highly interpretable explanations, so directly applying data condensation methods is not applicable. For example, tricks like ZCA transformation are widely used in data condensation (Cazenavette et al., 2022) but make the generated samples hard to interpret, and data condensation places no constraint on the pixel range of generated samples. Besides, we utilize our method for applications that data condensation has not explored, such as backdoor detection in Sec. 5 and visualizing the knowledge of various training paradigms in Sec. 6.
4 CLASSWISE EXPLANATION
In this section, we conduct experiments to show that the proposed method can generate high-quality, class-discriminative global explanations. We also compare our method with the activation maximization (AM) method.
Datasets and model architecture: We use four popular datasets: SVHN (Netzer et al., 2011), GTSRB (Stallkamp et al., 2012), CIFAR-10 (Krizhevsky et al., 2009), and Tiny ImageNet (Chrabaszcz et al., 2017). For the model architecture, we choose the simple ConvNet architecture (Gidaris & Komodakis, 2018), AlexNet (Krizhevsky et al., 2012), and VGG-11 (Simonyan & Zisserman, 2015) for illustration. The ConvNet has 3 duplicate convolution blocks followed by a linear classifier; each block consists of 128 filters, average pooling, ReLU activation, and instance normalization. For the large-scale dataset Tiny ImageNet, we use the ConvNet with 4 duplicate convolution blocks. Due to the page limit, we show the experimental results of AlexNet and VGG in Sec. B.1 of the Appendix.
Implementation details: We set M and N to 60 and 2 for all datasets to generate the visual explanations. For selecting the reference point used as the starting point, we sample uniformly at random from the epoch 0–20 checkpoints for SVHN, GTSRB, and CIFAR-10, and from the epoch 0–40 checkpoints for Tiny ImageNet. The learning rate ηθ is set to 0.02 for updating θSt in the inner loop, ηS is 1000 for updating S in the outer loop, and the number of outer loop iterations is 5000. We use standard Gaussian noise to initialize our class-wise explanations S. For the activation maximization (AM) method, we directly maximize the final output of each class before the softmax layer; we also add Gaussian-blur regularization so the AM explanations have better visual quality. For a fair comparison, we limit the size of the explanation set S to 10 images per class and initialize S with Gaussian noise. Further implementation details are given in the Appendix.
As shown in Figure 1, for each class of each dataset, our method generates high-quality, class-discriminative explanations, while the AM method generates visually unidentifiable explanations with a lot of high-frequency noise and irregular color backgrounds. At the same time, our generated explanations show great diversity, while the AM method keeps generating very similar patterns. For example, the model trained on GTSRB learns the shape and color of traffic signs (circle, triangle, blue, red) and the content inside the signs (numbers, lights, arrows). Moreover, the generated explanations show apparent number patterns, even though the numbers in the SVHN dataset are collected from different sources with different backgrounds. Our class-wise explanations reveal that the model has learned to extract and combine important features from the background information.
To further evaluate whether the generated explanations are highly class-discriminative, we feed the explanations back into the original ConvNet model and check the classification results. For example, for the generated explanations of the Cat class, we test whether the original ConvNet and other models classify them as Cat. The higher the classification accuracy, the more class-discriminative the generated explanations of each class. Apart from the original ConvNet, we also utilize other pretrained models with different architectures, including ResNet50 (He et al., 2016), WideResNet28-10 (Zagoruyko & Komodakis, 2016), and DenseNet121 (Huang et al., 2017). The classification results are shown in Table 1.
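The check itself is straightforward; a hedged sketch of how the numbers in Table 1 can be computed (the function and variable names are ours):

import torch

@torch.no_grad()
def explanation_accuracy(classifier, S, y_S):
    """Fraction of explanation images classified as their intended class."""
    classifier.eval()
    return (classifier(S).argmax(dim=1) == y_S).float().mean().item()

# e.g., for each of {original ConvNet, ResNet50, WideResNet28-10, DenseNet121}:
# acc = explanation_accuracy(model, S, y_S)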
5 DIAGNOSING MODEL FAILURES: BACKDOOR ATTACK DETECTION
It has been shown recently that state-of-the-art deep neural networks are vulnerable to backdoor attacks (Gu et al., 2019; Chen et al., 2017). Backdoor attacks aim to embed a hidden pattern during training so that the trained model performs normally when the backdoor is not activated; otherwise, its prediction is manipulated to the attacker-designated label. The backdoor trigger can be a sparse and simple pattern (Gu et al., 2019) or a more sophisticated pattern such as the Blended attack (’Hello-Kitty’ trigger) (Chen et al., 2017) or the SIG attack (vertical stripe trigger) (Barni et al., 2019). Many detection and defense methods have been proposed to detect whether a model is backdoored (Wang et al., 2019). A major family of defenses uses the find&patch strategy, where the defense first finds the exact trigger and then filters that trigger from the dataset. However, existing defenses depend on the assumption that the backdoor trigger is sparse and simple, and are thus unable to defend against complex triggers such as Hello-Kitty or vertical stripes. We show that the proposed method can recover both simple and complex backdoor triggers accurately. Here we apply our method and the AM method to reveal the triggers learned by the ConvNet model on the CIFAR-10 dataset. Specifically, we choose three different kinds of backdoor attacks: 1) the Blended attack (’Hello-Kitty’ trigger) (Chen et al., 2017); 2) the SIG attack (vertical stripe trigger) (Barni et al., 2019); 3) the Badnet attack (grid trigger) (Gu et al., 2019). For all backdoor attacks, we select Dog as the target class. For the attack setup, we follow previous works (Li et al., 2021b;a) and set the poison rate to 0.05. Please refer to the Appendix for more details of the backdoor attacks. For the visual explanation of the model trained on CIFAR-10 with backdoor attacks, we fix the starting point at the first checkpoint in the expert trajectory and set N to 1. The other parameters for the visual explanation are the same as in Sec. 4.
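For reference, a minimal sketch of Badnet-style poisoning as we describe it (our own illustrative version; the exact trigger size, position, and value vary per attack):

import torch

def poison_badnet(images, labels, target_class=5, rate=0.05):
    """Stamp a grid trigger on a random fraction of images and flip their labels.

    images: (n, 3, 32, 32) tensor in [-1, 1]; labels: (n,) tensor.
    target_class=5 corresponds to Dog in CIFAR-10.
    """
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(len(images))[: int(rate * len(images))]
    images[idx, :, -4:, -4:] = 1.0   # illustrative 4x4 trigger in the bottom-right corner
    labels[idx] = target_class
    return images, labels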
As shown in the left parts of Figure 2, our method successfully reveals all kinds of inserted triggers with high quality, even when the poisoning rate is very low, while keeping the explanations of the other classes natural and clean. In contrast, AM fails to extract clear triggers from the poisoned models, and the recovered triggers differ clearly from the ground truth. Moreover, our method can recover the backdoor trigger quickly, as we only need one reference point, i.e., N = 1. Even though only a small number of examples are poisoned, all explanations of the Dog class consistently carry the exact trigger with the same shape and location. We can thus easily notice whether a model has been backdoored through the proposed explanation and find the corresponding trigger. Our method can then be used to filter out examples carrying the revealed trigger and purify the model.
6 VISUALIZING MODEL KNOWLEDGE
In this section, we further demonstrate that the proposed method can be used to analyze the feature representations learned by different training methods and in different training phases.
6.1 TRAINING DYNAMICS
DNNs have shown great success on a variety of tasks. However, it remains a great challenge to understand why a model generalizes well on the test set. It is thus important to know what the model has learned intrinsically throughout the training procedure. In this section, we show that the proposed method can be used as a tool to visualize model knowledge at different stages of training. We use the ConvNet model trained on the CIFAR-10 dataset as an example. To reveal the differences among the knowledge at different stages of the whole training process, we choose the starting point of the trajectory sequentially and set N = 1; that is, we make S imitate the training dynamics for only one epoch. The other parameters are the same as in Sec. 4. The knowledge learned at different stages is shown in Figure 3. We set the starting point sequentially to: (a) the random initialization point; (b) the checkpoint saved after 2 epochs; (c) after 5 epochs; (d) after 10 epochs; (e) after 20 epochs; (f) after 30 epochs.
As shown in Figure 3, in the early stage of training, the knowledge learned by the model contains rich information about the color and rough contour of the class object. For example, the background of the Ship class is always blue and that of the Horse class is brown, and rough shapes, such as a horse’s body or a car’s body, can be easily identified. As training continues, the model knowledge tends to include clean and sharp local traits of the object, such as the horse’s head and the deer’s antlers. Meanwhile, texture becomes clearer and turns into the dominant feature in the later phase of training. Although the model achieves better performance with training, the learned knowledge is actually less aligned with human perception. This observation is in line with Kumar et al. (2022), who also observed that representations from underfitting ImageNet models with modest validation accuracy achieve the best perception scores.
6.2 ADVERSARIAL TRAINING
Adversarial training (AT) has been one of the most effective methods to enhance adversarial robustness (Madry et al., 2018). At the same time, an adversarially trained "robust model" tends to produce feature representations that carry better semantic meaning and align better with human perception (Ilyas et al., 2019). Recently, adversarial perturbations have also been used to improve model generalization in both computer vision (Xie et al., 2020) and natural language processing (Gan et al., 2020). In this section, we study the model knowledge obtained by an adversarially trained model.
In the experiment, we use ℓ∞ PGD-AT (Madry et al., 2018) to train a ConvNet AT model. We use the cross-entropy loss and set the perturbation constraint to ϵ = 4/255. We set the number of inner maximization iterations to 10 and the step size to 2/255. More details of the adversarial training are given in the Appendix. The other parameters for conducting visualizations are the same as those used in Sec. 4.
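For completeness, a sketch of the ℓ∞ PGD inner maximization with the stated hyperparameters (ϵ = 4/255, 10 steps, step size 2/255); the helper itself is ours:

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=4 / 255, alpha=2 / 255, steps=10):
    """Projected gradient ascent within an l_inf ball of radius eps
    (pixel-range clipping omitted for brevity)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

# One AT step then minimizes F.cross_entropy(model(pgd_linf(model, x, y)), y).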
From Figure 4, the most obvious difference is that the contrast ratio of the adversarially trained features is much smaller than that of normal training. We find that the representation set from adversarial training has a much wider pixel range of [−5, 5], compared to [−1, 1] in normal training; we therefore project the representation set into [−1, 1] for visualization. This change of pixel range might be caused by the adversarial training mechanism: the model relies much more heavily on edge cases, since the model update depends on the loss computed on adversarial examples generated in the inner loop. As we approach the representation set using the adversarial training method, the representation set inherits this wider pixel range. Moreover, we clearly observe that adversarially trained feature representations align better with human perception and look "cleaner", which is also supported by other works (Ilyas et al., 2019; Xie et al., 2020).
6.3 NOISY LABEL TRAINING
Since the seminal work arguing that learning algorithms should cope with incorrect training examples (Angluin & Laird, 1988), machine learning with noisy labels has become a heated topic, as labels in real-world applications are often noisy and imperfect. It is therefore important to understand how the neural network’s knowledge changes when labels are noisy. On the other hand, deep learning is well known for its ability to learn very complex features and to overfit. While label noise can be seen as a challenge for current machine learning, a proper noise strength can act as a good regularizer that helps the model generalize better. In this section, we study the differences in model knowledge under different levels of label noise. We again use the ConvNet model trained on the CIFAR-10 dataset as an example. We modify the dataset by adding various levels of noise (25%, 50%, 75%) to the labels of the training set. This noise is added by taking, say in the case of 25% label noise, 25% of the examples at random and randomly permuting their labels (see the sketch below). For generating class-wise explanations, we follow the hyperparameter settings used in Sec. 4.
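A minimal sketch of this label-noise injection (our own helper):

import torch

def permute_labels(labels, noise_rate=0.25):
    """Randomly permute the labels of a noise_rate fraction of the examples."""
    labels = labels.clone()
    idx = torch.randperm(len(labels))[: int(noise_rate * len(labels))]
    labels[idx] = labels[idx][torch.randperm(len(idx))]  # permute within the subset
    return labels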
As shown in Figure 5, a small amount of label noise, such as 25%, does help to achieve more human-like model knowledge. For example, the birds and cats in Figure 5 have much sharper and semantically better features than those from training without any label noise. Our observation is that label noise can act as a good regularizer that brings the model knowledge closer to human perception. However, as the noise level increases, the quality drops significantly; when the noise level reaches 75%, the learned knowledge becomes barely recognizable.
7 CONCLUSION
In this paper, we propose a global visual explanation method that generates high-quality, class-discriminative explanations in the input space. We further show that the proposed method can be utilized for debugging model failures, such as revealing backdoor triggers. Finally, we devise a way to study the model knowledge under different training mechanisms, which sheds light on building more generalizable and trustworthy machine learning methods.
B FULL VISUALIZATION RESULTS
In Sec. B.1, we show the full visualization results of the class-wise explanations. In Sec. B.2, we show the full class-wise explanations of models poisoned by backdoor attacks on the CIFAR-10 dataset. In Sec. B.3, we show the full class-wise explanations of different phases of the model training process on the CIFAR-10 dataset. We demonstrate the full class-wise explanations of the adversarially trained model on the CIFAR-10 dataset in Sec. B.4, and of models trained under different levels of label noise on the CIFAR-10 dataset in Sec. B.5.
2https://github.com/utkuozbulak/pytorch-cnn-visualizations
B.1 THE VISUALIZATION RESULTS OF CLASSWISE EXPLANATIONS
We demonstrate the whole set of class-wise explanations from our method for the ConvNet on CIFAR-10, SVHN, GTSRB, and Tiny ImageNet in Figures 6, 7, 8, 9, and 10, respectively. For CIFAR-10, SVHN, and GTSRB, we set the size of the class-wise explanation set to 10 images per class, and to 1 for Tiny ImageNet. For all datasets, we use standard Gaussian noise as the initialization of our explanation set S. For the VGG-11 and AlexNet models, we show the visualizations generated by our method on CIFAR-10 in Figure 11 and Figure 12.
The visualizations generated by the activation maximization method for the ConvNet on these four datasets are shown in Figures 13, 14, 15, 16, and 17.
The class-wise explanations for the ConvNet on CIFAR-10 are shown in Figure 6. Each row corresponds to one class.
The class-wise explanations for the ConvNet on SVHN are shown in Figure 7.
The class-wise explanations for the ConvNet on GTSRB are shown in Figures 8 and 9. Figure 8 shows the class-wise visualizations for class labels 0–18, and Figure 9 those for class labels 19–42.
Figure 9 below shows the class-wise visualizations for class labels 19–42 on the GTSRB dataset.
For the large-scale dataset Tiny ImageNet, we use the ConvNet with 4 convolution blocks. Due to the computation and memory cost, we set the size of the class-wise explanation set to 1 image per class on Tiny ImageNet. The class-wise explanations for the ConvNet on Tiny ImageNet are shown in Figure 10. Each subfigure corresponds to one class.
The class-wise explanations for VGG-11 and AlexNet on CIFAR-10 generated by our method are shown in Figures 11 and 12 below. Each row corresponds to one class.
Similar to the results on the ConvNet model, our method still generates high-quality visualizations on larger CNN models, and our class-wise explanations show apparent class-wise features for each class. This verifies that our method generalizes well across network architectures.
The class-wise explanations generated by AM for the ConvNet on CIFAR-10 are shown in Figure 13. Each row corresponds to one class.
The class-wise explanations generated by AM for the ConvNet on SVHN are shown in Figure 14.
The class-wise explanations generated by the AM method for the ConvNet on GTSRB are shown in Figures 15 and 16. Figure 15 shows the class-wise visualizations for class labels 0–18, and Figure 16 those for class labels 19–42.
Figure 16 below shows the AM class-wise visualizations for class labels 19–42 on the GTSRB dataset.
For the large-scale dataset Tiny ImageNet, we use the ConvNet with 4 convolution blocks. Due to the computation and memory cost, we set the size of the class-wise explanation set to 1 image per class on Tiny ImageNet. The class-wise explanations generated by AM for the ConvNet on Tiny ImageNet are shown in Figure 17. Each subfigure corresponds to one class.
B.2 THE VISUALIZATION RESULTS OF BACKDOOR ATTACKS
We first show the visualizations of the three different triggers obtained by activation maximization in Figure 18. The visualizations of backdoor learning under the three attacks obtained by our method are shown in Figures 19, 21, and 20.
The visualizations of the three different triggers from our method are shown in the following three figures.
B.3 THE VISUALIZATION RESULTS OF TRAINING DYNAMICS
In this section, we show the class-wise visualizations of different phases of the training process obtained with our method.
B.4 THE VISUALIZATION RESULTS OF AT MODEL
In this section, we show the class-wise explanations of the adversarially trained model.
B.5 THE VISUALIZATION RESULTS OF NOISY LABEL TRAINING
In this section, we show the class-wise explanations of models trained with different levels of label noise in the following figures. | 1. What is the focus and contribution of the paper regarding explanation sets?
2. What are the strengths of the proposed approach, particularly in terms of bi-level optimization and imitation learning?
3. What are the weaknesses of the paper, especially regarding quantitative evaluations and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper describes a new idea of extracting a class-specific explanation set from the entire training set, and casts the learning problem into a bi-level optimization framework where the inner optimization problem is to find the explanation set and the outer optimization problem is to encourage similarity between the parameters learned from the full set and from the explanation subset. To mitigate the computational problem of the inner optimization, the authors use imitation learning to imitate short ranges of the original model training process. The authors primarily use qualitative evaluation against the AM method in terms of classification performance, backdoor attack detection, and visualizing model knowledge.
Strengths And Weaknesses
The idea of identifying an explanation set for each class, as well as formulating the problem as bi-level optimization, is novel.
The imitation learning approach to identify the explanation set is clever and effective.
Qualitative experiments comparing performance are comprehensive and convincing.
On discriminative power: The only quantitative evaluation is the classification accuracy shown in Table 1 on four datasets and four CNN architectures. It seems that the authors only show results for the class Cat. Can the authors show results on other classes and/or in other settings (e.g., backdoor attacks) as well to substantiate the performance?
For binary discriminative classification, it would not be unreasonable to compare with a vanilla margin classifier such as an SVM, at least on smaller datasets, in terms of the effectiveness of selecting the explanation subset for each class, as well as performance and computational complexity.
The authors set the size of the class-specific explanation set to 10, which may work well for classes without outliers or with compact representations; for others, explanation quality can decrease dramatically. A related issue is the increase in computational load with the size of the explanation set.
The literature review appears to be outdated and incomplete. For example, among the so-called local explanation methods, the influential body of attribution-based work represented by Integrated Gradients (IG) is not even mentioned. Most of the related works discussed are outdated and do not reflect the state of the art.
Clarity, Quality, Novelty And Reproducibility
The paper is clear in describing the proposed methodology and results, but less so in the literature review and introduction. The reviewer does not agree with the authors’ classification of global explanation methods as XAI methods based on surrogate models; global methods may also refer to attribution-based methods that require smoothing or integrating over the loss surface, for example IG-type methods that require a reference point to generate explanations. The proposed method of explaining by examples is novel. Compared with local explanation methods based on nearest neighbors or prototypes, it is more suitable for real-world deployment thanks to the pre-selected explanation set for each class, leading to efficient computation at inference time. |
ICLR | Title
FiD-Light: Efficient and Effective Retrieval-Augmented Text Generation
Abstract
Retrieval-augmented generation models offer many benefits over standalone language models: besides a textual answer to a given query, they provide provenance items retrieved from an updateable knowledge base. However, they are also more complex systems and need to handle long inputs. In this work, we introduce FiD-Light to strongly increase the efficiency of the state-of-the-art retrieval-augmented FiD model, while maintaining the same level of effectiveness. Our FiD-Light model constrains the information flow from the encoder (which encodes passages separately) to the decoder (using concatenated encoded representations). Furthermore, we adapt FiD-Light with re-ranking capabilities through textual source pointers, to improve the top-ranked provenance precision. Our experiments on a diverse set of seven knowledge-intensive tasks (KILT) show FiD-Light consistently improves the Pareto frontier between query latency and effectiveness. FiD-Light with source pointing sets substantial new state-of-the-art results on six KILT tasks for combined text generation and provenance retrieval evaluation, while maintaining reasonable efficiency.
1 INTRODUCTION
Enabling machine learning models to access information contained in parametric or non-parametric storage (i.e., retrieval-enhanced machine learning) can lead to efficiency and/or effectiveness improvements in a wide range of learning tasks (Zamani et al., 2022). For example, retrieval-augmented generation (Lewis et al., 2020), which is the focus of this paper, has manifold benefits over closed-loop language modelling in knowledge-intensive tasks: answers can be grounded in (multiple) specific pieces of information, which enables clear attribution (Dehghani et al., 2019; Rashkin et al., 2021; Lamm et al., 2021); the knowledge base can easily be managed, updated, and swapped (Izacard et al., 2022); the decomposition into retrieval and generation modules offers clear efficiency-effectiveness tradeoff controls; and the data structure of combined retrieval and text generation enables many insightful failure analyses. However, with these benefits also come downsides, such as higher system complexity with higher training and inference cost. Therefore, our goal is to reduce costs as much as possible, while retaining effectiveness, to make these benefits more widely available.
The most effective approach for knowledge-intensive tasks, such as those contained in the KILT benchmark (Petroni et al., 2021), is the Fusion-in-Decoder (FiD) model proposed by Izacard & Grave (2020). The FiD model uses an external retriever, such as a dense retrieval model, to gather candidate passages, which are encoded together with the query by a T5 encoder (Raffel et al., 2020); the encoded vectors are concatenated and fed through a T5 decoder to produce a single output string. FiD can synthesize answers from multiple different sources, which leads to state-of-the-art results in many tasks from open-domain QA to fact verification (Hofstätter et al., 2022; Izacard et al., 2022).
While undoubtedly the leading architecture – in terms of effectiveness for knowledge-intensive generation tasks – the FiD model is resource intensive. In state-of-the-art configurations, concatenating all encoded tokens before decoding often leads to sequences longer than ten thousand vectors; coupled with auto-regressive decoding, this results in high inference latency. In Figure 1 we plot the average per-query latency, measured on a single TPUv4, of the encoder and decoder modules of FiD.1 The first observation is the overpowering 93% of time spent on decoding in FiD. A common and straightforward approach to reduce the latency of FiD is to reduce the number of input passages, e.g., to only 10 passages. While this naturally reduces the overall latency, the decoding still takes 10 times as long as the encoding (see Figure 1). Crucially, this approach also reduces the model’s effectiveness substantially, as we show later in this work (see §4.3). To overcome the inefficiencies of the decoding, we propose FiD-Light, a simple yet effective adaptation of the FiD model. The connection between the encoder and decoder has a large capacity for information in FiD. In contrast, the retrieval community showed that in applications such as dense retrieval with dot-product scoring, encoded information may be compressed to a fraction of the original input length, including representing passages with a single vector (Hofstätter et al., 2021) or multiple vectors (Chen et al., 2020). Following in these footsteps, we propose to compress the number of vectors per encoded passage to a fraction of the input vectors before they are accessed by the decoder. Using this approach, FiD-Light is able to ingest a large number of passages with strongly reduced latency, as illustrated in Figure 1. Here we still use 40 passages, showing the same encoding time as FiD, but substantially faster decoding (now on par with the encoding time), for a total latency lower than FiD with 10 passages.
The knowledge-intensive tasks we aim to solve ideally require a system to produce both a generated output text and a ranked list of provenance items from the knowledge base. However, FiD is limited to producing output text only, and falling back to the original candidate ranking is usually sub-optimal, with low precision. To incorporate re-ranking capabilities into FiD-Light, we adapt the passage-marker workflow proposed by Lakhotia et al. (2021) as part of FiD-Ex: they marked the input passages with textual indices and trained the model to output the relevant indices in the output text. We find that using these textual indices or source pointers directly as output, as Lakhotia et al. (2021) proposed, is brittle and prone to distribution shifts in the number of expected relevant passages between training and evaluation (see §4.2). Therefore, our FiD-LightSP approach re-ranks the selected passages to the top of the ranked list, without discarding the rest of the retrieved list, for higher robustness and improved results.
We conduct experiments on seven tasks of the KILT benchmark composed by Petroni et al. (2021), spanning open-domain QA, slot filling, fact verification, and dialogue tasks. We study the following research questions to demonstrate the efficacy of our proposed FiD-LightSP model:
RQ1 What impact does training the retrieval module have on FiD-LightSP downstream results?
The quality of the final result is strongly bound by the recall quality of the retriever module. While many complex end-to-end training procedures have been proposed (Singh et al., 2021; Izacard et al., 2022), we focus on simple yet effective, directly supervised dense retrieval training. We show that a simple retrieval training comfortably outperforms a zero-shot retrieval baseline from Hofstätter et al. (2022), and the resulting FiD-LightSP downstream results take a major step towards a realistic oracle-retriever ceiling.
RQ2 How robust is our source pointing and re-ranking workflow applied to FiD and FiD-Light?
We use the available passage relevance information for each task in the KILT benchmark to train our source pointer output via text markers. We train the FiD(-Light) generator to output the indices of all relevantly retrieved passages during training, before generating the textual answer. We observe that FiD(-Light)SP learns an expected distribution over the number of selected passages, which might not match the relevance distribution at evaluation time. To mitigate this problem, we propose to use the source pointers to re-rank the initial list, and we show this improves the results over FiD-Ex. Comparing the effectiveness of the source pointers between different FiD-Light settings and the FiD baseline, we find FiDSP rapidly loses effectiveness when the number of input passages is reduced, while FiD-LightSP is able to hold the passage precision at much lower latency.
1All our measurements in this work are conducted on TPUv4s; however, we confirmed that using V100 GPUs we observe a similar ratio of time spent in the encoder vs. the decoder of FiD and FiD-Light.
RQ3 How does FiD-LightSP compare to the FiDSP baseline in efficiency-effectiveness tradeoffs?
The common approach to speed up FiD is to reduce the number of input passages. Against this, we compare our FiD-LightSP model using a static number of passages but varying the number of vectors fed into the decoder, as well as changing the T5 backbone size. We show that while FiDSP with fewer passages strongly degrades, FiD-LightSP retains most of the initial maximum effectiveness of FiDSP while being 3× faster. This Pareto-optimal result between latency and effectiveness is complemented when we increase the T5-backbone sizes in FiD-LightSP to receive the benefits of larger models, while still outperforming the initial FiDSP baseline in terms of efficiency. Overall, FiD-LightSP is Pareto optimal on six of the seven tested tasks.
RQ4 How does FiD-LightSP compare to related methods on the KILT benchmark?
We submitted three representative configurations of FiD-LightSP to the blind-evaluated KILT leaderboard test set to compare them to other methods for knowledge-intensive tasks. We evaluate FiD-LightSP on the main metric of the KILT benchmark: the combined KILT scores (which only count a text generation score if the R-Precision for the query is 1). We show FiD-LightSP outperforms previous SOTA models by considerable margins on the KILT scores on six tasks. We set new SOTA results compared to the previous best methods on:
- QA: HotpotQA +11.1 K-EM (+61.3%), NQ +7.5 K-EM (+17.2%), TriviaQA +5.8 K-EM (+10.0%)
- Slot Filling: zsRE +10.8 K-AC (+14.8%), T-REx +0.5 K-AC (+0.7%)
- Fact Verification: FEVER +6.0 K-AC (+7.6%)
We hope these results demonstrate to the community that SOTA results are achievable with reasonable efficiency and that efficient retrieval-augmented generation has a promising future ahead.
2 BACKGROUND AND RELATED WORK
In this section, we first review the FiD model and FiD-Ex workflow, which adds textual explanation markers to FiD. We further discuss other related work in this area.
2.1 FID (FUSION IN DECODER) WITH EXPLANATIONS
A critical capability for retrieval-augmented models is to be able to synthesize and utilize information from multiple distinct retrieved items (Zamani et al., 2022). To effectively implement this paradigm, Izacard & Grave (2020) proposed the FiD model, which re-wires the computational graph between an off-the-shelf pre-trained Transformer encoder and decoder (Vaswani et al., 2017). Usually FiD is initialized with the pre-trained T5 model (Raffel et al., 2020). Given a query q, we retrieve a set of n candidate passages using a separate retrieval module. The retriever is independently trained, and can take any traditional, neural, or hybrid architecture. As in Izacard & Grave (2020), we use a single dense retriever, as it has been shown to outperform traditional retrieval methods (Hofstätter et al., 2022). To encode the information, FiD concatenates the query q with each retrieved passage p and independently feeds the sequences (one per index i) through a Transformer encoder (TE):
e_i = \mathrm{TE}([\text{“query: ”}; q; \text{“context: ”}; p_i]) \quad (1)
The resulting encoded representations – using one vector per token – are concatenated into a single long sequence, which is fed through the Transformer decoder (TD), autoregressively during inference, to produce a single output sequence o:
o = \mathrm{TD}([e_1; e_2; \ldots; e_n]) \quad (2)
FiD has two main limitations: (1) the text-only output does not provide any information about the exact passage(s) used to synthesize the output; and (2) the long input sequence leads to highly inefficient autoregressive decoding (as shown in Figure 1). While the expected output is relatively short (on the order of dozens of tokens), the input to the decoder is large, with O(n · (|q| + |p|)) tokens (on the order of thousands of tokens). To alleviate limitation (1), Lakhotia et al. (2021) adapt the FiD workflow with textual explanations (FiD-Ex), inspired by the WT5 (Why?, T5) concept proposed by Narang et al. (2020). For FiD-Ex, the FiD architecture is left untouched; Lakhotia et al. (2021) only adapt the textual input and target output. The input to the encoder is augmented with indices (from 1 to n) to identify individual passages:2
e_i = \mathrm{TE}([\text{“query: ”}; q; \text{“index: ”}; i; \text{“context: ”}; p_i]) \quad (3)
And the target output t during training is augmented with the indices (using the regular tokens for the numbers, and spaces as separators for multiple indices) of all the known relevant passages R+ in the retrieved set:
\hat{t} = [\text{“index: ”}; \{r \mid r \in R^{+}\}; \text{“text: ”}; t] \quad (4)
On one hand, this textual formulation packs more capabilities into the same text-based architecture; on the other hand, we note that this discrete selection of the top-|R+| passages from the candidate set is a strong departure from the prevalent pairwise re-ranking models. It opens a new range of induced biases about expected distributions of |R+| not studied before. During inference the output is parsed to extract the indices as numbers and to remove the additional textual markers before evaluating the output text.
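To illustrate Eqs. 3 and 4 concretely, here is a small sketch of how the encoder inputs and the training target can be assembled; the format strings follow the equations, while the helper functions themselves are ours:

def build_encoder_inputs(query, passages):
    # One input string per retrieved passage, indexed from 1 to n (Eq. 3).
    return [f"query: {query} index: {i} context: {p}"
            for i, p in enumerate(passages, start=1)]

def build_target(relevant_indices, answer_text):
    # Indices of the known relevant passages, then the answer text (Eq. 4).
    indices = " ".join(str(r) for r in relevant_indices)
    return f"index: {indices} text: {answer_text}"

# build_target([2, 5], "42") -> "index: 2 5 text: 42"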
2.2 RELATED WORK
Efficient Generation Models. A key component in enabling the ubiquitous use of text generators at scale, besides their safety, is their efficiency. Naturally, many studies work towards this goal from various angles. Schuster et al. (2022) propose an adaptive early-exiting language model, which exits the decoder stack of Transformer layers early for easy-to-predict tokens. The LongT5 model focuses on improving the efficiency of the encoder for long input sequences (Guo et al., 2021); in contrast, we focus on decoder efficiency, as FiD’s encoder input is usually short. We believe our FiD-Light adaptations are orthogonal to many other algorithmic and engineering-based generation efficiency improvements and can be combined with them in future work. For a comprehensive overview of efficient Transformer architectures, we refer the reader to Tay et al. (2022).
Retrieval-Enhanced Machine Learning. The foundational retrieval-augmented models, e.g., FiD (Izacard & Grave, 2020), RAG (Lewis et al., 2020), and REALM, (Guu et al., 2020) are trained to solve individual tasks. Many of their recent improvements optimized end-to-end processes (e.g., EMDR2 (Singh et al., 2021)), ensembling multiple modules (e.g., R2-D2 (Fajcik et al., 2021)), or creating multiple training loops to update the indexed documents multiple times (e.g., Hindsight (Paranjape et al., 2021)). In contrast, we focus on architectural efficiency improvements with a simple training paradigm. Recently, more task-independent retrieval-enhanced language models emerged, such as retrieving from a text-snippet database (Borgeaud et al., 2021) or learning to retrieve from the web with reinforcement learning (Nakano et al., 2021). For more information on retrieval-enhanced machine learning models, we refer the reader to Zamani et al. (2022).
Improving and Adapting the FiD Model. To integrate passage relevance prediction into FiD, Asai et al. (2021) add a second decoding module, which is called for every query-passage sequence to indicate its relevance. They also use this setup to generate silver relevance scores for unjudged passages. Yu et al. (2022) replace the retrieval module with a large language model that generates supporting documents, which are then fused to generate the answer by a default FiD implementation. The current top systems on the KILT leaderboard (Hofstätter et al., 2022; Izacard et al., 2022) use strong retrievers in combination with large T5 backbones for FiD. They also improve the supervised training by using better data sampling or pre-training procedures for more data-efficient fine-tuning. We continue in the spirit of these related works with additional efficiency and capability improvements for FiD.
2Note we adapted the formulation of Lakhotia et al. (2021) from sentence markers to passage indices, to make the formulation more general.
3 FID-LIGHT WITH SOURCE POINTERS
With FiD-LightSP we overcome the two main limitations of the FiD-Ex model and workflow: we drastically increase the efficiency of the decoder by reducing its computational requirements, and we improve the robustness of the passage selection with a source-pointing workflow, by shifting our view from an explanation to a second task solved in parallel: re-ranking passages. We provide an overview of our FiD-LightSP model and source pointer workflow in Figure 2.
Decoder Efficiency. Following our initial observation that FiD spends most time in the decoding phase (Figure 1), we adapt the original FiD decoding step (Eq. 2) to reduce the length of each encoded query-passage pair to k vectors via a function fk:
ô = TD([fk(e1); fk(e2); ...; fk(en)]) (5)
This reduces the input length from the previous O(n ∗ (|q| + |p|)) to O(n ∗ k), where k ≪ |q| + |p|. The exact compression ratio depends on the required tokens for the used tasks; we experiment with configurations from a 6× to 384× reduction. In our experiments, for simplicity, we instantiate fk as the first k vectors of each sequence. While this architecture change is simple, it strongly disrupts the previous assumption that every encoded token is accessible for decoding in the T5 architecture. Its simplicity also means that the community can easily adapt existing codebases with this change to benefit from the efficiency improvements.
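As a concrete illustration of the first-k instantiation of fk in Eq. (5), consider the following sketch; the tensor shapes and helper names are assumptions for illustration, not the actual T5X code.

```python
import torch

def first_k(encoded: torch.Tensor, k: int) -> torch.Tensor:
    # encoded: (seq_len, d_model) vectors of one query-passage pair;
    # keep only the first k vectors (f_k in Eq. 5).
    return encoded[:k]

def compress_for_decoder(encoded_passages, k):
    # encoded_passages: list of n tensors, each (seq_len, d_model).
    # The result has n * k vectors instead of n * (|q| + |p|).
    return torch.cat([first_k(e, k) for e in encoded_passages], dim=0)

# Example: 40 passages, 384 encoded tokens each, d_model = 768.
passages = [torch.randn(384, 768) for _ in range(40)]
decoder_input = compress_for_decoder(passages, k=8)
print(decoder_input.shape)  # torch.Size([320, 768]) instead of [15360, 768]
```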
Source Pointing Robustness. To enable source pointing in FiD-Light, we train the model with the source pointing concept proposed by Lakhotia et al. (2021) in FiD-Ex. Our novel contribution is how we handle the output of the source pointers at inference time. If we use them directly as the result, as in FiD-Ex, we are prone to instability in the number of returned passages. The question of processing the output further almost becomes philosophical: if we treat the source pointers as explanations, we cannot process them any further without corrupting the explanation. While there might be a correlation between the textual output and the source-pointed passages, we treat finding the source passages as a concurrent task to generating the text output. Because we do not claim them to be explanations, we can process them further.
We propose to merge the initial ranked candidate list of passages C with the source-pointer-selected passages by re-ranking the selected passages (found in the decoded output ô) to the top of the list:
Ĉ1:r = [[r | r ∈ ô]; [r | r ∈ C, r /∈ ô]] (6)
To compute all selected passages r ∈ ô, we first parse the output ô with a simple parser for the trained format given in Eq. 4, including a conversion from the text tokens representing the indices to integers. In case the model selects multiple passages, we keep the selection order of the model output. If a task contains graded relevance annotations for training passages, we can train the model to follow the grades; if only binary relevance is available (as is the case with KILT), we keep the rank-ordering of the multiple selected passages from the initial candidate list. This change leads to higher robustness in our provenance results, as distribution differences between training and evaluation would otherwise lead to a disadvantaged position, as we demonstrate in Section 4.2.
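Taken together, the re-ranking step of Eq. (6) can be sketched as follows for the binary-relevance (KILT) case, where selected passages keep their rank order from the initial candidate list; the names are illustrative assumptions.

```python
def source_pointer_rerank(candidates, selected):
    # candidates: initial ranked list of passage ids from the retriever (C).
    # selected: passage ids parsed from the decoder output (ô).
    # Move selected passages to the top, preserving their rank order from
    # the initial candidate list (binary relevance case), keep the rest.
    chosen = set(selected)
    top = [c for c in candidates if c in chosen]
    rest = [c for c in candidates if c not in chosen]
    return top + rest

print(source_pointer_rerank([10, 20, 30, 40], selected=[30, 10]))
# -> [10, 30, 20, 40]
```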
4 RESULTS
We empirically address the research questions laid out in the introduction. We study the importance of the retriever module, the efficacy of the source pointer workflow, the tradeoff between efficiency and effectiveness using a controlled baseline, and finally we compare our FiD-LightSP to related methods on the blind-evaluated KILT leaderboard. We detail our experiment design in Appendix A.
4.1 INFLUENCE OF THE RETRIEVER
The retrieval module is the backbone for all retrieval-augmented generation. The generation quality is to a large extent bound by the retrieval quality, especially if the retrieved information is not memorized by the generator. To answer RQ1 What impact does training the retrieval module have on FiD-LightSP downstream results? we have to be careful to acknowledge the uncertainty of sparse ranking annotations (Hofstätter et al., 2022).
To accurately quantify the retriever’s contribution, we compare the downstream effect of a zero-shot, a fine-tuned (methodology described in detail in Appendix B), and two oracle retrievers in Table 1. In the first section (rows 1-3) retrievers are evaluated without access to relevance judgements (a real-world environment), whereas in the second section (rows 4 & 5) we infuse relevance information during the evaluation (oracle environment). We find that training the retriever with in-domain training data (row 2) consistently improves results over the zero-shot retriever (row 1) as used by Hofstätter et al. (2022). Always ingesting all known relevant passages during training (row 3), on the other hand, does not significantly change the downstream performance.
To account for annotation uncertainty in our retriever-as-oracle experiments, we study two scenarios: 1) infusing all known relevant passages into the retrieved candidate list (row 4) and 2) setting the candidates to be only the known relevant passages (row 5). Commonly, the community compares its results only against the second oracle scenario, showing a large headroom for future improvements for the retriever (Glass et al., 2021; Shuster et al., 2021). However, we argue that, due to the sparsity of the annotations, we should compare the results to our more realistic first oracle scenario (row 4). It still shows a significant opportunity for improvement, albeit the total headroom is roughly halved across the board. Future work may explore more fine-tuning aspects, but we decide to select the simple fine-tuned retriever (row 2).
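For clarity, the two oracle constructions can be sketched as follows (a minimal sketch with illustrative names; we assume infused relevant passages are placed at the top of the candidate list):

```python
def oracle_infuse(candidates, relevant, n=40):
    # Scenario 1 (row 4): make sure every known relevant passage is in
    # the candidate list; here we assume infused items go to the top.
    infused = list(relevant) + [c for c in candidates if c not in set(relevant)]
    return infused[:n]

def oracle_only_relevant(relevant):
    # Scenario 2 (row 5): candidates are exactly the known relevant set.
    return list(relevant)
```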
4.2 SOURCE POINTER ROBUSTNESS
While the initial source pointer concept has been proposed by FiD-Ex as sentence markers for explainability, we are the first to study their application in the more complex passage ranking context combined with our compressed FiD-Light architecture. Therefore, we study RQ2 How robust is our source pointing and re-ranking workflow applied to FiD and FiD-Light?
As introduced earlier, we train the source pointing capabilities into FiD(-Light) by flagging all known relevant passages retrieved in the candidate passage set. By directly using the size of the known relevant item set during training, we instill a strong expectation prior into the model of how many passages ought to be relevant for a given task. Note that if a known relevant passage is not retrieved, we cannot use it for training the generator. In Figure 3, we observe these effects for four representative tasks of the KILT benchmark. Each of these tasks shows a different expected distribution target. We note that the training distribution differs from the target, as it skips non-recalled relevant items. We find the model output distribution on the validation set to closely match the training distribution (albeit here we make no claims about the correctness of the selected passages).
Figure 3: Distributions of source pointer passages for FiD-LightSP (T5-Base). Four panels, (a) TriviaQA, (b) HotpotQA, (c) FEVER, and (d) zsRE, plot the relative occurrence (0-100%) of the number of selected source passages (0-3) for the target, training, and model output distributions.
Table 2: Comparing our source pointer (SP) re-ranking with the direct model output (Ex) using KILT scores for passages and documents. Bold indicates improvement of SP over Ex larger than the 95% CI.

                        Open Domain QA             Fact       Slot Fill.
                     HotpotQA     TriviaQA        FEVER         zsRE
Model              Pas.   Doc.   Pas.   Doc.   Pas.   Doc.   Pas.   Doc.
T5-Base
1 FiD-Ex           25.4   25.6   22.0   34.1   70.1   77.2   70.1   71.6
2 FiDSP            25.8   26.1   23.1   39.5   71.1   78.3   70.1   71.7
3 FiD-Light-Ex     23.5   23.7   18.8   32.1   70.0   77.1   69.3   71.2
4 FiD-LightSP      23.8   24.1   19.8   37.6   71.6   78.1   69.3   71.4
T5-Large
5 FiD-Light-Ex     26.6   26.9   22.6   36.3   72.6   79.2   70.9   72.7
6 FiD-LightSP      26.9   27.3   23.5   41.4   74.2   80.4   70.9   72.8
T5-XL
7 FiD-Light-Ex     28.2   28.4   24.8   38.7   73.9   80.5   73.1   75.9
8 FiD-LightSP      28.4   28.7   25.7   43.8   75.5   81.7   73.2   76.1
However, focusing on higher passage counts in Figure 3 (a) TriviaQA and (c) FEVER shows that the model struggles to output 3 passages as often as it is expected to. This weakness becomes visible when we evaluate the standard R-Precision of the selection, which needs at least R returned items to reach the full score, given R known relevant items.
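A minimal sketch of R-Precision illustrates why outputting fewer than R items caps the score (an illustrative helper, assuming set-based matching):

```python
def r_precision(ranked, relevant):
    # R-Precision: fraction of the top-R ranked items that are relevant,
    # where R = number of known relevant items. A model that returns
    # fewer than R items can never reach the full score of 1.0.
    R = len(relevant)
    return len(set(ranked[:R]) & set(relevant)) / R

print(r_precision(["p3", "p7"], relevant=["p3", "p7", "p9"]))  # -> 2/3
```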
To overcome this limitation, we propose to move the selected passages to the top of the ranked list instead of directly outputting the selection (FiD-Ex). This essentially transforms FiD(-Light) into a re-ranking model. In Table 2, we show the ablation study to confirm the usefulness of the proposed re-ranking on final downstream results. Our approach is strictly positive or neutral for the results, as we are filling up holes that would otherwise result in penalties. Confirming our hypothesis originating in Figure 3, we see statistically significant improvements across all configurations on the two tasks where the model struggled to fill up the full distribution: TriviaQA and FEVER.
While in this work we do not change the KILT evaluation methodology and optimize our models towards the current standard evaluation, we note that these findings represent interesting avenues for future work requiring evaluation setup changes: We may choose to train the model to only select a single passage or even re-rank the whole list with our textual source pointers as re-rankers.
We might be tempted to directly compare the inter-setting results in Table 2, for example FiDSP in row 2 with FiD-LightSP in row 4 (T5-Base). Here we observe, especially on HotpotQA and TriviaQA, a quality reduction, which would lead us to the conclusion that source pointing in FiD-Light is less robust than in FiD. To put these results into perspective, we selected HotpotQA as an example and plot the query latency as well as the R-Precision of the models in Figure 4. For FiDSP, we modulate the number of input passages; for FiD-Light, we modulate the number of vectors k fed to the decoder and the backbone size. We clearly observe a stark reduction in quality for the FiDSP model when the number of input passages is reduced. On the other hand, our FiD-LightSP variants are able to almost keep the same level of effectiveness, and larger backbones, while still faster than the FiDSP baseline, also produce a higher quality. Therefore, an equal-efficiency comparison in Table 2 involves row 2 and row 8 (using T5-XL). We dive deeper into these tradeoffs in the next section.
4.3 EFFICIENCY - EFFECTIVENESS TRADEOFF
Ultimately, we as a community want our research to be applied to real-world use, to benefit society. A major component, besides concerns about safety and social biases as summarized by Bender et al. (2021), is the efficiency of the deployed system. To understand the impact of our proposed FiD-Light architecture, we study RQ3 How does FiD-LightSP compare to the FiDSP baseline in efficiency-effectiveness tradeoffs?
The KILT benchmark gives us the opportunity to study our changes in a large variety of tasks, with different properties, so that we can make confident claims about the efficacy of our changes. In Figure 5 we show our ablation results per task. For each task we report the average query latency (y-axes) and the main KILT-score effectiveness metric (x-axes). The gray line indicates our FiD baseline by modulating input passage counts – from 40 down to 1. Our FiD-Light models all have access to the full 40 passages, and here we are modulating T5 sizes as well as the number of vectors (1, 8, 32, 64) fed into the decoder.
We start our discussion with the open domain QA tasks in Figure 5 (a, b, & c) as they provide a similar picture: Comparing our FiD-LightSP model with the baseline, we do observe a drop in effectiveness from the strongest baseline (gray dotted vertical line) when using the same T5-Base model. However, due to the more efficient architecture, we are able to swap backbones and earn the benefits of those larger models in terms of effectiveness. At the same time, we outperform the latency of the baseline as well, shifting the Pareto optimum. Interestingly, the FiD-LightSP model with T5-XL and only a single encoded vector per passage shows a larger drop in effectiveness than the counterparts for smaller T5’s. The only 2-label classification task, FEVER, shown in Figure 5 (d), exhibits the lowest reduction in effectiveness when constraining the number of encoded vectors in FiD-LightSP. This is likely due to the fact that only little generation is necessary to solve the task. Therefore, our FiD-LightSP configurations improve the Pareto optimum again. The slot-filling tasks in Figure 5 (e & f) show less impact of the T5 size, with little improvement for Large and XL over the Base configurations. Fortunately, we also observe a similarly small reduction in effectiveness for reducing the number of encoded FiD-LightSP vectors, leading to our final Pareto gains.
In conclusion, we observe clear and statistically significant improvements between FiDSP and FiD-LightSP – both in terms of effectiveness and efficiency – across a variety of KILT tasks. FiD-LightSP can lower the query latency by more than 2x and still deliver higher effectiveness by upgrading the language model backbone size.
4.4 COMPARISON TO RELATED WORK
In addition to showing improvements over our own baselines, we now demonstrate the effectiveness of FiD-LightSP in a broader context and answer RQ4 How does FiD-LightSP compare to related methods on the KILT benchmark? The community is fortunate to have a blind-evaluation leaderboard for all KILT tasks3 at our disposal to compare our approaches on a level playing field, where everyone may submit their highly-tuned systems. While the top spots of a leaderboard are typically not populated by efficient methods, we nevertheless submitted three different configurations of FiD-LightSP – all more efficient than our FiD baseline with 40 input passages. We selected a single checkpoint to submit for all tasks, so as to demonstrate our multi-task capabilities and not overfit a single submission to a single task.
We show the leaderboard results for the main KILT-score metrics in Table 3. For the independent breakdown of text generation and retrieval leaderboard scores, we direct the reader to Appendix C. Even our T5-Base configuration in row 8 already outperforms previous SOTA results on five out of the seven tasks. With T5-Large and T5-XL (both continuously reducing the number of encoded vectors to increase efficiency), we set new SOTA results on six out of the seven tasks. Only WoW remains a weak spot, albeit not dramatically different from previous results. The fusion capabilities of FiD paired with our robust source pointing set especially impressive results on the challenging HotpotQA task, where exactly two distinct passages containing parts of the answer have to be placed on top of the ranked list. Here, we outperform previous methods by 61% or 11.1 KILT-EM points. On the other two QA tasks we reach +7.5 K-EM (+17.2%) for NQ and +5.8 K-EM (+10.0%) for TriviaQA. The zsRE task with +10.8 K-AC (+14.8%) and FEVER with +6.0 K-AC (+7.6%) round off our strong new SOTA results across a variety of tasks.
5 CONCLUSION
We proposed the FiD-Light model with a robust source pointing workflow to overcome efficiency and versatility limitations in the previous state-of-the-art retrieval-augmented generation model FiD. We adapted the FiD model architecture to compress the amount of information fed to the decoder, for drastically reduced inference latency. We demonstrated at the same time only a modest reduction in effectiveness, which can be alleviated with larger T5-backbones leading to Pareto optimal results on six KILT tasks. Our multi-task system achieved substantial new state-of-the-art results for combined retrieval and generation metrics on six KILT tasks compared to previous methods on the public leaderboard. These results demonstrate that we do not need to always scale up to achieve the highest effectiveness, enabling more researchers to work on this problem in the future.
3The leaderboard is available at: https://eval.ai/web/challenges/challenge-page/689
A EXPERIMENT DESIGN
Implementation. Our experiment setup follows the state-of-the-art multi-task relevance sampled training sets of Hofstätter et al. (2022). All our experiments are based on the T5X framework (Roberts et al., 2022). We start with a GTR-Base dense retrieval model (Ni et al., 2021), which is pre-trained on the MSMARCO passage retrieval task (Bajaj et al., 2016) and has been shown to generalize well on the BEIR benchmark (Thakur et al., 2021). We train our FiD(-Light) models using T5 v1.1 as the language model backbone (Raffel et al., 2020) on TPUs. We attach task-specific markers to the queries for the multi-task training. We cap the input at 384 tokens (combined query and passage) and a maximum of 64 output tokens. For training, we use a batch size of 128 with up to 40 retrieved passages, and a learning rate of 10−3 with the Adafactor optimizer (Shazeer & Stern, 2018). We do not tune our models to a specific checkpoint; rather, we train them all for 50K steps. The only special case is T5-XL, which uses a learning rate of 5 ∗ 10−4 and is trained for 30K steps. During decoding we use beam search with a beam size of 4.
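The hyperparameters above can be summarized in a configuration sketch (a plain Python dictionary for illustration; the actual T5X configuration format differs):

```python
TRAIN_CONFIG = {
    "backbone": "t5_v1.1",       # Base / Large / XL
    "max_input_tokens": 384,     # query + passage combined
    "max_output_tokens": 64,
    "batch_size": 128,
    "retrieved_passages": 40,
    "optimizer": "adafactor",
    "learning_rate": 1e-3,       # 5e-4 for T5-XL
    "train_steps": 50_000,       # 30_000 for T5-XL
    "decode_beam_size": 4,
}
```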
Datasets. We conduct experiments on 7 KILT tasks: HotpotQA (Yang et al., 2018), TriviaQA (Joshi et al., 2017), Natural Questions (NQ) (Kwiatkowski et al., 2019), T-REx (Elsahar et al., 2018), Zero Shot RE (zsRE) (Levy et al., 2017), FEVER (Thorne et al., 2018), and Wizard of Wikipedia (WoW) (Dinan et al., 2018). We give an overview over the dataset in Table 4. We used the filtered training & passage sets from Hofstätter et al. (2022) and the original evaluation sets from Petroni et al. (2021).
Evaluation. We follow the KILT evaluation setup proposed by Petroni et al. (2021); in particular, we focus on the main KILT-score metrics, which combine a text output metric M (such as EM, Accuracy, or F1) with R-Precision (RP) per query, before aggregating the individual query results over the query result set Q:
KM = 1/|Q| ∑q∈Q M(qtext) ∗ (RP(qprovenance) == 1) (7)
In essence, KILT-scores only count the text score M if the R-Precision of the query is 1, meaning all R relevant passages or documents are returned on the top-R positions of the ranked list. This metric makes the assumption that only a few (1 to 2) items are marked as relevant, as is the case in the KILT dataset. To reduce the noise in our dev results, we present the mean and a 95% confidence interval, measured with a t-statistic, of the last 10 checkpoints (every thousand steps from 40K to 50K training steps). For our leaderboard submission, we selected a single checkpoint for all tasks. Unfortunately, we cannot compute statistical significance tests compared to other methods, as the submission files and gold-labels are not publicly available.
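A minimal sketch of the KILT-score in Eq. (7), assuming each query result carries a precomputed text score M, a ranked provenance list, and the set of known relevant items (all names are illustrative):

```python
def kilt_score(results):
    # results: list of (text_score, ranked_provenance, relevant_set).
    # The text score only counts when R-Precision equals 1, i.e. all R
    # relevant items occupy the top-R positions of the ranked list.
    total = 0.0
    for m, ranked, relevant in results:
        R = len(relevant)
        rp = len(set(ranked[:R]) & set(relevant)) / R
        if rp == 1.0:
            total += m
    return total / len(results)

print(kilt_score([(1.0, ["a", "b"], {"a", "b"}),    # RP = 1, counted
                  (1.0, ["c", "a"], {"a", "b"})]))  # RP < 1, dropped
# -> 0.5
```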
B DENSE RETRIEVAL TUNING RESULTS
In our experiments we use a ”double-finetuned” GTR dense retriever: First, it was trained on the MSMARCO retrieval task (Bajaj et al., 2016) by Ni et al. (2021); then we fine-tuned their checkpoint further on our combined KILT training set to create a single generalized KILT retrieval module, akin to Maillard et al. (2021). We created passage retrieval training triples containing a query, a known relevant passage, and a sampled negative passage (randomly sampled from the top-100 GTR zero-shot rankings for the query). We then fine-tuned the retriever for 100K steps using the GTR default parameters in the t5x Retrieval framework. We did not employ knowledge distillation (Hofstätter et al., 2020) or complex end-to-end losses (Izacard et al., 2022), to demonstrate the effectiveness of our approach in a simple setting which is likely orthogonal to more complex training setups.
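The triple construction described above can be sketched as follows (illustrative names; we assume the relevance labels and zero-shot rankings are given as dictionaries keyed by query):

```python
import random

def build_triples(queries, relevant, zero_shot_top100):
    # relevant[q]: known relevant passage ids for query q.
    # zero_shot_top100[q]: top-100 passage ids from the zero-shot GTR run.
    triples = []
    for q in queries:
        rel = set(relevant[q])
        negatives = [p for p in zero_shot_top100[q] if p not in rel]
        if not negatives:
            continue  # skip queries without usable negatives
        for pos in relevant[q]:
            triples.append((q, pos, random.choice(negatives)))
    return triples
```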
This approach means that, while we expect the retriever to learn to retrieve better results, we may overshoot our target and overfit on the training data, leading to a growing divide in the train vs. test performance. This matters strongly in our retrieval-augmented generation setup, because we use the fully trained retrieval model as the source for our generation training data. We aim to detect and avoid unnecessary distribution shifts, to actually train the generator on the expected retrieval performance and not an overfitted training set.
We choose to modulate the learning rate to control for and study the train vs. test distribution shift. We focus on the recall at the highest cutoff we use in our experiments (the top-40) and provide our results in Table 5. First, we show the zero-shot results, as used by the previous state-of-the-art FiD models from Hofstätter et al. (2022), followed by our novel fine-tuned GTR models. Our first observation is that in all tasks we are able to achieve significant R@40 gains on the dev set compared to the zero-shot baseline – ranging from 0.13 to 0.20 absolute changes. Concerning our learning rate study, we find too high learning rates (especially 0.1 and 0.05) show a high ∆T, which indicates a strong distribution shift between train and test. If we were to only train one of the high learning rate checkpoints and compare the dev results to the zero-shot baseline we could be tempted to use them, as their dev results look strong. However, due to our fine-grained analysis we see that it would introduce a strong distribution shift.
Another interesting observation we make is how different task categories seem to converge at different velocities – the open domain QA tasks reach their optimal dev results with higher learning rates, while the other tasks fare better with lower rates. Curiously, we would have guessed a reverse trend, as the initial MSMARCO retrieval task is more closely aligned to QA, suggesting less needed movement. We did not continue to tune the composition of our retrieval training, as it is only a secondary contribution to this work and the differences are quite small compared to the margin we achieve over the zero-shot baseline. Therefore, we decided to go forward with the 0.005 learning rate, as it overall gives the best results with low distribution shifts.
C DETAILED RELATED WORK COMPARISONS
In Table 3 we focused on the combined retrieval and text generation KILT scores. Now, we investigate our results further by analyzing the two components independently in Table 6. For each task we report the leaderboard text generation test score (EM, AC, or F1) and the retrieval quality via R-Precision. As previously noted (Izacard & Grave, 2020; Hofstätter et al., 2022), there is a strong correlation between model size and text generation quality on KILT. For better comparability, and to not ”poison” the task with only very large models that are not trainable for many of our fellow researchers, we report small and large model numbers for FiD-Light.
Looking at the existing leaderboard entries, we observe the top systems mostly rely on the FiD architecture. The most recent and highest performing approaches are FiD generators with relevance sampling and Atlas training regimes (rows 8, 9). It is important to note that these two systems are very inefficient: They run 50 and 100 passages through FiD per query and use T5-XL and T5-XXL respectively. They also only focus on the text generation part of the KILT challenge, and chose not to submit any supporting passages for the generation. This is in large part due to the fact that FiD on its own does not provide a ranking component for the passages, which leads to under-performing results.
Our FiD-LightSP entries cover multiple T5 and k encoded vector sizes. While there is our expected spread of the text generation quality based on the T5 size, we observe that this spread is substantially smaller for the R-Precision metric. To be able to compare methods, the KILT leaderboard computes the R-Precision on a document level. We transformed our passage ranking to document ranks by taking the highest ranked passage per document as the document rank, and removing subsequent passages from that document from the ranked list (see the sketch below). Overall, all our models beginning with T5-Base set new SOTA results across the board for the ranking sub-task, even considering we only re-rank 40 passages. Analysing the text generation quality, we see no new SOTA results for FiD-LightSP, but we remain competitive with the largest and slowest entries in the leaderboard.
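The passage-to-document conversion described above amounts to a first-occurrence filter over the ranked passage list (a minimal sketch; passage_to_doc is an assumed lookup table):

```python
def to_document_ranking(ranked_passages, passage_to_doc):
    # Keep each document at the rank of its highest-ranked passage,
    # dropping later passages from the same document.
    seen, docs = set(), []
    for p in ranked_passages:
        d = passage_to_doc[p]
        if d not in seen:
            seen.add(d)
            docs.append(d)
    return docs

print(to_document_ranking(["p1", "p2", "p3"],
                          {"p1": "docA", "p2": "docA", "p3": "docB"}))
# -> ['docA', 'docB']
```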
To conclude, we showed that the reason for our overall strong SOTA results on the KILT scores in Table 3 is the combination of competitive text generation quality with strong SOTA ranking results, shown in Table 6.
D FAILURE ANALYSIS
The setup of knowledge intensive text generation with supporting passages not only enables positive evaluation via the KILT scores, but also a rich quantitative failure analysis. As Boyd-Graber & Börschinger (2019) and Hofstätter et al. (2022) argued, we should spend more time and energy looking beyond our aggregated metrics. Therefore, in Figure 6 we look at the composition of the raw output results of FiD-LightSP (without re-ranking) in 4 potential outcomes: 1) both passage and text results are wrong; 2) correct passage, but wrong text; 3) correct text, but wrong passages; and 4) both result parts are correct. We analyze the results of two T5-backbones across our KILT tasks.
Interestingly, we do not observe converging trends in the failures between the Base and XL backbones across tasks. But we do see strong differences in the distribution of failure types between tasks. The open domain QA tasks are more likely to fail, especially on both parts. For the FEVER fact verification, if the relevant passage is scored on top, the model is very likely to also produce the right boolean answer. The large share of wrong passage selections with right answers in TriviaQA is likely attributable to its high degree of noise, as observed by Hofstätter et al. (2022). HotpotQA remains the most challenging task with the highest double failure rate.
We note that the KILT tasks are highly noisy: we only have 1-2 relevant marked passages in most cases and few if any textual variations of the text answers. This is also the reason we did not run this analysis on WoW, which has no exact text matches. We hypothesize that if both result parts fail, we are more likely to have a true failure of the model compared to only failing one aspect, which could indicate a noise issue in the datasets. However, to confidently claim this we would need to conduct a thorough annotation campaign of the existing results.
We created an interactive website for inspecting all model outputs of FiD-Light, split by our failure analysis modes from Figure 6. The website displays 10 random results per category and task, so as not to enable cherry picking by us. Every refresh of the website creates a new random sample, allowing users to explore the datasets and results in a playful, yet targeted, qualitative way. The website is available at: anonymized | 1. What is the focus and contribution of the paper on improving the efficiency of the fusion-in-decoder model?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding some parts that need further clarification?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces FiD-Light, a more efficient variant of the fusion-in-decoder model that maintains/outperforms state-of-the-art performance on the KILT dataset, while drastically increasing the model's efficiency. To achieve this, FiD-Light compresses the length of input vectors and uses re-ranking to improve the top-ranked provenance precision.
Strengths And Weaknesses
Strength
simple and effective solution to improve efficiency
clear improvements across datasets and tasks
Weaknesses
some parts of the paper should be better clarified (see my comments below)
Clarity, Quality, Novelty And Reproducibility
The paper is clear and well-written. The appendix contains information for reproducibility. |
ICLR | Title
FiD-Light: Efficient and Effective Retrieval-Augmented Text Generation
Abstract
Retrieval-augmented generation models offer many benefits over standalone language models: besides a textual answer to a given query they provide provenance items retrieved from an updateable knowledge base. However, they are also more complex systems and need to handle long inputs. In this work, we introduce FiD-Light to strongly increase the efficiency of the state-of-the-art retrieval-augmented FiD model, while maintaining the same level of effectiveness. Our FiD-Light model constrains the information flow from the encoder (which encodes passages separately) to the decoder (using concatenated encoded representations). Furthermore, we adapt FiD-Light with re-ranking capabilities through textual source pointers, to improve the top-ranked provenance precision. Our experiments on a diverse set of seven knowledge intensive tasks (KILT) show FiD-Light consistently improves the Pareto frontier between query latency and effectiveness. FiD-Light with source pointing sets substantial new state-of-the-art results on six KILT tasks for combined text generation and provenance retrieval evaluation, while maintaining reasonable efficiency.
1 INTRODUCTION
Enabling machine learning models to access information contained in parametric or non-parametric storage (i.e., retrieval-enhanced machine learning) can lead to efficiency and/or effectiveness improvements in a wide range of learning tasks (Zamani et al., 2022). For example, retrieval-augmented generation (Lewis et al., 2020), which is the focus of this paper, has a manifold of benefits over closed-loop language modelling in knowledge intensive tasks: Answers can be grounded in (multiple) specific pieces of information which enables clear attribution (Dehghani et al., 2019; Rashkin et al., 2021; Lamm et al., 2021); the knowledge base can easily be managed, updated, and swapped (Izacard et al., 2022); the decomposition of retrieval and generation modules offers clear efficiency-effectiveness tradeoff controls; and the data structure of combined retrieval and text generation enables many insightful failure analyses. However, with these benefits also come downsides, such as a higher system complexity with higher training and inference cost. Therefore, our goal is to reduce costs as much as possible, while retaining effectiveness, to make these benefits more widely available.
The most effective approach for knowledge intensive tasks, such as those contained in the KILT benchmark (Petroni et al., 2021), is the Fusion-in-Decoder (FiD) model proposed by Izacard & Grave (2020). The FiD model uses an external retriever, such as a dense retrieval model, to gather candidate passages, which are encoded with the query by a T5-encoder (Raffel et al., 2020); the encoded vectors are concatenated and fed through a T5-decoder to produce a single output string. FiD can synthesize answers from multiple different sources, which leads to state-of-the-art results in many tasks from open domain QA to fact verification (Hofstätter et al., 2022; Izacard et al., 2022).
While undoubtedly the leading architecture – in terms of effectiveness for knowledge intensive generation tasks – the FiD model is resource intensive. In state-of-the-art configurations, concatenating all encoded tokens before decoding often leads to sequences longer than 10 thousand vectors; coupled with auto-regressive decoding, this results in high inference latency. In Figure 1 we plot the average latency of a single query, measured on a single TPUv4, of the encoder and decoder modules
of FiD.1 The first observation is the overpowering 93% of time spent on decoding in FiD. A common and straightforward approach to reduce the latency of FiD is to reduce the number of input passages, e.g., to only 10 passages. While this approach naturally reduces the overall latency, the decoding latency still requires 10 times as long as the encoding (see Figure 1). Crucially, this approach will also reduce the model’s effectiveness substantially, as we show later in this work (see §4.3). To overcome the inefficiencies of the decoding, we propose FiD-Light, a simple yet effective adaptation of the FiD model. The connection between the encoder and decoder has a large capacity for information in FiD. In contrast, the retrieval community showed that, in applications such as dense retrieval with dot-product scoring, encoded information may be compressed to a fraction of the original input length, including representing passages in a single (Hofstätter et al., 2021) or multiple vectors (Chen et al., 2020). Following in these footsteps, we propose to compress the number of vectors per encoded passage to a fraction of the input vectors before they are accessed by the decoder. Using this approach, FiD-Light is able to ingest a large number of passages with strongly reduced latency, as illustrated in Figure 1. Here we still use 40 passages, showing the same encoding time as FiD, but a substantially faster decoding (now on par with the encoding time), for a total latency lower than FiD with 10 passages.
The knowledge intensive tasks we aim to solve ideally require a system to produce both a generated output text and a ranked list of provenance items from the knowledge base. However, FiD is limited to only producing output text. Falling back to returning the original candidate ranking is usually sub-optimal, with low precision. To incorporate re-ranking capabilities into FiD-Light, we adapt a passage marker workflow proposed by Lakhotia et al. (2021) as part of FiD-Ex. They marked the input passages with textual indices and trained the model to output the relevant indices in the output text. We find that using these textual indices or source pointers directly as output, as Lakhotia et al. (2021) proposed, is brittle and prone to distribution shifts in the number of expected relevant passages between training and evaluation (see §4.2). Therefore, our FiD-LightSP approach re-ranks the selected passages to the top of the ranked list, without discarding the rest of the retrieved list, for higher robustness and improved results.
We conduct experiments on seven tasks of the KILT benchmark composed by Petroni et al. (2021) spanning open domain QA, slot filling, fact verification, and dialogue tasks. We study the following research questions to demonstrate the efficacy of our proposed FiD-LightSP model:
RQ1 What impact does training the retrieval module have on FiD-LightSP downstream results?
The quality of the final result is strongly bound by the recall quality of the retriever module. While many complex end-to-end training procedures have been proposed (Singh et al., 2021; Izacard et al., 2022), we focus on simple yet effective, directly supervised dense retrieval training. We show that a simple retrieval training comfortably outperforms a zero-shot retrieval baseline from Hofstätter et al. (2022), and the resulting FiD-LightSP downstream results take a major step towards a realistic oracle retriever ceiling.
RQ2 How robust is our source pointing and re-ranking workflow applied to FiD and FiD-Light?
We use available passage relevance information for each task in the KILT benchmark to train our source pointer output via text markers. We train the FiD(-Light) generator to output the indices for
1All our measurements in this work are conducted on TPUv4s, however we confirmed that using V100 GPUs we observe a similar ratio of time spent in the encoder vs. the decoder of FiD and FiD-Light.
all relevant retrieved passages during training, before generating the textual answer. We observe that FiD(-Light)SP learns an expected distribution for the number of selected passages, which might not match relevance distributions during evaluation. To mitigate this problem, we propose to use the source pointer to re-rank the initial list. We show this improves the results over FiD-Ex. Comparing the effectiveness of the source pointers between different FiD-Light settings and the FiD baseline, we find FiDSP rapidly loses effectiveness when the number of input passages is reduced, while FiD-LightSP is able to hold the passage precision at much lower latency.
RQ3 How does FiD-LightSP compare to the FiDSP baseline in efficiency-effectiveness tradeoffs?
The common approach to speed up FiD is to reduce the number of input passages. To this we compare our FiD-LightSP model using a static number of passages, but varying the number of vectors fed into the decoder as well as changing the T5 backbone size. We show that while FiDSP with fewer passages strongly degrades, FiD-LightSP is able to hold most of the initial maximum effectiveness of FiDSP, while being 3× faster. This Pareto optimal result between latency and effectiveness is complemented when we increase the T5-backbone sizes in FiD-LightSP to receive the benefits of larger models, while still outperforming the initial FiDSP baseline in terms of efficiency. Overall FiD-LightSP is Pareto optimal on six out of the seven tested tasks.
RQ4 How does FiD-LightSP compare to related methods on the KILT benchmark?
We submitted three representative configurations of FiD-LightSP to the blind-evaluated KILT leaderboard test set to compare them to other methods for knowledge intensive tasks. We evaluate FiDLightSP on the main metric of the KILT benchmark: combined KILT-scores (which only counts a text generation score if the R-Precision for the query is 1). We show FiD-LightSP outperforms previous SOTA models by considerable margins on the KILT-scores on six tasks. We set new SOTA results compared to the previous best methods on:
- QA HotpotQA +11.1 K-EM (+61.3%), NQ +7.5 K-EM (+17.2%), TriviaQA +5.8 K-EM (+10.0%) - Slot Filling zsRE +10.8 K-AC (+14.8%), T-REx +0.5 K-AC (+0.7%) - Fact Verification FEVER +6.0 K-AC (+7.6%)
We hope these results demonstrate to the community that SOTA results are achievable with reasonable efficiency and that efficient retrieval-augmented generation has a promising future ahead.
2 BACKGROUND AND RELATED WORK
In this section, we first review the FiD model and FiD-Ex workflow, which adds textual explanation markers to FiD. We further discuss other related work in this area.
2.1 FID (FUSION IN DECODER) WITH EXPLANATIONS
A critical capability for retrieval-augmented models is to be able to synthesize and utilize information from multiple distinct retrieved items (Zamani et al., 2022). To effectively implement this paradigm, Izacard & Grave (2020) proposed the FiD model, which re-wires the computational graph between an off-the-shelf pre-trained Transformer Encoder and Decoder (Vaswani et al., 2017). Usually, FiD is initialized with the pre-trained T5 model (Raffel et al., 2020). Given a query q, we retrieve a set of n candidate passages using a separate retrieval module. The retriever is independently trained, and can take any traditional, neural, or hybrid architecture. As in Izacard & Grave (2020), we use a single dense retriever, as it has been shown to outperform traditional retrieval methods (Hofstätter et al., 2022). To encode the information, FiD concatenates the query q with each retrieved passage p and independently feeds the sequences (one per index i) through a Transformer encoder (TE):

ei = TE([“query: ”; q; “context: ”; pi]) (1)

The resulting encoded representations – using one vector per token – are concatenated into a single long sequence, which is fed through the Transformer decoder (TD), autoregressively during inference, to produce a single output sequence o:
o = TD([e1; e2; ...; en]) (2)
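Schematically, the FiD computation of Eq. (1) and (2) can be sketched as follows, with encoder and decoder passed in as stand-ins for the T5 modules (the signatures are our assumptions for illustration, not the actual implementation):

```python
import torch

def fid_forward(query, passages, encoder, decoder):
    # Eq. (1): encode each query-passage pair independently; we assume
    # encoder(text) returns a (seq_len, d_model) tensor of token vectors.
    encoded = [encoder(f"query: {query} context: {p}") for p in passages]
    # Eq. (2): concatenate all encoded token vectors into one long
    # sequence and decode autoregressively into a single output string.
    fused = torch.cat(encoded, dim=0)  # (n * (|q| + |p|), d_model)
    return decoder(fused)
```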
FiD has two main limitations: (1) the text-only output does not provide any information about the exact passage(s) which were used to synthesize the output; and (2) the long input sequence leads to highly inefficient autoregressive decoding (as shown in Figure 1). While the expected output is relatively short (in the magnitude of dozens of tokens), the input to the decoder is large with O(n ∗ (|q| + |p|)) tokens (in the magnitude of thousands of tokens). To alleviate limitation (1), Lakhotia et al. (2021) adapt the FiD workflow with textual explanations (FiD-Ex), inspired by the WT5 (Why?, T5) concept proposed by Narang et al. (2020). For FiD-Ex, the FiD architecture is left untouched; Lakhotia et al. (2021) only adapt the textual input and target output. The input to the encoder is augmented with indices (from 1 to n) to identify individual passages:2

ei = TE([“query: ”; q; “index: ”; i; “context: ”; pi]) (3)

And the target output t during training is augmented with the indices (using the regular tokens for the numbers and spaces as separators for multiple indices) of all the known relevant passages R+ in the retrieved set:

t̂ = [“index: ”; {r | r ∈ R+}; “text: ”; t] (4)

On one hand, this textual formulation packs more capabilities into the same text-based architecture; on the other hand, we note that this discrete selection of the top-|R+| passages from the candidate set is a strong departure from the prevalent pairwise re-ranking models. It opens a new range of induced biases about the expected distribution of |R+| not studied before. During inference, the output is parsed to extract the indices as numbers and to remove the additional textual markers before evaluating the output text.
2.2 RELATED WORK
Efficient Generation Models. To enable their ubiquitous use, a key component, besides their safety, is the efficiency of text generators to run at scale. Naturally, many studies work to achieve this goal from various angles. Schuster et al. (2022) propose an adaptive early exiting language model, which exits the decoder stack of Transformer layers early for easy-to-predict tokens. The LongT5 model focuses on improving the efficiency of the encoder for long input sequences (Guo et al., 2021); in contrast, we focus on the decoder efficiency, as FiD’s encoder input is usually short. We believe our FiD-Light adaptations are orthogonal to many other algorithmic and engineering-based generation efficiency improvements and can be combined with them in future work. For a comprehensive overview of efficient transformer architectures, we refer the reader to Tay et al. (2022).
Retrieval-Enhanced Machine Learning. The foundational retrieval-augmented models, e.g., FiD (Izacard & Grave, 2020), RAG (Lewis et al., 2020), and REALM (Guu et al., 2020), are trained to solve individual tasks. Many of their recent improvements optimize end-to-end processes (e.g., EMDR2 (Singh et al., 2021)), ensemble multiple modules (e.g., R2-D2 (Fajcik et al., 2021)), or create multiple training loops to update the indexed documents multiple times (e.g., Hindsight (Paranjape et al., 2021)). In contrast, we focus on architectural efficiency improvements with a simple training paradigm. Recently, more task-independent retrieval-enhanced language models have emerged, such as retrieving from a text-snippet database (Borgeaud et al., 2021) or learning to retrieve from the web with reinforcement learning (Nakano et al., 2021). For more information on retrieval-enhanced machine learning models, we refer the reader to Zamani et al. (2022).
Improving and Adapting the FiD Model. To integrate passage relevance prediction into FiD, Asai et al. (2021) add a second decoding module, which is called for every query-passage sequence to indicate its relevance. They also use this setup to generate silver-relevance scores for unjudged passages. Yu et al. (2022) replace the retrieval module with a large language model to generate supporting documents, which are then fused to generate the answer by a default FiD implementation. The current top-systems on the KILT leaderboard (Hofstätter et al., 2022; Izacard et al., 2022) use strong retrievers in combination with large T5-backbones for FiD. They also improve the supervised training by using better data sampling or pre-training procedures for more data efficient fine-tuning. We continue in the spirit of these related works with additional efficiency and capability improvements of FiD.
2Note we adapted the formulation of Lakhotia et al. (2021) from sentence markers to passage indices, to make the formulation more general.
3 FID-LIGHT WITH SOURCE POINTERS
With FiD-LightSP we overcome the two main limitations of the FiD-Ex model and workflow: We drastically increase the efficiency of the decoder by reducing its computational requirements, and we improve the robustness of the passage selection with a source pointing workflow, by shifting our view from an explanation to a second, concurrently solved task: re-ranking passages. We provide an overview of our FiD-LightSP model and source pointer workflow in Figure 2.
Decoder Efficiency. Following our initial observation that FiD spends most time in the decoding phase (Figure 1), we adapt the original FiD decoding step (Eq. 2) to reduce the length of each encoded query-passage pair to k vectors via a function fk:
ô = TD([fk(e1); fk(e2); ...; fk(en)]) (5)
This reduces the input length from the previous O(n ∗ (|q| + |p|)) to O(n ∗ k), where k ≪ |q| + |p|. The exact compression ratio depends on the required tokens for the used tasks; we experiment with configurations from a 6× to 384× reduction. In our experiments, for simplicity, we instantiate fk as the first k vectors of each sequence. While this architecture change is simple, it strongly disrupts the previous assumption that every encoded token is accessible for decoding in the T5 architecture. Its simplicity also means that the community can easily adapt existing codebases with this change to benefit from the efficiency improvements.
Source Pointing Robustness. To enable source pointing in FiD-Light, we train the model with the source pointing concept proposed by Lakhotia et al. (2021) in FiD-Ex. Our novel contribution is how we handle the output of the source pointers at inference time. If we use them directly as the result, as in FiD-Ex, we are prone to instability in the number of returned passages. The question of processing the output further almost becomes philosophical: if we treat the source pointers as explanations, we cannot process them any further without corrupting the explanation. While there might be a correlation between the textual output and the source-pointed passages, we treat finding the source passages as a concurrent task to generating the text output. Because we do not claim them to be explanations, we can process them further.
We propose to merge the initial ranked candidate list of passages C with the source-pointer-selected passages by re-ranking the selected passages (found in the decoded output ô) to the top of the list:
Ĉ1:r = [[r | r ∈ ô]; [r | r ∈ C, r /∈ ô]] (6)
To compute all selected passages r ∈ ô, we first parse the output ô with a simple parser for the trained format given in Eq. 4, including a conversion from the text tokens representing the indices to integers. In case the model selects multiple passages, we keep the selection order of the model output. If a task contains graded relevance annotations for training passages, we can train the model to follow the grades; if only binary relevance is available (as is the case with KILT), we keep the rank-ordering of the multiple selected passages from the initial candidate list. This change leads to higher robustness in our provenance results, as distribution differences between training and evaluation would otherwise lead to a disadvantaged position, as we demonstrate in Section 4.2.
4 RESULTS
We empirically address the research questions laid out in the introduction. We study the importance of the retriever module, the efficacy of the source pointer workflow, the tradeoff between efficiency and effectiveness using a controlled baseline, and finally we compare our FiD-LightSP to related methods on the blind-evaluated KILT leaderboard. We detail our experiment design in Appendix A.
4.1 INFLUENCE OF THE RETRIEVER
The retrieval module is the backbone for all retrieval-augmented generation. The generation quality is to a large extent bound by the retrieval quality, especially if the retrieved information is not memorized by the generator. To answer RQ1 What impact does training the retrieval module have on FiD-LightSP downstream results? we have to be careful to acknowledge the uncertainty of sparse ranking annotations (Hofstätter et al., 2022).
To accurately quantify the retriever’s contribution, we compare the downstream effect of a zero-shot, a fine-tuned (methodology described in detail in Appendix B), and two oracle retrievers in Table 1. In the first section (rows 1-3) retrievers are evaluated without access to relevance judgements (a real-world environment), whereas in the second section (rows 4 & 5) we infuse relevance information during the evaluation (oracle environment). We find that training the retriever with in-domain training data (row 2) consistently improves results over the zero-shot retriever (row 1) as used by Hofstätter et al. (2022). Always ingesting all known relevant passages during training (row 3), on the other hand, does not significantly change the downstream performance.
To account for annotation uncertainty in our retriever-as-oracle experiments, we study two scenarios: 1) infusing all known relevant passages into the retrieved candidate list (row 4) and 2) setting the candidates to be only the known relevant passages (row 5). Commonly, the community compares its results only against the second oracle scenario, showing a large headroom for future improvements for the retriever (Glass et al., 2021; Shuster et al., 2021). However, we argue that, due to the sparsity of the annotations, we should compare the results to our more realistic first oracle scenario (row 4). It still shows a significant opportunity for improvement, albeit the total headroom is roughly halved across the board. Future work may explore more fine-tuning aspects, but we decide to select the simple fine-tuned retriever (row 2).
4.2 SOURCE POINTER ROBUSTNESS
While the initial source pointer concept has been proposed by FiD-Ex as sentence markers for explainability, we are the first to study their application in the more complex passage ranking context combined with our compressed FiD-Light architecture. Therefore, we study RQ2 How robust is our source pointing and re-ranking workflow applied to FiD and FiD-Light?
As introduced earlier, we train the source pointing capabilities into FiD(-Light) by flagging all known relevant passages retrieved in the candidate passage set. By directly using the size of the known relevant item set during training, we instill a strong expectation prior into the model of how many passages ought to be relevant for a given task. Note that if a known relevant passage is not retrieved, we cannot use it for training the generator. In Figure 3, we observe these effects for four representative tasks of the KILT benchmark. Each of these tasks shows a different expected distribution target. We note that the training distribution differs from the target, as it skips non-recalled relevant items. We find the model output distribution on the validation set to closely match the training distribution (albeit here we make no claims about the correctness of the selected passages).
Figure 3: Distributions of source pointer passages for FiD-LightSP (T5-Base). Four panels, (a) TriviaQA, (b) HotpotQA, (c) FEVER, and (d) zsRE, plot the relative occurrence (0-100%) of the number of selected source passages (0-3) for the target, training, and model output distributions.
Table 2: Comparing our source pointer (SP) re-ranking with the direct model output (Ex) using KILT scores for passages and documents. Bold indicates improvement of SP over Ex larger than the 95% CI.

                        Open Domain QA             Fact       Slot Fill.
                     HotpotQA     TriviaQA        FEVER         zsRE
Model              Pas.   Doc.   Pas.   Doc.   Pas.   Doc.   Pas.   Doc.
T5-Base
1 FiD-Ex           25.4   25.6   22.0   34.1   70.1   77.2   70.1   71.6
2 FiDSP            25.8   26.1   23.1   39.5   71.1   78.3   70.1   71.7
3 FiD-Light-Ex     23.5   23.7   18.8   32.1   70.0   77.1   69.3   71.2
4 FiD-LightSP      23.8   24.1   19.8   37.6   71.6   78.1   69.3   71.4
T5-Large
5 FiD-Light-Ex     26.6   26.9   22.6   36.3   72.6   79.2   70.9   72.7
6 FiD-LightSP      26.9   27.3   23.5   41.4   74.2   80.4   70.9   72.8
T5-XL
7 FiD-Light-Ex     28.2   28.4   24.8   38.7   73.9   80.5   73.1   75.9
8 FiD-LightSP      28.4   28.7   25.7   43.8   75.5   81.7   73.2   76.1
However, focusing on higher passage counts in Figure 3 (a) TriviaQA and (c) FEVER shows that the model struggles to output 3 passages as often as it is expected to. This weakness becomes visible when we evaluate the standard R-Precision of the selection, which needs at least R returned items to reach the full score, given R known relevant items.
To overcome this limitation, we propose to move the selected passages to the top of the ranked list instead of directly outputting the selection (FiD-Ex). This essentially transforms FiD(-Light) into a re-ranking model. In Table 2, we show the ablation study to confirm the usefulness of the proposed re-ranking on final downstream results. Our approach is strictly positive or neutral for the results, as we are filling up holes that would otherwise result in penalties. Confirming our hypothesis originating in Figure 3, we see statistically significant improvements across all configurations on the two tasks where the model struggled to fill up the full distribution: TriviaQA and FEVER.
While in this work we do not change the KILT evaluation methodology and optimize our models towards the current standard evaluation, we note that these findings represent interesting avenues for future work requiring evaluation setup changes: We may choose to train the model to only select a single passage or even re-rank the whole list with our textual source pointers as re-rankers.
We might be tempted to directly compare the inter-setting results in Table 2, for example FiDSP in row 2 with FiD-LightSP in row 4 (T5-Base). Here we observe, especially on HotpotQA and TriviaQA, a quality reduction, which would lead us to the conclusion that source pointing in FiD-Light is less robust than in FiD. To put these results into perspective, we selected HotpotQA as an example and plot the query latency as well as the R-Precision of the models in Figure 4. For FiDSP, we modulate the number of input passages; for FiD-Light, we modulate the number of vectors k fed to the decoder and the backbone size. We clearly observe a stark reduction in quality for the FiDSP model when the number of input passages is reduced. On the other hand, our FiD-LightSP variants are able to almost keep the same level of effectiveness, and larger backbones, while still faster than the FiDSP baseline, also produce a higher quality. Therefore, an equal-efficiency comparison in Table 2 involves row 2 and row 8 (using T5-XL). We dive deeper into these tradeoffs in the next section.
4.3 EFFICIENCY - EFFECTIVENESS TRADEOFF
Ultimately, we as a community want our research to be applied to real-world use, to benefit society. A major component, besides concerns about safety and social biases as summarized by Bender et al. (2021), is the efficiency of the deployed system. To understand the impact of our proposed FiD-Light architecture, we study RQ3 How does FiD-LightSP compare to the FiDSP baseline in efficiency-effectiveness tradeoffs?
The KILT benchmark gives us the opportunity to study our changes in a large variety of tasks, with different properties, so that we can make confident claims about the efficacy of our changes. In Figure 5 we show our ablation results per task. For each task we report the average query latency (y-axes) and the main KILT-score effectiveness metric (x-axes). The gray line indicates our FiD baseline by modulating input passage counts – from 40 down to 1. Our FiD-Light models all have access to the full 40 passages, and here we are modulating T5 sizes as well as the number of vectors (1, 8, 32, 64) fed into the decoder.
We start our discussion with the open domain QA tasks in Figure 5 (a, b, & c) as they provide a similar picture: Comparing our FiD-LightSP model with the baseline, we do observe a drop in effectiveness from the strongest baseline (gray dotted vertical line) when using the same T5-Base model. However, due to the more efficient architecture, we are able to swap backbones and earn the benefits of those larger models in terms of effectiveness. At the same time, we outperform the latency of the baseline as well, shifting the Pareto optimum. Interestingly, the FiD-LightSP model with T5-XL and only a single encoded vector per passage shows a larger drop in effectiveness than the counterparts for smaller T5’s. The only 2-label classification task, FEVER, shown in Figure 5 (d), exhibits the lowest reduction in effectiveness when constraining the number of encoded vectors in FiD-LightSP. This is likely due to the fact that only little generation is necessary to solve the task. Therefore, our FiD-LightSP configurations improve the Pareto optimum again. The slot-filling tasks in Figure 5 (e & f) show less impact of the T5 size, with little improvement for Large and XL over the Base configurations. Fortunately, we also observe a similarly small reduction in effectiveness for reducing the number of encoded FiD-LightSP vectors, leading to our final Pareto gains.
In conclusion, we observe clear and statistically significant improvements between FiDSP and FiD-LightSP – both in terms of effectiveness and efficiency – across a variety of KILT tasks. FiD-LightSP can lower the query latency by more than 2× and still deliver higher effectiveness by upgrading the language model backbone size.
4.4 COMPARISON TO RELATED WORK
In addition to showing improvements over our own baselines, we now demonstrate the effectiveness of FiD-LightSP in a broader context and answer RQ4 How does FiD-LightSP compare to related methods on the KILT benchmark? The community is fortunate to have a blind-evaluation leaderboard for all KILT tasks³ at our disposal to compare our approaches on a level playing field, where everyone may submit their highly-tuned systems. While the top spots of a leaderboard are typically not populated by efficient methods, we nevertheless submitted three different configurations of FiD-LightSP – all more efficient than our FiD baseline with 40 input passages. We selected a single checkpoint to submit for all tasks, so as to demonstrate our multi-task capabilities and not overfit a single submission to a single task.
We show the leaderboard results for the main KILT-score metrics in Table 3. For the independent breakdown of text generation and retrieval leaderboard scores, we direct the reader to Appendix C. Even our T5-Base configuration in row 8 already outperforms previous SOTA results on five out of the seven tasks. With T5-Large and T5-XL (both progressively reducing the number of encoded vectors to increase efficiency), we set new SOTA results on six out of the seven tasks. Only WoW remains a weak spot, albeit not dramatically different from previous results. The fusion capabilities of FiD paired with our robust source pointing achieve especially impressive results on the challenging HotpotQA task, where exactly two distinct passages containing parts of the answer have to be placed on top of the ranked list. Here, we outperform previous methods by 61% or 11.1 KILT-EM points. On the other two QA tasks we reach +7.5 K-EM (+17.2%) for NQ and +5.8 K-EM (+10.0%) for TriviaQA. The zsRE task with +10.8 K-AC (+14.8%) and FEVER with +6.0 K-AC (+7.6%) round off our strong new SOTA results across a variety of tasks.
5 CONCLUSION
We proposed the FiD-Light model with a robust source pointing workflow to overcome efficiency and versatility limitations in the previous state-of-the-art retrieval-augmented generation model FiD. We adapted the FiD model architecture to compress the amount of information fed to the decoder, for drastically reduced inference latency. We demonstrated at the same time only a modest reduction in effectiveness, which can be alleviated with larger T5-backbones leading to Pareto optimal results on six KILT tasks. Our multi-task system achieved substantial new state-of-the-art results for combined retrieval and generation metrics on six KILT tasks compared to previous methods on the public leaderboard. These results demonstrate that we do not need to always scale up to achieve the highest effectiveness, enabling more researchers to work on this problem in the future.
³The leaderboard is available at: https://eval.ai/web/challenges/challenge-page/689
A EXPERIMENT DESIGN
Implementation. Our experiment setup follows the state-of-the-art multi-task relevance sampled training sets of Hofstätter et al. (2022). All our experiments are based on the T5X framework (Roberts et al., 2022). We start with a GTR-Base dense retrieval model (Ni et al., 2021), which is pre-trained on the MSMARCO passage retrieval task (Bajaj et al., 2016) and has been shown to generalize well on the BEIR benchmark (Thakur et al., 2021). We train our FiD(-Light) models using T5 v1.1 as language model backbone (Raffel et al., 2020) on TPUs. We attach task-specific markers to the queries for the multi-task training. We cap the input at 384 tokens (combined query and passage) and a maximum of 64 output tokens. For training, we use a batch size of 128 with up to 40 retrieved passages, and a learning rate of 10⁻³ with the Adafactor optimizer (Shazeer & Stern, 2018). We do not tune our models to a specific checkpoint, but rather train them all for 50K steps. The only special case is T5-XL, which uses a learning rate of 5 × 10⁻⁴ and is trained for 30K steps. During decoding we use beam search with a beam size of 4.
Datasets. We conduct experiments on 7 KILT tasks: HotpotQA (Yang et al., 2018), TriviaQA (Joshi et al., 2017), Natural Questions (NQ) (Kwiatkowski et al., 2019), T-REx (Elsahar et al., 2018), Zero Shot RE (zsRE) (Levy et al., 2017), FEVER (Thorne et al., 2018), and Wizard of Wikipedia (WoW) (Dinan et al., 2018). We give an overview of the datasets in Table 4. We used the filtered training & passage sets from Hofstätter et al. (2022) and the original evaluation sets from Petroni et al. (2021).
Evaluation. We follow the KILT evaluation setup proposed by Petroni et al. (2021); in particular, we focus on the main KILT-score metrics, which combine a text output metric M (such as EM, Accuracy, or F1) with R-Precision (RP) per query, before aggregating the individual query results over the query result set Q:
K_M = (1 / |Q|) · Σ_{q ∈ Q} M(q_text) · (RP(q_provenance) == 1)    (7)
In essence, KILT-scores only count the text score M if the R-Precision of the query is 1, meaning all R relevant passages or documents are returned in the top-R positions of the ranked list. This metric makes the assumption that only a few (1 to 2) items are marked as relevant, as is the case in the KILT dataset. To reduce the noise in our dev results, we present the mean and a 95% confidence interval measured with a t-statistic over the last 10 checkpoints (every thousand steps from 40K to 50K training steps). For our leaderboard submission, we selected a single checkpoint for all tasks. Unfortunately, we cannot compute statistical significance tests compared to other methods, as the submission files and gold labels are not publicly available.
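To make the gating behaviour of Eq. 7 concrete, the following minimal Python sketch computes a KILT-score from per-query results; the dictionary keys and the `text_metric` callable are illustrative assumptions, not part of the official KILT tooling.

```python
def kilt_score(results, text_metric):
    """KILT-score (Eq. 7): count the text metric M only when R-Precision == 1.

    results: list of per-query dicts with illustrative keys
      'prediction', 'gold_answers'   (text generation side)
      'ranked_ids', 'relevant_ids'   (provenance side)
    text_metric: callable M(prediction, gold_answers) -> float, e.g. exact match.
    """
    total = 0.0
    for q in results:
        r = len(q['relevant_ids'])
        top_r = q['ranked_ids'][:r]
        # R-Precision: fraction of the top-R ranked items that are relevant
        rp = sum(pid in q['relevant_ids'] for pid in top_r) / max(r, 1)
        if rp == 1.0:  # provenance gate: all R relevant items on the top-R positions
            total += text_metric(q['prediction'], q['gold_answers'])
    return total / len(results)
```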
B DENSE RETRIEVAL TUNING RESULTS
In our experiments we use a "double-fine-tuned" GTR dense retriever: First it was trained on the MSMARCO retrieval task (Bajaj et al., 2016) by Ni et al. (2021), and then we fine-tuned their checkpoint further on our combined KILT training set to create a single generalized KILT retrieval module, akin to Maillard et al. (2021). We created passage retrieval training triples containing a query, a known relevant passage, and a sampled negative passage (randomly sampled from the top-100 GTR zero-shot rankings for the query). We then fine-tuned the retriever for 100K steps using the GTR default parameters in the t5x Retrieval framework. We did not employ knowledge distillation (Hofstätter et al., 2020) or complex end-to-end losses (Izacard et al., 2022), to demonstrate the effectiveness of our approach in a simple setting which is likely orthogonal to more complex training setups.
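As a rough illustration of this triple construction, here is a minimal sketch assuming dictionaries `queries`, `qrels` (known relevant passage ids), and `zero_shot_rankings` keyed by query id; all names are hypothetical placeholders rather than part of the actual pipeline.

```python
import random

def build_triples(queries, qrels, zero_shot_rankings, negatives_per_positive=1):
    """Build (query, positive, negative) training triples for the retriever.

    Negatives are drawn at random from the top-100 zero-shot GTR ranking
    of the query, excluding the known relevant passages.
    """
    triples = []
    for qid, query_text in queries.items():
        positives = set(qrels.get(qid, []))
        pool = [pid for pid in zero_shot_rankings[qid][:100] if pid not in positives]
        for pos in positives:
            for neg in random.sample(pool, k=min(negatives_per_positive, len(pool))):
                triples.append((query_text, pos, neg))
    return triples
```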
This approach means that, while we expect the retriever to learn to return better results, we may overshoot our target and overfit on the training data, leading to a growing divide in the train vs. test performance. This matters strongly in our retrieval-augmented generation setup, because we use the fully trained retrieval model as the source for our generation training data. We aim to detect and avoid unnecessary distribution shifts, to actually train the generator on the expected retrieval performance and not on an overfitted training set.
We choose to modulate the learning rate to control for and study the train vs. test distribution shift. We focus on the recall at the highest cutoff we use in our experiments (the top-40) and provide our results in Table 5. First, we show the zero-shot results, as used by the previous state-of-the-art FiD models from Hofstätter et al. (2022), followed by our novel fine-tuned GTR models. Our first observation is that in all tasks we are able to achieve significant R@40 gains on the dev set compared to the zero-shot baseline – ranging from 0.13 to 0.20 in absolute terms. Concerning our learning rate study, we find that too-high learning rates (especially 0.1 and 0.05) show a high ∆T, which indicates a strong distribution shift between train and test. If we were to train only one of the high learning rate checkpoints and compare the dev results to the zero-shot baseline, we could be tempted to use them, as their dev results look strong. However, our fine-grained analysis shows that this would introduce a strong distribution shift.
Another interesting observation is how different task categories seem to converge at different velocities – the open domain QA tasks reach their optimal dev results with higher learning rates, while the other tasks fare better with lower rates. Curiously, we would have guessed the reverse trend, as the initial MSMARCO retrieval task is more closely aligned with QA, suggesting less needed movement. We did not continue to tune the composition of our retrieval training, as it is only a secondary contribution of this work and the differences are quite small compared to the margin we achieve over the zero-shot baseline. Therefore, we decided to go forward with the 0.005 learning rate, as it overall gives the best results with low distribution shifts.
C DETAILED RELATED WORK COMPARISONS
In Table 3 we focused on the combined retrieval and text generation KILT scores. Now, we investigate our results further by analyzing the two components independently in Table 6. For each task we report the leaderboard text generation test score (EM, AC, or F1) and the retrieval quality via R-Precision. As previously noted (Izacard & Grave, 2020; Hofstätter et al., 2022), there is a strong correlation between model size and text generation quality on KILT. For better comparability, and to not "poison" the task with only very large models that are not trainable for many of our fellow researchers, we report small and large model numbers for FiD-Light.
Looking at the existing leaderboard entries, we observe that the top systems mostly rely on the FiD architecture. The most recent and highest performing approaches are FiD generators with relevance sampling and Atlas training regimes (rows 8 & 9). It is important to note that these two systems are very inefficient: They run 50 and 100 passages through FiD per query and use T5-XL and T5-XXL respectively. They also only focus on the text generation part of the KILT challenge, and chose not to submit any supporting passages for the generation. This is in large part due to the fact that FiD on its own does not provide a ranking component for the passages, which leads to under-performing results.
Our FiD-LightSP entries cover multiple T5 sizes and numbers of encoded vectors k. While there is the expected spread of text generation quality based on the T5 size, we observe that this spread is substantially smaller for the R-Precision metric. To be able to compare methods, the KILT leaderboard computes the R-Precision on a document level. We transformed our passage ranking to document ranks by taking the highest ranked passage per document as the document rank, and removing subsequent passages from that document from the ranked list. Overall, all our models from T5-Base upwards set new SOTA results across the board for the ranking sub-task, even considering we only re-rank 40 passages. Analysing the text generation quality, we see no new SOTA results for FiD-LightSP, but we remain competitive with the largest and slowest entries in the leaderboard.
To conclude, we showed that the reason for our overall strong SOTA results on the KILT scores in Table 3 is the combination of competitive text generation quality with the strong SOTA ranking results shown in Table 6.
D FAILURE ANALYSIS
The setup of knowledge intensive text generation with supporting passages not only enables evaluation via the KILT scores, but also a rich quantitative failure analysis. As Boyd-Graber & Börschinger (2019) and Hofstätter et al. (2022) argued, we should spend more time and energy looking beyond our aggregated metrics. Therefore, in Figure 6 we look at the composition of the raw output results of FiD-LightSP (without re-ranking) in 4 potential outcomes: 1) both passage and text results are wrong; 2) correct passage, but wrong text; 3) correct text, but wrong passages; and 4) both result parts are correct. We analyze the results of two T5-backbones across our KILT tasks.
Interestingly, we do not observe converging trends in the failures between the Base and XL backbones across tasks. But we do see strong differences in the distribution of failure types between tasks. The open domain QA tasks are more likely to fail, especially in both parts. For the FEVER fact verification, if we score the relevant passage on top, we are very likely to also get the right Boolean answer. The large share of wrong passage selection but right answer in TriviaQA is likely attributable to its high degree of noise, as observed by Hofstätter et al. (2022). HotpotQA remains the most challenging task, with the highest double failure rate.
We note that the KILT tasks are highly noisy: we only have 1-2 relevant marked passages in most cases and few, if any, textual variations of the text answers. This is also the reason we did not run this analysis on WoW, which has no exact text matches. We hypothesize that if both result parts fail, we are more likely to have a true failure of the model compared to only failing one aspect, which could indicate a noise issue in the datasets. However, to confidently claim this we would need to conduct a thorough annotation campaign on the existing results.
We created an interactive website for inspecting all model outputs of FiD-Light, split by our failure analysis modes from Figure 6. The website displays 10 random results to the user, per category and task, so as to prevent cherry-picking by us. Every refresh of the website creates a new random sample, allowing users to explore the datasets and results in a playful yet targeted, qualitative way. The website is available at: anonymized | 1. What are the key contributions and strengths of the paper regarding the modified FID model?
2. What are the weaknesses and limitations of the paper, particularly in terms of clarity and reproducibility?
3. How does the reviewer assess the novelty and experimental value of the proposed modifications?
4. What are the suggestions for improving the paper, such as providing more detail in certain sections or replacing certain figures?
5. How does the reviewer view the paper's related work section and its positioning within the field? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper modifies two aspects of the FID model (retrieval-augmented text generation) in section 3: (1) the authors truncate the passages to speed up the model (2) they modify the explainability component by using a ranking task. Results on KILT show a substantial improvement over the FID model.
Strengths And Weaknesses
Strengths: simple modification of the FID model that improves its performance a lot.
Weaknesses: lack of clarity in the description of the model (still not quite sure what the source pointing method is, but the answers clarified my doubts)
The proposed modifications are well-motivated and sensible, showing an improvement for the KILT tasks. Both modifications are quite simple but do work in practice, and the paper has some experimental value.
Clarity, Quality, Novelty And Reproducibility
The novelty relies mostly on the use of a new ranking loss, but unfortunately this is the least well-explained part of the paper (and this hinders reproducibility a lot, as well as the interest of the paper):
in section 3: Eq. 6 is not understandable: ô is supposed to be the representation of the encoded passages (Eq. 5) – what does r ∈ ô mean? Also, how is it used when training the model? In the experimental section,
How does Figure 3 show that the model struggles to output 3 passages as often as it should? What does "we are filling up holes" mean? How can we "observe a stark reduction when the number of input passages is reduced" in Figure 4?
In section 4.4, Table 3 should be replaced by the one in the appendix (non-aggregated results), since the picture is less clear.
Other parts that could be improved are the related work section (no positioning of the paper with respect to related work). |
ICLR | Title
FiD-Light: Efficient and Effective Retrieval-Augmented Text Generation
Abstract
Retrieval-augmented generation models offer many benefits over standalone language models: besides a textual answer to a given query they provide provenance items retrieved from an updateable knowledge base. However, they are also more complex systems and need to handle long inputs. In this work, we introduce FiDLight to strongly increase the efficiency of the state-of-the-art retrieval-augmented FiD model, while maintaining the same level of effectiveness. Our FiD-Light model constrains the information flow from the encoder (which encodes passages separately) to the decoder (using concatenated encoded representations). Furthermore, we adapt FiD-Light with re-ranking capabilities through textual source pointers, to improve the top-ranked provenance precision. Our experiments on a diverse set of seven knowledge intensive tasks (KILT) show FiD-Light consistently improves the Pareto frontier between query latency and effectiveness. FiD-Light with source pointing sets substantial new state-of-the-art results on six KILT tasks for combined text generation and provenance retrieval evaluation, while maintaining reasonable efficiency.
1 INTRODUCTION
Enabling machine learning models to access information contained in parametric or non-parametric storage (i.e., retrieval-enhanced machine learning) can lead to efficiency and/or effectiveness improvements in a wide range of learning tasks (Zamani et al., 2022). For example, retrieval-augmented generation (Lewis et al., 2020), which is the focus of this paper, has a manifold of benefits over closed-loop language modelling in knowledge intensive tasks: Answers can be grounded in (multiple) specific pieces of information, which enables clear attribution (Dehghani et al., 2019; Rashkin et al., 2021; Lamm et al., 2021); the knowledge base can easily be managed, updated, and swapped (Izacard et al., 2022); the decomposition into retrieval and generation modules offers clear efficiency-effectiveness tradeoff controls; and the data structure of combined retrieval and text generation enables many insightful failure analyses. However, with these benefits also come downsides, such as higher system complexity with higher training and inference cost. Therefore, our goal is to reduce costs as much as possible, while retaining effectiveness, to make these benefits more widely available.
The most effective approach for knowledge intensive tasks, such as those contained in the KILT benchmark (Petroni et al., 2021), is the Fusion-in-Decoder (FiD) model proposed by Izacard & Grave (2020). The FiD model uses an external retriever, such as a dense retrieval model, to gather candidate passages, which are encoded with the query by a T5-encoder (Raffel et al., 2020); the encoded vectors are concatenated and fed through a T5-decoder to produce a single output string. FiD can synthesize answers from multiple different sources, which leads to state-of-the-art results in many tasks from open domain QA to fact verification (Hofstätter et al., 2022; Izacard et al., 2022).
While undoubtedly the leading architecture – in terms of effectiveness for knowledge intensive generation tasks – the FiD model is resource intensive. In state-of-the-art configurations, concatenating all encoded tokens before decoding often leads to sequences longer than 10 thousand vectors; coupled with auto-regressive decoding, this results in high inference latency. In Figure 1 we plot the average latency of a single query, measured on a single TPUv4, of the encoder and decoder modules
of FiD.¹ The first observation is the overpowering 93% of time spent on decoding in FiD. A common and straightforward approach to reduce the latency of FiD is to reduce the number of input passages, e.g., to only 10 passages. While this approach naturally reduces the overall latency, the decoding still requires 10 times as long as the encoding (see Figure 1). Crucially, this approach will also reduce the model's effectiveness substantially, as we show later in this work (see §4.3). To overcome the inefficiencies of the decoding, we propose FiD-Light, a simple yet effective adaptation of the FiD model. The connection between the encoder and decoder has a large capacity for information in FiD. In contrast, the retrieval community showed that in applications such as dense retrieval with dot-product scoring, encoded information may be compressed to a fraction of the original input length, including representing passages in a single vector (Hofstätter et al., 2021) or multiple vectors (Chen et al., 2020). Following in these footsteps, we propose to compress the number of vectors per encoded passage to a fraction of the input vectors before they are accessed by the decoder. Using this approach, FiD-Light is able to ingest a large number of passages with strongly reduced latency, as illustrated in Figure 1. Here we still use 40 passages, showing the same encoding time as FiD, but substantially faster decoding (now on par with the encoding time), for a total latency lower than FiD with 10 passages.
The knowledge intensive tasks we aim to solve ideally require a system to produce both a generated output text and a ranked list of provenance items from the knowledge base. However, FiD is limited to producing output text only. Falling back to returning the original candidate ranking is usually sub-optimal, with low precision. To incorporate re-ranking capabilities into FiD-Light, we adapt a passage marker workflow proposed by Lakhotia et al. (2021) as part of FiD-Ex. They marked the input passages with textual indices and trained the model to output the relevant indices in the output text. We find that using these textual indices or source pointers directly as output, as Lakhotia et al. (2021) proposed, is brittle and prone to distribution shifts in the number of expected relevant passages between training and evaluation (see §4.2). Therefore, our FiD-LightSP approach re-ranks the selected passages to the top of the ranked list, without discarding the rest of the retrieved list, for higher robustness and improved results.
We conduct experiments on seven tasks of the KILT benchmark composed by Petroni et al. (2021) spanning open domain QA, slot filling, fact verification, and dialogue tasks. We study the following research questions to demonstrate the efficacy of our proposed FiD-LightSP model:
RQ1 What impact does training the retrieval module have on FiD-LightSP downstream results?
The quality of the final result is strongly bound by the recall quality of the retriever module. While many complex end-to-end training procedures have been proposed (Singh et al., 2021; Izacard et al., 2022), we focus on simple yet effective, directly supervised dense retrieval training. We show that a simple retrieval training comfortably outperforms a zero-shot retrieval baseline from Hofstätter et al. (2022), and the resulting FiD-LightSP downstream results take a major step towards a realistic oracle retriever ceiling.
RQ2 How robust is our source pointing and re-ranking workflow applied to FiD and FiD-Light?
We use available passage relevance information for each task in the KILT benchmark to train our source pointer output via text markers. We train the FiD(-Light) generator to output the indices for all relevantly retrieved passages during training, before generating the textual answer. We observe that FiD(-Light)SP learns an expected distribution for the number of selected passages, which might not match relevance distributions during evaluation. To mitigate this problem, we propose to use the source pointer to re-rank the initial list. We show this improves the results over FiD-Ex. Comparing the effectiveness of the source pointers between different FiD-Light settings and the FiD baseline, we find FiDSP rapidly loses effectiveness when the number of input passages is reduced, while FiD-LightSP is able to hold the passage precision at much lower latency.

¹All our measurements in this work are conducted on TPUv4s; however, we confirmed that using V100 GPUs we observe a similar ratio of time spent in the encoder vs. the decoder of FiD and FiD-Light.
RQ3 How does FiD-LightSP compare to the FiDSP baseline in efficiency-effectiveness tradeoffs?
The common approach to speed up FiD is to reduce the number of input passages. Against this, we compare our FiD-LightSP model using a static number of passages, but varying the number of vectors fed into the decoder as well as the T5 backbone size. We show that while FiDSP with fewer passages degrades strongly, FiD-LightSP is able to hold most of the initial maximum effectiveness of FiDSP while being 3× faster. This Pareto optimal result between latency and effectiveness is complemented when we increase the T5-backbone sizes in FiD-LightSP to receive the benefits of larger models, while still outperforming the initial FiDSP baseline in terms of efficiency. Overall, FiD-LightSP is Pareto optimal on six out of the seven tested tasks.
RQ4 How does FiD-LightSP compare to related methods on the KILT benchmark?
We submitted three representative configurations of FiD-LightSP to the blind-evaluated KILT leaderboard test set to compare them to other methods for knowledge intensive tasks. We evaluate FiD-LightSP on the main metric of the KILT benchmark: combined KILT-scores (which only count a text generation score if the R-Precision for the query is 1). We show FiD-LightSP outperforms previous SOTA models by considerable margins on the KILT-scores on six tasks. We set new SOTA results compared to the previous best methods on:
- QA: HotpotQA +11.1 K-EM (+61.3%), NQ +7.5 K-EM (+17.2%), TriviaQA +5.8 K-EM (+10.0%)
- Slot Filling: zsRE +10.8 K-AC (+14.8%), T-REx +0.5 K-AC (+0.7%)
- Fact Verification: FEVER +6.0 K-AC (+7.6%)
We hope these results demonstrate to the community that SOTA results are achievable with reasonable efficiency and that efficient retrieval-augmented generation has a promising future ahead.
2 BACKGROUND AND RELATED WORK
In this section, we first review the FiD model and FiD-Ex workflow, which adds textual explanation markers to FiD. We further discuss other related work in this area.
2.1 FID (FUSION IN DECODER) WITH EXPLANATIONS
A critical capability for retrieval-augmented models is to be able to synthesize and utilize information from multiple distinct retrieved items (Zamani et al., 2022). To effectively implement this paradigm, Izacard & Grave (2020) proposed the FiD model, which re-wires the computational graph between an off-the-shelf pre-trained Transformer encoder and decoder (Vaswani et al., 2017). Usually FiD is initialized with the pre-trained T5 model (Raffel et al., 2020). Given a query q, we retrieve a set of n candidate passages using a separate retrieval module. The retriever is independently trained, and can take any traditional, neural, or hybrid architecture. As in Izacard & Grave (2020), we use a single dense retriever, as it has been shown to outperform traditional retrieval methods (Hofstätter et al., 2022). To encode the information, FiD concatenates the query q with each retrieved passage p and independently feeds the sequences (one per index i) through a Transformer encoder (TE):

e_i = TE([“query: ”; q; “context: ”; p_i])    (1)
o = TD([e_1; e_2; ...; e_n])    (2)
FiD has two main limitations: (1) the text-only output does not provide any information about the exact passage(s) which were used to synthesize the output; and (2) the long input sequence leads to highly inefficient autoregressive decoding (as shown in Figure 1). While the expected output is relatively short (on the order of dozens of tokens), the input to the decoder is large with O(n · (|q| + |p|)) tokens (on the order of thousands of tokens). To alleviate limitation (1), Lakhotia et al. (2021) adapt the FiD workflow with textual explanations (FiD-Ex), inspired by the WT5 (Why?, T5) concept proposed by Narang et al. (2020). For FiD-Ex, the FiD architecture is left untouched; Lakhotia et al. (2021) only adapt the textual input and target output. The input to the encoder is augmented with indices (from 1 to n) to identify individual passages:²

e_i = TE([“query: ”; q; “index: ”; i; “context: ”; p_i])    (3)

The target output t during training is augmented with the indices (using the regular tokens for the numbers, and spaces as separators for multiple indices) of all the known relevant passages R⁺ in the retrieved set:

t̂ = [“index: ”; {r | r ∈ R⁺}; “text: ”; t]    (4)

On one hand, this textual formulation packs more capabilities into the same text-based architecture; on the other hand, we note that this discrete selection of the top-|R⁺| passages from the candidate set is a strong departure from the prevalent pairwise re-ranking models. It opens a new range of induced biases about the expected distribution of |R⁺| not studied before. During inference, the output is parsed to extract the indices as numbers and to remove the additional textual markers before evaluating the output text.
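To make this input/target construction concrete, here is a minimal Python sketch of the string building in Eqs. 3 and 4; the function names are illustrative and tokenization details are omitted.

```python
def fid_ex_encoder_inputs(query, passages):
    """Per-passage encoder inputs with textual index markers (Eq. 3)."""
    return [f"query: {query} index: {i} context: {passage}"
            for i, passage in enumerate(passages, start=1)]

def fid_ex_target(relevant_indices, answer_text):
    """Training target: source-pointer indices followed by the answer text (Eq. 4)."""
    indices = " ".join(str(i) for i in relevant_indices)
    return f"index: {indices} text: {answer_text}"
```

For example, with known relevant passages at indices 1 and 3, the training target becomes "index: 1 3 text: <answer>".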
2.2 RELATED WORK
Efficient Generation Models. To enable their ubiquitous use, a key component, besides their safety, is the efficiency of text generators to run at scale. Naturally, many studies work towards this goal from various angles. Schuster et al. (2022) propose an adaptive early exiting language model, which exits the decoder stack of Transformer layers early for easy-to-predict tokens. The LongT5 model focuses on improving the efficiency of the encoder for long input sequences (Guo et al., 2021); in contrast, we focus on decoder efficiency, as FiD's encoder input is usually short. We believe our FiD-Light adaptations are orthogonal to many other algorithmic and engineering-based generation efficiency improvements and can be combined with them in future work. For a comprehensive overview of efficient transformer architectures, we refer the reader to Tay et al. (2022).
Retrieval-Enhanced Machine Learning. The foundational retrieval-augmented models, e.g., FiD (Izacard & Grave, 2020), RAG (Lewis et al., 2020), and REALM (Guu et al., 2020), are trained to solve individual tasks. Many of their recent improvements optimize end-to-end processes (e.g., EMDR² (Singh et al., 2021)), ensemble multiple modules (e.g., R2-D2 (Fajcik et al., 2021)), or create multiple training loops to update the indexed documents multiple times (e.g., Hindsight (Paranjape et al., 2021)). In contrast, we focus on architectural efficiency improvements with a simple training paradigm. Recently, more task-independent retrieval-enhanced language models have emerged, such as models retrieving from a text-snippet database (Borgeaud et al., 2021) or learning to retrieve from the web with reinforcement learning (Nakano et al., 2021). For more information on retrieval-enhanced machine learning models, we refer the reader to Zamani et al. (2022).
Improving and Adapting the FiD Model. To integrate passage relevance prediction into FiD, Asai et al. (2021) add a second decoding module, which is called for every query-passage sequence to indicate its relevance. They also use this setup to generate silver relevance scores for unjudged passages. Yu et al. (2022) replace the retrieval module with a large language model to generate supporting documents, which are then fused to generate the answer by a default FiD implementation. The current top systems on the KILT leaderboard (Hofstätter et al., 2022; Izacard et al., 2022) use strong retrievers in combination with large T5-backbones for FiD. They also improve the supervised training by using better data sampling or pre-training procedures for more data-efficient fine-tuning. We continue in the spirit of these related works with additional efficiency and capability improvements of FiD.
²Note we adapted the formulation of Lakhotia et al. (2021) from sentence markers to passage indices, to make the formulation more general.
3 FID-LIGHT WITH SOURCE POINTERS
With FiD-LightSP we overcome the two main limitations of the FiD-Ex model and workflow: We drastically increase the efficiency of the decoder by reducing its computational requirement, and we improve the robustness of the passage selection with a source pointing workflow, by shifting our view from an explanation to a second, concurrently solved task: re-ranking passages. We provide an overview of our FiD-LightSP model and source pointer workflow in Figure 2.
Decoder Efficiency. Following our initial observation that FiD spends most of its time in the decoding phase (Figure 1), we adapt the original FiD decoding step (Eq. 2) to reduce the length of each encoded query-passage pair to k vectors via a function f:
ô = TD([f_k(e_1); f_k(e_2); ...; f_k(e_n)])    (5)
This reduces the input length from the previous O(n · (|q| + |p|)) to O(n · k), where k ≪ |q| + |p|. The exact compression ratio depends on the tokens required for the task at hand; we experiment with configurations ranging from a 6× to a 384× reduction. In our experiments, for simplicity, we instantiate f_k as the first k vectors of each sequence. While this architecture change is simple, it strongly disrupts the previous assumption that every encoded token is accessible for decoding in the T5 architecture. Its simplicity also means that the community can easily adapt existing codebases with this change to benefit from the efficiency improvements.
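As a rough sketch of this compression, the following NumPy snippet instantiates f_k as a first-k selection and concatenates the compressed encodings for the decoder (Eq. 5); array shapes and names are illustrative assumptions.

```python
import numpy as np

def first_k(encoded, k):
    """f_k: keep only the first k encoded vectors of one query-passage pair.

    encoded: array of shape (sequence_length, d_model); assumes sequence_length >= k."""
    return encoded[:k]

def fid_light_decoder_input(encoded_passages, k):
    """Compress each encoding to k vectors, then concatenate (Eq. 5).

    The decoder input shrinks from n*(|q|+|p|) vectors to n*k vectors."""
    return np.concatenate([first_k(e, k) for e in encoded_passages], axis=0)

# For example, with n = 40 passages and k = 8 the decoder attends over
# 320 vectors instead of the >10,000 of the full FiD concatenation.
```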
Source Pointing Robustness. To enable source pointing in FiD-Light, we train the model with the source pointing concept proposed by Lakhotia et al. (2021) in FiD-Ex. Our novel contribution is how we handle the output of the source pointers at inference time. If we use them directly as the result, as in FiD-Ex, we are prone to instability in the number of returned passages. The question of processing the output further almost becomes philosophical: if we treat the source pointers as explanations, we cannot process them any further without corrupting the explanation. While there might be a correlation between the textual output and the source-pointed passages, we treat finding the source passages as a task solved concurrently with generating the text output. Because we are not claiming them to be explanations, we can process them further.
We propose to merge the initial ranked candidate list of passages C with the source-pointer-selected passages by re-ranking the selected passages (found in the decoded output ô) to the top of the list:
Ĉ = [ [r | r ∈ ô]; [r | r ∈ C, r ∉ ô] ]    (6)
To compute the selected passages r ∈ ô, we first parse the output ô with a simple parser for the trained format given in Eq. 4, including a conversion from the text tokens representing the indices to integers. In case the model selects multiple passages, we keep the selection order of the model output. If a task contains graded relevance annotations for training passages, we can train the model to follow the grades; if only binary relevance is available (as is the case with KILT), we keep the rank-ordering of the multiple selected passages from the initial candidate list. This change leads to higher robustness in our provenance results, as distribution differences between training and evaluation would otherwise lead to a disadvantaged position, as we demonstrate in Section 4.2.
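A minimal sketch of this parse-and-re-rank step (Eq. 6); the output format follows Eq. 4, while the regular expression and function names are illustrative assumptions.

```python
import re

def parse_source_pointers(output_text):
    """Extract integer indices from model output of the form 'index: 1 3 text: ...'."""
    match = re.match(r"index:\s*((?:\d+\s*)*)text:", output_text)
    if not match:
        return []
    # Deduplicate while keeping the model's selection order.
    return list(dict.fromkeys(int(tok) for tok in match.group(1).split()))

def rerank(candidates, output_text):
    """Eq. 6: move source-pointed passages to the top, keep the rest in order."""
    selected = [i for i in parse_source_pointers(output_text)
                if 1 <= i <= len(candidates)]          # indices are 1-based
    top = [candidates[i - 1] for i in selected]
    rest = [p for j, p in enumerate(candidates, start=1) if j not in selected]
    return top + rest
```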
4 RESULTS
We empirically address the research questions laid out in the introduction. We study the importance of the retriever module, the efficacy of the source pointer workflow, the tradeoff between efficiency and effectiveness using a controlled baseline, and finally we compare our FiD-LightSP to related methods on the blind-evaluated KILT leaderboard. We detail our experiment design in Appendix A.
4.1 INFLUENCE OF THE RETRIEVER
The retrieval module is the backbone for all retrieval-augmented generation. The generation quality is to a large extent bound by the retrieval quality, especially if the retrieved information is not memorized by the generator. To answer RQ1 What impact does training the retrieval module have on FiD-LightSP downstream results? we have to be careful to acknowledge the uncertainty of sparse ranking annotations (Hofstätter et al., 2022).
To accurately quantify the retriever's contribution, we compare the downstream effect of a zero-shot, a fine-tuned (methodology described in detail in Appendix B), and two oracle retrievers in Table 1. In the first section (rows 1-3), retrievers are evaluated without access to relevance judgements (a real-world environment), whereas in the second section (rows 4 & 5) we infuse relevance information during the evaluation (an oracle environment). We find that training the retriever with in-domain training data (row 2) consistently improves results over a zero-shot retriever (row 1) as used by Hofstätter et al. (2022). Always ingesting all known relevant passages during training (row 3), on the other hand, does not significantly change the downstream performance.
To account for annotation uncertainty in our retriever-as-oracle experiments, we study two scenarios: 1) infusing all known relevant passages into the retrieved candidate list (row 4), and 2) setting the candidates to be only the known relevant passages (row 5). Commonly, the community compares results only against the second oracle scenario, showing a large headroom for future retriever improvements (Glass et al., 2021; Shuster et al., 2021). However, we argue that, due to the sparsity of the annotations, we should compare results to our more realistic first oracle scenario (row 4). It still shows a significant opportunity for improvement, albeit the total headroom is roughly halved across the board. Future work may explore more fine-tuning aspects, but we decided to select the simple fine-tuned retriever (row 2).
4.2 SOURCE POINTER ROBUSTNESS
While the initial source pointer concept was proposed by FiD-Ex as sentence markers for explainability, we are the first to study its application in the more complex passage ranking context, combined with our compressed FiD-Light architecture. Therefore, we study RQ2 How robust is our source pointing and re-ranking workflow applied to FiD and FiD-Light?
As introduced earlier, we train the source pointing capabilities into FiD(-Light) by flagging all known relevant passages retrieved in the candidate passage set. By directly using the size of the known relevant item set during training, we instill a strong expectation prior into the model of how many passages ought to be relevant for a given task. Note that if a known relevant passage is not retrieved, we cannot use it for training the generator. In Figure 3, we observe these effects for four representative tasks of the KILT benchmark. Each of these tasks shows a different expected target distribution. We note that the training distribution differs from the target, as it skips non-recalled relevant items. We find the model output distribution on the validation set to closely match the training distribution (albeit here we make no claims about the correctness of the selected passages).
[Figure 3: Distributions of source pointer passages for FiD-LightSP (T5-Base). Panels: (a) TriviaQA, (b) HotpotQA, (c) FEVER, (d) zsRE; x-axis: selected source passages (0–3); y-axis: relative occurrence (0%–100%); series: Target, Train, Model Output.]
Table 2: Comparing our source pointer (SP) re-ranking with the direct model output (Ex) using KILT scores for passages (Pas.) and documents (Doc.). Bold indicates improvement of SP over Ex larger than the 95% CI. HotpotQA and TriviaQA are open domain QA, FEVER is fact verification, and zsRE is slot filling.

| Model | HotpotQA Pas. | HotpotQA Doc. | TriviaQA Pas. | TriviaQA Doc. | FEVER Pas. | FEVER Doc. | zsRE Pas. | zsRE Doc. |
|---|---|---|---|---|---|---|---|---|
| T5-Base: 1 FiD-Ex | 25.4 | 25.6 | 22.0 | 34.1 | 70.1 | 77.2 | 70.1 | 71.6 |
| T5-Base: 2 FiDSP | 25.8 | 26.1 | 23.1 | 39.5 | 71.1 | 78.3 | 70.1 | 71.7 |
| T5-Base: 3 FiD-Light-Ex | 23.5 | 23.7 | 18.8 | 32.1 | 70.0 | 77.1 | 69.3 | 71.2 |
| T5-Base: 4 FiD-LightSP | 23.8 | 24.1 | 19.8 | 37.6 | 71.6 | 78.1 | 69.3 | 71.4 |
| T5-Large: 5 FiD-Light-Ex | 26.6 | 26.9 | 22.6 | 36.3 | 72.6 | 79.2 | 70.9 | 72.7 |
| T5-Large: 6 FiD-LightSP | 26.9 | 27.3 | 23.5 | 41.4 | 74.2 | 80.4 | 70.9 | 72.8 |
| T5-XL: 7 FiD-Light-Ex | 28.2 | 28.4 | 24.8 | 38.7 | 73.9 | 80.5 | 73.1 | 75.9 |
| T5-XL: 8 FiD-LightSP | 28.4 | 28.7 | 25.7 | 43.8 | 75.5 | 81.7 | 73.2 | 76.1 |
However, focusing on the higher passage counts in Figure 3 (a) TriviaQA and (c) FEVER shows that the model struggles to output 3 passages as often as it is expected to. This weakness becomes visible when we evaluate the standard R-Precision of the selection, which needs at least R returned items to reach the full score, given R known relevant items.
To overcome this limitation, we propose, instead of directly outputting the selection (FiD-Ex), to move the selected passages to the top of the ranked list. This essentially transforms FiD(-Light) into a re-ranking model. In Table 2, we show the ablation study to confirm the usefulness of the proposed re-ranking on final downstream results. Our approach is strictly positive or neutral for the results, as we are filling up holes that would otherwise result in penalties. Confirming our hypothesis originating in Figure 3, we see statistically significant improvements across all configurations on the two tasks where the model struggled to fill up the full distribution: TriviaQA and FEVER.
While in this work we do not change the KILT evaluation methodology and instead optimize our models towards the current standard evaluation, we note that these findings point to interesting avenues for future work that would require changes to the evaluation setup: we may train the model to select only a single passage, or even to re-rank the whole list using our textual source pointers as re-rankers.
We might be tempted to directly compare results across settings in Table 2, for example FiDSP in row 2 with FiD-LightSP in row 4 (T5-Base). Here we observe, especially on HotpotQA and TriviaQA, a quality reduction, which would lead us to conclude that source pointing in FiD-Light is less robust than in FiD. To put these results into perspective, we select HotpotQA as an example and plot the query latency as well as the R-Precision of the models in Figure 4. For FiDSP, we modulate the number of input passages; for FiD-Light we modulate the number of vectors k fed to the decoder and the backbone size. We clearly observe a stark reduction in quality for the FiDSP model when the number of input passages is reduced. In contrast, our FiD-LightSP variants maintain almost the same level of effectiveness, and larger backbones produce higher quality while still being faster than the FiDSP baseline. Therefore, an equal-efficiency comparison in Table 2 involves row 2 and row 8 (using T5-XL). We dive deeper into these tradeoffs in the next section.
4.3 EFFICIENCY - EFFECTIVENESS TRADEOFF
Ultimately, we as a community want our research to be applied to real-world use, to benefit society. A major component, besides concerns about safety and social biases as summarized by Bender et al. (2021), is the efficiency of the deployed system. To understand the impact of our proposed FiD-Light architecture, we study RQ3 How does FiD-LightSP compare to the FiDSP baseline in efficiency-effectiveness tradeoffs?
The KILT benchmark gives us the opportunity to study our changes on a large variety of tasks with different properties, so that we can make confident claims about their efficacy. In Figure 5 we show our ablation results per task. For each task we report the average query latency (y-axes) and the main KILT-score effectiveness metric (x-axes). The gray line indicates our FiD baseline, obtained by modulating the input passage count from 40 down to 1. Our FiD-Light models all have access to the full 40 passages; here we modulate T5 sizes as well as the number of vectors (1, 8, 32, 64) fed into the decoder.
We start our discussion with the open domain QA tasks in Figure 5 (a, b, & c), as they provide a similar picture: Comparing our FiD-LightSP model with the baseline, we do observe a drop in effectiveness from the strongest baseline (gray dotted vertical line) when using the same T5-Base model. However, due to the more efficient architecture, we are able to swap backbones and earn the benefits of those larger models in terms of effectiveness. At the same time we outperform the latency of the baseline as well, shifting the Pareto optimum. Interestingly, the FiD-LightSP model with T5-XL and only a single encoded vector per passage shows a larger drop in effectiveness than the counterparts for smaller T5s. The only 2-label classification task, FEVER, shown in Figure 5 (d), exhibits the lowest reduction in effectiveness when constraining the number of encoded vectors in FiD-LightSP. This is likely due to the fact that only little generation is necessary to solve the task. Therefore, our FiD-LightSP configurations improve the Pareto optimum again. The slot-filling tasks in Figure 5 (e & f) show less impact of the T5 size, with little improvement for Large and XL over the Base configurations. Fortunately, we also observe a similarly small reduction in effectiveness when reducing the number of encoded FiD-LightSP vectors, leading to our final Pareto gains.
In conclusion, we observe clear and statistically significant improvements between FiDSP and FiD-LightSP – both in terms of effectiveness and efficiency – across a variety of KILT tasks. FiD-LightSP can lower the query latency by more than 2× and still deliver higher effectiveness by upgrading the language model backbone size.
4.4 COMPARISON TO RELATED WORK
In addition to showing improvements over our own baselines, we now demonstrate the effectiveness of FiD-LightSP in a broader context and answer RQ4 How does FiD-LightSP compare to related methods on the KILT benchmark? The community is fortunate to have a blind-evaluation leaderboard for all KILT tasks³ at our disposal to compare our approaches on a level playing field, where everyone may submit their highly-tuned systems. While the top spots of a leaderboard are typically not populated by efficient methods, we nevertheless submitted three different configurations of FiD-LightSP – all more efficient than our FiD baseline with 40 input passages. We selected a single checkpoint to submit for all tasks, so as to demonstrate our multi-task capabilities and not overfit a single submission to a single task.
We show the leaderboard results for the main KILT-score metrics in Table 3. For the independent breakdown of text generation and retrieval leaderboard scores, we direct the reader to Appendix C. Even our T5-Base configuration in row 8 already outperforms previous SOTA results on five out of the seven tasks. With T5-Large and T5-XL (both progressively reducing the number of encoded vectors to increase efficiency), we set new SOTA results on six out of the seven tasks. Only WoW remains a weak spot, albeit not dramatically different from previous results. The fusion capabilities of FiD paired with our robust source pointing achieve especially impressive results on the challenging HotpotQA task, where exactly two distinct passages containing parts of the answer have to be placed on top of the ranked list. Here, we outperform previous methods by 61% or 11.1 KILT-EM points. On the other two QA tasks we reach +7.5 K-EM (+17.2%) for NQ and +5.8 K-EM (+10.0%) for TriviaQA. The zsRE task with +10.8 K-AC (+14.8%) and FEVER with +6.0 K-AC (+7.6%) round off our strong new SOTA results across a variety of tasks.
5 CONCLUSION
We proposed the FiD-Light model with a robust source pointing workflow to overcome efficiency and versatility limitations in the previous state-of-the-art retrieval-augmented generation model FiD. We adapted the FiD model architecture to compress the amount of information fed to the decoder, for drastically reduced inference latency. We demonstrated at the same time only a modest reduction in effectiveness, which can be alleviated with larger T5-backbones leading to Pareto optimal results on six KILT tasks. Our multi-task system achieved substantial new state-of-the-art results for combined retrieval and generation metrics on six KILT tasks compared to previous methods on the public leaderboard. These results demonstrate that we do not need to always scale up to achieve the highest effectiveness, enabling more researchers to work on this problem in the future.
³The leaderboard is available at: https://eval.ai/web/challenges/challenge-page/689
A EXPERIMENT DESIGN
Implementation. Our experiment setup follows the state-of-the-art multi-task relevance sampled training sets of Hofstätter et al. (2022). All our experiments are based on the T5X framework (Roberts et al., 2022). We start with a GTR-Base dense retrieval model (Ni et al., 2021), which is pre-trained on the MSMARCO passage retrieval task (Bajaj et al., 2016) and has been shown to generalize well on the BEIR benchmark (Thakur et al., 2021). We train our FiD(-Light) models using T5 v1.1 as language model backbone (Raffel et al., 2020) on TPUs. We attach task-specific markers to the queries for the multi-task training. We cap the input at 384 tokens (combined query and passage) and a maximum of 64 output tokens. For training, we use a batch size of 128 with up to 40 retrieved passages, and a learning rate of 10⁻³ with the Adafactor optimizer (Shazeer & Stern, 2018). We do not tune our models to a specific checkpoint, but rather train them all for 50K steps. The only special case is T5-XL, which uses a learning rate of 5 × 10⁻⁴ and is trained for 30K steps. During decoding we use beam search with a beam size of 4.
Datasets. We conduct experiments on 7 KILT tasks: HotpotQA (Yang et al., 2018), TriviaQA (Joshi et al., 2017), Natural Questions (NQ) (Kwiatkowski et al., 2019), T-REx (Elsahar et al., 2018), Zero Shot RE (zsRE) (Levy et al., 2017), FEVER (Thorne et al., 2018), and Wizard of Wikipedia (WoW) (Dinan et al., 2018). We give an overview of the datasets in Table 4. We used the filtered training & passage sets from Hofstätter et al. (2022) and the original evaluation sets from Petroni et al. (2021).
Evaluation. We follow the KILT evaluation setup proposed by Petroni et al. (2021); in particular, we focus on the main KILT-score metrics, which combine a text output metric M (such as EM, Accuracy, or F1) with R-Precision (RP) per query, before aggregating the individual query results over the query result set Q:
K_M = (1 / |Q|) · Σ_{q ∈ Q} M(q_text) · (RP(q_provenance) == 1)    (7)
In essence, KILT-scores only count the text score M if the R-Precision of the query is 1, meaning all R relevant passages or documents are returned in the top-R positions of the ranked list. This metric makes the assumption that only a few (1 to 2) items are marked as relevant, as is the case in the KILT dataset. To reduce the noise in our dev results, we present the mean and a 95% confidence interval measured with a t-statistic over the last 10 checkpoints (every thousand steps from 40K to 50K training steps). For our leaderboard submission, we selected a single checkpoint for all tasks. Unfortunately, we cannot compute statistical significance tests compared to other methods, as the submission files and gold labels are not publicly available.
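To make the gating behaviour of Eq. 7 concrete, the following minimal Python sketch computes a KILT-score from per-query results; the dictionary keys and the `text_metric` callable are illustrative assumptions, not part of the official KILT tooling.

```python
def kilt_score(results, text_metric):
    """KILT-score (Eq. 7): count the text metric M only when R-Precision == 1.

    results: list of per-query dicts with illustrative keys
      'prediction', 'gold_answers'   (text generation side)
      'ranked_ids', 'relevant_ids'   (provenance side)
    text_metric: callable M(prediction, gold_answers) -> float, e.g. exact match.
    """
    total = 0.0
    for q in results:
        r = len(q['relevant_ids'])
        top_r = q['ranked_ids'][:r]
        # R-Precision: fraction of the top-R ranked items that are relevant
        rp = sum(pid in q['relevant_ids'] for pid in top_r) / max(r, 1)
        if rp == 1.0:  # provenance gate: all R relevant items on the top-R positions
            total += text_metric(q['prediction'], q['gold_answers'])
    return total / len(results)
```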
B DENSE RETRIEVAL TUNING RESULTS
In our experiments we use a "double-fine-tuned" GTR dense retriever: First it was trained on the MSMARCO retrieval task (Bajaj et al., 2016) by Ni et al. (2021), and then we fine-tuned their checkpoint further on our combined KILT training set to create a single generalized KILT retrieval module, akin to Maillard et al. (2021). We created passage retrieval training triples containing a query, a known relevant passage, and a sampled negative passage (randomly sampled from the top-100 GTR zero-shot rankings for the query). We then fine-tuned the retriever for 100K steps using the GTR default parameters in the t5x Retrieval framework. We did not employ knowledge distillation (Hofstätter et al., 2020) or complex end-to-end losses (Izacard et al., 2022), to demonstrate the effectiveness of our approach in a simple setting which is likely orthogonal to more complex training setups.
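As a rough illustration of this triple construction, here is a minimal sketch assuming dictionaries `queries`, `qrels` (known relevant passage ids), and `zero_shot_rankings` keyed by query id; all names are hypothetical placeholders rather than part of the actual pipeline.

```python
import random

def build_triples(queries, qrels, zero_shot_rankings, negatives_per_positive=1):
    """Build (query, positive, negative) training triples for the retriever.

    Negatives are drawn at random from the top-100 zero-shot GTR ranking
    of the query, excluding the known relevant passages.
    """
    triples = []
    for qid, query_text in queries.items():
        positives = set(qrels.get(qid, []))
        pool = [pid for pid in zero_shot_rankings[qid][:100] if pid not in positives]
        for pos in positives:
            for neg in random.sample(pool, k=min(negatives_per_positive, len(pool))):
                triples.append((query_text, pos, neg))
    return triples
```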
This approach means that, while we expect the retriever to learn to return better results, we may overshoot our target and overfit on the training data, leading to a growing divide in the train vs. test performance. This matters strongly in our retrieval-augmented generation setup, because we use the fully trained retrieval model as the source for our generation training data. We aim to detect and avoid unnecessary distribution shifts, to actually train the generator on the expected retrieval performance and not on an overfitted training set.
We choose to modulate the learning rate to control for and study the train vs. test distribution shift. We focus on the recall at the highest cutoff we use in our experiments (the top-40) and provide our results in Table 5. First, we show the zero-shot results, as used by the previous state-of-the-art FiD models from Hofstätter et al. (2022), followed by our novel fine-tuned GTR models. Our first observation is that in all tasks we are able to achieve significant R@40 gains on the dev set compared to the zero-shot baseline – ranging from 0.13 to 0.20 in absolute terms. Concerning our learning rate study, we find that too-high learning rates (especially 0.1 and 0.05) show a high ∆T, which indicates a strong distribution shift between train and test. If we were to train only one of the high learning rate checkpoints and compare the dev results to the zero-shot baseline, we could be tempted to use them, as their dev results look strong. However, our fine-grained analysis shows that this would introduce a strong distribution shift.
Another interesting observation is how different task categories seem to converge at different velocities – the open domain QA tasks reach their optimal dev results with higher learning rates, while the other tasks fare better with lower rates. Curiously, we would have guessed the reverse trend, as the initial MSMARCO retrieval task is more closely aligned with QA, suggesting less needed movement. We did not continue to tune the composition of our retrieval training, as it is only a secondary contribution of this work and the differences are quite small compared to the margin we achieve over the zero-shot baseline. Therefore, we decided to go forward with the 0.005 learning rate, as it overall gives the best results with low distribution shifts.
C DETAILED RELATED WORK COMPARISONS
In Table 3 we focused on the combined retrieval and text generation KILT scores. Now, we investigate our results further by analyzing the two components independently in Table 6. For each task we report the leaderboard text generation test score (EM, AC, or F1) and the retrieval quality via R-Precision. As previously noted (Izacard & Grave, 2020; Hofstätter et al., 2022), there is a strong correlation between model size and text generation quality on KILT. For better comparability, and to not "poison" the task with only very large models that are not trainable for many of our fellow researchers, we report small and large model numbers for FiD-Light.
Looking at the existing leaderboard entries, we observe that the top systems mostly rely on the FiD architecture. The most recent and highest performing approaches are FiD generators with relevance sampling and Atlas training regimes (rows 8 & 9). It is important to note that these two systems are very inefficient: They run 50 and 100 passages through FiD per query and use T5-XL and T5-XXL respectively. They also only focus on the text generation part of the KILT challenge, and chose not to submit any supporting passages for the generation. This is in large part due to the fact that FiD on its own does not provide a ranking component for the passages, which leads to under-performing results.
Our FiD-LightSP entries cover multiple T5 sizes and k encoded vector sizes. While there is the expected spread of text generation quality based on the T5 size, we observe that this spread is substantially smaller for the R-Precision metric. To be able to compare methods, the KILT leaderboard computes the R-Precision on a document level. We transformed our passage ranking to document ranks by taking the highest-ranked passage per document as the document rank, and removing subsequent passages from that document from the ranked list (a sketch of this conversion is given below). Overall, all our models beginning with T5-Base set new SOTA results across the board for the ranking sub-task, even considering that we only re-rank 40 passages. Analysing the text generation quality, we see no new SOTA results for FiD-LightSP, but we remain competitive with the largest and slowest entries in the leaderboard.
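To make the passage-to-document rank conversion concrete, here is a minimal sketch; the `doc_id` field name is our own illustrative choice, not the actual data schema:

```python
def passages_to_document_ranks(ranked_passages):
    """Collapse a best-first passage ranking into a document ranking by keeping
    only the highest-ranked passage per document and dropping the rest."""
    seen, doc_ranking = set(), []
    for passage in ranked_passages:  # assumed ordered from best to worst
        doc_id = passage["doc_id"]   # hypothetical field name
        if doc_id not in seen:
            seen.add(doc_id)
            doc_ranking.append(doc_id)
    return doc_ranking
```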
To conclude, we showed that the reason for our overall strong SOTA results on the KILT scores in Table 3 is the combination of competitive text generation quality with the strong SOTA ranking results shown in Table 6.
D FAILURE ANALYSIS
The setup of knowledge-intensive text generation with supporting passages not only enables positive evaluation via the KILT scores, but also a rich quantitative failure analysis. As Boyd-Graber & Börschinger (2019) and Hofstätter et al. (2022) argued, we should spend more time and energy looking beyond our aggregated metrics. Therefore, in Figure 6 we look at the composition of the raw output results of FiD-LightSP (without re-ranking) across 4 potential outcomes: 1) both passage and text results are wrong; 2) correct passage, but wrong text; 3) correct text, but wrong passages; and 4) both result parts are correct. We analyze the results of two T5 backbones across our KILT tasks.
Interestingly, we do not observe converging trends in the failures of the Base and XL backbones across tasks. But we do see strong differences in the distribution of failure types between tasks. The open domain QA tasks are more likely to fail, especially on both parts. For FEVER fact verification, if we score the relevant passage on top, we are very likely to also get the right boolean answer. The large share of wrong passage selection but right answer on TriviaQA is likely attributable to its high degree of noise, as observed by Hofstätter et al. (2022). HotpotQA remains the most challenging task, with the highest double failure rate.
We note that the KILT tasks are highly noisy: we only have 1-2 relevant marked passages in most cases and few, if any, textual variations of the text answers. This is also the reason we did not run this analysis on WoW, which has no exact text matches. We hypothesize that if both result parts fail, we are more likely to have a true failure of the model, compared to failing only one aspect, which could indicate a noise issue in the datasets. However, to confidently claim this we would need to conduct a thorough annotation campaign of the existing results.
We created an interactive website for inspecting all model outputs of FiD-Light, split by our failure analysis modes from Figure 6. The website displays 10 random results per category and task, so as to prevent cherry-picking on our part. Every refresh of the website creates a new random sample, allowing users to explore the datasets and results in a playful, yet targeted, qualitative way. The website is available at: anonymized | 1. What is the focus of the paper regarding improving Fusion-in-decoder's effectiveness and efficiency?
2. What are the strengths of the proposed approach, particularly its simplicity and empirical usefulness?
3. What are the weaknesses of the paper, such as the concern about exaggerated time complexity and lack of discussion on chosen options?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an extension of Fusion-in-Decoder (FiD) for improving both effectiveness and efficiency. To improve efficiency, noting that the decoding step occupies most of the time complexity, the method proposes a "selection-based compression" of the encoded representations, denoted FiD-Light, which uses only the first k vectors in the sequence of encoded token representations for each passage before fusing them directly in the decoder. To enhance effectiveness, reranking is performed through a modified application of the "source pointing" method of FiD-Ex, denoted FiD^{SP}, which explicitly moves the source-pointed passages (i.e., highly relevant passages) forward in the list of passages in the encoder, such that the distribution of the encoder's input becomes more similar between training and test samples. Experiment results show that this modified "source pointing" method improves performance in both the FiD-Ex and FiD-Light settings. In addition, FiD-Light^{SP}, which applies both FiD-Light and FiD^{SP}, substantially reduces the time complexity, demonstrating significantly improved performance at a fixed latency budget.
Overall, the key contributions of the paper are a simple but novel extension of FiD, which has been widely used for retrieval-augmented LMs, and its demonstrated empirical usefulness on several standard datasets.
Strengths And Weaknesses
Strengths
The proposed idea is quite simple and well-motivated, making it an interesting extension of FiD that would be helpful to the FiD-based literature.
The experiments are solidly done, showing that the proposed FiD-Light is quite efficient while retaining effectiveness on various datasets.
Weaknesses:
The proposed method mainly focuses on efficiency. While single-query latency is used as the efficiency metric in the paper, FiD could be implemented efficiently under batch-style GPU-based parallel processing. Thus, a stricter GPU-aware time complexity should be used to measure the efficiency of FiD. The concern is that the time complexity of FiD may be exaggerated in the experiments. Assuming the realistic situation where the input is processed at the maximum length limit of the encoder using a well-designed GPU-based implementation, the authors need to compare the time complexities of the proposed method and FiD.
In the proposed method, the chosen options need to be further explored or discussed. Why are the first k tokens selected in FiD-Light? Why are the source-pointed passages moved forward (rather than backward)? How does performance change when varying k?
A naïve combination of reranking and FiD can also reduce the time complexity: after reranking, the top-m ranked passages can be selected as input for FiD while discarding the other passages. However, it is not clear whether these rerank-and-select experiments are included in the paper's comparison.
Clarity, Quality, Novelty And Reproducibility
The paper is often not easy to read due to its somewhat dense style; a substantial revision for readability may be required for better clarity. The proposed idea is an incremental extension of FiD and not very novel (not so innovative), although it is helpful to the literature. The experiments on the efficiency and effectiveness tradeoff are solidly done, demonstrating interesting behaviors and results of FiD and FiD-Light.
ICLR | Title
Few-Shot Domain Adaptation For End-to-End Communication
Abstract
The problem of end-to-end learning of a communication system using an autoencoder – consisting of an encoder, channel, and decoder modeled using neural networks – has recently been shown to be an effective approach. A challenge faced in the practical adoption of this learning approach is that under changing channel conditions (e.g. a wireless link), it requires frequent retraining of the autoencoder in order to maintain a low decoding error rate. Since retraining is both time consuming and requires a large number of samples, it becomes impractical when the channel distribution is changing quickly. We propose to address this problem using a fast and sample-efficient (few-shot) domain adaptation method that does not change the encoder and decoder networks. Different from conventional training-time unsupervised or semi-supervised domain adaptation, here we have a trained autoencoder from a source distribution that we want to adapt (at test time) to a target distribution using only a small labeled dataset, and no unlabeled data. We focus on a generative channel model based on the Gaussian mixture density network (MDN), and propose a regularized, parameter-efficient adaptation of the MDN using a set of affine transformations. The learned affine transformations are then used to design an optimal transformation at the decoder input to compensate for the distribution shift, and effectively present to the decoder inputs close to the source distribution. Experiments on many simulated distribution changes common to the wireless setting, and a real mmWave FPGA testbed demonstrate the effectiveness of our method at adaptation using very few target domain samples 1.
1 INTRODUCTION
End-to-end (e2e) learning of a communication system using an autoencoder has been recently shown to be a promising approach for designing the next generation of wireless networks (O’Shea & Hoydis, 2017; Dörner et al., 2018; Aoudia & Hoydis, 2019; O’Shea et al., 2019; Ye et al., 2018; Wang et al., 2017). This new paradigm is a viable alternative for optimizing communication in diverse applications, hardware, and environments (Hoydis et al., 2021). It is particularly promising for dense deployments of low-cost transceivers, where there is interference between the devices and hardware imperfections that are difficult to model analytically. The key idea of e2e learning for a communication system is to use an autoencoder architecture to model and learn the transmitter and receiver jointly using neural networks in order to minimize the e2e symbol error rate (SER).
The channel (i.e., propagation medium and transceiver imperfections) can be represented as a stochastic transfer function that transforms its input z ∈ Rd to an output x ∈ Rd. It can be regarded as a black-box that is typically non-linear and non-differentiable due to hardware imperfections (e.g., quantization and amplifiers). Since autoencoders are trained using stochastic gradient descent (SGD)-based optimization (O’Shea & Hoydis, 2017), it is challenging to work with a black-box channel that is not differentiable. One approach to address this problem is to use a known mathematical model of the channel (e.g., additive Gaussian noise), which would enable the computation of gradients with respect to the autoencoder parameters via backpropagation. However, such standard channel models do not capture well the realistic channel effects as shown in Aoudia & Hoydis (2018). Alternatively, recent works have proposed to learn the channel using deep generative models that approximate p(x | z), the conditional probability density of the channel, using Generative Adversarial Networks (GANs) (O’Shea et al., 2019; Ye et al., 2018), Mixture Density Networks (MDNs) (García Martí et al., 2020), and conditional Variational Autoencoders (VAEs) (Xia et al., 2020). The use of a differentiable generative model of the channel enables SGD-based training of the autoencoder, while also capturing realistic channel effects better than standard models.
1Code for our work: https://github.com/jayaram-r/domain-adaptation-autoencoder
Although this e2e optimization with a generative channel model learned from data can improve the physical-layer design for communication systems, in reality, channels often change, requiring collection of a large number of samples and frequent retraining of the channel model and autoencoder. For this reason, adapting the generative channel model and the autoencoder as often as possible, using only a small number of samples, is required for good communication performance. Prior works have (to the best of our knowledge) not addressed the adaptation problem for autoencoder-based e2e learning, which is crucial for real-time deployment of such a system under frequently-changing channel conditions. In this paper, we study the problem of domain adaptation (DA) of autoencoders using an MDN as the channel model. In contrast to conventional DA, where the target domain has a large unlabeled dataset and sometimes also a small labeled dataset (semi-supervised DA) (Ben-David et al., 2006), here we consider a few-shot DA setting where the target domain has only a small labeled dataset, and no unlabeled data. This setting applies to our problem since we only get to collect a small number of labeled samples at a time from the changing target domain (here the channel) 2.
Towards addressing this important practical problem, we make the following contributions:
• We propose a parameter- and sample-efficient method for adapting a generative MDN (used for modeling the channel) based on the properties of Gaussian mixtures (§ 3.1 and § 3.2).
• Based on the MDN adaptation, we propose an optimal input-transformation method at the decoder that compensates for changes in the channel distribution, and decreases or maintains the error rate of the autoencoder without any modification to the encoder and decoder networks (§ 3.3).
• Experiments on a mmWave FPGA platform and a number of simulated distribution changes show strong performance improvements for our method. For instance, in the FPGA experiment, our method improves the SER by 69% with only 10 samples per class from the target distribution (§ 4).
Related Work. Recent approaches for DA such as DANN (Ganin et al., 2016), based on adversarial learning of a shared representation between the source and target domains (Ganin & Lempitsky, 2015; Ganin et al., 2016; Long et al., 2018; Saito et al., 2018; Zhao et al., 2019; Johansson et al., 2019), have achieved much success on computer vision and natural language processing. Their high-level idea is to adversarially learn a shared feature representation for which inputs from the source and target distributions are nearly indistinguishable to a domain discriminator DNN, such that a label predictor DNN using this representation and trained using labeled data from only the source domain also generalizes well to the target domain. Adversarial DA methods are not suitable for our problem, which requires fast and frequent test-time DA, because of their high computational and sample complexity and the imbalance in the number of source and target domain samples.
Related frameworks such as transfer learning (Long et al., 2015; 2016), model-agnostic metalearning (Finn et al., 2017), domain-adaptive few-shot learning (Zhao et al., 2021; Sun et al., 2019), and supervised DA (Motiian et al., 2017a;b) also deal with the problem of adaptation using a small number of samples. Most of them are not applicable to our problem because they primarily address novel classes (with potentially different distributions), and knowledge transfer from existing to novel tasks. Motiian et al. (2017a) is closely related since they also deal with a target domain that only has a small labeled dataset and has the same label space. The key difference is that Motiian et al. (2017a) address the training-time few-shot DA problem, while we focus on test-time few-shot DA. Specifically, their adversarial DA method requires both the source and target domain datasets at training time, and can be computationally expensive to retrain for every new batch of target domain data (a key motivation for this work is to avoid frequent retraining).
2In our problem, labels correspond to the transmitted messages and are essentially obtained for free (see § 3).
2 PRIMER ON AUTOENCODER-BASED END-TO-END COMMUNICATION
Notations. We denote vectors and matrices with boldface symbols. We define the indicator function 1(c) that takes value 1 (0) when the condition c is true (false). For any integer n ≥ 1, we define [n] = {1, · · · , n}. We denote the one-hot-coded vector with 1 at index i and the rest zeros by 1i. The probability density of a multivariate Gaussian with mean µ and covariance matrix Σ is denoted by N (x |µ,Σ). We use the superscripts s and t to denote quantities corresponding to the source and target domain respectively. Table 2 in the Appendix provides a quick reference for the notations.
Following (O’Shea & Hoydis, 2017; Dörner et al., 2018), consider a single-input, single-output (SISO) communication system shown in Fig. 1, consisting of a transmitter (or encoder), channel, and receiver (or decoder). The encoder Eθe(·) is a multi-layer feedforward neural network (NN) with parameters θe, that maps an input message y ∈ Y := {1, · · · ,m} into an encoded symbol z ∈ Rd. The input
message y is mapped into a one-hot-coded vector 1y prior to being processed by the encoder 3. The message y is equivalent to a class label in machine learning terms, and the encoded symbol z = Eθe(1y) is like a representative vector for the class y. We note that the dimension of the encoding d is small (less than 10), and d = 2 is typically used to coincide with traditional modulation techniques (O’Shea & Hoydis, 2017; Goldsmith, 2005). The set of distinct encoded symbols Z = {Eθe(11), · · · ,Eθe(1m)} is referred to as the constellation of the autoencoder. The symbol z is transmitted (via the custom modulation learned by the encoder) over a communication channel, represented by an unknown conditional probability density p(x | z), and is received at the output of the channel as a noisy, distorted symbol x ∈ Rd. The decoder Dθd(·) is also a multilayer, feed-forward NN with parameters θd that predicts the class-posterior probabilities over the m messages based on the distorted channel output x. The decoder is essentially a classifier whose input-output mapping is defined by Dθd(x) := [Pθd(1 |x), · · · , Pθd(m |x)], where Pθd(y |x) is the predicted probability of class y given x. The class with the highest predicted probability is the decoded message ŷ(x) = argmaxy∈Y Pθd(y |x). As in standard classification, the performance metric of the autoencoder is the symbol error rate (SER), defined as E(x,y)[1(ŷ(x) ̸= y)]. Generative Channel Model. In order to learn the encoder and decoder networks using SGDbased optimization, it is necessary to have a differentiable backward path from the decoder to the encoder through the channel. We address this by learning a parametric generative model of the channel Pθc(x | z) (with parameters θc) that closely approximates the true channel conditional density p(x | z). There exists a stochastic data generation or sampling function x = hθc(z,u) corresponding to the generative model, where u captures the random aspects of the channel (e.g., noise and phase offsets; details in Appendix E). In this work, we model the conditional density of the channel using a set of m Gaussian mixtures, one per input message (or class) y ∈ Y:
Pθc(x | z) = ∑_{i=1}^k πi(z) N(x | µi(z), Σi(z)),   z ∈ {Eθe(11), · · · , Eθe(1m)}.   (1)
Here, k is the number of components, µi(z) ∈ Rd is the mean vector, Σi(z) ∈ Rd×d is the (symmetric, positive-definite) covariance matrix, and πi(z) ∈ [0, 1] is the prior probability of component i. It is convenient to express the component prior probability in terms of the softmax function as πi(z) = e^{αi(z)} / ∑_{j=1}^k e^{αj(z)}, ∀i ∈ [k], where αi(z) ∈ R are the component prior logits. We define the parameter vector of component i as ϕi(z)^T = [αi(z), µi(z)^T, vec(Σi(z))^T], where vec(·) is the vector representation of the unique entries of the covariance matrix. We also define the combined parameter vector from all components by ϕ(z)^T = [ϕ1(z)^T, · · · , ϕk(z)^T]. An MDN can model complex conditional distributions by combining a feed-forward network with a parametric mixture density (Bishop, 1994; 2007). We use the MDN to predict the parameters of the Gaussian mixtures ϕ(z) as a function of its input symbol z, i.e., ϕ(z) = Mθc(z), where θc are the parameters of the MDN network. The MDN output with all the mixture parameters has dimension p = k (d(d+1)/2 + d + 1). While there are competing methods for generative modeling of the channel such as conditional GANs (Ye et al., 2018) and VAEs (Xia et al., 2020), we choose the Gaussian MDN based on i) the strong approximation properties of Gaussian mixtures (Kostantinos, 2000) for learning probability distributions; and ii) the analytical and computational tractability it lends to our domain adaptation formulation. The effectiveness of a Gaussian MDN for wireless channel modeling has also been shown in García Martí et al. (2020).
3The encoder has a normalization layer that constrains the average power of the symbols (see Appendix D).
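To illustrate, below is a minimal sketch of an MDN head in TensorFlow (the implementation framework used here); the hidden-layer sizes and the ordering of the parameter blocks in the output are our own illustrative assumptions, and the actual architecture is described in Appendix C.1:

```python
import tensorflow as tf

d, k = 2, 5                            # encoding dimension and number of components
p = k * (d * (d + 1) // 2 + d + 1)     # output dimension for all mixture parameters

# Minimal MDN: maps a symbol z in R^d to the flat mixture parameter vector phi(z).
mdn = tf.keras.Sequential([
    tf.keras.Input(shape=(d,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(p),          # raw outputs, split into (alpha, mu, cov) below
])

def split_params(phi):
    # Split the flat MDN output into prior logits, means, and covariance entries.
    alpha = phi[:, :k]                                 # component prior logits alpha_i(z)
    mu = tf.reshape(phi[:, k:k + k * d], (-1, k, d))   # component means mu_i(z)
    cov = phi[:, k + k * d:]                           # unique covariance entries per component
    return alpha, mu, cov
```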
The input-output function of the autoencoder is given by fθ(1y) = Dθd(hθc(Eθe(1y),u)), and the goal of autoencoder learning is to minimize the symbol error rate. Since the sampling function hθc of a Gaussian mixture channel is not directly differentiable, we apply the Gumbel-Softmax reparametrization (Jang et al., 2017) to obtain a differentiable sampling function (details in Appendix E). More background, including the training algorithm of the autoencoder, is in Appendix D.
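As a sketch of the differentiable sampling function hθc, the following NumPy code combines the Gumbel-Softmax relaxation for component selection with the standard Gaussian reparametrization; the temperature value is an illustrative assumption, and the actual details are in Appendix E:

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=np.random):
    # Differentiable relaxation of a one-hot sample from Cat(softmax(logits)).
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))                    # standard Gumbel noise
    y = np.exp((logits + g) / temperature)
    return y / y.sum()                         # soft one-hot weights over k components

def sample_channel(alpha, mu, chol, temperature=0.5, rng=np.random):
    # alpha: (k,) prior logits; mu: (k, d) means; chol: (k, d, d) Cholesky factors.
    w = gumbel_softmax(alpha, temperature, rng)        # soft component selection
    eps = rng.standard_normal(mu.shape[1])             # the random input u
    comp = mu + np.einsum("kij,j->ki", chol, eps)      # reparametrized component samples
    return w @ comp                                    # soft mixture sample x = h(z, u)
```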
3 PROPOSED METHOD
Problem Setup. Let x, y, z denote a realization of the channel output, message (class label), and channel input (symbol) distributed according to the joint distribution p(x, y, z). We first establish the following result about the joint distribution. Proposition 1. The joint distributions p(x, y, z) and p(x, y) can be expressed in the following form:
p(x, y, z) = p ( x |Eθe(1y) ) p(y) δ(z−Eθe(1y)), ∀x, z ∈ Rd, y ∈ Y
p(x, y) = p ( x |Eθe(1y) ) p(y), ∀x ∈ Rd, y ∈ Y, (2)
where δ(·) is the Dirac delta (or Impulse) function, and we define p(x | y) := p(x |Eθe(1y)) as the conditional distribution of x given the class y.
The proof is simple and given in Appendix A. Let D^s = {(x^s_i, y^s_i, z^s_i), i = 1, · · · , N^s} be a large dataset from a source distribution p^s(x, y, z) = p^s(x | y) p^s(y) δ(z − Eθe(1y)). The data collection involves sending multiple copies of each of the m messages through the channel (e.g., over the air from the transmitter to receiver) by using a standard modulation technique (encoding) for z (e.g., M-QAM (Goldsmith, 2005)), and observing the corresponding channel output x. Different from conventional machine learning, where class labeling is expensive, in this setting the class label is simply the message transmitted, which is obtained for free while collecting the data. The MDN channel model and autoencoder are trained on D^s according to Algorithm 1 (see Appendix D.3).
Due to changes in the channel condition and environmental factors (e.g., moving obstacles), suppose the data distribution changes to p^t(x, y, z) = p^t(x | y) p^t(y) δ(z − Eθe(1y)). While the distribution change may cause a drop in the autoencoder’s performance, we assume that it is gradual enough that domain adaptation is possible (David et al., 2010) (by domain, here
we mean the state of the communication channel during the time period when the MDN and autoencoder are trained). As discussed in § 1, the main challenge in this setting is to collect a sufficiently large dataset to retrain the MDN and autoencoder under the distribution shift. Therefore, suppose we collect a small dataset from the target distribution D^t = {(x^t_i, y^t_i, z^t_i), i = 1, · · · , N^t}, where N^t ≪ N^s. Our goal is to design a few-shot domain adaptation method for the MDN and autoencoder in order to maintain or improve the symbol error rate.
Distribution Change. Referring to the joint distribution Eq. (2), the class prior p(y) is the prior probability of a message y transmitted through the system. In this work, we make a reasonable practical assumption that this prior probability does not change, i.e., pt(y) ≈ ps(y), ∀y ∈ Y . However, the class-conditional distribution of channel output p(x | y) changes, and therefore the class-posterior distribution p(y |x) also changes. This is commonly referred to as the conditional shift assumption (Zhang et al., 2013) (different from covariate shift (Sugiyama et al., 2007)).
Overview of the Proposed Method. Recall from Eqn. (1) that we model the channel distribution p(x | z) as a Gaussian mixture Pθc(x | z), whose parameters are predicted by the MDN, i.e., ϕ(z) =
Mθc(z). From Proposition 1, the m class-conditional distributions of x are given by p(x | y) = p(x |Eθe(1y)), ∀y ∈ Y. Therefore, in our setting, adaptation of the class-conditional distributions is equivalent to adaptation of the m Gaussian mixtures in Eqn. (1). Adaptation of the Gaussian mixtures can be directly accomplished by adapting the MDN (i.e., the parameters θc) using the small target-domain dataset Dt. Our proposed adaptation of the autoencoder consists of two key steps:
1. A light-weight, parameter-efficient adaptation of the MDN using the small target dataset Dt. 2. An efficient feature transformation at the input of the decoder (based on the MDN adaptation) that
compensates for changes in the class-conditional distributions.
Our method requires adaptation of only the MDN (channel model), while the encoder and decoder networks (θe and θd) remain unchanged, making it amenable to fast and frequent adaptation that requires collecting only a small target dataset each time (few-shot setting).
3.1 MDN CHANNEL MODEL ADAPTATION
Our goal is to adapt the m Gaussian mixtures in Eqn (1) that model the source class-conditional distributions. Suppose the m adapted Gaussian mixtures corresponding to the (unknown) target class-conditional distributions are
P_θ̂c(x | z) = ∑_{i=1}^k π̂i(z) N(x | µ̂i(z), Σ̂i(z)),   z ∈ {Eθe(11), · · · , Eθe(1m)},   (3)
where θ̂c are parameters of the adapted (target) MDN, and the component means, covariances, and prior probabilities with a hat notation are defined as in § 2. The adapted MDN predicts all the parameters of the target Gaussian mixture as ϕ̂(z) = Mθ̂c(z) as shown in Fig. 2, where ϕ̂(z) is defined in the same way as ϕ(z). Instead of naively fine-tuning all the MDN parameters θc, or even just the final fully-connected layer 4, we propose a parameter-efficient adaptation of the MDN based on the affine-transformation property of the Gaussian distribution, i.e., one can transform between any two multivariate Gaussians through a general affine transformation. First, we state some basic assumptions required to make the proposed adaptation tractable.
A1) The source and target Gaussian mixtures per class have the same number of components k. A2) The source and target Gaussian mixtures (from each class) have a one-to-one correspondence
between their components.
Assumption A1 is made in order to not have to change the architecture of the MDN during adaptation due to adding or removing of components. Both assumptions A1 and A2 5 make it tractable to find the closed-form expression for a simplified KL-divergence between the source and target Gaussian mixtures per class (see Proposition 2).
Parameter Transformations. As shown in Appendix B.2, the transformations between the source and target Gaussian mixture parameters, for any symbol z ∈ Z and component i ∈ [k], are given by
µ̂i(z) = Ai µi(z) + bi,   Σ̂i(z) = Ci Σi(z) Ci^T,   and   α̂i(z) = βi αi(z) + γi.   (4)
The affine transformation parameters Ai ∈ Rd×d and bi ∈ Rd transform the means, Ci ∈ Rd×d transforms the covariance matrix, and βi, γi ∈ R transform the prior logits. The vector of all adaptation parameters to be optimized is defined by ψ^T = [ψ1^T, · · · , ψk^T], where ψi contains all the affine-transformation parameters from component i. The number of adaptation parameters is given by k (2d² + d + 2). This is typically much smaller than the number of MDN parameters (weights and biases from all layers), even if we consider only the final fully-connected layer for fine-tuning (see Table 1). In Fig. 2, the adaptation layer mapping ϕ(z) to ϕ̂(z) basically implements the parameter transformations defined in Eqn. (4). We observe that the affine-transformation parameters are not dependent on the symbol z (or the class), which is a constraint we impose in order to keep the number of adaptation parameters small. This is also consistent with the MDN parameters θc being independent of the symbol z. Allowing the affine transformations to depend on z would provide more flexibility, but at the same time require more target domain data for successful adaptation.
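For a concrete count in the experimental setting used later (k = 5, d = 2), this gives |ψ| = 5 · (2 · 2² + 2 + 2) = 60 adaptation parameters. Below is a minimal NumPy sketch of the parameter transformations in Eq. (4), essentially the adaptation layer of Fig. 2; the tuple layout of psi is our own illustrative choice:

```python
import numpy as np

def adapt_mixture_params(alpha, mu, Sigma, psi):
    """Apply the per-component affine transformations of Eq. (4).
    psi[i] = (A_i, b_i, C_i, beta_i, gamma_i) for component i."""
    alpha_t, mu_t, Sigma_t = [], [], []
    for i, (A, b, C, beta, gamma) in enumerate(psi):
        mu_t.append(A @ mu[i] + b)               # mu_hat_i(z) = A_i mu_i(z) + b_i
        Sigma_t.append(C @ Sigma[i] @ C.T)       # Sigma_hat_i(z) = C_i Sigma_i(z) C_i^T
        alpha_t.append(beta * alpha[i] + gamma)  # alpha_hat_i(z) = beta_i alpha_i(z) + gamma_i
    return np.array(alpha_t), np.array(mu_t), np.array(Sigma_t)
```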
4We show in our experiments that both the fine-tuning approaches fail to adapt well. 5We perform ablation experiments (Appendix C.4) that evaluate our method under random Gaussian mixtures
with mismatched components. We find that our method is robust even when these assumptions are violated.
Proposition 2. Given m Gaussian mixtures from the source domain and m Gaussian mixtures from the target domain (one each per class), which satisfy Assumptions A1 and A2, the KL-divergence between Pθc(x,K | z) and Pθ̂c(x,K | z) can be computed in closed-form, and is given by:
Dψ(Pθc, P_θ̂c) = E_{Pθc}[ log ( Pθc(x, K | z) / P_θ̂c(x, K | z) ) ]
= ∑_{z∈Z} p(z) ∑_{i=1}^k πi(z) log ( πi(z) / π̂i(z) )
+ ∑_{z∈Z} p(z) ∑_{i=1}^k πi(z) DKL( N(· | µi(z), Σi(z)), N(· | µ̂i(z), Σ̂i(z)) ),   (5)
where K is the mixture component random variable. The first term is the KL-divergence between the component prior probabilities, which simplifies into a function of the parameters [β1, γ1, · · · , βk, γk] . The second term involves the KL-divergence between two multivariate Gaussians (a standard result), which also simplifies into a function of ψ.
The proof and the final expression for the KL-divergence as a function of ψ are given in Appendix A.1. The symbol priors {p(z), z ∈ Z} are estimated using the class proportions from the source dataset Ds. We note that this result is different from the KL-divergence between two arbitrary Gaussian mixtures, for which there is no closed-form expression (Hershey & Olsen, 2007).
3.2 REGULARIZED ADAPTATION OBJECTIVE
From the above analysis, we can formulate the MDN adaptation as the equivalent problem of finding the optimal set of affine transformations (one per component) mapping the source to the target Gaussian mixtures. To reduce the possibility of the adaptation finding bad solutions due to the small-sample setting, we introduce a regularization term based on the KL-divergence (defined earlier), which constrains the distribution shift produced by the affine transformations. We consider two scenarios for adaptation: 1)
Generative adaptation of the MDN in isolation and 2) Discriminative adaptation of the MDN as part of the autoencoder. In the first case, the goal of adaptation is to find a good generative model for the target channel distribution, while in the second case the goal is to improve the classification accuracy of the autoencoder on the target distribution. We focus on the discriminative adaptation here, and present the very similar generative adaptation in Appendix B.3.
Since the goal of adaptation is to improve the decoder's accuracy in recovering the transmitted symbol z from the channel output x, we use the (negative) symbol posterior log-likelihood (PLL) as the first, data-dependent term of the adaptation objective. The second term is the simplified KL-divergence between the source and target Gaussian mixtures, which does not depend on the data.
JPLL(ψ; λ) = −(1/N^t) ∑_{n=1}^{N^t} log P_θ̂c(z^t_n | x^t_n) + λ Dψ(Pθc, P_θ̂c).   (6)
The symbol posterior P_θ̂c(z | x) is computed from the conditional P_θ̂c(x | z) and the symbol priors {p(z), z ∈ Z} using Bayes rule. We observe that the adaptation objective is a smooth and non-convex function of ψ. Also, computation of the objective and its gradient (w.r.t. ψ) are inexpensive operations since i) they do not require forward and back-propagation through the layers of the MDN, and ii) both N^t and the dimension of ψ are small. Therefore, we use the BFGS Quasi-Newton method (Nocedal & Wright, 2006) for minimization, instead of SGD-based large-scale optimization (e.g., Adam). The regularization constant λ is a hyper-parameter of the proposed method, and we propose a validation metric (Appendix B.4) to set its value automatically.
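A minimal skeleton of this optimization with SciPy's BFGS implementation is sketched below; `unflatten`, `log_posterior`, and `kl_source_target` are hypothetical helper names standing in for the quantities defined above (the Bayes-rule symbol posterior and the closed-form KL of Eq. (5)):

```python
import numpy as np
from scipy.optimize import minimize

def objective(psi_flat, target_data, lam):
    # J_PLL(psi; lambda) of Eq. (6): negative symbol posterior log-likelihood on
    # the small target dataset, plus the closed-form KL regularizer.
    psi = unflatten(psi_flat)                                # hypothetical helper
    nll = -np.mean([log_posterior(z, x, psi) for (x, z) in target_data])
    return nll + lam * kl_source_target(psi)                 # hypothetical helper

result = minimize(objective, psi_init, args=(target_data, lam), method="BFGS")
psi_star = result.x
```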
3.3 DECODER ADAPTATION USING FEATURE TRANSFORMATIONS
We propose a computationally-efficient feature transformation g^{-1} : R^d → R^d at the decoder such that the transformed inputs x̂^s = g^{-1}(x^t) are closely aligned to the source distribution on which the decoder was trained (see Fig. 3). This is based on the optimal affine-transformations ψ of the MDN found by minimizing Eqn. (6). This method does not require any change to the trained encoder and decoder networks, making it well suited for the few-shot DA setting. Consider a test input x^t at the decoder from the target-domain marginal distribution p^t(x) = ∑_{z∈Z} p(z) ∑_{i=1}^k π̂i(z) N(x | µ̂i(z), Σ̂i(z)). As shown in Appendix B.2, conditioned on a given symbol z ∈ Z and component i ∈ [k], the affine transformation that maps from the target Gaussian distribution x^t | z, i ∼ N(x | µ̂i(z), Σ̂i(z)) to the source Gaussian distribution x^s | z, i ∼ N(x | µi(z), Σi(z)) is given by
x̂^s = g^{-1}_{zi}(x^t) := Ci^{-1} (x^t − Ai µi(z) − bi) + µi(z).   (7)
However, this transformation requires knowledge of both the transmitted symbol z and the mixture component i, which are not observed at the decoder (the decoder only observes the channel output x^t). We address this by taking the expected affine transformation from target to source, where the expectation is with respect to the joint posterior over the symbol z and component i, given the channel output x^t. This posterior distribution based on the target Gaussian mixture is:
P_θ̂c(z, i | x^t) = p(z) π̂i(z) N(x^t | µ̂i(z), Σ̂i(z)) / ∑_{z′} ∑_{j} p(z′) π̂j(z′) N(x^t | µ̂j(z′), Σ̂j(z′)).
The expected inverse-affine feature transformation at the decoder is then defined as
g^{-1}(x^t) := E_{P_θ̂c(z,i | x)}[ g^{-1}_{zi}(x^t) | x^t ] = ∑_{z∈Z} ∑_{i∈[k]} P_θ̂c(z, i | x^t) g^{-1}_{zi}(x^t).   (8)
We show that this conditional expectation is the optimal transformation from the standpoint of mean-squared-error estimation (Kay, 1993) in Appendix A.2. The adapted decoder based on this feature transformation is illustrated in Fig. 3 and defined as D̂θd(x^t; ψ) := Dθd(g^{-1}(x^t)). For small to moderate numbers of symbols m and components k, this transformation is computationally efficient and easy to implement at the receiver of a communication system. A discussion of the computational complexity of the proposed method is given in Appendix B.5.
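A minimal NumPy sketch of Eqs. (7)-(8) follows; the layout of `params_src`/`params_tgt` (per-symbol tuples of prior probabilities, means, and covariances) and of `psi` is an illustrative data-structure choice of our own:

```python
import numpy as np
from scipy.stats import multivariate_normal

def decoder_transform(x_t, p_z, params_src, params_tgt, psi):
    """Expected inverse-affine transformation g^{-1}(x^t) of Eq. (8)."""
    num = np.zeros_like(x_t)
    den = 0.0
    for zi, (pi_t, mu_t, Sigma_t) in enumerate(params_tgt):  # loop over symbols z
        mu_s = params_src[zi][1]                 # source means mu_i(z), shape (k, d)
        for i, (A, b, C, _, _) in enumerate(psi):            # loop over components
            # Posterior weight P(z, i | x^t), up to the common normalization.
            w = p_z[zi] * pi_t[i] * multivariate_normal.pdf(x_t, mu_t[i], Sigma_t[i])
            # Inverse affine map of Eq. (7): target component i back to the source.
            x_s = np.linalg.solve(C, x_t - A @ mu_s[i] - b) + mu_s[i]
            num += w * x_s
            den += w
    return num / den
```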
4 EXPERIMENTS
We perform experiments to evaluate the proposed adaptation method for the MDN and autoencoder. Our main findings are summarized as follows: 1) the proposed method adapts well to changes in the channel distribution using only a few samples per class, often leading to strong improvement over the baselines; 2) our method performs well under multiple simulated distribution changes, and notably on our mmWave FPGA experiments; 3) Extensive ablation studies show that the proposed KL-divergence based regularization and the validation metric for setting λ are effective.
Setup. We implemented the MDN, autoencoder networks, and the adaptation methods in Python using TensorFlow (Abadi et al., 2015) and TensorFlow Probability. We used the following setting in our experiments. The size of the message set m is fixed to 16, corresponding to 4 bits. The dimension of the encoding (output of the encoder) d is set to 2, and the number of mixture components k is set to 5. More details on the experimental setup, neural network architecture, and the hyper-parameters are given in Appendix C.1.
Baseline Methods. We compare the performance of our method with the following baselines: 1) No adaptation, which is the MDN and autoencoder from the source domain without adaptation. 2) Retrained MDN and autoencoder, which is like an “oracle method” that has access to a large dataset from the target domain. 3) Finetune - where the method optimizes all the MDN parameters for 200 epochs and optimizes the decoder for 20 epochs 6. 4) Finetune last - which follows the same approach as “Finetune”, but only optimizes the last layer of MDN (all the layers of the decoder are however optimized). We note that traditional domain adaptation methods are not suitable for this problem because it requires adaptation of both the MDN (generative model) and the decoder.
Datasets. The simulated channel variations are based on models commonly used for wireless communication, specifically: i) Additive white Gaussian noise (AWGN), ii) Ricean fading, and iii) Uniform or flat fading (Goldsmith, 2005). Details on these channel models and the calculation of their signal-to-noise ratio (SNR) are provided in Appendix F. We also created simulated distribution changes using random, class-conditional Gaussian mixtures for both the source and target channels (we also include random phase shifts). The parameters of the source and target Gaussian mixtures are generated in a random but controlled manner as detailed in Appendix C.3. We also evaluate the performance of the adaptation methods on real over-the-air wireless experiments. We use a recent high-performance mmWave testbed (Lacruz et al., 2021), featuring a high-end FPGA board with 2 GHz bandwidth per-channel and 60 GHz SIVERS antennas (SIVERSIMA, 2020). We introduce distribution changes via IQ (In-phase and Quadrature-phase) imbalance-based distortions to the symbol constellation, and gradually increase the level of imbalance to the system 7. More details on the FPGA experimental setup are given in Appendix C.2.
6We found no significant gains with larger number of epochs in this case.
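For intuition, here is a minimal real-valued NumPy sketch of two of these simulated channels, assuming unit average signal power; the exact parameterizations and SNR conventions used in our experiments are those given in Appendix F:

```python
import numpy as np

def awgn(z, snr_db, rng=np.random):
    # Additive white Gaussian noise at the given SNR (unit signal power assumed).
    noise_var = 10 ** (-snr_db / 10)
    return z + rng.standard_normal(z.shape) * np.sqrt(noise_var / 2)

def flat_fading(z, snr_db, a_min=0.5, a_max=1.0, rng=np.random):
    # Uniform (flat) fading: a random amplitude scales each transmitted symbol.
    a = rng.uniform(a_min, a_max, size=(z.shape[0], 1))
    return awgn(a * z, snr_db, rng)
```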
Evaluation Protocol. Due to the space limit, we provide details of the evaluation protocol, such as train, adaptation, and test sample sizes, and the number of random trials used to get averaged performance, in Appendix C.1. We report the symbol error rate (SER) on a large held-out test dataset (from the target domain) as a function of the number of target-domain samples per class. The only hyper-parameter λ of our method is set automatically using the validation metric proposed in Appendix B.4.
4.1 AUTOENCODER ADAPTATION ON SIMULATED DISTRIBUTION CHANGES
The adaptation results under simulated distribution changes are given in Figs. 4 and 5, with the symbol error rates plotted as a function of the number of target samples per class. In Fig. 4, we consider standard channel distributions such as AWGN, Ricean fading, and Uniform fading. In Fig. 5, we consider random Gaussian mixtures for both the source and the target distributions. We observe that the proposed adaptation leads to a strong improvement in SER in all cases, except in the case of AWGN to Ricean fading (Fig. 4c). We provide some insights on the failure of our method in this case in Appendix C.5. Note that the methods “No adapt” and “Retrained autoenc” have the same SER for all target sample sizes (i.e., a horizontal line). We find both the finetuning baselines to have very similar SER in all cases, and there is not much improvement compared to no adaptation. This suggests that our approach of constraining the number of adaptation parameters and using the KL-divergence regularization is effective in the few-shot DA setting (see Table 1).
7IQ imbalance is a common issue in RF communication that introduces distortions to the final constellation.
4.2 AUTOENCODER ADAPTATION ON FPGA EXPERIMENTS
For this experiment, different levels of distribution change are introduced by varying the IQ imbalance over 20%, 25%, and 30% (higher IQ imbalance corresponds to larger distribution change). From Fig. 6, we observe that the proposed method achieves a significant reduction in error rate compared to the (non-oracle) baselines. The relative improvement in SER over the baselines is more pronounced under higher IQ imbalance. For instance, at 30% IQ imbalance, our method achieves a relative SER improvement of around 69% over the fine-tuning baselines using only 10 samples per class.
4.3 ADDITIONAL EXPERIMENTS
We have performed a number of additional experiments including ablation studies, which are reported in Appendix C.4 through C.6. They include: 1) evaluating the proposed validation metric for automatically setting the hyper-parameter λ; 2) evaluating the importance of the KL-divergence regularization in the adaptation objective; 3) performance of our method when the source and target Gaussian mixtures have a mismatch in the components (addressing Assumptions A1 and A2); 4) performance of our method when there is no distribution shift; and 5) performance of the generative adaptation of the MDN channel. To summarize the observations, we found the validation metric to be effective at setting the value of λ, and that our method has good performance even when Assumptions A1 and A2 are violated, or when there is no distribution shift. The generative MDN adaptation leads to increased log-likelihoods with as few as 2 samples per class.
5 CONCLUSIONS
In this work, we explore one of the first approaches for domain adaptation of autoencoder based e2e communication in the few-shot setting. We first propose a light-weight and parameter-efficient method for adapting a Gaussian MDN with a very small number of samples from the target distribution. Based on the MDN adaptation, we propose an optimal input transformation method at the decoder that attempts to closely align the target domain inputs to the source domain. We demonstrate the effectiveness of the proposed methods through extensive experiments on both simulated channels and a mmWave FPGA testbed. A discussion of limitations and future directions is given in Appendix B.6.
ACKNOWLEDGMENTS
Banerjee, Raghuram, and Zeng were supported in part through the following grants — US National Science Foundation’s CNS-2112562, CNS-2107060, CNS-2003129, CNS-1838733, and CNS-1647152, and the US Department of Commerce’s 70NANB21H043. Somesh Jha was partially supported by the DARPA-GARD problem under agreement number 885000. The authors from IMDEA Networks were sponsored by the Spanish Ministry of Economic Affairs and Digital Transformation under European Union NextGeneration-EU projects TSI-063000-2021-59 RISC-6G and TSI-063000-2021-63 MAP-6G, and by the Regional Government of Madrid and the European Union through the European Regional Development Fund (ERDF) project REACT-CONTACT-CM-23479.
Appendix
Table 2: Commonly used notations
Notation : Description
y ∈ Y := {1, · · · , m} : Input message or class label. Usually m = 2^b, where b is the number of bits.
1y, y ∈ Y : One-hot-coded representation of a label (message) y, with 1 at position y and zeros elsewhere.
z ∈ Z ⊂ R^d with |Z| = m : Encoded representation or symbol vector corresponding to an input message.
x ∈ R^d : Channel output that is the feature vector to be classified by the decoder.
Eθe(1y) : Encoder NN with parameters θe mapping a one-hot-coded message to a symbol vector in R^d.
Dθd(x) = [Pθd(1 | x), · · · , Pθd(m | x)] : Decoder NN with parameters θd mapping the channel output into probabilities over the message set.
ŷ(x) = argmax_{y∈Y} Pθd(y | x) : Class (message) prediction of the decoder.
Pθc(x | z) : Conditional density (generative) model of the channel with parameters θc.
ϕ(z) = Mθc(z) : Mixture density network that predicts the parameters of a Gaussian mixture.
x = hθc(z, u) : Transfer or sampling function corresponding to the channel conditional density.
fθ(1y) = Dθd(hθc(Eθe(1y), u)) : Input-output mapping of the autoencoder with combined parameter vector θ^T = [θe^T, θc^T, θd^T].
ψ^T = [ψ1^T, · · · , ψk^T] : Affine transformation (adaptation) parameters per component used to adapt the MDN.
g_{zi} and g^{-1}_{zi}, i ∈ [k], z ∈ Z : Affine transformations between the components of the source-to-target Gaussian mixtures and vice versa.
DKL(p, q) : Kullback-Leibler divergence between the distributions p and q.
N(· | µ, Σ) : Multivariate Gaussian density with mean vector µ and covariance matrix Σ.
δ(x − x0) : Dirac delta or impulse function centered at x0.
Cat(p1, · · · , pk) : Categorical distribution with pi ≥ 0 and ∑_i pi = 1.
1(c) : Indicator function mapping a predicate c to 1 if true and 0 if false.
∥x∥p : ℓp norm of a vector x.
The appendices are organized as follows:
• Appendix A discusses the theoretical results from the main paper. • Appendix B provides additional details on the proposed method including:
– Discussion on class labels and labeled data in the communication setting (Appendix B.1). – Feature and parameter transformation between multivariate Gaussians (Appendix B.2). – Generative adaptation of the MDN channel (Appendix B.3). – The validation metric used for setting the hyper-parameter λ (Appendix B.4). – Computational complexity analysis of the proposed method (Appendix B.5). – Limitations and future work (Appendix B.6).
• Appendix C provides additional details on the experiments and additional results, including ablation studies of the proposed method.
• Appendix D provides additional background on the following topics: 1) components of an end-to-end autoencoder-based communication system, 2) generative modeling using mixture density networks, 3) the training algorithm of the autoencoder, and 4) a primer on domain adaptation.
• Appendix E provides details on the MDN training and differentiable sampling using the Gumbel-softmax reparametrization.
• Appendix F provides details on the simulated channel distributions used in our experiments.
A THEORETICAL RESULTS
Proposition 1 (restatement). The joint distributions p(x, y, z) and p(x, y) can be expressed in the following form:
p(x, y, z) = p ( x |Eθe(1y) ) p(y) δ(z−Eθe(1y)), ∀x, z ∈ Rd, y ∈ Y
p(x, y) = p ( x |Eθe(1y) ) p(y), ∀x ∈ Rd, y ∈ Y, (9)
where δ(·) is the Dirac delta (or Impulse) function, and we define p(x | y) := p(x |Eθe(1y)) as the conditional distribution of x given the class y.
Proof. It follows from the dependence y → z → x defined by our generative model that p(x, y, z) = p(y) p(z | y) p(x | z, y)
= p(y) δ(z−Eθe(1y)) p(x |Eθe(1y), y) = p(y) δ(z−Eθe(1y)) p(x |Eθe(1y)).
In the second step, the conditional p(z | y) reduces to the Dirac delta since the symbol z can only take one of the m values from the constellation Z = {Eθe(11), · · · ,Eθe(1m)} (for a fixed encoder mapping). The distribution p(x, y) in Eq. (9) is obtained from the third step by integrating p(x, y, z) over all z, and using the integration property of the Dirac delta.
A.1 KL-DIVERGENCE BETWEEN THE SOURCE AND TARGET GAUSSIAN MIXTURES
Proposition 2 (restatement). Given m Gaussian mixtures from the source domain and m Gaussian mixtures from the target domain (one each per class), which satisfy Assumptions A1 and A2, the KL-divergence between Pθc(x,K | z) and Pθ̂c(x,K | z) can be computed in closed-form, and is given by:
Dψ(Pθc, P_θ̂c) = E_{Pθc}[ log ( Pθc(x, K | z) / P_θ̂c(x, K | z) ) ]
= ∑_{z∈Z} p(z) ∑_{i=1}^k πi(z) log ( πi(z) / π̂i(z) )
+ ∑_{z∈Z} p(z) ∑_{i=1}^k πi(z) DKL( N(· | µi(z), Σi(z)), N(· | µ̂i(z), Σ̂i(z)) ),   (10)
where K is the mixture component random variable. The first term is the KL-divergence between the component prior probabilities, which simplifies into a function of the parameters [β1, γ1, · · · , βk, γk] . The second term involves the KL-divergence between two multivariate Gaussians (a standard result), which also simplifies into a function of ψ.
Proof. Referring to § 3.1, we derive the closed-form KL-divergence between the source and target Gaussian mixtures under Assumptions 1 and 2, i.e., the source and target Gaussian mixtures have the same number of components that have a one-to-one association. Recall that θc and θ̂c are the parameters of the original (source) and the adapted (target) MDN respectively. Let K ∈ {1, · · · , k} denote the latent component random variable.
Dψ(Pθc, P_θ̂c) = E_{Pθc}[ log ( Pθc(x, K | z) / P_θ̂c(x, K | z) ) ]
= ∑_{z∈Z} p(z) ∑_{i=1}^k ∫_{R^d} Pθc(x, K = i | z) log [ Pθc(x, K = i | z) / P_θ̂c(x, K = i | z) ] dx
= ∑_{z∈Z} p(z) ∑_{i=1}^k Pθc(K = i | z) ∫_{R^d} Pθc(x | z, K = i) log [ Pθc(K = i | z) Pθc(x | z, K = i) / ( P_θ̂c(K = i | z) P_θ̂c(x | z, K = i) ) ] dx
= ∑_{z∈Z} p(z) ∑_{i=1}^k πi(z) ∫_{R^d} N(x | µi(z), Σi(z)) [ log ( πi(z) / π̂i(z) ) + log ( N(x | µi(z), Σi(z)) / N(x | µ̂i(z), Σ̂i(z)) ) ] dx
= ∑_{z∈Z} p(z) ∑_{i=1}^k πi(z) log ( πi(z) / π̂i(z) ) + ∑_{z∈Z} p(z) ∑_{i=1}^k πi(z) DKL( N(· | µi(z), Σi(z)), N(· | µ̂i(z), Σ̂i(z)) ).   (11)
The second term in the final expression involves the KL-divergence between two multivariate Gaussians (a standard result) given by
DKL( N(· | µ, Σ), N(· | µ̂, Σ̂) ) = (1/2) log [ det(Σ̂) / det(Σ) ] + (1/2) tr(Σ̂^{-1} Σ) + (1/2) (µ̂ − µ)^T Σ̂^{-1} (µ̂ − µ) − d/2.
For clarity, we further simplify Eq. (11) for the case of diagonal covariances by applying the above result. Recall that the Gaussian mixture parameters of the source and target domains are related by the parameter transformations in Eq. (4). The second term in Eq. (11) involving the KL-divergence between multivariate Gaussians, simplifies to
DKL( N(· | µi(z), σi²(z)), N(· | µ̂i(z), σ̂i²(z)) ) = (1/2) ∑_{j=1}^d [ log c_{ij}² + 1/c_{ij}² + ( a_{ij} µ_{ij}(z) + b_{ij} − µ_{ij}(z) )² / ( c_{ij}² σ_{ij}²(z) ) ] − d/2.   (12)
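As a sketch, the per-component KL term of Eq. (12) can be computed as follows (all arguments are length-d NumPy arrays for a fixed component i and symbol z):

```python
import numpy as np

def kl_diag_component(mu, sigma2, a, b, c):
    # Closed-form KL of Eq. (12) between a source diagonal Gaussian and its
    # affine-transformed target: mu_hat = a*mu + b, sigma_hat = c*sigma (elementwise).
    d = mu.size
    return 0.5 * np.sum(
        np.log(c**2) + 1.0 / c**2 + (a * mu + b - mu)**2 / (c**2 * sigma2)
    ) - d / 2
```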
The first term in Eq. (11), involving the KL-divergence between the component prior probabilities, can be expressed as a function of the adaptation parameters [β1, γ1, · · · , βk, γk] as follows:
∑_{i=1}^k πi(z) log ( πi(z) / π̂i(z) ) = ∑_{i=1}^k ( e^{αi(z)} / q(z) ) [ log ( e^{αi(z)} / q(z) ) − log ( e^{βi αi(z) + γi} / q̂(z) ) ]
= log ( ∑_{i=1}^k e^{βi αi(z) + γi} ) − log ( ∑_{i=1}^k e^{αi(z)} ) + ∑_{i=1}^k ( e^{αi(z)} / q(z) ) ( αi(z) − βi αi(z) − γi ),   (13)
where q(z) = ∑_{j=1}^k e^{αj(z)} and q̂(z) = ∑_{j=1}^k e^{βj αj(z) + γj} are the normalization terms in the softmax function. Substituting Eqs. (12) and (13) into the last step of Eq. (11) gives the KL-divergence between the source and target Gaussian mixtures as a function of the adaptation parameters ψ.
A.2 OPTIMALITY OF THE FEATURE TRANSFORMATION
We show that the proposed feature transformation at the decoder in § 3.3 is optimal in the minimum mean-squared-error sense. The problem setting is that, at the decoder, we observe an input x^t from the target domain marginal distribution, i.e.,
x^t ∼ p^t(x) = ∑_{z∈Z} p(z) ∑_{i=1}^k π̂i(z) N(x | µ̂i(z), Σ̂i(z)),
where Z = {Eθe(11), · · · , Eθe(1m)} is the encoder’s constellation. Suppose we knew the symbol z = Eθe(1y) that was transmitted and the mixture component i ∈ [k]; then the transformation g^{-1}_{zi}(x^t) in Eq. (7) could map x^t to the corresponding Gaussian component of the source distribution. However, since z and i are not observed at the decoder, we propose to find the transformation g^{-1} : R^d → R^d (independent of z and i) that minimizes the following expected squared error:
J( g^{-1}(x^t) ) = (1/2) E_{P_θ̂c(z,i | x)}[ ∥ g^{-1}_{zi}(x^t) − g^{-1}(x^t) ∥₂² | x^t ].   (14)
This is the conditional expectation over (z, i) given xt with respect to the posterior distribution P θ̂c (z, i |x). Since xt is fixed, the above objective is a function of the vector w := g−1(xt) ∈ Rd, and it can be simplified as follows:
J(w) = (1/2) E_{P_θ̂c(z,i | x)}[ ∥ g^{-1}_{zi}(x^t) − w ∥₂² | x^t ]
= (1/2) E_{P_θ̂c(z,i | x)}[ g^{-1}_{zi}(x^t)^T g^{-1}_{zi}(x^t) | x^t ] + (1/2) w^T w − w^T E_{P_θ̂c(z,i | x)}[ g^{-1}_{zi}(x^t) | x^t ].
Note that w comes outside the expectation since it does not depend on z or i. The minimum of this simple quadratic function can be found by setting the gradient of J with respect to w to 0, giving
w⋆ = g^{-1}(x^t) = E_{P_θ̂c(z,i | x)}[ g^{-1}_{zi}(x^t) | x^t ] = ∑_{z∈Z} ∑_{i∈[k]} P_θ̂c(z, i | x^t) g^{-1}_{zi}(x^t).
This is the feature transformation at the decoder proposed in § 3.3.
B ADDITIONAL DETAILS ON THE PROPOSED METHOD
In this section we provide additional details on the proposed method that could not be discussed in § 3 of the main paper.
B.1 CLASS LABELS AND LABELED DATA
We would like to clarify that the statement “class labels are available for free” is made in Section 3 in order to highlight the fact that class labels are easy to obtain in this end-to-end communication
setting, unlike other domains (e.g. computer vision) where labeling data could be expensive. Since the transmitted message is also the class label, it is always available without additional effort during the data collection (from the packet preambles). However, note that it is still challenging / expensive to collect a large number of samples for domain adaptation, as discussed in Section 1. In contrast, it may be easy to obtain plenty of unlabeled data in other domains such as computer vision, where labeling is expensive.
In communication protocols, preambles are attached to the front of the packets for synchronization, carrier frequency offset correction, and other tasks. The preambles consist of sequences of known symbols (which have a one-to-one mapping to the messages). Therefore, these sequences can be used as the labeled dataset since the receiver obtains the distorted symbol and knows the ground truth. The proposed MDN adaptation and input transformation at the decoder do not incur any modifications to the encoder (transmitter side). The constellation learned by the autoencoder is kept fixed during adaptation. Therefore, using the preambles from a small number of packets, our method performs adaptation at the receiver side and maintains the symbol error rate performance without communicating any information back to the encoder.
B.2 TRANSFORMATION BETWEEN MULTIVARIATE GAUSSIANS
We discuss the feature and parameter transformations between any two multivariate Gaussians. This result was applied to formulate the MDN adaptation in Eqs. (4) and (7). Consider first the standard transformation from x ∼ N(· |µ,Σ) to x̂ ∼ N(· | µ̂, Σ̂) given by the two-step process:
• Apply a whitening transformation z = D^{-1/2} U^T (x − µ) such that z ∼ N(· | 0, I).
• Transform z into the new Gaussian density using x̂ = Û D̂^{1/2} z + µ̂.
We have denoted the eigen-decomposition of the covariance matrices by Σ = U D U^T and Σ̂ = Û D̂ Û^T, where U and Û are the orthonormal eigenvector matrices, and D and D̂ are the diagonal eigenvalue matrices. Combining the two steps, the overall transformation from x to x̂ is given by
x̂ = Û D̂^{1/2} D^{-1/2} U^T (x − µ) + µ̂.   (15)
Suppose we define the matrix C = Û D̂^{1/2} D^{-1/2} U^T; then it is easily verified that the covariance matrices are related by Σ̂ = C Σ C^T. In general, the mean vector and covariance matrix of any two Gaussians can be related by the following parameter transformations:
µ̂ = A µ + b   and   Σ̂ = C Σ C^T,   (16)
with parameters A ∈ Rd×d, b ∈ Rd, and C ∈ Rd×d. Substituting the above parameter transformations into the feature transformation in Eq. (15), we get
x̂ = C (x − µ) + Aµ + b.
From the above, we can also define the inverse feature transformation from x̂ ∼ N(· | µ̂, Σ̂) to x ∼ N(· | µ, Σ):
x = C^{-1} (x̂ − A µ − b) + µ.
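A minimal NumPy sketch of this whiten-and-recolor transformation (Eqs. (15)-(16)) is shown below:

```python
import numpy as np

def gaussian_transport(x, mu, Sigma, mu_hat, Sigma_hat):
    """Map x ~ N(mu, Sigma) to a sample of N(mu_hat, Sigma_hat) via Eq. (15)."""
    w, U = np.linalg.eigh(Sigma)              # Sigma = U diag(w) U^T
    w_hat, U_hat = np.linalg.eigh(Sigma_hat)  # Sigma_hat = U_hat diag(w_hat) U_hat^T
    # C = U_hat D_hat^{1/2} D^{-1/2} U^T, so that Sigma_hat = C Sigma C^T.
    C = U_hat @ np.diag(np.sqrt(w_hat)) @ np.diag(1.0 / np.sqrt(w)) @ U.T
    return C @ (x - mu) + mu_hat
```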
B.3 GENERATIVE ADAPTATION OF THE MDN
In § 3.2, we discussed the discriminative adaptation objective for the MDN, which is used when the MDN is adapted as part of the autoencoder in order to improve the end-to-end error rate. This adaptation approach was used for the experiments in § 4. On the other hand, we may be interested in adapting the MDN in isolation with the goal of improving its performance as a generative model of the channel. For this scenario, the adaptation objective Eq. 6 is modified as follows. The first (data-dependent) term is replaced with the negative conditional log-likelihood (CLL) of the target dataset, while the second KL-divergence term remains the same:
JCLL(ψ; λ) = −(1/N^t) ∑_{n=1}^{N^t} log P_θ̂c(x^t_n | z^t_n) + λ Dψ(Pθc, P_θ̂c),   (17)
where µ̂i(z), Σ̂i(z) and α̂i(z) as a function of ψ are given by Eq. (4). The parameters of the original Gaussian mixture αi(z),µi(z),Σi(z), ∀i are constants since they have no dependence on
ψ. The regularization constant λ ≥ 0 controls the allowed KL-divergence between the source and target Gaussian mixtures. Small values of λ weight the CLL term more, allowing more exploration in the adaptation, while large values of λ impose a strong regularization to constrain the space of target distributions. We evaluate the performance of this generative MDN adaptation in Appendix C.6.
B.4 VALIDATION METRIC FOR AUTOMATICALLY SETTING λ
The choice of λ in the adaptation objectives Eqs. (6) and (17) is crucial as it sets the right level of regularization suitable for the target domain distribution. Since the target domain dataset is very small, it is difficult to apply cross-validation-type methods to select λ. We propose a validation metric V(ψ; D^t) that utilizes the feature-transformed target domain dataset to evaluate the quality of the adapted solutions for different λ values.
Let ψ denote the adaptation parameters found by minimizing the objective in Eq. (6) for a specific λ ≥ 0. The feature transformation (from target to source domain) at the decoder, $g^{-1}(x)$, based on the adaptation parameters ψ is given by Eq. (8). Recall that the target domain dataset is $\mathcal{D}^t = \{(x_n^t, y_n^t, z_n^t),\; n = 1, \cdots, N^t\}$. We define the feature-transformed target domain dataset as:
$$\mathcal{D}^t_{\mathrm{trans}} = \big\{ \big(g^{-1}(x_n^t),\, y_n^t,\, z_n^t\big),\; n = 1, \cdots, N^t \big\}.$$
Suppose ψ is a good adaptation solution; then we expect the decoder (trained on the source domain dataset) to have good classification performance on $\mathcal{D}^t_{\mathrm{trans}}$. For a given feature-transformed target domain sample, the decoder predicts the class posterior probabilities: $D_{\theta_d}(g^{-1}(x_n^t)) = [P_{\theta_d}(1 \,|\, g^{-1}(x_n^t)), \cdots, P_{\theta_d}(m \,|\, g^{-1}(x_n^t))]$. We define the validation metric as the negative posterior log-likelihood of the decoder on $\mathcal{D}^t_{\mathrm{trans}}$, given by
$$V(\psi; \mathcal{D}^t) = -\frac{1}{N^t} \sum_{n=1}^{N^t} \log P_{\theta_d}\big(y_n^t \,|\, g^{-1}(x_n^t)\big). \qquad (18)$$
We expect smaller values of $V(\psi; \mathcal{D}^t)$ to correspond to better adaptation solutions. The adaptation objective is minimized with λ varied over a range of values, and in each case the adapted solution ψ is evaluated using the validation metric. The pair of λ and ψ resulting in the smallest validation metric is chosen as the final adapted solution. The search set of λ used in our experiments was $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 0.1, 1, 10, 100\}$. See Appendix C.4 for an ablation study on the choice of the hyper-parameter λ using this validation metric.
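The following is a runnable skeleton of this selection procedure; `adapt` and `val_metric` here are toy stand-ins (simple quadratics) for minimizing Eq. (6) and evaluating Eq. (18), used only to show the control flow:

```python
import numpy as np
from scipy.optimize import minimize

def adapt(lam):
    # stand-in for minimizing Eq. (6) at a fixed regularization strength lam
    obj = lambda psi: np.sum((psi - 1.0) ** 2) + lam * np.sum(psi ** 2)
    return minimize(obj, np.zeros(2), method="BFGS").x

def val_metric(psi):
    return np.sum((psi - 0.8) ** 2)      # stand-in for Eq. (18) on D^t_trans

lam_grid = [1e-5, 1e-4, 1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0]
candidates = [(lam, adapt(lam)) for lam in lam_grid]
best_lam, best_psi = min(candidates, key=lambda c: val_metric(c[1]))
print(best_lam, best_psi)
```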
Generative MDN Adaptation. The validation metric proposed above depends on the decoder, and cannot be used when the MDN is adapted as a generative model in isolation (Appendix B.3). For this setting, we modify the validation metric based on the following idea. Suppose the adaptation finds a good solution, then we expect Dttrans to have a high conditional log-likelihood under the (original) source domain MDN. The validation metric is therefore given by
$$V(\psi; \mathcal{D}^t) = -\frac{1}{N^t} \sum_{n=1}^{N^t} \log P_{\theta_c}\big(g^{-1}(x_n^t) \,|\, z_n^t\big), \qquad (19)$$
where $P_{\theta_c}$ is the Gaussian mixture given by Eq. (1).
B.5 COMPLEXITY ANALYSIS
We provide an analysis of the computational complexity of the proposed adaptation methods.
MDN Adaptation.
The number of free parameters being optimized in the adaptation objective (Eq. (6) or (17)) is $|\psi| = k\,(2d^2 + d + 2)$. This is much smaller than the number of parameters in a typical MDN, even considering only the final fully-connected layer (see Table 1 for a comparison). Each step of the BFGS optimization involves computing the objective function, its gradient, and an estimate of its inverse Hessian. The cost of one step of BFGS can thus be expressed as $O(N^t k d^2 |\psi|^2)$. Suppose BFGS runs for a maximum of T iterations and the optimization is repeated for L values of λ; then the overall cost of adaptation is $O(L\, T\, N^t k d^2 |\psi|^2)$. Note that the optimizations for different λ values can easily be solved in parallel.
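As a quick sanity check of these counts, using the settings from our experiments (k = 5, d = 2, n_h = 100; see Appendix C.1):

```python
k, d, n_h = 5, 2, 100                       # settings used in our experiments
n_adapt = k * (2 * d**2 + d + 2)            # |psi| = 60 adaptation parameters
p = k * (d * (d + 1) // 2 + d + 1)          # MDN output dimension (full covariance)
n_last = (n_h + 1) * p                      # weights + biases of the final MDN layer alone
print(n_adapt, n_last)                      # 60 vs. 3030
```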
Test-time Adaptation at the Decoder.
We analyze the computational cost of the feature transformation-based adaptation at the decoder proposed in § 3.3. Consider a single test input $x^t$ at the decoder. The feature transformation method first computes the posterior distribution $P_{\hat{\theta}_c}(z, i \,|\, x^t)$ over the set of symbol-component pairs of size $k m$. Computation of each exponent factor in the posterior distribution requires $O(d^3)$ operations for the full-covariance case, and $O(d)$ operations for the diagonal-covariance case. This corresponds to the calculation of the log of the Gaussian density. Therefore, computation of the posterior distribution for a single $(z, i)$ pair requires $O(k m d^3)$ operations for the full-covariance case (similarly for the diagonal case). Computation of the affine transformation $g^{-1}_{zi}(x^t)$ for a single $(z, i)$ pair requires $O(d^2)$ operations (the matrix $C_i$ only needs to be inverted once, prior to test-time adaptation). Since calculation of the posterior term dominates the computation, the overall cost of computing the transformation in Eq. (8) over the $k m$ symbol-component pairs is $O(k m \cdot k m d^3) = O(k^2 m^2 d^3)$.
We note that in practical communication systems d is small (typically d = 2). The number of symbols or messages m can vary from 4 to 1024 in powers of 2. The number of mixture components k can be any positive integer, but is usually not more than a few tens to keep the size of the MDN practical. Therefore, the computational cost of test-time adaptation at the decoder based on the feature transformation method is relatively small, making our proposed adaptation very computationally efficient to implement at the receiver side of a communication system.
B.6 LIMITATIONS AND FUTURE WORK
The proposed work focuses mainly on a mixture density network (MDN) as the generative channel model, which allows us to exploit some of its useful properties in our formulation. Generalizing the proposed few-shot domain adaptation to other types of generative channel models, such as conditional GANs, VAEs, and normalizing flows (Dinh et al., 2017), could be an interesting direction. These generative models can handle high-dimensional, structured inputs.
The proposed work does not adapt the encoder network, i.e., the autoencoder constellation is not adapted to changes in the channel distribution. Adapting the encoder, decoder, and channel networks jointly would allow for more flexibility, but would likely be slower and require more data from the target distribution.
We focused on memoryless channels, where inter-symbol interference (ISI) is not a problem. In practice, communication channels can have memory, and ISI would have to be addressed by the training and adaptation methods. Under changing channels, one would also have to adapt an equalizer model (algorithm) in order to mitigate ISI.
C ADDITIONAL EXPERIMENTS
We provide additional details on the experiments in § 4 and report additional results, including ablation studies on the proposed method.
C.1 EXPERIMENTAL SETUP
We implemented the mixture density network and communication autoencoder models using TensorFlow (Abadi et al., 2015) and TensorFlow Probability. We used the BFGS optimizer implementation available in TensorFlow Probability. The code base for our work has been submitted as supplementary material. All the experiments were run on a MacBook Pro with 16 GB memory and 8 CPU cores. Table 3 summarizes the architecture of the encoder, MDN (channel model), and decoder neural networks. Note that the output layer of the MDN is a concatenation (denoted by ⊕) of three fully-connected layers predicting the means, variances, and mixing prior logit parameters of the Gaussian mixture. The following setting is used in all our experiments. The size of the message set m (also the number of classes) was fixed to 16, corresponding to 4 bits. The dimension of the encoding d was set to 2, and the number of mixture components k was set to 5. The size of the hidden layers $n_h$ was set to 100.
The parameters ψ of the proposed adaptation method are initialized as follows for each component i:
$$A_i = I_d, \quad b_i = 0, \quad C_i = I_d, \quad \beta_i = 1, \quad \gamma_i = 0,$$
where $I_d$ is the $d \times d$ identity matrix. This initialization ensures that the target Gaussian mixtures (per class) are always initially equal to the source Gaussian mixtures. The regularization constant λ in the adaptation objective was varied over 8 equally-spaced values on the log scale, ranging from $10^{-5}$ to 100, specifically $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 0.1, 1, 10, 100\}$. The λ value and ψ corresponding to the smallest validation metric are selected as the final solution.
We used the Adam optimizer (Kingma & Ba, 2015) with a fixed learning rate of 0.001, batch size of 128, and 100 epochs for training the MDN. For adaptation of the MDN using the baseline methods Finetune and Finetune last, we used Adam with the same learning rate for 200 epochs. The batch size is set as b = max{10, 0.1N t}, where N t is number of adaptation samples in the target dataset. For training the autoencoder using Algorithm 1, we found that stochastic gradient descent (SGD) with Nesterov momentum (constant 0.9), and an exponential learning rate schedule between 0.1 and 0.005 works better than Adam.
Finetuning Baselines. We provide additional details on the baselines Finetune and Finetune last. Both the methods first initialize the target domain MDN, encoder, and decoder networks with the corresponding parameters from the source domain. The method Finetune first finetunes all the MDN parameters to minimize the conditional log-likelihood of the target dataset using the Adam optimizer. After the MDN is finetuned, we freeze the parameters of the MDN and encoder, and train only the decoder using data generated from the updated MDN channel. The method Finetune last differs from Finetune in that it optimizes only the weights of the final MDN layer.
From the results in Figures 4, 5, and 6, we observe that the baselines Finetune and Finetune last have very similar performance compared to the case of no adaptation. We have investigated this carefully and verified that this is not due to a bug or insufficient optimization (e.g., by checking if the final weights of the MDN and decoder are different for both methods). For both methods, we tried a range of learning rates for Adam and increased the number of epochs to a large number (beyond 200 was not helpful). We have reported the best-case results for these methods, which suggests that they are not effective at adaptation using small target domain datasets. As mentioned in Section 4.1, we hypothesize that using the KL-divergence based regularization and constraining the number of adaptation parameters leads to more effective performance of our method.
Uncertainty Estimation. Since there is inherent randomness in our experiments, especially with the small sample sizes of the target dataset, we always report average results from multiple trials. For the experiments on standard simulated channel variations (e.g., AWGN to Ricean fading), we report the results from 10 trials. For the random Gaussian mixtures experiment, we report the average and standard error over 50 random source/target dataset pairs. For the FPGA experiments, we report the results from 20 random trials. The average metrics (symbol error rate and log-likelihood) are reported in the plots.
Evaluation Protocol. We create a random class-stratified 50-50 train-test split (each of size 300,000) for data from both the source and target domains. Performance on both domains is always evaluated on the held-out test split. The train split from the target domain dataset is sub-sampled to create adaptation datasets of different sizes, specifically with 5, 10, 20, 30, 40, and 50 samples per class (symbol). For the generative adaptation experiments on the MDN (Appendix C.6), the number of adaptation samples from the target domain is reduced even further: we varied it from 2 samples per class to 20 samples per class in order to highlight the improvements obtained by the proposed method. The oracle baseline method, which retrains the autoencoder and MDN on the target distribution, uses the entire training dataset from the target domain.
Choice of SNR. For the experiments on simulated channel distributions such as AWGN, Ricean fading, and Uniform fading, we set the signal-to-noise ratio (SNR) to 14 dB for the source distribution and 20 dB for the target distribution. The connection between the SNR and the distribution parameters is given in Appendix F. We have experimented with other combinations of SNR for the source and target channels and found a similar trend in the adaptation performance.
In the simulated experiments, we focused on the SNR range of 14 dB to 20 dB. Our process for selecting this SNR range was by first evaluating the symbol error rate (SER) vs. SNR curve of the autoencoder for the different simulated channel distributions. We found that going below 14 dB SNR results in a degradation of the autoencoder’s performance (except for the AWGN channel, which we do not use as a target distribution). Also, going above 20 dB SNR did not lead to a significant decrease in the SER. For the channels such as Ricean fading and Uniform fading, we found that even a retrained autoencoder has a relatively high error rate for lower SNRs.
C.2 DETAILS ON THE FPGA EXPERIMENT
Referring to the experiment in § 4.2, for the real, over-the-air traces we used the platform from Lacruz et al. (2021). This ultra-wide-band mmWave transceiver baseband memory-based design is developed on top of a ZCU111 RFSoC FPGA. The evaluation board features a Zynq UltraScale+ ZCU28DR, equipped with 8×8 AD/DA converters with giga-sampling capabilities, which makes it ideal for RF system development. The board has 4 GB of DDR4 memory, RF-ADCs with up to 4 GSPS of sampling rate, and RF-DACs with up to 6.544 GSPS. It also includes a quad-core ARM Cortex-A53 and a dual-core ARM Cortex-R5 real-time processor.
For the radio frequency, we used 60 GHz RF front-end antennas. These kits include a 16 + 16 TRX patch array antenna plus the RF module with up/down conversion from baseband to I/Q channels,
and TX/RX local oscillator (LO) frequency control. The antennas operate over 57–71 GHz, a range of frequencies that covers the unlicensed 60 GHz band for mmWave channels, and are managed from a PC host via USB.
We implemented hardware-in-the-loop training. For the experimentation on real traces, Matlab serves as the central controller: the PC host running Matlab is connected to the platform via Ethernet. The FPGA can transmit different custom waveforms, such as 16-QAM frames from the 802.11ad and 802.11ay standards, with 2 GHz of bandwidth. The frames are sent over the air via the 60 GHz radio frequency kits, and the samples are stored in the FPGA DDR memory. We decode the received data from the transmission, removing the preamble and header fields and extracting the symbols to train the MDN. We add a preamble to the constellation generated by the MDN for packet detection purposes, and we again transmit the new waveforms over the air. Finally, the adaptation is performed offline with the decoded symbols from the custom autoencoder-learned constellation.
Source and Target Domains.
For the experiment in § 4.2, we introduced distribution changes via IQ imbalance-based distortions to the symbol constellation, and evaluated the adaptation performance as a function of the level of imbalance. The source domain would be the original channel, the over-the-air link between the transmitter and receiver on which the training data is collected. This source domain data is used for training the MDN and the autoencoder. The target domain would be a modification of the source domain where the symbols used by the transmitter are distorted by modifying the in-phase and quadrature-phase (IQ) components of the RF signal. This causes a change in the distribution observed by the receiver (decoder), leading to a drop in performance without any adaptation.
C.3 DETAILS ON THE RANDOM GAUSSIAN MIXTURE DATASETS
We created a simulated distribution shift setting where data from both the source and target domains are generated from class-conditional Gaussian mixtures whose parameters are modified between the two domains (e.g., see Fig. 7). The parameters for the source and target Gaussian mixtures are generated as follows:
Source Domain. The source domain data is generated with a standard 16-QAM constellation $\mathcal{Z}_{\mathrm{QAM}}$, which has 16 classes (messages). Let $k_s$ be the number of components in the source Gaussian mixture. For each $z \in \mathcal{Z}_{\mathrm{QAM}}$:
• Calculate $d_{\min}$, the minimum distance from z to the remaining symbols in $\mathcal{Z}_{\mathrm{QAM}}$. Let $\sigma_s = d_{\min}/4$ be a constant standard deviation for this symbol.
• Component priors: generate $\pi_i(z) \sim \mathrm{Unif}(0.05, 0.95),\ \forall i \in [k_s]$. Normalize the priors to sum to 1.
• Component means: generate $\mu_i(z) \sim N(\cdot \,|\, z, \sigma_s^2 I),\ \forall i \in [k_s]$.
• Component covariances: generate $s_1, \cdots, s_d \overset{\mathrm{iid}}{\sim} \mathrm{Unif}(0.2\,\sigma_s, \sigma_s)$ and let $\Sigma_i(z) = \mathrm{diag}(s_1^2, \cdots, s_d^2),\ \forall i \in [k_s]$ (the covariances are diagonal).
• Generate $N^s/m$ samples corresponding to symbol z from the Gaussian mixture: $x_n^s \sim \sum_{i=1}^{k_s} \pi_i(z)\, N(x \,|\, \mu_i(z), \Sigma_i(z))$.
Target Domain. The parameters of the target Gaussian mixture are generated in a very similar way. The MDN and autoencoder are trained on the source domain dataset. Let $\mathcal{Z} = \{E_{\theta_e}(\mathbf{1}_1), \cdots, E_{\theta_e}(\mathbf{1}_m)\}$ be the constellation learned by the autoencoder, and let $k_t$ be the number of components in the target Gaussian mixture. For each $z \in \mathcal{Z}$:
• Calculate $d_{\min}$, the minimum distance from z to the remaining symbols in $\mathcal{Z}$. Let $\sigma_t = d_{\min}/4$ be a constant standard deviation for this symbol.
• Component priors: generate $\hat{\pi}_i(z) \sim \mathrm{Unif}(0.05, 0.95),\ \forall i \in [k_t]$. Normalize the priors to sum to 1.
• Component means: generate $\hat{\mu}_i(z) \sim N(\cdot \,|\, z, \sigma_t^2 I),\ \forall i \in [k_t]$.
• Component covariances: generate $s_1, \cdots, s_d \overset{\mathrm{iid}}{\sim} \mathrm{Unif}(0.2\,\sigma_t, \sigma_t)$ and let $\hat{\Sigma}_i(z) = \mathrm{diag}(s_1^2, \cdots, s_d^2),\ \forall i \in [k_t]$ (the covariances are diagonal).
• Generate $N^t/m$ samples corresponding to symbol z from the Gaussian mixture: $x_n^t \sim \sum_{i=1}^{k_t} \hat{\pi}_i(z)\, N(x \,|\, \hat{\mu}_i(z), \hat{\Sigma}_i(z))$.
We set $k_s = k_t = 3$, except for the experiment where the source and target Gaussian mixtures are mismatched; in that case, $k_s$ and $k_t$ are randomly selected for each dataset from the range $\{3, 4, 5, 6\}$.
Random Phase Shift. We allow the channel output x to be randomly phase shifted on top of the other distribution changes. This is done by multiplying x with a rotation matrix, where the rotation angle for each sample is uniformly selected from $[-\phi, \phi]$. We set $\phi$ to $\pi/18$ radians (10 degrees). Results on a dataset with a random phase shift applied on top of the random Gaussian mixture distribution shift can be found in Fig. 5c. A code sketch of this data-generation procedure is given below.
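A sketch of this generation procedure for the source domain is given below (the target domain is analogous). It is illustrative only: the 16-QAM grid here is unnormalized, and the helper names are hypothetical:

```python
import numpy as np

def make_mixture(symbols, k, rng):
    """Random per-symbol Gaussian mixtures following the recipe above."""
    params = {}
    for z in symbols:
        others = symbols[np.any(symbols != z, axis=1)]
        sigma = np.min(np.linalg.norm(others - z, axis=1)) / 4.0   # sigma = d_min / 4
        pri = rng.uniform(0.05, 0.95, size=k); pri /= pri.sum()
        mus = rng.normal(loc=z, scale=sigma, size=(k, 2))
        covs = [np.diag(rng.uniform(0.2 * sigma, sigma, size=2) ** 2) for _ in range(k)]
        params[tuple(z)] = (pri, mus, covs)
    return params

def sample_symbol(params, z, n, rng):
    pri, mus, covs = params[tuple(z)]
    comp = rng.choice(len(pri), size=n, p=pri)
    return np.stack([rng.multivariate_normal(mus[i], covs[i]) for i in comp])

rng = np.random.default_rng(0)
grid = np.array([-3.0, -1.0, 1.0, 3.0])                 # unnormalized 16-QAM grid
qam16 = np.array([[a, b] for a in grid for b in grid])
src = make_mixture(qam16, k=3, rng=rng)
x = sample_symbol(src, qam16[0], n=100, rng=rng)

# optional random phase shift with phi_max = pi / 18 (10 degrees)
phi = rng.uniform(-np.pi / 18, np.pi / 18, size=len(x))
R = np.stack([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])  # (2, 2, n)
x_rot = np.einsum("ijn,nj->ni", R, x)
```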
C.4 ABLATION EXPERIMENTS
We perform ablation experiments to understand: 1) the choice of the hyper-parameter λ, 2) the importance of the KL-divergence regularization in the adaptation objective, 3) performance of our method when the source and target Gaussian mixtures have mismatched components, and 4) the performance of our method when there is no distribution change.
Automatic Selection of Hyper-parameter λ. We evaluate the proposed validation metric for automatically selecting the hyper-parameter λ and report the results in Fig. 8. We run the proposed method for different fixed values of λ as well as the automatically-selected λ, and compare their
performance on the target domain test set. We consider both simulated channel variations and the random Gaussian mixture datasets. From the figure, we observe that in most cases performance based on the automatically set value of λ is better than other fixed choices of λ. The case of adaptation from AWGN to Ricean fading is an exception, where our method does not learn a good adaptation solution (see Fig. 4c). In this case, we observe from Fig. 8b that the setting λ = 0.0001 has the best symbol error rate.
Performance Under Component Mismatch. We evaluate the symbol error rate performance of all the methods in the setting where the number of components in the source and target Gaussian mixtures is mismatched. The number of components in the source and target Gaussian mixtures is randomly selected from the range 3 to 6. From Fig. 11, we observe that the proposed method has strong performance improvements even in this mismatched setting, suggesting that our method can perform well even when Assumptions A1 and A2 are not satisfied.
Importance of the KL-divergence Regularization. Recall that the adaptation objectives in Eqs. (6) and (17) include the KL-divergence term scaled by λ in order to avoid large distribution changes when there is not enough support from the small target-domain dataset. A natural question to ask is whether this term is useful and helps improve the adaptation solution when λ > 0. To answer this, we compare the performance of our method with λ = 0 against that of our method with λ set automatically using the validation metric. The results of this comparison are given in Fig. 9 on four simulated channel variations. The results are averaged over multiple trials, as before. It is clear that setting λ = 0 leads to much higher symbol error rates than setting λ to a non-zero value using the validation metric, establishing the importance of the KL-divergence term.
Performance Under No Distribution Change. We evaluate the symbol error rate performance of all the methods in the setting where there is no distribution change. In this setting, the performance of the MDN and autoencoder should not change, and we expect the proposed adaptation method to maintain a similar performance (not lead to increased symbol error rate). In Fig. 10, we report the results of this experiment when both the source and target channel distributions are either Ricean fading or Uniform fading. We consider a medium SNR value of 14 dB and a high SNR value of 20 dB. We observe that our method is relatively stable even when there is no distribution change, and there is only a small increase in error rate. For instance, in Fig. 10c, the error rate of our method increases from 0.015 to 0.018 for 5 samples per class.
We expect that a practical system that frequently adapts to changes in the channel distribution should first have a distribution change-detection algorithm that takes a batch of new samples from the channel and detects whether there is any change in the distribution. The actual domain adaptation algorithm is then applied only when a distribution change is detected. In this way, any potential drop in the autoencoder’s performance when there is no distribution change can be made less likely.
C.5 ANALYSIS OF THE FAILURE ON AWGN TO RICEAN FADING
Referring to Fig. 4c in the main paper, we observe that our method has a worse symbol error rate compared to no adaptation and the other baselines for the adaptation setting from an AWGN channel at 14 dB SNR to a Ricean fading channel at 20 dB SNR.
Summary Of The Paper
The paper addresses the problem of handling domain shifts that arise in generative learnt channel models in E2E communication systems in a few-shot setting.
The proposed domain adaptation approach is tailored around a Mixture Density Network (MDN) representing the channel model. Here, the approach:
- learns an adapter layer, which models an affine transform of the original conditional channel distribution
- introduces an additional regularization objective to ensure the adapter doesn't converge to bad/degenerate solutions
- presents a feature transformation formulation on the decoder side to aid learning on the domain-shifted distributions
The approach is evaluated extensively, covering multiple types of distribution changes in both synthetic settings as well as on a high-resolution mmWave testbed.
Strengths And Weaknesses
Strengths
1. Extensive evaluation
The approach is evaluated rigorously with well-suited baselines and a range of scenarios (e.g., multiple types of domain shifts, real-world evaluation). I especially appreciate evaluations studying when the (reasonable) assumptions are violated.
2. Motivation and relevant problem
While there has been a lot of attention on generative channel modelling recently, most works to my knowledge largely (and somewhat incorrectly) assume a stationary distribution. This paper takes a step in the right direction by addressing this pain point.
3. Insightful approach
The approach overall is insightful and makes sense. By learning an adapter network whose parameters are relevant for the domain shifts (e.g., FiLM-like modules), it makes few-shot domain adaptation more tractable.
Furthermore, I find the choice of the channel model representation (MDNs) to also be sufficiently appropriate for the task (as opposed to GANs) for this study.
Concerns
1. "labeled set obtained for free"
The paper claims multiple times that few-shot learning is especially feasible since we can get a labeled dataset for free -- I find this slightly confusing.
Wouldn't the labeled dataset be split between the encoder (transmitter) and decoder (receiver) devices? As a result, for a party to have the full labeled dataset, isn't communicating the labels back to the other party a prerequisite?
2. Evaluation: Some observations unclear
I found some patterns in the evaluation somewhat unclear and would appreciate the authors' answers to the questions below:
(a) Oracle-approach gap in Figure 4/5: I'm slightly surprised that the proposed approach's symbol error rate does not converge to the oracle with a reasonable number of additional examples (50 * 16-QAM classes = 800), given that there are 50 learnable parameters. Are the authors aware if convergence is possible with even more examples? Moreover, what is the size of the source dataset?
(b) Unchanged error rates in Figure 4/5 for many baselines: Are the authors aware of why the error rates of many baselines do not improve at all in spite of more training examples? Were the "finetune" baselines finetuned only on the new data or a combination? In the case of a combination, are domain-invariant features learnt?
(nitpick) Please summarize the performance degradation discussion from the Ricean fading experiments in the main paper.
3. Evaluation: Performance under no distribution change
I appreciate that the authors also evaluate on a non-domain-shifted dataset in Figure 10. Can the authors clarify why performance drops when there is no distribution change?
Specifically, it appears that the adapter layers' parameters are initialized such that they produce an identity mapping (page 18), so I'm surprised that this nonetheless degrades performance.
4. SNR=14-20 dB
Can the authors comment on whether an SNR of 14–20 dB (which to me appears really large) is a reasonable setting? Did the authors also evaluate SNR vs. error rates for the approach and baselines? I wonder if the results shown here apply only in high-SNR regimes.
Clarity, Quality, Novelty And Reproducibility
Clarity: Good. It was generally easy reading the paper, thanks to really crisp text and a comprehensive background section. The minor issue I found is that some patterns in the results are not discussed (see concerns 2, 3). The only nitpick I have is the figures (esp. Figures 4-6), where the legends are highly illegible.
Quality: Good. While there are minor discrepancies in the approach (e.g., performance slightly deteriorates when there is no distribution change, and it does not translate well to certain distribution changes), I think they can be overlooked in light of the remaining contributions.
Novelty: Very good. The authors tackle a very well motivated problem (see strength 2) and propose an insightful approach to tackle it (see strength 3).
Reproducibility: Very good. The main paper (esp. the large appendix) appears to contain many details of the approach. Additionally, the code is provided as well. I'm not sure if the authors plan to release the channels from the mmWave FPGA testbed. |
ICLR | Title
Few-Shot Domain Adaptation For End-to-End Communication
Abstract
The problem of end-to-end learning of a communication system using an autoencoder – consisting of an encoder, channel, and decoder modeled using neural networks – has recently been shown to be an effective approach. A challenge faced in the practical adoption of this learning approach is that under changing channel conditions (e.g. a wireless link), it requires frequent retraining of the autoencoder in order to maintain a low decoding error rate. Since retraining is both time-consuming and requires a large number of samples, it becomes impractical when the channel distribution is changing quickly. We propose to address this problem using a fast and sample-efficient (few-shot) domain adaptation method that does not change the encoder and decoder networks. Different from conventional training-time unsupervised or semi-supervised domain adaptation, here we have a trained autoencoder from a source distribution that we want to adapt (at test time) to a target distribution using only a small labeled dataset, and no unlabeled data. We focus on a generative channel model based on the Gaussian mixture density network (MDN), and propose a regularized, parameter-efficient adaptation of the MDN using a set of affine transformations. The learned affine transformations are then used to design an optimal transformation at the decoder input to compensate for the distribution shift, and effectively present to the decoder inputs close to the source distribution. Experiments on many simulated distribution changes common to the wireless setting, and a real mmWave FPGA testbed demonstrate the effectiveness of our method at adaptation using very few target domain samples 1.
1 INTRODUCTION
End-to-end (e2e) learning of a communication system using an autoencoder has been recently shown to be a promising approach for designing the next generation of wireless networks (O’Shea & Hoydis, 2017; Dörner et al., 2018; Aoudia & Hoydis, 2019; O’Shea et al., 2019; Ye et al., 2018; Wang et al., 2017). This new paradigm is a viable alternative for optimizing communication in diverse applications, hardware, and environments (Hoydis et al., 2021). It is particularly promising for dense deployments of low-cost transceivers, where there is interference between the devices and hardware imperfections that are difficult to model analytically. The key idea of e2e learning for a communication system is to use an autoencoder architecture to model and learn the transmitter and receiver jointly using neural networks in order to minimize the e2e symbol error rate (SER).
The channel (i.e., propagation medium and transceiver imperfections) can be represented as a stochastic transfer function that transforms its input z ∈ R^d to an output x ∈ R^d. It can be regarded as a black box that is typically non-linear and non-differentiable due to hardware imperfections (e.g., quantization and amplifiers). Since autoencoders are trained using stochastic gradient descent (SGD)-based optimization (O'Shea & Hoydis, 2017), it is challenging to work with a black-box channel that is not differentiable. One approach to address this problem is to use a known mathematical model of the channel (e.g., additive Gaussian noise), which would enable the computation of gradients with respect to the autoencoder parameters via backpropagation. However, such standard channel models do not capture well the realistic channel effects, as shown in Aoudia & Hoydis (2018). Alternatively, recent works have proposed to learn the channel using deep generative models that approximate p(x | z), the conditional probability density of the channel, using Generative Adversarial Networks (GANs) (O'Shea et al., 2019; Ye et al., 2018), Mixture Density Networks (MDNs) (García Martí et al., 2020), and conditional Variational Autoencoders (VAEs) (Xia et al., 2020). The use of a differentiable generative model of the channel enables SGD-based training of the autoencoder, while also capturing realistic channel effects better than standard models.

1Code for our work: https://github.com/jayaram-r/domain-adaptation-autoencoder
Although this e2e optimization with a generative channel model learned from data can improve the physical-layer design for communication systems, in reality, channels often change, requiring collection of a large number of samples and frequent retraining of the channel model and autoencoder. For this reason, adapting the generative channel model and the autoencoder as often as possible, using only a small number of samples, is required for good communication performance. Prior works have (to the best of our knowledge) not addressed the adaptation problem for autoencoder-based e2e learning, which is crucial for real-time deployment of such a system under frequently-changing channel conditions. In this paper, we study the problem of domain adaptation (DA) of autoencoders using an MDN as the channel model. In contrast to conventional DA, where the target domain has a large unlabeled dataset and sometimes also a small labeled dataset (semi-supervised DA) (Ben-David et al., 2006), here we consider a few-shot DA setting where the target domain has only a small labeled dataset, and no unlabeled data. This setting applies to our problem since we only get to collect a small number of labeled samples at a time from the changing target domain (here the channel) 2.
Towards addressing this important practical problem, we make the following contributions:
• We propose a parameter- and sample-efficient method for adapting a generative MDN (used for modeling the channel) based on the properties of Gaussian mixtures (§ 3.1 and § 3.2). • Based on the MDN adaptation, we propose an optimal input-transformation method at the decoder that compensates for changes in the channel distribution, and decreases or maintains the error rate of the autoencoder without any modification to the encoder and decoder networks (§ 3.3). • Experiments on a mmWave FPGA platform and a number of simulated distribution changes show strong performance improvements for our method. For instance, in the FPGA experiment, our method improves the SER by 69% with only 10 samples per class from the target distribution (§ 4).
Related Work. Recent approaches for DA such as DANN (Ganin et al., 2016), based on adversarial learning of a shared representation between the source and target domains (Ganin & Lempitsky, 2015; Ganin et al., 2016; Long et al., 2018; Saito et al., 2018; Zhao et al., 2019; Johansson et al., 2019), have achieved much success on computer vision and natural language processing. Their high-level idea is to adversarially learn a shared feature representation for which inputs from the source and target distributions are nearly indistinguishable to a domain discriminator DNN, such that a label predictor DNN using this representation and trained using labeled data from only the source domain also generalizes well to the target domain. Adversarial DA methods are not suitable for our problem, which requires fast and frequent test-time DA, because of their high computational and sample complexity and the imbalance in the number of source and target domain samples.
Related frameworks such as transfer learning (Long et al., 2015; 2016), model-agnostic metalearning (Finn et al., 2017), domain-adaptive few-shot learning (Zhao et al., 2021; Sun et al., 2019), and supervised DA (Motiian et al., 2017a;b) also deal with the problem of adaptation using a small number of samples. Most of them are not applicable to our problem because they primarily address novel classes (with potentially different distributions), and knowledge transfer from existing to novel tasks. Motiian et al. (2017a) is closely related since they also deal with a target domain that only has a small labeled dataset and has the same label space. The key difference is that Motiian et al. (2017a) address the training-time few-shot DA problem, while we focus on test-time few-shot DA. Specifically, their adversarial DA method requires both the source and target domain datasets at training time, and can be computationally expensive to retrain for every new batch of target domain data (a key motivation for this work is to avoid frequent retraining).
2In our problem, labels correspond to the transmitted messages and are essentially obtained for free (see § 3).
2 PRIMER ON AUTOENCODER-BASED END-TO-END COMMUNICATION
Notations. We denote vectors and matrices with boldface symbols. We define the indicator function 1(c) that takes value 1 (0) when the condition c is true (false). For any integer n ≥ 1, we define [n] = {1, · · · , n}. We denote the one-hot-coded vector with 1 at index i and the rest zeros by 1i. The probability density of a multivariate Gaussian with mean µ and covariance matrix Σ is denoted by N (x |µ,Σ). We use the superscripts s and t to denote quantities corresponding to the source and target domain respectively. Table 2 in the Appendix provides a quick reference for the notations.
Following (O'Shea & Hoydis, 2017; Dörner et al., 2018), consider the single-input, single-output (SISO) communication system shown in Fig. 1, consisting of a transmitter (or encoder), channel, and receiver (or decoder). The encoder Eθe(·) is a multi-layer feedforward neural network (NN) with parameters θe that maps an input message y ∈ Y := {1, · · · ,m} into an encoded symbol z ∈ R^d. The input
message y is mapped into a one-hot-coded vector 1y prior to being processed by the encoder 3. The message y is equivalent to a class label in machine learning terms, and the encoded symbol z = Eθe(1y) is like a representative vector for the class y. We note that the dimension of the encoding d is small (less than 10), and d = 2 is typically used to coincide with traditional modulation techniques (O'Shea & Hoydis, 2017; Goldsmith, 2005). The set of distinct encoded symbols Z = {Eθe(11), · · · ,Eθe(1m)} is referred to as the constellation of the autoencoder. The symbol z is transmitted (via the custom modulation learned by the encoder) over a communication channel, represented by an unknown conditional probability density p(x | z), and is received at the output of the channel as a noisy, distorted symbol x ∈ R^d. The decoder Dθd(·) is also a multi-layer, feed-forward NN with parameters θd that predicts the class-posterior probabilities over the m messages based on the distorted channel output x. The decoder is essentially a classifier whose input-output mapping is defined by Dθd(x) := [Pθd(1 |x), · · · , Pθd(m |x)], where Pθd(y |x) is the predicted probability of class y given x. The class with the highest predicted probability is the decoded message ŷ(x) = argmaxy∈Y Pθd(y |x). As in standard classification, the performance metric of the autoencoder is the symbol error rate (SER), defined as E(x,y)[1(ŷ(x) ̸= y)].

Generative Channel Model. In order to learn the encoder and decoder networks using SGD-based optimization, it is necessary to have a differentiable backward path from the decoder to the encoder through the channel. We address this by learning a parametric generative model of the channel Pθc(x | z) (with parameters θc) that closely approximates the true channel conditional density p(x | z). There exists a stochastic data generation or sampling function x = hθc(z,u) corresponding to the generative model, where u captures the random aspects of the channel (e.g., noise and phase offsets; details in Appendix E). In this work, we model the conditional density of the channel using a set of m Gaussian mixtures, one per input message (or class) y ∈ Y:
$$P_{\theta_c}(x \,|\, z) = \sum_{i=1}^{k} \pi_i(z)\, N\big(x \,|\, \mu_i(z), \Sigma_i(z)\big), \quad z \in \{E_{\theta_e}(\mathbf{1}_1), \cdots, E_{\theta_e}(\mathbf{1}_m)\}. \qquad (1)$$
Here, k is the number of components, $\mu_i(z) \in \mathbb{R}^d$ is the mean vector, $\Sigma_i(z) \in \mathbb{R}^{d \times d}$ is the (symmetric, positive-definite) covariance matrix, and $\pi_i(z) \in [0, 1]$ is the prior probability of component i. It is convenient to express the component prior probability in terms of the softmax function as $\pi_i(z) = e^{\alpha_i(z)} / \sum_{j=1}^{k} e^{\alpha_j(z)},\; \forall i \in [k]$, where $\alpha_i(z) \in \mathbb{R}$ are the component prior logits. We define the parameter vector of component i as $\phi_i(z)^T = [\alpha_i(z), \mu_i(z)^T, \mathrm{vec}(\Sigma_i(z))^T]$, where vec(·) is the vector representation of the unique entries of the covariance matrix. We also define the combined parameter vector from all components by $\phi(z)^T = [\phi_1(z)^T, \cdots, \phi_k(z)^T]$. An MDN can model complex conditional distributions by combining a feed-forward network with a parametric mixture density (Bishop, 1994; 2007). We use the MDN to predict the parameters of the
3The encoder has a normalization layer that constrains the average power of the symbols (see Appendix D).
Gaussian mixtures, $\phi(z)$, as a function of its input symbol z, i.e., $\phi(z) = M_{\theta_c}(z)$, where $\theta_c$ are the parameters of the MDN network. The MDN output with all the mixture parameters has dimension $p = k\,(d(d+1)/2 + d + 1)$. While there are competing methods for generative modeling of the channel, such as conditional GANs (Ye et al., 2018) and VAEs (Xia et al., 2020), we choose the Gaussian MDN based on i) the strong approximation properties of Gaussian mixtures (Kostantinos, 2000) for learning probability distributions; and ii) the analytical and computational tractability it lends to our domain adaptation formulation. The effectiveness of a Gaussian MDN for wireless channel modeling has also been shown in García Martí et al. (2020).
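A minimal sketch of such an MDN in TensorFlow is shown below; the architecture is simplified relative to Table 3, and the output here is a single dense layer rather than the concatenation of three:

```python
import tensorflow as tf

k, d, n_h = 5, 2, 100
p = k * (d * (d + 1) // 2 + d + 1)        # p = 30 mixture parameters for k = 5, d = 2
mdn = tf.keras.Sequential([
    tf.keras.layers.Dense(n_h, activation="relu"),
    tf.keras.layers.Dense(p),             # [prior logits | means | covariance entries]
])
phi = mdn(tf.constant([[0.5, -0.5]]))     # phi(z) for one input symbol z
print(phi.shape)                          # (1, 30)
```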
The input-output function of the autoencoder is given by fθ(1y) = Dθd(hθc(Eθe(1y),u)), and the goal of autoencoder learning is to minimize the symbol error rate. Since the sampling function hθc of a Gaussian mixture channel is not directly differentiable, we apply the Gumbel-Softmax reparametrization (Jang et al., 2017) to obtain a differentiable sampling function (details in Appendix E). More background, including the training algorithm of the autoencoder, is in Appendix D.
3 PROPOSED METHOD
Problem Setup. Let x, y, z denote a realization of the channel output, message (class label), and channel input (symbol) distributed according to the joint distribution p(x, y, z). We first establish the following result about the joint distribution. Proposition 1. The joint distributions p(x, y, z) and p(x, y) can be expressed in the following form:
$$p(x, y, z) = p\big(x \,|\, E_{\theta_e}(\mathbf{1}_y)\big)\, p(y)\, \delta\big(z - E_{\theta_e}(\mathbf{1}_y)\big), \quad \forall x, z \in \mathbb{R}^d,\; y \in \mathcal{Y}$$
$$p(x, y) = p\big(x \,|\, E_{\theta_e}(\mathbf{1}_y)\big)\, p(y), \quad \forall x \in \mathbb{R}^d,\; y \in \mathcal{Y}, \qquad (2)$$
where δ(·) is the Dirac delta (or Impulse) function, and we define p(x | y) := p(x |Eθe(1y)) as the conditional distribution of x given the class y.
The proof is simple and given in Appendix A. Let $\mathcal{D}^s = \{(x_i^s, y_i^s, z_i^s),\; i = 1, \cdots, N^s\}$ be a large dataset from a source distribution $p^s(x, y, z) = p^s(x \,|\, y)\, p^s(y)\, \delta(z - E_{\theta_e}(\mathbf{1}_y))$. The data collection involves sending multiple copies of each of the m messages through the channel (e.g., over the air from the transmitter to receiver) by using a standard modulation technique (encoding) for z (e.g., M-QAM (Goldsmith, 2005)), and observing the corresponding channel output x. Different from conventional machine learning, where class labeling is expensive, in this setting the class label is simply the message transmitted, which is obtained for free while collecting the data. The MDN channel model and autoencoder are trained on $\mathcal{D}^s$ according to Algorithm 1 (see Appendix D.3).
Due to changes in the channel condition and environmental factors (e.g., moving obstacles), suppose the data distribution changes to $p^t(x, y, z) = p^t(x \,|\, y)\, p^t(y)\, \delta(z - E_{\theta_e}(\mathbf{1}_y))$. While the distribution change may cause a drop in the autoencoder's performance, we assume that it is gradual enough that domain adaptation is possible (David et al., 2010) (by domain, here we mean the state of the communication channel during the time period when the MDN and autoencoder are trained). As discussed in § 1, the main challenge in this setting is to collect a sufficiently large dataset to retrain the MDN and autoencoder under the distribution shift. Therefore, suppose we collect a small dataset from the target distribution $\mathcal{D}^t = \{(x_i^t, y_i^t, z_i^t),\; i = 1, \cdots, N^t\}$, where $N^t \ll N^s$. Our goal is to design a few-shot domain adaptation for the MDN and autoencoder in order to maintain or improve the symbol error rate.
Distribution Change. Referring to the joint distribution in Eq. (2), the class prior p(y) is the prior probability of a message y transmitted through the system. In this work, we make the reasonable practical assumption that this prior probability does not change, i.e., $p^t(y) \approx p^s(y),\; \forall y \in \mathcal{Y}$. However, the class-conditional distribution of the channel output $p(x \,|\, y)$ changes, and therefore the class-posterior distribution $p(y \,|\, x)$ also changes. This is commonly referred to as the conditional shift assumption (Zhang et al., 2013) (different from covariate shift (Sugiyama et al., 2007)).
Overview of the Proposed Method. Recall from Eqn. (1) that we model the channel distribution $p(x \,|\, z)$ as a Gaussian mixture $P_{\theta_c}(x \,|\, z)$, whose parameters are predicted by the MDN, i.e., $\phi(z) = M_{\theta_c}(z)$. From Proposition 1, the m class-conditional distributions of x are given by $p(x \,|\, y) = p(x \,|\, E_{\theta_e}(\mathbf{1}_y)),\; \forall y \in \mathcal{Y}$. Therefore, in our setting, adaptation of the class-conditional distributions is equivalent to adaptation of the m Gaussian mixtures in Eqn. (1). Adaptation of the Gaussian mixtures can be directly accomplished by adapting the MDN (i.e., the parameters $\theta_c$) using the small target-domain dataset $\mathcal{D}^t$. Our proposed adaptation of the autoencoder consists of two key steps:
1. A light-weight, parameter-efficient adaptation of the MDN using the small target dataset Dt. 2. An efficient feature transformation at the input of the decoder (based on the MDN adaptation) that
compensates for changes in the class-conditional distributions.
Our method requires adaptation of only the MDN (channel model), while the encoder and decoder networks (θe and θd) remain unchanged, making it amenable to fast and frequent adaptation that requires collecting only a small target dataset each time (few-shot setting).
3.1 MDN CHANNEL MODEL ADAPTATION
Our goal is to adapt the m Gaussian mixtures in Eqn. (1) that model the source class-conditional distributions. Suppose the m adapted Gaussian mixtures corresponding to the (unknown) target class-conditional distributions are
$$P_{\hat{\theta}_c}(x \,|\, z) = \sum_{i=1}^{k} \hat{\pi}_i(z)\, N\big(x \,|\, \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big), \quad z \in \{E_{\theta_e}(\mathbf{1}_1), \cdots, E_{\theta_e}(\mathbf{1}_m)\}, \qquad (3)$$
where θ̂c are parameters of the adapted (target) MDN, and the component means, covariances, and prior probabilities with a hat notation are defined as in § 2. The adapted MDN predicts all the parameters of the target Gaussian mixture as ϕ̂(z) = Mθ̂c(z) as shown in Fig. 2, where ϕ̂(z) is defined in the same way as ϕ(z). Instead of naively fine-tuning all the MDN parameters θc, or even just the final fully-connected layer 4, we propose a parameter-efficient adaptation of the MDN based on the affine-transformation property of the Gaussian distribution, i.e., one can transform between any two multivariate Gaussians through a general affine transformation. First, we state some basic assumptions required to make the proposed adaptation tractable.
A1) The source and target Gaussian mixtures per class have the same number of components k. A2) The source and target Gaussian mixtures (from each class) have a one-to-one correspondence
between their components.
Assumption A1 is made in order to not have to change the architecture of the MDN during adaptation due to adding or removing of components. Both assumptions A1 and A2 5 make it tractable to find the closed-form expression for a simplified KL-divergence between the source and target Gaussian mixtures per class (see Proposition 2).
Parameter Transformations. As shown in Appendix B.2, the transformations between the source and target Gaussian mixture parameters, for any symbol z ∈ Z and component i ∈ [k], are given by
$$\hat{\mu}_i(z) = A_i\, \mu_i(z) + b_i, \quad \hat{\Sigma}_i(z) = C_i\, \Sigma_i(z)\, C_i^T, \quad \text{and} \quad \hat{\alpha}_i(z) = \beta_i\, \alpha_i(z) + \gamma_i. \qquad (4)$$
The affine transformation parameters $A_i \in \mathbb{R}^{d \times d}$ and $b_i \in \mathbb{R}^d$ transform the means, $C_i \in \mathbb{R}^{d \times d}$ transforms the covariance matrix, and $\beta_i, \gamma_i \in \mathbb{R}$ transform the prior logits. The vector of all adaptation parameters to be optimized is defined by $\psi^T = [\psi_1^T, \cdots, \psi_k^T]$, where $\psi_i$ contains all the affine-transformation parameters from component i. The number of adaptation parameters is $k\,(2d^2 + d + 2)$. This is typically much smaller than the number of MDN parameters (weights and biases from all layers), even if we consider only the final fully-connected layer for fine-tuning (see Table 1). In Fig. 2, the adaptation layer mapping $\phi(z)$ to $\hat{\phi}(z)$ basically implements the parameter transformations defined in Eqn. (4). We observe that the affine-transformation parameters do not depend on the symbol z (or the class), which is a constraint we impose in order to keep the number of adaptation parameters small. This is also consistent with the MDN parameters $\theta_c$ being independent of the symbol z. Allowing the affine transformations to depend on z would provide more flexibility, but would at the same time require more target domain data for successful adaptation.
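The following NumPy sketch illustrates the adaptation layer implied by Eq. (4) for a single symbol z; the tuple-based representation of ψ is an illustrative choice, not our actual parameterization:

```python
import numpy as np

def adapt_mixture(psi, alphas, mus, Sigmas):
    """Apply the per-component affine transformations of Eq. (4) for one symbol z.
    psi is a list of k tuples (A_i, b_i, C_i, beta_i, gamma_i)."""
    adapted = [(beta * a + gamma, A @ mu + b, C @ S @ C.T)
               for (A, b, C, beta, gamma), a, mu, S in zip(psi, alphas, mus, Sigmas)]
    alpha_h, mu_h, Sigma_h = zip(*adapted)
    pri_h = np.exp(alpha_h) / np.sum(np.exp(alpha_h))   # softmax of the adapted logits
    return pri_h, list(mu_h), list(Sigma_h)

d, k = 2, 5
# identity initialization used in Appendix C.1: the adapted mixture equals the source
psi0 = [(np.eye(d), np.zeros(d), np.eye(d), 1.0, 0.0) for _ in range(k)]
```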
4We show in our experiments that both the fine-tuning approaches fail to adapt well. 5We perform ablation experiments (Appendix C.4) that evaluate our method under random Gaussian mixtures
with mismatched components. We find that our method is robust even when these assumptions are violated.
Proposition 2. Given m Gaussian mixtures from the source domain and m Gaussian mixtures from the target domain (one each per class), which satisfy Assumptions A1 and A2, the KL-divergence between $P_{\theta_c}(x, K \,|\, z)$ and $P_{\hat{\theta}_c}(x, K \,|\, z)$ can be computed in closed form, and is given by:
$$D_\psi(P_{\theta_c}, P_{\hat{\theta}_c}) = \mathbb{E}_{P_{\theta_c}}\left[\log \frac{P_{\theta_c}(x, K \,|\, z)}{P_{\hat{\theta}_c}(x, K \,|\, z)}\right] = \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z) \log\frac{\pi_i(z)}{\hat{\pi}_i(z)} \;+\; \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z)\, D_{\mathrm{KL}}\Big(N\big(\cdot \,|\, \mu_i(z), \Sigma_i(z)\big),\, N\big(\cdot \,|\, \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big)\Big), \qquad (5)$$
where K is the mixture component random variable. The first term is the KL-divergence between the component prior probabilities, which simplifies into a function of the parameters $[\beta_1, \gamma_1, \cdots, \beta_k, \gamma_k]$. The second term involves the KL-divergence between two multivariate Gaussians (a standard result), which also simplifies into a function of $\psi$.
The proof and the final expression for the KL-divergence as a function of ψ are given in Appendix A.1. The symbol priors $\{p(z),\, z \in \mathcal{Z}\}$ are estimated using the class proportions from the source dataset $\mathcal{D}^s$. We note that this result is different from the KL-divergence between two arbitrary Gaussian mixtures, for which there is no closed-form expression (Hershey & Olsen, 2007).
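For illustration, the closed-form divergence of Eq. (5) can be computed as in the sketch below, where `src[z]` and `tgt[z]` are hypothetical containers holding the matched per-symbol mixture parameters:

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL divergence between two multivariate Gaussians."""
    diff, S1i = mu1 - mu0, np.linalg.inv(S1)
    return 0.5 * (np.trace(S1i @ S0) + diff @ S1i @ diff - len(mu0)
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def kl_matched_mixtures(p_z, src, tgt):
    """Eq. (5): src[z] = (pi, mus, Sigmas) and tgt[z] = (pi_hat, mus_hat, Sigmas_hat)
    are component-matched per-symbol mixtures (Assumptions A1 and A2)."""
    total = 0.0
    for z, pz in p_z.items():
        pi, mus, Ss = src[z]
        pi_h, mus_h, Ss_h = tgt[z]
        total += pz * np.sum(pi * np.log(pi / pi_h))                  # prior term
        total += pz * sum(pi[i] * kl_gauss(mus[i], Ss[i], mus_h[i], Ss_h[i])
                          for i in range(len(pi)))                    # Gaussian term
    return total
```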
3.2 REGULARIZED ADAPTATION OBJECTIVE
From the above analysis, we can formulate the MDN adaptation as the equivalent problem of finding the optimal set of affine transformations (one per-component) mapping the source to the target Gaussian mixtures. To reduce the possibility of the adaptation finding bad solutions due to the small-sample setting, we introduce a regularization term based on the KL-divergence (defined earlier), which constrains the distribution shift produced by the affine transformations. We consider two scenarios for adaptation: 1)
Generative adaptation of the MDN in isolation and 2) Discriminative adaptation of the MDN as part of the autoencoder. In the first case, the goal of adaptation is to find a good generative model for the target channel distribution, while in the second case the goal is to improve the classification accuracy of the autoencoder on the target distribution. We focus on the discriminative adaptation here, and present the very similar generative adaptation in Appendix B.3.
Since the goal of adaptation is to improve the decoder's accuracy in recovering the transmitted symbol z from the channel output x, we use the (negative) symbol posterior log-likelihood (PLL) as the first, data-dependent term of the adaptation objective. The second term is the simplified KL-divergence between the source and target Gaussian mixtures, which does not depend on the data.
$$J_{\mathrm{PLL}}(\psi; \lambda) = -\frac{1}{N^t} \sum_{n=1}^{N^t} \log P_{\hat{\theta}_c}(z_n^t \,|\, x_n^t) \;+\; \lambda\, D_\psi(P_{\theta_c}, P_{\hat{\theta}_c}). \qquad (6)$$
The symbol posterior $P_{\hat{\theta}_c}(z \,|\, x)$ is computed from the conditional $P_{\hat{\theta}_c}(x \,|\, z)$ and the symbol priors $\{p(z),\, z \in \mathcal{Z}\}$ using Bayes' rule. We observe that the adaptation objective is a smooth and non-convex function of ψ. Also, computation of the objective and its gradient (w.r.t. ψ) are inexpensive operations since i) they do not require forward and back-propagation through the layers of the MDN, and ii) both $N^t$ and the dimension of ψ are small. Therefore, we use the BFGS Quasi-Newton method (Nocedal & Wright, 2006) for minimization, instead of SGD-based large-scale optimization (e.g., Adam). The regularization constant λ is a hyper-parameter of the proposed method, and we propose a validation metric (Appendix B.4) to set its value automatically.
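The sketch below illustrates this optimization on a toy problem with k = 1 Gaussian per symbol and a two-symbol constellation; it is a simplified stand-in for our implementation, which uses the full mixtures and the TensorFlow Probability BFGS optimizer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(0)
d = 2
src = {0: (np.array([1.0, 1.0]), 0.1 * np.eye(d)),
       1: (np.array([-1.0, -1.0]), 0.1 * np.eye(d))}   # one Gaussian per symbol (k = 1)

# small labeled target dataset: a shifted and scaled version of the source channel
X = np.vstack([rng.multivariate_normal(1.3 * mu + 0.2, 0.15 * np.eye(d), size=10)
               for mu, _ in src.values()])
Y = np.repeat(list(src.keys()), 10)

def kl_gauss(mu0, S0, mu1, S1):
    diff, S1i = mu1 - mu0, np.linalg.inv(S1)
    return 0.5 * (np.trace(S1i @ S0) + diff @ S1i @ diff - len(mu0)
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def j_pll(psi, lam=0.1):
    A, b, C = psi[:4].reshape(2, 2), psi[4:6], psi[6:].reshape(2, 2)
    tgt = {z: (A @ mu + b, C @ S @ C.T + 1e-8 * np.eye(d)) for z, (mu, S) in src.items()}
    logp = np.stack([mvn.logpdf(X, *tgt[z]) for z in sorted(tgt)], axis=1)  # log p(x|z)
    log_post = logp - logsumexp(logp, axis=1, keepdims=True)                # uniform p(z)
    kl = np.mean([kl_gauss(mu, S, *tgt[z]) for z, (mu, S) in src.items()])
    return -np.mean(log_post[np.arange(len(X)), Y]) + lam * kl              # Eq. (6)

psi0 = np.concatenate([np.eye(2).ravel(), np.zeros(2), np.eye(2).ravel()])  # identity init
res = minimize(j_pll, psi0, method="BFGS")  # gradients via finite differences
print(res.fun, res.nit)
```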
3.3 DECODER ADAPTATION USING FEATURE TRANSFORMATIONS
We propose a computationally-efficient feature transformation $g^{-1} : \mathbb{R}^d \mapsto \mathbb{R}^d$ at the decoder such that the transformed inputs $\hat{x}^s = g^{-1}(x^t)$ are closely aligned to the source distribution on which the decoder was trained (see Fig. 3). This is based on the optimal affine transformations ψ of the MDN found by minimizing Eqn. (6). This method does not require any change to the trained encoder and decoder networks, making it well suited to the few-shot DA setting. Consider a test input $x^t$ at the decoder from the target-domain marginal distribution $p^t(x) = \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \hat{\pi}_i(z)\, N(x \,|\, \hat{\mu}_i(z), \hat{\Sigma}_i(z))$. As shown in Appendix B.2, conditioned on a given symbol $z \in \mathcal{Z}$ and component $i \in [k]$, the affine transformation that maps from the target Gaussian distribution $x^t \,|\, z, i \sim N(x \,|\, \hat{\mu}_i(z), \hat{\Sigma}_i(z))$ to the source Gaussian distribution $x^s \,|\, z, i \sim N(x \,|\, \mu_i(z), \Sigma_i(z))$ is given by
$$\hat{x}^s = g_{zi}^{-1}(x^t) := C_i^{-1}\big(x^t - A_i\, \mu_i(z) - b_i\big) + \mu_i(z). \qquad (7)$$
However, this transformation requires knowledge of both the transmitted symbol z and the mixture component i, which are not observed at the decoder (the decoder only observes the channel output $x^t$). We address this by taking the expected affine transformation from target to source, where the expectation is with respect to the joint posterior over the symbol z and component i, given the channel output $x^t$. This posterior distribution based on the target Gaussian mixture is:
$$P_{\hat{\theta}_c}(z, i \,|\, x^t) = \frac{p(z)\, \hat{\pi}_i(z)\, N\big(x^t \,|\, \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big)}{\sum_{z'} \sum_{j} p(z')\, \hat{\pi}_j(z')\, N\big(x^t \,|\, \hat{\mu}_j(z'), \hat{\Sigma}_j(z')\big)}.$$
The expected inverse-affine feature transformation at the decoder is then defined as
$$g^{-1}(x^t) := \mathbb{E}_{P_{\hat{\theta}_c}(z, i \,|\, x)}\big[g_{zi}^{-1}(x^t) \,\big|\, x^t\big] = \sum_{z \in \mathcal{Z}} \sum_{i \in [k]} P_{\hat{\theta}_c}(z, i \,|\, x^t)\, g_{zi}^{-1}(x^t). \qquad (8)$$
We show in Appendix A.2 that this conditional expectation is the optimal transformation from the standpoint of mean-squared-error estimation (Kay, 1993). The adapted decoder based on this feature transformation is illustrated in Fig. 3 and defined as $\hat{D}_{\theta_d}(x^t; \psi) := D_{\theta_d}(g^{-1}(x^t))$. For small to moderate numbers of symbols m and components k, this transformation is computationally efficient and easy to implement at the receiver of a communication system. A discussion of the computational complexity of the proposed method is given in Appendix B.5.
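A sketch of this expected inverse-affine transformation, again specialized to k = 1 component per symbol so the component index disappears, is given below (all parameter values are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def g_inv(x_t, src, A, b, C, p_z):
    """Expected inverse-affine transform of Eq. (8) for k = 1 component per symbol."""
    C_inv = np.linalg.inv(C)             # invert once, prior to test time
    # posterior over symbols under the adapted (target) Gaussians
    log_post = np.array([np.log(p_z[z]) + mvn.logpdf(x_t, A @ mu + b, C @ S @ C.T)
                         for z, (mu, S) in src.items()])
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    # per-symbol inverse affine maps (Eq. 7), averaged under the posterior
    maps = np.stack([C_inv @ (x_t - A @ mu - b) + mu for z, (mu, _) in src.items()])
    return post @ maps

d = 2
src = {0: (np.array([1.0, 1.0]), 0.1 * np.eye(d)),
       1: (np.array([-1.0, -1.0]), 0.1 * np.eye(d))}
p_z = {0: 0.5, 1: 0.5}
A, b, C = 1.2 * np.eye(d), np.array([0.1, 0.0]), 1.1 * np.eye(d)   # adapted parameters
print(g_inv(np.array([1.4, 1.3]), src, A, b, C, p_z))
```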
4 EXPERIMENTS
We perform experiments to evaluate the proposed adaptation method for the MDN and autoencoder. Our main findings are summarized as follows: 1) the proposed method adapts well to changes in the channel distribution using only a few samples per class, often leading to strong improvement over the baselines; 2) our method performs well under multiple simulated distribution changes, and notably on our mmWave FPGA experiments; 3) Extensive ablation studies show that the proposed KL-divergence based regularization and the validation metric for setting λ are effective.
Setup. We implemented the MDN, autoencoder networks, and the adaptation methods in Python using TensorFlow (Abadi et al., 2015) and TensorFlow Probability. We used the following setting in our experiments. The size of the message set m is fixed to 16, corresponding to 4 bits. The dimension of the encoding (output of the encoder) d is set to 2, and the number of mixture components k is set to 5. More details on the experimental setup, neural network architecture, and the hyper-parameters are given in Appendix C.1.
Baseline Methods. We compare the performance of our method with the following baselines: 1) No adaptation, which is the MDN and autoencoder from the source domain without adaptation. 2) Retrained MDN and autoencoder, which is like an “oracle method” that has access to a large dataset from the target domain. 3) Finetune - where the method optimizes all the MDN parameters for 200 epochs and optimizes the decoder for 20 epochs 6. 4) Finetune last - which follows the same approach as “Finetune”, but only optimizes the last layer of MDN (all the layers of the decoder are however optimized). We note that traditional domain adaptation methods are not suitable for this problem because it requires adaptation of both the MDN (generative model) and the decoder.
Datasets. The simulated channel variations are based on models commonly used for wireless communication, specifically: i) Additive white Gaussian noise (AWGN), ii) Ricean fading, and iii)
6We found no significant gains with a larger number of epochs in this case.
Uniform or flat fading (Goldsmith, 2005). Details on these channel models and the calculation of their signal-to-noise ratio (SNR) are provided in Appendix F. We also created simulated distribution changes using random, class-conditional Gaussian mixtures for both the source and target channels (we also include random phase shifts). The parameters of the source and target Gaussian mixtures are generated in a random but controlled manner, as detailed in Appendix C.3. We also evaluate the performance of the adaptation methods on real over-the-air wireless experiments. We use a recent high-performance mmWave testbed (Lacruz et al., 2021), featuring a high-end FPGA board with 2 GHz bandwidth per channel and 60 GHz SIVERS antennas (SIVERSIMA, 2020). We introduce distribution changes via in-phase and quadrature-phase (IQ) imbalance-based distortions to the symbol constellation, and gradually increase the level of imbalance in the system 7. More details on the FPGA experimental setup are given in Appendix C.2.
Evaluation Protocol. Due to the space limit, we provide details of the evaluation protocol, such as the train, adaptation, and test sample sizes and the number of random trials used to get averaged performance, in Appendix C.1. We report the symbol error rate (SER) on a large held-out test dataset (from the target domain) as a function of the number of target-domain samples per class. The only hyper-parameter of our method, λ, is set automatically using the validation metric proposed in Appendix B.4.
4.1 AUTOENCODER ADAPTATION ON SIMULATED DISTRIBUTION CHANGES
The adaptation results under simulated distribution changes are given in Figs. 4 and 5, with the symbol error rates plotted as a function of the number of target samples per class. In Fig. 4, we consider standard channel distributions such as AWGN, Ricean fading, and Uniform fading. In Fig. 5, we consider random Gaussian mixtures for both the source and the target distributions. We observe that the proposed adaptation leads to a strong improvement in SER in all cases, except in the case of AWGN to Ricean fading (Fig. 4c). We provide some insights on the failure of our method in this case in Appendix C.5. Note that the methods “No adapt” and “Retrained autoenc” have the same SER for all target sample sizes (i.e., a horizontal line). We find both the finetuning baselines to have very similar SER in all cases, and there is not much improvement compared to no adaptation. This suggests that our approach of constraining the number of adaptation parameters and using the KL-divergence regularization is effective in the few-shot DA setting (see Table 1).
7IQ imbalance is a common issue in RF communication that introduces distortions to the final constellation.
4.2 AUTOENCODER ADAPTATION ON FPGA EXPERIMENTS
For this experiment, different levels of distribution change are introduced by varying the IQ imbalance over 20%, 25%, and 30% (higher IQ imbalance corresponds to a larger distribution change). From Fig. 6, we observe that the proposed method achieves a significant reduction in error rate compared to the (non-oracle) baselines. The relative improvement in SER over the baselines is more pronounced under higher IQ imbalance. For instance, at 30% IQ imbalance, our method achieves a relative SER improvement of around 69% over the fine-tuning baselines using only 10 samples per class.
4.3 ADDITIONAL EXPERIMENTS
We have performed a number of additional experiments including ablation studies, which are reported in Appendix C.4 through C.6. They include: 1) evaluating the proposed validation metric for automatically setting the hyper-parameter λ; 2) evaluating the importance of the KL-divergence regularization in the adaptation objective; 3) performance of our method when the source and target Gaussian mixtures have a mismatch in the components (addressing Assumptions A1 and A2); 4) performance of our method when there is no distribution shift; and 5) performance of the generative adaptation of the MDN channel. To summarize the observations, we found the validation metric to be effective at setting the value of λ, and that our method has good performance even when Assumptions A1 and A2 are violated, or when there is no distribution shift. The generative MDN adaptation leads to increased log-likelihoods with as few as 2 samples per class.
5 CONCLUSIONS
In this work, we explore one of the first approaches for domain adaptation of autoencoder-based e2e communication in the few-shot setting. We first propose a lightweight and parameter-efficient method for adapting a Gaussian MDN with a very small number of samples from the target distribution. Based on the MDN adaptation, we propose an optimal input transformation method at the decoder that attempts to closely align the target domain inputs to the source domain. We demonstrate the effectiveness of the proposed methods through extensive experiments on both simulated channels and a mmWave FPGA testbed. A discussion of limitations and future directions is given in Appendix B.6.
ACKNOWLEDGMENTS
Banerjee, Raghuram, and Zeng were supported in part through the following grants — US National Science Foundation’s CNS-2112562, CNS-2107060, CNS-2003129, CNS-1838733, and CNS-1647152, and the US Department of Commerce’s 70NANB21H043. Somesh Jha was partially supported by the DARPA GARD program under agreement number 885000. The authors from IMDEA Networks were sponsored by the Spanish Ministry of Economic Affairs and Digital Transformation under European Union NextGeneration-EU projects TSI-063000-2021-59 RISC-6G and TSI-063000-2021-63 MAP-6G, and by the Regional Government of Madrid and the European Union through the European Regional Development Fund (ERDF) project REACT-CONTACT-CM-23479.
Appendix
Table 2: Commonly used notations
Notation : Description
y ∈ Y := {1, · · · ,m} : Input message or class label. Usually m = 2^b, where b is the number of bits.
1y, y ∈ Y : One-hot-coded representation of a label (message) y, with 1 at position y and zeros elsewhere.
z ∈ Z ⊂ R^d with |Z| = m : Encoded representation or symbol vector corresponding to an input message.
x ∈ R^d : Channel output that is the feature vector to be classified by the decoder.
Eθe(1y) : Encoder NN with parameters θe mapping a one-hot-coded message to a symbol vector in R^d.
Dθd(x) = [Pθd(1 | x), · · · , Pθd(m | x)] : Decoder NN with parameters θd mapping the channel output into probabilities over the message set.
ŷ(x) = argmax_{y∈Y} Pθd(y | x) : Class (message) prediction of the decoder.
Pθc(x | z) : Conditional density (generative) model of the channel with parameters θc.
ϕ(z) = Mθc(z) : Mixture density network that predicts the parameters of a Gaussian mixture.
x = hθc(z, u) : Transfer or sampling function corresponding to the channel conditional density.
fθ(1y) = Dθd(hθc(Eθe(1y), u)) : Input-output mapping of the autoencoder with combined parameter vector θ^T = [θe^T, θc^T, θd^T].
ψ^T = [ψ1^T, · · · , ψk^T] : Affine transformation (adaptation) parameters per component used to adapt the MDN.
g_{zi} and g_{zi}^{-1}, i ∈ [k], z ∈ Z : Affine transformations between the components of the source and target Gaussian mixtures, and vice-versa.
DKL(p, q) : Kullback-Leibler divergence between the distributions p and q.
N(· | µ, Σ) : Multivariate Gaussian density with mean vector µ and covariance matrix Σ.
δ(x − x0) : Dirac delta or impulse function centered at x0.
Cat(p1, · · · , pk) : Categorical distribution with pi ≥ 0 and Σ_i pi = 1.
1(c) : Indicator function mapping a predicate c to 1 if true and 0 if false.
∥x∥p : ℓp norm of a vector x.
The appendices are organized as follows:
• Appendix A discusses the theoretical results from the main paper.
• Appendix B provides additional details on the proposed method, including:
– Discussion on class labels and labeled data in the communication setting (Appendix B.1).
– Feature and parameter transformation between multivariate Gaussians (Appendix B.2).
– Generative adaptation of the MDN channel (Appendix B.3).
– The validation metric used for setting the hyper-parameter λ (Appendix B.4).
– Computational complexity analysis of the proposed method (Appendix B.5).
– Limitations and future work (Appendix B.6).
• Appendix C provides additional details on the experiments and additional results, including ablation studies of the proposed method.
• Appendix D provides additional background on the following topics: 1) components of an end-to-end autoencoder-based communication system, 2) generative modeling using mixture density networks, 3) the training algorithm of the autoencoder, and 4) a primer on domain adaptation.
• Appendix E provides details on the MDN training and differentiable sampling using the Gumbel-softmax reparametrization.
• Appendix F provides details on the simulated channel distributions used in our experiments.
A THEORETICAL RESULTS
Proposition 1 (restatement). The joint distributions p(x, y, z) and p(x, y) can be expressed in the following form:
p(x, y, z) = p( x | Eθe(1y) ) p(y) δ( z − Eθe(1y) ), ∀x, z ∈ R^d, y ∈ Y,
p(x, y) = p( x | Eθe(1y) ) p(y), ∀x ∈ R^d, y ∈ Y, (9)
where δ(·) is the Dirac delta (or Impulse) function, and we define p(x | y) := p(x |Eθe(1y)) as the conditional distribution of x given the class y.
Proof. It follows from the dependence y → z → x defined by our generative model that
p(x, y, z) = p(y) p(z | y) p(x | z, y)
= p(y) δ( z − Eθe(1y) ) p( x | Eθe(1y), y )
= p(y) δ( z − Eθe(1y) ) p( x | Eθe(1y) ).
In the second step, the conditional p(z | y) reduces to the Dirac delta since the symbol z can only take one of the m values from the constellation Z = {Eθe(11), · · · ,Eθe(1m)} (for a fixed encoder mapping). The distribution p(x, y) in Eq. (9) is obtained from the third step by integrating p(x, y, z) over all z, and using the integration property of the Dirac delta.
A.1 KL-DIVERGENCE BETWEEN THE SOURCE AND TARGET GAUSSIAN MIXTURES
Proposition 2 (restatement). Given m Gaussian mixtures from the source domain and m Gaussian mixtures from the target domain (one each per class), which satisfy Assumptions A1 and A2, the KL-divergence between Pθc(x, K | z) and Pθ̂c(x, K | z) can be computed in closed form, and is given by:
Dψ(Pθc, Pθ̂c) = E_{Pθc}[ log ( Pθc(x, K | z) / Pθ̂c(x, K | z) ) ]
= Σ_{z∈Z} p(z) Σ_{i=1}^{k} πi(z) log ( πi(z) / π̂i(z) )
+ Σ_{z∈Z} p(z) Σ_{i=1}^{k} πi(z) DKL( N(· | µi(z), Σi(z)), N(· | µ̂i(z), Σ̂i(z)) ), (10)
where K is the mixture component random variable. The first term is the KL-divergence between the component prior probabilities, which simplifies into a function of the parameters [β1, γ1, · · · , βk, γk]. The second term involves the KL-divergence between two multivariate Gaussians (a standard result), which also simplifies into a function of ψ.
Proof. Referring to § 3.1, we derive the closed-form KL-divergence between the source and target Gaussian mixtures under Assumptions A1 and A2, i.e., the source and target Gaussian mixtures have the same number of components, with a one-to-one association between them. Recall that θc and θ̂c are the parameters of the original (source) and the adapted (target) MDN, respectively. Let K ∈ {1, · · · , k} denote the latent component random variable.
Dψ(Pθc, Pθ̂c) = E_{Pθc}[ log ( Pθc(x, K | z) / Pθ̂c(x, K | z) ) ]
= Σ_{z∈Z} p(z) Σ_{i=1}^{k} ∫_{R^d} Pθc(x, K = i | z) log ( Pθc(x, K = i | z) / Pθ̂c(x, K = i | z) ) dx
= Σ_{z∈Z} p(z) Σ_{i=1}^{k} Pθc(K = i | z) ∫_{R^d} Pθc(x | z, K = i) log ( Pθc(K = i | z) Pθc(x | z, K = i) / ( Pθ̂c(K = i | z) Pθ̂c(x | z, K = i) ) ) dx
= Σ_{z∈Z} p(z) Σ_{i=1}^{k} πi(z) ∫_{R^d} N( x | µi(z), Σi(z) ) [ log ( πi(z) / π̂i(z) ) + log ( N( x | µi(z), Σi(z) ) / N( x | µ̂i(z), Σ̂i(z) ) ) ] dx
= Σ_{z∈Z} p(z) Σ_{i=1}^{k} πi(z) log ( πi(z) / π̂i(z) )
+ Σ_{z∈Z} p(z) Σ_{i=1}^{k} πi(z) DKL( N(· | µi(z), Σi(z)), N(· | µ̂i(z), Σ̂i(z)) ). (11)
The second term in the final expression involves the KL-divergence between two multivariate Gaussians (a standard result), given by
DKL( N(· | µ, Σ), N(· | µ̂, Σ̂) ) = (1/2) log ( det(Σ̂) / det(Σ) ) + (1/2) tr( Σ̂^{-1} Σ ) + (1/2) (µ̂ − µ)^T Σ̂^{-1} (µ̂ − µ) − d/2.
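For reference, the following NumPy sketch evaluates this standard result (we use slogdet for numerical stability; the function name is ours):

    import numpy as np

    def gaussian_kl(mu, Sigma, mu_hat, Sigma_hat):
        """KL( N(mu, Sigma) || N(mu_hat, Sigma_hat) ) between d-dim Gaussians."""
        d = mu.shape[0]
        diff = mu_hat - mu
        Sigma_hat_inv = np.linalg.inv(Sigma_hat)
        _, logdet = np.linalg.slogdet(Sigma)
        _, logdet_hat = np.linalg.slogdet(Sigma_hat)
        return 0.5 * (logdet_hat - logdet + np.trace(Sigma_hat_inv @ Sigma)
                      + diff @ Sigma_hat_inv @ diff - d)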
For clarity, we further simplify Eq. (11) for the case of diagonal covariances by applying the above result. Recall that the Gaussian mixture parameters of the source and target domains are related by the parameter transformations in Eq. (4). The second term in Eq. (11), involving the KL-divergence between multivariate Gaussians, simplifies to
DKL( N(· | µi(z), σ²i(z)), N(· | µ̂i(z), σ̂²i(z)) ) = (1/2) Σ_{j=1}^{d} [ log c²ij + 1/c²ij + ( 1 / (c²ij σ²ij(z)) ) ( aij µij(z) + bij − µij(z) )² ] − d/2. (12)
The first term in Eq. (11), involving the KL-divergence between the component prior probabilities, can be expressed as a function of the adaptation parameters [β1, γ1, · · · , βk, γk] as follows:
Σ_{i=1}^{k} πi(z) log ( πi(z) / π̂i(z) ) = Σ_{i=1}^{k} ( e^{αi(z)} / q(z) ) [ log ( e^{αi(z)} / q(z) ) − log ( e^{βi αi(z) + γi} / q̂(z) ) ]
= log ( Σ_{i=1}^{k} e^{βi αi(z) + γi} ) − log ( Σ_{i=1}^{k} e^{αi(z)} ) + Σ_{i=1}^{k} ( e^{αi(z)} / q(z) ) ( αi(z) − βi αi(z) − γi ), (13)
where q(z) = Σ_{j=1}^{k} e^{αj(z)} and q̂(z) = Σ_{j=1}^{k} e^{βj αj(z) + γj} are the normalization terms in the softmax function. Substituting Eqs. (12) and (13) into the last step of Eq. (11) gives the KL-divergence between the source and target Gaussian mixtures as a function of the adaptation parameters ψ.
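The prior term of Eq. (13) can be computed stably in log space; a minimal sketch (function name and argument layout are ours):

    import numpy as np
    from scipy.special import logsumexp

    def prior_kl_term(alpha, beta, gamma):
        """First term of Eq. (11) for one symbol z: KL between the source
        component priors (logits alpha) and the adapted priors with logits
        beta * alpha + gamma, using the closed form of Eq. (13)."""
        alpha_hat = beta * alpha + gamma
        pi = np.exp(alpha - logsumexp(alpha))  # source priors pi_i(z)
        return logsumexp(alpha_hat) - logsumexp(alpha) + pi @ (alpha - alpha_hat)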
A.2 OPTIMALITY OF THE FEATURE TRANSFORMATION
We show that the proposed feature transformation at the decoder in § 3.3 is optimal in the minimum mean-squared error sense. The problem setting is that, at the decoder, we observe an input x^t from the target domain marginal distribution, i.e.,
x^t ∼ p^t(x) = Σ_{z∈Z} p(z) Σ_{i=1}^{k} π̂i(z) N( x | µ̂i(z), Σ̂i(z) ),
where Z = {Eθe(11), · · · , Eθe(1m)} is the encoder’s constellation. Suppose we knew the symbol z = Eθe(1y) that was transmitted and the mixture component i ∈ [k]; then the transformation g_{zi}^{-1}(x^t) in Eq. (7) can map x^t to the corresponding Gaussian component of the source distribution. However, since z and i are not observed at the decoder, we propose to find the transformation g^{-1} : R^d → R^d (independent of z and i) that minimizes the following expected squared error:
J( g^{-1}(x^t) ) = (1/2) E_{Pθ̂c(z, i | x)}[ ∥ g_{zi}^{-1}(x^t) − g^{-1}(x^t) ∥²₂ | x^t ]. (14)
This is the conditional expectation over (z, i) given x^t with respect to the posterior distribution Pθ̂c(z, i | x). Since x^t is fixed, the above objective is a function of the vector w := g^{-1}(x^t) ∈ R^d, and it can be simplified as follows:
J(w) = (1/2) E_{Pθ̂c(z, i | x)}[ ∥ g_{zi}^{-1}(x^t) − w ∥²₂ | x^t ]
= (1/2) E_{Pθ̂c(z, i | x)}[ g_{zi}^{-1}(x^t)^T g_{zi}^{-1}(x^t) | x^t ] + (1/2) w^T w − w^T E_{Pθ̂c(z, i | x)}[ g_{zi}^{-1}(x^t) | x^t ].
Note that w comes outside the expectation since it does not depend on z or i. The minimum of this simple quadratic function can be found by setting the gradient of J with respect to w to 0, giving
w⋆ = g^{-1}(x^t) = E_{Pθ̂c(z, i | x)}[ g_{zi}^{-1}(x^t) | x^t ] = Σ_{z∈Z} Σ_{i∈[k]} Pθ̂c(z, i | x^t) g_{zi}^{-1}(x^t).
This is the feature transformation at the decoder proposed in § 3.3.
B ADDITIONAL DETAILS ON THE PROPOSED METHOD
In this section we provide additional details on the proposed method that could not be discussed in § 3 of the main paper.
B.1 CLASS LABELS AND LABELED DATA
We would like to clarify that the statement “class labels are available for free” is made in Section 3 in order to highlight the fact that class labels are easy to obtain in this end-to-end communication
setting, unlike other domains (e.g. computer vision) where labeling data could be expensive. Since the transmitted message is also the class label, it is always available without additional effort during the data collection (from the packet preambles). However, note that it is still challenging / expensive to collect a large number of samples for domain adaptation, as discussed in Section 1. In contrast, it may be easy to obtain plenty of unlabeled data in other domains such as computer vision, where labeling is expensive.
In communication protocols, preambles are attached to the front of the packets for synchronization, carrier frequency offset correction, and other tasks. The preambles consist of sequences of known symbols (which have a one-to-one mapping to the messages). Therefore, these sequences can be used as the labeled dataset since the receiver obtains the distorted symbol and knows the ground truth. The proposed MDN adaptation and input transformation at the decoder do not incur any modifications to the encoder (transmitter side). The constellation learned by the autoencoder is kept fixed during adaptation. Therefore, using the preambles from a small number of packets, our method performs adaptation at the receiver side and maintains the symbol error rate performance without communicating any information back to the encoder.
B.2 TRANSFORMATION BETWEEN MULTIVARIATE GAUSSIANS
We discuss the feature and parameter transformations between any two multivariate Gaussians. This result was applied to formulate the MDN adaptation in Eqs. (4) and (7). Consider first the standard transformation from x ∼ N(· |µ,Σ) to x̂ ∼ N(· | µ̂, Σ̂) given by the two-step process:
• Apply a whitening transformation z = D^{-1/2} U^T (x − µ) such that z ∼ N(· | 0, I).
• Transform z into the new Gaussian density using x̂ = Û D̂^{1/2} z + µ̂.
We have denoted the eigen-decomposition of the covariance matrices by Σ = U D U^T and Σ̂ = Û D̂ Û^T, where U and Û are the orthonormal eigenvector matrices, and D and D̂ are the diagonal eigenvalue matrices. Combining the two steps, the overall transformation from x to x̂ is given by
x̂ = Û D̂^{1/2} D^{-1/2} U^T (x − µ) + µ̂. (15)
Suppose we define the matrix C = Û D̂^{1/2} D^{-1/2} U^T; then it is easily verified that the covariance matrices are related by Σ̂ = C Σ C^T. In general, the mean vector and covariance matrix of any two Gaussians can be related by the following parameter transformations:
µ̂ = A µ + b and Σ̂ = C Σ C^T, (16)
with parameters A ∈ Rd×d, b ∈ Rd, and C ∈ Rd×d. Substituting the above parameter transformations into the feature transformation in Eq. (15), we get
x̂ = C (x − µ) + Aµ + b.
From the above, we can also define the inverse feature transformation from x̂ ∼ N(· | µ̂, Σ̂) to x ∼ N(· | µ, Σ):
x = C^{-1} (x̂ − A µ − b) + µ.
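The forward transformation of Eq. (15) can be written directly in NumPy; a minimal sketch (function name ours), using the eigen-decompositions of the two covariance matrices:

    import numpy as np

    def gaussian_transport(x, mu, Sigma, mu_hat, Sigma_hat):
        """Map x ~ N(mu, Sigma) to x_hat ~ N(mu_hat, Sigma_hat) via Eq. (15):
        whiten under the source Gaussian, then color under the target one."""
        evals, U = np.linalg.eigh(Sigma)            # Sigma = U D U^T
        evals_hat, U_hat = np.linalg.eigh(Sigma_hat)
        C = U_hat @ np.diag(np.sqrt(evals_hat)) @ np.diag(evals ** -0.5) @ U.T
        return C @ (x - mu) + mu_hat

The inverse map is obtained by swapping the roles of the two Gaussians.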
B.3 GENERATIVE ADAPTATION OF THE MDN
In § 3.2, we discussed the discriminative adaptation objective for the MDN, which is used when the MDN is adapted as part of the autoencoder in order to improve the end-to-end error rate. This adaptation approach was used for the experiments in § 4. On the other hand, we may be interested in adapting the MDN in isolation, with the goal of improving its performance as a generative model of the channel. For this scenario, the adaptation objective in Eq. (6) is modified as follows: the first (data-dependent) term is replaced with the negative conditional log-likelihood (CLL) of the target dataset, while the second KL-divergence term remains the same:
J_CLL(ψ; λ) = −(1/N^t) Σ_{n=1}^{N^t} log Pθ̂c( x^t_n | z^t_n ) + λ Dψ(Pθc, Pθ̂c), (17)
where µ̂i(z), Σ̂i(z), and α̂i(z), as functions of ψ, are given by Eq. (4). The parameters of the original Gaussian mixture, αi(z), µi(z), Σi(z), ∀i, are constants since they have no dependence on ψ. The regularization constant λ ≥ 0 controls the allowed KL-divergence between the source and target Gaussian mixtures. Small values of λ weight the CLL term more, allowing more exploration in the adaptation, while large values of λ impose a strong regularization to constrain the space of target distributions. We evaluate the performance of this generative MDN adaptation in Appendix C.6.
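For concreteness, the sketch below evaluates this objective for the diagonal-covariance case, combining the negative CLL with the closed-form KL terms of Eqs. (12) and (13). The data layout (a dict mapping each symbol key to its target samples), the parameter layout (diagonal entries a, c and offsets b per component), and the helper mdn_predict are our assumptions; in practice ψ would be flattened into a vector for the BFGS optimizer.

    import numpy as np
    from scipy.special import logsumexp

    def generative_objective_diag(psi, target_data, mdn_predict, lam, p_z):
        """J_CLL of Eq. (17) with diagonal covariances. psi holds a, b, c of
        shape (k, d) and beta, gamma of shape (k,); mdn_predict(z) returns
        (alpha (k,), mu (k, d), var (k, d)) for the symbol with key z."""
        a, b, c = psi["a"], psi["b"], psi["c"]
        nll, kl, n = 0.0, 0.0, 0
        for z, pz in p_z.items():
            alpha, mu, var = mdn_predict(z)
            logits_t = psi["beta"] * alpha + psi["gamma"]
            log_pi_t = logits_t - logsumexp(logits_t)
            mu_t, var_t = a * mu + b, (c ** 2) * var   # Eq. (4), diagonal case
            for x in target_data.get(z, []):           # negative CLL term
                log_comp = -0.5 * np.sum(np.log(2 * np.pi * var_t)
                                         + (x - mu_t) ** 2 / var_t, axis=1)
                nll -= logsumexp(log_pi_t + log_comp)
                n += 1
            pi = np.exp(alpha - logsumexp(alpha))
            # Prior KL term, Eq. (13).
            kl += pz * (logsumexp(logits_t) - logsumexp(alpha)
                        + pi @ (alpha - logits_t))
            # Gaussian KL term, Eq. (12).
            kl_gauss = (0.5 * np.sum(np.log(c ** 2) + 1.0 / c ** 2
                                     + (mu_t - mu) ** 2 / (c ** 2 * var), axis=1)
                        - mu.shape[1] / 2.0)
            kl += pz * (pi @ kl_gauss)
        return nll / n + lam * kl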
B.4 VALIDATION METRIC FOR AUTOMATICALLY SETTING λ
The choice of λ in the adaptation objectives of Eqs. (6) and (17) is crucial, as it sets the right level of regularization for the target domain distribution. Since the target domain dataset is very small, it is difficult to apply cross-validation-type methods to select λ. We propose a validation metric V(ψ; D^t) that utilizes the feature-transformed target domain dataset to evaluate the quality of the adapted solutions for different λ values.
Let ψ denote the adaptation parameters found by minimizing the objective in Eq. (6) for a specific λ ≥ 0. The feature transformation (from target to source domain) at the decoder, g^{-1}(x), based on the adaptation parameters ψ is given by Eq. (8). Recall that the target domain dataset is D^t = {(x^t_n, y^t_n, z^t_n), n = 1, · · · , N^t}. We define the feature-transformed target domain dataset as:
D^t_trans = { ( g^{-1}(x^t_n), y^t_n, z^t_n ), n = 1, · · · , N^t }.
Suppose ψ is a good adaptation solution; then we expect the decoder (trained on the source domain dataset) to have good classification performance on D^t_trans. For a given feature-transformed target domain sample, the decoder predicts the class posterior probabilities: Dθd( g^{-1}(x^t_n) ) = [ Pθd( 1 | g^{-1}(x^t_n) ), · · · , Pθd( m | g^{-1}(x^t_n) ) ]. We define the validation metric as the negative posterior log-likelihood of the decoder on D^t_trans, given by
V(ψ; D^t) = −(1/N^t) Σ_{n=1}^{N^t} log Pθd( y^t_n | g^{-1}(x^t_n) ). (18)
We expect smaller values of V(ψ; D^t) to correspond to better adaptation solutions. The adaptation objective is minimized with λ varied over a range of values, and in each case the adapted solution ψ is evaluated using the validation metric. The pair of λ and ψ resulting in the smallest validation metric is chosen as the final adapted solution. The search set of λ used in our experiments was {10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 0.1, 1, 10, 100}. See Appendix C.4 for an ablation study on the choice of hyper-parameter λ using this validation metric.
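This selection procedure amounts to a simple grid search; a sketch follows, where adapt_fn (minimizing Eq. (6) for a given λ), transform_fn (the map g^{-1} of Eq. (8)), and decoder_probs (the trained decoder) are assumed callables supplied by the surrounding system:

    import numpy as np

    LAMBDA_GRID = (1e-5, 1e-4, 1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0)

    def validation_metric(decoder_probs, transformed_data):
        """Eq. (18): negative posterior log-likelihood of the decoder on the
        feature-transformed target dataset, a list of (g_inv_x, y) pairs."""
        return -np.mean([np.log(decoder_probs(gx)[y])
                         for gx, y in transformed_data])

    def select_lambda(adapt_fn, transform_fn, decoder_probs, target_data):
        """Adapt for each lambda on the grid; keep the solution with the
        smallest validation metric."""
        best = None
        for lam in LAMBDA_GRID:
            psi = adapt_fn(lam)
            data = [(transform_fn(psi, x), y) for x, y, _ in target_data]
            score = validation_metric(decoder_probs, data)
            if best is None or score < best[0]:
                best = (score, lam, psi)
        return best[1], best[2]   # (lambda, psi)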
Generative MDN Adaptation. The validation metric proposed above depends on the decoder, and cannot be used when the MDN is adapted as a generative model in isolation (Appendix B.3). For this setting, we modify the validation metric based on the following idea: suppose the adaptation finds a good solution; then we expect D^t_trans to have a high conditional log-likelihood under the (original) source domain MDN. The validation metric is therefore given by
V(ψ; D^t) = −(1/N^t) Σ_{n=1}^{N^t} log Pθc( g^{-1}(x^t_n) | z^t_n ), (19)
where Pθc is the Gaussian mixture given by Eq. (1).
B.5 COMPLEXITY ANALYSIS
We provide an analysis of the computational complexity of the proposed adaptation methods.
MDN Adaptation.
The number of free parameters being optimized in the adaptation objective (Eq. (6) or (17)) is given by |ψ| = k (2d² + d + 2). This is much smaller than the number of parameters in a typical MDN, even considering only the final fully-connected layer (see Table 1 for a comparison). Each step of the BFGS optimization involves computing the objective function, its gradient, and an estimate of its inverse Hessian. The cost of one step of BFGS can thus be expressed as O(N^t k d² |ψ|²). Suppose BFGS runs for a maximum of T iterations and the optimization is repeated for L values of λ; then the overall cost of adaptation is given by O(L T N^t k d² |ψ|²). Note that the optimizations for different λ values can easily be solved in parallel.
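For the setting used in our experiments, this count is tiny; a one-line check:

    def num_adaptation_params(k, d):
        """|psi| = k (2 d^2 + d + 2): A_i, C_i (d x d each), b_i (d,), beta_i, gamma_i."""
        return k * (2 * d * d + d + 2)

    print(num_adaptation_params(k=5, d=2))   # 60 parameters for k = 5, d = 2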
Test-time Adaptation at the Decoder.
We analyze the computational cost of the feature transformation-based adaptation at the decoder proposed in § 3.3. Consider a single test input x^t at the decoder. The feature transformation method first computes the posterior distribution Pθ̂c(z, i | x^t) over the set of symbol-component pairs of size km. Computation of each exponent factor in the posterior distribution requires O(d³) operations for the full-covariance case, and O(d) operations for the diagonal-covariance case. This corresponds to the calculation of the log of the Gaussian density. Therefore, computation of the posterior distribution for a single (z, i) pair requires O(k m d³) operations for the full-covariance case (and similarly for the diagonal case). Computation of the affine transformation g_{zi}^{-1}(x^t) for a single (z, i) pair requires O(d²) operations (the matrix Ci only needs to be inverted once prior to test-time adaptation). Since calculation of the posterior term dominates the computation, the overall cost of computing the transformation in Eq. (8) over the km symbol-component pairs is O(km · km d³) = O(k² m² d³).
We note that in practical communication systems d is small (typically d = 2). The number of symbols or messages m can vary from 4 to 1024 in powers of 2. The number of mixture components k can be any positive integer, but is usually not more than a few tens to keep the size of the MDN practical. Therefore, the computational cost of test-time adaptation at the decoder based on the feature transformation method is relatively small, making our proposed adaptation very computationally efficient to implement at the receiver side of a communication system.
B.6 LIMITATIONS AND FUTURE WORK
The proposed work focuses mainly on a mixture density network (MDN) as the generative channel model, which allows us to exploit some of their useful properties in our formulation. Generalizing the proposed few-shot domain adaptation to other types of generative channel models such as conditional GANs, VAEs, and normalizing flows (Dinh et al., 2017) could be an interesting direction. These generative models can handle more high-dimensional structured inputs.
The proposed work does not adapt the encoder network, i.e., the autoencoder constellation is not adapted to changes in the channel distribution. Adapting the encoder, decoder, and channel networks jointly would allow for more flexibility, but would likely be slower and require more data from the target distribution.
We focused on memoryless channels, where inter-symbol-interference (ISI) is not a problem. In practice, communication channels can have memory and ISI would have to be addressed by the training and adaptation methods. Under changing channels, one would have to also adapt an Equalizer model (algorithm) in order to mitigate ISI.
C ADDITIONAL EXPERIMENTS
We provide additional details on the experiments in § 4 and report additional results, including ablation studies on the proposed method.
C.1 EXPERIMENTAL SETUP
We implemented the mixture density network and communication autoencoder models using TensorFlow (Abadi et al., 2015) and TensorFlow Probability. We used the BFGS optimizer implementation available in TensorFlow Probability. The code base for our work has been submitted as supplementary material. All the experiments were run on a Macbook Pro with 16 GB memory and 8 CPU cores. Table 3 summarizes the architecture of the encoder, MDN (channel model), and decoder neural networks. Note that the output layer of the MDN is a concatenation (denoted by ⊕) of three fully-connected layers predicting the means, variances, and mixing prior logit parameters of the Gaussian mixture. The following setting is used in all our experiments. The size of the message set m (also the number of classes) was fixed to 16, corresponding to 4 bits. The dimension of the encoding d was set to 2, and the number of mixture components k was set to 5. The size of the hidden layers nh was set to 100.
The parameters ψ of the proposed adaptation method are initialized as follows for each component i:
Ai = Id, bi = 0, Ci = Id, βi = 1, γi = 0,
where Id is the d × d identity matrix. This initialization ensures that the target Gaussian mixtures (per class) are initially equal to the source Gaussian mixtures; a sketch of the initialization is given below. The regularization constant λ in the adaptation objective was varied over 8 equally-spaced values on the log-scale in the range 10^{-5} to 100, specifically {10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 0.1, 1, 10, 100}. The λ value and ψ corresponding to the smallest validation metric are selected as the final solution.
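A minimal sketch of the identity initialization (the dictionary layout is our assumption):

    import numpy as np

    def init_psi(k, d):
        """Identity initialization: the adapted mixture starts equal to the source."""
        return {"A": np.stack([np.eye(d)] * k),
                "b": np.zeros((k, d)),
                "C": np.stack([np.eye(d)] * k),
                "beta": np.ones(k),
                "gamma": np.zeros(k)}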
We used the Adam optimizer (Kingma & Ba, 2015) with a fixed learning rate of 0.001, a batch size of 128, and 100 epochs for training the MDN. For adaptation of the MDN using the baseline methods Finetune and Finetune last, we used Adam with the same learning rate for 200 epochs. The batch size is set as b = max{10, 0.1 N^t}, where N^t is the number of adaptation samples in the target dataset. For training the autoencoder using Algorithm 1, we found that stochastic gradient descent (SGD) with Nesterov momentum (constant 0.9) and an exponential learning rate schedule between 0.1 and 0.005 works better than Adam.
Finetuning Baselines. We provide additional details on the baselines Finetune and Finetune last. Both the methods first initialize the target domain MDN, encoder, and decoder networks with the corresponding parameters from the source domain. The method Finetune first finetunes all the MDN parameters to minimize the conditional log-likelihood of the target dataset using the Adam optimizer. After the MDN is finetuned, we freeze the parameters of the MDN and encoder, and train only the decoder using data generated from the updated MDN channel. The method Finetune last differs from Finetune in that it optimizes only the weights of the final MDN layer.
From the results in Figures 4, 5, and 6, we observe that the baselines Finetune and Finetune last have very similar performance compared to the case of no adaptation. We have investigated this carefully and verified that this is not due to a bug or insufficient optimization (e.g., by checking if the final weights of the MDN and decoder are different for both methods). For both methods, we tried a range of learning rates for Adam and increased the number of epochs to a large number (beyond 200 was not helpful). We have reported the best-case results for these methods, which suggests that they are not effective at adaptation using small target domain datasets. As mentioned in Section 4.1, we hypothesize that using the KL-divergence based regularization and constraining the number of adaptation parameters leads to more effective performance of our method.
Uncertainty Estimation. Since there is inherent randomness in our experiments, especially with the small sample sizes of the target dataset, we always report average results from multiple trials. For the experiments on standard simulated channel variations (e.g., AWGN to Ricean fading), we report the results from 10 trials. For the random Gaussian mixtures experiment, we report the average and standard error over 50 random source/target dataset pairs. For the FPGA experiments, we report the results from 20 random trials. The average metrics (symbol error rate and log-likelihood) are reported in the plots.
Evaluation Protocol. We create a random class-stratified 50-50 train-test split (each of size 300,000) for data from both the source and target domains. Performance on both domains is always evaluated on the held-out test split. The train split from the target domain dataset is sub-sampled to create adaptation datasets of different sizes, specifically with 5, 10, 20, 30, 40, and 50 samples per class (symbol); a sketch of this sub-sampling is given below. For the generative adaptation experiments on the MDN (Appendix C.6), the number of adaptation samples from the target domain is reduced even further: we varied it from 2 samples per class to 20 samples per class in order to highlight the improvements obtained by the proposed method. The oracle baseline method, which retrains the autoencoder and MDN on the target distribution, uses the entire training dataset from the target domain.
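The class-stratified sub-sampling is straightforward; a sketch (function name ours):

    import numpy as np

    def subsample_per_class(x, y, n_per_class, seed=0):
        """Draw a small class-stratified adaptation set from the target train split."""
        rng = np.random.default_rng(seed)
        idx = np.concatenate([rng.choice(np.flatnonzero(y == c),
                                         size=n_per_class, replace=False)
                              for c in np.unique(y)])
        return x[idx], y[idx]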
Choice of SNR. For the experiments on simulated channel distributions such as AWGN, Ricean fading, and Uniform fading, we set the signal-to-noise ratio (SNR) to 14 dB for the source distribution and 20 dB for the target distribution. The connection between the SNR and the distribution parameters is given in Appendix F. We have experimented with other combinations of SNR for the source and target channels and found a similar trend in the adaptation performance.
In the simulated experiments, we focused on the SNR range of 14 dB to 20 dB. We selected this range by first evaluating the symbol error rate (SER) vs. SNR curve of the autoencoder for the different simulated channel distributions. We found that going below 14 dB SNR results in a degradation of the autoencoder’s performance (except for the AWGN channel, which we do not use as a target distribution). Also, going above 20 dB SNR did not lead to a significant decrease in the SER. For channels such as Ricean fading and Uniform fading, we found that even a retrained autoencoder has a relatively high error rate at lower SNRs.
C.2 DETAILS ON THE FPGA EXPERIMENT
Referring to the experiment in § 4.2, for the real, over-the-air traces we used the platform from Lacruz et al. (2021). This ultra-wide-band mm-wave transceiver baseband memory-based design is developed on top of a ZCU111 RFSoC FPGA. This evaluation board features a Zynq UltraScale+ XCZU28DR. The FPGA is equipped with 8 × 8 AD/DA converters with giga-sampling capabilities, which makes it ideal for RF system development; the board has 4 GB of DDR4 memory, RF-ADCs with up to 4 GSPS of sampling rate, and RF-DACs with up to 6.544 GSPS. This board also includes a quad-core ARM Cortex-A53 and a dual-core ARM Cortex-R5 real-time processor.
For the radio frequency, we used 60 GHz RF front-end antennas. These kits include a 16 + 16 TRX patch array antenna plus the RF module with up/down conversion from baseband to I/Q channels,
and TX/RX local oscillator (LO) frequency control. The antennas operate over 57–71 GHz, a frequency range that covers the unlicensed 60 GHz band for mm-wave channels, and are managed from a PC host via USB.
We implemented hardware-in-the-loop training. For the experimentation on real traces, we use Matlab as the central controller. The PC host running Matlab is connected to the platform via Ethernet. The FPGA can transmit different custom waveforms, such as 16-QAM frames from the 802.11ad and 802.11ay standards, with 2 GHz of bandwidth. The frames are sent over-the-air via the 60 GHz radio frequency kits, and the samples are stored in the FPGA DDR memory. We decode the received data from the transmission, removing the preamble and header fields and extracting the symbols to train the MDN. We add a preamble to the constellation generated by the MDN for packet detection purposes, and we transmit the new waveforms over-the-air again. Finally, the adaptation is performed offline with the decoded symbols from the custom autoencoder-learned constellation.
Source and Target Domains.
For the experiment in § 4.2, we introduced distribution changes via IQ imbalance-based distortions to the symbol constellation, and evaluated the adaptation performance as a function of the level of imbalance. The source domain would be the original channel, the over-the-air link between the transmitter and receiver on which the training data is collected. This source domain data is used for training the MDN and the autoencoder. The target domain would be a modification of the source domain where the symbols used by the transmitter are distorted by modifying the in-phase and quadrature-phase (IQ) components of the RF signal. This causes a change in the distribution observed by the receiver (decoder), leading to a drop in performance without any adaptation.
C.3 DETAILS ON THE RANDOM GAUSSIAN MIXTURE DATASETS
We created a simulated distribution shift setting where data from both the source and target domains are generated from class-conditional Gaussian mixtures whose parameters are modified between the two domains (e.g., see Fig. 7). The parameters for the source and target Gaussian mixtures are generated as follows:
Source Domain. The source domain data is generated with a standard 16-QAM constellation ZQAM, which has 16 classes (messages). Let ks be the number of components in the source Gaussian mixture.
For each z ∈ ZQAM:
• Calculate dmin, the minimum distance from z to the remaining symbols in ZQAM. Let σs = dmin / 4 be a constant standard deviation for this symbol.
• Component priors: generate πi(z) ∼ Unif(0.05, 0.95), ∀i ∈ [ks]. Normalize the priors to sum to 1.
• Component means: generate µi(z) ∼ N(· | z, σ2sI), ∀i ∈ [ks].
• Component covariances: generate s1, · · · , sd iid∼ Unif(0.2 σs, σs) and let Σi(z) = diag(s²1, · · · , s²d), ∀i ∈ [ks] (the covariances are diagonal).
• Generate N^s/m samples corresponding to symbol z from the Gaussian mixture: x^s_n ∼ Σ_{i=1}^{ks} πi(z) N( x | µi(z), Σi(z) ).
Target Domain. The parameters of the target Gaussian mixture are generated in a very similar way. The MDN and autoencoder are trained on the source domain dataset. Let Z = {Eθe(11), · · · ,Eθe(1m)} be the constellation learned by the autoencoder. Let kt be the number of components in the target Gaussian mixture. For each z ∈ Z:
• Calculate dmin, the minimum distance from z to the remaining symbols in Z . Let σt = dmin / 4 be a constant standard deviation for this symbol.
• Component priors: generate π̂i(z) ∼ Unif(0.05, 0.95), ∀i ∈ [kt]. Normalize the priors to sum to 1.
• Component means: generate µ̂i(z) ∼ N(· | z, σ2t I), ∀i ∈ [kt].
• Component covariances: generate s1, · · · , sd iid∼ Unif(0.2 σt, σt) and let Σ̂i(z) = diag(s²1, · · · , s²d), ∀i ∈ [kt] (the covariances are diagonal).
• Generate N^t/m samples corresponding to symbol z from the Gaussian mixture: x^t_n ∼ Σ_{i=1}^{kt} π̂i(z) N( x | µ̂i(z), Σ̂i(z) ).
We set ks = kt = 3, except for the experiment where the source and target Gaussian mixtures are mismatched; in this case, ks and kt are randomly selected for each dataset from the range {3, 4, 5, 6}.
Random Phase Shift. We allow the channel output x to be randomly phase shifted on top of other distribution changes. This is done by matrix multiplication of x with a rotation matrix, where the rotation angle for each sample is uniformly selected from [−ϕ, ϕ]. We set ϕ to π/18 (i.e., 10 degrees). Results on a dataset with a random phase shift applied on top of the random Gaussian mixture distribution shift can be found in Fig. 5c. A sketch of this data-generation recipe is given below.
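The sketch below implements the recipe above for d = 2 (the setting used in our experiments); the function names and the array-based constellation layout are our assumptions:

    import numpy as np

    def random_mixture_params(constellation, k, rng):
        """Per-symbol Gaussian mixture parameters; constellation is (m, 2)."""
        params = []
        for j, z in enumerate(constellation):
            others = np.delete(constellation, j, axis=0)
            d_min = np.min(np.linalg.norm(others - z, axis=1))
            sigma = d_min / 4.0
            pi = rng.uniform(0.05, 0.95, size=k)
            pi /= pi.sum()
            mu = rng.multivariate_normal(z, sigma ** 2 * np.eye(2), size=k)
            scales = rng.uniform(0.2 * sigma, sigma, size=(k, 2))
            params.append((pi, mu, scales ** 2))   # diagonal variances
        return params

    def sample_symbol(pi, mu, var, n, phi_max, rng):
        """Sample n points from the mixture, then apply a random rotation in
        [-phi_max, phi_max] to each sample (the random phase shift)."""
        comps = rng.choice(len(pi), size=n, p=pi)
        x = rng.normal(mu[comps], np.sqrt(var[comps]))   # diagonal Gaussian draws
        phi = rng.uniform(-phi_max, phi_max, size=n)
        c, s = np.cos(phi), np.sin(phi)
        rot = np.stack([np.stack([c, -s], axis=1),
                        np.stack([s, c], axis=1)], axis=1)
        return np.einsum("nij,nj->ni", rot, x)

For example, calling random_mixture_params on the 16-QAM constellation with k = 3 and a fixed rng produces one source-domain dataset configuration.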
C.4 ABLATION EXPERIMENTS
We perform ablation experiments to understand: 1) the choice of the hyper-parameter λ, 2) the importance of the KL-divergence regularization in the adaptation objective, 3) performance of our method when the source and target Gaussian mixtures have mismatched components, and 4) the performance of our method when there is no distribution change.
Automatic Selection of Hyper-parameter λ. We evaluate the proposed validation metric for automatically selecting the hyper-parameter λ and report the results in Fig. 8. We run the proposed method for different fixed values of λ as well as for the automatically-selected λ, and compare their performance on the target domain test set. We consider both simulated channel variations and the random Gaussian mixture datasets. From the figure, we observe that in most cases the performance based on the automatically set value of λ is better than that of other fixed choices of λ. The case of adaptation from AWGN to Ricean fading is an exception, where our method does not learn a good adaptation solution (see Fig. 4c). In this case, we observe from Fig. 8b that the setting λ = 0.0001 has the best symbol error rate.
Performance Under Component Mismatch. We evaluate the symbol error rate performance of all the methods in the setting where the number of components in the source and target Gaussian mixtures is mismatched. The number of components in the source and target Gaussian mixtures is randomly selected from the range 3 to 6. From Fig. 11, we observe that the proposed method has strong performance improvements even in this mismatched setting, suggesting that our method can perform well even when Assumptions A1 and A2 are not satisfied.
Importance of the KL-divergence Regularization. Recall that the adaptation objectives in Eqs. (6) and (17) include the KL-divergence term scaled by λ in order to avoid large distribution changes when there is not enough support from the small target-domain dataset. A natural question to ask is whether this term is useful and helps improve the adaptation solution when λ > 0. To answer this, we compare the performance of our method with λ = 0 to that of our method with λ set automatically using the validation metric. The results of this comparison are given in Fig. 9 on four simulated channel variations. The results are averaged over multiple trials as before. It is clear that setting λ = 0 leads to much higher symbol error rates compared to setting λ to a non-zero value using the validation metric, establishing the importance of the KL-divergence term.
Performance Under No Distribution Change. We evaluate the symbol error rate performance of all the methods in the setting where there is no distribution change. In this setting, the performance of the MDN and autoencoder should not change, and we expect the proposed adaptation method to maintain a similar performance (not lead to increased symbol error rate). In Fig. 10, we report the results of this experiment when both the source and target channel distributions are either Ricean fading or Uniform fading. We consider a medium SNR value of 14 dB and a high SNR value of 20 dB. We observe that our method is relatively stable even when there is no distribution change, and there is only a small increase in error rate. For instance, in Fig. 10c, the error rate of our method increases from 0.015 to 0.018 for 5 samples per class.
We expect that a practical system that frequently adapts to changes in the channel distribution should first have a distribution change-detection algorithm that takes a batch of new samples from the channel and detects whether there is any change in the distribution. The actual domain adaptation algorithm is then applied only when a distribution change is detected. In this way, any potential drop in the autoencoder’s performance when there is no distribution change can be made less likely.
C.5 ANALYSIS OF THE FAILURE ON AWGN TO RICEAN FADING
Referring to Fig. 4c in the main paper, we observe that our method has a worse symbol error rate compared to no adaptation and the other baselines for the adaptation setting from an AWGN channel at 14 dB | 1. What is the focus and contribution of the paper regarding channel changes in communication systems?
2. What are the strengths of the proposed approach, particularly in its application to few-shot domain adaptation?
3. What are the weaknesses of the paper, especially regarding the choice of baselines and evaluation metrics?
4. Do you have any concerns about the experimental setup or the scalability of the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper models the changes of the channel in a communication system as a few-shot domain adaptation problem. They employ a Gaussian mixture density network to specifically model the channel and propose a transformation to compensate for changes in the channel distribution. They perform experiments both on simulated channel distributions and on an FPGA testbed.
Strengths And Weaknesses
Pros:
This paper considers the frequent change of the channel in communication systems. They treat the change of channel as a distribution shift and link this practical problem with few-shot domain adaptation. I think this is novel and advanced enough in the field of communication.
The proposed solution is easy to follow and can be used in more general few-shot DA scenarios. In addition, this method has better real-time performance compared with previous works.
Cons:
This paper lacks few-shot domain adaptation methods as baselines, e.g., [1]. The current baselines are all basic FDA solutions, and I worry about their competitiveness.
The only evaluation metric they use is the SER. For an applied research article, the performance of the proposed method on practical communication problems is essential. They need to show the advantages of their learning-based method over conventional methods.
I find that the number of target samples per class is more than 10 in this paper, which may be beyond the scale of few-shot learning. Additional experiments with fewer than 7 samples per class are important.
[1] Motiian et al. Few-Shot Adversarial Domain Adaptation. NeurIPS, 2017.
Clarity, Quality, Novelty And Reproducibility
These all seem good. The presentation is clear, and the writing quality is above the bar. For the communication field, I think the novelty is sufficient. They also provide the source code in the supplementary materials.
ICLR | Title
Few-Shot Domain Adaptation For End-to-End Communication
Abstract
The problem of end-to-end learning of a communication system using an autoencoder – consisting of an encoder, channel, and decoder modeled using neural networks – has recently been shown to be an effective approach. A challenge faced in the practical adoption of this learning approach is that under changing channel conditions (e.g. a wireless link), it requires frequent retraining of the autoencoder in order to maintain a low decoding error rate. Since retraining is both time consuming and requires a large number of samples, it becomes impractical when the channel distribution is changing quickly. We propose to address this problem using a fast and sample-efficient (few-shot) domain adaptation method that does not change the encoder and decoder networks. Different from conventional training-time unsupervised or semi-supervised domain adaptation, here we have a trained autoencoder from a source distribution that we want to adapt (at test time) to a target distribution using only a small labeled dataset, and no unlabeled data. We focus on a generative channel model based on the Gaussian mixture density network (MDN), and propose a regularized, parameter-efficient adaptation of the MDN using a set of affine transformations. The learned affine transformations are then used to design an optimal transformation at the decoder input to compensate for the distribution shift, and effectively present to the decoder inputs close to the source distribution. Experiments on many simulated distribution changes common to the wireless setting, and a real mmWave FPGA testbed demonstrate the effectiveness of our method at adaptation using very few target domain samples 1.
1 INTRODUCTION
End-to-end (e2e) learning of a communication system using an autoencoder has been recently shown to be a promising approach for designing the next generation of wireless networks (O’Shea & Hoydis, 2017; Dörner et al., 2018; Aoudia & Hoydis, 2019; O’Shea et al., 2019; Ye et al., 2018; Wang et al., 2017). This new paradigm is a viable alternative for optimizing communication in diverse applications, hardware, and environments (Hoydis et al., 2021). It is particularly promising for dense deployments of low-cost transceivers, where there is interference between the devices and hardware imperfections that are difficult to model analytically. The key idea of e2e learning for a communication system is to use an autoencoder architecture to model and learn the transmitter and receiver jointly using neural networks in order to minimize the e2e symbol error rate (SER).
The channel (i.e., propagation medium and transceiver imperfections) can be represented as a stochastic transfer function that transforms its input z ∈ Rd to an output x ∈ Rd. It can be regarded as a black-box that is typically non-linear and non-differentiable due to hardware imperfections (e.g., quantization and amplifiers). Since autoencoders are trained using stochastic gradient descent (SGD)-based optimization (O’Shea & Hoydis, 2017), it is challenging to work with a black-box channel that is not differentiable. One approach to address this problem is to use a known mathemat-
1Code for our work: https://github.com/jayaram-r/domain-adaptation-autoencoder
ical model of the channel (e.g., additive Gaussian noise), which would enable the computation of gradients with respect to the autoencoder parameters via backpropagation. However, such standard channel models do not capture well the realistic channel effects as shown in Aoudia & Hoydis (2018). Alternatively, recent works have proposed to learn the channel using deep generative models that approximate p(x | z), the conditional probability density of the channel, using Generative Adversarial Networks (GANs) (O’Shea et al., 2019; Ye et al., 2018), Mixture Density Networks (MDNs) (García Martí et al., 2020), and conditional Variational Autoencoders (VAEs) (Xia et al., 2020). The use of a differentiable generative model of the channel enables SGD-based training of the autoencoder, while also capturing realistic channel effects better than standard models.
Although this e2e optimization with a generative channel model learned from data can improve the physical-layer design for communication systems, in reality, channels often change, requiring collection of a large number of samples and frequent retraining of the channel model and autoencoder. For this reason, adapting the generative channel model and the autoencoder as often as possible, using only a small number of samples, is required for good communication performance. Prior works have (to the best of our knowledge) not addressed the adaptation problem for autoencoder-based e2e learning, which is crucial for real-time deployment of such a system under frequently-changing channel conditions. In this paper, we study the problem of domain adaptation (DA) of autoencoders using an MDN as the channel model. In contrast to conventional DA, where the target domain has a large unlabeled dataset and sometimes also a small labeled dataset (semi-supervised DA) (Ben-David et al., 2006), here we consider a few-shot DA setting where the target domain has only a small labeled dataset, and no unlabeled data. This setting applies to our problem since we only get to collect a small number of labeled samples at a time from the changing target domain (here the channel) 2.
Towards addressing this important practical problem, we make the following contributions:
• We propose a parameter- and sample-efficient method for adapting a generative MDN (used for modeling the channel) based on the properties of Gaussian mixtures (§ 3.1 and § 3.2).
• Based on the MDN adaptation, we propose an optimal input-transformation method at the decoder that compensates for changes in the channel distribution, and decreases or maintains the error rate of the autoencoder without any modification to the encoder and decoder networks (§ 3.3).
• Experiments on a mmWave FPGA platform and a number of simulated distribution changes show strong performance improvements for our method. For instance, in the FPGA experiment, our method improves the SER by 69% with only 10 samples per class from the target distribution (§ 4).
Related Work. Recent approaches for DA such as DANN (Ganin et al., 2016), based on adversarial learning of a shared representation between the source and target domains (Ganin & Lempitsky, 2015; Ganin et al., 2016; Long et al., 2018; Saito et al., 2018; Zhao et al., 2019; Johansson et al., 2019), have achieved much success on computer vision and natural language processing. Their high-level idea is to adversarially learn a shared feature representation for which inputs from the source and target distributions are nearly indistinguishable to a domain discriminator DNN, such that a label predictor DNN using this representation and trained using labeled data from only the source domain also generalizes well to the target domain. Adversarial DA methods are not suitable for our problem, which requires fast and frequent test-time DA, because of their high computational and sample complexity and the imbalance in the number of source and target domain samples.
Related frameworks such as transfer learning (Long et al., 2015; 2016), model-agnostic metalearning (Finn et al., 2017), domain-adaptive few-shot learning (Zhao et al., 2021; Sun et al., 2019), and supervised DA (Motiian et al., 2017a;b) also deal with the problem of adaptation using a small number of samples. Most of them are not applicable to our problem because they primarily address novel classes (with potentially different distributions), and knowledge transfer from existing to novel tasks. Motiian et al. (2017a) is closely related since they also deal with a target domain that only has a small labeled dataset and has the same label space. The key difference is that Motiian et al. (2017a) address the training-time few-shot DA problem, while we focus on test-time few-shot DA. Specifically, their adversarial DA method requires both the source and target domain datasets at training time, and can be computationally expensive to retrain for every new batch of target domain data (a key motivation for this work is to avoid frequent retraining).
2In our problem, labels correspond to the transmitted messages and are essentially obtained for free (see § 3).
2 PRIMER ON AUTOENCODER-BASED END-TO-END COMMUNICATION
Notations. We denote vectors and matrices with boldface symbols. We define the indicator function 1(c) that takes value 1 (0) when the condition c is true (false). For any integer n ≥ 1, we define [n] = {1, · · · , n}. We denote the one-hot-coded vector with 1 at index i and the rest zeros by 1i. The probability density of a multivariate Gaussian with mean µ and covariance matrix Σ is denoted by N (x |µ,Σ). We use the superscripts s and t to denote quantities corresponding to the source and target domain respectively. Table 2 in the Appendix provides a quick reference for the notations.
Following (O’Shea & Hoydis, 2017; Dörner et al., 2018), consider a single-input, single-output (SISO) communication system shown in Fig. 1, consisting of a transmitter (or encoder), channel, and receiver (or decoder). The encoder Eθe(·) is a multi-layer feedforward neural network (NN) with parameters θe that maps an input message y ∈ Y := {1, · · · ,m} into an encoded symbol z ∈ R^d. The input message y is mapped into a one-hot-coded vector 1y prior to being processed by the encoder 3. The message y is equivalent to a class label in machine learning terms, and the encoded symbol z = Eθe(1y) is like a representative vector for the class y. We note that the dimension of the encoding d is small (less than 10), and d = 2 is typically used to coincide with traditional modulation techniques (O’Shea & Hoydis, 2017; Goldsmith, 2005). The set of distinct encoded symbols Z = {Eθe(11), · · · , Eθe(1m)} is referred to as the constellation of the autoencoder. The symbol z is transmitted (via the custom modulation learned by the encoder) over a communication channel, represented by an unknown conditional probability density p(x | z), and is received at the output of the channel as a noisy, distorted symbol x ∈ R^d. The decoder Dθd(·) is also a multi-layer, feed-forward NN with parameters θd that predicts the class-posterior probabilities over the m messages based on the distorted channel output x. The decoder is essentially a classifier whose input-output mapping is defined by Dθd(x) := [Pθd(1 | x), · · · , Pθd(m | x)], where Pθd(y | x) is the predicted probability of class y given x. The class with the highest predicted probability is the decoded message ŷ(x) = argmax_{y∈Y} Pθd(y | x). As in standard classification, the performance metric of the autoencoder is the symbol error rate (SER), defined as E_{(x,y)}[1(ŷ(x) ≠ y)].
Generative Channel Model. In order to learn the encoder and decoder networks using SGD-based optimization, it is necessary to have a differentiable backward path from the decoder to the encoder through the channel. We address this by learning a parametric generative model of the channel Pθc(x | z) (with parameters θc) that closely approximates the true channel conditional density p(x | z). There exists a stochastic data generation or sampling function x = hθc(z, u) corresponding to the generative model, where u captures the random aspects of the channel (e.g., noise and phase offsets; details in Appendix E). In this work, we model the conditional density of the channel using a set of m Gaussian mixtures, one per input message (or class) y ∈ Y:
Pθc(x | z) = Σ_{i=1}^{k} πi(z) N( x | µi(z), Σi(z) ), z ∈ {Eθe(11), · · · , Eθe(1m)}. (1)
Here, k is the number of components, µi(z) ∈ R^d is the mean vector, Σi(z) ∈ R^{d×d} is the (symmetric, positive-definite) covariance matrix, and πi(z) ∈ [0, 1] is the prior probability of component i. It is convenient to express the component prior probability in terms of the softmax function as πi(z) = e^{αi(z)} / Σ_{j=1}^{k} e^{αj(z)}, ∀i ∈ [k], where αi(z) ∈ R are the component prior logits. We define the parameter vector of component i as ϕi(z)^T = [αi(z), µi(z)^T, vec(Σi(z))^T], where vec(·) is the vector representation of the unique entries of the covariance matrix. We also define the combined parameter vector from all components by ϕ(z)^T = [ϕ1(z)^T, · · · , ϕk(z)^T]. An MDN can model complex conditional distributions by combining a feed-forward network with a parametric mixture density (Bishop, 1994; 2007). We use the MDN to predict the parameters of the Gaussian mixtures ϕ(z) as a function of its input symbol z, i.e., ϕ(z) = Mθc(z), where θc are the parameters of the MDN network. The MDN output with all the mixture parameters has dimension p = k (d(d+1)/2 + d + 1). While there are competing methods for generative modeling of the channel such as conditional GANs (Ye et al., 2018) and VAEs (Xia et al., 2020), we choose the Gaussian MDN based on i) the strong approximation properties of Gaussian mixtures (Kostantinos, 2000) for learning probability distributions; and ii) the analytical and computational tractability it lends to our domain adaptation formulation. The effectiveness of a Gaussian MDN for wireless channel modeling has also been shown in García Martí et al. (2020).
3The encoder has a normalization layer that constrains the average power of the symbols (see Appendix D).
The input-output function of the autoencoder is given by fθ(1y) = Dθd(hθc(Eθe(1y),u)), and the goal of autoencoder learning is to minimize the symbol error rate. Since the sampling function hθc of a Gaussian mixture channel is not directly differentiable, we apply the Gumbel-Softmax reparametrization (Jang et al., 2017) to obtain a differentiable sampling function (details in Appendix E). More background, including the training algorithm of the autoencoder, is in Appendix D.
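To make the mixture channel model of Eq. (1) concrete, the following sketch evaluates log Pθc(x | z) given the parameters predicted by the MDN for a symbol z; the function name and tuple layout are our assumptions, not the paper's code:

    import numpy as np
    from scipy.stats import multivariate_normal
    from scipy.special import logsumexp

    def channel_log_likelihood(x, phi):
        """log P(x | z) under Eq. (1), given the MDN output phi = (alpha, mu,
        Sigma) for symbol z: alpha (k,) prior logits, mu (k, d), Sigma (k, d, d)."""
        alpha, mu, Sigma = phi
        log_pi = alpha - logsumexp(alpha)          # softmax in log space
        log_comp = np.array([multivariate_normal.logpdf(x, mu[i], Sigma[i])
                             for i in range(len(alpha))])
        return logsumexp(log_pi + log_comp)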
3 PROPOSED METHOD
Problem Setup. Let x, y, z denote a realization of the channel output, message (class label), and channel input (symbol) distributed according to the joint distribution p(x, y, z). We first establish the following result about the joint distribution.

Proposition 1. The joint distributions p(x, y, z) and p(x, y) can be expressed in the following form:
$$p(x, y, z) = p\big(x \mid E_{\theta_e}(1_y)\big)\, p(y)\, \delta\big(z - E_{\theta_e}(1_y)\big), \quad \forall x, z \in \mathbb{R}^d,\ y \in \mathcal{Y},$$
$$p(x, y) = p\big(x \mid E_{\theta_e}(1_y)\big)\, p(y), \quad \forall x \in \mathbb{R}^d,\ y \in \mathcal{Y}, \tag{2}$$
where δ(·) is the Dirac delta (or Impulse) function, and we define p(x | y) := p(x |Eθe(1y)) as the conditional distribution of x given the class y.
The proof is simple and given in Appendix A. Let Ds = {(xsi , ysi , zsi ), i = 1, · · · , Ns} be a large dataset from a source distribution ps(x, y, z) = ps(x | y) ps(y) δ(z−Eθe(1y)). The data collection involves sending multiple copies of each of the m messages through the channel (e.g., over the air from the transmitter to receiver) by using a standard modulation technique (encoding) for z (e.g., M-QAM (Goldsmith, 2005)), and observing the corresponding channel output x. Different from conventional machine learning, where class labeling is expensive, in this setting the class label is simply the message transmitted, which is obtained for free while collecting the data. The MDN channel model and autoencoder are trained on Ds according to Algorithm 1 (see Appendix D.3).
Due to changes in the channel condition and environmental factors (e.g., moving obstacles), suppose the data distribution changes to pt(x, y, z) = pt(x | y) pt(y) δ(z − Eθe(1y)). While the distribution change may cause a drop in the autoencoder’s performance, we assume that it is gradual enough that domain adaptation is possible (David et al., 2010) (by domain, here
we mean the state of the communication channel during the time period when the MDN and autoencoder are trained). As discussed in § 1, the main challenge in this setting is to collect a sufficiently large dataset to retrain the MDN and autoencoder under the distribution shift. Therefore, suppose we collect a small dataset from the target distribution $D^t = \{(x_i^t, y_i^t, z_i^t),\ i = 1, \cdots, N^t\}$, where $N^t \ll N^s$. Our goal is to design a few-shot domain adaptation method for the MDN and autoencoder in order to maintain or improve the symbol error rate.
Distribution Change. Referring to the joint distribution Eq. (2), the class prior p(y) is the prior probability of a message y transmitted through the system. In this work, we make a reasonable practical assumption that this prior probability does not change, i.e., pt(y) ≈ ps(y), ∀y ∈ Y . However, the class-conditional distribution of channel output p(x | y) changes, and therefore the class-posterior distribution p(y |x) also changes. This is commonly referred to as the conditional shift assumption (Zhang et al., 2013) (different from covariate shift (Sugiyama et al., 2007)).
Overview of the Proposed Method. Recall from Eqn. (1) that we model the channel distribution p(x | z) as a Gaussian mixture Pθc(x | z), whose parameters are predicted by the MDN, i.e., ϕ(z) = Mθc(z). From Proposition 1, the m class-conditional distributions of x are given by p(x | y) = p(x | Eθe(1y)), ∀y ∈ Y. Therefore, in our setting, adaptation of the class-conditional distributions is equivalent to adaptation of the m Gaussian mixtures in Eqn. (1). Adaptation of the Gaussian mixtures can be directly accomplished by adapting the MDN (i.e., the parameters θc) using the small target-domain dataset Dt. Our proposed adaptation of the autoencoder consists of two key steps:
1. A light-weight, parameter-efficient adaptation of the MDN using the small target dataset Dt.
2. An efficient feature transformation at the input of the decoder (based on the MDN adaptation) that compensates for changes in the class-conditional distributions.
Our method requires adaptation of only the MDN (channel model), while the encoder and decoder networks (θe and θd) remain unchanged, making it amenable to fast and frequent adaptation that requires collecting only a small target dataset each time (few-shot setting).
3.1 MDN CHANNEL MODEL ADAPTATION
Our goal is to adapt the m Gaussian mixtures in Eqn. (1) that model the source class-conditional distributions. Suppose the m adapted Gaussian mixtures corresponding to the (unknown) target class-conditional distributions are
$$P_{\hat{\theta}_c}(x \mid z) \,=\, \sum_{i=1}^{k} \hat{\pi}_i(z)\, N\big(x \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big), \quad z \in \{E_{\theta_e}(1_1), \cdots, E_{\theta_e}(1_m)\}, \tag{3}$$
where θ̂c are parameters of the adapted (target) MDN, and the component means, covariances, and prior probabilities with a hat notation are defined as in § 2. The adapted MDN predicts all the parameters of the target Gaussian mixture as ϕ̂(z) = Mθ̂c(z) as shown in Fig. 2, where ϕ̂(z) is defined in the same way as ϕ(z). Instead of naively fine-tuning all the MDN parameters θc, or even just the final fully-connected layer 4, we propose a parameter-efficient adaptation of the MDN based on the affine-transformation property of the Gaussian distribution, i.e., one can transform between any two multivariate Gaussians through a general affine transformation. First, we state some basic assumptions required to make the proposed adaptation tractable.
A1) The source and target Gaussian mixtures per class have the same number of components k.
A2) The source and target Gaussian mixtures (from each class) have a one-to-one correspondence between their components.
Assumption A1 is made in order to not have to change the architecture of the MDN during adaptation due to adding or removing of components. Both assumptions A1 and A2 5 make it tractable to find the closed-form expression for a simplified KL-divergence between the source and target Gaussian mixtures per class (see Proposition 2).
Parameter Transformations. As shown in Appendix B.2, the transformations between the source and target Gaussian mixture parameters, for any symbol z ∈ Z and component i ∈ [k], are given by
$$\hat{\mu}_i(z) = A_i\, \mu_i(z) + b_i, \qquad \hat{\Sigma}_i(z) = C_i\, \Sigma_i(z)\, C_i^T, \qquad \hat{\alpha}_i(z) = \beta_i\, \alpha_i(z) + \gamma_i. \tag{4}$$
The affine transformation parameters $A_i \in \mathbb{R}^{d \times d}$ and $b_i \in \mathbb{R}^d$ transform the means, $C_i \in \mathbb{R}^{d \times d}$ transforms the covariance matrix, and $\beta_i, \gamma_i \in \mathbb{R}$ transform the prior logits. The vector of all adaptation parameters to be optimized is defined by $\psi^T = [\psi_1^T, \cdots, \psi_k^T]$, where $\psi_i$ contains all the affine-transformation parameters from component i. The number of adaptation parameters is given by $k\,(2d^2 + d + 2)$. This is typically much smaller than the number of MDN parameters (weights and biases from all layers), even if we consider only the final fully-connected layer for fine-tuning (see Table 1). In Fig. 2, the adaptation layer mapping ϕ(z) to ϕ̂(z) basically implements the parameter transformations defined in Eqn. (4). We observe that the affine-transformation parameters are not dependent on the symbol z (or the class), which is a constraint we impose in order to keep the number of adaptation parameters small. This is also consistent with the MDN parameters θc being independent of the symbol z. Allowing the affine transformations to depend on z would provide more flexibility, but at the same time require more target domain data for successful adaptation.
4We show in our experiments that both the fine-tuning approaches fail to adapt well.
5We perform ablation experiments (Appendix C.4) that evaluate our method under random Gaussian mixtures with mismatched components. We find that our method is robust even when these assumptions are violated.
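As a sketch, the parameter transformations of Eqn. (4) reduce to a few matrix operations per component; the helper names below are illustrative (our own), and the full-covariance form is shown.

```python
import numpy as np

def adapt_component(mu, Sigma, alpha, A, b, C, beta, gamma):
    # Eqn. (4) for one component i of one symbol z.
    return A @ mu + b, C @ Sigma @ C.T, beta * alpha + gamma

def n_adaptation_params(k, d):
    # |psi| = k (2 d^2 + d + 2)
    return k * (2 * d ** 2 + d + 2)

print(n_adaptation_params(k=5, d=2))  # 60 parameters for the paper's setting
```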
Proposition 2. Given m Gaussian mixtures from the source domain and m Gaussian mixtures from the target domain (one each per class), which satisfy Assumptions A1 and A2, the KL-divergence between Pθc(x,K | z) and Pθ̂c(x,K | z) can be computed in closed-form, and is given by:
$$D_\psi(P_{\theta_c}, P_{\hat{\theta}_c}) \,=\, \mathbb{E}_{P_{\theta_c}}\left[\log \frac{P_{\theta_c}(x, K \mid z)}{P_{\hat{\theta}_c}(x, K \mid z)}\right] \,=\, \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z) \log \frac{\pi_i(z)}{\hat{\pi}_i(z)} \,+\, \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z)\, D_{KL}\big(N(\cdot \mid \mu_i(z), \Sigma_i(z)),\, N(\cdot \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z))\big), \tag{5}$$
where K is the mixture component random variable. The first term is the KL-divergence between the component prior probabilities, which simplifies into a function of the parameters [β1, γ1, · · · , βk, γk] . The second term involves the KL-divergence between two multivariate Gaussians (a standard result), which also simplifies into a function of ψ.
The proof and the final expression for the KL-divergence as a function of ψ are given in Appendix A.1. The symbol priors {p(z), z ∈ Z} are estimated using the class proportions from the source dataset Ds. We note that this result is different from the KL-divergence between two arbitrary Gaussian mixtures, for which there is no closed-form expression (Hershey & Olsen, 2007).
3.2 REGULARIZED ADAPTATION OBJECTIVE
From the above analysis, we can formulate the MDN adaptation as the equivalent problem of finding the optimal set of affine transformations (one per component) mapping the source to the target Gaussian mixtures. To reduce the possibility of the adaptation finding bad solutions due to the small-sample setting, we introduce a regularization term based on the KL-divergence (defined earlier), which constrains the distribution shift produced by the affine transformations. We consider two scenarios for adaptation: 1) generative adaptation of the MDN in isolation, and 2) discriminative adaptation of the MDN as part of the autoencoder. In the first case, the goal of adaptation is to find a good generative model for the target channel distribution, while in the second case the goal is to improve the classification accuracy of the autoencoder on the target distribution. We focus on the discriminative adaptation here, and present the very similar generative adaptation in Appendix B.3.
Since the goal of adaptation is to improve the decoder’s accuracy in recovering the transmitted symbol z from the channel output x, we use the (negative) symbol posterior log-likelihood (PLL) as the first, data-dependent term of the adaptation objective. The second term is the simplified KL-divergence between the source and target Gaussian mixtures, which does not depend on the data.
$$J_{PLL}(\psi; \lambda) \,=\, \frac{-1}{N^t} \sum_{n=1}^{N^t} \log P_{\hat{\theta}_c}(z_n^t \mid x_n^t) \,+\, \lambda\, D_\psi(P_{\theta_c}, P_{\hat{\theta}_c}). \tag{6}$$
The symbol posterior $P_{\hat{\theta}_c}(z \mid x)$ is computed from the conditional $P_{\hat{\theta}_c}(x \mid z)$ and the symbol priors {p(z), z ∈ Z} using Bayes rule. We observe that the adaptation objective is a smooth and non-convex function of ψ. Also, computation of the objective and its gradient (w.r.t. ψ) are inexpensive operations since i) they do not require forward and back-propagation through the layers of the MDN, and ii) both N t and the dimension of ψ are small. Therefore, we use the BFGS Quasi-Newton method (Nocedal & Wright, 2006) for minimization, instead of SGD-based large-scale optimization (e.g., Adam). The regularization constant λ is a hyper-parameter of the proposed method, and we propose a validation metric in Appendix B.4 to set its value automatically.
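To illustrate the optimization structure (not the authors' implementation, which uses the BFGS optimizer from TensorFlow Probability, Appendix C.1), the toy sketch below minimizes a stand-in objective with SciPy's BFGS over the paper's λ grid; the quadratic terms are placeholders for the negative PLL and KL terms of Eqn. (6).

```python
import numpy as np
from scipy.optimize import minimize

def objective(psi_vec, psi0, lam):
    # Placeholder for Eqn. (6): a "data fit" term standing in for the negative
    # PLL, plus lam times a penalty standing in for D_psi, zero at psi0.
    return np.sum((psi_vec - 1.5) ** 2) + lam * np.sum((psi_vec - psi0) ** 2)

psi0 = np.ones(60)  # identity-like start; |psi| = 60 for k = 5, d = 2
solutions = {}
for lam in [1e-5, 1e-4, 1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0]:  # the paper's grid
    res = minimize(objective, psi0, args=(psi0, lam), method="BFGS")
    solutions[lam] = res.x
# In the full method, each solution is scored with the validation metric of
# Appendix B.4, and the (lam, psi) pair with the smallest value is kept.
```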
3.3 DECODER ADAPTATION USING FEATURE TRANSFORMATIONS
We propose a computationally-efficient feature transformation g−1 : Rd 7→ Rd at the decoder such that the transformed inputs x̂s = g−1(xt) are closely aligned to the source distribution on which
the decoder was trained (see Fig. 3). This is based on the optimal affine transformations ψ of the MDN found by minimizing Eqn. (6). This method does not require any change to the trained encoder and decoder networks, making it well suited for the few-shot DA setting. Consider a test input $x^t$ at the decoder from the target-domain marginal distribution $p^t(x) = \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \hat{\pi}_i(z)\, N\big(x \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big)$. As shown in Appendix B.2, conditioned on a given symbol z ∈ Z and component i ∈ [k], the affine transformation that maps from the target Gaussian distribution $x^t \mid z, i \sim N(x \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z))$ to the source Gaussian distribution $x^s \mid z, i \sim N(x \mid \mu_i(z), \Sigma_i(z))$ is given by
$$\hat{x}^s = g_{zi}^{-1}(x^t) := C_i^{-1}\big(x^t - A_i\, \mu_i(z) - b_i\big) + \mu_i(z). \tag{7}$$
However, this transformation requires knowledge of both the transmitted symbol z and the mixture component i, which are not observed at the decoder (the decoder only observes the channel output $x^t$). We address this by taking the expected affine transformation from target to source, where the expectation is with respect to the joint posterior over the symbol z and component i, given the channel output $x^t$. This posterior distribution based on the target Gaussian mixture is:
$$P_{\hat{\theta}_c}(z, i \mid x^t) = \frac{p(z)\, \hat{\pi}_i(z)\, N\big(x^t \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big)}{\sum_{z'} \sum_{j} p(z')\, \hat{\pi}_j(z')\, N\big(x^t \mid \hat{\mu}_j(z'), \hat{\Sigma}_j(z')\big)}.$$
The expected inverse-affine feature transformation at the decoder is then defined as
$$g^{-1}(x^t) := \mathbb{E}_{P_{\hat{\theta}_c}(z, i \mid x)}\big[g_{zi}^{-1}(x^t) \mid x^t\big] = \sum_{z \in \mathcal{Z}} \sum_{i \in [k]} P_{\hat{\theta}_c}(z, i \mid x^t)\, g_{zi}^{-1}(x^t). \tag{8}$$
We show that this conditional expectation is the optimal transformation from the standpoint of mean-squared-error estimation (Kay, 1993) in Appendix A.2. The adapted decoder based on this feature transformation is illustrated in Fig. 3 and defined as $\hat{D}_{\theta_d}(x^t; \psi) := D_{\theta_d}(g^{-1}(x^t))$. For small to moderate numbers of symbols m and components k, this transformation is computationally efficient and easy to implement at the receiver of a communication system. A discussion of the computational complexity of the proposed method is given in Appendix B.5.
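A direct NumPy transcription of Eqns. (7) and (8) could look like the sketch below; the data-structure layout (p_z, src_means, tgt, psi) is our own assumption, chosen only to make the posterior-weighted averaging explicit.

```python
import numpy as np
from scipy.stats import multivariate_normal

def decoder_transform(x_t, p_z, src_means, tgt, psi):
    # Eqn. (8): posterior-weighted average of the per-(z, i) inverse maps.
    # p_z[z]: symbol priors; src_means[z][i]: source means mu_i(z);
    # tgt[z][i] = (pi_hat, mu_hat, Sigma_hat); psi[i] = (A, b, C).
    weights, mapped = [], []
    for z in range(len(p_z)):
        for i, (pi_hat, mu_hat, Sigma_hat) in enumerate(tgt[z]):
            A, b, C = psi[i]
            mu = src_means[z][i]
            # Unnormalized posterior P(z, i | x_t) under the target mixture:
            weights.append(p_z[z] * pi_hat *
                           multivariate_normal.pdf(x_t, mu_hat, Sigma_hat))
            # Inverse affine map of Eqn. (7):
            mapped.append(np.linalg.solve(C, x_t - A @ mu - b) + mu)
    w = np.asarray(weights) / np.sum(weights)
    return np.sum(w[:, None] * np.asarray(mapped), axis=0)
```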
4 EXPERIMENTS
We perform experiments to evaluate the proposed adaptation method for the MDN and autoencoder. Our main findings are summarized as follows: 1) the proposed method adapts well to changes in the channel distribution using only a few samples per class, often leading to strong improvement over the baselines; 2) our method performs well under multiple simulated distribution changes, and notably in our mmWave FPGA experiments; 3) extensive ablation studies show that the proposed KL-divergence based regularization and the validation metric for setting λ are effective.
Setup. We implemented the MDN, autoencoder networks, and the adaptation methods in Python using TensorFlow (Abadi et al., 2015) and TensorFlow Probability. We used the following setting in our experiments. The size of the message set m is fixed to 16, corresponding to 4 bits. The dimension of the encoding (output of the encoder) d is set to 2, and the number of mixture components k is set to 5. More details on the experimental setup, neural network architecture, and the hyper-parameters are given in Appendix C.1.
Baseline Methods. We compare the performance of our method with the following baselines: 1) No adaptation, which is the MDN and autoencoder from the source domain without adaptation. 2) Retrained MDN and autoencoder, which is like an “oracle method” that has access to a large dataset from the target domain. 3) Finetune - where the method optimizes all the MDN parameters for 200 epochs and optimizes the decoder for 20 epochs 6. 4) Finetune last - which follows the same approach as “Finetune”, but only optimizes the last layer of MDN (all the layers of the decoder are however optimized). We note that traditional domain adaptation methods are not suitable for this problem because it requires adaptation of both the MDN (generative model) and the decoder.
Datasets. The simulated channel variations are based on models commonly used for wireless communication, specifically: i) Additive white Gaussian noise (AWGN), ii) Ricean fading, and iii)
6We found no significant gains with larger number of epochs in this case.
Uniform or flat fading (Goldsmith, 2005). Details on these channel models and calculation of their signal-to-noise ratio (SNR) are provided in Appendix F. We also created simulated distribution changes using random, class-conditional Gaussian mixtures for both the source and target channels (we also include random phase shifts). The parameters of the source and target Gaussian mixtures are generated in a random but controlled manner as detailed in Appendix C.3. We also evaluate the performance of the adaptation methods on real over-the-air wireless experiments. We use a recent high-performance mmWave testbed (Lacruz et al., 2021), featuring a high-end FPGA board with 2 GHz bandwidth per channel and 60 GHz SIVERS antennas (SIVERSIMA, 2020). We introduce distribution changes via in-phase and quadrature-phase (IQ) imbalance-based distortions to the symbol constellation, and gradually increase the level of imbalance in the system 7. More details on the FPGA experimental setup are given in Appendix C.2.
Evaluation Protocol. Due to the space limit, we provide details of the evaluation protocol such as train, adaptation, and test sample sizes, and the number of random trials used to get averaged performance in Appendix C.1. We report the symbol error rate (SER) on a large held-out test dataset (from the target domain) as a function of the number of target-domain samples per class. The only hyper-parameter λ of our method is set automatically using the validation metric proposed in Appendix B.4.
4.1 AUTOENCODER ADAPTATION ON SIMULATED DISTRIBUTION CHANGES
The adaptation results under simulated distribution changes are given in Figs. 4 and 5, with the symbol error rates plotted as a function of the number of target samples per class. In Fig. 4, we consider standard channel distributions such as AWGN, Ricean fading, and Uniform fading. In Fig. 5, we consider random Gaussian mixtures for both the source and the target distributions. We observe that the proposed adaptation leads to a strong improvement in SER in all cases, except in the case of AWGN to Ricean fading (Fig. 4c). We provide some insights on the failure of our method in this case in Appendix C.5. Note that the methods “No adapt” and “Retrained autoenc” have the same SER for all target sample sizes (i.e., a horizontal line). We find both the finetuning baselines to
7IQ imbalance is a common issue in RF communication that introduces distortions to the final constellation.
have very similar SER in all cases, and there is not much improvement compared to no adaptation. This suggests that our approach of constraining the number of adaptation parameters and using the KL-divergence regularization are effective in the few-shot DA setting (see Table 1).
4.2 AUTOENCODER ADAPTATION ON FPGA EXPERIMENTS
For this experiment, different levels of distribution change are introduced by varying the IQ imbalance over 20%, 25%, and 30% (higher IQ imbalance corresponds to larger distribution change). From Fig. 6, we observe that the proposed method achieves significant reduction in error rate compared to the (non-oracle) baselines. The relative improvement in SER over the baselines is more pronounced under higher IQ imbalance. For instance, at 30% IQ imbalance, our method achieves a relative SER improvement of around 69% over the fine-tuning baselines using only 10 samples per-class.
4.3 ADDITIONAL EXPERIMENTS
We have performed a number of additional experiments including ablation studies, which are reported in Appendix C.4 through C.6. They include: 1) evaluating the proposed validation metric for automatically setting the hyper-parameter λ; 2) evaluating the importance of the KL-divergence regularization in the adaptation objective; 3) performance of our method when the source and target Gaussian mixtures have a mismatch in the components (addressing Assumptions A1 and A2); 4) performance of our method when there is no distribution shift; and 5) performance of the generative adaptation of the MDN channel. To summarize the observations, we found the validation metric to be effective at setting the value of λ, and that our method has good performance even when Assumptions A1 and A2 are violated, or when there is no distribution shift. The generative MDN adaptation leads to increased log-likelihoods with as few as 2 samples per class.
5 CONCLUSIONS
In this work, we explore one of the first approaches for domain adaptation of autoencoder-based end-to-end communication in the few-shot setting. We first propose a light-weight and parameter-efficient method for adapting a Gaussian MDN with a very small number of samples from the target distribution. Based on the MDN adaptation, we propose an optimal input transformation method at the decoder that attempts to closely align the target domain inputs to the source domain. We demonstrate the effectiveness of the proposed methods through extensive experiments on both simulated channels and a mmWave FPGA testbed. A discussion of limitations and future directions is given in Appendix B.6.
ACKNOWLEDGMENTS
Banerjee, Raghuram, and Zeng were supported in part through the following grants — US National Science Foundation’s CNS-2112562, CNS-2107060, CNS-2003129, CNS-1838733, and CNS-1647152, and the US Department of Commerce’s 70NANB21H043. Somesh Jha was partially supported by the DARPA-GARD problem under agreement number 885000. The authors from IMDEA Networks were sponsored by the Spanish Ministry of Economic Affairs and Digital Transformation under European Union NextGeneration-EU projects TSI-063000-2021-59 RISC-6G and TSI-063000-2021-63 MAP-6G, and by the Regional Government of Madrid and the European Union through the European Regional Development Fund (ERDF) project REACT-CONTACT-CM-23479.
Appendix
Table 2: Commonly used notations
Notation : Description
y ∈ Y := {1, · · · ,m} : Input message or class label. Usually m = 2^b, where b is the number of bits.
1y, y ∈ Y : One-hot-coded representation of a label (message) y, with 1 at position y and zeros elsewhere.
z ∈ Z ⊂ Rd with |Z| = m : Encoded representation or symbol vector corresponding to an input message.
x ∈ Rd : Channel output; the feature vector to be classified by the decoder.
Eθe(1y) : Encoder NN with parameters θe mapping a one-hot-coded message to a symbol vector in Rd.
Dθd(x) = [Pθd(1 |x), · · · , Pθd(m |x)] : Decoder NN with parameters θd mapping the channel output into probabilities over the message set.
ŷ(x) = argmaxy∈Y Pθd(y |x) : Class (message) prediction of the decoder.
Pθc(x | z) : Conditional density (generative) model of the channel with parameters θc.
ϕ(z) = Mθc(z) : Mixture density network that predicts the parameters of a Gaussian mixture.
x = hθc(z,u) : Transfer or sampling function corresponding to the channel conditional density.
fθ(1y) = Dθd(hθc(Eθe(1y),u)) : Input-output mapping of the autoencoder with combined parameter vector θ^T = [θe^T, θc^T, θd^T].
ψ^T = [ψ1^T, · · · , ψk^T] : Affine transformation (adaptation) parameters per component used to adapt the MDN.
gzi and gzi^{−1}, i ∈ [k], z ∈ Z : Affine transformations between the components of the source-to-target Gaussian mixtures and vice versa.
DKL(p, q) : Kullback-Leibler divergence between the distributions p and q.
N(· |µ,Σ) : Multivariate Gaussian density with mean vector µ and covariance matrix Σ.
δ(x − x0) : Dirac delta (or impulse) function centered at x0.
Cat(p1, · · · , pk) : Categorical distribution with pi ≥ 0 and Σi pi = 1.
1(c) : Indicator function mapping a predicate c to 1 if true and 0 if false.
∥x∥p : ℓp norm of a vector x.
The appendices are organized as follows:
• Appendix A discusses the theoretical results from the main paper.
• Appendix B provides additional details on the proposed method, including:
  – Discussion on class labels and labeled data in the communication setting (Appendix B.1).
  – Feature and parameter transformation between multivariate Gaussians (Appendix B.2).
  – Generative adaptation of the MDN channel (Appendix B.3).
  – The validation metric used for setting the hyper-parameter λ (Appendix B.4).
  – Computational complexity analysis of the proposed method (Appendix B.5).
  – Limitations and future work (Appendix B.6).
• Appendix C provides additional details on the experiments and additional results, including ablation studies of the proposed method.
• Appendix D provides additional background on the following topics: 1) components of an end-to-end autoencoder-based communication system, 2) generative modeling using mixture density networks, 3) the training algorithm of the autoencoder, and 4) a primer on domain adaptation.
• Appendix E provides details on the MDN training and differentiable sampling using the Gumbel-softmax reparametrization.
• Appendix F provides details on the simulated channel distributions used in our experiments.
A THEORETICAL RESULTS
Proposition 1 (restatement). The joint distributions p(x, y, z) and p(x, y) can be expressed in the following form:
$$p(x, y, z) = p\big(x \mid E_{\theta_e}(1_y)\big)\, p(y)\, \delta\big(z - E_{\theta_e}(1_y)\big), \quad \forall x, z \in \mathbb{R}^d,\ y \in \mathcal{Y},$$
$$p(x, y) = p\big(x \mid E_{\theta_e}(1_y)\big)\, p(y), \quad \forall x \in \mathbb{R}^d,\ y \in \mathcal{Y}, \tag{9}$$
where δ(·) is the Dirac delta (or Impulse) function, and we define p(x | y) := p(x |Eθe(1y)) as the conditional distribution of x given the class y.
Proof. It follows from the dependence y → z → x defined by our generative model that
$$p(x, y, z) = p(y)\, p(z \mid y)\, p(x \mid z, y) = p(y)\, \delta\big(z - E_{\theta_e}(1_y)\big)\, p\big(x \mid E_{\theta_e}(1_y), y\big) = p(y)\, \delta\big(z - E_{\theta_e}(1_y)\big)\, p\big(x \mid E_{\theta_e}(1_y)\big).$$
In the second step, the conditional p(z | y) reduces to the Dirac delta since the symbol z can only take one of the m values from the constellation Z = {Eθe(11), · · · ,Eθe(1m)} (for a fixed encoder mapping). The distribution p(x, y) in Eq. (9) is obtained from the third step by integrating p(x, y, z) over all z, and using the integration property of the Dirac delta.
A.1 KL-DIVERGENCE BETWEEN THE SOURCE AND TARGET GAUSSIAN MIXTURES
Proposition 2 (restatement). Given m Gaussian mixtures from the source domain and m Gaussian mixtures from the target domain (one each per class), which satisfy Assumptions A1 and A2, the KL-divergence between Pθc(x,K | z) and Pθ̂c(x,K | z) can be computed in closed form, and is given by:
$$D_\psi(P_{\theta_c}, P_{\hat{\theta}_c}) = \mathbb{E}_{P_{\theta_c}}\left[\log \frac{P_{\theta_c}(x, K \mid z)}{P_{\hat{\theta}_c}(x, K \mid z)}\right] = \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z) \log \frac{\pi_i(z)}{\hat{\pi}_i(z)} + \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z)\, D_{KL}\big(N(\cdot \mid \mu_i(z), \Sigma_i(z)),\, N(\cdot \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z))\big), \tag{10}$$
where K is the mixture component random variable. The first term is the KL-divergence between the component prior probabilities, which simplifies into a function of the parameters [β1, γ1, · · · , βk, γk] . The second term involves the KL-divergence between two multivariate Gaussians (a standard result), which also simplifies into a function of ψ.
Proof. Referring to § 3.1, we derive the closed-form KL-divergence between the source and target Gaussian mixtures under Assumptions 1 and 2, i.e., the source and target Gaussian mixtures have the same number of components that have a one-to-one association. Recall that θc and θ̂c are the parameters of the original (source) and the adapted (target) MDN respectively. Let K ∈ {1, · · · , k} denote the latent component random variable.
$$D_\psi(P_{\theta_c}, P_{\hat{\theta}_c}) = \mathbb{E}_{P_{\theta_c}}\left[\log \frac{P_{\theta_c}(x, K \mid z)}{P_{\hat{\theta}_c}(x, K \mid z)}\right]$$
$$= \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \int_{\mathbb{R}^d} P_{\theta_c}(x, K = i \mid z) \log \frac{P_{\theta_c}(x, K = i \mid z)}{P_{\hat{\theta}_c}(x, K = i \mid z)}\, dx$$
$$= \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} P_{\theta_c}(K = i \mid z) \int_{\mathbb{R}^d} P_{\theta_c}(x \mid z, K = i) \log \frac{P_{\theta_c}(K = i \mid z)\, P_{\theta_c}(x \mid z, K = i)}{P_{\hat{\theta}_c}(K = i \mid z)\, P_{\hat{\theta}_c}(x \mid z, K = i)}\, dx$$
$$= \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z) \int_{\mathbb{R}^d} N\big(x \mid \mu_i(z), \Sigma_i(z)\big) \left[\log \frac{\pi_i(z)}{\hat{\pi}_i(z)} + \log \frac{N\big(x \mid \mu_i(z), \Sigma_i(z)\big)}{N\big(x \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big)}\right] dx$$
$$= \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z) \log \frac{\pi_i(z)}{\hat{\pi}_i(z)} + \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \pi_i(z)\, D_{KL}\big(N(\cdot \mid \mu_i(z), \Sigma_i(z)),\, N(\cdot \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z))\big). \tag{11}$$
The second term in the final expression involves the KL-divergence between two multivariate Gaussians (a standard result), given by
$$D_{KL}\big(N(\cdot \mid \mu, \Sigma),\, N(\cdot \mid \hat{\mu}, \hat{\Sigma})\big) = \frac{1}{2} \log \frac{\det(\hat{\Sigma})}{\det(\Sigma)} + \frac{1}{2} \mathrm{tr}(\hat{\Sigma}^{-1} \Sigma) + \frac{1}{2} (\hat{\mu} - \mu)^T \hat{\Sigma}^{-1} (\hat{\mu} - \mu) - \frac{d}{2}.$$
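This standard result translates directly to code; a small NumPy version, useful for sanity-checking the closed-form expressions above (function name is our own):

```python
import numpy as np

def kl_gauss(mu0, Sig0, mu1, Sig1):
    # KL( N(mu0, Sig0) || N(mu1, Sig1) ), full covariances.
    d = mu0.shape[0]
    Sig1_inv = np.linalg.inv(Sig1)
    diff = mu1 - mu0
    return 0.5 * (np.log(np.linalg.det(Sig1) / np.linalg.det(Sig0))
                  + np.trace(Sig1_inv @ Sig0) + diff @ Sig1_inv @ diff - d)

# Sanity check: the divergence of a Gaussian from itself is zero.
assert np.isclose(kl_gauss(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2)), 0.0)
```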
For clarity, we further simplify Eq. (11) for the case of diagonal covariances by applying the above result. Recall that the Gaussian mixture parameters of the source and target domains are related by the parameter transformations in Eq. (4). The second term in Eq. (11) involving the KL-divergence between multivariate Gaussians, simplifies to
$$D_{KL}\big(N(\cdot \mid \mu_i(z), \sigma_i^2(z)),\, N(\cdot \mid \hat{\mu}_i(z), \hat{\sigma}_i^2(z))\big) = \frac{1}{2} \sum_{j=1}^{d} \left[\log c_{ij}^2 + \frac{1}{c_{ij}^2} + \frac{1}{c_{ij}^2\, \sigma_{ij}^2(z)} \big(a_{ij}\, \mu_{ij}(z) + b_{ij} - \mu_{ij}(z)\big)^2\right] - \frac{d}{2}. \tag{12}$$
The first term in Eq. (11), involving the KL-divergence between the component prior probabilities, can be expressed as a function of the adaptation parameters [β1, γ1, · · · , βk, γk] as follows:
$$\sum_{i=1}^{k} \pi_i(z) \log \frac{\pi_i(z)}{\hat{\pi}_i(z)} = \sum_{i=1}^{k} \frac{e^{\alpha_i(z)}}{q(z)} \left[\log \frac{e^{\alpha_i(z)}}{q(z)} - \log \frac{e^{\beta_i \alpha_i(z) + \gamma_i}}{\hat{q}(z)}\right]$$
$$= \log\Big(\sum_{i=1}^{k} e^{\beta_i \alpha_i(z) + \gamma_i}\Big) - \log\Big(\sum_{i=1}^{k} e^{\alpha_i(z)}\Big) + \sum_{i=1}^{k} \frac{e^{\alpha_i(z)}}{q(z)} \big(\alpha_i(z) - \beta_i \alpha_i(z) - \gamma_i\big), \tag{13}$$
where $q(z) = \sum_{j=1}^{k} e^{\alpha_j(z)}$ and $\hat{q}(z) = \sum_{j=1}^{k} e^{\beta_j \alpha_j(z) + \gamma_j}$ are the normalization terms in the softmax function. Substituting Eqs. (12) and (13) into the last step of Eq. (11) gives the KL-divergence between the source and target Gaussian mixtures as a function of the adaptation parameters ψ.
A.2 OPTIMALITY OF THE FEATURE TRANSFORMATION
We show that the proposed feature transformation at the decoder in § 3.3 is optimal in the minimum mean-squared-error sense. The problem setting is that, at the decoder, we observe an input $x^t$ from the target domain marginal distribution, i.e.,
$$x^t \sim p^t(x) = \sum_{z \in \mathcal{Z}} p(z) \sum_{i=1}^{k} \hat{\pi}_i(z)\, N\big(x \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z)\big),$$
where Z = {Eθe(11), · · · ,Eθe(1m)} is the encoder’s constellation. Suppose we knew the symbol z = Eθe(1y) that was transmitted and the mixture component i ∈ [k]; then the transformation $g_{zi}^{-1}(x^t)$ in Eq. (7) could map $x^t$ to the corresponding Gaussian component of the source distribution. However, since z and i are not observed at the decoder, we propose to find the transformation $g^{-1} : \mathbb{R}^d \mapsto \mathbb{R}^d$ (independent of z and i) that minimizes the following expected squared error:
$$J\big(g^{-1}(x^t)\big) = \frac{1}{2}\, \mathbb{E}_{P_{\hat{\theta}_c}(z, i \mid x)}\Big[\big\|g_{zi}^{-1}(x^t) - g^{-1}(x^t)\big\|_2^2 \,\Big|\, x^t\Big]. \tag{14}$$
This is the conditional expectation over (z, i) given $x^t$ with respect to the posterior distribution $P_{\hat{\theta}_c}(z, i \mid x)$. Since $x^t$ is fixed, the above objective is a function of the vector $w := g^{-1}(x^t) \in \mathbb{R}^d$, and it can be simplified as follows:
$$J(w) = \frac{1}{2}\, \mathbb{E}_{P_{\hat{\theta}_c}(z, i \mid x)}\Big[\big\|g_{zi}^{-1}(x^t) - w\big\|_2^2 \,\Big|\, x^t\Big]$$
$$= \frac{1}{2}\, \mathbb{E}_{P_{\hat{\theta}_c}(z, i \mid x)}\Big[g_{zi}^{-1}(x^t)^T\, g_{zi}^{-1}(x^t) \,\Big|\, x^t\Big] + \frac{1}{2} w^T w - w^T\, \mathbb{E}_{P_{\hat{\theta}_c}(z, i \mid x)}\Big[g_{zi}^{-1}(x^t) \,\Big|\, x^t\Big].$$
Note that w comes outside the expectation since it does not depend on z or i. The minimum of this simple quadratic function can be found by setting the gradient of J with respect to w to 0, giving
$$w^\star = g^{-1}(x^t) = \mathbb{E}_{P_{\hat{\theta}_c}(z, i \mid x)}\Big[g_{zi}^{-1}(x^t) \,\Big|\, x^t\Big] = \sum_{z \in \mathcal{Z}} \sum_{i \in [k]} P_{\hat{\theta}_c}(z, i \mid x^t)\, g_{zi}^{-1}(x^t).$$
This is the feature transformation at the decoder proposed in § 3.3.
B ADDITIONAL DETAILS ON THE PROPOSED METHOD
In this section we provide additional details on the proposed method that could not be discussed in § 3 of the main paper.
B.1 CLASS LABELS AND LABELED DATA
We would like to clarify that the statement “class labels are available for free” is made in Section 3 in order to highlight the fact that class labels are easy to obtain in this end-to-end communication
setting, unlike other domains (e.g. computer vision) where labeling data could be expensive. Since the transmitted message is also the class label, it is always available without additional effort during the data collection (from the packet preambles). However, note that it is still challenging / expensive to collect a large number of samples for domain adaptation, as discussed in Section 1. In contrast, it may be easy to obtain plenty of unlabeled data in other domains such as computer vision, where labeling is expensive.
In communication protocols, preambles are attached to the front of the packets for synchronization, carrier frequency offset correction, and other tasks. The preambles consist of sequences of known symbols (which have a one-to-one mapping to the messages). Therefore, these sequences can be used as the labeled dataset since the receiver obtains the distorted symbol and knows the ground truth. The proposed MDN adaptation and input transformation at the decoder do not incur any modifications to the encoder (transmitter side). The constellation learned by the autoencoder is kept fixed during adaptation. Therefore, using the preambles from a small number of packets, our method performs adaptation at the receiver side and maintains the symbol error rate performance without communicating any information back to the encoder.
B.2 TRANSFORMATION BETWEEN MULTIVARIATE GAUSSIANS
We discuss the feature and parameter transformations between any two multivariate Gaussians. This result was applied to formulate the MDN adaptation in Eqs. (4) and (7). Consider first the standard transformation from x ∼ N(· |µ,Σ) to x̂ ∼ N(· | µ̂, Σ̂) given by the two-step process:
• Apply a whitening transformation $z = D^{-1/2} U^T (x - \mu)$ such that z ∼ N(· |0, I).
• Transform z into the new Gaussian density using $\hat{x} = \hat{U}\, \hat{D}^{1/2}\, z + \hat{\mu}$.
We have denoted the eigen-decomposition of the covariance matrices by $\Sigma = U D U^T$ and $\hat{\Sigma} = \hat{U} \hat{D} \hat{U}^T$, where U and Û are the orthonormal eigenvector matrices, and D and D̂ are the diagonal eigenvalue matrices. Combining the two steps, the overall transformation from x to x̂ is given by
$$\hat{x} = \hat{U}\, \hat{D}^{1/2}\, D^{-1/2}\, U^T (x - \mu) + \hat{\mu}. \tag{15}$$
Suppose we define the matrix $C = \hat{U}\, \hat{D}^{1/2}\, D^{-1/2}\, U^T$; then it is easily verified that the covariance matrices are related by $\hat{\Sigma} = C \Sigma C^T$. In general, the mean vector and covariance matrix of any two Gaussians can be related by the following parameter transformations:
$$\hat{\mu} = A \mu + b \quad \text{and} \quad \hat{\Sigma} = C \Sigma C^T, \tag{16}$$
with parameters $A \in \mathbb{R}^{d \times d}$, $b \in \mathbb{R}^d$, and $C \in \mathbb{R}^{d \times d}$. Substituting the above parameter transformations into the feature transformation in Eq. (15), we get
$$\hat{x} = C (x - \mu) + A \mu + b.$$
From the above, we can also define the inverse feature transformation from $\hat{x} \sim N(\cdot \mid \hat{\mu}, \hat{\Sigma})$ to $x \sim N(\cdot \mid \mu, \Sigma)$:
$$x = C^{-1} (\hat{x} - A \mu - b) + \mu.$$
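These transformations are easy to verify numerically; the NumPy sketch below constructs C from the eigen-decompositions and checks that mapped samples acquire the target covariance (the sample size and target parameters are arbitrary choices of ours).

```python
import numpy as np

def gaussian_map(x, mu, Sig, mu_hat, Sig_hat):
    # Eqn. (15): map x ~ N(mu, Sig) to x_hat ~ N(mu_hat, Sig_hat).
    w0, U0 = np.linalg.eigh(Sig)      # Sig = U D U^T
    w1, U1 = np.linalg.eigh(Sig_hat)  # Sig_hat = U_hat D_hat U_hat^T
    C = U1 @ np.diag(np.sqrt(w1)) @ np.diag(1.0 / np.sqrt(w0)) @ U0.T
    return C @ (x - mu) + mu_hat

rng = np.random.default_rng(0)
x = rng.multivariate_normal(np.zeros(2), np.eye(2), size=10000)
y = np.array([gaussian_map(xi, np.zeros(2), np.eye(2),
                           np.ones(2), np.diag([4.0, 0.25])) for xi in x])
print(np.cov(y.T))  # approximately diag(4.0, 0.25)
```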
B.3 GENERATIVE ADAPTATION OF THE MDN
In § 3.2, we discussed the discriminative adaptation objective for the MDN, which is used when the MDN is adapted as part of the autoencoder in order to improve the end-to-end error rate. This adaptation approach was used for the experiments in § 4. On the other hand, we may be interested in adapting the MDN in isolation with the goal of improving its performance as a generative model of the channel. For this scenario, the adaptation objective Eq. 6 is modified as follows. The first (data-dependent) term is replaced with the negative conditional log-likelihood (CLL) of the target dataset, while the second KL-divergence term remains the same:
$$J_{CLL}(\psi; \lambda) = \frac{-1}{N^t} \sum_{n=1}^{N^t} \log P_{\hat{\theta}_c}(x_n^t \mid z_n^t) + \lambda\, D_\psi(P_{\theta_c}, P_{\hat{\theta}_c}), \tag{17}$$
where µ̂i(z), Σ̂i(z) and α̂i(z) as a function of ψ are given by Eq. (4). The parameters of the original Gaussian mixture αi(z),µi(z),Σi(z), ∀i are constants since they have no dependence on
ψ. The regularization constant λ ≥ 0 controls the allowed KL-divergence between the source and target Gaussian mixtures. Small values of λ weight the CLL term more, allowing more exploration in the adaptation, while large values of λ impose a strong regularization to constrain the space of target distributions. We evaluate the performance of this generative MDN adaptation in Appendix C.6.
B.4 VALIDATION METRIC FOR AUTOMATICALLY SETTING λ
The choice of λ in the adaptation objectives Eqs. (6) and 17 is crucial as it sets the right level of regularization suitable for the target domain distribution. Since the target domain dataset is very small, it is difficult to apply cross-validation type of methods to select λ. We propose a validation metric V (ψ ;Dt) that utilizes the feature-transformed target domain dataset to evaluate the quality of the adapted solutions for different λ values.
Let ψ denote the adaptation parameters found by minimizing the objective Eq. (6) for a specific λ ≥ 0. The feature transformation (from target to source domain) at the decoder g−1(x) based on the adaptation parameters ψ is given by Eq. (8). Recall that the target domain dataset is Dt = {(xtn, ytn, ztn), n = 1, · · · , N t}. We define the feature-transformed target domain dataset as:
$$D_{\text{trans}}^t = \big\{\big(g^{-1}(x_n^t),\, y_n^t,\, z_n^t\big),\ n = 1, \cdots, N^t\big\}.$$
Suppose ψ is a good adaptation solution; then we expect the decoder (trained on the source domain dataset) to have good classification performance on $D_{\text{trans}}^t$. For a given feature-transformed target domain sample, the decoder predicts the class posterior probabilities $D_{\theta_d}(g^{-1}(x_n^t)) = [P_{\theta_d}(1 \mid g^{-1}(x_n^t)), \cdots, P_{\theta_d}(m \mid g^{-1}(x_n^t))]$. We define the validation metric as the negative posterior log-likelihood of the decoder on $D_{\text{trans}}^t$, given by
$$V(\psi; D^t) = -\frac{1}{N^t} \sum_{n=1}^{N^t} \log P_{\theta_d}\big(y_n^t \mid g^{-1}(x_n^t)\big). \tag{18}$$
We expect smaller values of V (ψ ;Dt) to correspond to better adaptation solutions. The adaptation objective is minimized with λ varied over a range of values, and in each case the adapted solution ψ is evaluated using the validation metric. The pair of λ and ψ resulting in the smallest validation metric is chosen as the final adapted solution. The search set of λ used in our experiments was {10−5, 10−4, 10−3, 10−2, 0.1, 1, 10, 100}. See Appendix C.4 for an ablation study on the choice of hyper-parameter λ using this validation metric.
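Schematically, the λ selection is a short grid search; in the sketch below, fit_psi (BFGS minimization of Eqn. (6) for a fixed λ) and validation_metric (Eqn. (18)) are hypothetical helpers passed in as arguments.

```python
LAMBDA_GRID = [1e-5, 1e-4, 1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0]  # grid from the paper

def select_lambda(target_data, fit_psi, validation_metric):
    # Return the (lambda, psi) pair with the smallest V(psi; D_t).
    candidates = [(lam, fit_psi(target_data, lam)) for lam in LAMBDA_GRID]
    return min(candidates, key=lambda c: validation_metric(c[1], target_data))
```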
Generative MDN Adaptation. The validation metric proposed above depends on the decoder, and cannot be used when the MDN is adapted as a generative model in isolation (Appendix B.3). For this setting, we modify the validation metric based on the following idea. Suppose the adaptation finds a good solution, then we expect Dttrans to have a high conditional log-likelihood under the (original) source domain MDN. The validation metric is therefore given by
$$V(\psi; D^t) = -\frac{1}{N^t} \sum_{n=1}^{N^t} \log P_{\theta_c}\big(g^{-1}(x_n^t) \mid z_n^t\big), \tag{19}$$
where Pθc is the Gaussian mixture given by Eq. 1.
B.5 COMPLEXITY ANALYSIS
We provide an analysis of the computational complexity of the proposed adaptation methods.
MDN Adaptation.
The number of free parameters being optimized in the adaptation objective (Eqs. 6 or 17) is $|\psi| = k\,(2d^2 + d + 2)$. This is much smaller than the number of parameters in a typical MDN, even considering only the final fully-connected layer (see Table 1 for a comparison). Each step of the BFGS optimization involves computing the objective function, its gradient, and an estimate of its inverse Hessian. The cost of one step of BFGS can thus be expressed as $O(N^t\, k\, d^2\, |\psi|^2)$. Suppose BFGS runs for a maximum of T iterations and the optimization is repeated for L values of λ; then the overall cost of adaptation is $O(L\, T\, N^t\, k\, d^2\, |\psi|^2)$. Note that the optimizations for different λ values can easily be solved in parallel.
Test-time Adaptation at the Decoder.
We analyze the computational cost of the feature-transformation-based adaptation at the decoder proposed in § 3.3. Consider a single test input $x^t$ at the decoder. The feature transformation method first computes the posterior distribution $P_{\hat{\theta}_c}(z, i \mid x^t)$ over the set of symbol-component pairs of size km. Computation of each exponent factor in the posterior distribution requires $O(d^3)$ operations for the full-covariance case, and $O(d)$ operations for the diagonal-covariance case. This corresponds to calculation of the log of the Gaussian density. Therefore, computation of the posterior distribution for a single (z, i) pair requires $O(k\, m\, d^3)$ operations for the full-covariance case (similarly for the diagonal case). Computation of the affine transformation $g_{zi}^{-1}(x^t)$ for a single (z, i) pair requires $O(d^2)$ operations (the matrix $C_i$ only needs to be inverted once prior to test-time adaptation). Since calculation of the posterior term dominates the computation, the overall cost of computing the transformation in Eq. (8) over the km symbol-component pairs will be $O(k\, m \cdot k\, m\, d^3) = O(k^2 m^2 d^3)$.
We note that in practical communication systems d is small (typically d = 2). The number of symbols or messages m can vary from 4 to 1024 in powers of 2. The number of mixture components k can be any positive integer, but is usually not more than a few tens to keep the size of the MDN practical. Therefore, the computational cost of test-time adaptation at the decoder based on the feature transformation method is relatively small, making our proposed adaptation very computationally efficient to implement at the receiver side of a communication system.
B.6 LIMITATIONS AND FUTURE WORK
The proposed work focuses mainly on a mixture density network (MDN) as the generative channel model, which allows us to exploit some of their useful properties in our formulation. Generalizing the proposed few-shot domain adaptation to other types of generative channel models such as conditional GANs, VAEs, and normalizing flows (Dinh et al., 2017) could be an interesting direction. These generative models can handle more high-dimensional structured inputs.
The proposed work does not adapt the encoder network, i.e., the autoencoder constellation is not adapted to changes in the channel distribution. Adapting the encoder, decoder, and channel networks jointly would allow for more flexibility, but would likely be slower and require more data from the target distribution.
We focused on memoryless channels, where inter-symbol-interference (ISI) is not a problem. In practice, communication channels can have memory and ISI would have to be addressed by the training and adaptation methods. Under changing channels, one would have to also adapt an Equalizer model (algorithm) in order to mitigate ISI.
C ADDITIONAL EXPERIMENTS
We provide additional details on the experiments in § 4 and report additional results, including ablation studies on the proposed method.
C.1 EXPERIMENTAL SETUP
We implemented the mixture density network and communication autoencoder models using TensorFlow (Abadi et al., 2015) and TensorFlow Probability. We used the BFGS optimizer implementation available in TensorFlow Probability. The code base for our work has been submitted as a supplementary material. All the experiments were run on a Macbook Pro with 16 GB memory and 8 CPU cores. Table 3 summarizes the architecture of the encoder, MDN (channel model), and decoder neural networks. Note that the output layer of the MDN is a concatenation (denoted by ⊕) of three fully-connected layers predicting the means, variances, and mixing prior logit parameters of the Gaussian mixture. The following setting is used in all our experiments. The size of the message set m (also the number of classes) was fixed to 16, corresponding to 4 bits. The dimension of the encoding d was set to 2, and the number of mixture components k was set to 5. The size of the hidden layers nh was set to 100.
The parameters ψ of the proposed adaptation method are initialized as follows for each component i:
$$A_i = I_d, \quad b_i = 0, \quad C_i = I_d, \quad \beta_i = 1, \quad \gamma_i = 0,$$
where Id is the d× d identity matrix. This initialization ensures that the target Gaussian mixtures (per class) are always initially equal to the source Gaussian mixtures. The regularization constant λ in the adaptation objective was varied over 8 equally-spaced values on the log-scale with range 10−5 to 100, specifically {10−5, 10−4, 10−3, 10−2, 0.1, 1, 10, 100}. The λ value and ψ corresponding to the smallest validation metric are selected as the final solution.
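In code, this identity initialization is a few lines (NumPy; the dictionary layout is an illustrative choice of ours):

```python
import numpy as np

def init_psi(k=5, d=2):
    # Identity initialization: the target mixture starts equal to the source.
    return [dict(A=np.eye(d), b=np.zeros(d), C=np.eye(d), beta=1.0, gamma=0.0)
            for _ in range(k)]
```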
We used the Adam optimizer (Kingma & Ba, 2015) with a fixed learning rate of 0.001, batch size of 128, and 100 epochs for training the MDN. For adaptation of the MDN using the baseline methods Finetune and Finetune last, we used Adam with the same learning rate for 200 epochs. The batch size is set as b = max{10, 0.1N t}, where N t is number of adaptation samples in the target dataset. For training the autoencoder using Algorithm 1, we found that stochastic gradient descent (SGD) with Nesterov momentum (constant 0.9), and an exponential learning rate schedule between 0.1 and 0.005 works better than Adam.
Finetuning Baselines. We provide additional details on the baselines Finetune and Finetune last. Both the methods first initialize the target domain MDN, encoder, and decoder networks with the corresponding parameters from the source domain. The method Finetune first finetunes all the MDN parameters to minimize the conditional log-likelihood of the target dataset using the Adam optimizer. After the MDN is finetuned, we freeze the parameters of the MDN and encoder, and train only the decoder using data generated from the updated MDN channel. The method Finetune last differs from Finetune in that it optimizes only the weights of the final MDN layer.
From the results in Figures 4, 5, and 6, we observe that the baselines Finetune and Finetune last have very similar performance compared to the case of no adaptation. We have investigated this carefully and verified that this is not due to a bug or insufficient optimization (e.g., by checking if the final weights of the MDN and decoder are different for both methods). For both methods, we tried a range of learning rates for Adam and increased the number of epochs to a large number (beyond 200 was not helpful). We have reported the best-case results for these methods, which suggests that they are not effective at adaptation using small target domain datasets. As mentioned in Section 4.1, we hypothesize that using the KL-divergence based regularization and constraining the number of adaptation parameters leads to more effective performance of our method.
Uncertainty Estimation. Since there is inherent randomness in our experiments, especially with the small sample sizes of the target dataset, we always report average results from multiple trials. For the experiments on standard simulated channel variations (e.g., AWGN to Ricean fading), we report the results from 10 trials. For the random Gaussian mixtures experiment, we report the average and standard error over 50 random source/target dataset pairs. For the FPGA experiments, we report the results from 20 random trials. The average metrics (symbol error rate and log-likelihood) are reported in the plots.
Evaluation Protocol. We create a random class-stratified 50-50 train-test split (each of size 300,000) for data from both the source and target domains. Performance on both domains is always evaluated on the held-out test split. The train split from the target domain dataset is sub-sampled to create adaptation datasets of different sizes, specifically with 5, 10, 20, 30, 40, and 50 samples per class (symbol). For the generative adaptation experiments on the MDN (Appendix C.6), the number of adaptation samples from the target domain is reduced even further. We varied it from 2 samples per-class to 20 samples per-class in order to highlight the improvements obtained by the proposed method. The oracle baseline method, which retrains the autoencoder and MDN on the target distribution, uses the entire training dataset from the target domain.
Choice of SNR. For the experiments on simulated channel distributions such as AWGN, Ricean fading, and Uniform fading, we set the signal-to-noise ratio (SNR) to 14 dB for the source distribution and 20 dB for the target distribution. The connection between the SNR and the distribution parameters is given in Appendix F. We have experimented with other combinations of SNR for the source and target channels and found a similar trend in the adaptation performance.
In the simulated experiments, we focused on the SNR range of 14 dB to 20 dB. Our process for selecting this SNR range was by first evaluating the symbol error rate (SER) vs. SNR curve of the autoencoder for the different simulated channel distributions. We found that going below 14 dB SNR results in a degradation of the autoencoder’s performance (except for the AWGN channel, which we do not use as a target distribution). Also, going above 20 dB SNR did not lead to a significant decrease in the SER. For the channels such as Ricean fading and Uniform fading, we found that even a retrained autoencoder has a relatively high error rate for lower SNRs.
C.2 DETAILS ON THE FPGA EXPERIMENT
Referring to the experiment in § 4.2, for the real and over-the-air traces we used the platform from Lacruz et al. (2021). This ultra-wide-band mm-wave transceiver baseband memory-based design is developed on top of a ZCU111 RFSoC FPGA. This evaluation board features a Zynq UltraScale+ ZU28DR. The FPGA is equipped with 8× 8 AD/DA converters with Giga-sampling capabilities, which makes it ideal for RF system development; the board includes 4 GB of DDR4 memory, RF-ADCs with up to 4 GSPS of sampling rate, and RF-DACs with up to 6.544 GSPS. This board also includes a quad-core ARM Cortex-A53 and a dual-core ARM Cortex-R5 real-time processor.
For the radio frequency, we used 60 GHz RF front-end antennas. These kits include a 16 + 16 TRX patch array antenna plus the RF module with up/down conversion from baseband to I/Q channels,
and TX/RX local oscillator (LO) frequency control. The antennas use 57 − 71 GHz, a range of frequencies that cover the unlicensed 60 GHz band for mm-wave channels, and are managed from a PC Host via USB.
We implemented hardware-in-the-loop training. For the experimentation on real traces, we use Matlab as the central axis. The PC host running Matlab is connected to the platform via Ethernet. The FPGA can transmit different custom waveforms, such as 16-QAM frames from the 802.11ad and 802.11ay standards, with 2 GHz of bandwidth. The frames are sent over-the-air via the 60 GHz radio frequency kits, and the samples are stored in the FPGA DDR memory. We decode the received data from the transmission, removing the preamble and header fields and extracting the symbols to train the MDN. We add a preamble to the constellation generated by the MDN for packet detection purposes, and we again transmit the new waveforms over-the-air. Finally, the adaptation is performed offline with the decoded symbols from the custom autoencoder-learned constellation.
Source and Target Domains.
For the experiment in § 4.2, we introduced distribution changes via IQ imbalance-based distortions to the symbol constellation, and evaluated the adaptation performance as a function of the level of imbalance. The source domain would be the original channel, the over-the-air link between the transmitter and receiver on which the training data is collected. This source domain data is used for training the MDN and the autoencoder. The target domain would be a modification of the source domain where the symbols used by the transmitter are distorted by modifying the in-phase and quadrature-phase (IQ) components of the RF signal. This causes a change in the distribution observed by the receiver (decoder), leading to a drop in performance without any adaptation.
C.3 DETAILS ON THE RANDOM GAUSSIAN MIXTURE DATASETS
We created a simulated distribution shift setting where data from both the source and target domains are generated from class-conditional Gaussian mixtures whose parameters are modified between the two domains (e.g., see Fig. 7). The parameters for the source and target Gaussian mixtures are generated as follows:
Source Domain. The source domain data is generated with a standard 16-QAM constellation ZQAM, which has 16 classes (messages). Let ks be the number of components in the source Gaussian mixture.
For each z ∈ ZQAM:
• Calculate dmin, the minimum distance from z to the remaining symbols in ZQAM. Let σs = dmin / 4 be a constant standard deviation for this symbol.
• Component priors: generate πi(z) ∼ Unif(0.05, 0.95), ∀i ∈ [ks]. Normalize the priors to sum to 1.
• Component means: generate µi(z) ∼ N(· | z, σ2sI), ∀i ∈ [ks].
• Component covariances: generate $s_1, \cdots, s_d \overset{iid}{\sim} \mathrm{Unif}(0.2\sigma_s, \sigma_s)$ and let $\Sigma_i(z) = \mathrm{diag}(s_1^2, \cdots, s_d^2),\ \forall i \in [k_s]$ (the covariances are diagonal).
• Generate $N^s/m$ samples corresponding to symbol z from the Gaussian mixture: $x_n^s \sim \sum_{i=1}^{k_s} \pi_i(z)\, N(x \mid \mu_i(z), \Sigma_i(z))$.
Target Domain. The parameters of the target Gaussian mixture are generated in a very similar way. The MDN and autoencoder are trained on the source domain dataset. Let Z = {Eθe(11), · · · ,Eθe(1m)} be the constellation learned by the autoencoder. Let kt be the number of components in the target Gaussian mixture. For each z ∈ Z:
• Calculate dmin, the minimum distance from z to the remaining symbols in Z . Let σt = dmin / 4 be a constant standard deviation for this symbol.
• Component priors: generate π̂i(z) ∼ Unif(0.05, 0.95), ∀i ∈ [kt]. Normalize the priors to sum to 1.
• Component means: generate µ̂i(z) ∼ N(· | z, σ2t I), ∀i ∈ [kt].
• Component covariances: generate $s_1, \cdots, s_d \overset{iid}{\sim} \mathrm{Unif}(0.2\sigma_t, \sigma_t)$ and let $\hat{\Sigma}_i(z) = \mathrm{diag}(s_1^2, \cdots, s_d^2),\ \forall i \in [k_t]$ (the covariances are diagonal).
• Generate $N^t/m$ samples corresponding to symbol z from the Gaussian mixture: $x_n^t \sim \sum_{i=1}^{k_t} \hat{\pi}_i(z)\, N(x \mid \hat{\mu}_i(z), \hat{\Sigma}_i(z))$.
We set ks = kt = 3, except for the experiment where the source and target Gaussian mixtures are mismatched. In this case, ks and kt are randomly selected for each dataset from the range {3, 4, 5, 6}.

Random Phase Shift. We allow the channel output x to be randomly phase shifted on top of other distribution changes. This is done by matrix multiplication of x with a rotation matrix, where the rotation angle for each sample is uniformly selected from [−ϕ, ϕ]. We set ϕ to π/18, or 10 degrees. Results on a dataset with a random phase shift applied on top of the random Gaussian mixture distribution shift can be found in Fig. 5c.
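The source-domain procedure above can be transcribed into a short NumPy routine; the 4×4 unit grid below is a stand-in for the 16-QAM constellation, and the function name and seed are illustrative choices.

```python
import numpy as np

def make_source_domain(Z, ks=3, n_per_class=100, seed=0):
    # Class-conditional Gaussian-mixture generation following Appendix C.3.
    rng = np.random.default_rng(seed)
    Z = np.asarray(Z, dtype=float)
    X, Y = [], []
    for y, z in enumerate(Z):
        d_min = min(np.linalg.norm(z - zp) for j, zp in enumerate(Z) if j != y)
        sigma = d_min / 4.0
        pi = rng.uniform(0.05, 0.95, ks); pi /= pi.sum()
        mus = rng.multivariate_normal(z, sigma ** 2 * np.eye(2), size=ks)
        scales = rng.uniform(0.2 * sigma, sigma, size=(ks, 2))  # diagonal std-devs
        comp = rng.choice(ks, size=n_per_class, p=pi)
        X.append(mus[comp] + scales[comp] * rng.standard_normal((n_per_class, 2)))
        Y.append(np.full(n_per_class, y))
    return np.concatenate(X), np.concatenate(Y)

grid = [(i, j) for i in range(4) for j in range(4)]  # stand-in for 16-QAM
X, Y = make_source_domain(grid)
```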
C.4 ABLATION EXPERIMENTS
We perform ablation experiments to understand: 1) the choice of the hyper-parameter λ, 2) the importance of the KL-divergence regularization in the adaptation objective, 3) performance of our method when the source and target Gaussian mixtures have mismatched components, and 4) the performance of our method when there is no distribution change.
Automatic Selection of Hyper-parameter λ. We evaluate the proposed validation metric for automatically selecting the hyper-parameter λ and report the results in Fig. 8. We run the proposed method for different fixed values of λ as well as the automatically-selected λ, and compare their
performance on the target domain test set. We consider both simulated channel variations and the random Gaussian mixture datasets. From the figure, we observe that in most cases performance based on the automatically set value of λ is better than other fixed choices of λ. The case of adaptation from AWGN to Ricean fading is an exception, where our method does not learn a good adaptation solution (see Fig. 4c). In this case, we observe from Fig. 8b that the setting λ = 0.0001 has the best symbol error rate.
Performance Under Component Mismatch. We evaluate the symbol error rate performance of all the methods in the setting where the number of components in the source and target Gaussian mixtures is mismatched. The number of components in the source and target Gaussian mixtures is randomly selected from the range 3 to 6. From Fig. 11, we observe that the proposed method has strong performance improvements even in this mismatched setting, suggesting that our method can perform well even when Assumptions A1 and A2 are not satisfied.
Importance of the KL-divergence Regularization. Recall that the adaptation objectives Eqs. (6) and (17) include the KL-divergence term scaled by λ in order to avoid large distribution changes when there is not enough support from the small target-domain dataset. A natural question to ask is whether this term is useful and helps improve the adaptation solution when λ > 0. To answer this, we compare the performance of our method with λ = 0 against that of our method with λ set automatically using the validation metric. The results of this comparison are given in Fig. 9 on four simulated channel variations. The results are averaged over multiple trials as before. It is clear that setting λ = 0 for our method leads to much higher symbol error rates compared to setting λ to a non-zero value using the validation metric, establishing the importance of the KL-divergence term.
Performance Under No Distribution Change. We evaluate the symbol error rate performance of all the methods in the setting where there is no distribution change. In this setting, the performance of the MDN and autoencoder should not change, and we expect the proposed adaptation method to maintain a similar performance (not lead to increased symbol error rate). In Fig. 10, we report the results of this experiment when both the source and target channel distributions are either Ricean fading or Uniform fading. We consider a medium SNR value of 14 dB and a high SNR value of 20 dB. We observe that our method is relatively stable even when there is no distribution change, and there is only a small increase in error rate. For instance, in Fig. 10c, the error rate of our method increases from 0.015 to 0.018 for 5 samples per class.
We expect that a practical system that frequently adapts to changes in the channel distribution should first have a distribution change-detection algorithm that takes a batch of new samples from the channel and detects whether there is any change in the distribution. The actual domain adaptation algorithm is then applied only when a distribution change is detected. In this way, any potential drop in the autoencoder’s performance when there is no distribution change can be made less likely.
C.5 ANALYSIS OF THE FAILURE ON AWGN TO RICEAN FADING
Referring to Fig. 4c in the main paper, we observe that our method has a worse symbol error rate compared to no adaptation and the other baselines for the adaptation setting from an AWGN channel at 14 dB.

1. What is the focus and contribution of the paper regarding few-shot domain adaptation?
2. What are the strengths of the proposed approach, particularly in its organization, demonstration, and assumptions?
3. What are the weaknesses of the paper, especially regarding some confusing aspects and non-self-explanatory figures?
4. Do you have any questions about the effectiveness of the method, such as its performance compared to a baseline model?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors provide a few-shot domain adaptation method to address channel distribution changes in communication systems. Specifically, using the properties of Gaussian mixtures, they propose a solid domain adaptation process for the generative channel model (MDN). Besides, they propose an input-transformation method which transforms the decoder's input from the target domain into the source domain, without modifying the encoder-decoder networks. They also conduct experiments on a mmWave FPGA platform and show the strong performance improvements of the proposed method.
Strengths And Weaknesses
Strengths
1. The paper is well organized and has rich details.
2. The advantage of the proposed method is clearly stated and demonstrated.
3. The adaptation approach is based on appropriate assumptions and is well supported by the properties of Gaussian mixtures.
4. The effectiveness of the method is evaluated by both simulated and real experiments, and there are also experiments for the cases where the assumptions do not hold.
Weaknesses
1. Some confusions. In the Parameter Transformation part, you state that "The number of adaptation parameters is given by k(2d^2 + d + 2). This is typically much smaller than the number of MDN parameters (weights and biases from all layers)." In a previous part you state that "The MDN output with all the mixture parameters has dimension p = k(d(d+1)/2 + d + 1)." Why is the number of adaptation parameters much smaller than the number of MDN parameters?
2. Some figures are not self-explanatory. For instance, in Figure 4, the lines for No adapt and Finetune are covered by other lines, without additional explanation.
3. More experiments. How does unsupervised domain adaptation perform based on the baseline model, and how does it compare with the proposed approach?
Clarity, Quality, Novelty And Reproducibility
The paper is well organized and clearly written. The advantage of the method is well stated.
ICLR | Title
Information Lattice Learning
Abstract
Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize the signal, we need several disentangled rules arranged in a hierarchy, formalized by a lattice structure. ILL focuses on explainability and generalizability from “small data”, and aims for rules akin to those humans distill from experience (rather than a representation optimized for a specific task like classification). This paper focuses on a mathematical and algorithmic presentation of ILL, then demonstrates how ILL addresses the core question “what makes X an X” or “what makes X different from Y” to create effective, rule-based explanations designed to help human learners understand. The key part here is what rather than tasks like generating X or predicting labels X,Y. Typical applications of ILL are presented for artistic and scientific knowledge discovery. These use ILL to learn music theory from scores and chemical laws from molecule data, revealing relationships between domains. We include initial benchmarks and assessments for ILL to demonstrate efficacy.
1 INTRODUCTION
With rapid progress in AI, there is an increasing desire for general AI (Goertzel & Pennachin, 2007; Chollet, 2019) and explainable AI (Adadi & Berrada, 2018; Molnar, 2019), which exhibit broad, human-like cognitive capacities. One common pursuit is to move away from “black boxes” designed for specific tasks to achieve broad generalization through strong abstractions made from only a few examples, with neither unlimited priors nor unlimited data (“primitive priors” & “small data” instead). In this pursuit, we present a new, task-nonspecific framework—Information Lattice Learning (ILL)— to learn representations akin to human-distilled rules, e.g., producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students (shown at the end).
The term information lattice was first defined by Shannon (1953), but remains largely conceptual and unexplored. In the context of abstraction and representation learning, we independently develop representation lattices that coincide with Shannon’s information lattice when restricted to his context. Instead of inventing a new name, we adopt Shannon’s. However, we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but we also bring learning into the lattice, yielding the name ILL.
ILL explains a signal (e.g., a probability distribution) by disentangled representations, called rules. A rule explains some but not all aspects of the signal, but together the collection of rules aims to capture a large part of the signal. ILL is specially designed to address the core question "what makes X an X" or "what makes X different from Y", emphasizing the what rather than generating X or predicting labels X,Y in order to facilitate effective, rule-based explanations designed to help human learners understand. A music AI classifying concertos, or generating one that mimics the masters, does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one. Our focus represents a shift from much representation-learning work (Bengio et al., 2013) that aims to find the best representation for solving a specific task (e.g., classification) with little concern for explainability. Instead of optimizing a task-specific objective function (e.g., classification error), ILL balances more general objectives that favor fewer, simpler rules for interpretability, and more essential rules for effectiveness—all formalized later.
One intuition behind ILL is to break the whole into simple pieces, similar to breaking a signal into a Fourier series. Yet, rather than decomposition via projection to orthonormal basis and synthesis
via weighted sum, we decompose a signal in a hierarchical space called a lattice. Another intuition behind ILL is feature selection. Yet, rather than features, we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning. The goal is to restore human-like, hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice (called projection-and-lifting, Figure 1: left), resulting in more than a sum of parts.
ILL comprises two phases: (a) lattice construction; (b) learning (i.e., searching) in the lattice. This is similar to many machine learning (ML) models comprising (a) function class specification then (b) learning in the function class, e.g., constructing a neural network then learning—finding optimal parameters via back-propagation—in the network. ILL’s construction phase is prior-efficient: it builds in universal priors that resemble human innate cognition (cf. the Core Knowledge priors (Spelke & Kinzler, 2007)), then grows a lattice of abstractions. The priors can be customized, however, to cater to a particular human learner, or facilitate more exotic knowledge discovery. ILL’s learning phase is data-efficient: it learns from “small data” encoded by a signal, but searches for rich explanations of the signal via rule learning, wherein abstraction is key to “making small data large”. Notably, the construction phase is prior-driven, not data-driven—data comes in only at the learning phase. Hence, the same construction may be reused in different learning phases for different data sets or even data on different topics (Figure 1: right). Featuring these two phases, ILL is thus a hybrid model that threads the needle between a full data-driven model and a full prior-driven model, echoing the notion of “starting like a baby; learning like a child” (Hutson, 2018).
ILL is related to many research areas. It draws ideas and approaches from lattice theory, information theory, group theory, and optimization. It shares algorithmic similarity with a range of techniques including MaxEnt, data compression, autoencoders, and compressed sensing, but with a much greater focus on achieving human-like explainability and generalizability. Below, we broadly compare ILL to prominent, related models, leaving more detailed comparisons with the most similar ones to the Appendix.
Compared to deep learning, ILL is a "white-box" model balancing human-explainability and task performance. Compared to Bayesian inference, ILL models human reasoning with widely shared, common priors and few, simple rules, rather than using probabilistic inference as the driving force. Compared to tree-like models, ILL is structurally more general: a tree (e.g., a decision tree or hierarchical clustering) is essentially a linear lattice (called a chain formally) depicting a unidirectional refinement or coarsening process. Compared to the concept lattice in FCA (Ganter & Wille, 2012), ILL is conceptually more general and may include both known and unknown concepts; ILL does not require but discovers domain knowledge (more details in Appendix A).
We illustrate ILL applications by learning music theory from scores, chemical laws from compounds, and show how ILL’s common priors facilitate mutual interpretation between the two subjects. To begin, imagine Tom and Jerry are playing two 12-key pianos simultaneously, one note at a time (Figure 1: right). The frequency of the played two-note chords gives a 2D signal plotted as a 12× 12 grayscale heatmap. Inspecting this heatmap, what might be the underlying rules that govern their co-play? (Check: all grey pixels have a larger “Jerry-coordinate” and project to a black key along the “Tom-axis”.) We now elaborate on ILL and use it to distill rules for complex, realistic cases.
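To make the two-piano puzzle concrete, here is a small Python sketch that checks the two rules one can read off such a heatmap; the co-play matrix below is made up for illustration, since the actual figure data is not reproduced here.

import numpy as np

# heat[tom, jerry] holds the frequency of the two-note chord (tom, jerry);
# the made-up data obeys the two rules stated above by construction
BLACK_KEYS = {1, 3, 6, 8, 10}            # black keys within one octave, 0 = C
rng = np.random.default_rng(0)
heat = np.zeros((12, 12))
for tom in BLACK_KEYS:
    for jerry in range(tom + 1, 12):
        heat[tom, jerry] = rng.random()

played = np.argwhere(heat > 0)           # the grey pixels
assert all(jerry > tom for tom, jerry in played)          # larger "Jerry-coordinate"
assert all(int(tom) in BLACK_KEYS for tom, _ in played)   # Tom plays a black key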
2 INFORMATION LATTICE: ABSTRACTIONS AND RULES OF A SIGNAL
Signal. A signal is a function ξ : X → R. For notational brevity and computational reasons, assume ξ is non-negative and X ⊆ R^n is finite (not a limitation: see Appendix B). For example, a signal ξ : {1, . . . , 6} → R being a probability mass function (pmf) of a dice roll, or a signal ξ : {0, . . . , 27}² → R being a 28 × 28 grayscale image. We denote the set of all signals on X by SX.

Partition / abstraction. We use a partition P of a set X to denote an abstraction of X; we call a cell C ∈ P an (abstracted) concept. The intuition is simple: a partition of a set renders a "coarse-grained view" of the set, or more precisely, an equivalence relation on the set. In this view, we identify equivalence classes of elements (concepts) instead of individual elements. For example, the partition P = {{1, 3, 5}, {2, 4, 6}} of the six outcomes of the roll of a die identifies two concepts (odd, even).

Rule / representation. A rule of a signal ξ : X → R is a "coarsened" signal rξ : P → R defined on a partition P of X with rξ(C) := Σ_{x∈C} ξ(x) for any C ∈ P. In this paper, a rule of a signal is what we mean by a representation of a signal. If the signal is a grayscale image, a rule can be a special type of blurring or downsampling of the image; if the signal is a probability distribution, a rule can be a pmf of the "orbits" of the distribution for lifted inference algorithms (Holtzen et al., 2019; Kersting, 2012). More generally, we define a rule (regardless of any signal) over a set X by any signal on any partition of X; accordingly, we denote the set of all rules over X by RX := ∪_{P ∈ {all partitions of X}} SP.

Partition lattice. Abstractions are hierarchical: one coarse-grained view can be coarser than another. Let the partition lattice (PX, ⪯) of a set X be the partially ordered set (poset) containing all partitions of X equipped with the partial order coarser than (⪯), or finer than (⪰), defined in the standard way. Let P̲ := {{x} | x ∈ X} and P̄ := {X} denote the finest and the coarsest partition, respectively. Per general lattice theory (Davey & Priestley, 2002), PX is a complete lattice: every subset P ⊆ PX has a unique supremum ∨P and a unique infimum ∧P, where ∨P is called the join of P denoting its coarsest common refinement and ∧P is called the meet of P denoting its finest common coarsening.

Information lattice. The information lattice (Rξ, ⇐) of a signal ξ : X → R is the poset of all rules of ξ equipped with the partial order more general than: for any two rules r, r′ ∈ Rξ, we say r is more general than r′ (or r′ is more specific), denoted r ⇐ r′, if domain(r) ⪯ domain(r′). Notably, Rξ ⊆ RX and Rξ is isomorphic to the underlying partition lattice via projection defined below.

Projection and lifting. For any signal ξ ∈ SX, we define the projection operator ↓ξ : PX → Rξ by letting ↓ξ(P) be the rule of ξ on P. One can check that ↓ξ : (PX, ⪯) → (Rξ, ⇐) is an isomorphism. Conversely, we define the general lifting operator ⇑X : RX → 2^{SX} by letting ⇑X(r) denote the set of all signals that satisfy the rule r, i.e., ⇑X(r) := {ξ ∈ SX | ↓ξ(domain(r)) = r} ⊆ SX. To make lifting unique and per Principles of Indifference (Eva, 2019), we introduce a special lifting ↑X(r) to pick the most "uniform" signal in ⇑X(r). Formally, define ‖·‖q : SX → R by ‖ξ‖q := (Σ_{x∈X} ξ(x)^q)^{1/q}. For any ξ, ξ′ ∈ SX satisfying ‖ξ‖1 = ‖ξ′‖1, we say that ξ is more uniform than ξ′ (or ξ′ is more deterministic) if ‖ξ‖2 ≤ ‖ξ′‖2. We define the (special) lifting operator ↑X : RX → SX by ↑X(r) := argmin_{ξ∈⇑X(r)} ‖ξ‖2 (can be computed by simply averaging). Notation here follows the convention as to function projections to quotient spaces (Kondor & Trivedi, 2018). Lifting a single rule to the signal domain can be extended in two ways: (a) lift to a finer rule domain P instead of X, i.e., ⇑P(r) or ↑P(r); (b) lift more than one rule. Accordingly, we write ⇑ := ⇑X and ↑ := ↑X as defaults, write R = ↓ξ(P) := {↓ξ(P) | P ∈ P} ⊆ Rξ to denote a rule set, and write ⇑(R) := ∩_{r∈R} ⇑(r) = {η ∈ SX | ↓η(P) = R} and ↑(R) := argmin_{η∈⇑(R)} ‖η‖2 to denote signals that satisfy all rules in R (general lifting) and the most uniform one (special lifting), respectively. More computational details on lifting and its intimate relation to join are in Appendix C.
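To ground these operators, the following sketch implements projection and the special lifting for a finite signal, with a partition encoded as a map from each x to its cell id; the encoding and the dice numbers are illustrative assumptions, not part of the paper's implementation.

from collections import defaultdict

def project(signal, cell_of):
    # rule of the signal on the partition: sum the signal within each cell
    rule = defaultdict(float)
    for x, v in signal.items():
        rule[cell_of[x]] += v
    return dict(rule)

def special_lift(rule, cell_of):
    # most uniform signal satisfying the rule: average each value over its cell
    size = defaultdict(int)
    for x in cell_of:
        size[cell_of[x]] += 1
    return {x: rule[cell_of[x]] / size[cell_of[x]] for x in cell_of}

# Dice example: project a pmf onto the {odd, even} abstraction, then lift back.
xi = {1: .1, 2: .2, 3: .1, 4: .2, 5: .1, 6: .3}
parity = {x: x % 2 for x in xi}          # cell ids: 1 = odd, 0 = even
r = project(xi, parity)                  # {1: 0.3, 0: 0.7}
print(special_lift(r, parity))           # 0.1 on each odd face, 0.7/3 on each even face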
3 INFORMATION LATTICE LEARNING (ILL)
We first formalize ILL as a single optimization problem and then solve it practically in two phases. Let ξ : X → R be a signal we want to explain. By explaining, we mean to search for a rule set R = ↓ξ(P) ⊆ Rξ such that: (a) R recovers ξ well, or R is essential; (b) R is simple. The main idea agrees with Algorithmic Information Theory (Chaitin, 1987; Chater & Vitányi, 2003), but we use an information-lattice based formulation focusing on explainability. We start our formulation below.
We say a rule set R recovers the signal ξ exactly if ↑(R) = ξ. Yet, exact recovery may not always be achieved. The information loss occurs for two reasons: (a) insufficient abstractions, i.e., the join ∨P is strictly coarser than P̲; (b) the choice made in favor of uniformity is inappropriate. Instead of pursuing exact recovery, we introduce ∆(↑(R), ξ)—a distance (e.g., ℓp distance) or a divergence (e.g., KL divergence) function—to measure the loss, with a smaller ∆ indicating a more essential R. We say a rule set R is simpler if it contains fewer and simpler rules. Formally, we want R minimal, i.e., each rule r ∈ R is indispensable so as to achieve the same ↑(R). Also, we want each rule r ∈ R informationally simple, measured by smaller Shannon entropy Ent(r), so r is more deterministic (Falk & Konold, 1997), easier to remember (Pape et al., 2015), and closer to our common-sense definition of a "rule". Notably, the partial order renders a tradeoff between the two criteria: r ⇐ r′ implies r is dispensable in any R ⊇ {r, r′} but on the other hand Ent(r) ≤ Ent(r′), so including more-specific rules makes the rule set small yet each individual rule (informationally) hard.
The main problem. The formal definition of an ILL problem is: given a signal ξ : X → R,

minimize_{R⊆Rξ} ∆(↑(R), ξ) subject to R is minimal; Ent(r) ≤ ε for any r ∈ R. (1)

The search space involves the full information lattice (Rξ, ⇐), or isomorphically, the full partition lattice (PX, ⪯). Yet, the size of this lattice, i.e., the Bell number B_{|X|}, scales faster than exponentially in |X|. It is unrealistic to compute all partitions of X (unless X is tiny), let alone the partial order. Besides computational concerns, there are two reasons to avoid the full lattice (but to leave it implicitly in the background): (a) the full lattice has unnecessarily high resolution, comprising many nearly-identical partitions particularly when X is large; (b) considering explainability, not every partition has an easy-to-interpret criterion by which the abstraction is made. As such, Formulation (1) is only conceptual and impractical. Next, we relax it and make it practical via two ILL phases.
3.1 PRACTICAL LATTICE CONSTRUCTION: TO START LIKE A BABY (PHASE I)
Information lattice construction plays a role similar to building a function class in ML, sometimes called meta-learning. While its importance is commonly understood, the construction phase in many data-driven models is often treated cursorily—using basic templates and/or ad-hoc priors—leaving most of the computation to the learning phase. In contrast, we put substantial effort into our prior-driven construction phase. Pursuing generality and interpretability, we want universal, simple priors that are domain-agnostic and close to the innate cognition of a human baby (Marcus, 2018). Here we draw those from Core Knowledge (Spelke & Kinzler, 2007; Chollet, 2019), which include "the (small) natural numbers and elementary arithmetic prior" and "the elementary geometry and topology prior". We then give algorithms to construct abstractions from these priors, and consider such a construction prior-efficient if it is interpretable, expressive, and systematic. In the following flowchart, we summarize information lattice construction as generating a partition sublattice.
[Flowchart: 1 seeds (priors) F, S → 2 features Φ〈F〉 / symmetries G〈S〉, giving the partition multiset P〈F,S〉 = PΦ〈F〉 ∪ PG〈S〉 → 3 partition poset (P〈F,S〉, ⪯) → 4 partition semilattice 〈P〈F,S〉〉∨ → 5 partition sublattice 〈〈P〈F,S〉〉∨〉∧···. Steps 1–2 form the prior-driven stage, step 3 the hierarchy stage, and steps 4–5 the completion stage.]
Steps 1–2: Feature / symmetry-induced partitions. Unlike data clustering, our prior-driven partitions are induced from two data-independent sources—features and symmetries. We draw priors—in the form of seed features F and seed transformations S—from Core Knowledge as a basis, and then generate a set of partitions P〈F,S〉 as follows (as an example, for X = R²):

F = {w[1], w[2], w[1,2], sort, argsort, sum, diff, div2, . . . , div19, mod2, . . . , mod19} (2)
S = {horizontal, vertical, diagonal translations} ∪ {rotations} ∪ {reflections} (3)

Φ〈F〉: set of features generated by F via function composition
G〈S〉: set of subgroups generated by subsets of S via subgroup generation
PΦ〈F〉: set of partitions generated by features in Φ〈F〉 via preimages
PG〈S〉: set of partitions generated by subgroups in G〈S〉 via orbits

In (2), wI denotes coordinate selection (like indexing/slicing in python) and the other functions are defined as in python (div and mod are like in python divmod). Then, P〈F,S〉 = PΦ〈F〉 ∪ PG〈S〉. (A runnable sketch of this step follows the step descriptions below.)

Step 3: Partition poset. We next sort P〈F,S〉, computationally a multiset, into the poset (P〈F,S〉, ⪯). We import the algorithmic skeleton from generic poset-sorting algorithms (Caspard et al., 2012; Daskalakis et al., 2011), with an outer routine incrementally adding elements and querying an inner subroutine (an oracle) for pairwise comparison. Yet, our poset is special: its elements are tagged partitions, where a tag records the generating source(s) of its tagged partition, e.g., features and/or symmetries. So, we have specially designed both the outer routine ADD PARTITION and the oracle COMPARE by leveraging (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tag (valid for tagged partitions) to pre-determine or filter relations. We relegate details to Appendix E. The data structures for posets include po matrix and hasse diagram, encoding the partial order ≺ (ancestors/descendants) and the cover relation ≺c (parents/children), respectively (Garg, 2015).

Steps 4–5: Partition semi/sublattice. To complete (P〈F,S〉, ⪯) into a lattice, we compute the sublattice (of PX) generated by P〈F,S〉. We follow the idea of alternating-join-and-meet completions borrowed from one of the two generic sublattice-completion methods (Bertet & Morvan, 1999). A discussion on our choice and other related methods is in Appendix D. However, we implement join-semilattice completion (meet-semilattice is dual) in our special context of tagged partitions, which echoes what we did in Step 3 and reuses ADD PARTITION. The adjustments are (a) changing tags from features and symmetries to join formulae and (b) changing the inner subroutine from pairwise comparison to computing join. We then run a sequence of alternating joins and meets to complete the lattice. For interpretability, one may want to stop early in the completion sequence. While a single join or meet remains simple for human interpretation—often understood as the intersection or union of concepts (e.g., the join of colored items and sized items gives items indexed by color and size)—having alternating joins and meets may hinder comprehension. More details on a single-step join-semilattice completion, the completion sequence, and tips on early stopping are relegated to Appendix E.
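As a minimal sketch of steps 1–2 (referenced above), the following generates a partition from a feature via preimages and from transformations via orbits, on a toy finite domain; the domain sizes and generators are illustrative assumptions. It also checks a dual-tagging phenomenon used later: the preimages of mod12 ∘ w[1] coincide with the orbits of the group generated by an octave shift of the first coordinate and a unit vertical shift of the second.

from collections import defaultdict
from itertools import product

X = list(product(range(24), range(12)))   # a toy finite domain

def preimage_partition(f, X):
    cells = defaultdict(set)
    for x in X:
        cells[f(x)].add(x)
    return {frozenset(c) for c in cells.values()}

def orbit_partition(transforms, X):
    seen, cells = set(), set()
    for x in X:
        if x in seen:
            continue
        orbit, frontier = {x}, [x]
        while frontier:                   # close the orbit under the generators
            y = frontier.pop()
            for t in transforms:
                z = t(y)
                if z not in orbit:
                    orbit.add(z)
                    frontier.append(z)
        seen |= orbit
        cells.add(frozenset(orbit))
    return cells

mod12_w1 = lambda x: x[0] % 12                       # feature view: mod12 o w[1]
octave_shift = lambda x: ((x[0] + 12) % 24, x[1])    # symmetry view, generator 1
vertical_shift = lambda x: (x[0], (x[1] + 1) % 12)   # symmetry view, generator 2

# the same abstraction, reached two ways (two sources in one partition tag)
assert preimage_partition(mod12_w1, X) == orbit_partition([octave_shift, vertical_shift], X)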
3.2 PRACTICAL LATTICE LEARNING: TO LEARN LIKE A CHILD (PHASE II)
Learning in an information lattice means solving the optimization Problem (1), i.e., to search for a minimal subset of simple rules from the information lattice of a signal so as to best explain that signal. Let P• be the sublattice (or semilattice, poset, if early stopped) from the construction phase. Projecting a signal ξ : X → R to P• yields the information sublattice R• := ↓ξ(P•) ⊆ Rξ. It is worth reiterating that (a) P• is constructed first and is data-independent; (b) ξ (data) comes after P•; (c) (R•, ⇐) is isomorphic to (P•, ⪯): R• retains the partial order (po matrix and hasse diagram) and interpretability from P•. As such, R• is what is given at the beginning of the learning phase.

The main problem (relaxed). For practicality, we relax Problem (1): instead of the full lattice Rξ, we restrict the search space to R•; instead of minimal rule sets, we consider only antichains (whose elements are mutually incomparable), necessary for minimality. This yields:

minimize_{R⊆R•} ∆(↑(R), ξ) subject to R is an antichain; Ent(r) ≤ ε for any r ∈ R. (4)
To solve Problem (4), we adopt a (greedy) idea similar to principal component analysis (PCA): we first search for the most essential rule—which decreases ∆ most—in explaining the signal, then the second most essential rule in explaining the rest of the signal, and so on. Specifically, we start with an empty rule set R(0) := ∅, and add rules iteratively. Let R(k) be the rule set formed by Iteration (Iter) k and R(k)⇐ := {r ∈ R• | r ⇐ r′ for some r′ ∈ R(k)}. Let R≤ε := {r ∈ R• | Ent(r) ≤ ε}. Then,
(in Iter k + 1) minimize ∆(↑(R(k) ∪ {r}), ξ) subject to r ∈ R(k)_feasible := R≤ε − R(k)⇐. (5)

We pre-compute R≤ε (instead of the whole R•) before iterations, which can be done by a breadth-first search (BFS) on P•'s hasse diagram, from bottom (the coarsest) up. As to the monotonicity of Ent w.r.t. the partial order (cf. the grouping axiom of entropy (Cover & Thomas, 2012)), any BFS branch ends once the entropy exceeds ε. (For later use, we save the set R>ε of ending rules in BFS, i.e., the lower frontier of R>ε.) In contrast, R(k)⇐ is computed per iteration (by querying P•'s po matrix).
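A sketch of this pre-computation follows; the Hasse diagram is assumed to be handed over as a map finer_than from each partition id to its immediate refinements, with rule_of giving the projected rule on each node. The encoding is an assumption; only the entropy-pruned BFS itself is from the text.

import math
from collections import deque

def rule_entropy(rule):
    # Shannon entropy of a rule, treating its values as an unnormalized pmf
    total = sum(rule.values())
    return -sum(v / total * math.log2(v / total) for v in rule.values() if v > 0)

def rules_up_to(bottom, finer_than, rule_of, eps):
    below, frontier = [], []             # R_{<=eps} and the ending rules R_{>eps}
    queue, seen = deque([bottom]), {bottom}
    while queue:
        p = queue.popleft()
        if rule_entropy(rule_of[p]) > eps:
            frontier.append(p)           # prune: Ent is monotone under refinement
            continue
        below.append(p)
        for q in finer_than.get(p, ()):  # climb to immediate refinements
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return below, frontier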
Nested vs. alternating optimization. Computing ↑(R(k) ∪ {r}) requires solving a minimization, so Problem (5) is a nested optimization: argmin_{r∈R(k)_feasible} ∆(argmin_{η∈⇑(R(k)∪{r})} ‖η‖2, ξ). One may de-nest the two: instead of comparing rules by lifting them up to the signal domain, we compare them "downstairs" on their own rule domains. So, instead of minimizing (5)'s objective, we

maximize_{r ∈ R≤ε − R(k)⇐} ∆(↓↑(R(k))(domain(r)), ↓ξ(domain(r))) = ∆(↓↑(R(k))(domain(r)), r). (6)
The idea is to find the rule domain on which the recovered ↑(R(k)) and the target signal ξ exhibit the largest gap. Adding this rule to the rule set maximally closes the gap in (6), and tends to minimize the original objective in (5). Nicely, in (6) the lifting does not involve r, so (5) is de-nested, which further iterates into an alternating min max (or lift project) optimization. Let r(k)⋆ be the solution and ∆(k)⋆ be the optimal value in Iter k. We update R(k+1) := R(k) ∪ {r(k+1)⋆} − {r(k+1)⋆'s descendants} (so always an antichain), and proceed to the next iteration. Iterations end whenever the feasible set is empty, or may end early if the rule becomes less essential, measured by |∆(k+1)⋆ − ∆(k)⋆| ≤ γ in the nested setting, and ∆(k)⋆ ≤ γ in the alternating setting (for some γ). The full learning path & complexity. We denote a solve process for Problem (6) by SOLVE(ε, γ), or SOLVE(ε) if γ is fixed ahead. To avoid tuning ε manually, we solve an ε-path. For ε1 < ε2 < · · ·, assume SOLVE(εi) takes Ki iterations; we run the following to solve the main relaxed Problem (4):
∅ = R(0) → SOLVE(ε1) → R(K1) → SOLVE(ε2) → R(K1+K2) → · · · (7)

So, lattice learning boils down to solving a sequence of combinatorial optimizations on the Hasse diagram of a lattice. We walk through the full process (7) via a toy example, starting with a signal ξ : {0, . . . , 27}² → [0, 1] denoting an image of "7" and a toy-sized information lattice of the signal (Figure 3A). The sequence of optimizations (7) proceeds at two paces concurrently: the slower pace is indexed by i; the faster pace is indexed by iteration number k. As mentioned earlier, the sets R≤εi are pre-computed at the slower pace, with the (i + 1)th BFS initialized from R>εi (the ending rules in the ith BFS). The monotonicity of Ent w.r.t. the partial order assures that these BFSs add up to a single (global) BFS on the entire Hasse diagram, climbing up the lattice from the bottom. This is shown in Figure 3B as the monotonic expansion of the blue region (R≤ε) explored by BFS. Locally at each iteration along the slower pace, solving Problem (6) is quadratic in the worst case when the feasible set is an antichain (i.e., no order), and linear in the best case when the feasible set is a chain (i.e., totally ordered). Since local BFSs add up to a single BFS with a standard linear complexity, the entire learning phase has a total complexity between linear and quadratic in the number of vertices and edges in the whole Hasse diagram. In general, the denser the diagram is, the lower the complexity is. This is because R(k)⇐ tends to be large in this case with more descendants activated (i.e., red in Figure 3B), which in turn effectively shrinks the feasible set (i.e., the blue region minus red). For example, unlike the first three iterations in Figure 3B, the 4th iteration (ε = 3) activates more than one rule, including the one being extracted as well as all its unexplored descendants. Further, the upper bound is rarely reached. Unlike in this toy example, BFS in practice is often early stopped when ε becomes large, i.e., when later rules become more random. Hence, since we target only more deterministic and disentangled rules, not all vertices and edges are traversed by BFS. At the end of the learning process, for explanatory purposes, we store the entire ε-path and the (R(k))k≥0 sequence instead of just the very last one. This yields a rule trace as the standard ILL output, which we present below.
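Putting the pieces together, one pass of SOLVE(ε, γ) in the de-nested setting can be sketched as follows; lift_all (special lifting of a rule set), project_to (projection of a signal to a rule's domain), gap (the distance ∆), and the predicate more_general (the partial order ⇐) are assumed helpers in the spirit of the earlier sketches, not the paper's actual code.

def solve_eps(xi, candidates, more_general, project_to, lift_all, gap, gamma):
    # candidates = R_{<=eps}; greedily extract rules by the largest gap (Eq. 6)
    R, feasible = [], list(candidates)
    while feasible:
        recovered = lift_all(R)                    # special lifting of current rules
        scores = {r: gap(project_to(recovered, r), project_to(xi, r))
                  for r in feasible}
        best = max(scores, key=scores.get)
        if scores[best] <= gamma:                  # remaining rules are inessential
            break
        # keep the rule set an antichain: drop rules the new rule makes redundant
        R = [r for r in R if not more_general(r, best)] + [best]
        feasible = [r for r in feasible
                    if r is not best and not more_general(r, best)]
    return R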
How to read ILL output. ILL outputs a rule trace comprising an evolving sequence of rules, rule sets, and recovered signals (Figure 3C). The three sequences are indexed by iteration and by the ε-path, so the rule set from the last iteration under any starred ε is the returned solution to the main Problem (4). We depict a rule by its lifting, since it sketches both the partition and the rule values. Figure 3C gives a full presentation of a rule trace. We also introduce a two-line shorthand (Figure 3D), keeping only the sequence of the recovered signals and that of the rules. A rule trace answers what makes ξ an ξ, or what are the best ε-simple rules explaining ξ. ILL rules are more interpretable than just eyeballing patterns. (a) The interpretability of the trace is manifest in its controllability via ε, γ: smaller ε for simpler rules and larger γ for more essential rules. (b) The interpretability of each rule is gained from its partition tag—the criteria by which the abstraction is made. A tag may contain several generating sources as different interpretations of the same rule abstraction. Like different proofs of a theorem, a partition tag with multiple sources reveals equivalent characterizations of a structure and thus, more insights of the signal. So, tags are not only computationally beneficial in constructing lattices, but also key to interpretation. We present in-depth analyses on tags in the applications below.
4 ILL EXAMPLES
We show typical ILL examples on knowledge discovery in art and science: learning music theory from scores and chemical laws from compounds (while relegating more analyses on handwritten digits to Appendix F). For both, we fix the same priors—F, S in (2)(3)—thus the same lattice. We fix the same parameters: the ε-path is 0.2 < 3.2 < 6.2 < · · · (tip: a small offset at the beginning, e.g., 0.2, is used to get nearly-deterministic rules) and γ is 20% of the initial signal gap. This fixed setting is used to show generality and for comparison. Yet, the parameters can be fine tuned in practice.
Music illustration. Signals are probability distributions of chords encoded as vectors of MIDI keys. Figure 4a) shows such a signal—the frequency distribution of two-note chords extracted from the soprano and bass parts of Bach’s C-score chorales (Illiac Software, Inc., 2020)—with the learned rule trace listed below. The first rule is tagged by argsort ◦w[1,2] and has probability all concentrated in one cell whose elements have a larger y-coordinate (the black region above the diagonal). So, this is a deterministic rule, echoing the law of “no voice crossing (N.V.C.)”, i.e., soprano higher than bass. Checking later rule tags finds laws of voice range (V.R.), diatonic scale (D.S.), and consonant interval (C.I.)—almost all of the main static rules on two-voice counterpoint. Notably, the third rule is tagged by both mod12 ◦ w[1] and vertical translation invariance. From both feature and symmetry views, this tag identifies the concept of all Cs, all Ds, etc., which is the music concept of pitch class. The feature view explicitly reveals a period of 12 in pitches—the notion of an octave (in defining pitch class); the symmetry view reveals the topology—the manifold where the concepts lie—in this case a 2D torus.
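The deterministic first rule can be replayed numerically: projecting a (bass, soprano) chord distribution through argsort ∘ w[1,2] concentrates all mass in a single cell, which is exactly the no-voice-crossing law. The chords below are made up for illustration; only the feature is from the text.

from collections import Counter

chords = [(48, 60), (50, 65), (43, 64), (48, 60)]   # made-up (bass, soprano) MIDI keys
argsort = lambda pair: tuple(sorted(range(len(pair)), key=lambda i: pair[i]))

rule = Counter(argsort(c) for c in chords)
print(rule)   # Counter({(0, 1): 4}): one cell, zero entropy, bass always below soprano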
Chemistry illustration. Signals are boolean-valued functions indicating the presence of compound formulae encoded as vectors of atomic numbers in a molecule database. Figure 4b) shows a signal attained by collecting two-element compounds from the Materials Project database (Jain et al., 2013) of common compounds. The first rule tagged by div18 ◦w[2] is deterministic: Element 2 can never be
Ar, K, Ca. It nicely captures the visual pattern in Figure 4b) (the last three vacant columns) and hints suggestively at some chemistry rules. The second rule tagged by mod8 ◦w[2] has peaks at cells tagged by feature values 1, 7, 0, 6. These cells, for Element 2, are halogens (+H), pnictogens, chalcogens, crystallogens. The third rule shows alkali metals, alkaline earth metals, crystallogens, icosagens are the cells common for Element 1. Next rule shows the common combinations, e.g., alkali metals and halogens. Note that the 2nd, 3rd, 4th rules for chemistry and the 5th, 3rd, 4th rules for music share the same tags, except that mod12 becomes mod8—period changes from 12 (a music octave) to 8 (number of main groups). So, when two chemical elements form a compound, they are like two music notes forming a chord! The music concepts of pitch classes and intervals parallel the chemical concepts of groups and their distances. Although abstractions are shared, rules differ. Instead of a diatonic scale in Bach’s chorales, chemistry uses a “cation scale” and an “anion scale”. It is interesting that our intention to show ILL’s generality (same lattice, parameters for different subjects) also suggests links between art and science by interpreting phenomena (signals) in one subject from the perspective of the other (Bodurow, 2018). Applications that extend the experiment here beyond a clustering model to restore the periodic table (Zhou et al., 2018) and render complex molecules in high dimensions are ongoing, aiming to discover new laws, new interpretations of existing laws, and new materials.
Real-world deployment & evaluation. We generalized the music illustration to a real app of an automatic music theorist (Yu et al., 2016; Yu & Varshney, 2017). It specially implements the alternating min max setting into a "student teacher" model: the student is a (music) generator and the teacher is a discriminator. The two form a loop where the teacher guides the student towards a target style through iterative feedback (extracting rules) and exercise (applying rules). This app extends the above music illustration considerably. It considers more music voices, so now signals are in higher dimensions and rules are on more complex chord structures. It considers temporal structure, so now signals include many (un)conditional chord distributions (multi-n-grams), yielding both context-free and context-dependent rules, but new challenges too, namely rare contexts and contradictory rules. ILL's core idea of abstraction makes "small data large" and thus rare contexts common (Yu & Varshney, 2017), and a redesigned lifting operator solves contradiction (Yu et al., 2017). Further, parameters like ε, γ are made into self-explanatory knobs for users to personalize their learning pace.
We conducted two studies to assess rule-learning capability and interpretability. We present the main results here and detail the procedures in Appendix G. In the first study, we compared ILL-discovered rules with human-codified domain knowledge to see how much known can be reproduced and how much new can be discovered. Trained on just 370 Bach's chorales, our model reproduced in explicit forms 66% of a standard music theory curriculum (Figure 5A). In the rest, about 26% (e.g., harmonic functions and music forms) was implicitly hinted at by the current n-gram based model, modeling only transitions of abstractions but not explicitly abstractions of transitions—a future direction. In the second study, we ran a human-subject experiment in the form of homework for a music class. The homework asked 23 students to write verbal interpretations of ILL-generated rules rendered as histograms over tagged partitions. Grading was based on a rubric of keywords generated via majority vote in a later discussion among students and teachers. Figure 5B shows that the majority (2/3) of the students who did the homework succeeded (w.r.t. the 30/50 passing grade) in the interpretation task, which in turn shows the interpretability of the AI-produced knowledge itself.

[Figure 5: ILL assessments on knowledge discovery tasks. (A) How much known? covered 66%, hinted 26%, missed 7%. (B) How interpretable? an excerpt of the graded homework study detailed in Appendix G, including the students' final scores (Table 5 there: 50: 3 students; [40,50): 7; [30,40): 2; [20,30): 4; [10,20): 1; [0,10): 1; 0: 5). (C) How much new? figured soprano (entropy = 4.76), figured alto (4.78), figured tenor (4.80), figured bass (4.34).]
In the first study, our model also discovered new rules that interested our colleagues in the music school. (a) Tritone resolution is crucial in tonal music, yet in Bach's chorales, tritones sometimes do not resolve in typical ways, but consistently transition to other dissonances like a minor seventh. (b) A new notion of "the interval of intervals" was consistently extracted in several rule traces. This "second derivative", like acceleration in mechanics, might suggest a new microscopic chord structure heretofore unconsidered. (c) New symmetry patterns reveal new harmonic foundations. As a parallel concept of harmony traditionally built on figured bass (dominant in Bach's chorales, confirmed by ILL), ILL reveals "figured soprano" as the next alternative in explaining Bach's music (Figure 5C). Although it is not the best view for explaining Bach according to ILL and is not included in any standard music theory class, it may be a valuable perspective for music that starts deviating from the classical style. This was confirmed by domain experts (Sokol, 2016), with more details at the end of Appendix G.1.
5 DISCUSSION: LIMITATIONS AND CHALLENGES
As a first step, we devise a new representation-learning model intended to be both theoretically sound and intrinsically interpretable. This paper shows typical setups and applications, but ILL is a general framework that admits new designs of its components, e.g., projection-and-lifting or priors. Notably, designing a lattice not only sets the rule-learning capacity but also the "vocabulary" for interpretation which, like the Sapir-Whorf hypothesis for human language, limits how a lattice explains signals. Likewise, priors have pros and cons based on what we seek to explain and to whom (e.g., not all signals are best explained by symmetry, nor can everyone read symmetry equally well). One solution is to explore multiple lattices while balancing expressiveness and computation—a common practice in picking ML models too. Further, whether a signal is indeed governed by simple rules requires rethinking. Sometimes, no rules exist; then ILL will indicate this and a case-by-case study will be needed. Sometimes, rules are insufficient: is music in fact governed by music theory? Theory is better viewed as necessary but not sufficient for good music: great composers need not be great theorists.
Following the studies comparing with human-codified knowledge and the human-subject experiments for interpretability, more systematic ILL benchmarking and assessment remain challenging and need long-term efforts. Benchmarking is not as easy as for task-specific settings (Chollet, 2019), requiring better comparison schemes or a downstream task. Effective ILL assessments must focus on new discoveries and the ability to assist people. Instead of a Turing test for machine-generated music, one may (at a meta-level) consider tests between independent and machine-aided compositions, where both are done by humans. Further, ILL may be incorporated with other models, yielding an ILL version of deep learning or vice versa. For example, using ILL as a pre-processing or post-interpretation module in other models may achieve superior task performance as well as controllability and interpretability. One other possibility is to use ILL to analyze attention matrices (as signals) learned from BERT or GPT (Rogers et al., 2020). More future visions are in Appendix H.
A CONNECTION TO CONCEPT LATTICE
Per our definition, a concept refers to a component of an abstraction, or more precisely, is a cell in a partition or an equivalence class under an equivalence relation. This definition is consistent with a formal concept defined in formal concept analysis (FCA) (Ganter & Wille, 2012; Ganter et al., 2016; Priss, 2006) as a set of objects (extent) sharing a set of attributes (intent), which can also be treated as objects that are equivalent under the attributes. However, our definition of a concept generalizes that of a formal concept in two ways. First, in our case, a partition or an equivalence relation is not induced from domain-specific attributes through formal logic and formal ontology, but from universal priors drawn from the Core Knowledge (detailed in Section 3.1 in the main paper). Second, specifying a partition considers all of its concepts, whereas specifying a set of formal concepts only considers those with respect to a given formal context. As a result, partition lattices in our case generalize concept lattices in FCA, and are not generated, hence not constrained, by domain knowledge such as that encoded in formal ontologies.
Mathematically, let (PX, ⪯) be the partition lattice comprising all partitions of X and (2^X, ⊆) be the subset lattice comprising all subsets of X. Clearly, the power set 2^X is the same as {C ∈ P | P ∈ PX}. That is, the subset lattice is also the lattice comprising all concepts from all partitions of X, which can then be called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many cases, it tends to miss many concepts if they are not known in the existing ontology. We give two examples below to further illustrate the connection between a partition lattice and a concept lattice.
First, consider biological taxonomy. Dogs and cats are two concepts in species, which is an abstraction containing other concepts such as eagles. Likewise, mammals and birds are two concepts in class, which is an abstraction containing other concepts such as reptiles and insects; further, animals and plants are two concepts in kingdom. In light of hierarchy, as abstractions, species ⪰ class ⪰ kingdom (in a partition lattice); as concepts, dogs ⊆ mammals ⊆ animals (in a concept lattice). Note that when forming a concept lattice, one may not need to include, say, all species. Yet when having species as an abstraction in a partition lattice, this abstraction must contain all species, including known species and unknowns, where the latter is usually of more interest for knowledge discovery.
Second, consider music theory. C major triads, C minor triads, and B diminished triads are concepts in an abstraction induced by music octave-shift and permutation invariance. Further, major triads, minor triads, and diminished triads are concepts in another abstraction induced by music octave-shift, permutation, and further transposition invariance. Clearly, for abstractions, the former abstraction is finer than the latter; for concepts, the set of C major triads is a subset (or a special case) of the set of major triads. However, chords that are not defined in traditional music theory but appear as new concepts in a known abstraction (e.g., the two above) may be more interesting, since they may suggest new composition possibilities while still obeying the same music abstraction, in this case the same music symmetry. New concepts from new abstractions may push the composition boundary even further, suggesting new types of chords discovered from e.g., new symmetry (but possibly within a known symmetry family). See the end of Appendix G.1 for more examples from new discoveries.
B MORE GENERALIZED FORMALISM FOR INFORMATION LATTICE
The mathematical setting in the main paper is for a non-negative signal on a finite domain. However, this is not a limitation, but purely for notational brevity and computational reasons. First, regarding non-negativity, in many real scenarios, the signal is bounded and its value is only relative. In these cases, one can simply add an offset to the signal to make it non-negative. More generally, we can
consider a signal to be any measurable function ξ : X → Rn. Then the notions of an abstraction, a concept, a rule, as well as the partial order can be generalized as in Table 1. Hence, the notion of an information lattice is still well-defined in the generalized setting. The essence of the two settings lies in how we formalize an abstraction, whether using a partition or a σ-algebra. However, the two are not very different from each other: any partition of X generates a σ-algebra on X , and any σ-algebra on a countable X is uniquely generated by a partition of X (Çınlar, 2011).
Further, the main paper uses the summation functional in defining a rule of a signal, or the projection operator. However, other options are possible, e.g., mean, max, min, or a specially designed functional. The lifting operator can then be redesigned accordingly. In particular, besides always favoring the most uniform signal, the design of the special lifting can have extra freedom in considering other criteria for picking a signal from the general lifting.
C MORE INSIGHTS ON THE SPECIAL LIFTING
Consider the special lifting ↑(R) for any rule set R = ↓ξ(P) of a given signal ξ. Computing ↑(R) is simple if R = {r} contains only a single rule. In this case, ↑(R)(x) = ↑(r)(x) := r(C)/|C| for any x ∈ C ∈ domain(r), which requires simply averaging within each cell. However, computing ↑(R) becomes much less trivial when |R| > 1. By definition, we need to solve the minimization problem:

↑(R) := argmin_{η∈⇑(R)} ‖η‖2. (8)

Instead of directly throwing the above problem (8) into a generic optimization solver, there is a more efficient approach which also reveals more insights on the special lifting. More specifically, one can check that any multi-rule lifting ↑(R) can be computed as a single-rule lifting ↑(r⋆) where the single rule r⋆ is defined on the join ∨P and is computed as follows:

r⋆ := argmin_{r∈⇑(∨P)(R)} ‖r̃‖2, with the weighted norm ‖r̃‖2 := √(Σ_C r(C)²/|C|). (9)
So, instead of lifting R directly to the signal domain X, we lift R to the join ∨P first and then to X. Since |∨P| ≤ |X|, the minimization problem (9) is in a smaller dimension compared to the original problem (8), and thus can be solved more efficiently. In the minimization problem (9), by definition, ⇑(∨P)(R) := {r : ∨P → R | ↓r(P) = R}. Hence, every rule r ∈ ⇑(∨P)(R) can be treated as a single-rule summary of the rule set R, and r⋆ is one of them—the one that yields the most uniform signal. Realizing the special lifting R → ↑(R) as the two-step lifting R → r⋆ → ↑(r⋆) = ↑(R) reveals the following insight: given rules abstracting ξ at different levels (coarser or finer), the best one can hope to faithfully explain ξ is at the level of the join. Determining ξ at any level finer than the join would then require additional assumptions other than the rule set itself, such as the preference of uniformity used here. This further explains the two sources of information loss (join and uniformity) discussed in the recovery process of a signal (cf. Section 3 in the main paper). Notably, determining a signal even at the level of the join may be ambiguous, since the general lifting ⇑(∨P)(R) to the join is not necessarily a singleton. This particularly implies that r⋆, as one of the single-rule summaries of R of ξ, is not necessarily a rule of ξ, i.e., there is no guarantee that r⋆ = ↓ξ(∨P). To make it so, we need more rules.
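A small numeric sketch of this two-step lifting, reusing the cell-label-map encoding assumed in the earlier sketches: the join is formed from pointwise pairs of cell ids, problem (9) is solved by a pseudoinverse after the substitution s(C) = r(C)/√|C| (which turns the weighted norm into a plain least-norm problem), and r⋆ is then spread uniformly over its cells.

import numpy as np

def special_lift(X, cell_maps, rules):
    join_id = {x: tuple(c[x] for c in cell_maps) for x in X}   # join cell of x
    join_cells = sorted(set(join_id.values()))
    col = {J: j for j, J in enumerate(join_cells)}
    size = np.zeros(len(join_cells))
    for x in X:
        size[col[join_id[x]]] += 1.0
    # one linear constraint per (partition i, cell C): the sum of r over join
    # cells inside C must equal the given rule value rules[i][C]
    rows, b = [], []
    for i, rule in enumerate(rules):
        for C, val in rule.items():
            rows.append([1.0 if J[i] == C else 0.0 for J in join_cells])
            b.append(val)
    A, b = np.array(rows), np.array(b)
    s = np.linalg.pinv(A * np.sqrt(size)) @ b   # min-norm s, with r = sqrt(size) * s
    r_star = s * np.sqrt(size)                  # minimizer of the weighted norm (9)
    return {x: r_star[col[join_id[x]]] / size[col[join_id[x]]] for x in X}

# Dice example: a parity rule plus a {low, high} rule determine the pmf at the join.
X = [1, 2, 3, 4, 5, 6]
parity = {x: x % 2 for x in X}
low_high = {x: int(x > 3) for x in X}
print(special_lift(X, [parity, low_high], [{1: .3, 0: .7}, {0: .5, 1: .5}]))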
D EXISTING WORK ON SUBLATTICE GENERATION
General methods for computing the sublattice LB of a full lattice L generated by a subset B ⊆ L fall into two basic families, depending on whether the full lattice needs to be computed. The first uses alternating join- and meet-completions, with worst-case complexity O(2^{|B|}); the second characterizes the elements of L that belong to the sublattice, with complexity O(min(|J(L)|, |M(L)|)² |L|) where J(L) and M(L) denote the sets of join-irreducibles and meet-irreducibles, respectively (Bertet & Morvan, 1999). The latter requires computing the full lattice, which is intractable in our case of partition lattices, as |L| = |PX| grows faster than exponentially in |X| whereas |P〈F,S〉| is usually smaller than |X|. So, we use the first approach and compute alternating join- and meet-completions. The same principle of avoiding computing the full lattice has been applied to the special context of concept lattices (Kauer & Krupka, 2015), yet the technique there still requires the full formal context corresponding to the full concept lattice. Note that sublattice completion is, by definition, computing the smallest sublattice LB (in a full lattice L) containing the input subset B ⊆ L, where LB must inherit the meet and join operations from L. It generalizes but is not the same as Dedekind-MacNeille completion (Bertet & Morvan, 1999; MacNeille, 1937; Bertet et al., 1997).
E MORE DETAILS ON THE CONSTRUCTION PHASE
This section elaborates on the second half of Section 3.1 in the main paper, presenting more algorithmic details on poset construction and sublattice completion. The core data structures for posets are the so-called adjacency matrix and Hasse diagram, encoding the partial order ≺ and the cover relation ≺c, respectively (Garg, 2015). The former is best for querying ancestors and descendants of a partition within the lattice; the latter is best for querying parents and children of a partition. (A more advanced technique includes chain-decomposition, but the two here are sufficient for this paper.) More specifically,
P ′ is an ancestor of P ⇐⇒ P ≺ P ′
P ′ is a parent of P ⇐⇒ P ≺c P ′ (i.e., P ≺ P ′ but no P ′′ satisfies P ≺ P ′′ ≺ P ′). We introduce a few algorithmic notations. Given a partition poset (P, ), we use P.po matrix and P.hasse diagram to denote the adjacency matrix and Hasse diagram of P, respectively. For any partition P ∈ P, we use P.ancestors, P.descendants, P.parents, and P.children to denote the sets of ancestors, descendants, parents, and children of P , respectively. Notably, the two data structures are not only important for the construction phase but for the subsequent learning phase as well. The core subroutine in the construction phase is ADD PARTITION sketched as Algorithm 1. It is the key unit step in both poset construction and (join-)semilattice completion.
Poset construction. This corresponds to Step 3 in the flowchart in Section 3.1 of the main paper. Recall that poset construction refers to the process of sorting a multiset P〈F,S〉 of tagged partitions into a poset (P〈F,S〉, ⪯), where the partition tags are features and symmetries. Naively, if we write an inner subroutine COMPARE(P, P′)—called an oracle in the related literature—to compare two partitions, sorting a multiset into a poset amounts to (N choose 2) calls of this pairwise comparison, where N is the size of the input multiset. So, the common idea shared in almost all poset sorting algorithms is to reduce the number of oracle calls as much as possible. As mentioned in the main paper, considering the additional properties in our case, we leverage (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tag (valid for tagged partitions) to pre-determine or pre-filter relations. In other words, we want to infer from the context as many pairwise relations as possible, so that the number of actual pairwise comparisons can be minimized.
More specifically, we start from an empty poset, and call ADD PARTITION to incrementally add partitions from the input multiset to the poset. As the outer subroutine, ADD PARTITION leverages transitivity and partition size by maintaining three live data structures, namely size2partns, po matrix, and hasse diagram, so as to avoid calling COMPARE whenever possible. Consequently, COMPARE is called only at two places (underlined in Algorithm 1): one for = and one for ≺. When called as the inner subroutine, COMPARE(P,P ′) does not always perform an actual computation for pairwise comparison. Instead, it first checks if the tags are informative (e.g., compositions/supergroups imply coarser partitions) and only if not, makes an actual comparison. With the additional information from partition size, an actual comparison can be done in O(|X|) time
Algorithm 1: ADD PARTITION(Pτ, P): adds a tagged partition Pτ to a partition poset (P, ⪯)
Input: a tagged partition Pτ, where the tag τ can be a feature/symmetry or a join/meet formula;
a partition poset (P, ⪯), with the following members and hash tables:
· every P ∈ P is a unique partition (indexed by a unique identifier)
· P.partn2tags[P] := {τ | Pτ = P} denotes the set of all tags inducing P
· P.size2partns[k] := {P | |P| = k} denotes the set of all P ∈ P with size k
· P.po matrix encodes the partial order ≺, best for getting P.ancestors/descendants
· P.hasse diagram encodes the cover relation ≺c, best for getting P.parents/children
Step 1: determine if Pτ is new by COMPARE(P,Pτ ) (for =) for every P ∈ P.size2partns[|Pτ |]
if Pτ ∈ P.size2partns[|Pτ |]: update P.partn2tags[Pτ] by adding τ ; return else: create a new hash entry P.partn2tags[Pτ] = {τ}; proceed to Step 2
Step 2: add the new partition Pτ to P (2a) update P.size2partns[|Pτ |] by adding Pτ (2b) update P.po matrix and P.hasse diagram
– for every existing size k < |Pτ | sorted in a descending order: for every P ∈ P.size2partns[k]:
if P.parents ∩ Pτ .descendants 6= ∅: update P.po matrix by adding P ≺ Pτ else: COMPARE(P,Pτ ); update P.po matrix and P.hasse diagram if P ≺ Pτ
(here one can check: it is necessarily the case that P ≺c Pτ ) – do the above symmetrically for every existing size k > |Pτ | sorted in an ascending order – (note: every P ∈ P.size2partns[k] for k = |Pτ | is incomparable with Pτ ) – clean cover relation: remove any P∗ ≺c P∗ from P.hasse diagram if P∗ ≺c Pτ ≺c P∗
The mapping process works as follows. Given two partitions P, P ′, without loss of generality, we assume |P| ≤ |P ′|. An actual comparison is made by tentatively creating a mapping ν : P ′ → P. One can check that such a ν exists if and only if P ⪯ P ′. Hence, if |P| = |P ′| (resp. |P| < |P ′|), one can determine = (resp. ≺) if ν is created successfully, or incomparability otherwise. The mapping complexity is linear in |X|, with linear coefficient 1 if the mapping succeeds and linear coefficient < 1 if it fails. In the worst case (e.g., if all partitions are incomparable), all (N choose 2) pairwise comparisons are required. Our algorithm works best when partitions are richly related (i.e., the Hasse diagram is dense), which is indeed the case for our tagged partitions induced from systematically formed features and symmetries.
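Below is a minimal sketch of the mapping-based comparison, assuming each partition is stored as a dict mapping every x ∈ X to a cell ID (an assumed encoding, not necessarily the one used in our implementation).

def compare(P, Pp):
    """Compare two partitions over the same ground set X, each a dict x -> cell ID.

    Assumes |P| <= |P'|; returns '=', '<' (P strictly coarser than P'), or None
    (incomparable). Runs in O(|X|) time, exiting early if the mapping fails.
    """
    nu = {}                        # tentative mapping nu : P' -> P
    for x, c_fine in Pp.items():
        c_coarse = P[x]
        if nu.setdefault(c_fine, c_coarse) != c_coarse:
            return None            # two x's in one P'-cell land in different P-cells
    # nu exists, so every P'-cell is contained in a P-cell
    return '=' if len(set(P.values())) == len(set(Pp.values())) else '<'

The early exit on a mapping conflict is what gives the sub-|X| cost when a comparison fails.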
Semilattice completion. This corresponds to Step 4 in the flowchart in Section 3.1 of the main paper. Recall that join-semilattice completion refers to the process of completing a partition poset into a semilattice. We only detail join-semilattice completion, since meet-semilattice completion can be done symmetrically. Formally, we want to compute the join-semilattice of PX generated by the input poset (P〈F,S〉, ⪯). We denote the resulting join-semilattice by 〈P〈F,S〉〉∨. By definition,
〈P〈F,S〉〉∨ := {∨P | P ⊆ P〈F,S〉}. Naively, if computing 〈P〈F,S〉〉∨ literally from the above definition, one has to iterate over all subsets of P〈F,S〉 and compute their joins. This amounts to 2^N join computations, where N = |P〈F,S〉| is the size of the input poset, and moreover, many of the joins are not pairwise. Yet, similar to our earlier poset construction, we may reduce the computation of joins by an incremental method, which also embeds ADD_PARTITION as a subroutine and utilizes partition sizes and tags, but now the tags are join formulae instead of features or symmetries.
More specifically, we start with an empty semilattice P, and add partitions in P〈F,S〉 to P one by one from smaller-sized to larger-sized (note: the size information is maintained in P〈F,S〉.size2partns). When a partition P ∈ P〈F,S〉 is to be added, we make a tag named by itself, i.e., let Pτ := P with τ := {P}, and then call ADD_PARTITION(Pτ, P). There are two possibilities here: Pτ already exists in P (the call ends by Step 1) or Pτ is new (the call ends by Step 2). In the former case, we are done with Pτ.
In the latter case, for every P ′ ∈ P\{Pτ}, compute the pairwise join J(P ′) := ∨{Pτ, P ′} and its tags T(P ′) := {τ ∪ τ ′ | τ ′ ∈ P.partn2tags[P ′]}, and call ADD_PARTITION(J(P ′)T(P ′), P). Like COMPARE, computing a join can be optimized by leveraging previously computed tags and the partial order in the input poset P〈F,S〉, so as to avoid an actual join computation whenever possible. When inferring from the context is not possible, one can perform an actual join computation ∨(P, P ′) in O(|X|) time. This is done by collecting the unique pairs of cell IDs (C(x), C ′(x)) for every x ∈ X, where C(x) and C ′(x) denote the cell IDs of x in P and P ′, respectively. In the worst case (e.g., if all partitions are incomparable and join-irreducible), the complexity is inevitably O(2^N). However, as in poset construction, our algorithm works best when the partial order structure is rich.
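A sketch of the O(|X|) pairwise join under the same assumed dict encoding:

def join(P, Pp):
    """Pairwise join (coarsest common refinement) of two partitions of the same X."""
    pair2cell = {}                 # unique pairs (C(x), C'(x)) -> new cell IDs
    J = {}
    for x in P:
        key = (P[x], Pp[x])        # the pair of cell IDs of x in P and P'
        J[x] = pair2cell.setdefault(key, len(pair2cell))
    return J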
Practical tips for sublattice completion. This corresponds to Step 5 in the flowchart in Section 3.1 of the main paper. Recall that constructing the sublattice of PX generated by P〈F,S〉 follows the alternating process: L0 := P〈F,S〉, L1 := 〈L0〉∨, L2 := 〈L1〉∧, L3 := 〈L2〉∨, and so forth, which terminates as soon as Lk−1 = Lk. We denote the end result by 〈P〈F,S〉〉∨∧···, which is the desired sublattice. However, we may want to stop early in the completion sequence, due to concerns about computation, interpretability, and expressiveness, as well as their tradeoffs. We suggest the following practical tips on deciding where to stop: if the input poset P〈F,S〉 is small, run alternating joins and meets, or even complete it to the sublattice if affordable; if P〈F,S〉 is moderate, complete the joins only (as join is closely related to rule lifting, see Appendix C for more details); if P〈F,S〉 is large, just use it as is. A sketch of this alternating loop follows.
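The sketch below assumes hypothetical join_complete and meet_complete routines (each built on ADD_PARTITION as described above); max_rounds implements the early stopping, e.g., max_rounds=1 keeps the joins only.

def sublattice_completion(P0, join_complete, meet_complete, max_rounds=None):
    """Alternate join- and meet-completions until a fixed point (a sketch)."""
    L, rounds = P0, 0
    while max_rounds is None or rounds < max_rounds:
        L_next = join_complete(L) if rounds % 2 == 0 else meet_complete(L)
        rounds += 1
        if L_next == L:            # Lk = Lk-1: the sublattice is complete
            break
        L = L_next
    return L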
F MORE ANALYSES IN THE LEARNING PHASE
This section elaborates on the last paragraph of Section 3.2 in the main paper, presenting more analyses and interpretations on the rule traces elicited from the toy handwritten-digit examples. Yet, as mentioned in the main paper, computer vision is currently not among the typical use cases of ILL. Learning rules of handwritten digits may not be of much independent interest unless for calligraphy. So, the analyses and interpretations here are for illustration purposes only. We refer readers to the Broader Impact section in the main paper for possible future directions on how ILL may be used, together with other ML models, to solve computer vision tasks.
Recall that the main use case of ILL is to explain a signal ξ, answering what makes ξ an ξ. The same toy example illustrating an ILL process is replayed here in Figure 3. The signal ξ : {0, . . . , 27}2 → [0, 1] is a grayscale image of a handwritten “7”. In this case, a rule of ξ, or the projection of ξ to a partition of {0, . . . , 27}2, can be viewed as gathering “ink” within each partition cell. Accordingly, the (special) lifting can be viewed as redistributing the gathered “ink” (evenly) in each cell. Hence, we term this view the ink model. For visual convenience, we depict a rule of a 2D signal by its lifting (i.e., another grayscale image), since with pixels in the same cell colored the same, we can use the lifting to sketch both the partition and the rule values. More precisely, when a lifting represents a rule, it must be viewed in terms of blocks or superpixels; whereas a real lifting (i.e., a signal or a real image) is viewed normally by the regular pixels. To better clarify, all rules in Figure 3 are displayed in red boxes, whereas all liftings are in green ones.
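For concreteness, here is a minimal NumPy sketch of the ink model on a k-by-k superpixel partition; the block size and the random stand-in image are assumptions for the sketch, and the block partition is just one choice of abstraction.

import numpy as np

def project_blocks(img, k):
    """Rule of a 2D signal on the k-by-k-block partition: gather the 'ink' per cell."""
    h, w = img.shape               # assumes h and w are divisible by k
    return img.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

def lift_blocks(rule, k):
    """Special lifting: redistribute the gathered 'ink' evenly within each cell."""
    return np.repeat(np.repeat(rule, k, axis=0), k, axis=1) / (k * k)

img = np.random.rand(28, 28)       # stand-in for a grayscale handwritten "7"
r = project_blocks(img, 4)         # a rule, viewed in 7x7 superpixels
assert np.allclose(lift_blocks(r, 4).sum(), img.sum())   # ink is conserved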
For a simple illustration, we draw a small number of features and symmetries to generate a poset (P•) of 21 partitions. The corresponding part of the information lattice (R•) is shown by its Hasse diagram in Figure 3. Further, on top of the Hasse diagram, we demarcate the frontiers of the sublevel sets (R≤ε) by six blue dashed curves. Note that in this tiny diagram, we have sketched a full range of sublevel sets, yet for large diagrams, sublevel sets are constructed for small ε-values only in a single-pass BFS. The right part of Figure 3 illustrates a complete ILL process in the alternating setting, with lifting and projection signified by the green up-arrows and red down-arrows, respectively. During the learning process, ILL tries to minimize the gap in the signal domain (upstairs) through iterative eliminations of the largest gap in the rule domain (downstairs). Eliminating a larger rule gap tends to imply a larger drop in the signal gap, but not necessarily in every iteration, since the special lifting may accidentally recover a better signal if the assumed uniformity is, by chance, present in the signal. The rule set R(k) formed per iteration is presented in the middle of the right part of Figure 3, which jointly shows the complete rule trace continuously progressing along the ε-path.
The rule set in the last iteration under any ε (marked by ⋆ in Figure 3) is the returned solution to the main relaxed Problem (4) in the main paper. This rule set is used to answer what makes ξ an ξ. For example, let rj denote the rule with ID j (here a rule ID is the same as the partition ID, the unique identifier used in Algorithm 1 during the construction phase). Then, among all rules whose entropies
are no larger than ε = 2, the third rule set in the trace R(3) = {r9, r1, r18} best explains what makes ξ an ξ. However, if more complex rules are allowed, say if all rule entropies are now capped by ε = 6, R(7) = {r13, r15, r19} is the best. Recall that we do not just eyeball the rules to get intuitive understandings. Every rule is the projection of the signal to a tagged partition, where the tag, generated in a prior-driven way, explicitly explains the underlying abstraction criteria. For example, r19 in Figure 3 comes from a symmetry tag representing a permutation invariance, which visually renders as a reflection invariance. Rules r8 and r9 come from two feature tags div7 ◦ w[1] and div7 ◦ w[2], respectively. These two feature tags represent the continuous and even collapsing in the first and the second coordinate, respectively, which visually render as horizontal and vertical strips. Both rules are later absorbed into r13 tagged by div7 ◦ w[1,2], since its rule domain is strictly finer. These rules (r8, r9, r13) apparently summarize the horizontal and vertical parts of the handwritten “7”. Further, the vertical part of the “7” is longer and slants more, so we see more vertically-patterned rules in the rule trace (r9, r11, r15). These rules are obtained from finer and finer abstractions along the horizontal direction, so as to capture more details on the vertical part of that “7” such as its slope. Notably, among these vertically-patterned rules, r11 is induced from the symmetry representing a horizontal translation invariance, but it is quickly absorbed into r15 whose entropy is not much higher. This transient appearance of r11 implies that it plays a less important role in explaining this handwritten “7”. In fact, from more experiments, symmetries in general play a less important role in explaining many “7”s. This is, however, not the case in explaining many “8”s, where symmetries occur much more often. For example, consider a symmetry fused from translation and permutation invariances whose fundamental domain is homeomorphic to a Möbius strip. We hypothesize that this topological property might be related to the twisted nature of an “8”. For a visual comparison, we present the rule traces learned from a “7” and an “8” below in Figure 6, as well as the visual similarity between a Möbius strip and an “8”.
G STUDIES ON ILL-BASED MUSIC APPLICATION
We introduce two tests associated with a real-world application. The first is to assess rule-learning efficacy, where we compare machine-discovered rules to human-codified domain knowledge. The second is to assess human-interpretability, where we use human subject experiments on interpreting machine-generated rules.
The application here is our first step towards building an automatic music theorist and pedagogue, which is to be deployed as an assistant in music research and education. The two tests are our initial effort towards a systematic benchmarking and assessment platform. In the continuing effort of bridging human and machine intelligence, new standards are to be set and commonly agreed upon, so as to reasonably compare machine-codified discoveries with human-codified knowledge, as well as to use human-subject experiments for assessing interpretability. Fully developing assessment protocols is a challenging, long-term endeavor. Here, we use the two tests as starting points, and present results from each. Respectively, the first experiment tests music rule discovery, a basic requirement to be a theorist; the second tests interpretability, a basic requirement to be a pedagogue.
To conduct the two tests, we first build a user-friendly web application, which is used to better see and control the ILL learning process and results. Figure 7 illustrates the web interface. Users learn music rules—each as a histogram over a tagged partition (i.e., machine-codified music concepts)—and control their learning pace via self-explanatory knobs whose set values are automatically converted to internal parameters (e.g., ε, γ). One critical music-specific extension to the vanilla ILL presented in the main paper is adding a temporal component, since music is highly contextual. This amounts to considering more than one signal simultaneously, namely various (un)conditional chord distributions (multiple n-grams with varying n's and varying conditionals) encoding information about individual chords as well as melodic and harmonic progressions. Accordingly, ILL produces both context-free and context-dependent rules, each of which is indexed by a partition and a conditional under that partition. For example, given the partition that is equivalent to classifying music chords into roman numerals and conditioned on the previous two chords being a I64 followed by a V, a rule specifies the probability distribution of the next roman numeral, and in this case reproduces the music rule on Cadential-64. Note that in a context-dependent rule, not only is the query chord abstracted, but also the conditional. This is in contrast with many classical n-gram models where no abstraction is present and which thus may suffer from the problem of rare contexts, where a conditional occurs very few or even zero times in the training set. Here, however, the core idea of abstraction makes “small data” large and thus rare contexts common. More examples of context-free and context-dependent rules are illustrated as histograms in Figure 8. These rule histograms are generated by ILL from 370 of Bach's four-part chorales (in the format of digital sheet music), and are used in the two experiments detailed below. A minimal sketch of such an abstracted n-gram rule follows.
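The sketch below assumes chords are given as a sequence and that a hypothetical abstraction function (e.g., a to_roman classifier) maps each chord to its cell in a tagged partition; it only illustrates how abstracting both the query and the conditional yields a context-dependent rule.

from collections import Counter, defaultdict

def context_dependent_rule(chords, abstract, n=3):
    """P(next abstracted chord | previous n-1 abstracted chords), as a dict of pmfs."""
    counts = defaultdict(Counter)
    a = [abstract(c) for c in chords]          # abstract the whole sequence first
    for i in range(len(a) - n + 1):
        context, nxt = tuple(a[i:i + n - 1]), a[i + n - 1]
        counts[context][nxt] += 1
    return {ctx: {c: k / sum(cnt.values()) for c, k in cnt.items()}
            for ctx, cnt in counts.items()}

# e.g., rule = context_dependent_rule(chorale, to_roman, n=3); then rule[('I64', 'V')]
# would reproduce the Cadential-64 distribution, assuming to_roman exists.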
G.1 COMPARISON TO HUMAN-CODIFIED KNOWLEDGE
We compare rules learned from ILL to a standard undergraduate music theory curriculum. We want to use known laws from music theory as a benchmark to see how ILL-generated rules correspond to human-codified music knowledge. In particular, we want to see what is covered, what is new, and what is different. Yet, the ultimate goal is not just to use known music theory as a ground truth for the purpose of driving ILL to fully reconstruct what we know, but eventually to discover new rules,
to gain new understandings of existing rules, to suggest new composition possibilities, as well as to teach rules in a personalized way.
A priori we are aware of three major differences between human-codified music theory and ILL-generated rules. (a) In light of music raw representations (input), laws of music theory are derived from all aspects in sheet music whereas ILL-generated rules are currently derived from only MIDI pitches and their durations. This is because we currently study ILL as a general framework. When a music-specific application is to be developed later, one can include more music raw representations such as letter pitches, meter, measure, beaming, and articulations. (b) In light of rule format (output), laws of music theory and ILL-generated rules have two different styles, with the former being more descriptive and absolute (hard), whereas the latter being more numerical and probabilistic (soft). For instance, a music rule that completely forbids consecutive fifths is reproduced by an ILL-generated rule that assigns a small non-zero probability to the event. Therefore, while it is possible to “translate”, with information loss, a (precise) ILL-generated rule to a (verbal) rule in known theory, it may not make sense to “translate” in the opposite direction. Also, it is not a good idea to hardcode known rules as categorical labels in a supervised setting, since music rules are inherently flexible and hardcoding may lead to a rule-based AI that generates somewhat “mechanical” music such as the Illiac Suite (Hiller & Isaacson, 1957). (c) In light of purposes, laws of music theory are more intended for general pedagogical purposes, rather than to reflect the style of a particular data set. For instance, while consecutive fifths are banned in homework and exams, they may be widely used in many pop songs. Even in our data set of Bach's chorales (which are supposed to follow the known rules quite well), we see Bach himself wrote a handful of consecutive perfect intervals. On the contrary, ILL-generated rules are specific to the input data set. We may certainly find some data sets that follow the known rules quite well (e.g., Bach's chorales), but also others that break many known rules and even set their own rules.
Keeping these three differences in mind and by further isolating them from the comparison results, we can reveal the remaining differences that are due to the rule-learning process itself. To come up with the benchmark, we compiled a comprehensive syllabus of laws from music theory taught in our music school's theory review course, which runs through the full series of theory classes at a fast pace. This human-codified music knowledge is organized as a running list of 75 topics and subtopics indexed by lecture number. On the other hand, ILL-generated rules are indexed by partition (ID) and n-gram (n). The results are summarized below in Table 2, where the crosses (✗) in the last column mark topics missed by ILL, for the different reasons detailed next.
Among the total of 75 topics in Table 2, we first ignore 7 of them, which require music raw representations beyond MIDI pitches and durations (e.g., accents and enharmonic respellings of some augmented sixth chords). ILL covered 45 of the remaining 68 topics, yielding a coverage of 66%. Among the 23 missed topics, 18 are related to deeper-level temporal abstractions such as harmonic functions, key areas, and forms. These temporal abstractions may be better modeled as abstractions of transitions, which are implicitly captured but not explicitly recovered by our current multi-abstraction multi-n-gram language model, which models only transitions of abstractions. The other 5 missed topics are tricky and require ad-hoc encodings, which are not explicitly learnable (but may be implicitly captured to some extent) by our current ILL implementation. Accordingly, the composition of the 30 = 7 + 18 + 5 uncovered topics suggests three future directions that could raise the rule-learning capacity of the current implementation: (a) include more music raw representations; (b) model abstractions of transitions; (c) either make music-specific adjustments when developing music apps or figure out a more expressive and more general framework in the long run. However, remember that the goal here is not to reproduce what we know but to augment it. So, we may certainly stop after enabling abstractions of transitions, which in the best case yields an improved coverage of 84% (i.e., 93% of the topics recoverable from MIDI notes alone), which is good enough.
Table 2: ILL coverage of a standard music theory curriculum (✓ = covered, ✗ = missed).

Lecture | Music Theory | Partition IDs | n-gram | Covered
1 | music accents | | | ✗
2 | pitch | 1-4 | 1 | ✓
2 | pitch class | 16-19 | 1 | ✓
2 | interval | 31-36 | 1 | ✓
2 | interval class | 97-102 | 1 | ✓
3 | stepwise melodic motion (counterpoint) | 1-4 | 2 | ✓
3 | consonant harmonic intervals (counterpoint) | 97-102 | 1 | ✓
3 | beginning scale degree (counterpoint) | 16-19 | 2 | ✓
3 | ending scale degree (counterpoint) | 16-19 | 2 | ✓
3 | beginning interval class (counterpoint) | 97-102 | 2 | ✓
3 | ending interval class (counterpoint) | 97-102 | 2 | ✓
3 | parallel perfect intervals (counterpoint) | 97-102 | 2 | ✓
3 | directed perfect intervals (counterpoint) | | | ✗
3 | law of recovery (counterpoint) | 1-4 | ≥3 | ✓
3 | contrapuntal cadence (counterpoint) | 1-4, 97-102 | 2,3 | ✓
3 | melodic minor ascending line (counterpoint) | | | ✗
4 | tri | 1. What is the main contribution of the paper, and how does it differ from other works in the field?
2. How does the reviewer assess the clarity and accessibility of the paper's content, particularly regarding mathematical definitions and notation?
3. Does the reviewer think that the authors' approach can be simplified without losing its effectiveness?
4. What are some basic elements of the approach that the reviewer believes need further explanation or definition?
5. How does the reviewer evaluate the relevance and novelty of the paper's contributions in light of existing research in algorithmic information theory (AIT)? | Review | Review
The authors propose an approach to explain a given signal
ξ
(i.e., some function of interest, such as a 2D image, or a probability distribution) by learning simple "rules" that can accurately reconstruct it. They demonstrate their approach on a music dataset and a chemistry dataset.
I like the authors' introduction, problem statement, and the somewhat unusual viewpoint and the datasets. Despite this, I cannot recommend this paper for publication. The main issue is that, starting on page 4 and without clear justification, the authors introduce a nearly-impenetrable thicket of mathematical definitions and notation. I would be more accepting of this style if the approach and results absolutely necessitated it. However, it is not clear to me that this is actually the case --- as far as I can tell, what the authors propose to do is basically (1) generate a set of simple features (functions of the input space, created by composing various primitives and symmetries), (2) select a simple subset of these features that explain the target signal
ξ
accurately. It seems to me that this kind of approach can be formulated without 90% of the machinery employed by the authors. Even if it can't, the authors should start with a simple, understandable formulation of their approach, demonstrate the corresponding results, and -- if needed -- make it more complex in order to achieve better results. Note also that, despite the high complexity, some basic elements of the approach, as are needed to understand the proposed objective function, are left undefined (for example, what does it mean to have the "Shannon entropy of a rule",
E
n
t
(
r
)
, when
r
is some arbitrary real-valued function? What is the actual distance measure
Δ
used?)
Another major issue with this paper is that the authors seem largely unaware of the closely related, and very well-established, theories of induction coming from algorithmic information theory (AIT), as developed by Solomonoff, Chaitin, Rissanen (via minimum description length), and others. It seems to me that the proposed approach, of explaining a signal by finding simple rules that accurately reconstruct it, is basically trying to find a compressed version of the signal, i.e., a simple program for the signal, which is exactly the approach advocated by AIT. The relevant literature is too vast to mention, but one starting point could be Chater and Vitanyi, Simplicity: a unifying principle in cognitive science?, TICS, 2003.
ICLR | Title
Information Lattice Learning
Abstract
Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize the signal, we need several disentangled rules arranged in a hierarchy, formalized by a lattice structure. ILL focuses on explainability and generalizability from “small data”, and aims for rules akin to those humans distill from experience (rather than a representation optimized for a specific task like classification). This paper focuses on a mathematical and algorithmic presentation of ILL, then demonstrates how ILL addresses the core question “what makes X an X” or “what makes X different from Y” to create effective, rule-based explanations designed to help human learners understand. The key part here is what rather than tasks like generating X or predicting labels X,Y. Typical applications of ILL are presented for artistic and scientific knowledge discovery. These use ILL to learn music theory from scores and chemical laws from molecule data, revealing relationships between domains. We include initial benchmarks and assessments for ILL to demonstrate efficacy.
1 INTRODUCTION
With rapid progress in AI, there is an increasing desire for general AI (Goertzel & Pennachin, 2007; Chollet, 2019) and explainable AI (Adadi & Berrada, 2018; Molnar, 2019), which exhibit broad, human-like cognitive capacities. One common pursuit is to move away from “black boxes” designed for specific tasks to achieve broad generalization through strong abstractions made from only a few examples, with neither unlimited priors nor unlimited data (“primitive priors” & “small data” instead). In this pursuit, we present a new, task-nonspecific framework—Information Lattice Learning (ILL)— to learn representations akin to human-distilled rules, e.g., producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students (shown at the end).
The term information lattice was first defined by Shannon (1953), but remains largely conceptual and unexplored. In the context of abstraction and representation learning, we independently develop representation lattices that coincide with Shannon’s information lattice when restricted to his context. Instead of inventing a new name, we adopt Shannon’s. However, we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but we also bring learning into the lattice, yielding the name ILL.
ILL explains a signal (e.g., a probability distribution) by disentangled representations, called rules. A rule explains some but not all aspects of the signal, but together the collection of rules aims to capture a large part of the signal. ILL is specially designed to address the core question “what makes X an X” or “what makes X different from Y”, emphasizing the what rather than generating X or predicting labels X, Y, in order to facilitate effective, rule-based explanations designed to help human learners understand. A music AI classifying concertos, or generating one that mimics the masters, does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one. Our focus represents a shift from much representation-learning work (Bengio et al., 2013) that aims to find the best representation for solving a specific task (e.g., classification) with little concern for explainability. Instead of optimizing a task-specific objective function (e.g., classification error), ILL balances more general objectives that favor fewer, simpler rules for interpretability, and more essential rules for effectiveness—all formalized later.
One intuition behind ILL is to break the whole into simple pieces, similar to breaking a signal into a Fourier series. Yet, rather than decomposition via projection onto an orthonormal basis and synthesis
via weighted sum, we decompose a signal in a hierarchical space called a lattice. Another intuition behind ILL is feature selection. Yet, rather than features, we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning. The goal is to restore human-like, hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice (called projection-and-lifting, Figure 1: left), resulting in more than a sum of parts.
ILL comprises two phases: (a) lattice construction; (b) learning (i.e., searching) in the lattice. This is similar to many machine learning (ML) models comprising (a) function class specification then (b) learning in the function class, e.g., constructing a neural network then learning—finding optimal parameters via back-propagation—in the network. ILL’s construction phase is prior-efficient: it builds in universal priors that resemble human innate cognition (cf. the Core Knowledge priors (Spelke & Kinzler, 2007)), then grows a lattice of abstractions. The priors can be customized, however, to cater to a particular human learner, or facilitate more exotic knowledge discovery. ILL’s learning phase is data-efficient: it learns from “small data” encoded by a signal, but searches for rich explanations of the signal via rule learning, wherein abstraction is key to “making small data large”. Notably, the construction phase is prior-driven, not data-driven—data comes in only at the learning phase. Hence, the same construction may be reused in different learning phases for different data sets or even data on different topics (Figure 1: right). Featuring these two phases, ILL is thus a hybrid model that threads the needle between a full data-driven model and a full prior-driven model, echoing the notion of “starting like a baby; learning like a child” (Hutson, 2018).
ILL is related to many research areas. It draws ideas and approaches from lattice theory, information theory, group theory, and optimization. It shares algorithmic similarity with a range of techniques including MaxEnt, data compression, autoencoders, and compressed sensing, but with a much greater focus on achieving human-like explainability and generalizability. Below, we broadly compare ILL to prominent related models, leaving more detailed comparisons with the most similar ones to the Appendix.
Compared to | ILL is
deep learning | a “white-box” model balancing human-explainability and task performance
Bayesian inference | modeling human reasoning with widely shared, common priors and few, simple rules, rather than using probabilistic inference as the driving force
tree-like models | structurally more general: a tree (e.g., decision tree or hierarchical clustering) is essentially a linear lattice (formally, a chain) depicting a unidirectional refinement or coarsening process
concept lattice in FCA (Ganter & Wille, 2012) | conceptually more general: ILL may include both known and unknown concepts; it does not require but discovers domain knowledge (more details in Appendix A)
We illustrate ILL applications by learning music theory from scores, chemical laws from compounds, and show how ILL’s common priors facilitate mutual interpretation between the two subjects. To begin, imagine Tom and Jerry are playing two 12-key pianos simultaneously, one note at a time (Figure 1: right). The frequency of the played two-note chords gives a 2D signal plotted as a 12× 12 grayscale heatmap. Inspecting this heatmap, what might be the underlying rules that govern their co-play? (Check: all grey pixels have a larger “Jerry-coordinate” and project to a black key along the “Tom-axis”.) We now elaborate on ILL and use it to distill rules for complex, realistic cases.
2 INFORMATION LATTICE: ABSTRACTIONS AND RULES OF A SIGNAL
Signal. A signal is a function ξ : X → R. For notational brevity and computational reasons, assume ξ is non-negative and X ⊆ Rn is finite (not a limitation: see Appendix B). For example, a signal ξ : {1, . . . , 6} → R can be a probability mass function (pmf) of a dice roll, or a signal ξ : {0, . . . , 27}2 → R can be a 28 × 28 grayscale image. We denote the set of all signals on X by SX.

Partition / abstraction. We use a partition P of a set X to denote an abstraction of X; we call a cell C ∈ P an (abstracted) concept. The intuition is simple: a partition of a set renders a “coarse-grained view” of the set, or more precisely, an equivalence relation on the set. In this view, we identify equivalence classes of elements (concepts) instead of individual elements. For example, the partition P = {{1, 3, 5}, {2, 4, 6}} of the six outcomes of the roll of a die identifies two concepts (odd, even).

Rule / representation. A rule of a signal ξ : X → R is a “coarsened” signal rξ : P → R defined on a partition P of X with rξ(C) := Σ_{x∈C} ξ(x) for any C ∈ P. In this paper, a rule of a signal is what we mean by a representation of a signal. If the signal is a grayscale image, a rule can be a special type of blurring or downsampling of the image; if the signal is a probability distribution, a rule can be a pmf of the “orbits” of the distribution for lifted inference algorithms (Holtzen et al., 2019; Kersting, 2012). More generally, we define a rule (regardless of any signal) over a set X by any signal on any partition of X; accordingly, we denote the set of all rules over X by RX := ∪_{P∈{all partitions of X}} SP.

Partition lattice. Abstractions are hierarchical: one coarse-grained view can be coarser than another. Let the partition lattice (PX, ⪯) of a set X be the partially ordered set (poset) containing all partitions of X, equipped with the partial order coarser than (⪯), or finer than (⪰), defined in the standard way. Let P̲ := {{x} | x ∈ X} and P̄ := {X} denote the finest and the coarsest partition, respectively. Per general lattice theory (Davey & Priestley, 2002), PX is a complete lattice: every subset P ⊆ PX has a unique supremum ∨P and a unique infimum ∧P, where ∨P is called the join of P, denoting its coarsest common refinement, and ∧P is called the meet of P, denoting its finest common coarsening.

Information lattice. The information lattice (Rξ, ⇐) of a signal ξ : X → R is the poset of all rules of ξ equipped with the partial order more general than: for any two rules r, r′ ∈ Rξ, we say r is more general than r′ (or r′ is more specific), denoted r ⇐ r′, if domain(r) ⪯ domain(r′). Notably, Rξ ⊆ RX, and Rξ is isomorphic to the underlying partition lattice via the projection defined below.

Projection and lifting. For any signal ξ ∈ SX, we define the projection operator ↓ξ : PX → Rξ by letting ↓ξ(P) be the rule of ξ on P. One can check that ↓ξ : (PX, ⪯) → (Rξ, ⇐) is an isomorphism. Conversely, we define the general lifting operator ⇑X : RX → 2^{SX} by letting ⇑X(r) denote the set of all signals that satisfy the rule r, i.e., ⇑X(r) := {ξ ∈ SX | ↓ξ(domain(r)) = r} ⊆ SX. To make lifting unique and per the Principle of Indifference (Eva, 2019), we introduce a special lifting ↑X(r) to pick the most “uniform” signal in ⇑X(r). Formally, define ‖ · ‖q : SX → R by ‖ξ‖q := (Σ_{x∈X} ξ(x)^q)^{1/q}. For any ξ, ξ′ ∈ SX satisfying ‖ξ‖1 = ‖ξ′‖1, we say that ξ is more uniform than ξ′ (or ξ′ is more deterministic) if ‖ξ‖2 ≤ ‖ξ′‖2. We define the (special) lifting operator ↑X : RX → SX by ↑X(r) := argmin_{ξ∈⇑X(r)} ‖ξ‖2 (computable by simply averaging). Notation here follows the convention for function projections to quotient spaces (Kondor & Trivedi, 2018). Lifting a single rule to the signal domain can be extended in two ways: (a) lift to a finer rule domain P instead of X, i.e., ⇑P(r) or ↑P(r); (b) lift more than one rule. Accordingly, we write ⇑ := ⇑X and ↑ := ↑X as defaults, write R = ↓ξ(P) := {↓ξ(P) | P ∈ P} ⊆ Rξ to denote a rule set, and write ⇑(R) := ∩_{r∈R} ⇑(r) = {η ∈ SX | ↓η(P) = R} and ↑(R) := argmin_{η∈⇑(R)} ‖η‖2 to denote the signals that satisfy all rules in R (general lifting) and the most uniform one among them (special lifting), respectively. More computational details on lifting and its intimate relation to join are in Appendix C.
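As a minimal sketch of projection and special lifting, with a partition stored as a list of cells (an assumed encoding), using the dice example above:

def project(xi, partition):
    """Rule of a signal xi on a partition: r(C) = sum of xi over the cell C."""
    return [sum(xi[x] for x in cell) for cell in partition]

def lift(rule, partition):
    """Special lifting: the most uniform signal satisfying the rule (cellwise average)."""
    return {x: value / len(cell)
            for value, cell in zip(rule, partition) for x in cell}

die = {1: .1, 2: .2, 3: .1, 4: .2, 5: .1, 6: .3}   # a pmf signal on {1, ..., 6}
odd_even = [[1, 3, 5], [2, 4, 6]]
r = project(die, odd_even)     # [0.3, 0.7]: the odd/even rule
eta = lift(r, odd_even)        # {1: 0.1, 3: 0.1, 5: 0.1, 2: 0.2333..., ...}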
3 INFORMATION LATTICE LEARNING (ILL)
We first formalize ILL as a single optimization problem and then solve it practically in two phases. Let ξ : X → R be a signal we want to explain. By explaining, we mean searching for a rule set R = ↓ξ(P) ⊆ Rξ such that: (a) R recovers ξ well, or R is essential; (b) R is simple. The main idea agrees with Algorithmic Information Theory (Chaitin, 1987; Chater & Vitányi, 2003), but we use an information-lattice-based formulation focusing on explainability. We start our formulation below.
We say a rule set R recovers the signal ξ exactly if ↑(R) = ξ. Yet, exact recovery may not always be achieved. The information loss occurs for two reasons: (a) insufficient abstractions, i.e., the join ∨P is strictly coarser than the finest partition P̲; (b) the choice made in favor of uniformity is inappropriate. Instead of pursuing exact recovery, we introduce ∆(↑(R), ξ)—a distance (e.g., ℓp distance) or a divergence (e.g., KL divergence) function—to measure the loss, with a smaller ∆ indicating a more essential R. We say a rule set R is simpler if it contains fewer and simpler rules. Formally, we want R minimal, i.e., each rule r ∈ R is indispensable for achieving the same ↑(R). Also, we want each rule r ∈ R informationally simple, measured by a smaller Shannon entropy Ent(r), so that r is more deterministic (Falk & Konold, 1997), easier to remember (Pape et al., 2015), and closer to our common-sense notion of a “rule”. Notably, the partial order renders a tradeoff between the two criteria: r ⇐ r′ implies that r is dispensable in any R ⊇ {r, r′}, but on the other hand Ent(r) ≤ Ent(r′), so including more-specific rules makes the rule set small yet each individual rule (informationally) hard.
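A sketch of how Ent(r) can be computed, under the assumption that the (non-negative) rule values are first normalized into a pmf over the partition cells; this is exact when the signal itself is a probability distribution.

import math

def rule_entropy(r):
    """Shannon entropy (in bits) of a rule given as a list of non-negative values."""
    total = sum(r)
    return -sum(v / total * math.log2(v / total) for v in r if v > 0)

print(rule_entropy([0.3, 0.7]))    # the odd/even rule: about 0.881 bits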
The main problem. The formal definition of an ILL problem is: given a signal ξ : X → R,

minimize_{R⊆Rξ} ∆(↑(R), ξ) subject to R is minimal; Ent(r) ≤ ε for any r ∈ R. (1)

The search space involves the full information lattice (Rξ, ⇐), or isomorphically, the full partition lattice (PX, ⪯). Yet, the size of this lattice, i.e., the Bell number B|X|, scales faster than exponentially in |X|. It is unrealistic to compute all partitions of X (unless X is tiny), let alone the partial order. Besides computational concerns, there are two reasons to avoid the full lattice (but to leave it implicitly in the background): (a) the full lattice has unnecessarily high resolution, comprising many nearly-identical partitions, particularly when X is large; (b) considering explainability, not every partition has an easy-to-interpret criterion by which the abstraction is made. As such, Formulation (1) is only conceptual and impractical. Next, we relax it and make it practical via the two ILL phases.
3.1 PRACTICAL LATTICE CONSTRUCTION: TO START LIKE A BABY (PHASE I)
Information lattice construction plays a role similar to building a function class in ML, sometimes called meta-learning. While its importance is commonly understood, the construction phase in many data-driven models is often treated cursorily—using basic templates and/or ad-hoc priors—leaving most of the computation to the learning phase. In contrast, we put substantial effort into our prior-driven construction phase. Pursuing generality and interpretability, we want universal, simple priors that are domain-agnostic and close to the innate cognition of a human baby (Marcus, 2018). Here we draw those from Core Knowledge (Spelke & Kinzler, 2007; Chollet, 2019), which include “the (small) natural numbers and elementary arithmetic prior” and “the elementary geometry and topology prior”. We then give algorithms to construct abstractions from these priors, and consider such a construction prior-efficient if it is interpretable, expressive, and systematic. In the following flowchart, we summarize information lattice construction as generating a partition sublattice.
seeds (priors) F, S → (Step 1) features/symmetries Φ⟨F⟩, G⟨S⟩ → (Step 2) partition multiset P⟨F,S⟩ = P_Φ⟨F⟩ ∪ P_G⟨S⟩ → (Step 3) partition poset (P⟨F,S⟩, ⪯) → (Step 4) partition semilattice ⟨P⟨F,S⟩⟩∨ → (Step 5) partition sublattice ⟨P⟨F,S⟩⟩∨∧···
(Steps 1-2 form the prior-driven stage; Step 3 the hierarchy stage; Steps 4-5 the completion stage.)
Steps 1-2 (feature/symmetry-induced partitions). Unlike data clustering, our prior-driven partitions are induced from two data-independent sources—features and symmetries. We draw priors—in the form of seed features F and seed transformations S—from Core Knowledge as a basis, and then generate a set of partitions P⟨F,S⟩ as follows (as an example, for X = R2):
F = {w[1], w[2], w[1,2], sort, argsort, sum, diff, div2, . . . , div19, mod2, . . . , mod19} (2) S = {horizontal, vertical, diagonal translations} ∪ {rotations} ∪ {reflections} (3)
Φ〈F 〉 : set of features generated by F via function composition G〈S〉 : set of subgroups generated by subsets of S via subgroup generation PΦ〈F 〉 : set of partitions generated by features in Φ〈F 〉 via preimages PG〈S〉 : set of partitions generated by subgroups in G〈S〉 via orbits
In (2), wI denotes coordinate selection (like indexing/slicing in python) and the other functions are defined as in python (div and mod are like python's divmod). Then, P⟨F,S⟩ = P_Φ⟨F⟩ ∪ P_G⟨S⟩.

Step 3 (partition poset). We next sort P⟨F,S⟩, computationally a multiset, into the poset (P⟨F,S⟩, ⪯). We import the algorithmic skeleton from generic poset-sorting algorithms (Caspard et al., 2012; Daskalakis et al., 2011), with an outer routine incrementally adding elements and querying an inner subroutine (an oracle) for pairwise comparison. Yet, our poset is special: its elements are tagged partitions, where a tag records the generating source(s) of its tagged partition, e.g., features and/or symmetries. So, we have specially designed both the outer routine ADD_PARTITION and the oracle COMPARE by leveraging (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tag (valid for tagged partitions) to pre-determine or filter relations. We relegate the details to Appendix E. The data structures for posets include po_matrix and hasse_diagram, encoding the partial order ≺ (ancestors/descendants) and the cover relation ≺c (parents/children), respectively (Garg, 2015).

Steps 4-5 (partition semi/sublattice). To complete (P⟨F,S⟩, ⪯) into a lattice, we compute the sublattice (of PX) generated by P⟨F,S⟩. We follow the idea of alternating join-and-meet completions borrowed from one of the two generic sublattice-completion methods (Bertet & Morvan, 1999). A discussion of our choice and other related methods is in Appendix D. However, we implement join-semilattice completion (meet-semilattice completion is dual) in our special context of tagged partitions, which echoes what we did in Step 3 and reuses ADD_PARTITION. The adjustments are (a) changing tags from features and symmetries to join formulae and (b) changing the inner subroutine from pairwise comparison to computing joins. We then run a sequence of alternating joins and meets to complete the lattice. For interpretability, one may want to stop early in the completion sequence. While a single join or meet remains simple for human interpretation—often understood as the intersection or union of concepts (e.g., the join of colored items and sized items gives items indexed by color and size)—having alternating joins and meets may hinder comprehension. More details on single-step join-semilattice completion, the completion sequence, and tips on early stopping are relegated to Appendix E.
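As a minimal sketch of Steps 1-2, the following generates partitions from features via preimages and from symmetries via orbits; the toy domain and the particular seeds are illustrative.

from collections import defaultdict
from itertools import product

def preimage_partition(feature, X):
    """Partition of X induced by a feature: cells are preimages of feature values."""
    cells = defaultdict(list)
    for x in X:
        cells[feature(x)].append(x)
    return {x: i for i, cell in enumerate(cells.values()) for x in cell}

def orbit_partition(generators, X):
    """Partition of X into orbits under the group generated by bijections on X."""
    part, next_id = {}, 0
    for x in X:
        if x in part:
            continue
        stack = [x]
        while stack:                   # flood-fill the orbit of x
            y = stack.pop()
            if y in part:
                continue
            part[y] = next_id
            stack.extend(g(y) for g in generators)
        next_id += 1
    return part

X = list(product(range(24), repeat=2))                  # a toy 2D domain
P1 = preimage_partition(lambda x: x[0] % 12, X)         # feature tag: mod12 ∘ w[1]
P2 = preimage_partition(lambda x: tuple(sorted(x)), X)  # feature tag: sort ∘ w[1,2]
P3 = orbit_partition([lambda x: (x[1], x[0])], X)       # symmetry tag: a reflection

Note that P2 and P3 induce the same partition, so the same abstraction carries both a feature tag and a symmetry tag, an instance of the multi-source tags discussed below.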
3.2 PRACTICAL LATTICE LEARNING: TO LEARN LIKE A CHILD (PHASE II)
Learning in an information lattice means solving the optimization Problem (1), i.e., searching for a minimal subset of simple rules from the information lattice of a signal so as to best explain that signal. Let P• be the sublattice (or semilattice, or poset, if early stopped) from the construction phase. Projecting a signal ξ : X → R to P• yields the information sublattice R• := ↓ξ(P•) ⊆ Rξ. It is worth reiterating that (a) P• is constructed first and is data-independent; (b) ξ (data) comes after P•; (c) (R•, ⇐) is isomorphic to (P•, ⪯): R• retains the partial order (po_matrix and hasse_diagram) and interpretability from P•. As such, R• is what is given at the beginning of the learning phase. The main problem (relaxed). For practicality, we relax Problem (1): instead of the full lattice Rξ, we restrict the search space to R•; instead of minimal rule sets, we consider only antichains (whose elements are mutually incomparable), a necessary condition for minimality. This yields:
minimize_{R⊆R•} ∆(↑(R), ξ) subject to R is an antichain; Ent(r) ≤ ε for any r ∈ R. (4)
To solve Problem (4), we adopt a (greedy) idea similar to principal component analysis (PCA): we first search for the most essential rule—the one that decreases ∆ most—in explaining the signal, then the second most essential rule in explaining the rest of the signal, and so on. Specifically, we start with an empty rule set R(0) := ∅, and add rules iteratively. Let R(k) be the rule set formed by Iteration (Iter) k and R(k)⇐ := {r ∈ R• | r ⇐ r′ for some r′ ∈ R(k)}. Let R≤ε := {r ∈ R• | Ent(r) ≤ ε}. Then,
(in Iter k + 1) minimize ∆(↑(R(k) ∪ {r}), ξ) subject to r ∈ R(k)_feasible := R≤ε − R(k)⇐. (5)

We pre-compute R≤ε (instead of the whole R•) before the iterations, which can be done by a breadth-first search (BFS) on P•'s hasse_diagram, from the bottom (the coarsest) up. By the monotonicity of Ent w.r.t. the partial order (cf. the grouping axiom of entropy (Cover & Thomas, 2012)), any BFS branch ends once the entropy exceeds ε. (For later use, we save the set R>ε of ending rules in the BFS, i.e., the lower frontier of R>ε.) In contrast, R(k)⇐ is computed per iteration (by querying P•'s po_matrix).
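A sketch of this bottom-up BFS, assuming finer_covers(p) returns the one-step finer partitions of p in the Hasse diagram and ent(p) returns the entropy of the rule on p.

from collections import deque

def sublevel_rules(coarsest, finer_covers, ent, eps):
    """Collect all rules with entropy <= eps; a branch ends once entropy exceeds eps.

    Returns (R_le, frontier): the sublevel set and the ending rules (the lower
    frontier of R_>), the latter used to initialize the next BFS on the eps-path.
    """
    R_le, frontier, seen = set(), set(), {coarsest}
    queue = deque([coarsest])
    while queue:
        p = queue.popleft()
        if ent(p) > eps:
            frontier.add(p)        # entropy is monotone, so prune this branch
            continue
        R_le.add(p)
        for q in finer_covers(p):  # climb one step up the lattice
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return R_le, frontier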
Nested vs. alternating optimization. Computing ↑(R(k) ∪ {r}) requires solving a minimization, so Problem (5) is a nested optimization: argmin_{r∈R(k)_feasible} ∆(argmin_{η∈⇑(R(k)∪{r})} ‖η‖2, ξ). One may de-nest the two: instead of comparing rules by lifting them up to the signal domain, we compare them “downstairs” on their own rule domains. So, instead of minimizing (5)'s objective, we

maximize_{r ∈ R≤ε − R(k)⇐} ∆(↓↑(R(k))(domain(r)), ↓ξ(domain(r))) = ∆(↓↑(R(k))(domain(r)), r). (6)
The idea is to find the rule domain on which the recovered ↑(R(k)) and the target signal ξ exhibit the largest gap. Adding this rule to the rule set maximally closes the gap in (6), and tends to minimize the original objective in (5). Nicely, in (6) the lifting does not involve r, so (5) is de-nested, which further iterates into an alternating min-max (or lift-project) optimization. Let r(k)⋆ be the solution and ∆(k)⋆ the optimal value in Iter k. We update R(k+1) := R(k) ∪ {r(k+1)⋆} − {r(k+1)⋆'s descendants} (so the rule set is always an antichain), and proceed to the next iteration. Iterations end whenever the feasible set is empty, or may end early if the new rule becomes less essential, measured by |∆(k+1)⋆ − ∆(k)⋆| ≤ γ in the nested setting and ∆(k)⋆ ≤ γ in the alternating setting (for some γ). The full learning path & complexity. We denote a solve process for Problem (6) by SOLVE(ε, γ), or SOLVE(ε) if γ is fixed ahead of time. To avoid tuning ε manually, we solve an ε-path. For ε1 < ε2 < · · ·, assuming SOLVE(εi) takes Ki iterations, we run the following to solve the main relaxed Problem (6):
∅ = R(0) → SOLVE(ε1) → R(K1) → SOLVE(ε2) → R(K1+K2) → · · · (7)

So, lattice learning boils down to solving a sequence of combinatorial optimizations on the Hasse diagram of a lattice. We walk through the full process (7) via a toy example, starting with a signal ξ : {0, . . . , 27}2 → [0, 1] denoting an image of “7” and a toy-sized information lattice of the signal (Figure 3A). The sequence of optimizations (7) proceeds at two paces concurrently: the slower pace is indexed by i; the faster pace is indexed by the iteration number k. As mentioned earlier, the sets R≤εi are pre-computed at the slower pace, with the (i+1)th BFS initialized from R>εi (the ending rules in the ith BFS). The monotonicity of Ent w.r.t. the partial order assures that these BFSs add up to a single (global) BFS on the entire Hasse diagram, climbing up the lattice from the bottom. This is shown in Figure 3B as the monotonic expansion of the blue region (R≤ε) explored by BFS. Locally, at each iteration along the slower pace, solving Problem (6) is quadratic in the worst case when the feasible set is an antichain (i.e., no order), and linear in the best case when the feasible set is a chain (i.e., totally ordered). Since the local BFSs add up to a single BFS with a standard linear complexity, the entire learning phase has a total complexity between linear and quadratic in the number of vertices and edges in the whole Hasse diagram. In general, the denser the diagram, the lower the complexity. This is because R(k)⇐ tends to be large in this case, with more descendants activated (i.e., red in Figure 3B), which in turn effectively shrinks the feasible set (i.e., the blue region minus red). For example, unlike the first three iterations in Figure 3B, the 4th iteration (ε = 3) activates more than one rule, including the one being extracted as well as all its unexplored descendants. Further, the upper bound is rarely reached. Unlike in this toy example, BFS in practice is often stopped early when ε becomes large, i.e., when later rules become more random. Hence, targeting more deterministic and disentangled rules only, not all vertices and edges are traversed by BFS. At the end of the learning process, for explanatory purposes, we store the entire ε-path and the (R(k))k≥0 sequence instead of just the very last one. This yields a rule trace as the standard ILL output, which we present below.
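A sketch of one SOLVE(ε) pass in the alternating setting; all the callables (lift, project, delta, descendants) and the rule encodings are assumed to behave as defined above.

def solve(feasible, rules, lift, project, delta, descendants, gamma):
    """Greedy alternating lift-project loop extracting an antichain of rules."""
    R = {}
    feasible = set(feasible)
    while feasible:
        recovered = lift(R)                    # lift upstairs: most uniform recovery
        gaps = {p: delta(project(recovered, p), rules[p]) for p in feasible}
        p_star = max(gaps, key=gaps.get)       # largest gap downstairs, cf. (6)
        if gaps[p_star] <= gamma:              # the next rule is no longer essential
            break
        R[p_star] = rules[p_star]
        feasible -= {p_star} | set(descendants(p_star))  # keep R an antichain
    return R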
How to read ILL output. ILL outputs a rule trace comprising an evolving sequence of rules, rule sets, and recovered signals (Figure 3C). The three sequences are indexed by iteration and by the ε-path, so the rule set at the last iteration under any ε (starred) is the returned solution to the main Problem (4). We depict a rule by its lifting, since it sketches both the partition and the rule values. Figure 3C gives a full presentation of a rule trace. We also introduce a two-line shorthand (Figure 3D), keeping only the sequence of the recovered signals and that of the rules. A rule trace answers what makes ξ an ξ, or what are the best ε-simple rules explaining ξ. ILL rules are more interpretable than just eyeballing patterns. (a) The interpretability of the trace is manifest in its controllability via ε, γ: a smaller ε for simpler rules and a larger γ for more essential rules. (b) The interpretability of each rule is gained from its partition tag—the criteria by which the abstraction is made. A tag may contain several generating sources as different interpretations of the same rule abstraction. Like different proofs of a theorem, a partition tag with multiple sources reveals equivalent characterizations of a structure and thus more insights into the signal. So, tags are not only computationally beneficial in constructing lattices, but also key to interpretation. We present in-depth analyses of tags in the applications below.
4 ILL EXAMPLES
We show typical ILL examples of knowledge discovery in art and science: learning music theory from scores and chemical laws from compounds (while relegating more analyses on handwritten digits to Appendix F). For both, we fix the same priors—F, S in (2)(3)—and thus the same lattice. We fix the same parameters: the ε-path is 0.2 < 3.2 < 6.2 < · · · (tip: a small offset at the beginning, e.g., 0.2, is used to get nearly-deterministic rules) and γ is 20% of the initial signal gap. This fixed setting is used to show generality and for comparison. Yet, the parameters can be fine-tuned in practice.
Music illustration. Signals are probability distributions of chords encoded as vectors of MIDI keys. Figure 4a) shows such a signal—the frequency distribution of two-note chords extracted from the soprano and bass parts of Bach’s C-score chorales (Illiac Software, Inc., 2020)—with the learned rule trace listed below. The first rule is tagged by argsort ◦w[1,2] and has probability all concentrated in one cell whose elements have a larger y-coordinate (the black region above the diagonal). So, this is a deterministic rule, echoing the law of “no voice crossing (N.V.C.)”, i.e., soprano higher than bass. Checking later rule tags finds laws of voice range (V.R.), diatonic scale (D.S.), and consonant interval (C.I.)—almost all of the main static rules on two-voice counterpoint. Notably, the third rule is tagged by both mod12 ◦ w[1] and vertical translation invariance. From both feature and symmetry views, this tag identifies the concept of all Cs, all Ds, etc., which is the music concept of pitch class. The feature view explicitly reveals a period of 12 in pitches—the notion of an octave (in defining pitch class); the symmetry view reveals the topology—the manifold where the concepts lie—in this case a 2D torus.
Chemistry illustration. Signals are boolean-valued functions indicating the presence of compound formulae encoded as vectors of atomic numbers in a molecule database. Figure 4b) shows a signal attained by collecting two-element compounds from the Materials Project database (Jain et al., 2013) of common compounds. The first rule tagged by div18 ◦w[2] is deterministic: Element 2 can never be
Ar, K, Ca. It nicely captures the visual pattern in Figure 4b) (the last three vacant columns) and hints suggestively at some chemistry rules. The second rule tagged by mod8 ◦w[2] has peaks at cells tagged by feature values 1, 7, 0, 6. These cells, for Element 2, are halogens (+H), pnictogens, chalcogens, crystallogens. The third rule shows alkali metals, alkaline earth metals, crystallogens, icosagens are the cells common for Element 1. Next rule shows the common combinations, e.g., alkali metals and halogens. Note that the 2nd, 3rd, 4th rules for chemistry and the 5th, 3rd, 4th rules for music share the same tags, except that mod12 becomes mod8—period changes from 12 (a music octave) to 8 (number of main groups). So, when two chemical elements form a compound, they are like two music notes forming a chord! The music concepts of pitch classes and intervals parallel the chemical concepts of groups and their distances. Although abstractions are shared, rules differ. Instead of a diatonic scale in Bach’s chorales, chemistry uses a “cation scale” and an “anion scale”. It is interesting that our intention to show ILL’s generality (same lattice, parameters for different subjects) also suggests links between art and science by interpreting phenomena (signals) in one subject from the perspective of the other (Bodurow, 2018). Applications that extend the experiment here beyond a clustering model to restore the periodic table (Zhou et al., 2018) and render complex molecules in high dimensions are ongoing, aiming to discover new laws, new interpretations of existing laws, and new materials.
Real-world deployment & evaluation. We generalized the music illustration to a real app of an automatic music theorist (Yu et al., 2016; Yu & Varshney, 2017). It specially implements the alternating min-max setting as a “student-teacher” model: the student is a (music) generator and the teacher is a discriminator. The two form a loop where the teacher guides the student towards a target style through iterative feedback (extracting rules) and exercise (applying rules). This app extends the above music illustration considerably. It considers more music voices, so now signals are in higher dimensions and rules are on more complex chord structures. It considers temporal structure, so now signals include many (un)conditional chord distributions (multi-n-grams), yielding both context-free and context-dependent rules, but new challenges too, namely rare contexts and contradictory rules. ILL's core idea of abstraction makes “small data” large and thus rare contexts common (Yu & Varshney, 2017), and a redesigned lifting operator resolves contradictions (Yu et al., 2017). Further, parameters like ε, γ are made into self-explanatory knobs for users to personalize their learning pace.
We conducted two studies to assess rule-learning capability and interpretability. We present the main results here and detail the procedures in Appendix G. In the first study, we compared ILL-discovered rules with human-codified domain knowledge to see how much of the known can be reproduced and how much new can be discovered. Trained on just 370 Bach's chorales, our model reproduced in explicit form a large portion of the human-codified knowledge (Figure 5a: covered 66%, hinted 26%, missed 7%).
Under review as a conference paper at ICLR 2021 the histogram—a symbolic and pictorial encoding. Students were explicitly instructed that writing out a description that was basically a literal repetition of the histogram (e.g., taking a modulo 12 of a chord results in a 91.2% chance of being 0, 0, 4, 7) is not acceptable: they must reveal the music behind the math. In fact, we made it clear to the students that we only want qualitative descriptions. Students were specifically told (in the instructions) to only pay attention to the relative values of the probabilities whose exact numbers are unimportant (e.g., what are most likely, what are more likely, what are almost impossible). This homework was due in two weeks. During the two-week period, we asked the students to complete it independently, with no group work or office hours.
Assess Human Interpretations. The homework was designed so that every rule histogram encoded at least one music concept/rule consistent with standard music theory. In addition, every histogram contained either one additional known music rule or something strange that either conflicted with a known rule or represented something new. We assigned two points per rule. Further, we made an initial rubric containing the (authoritative) music keywords used to describe every rule histogram. Because students' answers arrived in the form of qualitative text, to ensure the credibility and fairness of the initial rubric, we held a discussion session at a regular lecture time (80 minutes) with all students as well as the teaching staff. During the discussion session, we went over all 25 rules one by one. For each, we first announced the keywords in the initial rubric and explained to the students that these keywords would later be used to grade their homework. Every student was encouraged to object to any of our announced keywords and/or to propose new keywords accompanied by a convincing explanation. New or modified keywords that were commonly agreed upon were added to the initial rubric. By the end of the discussion session, we had compiled a more inclusive rubric containing broadly accepted keywords. This rubric-generating process was transparent to all the students. In the final step, we manually graded every student's answer sheet against the keywords in the rubric and computed their scores. A summary of the students' performances is presented in Table 5. Except for cases where the student did not do the homework, a major source of score deduction was misunderstanding the n-gram (e.g., the probability of the current chord conditioned on the previous chord was mistakenly interpreted as the probability of the previous chord conditioned on the current one). This may be largely due to unfamiliarity with n-gram models among new CS+Music students. Nevertheless, the majority of the students who did the homework (2/3) succeeded (with respect to the 30/50 passing grade) in interpreting the rules generated from ILL, which in turn provides evidence on the interpretability of the AI-produced knowledge itself.
Table 5: Students' final scores.

    Score Range    # of Students
    50             3
    [40,50)        7
    [30,40)        2
    [20,30)        4
    [10,20)        1
    [0,10)         1
    0              5
H CONCLUSION AND BROADER IMPACTS
Model transparency and interpretability are important for trustworthy AI, especially when interacting directly with people such as scientists, artists, and even multidisciplinary researchers bridging the Two Cultures (Snow, 1959), e.g., music and chemistry. The core philosophy underlying ILL arises from a human-centered standpoint and our long-term pursuit of "getting humanity back into artificial intelligence". We strive to develop human-like artificial intelligence, which in turn may help advance human intelligence—a goal at the intersection of AGI (artificial general intelligence (Goertzel & Pennachin, 2007)), XAI (explainable artificial intelligence (Adadi & Berrada, 2018)), and "AI as augmented intelligence" (Jordan, 2019).
As such, the focus of interpretability in this line of research is not just the end result of the model, but the entire learning process. This emphasis on process is not only manifest in this paper (e.g.,
Figure 5: ILL assessments on knowledge discovery tasks. (A) How much known: covered 66%, hinted 26%, missed 7%. (B) How interpretable: distribution of student scores. (C) How much new: figured soprano (entropy = 4.76), figured alto (entropy = 4.78), figured tenor (entropy = 4.80), figured bass (entropy = 4.34).
In the first study, our model also discovered new rules that interested our colleagues in the music school. (a) Tritone resolution is crucial in tonal music, yet in Bach's chorales, tritones sometimes do not resolve in typical ways, but consistently transition to other dissonances like a minor seventh. (b) A new notion of "the interval of intervals" was consistently extracted in several rule traces. This "second derivative", like acceleration in mechanics, might suggest a new microscopic chord structure heretofore unconsidered. (c) New symmetry patterns reveal new harmonic foundations. As a parallel to the concept of harmony traditionally built on figured bass (dominant in Bach's chorales, as confirmed by ILL), ILL reveals "figured soprano" as the next best alternative in explaining Bach's music (Figure 5C). Although it is not the best view for explaining Bach according to ILL and is not included in any standard music theory class, it may be a valuable perspective for music that starts to deviate from the classical style. This was confirmed by domain experts (Sokol, 2016), with more details at the end of Appendix G.1.
5 DISCUSSION: LIMITATIONS AND CHALLENGES
As a first step, we devise a new representation-learning model intended to be both theoretically sound and intrinsically interpretable. This paper shows typical setups and applications, but ILL is a general framework that admits new designs of its components, e.g., projection-and-lifting or priors. Notably, designing a lattice not only sets the rule-learning capacity but also the "vocabulary" for interpretation which, like the Sapir-Whorf hypothesis for human language, limits how a lattice explains signals. Likewise, priors have pros and cons depending on what we seek to explain and to whom (e.g., not all signals are best explained by symmetry, nor can everyone read symmetry equally well). One solution is to explore multiple lattices while balancing expressiveness and computation—a common practice in picking ML models too. Further, whether a signal is indeed governed by simple rules requires rethinking. Sometimes no rules exist; ILL will then indicate this, and a case-by-case study will be needed. Sometimes rules are insufficient: is music in fact governed by music theory? Theory is better viewed as necessary but not sufficient for good music: great composers need not be great theorists.
Beyond the studies here comparing against human-codified knowledge and using human-subject experiments for interpretability, more systematic ILL benchmarking and assessment remain challenging and need long-term effort. Benchmarking is not as easy as in task-specific settings (Chollet, 2019), requiring better comparison schemes or a downstream task. Effective ILL assessments must focus on new discoveries and the ability to assist people. Instead of a Turing test for machine-generated music, one may (at a meta-level) consider tests between independent and machine-aided compositions, where both are done by humans. Further, ILL may be combined with other models, yielding an ILL version of deep learning or vice versa, e.g., using ILL as a pre-processing or post-interpretation module in other models to achieve superior task performance as well as controllability and interpretability. Another possibility is to use ILL to analyze attention matrices (as signals) learned from BERT or GPT (Rogers et al., 2020). More future visions are in Appendix H.
A CONNECTION TO CONCEPT LATTICE
Per our definition, a concept refers to a component of an abstraction, or more precisely, is a cell in a partition or an equivalence class under an equivalence relation. This definition is consistent with a formal concept defined in formal concept analysis (FCA) (Ganter & Wille, 2012; Ganter et al., 2016; Priss, 2006) as a set of objects (extent) sharing a set of attributes (intent), which can also be treated as objects that are equivalent under the attributes. However, our definition of a concept generalizes that of a formal concept in two ways. First, in our case, a partition or an equivalence relation is not induced from domain-specific attributes through formal logic and formal ontology, but from universal priors drawn from the Core Knowledge (detailed in Section 3.1 in the main paper). Second, specifying a partition considers all of its concepts, whereas specifying a set of formal concepts only considers those with respect to a given formal context. As a result, partition lattices in our case generalize concept lattices in FCA, and are not generated, hence not constrained, by domain knowledge such as that encoded in formal ontologies.
Mathematically, let (P_X, ⪯) be the partition lattice comprising all partitions of X and (2^X, ⊆) be the subset lattice comprising all subsets of X. Clearly, the power set 2^X is the same as {C ∈ P | P ∈ P_X}. That is, the subset lattice is also the lattice comprising all concepts from all partitions of X, which can then be called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many cases, it tends to miss many concepts if they are not known in the existing ontology. We give two examples below to further illustrate the connection between a partition lattice and a concept lattice.
First, consider biological taxonomy. Dogs and cats are two concepts in species, which is an abstraction containing other concepts such as eagles. Likewise, mammals and birds are two concepts in class, which is an abstraction containing other concepts such as reptiles and insects; further, animals and plants are two concepts in kingdom. In light of hierarchy, as abstractions, species, class, and kingdom form a chain from finer to coarser (in a partition lattice); as concepts, dogs ⊆ mammals ⊆ animals (in a concept lattice). Note that when forming a concept lattice, one may not need to include, say, all species. Yet when having species as an abstraction in a partition lattice, this abstraction must contain all species, known and unknown, where the latter are usually of more interest for knowledge discovery.
Second, consider music theory. C major triads, C minor triads, and B diminished triads are concepts in an abstraction induced by music octave-shift and permutation invariance. Further, major triads, minor triads, and diminished triads are concepts in another abstraction induced by music octave-shift, permutation, and further transposition invariance. Clearly, for abstractions, the former abstraction is finer than the latter; for concepts, the set of C major triads is a subset (or a special case) of the set of major triads. However, chords that are not defined in traditional music theory but appear as new concepts in a known abstraction (e.g., the two above) may be more interesting, since they may suggest new composition possibilities while still obeying the same music abstraction, in this case the same music symmetry. New concepts from new abstractions may push the composition boundary even further, suggesting new types of chords discovered from e.g., new symmetry (but possibly within a known symmetry family). See the end of Appendix G.1 for more examples from new discoveries.
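To make the two abstraction levels above concrete, the following minimal sketch (our illustrative Python and chord encoding, not the paper's implementation) maps chords to canonical forms: octave-shift plus permutation invariance yields sorted pitch-class sets, and adding transposition invariance yields a strictly coarser class.

    # Sketch (ours): two chord abstractions as canonical forms.
    def octave_perm_class(chord):
        """Cell under octave-shift + permutation invariance: sorted pitch classes."""
        return tuple(sorted({p % 12 for p in chord}))

    def transposition_class(chord):
        """Coarser cell that further quotients by transposition."""
        pcs = octave_perm_class(chord)
        return min(tuple(sorted((p - r) % 12 for p in pcs)) for r in pcs)

    c_major = [60, 64, 67]   # C E G (MIDI pitches)
    g_major = [67, 71, 74]   # G B D
    # Different concepts in the finer abstraction ...
    assert octave_perm_class(c_major) != octave_perm_class(g_major)
    # ... but the same concept (a major triad) in the coarser one.
    assert transposition_class(c_major) == transposition_class(g_major)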
B MORE GENERALIZED FORMALISM FOR INFORMATION LATTICE
The mathematical setting in the main paper is for a non-negative signal on a finite domain. However, this is not a limitation, but purely for notational brevity and computational reasons. First, regarding non-negativity, in many real scenarios, the signal is bounded and its value is only relative. In these cases, one can simply add an offset to the signal to make it non-negative. More generally, we can
consider a signal to be any measurable function ξ : X → R^n. Then the notions of an abstraction, a concept, a rule, as well as the partial order can be generalized as in Table 1. Hence, the notion of an information lattice is still well-defined in the generalized setting. The essence of the two settings lies in how we formalize an abstraction, whether using a partition or a σ-algebra. However, the two are not very different from each other: any partition of X generates a σ-algebra on X, and any σ-algebra on a countable X is uniquely generated by a partition of X (Çınlar, 2011).
Further, the main paper uses the summation functional in defining a rule of a signal, or the projection operator. However, other options are possible, e.g., mean, max, min, or a specially designed functional. The lifting operator can then be redesigned accordingly. In particular, besides always favoring the most uniform signal, the design of the special lifting can have extra freedom in considering other criteria for picking a signal from the general lifting.
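A minimal sketch of such a generalized projection with a swappable aggregation functional (our illustrative Python; the dice signal and partition encoding are ours):

    # Sketch (ours): projection with a swappable aggregation functional.
    def project(signal, partition, agg=sum):
        """Coarsen a signal {x: value} to a rule {cell: agg of values in cell}."""
        return {cell: agg(signal[x] for x in cell) for cell in partition}

    signal = {1: 0.1, 2: 0.2, 3: 0.15, 4: 0.25, 5: 0.05, 6: 0.25}   # a dice pmf
    odd_even = [frozenset({1, 3, 5}), frozenset({2, 4, 6})]

    print(project(signal, odd_even))            # sum: {odd: 0.3, even: 0.7}
    print(project(signal, odd_even, agg=max))   # max: {odd: 0.15, even: 0.25}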
C MORE INSIGHTS ON THE SPECIAL LIFTING
Consider the special lifting ↑(R) for any rule set R = ↓ξ(P) of a given signal ξ. Computing ↑(R) is simple if R = {r} contains only a single rule. In this case, ↑(R)(x) = ↑(r)(x) := r(C)/|C| for any x ∈ C ∈ domain(r), which requires simply averaging within each cell. However, computing ↑(R) becomes much less trivial when |R| > 1. By definition, we need to solve the minimization problem:

↑(R) := argmin_{η∈⇑(R)} ‖η‖₂.   (8)
Instead of directly throwing the above problem (8) into a generic optimization solver, there is a more efficient approach, which also reveals more insight into the special lifting. More specifically, one can check that any multi-rule lifting ↑(R) can be computed as a single-rule lifting ↑(r∗), where the single rule r∗ is defined on the join ∨P and is computed as follows:
r∗ := argmin_{r∈⇑_{∨P}(R)} ‖r̃‖₂,  with the weighted norm  ‖r̃‖₂ := √( Σ_{C∈∨P} r(C)² / |C| ).   (9)
So, instead of lifting R directly to the signal domain X, we lift R to the join ∨P first and then to X. Since |∨P| ≤ |X|, the minimization problem (9) is in a smaller dimension compared to the original problem (8), and thus, can be solved more efficiently. In the minimization problem (9), by definition, ⇑_{∨P}(R) := {r : ∨P → R | ↓r(P) = R}. Hence, every rule r ∈ ⇑_{∨P}(R) can be treated as a single-rule summary of the rule set R, and r∗ is one of them—the one that yields the most uniform signal. Realizing the special lifting R → ↑(R) as the two-step lifting R → r∗ → ↑(r∗) = ↑(R) reveals the following insight: given rules abstracting ξ at different levels (coarser or finer), the best one can hope to faithfully explain ξ is at the level of the join. Determining ξ at any level finer than the join would then require additional assumptions other than the rule set itself, such as the preference for uniformity used here. This further explains the two sources of information loss (join and uniformity) discussed in the recovery process of a signal (cf. Section 3 in the main paper). Notably, determining a signal even at the level of the join may be ambiguous, since the general lifting ⇑_{∨P}(R) to the join is not necessarily a singleton. This particularly implies that r∗, as one of the single-rule summaries of R, is not necessarily a rule of ξ, i.e., there is no guarantee that r∗ = ↓ξ(∨P). To make it so, we need more rules.
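The two-step computation can be sketched numerically as follows (our illustrative Python, assuming exactly two rules for brevity; the pseudoinverse performs the weighted min-norm solve of problem (9), and the final step is the uniform single-rule lifting):

    import numpy as np

    # Sketch (ours) of the two-step special lifting R -> r* -> up(r*), for
    # exactly two rules. Rules are dicts {frozenset(cell): value}.
    def join(p1, p2):
        """Coarsest common refinement: nonempty pairwise cell intersections."""
        return [c1 & c2 for c1 in p1 for c2 in p2 if c1 & c2]

    def special_lift(r1, r2):
        cells = join(list(r1), list(r2))
        sizes = np.array([len(c) for c in cells], dtype=float)
        # Constraints: r* summed over the join cells inside each rule cell
        # must reproduce that rule's value (i.e., r* projects back to both rules).
        rows, b = [], []
        for rule in (r1, r2):
            for cell, val in rule.items():
                rows.append([1.0 if jc <= cell else 0.0 for jc in cells])
                b.append(val)
        # Weighted min-norm solve of problem (9) via the change of variables
        # y_C = r*(C)/sqrt(|C|), then a plain min-norm solve with pinv.
        y = np.linalg.pinv(np.array(rows) * np.sqrt(sizes)) @ np.array(b)
        r_star = y * np.sqrt(sizes)
        # Step 2: single-rule lifting of r*, uniform within each join cell.
        return {x: r_star[k] / sizes[k] for k, c in enumerate(cells) for x in c}

    r1 = {frozenset({0, 1}): 0.5, frozenset({2, 3, 4, 5}): 0.5}
    r2 = {frozenset({0, 1, 2, 3}): 0.8, frozenset({4, 5}): 0.2}
    print(special_lift(r1, r2))   # {0: 0.25, 1: 0.25, 2: 0.15, 3: 0.15, 4: 0.1, 5: 0.1}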
D EXISTING WORK ON SUBLATTICE GENERATION
General methods for computing the sublattice L_B of a full lattice L generated by a subset B ⊆ L fall into two basic families, depending on whether the full lattice needs to be computed. The first uses alternating join- and meet-completions, with worst-case complexity O(2^{|B|}); the second characterizes the elements of L that belong to the sublattice, with complexity O(min(|J(L)|, |M(L)|)² |L|), where J(L) and M(L) denote the number of join-irreducibles and meet-irreducibles, respectively (Bertet & Morvan, 1999). The latter requires computing the full lattice, which is intractable in our case of partition lattices, as |L| = |P_X| grows faster than exponentially in |X| whereas |P⟨F,S⟩| is usually smaller than |X|. So, we use the first approach and compute alternating join- and meet-completions. The same principle of avoiding computing the full lattice has been applied to the special context of concept lattices (Kauer & Krupka, 2015), yet the technique there still requires the full formal context corresponding to the full concept lattice. Note that sublattice completion is, by definition, computing the smallest sublattice L_B (in a full lattice L) containing the input subset B ⊆ L, where L_B must inherit the meet and join operations from L. It generalizes, but is not the same as, Dedekind-MacNeille completion (Bertet & Morvan, 1999; MacNeille, 1937; Bertet et al., 1997).
E MORE DETAILS ON THE CONSTRUCTION PHASE
This section elaborates on the second half of Section 3.1 in the main paper, presenting more algorithmic details on poset construction and sublattice completion. The core data structures for posets are the so-called adjacency matrix and Hasse diagram, encoding the partial order ≺ and the cover relation ≺c, respectively (Garg, 2015). The former is best for querying ancestors and descendants of a partition within the lattice; the latter is best for querying parents and children of a partition. (A more advanced technique includes chain-decomposition, but the two here are sufficient for this paper.) More specifically,
P′ is an ancestor of P ⟺ P ≺ P′;
P′ is a parent of P ⟺ P ≺c P′ (i.e., P ≺ P′ but no P′′ satisfies P ≺ P′′ ≺ P′).

We introduce a few algorithmic notations. Given a partition poset (P, ⪯), we use P.po_matrix and P.hasse_diagram to denote the adjacency matrix and Hasse diagram of P, respectively. For any partition P ∈ P, we use P.ancestors, P.descendants, P.parents, and P.children to denote the sets of ancestors, descendants, parents, and children of P, respectively. Notably, the two data structures are important not only for the construction phase but for the subsequent learning phase as well. The core subroutine in the construction phase is ADD_PARTITION, sketched as Algorithm 1. It is the key unit step in both poset construction and (join-)semilattice completion.
Poset construction. This corresponds to Step ③ in the flowchart in Section 3.1 of the main paper. Recall that poset construction refers to the process of sorting a multiset P⟨F,S⟩ of tagged partitions into a poset (P⟨F,S⟩, ⪯), where the partition tags are features and symmetries. Naively, if we write an inner subroutine COMPARE(P, P′)—called an oracle in the related literature—to compare two partitions, sorting a multiset into a poset amounts to N(N−1)/2 calls of this pairwise comparison, where N is the size of the input multiset. So, the common idea shared in almost all poset sorting algorithms is to reduce the number of oracle calls as much as possible. As mentioned in the main paper, considering the additional properties in our case, we leverage (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to pre-determine or pre-filter relations. In other words, we want to infer from the context as many pairwise relations as possible, so that the number of actual pairwise comparisons can be minimized.
More specifically, we start from an empty poset, and call ADD_PARTITION to incrementally add partitions from the input multiset to the poset. As the outer subroutine, ADD_PARTITION leverages transitivity and partition size by maintaining three live data structures, namely size2partns, po_matrix, and hasse_diagram, so as to avoid calling COMPARE whenever possible. Consequently, COMPARE is called at only two places (see Algorithm 1): one for = and one for ≺. When called as the inner subroutine, COMPARE(P, P′) does not always perform an actual computation for the pairwise comparison. Instead, it first checks whether the tags are informative (e.g., compositions/supergroups imply coarser partitions) and only if not, makes an actual comparison. With the additional information from partition size, an actual comparison can be done in O(|X|) time
Algorithm 1: ADD_PARTITION(Pτ, P): adds a tagged partition Pτ to a partition poset (P, ⪯)

Input: a tagged partition Pτ, where the tag τ can be a feature/symmetry or a join/meet formula;
       a partition poset (P, ⪯), with the following members and hash tables:
       · every P ∈ P is a unique partition (indexed by a unique identifier)
       · P.partn2tags[P] := {τ | Pτ = P} denotes the set of all tags inducing P
       · P.size2partns[k] := {P | |P| = k} denotes the set of all P ∈ P with size k
       · P.po_matrix encodes the partial order ≺, best for getting P.ancestors/descendants
       · P.hasse_diagram encodes the cover relation ≺c, best for getting P.parents/children

Step 1: determine whether Pτ is new, by COMPARE(P, Pτ) (for =) for every P ∈ P.size2partns[|Pτ|]
       if Pτ ∈ P.size2partns[|Pτ|]: update P.partn2tags[Pτ] by adding τ; return
       else: create a new hash entry P.partn2tags[Pτ] = {τ}; proceed to Step 2

Step 2: add the new partition Pτ to P
       (2a) update P.size2partns[|Pτ|] by adding Pτ
       (2b) update P.po_matrix and P.hasse_diagram:
            – for every existing size k < |Pτ|, sorted in descending order:
                for every P ∈ P.size2partns[k]:
                    if P.parents ∩ Pτ.descendants ≠ ∅: update P.po_matrix by adding P ≺ Pτ
                    else: COMPARE(P, Pτ); update P.po_matrix and P.hasse_diagram if P ≺ Pτ
                          (here one can check: it is necessarily the case that P ≺c Pτ)
            – do the above symmetrically for every existing size k > |Pτ|, sorted in ascending order
            – (note: every P ∈ P.size2partns[k] for k = |Pτ| is incomparable with Pτ)
            – clean the cover relation: remove P∗ ≺c P∗∗ from P.hasse_diagram if P∗ ≺c Pτ ≺c P∗∗
via a mapping process. More specifically, given two partitions P, P′, without loss of generality, assume |P| ≤ |P′|. An actual comparison is made by tentatively creating a mapping ν : P′ → P. One can check that such a ν exists if and only if P ⪯ P′. Hence, if |P| = |P′| (resp. |P| < |P′|), one can determine = (resp. ≺) if ν is created successfully, or incomparability otherwise. The mapping complexity is linear in |X|, with linear coefficient 1 if the mapping succeeds and linear coefficient < 1 if it fails. In the worst case (e.g., if all partitions are incomparable), all N(N−1)/2 pairwise comparisons are required. Our algorithm works best when partitions are richly related (i.e., the Hasse diagram is dense), which is indeed the case for our tagged partitions induced from systematically formed features and symmetries.
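The mapping-based comparison admits a direct sketch (our illustrative Python, not the actual COMPARE implementation):

    # Sketch (ours): compare two partitions in O(|X|) via a cell map nu: P' -> P.
    # P is coarser than (or equal to) P' iff every cell of P' lands in one cell of P.
    def coarser_or_equal(p, p_prime):
        owner = {x: i for i, cell in enumerate(p) for x in cell}   # x -> cell id in p
        for cell in p_prime:
            if len({owner[x] for x in cell}) != 1:   # straddles two cells: fail early
                return False
        return True

    odd_even = [{1, 3, 5}, {2, 4, 6}]
    mod3 = [{1, 4}, {2, 5}, {3, 6}]
    singletons = [{x} for x in range(1, 7)]
    print(coarser_or_equal(odd_even, singletons))   # True
    print(coarser_or_equal(odd_even, mod3))         # False: incomparable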
Semilattice completion. This corresponds to Step ④ in the flowchart in Section 3.1 of the main paper. Recall that join-semilattice completion refers to the process of completing a partition poset into a semilattice. We only detail join-semilattice completion, since meet-semilattice completion can be done symmetrically. Formally, we want to compute the join-semilattice of P_X generated by the input poset (P⟨F,S⟩, ⪯). We denote the resulting join-semilattice by ⟨P⟨F,S⟩⟩∨. By definition,

⟨P⟨F,S⟩⟩∨ := {∨P | P ⊆ P⟨F,S⟩}.

Naively, if computing ⟨P⟨F,S⟩⟩∨ literally from the above definition, one has to iterate over all subsets of P⟨F,S⟩ and compute their joins. This amounts to 2^N join computations, where N = |P⟨F,S⟩| is the size of the input poset, and moreover, many of the joins are not pairwise. Yet, similar to our earlier poset construction, we may reduce the computation of joins by an incremental method, which also embeds ADD_PARTITION as a subroutine and utilizes partition sizes and tags, but now the tags are join formulae instead of features or symmetries.
More specifically, we start with an empty semilattice P, and add partitions in P⟨F,S⟩ to P one by one, from smaller-sized to larger-sized (note: the size information is maintained in P⟨F,S⟩.size2partns). When a partition P ∈ P⟨F,S⟩ is to be added, we make a tag named by itself, i.e., let Pτ := P with τ := {P}, and then call ADD_PARTITION(Pτ, P). There are two possibilities here: Pτ already exists in P (the call ends in Step 1) or Pτ is new (the call ends in Step 2). In the former case, we are done with Pτ.
In the latter case, for every P′ ∈ P \ {Pτ}, we compute the pairwise join J(P′) := ∨{Pτ, P′} and its tags T(P′) := {τ ∪ τ′ | τ′ ∈ P.partn2tags[P′]}, and call ADD_PARTITION(J(P′)_{T(P′)}, P). Like COMPARE, computing a join can be optimized by leveraging previously computed tags and partial order in the input poset P⟨F,S⟩, so as to avoid an actual join computation whenever possible. When inferring from the context is not possible, one can perform an actual join computation ∨(P, P′) in O(|X|) time. This is done by collecting the unique pairs of cell IDs (C(x), C′(x)) for every x ∈ X, where C(x) and C′(x) denote the cell IDs of x in P and P′, respectively. In the worst case (e.g., if all partitions are incomparable and join-irreducible), the complexity is inevitably O(2^N). However, as in poset construction, our algorithm works best when the partial order structure is rich.
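This cell-ID-pair grouping admits a one-pass sketch (our illustrative Python):

    # Sketch (ours): join of two partitions in O(|X|) by grouping each x on the
    # pair (cell id in P, cell id in P'), exactly as described above.
    def join(p, p_prime):
        cid = {x: i for i, c in enumerate(p) for x in c}
        cid_prime = {x: i for i, c in enumerate(p_prime) for x in c}
        cells = {}
        for x in cid:                                   # one pass over X
            cells.setdefault((cid[x], cid_prime[x]), set()).add(x)
        return list(cells.values())

    halves = [{0, 1, 2}, {3, 4, 5}]
    parity = [{0, 2, 4}, {1, 3, 5}]
    print(join(halves, parity))   # [{0, 2}, {1}, {3, 5}, {4}]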
Practical tips for sublattice completion. This corresponds to Step ⑤ in the flowchart in Section 3.1 of the main paper. Recall that constructing the sublattice of P_X generated by P⟨F,S⟩ follows the alternating process: L0 := P⟨F,S⟩, L1 := ⟨L0⟩∨, L2 := ⟨L1⟩∧, L3 := ⟨L2⟩∨, and so forth, which terminates as soon as L_{k−1} = L_k. We denote the end result by ⟨P⟨F,S⟩⟩∨∧···, which is the desired sublattice. However, we may want to stop early in the completion sequence, due to concerns about computation, interpretability, and expressiveness, as well as their tradeoffs. We suggest a practical tip on deciding where to stop. If the input poset P⟨F,S⟩ is small, run alternating joins and meets, or even complete it to the sublattice if affordable. If P⟨F,S⟩ is moderate, complete the joins only (as join is closely related to rule lifting; see Appendix C for more details). If P⟨F,S⟩ is large, just use it as is.
F MORE ANALYSES IN THE LEARNING PHASE
This section elaborates on the last paragraph of Section 3.2 in the main paper, presenting more analyses and interpretations on the rule traces elicited from the toy handwritten-digit examples. Yet, as mentioned in the main paper, computer vision is currently not among the typical use cases of ILL. Learning rules of handwritten digits may not be of much independent interest unless for calligraphy. So, the analyses and interpretations here are for illustration purposes only. We refer readers to the Broader Impact section in the main paper for possible future directions on how ILL may be used, together with other ML models, to solve computer vision tasks.
Recall that the main use case of ILL is to explain a signal ξ, answering what makes ξ an ξ. The same toy example illustrating an ILL process is replayed here in Figure 3. The signal ξ : {0, . . . , 27}2 → [0, 1] is a grayscale image of a handwritten “7”. In this case, a rule of ξ, or the projection of ξ to a partition of {0, . . . , 27}2, can be viewed as gathering “ink” within each partition cell. Accordingly, the (special) lifting can be viewed as redistributing the gathered “ink” (evenly) in each cell. Hence, we term this view the ink model. For visual convenience, we depict a rule of a 2D signal by its lifting (i.e., another grayscale image), since with pixels in the same cell colored the same, we can use the lifting to sketch both the partition and the rule values. More precisely, when a lifting represents a rule, it must be viewed in terms of blocks or superpixels; whereas a real lifting (i.e., a signal or a real image) is viewed normally by the regular pixels. To better clarify, all rules in Figure 3 are displayed in red boxes, whereas all liftings are in green ones.
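In code, the ink model amounts to summing within cells and spreading back evenly; below is a minimal numpy sketch (ours), with an arbitrary 4×4-block partition in place of the paper's tagged partitions:

    import numpy as np

    # Sketch (ours): project gathers "ink" per cell; the special lifting
    # redistributes it evenly, giving the superpixel image that depicts a rule.
    img = np.random.rand(28, 28)                 # stand-in for a handwritten "7"
    blocks = img.reshape(7, 4, 7, 4)             # partition into 4x4 cells
    rule = blocks.sum(axis=(1, 3))               # projection: a 7x7 rule
    lifted = np.repeat(np.repeat(rule / 16.0, 4, axis=0), 4, axis=1)
    assert np.isclose(lifted.sum(), img.sum())   # lifting preserves total ink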
For a simple illustration, we draw a small number of features and symmetries to generate a poset (P•) of 21 partitions. The corresponding part of the information lattice (R•) is shown by its Hasse diagram in Figure 3. Further, on top of the Hasse diagram, we demarcate the frontiers of the sublevel sets (R≤ε) by six blue dashed curves. Note that in this tiny diagram, we have sketched a full range of sublevel sets, yet for large diagrams, sublevel sets are constructed for small ε-values only, in a single-pass BFS. The right part of Figure 3 illustrates a complete ILL process in the alternating setting, with lift and project signified by the green up-arrows and red down-arrows, respectively. During the learning process, ILL tries to minimize the gap in the signal domain (upstairs) through iterative eliminations of the largest gap in the rule domain (downstairs). Eliminating a larger rule gap tends to imply a larger drop in the signal gap, but not necessarily in every iteration, since the special lifting may accidentally recover a better signal if the assumed uniformity is, by chance, present in the signal. The rule set R(k) formed per iteration is presented in the middle of the right part of Figure 3, which jointly shows the complete rule trace continuously progressing along the ε-path.
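The loop can be sketched end-to-end on a toy signal (our illustrative Python; the three hand-picked partitions stand in for a sublevel set R≤ε):

    import numpy as np

    # Toy, runnable sketch (ours) of the alternating loop: greedily pick, from a
    # small candidate pool standing in for a sublevel set, the rule with the
    # largest gap between the current recovery and the signal, then re-lift.
    rng = np.random.default_rng(0)
    n = 12
    sig = rng.random(n)
    sig /= sig.sum()                                      # the signal xi

    pool = [[np.arange(n) % 2 == k for k in range(2)],    # parity partition
            [np.arange(n) % 3 == k for k in range(3)],    # mod-3 partition
            [np.arange(n) < 6, np.arange(n) >= 6]]        # halves partition

    def project(s, p):
        return np.array([s[c].sum() for c in p])

    def lift(chosen):
        """Special lifting: min-norm signal satisfying all chosen rules (pinv).
        Note: may dip slightly negative; real ILL keeps signals non-negative."""
        ps = chosen + [[np.ones(n, dtype=bool)]]          # coarsest rule: total mass
        A = np.vstack([np.vstack([c.astype(float) for c in p]) for p in ps])
        b = np.concatenate([project(sig, p) for p in ps])
        return np.linalg.pinv(A) @ b

    chosen = []
    for _ in range(len(pool)):
        eta = lift(chosen)                                # current recovery
        gaps = [np.abs(project(eta, p) - project(sig, p)).sum() for p in pool]
        chosen.append(pool.pop(int(np.argmax(gaps))))     # eliminate largest rule gap
        print("signal gap:", np.abs(lift(chosen) - sig).sum())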
The rule set in the last iteration under any ε (marked by ⋆ in Figure 3) is the returned solution to the main relaxed Problem (4) in the main paper. This rule set is used to answer what makes ξ an ξ. For example, let r_j denote the rule with ID j (here a rule ID is the same as the partition ID, the unique identifier used in Algorithm 1 during the construction phase). Then, among all rules whose entropies are no larger than ε = 2, the third rule set in the trace, R(3) = {r9, r1, r18}, best explains what makes ξ an ξ. However, if more complex rules are allowed, say if all rule entropies are now capped by ε = 6, R(7) = {r13, r15, r19} is the best. Recall that we do not just eyeball the rules to get intuitive understandings. Every rule is the projection of the signal to a tagged partition, where the tag, generated in a prior-driven way, explicitly explains the underlying abstraction criteria. For example, r19 in Figure 3 comes from a symmetry tag representing a permutation invariance, which visually renders as a reflection invariance. Rules r8 and r9 come from two feature tags, div7 ◦ w[1] and div7 ◦ w[2], respectively. These two feature tags represent continuous and even collapsing in the first and the second coordinate, respectively, which visually render as horizontal and vertical strips. Both rules are later absorbed into r13, tagged by div7 ◦ w[1,2], since its rule domain is strictly finer. These rules (r8, r9, r13) apparently summarize the horizontal and vertical parts of the handwritten "7". Further, the vertical part of the "7" is longer and slants more, so we see more vertically-patterned rules in the rule trace (r9, r11, r15). These rules are obtained from finer and finer abstractions along the horizontal direction, so as to capture more details of the vertical part of that "7", such as its slope. Notably, among these vertically-patterned rules, r11 is induced from the symmetry representing a horizontal translation invariance, but it is quickly absorbed into r15, whose entropy is not much higher. This transient appearance of r11 implies that it plays a less important role in explaining this handwritten "7". In fact, from more experiments, symmetries in general play a less important role in explaining many "7"s. This is, however, not the case in explaining many "8"s, where symmetries occur much more often. For example, consider a symmetry fused from translation and permutation invariances whose fundamental domain is homeomorphic to a Möbius strip. We hypothesize that this topological property might be related to the twisted nature of an "8". For a visual comparison, we present the rule traces learned from a "7" and an "8" in Figure 6 below, as well as the visual similarity between a Möbius strip and an "8".
G STUDIES ON ILL-BASED MUSIC APPLICATION
We introduce two tests associated with a real-world application. The first is to assess rule-learning efficacy, where we compare machine-discovered rules to human-codified domain knowledge. The second is to assess human-interpretability, where we use human subject experiments on interpreting machine-generated rules.
The application here is our first step towards building an automatic music theorist and pedagogue, which is to be deployed as an assistant in music research and education. The two tests are our initial effort towards a systematic benchmarking and assessment platform. In the continuing effort of bridging human and machine intelligence, new standards are to be set and commonly agreed upon, so as to reasonably compare machine-codified discoveries with human-codified knowledge, as well as to use human-subject experiments for assessing interpretability. Fully developing assessment protocols is a challenging, long-term endeavor. Here, we use the two tests as starting points, and present results from each. Respectively, the first experiment tests music rule discovery, a basic requirement to be a theorist; the second tests interpretability, a basic requirement to be a pedagogue.
To conduct the two tests, we first build a user-friendly web application, which is used to better see and control the ILL learning process and results. Figure 7 illustrates the web interface. Users learn music rules—each as a histogram over a tagged partition (i.e., machine-codified music concepts)—and control their learning pace via self-explanatory knobs whose set values are automatically converted to internal parameters (e.g., ε, γ). One critical music-specific extension to the vanilla ILL presented in the main paper is adding a temporal component, since music is highly contextual. This amounts to considering more than one signal simultaneously, including various (un)conditional chord distributions (multiple n-grams with varying n's and varying conditionals) encoding information about individual chords as well as melodic and harmonic progressions. Accordingly, ILL produces both context-free and context-dependent rules, each of which is indexed by a partition and a conditional under that partition. For example, given the partition that is equivalent to classifying music chords into roman numerals, and conditioned on the previous two chords being a I64 followed by a V, a rule specifies the probability distribution of the next roman numeral, and in this case reproduces the music rule on the Cadential-64. Note that in a context-dependent rule, not only is the query chord abstracted, but so is the conditional. This is in contrast with many classical n-gram models, where no abstraction is present and the model may thus suffer from the problem of rare contexts, where a conditional occurs very few or even zero times in the training set. Here, however, the core idea of abstraction makes "small data" large and thus rare contexts common. More examples of context-free and context-dependent rules are illustrated as histograms in Figure 8. These rule histograms are generated by ILL from 370 of Bach's four-part chorales (in the format of digital sheet music), and are used in the two experiments detailed below.
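The effect of abstracting both the query and the conditional ("making small data large") can be sketched as follows (our illustrative Python; roman is a crude stand-in for the actual chord-to-roman-numeral partition):

    from collections import Counter, defaultdict

    # Sketch (ours): context-dependent rules as conditional distributions over
    # an abstraction; abstracting the conditional merges rare raw contexts.
    def roman(chord):                 # crude stand-in for a partition tag
        return tuple(sorted(p % 12 for p in chord))

    raw_trigrams = [([60, 64, 67], [55, 59, 62], [60, 64, 67]),   # C-G-C
                    ([72, 76, 79], [67, 71, 74], [72, 76, 79])]   # same, one octave up

    counts = defaultdict(Counter)
    for a, b, c in raw_trigrams:
        counts[(roman(a), roman(b))][roman(c)] += 1               # abstracted 3-gram

    # Two raw contexts, each seen once, collapse into one context seen twice:
    print(counts)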
G.1 COMPARISON TO HUMAN-CODIFIED KNOWLEDGE
We compare rules learned from ILL to a standard undergraduate music theory curriculum. We want to use known laws from music theory as a benchmark to see how ILL-generated rules correspond to human-codified music knowledge. In particular, we want to see what is covered, what is new, and what is different. Yet, the ultimate goal is not just to use known music theory as a ground truth for the purpose of driving ILL to fully reconstruct what we know, but eventually to discover new rules,
to gain new understandings of existing rules, to suggest new composition possibilities, as well as to teach rules in a personalized way.
A priori, we are aware of three major differences between human-codified music theory and ILL-generated rules. (a) In light of music raw representations (input), laws of music theory are derived from all aspects of sheet music, whereas ILL-generated rules are currently derived from only MIDI pitches and their durations. This is because we currently study ILL as a general framework. When a music-specific application is developed later, one can include more music raw representations such as letter pitches, meter, measure, beaming, and articulations. (b) In light of rule format (output), laws of music theory and ILL-generated rules have two different styles, the former being more descriptive and absolute (hard), the latter more numerical and probabilistic (soft). For instance, a music rule that completely forbids consecutive fifths is reproduced by an ILL-generated rule that assigns a small non-zero probability to the event. Therefore, while it is possible to "translate", with information loss, a (precise) ILL-generated rule into a (verbal) rule in known theory, it may not make sense to "translate" in the opposite direction. Also, it is not a good idea to hardcode known rules as categorical labels in a supervised setting, since music rules are inherently flexible and hardcoding may lead to a rule-based AI that generates somewhat "mechanical" music such as the Illiac Suite (Hiller & Isaacson, 1957). (c) In light of purposes, laws of music theory are intended more for general pedagogical purposes than to reflect the style of a particular data set. For instance, while consecutive fifths are banned in homework and exams, they may be widely used in many pop songs. Even in our data set of Bach's chorales (which are supposed to follow the known rules quite well), Bach himself wrote a handful of consecutive perfect intervals. On the contrary, ILL-generated rules are specific to the input data set. We may certainly find some data sets that follow the known rules quite well (e.g., Bach's chorales), but also others that break many known rules and even set their own rules.
Keeping these three differences in mind and by further isolating them from the comparison results, we can reveal the remaining differences that are due to the rule-learning process itself. To come up with the benchmark, we compiled a comprehensive syllabus of laws from music theory taught in our music school’s theory review course, which runs through the full series of theory classes at a fast pace. This human-codified music knowledge is organized as a running list of 75 topics and subtopics indexed by lecture number. On the other hand, ILL-generated rules are indexed by partition (ID) and n-gram (n). The results are summarized below in Table 2, where the colored crosses in the last column indicate topics that are missed by ILL due to different reasons.
Among the total of 75 topics in Table 2, we first ignore 7 of them (red crosses) which require music raw representations beyond MIDI pitches and durations (e.g., accents and enharmonic respellings of some augmented sixth chords). ILL covered 45 of the remaining 68 topics, yielding a coverage of 66%. Among the 23 missed topics, 18 (blue crosses) are related to deeper-level temporal abstractions such as harmonic functions, key areas, and forms. These temporal abstractions may be better modeled as abstractions of transitions, which are implicitly captured but not explicitly recovered by our current multi-abstraction multi-n-gram language model, which models only transitions of abstractions. The other 5 missed topics (black crosses) are tricky and require ad-hoc encodings, which are not explicitly learnable (but may be implicitly captured to some extent) with our current ILL implementation. Accordingly, the composition of the 30 = 7 + 18 + 5 uncovered topics suggests three future directions that could raise the rule-learning capacity of the current implementation: (a) include more music raw representations; (b) model abstractions of transitions; (c) either make music-specific adjustments when developing music apps or figure out a more expressive and more general framework in the long run. However, remember that the goal here is not to reproduce what we know but to augment it. So, we may certainly stop after enabling abstractions of transitions, which in the best case can yield an improved coverage of 84% (i.e., 93% of the topics learnable from MIDI notes only), which is good enough.
Lecture  Music Theory    Partition IDs  n-gram

1        music accents                          ✗
2        pitch           1-4            1       ✓
2        pitch class     16-19          1       ✓
2        interval        31-36          1       ✓
Table 2 (cont.)
Lecture  Music Theory                                  Partition IDs  n-gram

2        interval class                                97-102         1       ✓
3        stepwise melodic motion (counterpoint)        1-4            2       ✓
3        consonant harmonic intervals (counterpoint)   97-102         1       ✓
3        beginning scale degree (counterpoint)         16-19          2       ✓
3        ending scale degree (counterpoint)            16-19          2       ✓
3        beginning interval class (counterpoint)       97-102         2       ✓
3        ending interval class (counterpoint)          97-102         2       ✓
3        parallel perfect intervals (counterpoint)     97-102         2       ✓
3        directed perfect intervals (counterpoint)                            ✗
3        law of recovery (counterpoint)                1-4            ≥3      ✓
3        contrapuntal cadence (counterpoint)           1-4, 97-102    2,3     ✓
3        melodic minor ascending line (counterpoint)                          ✗
4        tri | 1. What is the main contribution of the paper, and how does it address the challenges of explainability and generalizability in machine learning?
2. How does the proposed framework differ from existing approaches in terms of its ability to handle small data and provide explanations?
3. Are there any concerns regarding the complexity of the notation used in the paper, and how might this impact the tractability and understandability of the results?
4. What kind of experimental results would be needed to demonstrate the effectiveness and generalizability of the proposed framework, and how might these be presented in a way that is accessible to both domain experts and the broader audience?
5. Can the framework be applied to real-world problems such as generating new music or discovering new chemical laws, and how might this be demonstrated? | Review | Review
This paper has addressed a very ambitious goal about explainability and generalizability from "small data" by generalizing the information lattice defined by Shannon. The topic of this paper is very significant, but there are a few questions that concern me:
The paper has tried to address some well-known challenging problems in machine learning, such as explainability and generalizability, from a very different perspective. However, the authors simply introduce some kind of framework without providing a persuasive analysis or theoretical/empirical results to show that it addresses the problems raised in the introduction. In fact, I do not find a theorem or a commonly recognized experimental comparison in this paper. Thus I cannot evaluate the significance of the technical contents.
The paper has used very complicated notations, such as up/down arrows, to show their results. However, is this really necessary? The tractability of the resulting problem (1) and its relaxed version (4) should be a serious concern, not merely given an explanation or a heuristic. Meanwhile, I recommend that the authors use simple and explicit enough formulations to show their framework, so that we can see the tractability at first glance, such as convexity/nonconvexity, continuity/discontinuity, etc.
The experiments may be of interest to domain experts. However, are they attractive to the general audience? If the structure is really useful, can it be used to generate new music or find new chemistry laws? As the authors are concerned about generalizability in the introduction, I believe such reports are necessary. Meanwhile, if the framework is really useful, can it be used on commonly accepted tasks and compared with state-of-the-art methods, such as the deep learning approaches mentioned in the introduction?
ICLR | Title
Information Lattice Learning
Abstract
Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize the signal, we need several disentangled rules arranged in a hierarchy, formalized by a lattice structure. ILL focuses on explainability and generalizability from “small data”, and aims for rules akin to those humans distill from experience (rather than a representation optimized for a specific task like classification). This paper focuses on a mathematical and algorithmic presentation of ILL, then demonstrates how ILL addresses the core question “what makes X an X” or “what makes X different from Y” to create effective, rule-based explanations designed to help human learners understand. The key part here is what rather than tasks like generating X or predicting labels X,Y. Typical applications of ILL are presented for artistic and scientific knowledge discovery. These use ILL to learn music theory from scores and chemical laws from molecule data, revealing relationships between domains. We include initial benchmarks and assessments for ILL to demonstrate efficacy.
1 INTRODUCTION
With rapid progress in AI, there is an increasing desire for general AI (Goertzel & Pennachin, 2007; Chollet, 2019) and explainable AI (Adadi & Berrada, 2018; Molnar, 2019), which exhibit broad, human-like cognitive capacities. One common pursuit is to move away from “black boxes” designed for specific tasks to achieve broad generalization through strong abstractions made from only a few examples, with neither unlimited priors nor unlimited data (“primitive priors” & “small data” instead). In this pursuit, we present a new, task-nonspecific framework—Information Lattice Learning (ILL)— to learn representations akin to human-distilled rules, e.g., producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students (shown at the end).
The term information lattice was first defined by Shannon (1953), but remains largely conceptual and unexplored. In the context of abstraction and representation learning, we independently develop representation lattices that coincide with Shannon’s information lattice when restricted to his context. Instead of inventing a new name, we adopt Shannon’s. However, we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but we also bring learning into the lattice, yielding the name ILL.
ILL explains a signal (e.g., a probability distribution) by disentangled representations, called rules. A rule explains some but not all aspects of the signal, but together the collection of rules aims to capture a large part of the signal. ILL is specially designed to address the core question "what makes X an X" or "what makes X different from Y", emphasizing the what rather than generating X or predicting labels X,Y in order to facilitate effective, rule-based explanations designed to help human learners understand. A music AI classifying concertos, or generating one that mimics the masters, does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one. Our focus represents a shift from much representation-learning work (Bengio et al., 2013) that aims to find the best representation for solving a specific task (e.g., classification), with less concern for explainability. Instead of optimizing a task-specific objective function (e.g., classification error), ILL balances more general objectives that favor fewer, simpler rules for interpretability, and more essential rules for effectiveness—all formalized later.
One intuition behind ILL is to break the whole into simple pieces, similar to breaking a signal into a Fourier series. Yet, rather than decomposition via projection onto an orthonormal basis and synthesis
via weighted sum, we decompose a signal in a hierarchical space called a lattice. Another intuition behind ILL is feature selection. Yet, rather than features, we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning. The goal is to restore human-like, hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice (called projection-and-lifting, Figure 1: left), resulting in more than a sum of parts.
ILL comprises two phases: (a) lattice construction; (b) learning (i.e., searching) in the lattice. This is similar to many machine learning (ML) models comprising (a) function class specification then (b) learning in the function class, e.g., constructing a neural network then learning—finding optimal parameters via back-propagation—in the network. ILL’s construction phase is prior-efficient: it builds in universal priors that resemble human innate cognition (cf. the Core Knowledge priors (Spelke & Kinzler, 2007)), then grows a lattice of abstractions. The priors can be customized, however, to cater to a particular human learner, or facilitate more exotic knowledge discovery. ILL’s learning phase is data-efficient: it learns from “small data” encoded by a signal, but searches for rich explanations of the signal via rule learning, wherein abstraction is key to “making small data large”. Notably, the construction phase is prior-driven, not data-driven—data comes in only at the learning phase. Hence, the same construction may be reused in different learning phases for different data sets or even data on different topics (Figure 1: right). Featuring these two phases, ILL is thus a hybrid model that threads the needle between a full data-driven model and a full prior-driven model, echoing the notion of “starting like a baby; learning like a child” (Hutson, 2018).
ILL is related to many research areas. It draws ideas and approaches from lattice theory, information theory, group theory, and optimization. It shares algorithmic similarity with a range of techniques including MaxEnt, data compression, autoencoders, and compressed sensing, but with a much greater focus on achieving human-like explainability and generalizability. Below, we broadly compare ILL to prominent related models, leaving more comparisons to the Appendix for the most similar ones.
Compared to               ILL is
deep learning             a "white-box" model balancing human-explainability and task performance
Bayesian inference        modeling human reasoning with widely shared, common priors and few, simple rules, rather than using probabilistic inference as the driving force
tree-like models          structurally more general: a tree (e.g., a decision tree or hierarchical clustering) is essentially a linear lattice (formally, a chain) depicting a unidirectional refinement or coarsening process
concept lattice in FCA (Ganter & Wille, 2012)   conceptually more general: it may include both known and unknown concepts; ILL does not require but discovers domain knowledge (more details in Appendix A)
We illustrate ILL applications by learning music theory from scores and chemical laws from compounds, and show how ILL's common priors facilitate mutual interpretation between the two subjects. To begin, imagine Tom and Jerry are playing two 12-key pianos simultaneously, one note at a time (Figure 1: right). The frequency of the played two-note chords gives a 2D signal plotted as a 12×12 grayscale heatmap. Inspecting this heatmap, what might be the underlying rules that govern their co-play? (Check: all grey pixels have a larger "Jerry-coordinate" and project to a black key along the "Tom-axis".) We now elaborate on ILL and use it to distill rules for complex, realistic cases.
2 INFORMATION LATTICE: ABSTRACTIONS AND RULES OF A SIGNAL
Signal. A signal is a function ξ : X → R. For notational brevity and computational reasons, assume ξ is non-negative and X ⊆ R^n is finite (not a limitation: see Appendix B). For example, a signal ξ : {1, . . . , 6} → R can be a probability mass function (pmf) of a dice roll, or a signal ξ : {0, . . . , 27}² → R a 28×28 grayscale image. We denote the set of all signals on X by S_X.

Partition / abstraction. We use a partition P of a set X to denote an abstraction of X; we call a cell C ∈ P an (abstracted) concept. The intuition is simple: a partition of a set renders a "coarse-grained view" of the set, or more precisely, an equivalence relation on the set. In this view, we identify equivalence classes of elements (concepts) instead of individual elements. For example, the partition P = {{1, 3, 5}, {2, 4, 6}} of the six outcomes of the roll of a die identifies two concepts (odd, even).

Rule / representation. A rule of a signal ξ : X → R is a "coarsened" signal r_ξ : P → R defined on a partition P of X with r_ξ(C) := Σ_{x∈C} ξ(x) for any C ∈ P. In this paper, a rule of a signal is what we mean by a representation of a signal. If the signal is a grayscale image, a rule can be a special type of blurring or downsampling of the image; if the signal is a probability distribution, a rule can be a pmf of the "orbits" of the distribution for lifted inference algorithms (Holtzen et al., 2019; Kersting, 2012). More generally, we define a rule (regardless of any signal) over a set X by any signal on any partition of X; accordingly, we denote the set of all rules over X by R_X := ∪_{P∈{all partitions of X}} S_P.

Partition lattice. Abstractions are hierarchical: one coarse-grained view can be coarser than another. Let the partition lattice (P_X, ⪯) of a set X be the partially ordered set (poset) containing all partitions of X equipped with the partial order coarser than (⪯), or finer than (⪰), defined in the standard way. Let P := {{x} | x ∈ X} and P̄ := {X} denote the finest and the coarsest partition, respectively. Per general lattice theory (Davey & Priestley, 2002), P_X is a complete lattice: every subset P ⊆ P_X has a unique supremum ∨P and a unique infimum ∧P, where ∨P is called the join of P, denoting its coarsest common refinement, and ∧P is called the meet of P, denoting its finest common coarsening.

Information lattice. The information lattice (R_ξ, ⇐) of a signal ξ : X → R is the poset of all rules of ξ equipped with the partial order more general than: for any two rules r, r′ ∈ R_ξ, we say r is more general than r′ (or r′ is more specific), denoted r ⇐ r′, if domain(r) ⪯ domain(r′). Notably, R_ξ ⊆ R_X, and R_ξ is isomorphic to the underlying partition lattice via projection, defined below.

Projection and lifting. For any signal ξ ∈ S_X, we define the projection operator ↓_ξ : P_X → R_ξ by letting ↓_ξ(P) be the rule of ξ on P. One can check that ↓_ξ : (P_X, ⪯) → (R_ξ, ⇐) is an isomorphism. Conversely, we define the general lifting operator ⇑_X : R_X → 2^{S_X} by letting ⇑_X(r) denote the set of all signals that satisfy the rule r, i.e., ⇑_X(r) := {ξ ∈ S_X | ↓_ξ(domain(r)) = r} ⊆ S_X. To make lifting unique, and per the Principle of Indifference (Eva, 2019), we introduce a special lifting ↑_X(r) to pick the most "uniform" signal in ⇑_X(r). Formally, define ‖·‖_q : S_X → R by ‖ξ‖_q := (Σ_{x∈X} ξ(x)^q)^{1/q}. For any ξ, ξ′ ∈ S_X satisfying ‖ξ‖₁ = ‖ξ′‖₁, we say that ξ is more uniform than ξ′ (or ξ′ is more deterministic) if ‖ξ‖₂ ≤ ‖ξ′‖₂. We define the (special) lifting operator ↑_X : R_X → S_X by ↑_X(r) := argmin_{ξ∈⇑_X(r)} ‖ξ‖₂ (computable by simply averaging). Notation here follows the convention for function projections to quotient spaces (Kondor & Trivedi, 2018). Lifting a single rule to the signal domain can be extended in two ways: (a) lift to a finer rule domain P instead of X, i.e., ⇑_P(r) or ↑_P(r); (b) lift more than one rule. Accordingly, we write ⇑ := ⇑_X and ↑ := ↑_X as defaults, write R = ↓_ξ(P) := {↓_ξ(P) | P ∈ P} ⊆ R_ξ to denote a rule set, and write ⇑(R) := ∩_{r∈R} ⇑(r) = {η ∈ S_X | ↓_η(P) = R} and ↑(R) := argmin_{η∈⇑(R)} ‖η‖₂ to denote the set of signals that satisfy all rules in R (general lifting) and the most uniform one (special lifting), respectively. More computational details on lifting and its intimate relation to join are in Appendix C.
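For the dice example above, both operators reduce to a few lines (our illustrative Python; a single-rule sketch, not the paper's implementation):

    # Sketch (ours): projection and special lifting for the odd/even abstraction.
    xi = {x: 1 / 6 for x in range(1, 7)}                  # fair-die signal
    P = [frozenset({1, 3, 5}), frozenset({2, 4, 6})]      # odd/even partition

    def project(xi, P):                                   # the rule r = down_xi(P)
        return {C: sum(xi[x] for x in C) for C in P}

    def lift(r):                                          # most uniform signal in up(r)
        return {x: v / len(C) for C, v in r.items() for x in C}

    r = project(xi, P)                                    # {odd: 0.5, even: 0.5}
    assert all(abs(lift(r)[x] - xi[x]) < 1e-12 for x in xi)   # exact recovery here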
3 INFORMATION LATTICE LEARNING (ILL)
We first formalize ILL as a single optimization problem and then solve it practically in two phases. Let ξ : X → R be a signal we want to explain. By explaining, we mean to search for a rule set R = ↓ξ(P) ⊆ R_ξ such that: (a) R recovers ξ well, that is, R is essential; (b) R is simple. The main idea agrees with Algorithmic Information Theory (Chaitin, 1987; Chater & Vitányi, 2003), but we use an information-lattice based formulation focusing on explainability. We start our formulation below.
We say a rule set R recovers the signal ξ exactly if ↑(R) = ξ. Yet, exact recovery may not always be achieved. The information loss occurs for two reasons: (a) insufficient abstractions, i.e., the join ∨P is strictly coarser than the finest partition P; (b) the choice made in favor of uniformity is inappropriate. Instead of pursuing exact recovery, we introduce ∆(↑(R), ξ)—a distance (e.g., ℓp distance) or a divergence (e.g., KL divergence) function—to measure the loss, with a smaller ∆ indicating a more essential R. We say a rule set R is simpler if it contains fewer and simpler rules. Formally, we want R minimal, i.e., each rule r ∈ R is indispensable for achieving the same ↑(R). Also, we want each rule r ∈ R informationally simple, as measured by a smaller Shannon entropy Ent(r), so that r is more deterministic (Falk & Konold, 1997), easier to remember (Pape et al., 2015), and closer to our common-sense definition of a "rule". Notably, the partial order renders a tradeoff between the two criteria: r ⇐ r′ implies that r is dispensable in any R ⊇ {r, r′}, but on the other hand Ent(r) ≤ Ent(r′), so including more-specific rules makes the rule set small yet each individual rule (informationally) hard.
The main problem. The formal definition of an ILL problem is: given a signal ξ : X → R,

minimize_{R ⊆ Rξ} ∆(↑(R), ξ)   subject to   R is minimal;  Ent(r) ≤ ε for any r ∈ R.   (1)

The search space involves the full information lattice (Rξ, ⇐), or isomorphically, the full partition lattice (PX, ⪯). Yet the size of this lattice, the Bell number B|X|, scales faster than exponentially in |X|. It is unrealistic to compute all partitions of X (unless X is tiny), let alone the partial order. Besides computational concerns, there are two reasons to avoid the full lattice (while leaving it implicit in the background): (a) the full lattice has unnecessarily high resolution, comprising many nearly-identical partitions, particularly when X is large; (b) considering explainability, not every partition has an easy-to-interpret criterion by which the abstraction is made. As such, Formulation (1) is only conceptual and impractical. Next, we relax it and make it practical via two ILL phases.
3.1 PRACTICAL LATTICE CONSTRUCTION: TO START LIKE A BABY (PHASE I)
Information lattice construction plays a role similar to building a function class in ML, sometimes called meta-learning. While its importance is commonly understood, the construction phase in many data-driven models is often treated cursorily, using basic templates and/or ad-hoc priors, leaving most of the computation to the learning phase. In contrast, we put substantial effort into our prior-driven construction phase. Pursuing generality and interpretability, we want universal, simple priors that are domain-agnostic and close to the innate cognition of a human baby (Marcus, 2018). Here we draw these from Core Knowledge (Spelke & Kinzler, 2007; Chollet, 2019), which includes "the (small) natural numbers and elementary arithmetic prior" and "the elementary geometry and topology prior". We then give algorithms to construct abstractions from these priors, and consider such a construction prior-efficient if it is interpretable, expressive, and systematic. The following flowchart summarizes information lattice construction as generating a partition sublattice.
[Flowchart: partition sublattice generation. Prior-driven stage: (1) seed priors F, S generate features Φ〈F〉 and symmetry subgroups G〈S〉; (2) these induce the partition multiset P〈F,S〉 = PΦ〈F〉 ∪ PG〈S〉. Hierarchy stage: (3) sort the multiset into the partition poset (P〈F,S〉, ⪯). Completion stage: (4) complete the poset into the partition semilattice 〈P〈F,S〉〉∨; (5) further complete it into the partition sublattice 〈P〈F,S〉〉∨∧···.]
Steps 1–2 (feature/symmetry-induced partitions). Unlike data clustering, our prior-driven partitions are induced from two data-independent sources: features and symmetries. We draw priors, in the form of seed features F and seed transformations S, from Core Knowledge as a basis, and then generate a set of partitions P〈F,S〉 as follows (illustrated for X = R2):
F = {w[1], w[2], w[1,2], sort, argsort, sum, diff, div2, . . . , div19, mod2, . . . , mod19}   (2)

S = {horizontal, vertical, diagonal translations} ∪ {rotations} ∪ {reflections}   (3)
Φ〈F〉 : set of features generated by F via function composition
G〈S〉 : set of subgroups generated by subsets of S via subgroup generation
PΦ〈F〉 : set of partitions generated by features in Φ〈F〉 via preimages
PG〈S〉 : set of partitions generated by subgroups in G〈S〉 via orbits

In (2), wI denotes coordinate selection (like indexing/slicing in python) and the other functions are defined as in python (div and mod are like python's divmod). Then, P〈F,S〉 = PΦ〈F〉 ∪ PG〈S〉.
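As a minimal sketch of the two generating routes (the helper names and the toy 4×4 grid are our own illustration, not part of any released code), feature-induced partitions come from preimages and symmetry-induced partitions come from orbit computations:

from itertools import product

def partition_by_feature(X, feature):
    """Feature-induced partition: the cells are the preimages of `feature`."""
    cells = {}
    for x in X:
        cells.setdefault(feature(x), set()).add(x)
    return [frozenset(c) for c in cells.values()]

def partition_by_group(X, generators):
    """Symmetry-induced partition: the cells are the orbits of the group
    generated by `generators` (each generator: a bijection on X as a dict).
    On a finite set, forward closure suffices, since each bijection's inverse
    is one of its positive powers."""
    unseen, orbits = set(X), []
    while unseen:
        orbit, frontier = set(), {unseen.pop()}
        while frontier:
            x = frontier.pop()
            orbit.add(x)
            frontier |= {g[x] for g in generators} - orbit
        unseen -= orbit
        orbits.append(frozenset(orbit))
    return orbits

X = list(product(range(4), repeat=2))                 # a tiny 4x4 grid
P_feat = partition_by_feature(X, lambda x: x[0] % 2)  # feature mod2 ◦ w[1]
shift = {(i, j): ((i + 1) % 4, j) for (i, j) in X}    # cyclic translation
P_sym = partition_by_group(X, [shift])                # orbits of the shift group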
Step 3 (partition poset). We next sort P〈F,S〉, computationally a multiset, into the poset (P〈F,S〉, ⪯). We import the algorithmic skeleton from generic poset-sorting algorithms (Caspard et al., 2012; Daskalakis et al., 2011), with an outer routine incrementally adding elements and querying an inner subroutine (an oracle) for pairwise comparison. Yet, our poset is special: its elements are tagged partitions, where a tag records the generating source(s) of its partition, e.g., features and/or symmetries. So, we have specially designed both the outer routine ADD PARTITION and the oracle COMPARE by leveraging (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to pre-determine or filter relations. We relegate details to Appendix E. The data structures for posets include the po matrix and the hasse diagram, encoding the partial order ≺ (ancestors/descendants) and the cover relation ≺c (parents/children), respectively (Garg, 2015).

Steps 4–5 (partition semi/sublattice). To complete (P〈F,S〉, ⪯) into a lattice, we compute the sublattice (of PX) generated by P〈F,S〉. We follow the idea of alternating join-and-meet completions, borrowed from one of the two generic sublattice-completion methods (Bertet & Morvan, 1999); a discussion of this choice and other related methods is in Appendix D. We implement join-semilattice completion (meet-semilattice completion is dual) in our special context of tagged partitions, which echoes Step 3 and reuses ADD PARTITION. The adjustments are (a) changing tags from features and symmetries to join formulae and (b) changing the inner subroutine from pairwise comparison to computing joins. We then run a sequence of alternating joins and meets to complete the lattice. For interpretability, one may want to stop early in the completion sequence: while a single join or meet remains simple for human interpretation (often understood as the intersection or union of concepts; e.g., the join of colored items and sized items gives items indexed by color and size), having many alternating joins and meets may hinder comprehension. More details on single-step join-semilattice completion, the completion sequence, and tips on early stopping are relegated to Appendix E.
3.2 PRACTICAL LATTICE LEARNING: TO LEARN LIKE A CHILD (PHASE II)
Learning in an information lattice means solving the optimization Problem (1), i.e., searching for a minimal subset of simple rules from the information lattice of a signal so as to best explain that signal. Let P• be the sublattice (or semilattice, or poset, if early stopped) from the construction phase. Projecting a signal ξ : X → R to P• yields the information sublattice R• := ↓ξ(P•) ⊆ Rξ. It is worth reiterating that (a) P• is constructed first and is data-independent; (b) ξ (data) comes after P•; (c) (R•, ⇐) is isomorphic to (P•, ⪯): R• retains the partial order (po matrix and hasse diagram) and interpretability from P•. As such, R• is what is given at the beginning of the learning phase. The main problem (relaxed). For practicality, we relax Problem (1): instead of the full lattice Rξ, we restrict the search space to R•; instead of minimal rule sets, we consider only antichains (whose elements are mutually incomparable), a necessary condition for minimality. This yields:
minimize_{R ⊆ R•} ∆(↑(R), ξ)   subject to   R is an antichain;  Ent(r) ≤ ε for any r ∈ R.   (4)
To solve Problem (4), we adopt a greedy idea similar to principal component analysis (PCA): we first search for the most essential rule (the one that decreases ∆ the most) in explaining the signal, then the second most essential rule in explaining the rest of the signal, and so on. Specifically, we start with an empty rule set R(0) := ∅ and add rules iteratively. Let R(k) be the rule set formed by Iteration (Iter) k and R(k)⇐ := {r ∈ R• | r ⇐ r′ for some r′ ∈ R(k)}. Let R≤ε := {r ∈ R• | Ent(r) ≤ ε}. Then,
(in Iter k+1)   minimize ∆(↑(R(k) ∪ {r}), ξ)   subject to   r ∈ R(k)feasible := R≤ε − R(k)⇐.   (5)

We pre-compute R≤ε (instead of the whole R•) before the iterations, which can be done by a breadth-first search (BFS) on P•'s hasse diagram, from the bottom (the coarsest) up. Owing to the monotonicity of Ent w.r.t. the partial order (cf. the grouping axiom of entropy (Cover & Thomas, 2012)), any BFS branch ends once the entropy exceeds ε. (For later use, we save the set R>ε of ending rules in the BFS, i.e., the lower frontier of R>ε.) In contrast, R(k)⇐ is computed per iteration (by querying P•'s po matrix).
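The pre-computation of R≤ε admits a direct sketch (the adjacency map covers_above and the entropy oracle are assumed interfaces mirroring the hasse diagram and Ent, not a fixed API):

from collections import deque

def sublevel_rules(bottom, covers_above, entropy, eps):
    """Entropy-capped BFS from the coarsest rule upward: since Ent is monotone
    along the partial order, a branch is pruned as soon as it exceeds eps.
    Returns the sublevel set R_<=eps and its ending frontier R_>eps."""
    kept, frontier = set(), set()
    seen, queue = {bottom}, deque([bottom])
    while queue:
        r = queue.popleft()
        if entropy(r) > eps:
            frontier.add(r)   # ending rule: everything above is also > eps
            continue
        kept.add(r)
        for finer in covers_above[r]:
            if finer not in seen:
                seen.add(finer)
                queue.append(finer)
    return kept, frontier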
Nested vs. alternating optimization. Computing ↑(R(k) ∪ {r}) requires solving a minimization, so Problem (5) is a nested optimization:

argmin_{r ∈ R(k)feasible} ∆( argmin_{η ∈ ⇑(R(k) ∪ {r})} ‖η‖2 , ξ ).

One may de-nest the two: instead of comparing rules by lifting them up to the signal domain, we compare them "downstairs" on their own rule domains. So, instead of minimizing (5)'s objective, we

maximize_{r ∈ R≤ε − R(k)⇐} ∆( ↓↑(R(k))(domain(r)), ↓ξ(domain(r)) ) = ∆( ↓↑(R(k))(domain(r)), r ).   (6)
The idea is to find the rule domain on which the recovered signal ↑(R(k)) and the target signal ξ exhibit the largest gap. Adding this rule to the rule set maximally closes the gap in (6) and tends to minimize the original objective in (5). Conveniently, in (6) the lifting does not involve r, so (5) is de-nested, and the procedure iterates into an alternating min-max (or lift-project) optimization. Let r⋆(k) be the solution and ∆⋆(k) be the optimal value in Iter k. We update R(k+1) := R(k) ∪ {r⋆(k+1)} − {r⋆(k+1)'s descendants} (so R(k+1) is always an antichain), and proceed to the next iteration. Iterations end whenever the feasible set is empty, or may end early if the new rule becomes less essential, measured by |∆⋆(k+1) − ∆⋆(k)| ≤ γ in the nested setting and by ∆⋆(k) ≤ γ in the alternating setting (for some γ).
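In the alternating setting, a single rule-selection step then reads as follows (a sketch: lift, project, gap, and domain_of stand for the lifting, projection, ∆, and domain(·) operators, all passed in as assumed interfaces):

def pick_rule_alternating(feasible, R_k, xi, lift, project, gap, domain_of):
    """One de-nested step for Problem (6): lift the current rule set once, then
    score each candidate rule by the gap, on its own rule domain, between the
    recovered signal and the target signal; return the max-gap rule."""
    eta = lift(R_k)            # lifted once, independent of the candidate r
    def downstairs_gap(r):
        P = domain_of(r)
        return gap(project(eta, P), project(xi, P))
    best = max(feasible, key=downstairs_gap)
    return best, downstairs_gap(best)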
The full learning path & complexity. We denote a solve process for Problem (6) by SOLVE(ε, γ), or SOLVE(ε) if γ is fixed in advance. To avoid tuning ε manually, we solve an ε-path: for ε1 < ε2 < ···, assuming SOLVE(εi) takes Ki iterations, we run the following to solve the main relaxed Problem (4):

∅ = R(0) → SOLVE(ε1) → R(K1) → SOLVE(ε2) → R(K1+K2) → ···   (7)

So, lattice learning boils down to solving a sequence of combinatorial optimizations on the Hasse diagram of a lattice. We walk through the full process (7) via a toy example, starting with a signal ξ : {0, . . . , 27}² → [0, 1] denoting an image of "7" and a toy-sized information lattice of the signal (Figure 3A). The sequence of optimizations (7) proceeds at two paces concurrently: the slower pace is indexed by i; the faster pace is indexed by the iteration number k. As mentioned earlier, the sets R≤εi
are pre-computed at the slower pace, with the (i+1)-th BFS initialized from R>εi (the ending rules of the i-th BFS). The monotonicity of Ent w.r.t. the partial order assures that these BFSs add up to a single (global) BFS on the entire Hasse diagram, climbing up the lattice from the bottom. This is shown in Figure 3B as the monotonic expansion of the blue region (R≤ε) explored by BFS. Locally, at each iteration along the slower pace, solving Problem (6) is quadratic in the worst case, when the feasible set is an antichain (i.e., no order), and linear in the best case, when the feasible set is a chain (i.e., totally ordered). Since the local BFSs add up to a single BFS with standard linear complexity, the entire learning phase has a total complexity between linear and quadratic in the number of vertices and edges in the whole Hasse diagram. In general, the denser the diagram, the lower the complexity. This is because R(k)⇐ tends to be large in this case, with more descendants activated (i.e., red in Figure 3B), which in turn effectively shrinks the feasible set (i.e., the blue region minus the red). For example, unlike the first three iterations in Figure 3B, the 4th iteration (ε = 3) activates more than one rule, including the one being extracted as well as all its unexplored descendants. Further, the upper bound is rarely reached. Unlike in this toy example, BFS in practice is often stopped early when ε becomes large, i.e., when later rules become more random. Hence, targeting more deterministic and disentangled rules only, not all vertices and edges are traversed by BFS. At the end of the learning process, for explanatory purposes, we store the entire ε-path and the (R(k))k≥0 sequence instead of just the very last rule set. This yields a rule trace as the standard ILL output, which we present below.
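Assembled, the learning phase is a pair of short loops (again a sketch: feasible_rules, pick, and descendants are assumed callbacks for R≤ε, the selection step above, and R(k)⇐, respectively):

def solve(eps, gamma, R, feasible_rules, pick, descendants):
    """SOLVE(eps, gamma), alternating setting: greedily extract max-gap rules,
    keep R an antichain by dropping each new rule's descendants, and stop when
    the gap reaches gamma or the feasible set empties."""
    trace = []
    while True:
        feasible = feasible_rules(eps) - descendants(R)
        if not feasible:
            break
        r, delta = pick(feasible, R)
        if delta <= gamma:                  # remaining rules are inessential
            break
        R = (R - descendants({r})) | {r}    # stays an antichain
        trace.append((r, delta))
    return R, trace

def solve_eps_path(eps_schedule, gamma, **ops):
    """The warm-started sequence (7): each stage resumes from the previous rule set."""
    R, trace = frozenset(), []
    for eps in eps_schedule:
        R, t = solve(eps, gamma, R, **ops)
        trace += t                          # keep the entire rule trace
    return R, trace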
How to read ILL output. ILL outputs a rule trace comprising an evolving sequence of rules, rule sets, and recovered signals (Figure 3C). The three sequences are indexed by iteration and by the ε-path, so the rule set at the last iteration under any ε (starred) is the returned solution to the main Problem (4). We depict a rule by its lifting, since the lifting sketches both the partition and the rule values. Figure 3C gives a full presentation of a rule trace; we also introduce a two-line shorthand (Figure 3D), keeping only the sequence of recovered signals and that of the rules. A rule trace answers what makes ξ an ξ, or what are the best ε-simple rules explaining ξ. ILL rules are more interpretable than merely eyeballed patterns. (a) The interpretability of the trace is manifest in its controllability via ε and γ: smaller ε for simpler rules, and larger γ for more essential rules. (b) The interpretability of each rule is gained from its partition tag, the criteria by which the abstraction is made. A tag may contain several generating sources as different interpretations of the same rule abstraction. Like different proofs of a theorem, a partition tag with multiple sources reveals equivalent characterizations of a structure and thus more insights into the signal. So, tags are not only computationally beneficial in constructing lattices, but also key to interpretation. We present in-depth analyses of tags in the applications below.
4 ILL EXAMPLES
We show typical ILL examples of knowledge discovery in art and science: learning music theory from scores and chemical laws from compounds (relegating further analyses on handwritten digits to Appendix F). For both, we fix the same priors (F, S in (2)(3)), and thus the same lattice. We also fix the same parameters: the ε-path is 0.2 < 3.2 < 6.2 < ··· (tip: a small offset at the beginning, e.g., 0.2, is used to get nearly-deterministic rules) and γ is 20% of the initial signal gap. This fixed setting is used to show generality and to enable comparison; the parameters can certainly be fine-tuned in practice.
Music illustration. Signals are probability distributions of chords encoded as vectors of MIDI keys. Figure 4a) shows such a signal, the frequency distribution of two-note chords extracted from the soprano and bass parts of Bach's C-score chorales (Illiac Software, Inc., 2020), with the learned rule trace listed below it. The first rule is tagged by argsort ◦ w[1,2] and has all of its probability concentrated in the one cell whose elements have a larger y-coordinate (the black region above the diagonal). So, this is a deterministic rule, echoing the law of "no voice crossing (N.V.C.)", i.e., soprano higher than bass. Checking later rule tags recovers laws of voice range (V.R.), diatonic scale (D.S.), and consonant interval (C.I.): almost all of the main static rules of two-voice counterpoint. Notably, the third rule is tagged by both mod12 ◦ w[1] and vertical translation invariance. From both the feature and the symmetry view, this tag identifies the concept of all Cs, all Ds, etc., which is the music concept of pitch class. The feature view explicitly reveals a period of 12 in pitches, the notion of an octave (in defining pitch class); the symmetry view reveals the topology, the manifold where the concepts lie, in this case a 2D torus.
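As a toy sketch of how such a signal is assembled (the MIDI pairs below are made up for illustration and are not chorale data), the empirical chord distribution and a feature-induced abstraction of it can be computed as:

from collections import Counter

def chord_signal(chords):
    """Empirical distribution of two-note chords; each chord is a
    (soprano, bass) pair of MIDI keys, a point in {0,...,127}^2."""
    counts = Counter(chords)
    total = sum(counts.values())
    return {chord: n / total for chord, n in counts.items()}

xi = chord_signal([(72, 48), (74, 50), (72, 48), (76, 52)])
# Reusing partition_by_feature from the Section 3.1 sketch:
# the feature mod12 ◦ w[1] identifies the pitch class of the upper voice.
P_pc = partition_by_feature(list(xi), lambda chord: chord[0] % 12)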
Chemistry illustration. Signals are boolean-valued functions indicating the presence of compound formulae, encoded as vectors of atomic numbers, in a molecule database. Figure 4b) shows a signal obtained by collecting two-element compounds from the Materials Project database of common compounds (Jain et al., 2013). The first rule, tagged by div18 ◦ w[2], is deterministic: Element 2 can never be
Ar, K, Ca. It nicely captures the visual pattern in Figure 4b) (the last three vacant columns) and hints suggestively at some chemistry rules. The second rule, tagged by mod8 ◦ w[2], has peaks at the cells tagged by feature values 1, 7, 0, 6; for Element 2, these cells are the halogens (+H), pnictogens, chalcogens, and crystallogens. The third rule shows that alkali metals, alkaline earth metals, crystallogens, and icosagens are the common cells for Element 1. The next rule shows the common combinations, e.g., alkali metals with halogens. Note that the 2nd, 3rd, and 4th rules for chemistry and the 5th, 3rd, and 4th rules for music share the same tags, except that mod12 becomes mod8: the period changes from 12 (a music octave) to 8 (the number of main groups). So, when two chemical elements form a compound, they are like two music notes forming a chord! The music concepts of pitch classes and intervals parallel the chemical concepts of groups and their distances. Although the abstractions are shared, the rules differ: instead of a diatonic scale as in Bach's chorales, chemistry uses a "cation scale" and an "anion scale". Interestingly, our intention to show ILL's generality (same lattice and parameters across subjects) also suggests links between art and science, interpreting phenomena (signals) in one subject from the perspective of the other (Bodurow, 2018). Applications that extend this experiment beyond a clustering model to restore the periodic table (Zhou et al., 2018) and to render complex molecules in high dimensions are ongoing, aiming to discover new laws, new interpretations of existing laws, and new materials.
Real-world deployment & evaluation. We generalized the music illustration to a real application: an automatic music theorist (Yu et al., 2016; Yu & Varshney, 2017). It implements the alternating min-max setting as a "student-teacher" model: the student is a (music) generator and the teacher is a discriminator. The two form a loop where the teacher guides the student towards a target style through iterative feedback (extracting rules) and exercise (applying rules). This app extends the above music illustration considerably. It considers more music voices, so signals are now in higher dimensions and rules concern more complex chord structure. It considers temporal structure, so signals now include many (un)conditional chord distributions (multi-n-grams), yielding both context-free and context-dependent rules, but also new challenges, namely rare contexts and contradictory rules. ILL's core idea of abstraction makes "small data large" and thus rare contexts common (Yu & Varshney, 2017), and a redesigned lifting operator resolves contradictions (Yu et al., 2017). Further, parameters like ε and γ are made into self-explanatory knobs for users to personalize their learning pace.
We conducted two studies to assess rule-learning capability and interpretability. We present the main results here and detail the procedures in Appendix G. In the first study, we compared ILL-discovered rules with human-codified domain knowledge to see how much known can be reproduced and how much new can be discovered. Trained on just 370 Bach’s chorales, our model reproduced in explicit
forms 66% of a standard music theory curriculum (Figure 5A). Of the rest, about 26% (e.g., harmonic functions and music forms) was implicitly hinted at by the current n-gram based model, which models only transitions of abstractions but not explicitly abstractions of transitions (a future direction). In the second study, we ran a human-subject experiment in the form of homework for a music class. The homework asked 23 students to write verbal interpretations of ILL-generated rules rendered as histograms over tagged partitions. Grading was based on a rubric of keywords generated via majority vote in a later discussion among students and teachers. Figure 5B shows that the majority (2/3) of the students who did the homework succeeded (w.r.t. the 30/50 passing grade) in the interpretation task, which in turn shows the interpretability of the AI-produced knowledge itself.

[Figure 5: ILL assessments on knowledge discovery tasks. (a) How much known? Covered 66%, hinted 26%, missed 7%. (b) How interpretable? (c) How much new? Figured soprano (entropy = 4.76), figured alto (4.78), figured tenor (4.80), figured bass (4.34).]
In the first study, our model also discovered new rules that interested our colleagues in the music school. (a) Tritone resolution is crucial in tonal music; yet in Bach's chorales, tritones sometimes do not resolve in the typical ways, but consistently transition to other dissonances such as a minor seventh. (b) A new notion of "the interval of intervals" was consistently extracted in several rule traces. This "second derivative", like acceleration in mechanics, might suggest a microscopic chord structure heretofore unconsidered. (c) New symmetry patterns reveal new harmonic foundations. As a parallel to harmony traditionally built on the figured bass (dominant in Bach's chorales, as confirmed by ILL), ILL reveals the "figured soprano" as the next best alternative in explaining Bach's music (Figure 5C). Although it is not the best view for explaining Bach according to ILL and is not included in any standard music theory class, it may be a valuable perspective for music that starts deviating from the classical style. This was confirmed by domain experts (Sokol, 2016); more details are given at the end of Appendix G.1.
5 DISCUSSION: LIMITATIONS AND CHALLENGES
As a first step, we devise a new representation-learning model intended to be both theoretically sound and intrinsically interpretable. This paper shows typical setups and applications, but ILL is a general framework that admits new designs of its components, e.g., projection-and-lifting operators or priors. Notably, designing a lattice not only sets the rule-learning capacity but also the "vocabulary" for interpretation which, like the Sapir-Whorf hypothesis for human language, limits how a lattice explains signals. Likewise, priors have pros and cons depending on what we seek to explain and to whom (e.g., not all signals are best explained by symmetry, nor can everyone read symmetry equally well). One solution is to explore multiple lattices while balancing expressiveness and computation, a common practice in picking ML models too. Further, whether a signal is indeed governed by simple rules requires rethinking. Sometimes no rules exist; ILL will then indicate this, and a case-by-case study will be needed. Sometimes rules are insufficient: is music in fact governed by music theory? Theory is better viewed as necessary but not sufficient for good music: great composers need not be great theorists.
Beyond the present studies comparing against human-codified knowledge and using human-subject experiments for interpretability, more systematic ILL benchmarking and assessment remain challenging and need long-term effort. Benchmarking is not as easy as in task-specific settings (Chollet, 2019), requiring better comparison schemes or a downstream task. Effective ILL assessments must focus on new discoveries and the ability to assist people. Instead of a Turing test for machine-generated music, one may (at a meta level) consider tests between independent and machine-aided compositions, where both are done by humans. Further, ILL may be combined with other models, yielding an ILL version of deep learning or vice versa; for example, ILL can serve as a pre-processing or post-interpretation module in other models to achieve superior task performance together with controllability and interpretability. Another possibility is to use ILL to analyze attention matrices (as signals) learned by BERT or GPT (Rogers et al., 2020). More future visions are in Appendix H.
A CONNECTION TO CONCEPT LATTICE
Per our definition, a concept refers to a component of an abstraction, or more precisely, is a cell in a partition or an equivalence class under an equivalence relation. This definition is consistent with a formal concept defined in formal concept analysis (FCA) (Ganter & Wille, 2012; Ganter et al., 2016; Priss, 2006) as a set of objects (extent) sharing a set of attributes (intent), which can be also treated as objects that are equivalent under the attributes. However, our definition of a concept generalizes that of a formal concept in two ways. First, in our case, a partition or an equivalence relation is not induced from domain-specific attributes through formal logic and formal ontology, but from universal priors drawn from the Core Knowledge (detailed in Section 3.1 in the main paper). Second, specifying a partition considers all of its concepts, whereas specifying a set of formal concepts only considers those with respect to a given formal context. As a result, partition lattices in our case generalize concept lattices in FCA, and are not generated, hence not constrained, by domain knowledge such as those encoded in formal ontologies.
Mathematically, let (PX , ) be the partition lattice comprising all partitions of X and (2X ,⊆) be the subset lattice comprising all subsets of X . Clearly, the power set 2X is the same as {C ∈ P | P ∈ PX}. That is, the subset lattice is also the lattice comprising all concepts from all partitions of X , which can be then called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many cases, it tends to miss many concepts if they are not known in the existing ontology. We give two examples below to further illustrate the connection between a partition lattice and a concept lattice.
First, consider biological taxonomy. Dogs and cats are two concepts in species, an abstraction containing other concepts such as eagles. Likewise, mammals and birds are two concepts in class, an abstraction containing other concepts such as reptiles and insects; further, animals and plants are two concepts in kingdom. In light of hierarchy: as abstractions, species is finer than class, which is finer than kingdom (in a partition lattice); as concepts, dogs ⊆ mammals ⊆ animals (in a concept lattice). Note that when forming a concept lattice, one need not include, say, all species. Yet when having species as an abstraction in a partition lattice, this abstraction must contain all species, both known species and unknowns, where the latter are usually of more interest for knowledge discovery.
Second, consider music theory. C major triads, C minor triads, and B diminished triads are concepts in an abstraction induced by music octave-shift and permutation invariance. Further, major triads, minor triads, and diminished triads are concepts in another abstraction induced by music octave-shift, permutation, and further transposition invariance. Clearly, for abstractions, the former abstraction is finer than the latter; for concepts, the set of C major triads is a subset (or a special case) of the set of major triads. However, chords that are not defined in traditional music theory but appear as new concepts in a known abstraction (e.g., the two above) may be more interesting, since they may suggest new composition possibilities while still obeying the same music abstraction, in this case the same music symmetry. New concepts from new abstractions may push the composition boundary even further, suggesting new types of chords discovered from e.g., new symmetry (but possibly within a known symmetry family). See the end of Appendix G.1 for more examples from new discoveries.
B MORE GENERALIZED FORMALISM FOR INFORMATION LATTICE
The mathematical setting in the main paper is for a non-negative signal on a finite domain. However, this is not a limitation, but purely for notational brevity and computational reasons. First, regarding non-negativity, in many real scenarios, the signal is bounded and its value is only relative. In these cases, one can simply add an offset to the signal to make it non-negative. More generally, we can
consider a signal to be any measurable function ξ : X → Rn. Then the notions of an abstraction, a concept, a rule, as well as the partial order can be generalized as in Table 1. Hence, the notion of an information lattice is still well-defined in the generalized setting. The essence of the two settings lies in how we formalize an abstraction, whether using a partition or a σ-algebra. However, the two are not very different from each other: any partition of X generates a σ-algebra on X , and any σ-algebra on a countable X is uniquely generated by a partition of X (Çınlar, 2011).
Further, the main paper uses the summation functional in defining a rule of a signal, or the projection operator. However, other options are possible, e.g., mean, max, min, or a specially designed functional. The lifting operator can then be redesigned accordingly. In particular, besides always favoring the most uniform signal, the design of the special lifting can have extra freedom in considering other criteria for picking a signal from the general lifting.
C MORE INSIGHTS ON THE SPECIAL LIFTING
Consider the special lifting ↑(R) for any rule set R = ↓ξ(P) of a given signal ξ. Computing ↑(R) is simple if R = {r} contains only a single rule: in this case, ↑(R)(x) = ↑(r)(x) := r(C)/|C| for any x ∈ C ∈ domain(r), which requires simply averaging within each cell. However, computing ↑(R) becomes much less trivial when |R| > 1. By definition, we need to solve the minimization problem

↑(R) := argmin_{η ∈ ⇑(R)} ‖η‖2.   (8)

Instead of directly passing Problem (8) to a generic optimization solver, there is a more efficient approach, which also reveals more insights into the special lifting. More specifically, one can check that any multi-rule lifting ↑(R) can be computed as a single-rule lifting ↑(r⋆), where the single rule r⋆ is defined on the join ∨P and is computed as follows:

r⋆ := argmin_{r ∈ ⇑∨P(R)} ‖r̃‖2,  with the weighted norm ‖r̃‖2 := √( ∑_{C∈∨P} r(C)²/|C| ).   (9)

So, instead of lifting R directly to the signal domain X, we lift R to the join ∨P first and then to X. Since |∨P| ≤ |X|, the minimization problem (9) is of smaller dimension than the original Problem (8), and thus can be solved more efficiently. In (9), by definition, ⇑∨P(R) := {r : ∨P → R | ↓r(P) = R}. Hence, every rule r ∈ ⇑∨P(R) can be treated as a single-rule summary of the rule set R, and r⋆ is one of them: the one that yields the most uniform signal. Realizing the special lifting R → ↑(R) as the two-step lifting R → r⋆ → ↑(r⋆) = ↑(R) reveals the following insight: given rules abstracting ξ at different levels (coarser or finer), the finest level at which one can hope to faithfully explain ξ is the join. Determining ξ at any level finer than the join would require additional assumptions beyond the rule set itself, such as the preference for uniformity used here. This further explains the two sources of information loss (the join and uniformity) discussed in the recovery process of a signal (cf. Section 3 in the main paper). Notably, determining a signal even at the level of the join may be ambiguous, since the general lifting ⇑∨P(R) is not necessarily a singleton. In particular, this implies that r⋆, as one of the single-rule summaries of R, is not necessarily a rule of ξ, i.e., there is no guarantee that r⋆ = ↓ξ(∨P). To make it so, we need more rules.
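The two-step route also suggests a compact implementation (a sketch under our dict-of-frozensets layout; the substitution y_C = r(C)/√|C| turns (9) into a plain minimum-2-norm problem solvable with a pseudoinverse):

import numpy as np

def join(partitions, X):
    """Join, i.e., coarsest common refinement: x and x' share a join cell iff
    they share a cell in every input partition."""
    tag = lambda x: tuple(next(i for i, C in enumerate(P) if x in C)
                          for P in partitions)
    cells = {}
    for x in X:
        cells.setdefault(tag(x), set()).add(x)
    return [frozenset(c) for c in cells.values()]

def special_lift(rules, X):
    """Two-step special lifting: solve (9) on the join, then spread each join
    cell's mass evenly. Each rule is a dict {frozenset cell: value}."""
    J = join([list(r) for r in rules], X)
    rows, b = [], []              # constraints: join cells inside D sum to r(D)
    for r in rules:
        for D, v in r.items():
            rows.append([np.sqrt(len(C)) if C <= D else 0.0 for C in J])
            b.append(v)
    y = np.linalg.pinv(np.array(rows)) @ np.array(b)   # minimum-norm solution
    r_star = dict(zip(J, y * np.sqrt([len(C) for C in J])))
    return {x: r_star[C] / len(C) for C in J for x in C}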
D EXISTING WORK ON SUBLATTICE GENERATION
General methods for computing the sublattice LB of a full lattice L generated by a subset B ⊆ L fall into two basic families, depending on whether the full lattice needs to be computed. The first uses alternating join- and meet-completions, with worst-case complexity O(2^|B|); the second characterizes the elements of L that belong to the sublattice, with complexity O(min(|J(L)|, |M(L)|)² |L|), where J(L) and M(L) denote the sets of join-irreducibles and meet-irreducibles, respectively (Bertet & Morvan, 1999). The latter requires computing the full lattice, which is intractable in our case of partition lattices, since |L| = |PX| grows faster than exponentially in |X| whereas |P〈F,S〉| is usually smaller than |X|. So, we use the first approach and compute alternating join- and meet-completions. The same principle of avoiding the full lattice has been applied in the special context of concept lattices (Kauer & Krupka, 2015), yet the technique there still requires the full formal context corresponding to the full concept lattice. Note that sublattice completion is, by definition, computing the smallest sublattice LB (in a full lattice L) containing the input subset B ⊆ L, where LB must inherit the meet and join operations from L. It generalizes, but is not the same as, Dedekind-MacNeille completion (Bertet & Morvan, 1999; MacNeille, 1937; Bertet et al., 1997).
E MORE DETAILS ON THE CONSTRUCTION PHASE
This section elaborates on the second half of Section 3.1 in the main paper, presenting more algorithmic details on poset construction and sublattice completion. The core data structures for posets are the so-called adjacency matrix and Hasse diagram, encoding the partial order ≺ and the cover relation ≺c, respectively (Garg, 2015). The former is best for querying ancestors and descendants of a partition within the lattice; the latter is best for querying parents and children of a partition. (A more advanced technique includes chain-decomposition, but the two here are sufficient for this paper.) More specifically,
P′ is an ancestor of P ⟺ P ≺ P′;
P′ is a parent of P ⟺ P ≺c P′ (i.e., P ≺ P′ but no P′′ satisfies P ≺ P′′ ≺ P′).

We introduce a few algorithmic notations. Given a partition poset (P, ⪯), we use P.po matrix and P.hasse diagram to denote the adjacency matrix and Hasse diagram of P, respectively. For any partition P ∈ P, we use P.ancestors, P.descendants, P.parents, and P.children to denote the sets of ancestors, descendants, parents, and children of P, respectively. Notably, the two data structures are not only important for the construction phase but for the subsequent learning phase as well. The core subroutine in the construction phase is ADD PARTITION, sketched as Algorithm 1; it is the key unit step in both poset construction and (join-)semilattice completion.
Poset construction. This corresponds to Step 3 in the flowchart in Section 3.1 of the main paper. Recall that poset construction refers to the process of sorting a multiset P〈F,S〉 of tagged partitions into a poset (P〈F,S〉, ⪯), where the partition tags are features and symmetries. Naively, if we write an inner subroutine COMPARE(P, P′), called an oracle in the related literature, to compare two partitions, sorting a multiset into a poset amounts to (N choose 2) calls of this pairwise comparison, where N is the size of the input multiset. So, the common idea shared by almost all poset-sorting algorithms is to reduce the number of oracle calls as much as possible. As mentioned in the main paper, considering the additional properties in our case, we leverage (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to pre-determine or pre-filter relations. In other words, we want to infer from the context as many pairwise relations as possible, so that the number of actual pairwise comparisons is minimized.
More specifically, we start from an empty poset, and call ADD PARTITION to incrementally add partitions from the input multiset to the poset. As the outer subroutine, ADD PARTITION leverages transitivity and partition size by maintaining three live data structures, namely size2partns, po matrix, and hasse diagram, so as to avoid calling COMPARE whenever possible. Consequently, COMPARE is called only at two places (underlined in Algorithm 1): one for = and one for ≺. When called as the inner subroutine, COMPARE(P,P ′) does not always perform an actual computation for pairwise comparison. Instead, it first checks if the tags are informative (e.g., compositions/supergroups imply coarser partitions) and only if not, makes an actual comparison. With the additional information from partition size, an actual comparison can be done in O(|X|) time
Algorithm 1: ADD PARTITION(Pτ, P): adds a tagged partition Pτ to a partition poset (P, ⪯)
a partition poset (P, ), with the following members and hash tables: · every P ∈ P is a unique partition (indexed by a unique identifier) · P.partn2tags[P] := {τ | Pτ = P} denotes the set of all tags inducing P · P.size2partns[k] := {P | |P| = k} denotes the set of all P ∈ P with size k · P.po matrix encodes the partial order ≺, best for getting P.ancestors/descendants · P.hasse diagram encodes the cover relation ≺c, best for getting P.parents/children
Step 1: determine if Pτ is new by COMPARE(P,Pτ ) (for =) for every P ∈ P.size2partns[|Pτ |]
if Pτ ∈ P.size2partns[|Pτ |]: update P.partn2tags[Pτ] by adding τ ; return else: create a new hash entry P.partn2tags[Pτ] = {τ}; proceed to Step 2
Step 2: add the new partition Pτ to P (2a) update P.size2partns[|Pτ |] by adding Pτ (2b) update P.po matrix and P.hasse diagram
– for every existing size k < |Pτ | sorted in a descending order: for every P ∈ P.size2partns[k]:
if P.parents ∩ Pτ.descendants ≠ ∅: update P.po matrix by adding P ≺ Pτ else: COMPARE(P, Pτ); update P.po matrix and P.hasse diagram if P ≺ Pτ
(here one can check: it is necessarily the case that P ≺c Pτ) – do the above symmetrically for every existing size k > |Pτ|, sorted in ascending order – (note: every P ∈ P.size2partns[k] for k = |Pτ| is incomparable with Pτ) – clean the cover relation: remove any P∗ ≺c P∗∗ from P.hasse diagram if P∗ ≺c Pτ ≺c P∗∗
via a mapping process. More specifically, given two partitions P, P′, assume without loss of generality that |P| ≤ |P′|. An actual comparison is made by tentatively creating a mapping ν : P′ → P. One can check that such a ν exists if and only if P ⪯ P′. Hence, if |P| = |P′| (resp. |P| < |P′|), one can determine = (resp. ≺) if ν is created successfully, or incomparability otherwise. The mapping complexity is linear in |X|, with linear coefficient 1 if the mapping succeeds and coefficient < 1 if it fails. In the worst case (e.g., if all partitions are incomparable), all (N choose 2) pairwise comparisons are required. Our algorithm works best when partitions are richly related (i.e., the Hasse diagram is dense), which is indeed the case for our tagged partitions induced from systematically formed features and symmetries.
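A sketch of this tentative-mapping comparison (our own helper, with partitions as lists of frozensets over a common ground set):

def compare(P1, P2):
    """Returns '=' if P1 = P2, '<' if P1 is strictly coarser than P2,
    '>' if strictly finer, and None if incomparable. Runs in O(|X|)."""
    if len(P1) > len(P2):
        return {'=': '=', '<': '>', None: None}[compare(P2, P1)]
    cell_of = {x: C for C in P1 for x in C}   # locate each element's P1-cell
    for D in P2:                              # try the mapping nu: P2 -> P1
        it = iter(D)
        C = cell_of[next(it)]
        if any(cell_of[x] is not C for x in it):
            return None                       # D straddles two P1-cells
    return '=' if len(P1) == len(P2) else '<'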
Semilattice completion. This corresponds to Step 4 in the flowchart in Section 3.1 of the main paper. Recall that join-semilattice completion refers to the process of completing a partition poset into a semilattice. We only detail join-semilattice completion, since meet-semilattice completion can be done symmetrically. Formally, we want to compute the join-semilattice of PX generated by the input poset (P〈F,S〉, ⪯). We denote the resulting join-semilattice by 〈P〈F,S〉〉∨. By definition,
〈P〈F,S〉〉∨ := {∨P | P ⊆ P〈F,S〉}. Naively, if computing 〈P〈F,S〉〉∨ literally from this definition, one has to iterate over all subsets of P〈F,S〉 and compute their joins. This amounts to 2^N join computations, where N = |P〈F,S〉| is the size of the input poset, and moreover, many of the joins are not pairwise. Yet, similar to our earlier poset construction, we may reduce the computation of joins by an incremental method, which also embeds ADD PARTITION as a subroutine and utilizes partition sizes and tags, but now the tags are join formulae instead of features or symmetries.
More specifically, we start with an empty semilattice P, and add partitions in P〈F,S〉 to P one by one from smaller-sized to larger-sized (note: the size information is maintained in P〈F,S〉.size2partns). When a partition P ∈ P〈F,S〉 is to be added, we make a tag named by itself, i.e., let Pτ := P with τ := {P}, and then call ADD PARTITION(Pτ ,P). There are two possibilities here: Pτ already exists in P (call ends by Step 1) or Pτ is new (call ends by Step 2). In the former, we are done with Pτ .
In the latter, for every P′ ∈ P\{Pτ}, compute the pairwise join J(P′) := ∨{Pτ, P′} and its tags T(P′) := {τ ∪ τ′ | τ′ ∈ P.partn2tags[P′]}, and call ADD PARTITION on the tagged partition J(P′) with tag set T(P′). Like COMPARE, computing a join can be optimized by leveraging previously computed tags and the partial order in the input poset P〈F,S〉, so as to avoid an actual join computation whenever possible. When inferring from the context is not possible, one can perform an actual join computation ∨(P, P′) in O(|X|) time. This is done by collecting the unique pairs of cell IDs (C(x), C′(x)) for every x ∈ X, where C(x) and C′(x) denote the cell IDs of x in P and P′, respectively. In the worst case (e.g., if all partitions are incomparable and join-irreducible), the complexity is inevitably O(2^N). However, as in poset construction, our algorithm works best when the partial-order structure is rich.
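The cell-ID-pair trick for an actual pairwise join looks like this (a sketch; partitions are lists of frozensets over the ground set X):

def pairwise_join(P1, P2, X):
    """Join of two partitions in O(|X|): group the elements of X by the pair
    (cell ID in P1, cell ID in P2); the groups are the join cells."""
    id1 = {x: i for i, C in enumerate(P1) for x in C}
    id2 = {x: i for i, C in enumerate(P2) for x in C}
    cells = {}
    for x in X:
        cells.setdefault((id1[x], id2[x]), set()).add(x)
    return [frozenset(c) for c in cells.values()]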
Practical tips for sublattice completion. This corresponds to Step 5 in the flowchart in Section 3.1 of the main paper. Recall that constructing the sublattice of PX generated by P〈F,S〉 follows the alternating process: L0 := P〈F,S〉, L1 := 〈L0〉∨, L2 := 〈L1〉∧, L3 := 〈L2〉∨, and so forth, which terminates as soon as Lk−1 = Lk. We denote the end result by 〈P〈F,S〉〉∨∧···, which is the desired sublattice. However, one may want to stop early in the completion sequence, due to concerns about computation, interpretability, and expressiveness, as well as their tradeoffs. We suggest a practical rule of thumb. If the input poset P〈F,S〉 is small, run alternating joins and meets, or even complete it to the sublattice if affordable. If P〈F,S〉 is moderate, complete the joins only (join is closely related to rule lifting; see Appendix C for more details). If P〈F,S〉 is large, just use it as is.
F MORE ANALYSES IN THE LEARNING PHASE
This section elaborates on the last paragraph of Section 3.2 in the main paper, presenting more analyses and interpretations on the rule traces elicited from the toy handwritten-digit examples. Yet, as mentioned in the main paper, computer vision is currently not among the typical use cases of ILL. Learning rules of handwritten digits may not be of much independent interest unless for calligraphy. So, the analyses and interpretations here are for illustration purposes only. We refer readers to the Broader Impact section in the main paper for possible future directions on how ILL may be used, together with other ML models, to solve computer vision tasks.
Recall that the main use case of ILL is to explain a signal ξ, answering what makes ξ an ξ. The same toy example illustrating an ILL process is replayed here in Figure 3. The signal ξ : {0, . . . , 27}2 → [0, 1] is a grayscale image of a handwritten “7”. In this case, a rule of ξ, or the projection of ξ to a partition of {0, . . . , 27}2, can be viewed as gathering “ink” within each partition cell. Accordingly, the (special) lifting can be viewed as redistributing the gathered “ink” (evenly) in each cell. Hence, we term this view the ink model. For visual convenience, we depict a rule of a 2D signal by its lifting (i.e., another grayscale image), since with pixels in the same cell colored the same, we can use the lifting to sketch both the partition and the rule values. More precisely, when a lifting represents a rule, it must be viewed in terms of blocks or superpixels; whereas a real lifting (i.e., a signal or a real image) is viewed normally by the regular pixels. To better clarify, all rules in Figure 3 are displayed in red boxes, whereas all liftings are in green ones.
For a simple illustration, we draw a small number of features and symmetries to generate a poset (P•) of 21 partitions. The corresponding part of the information lattice (R•) is shown by its Hasse diagram in Figure 3. Further, on top of the Hasse diagram, we demarcate the frontiers of the sublevel sets (R≤ε) by six blue dashed curves. Note that in this tiny diagram we have sketched a full range of sublevel sets, yet for large diagrams, sublevel sets are constructed for small ε-values only, in a single-pass BFS. The right part of Figure 3 illustrates a complete ILL process in the alternating setting, with lift and project signified by the green up-arrows and red down-arrows, respectively. During the learning process, ILL tries to minimize the gap in the signal domain (upstairs) through iterative eliminations of the largest gap in the rule domain (downstairs). Eliminating a larger rule gap tends to imply a larger drop in the signal gap, but not necessarily in every iteration, since the special lifting may accidentally recover a better signal if the assumed uniformity is, by chance, present in the signal. The rule set R(k) formed per iteration is presented in the middle of the right part of Figure 3, which jointly shows the complete rule trace continuously progressing along the ε-path.
The rule set in the last iteration under any ε (marked by ⋆ in Figure 3) is the returned solution to the main relaxed Problem (4) in the main paper. This rule set is used to answer what makes ξ an ξ. For example, let rj denote the rule with ID j (here a rule ID is the same as the partition ID, the unique identifier used in Algorithm 1 during the construction phase). Then, among all rules whose entropies are no larger than ε = 2, the third rule set in the trace, R(3) = {r9, r1, r18}, best explains what makes ξ an ξ. However, if more complex rules are allowed, say if all rule entropies are now capped by ε = 6, then R(7) = {r13, r15, r19} is the best. Recall that we do not just eyeball the rules to get intuitive understandings: every rule is the projection of the signal to a tagged partition, where the tag, generated in a prior-driven way, explicitly explains the underlying abstraction criteria. For example, r19 in Figure 3 comes from a symmetry tag representing a permutation invariance, which visually renders as a reflection invariance. Rules r8 and r9 come from the two feature tags div7 ◦ w[1] and div7 ◦ w[2], respectively. These two feature tags represent continuous and even collapsing of the first and second coordinates, respectively, which visually renders as horizontal and vertical strips. Both rules are later absorbed into r13, tagged by div7 ◦ w[1,2], since its rule domain is strictly finer. These rules (r8, r9, r13) apparently summarize the horizontal and vertical parts of the handwritten "7". Further, the vertical part of the "7" is longer and slants more, so we see more vertically-patterned rules in the rule trace (r9, r11, r15). These rules are obtained from finer and finer abstractions along the horizontal direction, so as to capture more details of the vertical part of that "7", such as its slope. Notably, among these vertically-patterned rules, r11 is induced from the symmetry representing a horizontal translation invariance, but it is quickly absorbed into r15, whose entropy is not much higher. This transient appearance of r11 implies that it plays a less important role in explaining this handwritten "7". In fact, from more experiments, symmetries in general play a less important role in explaining many "7"s. This is, however, not the case in explaining many "8"s, where symmetries occur much more often. For example, consider a symmetry fused from translation and permutation invariances whose fundamental domain is homeomorphic to a Möbius strip. We hypothesize that this topological property might be related to the twisted nature of an "8". For a visual comparison, we present in Figure 6 the rule traces learned from a "7" and an "8", as well as the visual similarity between a Möbius strip and an "8".
G STUDIES ON ILL-BASED MUSIC APPLICATION
We introduce two tests associated with a real-world application. The first is to assess rule-learning efficacy, where we compare machine-discovered rules to human-codified domain knowledge. The second is to assess human-interpretability, where we use human subject experiments on interpreting machine-generated rules.
The application here is our first step towards building an automatic music theorist and pedagogue, which is to be deployed as an assistant in music research and education. The two tests are our initial effort towards a systematic benchmarking and assessment platform. In the continuing effort of bridging human and machine intelligence, new standards are to be set and commonly agreed upon, so as to reasonably compare machine-codified discoveries with human-codified knowledge, as well as to use human-subject experiments for assessing interpretability. Fully developing assessment protocols is a challenging, long-term endeavor. Here, we use the two tests as starting points, and present results from each. Respectively, the first experiment tests music rule discovery, a basic requirement to be a theorist; the second tests interpretability, a basic requirement to be a pedagogue.
To conduct the two tests, we first build a user-friendly web application, which is used to better see and control the ILL learning process and results. Figure 7 illustrates the web interface. Users learn music rules, each rendered as a histogram over a tagged partition (i.e., machine-codified music concepts), and control their learning pace via self-explanatory knobs whose set values are automatically converted to internal parameters (e.g., ε, γ). One critical music-specific extension to the vanilla ILL presented in the main paper is a temporal component, since music is highly contextual. This amounts to considering more than one signal simultaneously, including various (un)conditional chord distributions (multiple n-grams with varying n's and varying conditionals) that encode information about individual chords as well as melodic and harmonic progressions. Accordingly, ILL produces both context-free and context-dependent rules, each of which is indexed by a partition and a conditional under that partition. For example, given the partition equivalent to classifying music chords into roman numerals, and conditioned on the previous two chords being a I64 followed by a V, a rule specifies the probability distribution of the next roman numeral, and in this case reproduces the music rule on the Cadential-64. Note that in a context-dependent rule, not only is the query chord abstracted, but so is the conditional. This is in contrast to many classical n-gram models, where no abstraction is present and which thus may suffer from the problem of rare contexts: conditionals that occur very few or even zero times in the training set. Here, however, the core idea of abstraction makes "small data" large and thus rare contexts common. More examples of context-free and context-dependent rules are illustrated as histograms in Figure 8. These rule histograms are generated by ILL from 370 of Bach's four-part chorales (in the format of digital sheet music) and are used in the two experiments detailed below.
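A stripped-down sketch of such a context-dependent rule (the abstract callback, e.g., a roman-numeral classifier, is an assumed interface): abstracting both the conditional and the query merges raw contexts, which is exactly what makes rare contexts common:

from collections import Counter, defaultdict

def abstracted_bigram_rules(chord_seq, abstract):
    """Condition on the abstracted previous chord and tabulate the distribution
    of the abstracted next chord, yielding one rule per abstracted context."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(chord_seq, chord_seq[1:]):
        counts[abstract(prev)][abstract(nxt)] += 1
    return {ctx: {c: n / sum(ctr.values()) for c, n in ctr.items()}
            for ctx, ctr in counts.items()}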
G.1 COMPARISON TO HUMAN-CODIFIED KNOWLEDGE
We compare rules learned from ILL to a standard undergraduate music theory curriculum. We want to use known laws from music theory as a benchmark to see how ILL-generated rules correspond to human-codified music knowledge. In particular, we want to see what is covered, what is new, and what is different. Yet, the ultimate goal is not just to use known music theory as a ground truth for the purpose of driving ILL to fully reconstruct what we know, but eventually to discover new rules,
to gain new understandings of existing rules, to suggest new composition possibilities, as well as to teach rules in a personalized way.
A priori, we are aware of three major differences between human-codified music theory and ILL-generated rules. (a) In terms of raw music representations (input), laws of music theory are derived from all aspects of sheet music, whereas ILL-generated rules are currently derived only from MIDI pitches and their durations. This is because we currently study ILL as a general framework; when a music-specific application is developed later, one can include more raw representations such as letter pitches, meter, measure, beaming, and articulations. (b) In terms of rule format (output), laws of music theory and ILL-generated rules have two different styles, the former being more descriptive and absolute (hard), the latter more numerical and probabilistic (soft). For instance, a music rule that completely forbids consecutive fifths is reproduced by an ILL-generated rule that assigns a small non-zero probability to the event. Therefore, while it is possible to "translate", with information loss, a (precise) ILL-generated rule into a (verbal) rule in known theory, it may not make sense to "translate" in the opposite direction. Also, it is not a good idea to hardcode known rules as categorical labels in a supervised setting, since music rules are inherently flexible and hardcoding may lead to a rule-based AI that generates somewhat "mechanical" music, such as the Illiac Suite (Hiller & Isaacson, 1957). (c) In terms of purpose, laws of music theory are intended more for general pedagogy than to reflect the style of a particular data set. For instance, while consecutive fifths are banned in homework and exams, they are widely used in many pop songs. Even in our data set of Bach's chorales (which are supposed to follow the known rules quite well), Bach himself wrote a handful of consecutive perfect intervals. By contrast, ILL-generated rules are specific to the input data set. We may certainly find some data sets that follow the known rules quite well (e.g., Bach's chorales), but also others that break many known rules and even set their own.
Keeping these three differences in mind and by further isolating them from the comparison results, we can reveal the remaining differences that are due to the rule-learning process itself. To come up with the benchmark, we compiled a comprehensive syllabus of laws from music theory taught in our music school’s theory review course, which runs through the full series of theory classes at a fast pace. This human-codified music knowledge is organized as a running list of 75 topics and subtopics indexed by lecture number. On the other hand, ILL-generated rules are indexed by partition (ID) and n-gram (n). The results are summarized below in Table 2, where the colored crosses in the last column indicate topics that are missed by ILL due to different reasons.
Among the total of 75 topics in Table 2, we first ignore 7 of them (red crosses) which require music raw representations beyond MIDI pitches and durations (e.g., accents and enharmonic respellings of some augmented sixth chords). ILL covered 45 of the remaining 68 topics, yielding a coverage of 66%. Among the 23 missed topics, 18 (blue crosses) are related to deeper-level temporal abstractions such as harmonic functions, key areas, and forms. These temporal abstractions may be better modeled as abstractions of transitions, which are implicitly captured but not explicitly recovered by our current multi-abstraction multi-n-gram language model, which models only transitions of abstractions. The other 5 missed topics (black crosses) are tricky and require ad-hoc encodings, which are not explicitly learnable (but may be implicitly captured to some extent) by our current ILL implementation. Accordingly, the composition of the 30 = 7 + 18 + 5 uncovered topics suggests three future directions that could raise the rule-learning capacity of the current implementation: (a) include more music raw representations; (b) model abstractions of transitions; (c) either make music-specific adjustments when developing music apps or figure out a more expressive and more general framework in the long run. However, remember that the goal here is not to reproduce what we know but to augment it. So, we may certainly stop after enabling abstractions of transitions, which in the best case can yield an improved coverage of 84% (i.e., 93% of the topics recoverable from MIDI notes only), which is good enough.
Table 2: Music theory topics vs. ILL-generated rules (✓ = covered, ✗ = missed).

Lecture | Music Theory                                | Partition IDs | n-gram | Covered
1       | music accents                               |               |        | ✗
2       | pitch                                       | 1-4           | 1      | ✓
2       | pitch class                                 | 16-19         | 1      | ✓
2       | interval                                    | 31-36         | 1      | ✓
2       | interval class                              | 97-102        | 1      | ✓
3       | stepwise melodic motion (counterpoint)      | 1-4           | 2      | ✓
3       | consonant harmonic intervals (counterpoint) | 97-102        | 1      | ✓
3       | beginning scale degree (counterpoint)       | 16-19         | 2      | ✓
3       | ending scale degree (counterpoint)          | 16-19         | 2      | ✓
3       | beginning interval class (counterpoint)     | 97-102        | 2      | ✓
3       | ending interval class (counterpoint)        | 97-102        | 2      | ✓
3       | parallel perfect intervals (counterpoint)   | 97-102        | 2      | ✓
3       | directed perfect intervals (counterpoint)   |               |        | ✗
3       | law of recovery (counterpoint)              | 1-4           | ≥3     | ✓
3       | contrapuntal cadence (counterpoint)         | 1-4, 97-102   | 2,3    | ✓
3       | melodic minor ascending line (counterpoint) |               |        | ✗
4       | tri…

1. What is the focus and contribution of the paper on information lattice learning?
2. What are the strengths of the proposed approach, particularly in its application to various domains?
3. What are the weaknesses of the paper, especially regarding the complexity and scalability of the algorithm?
4. Do you have any concerns about the definition of signals and how they can be restricted?
5. How does the reviewer assess the novelty and potential impact of the proposed framework?
6. Are there any suggestions for improving or extending the proposed method, such as integrating it with deep learning techniques?
7. How does the reviewer evaluate the clarity and quality of the paper's content?

Review
This paper proposes a novel learning framework called information lattice learning. It is formulated as an optimization problem that finds decomposed hierarchical representations which are efficient in explaining data, using a two-phased approach. ILL generalizes Shannon's information lattice, and the authors demonstrate that ILL can be applied to learning music theory from scores and chemical laws from molecular data. This paper proposes a new research direction, and I believe it is worth presenting. One concern I have is the complexity and scalability of the proposed algorithm.
The authors emphasize "small data", but I don't see why the proposed approach cannot be applied to "large data". On page 15, the authors mention a worst-case complexity of O(2^N). Does this mean the proposed approach works only for "simple" examples such as discovering the music theory and chemical laws considered in this paper? Can the authors elaborate more on the complexity and scalability issues of their algorithm? Did the authors only consider the "small data" regime due to the scalability problem?
The definition of a signal seems very general, and it can even include pmfs. How can we enforce restrictions on signals, such as constraining them to the probability simplex?
Can the authors comment on how to make a deep learning version of the proposed framework? Say, a hierarchical InfoGAN, a hierarchical VAE, etc.?
It would be interesting to compare their work with existing unsupervised deep learning algorithms that attempt to find disentangled representations. |
ICLR

Title
Information Lattice Learning
Abstract
Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize the signal, we need several disentangled rules arranged in a hierarchy, formalized by a lattice structure. ILL focuses on explainability and generalizability from “small data”, and aims for rules akin to those humans distill from experience (rather than a representation optimized for a specific task like classification). This paper focuses on a mathematical and algorithmic presentation of ILL, then demonstrates how ILL addresses the core question “what makes X an X” or “what makes X different from Y” to create effective, rule-based explanations designed to help human learners understand. The key part here is what rather than tasks like generating X or predicting labels X,Y. Typical applications of ILL are presented for artistic and scientific knowledge discovery. These use ILL to learn music theory from scores and chemical laws from molecule data, revealing relationships between domains. We include initial benchmarks and assessments for ILL to demonstrate efficacy.
1 INTRODUCTION
With rapid progress in AI, there is an increasing desire for general AI (Goertzel & Pennachin, 2007; Chollet, 2019) and explainable AI (Adadi & Berrada, 2018; Molnar, 2019), which exhibit broad, human-like cognitive capacities. One common pursuit is to move away from “black boxes” designed for specific tasks to achieve broad generalization through strong abstractions made from only a few examples, with neither unlimited priors nor unlimited data (“primitive priors” & “small data” instead). In this pursuit, we present a new, task-nonspecific framework—Information Lattice Learning (ILL)— to learn representations akin to human-distilled rules, e.g., producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students (shown at the end).
The term information lattice was first defined by Shannon (1953), but remains largely conceptual and unexplored. In the context of abstraction and representation learning, we independently develop representation lattices that coincide with Shannon’s information lattice when restricted to his context. Instead of inventing a new name, we adopt Shannon’s. However, we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but we also bring learning into the lattice, yielding the name ILL.
ILL explains a signal (e.g., a probability distribution) by disentangled representations, called rules. A rule explains some but not all aspects of the signal, but together the collection of rules aims to capture a large part of the signal. ILL is specially designed to address the core question “what makes X an X” or “what makes X different from Y”, emphasizing the what rather than generating X or predicting labels X,Y, in order to facilitate effective, rule-based explanations designed to help human learners understand. A music AI classifying concertos, or generating one that mimics the masters, does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one. Our focus represents a shift from much representation-learning work (Bengio et al., 2013) that aims to find the best representation for solving a specific task (e.g., classification) without strong concern for explainability. Instead of optimizing a task-specific objective function (e.g., classification error), ILL balances more general objectives that favor fewer, simpler rules for interpretability, and more essential rules for effectiveness—all formalized later.
One intuition behind ILL is to break the whole into simple pieces, similar to breaking a signal into a Fourier series. Yet, rather than decomposition via projection to an orthonormal basis and synthesis
via weighted sum, we decompose a signal in a hierarchical space called a lattice. Another intuition behind ILL is feature selection. Yet, rather than features, we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning. The goal is to restore human-like, hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice (called projection-and-lifting, Figure 1: left), resulting in more than a sum of parts.
ILL comprises two phases: (a) lattice construction; (b) learning (i.e., searching) in the lattice. This is similar to many machine learning (ML) models comprising (a) function class specification then (b) learning in the function class, e.g., constructing a neural network then learning—finding optimal parameters via back-propagation—in the network. ILL’s construction phase is prior-efficient: it builds in universal priors that resemble human innate cognition (cf. the Core Knowledge priors (Spelke & Kinzler, 2007)), then grows a lattice of abstractions. The priors can be customized, however, to cater to a particular human learner, or facilitate more exotic knowledge discovery. ILL’s learning phase is data-efficient: it learns from “small data” encoded by a signal, but searches for rich explanations of the signal via rule learning, wherein abstraction is key to “making small data large”. Notably, the construction phase is prior-driven, not data-driven—data comes in only at the learning phase. Hence, the same construction may be reused in different learning phases for different data sets or even data on different topics (Figure 1: right). Featuring these two phases, ILL is thus a hybrid model that threads the needle between a full data-driven model and a full prior-driven model, echoing the notion of “starting like a baby; learning like a child” (Hutson, 2018).
ILL is related to many research areas. It draws ideas and approaches from lattice theory, information theory, group theory, and optimization. It shares algorithmic similarity with a range of techniques including MaxEnt, data compression, autoencoders, and compressed sensing, but with a much greater focus on achieving human-like explainability and generalizability. Below, we broadly compare ILL to prominent related models, leaving more detailed comparisons with the most similar ones to the Appendix.
Compared to...                                 | ILL is...
deep learning                                  | a “white-box” model balancing human-explainability and task performance
Bayesian inference                             | modeling human reasoning with widely shared, common priors and few, simple rules, rather than using probabilistic inference as the driving force
tree-like models                               | structurally more general: a tree (e.g., a decision tree or hierarchical clustering) is essentially a linear lattice (formally, a chain) depicting a unidirectional refinement or coarsening process
concept lattice in FCA (Ganter & Wille, 2012)  | conceptually more general: it may include both known and unknown concepts; ILL does not require but discovers domain knowledge (more details in Appendix A)
We illustrate ILL applications by learning music theory from scores, chemical laws from compounds, and show how ILL’s common priors facilitate mutual interpretation between the two subjects. To begin, imagine Tom and Jerry are playing two 12-key pianos simultaneously, one note at a time (Figure 1: right). The frequency of the played two-note chords gives a 2D signal plotted as a 12× 12 grayscale heatmap. Inspecting this heatmap, what might be the underlying rules that govern their co-play? (Check: all grey pixels have a larger “Jerry-coordinate” and project to a black key along the “Tom-axis”.) We now elaborate on ILL and use it to distill rules for complex, realistic cases.
2 INFORMATION LATTICE: ABSTRACTIONS AND RULES OF A SIGNAL
Signal. A signal is a function ξ : X → R. For notational brevity and computational reasons, assume ξ is non-negative and X ⊆ Rⁿ is finite (not a limitation: see Appendix B). For example, a signal ξ : {1, . . . , 6} → R may be a probability mass function (pmf) of a dice roll, and a signal ξ : {0, . . . , 27}² → R may be a 28 × 28 grayscale image. We denote the set of all signals on X by SX.

Partition / abstraction. We use a partition P of a set X to denote an abstraction of X; we call a cell C ∈ P an (abstracted) concept. The intuition is simple: a partition of a set renders a “coarse-grained view” of the set, or more precisely, an equivalence relation on the set. In this view, we identify equivalence classes of elements (concepts) instead of individual elements. For example, the partition P = {{1, 3, 5}, {2, 4, 6}} of the six outcomes of a die roll identifies two concepts (odd, even).

Rule / representation. A rule of a signal ξ : X → R is a “coarsened” signal rξ : P → R defined on a partition P of X with rξ(C) := Σ_{x∈C} ξ(x) for any C ∈ P. In this paper, a rule of a signal is what we mean by a representation of a signal. If the signal is a grayscale image, a rule can be a special type of blurring or downsampling of the image; if the signal is a probability distribution, a rule can be a pmf of the “orbits” of the distribution for lifted inference algorithms (Holtzen et al., 2019; Kersting, 2012). More generally, we define a rule (regardless of any signal) over a set X by any signal on any partition of X; accordingly, we denote the set of all rules over X by RX := ∪_{P ∈ {all partitions of X}} SP.

Partition lattice. Abstractions are hierarchical: one coarse-grained view can be coarser than another. Let the partition lattice (PX, ⪯) of a set X be the partially ordered set (poset) containing all partitions of X, equipped with the partial order coarser than (⪯), or finer than (⪰), defined in the standard way. The finest partition is {{x} | x ∈ X} and the coarsest is {X}. Per general lattice theory (Davey & Priestley, 2002), PX is a complete lattice: every subset P ⊆ PX has a unique supremum ∨P and a unique infimum ∧P, where ∨P is called the join of P, denoting its coarsest common refinement, and ∧P is called the meet of P, denoting its finest common coarsening.

Information lattice. The information lattice (Rξ, ⇐) of a signal ξ : X → R is the poset of all rules of ξ equipped with the partial order more general than: for any two rules r, r′ ∈ Rξ, we say r is more general than r′ (or r′ is more specific), denoted r ⇐ r′, if domain(r) ⪯ domain(r′). Notably, Rξ ⊆ RX, and Rξ is isomorphic to the underlying partition lattice via the projection defined below.

Projection and lifting. For any signal ξ ∈ SX, we define the projection operator ↓ξ : PX → Rξ by letting ↓ξ(P) be the rule of ξ on P. One can check that ↓ξ : (PX, ⪯) → (Rξ, ⇐) is an isomorphism. Conversely, we define the general lifting operator ⇑X : RX → 2^SX by letting ⇑X(r) denote the set of all signals that satisfy the rule r, i.e., ⇑X(r) := {ξ ∈ SX | ↓ξ(domain(r)) = r} ⊆ SX. To make lifting unique, and per the Principle of Indifference (Eva, 2019), we introduce a special lifting ↑X(r) that picks the most “uniform” signal in ⇑X(r). Formally, define ‖·‖q : SX → R by ‖ξ‖q := (Σ_{x∈X} ξ(x)^q)^{1/q}. For any ξ, ξ′ ∈ SX satisfying ‖ξ‖₁ = ‖ξ′‖₁, we say that ξ is more uniform than ξ′ (or ξ′ is more deterministic) if ‖ξ‖₂ ≤ ‖ξ′‖₂. We define the (special) lifting operator ↑X : RX → SX by ↑X(r) := argmin_{ξ∈⇑X(r)} ‖ξ‖₂ (which can be computed by simply averaging within each cell). Notation here follows the convention as to function projections to quotient spaces (Kondor & Trivedi, 2018). Lifting a single rule to the signal domain can be extended in two ways: (a) lift to a finer rule domain P instead of X, i.e., ⇑P(r) or ↑P(r); (b) lift more than one rule. Accordingly, we write ⇑ := ⇑X and ↑ := ↑X as defaults, write R = ↓ξ(P) := {↓ξ(P) | P ∈ P} ⊆ Rξ to denote a rule set, and write ⇑(R) := ∩_{r∈R} ⇑(r) = {η ∈ SX | ↓η(P) = R} and ↑(R) := argmin_{η∈⇑(R)} ‖η‖₂ to denote the signals that satisfy all rules in R (general lifting) and the most uniform one (special lifting), respectively. More computational details on lifting and its intimate relation to the join are in Appendix C.
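The projection and special-lifting operators are simple to realize computationally. Below is a minimal sketch for finite signals; the names (project, lift) and the die example are chosen for illustration only.

```python
import numpy as np

# A sketch of projection and special lifting for a finite signal, following
# the definitions above; a partition is a list of cells (index arrays).
def project(signal, partition):
    """Rule of a signal on a partition: sum the signal within each cell."""
    return np.array([signal[cell].sum() for cell in partition])

def lift(rule, partition, n):
    """Special lifting: the most uniform signal satisfying the rule,
    obtained by spreading each cell's mass evenly over its elements."""
    out = np.zeros(n)
    for value, cell in zip(rule, partition):
        out[cell] = value / len(cell)
    return out

xi = np.array([0.1, 0.1, 0.2, 0.2, 0.1, 0.3])   # a die pmf on {1,...,6}
P = [np.array([0, 2, 4]), np.array([1, 3, 5])]  # the odd/even abstraction (0-indexed)
r = project(xi, P)                              # rule: [0.4, 0.6]
print(r, lift(r, P, 6))                         # lifted signal, uniform within cells
```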
3 INFORMATION LATTICE LEARNING (ILL)
We first formalize ILL as a single optimization problem and then solve it practically in two phases. Let ξ : X → R be a signal we want to explain. By explaining, we mean searching for a rule set R = ↓ξ(P) ⊆ Rξ such that: (a) R recovers ξ well, i.e., R is essential; (b) R is simple. The main idea agrees with Algorithmic Information Theory (Chaitin, 1987; Chater & Vitányi, 2003), but we use an information-lattice based formulation focusing on explainability. We start our formulation below.
We say a rule set R recovers the signal ξ exactly if ↑(R) = ξ. Yet, exact recovery may not always be achieved. The information loss occurs for two reasons: (a) insufficient abstractions, i.e., the join ∨P is strictly coarser than P; (b) the choice made in favor of uniformity is inappropriate. Instead of pursuing exact recovery, we introduce ∆(↑(R), ξ)—a distance (e.g., ℓp distance) or a divergence (e.g., KL divergence) function—to measure the loss, with a smaller ∆ indicating a more essential R. We say a rule set R is simpler if it contains fewer and simpler rules. Formally, we want R minimal, i.e., each rule r ∈ R is indispensable for achieving the same ↑(R). Also, we want each rule r ∈ R informationally simple, measured by a smaller Shannon entropy Ent(r), so that r is more deterministic (Falk & Konold, 1997), easier to remember (Pape et al., 2015), and closer to our common-sense definition of a “rule”. Notably, the partial order renders a tradeoff between the two criteria: r ⇐ r′ implies that r is dispensable in any R ⊇ {r, r′}, but on the other hand Ent(r) ≤ Ent(r′), so including more-specific rules makes the rule set small yet each individual rule (informationally) hard.
The main problem. The formal definition of an ILL problem is: given a signal ξ : X → R,

minimize_{R ⊆ Rξ}  ∆(↑(R), ξ)   subject to   R is minimal;  Ent(r) ≤ ε for any r ∈ R.   (1)

The search space involves the full information lattice (Rξ, ⇐), or isomorphically, the full partition lattice (PX, ⪯). Yet, the size of this lattice, i.e., the Bell number B_|X|, scales faster than exponentially in |X|. It is unrealistic to compute all partitions of X (unless X is tiny), let alone the partial order. Besides computational concerns, there are two reasons to avoid the full lattice (but to leave it implicitly in the background): (a) the full lattice has unnecessarily high resolution, comprising many nearly-identical partitions, particularly when X is large; (b) considering explainability, not every partition has an easy-to-interpret criterion by which the abstraction is made. As such, Formulation (1) is only conceptual and impractical. Next, we relax it and make it practical via two ILL phases.
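To make the two criteria in (1) concrete, here is a small worked check on the same toy die signal as above: the rule on the odd/even partition has entropy of roughly 0.97 bits, and its lifting misses ξ by an ℓ1 gap of roughly 0.33. The ℓ1 choice of ∆ is just one option.

```python
import numpy as np

# A worked check of the two criteria in Problem (1) for one candidate rule:
# informational simplicity Ent(r), and essentialness via Delta(lift(r), xi).
def entropy_bits(rule):
    p = rule / rule.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

xi = np.array([0.1, 0.1, 0.2, 0.2, 0.1, 0.3])   # the die pmf again
P = [np.array([0, 2, 4]), np.array([1, 3, 5])]  # odd/even
r = np.array([xi[c].sum() for c in P])          # project: [0.4, 0.6]
lifted = np.zeros(6)
for v, c in zip(r, P):
    lifted[c] = v / len(c)                      # special lifting
print(entropy_bits(r))                          # ~0.97 bits: a simple rule
print(np.abs(lifted - xi).sum())                # ~0.33: its l1 reconstruction gap
```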
3.1 PRACTICAL LATTICE CONSTRUCTION: TO START LIKE A BABY (PHASE I)
Information lattice construction plays a role similar to building a function class in ML, sometimes called meta-learning. While its importance is commonly understood, the construction phase in many data-driven models is often treated cursorily—using basic templates and/or ad-hoc priors—leaving most of the computation to the learning phase. In contrast, we put substantial effort into our prior-driven construction phase. Pursuing generality and interpretability, we want universal, simple priors that are domain-agnostic and close to the innate cognition of a human baby (Marcus, 2018). Here we draw those from Core Knowledge (Spelke & Kinzler, 2007; Chollet, 2019), which includes “the (small) natural numbers and elementary arithmetic prior” and “the elementary geometry and topology prior”. We then give algorithms to construct abstractions from these priors, and consider such a construction prior-efficient if it is interpretable, expressive, and systematic. In the following flowchart, we summarize information lattice construction as generating a partition sublattice:

seeds (priors) F, S → [1–2] features Φ⟨F⟩ / symmetry groups G⟨S⟩ → partition multiset P⟨F,S⟩ = PΦ⟨F⟩ ∪ PG⟨S⟩ → [3] partition poset (P⟨F,S⟩, ⪯) → [4] partition semilattice ⟨P⟨F,S⟩⟩∨ → [5] partition sublattice ⟨⟨P⟨F,S⟩⟩∨⟩∨∧···

Steps 1–2 form the prior-driven stage, Step 3 the hierarchy stage, and Steps 4–5 the completion stage.
Steps 1–2: Feature/symmetry-induced partitions. Unlike data clustering, our prior-driven partitions are induced from two data-independent sources—features and symmetries. We draw priors—in the form of seed features F and seed transformations S—from Core Knowledge as a basis, and then generate a set of partitions P⟨F,S⟩ as follows (as an example, for X = R²):

F = {w[1], w[2], w[1,2], sort, argsort, sum, diff, div2, . . . , div19, mod2, . . . , mod19}   (2)
S = {horizontal, vertical, diagonal translations} ∪ {rotations} ∪ {reflections}   (3)

Φ⟨F⟩: the set of features generated by F via function composition;
G⟨S⟩: the set of subgroups generated by subsets of S via subgroup generation;
PΦ⟨F⟩: the set of partitions generated by features in Φ⟨F⟩ via preimages;
PG⟨S⟩: the set of partitions generated by subgroups in G⟨S⟩ via orbits.
In (2), wI denotes coordinate selection (like indexing/slicing in python) and the other functions are defined as in python (div and mod are like python's divmod). Then, P⟨F,S⟩ = PΦ⟨F⟩ ∪ PG⟨S⟩ (a sketch of the preimage construction appears at the end of this subsection).

Step 3: Partition poset. We next sort P⟨F,S⟩, computationally a multiset, into the poset (P⟨F,S⟩, ⪯). We import the algorithmic skeleton from generic poset-sorting algorithms (Caspard et al., 2012; Daskalakis et al., 2011), with an outer routine incrementally adding elements and querying an inner subroutine (an oracle) for pairwise comparison. Yet, our poset is special: its elements are tagged partitions, where a tag records the generating source(s) of its tagged partition, e.g., features and/or symmetries. So, we have specially designed both the outer routine ADD_PARTITION and the oracle COMPARE by leveraging (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to pre-determine or filter relations. We relegate details to Appendix E. The data structures for posets include po_matrix and hasse_diagram, encoding the partial order ≺ (ancestors/descendants) and the cover relation ≺c (parents/children), respectively (Garg, 2015).

Steps 4–5: Partition semi/sublattice. To complete (P⟨F,S⟩, ⪯) into a lattice, we compute the sublattice (of PX) generated by P⟨F,S⟩. We follow the idea of alternating-join-and-meet completions borrowed from one of the two generic sublattice-completion methods (Bertet & Morvan, 1999). A discussion on our choice and other related methods is in Appendix D. However, we implement join-semilattice completion (meet-semilattice is dual) in our special context of tagged partitions, which echoes what we did in Step 3 and reuses ADD_PARTITION. The adjustments are (a) changing tags from features and symmetries to join formulae and (b) changing the inner subroutine from pairwise comparison to computing joins. We then run a sequence of alternating joins and meets to complete the lattice. For interpretability, one may want to stop early in the completion sequence. While a single join or meet remains simple for human interpretation—often understood as the intersection or union of concepts (e.g., the join of colored items and sized items gives items indexed by color and size)—having alternating joins and meets may hinder comprehension. More details on a single-step join-semilattice completion, the completion sequence, and tips on early stopping are relegated to Appendix E.
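As an illustration of Steps 1–2, the sketch below induces partitions from a few seed features via preimages. The feature set is a tiny illustrative subset of Φ⟨F⟩, and partition_from_feature is a hypothetical helper, not the paper's implementation. Note how two different tags can induce the same partition, which is exactly why partitions are tagged.

```python
from collections import defaultdict
from itertools import product

# Partitions induced as preimages of seed features (Steps 1-2).
def partition_from_feature(X, f):
    cells = defaultdict(list)
    for x in X:
        cells[f(x)].append(x)     # x and x' share a cell iff f(x) == f(x')
    return frozenset(frozenset(c) for c in cells.values())

X = list(product(range(12), repeat=2))      # toy domain: pairs of pitches
features = {
    "w[1]":         lambda x: x[0],
    "mod12 . w[1]": lambda x: x[0] % 12,
    "sort":         lambda x: tuple(sorted(x)),
    "diff":         lambda x: x[1] - x[0],
}
partitions = {tag: partition_from_feature(X, f) for tag, f in features.items()}
for tag, P in partitions.items():
    print(tag, "->", len(P), "cells")
# On this domain, w[1] and mod12 . w[1] induce the *same* partition:
print(partitions["w[1]"] == partitions["mod12 . w[1]"])   # True: one partition, two tags
```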
3.2 PRACTICAL LATTICE LEARNING: TO LEARN LIKE A CHILD (PHASE II)
Learning in an information lattice means solving the optimization Problem (1), i.e., searching for a minimal subset of simple rules from the information lattice of a signal so as to best explain that signal. Let P• be the sublattice (or semilattice, or poset, if early stopped) from the construction phase. Projecting a signal ξ : X → R to P• yields the information sublattice R• := ↓ξ(P•) ⊆ Rξ. It is worth reiterating that (a) P• is constructed first and is data-independent; (b) ξ (data) comes after P•; (c) (R•, ⇐) is isomorphic to (P•, ⪯): R• retains the partial order (po_matrix and hasse_diagram) and interpretability from P•. As such, R• is what is given at the beginning of the learning phase. The main problem (relaxed). For practicality, we relax Problem (1): instead of the full lattice Rξ, we restrict the search space to R•; instead of minimal rule sets, we consider only antichains (whose elements are mutually incomparable), a necessary condition for minimality. This yields:
minimize_{R ⊆ R•}  ∆(↑(R), ξ)   subject to   R is an antichain;  Ent(r) ≤ ε for any r ∈ R.   (4)
To solve Problem (4), we adopt a (greedy) idea similar to principal component analysis (PCA): we first search for the most essential rule—the one that decreases ∆ most—in explaining the signal, then the second most essential rule in explaining the rest of the signal, and so on. Specifically, we start with an empty rule set R(0) := ∅, and add rules iteratively. Let R(k) be the rule set formed by Iteration (Iter) k and R(k)⇐ := {r ∈ R• | r ⇐ r′ for some r′ ∈ R(k)}. Let R≤ε := {r ∈ R• | Ent(r) ≤ ε}. Then,

(in Iter k + 1)  minimize ∆(↑(R(k) ∪ {r}), ξ)   subject to   r ∈ R(k)feasible := R≤ε − R(k)⇐.   (5)

We pre-compute R≤ε (instead of the whole R•) before the iterations, which can be done by a breadth-first search (BFS) on P•'s hasse_diagram, from the bottom (the coarsest) up. Owing to the monotonicity of Ent w.r.t. the partial order (cf. the grouping axiom of entropy (Cover & Thomas, 2012)), any BFS branch ends once the entropy exceeds ε. (For later use, we save the set R>ε of ending rules in BFS, i.e., the lower frontier of the region with entropy above ε.) In contrast, R(k)⇐ is computed per iteration (by querying P•'s po_matrix).
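A sketch of this pre-computation follows, assuming the lattice structures from the construction phase are available through placeholder callables (finer_neighbors, rule_entropy); these names are illustrative, not the paper's API.

```python
from collections import deque

# A sketch of pre-computing R_{<=eps} by BFS on the Hasse diagram, from the
# coarsest partition up. Pruning a branch once entropy exceeds eps is valid
# because Ent is monotone w.r.t. refinement.
def rules_below_eps(coarsest, finer_neighbors, rule_entropy, eps):
    feasible = set()    # R_{<=eps}: rules simple enough to be candidates
    frontier = set()    # R_{>eps}: ending rules, seeding the BFS for the next eps
    queue, seen = deque([coarsest]), {coarsest}
    while queue:
        node = queue.popleft()
        if rule_entropy(node) > eps:
            frontier.add(node)          # stop climbing this branch
            continue
        feasible.add(node)
        for finer in finer_neighbors(node):   # cover relations just above `node`
            if finer not in seen:
                seen.add(finer)
                queue.append(finer)
    return feasible, frontier
```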
Nested vs. alternating optimization. Computing ↑(R(k) ∪ {r}) requires solving a minimization, so Problem (5) is a nested optimization:

argmin_{r ∈ R(k)feasible} ∆( argmin_{η ∈ ⇑(R(k) ∪ {r})} ‖η‖₂ , ξ ).

One may de-nest the two: instead of comparing rules by lifting them up to the signal domain, we compare them “downstairs” on their own rule domains. So, instead of minimizing (5)'s objective, we

maximize_{r ∈ R≤ε − R(k)⇐}  ∆(↓↑(R(k))(domain(r)), ↓ξ(domain(r))) = ∆(↓↑(R(k))(domain(r)), r).   (6)
The idea is to find the rule domain on which the recovered ↑(R(k)) and the target signal ξ exhibit the largest gap. Adding this rule to the rule set maximally closes the gap in (6), and tends to minimize the original objective in (5). Nicely, in (6) the lifting does not involve r, so (5) is de-nested, which further iterates into an alternating min-max (or lift-project) optimization. Let r⋆(k) be the solution and ∆⋆(k) be the optimal value in Iter k. We update R(k+1) := R(k) ∪ {r⋆(k+1)} − {r⋆(k+1)'s descendants} (so the rule set is always an antichain), and proceed to the next iteration. Iterations end whenever the feasible set is empty, or may end early if the next rule becomes less essential, measured by |∆⋆(k+1) − ∆⋆(k)| ≤ γ in the nested setting, and ∆⋆(k) ≤ γ in the alternating setting (for some γ). The full learning path & complexity. We denote a solve process for Problem (6) by SOLVE(ε, γ), or SOLVE(ε) if γ is fixed ahead. To avoid tuning ε manually, we solve an ε-path. For ε₁ < ε₂ < · · ·, assuming SOLVE(εᵢ) takes Kᵢ iterations, we run the following to solve the main relaxed Problem (6):
∅ = R(0) → SOLVE(ε₁) → R(K₁) → SOLVE(ε₂) → R(K₁+K₂) → · · ·   (7)

So, lattice learning boils down to solving a sequence of combinatorial optimizations on the Hasse diagram of a lattice. We walk through the full process (7) via a toy example, starting with a signal ξ : {0, . . . , 27}² → [0, 1] denoting an image of “7” and a toy-sized information lattice of the signal (Figure 3A). The sequence of optimizations (7) proceeds at two paces concurrently: the slower pace is indexed by i; the faster pace is indexed by the iteration number k. As mentioned earlier, the sets R≤εᵢ are pre-computed at the slower pace, with the (i+1)th BFS initialized from R>εᵢ (the ending rules of the ith BFS). The monotonicity of Ent w.r.t. the partial order assures that these BFSs add up to a single (global) BFS on the entire Hasse diagram, climbing up the lattice from the bottom. This is shown in Figure 3B as the monotonic expansion of the blue region (R≤ε) explored by BFS. Locally at each iteration along the slower pace, solving Problem (6) is quadratic in the worst case, when the feasible set is an antichain (i.e., no order), and linear in the best case, when the feasible set is a chain (i.e., totally ordered). Since local BFSs add up to a single BFS with standard linear complexity, the entire learning phase has a total complexity between linear and quadratic in the number of vertices and edges of the whole Hasse diagram. In general, the denser the diagram, the lower the complexity. This is because R(k)⇐ tends to be large in this case, with more descendants activated (i.e., red in Figure 3B), which in turn effectively shrinks the feasible set (i.e., the blue region minus red). For example, unlike the first three iterations in Figure 3B, the 4th iteration (ε = 3) activates more than one rule: the one being extracted as well as all its unexplored descendants. Further, the upper bound is rarely reached. Unlike in this toy example, BFS in practice is often stopped early when ε becomes large, i.e., when later rules become more random. Hence, targeting more deterministic and disentangled rules only, not all vertices and edges are traversed by BFS. At the end of the learning process, for explanatory purposes, we store the entire ε-path and the (R(k))_{k≥0} sequence instead of just the very last one. This yields a rule trace as the standard ILL output, which we present below.
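The sketch below mimics one SOLVE(ε, γ) pass in the alternating setting of Eq. (6), with two simplifications labeled in the comments: candidates are plain partitions rather than Hasse-diagram nodes, and the lifting averages rule by rule instead of solving the joint min-norm problem. It is an illustrative approximation, not the paper's exact procedure.

```python
import numpy as np

def project(signal, partition):
    return np.array([signal[cell].sum() for cell in partition])

def lift(rules, n):
    # Simplification: start from the uniform signal (assumes xi sums to 1),
    # then spread each rule's cell mass evenly; the paper instead solves a
    # joint min-norm problem over all rules.
    eta = np.full(n, 1.0 / n)
    for rule, partition in rules:
        for value, cell in zip(rule, partition):
            eta[cell] = value / len(cell)
    return eta

def solve(xi, candidates, gamma):
    """candidates: list of partitions, each a list of index arrays."""
    picked, n = [], len(xi)
    while candidates:
        eta = lift([(project(xi, P), P) for P in picked], n)
        # Eq. (6): find the rule domain where the recovery misses xi the most
        gaps = [np.abs(project(eta, P) - project(xi, P)).sum() for P in candidates]
        best = int(np.argmax(gaps))
        if gaps[best] <= gamma:                   # remaining rules are inessential
            break
        picked.append(candidates.pop(best))
    return [(project(xi, P), P) for P in picked]
```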
How to read ILL output. ILL outputs a rule trace comprising an evolving sequence of rules, rule sets, and recovered signals (Figure 3C). The three sequences are indexed by iteration and by the ε-path, so the rule set at the last iteration under any ε (starred) is the returned solution to the main Problem (4). We depict a rule by its lifting, since this sketches both the partition and the rule values. Figure 3C gives a full presentation of a rule trace. We also introduce a two-line shorthand (Figure 3D), keeping only the sequence of recovered signals and that of the rules. A rule trace answers what makes ξ an ξ, or what are the best ε-simple rules explaining ξ. ILL rules are more interpretable than just eyeballing patterns. (a) The interpretability of the trace is manifest in its controllability via ε, γ: smaller ε for simpler rules and larger γ for more essential rules. (b) The interpretability of each rule is gained from its partition tag—the criteria by which the abstraction is made. A tag may contain several generating sources as different interpretations of the same rule abstraction. Like different proofs of a theorem, a partition tag with multiple sources reveals equivalent characterizations of a structure and thus more insights into the signal. So, tags are not only computationally beneficial in constructing lattices, but also key to interpretation. We present in-depth analyses of tags in the applications below.
4 ILL EXAMPLES
We show typical ILL examples of knowledge discovery in art and science: learning music theory from scores and chemical laws from compounds (while relegating more analyses on handwritten digits to Appendix F). For both, we fix the same priors—F, S in (2)(3)—and thus the same lattice. We fix the same parameters: the ε-path is 0.2 < 3.2 < 6.2 < · · · (tip: a small offset at the beginning, e.g., 0.2, is used to get nearly-deterministic rules) and γ is 20% of the initial signal gap. This fixed setting is used to show generality and for comparison. Yet, the parameters can be fine-tuned in practice.
Music illustration. Signals are probability distributions of chords encoded as vectors of MIDI keys. Figure 4a) shows such a signal—the frequency distribution of two-note chords extracted from the soprano and bass parts of Bach’s C-score chorales (Illiac Software, Inc., 2020)—with the learned rule trace listed below. The first rule is tagged by argsort ◦w[1,2] and has probability all concentrated in one cell whose elements have a larger y-coordinate (the black region above the diagonal). So, this is a deterministic rule, echoing the law of “no voice crossing (N.V.C.)”, i.e., soprano higher than bass. Checking later rule tags finds laws of voice range (V.R.), diatonic scale (D.S.), and consonant interval (C.I.)—almost all of the main static rules on two-voice counterpoint. Notably, the third rule is tagged by both mod12 ◦ w[1] and vertical translation invariance. From both feature and symmetry views, this tag identifies the concept of all Cs, all Ds, etc., which is the music concept of pitch class. The feature view explicitly reveals a period of 12 in pitches—the notion of an octave (in defining pitch class); the symmetry view reveals the topology—the manifold where the concepts lie—in this case a 2D torus.
Chemistry illustration. Signals are boolean-valued functions indicating the presence of compound formulae encoded as vectors of atomic numbers in a molecule database. Figure 4b) shows a signal attained by collecting two-element compounds from the Materials Project database (Jain et al., 2013) of common compounds. The first rule tagged by div18 ◦w[2] is deterministic: Element 2 can never be
Ar, K, Ca. It nicely captures the visual pattern in Figure 4b) (the last three vacant columns) and hints suggestively at some chemistry rules. The second rule tagged by mod8 ◦w[2] has peaks at cells tagged by feature values 1, 7, 0, 6. These cells, for Element 2, are halogens (+H), pnictogens, chalcogens, crystallogens. The third rule shows alkali metals, alkaline earth metals, crystallogens, icosagens are the cells common for Element 1. Next rule shows the common combinations, e.g., alkali metals and halogens. Note that the 2nd, 3rd, 4th rules for chemistry and the 5th, 3rd, 4th rules for music share the same tags, except that mod12 becomes mod8—period changes from 12 (a music octave) to 8 (number of main groups). So, when two chemical elements form a compound, they are like two music notes forming a chord! The music concepts of pitch classes and intervals parallel the chemical concepts of groups and their distances. Although abstractions are shared, rules differ. Instead of a diatonic scale in Bach’s chorales, chemistry uses a “cation scale” and an “anion scale”. It is interesting that our intention to show ILL’s generality (same lattice, parameters for different subjects) also suggests links between art and science by interpreting phenomena (signals) in one subject from the perspective of the other (Bodurow, 2018). Applications that extend the experiment here beyond a clustering model to restore the periodic table (Zhou et al., 2018) and render complex molecules in high dimensions are ongoing, aiming to discover new laws, new interpretations of existing laws, and new materials.
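For readers who want to reproduce the flavor of such a rule, the sketch below projects a handful of two-element compounds onto the partition tagged mod8 ∘ w[2]. The compound list is hypothetical, not the Materials Project data.

```python
from collections import Counter

# Hypothetical two-element compounds as (Z1, Z2) atomic-number pairs
# (NaCl, LiF, KF, MgO, CaS, AlN) -- not the Materials Project data.
compounds = [(11, 17), (3, 9), (19, 9), (12, 8), (20, 16), (13, 7)]

rule = Counter(z2 % 8 for _, z2 in compounds)   # project onto mod8 . w[2]
total = sum(rule.values())
for cell in sorted(rule):
    print(f"cell {cell}: {rule[cell] / total:.2f}")
# peaks at cell 1 (halogens), cell 0 (chalcogens), cell 7 (pnictogens)
```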
Real-world deployment & evaluation. We generalized the music illustration to a real app of an automatic music theorist (Yu et al., 2016; Yu & Varshney, 2017). It specially implements the alternating min-max setting as a “student-teacher” model: the student is a (music) generator and the teacher is a discriminator. The two form a loop where the teacher guides the student towards a target style through iterative feedback (extracting rules) and exercise (applying rules). This app extends the above music illustration considerably. It considers more music voices, so now signals are in higher dimensions and rules are on more complex chord structures. It considers temporal structure, so now signals include many (un)conditional chord distributions (multi-n-grams), yielding both context-free and context-dependent rules, but new challenges too, namely rare contexts and contradictory rules. ILL's core idea of abstraction makes “small data” large and thus rare contexts common (Yu & Varshney, 2017), and a redesigned lifting operator resolves contradictions (Yu et al., 2017). Further, parameters like ε, γ are made into self-explanatory knobs for users to personalize their learning pace.
We conducted two studies to assess rule-learning capability and interpretability. We present the main results here and detail the procedures in Appendix G. In the first study, we compared ILL-discovered rules with human-codified domain knowledge to see how much of the known can be reproduced and how much new can be discovered. Trained on just 370 of Bach's chorales, our model reproduced in explicit forms 66% of a standard music theory curriculum (Figure 5A).
[Figure 5: ILL assessments on knowledge discovery tasks. (A) How much known: covered 66%, hinted 26%, missed 7%. (B) How interpretable: students' interpretation scores (cf. Table 5). (C) How much new: figured soprano (entropy = 4.76), figured alto (entropy = 4.78), figured tenor (entropy = 4.80), figured bass (entropy = 4.34).]
Under review as a conference paper at ICLR 2021 the histogram—a symbolic and pictorial encoding. Students were explicitly instructed that writing out a description that was basically a literal repetition of the histogram (e.g., taking a modulo 12 of a chord results in a 91.2% chance of being 0, 0, 4, 7) is not acceptable: they must reveal the music behind the math. In fact, we made it clear to the students that we only want qualitative descriptions. Students were specifically told (in the instructions) to only pay attention to the relative values of the probabilities whose exact numbers are unimportant (e.g., what are most likely, what are more likely, what are almost impossible). This homework was due in two weeks. During the two-week period, we asked the students to complete it independently, with no group work or office hours.
Assess Human Interpretations. The homework was designed in a way such that every rule historgram encoded at least one music concept/rule consistent with standard music theory. In addition, every histogram contained either one additional known music rule or something strange that either conflicted with a known rule or represented something new. We assigned two points per rule. Further, we made an initial rubric containing the (authoritative) music keywords used to describe every rule histogram. Because students’ answers arrived in the form of qualitative text, to ensure credibility and fairness of the initial rubric, we held a discussion session at a regular lecture time (80 minutes) with all students as well as the teaching staff. During the discussion session, we went over all 25 rules one by one. For each, we first announced the keywords in the initial rubric and explained to the students that these keywords would later be used to grade their homework. However, in the discussion session, every student was encouraged to object to any of our announced keywords and/or to propose new keywords accompanied with a convincing explanation. New/modified keywords that were commonly agreed upon were added/updated to the initial rubric. By the end of discussion session, we compiled a more inclusive rubric containing broadly accepted keywords. This rubric-generating process was transparent to all the students. In the final step, we manually graded every student’s answer sheet against keywords in the rubric and computed their scores. A summary of the students’ performances is presented in Table 5. Except for cases where the student did not do the homework, a major source of score deduction was from misunderstanding the n-gram (e.g., the probability of the current chord conditioned on the previous chord was mistakenly interpreted as the probability of the previous chord conditioned on the current one). This may be largely due to unfamiliarity with the n-gram models for new CS+Music students. Nevertheless, the majority of the students who did the homework (2/3) succeeded (with respect to the 30/50 passing grade) in interpreting the rules generated from ILL, which in turn provides evidence on the interpretability of the AI-produced knowledge itself.
Table 5: Students' final scores.

Score Range | # of Students
50          | 3
[40,50)     | 7
[30,40)     | 2
[20,30)     | 4
[10,20)     | 1
[0,10)      | 1
0           | 5
H CONCLUSION AND BROADER IMPACTS
Model transparency and interpretability are important for trustworthy AI, especially when interacting directly with people such as scientists, artists, and even multidisciplinary researchers bridging the Two Cultures (Snow, 1959) (e.g., like music and chemistry). The core philosophy underlying ILL arises from a human-centered standpoint and our long-term pursuit of “getting humanity back into artificial intelligence”. We strive to develop human-like artificial intelligence, which in turn may help advance human intelligence—a goal at the intersection of AGI (artificial general intelligence (Goertzel & Pennachin, 2007)), XAI (explainable artificial intelligence (Adadi & Berrada, 2018)), and “AI as augmented intelligence” (Jordan, 2019).
As such, the focus of interpretability in this line of research is not just the end result of the model, but the entire learning process. This emphasis on process is manifest in this paper, e.g., in presenting the full rule trace rather than only the final rule set.
In the rest, about 26% (e.g., harmonic functions and music forms) was implicitly hinted at by the current n-gram based model, which models only transitions of abstractions but not explicitly abstractions of transitions—a future direction. In the second study, we ran a human-subject experiment in the form of homework for a music class. The homework asked 23 students to write verbal interpretations of ILL-generated rules rendered as histograms over tagged partitions. Grading was based on a rubric of keywords generated via majority vote in a later discussion among students and teachers. Figure 5B shows that the majority (2/3) of the students who did the homework succeeded (w.r.t. the 30/50 passing grade) in the interpretation task, which in turn demonstrates the interpretability of the AI-produced knowledge itself.
In the first study, our model also discovered new rules that interested our colleagues in the music school. (a) Tritone resolution is crucial in tonal music, yet in Bach's chorales, tritones sometimes do not resolve in typical ways, but consistently transition to other dissonances like a minor seventh. (b) A new notion of “the interval of intervals” was consistently extracted in several rule traces. This “second derivative”, like acceleration in mechanics, might suggest a new microscopic chord structure heretofore unconsidered. (c) New symmetry patterns reveal new harmonic foundations. As a parallel to harmony traditionally built on figured bass (dominant in Bach's chorales, as confirmed by ILL), ILL reveals “figured soprano” as the next alternative in explaining Bach's music (Figure 5C). Although it is not the best view for explaining Bach according to ILL and is not included in any standard music theory class, it may be a valuable perspective for music that starts deviating from the classical style. This was confirmed by domain experts (Sokol, 2016), with more details at the end of Appendix G.1.
5 DISCUSSION: LIMITATIONS AND CHALLENGES
As a first step, we devise a new representation-learning model intended to be both theoretically sound and intrinsically interpretable. This paper shows typical setups and applications, but ILL is a general framework that admits new designs of its components, e.g., projection-and-lifting or priors. Notably, designing a lattice not only sets the rule-learning capacity but also the “vocabulary” for interpretation which, like the Sapir-Whorf hypothesis for human language, limits how a lattice explains signals. Likewise, priors have pros and cons depending on what we seek to explain and to whom (e.g., not all signals are best explained by symmetry, nor can everyone read symmetry equally well). One solution is to explore multiple lattices while balancing expressiveness and computation—a common practice in picking ML models too. Further, whether a signal is indeed governed by simple rules requires rethinking. Sometimes no rules exist; then ILL will indicate this, and a case-by-case study will be needed. Sometimes rules are insufficient: is music in fact governed by music theory? Theory is better viewed as necessary but not sufficient for good music: great composers need not be great theorists.
Beyond studies comparing against human-codified knowledge and human-subject experiments for interpretability, more systematic ILL benchmarking and assessment remain challenging and need long-term effort. Benchmarking is not as easy as in task-specific settings (Chollet, 2019), requiring better comparison schemes or a downstream task. Effective ILL assessments must focus on new discoveries and the ability to assist people. Instead of a Turing test for machine-generated music, one may (at a meta-level) consider tests between independent and machine-aided compositions, where both are done by humans. Further, ILL may be incorporated into other models, yielding an ILL version of deep learning or vice versa. For example, one could use ILL as a pre-processing or post-interpretation module in other models to achieve superior task performance as well as controllability and interpretability. Another possibility is to use ILL to analyze attention matrices (as signals) learned from BERT or GPT (Rogers et al., 2020). More future visions are in Appendix H.
A CONNECTION TO CONCEPT LATTICE
Per our definition, a concept refers to a component of an abstraction, or more precisely, is a cell in a partition or an equivalence class under an equivalence relation. This definition is consistent with a formal concept defined in formal concept analysis (FCA) (Ganter & Wille, 2012; Ganter et al., 2016; Priss, 2006) as a set of objects (extent) sharing a set of attributes (intent), which can also be treated as objects that are equivalent under the attributes. However, our definition of a concept generalizes that of a formal concept in two ways. First, in our case, a partition or an equivalence relation is not induced from domain-specific attributes through formal logic and formal ontology, but from universal priors drawn from the Core Knowledge (detailed in Section 3.1 in the main paper). Second, specifying a partition considers all of its concepts, whereas specifying a set of formal concepts only considers those with respect to a given formal context. As a result, partition lattices in our case generalize concept lattices in FCA, and are not generated, hence not constrained, by domain knowledge such as that encoded in formal ontologies.
Mathematically, let (PX, ⪯) be the partition lattice comprising all partitions of X and (2^X, ⊆) be the subset lattice comprising all subsets of X. Clearly, the power set 2^X is the same as {C ∈ P | P ∈ PX}. That is, the subset lattice is also the lattice comprising all concepts from all partitions of X, which can then be called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many cases it tends to miss many concepts if they are not known in the existing ontology. We give two examples below to further illustrate the connection between a partition lattice and a concept lattice.
First, consider biological taxonomy. Dogs and cats are two concepts in species, which is an abstraction containing other concepts such as eagles. Likewise, mammals and birds are two concepts in class, which is an abstraction containing other concepts such as reptiles and insects; further, animals and plants are two concepts in kingdom. In light of hierarchy, as abstractions, species ⪰ class ⪰ kingdom (in a partition lattice); as concepts, dogs ⊆ mammals ⊆ animals (in a concept lattice). Note that when forming a concept lattice, one may not need to include, say, all species. Yet when having species as an abstraction in a partition lattice, this abstraction must contain all species, both known species and unknowns, where the latter are usually of more interest for knowledge discovery.
Second, consider music theory. C major triads, C minor triads, and B diminished triads are concepts in an abstraction induced by music octave-shift and permutation invariance. Further, major triads, minor triads, and diminished triads are concepts in another abstraction induced by music octave-shift, permutation, and further transposition invariance. Clearly, for abstractions, the former abstraction is finer than the latter; for concepts, the set of C major triads is a subset (or a special case) of the set of major triads. However, chords that are not defined in traditional music theory but appear as new concepts in a known abstraction (e.g., the two above) may be more interesting, since they may suggest new composition possibilities while still obeying the same music abstraction, in this case the same music symmetry. New concepts from new abstractions may push the composition boundary even further, suggesting new types of chords discovered from e.g., new symmetry (but possibly within a known symmetry family). See the end of Appendix G.1 for more examples from new discoveries.
B MORE GENERALIZED FORMALISM FOR INFORMATION LATTICE
The mathematical setting in the main paper is for a non-negative signal on a finite domain. However, this is not a limitation, but purely for notational brevity and computational reasons. First, regarding non-negativity, in many real scenarios, the signal is bounded and its value is only relative. In these cases, one can simply add an offset to the signal to make it non-negative. More generally, we can
consider a signal to be any measurable function ξ : X → Rn. Then the notions of an abstraction, a concept, a rule, as well as the partial order can be generalized as in Table 1. Hence, the notion of an information lattice is still well-defined in the generalized setting. The essence of the two settings lies in how we formalize an abstraction, whether using a partition or a σ-algebra. However, the two are not very different from each other: any partition of X generates a σ-algebra on X , and any σ-algebra on a countable X is uniquely generated by a partition of X (Çınlar, 2011).
Further, the main paper uses the summation functional in defining a rule of a signal, or the projection operator. However, other options are possible, e.g., mean, max, min, or a specially designed functional. The lifting operator can then be redesigned accordingly. In particular, besides always favoring the most uniform signal, the design of the special lifting can have extra freedom in considering other criteria for picking a signal from the general lifting.
C MORE INSIGHTS ON THE SPECIAL LIFTING
Consider the special lifting ↑(R) for any rule set R = ↓ξ(P) of a given signal ξ. Computing ↑(R) is simple if R = {r} contains only a single rule. In this case, ↑(R)(x) = ↑(r)(x) := r(C)/|C| for any x ∈ C ∈ domain(r), which requires simply averaging within each cell. However, computing ↑(R) becomes much less trivial when |R| > 1. By definition, we need to solve the minimization problem:

↑(R) := argmin_{η ∈ ⇑(R)} ‖η‖₂.   (8)
Instead of directly throwing the above problem (8) into a generic optimization solver, there is a more efficient approach, which also reveals more insight into the special lifting. More specifically, one can check that any multi-rule lifting ↑(R) can be computed as a single-rule lifting ↑(r⋆), where the single rule r⋆ is defined on the join ∨P and is computed as follows:

r⋆ := argmin_{r ∈ ⇑(∨P)(R)} ‖r‖₂,w, with the weighted norm ‖r‖₂,w := sqrt( Σ_{C ∈ ∨P} r(C)²/|C| ).   (9)

So, instead of lifting R directly to the signal domain X, we lift R to the join ∨P first and then to X. Since |∨P| ≤ |X|, the minimization problem (9) is in a smaller dimension than the original problem (8), and thus can be solved more efficiently. In the minimization problem (9), by definition, ⇑(∨P)(R) := {r : ∨P → R | ↓r(P) = R}. Hence, every rule r ∈ ⇑(∨P)(R) can be treated as a single-rule summary of the rule set R, and r⋆ is one of them—the one that yields the most uniform signal. Realizing the special lifting R → ↑(R) as the two-step lifting R → r⋆ → ↑(r⋆) = ↑(R) reveals the following insight: given rules abstracting ξ at different levels (coarser or finer), the best one can hope to faithfully explain ξ is at the level of the join. Determining ξ at any level finer than the join would then require additional assumptions beyond the rule set itself, such as the preference for uniformity used here. This further explains the two sources of information loss (join and uniformity) discussed in the recovery process of a signal (cf. Section 3 in the main paper). Notably, determining a signal even at the level of the join may be ambiguous, since the general lifting ⇑(∨P)(R) to the join is not necessarily a singleton. This particularly implies that r⋆, as one of the single-rule summaries of R of ξ, is not necessarily a rule of ξ, i.e., there is no guarantee that r⋆ = ↓ξ(∨P). To make it so, we need more rules.
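A sketch of the join computation underlying this two-step view: the join ∨P is the coarsest common refinement, obtained by intersecting cells. Here partitions are encoded as cell-id maps, and join is a hypothetical helper name.

```python
from collections import defaultdict

# A sketch of the join of two partitions (their coarsest common refinement),
# with a partition encoded as a dict mapping each element to a cell id.
def join(P1, P2):
    cells = defaultdict(list)
    for x in P1:
        cells[(P1[x], P2[x])].append(x)   # intersect cells of P1 and P2
    return {x: cid for cid, cell in enumerate(cells.values()) for x in cell}

X = range(6)
odd_even = {x: x % 2 for x in X}      # {{0,2,4},{1,3,5}}
low_high = {x: x // 3 for x in X}     # {{0,1,2},{3,4,5}}
print(join(odd_even, low_high))       # 4 cells: {0,2},{1},{3,5},{4}
```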
D EXISTING WORK ON SUBLATTICE GENERATION
General methods for computing the sublattice LB of a full lattice L generated by a subset B ⊆ L fall into two basic families, depending on whether the full lattice needs to be computed. The first uses alternating join- and meet-completions, with worst-case complexity O(2^|B|); the second characterizes the elements of L that belong to the sublattice, with complexity O(min(|J(L)|, |M(L)|)²|L|), where J(L) and M(L) denote the number of join-irreducibles and meet-irreducibles, respectively (Bertet & Morvan, 1999). The latter requires computing the full lattice, which is intractable in our case of partition lattices, as |L| = |PX| grows faster than exponentially in |X| whereas |P⟨F,S⟩| is usually smaller than |X|. So, we use the first approach and compute alternating join- and meet-completions. The same principle of avoiding computing the full lattice has been applied to the special context of concept lattices (Kauer & Krupka, 2015), yet the technique there still requires the full formal context corresponding to the full concept lattice. Note that sublattice completion is, by definition, computing the smallest sublattice LB (in a full lattice L) containing the input subset B ⊆ L, where LB must inherit the meet and join operations from L. It generalizes, but is not the same as, Dedekind-MacNeille completion (Bertet & Morvan, 1999; MacNeille, 1937; Bertet et al., 1997).
E MORE DETAILS ON THE CONSTRUCTION PHASE
This section elaborates on the second half of Section 3.1 in the main paper, presenting more algorithmic details on poset construction and sublattice completion. The core data structures for posets are the so-called adjacency matrix and Hasse diagram, encoding the partial order ≺ and the cover relation ≺c, respectively (Garg, 2015). The former is best for querying ancestors and descendants of a partition within the lattice; the latter is best for querying parents and children of a partition. (A more advanced technique includes chain-decomposition, but the two here are sufficient for this paper.) More specifically,
P ′ is an ancestor of P ⇐⇒ P ≺ P ′
P′ is a parent of P ⟺ P ≺c P′ (i.e., P ≺ P′ but no P′′ satisfies P ≺ P′′ ≺ P′). We introduce a few algorithmic notations. Given a partition poset (P, ⪯), we use P.po_matrix and P.hasse_diagram to denote the adjacency matrix and Hasse diagram of P, respectively. For any partition P ∈ P, we use P.ancestors, P.descendants, P.parents, and P.children to denote the sets of ancestors, descendants, parents, and children of P, respectively. Notably, the two data structures are important not only for the construction phase but for the subsequent learning phase as well. The core subroutine in the construction phase is ADD_PARTITION, sketched as Algorithm 1. It is the key unit step in both poset construction and (join-)semilattice completion.
Poset construction. This corresponds to Step 3 in the flowchart in Section 3.1 of the main paper. Recall that poset construction refers to the process of sorting a multiset P〈F,S〉 of tagged partitions into a poset (P〈F,S〉, ⪯), where the partition tags are features and symmetries. Naively, if we write an inner subroutine COMPARE(P,P ′)—called an oracle in the related literature—to compare two partitions, sorting a multiset into a poset amounts to N(N − 1)/2 calls of this pairwise comparison, where N is the size of the input multiset. So, the common idea shared in almost all poset sorting algorithms is to reduce the number of oracle calls as much as possible. As mentioned in the main paper, considering the additional properties in our case, we leverage (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tag (valid for tagged partitions) to pre-determine or pre-filter relations. In other words, we want to infer from the context as many pairwise relations as possible, so that the number of actual pairwise comparisons can be minimized.
More specifically, we start from an empty poset, and call ADD PARTITION to incrementally add partitions from the input multiset to the poset. As the outer subroutine, ADD PARTITION leverages transitivity and partition size by maintaining three live data structures, namely size2partns, po matrix, and hasse diagram, so as to avoid calling COMPARE whenever possible. Consequently, COMPARE is called only at two places (underlined in Algorithm 1): one for = and one for ≺. When called as the inner subroutine, COMPARE(P,P ′) does not always perform an actual computation for pairwise comparison. Instead, it first checks if the tags are informative (e.g., compositions/supergroups imply coarser partitions) and only if not, makes an actual comparison. With the additional information from partition size, an actual comparison can be done in O(|X|) time
Algorithm 1: ADD PARTITION (Pτ ,P): adds a tagged partition Pτ to a partition poset (P, ⪯) Input: a tagged partition Pτ , where the tag τ can be a feature/symmetry or a join/meet formula;
a partition poset (P, ⪯), with the following members and hash tables:
· every P ∈ P is a unique partition (indexed by a unique identifier)
· P.partn2tags[P] := {τ | Pτ = P} denotes the set of all tags inducing P
· P.size2partns[k] := {P | |P| = k} denotes the set of all P ∈ P with size k
· P.po matrix encodes the partial order ≺, best for getting P.ancestors/descendants
· P.hasse diagram encodes the cover relation ≺c, best for getting P.parents/children
Step 1: determine if Pτ is new by COMPARE(P,Pτ ) (for =) for every P ∈ P.size2partns[|Pτ |]
if Pτ ∈ P.size2partns[|Pτ |]: update P.partn2tags[Pτ] by adding τ ; return else: create a new hash entry P.partn2tags[Pτ] = {τ}; proceed to Step 2
Step 2: add the new partition Pτ to P (2a) update P.size2partns[|Pτ |] by adding Pτ (2b) update P.po matrix and P.hasse diagram
– for every existing size k < |Pτ | sorted in a descending order: for every P ∈ P.size2partns[k]:
if P.parents ∩ Pτ.descendants ≠ ∅: update P.po matrix by adding P ≺ Pτ else: COMPARE(P,Pτ); update P.po matrix and P.hasse diagram if P ≺ Pτ
(here one can check: it is necessarily the case that P ≺c Pτ ) – do the above symmetrically for every existing size k > |Pτ | sorted in an ascending order – (note: every P ∈ P.size2partns[k] for k = |Pτ | is incomparable with Pτ ) – clean cover relation: remove any P∗ ≺c P∗∗ from P.hasse diagram if P∗ ≺c Pτ ≺c P∗∗
via a mapping process. More specifically, given two partitions P,P ′, without loss of generality, we assume |P| ≤ |P ′|. An actual comparison is made by tentatively creating a mapping ν : P ′ → P . One can check that such a ν exists if and only if P ⪯ P ′. Hence, if |P| = |P ′| (resp. |P| < |P ′|), one can determine = (resp. ≺) if ν is created successfully, or incomparability otherwise. The mapping complexity is linear in |X|, with linear coefficient 1 if mapping succeeds and with linear coefficient < 1 if mapping fails. In the worst case (e.g., if all partitions are incomparable), all N(N − 1)/2 pairwise comparisons are required. Our algorithm works best when partitions are richly related (i.e., the Hasse diagram is dense), which is indeed the case for our tagged partitions induced from systematically formed features and symmetries.
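The actual-comparison step just described can be written directly. A minimal sketch, assuming partitions are stored as dicts from elements to cell IDs (a representation of our choosing) and with the tag- and size-based shortcuts omitted:

```python
def compare(p, q):
    """Actual pairwise comparison in O(|X|) via the tentative mapping ν.
    Returns '=', '<' (p strictly coarser, i.e. p ≺ q), '>' or None
    (incomparable)."""
    np_, nq = len(set(p.values())), len(set(q.values()))
    if np_ > nq:                      # ensure |p| <= |q|
        flipped = compare(q, p)
        return {'<': '>', '>': '<'}.get(flipped, flipped)
    nu = {}                           # ν : cells of q -> cells of p
    for x in p:
        if nu.setdefault(q[x], p[x]) != p[x]:
            return None               # a q-cell straddles two p-cells
    return '=' if np_ == nq else '<'
```

The mapping exists exactly when every cell of the finer partition lies inside a single cell of the coarser one, which is the condition stated above.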
Semilattice completion. This corresponds to Step 4 in the flowchart in Section 3.1 of the main paper. Recall that join-semilattice completion refers to the process of completing a partition poset into a semilattice. We only detail join-semilattice completion, since meet-semilattice completion can be done symmetrically. Formally, we want to compute the join-semilattice of PX generated by the input poset (P〈F,S〉, ⪯). We denote the resulting join-semilattice by 〈P〈F,S〉〉∨. By definition,
〈P〈F,S〉〉∨ := {∨P | P ⊆ P〈F,S〉}. Naively, if computing 〈P〈F,S〉〉∨ literally from the above definition, one has to iterate over all subsets of P〈F,S〉 and compute their joins. This amounts to 2^N join computations, where N = |P〈F,S〉| is the size of the input poset, and moreover, many of the joins are not pairwise. Yet, similar to our earlier poset construction, we may reduce the computations of joins by an incremental method, which also embeds ADD PARTITION as a subroutine and utilizes partition sizes and tags, but now the tags are join formulae instead of features or symmetries.
More specifically, we start with an empty semilattice P, and add partitions in P〈F,S〉 to P one by one from smaller-sized to larger-sized (note: the size information is maintained in P〈F,S〉.size2partns). When a partition P ∈ P〈F,S〉 is to be added, we make a tag named by itself, i.e., let Pτ := P with τ := {P}, and then call ADD PARTITION(Pτ ,P). There are two possibilities here: Pτ already exists in P (call ends by Step 1) or Pτ is new (call ends by Step 2). In the former, we are done with Pτ .
In the latter, for every P ′ ∈ P\{Pτ}, compute the pairwise join J (P ′) := ∨{Pτ ,P ′} and its tags T (P ′) := {τ ∪ τ ′ | τ ′ ∈ P.partn2tags[P ′]}, and call ADD PARTITION on J (P ′) tagged by T (P ′). Like COMPARE, computing a join can be optimized by leveraging previously computed tags and the partial order in the input poset P〈F,S〉, so as to avoid an actual join computation whenever possible. When inferring from the context is not possible, one can perform an actual join computation ∨(P,P ′) in O(|X|) time. This is done by collecting the unique pairs of cell IDs (C(x), C ′(x)) for every x ∈ X , where C(x) and C ′(x) denote the cell IDs of x in P and P ′, respectively. In the worst case (e.g., if all partitions are incomparable and join-irreducible), the complexity is inevitably O(2^N ). However, as in poset construction, our algorithm works best when the partial order structure is rich.
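The O(|X|) pairwise join just described admits a direct implementation. A minimal sketch, again assuming partitions stored as dicts from elements to cell IDs (the fresh integer cell IDs are an arbitrary relabelling of ours):

```python
def pairwise_join(p, q):
    """O(|X|) join: group elements by their pair of cell IDs
    (C(x), C'(x)), exactly as described above."""
    pair2id, out = {}, {}
    for x in p:
        out[x] = pair2id.setdefault((p[x], q[x]), len(pair2id))
    return out
```

Grouping by the ID pair is the common-refinement construction, so the result always has at most |X| cells.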
Practical tips for sublattice completion. This corresponds to Step 5 in the flowchart in Section 3.1 of the main paper. Recall that constructing the sublattice of PX generated by P〈F,S〉 follows the alternating process: L0 := P〈F,S〉, L1 := 〈L0〉∨, L2 := 〈L1〉∧, L3 := 〈L2〉∨, and so forth, which terminates as soon as Lk−1 = Lk. We denote the end result by 〈P〈F,S〉〉∨∧···, which is the desired sublattice. However, we may want to stop early in the completion sequence, due to concerns about computation, interpretability, and expressiveness, as well as their tradeoffs. We suggest a practical tip on deciding where to stop. If the input poset P〈F,S〉 is small, run alternating joins and meets, or even complete it to the sublattice if affordable. If P〈F,S〉 is moderate, complete the joins only (as join is closely related to rule lifting, see Appendix C for more details). If P〈F,S〉 is large, just use it as is.
F MORE ANALYSES IN THE LEARNING PHASE
This section elaborates on the last paragraph of Section 3.2 in the main paper, presenting more analyses and interpretations of the rule traces elicited from the toy handwritten-digit examples. Yet, as mentioned in the main paper, computer vision is currently not among the typical use cases of ILL. Learning rules of handwritten digits may not be of much independent interest, except perhaps for calligraphy. So, the analyses and interpretations here are for illustration purposes only. We refer readers to the Broader Impact section in the main paper for possible future directions on how ILL may be used, together with other ML models, to solve computer vision tasks.
Recall that the main use case of ILL is to explain a signal ξ, answering what makes ξ an ξ. The same toy example illustrating an ILL process is replayed here in Figure 3. The signal ξ : {0, . . . , 27}² → [0, 1] is a grayscale image of a handwritten “7”. In this case, a rule of ξ, i.e., the projection of ξ to a partition of {0, . . . , 27}², can be viewed as gathering “ink” within each partition cell. Accordingly, the (special) lifting can be viewed as redistributing the gathered “ink” (evenly) in each cell. Hence, we term this view the ink model. For visual convenience, we depict a rule of a 2D signal by its lifting (i.e., another grayscale image), since with pixels in the same cell colored the same, we can use the lifting to sketch both the partition and the rule values. More precisely, when a lifting represents a rule, it must be viewed in terms of blocks or superpixels; whereas a real lifting (i.e., a signal or a real image) is viewed normally by the regular pixels. To better clarify, all rules in Figure 3 are displayed in red boxes, whereas all liftings are in green ones.
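The ink model is a few lines of numpy. A minimal sketch, where the 28×28 "digit" is random stand-in data and the strip partition imitates a feature tag such as div7 ∘ w[1]; function names are ours:

```python
import numpy as np

def project(signal, labels):
    """Rule of a 2D signal: gather the 'ink' within each partition cell.
    `signal` is a 2D float array, `labels` an equally shaped int array
    assigning each pixel to a cell; returns one ink value per cell."""
    return np.bincount(labels.ravel(), weights=signal.ravel())

def special_lift(rule, labels):
    """Special lifting: redistribute each cell's ink evenly over its
    pixels, yielding the block/superpixel image that depicts the rule."""
    return (rule / np.bincount(labels.ravel()))[labels]

# partition induced by div7 ∘ w[1]: horizontal strips of height 7
signal = np.random.rand(28, 28)          # random stand-in for a digit image
labels = np.tile((np.arange(28) // 7)[:, None], (1, 28))
sketch = special_lift(project(signal, labels), labels)  # constant per strip
```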
For a simple illustration, we draw a small number of features and symmetries to generate a poset (P•) of 21 partitions. The corresponding part of the information lattice (R•) is shown by its Hasse diagram in Figure 3. Further, on top of the Hasse diagram, we demarcate the frontiers of the sublevel sets (R≤ε) by six blue dashed curves. Note that in this tiny diagram, we have sketched a full range of sublevel sets, yet for large diagrams, sublevel sets are constructed for small ε-values only in a single-pass BFS. The right part of Figure 3 illustrates a complete ILL process in the alternating setting, with lift and project signified by the green up-arrows and red down-arrows, respectively. During the learning process, ILL tries to minimize the gap in the signal domain (upstairs) through iterative eliminations of the largest gap in the rule domain (downstairs). Eliminating a larger rule gap tends to imply a larger drop in the signal gap, but not necessarily in every iteration, since the special lifting may accidentally recover a better signal if the assumed uniformity is, by chance, present in the signal. The rule set R(k) formed per iteration is presented in the middle of the right part of Figure 3, which jointly shows the complete rule trace continuously progressing along the ε-path.
The rule set in the last iteration under any ε (marked by ⋆ in Figure 3) is the returned solution to the main relaxed Problem (4) in the main paper. This rule set is used to answer what makes ξ an ξ. For example, let rj denote the rule with ID j (here a rule ID is the same as the partition ID, the unique identifier used in Algorithm 1 during the construction phase). Then, among all rules whose entropies
are no larger than ε = 2, the third rule set in the trace R(3) = {r9, r1, r18} best explains what makes ξ an ξ. However, if more complex rules are allowed, say if all rule entropies are now capped by ε = 6, R(7) = {r13, r15, r19} is the best. Recall that we do not just eyeball the rules to get intuitive understandings. Every rule is the projection of the signal to a tagged partition, where the tag, generated in a prior-driven way, explicitly explains the underlying abstraction criteria. For example, r19 in Figure 3 comes from a symmetry tag representing a permutation invariance, which visually renders as a reflection invariance. Rules r8 and r9 come from two feature tags div7 ◦ w[1] and div7 ◦ w[2], respectively. These two feature tags represent the continuous and even collapsing in the first and the second coordinate, respectively, which visually render as horizontal and vertical strips, respectively. Both rules are later absorbed into r13 tagged by div7 ◦ w[1,2], since its rule domain is strictly finer. These rules (r8, r9, r13) apparently summarize the horizontal and vertical parts of the handwritten “7”. Further, the vertical part of the “7” is longer and slants more, so we see more vertically-patterned rules in the rule trace (r9, r11, r15). These rules are obtained from finer and finer abstractions along the horizontal direction, so as to capture more details on the vertical part of that “7”, such as its slope. Notably, among these vertically-patterned rules, r11 is induced from the symmetry representing a horizontal translation invariance, but it is quickly absorbed into r15, whose entropy is not much higher. This transient appearance of r11 implies that it plays a less important role in explaining this handwritten “7”. In fact, from more experiments, symmetries in general play a less important role in explaining many “7”s. This is, however, not the case in explaining many “8”s, where symmetries occur much more often. For example, consider a symmetry fused from translation and permutation invariances whose fundamental domain is homeomorphic to a Möbius strip. We hypothesize that this topological property might be related to the twisted nature of an “8”. For a visual comparison, we present the rule traces learned from a “7” and an “8” below in Figure 6, as well as the visual similarity between a Möbius strip and an “8”.
G STUDIES ON ILL-BASED MUSIC APPLICATION
We introduce two tests associated with a real-world application. The first is to assess rule-learning efficacy, where we compare machine-discovered rules to human-codified domain knowledge. The second is to assess human-interpretability, where we use human subject experiments on interpreting machine-generated rules.
The application here is our first step towards building an automatic music theorist and pedagogue, which is to be deployed as an assistant in music research and education. The two tests are our initial effort towards a systematic benchmarking and assessment platform. In the continuing effort of bridging human and machine intelligence, new standards are to be set and commonly agreed upon, so as to reasonably compare machine-codified discoveries with human-codified knowledge, as well as to use human-subject experiments for assessing interpretability. Fully developing assessment protocols is a challenging, long-term endeavor. Here, we use the two tests as starting points, and present results from each. Respectively, the first experiment tests music rule discovery, a basic requirement to be a theorist; the second tests interpretability, a basic requirement to be a pedagogue.
To conduct the two tests, we first build a user-friendly web application, which is used to better see and control the ILL learning process and results. Figure 7 illustrates the web interface. Users learn music rules—each as a histogram over a tagged partition (i.e., machine-codified music concepts)—and control their learning pace via self-explanatory knobs whose set values are automatically converted to internal parameters (e.g., ε, γ). One critical music-specific extension to the vanilla ILL presented in the main paper is adding a temporal component, since music is highly contextual. This amounts to considering more than one signal simultaneously: various (un)conditional chord distributions (multiple n-grams with varying n’s and varying conditionals) encoding information about individual chords as well as melodic and harmonic progressions. Accordingly, ILL produces both context-free and context-dependent rules, each of which is indexed by a partition and a conditional under that partition. For example, given the partition that is equivalent to classifying music chords into roman numerals and conditioned on the previous two chords being a I64 followed by a V, a rule specifies the probability distribution of the next roman numeral, and in this case reproduces the music rule on Cadential-64. Note that in a context-dependent rule, not only is the query chord abstracted, but also the conditional. This is in contrast with many classical n-gram models where no abstraction is present, which may therefore suffer from the problem of rare contexts: a conditional that occurs only a few times or even never in the training set. Here, however, the core idea of abstraction makes “small data” large and thus rare contexts common; a minimal sketch of such an abstracted n-gram rule follows this paragraph. More examples of context-free and context-dependent rules are illustrated as histograms in Figure 8. These rule histograms are generated by ILL from 370 of Bach’s four-part chorales (in the format of digital sheet music), and are used in the two experiments detailed below.
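The following is a minimal Python sketch of an abstracted (context-dependent) trigram rule; the chord-to-roman-numeral mapping is hypothetical and stands in for one tagged partition, so both the two-chord conditional and the query chord are abstracted before counting:

```python
from collections import Counter, defaultdict

# hypothetical abstraction: stand-in for a tagged partition of chord space
abstract = {'C': 'I', 'G': 'V', 'G7': 'V', 'F': 'IV', 'Am': 'vi'}

def trigram_rules(chords):
    """Context-dependent rules: distributions over the *abstracted* next
    chord, conditioned on the *abstracted* two-chord context."""
    counts = defaultdict(Counter)
    a = [abstract[c] for c in chords]
    for u, v, w in zip(a, a[1:], a[2:]):
        counts[(u, v)][w] += 1
    return {ctx: {w: n / sum(c.values()) for w, n in c.items()}
            for ctx, c in counts.items()}

# 'G' and 'G7' collapse to 'V', so contexts rare on the surface
# become common after abstraction:
rules = trigram_rules(['C', 'G7', 'C', 'F', 'G', 'C'])
```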
G.1 COMPARISON TO HUMAN-CODIFIED KNOWLEDGE
We compare rules learned from ILL to a standard undergraduate music theory curriculum. We want to use known laws from music theory as a benchmark to see how ILL-generated rules correspond to human-codified music knowledge. In particular, we want to see what is covered, what is new, and what is different. Yet, the ultimate goal is not just to use known music theory as a ground truth for the purpose of driving ILL to fully reconstruct what we know, but eventually to discover new rules,
to gain new understandings of existing rules, to suggest new composition possibilities, as well as to teach rules in a personalized way.
A priori we are aware of three major differences between human-codified music theory and ILL-generated rules. (a) In light of music raw representations (input), laws of music theory are derived from all aspects of sheet music whereas ILL-generated rules are currently derived from only MIDI pitches and their durations. This is because we currently study ILL as a general framework. When a music-specific application is developed later, one can include more music raw representations such as letter pitches, meter, measure, beaming, and articulations. (b) In light of rule format (output), laws of music theory and ILL-generated rules have two different styles, with the former being more descriptive and absolute (hard) and the latter more numerical and probabilistic (soft). For instance, a music rule that completely forbids consecutive fifths is reproduced by an ILL-generated rule that assigns a small non-zero probability to the event. Therefore, while it is possible to “translate”, with information loss, a (precise) ILL-generated rule to a (verbal) rule in known theory, it may not make sense to “translate” in the opposite direction. Also, it is not a good idea to hardcode known rules as categorical labels in a supervised setting, since music rules are inherently flexible and hardcoding may lead to a rule-based AI that generates somewhat “mechanical” music such as the Illiac Suite (Hiller & Isaacson, 1957). (c) In light of purposes, laws of music theory are more intended for general pedagogical purposes, rather than to reflect the style of a particular data set. For instance, while consecutive fifths are banned in homework and exams, they may be widely used in many pop songs. Even in our data set of Bach’s chorales (which are supposed to follow the known rules quite well), we see Bach himself wrote a handful of consecutive perfect intervals. On the contrary, ILL-generated rules are specific to the input data set. We may certainly find some data sets that follow the known rules quite well (e.g., Bach’s chorales), but also others that break many known rules and even set their own rules.
Keeping these three differences in mind and by further isolating them from the comparison results, we can reveal the remaining differences that are due to the rule-learning process itself. To come up with the benchmark, we compiled a comprehensive syllabus of laws from music theory taught in our music school’s theory review course, which runs through the full series of theory classes at a fast pace. This human-codified music knowledge is organized as a running list of 75 topics and subtopics indexed by lecture number. On the other hand, ILL-generated rules are indexed by partition (ID) and n-gram (n). The results are summarized below in Table 2, where the colored crosses in the last column indicate topics that are missed by ILL due to different reasons.
Among the total 75 topics in Table 2, we first ignore 7 of them (red crosses) which require music raw representations beyond MIDI pitches and durations (e.g., accents and enharmonic respellings of some augmented sixth chords). ILL covered 45 out of the remaining 68 topics, yielding a coverage of 66%. Among the 23 missed topics, 18 (blue crosses) are related to deeper-level temporal abstractions such as harmonic functions, key areas, and forms. These temporal abstractions may be better modeled as abstractions of transitions, which are implicitly captured but not explicitly recovered by our current multi-abstraction multi-n-gram language model, which models only transitions of abstractions. The other 5 missed topics (black crosses) are tricky and require ad-hoc encodings, which are not explicitly learnable (but may be implicitly captured to some extent) by our current ILL implementation. Accordingly, the composition of the 30 = 7 + 18 + 5 uncovered topics suggests three future directions to raise the rule-learning capacity of the current implementation: (a) include more music raw representations; (b) model abstractions of transitions; (c) either make music-specific adjustments when developing music apps or figure out a more expressive and more general framework in the long run. However, remember that the goal here is not to reproduce what we know but to augment it. So, we may certainly stop after enabling abstractions of transitions, which in the best case can yield an improved coverage of 84% (i.e., 93% of the topics recoverable from MIDI notes only), which is good enough.
Table 2: Human-codified music theory topics (by lecture) versus ILL-generated rules (indexed by partition IDs and n-gram); ✓ = covered, ✗ = missed (the cross colors distinguishing miss reasons are not recoverable in this extraction).

Lecture | Music Theory | Partition IDs | n-gram | Covered
1 | music accents | — | — | ✗
2 | pitch | 1-4 | 1 | ✓
2 | pitch class | 16-19 | 1 | ✓
2 | interval | 31-36 | 1 | ✓
2 | interval class | 97-102 | 1 | ✓
3 | stepwise melodic motion (counterpoint) | 1-4 | 2 | ✓
3 | consonant harmonic intervals (counterpoint) | 97-102 | 1 | ✓
3 | beginning scale degree (counterpoint) | 16-19 | 2 | ✓
3 | ending scale degree (counterpoint) | 16-19 | 2 | ✓
3 | beginning interval class (counterpoint) | 97-102 | 2 | ✓
3 | ending interval class (counterpoint) | 97-102 | 2 | ✓
3 | parallel perfect intervals (counterpoint) | 97-102 | 2 | ✓
3 | directed perfect intervals (counterpoint) | — | — | ✗
3 | law of recovery (counterpoint) | 1-4 | ≥3 | ✓
3 | contrapuntal cadence (counterpoint) | 1-4, 97-102 | 2, 3 | ✓
3 | melodic minor ascending line (counterpoint) | — | — | ✗
4 | tri… (remaining rows of Table 2 truncated in the source)

1. What is the main contribution of the paper regarding data analysis?
2. What are the strengths and weaknesses of the proposed approach in terms of its generality and interpretability?
3. How does the reviewer assess the significance of the paper's content and its impact on solving the fundamental challenge in summarizing data?
4. Do you have any questions or concerns about the paper's terminology and notation, particularly regarding signals and inputs?
5. How does the reviewer evaluate the effectiveness of the paper's organization and structure, including the use of appendices? | Review | Review
The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions on the data which admit a compact definition. Compact definitions are those that are formed by composition of a small number of predefined (prior) set of mathematical operations. Projection and lifting operations are defined to relate descriptions of partition cells to one another through rules. The quality of a description is measured by the divergence between the data and the (special) lifting of the rule set, under the constraint that rules satisfy an upper bound on their entropy.
The approach is general, but due to the intractable size of the information lattice (the set of all partitions), simplifications are necessary to produce tractable algorithms. Thus, the authors rely on predefined sets of mathematical operations. This set serves as the de facto language of the summarizations that result. The trouble is that, while the authors describe their method as interpretable, this reviewer finds it very difficult to interpret the summarizations even on the toy problems presented. Moreover, one might counter this difficulty by requiring the user to specify the terms in which they would like to describe the data. Even if a user were capable of doing this, many concepts humans might use are prohibitively difficult to define mathematically, both as functions and as compositions.
General human-level summarization of data is a very important task in ML/AI. In the opinion of this reviewer, the community has not adequately solved this problem. The submitted work attempts to move the line forward, but faces a fundamental challenge. We may summarize a set of data by appealing to various groupings of said data (i.e. those that represent fundamental concepts), but we still face the problem of summarizing those groupings. We have only kicked the can down the road, so to speak.
The paper is very dense in terminology, which is sometimes conflicting. The authors clearly state that a signal is a function from data to the reals, but then use the same term to describe images. The authors appear to use both 'X' and 'signal' to describe input data. The very heavy use of appendices appears to be a work-around to stuff a great deal of content into the 8-10 page limitation. It makes the paper feel disconnected. |
ICLR | Title
Extrapolation and learning equations
Abstract
In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified.
INTRODUCTION
The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, i. e. in a way to not overfit the data, the regression problem is well understood and can – at least conceptually – be considered solved. However, when working with data from real-world devices, e. g. controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, e. g. when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call extrapolation generalization, is the topic of the present paper.
We are particularly interested in regression tasks for systems that can be described by real-valued analytic expressions, e. g. mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting.
The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. We present our results in the Section Experimental evaluation and close with conclusions.
REGRESSION AND EXTRAPOLATION
We consider a multivariate regression problem with a training set {(x1, y1), . . . , (xN , yN )} with x ∈ R^n, y ∈ R^m. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems, we assume the data originates from an unknown analytical function (or system of functions), φ : R^n → R^m, with additive zero-mean noise, ξ, i. e. y = φ(x) + ξ and Eξ = 0. The function φ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function ψ : R^n → R^m that approximates the true functional relation as well as possible in the squared loss sense, i. e. achieves minimal expected error E‖ψ(x) − φ(x)‖². In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on
training or test data D,
E(D) = (1/N) ∑_{i=1}^{N} ‖ψ(x_i) − y_i‖² .  (1)
If training and test data are sampled from the same distribution then we speak about an interpolation problem. In the extrapolation setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, e. g. for higher velocities. To succeed in this task, it is essential to identify the underlying functional relationship instead of just minimizing the empirical error, as detailed below. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection.
LEARNING A NETWORK FOR FUNCTION EXTRAPOLATION
The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an L-layer network, there are L− 1 hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure (k′ inputs, k outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match.
The linear mapping at level l maps the k′-dimensional input y^(l−1) to the d-dimensional intermediate representation z^(l) given by

z^(l) = W^(l) y^(l−1) + b^(l),  (2)
where y^(l−1) is the output of the previous layer, with the convention y^(0) = x. The weight matrix W^(l) ∈ R^{d×k′} and the bias vector b^(l) ∈ R^d are free parameters that are learned during training. The non-linear transformation contains u unary units, f_i : R → R, for i = 1, . . . , u, and v binary units, g_j : R × R → R, for j = 1, . . . , v. Their outputs are concatenated to form the layer output
y^(l) := ( f_1(z^(l)_1), f_2(z^(l)_2), . . . , f_u(z^(l)_u), g_1(z^(l)_{u+1}, z^(l)_{u+2}), . . . , g_v(z^(l)_{u+2v−1}, z^(l)_{u+2v}) ) .  (3)
In total, the nonlinear stage has k = u + v outputs and d = u + 2v inputs. The unary units f_1, . . . , f_u receive the respective components z_1, . . . , z_u as inputs, and each unit may be one of the following base functions, as specified in a fixed type parameter I_i ∈ {0, 1, 2, 3}:

f_i(z_i) := z_i if I_i = 0,  sin(z_i) if I_i = 1,  cos(z_i) if I_i = 2,  sigm(z_i) if I_i = 3,  for i = 1, . . . , u,  (4)

where sigm(z) = 1/(1 + e^{−z}) is the standard sigmoid function. The binary units g_1, . . . , g_v receive the remaining components z_{u+1}, . . . , z_{u+2v} as input in pairs of two. They are multiplication units that compute the product of their two input values:

g_j(z_{u+2j−1}, z_{u+2j}) := z_{u+2j−1} · z_{u+2j},  for j = 1, . . . , v.  (5)
Finally, the L-th and last layer computes the regression values by a linear read-out
y^(L) := W^(L) y^(L−1) + b^(L).  (6)
The architecture is depicted in Fig. 1. We call the new architecture Equation Learner (EQL) and denote the function it defines by ψ.
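To make the layer definition concrete, here is a minimal numpy sketch of Eqs. (2)-(6) for a single hidden layer followed by the linear read-out; the paper's implementation is in theano, so this numpy version, including all names, is only illustrative:

```python
import numpy as np

def eql_layer(y_prev, W, b, unit_types, v):
    """One EQL hidden layer (Eqs. 2-5): a linear map to d = u + 2v values,
    u unary units chosen by `unit_types` (0: identity, 1: sin, 2: cos,
    3: sigmoid), and v pairwise multiplication units."""
    z = W @ y_prev + b                                   # Eq. (2)
    u = len(unit_types)
    funcs = [lambda t: t, np.sin, np.cos,
             lambda t: 1.0 / (1.0 + np.exp(-t))]
    unary = [funcs[I](z[i]) for i, I in enumerate(unit_types)]   # Eq. (4)
    binary = [z[u + 2 * j] * z[u + 2 * j + 1] for j in range(v)]  # Eq. (5)
    return np.array(unary + binary)                      # Eq. (3): k = u + v

# toy instantiation: 2 inputs, u = 2 unary units (id, sin), v = 1 product
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)            # d = u + 2v = 4
h = eql_layer(np.array([0.3, -1.2]), W1, b1, unit_types=[0, 1], v=1)
y = rng.normal(size=(1, 3)) @ h                          # Eq. (6): read-out
```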
DISCUSSION OF THE ARCHITECTURE
The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of sine and cosine as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space.
Sigmoid nonlinearities are the canonical choice of activation function for artificial neural networks (ANN) and have proved to be successful. In fact, we include sigmoids in our architecture, making it a super class of ANNs. However, they were typically disabled by the training procedure, corresponding to their absence in the considered physical equations. We do not include other, predominantly local nonlinearities, in particular radial basis functions Broomhead & Lowe (1988), since one cannot expect them to extrapolate at all. Further nonlinearities, such as (square) roots and logarithms, could in principle be useful for learning physical equations, but they pose problems because their domains of definition are restricted to positive inputs. We leave the task of incorporating them in a principled way to future work.
The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably the second most common basic operation after addition (which the linear layers can perform). Multiplication was introduced into neural networks long ago as product units Durbin & Rumelhart (1989) and Pi-Sigma units Shin & Ghosh (1991). Product units have a large fan-in and compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high-order polynomial; these are powerful function approximators, but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are the special case of 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows us to control the maximal degree of the learned polynomial by the depth of the network.
Finally, each layer of the network contains unary units that act as identity maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth.
NETWORK TRAINING
The EQL is fully differentiable in its free parameters θ = {W^(1), . . . , W^(L), b^(1), . . . , b^(L)}, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective Tibshirani (1996),
L(D) = (1/N) ∑_{i=1}^{|D|} ‖ψ(x_i) − y_i‖² + λ ∑_{l=1}^{L} |W^(l)|_1 ,  (7)
that is, a linear combination of L2 loss and L1 regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam Kingma & Ba (2015) for calculating the updates:
θ_{t+1} = θ_t + Adam( ∂L(D^(t))/∂θ , α ) ,  (8)
where D^(t) denotes the current mini-batch and α is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use α = 0.001 and a mini-batch size of 20.
The role of the L1 regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay, whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values.
Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure (t < t_1) we use no regularization (λ = 0), such that parameters can vary freely and reach reasonable starting points. Afterwards, we switch on the regularization by setting λ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training (t > t_2) we disable L1 regularization (λ = 0) but enforce the same L0 norm of the weights. This is achieved by keeping all weights w ∈ W^(1...L) that are close to 0 at 0, i. e. if |w| < 0.001 then w = 0 during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints t_1 and t_2 is not critical. In practice, we use t_1 = T/4 and t_2 = (19/20) T, where T is the total number of update steps. T was selected large enough to ensure convergence. Note that convergence to a sparse structure is important here, so early stopping would be disadvantageous.
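A minimal numpy sketch of this schedule, with plain SGD standing in for Adam (a simplification of ours) and a persistent boolean mask implementing the stay-at-zero clamp:

```python
import numpy as np

def l1_strength(t, T, lam):
    """Hybrid schedule: lambda = 0 for t < t1 = T/4, lambda = lam for
    t1 <= t < t2 = (19/20)T, and lambda = 0 again in the final phase."""
    return lam if T // 4 <= t < (19 * T) // 20 else 0.0

def sgd_step(W, grad, mask, t, T, alpha=0.001, lam=1e-4):
    """One update step illustrating the schedule; plain SGD stands in
    for Adam here.  After t2, weights that reach |w| < 0.001 are added
    to a persistent mask and clamped to zero, fixing the L0 norm."""
    W = W - alpha * (grad + l1_strength(t, T, lam) * np.sign(W))
    if t >= (19 * T) // 20:
        mask |= np.abs(W) < 0.001
        W[mask] = 0.0
    return W, mask

# inside a training loop: W, mask = sgd_step(W, grad, mask, t, T)
```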
MODEL SELECTION FOR EXTRAPOLATION
EQL networks have a number of hyper-parameters, e. g. the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate, the network has to find the “right” formula. But how can we tell? Using Occam’s razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between cos(x) and its truncated power series approximation 1 − x²/2 + x⁴/24, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, i. e. it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances w. r. t. validation error and sparsity and selecting the one with the smallest L2 norm (in rank-space), see Eq. (15).
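The dual-objective selection can be sketched in a few lines; `val_errors` and `sparsities` hold, per trained instance, the validation error and the number of active hidden units (the names are ours):

```python
import numpy as np

def select_instance(val_errors, sparsities):
    """Rank all trained instances w.r.t. validation error and w.r.t.
    the number of active hidden units, then return the index of the
    instance with the smallest L2 norm in rank-space (cf. Eq. 15)."""
    rv = np.argsort(np.argsort(val_errors))    # validation-error ranks
    rs = np.argsort(np.argsort(sparsities))    # complexity ranks
    return int(np.argmin(rv.astype(float) ** 2 + rs ** 2))

best = select_instance([0.020, 0.011, 0.010], [3, 9, 12])  # -> 1
```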
Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations.
RELATED WORK
In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, e. g. a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR) Williams & Rasmussen (2006) or Support Vector Regression (SVR) Smola & Schölkopf (2004), or a multi-layer network of suitable expressive power Specht (1991). The goal is to find a prediction function that leads to a small expected error on future data, not
necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a “biologically plausible” model is often preferable over finding one that makes the highest prediction accuracy. As a consequence, model classes are often highly constrained, e. g. allowing only for sparse linear models.
The task of learning a true, nonlinear, functional dependence from observing a physical system, has received little attention in the machine learning literature so far, but forms the basis of the field of system identification. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series) but learning analytic formulas is not common.
Causal learning is an area of recent research that aims at identifying a causal relation between multiple observables, which are typically the result of a physical process. Classically, this task reduces to finding a minimal graphical model based only on tests of conditional independence Pearl (2000). Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. Recent extensions of causal learning can take a functional view, but typically constrain the noise distributions rather than restricting the regression functions to physically plausible ones Peters et al. (2014). The topic of learning a regression function with emphasis on extrapolation performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, i. e. predicting the next value(s) Wiener (1949). By our nomenclature, this is typically rather an interpolation task, since the prediction is based on the behaviour of the series at earlier time steps but with a similar value distribution Müller et al. (1997); Györfi et al. (2013). Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the domain adaptation setting. In particular, since we assume a common labeling function, our setting would fall under the covariate shift setting Quionero-Candela et al. (2009). Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time Ben-David et al. (2010). In our setting this is not possible to obtain.
On the technical level, EQL networks are an instance of general feed-forward networks for function approximation Bishop (1995). In contrast to recent trends towards deep learning Bengio (2009); Bengio et al. (2013), our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, EQL networks resemble sum-product networks (SPNs) Poon & Domingos (2012) and Pi-Sigma networks (PSNs) Shin & Ghosh (1991), in the sense that both are based on directed acyclic graphs with computational units that allows for summation and multiplication. Otherwise, SPNs are different as they act as efficient alternative to probabilistic graphical models for representing probability distributions, whereas EQL networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in EQL multiplication is optional.
Finding equations for observations is also known as symbolic regression where a search is performed in a certain function space, typically done with evolutionary computation. With these techniques it is possible to discover physical laws such as invariants and conserved quantities Schmidt & Lipson (2009). Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling it as a gradient based optimization problem. Related to symbolic regression is finding mathematical identities for instance to find computationally more efficient expressions. In Zaremba et al. (2014) this was done using machine learning to overcome the potentially exponential search space.
EXPERIMENTAL EVALUATION
We demonstrate the ability of EQL to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and
evaluation procedure in python based on the theano framework Theano Development Team (2016). We will make the code for training and evaluation public after acceptance of the manuscript.
Pendulum. We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is X = R × R where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature, these are usually denoted as (θ, ω), but for our purposes, we call them (x1, x2) in order to keep the notation consistent between experiments. The pendulum’s dynamic behavior is governed by the following two ordinary differential equations:
ẋ1 = x2  and  ẋ2 = −g sin(x1) ,  (9)

where g = 9.81 is the gravitation constant.
We divide each equation by g in order to balance the output scales and form a regression problem with two output values, y1 = (1/g) x2 and y2 = − sin(x1). As training data, we sample 1000 points uniformly in the hypercube [−h, h] × [−h, h] for h = 2. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation σ = 0.01. We also define three test sets, each with 1000 points. The interpolation test set is sampled from the same data distribution as the training set. The extrapolation (near) test set contains data sampled uniformly from the data domain [−(3/2)h, (3/2)h] × [−(3/2)h, (3/2)h] \ [−h, h] × [−h, h], which is relatively near the training region, and the extrapolation (far) test set extends the region further outside: [−2h, 2h] × [−2h, 2h] \ [−h, h] × [−h, h]. We train a 2-layer EQL and perform model selection among the hyper-parameters: the regularization strength λ ∈ 10^{−7,−6.3,−6,−5.3,−5,−4.3,−4,−3.3,−3} and the number of nodes (1/4)u = v ∈ {1, 3, 5}. All weights are randomly initialized from a normal distribution with σ = √(1/(k′ + d)). The unit selection I is set such that all unit types occur equally often. To ensure convergence we chose T = 10000 epochs. We compare our algorithm to a standard multilayer perceptron (MLP) with tanh activation functions and possible hyperparameters: λ as for EQL, number of layers L ∈ {2, 3}, and number of neurons k ∈ {5, 10, 20}. A second baseline is given by epsilon support vector regression (SVR) Basak et al. (2007) with two hyperparameters C ∈ 10^{−3,−2,−1,0,1,2,3,3.5} and ε ∈ 10^{−3,−2,−1,0}, using a radial basis function kernel with width γ ∈ {0.05, 0.1, 0.2, 0.5, 1.0}.
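For concreteness, the following is a minimal numpy sketch of the data generation just described; the function names and the rejection-sampling helper are ours, and only the sampling ranges, noise level, and targets follow the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
h, sigma, n = 2.0, 0.01, 1000

def pendulum_targets(x):
    """Targets from Eq. (9) after dividing by g: y1 = x2/g, y2 = -sin(x1)."""
    return np.stack([x[:, 1] / 9.81, -np.sin(x[:, 0])], axis=1)

x_train = rng.uniform(-h, h, size=(n, 2))
y_train = pendulum_targets(x_train) + rng.normal(0.0, sigma, size=(n, 2))

def sample_ring(lo, hi, m):
    """Uniform samples from [-hi, hi]^2 minus [-lo, lo]^2 via rejection."""
    out = []
    while len(out) < m:
        p = rng.uniform(-hi, hi, size=2)
        if np.max(np.abs(p)) > lo:
            out.append(p)
    return np.array(out)

x_near = sample_ring(h, 1.5 * h, 1000)   # extrapolation (near)
x_far = sample_ring(h, 2.0 * h, 1000)    # extrapolation (far)
```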
Numeric results are reported in Tab. 1. As expected, all models are able to interpolate well, with a test error on the order of the noise level (σ = 0.01). For extrapolation, however, the performance differs between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. EQL, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure 2: while the MLP and SVR simply learn a function that interpolates the training values, EQL finds the correct functional expression and therefore predicts the correct values for any input data.
Double pendulum kinematics. The second system we consider is a real double pendulum, for which the forward kinematics should be learned. For this we use recorded trajectories of a real double pendulum Schmidt & Lipson (2009). The task here is to learn the positions of the tips of the double pendulum segments from the given joint angles (x1, x2). These positions were not measured, so we supply them by the following formula: y1 = cos(x1), y2 = cos(x1) + cos(x1 + x2), y3 = sin(x1), y4 = sin(x1) + sin(x1 + x2), where (y1, y3) and (y2, y4) correspond to x-y-coordinates of the first and second end-point respectively. The dataset contains two short trajectories. The first
covers only part of the domain (input as well as output) and consists of 819 samples, of which 10% were used as validation set (randomly sampled), see Fig. 3(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments such that a much larger domain is covered. Nevertheless the angle values are confined to [−π, π]. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in Fig. 3(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see Fig. 3(c). The performance of the MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean square error in Fig. 3(d).
Model selection is performed to determine λ as above, u = v ∈ {3, 5}, (MLP: k ∈ {5, 10, 20}) and layer number L ∈ {2, 3}.
Robotic arms. A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training, the arm is controlled by sinusoidal joint target angles with amplitude in [−π/2, π/2], each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. For testing extrapolation performance, the amplitude [−π, π] was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (kin-3-end, kin-4-end) and the coordinates of all segment positions (kin-5-all). The numerical results, see Tab. 2, show that our method is able to extrapolate in these cases. Model selection as above with u = v ∈ {10, 20}, (MLP: k ∈ {10, 50}) and layer number L ∈ {2, 3, 4}. To illustrate the dependence on the amount of
noise and the number of available training points, we provide a quantification in Appendix A2. In short, increasing noise can be compensated by an increasing amount of data to keep the performance; a sketch of the ground-truth kinematics follows below.
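A minimal numpy sketch of the assumed ground-truth forward kinematics, under the standard planar-arm convention that each joint angle is measured relative to the previous segment (our assumption; the paper does not spell out the convention):

```python
import numpy as np

def arm_positions(angles, seg_len=0.5):
    """Planar forward kinematics: cumulative joint angles give segment
    directions; joint positions are partial sums of the segment vectors.
    Returns an (n_segments, 2) array; the last row is the end-effector."""
    phi = np.cumsum(angles)
    steps = seg_len * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    return np.cumsum(steps, axis=0)

end_effector = arm_positions(np.array([0.3, -0.5, 0.9]))[-1]   # kin-3-end
```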
Learning complex formulas. In order to find out whether EQL can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output:
y = 1/3 (sin(πx1) + sin(2πx2 + π/8) + x2 − x3x4)    F-1  (10)
y = 1/3 (sin(πx1) + x2 cos(2πx1 + π/4) + x3 − x4²)    F-2  (11)
y = 1/3 ((1 + x2) sin(πx1) + x2x3x4)    F-3  (12)
The first equation requires only one hidden layer to be represented. The second and third equations require two hidden layers. In particular, F-2 contains a product of x2 and cos, and F-3 contains a product of three terms, which we use to test whether our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with h = 1 as input data range. We use 10000 points for the training and validation sets (90%–10% split) and 5000 points for each of the test sets. Model selection for EQL is performed as above using the number of layers L ∈ {2, 3, 4}. The number of units is set to (1/4)u = v = 10. For the MLP, we select L and λ from the same sets as above, as well as k ∈ {10, 30}.
Table 3 shows the numerical results. Again, all methods are able to interpolate, but only EQL achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it should be selected if found. Fig. 4 illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement x2 cos(2πx1 + π/4)? Apparently it uses 1.21 (cos(−2πx1 + π + π/4 + 0.41x2) + sin(2πx1 + π/4 + 0.41x2)), which is a good approximation for x2 ∈ [−2, 2]. The sparsity of this solution is 5 whereas the true solution needs at least 6, which explains its selection. For F-3 the suboptimal local minimum uses a strange way of approximating (1 + x2) sin(x1), namely (x1 + x1x2) cos(βx1), which deviates quickly outside the training domain; the true solution would be sparser but was not found. Only if we remove cosine from the base functions do we always obtain the correct formula, see Fig. 4(c).
X-Ray transition energies. As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them, and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The data is taken from Deslattes et al. (2003), where we consider one specific transition, called the Kα2 line, because it was measured for all elements. The true relationship between atomic number Z and transition energies is complicated, as it involves many-body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is Kα2 ∝ Z² according to Moseley’s law. Further correction terms for elements with larger Z are potentially of higher order. We have data for elements with 10 ≤ Z ≤ 100, which is split into training/validation sets in the range [10, 91] (70/10 data points) and an extrapolation test set in the interval [92, 100] (14 data points because of isotopes). Since we have so little data we evaluate the performance for 10 independent training/validation
splits. The data is scaled to lie in [0, 1], i. e. x = Z/100 and y = Kα2/100000. Model selection is here based on validation error only; selecting for both sparsity and validation error yields only the Z² relationship. Mini-batch size is 2 here and T = 50000 was used. Figure 5 presents the data, the predictions, the learned formulas and the numerical results. EQL and SVR achieve similar performance and MLP is significantly worse. However, EQL also yields interpretable formulas, see Fig. 5(e), that can be used to gain insights into the potential relationship.
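To make the expected relationship concrete, the sketch below fits a quadratic in the scaled variables. Note that it generates synthetic energies from Moseley's approximate law E ≈ 10.2 eV · (Z − 1)² rather than using the Deslattes et al. measurements, so it only illustrates the scaling convention and the dominance of the quadratic term:

```python
import numpy as np

Z = np.arange(10, 92)              # training range of atomic numbers
E = 10.2 * (Z - 1) ** 2            # synthetic K-alpha energies in eV (Moseley's law, not real data)
x, y = Z / 100.0, E / 1.0e5        # the scaling used in the text

a, b, c = np.polyfit(x, y, deg=2)  # fit y ≈ a*x**2 + b*x + c
print(a, b, c)                     # the quadratic coefficient dominates (a ≈ 1.02)
```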
POOR EXTRAPOLATION OUT OF MODEL CLASS — CART-PENDULUM SYSTEM
Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set.
Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring damper system, see Fig. 6(a). The system is parametrized by 4 unknowns: the position of the cart, the angle of the pendulum, the velocity of the cart, and the angular velocity of the pendulum (this ordering matches the equations below). We combine these into a four-dimensional vector x = (x1, . . . , x4).
We set up a regression problem with four outputs from the corresponding system of ordinary differential equations where y1 = ẋ1 = x3, y2 = ẋ2 = x4 and
y3 = ( −x1 − 0.01x3 + x4² sin(x2) + 0.1x4 cos(x2) + 9.81 sin(x2) cos(x2) ) / ( sin²(x2) + 1 ) ,  (13)
y4 = ( −0.2x4 − 19.62 sin(x2) + x1 cos(x2) + 0.01x3 cos(x2) − x4² sin(x2) cos(x2) ) / ( sin²(x2) + 1 ) .
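For reference, the ground-truth dynamics in Eq. (13) can be transcribed directly into code (a plain transcription of the equations above, not the authors' implementation):

```python
import numpy as np

def cart_pendulum_rhs(x):
    """Ground-truth outputs y1..y4 for state x = (position, angle, velocity, angular velocity)."""
    x1, x2, x3, x4 = x
    s, c = np.sin(x2), np.cos(x2)
    denom = s ** 2 + 1.0
    y1 = x3
    y2 = x4
    y3 = (-x1 - 0.01 * x3 + x4 ** 2 * s + 0.1 * x4 * c + 9.81 * s * c) / denom
    y4 = (-0.2 * x4 - 19.62 * s + x1 * c + 0.01 * x3 * c - x4 ** 2 * s * c) / denom
    return np.array([y1, y2, y3, y4])
```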
The formulas contain divisions, which are not included in our architecture due to their singularities. To incorporate them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we cannot expect great extrapolation performance, and this is confirmed by the experiments. In Fig. 6(b,c) the extrapolation performance is illustrated by slicing through the input space. The near-extrapolation performance is still acceptable for both EQL and MLP, but as soon as we move further from the training region, even the best instances differ considerably from the true values, see also the numeric results in Tab. 4. SVR performs poorly even in the near-extrapolation range. Inspecting the learned expressions, we find that the sigmoid functions are rarely used.
CONCLUSIONS
We presented a new network architecture called EQL that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing L1 regularization and then fixing the L0 norm we achieve sparse representations with unbiased estimation of the factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multi-objective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies.
The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm does not reliably find the right equation but instead finds only an approximation, in which case extrapolation may be poor.
If the origin of the data is not in the hypothesis class, i. e. the underlying expression cannot be represented by the network, then good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions, which we will address in future work alongside the application to even larger examples. We expect good scaling capabilities to larger systems due to the gradient-based optimization. Apart from extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense.
ACKNOWLEDGMENTS
This work was in parts funded by the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 308036: "Life-long learning of visual scene understanding" (L3ViSU). GM received funding from the People Programme (Marie Curie Actions) in FP7/2007-2013 under REA grant agreement no. 291734.
APPENDIX
A1: MODEL SELECTION DETAILS
QUANTIFYING SPARSITY
We actually want a measure of the complexity of the formula; however, since it is not clear what the right choice of measure is, we use sparsity instead, counting the number of active/used hidden units, denoted by s. For a given network φ we get
s(φ) = ∑_{l=1}^{L} ∑_{i=1}^{k} Θ( |W^(l)_{i,·}| · |W^(l+1)_{·,i}| − 0.01 ) ,  (14)
where Θ is the Heaviside step function and 0.01 is an arbitrary threshold. For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula).
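A sketch of this count in code (two assumptions on our side: |·| in Eq. (14) is read as an L1 norm over the incoming rows and outgoing columns, and the extra bookkeeping for multiplication units mentioned above is omitted):

```python
import numpy as np

def sparsity(weights, thresh=0.01):
    """Count active hidden units in the spirit of Eq. (14).
    `weights` = [W1, ..., WL], one matrix per layer, where W(l) has
    shape (units of layer l, units of layer l-1)."""
    s = 0
    for W_in, W_out in zip(weights[:-1], weights[1:]):
        in_norm = np.abs(W_in).sum(axis=1)    # |W(l)_{i,.}|: incoming weights of unit i
        out_norm = np.abs(W_out).sum(axis=0)  # |W(l+1)_{.,i}|: outgoing weights of unit i
        s += int(np.sum(in_norm * out_norm > thresh))
    return s
```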
SELECTION CRITERIA
As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we proposed to choose them based on their ranking. Let rv(φ) and rs(φ) be the ranks of the network φ w. r. t. the validation error and sparsity s(φ), respectively; then the network with minimal squared rank norm is selected:
arg min_φ [ rv(φ)² + rs(φ)² ]  (15)
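A minimal sketch of this criterion (how ties among equal values are broken is unspecified in the text; plain argsort order is used here):

```python
import numpy as np

def ranks(values):
    """Rank of each entry (0 = smallest)."""
    order = np.argsort(values)
    r = np.empty(len(values), dtype=int)
    r[order] = np.arange(len(values))
    return r

def select_model(val_errors, sparsities):
    """Index of the candidate minimizing rv^2 + rs^2, Eq. (15)."""
    rv = ranks(np.asarray(val_errors))
    rs = ranks(np.asarray(sparsities))
    return int(np.argmin(rv ** 2 + rs ** 2))

# e.g. select_model([0.1, 0.05, 0.2], [7, 9, 4]) -> 0 (rank scores 2, 4, 4)
```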
In Fig. 7 the extrapolation performance of all considered networks for the kin2D-4-end dataset is visualized as a function of validation error and sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error.
A2: DEPENDENCE ON NOISE AND NUMBER OF DATA POINTS
In order to understand how the method depends on the amount of noise and the number of data points, we scan through the two parameters and present the empirical results in Fig. 8. In general the method is robust to noise and, as expected, more noise can be compensated by more data.

Review questions:
1. What are the strengths and weaknesses of the paper's approach to extrapolate a given dataset and predict formulae with naturally occurring functions?
2. How does the proposed method differ from existing methods, and what are the advantages and limitations of incorporating functions with 2 or more inputs?
3. Can you provide further explanations or clarifications regarding the claim on page 8, and how does the model predict a correct solution within certain limits?
4. How does the neural approach model hundreds of variables, and what are the challenges and potential solutions for scaling up the model?
5. Is it possible to ensure that this architecture is a universal approximator, and what are the necessary conditions or modifications required to achieve universality?

Review
Thank you for an interesting perspective on neural approaches to approximating physical phenomena. This paper describes a method to extrapolate a given dataset and predict formulae with naturally occurring functions like sine, cosine, multiplication, etc.
Pros
- The approach is rather simple and hence can be applied to existing methods. The major difference is incorporating functions with 2 or more inputs which was done successfully in the paper.
- It seems that the MLP, even though it is good for interpolation, fails to extrapolate and model the correct function. It was a great idea to use basis functions like sine and cosine to make the approach more explicit.
Cons
- Page 8, the claim that x2 cos(ax1 + b) ~ 1.21(cos(-ax1 + π + b + 0.41x2) + sin(ax1 + b + 0.41x2)) for x2 in [-2,2] is not entirely correct. There should be some restrictions on 'a' and 'b' as well, as the approximate equality doesn't hold for all real values of 'a' and 'b'. Although, for a=2*pi and b=pi/4, the claim is correct, so the model is predicting a correct solution within certain limits.
- Most of the experiments involve up to 4 variables. It would be interesting to see how the neural approach models hundreds of variables.
- Another way of looking at the model is that the non-linearities like sine, cosine, and multiplication act as basis functions. If the data is a linear combination of such functions, the model will be able to learn the weights. As division is not one of the non-linearities, predicting expressions like Equation 13 seems unlikely. Hence, I was wondering whether it is possible to make sure that this architecture is a universal approximator.
Suggested Edits
- Page 8: it seems that there is a typographical error in the expression 1.21(cos(ax1 + π + b + 0.41x2) + sin(ax1 + b + 0.41x2)). When compared with the predicted formula in Figure 4(b), it should be 1.21(cos(-ax1 + π + b + 0.41x2) + sin(ax1 + b + 0.41x2)).
Review questions:
1. What is the focus of the paper regarding physical systems and analytical equations?
2. What are the strengths and weaknesses of the proposed approach in terms of scalability and complexity?
3. How does the reviewer assess the contribution and novelty of the paper's content?
4. What are some concerns regarding the tools and techniques used in the research?
5. Are there any suggestions for future work and improvements to the current approach?

Review
The authors attempt to extract analytical equations governing physical systems from observations - an important task. Being able to capture succinct and interpretable rules which a physical system follows is of great importance. However, the authors do this with simple and naive tools which will not scale to complex tasks, offering no new insights or advances to the field.
The contribution of the paper (and the first four pages of the submission!) can be summarised in one sentence:
"Learn the weights of a small network with cosine, sinusoid, and input elements products activation functions s.t. the weights are sparse (L1)".
The learnt network weights, together with the fixed network structure, are then presented as the learnt equation.
This research uses tools from the literature of the '90s (I haven't seen the abbreviation ANN (page 3) for a long time) and does not build on modern techniques, which have advanced a lot since then. I would encourage the authors to review modern literature and continue working on this important task.
ICLR | Title
Extrapolation and learning equations
Abstract
In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified. INTRODUCTION The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, i. e. in a way to not overfit the data, the regression problem is well understood and can – at least conceptually – be considered solved. However, when working with data from real-world devices, e. g. controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, e. g. when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call extrapolation generalization, is the topic of the present paper. We are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, e. g. mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting. The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. We present our results in the Section Experimental evaluation and close with conclusions. REGRESSION AND EXTRAPOLATION We consider a multivariate regression problem with a training set {(x1, y1), . . . , (xN , yN )} with x ∈ R, y ∈ R. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems we assume the data originates from an unknown analytical function (or system of functions), φ : R → R with additive zero-mean noise, ξ, i. e. y = φ(x) + ξ and Eξ = 0. The function φ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function ψ : R → R that approximates the true functional relation as well as possible in the squared loss sense, i. e. achieves minimal expected error E‖ψ(x) − φ(x)‖2. 
In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on
INTRODUCTION
The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, i. e. in a way to not overfit the data, the regression problem is well understood and can – at least conceptually – be considered solved. However, when working with data from real-world devices, e. g. controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, e. g. when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call extrapolation generalization, is the topic of the present paper.
We are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, e. g. mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting.
The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. We present our results in the Section Experimental evaluation and close with conclusions.
REGRESSION AND EXTRAPOLATION
We consider a multivariate regression problem with a training set {(x1, y1), . . . , (xN , yN )} with x ∈ Rn, y ∈ Rm. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems we assume the data originates from an unknown analytical function (or system of functions), φ : Rn → Rm with additive zero-mean noise, ξ, i. e. y = φ(x) + ξ and Eξ = 0. The function φ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function ψ : Rn → Rm that approximates the true functional relation as well as possible in the squared loss sense, i. e. achieves minimal expected error E‖ψ(x) − φ(x)‖2. In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on
training or test data D,
E(D) = 1
N N∑ i=1 ‖ψ(xi)− yi‖2 . (1)
If training and test data are sampled from the same distribution then we speak about an interpolation problem. In the extrapolation setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, e. g. for higher velocities. To succeed in this task, it is essential to identify the underlying functional relationship instead of just minimizing the empirical error, as detailed below. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection.
LEARNING A NETWORK FOR FUNCTION EXTRAPOLATION
The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an L-layer network, there are L− 1 hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure (k′ inputs, k outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match.
The linear mapping at level l maps the k′-dimensional input y(l−1) to the d-dimensional intermediate representation z given by
z(l) = W (l)y(l−1) + b(l), (2)
where y(l−1) is the output of the previous layer, with the convention y(0) = x. The weight matrix W (l) ∈ Rd×k′ and the bias vector b(l) ∈ Rd are free parameters that are learned during training. The non-linear transformation contains u unary units, fi : R→ R, for i = 1, . . . , u, and v binary units, gj : R× R→ R for j = 1, . . . , v. Their outputs are concatenated to form the layer output
y(l) := ( f1(z (l) 1 ), f2(z (l) 2 ), . . . , fu(z (l) u ), g1(z (l) u+1, z (l) u+2), . . . , gv(z (l) u+2v−1, z (l) u+2v) ) . (3)
In total, the nonlinear stage has k = u + v outputs and d = u + 2v inputs. The unary units, f_1, . . . , f_u, receive the respective components, z_1, . . . , z_u, as inputs, and each unit may be one of the following base functions, as specified in a fixed type parameter I_i ∈ {0, 1, 2, 3}:

f_i(z_i) := z_i if I_i = 0, sin(z_i) if I_i = 1, cos(z_i) if I_i = 2, sigm(z_i) if I_i = 3, for i = 1, . . . , u, (4)
where sigm(z) = 1/(1 + e^−z) is the standard sigmoid function. The binary units, g_1, . . . , g_v, receive the remaining components, z_{u+1}, . . . , z_{u+2v}, as input in pairs of two. They are multiplication units that compute the product of their two input values:

g_j(z_{u+2j−1}, z_{u+2j}) := z_{u+2j−1} · z_{u+2j} for j = 1, . . . , v. (5)
Finally, the L-th and last layer computes the regression values by a linear read-out
y^(L) := W^(L) y^(L−1) + b^(L). (6)
The architecture is depicted in Fig. 1. We call the new architecture Equation Learner (EQL) and denote the function it defines by ψ.
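To make the layer structure of Eqs. (2)–(6) concrete, here is a minimal NumPy sketch of one forward pass (function and variable names are our own illustration, not the authors' Theano code):

```python
import numpy as np

def eql_forward(x, weights, biases, n_unary, n_binary, unit_types):
    """Minimal sketch of an EQL forward pass (Eqs. 2-6).

    weights/biases: per-layer parameters; for each hidden layer W has shape
    (u + 2v, k'), so z = W y + b has d = u + 2v entries.
    unit_types: per-layer arrays of I_i in {0, 1, 2, 3} selecting
    identity/sin/cos/sigmoid for the unary units.
    """
    unary_fns = [lambda z: z, np.sin, np.cos,
                 lambda z: 1.0 / (1.0 + np.exp(-z))]
    y = x
    for l in range(len(weights) - 1):                  # hidden layers
        z = weights[l] @ y + biases[l]                 # Eq. (2)
        u, v = n_unary[l], n_binary[l]
        unary = np.array([unary_fns[unit_types[l][i]](z[i]) for i in range(u)])
        binary = z[u:u + 2 * v:2] * z[u + 1:u + 2 * v:2]  # pairwise products, Eq. (5)
        y = np.concatenate([unary, binary])            # layer output, Eq. (3)
    return weights[-1] @ y + biases[-1]                # linear read-out, Eq. (6)
```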
DISCUSSION OF THE ARCHITECTURE
The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of sine and cosine as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space.
Sigmoid nonlinearities are the canonical choice of activation function for artificial neural networks (ANN) and have proved to be successful. In fact, we include sigmoids in our architecture, making it a superclass of ANNs. However, they were typically disabled by the training procedure, consistent with their absence from the physical equations we consider. We do not include other, predominantly local nonlinearities, in particular radial basis functions Broomhead & Lowe (1988), since one cannot expect them to extrapolate at all. Further nonlinearities, such as (square) roots and logarithms, could in principle be useful for learning physical equations, but they pose problems because their domains of definition are restricted to positive inputs. We leave the task of incorporating them in a principled way to future work.
The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably the second most common basic operation after addition (which the linear layers can perform). Multiplication was introduced into neural networks long ago as product units Durbin & Rumelhart (1989) and Pi-Sigma units Shin & Ghosh (1991). Product units have a large fan-in and compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high-order polynomial; such polynomials are powerful function approximators but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are the special case with 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows us to control the maximal degree of the learned polynomial by the depth of the network.
Finally, each layer of the network contains unary units that act as identity maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth.
NETWORK TRAINING
The EQL is fully differentiable in its free parameters θ = {W^(1), . . . , W^(L), b^(1), . . . , b^(L)}, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective Tibshirani (1996),

L(D) = (1/N) ∑_{i=1}^{N} ‖ψ(x_i) − y_i‖² + λ ∑_{l=1}^{L} |W^(l)|_1 , (7)
that is, a linear combination of L2 loss and L1 regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam Kingma & Ba (2015) for calculating the updates:

θ^{t+1} = θ^t + Adam( ∂L(D^(t))/∂θ , α ), (8)

where D^(t) denotes the current mini-batch and α is the step-size parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use α = 0.001 and a mini-batch size of 20.
The role of the L1 regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay, whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values.
Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure (t < t1) we use no regularization (λ = 0), such that parameters can vary freely and reach reasonable starting points. Afterwards, we switch on the regularization by setting λ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training (t > t2) we disable L1 regularization (λ = 0) but enforce the same L0 norm of the weights. This is achieved by keeping all weights w ∈ W^(1...L) that are close to 0 at 0, i. e. if |w| < 0.001 then w = 0 during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of breakpoints t1 and t2 is not critical. In practice, we use t1 = T/4 and t2 = (19/20)T, where T is the total number of update steps. T was selected large enough to ensure convergence. Note that convergence to a sparse structure is important here, so early stopping would be disadvantageous.
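A small sketch of this three-phase schedule (our own illustration; the helper names are assumptions):

```python
import numpy as np

def l1_strength(t, T, lam):
    """Three-phase schedule sketch: lambda = 0 for t < t1 = T/4, lambda = lam
    for t1 <= t <= t2 = 19T/20, and 0 afterwards (with weights clamped)."""
    t1, t2 = T // 4, (19 * T) // 20
    return lam if t1 <= t <= t2 else 0.0

def clamp_small_weights(weights, eps=0.001):
    """Final phase: enforce the reached L0 norm by keeping near-zero
    weights at exactly zero for the remaining epochs."""
    for W in weights:
        W[np.abs(W) < eps] = 0.0
```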
MODEL SELECTION FOR EXTRAPOLATION
EQL networks have a number of hyper-parameters, e. g. the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate, the network has to find the “right” formula. But how can we tell? Using Occam's razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between cos(x) and its truncated power series approximation 1 − x²/2 + x⁴/24, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, i. e. it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances w. r. t. validation error and sparsity and selecting the one with the smallest L2 norm (in rank-space), see Eq. (15).
Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations.
RELATED WORK
In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, e. g. a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR) Williams & Rasmussen (2006) or Support Vector Regression (SVR) Smola & Schölkopf (2004), or a multi-layer network of suitable expressive power Specht (1991). The goal is to find a prediction function that leads to a small expected error on future data, not
necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a “biologically plausible” model is often preferable over finding one that achieves the highest prediction accuracy. As a consequence, model classes are often highly constrained, e. g. allowing only for sparse linear models.
The task of learning a true, nonlinear, functional dependence from observing a physical system, has received little attention in the machine learning literature so far, but forms the basis of the field of system identification. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series) but learning analytic formulas is not common.
Causal learning is an area of recent research that aims at identifying a causal relation between multiple observables, which are typically the result of a physical process. Classically, this task reduces to finding a minimal graphical model based only on tests of conditional independence Pearl (2000). Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, instead constraining the noise distributions Peters et al. (2014). The topic of learning a regression function with emphasis on extrapolation performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, i. e. predicting the next value(s) Wiener (1949). By our nomenclature, this is typically rather an interpolation task, since the prediction is based on the behaviour of the series at earlier time steps but with a similar value distribution Müller et al. (1997); Györfi et al. (2013). Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the domain adaptation setting. In particular, since we assume a common labeling function, our setting would fall under the covariate shift setting Quionero-Candela et al. (2009). Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time Ben-David et al. (2010). In our setting this is not possible to obtain.
On the technical level, EQL networks are an instance of general feed-forward networks for function approximation Bishop (1995). In contrast to recent trends towards deep learning Bengio (2009); Bengio et al. (2013), our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, EQL networks resemble sum-product networks (SPNs) Poon & Domingos (2012) and Pi-Sigma networks (PSNs) Shin & Ghosh (1991), in the sense that both are based on directed acyclic graphs with computational units that allow for summation and multiplication. Otherwise, SPNs are different, as they act as an efficient alternative to probabilistic graphical models for representing probability distributions, whereas EQL networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in EQL multiplication is optional.
Finding equations for observations is also known as symbolic regression, where a search is performed in a certain function space, typically with evolutionary computation. With these techniques it is possible to discover physical laws such as invariants and conserved quantities Schmidt & Lipson (2009). Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling it as a gradient-based optimization problem. Related to symbolic regression is finding mathematical identities, for instance to find computationally more efficient expressions. In Zaremba et al. (2014) this was done using machine learning to overcome the potentially exponential search space.
EXPERIMENTAL EVALUATION
We demonstrate the ability of EQL to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and
evaluation procedure in python based on the theano framework Theano Development Team (2016). We will make the code for training and evaluation public after acceptance of the manuscript.
Pendulum. We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is X = R× R where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature, these are usually denoted as (θ, ω), but for our purposes, we call them (x1, x2) in order to keep the notation consistent between experiments. The pendulum’s dynamic behavior is governed by the following two ordinary differential equations:
ẋ1 = x2 and ẋ2 = −g sin(x1), (9)

where g = 9.81 is the gravitational constant.
We divide each equation by g in order to balance the output scales and form a regression problem with two output values, y1 = (1/g)x2 and y2 = −sin(x1). As training data, we sample 1000 points uniformly in the hypercube [−h, h] × [−h, h] for h = 2. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation σ = 0.01. We also define three test sets, each with 1000 points. The interpolation test set is sampled from the same data distribution as the training set. The extrapolation (near) test set contains data sampled uniformly from the data domain [−(3/2)h, (3/2)h] × [−(3/2)h, (3/2)h] \ [−h, h] × [−h, h], which is relatively near the training region, and the extrapolation (far) test set extends the region further outside: [−2h, 2h] × [−2h, 2h] \ [−h, h] × [−h, h]. We train a 2-layer EQL and perform model selection among the hyper-parameters: the regularization strength λ ∈ 10^{−7,−6.3,−6,−5.3,−5,−4.3,−4,−3.3,−3} and the number of nodes (1/4)u = v ∈ {1, 3, 5}. All weights are randomly initialized from a normal distribution with σ = √(1/(k′ + d)). The unit selection I is set such that all unit types occur equally often. To ensure convergence we chose T = 10000 epochs. We compare our algorithm to a standard multilayer perceptron (MLP) with tanh activation functions and possible hyperparameters: λ as for EQL, number of layers L ∈ {2, 3}, and number of neurons k ∈ {5, 10, 20}. A second baseline is given by epsilon support vector regression (SVR) Basak et al. (2007) with two hyperparameters C ∈ 10^{−3,−2,−1,0,1,2,3,3.5} and ε ∈ 10^{−3,−2,−1,0}, using a radial basis function kernel with width γ ∈ {0.05, 0.1, 0.2, 0.5, 1.0}.
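For concreteness, a sketch of generating the pendulum training data and the near-extrapolation test set as described above (our own transcription; a simple rejection sampler stands in for uniform sampling on the ring-shaped domain):

```python
import numpy as np

rng = np.random.default_rng(0)
g, h, sigma = 9.81, 2.0, 0.01

def pendulum_targets(x):
    # y1 = x2 / g, y2 = -sin(x1), as in the rescaled regression problem
    return np.stack([x[:, 1] / g, -np.sin(x[:, 0])], axis=1)

# training data: 1000 points uniform in [-h, h]^2 with Gaussian target noise
x_train = rng.uniform(-h, h, size=(1000, 2))
y_train = pendulum_targets(x_train) + sigma * rng.normal(size=(1000, 2))

def sample_ring(n, outer, inner):
    """Uniform samples in [-outer, outer]^2 excluding [-inner, inner]^2."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-outer, outer, size=2)
        if np.max(np.abs(p)) > inner:   # reject points inside the training box
            pts.append(p)
    return np.array(pts)

x_near = sample_ring(1000, 1.5 * h, h)   # extrapolation (near) test set
y_near = pendulum_targets(x_near)
```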
Numeric results are reported in Tab. 1. As expected, all models are able to interpolate well, with a test error on the order of the noise level (σ = 0.01). For extrapolation, however, the performance differs between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. EQL, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure 2: while the MLP and SVR simply learn functions that interpolate the training values, EQL finds the correct functional expression and therefore predicts the correct values for any input data.
Double pendulum kinematics. The second system we consider is a real double pendulum, for which the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum Schmidt & Lipson (2009). The task here is to learn the position of the tips of the double pendulum segments from the given joint angles (x1, x2). These positions were not measured, so we compute them from the angles using the following formulas: y1 = cos(x1), y2 = cos(x1) + cos(x1 + x2), y3 = sin(x1), y4 = sin(x1) + sin(x1 + x2), where (y1, y3) and (y2, y4) correspond to the x-y-coordinates of the first and second end-point respectively. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10% were used as validation set (randomly sampled), see Fig. 3(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments such that a much larger domain is covered. Nevertheless the angle values are confined to [−π, π]. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in Fig. 3(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see Fig. 3(c). The performance of MLP is already off near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean square error in Fig. 3(d).
Model selection is performed to determine λ as above, u = v ∈ {3, 5}, (MLP: k ∈ {5, 10, 20}) and layer number L ∈ {2, 3}.
Robotic arms. A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training, the arm is controlled by sinusoidal joint target angles with amplitude in [−π/2, π/2], each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. For testing extrapolation performance the amplitude [−π, π] was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (kin-3-end, kin-4-end) and the coordinates of all segment positions (kin-5-all). The numerical results, see Tab. 2, show that our method is able to extrapolate in these cases. Model selection as above with u = v ∈ {10, 20}, (MLP: k ∈ {10, 50}) and layer number L ∈ {2, 3, 4}. To illustrate the dependence on the amount of noise and the number of available training points, we provide a quantification in Appendix A2. In short, increasing noise can be compensated by an increasing amount of data to keep the performance.
Learning complex formula. In order to find out whether EQL can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output:
y = 1/3 (sin(πx1) + sin(2πx2 + π/8) + x2 − x3x4)   F-1 (10)
y = 1/3 (sin(πx1) + x2 cos(2πx1 + π/4) + x3 − x4²)   F-2 (11)
y = 1/3 ((1 + x2) sin(πx1) + x2x3x4)   F-3 (12)
The first equation requires only one hidden layer to be represented. The second and third equations require two hidden layers. In particular, F-2 contains a product of x2 and cos, and F-3 contains a product of three terms; we use these to test whether our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with h = 1 as input data range. We use 10000 points for the training and validation sets (90%-10% split) and 5000 points for each of the test sets. Model selection for EQL is performed as above using the number of layers L ∈ {2, 3, 4}. The number of units is set to (1/4)u = v = 10. For the MLP, we select L and λ from the same set as above as well as k ∈ {10, 30}.
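For reference, the three target functions transcribed directly as vectorized NumPy code:

```python
import numpy as np

def f1(x):  # F-1, Eq. (10)
    return (np.sin(np.pi * x[:, 0]) + np.sin(2 * np.pi * x[:, 1] + np.pi / 8)
            + x[:, 1] - x[:, 2] * x[:, 3]) / 3.0

def f2(x):  # F-2, Eq. (11)
    return (np.sin(np.pi * x[:, 0]) + x[:, 1] * np.cos(2 * np.pi * x[:, 0] + np.pi / 4)
            + x[:, 2] - x[:, 3] ** 2) / 3.0

def f3(x):  # F-3, Eq. (12)
    return ((1 + x[:, 1]) * np.sin(np.pi * x[:, 0])
            + x[:, 1] * x[:, 2] * x[:, 3]) / 3.0
```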
Table 3 shows the numerical results. Again, all methods are able to interpolate, but only EQL achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it would be selected if found. Fig. 4 illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement x2 cos(2πx1 + π/4)? Apparently it uses 1.21(cos(−2πx1 + π + π/4 + 0.41x2) + sin(2πx1 + π/4 + 0.41x2)), which is a good approximation for x2 ∈ [−2, 2]. The sparsity of this solution is 5, whereas the true solution needs at least 6, which explains its selection. For F-3 the suboptimal local minimum approximates (1 + x2) sin(x1) using (x1 + x1x2) cos(βx1), which deviates quickly outside the training domain; the true solution would be sparser but was not found. Only if we remove cosine from the base functions do we always get the correct formula, see Fig. 4(c).
X-Ray transition energies. As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them, and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different, as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The data is taken from Deslattes et al. (2003), where we consider one specific transition, called the K α2 line, because it was measured for all elements. The true relationship between atomic number Z and transition energies is complicated, as it involves many-body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is K α2 ∝ Z² according to Moseley's law. Further correction terms for elements with larger Z are potentially of higher order. We have data for elements with 10 ≤ Z ≤ 100, which is split into training/validation sets in the range [10, 91] (70/10 data points) and an extrapolation test set in the interval [92, 100] (14 data points because of isotopes). Since we have so little data, we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in [0, 1], i. e. x = Z/100 and y = K α2/100000. Model selection is here based on validation error only. The selection for sparsity and validation error only yields the Z² relationship. Mini-batch size is 2 here and T = 50000 was used. Figure 5 presents the data, the predictions, the learned formulas and the numerical results. EQL and SVR achieve similar performance and MLP is significantly worse. However, EQL also yields interpretable formulas, see Fig. 5(e), that can be used to gain insights into the potential relationship.
POOR EXTRAPOLATION OUT OF MODEL CLASS — CART-PENDULUM SYSTEM
Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set.
Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring damper system, see Fig. 6(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector x = (x1, . . . , x4).
We set up a regression problem with four outputs from the corresponding system of ordinary differential equations where y1 = ẋ1 = x3, y2 = ẋ2 = x4 and
y3 = ( −x1 − 0.01x3 + x4² sin(x2) + 0.1x4 cos(x2) + 9.81 sin(x2) cos(x2) ) / ( sin²(x2) + 1 ) , (13)

y4 = ( −0.2x4 − 19.62 sin(x2) + x1 cos(x2) + 0.01x3 cos(x2) − x4² sin(x2) cos(x2) ) / ( sin²(x2) + 1 ) .
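For reference, Eq. (13) transcribed directly into code; note the division by sin²(x2) + 1, which is exactly the operation absent from the EQL hypothesis class:

```python
import numpy as np

def cart_pendulum_rhs(x):
    """Direct transcription of the cart-pendulum dynamics (Eq. 13),
    mapping a state x = (x1, x2, x3, x4) to outputs (y1, y2, y3, y4)."""
    x1, x2, x3, x4 = x
    denom = np.sin(x2) ** 2 + 1.0
    y1, y2 = x3, x4
    y3 = (-x1 - 0.01 * x3 + x4 ** 2 * np.sin(x2)
          + 0.1 * x4 * np.cos(x2) + 9.81 * np.sin(x2) * np.cos(x2)) / denom
    y4 = (-0.2 * x4 - 19.62 * np.sin(x2) + x1 * np.cos(x2)
          + 0.01 * x3 * np.cos(x2) - x4 ** 2 * np.sin(x2) * np.cos(x2)) / denom
    return np.array([y1, y2, y3, y4])
```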
The formulas contain divisions, which are not included in our architecture due to their singularities. Incorporating them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class. In this case we cannot expect great extrapolation performance, and this is confirmed by the experiments. In Fig. 6(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both EQL and MLP, but as soon as the training region is left further, even the best instances differ considerably from the true values, see also the numeric results in Tab. 4. The SVR performs poorly also in the near extrapolation range. Inspecting the learned expressions, we find that the sigmoid functions are rarely used.
CONCLUSIONS
We presented a new network architecture called EQL that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing L1 regularization and fixing L0 norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies.
The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor.
If the origin of the data is not in the hypothesis class, i. e. the underlying expression cannot be represented by the network, then good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions, which we will address in future work alongside the application to even larger examples. We expect good scaling capabilities to larger systems due to the gradient-based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense.
ACKNOWLEDGMENTS
This work was in parts funded by the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 308036: "Life-long learning of visual scene understanding" (L3ViSU). GM received funding from the People Programme (Marie Curie Actions) in FP7/2007-2013 under REA grant agreement no. 291734.
APPENDIX
A1: MODEL SELECTION DETAILS
QUANTIFYING SPARSITY
We actually want a measure of the complexity of the formula; however, since it is not clear what the right choice of measure is, we use sparsity instead, counting the number of active/used hidden units, denoted by s. For a given network φ we get

s(φ) = ∑_{l=1}^{L} ∑_{i=1}^{k} Θ( |W^(l)_{i,·}| · |W^(l+1)_{·,i}| − 0.01 ) , (14)

where Θ is the Heaviside function and 0.01 is an arbitrary threshold. For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula).
SELECTION CRITERIA
As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we propose to choose them based on their ranking. Let r_v(φ) and r_s(φ) be the ranks of the network φ w. r. t. the validation error and sparsity s(φ), respectively; then the network with minimal squared rank norm is selected:

arg min_φ [ r_v(φ)² + r_s(φ)² ] (15)
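A sketch of Eqs. (14) and (15) in NumPy (our own illustration; for simplicity it ignores the special handling of multiplication-unit inputs mentioned above):

```python
import numpy as np

def sparsity(weights, thresh=0.01):
    """Eq. (14) sketch: count hidden units whose product of incoming and
    outgoing weight norms exceeds the threshold."""
    s = 0
    for W_in, W_out in zip(weights[:-1], weights[1:]):
        for i in range(min(W_in.shape[0], W_out.shape[1])):
            if np.linalg.norm(W_in[i, :]) * np.linalg.norm(W_out[:, i]) > thresh:
                s += 1
    return s

def select_model(val_errors, sparsities):
    """Eq. (15): pick the instance with the smallest squared rank norm."""
    rv = np.argsort(np.argsort(val_errors))   # rank w.r.t. validation error
    rs = np.argsort(np.argsort(sparsities))   # rank w.r.t. sparsity
    return int(np.argmin(rv ** 2 + rs ** 2))
```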
In Fig. 7 the extrapolation performance of all considered networks for the kin2D-4-end dataset is visualized as a function of validation error and sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error.
A2: DEPENDENCE ON NOISE AND NUMBER OF DATA POINTS
In order to understand how the method depends on the amount of noise and the number of datapoints we scan through the two parameters and present the empirical results in Fig. 8. In general the method is robust to noise and as expected, more noise can be compensated by more data. | 1. What is the focus and contribution of the paper regarding transfer learning and extrapolation?
2. What are the strengths of the proposed EQL model, particularly in terms of its ability to capture system dynamics and interpretability?
3. Do you have any concerns or questions about the multiplication units used in the EQL model?
4. How does the reviewer compare the EQL approach to other methods that model extrapolation data with uncertainty?
5. Are there any limitations or trade-offs associated with using the EQL model, such as the need for prior knowledge of the underlying dynamics? | Review | Review
Thank you for an interesting read.
To my knowledge, very few papers have looked at transfer learning with **no** target domain data (the authors call this task "extrapolation"). This paper clearly shows that the knowledge of the underlying system dynamics is crucial in this case. The experiments clearly showed the promising potential of the proposed EQL model. I think EQL is very interesting also from the perspective of interpretability, which is crucial for data analysis in scientific domains.
Quesions and comments:
1. Multiplication units. By the universal approximation theorem, multiplication can also be represented by a neural network in the usual sense. I agree with the authors' explanation of interpolation and extrapolation, but I still don't quite understand why the multiplication unit is crucial here. I guess it is because this representation generalises better when the training data is not that representative of the future?
2. Fitting an EQL vs. fitting a polynomial. It seems to me that the number of layers in EQL has some connections to the degree of the polynomial. Assume we know the underlying dynamics we want to learn can be represented by a polynomial. Then what's the difference between fitting a polynomial (with model selection techniques to determine the degree) and fitting an EQL (with model selection techniques to determine the number of layers)? Also your experiments showed that the selection of basis functions (specific to the underlying dynamics you want to learn) is crucial for the performance. This means you need to have some prior knowledge on the form of the equation anyway!
3. Ben-David et al. 2010 has presented some error bounds for the hypothesis that is trained on source data but tested on the target data. I wonder if your EQL model can achieve better error bounds?
4. Can you comment on the comparison of your method to those who modelled the extrapolation data with **uncertainty**? |
ICLR | Title
Variational Lossy Autoencoder
Abstract
Representation learning seeks to expose certain aspects of observed data in a learned representation that’s amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN. Our proposed VAE model allows us to have control over what the global latent code can learn and by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the VAE only “autoencodes” data in a lossy fashion. In addition, by leveraging autoregressive models as both prior distribution p(z) and decoding distribution p(x|z), we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks as well as competitive results on CIFAR10.
1 INTRODUCTION
A key goal of representation learning is to identify and disentangle the underlying causal factors of the data, so that it becomes easier to understand the data, to classify it, or to perform other tasks (Bengio et al., 2013). For image data this often means that we are interested in uncovering the “global structure” that captures the content of an image (for example, the identity of objects present in the image) and its “style”, but that we are typically less interested in the local and high frequency sources of variation such as the specific textures or white noise patterns.
A popular approach for learning representations is to fit a probabilistic latent variable model, an approach also known as analysis-by-synthesis (Yuille & Kersten, 2006; Nair et al., 2008). By learning a generative model of the data with the appropriate hierarchical structure of latent variables, it is hoped that the model will somehow uncover and untangle those causal sources of variations that we happen to be interested in. However, without further assumptions, representation learning via generative modeling is ill-posed: there are many different possible generative models with different (or no) kinds of latent variables that all encode the same probability density function on our observed data. Thus, the results we empirically get using this approach are highly dependent on the specific architectural and modeling choices that are made. Moreover, the objective that we optimize is often completely disconnected from the goal of learning a good representation: An autoregressive model of the data may achieve the same log-likelihood as a variational autoencoder (VAE) (Kingma & Welling, 2013), but the structure learned by the two models is completely different: the latter typically has a clear hierarchy of latent variables, while the autoregressive model has no stochastic latent variables at all (although it is conceivable that the deterministic hidden units of the autoregressive models will have meaningful and useful representations). For this reason, autoregressive models have thus far not been popular for the purpose of learning representations, even though they are extremely powerful as generative models (see e.g. van den Oord et al., 2016a).
A natural question becomes: is it possible to have a model that is a powerful density estimator and at the same time has the right hierarchical structure for representation learning? A potential solution would be to use a hybrid model that has both the latent variable structure of a VAE, as
well as the powerful recurrence of an autoregressive model. However, earlier attempts at combining these two kinds of models have run into the problem that the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) noted that weakening the autoregressive part of the model by, for example, dropout can encourage the latent variables to be used. We analyze why weakening is necessary, and we propose a principled solution that takes advantage of this property to control what kind of information goes into latent variables. The model we propose performs well as a density estimator, as evidenced by state-of-the-art log-likelihood results on MNIST, OMNIGLOT and Caltech-101, and also has a structure that is uniquely suited for learning interesting global representations of data.
2 VAES DO NOT AUTOENCODE IN GENERAL
A VAE is frequently interpreted as a regularized autoencoder (Kingma & Welling, 2013; Zhang et al., 2016), but the conditions under which it is guaranteed to autoencode (reconstruction being close to the original datapoint) are not discussed. In this section, we discuss the often-neglected fact that VAEs do not always autoencode and give explicit reasons why previous attempts to apply VAE to sequence modeling found that the latent code is generally not used unless the decoder is weakened (Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016). The understanding of when a VAE does autoencode will be an essential building block for VLAE.
2.1 TECHNICAL BACKGROUND
Let x be observed variables, z latent variables and let p(x, z) be the parametric model of their joint distribution, called the generative model defined over the variables. Given a dataset X = {x1, ...,xN} we wish to perform maximum likelihood learning of its parameters:
log p(X) = ∑_{i=1}^{N} log p(x^(i)), (1)
but in general this marginal likelihood is intractable to compute or differentiate directly for flexible generative models that have high-dimensional latent variables and flexible priors and likelihoods. A solution is to introduce q(z|x), a parametric inference model defined over the latent variables, and optimize the variational lower bound on the marginal log-likelihood of each observation x:
log p(x) ≥ E_{q(z|x)} [log p(x, z) − log q(z|x)] = L(x; θ), (2)

where θ indicates the parameters of the p and q models.
There are various ways to optimize the lower bound L(x; θ); for continuous z it can be done efficiently through a re-parameterization of q(z|x) (Kingma & Welling, 2013; Rezende et al., 2014). This way of optimizing the variational lower bound with a parametric inference network and reparameterization of continuous latent variables is usually called VAE. The “autoencoding” terminology comes from the fact that the lower bound L(x; θ) can be re-arranged:
L(x; θ) = E_{q(z|x)} [log p(x, z) − log q(z|x)] (3)
= E_{q(z|x)} [log p(x|z)] − DKL(q(z|x)||p(z)) (4)
where the first term can be seen as the expectation of negative reconstruction error and the KL divergence term can be seen as a regularizer, which as a whole could be seen as a regularized autoencoder loss with q(z|x) being the encoder and p(x|z) being the decoder. In the context of 2D images modeling, the decoding distribution p(x|z) is usually chosen to be a simple factorized distribution, i.e. p(x|z) = ∏ i p(xi|z), and this setup often yields a sharp decoding distribution p(x|z) that tends to reconstruct original datapoint x exactly.
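As a concrete reference, the following is a minimal PyTorch sketch of a one-sample Monte Carlo estimate of the lower bound in Eq. (4), assuming a factorized Gaussian q(z|x) and a factorized Bernoulli p(x|z); `encoder` and `decoder` are placeholder modules, not the architecture used in this paper:

```python
import torch

def elbo(x, encoder, decoder):
    """One-sample estimate of L(x; θ) = E_q[log p(x|z)] - KL(q(z|x)||p(z)),
    with standard normal prior p(z) and x a batch of flattened binary images."""
    mu, logvar = encoder(x)                       # parameters of q(z|x)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps        # reparameterization trick
    logits = decoder(z)                           # factorized Bernoulli p(x|z)
    rec = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, x, reduction='none').sum(dim=1)   # log p(x|z)
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(dim=1)  # KL(q||N(0,I))
    return rec - kl                               # lower bound per datapoint
```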
2.2 BITS-BACK CODING AND INFORMATION PREFERENCE
It’s straightforward to see that having a more powerful p(x|z) will make VAE’s marginal generative distribution p(x) = ∫_z p(z) p(x|z) dz more expressive. This idea has been explored extensively in previous work applying VAE to sequence modeling (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016), where the decoding distribution is a powerful RNN with autoregressive dependency, i.e., p(x|z) = ∏_i p(x_i|z, x_<i). Since RNNs are universal function approximators and any joint distribution over x admits an autoregressive factorization, the RNN autoregressive decoding distribution can in theory represent any probability distribution even without dependence on z.
However, previous attempts have found it hard to benefit from VAE when using an expressive decoding distribution p(x|z). Indeed it’s documented in detail by Bowman et al. (2015) that in most cases when an RNN autoregressive decoding distribution is used, the latent code z is completely ignored and the model regresses to be a standard unconditional RNN autoregressive distribution that doesn’t depend on the latent code. This phenomenon is commonly attributed to “optimization challenges” of VAE in the literature (Bowman et al., 2015; Serban et al., 2016; Kaae Sønderby et al., 2016) because early in the training the approximate posterior q(z|x) carries little information about datapoint x and hence it’s easy for the model to just set the approximate posterior to be the prior to avoid paying any regularization cost DKL(q(z|x)||p(z)). Here we present a simple but often-neglected observation that this phenomenon arises not just due to optimization challenges and instead even if we can solve the optimization problems exactly, the latent code should still be ignored at optimum for most practical instances of VAE that have intractable true posterior distributions and sufficiently powerful decoders. It is easiest to understand this observation from a Bits-Back Coding perspective of VAE.
It is well-known that Bits-Back Coding is an information-theoretic view of Variational Inference (Hinton & Van Camp, 1993; Honkela & Valpola, 2004) and specific links have been established between Bits-Back Coding and the Helmholtz Machine/VAE (Hinton & Zemel, 1994; Gregor et al., 2013). Here we briefly relate VAE to Bits-Back Coding for self-containedness:
First recall that the goal of designing an efficient coding protocol is to minimize the expected code length of communicating x. To explain Bits-Back Coding, let’s first consider a more naive coding scheme. VAE can be seen as a way to encode data in a two-part code: p(z) and p(x|z), where z can be seen as the essence/structure of a datum and is encoded first and then the modeling error (deviation from z’s structure) is encoded next. The expected code length under this naive coding scheme for a given data distribution is hence:
Cnaive(x) = Ex∼data,z∼q(z|x) [− log p(z)− log p(x|z)] (5)
This coding scheme is, however, inefficient. Bits-Back Coding improves on it by noticing that the encoder distribution q(z|x) can be used to transmit additional information, up to H(q(z|x)) expected nats, as long as the receiver also has access to q(z|x). The decoding scheme works as follows: a receiver first decodes z from p(z), then decodes x from p(x|z) and, by running the same approximate posterior that the sender is using, decodes a secondary message from q(z|x). Hence, to properly measure the code length of VAE’s two-part code, we need to subtract the extra information from q(z|x). Using Bit-Back Coding, the expected code length equates to the negative variational lower bound or the so-called Helmholtz variational free energy, which means minimizing code length is equivalent to maximizing the variational lower bound:
CBitsBack(x) = E_{x∼data, z∼q(z|x)} [log q(z|x) − log p(z) − log p(x|z)] (6)
= E_{x∼data} [−L(x)] (7)
Casting the problem of optimizing VAE into designing an efficient coding scheme easily allows us to reason when the latent code z will be used: the latent code z will be used when the two-part code is an efficient code. Recalling that the lower-bound of expected code length for data is given by the Shannon entropy of data generation distribution: H(data) = Ex∼data [− log pdata(x)], we can analyze VAE’s coding efficiency:
CBitsBack(x) = E_{x∼data, z∼q(z|x)} [log q(z|x) − log p(z) − log p(x|z)] (8)
= E_{x∼data} [− log p(x) + DKL(q(z|x)||p(z|x))] (9)
≥ E_{x∼data} [− log pdata(x) + DKL(q(z|x)||p(z|x))] (10)
= H(data) + E_{x∼data} [DKL(q(z|x)||p(z|x))] (11)
Since Kullback-Leibler divergence is always non-negative, we know that using the two-part code derived from VAE suffers at least an extra code length of DKL(q(z|x)||p(z|x)) nats for using a posterior that's not precise. Many previous works in Variational Inference have designed flexible approximate posteriors to better approximate the true posterior (Salimans et al., 2014; Rezende & Mohamed, 2015; Tran et al., 2015; Kingma et al., 2016). Improved posterior approximations have been shown to be effective in improving variational inference, but none of the existing methods are able to completely close the gap between approximate posterior and true posterior. This leads us to believe that for most practical models, at least in the near future, the extra coding cost DKL(q(z|x)||p(z|x)) will exist and will not be negligible.
Once we understand the inefficiency of the Bits-Back Coding mechanism, it’s simple to realize why sometimes the latent code z is not used: if the p(x|z) could model pdata(x) without using information from z, then it will not use z, in which case the true posterior p(z|x) is simply the prior p(z) and it’s usually easy to set q(z|x) to be p(z) to avoid incurring an extra cost DKL(q(z|x)||p(z|x)). And it’s exactly the case when a powerful decoding distribution is used like an RNN autoregressive distribution, which given enough capacity is able to model arbitrarily complex distributions. Hence there exists a preference of information when a VAE is optimized: information that can be modeled locally by decoding distribution p(x|z) without access to z will be encoded locally and only the remainder will be encoded in z.
We note that one common way to encourage putting information into the code is to use a factorized decoder p(x|z) = ∏_i p(x_i|z), but so long as there is one dimension x_j that's independent of all other dimensions for the true data distribution, pdata(x) = pdata(x_j) pdata(x_{≠j}), then the latent code doesn't contain all the information about x, since at least x_j will be modeled locally by the factorized p(x|z). This kind of independence structure rarely exists in images, so common VAEs that have a factorized decoder autoencode almost exactly. Other techniques to encourage the usage of the latent code include annealing the relative weight of DKL(q(z|x)||p(z)) in the variational lower bound (Bowman et al., 2015; Kaae Sønderby et al., 2016) or the use of free bits (Kingma et al., 2016), which can serve the dual purpose of smoothing the optimization landscape and canceling out part of the Bits-Back Code inefficiency DKL(q(z|x)||p(z|x)).
3 VARIATIONAL LOSSY AUTOENCODER
The discussion in Section 2.2 suggests that autoregressive models cannot be combined with VAE since information will be preferred to be modeled by autoregressive models. Nevertheless, in this section, we present two complementary classes of improvements to VAE that utilize autoregressive models fruitfully to explicitly control representation learning and improve density estimation.
3.1 LOSSY CODE VIA EXPLICIT INFORMATION PLACEMENT
Even though the information preference property of VAE might suggest that one should always use the full autoregressive models to achieve a better code length/log-likelihood, especially when slow data generation is not a concern, we argue that this information preference property can be exploited to turn the VAE into a powerful representation learning method that gives us fine-grained control over the kind of information that gets included in the learned representation.
When we try to learn a lossy compression/representation of data, we can simply construct a decoding distribution that’s capable of modeling the part of information that we don’t want the lossy representation to capture, but, critically, that’s incapable of modelling the information that we do want the lossy representation to capture.
For instance, if we are interested in learning a global representation for 2D images that doesn’t encode information about detailed texture, we can construct a specific factorization of the autoregressive distribution such that it has a small local receptive field as decoding distribution, e.g., plocal(x|z) = ∏ i p(xi|z,xWindowAround(i)). Notice that, as long as xWindowAround(i) is smaller than x<i, plocal(x|z) won’t be able to represent arbitrarily complex distribution over x without dependence on z since the receptive field is limited such that not all distributions over x admit such factorizations. In particular, the receptive field window can be a small rectangle adjacent to a pixel xi and in this case long-range dependency will be encoded in the latent code z. On the other hand, if the true data distribution admits such factorization for a given datum x and dimension i, i.e.
pdata(xi|xWindowAround(i)) = pdata(xi|x<i), then the information preference property discussed in Section 2.2 will apply here, which means that all the information will be encoded in local autoregressive distribution for xi. Local statistics of 2D images like texture will likely be modeled completely by a small local window, whereas global structural information of an images like shapes of objects is long-range dependency that can only be communicated through latent code z. Therefore we have given an example VAE that will produce a lossy compression of 2D images carrying exclusively global information that can’t be modeled locally.
Notice that a global representation is only one of many possible lossy representations that we can construct using this information preference property. For instance, the conditional of an autoregressive distribution might depend on a heavily down-sampled receptive field so that it can only model long-range pattern whereas local high-frequency statistics need to be encoded into the latent code. Hence we have demonstrated that we can achieve explicit placement of information by constraining the receptive field/factorization of an autoregressive distribution that’s used as decoding distribution.
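A minimal PyTorch sketch of such a restricted decoder stack (our own illustration, not the exact architecture used in the experiments; for simplicity every layer uses the stricter mask that also hides the center pixel):

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """PixelCNN-style masked convolution: each pixel only sees pixels above
    it and to its left, so the window xWindowAround(i) stays a small patch."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__(in_ch, out_ch, k, padding=k // 2, bias=True)
        mask = torch.ones_like(self.weight)
        mask[:, :, k // 2, k // 2:] = 0   # mask center pixel and to its right
        mask[:, :, k // 2 + 1:, :] = 0    # mask all rows below the center
        self.register_buffer('mask', mask)

    def forward(self, x):
        self.weight.data *= self.mask     # enforce the causal mask each step
        return super().forward(x)

# 6 masked layers of filter size 3: the receptive field grows by only a few
# pixels, so long-range structure must be carried by the latent code z.
layers = [MaskedConv2d(1 if i == 0 else 32, 32) for i in range(6)]
local_decoder = nn.Sequential(*layers, nn.Conv2d(32, 1, 1))
```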
We want to additionally emphasize that the information preference property is an asymptotic view, in the sense that it only holds when the variational lower bound can be optimized well. Thus, we are not proposing an alternative to techniques like free bits Kingma et al. (2016) or KL annealing; indeed they are still useful methods to smooth the optimization problem and are used in this paper's experiments.
3.2 LEARNED PRIOR WITH AUTOREGRESSIVE FLOW
Inefficiency in Bits-Back Coding, i.e., the mismatch between approximate posterior and true posterior, can be exploited to construct a lossy code but it’s still important to minimize such inefficiency to improve overall modeling performance/coding efficiency. We propose to parametrize the prior distribution p(z; θ) with an autoregressive model and show that a type of autoregressive latent code can in theory reduce inefficiency in Bits-Back coding.
It is well-known that limited approximate posteriors impede learning and therefore various expressive posterior approximations have been proposed to improve VAE's density estimation performance (Turner et al., 2008; Mnih & Gregor, 2014; Salimans et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016). One such class of approximate posteriors that has been shown to attain good empirical performance is based on the idea of Normalizing Flow, which is to apply an invertible mapping to a simple random variable, for example a factorized Gaussian as commonly used for q(z|x), in order to obtain a complicated random variable. For an invertible transformation between a simple distribution y and a more flexible z, we know from the change-of-variable technique that log q(z|x) = log q(y|x) − log det(dz/dy), and using q(z|x) as approximate posterior will decrease the coding efficiency gap DKL(q(z|x)||p(z|x)) provided the transformation is sufficiently expressive. Kingma et al. (2016) introduced Inverse Autoregressive Flow, which is a powerful class of such invertible mappings with a simple determinant:

z_i = ( y_i − μ_i(y_{1:i−1}) ) / σ_i(y_{1:i−1}),

where μ_i(.) ∈ R, σ_i(.) ∈ R+ are general functions that can be parametrized by expressive neural networks, such as MADE and PixelCNN variants (Germain et al., 2015; van den Oord et al., 2016a). Inverse autoregressive flow is the inverse/whitening of autoregressive flow: y_i = z_i σ_i(y_{1:i−1}) + μ_i(y_{1:i−1}). We refer interested readers to (Rezende & Mohamed, 2015; Kingma et al., 2016) for in-depth discussions on related topics.
In this paper, we propose to parametrize our learnable prior as an autoregressive flow from some simple noise source like spherical Gaussian. Next, we show that using latent code transformed by autoregressive flow (AF) is equivalent to using inverse autoregressive flow (IAF) approximate posterior, which explains why it can similarly improve Bits-Back Coding efficiency. Moreover, compared with an IAF posterior, an AF prior has a more expressive generative model that essentially “comes for free”.
For an autoregressive flow f, some continuous noise source ε is transformed into latent code z: z = f(ε). Assuming the density function for the noise source is u(ε), we similarly know that log p(z) = log u(ε) + log det(dε/dz).
Simply re-arranging the variational lower bound for using an AF prior reveals that having an AF latent code z is equivalent to using an IAF posterior for ε, which we can interpret as the new latent code:

L(x; θ) = E_{z∼q(z|x)} [log p(x|z) + log p(z) − log q(z|x)] (12)
= E_{z∼q(z|x), ε=f⁻¹(z)} [ log p(x|f(ε)) + log u(ε) + log det(dε/dz) − log q(z|x) ] (13)
= E_{z∼q(z|x), ε=f⁻¹(z)} [ log p(x|f(ε)) + log u(ε) − ( log q(z|x) − log det(dε/dz) ) ] (14)

where the term in parentheses in Eq. (14) is the IAF posterior over ε. The AF prior is the same as the IAF posterior along the encoder path, f⁻¹(q(z|x)), but differs along the decoder/generator path: the IAF posterior has a shorter decoder path p(x|z), whereas the AF prior has a deeper decoder path p(x|f(ε)). The crucial observation is that the AF prior and the IAF posterior have the same computation cost under the expectation of z ∼ q(z|x), so using the AF prior makes the model more expressive at no training time cost.
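A sketch of evaluating log p(z) under an AF prior (our own illustration; `mus` and `sigmas` stand in for autoregressive conditioner networks, e.g. MADE variants, assumed to accept a variable-length prefix):

```python
import math
import torch

def af_prior_logp(z, mus, sigmas):
    """Sketch of log p(z) for an AF prior: invert the flow,
    eps_i = (z_i - mu_i(z_{1:i-1})) / sigma_i(z_{1:i-1}), against a standard
    normal noise source u(eps), via the change-of-variables formula."""
    eps = torch.zeros_like(z)
    log_det = torch.zeros(z.shape[0])
    for i in range(z.shape[1]):
        mu_i, sigma_i = mus(z[:, :i]), sigmas(z[:, :i])
        eps[:, i] = (z[:, i] - mu_i) / sigma_i
        log_det = log_det - torch.log(sigma_i)    # accumulates log |d eps / d z|
    log_u = (-0.5 * eps ** 2 - 0.5 * math.log(2 * math.pi)).sum(dim=1)
    return log_u + log_det                        # log u(eps) + log det(d eps/d z)
```

Note that inverting the AF given z is parallel over dimensions in principle; the loop above is kept sequential only for clarity.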
4 EXPERIMENTS
In this paper, we evaluate VLAE on 2D images and leave extensions to other forms of data to future work. For the rest of the section, we define a VLAE model as a VAE that uses an AF prior and an autoregressive decoder. We choose to implement the conditional distribution p(x|z) with a small-receptive-field PixelCNN (van den Oord et al., 2016a), which has proved to be a scalable autoregressive model.
For evaluation, we use binary image datasets that are commonly used for density estimation tasks: MNIST (LeCun et al., 1998) (both the statically binarized1 and the dynamically binarized version (Burda et al., 2015a)), OMNIGLOT (Lake et al., 2013; Burda et al., 2015a) and Caltech-101 Silhouettes (Marlin et al., 2010). All datasets uniformly consist of 28x28 binary images, which allows us to use a unified architecture. VAE networks used for the binary image datasets are simple variants of the ResNet VAEs described in (Salimans et al., 2014; Kingma et al., 2016). For the decoder, we use a variant of PixelCNN that has 6 layers of masked convolution with filter size 3, which means the window of dependency, xWindowAround(i), is limited to a small local patch. During training, ”free bits” (Kingma et al., 2016) is used to improve optimization stability. Experimental setup and hyperparameters are detailed in the appendix. Reported marginal NLL is estimated using Importance Sampling with 4096 samples.
We designed experiments to answer the following questions:
• Can VLAE learn lossy codes that encode global statistics?
• Does using AF priors improve upon using IAF posteriors, as predicted by theory?
• Does using autoregressive decoding distributions improve density estimation performance?
4.1 LOSSY COMPRESSION
First we are interested in whether VLAE can learn a lossy representation/compression of data by using the PixelCNN decoder to model local statistics. We trained a VLAE model on Statically Binarized MNIST, and the converged model has E[DKL(q(z|x)||p(z))] = 13.3 nats = 19.2 bits, which is the number of bits it uses on average to encode/compress one MNIST image. By comparison, an identical VAE model with a factorized decoding distribution uses on average 37.3 bits in the latent code; this indicates that VLAE can learn a lossier compression than a VAE with a regular factorized conditional distribution.
The next question is whether VLAE’s lossy compression encodes global statistics and discards local statistics. In Fig 1a, we visualize original images x_data and one random “decompression” x_decompressed from VLAE: z ∼ q(z|x_data), x_decompressed ∼ p(x|z). We observe that none of the decompressions is an exact reconstruction of the original image; instead, the global structure of the image was encoded in the lossy code z and regenerated. Also worth noting is that local statistics are not preserved, but a new set of likely local statistics is generated in the decompressed images: the binary masks are usually different and local styles like stroke width are sometimes slightly different.

1We use the version provided by Hugo Larochelle.
However, we remark that the lossy code z doesn't always capture the kind of global information that we care about; what it captures depends on the type of constraint we put on the decoder. For instance, in Fig 4b, we show decompressions for the OMNIGLOT dataset, which has more meaningful variations in small patches than MNIST, and we can observe that semantics are not preserved in some cases. This highlights the need to specify the type of statistics we care about in a representation, which will differ across tasks and datasets, and to design the decoding distribution accordingly.
4.2 DENSITY ESTIMATION
Next we investigate whether leveraging autoregressive models as the latent distribution p(z) and as the decoding distribution p(x|z) improves density estimation performance. To verify whether the AF prior improves upon the IAF posterior alone, it's desirable to test this model without an autoregressive decoder, instead using the conventional independent Bernoulli distribution for p(x|z). Hence we use the best performing model from Kingma et al. (2016) on statically binarized MNIST and make the single modification of replacing the original IAF posterior with an equivalent AF prior, removing the context. As seen in Table 1, the VAE with an AF prior outperforms the VAE with an equivalent IAF posterior, indicating that the deeper generative model from the AF prior is beneficial. A similar gain carries over when an autoregressive decoder is used: on statically binarized MNIST, using an AF prior instead of an IAF posterior reduces train NLL by 0.8 nat and test NLL by 0.6 nat.
Next we evaluate whether using an autoregressive decoding distribution can improve performance, and we show in Table 1 that a VLAE model, with an AF prior and a PixelCNN conditional, outperforms a VAE with just an AF prior and achieves new state-of-the-art results on statically binarized MNIST.
In addition, we hypothesize that the separation of different types of information, modeling global structure in the latent code and local statistics in the PixelCNN, likely provides a good inductive bias for 2D images. In order to evaluate whether VLAE is an expressive density estimator with good inductive biases, we test a single VLAE model, with the same network architecture, on all binary datasets. We choose hyperparameters manually on statically binarized MNIST and use the same hyperparameters to evaluate on dynamically binarized MNIST, OMNIGLOT and Caltech-101 Silhouettes. We also note that better performance can be obtained if we tune hyperparameters individually for each dataset. As a concrete demonstration, we report the performance of a fine-tuned VLAE on the OMNIGLOT dataset in Table 3.
As seen in Tables 2, 3 and 4, with the same set of hyperparameters tuned on statically binarized MNIST, VLAE performs well on the rest of the datasets, significantly exceeding previous state-of-the-art results on dynamically binarized MNIST and Caltech-101 Silhouettes and tying statistically with the best previous result on OMNIGLOT. In order to isolate the effect of the expressive PixelCNN decoder, we also report the performance of the same PixelCNN trained without the VAE part under the name "Unconditional Decoder".
4.3 NATURAL IMAGES: CIFAR10
In addition to binary image datasets, we have applied VLAE to the CIFAR10 dataset of natural images. Density estimation of CIFAR10 images has been a challenging benchmark problem used by many recent generative models and hence is a great task to position VLAE among existing methods.
We investigated using ResNet (He et al., 2016) and DenseNet (Huang et al., 2016) as building blocks for the VAE networks and observed that DenseNet reduces overfitting. We also propose a new optimization technique that blends the advantages of KL annealing (Serban et al., 2016) and "free bits" (Kingma et al., 2016) to stabilize learning on this challenging dataset. The detailed experimental setup is described in the appendix.
VLAE is compared to other methods on CIFAR10 in Table 5. We show that VLAE models attain new state-of-the-art performance among variationally trained latent-variable models. The DenseNet VLAE model also outperforms most other tractable likelihood models, including Gated PixelCNN and PixelRNN, and has results only slightly worse than the current state-of-the-art PixelCNN++, which was not yet published at the time of writing.
We also investigate learning lossy codes on CIFAR10 images. To illustrate how the receptive field size of the PixelCNN decoder influences the properties of the learned latent codes, we show visualizations of similar VLAE models with receptive fields of different sizes. Specifically, we say a receptive field, xWindowAround(i), has size A×B when a pixel xi can depend on the rectangular block of size A×B immediately on top of xi as well as the ⌈(A−1)/2⌉ pixels immediately to the left of xi. We use this notation to refer to different types of PixelCNN decoders in Figure 3.
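To make the notation concrete, the following sketch enumerates the dependency window for an A×B receptive field on a row-major image. The horizontal centering of the block over pixel i is our assumption, and boundary pixels simply receive fewer indices.

```python
import math

def window_around(i, width, A, B):
    # Indices of x_WindowAround(i): an A-wide block spanning the B rows above
    # pixel i, plus ceil((A-1)/2) pixels immediately to its left.
    r, c = divmod(i, width)
    left = math.ceil((A - 1) / 2)
    idx = []
    for rr in range(max(r - B, 0), r):                      # the B rows above
        for cc in range(max(c - left, 0), min(c - left + A, width)):
            idx.append(rr * width + cc)
    for cc in range(max(c - left, 0), c):                   # pixels to the left
        idx.append(r * width + cc)
    return idx

# 5x3 receptive field around pixel (row 4, col 10) of a 28-wide image:
print(window_around(i=4 * 28 + 10, width=28, A=5, B=3))
```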
From (a)-(c) in Figure 3, we can see that larger receptive fields progressively make autoregressive decoders capture more structural information. In (a), a smaller receptive field tends to preserve rather detailed shape information in the lossy code, whereas in (c), with a larger receptive field, the latent code only retains rough shape.
It's also interesting to note that in (a)-(c), color information is oftentimes partially omitted from the latent codes; one explanation is that color is very predictable locally. However, color information can be important to preserve if our task is, for example, object classification. To demonstrate how we can encode color information in the lossy code, we can choose to make the PixelCNN decoder depend only on images' grayscale versions. In other words, instead of choosing the decoder to be plocal(x|z) = ∏i p(xi|z, xWindowAround(i)), we use a decoder of the form plocal(x|z) = ∏i p(xi|z, Grayscale(xWindowAround(i))). In (d) of Figure 3, we visualize lossy codes for a VLAE that has the same receptive field size as (c) but uses a "grayscale receptive field". We note that the lossy codes in (d) encode roughly the same structural information as those in (c) but generally generate objects that are more recognizable due to the preservation of color information. This serves as one example of how we can design the lossy latent code carefully to encode what's important and what's not.
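For completeness, the Grayscale(·) transform referenced here is the standard luma combination spelled out in Appendix B; a one-line sketch:

```python
import numpy as np

def grayscale(window):
    # Grayscale(x_WindowAround(i)): luma transform (0.299 R + 0.587 G + 0.114 B)
    # applied to an (..., 3) RGB array.
    return 0.299 * window[..., 0] + 0.587 * window[..., 1] + 0.114 * window[..., 2]

print(grayscale(np.ones((4, 2, 3))))  # toy 4x2 RGB window -> 4x2 grayscale
```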
5 RELATED WORK
We investigate a fusion between variational autoencoders with continuous latent variables (Kingma & Welling, 2013; Rezende et al., 2014) and neural autoregressive models. For autoregression, we specifically apply a novel type of architecture where autoregression is realised through a carefully constructed deep convolutional network, introduced in the PixelCNN model for images (van den Oord et al., 2016a,b). This family of convolutional autoregressive models was further explored, and extended, for audio in WaveNet (Oord et al., 2016), video in Video Pixel Networks (Kalchbrenner et al., 2016b) and language in ByteNet (Kalchbrenner et al., 2016a).
The combination of latent variables with an expressive decoder was previously explored using recurrent networks, mainly in the context of language modeling (Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) also proposed to weaken an otherwise too-expressive decoder by dropout to force some information into the latent codes.
Concurrently with our work, PixelVAE (Gulrajani et al., 2016) also explored using a conditional PixelCNN as a VAE's decoder and obtained impressive density modeling results through the use of multiple levels of stochastic units.
Using an autoregressive model on the latent code was explored in the context of discrete latent variables in DARN (Gregor et al., 2013). Kingma et al. (2016), Kaae Sønderby et al. (2016), Gregor et al. (2016) and Salimans (2016) explored VAE architectures with an explicitly deep autoregressive prior for continuous latent variables, but the autoregressive data likelihood is intractable in those architectures and needs to be inferred variationally. In contrast, we use multiple steps of autoregressive flows that have exact likelihood, and we analyze the effect of using an expressive latent code.
Optimization challenges of using (all levels of) continuous latent codes were discussed before, and practical solutions were proposed (Bowman et al., 2015; Kaae Sønderby et al., 2016; Kingma et al., 2016). In this paper, we present a complementary perspective on when and how the latent code should be used by appealing to a Bits-Back interpretation of VAE.
Learning a lossy compressor with a latent variable model has been investigated with ConvDRAW (Gregor et al., 2016). It learns a hierarchy of latent variables, and using just the high-level latent variables results in a lossy compression that performs similarly to JPEG. Our model similarly learns a lossy compressor, but it uses an autoregressive model to explicitly control what kind of information should be lost in compression.
6 CONCLUSION
In this paper, we analyze the condition under which the latent code in VAE should be used, i.e. when a VAE autoencodes, and we use this observation to design a VAE model that's a lossy compressor of observed data. At the modeling level, we propose two complementary improvements to VAE that are shown to have good empirical performance.
VLAE has the appealing properties of controllable representation learning and improved density estimation performance, but these properties come at a cost: compared with VAE models that have a simple prior and decoder, VLAE is slower at generation due to the sequential nature of the autoregressive model.
Moving forward, we believe it’s exciting to extend this principle of learning lossy codes to other forms of data, in particular those that have a temporal aspect like audio and video. Another promising direction is to design representations that contain only information for downstream tasks and utilize those representations to improve semi-supervised learning.
A DETAILED EXPERIMENT SETUP FOR BINARY IMAGES
For VAE's encoder and decoder, we use the same ResNet (He et al., 2015) VAE architecture as the one used in the IAF MNIST experiment (Kingma et al., 2016). The only difference is that the decoder network now, instead of outputting a 28x28x1 spatial feature map to specify the mean of a factorized Bernoulli distribution, outputs a 28x28x4 spatial feature map that's concatenated with the original binary image channel-wise, forming a 28x28x5 feature map that's then fed through a typical masked PixelCNN (van den Oord et al., 2016a). As such, even though the PixelCNN conditions on the latent code, we don't call it a Conditional PixelCNN because it doesn't use the specific architecture proposed in van den Oord et al. (2016b). The PixelCNN has 6 masked convolution layers with 12 3x3 filters organized in ResNet blocks, and it has 4 additional 1x1-convolution ResNet blocks between every other masked convolution layer to increase processing capacity, since it employs fewer masked convolutions than usual. All the masked convolution layers have their weights tied to reduce overfitting on statically binarized MNIST; untying the weights increases performance on the other datasets. Experiments are tuned on the validation set, and the final experiment was run on the train and validation sets, with performance evaluated on the test set. Exponential Linear Units (Clevert et al., 2015) are used as activation functions in both the VAE network and the PixelCNN network. Weight normalization is used everywhere, with data-dependent initialization (Salimans & Kingma, 2016).
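The decoder's input assembly is just a channel-wise concatenation; a shape-level sketch (the actual convolutional networks are omitted, and the function name is ours):

```python
import numpy as np

def pixelcnn_input(z_feature_map, x_binary):
    # The 28x28x4 feature map decoded from z plus the 28x28x1 binary image
    # give the 28x28x5 tensor fed to the masked PixelCNN.
    assert z_feature_map.shape == (28, 28, 4) and x_binary.shape == (28, 28, 1)
    return np.concatenate([z_feature_map, x_binary], axis=-1)

rng = np.random.default_rng(0)
out = pixelcnn_input(rng.standard_normal((28, 28, 4)),
                     rng.integers(0, 2, size=(28, 28, 1)).astype(float))
print(out.shape)  # (28, 28, 5)
```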
A latent code of dimension 64 was used. The AF prior is implemented with MADE (Germain et al., 2015), as detailed in Kingma et al. (2016). We used 4 steps of autoregressive flow, and each flow is implemented by a 3-layer MADE that has 640 hidden units and uses ReLU (Nair & Hinton, 2010) as the activation function. Differing from the practice of Kingma et al. (2016), we use mean-only autoregressive flow, which we found to be more numerically stable.
In terms of training, Adamax (Kingma & Ba, 2014) was used with a learning rate of 0.002. Free bits of 0.01 nats/data-dim (Kingma et al., 2016) were found to be effective in dealing with the problem of the latent code being entirely ignored early in training. Polyak averaging (Polyak & Juditsky, 1992) was used to compute the final parameters, with α = 0.998.
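Polyak averaging here amounts to an exponential moving average of the weights; a minimal sketch over a parameter dictionary (the per-step, in-place update is our reading of the setup):

```python
def polyak_update(avg, params, alpha=0.998):
    # Exponential moving average of parameters; avg and params are dicts of
    # NumPy arrays, and avg holds the weights used for final evaluation.
    for name in params:
        avg[name] = alpha * avg[name] + (1.0 - alpha) * params[name]
    return avg
```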
All experiments are implemented using TensorFlow (Abadi et al., 2016).
B ADDITIONAL EXPERIMENT SETUP FOR CIFAR10
Latent codes are represented by 16 feature maps of size 8x8, and this choice of spatial stochastic units is inspired by the ResNet IAF VAE (Kingma et al., 2016). The prior distribution is factorized Gaussian noise transformed by 6 autoregressive flows, each of which is implemented by a PixelCNN (van den Oord et al., 2016a) with 2 hidden layers and 128 feature maps. Between every other autoregressive flow, the ordering of the stochastic units is reversed.
ResNet VLAE has the following encoder structure: 2 ResNet blocks, Conv w/ stride=2, 2 ResNet blocks, Conv w/ stride=2, 3 ResNet blocks, 1x1 convolution, and it has a symmetric decoder. Channel size = 48 for 32x32 feature maps and 96 for other feature maps. DenseNet VLAE follows a similar structure, replacing each pair of ResNet blocks with one DenseNet block of 3 steps; each step produces a number of feature maps such that at the end of a block, the concatenated feature maps are slightly more numerous than in the ResNet VLAE at the same stage.
Conditional PixelCNN++ (Salimans et al., 2017) is used as the decoder. Specifically, the channel-autoregressive variant is used to ensure there is sufficient capacity even when the receptive field is small. The decoder PixelCNN has 4 blocks of 64 feature maps, where each block is conditioned on previous blocks with Gated ResNet connections; hence the PixelCNN decoders we use are shallow but very wide. For the 4x2 receptive field experiment, we use 1 layer of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 5x3 receptive field experiment, we use 2 layers of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 7x4 receptive field experiment, we use 3 layers of vertical stack convolutions and 3 layers of horizontal stack convolutions; for the 7x4 Grayscale experiment, we transform RGB images into grayscale images via the transformation (0.299·R) + (0.587·G) + (0.114·B). The best density estimation result is obtained with the 7x4 receptive field experiments.
C SOFT FREE BITS
"Free bits" is a technique proposed in (Kingma et al., 2016) where K groups of stochastic units are encouraged to be used through the following surrogate objective:

L̃_λ = E_{x∼M}[ E_{q(z|x)}[log p(x|z)] ] − Σ_{j=1}^{K} maximum(λ, E_{x∼M}[DKL(q(z_j|x)||p(z_j))])
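The penalty term is straightforward to compute from per-group KLs; a small sketch (the per-group KLs are assumed to be pre-averaged over the minibatch M):

```python
import numpy as np

def free_bits_term(kl_per_group, lam=0.01):
    # sum_j maximum(lambda, E_{x~M}[KL(q(z_j|x) || p(z_j))]) over the K groups.
    return float(np.sum(np.maximum(lam, np.asarray(kl_per_group))))

# Groups below lambda pay the constant lambda, removing the incentive to
# collapse them further: here 0.01 + 0.5 + 0.03 instead of 0.002 + 0.5 + 0.03.
print(free_bits_term([0.002, 0.5, 0.03], lam=0.01))  # 0.54
```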
This technique is easy to use since it's usually easy to determine the minimum number of bits/nats, λ, that the stochastic units need to encode. Choosing λ is hence easier than setting a fixed KL annealing schedule (Serban et al., 2016).
On the other hand, KL annealing has the benefit that the surrogate objective smoothly becomes the true objective, the variational lower bound, whereas "free bits" has a sharp transition at the boundary. Therefore, we propose to still use λ as the hyperparameter specifying that at least λ nats should be used, but to change the optimization objective as slowly as possible:

L_SoftFreeBits(x; θ) = E_{q(z|x)}[log p(x|z)] − γ · DKL(q(z|x)||p(z)), where 0 < γ ≤ 1.

We make the optimization smoother by changing γ slowly online to make sure at least λ nats are used: when the KL is too much higher than λ (we experimented with a wide range of thresholds from 3% to 30%, all of which yield improved results, and we tend to use 5% as the threshold), γ is increased, and when the KL is lower than λ, γ is decreased to encourage information flow.
We found it sufficient to increase/decrease γ in 10% increments and did not tune this parameter further.
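The schedule described above amounts to a small feedback controller on γ; a sketch under the stated 5% threshold and 10% steps (the exact multiplicative form of the step is our assumption):

```python
def update_gamma(gamma, kl, lam, threshold=0.05, step=0.10):
    # Raise gamma when the KL overshoots lambda by more than the threshold,
    # lower it when the KL falls below lambda, keeping 0 < gamma <= 1.
    if kl > lam * (1.0 + threshold):
        gamma = min(gamma * (1.0 + step), 1.0)
    elif kl < lam:
        gamma = gamma * (1.0 - step)
    return gamma
```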
D AUTOREGRESSIVE DECODER WITHOUT AUTOREGRESSIVE PRIOR
In this section, we investigate the scenario of using an autoregressive decoder without an autoregressive prior. We compare the exact same model in three configurations: 1) using the small-receptive-field PixelCNN as an unconditional density estimator; 2) using the small-receptive-field PixelCNN as the decoder in a VAE with Gaussian latent variables; 3) replacing the Gaussian latent variables in 2) with autoregressive flow latent variables.
In Table 1, we can observe that each modification improves density estimation performance. In addition, using an autoregressive latent code makes the latent code transmit more information, as shown by the difference in E[DKL(q(z|x)||p(z))].
E CIFAR10 GENERATED SAMPLES

1. What is the focus of the paper regarding Variational Autoencoders?
2. What are the strengths of the proposed approach, particularly in understanding the latent code?
3. Do you have concerns about the proposed approaches to force the latent variables' use?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any minor issues or questions regarding the paper?

Review
This paper proposes a Variational Autoencoder model that can discard information it finds irrelevant, in order to learn interesting global representations of the data. This can be seen as a lossy compression algorithm, hence the name Variational Lossy Autoencoder. To achieve such a model, the authors combine VAEs with neural autoregressive models, resulting in a model that has both a latent variable structure and a powerful recurrence structure.
The authors first present an insightful Bits-Back interpretation of VAE to show when and how the latent code is ignored. As also mentioned in the literature, the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used. The authors then propose two complementary approaches to force the latent variables to be used by the decoder. The first is to make sure the autoregressive decoder only uses a small local receptive field, so the model has to use the latent code to learn long-range dependencies. The second is to parametrize the prior distribution over the latent code with an autoregressive model.
They also report new state-of-the-art results on binarized MNIST (both dynamically and statically binarized), OMNIGLOT and Caltech-101 Silhouettes.
Review:
The Bits-Back interpretation of VAE is a nice contribution to the community. Having novel interpretations of a model helps to better understand it and sometimes, as in this paper, highlights how it can be improved.
Having fine-grained control over the kind of information that gets included in the learned representation can be useful for a lot of applications. For instance, in image retrieval, such a learned representation could be used to retrieve objects that have a similar shape no matter what texture they have.
However, the authors say they propose two complementary classes of improvements to VAE, namely the lossy code via explicit information placement (Section 3.1) and learning the prior with autoregressive flow (Section 3.2). However, they never actually show how a VAE without an AF prior but with a PixelCNN decoder performs. What would be the impact on the latent code if no AF prior is used?
Also, it is not clear whether WindowAround(i) represents only a subset of x_{<i} or whether it can contain any data other than x_i. The authors mention the window can be represented as a small rectangle adjacent to a pixel x_i; must it only contain pixels above and to the left of x_i (similar to PixelCNN)?
Minor:
In Equation 8, should there be an expectation over the data distribution?
ICLR | Title
Variational Lossy Autoencoder
Abstract
Representation learning seeks to expose certain aspects of observed data in a learned representation that’s amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN. Our proposed VAE model allows us to have control over what the global latent code can learn and by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the VAE only “autoencodes” data in a lossy fashion. In addition, by leveraging autoregressive models as both prior distribution p(z) and decoding distribution p(x|z), we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks as well as competitive results on CIFAR10.
1 INTRODUCTION
A key goal of representation learning is to identify and disentangle the underlying causal factors of the data, so that it becomes easier to understand the data, to classify it, or to perform other tasks (Bengio et al., 2013). For image data this often means that we are interested in uncovering the “global structure” that captures the content of an image (for example, the identity of objects present in the image) and its “style”, but that we are typically less interested in the local and high frequency sources of variation such as the specific textures or white noise patterns.
A popular approach for learning representations is to fit a probabilistic latent variable model, an approach also known as analysis-by-synthesis (Yuille & Kersten, 2006; Nair et al., 2008). By learning a generative model of the data with the appropriate hierarchical structure of latent variables, it is hoped that the model will somehow uncover and untangle those causal sources of variations that we happen to be interested in. However, without further assumptions, representation learning via generative modeling is ill-posed: there are many different possible generative models with different (or no) kinds of latent variables that all encode the same probability density function on our observed data. Thus, the results we empirically get using this approach are highly dependent on the specific architectural and modeling choices that are made. Moreover, the objective that we optimize is often completely disconnected from the goal of learning a good representation: An autoregressive model of the data may achieve the same log-likelihood as a variational autoencoder (VAE) (Kingma & Welling, 2013), but the structure learned by the two models is completely different: the latter typically has a clear hierarchy of latent variables, while the autoregressive model has no stochastic latent variables at all (although it is conceivable that the deterministic hidden units of the autoregressive models will have meaningful and useful representations). For this reason, autoregressive models have thus far not been popular for the purpose of learning representations, even though they are extremely powerful as generative models (see e.g. van den Oord et al., 2016a).
A natural question becomes: is it possible to have a model that is a powerful density estimator and at the same time has the right hierarchical structure for representation learning? A potential solution would be to use a hybrid model that has both the latent variable structure of a VAE, as
well as the powerful recurrence of an autoregressive model. However, earlier attempts at combining these two kinds of models have run into the problem that the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) noted that weakening the autoregressive part of the model by, for example, dropout can encourage the latent variables to be used. We analyze why weakening is necessary, and we propose a principled solution that takes advantage of this property to control what kind of information goes into latent variables. The model we propose performs well as a density estimator, as evidenced by state-of-the-art log-likelihood results on MNIST, OMNIGLOT and Caltech-101, and also has a structure that is uniquely suited for learning interesting global representations of data.
2 VAES DO NOT AUTOENCODE IN GENERAL
A VAE is frequently interpreted as a regularized autoencoder (Kingma & Welling, 2013; Zhang et al., 2016), but the conditions under which it is guaranteed to autoencode (reconstruction being close to original datapoint) are not discussed. In this section, we discuss the often-neglected fact that VAEs do not always autoencode and give explicit reasons why previous attempts to apply VAE in sequence modeling found that the latent code is generally not used unless the decoder is weakened (Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016). The understanding of when VAE does autoencode will be an essential building piece for VLAE.
2.1 TECHNICAL BACKGROUND
Let x be observed variables, z latent variables and let p(x, z) be the parametric model of their joint distribution, called the generative model defined over the variables. Given a dataset X = {x1, ...,xN} we wish to perform maximum likelihood learning of its parameters:
log p(X) = N∑ i=1 log p(x(i)), (1)
but in general this marginal likelihood is intractable to compute or differentiate directly for flexible generative models that have high-dimensional latent variables and flexible priors and likelihoods. A solution is to introduce q(z|x), a parametric inference model defined over the latent variables, and optimize the variational lower bound on the marginal log-likelihood of each observation x:
log p(x) ≥ Eq(z|x) [log p(x, z)− log q(z|x)] = L(x; θ) (2) where θ indicates the parameters of p and q models.
There are various ways to optimize the lower bound L(x; θ); for continuous z it can be done efficiently through a re-parameterization of q(z|x) (Kingma & Welling, 2013; Rezende et al., 2014). This way of optimizing the variational lower bound with a parametric inference network and reparameterization of continuous latent variables is usually called VAE. The “autoencoding” terminology comes from the fact that the lower bound L(x; θ) can be re-arranged:
L(x; θ) = Eq(z|x) [log p(x, z)− log q(z|x)] (3) = Eq(z|x) [log p(x|z)]−DKL(q(z|x)||p(z)) (4)
where the first term can be seen as the expectation of negative reconstruction error and the KL divergence term can be seen as a regularizer, which as a whole could be seen as a regularized autoencoder loss with q(z|x) being the encoder and p(x|z) being the decoder. In the context of 2D images modeling, the decoding distribution p(x|z) is usually chosen to be a simple factorized distribution, i.e. p(x|z) = ∏ i p(xi|z), and this setup often yields a sharp decoding distribution p(x|z) that tends to reconstruct original datapoint x exactly.
2.2 BITS-BACK CODING AND INFORMATION PREFERENCE
It’s straightforward to see that having a more powerful p(x|z) will make VAE’s marginal generative distribution p(x) = ∫ z p(z)p(x|z)dz more expressive. This idea has been explored extensively
in previous work applying VAE to sequence modeling (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016), where the decoding distribution is a powerful RNN with autoregressive dependency, i.e., p(x|z) =∏ i p(xi|z,x<i). Since RNNs are universal function approximators and any joint distribution over x admits an autoregressive factorization, the RNN autoregressive decoding distribution can in theory represent any probability distribution even without dependence on z.
However, previous attempts have found it hard to benefit from VAE when using an expressive decoding distribution p(x|z). Indeed it’s documented in detail by Bowman et al. (2015) that in most cases when an RNN autoregressive decoding distribution is used, the latent code z is completely ignored and the model regresses to be a standard unconditional RNN autoregressive distribution that doesn’t depend on the latent code. This phenomenon is commonly attributed to “optimization challenges” of VAE in the literature (Bowman et al., 2015; Serban et al., 2016; Kaae Sønderby et al., 2016) because early in the training the approximate posterior q(z|x) carries little information about datapoint x and hence it’s easy for the model to just set the approximate posterior to be the prior to avoid paying any regularization cost DKL(q(z|x)||p(z)). Here we present a simple but often-neglected observation that this phenomenon arises not just due to optimization challenges and instead even if we can solve the optimization problems exactly, the latent code should still be ignored at optimum for most practical instances of VAE that have intractable true posterior distributions and sufficiently powerful decoders. It is easiest to understand this observation from a Bits-Back Coding perspective of VAE.
It is well-known that Bits-Back Coding is an information-theoretic view of Variational Inference (Hinton & Van Camp, 1993; Honkela & Valpola, 2004) and specific links have been established between Bits-Back Coding and the Helmholtz Machine/VAE (Hinton & Zemel, 1994; Gregor et al., 2013). Here we briefly relate VAE to Bits-Back Coding for self-containedness:
First recall that the goal of designing an efficient coding protocol is to minimize the expected code length of communicating x. To explain Bits-Back Coding, let’s first consider a more naive coding scheme. VAE can be seen as a way to encode data in a two-part code: p(z) and p(x|z), where z can be seen as the essence/structure of a datum and is encoded first and then the modeling error (deviation from z’s structure) is encoded next. The expected code length under this naive coding scheme for a given data distribution is hence:
Cnaive(x) = Ex∼data,z∼q(z|x) [− log p(z)− log p(x|z)] (5)
This coding scheme is, however, inefficient. Bits-Back Coding improves on it by noticing that the encoder distribution q(z|x) can be used to transmit additional information, up to H(q(z|x)) expected nats, as long as the receiver also has access to q(z|x). The decoding scheme works as follows: a receiver first decodes z from p(z), then decodes x from p(x|z) and, by running the same approximate posterior that the sender is using, decodes a secondary message from q(z|x). Hence, to properly measure the code length of VAE’s two-part code, we need to subtract the extra information from q(z|x). Using Bit-Back Coding, the expected code length equates to the negative variational lower bound or the so-called Helmholtz variational free energy, which means minimizing code length is equivalent to maximizing the variational lower bound:
CBitsBack(x) = Ex∼data,z∼q(z|x) [log q(z|x)− log p(z)− log p(x|z)] (6) = Ex∼data [−L(x)] (7)
Casting the problem of optimizing VAE into designing an efficient coding scheme easily allows us to reason when the latent code z will be used: the latent code z will be used when the two-part code is an efficient code. Recalling that the lower-bound of expected code length for data is given by the Shannon entropy of data generation distribution: H(data) = Ex∼data [− log pdata(x)], we can analyze VAE’s coding efficiency:
CBitsBack(x) = Ex∼data,z∼q(z|x) [log q(z|x)− log p(z)− log p(x|z)] (8) = Ex∼data [− log p(x) +DKL(q(z|x)||p(z|x))] (9) ≥ Ex∼data [− log pdata(x) +DKL(q(z|x)||p(z|x))] (10) = H(data) + Ex∼data [DKL(q(z|x)||p(z|x))] (11)
Since Kullback Leibler divergence is always non-negative, we know that using the two-part code derived from VAE suffers at least an extra code length of DKL(q(z|x)||p(z|x)) nats for using a posterior that’s not precise. Many previous works in Variational Inference have designed flexible approximate posteriors to better approximate true posterior (Salimans et al., 2014; Rezende & Mohamed, 2015; Tran et al., 2015; Kingma et al., 2016). Improved posterior approximations have shown to be effective in improving variational inference but none of the existing methods are able to completely close the gap between approximate posterior and true posterior. This leads us to believe that for most practical models, at least in the near future, the extra coding costDKL(q(z|x)||p(z|x)) will exist and will not be negligible.
Once we understand the inefficiency of the Bits-Back Coding mechanism, it’s simple to realize why sometimes the latent code z is not used: if the p(x|z) could model pdata(x) without using information from z, then it will not use z, in which case the true posterior p(z|x) is simply the prior p(z) and it’s usually easy to set q(z|x) to be p(z) to avoid incurring an extra cost DKL(q(z|x)||p(z|x)). And it’s exactly the case when a powerful decoding distribution is used like an RNN autoregressive distribution, which given enough capacity is able to model arbitrarily complex distributions. Hence there exists a preference of information when a VAE is optimized: information that can be modeled locally by decoding distribution p(x|z) without access to z will be encoded locally and only the remainder will be encoded in z.
We note that one common way to encourage putting information into the code is to use a factorized decoder p(x|z) = ∏ i p(xi|z) but so long as there is one dimension xj that’s independent of all other dimensions for true data distribution, pdata(x) = pdata(xj)pdata(x 6=j), then the latent code doesn’t contain all the information about x since at least xj will be modeled locally by factorized p(x|z). This kind of independence structure rarely exists in images so common VAEs that have factorized decoder autoencode almost exactly. Other techniques to encourage the usage of the latent code include annealing the relative weight of of DKL(q(z|x)||p(z)) in the variational lower bound (Bowman et al., 2015; Kaae Sønderby et al., 2016) or the use of free bits (Kingma et al., 2016), which can serve the dual purpose of smoothing the optimization landscape and canceling out part of the Bits-Back Code inefficiency DKL(q(z|x)||p(z|x)).
3 VARIATIONAL LOSSY AUTOENCODER
The discussion in Section 2.2 suggests that autoregressive models cannot be combined with VAE since information will be preferred to be modeled by autoregressive models. Nevertheless, in this section, we present two complementary classes of improvements to VAE that utilize autoregressive models fruitfully to explicitly control representation learning and improve density estimation.
3.1 LOSSY CODE VIA EXPLICIT INFORMATION PLACEMENT
Even though the information preference property of VAE might suggest that one should always use the full autoregressive models to achieve a better code length/log-likelihood, especially when slow data generation is not a concern, we argue that this information preference property can be exploited to turn the VAE into a powerful representation learning method that gives us fine-grained control over the kind of information that gets included in the learned representation.
When we try to learn a lossy compression/representation of data, we can simply construct a decoding distribution that’s capable of modeling the part of information that we don’t want the lossy representation to capture, but, critically, that’s incapable of modelling the information that we do want the lossy representation to capture.
For instance, if we are interested in learning a global representation for 2D images that doesn’t encode information about detailed texture, we can construct a specific factorization of the autoregressive distribution such that it has a small local receptive field as decoding distribution, e.g., plocal(x|z) = ∏ i p(xi|z,xWindowAround(i)). Notice that, as long as xWindowAround(i) is smaller than x<i, plocal(x|z) won’t be able to represent arbitrarily complex distribution over x without dependence on z since the receptive field is limited such that not all distributions over x admit such factorizations. In particular, the receptive field window can be a small rectangle adjacent to a pixel xi and in this case long-range dependency will be encoded in the latent code z. On the other hand, if the true data distribution admits such factorization for a given datum x and dimension i, i.e.
pdata(xi|xWindowAround(i)) = pdata(xi|x<i), then the information preference property discussed in Section 2.2 will apply here, which means that all the information will be encoded in local autoregressive distribution for xi. Local statistics of 2D images like texture will likely be modeled completely by a small local window, whereas global structural information of an images like shapes of objects is long-range dependency that can only be communicated through latent code z. Therefore we have given an example VAE that will produce a lossy compression of 2D images carrying exclusively global information that can’t be modeled locally.
Notice that a global representation is only one of many possible lossy representations that we can construct using this information preference property. For instance, the conditional of an autoregressive distribution might depend on a heavily down-sampled receptive field so that it can only model long-range pattern whereas local high-frequency statistics need to be encoded into the latent code. Hence we have demonstrated that we can achieve explicit placement of information by constraining the receptive field/factorization of an autoregressive distribution that’s used as decoding distribution.
We want to additionally emphasize the information preference property is an asymptotic view in a sense that it only holds when the variational lowerbound can be optimized well. Thus, we are not proposing an alternative to techniques like free bits Kingma et al. (2016) or KL annealing, and indeed they are still useful methods to smooth the optimization problem and used in this paper’s experiments.
3.2 LEARNED PRIOR WITH AUTOREGRESSIVE FLOW
Inefficiency in Bits-Back Coding, i.e., the mismatch between approximate posterior and true posterior, can be exploited to construct a lossy code but it’s still important to minimize such inefficiency to improve overall modeling performance/coding efficiency. We propose to parametrize the prior distribution p(z; θ) with an autoregressive model and show that a type of autoregressive latent code can in theory reduce inefficiency in Bits-Back coding.
It is well-known that limited approximate posteriors impede learning and therefore various expressive posterior approximations have been proposed to improve VAE’s density estimation performance (Turner et al., 2008; Mnih & Gregor, 2014; Salimans et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016). One such class of approximate posteriors that has been shown to attain good empirical performance is based on the idea of Normalizing Flow, which is to apply an invertible mapping to a simple random variable, for example a factorized Gaussian as commonly used for q(z|x), in order to obtain a complicated random variable. For an invertible transformation between a simple distribution y and a more flexible z, we know from the change-of-variable technique that log q(z|x) = log q(y|x) − log det dzdy and using q(z|x) as approximate posterior will decrease the coding efficiency gap DKL(q(z|x)||p(z|x)) provided the transformation is sufficiently expressive. Kingma et al. (2016) introduced Inverse Autoregressive Flow, which is a powerful class of such invertible mappings that have simple determinant: zi =
yi−µi(y1:i−1) σi(y1:i−1) , where µi(.) ∈ R, σi(.) ∈ R+ are general functions that can be parametrized by expressive neural networks, such as MADE and PixelCNN variants (Germain et al., 2015; van den Oord et al., 2016a). Inverse autoregressive flow is the inverse/whitening of autoregressive flow: yi = ziσi(y1:i−1) + µi(y1:i−1). We refer interested readers to (Rezende & Mohamed, 2015; Kingma et al., 2016) for in-depth discussions on related topics.
In this paper, we propose to parametrize our learnable prior as an autoregressive flow from some simple noise source like spherical Gaussian. Next, we show that using latent code transformed by autoregressive flow (AF) is equivalent to using inverse autoregressive flow (IAF) approximate posterior, which explains why it can similarly improve Bits-Back Coding efficiency. Moreover, compared with an IAF posterior, an AF prior has a more expressive generative model that essentially “comes for free”.
For an autoregressive flow f , some continuous noise source is transformed into latent code z: z = f( ). Assuming the density function for noise source is u( ), we similarly know that log p(z) = log u( ) + log det d dz .
Simply re-arranging the variational lowerbound for using AF prior reveals that having an AF latent code z is equivalent to using an IAF posterior for that we can interpret as the new latent code:
L(x; θ) = Ez∼q(z|x) [log p(x|z) + log p(z)− log q(z|x)] (12) = Ez∼q(z|x), =f−1(z) [ log p(x|f( )) + log u( ) + log det d
dz − log q(z|x)
] (13)
= Ez∼q(z|x), =f−1(z) log p(x|f( )) + log u( )− (log q(z|x)− log det d dz
)︸ ︷︷ ︸ IAF Posterior (14) AF prior is the same as IAF posterior along the encoder path, f−1(q(z|x)), but differs along the decoder/generator path: IAF posterior has a shorter decoder path p(x|z) whereas AF prior has a deeper decoder path p(x|f( )). The crucial observation is that AF prior and IAF posterior have the same computation cost under the expectation of z ∼ q(z|x), so using AF prior makes the model more expressive at no training time cost.
4 EXPERIMENTS
In this paper, we evaluate VLAE on 2D images and leave extensions to other forms of data to future work. For the rest of the section, we define a VLAE model as a VAE that uses AF prior and autoregressive decoder. We choose to implement conditional distribution p(x|z) with a smallreceptive-field PixelCNN (van den Oord et al., 2016a), which has been proved to be a scalable autoregressive model.
For evaluation, we use binary image datasets that are commonly used for density estimation tasks: MNIST (LeCun et al., 1998) (both statically binarized 1 and dynamically binarized version (Burda et al., 2015a)), OMNIGLOT (Lake et al., 2013; Burda et al., 2015a) and Caltech-101 Silhouettes (Marlin et al., 2010). All datasets uniformly consist of 28x28 binary images, which allow us to use a unified architecture. VAE networks used in binary image datasets are simple variants of ResNet VAEs described in (Salimans et al., 2014; Kingma et al., 2016). For the decoder, we use a variant of PixelCNN that has 6 layers of masked convolution with filter size 3, which means the window of dependency, xWindowAround(i), is limited to a small local patch. During training, ”free bits” (Kingma et al., 2016) is used improve optimization stability. Experimental setup and hyperparameters are detailed in the appendix. Reported marginal NLL is estimated using Importance Sampling with 4096 samples.
We designed experiments to answer the following questions:
• Can VLAE learn lossy codes that encode global statistics? • Does using AF priors improves upon using IAF posteriors as predicted by theory? • Does using autoregressive decoding distributions improve density estimation performance?
4.1 LOSSY COMPRESSION
First we are interested in whether VLAE can learn a lossy representation/compression of data by using the PixelCNN decoder to model local statistics. We trained VLAE model on Statically Binarized MNIST and the converged model has E[DKL(q(z|x)||p(z))] = 13.3 nats = 19.2 bits, which is the number of bits it uses on average to encode/compress one MNIST image. By comparison, an identical VAE model with factorized decoding distribution will uses on average 37.3 bits in latent code, and this thus indicates that VLAE can learn a lossier compression than a VAE with regular factorized conditional distribution.
The next question is whether VLAE’s lossy compression encodes global statistics and discards local statistics. In Fig 1a, we visualize original images xdata and one random “decompression” xdecompressed from VLAE: z ∼ q(z|xdata),xdecompressed ∼ p(x|z). We observe that none of the
1We use the version provided by Hugo Larochelle.
decompressions is an exact reconstruction of the original image but instead the global structure of the image was encoded in the lossy code z and regenerated. Also worth noting is that local statistics are not preserved but a new set of likely local statistics are generated in the decompressed images: the binary masks are usually different and local styles like stroke width are sometimes slightly different.
However, we remark that the lossy code z doesn’t always capture the kind of global information that we care about and it’s dependent on the type of constraint we put on the decoder. For instance, in Fig 4b, we show decompressions for OMNIGLOT dataset, which has more meaningful variations in small patches than MNIST, and we can observe that semantics are not preserved in some cases. This highlights the need to specify the type of statistics we care about in a representation, which will be different across tasks and datasets, and design decoding distribution accordingly.
4.2 DENSITY ESTIMATION
Next we investigate whether leveraging autoregressive models as latent distribution p(z) and as decoding distribution p(x|z) would improve density estimation performance. To verify whether AF prior is able to improve upon IAF posterior alone, it’s desirable to test this model without using autoregressive decoder but instead using the conventional independent Bernoulli distribution for p(x|z). Hence we use the best performing model from Kingma et al.
(2016) on statically binarized MNIST and make the single modification of replacing the original IAF posterior with an equivalent AF prior, removing the context. As seen in Table 1, VAE with AF prior is outperforming VAE with an equivalent IAF posterior, indicating that the deeper generative model from AF prior is beneficial. A similar gain carries over when an autoregressive decoder is used: on statically binarized MNIST, using AF prior instead of IAF posterior reduces train NLL by 0.8 nat and test NLL by 0.6 nat.
Next we evaluate whether using autoregressive decoding distribution can improve performance and we show in Table 1 that a VLAE model, with AF prior and PixelCNN conditional, is able to outperform a VAE with just AF prior and achieves new state-of-the-art results on statically binarized MNIST.
In addition, we hypothesize that the separation of different types of information, the modeling global structure in latent code and local statistics in PixelCNN, likely has some form of good inductive biases for 2D images. In order to evaluate if VLAE is an expressive density estimator with good inductive biases, we will test a single VLAE model, with the same network architecture, on all binary datasets. We choose hyperparameters manually on statically binarized MNIST and use the same hyperparameters to evaluate on dynamically binarized MNIST, OMNIGLOT and Caltech-101 Silhouettes. We also note that better performance can be obtained if we individually tune hyperparameters for each dataset. As a concrete demonstration, we report the performance of a fine-tuned VLAE on OMNIGLOT dataset in Table 3.
As seen in Table 2,3,4, with the same set of hyperparameters tuned on statically binarized MNIST, VLAE is able to perform well on the rest of datasets, significantly exceeding previous state-ofthe-art results on dynamically binarized MNIST and Caltech-101 Silhouettes and tying statistically with best previous result on OMNIGLOT. In order to isolate the effect of expressive PixelCNN as decoder, we also report performance of the same PixelCNN trained without VAE part under the name “Unconditional Decoder”.
4.3 NATURAL IMAGES: CIFAR10
In addition to binary image datasets, we have applied VLAE to the CIFAR10 dataset of natural images. Density estimation of CIFAR10 images has been a challenging benchmark problem used by many recent generative models and hence is great task to position VLAE among existing methods.
We investigated using ResNet (He et al., 2016) and DenseNet (Huang et al., 2016) as building blocks for VAE networks and observed that DenseNet reduces overfitting. We also propose a new optimization technique that blends the advantages of KL annealing (Serban et al., 2016) and ”free bits” (Kingma et al., 2016) to stabilize learning on this challenging dataset. Detailed experimental setup is described in Appendix.
VLAE is compared to other methods on CIFAR10 in Table 5. We show that VLAE models attain new state-of-the-art performance among other variationally trained latent-variable models. DenseNet VLAE model also outperforms most other tractable likelihood models including Gated PixelCNN and PixelRNN and has results only slightly worse than currently unarchived state-of-the-art PixelCNN++.
We also investigate learning lossy codes on CIFAR10 images. To illustrate how does the receptive field size of PixelCNN decoder influence properties of learned latent codes, we show visualizations of similar VLAE models with receptive fields of different sizes. Specifically we say a receptive field, xWindowAround(i), has size AxB when a pixel xi can depend on the rectangle block of size AxB immediately on top of xi as well as the ⌈ A−1 2 ⌉ pixels immediately to the left of xi. We use this notation to refer to different types of PixelCNN decoders in Figure 3.
From (a)-(c) in Figure 3, we can see that larger receptive fields progressively make autoregressive decoders capture more structural information. In (a), a smaller receptive field tends to preserve rather detailed shape information in the lossy code whereas the latent code only retains rough shape in (c) with a larger receptive field.
It’s interesting to also note that in (a)-(c), oftentimes color information is partially omitted from latent codes and one explanation can be that color is very predictable locally. However, color information can be important to preserve if our task is, for example, object classification. To demonstrate how we can encode color information in the lossy code, we can choose to make PixelCNN decoder depend only on images’ grayscale versions. In other words, instead of choosing the decoder to be plocal(x|z) = ∏ i p(xi|z,xWindowAround(i)), we use a decoder of the form
plocal(x|z) = ∏ i p(xi|z,Grayscale(xWindowAround(i))). In (d) of Figure 3, we visualize lossy codes for a VLAE that has the same receptive field size as (c) but uses a “grayscale receptive field”. We note that the lossy codes in (d) encode roughly the same structural information as those in (c) but generally generate objects that are more recognizable due to the preservation of color information. This serves as one example of how we can design the lossy latent code carefully to encode what’s important and what’s not.
5 RELATED WORK
We investigate a fusion between variational autoencoders with continuous latent variables (Kingma & Welling, 2013; Rezende et al., 2014) and neural autoregressive models. For autoregression, we specifically apply a novel type of architecture where autoregression is realised through a carefully
constructed deep convolutional network, introduced in the PixelCNN model for images (van den Oord et al., 2016a,b). These family of convolutional autoregressive models was further explored, and extended, for audio in WaveNet (Oord et al., 2016), video in Video Pixel Networks (Kalchbrenner et al., 2016b) and language in ByteNet (Kalchbrenner et al., 2016a).
The combination of latent variables with expressive decoder was previously explored using recurrent networks mainly in the context of language modeling (Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) has also proposed to weaken an otherwise too expressive decoder by dropout to force some information into latent codes.
Concurrent with our work, PixelVAE (Gulrajani et al., 2016) also explored using conditional PixelCNN as a VAE’s decoder and has obtained impressive density modeling results through the use of multiple levels of stochastic units.
Using autoregressive model on latent code was explored in the context of discrete latent variables in DARN (Gregor et al., 2013). Kingma et al. (2016), Kaae Sønderby et al. (2016), Gregor et al. (2016) and Salimans (2016) explored VAE architecture with an explicitly deep autoregressive prior for continuous latent variables, but the autoregressive data likelihood is intractable in those architectures and needs to inferred variationally. In contrast, we use multiple steps of autoregressive flows that has exact likelihood and analyze the effect of using expressive latent code.
Optimization challenges for using (all levels of) continuous latent code were discussed before and practical solutions were proposed (Bowman et al., 2015; Kaae Sønderby et al., 2016; Kingma et al., 2016). In this paper, we present a complementary perspective on when/how should the latent code be used by appealing to a Bits-Back interpretation of VAE.
Learning a lossy compressor with latent variable model has been investigated with ConvDRAW (Gregor et al., 2016). It learns a hierarchy of latent variables and just using high-level latent variables will result in a lossy compression that performs similarly to JPEG. Our model similarly learns a lossy compressor but it uses an autoregressive model to explicitly control what kind of information should be lost in compression.
6 CONCLUSION
In this paper, we analyze the condition under which the latent code in VAE should be used, i.e. when does VAE autoencode, and use this observation to design a VAE model that’s a lossy compressor of observed data. At modeling level, we propose two complementary improvements to VAE that are shown to have good empirical performance.
VLAE has the appealing properties of controllable representation learning and improved density estimation performance but these properties come at a cost: compared with VAE models that have simple prior and decoder, VLAE is slower at generation due to the sequential nature of autoregressive model.
Moving forward, we believe it’s exciting to extend this principle of learning lossy codes to other forms of data, in particular those that have a temporal aspect like audio and video. Another promising direction is to design representations that contain only information for downstream tasks and utilize those representations to improve semi-supervised learning.
A DETAILED EXPERIMENT SETUP FOR BINARY IMAGES
For VAE’s encoder and decoder, we use the same ResNet (He et al., 2015) VAE architecture as the one used in IAF MNIST experiment (Kingma et al., 2016). The only difference is that the decoder network now, instead of outputing a 28x28x1 spatial feature map to specify the mean of a factorized bernoulli distribution, outputs a 28x28x4 spatial feature map that’s concatenated with the original binary image channel-wise, forming a 28x28x5 feature map that’s then fed through a typical masked PixelCNN (van den Oord et al., 2016a). As such even though the PixelCNN conditions on the latent code, we don’t call it a Conditional PixelCNN because it doesn’t use the specific architecture that was proposed in van den Oord et al. (2016b). For the PixelCNN, it has 6 masked convolution layers with 12 3x3 filters organized in ResNet blocks, and it has 4 additional 1x1 convolution ResNet block between every other masked convolution layer to increase processing capacity since it employs fewer masked convolutions than usual. All the masked convolution layer have their weights tied to reduce overfitting on statically binarized MNIST, and untying the weights will increase performance for other datasets. Experiments are tuned on the validation set and then final experiment was run with train and validation set, with performance evaluated with test set. Exponential Linear Units (Clevert et al., 2015) are used as activation functions in both VAE network and PixelCNN network. Weight normalization is everywhere with data-dependent initialization (Salimans & Kingma, 2016).
A latent code of dimension 64 was used. For AF prior, it’s implemented with MADE (Germain et al., 2015) as detailed in Kingma et al. (2016). We used 4 steps of autoregressive flow and each flow is implemented by a 3-layer MADE that has 640 hidden units and uses Relu (Nair & Hinton, 2010) as activation functions. Differing from the practice of Kingma et al. (2016), we use mean-only autoregressive flow, which we found to be more numerically stable.
In terms of training, Adamax (Kingma & Ba, 2014) was used with a learning rate of 0.002. Free bits (Kingma et al., 2016) of 0.01 nats/data-dim were found to be effective in dealing with the problem of all the latent code being ignored early in training. Polyak averaging (Polyak & Juditsky, 1992) was used to compute the final parameters, with α = 0.998.
All experiments are implemented using TensorFlow (Abadi et al., 2016).
B ADDITIONAL EXPERIMENT SETUP FOR CIFAR10
Latent codes are represented by 16 feature maps of size 8x8; this choice of spatial stochastic units is inspired by the ResNet IAF VAE (Kingma et al., 2016). The prior distribution is factorized Gaussian noise transformed by 6 autoregressive flows, each of which is implemented by a PixelCNN (van den Oord et al., 2016a) with 2 hidden layers and 128 feature maps. Between every other autoregressive flow, the ordering of stochastic units is reversed.
ResNet VLAE has the following structure for the encoder: 2 ResNet blocks, Conv w/ stride=2, 2 ResNet blocks, Conv w/ stride=2, 3 ResNet blocks, 1x1 convolution, with a symmetric decoder. The channel size is 48 for 32x32 feature maps and 96 for other feature maps. DenseNet VLAE follows a similar structure: each pair of ResNet blocks is replaced with one DenseNet block of 3 steps, and each step produces a number of feature maps such that at the end of a block the concatenated feature maps slightly exceed those of the ResNet VLAE at the same stage.
Conditional PixelCNN++ (Salimans et al., 2017) is used as the decoder. Specifically, the channel-autoregressive variant is used to ensure there is sufficient capacity even when the receptive field is small. The decoder PixelCNN has 4 blocks of 64 feature maps, where each block is conditioned on previous blocks with Gated ResNet connections; hence the PixelCNN decoders we use are shallow but very wide. For the 4x2 receptive field experiment, we use 1 layer of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 5x3 receptive field experiment, we use 2 layers of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 7x4 receptive field experiment, we use 3 layers of vertical stack convolutions and 3 layers of horizontal stack convolutions; for the 7x4 Grayscale experiment, we transform RGB images into grayscale images via the specific transformation (0.299 * R) + (0.587 * G) + (0.114 * B). The best density estimation result is obtained with the 7x4 receptive field experiments.
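The grayscale transform above is the standard ITU-R BT.601 luma weighting; a one-line NumPy version (the channel-last array layout is our assumption) looks like:

```python
import numpy as np

def to_grayscale(rgb):
    """Y = 0.299*R + 0.587*G + 0.114*B for an array of shape (..., 3)."""
    return rgb @ np.array([0.299, 0.587, 0.114])
```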
C SOFT FREE BITS
"Free bits" was a technique proposed by Kingma et al. (2016) where K groups of stochastic units are encouraged to be used through the following surrogate objective:
$$\tilde{\mathcal{L}}_{\lambda} = \mathbb{E}_{x\sim\mathcal{M}}\left[\mathbb{E}_{q(z|x)}\left[\log p(x|z)\right]\right] - \sum_{j=1}^{K}\max\left(\lambda,\ \mathbb{E}_{x\sim\mathcal{M}}\left[D_{KL}(q(z_j|x)\,\|\,p(z_j))\right]\right)$$
This technique is easy to use since it's usually easy to determine the minimum number of bits/nats, λ, that stochastic units need to encode. Choosing λ is hence easier than setting a fixed KL annealing schedule (Serban et al., 2016).
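A small NumPy sketch of the surrogate KL term above (names are ours; `kl_per_group` would come from the model's per-group KL estimates over a minibatch M):

```python
import numpy as np

def free_bits_kl_term(kl_per_group, lam):
    """Surrogate KL penalty from the 'free bits' objective: each of the K groups
    pays at least `lam` nats, so gradients vanish for groups whose average KL is
    already below the floor.
    kl_per_group: shape (batch, K) of per-group D_KL(q(z_j|x) || p(z_j)) estimates."""
    avg_kl = kl_per_group.mean(axis=0)      # E_{x~M}[KL_j] for each group j
    return np.maximum(lam, avg_kl).sum()    # sum_j max(lambda, E[KL_j])
```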
On the other hand, KL annealing has the benefit that the surrogate objective smoothly becomes the true objective, the variational lower bound, whereas "free bits" has a sharp transition at the boundary. Therefore, we propose to still use λ as the hyperparameter specifying that at least λ nats should be used, but to change the optimization objective as slowly as possible:
$$\mathcal{L}_{\text{SoftFreeBits}}(x;\theta) = \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - \gamma\, D_{KL}(q(z|x)\,\|\,p(z))$$
where 0 < γ ≤ 1. We make the optimization smoother by changing γ slowly online to make sure that at least λ nats are used: when the KL is too much higher than λ (we experimented with a wide range of thresholds from 3% to 30%, all of which yield improved results, and we tend to use 5% as the threshold), γ is increased, and when the KL is lower than λ, γ is decreased to encourage information flow.
We found it sufficient to increase/decrease γ in 10% increments and didn't further tune this parameter.
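The update rule can be sketched as a small controller. Whether the 10% change is additive or multiplicative is not specified in the text, so the multiplicative choice below is our assumption:

```python
def update_soft_free_bits_gamma(gamma, kl, lam, slack=0.05, step=0.10):
    """Online gamma update for soft free bits: raise gamma by 10% when the KL
    exceeds lambda by more than the slack threshold, lower it by 10% when the
    KL drops below lambda; gamma stays in (0, 1]."""
    if kl > lam * (1.0 + slack):
        gamma = min(1.0, gamma * (1.0 + step))
    elif kl < lam:
        gamma = gamma * (1.0 - step)
    return gamma
```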
D AUTOREGRESSIVE DECODER WITHOUT AUTOREGRESSIVE PRIOR
In this section, we investigate the scenario of using an autoregressive decoder without an autoregressive prior. We compare the exact same model in three configurations: 1) using a small-receptive-field PixelCNN as an unconditional density estimator; 2) using a small-receptive-field PixelCNN as the decoder in a VAE with Gaussian latent variables; 3) replacing the Gaussian latent variables in 2) with autoregressive flow latent variables.
In Table 1, we can observe that each step of modification improves density estimation performance. In addition, using an autoregressive latent code makes the latent code transmit more information, as shown by the difference in $\mathbb{E}[D_{KL}(q(z|x)\,\|\,p(z))]$.
E CIFAR10 GENERATED SAMPLES | 1. What is the main contribution of the paper, and how does it relate to VAE-type models?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its information-theoretical insight and empirical evaluation?
3. How does the paper address the question of whether a latent representation on top of an autoregressive model improves density modeling performance?
4. What is the distinction between transforming the latent code with an autoregressive flow and parametrizing the approximate posterior with an inverse autoregressive flow transformation?
5. How might the choice of prior impact representation learning, and what considerations should be taken into account when selecting a prior?
6. Are there any limitations or potential drawbacks to using a powerful prior, and how might these be addressed? | Review | Review
This paper introduces the notion of a "variational lossy autoencoder", where a powerful autoregressive conditional distribution on the inputs x given the latent code z is crippled in a way that forces it to use z in a meaningful way. Its three main contributions are:
(1) It gives an interesting information-theoretical insight as to why VAE-type models don't tend to take advantage of their latent representation when the conditional distribution on x given z is powerful enough.
(2) It shows that this insight can be used to efficiently train VAEs with powerful autoregressive conditional distributions such that they make use of the latent code.
(3) It presents a powerful way to parametrize the prior in the form of an autoregressive flow transformation which is equivalent to using an inverse autoregressive flow transformation on the approximate posterior.
By itself, I think the information-theoretical explanation of why VAEs do not use their latent code when the conditional distribution on x given z is powerful enough constitutes an excellent addition to our understanding of VAE-related approaches.
However, the way this intuition is empirically evaluated is a bit weak. The "crippling" method used feels hand-crafted and very task-dependent, and the qualitative evaluation of the "lossyness" of the learned representation is carried out on three datasets (MNIST, OMNIGLOT and Caltech-101 Silhouettes) which feature black-and-white images with little-to-no texture. Figures 1a and 2a do show that reconstructions discard low-level information, as observed in the slight variations in strokes between the input and the reconstruction, but such an analysis would have been more compelling with more complex image datasets. Have the authors tried applying VLAE to such datasets?
I think the Caltech101 Silhouettes benchmark should be treated with caution, as no comparison is made against other competitive approaches like IAF VAE, PixelRNN and Conv DRAW. This means that VLAE significantly outperforms the state-of-the-art in only one of the four settings examined.
A question which is very relevant to this paper is "Does a latent representation on top of an autoregressive model help improve the density modeling performance?" The paper touches this question, but very briefly: the only setting in which VLAE is compared against recent autoregressive approaches shows that it wins against PixelRNN by a small margin.
The proposal to transform the latent code with an autoregressive flow which is equivalent to parametrizing the approximate posterior with an inverse autoregressive flow transformation is also interesting. There is, however, one important distinction to be made between the two approaches: in the former, the prior over the latent code can potentially be very complex whereas in the latter the prior is limited to be a simple, factorized distribution.
It is not clear to me that having a very powerful prior is necessarily a good thing from a representation learning point of view: oftentimes we are interested in learning a representation of the data distribution which is untangled and composed of roughly independent factors of variation. The degree to which this can be achieved using something as simple as a spherical gaussian prior is up for discussion, but finding a good balance between the ability of the prior to fit the data and its usefulness as a high-level representation certainly warrants some thought. I would be interested in hearing the authors' opinion on this.
Overall, the paper introduces interesting ideas despite the flaws outlined above, but weaknesses in the empirical evaluation prevent me from recommending its acceptance.
UPDATE: The rating has been revised to a 7 following the authors' reply. |
ICLR | Title
Variational Lossy Autoencoder
Abstract
Representation learning seeks to expose certain aspects of observed data in a learned representation that’s amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN. Our proposed VAE model allows us to have control over what the global latent code can learn and by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the VAE only “autoencodes” data in a lossy fashion. In addition, by leveraging autoregressive models as both prior distribution p(z) and decoding distribution p(x|z), we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks as well as competitive results on CIFAR10.
1 INTRODUCTION
A key goal of representation learning is to identify and disentangle the underlying causal factors of the data, so that it becomes easier to understand the data, to classify it, or to perform other tasks (Bengio et al., 2013). For image data this often means that we are interested in uncovering the “global structure” that captures the content of an image (for example, the identity of objects present in the image) and its “style”, but that we are typically less interested in the local and high frequency sources of variation such as the specific textures or white noise patterns.
A popular approach for learning representations is to fit a probabilistic latent variable model, an approach also known as analysis-by-synthesis (Yuille & Kersten, 2006; Nair et al., 2008). By learning a generative model of the data with the appropriate hierarchical structure of latent variables, it is hoped that the model will somehow uncover and untangle those causal sources of variations that we happen to be interested in. However, without further assumptions, representation learning via generative modeling is ill-posed: there are many different possible generative models with different (or no) kinds of latent variables that all encode the same probability density function on our observed data. Thus, the results we empirically get using this approach are highly dependent on the specific architectural and modeling choices that are made. Moreover, the objective that we optimize is often completely disconnected from the goal of learning a good representation: An autoregressive model of the data may achieve the same log-likelihood as a variational autoencoder (VAE) (Kingma & Welling, 2013), but the structure learned by the two models is completely different: the latter typically has a clear hierarchy of latent variables, while the autoregressive model has no stochastic latent variables at all (although it is conceivable that the deterministic hidden units of the autoregressive models will have meaningful and useful representations). For this reason, autoregressive models have thus far not been popular for the purpose of learning representations, even though they are extremely powerful as generative models (see e.g. van den Oord et al., 2016a).
A natural question becomes: is it possible to have a model that is a powerful density estimator and at the same time has the right hierarchical structure for representation learning? A potential solution would be to use a hybrid model that has both the latent variable structure of a VAE, as
well as the powerful recurrence of an autoregressive model. However, earlier attempts at combining these two kinds of models have run into the problem that the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) noted that weakening the autoregressive part of the model by, for example, dropout can encourage the latent variables to be used. We analyze why weakening is necessary, and we propose a principled solution that takes advantage of this property to control what kind of information goes into latent variables. The model we propose performs well as a density estimator, as evidenced by state-of-the-art log-likelihood results on MNIST, OMNIGLOT and Caltech-101, and also has a structure that is uniquely suited for learning interesting global representations of data.
2 VAES DO NOT AUTOENCODE IN GENERAL
A VAE is frequently interpreted as a regularized autoencoder (Kingma & Welling, 2013; Zhang et al., 2016), but the conditions under which it is guaranteed to autoencode (reconstruction being close to the original datapoint) are not discussed. In this section, we discuss the often-neglected fact that VAEs do not always autoencode and give explicit reasons why previous attempts to apply VAE in sequence modeling found that the latent code is generally not used unless the decoder is weakened (Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016). Understanding when a VAE does autoencode will be an essential building block for VLAE.
2.1 TECHNICAL BACKGROUND
Let x be observed variables, z latent variables and let p(x, z) be the parametric model of their joint distribution, called the generative model defined over the variables. Given a dataset X = {x1, ...,xN} we wish to perform maximum likelihood learning of its parameters:
$$\log p(X) = \sum_{i=1}^{N} \log p(x^{(i)}), \qquad (1)$$
but in general this marginal likelihood is intractable to compute or differentiate directly for flexible generative models that have high-dimensional latent variables and flexible priors and likelihoods. A solution is to introduce q(z|x), a parametric inference model defined over the latent variables, and optimize the variational lower bound on the marginal log-likelihood of each observation x:
$$\log p(x) \geq \mathbb{E}_{q(z|x)}\left[\log p(x, z) - \log q(z|x)\right] = \mathcal{L}(x; \theta) \qquad (2)$$
where $\theta$ indicates the parameters of the $p$ and $q$ models.
There are various ways to optimize the lower bound L(x; θ); for continuous z it can be done efficiently through a re-parameterization of q(z|x) (Kingma & Welling, 2013; Rezende et al., 2014). This way of optimizing the variational lower bound with a parametric inference network and reparameterization of continuous latent variables is usually called VAE. The “autoencoding” terminology comes from the fact that the lower bound L(x; θ) can be re-arranged:
$$\mathcal{L}(x; \theta) = \mathbb{E}_{q(z|x)}\left[\log p(x, z) - \log q(z|x)\right] \qquad (3)$$
$$= \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - D_{KL}(q(z|x)\,\|\,p(z)) \qquad (4)$$
where the first term can be seen as the expectation of negative reconstruction error and the KL divergence term can be seen as a regularizer, so the whole objective can be seen as a regularized autoencoder loss with q(z|x) being the encoder and p(x|z) being the decoder. In the context of 2D image modeling, the decoding distribution p(x|z) is usually chosen to be a simple factorized distribution, i.e., $p(x|z) = \prod_i p(x_i|z)$, and this setup often yields a sharp decoding distribution p(x|z) that tends to reconstruct the original datapoint x exactly.
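For concreteness, a single-sample NumPy estimate of Eq. (4) with a diagonal Gaussian q(z|x) and a standard normal prior might look like the following sketch (the decoder callback `log_px_given_z` is a placeholder of ours):

```python
import numpy as np

def gaussian_vae_elbo(x, mu, log_var, log_px_given_z, rng):
    """Single-sample ELBO estimate (Eq. 4) via the reparameterization trick,
    for factorized Gaussian q(z|x) = N(mu, diag(exp(log_var))) and p(z) = N(0, I)."""
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps  # z ~ q(z|x), reparameterized
    # Closed-form KL(q(z|x) || N(0, I)) for diagonal Gaussians:
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return log_px_given_z(x, z) - kl
```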
2.2 BITS-BACK CODING AND INFORMATION PREFERENCE
It's straightforward to see that having a more powerful p(x|z) will make VAE's marginal generative distribution $p(x) = \int_z p(z)p(x|z)\,dz$ more expressive. This idea has been explored extensively in previous work applying VAE to sequence modeling (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016), where the decoding distribution is a powerful RNN with autoregressive dependency, i.e., $p(x|z) = \prod_i p(x_i|z, x_{<i})$. Since RNNs are universal function approximators and any joint distribution over x admits an autoregressive factorization, the RNN autoregressive decoding distribution can in theory represent any probability distribution even without dependence on z.
However, previous attempts have found it hard to benefit from VAE when using an expressive decoding distribution p(x|z). Indeed, it's documented in detail by Bowman et al. (2015) that in most cases when an RNN autoregressive decoding distribution is used, the latent code z is completely ignored and the model regresses to a standard unconditional RNN autoregressive distribution that doesn't depend on the latent code. This phenomenon is commonly attributed to "optimization challenges" of VAE in the literature (Bowman et al., 2015; Serban et al., 2016; Kaae Sønderby et al., 2016), because early in training the approximate posterior q(z|x) carries little information about the datapoint x, and hence it's easy for the model to just set the approximate posterior to the prior to avoid paying any regularization cost DKL(q(z|x)||p(z)). Here we present a simple but often-neglected observation that this phenomenon arises not just due to optimization challenges: even if we could solve the optimization problem exactly, the latent code should still be ignored at the optimum for most practical instances of VAE that have intractable true posterior distributions and sufficiently powerful decoders. It is easiest to understand this observation from a Bits-Back Coding perspective of VAE.
It is well-known that Bits-Back Coding is an information-theoretic view of Variational Inference (Hinton & Van Camp, 1993; Honkela & Valpola, 2004) and specific links have been established between Bits-Back Coding and the Helmholtz Machine/VAE (Hinton & Zemel, 1994; Gregor et al., 2013). Here we briefly relate VAE to Bits-Back Coding for self-containedness:
First recall that the goal of designing an efficient coding protocol is to minimize the expected code length of communicating x. To explain Bits-Back Coding, let’s first consider a more naive coding scheme. VAE can be seen as a way to encode data in a two-part code: p(z) and p(x|z), where z can be seen as the essence/structure of a datum and is encoded first and then the modeling error (deviation from z’s structure) is encoded next. The expected code length under this naive coding scheme for a given data distribution is hence:
$$C_{\text{naive}}(x) = \mathbb{E}_{x\sim\text{data},\,z\sim q(z|x)}\left[-\log p(z) - \log p(x|z)\right] \qquad (5)$$
This coding scheme is, however, inefficient. Bits-Back Coding improves on it by noticing that the encoder distribution q(z|x) can be used to transmit additional information, up to H(q(z|x)) expected nats, as long as the receiver also has access to q(z|x). The decoding scheme works as follows: a receiver first decodes z from p(z), then decodes x from p(x|z) and, by running the same approximate posterior that the sender is using, decodes a secondary message from q(z|x). Hence, to properly measure the code length of VAE’s two-part code, we need to subtract the extra information from q(z|x). Using Bit-Back Coding, the expected code length equates to the negative variational lower bound or the so-called Helmholtz variational free energy, which means minimizing code length is equivalent to maximizing the variational lower bound:
$$C_{\text{BitsBack}}(x) = \mathbb{E}_{x\sim\text{data},\,z\sim q(z|x)}\left[\log q(z|x) - \log p(z) - \log p(x|z)\right] \qquad (6)$$
$$= \mathbb{E}_{x\sim\text{data}}\left[-\mathcal{L}(x)\right] \qquad (7)$$
Casting the problem of optimizing VAE into designing an efficient coding scheme easily allows us to reason about when the latent code z will be used: the latent code z will be used when the two-part code is an efficient code. Recalling that the lower bound on the expected code length for data is given by the Shannon entropy of the data generating distribution, $\mathcal{H}(\text{data}) = \mathbb{E}_{x\sim\text{data}}\left[-\log p_{\text{data}}(x)\right]$, we can analyze VAE's coding efficiency:
$$C_{\text{BitsBack}}(x) = \mathbb{E}_{x\sim\text{data},\,z\sim q(z|x)}\left[\log q(z|x) - \log p(z) - \log p(x|z)\right] \qquad (8)$$
$$= \mathbb{E}_{x\sim\text{data}}\left[-\log p(x) + D_{KL}(q(z|x)\,\|\,p(z|x))\right] \qquad (9)$$
$$\geq \mathbb{E}_{x\sim\text{data}}\left[-\log p_{\text{data}}(x) + D_{KL}(q(z|x)\,\|\,p(z|x))\right] \qquad (10)$$
$$= \mathcal{H}(\text{data}) + \mathbb{E}_{x\sim\text{data}}\left[D_{KL}(q(z|x)\,\|\,p(z|x))\right] \qquad (11)$$
Since the Kullback-Leibler divergence is always non-negative, we know that the two-part code derived from VAE suffers at least an extra code length of $D_{KL}(q(z|x)\,\|\,p(z|x))$ nats for using a posterior that's not precise. Many previous works in variational inference have designed flexible approximate posteriors to better approximate the true posterior (Salimans et al., 2014; Rezende & Mohamed, 2015; Tran et al., 2015; Kingma et al., 2016). Improved posterior approximations have been shown to be effective in improving variational inference, but none of the existing methods are able to completely close the gap between approximate posterior and true posterior. This leads us to believe that for most practical models, at least in the near future, the extra coding cost $D_{KL}(q(z|x)\,\|\,p(z|x))$ will exist and will not be negligible.
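Equations (8)-(11) can be checked numerically on a toy model where everything is enumerable. The probabilities below are arbitrary choices of ours, and we set the data distribution equal to the model marginal so that the bound in (10) is tight:

```python
import numpy as np

# Toy check of Eq. (8)-(11): binary z and binary x, everything enumerable.
p_z = np.array([0.3, 0.7])                        # prior p(z)
p_x_given_z = np.array([[0.9, 0.1],               # rows: z, cols: x
                        [0.2, 0.8]])
p_x = p_z @ p_x_given_z                           # model marginal; take p_data = p_x
p_z_given_x = (p_z[:, None] * p_x_given_z) / p_x  # true posterior, indexed [z, x]
q_z_given_x = np.array([[0.6, 0.2],               # an imperfect approximate posterior
                        [0.4, 0.8]])

bits_back = sum(p_x[x] * q_z_given_x[z, x] *
                (np.log(q_z_given_x[z, x]) - np.log(p_z[z]) - np.log(p_x_given_z[z, x]))
                for x in range(2) for z in range(2))
h_data = -np.sum(p_x * np.log(p_x))
gap = sum(p_x[x] * q_z_given_x[z, x] *
          np.log(q_z_given_x[z, x] / p_z_given_x[z, x])
          for x in range(2) for z in range(2))
assert np.isclose(bits_back, h_data + gap)        # Eq. (11) holds exactly here
```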
Once we understand the inefficiency of the Bits-Back Coding mechanism, it’s simple to realize why sometimes the latent code z is not used: if the p(x|z) could model pdata(x) without using information from z, then it will not use z, in which case the true posterior p(z|x) is simply the prior p(z) and it’s usually easy to set q(z|x) to be p(z) to avoid incurring an extra cost DKL(q(z|x)||p(z|x)). And it’s exactly the case when a powerful decoding distribution is used like an RNN autoregressive distribution, which given enough capacity is able to model arbitrarily complex distributions. Hence there exists a preference of information when a VAE is optimized: information that can be modeled locally by decoding distribution p(x|z) without access to z will be encoded locally and only the remainder will be encoded in z.
We note that one common way to encourage putting information into the code is to use a factorized decoder $p(x|z) = \prod_i p(x_i|z)$, but as long as there is one dimension $x_j$ that's independent of all other dimensions under the true data distribution, $p_{\text{data}}(x) = p_{\text{data}}(x_j)p_{\text{data}}(x_{\neq j})$, the latent code doesn't contain all the information about x, since at least $x_j$ will be modeled locally by the factorized p(x|z). This kind of independence structure rarely exists in images, so common VAEs that have factorized decoders autoencode almost exactly. Other techniques to encourage the usage of the latent code include annealing the relative weight of $D_{KL}(q(z|x)\,\|\,p(z))$ in the variational lower bound (Bowman et al., 2015; Kaae Sønderby et al., 2016) or the use of free bits (Kingma et al., 2016), which can serve the dual purpose of smoothing the optimization landscape and canceling out part of the Bits-Back Code inefficiency $D_{KL}(q(z|x)\,\|\,p(z|x))$.
3 VARIATIONAL LOSSY AUTOENCODER
The discussion in Section 2.2 suggests that autoregressive models cannot be combined with VAE since information will be preferred to be modeled by autoregressive models. Nevertheless, in this section, we present two complementary classes of improvements to VAE that utilize autoregressive models fruitfully to explicitly control representation learning and improve density estimation.
3.1 LOSSY CODE VIA EXPLICIT INFORMATION PLACEMENT
Even though the information preference property of VAE might suggest that one should always use the full autoregressive models to achieve a better code length/log-likelihood, especially when slow data generation is not a concern, we argue that this information preference property can be exploited to turn the VAE into a powerful representation learning method that gives us fine-grained control over the kind of information that gets included in the learned representation.
When we try to learn a lossy compression/representation of data, we can simply construct a decoding distribution that’s capable of modeling the part of information that we don’t want the lossy representation to capture, but, critically, that’s incapable of modelling the information that we do want the lossy representation to capture.
For instance, if we are interested in learning a global representation for 2D images that doesn't encode information about detailed texture, we can construct a specific factorization of the autoregressive distribution such that it has a small local receptive field as decoding distribution, e.g., $p_{\text{local}}(x|z) = \prod_i p(x_i|z, x_{\text{WindowAround}(i)})$. Notice that, as long as $x_{\text{WindowAround}(i)}$ is smaller than $x_{<i}$, $p_{\text{local}}(x|z)$ won't be able to represent arbitrarily complex distributions over x without dependence on z, since the receptive field is limited such that not all distributions over x admit such factorizations. In particular, the receptive field window can be a small rectangle adjacent to a pixel $x_i$, and in this case long-range dependency will be encoded in the latent code z. On the other hand, if the true data distribution admits such a factorization for a given datum x and dimension i, i.e., $p_{\text{data}}(x_i|x_{\text{WindowAround}(i)}) = p_{\text{data}}(x_i|x_{<i})$, then the information preference property discussed in Section 2.2 will apply here, which means that all the information will be encoded in the local autoregressive distribution for $x_i$. Local statistics of 2D images like texture will likely be modeled completely by a small local window, whereas global structural information of an image, like the shapes of objects, is long-range dependency that can only be communicated through the latent code z. Therefore we have given an example VAE that will produce a lossy compression of 2D images carrying exclusively global information that can't be modeled locally.
Notice that a global representation is only one of many possible lossy representations that we can construct using this information preference property. For instance, the conditional of an autoregressive distribution might depend on a heavily down-sampled receptive field so that it can only model long-range pattern whereas local high-frequency statistics need to be encoded into the latent code. Hence we have demonstrated that we can achieve explicit placement of information by constraining the receptive field/factorization of an autoregressive distribution that’s used as decoding distribution.
We want to additionally emphasize that the information preference property is an asymptotic view, in the sense that it only holds when the variational lower bound can be optimized well. Thus, we are not proposing an alternative to techniques like free bits (Kingma et al., 2016) or KL annealing; indeed, they are still useful methods to smooth the optimization problem and are used in this paper's experiments.
3.2 LEARNED PRIOR WITH AUTOREGRESSIVE FLOW
Inefficiency in Bits-Back Coding, i.e., the mismatch between approximate posterior and true posterior, can be exploited to construct a lossy code but it’s still important to minimize such inefficiency to improve overall modeling performance/coding efficiency. We propose to parametrize the prior distribution p(z; θ) with an autoregressive model and show that a type of autoregressive latent code can in theory reduce inefficiency in Bits-Back coding.
It is well-known that limited approximate posteriors impede learning, and therefore various expressive posterior approximations have been proposed to improve VAE's density estimation performance (Turner et al., 2008; Mnih & Gregor, 2014; Salimans et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016). One such class of approximate posteriors that has been shown to attain good empirical performance is based on the idea of Normalizing Flow, which is to apply an invertible mapping to a simple random variable, for example a factorized Gaussian as commonly used for q(z|x), in order to obtain a complicated random variable. For an invertible transformation between a simple distribution y and a more flexible z, we know from the change-of-variable technique that $\log q(z|x) = \log q(y|x) - \log\det\frac{dz}{dy}$, and using q(z|x) as the approximate posterior will decrease the coding efficiency gap $D_{KL}(q(z|x)\,\|\,p(z|x))$ provided the transformation is sufficiently expressive. Kingma et al. (2016) introduced Inverse Autoregressive Flow, which is a powerful class of such invertible mappings that have a simple determinant: $z_i = \frac{y_i - \mu_i(y_{1:i-1})}{\sigma_i(y_{1:i-1})}$, where $\mu_i(\cdot) \in \mathbb{R}$, $\sigma_i(\cdot) \in \mathbb{R}^{+}$ are general functions that can be parametrized by expressive neural networks, such as MADE and PixelCNN variants (Germain et al., 2015; van den Oord et al., 2016a). Inverse autoregressive flow is the inverse/whitening of autoregressive flow: $y_i = z_i \sigma_i(y_{1:i-1}) + \mu_i(y_{1:i-1})$. We refer interested readers to (Rezende & Mohamed, 2015; Kingma et al., 2016) for in-depth discussions on related topics.
In this paper, we propose to parametrize our learnable prior as an autoregressive flow from some simple noise source like spherical Gaussian. Next, we show that using latent code transformed by autoregressive flow (AF) is equivalent to using inverse autoregressive flow (IAF) approximate posterior, which explains why it can similarly improve Bits-Back Coding efficiency. Moreover, compared with an IAF posterior, an AF prior has a more expressive generative model that essentially “comes for free”.
For an autoregressive flow $f$, some continuous noise source $\epsilon$ is transformed into the latent code $z$: $z = f(\epsilon)$. Assuming the density function for the noise source is $u(\epsilon)$, we similarly know that $\log p(z) = \log u(\epsilon) + \log\det\frac{d\epsilon}{dz}$.
Simply re-arranging the variational lower bound for an AF prior reveals that having an AF latent code $z$ is equivalent to using an IAF posterior over $\epsilon$, which we can interpret as the new latent code:
$$\mathcal{L}(x; \theta) = \mathbb{E}_{z\sim q(z|x)}\left[\log p(x|z) + \log p(z) - \log q(z|x)\right] \qquad (12)$$
$$= \mathbb{E}_{z\sim q(z|x),\,\epsilon=f^{-1}(z)}\left[\log p(x|f(\epsilon)) + \log u(\epsilon) + \log\det\tfrac{d\epsilon}{dz} - \log q(z|x)\right] \qquad (13)$$
$$= \mathbb{E}_{z\sim q(z|x),\,\epsilon=f^{-1}(z)}\Big[\log p(x|f(\epsilon)) + \log u(\epsilon) - \underbrace{\big(\log q(z|x) - \log\det\tfrac{d\epsilon}{dz}\big)}_{\text{IAF posterior}}\Big] \qquad (14)$$
The AF prior is the same as an IAF posterior along the encoder path, $f^{-1}(q(z|x))$, but differs along the decoder/generator path: the IAF posterior has a shorter decoder path $p(x|z)$, whereas the AF prior has a deeper decoder path $p(x|f(\epsilon))$. The crucial observation is that the AF prior and the IAF posterior have the same computation cost under the expectation of $z \sim q(z|x)$, so using an AF prior makes the model more expressive at no extra training time cost.
4 EXPERIMENTS
In this paper, we evaluate VLAE on 2D images and leave extensions to other forms of data to future work. For the rest of the section, we define a VLAE model as a VAE that uses an AF prior and an autoregressive decoder. We choose to implement the conditional distribution p(x|z) with a small-receptive-field PixelCNN (van den Oord et al., 2016a), which has proved to be a scalable autoregressive model.
For evaluation, we use binary image datasets that are commonly used for density estimation tasks: MNIST (LeCun et al., 1998) (both the statically binarized¹ and dynamically binarized versions (Burda et al., 2015a)), OMNIGLOT (Lake et al., 2013; Burda et al., 2015a) and Caltech-101 Silhouettes (Marlin et al., 2010). All datasets uniformly consist of 28x28 binary images, which allows us to use a unified architecture. The VAE networks used on the binary image datasets are simple variants of the ResNet VAEs described in (Salimans et al., 2014; Kingma et al., 2016). For the decoder, we use a variant of PixelCNN that has 6 layers of masked convolution with filter size 3, which means the window of dependency, $x_{\text{WindowAround}(i)}$, is limited to a small local patch. During training, "free bits" (Kingma et al., 2016) is used to improve optimization stability. The experimental setup and hyperparameters are detailed in the appendix. Reported marginal NLL is estimated using importance sampling with 4096 samples.
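The importance sampling estimator referred to here is the standard one; a short sketch (the function arguments are placeholders of ours):

```python
import numpy as np

def importance_sampled_nll(log_joint, log_q, sample_q, x, n=4096):
    """-log p(x) ~= -logmeanexp_k[ log p(x, z_k) - log q(z_k|x) ], z_k ~ q(z|x)."""
    log_w = np.empty(n)
    for k in range(n):
        z = sample_q(x)                           # z_k ~ q(z|x)
        log_w[k] = log_joint(x, z) - log_q(z, x)  # log importance weight
    return -(np.logaddexp.reduce(log_w) - np.log(n))  # stable log-mean-exp
```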
We designed experiments to answer the following questions:
• Can VLAE learn lossy codes that encode global statistics?
• Does using AF priors improve upon using IAF posteriors, as predicted by theory?
• Does using autoregressive decoding distributions improve density estimation performance?
4.1 LOSSY COMPRESSION
First we are interested in whether VLAE can learn a lossy representation/compression of data by using the PixelCNN decoder to model local statistics. We trained a VLAE model on statically binarized MNIST, and the converged model has $\mathbb{E}[D_{KL}(q(z|x)\,\|\,p(z))] = 13.3$ nats $= 19.2$ bits, which is the number of bits it uses on average to encode/compress one MNIST image. By comparison, an identical VAE model with a factorized decoding distribution uses on average 37.3 bits in the latent code; this indicates that VLAE can learn a lossier compression than a VAE with a regular factorized conditional distribution.
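The nats-to-bits conversion used here is just division by ln 2:

```python
import numpy as np
print(13.3 / np.log(2))  # ~19.19 bits: 1 nat = 1/ln(2) ~ 1.443 bits
```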
The next question is whether VLAE's lossy compression encodes global statistics and discards local statistics. In Fig 1a, we visualize original images $x_{\text{data}}$ and one random "decompression" $x_{\text{decompressed}}$ from VLAE: $z \sim q(z|x_{\text{data}})$, $x_{\text{decompressed}} \sim p(x|z)$. We observe that none of the decompressions is an exact reconstruction of the original image; instead, the global structure of the image was encoded in the lossy code z and regenerated. Also worth noting is that local statistics are not preserved, but a new set of likely local statistics is generated in the decompressed images: the binary masks are usually different, and local styles like stroke width are sometimes slightly different.
¹We use the version provided by Hugo Larochelle.
However, we remark that the lossy code z doesn't always capture the kind of global information that we care about; this depends on the type of constraint we put on the decoder. For instance, in Fig 4b, we show decompressions for the OMNIGLOT dataset, which has more meaningful variations in small patches than MNIST, and we can observe that semantics are not preserved in some cases. This highlights the need to specify the type of statistics we care about in a representation, which will be different across tasks and datasets, and to design the decoding distribution accordingly.
4.2 DENSITY ESTIMATION
Next we investigate whether leveraging autoregressive models as latent distribution p(z) and as decoding distribution p(x|z) would improve density estimation performance. To verify whether AF prior is able to improve upon IAF posterior alone, it’s desirable to test this model without using autoregressive decoder but instead using the conventional independent Bernoulli distribution for p(x|z). Hence we use the best performing model from Kingma et al.
(2016) on statically binarized MNIST and make the single modification of replacing the original IAF posterior with an equivalent AF prior, removing the context. As seen in Table 1, VAE with AF prior is outperforming VAE with an equivalent IAF posterior, indicating that the deeper generative model from AF prior is beneficial. A similar gain carries over when an autoregressive decoder is used: on statically binarized MNIST, using AF prior instead of IAF posterior reduces train NLL by 0.8 nat and test NLL by 0.6 nat.
Next we evaluate whether using an autoregressive decoding distribution can improve performance, and we show in Table 1 that a VLAE model, with AF prior and PixelCNN conditional, is able to outperform a VAE with just an AF prior and achieves new state-of-the-art results on statically binarized MNIST.
In addition, we hypothesize that the separation of different types of information, modeling global structure in the latent code and local statistics in the PixelCNN, likely provides a good inductive bias for 2D images. In order to evaluate whether VLAE is an expressive density estimator with good inductive biases, we test a single VLAE model, with the same network architecture, on all binary datasets. We choose hyperparameters manually on statically binarized MNIST and use the same hyperparameters to evaluate on dynamically binarized MNIST, OMNIGLOT and Caltech-101 Silhouettes. We also note that better performance can be obtained if we individually tune hyperparameters for each dataset. As a concrete demonstration, we report the performance of a fine-tuned VLAE on the OMNIGLOT dataset in Table 3.
As seen in Tables 2, 3 and 4, with the same set of hyperparameters tuned on statically binarized MNIST, VLAE is able to perform well on the rest of the datasets, significantly exceeding previous state-of-the-art results on dynamically binarized MNIST and Caltech-101 Silhouettes and tying statistically with the best previous result on OMNIGLOT. In order to isolate the effect of the expressive PixelCNN decoder, we also report the performance of the same PixelCNN trained without the VAE part under the name "Unconditional Decoder".
4.3 NATURAL IMAGES: CIFAR10
In addition to binary image datasets, we have applied VLAE to the CIFAR10 dataset of natural images. Density estimation of CIFAR10 images has been a challenging benchmark problem used by many recent generative models, and hence it is a great task for positioning VLAE among existing methods.
We investigated using ResNet (He et al., 2016) and DenseNet (Huang et al., 2016) as building blocks for the VAE networks and observed that DenseNet reduces overfitting. We also propose a new optimization technique that blends the advantages of KL annealing (Serban et al., 2016) and "free bits" (Kingma et al., 2016) to stabilize learning on this challenging dataset. The detailed experimental setup is described in the Appendix.
VLAE is compared to other methods on CIFAR10 in Table 5. We show that VLAE models attain new state-of-the-art performance among other variationally trained latent-variable models. DenseNet VLAE model also outperforms most other tractable likelihood models including Gated PixelCNN and PixelRNN and has results only slightly worse than currently unarchived state-of-the-art PixelCNN++.
We also investigate learning lossy codes on CIFAR10 images. To illustrate how the receptive field size of the PixelCNN decoder influences properties of the learned latent codes, we show visualizations of similar VLAE models with receptive fields of different sizes. Specifically, we say a receptive field, $x_{\text{WindowAround}(i)}$, has size AxB when a pixel $x_i$ can depend on the rectangular block of size AxB immediately on top of $x_i$, as well as the $\lceil (A-1)/2 \rceil$ pixels immediately to the left of $x_i$. We use this notation to refer to different types of PixelCNN decoders in Figure 3.
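Our reading of this notation can be made precise with a small helper that marks which pixels an AxB receptive field lets pixel (r, c) condition on; the centering convention of the block above the pixel is an assumption of ours:

```python
import numpy as np

def receptive_field_mask(A, B, H, W, r, c):
    """Pixels that (r, c) may condition on: a B-tall x A-wide block immediately
    above it, plus ceil((A - 1) / 2) pixels immediately to its left."""
    mask = np.zeros((H, W), dtype=bool)
    left = A // 2                                      # == ceil((A - 1) / 2)
    c0 = max(0, c - left)
    mask[max(0, r - B):r, c0:min(W, c0 + A)] = True    # block above the pixel
    mask[r, c0:c] = True                               # pixels to the left
    return mask
```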
From (a)-(c) in Figure 3, we can see that larger receptive fields progressively make autoregressive decoders capture more structural information. In (a), a smaller receptive field tends to preserve rather detailed shape information in the lossy code whereas the latent code only retains rough shape in (c) with a larger receptive field.
It's interesting to also note that in (a)-(c), oftentimes color information is partially omitted from the latent codes, and one explanation is that color is very predictable locally. However, color information can be important to preserve if our task is, for example, object classification. To demonstrate how we can encode color information in the lossy code, we can choose to make the PixelCNN decoder depend only on images' grayscale versions. In other words, instead of choosing the decoder to be $p_{\text{local}}(x|z) = \prod_i p(x_i|z, x_{\text{WindowAround}(i)})$, we use a decoder of the form $p_{\text{local}}(x|z) = \prod_i p(x_i|z, \text{Grayscale}(x_{\text{WindowAround}(i)}))$. In (d) of Figure 3, we visualize lossy codes for a VLAE that has the same receptive field size as (c) but uses a "grayscale receptive field". We note that the lossy codes in (d) encode roughly the same structural information as those in (c) but generally generate objects that are more recognizable due to the preservation of color information. This serves as one example of how we can design the lossy latent code carefully to encode what's important and what's not.
5 RELATED WORK
We investigate a fusion between variational autoencoders with continuous latent variables (Kingma & Welling, 2013; Rezende et al., 2014) and neural autoregressive models. For autoregression, we specifically apply a novel type of architecture where autoregression is realised through a carefully
constructed deep convolutional network, introduced in the PixelCNN model for images (van den Oord et al., 2016a,b). This family of convolutional autoregressive models was further explored and extended: for audio in WaveNet (Oord et al., 2016), for video in Video Pixel Networks (Kalchbrenner et al., 2016b) and for language in ByteNet (Kalchbrenner et al., 2016a).
The combination of latent variables with expressive decoder was previously explored using recurrent networks mainly in the context of language modeling (Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) has also proposed to weaken an otherwise too expressive decoder by dropout to force some information into latent codes.
Concurrent with our work, PixelVAE (Gulrajani et al., 2016) also explored using conditional PixelCNN as a VAE’s decoder and has obtained impressive density modeling results through the use of multiple levels of stochastic units.
Using an autoregressive model on the latent code was explored in the context of discrete latent variables in DARN (Gregor et al., 2013). Kingma et al. (2016), Kaae Sønderby et al. (2016), Gregor et al. (2016) and Salimans (2016) explored VAE architectures with an explicitly deep autoregressive prior for continuous latent variables, but the autoregressive data likelihood is intractable in those architectures and needs to be inferred variationally. In contrast, we use multiple steps of autoregressive flows that have exact likelihoods, and we analyze the effect of using an expressive latent code.
Optimization challenges for using (all levels of) continuous latent code were discussed before, and practical solutions were proposed (Bowman et al., 2015; Kaae Sønderby et al., 2016; Kingma et al., 2016). In this paper, we present a complementary perspective on when/how the latent code should be used by appealing to a Bits-Back interpretation of VAE.
Learning a lossy compressor with a latent variable model has been investigated with ConvDRAW (Gregor et al., 2016). It learns a hierarchy of latent variables, and using only the high-level latent variables results in a lossy compression that performs similarly to JPEG. Our model similarly learns a lossy compressor, but it uses an autoregressive model to explicitly control what kind of information should be lost in compression.
6 CONCLUSION
In this paper, we analyze the condition under which the latent code in VAE should be used, i.e., when a VAE autoencodes, and use this observation to design a VAE model that is a lossy compressor of observed data. At the modeling level, we propose two complementary improvements to VAE that are shown to have good empirical performance.
VLAE has the appealing properties of controllable representation learning and improved density estimation performance, but these properties come at a cost: compared with VAE models that have a simple prior and decoder, VLAE is slower at generation due to the sequential nature of the autoregressive model.
Moving forward, we believe it’s exciting to extend this principle of learning lossy codes to other forms of data, in particular those that have a temporal aspect like audio and video. Another promising direction is to design representations that contain only information for downstream tasks and utilize those representations to improve semi-supervised learning.
A DETAILED EXPERIMENT SETUP FOR BINARY IMAGES
For VAE's encoder and decoder, we use the same ResNet (He et al., 2015) VAE architecture as the one used in the IAF MNIST experiment (Kingma et al., 2016). The only difference is that the decoder network now, instead of outputting a 28x28x1 spatial feature map to specify the mean of a factorized Bernoulli distribution, outputs a 28x28x4 spatial feature map that's concatenated with the original binary image channel-wise, forming a 28x28x5 feature map that's then fed through a typical masked PixelCNN (van den Oord et al., 2016a). As such, even though the PixelCNN conditions on the latent code, we don't call it a Conditional PixelCNN because it doesn't use the specific architecture that was proposed in van den Oord et al. (2016b). The PixelCNN has 6 masked convolution layers with 12 3x3 filters organized in ResNet blocks, and it has 4 additional 1x1 convolution ResNet blocks between every other masked convolution layer to increase processing capacity, since it employs fewer masked convolutions than usual. All the masked convolution layers have their weights tied to reduce overfitting on statically binarized MNIST; untying the weights increases performance for other datasets. Experiments are tuned on the validation set, and then the final experiment was run with the train and validation sets, with performance evaluated on the test set. Exponential Linear Units (Clevert et al., 2015) are used as activation functions in both the VAE network and the PixelCNN network. Weight normalization is used everywhere with data-dependent initialization (Salimans & Kingma, 2016).
A latent code of dimension 64 was used. The AF prior is implemented with MADE (Germain et al., 2015) as detailed in Kingma et al. (2016). We used 4 steps of autoregressive flow, and each flow is implemented by a 3-layer MADE that has 640 hidden units and uses ReLU (Nair & Hinton, 2010) as the activation function. Differing from the practice of Kingma et al. (2016), we use mean-only autoregressive flow, which we found to be more numerically stable.
In terms of training, Adamax (Kingma & Ba, 2014) was used with a learning rate of 0.002. Free bits (Kingma et al., 2016) of 0.01 nats/data-dim were found to be effective in dealing with the problem of all the latent code being ignored early in training. Polyak averaging (Polyak & Juditsky, 1992) was used to compute the final parameters, with α = 0.998.
All experiments are implemented using TensorFlow (Abadi et al., 2016).
B ADDITIONAL EXPERIMENT SETUP FOR CIFAR10
Latent codes are represented by 16 feature maps of size 8x8; this choice of spatial stochastic units is inspired by the ResNet IAF VAE (Kingma et al., 2016). The prior distribution is factorized Gaussian noise transformed by 6 autoregressive flows, each of which is implemented by a PixelCNN (van den Oord et al., 2016a) with 2 hidden layers and 128 feature maps. Between every other autoregressive flow, the ordering of stochastic units is reversed.
ResNet VLAE has the following structure for the encoder: 2 ResNet blocks, Conv w/ stride=2, 2 ResNet blocks, Conv w/ stride=2, 3 ResNet blocks, 1x1 convolution, with a symmetric decoder. The channel size is 48 for 32x32 feature maps and 96 for other feature maps. DenseNet VLAE follows a similar structure: each pair of ResNet blocks is replaced with one DenseNet block of 3 steps, and each step produces a number of feature maps such that at the end of a block the concatenated feature maps slightly exceed those of the ResNet VLAE at the same stage.
Conditional PixelCNN++ (Salimans et al., 2017) is used as the decoder. Specifically, the channel-autoregressive variant is used to ensure there is sufficient capacity even when the receptive field is small. The decoder PixelCNN has 4 blocks of 64 feature maps, where each block is conditioned on previous blocks with Gated ResNet connections; hence the PixelCNN decoders we use are shallow but very wide. For the 4x2 receptive field experiment, we use 1 layer of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 5x3 receptive field experiment, we use 2 layers of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 7x4 receptive field experiment, we use 3 layers of vertical stack convolutions and 3 layers of horizontal stack convolutions; for the 7x4 Grayscale experiment, we transform RGB images into grayscale images via the specific transformation (0.299 * R) + (0.587 * G) + (0.114 * B). The best density estimation result is obtained with the 7x4 receptive field experiments.
C SOFT FREE BITS
"Free bits" was a technique proposed by Kingma et al. (2016) where K groups of stochastic units are encouraged to be used through the following surrogate objective:
$$\tilde{\mathcal{L}}_{\lambda} = \mathbb{E}_{x\sim\mathcal{M}}\left[\mathbb{E}_{q(z|x)}\left[\log p(x|z)\right]\right] - \sum_{j=1}^{K}\max\left(\lambda,\ \mathbb{E}_{x\sim\mathcal{M}}\left[D_{KL}(q(z_j|x)\,\|\,p(z_j))\right]\right)$$
This technique is easy to use since it's usually easy to determine the minimum number of bits/nats, λ, that stochastic units need to encode. Choosing λ is hence easier than setting a fixed KL annealing schedule (Serban et al., 2016).
On the other hand, KL annealing has the benefit that the surrogate objective smoothly becomes the true objective, the variational lower bound, whereas "free bits" has a sharp transition at the boundary. Therefore, we propose to still use λ as the hyperparameter specifying that at least λ nats should be used, but to change the optimization objective as slowly as possible:
$$\mathcal{L}_{\text{SoftFreeBits}}(x;\theta) = \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - \gamma\, D_{KL}(q(z|x)\,\|\,p(z))$$
where 0 < γ ≤ 1. We make the optimization smoother by changing γ slowly online to make sure that at least λ nats are used: when the KL is too much higher than λ (we experimented with a wide range of thresholds from 3% to 30%, all of which yield improved results, and we tend to use 5% as the threshold), γ is increased, and when the KL is lower than λ, γ is decreased to encourage information flow.
We found it sufficient to increase/decrease γ in 10% increments and didn't further tune this parameter.
D AUTOREGRESSIVE DECODER WITHOUT AUTOREGRESSIVE PRIOR
In this section, we investigate the scenario of using an autoregressive decoder without an autoregressive prior. We compare the exact same model in three configurations: 1) using a small-receptive-field PixelCNN as an unconditional density estimator; 2) using a small-receptive-field PixelCNN as the decoder in a VAE with Gaussian latent variables; 3) replacing the Gaussian latent variables in 2) with autoregressive flow latent variables.
In Table 1, we can observe that each step of modification improves density estimation performance. In addition, using an autoregressive latent code makes the latent code transmit more information, as shown by the difference in $\mathbb{E}[D_{KL}(q(z|x)\,\|\,p(z))]$.
E CIFAR10 GENERATED SAMPLES | 1. What is the main contribution of the paper, and how does it combine different techniques?
2. What are the strengths of the paper in terms of its insights and results?
3. Are there any novel aspects or ideas introduced by the paper?
4. How does the reviewer assess the significance and impact of the paper's contributions?
5. What are the limitations or areas for improvement in the paper, particularly regarding the scope of the experiments? | Review | Review
This paper motivates the combination of autoregressive models with Variational Auto-Encoders and shows how to control the amount of information stored in the latent code. The authors provide state-of-the-art results on MNIST, OMNIGLOT and Caltech-101.
I find that the insights provided in the paper (e.g., with respect to the effect of having a more powerful decoder on learning the latent code, the bits-back coding, and the lossy decoding) are well-written but not novel.
The difference between an auto-regressive prior and the inverse auto-regressive posterior is new and interesting though.
The model presented combines the recent technique of PixelRNN/PixelCNN and Variational Auto-Encoders with Inverse Auto-Regressive Flows, which enables the authors to obtain state-of-the-art results on MNIST, OMNIGLOT and Caltech-101. Given the insights provided in the paper, the authors are also able to control the amount of information contained in the latent code to an extent.
This paper gathers several insights on Variational Auto-Encoders, scattered across several publications, in a well-written way. From these, the authors are able to obtain state-of-the-art models on small-complexity datasets. Larger-scale experiments will be necessary.
ICLR | Title
Variational Lossy Autoencoder
Abstract
Representation learning seeks to expose certain aspects of observed data in a learned representation that’s amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN. Our proposed VAE model allows us to have control over what the global latent code can learn and by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the VAE only “autoencodes” data in a lossy fashion. In addition, by leveraging autoregressive models as both prior distribution p(z) and decoding distribution p(x|z), we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks as well as competitive results on CIFAR10.
1 INTRODUCTION
A key goal of representation learning is to identify and disentangle the underlying causal factors of the data, so that it becomes easier to understand the data, to classify it, or to perform other tasks (Bengio et al., 2013). For image data this often means that we are interested in uncovering the “global structure” that captures the content of an image (for example, the identity of objects present in the image) and its “style”, but that we are typically less interested in the local and high frequency sources of variation such as the specific textures or white noise patterns.
A popular approach for learning representations is to fit a probabilistic latent variable model, an approach also known as analysis-by-synthesis (Yuille & Kersten, 2006; Nair et al., 2008). By learning a generative model of the data with the appropriate hierarchical structure of latent variables, it is hoped that the model will somehow uncover and untangle those causal sources of variations that we happen to be interested in. However, without further assumptions, representation learning via generative modeling is ill-posed: there are many different possible generative models with different (or no) kinds of latent variables that all encode the same probability density function on our observed data. Thus, the results we empirically get using this approach are highly dependent on the specific architectural and modeling choices that are made. Moreover, the objective that we optimize is often completely disconnected from the goal of learning a good representation: An autoregressive model of the data may achieve the same log-likelihood as a variational autoencoder (VAE) (Kingma & Welling, 2013), but the structure learned by the two models is completely different: the latter typically has a clear hierarchy of latent variables, while the autoregressive model has no stochastic latent variables at all (although it is conceivable that the deterministic hidden units of the autoregressive models will have meaningful and useful representations). For this reason, autoregressive models have thus far not been popular for the purpose of learning representations, even though they are extremely powerful as generative models (see e.g. van den Oord et al., 2016a).
A natural question becomes: is it possible to have a model that is a powerful density estimator and at the same time has the right hierarchical structure for representation learning? A potential solution would be to use a hybrid model that has both the latent variable structure of a VAE, as
well as the powerful recurrence of an autoregressive model. However, earlier attempts at combining these two kinds of models have run into the problem that the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) noted that weakening the autoregressive part of the model by, for example, dropout can encourage the latent variables to be used. We analyze why weakening is necessary, and we propose a principled solution that takes advantage of this property to control what kind of information goes into latent variables. The model we propose performs well as a density estimator, as evidenced by state-of-the-art log-likelihood results on MNIST, OMNIGLOT and Caltech-101, and also has a structure that is uniquely suited for learning interesting global representations of data.
2 VAES DO NOT AUTOENCODE IN GENERAL
A VAE is frequently interpreted as a regularized autoencoder (Kingma & Welling, 2013; Zhang et al., 2016), but the conditions under which it is guaranteed to autoencode (reconstruction being close to original datapoint) are not discussed. In this section, we discuss the often-neglected fact that VAEs do not always autoencode and give explicit reasons why previous attempts to apply VAE in sequence modeling found that the latent code is generally not used unless the decoder is weakened (Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016). The understanding of when a VAE does autoencode will be an essential building block for VLAE.
2.1 TECHNICAL BACKGROUND
Let x be the observed variables, z the latent variables, and let p(x, z) be the parametric model of their joint distribution, called the generative model defined over the variables. Given a dataset X = {x_1, ..., x_N} we wish to perform maximum likelihood learning of its parameters:
$\log p(X) = \sum_{i=1}^{N} \log p(x^{(i)})$,   (1)
but in general this marginal likelihood is intractable to compute or differentiate directly for flexible generative models that have high-dimensional latent variables and flexible priors and likelihoods. A solution is to introduce q(z|x), a parametric inference model defined over the latent variables, and optimize the variational lower bound on the marginal log-likelihood of each observation x:
$\log p(x) \geq \mathbb{E}_{q(z|x)}[\log p(x, z) - \log q(z|x)] = \mathcal{L}(x; \theta)$   (2)
where θ indicates the parameters of the p and q models.
There are various ways to optimize the lower bound L(x; θ); for continuous z it can be done efficiently through a re-parameterization of q(z|x) (Kingma & Welling, 2013; Rezende et al., 2014). This way of optimizing the variational lower bound with a parametric inference network and reparameterization of continuous latent variables is usually called VAE. The “autoencoding” terminology comes from the fact that the lower bound L(x; θ) can be re-arranged:
$\mathcal{L}(x; \theta) = \mathbb{E}_{q(z|x)}[\log p(x, z) - \log q(z|x)]$   (3)
$= \mathbb{E}_{q(z|x)}[\log p(x|z)] - D_{KL}(q(z|x) \,\|\, p(z))$   (4)
where the first term can be seen as the expectation of negative reconstruction error and the KL divergence term can be seen as a regularizer, so the whole objective can be read as a regularized autoencoder loss with q(z|x) being the encoder and p(x|z) being the decoder. In the context of modeling 2D images, the decoding distribution p(x|z) is usually chosen to be a simple factorized distribution, i.e. $p(x|z) = \prod_i p(x_i|z)$, and this setup often yields a sharp decoding distribution p(x|z) that tends to reconstruct the original datapoint x exactly.
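To make the regularized-autoencoder reading of Eqs. (3)-(4) concrete, here is a minimal PyTorch sketch of a single-sample estimate of $\mathcal{L}(x; \theta)$ with a factorized Gaussian posterior and a factorized Bernoulli decoder; `encoder` and `decoder` are hypothetical modules, not part of the paper:

```python
import torch
import torch.nn.functional as F

def elbo(x, encoder, decoder):
    # Encoder outputs mean and log-variance of a factorized Gaussian q(z|x).
    mu, logvar = encoder(x)
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # Factorized Bernoulli decoder p(x|z) = prod_i p(x_i|z): the
    # reconstruction term E_q[log p(x|z)], single-sample estimate.
    recon = -F.binary_cross_entropy_with_logits(decoder(z), x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)) for a factorized Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon - kl  # the lower bound L(x; theta) of Eq. (4)
```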
2.2 BITS-BACK CODING AND INFORMATION PREFERENCE
It’s straightforward to see that having a more powerful p(x|z) will make VAE’s marginal generative distribution $p(x) = \int_z p(z)\, p(x|z)\, dz$ more expressive. This idea has been explored extensively in previous work applying VAE to sequence modeling (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016), where the decoding distribution is a powerful RNN with autoregressive dependency, i.e., $p(x|z) = \prod_i p(x_i | z, x_{<i})$. Since RNNs are universal function approximators and any joint distribution over x admits an autoregressive factorization, the RNN autoregressive decoding distribution can in theory represent any probability distribution even without dependence on z.
However, previous attempts have found it hard to benefit from VAE when using an expressive decoding distribution p(x|z). Indeed, it’s documented in detail by Bowman et al. (2015) that in most cases when an RNN autoregressive decoding distribution is used, the latent code z is completely ignored and the model regresses to a standard unconditional RNN autoregressive distribution that doesn’t depend on the latent code. This phenomenon is commonly attributed to “optimization challenges” of VAE in the literature (Bowman et al., 2015; Serban et al., 2016; Kaae Sønderby et al., 2016), because early in training the approximate posterior q(z|x) carries little information about the datapoint x, and hence it’s easy for the model to set the approximate posterior to the prior to avoid paying any regularization cost $D_{KL}(q(z|x) \| p(z))$. Here we present a simple but often-neglected observation: this phenomenon arises not just from optimization challenges. Even if we could solve the optimization problem exactly, the latent code would still be ignored at the optimum for most practical instances of VAE that have intractable true posterior distributions and sufficiently powerful decoders. It is easiest to understand this observation from a Bits-Back Coding perspective of VAE.
It is well-known that Bits-Back Coding is an information-theoretic view of Variational Inference (Hinton & Van Camp, 1993; Honkela & Valpola, 2004) and specific links have been established between Bits-Back Coding and the Helmholtz Machine/VAE (Hinton & Zemel, 1994; Gregor et al., 2013). Here we briefly relate VAE to Bits-Back Coding for self-containedness:
First recall that the goal of designing an efficient coding protocol is to minimize the expected code length of communicating x. To explain Bits-Back Coding, let’s first consider a more naive coding scheme. VAE can be seen as a way to encode data in a two-part code: p(z) and p(x|z), where z can be seen as the essence/structure of a datum and is encoded first and then the modeling error (deviation from z’s structure) is encoded next. The expected code length under this naive coding scheme for a given data distribution is hence:
$C_{\text{naive}}(x) = \mathbb{E}_{x \sim \text{data},\, z \sim q(z|x)}[-\log p(z) - \log p(x|z)]$   (5)
This coding scheme is, however, inefficient. Bits-Back Coding improves on it by noticing that the encoder distribution q(z|x) can be used to transmit additional information, up to H(q(z|x)) expected nats, as long as the receiver also has access to q(z|x). The decoding scheme works as follows: a receiver first decodes z from p(z), then decodes x from p(x|z) and, by running the same approximate posterior that the sender is using, decodes a secondary message from q(z|x). Hence, to properly measure the code length of VAE’s two-part code, we need to subtract the extra information from q(z|x). Using Bits-Back Coding, the expected code length equates to the negative variational lower bound, or the so-called Helmholtz variational free energy, which means minimizing code length is equivalent to maximizing the variational lower bound:
$C_{\text{BitsBack}}(x) = \mathbb{E}_{x \sim \text{data},\, z \sim q(z|x)}[\log q(z|x) - \log p(z) - \log p(x|z)]$   (6)
$= \mathbb{E}_{x \sim \text{data}}[-\mathcal{L}(x)]$   (7)
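As a sanity check, Eq. (6) can be estimated directly by Monte Carlo; the sketch below assumes hypothetical callables that sample and score under q(z|x), p(z), and p(x|z), none of which are specified by the paper:

```python
def bits_back_code_length(x, sample_and_logq, prior_logp, decoder_logp, n=64):
    # Monte Carlo estimate of Eq. (6): the expected Bits-Back code length
    # E_{z ~ q(z|x)}[log q(z|x) - log p(z) - log p(x|z)] = -L(x), in nats.
    total = 0.0
    for _ in range(n):
        z, logq = sample_and_logq(x)  # sample z ~ q(z|x) and score it
        total += (logq - prior_logp(z) - decoder_logp(x, z)).item()
    return total / n
```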
Casting the problem of optimizing VAE into designing an efficient coding scheme easily allows us to reason about when the latent code z will be used: the latent code z will be used when the two-part code is an efficient code. Recalling that the lower bound on the expected code length for data is given by the Shannon entropy of the data generating distribution, $\mathcal{H}(\text{data}) = \mathbb{E}_{x \sim \text{data}}[-\log p_{\text{data}}(x)]$, we can analyze VAE’s coding efficiency:
$C_{\text{BitsBack}}(x) = \mathbb{E}_{x \sim \text{data},\, z \sim q(z|x)}[\log q(z|x) - \log p(z) - \log p(x|z)]$   (8)
$= \mathbb{E}_{x \sim \text{data}}[-\log p(x) + D_{KL}(q(z|x) \| p(z|x))]$   (9)
$\geq \mathbb{E}_{x \sim \text{data}}[-\log p_{\text{data}}(x) + D_{KL}(q(z|x) \| p(z|x))]$   (10)
$= \mathcal{H}(\text{data}) + \mathbb{E}_{x \sim \text{data}}[D_{KL}(q(z|x) \| p(z|x))]$   (11)
Since the Kullback-Leibler divergence is always non-negative, we know that the two-part code derived from VAE suffers at least an extra code length of $D_{KL}(q(z|x) \| p(z|x))$ nats for using an imprecise posterior. Many previous works in variational inference have designed flexible approximate posteriors to better approximate the true posterior (Salimans et al., 2014; Rezende & Mohamed, 2015; Tran et al., 2015; Kingma et al., 2016). Improved posterior approximations have been shown to be effective in improving variational inference, but none of the existing methods are able to completely close the gap between the approximate and true posterior. This leads us to believe that for most practical models, at least in the near future, the extra coding cost $D_{KL}(q(z|x) \| p(z|x))$ will exist and will not be negligible.
Once we understand the inefficiency of the Bits-Back Coding mechanism, it’s simple to realize why the latent code z is sometimes not used: if p(x|z) can model $p_{\text{data}}(x)$ without using information from z, then it will not use z, in which case the true posterior p(z|x) is simply the prior p(z), and it’s usually easy to set q(z|x) to p(z) to avoid incurring the extra cost $D_{KL}(q(z|x) \| p(z|x))$. This is exactly the case when a powerful decoding distribution such as an RNN autoregressive distribution is used, which given enough capacity is able to model arbitrarily complex distributions. Hence there exists an information preference when a VAE is optimized: information that can be modeled locally by the decoding distribution p(x|z) without access to z will be encoded locally, and only the remainder will be encoded in z.
We note that one common way to encourage putting information into the code is to use a factorized decoder $p(x|z) = \prod_i p(x_i|z)$, but so long as there is one dimension $x_j$ that is independent of all other dimensions under the true data distribution, $p_{\text{data}}(x) = p_{\text{data}}(x_j)\, p_{\text{data}}(x_{\neq j})$, the latent code doesn’t contain all the information about x, since at least $x_j$ will be modeled locally by the factorized p(x|z). This kind of independence structure rarely exists in images, so common VAEs that have a factorized decoder autoencode almost exactly. Other techniques to encourage the usage of the latent code include annealing the relative weight of $D_{KL}(q(z|x) \| p(z))$ in the variational lower bound (Bowman et al., 2015; Kaae Sønderby et al., 2016) or the use of free bits (Kingma et al., 2016), which can serve the dual purpose of smoothing the optimization landscape and canceling out part of the Bits-Back Coding inefficiency $D_{KL}(q(z|x) \| p(z|x))$.
3 VARIATIONAL LOSSY AUTOENCODER
The discussion in Section 2.2 suggests that autoregressive models cannot naively be combined with VAE, since the autoregressive component will be preferred for modeling the information. Nevertheless, in this section, we present two complementary classes of improvements to VAE that utilize autoregressive models fruitfully to explicitly control representation learning and improve density estimation.
3.1 LOSSY CODE VIA EXPLICIT INFORMATION PLACEMENT
Even though the information preference property of VAE might suggest that one should always use full autoregressive models to achieve a better code length/log-likelihood, especially when slow data generation is not a concern, we argue that this information preference property can be exploited to turn the VAE into a powerful representation learning method that gives us fine-grained control over the kind of information that gets included in the learned representation.
When we try to learn a lossy compression/representation of data, we can simply construct a decoding distribution that is capable of modeling the part of the information that we don’t want the lossy representation to capture but, critically, incapable of modeling the information that we do want the lossy representation to capture.
For instance, if we are interested in learning a global representation for 2D images that doesn’t encode information about detailed texture, we can construct a specific factorization of the autoregressive distribution such that it has a small local receptive field as decoding distribution, e.g., $p_{\text{local}}(x|z) = \prod_i p(x_i | z, x_{\text{WindowAround}(i)})$. Notice that, as long as $x_{\text{WindowAround}(i)}$ is smaller than $x_{<i}$, $p_{\text{local}}(x|z)$ won’t be able to represent arbitrarily complex distributions over x without dependence on z, since the receptive field is limited such that not all distributions over x admit such factorizations. In particular, the receptive field window can be a small rectangle adjacent to a pixel $x_i$, in which case long-range dependency will be encoded in the latent code z. On the other hand, if the true data distribution admits such a factorization for a given datum x and dimension i, i.e. $p_{\text{data}}(x_i | x_{\text{WindowAround}(i)}) = p_{\text{data}}(x_i | x_{<i})$, then the information preference property discussed in Section 2.2 applies, which means that all the information will be encoded in the local autoregressive distribution for $x_i$. Local statistics of 2D images like texture will likely be modeled completely by a small local window, whereas global structural information of an image, like the shapes of objects, is long-range dependency that can only be communicated through the latent code z. Therefore we have given an example VAE that will produce a lossy compression of 2D images carrying exclusively global information that can’t be modeled locally.
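A minimal PyTorch sketch of one spatially masked convolution is shown below; stacking a few such layers with small kernels yields the small dependency window $x_{\text{WindowAround}(i)}$. This is a single-channel, type-A-style mask and an illustration only; the actual VLAE decoder details are in the appendix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    # Masks the kernel so each output location only sees pixels strictly
    # before it in raster order (current pixel, pixels to its right, and
    # all rows below are zeroed out). For multi-channel images a real
    # PixelCNN also orders the channels; this sketch ignores that.
    def __init__(self, in_ch, out_ch, k):
        super().__init__(in_ch, out_ch, k, padding=k // 2)
        mask = torch.ones(k, k)
        mask[k // 2, k // 2:] = 0  # current pixel and everything to its right
        mask[k // 2 + 1:, :] = 0   # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        padding=self.padding)
```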
Notice that a global representation is only one of many possible lossy representations that we can construct using this information preference property. For instance, the conditional of an autoregressive distribution might depend on a heavily downsampled receptive field so that it can only model long-range patterns, whereas local high-frequency statistics need to be encoded into the latent code. Hence we have demonstrated that we can achieve explicit placement of information by constraining the receptive field/factorization of the autoregressive distribution that is used as the decoding distribution.
We want to additionally emphasize that the information preference property is an asymptotic view, in the sense that it only holds when the variational lower bound can be optimized well. Thus, we are not proposing an alternative to techniques like free bits (Kingma et al., 2016) or KL annealing; indeed they are still useful methods to smooth the optimization problem and are used in this paper’s experiments.
3.2 LEARNED PRIOR WITH AUTOREGRESSIVE FLOW
Inefficiency in Bits-Back Coding, i.e., the mismatch between approximate posterior and true posterior, can be exploited to construct a lossy code but it’s still important to minimize such inefficiency to improve overall modeling performance/coding efficiency. We propose to parametrize the prior distribution p(z; θ) with an autoregressive model and show that a type of autoregressive latent code can in theory reduce inefficiency in Bits-Back coding.
It is well-known that limited approximate posteriors impede learning and therefore various expressive posterior approximations have been proposed to improve VAE’s density estimation performance (Turner et al., 2008; Mnih & Gregor, 2014; Salimans et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016). One such class of approximate posteriors that has been shown to attain good empirical performance is based on the idea of Normalizing Flow: apply an invertible mapping to a simple random variable, for example the factorized Gaussian commonly used for q(z|x), in order to obtain a more complicated random variable. For an invertible transformation between a simple distribution over y and a more flexible z, we know from the change-of-variable technique that $\log q(z|x) = \log q(y|x) - \log \det \frac{dz}{dy}$, and using q(z|x) as the approximate posterior will decrease the coding efficiency gap $D_{KL}(q(z|x) \| p(z|x))$ provided the transformation is sufficiently expressive. Kingma et al. (2016) introduced Inverse Autoregressive Flow, a powerful class of such invertible mappings with a simple determinant:

$z_i = \frac{y_i - \mu_i(y_{1:i-1})}{\sigma_i(y_{1:i-1})}$,

where $\mu_i(\cdot) \in \mathbb{R}$ and $\sigma_i(\cdot) \in \mathbb{R}^+$ are general functions that can be parametrized by expressive neural networks, such as MADE and PixelCNN variants (Germain et al., 2015; van den Oord et al., 2016a). Inverse autoregressive flow is the inverse/whitening of autoregressive flow: $y_i = z_i \sigma_i(y_{1:i-1}) + \mu_i(y_{1:i-1})$. We refer interested readers to (Rezende & Mohamed, 2015; Kingma et al., 2016) for in-depth discussions on related topics.
In this paper, we propose to parametrize our learnable prior as an autoregressive flow from some simple noise source like spherical Gaussian. Next, we show that using latent code transformed by autoregressive flow (AF) is equivalent to using inverse autoregressive flow (IAF) approximate posterior, which explains why it can similarly improve Bits-Back Coding efficiency. Moreover, compared with an IAF posterior, an AF prior has a more expressive generative model that essentially “comes for free”.
For an autoregressive flow f , some continuous noise source is transformed into latent code z: z = f( ). Assuming the density function for noise source is u( ), we similarly know that log p(z) = log u( ) + log det d dz .
Simply re-arranging the variational lower bound for the AF prior reveals that having an AF latent code z is equivalent to using an IAF posterior over $\varepsilon$, which we can interpret as the new latent code:
$\mathcal{L}(x; \theta) = \mathbb{E}_{z \sim q(z|x)}[\log p(x|z) + \log p(z) - \log q(z|x)]$   (12)
$= \mathbb{E}_{z \sim q(z|x),\, \varepsilon = f^{-1}(z)}\big[\log p(x|f(\varepsilon)) + \log u(\varepsilon) + \log \det \tfrac{d\varepsilon}{dz} - \log q(z|x)\big]$   (13)
$= \mathbb{E}_{z \sim q(z|x),\, \varepsilon = f^{-1}(z)}\big[\log p(x|f(\varepsilon)) + \log u(\varepsilon) - \underbrace{\big(\log q(z|x) - \log \det \tfrac{d\varepsilon}{dz}\big)}_{\text{IAF posterior}}\big]$   (14)

The AF prior is the same as the IAF posterior along the encoder path, $f^{-1}(q(z|x))$, but differs along the decoder/generator path: the IAF posterior has a shorter decoder path p(x|z), whereas the AF prior has a deeper decoder path $p(x|f(\varepsilon))$. The crucial observation is that the AF prior and the IAF posterior have the same computation cost under the expectation of $z \sim q(z|x)$, so using an AF prior makes the model more expressive at no training-time cost.
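To make the mechanics concrete, here is a minimal sketch of evaluating log p(z) under a single mean-only AF step (the variant the appendix says was used); it assumes PyTorch tensors and a MADE network `made_mu` whose i-th output depends only on $z_{1:i-1}$:

```python
import math
import torch

def af_prior_logp(z, made_mu):
    # Mean-only AF prior: z = f(eps) with z_i = eps_i + mu_i(z_{1:i-1}).
    # The inverse eps = z - mu(z) is one parallel MADE pass, and the
    # Jacobian of a pure shift is identity, so log|det d(eps)/dz| = 0.
    eps = z - made_mu(z)
    # log u(eps) under a standard spherical Gaussian noise source.
    return (-0.5 * (eps ** 2 + math.log(2 * math.pi))).sum(dim=-1)
```

Note that this evaluation direction is all the ELBO of Eq. (13) needs, since z is sampled from q(z|x); sampling from the prior would instead require a sequential pass.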
4 EXPERIMENTS
In this paper, we evaluate VLAE on 2D images and leave extensions to other forms of data to future work. For the rest of the section, we define a VLAE model as a VAE that uses an AF prior and an autoregressive decoder. We choose to implement the conditional distribution p(x|z) with a small-receptive-field PixelCNN (van den Oord et al., 2016a), which has proven to be a scalable autoregressive model.
For evaluation, we use binary image datasets that are commonly used for density estimation tasks: MNIST (LeCun et al., 1998) (both the statically binarized1 and the dynamically binarized version (Burda et al., 2015a)), OMNIGLOT (Lake et al., 2013; Burda et al., 2015a) and Caltech-101 Silhouettes (Marlin et al., 2010). All datasets uniformly consist of 28x28 binary images, which allows us to use a unified architecture. VAE networks used on the binary image datasets are simple variants of the ResNet VAEs described in (Salimans et al., 2014; Kingma et al., 2016). For the decoder, we use a variant of PixelCNN that has 6 layers of masked convolution with filter size 3, which means the window of dependency, $x_{\text{WindowAround}(i)}$, is limited to a small local patch. During training, “free bits” (Kingma et al., 2016) is used to improve optimization stability. Experimental setup and hyperparameters are detailed in the appendix. Reported marginal NLL is estimated using Importance Sampling with 4096 samples.
We designed experiments to answer the following questions:
• Can VLAE learn lossy codes that encode global statistics?
• Does using AF priors improve upon using IAF posteriors, as predicted by theory?
• Does using autoregressive decoding distributions improve density estimation performance?
4.1 LOSSY COMPRESSION
First we are interested in whether VLAE can learn a lossy representation/compression of data by using the PixelCNN decoder to model local statistics. We trained a VLAE model on Statically Binarized MNIST, and the converged model has $\mathbb{E}[D_{KL}(q(z|x) \| p(z))] = 13.3$ nats = 19.2 bits, which is the number of bits it uses on average to encode/compress one MNIST image. By comparison, an identical VAE model with a factorized decoding distribution uses on average 37.3 bits in the latent code; this indicates that VLAE can learn a lossier compression than a VAE with a regular factorized conditional distribution.
The next question is whether VLAE’s lossy compression encodes global statistics and discards local statistics. In Fig 1a, we visualize original images $x_{\text{data}}$ and one random “decompression” $x_{\text{decompressed}}$ from VLAE: $z \sim q(z|x_{\text{data}})$, $x_{\text{decompressed}} \sim p(x|z)$. We observe that none of the decompressions is an exact reconstruction of the original image; instead, the global structure of the image was encoded in the lossy code z and regenerated. Also worth noting is that local statistics are not preserved but a new set of likely local statistics is generated in the decompressed images: the binary masks are usually different and local styles like stroke width are sometimes slightly different.

1 We use the version provided by Hugo Larochelle.
However, we remark that the lossy code z doesn’t always capture the kind of global information that we care about; what it captures depends on the type of constraint we put on the decoder. For instance, in Fig 4b, we show decompressions for the OMNIGLOT dataset, which has more meaningful variations in small patches than MNIST, and we can observe that semantics are not preserved in some cases. This highlights the need to specify the type of statistics we care about in a representation, which will differ across tasks and datasets, and to design the decoding distribution accordingly.
4.2 DENSITY ESTIMATION
Next we investigate whether leveraging autoregressive models as the latent distribution p(z) and as the decoding distribution p(x|z) would improve density estimation performance. To verify whether the AF prior is able to improve upon the IAF posterior alone, it is desirable to test this model without an autoregressive decoder, instead using the conventional independent Bernoulli distribution for p(x|z). Hence we use the best performing model from Kingma et al. (2016) on statically binarized MNIST and make the single modification of replacing the original IAF posterior with an equivalent AF prior, removing the context. As seen in Table 1, the VAE with AF prior outperforms the VAE with an equivalent IAF posterior, indicating that the deeper generative model from the AF prior is beneficial. A similar gain carries over when an autoregressive decoder is used: on statically binarized MNIST, using an AF prior instead of an IAF posterior reduces train NLL by 0.8 nat and test NLL by 0.6 nat.
Next we evaluate whether using an autoregressive decoding distribution can improve performance, and we show in Table 1 that a VLAE model, with AF prior and PixelCNN conditional, is able to outperform a VAE with just an AF prior and achieves new state-of-the-art results on statically binarized MNIST.
In addition, we hypothesize that the separation of different types of information, with global structure modeled in the latent code and local statistics in the PixelCNN, likely provides a good inductive bias for 2D images. In order to evaluate whether VLAE is an expressive density estimator with good inductive biases, we test a single VLAE model, with the same network architecture, on all binary datasets. We choose hyperparameters manually on statically binarized MNIST and use the same hyperparameters to evaluate on dynamically binarized MNIST, OMNIGLOT and Caltech-101 Silhouettes. We also note that better performance can be obtained if we individually tune hyperparameters for each dataset. As a concrete demonstration, we report the performance of a fine-tuned VLAE on the OMNIGLOT dataset in Table 3.
As seen in Tables 2, 3 and 4, with the same set of hyperparameters tuned on statically binarized MNIST, VLAE performs well on the rest of the datasets, significantly exceeding previous state-of-the-art results on dynamically binarized MNIST and Caltech-101 Silhouettes and tying statistically with the best previous result on OMNIGLOT. In order to isolate the effect of the expressive PixelCNN decoder, we also report the performance of the same PixelCNN trained without the VAE part under the name “Unconditional Decoder”.
4.3 NATURAL IMAGES: CIFAR10
In addition to binary image datasets, we have applied VLAE to the CIFAR10 dataset of natural images. Density estimation of CIFAR10 images has been a challenging benchmark used by many recent generative models, and hence it is a great task on which to position VLAE among existing methods.
We investigated using ResNet (He et al., 2016) and DenseNet (Huang et al., 2016) as building blocks for VAE networks and observed that DenseNet reduces overfitting. We also propose a new optimization technique that blends the advantages of KL annealing (Serban et al., 2016) and “free bits” (Kingma et al., 2016) to stabilize learning on this challenging dataset. The detailed experimental setup is described in the Appendix.
VLAE is compared to other methods on CIFAR10 in Table 5. We show that VLAE models attain new state-of-the-art performance among variationally trained latent-variable models. The DenseNet VLAE model also outperforms most other tractable-likelihood models, including Gated PixelCNN and PixelRNN, and has results only slightly worse than the currently unarchived state-of-the-art PixelCNN++.
We also investigate learning lossy codes on CIFAR10 images. To illustrate how the receptive field size of the PixelCNN decoder influences the properties of the learned latent codes, we show visualizations of similar VLAE models with receptive fields of different sizes. Specifically, we say a receptive field, $x_{\text{WindowAround}(i)}$, has size AxB when a pixel $x_i$ can depend on the rectangular block of size AxB immediately on top of $x_i$, as well as the $\lceil (A-1)/2 \rceil$ pixels immediately to the left of $x_i$. We use this notation to refer to different types of PixelCNN decoders in Figure 3.
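As a concrete reading of this notation, the sketch below enumerates the pixels in $x_{\text{WindowAround}(i)}$ for an AxB receptive field; the exact horizontal centering of the block is our assumption, since the text does not pin it down:

```python
import math

def window_around(r, c, A, B):
    # The A-wide, B-tall block of pixels directly above pixel (r, c),
    # plus ceil((A - 1) / 2) pixels immediately to its left in the same row.
    half = A // 2
    block = [(r - dr, c + dc) for dr in range(1, B + 1)
             for dc in range(-half, A - half)]
    left = [(r, c - k) for k in range(1, math.ceil((A - 1) / 2) + 1)]
    return block + left
```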
From (a)-(c) in Figure 3, we can see that larger receptive fields progressively make autoregressive decoders capture more structural information. In (a), a smaller receptive field tends to preserve rather detailed shape information in the lossy code whereas the latent code only retains rough shape in (c) with a larger receptive field.
It is also interesting to note that in (a)-(c), color information is often partially omitted from the latent codes; one explanation is that color is very predictable locally. However, color information can be important to preserve if our task is, for example, object classification. To demonstrate how we can encode color information in the lossy code, we can make the PixelCNN decoder depend only on the images’ grayscale versions. In other words, instead of choosing the decoder to be $p_{\text{local}}(x|z) = \prod_i p(x_i | z, x_{\text{WindowAround}(i)})$, we use a decoder of the form $p_{\text{local}}(x|z) = \prod_i p(x_i | z, \text{Grayscale}(x_{\text{WindowAround}(i)}))$. In (d) of Figure 3, we visualize lossy codes for a VLAE that has the same receptive field size as (c) but uses a “grayscale receptive field”. We note that the lossy codes in (d) encode roughly the same structural information as those in (c) but generally generate objects that are more recognizable due to the preservation of color information. This serves as one example of how we can carefully design the lossy latent code to encode what is important and discard what is not.
5 RELATED WORK
We investigate a fusion between variational autoencoders with continuous latent variables (Kingma & Welling, 2013; Rezende et al., 2014) and neural autoregressive models. For autoregression, we specifically apply a novel type of architecture where autoregression is realised through a carefully constructed deep convolutional network, introduced in the PixelCNN model for images (van den Oord et al., 2016a,b). This family of convolutional autoregressive models was further explored and extended: for audio in WaveNet (Oord et al., 2016), video in Video Pixel Networks (Kalchbrenner et al., 2016b) and language in ByteNet (Kalchbrenner et al., 2016a).
The combination of latent variables with an expressive decoder was previously explored using recurrent networks, mainly in the context of language modeling (Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) also proposed to weaken an otherwise too-expressive decoder with dropout to force some information into the latent codes.
Concurrent with our work, PixelVAE (Gulrajani et al., 2016) also explored using conditional PixelCNN as a VAE’s decoder and has obtained impressive density modeling results through the use of multiple levels of stochastic units.
Using an autoregressive model on the latent code was explored in the context of discrete latent variables in DARN (Gregor et al., 2013). Kingma et al. (2016), Kaae Sønderby et al. (2016), Gregor et al. (2016) and Salimans (2016) explored VAE architectures with an explicitly deep autoregressive prior for continuous latent variables, but the autoregressive data likelihood is intractable in those architectures and needs to be inferred variationally. In contrast, we use multiple steps of autoregressive flows that have exact likelihood, and we analyze the effect of using an expressive latent code.
Optimization challenges for using (all levels of) continuous latent code were discussed before and practical solutions were proposed (Bowman et al., 2015; Kaae Sønderby et al., 2016; Kingma et al., 2016). In this paper, we present a complementary perspective on when/how should the latent code be used by appealing to a Bits-Back interpretation of VAE.
Learning a lossy compressor with latent variable model has been investigated with ConvDRAW (Gregor et al., 2016). It learns a hierarchy of latent variables and just using high-level latent variables will result in a lossy compression that performs similarly to JPEG. Our model similarly learns a lossy compressor but it uses an autoregressive model to explicitly control what kind of information should be lost in compression.
6 CONCLUSION
In this paper, we analyze the condition under which the latent code in VAE should be used, i.e. when a VAE autoencodes, and use this observation to design a VAE model that is a lossy compressor of observed data. At the modeling level, we propose two complementary improvements to VAE that are shown to have good empirical performance.
VLAE has the appealing properties of controllable representation learning and improved density estimation performance, but these properties come at a cost: compared with VAE models that have a simple prior and decoder, VLAE is slower at generation due to the sequential nature of the autoregressive model.
Moving forward, we believe it’s exciting to extend this principle of learning lossy codes to other forms of data, in particular those that have a temporal aspect like audio and video. Another promising direction is to design representations that contain only information for downstream tasks and utilize those representations to improve semi-supervised learning.
A DETAILED EXPERIMENT SETUP FOR BINARY IMAGES
For VAE’s encoder and decoder, we use the same ResNet (He et al., 2015) VAE architecture as the one used in the IAF MNIST experiment (Kingma et al., 2016). The only difference is that the decoder network now, instead of outputting a 28x28x1 spatial feature map to specify the mean of a factorized Bernoulli distribution, outputs a 28x28x4 spatial feature map that is concatenated with the original binary image channel-wise, forming a 28x28x5 feature map that is then fed through a typical masked PixelCNN (van den Oord et al., 2016a). As such, even though the PixelCNN conditions on the latent code, we don’t call it a Conditional PixelCNN because it doesn’t use the specific architecture proposed in van den Oord et al. (2016b). The PixelCNN has 6 masked convolution layers with 12 3x3 filters organized in ResNet blocks, and it has 4 additional 1x1 convolution ResNet blocks between every other masked convolution layer to increase processing capacity, since it employs fewer masked convolutions than usual. All the masked convolution layers have their weights tied to reduce overfitting on statically binarized MNIST; untying the weights will increase performance for the other datasets. Experiments are tuned on the validation set, and the final experiment was run on train plus validation, with performance evaluated on the test set. Exponential Linear Units (Clevert et al., 2015) are used as activation functions in both the VAE network and the PixelCNN network. Weight normalization is used everywhere, with data-dependent initialization (Salimans & Kingma, 2016).
A latent code of dimension 64 was used. The AF prior is implemented with MADE (Germain et al., 2015) as detailed in Kingma et al. (2016). We used 4 steps of autoregressive flow, and each flow is implemented by a 3-layer MADE that has 640 hidden units and uses ReLU (Nair & Hinton, 2010) as its activation function. Differing from the practice of Kingma et al. (2016), we use mean-only autoregressive flow, which we found to be more numerically stable.
In terms of training, Adamax (Kingma & Ba, 2014) was used with a learning rate of 0.002. 0.01 nats/data-dim free bits (Kingma et al., 2016) was found to be effective in dealing with the problem of all the latent code being ignored early in training. Polyak averaging (Polyak & Juditsky, 1992) was used to compute the final parameters, with α = 0.998.
All experiments are implemented using TensorFlow (Abadi et al., 2016).
B ADDITIONAL EXPERIMENT SETUP FOR CIFAR10
Latent codes are represented by 16 feature maps of size 8x8, and this choice of spatial stochastic units are inspired by ResNet IAF VAE (Kingma et al., 2016). Prior distribution is factorized Gaussian noise transformed by 6 autoregressive flows, each of which is implemented by a PixelCNN (van den Oord et al., 2016a) with 2 hidden layers and 128 feature maps. Between every other autoregressive flow, the ordering of stochastic units is reversed.
ResNet VLAE has the following structure for encoder: 2 ResNet blocks, Conv w/ stride=2, 2 ResNet blocks, Conv w/ stride=2, 3 ResNet blocks, 1x1 convolution and has a symmetric decoder. Channel size = 48 for 32x32 feature maps and 96 for other feature maps. DenseNet VLAE follows a similar structure: replacing 2 ResNet blocks with one DenseNet block of 3 steps and each step produces a certain number of feature maps such that at the end of a block, the concatenated feature maps is slightly more than the ResNet VLAE at the same stage.
Conditional PixelCNN++ (Salimans et al., 2017) is used as the decoder. Specifically, the channel-autoregressive variant is used to ensure there is sufficient capacity even when the receptive field is small. The decoder PixelCNN has 4 blocks of 64 feature maps, where each block is conditioned on previous blocks with Gated ResNet connections; hence the PixelCNN decoders we use are shallow but very wide. For the 4x2 receptive field experiment, we use 1 layer of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 5x3 receptive field experiment, we use 2 layers of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 7x4 receptive field experiment, we use 3 layers of vertical stack convolutions and 3 layers of horizontal stack convolutions. For the 7x4 Grayscale experiment, we transform RGB images into grayscale images via the transformation 0.299·R + 0.587·G + 0.114·B. The best density estimation result is obtained with the 7x4 receptive field.
C SOFT FREE BITS
”Free bits” was a technique proposed in (Kingma et al., 2016) where K groups of stochastic units are encouraged to be used through the following surrogate objective:
$\tilde{\mathcal{L}}_\lambda = \mathbb{E}_{x \sim \mathcal{M}}\big[\mathbb{E}_{q(z|x)}[\log p(x|z)]\big] - \sum_{j=1}^{K} \text{maximum}\big(\lambda,\ \mathbb{E}_{x \sim \mathcal{M}}[D_{KL}(q(z_j|x) \| p(z_j))]\big)$
This technique is easy to use since it’s usually easy to determine the minimum number of bits/nats, λ, stochastic units need to encode. Choosing λ is hence easier than setting a fixed KL annealing schedule (Serban et al., 2016).
On the other hand, KL annealing has the benefit that the surrogate objective smoothly becomes the true objective, the variational lower bound, whereas “free bits” has a sharp transition at the boundary. Therefore, we propose to still use λ as a hyperparameter specifying that at least λ nats should be used, but to change the optimization objective as slowly as possible:
$\mathcal{L}_{\text{SoftFreeBits}}(x; \theta) = \mathbb{E}_{q(z|x)}[\log p(x|z)] - \gamma\, D_{KL}(q(z|x) \| p(z))$, where $0 < \gamma \leq 1$.

We make the optimization smoother by changing γ slowly online to ensure that at least λ nats are used: when the KL is too much higher than λ (we experimented with a wide range of thresholds from 3% to 30%, all of which yield improved results; we tend to use 5% as the threshold), γ is increased, and when the KL is lower than λ, γ is decreased to encourage information flow.
We found it sufficient to increase/decrease γ in 10% increments and didn’t further tune this parameter.
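A minimal sketch of the online γ schedule described above; the 5% slack and 10% increments follow the text, while the numeric floor on γ is an illustrative assumption:

```python
def soft_free_bits_update(gamma, kl_nats, lam, slack=0.05, step=0.10):
    # Online schedule for the KL weight gamma in
    # L = E[log p(x|z)] - gamma * KL(q(z|x) || p(z)), 0 < gamma <= 1.
    # Push gamma up when the code uses too many nats, down when it uses
    # fewer than the lambda nats we want it to carry.
    if kl_nats > lam * (1 + slack):
        gamma = min(1.0, gamma * (1 + step))
    elif kl_nats < lam:
        gamma = max(1e-4, gamma * (1 - step))  # floor is an assumption
    return gamma
```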
D AUTOREGRESSIVE DECODER WITHOUT AUTOREGRESSIVE PRIOR
In this section, we investigate the scenario of using an autoregressive decoder without an autoregressive prior. We compare the exact same model in three configurations: 1) using a small-receptive-field PixelCNN as an unconditional density estimator; 2) using a small-receptive-field PixelCNN as the decoder in a VAE with Gaussian latent variables; 3) replacing the Gaussian latent variables in 2) with autoregressive flow latent variables.
In Table 1, we can observe that each step of modification improves density estimation performance. In addition, using an autoregressive latent code makes the latent code transmit more information, as shown by the difference in $\mathbb{E}[D_{KL}(q(z|x) \| p(z))]$.
E CIFAR10 GENERATED SAMPLES

Review
The AR prior and its equivalent - the inverse AR posterior - is one of the more elegant ways to improve the unfortunately poor generative qualities of VAEs. It is only an incremental but important step. Incremental, because, judging by the lack of, say, CIFAR10 pictures of the VLAE in its "creative" regime (i.e., when sampling from the prior), it will not answer many of the questions hanging over. We hope to see the paper accepted: in relative terms, the paper shines in the landscape of the other papers, which are rich in engineering hacks but lacking in theoretical insights.
Some disagreements with the theoretical suppositions in the paper:
i) The VAE's posterior converges to the prior faster than we would like because the gradients of the "generative" error (the KL divergence of prior and posterior) w.r.t. mu & sigma are simple, infinitely differentiable functions, and their magnitude far exceeds the magnitude of the resp. gradients of the reconstruction error. Especially when more "hairy" decoders like PixelCNN are used. We always considered this obvious and certainly not worthy of one page of CS mumbo-jumbo to explain. Dumbing down the decoder via variations of dropout, "complexifying" the sampler as in here, or slapping the generative error with a "DeepMind" constant (beta_VAE), are the natural band-aids, but seem to fail in the "creative" regime, for real-life sets like CIFAR10 or more complex ones. Other conceptual solutions are needed; some are discussed in [2].
ii) The claim near the end of section 2.2 that "the extra coding cost a.k.a. variational error will exist and will not be negligible" is a speculation which, in our empirical experience at least, is incorrect. The variational error is quantifiable for the Gibbs/exponential family of priors/posteriors, as described in [1], section 3.8, and as Tim Salimans knows from his own previous work. In the case of CIFAR10, for example, the variational error is negligible, even for simple sampling families like Gaussian, Laplacian, etc. Moreover, in hindsight, using the closed form for the generative error (the KL divergence of prior and posterior) in the pioneering VAE papers was likely a mistake inherited from the unnecessary Bayesianism which inspired them (beautiful, but a mistake nonetheless): the combo of generative and variational error should together be approximated by the same naive Monte Carlo used for the reconstruction error (this easily follows from equation (3.13) in [1]), i.e. an arithmetic average over observations.
On the lighter side, guys, please do not recycle ridiculous terms like "optimizationally challenged", as in section 2.2! The English language has recently acquired "mentally-challenged", "emotionally-challenged", etc, and now political correctness has sadly found its way to machines?
[1] https://arxiv.org/pdf/1508.06585v5.pdf
[2] https://arxiv.org/pdf/1511.02841v3.pdf |
ICLR | Title
Semantic Code Repair using Neuro-Symbolic Transformation Networks
Abstract
We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.
1 INTRODUCTION
The term automatic code repair is typically used to describe two overarching tasks: The first involves fixing syntactic errors, which are malformations that cause the code to not adhere to some language specification (Gupta et al., 2017; Bhatia and Singh, 2016). The second, which is the focus of this work, involves fixing semantic bugs, which refer to any case where the actual program behavior is not the same as the behavior the programmer intended. Clearly, this covers an extremely wide range of code issues, so this work is limited to a class of semantic bugs, which we roughly define as: “Bugs that can be identified and fixed by an experienced human programmer, without running the code or having deep contextual knowledge of the program.” This does not imply that the bugs are trivially fixable, as they often require time-consuming analysis of the code, rich background knowledge of the language and APIs, and complex logical reasoning about the original programmer’s intent.
Unlike previous work, we do not assume access to unit tests at training or test time. This requirement is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might. Most previous work relies on unit tests – a common theme is combining coarse-grained repair models with search algorithms to find some repair that satisfies unit tests (Harman, 2010; Singh et al., 2013). In contrast, our proposed task requires models to deeply understand the code in order to propose a single set of repairs. Thus, semantic code repair without unit tests presents a concrete, real-world test bed for the more general task of understanding and modifying source code.
Our semantic repair model was trained on a large corpus of open-source Python projects with synthetically injected bugs. We test on both real-bug and synthetic-bug test sets. 1 To train the repair model, we first evaluated an attentional sequence-to-sequence architecture. Although this model was able to achieve non-trivial results, we believe it to be an unsuitable solution in a number of ways, such as the lack of direct competition between repair candidates at different locations. Instead, we
1All data sets will be made publicly available.
use an alternative approach which decouples the non-statistical process of generating and applying repair proposals from the statistical process of scoring and ranking repairs.
This two-stage process itself is not new, but the core novelty in this work is the specific neural framework we propose for scoring repair candidates. We refer to our architecture as a Share, Specialize, and Compete (SSC) network:
• SHARE: The input code snippet is encoded with a neural network. This is a shared representation used by all repair types.
• SPECIALIZE: Each repair type is associated with its own specialized neural module (Andreas et al., 2016), which emits a score for every repair candidate of that type.
• COMPETE: The raw scores from the specialized modules are normalized to compete in comparable probability space.
Our model can also be thought of as an evolution of work on neural code completion and summarization (Allamanis et al., 2016; Bhoopchand et al., 2016). Like those systems, our SHARE network is used to learn a rich semantic understanding of the code snippet. Our SPECIALIZE modules then build on top of this representation to learn how to identify and fix specific bug types.
Although we have described our framework in relation to the problem of code repair, it has a number of other possible applications in sequence transformation scenarios where the input and output sequences have high overlap. For example, it could be applied to natural language grammar correction (Schmaltz et al., 2016), machine translation post editing (Libovickỳ et al., 2016), source code refactoring (Allamanis et al., 2015), or program optimization (Bunel et al., 2016).
2 RELATED WORK
We believe this paper to be the first work addressing the issue of semantic program repair in the absence of unit tests, where functionality must be inferred from the code. However, our work adds to a substantial literature on program repair and program analysis, some of which we describe below:
Neural Syntax Repair: There have been several recent techniques developed for training neural networks to correct syntax errors in code. DeepFix (Gupta et al., 2017) uses an attentional seq-to-seq model to fix syntax errors in a program by predicting both the buggy line and the statement to replace it. Bhatia and Singh (2016) train an RNN-based token sequence model to predict token insertion or replacement at program locations provided by the compiler to fix syntax errors.
Statistical Program Repair: Approaches such as Arcuri and Yao (2008) and Goues et al. (2012) use genetic programming techniques to iteratively propose program modifications. Prophet (Long and Rinard, 2016) learns a probabilistic model to rank patches for null pointer exceptions and array out-of-bounds errors. The model is learnt from human patches using a set of hand-engineered program features. In contrast, our neural model automatically learns useful program representations for repairing a much richer class of semantic bugs.
Natural Source Code / Big Code: A number of recent papers have trained statistical models on large datasets of real-world code. These papers study tasks involving varying degrees of reasoning about source code, such as code completion (Raychev et al., 2015; 2014; Bhoopchand et al., 2016) and variable/class/function renaming (Raychev, 2016; Allamanis et al., 2015).
Rule-Based Static Analyzers: Rule-based analyzers for Python (Pylint (Thenault, 2001) and Pyflakes (PyCQA, 2012)) handle a highly disjoint set of issues compared to the type of bugs we are targeting, and generally do not directly propose fixes.
3 PROBLEM OVERVIEW
As mentioned in the introduction, our goal is to develop a system which can statically analyze a piece of code and predict the location of the bug along with the actual fix. We do not assume access to unit tests or any other specification associated with the snippet being repaired. These proposed repairs can be directly presented to the user, or taken as input to some downstream application. Since the task of “fixing bugs in code” is incredibly broad, we limit ourselves to four classes of common Python bugs, described with examples below.
Ideally, we would train such a repair model using a large number of buggy/repaired code snippets. However, such a large data set does not exist. It is possible to extract a modest test set of genuine bugs from project commit histories, but it is not enough to train a large-scale neural network. Fortunately, there is a large amount of real-world non-buggy code available to which bugs can be injected. We demonstrate that a model trained on synthesized bugs is able to generalize to a test set with real bugs.
Training Data To create the training data, we first downloaded all Python projects from GitHub that were followed by at least 15 users and had permissive licenses (MIT/BSD/Apache), which amounted to 19,000 total repositories. We extracted every function from each Python source file as a code snippet. In all experiments presented here, each snippet was analyzed on its own without any surrounding context. All models explored in this paper only use static code representations, so each snippet must be parsable as an Abstract Syntax Tree (AST), but does not need to be runnable. Note that many of the extracted functions are member functions of some class, so although they can be parsed, they are not runnable without external context. We only kept snippets with between 5 and 300 nodes in their AST, which approximately corresponds to 1 to 40 lines of code. The average extracted snippet had 45 AST nodes and 6 lines of code.
This data was carved into training, test, and validation at the repository level, to eliminate any overlap between training and test. We also filtered out any training snippet which overlapped with any test snippet by more than 5 lines. In total we extracted 2,900,000 training snippets, and held-out 2,000 for test and 2,000 for validation.
Bug/Repair Types In this work, we consider four general classes of semantic repairs, which were chosen to be “simple” but still common during development, as reported by Python programmers:
• VarReplace: An incorrect local variable is used at a particular location, and should be replaced with another variable from the snippet.
• CompReplace: An incorrect comparison operator is used at a particular location.
• IsSwap: The is operator is used instead of is not, or vice versa.
• ClassMember: A self accessor is missing from a variable.
Generating synthetic bugs from these categories is straightforward. For example, for VarReplace, we synthesize bugs by replacing one random variable from a snippet with another variable from the same snippet. All bug types, locations, and replacements were chosen with random uniform likelihood. We applied this bug synthesis procedure to all of the training snippets to create our training data, as well as a synthetic test set (Synth-Bug Test).
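As an illustration, here is a minimal sketch of VarReplace injection using Python's ast module (ast.unparse requires Python 3.9+). The paper's actual pipeline is not published, so this is illustrative only:

```python
import ast
import random

def inject_var_replace(source):
    # Synthesize a VarReplace bug: pick one variable occurrence uniformly at
    # random and swap in a different name from the same snippet. A real
    # pipeline would restrict to local variables, not e.g. built-in names.
    tree = ast.parse(source)
    names = [n for n in ast.walk(tree) if isinstance(n, ast.Name)]
    distinct = sorted({n.id for n in names})
    if len(distinct) < 2:
        return None  # need at least two variables to swap
    target = random.choice(names)
    target.id = random.choice([v for v in distinct if v != target.id])
    return ast.unparse(tree)
```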
Real-Bug Test Set In order to evaluate on a test set where both the code and bugs were real, we mined the Git commit history from the projects crawled from Github. We found that it was quite difficult to automatically distinguish bug repairs from other code changes such as refactoring, especially since we wanted to avoid introducing biases into the data set through the use of complex filtering heuristics. For this reason, we limited extraction to commits where exactly one line in a file
was changed, and the commit contained a word from the list “bug, error, issue, exception, fix”. We then filtered these commits to only keep those that correspond to one of our four bug types. Overall, we obtained 926 buggy/repaired snippet pairs with exactly one bug each. We believe that the small number of extracted snippets does not reflect the true frequency of these bugs during development, but rather reflect the fact that (1) one line Git commits are quite rare, (2) these type of bugs rarely make it into the public branch of high-quality repositories.
4 BASELINE ATTENTIONAL SEQUENCE-TO-SEQUENCE MODEL
Since the goal of program repair is to transform a buggy snippet into a repaired snippet, an obvious baseline is an attentional sequence-to-sequence neural network (Bahdanau et al., 2014), which has been successfully used for the related tasks of syntactic code repair and code completion. On those tasks, sequence-to-sequence models have been shown to outperform a number of baseline methods such as n-gram language models or classifiers.
Because this model must actually generate a sequence, we first converted the buggy/repaired ASTs from Synth-Bug Train back to their tokenized source code, which is a simple deterministic process. The architecture used is almost identical to the machine translation system of Bahdanau et al. (2014). To handle the high number of rare tokens in the data, tokens were split by underscores and camel case. The size of the neural net vocabulary was 50,000, and the final out-of-vocabulary (OOV) rate was 1.1%. In evaluation we included OOVs in the reference, so OOVs did not cause a degradation in results. The LSTMs were 512-dimensional and decoding was performed with a beam of 8. When evaluating on the Single-Repair Synth-Bug Test set, the 1-best output exactly matches the reference 26% of the time. If we give the model credit when it predicts the correct repair but also predicts other changes, the accuracy is 41%.
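For concreteness, one plausible implementation of the underscore/camel-case subtokenization described above is sketched below; the exact splitting rules used in the paper are not specified:

```python
import re

def subtokenize(token):
    # Split identifiers by underscores and camelCase to shrink the
    # vocabulary, e.g. "getUserName_v2" -> ["get", "User", "Name", "v2"].
    parts = [p for p in token.split("_") if p]
    out = []
    for p in parts:
        out += re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", p)
    return out
```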
Although this accuracy seems non-trivial, there are some intuitive weaknesses in using a sequence-to-sequence architecture for semantic code repair. First, the system is burdened with constructing the entire output sequence, even though on average it is 98.5% identical to the input. Second, potential repairs at different locations do not fairly “compete” with one another in probability space, but only compete with tokens at the same location. Third, it is difficult to use a richer code representation such as the AST, since the repaired code must be generated.
5 SHARE, SPECIALIZE, AND COMPETE (SSC) MODEL
Instead of directly generating the entire output snippet with a neural network, we consider an alternative approach where repairs are iteratively applied to the input snippet. Here, for each bug type described in Section 3, the system proposes all possible repairs of that type in the snippet. Although these candidate generators are manually written, they simply propose all possible repairs of a given type and do not perform any heuristic pruning, so each of the four generators can be written in a few lines of code. The challenging work of determining the correct repair using the code context is performed by our statistical model.
For clarity of terminology, a repair candidate is a particular fix that can be made at a particular location (e.g., “Replace == with != at node 4”). A repair instance refers to a particular repair location the generator proposes and all of the candidates at that location. Each instance is guaranteed to have exactly one no-op candidate, which results in no change to the AST if applied (e.g., “Replace == with == at node 4”). The reference label refers to the correct candidate of a given instance (e.g., “The correct replacement at node 4 is <=”). Note that for the majority of repair instances that are proposed, the reference label will be the no-op candidate.
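As an illustration of how lightweight these generators can be, below is a hypothetical CompReplace generator over a Python AST; the paper's own generators are not published, so the names and details here are assumptions:

```python
import ast

COMPARISON_OPS = (ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE)

def comp_replace_instances(tree):
    # One repair instance per single-operator comparison: the candidates
    # are all six operators, and keeping the original operator is the
    # no-op candidate for that instance.
    for node in ast.walk(tree):
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            yield node, list(COMPARISON_OPS)
```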
We now present the statistical model used to score repair candidates. We refer to it as a Share, Specialize, and Compete (SSC) network. A visual representation is given in Figure 1.
5.1 SHARE
The SHARE component performs a rich encoding of the input AST using a neural network. Crucially, this encoding is only conditioned on the AST itself and not on any repair candidates, so it serves as a shared representation for the next component. This network can take many forms, with the only restriction being that it must emit one vector of some dimension d for each node in the AST. An example of a Python AST is given on the right side of Figure 1.
Here, for efficiency purposes, we encode the AST with a sequential bidirectional LSTM by enumerating a depth-first traversal of the nodes, which roughly corresponds to “source code order.” However, we encode the rich AST structure by using embeddings for (1) the absolute position of the node in the AST, (2) the type of the node in the AST, (3) the relationship between the node and its parent, and (4) the surface form string of the node.
These tokens are projected through an embedding layer and then concatenated, and the resulting vector is used as input to a bidirectional LSTM. The output of this layer is represented as $H = (h_1, h_2, \ldots, h_n)$, where $h_i \in \mathbb{R}^d$, $d$ is the hidden dimension, and $n$ is the number of nodes in the AST. The core concept of the shared component is that the vast majority of neural computation is performed here, independent of the repairs themselves. We contrast this with an alternative approach where each repair candidate is applied to the input AST and each resulting repair-candidate AST is encoded with an RNN; such an approach would be orders of magnitude more expensive to train and evaluate.
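A minimal PyTorch sketch of this SHARE encoder, assuming pre-computed index tensors for the four per-node features (the dimensions follow Section 6: embedding size 128, hidden size 512):

```python
import torch
import torch.nn as nn

class ShareEncoder(nn.Module):
    # Embeds four features per AST node (position, node type, parent
    # relation, surface string), concatenates them, and runs a BiLSTM
    # over the depth-first node order to get one vector per node.
    def __init__(self, n_pos, n_types, n_rels, vocab, emb=128, d=512):
        super().__init__()
        self.pos = nn.Embedding(n_pos, emb)
        self.typ = nn.Embedding(n_types, emb)
        self.rel = nn.Embedding(n_rels, emb)
        self.tok = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(4 * emb, d // 2, bidirectional=True,
                            batch_first=True)

    def forward(self, pos, typ, rel, tok):
        x = torch.cat([self.pos(pos), self.typ(typ),
                       self.rel(rel), self.tok(tok)], dim=-1)
        H, _ = self.lstm(x)  # H: (batch, n, d), one vector per AST node
        return H
```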
5.2 SPECIALIZE
The SPECIALIZE component scores each repair candidate using a specialized network module (Andreas et al., 2016) for each repair type. Instances of the same type are processed by the same module, but obtain separate scores since they have different input. Each module takes as input the shared representation H and a repair instance R with m candidates. It produces an un-normalized scalar score for each candidate in the instance, ŝ = (s1, ..., sm). We use two module types:
Multi-Layer Perceptron (MLP) Module: This module performs scoring over a fixed label set using one non-linear hidden layer. This is used for the CompReplace, IsSwap, and ClassMember generators. It is computed as:
$\hat{s} = V \tanh(W h_j)$
where $V \in \mathbb{R}^{m \times c}$, $W \in \mathbb{R}^{c \times d}$, $c$ is the hidden dimension, $m$ is the number of labels (i.e., repair candidates), and $j$ is the repair location corresponding to the repair instance $R$. Note that separate $V$ and $W$ weights are learned for each repair type.
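A minimal PyTorch rendering of this module is sketched below; PyTorch and the class/argument names are assumptions for illustration, since the in-house implementation is not shown.

import torch
import torch.nn as nn

class MLPModule(nn.Module):
    """Sketch of the fixed-label-set module: scores = V tanh(W h_j).
    A separate instance (separate V, W) is created per repair type."""

    def __init__(self, d, c, m):
        super().__init__()
        self.W = nn.Linear(d, c, bias=False)  # W in R^{c x d}
        self.V = nn.Linear(c, m, bias=False)  # V in R^{m x c}

    def forward(self, h_j):
        # h_j: (d,) shared encoding at the repair location j.
        return self.V(torch.tanh(self.W(h_j)))  # (m,) unnormalized scores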
Pooled Pointer Module: Predicting variables for VarReplace presents a unique challenge when modeling code repair. First, the variable names in a test snippet may never have been seen in training. More importantly, the semantics of a variable are primarily defined by its usage, rather than its name. To address this, instead of using a fixed output layer, each candidate (i.e., another variable) is encoded using pointers to each usage of that variable in the AST. An example is given in Figure 1. Formally, it is computed as:
$s_i = \tanh(W h_j) \cdot \big[\operatorname{MaxPool}_{k \in p_i}\big(\tanh(V h_k)\big)\big]$
where $i$ is the candidate (i.e., variable) index, $p_i$ is the list of locations (pointers) of variable $i$ in the AST, $j$ is the location of the repair in the AST, and $V, W \in \mathbb{R}^{c \times d}$ are learned weight matrices.
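The same equation can be sketched as a module as follows; again PyTorch and all names here are illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn

class PooledPointerModule(nn.Module):
    """Sketch of the VarReplace module: each candidate variable is summarized
    by max-pooling tanh(V h_k) over its usage locations p_i, then scored by a
    dot product against tanh(W h_j) at the repair location j."""

    def __init__(self, d, c):
        super().__init__()
        self.W = nn.Linear(d, c, bias=False)  # W in R^{c x d}
        self.V = nn.Linear(d, c, bias=False)  # V in R^{c x d}

    def forward(self, H, j, pointers):
        # H: (n_nodes, d) shared encodings; pointers: one index tensor per
        # candidate variable listing where that variable occurs in the AST.
        query = torch.tanh(self.W(H[j]))                          # (c,)
        reps = [torch.tanh(self.V(H[p])).max(dim=0).values        # MaxPool
                for p in pointers]
        return torch.stack([torch.dot(query, v) for v in reps])   # (m,)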
5.3 COMPETE
Once a scalar score has been produced for each repair candidate, these must be normalized to compete against one another. We consider two approaches to normalizing these scores:
Local Norm: A separate softmax is performed for each repair instance (i.e., location and type), so candidates are only normalized against other candidates in the same instance, including no-op. At test time we sort all candidates across all instances by probability, even though they have not been normalized against each other.
Global Norm: All candidates at all locations are normalized with a single softmax. No-op candidates are not included in this formulation.
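The two schemes can be sketched as follows, assuming each instance's scores are stored as a 1-D tensor with the no-op candidate at index 0 (a layout we assume for illustration):

import torch
import torch.nn.functional as F

def local_norm(instance_scores):
    """One softmax per repair instance; the no-op candidate competes too."""
    return [F.softmax(s, dim=0) for s in instance_scores]

def global_norm(instance_scores, noop_index=0):
    """A single softmax over all candidates at all locations, with the
    no-op candidate of each instance removed first."""
    kept = [torch.cat([s[:noop_index], s[noop_index + 1:]])
            for s in instance_scores]
    return F.softmax(torch.cat(kept), dim=0)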
6 EXPERIMENTAL RESULTS
We train the SSC model on the Synth-Bug Train data for 30 epochs. Different bugs are synthesized at each epoch which significantly mitigates over-fitting. We set the hidden dimensions of the SHARE and SPECIALIZE components to 512, and the embedding size to 128. A dropout of 0.25 is used on the output of the SHARE component. Training was done with plain SGD + gradient clipping using an in-house toolkit. A small amount of hyperparameter tuning was performed on the Synth-Bug Val set.
In the first condition we evaluate, all snippets in both training and test have exactly one bug each. As was described in Section 3, for Synth-Bug Test, the code snippets are real, but the bugs have been artificially inserted at random. For Real-Bug Test, we extracted 926 buggy/fixed snippet pairs mined from GitHub commit logs, so both the snippet and bug are real. The average snippet in the Real-Bug Test set has 31 repair locations and 102 total repair candidates, compared to 20 locations and 50 candidates for the Synth-Bug Test set.
Table 1 presents Single-Repair results on Synth-Bug and Real-Bug test sets. The accuracy metric denotes how often the 1-best repair prediction exactly matches the reference repair, i.e., the model correctly detects where the bug is and correctly predicts how to fix it. In this case, the model was constrained to predict exactly one repair, but all candidates across all repair types are directly competing against one another. On Synth-Bug, the SSC model drastically outperforms the attentional sequence-to-sequence model, even using the upper bound seq-to-seq accuracy. Since global normalization and local normalization have similar performance and it is not obvious how to extend global normalization to multiple repairs, we use local normalization for multi-repair experiments.
On Real-Bug Test, the absolute accuracy is lower than on Synth-Bug Test, but the SSC model still significantly outperforms the seq-to-seq baseline. To better understand the absolute quality of the Real-Bug Test results, we perform a preliminary human evaluation in Section 6.
Example predictions from the Real-Bug Test set are presented below. The red region is the bug, and the green is the reference repair. For the incorrect predictions, the blue region is the predicted repair. Results on all 926 Real-Bug Test examples are provided in the supplementary material.
In the multi-repair setting, we consider the more realistic scenario where a snippet may have multiple bugs, or may have none. To model this scenario, the data was re-generated so that 0, 1, 2, or 3 bugs were added to each training/test/val snippet, with equal probability of each. We refer to these new sets as Synth-Multi-Bug Test and Synth-Multi-Bug Val. Unfortunately, we were not able to extract multi-bug examples from the Real-Bug data.
The major new complexity is that the system must now determine how many repairs to predict per snippet, if any. We use a simple threshold-based approach: since each repair candidate is assigned a probability by the model, we simply emit all repairs which have probability greater than δ. The system is not constrained to emit only 3 repairs. A parameter sweep over the validation set revealed that accuracy is surprisingly insensitive to δ, so we simply use δ = 0.5. Note that we only perform a single pass of repair scoring here, but in future work we will explore an iterative decoder.
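In code, the decision rule amounts to a one-line filter (a sketch; the pair layout is assumed):

def emit_repairs(scored_candidates, delta=0.5):
    """Emit every non-no-op repair whose probability exceeds delta.

    scored_candidates: iterable of (repair, probability) pairs after local
    normalization. Zero repairs may be emitted (snippet judged clean), and
    the system is not capped at three repairs.
    """
    return [repair for repair, prob in scored_candidates if prob > delta]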
Results are presented on the right side of Table 1. For accuracy at the per-repair level, there is only a moderate decrease in F-score from 85% to 81% between the 1-repair and 3-repair settings. The Exact Accuracy does decrease significantly, but not beyond the “expected value.” In other words, three independent 1-repair snippets have an expected accuracy of $0.78^3 = 0.47$, which is similar to the 45% accuracy observed for 3-repair snippets. We also see that the system is 82% accurate at correctly predicting when a snippet has no bugs.
Human Evaluation To better understand the significance of the performance of our system, we performed a preliminary human evaluation under identical conditions to the model. The evaluator was presented with a snippet from the test set, where all repair instances were highlighted in the code. The evaluator could click on a repair location to see all candidates at that location. Each of the four bug types was explained to the evaluators, who were told that there was always exactly one bug per snippet. This evaluation required experienced Python programmers to perform a complex task, so we performed a small evaluation using 4 evaluators and 30 snippets each from the Real-Bug Test set. Evaluators typically used 2-6 minutes per snippet. These snippets were limited to 150 nodes for the benefit of the human evaluators, so the SSC model accuracy is higher on this subset than on the full set.
On these snippets, the humans achieved 37% accuracy compared to the 60% accuracy of the SSC model. One possible reason for this performance gap is that the model is simply better than humans at this task, presumably because it has been able to learn from such a large amount of data. Another possible reason is that humans did not spend the time or mental energy to perform as well as they could. To examine these possibilities, we performed a second evaluation with the same set of humans. In this evaluation, instead of having to consider all possible repairs – up to 100 candidates – the humans only had to decide between the four “most likely” repair candidates. These candidates were generated by taking the top four predictions from the SSC model (or the top three and the correct repair), shown in random order. In this evaluation, humans achieved 76% accuracy, which shows that the low performance of humans in the full task is due to the mental energy required, rather than lack of context or code understanding. We believe that these evaluations demonstrate that Real-Bug Test is a challenging set and that the accuracy achieved by the SSC model is empirically strong.
7 ANALYSIS AND DISCUSSION
Our first goal is to conceptually understand at what “level” the model was able to generalize to new snippets. Although the hidden activations of the neural network model are not directly interpretable, we can attempt to interpret the latent model space using nearest neighbor retrieval on the hidden vectors $h_i$. The goal is to determine if the model is simply memorizing common n-grams, or if it is actually learning high-level repair concepts. Nearest neighbor retrievals for several test snippets are presented here:
In Example 1, we see the model is able to learn a high-level pattern “y.x = x”. In Example 2 we see the pattern “if (x c1 y...) elif (x c2 y...)”. In Example 3 we see the pattern “Strings usually use the equality (or inequality) operator.” In all cases, the surface form of the training nearest neighbor is very different from the test snippet. From this, it appears that the SSC model is able to learn a number of interesting, high-level patterns which it uses to generalize to new data.
We next examined failure cases of the SSC model which a human evaluator was able to repair correctly. Here, the primary weakness of the model was that humans were able to better infer program intent by using variable names, function names, and string literals. One major fault in the current implementation is a lack of sub-word representation. For example, consider a repair of the expression “dtypes.append(x)” where x could be dtype or syncnode. It is easy for a human to infer that dtype is the more sensible choice even without a deeper understanding of the code. In future work we plan to explore character-level encoding of value strings so that lexical similarity can be modeled latently by the network.
We finally examined cases where the SSC model succeeded but the human evaluator failed. Generally, we conclude that the model’s primary advantage was the sheer amount of data it was able to learn from. For example, consider the expression “if (db.version_info <= 3)”. This may not be immediately suspicious to a human, but if we analyze the reference training data we can measure that the pattern “if (x.version_info <= y)” is 10 times less frequent than the pattern “if (x.version_info < y)”. Intuitively, this makes sense because if a feature is added in version y, it is not useful to check <= y. However, the neural model is able to easily learn such probabilistic distributions even without deeper understanding of why they are true.
8 CONCLUSION
We presented a novel neural network architecture that allows specialized network modules to explicitly model different transformation types based on a shared input representation. When applied to the domain of semantic code repair, our model achieves high accuracy relative to a seq2seq baseline and an expert human evaluation. In our analysis of the results, we find that our system is able to learn fairly sophisticated repair patterns from the training data. In future work we plan to expand our model to cover a larger set of bug types, and ideally these bug types would be learned automatically from a corpus of real-world bugs. We also plan to apply the SSC model to other tasks.
A POOLED POINTER MODULE IMPLEMENTATION
Figure 2 provides a diagram of the pooled pointer network module.
As described in Section 5.2, the pooling module consists of a projection layer followed by a pooling operation. For each variable $i$, its representation is computed by pooling the set of all its occurrences, $p_i$:
$v_i = \operatorname{MaxPool}_{k \in p_i}\big(\tanh(V h_k)\big)$
where $h_k$ denotes the representation computed by the SHARE module at location $k$.
The similarity module produces un-normalized scores for each potential variable replacement $i$. When applied at repair location $j$, it computes:
$s_{ij} = \tanh(W h_j) \cdot v_i$
B EXAMPLES OF PREDICTIONS
We include the full set of system predictions for the Real-Bug Test set. We have made these available at https://iclr2018anon.github.io/semantic_code_repair/index.html.
C ADDITIONAL RESULTS
Varying source code complexity Figure 3 presents accuracy of the model across functions with varying numbers of repair candidates. While the repair accuracy decreases with the number of repair candidates, the model achieves reasonably high accuracy even for functions with over 100 repair candidates. Among functions with 101-150 repair candidates, the model accuracy is 73% for synthetically introduced bugs and 36% for real bugs.
Importance of AST structure The Python abstract syntax tree is a rich source of semantic information about the tokens in a snippet. As described in Section 5.1, in addition to the original token string, we also include (1) the absolute position of the node in the AST, (2) the type of the node, and (3) the relationship between the node and its parent. To test the model’s reliance on this information, we present ablation results over these additional feature layers below in Table 2.
We see that using information from the AST provides a significant performance gain. Still, even when only using the surface form values, the SSC model outperforms the attentional sequence-to-sequence baseline by a large margin (78.3% repair accuracy compared to 26% for the sequence-to-sequence model). | 1. What is the focus of the paper regarding practical code repair?
2. What are the strengths and weaknesses of the proposed neural network architecture?
3. Do you have any concerns about the scope and limitations of the approach?
4. Are there any questions or issues with the experimental design and comparisons? | Review | Review
This paper presents a neural network architecture consisting of the share, specialize and compete parts for repairing code in four cases, i.e., VarReplace, CompReplace, IsSwap, and ClassMember. Experiments on the source codes from Github are conducted and the performance is evaluated against one sequence-to-sequence baseline method.
Pros:
* The problem studied in this paper is of practical significance.
* The proposed approach is technically sound in general. The paper is well-written and easy to follow.
Cons:
* The scope of this paper is narrow. The proposed method can only repair programs in the four special cases. This raises a natural question: how many other cases are there besides these four? It seems that even if the proposed method works well in practice, it would not be very useful, since it is effective for only 4 out of the huge number of ways in which a program could be wrong.
* Although the proposed architecture is specially designed for this problem, the components are a straightforward application of existing approaches. E.g., the SHARE component, which uses a bidirectional LSTM to encode the AST, has been studied before, and the specialized network modules have been studied in (Andreas et al., 2016). This reduces the novelty and technical contribution of this paper.
* Many technical details have not been well explained. For example, how is the number of candidates m determined, since different snippets may have different numbers of candidates? How is the model trained? What is the loss function?
* The experiments are weak. 1) State-of-the-art program repair approaches such as the statistical program repair models (Arcuri and Yao, 2008) (Goues et al., 2012) and rule-based static analyzers (Thenault, 2001) (PyCQA, 2012) should be compared. 2) The comparison between SSC and Seq-to-Seq is not fair, since the baseline is more general and not specially crafted for these 4 cases.
ICLR | Title
Semantic Code Repair using Neuro-Symbolic Transformation Networks
Abstract
We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.
1 INTRODUCTION
The term automatic code repair is typically used to describe two overarching tasks: The first involves fixing syntactic errors, which are malformations that cause the code to not adhere to some language specification (Gupta et al., 2017; Bhatia and Singh, 2016). The second, which is the focus of this work, involves fixing semantic bugs, which refer to any case where the actual program behavior is not the same as the behavior the programmer intended. Clearly, this covers an extremely wide range of code issues, so this work is limited to a class of semantic bugs, which we roughly define as: “Bugs that can be identified and fixed by an experienced human programmer, without running the code or having deep contextual knowledge of the program.” This does not imply that the bugs are trivially fixable, as they often require time-consuming analysis of the code, rich background knowledge of the language and APIs, and complex logical reasoning about the original programmer’s intent.
Unlike previous work, we do not assume access to unit tests at training or test time. This requirement is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might. Most previous work relies on unit tests – a common theme is combining coarse-grained repair models with search algorithms to find some repair that satisfies unit tests (Harman, 2010; Singh et al., 2013). In contrast, our proposed task requires models to deeply understand the code in order to propose a single set of repairs. Thus, semantic code repair without unit tests presents a concrete, real-world test bed for the more general task of understanding and modifying source code.
Our semantic repair model was trained on a large corpus of open-source Python projects with synthetically injected bugs. We test on both real-bug and synthetic-bug test sets. 1 To train the repair model, we first evaluated an attentional sequence-to-sequence architecture. Although this model was able to achieve non-trivial results, we believe it to be an unsuitable solution in a number of ways, such as the lack of direct competition between repair candidates at different locations. Instead, we
1All data sets will be made publicly available.
use an alternative approach which decouples the non-statistical process of generating and applying repair proposals from the statistical process of scoring and ranking repairs.
This two-stage process itself is not new, but the core novelty in this work is the specific neural framework we propose for scoring repair candidates. We refer to our architecture as a Share, Specialize, and Compete (SSC) network:
• SHARE: The input code snippet is encoded with a neural network. This is a shared representation used by all repair types.
• SPECIALIZE: Each repair type is associated with its own specialized neural module (Andreas et al., 2016), which emits a score for every repair candidate of that type.
• COMPETE: The raw scores from the specialized modules are normalized to compete in comparable probability space.
Our model can also be thought of as an evolution of work on neural code completion and summarization Allamanis et al. (2016); Bhoopchand et al. (2016). Like those systems, our SHARE network is used to learn a rich semantic understanding of the code snippet. Our SPECIALIZE modules then build on top of this representation to learn how to identify and fix specific bug types.
Although we have described our framework in relation to the problem of code repair, it has a number of other possible applications in sequence transformation scenarios where the input and output sequences have high overlap. For example, it could be applied to natural language grammar correction (Schmaltz et al., 2016), machine translation post editing (Libovickỳ et al., 2016), source code refactoring (Allamanis et al., 2015), or program optimization (Bunel et al., 2016).
2 RELATED WORK
We believe this paper to be the first work addressing the issue of semantic program repair in the absence of unit tests, where functionality must be inferred from the code. However, our work adds to a substantial literature on program repair and program analysis, some of which we describe below:
Neural Syntax Repair: There have been several recent techniques developed for training neural networks to correct syntax errors in code. DeepFix (Gupta et al., 2017) uses an attentional seq-to-seq model to fix syntax errors in a program by predicting both the buggy line and the statement to replace it. Bhatia and Singh (2016) train an RNN based token sequence model to predict token insertion or replacement at program locations provided by the compiler to fix syntax errors.
Statistical Program Repair: Approaches such as Arcuri and Yao (2008) and Goues et al. (2012) use genetic programming techniques to iteratively propose program modifications. Prophet (Long and Rinard, 2016) learns a probabilistic model to rank patches for null pointer exceptions and array out-of-bounds errors. The model is learnt from human patches using a set of hand-engineered program features. In contrast, our neural model automatically learns useful program representations for repairing a much richer class of semantic bugs.
Natural Source Code / Big Code: A number of recent papers have trained statistical models on large datasets of real-world code. These papers study tasks involving varying degrees of reasoning about source code, such as code completion (Raychev et al., 2015; 2014; Bhoopchand et al., 2016) and variable/class/function renaming (Raychev, 2016; Allamanis et al., 2015).
Rule-Based Static Analyzers: Rule-based analyzers for Python (Pylint (Thenault, 2001) and Pyflakes (PyCQA, 2012)) handle a highly disjoint set of issues compared to the type of bugs we are targeting, and generally do not directly propose fixes.
3 PROBLEM OVERVIEW
As mentioned in the introduction, our goal is to develop a system which can statically analyze a piece of code and predict the location of the bug along with the actual fix. We do not assume to have unit tests or any other specification associated with the snippet being repaired. These proposed repairs can be directly presented to the user, or taken as input to some downstream application. Since the task of “fixing bugs in code” is incredibly broad, we limit ourselves to four classes of common Python bugs that are described with examples in Section 3.
Ideally, we would train such a repair model using a large number of buggy/repaired code snippets. However, such a large data set does not exist. It is possible to extract a modest test set of genuine bugs from project commit histories, but it is not enough to train a large-scale neural network. Fortunately, there is a large amount of real-world non-buggy code available into which bugs can be injected. We demonstrate that a model trained on synthesized bugs is able to generalize to a test set with real bugs.
Training Data To create the training data, we first downloaded all Python projects from GitHub that were followed by at least 15 users and had permissive licenses (MIT/BSD/Apache), which amounted to 19,000 total repositories. We extracted every function from each Python source file as a code snippet. In all experiments presented here, each snippet was analyzed on its own without any surrounding context. All models explored in this paper only use static code representations, so each snippet must be parsable as an Abstract Syntax Tree (AST), but does not need to be runnable. Note that many of the extracted functions are member functions of some class, so although they can be parsed, they are not runnable without external context. We only kept snippets with between 5 and 300 nodes in their AST, which approximately corresponds to 1 to 40 lines of code. The average extracted snippet had 45 AST nodes and 6 lines of code.
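The extraction step can be sketched in a few lines with Python's standard ast module; this is a minimal illustration of the described pipeline rather than the actual crawler, and the function name is hypothetical:

import ast

def extract_snippets(source, min_nodes=5, max_nodes=300):
    """Extract every function from a Python source file as a standalone
    snippet, keeping only those whose AST has 5-300 nodes (roughly 1-40
    lines of code)."""
    snippets = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            size = sum(1 for _ in ast.walk(node))
            if min_nodes <= size <= max_nodes:
                snippets.append(node)
    return snippets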
This data was carved into training, test, and validation at the repository level, to eliminate any overlap between training and test. We also filtered out any training snippet which overlapped with any test snippet by more than 5 lines. In total we extracted 2,900,000 training snippets, and held-out 2,000 for test and 2,000 for validation.
Bug/Repair Types In this work, we consider four general classes of semantic repairs, which were chosen to be “simple” but still common during development, as reported by the Python programmers:
• VarReplace: An incorrect local variable is used at a particular location, and should be replaced with another variable from the snippet.
• CompReplace: An incorrect comparison operator is used at a particular location.
• IsSwap: The is operator is used instead of is not, or vice versa.
• ClassMember: A self accessor is missing from a variable.
Generating synthetic bugs from these categories is straightforward. For example, for VarReplace, we synthesize bugs by replacing one random variable from a snippet with another variable from the same snippet. All bug types, locations, and replacements were chosen with random uniform likelihood. We applied this bug synthesis procedure to all of the training snippets to create our training data, as well as a synthetic test set (Synth-Bug Test).
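For VarReplace, for instance, the injection procedure might be sketched as follows (an illustrative rendering under the stated uniform-sampling assumption, not the authors' code):

import ast
import random

def inject_var_replace(tree, rng=random):
    """Synthesize one VarReplace bug: pick a variable occurrence uniformly at
    random and replace it with a different variable from the same snippet."""
    names = [n for n in ast.walk(tree) if isinstance(n, ast.Name)]
    variables = sorted({n.id for n in names})
    if len(variables) < 2:
        return None                          # no second variable to swap in
    target = rng.choice(names)
    buggy = rng.choice([v for v in variables if v != target.id])
    fixed, target.id = target.id, buggy      # mutate the AST in place
    return target, buggy, fixed              # (location, bug, reference fix)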
Real-Bug Test Set In order to evaluate on a test set where both the code and bugs were real, we mined the Git commit history from the projects crawled from Github. We found that it was quite difficult to automatically distinguish bug repairs from other code changes such as refactoring, especially since we wanted to avoid introducing biases into the data set through the use of complex filtering heuristics. For this reason, we limited extraction to commits where exactly one line in a file
was changed, and the commit contained a word from the list “bug, error, issue, exception, fix”. We then filtered these commits to only keep those that correspond to one of our four bug types. Overall, we obtained 926 buggy/repaired snippet pairs with exactly one bug each. We believe that the small number of extracted snippets does not reflect the true frequency of these bugs during development, but rather reflect the fact that (1) one line Git commits are quite rare, (2) these type of bugs rarely make it into the public branch of high-quality repositories.
4 BASELINE ATTENTIONAL SEQUENCE-TO-SEQUENCE MODEL
Since the goal of program repair is to transform a buggy snippet into a repaired snippet, an obvious baseline is an attentional sequence-to-sequence neural network (Bahdanau et al., 2014), which has been successfully used for the related tasks of syntactic code repair and code completion. On those tasks, sequence-to-sequence models have been shown to outperform a number of baseline methods such as n-gram language models or classifiers.
Because this model must actually generate a sequence, we first converted the buggy/repaired ASTs from Synth-Bug Train back to their tokenized source code, which is a simple deterministic process. The architecture used is almost identical to the machine translation system of Bahdanau et al. (2014). To handle the high number of rare tokens in the data, tokens were split by underscores and camel case. The size of the neural net vocabulary was 50,000, and the final out-of-vocabulary (OOV) rate was 1.1%. In evaluation we included OOVs in the reference, so OOVs did not cause a degradation in results. The LSTMs were 512-dimensional and decoding was performed with a beam of 8. When evaluating on the Single-Repair Synth-Bug Test set, the 1-best output exactly matches the reference 26% of the time. If we give the model credit when it predicts the correct repair but also predicts other changes, the accuracy is 41%.
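The underscore/camel-case splitting used to shrink the vocabulary might look like the sketch below; the exact splitting rules are not given in the text, so this is a plausible reconstruction:

import re

def subtokenize(token):
    """Split a source token on underscores and camelCase boundaries."""
    pieces = []
    for part in token.split("_"):
        pieces += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|[0-9]+", part)
    return pieces

print(subtokenize("getHTTPResponse_code"))  # ['get', 'HTTP', 'Response', 'code']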
Although this accuracy seems to be non-trivial, there are some intuitive weaknesses in using a sequence-to-sequence architecture for semantic code repair. First, the system is burdened with constructing the entire output sequence, even though on average it is 98.5% identical to the input. Second, potential repairs at different locations do not fairly “compete” with one another in probability space, but only compete with tokens at the same location. Third, it is difficult to use a richer code representation such as the AST, since the repaired code must be generated.
5 SHARE, SPECIALIZE, AND COMPETE (SSC) MODEL
Instead of directly generating the entire output snippet with a neural network, we consider an alternative approach where repairs are iteratively applied to the input snippet. Here, for each bug type described in Section 3, the system proposes all possible repairs of that type in the snippet. Although these candidate generators are manually written, they simply propose all possible repairs of a given type and do not perform any heuristic pruning, so each of the four generators can be written in a few lines of code. The challenging work of determining the correct repair using the code context is performed by our statistical model.
For clarity of terminology, a repair candidate is a particular fix that can be made at a particular location (e.g., “Replace == with != at node 4”). A repair instance refers to a particular repair location the generator proposes and all of the candidates at that location. Each instance is guaranteed to have exactly one no-op candidate, which results in no change to the AST if applied (e.g., “Replace == with == at node 4”). The reference label refers to the correct candidate of a given instance (e.g., “The correct replacement at node 4 is <=”). Note that for the majority of repair instances that are proposed, the reference label will be the no-op candidate.
We now present the statistical model used to score repair candidates. We refer to it as a Share, Specialize, and Compete (SSC) network. A visual representation is given in Figure 1.
5.1 SHARE
The SHARE component performs a rich encoding of the input AST using a neural network. Crucially, this encoding is only conditioned on the AST itself and not on any repair candidates, so it serves as a shared representation for the next component. This network can take many forms, with the only restriction being that it must emit one vector of some dimension d for each node in the AST. An example of a Python AST is given on the right side of Figure 1.
Here, for efficiency purposes, we encode the AST with a sequential bidirectional LSTM by enumerating a depth first traversal of the nodes, which roughly corresponds to “source code order.” However, we encode the rich AST structure by using embeddings for (1) the absolute position of the node in the AST, (2) the type of the node in the AST, (3) the relationship between the node and its parent, and (4) the surface form string of the node.
These tokens are projected through an embedding layer and then concatenated, and the resulting vector is used as input to a bidirectional LSTM. The output of this layer is represented as $H = (h_1, h_2, \ldots, h_n)$, where $h_i \in \mathbb{R}^d$, $d$ is the hidden dimension, and $n$ is the number of nodes in the AST. The core concept of the shared component is that the vast majority of neural computation is performed here, independent of the repairs themselves. We contrast this to an alternative approach where each repair candidate is applied to the input AST and each resulting repair candidate AST is encoded with an RNN – such an approach would be orders of magnitude more expensive to train and evaluate.
5.2 SPECIALIZE
The SPECIALIZE component scores each repair candidate using a specialized network module (Andreas et al., 2016) for each repair type. Instances of the same type are processed by the same module, but obtain separate scores since they have different input. Each module takes as input the shared representation H and a repair instance R with m candidates. It produces an un-normalized scalar score for each candidate in the instance, ŝ = (s1, ..., sm). We use two module types:
Multi-Layer Perceptron (MLP) Module: This module performs scoring over a fixed label set using one non-linear hidden layer. This is used for the CompReplace, IsSwap, and ClassMember generators. It is computed as:
$\hat{s} = V \tanh(W h_j)$
where $V \in \mathbb{R}^{m \times c}$, $W \in \mathbb{R}^{c \times d}$, $c$ is the hidden dimension, $m$ is the number of labels (i.e., repair candidates), and $j$ is the repair location corresponding to the repair instance $R$. Note that separate $V$ and $W$ weights are learned for each repair type.
Pooled Pointer Module: Predicting variables for VarReplace presents a unique challenge when modeling code repair. First, the variable names in a test snippet may never have been seen in training. More importantly, the semantics of a variable are primarily defined by its usage, rather than its name. To address this, instead of using a fixed output layer, each candidate (i.e., another variable) is encoded using pointers to each usage of that variable in the AST. An example is given in Figure 1. Formally, it is computed as:
$s_i = \tanh(W h_j) \cdot \big[\operatorname{MaxPool}_{k \in p_i}\big(\tanh(V h_k)\big)\big]$
where $i$ is the candidate (i.e., variable) index, $p_i$ is the list of locations (pointers) of variable $i$ in the AST, $j$ is the location of the repair in the AST, and $V, W \in \mathbb{R}^{c \times d}$ are learned weight matrices.
5.3 COMPETE
Once a scalar score has been produced for each repair candidate, these must be normalized to compete against one another. We consider two approaches to normalizing these scores:
Local Norm: A separate softmax is performed for each repair instance (i.e., location and type), so candidates are only normalized against other candidates in the same instance, including no-op. At test time we sort all candidates across all instances by probability, even though they have not been normalized against each other.
Global Norm: All candidates at all locations are normalized with a single softmax. No-op candidates are not included in this formulation.
6 EXPERIMENTAL RESULTS
We train the SSC model on the Synth-Bug Train data for 30 epochs. Different bugs are synthesized at each epoch which significantly mitigates over-fitting. We set the hidden dimensions of the SHARE and SPECIALIZE components to 512, and the embedding size to 128. A dropout of 0.25 is used on the output of the SHARE component. Training was done with plain SGD + gradient clipping using an in-house toolkit. A small amount of hyperparameter tuning was performed on the Synth-Bug Val set.
In the first condition we evaluate, all snippets in both training and test have exactly one bug each. As was described in Section 3, for Synth-Bug Test, the code snippets are real, but the bugs have been artificially inserted at random. For Real-Bug Test, we extracted 926 buggy/fixed snippet pairs mined from GitHub commit logs, so both the snippet and bug are real. The average snippet in the Real-Bug Test set has 31 repair locations and 102 total repair candidates, compared to 20 locations and 50 candidates for the Synth-Bug Test set.
Table 1 presents Single-Repair results on Synth-Bug and Real-Bug test sets. The accuracy metric denotes how often the 1-best repair prediction exactly matches the reference repair, i.e., the model correctly detects where the bug is and correctly predicts how to fix it. In this case, the model was constrained to predict exactly one repair, but all candidates across all repair types are directly competing against one another. On Synth-Bug, the SSC model drastically outperforms the attentional sequence-to-sequence model, even using the upper bound seq-to-seq accuracy. Since global normalization and local normalization have similar performance and it is not obvious how to extend global normalization to multiple repairs, we use local normalization for multi-repair experiments.
On Real-Bug Test, the absolute accuracy is lower than on Synth-Bug Test, but the SSC model still significantly outperforms the seq-to-seq baseline. To better understand the absolute quality of the Real-Bug Test results, we perform a preliminary human evaluation in Section 6.
Example predictions from the Real-Bug Test set are presented below. The red region is the bug, and the green is the reference repair. For the incorrect predictions, the blue region is the predicted repair. Results on all 926 Real-Bug Test examples are provided in the supplementary material.
In the multi-repair setting, we consider the more realistic scenario where a snippet may have multiple bugs, or may have none. To model this scenario, the data was re-generated so that 0, 1, 2, or 3 bugs were added to each training/test/val snippet, with equal probability of each. We refer to these new sets as Synth-Multi-Bug Test and Synth-Multi-Bug Val. Unfortunately, we were not able to extract multi-bug examples from the Real-Bug data.
The major new complexity is that the system must now determine how many repairs to predict per snippet, if any. We use a simple threshold-based approach: since each repair candidate is assigned a probability by the model, we simply emit all repairs which have probability greater than δ. The system is not constrained to emit only 3 repairs. A parameter sweep over the validation set revealed that accuracy is surprisingly insensitive to δ, so we simply use δ = 0.5. Note that we only perform a single pass of repair scoring here, but in future work we will explore an iterative decoder.
Results are presented on the right side of Table 1. For accuracy at the per-repair level, there is only a moderate decrease in F-score from 85% to 81% between the 1-repair and 3-repair settings. The Exact Accuracy does decrease significantly, but not beyond the “expected value.” In other words, three independent 1-repair snippets have an expected accuracy of $0.78^3 = 0.47$, which is similar to the 45% accuracy observed for 3-repair snippets. We also see that the system is 82% accurate at correctly predicting when a snippet has no bugs.
Human Evaluation To better understand the significance of the performance of our system, we performed a preliminary human evaluation under identical conditions to the model. The evaluator was presented with a snippet from the test set, where all repair instances were highlighted in the code. The evaluator could click on a repair location to see all candidates at that location. Each of the four bug types was explained to the evaluators, who were told that there was always exactly one bug per snippet. This evaluation required experienced Python programmers to perform a complex task, so we performed a small evaluation using 4 evaluators and 30 snippets each from the Real-Bug Test set. Evaluators typically used 2-6 minutes per snippet. These snippets were limited to 150 nodes for the benefit of the human evaluators, so the SSC model accuracy is higher on this subset than on the full set.
On these snippets, the humans achieved 37% accuracy compared to the 60% accuracy of the SSC model. One possible reason for this performance gap is that the model is simply better than humans at this task, presumably because it has been able to learn from such a large amount of data. Another possible reason is that humans did not spend the time or mental energy to perform as well as they could. To examine these possibilities, we performed a second evaluation with the same set of humans. In this evaluation, instead of having to consider all possible repairs – up to 100 candidates – the humans only had to decide between the four “most likely” repair candidates. These candidates were generated by taking the top four predictions from the SSC model (or the top three and the correct repair), shown in random order. In this evaluation, humans achieved 76% accuracy, which shows that the low performance of humans in the full task is due to the mental energy required, rather than lack of context or code understanding. We believe that these evaluations demonstrate that Real-Bug Test is a challenging set and that the accuracy achieved by the SSC model is empirically strong.
7 ANALYSIS AND DISCUSSION
Our first goal is to conceptually understand at what “level” the model was able to generalize to new snippets. Although the hidden activations of the neural network model are not directly interpretable, we can attempt to interpret the latent model space using nearest neighbor retrieval on the hidden vectors $h_i$. The goal is to determine if the model is simply memorizing common n-grams, or if it is actually learning high-level repair concepts. Nearest neighbor retrievals for several test snippets are presented here:
In Example 1, we see the model is able to learn a high-level pattern “y.x = x”. In Example 2 we see the pattern “if (x c1 y...) elif (x c2 y...)”. In Example 3 we see the pattern “Strings usually use the equality (or inequality) operator.” In all cases, the surface form of the training nearest neighbor is very different from the test snippet. From this, it appears that the SSC model is able to learn a number of interesting, high-level patterns which it uses to generalize to new data.
We next examined failure cases of the SSC model which a human evaluator was able to repair correctly. Here, the primary weakness of the model was that humans were able to better infer program intent by using variable names, function names, and string literals. One major fault in the current implementation is a lack of sub-word representation. For example, consider a repair of the expression “dtypes.append(x)” where x could be dtype or syncnode. It is easy for a human to infer that dtype is the more sensible choice even without a deeper understanding of the code. In future work we plan to explore character-level encoding of value strings so that lexical similarity can be modeled latently by the network.
We finally examined cases where the SSC model succeeded but the human evaluator failed. Generally, we conclude that the model’s primary advantage was the sheer amount of data it was able to learn from. For example, consider the expression “if (db.version_info <= 3)”. This may not be immediately suspicious to a human, but if we analyze the reference training data we can measure that the pattern “if (x.version_info <= y)” is 10 times less frequent than the pattern “if (x.version_info < y)”. Intuitively, this makes sense because if a feature is added in version y, it is not useful to check <= y. However, the neural model is able to easily learn such probabilistic distributions even without deeper understanding of why they are true.
8 CONCLUSION
We presented a novel neural network architecture that allows specialized network modules to explicitly model different transformation types based on a shared input representation. When applied to the domain of semantic code repair, our model achieves high accuracy relative to a seq2seq baseline and an expert human evaluation. In our analysis of the results, we find that our system is able to learn fairly sophisticated repair patterns from the training data. In future work we plan to expand our model to cover a larger set of bug types, and ideally these bug types would be learned automatically from a corpus of real-world bugs. We also plan to apply the SSC model to other tasks.
A POOLED POINTER MODULE IMPLEMENTATION
Figure 2 provides a diagram of the pooled pointer network module.
As described in Section 5.2, the pooling module consists of a projection layer followed by a pooling operation. For each variable $i$, its representation is computed by pooling the set of all its occurrences, $p_i$:
$v_i = \operatorname{MaxPool}_{k \in p_i}\big(\tanh(V h_k)\big)$
where $h_k$ denotes the representation computed by the SHARE module at location $k$.
The similarity module produces un-normalized scores for each potential variable replacement $i$. When applied at repair location $j$, it computes:
$s_{ij} = \tanh(W h_j) \cdot v_i$
B EXAMPLES OF PREDICTIONS
We include the full set of system predictions for the Real-Bug Test set. We have made these available at https://iclr2018anon.github.io/semantic_code_repair/index.html.
C ADDITIONAL RESULTS
Varying source code complexity Figure 3 presents accuracy of the model across functions with varying numbers of repair candidates. While the repair accuracy decreases with the number of repair candidates, the model achieves reasonably high accuracy even for functions with over 100 repair candidates. Among functions with 101-150 repair candidates, the model accuracy is 73% for synthetically introduced bugs and 36% for real bugs.
Importance of AST structure The Python abstract syntax tree is a rich source of semantic information about the tokens in a snippet. As described in Section 5.1, in addition to the original token string, we also include (1) the absolute position of the node in the AST, (2) the type of the node, and (3) the relationship between the node and its parent. To test the model’s reliance on this information, we present ablation results over these additional feature layers below in Table 2.
We see that using information from the AST provides a significant performance gain. Still, even when only using the surface form values, the SSC model outperforms the attentional sequence-to-sequence baseline by a large margin (78.3% repair accuracy compared to 26% for the sequence-to-sequence model). | 1. What is the main contribution of the paper regarding code repair using neural networks?
2. What are the strengths of the proposed approach compared to prior sequence-to-sequence models?
3. How effective are the output constraints utilized by the proposed model in improving its performance?
4. Are there any concerns regarding the comparison between the proposed method and human performance?
5. What are the limitations of the proposed method in terms of its applicability to various bug types and real-world scenarios? | Review | Review
This paper introduces a neural network architecture for fixing semantic bugs in code. Focusing on four specific types of bugs, the proposed two-stage approach first generates a set of candidate repairs and then scores the repair candidates using a neural network trained on synthetically introduced bug/repair examples. Comparing to a prior sequence-to-sequence approach, the proposed approach achieved dominantly better accuracy on both synthetic and real bug datasets. On a real bug dataset constructed from GitHub commits, it was shown to outperform human.
I find the application of neural networks to the problem of code repair to be highly interesting. The proposed approach is highly specialized for the specific four types of bugs considered here and appears to be effective for fixing these specific bug types, especially in comparison to the sequence-to-sequence model based approach. However, I was wondering whether limiting the output choices (based on the bug type) goes a long way toward improving the performance compared to seq-2-seq, which does not utilize such output constraints. What if we introduce the same type of constraints for the seq-2-seq model? For example, one can simply modify the decoding process such that for locations that are not in the candidate set, the network simply makes no change, and for candidate-repair locations, the output space is limited to the specific choices provided in the candidate set. This will provide a fairer comparison between the different models.
Right now it is not clear how much of the observed performance gain is due to the use of these constraints on the output space.
Is there any control mechanism used to ensure that the real bug test set does not overlap with the training set? This is not clear to me.
I find the comparison result to human performance to be interesting and somewhat surprising. This seems quite impressive. The presented example where a human makes a mistake but the algorithm is correct is informative and provides some potential explanation for this. But it also raises a question. The specific example snippet could be considered correct when placed in a different context. Bugs are context-sensitive artifacts. The setup of considering each function independently without any context seems like an inherent limitation on the types of bugs that this method could potentially address. Some discussion of the limitations of the proposed method seems to be warranted.
Pro:
* Interesting application
* Impressive results on a difficult task
* Nice discussion of results and informative examples
* Clear presentation, easy to read.
Con:
* The comparison to the baseline seq-2-seq does not seem quite fair
* The method appears to be highly specialized to the four bug types. It is not clear how generalizable it will be to more complex bugs, and to real application scenarios where we are dealing with open-world classification and there is no fixed set of possible bugs. |
ICLR | Title
Semantic Code Repair using Neuro-Symbolic Transformation Networks
Abstract
We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.
1 INTRODUCTION
The term automatic code repair is typically used to describe two overarching tasks: The first involves fixing syntactic errors, which are malformations that cause the code to not adhere to some language specification (Gupta et al., 2017; Bhatia and Singh, 2016). The second, which is the focus of this work, involves fixing semantic bugs, which refer to any case where the actual program behavior is not the same as the behavior the programmer intended. Clearly, this covers an extremely wide range of code issues, so this work is limited to a class of semantic bugs, which we roughly define as: “Bugs that can be identified and fixed by an experienced human programmer, without running the code or having deep contextual knowledge of the program.” This does not imply that the bugs are trivially fixable, as they often require time-consuming analysis of the code, rich background knowledge of the language and APIs, and complex logical reasoning about the original programmer’s intent.
Unlike previous work, we do not assume access to unit tests at training or test time. This requirement is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might. Most previous work relies on unit tests – a common theme is combining coarse-grained repair models with search algorithms to find some repair that satisfies unit tests (Harman, 2010; Singh et al., 2013). In contrast, our proposed task requires models to deeply understand the code in order to propose a single set of repairs. Thus, semantic code repair without unit tests presents a concrete, real-world test bed for the more general task of understanding and modifying source code.
Our semantic repair model was trained on a large corpus of open-source Python projects with synthetically injected bugs. We test on both real-bug and synthetic-bug test sets. 1 To train the repair model, we first evaluated an attentional sequence-to-sequence architecture. Although this model was able to achieve non-trivial results, we believe it to be an unsuitable solution in a number of ways, such as the lack of direct competition between repair candidates at different locations. Instead, we
1All data sets will be made publicly available.
use an alternative approach which decouples the non-statistical process of generating and applying repair proposals from the statistical process of scoring and ranking repairs.
This two-stage process itself is not new, but the core novelty in this work is the specific neural framework we propose for scoring repair candidates. We refer to our architecture as a Share, Specialize, and Compete (SSC) network:
• SHARE: The input code snippet is encoded with a neural network. This is a shared representation used by all repair types.
• SPECIALIZE: Each repair type is associated with its own specialized neural module (Andreas et al., 2016), which emits a score for every repair candidate of that type.
• COMPETE: The raw scores from the specialized modules are normalized to compete in comparable probability space.
Our model can also be thought of as an evolution of work on neural code completion and summarization Allamanis et al. (2016); Bhoopchand et al. (2016). Like those systems, our SHARE network is used to learn a rich semantic understanding of the code snippet. Our SPECIALIZE modules then build on top of this representation to learn how to identify and fix specific bug types.
Although we have described our framework in relation to the problem of code repair, it has a number of other possible applications in sequence transformation scenarios where the input and output sequences have high overlap. For example, it could be applied to natural language grammar correction (Schmaltz et al., 2016), machine translation post editing (Libovickỳ et al., 2016), source code refactoring (Allamanis et al., 2015), or program optimization (Bunel et al., 2016).
2 RELATED WORK
We believe this paper to be the first work addressing the issue of semantic program repair in the absence of unit tests, where functionality must be inferred from the code. However, our work adds to a substantial literature on program repair and program analysis, some of which we describe below:
Neural Syntax Repair: There have been several recent techniques developed for training neural networks to correct syntax errors in code. DeepFix (Gupta et al., 2017) uses an attentional seq-to-seq model to fix syntax errors in a program by predicting both the buggy line and the statement to replace it. Bhatia and Singh (2016) train an RNN based token sequence model to predict token insertion or replacement at program locations provided by the compiler to fix syntax errors.
Statistical Program Repair: Approaches such as Arcuri and Yao (2008) and Goues et al. (2012) use genetic programming techniques to iteratively propose program modifications. Prophet (Long and Rinard, 2016) learns a probabilistic model to rank patches for null pointer exceptions and array out-of-bounds errors. The model is learnt from human patches using a set of hand-engineered program features. In contrast, our neural model automatically learns useful program representations for repairing a much richer class of semantic bugs.
Natural Source Code / Big Code: A number of recent papers have trained statistical models on large datasets of real-world code. These papers study tasks involving varying degrees of reasoning about source code, such as code completion (Raychev et al., 2015; 2014; Bhoopchand et al., 2016) and variable/class/function renaming (Raychev, 2016; Allamanis et al., 2015).
Rule-Based Static Analyzers: Rule-based analyzers for Python (Pylint (Thenault, 2001) and Pyflakes (PyCQA, 2012)) handle a highly disjoint set of issues compared to the type of bugs we are targeting, and generally do not directly propose fixes.
3 PROBLEM OVERVIEW
As mentioned in the introduction, our goal is to develop a system which can statically analyze a piece of code and predict the location of the bug along with the actual fix. We do not assume to have unit tests or any other specification associated with the snippet being repaired. These proposed repairs can be directly presented to the user, or taken as input to some downstream application. Since the task of “fixing bugs in code” is incredibly broad, we limit ourselves to four classes of common Python bugs that are described with examples below.
Ideally, we would train such a repair model using a large number of buggy/repaired code snippets. However, such a large data set does not exist. It is possible to extract a modest test set of genuine bugs from project commit histories, but it is not enough to train a large-scale neural network. Fortunately, there is a large amount of real-world non-buggy code available into which bugs can be injected. We demonstrate that a model trained on synthesized bugs is able to generalize to a test set with real bugs.
Training Data To create the training data, we first downloaded all Python projects from GitHub that were followed by at least 15 users and had permissive licenses (MIT/BSD/Apache), which amounted to 19,000 total repositories. We extracted every function from each Python source file as a code snippet. In all experiments presented here, each snippet was analyzed on its own without any surrounding context. All models explored in this paper only use static code representations, so each snippet must be parsable as an Abstract Syntax Tree (AST), but does not need to be runnable. Note that many of the extracted functions are member functions of some class, so although they can be parsed, they are not runnable without external context. We only kept snippets with between 5 and 300 nodes in their AST, which approximately corresponds to 1 to 40 lines of code. The average extracted snippet had 45 AST nodes and 6 lines of code.
This data was carved into training, test, and validation at the repository level, to eliminate any overlap between training and test. We also filtered out any training snippet which overlapped with any test snippet by more than 5 lines. In total we extracted 2,900,000 training snippets, and held-out 2,000 for test and 2,000 for validation.
Bug/Repair Types In this work, we consider four general classes of semantic repairs, which were chosen to be “simple” but still common during development, as reported by Python programmers:
• VarReplace: An incorrect local variable is used at a particular location, and should be replaced with another variable from the snippet.
• CompReplace: An incorrect comparison operator is used at a particular location.
• IsSwap: The is operator is used instead of is not, or vice versa.
• ClassMember: A self accessor is missing from a variable.
Generating synthetic bugs from these categories is straightforward. For example, for VarReplace, we synthesize bugs by replacing one random variable from a snippet with another variable from the same snippet. All bug types, locations, and replacements were chosen with random uniform likelihood. We applied this bug synthesis procedure to all of the training snippets to create our training data, as well as a synthetic test set (Synth-Bug Test).
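For illustration, a minimal sketch of the VarReplace injection step is given below using Python's standard ast module; the function name and exact sampling details are our own reconstruction rather than the authors' released tooling (ast.unparse requires Python 3.9+).

```python
import ast
import random

def inject_var_replace(source: str, rng: random.Random) -> str:
    """Synthesize a VarReplace bug: swap one variable occurrence
    for another variable name appearing in the same snippet."""
    tree = ast.parse(source)
    # Collect every variable reference and the distinct variable names.
    names = [n for n in ast.walk(tree) if isinstance(n, ast.Name)]
    distinct = sorted({n.id for n in names})
    if len(distinct) < 2:
        return source  # nothing to swap
    target = rng.choice(names)  # uniform over occurrence locations
    replacement = rng.choice([v for v in distinct if v != target.id])
    target.id = replacement     # apply the corruption in place
    return ast.unparse(tree)

rng = random.Random(0)
print(inject_var_replace("def f(a, b):\n    return a + b\n", rng))
```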
Real-Bug Test Set In order to evaluate on a test set where both the code and bugs were real, we mined the Git commit history from the projects crawled from GitHub. We found that it was quite difficult to automatically distinguish bug repairs from other code changes such as refactoring, especially since we wanted to avoid introducing biases into the data set through the use of complex filtering heuristics. For this reason, we limited extraction to commits where exactly one line in a file
was changed, and the commit contained a word from the list “bug, error, issue, exception, fix”. We then filtered these commits to only keep those that correspond to one of our four bug types. Overall, we obtained 926 buggy/repaired snippet pairs with exactly one bug each. We believe that the small number of extracted snippets does not reflect the true frequency of these bugs during development, but rather reflects the fact that (1) one-line Git commits are quite rare, and (2) these types of bugs rarely make it into the public branch of high-quality repositories.
4 BASELINE ATTENTIONAL SEQUENCE-TO-SEQUENCE MODEL
Since the goal of program repair is to transform a buggy snippet into a repaired snippet, an obvious baseline is an attentional sequence-to-sequence neural network (Bahdanau et al., 2014), which has been successfully used for the related tasks of syntactic code repair and code completion. On those tasks, sequence-to-sequence models have been shown to outperform a number of baseline methods such as n-gram language models or classifiers.
Because this model must actually generate a sequence, we first converted the buggy/repaired ASTs from Synth-Bug Train back to their tokenized source code, which is a simple deterministic process. The architecture used is almost identical to the machine translation system of Bahdanau et al. (2014). To handle the high number of rare tokens in the data, tokens were split by underscores and camel case. The size of the neural net vocabulary was 50,000, and the final out-of-vocabulary (OOV) rate was 1.1%. In evaluation we included OOVs in the reference, so OOVs did not cause a degradation in results. The LSTMs were 512-dimensional and decoding was performed with a beam of 8. When evaluating on the Single-Repair Synth-Bug Test set, the 1-best output exactly matches the reference 26% of the time. If we give the model credit when it predicts the correct repair but also predicts other changes, the accuracy is 41%.
Although this accuracy seems to be non-trivial, there are some intuitive weaknesses in using a sequence-to-sequence architecture for semantic code repair. First, the system is burdened with constructing the entire output sequence, even though on average it is 98.5% identical to the input. Second, potential repairs at different locations do not fairly “compete” with one another in probability space, but only compete with tokens at the same location. Third, it is difficult to use a richer code representation such as the AST, since the repaired code must be generated.
5 SHARE, SPECIALIZE, AND COMPETE (SSC) MODEL
Instead of directly generating the entire output snippet with a neural network, we consider an alternative approach where repairs are iteratively applied to the input snippet. Here, for each bug type described in Section 3, the system proposes all possible repairs of that type in the snippet. Although these candidate generators are manually written, they simply propose all possible repairs of a given type and do not perform any heuristic pruning, so each of the four generators can be written in a few lines of code. The challenging work of determining the correct repair using the code context is performed by our statistical model.
For clarity of terminology, a repair candidate is a particular fix that can be made at a particular location (e.g., “Replace == with != at node 4”). A repair instance refers to a particular repair location the generator proposes and all of the candidates at that location. Each instance is guaranteed to have exactly one no-op candidate, which results in no change to the AST if applied (e.g., “Replace == with == at node 4”). The reference label refers to the correct candidate of a given instance (e.g., “The correct replacement at node 4 is <=”). Note that for the majority of repair instances that are proposed, the reference label will be the no-op candidate.
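A small data structure makes this terminology concrete; the class below is our own illustrative sketch, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RepairInstance:
    """One proposed repair location plus all candidates there."""
    repair_type: str               # e.g. "CompReplace"
    node_index: int                # AST node the repair applies to
    original: str                  # current token, e.g. "=="
    candidates: List[str] = field(default_factory=list)

    def __post_init__(self):
        # Every instance carries exactly one no-op candidate:
        # applying the original token leaves the AST unchanged.
        if self.original not in self.candidates:
            self.candidates.append(self.original)

inst = RepairInstance("CompReplace", 4, "==", ["!=", "<=", ">="])
print(inst.candidates)  # ['!=', '<=', '>=', '==']; the last is the no-op
```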
We now present the statistical model used to score repair candidates. We refer to it as a Share, Specialize, and Compete (SSC) network. A visual representation is given in Figure 1.
5.1 SHARE
The SHARE component performs a rich encoding of the input AST using a neural network. Crucially, this encoding is only conditioned on the AST itself and not on any repair candidates, so it serves as a shared representation for the next component. This network can take many forms, with the only restriction being that it must emit one vector of some dimension d for each node in the AST. An example of a Python AST is given on the right side of Figure 1.
Here, for efficiency purposes, we encode the AST with a sequential bidirectional LSTM by enumerating a depth first traversal of the nodes, which roughly corresponds to “source code order.” However, we encode the rich AST structure by using embeddings for (1) the absolute position of the node in the AST, (2) the type of the node in the AST, (3) the relationship between the node and its parent, and (4) the surface form string of the node.
These tokens are projected through an embedding layer and then concatenated, and the resulting vector is used as input to a bidirectional LSTM. The output of this layer is represented as H = (h1, h2, ..., hn), where hi ∈ Rd, d is the hidden dimension, and n is the number of nodes in the AST. The core concept of the shared component is that the vast majority of neural computation is performed here, independent of the repairs themselves. We contrast this to an alternative approach where each repair candidate is applied to the input AST and each resulting repair candidate AST is encoded with an RNN – such an approach would be orders of magnitude more expensive to train and evaluate.
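A minimal PyTorch sketch of this encoder is given below; the embedding width, hidden size, and vocabulary-size arguments are placeholders rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ShareEncoder(nn.Module):
    """Embed (position, node type, parent relation, surface form)
    per AST node, concatenate, and run a bidirectional LSTM."""
    def __init__(self, n_pos, n_type, n_rel, n_str, emb=128, d=512):
        super().__init__()
        self.emb_pos = nn.Embedding(n_pos, emb)
        self.emb_type = nn.Embedding(n_type, emb)
        self.emb_rel = nn.Embedding(n_rel, emb)
        self.emb_str = nn.Embedding(n_str, emb)
        # d // 2 hidden units per direction so outputs are d-dimensional.
        self.lstm = nn.LSTM(4 * emb, d // 2, bidirectional=True,
                            batch_first=True)

    def forward(self, pos, typ, rel, s):
        # Each input: (batch, n_nodes) LongTensor in depth-first order.
        x = torch.cat([self.emb_pos(pos), self.emb_type(typ),
                       self.emb_rel(rel), self.emb_str(s)], dim=-1)
        h, _ = self.lstm(x)   # h: (batch, n_nodes, d)
        return h
```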
5.2 SPECIALIZE
The SPECIALIZE component scores each repair candidate using a specialized network module (Andreas et al., 2016) for each repair type. Instances of the same type are processed by the same module, but obtain separate scores since they have different inputs. Each module takes as input the shared representation H and a repair instance R with m candidates. It produces an un-normalized scalar score for each candidate in the instance, ŝ = (s_1, ..., s_m). We use two module types:
Multi-Layer Perceptron (MLP) Module: This module performs scoring over a fixed label set using one non-linear hidden layer. This is used for the CompReplace, IsSwap, and ClassMember generators. It is computed as:
ŝ = V tanh(W h_j)
where V ∈ R^{m×c}, W ∈ R^{c×d}, c is the hidden dimension, m is the number of labels (i.e., repair candidates), and j is the repair location corresponding to the repair instance R. Note that separate V and W weights are learned for each repair type.
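In PyTorch, this module reduces to two learned linear maps; the rendering below is a sketch with illustrative dimensions.

```python
import torch
import torch.nn as nn

class MLPModule(nn.Module):
    """Score a fixed candidate set at repair location j:
    s_hat = V tanh(W h_j)."""
    def __init__(self, m, d=512, c=512):
        super().__init__()
        self.W = nn.Linear(d, c, bias=False)
        self.V = nn.Linear(c, m, bias=False)

    def forward(self, h, j):
        # h: (n_nodes, d) shared representation; j: repair location index.
        return self.V(torch.tanh(self.W(h[j])))  # (m,) raw candidate scores
```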
Pooled Pointer Module: Predicting variables for VarReplace presents a unique challenge when modeling code repair. First, the variable names in a test snippet may never have been seen in training. More importantly, the semantics of a variable are primarily defined by its usage, rather than its name. To address this, instead of using a fixed output layer, each candidate (i.e., another variable) is encoded using pointers to each usage of that variable in the AST. An example is given in Figure 1. Formally, it is computed as:
s_i = tanh(W h_j) · MaxPool_{k∈p_i}(tanh(V h_k))
where i is the candidate (i.e., variable) index, p_i is the list of locations (pointers) of variable i in the AST, j is the location of the repair in the AST, and V, W ∈ R^{c×d} are learned weight matrices.
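A sketch of this module is shown below, assuming h is the (n_nodes × d) SHARE output and pointers holds the usage indices of each candidate variable; the loop over candidates is written for clarity rather than efficiency.

```python
import torch
import torch.nn as nn

class PooledPointerModule(nn.Module):
    """Score variable replacements: each candidate variable is
    represented by max-pooling projected encodings of all its uses."""
    def __init__(self, d=512, c=512):
        super().__init__()
        self.W = nn.Linear(d, c, bias=False)   # projects the repair location
        self.V = nn.Linear(d, c, bias=False)   # projects variable usages

    def forward(self, h, j, pointers):
        # h: (n_nodes, d); j: repair location; pointers: list of
        # LongTensors, one per candidate, holding that variable's usage indices.
        query = torch.tanh(self.W(h[j]))                         # (c,)
        scores = []
        for p in pointers:
            pooled = torch.tanh(self.V(h[p])).max(dim=0).values  # (c,)
            scores.append(query @ pooled)                        # scalar score
        return torch.stack(scores)                               # (num_candidates,)
```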
5.3 COMPETE
Once a scalar score has been produced for each repair candidate, these must be normalized to compete against one another. We consider two approaches to normalizing these scores:
Local Norm: A separate softmax is performed for each repair instance (i.e., location and type), so candidates are only normalized against other candidates in the same instance, including no-op. At test time we sort all candidates across all instances by probability, even though they have not been normalized against each other.
Global Norm: All candidates at all locations are normalized with a single softmax. No-op candidates are not included in this formulation.
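The two schemes differ only in where the softmax is applied; the toy sketch below (function names are ours) contrasts them on per-instance score vectors.

```python
import torch
import torch.nn.functional as F

def local_norm(instance_scores):
    """Softmax within each instance (no-op included); candidates are
    then pooled across instances and sorted by probability."""
    return torch.cat([F.softmax(s, dim=0) for s in instance_scores])

def global_norm(instance_scores, noop_idx):
    """Single softmax over all non-no-op candidates at all locations."""
    kept = [torch.cat([s[:i], s[i + 1:]])
            for s, i in zip(instance_scores, noop_idx)]
    return F.softmax(torch.cat(kept), dim=0)

scores = [torch.randn(4), torch.randn(3)]   # two repair instances
print(local_norm(scores))
print(global_norm(scores, noop_idx=[0, 2]))
```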
6 EXPERIMENTAL RESULTS
We train the SSC model on the Synth-Bug Train data for 30 epochs. Different bugs are synthesized at each epoch which significantly mitigates over-fitting. We set the hidden dimensions of the SHARE and SPECIALIZE components to 512, and the embedding size to 128. A dropout of 0.25 is used on the output of the SHARE component. Training was done with plain SGD + gradient clipping using an in-house toolkit. A small amount of hyperparameter tuning was performed on the Synth-Bug Val set.
In the first condition we evaluate, all snippets in both training and test have exactly one bug each. As was described in Section 3, for Synth-Bug Test, the code snippets are real, but the bugs have been artificially inserted at random. For Real-Bug Test, we extracted 926 buggy/fixed snippet pairs mined from GitHub commit logs, so both the snippet and bug are real. The average snippet in the Real-Bug Test set has 31 repair locations and 102 total repair candidates, compared to 20 locations and 50 candidates for the Synth-Bug Test set.
Table 1 presents Single-Repair results on Synth-Bug and Real-Bug test sets. The accuracy metric denotes how often the 1-best repair prediction exactly matches the reference repair, i.e., the model correctly detects where the bug is and correctly predicts how to fix it. In this case, the model was constrained to predict exactly one repair, but all candidates across all repair types are directly competing against one another. On Synth-Bug, the SSC model drastically outperforms the attentional sequence-to-sequence model, even using the upper bound seq-to-seq accuracy. Since global normalization and local normalization have similar performance and it is not obvious how to extend global normalization to multiple repairs, we use local normalization for multi-repair experiments.
On Real-Bug Test, the absolute accuracy is lower than on Synth-Bug Test, but the SSC model still significantly outperforms the seq-to-seq baseline. To better understand the absolute quality of the Real-Bug Test results, we perform a preliminary human evaluation later in this section.
Example predictions from the Real-Bug Test set are presented below. The red region is the bug, and the green is the reference repair. For the incorrect predictions, the blue region is the predicted repair. Results on all 926 Real-Bug Test examples are provided in the supplementary material.
In the multi-repair setting, we consider the more realistic scenario where a snippet may have multiple bugs, or may have none. To model this scenario, the data was re-generated so that 0, 1, 2, or 3 bugs were added to each training/test/val snippet, with equal probability of each. We refer to these new sets as Synth-Multi-Bug Test and Synth-Multi-Bug Val. Unfortunately, we were not able to extract multi-bug examples from the Real-Bug data.
The major new complexity is that the system must now determine how many repairs to predict per snippet, if any. We use a simple threshold-based approach: Since each repair candidate is assigned a probability by the model, we simply emit all repairs which have probability greater than δ. The system is not constrained to emit only 3 repairs. A parameter sweep over the validation set revealed that accuracy is surprisingly un-sensitive to δ, so we simply use δ = 0.5. Note that we only perform a single pass of repair scoring here, but in future work we will explore an iterative decoder.
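The decoding rule itself is a one-liner; a sketch follows, with hypothetical candidate strings of the kind used earlier.

```python
def decode_repairs(candidate_probs, delta=0.5):
    """Emit every non-no-op candidate whose probability exceeds delta.
    candidate_probs: list of (repair_candidate, probability) pairs."""
    return [c for c, p in candidate_probs if p > delta]

preds = decode_repairs([("replace == with != at node 4", 0.91),
                        ("replace x with y at node 7", 0.34)])
print(preds)  # only the first repair clears the 0.5 threshold
```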
Results are presented on the right side of Table 1. For accuracy at the per-repair level, there is only a moderate decrease in F-score from 85% to 81% between the 1-repair and 3-repair settings. The Exact Accuracy does decrease significantly, but not beyond the “expected value.” In other words, three independent 1-repair snippets have an expected accuracy of 0.78^3 ≈ 0.47, which is similar to the 45% accuracy observed for 3-repair snippets. We also see that the system is 82% accurate at correctly predicting when a snippet has no bugs.
Human Evaluation To better understand the significance of the performance of our system, we performed a preliminary human evaluation under identical conditions to the model. The evaluator was presented with a snippet from the test set, where all repair instances were highlighted in the code. The evaluator could click on a repair location to see all candidates at that location. The evaluators were briefed on each of the four bug types and told that there was always exactly one bug per snippet. This evaluation required experienced Python programmers performing a complex task, so we performed a small evaluation using 4 evaluators and 30 snippets each from the Real-Bug Test set. Evaluators typically used 2-6 minutes per snippet. These snippets were limited to 150 nodes for the benefit of the human evaluators, so the SSC model accuracy is higher on this subset than on the full set.
On these snippets, the humans achieved 37% accuracy compared to the 60% accuracy of the SSC model. One possible reason for this performance gap is that the model is simply better than humans at this task, presumably because it has been able to learn from such a large amount of data. Another possible reason is that humans did not spend the time or mental energy to perform as well as they could. To examine these possibilities, we performed a second evaluation with the same set of humans. In this evaluation, instead of having to consider all possible repairs – up to 100 candidates – the humans only had to decide between the four “most likely” repair candidates. These candidates were generated by taking the top four predictions from the SSC model (or the top three and the correct repair), shown in random order. In this evaluation, humans achieved 76% accuracy, which shows that the low performance of humans in the full task is due to the mental energy required, rather than lack of context or code understanding. We believe that these evaluations demonstrate that Real-Bug Test is a challenging set and that the accuracy achieved by the SSC model is empirically strong.
7 ANALYSIS AND DISCUSSION
Our first goal is to conceptually understand at what “level” the model was able to generalize to new snippets. Although the hidden activations of the neural network model are not directly interpretable, we can attempt to interpret the latent model space using nearest neighbor retrieval on the hidden vectors h_i. The goal is to determine if the model is simply memorizing common n-grams, or if it is actually learning high-level repair concepts. Nearest neighbor retrievals for several test snippets are presented here:
In Example 1, we see the model is able to learn a high-level pattern “y.x = x”. In Example 2 we see the pattern “if (x c1 y...) elif (x c2 y...)”. In Example 3 we see the pattern “Strings usually use the equality (or inequality) operator.” In all cases, the surface form of the training nearest neighbor is very different from the test snippet. From this, it appears that the SSC model is able to learn a number of interesting, high-level patterns which it uses to generalize to new data.
We next examined failure cases of the SSC model which a human evaluator was able to repair correctly. Here, the primary weakness of the model was that humans were able to better infer program intent by using variable names, function names, and string literals. One major fault in the current implementation is a lack of sub-word representation. For example, consider a repair of the expression “dtypes.append(x)” where x could be dtype or syncnode. It is easy for a human to infer that dtype is the more sensible choice even without a deeper understanding of the code. In future work we plan to explore character-level encoding of value strings so that lexical similarity can be modeled latently by the network.
We finally examined cases where the SSC model succeeded but the human evaluator failed. Generally, we conclude that the model’s primary advantage was the sheer amount of data it was able to learn from. For example, consider the expression “if (db.version_info <= 3)”. This may not be immediately suspicious to a human, but if we analyze the reference training data we can measure that the pattern “if (x.version_info <= y)” is 10 times less frequent than the pattern “if (x.version_info < y)”. Intuitively, this makes sense because if a feature is added in version y, it is not useful to check <= y. However, the neural model is able to easily learn such probabilistic distributions even without deeper understanding of why they are true.
8 CONCLUSION
We presented a novel neural network architecture that allows specialized network modules to explicitly model different transformation types based on a shared input representation. When applied to the domain of semantic code repair, our model achieves high accuracy relative to a seq2seq baseline and an expert human evaluation. In our analysis of the results, we find that our system is able to learn fairly sophisticated repair patterns from the training data. In future work we plan to expand our model to cover a larger set of bug types, and ideally these bug types would be learned automatically from a corpus of real-world bugs. We also plan to apply the SSC model to other tasks.
A POOLED POINTER MODULE IMPLEMENTATION
Figure 2 provides a diagram of the pooled pointer network module.
As described in Section 5.2, the pooling module consists of a projection layer followed by a pooling operation. For each variable i, its representation is computed by pooling the set of all its occurrences, p_i.
v_i = MaxPool_{k∈p_i}(tanh(V h_k))
where h_k denotes the representation computed by the SHARE module at location k.
The similarity module produces un-normalized scores for each potential variable replacement i. When applied at repair location j, it computes:
s_{ij} = tanh(W h_j) · v_i
B EXAMPLES OF PREDICTIONS
We include the full set of system predictions for the Real-Bug Test set. We have made these available at https://iclr2018anon.github.io/semantic_code_repair/index.html.
C ADDITIONAL RESULTS
Varying source code complexity Figure 3 presents accuracy of the model across functions with varying numbers of repair candidates. While the repair accuracy decreases with the number of repair candidates, the model achieves reasonably high accuracy even for functions with over 100 repair candidates. Among functions with 101-150 repair candidates, the model accuracy is 73% for synthetically introduced bugs and 36% for real bugs.
Importance of AST structure The Python abstract syntax tree is a rich source of semantic information about the tokens in a snippet. As described in Section 5.1, in addition to the original token string, we also include (1) the absolute position of the node in the AST, (2) the type of the node, and (3) the relationship between the node and its parent. To test the model’s reliance on this information, we present ablation results over these additional feature layers below in Table 2.
We see that using information from the AST provides a significant performance gain. Still, even when only using the surface form values, the SSC model outperforms the attentional sequence-to-sequence baseline by a large margin (78.3% repair accuracy compared to 26% for the sequence-to-sequence model). | 1. What is the main contribution of the paper in terms of the neural network architecture?
2. What are the strengths of the paper regarding its organization, technical description, and problem significance?
3. What are the limitations of the paper regarding the scope of the addressed bug categories?
4. How does the reviewer suggest improving the paper's contributions by addressing potential weaknesses?
5. Are there any requests for additional performance comparisons with recent techniques? | Review | Review
This paper describes the application of a neural network architecture, called Share, Specialize, and Compete, to the problem of automatically generating bug fixes when the bugs fall into 4 specific categories. The approach is validated using both real and injected bugs based on a software corpus of 19,000 GitHub projects implemented in Python. The model achieves performance that is noticeably better than human experts.
This paper is well-written and nicely organized. The technical approach is described in sufficient detail, and supported with illustrative examples. Most importantly, the problem tackled is ambitious and of significance to the software engineering community.
To me the major shortcoming of the model is that the analysis focuses only on 4 specific types of semantic bugs. In practice, this is a minute fraction of what can actually go wrong when writing code. And while the high performance achieved on these 4 bugs is noteworthy, the fact that the baseline compared against is more generic weakens the contribution. The authors should address this potential limitation. I would also be curious to see performance comparisons to recent rule-based and statistical techniques.
Overall this is a nice paper with very promising results, but I believe addressing some of the above weaknesses (with experimental results, where possible) would make it an excellent paper. |
ICLR | Title
Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization
Abstract
As the complexity and size of deep neural networks continue to increase, low-precision training has been extensively studied in the last few years to reduce hardware overhead. Training performance is largely affected by the numeric formats representing different values in low-precision training, but finding an optimal format typically requires numerous training runs, which is a very time-consuming process. In this paper, we propose a method to efficiently find an optimal format for activations and errors without actual training. We employ this method to determine an 8-bit format suitable for training various models. In addition, we propose hysteresis quantization to suppress undesired fluctuation in quantized weights during training. This scheme enables deeply quantized training using 4-bit weights, exhibiting only 0.2% degradation for ResNet-18 trained on ImageNet.
1 INTRODUCTION
Deep neural networks have been used in various fields such as vision, audio, natural language processing, and reinforcement learning. As larger and more complex neural networks are adopted, the energy and time consumed for training have become a critical issue in hardware implementation. Using low-bit representations in training significantly reduces hardware overhead and memory footprint; hence, neural network training with limited precision has been extensively studied recently. For instance, 16-bit formats are already adopted in commercial devices such as FP16 (IEEE, 2019) in Nvidia GPUs and bfloat16 (Kalamkar et al., 2019) in Google TPU (Wang et al., 2019). Also, Köster et al. (2017) suggested a new data format using a shared exponent suitable for low-precision training. Recently, it has been demonstrated that even 8-bit formats could be adopted in deep neural network training with reasonable accuracy (Sun et al., 2019; Fox et al., 2020). However, there are various issues in realizing low-precision training in practical applications as detailed below.
Optimal data format for low-precision training: Training performance is susceptible to the data format we use to represent variables in the network. When a value is represented using a floating-point format with a fixed number of bits, there is a trade-off between dynamic range and precision. For instance, allocating more bits to the exponent part in a floating-point format enlarges the dynamic range but lowers precision due to fewer bits in the mantissa part. Recent studies on 8-bit training suggest various ways to reduce the dynamic range required for number representation to enhance representation precision. Early work on 8-bit training (Wang et al., 2018) adopts a 5-bit exponent to represent different variables using a single format, but Sun et al. (2019) examine the statistics of each variable and optimize the numeric formats separately. Specifically, the values used in the forward path (weight and activation) have a relatively narrow dynamic range, and only 4 bits are allocated to the exponent. Fox et al. (2020) propose to divide data into smaller blocks and assign a shared exponent bias to each block. Since the values in a block tend to exhibit similar statistics, the forward (weight and activation) and backward (error) paths could be represented using only 2-bit and 4-bit exponents, respectively. Note that the shared exponent bias is effectively identical to a scaling factor. If a variable has a value of m · 2^e and a shared exponent bias of b, then its actual value is m · 2^(e+b), which is identical to applying a scaling factor of 2^b. However, these approaches are difficult to generalize since we should empirically decide numeric formats for each task, neural network structure, and quantization scheme (Fig. 1). Furthermore, analyzing the statistics of each variable is not enough to determine an optimal format. Their distributions often have a long tail, and hence the dynamic range of the numeric format should be experimentally selected through many trial-and-error runs in actual training.
Performance degradation in from-scratch training: Previous studies on quantized models show that a model could achieve comparable accuracy to full-precision models even using 1- or 2-bit weights (Choi et al., 2019; Martinez et al., 2020) through fine-tuning a pre-trained model. However, in low-precision training where a neural network is trained from scratch using low-precision values and computations, the trained model typically shows a noticeable accuracy drop (Elhoushi et al., 2021). Fig. 1(b) shows the Top-1 validation accuracy of ResNet-18 (He et al., 2016) trained on ImageNet (Deng et al., 2009) for different training schemes. The weights are quantized into a 4-bit base-2 logarithmic format. From-scratch training of the model with quantized weights results in a 2.1% accuracy drop, whereas only 1.0% degradation is observed if we fine-tune a pre-trained model. This suggests that even though a better solution (i.e., a set of parameters) exists for a given format, it cannot be reached through from-scratch training.
To formalize the issues above, here we divide quantization in low-precision training into two types: network quantization and data flow quantization. Network quantization refers to the quantization of the neural network model. An example of this type of quantization is weight quantization. In network quantization, we need to reduce the performance difference between from-scratch training and fine-tuning (Yang et al., 2019b). On the other hand, data flow quantization refers to the on-the-fly quantization that occurs when data propagate through the network in low-precision training. Examples include activation, error, and weight gradient quantizations. Additional errors are introduced in weight update computation due to this type of quantization, which leads to performance degradation. Hence, we need to find an optimal format to minimize accuracy drop due to computation errors in data flow quantization.
In this paper, we present a systematic approach to implementing low-precision training on various models and tasks. First, we present a method to efficiently find an optimal format for data flow quantization. In addition, we introduce a hysteresis quantization technique, a new quantization method for network quantization that can mitigate the issues of from-scratch training. Our main contributions are:
• We present a method that can predict the training performance of various numeric formats for data flow quantization. This method allows us to determine an appropriate data format for different neural network structures, datasets, and tasks efficiently.
• Using the method above, we propose an optimal 8-bit format suitable for low-precision training of various models, which enables quantization of BatchNorm layer input and improves hardware efficiency with minimal performance degradation.
• We propose a new quantization scheme that utilizes the hysteresis effect to improve the performance of from-scratch training in network quantization. This scheme enables ultra-low-precision training using 4-bit logarithmic weights.
2 DATA FLOW QUANTIZATION
2.1 NUMERIC FORMATS
There are many numeric formats that can be constructed with n bits depending on how much dynamic range is required and how many valid bits are used for representing a value. For example, using 8 bits we could implement an 8-bit fixed-point integer format; 8-bit floating-point formats such as FP152, FP143, and FP125 (FP1xy represents 1 sign bit, x exponent bits, and y mantissa bits); the 8-bit posit format (Gustafson & Yonemoto, 2017); and the 8-bit float-fix format (Han et al., 2019). Since the diversity of formats that could be formulated using n bits is nearly unlimited, here we assume some constraints to limit candidates while still including widely used formats such as fixed-point and floating-point formats, as below:
• The MSB (Most Significant Bit) is used as a sign bit and other bits represent magnitude. Accordingly, only symmetric formats that have identical representable ranges for positive and negative numbers are considered. Two’s complement representation is slightly asymmetric since it can represent one more negative value, but it does not incur a significant difference.
• The number of valid bits of a larger value is greater than or equal to the number of valid bits of a smaller value. The valid bits stand for significant digits in binary representation.
• The ratio between consecutive representable values does not exceed 2. For example, the base-4 logarithmic format is excluded.
We could obtain 166 8-bit formats that meet these constraints. Then, we reduce the number of valid bits in each format by 1 and 2 to obtain corresponding 7- and 6-bit formats, resulting in 498 formats in total. More information on the numeric formats considered in our experiments is provided in Appendix A.1.
2.2 ACTIVATION AND ERROR QUANTIZATION
In a neural network consisting of n layers, the training process is described by
A_{l+1} = f_l(W_l^t, A_l)    (1)
E_l = g_l(W_l^t, E_{l+1})    (2)
G_{w_l} = h_l(A_l, E_{l+1})    (3)
W_l^{t+1} = o(G_{w_l}, W_l^t)    (4)
where A, E, W, and G_w are activation, error, weight, and weight gradient, respectively. f, g, h, and o are the forward, backward, gradient, and update functions. l and t represent the layer number and time step. We follow the quantized training scheme suggested by Fox et al. (2020), but with the following modifications to reduce hardware implementation costs. A and E are quantized not only for the GEMM input but also for the BatchNorm layer input. The BatchNorm layer normalizes its input using the mean and variance of each channel, but these values are obtained only after observing all the inputs from the previous layer, necessitating that all input values are temporarily stored in memory. Therefore, quantizing the BatchNorm layer's input significantly reduces memory footprint and memory access overhead. Additionally, the scope of exponent bias sharing is extended to a layer (A_l and E_l) to avoid the overhead of aligning partial sums from different blocks in block-wise exponent sharing. Finally, instead of determining the shared exponent bias by analyzing all values in the layer, we conservatively update it by detecting overflow and underutilization that occurred in the previous mini-batch.
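One plausible reading of this conservative update rule is sketched below; the one-step-per-mini-batch policy and the two detection flags are our assumptions for illustration, not the paper's exact implementation.

```python
def update_shared_bias(bias: int, overflowed: bool, underutilized: bool) -> int:
    """Layer-wise shared exponent bias update, once per mini-batch:
    widen the representable range after an overflow, and narrow it
    (regaining precision) after the top of the range went unused."""
    if overflowed:        # some value exceeded the representable range
        return bias + 1
    if underutilized:     # the largest binade of the range went unused
        return bias - 1
    return bias
```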
2.3 INDICATORS OF TRAINING PERFORMANCE
Effect of quantized error: Quantizing the error E in the backward path is independent of how the forward path behaves since the loss surface of the model does not change. Therefore, the optimal W that the network needs to reach through training remains the same regardless of the error quantization scheme. However, when the error is quantized, a quantization error ∆E is introduced in E, which incurs a noise N_∆E in G_w through the gradient function in Eq. 3 and potentially updates each weight in the wrong direction. While some amount of noise may improve the training performance through regularization, using low-precision formats already introduces a large noise in the network, incurring performance degradation (see Appendix A.8). Therefore, we suggest that the weight gradient error N_∆E could be a good indicator of degradation in training performance. One way to implement this is predicting performance using the magnitude of N_∆E; however, if the noise is in the same direction as G_w, it would only change the amount of each update and result in a less severe effect. Instead, we could measure the misalignment between G_w + N_∆E and G_w for performance prediction. The misalignment between two vectors is estimated by
∠(A, B) = cos^{-1}( (A · B) / (‖A‖_2 ‖B‖_2) )    (5)
Then, the change in the update direction due to N_∆E is ∠(G_w, G_w + N_∆E). We can expect that the smaller ∠(G_w, G_w + N_∆E), the better the training performance.
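Computing this indicator is straightforward; below is a PyTorch sketch of Eq. 5 applied to flattened weight gradients (the function name is ours).

```python
import torch

def misalignment(g_full: torch.Tensor, g_noisy: torch.Tensor) -> torch.Tensor:
    """Angle between the full-precision weight gradient and the gradient
    computed with quantized activations or errors (Eq. 5), in radians."""
    cos = torch.dot(g_full.flatten(), g_noisy.flatten()) / (
        g_full.norm() * g_noisy.norm())
    return torch.acos(cos.clamp(-1.0, 1.0))  # clamp guards against rounding

g = torch.randn(1000)
print(misalignment(g, g + 0.1 * torch.randn(1000)))
```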
Effect of quantized activation: Contrary to error quantization, activation quantization affects the way the forward path operates, and the loss surface of the model changes. Hence, the global optima of weight parameters shift, where the amount of shift would be proportional to the quantization noise. The displacement of global optima can be indirectly estimated using the direction of the weight gradients G_w. If the angle ∠(G_w, G_w + N_∆A) is small, the deviation of the global optima is expected to be small as well, suggesting a better training performance.
In the discussions above, we assumed that the angles ∠(G_w, G_w + N_∆E) and ∠(G_w, G_w + N_∆A) could be used to predict training performance. We experimentally prove this by comparing the training performance of different numeric formats. For 498 numeric formats in 6 to 8 bits, we compare the loss obtained from training with the proposed performance indicators (Fig. 2). Training loss is obtained by training ResNet-18 on CIFAR-10 dataset using SGD with a momentum of 0.9 for 60 epochs. The batch size is 128 images and the initial learning rate is 0.1, which is decayed by a cosine scheduler. We average angles from 100 mini-batches after quantizing a pre-trained model. Note that we use G_w of the first layer since it can reflect quantization errors that occur in the activations and errors of all the layers in the network. The weight gradients from the full-precision network, the network with quantized activations, and the network with quantized errors are G_w, G_w + N_∆A, and G_w + N_∆E, respectively. Fig. 2 shows that using the misalignment angle results in not only a higher Spearman's correlation but also a more distinct shape for low training losses, making it a better metric than the error magnitude. For instance, using the error magnitude would predict the best format for transformer incorrectly (see Fig. 8(e) in Appendix A.3). While obtaining the misalignment angle requires additional computations, its overhead is negligible since the part that requires the most time and computation is to obtain G_w, G_w + N_∆E, and G_w + N_∆A, which is still significantly lower than actual training. Using this method, we could determine the optimal format for a specific neural network model, dataset, and task very efficiently as we only need to
measure the misalignment angle without time-consuming network training. For experiments in Fig. 2, the amount of computation is reduced by 99.6%, and the reduction will be even larger for larger datasets and complex networks that need more epochs for training.
2.4 OPTIMAL FORMAT FOR DATA FLOW QUANTIZATION
Here we show that we could find an optimal format for training with quantized errors and activations using the proposed performance estimation method above. To find a format suitable for a wide range of models, we select six models with different architectures, layer types, and target tasks that are widely used in quantized training research for experiments: ResNet-18, ResNet-101, MobileNetV2 (Sandler et al., 2018), 2-layer LSTM, small transformer for translation on the IWSLT German to English dataset (Cettolo et al., 2014), and SSD-Lite (Liu et al., 2016) with MobileNetV2. We first measure misalignment angles for 166 8-bit formats. To verify the correlation between the training performance and the misalignment angles, we select four formats that exhibit low hardware implementation costs (INT8, FP152, FP143, and FP134) and train the networks using each format. While we may use different formats for activation and error, it requires a complicated datapath (Sun et al., 2019) and hence we only consider a single format for both variables. The experimental results in Fig. 3 demonstrate that the training performance is higher if both misalignment angles are small in all tasks and models, confirming that the proposed indicators could be used to determine the optimal numeric format. Fig. 3 suggests that FP134 and FP143 are the best candidates across all models. For hardware implementation, FP134 is the most optimal format due to its low implementation cost, which is discussed in Appendix A.7 in detail. Note that using the error magnitude leads to the same conclusion that FP134 is the best format for the target models. See Appendix A.3 for more details.
3 NETWORK QUANTIZATION
In quantized neural networks, the weight parameters are generally quantized in a way that minimizes the quantization error (Choi et al., 2019; Martinez et al., 2020). For instance, if x is quantized into a fixed-point format through s · round(x/s), a proper value is selected for the scaling factor s to minimize the quantization error. However, as the weights continue to change during training, we need to recalculate s for every update, which could cause significant overhead. Therefore, prior studies on low-precision training suggest constraining the scaling factor to a power of 2 via the shared exponent (Köster et al., 2017) or the shared exponent bias (Fox et al., 2020). In this section,
we analyze the issues behind weight quantization and propose a new quantization scheme to mitigate those issues.
3.1 FLUCTUATION OF WEIGHT PARAMETERS
In typical low-precision training, a master copy of weight parameters is separately maintained in high precision, and those weights are updated based on the computed weight gradient. This high-precision weight is quantized into a low-precision format and used for the forward path computation during training. If the scaling factor s is constrained to 2^n, the quantization threshold remains the same unless s is updated due to overflow or underutilization. If the optimal weight is located between two representable values of a data format, the quantized weight would fluctuate alternately between the two values in each update (Fig. 4(a)) even for a very small weight update, causing large fluctuations and undermining training performance.
3.2 HYSTERESIS QUANTIZATION
To mitigate the fluctuation issue above, we propose to introduce the concept of hysteresis to quantization. More specifically, we quantize each weight differently in a way that the quantized value tends to stay at its current value, effectively minimizing undesired oscillation between two values due to small weight updates. The equation below shows an example of the proposed quantization scheme.
Q_w^t = ⌊w^t⌋ if w^t > Q_w^{t-1};  ⌈w^t⌉ if w^t < Q_w^{t-1}    (6)
where w is the original value, Q_w is its quantized value, and t is the time step. The proposed hysteresis quantization reduces fluctuation significantly, stabilizing the training process and allowing the network to reach global optima more efficiently. In Fig. 4(b), if the weight change ∆W is small, a sufficient number of those changes must accumulate to flip Q_w. Hence, the update frequency is now proportional to the weight gradient. This helps the network to learn better while suppressing fluctuations for small G_w values. Alternatively, we may mitigate weight quantization errors by adopting AdaRound (Nagel et al., 2020), which learns whether each weight should be rounded up or down to produce the same output as high-precision weights. However, whenever full-precision weights are updated, we need to re-train the learnable parameters (i.e., the quantization scheme of each weight), incurring a large overhead and undermining the benefit of low-precision training.
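A sketch of Eq. 6 on an arbitrary sorted quantization grid is given below; the grid values are illustrative, and ties (w^t = Q_w^{t-1}) are resolved toward the ceiling, a choice Eq. 6 leaves open.

```python
import torch

def hysteresis_quantize(w, q_prev, grid):
    """Hysteresis quantization (Eq. 6): round toward the previous
    quantized value, so Q_w only flips after w crosses a grid point.
    floor/ceil are taken with respect to the sorted quantization grid."""
    ceil_idx = torch.searchsorted(grid, w)                   # smallest grid[i] >= w
    floor_idx = torch.searchsorted(grid, w, right=True) - 1  # largest grid[i] <= w
    ceil_q = grid[torch.clamp(ceil_idx, max=len(grid) - 1)]
    floor_q = grid[torch.clamp(floor_idx, min=0)]
    # Rising weights round down, falling weights round up.
    return torch.where(w > q_prev, floor_q, ceil_q)

grid = torch.tensor([-1.0, -0.5, 0.0, 0.5, 1.0])
q = torch.tensor([0.5])
for w in [0.52, 0.48, 0.45, -0.05]:
    q = hysteresis_quantize(torch.tensor([w]), q, grid)
    print(w, q.item())   # 0.5, 0.5, 0.5, 0.0; flips only past a grid point
```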
3.3 ULTRA-LOW-PRECISION FORMAT FOR NETWORK QUANTIZATION
To verify the effectiveness of the proposed hysteresis quantization, we select 4-bit logarithmic representation as an ultra-low-precision format for weight parameters. This format has the same dynamic range as INT8 which is widely used for weight quantization, and is more hardware-efficient as multiplication is implemented only using simple shift operations. There have been attempts to use logarithmic weights in quantized neural networks (Lee et al., 2017; Elhoushi et al., 2021), but from-scratch training shows a significant performance degradation. In logarithmic data formats, the interval of quantization points is not uniform, making the effect of fluctuation more severe.
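A possible emulation of such a 4-bit logarithmic quantizer is sketched below; the exact code-point layout (one zero code plus seven exponent levels per sign) and the underflow threshold are our assumptions for illustration.

```python
import torch

def log4_quantize(w: torch.Tensor, bias: int = 0) -> torch.Tensor:
    """Map weights to a 4-bit base-2 logarithmic grid: 1 sign bit plus
    3 bits selecting among {0, 2^(bias-6), ..., 2^bias}."""
    sign = torch.sign(w)
    mag = w.abs()
    # Round the exponent and clamp to the seven representable levels.
    e = torch.round(torch.log2(mag.clamp(min=1e-12))).clamp(bias - 6, bias)
    q = sign * torch.pow(2.0, e)
    # Magnitudes below the smallest level map to the zero code.
    return torch.where(mag < 2.0 ** (bias - 6.5), torch.zeros_like(q), q)

w = torch.randn(8) * 0.1
print(log4_quantize(w, bias=-3))
```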
Fig. 5 shows experimental results of ResNet-18 training on ImageNet using 4-bit logarithmic weights. Note that we apply channel-wise quantization to the convolutional layers to compensate for the insufficient expression range and layer-wise quantization to the other types of layers. Further details on the experimental setup are provided in Appendix A.5.1. First, we measure how many quantized weights Q_w change when the network performs one weight update using a mini-batch and average them over the first 100 updates in the 60th epoch. The experimental result displayed in Fig. 5(a) clearly shows that using hysteresis significantly reduces weight change frequency and stabilizes the training process. Fig. 5(b) compares the training performance of quantization schemes with and without hysteresis. Hysteresis quantization not only speeds up training but also achieves better results at the end of training. Note that hysteresis quantization is applicable to other data formats, and additional experimental results can be found in Appendix A.4.
4 EXPERIMENTAL RESULTS
4.1 LOW-PRECISION TRAINING SCHEME
For low-precision training, we need to quantize four variables: activation, error, weight, and weight gradient. In our experiments, we apply the quantized training scheme detailed in 2.2 to all of these variables, as depicted in Fig. 6. As in previous studies on 8-bit training, the inputs of GEMM are all quantized into 8 bits. Additional functions are applied to GEMM results in the forward and backward paths. ReLU, tanh, and sigmoid functions are performed directly on the input, whereas the input of BatchNorm is re-quantized.
4.2 8-BIT LOW-PRECISION TRAINING
In Section 2.4, we found that FP134 is the optimal format for low-precision training using the proposed performance prediction method. We measure the training performance of this format and compare it against other 8-bit data formats from recent studies by applying those formats to the training of various neural network models. More details on the experimental setup are provided in Appendix A.5. The performance of the proposed data format is summarized in Table 1. Overall, 8-bit training using FP134 achieves nearly the same performance as the full-precision training on all models. Even in MobileNetV2, which is known to be sensitive to quantization due to the small number of parameters, only 0.3% degradation occurred. Sun et al. (2019) show that HFP8 also exhibits only 0.2% accuracy degradation in MobileNetV2 (71.81% vs. 71.61%), but they quantize BatchNorm input into 16 bits instead of 8 bits, roughly doubling the memory access and computational complexity. Additionally, since the forward and backward paths employ different data formats, HFP8 is actually implemented using 9-bit MAC units in hardware (Agrawal et al., 2021). Table 2 compares the training performance of various data formats for ResNet-18 training. The columns w, x, dw, dx, and acc refer to weight, activation, weight gradient, error, and GEMM accumulation, respectively. Our FP134 format exhibits no accuracy drop compared to full-precision training. HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) demonstrate similar performance, but they both use higher precision to represent BatchNorm inputs, and different formats are adopted in the forward and backward paths, necessitating complex computation units when implemented in hardware, as described above. In addition, BM8 assumes block-wise sharing of exponent bias, incurring additional overhead in memory access and data alignment. FP8-SEB (Park et al., 2021) addresses this issue by employing layer-wise exponent bias sharing and multi-way MAC units, but it results in a 0.7% accuracy drop for ResNet-18 training. Contrarily, our data format shows no performance degradation, while deeply quantizing BatchNorm inputs into the same format and allowing for a simple datapath by using an identical data format in the forward and backward paths.
4.3 ULTRA-LOW-PRECISION TRAINING WITH 4-BIT LOGARITHMIC WEIGHTS
Elhoushi et al. (2021) recently demonstrated that 4-bit logarithmic weights could be used for network quantization. Fine-tuning of a pre-trained model only showed 0.2% accuracy degradation, but
from-scratch training of the same model resulted in a 4.5% accuracy drop in ResNet-18 training (Table 3). Similarly, our experiments show 2.1% lower accuracy when training ResNet-18 using 4-bit logarithmic weights and FP134 format for other variables. However, using hysteresis quantization greatly improves the training performance and reduces accuracy degradation to 0.2%. This is identical to the training performance achieved through fine-tuning a pre-trained model by Elhoushi et al. (2021), confirming that hysteresis quantization effectively solves the issue of sub-optimal solutions in from-scratch training. In addition, Table 4 demonstrates that hysteresis quantization improves the training performance in all target models. Note that we quantized all trainable weights except for the BatchNorm parameters into 4 bits in experiments; the training performance could be further improved by using higher precision for error-sensitive parts such as the first/last layers and residual connections.
5 CONCLUSION
In low-precision training, the dynamic range of a tensor is data-dependant, and hence an optimal data format depends on various factors such as model, dataset, and quantization scheme. We showed that the training performance of a specific data format for activation and error could be predicted by observing the errors introduced in the weight gradients. Based on this observation, we determined an optimal 8-bit format for low-precision training very efficiently without running numerous training runs. The proposed FP134 format achieved a similar or better accuracy compared to prior works, while allowing for efficient hardware implementation through quantizing BatchNorm inputs and using a unified data format in both forward and backward paths. In addition, we proposed the hysteresis quantization scheme for network quantization, which improves training performance by suppressing undesired fluctuations and stabilizing the training process. In ultra-low-precision training with 4-bit logarithmic weights, hysteresis quantization significantly improves training performance by mitigating sub-optimal solutions, closely matching the performance obtained through fine-tuning a pre-trained model. We expect that these two schemes can complement each other to enable practical low-precision training on various models and tasks.
ACKNOWLEDGMENTS
This work was supported by the National Research Foundation of Korea (Grant No. NRF2022R1C1C1006880). The EDA tool was supported by the IC Design Education Center.
A APPENDIX
A.1 VARIOUS FORMATS ANALYZED IN SECTION 2
In this paper, we made three assumptions on the quantization formats that were analyzed. First, one bit is allocated as a sign bit, so only symmetric formats are allowed. Second, the number of valid bits for values with larger magnitude must be greater than or equal to the number of valid bits for values with smaller magnitude. Lastly, the base does not exceed 2.
Considering the above assumptions, we provide a systematic approach for generating the different quantization methods that were used for analysis in Section 2, in order to create quantization methods that trade off dynamic range against the number of valid bits. A quantization method is expressed with the following items: i) a list of decreasing positive real numbers P that contains the interval points (Eq. 7) and ii) a non-increasing integer list L that accompanies the interval list, with each item representing the number of valid bits (Eq. 8). Here, s is the shared exponent bias.
P = {2^{s+1}, 2^s, 2^{s-1}, ..., 2^{s-K+1}} where s ∈ Z    (7)
L = {l_0, l_1, ..., l_{K-1}} where l_k ∈ N, i < j ⇒ l_i ≥ l_j    (8)
The quantization points Q are generated in each of the intervals, which are sliced into 2^{l_k - 1} evenly distributed datapoints. If the interval is [2^s, 2^{s+1}), the quantization points Q can be expressed by Eq. 9.
Q = {2^s, 2^s(1 + 1/2^{l_k-1}), 2^s(1 + 2/2^{l_k-1}), ..., 2^s(1 + (2^{l_k-1} - 1)/2^{l_k-1})}    (9)
Notice that L for an α-bit quantization must satisfy
2^{α-1} = 1 + Σ_{k=0}^{K-1} 2^{l_k - 1}    (10)
Since the format is symmetric, only half of the data points are assigned to positive numbers, so the exponent in Eq. 10 should be α− 1 instead of α. The reason for adding 1 is to include a zero value. For example, when shared exponent bias is -1, an 8-bit fixed-point quantization would be expressed as follows:
P = {2^0, 2^{-1}, 2^{-2}, 2^{-3}, 2^{-4}, 2^{-5}, 2^{-6}, 2^{-7}}    (11)
L = {7, 6, 5, 4, 3, 2, 1}    (12)
The first interval from 1 to 0.5 would be evenly sliced into 2^{7-1} datapoints, the next interval from 0.5 to 0.25 into 2^{6-1}, etc. Various cases are shown in Fig. 7, with P plotted on the x-axis and L plotted on the y-axis. Since P represents the range of values due to the shared exponent bias, which is independent of the data format, L can represent all of the various data formats we consider in this paper.
When selecting 8-bit formats, we chose the formats so that intervals with fewer than 3 valid bits do not appear in more than two intervals, to reduce the search space, as such formats have an unnecessarily large dynamic range. Thus, formats such as [7,6,5,4,2,2,2,1] were excluded from the search space. Considering all of the generation rules, we selected 166 distinct 8-bit formats with different dynamic ranges and valid bits, from [7,6,5,4,3,2,1] to [3,3,3,...,3,2,1]. After the number of valid bits for an 8-bit format is selected, 1 or 2 is subtracted from each value to create the corresponding 7-bit and 6-bit formats. For example, in the case of the [6,5,5,5,5,4,4,4,3,2,1] 8-bit format, the corresponding 7-bit format is [5,4,4,4,4,3,3,3,2,1] and the corresponding 6-bit format is [4,3,3,3,3,2,2,2,1]. From the generated 166 8-bit formats, 7-bit and 6-bit formats were also generated using this rule.
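The generation rules are easy to check programmatically; in the sketch below, the first list is 8-bit fixed point, the second is our reading of an FP134-like format with subnormals, and the third satisfies Eq. 10 but falls under the extra pruning rule above.

```python
def is_valid_format(L, alpha=8):
    """Check a valid-bit list L against the generation rules:
    non-increasing valid bits (Eq. 8) and the code-count identity
    2^(alpha-1) = 1 + sum_k 2^(l_k - 1) from Eq. 10."""
    non_increasing = all(a >= b for a, b in zip(L, L[1:]))
    return non_increasing and 2 ** (alpha - 1) == 1 + sum(2 ** (l - 1) for l in L)

print(is_valid_format([7, 6, 5, 4, 3, 2, 1]))              # fixed point: True
print(is_valid_format([5, 5, 5, 5, 5, 5, 5, 4, 3, 2, 1]))  # FP134-like: True
print(is_valid_format([7, 6, 5, 4, 2, 2, 2, 1]))           # True by Eq. 10, but pruned
```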
A.2 SOFTWARE IMPLEMENTATION DETAILS
To support quantized training for various formats, we wrote custom C++ and CUDA code to emulate quantized data. Our custom C++ and CUDA extension code performs quantization-related functions through the Python APIs in PyTorch for extensible research while maintaining high performance. We emulate the quantized values using custom code in the parts of the network that need quantization, and PyTorch built-in functions are used for computation kernels such as convolution and matrix multiplication. We created a package named lptorch, short for low-precision PyTorch, and the code can be found in the supplementary material.
A.3 ANGLE VS. MAGNITUDE TO PREDICT PERFORMANCE
In addition to the misalignment angles of G_w (∠(G_w, G_w + N_∆A) and ∠(G_w, G_w + N_∆E)), as defined in Section 2.3, we used the magnitude of the noise (|N_∆A| and |N_∆E|) in order to predict the final trained performance, and the results are shown in Fig. 8. Fig. 3 and Fig. 8 show that both the error magnitude and the misalignment angle are good metrics for determining an optimal data format. For the six target models, both metrics suggest FP134 as the best format. However, the misalignment angle still better captures the training performance. For instance, in Fig. 8(e), although FP134 shows a smaller noise magnitude, the actual training loss is smaller for FP143. Similarly, in Fig. 8(b), (c) and (f), although INT8 failed and FP152 succeeded in training, the absolute value of the noise did not clearly separate the two formats. Based on these observations, we conclude that the misalignment angles are more suitable for predicting training performance than the absolute value of the noise.
A.4 HYSTERESIS QUANTIZATION WITH INTEGER WEIGHTS
In addition to 4-bit logarithmic weights, we also tested the hysteresis quantization scheme on a low-precision integer format (INT4) that uses uniform quantization. The results are shown in Table 5. Experimental results show that using hysteresis improves the performance in most cases. In addition, MobileNetV2 training with INT4 weights initially failed, but using hysteresis enables reliable training, which suggests that hysteresis quantization not only helps the network reach the optimal point but also prevents divergence in an unwanted direction during the training process.
However, it is interesting to see that the hysteresis quantization is less effective on the LSTM model for the INT4 format. We suspect that this is due to the weight distribution characteristics of the LSTM model. As shown in Fig. 9, most of the weights have a relatively large magnitude in the
LSTM model when normalized, contrary to ResNet-18 in which the weights are more evenly distributed. In logarithmic formats, the relative amount of quantization error is similar for all values. In contrast, the relative amount of quantization error is smaller for large values in uniform quantization. Therefore, the weight parameters of LSTM are more severely affected by fluctuation in logarithmic formats, making our hysteresis quantization scheme more effective in those formats compared to uniform quantization.
A.5 EXPERIMENTAL DETAILS

A.5.1 RESNET-18 (IMAGENET)
We conducted ImageNet experiments using SGD with a momentum of 0.9 for 90 epochs with a batch size of 256 images and an initial learning rate of 0.1 which is decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-18 architecture from the official PyTorch implementation1. Fig. 10 shows Top-1 training & validation accuracy graphs. Observation of the training graph indicates that all of the results are close to the baseline within 0.2% with the exception of FP130 without hysteresis quantization.
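In PyTorch terms, this schedule corresponds to the sketch below (illustrative only; data loading and our quantization hooks are omitted):

```python
import torch
from torchvision.models import resnet18

model = resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Decay the learning rate by 10x at epochs 30 and 60, as described above.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60], gamma=0.1)

for epoch in range(90):
    # ... train one epoch over ImageNet with batch size 256 ...
    scheduler.step()
```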
A.5.2 RESNET-101 (IMAGENET)
We trained ResNet-101 by applying the same training method as ResNet-18. We conducted ImageNet experiments using SGD with a momentum of 0.9 for 90 epochs with a batch size of 256 images and an initial learning rate of 0.1 which is decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-101 architecture from the official PyTorch implementation2. Fig. 11 shows Top-1 training & validation accuracy graphs. Observation of the training graph indicates that all of the results are close to the baseline with less than 0.3% performance drop except for FP130 without hysteresis quantization.
A.5.3 MOBILENETV2 (IMAGENET)
We conducted ImageNet experiments using SGD with a momentum of 0.9 for 270 epochs with a batch size of 256 images and cosine annealing with an initial learning rate of 0.05. We used the MobileNetV2 architecture from the official PyTorch implementation3. Fig. 12 shows Top1 training & validation accuracy graphs. Observation of the training graph indicates that FP130
1 https://github.com/pytorch/examples/tree/master/imagenet
2 https://github.com/pytorch/examples/tree/master/imagenet
3 https://github.com/pytorch/examples/tree/master/imagenet
without hysteresis leads to very unstable fluctuations throughout the training. On the other hand, in FP130 with hysteresis, training is less susceptible to fluctuations and follows the baseline (FP32) training closely until the learning rate decreases toward the latter part of learning, where both FP130 with hysteresis and FP134 show some degradation from the baseline. This is seen as a limitation due to the low precision of each format.
A.5.4 2-LAYER LSTM (PTB)
We adopted the 2-layer Long Short-Term Memory (LSTM) network from PyTorch Examples4 for language modeling on the Penn Treebank dataset (Marcus et al., 1993). We ran experiments in batches of 20 sentences with an initial learning rate of 20, which is decayed by a factor of 4 at epochs 11, 16, 26, 31, and 37. The embedding and hidden dimensions are 650 and the sequence length is 35. Fig. 13 shows training & validation perplexity.
A.5.5 TRANSFORMER MODEL (IWSLT)
We adopted the Transformer Base model from the FairSeq5 repository on the IWSLT’14 German to English translation task. We used Adam optimizer and default training parameters found in the repository and trained from scratch for 25 epochs. BLEU scores were calculated using the script from the repository.
A.5.6 MOBILENETV2 + SSDLITE (VOC)
We adopted a PyTorch implementation of SSDLite from the online repository6. The base network is MobileNetV2, pretrained with each format as in Appendix A.5.3. The entire network is trained on the VOC2012 and VOC2007 trainval datasets and evaluated on the VOC2007 validation dataset. We used SGD with a momentum of 0.9 for 200 epochs in batches of 32 images and cosine annealing with an initial learning rate of 0.01. Fig. 14 shows the validation loss at every 5 epochs. Even in this experiment, the loss fluctuates significantly for FP130 without hysteresis, whereas with hysteresis, training proceeds much more stably. FP134 showed similar results to the baseline regardless of hysteresis quantization.
A.6 MODEL QUANTIZATION METHODS
We quantized the GEMM inputs and BatchNorm inputs in all quantized training experiments. Among the six models used in the experiments, the quantization details for three representative structures are shown in Fig. 15. In each structure in the figure, inputs such as x, c, h, V, K, and Q are all quantized to 8 bits.
4 https://github.com/pytorch/examples/tree/master/word_language_model
5 https://github.com/pytorch/fairseq
6 https://github.com/qfgaohao/pytorch-ssd
A.7 HARDWARE EVALUATION
For hardware implementation cost comparisons, we implemented a conventional MAC unit and a multi-way MAC unit with integer-based accumulation (Tambe et al., 2020; Park et al., 2021) that support data formats presented in Section 4.2. For accumulation, we use FP169 with chunk-based accumulation (Wang et al., 2018). Experimental results in Table 6 show that FP134 exhibits lower
Structure             | FP134  FP143¹  HFP8²  BM8³  Flex16+5⁴ | FP134  FP143¹  HFP8²  BM8³  Flex16+5⁴
Conventional          |  1355    1320   1308  1460       3800 |   122     116    106   141        537
Multi-way, 2-input    |  1335    1480   2342  1865       2268 |   178     178    283   258        371
Multi-way, 4-input    |   888    1034   1615  1296       1885 |   120     135    205   184        351
Multi-way, 8-input    |   678     836   1343  1074       1672 |    97     123    194   168        342
Multi-way, 16-input   |   571     698   1065   957       1540 |    95     114    170   155        329
Multi-way, 32-input   |   511     668    994   898       1485 |    87     111    170   152        326
Multi-way, 64-input   |   509     638    955   856       1450 |    88     110    172   149        326
¹ Park et al. (2021)  ² Sun et al. (2019)  ³ Fox et al. (2020)  ⁴ Köster et al. (2017)
cost than FP143 and other formats in previous studies. Note that HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) employ different formats for activation and error. Therefore, they need to be implemented in FP153 and FP145 to support all operations with a single MAC unit (Agrawal et al., 2021). Since Flex16+5 (Köster et al., 2017) requires 16-bit multiplication, its cost is significantly higher than other 8-bit formats.
A conventional MAC unit consists of a multiplier and an accumulator. In the multiplier, the exponents of the two input operands are summed while their mantissas are multiplied. The mantissa multiplication is the more complex part and hence dominates the area of the multiplier; as a result, the multiplier is larger when more bits are allocated to the mantissa. In the accumulator, a floating-point adder adds the multiplication result to a partial sum in FP169. The adder decomposes into a shifter that aligns the mantissa by the exponent difference, an integer adder that sums the aligned mantissas, and a quantization unit that converts the result back to FP169. Since the result is re-quantized into FP169, the addition of aligned mantissas does not need to be lossless. The FP169 format has a 10-bit mantissa including one hidden bit, so we only need to accurately calculate the upper 10 bits, which necessitates a 12-bit adder considering rounding. Shifting by more than 12 bits is not needed even if the result of the multiplier has a larger exponent range. Therefore, the shifter, adder, and quantization unit, which are the components of the accumulator, are not affected by the input format. There are minor differences, such as the adder that calculates the difference between exponents and a shifter with a different input bit width, but their costs are negligible.
In contrast, a multi-way MAC consists of a multiplier, a shifter for alignment, an adder tree, a normalization unit, and a final accumulator. The multiplier and the final accumulator are identical to those of the conventional MAC. However, since only one normalization unit and one final accumulator are shared across multiple inputs, their implementation cost becomes insignificant for a larger number of inputs. The shifter for alignment converts the multiplier output to an integer format, since the cost of integer addition is lower than that of floating-point addition. Then, the adder tree sums those integer values, and the normalization unit converts the result back to a floating-point format. The cost of the alignment shifter, adder tree, and normalization unit is determined by the integer bit width, and the larger the exponent range of the input operands, the larger the required bit width, as shown in Fig. 16. In FP134, FP143, and FP152, the minimum integer bit widths are 23, 37, and 67 bits, respectively. Since the bit width is sufficiently large, the cost difference of these units exceeds the cost difference of the multiplier. Therefore, the cost of a multi-way MAC increases with the number of exponent bits.
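A simple accounting that reproduces these widths (our reading, not spelled out above) is that the product of two FP1-e-m values has a 2(m+1)-bit significand and an exponent spread of 2(2^e - 1), so the aligned integer needs about 2(m+1) + 2(2^e - 1) - 1 bits:

```python
def align_width(e, m):
    # 2(m+1)-bit product significand plus the full exponent spread, minus one
    # overlapping position; one plausible accounting of the quoted widths.
    return 2 * (m + 1) + 2 * (2 ** e - 1) - 1

for name, (e, m) in {"FP134": (3, 4), "FP143": (4, 3), "FP152": (5, 2)}.items():
    print(name, align_width(e, m))        # 23, 37, 67
```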
When designing a neural network training processor, some parts of the hardware (e.g., batch normalization, non-linear activation functions such as tanh and sigmoid, and the softmax function) are typically implemented with higher precision to avoid a performance drop. Hence, we need to consider data format conversion overheads when comparing different formats.
Direction   | FP134  FP143  HFP8¹  BM8²  Flex16+5³ | FP134  FP143  HFP8¹  BM8²  Flex16+5³
To FP32     |   155    141    145   176        330 |    28     26     27    30         53
From FP32   |   139    144    152   162        427 |    19     20     22    23         55
¹ Sun et al. (2019)  ² Fox et al. (2020)  ³ Köster et al. (2017)
If we consider various 8-bit data formats with different representation methods, as we did in Table 6, and assume that computations other than MAC operations are implemented in full precision, the processing architecture (except MAC units) will be identical for all formats. In addition, the on/off-chip memory space, control logic, and on-chip interconnects will remain the same. The only difference would be the low-precision MAC units and the data conversion units between full-precision and low-precision formats. However, the cost of conversion between low-precision and high-precision floating-point formats is typically very low and does not vary much with the low-precision format. For low-precision to high-precision conversion, we only have to add a bias-correction term to the exponent and append zeros after the mantissa. For high-precision to low-precision conversion, we need to add a bias-correction term to the exponent, clamp overflowed values to the maximum, and round off the mantissa. The cost is very low compared to the MAC operation, and the cost difference between different low-precision formats is negligible. We have synthesized the conversion units for different formats, and their costs are presented in Table 7. The experimental results confirm that the overhead of data format conversion is significantly lower than that of MAC operations. In addition, all formats except Flexpoint exhibit similar conversion costs.
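To make the simplicity of such a conversion concrete, decoding an 8-bit FP134 pattern takes only a few operations. The sketch below is ours; the shared-bias convention and the subnormal handling are assumptions:

```python
def fp134_to_float(bits, shared_bias=0):
    """Decode an 8-bit FP134 pattern: 1 sign, 3 exponent, 4 mantissa bits."""
    sign = -1.0 if (bits >> 7) & 0x1 else 1.0
    exp = (bits >> 4) & 0x7
    man = bits & 0xF
    if exp == 0:            # subnormal: no hidden bit (our assumption)
        return sign * (man / 16.0) * 2.0 ** (1 + shared_bias)
    return sign * (1.0 + man / 16.0) * 2.0 ** (exp + shared_bias)

print(fp134_to_float(0b0_001_0000, shared_bias=-3))   # 1.0 * 2^(1-3) = 0.25
```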
In addition to the synthesis result for ASIC implementation in Table 6, we measured the hardware overhead of MAC units of different data formats on FPGA. Table 8 shows the synthesis results on Xilinx Artix-7 FPGA (XC7A100TCSG324-1). Those MAC units do not need block RAMs (BRAMs), and we used a compiler directive to avoid using DSP modules for fair comparisons. Table 8 shows a similar trend to Table 6; the cost of one MAC gradually decreases as the number of inputs increases in the multi-way MAC. Also, due to integer-based addition in the adder tree, the cost of FP134, which has the smallest dynamic range, exhibits lower costs than the other formats.
A.8 EFFECT OF QUANTIZATION NOISE ON DATA FLOW QUANTIZATION
Table 9 shows the training results when both activation and error are quantized in various data formats. If an appropriate amount of noise is introduced in the network during training, it will increase the training loss but reduce the validation loss, suggesting that the model has been improved due to the regularization effect. However, if the noise level continues to increase, the model’s performance will start to degrade at some point. For instance, when MobileNetV2 is quantized in FP134, its performance is improved through the regularization effect since the training loss increases while
the validation loss decreases compared to FP32. However, in most cases both the training and validation losses increase when quantized, resulting in lower accuracy. This suggests that using a very low-precision data format already introduces a large amount of noise in the network, incurring performance degradation. Hence, it is necessary to reduce error in the network to improve the training performance in low-precision training.

1. What are the main contributions of the paper regarding numeric precision for neural network training?
2. What are the strengths of the proposed methods, particularly in dealing with low precision representations?
3. Do you have any concerns or suggestions regarding the paper's content, such as citations, clarity, or specific points in the introduction and sections?
4. How does the reviewer assess the novelty and usefulness of the ideas presented in the paper?
5. Are there any areas where the paper could be improved, such as providing more context for related works or clearing up minor issues in certain sections?
Summary Of The Paper
The authors propose 2 impactful methods to aid in the design of numeric precision for neural network training: a method to quickly determine which formats work for weights and activations using angular deviation of gradients between low precision and FP32, and a hysteresis method for dealing with low precision representations.
Review
While there are a few nitpicks I have (these should be fixed prior to publishing), the ideas in the paper are well-founded and useful. I believe there is a meaningful contribution to the field here.
In the intro, there really needs to be a citation for Köster et al. (2017) on Flexpoint. That was one of the first papers discussing these issues and one of the first implementations of different numerics in modern hardware.
In the second paragraph of the introduction, the line "When a value is represented using a fixed number of bits, there is a trade-off between dynamic range and precision" is not quite correct. This is only true for floating-point formats, so simply stating that will clear this up.
In the "Performance degradation in from-scratch training" section: The line "the neural network is trained from scratch while all the values in the network - not only parameters, but also other variables in the network (e.g., activation, error, and weight gradient) " is not necessarily true and different components are done differently even in this paper. I'm not sure why there is so much text devoted to differentiating quantized vs from-scratch training, but I think this could removed. The paper is about from-scratch training so just state that.
Under Numeric Formats, I'm not sure what a "symmetric" format is and why 2's complement is asymmetric. Please define or remove.
Under Effect of quantized activation, it says "...suggesting that the misalignment angle is a better metric to predict the ranking of various formats based on the training performance." I think this is true, but more due to the shape of the curve than the (small) effect on Spearman correlations. It might be clearer to state this; the angle differences are high even for small losses.
The hysteresis method has some great results and might really be the key to unlocking lower precision. Great work! |
ICLR
Title
Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization
Abstract
As the complexity and size of deep neural networks continue to increase, low-precision training has been extensively studied in the last few years to reduce hardware overhead. Training performance is largely affected by the numeric formats representing different values in low-precision training, but finding an optimal format typically requires numerous training runs, which is a very time-consuming process. In this paper, we propose a method to efficiently find an optimal format for activations and errors without actual training. We employ this method to determine an 8-bit format suitable for training various models. In addition, we propose hysteresis quantization to suppress undesired fluctuation in quantized weights during training. This scheme enables deeply quantized training using 4-bit weights, exhibiting only 0.2% degradation for ResNet-18 trained on ImageNet.
1 INTRODUCTION
Deep neural networks have been used in various fields such as vision, audio, natural language processing, and reinforcement learning. As larger and more complex neural networks are adopted, the energy and time consumed for training have become a critical issue in hardware implementation. Using low-bit representations in training significantly reduces hardware overhead and memory footprint; hence, neural network training with limited precision has been extensively studied recently. For instance, 16-bit formats are already adopted in commercial devices such as FP16 (IEEE, 2019) in Nvidia GPUs and bfloat16 (Kalamkar et al., 2019) in Google TPU (Wang et al., 2019). Also, Köster et al. (2017) suggested a new data format using a shared exponent suitable for low-precision training. Recently, it has been demonstrated that even 8-bit formats could be adopted in deep neural network training with reasonable accuracy (Sun et al., 2019; Fox et al., 2020). However, there are various issues in realizing low-precision training in practical applications as detailed below.
Optimal data format for low-precision training: Training performance is susceptible to the data format we use to represent variables in the network. When a value is represented using a floating-point format with a fixed number of bits, there is a trade-off between dynamic range and precision. For instance, allocating more bits to the exponent part in a floating-point format enlarges the dynamic range but lowers precision due to fewer bits in the mantissa part. Recent studies on 8-bit training suggest various ways to reduce the dynamic range required for number representation to enhance representation precision. Early work on 8-bit training (Wang et al., 2018) adopts a 5-bit exponent to represent different variables using a single format, but Sun et al. (2019) examine the statistics of each variable and optimize the numeric formats separately. Specifically, the values used in the forward path (weight and activation) have a relatively narrow dynamic range, and only 4 bits are allocated to the exponent. Fox et al. (2020) propose to divide data into smaller blocks and assign a shared exponent bias to each block. Since the values in a block tend to exhibit similar statistics, the forward (weight and activation) and backward (error) paths could be represented using only 2-bit and 4-bit exponents, respectively. Note that the shared exponent bias is effectively identical to the scaling factor. If a variable has a value of m · 2^e and a shared exponent bias of b, then its actual value is m · 2^{e+b}, which is identical to a scaling factor of 2^b. However, these approaches are difficult to generalize since we should empirically decide numeric formats for each task, neural
network structure, and quantization scheme (Fig. 1). Furthermore, analyzing the statistics of each variable is not enough to determine an optimal format. Their distributions often have a long tail, and hence the dynamic range of the numeric format should be experimentally selected through many trial-and-errors in actual training.
Performance degradation in from-scratch training: Previous studies on quantized models show that a model could achieve comparable accuracy to full-precision models even using 1- or 2-bit weights (Choi et al., 2019; Martinez et al., 2020) through fine-tuning a pre-trained model. However, in low-precision training where a neural network is trained from scratch using low-precision values and computations, the trained model typically shows a noticeable accuracy drop (Elhoushi et al., 2021). Fig. 1(b) shows the Top-1 validation accuracy of ResNet-18 (He et al., 2016) trained on ImageNet (Deng et al., 2009) for different training schemes. The weights are quantized into a 4-bit base-2 logarithmic format. From-scratch training of the model with quantized weights results in a 2.1% accuracy drop, whereas only 1.0% degradation is observed if we fine-tune a pre-trained model. This suggests that even though a better solution (i.e., a set of parameters) exists for a given format, it cannot be reached through from-scratch training.
To formalize the issues above, here we divide quantization in low-precision training into two types: network quantization and data flow quantization. Network quantization refers to the quantization of the neural network model. An example of this type of quantization is weight quantization. In network quantization, we need to reduce the performance difference between from-scratch training and fine-tuning (Yang et al., 2019b). On the other hand, data flow quantization refers to the on-the-fly quantization that occurs when data propagate through the network in low-precision training. Examples include activation, error, and weight gradient quantizations. Additional errors are introduced in weight update computation due to this type of quantization, which leads to performance degradation. Hence, we need to find an optimal format to minimize accuracy drop due to computation errors in data flow quantization.
In this paper, we present a systematic approach to implementing low-precision training on various models and tasks. First, we present a method to efficiently find an optimal format for data flow quantization. In addition, we introduce a hysteresis quantization technique, a new quantization method for network quantization that can mitigate the issues of from-scratch training. Our main contributions are:
• We present a method that can predict the training performance of various numeric formats for data flow quantization. This method allows us to determine an appropriate data format for different neural network structures, datasets, and tasks efficiently.
• Using the method above, we propose an optimal 8-bit format suitable for low-precision training of various models, which enables quantization of BatchNorm layer input and improves hardware efficiency with minimal performance degradation.
• We propose a new quantization scheme that utilizes the hysteresis effect to improve the performance of from-scratch training in network quantization. This scheme enables ultra-low-precision training using 4-bit logarithmic weights.
2 DATA FLOW QUANTIZATION
2.1 NUMERIC FORMATS
There are many numeric formats that can be constructed with n bits depending on how much dynamic range is required and how many valid bits are used for representing a value. For example, using 8 bits we could implement 8-bit fixed point integer format, 8-bit floating-point formats such as FP152, FP143, and FP125 (FP1xy represents 1 sign bit, x exponent bits, and y mantissa bits), 8-bit posit format (Gustafson & Yonemoto, 2017), and 8-bit float-fix format (Han et al., 2019). Since the diversity of formats that could be formulated using n bits is nearly unlimited, here we assume some constraints to limit candidates while still including widely used formats such as fixed-point and floating-point formats as below:
• The MSB (Most Significant Bit) is used as a sign bit and other bits represent magnitude. Accordingly, only symmetric formats that have identical representable ranges for positive and negative numbers are considered. Two’s complement representation is slightly asymmetric since it can represent one more negative value, but it does not incur a significant difference.
• The number of valid bits of a larger value is greater than or equal to the number of valid bits of a smaller value. The valid bits stand for significant digits in binary representation.
• The ratio between consecutive representable values does not exceed 2. For example, the base-4 logarithmic format is excluded.
We could obtain 166 8-bit formats that meet these constraints. Then, we reduce 1 and 2 valid bits in each format to obtain 7- and 6-bit formats, resulting in 498 formats in total. More information on the numeric formats considered in our experiments is provided in Appendix A.1.
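To make this family concrete, standard formats correspond to particular valid-bit lists (the mapping below is our reading), and each must satisfy the bit-budget constraint given as Eq. 10 in Appendix A.1:

```python
formats = {
    "INT8 (fixed point)": [7, 6, 5, 4, 3, 2, 1],
    "FP143 (3 mantissa bits, 15 normal binades)": [4] * 15 + [3, 2, 1],
    "FP152 (2 mantissa bits, 31 normal binades)": [3] * 31 + [2, 1],
}
for name, L in formats.items():
    # An interval with l valid bits holds 2^(l-1) points; with zero included,
    # an 8-bit symmetric format must cover 2^7 = 128 non-negative codes.
    assert 1 + sum(2 ** (l - 1) for l in L) == 2 ** 7, name
    print(name, "->", len(L), "intervals")
```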
2.2 ACTIVATION AND ERROR QUANTIZATION
In a neural network consisting of n layers, the training process is described by
A_{l+1} = f_l(W_l^t, A_l) \qquad (1)
E_l = g_l(W_l^t, E_{l+1}) \qquad (2)
G_{w_l} = h_l(A_l, E_{l+1}) \qquad (3)
W_l^{t+1} = o(G_{w_l}, W_l^t) \qquad (4)
where A, E, W, and Gw are activation, error, weight, and weight gradient, respectively; f, g, h, and o are the forward, backward, gradient, and update functions; and l and t represent the layer number and time step. We follow the quantized training scheme suggested by Fox et al. (2020), but with the following modifications to reduce hardware implementation costs. A and E are quantized not only for the GEMM input but also for the BatchNorm layer input. The BatchNorm layer normalizes its input using the mean and variance of each channel, but these values are obtained only after observing all the inputs from the previous layer, necessitating that all input values are temporarily stored in memory. Therefore, quantizing the BatchNorm layer's input significantly reduces memory footprint and memory access overhead. Additionally, the scope of exponent-bias sharing is extended to a layer (A_l and E_l) to avoid the overhead of aligning partial sums from different blocks in block-wise exponent sharing. Finally, instead of determining the shared exponent bias by analyzing all values in the layer, we conservatively update it by detecting overflow and underutilization that occurred in the previous mini-batch.
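This conservative per-layer update can be sketched as follows (our illustration; the exact thresholds are an implementation choice):

```python
def update_shared_bias(bias, overflowed, used_top_exponent, top_exponent):
    """Adjust a layer's shared exponent bias after each mini-batch.

    overflowed: whether any value was clipped to the format maximum.
    used_top_exponent: the largest exponent code actually used.
    top_exponent: the largest exponent code the format can represent.
    """
    if overflowed:                             # range too small: scale it up
        return bias + 1
    if used_top_exponent < top_exponent - 1:   # top binades unused: scale down
        return bias - 1
    return bias

print(update_shared_bias(0, overflowed=True, used_top_exponent=7, top_exponent=7))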
2.3 INDICATORS OF TRAINING PERFORMANCE
Effect of quantized error: Quantizing the error E in the backward path is independent of how the forward path behaves since the loss surface of the model does not change. Therefore, the optimal W that the network needs to reach through training remains the same regardless of the error quantization scheme. However, when the error is quantized, a quantization error ∆E is introduced in E, which incurs a noise N∆E in Gw through the gradient function in Eq. 3 and potentially updates each weight in the wrong direction. While some amount of noise may improve the training performance through regularization, using low-precision formats already introduces a large noise in the network, incurring performance degradation (see Appendix A.8). Therefore, we suggest that the weight gradient error N∆E could be a good indicator of degradation in training performance. One way to implement this is predicting performance using the magnitude of N∆E; however, if the noise is in the same direction as Gw, it would only change the amount of each update and result in a less severe effect. Instead, we could measure the misalignment between Gw + N∆E and Gw for performance prediction. The misalignment between two vectors is estimated by
\angle(A, B) = \cos^{-1}\left(\frac{A \cdot B}{\|A\|_2 \, \|B\|_2}\right) \qquad (5)

Then, the change in the update direction due to N∆E is ∠(Gw, Gw + N∆E). We can expect that the smaller ∠(Gw, Gw + N∆E), the better the training performance.
Effect of quantized activation: Contrary to error quantization, activation quantization affects the way the forward path operates, and the loss surface of the model changes. Hence, the global optima of weight parameters shift, where the amount of shift would be proportional to the quantization noise. The displacement of global optima can be indirectly estimated using the direction of the weight gradients Gw. If the angle ∠(Gw, Gw + N∆A) is small, the deviation of the global optima is expected to be small as well, suggesting a better training performance.
In the discussions above, we assumed that the angles ∠(Gw, Gw + N∆E) and ∠(Gw, Gw + N∆A) could be used to predict training performance. We experimentally prove this by comparing the training performance of different numeric formats. For 498 numeric formats in 6 to 8 bits, we compare the loss obtained from training with the proposed performance indicators (Fig. 2). Training loss is obtained by training ResNet-18 on the CIFAR-10 dataset using SGD with a momentum of 0.9 for 60 epochs. The batch size is 128 images and the initial learning rate is 0.1, which is decayed by a cosine scheduler. We average angles from 100 mini-batches after quantizing a pre-trained model. Note that we use Gw of the first layer since it can reflect quantization errors that occur in the activations and errors of all the layers in the network. The weight gradients from the full-precision network, the network with quantized activations, and the network with quantized errors are Gw, Gw + N∆A, and Gw + N∆E, respectively. Fig. 2 shows that using the misalignment angle results in not only a higher Spearman's correlation but also a more distinct shape for low training losses, making it a better metric than the error magnitude. For instance, using the error magnitude would predict the best format for the transformer incorrectly (see Fig. 8(e) in Appendix A.3). While obtaining the misalignment angle requires additional computations, its overhead is negligible since the part that requires the most time and computation is to obtain Gw, Gw + N∆E, and Gw + N∆A, which is still significantly lower than actual training. Using this method, we could determine the optimal format for a specific neural network model, dataset, and task very efficiently as we only need to
measure the misalignment angle without time-consuming network training. For experiments in Fig. 2, the amount of computation is reduced by 99.6%, and the reduction will be even larger for larger datasets and complex networks that need more epochs for training.
2.4 OPTIMAL FORMAT FOR DATA FLOW QUANTIZATION
Here we show that we could find an optimal format for training with quantized errors and activations using the proposed performance estimation method above. To find a format suitable for a wide range of models, we select six models with different architectures, layer types, and target tasks that are widely used in quantized training research for experiments: ResNet-18, ResNet-101, MobileNetV2 (Sandler et al., 2018), 2-layer LSTM, small transformer for translation on the IWSLT German to English dataset (Cettolo et al., 2014), and SSD-Lite (Liu et al., 2016) with MobileNetV2. We first measure misalignment angles for 166 8-bit formats. To verify the correlation between the training performance and the misalignment angles, we select four formats that exhibit low hardware implementation costs (INT8, FP152, FP143, and FP134) and train the networks using each format. While we may use different formats for activation and error, it requires a complicated datapath (Sun et al., 2019) and hence we only consider a single format for both variables. The experimental results in Fig. 3 demonstrate that the training performance is higher if both misalignment angles are small in all tasks and models, confirming that the proposed indicators could be used to determine the optimal numeric format. Fig. 3 suggests that FP134 and FP143 are the best candidates across all models. For hardware implementation, FP134 is the most optimal format due to its low implementation cost, which is discussed in Appendix A.7 in detail. Note that using the error magnitude leads to the same conclusion that FP134 is the best format for the target models. See Appendix A.3 for more details.
3 NETWORK QUANTIZATION
In quantized neural networks, the weight parameters are generally quantized in a way that minimizes the quantization error (Choi et al., 2019; Martinez et al., 2020). For instance, if x is quantized into a fixed-point format through s × round(xs ), a proper value is selected for the scaling factor s to minimize the quantization error. However, as the weights continue to change during training, we need to calculate s for every update, which could cause significant overhead. Therefore, prior studies on low-precision training suggest constraining the scaling factor to the power of 2 in the shared exponent (Köster et al., 2017) or the shared exponent bias (Fox et al., 2020). In this section,
we analyze the issues behind weight quantization and propose a new quantization scheme to mitigate those issues.
3.1 FLUCTUATION OF WEIGHT PARAMETERS
In typical low-precision training, a master copy of weight parameters is separately maintained in high precision, and those weights are updated based on the computed weight gradient. This highprecision weight is quantized into a low-precision format and used for the forward path computation during training. If the scaling factor s is constrained to 2n, the quantization threshold remains the same unless s is updated due to overflow or underutilization. If the optimal weight is located between two representable values of a data format, the quantized weight would fluctuate alternately between the two values in each update (Fig. 4(a)) even for a very small weight update, causing large fluctuations and undermining training performance.
3.2 HYSTERESIS QUANTIZATION
To mitigate the fluctuation issue above, we propose to introduce the concept of hysteresis to quantization. More specifically, we quantize each weight differently in a way that the quantized value tends to stay at its current value, effectively minimizing undesired oscillation between two values due to small weight updates. The equation below shows an example of the proposed quantization scheme.
Q_w^t = \begin{cases} \lfloor w^t \rfloor, & \text{if } w^t > Q_w^{t-1} \\ \lceil w^t \rceil, & \text{if } w^t < Q_w^{t-1} \end{cases} \qquad (6)
where w is the original value, Qw is its quantized value, t is the time step, and ⌊·⌋ and ⌈·⌉ denote rounding down and up to the nearest representable value. The proposed hysteresis quantization reduces fluctuation significantly, stabilizing the training process and allowing the network to reach global optima more efficiently. In Fig. 4(b), if the weight change ∆W is small, then a sufficient number of such changes must accumulate to flip Qw. Hence, the update frequency is now proportional to the weight gradient. This helps the network to learn better while suppressing fluctuations for small Gw values. Alternatively, we may mitigate weight quantization errors by adopting AdaRound (Nagel et al., 2020), which learns whether each weight should be rounded up or down to produce the same output as high-precision weights. However, whenever full-precision weights are updated, we need to re-train the learnable parameters (i.e., the quantization scheme of each weight), incurring a large overhead and undermining the benefit of low-precision training.
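A minimal sketch of Eq. 6 on a uniform grid (ours; Section 4.3 applies the same rule to a 4-bit logarithmic grid):

```python
import torch

def hysteresis_quantize(w, q_prev, step):
    """Eq. 6 on a uniform grid of pitch `step`: round toward the previous
    quantized value, so small updates cannot flip the code."""
    down = torch.floor(w / step) * step
    up = torch.ceil(w / step) * step
    q = torch.where(w > q_prev, down, up)
    return torch.where(w == q_prev, q_prev, q)  # unchanged weight keeps its code

w = torch.tensor([1.40])
q = torch.tensor([1.0])
for _ in range(3):
    w += 0.25                     # small updates accumulate without flipping q...
    q = hysteresis_quantize(w, q, step=1.0)
print(q)                          # tensor([2.]): flips only once w crosses 2.0
```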
3.3 ULTRA-LOW-PRECISION FORMAT FOR NETWORK QUANTIZATION
To verify the effectiveness of the proposed hysteresis quantization, we select 4-bit logarithmic representation as an ultra-low-precision format for weight parameters. This format has the same dynamic range as INT8 which is widely used for weight quantization, and is more hardware-efficient as multiplication is implemented only using simple shift operations. There have been attempts to use logarithmic weights in quantized neural networks (Lee et al., 2017; Elhoushi et al., 2021), but from-scratch training shows a significant performance degradation. In logarithmic data formats, the interval of quantization points is not uniform, making the effect of fluctuation more severe.
Fig. 5 shows experimental results of ResNet-18 training on ImageNet using 4-bit logarithmic weights. Note that we apply channel-wise quantization to the convolutional layers to compensate for the insufficient expression range and layer-wise quantization to the other types of layers. Further details on the experimental setup are provided in Appendix A.5.1. First, we measure how many quantized weights Qw change when the network performs one weight update using a mini-batch and average them over the first 100 updates in the 60th epoch. The experimental result displayed in Fig. 5(a) clearly shows that using hysteresis significantly reduces weight change frequency and stabilizes the training process. Fig. 5(b) compares the training performance of quantization schemes with and without hysteresis. Hysteresis quantization not only speeds up training but also achieves better results at the end of training. Note that hysteresis quantization is applicable to other data formats, and additional experimental results can be found in Appendix A.4.
4 EXPERIMENTAL RESULTS
4.1 LOW-PRECISION TRAINING SCHEME
For low-precision training, we need to quantize four variables: activation, error, weight, and weight gradient. In our experiments, we apply the quantized training scheme detailed in 2.2 to all of these variables, as depicted in Fig. 6. As in previous studies on 8-bit training, the inputs of GEMM are all quantized into 8 bits. Additional functions are applied to GEMM results in the forward and backward paths. ReLU, tanh, and sigmoid functions are performed directly on the input, whereas the input of BatchNorm is re-quantized.
4.2 8-BIT LOW-PRECISION TRAINING
In Section 2.4, we found that FP134 is the optimal format for low-precision training using the proposed performance prediction method. We measure the training performance of this format and compare it against other 8-bit data formats from recent studies by applying those formats to the training of various neural network models. More details on the experimental setup are provided in Appendix A.5. The performance of the proposed data format is summarized in Table 1. Overall, 8-bit training using FP134 achieves nearly the same performance as full-precision training on all models. Even in MobileNetV2, which is known to be sensitive to quantization due to the small number of parameters, only 0.3% degradation occurred. Sun et al. (2019) show that HFP8 also exhibits only 0.2% accuracy degradation in MobileNetV2 (71.81% vs. 71.61%), but they quantize BatchNorm input into 16 bits instead of 8 bits, roughly doubling the memory access and computational complexity. Additionally, since the forward and backward paths employ different data formats, HFP8 is actually implemented using 9-bit MAC units in hardware (Agrawal et al., 2021). Table 2 compares the training performance of various data formats for ResNet-18 training. The columns w, x, dw, dx, and acc refer to weight, activation, weight gradient, error, and GEMM accumulation, respectively. Our FP134 format exhibits no accuracy drop compared to full-precision training. HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) demonstrate similar performance, but they both use higher precision to represent BatchNorm inputs, and different formats are adopted in the forward and backward paths, necessitating complex computation units when implemented in hardware, as described above. In addition, BM8 assumes block-wise sharing of exponent bias, incurring additional overhead in memory access and data alignment. FP8-SEB (Park et al., 2021) addresses this issue by employing layer-wise exponent bias sharing and multi-way MAC units, but it results in a 0.7% accuracy drop for ResNet-18 training. In contrast, our data format shows no performance degradation, while deeply quantizing BatchNorm inputs into the same format and allowing for a simple datapath by using an identical data format in the forward and backward paths.
4.3 ULTRA-LOW-PRECISION TRAINING WITH 4-BIT LOGARITHMIC WEIGHTS
Elhoushi et al. (2021) recently demonstrated that 4-bit logarithmic weights could be used for network quantization. Fine-tuning of a pre-trained model only showed 0.2% accuracy degradation, but
from-scratch training of the same model resulted in a 4.5% accuracy drop in ResNet-18 training (Table 3). Similarly, our experiments show 2.1% lower accuracy when training ResNet-18 using 4-bit logarithmic weights and FP134 format for other variables. However, using hysteresis quantization greatly improves the training performance and reduces accuracy degradation to 0.2%. This is identical to the training performance achieved through fine-tuning a pre-trained model by Elhoushi et al. (2021), confirming that hysteresis quantization effectively solves the issue of sub-optimal solutions in from-scratch training. In addition, Table 4 demonstrates that hysteresis quantization improves the training performance in all target models. Note that we quantized all trainable weights except for the BatchNorm parameters into 4 bits in experiments; the training performance could be further improved by using higher precision for error-sensitive parts such as the first/last layers and residual connections.
5 CONCLUSION
In low-precision training, the dynamic range of a tensor is data-dependent, and hence an optimal data format depends on various factors such as model, dataset, and quantization scheme. We showed that the training performance of a specific data format for activation and error could be predicted by observing the errors introduced in the weight gradients. Based on this observation, we determined an optimal 8-bit format for low-precision training very efficiently without numerous training runs. The proposed FP134 format achieved similar or better accuracy compared to prior works, while allowing for efficient hardware implementation through quantizing BatchNorm inputs and using a unified data format in both forward and backward paths. In addition, we proposed the hysteresis quantization scheme for network quantization, which improves training performance by suppressing undesired fluctuations and stabilizing the training process. In ultra-low-precision training with 4-bit logarithmic weights, hysteresis quantization significantly improves training performance by mitigating sub-optimal solutions, closely matching the performance obtained through fine-tuning a pre-trained model. We expect that these two schemes can complement each other to enable practical low-precision training on various models and tasks.
ACKNOWLEDGMENTS
This work was supported by the National Research Foundation of Korea (Grant No. NRF2022R1C1C1006880). The EDA tool was supported by the IC Design Education Center.
A APPENDIX
A.1 VARIOUS FORMATS ANALYZED IN SECTION 2
In this paper, we made three assumptions on the quantization formats that were analyzed. Firstly, 1-bit is allocated as a sign bit, so only symmetric formats are allowed, and secondly, the number of valid bits with a large absolute numerical value must be greater than or equal to the number of valid bits with a small absolute numerical value. Lastly, the base does not exceed 2.
Considering the above assumptions, we provide a systematical approach for generating different quantization methods that were used for analysis in Section 2, in order to create quantization methods that have trade-offs in terms of dynamic range and the number of valid bits. The quantization method is expressed with the following items: i) a list of decreasing positive real numbers P that contains the interval points (Eq. 7) and ii) a non-increasing integer list L that accompanies the interval list, with each item representing the number of valid bits (Eq. 8). Here, s is shared exponent bias.
P = {2s+1, 2s, 2s−1, ..., 2s−K+1} where s ∈ N (7) L = {l0, l1, ..., lK−1} where lk ∈ N, i < j ⇒ li ≥ lj (8)
The quantization points Q are generated in each of the intervals that are sliced with 2lk−1 evenly distributed datapoints. If the interval is {2s+1, 2s}, the quantization point Q can be expressed by Eq. 9.
Q = {2s, 2s(1 + 1 2lk−1 ), 2s(1 + 2 2lk−1 ), ..., 2s(1 + 2lk−1 − 1 2lk−1 )} (9)
Notice that L for an α-bit quantization must satisfy
2α−1 = 1 + K−1∑ k=0 2lk−1 (10)
Since the format is symmetric, only half of the data points are assigned to positive numbers, so the exponent in Eq. 10 should be α− 1 instead of α. The reason for adding 1 is to include a zero value. For example, when shared exponent bias is -1, an 8-bit fixed-point quantization would be expressed as follows:
P = {20, 2−1, 2−2, 2−3, 2−4, 2−5, 2−6, 2−7} (11) L = {7, 6, 5, 4, 3, 2, 1} (12)
The first interval from 1 to 0.5 would be evenly sliced by 27−1 datapoints, the next interval from 0.5 to 0.25 with 26−1, etc. Various cases are shown in Fig. 7, with P plotted on the x-axis and L plotted on the y-axis. Since P represents the range of values due to shared exponent bias that is independent of the data format, L can represent all of the various data formats we consider in this paper.
When selecting 8-bit formats, we chose the formats so that the intervals with less than 3 valid bits do not appear for more than two digits to reduce the search space, as they have an unnecessarily large dynamic range. Thus, formats such as [7,6,5,4,2,2,2,1] were excluded from the search space. Considering all of the generation rules, we selected 166 distinct 8-bit formats with different dynamic range and valid bits from [7,6,5,4,3,2,1] to [3,3,3,...,3,2,1]. After the number of valid bits for an 8-bit format is selected, 1 or 2 is subtracted from each value to create a corresponding 7-bit and 6-bit formats. For example, in the case of [6,5,5,5,5,4,4,4,3,2,1] 8-bit format, the 7-bit corresponding format is [5,4,4,4,4,3,3,3,2,1] and 6-bit corresponding format is [4,3,3,3,3,2,2,2,1]. From the generated 166 8-bit formats, 7-bit and 6-bit formats were also generated using this rule.
A.2 SOFTWARE IMPLEMENTATION DETAILS
To support quantized training for various formats, custom C++ and CUDA codes to emulate quantized data were written. Our custom C++ and CUDA extension code could perform quantizationrelated functions through utilizing the Python APIs in PyTorch for extendable research while maintaining high performance. We emulate the quantized value using custom code in the part that needs quantization throughout the network, and PyTorch built-in functions are used for computation kernels such as convolution and matrix multiplication. We created a package named lptorch, short for low precision PyTorch, and the code can be found in the supplementary material.
A.3 ANGLE VS. MAGNITUDE TO PREDICT PERFORMANCE
In addition to the misalignment angles of Gw (∠(Gw, Gw + N∆A) and ∠(Gw, Gw + N∆E)), as defined in Section 2.3, we used the magnitude of noise (|N∆A| and |N∆E |) in order to predict the final trained performance, and the results are shown in Fig. 8. Fig. 3 and Fig. 8 show that both the error magnitude and the misalignment angle are good metrics for determining optimal data format. For the six target models, both metrics suggest FP134 as the best format. However, the misalignment angle still better captures the training performance. For instance, in Fig. 8(e), although FP134 shows smaller noise magnitude, the actual training loss is smaller for FP143. Similarly, in Fig. 8(b), (c) and (f), although INT8 failed and FP152 succeeded in training, the absolute value of noise did not indicate a clear superior of the two formats. Based on these observations, we conclude that the misalignment angles are more suitable for predicting training performance compared against using the absolute value of noise.
A.4 HYSTERESIS QUANTIZATION WITH INTEGER WEIGHTS
In addition to 4-bit logarithmic weights, we also tested the hysteresis quantization scheme on a lowprecision integer format (INT4) that uses uniform quantization. The results are shown in Table 5. Experimental results show that using hysteresis improves the performance in most cases. In addition, in MobileNetV2 training with INT4 weights, training initially failed, but using hysteresis enables reliable training, which suggests that hysteresis quantization not only helps the network to reach the optimal point but also prevents divergence in an unwanted direction during the training process.
However, it is interesting to see that the hysteresis quantization is less effective on the LSTM model for the INT4 format. We suspect that this is due to the weight distribution characteristics of the LSTM model. As shown in Fig. 9, most of the weights have a relatively large magnitude in the
LSTM model when normalized, contrary to ResNet-18 in which the weights are more evenly distributed. In logarithmic formats, the relative amount of quantization error is similar for all values. In contrast, the relative amount of quantization error is smaller for large values in uniform quantization. Therefore, the weight parameters of LSTM are more severely affected by fluctuation in logarithmic formats, making our hysteresis quantization scheme more effective in those formats compared to uniform quantization.
A.5.1 RESNET-18 (IMAGENET)
A.5 EXPERIMENTAL DETAILS
We conducted ImageNet experiments using SGD with a momentum of 0.9 for 90 epochs with a batch size of 256 images and an initial learning rate of 0.1 which is decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-18 architecture from the official PyTorch implementation1. Fig. 10 shows Top-1 training & validation accuracy graphs. Observation of the training graph indicates that all of the results are close to the baseline within 0.2% with the exception of FP130 without hysteresis quantization.
A.5.2 RESNET-101 (IMAGENET)
We trained ResNet-101 by applying the same training method as ResNet-18. We conducted ImageNet experiments using SGD with a momentum of 0.9 for 90 epochs with a batch size of 256 images and an initial learning rate of 0.1 which is decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-101 architecture from the official PyTorch implementation2. Fig. 11 shows Top-1 training & validation accuracy graphs. Observation of the training graph indicates that all of the results are close to the baseline with less than 0.3% performance drop except for FP130 without hysteresis quantization.
A.5.3 MOBILENETV2 (IMAGENET)
We conducted ImageNet experiments using SGD with a momentum of 0.9 for 270 epochs with a batch size of 256 images and cosine annealing with an initial learning rate of 0.05. We used the MobileNetV2 architecture from the official PyTorch implementation3. Fig. 12 shows Top1 training & validation accuracy graphs. Observation of the training graph indicates that FP130
1https://github.com/pytorch/examples/tree/master/imagenet 2https://github.com/pytorch/examples/tree/master/imagenet 3https://github.com/pytorch/examples/tree/master/imagenet
without hysteresis leads to very unstable fluctuations throughout the training. On the other hand, in FP130 with hysteresis, training is less susceptible to fluctuations and follows the baseline (FP32) training closely until the learning rate decreases toward the latter part of learning, where both FP130 with hysteresis and FP134 show some degradation from the baseline. This is seen as a limitation due to the low precision of each format.
A.5.4 2-LAYER LSTM (PTB)
We adopted the 2-layer Long Short Term Memory (LSTM) network from PyTorch Examples4 for language modeling on the Penn Treebank dataset (Marcus et al., 1993). We ran experiments in batches of 20 sentences with an initial learning rate of 20 which is decayed by a factor of 4 at epoch 11, 16, 26, 31 and 37. The embedding and hidden dimensions are 650 and the sequence length is 35. Fig. 13 shows training & validation perplexity.
A.5.5 TRANSFORMER MODEL (IWLST)
We adopted the Transformer Base model from the FairSeq5 repository on the IWSLT’14 German to English translation task. We used Adam optimizer and default training parameters found in the repository and trained from scratch for 25 epochs. BLEU scores were calculated using the script from the repository.
A.5.6 MOBILENETV2 + SSDLITE (VOC)
We adopted a PyTorch implementation of SSDLite from the online repository6. The base network is MobileNetV2 which was pretrained with each format in Appendix A.5.3. The entire network is trained on VOC2012 and VOC2007 trainval datasets and evaluated on VOC2007 validation dataset. We used SGD with a momentum of 0.9 for 200 epochs in batches of 32 images and cosine annealing with an initial learning rate of 0.01. Fig. 14 shows validation loss at every 5 epochs. Even in this experiment, in the case of FP130 without hysteresis the loss fluctuates significantly, whereas in FP130 with hysteresis learning proceeds much more stably. FP134 showed similar results to the baseline regardless of hysteresis quantization.
A.6 MODEL QUANTIZATION METHODS
We quantized GEMM input and batchnorm input in all quantized training experiments. Among the six models used in the experiment, the quantization details for three representative structures are shown in the Fig. 15. In each structure of figure, inputs such as x, c, h, V, K, and Q are also all quantized in 8 bits.
4https://github.com/pytorch/examples/tree/master/word language model 5https://github.com/pytorch/fairseq 6https://github.com/qfgaohao/pytorch-ssd
A.7 HARDWARE EVALUATION
For hardware implementation cost comparisons, we implemented a conventional MAC unit and a multi-way MAC unit with integer-based accumulation (Tambe et al., 2020; Park et al., 2021) that support data formats presented in Section 4.2. For accumulation, we use FP169 with chunk-based accumulation (Wang et al., 2018). Experimental results in Table 6 show that FP134 exhibits lower
Structure FP134 FP1431 HFP82 BM83 Flex16+54 FP134 FP1431 HFP82 BM83 Flex16+54
Conventional 1355 1320 1308 1460 3800 122 116 106 141 537
Multi-way 2-input 1335 1480 2342 1865 2268 178 178 283 258 371 4-input 888 1034 1615 1296 1885 120 135 205 184 351 8-input 678 836 1343 1074 1672 97 123 194 168 342 16-input 571 698 1065 957 1540 95 114 170 155 329 32-input 511 668 994 898 1485 87 111 170 152 326 64-input 509 638 955 856 1450 88 110 172 149 326 1 Park et al. (2021) 2 Sun et al. (2019) 3 Fox et al. (2020) 4 Köster et al. (2017)
cost than FP143 and other formats in previous studies. Note that HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) employ different formats for activation and error. Therefore, they need to be implemented in FP153 and FP145 to support all operations with a single MAC unit (Agrawal et al., 2021). Since Flex16+5 (Köster et al., 2017) requires 16-bit multiplication, its cost is significantly higher than other 8-bit formats.
A conventional MAC unit consists of a multiplier and an accumulator. In the multiplier, the exponents of two input operands are summed while their mantissas are multiplied. The multiplication part is more complex, and hence it dominates the area of the multiplier. As a result, the size of the multiplier is larger when more bits are allocated to mantissa. In the accumulator, a floating-point adder adds the multiplication results to a partial sum in FP169. The adder is decomposed into a shifter that aligns the mantissa by the exponent difference, an integer adder that sums aligned mantissas, and a quantization unit that converts the result back to FP169. Since the result is re-quantized into FP169, the addition operation of aligned mantissas does not need to be lossless. FP169 format has a 10-bit mantissa including one hidden bit. We only need to accurately calculate higher 10 bits, which necessitates a 12-bit adder considering rounding. Shifting by more than 12 bits is not needed even if the result of the multiplier has a larger exponent range. Therefore, the shifter, adder, and quantization unit, which are the components of the accumulator, are not affected by the input format. There are minor differences such as an adder that calculates the difference between exponents and a shifter with a different bit width of the input, but their costs are ignorable.
Contrarily, a multi-way MAC consists of a multiplier, a shifter for alignment, an adder tree, a normalization unit, and a final accumulator. The multiplier and the final accumulator are identical to those of the conventional MAC. However, since only one normalization unit and one final accumulator are shared across multiple inputs, their implementation cost becomes insignificant for a larger number of inputs. The shifter for alignment converts the multiplier output to an integer format since the cost of integer addition is lower than that of floating-point addition. Then, the adder tree sums those integer values, and the normalization unit converts the result back to a floating-point format. The cost of the shifter for alignment, adder tree, and normalization unit is all determined by the integer bit width, and the larger the exponent range of the input operands, the larger the required bit width, as shown in Fig. 16. In FP134, FP143, and FP152, the minimum integer bit widths are 23, 37, and 67 bits, respectively. Since the bit width is sufficiently large, the cost difference of these units exceeds the cost difference of the multiplier. Therefore, the cost of a multi-way MAC increases with the number of exponent bits.
When designing a neural network training processor, some parts of the hardware (e.g., batch normalization, non-linear activation functions such as tanh and sigmoid, and the softmax function) are typically implemented with higher precision to avoid a performance drop. Hence, we need to consider data format conversion overheads when comparing different formats.
Table 7: Cost of data format conversion units (two column groups, as in Table 6).

Direction  | FP134 | FP143 | HFP8¹ | BM8² | Flex16+5³ | FP134 | FP143 | HFP8¹ | BM8² | Flex16+5³
To FP32    |  155  |  141  |  145  | 176  |    330    |  28   |  26   |  27   |  30  |    53
From FP32  |  139  |  144  |  152  | 162  |    427    |  19   |  20   |  22   |  23  |    55

¹ Sun et al. (2019)  ² Fox et al. (2020)  ³ Köster et al. (2017)
If we consider various 8-bit data formats with different representation methods, as we did in Table 6, and assume that computations other than MAC operations are implemented in full precision, the processing architecture (except the MAC units) will be identical for all formats. In addition, the on/off-chip memory space, control logic, and on-chip interconnects will remain the same. The only differences are the low-precision MAC units and the conversion units between full-precision and low-precision formats. However, the cost of conversion between low-precision and high-precision floating-point formats is typically very low and does not vary much with the low-precision format. For low-precision to high-precision conversion, we only have to add a bias-correction term to the exponent and append zeros to the mantissa. For high-precision to low-precision conversion, we need to add a bias-correction term to the exponent, clamp overflowed values to the maximum, and round off the mantissa. The cost is very low compared to a MAC operation, and the cost difference between different low-precision formats is negligible. We have synthesized the conversion units for the different formats, and their costs are presented in Table 7. The experimental results confirm that the overhead of data format conversion is significantly lower than that of MAC operations. In addition, all formats except Flexpoint exhibit similar conversion costs.
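To make the low cost of these conversions concrete, the sketch below performs both directions on (sign, exponent, mantissa) bit fields. The FP134 exponent bias and the simplified underflow handling are assumptions for illustration; they do not describe the synthesized units.

FP32_BIAS = 127
FP134_BIAS = 3        # assumed value; in practice set by the shared exponent bias
MAN_SHIFT = 23 - 4    # FP32 has a 23-bit mantissa field, FP134 a 4-bit one

def fp134_to_fp32(sign, exp, man):
    # Low -> high precision: bias-correct the exponent, append zeros to the mantissa.
    return sign, exp - FP134_BIAS + FP32_BIAS, man << MAN_SHIFT

def fp32_to_fp134(sign, exp, man):
    # High -> low precision: bias-correct, round off the mantissa, clamp overflow.
    man8 = (man + (1 << (MAN_SHIFT - 1))) >> MAN_SHIFT   # round to nearest
    exp8 = exp - FP32_BIAS + FP134_BIAS
    if man8 >> 4:                # rounding carried out of the 4-bit mantissa field
        man8, exp8 = 0, exp8 + 1
    if exp8 > 7:                 # clamp overflowed values to the format maximum
        exp8, man8 = 7, 15
    return sign, max(exp8, 0), man8   # underflow handling simplified to flushing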
In addition to the synthesis results for ASIC implementation in Table 6, we measured the hardware overhead of MAC units for different data formats on an FPGA. Table 8 shows the synthesis results on a Xilinx Artix-7 FPGA (XC7A100TCSG324-1). These MAC units do not need block RAMs (BRAMs), and we used a compiler directive to avoid using DSP modules for a fair comparison. Table 8 shows a similar trend to Table 6: the cost of one MAC gradually decreases as the number of inputs increases in the multi-way MAC. Also, due to the integer-based addition in the adder tree, FP134, which has the smallest dynamic range, exhibits a lower cost than the other formats.
A.8 EFFECT OF QUANTIZATION NOISE ON DATA FLOW QUANTIZATION
Table 9 shows the training results when both activation and error are quantized in various data formats. If an appropriate amount of noise is introduced in the network during training, it will increase the training loss but reduce the validation loss, suggesting that the model has been improved due to the regularization effect. However, if the noise level continues to increase, the model’s performance will start to degrade at some point. For instance, when MobileNetV2 is quantized in FP134, its performance is improved through the regularization effect since the training loss increases while
the validation loss decreases compared to FP32. In most other cases, however, both the training and validation losses increase under quantization, resulting in lower accuracy. This suggests that using a very low precision data format already introduces a large amount of noise in the network, incurring performance degradation. Hence, it is necessary to reduce the error introduced in the network to improve the training performance in low-precision training. | 1. What is the focus of the paper regarding quantization methods?
2. What are the strengths of the proposed approach, particularly in terms of error angle estimation and hardware overhead?
3. What are the weaknesses of the paper, especially regarding the magnitude of the error and the choice of FP134?
4. Do you have any concerns about the applicability of the suggested NPU design methodology and the limitation of FP134?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a method to find an optimal quantization format based on error angle estimation and hardware overhead. The authors also present a hysteresis-based quantization method to reduce fluctuation of exponent values such that from-scratch training using only 4-bit weights results in a negligible amount of accuracy degradation for ResNet-18 on ImageNet. Experimental results are provided for ResNet-18, MobileNetV2, a 2-layer LSTM, a Transformer, and MobileNetV2+SSDLite. For 8-bit quantization, FP134 is chosen, and this quantization is also applied to BatchNorm layers to reduce memory consumption.
Review
The introduction clearly discusses the difference between quantized models (obtained by fine-tuning a pre-trained model) and from-scratch training for quantization (which speeds up training itself), and the direction of research investigating why from-scratch training methods show increased degradation in accuracy is certainly important.
Unfortunately, this reviewer finds the following serious concerns in this paper.
(1) The authors argue that measuring the misalignment angle is better than measuring the magnitude of the error. But in Section 2.3, it is not clear why noise in the opposite direction is harmful. Similar to stochastic variation, noise on gradients can be beneficial if the amount of error is right, such that regularization effects are obtained. Moreover, as shown in Figure 2, reducing the magnitude of the error also tends to track the change in training loss. Is it difficult or impossible to identify FP134 as the optimal format with magnitude-based error measurement? Supporting data to validate the claim (that the angle is better than the magnitude) needs to be provided.
(2) This paper seems to suggest a design methodology for an NPU in the form of an ASIC. The authors then need to show that FP134 can be applied to a wide range of models; such a particular format might show significant accuracy degradation as model size increases. What would be the limit of such a format? Moreover, the authors show the superiority of FP134 mainly using the ResNet-18 model, with only a few additional experimental results on simple Transformers; such a strong argument for choosing FP134 as the optimal format needs to state what kinds of limitations apply. For example, schemes in Table 3 might not be good for ResNet-18 but may be good for ResNet-101. This reviewer is not sure whether FP134 is a format customized for small models such as ResNet-18.
(3) The quantization technique using hysteresis is interesting. However, more detailed discussion and theory on why hysteresis is important for 4-bit logarithmic weights need to be included. Is hysteresis generally helpful for other quantization formats and larger models as well?
ICLR | Title
Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization
Abstract
As the complexity and size of deep neural networks continue to increase, low-precision training has been extensively studied in the last few years to reduce hardware overhead. Training performance is largely affected by the numeric formats representing different values in low-precision training, but finding an optimal format typically requires numerous training runs, which is a very time-consuming process. In this paper, we propose a method to efficiently find an optimal format for activations and errors without actual training. We employ this method to determine an 8-bit format suitable for training various models. In addition, we propose hysteresis quantization to suppress undesired fluctuation in quantized weights during training. This scheme enables deeply quantized training using 4-bit weights, exhibiting only 0.2% degradation for ResNet-18 trained on ImageNet.
1 INTRODUCTION
Deep neural networks have been used in various fields such as vision, audio, natural language processing, and reinforcement learning. As larger and more complex neural networks are adopted, the energy and time consumed for training have become a critical issue in hardware implementation. Using low-bit representations in training significantly reduces hardware overhead and memory footprint; hence, neural network training with limited precision has been extensively studied recently. For instance, 16-bit formats are already adopted in commercial devices such as FP16 (IEEE, 2019) in Nvidia GPUs and bfloat16 (Kalamkar et al., 2019) in Google TPU (Wang et al., 2019). Also, Köster et al. (2017) suggested a new data format using a shared exponent suitable for low-precision training. Recently, it has been demonstrated that even 8-bit formats could be adopted in deep neural network training with reasonable accuracy (Sun et al., 2019; Fox et al., 2020). However, there are various issues in realizing low-precision training in practical applications as detailed below.
Optimal data format for low-precision training: Training performance is susceptible to the data format we use to represent variables in the network. When a value is represented using a floating-point format with a fixed number of bits, there is a trade-off between dynamic range and precision. For instance, allocating more bits to the exponent part in a floating-point format enlarges the dynamic range but lowers precision due to fewer bits in the mantissa part. Recent studies on 8-bit training suggest various ways to reduce the dynamic range required for number representation to enhance representation precision. Early work on 8-bit training (Wang et al., 2018) adopts a 5-bit exponent to represent different variables using a single format, but Sun et al. (2019) examine the statistics of each variable and optimize the numeric formats separately. Specifically, the values used in the forward path (weight and activation) have a relatively narrow dynamic range, and only 4 bits are allocated to the exponent. Fox et al. (2020) propose to divide data into smaller blocks and assign a shared exponent bias to each block. Since the values in a block tend to exhibit similar statistics, the forward (weight and activation) and backward (error) paths could be represented using only 2-bit and 4-bit exponents, respectively. Note that the shared exponent bias is effectively identical to a scaling factor: if a variable has a value of m · 2^e and a shared exponent bias of b, then its actual value is m · 2^(e+b), which is equivalent to applying a scaling factor of 2^b. However, these approaches are difficult to generalize since we should empirically decide numeric formats for each task, neural
network structure, and quantization scheme (Fig. 1). Furthermore, analyzing the statistics of each variable is not enough to determine an optimal format. Their distributions often have a long tail, and hence the dynamic range of the numeric format should be selected experimentally through many trial-and-error runs in actual training.
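To make the role of the shared exponent bias concrete, the sketch below fake-quantizes a layer to a toy floating-point grid whose bias is fitted to the largest magnitude in the layer (an implicit scaling factor of 2^bias). The format details and names are illustrative, not taken from the cited implementations.

import numpy as np

def quantize_layer(x, exp_bits=3, man_bits=4):
    # Shared exponent bias fitted to the layer's largest magnitude.
    bias = int(np.floor(np.log2(np.abs(x).max() + 1e-30)))
    max_exp, min_exp = bias, bias - (2 ** exp_bits - 1)   # representable binades
    mant, exp = np.frexp(x)                  # x = mant * 2**exp, |mant| in [0.5, 1)
    exp = np.clip(exp - 1, min_exp, max_exp) # exponent for |mant| in [1, 2)
    step = np.ldexp(1.0, exp - man_bits)     # per-value quantization step
    q = np.round(x / step) * step
    return np.where(np.abs(x) < np.ldexp(1.0, min_exp - 1), 0.0, q)

x = np.random.randn(1024).astype(np.float32)
print(np.abs(x - quantize_layer(x)).max())   # error shrinks as man_bits grows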
Performance degradation in from-scratch training: Previous studies on quantized models show that a model could achieve comparable accuracy to full-precision models even using 1- or 2-bit weights (Choi et al., 2019; Martinez et al., 2020) through fine-tuning a pre-trained model. However, in low-precision training where a neural network is trained from scratch using low-precision values and computations, the trained model typically shows a noticeable accuracy drop (Elhoushi et al., 2021). Fig. 1(b) shows the Top-1 validation accuracy of ResNet-18 (He et al., 2016) trained on ImageNet (Deng et al., 2009) for different training schemes. The weights are quantized into a 4-bit base-2 logarithmic format. From-scratch training of the model with quantized weights results in a 2.1% accuracy drop, whereas only 1.0% degradation is observed if we fine-tune a pre-trained model. This suggests that even though a better solution (i.e., a set of parameters) exists for a given format, it cannot be reached through from-scratch training.
To formalize the issues above, here we divide quantization in low-precision training into two types: network quantization and data flow quantization. Network quantization refers to the quantization of the neural network model. An example of this type of quantization is weight quantization. In network quantization, we need to reduce the performance difference between from-scratch training and fine-tuning (Yang et al., 2019b). On the other hand, data flow quantization refers to the on-the-fly quantization that occurs when data propagate through the network in low-precision training. Examples include activation, error, and weight gradient quantizations. Additional errors are introduced in weight update computation due to this type of quantization, which leads to performance degradation. Hence, we need to find an optimal format to minimize accuracy drop due to computation errors in data flow quantization.
In this paper, we present a systematic approach to implementing low-precision training on various models and tasks. First, we present a method to efficiently find an optimal format for data flow quantization. In addition, we introduce a hysteresis quantization technique, a new quantization method for network quantization that can mitigate the issues of from-scratch training. Our main contributions are:
• We present a method that can predict the training performance of various numeric formats for data flow quantization. This method allows us to determine an appropriate data format for different neural network structures, datasets, and tasks efficiently.
• Using the method above, we propose an optimal 8-bit format suitable for low-precision training of various models, which enables quantization of BatchNorm layer input and improves hardware efficiency with minimal performance degradation.
• We propose a new quantization scheme that utilizes the hysteresis effect to improve the performance of from-scratch training in network quantization. This scheme enables ultra-low-precision training using 4-bit logarithmic weights.
2 DATA FLOW QUANTIZATION
2.1 NUMERIC FORMATS
There are many numeric formats that can be constructed with n bits depending on how much dynamic range is required and how many valid bits are used for representing a value. For example, using 8 bits we could implement 8-bit fixed point integer format, 8-bit floating-point formats such as FP152, FP143, and FP125 (FP1xy represents 1 sign bit, x exponent bits, and y mantissa bits), 8-bit posit format (Gustafson & Yonemoto, 2017), and 8-bit float-fix format (Han et al., 2019). Since the diversity of formats that could be formulated using n bits is nearly unlimited, here we assume some constraints to limit candidates while still including widely used formats such as fixed-point and floating-point formats as below:
• The MSB (Most Significant Bit) is used as a sign bit and other bits represent magnitude. Accordingly, only symmetric formats that have identical representable ranges for positive and negative numbers are considered. Two’s complement representation is slightly asymmetric since it can represent one more negative value, but it does not incur a significant difference.
• The number of valid bits of a larger value is greater than or equal to the number of valid bits of a smaller value. The valid bits stand for significant digits in binary representation.
• The ratio between consecutive representable values does not exceed 2. For example, the base-4 logarithmic format is excluded.
We could obtain 166 8-bit formats that meet these constraints. Then, we reduce the number of valid bits by 1 and 2 in each format to obtain 7- and 6-bit formats, resulting in 498 formats in total. More information on the numeric formats considered in our experiments is provided in Appendix A.1.
2.2 ACTIVATION AND ERROR QUANTIZATION
In a neural network consisting of n layers, the training process is described by
A_{l+1} = f_l(W_l^t, A_l)    (1)
E_l = g_l(W_l^t, E_{l+1})    (2)
G_{w_l} = h_l(A_l, E_{l+1})    (3)
W_l^{t+1} = o(G_{w_l}, W_l^t)    (4)
where A, E, W, and G_w are activation, error, weight, and weight gradient, respectively; f, g, h, and o are the forward, backward, gradient, and update functions; and l and t represent the layer number and time step. We follow the quantized training scheme suggested by Fox et al. (2020), but with the following modifications to reduce hardware implementation costs. A and E are quantized not only for the GEMM input but also for the BatchNorm layer input. A BatchNorm layer normalizes its input using the mean and variance of each channel, but these values are obtained only after observing all the inputs from the previous layer, necessitating that all input values are temporarily stored in memory. Therefore, quantizing the BatchNorm layer's input significantly reduces the memory footprint and memory access overhead. Additionally, the scope of sharing the exponent bias is extended to a layer (A_l and E_l) to avoid the overhead of aligning partial sums from different blocks in block-wise exponent sharing. Finally, instead of determining the shared exponent bias by analyzing all values in the layer, we conservatively update it by detecting overflow and underutilization that occurred in the previous mini-batch.
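A minimal sketch of this conservative update rule is given below; the two trigger flags are assumptions standing in for whatever overflow and underutilization statistics a real implementation collects during the previous mini-batch.

def update_shared_bias(bias, overflowed, top_binade_used):
    # Grow the representable range after an overflow; shrink it (gaining
    # precision) when the largest representable binade went unused.
    if overflowed:
        return bias + 1
    if not top_binade_used:
        return bias - 1
    return bias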
2.3 INDICATORS OF TRAINING PERFORMANCE
Effect of quantized error: Quantizing the error E in the backward path is independent of how the forward path behaves, since the loss surface of the model does not change. Therefore, the optimal W that the network needs to reach through training remains the same regardless of the error quantization scheme. However, when the error is quantized, a quantization error ∆E is introduced in E, which incurs a noise N∆E in Gw through the gradient function in Eq. 3 and potentially updates each weight in the wrong direction. While some amount of noise may improve the training performance through regularization, using low-precision formats already introduces a large noise in the network, incurring performance degradation (see Appendix A.8). Therefore, we suggest that the weight gradient error N∆E could be a good indicator of degradation in training performance. One way to implement this is to predict performance using the magnitude of N∆E; however, if the noise is in the same direction as Gw, it would only change the amount of each update and have a less severe effect. Instead, we could measure the misalignment between Gw + N∆E and Gw for performance prediction. The misalignment between two vectors is estimated by
∠(A, B) = cos⁻¹( (A · B) / (‖A‖₂ · ‖B‖₂) )    (5)
Then, the change in the update direction due to N∆E is ∠(Gw, Gw + N∆E). We can expect that the smaller ∠(Gw, Gw + N∆E), the better the training performance.
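In PyTorch, this indicator reduces to a few lines; the sketch below assumes the two weight gradients have already been obtained from a full-precision pass and a pass with quantization enabled.

import torch

def misalignment_angle(g, g_noisy):
    # Eq. 5 applied to flattened weight gradients; a smaller angle is better.
    a, b = g.flatten(), g_noisy.flatten()
    cos = torch.dot(a, b) / (a.norm() * b.norm() + 1e-12)
    return torch.acos(cos.clamp(-1.0, 1.0))  # angle in radians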
Effect of quantized activation: Contrary to error quantization, activation quantization affects the way the forward path operates, and the loss surface of the model changes. Hence, the global optima of weight parameters shift, where the amount of shift would be proportional to the quantization noise. The displacement of global optima can be indirectly estimated using the direction of the weight gradients Gw. If the angle ∠(Gw, Gw + N∆A) is small, the deviation of the global optima is expected to be small as well, suggesting a better training performance.
In the discussions above, we assumed that the angles ∠(Gw, Gw + N∆E) and ∠(Gw, Gw + N∆A) could be used to predict training performance. We experimentally verify this by comparing the training performance of different numeric formats. For the 498 numeric formats in 6 to 8 bits, we compare the loss obtained from training with the proposed performance indicators (Fig. 2). Training loss is obtained by training ResNet-18 on the CIFAR-10 dataset using SGD with a momentum of 0.9 for 60 epochs. The batch size is 128 images and the initial learning rate is 0.1, which is decayed by a cosine scheduler. We average angles from 100 mini-batches after quantizing a pre-trained model. Note that we use the Gw of the first layer since it reflects quantization errors that occur in the activations and errors of all the layers in the network. The weight gradients from the full-precision network, the network with quantized activations, and the network with quantized errors are Gw, Gw + N∆A, and Gw + N∆E, respectively. Fig. 2 shows that using the misalignment angle results in not only a higher Spearman's correlation but also a more distinct shape for low training losses, making it a better metric than the error magnitude. For instance, using the error magnitude would incorrectly predict the best format for the transformer (see Fig. 8(e) in Appendix A.3). While obtaining the misalignment angle requires additional computations, its overhead is negligible since the part that requires the most time and computation is obtaining Gw, Gw + N∆E, and Gw + N∆A, which is still significantly cheaper than actual training. Using this method, we could determine the optimal format for a specific neural network model, dataset, and task very efficiently, as we only need to
measure the misalignment angle without time-consuming network training. For experiments in Fig. 2, the amount of computation is reduced by 99.6%, and the reduction will be even larger for larger datasets and complex networks that need more epochs for training.
2.4 OPTIMAL FORMAT FOR DATA FLOW QUANTIZATION
Here we show that we can find an optimal format for training with quantized errors and activations using the proposed performance estimation method. To find a format suitable for a wide range of models, we select for our experiments six models with different architectures, layer types, and target tasks that are widely used in quantized training research: ResNet-18, ResNet-101, MobileNetV2 (Sandler et al., 2018), a 2-layer LSTM, a small transformer for translation on the IWSLT German-to-English dataset (Cettolo et al., 2014), and SSD-Lite (Liu et al., 2016) with MobileNetV2. We first measure misalignment angles for the 166 8-bit formats. To verify the correlation between the training performance and the misalignment angles, we select four formats that exhibit low hardware implementation costs (INT8, FP152, FP143, and FP134) and train the networks using each format. While we may use different formats for activation and error, doing so requires a complicated datapath (Sun et al., 2019), and hence we only consider a single format for both variables. The experimental results in Fig. 3 demonstrate that the training performance is higher if both misalignment angles are small in all tasks and models, confirming that the proposed indicators can be used to determine the optimal numeric format. Fig. 3 suggests that FP134 and FP143 are the best candidates across all models. For hardware implementation, FP134 is the most optimal format due to its low implementation cost, which is discussed in detail in Appendix A.7. Note that using the error magnitude leads to the same conclusion that FP134 is the best format for the target models. See Appendix A.3 for more details.
3 NETWORK QUANTIZATION
In quantized neural networks, the weight parameters are generally quantized in a way that minimizes the quantization error (Choi et al., 2019; Martinez et al., 2020). For instance, if x is quantized into a fixed-point format through s × round(x/s), a proper value is selected for the scaling factor s to minimize the quantization error. However, as the weights continue to change during training, we need to recalculate s for every update, which could cause significant overhead. Therefore, prior studies on low-precision training suggest constraining the scaling factor to a power of 2 via a shared exponent (Köster et al., 2017) or a shared exponent bias (Fox et al., 2020). In this section,
we analyze the issues behind weight quantization and propose a new quantization scheme to mitigate those issues.
3.1 FLUCTUATION OF WEIGHT PARAMETERS
In typical low-precision training, a master copy of weight parameters is separately maintained in high precision, and those weights are updated based on the computed weight gradient. This highprecision weight is quantized into a low-precision format and used for the forward path computation during training. If the scaling factor s is constrained to 2n, the quantization threshold remains the same unless s is updated due to overflow or underutilization. If the optimal weight is located between two representable values of a data format, the quantized weight would fluctuate alternately between the two values in each update (Fig. 4(a)) even for a very small weight update, causing large fluctuations and undermining training performance.
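This fluctuation is easy to reproduce. In the toy example below (our construction, not an experiment from the paper), a master weight hovering around the midpoint between two grid points flips its quantized value on every tiny update:

step = 2 ** -3            # power-of-2 quantization step (scale fixed at 2**n)
w = 0.1875 - 5e-5         # just below the midpoint of grid points 0.125 and 0.25
prev, flips = None, 0
for t in range(1000):
    w += 1e-4 if t % 2 == 0 else -1e-4   # tiny oscillating updates
    q = round(w / step) * step           # round-to-nearest quantization
    flips += int(prev is not None and q != prev)
    prev = q
print(flips)  # 999: the quantized weight flips between 0.125 and 0.25 every update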
3.2 HYSTERESIS QUANTIZATION
To mitigate the fluctuation issue above, we propose to introduce the concept of hysteresis to quantization. More specifically, we quantize each weight differently in a way that the quantized value tends to stay at its current value, effectively minimizing undesired oscillation between two values due to small weight updates. The equation below shows an example of the proposed quantization scheme.
Q_w^t = { ⌊w^t⌋,  if w^t > Q_w^{t−1}
        { ⌈w^t⌉,  if w^t < Q_w^{t−1}    (6)
where w is the original value, Qw is its quantized value, and t is the time step. The proposed hysteresis quantization reduces fluctuation significantly, stabilizing the training process and allowing the network to reach global optima more efficiently. In Fig. 4(b), if the weight change ∆W is small, a sufficient number of those changes must accumulate to flip Qw. Hence, the update frequency is now proportional to the weight gradient. This helps the network learn better while suppressing fluctuations for small Gw values. Alternatively, we could mitigate weight quantization errors by adopting AdaRound (Nagel et al., 2020), which learns whether each weight should be rounded up or down to produce the same output as high-precision weights. However, whenever the full-precision weights are updated, we would need to re-train the learnable parameters (i.e., the rounding scheme of each weight), incurring a large overhead and undermining the benefit of low-precision training.
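A direct sketch of Eq. 6, with the weight expressed in units of the quantization grid (a simplification of ours):

import math

def hysteresis_quantize(w, q_prev):
    # Eq. 6: round toward the previous quantized value, so a change smaller
    # than one grid step can never flip the output.
    if w > q_prev:
        return math.floor(w)
    if w < q_prev:
        return math.ceil(w)
    return q_prev

Replaying the oscillating-weight example of Section 3.1 with this quantizer produces zero flips: moving up rounds down and moving down rounds up, so the output stays put until the master weight has drifted a full grid step.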
3.3 ULTRA-LOW-PRECISION FORMAT FOR NETWORK QUANTIZATION
To verify the effectiveness of the proposed hysteresis quantization, we select 4-bit logarithmic representation as an ultra-low-precision format for weight parameters. This format has the same dynamic range as INT8 which is widely used for weight quantization, and is more hardware-efficient as multiplication is implemented only using simple shift operations. There have been attempts to use logarithmic weights in quantized neural networks (Lee et al., 2017; Elhoushi et al., 2021), but from-scratch training shows a significant performance degradation. In logarithmic data formats, the interval of quantization points is not uniform, making the effect of fluctuation more severe.
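For concreteness, the sketch below fake-quantizes weights to channel-wise 4-bit logarithmic values; the exponent range, the flushing of tiny values to zero, and the per-output-channel scaling are assumptions for illustration.

import torch

def log4_quantize(w, n_levels=7):
    # Per-output-channel scale (dim 0 is assumed to be the output channel).
    scale = w.flatten(1).abs().amax(dim=1).view(-1, *([1] * (w.dim() - 1)))
    e = torch.log2(w.abs() / scale + 1e-12).round().clamp(-n_levels, 0)
    q = torch.sign(w) * torch.exp2(e) * scale   # multiplying by 2**e is a shift
    # Flush values below the smallest representable power of two to zero.
    return torch.where(w.abs() / scale < 2.0 ** (-n_levels - 0.5),
                       torch.zeros_like(q), q)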
Fig. 5 shows experimental results of ResNet-18 training on ImageNet using 4-bit logarithmic weights. Note that we apply channel-wise quantization to the convolutional layers to compensate for the insufficient expression range and layer-wise quantization to the other types of layers. Further details on the experimental setup are provided in Appendix A.5.1. First, we measure how many quantized weights Qw change when the network performs one weight update using a mini-batch and average them over the first 100 updates in the 60th epoch. The experimental result displayed in Fig. 5(a) clearly shows that using hysteresis significantly reduces weight change frequency and stabilizes the training process. Fig. 5(b) compares the training performance of quantization schemes with and without hysteresis. Hysteresis quantization not only speeds up training but also achieves better results at the end of training. Note that hysteresis quantization is applicable to other data formats, and additional experimental results can be found in Appendix A.4.
4 EXPERIMENTAL RESULTS
4.1 LOW-PRECISION TRAINING SCHEME
For low-precision training, we need to quantize four variables: activation, error, weight, and weight gradient. In our experiments, we apply the quantized training scheme detailed in 2.2 to all of these variables, as depicted in Fig. 6. As in previous studies on 8-bit training, the inputs of GEMM are all quantized into 8 bits. Additional functions are applied to GEMM results in the forward and backward paths. ReLU, tanh, and sigmoid functions are performed directly on the input, whereas the input of BatchNorm is re-quantized.
4.2 8-BIT LOW-PRECISION TRAINING
In Section 2.4, we found that FP134 is the optimal format for low-precision training using the proposed performance prediction method. We measure the training performance of this format and compare it against other 8-bit data formats from recent studies by applying those formats to the training of various neural network models. More details on the experimental setup are provided in Appendix A.5. The performance of the proposed data format is summarized in Table 1. Overall, 8-bit training using FP134 achieves nearly the same performance as full-precision training on all models. Even in MobileNetV2, which is known to be sensitive to quantization due to the small number of parameters, only 0.3% degradation occurred. Sun et al. (2019) show that HFP8 also exhibits only 0.2% accuracy degradation in MobileNetV2 (71.81% vs. 71.61%), but they quantize BatchNorm input into 16 bits instead of 8 bits, roughly doubling the memory access and computational complexity. Additionally, since the forward and backward paths employ different data formats, HFP8 is actually implemented using 9-bit MAC units in hardware (Agrawal et al., 2021). Table 2 compares the training performance of various data formats for ResNet-18 training. The columns w, x, dw, dx, and acc refer to weight, activation, weight gradient, error, and GEMM accumulation, respectively. Our FP134 format exhibits no accuracy drop compared to full-precision training. HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) demonstrate similar performance, but they both use higher precision to represent BatchNorm inputs, and different formats are adopted in the forward and backward paths, necessitating complex computation units when implemented in hardware, as described above. In addition, BM8 assumes block-wise sharing of exponent bias, incurring additional overhead in memory access and data alignment. FP8-SEB (Park et al., 2021) addresses this issue by employing layer-wise exponent bias sharing and multi-way MAC units, but it results in a 0.7% accuracy drop for ResNet-18 training. Contrarily, our data format shows no performance degradation, while deeply quantizing BatchNorm inputs into the same format and allowing for a simple datapath by using an identical data format in the forward and backward paths.
4.3 ULTRA-LOW-PRECISION TRAINING WITH 4-BIT LOGARITHMIC WEIGHTS
Elhoushi et al. (2021) recently demonstrated that 4-bit logarithmic weights could be used for network quantization. Fine-tuning of a pre-trained model only showed 0.2% accuracy degradation, but
from-scratch training of the same model resulted in a 4.5% accuracy drop in ResNet-18 training (Table 3). Similarly, our experiments show 2.1% lower accuracy when training ResNet-18 using 4-bit logarithmic weights and FP134 format for other variables. However, using hysteresis quantization greatly improves the training performance and reduces accuracy degradation to 0.2%. This is identical to the training performance achieved through fine-tuning a pre-trained model by Elhoushi et al. (2021), confirming that hysteresis quantization effectively solves the issue of sub-optimal solutions in from-scratch training. In addition, Table 4 demonstrates that hysteresis quantization improves the training performance in all target models. Note that we quantized all trainable weights except for the BatchNorm parameters into 4 bits in experiments; the training performance could be further improved by using higher precision for error-sensitive parts such as the first/last layers and residual connections.
5 CONCLUSION
In low-precision training, the dynamic range of a tensor is data-dependent, and hence an optimal data format depends on various factors such as model, dataset, and quantization scheme. We showed that the training performance of a specific data format for activation and error could be predicted by observing the errors introduced in the weight gradients. Based on this observation, we determined an optimal 8-bit format for low-precision training very efficiently without running numerous training runs. The proposed FP134 format achieved a similar or better accuracy compared to prior works, while allowing for efficient hardware implementation through quantizing BatchNorm inputs and using a unified data format in both forward and backward paths. In addition, we proposed the hysteresis quantization scheme for network quantization, which improves training performance by suppressing undesired fluctuations and stabilizing the training process. In ultra-low-precision training with 4-bit logarithmic weights, hysteresis quantization significantly improves training performance by mitigating sub-optimal solutions, closely matching the performance obtained through fine-tuning a pre-trained model. We expect that these two schemes can complement each other to enable practical low-precision training on various models and tasks.
ACKNOWLEDGMENTS
This work was supported by the National Research Foundation of Korea (Grant No. NRF2022R1C1C1006880). The EDA tool was supported by the IC Design Education Center.
A APPENDIX
A.1 VARIOUS FORMATS ANALYZED IN SECTION 2
In this paper, we made three assumptions about the quantization formats that were analyzed. First, one bit is allocated as a sign bit, so only symmetric formats are allowed. Second, the number of valid bits of values with a large absolute magnitude must be greater than or equal to that of values with a small absolute magnitude. Last, the base does not exceed 2.
Considering the above assumptions, we provide a systematic approach for generating the different quantization methods used for analysis in Section 2, in order to create quantization methods that trade off dynamic range against the number of valid bits. A quantization method is expressed with the following items: i) a list of decreasing positive real numbers P that contains the interval points (Eq. 7) and ii) a non-increasing integer list L that accompanies the interval list, with each item representing the number of valid bits (Eq. 8). Here, s is the shared exponent bias.
P = {2^{s+1}, 2^s, 2^{s−1}, ..., 2^{s−K+1}}  where s ∈ Z    (7)
L = {l_0, l_1, ..., l_{K−1}}  where l_k ∈ N, i < j ⇒ l_i ≥ l_j    (8)
The quantization points Q are generated in each of the intervals, sliced into 2^{l_k−1} evenly distributed datapoints. If the interval is [2^s, 2^{s+1}), the quantization points Q can be expressed by Eq. 9.
Q = {2^s, 2^s(1 + 1/2^{l_k−1}), 2^s(1 + 2/2^{l_k−1}), ..., 2^s(1 + (2^{l_k−1} − 1)/2^{l_k−1})}    (9)
Notice that L for an α-bit quantization must satisfy
2^{α−1} = 1 + Σ_{k=0}^{K−1} 2^{l_k−1}    (10)
Since the format is symmetric, only half of the data points are assigned to positive numbers, so the exponent in Eq. 10 should be α − 1 instead of α. The reason for adding 1 is to include a zero value. For example, when the shared exponent bias is −1, an 8-bit fixed-point quantization would be expressed as follows:
P = {2^0, 2^{−1}, 2^{−2}, 2^{−3}, 2^{−4}, 2^{−5}, 2^{−6}, 2^{−7}}    (11)
L = {7, 6, 5, 4, 3, 2, 1}    (12)
The first interval, from 1 to 0.5, would be evenly sliced by 2^{7−1} datapoints, the next interval, from 0.5 to 0.25, with 2^{6−1}, etc. Various cases are shown in Fig. 7, with P plotted on the x-axis and L plotted on the y-axis. Since P represents the range of values due to the shared exponent bias, which is independent of the data format, L can represent all of the various data formats we consider in this paper.
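The construction of Eqs. 7–9 is straightforward to express in code; the sketch below enumerates the positive quantization points (plus zero) of a format given its valid-bit list L and shared exponent bias s.

def quantization_points(L, s=0):
    # Interval k is [2**(s-k), 2**(s-k+1)) and holds 2**(l_k - 1) points (Eq. 9).
    points = [0.0]
    for k, l in enumerate(L):
        base, n = 2.0 ** (s - k), 2 ** (l - 1)
        points += [base * (1 + i / n) for i in range(n)]
    return sorted(points)

pts = quantization_points([7, 6, 5, 4, 3, 2, 1], s=-1)  # the 8-bit fixed point above
assert len(pts) == 2 ** 7                               # consistent with Eq. 10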
When selecting 8-bit formats, we chose the formats so that intervals with fewer than 3 valid bits do not appear for more than two digits, reducing the search space, as such formats have an unnecessarily large dynamic range. Thus, formats such as [7,6,5,4,2,2,2,1] were excluded from the search space. Considering all of the generation rules, we selected 166 distinct 8-bit formats with different dynamic ranges and valid bits, from [7,6,5,4,3,2,1] to [3,3,3,...,3,2,1]. After the number of valid bits for an 8-bit format is selected, 1 or 2 is subtracted from each value to create corresponding 7-bit and 6-bit formats. For example, in the case of the [6,5,5,5,5,4,4,4,3,2,1] 8-bit format, the corresponding 7-bit format is [5,4,4,4,4,3,3,3,2,1] and the corresponding 6-bit format is [4,3,3,3,3,2,2,2,1]. From the 166 generated 8-bit formats, 7-bit and 6-bit formats were also generated using this rule.
A.2 SOFTWARE IMPLEMENTATION DETAILS
To support quantized training for various formats, we wrote custom C++ and CUDA code to emulate quantized data. Our custom C++ and CUDA extension code performs quantization-related functions through the Python APIs of PyTorch for extensible research while maintaining high performance. We emulate the quantized values using custom code in the parts of the network that need quantization, and PyTorch built-in functions are used for computation kernels such as convolution and matrix multiplication. We created a package named lptorch, short for low-precision PyTorch, and the code can be found in the supplementary material.
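While the lptorch API itself is only described at this high level, such emulation packages typically wrap each quantizer in a fake-quantization function that quantizes on the forward pass and passes gradients straight through; the sketch below shows this common pattern (in the actual scheme, error tensors are additionally quantized by their own function).

import torch

class FakeQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, quantize_fn):
        return quantize_fn(x)        # any elementwise quantization function
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None     # straight-through estimator

def fake_quant(x, quantize_fn):
    return FakeQuant.apply(x, quantize_fn)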
A.3 ANGLE VS. MAGNITUDE TO PREDICT PERFORMANCE
In addition to the misalignment angles of Gw (∠(Gw, Gw + N∆A) and ∠(Gw, Gw + N∆E)), as defined in Section 2.3, we used the magnitude of the noise (|N∆A| and |N∆E|) to predict the final trained performance, and the results are shown in Fig. 8. Fig. 3 and Fig. 8 show that both the error magnitude and the misalignment angle are good metrics for determining an optimal data format. For the six target models, both metrics suggest FP134 as the best format. However, the misalignment angle still better captures the training performance. For instance, in Fig. 8(e), although FP134 shows a smaller noise magnitude, the actual training loss is smaller for FP143. Similarly, in Fig. 8(b), (c) and (f), although INT8 failed and FP152 succeeded in training, the absolute value of the noise did not clearly indicate which of the two formats is superior. Based on these observations, we conclude that the misalignment angles are more suitable for predicting training performance than the absolute value of the noise.
A.4 HYSTERESIS QUANTIZATION WITH INTEGER WEIGHTS
In addition to 4-bit logarithmic weights, we also tested the hysteresis quantization scheme on a low-precision integer format (INT4) that uses uniform quantization. The results are shown in Table 5. The experimental results show that using hysteresis improves performance in most cases. In addition, in MobileNetV2 training with INT4 weights, training initially failed, but using hysteresis enables reliable training, which suggests that hysteresis quantization not only helps the network reach the optimal point but also prevents divergence in an unwanted direction during the training process.
However, it is interesting to see that the hysteresis quantization is less effective on the LSTM model for the INT4 format. We suspect that this is due to the weight distribution characteristics of the LSTM model. As shown in Fig. 9, most of the weights have a relatively large magnitude in the
LSTM model when normalized, contrary to ResNet-18 in which the weights are more evenly distributed. In logarithmic formats, the relative amount of quantization error is similar for all values. In contrast, the relative amount of quantization error is smaller for large values in uniform quantization. Therefore, the weight parameters of LSTM are more severely affected by fluctuation in logarithmic formats, making our hysteresis quantization scheme more effective in those formats compared to uniform quantization.
A.5 EXPERIMENTAL DETAILS

A.5.1 RESNET-18 (IMAGENET)
We conducted ImageNet experiments using SGD with a momentum of 0.9 for 90 epochs with a batch size of 256 images and an initial learning rate of 0.1 which is decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-18 architecture from the official PyTorch implementation1. Fig. 10 shows Top-1 training & validation accuracy graphs. Observation of the training graph indicates that all of the results are close to the baseline within 0.2% with the exception of FP130 without hysteresis quantization.
A.5.2 RESNET-101 (IMAGENET)
We trained ResNet-101 by applying the same training method as ResNet-18. We conducted ImageNet experiments using SGD with a momentum of 0.9 for 90 epochs with a batch size of 256 images and an initial learning rate of 0.1 which is decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-101 architecture from the official PyTorch implementation2. Fig. 11 shows Top-1 training & validation accuracy graphs. Observation of the training graph indicates that all of the results are close to the baseline with less than 0.3% performance drop except for FP130 without hysteresis quantization.
A.5.3 MOBILENETV2 (IMAGENET)
We conducted ImageNet experiments using SGD with a momentum of 0.9 for 270 epochs with a batch size of 256 images and cosine annealing with an initial learning rate of 0.05. We used the MobileNetV2 architecture from the official PyTorch implementation3. Fig. 12 shows Top-1 training & validation accuracy graphs. Observation of the training graph indicates that FP130
1 https://github.com/pytorch/examples/tree/master/imagenet
2 https://github.com/pytorch/examples/tree/master/imagenet
3 https://github.com/pytorch/examples/tree/master/imagenet
without hysteresis leads to very unstable fluctuations throughout the training. On the other hand, in FP130 with hysteresis, training is less susceptible to fluctuations and follows the baseline (FP32) training closely until the learning rate decreases toward the latter part of learning, where both FP130 with hysteresis and FP134 show some degradation from the baseline. This is seen as a limitation due to the low precision of each format.
A.5.4 2-LAYER LSTM (PTB)
We adopted the 2-layer Long Short-Term Memory (LSTM) network from PyTorch Examples4 for language modeling on the Penn Treebank dataset (Marcus et al., 1993). We ran experiments in batches of 20 sentences with an initial learning rate of 20, which is decayed by a factor of 4 at epochs 11, 16, 26, 31, and 37. The embedding and hidden dimensions are 650 and the sequence length is 35. Fig. 13 shows training & validation perplexity.
A.5.5 TRANSFORMER MODEL (IWLST)
We adopted the Transformer Base model from the FairSeq5 repository on the IWSLT’14 German to English translation task. We used Adam optimizer and default training parameters found in the repository and trained from scratch for 25 epochs. BLEU scores were calculated using the script from the repository.
A.5.6 MOBILENETV2 + SSDLITE (VOC)
We adopted a PyTorch implementation of SSDLite from the online repository6. The base network is MobileNetV2, which was pretrained with each format as in Appendix A.5.3. The entire network is trained on the VOC2012 and VOC2007 trainval datasets and evaluated on the VOC2007 validation dataset. We used SGD with a momentum of 0.9 for 200 epochs in batches of 32 images and cosine annealing with an initial learning rate of 0.01. Fig. 14 shows the validation loss every 5 epochs. Even in this experiment, in the case of FP130 without hysteresis, the loss fluctuates significantly, whereas with FP130 with hysteresis, learning proceeds much more stably. FP134 showed similar results to the baseline regardless of hysteresis quantization.
A.6 MODEL QUANTIZATION METHODS
We quantized the GEMM input and BatchNorm input in all quantized training experiments. Among the six models used in the experiments, the quantization details for three representative structures are shown in Fig. 15. In each structure in the figure, inputs such as x, c, h, V, K, and Q are also all quantized in 8 bits.
4https://github.com/pytorch/examples/tree/master/word language model 5https://github.com/pytorch/fairseq 6https://github.com/qfgaohao/pytorch-ssd
A.7 HARDWARE EVALUATION
For hardware implementation cost comparisons, we implemented a conventional MAC unit and a multi-way MAC unit with integer-based accumulation (Tambe et al., 2020; Park et al., 2021) that support the data formats presented in Section 4.2. For accumulation, we use FP169 with chunk-based accumulation (Wang et al., 2018). Experimental results in Table 6 show that FP134 exhibits lower
Table 6: Implementation cost of MAC units for each data format (the two column groups report two synthesis cost metrics).

Structure           | FP134 | FP143¹ | HFP8² | BM8³ | Flex16+5⁴ | FP134 | FP143¹ | HFP8² | BM8³ | Flex16+5⁴
Conventional        | 1355  | 1320   | 1308  | 1460 | 3800      | 122   | 116    | 106   | 141  | 537
Multi-way, 2-input  | 1335  | 1480   | 2342  | 1865 | 2268      | 178   | 178    | 283   | 258  | 371
Multi-way, 4-input  |  888  | 1034   | 1615  | 1296 | 1885      | 120   | 135    | 205   | 184  | 351
Multi-way, 8-input  |  678  |  836   | 1343  | 1074 | 1672      |  97   | 123    | 194   | 168  | 342
Multi-way, 16-input |  571  |  698   | 1065  |  957 | 1540      |  95   | 114    | 170   | 155  | 329
Multi-way, 32-input |  511  |  668   |  994  |  898 | 1485      |  87   | 111    | 170   | 152  | 326
Multi-way, 64-input |  509  |  638   |  955  |  856 | 1450      |  88   | 110    | 172   | 149  | 326

¹ Park et al. (2021)  ² Sun et al. (2019)  ³ Fox et al. (2020)  ⁴ Köster et al. (2017)
cost than FP143 and the other formats from previous studies. Note that HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) employ different formats for activation and error; therefore, they need to be implemented in FP153 and FP145, respectively, to support all operations with a single MAC unit (Agrawal et al., 2021). Since Flex16+5 (Köster et al., 2017) requires 16-bit multiplication, its cost is significantly higher than that of the other 8-bit formats.
A conventional MAC unit consists of a multiplier and an accumulator. In the multiplier, the exponents of the two input operands are summed while their mantissas are multiplied. The mantissa multiplication is more complex and hence dominates the area of the multiplier; as a result, the multiplier is larger when more bits are allocated to the mantissa. In the accumulator, a floating-point adder adds the multiplication results to a partial sum kept in FP169. The adder is decomposed into a shifter that aligns the mantissa by the exponent difference, an integer adder that sums the aligned mantissas, and a quantization unit that converts the result back to FP169. Since the result is re-quantized into FP169, the addition of aligned mantissas does not need to be lossless. The FP169 format has a 10-bit mantissa including one hidden bit, so we only need to compute the upper 10 bits accurately, which necessitates a 12-bit adder considering rounding. Shifting by more than 12 bits is not needed even if the result of the multiplier has a larger exponent range. Therefore, the shifter, adder, and quantization unit, which are the components of the accumulator, are not affected by the input format. There are minor differences, such as the adder that computes the exponent difference and a shifter with a different input bit width, but their costs are negligible.
Contrarily, a multi-way MAC consists of a multiplier, a shifter for alignment, an adder tree, a normalization unit, and a final accumulator. The multiplier and the final accumulator are identical to those of the conventional MAC. However, since only one normalization unit and one final accumulator are shared across multiple inputs, their implementation cost becomes insignificant for a larger number of inputs. The shifter for alignment converts the multiplier output to an integer format, since the cost of integer addition is lower than that of floating-point addition. Then, the adder tree sums those integer values, and the normalization unit converts the result back to a floating-point format. The costs of the alignment shifter, adder tree, and normalization unit are all determined by the integer bit width, and the larger the exponent range of the input operands, the larger the required bit width, as shown in Fig. 16. For FP134, FP143, and FP152, the minimum integer bit widths are 23, 37, and 67 bits, respectively. Since the bit width is sufficiently large, the cost difference of these units exceeds the cost difference of the multiplier. Therefore, the cost of a multi-way MAC increases with the number of exponent bits.
When designing a neural network training processor, some parts of the hardware (e.g., batch normalization, non-linear activation functions such as tanh and sigmoid, and the softmax function) are typically implemented with higher precision to avoid a performance drop. Hence, we need to consider data format conversion overheads when comparing different formats.
Table 7: Cost of data format conversion units (two column groups, as in Table 6).

Direction  | FP134 | FP143 | HFP8¹ | BM8² | Flex16+5³ | FP134 | FP143 | HFP8¹ | BM8² | Flex16+5³
To FP32    |  155  |  141  |  145  | 176  |    330    |  28   |  26   |  27   |  30  |    53
From FP32  |  139  |  144  |  152  | 162  |    427    |  19   |  20   |  22   |  23  |    55

¹ Sun et al. (2019)  ² Fox et al. (2020)  ³ Köster et al. (2017)
If we consider various 8-bit data formats with different representation methods, as we did in Table 6, and assume that computations other than MAC operations are implemented in full precision, the processing architecture (except the MAC units) will be identical for all formats. In addition, the on/off-chip memory space, control logic, and on-chip interconnects will remain the same. The only differences are the low-precision MAC units and the conversion units between full-precision and low-precision formats. However, the cost of conversion between low-precision and high-precision floating-point formats is typically very low and does not vary much with the low-precision format. For low-precision to high-precision conversion, we only have to add a bias-correction term to the exponent and append zeros to the mantissa. For high-precision to low-precision conversion, we need to add a bias-correction term to the exponent, clamp overflowed values to the maximum, and round off the mantissa. The cost is very low compared to a MAC operation, and the cost difference between different low-precision formats is negligible. We have synthesized the conversion units for the different formats, and their costs are presented in Table 7. The experimental results confirm that the overhead of data format conversion is significantly lower than that of MAC operations. In addition, all formats except Flexpoint exhibit similar conversion costs.
In addition to the synthesis results for ASIC implementation in Table 6, we measured the hardware overhead of MAC units for different data formats on an FPGA. Table 8 shows the synthesis results on a Xilinx Artix-7 FPGA (XC7A100TCSG324-1). These MAC units do not need block RAMs (BRAMs), and we used a compiler directive to avoid using DSP modules for a fair comparison. Table 8 shows a similar trend to Table 6: the cost of one MAC gradually decreases as the number of inputs increases in the multi-way MAC. Also, due to the integer-based addition in the adder tree, FP134, which has the smallest dynamic range, exhibits a lower cost than the other formats.
A.8 EFFECT OF QUANTIZATION NOISE ON DATA FLOW QUANTIZATION
Table 9 shows the training results when both activation and error are quantized in various data formats. If an appropriate amount of noise is introduced in the network during training, it will increase the training loss but reduce the validation loss, suggesting that the model has been improved due to the regularization effect. However, if the noise level continues to increase, the model’s performance will start to degrade at some point. For instance, when MobileNetV2 is quantized in FP134, its performance is improved through the regularization effect since the training loss increases while
the validation loss decreases compared to FP32. In most other cases, however, both the training and validation losses increase under quantization, resulting in lower accuracy. This suggests that using a very low precision data format already introduces a large amount of noise in the network, incurring performance degradation. Hence, it is necessary to reduce the error introduced in the network to improve the training performance in low-precision training. | 1. What is the focus and contribution of the paper regarding numeric format optimization?
2. What are the strengths of the proposed approach, particularly in evaluating performance and mitigating fluctuation issues?
3. Do you have any concerns about the proposed metric's advantages compared to other metrics?
4. How does hysteresis quantization work in practice, and what are its limitations?
5. Are there any similarities or differences between the proposed method and AdaRound?
6. Can you clarify the notations used in Table 4, such as "X," "O," "dw," and "x"? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a method to predict the performance of different numeric formats, which allows determining the optimal data format for various neural network architectures, datasets, and tasks efficiently. By comparing 498 formats in total, the authors find an optimal 8-bit format suitable for various models. To improve the performance of from-scratch training, the authors further propose hysteresis quantization to mitigate the fluctuation issue. Experiments on 8-bit and 4-bit training demonstrate the effectiveness of the proposed method.
Review
Contribution:
The authors propose a metric to evaluate the performance of different numeric formats. Using the proposed metric, the authors further find an optimal 8-bit numeric format suitable for various models.
The authors find that the performance degradation of 8-bit training is due to the fluctuation issue of quantized weights. To solve this, the authors propose a hysteresis quantization scheme to improve the performance of from-scratch training.
Experiments on 8-bit and 4-bit training show the promising performance of the proposed method.
Questions and points needed to be improved:
In Figure 2, the improvement in Spearman's correlation of the proposed metric over the magnitude of the error is marginal (0.9283 vs. 0.9215). It seems that the magnitude of the error is already a good metric to measure performance degradation. What are the advantages of the proposed metric? More explanations and results are required.
In Section 3.1, the authors state that the amount of change in the quantized weight due to the fluctuation is not necessarily proportional to the weight gradient. To mitigate the fluctuation issue above, the authors propose the hysteresis quantization scheme. However, Figure 5 cannot show the effect of hysteresis quantization: the number of Qw changes is the same in Figure 5(a) and Figure 5(b). More explanations are required.
The idea of changing the rounding function in network quantization is similar to AdaRound [1]. It would be better for the authors to add more discussion of the differences between the proposed method and AdaRound.
In Table 4, many notations are unclear. What do “X” and “O” denote? What do dw and x denote? More explanations are required.
Reference:
[1] Up or Down? Adaptive Rounding for Post-Training Quantization. ICML 2020. |