ICLR
Title: Deconfounding to Explanation Evaluation in Graph Neural Networks

Abstract: Explainability of graph neural networks (GNNs) aims to answer "Why the GNN made a certain prediction?", which is crucial to interpreting the model prediction. The feature attribution framework distributes a GNN's prediction over its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating the explanation (i.e., subgraph importance), a standard way is to audit the model prediction based on the subgraph alone. However, we argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution (OOD) problem. Furthermore, with an in-depth causal analysis, we find that the OOD effect acts as a confounder, which brings spurious associations between the subgraph importance and the model prediction, making the evaluation less reliable. In this work, we propose Deconfounded Subgraph Evaluation (DSE), which assesses the causal effect of an explanatory subgraph on the model prediction. While the distribution shift is generally intractable, we employ the front-door adjustment and introduce a surrogate variable of the subgraphs. Specifically, we devise a generative model to generate plausible surrogates that conform to the data distribution, thus approaching an unbiased estimation of subgraph importance. Empirical results demonstrate the effectiveness of DSE in terms of explanation fidelity.

1 INTRODUCTION
Explainability of graph neural networks (GNNs) (Hamilton et al., 2017; Dwivedi et al., 2020) is crucial to model understanding and reliability in real-world applications, especially those concerning fairness and privacy (Ying et al., 2019; Luo et al., 2020). It aims to provide insight into how predictor models work, answering "Why the target GNN made a certain prediction?". Towards this end, a variety of explainer models have been proposed for feature attribution (Selvaraju et al., 2017; Ying et al., 2019; Luo et al., 2020; Vu & Thai, 2020), which decomposes the predictor's prediction into contributions (i.e., importance) of its input features (e.g., edges, nodes).
While feature attribution assigns the features with importance scores, it redistributes the graph features and creates a new distribution, different from that of the original full graphs, from which a subgraph is sampled as the explanation. Such a sampling process is referred to as feature removal (Covert et al., 2020). Then, to assess the explanatory subgraph, the current evaluation frameworks use the feature removal principle: (1) feed only the subgraph into the target predictor, discarding the other features; (2) measure the importance of the subgraph by how much information it provides to recover the model's prediction. Such subgraph-prediction correlations uncovered by the removal-based evaluator should offer a faithful inspection of the predictor's decision-making process and assess the fidelity of the explainers reliably. However, feature removal brings the out-of-distribution (OOD) problem (Frye et al., 2020; Chang et al., 2019; Lukas Faber, 2021): the distribution shift from full graphs to subgraphs likely violates underlying properties of the full graphs, including the node degree distribution (Leskovec et al., 2005) and domain-specific constraints (Liu et al., 2018). For example, graph properties of chemical molecules, such as the valency rules, impose constraints on syntactically valid molecules (Liu et al., 2018); hence, simply removing some bonds (edges) or atoms (nodes) creates invalid molecular subgraphs that never appear in the training dataset. Such OOD subgraphs could manipulate the predictor's outcome arbitrarily (Dai et al., 2018; Zügner et al., 2018), generate erroneous predictions, and limit the reliability of the evaluation process.
Figure 1: (a) A real example in TR3. The GNN predictor classifies the full graph as "House". On subgraphs Gs1 and Gs2, the prediction probabilities of being "House" are 0.21 and 0.70, respectively. (b) The structural causal model (SCM I) represents the causalities among variables: G as the input graph, D as the unobserved distribution shift, Gs as the explanatory subgraph, and Y as the model prediction.
Here we demonstrate the OOD effect with a real example in Figure 1a, where the trained ASAP (Ranjan et al., 2020) predictor classifies the input graph as "House" for its attached motif (see Section 4 for more details). On the ground-truth explanation Gs1, the output probability of the "House" class is surprisingly low (0.21), while for Gs2, which carries less discriminative information, the output probability of the "House" class is higher (0.70). Clearly, the removal-based evaluator assigns unreliable importance scores to OOD subgraphs, which are unfaithful to the predictor's decision. To the best of our knowledge, the OOD effect has not been explored in evaluating GNN explanations. We rigorously investigate it from a causal view (Pearl et al., 2016; Pearl, 2000; Pearl & Mackenzie, 2018). Figure 1b represents our causal assumption via a structural causal model (SCM) (Pearl et al., 2016; Pearl, 2000), where we target the causal effect of Gs on Y.
Nonetheless, as a confounder between Gs and Y, the distribution shift D opens the spurious path Gs ← D → Y. By "spurious", we mean that the path lies outside the direct causal path from Gs to Y, making Gs and Y spuriously correlated and yielding an erroneous effect. Moreover, one can hardly distinguish such spurious correlations from causative relations (Pearl et al., 2016). Hence, auditing Y on Gs suffers from the OOD effect and wrongly evaluates the importance of Gs. Motivated by this causal insight, we propose a novel evaluation paradigm, Deconfounded Subgraph Evaluator (DSE), to faithfully measure the causal effect of explanatory subgraphs on the prediction and to further guide explainers to generate faithful explanations.
Figure 2: SCM II, where a surrogate variable G∗s is introduced between Gs and Y to enable the front-door adjustment.
In a nutshell, our contributions are:
• From a causal perspective, we argue that the OOD effect is the confounder that causes spurious correlations between subgraph importance and model prediction.
• We propose a deconfounding paradigm, DSE, which exploits the front-door adjustment to mitigate the out-of-distribution effect and evaluate explanatory subgraphs without bias.
• We validate the effectiveness of our framework over various explainers, target GNN models, and datasets. Significant boosts are achieved over the conventional feature removal techniques.
Code and datasets are available at: https://anonymous.4open.science/r/DSE-24BC/.
2 A CAUSAL VIEW OF EXPLANATION EVALUATION
We begin with the causality-based view of feature removal in Section 2.1 and present our causal assumption to inspect the OOD effect in Section 2.2.
2.1 PROBLEM FORMULATION
Without loss of generality, we focus on the graph classification task: a well-trained GNN predictor f takes the graph variable G as input and predicts the class Y ∈ {1, · · · ,K}, i.e., Y = f(G).
Generation of Explanatory Subgraphs. Post-hoc explainability typically considers the question "Why the GNN predictor f made a certain prediction?". A prevalent solution is to build an explainer model that conducts feature attribution (Ying et al., 2019; Luo et al., 2020; Pope et al., 2019). It decomposes the prediction into contributions of the input features, redistributes the probability over features according to their importance, and samples the salient features as an explanatory subgraph Gs. Specifically, Gs can be a structure-wise (Ying et al., 2019; Luo et al., 2020) or feature-wise (Ying et al., 2019) subgraph of G. In this paper, we focus on the structural features. That is, for a graph G = (N, E) with edge set E and node set N, the explanatory subgraph Gs = (Ns, Es) consists of a subset of edges Es ⊂ E and their endpoints Ns = {u, v | (u, v) ∈ Es}.
Evaluation of Explanatory Subgraphs. Insertion-based evaluation by feature removal (Covert et al., 2020; Dabkowski & Gal, 2017) aims to check whether the subgraph is the supporting substructure¹ that alone allows a confident classification. We systematize this paradigm in three steps: (1) divide the full graph G into two parts, the subgraph Gs and its complement; (2) feed Gs into the target GNN f, while discarding the complement; and (3) obtain the model prediction on Gs, to assess how much discriminative information it carries to recover the prediction on G. Briefly, at the core of the evaluator is the subgraph-prediction correlation.
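To make the removal-based paradigm concrete, here is a minimal sketch (ours, not the paper's released code) of steps (1)-(3): select the top-scored edges produced by an explainer, keep only those edges and their endpoints as Gs, and read the target predictor's probability on Gs as Imp_re(Gs) = f(Gs). The `predictor` wrapper, its forward signature, and the `top_ratio` budget (mirroring the 15%-20% edge budgets of Section 4) are illustrative assumptions.

```python
# Minimal sketch of removal-based (insertion-style) evaluation, assuming a PyG-style
# predictor that maps (x, edge_index) to graph-level class logits. `edge_scores`
# holds one attribution score per edge, as produced by any explainer.
import torch
from torch_geometric.data import Data

def explanatory_subgraph(graph: Data, edge_scores: torch.Tensor, top_ratio: float) -> Data:
    """Step (1): keep the top-ranked edges E_s; their endpoints form N_s.
    Node features are left untouched, so nodes outside N_s become isolated --
    exactly the kind of input the predictor never saw during training."""
    k = max(1, int(top_ratio * graph.num_edges))
    keep = torch.zeros(graph.num_edges, dtype=torch.bool)
    keep[edge_scores.topk(k).indices] = True
    return Data(x=graph.x, edge_index=graph.edge_index[:, keep])

@torch.no_grad()
def removal_based_importance(predictor, graph: Data, edge_scores: torch.Tensor,
                             target_class: int, top_ratio: float = 0.2) -> float:
    """Steps (2)-(3): feed only G_s to the GNN and read the prediction on it,
    i.e., Imp_re(G_s) = f(G_s)."""
    g_s = explanatory_subgraph(graph, edge_scores, top_ratio)
    logits = predictor(g_s.x, g_s.edge_index)   # assumed forward signature
    return torch.softmax(logits.squeeze(), dim=-1)[target_class].item()
```

As Figure 1a illustrates, this probability can be low even for a ground-truth explanation, which is precisely the OOD symptom discussed next.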
However, as discussed in Section 1, the OOD effect is inherent in the removal-based evaluator, hindering the subgraph-prediction correlation from accurately estimating the subgraph importance. 2.2 STRUCTURAL CAUSAL MODEL To inspect the OOD effect rigorously, we take a causal look at the evaluation process with a Structural Causal Model (SCM I) in Figure 1b. We denote the abstract data variables by the nodes, where the directed links represent the causality. The SCM indicates how the variables interact with each other through the graphical definition of causation: • G→ Gs ← D. We introduce an abstract distribution shift variable D to sample a subgraph Gs from the edge distributions of the full graph G. • Gs → Y ← D. We denote Y as the prediction variable (e.g., logits output), which is determined by (1) the direct effect from Gs, and (2) the confounding effect caused by D. In particular, the former causation that led to the result is the focus of this work. We suggest readers to refer to Appendix A where we offer an elaboration of D. With our SCM assumption, directly measuring the importance of explanatory subgraphs is distracted by the backdoor path (Pearl, 2000), Gs ← D → Y . This path introduces the confounding associations between Gs and Y , which makes Gs and Y spuriously correlated, i.e., biases the subgraph-prediction correlations, thus making the evaluator invalid. How to mitigate the OOD effect and quantify Gs’s genuine causal effect on Y remains largely unexplored in the literature and is the focus of our work. 3 DECONFOUNDED EVALUATION OF EXPLANATORY SUBGRAPHS In this section, we propose a novel deconfounding framework to evaluate the explanatory subgraphs in a trustworthy way. Specifically, we first leverage the front-door adjustment (Pearl, 2000) to formulate a causal objective in Section 3.1. We then devise a conditional variational graph auto-encoders (CVGAE) as the effective implementation of our objective in Section 3.2. 1We focus on insertion-based evaluation here while we discuss deletion-based evaluation in Appendix C. 3.1 FRONT-DOOR ADJUSTMENT To the best of our knowledge, our work is the first to adopt the causal theory to solve the OOD problem in the explanation evaluation of GNNs. To pursue the causal effect of Gs on Y , we perform the calculus of the causal intervention P (Y = y|do(Gs = Gs)). Specifically, the do-calculus (Pearl, 2000; Pearl et al., 2016) is to intervene the subgraph variable Gs by cutting off its coming links and assigning it with the certain value Gs, making it unaffected from its causal parents G and D. From inspection of the SCM in Figure 1b, the distribution effect D acts as the confounder between Gs and Y , and opens the backdoor path Gs ← D → Y . However, as D is hardly measurable, we can not use the backdoor adjustment (Pearl, 2000; Pearl et al., 2016) to block the backdoor path from Gs to Y . Hence, the causal effect of Gs on Y is not identifiable from SCM I. However, we can go much further by considering SCM II in Figure 2 instead, where a mediating variable G∗s is introduced between Gs and Y : • Gs → G∗s . G∗s is the surrogate variable of Gs, which completes Gs to make them in the data distribution. First, it originates from and containsGs. Specifically, it imagines how the possible full graphs should be when observing the subgraph Gs. Second, G∗s should follow the data distribution and respect the inherent knowledge of graph properties, thus no link exists between D and G∗s . • G∗s → Y . 
This is based on our causal assumption that the causality-related information of Gs on Y , i.e., the discriminative information for Gs to make prediction, is well-preserved by G∗s . Thus, with the core of Gs, G∗s is qualified to serve as the mediator which further results in the model prediction. With SCM II, we can exploit the front-door adjustment (Pearl, 2000; Pearl et al., 2016) instead to quantify the causal effect of Gs on Y . Specifically, by summing over possible surrogate graphs G∗s of G∗s , we chain two identifiable partial effects of Gs on G ∗ s and G ∗ s on Y together: P (Y |do(Gs = Gs)) = ∑ G∗s P (Y |do(G∗s = G∗s ))P (G∗s = G∗s |do(Gs = Gs)) = ∑ G∗s ∑ G′s P (Y |G∗s = G∗s , Gs = G′s)P (Gs = G′s)P (G∗s = G∗s |do(Gs = Gs)) = ∑ G∗s ∑ G′s P (Y |G∗s = G∗s , Gs = G′s)P (Gs = G′s)P (G∗s = G∗s |Gs = Gs), (1) Specifically, we have P (G∗s|do(Gs = Gs)) = P (G∗s|Gs = Gs) as Gs is the only parent of G∗s . And we distinguish the Gs in our target expression P (Y |do(Gs = Gs)) between G′s, the latter of which is adjusted to pursue P (Y |do(G∗s = G∗s )). With the data of (Gs,G∗s ) pairs, we can obtain P (Y |G∗s = G∗s , Gs = G′s) by feeding the surrogate graph G∗s into the GNN predictor, conditional on the subgraph G′s; similarly, we can estimate P (Gs = G′s) statistically; P (G∗s = G∗s |Gs = Gs) is the conditional distribution of the surrogate variable, after observing the subgraphs. As a result, this front-door adjustment yields a consistent estimation of Gs’s effect on Y and avoids the confounding associations from the OOD effect. 3.2 DEEP GENERATIVE MODEL However, it is non-trivial to instantiate G∗s and collect the (Gs,G∗s ) pairs. We get inspiration from the great success of generative models and devise a novel probabilistic model, conditional variational graph auto-encoder (CVGAE), and an adversarial training framework, to generate G∗s . Conditional Generation. Inspired by previous works (Thomas N. Kipf, 2016; Liu et al., 2018), we model the data distribution via a generative model gθ parameterized by θ. It is composed of an encoder q(Z|G,Gs) and a decoder p(G∗s |Z). Specifically, the encoder q(Z|G,Gs) embeds each node i in G with a stochastic representation zi, and summarize all node representations in Z: q(Z|G,Gs) = N∏ i=1 q(zi|G,Gs), with q(zi|G,Gs) = N (zi | [µ1i,µ2i], [ σ21i 0 0 σ22i ] ) (2) where zi is sampled from a diagonal normal distribution by mean vector [µ1i,µ2i] and standard deviation vector diag(σ21i,σ 2 2i); µ1 = fµ(G) and logσ1 = fσ(G) denote the matrices of mean vectors µ1i and standard deviation vectors logσ1i respectively, which are derived from two GNN models fµ and fσ on the top of the full graph G; similarly, µ2 = fµ(Gs) and logσ2 = fσ(Gs) are on the top of the subgraph Gs. Then, the decoder p(G∗s |Z) generates the valid surrogates: p(G∗s |Z) = N∏ i N∏ j p(Aij |zi, zj), with p(Aij = 1|zi, zj) = fA([zi, zj ]), (3) whereAij = 1 indicates the existence of an edge between nodes i and j; fA is a MLP, which takes the concatenation of node representations zi and zj as the input and outputs the probability of Aij = 1. Leveraging the variational graph auto-encoder, we are able to generate some counterfactual edges that never appear in G and sample G∗s from the conditional distribution p(G∗s |Z), formally, G∗s ∼ p(G∗s|Z). As a result, P (G∗s = G∗s |Gs = Gs) in Equation 1 is identified by p(G∗s |Z). The quality of the generator directly affects the quality of the surrogate graphs, further determines how well the frontdoor adjustment is conducted. 
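The text above specifies CVGAE only at the distribution level (Equations 2-3). Below is a hedged sketch of one possible instantiation: fµ and fσ are shared GNN encoders applied to both G and Gs (GCNConv stands in for the CGCNN encoder mentioned in Appendix D), and fA is an MLP that scores node pairs. The module names, layer sizes, and the mechanism that keeps the explanation's edges inside the surrogate are illustrative assumptions, not the authors' released architecture.

```python
# Hedged sketch of CVGAE's conditional generation (Equations 2-3).
# f_mu / f_sigma are shared GNN encoders applied to the full graph G and to the
# subgraph G_s; f_A scores candidate edges from pairs of latent node codes.
import torch
from torch import nn
from torch_geometric.nn import GCNConv

class CVGAE(nn.Module):
    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.f_mu = GCNConv(in_dim, z_dim)      # produces mu_1 (on G) and mu_2 (on G_s)
        self.f_sigma = GCNConv(in_dim, z_dim)   # produces log sigma_1 and log sigma_2
        self.f_A = nn.Sequential(               # edge decoder p(A_ij = 1 | z_i, z_j)
            nn.Linear(4 * z_dim, z_dim), nn.ReLU(), nn.Linear(z_dim, 1), nn.Sigmoid())

    def encode(self, x, edge_index, edge_index_s):
        """q(Z | G, G_s): concatenate the codes computed on G and on G_s (Eq. 2)."""
        mu = torch.cat([self.f_mu(x, edge_index), self.f_mu(x, edge_index_s)], dim=-1)
        log_sigma = torch.cat([self.f_sigma(x, edge_index),
                               self.f_sigma(x, edge_index_s)], dim=-1)
        z = mu + torch.randn_like(mu) * log_sigma.exp()   # reparameterised sample
        return z, mu, log_sigma

    def decode(self, z):
        """p(G_s* | Z): probability of every node pair being connected (Eq. 3)."""
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)
        zj = z.unsqueeze(0).expand(n, n, -1)
        return self.f_A(torch.cat([zi, zj], dim=-1)).squeeze(-1)   # (n, n) edge probs

    def sample_surrogate(self, x, edge_index, edge_index_s):
        """Draw one surrogate G_s* that always contains the edges of G_s
        (an assumed way to realise 'G_s* originates from and contains G_s')."""
        z, _, _ = self.encode(x, edge_index, edge_index_s)
        adj = torch.bernoulli(self.decode(z))
        adj[edge_index_s[0], edge_index_s[1]] = 1.0
        return adj
```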
Next, we will detail an adversarial training framework to optimize the generator, which is distinct from the standard training of VAE. Adversarial Training. To achieve high-quality generation, we get inspiration from the adversarial training (Goodfellow et al., 2020; Yue et al., 2021) and devise the following training objective: min θ LVAE + γLC +max µ ωLD, (4) where γ, ω are trade-off hyper-parameters. These losses are carefully designed to assure the generation follows the data distribution. Next, we will elaborate on each of them. LVAE = −EG [Eq(Z|G,Gs)[log p(Ĝs|Z)]] + βEG [DKL(q(Z|G,Gs)||p(Z))], (5) We first minimize the β-VAE loss(Higgins et al., 2017), and the first term is the reconstruction loss responsible to predict the probability of edges’ existence; the second term is the KL-divergence between the variational and prior distributions. Here we resort to the isotropic Gaussian distribution p(Z) = ∏ i p(zi) = ∏ iN (zi|0, I) as the prior. β reweighs the KL-divergence, which promises to learn the disentangled factors in Z (Higgins et al., 2017; Yue et al., 2021; Suter et al., 2019). Moreover, we highlight the class-discriminative information in Z, by encouraging the agreement between graph representations with the same class compared to that with different classes. Technically, the contrastive loss is adopted: LC = −EG [log ∑ G′∈B+ exp (s(zG , zG′)/τ)∑ G′′∈B+∪B− exp (s(zG , zG′′)/τ) ], (6) where zG is the representation of G that aggregates all node representations Z together; s is the similarity function, which is given by an inner product here; τ is the temperature hyper-parameter; B+ is the graph set having the same class to G, while the graphs involved in B− have different classes from G. Minimizing this loss enables the generator to go beyond the generic knowledge and uncover the class-wise patterns of graph data. Besides, we introduce a discriminative model dµ to distinguish the generated graphs. Specifically, we set it as a probability-conditional GNN (Fey & Lenssen, 2019) parameterized by µ. It takes a graph as input and outputs a score between 0 to 1, which indicates the confidence of the graph being realistic. Hence, given a real graph G with the ground-truth label y, we can use the generator gθ to generate G∗s . Then the discriminator learns to assign G with a large score while labeling G∗s with a small score. To optimize the discriminator, we adopt the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss: LD = EG [Ep(G∗s |Z)[d(G, y)− d(G ∗ s , y)− λ(||∇G∗s d(G ∗ s , y)||2 − 1)2]], (7) where d(G∗s , y) is the probability of generating G∗s from the generator; λ is the hyper-parameter. By playing the min-max game between the generator and the discriminator in Equation 4, the generator can create the surrogate graphs from the data distribution plausibly. Subgraph Evaluation. With the well-trained generator g∗θ whose parameters are fixed, we now approximate the causal effect of Gs on Y . Here we conduct Monte-Carlo simulation based on g∗θ to sample a set of plausible surrogate graphs {G∗s} from p(G∗s |Z). Having collected the (Gs,G∗s ) data, we can arrive the estimation of Equation 1. 4 EXPERIMENTS We aim to answer the following research questions: • Study of Explanation Evaluation. How effective is our DSE in mitigating the OOD effect and evaluating the explanatory subgraph more reliably? (Section 4.2) • Study of Generator. How effective is our CVGAE in generating the surrogates for the explanatory subgraphs and making them conform to the data distribution? 
(Section 4.3) 4.1 EXPERIMENTAL SETTINGS Datasets & Target GNNs. We first train various target GNN classifiers on the three datasets: • TR3 is a synthetic dataset involving 3000 graphs, each of which is constructed by connecting a random tree-shape base with one motif (house, cycle, crane). The motif type is the ground-truth label, while we treat the motifs as the ground-truth explanations following Ying et al. (2019); Yuan et al. (2020a). A Local Extremum GNN (Ranjan et al., 2019) is trained for classification. • MNIST superpixels (MNISTsup) (Monti et al., 2017) converts the MNIST images into 70,000 superpixel graphs. Every graph with 75 nodes is labeled as one of 10 classes. We train a Splinebased GNN (Fey et al., 2018) as the classifier model. The subgraphs representing digits can be viewed as human explanations. • Graph-SST2 (Yuan et al., 2020b) is based on text sentiment dataset SST2 (Socher et al., 2013) and converts the text sentences to graphs where nodes represent tokens and edges indicate relations between nodes. Each graph is labeled by its sentence sentiment. The node embeddings are initialized by the pre-trained BERT word embeddings (Devlin et al., 2018). Graph Attention Network (Veličković et al., 2018) is trained as the classifier. Ground-Truth Explanations. By “ground-truth”, we follow the prior studies (Ying et al., 2019; Yuan et al., 2020a; Luo et al., 2020) and treat the subgraphs coherent to the model knowledge (e.g., the motif subgraphs in TR3) or human knowledge (e.g., the digit subgraphs in MNISTsup) as the ground-truth explanations. Although such ground-truth explanations might not fit the decision-making process of the model exactly, they contain sufficient discriminative information to help justify the explanations. Note that no ground-truth explanation is available in Graph-SST2. Explainers. To explain the decisions made by these GNNs, we adopt several state-of-the-art explainers, including SA (Baldassarre & Azizpour, 2019), Grad-CAM (Selvaraju et al., 2017), GNNExplainer (Ying et al., 2019), CXPlain (Schwab & Karlen, 2019), PGM-Explainer (Vu & Thai, 2020), Screener (Anonymous, 2021), to generate the explanatory subgraphs. Specifically, top-15%, 20%, 20% of edges on the full graph instance construct the explanatory subgraphs in TR3, MNIST, and Graph-SST2, respectively. We refer readers to Appendix D for more experimental details. 4.2 STUDY OF EXPLANATION EVALUATION (RQ1) Deconfounded Evaluation Performance. For an explanation Gs, the conventional removal-based evaluation framework quantifies its importance as the subgraph-prediction correlation, termed Impre(Gs) = f(Gs); whereas, our DSE framework focuses on the causal effect caused by Gs on Y which is computed based on Equation 1, and we denote it as Impdse(Gs) for short. These importance scores broadly aim to reflect the discriminative information carried by Gs. Thanks to the ground-truth knowledge available in TR3 and MNISTsup, we are able to get a faithful and principled metric to measure the discriminative information amount — the precision Prec(Gs,G+s ) between the ground-truth explanation G+s and the explanatory subgraph Gs. This precision metric allows us to perform a fair comparison between Impre(Gs) and Impdse(Gs) via: ρre = ρ([Prec(Gs,G+s )], [Impre(Gs)]), ρdse = ρ([Prec(Gs,G+s )], [Impdse(Gs)]), (8) where ρ is the correlation coefficient between the lists of precision and importance scores. 
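Since Equation 8 is the quantitative backbone of this comparison, here is a small sketch of how ρre and ρdse could be computed over a set of explanations. The paper does not state which correlation coefficient ρ is; Pearson correlation is assumed here, and the helper names are ours.

```python
# Hedged sketch of the comparison in Equation 8: for each explanatory subgraph we
# have its precision w.r.t. the ground-truth explanation and its importance under
# the removal-based evaluator and under DSE; rho is the correlation between the
# precision list and each importance list.
from scipy.stats import pearsonr

def precision(explained_edges: set, ground_truth_edges: set) -> float:
    """Prec(G_s, G_s^+): fraction of selected edges that are ground-truth edges."""
    if not explained_edges:
        return 0.0
    return len(explained_edges & ground_truth_edges) / len(explained_edges)

def evaluation_faithfulness(precisions, imp_removal, imp_dse):
    """Return (rho_re, rho_dse); higher means the evaluator's scores track the
    actual discriminative content of the explanations more closely."""
    rho_re, _ = pearsonr(precisions, imp_removal)
    rho_dse, _ = pearsonr(precisions, imp_dse)
    return rho_re, rho_dse
```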
We present the results in Figure 4 and have some interesting insights: • Insight 1: Removal-based evaluation hardly reflects the importance of explanations. In most cases, Prec(Gs,G+s ) is negatively correlated with the importance. This again shows that simply discarding a part of a graph could violate some underlying properties of graphs and mislead the target GNN, which is consistent with the adversarial attack works (Dai et al., 2018; Zügner et al., 2018). Moreover, the explainers that target high prediction accuracy, such as GNNExplainer, are easily distracted by the OOD effect and thus miss the important subgraphs. • Insight 2: Deconfounded evaluation quantifies the explanation importance more faithfully. Substantially, ρdse greatly improves after the frontdoor adjustments via the surrogate variable. The most notable case is GNNExplainer in MNISTsup, where ρdse = 0.17 achieves a tremendous increase from ρdse = −0.11. Although our DSE alleviates the OOD problem significantly, weak positive or negative correlations still exist, which indicates the limitation of the current CVGAE. We leave the exploration of higher-quality generation in future work. Revisiting & Reranking Explainers. Here we investigate the rankings of explainers generated from different evaluation frameworks, and further compute the Spearman rank correlations between these evaluation rankings and the reference rankings of explainers. Specifically, for TR3 and MNISTsup with ground-truth explanations, we regard the ranks w.r.t. precision as the references, while obtaining the reference of Graph-SST2 by a user study2. Such a reference offers the human knowledge for explanations and benchmarks the comparison. We show the results in Table 1 and conclude: • Insight 3: DSE presents a more fair and reliable comparison among explainers. The DSEbased rankings are highly consistent with the references, while the removal-based rankings struggle to pass the check. In particular, we observe that for TR3, the unrealistic splicing inputs cause a plain ranking w.r.t. Impre. We find that various input subgraphs are predicted as cycle class. That is, the target GNN model is a deterministic gambler with serious OOD subgraphs. In contrast, DSE outputs a more informative ranking; For MNISTsup, GNNExplainer with the highest precision 270 volunteers are engaged, where each was asked to answer 10 questions randomly sampled from 32 movie reviews and choose the best explanations generated by the explainers. See Appendix E for more details. Table 2: Importance scores or probabilities of subgraphs before and after feature removal. TR3 MNISTsup Graph-SST2 Imp(G) or GMM(G) 0.958−0.520 0.982−0.574 35.3−11.3 Imp(G+s ) or GMM(Gs) 0.438 0.408 24.0 Table 3: Performances of Generators in terms of Validity and Fidelity. TR3 MNISTsup Graph-SST2 Imp(G∗s) VAL↑ FID↓ Imp(G∗s) VAL↑ FID↓ GMM(G∗s) VAL↑ FID↓ Random 0.451 0.013 0.794 0.448 0.040 1.325 38.8 14.8 0.060 VGAE 0.469 0.031 0.754 0.205 -0.203 1.501 37.6 13.6 0.078 ARGVA 0.392 0.061 0.726 0.466 0.058 1.306 31.0 7.0 0.079 CVGAE 0.603 0.165 0.598 0.552 0.144 0.910 45.8 21.8 0.057 is overly underrated by the removal-based evaluation framework, but DSE justifies its position faithfully; For Graph-SST2, although the OOD problem seems to be minor, DSE can still achieve significant improvement. Case Study. We present a case study in Graph-SST2 to illustrate how DSE mitigates the potential OOD problem. See Appendix F for another case study on TR3. In Figure 5, G is a graph predicted as “negative" sentiment. 
The explanatory subgraph Gs emphasizes tokens like “weak” and relations like “n’t→funny”, which is cogent according to human knowledge. However, its removal-based importance is highly underestimated as 0.385, possibly due to its disconnectivity or sparsity after feature removal. To mitigate the OOD problem, DSE samples 50 surrogate graphs from the generator, performs the frontdoor adjustment, and justifies the subgraph importance as 0.913, which shows the effectiveness of our DSE framework. We also observe some limitations of the generator (1) Due to the limited training data, the generators only reflect the distribution of the observed graphs, thus making some generations grammatically wrong. (2) The generations is constrained within the complete graph determined by the node set of the explanatory subgraph, thereby limits the quality of deconfounding. As we mainly focus on the OOD problem, we will leave the ability of the generator as future work. 4.3 STUDY OF GENERATORS (RQ2) The generator plays an important role in our DSE framework, which aims to generate the valid surrogates conform to the data distribution. To evaluate the generator’s quality, we compare it with three baselines: a random generator, a variational graph auto-encoder (VGAE) (Thomas N. Kipf, 2016), and an adversarially regularized variational graph auto-encoder (ARGVA) (Pan et al., 2018). We perform the evaluation based on two metrics: (1) Validity. For the ground-truth explanations G+s that contains all discriminative information of the full graph G, the importance of its surrogate graph G∗s should be higher than itself. The difference between the two importance scores indicates the validity of the generator, thus we define VAL = EG [Imp(G∗s )− Imp(G+s )]. For Graph-SST2 where the class-wise features are intractable, we leverage the embeddings of training graphs and additionally train a Gaussian Mixture Model (GMM) as our distribution prior. Then, we compute the average loglikelihood of random subgraphs after in-filling, thus we have VAL = EGEGs∼Random(G)[GMM(G∗s )− GMM(Gs)]. (2) Fidelity. Towards a finer-grained assessment w.r.t. prediction probability of any random subgraphs, we adopt the metric following (Frye et al., 2021): FID = EGEGsEy|fy(G) − EG∗s [fy(G ∗ s )]|2. This measures how well the surrogates cover the target prediction distribution. Before comparing different generators, we first compute the importance or probabilities of the graphs before and after feature removal, which are summarized in Table 2. When inspecting the Removal’s results without any in-fills, the OOD problem is severe: in TR3 and MNISTsup, the importance of ground-truth subgraphs only reaches 43.8% and 40.8%, respectively, which are far away from the target importance of full graphs. Analogously in Graph-SST2. For the performance of the generators w.r.t. the two metrics, we summarize the average results over 5 runs in Table 3: • The performance of the baselines are poor. This suggests that they can hardly fit the target conditional distribution. • CVGAE outperforms other generators consistently across all cases, thus justifying the rationale and effectiveness of our proposed generator and adversarial training paradigm. For example, in TR3, CVGAE significantly increases the VAL scores and mitigates the OOD effect effectively. Moreover, we conduct ablation studies and sensitivity analysis in Appendix G to better understand the model components and validate the effectiveness of the designed objective. 
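For reference, the two generator metrics of this subsection can be sketched as below. `imp(g)` denotes the evaluator's importance score of a graph (e.g., the predicted probability of the ground-truth class), `prob_y(g)` the predicted probability of a fixed class y (the paper additionally averages over y), and `surrogates(g_s)` draws a list of surrogates G∗s from the trained generator; all three are assumed helper names.

```python
# Hedged sketch of the VAL and FID metrics used in Section 4.3.

def validity(imp, ground_truth_subgraphs, surrogates) -> float:
    """VAL = E_G[ Imp(G_s*) - Imp(G_s^+) ]: in-filling a ground-truth explanation
    should raise its importance if the surrogates are in-distribution."""
    gaps = [imp(surrogates(g_plus)[0]) - imp(g_plus)
            for g_plus in ground_truth_subgraphs]
    return sum(gaps) / len(gaps)

def fidelity(prob_y, full_graphs, random_subgraphs, surrogates) -> float:
    """FID = E_G E_{G_s} | f_y(G) - E_{G_s*}[ f_y(G_s*) ] |^2 (lower is better):
    surrogates built on random subgraphs should reproduce the prediction on G."""
    errs = []
    for g, g_s in zip(full_graphs, random_subgraphs):
        draws = surrogates(g_s)
        p_star = sum(prob_y(s) for s in draws) / len(draws)
        errs.append((prob_y(g) - p_star) ** 2)
    return sum(errs) / len(errs)
```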
5 RELATED WORK Post-hoc Explainability of GNNs. Inspired by the explainability in computer vision, Baldassarre & Azizpour (2019); Pope et al. (2019); Schnake et al. (2020) obtain the gradient-like scores of the model’s outcome or loss w.r.t. the input features. Another line (Luo et al., 2020; Ying et al., 2019; Yuan et al., 2020a; Yue Zhang, 2020; Michael Sejr Schlichtkrull, 2021) learns the masks on graph features. Typically, GNN-Explainer (Ying et al., 2019) applies the instance-wise masks on the messages carried by graph structures, and maximizes the mutual information between the masked graph and the prediction. Going beyond the instance-wise explanation, PGExplainer (Luo et al., 2020) generates masks for multiple instances inductively. Recently, researchers adopt the causal explainability (Pearl & Mackenzie, 2018) to uncover the causation of the model predictions.For instance, CXPlain (Schwab & Karlen, 2019) quantifies a feature’s importance by leaving it out. PGM-Explainer (Vu & Thai, 2020) performs perturbations on graph structures and builds an Bayesian network upon the perturbation-prediction pairs. Causal Screening (Screener) (Anonymous, 2021) measures the importance of an edge as its causal effect, conditional on the previously selected structures. Lately, SubgraphX (Yuan et al., 2021) explores different subgraphs with Monte-Carlo tree search and evaluates subgraphs with the Shapley value (Kuhn & Tucker, 1953). Counterfactual Generation for the OOD Problem. The OOD effect of feature removal has been investigated in some other domains. There are generally two classes of generation (i) Static generation. For example, Fong & Vedaldi. (2017); Dabkowski & Gal (2017) adopted blurred input and random colors for the image reference, respectively. Due to the unnatural in-filling, the generated images are distributional irrespective and can still introduce confounding bias. (ii) Adaptive generation: Chang et al. (2019); Frye et al. (2021); Agarwal et al. (2019); Kim et al. (2020). The generators of these methods, like DSE, overcomes the defects aforementioned, which generates data that conforms to the training distribution. For example, in computer vision, FIDO (Chang et al., 2019) generates imagespecific explanations that respect the data distribution, answering “Which region, when replaced by plausible alternative values, would maximally change classifier output?”. For the difference, firstly, DSE’s formulated importance involves additional adjustment on Gs and guarantees the unbiasedness of introducing the surrogate variable G∗s , which is commonly discarded by the prior works with in-fillings only. Specifically, we offer a comparison with FIDO in Appendix B. Secondly, the distribution of graph data is more complicated to model than other domains. And the proposed CVGAE is carefully designed for graph data, where the contrastive loss and the adversarial training framework are shown to be effective for learning the data distribution of graphs. 6 CONCLUSION In this work, we investigate the OOD effect on the explanation evaluation of GNNs. With a causal view, we uncover the OOD effect — the distribution shift between full graphs and subgraphs, as the confounder between the explanatory subgraphs and the model prediction, making the evaluation less reliable. To mitigate it, we propose a deconfounding evaluation framework that exploits the front-door adjustment to measure the causal effect of the explanatory subgraphs on the model prediction. 
And a deep generative model is devised to achieve the front-door adjustment by generating in-distribution surrogates of the subgraphs. In-so-doing, we can reliably evaluate the explanatory subgraphs. As the evaluation for explanations fundamentally guides the objective in GNNs explainability, this work offers in-depth insights into the future interpretability systems. ETHICS STATEMENT This work raises concerns about the removal-based evaluation in the explainability literature and proposed Deconfounded Subgraph Evaluator. For the user study that involves human subjects, we have detailed the fair evaluation procedure for each explanation generated by the explainers in Appendix E. For real-world applications, we admitted that the modeling of the distribution shift could be a barrier to fulfill their evaluation faithfulness. However, as shown in the paper, improper evaluation under the OOD setting largely biases the inspection of the model’s decision-making process and the quality of explainers. Therefore, we argue that explainability should exhibit faithful explanation evaluation before auditing deep models’ actual decision-making process. And a wrongly evaluated explanation might do more significant harm than an incorrect prediction, as the former could affect the general adjustment (e.g., structure construction) and human perspective (e.g., fairness check) of the model. REPRODUCIBILITY STATEMENT We have made great efforts to ensure reproducibility in this paper. Firstly, we make all causal assumptions clear in Section 2.2, Section 3.1 and Appendix A. For datasets, we have released the synthetic dataset, which can be referred to the link in Section 1, while the other two datasets are publicly available. We also include our code for model construction in the link. In Appendix D, we have reported the settings of hyper-parameters used in our implementation for model training. B COMPARISON OF IMPORTANCE ESTIMATIONS In this section, we compare our proposed estimation via front-door adjustment with the estimation in FIDO (Chang et al., 2019). We rephrased each estimation as Impdse(Gs) = ∑ G∗s P (G∗s = G∗s | Gs = Gs)P (Y | G∗s = G∗s ) = ∑ G∗s P (G∗s = G∗s | Gs = Gs) ∑ G′s P (Y | G∗s = G∗s , Gs = G′s)P (Gs = G′s) (9) and ImpFIDO(Gs) = ∑ G∗s P (G∗s = G∗s | Gs = Gs)P (Y | G∗s = G∗s ) (10) where DSE has alternatively adjusted on Gs (represented as G′s). To make it clear, we consider the underlined part of each equation. For Equation 9, we have ∑ G′s P (Y | G∗s = G∗s , Gs = G′s)P (Gs = G′s) = ∑ G′s P (Y | G∗s = G∗s , Gs = G′s)P (Gs = G′s | G∗s = G∗s ) P (Gs = G′s) P (Gs = G′s | G∗s = G∗s ) = ∑ G′s P (Y,Gs = G′s | G∗s = G∗s ) P (Gs = G′s) P (Gs = G′s | G∗s = G∗s ) (11) While for the formulation of Equation 10, we have P (Y | G∗s = G∗s ) = ∑ G′s P (Y,Gs = G′s | G∗s = G∗s ) (12) In the comparison of these two parts, we can see that Equation 12 is biased under our causal assumption. Intuitively, each contribution of the importance of G∗s on Y should be inversely proportional to the posterior probability, i.e., the probability of G′s given the observation G∗s . However, FIDO fails to consider the causal relation between Gs → G∗s , which biases tha approximation of the genuine causal effect under our causal assumption. Back to our proposed estimation, as we have collected (Gs,G∗s )-pairs via Monte-Carlo simulation, thus additional adjustment on Gs (G′s) can be achieved via Equation 11. 
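To make the contrast in Appendix B operational, here is a hedged Monte-Carlo sketch of the two estimators. `surrogates(g_s, n)` samples n surrogates from the trained CVGAE, `prob_y(g_star)` approximates P(Y | G∗s = g_star) by feeding the surrogate into the GNN, `prob_y_cond(g_star, g_prime)` approximates P(Y | G∗s = g_star, Gs = g_prime), and `subgraph_prior(m)` draws subgraphs from the marginal P(Gs); all four helpers and the sample sizes are assumptions for illustration.

```python
# Hedged Monte-Carlo sketch of the estimators compared in Appendix B.

def importance_fido(g_s, surrogates, prob_y, n=50):
    """Equation 10: average the model's prediction over the surrogates only."""
    draws = surrogates(g_s, n)
    return sum(prob_y(g_star) for g_star in draws) / n

def importance_dse(g_s, surrogates, prob_y_cond, subgraph_prior, n=50, m=20):
    """Equations 1 and 9 (front-door adjustment): on top of the surrogate average,
    adjust over subgraphs G_s' drawn from P(G_s), which removes the residual
    confounding that Equation 12 shows is left in the FIDO-style estimate."""
    draws = surrogates(g_s, n)
    priors = subgraph_prior(m)
    return sum(prob_y_cond(g_star, g_prime)
               for g_star in draws for g_prime in priors) / (n * m)
```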
C DSE FOR DELETION-BASED EVALUATION Based on the idea of deletion-based evaluation, we can instead use the average causal effect (Holland., 1988) (ACE) to look for a smallest deletion graph by conducting two interventions do(Gs = G) (i.e., , no feature removal) and do(Gs = G/s) where G/s denotes the complement of the explanatory graph Gs, meaning that the GNN input receives treatment and control, respectively. Formally, we have Imp fid dse(Gs = Gs) = P (Y | do (Gs = G))− P ( Y | do ( Gs = G/s )) (13) Then, we can similarly adjust for the individual terms as Equation 1, obtaining the unbiased importance value as the result of deletion-based evaluation. D EXPERIMENTAL DETAILS In this paper, all experiments are done on a single Tesla V100 SXM2 GPU (32 GB). The well-trained GNNs used in our experiments achieve high classification accuracies of 0.958 in TR3, 0.982 in MNISTsup, 0.909 in Graph-SST2. Now We introduce the model construction of the proposed generator. The encoder used is Crystal Graph Convolutional Neural Networks (Xie & Grossman, 2018), which contains three Convolutional layers. The encode dimensions in Tr3, MNISTsup, Graph-SST2 datasets are respectively 256, 64, 256. For decoder, we adopt two fully connected layers with ReLU as activation layers, where the numbers of neurons are the same with the encode dimensions. Next, we summarize the pseudocodes for the Adversarial Training in Algorithm 1. Algorithm 1 Generative Adversarial Training. All experiments in the paper used the default values m = 256, α = 2× 10−4, β = 1× 10−4, ω = λ = 5, τ = 0.1 Require: Pr, real graphs’ distribution. r, masking ratio. Require: m, batch size. α, learning rate. β, γ, λ, ω, τ , hyper-parameters. 1: µ← µ0; θ ← θ0 2: while loss in Equation (4) is not converged do 3: # Discriminator’s training 4: Sample {G(i)}mi=1 ∼ Pr a batch from the real graphs. 5: Randomly generate broken graphs {G(i)s }mi=1 from {G(i)}mi=1 with masking ratio r. 6: Embed the nodes through encoder q(Z|{G(i)s ,G(i)}mi=1) 7: Decode the edge probabilities and sample in-fill graphs {Ĝs̄}mi=1 ∼ p(Ĝs̄ | Z) 8: Compute Discriminator’s loss from Equation 7. 9: Update parameter µ with back-propagation. 10: # Generator’s training 11: Repeat the operations from line 4 to 7. 12: Compute Generator’s loss from Equation 4, 5, 6. 13: Update parameter θ with back-propagation. 14: end while For other hyper-parameters, we set r = 0.3, γ = 3 in Tr3 dataset. In MNISTsup and Graph-SST2 datasets, we set r = 0.6, γ = 1. We use Adam (Kingma & Ba, 2014) with weight decay rate 1e-5 for optimization. The maximum number of epochs is 100. E DETAILED USER STUDY The User Study starts by instructions to participants, where they will see a sentence (movie reviews) in each question and its sentiment (Positive of Negative), e.g., Sentence: “is more of an ordeal than an amusement” Sentiment: Negative then several explanations are presented for the answers of “Why the sentiment of this sentence is negative (positive)?”. The explanations (see Figure 7) are shown in graph form (edges indicate relations between words), and colors of more important features are darker. Then they were asked to choose the best explanation(s). A good explanation should be concise, informative, and the rational cause of sentence’s sentiment. In this case, (B) could be the best explanation since “ordeal” mostly decides the negative sentiment, while (A) only identifies plain words like “more than” and (C) is quite the opposite. 

Note that the participants can choose multiple answers and some choices are the same. Thereafter, 10 questions out of 32 questions in total are presented for each participant and we compute the average scores for the explainers. F EXTRA CASE STUDY In this section, we further present a case study for TR3 dataset. In Figure 8, the OOD probabilities for the ground truth explanatory subgraphs in each row remain the same as the edge selection ratios vary, which are 100%, 0%, 0% respectively. In contrast, the evaluation results generated from our DSE have shown strong rationality. Specifically, the importance score compute by our DSE increases with the increasing number of selected ground truth edges. This well validates our DSE framework, where we mitigate the OOD effect by generating the plausible surrogates, making the graphs to be evaluated conforms to the graph distribution in the training data. In this way, the effect of D → Y could hardly affect our assessment for the explanatory subgraph. Thereafter, as the explanatory graph becomes more informative and discriminative, it offers more evidence for the GNN to classify it as the target class which we want to explain, yielding faithful evaluation results. Cycle House Crane Im p d se Figure 8: Three cases in TR3 datasets. Each graph in the left represents the ground truth explanatory subgraphs (red) for explaining a given graph. One of the complement graphs (light blue) generated from CVGAE is also shown with each explanatory subgraph. As the edge selection ratio increases in each row, the importance scores output by our DSE are shown in the right. G ABLATION STUDY & SENSITIVITY ANALYSIS We first conduct ablation studies to investigate the contribution of the contrastive parameter γ and the penalty parameter λ in CVGAE. The ablation models are proposed by I. removing the contrastive loss, i.e., setting γ = 0 and II. removing the penalty term in the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss, i.e., setting λ = 0. The performance of the ablation models is reported in Table 4. We observe that the superiority of CVGAE compared with the ablation model supports our model design by (i) smoothing the model optimization which yields a more performant generator (ii) highlighting the class-discriminative information in the graph embeddings, which implicitly encodes the class information. Also, we conduct sensitivity analysis for CVGAE w.r.t. the hyper-parameters. Specifically, we select λ, the penalty in the WGAN loss (cf. Euqation 7) and γ, the strength of the contrastive loss (cf. Equation 4). While we empirically found the performance is relatively indifferent to other parameters in a wide range. The results are shown in Figure 9. We observe that the best performance is achieved with λ taking values from 1 to 10, and γ taking values from 1 to 10 in TR3 dataset and 0.1 to 5 in MNISTsup and Graph-SST2 datasets. And we found a large λ generally causes an increase in the FID metric, as it may alleviate the penalty on the reconstruction errors, which further makes a larger difference between fy(G) and E[fy(G∗s )].
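Because this appendix studies the loss hyper-parameters γ and λ (alongside ω and τ), a compact sketch of how the losses of Equations 4-7 combine may help locate them. The helpers below (`recon_logp`, `kl_to_prior`, the embeddings `z_g`, `z_pos`, `z_neg`, and the critic scores `d_real`, `d_fake`) are assumed wrappers around the CVGAE and the discriminator; the gradient penalty of Equation 7 (weighted by λ) is noted but not implemented.

```python
# Hedged sketch of the objective in Equation 4, showing where beta, gamma, omega
# (and, via the omitted gradient penalty, lambda) enter the training signal.
import torch

def vae_loss(recon_logp, kl_to_prior, beta):
    """Equation 5: reconstruction term plus beta-weighted KL to the prior p(Z)."""
    return -recon_logp + beta * kl_to_prior

def contrastive_loss(z_g, z_pos, z_neg, tau=0.1):
    """Equation 6: pull the graph embedding z_g towards same-class embeddings (B+)
    and away from different-class embeddings (B-)."""
    pos = torch.exp(z_g @ z_pos.T / tau).sum()
    neg = torch.exp(z_g @ z_neg.T / tau).sum()
    return -torch.log(pos / (pos + neg))

def generator_objective(l_vae, l_contrast, d_real, d_fake, gamma, omega):
    """Generator side of Equation 4: L_VAE + gamma * L_C + omega * L_D, where L_D
    is the Wasserstein critic gap of Equation 7 (gradient penalty omitted)."""
    l_d = d_real.mean() - d_fake.mean()
    return l_vae + gamma * l_contrast + omega * l_d

def discriminator_objective(d_real, d_fake):
    """Discriminator side: maximise L_D, i.e. minimise its negation."""
    return -(d_real.mean() - d_fake.mean())
```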
1. What is the focus of the paper in terms of graph representation learning? 2. What are the strengths of the proposed approach, particularly in addressing out-of-distribution effects? 3. Are there any potential weaknesses or areas for improvement in the methodology or experiments presented in the paper?
Summary Of The Paper
This paper does excellent work in identifying the out-of-distribution shift between the subgraph and the full graph as a confounder. Further, it proposes a conditional variational graph auto-encoder for assessing the causal effect of the subgraph on the prediction, and introduces a surrogate variable to handle this out-of-distribution effect. Through adversarial training, the effectiveness of the proposed model is verified.
Review
This paper proposes a surrogate variable G_s^* to handle the out-of-distribution effect and finds an interesting way to evaluate the causal effect between the subgraph and the full graph.
Strengths:
- The out-of-distribution problem has not been explored before, as the paper claims.
- The conditional variational graph auto-encoder is well proposed and well trained.
- I like the experimental settings, especially the 3 insights. These results sufficiently verify the claims and advantages of the paper.
Weaknesses:
- I did not find clear weaknesses.
ICLR
Title Deconfounding to Explanation Evaluation in Graph Neural Networks Abstract Explainability of graph neural networks (GNNs) aims to answer “Why the GNN made a certain prediction?”, which is crucial to interpret the model prediction. The feature attribution framework distributes a GNN’s prediction to its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating the explanation (i.e., subgraph importance), a standard way is to audit the model prediction based on the subgraph solely. However, we argue that a distribution shift exists between the full graph and the subgraph, causing the out-ofdistribution problem. Furthermore, with an in-depth causal analysis, we find the OOD effect acts as the confounder, which brings spurious associations between the subgraph importance and model prediction, making the evaluation less reliable. In this work, we propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction. While the distribution shift is generally intractable, we employ the front-door adjustment and introduce a surrogate variable of the subgraphs. Specifically, we devise a generative model to generate the plausible surrogates that conform to the data distribution, thus approaching the unbiased estimation of subgraph importance. Empirical results demonstrate the effectiveness of DSE in terms of explanation fidelity. N/A Explainability of graph neural networks (GNNs) aims to answer “Why the GNN made a certain prediction?”, which is crucial to interpret the model prediction. The feature attribution framework distributes a GNN’s prediction to its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating the explanation (i.e., subgraph importance), a standard way is to audit the model prediction based on the subgraph solely. However, we argue that a distribution shift exists between the full graph and the subgraph, causing the out-ofdistribution problem. Furthermore, with an in-depth causal analysis, we find the OOD effect acts as the confounder, which brings spurious associations between the subgraph importance and model prediction, making the evaluation less reliable. In this work, we propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction. While the distribution shift is generally intractable, we employ the front-door adjustment and introduce a surrogate variable of the subgraphs. Specifically, we devise a generative model to generate the plausible surrogates that conform to the data distribution, thus approaching the unbiased estimation of subgraph importance. Empirical results demonstrate the effectiveness of DSE in terms of explanation fidelity. 1 INTRODUCTION Explainability of graph neural networks (GNNs) (Hamilton et al., 2017; Dwivedi et al., 2020) is crucial to model understanding and reliability in real-world applications, especially when about fairness and privacy (Ying et al., 2019; Luo et al., 2020). It aims to provide insight into how predictor models work, answering “Why the target GNN made a certain prediction?”. Towards this end, a variety of explainer models are proposed for feature attribution (Selvaraju et al., 2017; Ying et al., 2019; Luo et al., 2020; Vu & Thai, 2020), which decomposes the predictor’s prediction as contributions (i.e., importance) of its input features (e.g., edges, nodes). 
While feature attribution assigns the features with importance scores, it redistributes the graph features and creates a new distribution different from that of the original full graphs, from which a subgraph is sampled as the explanation. Such sampling process is referred to as feature removal (Covert et al., 2020). Then, to assess the explanatory subgraph, the current evaluation frameworks use the feature removal principle — (1) only feed the subgraph into the target predictor, discarding the other features; (2) measure the importance of the subgraph based on its information amount to recover the model’s prediction. Such subgraph-prediction correlations uncovered by the removal-based evaluator should offer a faithful inspection of the predictor’s decision-making process and assess the fidelity of the explainers reliably. However, feature removal brings the out-of-distribution (OOD) problem (Frye et al., 2020; Chang et al., 2019; Lukas Faber, 2021): the distribution shift from full graphs to subgraphs likely violates underlying properties, including node degree distribution (Leskovec et al., 2005) and domain-specific constraints (Liu et al., 2018) of the full graphs. For example, graph properties of chemical molecules, such as the valency rules, impose some constraints on syntactically valid molecules (Liu et al., 2018); hence, simply removing some bonds (edges) or atoms (nodes) creates invalid molecular subgraphs that never appear in the training dataset. Such OOD subgraphs could manipulate the predictor’s Under review as a conference paper at ICLR 2022 𝑮: Full Graph 𝑮𝒔: Subgraph 𝒀: Predicate Logits 𝐬 𝓖 House Cycle Crane 0.21 0.70 Target Predictor Crane 𝓖Cycle House𝓖𝒔𝟏𝑫: Distribution Shift 𝓖𝒔𝟐 𝒔∗ 𝐬 Front-door Adjustment𝓖𝒔𝟐𝓖𝒔𝟏 (a) Feature Removal to Evaluate Explanatory Subgraph Gs 𝑮: Full Graph 𝑮𝒔: Subgraph 𝒀: Predicate Logits 𝐬 𝓖 House Cycle Crane 0.21 0.70 Target Predictor Crane 𝓖Cycle House𝓖𝒔𝟏𝑫: Distribution Shift 𝓖𝒔𝟐 𝒔∗ 𝐬 Front-door Adjustment𝓖𝒔𝟐𝓖𝒔𝟏 (b) SCM I Figure 1: (a) A real example in TR3. The GNN predictor classifies the full graph as ‘House”. On subgraphs Gs1 and Gs2, the prediction probabilities of being “House” are respectively 0.21 and 0.70. (b) The structural causal model represents the causalities among variables: G as the input graph, D as the unobserved distribution shift, Gs as the explanatory subgraph, and Y as the model prediction. outcome arbitrarily (Dai et al., 2018; Zügner et al., 2018), generates erroneous predictions, and limits the reliability of the evaluation process. Here we demonstrate the OOD effect by a real example in Figure 1a, where the trained ASAP (Ranjan et al., 2020) predictor has classified the input graph as “House” for its attached motif (see Section 4 for more details). On the ground-truth explanation Gs1, the output probability of the “House” class is surprisingly low (0.21). While for Gs2 with less discriminative information, the outputs probability of the “House” class (0.70) is higher. Clearly, the removal-based evaluator assigns the OOD subgraphs with unreliable importance scores, which are unfaithful to the predictor’s decision. The OOD effect has not been explored in evaluating GNN explanations, to the best of our knowledge. We rigorously investigate it from a causal view (Pearl et al., 2016; Pearl, 2000; Pearl & Mackenzie, 2018). Figure 1b represents our causal assumption via a structural causal model (SCM) (Pearl et al., 2016; Pearl, 2000), where we target the causal effect of Gs on Y . 
Nonetheless, as a confounder between Gs and Y , distribution shift D opens the spurious path Gs ← D → Y . By “spurious”, we mean that the path lies outside the direct causal path from Gs to Y , making Gs and Y spuriously correlated and yielding an erroneous effect. And one can hardly distinguish between the spurious correlation and causative relations (Pearl et al., 2016). Hence, auditing Y on Gs suffers from the OOD effect and wrongly evaluates the importance of Gs. Motivated by our causal insight, we propose a novel evaluation paradigm, Deconfounded Subgraph Evaluator (DSE), to faithfully measure the causal effect of explanatory subgraphs on the prediction. 𝑮: Full Graph 𝑮𝒔: Subgraph 𝒀: Predicate Logits 𝑮𝐬𝑮 𝒀 𝑫 𝑮𝒔∗ 𝑮𝐬𝑮 𝒀 𝑫 Front-door Adjustment reliably and further guide explainers to generate faithful explanations. In a nutshell, our contributions are: • From a causal perspective, we argue that the OOD effect is the confounder that causes spurious correlations between subgraph importance and model prediction. • We propose a deconfounding paradigm, DSE, which exploits the front-door adjustment to mitigate the out-of-distribution effect and evaluate the explanatory subgraphs unbiasedly. • We validate the effectiveness of our framework over various explainers, target GNN models, and datasets. Significant boosts are achieved over the conventional feature removal techniques. Code and datasets are available at: https://anonymous.4open.science/r/DSE-24BC/. 2 A CAUSAL VIEW OF EXPLANATION EVALUATION Here we begin with the causality-based view of feature removal in Section 2.1 and present our causal assumption to inspect the OOD effect in Section 2.2. 2.1 PROBLEM FORMULATION Without loss of generality, we focus on the graph classification task: a well-trained GNN predictor f takes the graph variable G as input and predicts the class Y ∈ {1, · · · ,K}, i.e., Y = f(G). Generation of Explanatory Subgraphs. Post-hoc explainability typically considers the question “Why the GNN predictor f made certain prediction?”. A prevalent solution is building an explainer model to conduct feature attribution (Ying et al., 2019; Luo et al., 2020; Pope et al., 2019). It decomposes the prediction into the contributions of the input features, which redistributes the probability of features according to their importance and sample the salient features as an explanatory subgraph Gs. Specifically, Gs can be a structure-wise (Ying et al., 2019; Luo et al., 2020) or featurewise (Ying et al., 2019) subgraph of G. In this paper, we focus on the structural features. That is, for graph G = (N , E) with the edge set E and the node set N , the explanatory subgraph Gs = (Ns, Es) consists of a subset of edges Es ⊂ E and their endpoints Ns = {u, v|(u, v) ∈ Es}. Evaluation of Explanatory Subgraphs. Insertion-based evaluation by feature removal (Covert et al., 2020; Dabkowski & Gal, 2017) aims to check whether the subgraph is the supporting substructure 1 that alone allows a confident classification. We systematize this paradigm as three steps: (1) divide the full graph G into two parts, the subgraph Gs and the complement Gs; (2) feed Gs into the target GNN f , while discarding Gs; and (3) obtain the model prediction on Gs, to assess its discriminative information to recover the prediction on G. Briefly, at the core of the evaluator is the subgraphprediction correlation. 
However, as discussed in Section 1, the OOD effect is inherent in the removal-based evaluator, hindering the subgraph-prediction correlation from accurately estimating the subgraph importance.
2.2 STRUCTURAL CAUSAL MODEL
To inspect the OOD effect rigorously, we take a causal look at the evaluation process with a Structural Causal Model (SCM I) in Figure 1b. We denote the abstract data variables by the nodes, where the directed links represent the causality. The SCM indicates how the variables interact with each other through the graphical definition of causation:
• G → Gs ← D. We introduce an abstract distribution shift variable D to sample a subgraph Gs from the edge distributions of the full graph G.
• Gs → Y ← D. We denote Y as the prediction variable (e.g., the logits output), which is determined by (1) the direct effect from Gs, and (2) the confounding effect caused by D. In particular, the former causation is the focus of this work. We refer readers to Appendix A, where we offer an elaboration of D.
With our SCM assumption, directly measuring the importance of explanatory subgraphs is distracted by the backdoor path (Pearl, 2000) Gs ← D → Y. This path introduces confounding associations between Gs and Y, which makes Gs and Y spuriously correlated, i.e., biases the subgraph-prediction correlations, thus making the evaluator invalid. How to mitigate the OOD effect and quantify Gs’s genuine causal effect on Y remains largely unexplored in the literature and is the focus of our work.
3 DECONFOUNDED EVALUATION OF EXPLANATORY SUBGRAPHS
In this section, we propose a novel deconfounding framework to evaluate the explanatory subgraphs in a trustworthy way. Specifically, we first leverage the front-door adjustment (Pearl, 2000) to formulate a causal objective in Section 3.1. We then devise a conditional variational graph auto-encoder (CVGAE) as an effective implementation of our objective in Section 3.2.
1 We focus on insertion-based evaluation here, while we discuss deletion-based evaluation in Appendix C.
3.1 FRONT-DOOR ADJUSTMENT
To the best of our knowledge, our work is the first to adopt causal theory to solve the OOD problem in the explanation evaluation of GNNs. To pursue the causal effect of Gs on Y, we perform the calculus of the causal intervention P(Y = y | do(Gs = Gs)). Specifically, the do-calculus (Pearl, 2000; Pearl et al., 2016) intervenes on the subgraph variable Gs by cutting off its incoming links and assigning it a certain value Gs, making it unaffected by its causal parents G and D. From inspection of the SCM in Figure 1b, the distribution shift D acts as the confounder between Gs and Y, and opens the backdoor path Gs ← D → Y. However, as D is hardly measurable, we cannot use the backdoor adjustment (Pearl, 2000; Pearl et al., 2016) to block the backdoor path from Gs to Y. Hence, the causal effect of Gs on Y is not identifiable from SCM I. However, we can go much further by considering SCM II in Figure 2 instead, where a mediating variable G∗s is introduced between Gs and Y:
• Gs → G∗s. G∗s is the surrogate variable of Gs, which completes Gs so that it lies in the data distribution. First, it originates from and contains Gs; specifically, it imagines how the possible full graphs should look when observing the subgraph Gs. Second, G∗s should follow the data distribution and respect the inherent knowledge of graph properties, thus no link exists between D and G∗s.
• G∗s → Y.
This is based on our causal assumption that the causality-related information of Gs on Y, i.e., the discriminative information of Gs for making the prediction, is well-preserved by G∗s. Thus, with the core of Gs, G∗s is qualified to serve as the mediator which further results in the model prediction.
With SCM II, we can exploit the front-door adjustment (Pearl, 2000; Pearl et al., 2016) instead to quantify the causal effect of Gs on Y. Specifically, by summing over the possible surrogate graphs G∗s of G∗s, we chain two identifiable partial effects — of Gs on G∗s and of G∗s on Y — together:

$$
\begin{aligned}
P(Y \mid do(G_s=\mathcal{G}_s)) &= \sum_{\mathcal{G}^*_s} P(Y \mid do(G^*_s=\mathcal{G}^*_s))\, P(G^*_s=\mathcal{G}^*_s \mid do(G_s=\mathcal{G}_s)) \\
&= \sum_{\mathcal{G}^*_s} \sum_{\mathcal{G}'_s} P(Y \mid G^*_s=\mathcal{G}^*_s, G_s=\mathcal{G}'_s)\, P(G_s=\mathcal{G}'_s)\, P(G^*_s=\mathcal{G}^*_s \mid do(G_s=\mathcal{G}_s)) \\
&= \sum_{\mathcal{G}^*_s} \sum_{\mathcal{G}'_s} P(Y \mid G^*_s=\mathcal{G}^*_s, G_s=\mathcal{G}'_s)\, P(G_s=\mathcal{G}'_s)\, P(G^*_s=\mathcal{G}^*_s \mid G_s=\mathcal{G}_s),
\end{aligned}
\quad (1)
$$

Specifically, we have P(G∗s | do(Gs = Gs)) = P(G∗s | Gs = Gs), as Gs is the only parent of G∗s. And we distinguish the Gs in our target expression P(Y | do(Gs = Gs)) from G′s, the latter of which is adjusted over to pursue P(Y | do(G∗s = G∗s)). With the data of (Gs, G∗s) pairs, we can obtain P(Y | G∗s = G∗s, Gs = G′s) by feeding the surrogate graph G∗s into the GNN predictor, conditional on the subgraph G′s; similarly, we can estimate P(Gs = G′s) statistically; P(G∗s = G∗s | Gs = Gs) is the conditional distribution of the surrogate variable after observing the subgraphs. As a result, this front-door adjustment yields a consistent estimation of Gs’s effect on Y and avoids the confounding associations from the OOD effect.
3.2 DEEP GENERATIVE MODEL
However, it is non-trivial to instantiate G∗s and collect the (Gs, G∗s) pairs. We take inspiration from the great success of generative models and devise a novel probabilistic model, the conditional variational graph auto-encoder (CVGAE), together with an adversarial training framework, to generate G∗s.
Conditional Generation. Inspired by previous works (Thomas N. Kipf, 2016; Liu et al., 2018), we model the data distribution via a generative model gθ parameterized by θ. It is composed of an encoder q(Z | G, Gs) and a decoder p(G∗s | Z). Specifically, the encoder q(Z | G, Gs) embeds each node i in G with a stochastic representation zi, and summarizes all node representations in Z:

$$
q(Z \mid G, G_s) = \prod_{i=1}^{N} q(z_i \mid G, G_s), \quad \text{with} \quad q(z_i \mid G, G_s) = \mathcal{N}\!\left(z_i \,\middle|\, [\mu_{1i}, \mu_{2i}],\, \begin{bmatrix} \sigma^2_{1i} & 0 \\ 0 & \sigma^2_{2i} \end{bmatrix}\right) \quad (2)
$$

where zi is sampled from a diagonal normal distribution with mean vector [µ1i, µ2i] and standard deviation vector diag(σ²1i, σ²2i); µ1 = fµ(G) and log σ1 = fσ(G) denote the matrices of mean vectors µ1i and standard deviation vectors log σ1i respectively, which are derived from two GNN models fµ and fσ on top of the full graph G; similarly, µ2 = fµ(Gs) and log σ2 = fσ(Gs) are on top of the subgraph Gs. Then, the decoder p(G∗s | Z) generates the valid surrogates:

$$
p(G^*_s \mid Z) = \prod_{i}^{N} \prod_{j}^{N} p(A_{ij} \mid z_i, z_j), \quad \text{with} \quad p(A_{ij}=1 \mid z_i, z_j) = f_A([z_i, z_j]), \quad (3)
$$

where Aij = 1 indicates the existence of an edge between nodes i and j; fA is an MLP, which takes the concatenation of node representations zi and zj as the input and outputs the probability of Aij = 1. Leveraging the variational graph auto-encoder, we are able to generate counterfactual edges that never appear in G and sample G∗s from the conditional distribution p(G∗s | Z), formally G∗s ∼ p(G∗s | Z). As a result, P(G∗s = G∗s | Gs = Gs) in Equation 1 is identified by p(G∗s | Z). The quality of the generator directly affects the quality of the surrogate graphs, and further determines how well the front-door adjustment is conducted.
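To make Equations 2–3 concrete, here is a minimal PyTorch sketch of the conditional encoder and edge decoder. The `gnn_mu`/`gnn_sigma` backbones are placeholders for the two GNNs fµ and fσ (the paper's implementation uses CGCNN layers, cf. Appendix D); their exact interface is an assumption.

```python
import torch
import torch.nn as nn

class CVGAESketch(nn.Module):
    """Conditional VGAE: q(Z | G, G_s) from Eq. 2 and p(G*_s | Z) from Eq. 3."""

    def __init__(self, gnn_mu, gnn_sigma, hidden_dim):
        super().__init__()
        self.gnn_mu, self.gnn_sigma = gnn_mu, gnn_sigma   # f_mu, f_sigma, applied to both G and G_s
        self.edge_mlp = nn.Sequential(                    # f_A in Eq. 3
            nn.Linear(4 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))

    def encode(self, x, edge_index, sub_edge_index):
        # Eq. 2: node-wise Gaussian conditioned on the full graph G and the subgraph G_s.
        mu = torch.cat([self.gnn_mu(x, edge_index), self.gnn_mu(x, sub_edge_index)], dim=-1)
        log_sigma = torch.cat([self.gnn_sigma(x, edge_index), self.gnn_sigma(x, sub_edge_index)], dim=-1)
        z = mu + torch.randn_like(mu) * log_sigma.exp()   # reparameterization trick
        return z, mu, log_sigma

    def decode(self, z, src, dst):
        # Eq. 3: p(A_ij = 1 | z_i, z_j) = f_A([z_i, z_j]) for candidate edges (src, dst).
        pair = torch.cat([z[src], z[dst]], dim=-1)
        return torch.sigmoid(self.edge_mlp(pair)).squeeze(-1)
```

Sampling a surrogate G∗s then amounts to thresholding or Bernoulli-sampling the decoded edge probabilities while always keeping the edges of Gs.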
Next, we will detail an adversarial training framework to optimize the generator, which is distinct from the standard training of a VAE.
Adversarial Training. To achieve high-quality generation, we take inspiration from adversarial training (Goodfellow et al., 2020; Yue et al., 2021) and devise the following training objective:

$$
\min_{\theta}\; \mathcal{L}_{\mathrm{VAE}} + \gamma \mathcal{L}_{\mathrm{C}} + \max_{\mu}\; \omega \mathcal{L}_{\mathrm{D}}, \quad (4)
$$

where γ and ω are trade-off hyper-parameters. These losses are carefully designed to ensure that the generation follows the data distribution. Next, we elaborate on each of them.

$$
\mathcal{L}_{\mathrm{VAE}} = -\mathbb{E}_{\mathcal{G}}\big[\mathbb{E}_{q(Z \mid G, G_s)}[\log p(\hat{\mathcal{G}}_s \mid Z)]\big] + \beta\, \mathbb{E}_{\mathcal{G}}\big[D_{\mathrm{KL}}(q(Z \mid G, G_s)\,\|\,p(Z))\big], \quad (5)
$$

We first minimize the β-VAE loss (Higgins et al., 2017), where the first term is the reconstruction loss responsible for predicting the probability of edges’ existence, and the second term is the KL-divergence between the variational and prior distributions. Here we resort to the isotropic Gaussian distribution p(Z) = ∏i p(zi) = ∏i N(zi | 0, I) as the prior. β reweighs the KL-divergence, which promises to learn the disentangled factors in Z (Higgins et al., 2017; Yue et al., 2021; Suter et al., 2019).
Moreover, we highlight the class-discriminative information in Z by encouraging the agreement between graph representations of the same class compared to those of different classes. Technically, the contrastive loss is adopted:

$$
\mathcal{L}_{\mathrm{C}} = -\mathbb{E}_{\mathcal{G}}\Big[\log \frac{\sum_{\mathcal{G}' \in \mathcal{B}^{+}} \exp\big(s(z_{\mathcal{G}}, z_{\mathcal{G}'})/\tau\big)}{\sum_{\mathcal{G}'' \in \mathcal{B}^{+} \cup \mathcal{B}^{-}} \exp\big(s(z_{\mathcal{G}}, z_{\mathcal{G}''})/\tau\big)}\Big], \quad (6)
$$

where zG is the representation of G that aggregates all node representations Z together; s is the similarity function, which is given by an inner product here; τ is the temperature hyper-parameter; B+ is the set of graphs having the same class as G, while the graphs in B− have different classes from G. Minimizing this loss enables the generator to go beyond generic knowledge and uncover the class-wise patterns of graph data.
Besides, we introduce a discriminative model dµ to distinguish the generated graphs. Specifically, we set it as a probability-conditional GNN (Fey & Lenssen, 2019) parameterized by µ. It takes a graph as input and outputs a score between 0 and 1, which indicates the confidence of the graph being realistic. Hence, given a real graph G with the ground-truth label y, we can use the generator gθ to generate G∗s. Then the discriminator learns to assign G a large score while labeling G∗s with a small score. To optimize the discriminator, we adopt the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss:

$$
\mathcal{L}_{\mathrm{D}} = \mathbb{E}_{\mathcal{G}}\Big[\mathbb{E}_{p(G^*_s \mid Z)}\big[d(\mathcal{G}, y) - d(\mathcal{G}^*_s, y) - \lambda\,(\|\nabla_{\mathcal{G}^*_s} d(\mathcal{G}^*_s, y)\|_2 - 1)^2\big]\Big], \quad (7)
$$

where d(G∗s, y) is the discriminator’s score for the generated graph G∗s under the label y, and λ is a hyper-parameter weighting the gradient penalty. By playing the min-max game between the generator and the discriminator in Equation 4, the generator learns to create surrogate graphs that plausibly come from the data distribution.
Subgraph Evaluation. With the well-trained generator g∗θ, whose parameters are fixed, we now approximate the causal effect of Gs on Y. Here we conduct Monte-Carlo simulation based on g∗θ to sample a set of plausible surrogate graphs {G∗s} from p(G∗s | Z). Having collected the (Gs, G∗s) data, we can arrive at the estimation in Equation 1.
4 EXPERIMENTS
We aim to answer the following research questions:
• Study of Explanation Evaluation. How effective is our DSE in mitigating the OOD effect and evaluating the explanatory subgraph more reliably? (Section 4.2)
• Study of Generator. How effective is our CVGAE in generating the surrogates for the explanatory subgraphs and making them conform to the data distribution?
(Section 4.3)
4.1 EXPERIMENTAL SETTINGS
Datasets & Target GNNs. We first train various target GNN classifiers on the three datasets:
• TR3 is a synthetic dataset involving 3000 graphs, each of which is constructed by connecting a random tree-shaped base with one motif (house, cycle, crane). The motif type is the ground-truth label, while we treat the motifs as the ground-truth explanations, following Ying et al. (2019); Yuan et al. (2020a). A Local Extremum GNN (Ranjan et al., 2019) is trained for classification.
• MNIST superpixels (MNISTsup) (Monti et al., 2017) converts the MNIST images into 70,000 superpixel graphs. Every graph with 75 nodes is labeled as one of 10 classes. We train a Spline-based GNN (Fey et al., 2018) as the classifier model. The subgraphs representing digits can be viewed as human explanations.
• Graph-SST2 (Yuan et al., 2020b) is based on the text sentiment dataset SST2 (Socher et al., 2013) and converts the text sentences to graphs where nodes represent tokens and edges indicate relations between nodes. Each graph is labeled by its sentence sentiment. The node embeddings are initialized by the pre-trained BERT word embeddings (Devlin et al., 2018). A Graph Attention Network (Veličković et al., 2018) is trained as the classifier.
Ground-Truth Explanations. By “ground-truth”, we follow the prior studies (Ying et al., 2019; Yuan et al., 2020a; Luo et al., 2020) and treat the subgraphs coherent with the model knowledge (e.g., the motif subgraphs in TR3) or human knowledge (e.g., the digit subgraphs in MNISTsup) as the ground-truth explanations. Although such ground-truth explanations might not fit the decision-making process of the model exactly, they contain sufficient discriminative information to help justify the explanations. Note that no ground-truth explanation is available in Graph-SST2.
Explainers. To explain the decisions made by these GNNs, we adopt several state-of-the-art explainers, including SA (Baldassarre & Azizpour, 2019), Grad-CAM (Selvaraju et al., 2017), GNNExplainer (Ying et al., 2019), CXPlain (Schwab & Karlen, 2019), PGM-Explainer (Vu & Thai, 2020), and Screener (Anonymous, 2021), to generate the explanatory subgraphs. Specifically, the top 15%, 20%, and 20% of edges of the full graph instance construct the explanatory subgraphs in TR3, MNISTsup, and Graph-SST2, respectively. We refer readers to Appendix D for more experimental details.
4.2 STUDY OF EXPLANATION EVALUATION (RQ1)
Deconfounded Evaluation Performance. For an explanation Gs, the conventional removal-based evaluation framework quantifies its importance as the subgraph-prediction correlation, termed Impre(Gs) = f(Gs); whereas our DSE framework focuses on the causal effect of Gs on Y, which is computed based on Equation 1 and denoted as Impdse(Gs) for short. These importance scores broadly aim to reflect the discriminative information carried by Gs. Thanks to the ground-truth knowledge available in TR3 and MNISTsup, we are able to obtain a faithful and principled metric to measure the amount of discriminative information — the precision Prec(Gs, G+s) between the ground-truth explanation G+s and the explanatory subgraph Gs. This precision metric allows us to perform a fair comparison between Impre(Gs) and Impdse(Gs) via:

$$
\rho_{\mathrm{re}} = \rho\big([\mathrm{Prec}(\mathcal{G}_s, \mathcal{G}^{+}_s)],\, [\mathrm{Imp}_{\mathrm{re}}(\mathcal{G}_s)]\big), \qquad \rho_{\mathrm{dse}} = \rho\big([\mathrm{Prec}(\mathcal{G}_s, \mathcal{G}^{+}_s)],\, [\mathrm{Imp}_{\mathrm{dse}}(\mathcal{G}_s)]\big), \quad (8)
$$

where ρ is the correlation coefficient between the lists of precision and importance scores.
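As an illustration of how Imp_dse and the correlation check in Equation 8 can be computed, the sketch below Monte-Carlo approximates Equation 1 with surrogates drawn from the trained generator. The `generator.sample_surrogate` and `gnn_predictor(graph, condition=...)` interfaces are hypothetical stand-ins for "feeding the surrogate graph into the predictor, conditional on a subgraph", and the uniform treatment of the adjustment subgraphs is a simplifying assumption.

```python
import numpy as np
import torch

@torch.no_grad()
def dse_importance(gnn_predictor, generator, subgraph, adjustment_subgraphs, target_class, n_samples=50):
    """Monte-Carlo estimate of Eq. 1: average P(Y | G*_s, G'_s) over surrogates G*_s and subgraphs G'_s."""
    probs = []
    for _ in range(n_samples):
        surrogate = generator.sample_surrogate(subgraph)       # G*_s ~ P(G*_s | G_s)
        for cond in adjustment_subgraphs:                      # G'_s ~ P(G_s), taken as uniform here
            logits = gnn_predictor(surrogate, condition=cond)  # P(Y | G*_s = G*_s, G_s = G'_s)
            probs.append(torch.softmax(logits, dim=-1)[0, target_class].item())
    return float(np.mean(probs))

def precision_importance_correlation(precisions, importances):
    """rho in Eq. 8: correlation between Prec(G_s, G+_s) and the importance scores."""
    return float(np.corrcoef(np.asarray(precisions), np.asarray(importances))[0, 1])
```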
We present the results in Figure 4 and draw some interesting insights:
• Insight 1: Removal-based evaluation hardly reflects the importance of explanations. In most cases, Prec(Gs, G+s) is negatively correlated with the importance. This again shows that simply discarding a part of a graph could violate some underlying properties of graphs and mislead the target GNN, which is consistent with the adversarial attack works (Dai et al., 2018; Zügner et al., 2018). Moreover, the explainers that target high prediction accuracy, such as GNNExplainer, are easily distracted by the OOD effect and thus miss the important subgraphs.
• Insight 2: Deconfounded evaluation quantifies the explanation importance more faithfully. Substantially, ρdse greatly improves after the front-door adjustments via the surrogate variable. The most notable case is GNNExplainer in MNISTsup, where ρdse = 0.17 achieves a tremendous increase from ρre = −0.11. Although our DSE alleviates the OOD problem significantly, weak positive or negative correlations still exist, which indicates the limitation of the current CVGAE. We leave the exploration of higher-quality generation to future work.
Revisiting & Reranking Explainers. Here we investigate the rankings of explainers generated from different evaluation frameworks, and further compute the Spearman rank correlations between these evaluation rankings and the reference rankings of explainers. Specifically, for TR3 and MNISTsup with ground-truth explanations, we regard the ranks w.r.t. precision as the references, while obtaining the reference of Graph-SST2 by a user study2. Such a reference offers the human knowledge for explanations and benchmarks the comparison. We show the results in Table 1 and conclude:
• Insight 3: DSE presents a fairer and more reliable comparison among explainers. The DSE-based rankings are highly consistent with the references, while the removal-based rankings struggle to pass the check. In particular, we observe that for TR3, the unrealistic splicing inputs cause a plain ranking w.r.t. Impre. We find that various input subgraphs are predicted as the cycle class; that is, the target GNN model is a deterministic gambler with serious OOD subgraphs. In contrast, DSE outputs a more informative ranking. For MNISTsup, GNNExplainer with the highest precision is overly underrated by the removal-based evaluation framework, but DSE justifies its position faithfully. For Graph-SST2, although the OOD problem seems to be minor, DSE can still achieve significant improvement.
2 70 volunteers are engaged, where each was asked to answer 10 questions randomly sampled from 32 movie reviews and choose the best explanations generated by the explainers. See Appendix E for more details.
Table 2: Importance scores or probabilities of subgraphs before and after feature removal.
                          TR3              MNISTsup         Graph-SST2
Imp(G) or GMM(G)          0.958 (−0.520)   0.982 (−0.574)   35.3 (−11.3)
Imp(G+s) or GMM(Gs)       0.438            0.408            24.0
Table 3: Performances of generators in terms of Validity and Fidelity.
                 TR3                         MNISTsup                    Graph-SST2
         Imp(G∗s)   VAL↑     FID↓     Imp(G∗s)   VAL↑     FID↓     GMM(G∗s)   VAL↑    FID↓
Random   0.451      0.013    0.794    0.448      0.040    1.325    38.8       14.8    0.060
VGAE     0.469      0.031    0.754    0.205      −0.203   1.501    37.6       13.6    0.078
ARGVA    0.392      0.061    0.726    0.466      0.058    1.306    31.0       7.0     0.079
CVGAE    0.603      0.165    0.598    0.552      0.144    0.910    45.8       21.8    0.057
Case Study. We present a case study in Graph-SST2 to illustrate how DSE mitigates the potential OOD problem. See Appendix F for another case study on TR3. In Figure 5, G is a graph predicted as “negative” sentiment.
The explanatory subgraph Gs emphasizes tokens like “weak” and relations like “n’t→funny”, which is cogent according to human knowledge. However, its removal-based importance is highly underestimated as 0.385, possibly due to its disconnectivity or sparsity after feature removal. To mitigate the OOD problem, DSE samples 50 surrogate graphs from the generator, performs the front-door adjustment, and justifies the subgraph importance as 0.913, which shows the effectiveness of our DSE framework. We also observe some limitations of the generator: (1) due to the limited training data, the generator only reflects the distribution of the observed graphs, thus making some generations grammatically wrong; (2) the generation is constrained within the complete graph determined by the node set of the explanatory subgraph, which limits the quality of deconfounding. As we mainly focus on the OOD problem, we leave improving the generator to future work.
4.3 STUDY OF GENERATORS (RQ2)
The generator plays an important role in our DSE framework, which aims to generate valid surrogates that conform to the data distribution. To evaluate the generator’s quality, we compare it with three baselines: a random generator, a variational graph auto-encoder (VGAE) (Thomas N. Kipf, 2016), and an adversarially regularized variational graph auto-encoder (ARGVA) (Pan et al., 2018). We perform the evaluation based on two metrics:
(1) Validity. For the ground-truth explanation G+s that contains all discriminative information of the full graph G, the importance of its surrogate graph G∗s should be higher than itself. The difference between the two importance scores indicates the validity of the generator, thus we define $\mathrm{VAL} = \mathbb{E}_{\mathcal{G}}\,[\mathrm{Imp}(\mathcal{G}^*_s) - \mathrm{Imp}(\mathcal{G}^+_s)]$. For Graph-SST2, where the class-wise features are intractable, we leverage the embeddings of the training graphs and additionally train a Gaussian Mixture Model (GMM) as our distribution prior. Then, we compute the average log-likelihood of random subgraphs after in-filling, thus we have $\mathrm{VAL} = \mathbb{E}_{\mathcal{G}}\,\mathbb{E}_{\mathcal{G}_s \sim \mathrm{Random}(\mathcal{G})}\,[\mathrm{GMM}(\mathcal{G}^*_s) - \mathrm{GMM}(\mathcal{G}_s)]$.
(2) Fidelity. Towards a finer-grained assessment w.r.t. the prediction probability of any random subgraph, we adopt the metric following (Frye et al., 2021): $\mathrm{FID} = \mathbb{E}_{\mathcal{G}}\,\mathbb{E}_{\mathcal{G}_s}\,\mathbb{E}_{y}\,\big|f_y(\mathcal{G}) - \mathbb{E}_{\mathcal{G}^*_s}[f_y(\mathcal{G}^*_s)]\big|^2$. This measures how well the surrogates cover the target prediction distribution.
Before comparing different generators, we first compute the importance or probabilities of the graphs before and after feature removal, which are summarized in Table 2. When inspecting the removal results without any in-fills, the OOD problem is severe: in TR3 and MNISTsup, the importance of the ground-truth subgraphs only reaches 43.8% and 40.8%, respectively, which is far away from the target importance of the full graphs. The observation in Graph-SST2 is analogous. For the performance of the generators w.r.t. the two metrics, we summarize the average results over 5 runs in Table 3:
• The performance of the baselines is poor. This suggests that they can hardly fit the target conditional distribution.
• CVGAE outperforms the other generators consistently across all cases, thus justifying the rationale and effectiveness of our proposed generator and adversarial training paradigm. For example, in TR3, CVGAE significantly increases the VAL scores and mitigates the OOD effect effectively.
Moreover, we conduct ablation studies and sensitivity analysis in Appendix G to better understand the model components and validate the effectiveness of the designed objective.
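For reference, a small sketch of how the VAL and FID metrics above could be computed from collected model outputs; the plain averages over the evaluation set stand in for the expectations, and the input lists are assumed to be gathered beforehand.

```python
import numpy as np

def validity(imp_surrogates, imp_ground_truth):
    """VAL = E[Imp(G*_s) - Imp(G+_s)] (or the GMM log-likelihood gap for Graph-SST2)."""
    return float(np.mean(np.asarray(imp_surrogates) - np.asarray(imp_ground_truth)))

def fidelity(full_graph_probs, surrogate_probs_per_graph):
    """FID = E | f_y(G) - E_{G*_s}[f_y(G*_s)] |^2, averaged over graphs and random subgraphs."""
    gaps = [(p_full - np.mean(p_surr)) ** 2
            for p_full, p_surr in zip(full_graph_probs, surrogate_probs_per_graph)]
    return float(np.mean(gaps))
```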
5 RELATED WORK
Post-hoc Explainability of GNNs. Inspired by the explainability in computer vision, Baldassarre & Azizpour (2019); Pope et al. (2019); Schnake et al. (2020) obtain the gradient-like scores of the model’s outcome or loss w.r.t. the input features. Another line (Luo et al., 2020; Ying et al., 2019; Yuan et al., 2020a; Yue Zhang, 2020; Michael Sejr Schlichtkrull, 2021) learns the masks on graph features. Typically, GNN-Explainer (Ying et al., 2019) applies the instance-wise masks on the messages carried by graph structures, and maximizes the mutual information between the masked graph and the prediction. Going beyond the instance-wise explanation, PGExplainer (Luo et al., 2020) generates masks for multiple instances inductively. Recently, researchers have adopted causal explainability (Pearl & Mackenzie, 2018) to uncover the causation of the model predictions. For instance, CXPlain (Schwab & Karlen, 2019) quantifies a feature’s importance by leaving it out. PGM-Explainer (Vu & Thai, 2020) performs perturbations on graph structures and builds a Bayesian network upon the perturbation-prediction pairs. Causal Screening (Screener) (Anonymous, 2021) measures the importance of an edge as its causal effect, conditional on the previously selected structures. Lately, SubgraphX (Yuan et al., 2021) explores different subgraphs with Monte-Carlo tree search and evaluates subgraphs with the Shapley value (Kuhn & Tucker, 1953).
Counterfactual Generation for the OOD Problem. The OOD effect of feature removal has been investigated in some other domains. There are generally two classes of generation: (i) Static generation. For example, Fong & Vedaldi (2017); Dabkowski & Gal (2017) adopted blurred inputs and random colors for the image reference, respectively. Due to the unnatural in-filling, the generated images do not respect the data distribution and can still introduce confounding bias. (ii) Adaptive generation: Chang et al. (2019); Frye et al. (2021); Agarwal et al. (2019); Kim et al. (2020). The generators of these methods, like DSE, overcome the aforementioned defects and generate data that conforms to the training distribution. For example, in computer vision, FIDO (Chang et al., 2019) generates image-specific explanations that respect the data distribution, answering “Which region, when replaced by plausible alternative values, would maximally change classifier output?”. For the differences: firstly, DSE’s formulated importance involves an additional adjustment on Gs and guarantees the unbiasedness of introducing the surrogate variable G∗s, which is commonly discarded by the prior works with in-fillings only. Specifically, we offer a comparison with FIDO in Appendix B. Secondly, the distribution of graph data is more complicated to model than that of other domains, and the proposed CVGAE is carefully designed for graph data, where the contrastive loss and the adversarial training framework are shown to be effective for learning the data distribution of graphs.
6 CONCLUSION
In this work, we investigate the OOD effect on the explanation evaluation of GNNs. With a causal view, we uncover the OOD effect — the distribution shift between full graphs and subgraphs — as the confounder between the explanatory subgraphs and the model prediction, making the evaluation less reliable. To mitigate it, we propose a deconfounding evaluation framework that exploits the front-door adjustment to measure the causal effect of the explanatory subgraphs on the model prediction.
And a deep generative model is devised to achieve the front-door adjustment by generating in-distribution surrogates of the subgraphs. In so doing, we can reliably evaluate the explanatory subgraphs. As the evaluation of explanations fundamentally guides the objectives in GNN explainability, this work offers in-depth insights into future interpretability systems.
ETHICS STATEMENT
This work raises concerns about the removal-based evaluation in the explainability literature and proposes the Deconfounded Subgraph Evaluator. For the user study that involves human subjects, we have detailed the fair evaluation procedure for each explanation generated by the explainers in Appendix E. For real-world applications, we admit that the modeling of the distribution shift could be a barrier to fulfilling evaluation faithfulness. However, as shown in the paper, improper evaluation under the OOD setting largely biases the inspection of the model’s decision-making process and the quality of explainers. Therefore, we argue that explainability should exhibit faithful explanation evaluation before auditing deep models’ actual decision-making process. And a wrongly evaluated explanation might do more significant harm than an incorrect prediction, as the former could affect the general adjustment (e.g., structure construction) and human perspective (e.g., fairness check) of the model.
REPRODUCIBILITY STATEMENT
We have made great efforts to ensure reproducibility in this paper. Firstly, we make all causal assumptions clear in Section 2.2, Section 3.1 and Appendix A. For datasets, we have released the synthetic dataset, which can be accessed via the link in Section 1, while the other two datasets are publicly available. We also include our code for model construction in the link. In Appendix D, we have reported the settings of hyper-parameters used in our implementation for model training.
B COMPARISON OF IMPORTANCE ESTIMATIONS
In this section, we compare our proposed estimation via the front-door adjustment with the estimation in FIDO (Chang et al., 2019). We rephrase each estimation as

$$
\mathrm{Imp}_{\mathrm{dse}}(\mathcal{G}_s) = \sum_{\mathcal{G}^*_s} P(G^*_s=\mathcal{G}^*_s \mid G_s=\mathcal{G}_s)\, P(Y \mid do(G^*_s=\mathcal{G}^*_s)) = \sum_{\mathcal{G}^*_s} P(G^*_s=\mathcal{G}^*_s \mid G_s=\mathcal{G}_s)\, \underline{\sum_{\mathcal{G}'_s} P(Y \mid G^*_s=\mathcal{G}^*_s, G_s=\mathcal{G}'_s)\, P(G_s=\mathcal{G}'_s)} \quad (9)
$$

and

$$
\mathrm{Imp}_{\mathrm{FIDO}}(\mathcal{G}_s) = \sum_{\mathcal{G}^*_s} P(G^*_s=\mathcal{G}^*_s \mid G_s=\mathcal{G}_s)\, \underline{P(Y \mid G^*_s=\mathcal{G}^*_s)} \quad (10)
$$

where DSE has alternatively adjusted on Gs (represented as G′s). To make it clear, we consider the underlined part of each equation. For Equation 9, we have

$$
\begin{aligned}
\sum_{\mathcal{G}'_s} P(Y \mid G^*_s=\mathcal{G}^*_s, G_s=\mathcal{G}'_s)\, P(G_s=\mathcal{G}'_s)
&= \sum_{\mathcal{G}'_s} P(Y \mid G^*_s=\mathcal{G}^*_s, G_s=\mathcal{G}'_s)\, P(G_s=\mathcal{G}'_s \mid G^*_s=\mathcal{G}^*_s)\, \frac{P(G_s=\mathcal{G}'_s)}{P(G_s=\mathcal{G}'_s \mid G^*_s=\mathcal{G}^*_s)} \\
&= \sum_{\mathcal{G}'_s} P(Y, G_s=\mathcal{G}'_s \mid G^*_s=\mathcal{G}^*_s)\, \frac{P(G_s=\mathcal{G}'_s)}{P(G_s=\mathcal{G}'_s \mid G^*_s=\mathcal{G}^*_s)}
\end{aligned} \quad (11)
$$

While for the formulation of Equation 10, we have

$$
P(Y \mid G^*_s=\mathcal{G}^*_s) = \sum_{\mathcal{G}'_s} P(Y, G_s=\mathcal{G}'_s \mid G^*_s=\mathcal{G}^*_s) \quad (12)
$$

Comparing these two parts, we can see that Equation 12 is biased under our causal assumption. Intuitively, each contribution to the importance of G∗s on Y should be inversely proportional to the posterior probability, i.e., the probability of G′s given the observation G∗s. However, FIDO fails to consider the causal relation Gs → G∗s, which biases the approximation of the genuine causal effect under our causal assumption. Back to our proposed estimation: as we have collected (Gs, G∗s)-pairs via Monte-Carlo simulation, the additional adjustment on Gs (G′s) can be achieved via Equation 11.
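To see the effect of the extra adjustment on toy numbers, the following sketch contrasts the underlined terms of Equations 11 and 12 for a single surrogate; all probability values are made up purely for illustration.

```python
import numpy as np

# Two candidate conditioning subgraphs G'_s, one fixed surrogate G*_s (illustrative values only).
p_y_given_star_cond = np.array([0.9, 0.3])   # P(Y | G*_s, G'_s)
p_cond = np.array([0.5, 0.5])                # P(G_s = G'_s), the marginal used by DSE
p_cond_given_star = np.array([0.8, 0.2])     # P(G_s = G'_s | G*_s), implicit in FIDO's term

# Eq. 12 (FIDO): P(Y | G*_s) marginalizes G'_s with the posterior weights.
fido_term = np.sum(p_y_given_star_cond * p_cond_given_star)
# Eq. 11 (DSE): the same joint terms, reweighted by P(G'_s) / P(G'_s | G*_s).
dse_term = np.sum(p_y_given_star_cond * p_cond_given_star * (p_cond / p_cond_given_star))

print(f"FIDO term: {fido_term:.3f}  vs  DSE term: {dse_term:.3f}")  # 0.780 vs 0.600
```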
C DSE FOR DELETION-BASED EVALUATION
Based on the idea of deletion-based evaluation, we can instead use the average causal effect (ACE) (Holland, 1988) to look for a smallest deletion graph by conducting two interventions, do(Gs = G) (i.e., no feature removal) and do(Gs = G/s), where G/s denotes the complement of the explanatory graph Gs; that is, the GNN input receives the treatment and the control, respectively. Formally, we have

$$
\mathrm{Imp}^{\mathrm{fid}}_{\mathrm{dse}}(G_s=\mathcal{G}_s) = P\big(Y \mid do(G_s=\mathcal{G})\big) - P\big(Y \mid do(G_s=\mathcal{G}_{/s})\big) \quad (13)
$$

Then, we can similarly adjust for the individual terms as in Equation 1, obtaining the unbiased importance value as the result of deletion-based evaluation.
D EXPERIMENTAL DETAILS
In this paper, all experiments are done on a single Tesla V100 SXM2 GPU (32 GB). The well-trained GNNs used in our experiments achieve high classification accuracies of 0.958 in TR3, 0.982 in MNISTsup, and 0.909 in Graph-SST2. Now we introduce the model construction of the proposed generator. The encoder used is the Crystal Graph Convolutional Neural Network (Xie & Grossman, 2018), which contains three convolutional layers. The encoding dimensions in the TR3, MNISTsup, and Graph-SST2 datasets are respectively 256, 64, and 256. For the decoder, we adopt two fully connected layers with ReLU as activation layers, where the numbers of neurons are the same as the encoding dimensions. Next, we summarize the pseudocode for the adversarial training in Algorithm 1.
Algorithm 1 Generative Adversarial Training. All experiments in the paper used the default values m = 256, α = 2 × 10−4, β = 1 × 10−4, ω = λ = 5, τ = 0.1.
Require: Pr, the real graphs’ distribution; r, the masking ratio.
Require: m, batch size; α, learning rate; β, γ, λ, ω, τ, hyper-parameters.
1: µ ← µ0; θ ← θ0
2: while the loss in Equation (4) is not converged do
3:   # Discriminator’s training
4:   Sample a batch {G(i)}, i = 1..m, from the real graphs Pr.
5:   Randomly generate broken graphs {Gs(i)} from {G(i)} with masking ratio r.
6:   Embed the nodes through the encoder q(Z | {Gs(i), G(i)}).
7:   Decode the edge probabilities and sample in-fill graphs {Ĝs̄(i)} ∼ p(Ĝs̄ | Z).
8:   Compute the Discriminator’s loss from Equation 7.
9:   Update parameter µ with back-propagation.
10:  # Generator’s training
11:  Repeat the operations from lines 4 to 7.
12:  Compute the Generator’s loss from Equations 4, 5, 6.
13:  Update parameter θ with back-propagation.
14: end while
For other hyper-parameters, we set r = 0.3, γ = 3 in the TR3 dataset. In the MNISTsup and Graph-SST2 datasets, we set r = 0.6, γ = 1. We use Adam (Kingma & Ba, 2014) with weight decay rate 1e-5 for optimization. The maximum number of epochs is 100.
E DETAILED USER STUDY
The user study starts with instructions to participants, where they will see a sentence (a movie review) in each question and its sentiment (Positive or Negative), e.g.,
Sentence: “is more of an ordeal than an amusement” Sentiment: Negative
Then several explanations are presented as answers to “Why is the sentiment of this sentence negative (positive)?”. The explanations (see Figure 7) are shown in graph form (edges indicate relations between words), and the colors of more important features are darker. Then the participants were asked to choose the best explanation(s). A good explanation should be concise, informative, and the rational cause of the sentence’s sentiment. In this case, (B) could be the best explanation since “ordeal” mostly decides the negative sentiment, while (A) only identifies plain words like “more than” and (C) is quite the opposite.
Note that the participants can choose multiple answers and some choices are the same. Thereafter, 10 questions out of 32 questions in total are presented to each participant, and we compute the average scores for the explainers.
F EXTRA CASE STUDY
In this section, we further present a case study on the TR3 dataset. In Figure 8, the OOD probabilities for the ground-truth explanatory subgraphs in each row remain the same as the edge selection ratios vary, which are 100%, 0%, and 0% respectively. In contrast, the evaluation results generated from our DSE show strong rationality. Specifically, the importance score computed by our DSE increases with the increasing number of selected ground-truth edges. This well validates our DSE framework, where we mitigate the OOD effect by generating plausible surrogates, making the graphs to be evaluated conform to the graph distribution in the training data. In this way, the effect of D → Y could hardly affect our assessment of the explanatory subgraph. Thereafter, as the explanatory graph becomes more informative and discriminative, it offers more evidence for the GNN to classify it as the target class which we want to explain, yielding faithful evaluation results.
Figure 8: Three cases in the TR3 dataset (cycle, house, crane). Each graph on the left represents the ground-truth explanatory subgraph (red) for explaining a given graph. One of the complement graphs (light blue) generated from CVGAE is also shown with each explanatory subgraph. As the edge selection ratio increases in each row, the importance scores output by our DSE are shown on the right.
G ABLATION STUDY & SENSITIVITY ANALYSIS
We first conduct ablation studies to investigate the contribution of the contrastive parameter γ and the penalty parameter λ in CVGAE. The ablation models are obtained by I. removing the contrastive loss, i.e., setting γ = 0, and II. removing the penalty term in the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss, i.e., setting λ = 0. The performance of the ablation models is reported in Table 4. We observe that the superiority of CVGAE compared with the ablation models supports our model design by (i) smoothing the model optimization, which yields a more performant generator, and (ii) highlighting the class-discriminative information in the graph embeddings, which implicitly encodes the class information. Also, we conduct a sensitivity analysis for CVGAE w.r.t. the hyper-parameters. Specifically, we select λ, the penalty in the WGAN loss (cf. Equation 7), and γ, the strength of the contrastive loss (cf. Equation 4), while we empirically found the performance to be relatively insensitive to other parameters over a wide range. The results are shown in Figure 9. We observe that the best performance is achieved with λ taking values from 1 to 10, and γ taking values from 1 to 10 in the TR3 dataset and 0.1 to 5 in the MNISTsup and Graph-SST2 datasets. We also found that a large λ generally causes an increase in the FID metric, as it may alleviate the penalty on the reconstruction errors, which further makes a larger difference between fy(G) and E[fy(G∗s)].
1. What is the focus of the paper regarding GNN explainers?
2. What are the strengths of the proposed method in debiasing GNN-explainer subgraph importance scores?
3. Are there any potential limitations or risks in the method for generating surrogate subgraphs?
4. How does the reviewer assess the effectiveness of the empirical study demonstrating the framework's usefulness?
5. What additional considerations or suggestions does the reviewer have regarding the research's contribution?
Summary Of The Paper
This paper presents a novel explainer-agnostic method to adjust the biases of feature importance scores in feature attribution for GNNs. The paper first describes how the feature importance scores of the GNN feature attribution framework are biased due to the out-of-distribution (OOD) problem: the subgraph importance scores are calculated by inputting a subgraph instead of the data graphs, but subgraph patterns can fall into regions outside the distribution of the training data graphs. To address this problem, the paper proposes a method to generate surrogate graphs within the data graph distribution by CVGAE, which makes a front-door adjustment for deconfounding these biases from the distribution shift. Experiments using several state-of-the-art GNN explainers demonstrate the effectiveness of the proposed framework.
Review
The paper presents an interesting new method with a clear focus on debiasing GNN-explainer subgraph importance scores by generating surrogate subgraphs to correct distribution shift problems. The paper is well written and easy to follow, the core idea sounds very effective, and the empirical study using three datasets provides useful demonstrations. Overall I liked the idea of the paper and found it nice work. Here are a couple of small questions to make sure of the paper's contribution.
The proposed method is GNN-explainer agnostic. This point is advantageous because we can make importance corrections to any GNN explainers we like. But at the same time, a natural question will be: is there any chance that we miss important features because the method that generates subgraph patterns didn't consider this OOD problem, even if the proposed method can make a correction by post-hoc processing? Or, either way, any subgraph pattern comes from at least one of the data graphs, and so by recovering such a data graph within the training distribution, we can approximately resolve the data distribution bias and there are technically no problems?
The paper focuses on structural features. This will imply that the bias under consideration is primarily the distribution shift due to taking subgraphs of data graphs, and we need to recover the original data graph in the training set from any given subgraph pattern. But relatively small subgraph patterns can occur in multiple instances, and does this generative step actually generate such a surrogate graph as intended? Say, let G_s be a subgraph pattern whose importance score we want to calculate but which falls out of the training distribution. The surrogate graph G_s^* by CVGAE is basically intended to recover the original supergraph of G_s in the training dataset, isn't it? It'll be very helpful to make sure of this point in some way.
Just for your interest, and no need to include this in the paper: it might be interesting to consider whether the bias from subgraphs to the data graphs is the main problem. If the generative step can be explicit, we can even have a deterministic mapping from the possible subgraph patterns to the original graphs by graph mining algorithms. Explicit subgraph patterns are intensively investigated in the graph mining field in parallel to GNN-based approaches to the graph classification problem. For example, the book "Managing and Mining Graph Data" (Ed: Charu C. Aggarwal & Haixun Wang) covers a list of relevant papers.
So we can directly search explicit subgraph patterns in the data graphs, and such approaches were intensively investigated around 10 years ago:
- by direct graph mining, such as mining "discriminative patterns", "emerging patterns", "contrastive patterns" (many works such as Llinares-López+, Fast and memory-efficient significant pattern mining via permutation testing, KDD 2015)
- by LASSO (K. Tsuda, Entire regularization paths for graph data, ICML 2007: 919-926)
- by feature-wise boosting (Saigo+, gBoost: a mathematical programming approach to graph classification and regression, Machine Learning, 2009)
- by sparse coordinate descent (Takigawa+, Generalized sparse learning of linear models over the complete subgraph feature set, TPAMI 2017)
- by decision tree / decision forest (Shirakawa+, Jointly learning relevant subgraph patterns and nonlinear models of their indicators, MLG 2018@KDD)
But using these methods, we often see the disappointing conclusion that, in general, "smaller subgraph patterns" are important, because smaller subgraphs occur more frequently in the data and thus can contribute to making predictions.
ICLR
Title Deconfounding to Explanation Evaluation in Graph Neural Networks Abstract Explainability of graph neural networks (GNNs) aims to answer “Why the GNN made a certain prediction?”, which is crucial to interpret the model prediction. The feature attribution framework distributes a GNN’s prediction to its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating the explanation (i.e., subgraph importance), a standard way is to audit the model prediction based on the subgraph solely. However, we argue that a distribution shift exists between the full graph and the subgraph, causing the out-ofdistribution problem. Furthermore, with an in-depth causal analysis, we find the OOD effect acts as the confounder, which brings spurious associations between the subgraph importance and model prediction, making the evaluation less reliable. In this work, we propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction. While the distribution shift is generally intractable, we employ the front-door adjustment and introduce a surrogate variable of the subgraphs. Specifically, we devise a generative model to generate the plausible surrogates that conform to the data distribution, thus approaching the unbiased estimation of subgraph importance. Empirical results demonstrate the effectiveness of DSE in terms of explanation fidelity. N/A Explainability of graph neural networks (GNNs) aims to answer “Why the GNN made a certain prediction?”, which is crucial to interpret the model prediction. The feature attribution framework distributes a GNN’s prediction to its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating the explanation (i.e., subgraph importance), a standard way is to audit the model prediction based on the subgraph solely. However, we argue that a distribution shift exists between the full graph and the subgraph, causing the out-ofdistribution problem. Furthermore, with an in-depth causal analysis, we find the OOD effect acts as the confounder, which brings spurious associations between the subgraph importance and model prediction, making the evaluation less reliable. In this work, we propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction. While the distribution shift is generally intractable, we employ the front-door adjustment and introduce a surrogate variable of the subgraphs. Specifically, we devise a generative model to generate the plausible surrogates that conform to the data distribution, thus approaching the unbiased estimation of subgraph importance. Empirical results demonstrate the effectiveness of DSE in terms of explanation fidelity. 1 INTRODUCTION Explainability of graph neural networks (GNNs) (Hamilton et al., 2017; Dwivedi et al., 2020) is crucial to model understanding and reliability in real-world applications, especially when about fairness and privacy (Ying et al., 2019; Luo et al., 2020). It aims to provide insight into how predictor models work, answering “Why the target GNN made a certain prediction?”. Towards this end, a variety of explainer models are proposed for feature attribution (Selvaraju et al., 2017; Ying et al., 2019; Luo et al., 2020; Vu & Thai, 2020), which decomposes the predictor’s prediction as contributions (i.e., importance) of its input features (e.g., edges, nodes). 
While feature attribution assigns the features with importance scores, it redistributes the graph features and creates a new distribution different from that of the original full graphs, from which a subgraph is sampled as the explanation. Such sampling process is referred to as feature removal (Covert et al., 2020). Then, to assess the explanatory subgraph, the current evaluation frameworks use the feature removal principle — (1) only feed the subgraph into the target predictor, discarding the other features; (2) measure the importance of the subgraph based on its information amount to recover the model’s prediction. Such subgraph-prediction correlations uncovered by the removal-based evaluator should offer a faithful inspection of the predictor’s decision-making process and assess the fidelity of the explainers reliably. However, feature removal brings the out-of-distribution (OOD) problem (Frye et al., 2020; Chang et al., 2019; Lukas Faber, 2021): the distribution shift from full graphs to subgraphs likely violates underlying properties, including node degree distribution (Leskovec et al., 2005) and domain-specific constraints (Liu et al., 2018) of the full graphs. For example, graph properties of chemical molecules, such as the valency rules, impose some constraints on syntactically valid molecules (Liu et al., 2018); hence, simply removing some bonds (edges) or atoms (nodes) creates invalid molecular subgraphs that never appear in the training dataset. Such OOD subgraphs could manipulate the predictor’s Under review as a conference paper at ICLR 2022 𝑮: Full Graph 𝑮𝒔: Subgraph 𝒀: Predicate Logits 𝐬 𝓖 House Cycle Crane 0.21 0.70 Target Predictor Crane 𝓖Cycle House𝓖𝒔𝟏𝑫: Distribution Shift 𝓖𝒔𝟐 𝒔∗ 𝐬 Front-door Adjustment𝓖𝒔𝟐𝓖𝒔𝟏 (a) Feature Removal to Evaluate Explanatory Subgraph Gs 𝑮: Full Graph 𝑮𝒔: Subgraph 𝒀: Predicate Logits 𝐬 𝓖 House Cycle Crane 0.21 0.70 Target Predictor Crane 𝓖Cycle House𝓖𝒔𝟏𝑫: Distribution Shift 𝓖𝒔𝟐 𝒔∗ 𝐬 Front-door Adjustment𝓖𝒔𝟐𝓖𝒔𝟏 (b) SCM I Figure 1: (a) A real example in TR3. The GNN predictor classifies the full graph as ‘House”. On subgraphs Gs1 and Gs2, the prediction probabilities of being “House” are respectively 0.21 and 0.70. (b) The structural causal model represents the causalities among variables: G as the input graph, D as the unobserved distribution shift, Gs as the explanatory subgraph, and Y as the model prediction. outcome arbitrarily (Dai et al., 2018; Zügner et al., 2018), generates erroneous predictions, and limits the reliability of the evaluation process. Here we demonstrate the OOD effect by a real example in Figure 1a, where the trained ASAP (Ranjan et al., 2020) predictor has classified the input graph as “House” for its attached motif (see Section 4 for more details). On the ground-truth explanation Gs1, the output probability of the “House” class is surprisingly low (0.21). While for Gs2 with less discriminative information, the outputs probability of the “House” class (0.70) is higher. Clearly, the removal-based evaluator assigns the OOD subgraphs with unreliable importance scores, which are unfaithful to the predictor’s decision. The OOD effect has not been explored in evaluating GNN explanations, to the best of our knowledge. We rigorously investigate it from a causal view (Pearl et al., 2016; Pearl, 2000; Pearl & Mackenzie, 2018). Figure 1b represents our causal assumption via a structural causal model (SCM) (Pearl et al., 2016; Pearl, 2000), where we target the causal effect of Gs on Y . 
Nonetheless, as a confounder between Gs and Y , distribution shift D opens the spurious path Gs ← D → Y . By “spurious”, we mean that the path lies outside the direct causal path from Gs to Y , making Gs and Y spuriously correlated and yielding an erroneous effect. And one can hardly distinguish between the spurious correlation and causative relations (Pearl et al., 2016). Hence, auditing Y on Gs suffers from the OOD effect and wrongly evaluates the importance of Gs. Motivated by our causal insight, we propose a novel evaluation paradigm, Deconfounded Subgraph Evaluator (DSE), to faithfully measure the causal effect of explanatory subgraphs on the prediction. 𝑮: Full Graph 𝑮𝒔: Subgraph 𝒀: Predicate Logits 𝑮𝐬𝑮 𝒀 𝑫 𝑮𝒔∗ 𝑮𝐬𝑮 𝒀 𝑫 Front-door Adjustment reliably and further guide explainers to generate faithful explanations. In a nutshell, our contributions are: • From a causal perspective, we argue that the OOD effect is the confounder that causes spurious correlations between subgraph importance and model prediction. • We propose a deconfounding paradigm, DSE, which exploits the front-door adjustment to mitigate the out-of-distribution effect and evaluate the explanatory subgraphs unbiasedly. • We validate the effectiveness of our framework over various explainers, target GNN models, and datasets. Significant boosts are achieved over the conventional feature removal techniques. Code and datasets are available at: https://anonymous.4open.science/r/DSE-24BC/. 2 A CAUSAL VIEW OF EXPLANATION EVALUATION Here we begin with the causality-based view of feature removal in Section 2.1 and present our causal assumption to inspect the OOD effect in Section 2.2. 2.1 PROBLEM FORMULATION Without loss of generality, we focus on the graph classification task: a well-trained GNN predictor f takes the graph variable G as input and predicts the class Y ∈ {1, · · · ,K}, i.e., Y = f(G). Generation of Explanatory Subgraphs. Post-hoc explainability typically considers the question “Why the GNN predictor f made certain prediction?”. A prevalent solution is building an explainer model to conduct feature attribution (Ying et al., 2019; Luo et al., 2020; Pope et al., 2019). It decomposes the prediction into the contributions of the input features, which redistributes the probability of features according to their importance and sample the salient features as an explanatory subgraph Gs. Specifically, Gs can be a structure-wise (Ying et al., 2019; Luo et al., 2020) or featurewise (Ying et al., 2019) subgraph of G. In this paper, we focus on the structural features. That is, for graph G = (N , E) with the edge set E and the node set N , the explanatory subgraph Gs = (Ns, Es) consists of a subset of edges Es ⊂ E and their endpoints Ns = {u, v|(u, v) ∈ Es}. Evaluation of Explanatory Subgraphs. Insertion-based evaluation by feature removal (Covert et al., 2020; Dabkowski & Gal, 2017) aims to check whether the subgraph is the supporting substructure 1 that alone allows a confident classification. We systematize this paradigm as three steps: (1) divide the full graph G into two parts, the subgraph Gs and the complement Gs; (2) feed Gs into the target GNN f , while discarding Gs; and (3) obtain the model prediction on Gs, to assess its discriminative information to recover the prediction on G. Briefly, at the core of the evaluator is the subgraphprediction correlation. 
However, as discussed in Section 1, the OOD effect is inherent in the removal-based evaluator, hindering the subgraph-prediction correlation from accurately estimating the subgraph importance. 2.2 STRUCTURAL CAUSAL MODEL To inspect the OOD effect rigorously, we take a causal look at the evaluation process with a Structural Causal Model (SCM I) in Figure 1b. We denote the abstract data variables by the nodes, where the directed links represent the causality. The SCM indicates how the variables interact with each other through the graphical definition of causation: • G→ Gs ← D. We introduce an abstract distribution shift variable D to sample a subgraph Gs from the edge distributions of the full graph G. • Gs → Y ← D. We denote Y as the prediction variable (e.g., logits output), which is determined by (1) the direct effect from Gs, and (2) the confounding effect caused by D. In particular, the former causation that led to the result is the focus of this work. We suggest readers to refer to Appendix A where we offer an elaboration of D. With our SCM assumption, directly measuring the importance of explanatory subgraphs is distracted by the backdoor path (Pearl, 2000), Gs ← D → Y . This path introduces the confounding associations between Gs and Y , which makes Gs and Y spuriously correlated, i.e., biases the subgraph-prediction correlations, thus making the evaluator invalid. How to mitigate the OOD effect and quantify Gs’s genuine causal effect on Y remains largely unexplored in the literature and is the focus of our work. 3 DECONFOUNDED EVALUATION OF EXPLANATORY SUBGRAPHS In this section, we propose a novel deconfounding framework to evaluate the explanatory subgraphs in a trustworthy way. Specifically, we first leverage the front-door adjustment (Pearl, 2000) to formulate a causal objective in Section 3.1. We then devise a conditional variational graph auto-encoders (CVGAE) as the effective implementation of our objective in Section 3.2. 1We focus on insertion-based evaluation here while we discuss deletion-based evaluation in Appendix C. 3.1 FRONT-DOOR ADJUSTMENT To the best of our knowledge, our work is the first to adopt the causal theory to solve the OOD problem in the explanation evaluation of GNNs. To pursue the causal effect of Gs on Y , we perform the calculus of the causal intervention P (Y = y|do(Gs = Gs)). Specifically, the do-calculus (Pearl, 2000; Pearl et al., 2016) is to intervene the subgraph variable Gs by cutting off its coming links and assigning it with the certain value Gs, making it unaffected from its causal parents G and D. From inspection of the SCM in Figure 1b, the distribution effect D acts as the confounder between Gs and Y , and opens the backdoor path Gs ← D → Y . However, as D is hardly measurable, we can not use the backdoor adjustment (Pearl, 2000; Pearl et al., 2016) to block the backdoor path from Gs to Y . Hence, the causal effect of Gs on Y is not identifiable from SCM I. However, we can go much further by considering SCM II in Figure 2 instead, where a mediating variable G∗s is introduced between Gs and Y : • Gs → G∗s . G∗s is the surrogate variable of Gs, which completes Gs to make them in the data distribution. First, it originates from and containsGs. Specifically, it imagines how the possible full graphs should be when observing the subgraph Gs. Second, G∗s should follow the data distribution and respect the inherent knowledge of graph properties, thus no link exists between D and G∗s . • G∗s → Y . 
This is based on our causal assumption that the causality-related information of Gs on Y , i.e., the discriminative information for Gs to make prediction, is well-preserved by G∗s . Thus, with the core of Gs, G∗s is qualified to serve as the mediator which further results in the model prediction. With SCM II, we can exploit the front-door adjustment (Pearl, 2000; Pearl et al., 2016) instead to quantify the causal effect of Gs on Y . Specifically, by summing over possible surrogate graphs G∗s of G∗s , we chain two identifiable partial effects of Gs on G ∗ s and G ∗ s on Y together: P (Y |do(Gs = Gs)) = ∑ G∗s P (Y |do(G∗s = G∗s ))P (G∗s = G∗s |do(Gs = Gs)) = ∑ G∗s ∑ G′s P (Y |G∗s = G∗s , Gs = G′s)P (Gs = G′s)P (G∗s = G∗s |do(Gs = Gs)) = ∑ G∗s ∑ G′s P (Y |G∗s = G∗s , Gs = G′s)P (Gs = G′s)P (G∗s = G∗s |Gs = Gs), (1) Specifically, we have P (G∗s|do(Gs = Gs)) = P (G∗s|Gs = Gs) as Gs is the only parent of G∗s . And we distinguish the Gs in our target expression P (Y |do(Gs = Gs)) between G′s, the latter of which is adjusted to pursue P (Y |do(G∗s = G∗s )). With the data of (Gs,G∗s ) pairs, we can obtain P (Y |G∗s = G∗s , Gs = G′s) by feeding the surrogate graph G∗s into the GNN predictor, conditional on the subgraph G′s; similarly, we can estimate P (Gs = G′s) statistically; P (G∗s = G∗s |Gs = Gs) is the conditional distribution of the surrogate variable, after observing the subgraphs. As a result, this front-door adjustment yields a consistent estimation of Gs’s effect on Y and avoids the confounding associations from the OOD effect. 3.2 DEEP GENERATIVE MODEL However, it is non-trivial to instantiate G∗s and collect the (Gs,G∗s ) pairs. We get inspiration from the great success of generative models and devise a novel probabilistic model, conditional variational graph auto-encoder (CVGAE), and an adversarial training framework, to generate G∗s . Conditional Generation. Inspired by previous works (Thomas N. Kipf, 2016; Liu et al., 2018), we model the data distribution via a generative model gθ parameterized by θ. It is composed of an encoder q(Z|G,Gs) and a decoder p(G∗s |Z). Specifically, the encoder q(Z|G,Gs) embeds each node i in G with a stochastic representation zi, and summarize all node representations in Z: q(Z|G,Gs) = N∏ i=1 q(zi|G,Gs), with q(zi|G,Gs) = N (zi | [µ1i,µ2i], [ σ21i 0 0 σ22i ] ) (2) where zi is sampled from a diagonal normal distribution by mean vector [µ1i,µ2i] and standard deviation vector diag(σ21i,σ 2 2i); µ1 = fµ(G) and logσ1 = fσ(G) denote the matrices of mean vectors µ1i and standard deviation vectors logσ1i respectively, which are derived from two GNN models fµ and fσ on the top of the full graph G; similarly, µ2 = fµ(Gs) and logσ2 = fσ(Gs) are on the top of the subgraph Gs. Then, the decoder p(G∗s |Z) generates the valid surrogates: p(G∗s |Z) = N∏ i N∏ j p(Aij |zi, zj), with p(Aij = 1|zi, zj) = fA([zi, zj ]), (3) whereAij = 1 indicates the existence of an edge between nodes i and j; fA is a MLP, which takes the concatenation of node representations zi and zj as the input and outputs the probability of Aij = 1. Leveraging the variational graph auto-encoder, we are able to generate some counterfactual edges that never appear in G and sample G∗s from the conditional distribution p(G∗s |Z), formally, G∗s ∼ p(G∗s|Z). As a result, P (G∗s = G∗s |Gs = Gs) in Equation 1 is identified by p(G∗s |Z). The quality of the generator directly affects the quality of the surrogate graphs, further determines how well the frontdoor adjustment is conducted. 
Next, we will detail an adversarial training framework to optimize the generator, which is distinct from the standard training of a VAE.

Adversarial Training. To achieve high-quality generation, we draw inspiration from adversarial training (Goodfellow et al., 2020; Yue et al., 2021) and devise the following training objective:

$$
\min_{\theta}\, \mathcal{L}_{\text{VAE}} + \gamma \mathcal{L}_{C} + \max_{\mu}\, \omega \mathcal{L}_{D}, \tag{4}
$$

where γ and ω are trade-off hyper-parameters. These losses are carefully designed to ensure that the generation follows the data distribution. Next, we elaborate on each of them.

$$
\mathcal{L}_{\text{VAE}} = -\mathbb{E}_{\mathcal{G}}\Big[\mathbb{E}_{q(\mathbf{Z}\mid\mathcal{G},\mathcal{G}_s)}\big[\log p(\hat{\mathcal{G}}_s \mid \mathbf{Z})\big]\Big] + \beta\, \mathbb{E}_{\mathcal{G}}\Big[D_{\text{KL}}\big(q(\mathbf{Z}\mid\mathcal{G},\mathcal{G}_s)\,\|\,p(\mathbf{Z})\big)\Big], \tag{5}
$$

We first minimize the β-VAE loss (Higgins et al., 2017), whose first term is the reconstruction loss responsible for predicting the probability of edge existence; the second term is the KL-divergence between the variational and prior distributions. Here we resort to the isotropic Gaussian distribution p(Z) = ∏_i p(z_i) = ∏_i N(z_i | 0, I) as the prior. β reweights the KL-divergence, which encourages learning disentangled factors in Z (Higgins et al., 2017; Yue et al., 2021; Suter et al., 2019).

Moreover, we highlight the class-discriminative information in Z by encouraging agreement between graph representations of the same class relative to those of different classes. Technically, the contrastive loss is adopted:

$$
\mathcal{L}_{C} = -\mathbb{E}_{\mathcal{G}}\left[\log \frac{\sum_{\mathcal{G}'\in\mathcal{B}^{+}} \exp\big(s(\mathbf{z}_{\mathcal{G}}, \mathbf{z}_{\mathcal{G}'})/\tau\big)}{\sum_{\mathcal{G}''\in\mathcal{B}^{+}\cup\mathcal{B}^{-}} \exp\big(s(\mathbf{z}_{\mathcal{G}}, \mathbf{z}_{\mathcal{G}''})/\tau\big)}\right], \tag{6}
$$

where z_G is the representation of G that aggregates all node representations in Z; s is the similarity function, which is given by an inner product here; τ is the temperature hyper-parameter; B^+ is the set of graphs with the same class as G, while the graphs in B^- have different classes from G. Minimizing this loss enables the generator to go beyond generic knowledge and uncover the class-wise patterns of graph data.

Besides, we introduce a discriminative model d_μ to distinguish the generated graphs from real ones. Specifically, we set it as a probability-conditional GNN (Fey & Lenssen, 2019) parameterized by μ. It takes a graph as input and outputs a score between 0 and 1, which indicates the confidence of the graph being realistic. Hence, given a real graph G with the ground-truth label y, we can use the generator g_θ to generate \mathcal{G}_s^*. Then the discriminator learns to assign G a large score while labeling \mathcal{G}_s^* with a small score. To optimize the discriminator, we adopt the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss:

$$
\mathcal{L}_{D} = \mathbb{E}_{\mathcal{G}}\Big[\mathbb{E}_{p(\mathcal{G}_s^*\mid\mathbf{Z})}\big[d(\mathcal{G}, y) - d(\mathcal{G}_s^*, y) - \lambda\big(\|\nabla_{\mathcal{G}_s^*} d(\mathcal{G}_s^*, y)\|_2 - 1\big)^2\big]\Big], \tag{7}
$$

where d(\mathcal{G}_s^*, y) is the discriminator's confidence that \mathcal{G}_s^* is a realistic graph of class y, and λ is the gradient-penalty hyper-parameter. By playing the min-max game between the generator and the discriminator in Equation 4, the generator learns to create plausible surrogate graphs from the data distribution.

Subgraph Evaluation. With the well-trained generator g_θ^*, whose parameters are fixed, we now approximate the causal effect of G_s on Y. Here we conduct Monte-Carlo simulation based on g_θ^* to sample a set of plausible surrogate graphs {\mathcal{G}_s^*} from p(G_s^* | Z). Having collected the (\mathcal{G}_s, \mathcal{G}_s^*) data, we can arrive at the estimation in Equation 1.
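As a minimal sketch of this Monte-Carlo estimation of Equation 1, the snippet below chains the sampled surrogates with the empirical subgraph prior. The helpers sample_surrogate, gnn_prob, and subgraph_prior are hypothetical stand-ins for the trained generator g_θ^*, the target GNN predictor, and the empirical distribution over explanatory subgraphs.

```python
def frontdoor_importance(g_s, sample_surrogate, gnn_prob, subgraph_prior,
                         target_class, n_samples=50):
    """Monte-Carlo estimate of P(Y = target_class | do(G_s = g_s)) following Equation 1.

    sample_surrogate(g_s)      -- draws one surrogate graph G_s^* ~ P(G_s^* | G_s = g_s)
    gnn_prob(g_star, g_prime)  -- P(Y | G_s^* = g_star, G_s = g_prime), a vector of class probabilities
    subgraph_prior             -- iterable of (g_prime, P(G_s = g_prime)) pairs estimated from data
    """
    importance = 0.0
    for _ in range(n_samples):  # outer sum over G_s^*, approximated by sampling from the generator
        g_star = sample_surrogate(g_s)
        # inner sum over G_s', weighted by its empirical prior P(G_s = G_s')
        importance += sum(prob * gnn_prob(g_star, g_prime)[target_class]
                          for g_prime, prob in subgraph_prior)
    return importance / n_samples
```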
4 EXPERIMENTS

We aim to answer the following research questions:
• Study of Explanation Evaluation. How effective is our DSE in mitigating the OOD effect and evaluating the explanatory subgraphs more reliably? (Section 4.2)
• Study of Generator. How effective is our CVGAE in generating surrogates for the explanatory subgraphs that conform to the data distribution? (Section 4.3)

4.1 EXPERIMENTAL SETTINGS

Datasets & Target GNNs. We first train various target GNN classifiers on three datasets:
• TR3 is a synthetic dataset involving 3000 graphs, each of which is constructed by connecting a random tree-shaped base with one motif (house, cycle, crane). The motif type is the ground-truth label, and we treat the motifs as the ground-truth explanations following Ying et al. (2019); Yuan et al. (2020a). A Local Extremum GNN (Ranjan et al., 2019) is trained for classification.
• MNIST superpixels (MNISTsup) (Monti et al., 2017) converts the MNIST images into 70,000 superpixel graphs. Every graph has 75 nodes and is labeled as one of 10 classes. We train a Spline-based GNN (Fey et al., 2018) as the classifier model. The subgraphs representing digits can be viewed as human explanations.
• Graph-SST2 (Yuan et al., 2020b) is based on the text sentiment dataset SST2 (Socher et al., 2013) and converts the text sentences into graphs where nodes represent tokens and edges indicate relations between nodes. Each graph is labeled by its sentence sentiment. The node embeddings are initialized by the pre-trained BERT word embeddings (Devlin et al., 2018). A Graph Attention Network (Veličković et al., 2018) is trained as the classifier.

Ground-Truth Explanations. By "ground-truth", we follow the prior studies (Ying et al., 2019; Yuan et al., 2020a; Luo et al., 2020) and treat the subgraphs coherent with model knowledge (e.g., the motif subgraphs in TR3) or human knowledge (e.g., the digit subgraphs in MNISTsup) as the ground-truth explanations. Although such ground-truth explanations might not fit the decision-making process of the model exactly, they contain sufficient discriminative information to help justify the explanations. Note that no ground-truth explanation is available in Graph-SST2.

Explainers. To explain the decisions made by these GNNs, we adopt several state-of-the-art explainers, including SA (Baldassarre & Azizpour, 2019), Grad-CAM (Selvaraju et al., 2017), GNNExplainer (Ying et al., 2019), CXPlain (Schwab & Karlen, 2019), PGM-Explainer (Vu & Thai, 2020), and Screener (Anonymous, 2021), to generate the explanatory subgraphs. Specifically, the top 15%, 20%, and 20% of edges of each full graph construct the explanatory subgraphs in TR3, MNISTsup, and Graph-SST2, respectively. We refer readers to Appendix D for more experimental details.

4.2 STUDY OF EXPLANATION EVALUATION (RQ1)

Deconfounded Evaluation Performance. For an explanation G_s, the conventional removal-based evaluation framework quantifies its importance as the subgraph-prediction correlation, termed Imp_re(G_s) = f(G_s); whereas our DSE framework focuses on the causal effect of G_s on Y, computed via Equation 1, which we denote by Imp_dse(G_s) for short. These importance scores broadly aim to reflect the discriminative information carried by G_s. Thanks to the ground-truth knowledge available in TR3 and MNISTsup, we are able to obtain a faithful and principled metric for the amount of discriminative information: the precision Prec(G_s, G_s^+) between the ground-truth explanation G_s^+ and the explanatory subgraph G_s. This precision metric allows us to perform a fair comparison between Imp_re(G_s) and Imp_dse(G_s) via:

$$
\rho_{\text{re}} = \rho\big([\text{Prec}(\mathcal{G}_s, \mathcal{G}_s^{+})], [\text{Imp}_{\text{re}}(\mathcal{G}_s)]\big), \qquad
\rho_{\text{dse}} = \rho\big([\text{Prec}(\mathcal{G}_s, \mathcal{G}_s^{+})], [\text{Imp}_{\text{dse}}(\mathcal{G}_s)]\big), \tag{8}
$$

where ρ is the correlation coefficient between the lists of precision and importance scores.
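For concreteness, below is a minimal sketch of how the precision and the correlation in Equation 8 can be computed, assuming edge-set representations of the subgraphs. SciPy's pearsonr is used as one common choice for ρ, which the text leaves unspecified; the function names are ours.

```python
from scipy.stats import pearsonr

def edge_precision(explanation_edges, ground_truth_edges):
    """Prec(G_s, G_s^+): fraction of explanation edges that fall inside the ground-truth motif."""
    explanation_edges = set(explanation_edges)
    return len(explanation_edges & set(ground_truth_edges)) / max(len(explanation_edges), 1)

def evaluation_correlation(precisions, importances):
    """rho([Prec(G_s, G_s^+)], [Imp(G_s)]) as in Equation 8."""
    rho, _ = pearsonr(precisions, importances)
    return rho

# e.g. rho_re  = evaluation_correlation(prec_list, removal_importances)
#      rho_dse = evaluation_correlation(prec_list, dse_importances)
```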
We present the results in Figure 4 and draw some interesting insights:

• Insight 1: Removal-based evaluation hardly reflects the importance of explanations. In most cases, Prec(G_s, G_s^+) is negatively correlated with the importance. This again shows that simply discarding a part of a graph could violate some underlying properties of graphs and mislead the target GNN, which is consistent with the adversarial attack works (Dai et al., 2018; Zügner et al., 2018). Moreover, the explainers that target high prediction accuracy, such as GNNExplainer, are easily distracted by the OOD effect and thus miss the important subgraphs.

• Insight 2: Deconfounded evaluation quantifies the explanation importance more faithfully. Substantially, ρ_dse greatly improves after the front-door adjustment via the surrogate variable. The most notable case is GNNExplainer in MNISTsup, where ρ_dse = 0.17 is a tremendous increase from ρ_re = −0.11. Although our DSE alleviates the OOD problem significantly, weak positive or negative correlations still exist, which indicates the limitation of the current CVGAE. We leave the exploration of higher-quality generation to future work.

Revisiting & Reranking Explainers. Here we investigate the rankings of explainers generated from different evaluation frameworks, and further compute the Spearman rank correlations between these evaluation rankings and the reference rankings of explainers. Specifically, for TR3 and MNISTsup with ground-truth explanations, we regard the ranks w.r.t. precision as the references, while obtaining the reference for Graph-SST2 via a user study (Footnote 2). Such a reference offers human knowledge about explanations and benchmarks the comparison. We show the results in Table 1 and conclude:

• Insight 3: DSE presents a fairer and more reliable comparison among explainers. The DSE-based rankings are highly consistent with the references, while the removal-based rankings struggle to pass the check. In particular, we observe that for TR3, the unrealistic spliced inputs cause a flat ranking w.r.t. Imp_re. We find that various input subgraphs are predicted as the cycle class; that is, the target GNN model acts as a deterministic gambler on severely OOD subgraphs. In contrast, DSE outputs a more informative ranking. For MNISTsup, GNNExplainer with the highest precision is overly underrated by the removal-based evaluation framework, but DSE justifies its position faithfully. For Graph-SST2, although the OOD problem seems to be minor, DSE can still achieve a significant improvement.

Footnote 2: 70 volunteers are engaged, where each was asked to answer 10 questions randomly sampled from 32 movie reviews and to choose the best explanations generated by the explainers. See Appendix E for more details.

Table 2: Importance scores or probabilities of subgraphs before and after feature removal (the value in parentheses is the drop from the full graph to the subgraph).

                              TR3              MNISTsup         Graph-SST2
  Imp(G) or GMM(G)            0.958 (-0.520)   0.982 (-0.574)   35.3 (-11.3)
  Imp(G_s^+) or GMM(G_s)      0.438            0.408            24.0

Table 3: Performance of generators in terms of Validity (VAL↑) and Fidelity (FID↓).

            TR3                         MNISTsup                      Graph-SST2
            Imp(G_s^*)  VAL↑    FID↓    Imp(G_s^*)  VAL↑     FID↓     GMM(G_s^*)  VAL↑    FID↓
  Random    0.451       0.013   0.794   0.448       0.040    1.325    38.8        14.8    0.060
  VGAE      0.469       0.031   0.754   0.205       -0.203   1.501    37.6        13.6    0.078
  ARGVA     0.392       0.061   0.726   0.466       0.058    1.306    31.0        7.0     0.079
  CVGAE     0.603       0.165   0.598   0.552       0.144    0.910    45.8        21.8    0.057

Case Study. We present a case study in Graph-SST2 to illustrate how DSE mitigates the potential OOD problem. See Appendix F for another case study on TR3. In Figure 5, G is a graph predicted as "negative" sentiment.
The explanatory subgraph G_s emphasizes tokens like "weak" and relations like "n't→funny", which is cogent according to human knowledge. However, its removal-based importance is highly underestimated at 0.385, possibly due to its disconnectivity or sparsity after feature removal. To mitigate the OOD problem, DSE samples 50 surrogate graphs from the generator, performs the front-door adjustment, and justifies the subgraph importance as 0.913, which shows the effectiveness of our DSE framework. We also observe some limitations of the generator: (1) due to the limited training data, the generator only reflects the distribution of the observed graphs, which makes some generations grammatically wrong; (2) the generation is constrained within the complete graph determined by the node set of the explanatory subgraph, which limits the quality of deconfounding. As we mainly focus on the OOD problem, we leave improving the generator's capability to future work.

4.3 STUDY OF GENERATORS (RQ2)

The generator plays an important role in our DSE framework: it aims to generate valid surrogates that conform to the data distribution. To evaluate the generator's quality, we compare it with three baselines: a random generator, a variational graph auto-encoder (VGAE) (Thomas N. Kipf, 2016), and an adversarially regularized variational graph auto-encoder (ARGVA) (Pan et al., 2018). We perform the evaluation based on two metrics:

(1) Validity. For a ground-truth explanation G_s^+ that contains all the discriminative information of the full graph G, the importance of its surrogate graph G_s^* should be no lower than that of G_s^+ itself. The difference between the two importance scores indicates the validity of the generator; thus we define

$$\text{VAL} = \mathbb{E}_{\mathcal{G}}\big[\text{Imp}(\mathcal{G}_s^*) - \text{Imp}(\mathcal{G}_s^+)\big].$$

For Graph-SST2, where the class-wise features are intractable, we leverage the embeddings of the training graphs and additionally train a Gaussian Mixture Model (GMM) as our distribution prior. We then compute the average log-likelihood of random subgraphs after in-filling, i.e.,

$$\text{VAL} = \mathbb{E}_{\mathcal{G}}\,\mathbb{E}_{\mathcal{G}_s\sim\text{Random}(\mathcal{G})}\big[\text{GMM}(\mathcal{G}_s^*) - \text{GMM}(\mathcal{G}_s)\big].$$

(2) Fidelity. For a finer-grained assessment w.r.t. the prediction probability of arbitrary random subgraphs, we adopt the metric following Frye et al. (2021):

$$\text{FID} = \mathbb{E}_{\mathcal{G}}\,\mathbb{E}_{\mathcal{G}_s}\,\mathbb{E}_{y}\,\big|f_y(\mathcal{G}) - \mathbb{E}_{\mathcal{G}_s^*}[f_y(\mathcal{G}_s^*)]\big|^2.$$

This measures how well the surrogates cover the target prediction distribution.

Before comparing different generators, we first compute the importance scores or probabilities of the graphs before and after feature removal, which are summarized in Table 2. When inspecting the removal results without any in-fills, the OOD problem is severe: in TR3 and MNISTsup, the importance of the ground-truth subgraphs only reaches 43.8% and 40.8%, respectively, far away from the target importance of the full graphs; the same holds analogously for Graph-SST2. For the performance of the generators w.r.t. the two metrics, we summarize the average results over 5 runs in Table 3:
• The performance of the baselines is poor. This suggests that they can hardly fit the target conditional distribution.
• CVGAE outperforms the other generators consistently across all cases, thus justifying the rationale and effectiveness of our proposed generator and adversarial training paradigm. For example, in TR3, CVGAE significantly increases the VAL scores and mitigates the OOD effect effectively.

Moreover, we conduct ablation studies and sensitivity analysis in Appendix G to better understand the model components and validate the effectiveness of the designed objective.
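Before moving on, here is a minimal sketch of how the two generator metrics above can be computed from model outputs. The argument names are hypothetical; the inputs are assumed to be pre-computed importance scores and class probabilities produced by the evaluator and the target GNN.

```python
import numpy as np

def validity(surrogate_importances, ground_truth_importances):
    """VAL = E[ Imp(G_s^*) - Imp(G_s^+) ], averaged over graphs (TR3 / MNISTsup variant)."""
    return float(np.mean(np.asarray(surrogate_importances) - np.asarray(ground_truth_importances)))

def fidelity(full_graph_probs, surrogate_probs_per_subgraph):
    """FID = E | f_y(G) - E_{G_s^*}[ f_y(G_s^*) ] |^2, averaged over graphs, subgraphs and classes."""
    gaps = [(p_full - np.mean(p_surrogates)) ** 2
            for p_full, p_surrogates in zip(full_graph_probs, surrogate_probs_per_subgraph)]
    return float(np.mean(gaps))
```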
5 RELATED WORK

Post-hoc Explainability of GNNs. Inspired by explainability research in computer vision, Baldassarre & Azizpour (2019); Pope et al. (2019); Schnake et al. (2020) obtain gradient-like scores of the model's outcome or loss w.r.t. the input features. Another line of work (Luo et al., 2020; Ying et al., 2019; Yuan et al., 2020a; Yue Zhang, 2020; Michael Sejr Schlichtkrull, 2021) learns masks over graph features. Typically, GNN-Explainer (Ying et al., 2019) applies instance-wise masks to the messages carried by graph structures and maximizes the mutual information between the masked graph and the prediction. Going beyond instance-wise explanation, PGExplainer (Luo et al., 2020) generates masks for multiple instances inductively. Recently, researchers have adopted causal explainability (Pearl & Mackenzie, 2018) to uncover the causation behind model predictions. For instance, CXPlain (Schwab & Karlen, 2019) quantifies a feature's importance by leaving it out. PGM-Explainer (Vu & Thai, 2020) performs perturbations on graph structures and builds a Bayesian network upon the perturbation-prediction pairs. Causal Screening (Screener) (Anonymous, 2021) measures the importance of an edge as its causal effect, conditional on the previously selected structures. Lately, SubgraphX (Yuan et al., 2021) explores different subgraphs with Monte-Carlo tree search and evaluates subgraphs with the Shapley value (Kuhn & Tucker, 1953).

Counterfactual Generation for the OOD Problem. The OOD effect of feature removal has been investigated in several other domains. There are generally two classes of generation: (i) Static generation. For example, Fong & Vedaldi (2017) and Dabkowski & Gal (2017) adopted blurred inputs and random colors as the image reference, respectively. Due to the unnatural in-filling, the generated images disregard the data distribution and can still introduce confounding bias. (ii) Adaptive generation: Chang et al. (2019); Frye et al. (2021); Agarwal et al. (2019); Kim et al. (2020). Like DSE, the generators of these methods overcome the aforementioned defects by generating data that conforms to the training distribution. For example, in computer vision, FIDO (Chang et al., 2019) generates image-specific explanations that respect the data distribution, answering "Which region, when replaced by plausible alternative values, would maximally change classifier output?". Regarding the differences: firstly, DSE's formulation of importance involves an additional adjustment on G_s and guarantees unbiasedness when introducing the surrogate variable G_s^*, which is commonly discarded by prior works that rely on in-fillings only; we offer a comparison with FIDO in Appendix B. Secondly, the distribution of graph data is more complicated to model than that of other domains, and the proposed CVGAE is carefully designed for graph data, where the contrastive loss and the adversarial training framework are shown to be effective for learning the data distribution of graphs.

6 CONCLUSION

In this work, we investigate the OOD effect on the explanation evaluation of GNNs. Taking a causal view, we identify the OOD effect, i.e., the distribution shift between full graphs and subgraphs, as the confounder between the explanatory subgraphs and the model prediction, which makes the evaluation less reliable. To mitigate it, we propose a deconfounding evaluation framework that exploits the front-door adjustment to measure the causal effect of the explanatory subgraphs on the model prediction.
A deep generative model is devised to achieve the front-door adjustment by generating in-distribution surrogates of the subgraphs. In so doing, we can reliably evaluate the explanatory subgraphs. As the evaluation of explanations fundamentally guides the objectives in GNN explainability, this work offers in-depth insights for future interpretability systems.

ETHICS STATEMENT

This work raises concerns about removal-based evaluation in the explainability literature and proposes the Deconfounded Subgraph Evaluator. For the user study that involves human subjects, we detail the fair evaluation procedure for each explanation generated by the explainers in Appendix E. For real-world applications, we admit that modeling the distribution shift could be a barrier to fulfilling evaluation faithfulness. However, as shown in the paper, improper evaluation under the OOD setting largely biases the inspection of the model's decision-making process and the assessment of the quality of explainers. Therefore, we argue that explainability research should exhibit faithful explanation evaluation before auditing deep models' actual decision-making process. A wrongly evaluated explanation might do greater harm than an incorrect prediction, as the former could affect the general adjustment (e.g., structure construction) and human perspective (e.g., fairness check) of the model.

REPRODUCIBILITY STATEMENT

We have made great efforts to ensure reproducibility in this paper. Firstly, we make all causal assumptions clear in Section 2.2, Section 3.1, and Appendix A. For datasets, we have released the synthetic dataset, which can be accessed via the link in Section 1, while the other two datasets are publicly available. We also include our code for model construction in the link. In Appendix D, we report the hyper-parameter settings used in our implementation for model training.

B COMPARISON OF IMPORTANCE ESTIMATIONS

In this section, we compare our proposed estimation via the front-door adjustment with the estimation in FIDO (Chang et al., 2019). We rephrase the two estimations as

$$
\text{Imp}_{\text{dse}}(\mathcal{G}_s) = \sum_{\mathcal{G}_s^*} P(G_s^*=\mathcal{G}_s^* \mid G_s=\mathcal{G}_s)\, P(Y \mid G_s^*=\mathcal{G}_s^*)
= \sum_{\mathcal{G}_s^*} P(G_s^*=\mathcal{G}_s^* \mid G_s=\mathcal{G}_s) \sum_{\mathcal{G}_s'} P(Y \mid G_s^*=\mathcal{G}_s^*, G_s=\mathcal{G}_s')\, P(G_s=\mathcal{G}_s') \tag{9}
$$

and

$$
\text{Imp}_{\text{FIDO}}(\mathcal{G}_s) = \sum_{\mathcal{G}_s^*} P(G_s^*=\mathcal{G}_s^* \mid G_s=\mathcal{G}_s)\, P(Y \mid G_s^*=\mathcal{G}_s^*) \tag{10}
$$

where DSE additionally adjusts over G_s (represented as \mathcal{G}_s'). To make the difference clear, we consider the factor involving Y in each estimation (the underlined part in the original equations). For Equation 9, we have

$$
\begin{aligned}
\sum_{\mathcal{G}_s'} P(Y \mid G_s^*=\mathcal{G}_s^*, G_s=\mathcal{G}_s')\, P(G_s=\mathcal{G}_s')
&= \sum_{\mathcal{G}_s'} P(Y \mid G_s^*=\mathcal{G}_s^*, G_s=\mathcal{G}_s')\, P(G_s=\mathcal{G}_s' \mid G_s^*=\mathcal{G}_s^*)\, \frac{P(G_s=\mathcal{G}_s')}{P(G_s=\mathcal{G}_s' \mid G_s^*=\mathcal{G}_s^*)} \\
&= \sum_{\mathcal{G}_s'} P(Y, G_s=\mathcal{G}_s' \mid G_s^*=\mathcal{G}_s^*)\, \frac{P(G_s=\mathcal{G}_s')}{P(G_s=\mathcal{G}_s' \mid G_s^*=\mathcal{G}_s^*)}
\end{aligned} \tag{11}
$$

while for the formulation of Equation 10, we have

$$
P(Y \mid G_s^*=\mathcal{G}_s^*) = \sum_{\mathcal{G}_s'} P(Y, G_s=\mathcal{G}_s' \mid G_s^*=\mathcal{G}_s^*) \tag{12}
$$

Comparing these two terms, we can see that Equation 12 is biased under our causal assumption. Intuitively, each contribution to the importance of G_s^* on Y should be inversely proportional to the posterior probability, i.e., the probability of \mathcal{G}_s' given the observation \mathcal{G}_s^*. However, FIDO fails to consider the causal relation G_s → G_s^*, which biases the approximation of the genuine causal effect under our causal assumption. Returning to our proposed estimation: since we have collected (\mathcal{G}_s, \mathcal{G}_s^*) pairs via Monte-Carlo simulation, the additional adjustment on G_s (i.e., \mathcal{G}_s') can be achieved via Equation 11.
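A toy numeric illustration of the difference follows; all probabilities below are made up for illustration and are not taken from the paper. The FIDO-style factor in Equation 12 marginalizes the joint directly, whereas the DSE factor in Equation 11 reweights each term by P(G_s = G_s') / P(G_s = G_s' | G_s^*).

```python
# Toy numbers (illustrative only): two candidate subgraphs G1, G2 and a single surrogate G*.
p_gs       = {"G1": 0.5, "G2": 0.5}    # P(G_s = G_s'), the prior over subgraphs
p_gs_post  = {"G1": 0.8, "G2": 0.2}    # P(G_s = G_s' | G_s^* = G*), the posterior
p_y_joint  = {"G1": 0.72, "G2": 0.10}  # P(Y, G_s = G_s' | G_s^* = G*)

# FIDO-style factor (Equation 12): marginalise the joint directly.
fido_factor = sum(p_y_joint.values())                                        # 0.82

# DSE factor (Equation 11): reweight each joint term by P(G_s') / P(G_s' | G_s^*).
dse_factor = sum(p_y_joint[g] * p_gs[g] / p_gs_post[g] for g in p_y_joint)   # 0.70

print(fido_factor, dse_factor)
```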
C DSE FOR DELETION-BASED EVALUATION

Based on the idea of deletion-based evaluation, we can instead use the average causal effect (ACE) (Holland, 1988) to look for the smallest deletion graph by conducting two interventions, do(G_s = \mathcal{G}) (i.e., no feature removal) and do(G_s = \mathcal{G}_{/s}), where \mathcal{G}_{/s} denotes the complement of the explanatory subgraph \mathcal{G}_s, meaning that the GNN input receives treatment and control, respectively. Formally, we have

$$
\text{Imp}^{\text{fid}}_{\text{dse}}(G_s=\mathcal{G}_s) = P\big(Y \mid do(G_s=\mathcal{G})\big) - P\big(Y \mid do(G_s=\mathcal{G}_{/s})\big) \tag{13}
$$

Then, we can adjust each term similarly to Equation 1, obtaining an unbiased importance value as the result of deletion-based evaluation.

D EXPERIMENTAL DETAILS

In this paper, all experiments are run on a single Tesla V100 SXM2 GPU (32 GB). The well-trained GNNs used in our experiments achieve high classification accuracies of 0.958 on TR3, 0.982 on MNISTsup, and 0.909 on Graph-SST2. Now we introduce the model construction of the proposed generator. The encoder is a Crystal Graph Convolutional Neural Network (Xie & Grossman, 2018), which contains three convolutional layers. The encoding dimensions for the TR3, MNISTsup, and Graph-SST2 datasets are 256, 64, and 256, respectively. For the decoder, we adopt two fully connected layers with ReLU activations, where the numbers of neurons match the encoding dimensions. Next, we summarize the pseudocode for the adversarial training in Algorithm 1.

Algorithm 1: Generative Adversarial Training. All experiments in the paper used the default values m = 256, α = 2×10^-4, β = 1×10^-4, ω = λ = 5, τ = 0.1.
Require: P_r, the distribution of real graphs; r, the masking ratio.
Require: m, batch size; α, learning rate; β, γ, λ, ω, τ, hyper-parameters.
1:  µ ← µ_0; θ ← θ_0
2:  while the loss in Equation 4 has not converged do
3:      # Discriminator's training
4:      Sample a batch {G^(i)}_{i=1}^m ∼ P_r from the real graphs.
5:      Randomly generate broken graphs {G_s^(i)}_{i=1}^m from {G^(i)}_{i=1}^m with masking ratio r.
6:      Embed the nodes through the encoder q(Z | {G_s^(i), G^(i)}_{i=1}^m).
7:      Decode the edge probabilities and sample in-fill graphs {Ĝ_s̄^(i)}_{i=1}^m ∼ p(Ĝ_s̄ | Z).
8:      Compute the Discriminator's loss from Equation 7.
9:      Update the parameters µ with back-propagation.
10:     # Generator's training
11:     Repeat the operations from lines 4 to 7.
12:     Compute the Generator's loss from Equations 4, 5, and 6.
13:     Update the parameters θ with back-propagation.
14: end while

For the other hyper-parameters, we set r = 0.3 and γ = 3 for the TR3 dataset; for the MNISTsup and Graph-SST2 datasets, we set r = 0.6 and γ = 1. We use Adam (Kingma & Ba, 2014) with a weight decay rate of 1e-5 for optimization. The maximum number of epochs is 100.
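Line 8 of Algorithm 1 computes the discriminator's loss from Equation 7. Below is a minimal single-sample sketch of that step; it assumes the generated graph is represented by a differentiable (soft) adjacency tensor so that the gradient penalty is well-defined, and all names are illustrative rather than taken from the released code.

```python
import torch

def discriminator_step(d, real_graph, soft_adj_fake, label, lam=5.0):
    """One-sample sketch of Equation 7: Wasserstein score gap plus gradient penalty,
    returned as a loss to minimise (the discriminator maximises Equation 7)."""
    soft_adj_fake = soft_adj_fake.detach().requires_grad_(True)
    score_real = d(real_graph, label)
    score_fake = d(soft_adj_fake, label)
    grad = torch.autograd.grad(score_fake.sum(), soft_adj_fake, create_graph=True)[0]
    penalty = (grad.norm(p=2) - 1.0) ** 2
    return -(score_real - score_fake - lam * penalty)
```

Standard WGAN-GP implementations often evaluate the penalty at interpolations between real and generated samples; Equation 7 as written penalizes the gradient at the generated graph itself, which is what this sketch follows.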
E DETAILED USER STUDY

The user study starts with instructions to the participants. In each question, they see a sentence (a movie review) and its sentiment (Positive or Negative), e.g.,

Sentence: "is more of an ordeal than an amusement"   Sentiment: Negative

Then several explanations are presented as answers to "Why is the sentiment of this sentence negative (positive)?". The explanations (see Figure 7) are shown in graph form (edges indicate relations between words), and the colors of more important features are darker. Participants were then asked to choose the best explanation(s). A good explanation should be concise, informative, and the rational cause of the sentence's sentiment. In this case, (B) could be the best explanation, since "ordeal" mostly decides the negative sentiment, while (A) only identifies plain words like "more than" and (C) is quite the opposite. Note that participants can choose multiple answers, and some of the presented choices are identical. Each participant is presented with 10 questions out of the 32 in total, and we compute the average scores for the explainers.

F EXTRA CASE STUDY

In this section, we further present a case study for the TR3 dataset. In Figure 8, the prediction probabilities under the removal-based (OOD) evaluation for the ground-truth explanatory subgraphs in each row remain the same as the edge selection ratios vary, namely 100%, 0%, and 0%, respectively. In contrast, the evaluation results generated by our DSE show strong rationality. Specifically, the importance score computed by our DSE increases with the number of selected ground-truth edges. This well validates our DSE framework, where we mitigate the OOD effect by generating plausible surrogates, making the graphs to be evaluated conform to the graph distribution of the training data. In this way, the effect of D → Y can hardly affect our assessment of the explanatory subgraph. Then, as the explanatory subgraph becomes more informative and discriminative, it offers more evidence for the GNN to classify it as the target class that we want to explain, yielding faithful evaluation results.

Figure 8: Three cases in the TR3 dataset (panels: Cycle, House, Crane; y-axis: Imp_dse). Each graph on the left represents the ground-truth explanatory subgraph (red) for explaining a given graph. One of the complement graphs (light blue) generated by CVGAE is also shown with each explanatory subgraph. As the edge selection ratio increases in each row, the importance scores output by our DSE are shown on the right.

G ABLATION STUDY & SENSITIVITY ANALYSIS

We first conduct ablation studies to investigate the contributions of the contrastive parameter γ and the penalty parameter λ in CVGAE. The ablation models are obtained by (I) removing the contrastive loss, i.e., setting γ = 0, and (II) removing the penalty term in the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss, i.e., setting λ = 0. The performance of the ablation models is reported in Table 4. We observe that the superiority of CVGAE over the ablation models supports our model design by (i) smoothing the model optimization, which yields a more performant generator, and (ii) highlighting the class-discriminative information in the graph embeddings, which implicitly encodes the class information.

Also, we conduct a sensitivity analysis for CVGAE w.r.t. the hyper-parameters. Specifically, we select λ, the penalty weight in the WGAN loss (cf. Equation 7), and γ, the strength of the contrastive loss (cf. Equation 4), while we empirically found the performance to be relatively insensitive to the other parameters over a wide range. The results are shown in Figure 9. We observe that the best performance is achieved with λ taking values from 1 to 10, and γ taking values from 1 to 10 on the TR3 dataset and from 0.1 to 5 on the MNISTsup and Graph-SST2 datasets. We also found that a large λ generally causes an increase in the FID metric, as it may alleviate the penalty on the reconstruction errors, which further enlarges the difference between f_y(G) and E[f_y(G_s^*)].
1. What is the focus of the paper regarding GNN explanation methods?
2. What are the strengths of the proposed approach, particularly in its ability to capture the effect of OOD explanations?
3. What are the weaknesses of the paper, especially regarding its clarity and the impact of the generative model on the evaluation method?
4. How does the front-door adjustment mitigate the spurious path between the graph variable and the explainer variable?
5. Can you provide more details about the Conditional-VGAE used for generating graphs that cover the OOD case?
Summary Of The Paper

The paper sheds an exciting light on the problem of producing a meaningful evaluation of GNN explanation methods (at least a subset of them). The idea is to introduce a deconfounder D to capture the effect of OOD explanations. The authors give an interesting example on a well-known synthetic dataset where the weight of the explanation in the ground truth is lower than that of a clearly non-valid explanation when evaluated using the model to explain. The introduction of the deconfounder D creates a spurious path between the graph variable and the explainer variable. To mitigate this effect, they then introduce a front-door adjustment to the causal graph. The front-door adjustment requires a graph generator, and the authors use a novel Conditional-VGAE to generate graphs that will also cover the OOD case. The paper finally presents some experiments showing the evaluation method in action.

Review

As I have stated in the summary, the paper has caught an interesting problem with current explanation evaluation methods. This aspect is a strength! Another strength is that they cast the problem into a causal framework, making the reasoning behind the novel evaluation mechanism reasonably interpretable.

The main issue with the paper lies in its clarity. It is not challenging to follow the technical details but rather to understand what the authors want to do. For instance, the authors title Section 2.1 as "Problem Formulation" but I do not see any problem formulated in that section. Also, the evaluation method requires using a generative model to generate enough samples to be able to apply Equation 1. For the above reasons, I would like the authors to clearly state:
- Whenever a new model for explaining GNNs is developed, should one also generate graphs using CVGAE?
- How can one tell the superiority of the novel evaluation method from your paper? Where exactly is that assessed and proved? I tried to grasp it from reading the article, but I've not been able to.
- Can you please clearly state what is the impact of the generative model on your evaluation? The two baselines are very weak, in my opinion. One is random; OK, we should always include random baselines. The other one, though, is not conditional, which only makes explicit that a conditional generator is better. But I would have been surprised to find out that this was not the case. So, the question is: what are other baselines that would truly show the impact of the generator? Would it be sufficient to use whatever conditional method to generate graphs?

====== Minor Concerns ======
- with less discriminative information --> Define "less discriminative information"
- G_s is defined in the caption of Figure 1 but not in the text, and when you first use it, it is difficult to follow the sentence.
- what the full graph like --> what the full graph is like
- It harms the removal-based evaluation of the explanatory subgraph --> I've read this sentence over and over again but I've not been able to understand what you actually meant
- well-trained GNN predictor --> What does "well-trained" mean?
- it is rooted from G_s --> it is rooted on (?) G_s. (What does this sentence actually mean?)
- Equation (2): the product should have i = 1 and not just i
- auto-encoder is able to generate --> autoencoder we are able to generate
- the formula that comes after "formally" does not look correct (or I've not understood it)
- Besides, we introduce ... generated graphs --> Cannot understand it
- Equation equation 1 (in a couple of places)

Authors have addressed most of my concerns, except for #1, which is still not 100% clear. After the rebuttal, I've decided to raise my score.
However, as discussed in Section 1, the OOD effect is inherent in the removal-based evaluator, hindering the subgraph-prediction correlation from accurately estimating the subgraph importance. 2.2 STRUCTURAL CAUSAL MODEL To inspect the OOD effect rigorously, we take a causal look at the evaluation process with a Structural Causal Model (SCM I) in Figure 1b. We denote the abstract data variables by the nodes, where the directed links represent the causality. The SCM indicates how the variables interact with each other through the graphical definition of causation: • G→ Gs ← D. We introduce an abstract distribution shift variable D to sample a subgraph Gs from the edge distributions of the full graph G. • Gs → Y ← D. We denote Y as the prediction variable (e.g., logits output), which is determined by (1) the direct effect from Gs, and (2) the confounding effect caused by D. In particular, the former causation that led to the result is the focus of this work. We suggest readers to refer to Appendix A where we offer an elaboration of D. With our SCM assumption, directly measuring the importance of explanatory subgraphs is distracted by the backdoor path (Pearl, 2000), Gs ← D → Y . This path introduces the confounding associations between Gs and Y , which makes Gs and Y spuriously correlated, i.e., biases the subgraph-prediction correlations, thus making the evaluator invalid. How to mitigate the OOD effect and quantify Gs’s genuine causal effect on Y remains largely unexplored in the literature and is the focus of our work. 3 DECONFOUNDED EVALUATION OF EXPLANATORY SUBGRAPHS In this section, we propose a novel deconfounding framework to evaluate the explanatory subgraphs in a trustworthy way. Specifically, we first leverage the front-door adjustment (Pearl, 2000) to formulate a causal objective in Section 3.1. We then devise a conditional variational graph auto-encoders (CVGAE) as the effective implementation of our objective in Section 3.2. 1We focus on insertion-based evaluation here while we discuss deletion-based evaluation in Appendix C. 3.1 FRONT-DOOR ADJUSTMENT To the best of our knowledge, our work is the first to adopt the causal theory to solve the OOD problem in the explanation evaluation of GNNs. To pursue the causal effect of Gs on Y , we perform the calculus of the causal intervention P (Y = y|do(Gs = Gs)). Specifically, the do-calculus (Pearl, 2000; Pearl et al., 2016) is to intervene the subgraph variable Gs by cutting off its coming links and assigning it with the certain value Gs, making it unaffected from its causal parents G and D. From inspection of the SCM in Figure 1b, the distribution effect D acts as the confounder between Gs and Y , and opens the backdoor path Gs ← D → Y . However, as D is hardly measurable, we can not use the backdoor adjustment (Pearl, 2000; Pearl et al., 2016) to block the backdoor path from Gs to Y . Hence, the causal effect of Gs on Y is not identifiable from SCM I. However, we can go much further by considering SCM II in Figure 2 instead, where a mediating variable G∗s is introduced between Gs and Y : • Gs → G∗s . G∗s is the surrogate variable of Gs, which completes Gs to make them in the data distribution. First, it originates from and containsGs. Specifically, it imagines how the possible full graphs should be when observing the subgraph Gs. Second, G∗s should follow the data distribution and respect the inherent knowledge of graph properties, thus no link exists between D and G∗s . • G∗s → Y . 
This is based on our causal assumption that the causality-related information of Gs on Y , i.e., the discriminative information for Gs to make prediction, is well-preserved by G∗s . Thus, with the core of Gs, G∗s is qualified to serve as the mediator which further results in the model prediction. With SCM II, we can exploit the front-door adjustment (Pearl, 2000; Pearl et al., 2016) instead to quantify the causal effect of Gs on Y . Specifically, by summing over possible surrogate graphs G∗s of G∗s , we chain two identifiable partial effects of Gs on G ∗ s and G ∗ s on Y together: P (Y |do(Gs = Gs)) = ∑ G∗s P (Y |do(G∗s = G∗s ))P (G∗s = G∗s |do(Gs = Gs)) = ∑ G∗s ∑ G′s P (Y |G∗s = G∗s , Gs = G′s)P (Gs = G′s)P (G∗s = G∗s |do(Gs = Gs)) = ∑ G∗s ∑ G′s P (Y |G∗s = G∗s , Gs = G′s)P (Gs = G′s)P (G∗s = G∗s |Gs = Gs), (1) Specifically, we have P (G∗s|do(Gs = Gs)) = P (G∗s|Gs = Gs) as Gs is the only parent of G∗s . And we distinguish the Gs in our target expression P (Y |do(Gs = Gs)) between G′s, the latter of which is adjusted to pursue P (Y |do(G∗s = G∗s )). With the data of (Gs,G∗s ) pairs, we can obtain P (Y |G∗s = G∗s , Gs = G′s) by feeding the surrogate graph G∗s into the GNN predictor, conditional on the subgraph G′s; similarly, we can estimate P (Gs = G′s) statistically; P (G∗s = G∗s |Gs = Gs) is the conditional distribution of the surrogate variable, after observing the subgraphs. As a result, this front-door adjustment yields a consistent estimation of Gs’s effect on Y and avoids the confounding associations from the OOD effect. 3.2 DEEP GENERATIVE MODEL However, it is non-trivial to instantiate G∗s and collect the (Gs,G∗s ) pairs. We get inspiration from the great success of generative models and devise a novel probabilistic model, conditional variational graph auto-encoder (CVGAE), and an adversarial training framework, to generate G∗s . Conditional Generation. Inspired by previous works (Thomas N. Kipf, 2016; Liu et al., 2018), we model the data distribution via a generative model gθ parameterized by θ. It is composed of an encoder q(Z|G,Gs) and a decoder p(G∗s |Z). Specifically, the encoder q(Z|G,Gs) embeds each node i in G with a stochastic representation zi, and summarize all node representations in Z: q(Z|G,Gs) = N∏ i=1 q(zi|G,Gs), with q(zi|G,Gs) = N (zi | [µ1i,µ2i], [ σ21i 0 0 σ22i ] ) (2) where zi is sampled from a diagonal normal distribution by mean vector [µ1i,µ2i] and standard deviation vector diag(σ21i,σ 2 2i); µ1 = fµ(G) and logσ1 = fσ(G) denote the matrices of mean vectors µ1i and standard deviation vectors logσ1i respectively, which are derived from two GNN models fµ and fσ on the top of the full graph G; similarly, µ2 = fµ(Gs) and logσ2 = fσ(Gs) are on the top of the subgraph Gs. Then, the decoder p(G∗s |Z) generates the valid surrogates: p(G∗s |Z) = N∏ i N∏ j p(Aij |zi, zj), with p(Aij = 1|zi, zj) = fA([zi, zj ]), (3) whereAij = 1 indicates the existence of an edge between nodes i and j; fA is a MLP, which takes the concatenation of node representations zi and zj as the input and outputs the probability of Aij = 1. Leveraging the variational graph auto-encoder, we are able to generate some counterfactual edges that never appear in G and sample G∗s from the conditional distribution p(G∗s |Z), formally, G∗s ∼ p(G∗s|Z). As a result, P (G∗s = G∗s |Gs = Gs) in Equation 1 is identified by p(G∗s |Z). The quality of the generator directly affects the quality of the surrogate graphs, further determines how well the frontdoor adjustment is conducted. 
Next, we will detail an adversarial training framework to optimize the generator, which is distinct from the standard training of VAE. Adversarial Training. To achieve high-quality generation, we get inspiration from the adversarial training (Goodfellow et al., 2020; Yue et al., 2021) and devise the following training objective: min θ LVAE + γLC +max µ ωLD, (4) where γ, ω are trade-off hyper-parameters. These losses are carefully designed to assure the generation follows the data distribution. Next, we will elaborate on each of them. LVAE = −EG [Eq(Z|G,Gs)[log p(Ĝs|Z)]] + βEG [DKL(q(Z|G,Gs)||p(Z))], (5) We first minimize the β-VAE loss(Higgins et al., 2017), and the first term is the reconstruction loss responsible to predict the probability of edges’ existence; the second term is the KL-divergence between the variational and prior distributions. Here we resort to the isotropic Gaussian distribution p(Z) = ∏ i p(zi) = ∏ iN (zi|0, I) as the prior. β reweighs the KL-divergence, which promises to learn the disentangled factors in Z (Higgins et al., 2017; Yue et al., 2021; Suter et al., 2019). Moreover, we highlight the class-discriminative information in Z, by encouraging the agreement between graph representations with the same class compared to that with different classes. Technically, the contrastive loss is adopted: LC = −EG [log ∑ G′∈B+ exp (s(zG , zG′)/τ)∑ G′′∈B+∪B− exp (s(zG , zG′′)/τ) ], (6) where zG is the representation of G that aggregates all node representations Z together; s is the similarity function, which is given by an inner product here; τ is the temperature hyper-parameter; B+ is the graph set having the same class to G, while the graphs involved in B− have different classes from G. Minimizing this loss enables the generator to go beyond the generic knowledge and uncover the class-wise patterns of graph data. Besides, we introduce a discriminative model dµ to distinguish the generated graphs. Specifically, we set it as a probability-conditional GNN (Fey & Lenssen, 2019) parameterized by µ. It takes a graph as input and outputs a score between 0 to 1, which indicates the confidence of the graph being realistic. Hence, given a real graph G with the ground-truth label y, we can use the generator gθ to generate G∗s . Then the discriminator learns to assign G with a large score while labeling G∗s with a small score. To optimize the discriminator, we adopt the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss: LD = EG [Ep(G∗s |Z)[d(G, y)− d(G ∗ s , y)− λ(||∇G∗s d(G ∗ s , y)||2 − 1)2]], (7) where d(G∗s , y) is the probability of generating G∗s from the generator; λ is the hyper-parameter. By playing the min-max game between the generator and the discriminator in Equation 4, the generator can create the surrogate graphs from the data distribution plausibly. Subgraph Evaluation. With the well-trained generator g∗θ whose parameters are fixed, we now approximate the causal effect of Gs on Y . Here we conduct Monte-Carlo simulation based on g∗θ to sample a set of plausible surrogate graphs {G∗s} from p(G∗s |Z). Having collected the (Gs,G∗s ) data, we can arrive the estimation of Equation 1. 4 EXPERIMENTS We aim to answer the following research questions: • Study of Explanation Evaluation. How effective is our DSE in mitigating the OOD effect and evaluating the explanatory subgraph more reliably? (Section 4.2) • Study of Generator. How effective is our CVGAE in generating the surrogates for the explanatory subgraphs and making them conform to the data distribution? 
(Section 4.3) 4.1 EXPERIMENTAL SETTINGS Datasets & Target GNNs. We first train various target GNN classifiers on the three datasets: • TR3 is a synthetic dataset involving 3000 graphs, each of which is constructed by connecting a random tree-shape base with one motif (house, cycle, crane). The motif type is the ground-truth label, while we treat the motifs as the ground-truth explanations following Ying et al. (2019); Yuan et al. (2020a). A Local Extremum GNN (Ranjan et al., 2019) is trained for classification. • MNIST superpixels (MNISTsup) (Monti et al., 2017) converts the MNIST images into 70,000 superpixel graphs. Every graph with 75 nodes is labeled as one of 10 classes. We train a Splinebased GNN (Fey et al., 2018) as the classifier model. The subgraphs representing digits can be viewed as human explanations. • Graph-SST2 (Yuan et al., 2020b) is based on text sentiment dataset SST2 (Socher et al., 2013) and converts the text sentences to graphs where nodes represent tokens and edges indicate relations between nodes. Each graph is labeled by its sentence sentiment. The node embeddings are initialized by the pre-trained BERT word embeddings (Devlin et al., 2018). Graph Attention Network (Veličković et al., 2018) is trained as the classifier. Ground-Truth Explanations. By “ground-truth”, we follow the prior studies (Ying et al., 2019; Yuan et al., 2020a; Luo et al., 2020) and treat the subgraphs coherent to the model knowledge (e.g., the motif subgraphs in TR3) or human knowledge (e.g., the digit subgraphs in MNISTsup) as the ground-truth explanations. Although such ground-truth explanations might not fit the decision-making process of the model exactly, they contain sufficient discriminative information to help justify the explanations. Note that no ground-truth explanation is available in Graph-SST2. Explainers. To explain the decisions made by these GNNs, we adopt several state-of-the-art explainers, including SA (Baldassarre & Azizpour, 2019), Grad-CAM (Selvaraju et al., 2017), GNNExplainer (Ying et al., 2019), CXPlain (Schwab & Karlen, 2019), PGM-Explainer (Vu & Thai, 2020), Screener (Anonymous, 2021), to generate the explanatory subgraphs. Specifically, top-15%, 20%, 20% of edges on the full graph instance construct the explanatory subgraphs in TR3, MNIST, and Graph-SST2, respectively. We refer readers to Appendix D for more experimental details. 4.2 STUDY OF EXPLANATION EVALUATION (RQ1) Deconfounded Evaluation Performance. For an explanation Gs, the conventional removal-based evaluation framework quantifies its importance as the subgraph-prediction correlation, termed Impre(Gs) = f(Gs); whereas, our DSE framework focuses on the causal effect caused by Gs on Y which is computed based on Equation 1, and we denote it as Impdse(Gs) for short. These importance scores broadly aim to reflect the discriminative information carried by Gs. Thanks to the ground-truth knowledge available in TR3 and MNISTsup, we are able to get a faithful and principled metric to measure the discriminative information amount — the precision Prec(Gs,G+s ) between the ground-truth explanation G+s and the explanatory subgraph Gs. This precision metric allows us to perform a fair comparison between Impre(Gs) and Impdse(Gs) via: ρre = ρ([Prec(Gs,G+s )], [Impre(Gs)]), ρdse = ρ([Prec(Gs,G+s )], [Impdse(Gs)]), (8) where ρ is the correlation coefficient between the lists of precision and importance scores. 
We present the results in Figure 4 and have some interesting insights: • Insight 1: Removal-based evaluation hardly reflects the importance of explanations. In most cases, Prec(Gs,G+s ) is negatively correlated with the importance. This again shows that simply discarding a part of a graph could violate some underlying properties of graphs and mislead the target GNN, which is consistent with the adversarial attack works (Dai et al., 2018; Zügner et al., 2018). Moreover, the explainers that target high prediction accuracy, such as GNNExplainer, are easily distracted by the OOD effect and thus miss the important subgraphs. • Insight 2: Deconfounded evaluation quantifies the explanation importance more faithfully. Substantially, ρdse greatly improves after the frontdoor adjustments via the surrogate variable. The most notable case is GNNExplainer in MNISTsup, where ρdse = 0.17 achieves a tremendous increase from ρdse = −0.11. Although our DSE alleviates the OOD problem significantly, weak positive or negative correlations still exist, which indicates the limitation of the current CVGAE. We leave the exploration of higher-quality generation in future work. Revisiting & Reranking Explainers. Here we investigate the rankings of explainers generated from different evaluation frameworks, and further compute the Spearman rank correlations between these evaluation rankings and the reference rankings of explainers. Specifically, for TR3 and MNISTsup with ground-truth explanations, we regard the ranks w.r.t. precision as the references, while obtaining the reference of Graph-SST2 by a user study2. Such a reference offers the human knowledge for explanations and benchmarks the comparison. We show the results in Table 1 and conclude: • Insight 3: DSE presents a more fair and reliable comparison among explainers. The DSEbased rankings are highly consistent with the references, while the removal-based rankings struggle to pass the check. In particular, we observe that for TR3, the unrealistic splicing inputs cause a plain ranking w.r.t. Impre. We find that various input subgraphs are predicted as cycle class. That is, the target GNN model is a deterministic gambler with serious OOD subgraphs. In contrast, DSE outputs a more informative ranking; For MNISTsup, GNNExplainer with the highest precision 270 volunteers are engaged, where each was asked to answer 10 questions randomly sampled from 32 movie reviews and choose the best explanations generated by the explainers. See Appendix E for more details. Table 2: Importance scores or probabilities of subgraphs before and after feature removal. TR3 MNISTsup Graph-SST2 Imp(G) or GMM(G) 0.958−0.520 0.982−0.574 35.3−11.3 Imp(G+s ) or GMM(Gs) 0.438 0.408 24.0 Table 3: Performances of Generators in terms of Validity and Fidelity. TR3 MNISTsup Graph-SST2 Imp(G∗s) VAL↑ FID↓ Imp(G∗s) VAL↑ FID↓ GMM(G∗s) VAL↑ FID↓ Random 0.451 0.013 0.794 0.448 0.040 1.325 38.8 14.8 0.060 VGAE 0.469 0.031 0.754 0.205 -0.203 1.501 37.6 13.6 0.078 ARGVA 0.392 0.061 0.726 0.466 0.058 1.306 31.0 7.0 0.079 CVGAE 0.603 0.165 0.598 0.552 0.144 0.910 45.8 21.8 0.057 is overly underrated by the removal-based evaluation framework, but DSE justifies its position faithfully; For Graph-SST2, although the OOD problem seems to be minor, DSE can still achieve significant improvement. Case Study. We present a case study in Graph-SST2 to illustrate how DSE mitigates the potential OOD problem. See Appendix F for another case study on TR3. In Figure 5, G is a graph predicted as “negative" sentiment. 
The explanatory subgraph G_s emphasizes tokens like “weak” and relations like “n’t→funny”, which is cogent according to human knowledge. However, its removal-based importance is highly underestimated at 0.385, possibly due to its disconnectivity or sparsity after feature removal. To mitigate the OOD problem, DSE samples 50 surrogate graphs from the generator, performs the front-door adjustment, and justifies the subgraph importance as 0.913, which shows the effectiveness of our DSE framework. We also observe some limitations of the generator: (1) Due to the limited training data, the generators only reflect the distribution of the observed graphs, which makes some generations grammatically wrong. (2) The generation is constrained within the complete graph determined by the node set of the explanatory subgraph, which limits the quality of deconfounding. As we mainly focus on the OOD problem, we leave improving the ability of the generator to future work.

4.3 STUDY OF GENERATORS (RQ2)

The generator plays an important role in our DSE framework: it aims to generate valid surrogates that conform to the data distribution. To evaluate the generator’s quality, we compare it with three baselines: a random generator, a variational graph auto-encoder (VGAE) (Thomas N. Kipf, 2016), and an adversarially regularized variational graph auto-encoder (ARGVA) (Pan et al., 2018). We perform the evaluation based on two metrics:
(1) Validity. For the ground-truth explanation G_s^+ that contains all discriminative information of the full graph G, the importance of its surrogate graph G_s^* should be higher than that of G_s^+ itself. The difference between the two importance scores indicates the validity of the generator; thus we define
VAL = \mathbb{E}_{G}\left[\mathrm{Imp}(G_s^*) - \mathrm{Imp}(G_s^+)\right].
For Graph-SST2, where the class-wise features are intractable, we leverage the embeddings of the training graphs and additionally train a Gaussian Mixture Model (GMM) as our distribution prior. Then, we compute the average log-likelihood of random subgraphs after in-filling, thus we have
VAL = \mathbb{E}_{G}\,\mathbb{E}_{G_s \sim \mathrm{Random}(G)}\left[\mathrm{GMM}(G_s^*) - \mathrm{GMM}(G_s)\right].
(2) Fidelity. Towards a finer-grained assessment w.r.t. the prediction probability of any random subgraph, we adopt the metric following (Frye et al., 2021):
FID = \mathbb{E}_{G}\,\mathbb{E}_{G_s}\,\mathbb{E}_{y}\,\big|f_y(G) - \mathbb{E}_{G_s^*}[f_y(G_s^*)]\big|^2.
This measures how well the surrogates cover the target prediction distribution.
Before comparing different generators, we first compute the importance or probabilities of the graphs before and after feature removal, which are summarized in Table 2. When inspecting the removal results without any in-fills, the OOD problem is severe: in TR3 and MNISTsup, the importance of ground-truth subgraphs only reaches 43.8% and 40.8%, respectively, which is far away from the target importance of the full graphs. The observation is analogous in Graph-SST2. For the performance of the generators w.r.t. the two metrics, we summarize the average results over 5 runs in Table 3:
• The performance of the baselines is poor. This suggests that they can hardly fit the target conditional distribution.
• CVGAE outperforms the other generators consistently across all cases, thus justifying the rationale and effectiveness of our proposed generator and adversarial training paradigm. For example, in TR3, CVGAE significantly increases the VAL scores and mitigates the OOD effect effectively.
Moreover, we conduct ablation studies and a sensitivity analysis in Appendix G to better understand the model components and validate the effectiveness of the designed objective.
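As a rough illustration, the following sketch computes the two generator metrics from pre-computed model outputs; the array names and shapes are hypothetical, the expectation over surrogates is approximated by a Monte-Carlo mean, and E_y is treated as a uniform average over classes (an assumption on our part).

import numpy as np

def validity(imp_surrogate, imp_reference):
    # VAL = E_G[ Imp(G_s^*) - Imp(G_s^+) ]; one entry per test graph,
    # imp_surrogate is assumed to be already averaged over sampled surrogates.
    return float(np.mean(np.asarray(imp_surrogate) - np.asarray(imp_reference)))

def fidelity(p_full, p_surrogates):
    # FID = E_G E_{G_s} E_y | f_y(G) - E_{G_s^*}[ f_y(G_s^*) ] |^2
    # p_full:       (num_graphs, num_classes) class probabilities on the full graphs
    # p_surrogates: (num_graphs, num_samples, num_classes) probabilities on sampled surrogates
    expected_surrogate = p_surrogates.mean(axis=1)             # E_{G_s^*}[ f_y(G_s^*) ]
    return float(np.mean((p_full - expected_surrogate) ** 2))  # average over graphs and classes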
5 RELATED WORK

Post-hoc Explainability of GNNs. Inspired by the explainability literature in computer vision, Baldassarre & Azizpour (2019); Pope et al. (2019); Schnake et al. (2020) obtain gradient-like scores of the model’s outcome or loss w.r.t. the input features. Another line of work (Luo et al., 2020; Ying et al., 2019; Yuan et al., 2020a; Yue Zhang, 2020; Michael Sejr Schlichtkrull, 2021) learns masks on graph features. Typically, GNN-Explainer (Ying et al., 2019) applies instance-wise masks on the messages carried by graph structures, and maximizes the mutual information between the masked graph and the prediction. Going beyond the instance-wise explanation, PGExplainer (Luo et al., 2020) generates masks for multiple instances inductively. Recently, researchers have adopted causal explainability (Pearl & Mackenzie, 2018) to uncover the causation behind the model predictions. For instance, CXPlain (Schwab & Karlen, 2019) quantifies a feature’s importance by leaving it out. PGM-Explainer (Vu & Thai, 2020) performs perturbations on graph structures and builds a Bayesian network upon the perturbation-prediction pairs. Causal Screening (Screener) (Anonymous, 2021) measures the importance of an edge as its causal effect, conditional on the previously selected structures. Lately, SubgraphX (Yuan et al., 2021) explores different subgraphs with Monte-Carlo tree search and evaluates subgraphs with the Shapley value (Kuhn & Tucker, 1953).

Counterfactual Generation for the OOD Problem. The OOD effect of feature removal has been investigated in some other domains. There are generally two classes of generation: (i) Static generation. For example, Fong & Vedaldi (2017); Dabkowski & Gal (2017) adopted blurred inputs and random colors for the image reference, respectively. Due to the unnatural in-filling, the generated images do not respect the data distribution and can still introduce confounding bias. (ii) Adaptive generation: Chang et al. (2019); Frye et al. (2021); Agarwal et al. (2019); Kim et al. (2020). The generators of these methods, like DSE, overcome the aforementioned defects by generating data that conforms to the training distribution. For example, in computer vision, FIDO (Chang et al., 2019) generates image-specific explanations that respect the data distribution, answering “Which region, when replaced by plausible alternative values, would maximally change classifier output?”. As for the differences, firstly, DSE’s formulated importance involves an additional adjustment on G_s and guarantees the unbiasedness of introducing the surrogate variable G_s^*, which is commonly discarded by prior works that use in-fillings only. Specifically, we offer a comparison with FIDO in Appendix B. Secondly, the distribution of graph data is more complicated to model than that of other domains, and the proposed CVGAE is carefully designed for graph data, where the contrastive loss and the adversarial training framework are shown to be effective for learning the data distribution of graphs.

6 CONCLUSION

In this work, we investigate the OOD effect on the explanation evaluation of GNNs. With a causal view, we uncover the OOD effect — the distribution shift between full graphs and subgraphs — as the confounder between the explanatory subgraphs and the model prediction, making the evaluation less reliable. To mitigate it, we propose a deconfounding evaluation framework that exploits the front-door adjustment to measure the causal effect of the explanatory subgraphs on the model prediction.
A deep generative model is devised to achieve the front-door adjustment by generating in-distribution surrogates of the subgraphs. In so doing, we can reliably evaluate the explanatory subgraphs. As the evaluation of explanations fundamentally guides the objective of GNN explainability, this work offers in-depth insights for future interpretability systems.

ETHICS STATEMENT

This work raises concerns about the removal-based evaluation in the explainability literature and proposes the Deconfounded Subgraph Evaluator. For the user study that involves human subjects, we have detailed the fair evaluation procedure for each explanation generated by the explainers in Appendix E. For real-world applications, we admit that modeling the distribution shift could be a barrier to fulfilling evaluation faithfulness. However, as shown in the paper, improper evaluation under the OOD setting largely biases the inspection of the model’s decision-making process and of the quality of explainers. Therefore, we argue that explainability research should exhibit faithful explanation evaluation before auditing deep models’ actual decision-making process. A wrongly evaluated explanation might do more significant harm than an incorrect prediction, as the former could affect the general adjustment (e.g., structure construction) and human perspective (e.g., fairness check) of the model.

REPRODUCIBILITY STATEMENT

We have made great efforts to ensure reproducibility in this paper. Firstly, we make all causal assumptions clear in Section 2.2, Section 3.1 and Appendix A. For datasets, we have released the synthetic dataset, which can be accessed via the link in Section 1, while the other two datasets are publicly available. We also include our code for model construction in the link. In Appendix D, we report the settings of hyper-parameters used in our implementation for model training.

B COMPARISON OF IMPORTANCE ESTIMATIONS

In this section, we compare our proposed estimation via the front-door adjustment with the estimation in FIDO (Chang et al., 2019). We rephrase each estimation as
\mathrm{Imp}_{dse}(\mathcal{G}_s) = \sum_{\mathcal{G}_s^*} P(G_s^* = \mathcal{G}_s^* \mid G_s = \mathcal{G}_s)\, P(Y \mid G_s^* = \mathcal{G}_s^*) = \sum_{\mathcal{G}_s^*} P(G_s^* = \mathcal{G}_s^* \mid G_s = \mathcal{G}_s) \sum_{\mathcal{G}_s'} P(Y \mid G_s^* = \mathcal{G}_s^*, G_s = \mathcal{G}_s')\, P(G_s = \mathcal{G}_s')    (9)
and
\mathrm{Imp}_{FIDO}(\mathcal{G}_s) = \sum_{\mathcal{G}_s^*} P(G_s^* = \mathcal{G}_s^* \mid G_s = \mathcal{G}_s)\, P(Y \mid G_s^* = \mathcal{G}_s^*)    (10)
where DSE additionally adjusts on G_s (whose values are represented as \mathcal{G}_s'). To make the difference clear, we consider the underlined part of each equation. For Equation 9, we have
\sum_{\mathcal{G}_s'} P(Y \mid G_s^* = \mathcal{G}_s^*, G_s = \mathcal{G}_s')\, P(G_s = \mathcal{G}_s') = \sum_{\mathcal{G}_s'} P(Y \mid G_s^* = \mathcal{G}_s^*, G_s = \mathcal{G}_s')\, P(G_s = \mathcal{G}_s' \mid G_s^* = \mathcal{G}_s^*)\, \frac{P(G_s = \mathcal{G}_s')}{P(G_s = \mathcal{G}_s' \mid G_s^* = \mathcal{G}_s^*)} = \sum_{\mathcal{G}_s'} P(Y, G_s = \mathcal{G}_s' \mid G_s^* = \mathcal{G}_s^*)\, \frac{P(G_s = \mathcal{G}_s')}{P(G_s = \mathcal{G}_s' \mid G_s^* = \mathcal{G}_s^*)}    (11)
while for the formulation of Equation 10, we have
P(Y \mid G_s^* = \mathcal{G}_s^*) = \sum_{\mathcal{G}_s'} P(Y, G_s = \mathcal{G}_s' \mid G_s^* = \mathcal{G}_s^*)    (12)
Comparing these two parts, we can see that Equation 12 is biased under our causal assumption. Intuitively, each contribution to the importance of G_s^* on Y should be inversely proportional to the posterior probability, i.e., the probability of \mathcal{G}_s' given the observation \mathcal{G}_s^*. However, FIDO fails to consider the causal relation G_s → G_s^*, which biases the approximation of the genuine causal effect under our causal assumption. Back to our proposed estimation: as we have collected (G_s, G_s^*)-pairs via Monte-Carlo simulation, the additional adjustment on G_s (i.e., \mathcal{G}_s') can be achieved via Equation 11.
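To see the difference numerically, here is a small, entirely made-up discrete example that evaluates both estimators from explicit probability tables; in practice these quantities would be approximated from the collected Monte-Carlo (G_s, G_s^*) pairs and the GNN's predictions.

import numpy as np

# Toy probability tables (all numbers are illustrative only).
p_g = np.array([0.7, 0.3])                    # P(G_s = g): prior over observed subgraphs
p_star_given_g = np.array([[0.6, 0.3, 0.1],   # P(G_s^* = s | G_s = g), rows indexed by g
                           [0.2, 0.3, 0.5]])
p_y_given_sg = np.array([[0.9, 0.8, 0.1],     # P(Y | G_s^* = s, G_s = g), rows indexed by g
                         [0.7, 0.4, 0.2]])

gs = 0  # index of the explanatory subgraph being evaluated

# Equation 9: adjust over G_s' with its prior P(G_s = g).
inner_dse = (p_y_given_sg * p_g[:, None]).sum(axis=0)       # sum_g P(Y|s,g) P(g), one value per s
imp_dse = float((p_star_given_g[gs] * inner_dse).sum())

# Equation 10 (FIDO-style): P(Y|s) marginalises G_s with the posterior P(g|s) instead.
p_joint = p_star_given_g * p_g[:, None]                      # P(G_s^* = s, G_s = g)
p_g_given_s = p_joint / p_joint.sum(axis=0, keepdims=True)   # P(G_s = g | G_s^* = s)
p_y_given_s = (p_y_given_sg * p_g_given_s).sum(axis=0)       # sum_g P(Y|s,g) P(g|s)
imp_fido = float((p_star_given_g[gs] * p_y_given_s).sum())

print(imp_dse, imp_fido)  # the two estimates generally differ, reflecting the bias of Equation 12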
C DSE FOR DELETION-BASED EVALUATION

Based on the idea of deletion-based evaluation, we can instead use the average causal effect (Holland, 1988) (ACE) to look for the smallest deletion graph by conducting two interventions, do(G_s = G) (i.e., no feature removal) and do(G_s = G_{/s}), where G_{/s} denotes the complement of the explanatory graph G_s; this means that the GNN input receives the treatment and the control, respectively. Formally, we have
\mathrm{Imp}^{fid}_{dse}(G_s = \mathcal{G}_s) = P(Y \mid do(G_s = \mathcal{G})) - P(Y \mid do(G_s = \mathcal{G}_{/s}))    (13)
Then, we can similarly adjust for the individual terms as in Equation 1, obtaining the unbiased importance value as the result of deletion-based evaluation.

D EXPERIMENTAL DETAILS

In this paper, all experiments are done on a single Tesla V100 SXM2 GPU (32 GB). The well-trained GNNs used in our experiments achieve high classification accuracies of 0.958 on TR3, 0.982 on MNISTsup, and 0.909 on Graph-SST2. We now introduce the model construction of the proposed generator. The encoder is a Crystal Graph Convolutional Neural Network (Xie & Grossman, 2018), which contains three convolutional layers. The encoding dimensions for the TR3, MNISTsup, and Graph-SST2 datasets are 256, 64, and 256, respectively. For the decoder, we adopt two fully-connected layers with ReLU activations, where the numbers of neurons are the same as the encoding dimensions. Next, we summarize the pseudocode for the adversarial training in Algorithm 1.

Algorithm 1 Generative Adversarial Training. All experiments in the paper used the default values m = 256, α = 2×10^−4, β = 1×10^−4, ω = λ = 5, τ = 0.1.
Require: P_r, the real graphs’ distribution; r, the masking ratio.
Require: m, batch size; α, learning rate; β, γ, λ, ω, τ, hyper-parameters.
1: µ ← µ_0; θ ← θ_0
2: while the loss in Equation (4) is not converged do
3:   # Discriminator’s training
4:   Sample a batch {G^(i)}_{i=1}^m ∼ P_r from the real graphs.
5:   Randomly generate broken graphs {G_s^(i)}_{i=1}^m from {G^(i)}_{i=1}^m with masking ratio r.
6:   Embed the nodes through the encoder q(Z | {G_s^(i), G^(i)}_{i=1}^m).
7:   Decode the edge probabilities and sample in-fill graphs {Ĝ_s̄}_{i=1}^m ∼ p(Ĝ_s̄ | Z).
8:   Compute the Discriminator’s loss from Equation 7.
9:   Update parameter µ with back-propagation.
10:  # Generator’s training
11:  Repeat the operations from line 4 to 7.
12:  Compute the Generator’s loss from Equations 4, 5, 6.
13:  Update parameter θ with back-propagation.
14: end while

For the other hyper-parameters, we set r = 0.3, γ = 3 on the TR3 dataset. On the MNISTsup and Graph-SST2 datasets, we set r = 0.6, γ = 1. We use Adam (Kingma & Ba, 2014) with a weight decay rate of 1e-5 for optimization. The maximum number of epochs is 100.

E DETAILED USER STUDY

The user study starts with instructions to the participants, where they see a sentence (a movie review) in each question together with its sentiment (Positive or Negative), e.g.,
Sentence: “is more of an ordeal than an amusement”  Sentiment: Negative
Then several explanations are presented as candidate answers to “Why is the sentiment of this sentence negative (positive)?”. The explanations (see Figure 7) are shown in graph form (edges indicate relations between words), and the colors of more important features are darker. The participants are then asked to choose the best explanation(s). A good explanation should be concise, informative, and the rational cause of the sentence’s sentiment. In this case, (B) could be the best explanation since “ordeal” mostly decides the negative sentiment, while (A) only identifies plain words like “more than” and (C) is quite the opposite.
Note that the participants can choose multiple answers and that some choices are identical. Thereafter, 10 questions out of 32 questions in total are presented to each participant, and we compute the average scores for the explainers.

F EXTRA CASE STUDY

In this section, we further present a case study on the TR3 dataset. In Figure 8, the OOD probabilities for the ground-truth explanatory subgraphs in each row remain the same as the edge selection ratios vary, which are 100%, 0%, 0%, respectively. In contrast, the evaluation results generated by our DSE show strong rationality. Specifically, the importance score computed by our DSE increases with the increasing number of selected ground-truth edges. This well validates our DSE framework, where we mitigate the OOD effect by generating plausible surrogates, making the graphs to be evaluated conform to the graph distribution of the training data. In this way, the effect of D → Y could hardly affect our assessment of the explanatory subgraph. Thereafter, as the explanatory graph becomes more informative and discriminative, it offers more evidence for the GNN to classify it as the target class which we want to explain, yielding faithful evaluation results.

Figure 8: Three cases in the TR3 dataset (rows: cycle, house, crane; the y-axis of the plots reports Imp_dse). Each graph on the left represents the ground-truth explanatory subgraph (red) for explaining a given graph. One of the complement graphs (light blue) generated from CVGAE is also shown with each explanatory subgraph. As the edge selection ratio increases in each row, the importance scores output by our DSE are shown on the right.

G ABLATION STUDY & SENSITIVITY ANALYSIS

We first conduct ablation studies to investigate the contribution of the contrastive parameter γ and the penalty parameter λ in CVGAE. The ablation models are obtained by (I) removing the contrastive loss, i.e., setting γ = 0, and (II) removing the penalty term in the Wasserstein GAN (WGAN) (Martin Arjovsky, 2017) loss, i.e., setting λ = 0. The performance of the ablation models is reported in Table 4. We observe that the superiority of CVGAE over the ablation models supports our model design by (i) smoothing the model optimization, which yields a more performant generator, and (ii) highlighting the class-discriminative information in the graph embeddings, which implicitly encodes the class information. We also conduct a sensitivity analysis for CVGAE w.r.t. the hyper-parameters. Specifically, we select λ, the penalty in the WGAN loss (cf. Equation 7), and γ, the strength of the contrastive loss (cf. Equation 4), while we empirically found the performance to be relatively insensitive to the other parameters over a wide range. The results are shown in Figure 9. We observe that the best performance is achieved with λ taking values from 1 to 10, and γ taking values from 1 to 10 on the TR3 dataset and from 0.1 to 5 on the MNISTsup and Graph-SST2 datasets. We also found that a large λ generally causes an increase in the FID metric, as it may weaken the penalty on the reconstruction errors, which in turn enlarges the difference between f_y(G) and E[f_y(G_s^*)].
1. What is the focus and contribution of the paper regarding OOD effects on GNN explanation evaluation?
2. What are the strengths of the proposed approach, particularly in adopting causal theory?
3. What are the weaknesses of the paper, such as concerns about solving the OOD problem and hyperparameter sensitivity analysis?
4. Do you have any questions regarding the references used for evaluating DSE-based rankings?
5. Can you explain the negative value of VAL performance of VGAE on MNISTsup?
6. How does the reviewer assess the clarity and quality of the paper's content, including notations and typos?
7. What are the differences between the paper and Causal Screening (Screener) that the reviewer thinks the authors should discuss more?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors use a causal view to investigate the OOD effect on the explanation evaluation of GNNs. They find a confounder between the extracted subgraphs and the model prediction, which makes the evaluation less reliable. To solve this problem, the authors propose a deconfounding evaluation method based on the front-door adjustment from causal discovery. To generate a reliable surrogate subgraph, they propose a generative model, which contains three losses for training. The experimental results show the effectiveness of the proposed method (DSE). Review Strengths: In general, the motivation of this paper is very clear. The paper is easy to follow. Viewing GNNs from a causal perspective is not a new idea; however, the way this paper uses causal theory to investigate the OOD problem in DNNs is new and interesting. Weaknesses: My major concerns are as follows. In Section 3.1, the authors mention 'our work is the first to adopt the causal theory to \textbf{solve} the OOD problem ...'. However, I think this paper is a causal view on the OOD problem in DNNs instead of solving the OOD problem in DNNs. Especially, at the beginning of Section 4, the authors also highlight that they study the explanation evaluation and the generator, which confirms my opinion. In the adversarial training part, there are several hyper-parameters such as γ, ω, τ, λ, and β. However, the authors do not provide any sensitivity analysis of these hyper-parameters. On the other hand, since there are three losses in Eq.(4), the authors should conduct several ablation studies to demonstrate the significance of each loss function. In Insight 3 of Section 4.2, the authors mention 'The DSE-based rankings are highly consistent with the references'. However, it is not clear what the references are and where we can get these references. Specifically, how are the values of the Prec column and the values of the Score column in Table 1 obtained? Why is the VAL performance of VGAE on MNISTsup a negative value (-0.203)? Since the work of Causal Screening (Screener) [1] is very close to this paper, the authors should discuss more differences between this paper and paper [1] instead of mentioning it only slightly in the Related Work. [1] Wang, Xiang, et al. "Causal Screening to Interpret Graph Neural Networks." (2020). My minor concerns: Some notations are not clear and there are some typos. a) In Eq. (1), what is the meaning of \textbf{do}? b) Below Eq.(3), 'Equation equation 1' should be 'Equation 1'. c) In Figure 3, what is 'AGG'?
ICLR
Title
On Deep Neural Network Calibration by Regularization and its Impact on Refinement
Abstract
Deep neural networks have been shown to be highly miscalibrated; often they tend to be overconfident in their predictions. This poses a significant challenge for safety-critical systems to utilise deep neural networks (DNNs) reliably. Many recently proposed approaches to mitigate this have demonstrated substantial progress in improving DNN calibration. However, they hardly touch upon refinement, which historically has been an essential aspect of calibration. Refinement indicates the separability of a network's correct and incorrect predictions. This paper presents a theoretically and empirically supported exposition reviewing the refinement of a calibrated model. Firstly, we show the breakdown of expected calibration error (ECE) into predicted confidence and refinement under the assumption of over-confident predictions. Secondly, linking with this result, we highlight that regularisation-based calibration only focuses on naively reducing a model's confidence. This logically has a severe downside for a model's refinement, as correct and incorrect predictions become tightly coupled. Lastly, connecting refinement with ECE also provides support to existing refinement-based approaches which improve calibration but do not explain the reasoning behind it. We support our claims through rigorous empirical evaluations of many state-of-the-art calibration approaches on widely used datasets and neural networks. We find that many calibration approaches, with the likes of label smoothing, mixup etc., lower the usefulness of a DNN by degrading its refinement. Even under natural data shift, this calibration-refinement trade-off holds for the majority of calibration methods.

1 INTRODUCTION

Guo et al. (2017) showed that many popular deep neural networks are highly miscalibrated. This implies that the model's confidence in its estimate is not reflective of its accuracy. Typically, the output after a softmax layer of a neural network is interpreted as confidence (Hendrycks & Gimpel, 2017; Guo et al., 2017). Many studies have found that DNNs output high confidences for incorrectly classified samples (Guo et al., 2017; Pereyra et al., 2017). For scenarios such as automated driving, medical image analysis etc., where one wishes to avoid failures at all cost, such highly confident incorrect predictions can prove fatal. As a result, calibration is a desired property of deployed neural networks, which is being actively studied in deep learning research. However, calibration is not the only component that describes a reliable system. Along with calibration, we also require the predictions to be refined. Refinement describes the separability of a binary classification problem (Murphy, 1973; Gneiting et al., 2007). To build trust, it can be interpreted as the degree of confidence separation between correct and incorrect predictions. It serves as an important heuristic for real-world deployment, as more often than not the predictions above an operating threshold are acted upon while the rest are forwarded to a fallback mechanism for further evaluation. For example, in estimating whether there is an object ahead of a car, we might want to rely on the predictions only if the estimated confidence lies above a pre-selected (based on validation) value. The idea of using confidence for the reliability of predictions is very similar to how calibration is assessed as well.
Good refinement indicates an ordinal ranking of predictions which allows better segregation of correct predictions from incorrect ones (Moon et al., 2020). Such a ranking can then allow the user to find an appropriate operating threshold which reduces the chances of encountering incorrect predictions. Moreover, it also plays an important part in describing a predictor's effectiveness. To be better calibrated, a predictor can cheat by artificially making predictions around the empirical accuracy, which is often referred to as predicting the marginal. This implies that for a binary classifier with an accuracy of 50%, making all predictions with a confidence of 50% makes it perfectly calibrated, but the predictions thus made are useless: the model learnt is no better than a random coin flip. To emphasize this point, we provide some more hypothetical settings in Figure 1. We can qualitatively observe that it is possible for a network to exhibit varying degrees of calibration and refinement in its predictions for the same final accuracy (≈ 50%). In (a), we have a classifier which is well calibrated but poorly refined. As the network makes predictions mostly with a confidence of 40%−60% and a matching accuracy, the usefulness of such a predictor is low, as one loses a number of correct predictions by operating above 50% confidence. For (b), we see that the predictions are well separated but not well calibrated. We can select an operating threshold for the network to ensure that we don't encounter many false positives in practice; however, the remaining predictions become uncalibrated. Case (c) shows an ideal scenario where the predictions are well separated and calibrated. The correct predictions are all predicted with very high confidence, and the incorrect predictions carry very low confidence values. We also present a real scenario in figures (d, e), wherein the confidence reduction after label smoothing leads to a larger degradation of the quality of predictions. Though commonly studied together in the domains of statistics (Gneiting et al., 2007), meteorological forecasting (Murphy & Winkler, 1977), and medical analysis (Van Calster et al., 2019), for recent approaches proposed in the deep learning domain the joint importance has been sidelined in favour of individual improvements. Many of the recently proposed calibration methods employ strictly proper scores such as the Brier score (Brier, 1950) (mean squared error) and negative log-likelihood to measure calibration. Such scores have been known to decompose into calibration and refinement components (Murphy, 1973). However, a metric which produces a single score reflecting two complex attributes can conceal the area in which the improvement is made. Due to this reason, many approaches utilise the Expected Calibration Error (ECE) (Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) or its variants to focus only on the calibration aspect of a forecaster. Knowing that refinement and calibration both play an important part and consequently have been integral components of describing a trustworthy and reliable predictor, an important question arises: 'How well do modern calibration approaches fare on refinement?'. The focus of our paper is to investigate this question. Our main contributions are as follows:
• We mathematically highlight the connection between ECE and the area under the ROC curve (AUROC) computed for a classification task.
AUROC serves as the measure of refinement of predictions in this work. This result shows that model confidence and confidence refinement are the two areas through which model calibration can be improved. It also provides theoretical backing to various refinement-based methods which improve calibration but for which this support did not previously exist.
• We also shed light on the link between regularisation-based calibration approaches and the previously derived relationship, to highlight the mode of working of such algorithms. We find that these algorithms work only on the confidence aspect of the classification, which can in theory lead to predicting the marginal.
• We provide supporting empirical evidence to illustrate that many calibration approaches improve calibration at the expense of refinement. As the confidence of the final predictions is reduced across the board, this leads to poor refinement.
• Lastly, we provide empirical evidence of the calibration-refinement trade-off under natural data shift. We find that refinement in this case is also degraded w.r.t. an uncalibrated baseline.
The structure of the paper is as follows: In Section 2, we first give a formal introduction to the concepts of calibration and refinement. We further show that, under a weak assumption, the goal of minimising the calibration error falls in line with improving the separability between correctly and incorrectly classified samples. Furthermore, we shed light on the working of many popular calibration approaches. In Section 3, we review the existing approaches proposed for calibration and the employed metrics. Sections 4 and 5 describe the evaluation setting and the experiments which empirically verify the theoretical understanding built in Section 2. We discuss the implications of our findings, future work, and conclusions in Section 6.

2 CALIBRATION & REFINEMENT

A dataset is composed of tuples of inputs and targets represented as D = {(x_i, y_i)}_{i=1}^L, where x ∈ R^d, y_i ∈ Y = {1, 2, ..., K}, and L is the total number of samples in the dataset. We represent the learnable weights of a network as θ. The output of a network is a probability distribution over the K possible outcomes. The predicted category and predicted confidence are, respectively,
\hat{y}_i = \arg\max_{k \in \mathcal{Y}} P(Y = k \mid x_i, \theta),    (1)
c_i = \max_{k \in \mathcal{Y}} P(Y = k \mid x_i, \theta),    (2)
where c_i is referred to as either the winning probability or the maximum class probability. We focus on the problem of calibration and refinement in a reduced binary setting. For a multi-class classification problem we form two groups: the overall correctly classified samples (the positive category) and the overall incorrectly classified samples (the negative category). We intend to measure calibration and refinement within this reduced setting.
Definition 2.1 (Calibration). A model P_θ is calibrated if P(y_i = \hat{y}_i \mid c_i, \theta) = c_i ∀(x_i, y_i) ∈ D_t, where D_t is the test set.
This implies that the accuracy of the model should be reflective of its confidence in the prediction. Deviation from it leads to under-confident (accuracy > predicted confidence) or over-confident (accuracy < predicted confidence) models. A common metric often used to measure calibration in practice is the Expected Calibration Error (Naeini et al., 2015). It is measured as the difference between the accuracy and the predicted confidences computed over several bins.
Formally,
\mathrm{ECE} \triangleq \sum_{m=1}^{M} \frac{|B_m|}{L} \left| A_m - C_m \right|,    (3)
where the average confidence (C_m) and accuracy (A_m) are computed after splitting the predictions into M predefined bins sampled uniformly based on the predicted confidence, and |B_m| is the number of samples falling in bin m.
Definition 2.2 (Refinement). Let S_p and S_n denote the correct and incorrect classifications of a model on D_t. Predictions are considered refined iff c_i > c_j ∀x_i ∈ S_p, ∀x_j ∈ S_n.
Refinement enforces a separation between the two sets of predictions. Degroot & Fienberg (1981) provide an alternative definition of refinement for calibrated classifiers. We consider the area under the ROC curve (r) (Ling et al., 2003) as an appropriate choice of metric for measuring the refinement of a model (Corbière et al., 2019). A common interpretation of r is that it denotes the probability that a uniformly drawn random positive sample is ranked higher (higher confidence) than a uniformly drawn random negative sample. Hand & Till (2001) calculate r as
r = \frac{R_p - |S_p| \times (|S_p| + 1)/2}{|S_p| \times |S_n|},    (4)
where R_p = \sum_{\forall x \in S_p} \mathrm{rank}(x) and rank(x) denotes the rank of prediction x in an increasingly sorted list of predictions based on the associated confidence. It is straightforward to observe that r for a refined model will always be greater than for an unrefined one (switching the rank of an incorrect prediction with a correct one decreases r).

2.1 CONNECTING ECE AND r

Assumption: We assume that A_m < C_m ∀m. This implies that the network is over-confident in its predictions throughout. This is partly true in practice, as for deep neural networks the problem of calibration entails over-confident predictions (Thulasidasan et al., 2019). Also, we empirically observed that for networks trained on ImageNet (Deng et al., 2009), CIFAR-100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011) and CUB-200 (Wah et al., 2011), the number of bins for which A_m ≤ C_m holds true is 80, 95, 94 and 86, respectively, for M = 100. Recently, a study by Bai et al. (2021) showed that a classifier learnt through well-specified logistic regression is destined to be over-confident.
Let p_m and n_m represent the positive and negative category samples in bin m, respectively, which implies |S_p| = \sum_m p_m and |S_n| = \sum_m n_m. We can now describe the accuracy within a bin as A_m = \frac{p_m}{p_m + n_m}. Substituting the above conversions into Equation 3, ECE is updated as
\mathrm{ECE} = \sum_m \frac{p_m + n_m}{|S_p| + |S_n|} \left( C_m - \frac{p_m}{p_m + n_m} \right).    (5)
This can be further expanded to
\mathrm{ECE} = \underbrace{\sum_m \frac{p_m + n_m}{|S_p| + |S_n|} C_m}_{\text{I}} - \underbrace{\sum_m \frac{p_m}{|S_p| + |S_n|}}_{\text{II}}.    (6)
Term I denotes the expected confidence of the predictions, \mathbb{E}_{C \sim p_\theta(X)}[C], of the model, whereas term II is the expected model accuracy, \mathbb{E}[A]. Equation 6 can thus be updated to
\mathrm{ECE} = \mathbb{E}[C] - \mathbb{E}[A].    (7)
For a binary classification task, it has been shown (Hernández-Orallo et al., 2012; Flach & Kull, 2015) that r and \mathbb{E}[A] are linearly related when averaged over all possible true-positive rates. They showed that
\mathbb{E}[A] = \frac{P}{|S_p| + |S_n|}\left(1 - \frac{P}{|S_p| + |S_n|}\right)(2r - 1) + \frac{1}{2},    (8)
where r is the area under the ROC curve. Substituting Equation 8 for \mathbb{E}[A] in Equation 7 and re-arranging the terms gives us the final expression in the form of
\mathrm{ECE} = \underbrace{\mathbb{E}[C]}_{\alpha} - r\,\underbrace{\frac{2PN}{(|S_p| + |S_n|)^2}}_{\beta} - \underbrace{\frac{P^2 + N^2}{2(|S_p| + |S_n|)^2}}_{\gamma}.    (9)
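The following is a small NumPy sketch of how the two quantities defined above — ECE (Equation 3) and r (Equation 4) — can be computed from per-sample confidences and correctness indicators; it assumes equal-width confidence bins, breaks ties in the ranking arbitrarily, and the function names are ours.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=100):
    # Equation 3: weighted average of |accuracy - confidence| over equal-width confidence bins.
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, total = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf >= lo) & (conf <= hi) if lo == 0.0 else (conf > lo) & (conf <= hi)
        if mask.any():
            ece += (mask.sum() / total) * abs(corr[mask].mean() - conf[mask].mean())
    return ece

def auroc_from_ranks(confidences, correct):
    # Equation 4 (Mann-Whitney / Hand & Till form): rank all predictions by confidence.
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=bool)
    ranks = np.argsort(np.argsort(conf)) + 1          # ranks start at 1; ties broken arbitrarily
    n_pos, n_neg = int(corr.sum()), int((~corr).sum())
    r_p = ranks[corr].sum()
    return (r_p - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)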
Traditionally, for strictly proper scoring rules such as the Brier score, the decomposition of the metric into calibration and refinement components is well known. However, for ECE, which is not a strictly proper scoring rule, we have shown that the breakdown is into average predicted confidence and refinement under the applied assumption of bin-wise over-confidence. For a set of predictions, we have the following constraints: P ≥ 0, N ≥ 0, |S_p| + |S_n| > 0, β ≥ 0 and γ > 0. We can therefore decrease the calibration error by either reducing α and/or increasing r. Moon et al. (2020) have shown that their refinement-based approach improves calibration; however, they do not provide the reasoning behind such an observation. Their observation can now be supported by the relationship described in Equation 9. We also compute the calibration of another refinement approach, CFN (Corbière et al., 2019), for which these results were not computed before, and find that in this case as well the network achieves better calibration after the refinement process (see Section A.3).

2.2 HOW DOES REGULARIZATION ENFORCE CALIBRATION?

We highlighted the factors which contribute towards lowering the expected calibration error. In this section, we instead focus on shedding light on the working route of many regularization-based calibration approaches. To emphasize, the regularization acts as a penalty during the training procedure. Label Smoothing (Müller et al., 2019) provides calibration apart from other benefits. Many existing approaches have also been proven to materialize into label smoothing (LS), such as entropy regularization (ERL) (Pereyra et al., 2017) and focal loss (FL) (Mukhoti et al., 2020). We focus our attention on the label smoothing objective function and decipher the mode of working of this particular algorithm. A training loss consisting of label smoothing can be written as
\mathcal{L} = \mathcal{L}_{CE} + \mathcal{L}_{LS},    (10)
where CE stands for cross-entropy and LS represents the label smoothing contribution. The label smoothing contribution is the KL divergence between a uniform distribution (U) and the network's output distribution (P_θ). Formally,
\mathcal{L}_{LS} = -D_{KL}(U \,\|\, P_\theta).    (11)
\mathcal{L}_{LS} can be expanded as
\mathcal{L}_{LS} = \sum_{i=0}^{i<N} \underbrace{-U(x_i)\log(P_\theta(x_i))}_{\text{I}} + \underbrace{U(x_i)\log(U(x_i))}_{\text{II}},    (12)
where x_i is a sample input from a total of N sample points. The value of the uniform distribution is set beforehand to a small constant, thus making II a constant term. I is the term which is optimised, and for a binary classification problem it can be written as
\min \sum_{i=1}^{N} -\left[\log c_i + \log(1 - c_i)\right] \quad \text{s.t.} \quad 0 \le c_i \le 1.    (13)
Since -[\log c + \log(1 - c)] is convex and symmetric about c = 1/2, the above expression reaches its minimum value when c_i = 0.5. For multi-class classification, the minimum is achieved at \frac{1}{K}. This goes on to show that label smoothing works only on reducing the confidence of all of its predictions. For ERL and FL, the breakdown is similar, as they simply rely on slightly different target functions in Equation 11. The breakdown is analogous when we use their corresponding losses, which are
\mathcal{L}_{erl} = -H(P_\theta),    (14)
\mathcal{L}_{focal} = (1 - \gamma) H(P_\theta),    (15)
where H is the entropy. The takeaway is that the added regularisation only helps to tone down the winning class confidence and increase the losing confidences. The improvement in calibration thus focuses on the α-aspect of Equation 9. Intuitively, concentrating predictions at a single point will have a detrimental effect on a network's refinement, as correct and incorrect predictions are now concentrated together.

3 RELATED WORK

3.1 CALIBRATION

This work is focused on the calibration of point-estimate based deep neural networks. For the Bayesian perspective, we refer the readers to recent works on ensembles (Lakshminarayanan et al., 2017) and the cold posterior (Wenzel et al., 2020).
The existing work on the calibration of point-estimate models can be categorised into the following three broad groups based on the commonalities between the approaches. Regularisation-based approaches apply a calibrating penalty to the supervised learning objective. Pereyra et al. (2017) added the negative entropy of the predictions to encourage the model to predict less 'peaky' estimates. Subsequently, many approaches have been proposed along this direction which add noise to the labels (Müller et al., 2019), optimise a proxy for the calibration error metric (Kumar et al., 2018), or replace the cross-entropy objective with the focal loss (Mukhoti et al., 2020). Peterson et al. (2019) utilised human-inferred soft targets to improve robustness; this approach can be understood as being along the lines of label smoothing. Post-hoc approaches re-scale the confidence scores of an uncalibrated neural network to make it calibrated. The scaling hyper-parameters are chosen on a held-out validation set. Some of the recently proposed approaches are temperature scaling (Guo et al., 2017), scaling and binning calibration (Kumar et al., 2019), Dirichlet calibration (Kull et al., 2019), and beta calibration (Kull et al., 2017). These approaches find motivation from classical methods such as Platt scaling (Platt, 1999), binning (Zadrozny & Elkan, 2001), and isotonic regression (Zadrozny & Elkan, 2002). In the last group, we list the remaining approaches. Mixup (Zhang et al., 2018; Thulasidasan et al., 2019) and AugMix (Hendrycks et al., 2020) combine data augmentation and regularization. Pre-training (Hendrycks et al., 2019a) and self-supervised learning (Hendrycks et al., 2019b) have also been highlighted as beneficial in this regard.

3.2 REFINEMENT

By refining predictions, methods seek to find a good ordinal ranking of predictions. This may or may not result in a calibrated model, as refinement has not been studied extensively for the calibration problem. Moon et al. (2020) incorporated a 'Correctness Ranking Loss' to allow a DNN to learn appropriate ordinal rankings for classified samples. They also observed that their approach helped in calibrating the network; however, they do not discuss the reasoning behind this observation. As a replacement for the confidence estimate, Jiang et al. (2018) introduced 'TrustScore', which provides a better ordinal ranking of predictions than the output of the network. They utilised the ratio between the distance from the sample to the nearest class different from the predicted class and the distance to the predicted class as the trust score. ConfidNet (Corbière et al., 2019) incorporates the learning of this trust score as an additional branch in the network. In the post-hoc stage, the ConfidNet branch of the classifier is trained to predict a confidence score which mimics the reliability of the network on its prediction. Meta-cal (Ma & Blaschko, 2021) is a recent attempt to ensure the usability of an already calibrated classifier through post-hoc ranking.

3.3 METRICS

Among the scores utilised to assess calibration, the most commonly used are the Brier score, negative log-likelihood (NLL), Expected Calibration Error (ECE) and Overconfidence Error (OE). The Brier score (Brier, 1950) and NLL are strictly proper scoring rules (Gneiting & Raftery, 2007; Dawid & Musio, 2014). It has been shown that strictly proper scoring rules decompose into calibration and refinement components (Murphy, 1973; Blattenberger & Lad, 1985).
The presence of the refinement component describes the utility of a calibration approach. However, the implicit combination of the two can conceal the area of improvement. ECE and OE (Degroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) are proper scoring rules (not strict) and are adapted from reliability diagrams for judging the calibration of models. They are not strict, as the optimum value of 0 can be achieved with more than one set of predictions. These metrics also suffer from high sensitivity to the binning hyper-parameter (Nixon et al., 2020). Finding a good calibration metric is an active area of research (Geifman et al., 2019; Nixon et al., 2020).

4 IMPLEMENTATION DETAILS

To empirically verify our findings, we employ the following calibration approaches in our study:
• Label Smoothing (LS)
• Entropy Regularization (ERL)
• Mixup (MX)
• Focal Loss (FL)
We compare these approaches to a cross-entropy trained model referred to as the baseline. For the datasets we rely on CIFAR-10/100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011), CUB-200 (Wah et al., 2011) and ImageNet (Deng et al., 2009), which have been used extensively in recent calibration studies. The neural network architectures chosen are ResNet-50 (He et al., 2016), VGG-16 (Simonyan & Zisserman, 2015) and DenseNet-121 (Huang et al., 2017) for the CIFARs, so as to reflect the architecture-wide occurrence of the calibration-refinement trade-off; ResNet-50 for (pre-trained) CUB-200 and ImageNet; and VGG-16 (with batch norm) for STL-10. Alongside accuracy, we report ECE and the Brier score as calibration errors, and AUROC and AUPR for refinement. All values provided are ×100. We report the mean and deviation (as a subscript) over 3 trials where applicable. Training details are provided in the supplemental material (see Section A.4).

5 EXPERIMENTS & RESULTS

5.1 CALIBRATION & REFINEMENT

Tables 1 and 2 show the joint calibration and refinement results on the various datasets. Unsurprisingly, the calibration approaches attain lower calibration errors in most scenarios. In many cases the Brier score is also better than the baseline, which hides the shortcoming. In Table 1 we can observe that, in terms of refinement, the baseline performs better than the calibrated models. Focusing on AUPR and AUROC, these metrics capture slightly different aspects of the quality of predictions. AUPR is typically the preferred metric when there is an imbalance due to the negative category; however, as the overall accuracy of the networks considered is > 50%, we believe that is not the case here. Additionally, AUPR prioritises positive class samples but not their ordering, which forms the definition of refinement. Keeping this in mind, we believe AUROC is a stronger indicator of refinement, with AUPR serving a similar but softened purpose. ERL provides the least improvement in terms of calibration and at times achieves slightly worse AUROC w.r.t. the baseline. Out of all the approaches assessed, LS consistently acquires the lowest refinement performance. MX and FL lead to a moderate to low decay of refinement. For the other datasets in Table 2, a similar observation of weakening refinement can be drawn. Another point to note is the varying degree of calibration and refinement across datasets. This can be attributed to over-parameterized training. Mukhoti et al. (2020) argued that over-fitting to the training data leads to miscalibration. We suspect that the networks overfit to varying degrees on different datasets; this results in varied improvements in calibration, and hence the impact on refinement also varies.
For example, on ImageNet we achieve a baseline training accuracy of 77%, as opposed to the CIFARs' training accuracy of > 99%. In Figure 1, we also notice that the density plots for ImageNet are vastly different from those of the CIFARs, as the concentration of misclassified samples in the baseline is well separated from the correct ones.

5.2 IMPACT ON REFINEMENT UNDER DATA SHIFT

Previously, the test set consisted of samples originating from the same distribution as that of training. In this experiment, we aim to assess the deterioration under a natural distribution shift of the datasets. Natural shift implies subtle changes in scene composition, object types, lighting conditions, and many other factors (Taori et al., 2020). It is logical to assume that a DNN is bound to confront such images in the real world. Examples of naturally shifted datasets are CIFAR-10.1 (Recht et al., 2018), CIFAR-10.2 (Lu et al., 2020) and ImageNet-v2 (Recht et al., 2019). These datasets are collected following a process identical to that of the original reference dataset. Such datasets have been utilised to measure the lack of generalisation and robustness of many classification models (Taori et al., 2020; Ovadia et al., 2019). To the best of our knowledge, this is the first attempt at evaluating calibration-refinement under natural data shift. An assessment of calibration under synthetic shift has been reported by Ovadia et al. (2019). However, we believe natural data shift is a scenario which a deployed DNN is more likely to face, and it hence requires equal, if not more, attention. By evaluating the calibration-refinement trade-off, we will also be able to highlight the severity and extent of the problem induced by many calibration approaches.

5.2.1 RESULTS

Table 3 shows the performance of models trained on the original datasets and tested on the shifted variants. For CIFAR-10.x we use the VGG-16 model trained on CIFAR-10, and for ImageNet-v2 we employ the ResNet-50 trained on ImageNet. We observe that the trend of worsening refinement continues for models under data shift as well. Similar to what we have already seen, LS also provides the lowest refinement performance under natural shift. A surprising observation is the poor performance of MX. MX, as shown by Thulasidasan et al. (2019), performs well on out-of-distribution detection; however, when the data shift is not severe, it appears that mixup provides no added benefit in terms of refinement. We also observe that the calibration approaches provide better calibration than the baseline under the natural shift. This observation has not yet been highlighted in existing studies, which focus on OOD performance or some form of generalisation metric (relative accuracy) to investigate the robustness of a model. For synthetic shifts, Ovadia et al. (2019) made a similar observation and noted that calibration approaches, to a certain extent, improve calibration on corrupted images w.r.t. the baseline.

6 DISCUSSION & CONCLUSION

In this paper we have brought forth a downside of many calibration approaches. We believe refinement is an important aspect which communicates the usefulness of safety-critical DNNs. Discussed theoretically and empirically, we have shed light on the current state of the calibration-refinement trade-off. Many regularization-based calibration approaches disregard the role of refinement, leading to a severe loss in the utility of the DNNs thus trained. We successfully presented the case of declining refinement for a wide variety of approaches tested on many different datasets.
The derived relationship in Equation 9 showed how improving refinement can help better calibrate the model. This provides justification for the calibration observed for the refinement approach of Moon et al. (2020). In the appendix (A.3), we show that calibration is induced by the refinement technique proposed by Corbière et al. (2019). In the future, we aim to focus on finding balanced calibration methods which preserve, if not improve, the refinement of predictions. The benefits of label smoothing have been highlighted by Müller et al. (2019); we were able to shed light on a severe limitation of the approach, of which practitioners were previously unaware. Similar to LS, other easy-to-apply calibration methods are also damaging in practice. A similar trend is observed for an NLP classification task reported in Appendix A.1. We observed that the degree of refinement degradation varies from one dataset to another. Mukhoti et al. (2020) discussed the causes of miscalibration and attributed it to over-fitting on the training data (under the cross-entropy loss). We found that the training accuracy achieved by the baseline is 99.99%, 99.4% and 77.9% for CIFAR-10, CIFAR-100 and ImageNet, respectively. This signals a comparably lower over-fitting of the baseline trained on ImageNet and, subsequently, a lower impact on calibration leading to a lower refinement degradation. We also noted the extension of calibration to naturally shifted data. Akin to the observations made by Ovadia et al. (2019) in their evaluation on synthetically shifted datasets, we observed that existing solutions provide calibration on naturally shifted datasets as well. However, this calibration comes at a cost, and as a result the refinement aspect of the models is comparably poorer than that of their uncalibrated counterparts. An important point to note was the failure of Mixup under data shift. Thulasidasan et al. (2019) have demonstrated Mixup's ability to distinguish OOD samples; however, we believe that natural shift is a weaker notion of data shift than OOD evaluation, and MX fails to provide any benefit in this regard. We also noted the varying impact of this degradation across datasets. We suspect that the lack of evident over-fitting on ImageNet is the root cause behind the visibly lower calibration-refinement impact on it. Apart from relying on ECE and the Brier score, incorporating metrics like AUROC, AUPR etc. helps in further distinguishing useful calibration approaches. Utilizing such measures can help researchers make an intelligent and well-formed decision regarding the suitability of an approach for their application. Additionally, many evaluation protocols have been proposed which extend the problem of calibration to a multi-class setting (Widmann et al., 2019). A natural extension will be to study refinement conjointly with calibration in a similar manner. To conclude, we have demonstrated a theoretically motivated study of the calibration and refinement of many recently proposed calibration approaches. Though these methods improve calibration, they negatively impact refinement when compared to a heavily miscalibrated baseline.

A APPENDIX

A.1 NATURAL LANGUAGE TASK

Dataset: 20News
Method      Acc     Brier (↓)   ECE (↓)   AUROC (↑)
Baseline    73.31   36.60       17.92     83.95
LS          73.96   36.37       4.79      82.71
FL          70.74   39.59       8.67      83.46
As the table shows, the baseline attains better refinement (AUROC) performance than the other two calibration approaches, even though LS and FL achieve lower calibration errors.

A.2 CALIBRATION AND REFINEMENT FOR TRANSFORMER-BASED NETWORKS

We utilize the CCT and CVT networks proposed by Hassani et al. (2021) in their recent work.
These networks don't require excess pre-training data to obtain accuracy comparable to popular feed-forward, convolution-only architectures. As the underlying architecture is significantly different from the baselines considered in our work, we still try to compare the calibration and refinement of these models with a comparable baseline (in terms of accuracy).

A.2.1 RESULTS

The results don't indicate that transformers produce calibrated outputs. However, we did observe that for the majority of the bins while computing ECE, the accuracy > confidence; this points towards the problem of under-confidence.

                      CIFAR-10                           CIFAR-100
                      Accuracy(↑)   ECE(↓)   AUROC(↑)    Accuracy(↑)   ECE(↓)   AUROC(↑)
R-50 (Baseline)       95.65         2.69     93.8        77.2          12.7     85.69
CCT6_3                95.29         7.88     88.83       77.31         5.69     84.53
VGG-16 (Baseline)     93.74         4.8      90.9        -             -        -
CVT6                  92.58         6.76     88.39       -             -        -
VGG-16 (Baseline)     -             -        -           72.46         16.29    84.97
CVT7                  -             -        -           73.01         4.23     85.94

A.3 CALIBRATION BY REFINEMENT

In this section we present the results of the refinement approach of Corbière et al. (2019). ConfidNet (CFN) learns, as a post-processing step, a point estimate of confidence for new predictions. The pre-trained classification branch drives the classification of an input sample, and for estimating the confidence of the prediction, the estimate from the confidence branch is employed. The authors highlight the refinement advantage over the baseline and TrustScore (Jiang et al., 2018) by employing AUPR, AUROC, etc. We utilize the official source code and train VGG-16 (Simonyan & Zisserman, 2015) with batch normalization. We retain 10% of the training data to validate the CFN training parameters and report the calibration and refinement results on the official test split of the CIFARs (Krizhevsky, 2009). The results are reported over 3 independent runs of the experiment.

A.3.1 RESULT

The results in Table 5 show the CFN performance in comparison to an uncalibrated and unrefined baseline. Not only does CFN provide better refinement, it is also able to reduce the calibration errors on both datasets. This provides further support to our understanding of calibrating a model by improving refinement.

A.4 IMPLEMENTATION DETAILS

For the CIFARs, we train the models for 300 epochs with a starting learning rate of 0.1, decayed by a factor of 5 (baseline, ERL, Mixup) or 10 (LS, FL) at epochs 150 and 225. For the calibration approaches, many of the respective hyper-parameters are borrowed from the original works. For TS we use a temperature of 1.5. For MX, we use α = 0.2 based on the results provided by (Thulasidasan et al., 2019; Singh & Bay, 2020). For LS, we use ε = 0.05 following the work of Müller et al. (2019) and Mukhoti et al. (2020). We employ the fixed-gamma variant of FL with γ = 3.0. The strength of the entropy regularizer in ERL is set to 0.1 based on the experiments of Thulasidasan et al. (2019). For ImageNet, the total number of epochs is 100 with learning rate decay by 10 at milestones 30, 60, 90. This is the standard approach for training ResNet-50 on ImageNet. For the method-specific hyper-parameters, we rely on existing experiments and their logical extensions. For LS, we use ε = 0.1 as utilized by Müller et al. (2019) and Thulasidasan et al. (2019). For FL, we rely on γ = 3.0 as the authors utilized it for experiments on the Tiny-ImageNet (Le & Yang, 2015) dataset. For ERL, we use a strength of 0.1 based on the experiments of Thulasidasan et al. (2019). We found that for TS a temperature of 1.1 provides reasonably good calibration. For MX, we employ α = 0.2.
We report ECE and the Brier score as calibration errors, and AUROC for refinement. All values provided are ×100. We report the mean and std. deviation over 3 trials where applicable. We report the accuracies in the supplementary document, as we found them to be highly similar across the different methods. We utilize publicly available datasets and code implementations for the majority of our experiments. We use PyTorch (Paszke et al., 2019) as the deep learning framework. GitHub links for the approaches investigated are provided below:
1. Mixup Calibration (MX): https://github.com/paganpasta/OnMixup
2. Focal Loss Calibration (FL): https://github.com/torrvision/focal_calibration
3. ConfidNet (CFN): https://github.com/valeoai/ConfidNet
The remaining approaches can be easily implemented. We provide short Python scripts describing their implementation below:

Listing 1: Entropy Regularization (ERL)
import torch
from torch.nn import functional as F

def erl_loss(logits, targets, eps=0.1, **kwargs):
    # standard cross-entropy term
    h_c = F.cross_entropy(logits, targets, reduction='sum')
    # entropy of the predictive distribution, summed over the batch
    h_p = torch.sum(torch.sum(-F.softmax(logits, dim=1) * F.log_softmax(logits, dim=1), 1))
    return h_c - eps * h_p

Listing 2: Label Smoothing (LS)
import torch.nn.functional as F
import torch.nn as nn

def linear_combination(x, y, epsilon):
    return epsilon * x + (1 - epsilon) * y

def reduce_loss(loss, reduction='sum'):
    return loss.mean() if reduction == 'mean' else loss.sum() if reduction == 'sum' else loss

class LabelSmoothingLoss(nn.Module):
    def __init__(self, epsilon=0.1, reduction='sum'):
        super().__init__()
        self.epsilon = epsilon
        self.reduction = reduction

    def forward(self, preds, target):
        n = preds.size()[-1]
        log_preds = F.log_softmax(preds, dim=-1)
        # uniform (smoothing) cross-entropy over all classes
        loss = reduce_loss(-log_preds.sum(dim=-1), self.reduction)
        # standard negative log-likelihood term
        nll = F.nll_loss(log_preds, target, reduction=self.reduction)
        return linear_combination(loss / n, nll, self.epsilon)

Lastly, temperature scaling (TS) requires dividing the output logits by the chosen temperature (a minimal sketch is given below). We plan to release the pre-trained models to assist future research on all the methods after the review period.
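A minimal sketch of temperature scaling, assuming the temperature T has already been chosen on a held-out validation set; the function and variable names here are ours, not from the released code.

import torch.nn.functional as F

def temperature_scale(logits, T=1.5):
    # Divide the logits by the temperature before the softmax; T > 1 softens the distribution.
    return F.softmax(logits / T, dim=-1)

# Hypothetical usage:
# probs = temperature_scale(model(x), T=1.5)
# confidence, prediction = probs.max(dim=-1)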
1. What is the main contribution of the paper regarding calibration and refinement in probabilistic classifiers? 2. What are the concerns regarding the relationship between ECE and AUC in the paper? 3. How does the paper relate to other works on calibration and refinement loss, specifically those considering proper scoring rules? 4. Can the paper provide a clearer connection between regularization and grouping loss to help understand the calibration-refinement trade-offs? 5. Are there any experiments conducted to demonstrate the effects of regularization methods on the calibration-refinement trade-offs?
Summary Of The Paper Review
Summary Of The Paper This paper looks into the problem of adjusting calibration and refinement of probabilistic classifiers. The authors propose a view linking the ECE measure with AUC, which can then be used to balance calibration loss and refinement loss in various model training processes. Some experiments are conducted to demonstrate the effects of regularisation methods on the calibration-refinement trade-offs. Review At the top level, this paper is on an interesting topic about the calibration-refinement trade-offs among modern deep NN classifiers. While there has been much recent work on calibration, refinement loss is somehow overlooked. It is therefore encouraging to see a paper on this particular problem. That being said, I have two major concerns after checking the entire paper. (1) The derived relationship between ECE and AUC (eq.9) is problematic. As indicated by the authors, one of the key contributions of this paper is the discovery of a relationship between ECE and AUC. However, the derivation includes a misuse of previous results by (Hernandez-Orallo et al., 2012). As defined by eq.6, the expected model accuracy E[A] is just the standard definition of accuracy (i.e. the probability of making a correct classification). Given a fixed model and decision rule (as in the setting specified by the authors), this value is a constant. However, the relationship between accuracy and AUC (Hernandez-Orallo et al., 2012) is derived from a different setting (e.g. when we evaluate the accuracy with varying thresholds of decision). The relationship can only be derived by "averaged over all possible predicted positive rate." (not "true positive rate" as quoted by the authors). (Hernandez-Orallo et al., 2012) considers both (Accuracy) and (AUC) as functions of (TPr) and (FPr), and the right-hand part of eq.8 can only be obtained by computing an integration between (Accuracy) and (Predicted Positive Rate). As a result, eq.8 (hence eq.9) is not correct, as the E[A] term is not the averaged term as proposed in (Hernandez-Orallo et al., 2012), but merely the standard accuracy. The critical integral part is missing. ** P and N in eq.8 and eq.9 are not defined and require guessing for the moment. (2) Important related work is missing. Given this paper is on the relationship between calibration and refinement loss, it should be aware that the topic is also considered by (Kull M, Flach P. Novel decompositions of proper scoring rules for classification: Score adjustment as precursor to calibration, ECML-PKDD 2015) under the setting of proper scoring rules. In that work, the authors propose that the refinement loss can be further decomposed into (Irreducible Loss) and (Grouping Loss). (Irreducible Loss) is fixed by the source distribution and cannot be reduced by the models. Therefore, any change of (Refinement Loss) is due to the change of (Grouping Loss). Therefore, to have a better understanding of how we can balance (Refinement Loss) and (Calibration Loss), it would make more sense to link the regularisation problem in this paper to the (Grouping Loss) and illustrate how the loss is affected by applying regularization. While the authors suggested that the standard ECE is not a Proper Scoring Rule, it is more reasonable to consider the refinement loss problem under PSR where it was initially proposed. Also, the PSR framework can work with a true multi-class setting instead of a binary confidence setting as in the current paper.
ICLR
Title On Deep Neural Network Calibration by Regularization and its Impact on Refinement
Abstract Deep neural networks have been shown to be highly miscalibrated; often they tend to be overconfident in their predictions. This poses a significant challenge for safety-critical systems to utilise deep neural networks (DNNs) reliably. Many recently proposed approaches to mitigate this have demonstrated substantial progress in improving DNN calibration. However, they hardly touch upon refinement, which historically has been an essential aspect of calibration. Refinement indicates the separability of a network's correct and incorrect predictions. This paper presents a theoretically and empirically supported exposition reviewing the refinement of a calibrated model. Firstly, we show the breakdown of expected calibration error (ECE) into predicted confidence and refinement under the assumption of over-confident predictions. Secondly, linking with this result, we highlight that regularisation-based calibration only focuses on naively reducing a model's confidence. This logically has a severe downside for a model's refinement, as correct and incorrect predictions become tightly coupled. Lastly, connecting refinement with ECE also provides support to existing refinement-based approaches which improve calibration but do not explain the reasoning behind it. We support our claims through rigorous empirical evaluations of many state-of-the-art calibration approaches on widely used datasets and neural networks. We find that many calibration approaches, the likes of label smoothing, mixup etc., lower the usefulness of a DNN by degrading its refinement. Even under natural data shift, this calibration-refinement trade-off holds for the majority of calibration methods.

1 INTRODUCTION
Guo et al. (2017) showed that many popular deep neural networks are highly miscalibrated. This implies that the model's confidence in its estimate is not reflective of its accuracy. Typically, the output after a softmax layer of a neural network is interpreted as confidence (Hendrycks & Gimpel, 2017; Guo et al., 2017). Many studies have found that DNNs output high confidences for incorrectly classified samples (Guo et al., 2017; Pereyra et al., 2017). For scenarios such as automated driving, medical image analysis etc., where one wishes to avoid failures at all costs, such highly confident incorrect predictions can prove fatal. As a result, calibration is a desired property of deployed neural networks, which is being actively studied in deep learning research. However, calibration is not the only component that describes a reliable system. Along with calibration we also require the predictions to be refined. Refinement describes the separability of a binary classification problem (Murphy, 1973; Gneiting et al., 2007). To build trust, it can be interpreted as the degree of confidence separation between correct and incorrect predictions. It serves as an important heuristic for real-world deployment, as more often than not the predictions are subjected to an operating threshold and the rest are forwarded to a fallback mechanism for further evaluation. For example, in estimating if there is an object ahead of a car, we might want to rely on the predictions only if the estimated confidence lies above a pre-selected (based on validation) value. The idea of using confidence for the reliability of predictions is very similar to how calibration is assessed as well.
Good refinement indicates an ordinal ranking of predictions which allows better segregation of correct predictions from incorrect ones (Moon et al., 2020). Such a ranking can then allow the user to find an appropriate operating threshold which reduces the chances of encountering incorrect predictions. Moreover, it also plays an important part in describing a predictor's effectiveness. To be better calibrated, a predictor can cheat by artificially making predictions around the empirical accuracy, which is often referred to as predicting the marginal. This implies that for a binary classifier, if its accuracy is 50%, then making all predictions with a confidence of 50% makes it perfectly calibrated; but the predictions thus made are useless. The model learnt is no better than a random coin flip. To expand on this example, we provide some more hypothetical settings in figure 1. We can qualitatively observe that it is possible for a network to exhibit varying degrees of calibration and refinement in its predictions for the same final accuracy (≈ 50%). In (a), we have a classifier which is well calibrated but poorly refined. As the network makes predictions mostly with a confidence of 40%−60% with a matching accuracy, the usefulness of such a predictor is low, as you lose a number of correct predictions by operating above 50% confidence. For (b), we see that the predictions are well separated but not well calibrated. We can select an operating threshold for the network to ensure that we don't encounter many false positives in practice; however, the remaining predictions become uncalibrated. Case (c) shows an ideal scenario where the predictions are well separated and calibrated. The correct predictions are all predicted with very high confidence, and the incorrect predictions consist of very low confidence values. We also present a real scenario in figures (d, e), wherein the confidence decrease after label smoothing has led to a larger degradation of the quality of predictions. Though commonly studied together in the domains of statistics (Gneiting et al., 2007), meteorological forecasting (Murphy & Winkler, 1977) and medical analysis (Van Calster et al., 2019), for recent approaches proposed in the deep learning domain the joint importance has been sidelined in favour of individual improvements. Many of the recently proposed calibration methods employ strictly proper scores such as the Brier Score (Brier, 1950) (mean squared error) and negative log-likelihood to measure calibration. Such scores have been known to decompose into calibration and refinement components (Murphy, 1973). However, a metric which produces a single score reflecting two complex attributes can conceal the area in which the improvement is made. Due to this reason, many approaches utilise the Expected Calibration Error (ECE) (Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) or its variants to focus only on the calibration aspect of a forecaster. Motivated by reliability diagrams, it measures the difference between model confidence and accuracy computed over various bins. Knowing that refinement and calibration both play an important part, and consequently have been integral components in describing a trustworthy and reliable predictor, an important question arises: 'How well do modern calibration approaches fare on refinement?'. The focus of our paper is to investigate this question. Our main contributions are as follows: • We mathematically highlight the connection between ECE and the area under the ROC curve (AUROC) computed for a classification task.
AUROC serves as a measure of the refinement of predictions in this work. This serves to show that model confidence and confidence refinement are two areas by focusing on which we can improve model calibration. This provides theoretical backing to various refinement-based methods which improve calibration and for which this support did not previously exist. • We also shed light on the link between the calibration approaches (based on regularisation) and the previously derived relationship to highlight the mode of working of such algorithms. We find that these algorithms work only on the confidence aspect of the classification, which can in theory lead to predicting the marginal. • We provide supporting empirical evidence to illustrate that many calibration approaches improve calibration but at the expense of refinement. As the overall confidence of the final predictions is reduced, this leads to poor refinement. • Lastly, we provide empirical evidence of the calibration-refinement trade-off under natural data shift. We find that refinement, in this case, is also degraded w.r.t. an uncalibrated baseline. The structure of the paper is as follows: In Section 2, we first provide a formal introduction to the concepts of calibration and refinement. We further show that under a weak assumption the goal of minimising the calibration error falls in line with improving the separability between correctly and incorrectly classified samples. Furthermore, we shed light on the working method of many popular calibration approaches. In Section 3, we review the existing approaches proposed for calibration and the employed metrics. Sections 4 and 5 describe the evaluation setting and experiments which empirically verify our theoretical understanding built in Section 2. We discuss the implications of our findings, future work and conclusions in Section 6.

2 CALIBRATION & REFINEMENT
A dataset is composed of tuples of inputs and targets represented as D = {(x_i, y_i)}_{i=1}^{L}, where x_i ∈ R^d, y_i ∈ Y = {1, 2, . . . , K}, and L is the total number of samples in the dataset. We represent the learnable weights of a network as θ. The output of a network is a probability distribution over K possible outcomes. The predicted category and predicted confidence are respectively
ŷ_i = argmax_{k∈Y} P(Y = k | x_i, θ),   (1)
c_i = max_{k∈Y} P(Y = k | x_i, θ),   (2)
where c_i is referred to as either the winning probability or the maximum class probability. We focus on the problem of calibration and refinement in a reduced binary setting. For a multi-class classification problem we form two groups: overall correctly classified samples (the positive category) and overall incorrectly classified samples (the negative category). We intend to measure calibration and refinement within this reduced setting.
Definition 2.1 (Calibration). A model P_θ is calibrated if P(y_i = ŷ_i | c_i, θ) = c_i ∀(x_i, y_i) ∈ D_t, with D_t being the test set.
This implies that the accuracy of the model should be reflective of its confidence in the prediction. Deviation from it leads to under-confident (accuracy > predicted confidence) or over-confident (accuracy < predicted confidence) models. A common metric often used to measure calibration in practice is the Expected Calibration Error (Naeini et al., 2015). It is measured as the difference between the accuracy and predicted confidences computed over several bins.
Formally,
ECE ≜ Σ_{m=1}^{M} (|B_m| / L) |A_m − C_m|,   (3)
where the average confidence (C_m) and accuracy (A_m) are computed after splitting the predictions into M predefined bins of uniform width based on the predicted confidence, and B_m is the set of samples falling in bin m.
Definition 2.2 (Refinement). Let S_p and S_n denote the correct and incorrect classifications of a model on D_t. Predictions are considered refined iff c_i > c_j ∀x_i ∈ S_p, ∀x_j ∈ S_n.
Refinement enforces a separation between the two sets of predictions. Degroot & Fienberg (1981) provide an alternative definition of refinement for calibrated classifiers. We consider the area under the ROC curve (r) (Ling et al., 2003) as an appropriate choice of metric for measuring the refinement of a model (Corbière et al., 2019). A common interpretation of r is that it denotes the expectation that a uniformly drawn random positive sample is ranked higher (higher confidence) than a uniformly drawn random negative sample. Hand & Till (2001) calculate r as
r = (R_p − |S_n| × (|S_n| + 1)/2) / (|S_p| × |S_n|),   (4)
where R_p = Σ_{∀x∈S_p} rank(x) and rank(x) denotes the rank of prediction x in an increasingly sorted list of predictions based on the associated confidence. It is straightforward to observe that r for a refined model will always be greater than for an unrefined one (switching the rank of an incorrect prediction with a correct one decreases r).

2.1 CONNECTING ECE AND r
Assumption: We assume that A_m < C_m ∀m. It implies that the network is over-confident in its predictions throughout. This is partly true in practice, as for all deep neural networks the problem of calibration entails over-confident predictions (Thulasidasan et al., 2019). Also, we empirically observed that for networks trained on ImageNet (Deng et al., 2009), CIFAR-100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011) and CUB-200 (Wah et al., 2011), the numbers of bins for which A_m ≤ C_m holds true are 80, 95, 94 and 86 respectively for M = 100. Recently, a study by Bai et al. (2021) showed that a classifier learnt through well-specified logistic regression is destined to be over-confident. Let p_m and n_m represent the positive and negative category samples in bin m respectively, which implies |S_p| = Σ_m p_m and |S_n| = Σ_m n_m. We can now describe the accuracy within a bin as A_m = p_m / (p_m + n_m). Substituting all the above conversions into Equation 3, ECE is updated as
ECE = Σ_m [(p_m + n_m) / (|S_p| + |S_n|)] (C_m − p_m / (p_m + n_m)).   (5)
This can be further expanded to
ECE = Σ_m [(p_m + n_m) / (|S_p| + |S_n|)] C_m − Σ_m p_m / (|S_p| + |S_n|).   (6)
The first term (I) denotes the expected confidence of the predictions of the model, E_{C∼p_θ(X)}[C], whereas the second term (II) is the expected model accuracy, E[A]. Equation 6 can thus be updated to
ECE = E[C] − E[A].   (7)
For a binary classification task, it has been shown (Hernández-Orallo et al., 2012; Flach & Kull, 2015) that r and E[A] are linearly related when averaged over all possible true-positive rates. They showed that
E[A] = (P / (|S_p| + |S_n|)) (1 − P / (|S_p| + |S_n|)) (2r − 1) + 1/2,   (8)
where r is the area under the ROC curve. Substituting Equation 8 for E[A] in Equation 7 and re-arranging the terms gives us the final expression in the form of
ECE = E[C] − r · 2PN / (|S_p| + |S_n|)^2 − (P^2 + N^2) / (2(|S_p| + |S_n|)^2),   (9)
where we denote E[C] as α, 2PN / (|S_p| + |S_n|)^2 as β, and (P^2 + N^2) / (2(|S_p| + |S_n|)^2) as γ, so that ECE = α − rβ − γ. Traditionally, for strictly proper scoring rules such as the Brier score, the decomposition of the metric into calibration and refinement is well known.
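To make equations (3) and (4) concrete, the short sketch below computes ECE by equal-width binning and r via scikit-learn's roc_auc_score on simulated confidences and correctness labels. The simulated data, the bin count M = 15 and the use of scikit-learn are illustrative assumptions rather than the setup used in this paper.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=5000)        # hypothetical winning-class confidences
correct = rng.random(5000) < (conf - 0.1)      # hypothetical correctness (over-confident model)

# ECE as in equation (3): weighted average of |accuracy - confidence| over M equal-width bins.
M = 15
bin_idx = np.minimum((conf * M).astype(int), M - 1)
ece = sum(np.abs(correct[bin_idx == m].mean() - conf[bin_idx == m].mean()) * (bin_idx == m).mean()
          for m in range(M) if (bin_idx == m).any())

# r (AUROC) over the reduced binary problem: correct vs. incorrect predictions.
r = roc_auc_score(correct, conf)
print(f"ECE = {ece:.4f}, r = {r:.4f}")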
However, for ECE, which is not a strictly proper scoring rule, we have shown that the breakdown is into average predicted confidence and refinement under the applied assumption of bin-wide over-confidence. For a set of predictions, we have the following constraints: P ≥ 0, N ≥ 0, |S_p| + |S_n| > 0, β ≥ 0 and γ > 0. We can decrease the calibration error by either reducing α and/or increasing r. Moon et al. (2020) have shown that their refinement-based approach improves calibration; however, they do not provide the reasoning behind such an observation. Their observation can now be supported by the relationship described in Equation 9. We also compute the calibration of another refinement approach, CFN (Corbière et al., 2019), for which these results were not computed earlier, and find that in this case as well the network achieves better calibration after the refinement process (see Section A.3).

2.2 HOW DOES REGULARIZATION ENFORCE CALIBRATION?
We have highlighted the factors which contribute towards lowering the expected calibration error. In this section, we instead focus on shedding light on the working route of many regularization-based calibration approaches. To emphasize, the regularization acts as a penalty during the training procedure. Label Smoothing (Müller et al., 2019) provides calibration apart from other benefits. Many existing approaches have also been proven to materialize into label smoothing (LS), such as entropy regularization (ERL) (Pereyra et al., 2017) and focal loss (FL) (Mukhoti et al., 2020). We focus our attention on the label smoothing objective function and decipher the mode of working of this particular algorithm. A training loss consisting of label smoothing can be written as
L = L_CE + L_LS,   (10)
where CE stands for cross-entropy and LS represents the label smoothing contribution. The label smoothing contribution is the KL divergence between a uniform distribution (U) and the network's output distribution (P_θ). Formally,
L_LS = −D_KL(U || P_θ).   (11)
L_LS can be expanded as
L_LS = Σ_{i=0}^{N−1} [−U(x_i) log(P_θ(x_i))] + [U(x_i) log(U(x_i))],   (12)
where x_i is a sample input from a total of N sample points; we refer to the first term as I and the second as II. The value of the uniform distribution is set beforehand to a small constant, thus making II a constant term. I is the term which is optimised, and for a binary classification problem it can be written as
min Σ_{i=1}^{N} −(log c_i + log(1 − c_i)),  s.t. 0 ≤ c_i ≤ 1.   (13)
The above expression reaches its minimum value when c_i = 0.5. For multi-class classification, the minimum is achieved at 1/K. This goes to show that label smoothing works only on reducing the confidence of all of its predictions. For ERL and FL, the breakdown is similar, as they simply rely on slightly different target functions in equation 11. The breakdown is similar when we use their corresponding losses, which are
L_erl = −H(P_θ),   (14)
L_focal = (1 − γ) H(P_θ),   (15)
where H is the entropy. The takeaway is that the regularisation added only helps to tone down the winning class confidence and increase the losing confidences. The improvement in calibration is focused more on the α-aspect of Equation 9. Intuitively, concentrating predictions at a point will have a detrimental effect on a network's refinement, as we have now concentrated the incorrect and correct predictions together.

3 RELATED WORK
3.1 CALIBRATION
This work is focused on the calibration of point-estimate based deep neural networks. For the Bayesian perspective, we refer the readers to recent works on ensembles (Lakshminarayanan et al., 2017) and cold posteriors (Wenzel et al., 2020).
The existing work on the calibration of point-estimate models can be categorised into the following three broad groups based on the commonalities between the approaches. Regularisation-based approaches apply a calibrating penalty to the supervised learning objective. Pereyra et al. (2017) added the negative entropy of the predictions to encourage the model to predict less 'peaky' estimates. Subsequently, many approaches have been proposed along this direction which add noise to the labels (Müller et al., 2019), optimise a proxy for the calibration error metric (Kumar et al., 2018), and replace the cross-entropy objective with focal loss (Mukhoti et al., 2020). Peterson et al. (2019) utilised human-inferred soft targets to improve robustness. This approach can be understood as being along the lines of label smoothing. Post-hoc approaches re-scale the confidence scores of an uncalibrated neural network to make it calibrated. The scaling hyper-parameters are chosen on a held-out validation set. Some of the recently proposed approaches are temperature scaling (Guo et al., 2017), scaling and binning calibration (Kumar et al., 2019), Dirichlet calibration (Kull et al., 2019), and beta calibration (Kull et al., 2017). These approaches find motivation from classical methods such as Platt scaling (Platt, 1999), binning (Zadrozny & Elkan, 2001), and isotonic regression (Zadrozny & Elkan, 2002). In the last group, we list the remaining approaches. Mixup (Zhang et al., 2018; Thulasidasan et al., 2019) and AugMix (Hendrycks et al., 2020) combine data augmentation and regularization. Pre-training (Hendrycks et al., 2019a) and self-supervised learning (Hendrycks et al., 2019b) have also been highlighted to be beneficial in this regard.

3.2 REFINEMENT
By refining predictions, methods seek to find a good ordinal ranking of predictions. This may or may not result in a calibrated model, as this has not been studied extensively for this problem. Moon et al. (2020) incorporated a 'Correctness Ranking Loss' to allow a DNN to learn appropriate ordinal rankings for classified samples. They also observed that their approach helped in calibrating the network; however, they do not discuss the reasoning behind this observation. As a replacement for the confidence estimate, Jiang et al. (2018) introduced the 'TrustScore', which provides a better ordinal ranking of predictions than the output of the network. They utilised the ratio between the distance from the sample to the nearest class different from the predicted class and the distance to the predicted class as the trust score. ConfidNet (Corbière et al., 2019) incorporates the learning of this trust score as an additional branch in the network. In the post-hoc stage, the ConfidNet branch of the classifier is trained to predict a confidence score which mimics the reliability of the network on its prediction. Meta-cal (Ma & Blaschko, 2021) is a recent attempt to ensure that calibration preserves the usability of the classifier through post-hoc ranking on an existing calibrated network.

3.3 METRICS
Among the scores utilised to assess calibration, the most commonly used are the Brier score, negative log-likelihood (NLL), Expected Calibration Error (ECE) and Overconfidence Error (OE). The Brier score (Brier, 1950) and NLL are strictly proper scoring rules (Gneiting & Raftery, 2007; Dawid & Musio, 2014). It has been shown that strictly proper scoring rules decompose into calibration and refinement components (Murphy, 1973; Blattenberger & Lad, 1985).
The presence of the refinement component describes the utility of the calibration approach. However, the implicit combination of the two can conceal the area of improvement. ECE and OE (Degroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) are proper scoring rules (not strict) and are adapted from reliability diagrams for judging the calibration of the models. They are not strict as the optimum value of 0 can be achieved with more than one set of predictions. These metrics also suffer from high sensitivity to the bin hyper-parameter (Nixon et al., 2020). Finding a good calibration metric is an active area of research (Geifman et al., 2019; Nixon et al., 2020).

4 IMPLEMENTATION DETAILS
To empirically verify our findings we employ the following calibration approaches in our study: • Label Smoothing (LS) • Entropy Regularization (ERL) • Mixup (MX) • Focal Loss (FL). We compare these approaches to a cross-entropy trained model referred to as the baseline. For the datasets we rely on CIFAR-10/100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011), CUB-200 (Wah et al., 2011) and ImageNet (Deng et al., 2009), which have been used extensively in recent calibration studies. The neural network architectures chosen are Resnet-50 (He et al., 2016), VGG-16 (Simonyan & Zisserman, 2015) and DenseNet-121 (Huang et al., 2017) for the CIFARs, so as to reflect the architecture-wide occurrence of the calibration-refinement trade-off; Resnet-50 (pre-trained) for CUB-200 and ImageNet; and VGG-16 (with batch norm) for STL-10. Alongside accuracy, we report ECE and Brier score as calibration errors, and AUROC and AUPR for refinement. All values provided are ×100. We report the mean and standard deviation (as subscript) over 3 trials where applicable. Training details are provided in the supplemental material (see Section A.4).

5 EXPERIMENTS & RESULTS
5.1 CALIBRATION & REFINEMENT
Tables 1 and 2 show the joint calibration and refinement on various datasets. Unsurprisingly, calibration approaches attain lower calibration errors in most scenarios. Also, in many cases the Brier score is better than the baseline, which hides the shortcoming. In table 1 we can observe that, in terms of refinement, the baseline performs better than the calibrated models. Focusing on AUPR and AUROC, these metrics capture slightly different aspects of the quality of predictions. AUPR is typically the preferred metric when there is an imbalance due to the negative category; but, as the overall accuracy of the networks considered is > 50%, we believe that is not the case here. Additionally, AUPR prioritises positive class samples but not their ordering, which forms the definition of refinement. Keeping this in mind, we believe AUROC is a stronger indicator of refinement, with AUPR serving a similar but softened purpose. ERL provides the least improvement in terms of calibration and at times achieves slightly worse AUROC w.r.t. the baseline. Out of all the approaches assessed, LS consistently acquires the lowest refinement performance. MX and FL provide moderate to low decay of refinement. For the other datasets in table 2 a similar observation of weakening refinement can be drawn. Another point to notice is the varying degree of calibration and refinement across datasets. This can be attributed to over-parameterized training. Mukhoti et al. (2020) argued that over-fitting to the training data leads to miscalibration. We suspect the networks overfit to varying degrees on different datasets. This results in varied improvement in calibration and hence the impact on refinement also varies.
For example, on ImageNet we achieve a baseline training accuracy of 77% as opposed to the CIFARs' training accuracy of > 99%. In figure 1 we also notice that the density plots for ImageNet are vastly different from the CIFARs', as the concentration of misclassified samples in the baseline is well separated from the correct ones.

5.2 IMPACT ON REFINEMENT UNDER DATA SHIFT
Previously, the test set consisted of samples originating from the same distribution as that of training. In this experiment, we aim to assess the deterioration under natural distribution shift of the datasets. Natural shift implies a subtle change in scene composition, object types, lighting conditions, and many others (Taori et al., 2020). It is logical to assume that a DNN is bound to confront such images in the real world. Examples of naturally shifted datasets are CIFAR-10.1 (Recht et al., 2018), CIFAR-10.2 (Lu et al., 2020) and ImageNet-v2 (Recht et al., 2019). These datasets are collected following a process identical to that of the original reference dataset. Such datasets have been utilised to measure the lack of generalisation and robustness of many classification models (Taori et al., 2020; Ovadia et al., 2019). To the best of our knowledge, this is the first attempt at evaluating calibration-refinement under natural data shift. An assessment of calibration under synthetic shift has been reported by Ovadia et al. (2019). However, we believe natural data shift is a scenario which a deployed DNN is more likely to face and hence requires equal if not more attention. By evaluating the calibration-refinement trade-off we will also be able to highlight the severity and extent of the problem induced by many calibration approaches.

5.2.1 RESULTS
Table 3 shows the performance of models trained on the original datasets and tested on the shifted variants. For CIFAR-10.x we use the VGG-16 model trained on CIFAR-10, and for ImageNet-v2 we employ the ResNet-50 trained on ImageNet. We spot that the trend of worsening refinement continues for models under data shift as well. Similar to what we have already seen for LS, it also provides the lowest refinement performance under natural shift. A surprising observation to note is the poor performance of MX. MX, as shown by Thulasidasan et al. (2019), performs well on out-of-distribution detection. However, when the data shift is not severe, it appears that mixup provides no added benefit in terms of refinement. We also observe that calibration approaches provide better calibration than the baseline under natural shift. This observation has not yet been highlighted in existing studies, which focus on OOD performance or some form of generalisation metric (relative accuracy) to investigate the robustness of a model. For synthetic shifts, Ovadia et al. (2019) made a similar observation and noted that calibration approaches to a certain extent improve calibration on corrupted images w.r.t. the baseline.

6 DISCUSSION & CONCLUSION
In this paper we have brought forth a downside of many calibration approaches. We believe refinement is an important aspect which communicates the usefulness of safety-critical DNNs. Discussing it theoretically and empirically, we have shed light on the current state of the calibration-refinement trade-off. Many regularization-based calibration approaches disregard the role of refinement, leading to a severe loss in the utility of DNNs thus trained. We successfully presented the case of declining refinement for a wide variety of approaches tested on many different datasets.
The derived relationship in equation 9 showed how improving refinement can help better calibrate the model. This provides justification for the calibration observed for the refinement approach of Moon et al. (2020). In the appendix (A.3), we show that calibration is induced by the refinement technique proposed by Corbière et al. (2019). In the future, we aim to focus on finding balanced calibration methods which preserve, if not improve, the refinement of predictions. The benefits of label smoothing have been highlighted by Müller et al. (2019). We were able to shed light on a severe limitation of the approach, of which practitioners were so far unaware. Similar to LS, other easy-to-apply calibration methods are also damaging in practice. A similar trend is observed for an NLP classification task reported in appendix A.1. We observed that the degree of refinement degradation varies from one dataset to another. Mukhoti et al. (2020) discussed the causes of miscalibration and attributed it to over-fitting on the training data (under cross-entropy loss). We found that the training accuracy achieved by the baseline is 99.99%, 99.4% and 77.9% for CIFAR-10, CIFAR-100 and ImageNet respectively. This signals a comparably lower over-fitting of the baseline trained on ImageNet and subsequently a lower impact on calibration, leading to lower refinement degradation. We also noted the extension of calibration to naturally shifted data. Akin to the observations made by (Ovadia et al., 2019) in their evaluation on synthetically shifted datasets, we observed that existing solutions provide calibration on naturally shifted datasets as well. However, this calibration comes at a cost, and as a result the refinement aspect of the models is comparably poorer than that of their uncalibrated counterparts. An important point to note was the failure of Mixup under data shift. Thulasidasan et al. (2019) have demonstrated Mixup's ability to distinguish OOD samples; however, we believe that natural shift is a weaker notion of data shift than OOD evaluation, and MX fails to provide any benefit in this regard. We also noted the varying impact of this degradation across datasets. We suspect that the lack of evident over-fitting on ImageNet is the root cause behind the visibly lower calibration-refinement impact on it. Apart from relying on ECE and Brier score, incorporating metrics like AUROC, AUPR etc. helps in further distinguishing useful calibration approaches. Utilizing such measures can help researchers to make an intelligent and well-informed decision regarding the suitability of an approach for their application. Additionally, many evaluation protocols have also been proposed which extend the problem of calibration to a multi-class setting (Widmann et al., 2019). A natural extension would be to study refinement conjointly with calibration in a similar manner. To conclude, we have presented a theoretically motivated study of calibration and refinement of many recently proposed calibration approaches. Though these methods improve calibration, they negatively impact refinement when compared to a heavily miscalibrated baseline.

A APPENDIX
A.1 NATURAL LANGUAGE TASK

Dataset   Meth.      Acc     Brier(↓)   ECE(↓)   AUROC(↑)
20News    Baseline   73.31   36.60      17.92    83.95
          LS         73.96   36.37      4.79     82.71
          FL         70.74   39.59      8.67     83.46

The baseline achieves better refinement performance than the other two calibration approaches.

A.2 CALIBRATION AND REFINEMENT FOR TRANSFORMER-BASED NETWORKS
We utilize the CCT and CVT networks as proposed by Hassani et al. (2021) in their recent work.
1. What are the assumptions made in Equations 8 and 9? 2. What is the definition of label smoothing, and how do the authors claim that certain regularization methods are a form of label smoothing? 3. Why does the equation for focal loss provided in the paper differ from the one used in Mukhoti et. al. 2020? 4. How does the author explain the similarity in AUROC across different datasets despite variations in accuracy? 5. Why are there no Imagenet results shown in Figure 1?
Summary Of The Paper Review
Summary Of The Paper The paper brings up an interesting point --- whether refinement is hurt by popular calibration approaches and argues that the answer is negative. To do this, they use empirical results from a few recent algorithms including FL, ERL, LS and MX. Review In this section, I am listing some specific points and then I describe the main reasons for my review in the next section. What are P and N in Eq. 8 and 9? I also find the assumption C_m > A_m for all m rather strong. As the authors claim that many existing approaches like Entropy regularisation and Focal Loss are actually label smoothing, could the authors please provide a reference or further description to show that these two approaches are indeed label smoothing? The equation for Focal loss provided in Eq. 15 is not the one used in Mukhoti et al. 2020. In particular, they seem to use an exponential on the weighted term of the entropy. The experimental section does use γ = 3 but the main text completely omits that point. The authors argue that in these approaches "regularisation added only helps to tone down the winning class confidence and increase the losing confidences". However, this is something Mukhoti et al. in fact argue against. They make this point with Table H.1, showing that FL does more than just increase the entropy of the predictions by preserving the fractions of points correctly classified with high confidence. The authors state "We suspect since the network's overfit to varying degree on different datasets. This results in varied improvement in calibration and hence the impact on refinement also varies." But I don't see the evidence of this. The AUROC, which indicates refinement, is quite similar for STL, CUB, and ImageNet in Table 2 even though the accuracies differ (similarly in Table 3). Added to that, Figure 1 does not seem to have any ImageNet results.
Title On Deep Neural Network Calibration by Regularization and its Impact on Refinement Abstract Deep neural networks have been shown to be highly miscalibrated. often they tend to be overconfident in their predictions. It poses a significant challenge for safetycritical systems to utilise deep neural networks (DNNs), reliably. Many recently proposed approaches to mitigate this have demonstrated substantial progress in improving DNN calibration. However, they hardly touch upon refinement, which historically has been an essential aspect of calibration. Refinement indicates separability of a network’s correct and incorrect predictions. This paper presents a theoretically and empirically supported exposition reviewing refinement of a calibrated model. Firstly, we show the breakdown of expected calibration error (ECE), into predicted confidence and refinement under the assumption of over-confident predictions. Secondly, linking with this result, we highlight that regularisation based calibration only focuses on naively reducing a model’s confidence. This logically has a severe downside to a model’s refinement as correct and incorrect predictions become tightly coupled. Lastly, connecting refinement with ECE also provides support to existing refinement based approaches which improve calibration but do not explain the reasoning behind it. We support our claims through rigorous empirical evaluations of many state of the art calibration approaches on widely used datasets and neural networks. We find that many calibration approaches with the likes of label smoothing, mixup etc. lower the usefulness of a DNN by degrading its refinement. Even under natural data shift, this calibrationrefinement trade-off holds for the majority of calibration methods. 1 INTRODUCTION Guo et al. (2017) showed that many popular deep neural networks are highly miscalibrated. This implies that the model’s confidence in its estimate is not reflective of its accuracy. Typically, the output after a softmax layer of a neural network is interpreted as confidence (Hendrycks & Gimpel, 2017; Guo et al., 2017). Many studies have found that DNNs output high confidences for incorrectly classified samples (Guo et al., 2017; Pereyra et al., 2017). For scenarios such as automated driving, medical image analysis etc. where one wishes to avoid failures at all cost, such highly confident incorrect predictions can prove fatal. As a result, calibration is a desired property of the deployed neural networks, which is being actively studied in deep learning research. However, calibration is not the only component that describes a reliable system. Along with calibration we also require the predictions to be refined. Refinement describes the separability of a binary classification problem (Murphy, 1973; Gneiting et al., 2007). To build trust, it can be interpreted as the degree of confidence separation between correct and incorrect predictions. It serves as an important heuristic for real world deployment as more often than not the predictions are imposed over an operating threshold and the rest are forwarded to fallback mechanism for further evaluation. For example, in estimating if there is an object ahead of a car we might want to rely on the predictions if the estimated confidence lies above a pre-selected (based on validation) value. The idea of using confidence for reliability of predictions is very similar to how calibration is assessed as well. 
Good refinement indicates an ordinal ranking of predictions which allows better segregation of correct predictions from incorrect ones (Moon et al., 2020). Such a ranking can then allow the user to find an appropriate operating . threshold which reduces the chances of encountering incorrect predictions. Moreover, it also plays an important part in describing predictors’ effectiveness. To be better calibrated, a predictor can cheat by artificially making predictions around the empirical accuracy which is often referred to as predicting the marginal. This implies that for a binary classifier if its accuracy is 50% then making all predictions with confidence of 50% makes it perfectly calibrated but, the prediction thus made are useless. The model learnt is no better than a random coin flip. To emphasize on this example, we provide some more hypothetical settings in figure 1. We can qualitatively observe that it is possible for a network to exhibit varying degree of calibration and refinement in its predictions for the same final accuracy (≈ 50%). In (a), we have a classifier which is well calibrated but poorly refined. As the network makes prediction mostly with a confidence of 40%−60% with a matching accuracy, the usefulness of such a predictor is low as you lose a number of correct predictions by operating above 50% confidence. For (b), we see that the predictions are well separated but not well calibrated. We can select an operating threshold for the network to ensure that we don’t encounter many false-positives in practice; however, the remaining predictions become uncalibrated. Case (c) shows an ideal scenario where the predictions are well separated and calibrated. The correct predictions are all predicted with very high confidence, and incorrect predictions consist of very low confidence values. We also present a real scenario figures (d, e), wherein the confidence decreased after label smoothing has led to larger degradation of the quality of predictions. Though commonly studied together in the domains of statistics (Gneiting et al., 2007), meteorological forecast (Murphy & Winkler, 1977), medical analysis (Van Calster et al., 2019); for recent approaches proposed in the deep learning domain, the joint importance has been sidelined for individual improvements. Many of the recently proposed calibration methods employ strictly proper scores such as Brier Score (Brier, 1950) (mean squared error) and negative log-likelihood to measure calibration. Such scores have been known to decompose into calibration, and refinement components (Murphy, 1973). However, a metric which produces a single score reflecting 2 complex attributes can conceal the area in which the improvement is made. Due to this reason, many ap- proaches utilise Expected Calibration Error (ECE) (Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) or its variants to focus only on the calibration aspect of a forecaster. Motivated from reliability diagrams, it measures the difference between model confidence and accuracy computed over various bins. Knowing that refinement and calibration play an important part and consequently have been an integral component for describing a trustworthy and reliable predictor, it raises an important question: ‘How well do modern calibration approaches fare on refinement?’. The focus of our paper is to investigate this question. Our main contributions are as follows: • We mathematically highlight the connection between ECE and area under the ROC curve (AUROC) computed for a classification task. 
AUROC serves as a measure for refinement of predictions in this work. This serves to show that model confidence and confidence refinement are two areas focusing on which we can improve model calibration. This provides theoretical backing to various refinement based methods which improve calibration for which this support didn’t exist. • We also shed light on the link between the calibration approaches (based on regularisation) and the previously derived relationship to highlight the mode of working of such algorithms. We find that these algorithms work only on the confidence aspect of the classification which can in theory lead to predicting the marginal. • We provide supporting empirical evidence to illustrate improved calibration but at the expense of refinement of many calibration approaches. As overall the confidence is reduced in the final predictions, this leads to poor refinement. • Lastly, we provide empirical evidence of calibration-refinement trade-off under natural data shift. We find that refinement, in this case, is also degraded w.r.t an uncalibrated baseline. The structure of the paper is as follows: In Section 2, we first provide formal introduction to the concepts of calibration and refinement. We further show that under a weak assumption the goal of minimising the calibration error falls in line to improve separability between correctly & incorrectly classified samples. Furthermore, we shed light on the working method of many popular calibration approaches. In Section 3, we review the existing approaches proposed for calibration and the employed metrics. Sections 4 and 5 describe the evaluation setting and experiments which empirically verify our theoretical understanding built in Section 2. We discuss the implications of our findings, future work and conclusions in Section 6. 2 CALIBRATION & REFINEMENT A dataset is composed of tuples of inputs and targets represented as D = {(xi, yi)}Li=1, where x ∈ Rd, yi ∈ Y = {1, 2, . . .K} and L are the total number of samples in the dataset. We represent the learnable weights of a network as θ. The output of a network is a probability distribution over K possible outcomes. The predicted category and predicted confidence are respectively ŷi = argmaxk∈YP (Y = k|xi, θ) (1) ci = maxk∈YP (Y = k|xi, θ), (2) where ci is referred to as either the winning probability or maximum class probability. We focus on the problem of calibration and refinement for a reduced binary setting. For a multi-class classification problem we form two groups, overall correctly classified samples (or positive category) and overall incorrectly classified samples (or negative category). We intend to measure calibration and refinement within this reduced setting. Definition 2.1 (Calibration). A model Pθ is calibrated if P(yi = ŷi|ci, θ) = ci ∀(xi, yi) ∈ Dt. Dt being the test set. This implies that the accuracy of the model should be reflective of its confidence in the prediction. Deviation from it leads to under-confident (accuracy> predicted confidence) or over-confident (accuracy < predicted confidence) models. A common metric often used to measure calibration in practice is the Expected calibration error (Naeini et al., 2015). It is measured as the difference between the accuracy and predicted confidences computed over several bins. 
Formally, ECE , M∑ m |Bm| L |Am − Cm|, (3) where average confidence (C) and accuracy (A) is computed after splitting the predictions into predefined M bins sampled uniformly based on the predicted confidence and Bm is the number of total samples falling in bin m. Definition 2.2 (Refinement). Let Sp and Sn denote correct and incorrect classification of a model on Dt. Predictions are considered refined iff ci > cj ∀xi ∈ Sp , ∀xj ∈ Sn. Refinement enforces a separation between the two sets of prediction. Degroot & Fienberg (1981) provide an alternative definition of refinement for calibrated classifiers. We consider area under the ROC curve (r) (Ling et al., 2003), as an appropriate choice of metric for measuring refinement of a model(Corbière et al., 2019). A common interpretation of r is that it denotes the expectation that a uniformly drawn random positive sample is ranked higher (higher confidence) than a uniformly drawn random negative sample. Hand & Till (2001) calculate r as: r = Rp − |Sn| × (|Sn|+ 1)/2 |Sp| × |Sn| (4) where, Rp = ∑ ∀x∈Sp rank(x) and rank(x) denotes the rank of prediction x in an increasingly sorted list of predictions based on associated confidence. It is straightforward to observe that r for a refined model will always be greater than an unrefined one (switching the rank of an incorrect prediction with the correct one decreases r). 2.1 CONNECTING ECE AND r Assumption: We assume that Am < Cm∀m. It implies that the network is over-confident in its prediction throughout. This is partly true in practice as for all deep neural networks the problem of calibration entails over-confident predictions(Thulasidasan et al., 2019). Also, we empirically observed that for networks trained on ImageNet(Deng et al., 2009), CIFAR-100(Krizhevsky, 2009), STL-10(Coates et al., 2011) and CUB-200(Wah et al., 2011) the number of bins for which Am <= Cm holds true are 80, 95, 94 and 86 respectively for M = 100. Recently, a study by Bai et al. (2021) showed that a classifier learnt through well specified logistic regression is destined to be overconfident. Let, pm and nm represent positive and negative category samples in bin m respectively which implies |Sp| = ∑ m pm and |Sn| = ∑ m nm. We can now describe the accuracy within a bin as Am = pm pm+nm . Substituting all the above conversions to Equation equation 3, ECE is updated as ECE = ∑ m (pm + nm) |Sp|+ |Sn| ( Cm − pm pm + nm ) . (5) This can be further expanded to ECE = ∑ m (pm + nm) |Sp|+ |Sn| Cm︸ ︷︷ ︸ I − ∑ m pm |Sp|+ |Sn|︸ ︷︷ ︸ II . (6) I denotes the expected confidence of the predictions, EC∼pθ(X) [C], of the model, whereas II is the expected model accuracy, E [A]. Equation equation 6 can thus be updated to ECE = E [C]− E [A] . (7) For a binary classification task, it has been shown (Hernández-Orallo et al., 2012; Flach & Kull, 2015) that r and E [A] are linearly related averaged over all possible true-positive rates. They showed that: E [A] = P |Sp|+ |Sn| (1− P |Sp|+ |Sn| )(2r − 1) + 1 2 , (8) where r is the area under the ROC curve. Substituting Equation equation 8 for E [A] in Equation equation 7 and re-arranging the terms gives us the final expression in the form of ECE = E [C]︸ ︷︷ ︸ α −r 2PN (|Sp|+ |Sn|)2︸ ︷︷ ︸ β − P 2 +N2 2(|Sp|+ |Sn|)2 .︸ ︷︷ ︸ γ (9) Traditionally, for strictly proper scoring rules such as the Brier score, the decomposition of the metric into calibration and refinement is well known. 
Traditionally, for strictly proper scoring rules such as the Brier score, the decomposition of the metric into calibration and refinement components is well known. However, for ECE, which is not a strictly proper scoring rule, we have shown that the breakdown is instead into average predicted confidence and refinement, under the stated assumption of bin-wide over-confidence. For a set of predictions we have the following constraints: P \ge 0, N \ge 0, |S_p| + |S_n| > 0, \beta \ge 0 and \gamma > 0. We can therefore decrease the calibration error by reducing \alpha and/or increasing r. Moon et al. (2020) have shown that their refinement-based approach improves calibration; however, they do not provide the reasoning behind this observation. Their observation can now be supported by the relationship described in Equation 9. We also compute the calibration of another refinement approach, CFN (Corbière et al., 2019), for which these results were not previously reported, and find that in this case as well the network achieves better calibration after the refinement process (see Section A.3).
2.2 HOW DOES REGULARIZATION ENFORCE CALIBRATION?
We highlighted the factors which contribute towards lowering the expected calibration error. In this section, we instead shed light on the working route of many regularization-based calibration approaches. To emphasize, the regularization acts as a penalty during the training procedure. Label smoothing (Müller et al., 2019) provides calibration apart from other benefits, and several existing approaches, such as entropy regularization (ERL) (Pereyra et al., 2017) and focal loss (FL) (Mukhoti et al., 2020), have been shown to materialize into a form of label smoothing (LS). We therefore focus on the label smoothing objective function and decipher the mode of working of this particular algorithm. A training loss including label smoothing can be written as
L = L_{CE} + L_{LS},   (10)
where CE stands for cross-entropy and LS represents the label smoothing contribution. The label smoothing contribution is the KL divergence between a uniform distribution (U) and the network's output distribution (P_\theta). Formally,
L_{LS} = D_{KL}(U \,\|\, P_\theta).   (11)
L_{LS} can be expanded as
L_{LS} = \sum_{i=0}^{N-1} \underbrace{-U(x_i) \log(P_\theta(x_i))}_{I} + \underbrace{U(x_i) \log(U(x_i))}_{II},   (12)
where x_i is a sample input from a total of N sample points. The value of the uniform distribution is fixed beforehand to a small constant, making II a constant term. I is the term which is optimised, and for a binary classification problem it can be written as
\min \sum_{i=1}^{N} -\big(\log c_i + \log(1 - c_i)\big) \quad \text{s.t.} \quad 0 \le c_i \le 1.   (13)
The above expression reaches its minimum when c_i = 0.5, i.e., when the two class probabilities are equal. For multi-class classification, the minimum is achieved at 1/K. This shows that label smoothing works only on reducing the confidence of all of its predictions. For ERL and FL the breakdown is similar, as they simply rely on slightly different target functions in Equation 11. The breakdown is analogous when we use their corresponding losses:
L_{erl} = -H(P_\theta),   (14)
L_{focal} = (1 - \gamma) H(P_\theta),   (15)
where H is the entropy. The takeaway is that the added regularisation only helps to tone down the winning-class confidence and increase the losing-class confidences. The improvement in calibration is thus focused on the \alpha-aspect of Equation 9. Intuitively, concentrating predictions at a point has a detrimental effect on a network's refinement, as correct and incorrect predictions are now concentrated together.
3 RELATED WORK
3.1 CALIBRATION
This work focuses on the calibration of point-estimate based deep neural networks. For the Bayesian perspective, we refer the readers to recent works on ensembles (Lakshminarayanan et al., 2017) and the cold posterior (Wenzel et al., 2020).
The existing work on the calibration of point-estimate models can be categorised into the following three broad groups, based on the commonalities between the approaches.
Regularisation-based approaches apply a calibrating penalty to the supervised learning objective. Pereyra et al. (2017) added the negative entropy of the predictions to encourage the model to predict less 'peaky' estimates. Subsequently, many approaches have been proposed along this direction, which add noise to the labels (Müller et al., 2019), optimise a proxy for the calibration error metric (Kumar et al., 2018), or replace the cross-entropy objective with the focal loss (Mukhoti et al., 2020). Peterson et al. (2019) utilised human-inferred soft targets to improve robustness; this approach can be understood as being along the lines of label smoothing.
Post-hoc approaches re-scale the confidence scores of an uncalibrated neural network to make it calibrated. The scaling hyper-parameters are chosen on a held-out validation set. Some of the recently proposed approaches are temperature scaling (Guo et al., 2017), scaling and binning calibration (Kumar et al., 2019), Dirichlet calibration (Kull et al., 2019), and beta calibration (Kull et al., 2017). These approaches find motivation from classical methods such as Platt scaling (Platt, 1999), binning (Zadrozny & Elkan, 2001), and isotonic regression (Zadrozny & Elkan, 2002).
In the last group we list the remaining approaches. Mixup (Zhang et al., 2018; Thulasidasan et al., 2019) and AugMix (Hendrycks et al., 2020) combine data augmentation and regularization. Pre-training (Hendrycks et al., 2019a) and self-supervised learning (Hendrycks et al., 2019b) have also been highlighted to be beneficial in this regard.
3.2 REFINEMENT
By refining predictions, methods seek to find a good ordinal ranking of the predictions. This may or may not result in a calibrated model, as the connection has not been studied extensively. Moon et al. (2020) incorporated a 'Correctness Ranking Loss' to allow a DNN to learn appropriate ordinal rankings for classified samples. They also observed that their approach helped in calibrating the network; however, they do not discuss the reasoning behind this observation. As a replacement for the confidence estimate, Jiang et al. (2018) introduced 'TrustScore', which provides a better ordinal ranking of predictions than the output of the network. They utilised as the trust score the ratio between the distance from the sample to the nearest class different from the predicted class and the distance to the predicted class. ConfidNet (Corbière et al., 2019) incorporates the learning of this trust score as an additional branch in the network. In the post-hoc stage, the ConfidNet branch of the classifier is trained to predict a confidence score which mimics the reliability of the network on its prediction. Meta-cal (Ma & Blaschko, 2021) is a recent attempt to ensure the usability of the classifier through post-hoc ranking on top of an already calibrated network.
3.3 METRICS
Among the scores utilised to assess calibration, the most commonly used are the Brier score, negative log-likelihood (NLL), Expected Calibration Error (ECE) and Overconfidence Error (OE). The Brier score (Brier, 1950) and NLL are strictly proper scoring rules (Gneiting & Raftery, 2007; Dawid & Musio, 2014). It has been shown that strictly proper scoring rules decompose into calibration and refinement components (Murphy, 1973; Blattenberger & Lad, 1985).
The presence of the refinement component describes the utility of the calibration approach; however, the implicit combination of the two can conceal the area of improvement. ECE and OE (Degroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) are proper scoring rules (not strict) and are adapted from reliability diagrams for judging the calibration of models. They are not strict since the optimum value of 0 can be achieved by more than one set of predictions. These metrics also suffer from high sensitivity to the binning hyper-parameter (Nixon et al., 2020). Finding a good calibration metric is an active area of research (Geifman et al., 2019; Nixon et al., 2020).
4 IMPLEMENTATION DETAILS
To empirically verify our findings we employ the following calibration approaches in our study:
• Label Smoothing (LS)
• Entropy Regularization (ERL)
• Mixup (MX)
• Focal Loss (FL)
We compare these approaches to a cross-entropy trained model referred to as the baseline. For the datasets we rely on CIFAR-10/100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011), CUB-200 (Wah et al., 2011) and ImageNet (Deng et al., 2009), which have been used extensively in recent calibration studies. The neural network architectures chosen are ResNet-50 (He et al., 2016), VGG-16 (Simonyan & Zisserman, 2015) and DenseNet-121 (Huang et al., 2017) for the CIFARs, so as to reflect the architecture-wide occurrence of the calibration-refinement trade-off; ResNet-50 for (pre-trained) CUB-200 and ImageNet; and VGG-16 (with batch normalization) for STL-10. Alongside accuracy, we report ECE and Brier score as calibration errors, and AUROC and AUPR for refinement. All values provided are ×100. We report mean and standard deviation (as subscript) over 3 trials where applicable. Training details are provided in the supplemental material (see Section A.4).
5 EXPERIMENTS & RESULTS
5.1 CALIBRATION & REFINEMENT
Tables 1 and 2 show the joint calibration and refinement results on the various datasets. Unsurprisingly, the calibration approaches attain lower calibration errors in most scenarios. In many cases the Brier score is also better than the baseline, which conceals the shortcoming. In Table 1 we can observe that, in terms of refinement, the baseline performs superior to the calibrated models. Focusing on AUPR and AUROC, these metrics capture slightly different aspects of the quality of predictions. AUPR is typically the preferred metric when there is an imbalance due to the negative category; but as the overall accuracy of the networks considered is > 50%, we believe that is not the case here. Additionally, AUPR prioritises positive class samples but not their ordering, which forms the definition of refinement. Keeping this in mind, we believe AUROC is the stronger indicator of refinement, with AUPR serving a similar but softened purpose. ERL provides the least improvement in terms of calibration and at times achieves slightly worse AUROC w.r.t. the baseline. Out of all the approaches assessed, LS consistently attains the lowest refinement performance. MX and FL provide moderate to low decay of refinement. For the other datasets, in Table 2, a similar observation of weakening refinement can be drawn. Another point to notice is the varying degree of calibration and refinement across datasets. This can be attributed to over-parameterized training: Mukhoti et al. (2020) argued that over-fitting to the training data leads to miscalibration, and we suspect that the networks overfit to varying degrees on the different datasets. This results in varied improvements in calibration, and hence the impact on refinement also varies.
For example, on ImageNet we achieve a baseline training accuracy of 77%, as opposed to the CIFARs' training accuracy of > 99%. In Figure 1 we also notice that the density plots for ImageNet are vastly different from the CIFARs, as the misclassified samples of the baseline are well separated from the correct ones.
5.2 IMPACT ON REFINEMENT UNDER DATA SHIFT
Previously, the test set consisted of samples originating from the same distribution as that of training. In this experiment, we aim to assess the deterioration under a natural distribution shift of the datasets. Natural shift implies a subtle change in scene composition, object types, lighting conditions, and many other factors (Taori et al., 2020). It is logical to assume that a DNN is bound to confront such images in the real world. Examples of naturally shifted datasets are CIFAR-10.1 (Recht et al., 2018), CIFAR-10.2 (Lu et al., 2020) and ImageNet-v2 (Recht et al., 2019). These datasets were collected following a process identical to that of the original reference dataset. Such datasets have been utilised to measure the lack of generalisation and robustness of many classification models (Taori et al., 2020; Ovadia et al., 2019). To the best of our knowledge, this is the first attempt at evaluating calibration and refinement under natural data shift. An assessment of calibration under synthetic shift has been reported by Ovadia et al. (2019); however, we believe natural data shift is a scenario which a deployed DNN is more likely to face, and hence it requires equal, if not more, attention. By evaluating the calibration-refinement trade-off we will also be able to highlight the severity and extent of the problem induced by many calibration approaches.
5.2.1 RESULTS
Table 3 shows the performance of models trained on the original datasets and tested on the shifted variants. For CIFAR-10.x we use the VGG-16 model trained on CIFAR-10, and for ImageNet-v2 we employ the ResNet-50 trained on ImageNet. We observe that the trend of worsening refinement continues for models under data shift as well. As already seen in the in-distribution setting, LS again provides the lowest refinement performance under natural shift. A surprising observation is the poor performance of MX. MX, as shown by Thulasidasan et al. (2019), performs well on out-of-distribution detection; however, when the data shift is not severe, mixup appears to provide no added benefit in terms of refinement. We also observe that the calibration approaches provide better calibration than the baseline under the natural shift. This observation has not yet been highlighted in existing studies, which focus on OOD performance or some form of generalisation metric (relative accuracy) to investigate the robustness of a model. For synthetic shifts, Ovadia et al. (2019) made a similar observation and noted that calibration approaches to a certain extent improve calibration on corrupted images w.r.t. the baseline.
6 DISCUSSION & CONCLUSION
In this paper we have brought forth a downside of many calibration approaches. We believe refinement is an important aspect which communicates the usefulness of safety-critical DNNs. Through theoretical and empirical discussion, we have shed light on the current state of the calibration-refinement trade-off. Many regularization-based calibration approaches disregard the role of refinement, leading to a severe loss in the utility of the DNNs thus trained. We presented the case of declining refinement for a wide variety of approaches tested on many different datasets.
The derived relationship in Equation 9 showed how improving refinement can help better calibrate the model. This provides a justification for the calibration observed for the refinement approach of Moon et al. (2020). In the appendix (A.3), we show that calibration is also induced by the refinement technique proposed by Corbière et al. (2019). In the future, we aim to focus on finding balanced calibration methods which preserve, if not improve, the refinement of predictions. The benefits of label smoothing have been highlighted by Müller et al. (2019); we were able to shed light on a severe limitation of the approach of which practitioners were previously unaware. Similar to LS, other easy-to-apply calibration methods are also damaging in practice. A similar trend is observed for an NLP classification task reported in Appendix A.1. We observed that the degree of refinement degradation varies from one dataset to another. Mukhoti et al. (2020) discussed the causes of miscalibration and attributed it to over-fitting on the training data (under the cross-entropy loss). We found that the training accuracy achieved by the baseline is 99.99%, 99.4% and 77.9% for CIFAR-10, CIFAR-100 and ImageNet respectively. This signals a comparably lower over-fitting of the baseline trained on ImageNet and, subsequently, a lower impact on calibration leading to a lower refinement degradation. We also noted the extension of calibration to naturally shifted data. Akin to the observations made by Ovadia et al. (2019) in their evaluation on synthetically shifted datasets, we observed that existing solutions provide calibration on naturally shifted datasets as well. However, this calibration comes at a cost, and as a result the refinement aspect of the models is comparably poorer than that of their uncalibrated counterparts. An important point to note was the failure of Mixup under data shift. Thulasidasan et al. (2019) have demonstrated Mixup's ability to distinguish OOD samples; however, we believe that natural shift is a weaker notion of data shift than OOD evaluation, and MX fails to provide any benefit in this regard. We also noted the varying impact of this degradation across datasets. We suspect that the lack of evident over-fitting on ImageNet is the root cause behind the visibly lower calibration-refinement impact on it. Apart from relying on ECE and the Brier score, incorporating metrics like AUROC, AUPR, etc. helps in further distinguishing useful calibration approaches. Utilizing such measures can help researchers make an intelligent and well-formed decision regarding the suitability of an approach for their application. Additionally, several evaluation protocols have been proposed which extend the problem of calibration to a multi-class setting (Widmann et al., 2019); a natural extension would be to study refinement conjointly with calibration in a similar manner. To conclude, we have presented a theoretically motivated study of calibration and refinement for many recently proposed calibration approaches. Though these methods improve calibration, they negatively impact refinement when compared to a heavily miscalibrated baseline.
A APPENDIX
A.1 NATURAL LANGUAGE TASK

Dataset   Method     Acc     Brier (↓)   ECE (↓)   AUROC (↑)
20News    Baseline   73.31   36.60       17.92     83.95
20News    LS         73.96   36.37        4.79     82.71
20News    FL         70.74   39.59        8.67     83.46

The baseline attains better refinement performance than the other two calibration approaches.
A.2 CALIBRATION AND REFINEMENT FOR TRANSFORMER BASED NETWORKS
We utilize the CCT and CVT networks as proposed by Hassani et al. (2021) in their recent work.
These networks don't require excess pre-training data to obtain accuracy comparable to popular feed-forward, convolution-only architectures. As the underlying architecture is significantly different from the baselines considered in our work, we compare the calibration and refinement of these models against a baseline that is comparable in terms of accuracy.
A.2.1 RESULTS
The results do not indicate that the transformers produce calibrated outputs. However, we did observe that, for the majority of the bins used while computing ECE, the accuracy exceeds the confidence; this points towards the problem of under-confidence.

                      CIFAR-10                           CIFAR-100
                      Accuracy(↑)  ECE(↓)  AUROC(↑)      Accuracy(↑)  ECE(↓)  AUROC(↑)
R-50 (Baseline)       95.65        2.69    93.8          77.2         12.7    85.69
CCT6_3                95.29        7.88    88.83         77.31         5.69   84.53
VGG-16 (Baseline)     93.74        4.8     90.9          -             -      -
CVT6                  92.58        6.76    88.39         -             -      -
VGG-16 (Baseline)     -            -       -             72.46        16.29   84.97
CVT7                  -            -       -             73.01         4.23   85.94

A.3 CALIBRATION BY REFINEMENT
In this section we present the results of the refinement approach of Corbière et al. (2019). ConfidNet (CFN) learns, as a post-processing step, a confidence estimate for new predictions. The pre-trained classification branch drives the classification of an input sample, and the estimate from the confidence branch is employed for estimating the confidence of the prediction. The authors highlight the refinement advantage over the baseline and TrustScore (Jiang et al., 2018) by employing AUPR, AUROC, etc. We utilize the official source code and train VGG-16 (Simonyan & Zisserman, 2015) with batch normalization. We retain 10% of the training data to validate the CFN training parameters and report the calibration and refinement results on the official test split of the CIFARs (Krizhevsky, 2009). The results are reported over 3 independent runs of the experiment.
A.3.1 RESULT
The results in Table 5 show the CFN performance in comparison to an uncalibrated and unrefined baseline. Not only does CFN provide better refinement, it is also able to reduce the calibration errors over the datasets. This provides further support to our understanding of calibrating a model by improving refinement.
A.4 IMPLEMENTATION DETAILS
For the CIFARs, we train the models for 300 epochs with a starting learning rate of 0.1, decayed by a factor of 5 (baseline, ERL, Mixup) or 10 (LS, FL) at 150 and 225 epochs. For the calibration approaches, many of the respective hyper-parameters are borrowed from the original works. For TS we use a temperature of 1.5. For MX, we use α = 0.2 based on the results provided by Thulasidasan et al. (2019) and Singh & Bay (2020). For LS, we use ε = 0.05 following the work of Müller et al. (2019) and Mukhoti et al. (2020). We employ the fixed-gamma variant for FL with γ = 3.0. The strength of the entropy regularizer in ERL is set to 0.1 based on the experiments of Thulasidasan et al. (2019). For ImageNet, the total number of epochs is 100 with learning rate decay by 10 at milestones 30, 60 and 90; this is the standard approach for training ResNet-50 on ImageNet. For the method-specific hyper-parameters we rely on existing experiments and their logical extensions. For LS, we use ε = 0.1 as utilized by Müller et al. (2019) and Thulasidasan et al. (2019). For FL, we rely on γ = 3.0 as the authors utilized it for experiments on the Tiny-ImageNet (Le & Yang, 2015) dataset. For ERL, we set the strength to 0.1 based on the experiments of Thulasidasan et al. (2019). We found that for TS a temperature of 1.1 provides reasonably good calibration. For MX, we employ α = 0.2.
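For reference, the mixup objective with α = 0.2 can be sketched as follows. This is an illustrative snippet with its own function names; the MX repository linked further below remains the reference implementation.

import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    # Draw a mixing coefficient from Beta(alpha, alpha) and blend every
    # example with a randomly chosen partner from the same mini-batch.
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    # Cross-entropy against both sets of targets, weighted by the coefficient.
    return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)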
We report ECE and the Brier score as calibration errors, and AUROC for refinement. All values provided are ×100. We report mean and std. deviation over 3 trials where applicable. We report the accuracies in the supplementary document as we found them to be highly similar across the different methods. We utilize publicly available datasets and code implementations for the majority of our experiments. We use PyTorch (Paszke et al., 2019) as the deep learning framework. GitHub links for the approaches investigated are provided below:
1. Mixup Calibration (MX): https://github.com/paganpasta/OnMixup
2. Focal Loss Calibration (FL): https://github.com/torrvision/focal_calibration
3. ConfidNet (CFN): https://github.com/valeoai/ConfidNet
The remaining approaches can be easily implemented. We provide short Python scripts describing their implementation below.

Listing 1: Entropy Regularization (ERL)

import torch
from torch.nn import functional as F

def erl_loss(logits, targets, eps=0.1, **kwargs):
    # Cross-entropy term.
    h_c = F.cross_entropy(logits, targets, reduction='sum')
    # Entropy of the predictive distribution, summed over the batch.
    h_p = torch.sum(torch.sum(-F.softmax(logits, dim=1) * F.log_softmax(logits, dim=1), 1))
    # Subtracting eps * h_p penalises low-entropy (peaky) predictions.
    return h_c - eps * h_p

Listing 2: Label Smoothing (LS)

import torch.nn.functional as F
import torch.nn as nn

def linear_combination(x, y, epsilon):
    return epsilon * x + (1 - epsilon) * y

def reduce_loss(loss, reduction='sum'):
    return loss.mean() if reduction == 'mean' else loss.sum() if reduction == 'sum' else loss

class LabelSmoothingLoss(nn.Module):
    def __init__(self, epsilon=0.1, reduction='sum'):
        super().__init__()
        self.epsilon = epsilon
        self.reduction = reduction

    def forward(self, preds, target):
        n = preds.size()[-1]
        log_preds = F.log_softmax(preds, dim=-1)
        # Smoothing term: negative log-probability summed over all classes.
        loss = reduce_loss(-log_preds.sum(dim=-1), self.reduction)
        # Standard negative log-likelihood of the true targets.
        nll = F.nll_loss(log_preds, target, reduction=self.reduction)
        # Interpolate between the smoothing term and the NLL with weight epsilon.
        return linear_combination(loss / n, nll, self.epsilon)

We plan to release the pre-trained models for all the methods after the review period to assist future research. Lastly, temperature scaling (TS) only requires dividing the output logits by the chosen temperature.
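For completeness, this step can be sketched in a couple of lines; the snippet below is illustrative rather than the exact script used for the reported numbers, and it assumes the temperature has already been fitted on the held-out validation set as described above.

Listing 3: Temperature Scaling (TS)

import torch.nn.functional as F

def temperature_scale(logits, temperature=1.5):
    # Divide the logits by the fitted temperature and renormalise;
    # a temperature > 1 softens the predicted confidences.
    return F.softmax(logits / temperature, dim=1)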
1. What is the focus of the paper regarding predictive uncertainty evaluation?
2. What are the strengths of the paper's introduction and explanation?
3. What are the weaknesses of the paper's organization, writing, and guidance?
4. What are your questions regarding the equations and assumptions in the paper?
5. Do you have any suggestions for improving the paper's clarity and usefulness?
Summary Of The Paper Review
Summary Of The Paper
This paper tries to point out that refinement is also an important metric in evaluating predictive uncertainty, besides calibration. An intuitive graph shows the representative cases and the difference between accuracy and calibration. Then the relationship between calibration and refinement, as well as that between regularization and calibration, are discussed briefly.
Review
In general, the paper has a good introduction and an intuitive explanation of calibration and refinement. But it is hard to follow when it goes into the detailed discussion, as that part is not well organized and written. It does not really provide useful guidance or a methodology for improving refinement. The detailed questions are listed as follows: In the major Equations 8 and 9, P and N seem not to be defined. I could assert that P might be the true-positive rate; however, without any definitions and explanation, it is not easy to strictly understand the relationship between ECE and refinement. It seems a strong assumption that for every bin the predictions are over-confident. How does this assumption function? What is the major objective of Section 2.2? Does it mean regularization has a negative effect on refinement? How could we fix it if we want good refinement without discarding regularization? Do AUROC and AUPR play the same role in representing refinement? There are some typos in the paper, like the second sentence in the abstract. Please polish it carefully.
ICLR
Title On Deep Neural Network Calibration by Regularization and its Impact on Refinement Abstract Deep neural networks have been shown to be highly miscalibrated. often they tend to be overconfident in their predictions. It poses a significant challenge for safetycritical systems to utilise deep neural networks (DNNs), reliably. Many recently proposed approaches to mitigate this have demonstrated substantial progress in improving DNN calibration. However, they hardly touch upon refinement, which historically has been an essential aspect of calibration. Refinement indicates separability of a network’s correct and incorrect predictions. This paper presents a theoretically and empirically supported exposition reviewing refinement of a calibrated model. Firstly, we show the breakdown of expected calibration error (ECE), into predicted confidence and refinement under the assumption of over-confident predictions. Secondly, linking with this result, we highlight that regularisation based calibration only focuses on naively reducing a model’s confidence. This logically has a severe downside to a model’s refinement as correct and incorrect predictions become tightly coupled. Lastly, connecting refinement with ECE also provides support to existing refinement based approaches which improve calibration but do not explain the reasoning behind it. We support our claims through rigorous empirical evaluations of many state of the art calibration approaches on widely used datasets and neural networks. We find that many calibration approaches with the likes of label smoothing, mixup etc. lower the usefulness of a DNN by degrading its refinement. Even under natural data shift, this calibrationrefinement trade-off holds for the majority of calibration methods. 1 INTRODUCTION Guo et al. (2017) showed that many popular deep neural networks are highly miscalibrated. This implies that the model’s confidence in its estimate is not reflective of its accuracy. Typically, the output after a softmax layer of a neural network is interpreted as confidence (Hendrycks & Gimpel, 2017; Guo et al., 2017). Many studies have found that DNNs output high confidences for incorrectly classified samples (Guo et al., 2017; Pereyra et al., 2017). For scenarios such as automated driving, medical image analysis etc. where one wishes to avoid failures at all cost, such highly confident incorrect predictions can prove fatal. As a result, calibration is a desired property of the deployed neural networks, which is being actively studied in deep learning research. However, calibration is not the only component that describes a reliable system. Along with calibration we also require the predictions to be refined. Refinement describes the separability of a binary classification problem (Murphy, 1973; Gneiting et al., 2007). To build trust, it can be interpreted as the degree of confidence separation between correct and incorrect predictions. It serves as an important heuristic for real world deployment as more often than not the predictions are imposed over an operating threshold and the rest are forwarded to fallback mechanism for further evaluation. For example, in estimating if there is an object ahead of a car we might want to rely on the predictions if the estimated confidence lies above a pre-selected (based on validation) value. The idea of using confidence for reliability of predictions is very similar to how calibration is assessed as well. 
Good refinement indicates an ordinal ranking of predictions which allows better segregation of correct predictions from incorrect ones (Moon et al., 2020). Such a ranking can then allow the user to find an appropriate operating . threshold which reduces the chances of encountering incorrect predictions. Moreover, it also plays an important part in describing predictors’ effectiveness. To be better calibrated, a predictor can cheat by artificially making predictions around the empirical accuracy which is often referred to as predicting the marginal. This implies that for a binary classifier if its accuracy is 50% then making all predictions with confidence of 50% makes it perfectly calibrated but, the prediction thus made are useless. The model learnt is no better than a random coin flip. To emphasize on this example, we provide some more hypothetical settings in figure 1. We can qualitatively observe that it is possible for a network to exhibit varying degree of calibration and refinement in its predictions for the same final accuracy (≈ 50%). In (a), we have a classifier which is well calibrated but poorly refined. As the network makes prediction mostly with a confidence of 40%−60% with a matching accuracy, the usefulness of such a predictor is low as you lose a number of correct predictions by operating above 50% confidence. For (b), we see that the predictions are well separated but not well calibrated. We can select an operating threshold for the network to ensure that we don’t encounter many false-positives in practice; however, the remaining predictions become uncalibrated. Case (c) shows an ideal scenario where the predictions are well separated and calibrated. The correct predictions are all predicted with very high confidence, and incorrect predictions consist of very low confidence values. We also present a real scenario figures (d, e), wherein the confidence decreased after label smoothing has led to larger degradation of the quality of predictions. Though commonly studied together in the domains of statistics (Gneiting et al., 2007), meteorological forecast (Murphy & Winkler, 1977), medical analysis (Van Calster et al., 2019); for recent approaches proposed in the deep learning domain, the joint importance has been sidelined for individual improvements. Many of the recently proposed calibration methods employ strictly proper scores such as Brier Score (Brier, 1950) (mean squared error) and negative log-likelihood to measure calibration. Such scores have been known to decompose into calibration, and refinement components (Murphy, 1973). However, a metric which produces a single score reflecting 2 complex attributes can conceal the area in which the improvement is made. Due to this reason, many ap- proaches utilise Expected Calibration Error (ECE) (Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) or its variants to focus only on the calibration aspect of a forecaster. Motivated from reliability diagrams, it measures the difference between model confidence and accuracy computed over various bins. Knowing that refinement and calibration play an important part and consequently have been an integral component for describing a trustworthy and reliable predictor, it raises an important question: ‘How well do modern calibration approaches fare on refinement?’. The focus of our paper is to investigate this question. Our main contributions are as follows: • We mathematically highlight the connection between ECE and area under the ROC curve (AUROC) computed for a classification task. 
AUROC serves as a measure for refinement of predictions in this work. This serves to show that model confidence and confidence refinement are two areas focusing on which we can improve model calibration. This provides theoretical backing to various refinement based methods which improve calibration for which this support didn’t exist. • We also shed light on the link between the calibration approaches (based on regularisation) and the previously derived relationship to highlight the mode of working of such algorithms. We find that these algorithms work only on the confidence aspect of the classification which can in theory lead to predicting the marginal. • We provide supporting empirical evidence to illustrate improved calibration but at the expense of refinement of many calibration approaches. As overall the confidence is reduced in the final predictions, this leads to poor refinement. • Lastly, we provide empirical evidence of calibration-refinement trade-off under natural data shift. We find that refinement, in this case, is also degraded w.r.t an uncalibrated baseline. The structure of the paper is as follows: In Section 2, we first provide formal introduction to the concepts of calibration and refinement. We further show that under a weak assumption the goal of minimising the calibration error falls in line to improve separability between correctly & incorrectly classified samples. Furthermore, we shed light on the working method of many popular calibration approaches. In Section 3, we review the existing approaches proposed for calibration and the employed metrics. Sections 4 and 5 describe the evaluation setting and experiments which empirically verify our theoretical understanding built in Section 2. We discuss the implications of our findings, future work and conclusions in Section 6. 2 CALIBRATION & REFINEMENT A dataset is composed of tuples of inputs and targets represented as D = {(xi, yi)}Li=1, where x ∈ Rd, yi ∈ Y = {1, 2, . . .K} and L are the total number of samples in the dataset. We represent the learnable weights of a network as θ. The output of a network is a probability distribution over K possible outcomes. The predicted category and predicted confidence are respectively ŷi = argmaxk∈YP (Y = k|xi, θ) (1) ci = maxk∈YP (Y = k|xi, θ), (2) where ci is referred to as either the winning probability or maximum class probability. We focus on the problem of calibration and refinement for a reduced binary setting. For a multi-class classification problem we form two groups, overall correctly classified samples (or positive category) and overall incorrectly classified samples (or negative category). We intend to measure calibration and refinement within this reduced setting. Definition 2.1 (Calibration). A model Pθ is calibrated if P(yi = ŷi|ci, θ) = ci ∀(xi, yi) ∈ Dt. Dt being the test set. This implies that the accuracy of the model should be reflective of its confidence in the prediction. Deviation from it leads to under-confident (accuracy> predicted confidence) or over-confident (accuracy < predicted confidence) models. A common metric often used to measure calibration in practice is the Expected calibration error (Naeini et al., 2015). It is measured as the difference between the accuracy and predicted confidences computed over several bins. 
Formally, ECE , M∑ m |Bm| L |Am − Cm|, (3) where average confidence (C) and accuracy (A) is computed after splitting the predictions into predefined M bins sampled uniformly based on the predicted confidence and Bm is the number of total samples falling in bin m. Definition 2.2 (Refinement). Let Sp and Sn denote correct and incorrect classification of a model on Dt. Predictions are considered refined iff ci > cj ∀xi ∈ Sp , ∀xj ∈ Sn. Refinement enforces a separation between the two sets of prediction. Degroot & Fienberg (1981) provide an alternative definition of refinement for calibrated classifiers. We consider area under the ROC curve (r) (Ling et al., 2003), as an appropriate choice of metric for measuring refinement of a model(Corbière et al., 2019). A common interpretation of r is that it denotes the expectation that a uniformly drawn random positive sample is ranked higher (higher confidence) than a uniformly drawn random negative sample. Hand & Till (2001) calculate r as: r = Rp − |Sn| × (|Sn|+ 1)/2 |Sp| × |Sn| (4) where, Rp = ∑ ∀x∈Sp rank(x) and rank(x) denotes the rank of prediction x in an increasingly sorted list of predictions based on associated confidence. It is straightforward to observe that r for a refined model will always be greater than an unrefined one (switching the rank of an incorrect prediction with the correct one decreases r). 2.1 CONNECTING ECE AND r Assumption: We assume that Am < Cm∀m. It implies that the network is over-confident in its prediction throughout. This is partly true in practice as for all deep neural networks the problem of calibration entails over-confident predictions(Thulasidasan et al., 2019). Also, we empirically observed that for networks trained on ImageNet(Deng et al., 2009), CIFAR-100(Krizhevsky, 2009), STL-10(Coates et al., 2011) and CUB-200(Wah et al., 2011) the number of bins for which Am <= Cm holds true are 80, 95, 94 and 86 respectively for M = 100. Recently, a study by Bai et al. (2021) showed that a classifier learnt through well specified logistic regression is destined to be overconfident. Let, pm and nm represent positive and negative category samples in bin m respectively which implies |Sp| = ∑ m pm and |Sn| = ∑ m nm. We can now describe the accuracy within a bin as Am = pm pm+nm . Substituting all the above conversions to Equation equation 3, ECE is updated as ECE = ∑ m (pm + nm) |Sp|+ |Sn| ( Cm − pm pm + nm ) . (5) This can be further expanded to ECE = ∑ m (pm + nm) |Sp|+ |Sn| Cm︸ ︷︷ ︸ I − ∑ m pm |Sp|+ |Sn|︸ ︷︷ ︸ II . (6) I denotes the expected confidence of the predictions, EC∼pθ(X) [C], of the model, whereas II is the expected model accuracy, E [A]. Equation equation 6 can thus be updated to ECE = E [C]− E [A] . (7) For a binary classification task, it has been shown (Hernández-Orallo et al., 2012; Flach & Kull, 2015) that r and E [A] are linearly related averaged over all possible true-positive rates. They showed that: E [A] = P |Sp|+ |Sn| (1− P |Sp|+ |Sn| )(2r − 1) + 1 2 , (8) where r is the area under the ROC curve. Substituting Equation equation 8 for E [A] in Equation equation 7 and re-arranging the terms gives us the final expression in the form of ECE = E [C]︸ ︷︷ ︸ α −r 2PN (|Sp|+ |Sn|)2︸ ︷︷ ︸ β − P 2 +N2 2(|Sp|+ |Sn|)2 .︸ ︷︷ ︸ γ (9) Traditionally, for strictly proper scoring rules such as the Brier score, the decomposition of the metric into calibration and refinement is well known. 
However, for ECE which is not a strict proper scoring rule, we have shown that the breakdown is into average predicted confidence and refinement under the applied assumption of bins-wide overconfidence. For a set of predictions, we have the following constraints P ≥ 0, N ≥ 0, |Sp| + |Sn| > 0, β ≥ 0 and γ > 0. We can decrease the calibration error by either reducing α and/or increasing r. Moon et al. (2020) have shown that their refinement based approach improves calibration however, they do not provide the reasoning behind such an observation. Their observation can now be supported by the relationship described in Equation equation 9. We also compute calibration of another refinement approach, CFN(Corbière et al., 2019), for which earlier these results were not computed and find that in this case as well the network achieves better calibration after the refinement process (see Section A.3). 2.2 HOW REGULARIZATION ENFORCES CALIBRATION? We highlighted the factors which contribute towards lowering of the expected calibration error. In this section, we focus on shedding light on the working route for many regularization based calibration approaches instead. To emphasize, regularization acts as a penalty during the training procedure. Label Smoothing(Müller et al., 2019) provides calibration apart from other benefits. Many existing approaches also have been proven to materialize into label smoothing (LS) such as entropy regularization (ERL) (Pereyra et al., 2017) and focal loss (FL) (Mukhoti et al., 2020). We focus our attention to the label smoothing objective function and decipher the mode of working for this particular algorithm. A training loss consisting of label smoothing can be written as L = LCE + LLS , (10) where CE stands for cross-entropy and LS represents label smoothing contribution. Label smoothing contribution is the KL divergence between uniform distribution (U ) and network’s output distribution (Pθ). Formally, LLS = −DKL(U ||Pθ). (11) LLS can be expanded as, LLS = i<N∑ i=0 −U(xi)log(Pθ(xi))︸ ︷︷ ︸ I +U(xi)log(U(xi)))︸ ︷︷ ︸ II , (12) where xi is a sample input from a total of N sample points. The value for the uniform distribution is set before hand to a small constant thus making II a constant term. I is the term which is optimised and for a binary classification problem can be written as min N∑ i=1 logci + log(1− ci) s.t. 0 ≤ ci ≤ 1. (13) The above expression reaches a minimum value when ci = 0.5. For multi-class classification, the minimum is achieved at 1K . This goes on to show that label smoothing works on only reducing the confidence of all of its predictions. For ERL and FL, the breakdown is similar as they simply rely on slightly different target functions in equation 11. The breakdown is similar when we use their corresponding losses which are: Lerl = −H(Pθ) (14) Lfocal = (1− γ)H(Pθ) (15) where, H is the entropy. The takeaway is that regularisation added only helps to tone down the winning class confidence and increase the losing confidences. The improvement in calibration is focused more on the α-aspect of Equation equation 9. Intuitively, concentrating predictions at a point will have detrimental effect on a network’s refinement as now we have concentrated incorrect and correct predictions. 3 RELATED WORK 3.1 CALIBRATION This work is focused on calibration of point estimate based deep neural networks. For the Bayesian perspective, we refer the readers to recent works on ensembles(Lakshminarayanan et al., 2017) and cold-posterior(Wenzel et al., 2020). 
The existing work for calibration of point estimate models can be categorised into the following 3 broad groups based on the commonalities between the approaches. Regularisation based approaches apply a calibrating penalty to the supervised learning objective. Pereyra et al. (2017) added negative entropy of the predictions to encourage the model to predict less ‘peaky’ estimates. Subsequently, many approaches have been proposed along this direction which adds noise to the labels (Müller et al., 2019), optimise a proxy for the calibration error metric (Kumar et al., 2018), and replace the cross-entropy objective with focal loss (Mukhoti et al., 2020). Peterson et al. (2019) utilised human inferred soft-targets to improve robustness. This approach can be understood as being along the lines of label smoothing. Post-hoc approaches re-scale the confidence scores of an uncalibrated neural network to make it calibrated. The scaling hyper-parameters are chosen on a held-out validation set. Some of the recently proposed approaches are temperature scaling (Guo et al., 2017), scaling and binning calibration (Kumar et al., 2019), Dirichlet calibration (Kull et al., 2019), and beta calibration (Kull et al., 2017). These approaches find motivation from classical methods such as Platt scaling (Platt, 1999), binning (Zadrozny & Elkan, 2001), and isotonic regression (Zadrozny & Elkan, 2002). In the last group, we list the remaining approaches. Mixup (Zhang et al., 2018; Thulasidasan et al., 2019) and AugMix (Hendrycks et al., 2020) combine data augmentation and regularization. Pretraining (Hendrycks et al., 2019a) and self-supervised learning (Hendrycks et al., 2019b) have also been highlighted to be beneficial in this regard. 3.2 REFINEMENT By refining prediction, methods seek to find a good ordinal ranking of predictions. This may or may not result in a calibrated model as it has not been studied for this problem extensively. Moon et al. (2020) incorporated ‘Correctness Ranking Loss’ to allow a DNN to learn appropriate ordinal rankings for classified samples. They also observed that their approach helped in calibrating the network; however, do not discuss the reasoning behind this observation. As a replacement for confidence estimate, Jiang et al. (2018) introduced ‘TrustScore’, which provides a better ordinal ranking of predictions than the output of the network. They utilised the ratio between the distance from the sample to the nearest class different from the predicted class and the distance to the predicted class as the trust score. ConfidNet (Corbière et al., 2019) incorporates the learning of this trust score as an additional branch in the network. In the post-hoc stage, ConfidNet branch of the classifier is trained to predict a confidence score which mimics the reliability of the network on its prediction. Meta-cal(Ma & Blaschko, 2021), is a recent attempt to ensure that calibration ensures usability of the classifier though post-hoc ranking on an existing calibrated network. 3.3 METRICS For the scores utilised to assess calibration, the most commonly used are Brier score, negative loglikelihood (NLL), Expected Calibration Error (ECE) and Overconfidence Error (OE). Brier score (Brier, 1950) and NLL are strictly proper scoring rules (Gneiting & Raftery, 2007; Dawid & Musio, 2014). It has been shown that strictly proper scoring rules decompose into calibration and refinement components (Murphy, 1973; Blattenberger & Lad, 1985). 
The presence of the refinement component describes the utility of the calibration approach. However, the implicit combination of the two can conceal the area of improvement. ECE and OE (Degroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015) are proper scoring rules (not strict) and are adapted from reliability diagrams for judging the calibration of the models. They are not strict as the optimum value of 0 can be achieved with more than one set of predictions. These metrics also suffer from high sensitivity to the bin hyper-parameter (Nixon et al., 2020). Finding a good calibration metric is an active area of research (Geifman et al., 2019; Nixon et al., 2020). 4 IMPLEMENTATION DETAILS To empirically verify our findings we employ the following calibration approaches in our study. • Label Smoothing (LS) • Entropy Regularization (ERL) • Mixup (MX) • Focal Loss (FL). We compare these approaches to a cross-entropy trained model referred to as baseline. For the datasets we rely on CIFAR-10/100 (Krizhevsky, 2009), STL-10(Coates et al., 2011), CUB200(Wah et al., 2011) and ImageNet (Deng et al., 2009) which have been used extensively in recent calibration studies. The neural network architectures chosen are Resnet-50 (He et al., 2016), VGG16 (Simonyan & Zisserman, 2015) and DenseNet-121 (Huang et al., 2017) for CIFARs as to reflect on architecture wide occurrence of the calibration-refinement trade-off. Resnet-50 for (Pre-trained) CUB-200 and ImageNet. VGG-16(with batch norm) for STL-10. Alongside accuracy we report ECE and Brier score for calibration errors whereas, AUROC and AUPR for refinement. All values provided are ×100. We report mean and deviation(as subscript) over 3 trials where applicable. Training details are provided in the supplemental (see Section A.4). 5 EXPERIMENTS & RESULTS 5.1 CALIBRATION & REFINEMENT Tables 1 and 2 show the joint calibration and refinement on various datasets. Unsurprisingly, calibration approaches attain lower calibration errors for most of scenarios. Also, in many cases brier score is also better than the baseline which hides the shortcoming. In table 1 we can observe that in terms of refinement, the baseline performs superior to calibrated models. Focusing on AUPR and AUROC, these metrics capture slightly different aspects of the quality of predictions. AUPR is typically a preferred metric when there is an imbalance due to the negative category. But, as the overall accuracy of networks considered is > 50 we believe that is not the case. Additionally, AUPR prioritises positive class samples but not their ordering which forms the definition of refinement. Keeping this in mind, we believe AUROC is a stronger indicator of refinement with AUPR serving a similar but softened purpose. ERL provides the least improvement in terms of calibration and achieves slightly worse AUROC w.r.t the baseline at times. Out of all the approaches assessed, LS consistently acquires the lowest refinement performance. MX and FL provide moderate to low decay of refinement. For other datasets in table 2 similar observation of weakening refinement can be drawn. Another point to notice is the varying degree of calibration and refinement across datasets. This can be attributed to over-parameterized training. Mukhoti et al. (2020) argued that over-fitting to training leads to miscalibration. We suspect since the network’s overfit to varying degree on different datasets. This results in varied improvement in calibration and hence the impact on refinement also varies. 
For example, on ImageNet we achieve a baseline training accuracy of 77% as opposed to the CIFARs’ training accuracy > 99%. Figure 1 we also notice that the density plots for ImageNet are vastly different from CIFARs as the concentration of misclassified samples in the baseline are well separated from the corrects ones. 5.2 IMPACT ON REFINEMENT UNDER DATA SHIFT Previously, the test set consisted of samples originating from the same distribution as that of training. In this experiment, we aim to assess the deterioration under natural distribution shift of the datasets. Natural shift implies a subtle change in scene composition, object types, lighting conditions, and many others (Taori et al., 2020). It is logical to assume that a DNN is bound to confront such images in the real world. Examples of naturally shifted datasets are CIFAR-10.1 (Recht et al., 2018), CIFAR-10.2 (Lu et al., 2020) and ImageNet-v2 (Recht et al., 2019). These datasets are collected following the identical process to that of the original reference dataset. Such datasets have been utilised to measure the lack of generalisation and robustness of many classification models (Taori et al., 2020; Ovadia et al., 2019). This is the first attempt at evaluating calibration-refinement under natural data-shift to the best of our knowledge. Assessment of calibration under synthetic shift has been reported by Ovadia et al. (2019). However, we believe natural data-shift is a scenario which a deployed DNN is more likely to face and hence requires equal if not more attention. By evaluating calibration-refinement trade-off we will also be able to highlight the severity and extent of the problem induced by many calibration approaches. 5.2.1 RESULTS Table 3 shows the performance of models trained on original datasets and tested on shifted variants. For CIFAR-10.x we use the VGG-16 model trained on CIFAR-10 and for ImageNet-v2 we employee the ResNet-50 trained on ImageNet. We spot that the trend of worsening refinement continues for models under data shift as well. Similar to what we have already seen for LS, it also provides the lowest refinement performance under natural shift. A surprising observation to note is the poor performance of MX. MX as shown by Thulasidasan et al. (2019) performs well on out-of-distribution detection. However, when the data shift is not severe it appears that mixup provides no added benefit in terms of refinement. We also observe that calibration approaches provide better calibration than the baseline under the natural shift. This observation has not yet been highlighted in existing studies which focus on ood performance or some form of generalisation metric (relative accuracy) to investigate robustness of a model. For synthetic shifts, Ovadia et al. (2019) made a similar observation and noted that calibration approaches to a certain extent improve calibration on corrupted images w.r.t the baseline. 6 DISCUSSION & CONCLUSION In this paper we have brought forth a downside of many calibration approaches. We believe refinement is an important aspect which communicates the usefulness of safety-critical DNNs. Discussed theoretically and empirically, we have shed light on the current state of calibration-refinement tradeoff. Many regularization based calibration approaches disregard the role of refinement, leading to severe loss in the utility of DNNs thus trained. We successfully presented the case of declining refinement for a wide variety of approaches tested on many different datasets. 
The derived relationship in equation 9 showed how improving refinement can help better calibrate the model. This provides justification for calibration observed for refinement approach of Moon et al. (2020). In the appendix (A.3), we show that calibration is induced by the refinement technique proposed by Corbière et al. (2019). In the future, we aim to focus on finding balanced calibration methods which preserve if not improve refinement of predictions. The benefits of label smoothing have been highlighted by Müller et al. (2019). We were able to shed light on a severe limitation of the approach, which practitioners were currently unaware of. Similar to LS, other easy to apply calibration methods are also damaging in practice. A similar trend is observed for a NLP classification task reported in appendix A.1. We observed that the degree of refinement degradation varies from one dataset to another. Mukhoti et al. (2020) discussed the causes for miscalibration and accredited it to the over-fitting on the training data (under cross-entropy loss). We found that the training accuracy achieved by the baseline is 99.99%, 99.4% and 77.9% for CIFAR-10, CIFAR-100 and ImageNet respectively. This signals towards a comparably lower over-fitting of baseline trained on ImageNet and subsequently, a lower impact on calibration leading to a lower refinement degradation. We also noted the extension of calibration to naturally shifted data. Akin to the observations made by (Ovadia et al., 2019) on their evaluation on synthetically shifted datasets, we observed that existing solutions provide calibration on naturally shifted datasets as well. However, this calibration comes at a cost and as a result refinement aspect of the models is comparably poorer than their uncalibrated counterparts. An important point to note was the failure of Mixup under datashift. Thulasidasan et al. (2019) has demonstrated Mixup’s ability to distinguish ood samples however, we believe that natural shift is a weaker notion of data shift than ood evaluation and MX fails to provide any benefit in this regard. We also noted the varying impact of this degradation across datasets. We suspect that the lack of evident over-fitting on ImageNet is the root cause behind the visibly lower calibrationrefinement impact on it. Apart from relying on ECE and Brier score, incorporating metrics like AUROC, AUPR etc. helps in further distinguishing useful calibration approaches. Utilizing such measures can help researchers to make an intelligent and well-formed decision regarding the suitability of an approach for their application. Additionally, many evaluation protocols have also been proposed which extend the problem of calibration to a multi-class setting (Widmann et al., 2019). A natural extension will be to study refinement conjointly with calibration in a similar manner. To conclude, we have demonstrated a theoretically motivated study of calibration and refinement of many recently proposed calibration approaches. Though these methods improve calibration, they negatively impact refinement when compared to a heavily miscalibrated baseline. A APPENDIX A.1 NATURAL LANGUAGE TASK Dataset Meth. Acc Brier (↓) ECE (↓) AUROC (↑) 20News Baseline 73.31 36.60 17.92 83.95 LS 73.96 36.37 4.79 82.71 FL 70.74 39.59 8.67 83.46 formance than the other two calibration approaches. A.2 CALIBRATION AND REFINEMENT FOR TRANSFORMER BASED NETWORKS We utilize CCT and CVT networks as proposed by Hassani et al. (2021) in their recent work. 
These networks don;t require excess pre-training data to obtain comparable accuracy to popular feed-forward convolution only architectures. As the underlying architecture is significantly different from the baselines considered from our work, we still try to compare calibration and refinement of these models with a comparable baseline (in-terms of accuracy). A.2.1 RESULTS The results don’t indicate that transformers produce calibrated outputs. However, we did observe that for majority of the bins while computing ECE, the accuracy > confidence. This indicates towards the problem of under-confidence. CIFAR-10 CIFAR-100 Accuracy(↑) ECE(↓) AUROC(↑) Accuracy(↑) ECE(↓) AUROC(↑) R-50(Baseline) 95.65 2.69 93.8 77.2 12.7 85.69 CCT6 3 95.29 7.88 88.83 77.31 5.69 84.53 VGG-16(Baseline) 93.74 4.8 90.9 - - - CVT6 92.58 6.76 88.39 - - - VGG-16(Baseline) – – – 72.46 16.29 84.97 CVT7 – – – 73.01 4.23 85.94 A.3 CALIBRATION BY REFINEMENT In this section we present the results of the refinement approach of Corbière et al. (2019). ConfidNet (CFN) learns as a post-processing step a point-estimate for new predictions. The pre-trained classification branch drives the classification of an input sample, and for estimating the confidence for the prediction, the estimate from the confidence branch is employed. The authors highlight the refinement advantage over baseline and TrustScore Jiang et al. (2018) by employing AUPR, AUROC, etc. We utilize the official source code and train VGG-16 Simonyan & Zisserman (2015) with batch normalization. We retain 10% of training data to validate CFN training parameters and report the calibration and refinement results on the official test split for CIFARs Krizhevsky (2009). The results are reported over 3 independent runs of the experiment. A.3.1 RESULT Results in Table 5 show the CFN performance in comparison to an uncalibrated and unrefined baseline. Not only does CFN provide better refinement, it is also able to reduce the calibration errors over the datasets. This provides further support to our understanding of calibrating a model by improving refinement. A.4 IMPLEMENTATION DETAILS For CIFARs, we train the models for 300 epochs with a starting learning rate of 0.1 decayed by a factor of 5 (baseline, ERL, Mixup) or 10 (LS, FL) at 150 and 225 epochs. For calibration approaches many of the respective hyper-parameters are borrowed from the original work. For TS we use the temperature of 1.5. For MX, we use α = 0.2 based on the results provided by (Thulasidasan et al., 2019; Singh & Bay, 2020). For LS, we use = 0.05 following the work of Müller et al. (2019) and Mukhoti et al. (2020). We employ the fixed gamma variant for FL with γ = 3.0. The strength of the entropy regularizer in ERL is set to 0.1 based on the experiments of Thulasidasan et al. (2019). For ImageNet, the total number of epochs is 100 with learning rate decay by 10 at milestones 30, 60, 90. This is the standard approach for training Resnet-50 on ImageNet. For the method specific hyper-parameters we rely on existing experiments and their logical extensions. For LS, we use = 0.1 as utilized by Müller et al. (2019) and Thulasidasan et al. (2019). For FL, we rely on using γ = 3.0 as the authors utilized it for experiments on the Tiny-ImageNet (Le & Yang, 2015) dataset. For ERL, we use the strength to be 0.1 based on the experiments of Thulasidasan et al. (2019). We found that for TS the temperature 1.1 provides reasonably well calibration. For MX, we employ α = 0.2. 
We report ECE and the Brier score as calibration errors, whereas AUROC is reported for refinement. All values provided are ×100. We report the mean and standard deviation over 3 trials where applicable. We report the accuracies in the supplementary document as we found them to be highly similar across the different methods.

We utilize publicly available datasets and code implementations for the majority of our experiments. We use PyTorch (Paszke et al., 2019) as the deep learning framework. GitHub links for the approaches investigated are provided below:

1. Mixup Calibration (MX): https://github.com/paganpasta/OnMixup
2. Focal Loss Calibration (FL): https://github.com/torrvision/focal_calibration
3. ConfidNet (CFN): https://github.com/valeoai/ConfidNet

The remaining approaches can be easily implemented. We provide short Python scripts describing their implementation below:

Listing 1: Entropy Regularization (ERL)

import torch
from torch.nn import functional as F

def erl_loss(logits, targets, eps=0.1, **kwargs):
    # Cross-entropy term, summed over the batch.
    h_c = F.cross_entropy(logits, targets, reduction='sum')
    # Entropy of the predictive distribution, summed over the batch.
    h_p = torch.sum(torch.sum(-F.softmax(logits, dim=1) * F.log_softmax(logits, dim=1), 1))
    # Subtracting the entropy rewards higher-entropy (less over-confident) predictions.
    return h_c - eps * h_p

Listing 2: Label Smoothing (LS)

import torch.nn as nn
import torch.nn.functional as F

def linear_combination(x, y, epsilon):
    return epsilon * x + (1 - epsilon) * y

def reduce_loss(loss, reduction='sum'):
    return loss.mean() if reduction == 'mean' else loss.sum() if reduction == 'sum' else loss

class LabelSmoothingLoss(nn.Module):
    def __init__(self, epsilon=0.1, reduction='sum'):
        super().__init__()
        self.epsilon = epsilon
        self.reduction = reduction

    def forward(self, preds, target):
        n = preds.size()[-1]  # number of classes
        log_preds = F.log_softmax(preds, dim=-1)
        # Loss against the uniform distribution over classes.
        loss = reduce_loss(-log_preds.sum(dim=-1), self.reduction)
        # Standard negative log-likelihood against the hard targets.
        nll = F.nll_loss(log_preds, target, reduction=self.reduction)
        # Mix the two terms with weight epsilon.
        return linear_combination(loss / n, nll, self.epsilon)

Lastly, temperature scaling (TS) requires dividing the output logits by the chosen temperature. We plan to release the pre-trained models to assist future research for all the methods after the review period.
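Temperature scaling itself can be expressed in a couple of lines. The snippet below is an illustrative sketch rather than a released script; the helper name is ours, and the default temperature of 1.5 mirrors the CIFAR setting described above.

import torch.nn.functional as F

def temperature_scale(logits, temperature=1.5):
    # TS divides the logits by a scalar temperature before the softmax;
    # the arg-max (predicted class) is unchanged, only confidences are rescaled.
    return F.softmax(logits / temperature, dim=1)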
1. What is the focus of the paper regarding calibration and refinement? 2. What are the strengths of the proposed approach, particularly in breaking down ECE? 3. What are the weaknesses of the paper, especially regarding the label smoothing loss and the absence of certain baselines? 4. Do you have any questions regarding the definitions and equations presented in the paper? 5. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
The authors highlight the connection between ECE and AUROC when AUROC is evaluated in terms of refinement. The authors break down ECE in a way which shows existing algorithms can work by only reducing the average confidence. The authors show empirically that existing calibration approaches improve calibration at the expense of refinement.

Review
Pros
- The authors give a nice breakdown of refinement and how it breaks down in the ECE metric. Refinement is something that many other works do not mention, and it looks to have many historical mentions, which means it may be widely overlooked in the recent calibration literature. Intuitively, the confident and unconfident predictions should be well separable if we are to avoid overconfident + wrong predictions which hurt calibration error.
- The experiments seem to cover common datasets and network architectures.

Cons
- The label smoothing loss L_LS appears to have the wrong sign. Equation 11 says that it is negative, which would maximize the KL between the uniform and output distributions.
- Table 1 is missing baselines for temperature scaling, which is a relevant baseline. Temperature Scaling is defined as TS in the appendix and it is said that TS uses a temperature of 1.5, but there is no mention of TS in any of the tables or results as far as I can see.
- NLL is not included in the metrics which are evaluated. NLL is a proper scoring rule and one which is widely reported in many works. If the authors want to highlight the usefulness of measuring refinement over existing approaches, then NLL should be compared against.

Minor
- Definition 2.2: the last line says "switching the rank of an incorrect prediction with the correct one decreases r." Shouldn't the 'correct' and 'incorrect' be swapped since there is no rank of an incorrect prediction?
- Referenced equations all say "Equation equation x".
- Table 1 text is too small and very hard to read.
- Section 5.1: similar observation → a similar observation
- Section 5.2.1: employee → employ
- Section 5.2.1: ood → OOD (ood is used elsewhere in the text as well)
ICLR
Title Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability

Abstract
Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge the adversarial robustness of neural nets with the Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize the method of successive approximations, an optimal control algorithm based on Pontryagin's maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves the adversarial robustness of deep models.

N/A

1 INTRODUCTION
Deep neural networks achieve state-of-the-art performance on a variety of tasks (LeCun et al., 2015). However, neural nets are known to be vulnerable to adversarial examples. Imperceptibly perturbed inputs can induce erroneous outputs in neural nets (Szegedy et al., 2013). In image classification problems of computer vision, previous work has proposed various methods to attack deep models and induce low accuracy (Goodfellow et al., 2015; Madry et al., 2017; Papernot et al., 2016a; Carlini & Wagner, 2017a). Although multiple defenses against adversarial attacks have been developed, they do not ensure safety when faced with strong attack methods. There are also theories that explain the existence of adversarial examples (Ilyas et al., 2019; Shamir et al., 2019), but they often fail to fully explain the features and behaviors of this phenomenon. This makes the study of adversarial attacks important, as they are a threat to real-life machine learning systems (Kurakin et al., 2016).

In this paper, we propose a dynamical system view of the adversarial robustness of deep models, as well as a new method that significantly improves their defense against adversarial attacks. Recent works have shown the connection between deep neural networks and dynamical systems (E, 2017; Li et al., 2017; Haber & Ruthotto, 2017; Lu et al., 2017). If we regard the neural net as a discretization of an ordinary differential equation (ODE), then training neural nets becomes finding an optimal control of the corresponding discrete dynamical system.
Traditionally, we often treat training neural networks as an unconstrained non-convex optimization problem
$$\min_{\theta \in \Theta} J(\theta) + R(\theta),$$
where $\theta$ denotes the parameters of the model, $J$ denotes the loss function and $R$ denotes the regularizer term, and we solve the problem with (stochastic) gradient-descent based methods (Bottou, 2010; Ruder, 2016). In the training process, we feed the network with a batch of training data and compute the gradient with forward and backward propagation (Rumelhart et al., 1986). The propagation process resembles solving optimal control problems that tune the parameters to make the output close to the target states. This viewpoint motivates us to bridge adversarial robustness with the Lyapunov stability of a dynamical system, and to train robust networks with algorithms that find a stable optimal control. We formulate the discussion in later sections.

2 RELATED WORK

2.1 ADVERSARIAL DEFENSE
Many defense methods have been proposed to improve models' adversarial robustness. The defenses mainly fall into three types: adversarial training (Szegedy et al., 2013; Zhang et al., 2019), modifying the networks (Gu & Rigazio, 2015; Lyu et al., 2015; Papernot et al., 2016b; Nayebi & Ganguli, 2017; Ross & Doshi-Velez, 2017), and adding external models (Lee et al., 2017; Akhtar et al., 2017; Gebhart & Schrater, 2017; Xu et al., 2018; Sun et al., 2019). Although various defense methods have been developed, a defended deep model is often successfully attacked by newly developed attacks or specific counter-countermeasures (Carlini & Wagner, 2017b). Therefore, it is hoped that defenses against general attacks will be devised to make deep learning models (adversarially) robust to real-life threats.

2.2 NEURAL ODES AND OPTIMAL CONTROL
Recent works have bridged deep neural networks with ODEs and dynamical systems. On the one hand, deep residual networks (He et al., 2015) can be interpreted as a forward Euler scheme approximating an ODE (E, 2017), which motivates us to design effective network structures (Lu et al., 2017). On the other hand, regarding the network as a dynamical system allows us to set up an optimal control viewpoint of neural nets. Pontryagin's Maximum Principle (Boltyanskii et al., 1960) has been applied to train neural nets (Li et al., 2017; Li & Hao, 2018).

3 ADVERSARIAL ROBUSTNESS AND LYAPUNOV STABILITY

3.1 DYNAMICS OF DEEP NEURAL NETS
Given a $T$-layer neural net, we let the dynamical system $\{f_t(x_t, \theta_t) : t = 0, \dots, T\}$ represent the network, where $x_t$ is the input of the $t$-th layer, $\theta_t$ is the parameter, and $f_t : \mathbb{R}^{d_t} \times \Theta_t \to \mathbb{R}^{d_{t+1}}$ denotes the $t$-th layer's transformation, which is usually a non-linear function $\sigma(\theta_t x_t + b_t)$ for fully-connected layers, convolution layers, batch normalization layers, etc. Therefore, training the neural net can be regarded as controlling the parameters to let the dynamics fit the training data. Specifically, the training optimization problem can be formulated as a typical optimal control problem as follows:
$$\min_{\theta} \; \sum_{i=1}^{B} J(x_T^i) + \sum_{t=0}^{T} L(\theta_t), \quad \text{subject to } x_{t+1}^i = f_t(x_t^i, \theta_t), \; t = 0, \dots, T-1,$$
where $x^i$ denotes the $i$-th input in the batch and $B$ denotes the batch size. $J$ and $L$ are the loss function and the regularizer, respectively. In particular, if the model is a deep residual network with structure $x_{t+1} = x_t + f_t(x_t, \theta_t)$, we can regard the problem as the forward Euler discretization of the following continuous optimal control problem:
$$\min_{\theta} \; J(x(T)) + \int_0^T L(\theta(t)) \, dt, \quad \text{subject to } \dot{x} = f(t, x(t), \theta(t)), \; x(0) = x, \; 0 \le t \le T,$$
where $x(t)$ is a continuous trajectory from the input to the output logits.

3.2 LYAPUNOV STABILITY
Adversarial examples are usually clean images to which a small, carefully computed perturbation $\eta$ has been added. The model predicts the correct label when fed the clean input $x_0$, while the output is completely different when it is fed the perturbed input $x_0 + \eta$. The dynamical system view of neural nets motivates us to characterize this sensitivity with the Lyapunov stability of a system (Hirsch et al., 2004).

Definition 1 (Lyapunov Stability). For a given dynamical system $\dot{x} = f(x)$, $x(0) = x_0$, with equilibrium $x_e$:
• The system is Lyapunov stable if, for every $\varepsilon > 0$, there exists $\delta > 0$ such that if $\|x(0) - x_e\| < \delta$, then $\|x(t) - x_e\| < \varepsilon$ for every $t \ge 0$.
• The system is asymptotically stable if it is Lyapunov stable and there exists $\delta > 0$ such that if $\|x(0) - x_e\| < \delta$, then $\lim_{t \to \infty} \|x(t) - x_e\| = 0$.
• The system is exponentially stable if it is asymptotically stable and there exist $\alpha > 0$, $\beta > 0$, $\delta > 0$ such that if $\|x(0) - x_e\| < \delta$, then $\|x(t) - x_e\| \le \alpha \|x(0) - x_e\| e^{-\beta t}$ for all $t \ge 0$.

The definitions can easily be extended to discrete-time systems. Intuitively, Lyapunov stability states that for any small perturbation $\eta$, the trajectory stays "close enough" to the original one. If we regard a neural net as a dynamical system and ensure that the network is Lyapunov stable, then the model is robust to all (adversarial) perturbations.

3.3 ADVERSARIALLY ROBUST NEURAL NETS
Due to the connection between numerical ODEs and residual networks, we first consider the robustness (i.e., Lyapunov stability) of continuous ODEs.

Theorem 1 (Stable ODEs). For a given ODE $\dot{x} = f(t, x, \theta) = \sigma(Ax + b)$, where $\sigma$ is the activation function, e.g., the Sigmoid or ReLU function, the system is stable if $\mathrm{Re}(\lambda_i(A)) \le 0$ for all $i$, where $\mathrm{Re}$ denotes the real part and $\lambda_i$ denotes the $i$-th eigenvalue.

One can see, e.g., Hirsch et al. (2004) for the proof of this theorem. Theorem 1 provides a set of conditions for stable ODEs. However, a deep residual network is only a forward Euler discretization of a continuous ODE. To ensure numerical stability, we require $|1 - \lambda_i(A)h| \le 1$ (Ascher & Petzold, 1998), where the step size $h = 1$ in residual networks. Together with the identity mapping in residual networks, we obtain the stability conditions for the discrete dynamics.

Theorem 2 (Stable Discrete Networks). For a discrete neural network, i.e., the discrete dynamics $\{f_t(x_t, \theta_t) : t = 0, \dots, T\}$ with $f_t(x_t, \theta_t) = \sigma(\theta_t x_t)$ (we omit the bias term for simplicity), the network is stable if $\rho(\theta_t) \le 1$, where $\rho(A) = \max_i |\lambda_i(A)|$ is the spectral radius.

If these conditions are added to the unconstrained optimization problem of training, we can greatly improve the adversarial robustness of neural nets. The methods will be discussed in the following section.
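To make the stability condition of Theorem 2 concrete, the following sketch (ours, for illustration only) checks $\rho(\theta_t) \le 1$ for a square weight matrix using NumPy; for non-square or convolutional parameters one would bound the largest singular value instead.

import numpy as np

def spectral_radius(weight):
    # Spectral radius: maximum absolute value of the eigenvalues (square matrices).
    return np.max(np.abs(np.linalg.eigvals(weight)))

def satisfies_theorem_2(weight):
    # Stability condition of Theorem 2: rho(theta_t) <= 1.
    return spectral_radius(weight) <= 1.0

# Example: check a randomly initialised square layer.
w = np.random.randn(64, 64) / np.sqrt(64)
print(spectral_radius(w), satisfies_theorem_2(w))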
4 TRAINING ROBUST NEURAL NETS

4.1 PMP AND MSA
For deterministic systems, Pontryagin's Maximum Principle (PMP) (Boltyanskii et al., 1960) provides a set of necessary conditions for an optimal control of the system. Various algorithms have been proposed to solve the deterministic optimal control problem based on the PMP. Among them, the Method of Successive Approximations (MSA) (Krylov & Chernous'ko, 1963) is one of the simplest. In the field of deep learning, previous work has utilized MSA to train neural networks (Li et al., 2017; Li & Hao, 2018).

Formally, consider the optimal control problem for training neural nets in Section 3. For the dynamics $\{f_t(x_t, \theta_t) : t = 0, \dots, T\}$, assume $\theta^* = \{\theta^*_0, \dots, \theta^*_{T-1}\}$ is a solution to the optimal control problem. Also, we define the Hamiltonian function $H : \mathbb{R}^{d_t} \times \mathbb{R}^{d_{t+1}} \times \Theta_t \times [T] \to \mathbb{R}$ by $H(x, p, \theta, t) = p \cdot f_t(x, \theta) - L(\theta)$, where the dot denotes the inner product. We have the following necessary conditions for $\theta^*$.

Theorem 3 (Pontryagin's Maximum Principle for Discrete Systems). Assume $f_t$ and $J$ are sufficiently smooth. There exist co-states $p^* = \{p^*_0, \dots, p^*_T\}$ such that the following conditions hold:
$$x^*_{t+1} = \nabla_p H(x^*_t, p^*_{t+1}, \theta^*_t, t), \quad x^*_0 = x_0,$$
$$p^*_t = \nabla_x H(x^*_t, p^*_{t+1}, \theta^*_t, t), \quad p^*_T = -\nabla_x J(x^*_T),$$
$$\theta^*_t = \arg\max_{\theta} H(x^*_t, p^*_{t+1}, \theta, t).$$
For simplicity of notation, here we assume the batch size is 1. One can easily extend the theorem to the minibatch training case by summing over the batch. The theorem can be proved via the KKT conditions (Boyd & Vandenberghe, 2004), where the co-states can be seen as the Lagrangian dual variables.

Considering the conditions in the PMP, one can see that the $x$ equations are exactly the forward propagation of a neural net, and the $p$ equations resemble the backward propagation process. The third condition states that the model parameters must maximize the Hamiltonian function. This motivates us to iteratively compute forward and backward propagation and solve the Hamiltonian maximization to find the optimal control, which is exactly the Method of Successive Approximations (Algorithm 1). In practice, we usually add regularizer terms that penalize large changes in the maximization step to prevent drastic steps that cause divergence. For the connection between MSA and back-propagation-based gradient descent algorithms, see the appendix of Li & Hao (2018).

Algorithm 1 The Method of Successive Approximations
Initialize $\theta^0 = \{\theta^0_0, \dots, \theta^0_{T-1}\}$, set $k = 0$;
repeat
  Compute the states (forward propagation): $x_{t+1} = \nabla_p H(x_t, p_{t+1}, \theta^k_t, t)$, $t = 0, \dots, T-1$;
  Compute the co-states (backward propagation): $p_t = \nabla_x H(x_t, p_{t+1}, \theta^k_t, t)$, $t = T-1, \dots, 0$, with initial $p_T = -\nabla_x J(x_T)$;
  For each $t = 0, \dots, T-1$, solve the maximization $\theta^{k+1}_t = \arg\max_{\theta} H(x_t, p_{t+1}, \theta, t)$;
  Set $k = k + 1$;
until converged;

The advantages of training by MSA compared with gradient descent algorithms have been discussed in Li et al. (2017), among which the most significant feature is that the optimization steps on different layers are decoupled. Concretely, after computing the states $x$ and co-states $p$, the optimization step on layer $t$ only searches for the parameters $\theta_t$. This not only suggests that the optimization process can be accelerated by parallelization, but also allows us to utilize the features of the problem. The parameter space is greatly reduced compared with the original intractable optimization problem, and hence the optimization is much easier. This allows us to add constraints that ensure the robustness of the model.

4.2 ROBUST CONSTRAINTS
Consider a layer of the form $f_t(x) = \theta_t x$, where we leave the activation as an individual layer with no parameters for simplicity. We can derive the following optimization problem for the Hamiltonian maximization:
$$\max_{\theta} \; p_{t+1} \cdot (\theta_t x_t) - \alpha \|\theta_t\|_2^2 - \beta \|\theta_t - \theta'_t\|_2^2, \quad \text{subject to } \rho(\theta_t) \le 1,$$
where $\alpha \|\theta_t\|_2^2$ is the L2-norm regularizer (weight decay), and $\theta'_t$ is the initial parameter (i.e., $\theta^k_t$ in the algorithm). The last term keeps the training process from taking drastic steps that cause divergence. The constraint, as illustrated in Section 3, is the stability condition for discrete systems.
Directly adding such constraints to gradient-descent-based algorithms makes the optimization quite difficult, but the decoupled optimization in MSA allows us to do so. With regard to the constraint on the parameter's spectral radius, a simple method is to use special matrix forms for the parameters, e.g., anti-symmetric matrices. For continuous deep models, the only constraint is Theorem 1, i.e., $\mathrm{Re}(\lambda_i(\theta_t)) \le 0$. Anti-symmetric matrices have only imaginary eigenvalues, and hence we can replace $\theta_t$ with $\theta_t - \theta_t^T - \gamma I$, where $\gamma$ is a small positive constant. For general forms of parameters, one can prove the following transformation.

Theorem 4. A sufficient condition for $\rho(A) \le 1$ is
$$\begin{bmatrix} I & A \\ A^T & I \end{bmatrix} \succeq 0,$$
where $A \succeq B$ denotes that $A - B$ is positive semi-definite.

Proof. Recall that $\rho(A) \le \|A\|_2 = \sqrt{\lambda_{\max}(A^T A)}$; we have
$$\|A\|_2 \le 1 \;\Leftrightarrow\; A^T A \preceq I \;\Leftrightarrow\; \begin{bmatrix} I & A \\ A^T & I \end{bmatrix} \succeq 0.$$

Hence we can replace $\rho(\theta_t) \le 1$ with a positive semi-definite condition, and we turn the Hamiltonian maximization into a new optimization problem, where the target function is quadratic and the constraint is a semi-definite condition. This can be reduced to a semi-definite programming (SDP) problem (Vandenberghe & Boyd, 1998), which is a special case of convex optimization and thus can be solved efficiently by, e.g., interior point methods (Helmberg et al., 1970) in polynomial time.

Here we summarize our method. For a given neural network, we use MSA to train the model, i.e., we iteratively compute the states (forward propagation) and co-states (backward propagation) and solve the optimization for each layer. Instead of directly maximizing the Hamiltonian, we add a positive semi-definite constraint to the optimization problem, which leads to a stable control of the dynamics.

5 EXPERIMENTS

5.1 EXPERIMENT SETUP
To evaluate the effectiveness of our method, we conduct experiments on CIFAR-10. We trained the network on clean data, with adversarial training (PGD-10), and with robust training (our method), respectively. We used FGSM (Goodfellow et al., 2015), PGD-10 (Madry et al., 2017) and C&W (Carlini & Wagner, 2017a) to attack the network. Due to the limitations of TensorFlow, we used a simple interior point method with gradient descent to solve the SDP. The network model was an 18-layer residual network (He et al., 2015) with 8 residual blocks. We set the perturbation size to $\varepsilon = 0.1$ for both FGSM and PGD. For C&W, we used the L0 metric. We trained the model for 150 epochs with a batch size of 200. The learning rate was set to $10^{-2}$ initially and was divided by 5 at epochs 30, 60 and 100. The regularizer term constant was set to $10^{-3}$.

5.2 RESULTS
The results can be seen in Table 1. The accuracy of the robust models on clean data is lower than the vanilla model's, since robust training and generalization are more difficult and require more data (Schmidt et al., 2018). Our method improves the model's adversarial robustness compared with the vanilla model. Figure 1 displays the eigenvalues of the last fully-connected layer's parameter. The complex norms of the eigenvalues (i.e., the spectral radius) of the model trained by our method are effectively bounded below 1, which satisfies the robust constraint on parameters in Section 4.2, while the eigenvalues under natural training are randomly distributed in the complex plane.

Our method is not as effective as the traditional adversarial training method. However, it mainly has the following advantages: (a) The training process does not require large numbers of gradient propagations, which consume much time in adversarial training.
In our experiment, adversarial training spends about 10 times as much GPU time as our method. (b) The decoupled training process allows us to set different hyperparameters and training methods for different layers, which gives us more room to maneuver in large-scale training. We can further control the behavior of different layers in adversarial settings. (c) Lyapunov stability provides a framework for analyzing the adversarial robustness of deep models, which may lead to a theoretical analysis of adversarial samples in future work.

6 DISCUSSION AND FUTURE WORK
Motivated by the dynamical system view of neural networks, this work bridges the adversarial robustness of deep neural models with the Lyapunov stability of dynamical systems, and we also propose a method that uses a stable optimal control algorithm to train neural networks, improving the adversarial robustness of deep neural models. Though the results did not surpass SOTA defense methods, the stable control view of training neural nets points out another direction towards adversarially robust models. For future work, on the one hand, mathematical analysis of the Lyapunov stability of neural models may be studied to provide a theoretical understanding of adversarial robustness. On the other hand, popular deep learning platforms, e.g., TensorFlow and PyTorch, do not provide frameworks for optimal control. We may obtain better results if specialized SDP algorithms are applied to solve the optimization problem.
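As a rough illustration of the per-layer constrained Hamiltonian-maximization step of Section 4.2, the sketch below performs a few gradient-ascent steps on $H$ and then enforces the constraint by a simple spectral-norm projection (a sufficient condition by Theorem 4) instead of solving the SDP exactly; all function names, shapes and hyper-parameters are illustrative and not taken from the paper's implementation.

import torch

def constrained_layer_update(theta, x_t, p_next, alpha=1e-3, beta=1e-3, lr=1e-2, steps=10):
    # Approximately maximize
    #   H = p_{t+1} . (theta x_t) - alpha * ||theta||_2^2 - beta * ||theta - theta'||_2^2
    # by gradient ascent, then enforce ||theta||_2 <= 1, which implies rho(theta) <= 1.
    theta_ref = theta.detach().clone()
    theta = theta.detach().clone().requires_grad_(True)
    for _ in range(steps):
        hamiltonian = (p_next * (x_t @ theta.T)).sum() \
                      - alpha * theta.pow(2).sum() \
                      - beta * (theta - theta_ref).pow(2).sum()
        grad, = torch.autograd.grad(hamiltonian, theta)
        with torch.no_grad():
            theta += lr * grad
    with torch.no_grad():
        sigma_max = torch.linalg.matrix_norm(theta, ord=2)  # largest singular value
        if sigma_max > 1.0:
            theta /= sigma_max  # project onto the unit spectral-norm ball
    return theta.detach()

In the method proper, this final projection step would be replaced by imposing the positive semi-definite constraint of Theorem 4 and solving the resulting SDP.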
1. What is the main contribution of the paper regarding training neural networks? 2. What are the strengths and weaknesses of the proposed approach in terms of its formulation and implementation? 3. How does the reviewer assess the clarity and quality of the paper's content, particularly in the introduction and related work sections? 4. What are the concerns regarding the experimental results and their significance in demonstrating the effectiveness of the proposed method? 5. How does the reviewer evaluate the validity of the strong claims made in the paper, especially given the current state of research in the field?
Review
Review
Summary: The goal of this paper is to train neural networks (NNs) in a way that is robust to adversarial attacks. The authors formulate training a NN as finding an optimal controller for a discrete dynamical system. This formulation allows them to use an optimal control algorithm, called the method of successive approximations (MSA), to train a NN. The authors then show how constraints can be added to this optimization problem in order to make the trained NN more robust. They show that the resulting constrained optimization problem can be formulated as a semi-definite program and provide some experimental results.

Comments:
- Although the problem studied in the paper is important and the approach is interesting, it seems the paper has been written in a rush and, in my opinion, is not ready for publication. The writing is not good. The introduction and related work sections are incomplete and not very informative. It is not clear what has been done before and what the contribution of this paper is. The main technique/algorithm of the paper has not been explained clearly enough that someone can easily understand and implement it. The experimental results are not convincing.
- There are strong claims in the paper such as "experiments show that our method effectively improves deep model's adversarial robustness"; this is too strong given the quality of the experiments in the paper. Or "the constrained optimization problem can be formulated as a semi-definite programming (SDP) problem and hence can be solved efficiently"; to the best of my knowledge, SDP solvers are limited to small problems and cannot solve large problems efficiently.
- The area of making NNs robust to attacks is very active and there are many attacks and solutions out there, which requires more comprehensive empirical studies of any new method. I do not see this in the paper.
- Overall, I think this paper requires a major revision in order to be evaluated better and to be more useful for the community.
ICLR
Title Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability Abstract Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin’s maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves deep model’s adversarial robustness. N/A Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin’s maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves deep model’s adversarial robustness. 1 INTRODUCTION Deep neural networks achieve state-of-the-art performances on a variety of tasks (LeCun et al., 2015). However, neural nets are known to be vulnerable to adversarial examples. Imperceptibly perturbed inputs can induce erroneous outputs in neural nets (Szegedy et al., 2013). In image classification problems of computer vision, previous work has proposed various methods to attack deep models and induce low accuracy (Goodfellow et al., 2015; Madry et al., 2017; Papernot et al., 2016a; Carlini & Wagner, 2017a). Whereas multiple defenses against adversarial attacks are developed, they don’t ensure safety faced with strong attacking methods. There are also theories that explain the existence of adversarial examples (Ilyas et al., 2019; Shamir et al., 2019), but they often fail to fully explain the features and behaviors of this phenomenon. This makes the study of adversarial attacks important in that it is a threat to real-life machine learning systems (Kurakin et al., 2016). In this paper, we propose a dynamical system view on the adversarial robustness of the models, as well as new method that significantly defense adversarial attacks. Recent works have shown the connection between deep neural networks and dynamical systems (E, 2017; Li et al., 2017; Haber & Ruthotto, 2017; Lu et al., 2017). If we regard the neural net as a discretization of an ordinary differential equation (ODE), then training neural nets becomes finding an optimal control of the corresponding discrete dynamical system. 
Traditionally, we often treat training neural networks as an unconstrained non-convex optimization problem min θ∈Θ J(θ) +R(θ), where θ denotes the parameters of the model, J denotes the loss function and R denotes the regularizer term, and we solve the problem with (stochastic) gradient-descent based methods (Bottou, 2010; Ruder, 2016). In the training process, we feed the network with a batch of training data, and compute the gradient with forward and backward propagation (E. Rumelhart et al., 1986). The propagation process resembles solving optimal control problems that tune the parameters to make the output be close to target states. This viewpoint motivates us to bridge adversarial robustness with Lyapunov stability of a dynamical system, and to train robust networks with algorithms that find stable optimal control. We will formulate the discussion in later sections. 2 RELATED WORK 2.1 ADVERSARIAL DEFENSE Many defense methods have been proposed to improve the models’ adversarial robustness. The defenses mainly fall into three types: adversarial training (Szegedy et al., 2013; Zhang et al., 2019), modifying the networks (Gu & Rigazio, 2015; Lyu et al., 2015; Papernot et al., 2016b; Nayebi & Ganguli, 2017; Ross & Doshi-Velez, 2017), and adding external models (Lee et al., 2017; Akhtar et al., 2017; Gebhart & Schrater, 2017; Xu et al., 2018; Sun et al., 2019). Although various defense methods have been developed, a defended deep model is often successfully attacked by newly developed attacks or specific counter-counter measures (Carlini & Wagner, 2017b). Therefore, it can be hoped that defenses against general attacks will be devised to make deep learning models (adversarially) robust to real-life threats. 2.2 NEURAL ODES AND OPTIMAL CONTROL Recent works have bridged deep neural networks with ODEs and dynamical systems. On the one hand, deep residual networks (He et al., 2015) can be illustrated as forward Euler scheme approximating an ODE (E, 2017), which motivates us to design effective network structures (Lu et al., 2017). On the other hand, regarding the network as a dynamical system allows us to set up an optimal control viewpoint of neural nets. Pontryagin’s Maximum Principle (Boltyanskii et al., 1960) has been applied to train neural nets (Li et al., 2017; Li & Hao, 2018). 3 ADVERSARIAL ROBUSTNESS AND LYAPUNOV STABILITY 3.1 DYNAMICS OF DEEP NEURAL NETS Given a T -layer neural net, we let the dynamical system {ft(xt, θt) : t = 0, . . . , T} represents the network, where xt is the input of t-th layer, θt is the parameter, and ft : Rdt × Θt → Rdt+1 denotes the t-th layer’s transformation, which is usually a non-linear function σ(θtxt + bt) for fully-connected layers, convolution layers and batch normalization layers, etc. Therefore, training the neural net can be regarded as controlling the parameters to let the dynamics fit the training data. Specifically, the training optimization problem can be formulated as a typical optimal control problem as follows: min θ B∑ i=1 J(xiT ) + T∑ i=0 L(θi), subj. to xit+1 = ft(x i t, θt), t = 0, . . . , T − 1, where we use xi to denote the i-th input in the batch and B denote the batch size. J and L are the loss function and the regularizer, respectively. Specially, if the model is a deep residual network with structure xt+1 = xt+ft(xt, θt), we can regard the problem as the forward Euler discretization of the following continuous optimal control problem: min θ J(x(T )) + ∫ T 0 L(θ(t)) dt, subj. 
to ẋ = f(t, x(t), θ(t)), x(0) = x, 0 ≤ t ≤ T, where x(t) is a continuous trajectory from the input to the output logits. 3.2 LYAPUNOV STABILITY Adversarial examples are usually clean images added by a small calculated perturbation η. The model predicts correct labels fed with clean inputs x0, while the output is completely different when it is fed with perturbed input x0 + η. The dynamical system view of neural nets motivate us to characterize this sensitivity with Lyapunov stability of a system (Hirsch et al., 2004). Definition 1 (Lyapunov Stability). For a given dynamical system ẋ = f(x), x(0) = x0, xe is an equilibrium, then • The system is Lyapunov stable, if, ∀ > 0, ∃ δ > 0 such that, if ‖x(0)− xe‖ < δ, then for every t ≥ 0, ‖x(t)− xe‖ < . • The system is asymptotically stable if it is Lyapunov stable and ∃ δ > 0 such that if ‖x(0)− xe‖ < δ, then limt→∞ ‖x(t)− xe‖ = 0. • The system is exponentially stable if it is asymptotically stable and ∃α > 0, β > 0, δ > 0 such that if ‖x(0)− xe‖ < δ, then ‖x(t)− xe‖ ≤ α‖x(0)− xe‖e−βt, for all t ≥ 0. The definitions can be easily extended to discrete-time systems. Intuitively, the Lyapunov stability states that for any small perturbation η, the trajectory is still “close enough” to the original one. If we regard a neural net as a dynamical system, and ensure the network is Lyapunov stable, then the model is robust to all (adversarial) perturbations. 3.3 ADVERSARIALLY ROBUST NEURAL NETS Due to the connection between numerical ODEs and residual networks, we first consider robustness (i.e. Lyapunov stability) of continuous ODEs. Theorem 1 (Stable ODEs). For a given ODE ẋ = f(t, x, θ) = σ(Ax+b), where σ is the activation function, e.g., Sigmoid function or ReLU function, it is stable if Re(λi(A)) ≤ 0, ∀i, where Re denotes the real part, and λi denotes the i-th eigenvalue. One can see, e.g. Hirsch et al. (2004), for the proof of this theorem. Theorem 1 provides a set of conditions for stable ODEs. However, deep residual network is only a forward Euler discretization scheme of continuous ODE. To ensure numerical stability, we require |1− λi(A)h| ≤ 1 (Ascher & Petzold, 1998), where the step size h = 1 in residual networks. Added by the identity mapping in residual networks, we can get the stable conditions for discrete dynamics. Theorem 2 (Stable Discrete Networks). For a discrete neural network, i.e., discrete dynamics {ft(xt, θt) : t = 0, . . . , T}, where ft(xt, θt) = σ(θtxt) (we omit the bias term for simplicity), the network is stable if the ρ(θt) ≤ 1, where ρ(A) = maxi(|λi(A)|) is the spectral radius. If the conditions are added to the unconstrained optimization problem of training, we can greatly improve the adversarial robustness of neural nets. The methods will be discussed in the following section. 4 TRAINING ROBUST NEURAL NETS 4.1 PMP AND MSA For deterministic systems, the Pontryagin’s Maximum Principle (PMP) (Boltyanskii et al., 1960) provides a set of necessary conditions for optimal control of the system. Various algorithms have been proposed to solve the deterministic optimal control problem based on PMP. Among them, the Method of Successive Approximations (MSA) (Krylov & Chernous’ko, 1963) is one of the simplest algorithms. In the field of deep learning, previous work has utilized MSA to train neural networks (Li et al., 2017; Li & Hao, 2018). Formally, consider the optimal control problem for training neural nets in section 3. For dynamics {ft(xt, θt) : t = 0, . . . , T}, assume θ∗ = { θ∗0 , . . . 
, θ ∗ T−1 } is a solution to the optimal control problem. Also, we define the Hamiltonian function H : Rdt × Rdt+1 × Θt × [T ] → R by H(x, p, θ, t) = p · ft(x, θ)−L(θt), where the dot denotes the inner product. We have the following necessary conditions for θ∗. Theorem 3 (Pontryagin’s Maximum Principle for Discrete Systems). Assume ft and J are sufficiently smooth. There exists co-states p∗ = {p∗0, . . . , p∗T } s.t. the following conditions hold: x∗t+1 = ∇pH(x∗t , p∗t+1, θ∗t , t), x∗0 = x0, p∗t = ∇xH(x∗t , p∗t+1, θ∗t , t), p∗T = −∇xJ(x∗T ), θ∗t = arg max θ H(x∗t , p ∗ t+1, θ, t). For simplicity of notations, here we assume the batch size is 1. One can easily extend the theorem to minibatch training case by summing over the batch. The theorem can be proved by KKT conditions (Boyd & Vandenberghe, 2004), where the co-states can be seen as the Lagrangian dual variables. Consider the conditions in PMP, one can find the x equations are exactly the forward propagation of a neural net, and the p equations resemble the backward propagation process. The third condition states that the model parameters must maximize the Hamiltonian function. This motivates us to iteratively compute forward and backward propagation, and solve the Hamiltonian maximization to find the optimal control, which is exactly the Method of Successive Approximations (Algorithm 1). In practice, we usually add regularizer terms that penalize great changes in the maximization step to prevent drastic steps that cause divergence. For the connection between MSA and back-propagationbased gradient descent algorithms, see the appendix of Li & Hao (2018). Algorithm 1 The Method of Successive Approximations Initialize θ0 = { θ00, . . . , θ 0 T−1 } , set k = 0; repeat Compute the states (forward propagation): xt+1 = ∇pH(xt, pt+1, θkt , t), t = 0, . . . , T − 1; Compute the co-states (backward propagation): pt = ∇xH(xt, pt+1, θkt , t), t = T − 1, . . . , 0, with initial pT = −∇xJ(xT ); For each t = 0, . . . , T − 1, solve the maximization θk+1t = arg maxθH(xt, pt+1, θ, t); Set k = k + 1; until Converge; The advantages of training by MSA compared with gradient descent algorithms has been discussed in (Li et al., 2017), among which the most significant feature is that the optimization steps on different layers are decoupled. Concretely, after computing the states x and co-states p, the optimization step on layer t is only searching for parameters θt. This not only suggests that the optimization process can be accelerated by parallelization, but also allows us to utilize the features of the problem. The parameter space is greatly reduced compared with the original intractable optimization problem, and hence the optimization is much more easier. This allows us to add constraints that ensure robustness of the model. 4.2 ROBUST CONSTRAINTS Consider a layer in the form of ft(x) = θtx, where we leave the activation as an individual layer with no parameters for simplicity, we can derive the following optimization problem for Hamiltonian maximization: max θ pt+1 · (θtxt)− α‖θt‖22 − β‖θt − θ′t‖22, subj. to ρ(θt) ≤ 1, where α‖θt‖22 is the L2 norm regularizer (weight decay), and θ′t is the initial parameter (i.e., θkt in the algorithm). The last term keeps the training process from drastic steps that cause divergence. The constraint, as illustrated in section 3, is the stable condition for discrete systems. 
It makes the optimization quite difficult if we directly add the constraints in gradient descent based algorithms, but the decoupled optimization in MSA allows us to do so. With regard to the constraint of parameter’s spectral radius, a simple method is to apply special forms of matrices for parameters, e.g. anti-symmetric matrices. For continuous deep models, the only constraint is Theorem 1, i.e., Re(λi(θt)) ≤ 0. Anti-symmetric matrices have only imaginary eigenvalues, and hence we can replace θt with θt − θTt − γI , where γ is a small positive constant. For general forms of parameters, one can prove the following transformation. Theorem 4. One sufficient condition of ρ(A) ≤ 1 is[ I A AT I ] 0, where A B denotes A−B is positive semi-definite. Proof. Recall that ρ(A) ≤ ‖A‖2 = √ λmax(ATA), we have ‖A‖2 ≤ 1⇔ ATA I ⇔ [ I A AT I ] 0. Hence we can replace ρ(θt) ≤ 1 with a positive semi-definite condition, and we turn the Hamiltonian maximization into a new optimization problem, where the target function is quadratic and the constraint is a semi-definite condition. This can be reduced to a semi-definite programming (SDP) problem (Vandenberghe & Boyd, 1998), which is a special case of convex optimization, and thus can be solved efficiently by, e.g., interior point methods (Helmberg et al., 1970) in polynomial time. Here we summarize our method. For a given neural network, we use MSA to train the model, i.e., iteratively computing the states (forward propagation) and co-states (backward propagation), and solving the optimization for each layer. Instead of directly maximizing the Hamiltonian, we add a positive semi-definite constraint to the optimization problem, which leads to a stable control of the dynamics. 5 EXPERIMENTS 5.1 EXPERIMENT SETUP To evaluate the effectiveness of our method, we conduct experiments on CIFAR10. We trained the network on clean data, with adversarial training (PGD-10) and with robust training (our method), respectively. We used FGSM (Goodfellow et al., 2015), PGD-10 (Madry et al., 2017) and C&W (Carlini & Wagner, 2017a) to attack the network. Due to the limitation of TensorFlow, we used a simple interior point method with gradient descent to solve SDP. The network model was an 18-layer residual network (He et al., 2015), with 8 residual blocks. We set the perturbation size as = 0.1 for both FGSM and PGD. For C&W, we used the L0 metric. We trained the model for 150 epochs with a batch size of 200. The learning rate was set to be 10−2 initially, and was divided by 5 at epoch 30, 60 and 100. The regularizer term constant was set to be 10−3. 5.2 RESULTS The results can be seen in Table 1. The accuracy of robust models on clean data is lower than vanilla model’s in that robust training and generalization is more difficult and requires more data (Schmidt et al., 2018). Our method improves model’s adversarial robustness, compared with the vanilla model. Figure 1 displays the eigenvalues of the last fully-connected layer’s parameter. The complex norm of eigenvalues (spectral radius) of the model trained by our method are effectively bounded below 1, which satisfies the robust constraint on parameters in section 4.2, while eigenvalues of natural training are randomly distributed in the complex plane. Our method is not as effective as traditional adversarial training method. However, it mainly has the following advantages: (a) The training process doesn’t require large numbers of gradient propagation, which consumes much time in adversarial training. 
In our experiment, adversarial training spends about 10 times GPU time as much as our method. (b) The decoupled training process allows us to set different hyperparameters and training methods for different layers, which is more maneuverable for large scale training. We can further control the behavior of different layers in adversarial settings. (c) Lyapunov stability provides a framework for analyzing adversarial robustness of deep models, which may lead to theoretical analysis of adversarial samples in future work. 6 DISCUSSION AND FUTURE WORK Motivated by the dynamical system view of neural networks, this work bridges adversarial robustness of deep neural models with Lyapunov stability of dynamical systems, and we also propose a method that uses a stable optimal control algorithm to train neural networks to improve the adversarial robustness of deep neural models. Though the result didn’t surpass STOA defense methods, the stable control view of training neural nets points out another direction towards adversarially robust models. For future work, on the one hand, mathematical analysis on Lyapunov stability of neural models may be studied to provide theoretical understanding of adversarial robustness. On the other hand, popular platforms for deep learning, e.g., TensorFlow, PyTorch, didn’t provide frameworks for optimal control. We will obtain better results if specific algorithms for SDP are applied to solve the optimization problem.
1. What is the main contribution of the paper in the field of robust training of neural networks? 2. What are the strengths of the proposed approach, particularly in its theoretical foundation? 3. Are there any concerns or limitations regarding the empirical evaluation of the method? 4. How does the reviewer assess the novelty and significance of the paper's contributions? 5. Are there any suggestions for improving the paper, such as exploring further empirical evaluations or refining the theoretical analysis?
Review
Review
The paper contributes to the robust training of neural networks as follows: 1) The paper uses the theoretical view of a neural network as a discretized ODE to develop a robust control theory aimed at training the network while enforcing robustness; 2) Such an objective is achieved by introducing Lyapunov stability and is practically implemented through the method of successive approximations; 3) Empirical evaluation demonstrates that the newly introduced method performs as well as the SOTA in terms of defensive training. The paper is well written and proposes a well-motivated and theoretically original strategy to robustly train neural networks against adversarial examples. The strength of the paper is definitely in its theoretical section; it would be really great to see an empirical improvement on the SOTA. However, I do not believe the paper should be penalized for only matching other algorithms, as it relies on a tractable and principled theoretical analysis.
ICLR
Title Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability Abstract Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin’s maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves deep model’s adversarial robustness. N/A Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin’s maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves deep model’s adversarial robustness. 1 INTRODUCTION Deep neural networks achieve state-of-the-art performances on a variety of tasks (LeCun et al., 2015). However, neural nets are known to be vulnerable to adversarial examples. Imperceptibly perturbed inputs can induce erroneous outputs in neural nets (Szegedy et al., 2013). In image classification problems of computer vision, previous work has proposed various methods to attack deep models and induce low accuracy (Goodfellow et al., 2015; Madry et al., 2017; Papernot et al., 2016a; Carlini & Wagner, 2017a). Whereas multiple defenses against adversarial attacks are developed, they don’t ensure safety faced with strong attacking methods. There are also theories that explain the existence of adversarial examples (Ilyas et al., 2019; Shamir et al., 2019), but they often fail to fully explain the features and behaviors of this phenomenon. This makes the study of adversarial attacks important in that it is a threat to real-life machine learning systems (Kurakin et al., 2016). In this paper, we propose a dynamical system view on the adversarial robustness of the models, as well as new method that significantly defense adversarial attacks. Recent works have shown the connection between deep neural networks and dynamical systems (E, 2017; Li et al., 2017; Haber & Ruthotto, 2017; Lu et al., 2017). If we regard the neural net as a discretization of an ordinary differential equation (ODE), then training neural nets becomes finding an optimal control of the corresponding discrete dynamical system. 
Traditionally, we often treat training neural networks as an unconstrained non-convex optimization problem min θ∈Θ J(θ) +R(θ), where θ denotes the parameters of the model, J denotes the loss function and R denotes the regularizer term, and we solve the problem with (stochastic) gradient-descent based methods (Bottou, 2010; Ruder, 2016). In the training process, we feed the network with a batch of training data, and compute the gradient with forward and backward propagation (E. Rumelhart et al., 1986). The propagation process resembles solving optimal control problems that tune the parameters to make the output be close to target states. This viewpoint motivates us to bridge adversarial robustness with Lyapunov stability of a dynamical system, and to train robust networks with algorithms that find stable optimal control. We will formulate the discussion in later sections. 2 RELATED WORK 2.1 ADVERSARIAL DEFENSE Many defense methods have been proposed to improve the models’ adversarial robustness. The defenses mainly fall into three types: adversarial training (Szegedy et al., 2013; Zhang et al., 2019), modifying the networks (Gu & Rigazio, 2015; Lyu et al., 2015; Papernot et al., 2016b; Nayebi & Ganguli, 2017; Ross & Doshi-Velez, 2017), and adding external models (Lee et al., 2017; Akhtar et al., 2017; Gebhart & Schrater, 2017; Xu et al., 2018; Sun et al., 2019). Although various defense methods have been developed, a defended deep model is often successfully attacked by newly developed attacks or specific counter-counter measures (Carlini & Wagner, 2017b). Therefore, it can be hoped that defenses against general attacks will be devised to make deep learning models (adversarially) robust to real-life threats. 2.2 NEURAL ODES AND OPTIMAL CONTROL Recent works have bridged deep neural networks with ODEs and dynamical systems. On the one hand, deep residual networks (He et al., 2015) can be illustrated as forward Euler scheme approximating an ODE (E, 2017), which motivates us to design effective network structures (Lu et al., 2017). On the other hand, regarding the network as a dynamical system allows us to set up an optimal control viewpoint of neural nets. Pontryagin’s Maximum Principle (Boltyanskii et al., 1960) has been applied to train neural nets (Li et al., 2017; Li & Hao, 2018). 3 ADVERSARIAL ROBUSTNESS AND LYAPUNOV STABILITY 3.1 DYNAMICS OF DEEP NEURAL NETS Given a T -layer neural net, we let the dynamical system {ft(xt, θt) : t = 0, . . . , T} represents the network, where xt is the input of t-th layer, θt is the parameter, and ft : Rdt × Θt → Rdt+1 denotes the t-th layer’s transformation, which is usually a non-linear function σ(θtxt + bt) for fully-connected layers, convolution layers and batch normalization layers, etc. Therefore, training the neural net can be regarded as controlling the parameters to let the dynamics fit the training data. Specifically, the training optimization problem can be formulated as a typical optimal control problem as follows: min θ B∑ i=1 J(xiT ) + T∑ i=0 L(θi), subj. to xit+1 = ft(x i t, θt), t = 0, . . . , T − 1, where we use xi to denote the i-th input in the batch and B denote the batch size. J and L are the loss function and the regularizer, respectively. Specially, if the model is a deep residual network with structure xt+1 = xt+ft(xt, θt), we can regard the problem as the forward Euler discretization of the following continuous optimal control problem: min θ J(x(T )) + ∫ T 0 L(θ(t)) dt, subj. 
to ẋ = f(t, x(t), θ(t)), x(0) = x, 0 ≤ t ≤ T, where x(t) is a continuous trajectory from the input to the output logits. 3.2 LYAPUNOV STABILITY Adversarial examples are usually clean images added by a small calculated perturbation η. The model predicts correct labels fed with clean inputs x0, while the output is completely different when it is fed with perturbed input x0 + η. The dynamical system view of neural nets motivate us to characterize this sensitivity with Lyapunov stability of a system (Hirsch et al., 2004). Definition 1 (Lyapunov Stability). For a given dynamical system ẋ = f(x), x(0) = x0, xe is an equilibrium, then • The system is Lyapunov stable, if, ∀ > 0, ∃ δ > 0 such that, if ‖x(0)− xe‖ < δ, then for every t ≥ 0, ‖x(t)− xe‖ < . • The system is asymptotically stable if it is Lyapunov stable and ∃ δ > 0 such that if ‖x(0)− xe‖ < δ, then limt→∞ ‖x(t)− xe‖ = 0. • The system is exponentially stable if it is asymptotically stable and ∃α > 0, β > 0, δ > 0 such that if ‖x(0)− xe‖ < δ, then ‖x(t)− xe‖ ≤ α‖x(0)− xe‖e−βt, for all t ≥ 0. The definitions can be easily extended to discrete-time systems. Intuitively, the Lyapunov stability states that for any small perturbation η, the trajectory is still “close enough” to the original one. If we regard a neural net as a dynamical system, and ensure the network is Lyapunov stable, then the model is robust to all (adversarial) perturbations. 3.3 ADVERSARIALLY ROBUST NEURAL NETS Due to the connection between numerical ODEs and residual networks, we first consider robustness (i.e. Lyapunov stability) of continuous ODEs. Theorem 1 (Stable ODEs). For a given ODE ẋ = f(t, x, θ) = σ(Ax+b), where σ is the activation function, e.g., Sigmoid function or ReLU function, it is stable if Re(λi(A)) ≤ 0, ∀i, where Re denotes the real part, and λi denotes the i-th eigenvalue. One can see, e.g. Hirsch et al. (2004), for the proof of this theorem. Theorem 1 provides a set of conditions for stable ODEs. However, deep residual network is only a forward Euler discretization scheme of continuous ODE. To ensure numerical stability, we require |1− λi(A)h| ≤ 1 (Ascher & Petzold, 1998), where the step size h = 1 in residual networks. Added by the identity mapping in residual networks, we can get the stable conditions for discrete dynamics. Theorem 2 (Stable Discrete Networks). For a discrete neural network, i.e., discrete dynamics {ft(xt, θt) : t = 0, . . . , T}, where ft(xt, θt) = σ(θtxt) (we omit the bias term for simplicity), the network is stable if the ρ(θt) ≤ 1, where ρ(A) = maxi(|λi(A)|) is the spectral radius. If the conditions are added to the unconstrained optimization problem of training, we can greatly improve the adversarial robustness of neural nets. The methods will be discussed in the following section. 4 TRAINING ROBUST NEURAL NETS 4.1 PMP AND MSA For deterministic systems, the Pontryagin’s Maximum Principle (PMP) (Boltyanskii et al., 1960) provides a set of necessary conditions for optimal control of the system. Various algorithms have been proposed to solve the deterministic optimal control problem based on PMP. Among them, the Method of Successive Approximations (MSA) (Krylov & Chernous’ko, 1963) is one of the simplest algorithms. In the field of deep learning, previous work has utilized MSA to train neural networks (Li et al., 2017; Li & Hao, 2018). Formally, consider the optimal control problem for training neural nets in section 3. For dynamics {ft(xt, θt) : t = 0, . . . , T}, assume θ∗ = { θ∗0 , . . . 
, θ ∗ T−1 } is a solution to the optimal control problem. Also, we define the Hamiltonian function H : Rdt × Rdt+1 × Θt × [T ] → R by H(x, p, θ, t) = p · ft(x, θ)−L(θt), where the dot denotes the inner product. We have the following necessary conditions for θ∗. Theorem 3 (Pontryagin’s Maximum Principle for Discrete Systems). Assume ft and J are sufficiently smooth. There exists co-states p∗ = {p∗0, . . . , p∗T } s.t. the following conditions hold: x∗t+1 = ∇pH(x∗t , p∗t+1, θ∗t , t), x∗0 = x0, p∗t = ∇xH(x∗t , p∗t+1, θ∗t , t), p∗T = −∇xJ(x∗T ), θ∗t = arg max θ H(x∗t , p ∗ t+1, θ, t). For simplicity of notations, here we assume the batch size is 1. One can easily extend the theorem to minibatch training case by summing over the batch. The theorem can be proved by KKT conditions (Boyd & Vandenberghe, 2004), where the co-states can be seen as the Lagrangian dual variables. Consider the conditions in PMP, one can find the x equations are exactly the forward propagation of a neural net, and the p equations resemble the backward propagation process. The third condition states that the model parameters must maximize the Hamiltonian function. This motivates us to iteratively compute forward and backward propagation, and solve the Hamiltonian maximization to find the optimal control, which is exactly the Method of Successive Approximations (Algorithm 1). In practice, we usually add regularizer terms that penalize great changes in the maximization step to prevent drastic steps that cause divergence. For the connection between MSA and back-propagationbased gradient descent algorithms, see the appendix of Li & Hao (2018). Algorithm 1 The Method of Successive Approximations Initialize θ0 = { θ00, . . . , θ 0 T−1 } , set k = 0; repeat Compute the states (forward propagation): xt+1 = ∇pH(xt, pt+1, θkt , t), t = 0, . . . , T − 1; Compute the co-states (backward propagation): pt = ∇xH(xt, pt+1, θkt , t), t = T − 1, . . . , 0, with initial pT = −∇xJ(xT ); For each t = 0, . . . , T − 1, solve the maximization θk+1t = arg maxθH(xt, pt+1, θ, t); Set k = k + 1; until Converge; The advantages of training by MSA compared with gradient descent algorithms has been discussed in (Li et al., 2017), among which the most significant feature is that the optimization steps on different layers are decoupled. Concretely, after computing the states x and co-states p, the optimization step on layer t is only searching for parameters θt. This not only suggests that the optimization process can be accelerated by parallelization, but also allows us to utilize the features of the problem. The parameter space is greatly reduced compared with the original intractable optimization problem, and hence the optimization is much more easier. This allows us to add constraints that ensure robustness of the model. 4.2 ROBUST CONSTRAINTS Consider a layer in the form of ft(x) = θtx, where we leave the activation as an individual layer with no parameters for simplicity, we can derive the following optimization problem for Hamiltonian maximization: max θ pt+1 · (θtxt)− α‖θt‖22 − β‖θt − θ′t‖22, subj. to ρ(θt) ≤ 1, where α‖θt‖22 is the L2 norm regularizer (weight decay), and θ′t is the initial parameter (i.e., θkt in the algorithm). The last term keeps the training process from drastic steps that cause divergence. The constraint, as illustrated in section 3, is the stable condition for discrete systems. 
Directly adding the constraints to gradient-descent-based algorithms would make the optimization quite difficult, but the decoupled optimization in MSA allows us to do so. With regard to the constraint on the parameter’s spectral radius, a simple method is to apply special forms of matrices for the parameters, e.g., anti-symmetric matrices. For continuous deep models, the only constraint is Theorem 1, i.e., Re(λi(θt)) ≤ 0. Anti-symmetric matrices have only imaginary eigenvalues, and hence we can replace θt with θt − θ⊤t − γI, where γ is a small positive constant. For general forms of parameters, one can prove the following transformation. Theorem 4. One sufficient condition for ρ(A) ≤ 1 is [ I A; A⊤ I ] ⪰ 0, where A ⪰ B denotes that A−B is positive semi-definite. Proof. Recall that ρ(A) ≤ ‖A‖2 = √λmax(A⊤A); we have ‖A‖2 ≤ 1 ⇔ A⊤A ⪯ I ⇔ [ I A; A⊤ I ] ⪰ 0. Hence we can replace ρ(θt) ≤ 1 with a positive semi-definite condition, and we turn the Hamiltonian maximization into a new optimization problem, where the target function is quadratic and the constraint is a semi-definite condition. This can be reduced to a semi-definite programming (SDP) problem (Vandenberghe & Boyd, 1998), which is a special case of convex optimization, and thus can be solved efficiently by, e.g., interior point methods (Helmberg et al., 1970) in polynomial time. Here we summarize our method. For a given neural network, we use MSA to train the model, i.e., iteratively computing the states (forward propagation) and co-states (backward propagation), and solving the optimization for each layer. Instead of directly maximizing the Hamiltonian, we add a positive semi-definite constraint to the optimization problem, which leads to a stable control of the dynamics. 5 EXPERIMENTS 5.1 EXPERIMENT SETUP To evaluate the effectiveness of our method, we conduct experiments on CIFAR10. We trained the network on clean data, with adversarial training (PGD-10) and with robust training (our method), respectively. We used FGSM (Goodfellow et al., 2015), PGD-10 (Madry et al., 2017) and C&W (Carlini & Wagner, 2017a) to attack the network. Due to the limitation of TensorFlow, we used a simple interior point method with gradient descent to solve the SDP. The network model was an 18-layer residual network (He et al., 2015), with 8 residual blocks. We set the perturbation size as ε = 0.1 for both FGSM and PGD. For C&W, we used the L0 metric. We trained the model for 150 epochs with a batch size of 200. The learning rate was set to be 10−2 initially, and was divided by 5 at epochs 30, 60 and 100. The regularizer term constant was set to be 10−3. 5.2 RESULTS The results can be seen in Table 1. The accuracy of the robust models on clean data is lower than the vanilla model’s because robust training and generalization are more difficult and require more data (Schmidt et al., 2018). Our method improves the model’s adversarial robustness compared with the vanilla model. Figure 1 displays the eigenvalues of the last fully-connected layer’s parameter. The complex norms of the eigenvalues (and hence the spectral radius) of the model trained by our method are effectively bounded below 1, which satisfies the robust constraint on parameters in Section 4.2, while the eigenvalues under natural training are randomly distributed in the complex plane. Our method is not as effective as the traditional adversarial training method. However, it mainly has the following advantages: (a) The training process does not require large numbers of gradient propagations, which consume much time in adversarial training.
In our experiment, adversarial training spends about 10 times as much GPU time as our method. (b) The decoupled training process allows us to set different hyperparameters and training methods for different layers, which is more manageable for large-scale training. We can further control the behavior of different layers in adversarial settings. (c) Lyapunov stability provides a framework for analyzing the adversarial robustness of deep models, which may lead to theoretical analysis of adversarial samples in future work. 6 DISCUSSION AND FUTURE WORK Motivated by the dynamical system view of neural networks, this work bridges the adversarial robustness of deep neural models with the Lyapunov stability of dynamical systems, and we also propose a method that uses a stable optimal control algorithm to train neural networks so as to improve their adversarial robustness. Though the results did not surpass state-of-the-art (SOTA) defense methods, the stable control view of training neural nets points out another direction towards adversarially robust models. For future work, on the one hand, mathematical analysis of the Lyapunov stability of neural models may be studied to provide a theoretical understanding of adversarial robustness. On the other hand, popular platforms for deep learning, e.g., TensorFlow and PyTorch, do not provide frameworks for optimal control. We would obtain better results if specialized SDP algorithms were applied to solve the optimization problem.
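As a small supplementary illustration of the eigenvalue check discussed in Section 5.2, the robust constraint can be verified directly on a trained square weight matrix; the file path below is hypothetical.

```python
# A minimal, illustrative check of the robust constraint of Section 4.2: given a
# trained layer's square weight matrix, verify that its spectral radius is bounded
# by 1, i.e., all eigenvalues lie inside the unit circle of the complex plane.
import numpy as np

def spectral_radius(weight: np.ndarray) -> float:
    return float(np.abs(np.linalg.eigvals(weight)).max())

W = np.load("last_fc_weight.npy")  # hypothetical path to a saved square weight matrix
print("spectral radius:", spectral_radius(W))
print("satisfies rho(W) <= 1:", spectral_radius(W) <= 1.0)
```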
1. What is the focus of the paper regarding neural networks and adversarial attacks? 2. What method does the paper propose to address these attacks, and how does it relate to previous works in the field? 3. What are the weaknesses of the paper, particularly regarding its experimental section and writing quality? 4. How does the reviewer assess the overall quality and novelty of the paper's content? 5. What suggestions does the reviewer provide for improving the paper and making it suitable for publication in a different venue?
Review
Review Neural networks are vulnerable to adversarial perturbations. This paper proposes a method based on optimal control theory that uses semidefinite programming. This has been a quite popular topic in adversarial training recently; there have been a few works along that line. There are almost no experiments in this paper. The writing of this paper requires more work; there are several typos, for example STOA should be SOTA (in Section 6). In its current state, this paper looks very rushed. As Yiping Lu pointed out, the PMP statement in this paper is also wrong. At this current stage, unfortunately this paper doesn’t meet the standards of ICLR. I would recommend that the authors go over the paper carefully and resubmit to a different venue.
ICLR
Title Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps Abstract Self-supervised learning aims to learn an embedding space where semantically similar samples are close. Contrastive learning methods pull views of samples together and push different samples away, which utilizes the semantic invariance of augmentation but ignores the relationship between samples. To better exploit the power of augmentation, we observe that semantically similar samples are more likely to have similar augmented views. Therefore, we can take the augmented views as a special description of a sample. In this paper, we model such a description as the augmentation distribution, and we call it the augmentation feature. The similarity in augmentation feature reflects how much the views of two samples overlap and is related to their semantic similarity. Without the computational burden of explicitly estimating the values of the augmentation feature, we propose Augmentation Component Analysis (ACA) with a contrastive-like loss to learn principal components and an on-the-fly projection loss to embed data. ACA amounts to an efficient dimension reduction by PCA and extracts low-dimensional embeddings, theoretically preserving the similarity of augmentation distribution between samples. Empirical results show that our method can achieve competitive results against various traditional contrastive learning methods on different benchmarks. Code available at https://github.com/hanlu-nju/AugCA. 1 INTRODUCTION The rapid development of contrastive learning has pushed self-supervised representation learning to unprecedented success. Many contrastive learning methods surpass traditional pretext-based methods by a large margin and even outperform representations learned by supervised learning (Wu et al., 2018; van den Oord et al., 2018; Tian et al., 2020a; He et al., 2020; Chen et al., 2020a;c). The key idea of self-supervised contrastive learning is to construct views of samples via modern data augmentations (Chen et al., 2020a). Then discriminative embeddings are learned by pulling together views of the same sample in the embedding space while pushing apart views of others. Contrastive learning methods utilize the semantic invariance between views of the same sample, but the semantic relationship between samples is ignored. Instead of measuring the similarity between certain augmented views of samples, we claim that the similarity between the augmentation distributions of samples can reveal the sample-wise similarity better. In other words, semantically similar samples have similar sets of views. As shown in Figure 1 left, two images of deer create many similar crops, and the sets of their augmentation results, i.e., their distributions, overlap substantially. In contrast, a car image will rarely be augmented to the same crop as a deer, and their augmentation distributions overlap little. In Figure 1 right, we verify the motivation numerically. We approximate the overlaps between image augmentations with a classical image matching algorithm (Zitova & Flusser, 2003), which counts the portion of the key points matched in the raw images. We find samples of the same class overlap more than those of different classes on average, supporting our motivation. Therefore, we establish the semantic relationship between samples in an unsupervised manner based on the similarity of augmentation distributions, i.e., how much they overlap. In this paper, we propose to describe data directly by their augmentation distributions.
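As a rough sketch of how the overlap statistic in Figure 1 (right) could be approximated, one can match key points between two raw images with a classical matcher and report the matched portion; the use of OpenCV's ORB features and a Hamming brute-force matcher here is an illustrative assumption, since the text only specifies a classical image matching algorithm in the sense of Zitova & Flusser (2003).

```python
# A rough illustration (not the authors' exact procedure) of the overlap statistic
# in Figure 1 (right): match key points between two raw images and report the
# portion of key points that find a counterpart in the other image.
import cv2

def matched_keypoint_ratio(img_a, img_b):
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None or len(kp_a) == 0 or len(kp_b) == 0:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    return len(matches) / min(len(kp_a), len(kp_b))
```

Averaging this ratio over same-class and different-class pairs yields the kind of comparison reported in Figure 1 (right). With this empirical motivation, we return to describing each sample by its augmentation distribution.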
We call the feature of this kind the augmentation feature. The elements of the augmentation feature represent the probability of getting a certain view by augmenting the sample as shown in the left of Figure 2. The augmentation feature serves as an “ideal” representation since it encodes the augmentation information without any loss and we can easily obtain the overlap of two samples from it. However, not only its elements are hard to calculate, but also such high-dimensional embeddings are impractical to use. Inspired by the classical strategy to deal with high-dimensional data, we propose Augmentation Component Analysis (ACA), which employs the idea of PCA (Hotelling, 1933) to perform dimension reduction on augmentation features previously mentioned. ACA reformulates the steps of extracting principal components of the augmentation features with a contrastive-like loss. With the learned principal components, another on-the-fly loss embeds samples effectively. ACA learns operable low-dimensional embeddings theoretically preserving the augmentation distribution distances. In addition, the similarity between the objectives of ACA and traditional contrastive loss may explain why contrastive learning can learn semantic-related embeddings – they embed samples into spaces that partially preserve augmentation distributions. Experiments on synthetic and real-world datasets demonstrate that our ACA achieves competitive results against various traditional contrastive learning methods. Our contributions are as follows: • We propose a new self-supervised strategy, which measures sample-wise similarity via the similarity of augmentation distributions. This new aspect facilitates learning embeddings. • We propose ACA method that implicitly employs the dimension reduction over the augmentation feature, and the learned embeddings preserve augmentation similarity between samples. • Benefiting from the resemblance to contrastive loss, our ACA helps explain the functionality of contrastive learning and why they can learn semantically meaningful embeddings. 2 RELATED WORK Self-Supervised Learning. Learning effective visual representations without human supervision is a long-standing problem. Self-supervised learning methods solve this problem by creating supervision from the data itself instead of human labelers. The model needs to solve a pretext task before it is used for the downstream tasks. For example, in computer vision, the pretext tasks include colorizing grayscale images (Zhang et al., 2016), inpainting images (Pathak et al., 2016), predicting relative patch (Doersch et al., 2015), solving jigsaw puzzles (Noroozi & Favaro, 2016), predicting rotations (Gidaris et al., 2018) and exploiting generative models (Goodfellow et al., 2014; Kingma & Welling, 2014; Donahue & Simonyan, 2019). Self-supervised learning also achieves great success in natural language processing (Mikolov et al., 2013; Devlin et al., 2019). Contrastive Learning and Non-Contrastive Methods. Contrastive approaches have been one of the most prominent representation learning strategies in self-supervised learning. Similar to the metric learning in supervised scenarios (Ye et al., 2019; 2020), these approaches maximize the agreement between positive pairs and minimize the agreement between negative pairs. 
Positive pairs are commonly constructed by co-occurrence (van den Oord et al., 2018; Tian et al., 2020a; Bachman et al., 2019) or augmentation of the same sample (He et al., 2020; Chen et al., 2020a;c; Li et al., 2021; Ye et al., 2023), while all the other samples are taken as negatives. Most of these methods employ the InfoNCE loss (van den Oord et al., 2018), which acts as a lower bound of the mutual information between views. Based on this idea, there are several methods that attempt to improve contrastive learning, including mining nearest neighbours (Dwibedi et al., 2021; Azabou et al., 2021) and creating extra views by mixing up (Kalantidis et al., 2020) or adversarial training (Hu et al., 2021). Another stream of methods employs a similar idea of contrastive learning to pull views of a sample together without using negative samples (Grill et al., 2020; Chen & He, 2021). Barlow Twins (Zbontar et al., 2021) minimizes the redundancy within the representation vector. Tsai et al. (2021) reveal the relationship among Barlow Twins, contrastive and non-contrastive methods. Most of these methods only utilize the semantic invariance of augmentation and ignore the relationship between samples. Different from them, we propose a new way to perform self-supervised learning by preserving the similarity of augmentation distributions, based on the observation that a strong correlation exists between the similarity of augmentation distributions and the similarity of semantics. Explanation of Contrastive Learning. Several works provide empirical or theoretical results for explaining the behavior of contrastive learning. Tian et al. (2020b); Xiao et al. (2021) explore the role of augmentation and show that contrastive models can extract useful information from views but can also be affected by nuisance information. Zhao et al. (2021) empirically show that contrastive learning preserves low-level or middle-level instance information. In theoretical studies, Saunshi et al. (2019) provide guarantees for downstream linear classification tasks under a conditional independence assumption. Other works weaken the assumption but are still unrealistic (Lee et al., 2021; Tosh et al., 2021). HaoChen et al. (2021) focus on how views of different samples are connected by the augmentation process and provide guarantees with certain connectivity assumptions. Wang et al. (2022) notice that the augmentation overlap provides a ladder for gradually learning class-separated representations. In addition to the alignment and uniformity as shown by Wang & Isola (2020), Huang et al. (2021) develop theories on the crucial effect of data augmentation on the generalization of contrastive learning. Hu et al. (2022) explain that the contrastive loss is implicitly doing SNE with “positive” pairs constructed from data augmentation. Inspired by the important role of augmentation, we provide a novel self-supervised method that ensures the preservation of augmentation overlap. 3 NOTATIONS The set of all natural data (data without augmentation) is denoted by X̄ , with size |X̄ | = N . We assume that the natural data follow a uniform distribution p(x̄) on X̄ , i.e., p(x̄) = 1/N, ∀x̄ ∈ X̄ . By applying an augmentation method A, a natural sample x̄ ∈ X̄ could be augmented to another sample x with probability pA(x | x̄), so we use p(· | x̄) to encode the augmentation distribution.1 For example, if x̄ is an image, then A can be common augmentations like Gaussian blur, color distortion and random cropping (Chen et al., 2020a).
Denote the set of all possible augmented data as X . We assume X has finite size |X | = L and L > N for ease of exposition. Note that N and L are finite, but can be arbitrarily large. We denote the encoder as fθ, parameterized by θ, which projects a sample x to an embedding vector in Rk. 4 LEARNING VIA AUGMENTATION OVERLAPS As we mentioned in Section 1, measuring the similarity between the augmentation distributions, i.e., the overlap of the augmented results of two samples, reveals their semantic relationship well. For example, in natural language processing, we usually generate augmented sentences by dropping out some words. Then different sentences with similar meanings are likely to contain the same set of words and thus have a high probability of creating similar augmented data. With the help of this self-supervision, we formulate the embedding learning task to meet the following similarity preserving condition: dRk (fθ⋆ (x̄1) , fθ⋆ (x̄2)) ∝ dA(p(· | x̄1), p(· | x̄2)) . (1) dRk is a distance measure in the embedding space Rk, and dA measures the distance between two augmentation distributions. Equation (1) requires that the learned embedding with the optimal parameter θ⋆ yields the same similarity comparisons as those measured by the augmentation distributions. In this section, we first introduce the augmentation feature for each sample, which is a manually designed embedding satisfying the condition in Equation (1). To handle the high dimensionality and complexity of the augmentation feature, we further propose our Augmentation Component Analysis (ACA), which learns to reduce the dimensionality and preserve the similarity. 1Note that p(· | x̄) is usually difficult to compute and we can only sample from it. We omit the subscript A and directly use p(· | x̄) in the following content for convenience. 4.1 AUGMENTATION FEATURE To reach the goal of similarity preserving in Equation (1), a direct way is to manually construct the feature by the augmentation distributions of each natural sample, i.e., f(x̄) = [p(x1 | x̄), . . . , p(xL | x̄)]⊤, where each element p(xi | x̄) represents the probability of getting a certain element xi in space X by augmenting x̄. We omit θ in f(x̄) since such an augmentation feature2 does not rely on any learnable parameters. In this case, any distance dRL defined in the space of f is exactly a valid distribution distance, which reveals the augmentation overlaps and is related to the semantic similarity. Although the constructive augmentation feature naturally satisfies the similarity preserving condition (Equation (1)) (because it directly uses the augmentation distribution without loss of information), it is impractical for the following reasons. First, its dimensionality is exponentially high, up to L, the number of possible augmented results. For example, even on CIFAR10, the small-scale dataset with image size 32× 32× 3, L is up to 256^3072 (3072 pixels and 256 possible pixel values). Second, the computation of each element is intractable. We may need an exponentially large number of samples to accurately estimate each p(x | x̄). The dimensionality and computation problems make the augmentation feature impractical both at inference and training time. Such inconvenience motivates us to (1) conduct certain dimension reduction to preserve the information in low dimensional space (Section 4.2) and (2) develop an efficient algorithm for dimension reduction (Section 4.3).
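The following toy construction (purely for illustration, not taken from the paper) makes the augmentation feature concrete: each row of A is an augmentation distribution p(· | x̄) over a small discrete set of L possible augmented results, and overlapping rows yield small distances.

```python
# A tiny synthetic illustration of the augmentation feature of Section 4.1:
# rows of A are augmentation distributions p(. | x_bar) over a small discrete
# space of L augmented results; overlapping rows give small distances. Real image
# augmentations make L astronomically large, which is why ACA never forms A explicitly.
import numpy as np

L = 6                                   # toy number of possible augmented results
A = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0, 0.0],     # x_bar_1: augments to views 1-2
    [0.0, 0.4, 0.6, 0.0, 0.0, 0.0],     # x_bar_2: overlaps with x_bar_1 on view 2
    [0.0, 0.0, 0.0, 0.0, 0.5, 0.5],     # x_bar_3: no overlap with the first two
])

def l2_distance(p, q):
    return np.linalg.norm(p - q)

print(l2_distance(A[0], A[1]))          # small: distributions overlap
print(l2_distance(A[0], A[2]))          # large: distributions are disjoint
```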
4.2 DIMENSION REDUCTION ON AUGMENTATION FEATURES To deal with the high-dimensional property, we employ the idea of PCA (Hotelling, 1933), which reconstructs the data with principal components.3 For convenience, we denote the design matrix of augmentation feature by A, where A ∈ RN×L, Ax̄,x = p(x | x̄) (see Figure 2). We perform PCA on a transformed augmentation feature called normalized augmentation feature:  = AD− 1 2 , (2) where D = diag([dx1 , dx2 , . . . , dxL ]), dx = ∑ x̄ p(x | x̄). Based on normalized augmentation feature, we can develop an efficient algorithm for similarity preserving embeddings. Assume the SVD of  = UΣV ⊤ with U ∈ RN×N , Σ ∈ RN×L, V ∈ RL×L , PCA first learns the projection matrix consisting of the top-k right singular vectors, which can be denoted as Ṽ ∈ RL×k. The vectors in Ṽ are called Principal Components (PCs). Then, it projects the feature by ÂṼ to get the embeddings for each sample. The overall procedure is illustrated at the top-right of Figure 2. But performing PCA on the augmentation feature will encounter many obstacles. The element of augmentation feature is not possible to estimate accurately, not to mention its high dimensionality. 2Following the common knowledge in dimension reduction, we call the raw high dimensional representation as “feature”, and learned low-dimensional representation as “embedding”. 3In this paper, we use the non-centred version (Reyment & Jvreskog, 1996), which is more appropriate for observations than for variables, where the origin matters more. Even if we can somehow get the projection matrix Ṽ , it is also impractical to project the highdimensional matrix Â. For this reason, we propose ACA to make PC learning and projection process efficient without explicitly calculating elements of augmentation feature. 4.3 AUGMENTATION COMPONENT ANALYSIS Although there are several obstacles when performing PCA on the augmentation features directly, fortunately, it is efficient to sample from the augmentation distribution p(x | x̄), i.e., by performing augmentation on the natural data x̄ and get an augmented sample x. Being aware of this, our ACA uses two practical losses to simulate the PCA process efficiently by sampling. The first contrastivelike loss leads the encoder to learn principal components of Â, which can be efficiently optimized by sampling like traditional contrastive methods. The second loss performs on-the-fly projection of  through the training trajectory, which solves the difficulty of high dimensional projection. Learning principal components. ACA learns the principal components by an efficient contrastivelike loss. Besides its projection functionality, these learned principal components can also serve as embeddings that preserve a kind of posterior distribution similarity, as we will show later. In the SVD view, UΣ serves as the PCA projection results for samples and V contains the principal components (Jolliffe, 2002). However, if changing our view, V Σ can be seen as the representation of each column. Since each column of  encodes the probability of the augmented data given natural data, V Σ preserves certain augmentation relationships, as we will show in Theorem 4.2 later. To leverage the extrapolation power of encoders like deep neural networks, we choose to design a loss that can guide the parameterized encoder fθ to learn similar embeddings as PCA. Inspired by the rank minimization view of PCA (Vidal et al., 2016), we employ the low-rank approximation objective with matrix factorization, similar to HaoChen et al. 
(2021): min F∈RL×k Lmf = ∥Â⊤Â− FF⊤∥2F , (3) where columns of F store the scaled version of top-k right singular vectors, and each row can be seen as the embedding of augmented data as will show in Lemma 4.1. According to Eckart–Young–Mirsky theorem (Eckart & Young, 1936), by optimizing Lmf , we can get the optimal F̂ , which has the form Ṽ Σ̃Q, Q ∈ Rk×k is an orthonormal matrix. Σ̃ and Ṽ contains the top-k singular values and right singular vectors. By expanding Equation (3), we get Augmentation Component Analysis Loss for learning Principal Components (ACA-PC) in the following lemma: Lemma 4.1 (ACA-PC loss). Let Fx,: = √ dxf ⊤ θ (x),∀x ∈ X . Minimizing Lmf is equivalent to minimizing the following objective: LACA-PC =− 2E x̄∼p(x̄),xi∼p(xi|x̄) xj∼p(xj |x̄) fθ(xi) ⊤fθ(xj) +NEx1∼pA(x1),x2∼pA(x2) [( fθ(x1) ⊤fθ(x2) )2] . (4) The proof can be found in Appendix F. In ACA-PC, the first term is the common alignment loss for augmented data and the second term is a form of uniformity loss (Wang & Isola, 2020). Both terms can be estimated by Monte-Carlo sampling. ACA-PC is a kind of contrastive loss. But unlike most of the others, it has theoretical meanings. We note that the form of ACA-PC differs from spectral loss (HaoChen et al., 2021) by adding a constant N before the uniformity term. This term is similar to the noise strength in NCE (Gutmann & Hyvärinen, 2010) or the number of negative samples in InfoNCE (van den Oord et al., 2018). It can be proved that the learned embeddings by ACA-PC preserve the posterior distribution distances between augmented data: Theorem 4.2 (Almost isometry for posterior distances). Assume fθ is a universal encoder, σk+1 is the (k + 1)-th largest singular value of Â, dmin = minx dx, and δx1x2 = I(x1 = x2), the minimizer θ∗ of LACA−PC satisfies: d2post(x1,x2)− 2σ2k+1 dmin (1− δx1x2) ≤ ∥fθ∗(x1)− fθ∗(x2)∥22 ≤ d2post(x1,x2) , ∀x1,x2 ∈ X where the posterior distance d2post(x1,x2) = ∑ x̄∈X̄ (pA(x̄ | x1)− pA(x̄ | x2))2 (5) measures the squared Euclidean distance between the posterior distribution pA(x̄ | x) = p(x|x̄)p(x̄)pA(x) . We give the proof in Appendix G. Theorem 4.2 states that the optimal encoder for ACA-PC preserves the distance of posterior distributions between augmented data within an error related to embedding size k. As k increase to N , the error decrease to 0. It corresponds to the phenomenon that a larger embedding size leads to better contrastive performance (Chen et al., 2020a). The posterior distribution pA(x̄ | x) represents the probability that a given augmented sample x is created by a natural sample x̄. Augmented data that are only produced by the same natural sample will have the smallest distance, and embeddings of those in overlapped areas will be pulled together by ACA-PC. Since the overlapped area are usually created by two same-class samples, ACA-PC can form semantically meaningful embedding space. It is also noticeable that the optimal encoder meets the similarity preserving condition (Equation (1)) but concerning the posterior distribution for augmented data not the augmentation distribution for natural data. Since what we care about is the distribution of natural data, we further propose a projection loss that helps learn good embeddings for all the natural data. On-the-fly Projection. As stated in the previous part, the learned embeddings by ACA-PC not only serve as certain embeddings for augmented data but also contain principal components of normalized augmentation feature. 
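Before turning to the projection step, the ACA-PC objective of Equation (4) can be estimated on a minibatch roughly as follows. This PyTorch-style sketch is a simplification (see the released code linked in the abstract for the actual implementation); K denotes the tunable constant that replaces N, and estimating the uniformity term from off-diagonal batch pairs is an assumption made here for illustration.

```python
# A minimal sketch of the ACA-PC objective in Equation (4), estimated on a
# minibatch. z1 and z2 are encoder outputs for two independent augmentations of
# the same B natural samples; K replaces N in the uniformity term.
import torch

def aca_pc_loss(z1: torch.Tensor, z2: torch.Tensor, K: float = 2.0) -> torch.Tensor:
    # Alignment term: -2 E[f(x_i)^T f(x_j)] for x_i, x_j augmented from the same x_bar.
    alignment = (z1 * z2).sum(dim=1).mean()
    # Uniformity term: E[(f(x_1)^T f(x_2))^2] for independently drawn augmented data,
    # estimated from the off-diagonal pairs of the batch.
    dots = z1 @ z2.t()                                   # (B, B) pairwise inner products
    off_diag = dots - torch.diag_embed(torch.diagonal(dots))
    B = z1.shape[0]
    uniformity = (off_diag ** 2).sum() / (B * (B - 1))
    return -2.0 * alignment + K * uniformity
```

These minibatch estimates correspond to the Monte-Carlo sampling mentioned below Equation (4).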
Based on this, we propose to use these embeddings to act as a projection operator to ensure meaningful embeddings for all the natural data. To be specific, denote the embedding matrix for all augmented data as F aug(∈ RL×k), where each row F augx,: = f⊤θ∗(x). From Equation (3) and F̂x,: = √ dxf ⊤ θ∗(x), it can be easily seen that: F aug = D− 1 2 F̂ = D− 1 2 Ṽ Σ̃Q Similar to PCA (Hotelling, 1933) that projects the original feature by the principal components V , we propose to use F aug to project the augmentation feature to get the embeddings for each natural sample. Denote the embedding matrix for natural data as Fnat(∈ RN×k), where each row Fnatx̄,: represents the embeddings of x̄. We compute Fnat as follows: Fnat = AF aug = ÂD 1 2D− 1 2 Ṽ Σ̃Q = (Ũ Σ̃)Σ̃Q, (6) where Σ̃,Ũ contain the top-k singular values and corresponding left singular vectors. It is noticeable that Fnat is exactly the PCA projection result multiplied by an additional matrix Σ̃Q. Fortunately, such additional linear transformation does not affect the linear probe performance (HaoChen et al., 2021). With Equation (6), the embedding of each natural sample can be computed as follows: Fnatx̄,: = Ax̄,:F aug = ∑ x p(x | x̄)f⊤θ∗(x) = Ex∼p(x|x̄)f⊤θ∗(x) (7) which is exactly the expected feature over the augmentation distribution. Similar to Theorem 4.2, the embeddings calculated by Equation (7) also present a certain isometry property: Theorem 4.3 (Almost isometry for weighted augmentation distances). Assume fθ is a universal encoder, σk+1 is the (k + 1)-th largest sigular value of Â,δx̄1x̄2 = I(x̄1 = x̄2), let the minimizer of LACA−PC be θ∗ and g(x̄) = Ex∼p(x|x̄)fθ∗(x) as in Equation (7), then: d2w-aug(x̄1, x̄2)− 2σ2k+1 (1− δx̄1x̄2) ≤ ∥g(x̄1)− g(x̄2)∥2Σ−2k ≤ d 2 w-aug(x̄1, x̄2) , ∀x1,x2 ∈ X where ∥·∥Σ−2k represent the Mahalanobis distance with matrix Σ −2 k ,Σk = diag([σ1, σ2, . . . , σk]) is the diagonal matrix containing top-k singular values and the weighted augmentation distance d2w-aug(x̄1, x̄2) = 1 N ∑ x∈X (p(x | x̄1)− p(x | x̄2))2 pA(x) (8) measures the weighted squared Euclidean distance between the augmentation distribution p(x | x̄). Different from Theorem 4.2, which presents isometry between Euclidean distances in embeddings and augmentation distribution, Theorem 4.3 presents isometry between Mahalanobis distances. The weighted augmentation distances weigh the Euclidean distances by pA(x). dw-aug can be regarded as a valid augmentation distance measure dA as in Equation (1) and Fnat preserve such a distance. So our goal is to make embeddings of x̄ approaches Ep(x|x̄)fθ⋆(x). However, as stated before, the additional projection process is not efficient, i.e., we need exponentially many samples from p(x | x̄). We notice that samples during the training process of ACA-PC can be reused. For this reason, we propose an on-the-fly projection loss that directly uses the current encoder for projection: Lproj = Ex̄∼p(x̄) [ ∥fθ(x̄)− Ep(x|x̄)fθ(x)∥22 ] (9) Full objective of ACA. Based on the discussion of the above parts, ACA simultaneously learns the principal components by ACA-PC and projects natural data by an on-the-fly projection loss. The full objective of ACA has the following form: LACA-Full = LACA-PC + αLproj (10) where α is a trade-off hyperparameter. We also find N in Equation (4) too large for stable training, so we replace it with a tunable hyperparameter K. Here, we only display the loss in expectation forms. The details of the implementation are described in Appendix A. 
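A matching sketch of the on-the-fly projection loss of Equation (9) and the full objective of Equation (10) is given below. Approximating E over p(x | x̄) by the mean over the V augmented views available in the batch is a simplification assumed here; the exact implementation details are in Appendix A, and pc_loss stands for the ACA-PC term of Equation (4), e.g., as sketched above.

```python
# A sketch of the on-the-fly projection loss (Eq. 9) and the full ACA objective (Eq. 10).
# z_nat: embeddings of natural samples, shape (B, k);
# z_views: embeddings of V augmentations per sample, shape (B, V, k), whose mean over V
# serves as a Monte-Carlo estimate of E_{p(x|x_bar)} f_theta(x).
import torch

def projection_loss(z_nat: torch.Tensor, z_views: torch.Tensor) -> torch.Tensor:
    center = z_views.mean(dim=1)                     # estimated augmentation-distribution center
    return ((z_nat - center) ** 2).sum(dim=1).mean()

def aca_full_loss(pc_loss: torch.Tensor, z_nat, z_views, alpha: float = 0.2) -> torch.Tensor:
    # Equation (10): L_ACA-Full = L_ACA-PC + alpha * L_proj.
    return pc_loss + alpha * projection_loss(z_nat, z_views)
```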
5 A PILOT STUDY In this section, we experiment with our Augmentation Component Analysis method on a synthetic mixture component data with a Gaussian augmentation method. In this example, we aim to show the relationship between semantic similarity and posterior/weighted augmentation distances. We also show the effectiveness of our method compared to traditional contrastive learning. In this example, the natural data x̄ are sampled from a mixture gaussian with c component: p(x̄) = c∑ i=1 πiN (µi, siI) We use Gaussian noise as the data augmentation of a natural data sample, i.e., A(x̄) = x̄+ ξ where ξ ∼ N (0, saI). Concretely, we conduct our experiment on 2-D data with c = 4, πi = 1c , si = 1 and µi uniformly distributed on a circle with radius 2 . For each component, we sample 200 natural data with the index of the component as their label. For each natural datum, we augment it 2 times with sa = 4, which results in totally 1600 augmented data. We compute the augmentation probability for between x and x̄ by p(x | x̄) and we normalize the probability for each x̄. First, we plot the distribution of posterior distances (Equation (5)) for pairs of augmented data and weighted augmentation distances (Equation (8)) for pairs of natural data in Figure 3 left. The two distances appear to have similar distributions because the synthetic data are Gaussian. It can be seen that data from the same component tend to have small distances, while from different components, their distances are large. In low-distance areas, there are pairs of the same class, which means that the two distances are reliable metrics for judging semantic similarity. In all, this picture reveals the correlation between semantic similarity and posterior/weighted augmentation distances. Second, we compare our methods with SimCLR (Chen et al., 2020a), the traditional contrastive method and Spectral (HaoChen et al., 2021), which similarly learns embeddings with spectral theory. We test the learned embeddings using a Logistic Regression classifier and report the error rate of the prediction in Figure 3 right. We also report performance when directly using augmentation feature (AF). First, AF has discriminability for simple linear classifiers. SimCLR and Spectral tend to underperform AF as the embedding size increases, while our methods consistently outperform. It may be confusing since our method performs dimension reduction on this feature. But we note that as the embedding size increases, the complexity of the linear model also increases, which affects the generalizability. All the methods in Figure 3 right show degradation of this kind. However, our methods consistently outperform others, which shows the superiority of ACA. Additionally, by adding projection loss, ACA-Full improves ACA-PC by a margin. Additionally, traditional contrastive learning like SimCLR achieves similar performance as our methods. We think it reveals that traditional contrastive learning has the same functionality as our methods. 6 EXPERIMENTS 6.1 SETUP Dataset. In this paper, we conduct experiments mainly on the following datasets with RTX-3090 ×4. CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009): two datasets containing totally 500K images of size 32 × 32 from 10 and 100 classes respectively. STL-10 (Coates et al., 2011): derived from ImageNet (Deng et al., 2009), with 96 × 96 resolution images with 5K labeled training data from 10 classes. Additionally, 100K unlabeled images are used for unsupervised learning. 
Tiny ImageNet: a reduced version of ImageNet (Deng et al., 2009), composed of 100K images scaled down to 64 × 64 from 200 classes. ImageNet-100 (Tian et al., 2020a): a subset of ImageNet, with 100-classes. ImageNet (Deng et al., 2009), the large-scale dataset with 1K classes. Network Structure. Following common practice (Chen et al., 2020a;b;c), we use the encoderprojector structure during training, where the projector projects the embeddings into a lowdimensional space. For CIFAR-10 and CIFAR-100, we use the CIFAR variant of ResNet-18 (He et al., 2016; Chen & He, 2021) as the encoder. We use a two-layer MLP as the projector whose hidden dimension is half of the input dimension and output dimension is 64. For STL-10 and Tiny ImageNet, only the max-pooling layer is disabled following (Chen & He, 2021; Ermolov et al., 2021). For these two datasets, we use the same projector structure, except that the output dimension is 128. For ImageNet, we use ResNet-50 with the same projector as Chen et al. (2020a). Image Transformation. Following the common practice of contrastive learning (Chen et al., 2020a), we apply the following augmentations sequentially during training: (a) crops with a random size; (b) random horizontal flipping; (c) color jittering; (d) grayscaling. For ImageNet-100 and ImageNet, we use the same implementation as (Chen et al., 2020a). Optimizer and other Hyper-parameters. For datasets except for ImageNet, adam optimizer (Kingma & Ba, 2015) is used for all datasets. For CIFAR-10 and CIFAR-100, we use 800 epochs with a learning rate of 3× 10−3. For Tiny ImageNet and STL-10, we train 1,000 epochs with a learning rate 2 × 10−3. We use a 0.1 learning rate decay at 100, 50, 20 epochs before the end. Due to hardware resource restrictions, we use a mini-batch of size 512. The weight decay is 1 × 10−6 if not specified. Following common practice in contrastive learning, we normalize the projected feature into a sphere. For CIFAR-10, we use α = 1. For the rest datasets, we use α = 0.2. By default, K is set to 2. For ImageNet, we use the same hyperparameters as (Chen et al., 2020a) except batch size being 256, α = 0.2 and K = 2. Evaluation Protocol. We evaluate the learned representation on two most commonly used protocols – linear classification (Zhang et al., 2016; Kolesnikov et al., 2019) and k-nearest neighbors classifier (Chen & He, 2021). In all the experiments, we train the linear classifier for 100 epochs. The learning rate exponentially decays from 10−2 to 10−6. The weight decay is 1× 10−6. We report the classification accuracy on test embeddings as well as the accuracy of a 5-Nearest Neighbors classifier for datasets except for ImageNet. 6.2 PERFORMANCE COMPARISON In Table 1, we compare the linear probe performance on various small-scale or mid-scale benchmarks with several methods including SimCLR (Chen et al., 2020a), BYOL (Grill et al., 2020), SimSiam (Chen & He, 2021) and Spectral (HaoChen et al., 2021). For transfer learning benchmarks, please refer to Appendix D and Appendix E. SimCLR uses is a method that uses contrastive loss. BYOL and SimSiam do not use negative samples. Spectral is a similar loss derived from the idea of spectral clustering. From Table 1, we can see that our ACA-Full method achieves competitive results on small- or mid-scale benchmarks, achieving either the best or the second-best results on all benchmarks except the 5-NN evaluation on STL-10. Also, ACA-PC differs from ACA-Full in the projection loss. 
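Before examining the remaining results, the setup of Section 6.1 corresponds roughly to the following sketch; the jitter strengths, crop scale, and the use of the standard torchvision ResNet-18 (rather than the CIFAR variant used in the paper) are assumptions following common practice (Chen et al., 2020a) where exact values are not stated.

```python
# A sketch of the training setup of Section 6.1: a SimCLR-style augmentation
# pipeline and an encoder-projector wrapper with the two-layer MLP projector
# (hidden size = half the encoder dimension, output size 64 for CIFAR).
import torch.nn as nn
from torchvision import transforms, models

augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

class EncoderProjector(nn.Module):
    def __init__(self, feat_dim=512, out_dim=64):
        super().__init__()
        # Standard torchvision ResNet-18 as a stand-in for the CIFAR variant.
        self.encoder = models.resnet18(num_classes=feat_dim)
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2), nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 2, out_dim),
        )

    def forward(self, x):
        return self.projector(self.encoder(x))
```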
In all the benchmarks, we can see that the projection loss improves performance. For large-scale benchmarks, we compare several methods on ImageNet-100 and ImageNet. On ImageNet-100, we compare our method additionally to MoCo (He et al., 2020), Lalign + Luniform (Wang & Isola, 2020) and InfoMin (Tian et al., 2020b). Note that the results of the other three methods are reported when using the ResNet-50 encoder, which has more capacity than ResNet-18. Our method can also achieve state-of-the-art results among them. This means that our method is also effective with relatively small encoders even on large-scale datasets. On ImageNet, we see that ACA-PC achieves competitive performance against state-of-the-art contrastive methods (Chen et al., 2020a;c; Grill et al., 2020; Chen & He, 2021; HaoChen et al., 2021) and ACA-Full achieves the best. 7 CONCLUSION AND FUTURE WORK In this paper, we provide a new way of constructing self-supervised contrastive learning tasks by modeling similarity through augmentation overlap, which is motivated by the observation that semantically similar data usually create similar augmentations. We propose Augmentation Component Analysis to perform PCA on the augmentation feature efficiently. Interestingly, our methods have a similar form to the traditional contrastive loss and may explain the ability of the contrastive loss. We hope our paper can inspire more thoughts about how to measure similarity in self-supervised learning and how to construct contrastive learning tasks. Future studies may be explorations of applying ACA to learn representations of other forms of instances, such as tasks (Achille et al., 2019) and models (Wu et al., 2023). ACKNOWLEDGEMENT This research was supported by NSFC (61773198, 62006112, 61921006), Collaborative Innovation Center of Novel Software Technology and Industrialization, NSF of Jiangsu Province (BK20200313) B EFFECT OF AUGMENTATION OVERLAPS Like contrastive learning, our method relies on the quality of augmentation. Therefore, we investigate the influence of different augmentations and reveal the relationship between distribution difference and the linear probe performance on CIFAR10. The augmentation distribution is estimated by augmenting 10^6 times for a subset of 2000 random pairs of samples, with the numbers of intra-class and inter-class pairs being 1000 respectively. Note that, as stated in Section 4.1, even on CIFAR10, the actual value of L is exponentially large (up to 256^3072). It is impossible to accurately estimate a distribution over so many possible values. But we notice that for neural networks, many operators can reduce the possible number of values, like convolutions and poolings. Following this observation, and to make the computation efficient, we discretize the color into 8 levels for each channel and use a max pooling operation to get a 4 × 4 picture. By this kind of approximation, L reduces to 8^48. This still seems too large, but it can be noted that the augmentation distribution of each sample covers only a small region. It is enough to estimate the distribution by sampling. Due to memory restrictions, we cannot fully estimate the weighted augmentation distance in Theorem 4.3, because we cannot store all possible values of pA(x). Instead, we use the Hellinger distance as the distribution distance measure: d²H(x̄1, x̄2) = (1/N) ∑x∈X (√p(x | x̄1) − √p(x | x̄2))². The Hellinger distance ranges in [0, 2], making the comparison clear. We list the experimented augmentations here: 1.
Grayscale: Randomly change the color into gray with probability of 0.1. 2. HorizontalFlip: Randomly flip horizontally with probability 0.5. 3. Rotation: Randomly rotate image with uniformly distributed angle in [0, π] 4. ColorJitter: Jitter (brightness, contrast, saturation, hue) with strength (0.4, 0.4, 0.4, 0.1) and probability 0.8. In Table 3, we display the histogram (HIST) of intra- and inter-class augmentation distribution distances. ACC displays the linear probe performance on the test set. From the table, the following requirements for a good augmentation can be concluded: (1) Existence of overlap. For the upper three augmentations. The “scope” of augmentation is small. As a result, most of the samples do not overlap. This makes embeddings lack the discriminative ability for downstream tasks. On the contrary, the lower three create overlaps for most of the samples, leading to much better performance. (2) Intra-class distance is lower than inter-class. Compared to ColorJitter, ResizedCrop makes more intra-class samples have lower distance. So ResizedCrop outperforms ColorJitter. SimCLR augmentation surpasses these two for the same reason. Interestingly, we find that the same phenomena appear when using other contrastive methods like SimCLR. It shows that these methods somehow utilize the augmentation overlap like our method. C PERFORMANCE CURVE In this section, we illustrate the performance curve throughout training. We aim to demonstrate the functionality of projection loss and show that our ACA method leads to better performance. The compared traditional contrastive learning method is chosen to be SimCLR, for the reason that our method only differs from SimCLR in the loss, with all other things (architecture, optimizer and other shared hyperparameters) identical. Also, we do not introduce extra mechanisms like momentum encoder (BYOL, MoCo) and predictor (BYOL, SimSiam). Figure 5 shows the performance curve along with the projection loss on the CIFAR-10 dataset. The left figure shows the projection loss. We can see that in the early stage of training, the projection loss will increase. It reveals that the natural data will deviate from the center of augmentation distribution. It is harmful to the performance of the model. With the help of projection loss, the embeddings of natural data will be dragged back to their right position, the center. The mid and right figures illustrate the performance curve during training. With only ACA-PC loss, the model can only achieve similar performance during training. But the ACA-Full loss will help improve performance during training. Also, we can see that ACA starts to outperform SimCLR and ACA-PC by a considerable margin from about 50 epochs. This happens to be the epoch in which the projection loss increases to its stable level. Therefore, pulling the natural data to the center of its augmentation helps to learn better embeddings. D TRANSFER TO OTHER DATASETS Following Chen et al. (2020a), we evaluate the self-supervised pre-trained models for linear classification task on 10 datasets as it is conducted in MSF paper (Koohpayegani et al., 2021). The results are reported in Table 4. All the results other than ACA are taken from Koohpayegani et al. (2021). Although our method is trained with fewer epochs, it achieves competitive results with contrastive learning methods. Notably, it surpasses the 1000-epoch SimCLR which differs from our method only in loss. 
It shows that the embeddings learned by our method are also transferable to other downstream tasks. We think it is due to the universality of the correlation between augmentation similarity and semantical similarity across these benchmarks. E TRANSFER TO OBJECT DETECTION Following the procedure outlined in ?, we use Faster-RCNN Ren et al. (2015) for the task of object detection on PASCAL-VOC Everingham et al. (2015). We use the code provided at MoCo repository4 with default parameters. All the weights are finetuned on the trainval07+12 set and evaluated on the test07 set. We report an average over 5 runs in Table 5. Despite the shorter training epochs, our method can achieve better results than SimCLR, especially outperform by a large margin on AP75(> 1%). F PROOF OF LEMMA 4.1 For convenient, we define M := Â⊤Â. The elements of M are: Mx1x2 = ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)√ dx1 √ dx2 ,x1,x2 ∈ X (13) Expanding Equation (3), we get: Lmf = ∑ x1,x2∈X (Mx1x2 − F⊤x1Fx2) 2 = ∑ x1,x2∈X (Mx1x2 − √ dx1 √ dx2fθ(x1) ⊤fθ(x2)) 2 = const − 2 ∑ x1,x2∈X √ dx1 √ dx2Mx1x2fθ(x1) ⊤fθ(x2) + ∑ x1,x2∈X dx1dx2(fθ(x1) ⊤fθ(x2)) 2 = const − 2 ∑ x1,x2∈X ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)fθ(x1)⊤fθ(x2) + ∑ x1,x2∈X dx1dx2(fθ(x1) ⊤fθ(x2)) 2 4https://github.com/facebookresearch/moco multiply by p(x̄) = 1N and replace dx with ∑ x̄ p(x | x̄) = NpA(x). The objective becomes: min θ − 2 ∑ x1,x2∈X ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)p(x̄)fθ(x1)⊤fθ(x2) +N ∑ x1,x2∈X pA(x1)pA(x2)(fθ(x1) ⊤fθ(x2)) 2 = −2E x̄∼p(x̄),xi∼A(xi|x̄) xj∼A(xj |x̄) [ fθ(x1) ⊤fθ(x2) ] +NEx1∼pA(x1),x2∼pA(x2) [ (fθ(x1) ⊤fθ(x2)) 2 ] = LACA-PC G PROOF OF THEOREM 4.2 As in Appendix F, we define M := Â⊤Â. By Eckart–Young–Mirsky theorem (Eckart & Young, 1936), the minimizer F̂ of ∥M − FF⊤∥2F , must have the form V̂ Σ̂Q, where V̂ , Σ̂ contain the top-k singular values and corresponding right singular vectors of Â, Q ∈ Rk×k is some orthonormal matrix with Q⊤Q = I . Since we let Fx = √ dxfθ(x), then the minimizer θ⋆ must satisfy fθ⋆(x) = Q σ̂ ⊙ v̂(x)√ dx = Q [σ1v1(x), σ2v2(x), . . . , σkvk(x)] ⊤ √ dx . where ⊙ is the element-wise multiplication. For convenience, we use σi to denote i-th largest singular value, ui(x̄),vi(x) to denote the element of i-th left/right singular value corresponding to x̄/x . When p(x̄) = 1N , dx = NpA(x) = pA(x) p(x̄) . Then the posterior distance: d2post(x1,x2) = ∑ x̄∈X̄ (pA(x̄ | x1)− pA(x̄ | x2))2 = ∑ x̄∈X̄ ( p(x1 | x̄)p(x̄) pA(x1) − p(x1 | x̄)p(x̄) pA(x1) )2 = ∑ x̄∈X̄ ( p(x1 | x̄) dx1 − p(x2 | x̄) dx2 )2 = ∑ x̄∈X̄ ( Âx̄x1√ dx1 − Âx̄x2√ dx2 )2 = ∑ x̄∈X̄ ( N∑ i=1 σiui(x̄)vi(x1)√ dx1 − σiui(x̄)vi(x2)√ dx2 )2 = ∑ x̄∈X̄ ( N∑ i=1 σiui(x̄)( vi(x1)√ dx1 − vi(x2)√ dx2 ) )2 = ∑ x̄∈X̄ ∑ i,i′ σiui(x̄)σi′ui′(x̄)( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 ) = ∑ i,i′ σiσi′( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 ) ∑ x̄∈X̄ ui(x̄)ui′(x̄) (1) = ∑ i,i′ σiσi′( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 )δi,i′ = N∑ i=1 σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 (14) (1) is due to the orthogonality of singular vectors. Note that: N∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 = L∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 − L∑ i=N+1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≤ L∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 = L∑ i=1 v2i (x1) dx1 + L∑ i=1 v2i (x2) dx2 − 2 L∑ i=1 vi(x1)vi(x2)√ dx1 √ dx2 = 1 dx1 + 1 dx2 − 2δx1x2√ dx1 √ dx2 (2) ≤ ( 1 dx1 + 1 dx2 )(1− δx1x2) ≤ 2 dmin (1− δx1x2) (2) can be deduced by considering conditions whether x1 = x2 or not. 
Then: ∥fθ⋆(x1)− fθ⋆(x2)∥2 = k∑ i=1 σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 =d2post(x1,x2)− N∑ i=k σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 (≤ d2post(x1,x2)) ≥d2post(x1,x2)− σ2k+1 N∑ i=k+1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≥d2post(x1,x2)− σ2k+1 N∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≥d2post(x1,x2)− 2σ2k+1 dmin (1− δx1x2) Therefore, we have proved Theorem 4.2. H PROOF OF THEOREM 4.3 similar to Appendix G, d2w-aug(x̄1, x̄2) = ∑ x∈X 1 NpA(x) (p(x | x̄1)− p(x | x̄2))2 = ∑ x∈X ( p(x | x̄1)√ NpA(x) − p(x | x̄1)√ NpA(x) )2 = ∑ x∈X ( p(x | x̄1)√ dx − p(x | x̄1)√ dx )2 = ∑ x∈X ( Âx̄1x − Âx̄2x )2 = ∑ x∈X ( N∑ i=1 σiui(x̄1)vi(x)− σiui(x̄2)vi(x) )2 = ∑ x∈X ( N∑ i=1 σi(ui(x̄1)− ui(x̄2))vi(x) )2 = ∑ x∈X ∑ i,i′ σivi(x)σi′vi′(x)(ui(x̄1)− ui(x̄2))(ui′(x̄1)− ui′(x̄2)) = ∑ i,i′ σiσi′(ui(x̄1)− ui(x2))(ui′(x̄1)− ui′(x̄2)) ∑ x∈X vi(x)vi′(x) (1) = ∑ i,i′ σiσi′(ui(x̄1)− ui(x̄2))(ui′(x̄1)− ui′(x̄2))δi,i′ = N∑ i=1 σ2i (ui(x1)− ui(x2))2 (1) is due to the orthogonality of singular vectors. And g(x̄) takes the following form: g(x̄) = Q [ σ21u1(x), σ 2 2u2(x), . . . , σ 2 kuk(x) ]⊤ . Thus, ∥g(x̄1)− g(x̄2)∥2Σ−2k = k∑ i=1 σ2i (ui(x1)− ui(x2))2 = d2w-aug(x̄1, x̄2)− N∑ i=k+1 σ2i (ui(x1)− ui(x2))2 (≤ d2w-aug(x̄1, x̄2)) ≥ d2w-aug(x̄1, x̄2)− σ2k+1 N∑ i=1 (ui(x1)− ui(x2))2 = d2w-aug(x̄1, x̄2)− 2σ2k+1(1− δx̄1x̄2) I ABLATION STUDY ON PARAMETER α AND K We conduct ablation experiments on the parameter α and K. α is the trade-off parameter between ACA-PC loss and projection loss Equation (10). K act as the noise strength for ACA-PC, which replaces N in Equation (4). Figure 6 shows the effect of α and K on different benchmarks. It can be seen that α is necessary to improve the performance of ACA-PC. A certain value of α helps the model to achieve better results. However, a too large value of α degrades the performance. The same phenomenon is the same on K. J COMPARISON OF NEAREST NEIGHBORS We randomly select 8 samples from the validation set of ImageNet-100 (Tian et al., 2020a). Then we use the encoder learned by our ACA method and SimCLR (Chen et al., 2020a) to extract features and investigate their nearest neighbors of them. The left-most column displays the selected samples and the following columns show the 5 nearest neighbors. The samples labeled as different classes are marked by the red box. We also annotate the distance between the samples and their nearest neighbors. First, we can see that even though utilizing the augmentation in a different way, ACA achieves similar results as traditional contrastive learning. Both of them can learn semantically meaningful embeddings. However, we can see that ACA tends to learn embeddings that pull together images that are similar in the input space, i.e., creating similar augmentation, while SimCLR sometimes has neighbors that seem different.
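As a numeric sanity check (added for illustration, not part of the paper) of the Eckart–Young step used in Appendix F and G: the rank-k minimizer of ∥Â⊤Â − FF⊤∥²F is attained by F = Ṽ Σ̃ up to an orthonormal rotation, so reconstructing with the top-k singular pairs reproduces the optimal error exactly.

```python
# A small numeric check of the Eckart-Young argument behind Lemma 4.1 and
# Appendices F/G: with F built from the top-k right singular vectors of A_hat
# scaled by the top-k singular values, ||A_hat^T A_hat - F F^T||_F equals the
# norm of the discarded spectrum of A_hat^T A_hat.
import numpy as np

rng = np.random.default_rng(0)
N, L_dim, k = 20, 50, 5
A_hat = rng.random((N, L_dim))
M = A_hat.T @ A_hat

U, s, Vt = np.linalg.svd(A_hat, full_matrices=False)    # s: singular values of A_hat
F = Vt[:k].T * s[:k]                                     # columns scaled by top-k singular values
err_opt = np.linalg.norm(M - F @ F.T)                    # Frobenius norm of the residual
err_truncated = np.sqrt((s[k:] ** 4).sum())              # contribution of the discarded singular values
print(np.allclose(err_opt, err_truncated))               # True: matches the Eckart-Young bound
```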
1. What is the main contribution of the paper regarding contrastive learning? 2. How does the reviewer assess the strengths and weaknesses of the proposed approach? 3. Do you have any questions about the analysis of dimension reduction or the similarity constraint? 4. How does the reviewer evaluate the performance gain of ACA-PC? 5. Are there any concerns regarding the clarity and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper empirically claims that the overlap among the augmentation distributions from a similar category is much higher than those from dissimilar categories. Based on this discovery, this work establishes the semantic relationship between samples in an unsupervised manner based on the similarity of augmentation distributions. Technically, the paper proposes a new self-supervised loss function via maximizing the similarity of samples from the same augmentation distribution and making those from different augmentation distribution as orthogonal as possible for further enhancing the representation learning. Theoretically, this work is intended to explain why contrastive learning can effectively work with its proposed Augmentation Component Analysis. Strengths And Weaknesses Strengths: + The perspective of augmentation overlaps is interesting for contrastive learning. Especially for those samples from the same category, the discovery of augmentation overlaps is much useful for representation learning in an unsupervised way. + The loss functions proposed in this paper are verified their effectiveness for self-supervised representation learning. Especially, a projection loss is designed for compacting the representation from the same samples, which shares similar sprit with the prototype learning in supervised learning. The difference lies in the projection loss is mainly for the augmentation distribution of a single sample. Weaknesses: - In this exploration of augmentation overlaps, the similarity between different samples don’t seem to be explained very well in this paper. And, there also seems to have no corresponding loss function or theoretic derivation can prove the model could pull the semantically similar samples close, as showed in the left of Figure 1. - The analysis of dimension reduction with the idea of PCA is not suitable to derive the similarity constraint in Eq. (4). Besides, the ACA-PC loss is similar to the function appeared in [1]. - Performance gain of ACA-PC can be very marginal to prove the effectiveness of the proposed ACA. - There exist many sentences in this paper that are hard to digest. E.g., “we claim that it is the similarity between the augmentation distributions of samples, …, that reveals the sample-wise similarity better” (page 1), “It seems that we can obtain the solution directly without further learning, however, not only the elements in the augmentation feature are hard to calculate, but also such high-dimensional target embeddings are impractical to use” (page 1),, “In addition, the resemblance between the objectives of ACA and traditional contrastive loss may explain why the latter can learn semantic-related embeddings” (page 2). [1] HaoChen J Z, Wei C, Gaidon A, et al. Provable guarantees for self-supervised deep learning with spectral contrastive loss[J]. Advances in Neural Information Processing Systems, 2021, 34: 5000-5011. Clarity, Quality, Novelty And Reproducibility The novelty of this paper can be sufficient and it can be easy to reimplement the proposed ACA.
ICLR
Title Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps Abstract Self-supervised learning aims to learn an embedding space where semantically similar samples are close. Contrastive learning methods pull views of samples together and push different samples away, which utilizes the semantic invariance of augmentation but ignores the relationship between samples. To better exploit the power of augmentation, we observe that semantically similar samples are more likely to have similar augmented views. Therefore, we can take the augmented views as a special description of a sample. In this paper, we model such a description as the augmentation distribution, and we call it the augmentation feature. The similarity in augmentation feature reflects how much the views of two samples overlap and is related to their semantic similarity. Without the computational burden of explicitly estimating the values of the augmentation feature, we propose Augmentation Component Analysis (ACA) with a contrastive-like loss to learn principal components and an on-the-fly projection loss to embed data. ACA amounts to an efficient dimension reduction by PCA and extracts low-dimensional embeddings, theoretically preserving the similarity of augmentation distribution between samples. Empirical results show that our method can achieve competitive results against various traditional contrastive learning methods on different benchmarks. Code available at https://github.com/hanlu-nju/AugCA. 1 INTRODUCTION The rapid development of contrastive learning has pushed self-supervised representation learning to unprecedented success. Many contrastive learning methods surpass traditional pretext-based methods by a large margin and even outperform representations learned by supervised learning (Wu et al., 2018; van den Oord et al., 2018; Tian et al., 2020a; He et al., 2020; Chen et al., 2020a;c). The key idea of self-supervised contrastive learning is to construct views of samples via modern data augmentations (Chen et al., 2020a). Then discriminative embeddings are learned by pulling together views of the same sample in the embedding space while pushing apart views of others. Contrastive learning methods utilize the semantic invariance between views of the same sample, but the semantic relationship between samples is ignored. Instead of measuring the similarity between certain augmented views of samples, we claim that the similarity between the augmentation distributions of samples can reveal the sample-wise similarity better. In other words, semantically similar samples have similar sets of views. As shown in Figure 1 left, two images of deer create many similar crops, and the sets of their augmentation results, i.e., their distributions, overlap substantially. In contrast, a car image will rarely be augmented to the same crop as a deer, and their augmentation distributions overlap little. In Figure 1 right, we verify the motivation numerically. We approximate the overlaps between image augmentations with a classical image matching algorithm (Zitova & Flusser, 2003), which counts the portion of the key points matched in the raw images. We find samples of the same class overlap more than those of different classes on average, supporting our motivation. Therefore, we establish the semantic relationship between samples in an unsupervised manner based on the similarity of augmentation distributions, i.e., how much they overlap. In this paper, we propose to describe data directly by their augmentation distributions.
We call this kind of feature the augmentation feature. The elements of the augmentation feature represent the probability of obtaining a certain view by augmenting the sample, as shown in the left of Figure 2. The augmentation feature serves as an “ideal” representation since it encodes the augmentation information without any loss, and we can easily obtain the overlap of two samples from it. However, not only are its elements hard to calculate, but such high-dimensional embeddings are also impractical to use. Inspired by the classical strategy for dealing with high-dimensional data, we propose Augmentation Component Analysis (ACA), which employs the idea of PCA (Hotelling, 1933) to perform dimension reduction on the augmentation features mentioned above. ACA reformulates the steps of extracting principal components of the augmentation features as a contrastive-like loss. With the learned principal components, another on-the-fly loss embeds samples effectively. ACA learns operable low-dimensional embeddings that theoretically preserve the augmentation distribution distances. In addition, the similarity between the objectives of ACA and the traditional contrastive loss may explain why contrastive learning can learn semantic-related embeddings – they embed samples into spaces that partially preserve augmentation distributions. Experiments on synthetic and real-world datasets demonstrate that our ACA achieves competitive results against various traditional contrastive learning methods. Our contributions are as follows: • We propose a new self-supervised strategy, which measures sample-wise similarity via the similarity of augmentation distributions. This new perspective facilitates learning embeddings. • We propose the ACA method, which implicitly performs dimension reduction over the augmentation feature, and the learned embeddings preserve the augmentation similarity between samples. • Benefiting from the resemblance to the contrastive loss, our ACA helps explain the functionality of contrastive learning and why such methods can learn semantically meaningful embeddings. 2 RELATED WORK Self-Supervised Learning. Learning effective visual representations without human supervision is a long-standing problem. Self-supervised learning methods solve this problem by creating supervision from the data itself instead of from human labelers. The model needs to solve a pretext task before it is used for downstream tasks. For example, in computer vision, the pretext tasks include colorizing grayscale images (Zhang et al., 2016), inpainting images (Pathak et al., 2016), predicting relative patch positions (Doersch et al., 2015), solving jigsaw puzzles (Noroozi & Favaro, 2016), predicting rotations (Gidaris et al., 2018) and exploiting generative models (Goodfellow et al., 2014; Kingma & Welling, 2014; Donahue & Simonyan, 2019). Self-supervised learning also achieves great success in natural language processing (Mikolov et al., 2013; Devlin et al., 2019). Contrastive Learning and Non-Contrastive Methods. Contrastive approaches have been one of the most prominent representation learning strategies in self-supervised learning. Similar to metric learning in supervised scenarios (Ye et al., 2019; 2020), these approaches maximize the agreement between positive pairs and minimize the agreement between negative pairs.
Positive pairs are commonly constructed by co-occurrence (van den Oord et al., 2018; Tian et al., 2020a; Bachman et al., 2019) or by augmentations of the same sample (He et al., 2020; Chen et al., 2020a;c; Li et al., 2021; Ye et al., 2023), while all the other samples are taken as negatives. Most of these methods employ the InfoNCE loss (van den Oord et al., 2018), which acts as a lower bound of the mutual information between views. Based on this idea, several methods attempt to improve contrastive learning, including mining nearest neighbours (Dwibedi et al., 2021; ?; Azabou et al., 2021) and creating extra views by mixing up (Kalantidis et al., 2020) or adversarial training (Hu et al., 2021). Another stream of methods employs a similar idea to pull views of a sample together without using negative samples (Grill et al., 2020; Chen & He, 2021). Barlow Twins (Zbontar et al., 2021) minimizes the redundancy within the representation vector. Tsai et al. (2021) reveal the relationship among Barlow Twins, contrastive and non-contrastive methods. Most of these methods only utilize the semantic invariance of augmentation and ignore the relationship between samples. Different from them, we propose a new way to perform self-supervised learning by preserving the similarity of augmentation distributions, based on the observation that a strong correlation exists between the similarity of augmentation distributions and the similarity of semantics. Explanation of Contrastive Learning. Several works provide empirical or theoretical results explaining the behavior of contrastive learning. Tian et al. (2020b); Xiao et al. (2021) explore the role of augmentation and show that the contrastive model can extract useful information from views but can also be affected by nuisance information. Zhao et al. (2021) empirically show that contrastive learning preserves low-level or middle-level instance information. In theoretical studies, Saunshi et al. (2019) provide guarantees for downstream linear classification tasks under a conditional independence assumption. Other works weaken the assumption, but the weakened assumptions are still unrealistic (Lee et al., 2021; Tosh et al., 2021). HaoChen et al. (2021) focus on how views of different samples are connected by the augmentation process and provide guarantees under certain connectivity assumptions. Wang et al. (2022) notice that the augmentation overlap provides a ladder for gradually learning class-separated representations. In addition to the alignment and uniformity shown by Wang & Isola (2020), Huang et al. (2021) develop theories on the crucial effect of data augmentation on the generalization of contrastive learning. Hu et al. (2022) explain that the contrastive loss is implicitly doing SNE with “positive” pairs constructed from data augmentation. Inspired by the important role of augmentation, we provide a novel self-supervised method that preserves augmentation overlap. 3 NOTATIONS The set of all natural data (data without augmentation) is denoted by X̄, with size |X̄| = N. We assume that the natural data follow a uniform distribution p(x̄) on X̄, i.e., p(x̄) = 1/N, ∀x̄ ∈ X̄. By applying an augmentation method A, a natural sample x̄ ∈ X̄ can be augmented to another sample x with probability pA(x | x̄), so we use p(· | x̄) to denote the augmentation distribution (see Footnote 1). For example, if x̄ is an image, then A can be common augmentations like Gaussian blur, color distortion and random cropping (Chen et al., 2020a).
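In practice one never writes pA(· | x̄) down explicitly; one only samples from it by running a stochastic augmentation pipeline. A minimal Python sketch of such sampling, using torchvision with illustrative parameters (the recipe below is only loosely SimCLR-style and is our assumption, not the paper's exact configuration):

import torchvision.transforms as T
from PIL import Image

# Each call to `augment` draws one sample x from the augmentation distribution p(. | x_bar).
augment = T.Compose([
    T.RandomResizedCrop(32),                 # random cropping
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),       # color distortion
    T.RandomGrayscale(p=0.1),
    T.GaussianBlur(kernel_size=3),           # Gaussian blur
])

x_bar = Image.open("example.png")            # a natural sample (hypothetical file)
x1, x2 = augment(x_bar), augment(x_bar)      # two independent draws from p(. | x_bar)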
Denote the set of all possible augmented data as X. We assume X has finite size |X| = L with L > N for ease of exposition. Note that N and L are finite, but can be arbitrarily large. We denote the encoder as fθ, parameterized by θ, which projects a sample x to an embedding vector in R^k. 4 LEARNING VIA AUGMENTATION OVERLAPS As mentioned in Section 1, measuring the similarity between the augmentation distributions of two samples, i.e., the overlap of their augmented results, reveals their semantic relationship well. For example, in natural language processing, we usually generate augmented sentences by dropping out some words. Different sentences with similar meanings are then likely to contain the same set of words and thus have a high probability of creating similar augmented data. With the help of this self-supervision, we formulate the embedding learning task to meet the following similarity preserving condition: d_Rk(fθ⋆(x̄1), fθ⋆(x̄2)) ∝ d_A(p(· | x̄1), p(· | x̄2)). (1) Here d_Rk is a distance measure in the embedding space R^k, and d_A measures the distance between two augmentation distributions. Equation (1) requires that the learned embedding with the optimal parameter θ⋆ gives the same similarity comparisons as those measured by the augmentation distributions. In this section, we first introduce the augmentation feature for each sample, which is a manually designed embedding satisfying the condition in Equation (1). To handle the high dimensionality and complexity of the augmentation feature, we further propose our Augmentation Component Analysis (ACA), which learns to reduce the dimensionality while preserving the similarity. Footnote 1: Note that pA(· | x̄) is usually difficult to compute and we can only sample from it. We omit the subscript A and directly use p(· | x̄) in the following for convenience. 4.1 AUGMENTATION FEATURE To reach the goal of similarity preservation in Equation (1), a direct way is to manually construct the feature from the augmentation distribution of each natural sample, i.e., f(x̄) = [p(x1 | x̄), . . . , p(xL | x̄)]⊤, where each element p(xi | x̄) represents the probability of getting a certain element xi in the space X by augmenting x̄. We omit θ in f(x̄) since such an augmentation feature (see Footnote 2) does not rely on any learnable parameters. In this case, any distance d_RL defined in the space of f is exactly a valid distribution distance, which reveals the augmentation overlaps and is related to the semantic similarity. Although this constructive augmentation feature naturally satisfies the similarity preserving condition (Equation (1)), because it directly uses the augmentation distribution without loss of information, it is impractical for the following reasons. First, its dimensionality is exponentially high: it is up to L, the number of possible augmented results. For example, even on CIFAR-10, a small-scale dataset with image size 32 × 32 × 3, L is up to 256^3072 (3072 pixel values, each with 256 possible levels). Second, the computation of each element is intractable. We may need an exponentially large number of samples to accurately estimate each p(x | x̄). The dimensionality and computation problems make the augmentation feature impractical at both inference and training time. Such inconvenience motivates us to (1) conduct a certain dimension reduction to preserve the information in a low-dimensional space (Section 4.2) and (2) develop an efficient algorithm for this dimension reduction (Section 4.3).
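Before turning to dimension reduction, a minimal numerical sketch of the augmentation feature itself may help; the three-element augmented space and the probabilities below are hypothetical and only illustrate that distances between augmentation features measure how much two samples' augmentations overlap:

import numpy as np

# Hypothetical toy augmented space X = {x1, x2, x3}; each vector is an augmentation
# feature f(x_bar) = [p(x1 | x_bar), p(x2 | x_bar), p(x3 | x_bar)].
f_deer_a = np.array([0.6, 0.4, 0.0])   # this deer image never augments to x3
f_deer_b = np.array([0.5, 0.5, 0.0])   # overlaps heavily with deer_a
f_car    = np.array([0.0, 0.1, 0.9])   # barely overlaps with either deer image

def dist(p, q):
    # Any distance on the augmentation feature is a distance between distributions.
    return np.linalg.norm(p - q)

print(dist(f_deer_a, f_deer_b))  # small distance: large augmentation overlap
print(dist(f_deer_a, f_car))     # large distance: little overlap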
4.2 DIMENSION REDUCTION ON AUGMENTATION FEATURES To deal with the high dimensionality, we employ the idea of PCA (Hotelling, 1933), which reconstructs the data with principal components (see Footnote 3). For convenience, we denote the design matrix of augmentation features by A, where A ∈ R^(N×L) and A_x̄,x = p(x | x̄) (see Figure 2). We perform PCA on a transformed augmentation feature called the normalized augmentation feature: Â = A D^(−1/2), (2) where D = diag([d_x1, d_x2, . . . , d_xL]) and d_x = Σ_x̄ p(x | x̄). Based on the normalized augmentation feature, we can develop an efficient algorithm for similarity-preserving embeddings. Assume the SVD of Â is Â = UΣV⊤ with U ∈ R^(N×N), Σ ∈ R^(N×L), V ∈ R^(L×L). PCA first learns the projection matrix consisting of the top-k right singular vectors, denoted as Ṽ ∈ R^(L×k). The vectors in Ṽ are called Principal Components (PCs). Then, it projects the feature by ÂṼ to get the embeddings for each sample. The overall procedure is illustrated at the top-right of Figure 2. But performing PCA on the augmentation feature directly encounters many obstacles. The elements of the augmentation feature cannot be estimated accurately, not to mention its high dimensionality. Footnote 2: Following common knowledge in dimension reduction, we call the raw high-dimensional representation the “feature” and the learned low-dimensional representation the “embedding”. Footnote 3: In this paper, we use the non-centred version (Reyment & Jvreskog, 1996), which is more appropriate for observations than for variables, where the origin matters more. Even if we could somehow obtain the projection matrix Ṽ, it would still be impractical to project the high-dimensional matrix Â. For this reason, we propose ACA to make the PC learning and projection processes efficient without explicitly calculating elements of the augmentation feature. 4.3 AUGMENTATION COMPONENT ANALYSIS Although there are several obstacles to performing PCA on the augmentation features directly, it is fortunately efficient to sample from the augmentation distribution p(x | x̄), i.e., by performing augmentation on the natural data x̄ and getting an augmented sample x. Aware of this, our ACA uses two practical losses to simulate the PCA process efficiently by sampling. The first, contrastive-like loss leads the encoder to learn the principal components of Â and can be efficiently optimized by sampling, like traditional contrastive methods. The second loss performs an on-the-fly projection of Â along the training trajectory, which solves the difficulty of high-dimensional projection. Learning principal components. ACA learns the principal components by an efficient contrastive-like loss. Besides their projection functionality, these learned principal components can also serve as embeddings that preserve a kind of posterior distribution similarity, as we will show later. In the SVD view, UΣ serves as the PCA projection results for samples and V contains the principal components (Jolliffe, 2002). However, changing the viewpoint, VΣ can be seen as the representation of each column. Since each column of Â encodes the probability of an augmented datum given the natural data, VΣ preserves certain augmentation relationships, as we will show in Theorem 4.2 later. To leverage the extrapolation power of encoders such as deep neural networks, we design a loss that guides the parameterized encoder fθ to learn embeddings similar to those of PCA. Inspired by the rank minimization view of PCA (Vidal et al., 2016), we employ the low-rank approximation objective with matrix factorization, similar to HaoChen et al.
(2021): min_{F∈R^(L×k)} L_mf = ‖Â⊤Â − FF⊤‖²_F, (3) where the columns of F store scaled versions of the top-k right singular vectors, and each row can be seen as the embedding of an augmented datum, as we will show in Lemma 4.1. According to the Eckart–Young–Mirsky theorem (Eckart & Young, 1936), by optimizing L_mf we can get the optimal F̂, which has the form Ṽ Σ̃Q, where Q ∈ R^(k×k) is an orthonormal matrix and Σ̃ and Ṽ contain the top-k singular values and right singular vectors. By expanding Equation (3), we get the Augmentation Component Analysis loss for learning Principal Components (ACA-PC) in the following lemma: Lemma 4.1 (ACA-PC loss). Let F_x,: = √d_x fθ(x)⊤, ∀x ∈ X. Minimizing L_mf is equivalent to minimizing the following objective: L_ACA-PC = −2 E_{x̄∼p(x̄), x_i∼p(x_i|x̄), x_j∼p(x_j|x̄)} [fθ(x_i)⊤ fθ(x_j)] + N E_{x1∼pA(x1), x2∼pA(x2)} [(fθ(x1)⊤ fθ(x2))²]. (4) The proof can be found in Appendix F. In ACA-PC, the first term is the common alignment loss for augmented data and the second term is a form of uniformity loss (Wang & Isola, 2020). Both terms can be estimated by Monte-Carlo sampling. ACA-PC is a kind of contrastive loss, but unlike most of the others, it has a theoretical meaning. We note that the form of ACA-PC differs from the spectral loss (HaoChen et al., 2021) by adding a constant N before the uniformity term. This term is similar to the noise strength in NCE (Gutmann & Hyvärinen, 2010) or the number of negative samples in InfoNCE (van den Oord et al., 2018). It can be proved that the embeddings learned by ACA-PC preserve the posterior distribution distances between augmented data: Theorem 4.2 (Almost isometry for posterior distances). Assume fθ is a universal encoder, let σ_{k+1} be the (k+1)-th largest singular value of Â, d_min = min_x d_x, and δ_{x1x2} = I(x1 = x2). Then the minimizer θ∗ of L_ACA-PC satisfies: d²_post(x1, x2) − (2σ²_{k+1} / d_min)(1 − δ_{x1x2}) ≤ ‖fθ∗(x1) − fθ∗(x2)‖²₂ ≤ d²_post(x1, x2), ∀x1, x2 ∈ X, where the posterior distance d²_post(x1, x2) = Σ_{x̄∈X̄} (pA(x̄ | x1) − pA(x̄ | x2))² (5) measures the squared Euclidean distance between the posterior distributions pA(x̄ | x) = p(x | x̄) p(x̄) / pA(x). We give the proof in Appendix G. Theorem 4.2 states that the optimal encoder for ACA-PC preserves the distance between the posterior distributions of augmented data, within an error related to the embedding size k. As k increases to N, the error decreases to 0. This corresponds to the phenomenon that a larger embedding size leads to better contrastive performance (Chen et al., 2020a). The posterior distribution pA(x̄ | x) represents the probability that a given augmented sample x was created from a natural sample x̄. Augmented data that are produced only by the same natural sample will have the smallest distance, and the embeddings of those in overlapped areas will be pulled together by ACA-PC. Since the overlapped areas are usually created by two same-class samples, ACA-PC can form a semantically meaningful embedding space. It is also noticeable that the optimal encoder meets the similarity preserving condition (Equation (1)), but with respect to the posterior distribution of augmented data rather than the augmentation distribution of natural data. Since what we care about is the distribution of natural data, we further propose a projection loss that helps learn good embeddings for all the natural data. On-the-fly Projection. As stated in the previous part, the embeddings learned by ACA-PC not only serve as embeddings for augmented data but also contain the principal components of the normalized augmentation feature.
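To make this concrete, the exact solution that ACA-PC approximates by sampling can be computed directly at toy scale, where the augmentation matrix fits in memory. The numpy sketch below is ours and only uses the definitions above (Â = A D^(−1/2), top-k SVD, fθ∗(x) = Q[σ1 v1(x), ..., σk vk(x)]⊤ / √d_x); variable names are illustrative and this is not the released implementation:

import numpy as np

def exact_aca_pc_embeddings(A, k):
    # A: (N, L) toy augmentation matrix with A[i, j] = p(x_j | x_bar_i); only feasible when L is tiny.
    d = A.sum(axis=0)                              # d_x = sum over natural samples of p(x | x_bar)
    A_hat = A / np.sqrt(d)                         # normalized augmentation feature A_hat = A D^(-1/2)
    U, S, Vt = np.linalg.svd(A_hat, full_matrices=False)
    # The embedding ACA-PC converges to (up to an orthonormal rotation Q):
    return (Vt[:k].T * S[:k]) / np.sqrt(d)[:, None]   # shape (L, k), one row per augmented datum

# Toy usage: 5 natural samples, 8 possible augmented results.
A = np.random.default_rng(0).dirichlet(np.ones(8), size=5)
print(exact_aca_pc_embeddings(A, k=3).shape)       # (8, 3)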
Based on this property, we propose to use these ACA-PC embeddings as a projection operator to obtain meaningful embeddings for all the natural data. To be specific, denote the embedding matrix for all augmented data as F^aug ∈ R^(L×k), where each row F^aug_x,: = fθ∗(x)⊤. From Equation (3) and F̂_x,: = √d_x fθ∗(x)⊤, it can easily be seen that F^aug = D^(−1/2) F̂ = D^(−1/2) Ṽ Σ̃Q. Similar to PCA (Hotelling, 1933), which projects the original feature by the principal components V, we propose to use F^aug to project the augmentation feature to get the embedding of each natural sample. Denote the embedding matrix for natural data as F^nat ∈ R^(N×k), where each row F^nat_x̄,: represents the embedding of x̄. We compute F^nat as follows: F^nat = A F^aug = Â D^(1/2) D^(−1/2) Ṽ Σ̃Q = (Ũ Σ̃) Σ̃Q, (6) where Σ̃ and Ũ contain the top-k singular values and the corresponding left singular vectors. It is noticeable that F^nat is exactly the PCA projection result multiplied by an additional matrix Σ̃Q. Fortunately, such an additional linear transformation does not affect the linear probe performance (HaoChen et al., 2021). With Equation (6), the embedding of each natural sample can be computed as follows: F^nat_x̄,: = A_x̄,: F^aug = Σ_x p(x | x̄) fθ∗(x)⊤ = E_{x∼p(x|x̄)} fθ∗(x)⊤, (7) which is exactly the expected embedding over the augmentation distribution. Similar to Theorem 4.2, the embeddings calculated by Equation (7) also exhibit a certain isometry property: Theorem 4.3 (Almost isometry for weighted augmentation distances). Assume fθ is a universal encoder, let σ_{k+1} be the (k+1)-th largest singular value of Â and δ_{x̄1x̄2} = I(x̄1 = x̄2), let the minimizer of L_ACA-PC be θ∗ and let g(x̄) = E_{x∼p(x|x̄)} fθ∗(x) as in Equation (7). Then: d²_w-aug(x̄1, x̄2) − 2σ²_{k+1} (1 − δ_{x̄1x̄2}) ≤ ‖g(x̄1) − g(x̄2)‖²_{Σ_k^(−2)} ≤ d²_w-aug(x̄1, x̄2), ∀x̄1, x̄2 ∈ X̄, where ‖·‖_{Σ_k^(−2)} denotes the Mahalanobis distance with matrix Σ_k^(−2), Σ_k = diag([σ1, σ2, . . . , σk]) is the diagonal matrix containing the top-k singular values, and the weighted augmentation distance d²_w-aug(x̄1, x̄2) = (1/N) Σ_{x∈X} (p(x | x̄1) − p(x | x̄2))² / pA(x) (8) measures a weighted squared Euclidean distance between the augmentation distributions p(x | x̄). Different from Theorem 4.2, which presents an isometry between Euclidean distances in the embedding space and posterior distribution distances, Theorem 4.3 presents an isometry in terms of Mahalanobis distances. The weighted augmentation distance weighs the Euclidean distance by pA(x). d_w-aug can be regarded as a valid augmentation distance measure d_A as in Equation (1), and F^nat preserves such a distance. So our goal is to make the embedding of x̄ approach E_{p(x|x̄)} fθ⋆(x). However, as stated before, this additional projection process is not efficient, i.e., we would need exponentially many samples from p(x | x̄). We notice that samples drawn during the training process of ACA-PC can be reused. For this reason, we propose an on-the-fly projection loss that directly uses the current encoder for projection: L_proj = E_{x̄∼p(x̄)} [‖fθ(x̄) − E_{p(x|x̄)} fθ(x)‖²₂]. (9) Full objective of ACA. Based on the discussion above, ACA simultaneously learns the principal components by ACA-PC and projects the natural data by the on-the-fly projection loss. The full objective of ACA has the following form: L_ACA-Full = L_ACA-PC + α L_proj, (10) where α is a trade-off hyperparameter. We also find that N in Equation (4) is too large for stable training, so we replace it with a tunable hyperparameter K. Here, we only display the losses in expectation form. The details of the implementation are described in Appendix A.
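Since Appendix A is not reproduced here, the following PyTorch sketch shows one plausible way to turn Equations (4), (9) and (10) into mini-batch losses. The Monte-Carlo choices (using all in-batch view pairs for the uniformity term and the mean of the two available views for E_{p(x|x̄)} fθ(x)) are our assumptions, not necessarily the authors' exact implementation:

import torch

def aca_pc_loss(z1, z2, K=2.0):
    # z1, z2: embeddings of two augmented views of a batch, shape (B, k),
    # assumed L2-normalized as in the paper's setup.
    # Alignment: -2 E[f(x_i)^T f(x_j)] over views of the same natural sample.
    alignment = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Uniformity: K * E[(f(x_1)^T f(x_2))^2] over (approximately) independent augmented
    # samples; all in-batch pairs, including the diagonal, are used for simplicity.
    z = torch.cat([z1, z2], dim=0)
    uniformity = K * (z @ z.t()).pow(2).mean()
    return alignment + uniformity

def aca_full_loss(z_nat, z1, z2, alpha=0.2, K=2.0):
    # z_nat: embeddings of the un-augmented images. Eq. (9) needs E_{p(x|x_bar)} f(x),
    # approximated here by the mean of the two available views.
    center = 0.5 * (z1 + z2)
    proj = (z_nat - center).pow(2).sum(dim=1).mean()
    return aca_pc_loss(z1, z2, K) + alpha * proj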
5 A PILOT STUDY In this section, we experiment with our Augmentation Component Analysis method on synthetic mixture-component data with a Gaussian augmentation. In this example, we aim to show the relationship between semantic similarity and the posterior/weighted augmentation distances. We also show the effectiveness of our method compared to traditional contrastive learning. In this example, the natural data x̄ are sampled from a Gaussian mixture with c components: p(x̄) = Σ_{i=1}^{c} π_i N(µ_i, s_i I). We use Gaussian noise as the data augmentation of a natural sample, i.e., A(x̄) = x̄ + ξ, where ξ ∼ N(0, s_a I). Concretely, we conduct our experiment on 2-D data with c = 4, π_i = 1/c, s_i = 1, and µ_i uniformly distributed on a circle with radius 2. For each component, we sample 200 natural data points with the index of the component as their label. For each natural datum, we augment it 2 times with s_a = 4, which results in a total of 1,600 augmented data points. We compute the augmentation probability between x and x̄ as p(x | x̄) and normalize the probabilities for each x̄. First, we plot the distribution of posterior distances (Equation (5)) for pairs of augmented data and of weighted augmentation distances (Equation (8)) for pairs of natural data in Figure 3 left. The two distances appear to have similar distributions because the synthetic data are Gaussian. It can be seen that data from the same component tend to have small distances, while data from different components have large distances. In the low-distance areas, there are pairs of the same class, which means that the two distances are reliable metrics for judging semantic similarity. In all, this picture reveals the correlation between semantic similarity and the posterior/weighted augmentation distances. Second, we compare our methods with SimCLR (Chen et al., 2020a), the traditional contrastive method, and Spectral (HaoChen et al., 2021), which similarly learns embeddings with spectral theory. We test the learned embeddings using a Logistic Regression classifier and report the error rate of the predictions in Figure 3 right. We also report the performance when directly using the augmentation feature (AF). First, AF is already discriminative under a simple linear classifier. SimCLR and Spectral tend to underperform AF as the embedding size increases, while our methods consistently outperform it. This may be confusing since our method performs dimension reduction on this very feature. But we note that as the embedding size increases, the complexity of the linear model also increases, which affects generalizability. All the methods in Figure 3 right show degradation of this kind. However, our methods consistently outperform the others, which shows the superiority of ACA. Additionally, by adding the projection loss, ACA-Full improves over ACA-PC by a margin. Traditional contrastive learning like SimCLR achieves performance similar to our methods, which we think reveals that traditional contrastive learning has the same functionality as our methods. 6 EXPERIMENTS 6.1 SETUP Dataset. In this paper, we conduct experiments mainly on the following datasets with 4 RTX-3090 GPUs. CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009): two datasets containing a total of 500K images of size 32 × 32 from 10 and 100 classes respectively. STL-10 (Coates et al., 2011): derived from ImageNet (Deng et al., 2009), with 96 × 96 resolution images and 5K labeled training data from 10 classes. Additionally, 100K unlabeled images are used for unsupervised learning.
Tiny ImageNet: a reduced version of ImageNet (Deng et al., 2009), composed of 100K images scaled down to 64 × 64 from 200 classes. ImageNet-100 (Tian et al., 2020a): a subset of ImageNet with 100 classes. ImageNet (Deng et al., 2009): the large-scale dataset with 1K classes. Network Structure. Following common practice (Chen et al., 2020a;b;c), we use the encoder-projector structure during training, where the projector maps the embeddings into a low-dimensional space. For CIFAR-10 and CIFAR-100, we use the CIFAR variant of ResNet-18 (He et al., 2016; Chen & He, 2021) as the encoder. We use a two-layer MLP as the projector, whose hidden dimension is half of the input dimension and whose output dimension is 64. For STL-10 and Tiny ImageNet, only the max-pooling layer is disabled, following Chen & He (2021) and Ermolov et al. (2021). For these two datasets, we use the same projector structure, except that the output dimension is 128. For ImageNet, we use ResNet-50 with the same projector as Chen et al. (2020a). Image Transformation. Following the common practice of contrastive learning (Chen et al., 2020a), we apply the following augmentations sequentially during training: (a) crops of a random size; (b) random horizontal flipping; (c) color jittering; (d) grayscaling. For ImageNet-100 and ImageNet, we use the same implementation as Chen et al. (2020a). Optimizer and other Hyper-parameters. The Adam optimizer (Kingma & Ba, 2015) is used for all datasets except ImageNet. For CIFAR-10 and CIFAR-100, we train for 800 epochs with a learning rate of 3 × 10−3. For Tiny ImageNet and STL-10, we train for 1,000 epochs with a learning rate of 2 × 10−3. We use a 0.1 learning rate decay at 100, 50 and 20 epochs before the end. Due to hardware resource restrictions, we use a mini-batch size of 512. The weight decay is 1 × 10−6 if not specified otherwise. Following common practice in contrastive learning, we normalize the projected feature onto a sphere. For CIFAR-10, we use α = 1. For the remaining datasets, we use α = 0.2. By default, K is set to 2. For ImageNet, we use the same hyperparameters as Chen et al. (2020a) except that the batch size is 256, α = 0.2 and K = 2. Evaluation Protocol. We evaluate the learned representation with the two most commonly used protocols – linear classification (Zhang et al., 2016; Kolesnikov et al., 2019) and a k-nearest neighbors classifier (Chen & He, 2021). In all the experiments, we train the linear classifier for 100 epochs. The learning rate exponentially decays from 10−2 to 10−6. The weight decay is 1 × 10−6. We report the classification accuracy on test embeddings as well as the accuracy of a 5-Nearest Neighbors classifier for all datasets except ImageNet. 6.2 PERFORMANCE COMPARISON In Table 1, we compare the linear probe performance on various small-scale and mid-scale benchmarks with several methods, including SimCLR (Chen et al., 2020a), BYOL (Grill et al., 2020), SimSiam (Chen & He, 2021) and Spectral (HaoChen et al., 2021). For transfer learning benchmarks, please refer to Appendix D and Appendix E. SimCLR is a method that uses the contrastive loss. BYOL and SimSiam do not use negative samples. Spectral uses a similar loss derived from the idea of spectral clustering. From Table 1, we can see that our ACA-Full method achieves competitive results on small- and mid-scale benchmarks, achieving either the best or the second-best results on all benchmarks except the 5-NN evaluation on STL-10. Also, ACA-PC differs from ACA-Full only in the projection loss.
In all the benchmarks, we can see that the projection loss improves performance. For large-scale benchmarks, we compare several methods on ImageNet-100 and ImageNet. On ImageNet-100, we additionally compare our method to MoCo (He et al., 2020), L_align + L_uniform (Wang & Isola, 2020) and InfoMin (Tian et al., 2020b). Note that the results of the other three methods are reported using the ResNet-50 encoder, which has more capacity than ResNet-18. Our method still achieves state-of-the-art results among them. This means that our method is also effective with relatively small encoders even on large-scale datasets. On ImageNet, we see that ACA-PC achieves competitive performance against state-of-the-art contrastive methods (Chen et al., 2020a;c; Grill et al., 2020; Chen & He, 2021; HaoChen et al., 2021) and ACA-Full achieves the best results. 7 CONCLUSION AND FUTURE WORK In this paper, we provide a new way of constructing self-supervised contrastive learning tasks by modeling similarity through augmentation overlap, motivated by the observation that semantically similar data usually create similar augmentations. We propose Augmentation Component Analysis to perform PCA on the augmentation feature efficiently. Interestingly, our methods have a form similar to the traditional contrastive loss and may explain its ability. We hope our paper can inspire more thoughts about how to measure similarity in self-supervised learning and how to construct contrastive learning tasks. Future studies may explore applying ACA to learn representations of other forms of instances, such as tasks (Achille et al., 2019) and models (Wu et al., 2023). ACKNOWLEDGEMENTS This research was supported by NSFC (61773198, 62006112, 61921006), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the NSF of Jiangsu Province (BK20200313). B EFFECT OF AUGMENTATION OVERLAPS Like contrastive learning, our method relies on the quality of augmentation. Therefore, we investigate the influence of different augmentations and reveal the relationship between the distribution difference and the linear probe performance on CIFAR-10. The augmentation distribution is estimated by augmenting 10^6 times for a random subset of 2,000 pairs of samples, with 1,000 intra-class and 1,000 inter-class pairs respectively. Note that, as stated in Section 4.1, even on CIFAR-10 the actual value of L is exponentially large (up to 256^3072). It is impossible to accurately estimate a distribution over so many possible values. But we notice that for neural networks, many operators, like convolutions and poolings, can reduce the number of possible values. Following this observation, and to make the computation efficient, we discretize the color into 8 levels for each channel and use a max-pooling operation to get a 4 × 4 picture. With this kind of approximation, L reduces to 8^48 (48 values, each with 8 possible levels). This still seems too large, but note that the augmentation distribution of each sample covers only a small region, so it is enough to estimate the distribution by sampling. Due to memory restrictions, we cannot fully estimate the weighted augmentation distance in Theorem 4.3, because we cannot store all possible values of pA(x). Instead, we use the Hellinger distance as the distribution distance measure: d²_H(x̄1, x̄2) = (1/N) Σ_{x∈X} (√p(x | x̄1) − √p(x | x̄2))². The Hellinger distance ranges in [0, 2], making the comparison clear. We list the augmentations used in this experiment here: 1.
Grayscale: Randomly change the color to gray with probability 0.1. 2. HorizontalFlip: Randomly flip horizontally with probability 0.5. 3. Rotation: Randomly rotate the image by an angle uniformly distributed in [0, π]. 4. ColorJitter: Jitter (brightness, contrast, saturation, hue) with strength (0.4, 0.4, 0.4, 0.1) and probability 0.8. In Table 3, we display the histogram (HIST) of intra- and inter-class augmentation distribution distances. ACC displays the linear probe performance on the test set. From the table, the following requirements for a good augmentation can be concluded: (1) Existence of overlap. For the upper three augmentations, the “scope” of augmentation is small; as a result, most of the samples do not overlap. This makes the embeddings lack discriminative ability for downstream tasks. On the contrary, the lower three create overlaps for most of the samples, leading to much better performance. (2) Intra-class distance is lower than inter-class distance. Compared to ColorJitter, ResizedCrop makes more intra-class samples have lower distances, so ResizedCrop outperforms ColorJitter. The SimCLR augmentation surpasses these two for the same reason. Interestingly, we find that the same phenomena appear when using other contrastive methods like SimCLR. This shows that these methods somehow utilize the augmentation overlap like our method does. C PERFORMANCE CURVE In this section, we illustrate the performance curves throughout training. We aim to demonstrate the functionality of the projection loss and show that our ACA method leads to better performance. We choose SimCLR as the traditional contrastive learning baseline because our method differs from SimCLR only in the loss, with everything else (architecture, optimizer and other shared hyperparameters) identical. Also, we do not introduce extra mechanisms like a momentum encoder (BYOL, MoCo) or a predictor (BYOL, SimSiam). Figure 5 shows the performance curves along with the projection loss on the CIFAR-10 dataset. The left figure shows the projection loss. We can see that in the early stage of training, the projection loss increases. This reveals that the natural data deviate from the centers of their augmentation distributions, which is harmful to the performance of the model. With the help of the projection loss, the embeddings of natural data are dragged back to their proper position, the center. The middle and right figures illustrate the performance curves during training. With only the ACA-PC loss, the model only achieves similar performance during training, but the ACA-Full loss helps improve performance during training. Also, we can see that ACA starts to outperform SimCLR and ACA-PC by a considerable margin from about 50 epochs. This happens to be the epoch at which the projection loss rises to its stable level. Therefore, pulling the natural data to the centers of their augmentation distributions helps to learn better embeddings. D TRANSFER TO OTHER DATASETS Following Chen et al. (2020a), we evaluate the self-supervised pre-trained models on the linear classification task on 10 datasets, as is done in the MSF paper (Koohpayegani et al., 2021). The results are reported in Table 4. All the results other than ACA are taken from Koohpayegani et al. (2021). Although our method is trained with fewer epochs, it achieves results competitive with contrastive learning methods. Notably, it surpasses the 1000-epoch SimCLR, which differs from our method only in the loss.
It shows that the embeddings learned by our method are also transferable to other downstream tasks. We think it is due to the universality of the correlation between augmentation similarity and semantical similarity across these benchmarks. E TRANSFER TO OBJECT DETECTION Following the procedure outlined in ?, we use Faster-RCNN Ren et al. (2015) for the task of object detection on PASCAL-VOC Everingham et al. (2015). We use the code provided at MoCo repository4 with default parameters. All the weights are finetuned on the trainval07+12 set and evaluated on the test07 set. We report an average over 5 runs in Table 5. Despite the shorter training epochs, our method can achieve better results than SimCLR, especially outperform by a large margin on AP75(> 1%). F PROOF OF LEMMA 4.1 For convenient, we define M := Â⊤Â. The elements of M are: Mx1x2 = ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)√ dx1 √ dx2 ,x1,x2 ∈ X (13) Expanding Equation (3), we get: Lmf = ∑ x1,x2∈X (Mx1x2 − F⊤x1Fx2) 2 = ∑ x1,x2∈X (Mx1x2 − √ dx1 √ dx2fθ(x1) ⊤fθ(x2)) 2 = const − 2 ∑ x1,x2∈X √ dx1 √ dx2Mx1x2fθ(x1) ⊤fθ(x2) + ∑ x1,x2∈X dx1dx2(fθ(x1) ⊤fθ(x2)) 2 = const − 2 ∑ x1,x2∈X ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)fθ(x1)⊤fθ(x2) + ∑ x1,x2∈X dx1dx2(fθ(x1) ⊤fθ(x2)) 2 4https://github.com/facebookresearch/moco multiply by p(x̄) = 1N and replace dx with ∑ x̄ p(x | x̄) = NpA(x). The objective becomes: min θ − 2 ∑ x1,x2∈X ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)p(x̄)fθ(x1)⊤fθ(x2) +N ∑ x1,x2∈X pA(x1)pA(x2)(fθ(x1) ⊤fθ(x2)) 2 = −2E x̄∼p(x̄),xi∼A(xi|x̄) xj∼A(xj |x̄) [ fθ(x1) ⊤fθ(x2) ] +NEx1∼pA(x1),x2∼pA(x2) [ (fθ(x1) ⊤fθ(x2)) 2 ] = LACA-PC G PROOF OF THEOREM 4.2 As in Appendix F, we define M := Â⊤Â. By Eckart–Young–Mirsky theorem (Eckart & Young, 1936), the minimizer F̂ of ∥M − FF⊤∥2F , must have the form V̂ Σ̂Q, where V̂ , Σ̂ contain the top-k singular values and corresponding right singular vectors of Â, Q ∈ Rk×k is some orthonormal matrix with Q⊤Q = I . Since we let Fx = √ dxfθ(x), then the minimizer θ⋆ must satisfy fθ⋆(x) = Q σ̂ ⊙ v̂(x)√ dx = Q [σ1v1(x), σ2v2(x), . . . , σkvk(x)] ⊤ √ dx . where ⊙ is the element-wise multiplication. For convenience, we use σi to denote i-th largest singular value, ui(x̄),vi(x) to denote the element of i-th left/right singular value corresponding to x̄/x . When p(x̄) = 1N , dx = NpA(x) = pA(x) p(x̄) . Then the posterior distance: d2post(x1,x2) = ∑ x̄∈X̄ (pA(x̄ | x1)− pA(x̄ | x2))2 = ∑ x̄∈X̄ ( p(x1 | x̄)p(x̄) pA(x1) − p(x1 | x̄)p(x̄) pA(x1) )2 = ∑ x̄∈X̄ ( p(x1 | x̄) dx1 − p(x2 | x̄) dx2 )2 = ∑ x̄∈X̄ ( Âx̄x1√ dx1 − Âx̄x2√ dx2 )2 = ∑ x̄∈X̄ ( N∑ i=1 σiui(x̄)vi(x1)√ dx1 − σiui(x̄)vi(x2)√ dx2 )2 = ∑ x̄∈X̄ ( N∑ i=1 σiui(x̄)( vi(x1)√ dx1 − vi(x2)√ dx2 ) )2 = ∑ x̄∈X̄ ∑ i,i′ σiui(x̄)σi′ui′(x̄)( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 ) = ∑ i,i′ σiσi′( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 ) ∑ x̄∈X̄ ui(x̄)ui′(x̄) (1) = ∑ i,i′ σiσi′( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 )δi,i′ = N∑ i=1 σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 (14) (1) is due to the orthogonality of singular vectors. Note that: N∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 = L∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 − L∑ i=N+1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≤ L∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 = L∑ i=1 v2i (x1) dx1 + L∑ i=1 v2i (x2) dx2 − 2 L∑ i=1 vi(x1)vi(x2)√ dx1 √ dx2 = 1 dx1 + 1 dx2 − 2δx1x2√ dx1 √ dx2 (2) ≤ ( 1 dx1 + 1 dx2 )(1− δx1x2) ≤ 2 dmin (1− δx1x2) (2) can be deduced by considering conditions whether x1 = x2 or not. 
Then: ∥fθ⋆(x1)− fθ⋆(x2)∥2 = k∑ i=1 σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 =d2post(x1,x2)− N∑ i=k σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 (≤ d2post(x1,x2)) ≥d2post(x1,x2)− σ2k+1 N∑ i=k+1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≥d2post(x1,x2)− σ2k+1 N∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≥d2post(x1,x2)− 2σ2k+1 dmin (1− δx1x2) Therefore, we have proved Theorem 4.2. H PROOF OF THEOREM 4.3 similar to Appendix G, d2w-aug(x̄1, x̄2) = ∑ x∈X 1 NpA(x) (p(x | x̄1)− p(x | x̄2))2 = ∑ x∈X ( p(x | x̄1)√ NpA(x) − p(x | x̄1)√ NpA(x) )2 = ∑ x∈X ( p(x | x̄1)√ dx − p(x | x̄1)√ dx )2 = ∑ x∈X ( Âx̄1x − Âx̄2x )2 = ∑ x∈X ( N∑ i=1 σiui(x̄1)vi(x)− σiui(x̄2)vi(x) )2 = ∑ x∈X ( N∑ i=1 σi(ui(x̄1)− ui(x̄2))vi(x) )2 = ∑ x∈X ∑ i,i′ σivi(x)σi′vi′(x)(ui(x̄1)− ui(x̄2))(ui′(x̄1)− ui′(x̄2)) = ∑ i,i′ σiσi′(ui(x̄1)− ui(x2))(ui′(x̄1)− ui′(x̄2)) ∑ x∈X vi(x)vi′(x) (1) = ∑ i,i′ σiσi′(ui(x̄1)− ui(x̄2))(ui′(x̄1)− ui′(x̄2))δi,i′ = N∑ i=1 σ2i (ui(x1)− ui(x2))2 (1) is due to the orthogonality of singular vectors. And g(x̄) takes the following form: g(x̄) = Q [ σ21u1(x), σ 2 2u2(x), . . . , σ 2 kuk(x) ]⊤ . Thus, ∥g(x̄1)− g(x̄2)∥2Σ−2k = k∑ i=1 σ2i (ui(x1)− ui(x2))2 = d2w-aug(x̄1, x̄2)− N∑ i=k+1 σ2i (ui(x1)− ui(x2))2 (≤ d2w-aug(x̄1, x̄2)) ≥ d2w-aug(x̄1, x̄2)− σ2k+1 N∑ i=1 (ui(x1)− ui(x2))2 = d2w-aug(x̄1, x̄2)− 2σ2k+1(1− δx̄1x̄2) I ABLATION STUDY ON PARAMETER α AND K We conduct ablation experiments on the parameter α and K. α is the trade-off parameter between ACA-PC loss and projection loss Equation (10). K act as the noise strength for ACA-PC, which replaces N in Equation (4). Figure 6 shows the effect of α and K on different benchmarks. It can be seen that α is necessary to improve the performance of ACA-PC. A certain value of α helps the model to achieve better results. However, a too large value of α degrades the performance. The same phenomenon is the same on K. J COMPARISON OF NEAREST NEIGHBORS We randomly select 8 samples from the validation set of ImageNet-100 (Tian et al., 2020a). Then we use the encoder learned by our ACA method and SimCLR (Chen et al., 2020a) to extract features and investigate their nearest neighbors of them. The left-most column displays the selected samples and the following columns show the 5 nearest neighbors. The samples labeled as different classes are marked by the red box. We also annotate the distance between the samples and their nearest neighbors. First, we can see that even though utilizing the augmentation in a different way, ACA achieves similar results as traditional contrastive learning. Both of them can learn semantically meaningful embeddings. However, we can see that ACA tends to learn embeddings that pull together images that are similar in the input space, i.e., creating similar augmentation, while SimCLR sometimes has neighbors that seem different.
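As a closing numerical sanity check (ours, not part of the paper), the exact SVD solution described in Appendix G can be verified against the sandwich bounds of Theorem 4.2 on a toy augmentation matrix, assuming uniform p(x̄) and an encoder that attains the optimum:

import numpy as np

rng = np.random.default_rng(1)
N, L, k = 6, 12, 3
A = rng.dirichlet(np.ones(L), size=N)          # toy p(x | x_bar); rows sum to 1, p(x_bar) = 1/N
d = A.sum(axis=0)                              # d_x = N * p_A(x)
A_hat = A / np.sqrt(d)                         # normalized augmentation feature
U, S, Vt = np.linalg.svd(A_hat, full_matrices=False)
f = (Vt[:k].T * S[:k]) / np.sqrt(d)[:, None]   # optimal ACA-PC embeddings (up to rotation)

post = A / d                                   # column x of `post` is the posterior p(x_bar | x)
x1, x2 = 0, 1
d_post_sq = ((post[:, x1] - post[:, x2]) ** 2).sum()
emb_sq = ((f[x1] - f[x2]) ** 2).sum()
slack = 2 * S[k] ** 2 / d.min()                # 2 * sigma_{k+1}^2 / d_min
assert emb_sq <= d_post_sq + 1e-9
assert emb_sq >= d_post_sq - slack - 1e-9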
1. What is the focus and contribution of the paper on augmentation component analysis? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its theoretical analysis and experimental results? 3. Do you have any concerns regarding the improvements and comparisons made in the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes Augmentation Component Analysis (ACA), which employs the idea of PCA to perform dimension reduction on augmentation features. ACA reformulates the steps of extracting principal components of the augmentation features with a contrastive-like loss. With the learned principal components, another on-the-fly loss embeds samples effectively. ACA learns operable low-dimensional embeddings that theoretically preserve the augmentation distribution distances. Strengths And Weaknesses Strengths The idea of augmentation component analysis is novel and interesting. The theoretical analysis of this work is promising. Weaknesses The experimental results are too weak. Self-supervised learning aims to transfer the learned representations or whole network parameters to various downstream tasks. However, I do not see any transfer learning experiments in this paper. Could you provide more transfer learning experiments, for example, linear evaluation and fine-tuning on fine-grained classification tasks, semi-supervised learning, and object detection/segmentation? The improvements of this method are very marginal. From Table 1 and Table 2, ACA-Full surpasses the second-best performance by only 0.5% in most cases, which is not convincing enough. The comparison methods in Table 1 and Table 2 are outdated. I highly recommend the authors compare ACA-Full with the latest contrastive learning methods, for example, [1][2][3][4][5][6][7]. Moreover, the convergence rate and final accuracy depend heavily on the method. For the ImageNet experiments, the authors should train the model for at least 200 epochs (or even longer) to make sure all methods are fully converged. [1] With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations [2] Solving Inefficiency of Self-supervised Representation Learning [3] Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [4] Mean Shift for Self-Supervised Learning [5] Ressl: relational self-supervised learning with weak augmentation [6] Barlow Twins: Self-Supervised Learning via Redundancy Reduction [7] AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations From Self-Trained Negative Adversaries Clarity, Quality, Novelty And Reproducibility The code and pre-trained models are desired. After reading this paper, it is not easy for me to reproduce the results. I acknowledge the novelty of this paper, but the experimental results need to be more convincing.
ICLR
Title Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps Abstract Self-supervised learning aims to learn a embedding space where semantically similar samples are close. Contrastive learning methods pull views of samples together and push different samples away, which utilizes semantic invariance of augmentation but ignores the relationship between samples. To better exploit the power of augmentation, we observe that semantically similar samples are more likely to have similar augmented views. Therefore, we can take the augmented views as a special description of a sample. In this paper, we model such a description as the augmentation distribution, and we call it augmentation feature. The similarity in augmentation feature reflects how much the views of two samples overlap and is related to their semantical similarity. Without computational burdens to explicitly estimate values of the augmentation feature, we propose Augmentation Component Analysis (ACA) with a contrastive-like loss to learn principal components and an on-the-fly projection loss to embed data. ACA equals an efficient dimension reduction by PCA and extracts low-dimensional embeddings, theoretically preserving the similarity of augmentation distribution between samples. Empirical results show that our method can achieve competitive results against various traditional contrastive learning methods on different benchmarks. Code available at https://github.com/hanlu-nju/AugCA. 1 INTRODUCTION The rapid development of contrastive learning has pushed self-supervised representation learning to unprecedented success. Many contrastive learning methods surpass traditional pretext-based methods by a large margin and even outperform representation learned by supervised learning (Wu et al., 2018; van den Oord et al., 2018; Tian et al., 2020a; He et al., 2020; Chen et al., 2020a;c). The key idea of self-supervised contrastive learning is to construct views of samples via modern data augmentations (Chen et al., 2020a). Then discriminative embeddings are learned by pulling together views of the same sample in the embedding space while pushing apart views of others. Contrastive learning methods utilize the semantic invariance between views of the same sample, but the semantic relationship between samples is ignored. Instead of measuring the similarity between certain augmented views of samples, we claim that the similarity between the augmentation distributions of samples can reveal the sample-wise similarity better. In other words, semantically similar samples have similar sets of views. As shown in Figure 1 left, two images of deer create many similar crops, and sets of their augmentation results, i.e., their distributions, overlap much. In contrast, a car image will rarely be augmented to the same crop as a deer, and their augmentation distributions overlap little. In Figure 1 right, we verify the motivation numerically. We approximate the overlaps between image augmentations with a classical image matching algorithm (Zitova & Flusser, 2003), which counts the portion of the key points matched in the raw images. We find samples of the same class overlap more than different classes on average, supporting our motivation. Therefore, we establish the semantic relationship between samples in an unsupervised manner based on the similarity of augmentation distributions, i.e., how much they overlap. In this paper, we propose to describe data directly by their augmentation distributions. 
We call the feature of this kind the augmentation feature. The elements of the augmentation feature represent the probability of getting a certain view by augmenting the sample as shown in the left of Figure 2. The augmentation feature serves as an “ideal” representation since it encodes the augmentation information without any loss and we can easily obtain the overlap of two samples from it. However, not only its elements are hard to calculate, but also such high-dimensional embeddings are impractical to use. Inspired by the classical strategy to deal with high-dimensional data, we propose Augmentation Component Analysis (ACA), which employs the idea of PCA (Hotelling, 1933) to perform dimension reduction on augmentation features previously mentioned. ACA reformulates the steps of extracting principal components of the augmentation features with a contrastive-like loss. With the learned principal components, another on-the-fly loss embeds samples effectively. ACA learns operable low-dimensional embeddings theoretically preserving the augmentation distribution distances. In addition, the similarity between the objectives of ACA and traditional contrastive loss may explain why contrastive learning can learn semantic-related embeddings – they embed samples into spaces that partially preserve augmentation distributions. Experiments on synthetic and real-world datasets demonstrate that our ACA achieves competitive results against various traditional contrastive learning methods. Our contributions are as follows: • We propose a new self-supervised strategy, which measures sample-wise similarity via the similarity of augmentation distributions. This new aspect facilitates learning embeddings. • We propose ACA method that implicitly employs the dimension reduction over the augmentation feature, and the learned embeddings preserve augmentation similarity between samples. • Benefiting from the resemblance to contrastive loss, our ACA helps explain the functionality of contrastive learning and why they can learn semantically meaningful embeddings. 2 RELATED WORK Self-Supervised Learning. Learning effective visual representations without human supervision is a long-standing problem. Self-supervised learning methods solve this problem by creating supervision from the data itself instead of human labelers. The model needs to solve a pretext task before it is used for the downstream tasks. For example, in computer vision, the pretext tasks include colorizing grayscale images (Zhang et al., 2016), inpainting images (Pathak et al., 2016), predicting relative patch (Doersch et al., 2015), solving jigsaw puzzles (Noroozi & Favaro, 2016), predicting rotations (Gidaris et al., 2018) and exploiting generative models (Goodfellow et al., 2014; Kingma & Welling, 2014; Donahue & Simonyan, 2019). Self-supervised learning also achieves great success in natural language processing (Mikolov et al., 2013; Devlin et al., 2019). Contrastive Learning and Non-Contrastive Methods. Contrastive approaches have been one of the most prominent representation learning strategies in self-supervised learning. Similar to the metric learning in supervised scenarios (Ye et al., 2019; 2020), these approaches maximize the agreement between positive pairs and minimize the agreement between negative pairs. 
Positive pairs are commonly constructed by co-occurrence (van den Oord et al., 2018; Tian et al., 2020a; Bachman et al., 2019) or augmentation of the same sample (He et al., 2020; Chen et al., 2020a;c; Li et al., 2021; Ye et al., 2023), while all the other samples are taken as negatives. Most of these methods employ the InfoNCE loss (van den Oord et al., 2018), which acts as a lower bound of mutual information between views. Based on this idea, there are several methods that attempt to improve contrastive learning, including mining nearest neighbour (Dwibedi et al., 2021; ?; Azabou et al., 2021) and creating extra views by mixing up (Kalantidis et al., 2020) or adversarial training (Hu et al., 2021). Another stream of methods employs a similar idea of contrastive learning to pull views of a sample together without using negative samples (Grill et al., 2020; Chen & He, 2021). Barlow Twins (Zbontar et al., 2021) minimizes the redundancy within the representation vector. Tsai et al. (2021) reveals the relationship among Barlow Twins, contrastive and non-contrastive methods. Most of these methods only utilize the semantic invariance of augmentation and ignore the relationship between samples. Different from them, we propose a new way to perform self-supervised learning by preserving the similarity of augmentation distribution, based on the observation that a strong correlation exists between the similarity of augmentation distributions and the similarity of semantics. Explanation of Contrastive Learning. Several works provide empirical or theoretical results for explaining the behavior of contrastive learning. Tian et al. (2020b); Xiao et al. (2021) explore the role of augmentation and show contrastive model can extract useful information from views but also can be affected by nuisance information. Zhao et al. (2021) empirically shows that contrastive learning preserves low-level or middle-level instance information. In theoretical studies, Saunshi et al. (2019) provide guarantees of downstream linear classification tasks under conditionally independence assumption. Other works weaken the assumption but are still unrealistic (Lee et al., 2021; Tosh et al., 2021). HaoChen et al. (2021) focus on how views of different samples are connected by the augmentation process and provide guarantees with certain connectivity assumptions. Wang et al. (2022) notice that the augmentation overlap provides a ladder for gradually learning class-separated representations. In addition to the alignment and uniformity as shown by Wang & Isola (2020), Huang et al. (2021) develop theories on the crucial effect of data augmentation on the generalization of contrastive learning. Hu et al. (2022) explain that the contrastive loss is implicitly doing SNE with “positive” pairs constructed from data augmentation. Inspired by the important role of augmentation, we provide a novel self-supervised method that ensures preserving augmentation overlap. 3 NOTATIONS The set of all natural data (data without augmentation) is denoted by X̄ , with size |X̄ | = N . We assume that the natural data follow a uniform distribution p(x̄) on X̄ , i.e., p(x̄) = 1N ,∀x̄ ∈ X̄ . By applying an augmentation method A, a natural sample x̄ ∈ X̄ could be augmented to another sample x with probability pA(x | x̄), so we use p(· | x̄) to encode the augmentation distribution. 1 For example, if x̄ is an image, then A can be common augmentations like Gaussian blur, color distortion and random cropping (Chen et al., 2020a). 
Denote the set of all possible augmented data as X . We assume X has finite size |X | = L and L > N for ease of exposition. Note that N and L are finite, but can be arbitrarily large. We denote the encoder as fθ, parameterized by θ, which projects a sample x to an embedding vector in Rk. 4 LEARNING VIA AUGMENTATION OVERLAPS As we mentioned in Section 1, measuring the similarity between the augmentation distributions, i.e., the overlap of the augmented results of the two samples reveals their semantic relationship well. For example, in natural language processing, we usually generate augmented sentences by dropping out some words. Then different sentences with similar meanings are likely to contain the same set of words and thus have a high probability of creating similar augmented data. With the help of this self-supervision, we formulate the embedding learning task to meet the following similarity preserving condition: dRk (fθ⋆ (x̄1) , fθ⋆ (x̄2)) ∝ dA(p(· | x̄1), p(· | x̄2)) . (1) dRk is a distance measure in the embedding space Rk, and dA measures the distance between two augmentation distributions. Equation (1) requires the learned embedding with the optimal parameter θ⋆ has the same similarity comparison with that measured by the augmentation distributions. In this section, we first introduce the augmentation feature for each sample, which is a manually designed embedding satisfying the condition in Equation (1). To handle the high dimensionality and complexity of the augmentation feature, we further propose our Augmentation Component Analysis (ACA) that learns to reduce the dimensionality and preserve the similarity. 1Note that p(· | x̄) is usually difficult to compute and we can only sample from it. We omit the subscript A and directly use p(· | x̄) in the following content for convenient 4.1 AUGMENTATION FEATURE To reach the goal of similarity preserving in Equation (1), a direct way is to manually construct the feature by the augmentation distributions of each natural sample, i.e., f(x̄) = [p(x1 | x̄), . . . , p(xL | x̄)]⊤, where each element p(xi | x̄) represents the probability of getting a certain element xi in space X by augmenting x̄. We omit θ in f(x̄) since such augmentation feature2 does not rely on any learnable parameters. In this case, any distance dRL defined in the space of f is exactly a valid distribution distance, which reveals the augmentation overlaps and is related to the semantic similarity. Although the constructive augmentation feature naturally satisfies the similarity preserving condition (Equation (1)) (because it directly use the augmentation distribution without loss of information), it is impractical for the following reasons. First, its dimensionality is exponentially high, which is up to L, the number of possible augmented results. For example, even on CIFAR10, the small-scale dataset with image size 32× 32× 3, L is up to 2563072 (3072 pixels and 256 possible pixel values). Second, the computation of each element is intractable. We may need an exponentially large number of samples to accurately estimate each p(x | x̄). The dimensionality and computation problems make the augmentation feature impractical both at inference and training time. Such inconvenience motivates us to (1) conduct certain dimension reduction to preserve the information in low dimensional space (Section 4.2) and (2) develop an efficient algorithm for dimension reduction (Section 4.3). 
4.2 DIMENSION REDUCTION ON AUGMENTATION FEATURES

To deal with the high dimensionality, we employ the idea of PCA (Hotelling, 1933), which reconstructs the data with principal components.3 For convenience, we denote the design matrix of the augmentation feature by A, where A ∈ R^{N×L} and A_{x̄,x} = p(x | x̄) (see Figure 2). We perform PCA on a transformed augmentation feature called the normalized augmentation feature:

Â = A D^{-1/2}, (2)

where D = diag([d_{x_1}, d_{x_2}, . . . , d_{x_L}]) and d_x = ∑_{x̄} p(x | x̄). Based on the normalized augmentation feature, we can develop an efficient algorithm for similarity-preserving embeddings. Assume the SVD of Â is Â = UΣV^⊤ with U ∈ R^{N×N}, Σ ∈ R^{N×L}, V ∈ R^{L×L}. PCA first learns the projection matrix consisting of the top-k right singular vectors, denoted by Ṽ ∈ R^{L×k}. The vectors in Ṽ are called Principal Components (PCs). Then, it projects the feature by ÂṼ to get the embedding of each sample. The overall procedure is illustrated at the top-right of Figure 2. However, performing PCA on the augmentation feature encounters many obstacles. The elements of the augmentation feature cannot be estimated accurately, not to mention its high dimensionality. Even if we could somehow get the projection matrix Ṽ, it would still be impractical to project the high-dimensional matrix Â. For this reason, we propose ACA, which makes the PC learning and projection processes efficient without explicitly calculating the elements of the augmentation feature.

2 Following common practice in dimension reduction, we call the raw high-dimensional representation the “feature” and the learned low-dimensional representation the “embedding”.
3 In this paper, we use the non-centred version (Reyment & Jvreskog, 1996), which is more appropriate for observations than for variables, where the origin matters more.

4.3 AUGMENTATION COMPONENT ANALYSIS

Although there are several obstacles to performing PCA on the augmentation features directly, it is fortunately efficient to sample from the augmentation distribution p(x | x̄), i.e., to perform augmentation on the natural data x̄ and obtain an augmented sample x. Being aware of this, our ACA uses two practical losses to simulate the PCA process efficiently by sampling. The first, contrastive-like loss leads the encoder to learn the principal components of Â and can be efficiently optimized by sampling, like traditional contrastive methods. The second loss performs an on-the-fly projection of Â along the training trajectory, which solves the difficulty of high-dimensional projection.

Learning principal components. ACA learns the principal components with an efficient contrastive-like loss. Besides their projection functionality, these learned principal components can also serve as embeddings that preserve a kind of posterior distribution similarity, as we will show later. In the SVD view, UΣ serves as the PCA projection result for the samples and V contains the principal components (Jolliffe, 2002). However, if we change our view, V Σ can be seen as the representation of each column. Since each column of Â encodes the probabilities of one augmented datum given the natural data, V Σ preserves certain augmentation relationships, as we will show in Theorem 4.2 later. To leverage the extrapolation power of encoders such as deep neural networks, we design a loss that guides the parameterized encoder f_θ to learn embeddings similar to those of PCA. Inspired by the rank-minimization view of PCA (Vidal et al., 2016), we employ the low-rank approximation objective with matrix factorization, similar to HaoChen et al.
(2021):

min_{F ∈ R^{L×k}} L_mf = ‖Â^⊤Â − FF^⊤‖_F^2, (3)

where the columns of F store scaled versions of the top-k right singular vectors, and each row can be seen as the embedding of an augmented datum, as will be shown in Lemma 4.1. According to the Eckart–Young–Mirsky theorem (Eckart & Young, 1936), by optimizing L_mf we obtain the optimal F̂, which has the form Ṽ Σ̃Q, where Q ∈ R^{k×k} is an orthonormal matrix and Σ̃ and Ṽ contain the top-k singular values and right singular vectors. By expanding Equation (3), we get the Augmentation Component Analysis loss for learning Principal Components (ACA-PC) in the following lemma:

Lemma 4.1 (ACA-PC loss). Let F_{x,:} = √d_x f_θ^⊤(x), ∀x ∈ X. Minimizing L_mf is equivalent to minimizing the following objective:

L_ACA-PC = −2 E_{x̄∼p(x̄), x_i∼p(x_i|x̄), x_j∼p(x_j|x̄)} [f_θ(x_i)^⊤ f_θ(x_j)] + N E_{x_1∼p_A(x_1), x_2∼p_A(x_2)} [(f_θ(x_1)^⊤ f_θ(x_2))^2]. (4)

The proof can be found in Appendix F. In ACA-PC, the first term is the common alignment loss for augmented data and the second term is a form of uniformity loss (Wang & Isola, 2020). Both terms can be estimated by Monte-Carlo sampling. ACA-PC is a kind of contrastive loss, but unlike most others it has a theoretical meaning. We note that the form of ACA-PC differs from the spectral loss (HaoChen et al., 2021) by the constant N in front of the uniformity term. This constant is similar to the noise strength in NCE (Gutmann & Hyvärinen, 2010) or the number of negative samples in InfoNCE (van den Oord et al., 2018). It can be proved that the embeddings learned by ACA-PC preserve the posterior distribution distances between augmented data:

Theorem 4.2 (Almost isometry for posterior distances). Assume f_θ is a universal encoder, σ_{k+1} is the (k + 1)-th largest singular value of Â, d_min = min_x d_x, and δ_{x_1 x_2} = I(x_1 = x_2). Then the minimizer θ∗ of L_ACA-PC satisfies:

d^2_post(x_1, x_2) − (2σ^2_{k+1} / d_min)(1 − δ_{x_1 x_2}) ≤ ‖f_{θ∗}(x_1) − f_{θ∗}(x_2)‖_2^2 ≤ d^2_post(x_1, x_2), ∀ x_1, x_2 ∈ X,

where the posterior distance

d^2_post(x_1, x_2) = ∑_{x̄∈X̄} (p_A(x̄ | x_1) − p_A(x̄ | x_2))^2 (5)

measures the squared Euclidean distance between the posterior distributions p_A(x̄ | x) = p(x | x̄) p(x̄) / p_A(x).

We give the proof in Appendix G. Theorem 4.2 states that the optimal encoder for ACA-PC preserves the distances between posterior distributions of augmented data within an error related to the embedding size k. As k increases to N, the error decreases to 0. This corresponds to the phenomenon that a larger embedding size leads to better contrastive performance (Chen et al., 2020a). The posterior distribution p_A(x̄ | x) represents the probability that a given augmented sample x was created from a natural sample x̄. Augmented data that can only be produced by the same natural sample have the smallest distance, and the embeddings of augmented data in overlapped areas are pulled together by ACA-PC. Since overlapped areas are usually created by two same-class samples, ACA-PC can form a semantically meaningful embedding space. It is also noticeable that the optimal encoder meets the similarity-preserving condition (Equation (1)), but with respect to the posterior distributions of augmented data rather than the augmentation distributions of natural data. Since what we care about is the distribution of natural data, we further propose a projection loss that helps learn good embeddings for all the natural data.

On-the-fly Projection. As stated in the previous part, the embeddings learned by ACA-PC not only serve as embeddings for the augmented data but also contain the principal components of the normalized augmentation feature.
Based on this, we propose to use these embeddings as a projection operator that ensures meaningful embeddings for all the natural data. To be specific, denote the embedding matrix for all augmented data as F^aug ∈ R^{L×k}, where each row F^aug_{x,:} = f_{θ∗}^⊤(x). From Equation (3) and F̂_{x,:} = √d_x f_{θ∗}^⊤(x), it can easily be seen that:

F^aug = D^{-1/2} F̂ = D^{-1/2} Ṽ Σ̃ Q.

Similar to PCA (Hotelling, 1933), which projects the original feature onto the principal components V, we propose to use F^aug to project the augmentation feature and obtain the embedding of each natural sample. Denote the embedding matrix for natural data as F^nat ∈ R^{N×k}, where each row F^nat_{x̄,:} represents the embedding of x̄. We compute F^nat as follows:

F^nat = A F^aug = Â D^{1/2} D^{-1/2} Ṽ Σ̃ Q = (Ũ Σ̃) Σ̃ Q, (6)

where Σ̃, Ũ contain the top-k singular values and the corresponding left singular vectors. Note that F^nat is exactly the PCA projection result multiplied by an additional matrix Σ̃Q. Fortunately, such an additional linear transformation does not affect the linear probe performance (HaoChen et al., 2021). With Equation (6), the embedding of each natural sample can be computed as follows:

F^nat_{x̄,:} = A_{x̄,:} F^aug = ∑_x p(x | x̄) f_{θ∗}^⊤(x) = E_{x∼p(x|x̄)}[f_{θ∗}^⊤(x)], (7)

which is exactly the expected embedding over the augmentation distribution. Similar to Theorem 4.2, the embeddings calculated by Equation (7) also enjoy a certain isometry property:

Theorem 4.3 (Almost isometry for weighted augmentation distances). Assume f_θ is a universal encoder, σ_{k+1} is the (k + 1)-th largest singular value of Â, and δ_{x̄_1 x̄_2} = I(x̄_1 = x̄_2). Let the minimizer of L_ACA-PC be θ∗ and g(x̄) = E_{x∼p(x|x̄)} f_{θ∗}(x) as in Equation (7). Then:

d^2_w-aug(x̄_1, x̄_2) − 2σ^2_{k+1} (1 − δ_{x̄_1 x̄_2}) ≤ ‖g(x̄_1) − g(x̄_2)‖^2_{Σ_k^{-2}} ≤ d^2_w-aug(x̄_1, x̄_2), ∀ x̄_1, x̄_2 ∈ X̄,

where ‖·‖_{Σ_k^{-2}} denotes the Mahalanobis distance with matrix Σ_k^{-2}, Σ_k = diag([σ_1, σ_2, . . . , σ_k]) is the diagonal matrix containing the top-k singular values, and the weighted augmentation distance

d^2_w-aug(x̄_1, x̄_2) = (1/N) ∑_{x∈X} (p(x | x̄_1) − p(x | x̄_2))^2 / p_A(x) (8)

measures the weighted squared Euclidean distance between the augmentation distributions p(x | x̄).

Different from Theorem 4.2, which presents an isometry in terms of Euclidean distances between embeddings, Theorem 4.3 presents an isometry in terms of Mahalanobis distances. The weighted augmentation distance weighs the squared Euclidean distance by p_A(x). d_w-aug can be regarded as a valid augmentation distance measure d_A as in Equation (1), and F^nat preserves such a distance. Our goal is therefore to make the embedding of x̄ approach E_{p(x|x̄)} f_{θ⋆}(x). However, as stated before, this additional projection process is not efficient, i.e., we would need exponentially many samples from p(x | x̄). We notice that samples drawn during the training of ACA-PC can be reused. For this reason, we propose an on-the-fly projection loss that directly uses the current encoder for the projection:

L_proj = E_{x̄∼p(x̄)} [ ‖f_θ(x̄) − E_{p(x|x̄)} f_θ(x)‖_2^2 ]. (9)

Full objective of ACA. Based on the discussion above, ACA simultaneously learns the principal components with ACA-PC and projects the natural data with the on-the-fly projection loss. The full objective of ACA has the following form:

L_ACA-Full = L_ACA-PC + α L_proj, (10)

where α is a trade-off hyperparameter. We also find that N in Equation (4) is too large for stable training, so we replace it with a tunable hyperparameter K. Here, we only display the losses in expectation form; the details of the implementation are described in Appendix A.
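Both losses above can be estimated from a mini-batch by Monte-Carlo sampling. The following PyTorch-style sketch shows one possible estimator, assuming each batch carries two augmented views plus the un-augmented (natural) image per sample. The batch-wise, off-diagonal estimator of the uniformity term, the two-view approximation of E_{p(x|x̄)} f_θ(x), and all function names are illustrative assumptions; the authoritative implementation details are in Appendix A of the paper.

import torch
import torch.nn.functional as F

def aca_pc_loss(z1, z2, K=2.0):
    # Monte-Carlo estimate of Eq. (4). z1, z2: embeddings of two augmented views
    # of the same natural images, shape (B, k), assumed L2-normalized.
    # K replaces the constant N of Eq. (4) for training stability.
    align = -2.0 * (z1 * z2).sum(dim=1).mean()          # -2 E[ f(x_i)^T f(x_j) ]
    sim = z1 @ z2.t()                                    # (B, B) inner products
    off_diag = ~torch.eye(len(z1), dtype=torch.bool, device=z1.device)
    uniform = K * (sim[off_diag] ** 2).mean()            # K * E[ (f(x_1)^T f(x_2))^2 ]
    return align + uniform

def projection_loss(z_nat, z1, z2):
    # On-the-fly projection loss of Eq. (9); the expectation over p(x | x_bar)
    # is approximated by the mean of the two available augmented views.
    target = 0.5 * (z1 + z2)
    return ((z_nat - target) ** 2).sum(dim=1).mean()

def aca_full_loss(encoder, x_nat, x_aug1, x_aug2, alpha=0.2, K=2.0):
    # Full ACA objective of Eq. (10): L_ACA-PC + alpha * L_proj.
    z_nat = F.normalize(encoder(x_nat), dim=1)
    z1 = F.normalize(encoder(x_aug1), dim=1)
    z2 = F.normalize(encoder(x_aug2), dim=1)
    return aca_pc_loss(z1, z2, K) + alpha * projection_loss(z_nat, z1, z2)

The default values alpha = 0.2 and K = 2 mirror the hyperparameters reported in Section 6.1; whether gradients should flow through the two-view mean used as the projection target is a design choice that the sketch leaves open.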
5 A PILOT STUDY

In this section, we experiment with our Augmentation Component Analysis method on synthetic mixture-component data with a Gaussian augmentation. In this example, we aim to show the relationship between semantic similarity and the posterior/weighted augmentation distances. We also show the effectiveness of our method compared to traditional contrastive learning. The natural data x̄ are sampled from a Gaussian mixture with c components:

p(x̄) = ∑_{i=1}^c π_i N(µ_i, s_i I).

We use Gaussian noise as the data augmentation of a natural sample, i.e., A(x̄) = x̄ + ξ, where ξ ∼ N(0, s_a I). Concretely, we conduct our experiment on 2-D data with c = 4, π_i = 1/c, s_i = 1, and µ_i uniformly distributed on a circle with radius 2. For each component, we sample 200 natural data points, with the index of the component as their label. Each natural datum is augmented 2 times with s_a = 4, resulting in a total of 1,600 augmented data. We compute the augmentation probability between each x and x̄ via p(x | x̄) and normalize the probabilities for each x̄.

First, we plot the distribution of posterior distances (Equation (5)) for pairs of augmented data and of weighted augmentation distances (Equation (8)) for pairs of natural data in Figure 3 (left). The two distances have similar distributions because the synthetic data are Gaussian. It can be seen that data from the same component tend to have small distances, while data from different components have large distances. In the low-distance regions, the pairs come from the same class, which means that the two distances are reliable metrics for judging semantic similarity. In all, this figure reveals the correlation between semantic similarity and the posterior/weighted augmentation distances.

Second, we compare our methods with SimCLR (Chen et al., 2020a), a traditional contrastive method, and Spectral (HaoChen et al., 2021), which similarly learns embeddings via spectral theory. We test the learned embeddings with a Logistic Regression classifier and report the error rate of the prediction in Figure 3 (right). We also report the performance of directly using the augmentation feature (AF). First, AF itself is discriminative under a simple linear classifier. SimCLR and Spectral tend to underperform AF as the embedding size increases, while our methods consistently outperform it. This may seem counter-intuitive, since our method performs dimension reduction on this very feature. But we note that as the embedding size increases, the complexity of the linear model also increases, which affects generalizability. All the methods in Figure 3 (right) show degradation of this kind. However, our methods consistently outperform the others, which shows the superiority of ACA. Additionally, by adding the projection loss, ACA-Full improves over ACA-PC by a noticeable margin. Traditional contrastive learning such as SimCLR achieves performance similar to our methods, which we take as evidence that it serves a similar function.

6 EXPERIMENTS

6.1 SETUP

Dataset. We conduct experiments mainly on the following datasets, using 4 RTX-3090 GPUs. CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009): two datasets, each containing 60K images of size 32 × 32, from 10 and 100 classes respectively. STL-10 (Coates et al., 2011): derived from ImageNet (Deng et al., 2009), with 96 × 96 resolution images and 5K labeled training images from 10 classes. Additionally, 100K unlabeled images are used for unsupervised learning.
Tiny ImageNet: a reduced version of ImageNet (Deng et al., 2009), composed of 100K images scaled down to 64 × 64 from 200 classes. ImageNet-100 (Tian et al., 2020a): a subset of ImageNet with 100 classes. ImageNet (Deng et al., 2009): the large-scale dataset with 1K classes.

Network Structure. Following common practice (Chen et al., 2020a;b;c), we use the encoder-projector structure during training, where the projector projects the embeddings into a low-dimensional space. For CIFAR-10 and CIFAR-100, we use the CIFAR variant of ResNet-18 (He et al., 2016; Chen & He, 2021) as the encoder. We use a two-layer MLP projector whose hidden dimension is half of the input dimension and whose output dimension is 64. For STL-10 and Tiny ImageNet, only the max-pooling layer is disabled, following (Chen & He, 2021; Ermolov et al., 2021). For these two datasets, we use the same projector structure, except that the output dimension is 128. For ImageNet, we use ResNet-50 with the same projector as Chen et al. (2020a).

Image Transformation. Following the common practice of contrastive learning (Chen et al., 2020a), we apply the following augmentations sequentially during training: (a) crops with a random size; (b) random horizontal flipping; (c) color jittering; (d) grayscaling. For ImageNet-100 and ImageNet, we use the same implementation as (Chen et al., 2020a).

Optimizer and other Hyper-parameters. For all datasets except ImageNet, the Adam optimizer (Kingma & Ba, 2015) is used. For CIFAR-10 and CIFAR-100, we train for 800 epochs with a learning rate of 3 × 10^{-3}. For Tiny ImageNet and STL-10, we train for 1,000 epochs with a learning rate of 2 × 10^{-3}. The learning rate is decayed by a factor of 0.1 at 100, 50, and 20 epochs before the end of training. Due to hardware resource restrictions, we use a mini-batch of size 512. The weight decay is 1 × 10^{-6} if not specified. Following common practice in contrastive learning, we normalize the projected feature onto the unit sphere. For CIFAR-10, we use α = 1. For the remaining datasets, we use α = 0.2. By default, K is set to 2. For ImageNet, we use the same hyperparameters as (Chen et al., 2020a), except that the batch size is 256, α = 0.2, and K = 2.

Evaluation Protocol. We evaluate the learned representation with the two most commonly used protocols – linear classification (Zhang et al., 2016; Kolesnikov et al., 2019) and the k-nearest-neighbors classifier (Chen & He, 2021). In all the experiments, we train the linear classifier for 100 epochs. The learning rate exponentially decays from 10^{-2} to 10^{-6}. The weight decay is 1 × 10^{-6}. We report the classification accuracy on test embeddings, as well as the accuracy of a 5-nearest-neighbors classifier for all datasets except ImageNet.

6.2 PERFORMANCE COMPARISON

In Table 1, we compare the linear probe performance on various small-scale and mid-scale benchmarks against several methods, including SimCLR (Chen et al., 2020a), BYOL (Grill et al., 2020), SimSiam (Chen & He, 2021), and Spectral (HaoChen et al., 2021). For transfer learning benchmarks, please refer to Appendix D and Appendix E. SimCLR is a method that uses the contrastive loss. BYOL and SimSiam do not use negative samples. Spectral uses a related loss derived from the idea of spectral clustering. From Table 1, we can see that our ACA-Full method achieves competitive results on small- and mid-scale benchmarks, achieving either the best or the second-best results on all benchmarks except the 5-NN evaluation on STL-10. Also, ACA-PC differs from ACA-Full only in the projection loss.
In all the benchmarks, we can see that the projection loss improves performance. For large-scale benchmarks, we compare several methods on ImageNet-100 and ImageNet. On ImageNet-100, we additionally compare our method to MoCo (He et al., 2020), L_align + L_uniform (Wang & Isola, 2020), and InfoMin (Tian et al., 2020b). Note that the results of the other three methods are reported with a ResNet-50 encoder, which has more capacity than ResNet-18. Our method still achieves state-of-the-art results among them, which means that it is also effective with relatively small encoders even on large-scale datasets. On ImageNet, ACA-PC achieves competitive performance against state-of-the-art contrastive methods (Chen et al., 2020a;c; Grill et al., 2020; Chen & He, 2021; HaoChen et al., 2021) and ACA-Full achieves the best results.

7 CONCLUSION AND FUTURE WORK

In this paper, we provide a new way of constructing self-supervised contrastive learning tasks by modeling similarity through augmentation overlap, motivated by the observation that semantically similar data usually create similar augmentations. We propose Augmentation Component Analysis to perform PCA on the augmentation feature efficiently. Interestingly, our method has a form similar to the traditional contrastive loss and may help explain its effectiveness. We hope our paper can inspire more thoughts on how to measure similarity in self-supervised learning and how to construct contrastive learning tasks. Future studies may explore applying ACA to learn representations of other forms of instances, such as tasks (Achille et al., 2019) and models (Wu et al., 2023).

ACKNOWLEDGEMENTS

This research was supported by NSFC (61773198, 62006112, 61921006), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the NSF of Jiangsu Province (BK20200313).

B EFFECT OF AUGMENTATION OVERLAPS

Like contrastive learning, our method relies on the quality of the augmentation. Therefore, we investigate the influence of different augmentations and reveal the relationship between distribution difference and linear probe performance on CIFAR-10. The augmentation distribution is estimated by augmenting 10^6 times for a random subset of 2,000 sample pairs, with 1,000 intra-class and 1,000 inter-class pairs. Note that, as stated in Section 4.1, even on CIFAR-10 the actual value of L is exponentially large (up to 256^3072). It is impossible to accurately estimate a distribution over so many possible values. However, we notice that for neural networks, many operators reduce the number of possible values, such as convolutions and poolings. Following this observation, and to make the computation efficient, we discretize each color channel into 8 levels and apply a max-pooling operation to obtain a 4 × 4 image. With this approximation, L reduces to 8^48. This still seems too large, but note that the augmentation distribution of each sample covers only a small region, so it is sufficient to estimate the distribution by sampling. Due to memory restrictions, we cannot fully estimate the weighted augmentation distance in Theorem 4.3, because we cannot store all possible values of p_A(x). Instead, we use the Hellinger distance as the distribution distance measure:

d^2_H(x̄_1, x̄_2) = (1/N) ∑_{x∈X} (√p(x | x̄_1) − √p(x | x̄_2))^2.

The Hellinger distance ranges over [0, 2], making the comparison clear. (A minimal code sketch of this estimation procedure is given after Appendix D.) We list the augmentations used in this experiment below:
1. Grayscale: Randomly convert the image to grayscale with probability 0.1.
2. HorizontalFlip: Randomly flip the image horizontally with probability 0.5.
3. Rotation: Randomly rotate the image by an angle uniformly distributed in [0, π].
4. ColorJitter: Jitter (brightness, contrast, saturation, hue) with strength (0.4, 0.4, 0.4, 0.1) and probability 0.8.

In Table 3, we display the histogram (HIST) of intra- and inter-class augmentation distribution distances. ACC displays the linear probe performance on the test set. From the table, the following requirements for a good augmentation can be concluded. (1) Existence of overlap. For the upper three augmentations, the “scope” of the augmentation is small, so most samples do not overlap. This leaves the embeddings without discriminative ability for downstream tasks. On the contrary, the lower three create overlaps for most samples, leading to much better performance. (2) Intra-class distances should be lower than inter-class distances. Compared to ColorJitter, ResizedCrop gives more intra-class pairs a low distance, so ResizedCrop outperforms ColorJitter. The SimCLR augmentation surpasses these two for the same reason. Interestingly, we find that the same phenomena appear when using other contrastive methods such as SimCLR, which suggests that those methods also exploit augmentation overlap, as our method does.

C PERFORMANCE CURVE

In this section, we illustrate the performance curves throughout training. We aim to demonstrate the functionality of the projection loss and show that our ACA method leads to better performance. We compare against SimCLR because our method differs from SimCLR only in the loss, with everything else (architecture, optimizer, and other shared hyperparameters) identical; moreover, we do not introduce extra mechanisms such as a momentum encoder (BYOL, MoCo) or a predictor (BYOL, SimSiam). Figure 5 shows the performance curves along with the projection loss on the CIFAR-10 dataset. The left figure shows the projection loss. In the early stage of training, the projection loss increases, revealing that the natural data deviate from the centers of their augmentation distributions, which harms the performance of the model. With the help of the projection loss, the embeddings of the natural data are dragged back to their proper position, the center. The middle and right figures illustrate the performance curves during training. With only the ACA-PC loss, the model achieves performance similar to SimCLR during training, while the ACA-Full loss improves performance. Also, ACA starts to outperform SimCLR and ACA-PC by a considerable margin from about epoch 50, which happens to be the epoch at which the projection loss reaches its stable level. Therefore, pulling the natural data to the centers of their augmentations helps to learn better embeddings.

D TRANSFER TO OTHER DATASETS

Following Chen et al. (2020a), we evaluate the self-supervised pre-trained models on the linear classification task on 10 datasets, as conducted in the MSF paper (Koohpayegani et al., 2021). The results are reported in Table 4. All the results other than ACA are taken from Koohpayegani et al. (2021). Although our method is trained with fewer epochs, it achieves results competitive with contrastive learning methods. Notably, it surpasses the 1000-epoch SimCLR, which differs from our method only in the loss. This shows that the embeddings learned by our method are also transferable to other downstream tasks. We attribute this to the universality of the correlation between augmentation similarity and semantic similarity across these benchmarks.
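As a concrete illustration of the estimation procedure of Appendix B, the following sketch estimates the discretized augmentation distribution of an image by repeated sampling and compares two such distributions with the Hellinger-style distance defined above (without the constant 1/N factor). The torchvision-based pipeline, the downsampling details, and the number of samples are illustrative assumptions; the reported experiments use 10^6 augmentations per image.

import numpy as np
import torch
from torchvision import transforms
from collections import Counter

# Illustrative single augmentation; Appendix B estimates overlap for each
# augmentation separately (a SimCLR-style crop is used here as an example).
augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.ToTensor(),
])

def discretize(img_tensor, levels=8, size=4):
    # Map an augmented image to a coarse code: max-pool to `size` x `size`
    # and quantize each channel into `levels` values, following Appendix B.
    pooled = torch.nn.functional.adaptive_max_pool2d(img_tensor, size)
    codes = torch.clamp((pooled * levels).long(), max=levels - 1)
    return tuple(codes.flatten().tolist())

def estimate_distribution(pil_img, n_samples=20000):
    counts = Counter(discretize(augment(pil_img)) for _ in range(n_samples))
    return {code: c / n_samples for code, c in counts.items()}

def hellinger_like(p, q):
    # Squared Hellinger-style distance between two estimated distributions.
    keys = set(p) | set(q)
    return sum((np.sqrt(p.get(k, 0.0)) - np.sqrt(q.get(k, 0.0))) ** 2 for k in keys)

# Usage (img1, img2 are PIL images, e.g. loaded from CIFAR-10):
# d = hellinger_like(estimate_distribution(img1), estimate_distribution(img2))
# Intra-class pairs should, on average, give a smaller d than inter-class pairs.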
E TRANSFER TO OBJECT DETECTION

Following the procedure outlined in ?, we use Faster-RCNN (Ren et al., 2015) for the task of object detection on PASCAL-VOC (Everingham et al., 2015). We use the code provided at the MoCo repository4 with default parameters. All the weights are finetuned on the trainval07+12 set and evaluated on the test07 set. We report an average over 5 runs in Table 5. Despite the shorter training schedule, our method achieves better results than SimCLR, especially on AP75, where it outperforms SimCLR by a large margin (> 1%).

4 https://github.com/facebookresearch/moco

F PROOF OF LEMMA 4.1

For convenience, we define M := Â^⊤Â. The elements of M are:

M_{x_1 x_2} = ∑_{x̄∈X̄} p(x_1 | x̄) p(x_2 | x̄) / (√d_{x_1} √d_{x_2}), x_1, x_2 ∈ X. (13)

Expanding Equation (3), we get:

L_mf = ∑_{x_1,x_2∈X} (M_{x_1 x_2} − F_{x_1,:}^⊤ F_{x_2,:})^2
= ∑_{x_1,x_2∈X} (M_{x_1 x_2} − √d_{x_1} √d_{x_2} f_θ(x_1)^⊤ f_θ(x_2))^2
= const − 2 ∑_{x_1,x_2∈X} √d_{x_1} √d_{x_2} M_{x_1 x_2} f_θ(x_1)^⊤ f_θ(x_2) + ∑_{x_1,x_2∈X} d_{x_1} d_{x_2} (f_θ(x_1)^⊤ f_θ(x_2))^2
= const − 2 ∑_{x_1,x_2∈X} ∑_{x̄∈X̄} p(x_1 | x̄) p(x_2 | x̄) f_θ(x_1)^⊤ f_θ(x_2) + ∑_{x_1,x_2∈X} d_{x_1} d_{x_2} (f_θ(x_1)^⊤ f_θ(x_2))^2.

Multiplying by p(x̄) = 1/N and replacing d_x with ∑_{x̄} p(x | x̄) = N p_A(x), the objective becomes:

min_θ −2 ∑_{x_1,x_2∈X} ∑_{x̄∈X̄} p(x_1 | x̄) p(x_2 | x̄) p(x̄) f_θ(x_1)^⊤ f_θ(x_2) + N ∑_{x_1,x_2∈X} p_A(x_1) p_A(x_2) (f_θ(x_1)^⊤ f_θ(x_2))^2
= −2 E_{x̄∼p(x̄), x_i∼p(x_i|x̄), x_j∼p(x_j|x̄)} [f_θ(x_i)^⊤ f_θ(x_j)] + N E_{x_1∼p_A(x_1), x_2∼p_A(x_2)} [(f_θ(x_1)^⊤ f_θ(x_2))^2]
= L_ACA-PC.

G PROOF OF THEOREM 4.2

As in Appendix F, we define M := Â^⊤Â. By the Eckart–Young–Mirsky theorem (Eckart & Young, 1936), the minimizer F̂ of ‖M − FF^⊤‖_F^2 must have the form Ṽ Σ̃ Q, where Σ̃, Ṽ contain the top-k singular values and the corresponding right singular vectors of Â, and Q ∈ R^{k×k} is an orthonormal matrix with Q^⊤Q = I. Since we let F_{x,:} = √d_x f_θ(x), the minimizer θ⋆ must satisfy

f_{θ⋆}(x) = Q (σ̂ ⊙ v̂(x)) / √d_x = Q [σ_1 v_1(x), σ_2 v_2(x), . . . , σ_k v_k(x)]^⊤ / √d_x,

where ⊙ is element-wise multiplication. For convenience, we use σ_i to denote the i-th largest singular value, and u_i(x̄), v_i(x) to denote the elements of the i-th left/right singular vectors corresponding to x̄/x. When p(x̄) = 1/N, we have d_x = N p_A(x) = p_A(x)/p(x̄). Then the posterior distance satisfies:

d^2_post(x_1, x_2) = ∑_{x̄∈X̄} (p_A(x̄ | x_1) − p_A(x̄ | x_2))^2
= ∑_{x̄∈X̄} (p(x_1 | x̄)/d_{x_1} − p(x_2 | x̄)/d_{x_2})^2
= ∑_{x̄∈X̄} (Â_{x̄ x_1}/√d_{x_1} − Â_{x̄ x_2}/√d_{x_2})^2
= ∑_{x̄∈X̄} ( ∑_{i=1}^N σ_i u_i(x̄) (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2}) )^2
= ∑_{i,i'} σ_i σ_{i'} (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2}) (v_{i'}(x_1)/√d_{x_1} − v_{i'}(x_2)/√d_{x_2}) ∑_{x̄∈X̄} u_i(x̄) u_{i'}(x̄)
(1)= ∑_{i=1}^N σ_i^2 (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2})^2, (14)

where (1) is due to the orthogonality of the singular vectors. Note that:

∑_{i=1}^N (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2})^2 ≤ ∑_{i=1}^L (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2})^2
= ∑_{i=1}^L v_i^2(x_1)/d_{x_1} + ∑_{i=1}^L v_i^2(x_2)/d_{x_2} − 2 ∑_{i=1}^L v_i(x_1) v_i(x_2)/(√d_{x_1} √d_{x_2})
= 1/d_{x_1} + 1/d_{x_2} − 2δ_{x_1 x_2}/(√d_{x_1} √d_{x_2})
(2)≤ (1/d_{x_1} + 1/d_{x_2})(1 − δ_{x_1 x_2}) ≤ (2/d_min)(1 − δ_{x_1 x_2}),

where (2) can be deduced by considering the cases x_1 = x_2 and x_1 ≠ x_2 separately.
Then:

‖f_{θ⋆}(x_1) − f_{θ⋆}(x_2)‖^2 = ∑_{i=1}^k σ_i^2 (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2})^2
= d^2_post(x_1, x_2) − ∑_{i=k+1}^N σ_i^2 (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2})^2 (which is ≤ d^2_post(x_1, x_2))
≥ d^2_post(x_1, x_2) − σ_{k+1}^2 ∑_{i=k+1}^N (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2})^2
≥ d^2_post(x_1, x_2) − σ_{k+1}^2 ∑_{i=1}^N (v_i(x_1)/√d_{x_1} − v_i(x_2)/√d_{x_2})^2
≥ d^2_post(x_1, x_2) − (2σ_{k+1}^2 / d_min)(1 − δ_{x_1 x_2}).

Therefore, we have proved Theorem 4.2.

H PROOF OF THEOREM 4.3

Similar to Appendix G,

d^2_w-aug(x̄_1, x̄_2) = ∑_{x∈X} (p(x | x̄_1) − p(x | x̄_2))^2 / (N p_A(x))
= ∑_{x∈X} (p(x | x̄_1)/√d_x − p(x | x̄_2)/√d_x)^2
= ∑_{x∈X} (Â_{x̄_1 x} − Â_{x̄_2 x})^2
= ∑_{x∈X} ( ∑_{i=1}^N σ_i (u_i(x̄_1) − u_i(x̄_2)) v_i(x) )^2
= ∑_{i,i'} σ_i σ_{i'} (u_i(x̄_1) − u_i(x̄_2)) (u_{i'}(x̄_1) − u_{i'}(x̄_2)) ∑_{x∈X} v_i(x) v_{i'}(x)
(1)= ∑_{i=1}^N σ_i^2 (u_i(x̄_1) − u_i(x̄_2))^2,

where (1) is due to the orthogonality of the singular vectors. And g(x̄) takes the following form:

g(x̄) = Q [σ_1^2 u_1(x̄), σ_2^2 u_2(x̄), . . . , σ_k^2 u_k(x̄)]^⊤.

Thus,

‖g(x̄_1) − g(x̄_2)‖^2_{Σ_k^{-2}} = ∑_{i=1}^k σ_i^2 (u_i(x̄_1) − u_i(x̄_2))^2
= d^2_w-aug(x̄_1, x̄_2) − ∑_{i=k+1}^N σ_i^2 (u_i(x̄_1) − u_i(x̄_2))^2 (which is ≤ d^2_w-aug(x̄_1, x̄_2))
≥ d^2_w-aug(x̄_1, x̄_2) − σ_{k+1}^2 ∑_{i=1}^N (u_i(x̄_1) − u_i(x̄_2))^2
= d^2_w-aug(x̄_1, x̄_2) − 2σ_{k+1}^2 (1 − δ_{x̄_1 x̄_2}).

I ABLATION STUDY ON PARAMETERS α AND K

We conduct ablation experiments on the parameters α and K. α is the trade-off parameter between the ACA-PC loss and the projection loss in Equation (10). K acts as the noise strength for ACA-PC, replacing N in Equation (4). Figure 6 shows the effect of α and K on different benchmarks. It can be seen that α is necessary to improve the performance over ACA-PC: a suitable value of α helps the model achieve better results, while too large a value degrades performance. The same phenomenon is observed for K.

J COMPARISON OF NEAREST NEIGHBORS

We randomly select 8 samples from the validation set of ImageNet-100 (Tian et al., 2020a). Then we use the encoders learned by our ACA method and by SimCLR (Chen et al., 2020a) to extract features and investigate their nearest neighbors. The left-most column displays the selected samples and the following columns show the 5 nearest neighbors. Samples labeled as different classes are marked by red boxes. We also annotate the distance between the samples and their nearest neighbors. First, we can see that even though ACA utilizes augmentation in a different way, it achieves results similar to traditional contrastive learning: both learn semantically meaningful embeddings. However, ACA tends to learn embeddings that pull together images that are similar in the input space, i.e., images that create similar augmentations, while SimCLR sometimes has neighbors that look different.
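The sandwich bounds of Theorems 4.2 and 4.3 can be checked numerically on a small problem where the augmentation matrix is explicit and the optimal encoder can be written down directly from the SVD, exactly as in the proofs above. The random toy matrix and the choice of k below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, L, k = 6, 40, 3

# Random toy augmentation matrix: rows are augmentation distributions p(. | x_bar).
A = rng.random((N, L)) ** 3
A /= A.sum(axis=1, keepdims=True)

d = A.sum(axis=0)                      # d_x = N * p_A(x), since p(x_bar) = 1/N
A_hat = A / np.sqrt(d)                 # normalized augmentation feature A D^{-1/2}
U, S, Vt = np.linalg.svd(A_hat, full_matrices=False)

# Optimal ACA-PC encoder from the proof of Theorem 4.2 (taking Q = I):
# f(x) = [sigma_1 v_1(x), ..., sigma_k v_k(x)] / sqrt(d_x).
F_aug = (Vt[:k].T * S[:k]) / np.sqrt(d)[:, None]       # (L, k)

# Posterior distance of Eq. (5) and the Theorem 4.2 bound for two augmented samples.
post = A / d                            # p_A(x_bar | x) = p(x | x_bar) p(x_bar) / p_A(x)
x1, x2 = 0, 1
d_post2 = np.sum((post[:, x1] - post[:, x2]) ** 2)
emb2 = np.sum((F_aug[x1] - F_aug[x2]) ** 2)
slack = 2 * S[k] ** 2 / d.min()
assert d_post2 - slack <= emb2 + 1e-9 and emb2 <= d_post2 + 1e-9

# Natural-data embeddings of Eq. (7) and the Theorem 4.3 bound.
G = A @ F_aug                           # g(x_bar) = E_{p(x|x_bar)} f(x), shape (N, k)
i, j = 0, 1
d_waug2 = np.sum((A[i] - A[j]) ** 2 / (d / N)) / N      # Eq. (8)
maha2 = np.sum(((G[i] - G[j]) / S[:k]) ** 2)            # Mahalanobis with Sigma_k^{-2}
assert d_waug2 - 2 * S[k] ** 2 <= maha2 + 1e-9 and maha2 <= d_waug2 + 1e-9
print("Theorem 4.2 / 4.3 bounds hold on the toy example.")

Increasing k toward N makes the residual terms vanish and the two sides of each bound coincide, mirroring the remark after Theorem 4.2 that larger embedding sizes reduce the isometry error.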
1. What is the focus of the paper regarding self-supervised representation learning? 2. What are the strengths of the proposed approach, particularly in its novel idea and theoretical interpretation? 3. What are the weaknesses of the paper, especially regarding experimentation and clarity? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a new method for self-supervised representation learning based on data augmentation. Unlike prior work in this line of research, the proposed method considers semantic similarity (e.g., class equivalence) between different instances as well as semantic invariance between different views of the same instance. The key idea is that if two training instances are semantically close to each other, their augmented versions (i.e., different views) will be largely overlapped. Conceptually, the idea can be implemented by comparing posterior probabilities of augmented examples given original ones, called augmentation features, which however is computationally intractable due to the prohibitively large number of potential augmented examples and the difficulty of estimating such probabilities. The authors thus propose to simulate principal component analysis (PCA) of augmentation features through a neural network encoder trained by two loss functions; they also prove that the loss functions allow the encoder to produce PCA projection results without computing the augmentation features explicitly. The proposed method outperforms prior work in the same direction and provides theoretical interpretation of contrastive methods for self-supervised representation learning. Strengths And Weaknesses [Strengths] The main idea about relations between semantic similarity and similarity between augmentation distributions is new in self-supervised representation learning and sounds reasonable. The idea also has been justified empirically. The way of simulating the direct yet impractical implementation of the idea using a neural network and two loss functions is interesting, solid (its validity is proven in the appendix), and worth to be introduced to the community. The proposed method achieved best scores on multiple benchmarks for self-supervised representation learning. [Weaknesses] Clarity issues: (1) The third paragraph of Section 1 is hard to grasp; it will be useful if a figure that conceptually illustrates the main idea is added. (2) The notion of augmented sample x is not clear and seems not consistent in Section 3 and 4. In Section 3, x is introduced as if it is a new view of a natural image (i.e., an augmented version of the natural image), but in Section 4 it is coupled with all natural images to form posterior probabilities. Lack of experiments: The proposed method needs to be compared with latest approaches to self-supervised representation learning. Also, I would recommend evaluating the proposed method on other epochs (e.g., 200, 400, 800) like prior work for the ImageNet linear classification benchmark. Moreover, advantages of the learned model have to be also validated for downstream tasks other than image classification, i.e., transfer learning setting for object detection and instance segmentation. Clarity, Quality, Novelty And Reproducibility Clarity: The manuscript has some clarity issues as commented above, but overall it is written clearly. Reproducibility: The paper elaborates on technical details for reproducing the reported results. Novelty: The main idea of this paper is new and validated both theoretically and empirically. Overall, I believe the paper meets the standard of ICLR in terms of quality.
ICLR
Title Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps Abstract Self-supervised learning aims to learn a embedding space where semantically similar samples are close. Contrastive learning methods pull views of samples together and push different samples away, which utilizes semantic invariance of augmentation but ignores the relationship between samples. To better exploit the power of augmentation, we observe that semantically similar samples are more likely to have similar augmented views. Therefore, we can take the augmented views as a special description of a sample. In this paper, we model such a description as the augmentation distribution, and we call it augmentation feature. The similarity in augmentation feature reflects how much the views of two samples overlap and is related to their semantical similarity. Without computational burdens to explicitly estimate values of the augmentation feature, we propose Augmentation Component Analysis (ACA) with a contrastive-like loss to learn principal components and an on-the-fly projection loss to embed data. ACA equals an efficient dimension reduction by PCA and extracts low-dimensional embeddings, theoretically preserving the similarity of augmentation distribution between samples. Empirical results show that our method can achieve competitive results against various traditional contrastive learning methods on different benchmarks. Code available at https://github.com/hanlu-nju/AugCA. 1 INTRODUCTION The rapid development of contrastive learning has pushed self-supervised representation learning to unprecedented success. Many contrastive learning methods surpass traditional pretext-based methods by a large margin and even outperform representation learned by supervised learning (Wu et al., 2018; van den Oord et al., 2018; Tian et al., 2020a; He et al., 2020; Chen et al., 2020a;c). The key idea of self-supervised contrastive learning is to construct views of samples via modern data augmentations (Chen et al., 2020a). Then discriminative embeddings are learned by pulling together views of the same sample in the embedding space while pushing apart views of others. Contrastive learning methods utilize the semantic invariance between views of the same sample, but the semantic relationship between samples is ignored. Instead of measuring the similarity between certain augmented views of samples, we claim that the similarity between the augmentation distributions of samples can reveal the sample-wise similarity better. In other words, semantically similar samples have similar sets of views. As shown in Figure 1 left, two images of deer create many similar crops, and sets of their augmentation results, i.e., their distributions, overlap much. In contrast, a car image will rarely be augmented to the same crop as a deer, and their augmentation distributions overlap little. In Figure 1 right, we verify the motivation numerically. We approximate the overlaps between image augmentations with a classical image matching algorithm (Zitova & Flusser, 2003), which counts the portion of the key points matched in the raw images. We find samples of the same class overlap more than different classes on average, supporting our motivation. Therefore, we establish the semantic relationship between samples in an unsupervised manner based on the similarity of augmentation distributions, i.e., how much they overlap. In this paper, we propose to describe data directly by their augmentation distributions. 
We call the feature of this kind the augmentation feature. The elements of the augmentation feature represent the probability of getting a certain view by augmenting the sample as shown in the left of Figure 2. The augmentation feature serves as an “ideal” representation since it encodes the augmentation information without any loss and we can easily obtain the overlap of two samples from it. However, not only its elements are hard to calculate, but also such high-dimensional embeddings are impractical to use. Inspired by the classical strategy to deal with high-dimensional data, we propose Augmentation Component Analysis (ACA), which employs the idea of PCA (Hotelling, 1933) to perform dimension reduction on augmentation features previously mentioned. ACA reformulates the steps of extracting principal components of the augmentation features with a contrastive-like loss. With the learned principal components, another on-the-fly loss embeds samples effectively. ACA learns operable low-dimensional embeddings theoretically preserving the augmentation distribution distances. In addition, the similarity between the objectives of ACA and traditional contrastive loss may explain why contrastive learning can learn semantic-related embeddings – they embed samples into spaces that partially preserve augmentation distributions. Experiments on synthetic and real-world datasets demonstrate that our ACA achieves competitive results against various traditional contrastive learning methods. Our contributions are as follows: • We propose a new self-supervised strategy, which measures sample-wise similarity via the similarity of augmentation distributions. This new aspect facilitates learning embeddings. • We propose ACA method that implicitly employs the dimension reduction over the augmentation feature, and the learned embeddings preserve augmentation similarity between samples. • Benefiting from the resemblance to contrastive loss, our ACA helps explain the functionality of contrastive learning and why they can learn semantically meaningful embeddings. 2 RELATED WORK Self-Supervised Learning. Learning effective visual representations without human supervision is a long-standing problem. Self-supervised learning methods solve this problem by creating supervision from the data itself instead of human labelers. The model needs to solve a pretext task before it is used for the downstream tasks. For example, in computer vision, the pretext tasks include colorizing grayscale images (Zhang et al., 2016), inpainting images (Pathak et al., 2016), predicting relative patch (Doersch et al., 2015), solving jigsaw puzzles (Noroozi & Favaro, 2016), predicting rotations (Gidaris et al., 2018) and exploiting generative models (Goodfellow et al., 2014; Kingma & Welling, 2014; Donahue & Simonyan, 2019). Self-supervised learning also achieves great success in natural language processing (Mikolov et al., 2013; Devlin et al., 2019). Contrastive Learning and Non-Contrastive Methods. Contrastive approaches have been one of the most prominent representation learning strategies in self-supervised learning. Similar to the metric learning in supervised scenarios (Ye et al., 2019; 2020), these approaches maximize the agreement between positive pairs and minimize the agreement between negative pairs. 
Positive pairs are commonly constructed by co-occurrence (van den Oord et al., 2018; Tian et al., 2020a; Bachman et al., 2019) or augmentation of the same sample (He et al., 2020; Chen et al., 2020a;c; Li et al., 2021; Ye et al., 2023), while all the other samples are taken as negatives. Most of these methods employ the InfoNCE loss (van den Oord et al., 2018), which acts as a lower bound of mutual information between views. Based on this idea, there are several methods that attempt to improve contrastive learning, including mining nearest neighbour (Dwibedi et al., 2021; ?; Azabou et al., 2021) and creating extra views by mixing up (Kalantidis et al., 2020) or adversarial training (Hu et al., 2021). Another stream of methods employs a similar idea of contrastive learning to pull views of a sample together without using negative samples (Grill et al., 2020; Chen & He, 2021). Barlow Twins (Zbontar et al., 2021) minimizes the redundancy within the representation vector. Tsai et al. (2021) reveals the relationship among Barlow Twins, contrastive and non-contrastive methods. Most of these methods only utilize the semantic invariance of augmentation and ignore the relationship between samples. Different from them, we propose a new way to perform self-supervised learning by preserving the similarity of augmentation distribution, based on the observation that a strong correlation exists between the similarity of augmentation distributions and the similarity of semantics. Explanation of Contrastive Learning. Several works provide empirical or theoretical results for explaining the behavior of contrastive learning. Tian et al. (2020b); Xiao et al. (2021) explore the role of augmentation and show contrastive model can extract useful information from views but also can be affected by nuisance information. Zhao et al. (2021) empirically shows that contrastive learning preserves low-level or middle-level instance information. In theoretical studies, Saunshi et al. (2019) provide guarantees of downstream linear classification tasks under conditionally independence assumption. Other works weaken the assumption but are still unrealistic (Lee et al., 2021; Tosh et al., 2021). HaoChen et al. (2021) focus on how views of different samples are connected by the augmentation process and provide guarantees with certain connectivity assumptions. Wang et al. (2022) notice that the augmentation overlap provides a ladder for gradually learning class-separated representations. In addition to the alignment and uniformity as shown by Wang & Isola (2020), Huang et al. (2021) develop theories on the crucial effect of data augmentation on the generalization of contrastive learning. Hu et al. (2022) explain that the contrastive loss is implicitly doing SNE with “positive” pairs constructed from data augmentation. Inspired by the important role of augmentation, we provide a novel self-supervised method that ensures preserving augmentation overlap. 3 NOTATIONS The set of all natural data (data without augmentation) is denoted by X̄ , with size |X̄ | = N . We assume that the natural data follow a uniform distribution p(x̄) on X̄ , i.e., p(x̄) = 1N ,∀x̄ ∈ X̄ . By applying an augmentation method A, a natural sample x̄ ∈ X̄ could be augmented to another sample x with probability pA(x | x̄), so we use p(· | x̄) to encode the augmentation distribution. 1 For example, if x̄ is an image, then A can be common augmentations like Gaussian blur, color distortion and random cropping (Chen et al., 2020a). 
Denote the set of all possible augmented data as X . We assume X has finite size |X | = L and L > N for ease of exposition. Note that N and L are finite, but can be arbitrarily large. We denote the encoder as fθ, parameterized by θ, which projects a sample x to an embedding vector in Rk. 4 LEARNING VIA AUGMENTATION OVERLAPS As we mentioned in Section 1, measuring the similarity between the augmentation distributions, i.e., the overlap of the augmented results of the two samples reveals their semantic relationship well. For example, in natural language processing, we usually generate augmented sentences by dropping out some words. Then different sentences with similar meanings are likely to contain the same set of words and thus have a high probability of creating similar augmented data. With the help of this self-supervision, we formulate the embedding learning task to meet the following similarity preserving condition: dRk (fθ⋆ (x̄1) , fθ⋆ (x̄2)) ∝ dA(p(· | x̄1), p(· | x̄2)) . (1) dRk is a distance measure in the embedding space Rk, and dA measures the distance between two augmentation distributions. Equation (1) requires the learned embedding with the optimal parameter θ⋆ has the same similarity comparison with that measured by the augmentation distributions. In this section, we first introduce the augmentation feature for each sample, which is a manually designed embedding satisfying the condition in Equation (1). To handle the high dimensionality and complexity of the augmentation feature, we further propose our Augmentation Component Analysis (ACA) that learns to reduce the dimensionality and preserve the similarity. 1Note that p(· | x̄) is usually difficult to compute and we can only sample from it. We omit the subscript A and directly use p(· | x̄) in the following content for convenient 4.1 AUGMENTATION FEATURE To reach the goal of similarity preserving in Equation (1), a direct way is to manually construct the feature by the augmentation distributions of each natural sample, i.e., f(x̄) = [p(x1 | x̄), . . . , p(xL | x̄)]⊤, where each element p(xi | x̄) represents the probability of getting a certain element xi in space X by augmenting x̄. We omit θ in f(x̄) since such augmentation feature2 does not rely on any learnable parameters. In this case, any distance dRL defined in the space of f is exactly a valid distribution distance, which reveals the augmentation overlaps and is related to the semantic similarity. Although the constructive augmentation feature naturally satisfies the similarity preserving condition (Equation (1)) (because it directly use the augmentation distribution without loss of information), it is impractical for the following reasons. First, its dimensionality is exponentially high, which is up to L, the number of possible augmented results. For example, even on CIFAR10, the small-scale dataset with image size 32× 32× 3, L is up to 2563072 (3072 pixels and 256 possible pixel values). Second, the computation of each element is intractable. We may need an exponentially large number of samples to accurately estimate each p(x | x̄). The dimensionality and computation problems make the augmentation feature impractical both at inference and training time. Such inconvenience motivates us to (1) conduct certain dimension reduction to preserve the information in low dimensional space (Section 4.2) and (2) develop an efficient algorithm for dimension reduction (Section 4.3). 
4.2 DIMENSION REDUCTION ON AUGMENTATION FEATURES To deal with the high-dimensional property, we employ the idea of PCA (Hotelling, 1933), which reconstructs the data with principal components.3 For convenience, we denote the design matrix of augmentation feature by A, where A ∈ RN×L, Ax̄,x = p(x | x̄) (see Figure 2). We perform PCA on a transformed augmentation feature called normalized augmentation feature:  = AD− 1 2 , (2) where D = diag([dx1 , dx2 , . . . , dxL ]), dx = ∑ x̄ p(x | x̄). Based on normalized augmentation feature, we can develop an efficient algorithm for similarity preserving embeddings. Assume the SVD of  = UΣV ⊤ with U ∈ RN×N , Σ ∈ RN×L, V ∈ RL×L , PCA first learns the projection matrix consisting of the top-k right singular vectors, which can be denoted as Ṽ ∈ RL×k. The vectors in Ṽ are called Principal Components (PCs). Then, it projects the feature by ÂṼ to get the embeddings for each sample. The overall procedure is illustrated at the top-right of Figure 2. But performing PCA on the augmentation feature will encounter many obstacles. The element of augmentation feature is not possible to estimate accurately, not to mention its high dimensionality. 2Following the common knowledge in dimension reduction, we call the raw high dimensional representation as “feature”, and learned low-dimensional representation as “embedding”. 3In this paper, we use the non-centred version (Reyment & Jvreskog, 1996), which is more appropriate for observations than for variables, where the origin matters more. Even if we can somehow get the projection matrix Ṽ , it is also impractical to project the highdimensional matrix Â. For this reason, we propose ACA to make PC learning and projection process efficient without explicitly calculating elements of augmentation feature. 4.3 AUGMENTATION COMPONENT ANALYSIS Although there are several obstacles when performing PCA on the augmentation features directly, fortunately, it is efficient to sample from the augmentation distribution p(x | x̄), i.e., by performing augmentation on the natural data x̄ and get an augmented sample x. Being aware of this, our ACA uses two practical losses to simulate the PCA process efficiently by sampling. The first contrastivelike loss leads the encoder to learn principal components of Â, which can be efficiently optimized by sampling like traditional contrastive methods. The second loss performs on-the-fly projection of  through the training trajectory, which solves the difficulty of high dimensional projection. Learning principal components. ACA learns the principal components by an efficient contrastivelike loss. Besides its projection functionality, these learned principal components can also serve as embeddings that preserve a kind of posterior distribution similarity, as we will show later. In the SVD view, UΣ serves as the PCA projection results for samples and V contains the principal components (Jolliffe, 2002). However, if changing our view, V Σ can be seen as the representation of each column. Since each column of  encodes the probability of the augmented data given natural data, V Σ preserves certain augmentation relationships, as we will show in Theorem 4.2 later. To leverage the extrapolation power of encoders like deep neural networks, we choose to design a loss that can guide the parameterized encoder fθ to learn similar embeddings as PCA. Inspired by the rank minimization view of PCA (Vidal et al., 2016), we employ the low-rank approximation objective with matrix factorization, similar to HaoChen et al. 
(2021): min F∈RL×k Lmf = ∥Â⊤Â− FF⊤∥2F , (3) where columns of F store the scaled version of top-k right singular vectors, and each row can be seen as the embedding of augmented data as will show in Lemma 4.1. According to Eckart–Young–Mirsky theorem (Eckart & Young, 1936), by optimizing Lmf , we can get the optimal F̂ , which has the form Ṽ Σ̃Q, Q ∈ Rk×k is an orthonormal matrix. Σ̃ and Ṽ contains the top-k singular values and right singular vectors. By expanding Equation (3), we get Augmentation Component Analysis Loss for learning Principal Components (ACA-PC) in the following lemma: Lemma 4.1 (ACA-PC loss). Let Fx,: = √ dxf ⊤ θ (x),∀x ∈ X . Minimizing Lmf is equivalent to minimizing the following objective: LACA-PC =− 2E x̄∼p(x̄),xi∼p(xi|x̄) xj∼p(xj |x̄) fθ(xi) ⊤fθ(xj) +NEx1∼pA(x1),x2∼pA(x2) [( fθ(x1) ⊤fθ(x2) )2] . (4) The proof can be found in Appendix F. In ACA-PC, the first term is the common alignment loss for augmented data and the second term is a form of uniformity loss (Wang & Isola, 2020). Both terms can be estimated by Monte-Carlo sampling. ACA-PC is a kind of contrastive loss. But unlike most of the others, it has theoretical meanings. We note that the form of ACA-PC differs from spectral loss (HaoChen et al., 2021) by adding a constant N before the uniformity term. This term is similar to the noise strength in NCE (Gutmann & Hyvärinen, 2010) or the number of negative samples in InfoNCE (van den Oord et al., 2018). It can be proved that the learned embeddings by ACA-PC preserve the posterior distribution distances between augmented data: Theorem 4.2 (Almost isometry for posterior distances). Assume fθ is a universal encoder, σk+1 is the (k + 1)-th largest singular value of Â, dmin = minx dx, and δx1x2 = I(x1 = x2), the minimizer θ∗ of LACA−PC satisfies: d2post(x1,x2)− 2σ2k+1 dmin (1− δx1x2) ≤ ∥fθ∗(x1)− fθ∗(x2)∥22 ≤ d2post(x1,x2) , ∀x1,x2 ∈ X where the posterior distance d2post(x1,x2) = ∑ x̄∈X̄ (pA(x̄ | x1)− pA(x̄ | x2))2 (5) measures the squared Euclidean distance between the posterior distribution pA(x̄ | x) = p(x|x̄)p(x̄)pA(x) . We give the proof in Appendix G. Theorem 4.2 states that the optimal encoder for ACA-PC preserves the distance of posterior distributions between augmented data within an error related to embedding size k. As k increase to N , the error decrease to 0. It corresponds to the phenomenon that a larger embedding size leads to better contrastive performance (Chen et al., 2020a). The posterior distribution pA(x̄ | x) represents the probability that a given augmented sample x is created by a natural sample x̄. Augmented data that are only produced by the same natural sample will have the smallest distance, and embeddings of those in overlapped areas will be pulled together by ACA-PC. Since the overlapped area are usually created by two same-class samples, ACA-PC can form semantically meaningful embedding space. It is also noticeable that the optimal encoder meets the similarity preserving condition (Equation (1)) but concerning the posterior distribution for augmented data not the augmentation distribution for natural data. Since what we care about is the distribution of natural data, we further propose a projection loss that helps learn good embeddings for all the natural data. On-the-fly Projection. As stated in the previous part, the learned embeddings by ACA-PC not only serve as certain embeddings for augmented data but also contain principal components of normalized augmentation feature. 
Based on this, we propose to use these embeddings to act as a projection operator to ensure meaningful embeddings for all the natural data. To be specific, denote the embedding matrix for all augmented data as F aug(∈ RL×k), where each row F augx,: = f⊤θ∗(x). From Equation (3) and F̂x,: = √ dxf ⊤ θ∗(x), it can be easily seen that: F aug = D− 1 2 F̂ = D− 1 2 Ṽ Σ̃Q Similar to PCA (Hotelling, 1933) that projects the original feature by the principal components V , we propose to use F aug to project the augmentation feature to get the embeddings for each natural sample. Denote the embedding matrix for natural data as Fnat(∈ RN×k), where each row Fnatx̄,: represents the embeddings of x̄. We compute Fnat as follows: Fnat = AF aug = ÂD 1 2D− 1 2 Ṽ Σ̃Q = (Ũ Σ̃)Σ̃Q, (6) where Σ̃,Ũ contain the top-k singular values and corresponding left singular vectors. It is noticeable that Fnat is exactly the PCA projection result multiplied by an additional matrix Σ̃Q. Fortunately, such additional linear transformation does not affect the linear probe performance (HaoChen et al., 2021). With Equation (6), the embedding of each natural sample can be computed as follows: Fnatx̄,: = Ax̄,:F aug = ∑ x p(x | x̄)f⊤θ∗(x) = Ex∼p(x|x̄)f⊤θ∗(x) (7) which is exactly the expected feature over the augmentation distribution. Similar to Theorem 4.2, the embeddings calculated by Equation (7) also present a certain isometry property: Theorem 4.3 (Almost isometry for weighted augmentation distances). Assume fθ is a universal encoder, σk+1 is the (k + 1)-th largest sigular value of Â,δx̄1x̄2 = I(x̄1 = x̄2), let the minimizer of LACA−PC be θ∗ and g(x̄) = Ex∼p(x|x̄)fθ∗(x) as in Equation (7), then: d2w-aug(x̄1, x̄2)− 2σ2k+1 (1− δx̄1x̄2) ≤ ∥g(x̄1)− g(x̄2)∥2Σ−2k ≤ d 2 w-aug(x̄1, x̄2) , ∀x1,x2 ∈ X where ∥·∥Σ−2k represent the Mahalanobis distance with matrix Σ −2 k ,Σk = diag([σ1, σ2, . . . , σk]) is the diagonal matrix containing top-k singular values and the weighted augmentation distance d2w-aug(x̄1, x̄2) = 1 N ∑ x∈X (p(x | x̄1)− p(x | x̄2))2 pA(x) (8) measures the weighted squared Euclidean distance between the augmentation distribution p(x | x̄). Different from Theorem 4.2, which presents isometry between Euclidean distances in embeddings and augmentation distribution, Theorem 4.3 presents isometry between Mahalanobis distances. The weighted augmentation distances weigh the Euclidean distances by pA(x). dw-aug can be regarded as a valid augmentation distance measure dA as in Equation (1) and Fnat preserve such a distance. So our goal is to make embeddings of x̄ approaches Ep(x|x̄)fθ⋆(x). However, as stated before, the additional projection process is not efficient, i.e., we need exponentially many samples from p(x | x̄). We notice that samples during the training process of ACA-PC can be reused. For this reason, we propose an on-the-fly projection loss that directly uses the current encoder for projection: Lproj = Ex̄∼p(x̄) [ ∥fθ(x̄)− Ep(x|x̄)fθ(x)∥22 ] (9) Full objective of ACA. Based on the discussion of the above parts, ACA simultaneously learns the principal components by ACA-PC and projects natural data by an on-the-fly projection loss. The full objective of ACA has the following form: LACA-Full = LACA-PC + αLproj (10) where α is a trade-off hyperparameter. We also find N in Equation (4) too large for stable training, so we replace it with a tunable hyperparameter K. Here, we only display the loss in expectation forms. The details of the implementation are described in Appendix A. 
5 A PILOT STUDY In this section, we experiment with our Augmentation Component Analysis method on synthetic data drawn from a mixture of Gaussians, using Gaussian noise as the augmentation. In this example, we aim to show the relationship between semantic similarity and posterior/weighted augmentation distances. We also show the effectiveness of our method compared to traditional contrastive learning. The natural data x̄ are sampled from a mixture of Gaussians with c components: p(x̄) = Σ_{i=1}^{c} π_i N(µ_i, s_i I). We use Gaussian noise as the data augmentation of a natural sample, i.e., A(x̄) = x̄ + ξ, where ξ ∼ N(0, s_a I). Concretely, we conduct our experiment on 2-D data with c = 4, π_i = 1/c, s_i = 1, and µ_i uniformly distributed on a circle of radius 2. For each component, we sample 200 natural data points with the index of the component as their label. For each natural datum, we augment it 2 times with s_a = 4, which results in a total of 1,600 augmented samples. We compute the augmentation probability p(x | x̄) between each x and x̄, and normalize the probabilities for each x̄. First, we plot the distribution of posterior distances (Equation (5)) for pairs of augmented data and weighted augmentation distances (Equation (8)) for pairs of natural data in Figure 3 (left). The two distances appear to have similar distributions because the synthetic data are Gaussian. Data from the same component tend to have small distances, while data from different components have large distances. Pairs in the low-distance region are mostly from the same class, which means that the two distances are reliable metrics for judging semantic similarity. In all, this figure reveals the correlation between semantic similarity and posterior/weighted augmentation distances. Second, we compare our methods with SimCLR (Chen et al., 2020a), a traditional contrastive method, and Spectral (HaoChen et al., 2021), which similarly learns embeddings with spectral theory. We test the learned embeddings using a logistic regression classifier and report the error rate of the predictions in Figure 3 (right). We also report performance when directly using the augmentation feature (AF). First, AF is discriminative even for a simple linear classifier. SimCLR and Spectral tend to underperform AF as the embedding size increases, while our methods consistently outperform it. This may seem surprising, since our method performs dimension reduction on this feature; but note that as the embedding size increases, the complexity of the linear model also increases, which affects generalizability. All the methods in Figure 3 (right) show degradation of this kind. However, our methods consistently outperform the others, which shows the superiority of ACA. Additionally, by adding the projection loss, ACA-Full improves over ACA-PC by a margin. Traditional contrastive learning methods such as SimCLR achieve performance in the same range as ours, which suggests that traditional contrastive learning serves a similar function to our method. 6 EXPERIMENTS 6.1 SETUP Dataset. We conduct experiments mainly on the following datasets, using 4 RTX-3090 GPUs. CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009): two datasets, each containing 60K images of size 32 × 32, from 10 and 100 classes respectively. STL-10 (Coates et al., 2011): derived from ImageNet (Deng et al., 2009), with 96 × 96 resolution images and 5K labeled training images from 10 classes. Additionally, 100K unlabeled images are used for unsupervised learning.
Tiny ImageNet: a reduced version of ImageNet (Deng et al., 2009), composed of 100K images scaled down to 64 × 64, from 200 classes. ImageNet-100 (Tian et al., 2020a): a subset of ImageNet with 100 classes. ImageNet (Deng et al., 2009): the large-scale dataset with 1K classes. Network Structure. Following common practice (Chen et al., 2020a;b;c), we use the encoder-projector structure during training, where the projector maps the embeddings into a low-dimensional space. For CIFAR-10 and CIFAR-100, we use the CIFAR variant of ResNet-18 (He et al., 2016; Chen & He, 2021) as the encoder. We use a two-layer MLP as the projector, whose hidden dimension is half of the input dimension and whose output dimension is 64. For STL-10 and Tiny ImageNet, only the max-pooling layer of the encoder is disabled, following Chen & He (2021) and Ermolov et al. (2021). For these two datasets, we use the same projector structure, except that the output dimension is 128. For ImageNet, we use ResNet-50 with the same projector as Chen et al. (2020a). Image Transformation. Following the common practice of contrastive learning (Chen et al., 2020a), we apply the following augmentations sequentially during training: (a) crops of a random size; (b) random horizontal flipping; (c) color jittering; (d) grayscaling. For ImageNet-100 and ImageNet, we use the same implementation as Chen et al. (2020a). Optimizer and other Hyper-parameters. For all datasets except ImageNet, we use the Adam optimizer (Kingma & Ba, 2015). For CIFAR-10 and CIFAR-100, we train for 800 epochs with a learning rate of 3 × 10−3. For Tiny ImageNet and STL-10, we train for 1,000 epochs with a learning rate of 2 × 10−3. We decay the learning rate by a factor of 0.1 at 100, 50, and 20 epochs before the end of training. Due to hardware resource restrictions, we use a mini-batch size of 512. The weight decay is 1 × 10−6 if not specified otherwise. Following common practice in contrastive learning, we normalize the projected features onto the unit sphere. For CIFAR-10, we use α = 1. For the remaining datasets, we use α = 0.2. By default, K is set to 2. For ImageNet, we use the same hyperparameters as Chen et al. (2020a), except that the batch size is 256, α = 0.2, and K = 2. Evaluation Protocol. We evaluate the learned representations with the two most commonly used protocols: linear classification (Zhang et al., 2016; Kolesnikov et al., 2019) and a k-nearest-neighbors classifier (Chen & He, 2021). In all experiments, we train the linear classifier for 100 epochs, with a learning rate that decays exponentially from 10−2 to 10−6 and a weight decay of 1 × 10−6. We report the classification accuracy on test embeddings, as well as the accuracy of a 5-nearest-neighbors classifier, for all datasets except ImageNet. 6.2 PERFORMANCE COMPARISON In Table 1, we compare linear probe performance on various small-scale and mid-scale benchmarks against several methods, including SimCLR (Chen et al., 2020a), BYOL (Grill et al., 2020), SimSiam (Chen & He, 2021), and Spectral (HaoChen et al., 2021). For transfer learning benchmarks, please refer to Appendix D and Appendix E. SimCLR is a method based on a contrastive loss. BYOL and SimSiam do not use negative samples. Spectral uses a related loss derived from the idea of spectral clustering. From Table 1, we can see that our ACA-Full method achieves competitive results on small- and mid-scale benchmarks, achieving either the best or the second-best results on all benchmarks except the 5-NN evaluation on STL-10. Also, ACA-PC differs from ACA-Full only in the projection loss.
In all the benchmarks, we can see that the projection loss improves performance. For large-scale benchmarks, we compare several methods on ImageNet-100 and ImageNet. On ImageNet-100, we compare our method additionally to MoCo (He et al., 2020), Lalign + Luniform (Wang & Isola, 2020) and InfoMin (Tian et al., 2020b). Note that the results of the other three methods are reported when using the ResNet-50 encoder, which has more capacity than ResNet18. Our method can also achieve state-of-the-art results among them. This means that our method is also effective with relatively small encoders even on large-scale datasets. On ImageNet, we see that ACA-PC achieves competitive performance against state-of-the-art contrastive methods (Chen et al., 2020a;c; Grill et al., 2020; Chen & He, 2021; HaoChen et al., 2021) and ACA-Full achieves the best. 7 CONCLUSION AND FUTURE WORK In this paper, we provide a new way of constructing self-supervised contrastive learning tasks by modeling similarity through augmentation overlap, which is motivated by the observation that semantically similar data usually creates similar augmentations. We propose Augmentation Component Analysis to perform PCA on augmentation feature efficiently. Interestingly, our methods have a similar form as the traditional contrastive loss and may explain the ability of contrastive loss. We hope our paper can inspire more thoughts about how to measure similarity in self-supervised learning and how to construct contrastive learning tasks. Future studies may be explorations of applying ACA to learn representations of other forms of instances, such as tasks (Achille et al., 2019) and models (Wu et al., 2023). ACKNOWLEDGE This research was supported by NSFC (61773198, 62006112,61921006), Collaborative Innovation Center of Novel Software Technology and Industrialization, NSF of Jiangsu Province (BK20200313) B EFFECT OF AUGMENTATION OVERLAPS Like contrastive learning, our method relies on the quality of augmentation. Therefore, we investigate the influence of different augmentations and reveal the relationship between distribution difference and the linear probe performance on CIFAR10. The augmentation distribution is estimated by augmenting 106 times for a subset of random 2000 pairs of samples with the number of intra-class and inter-class pairs being 1000 respectively. Note that as is stated in Section 4.1, even on CIFAR10, the actual value of L is exponentially large (up to 2563072). It is impossible to accurately estimate a distribution over so many possible values. But we notice that for neural networks, many operators can reduce the possible number of values, like convolutions and poolings. Following this observation and to make the computation efficient, we descrete the color into 8-bit for each channel and use a max pooling operation to get a 4× 4 picture. by this kind of approximation, the number of L reduces to 848. Seems still too large, but it can be noted that the augmentation distribution of each sample covers only a small region. It is enough to estimate the distribution by sampling. For memory restriction, we cannot fully estimate the weighted augmentation distance in Theorem 4.3. Because we cannot store all possible values for pA(x). Instead, we use the Hellinger distance as the distribution distance measure: d2H(x̄1, x̄2) = 1 N ∑ x∈X (√ p(x | x̄1)− √ p(x | x̄2) )2 Hellinger distance ranges [0, 2], making the comparison clear. We list the experimented augmentation here: 1. 
Grayscale: Randomly change the color into gray with probability of 0.1. 2. HorizontalFlip: Randomly flip horizontally with probability 0.5. 3. Rotation: Randomly rotate image with uniformly distributed angle in [0, π] 4. ColorJitter: Jitter (brightness, contrast, saturation, hue) with strength (0.4, 0.4, 0.4, 0.1) and probability 0.8. In Table 3, we display the histogram (HIST) of intra- and inter-class augmentation distribution distances. ACC displays the linear probe performance on the test set. From the table, the following requirements for a good augmentation can be concluded: (1) Existence of overlap. For the upper three augmentations. The “scope” of augmentation is small. As a result, most of the samples do not overlap. This makes embeddings lack the discriminative ability for downstream tasks. On the contrary, the lower three create overlaps for most of the samples, leading to much better performance. (2) Intra-class distance is lower than inter-class. Compared to ColorJitter, ResizedCrop makes more intra-class samples have lower distance. So ResizedCrop outperforms ColorJitter. SimCLR augmentation surpasses these two for the same reason. Interestingly, we find that the same phenomena appear when using other contrastive methods like SimCLR. It shows that these methods somehow utilize the augmentation overlap like our method. C PERFORMANCE CURVE In this section, we illustrate the performance curve throughout training. We aim to demonstrate the functionality of projection loss and show that our ACA method leads to better performance. The compared traditional contrastive learning method is chosen to be SimCLR, for the reason that our method only differs from SimCLR in the loss, with all other things (architecture, optimizer and other shared hyperparameters) identical. Also, we do not introduce extra mechanisms like momentum encoder (BYOL, MoCo) and predictor (BYOL, SimSiam). Figure 5 shows the performance curve along with the projection loss on the CIFAR-10 dataset. The left figure shows the projection loss. We can see that in the early stage of training, the projection loss will increase. It reveals that the natural data will deviate from the center of augmentation distribution. It is harmful to the performance of the model. With the help of projection loss, the embeddings of natural data will be dragged back to their right position, the center. The mid and right figures illustrate the performance curve during training. With only ACA-PC loss, the model can only achieve similar performance during training. But the ACA-Full loss will help improve performance during training. Also, we can see that ACA starts to outperform SimCLR and ACA-PC by a considerable margin from about 50 epochs. This happens to be the epoch in which the projection loss increases to its stable level. Therefore, pulling the natural data to the center of its augmentation helps to learn better embeddings. D TRANSFER TO OTHER DATASETS Following Chen et al. (2020a), we evaluate the self-supervised pre-trained models for linear classification task on 10 datasets as it is conducted in MSF paper (Koohpayegani et al., 2021). The results are reported in Table 4. All the results other than ACA are taken from Koohpayegani et al. (2021). Although our method is trained with fewer epochs, it achieves competitive results with contrastive learning methods. Notably, it surpasses the 1000-epoch SimCLR which differs from our method only in loss. 
It shows that the embeddings learned by our method are also transferable to other downstream tasks. We think it is due to the universality of the correlation between augmentation similarity and semantical similarity across these benchmarks. E TRANSFER TO OBJECT DETECTION Following the procedure outlined in ?, we use Faster-RCNN Ren et al. (2015) for the task of object detection on PASCAL-VOC Everingham et al. (2015). We use the code provided at MoCo repository4 with default parameters. All the weights are finetuned on the trainval07+12 set and evaluated on the test07 set. We report an average over 5 runs in Table 5. Despite the shorter training epochs, our method can achieve better results than SimCLR, especially outperform by a large margin on AP75(> 1%). F PROOF OF LEMMA 4.1 For convenient, we define M := Â⊤Â. The elements of M are: Mx1x2 = ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)√ dx1 √ dx2 ,x1,x2 ∈ X (13) Expanding Equation (3), we get: Lmf = ∑ x1,x2∈X (Mx1x2 − F⊤x1Fx2) 2 = ∑ x1,x2∈X (Mx1x2 − √ dx1 √ dx2fθ(x1) ⊤fθ(x2)) 2 = const − 2 ∑ x1,x2∈X √ dx1 √ dx2Mx1x2fθ(x1) ⊤fθ(x2) + ∑ x1,x2∈X dx1dx2(fθ(x1) ⊤fθ(x2)) 2 = const − 2 ∑ x1,x2∈X ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)fθ(x1)⊤fθ(x2) + ∑ x1,x2∈X dx1dx2(fθ(x1) ⊤fθ(x2)) 2 4https://github.com/facebookresearch/moco multiply by p(x̄) = 1N and replace dx with ∑ x̄ p(x | x̄) = NpA(x). The objective becomes: min θ − 2 ∑ x1,x2∈X ∑ x̄∈X̄ p(x1 | x̄)p(x2 | x̄)p(x̄)fθ(x1)⊤fθ(x2) +N ∑ x1,x2∈X pA(x1)pA(x2)(fθ(x1) ⊤fθ(x2)) 2 = −2E x̄∼p(x̄),xi∼A(xi|x̄) xj∼A(xj |x̄) [ fθ(x1) ⊤fθ(x2) ] +NEx1∼pA(x1),x2∼pA(x2) [ (fθ(x1) ⊤fθ(x2)) 2 ] = LACA-PC G PROOF OF THEOREM 4.2 As in Appendix F, we define M := Â⊤Â. By Eckart–Young–Mirsky theorem (Eckart & Young, 1936), the minimizer F̂ of ∥M − FF⊤∥2F , must have the form V̂ Σ̂Q, where V̂ , Σ̂ contain the top-k singular values and corresponding right singular vectors of Â, Q ∈ Rk×k is some orthonormal matrix with Q⊤Q = I . Since we let Fx = √ dxfθ(x), then the minimizer θ⋆ must satisfy fθ⋆(x) = Q σ̂ ⊙ v̂(x)√ dx = Q [σ1v1(x), σ2v2(x), . . . , σkvk(x)] ⊤ √ dx . where ⊙ is the element-wise multiplication. For convenience, we use σi to denote i-th largest singular value, ui(x̄),vi(x) to denote the element of i-th left/right singular value corresponding to x̄/x . When p(x̄) = 1N , dx = NpA(x) = pA(x) p(x̄) . Then the posterior distance: d2post(x1,x2) = ∑ x̄∈X̄ (pA(x̄ | x1)− pA(x̄ | x2))2 = ∑ x̄∈X̄ ( p(x1 | x̄)p(x̄) pA(x1) − p(x1 | x̄)p(x̄) pA(x1) )2 = ∑ x̄∈X̄ ( p(x1 | x̄) dx1 − p(x2 | x̄) dx2 )2 = ∑ x̄∈X̄ ( Âx̄x1√ dx1 − Âx̄x2√ dx2 )2 = ∑ x̄∈X̄ ( N∑ i=1 σiui(x̄)vi(x1)√ dx1 − σiui(x̄)vi(x2)√ dx2 )2 = ∑ x̄∈X̄ ( N∑ i=1 σiui(x̄)( vi(x1)√ dx1 − vi(x2)√ dx2 ) )2 = ∑ x̄∈X̄ ∑ i,i′ σiui(x̄)σi′ui′(x̄)( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 ) = ∑ i,i′ σiσi′( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 ) ∑ x̄∈X̄ ui(x̄)ui′(x̄) (1) = ∑ i,i′ σiσi′( vi(x1)√ dx1 − vi(x2)√ dx2 )( vi′(x1)√ dx1 − vi ′(x2)√ dx2 )δi,i′ = N∑ i=1 σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 (14) (1) is due to the orthogonality of singular vectors. Note that: N∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 = L∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 − L∑ i=N+1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≤ L∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 = L∑ i=1 v2i (x1) dx1 + L∑ i=1 v2i (x2) dx2 − 2 L∑ i=1 vi(x1)vi(x2)√ dx1 √ dx2 = 1 dx1 + 1 dx2 − 2δx1x2√ dx1 √ dx2 (2) ≤ ( 1 dx1 + 1 dx2 )(1− δx1x2) ≤ 2 dmin (1− δx1x2) (2) can be deduced by considering conditions whether x1 = x2 or not. 
Then: ∥fθ⋆(x1)− fθ⋆(x2)∥2 = k∑ i=1 σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 =d2post(x1,x2)− N∑ i=k σ2i ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 (≤ d2post(x1,x2)) ≥d2post(x1,x2)− σ2k+1 N∑ i=k+1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≥d2post(x1,x2)− σ2k+1 N∑ i=1 ( vi(x1)√ dx1 − vi(x2)√ dx2 )2 ≥d2post(x1,x2)− 2σ2k+1 dmin (1− δx1x2) Therefore, we have proved Theorem 4.2. H PROOF OF THEOREM 4.3 similar to Appendix G, d2w-aug(x̄1, x̄2) = ∑ x∈X 1 NpA(x) (p(x | x̄1)− p(x | x̄2))2 = ∑ x∈X ( p(x | x̄1)√ NpA(x) − p(x | x̄1)√ NpA(x) )2 = ∑ x∈X ( p(x | x̄1)√ dx − p(x | x̄1)√ dx )2 = ∑ x∈X ( Âx̄1x − Âx̄2x )2 = ∑ x∈X ( N∑ i=1 σiui(x̄1)vi(x)− σiui(x̄2)vi(x) )2 = ∑ x∈X ( N∑ i=1 σi(ui(x̄1)− ui(x̄2))vi(x) )2 = ∑ x∈X ∑ i,i′ σivi(x)σi′vi′(x)(ui(x̄1)− ui(x̄2))(ui′(x̄1)− ui′(x̄2)) = ∑ i,i′ σiσi′(ui(x̄1)− ui(x2))(ui′(x̄1)− ui′(x̄2)) ∑ x∈X vi(x)vi′(x) (1) = ∑ i,i′ σiσi′(ui(x̄1)− ui(x̄2))(ui′(x̄1)− ui′(x̄2))δi,i′ = N∑ i=1 σ2i (ui(x1)− ui(x2))2 (1) is due to the orthogonality of singular vectors. And g(x̄) takes the following form: g(x̄) = Q [ σ21u1(x), σ 2 2u2(x), . . . , σ 2 kuk(x) ]⊤ . Thus, ∥g(x̄1)− g(x̄2)∥2Σ−2k = k∑ i=1 σ2i (ui(x1)− ui(x2))2 = d2w-aug(x̄1, x̄2)− N∑ i=k+1 σ2i (ui(x1)− ui(x2))2 (≤ d2w-aug(x̄1, x̄2)) ≥ d2w-aug(x̄1, x̄2)− σ2k+1 N∑ i=1 (ui(x1)− ui(x2))2 = d2w-aug(x̄1, x̄2)− 2σ2k+1(1− δx̄1x̄2) I ABLATION STUDY ON PARAMETER α AND K We conduct ablation experiments on the parameter α and K. α is the trade-off parameter between ACA-PC loss and projection loss Equation (10). K act as the noise strength for ACA-PC, which replaces N in Equation (4). Figure 6 shows the effect of α and K on different benchmarks. It can be seen that α is necessary to improve the performance of ACA-PC. A certain value of α helps the model to achieve better results. However, a too large value of α degrades the performance. The same phenomenon is the same on K. J COMPARISON OF NEAREST NEIGHBORS We randomly select 8 samples from the validation set of ImageNet-100 (Tian et al., 2020a). Then we use the encoder learned by our ACA method and SimCLR (Chen et al., 2020a) to extract features and investigate their nearest neighbors of them. The left-most column displays the selected samples and the following columns show the 5 nearest neighbors. The samples labeled as different classes are marked by the red box. We also annotate the distance between the samples and their nearest neighbors. First, we can see that even though utilizing the augmentation in a different way, ACA achieves similar results as traditional contrastive learning. Both of them can learn semantically meaningful embeddings. However, we can see that ACA tends to learn embeddings that pull together images that are similar in the input space, i.e., creating similar augmentation, while SimCLR sometimes has neighbors that seem different.
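As a rough illustration of this retrieval step, nearest neighbors in the learned embedding space can be computed with a few lines of NumPy. This is a hedged sketch rather than the script used to produce the figure; the use of Euclidean distance and the array names are assumptions on our part.

import numpy as np

def nearest_neighbors(query_emb, gallery_emb, k=5):
    """query_emb: [Q, d] embeddings of the selected query images.
    gallery_emb: [N, d] embeddings of the validation set."""
    # Pairwise Euclidean distances between every query and every gallery item.
    dists = np.linalg.norm(query_emb[:, None, :] - gallery_emb[None, :, :], axis=-1)
    # Indices of the k closest gallery items per query, plus their distances
    # (excluding exact self-matches is left to the caller).
    idx = np.argsort(dists, axis=1)[:, :k]
    return idx, np.take_along_axis(dists, idx, axis=1)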
1. What are the strengths and weaknesses of the proposed method in terms of its theoretical soundness, model performance, and comparison to previous works? 2. How does the reviewer assess the paper's organization, writing style, and clarity? 3. Are there any concerns regarding the related works section, specifically regarding the inclusion of recent works and non-contrastive methods? 4. Is there a connection between the presented method and Barlow Twins, and could the authors explore other use cases of the proposed learning method? 5. What are the limitations of the paper regarding its contribution, and how might the authors improve it?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper presented a self-supervised learning method based on the augmentation distribution of the image. Specifically, the paper is motivated by the intuition that semantically similar images tend to have similar augmented views, and proposed a method called ACA to learn the augmentation principal components of data. The method is theoretically deduced to maintain and measure the discrepancy between the augmentation distribution across samples. The method is validated on many benchmarks (cifar, stl, imagenet), and shows competitive performance when compared with several existing models. Strengths And Weaknesses Strength The proposed method is theoretically sound and complete. The deduced loss function seems new and interesting. The paper provided interesting pilot study and also interesting small experiments for motivation purposes. The evaluation experiments are in general complete and nicely conducted. The method is validated on many datasets, with sufficient benchmarking. Weaknesses Major concern: model performance The model performance on several datasets seems to be lower than previously reported results. Specifically, check [1, 2] where many numbers are greater than the presented method. Unfortunately, as the major experimental result of the presented method is framed to be about surpassing previous state-of-the-art (that is the only experiment), the presented numbers are just not satisfactory enough, which greatly weakens the paper’s contribution. The related works and the references lack more recent works. There are many contrastive learning/self-supervised learning methods in 2021 and 2022 that are not included in related works. For example, check [2-5]. The writing and organization of the paper could be improved. The abstract could be improved. Specifically, I suggest rewording the first sentence and the 4th sentence. Section 4.1 and section 4.2 are a bit wordy and might be simplified. The computation overload is mentioned many times in two adjacent sections. Some of the preliminary in section 3 could be moved into related works, especially the second paragraph, which has much overlapping information with section 2. Minor comments: I suggest rewording Section 3 first sentence. Related works the Self-Supervised learning section should mention non-contrastive methods as well, like BYOL. Instead, the paper put non-contrastive methods inside the contrastive learning section and implicitly infer them as Contrastive Learning methods without using negative samples, which is a bit controversial because BYOL is not always referred to as contrastive methods though some studies suggest that they intrinsically are. What is the connection between this method and Barlow Twins? It might be more beneficial if the authors could explore other use cases of the presented learning method rather than just performing classification. For example, if the proposed loss function could provide an “augmentation spectrum” of certain data, could that provide more informative latent space based on some other metrics when compared to other methods? [1] Ermolov, Aleksandr, et al. "Whitening for self-supervised representation learning." International Conference on Machine Learning. PMLR, 2021. [2] Azabou, Mehdi, et al. "Mine your own view: Self-supervised learning through across-sample prediction." arXiv preprint arXiv:2102.10106 (2021). [3] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." International Conference on Machine Learning. PMLR, 2021. 
[4] Tsai, Yao-Hung Hubert, et al. "A note on connecting barlow twins with negative-sample-free contrastive learning." arXiv preprint arXiv:2104.13712 (2021). [5] Kalantidis, Yannis, et al. "Hard negative mixing for contrastive learning." Advances in Neural Information Processing Systems 33 (2020): 21798-21809. Clarity, Quality, Novelty And Reproducibility The clarity is okay, there does exist room for improvement. The quality is above average. The novelty is okay, there do exist novel theoretical contributions, but neither the motivation nor the presented method is super novel.
ICLR
Title The MultiBERTs: BERT Reproductions for Robustness Analysis Abstract Experiments with pre-trained models such as BERT are often based on a single checkpoint. While the conclusions drawn apply to the artifact tested in the experiment (i.e., the particular instance of the model), it is not always clear whether they hold for the more general procedure which includes the architecture, training data, initialization scheme, and loss function. Recent work has shown that repeating the pre-training process can lead to substantially different performance, suggesting that an alternate strategy is needed to make principled statements about procedures. To enable researchers to draw more robust conclusions, we introduce the MultiBERTs, a set of 25 BERT-Base checkpoints, trained with similar hyper-parameters as the original BERT model but differing in random weight initialization and shuffling of training data. We also define the Multi-Bootstrap, a non-parametric bootstrap method for statistical inference designed for settings where there are multiple pre-trained models and limited test data. To illustrate our approach, we present a case study of gender bias in coreference resolution, in which the Multi-Bootstrap lets us measure effects that may not be detected with a single checkpoint. We release our models and statistical library, along with an additional set of 140 intermediate checkpoints captured during pre-training to facilitate research on learning dynamics. 1 INTRODUCTION Contemporary natural language processing (NLP) relies heavily on pretrained language models, which are trained using large-scale unlabeled data (Bommasani et al., 2021). BERT (Devlin et al., 2019) is a particularly popular choice: it has been widely adopted in academia and industry, and aspects of its performance have been reported on in thousands of research papers (see, e.g., Rogers et al., 2020, for an overview). Because pre-training large language models is computationally expensive (Strubell et al., 2019), researchers often rely on the release of model checkpoints through libraries such as HuggingFace Transformers (Wolf et al., 2020), which enable them to use large-scale language models without repeating the pre-training work. Consequently, most published results are based on a small number of publicly released model checkpoints. While this reuse of model checkpoints has lowered the cost of research and facilitated head-to-head comparisons, it limits our ability to draw general scientific conclusions about the performance of a particular class of models (Dror et al., 2019; D’Amour et al., 2020; Zhong et al., 2021). The key issue is that reusing model checkpoints makes it hard to generalize observations about the behavior of a single model artifact to statements about the underlying pre-training procedure which created it. Pre-training such models is an inherently stochastic process which depends on the initialization of the model’s parameters and the ordering of training examples; for example, D’Amour et al. ∗ Equal contribution. † Work done as a Google AI resident. ‡ Work done during an internship at Google. 1http://goo.gle/multiberts (2020) report substantial quantitative differences across multiple checkpoints of the same model architecture on several “stress tests” (Naik et al., 2018; McCoy et al., 2019). It is therefore difficult to know how much of the success of a model based on the original BERT checkpoint is due to BERT’s design, and how much is due to idiosyncracies of a particular artifact. 
Understanding this difference is critical if we are to generate reusable insights about deep learning for NLP, and improve the state-of-the-art going forward (Zhou et al., 2020; Dodge et al., 2020; Aribandi et al., 2021). This paper describes the MultiBERTs, an effort to facilitate more robust research on the BERT model. Our primary contributions are: • We release the MultiBERTs, a set of 25 BERT-Base, Uncased checkpoints to facilitate studies of robustness to parameter initialization and order of training examples (§2). Releasing these models preserves the benefits to the community of a single checkpoint release (i.e., low cost of experiments, apples-to-apples comparisons between studies based on these checkpoints), while enabling researchers to draw more general conclusions about the BERT pre-training procedure. • We present the Multi-Bootstrap, a non-parametric method to quantify the uncertainty of experimental results based on multiple pre-training seeds (§3), and provide recommendations for how to use the Multi-Bootstrap and MultiBERTs in typical experimental scenarios. We implement these recommendations in a software library. • We illustrate the approach with a practical use case: we investigate the impact of counterfactual data augmentation on gender bias, in a BERT-based coreference resolution systems (Webster et al., 2020) (§4). Additional examples are provided in Appendix E, where we document challenges with reproducing the widely-used original BERT checkpoint. The release also includes an additional 140 intermediate checkpoints, captured during training for 5 of the runs (28 checkpoints per run), to facilitate studies of learning dynamics. Our checkpoints and statistical libraries are available at: http://goo.gle/multiberts. Additional Related Work. The MultiBERTs release builds on top of a large body of work that seeks to analyze the behavior of BERT (Rogers et al., 2020). In addition to the studies of robustness cited above, several authors have introduced methods to reduce BERT’s variability during finetuning (Zhang et al., 2021; Mosbach et al., 2021; Dodge et al., 2020; Lee et al., 2020; Phang et al., 2018). Other authors have also studied the time dimension, which motivates our release of intermediate checkpoints (Liu et al., 2021; Hao et al., 2020; Saphra & Lopez, 2019; Chiang et al., 2020; Dodge et al., 2020). Similarly to §3, authors in the NLP literature have recommended best practices for statistical testing (Koehn, 2004; Dror et al., 2018; Berg-Kirkpatrick et al., 2012; Card et al., 2020; Søgaard et al., 2014; Peyrard et al., 2021), many of which are based on existing tests to estimate the uncertainty of test sample. In concurrent work, Deutsch et al. (2021) considered bootstrapping methods similar to the Multi-Bootstrap, in the context of summarization metrics evaluation. Also in concurrent work, the Mistral project (Karamcheti et al., 2021) released a set of 10 GPT-2 models with intermediate checkpoints at different stages of pre-training. Our work is complementary, focusing on BERT, introducing a larger number of pre-training seeds, and presenting a methodology to draw robust conclusions about model performance. 2 RELEASE DESCRIPTION We first describe the MultiBERTs release: how the checkpoints were trained and how their performance compares to the original BERT on two common language understanding benchmarks. 2.1 TRAINING Overview. The MultiBERTs checkpoints are trained following the code and procedure of Devlin et al. 
(2019), with minor hyperparameter modifications necessary to obtain comparable results on GLUE (Wang et al., 2019); a detailed discussion of these differences is provided in Appendix E. We use the BERT-Base, Uncased architecture with 12 layers and embedding size 768. We trained the models on a combination of BooksCorpus (Zhu et al., 2015) and English Wikipedia. Since the exact dataset used to train the original BERT is not available, we used a more recent version that was collected by Turc et al. (2019) with the same methodology. Checkpoints. We release 25 models trained for two million steps each, each training step involving a batch of 256 sequences. For five of these models, we release 28 additional checkpoints captured over the course of pre-training (every 20,000 training steps up to 200,000, then every 100,000 steps). In total, we release 165 checkpoints, about 68 GB of data. Training Details. As in the original BERT paper, we used batch size 256 and the Adam optimizer (Kingma & Ba, 2014) with learning rate 1e-4 and 10,000 warm-up steps. We used the default values for all the other parameters, except the number of steps, which we set to two million, and the sequence length, which we set to 512 from the beginning, with up to 80 masked tokens per sequence. Specifically, we keep the sequence length constant (the original paper uses 128 tokens for 90% of the training, then 512 for the remaining 10%) to expose the model to more tokens and to simplify the implementation. As we were not able to reproduce the original BERT exactly using either 1M or 2M steps (see Appendix E for discussion), we release MultiBERTs trained with 2M steps, under the assumption that higher-performing models are more interesting objects of study. We follow the BERT code and initialize the layer parameters from a truncated normal distribution, using mean 0 and standard deviation 0.02. We train using the same configuration as Devlin et al. (2019), i.e., https://github.com/google-research/bert with TensorFlow (Abadi et al., 2015) version 2.5 in v1 compatibility mode, with each run taking about 4.5 days on 16 Cloud TPU v2 chips. Environmental Impact Statement. We estimate compute costs at around 1728 TPU-hours for each pre-training run, and around 208 GPU-hours plus 8 TPU-hours for associated fine-tuning experiments (§2.2, including hyperparameter search and 5x replication). Using the calculations of Luccioni et al. (2019) (https://mlco2.github.io/impact/), we estimate this as about 250 kg CO2e for each of our 25 models. Counting the 25 runs each of CDA-incr and CDA-full from §4, associated coreference models (20 GPU-hours per pre-training model), and additional experiments of Appendix E, this gives a total of about 12.0 metric tons CO2e before accounting for offsets or clean energy. Based on the report by Patterson et al. (2021) of 78% carbon-free energy in Google Iowa (us-central1), we estimate that reproducing these experiments would emit closer to 2.6 tons CO2e, or slightly more than two passengers on a round-trip flight between San Francisco and New York. By releasing the trained checkpoints publicly, we aim to enable many research efforts on reproducibility and robustness without requiring this cost to be incurred for every subsequent study. 2.2 PERFORMANCE BENCHMARKS GLUE Setup. We report results on the development sets of the GLUE tasks: CoLA (Warstadt et al., 2019), MNLI (matched) (Williams et al., 2018), MRPC (Dolan & Brockett, 2005), QNLI (v2) (Rajpurkar et al., 2016; Wang et al., 2019), QQP (Chen et al., 2018), RTE (Bentivogli et al., 2009), SST-2 (Socher et al., 2013), and STS-B (Cer et al., 2017). In all cases we follow the same approach as Devlin et al. (2019). For each task, we fine-tune BERT for 3 epochs using a batch size of 32.
We run a parameter sweep on learning rates [5e-5, 4e-5, 3e-5, 2e-5] and report the best score. We run the procedure five times for each of the 25 models and average the results. SQuAD Setup. We report results on the development sets of SQuAD versions 1.1 and 2.0 (Rajpurkar et al., 2016; 2018), using a setup similar to that of Devlin et al. (2019). For both sets of experiments, we use batch size 48, learning rate 5e-5, and train for 2 epochs. Results. Figures 1 and 2 show the distribution of the MultiBERTs models' performance on the development sets of GLUE and SQuAD, in comparison to the original BERT checkpoint.5 On most tasks, original BERT's performance falls within the same range as MultiBERTs (i.e., original BERT is between the minimum and maximum of the MultiBERTs' scores). However, original BERT outperforms all MultiBERTs models on QQP, and under-performs them on SQuAD. The discrepancies may be explained by both randomness and differences in training setups, as investigated further in Appendix E. To further illustrate the performance variability inherent to pre-training and fine-tuning, we analyze the instance-level agreement between the models in Appendix C. 3 HYPOTHESIS TESTING USING MULTIPLE CHECKPOINTS The previous section compared MultiBERTs with the original BERT, finding many similarities but also some differences (e.g., in the case of SQuAD). To what extent can these results be explained by random noise? More generally, how can we quantify the uncertainty of a set of experimental results when there are multiple sources of randomness? In parallel to the MultiBERTs release, we propose a more principled and standardized method to compare training procedures. We recommend a non-parametric bootstrapping procedure, the "Multi-Bootstrap", which enables us to make inference about model performance in the face of multiple sources of uncertainty: the randomness due to the pre-training seed, the fine-tuning seed, and the finite test data. The main idea is to use the average behavior over seeds as a means of summarizing expected behavior in an ideal world with infinite samples. Although we present Multi-Bootstrap in the context of analyzing the MultiBERTs, the method could be applied in all setups that involve a set of checkpoints pre-trained with the same method, a finite test set, and (possibly) multiple rounds of fine-tuning. The Multi-Bootstrap is implemented as a Python library, included with the MultiBERTs release. 3.1 INTERPRETING STATISTICAL RESULTS The Multi-Bootstrap provides an estimate of the amount of remaining uncertainty when summarizing the performance over multiple seeds. The following notation will help us state this precisely. We assume access to model predictions f(x) for each instance x in the evaluation set. We consider randomness arising from: (1) the choice of pre-training seed S ∼ M; (2) the choice of fine-tuning seed T ∼ N; (3) the choice of test sample (X, Y) ∼ D. The Multi-Bootstrap procedure allows us to account for all of the above.
Specifically, MultiBERTs enables us to estimate the variance due to the choice of pre-training seed (1), which would not be possible with a single artifact. Note that multiple fine-tuning runs are not required in order to use the procedure. 5We used https://storage.googleapis.com/bert_models/2020_02_20/uncased_ L-12_H-768_A-12.zip, as linked from https://github.com/google-research/bert. For each pre-training seed s, let fs(x) denote the learned model’s prediction on input features x and let L(s) denote the expected performance metric of fs on a test distribution D over features X and labels Y . For example, the accuracy would be L(s) = E[1{Y = fs(X)}]. We can use the test sample (which we will assume has nx examples) to estimate the performance for each of the seeds in MultiBERTs, which we denote as L̂(s). The performance L(s) depends on the seed, but we are interested in summarizing the model over all seeds. A natural summary is the average over seeds, ES∼M [L(S)], which we will denote by θ. Then, using ns independently sampled seeds, we can compute an estimate θ̂ as θ̂ = 1 ns ns∑ j=1 L̂(Sj) . Because θ̂ is computed under a finite evaluation set and finite number of seeds, it is necessary to quantify the uncertainty of the estimate. The goal of Multi-Bootstrap is to estimate the distribution of the error in this estimate, θ̂ − θ, in order to compute confidence intervals and test hypotheses about θ, such as whether it is above some threshold of interest. Below, we describe a few common experimental designs in NLP that can be studied with these tools. Design 1: Comparison to a Fixed Baseline. In many use cases, we want to compare BERT’s behavior to that of a single, fixed baseline. For instance, does BERT encode information about syntax as a feature-engineered model would (Tenney et al., 2019; Hewitt & Manning, 2019)? Does it encode social stereotypes, and how does it compare to human biases (Nadeem et al., 2021)? Does it encode world knowledge, similarly to explicit knowledge bases (Petroni et al., 2019)? Does another model such as RoBERTa (Liu et al., 2019) outperform BERT on common tasks such as those from the GLUE benchmark? In all these cases, we compare MultiBERTs to some external baseline of which we only have a single estimate (e.g., random or human performance), or against an existing model that is not derived from the MultiBERTs checkpoints. We treat the baseline as fixed, and assess only the uncertainty that arises from MultiBERTs’ random seeds and the test examples. Design 2: Paired Samples. Alternatively, we might seek to assess the effectiveness of a specific intervention on model behavior. In such studies, an intervention is proposed (e.g., representation learning via a specific intermediate task, or a specific architecture change) which can be applied to any pre-trained BERT checkpoint. The question is whether the procedure results in an improvement over the original BERT pre-training method: does the intervention reliably produce the desired effect, or is the observed effect due to the idiosyncracies of a particular model artifact? Examples of such studies include: Does intermediate tuning on NLI after pre-training make models more robust across language understanding tasks (Phang et al., 2018)? Does pruning attention heads degrade model performance on downstream tasks (Voita et al., 2019)? Does augmenting BERT with information about semantic roles improve performance on benchmark tasks (Zhang et al., 2020)? 
We refer to studies like the above as paired since each instance of the baseline model fs (which does not receive the intervention) can be paired with an instance of the proposed model f ′s (which receives the stated intervention) such that fs and f ′ s are based on the same pretrained checkpoint produced using the same seed. Denoting θf and θf ′ as the expected performance defined above for the baseline and intervention model respectively, our goal is to test hypotheses about the true difference in performance δ = θf ′ − θf using the estimated difference δ̂ = θ̂f ′ − θ̂f . In a paired study, Multi-Bootstrap allows us to estimate both of the errors θ̂f − θf and θ̂f ′ − θf ′ , as well as the correlation between the two. Together, these allow us to approximate the distribution of the overall estimation error δ̂ − δ = (θ̂f − θ̂f ′) − (θf − θf ′), between the estimate δ̂ and the truth δ. With this, we can compute confidence intervals for δ, the true average effect of the intervention on performance over seeds, and test hypotheses about δ, as well. Design 3: Unpaired Samples. Finally, we might seek to compare a number of seeds for both the intervention and baseline models, but may not expect them to be aligned in their dependence on the seed. For example, the second model may use a different architecture so that they cannot be built from the same checkpoints, or the models may be generated from entirely separate initialization schemes. We refer to such studies as unpaired. Like in a paired study, the Multi-Bootstrap allows us to estimate the errors θ̂f − θf and θ̂f ′ − θf ′ ; however, in an unpaired study, we cannot estimate the correlation between the errors. Thus, we assume that the correlation is zero. This will give a conservative estimate of the error (θ̂f − θ̂f ′) − (θf − θf ′), as long as θ̂f − θf and θ̂f ′ − θf ′ are not negatively correlated. Since there is little reason to believe that the random seeds used for two different models would induce a negative correlation between the models’ performance, we take this assumption to be relatively safe. Hypothesis Testing. Given the measured uncertainty, we recommend testing whether or not the difference is meaningfully different from some arbitrary predefined threshold (i.e., 0 in the typical case). Specifically, we are often interested in rejecting the null hypothesis that the intervention does not improve over the baseline model, i.e., H0 : δ ≤ 0 (1) in a statistically rigorous way. This can be done with the Multi-Bootstrap procedure described below. 3.2 MULTI-BOOTSTRAP PROCEDURE The Multi-Bootstrap is a non-parametric bootstrapping procedure that allows us to estimate the distribution of the error θ̂ − θ over the seeds and test instances. The algorithm supports both paired and unpaired study designs, differentiating the two settings only in the way the sampling is performed. To keep the presentation simple, we will assume that the performance L(s) is an average of a perexample metric ℓ(x, y, fs) over the distribution D of (X,Y ), such as accuracy or the log likelihood, and L̂(s) is similarly an empirical average with the observed nx test examples, L(s) = ED[ℓ(X,Y, fs)], and L̂(s) = 1 nx nx∑ i=1 ℓ(Xi, Yi, fs). We note that the mapping D 7→ L(s) is linear in D, which is required for our result in Theorem 1. 
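As a concrete illustration of these quantities, the following short sketch, ours rather than the released library, computes the per-seed estimates L̂(s) and the seed-averaged estimate θ̂ from a matrix of per-example predictions. The 0/1 accuracy metric and the array layout (rows index test examples, columns index pre-training seeds, matching Appendix A) are illustrative assumptions.

import numpy as np

def seed_averaged_estimate(predictions, labels):
    """predictions: [n_x, n_s] array with one column of predictions per pre-training seed.
    labels: [n_x] array of gold labels."""
    # Per-example metric ℓ(x_i, y_i, f_s), here 0/1 accuracy, for every seed s.
    per_example = (predictions == labels[:, None]).astype(float)   # [n_x, n_s]
    # L_hat(s): empirical average over the n_x test examples, one value per seed.
    L_hat = per_example.mean(axis=0)                                # [n_s]
    # theta_hat: average of L_hat(s) over the n_s pre-training seeds.
    return L_hat, L_hat.mean()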
However, we conjecture that this is an artifact of the proof; like most bootstrap methods, the method here likely generalizes to any performance metric which behaves asymptotically like a linear mapping of D, including AUC, BLEU score (Papineni et al., 2002), and expected calibration error. Building on the rich literature on bootstrap methods (e.g., Efron & Tibshirani, 1994), the MultiBootstrap is a new procedure which accounts for the way that the combined randomness from the seeds and test set creates error in the estimate θ̂. The statistical underpinnings of this approach have theoretical and methodological connections to inference procedures for two-sample tests (Van der Vaart, 2000), where the samples from each population are independent. However, in those settings, the test statistics naturally differ as a result of the scientific question at hand. In our procedure, we generate a bootstrap sample from the full sample with replacement separately over both the randomness from the pre-training seed s and from the test set (X,Y ). That is, we generate a sample of pre-training seeds (S∗1 , S ∗ 2 , . . . , S ∗ ns) with each S ∗ j drawn randomly with replacement from the pre-training seeds, and we generate a test set sample ((X∗1 , Y ∗ 1 ), (X ∗ 2 , Y ∗ 2 ), . . . , (X ∗ nx , Y ∗ nx)) with each (X,Y ) pair drawn randomly with replacement from the full test set. Then, we compute the bootstrap estimate θ̂∗ as θ̂∗ = 1 ns ns∑ j=1 L̂∗(S∗j ), where L̂ ∗(s) = 1 nx nx∑ i=1 ℓ(X∗i , Y ∗ i , fs). To illustrate the procedure, we present a minimal Python implementation in Appendix A. For sufficiently large nx and ns, the distribution of the estimation error θ̂ − θ is approximated well by the distribution of θ̂∗ − θ̂ over re-draws of the bootstrap samples, as stated precisely in Theorem 1. Theorem 1. Assume that E[ℓ2(X,Y, fS)] < ∞. Furthermore, assume that for each s, E[ℓ2(X,Y, fs)] < ∞, and for almost every (x, y) pair, E[ℓ2(X,Y, fS) | X = x, Y = y] < ∞. Let n = nx +ns, and assume that 0 < ps = ns/n < 1 stays fixed (up to rounding error) as n → ∞. Then, there exists 0 < σ2 < ∞ such that √n(θ̂ − θ) d→ G with G ∼ N (0, σ2). Furthermore, conditionally on ((X1, Y1), (X2, Y2), . . . ), √ n(θ̂∗ − θ̂) d→ G. The proof of Theorem 1 is in Appendix B, along with a comment on the rate of convergence for the approximation error. The challenge with applying existing theory to our method is that while the seeds and data points are each marginally iid, the observed losses depend on both, and therefore are not iid. Therefore, we need to handle this non-iid structure in our method and proof. For nested sources of randomness (e.g., if for each pre-training seed s, we have estimates from multiple fine-tuning seeds), we average over all of the inner samples (fine-tuning seeds) in every bootstrap sample, motivated by Field & Welsh (2007)’s recommendations for bootstrapping clustered data. Paired Samples (design 2, continued). In a paired design, the Multi-Bootstrap procedure can additionally tell us the joint distribution of θ̂f ′ − θf ′ and θ̂f − θf . To do so, one must use the same bootstrap samples of the seeds (S∗1 , S ∗ 2 , . . . , S ∗ ns) and test examples ((X∗1 , Y ∗ 1 ), (X ∗ 2 , Y ∗ 2 ), . . . , (X ∗ nx , Y ∗ nx)) for both models. Then, the correlation between the errors θ̂f ′ − θf ′ and θ̂f − θf is well approximated by the correlation between the bootstrap errors θ̂∗f ′ − θ∗f ′ and θ̂∗f − θ∗f . 
In particular, recall that we defined the difference in performance between the intervention f ′ and the baseline f to be δ, and defined its estimator to be δ̂. With the Multi-Bootstrap, we can estimate the bootstrapped difference δ̂∗ = θ̂∗f ′ − θ̂∗f . With this, the distribution of the estimation error δ̂ − δ is well approximated by the distribution of δ̂∗ − δ̂ over bootstrap samples. Unpaired Samples (design 3, continued). For studies that do not match the paired format, we adapt the Multi-Bootstrap procedure so that, instead of sampling a single pre-training seed that is shared between f and f ′, we sample pre-training seeds for each one independently. The remainder of the algorithm proceeds as in the paired case. Relative to the paired design discussed above, this additionally assumes that the errors due to differences in pre-training seed between θ̂f ′ − θf ′ and θ̂f − θf are independent. Comparison to a Fixed Baseline (design 1, continued). Often, we do not have access to multiple estimates of L(s), for example, when the baseline f against which we are comparing is an estimate of human performance for which only mean accuracy was reported, or when f is the performance of a previously-published model for which there only exists a single artifact or for which we do not have direct access to model predictions. When we have only a point estimate θ̂f = L̂(S1) of θf for the baseline f with a single seed S1, we recommend using Multi-Bootstrap to compute a confidence interval around θf ′ and reporting where the given estimate of baseline performance falls within that distribution. An example of such a case is Figure 1, in which the distribution of MultiBERTs performance is compared to that from the single checkpoint of the original BERT release. In general such results should be interpreted conservatively, as we cannot make any claims about the variance of the baseline model. Hypothesis Testing. A valid p-value for the hypothesis test described in Equation 1 is the fraction of bootstrap samples from the above procedure for which the estimate δ̂ is negative. 4 APPLICATION: GENDER BIAS IN COREFERENCE SYSTEMS We present a case study to illustrate how MultiBERTs and the Multi-Bootstrap can help us draw more robust conclusions about model behavior. The use case is based on gendered correlations. For a particular measure of gender bias, we take a single BERT checkpoint and measure a value of 0.35. We then apply an intervention, foo, designed to reduce this correlation, and measure 0.25. In an effort to do even better, we create a whole new checkpoint by applying the foo procedure from the very beginning of pre-training. On this checkpoint, we measure 0.3. How does one make sense of this result? As a concrete example, we analyze gender bias in coreference systems (Rudinger et al., 2018) and showing how MultiBERTs and the Multi-Bootstrap can help us understand the effect of an intervention, counterfactual data augmentation (CDA). We follow a set-up similar to Webster et al. (2020), which augments the BERT pretraining data with counterfactual sentences created by randomly swapping English binary-gendered pronouns. The goal is to weaken the correlation between gendered pronouns and other words such as occupation terms (e.g., doctor, nurse). We compare our baseline MultiBERTs models to two strategies for CDA. In the first (CDA-incr), we continue pre-training each MultiBERTs model for an additional 50K steps on the counterfactual data of Webster et al. (2020). 
In the second, we train BERT models from scratch (CDA-full) on the same dataset. The Winogender dataset consists of template sentences covering 60 occupation terms, instantiated with either male, female, or neutral pronouns. We follow Webster et al. (2020) and train a gold-mention coreference system using a two-layer feedforward network that takes span representations from a frozen BERT encoder as input and makes binary predictions for mention-referent pairs. The model is trained on OntoNotes (Hovy et al., 2006) and evaluated on the Winogender examples for both per-sentence accuracy and a bias score, defined as the Pearson correlation between the per-occupation bias score (Figure 4 of Rudinger et al. 2018) and the occupational gender statistics from the U.S. Bureau of Labor Statistics (we use the occupation data as distributed with the Winogender dataset, https://github.com/rudinger/winogender-schemas). For each pre-training run, we train five coreference models, using the same encoder but different random seeds to initialize the classifier weights and to shuffle the training data. 4.1 PAIRED ANALYSIS: CDA-INCR VS. BASE We investigate the impact of the intervention on performance and bias. Overall accuracy is fairly consistent across pre-training seeds, at 62.6±1.2% for the base model, with only a small and not statistically significant change under CDA-incr (Table 1). However, as shown in Figure 3, there is considerable variation in bias correlation, with r values between 0.1 and 0.7 depending on the pre-training seed (some of this variation is due to the classifier training, but on this task there is a large intrinsic contribution from the pre-training seed; see Appendix D for a detailed analysis). The range for CDA-incr overlaps somewhat, with values between 0.0 and 0.4; however, because incremental CDA is an intervention on each base checkpoint, we can look at the individual seeds and see that in most cases there appears to be a significant improvement. A paired Multi-Bootstrap allows us to quantify this and further account for noise due to the finite evaluation sample of 60 occupations. The results are shown in Table 1, which shows that CDA-incr significantly reduces bias by δ̂ = −0.162 with p = 0.001. 4.2 UNPAIRED ANALYSIS: CDA-FULL VS. CDA-INCR We can also test whether we get any additional benefit from running the entire pre-training on counterfactually-augmented data. Similar to MultiBERTs, we trained 25 CDA-full checkpoints for 2M steps on the CDA dataset (following Webster et al. (2020), we use 20 masks per sequence instead of the 80 from Devlin et al. (2019)). Because these are entirely new checkpoints, independent from the base MultiBERTs runs, we use an unpaired version of the Multi-Bootstrap, which uses the same set of examples but samples pre-training seeds independently for CDA-incr and CDA-full. As shown in Table 2, overall accuracy does not change appreciably (0.622 vs. 0.623, p = 0.416), while the bias correlation seems to decrease, but not significantly (0.256 vs. 0.192, δ = -0.064 with p = 0.132). As an ablation, we also experiment with sampling over only seeds (taking the set of examples, i.e., occupations, as fixed), or over only examples (taking the set of 25 seeds as fixed). As shown in Table 2, we find lower p-values (0.005 and 0.053) in both cases, showing that failing to account for finite samples along either dimension could lead to overconfident conclusions. In Appendix E, we present two additional examples: a paired study where we increase pre-training time from 1M to 2M steps, as well as an unpaired comparison to the original bert-base-uncased checkpoint.
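To make the computation behind these p-values concrete, the following is a hedged sketch, not the released library and not necessarily matching its API, of a paired Multi-Bootstrap for comparisons such as CDA-incr versus the base models: the same resampled examples and seeds are reused for both models, the difference is computed on each bootstrap replicate, and the p-value for H0 : δ ≤ 0 is the fraction of replicates with a negative difference, as described in Section 3.2. The metric function and array names are assumptions; columns index pre-training seeds, as in Appendix A.

import numpy as np

def paired_multibootstrap(preds_base, preds_interv, labels, metric_fun, nboot=1000):
    """preds_base, preds_interv: [n_x, n_s] predictions for the baseline and the
    intervention, where column j of both matrices comes from the same pre-training seed.
    Returns the bootstrap differences and a p-value for H0: delta <= 0."""
    n_x, n_s = preds_base.shape
    deltas = np.zeros(nboot)
    for b in range(nboot):
        # Resample examples and seeds once and reuse them for both models (paired design).
        xs = np.random.choice(n_x, size=n_x, replace=True)
        ss = np.random.choice(n_s, size=n_s, replace=True)
        yb = labels[xs]
        p_base = preds_base[np.ix_(xs, ss)]
        p_int = preds_interv[np.ix_(xs, ss)]
        theta_base = np.mean([metric_fun(p_base[:, j], yb) for j in range(n_s)])
        theta_int = np.mean([metric_fun(p_int[:, j], yb) for j in range(n_s)])
        deltas[b] = theta_int - theta_base
    # Fraction of replicates in which the intervention does not improve on the baseline.
    return deltas, float(np.mean(deltas < 0))

For the unpaired comparison of Section 4.2, the seed indices would instead be resampled independently for the two models, with everything else unchanged.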
5 CONCLUSION To make progress on language model pre-training, it is essential to distinguish between the properties of specific model artifacts and those of the training procedures that generated them. To this end, we have presented two resources: the MultiBERTs, a set of 25 model checkpoints to support robust research on BERT, and the Multi-Bootstrap, a non-parametric statistical method to estimate the uncertainty of model comparisons across multiple training seeds. We demonstrated the utility of these resources by showing how to quantify the effect of an intervention to reduce a type of gender bias in coreference systems built on BERT. We hope that the release of multiple checkpoints and the use of principled hypothesis testing will become standard practices in research on pre-trained language models. A MINIMAL IMPLEMENTATION OF THE MULTI-BOOTSTRAP Below, we present a simplified Python implementation of the Multi-Bootstrap algorithm presented in Section 3.2. It describes a single-sided version of the procedure, which could be used, e.g., to test that a model's performance is greater than 0. The input is a matrix of predictions where row indices correspond to test examples and column indices to random seeds. The function returns an array of nboot samples [θ̂1, . . . , θ̂nboot].

import numpy as np

def multibootstrap(predictions, labels, metric_fun, nboot):
    """
    Generates bootstrap samples of a model's performance.

    Input:
      predictions: 2D Numpy array with the predictions for different seeds.
      labels: 1D Numpy array with the labels.
      metric_fun: Python function. Takes a pair of arrays as input, and returns a metric or loss.
      nboot: Number of bootstrap samples to generate.

    Output:
      Numpy array with nboot samples.
    """
    # Checks the data format.
    n_samples, n_seeds = predictions.shape
    assert labels.shape == (n_samples,)

    thetas = np.zeros(nboot)
    for boot_ix in range(nboot):
        # Samples n_samples test examples and n_seeds pre-training seeds.
        x_samples = np.random.choice(n_samples, size=n_samples, replace=True)
        s_samples = np.random.choice(n_seeds, size=n_seeds, replace=True)

        # Computes the metric over the bootstrapping samples.
        sampled_predictions = predictions[np.ix_(x_samples, s_samples)]
        sampled_labels = labels[x_samples]
        sampled_metrics = [
            metric_fun(sampled_predictions[:, j], sampled_labels)
            for j in range(n_seeds)
        ]

        # Averages over the random seeds.
        thetas[boot_ix] = np.mean(sampled_metrics)

    return thetas

We provide the complete version of the algorithm on our repository http://goo.gle/multiberts. Our implementation is optimized and supports all the experiment designs described in Section 3, including paired and unpaired analysis as well as multiple fine-tuning runs for each pre-training seed. B PROOF OF THEOREM 1 Before giving the proof, we define some useful notation that will simplify the argument considerably. We let Dn be the empirical measure over the nx observations Zi = (Xi, Yi), i = 1, . . . , nx, and Mn be the empirical measure over the ns observations Sj, j = 1, . . . , ns. For a function f : V → R and a distribution P over V, we will use the shorthand Pf to denote the expectation of f under P, Pf = EV∼P [f(V)]. For example, this allows us to write θ = DMℓ = EZ∼D ES∼M ℓ(Z, fS), and θ̂ = DnMnℓ = (1/nx) Σ_{i=1}^{nx} (1/ns) Σ_{j=1}^{ns} ℓ(Zi, fSj).
For the bootstrapped distributions, let D∗n denote the distribution over the bootstrap data samples (Z∗1 , Z ∗ 2 , . . . , Z ∗ nx) and M ∗ n denote the distribution over the bootstrapped seed samples, (S∗1 , S ∗ 2 , . . . , S ∗ ns), both conditional on the observed samples (Zi) nx i=1 and (Sj) ns j=1. Note that the empirical average over a bootstrapped sample 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Z∗i , fS∗j ) can be written as 1 nx nx∑ i=1 1 ns ns∑ j=1 AiBjℓ(Zi, fSj ), where Ai is the number of times Zi appears in the bootstrapped sample (Z ∗ k) nx k=1, and Bj is the number of times Sj appears in the bootstrapped sample (S ∗ k) ns k=1. With this in mind, we will abuse notation, and also denote D∗n as the distribution over the Ai and M ∗ n as the distribution over the Bj . Finally, we will use E∗ and Var∗ to denote the expectation and variance of random variables defined with respect to D∗n or M ∗ n, conditional on Dn and Mn. We will use P to denote the distribution P = D×M . Throughout, all assertions made with respect to random variables made without a note about their probability of occurrence hold P -almost surely. Proof. The challenge with applying existing theory to our method is that because the performance metric (ℓ(Zi, fSj ) nx i=1 over the nx observations for a given seed Sj all depend on the same Sj , they are not independent. Similarly for the performance on a given observation, over seeds. Therefore, we need to handle this non-iid structure in our proof for the multi-bootstrap. There are conceptually three steps to our proof that allow us to do just that. The first is to show that θ̂ has an asymptotically linear representation as √ n(θ̂ − θ) = √n(Dn −D)Mℓ+ √ n(Mn −M)Dℓ+ oP (1). (2) The second is to show that conditional on Dn and Mn the multi-bootstrapped statistic θ̂ ∗ ∆= D∗nM ∗ nℓ has an asymptotically linear representation as √ n(θ̂∗ − θ̂) = √n(D◦n −Dn)Mℓ+ √ n(M◦n −Mn)Dℓ+ oP∗(1), (3) where D◦n and M ◦ n are multiplier bootstrap samples coupled to the bootstrap D ∗ n and M ∗ n which we define formally in the beginning of Step 2. The third step is to use standard results for the multiplier bootstrap of the mean of iid data to show that the distributions of the above linearized statistics converge to the same limit. Because we have assumed that ℓ(Z, fS) < ∞, E[ℓ(Z, fS) | S] < ∞, and E[ℓ(Z, fS) | Z] < ∞, Fubini’s theorem allows us to switch the order of integration over Z and S as needed. We will assume that DMℓ(X,Y, fS) = 0. This is without loss of generality, because adding and subtracting √ nDMℓ to the bootstrap expression gives √ n(θ̂∗ − θ̂) = √n(D∗nM∗nℓ−DnMnℓ) = √ n(D∗nM ∗ nℓ−DMℓ+DMℓ−DnMnℓ) = √ n(D∗nM ∗ n(ℓ−DMℓ)−DnMn(ℓ−DMℓ)), so if we prove that the result holds with the mean zero assumption, it will imply that the result holds for ℓ with a nonzero mean. This theorem guarantees consistency of the Multi-Bootstrap estimates. One question that comes up is whether it is possible to get meaningful / tight rates of convergence for the approximation. Unfortunately, getting OP (1/n) convergence as found in many bootstrap methods (Van der Vaart, 2000) is difficult without the use of Edgeworth expansions, by which the Multi-Bootstrap is not welladapted to analysis. That said, many of the remainder terms already have variance of order O(1/n), or could easily be adapted to the same, suggesting an OP (1/ √ n) convergence. The main difficulty, however, is showing rates of convergence for the strong law on separately exchangeable arrays (see the proof of Lemmas 2, 4-5). 
Showing a weaker notion of convergence, such as in probability, may perhaps allow one to show that the remainder is OP (1/ √ n), however the adaptation of the aforementioned Lemmas is nontrivial. Step 1 Recalling that θ̂ ∆ = DnMnℓ and θ ∆ = DMℓ, we can expand √ n(θ̂ − θ) as follows, √ n(DnMnℓ−DMℓ) = √ n(DnMnℓ−DMnℓ+DMnℓ−DMℓ) = √ n((Dn −D)Mnℓ+D(Mn −M)ℓ) = √ n((Dn −D)Mnℓ+ (Dn −D)Mℓ− (Dn −D)Mℓ+D(Mn −M)ℓ) = √ n((Dn −D)Mℓ+ (Dn −D)(Mn −M)ℓ+D(Mn −M)ℓ) The following lemma shows that √ n(Dn −D)(Mn −M)ℓ is a lower order term. Lemma 1. Under the assumptions of Theorem 1, √ n(Dn −D)(Mn −M)ℓ = oP (1). Therefore, √ n(DnMnℓ−DMℓ) = 1√ 1− ps √ nx(Dn −D)Mℓ+ 1√ ps √ ns(Mn −M)Dℓ+ oP (1). Step 2 One of the challenges with working with the bootstrap sample D∗n and M ∗ n is that the induced per-sample weights {Ai}nxi=1 and {Bj}nsj=1 do not have independent components, because they each follow a multinomial distribution over nx items and ns items, respectively. However, they are close enough to independent that we can define a coupled set of random variables {A◦i }nxi=1 and {B◦j }nsj=1 that do have independent components, but behave similarly enough to {Ai} and {Bj} that using these weights has a negligible effect on distribution of the bootstrapped estimator, as described concretely below. First, we discuss the coupled multiplier bootstrap sample D◦n and M ◦ n. The creation of this sequence, called “Poissonization” is a standard technique for proving results about the empirical bootstrap that require independence of the bootstrap weights (van der Vaart et al., 1996). We describe this for D◦n as the idea is identical for M◦n. Because our goal is to couple this distribution to D ∗ n, we define it on the same sample space, and extend the distribution P ∗, expectation E∗ and variance Var∗ to be over D◦n and M ◦ n, conditionally on Dn and Mn, as with D ∗ n and M ∗ n. To construct the distribution D◦n, from the empirical distribution Dn and a bootstrap sample D ∗ n, start with the distribution D∗n and modify it as follows: We draw a Poisson random variable Nnx with mean nx. If Nnx > nx, then we sample Nnx −nx iid observations from Dn, with replacement, and add them to the bootstrap sample initialized with D∗n to produce the distribution D ◦ n. If Nnx < nx, we sample nx − Nnx observations from D∗n, without replacement, and remove them from the bootstrap sample to produce the distribution D◦n. If Nnx = nx, then D ◦ n = D ∗ n. Recalling that Ai is the number of times the i-th sample is included in D ∗ n, similarly define A ◦ i as the number of times the i-th sample is included in D◦n. Note that by the properties of the Poisson distribution, A◦i ∼ Poisson(1), and {A◦i }nxi=1 are independent. Note that the natural normalization for D◦n would be Nnx . However, it will be useful to maintain the normalization by nx, so abusing notation, for a function f(z), we will say that D◦nf = 1 nx ∑nx i=1 A ◦ i f(Zi). Define θ̂◦ as the following empirical estimator of θ under the distribution D◦n ×M◦n, θ̂◦ = D◦nM ◦ nℓ = 1 nx nx∑ i=1 1 ns ns∑ j=1 A◦iB ◦ j ℓ(Zi, fSj ). Lemma 2 shows that √ n(θ̂∗ − θ̂◦) = oP∗(1), and so √ n(θ̂∗ − θ) = √n(θ̂◦ − θ) + oP∗(1). Lemma 2. Under the assumptions of Theorem 1, and that DMℓ = 0, √ n(θ̂∗ − θ̂◦) = oP∗(1). With this, the expansion of √ n(θ̂◦ − θ̂) begins mutatis mutandis the same as in Step 1, to get that √ n(θ̂◦ − θ̂) = 1√ 1− ps √ nx(D ◦ n −Dn)Mnℓ+ √ n(D◦n −Dn)(M◦n −Mn)ℓ + 1√ ps √ ns(M ◦ n −Mn)Dnℓ. 
As with Step 1, we provide Lemma 3 showing that the remainder term √ n(D◦n −Dn)(M◦n −Mn)ℓ will be lower order. Lemma 3. Under the assumptions of Theorem 1, √ n(D◦n −Dn)(M◦n −Mn)ℓ = oP∗(1). Therefore, √ n(D◦nM ◦ nℓ−DnMnℓ) = 1√ 1− ps √ nx(D ◦ n −Dn)Mnℓ+ 1√ ps √ ns(M ◦ n −Mn)Dnℓ+ oP∗(1). Then, to write √ n(θ̂∗−θ̂) in terms of √ns(M◦n−Mn)Dℓ as wanted in Eq. (3), instead of √ ns(M ◦ n− Mn)Dnℓ, we must additionally show that the functional has enough continuity that the error term√ ns(M ◦ n −Mn)(Dn −D)ℓ is lower order. The following lemma shows exactly this. Lemma 4. Under the assumptions of Theorem 1, conditionally on the sequences Z1, Z2, . . . and S1, S2, . . . , (a) √ n(D◦n −Dn)(Mn −M)ℓ = oP∗(1), and (b) √ n(Dn −D)(M◦n −Mn)ℓ = oP∗(1). Altogether, these imply that √ n(D∗nM ∗ nℓ−DnMnℓ) = 1√ 1− ps √ nx(D ◦ n −Dn)Mℓ+ 1√ ps √ ns(M ◦ n −Mn)Dℓ+ oP∗(1). Step 3 Noting that Mℓ(·, fS) = ED×M [ℓ(·, fS) | Z = ·] is a real-valued random variable with finite variance (similarly for Dℓ(Z, ·)), and recalling that the nx samples used for Dn and ns samples for Mn satisfy n = nx/(1 − ps) and n = ns/ps, for 0 < ps < 1, the conventional central limit theorem shows that for some positive semi-definite matrix Σ ∈ R2×2, and G ∼ N (0,Σ), √ n ( (Dn −D)Mℓ (Mn −M)Dℓ ) = ( 1 1−ps √ nx(Dn −D)Mℓ 1 ps √ ns(Mn −M)Dℓ ) d→ G. Note that Dn and Mn are independent, so G is, in fact, a diagonal matrix. Additionally, the conditional multiplier CLT (van der Vaart et al., 1996, Lemma 2.9.5, pg. 181) implies that conditionally on Z1, Z2, . . . and S1, S2, . . . , √ n ( (D∗n −Dn)Mℓ (M∗n −Mn)Dℓ ) d→ G. Finally, applying the delta method (see Theorem 23.5 from Van der Vaart (2000)) along with the results from Steps 1 and 2 shows that the distributions of √ n(θ̂ − θ) and √n(θ̂∗ − θ̂) converge to N (0, σ2), where σ2 = Σ11/(1− ps) + Σ22/ps. B.1 PROOF OF LEMMA 1 Fix ǫ > 0. Note that E[(Dn −D)(Mn −M)ℓ] = 0, so by Chebyshev’s inequality, P ( |√n(Dn −D)(Mn −M)ℓ| > ǫ ) ≤ Var( √ n(Dn −D)(Mn −M)ℓ) ǫ2 . Therefore, it suffices to show that limn→∞ Var( √ n(Dn−D)(Mn−M)ℓ) = 0. To do so, we apply the law of total variance, conditioning on Dn, and bound the resulting expression by C/n. Var( √ n(Dn −D)(Mn −M)ℓ) = nE[Var((Dn −D)(Mn −M)ℓ | Dn)] + nVar(E[(Dn −D)(Mn −M)ℓ | Dn]) = nE[Var((Dn −D)(Mn −M)ℓ | Dn)] = nE[Var((Mn −M)(Dn −D)ℓ | Dn)] = E n n2s ns∑ j=1 Var((Dn −D)ℓ(·, fSj ) | Dn) = E [ n ns Var((Dn −D)ℓ(·, fS1) | Dn) ] = E 1 ps E 1 nx nx∑ i=1 ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1] 2 | {Zi}nxi=1 = E 1 ps 1 nx nx∑ i=1 ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1] 2 = E 1 psn2x nx∑ i=1 nx∑ k=1 (ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1])(ℓ(Zk, fS1)− E[ℓ(Zk, fS1) | S1]) = E 1 psn2x nx∑ i=1 (ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1])2 = 1 ps(1− ps)n E [ (ℓ(Z1, fS1)− E[ℓ(Z1, fS1) | S1])2 ] ≤ C n → 0. B.2 PROOF OF LEMMA 2 First, note the following representation for θ̂∗ − θ̂◦: θ̂∗ − θ̂◦ = 1 nx nx∑ i=1 1 ns ns∑ j=1 AiBjℓ(Zi, fSj )− 1 nx nx∑ i=1 1 ns ns∑ j=1 A◦iB ◦ j ℓ(Zi, fSj ) = 1 ns ns∑ j=1 (Bj −B◦j ) nx nx∑ i=1 Aiℓ(Zi, fSj ) ︸ ︷︷ ︸ ∆ =I1 + 1 nx nx∑ i=1 (Ai −A◦i ) ns ns∑ j=1 B◦j ℓ(Zi, fSj ) ︸ ︷︷ ︸ ∆ =I2 . Let ǫ > 0. Noting that E∗[I1] = E∗[I2] = 0, applying Chebyshev’s inequality gives P ∗ (√ n|θ̂∗ − θ̂◦| > ǫ ) ≤ nVar ∗(θ̂∗ − θ̂◦) ǫ2 ≤ 2nVar ∗(I1) + Var ∗(I2) ǫ2 It suffices to show that nVar∗(I1) → 0 and nVar∗(I2) → 0. The arguments for each term are mutatis mutandis the same, and so we proceed by showing the proof for I2. By the law of total variance, Var∗(I2) = Var ∗(E∗[I2 | {Bj}nsj=1]) + E∗[Var∗(I2 | {Bj}nsj=1)]. 
Because E∗[Ai] = E∗[A◦i ] and {Bj}nsj=1 ⊥ Ai, A◦i , it follows that E∗[I2 | {Bj}nsj=1] = 0. Taking the remaining term and re-organizing the sums in I2, Var∗(I2) = E ∗ Var ∗ 1 nx nx∑ i=1 (Ai −A◦i ) 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) | {Bj}nsj=1 . (4) Next, we apply the law of total variance again, conditioning on Nnx = ∑ i A ◦ i . First, E ∗[I2 | Nnx , {Bj}nsj=1] = Nnx − nx nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ), and so Var∗ ( E ∗[I2 | Nnx , {Bj}nsj=1] | {Bj}nsj=1 ) = 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 Then, conditionally on Nnx (and {Bj}), I2 is the (centered) empirical average of |Nn − n| samples from a finite population of size n, rescaled by |Nn − n|/n. Therefore, applying Theorem 2.2 of Cochran (2007) gives the conditional variance as |Nnx − nx| n2x 1 nx − 1 nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 − nx nx − 1 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 ︸ ︷︷ ︸ ∆ =V 2 . To take the expectation over Nnx , notice that because E ∗[Nnx ] = nx, this is the mean absolute deviation (MAD) of Nnx . Using the expression for the MAD of a Poisson variable from Ramasubban (1958) gives E ∗|Nnx − nx| = 2nx nnxx exp(−nx) nx! , and using Stirling’s approximation, this is bounded by C √ nx, for some 0 < C < ∞. Combining this with the above term for the variance of the conditional expectation, we have Var∗ 1 nx nx∑ i=1 (Ai −A◦i ) 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) | {Bj}nsj=1 ≤ 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 + 1 n1.5x V 2. (5) Noting that E∗[B2j ] = E ∗[BjBk] = 1, we get the following bound: Var∗(I2) ≤ 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 + 1 n1.5x V̄ 2, where V̄ 2 = 1 nx − 1 nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 − nx nx − 1 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 . Because of the assumption that DMℓ = 0, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that lim n→∞ 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) = 0, almost surely. Therefore, the first term of (5) is o(1/n). Note that V̄ 2 is the empirical variance of the conditional expectation of ℓ(Zi, fSj ) given {Zi}ni=1. Therefore, the law of total variance shows that V̄ 2 ≤ 1 nx 1 ns nx∑ i=1 ns∑ j=1 ℓ2(Zi, fSj )− 1 nx 1 ns nx∑ i=1 ns∑ j=1 ℓ(Zi, fSj ) 2 . By the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4), both of the terms converge almost surely to DMℓ2 < ∞ and (DMℓ)2, respectively. and therefore, lim n→∞ nVar∗(Is) ≤ lim n→∞ n nx 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 + n n1.5x V̄ 2 = 0. B.3 PROOF OF LEMMA 3 As with Lemma 1, the main idea of the proof is to apply Chebyshev’s inequality, and show that the variance tends to zero. Indeed, choosing an arbitrary ǫ > 0, P ∗ ( |√n(D◦n −Dn)(M◦n −Mn)ℓ| ≥ ǫ ) ≤ Var ∗ (√n(D◦n −Dn)(M◦n −Mn)ℓ ) ǫ2 . Therefore, it suffices to show that the variance in the above display goes to zero. To do this, we start by re-writing the expression in terms of A◦i and B ◦ j , and then apply the law of total variance. Var∗ (√ n(D◦n −Dn)(M◦n −Mn)ℓ ) = nVar∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) = nVar∗ E∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 + nE∗ Var∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 . Because {B◦j }nsj=1 are independent of {A◦i }nxi=1, and have mean 1, the conditional expectation in the first term is 0 almost surely. 
Expanding out the second term, using that Var∗(B◦j ) = 1, and that the {B◦j }nsj=1 are uncorrelated, nE∗ Var∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {Ai}nxi=1 = nE∗ 1 n2s ns∑ j=1 Var∗ (B◦j − 1) 1 nx nx∑ i=1 (A◦i − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 = nE∗ 1 n2s ns∑ j=1 1 nx nx∑ i=1 (A◦i − 1)ℓ(Zi, fSj ) 2 = nE∗ 1 n2s ns∑ j=1 1 n2x nx∑ i=1 nx∑ k=1 (A◦i − 1)(A◦k − 1)ℓ(Zi, fSj )ℓ(Zk, fSj ) . Now, noting that Var∗(A◦i ) = 1, and that the {A◦i }nxi=1 are uncorrelated, this simplifies to nE∗ 1 n2s ns∑ j=1 1 n2x nx∑ i=1 (A◦i − 1)2ℓ2(Zi, fSj ) = n nsnx 1 ns ns∑ j=1 1 nx nx∑ i=1 ℓ2(Zi, fSj ). Because ED×M [ℓ2(Z, fS)] < ∞, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that this converges almost surely to 0. B.4 PROOF OF LEMMA 4 We prove (a) of the Lemma, as (b) follows from applying Fubini’s theorem and following mutatis mutandis the same argument. Without loss of generality, we will assume that ℓ(Zi, fSj ) ≥ 0. Because Var(ℓ(Zi, fSj )) < ∞, we can always decompose ℓ(·, ·) into a positive and negative part, and show that the result holds for each individually. Once again, we prove (a) by turning to Chebyshev’s inequality. Fix ǫ > 0, and observe that P ∗ ( |√n(D◦n −Dn)(Mn −M)ℓ| > ǫ ) ≤ Var ∗ (√n(D◦n −Dn)(Mn −M) ) ǫ2 , so it is sufficient to show that Var∗ (√ n(D◦n −Dn)(Mn −M) ) → 0. Writing the above in terms of A◦i , we have Var∗ (√ n(D◦n −Dn)(Mn −M) ) = Var∗ √ n nx nx∑ i=1 (A◦i − 1) 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] = n n2x nx∑ i=1 Var∗ (A◦i − 1) 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] 2 = n n2x nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] 2 . Now, we want to show that the last display converges almost surely to 0. Notice that each term within the outer sum will obviously converge due to the SLLN. Showing that the outer sum also converges almost surely is technically difficult, but conceptually follows the same argument used to prove the SLLN (specifically, we follow the one done elegantly by Etemadi (1981); Luzia (2018) provides a more detailed account of this proof technique that is helpful for developing a deeper understanding). We show the following version of almost sure convergence: that for any ǫ > 0, P n n2x nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Sj ] 2 > ǫ i.o. = 0, where i.o. stands for infinitely often. Define the shorthand Lij = ℓ(Zi, fSj ) and let L̄ij = Lij1{Lij < ij} be a truncated version of Lij . The proof of Theorem 2 of Etemadi (1981) implies that P (L̄ij 6= Lij i.o.) = 0, because the assumption Var(Lij) < ∞ implies the assumption used in Etemadi (1981), and independence of {Lij}i,j is not needed for this result. Therefore, 1 nx nx∑ i=1 1 ns ns∑ j=1 Lij − L̄ij 2 a.s.→ 0, and 1 nx nx∑ i=1 1 ns ns∑ j=1 E[Lij | Zi]− E[L̄ij | Zi] 2 a.s.→ 0. Together, these imply that if we can prove that the truncated sum converges, ie., 1 nx n∑ i=1 1 ns ns∑ j=1 L̄ij − E[L̄ij | Zi] 2 a.s.→ 0, (6) this is sufficient to show that the un-truncated version converges almost surely. To prove (6), we show two things: first, that there is a subsequence kn such that (6) holds when restricted to the subsequence, and then we show that the sequence is a Cauchy sequence, which together imply the result. Let α > 1 and let kn = α n. For convenience, denote knx as the number of data samples and kns as the number of seed samples when knx + kns = kn total samples are drawn. We will ignore integer rounding issues, and assume knx = (1− ps)αn, and kns = psαn. 
The following lemma shows that the subsequence defined by kn converges almost surely. Lemma 5. Let α > 1, and kn = α n. Under the assumptions of Theorem 1 and that Lij ≥ 0 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ i.o. = 0. We now must show that the sequence in (6) is a Cauchy sequence. Note that the SLLN implies that 1 nx nx∑ i=1 E[L̄ij | Zi]2 a.s.→ E[E[L̄ij | Zi]2], and the LLN for exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that 1 nx nx∑ i=1 1 ns ns∑ j=1 L̄ijE[L̄ij | Zi] a.s.→ E[E[L̄ij | Zi]2]. Therefore, 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij 2 a.s.→ E[E[L̄ij | Zi]2]. (7) Notice that because L̄ij ≥ 0, the sum ∑nx i=1 (∑ns j=1 L̄ij )2 is monotone increasing in ns and nx. With this in mind, for any m > 0, let n be such that kn ≤ m < kn+1. Then, by the montonicity, ( kn kn+1 1 kn )3 knx∑ i=1 kns∑ j=1 L̄ij 2 ≤ ∑(1−ps)m i=1 (∑psm j=1 L̄ij )2 p2s(1− ps)m3 ≤ ( kn+1 kn 1 kn+1 )3 k(n+1)x∑ i=1 k(n+1)s∑ j=1 L̄ij 2 . From (7), the left hand side converges to 1α3E[E[L̄ij | Zi]2], and the right hand side converges to α3E[E[L̄ij | Zi]2]. Because α is arbitrary, this proves that the sequence ∑(1−ps)m i=1 (∑psm j=1 L̄ij )2 p2s(1− ps)m3 m=1,... is almost surely Cauchy. Together with Lemma 5, this implies (6). B.5 PROOF OF LEMMA 5 We will show that ∞∑ n=1 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ < ∞. This, along with the first Borel-Cantelli lemma (Émile Borel, 1909; Cantelli, 1917) implies the result. Applying Markov’s inequality and using the fact that L̄ij and L̄ih are independent conditional on Zi gives ∞∑ n=1 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ ≤ 1 ǫ ∞∑ n=1 E 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 = 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E [( L̄ij − E[L̄ij | Zi] )2] ≤ 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E[L̄2ij ], where the last line follows from the law of total variance. To simplify the remaining algebra, we will use a . b to denote that there is some constant 0 < c < ∞ such that a < cb. Continuing, we have 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E[L̄2ij ] . 1 ǫ ∞∑ n=1 knx∑ i=1 kns∑ j=1 1 k3n E[L̄2ij ] = 1 ǫ ∞∑ i=1 ∞∑ j=1 E[L̄2ij ] ∞∑ n=n(i,j) 1 α3n . 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i/(1− ps), j/ps}3 E[L̄2ij ] . 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] = 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] where n(i, j) is shorthand for n(i, j) = logα max{i/(1− ps), j/ps} is the first n such that knx ≥ i and kns ≥ j. Now, define Q as the distribution of L11 induced by Z1 and S1. Additionally, split the inner sum into two pieces, one for when j < i and so max{i, j} = i and one for when j ≥ i and so max{i, j} = j. 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] = 1 ǫ ∞∑ i=1 i∑ j=1 1 i3 ∫ ij 0 x2 dQ(x) + ∞∑ j=i ∫ ij 0 x2 dQ(x) = 1 ǫ ∞∑ i=1 i−1∑ j=1 1 i3 ij∑ k=1 ∫ k k−1 x2 dQ(x) + ∞∑ j=i ij∑ k=1 ∫ k k−1 x2 dQ(x) switching the order of the indices over j and k, using that 1 ≤ k ≤ ij and the constraints on j relative to i, 1 ǫ ∞∑ i=1 i−1∑ j=1 1 i3 ij∑ k=1 ∫ k k−1 x2 dQ(x) + ∞∑ j=i ij∑ k=1 ∫ k k−1 x2 dQ(x) . 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 ∞∑ j=max{i,k/i} 1 j3 ∫ k k−1 x2 dQ(x) . 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 1 max{i, k/i}2 ∫ k k−1 x2 dQ(x) . Switching the order of summation over i and k, and separating out the terms where k/i < i and k/i ≥ i, 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 1 max{i, k/i}2 ∫ k k−1 x2 dQ(x) = 1 ǫ ∞∑ k=1 (∫ k k−1 x2 dQ(x) ) √ k+1∑ i=1 (i− k/i) i3 + ∞∑ i= √ k 1 i2 + √ k∑ i=1 i2 k2 . 1 ǫ ∞∑ k=1 1√ k (∫ k k−1 x2 dQ(x) ) . 1 ǫ ∞∑ k=1 (∫ k k−1 x2√ x dQ(x) ) . 
(1/ǫ) ∫_0^∞ x^{1.5} dQ(x) < ∞.

C INSTANCE-LEVEL AGREEMENT OF MULTIBERTS ON GLUE

We present additional performance experiments to complement Section 2. Table 3 shows per-example agreement rates on GLUE predictions between pairs of models pre-trained with a single seed (“same”) and pairs pre-trained with different seeds (“diff”); in all cases, models are fine-tuned with different seeds. With the exception of RTE, we see high agreement (over 90%) on test examples drawn from the same distribution as the training data, and note that agreement is 1–2% lower on average for the predictions of models pre-trained on different seeds compared to models pre-trained on the same seed. However, this discrepancy becomes significantly more pronounced if we look at out-of-domain “challenge sets” which feature a different data distribution from the training set. For example, if we evaluate our MNLI models on the anti-stereotypical examples from HANS (McCoy et al., 2019), we see agreement drop from 88% to 82% when comparing across pre-training seeds. Figure 4 shows how this can affect overall accuracy, which can vary over a range of nearly 20% depending on the pre-training seed. Such results underscore the need to evaluate multiple pre-training runs, especially when evaluating a model's ability to generalize outside of its training distribution.

D CROSS-SEED VARIATION

Figure 5 shows variation in Winogender bias correlation (§4) between each MultiBERTs pretraining seed. Each box shows the distribution over five runs, and some of the variation between seeds may simply be due to variation in training the coreference model. If we average the scores for each seed and then look at the distribution of this per-seed average score, we get 0.45±0.11. What if pretraining didn't matter? If we ignore the seed and randomly sample sets of five runs from this set with replacement, we get scores of 0.45±0.05, telling us that most of the variance can only be explained by differences between the pretraining checkpoints. We can confirm this by taking a subset of our pretraining seeds and training an additional 25 randomly-initialized coreference models. Figure 6 shows the result: seeds 0, 2, 3, and 4 appear closer together than in Figure 5, but seed 1 clearly has different properties with respect to our Winogender metric. We can confirm this with an unpaired Multi-Bootstrap analysis, taking seed 0 as base and seed 1 as experiment: we observe a significant effect of δ = 0.203 (p = 0.009), as shown in Table 4.

E CASE STUDY: MULTIBERTS VS. ORIGINAL BERT

As an additional example of application, we discuss challenges in reproducing the performance of the original BERT checkpoint, using the Multi-Bootstrap procedure. The original bert-base-uncased checkpoint appears to be an outlier when viewed against the distribution of scores obtained using the MultiBERTs reproductions. Specifically, in reproducing the training recipe of Devlin et al. (2019), we found it difficult to simultaneously match performance on all tasks using a single set of hyperparameters. Devlin et al. (2019) report training for 1M steps. However, as shown in Figures 1 and 2, models pre-trained for 1M steps matched the original checkpoint on SQuAD but lagged behind on GLUE tasks; if pre-training continues to 2M steps, GLUE performance matches the original checkpoint but SQuAD performance is significantly higher. The above observations suggest two separate but related hypotheses (below) about the BERT pretraining procedure. 1.
On most tasks, running BERT pre-training for 2M steps produces better models than 1M steps. 2. The MultiBERTs training procedure outperforms the original BERT procedure on SQuAD. Let us use the Multi-Bootstrap to test these hypotheses.
1. What is the focus of the paper regarding model checkpoints and uncertainty estimation? 2. What are the strengths of the proposed approach, particularly in its application to a case study on gender bias in coreference resolution? 3. How does the reviewer assess the computational cost and environmental impact of the paper's methods? 4. In what ways does the paper contribute to the understanding of large language models? 5. How does the reviewer evaluate the mathematical and statistical rigor of the proposed method?
Summary Of The Paper Review
Summary Of The Paper
The paper presents MultiBERTs, a set of 25 model checkpoints, and the Multi-Bootstrap, a non-parametric method to estimate model uncertainty. The experiments verified the proposed method on a case study of gender bias in coreference resolution.
Review
The paper presents MultiBERTs, a set of 25 model checkpoints, and the Multi-Bootstrap, a non-parametric method to estimate model uncertainty. The experiments verified the proposed method on a case study of gender bias in coreference resolution.
Paper Strengths:
It is great to see that the models and the statistical library are available online (165 checkpoints). I appreciate that the authors provide the CO2 information on model training, which is environmentally friendly. This is an empirical analysis paper, which is useful for the community, although the computational cost seems very high. Pretrained models have been widely used and show impressive performance. The method in this paper is novel and theoretically sound, which is helpful for understanding large models. The proposed method is rigorous in terms of mathematics and statistics. Even though the techniques are simple, they can be categorized into three designs and formulated as a formal problem.
ICLR
Title The MultiBERTs: BERT Reproductions for Robustness Analysis Abstract Experiments with pre-trained models such as BERT are often based on a single checkpoint. While the conclusions drawn apply to the artifact tested in the experiment (i.e., the particular instance of the model), it is not always clear whether they hold for the more general procedure which includes the architecture, training data, initialization scheme, and loss function. Recent work has shown that repeating the pre-training process can lead to substantially different performance, suggesting that an alternate strategy is needed to make principled statements about procedures. To enable researchers to draw more robust conclusions, we introduce the MultiBERTs, a set of 25 BERT-Base checkpoints, trained with similar hyper-parameters as the original BERT model but differing in random weight initialization and shuffling of training data. We also define the Multi-Bootstrap, a non-parametric bootstrap method for statistical inference designed for settings where there are multiple pre-trained models and limited test data. To illustrate our approach, we present a case study of gender bias in coreference resolution, in which the Multi-Bootstrap lets us measure effects that may not be detected with a single checkpoint. We release our models and statistical library, along with an additional set of 140 intermediate checkpoints captured during pre-training to facilitate research on learning dynamics. 1 INTRODUCTION Contemporary natural language processing (NLP) relies heavily on pretrained language models, which are trained using large-scale unlabeled data (Bommasani et al., 2021). BERT (Devlin et al., 2019) is a particularly popular choice: it has been widely adopted in academia and industry, and aspects of its performance have been reported on in thousands of research papers (see, e.g., Rogers et al., 2020, for an overview). Because pre-training large language models is computationally expensive (Strubell et al., 2019), researchers often rely on the release of model checkpoints through libraries such as HuggingFace Transformers (Wolf et al., 2020), which enable them to use large-scale language models without repeating the pre-training work. Consequently, most published results are based on a small number of publicly released model checkpoints. While this reuse of model checkpoints has lowered the cost of research and facilitated head-to-head comparisons, it limits our ability to draw general scientific conclusions about the performance of a particular class of models (Dror et al., 2019; D’Amour et al., 2020; Zhong et al., 2021). The key issue is that reusing model checkpoints makes it hard to generalize observations about the behavior of a single model artifact to statements about the underlying pre-training procedure which created it. Pre-training such models is an inherently stochastic process which depends on the initialization of the model’s parameters and the ordering of training examples; for example, D’Amour et al. ∗ Equal contribution. † Work done as a Google AI resident. ‡ Work done during an internship at Google. 1http://goo.gle/multiberts (2020) report substantial quantitative differences across multiple checkpoints of the same model architecture on several “stress tests” (Naik et al., 2018; McCoy et al., 2019). It is therefore difficult to know how much of the success of a model based on the original BERT checkpoint is due to BERT’s design, and how much is due to idiosyncracies of a particular artifact. 
Understanding this difference is critical if we are to generate reusable insights about deep learning for NLP, and improve the state-of-the-art going forward (Zhou et al., 2020; Dodge et al., 2020; Aribandi et al., 2021). This paper describes the MultiBERTs, an effort to facilitate more robust research on the BERT model. Our primary contributions are: • We release the MultiBERTs, a set of 25 BERT-Base, Uncased checkpoints to facilitate studies of robustness to parameter initialization and order of training examples (§2). Releasing these models preserves the benefits to the community of a single checkpoint release (i.e., low cost of experiments, apples-to-apples comparisons between studies based on these checkpoints), while enabling researchers to draw more general conclusions about the BERT pre-training procedure. • We present the Multi-Bootstrap, a non-parametric method to quantify the uncertainty of experimental results based on multiple pre-training seeds (§3), and provide recommendations for how to use the Multi-Bootstrap and MultiBERTs in typical experimental scenarios. We implement these recommendations in a software library. • We illustrate the approach with a practical use case: we investigate the impact of counterfactual data augmentation on gender bias, in a BERT-based coreference resolution systems (Webster et al., 2020) (§4). Additional examples are provided in Appendix E, where we document challenges with reproducing the widely-used original BERT checkpoint. The release also includes an additional 140 intermediate checkpoints, captured during training for 5 of the runs (28 checkpoints per run), to facilitate studies of learning dynamics. Our checkpoints and statistical libraries are available at: http://goo.gle/multiberts. Additional Related Work. The MultiBERTs release builds on top of a large body of work that seeks to analyze the behavior of BERT (Rogers et al., 2020). In addition to the studies of robustness cited above, several authors have introduced methods to reduce BERT’s variability during finetuning (Zhang et al., 2021; Mosbach et al., 2021; Dodge et al., 2020; Lee et al., 2020; Phang et al., 2018). Other authors have also studied the time dimension, which motivates our release of intermediate checkpoints (Liu et al., 2021; Hao et al., 2020; Saphra & Lopez, 2019; Chiang et al., 2020; Dodge et al., 2020). Similarly to §3, authors in the NLP literature have recommended best practices for statistical testing (Koehn, 2004; Dror et al., 2018; Berg-Kirkpatrick et al., 2012; Card et al., 2020; Søgaard et al., 2014; Peyrard et al., 2021), many of which are based on existing tests to estimate the uncertainty of test sample. In concurrent work, Deutsch et al. (2021) considered bootstrapping methods similar to the Multi-Bootstrap, in the context of summarization metrics evaluation. Also in concurrent work, the Mistral project (Karamcheti et al., 2021) released a set of 10 GPT-2 models with intermediate checkpoints at different stages of pre-training. Our work is complementary, focusing on BERT, introducing a larger number of pre-training seeds, and presenting a methodology to draw robust conclusions about model performance. 2 RELEASE DESCRIPTION We first describe the MultiBERTs release: how the checkpoints were trained and how their performance compares to the original BERT on two common language understanding benchmarks. 2.1 TRAINING Overview. The MultiBERTs checkpoints are trained following the code and procedure of Devlin et al. 
(2019), with minor hyperparameter modifications necessary to obtain comparable results on GLUE (Wang et al., 2019); a detailed discussion of these differences is provided in Appendix E. We use the BERT-Base, Uncased architecture with 12 layers and embedding size 768. We trained the models on a combination of BooksCorpus (Zhu et al., 2015) and English Wikipedia. Since the exact dataset used to train the original BERT is not available, we used a more recent version that was collected by Turc et al. (2019) with the same methodology. Checkpoints. We release 25 models trained for two million steps each, each training step involving a batch of 256 sequences. For five of these models, we release 28 additional checkpoints captured over the course of pre-training (every 20,000 training steps up to 200,000, then every 100,000 steps). In total, we release 165 checkpoints, about 68 GB of data. Training Details. As in the original BERT paper, we used batch size 256 and the Adam optimizer (Kingma & Ba, 2014) with learning rate 1e-4 and 10,000 warm-up steps. We used the default values for all the other parameters, except the number of steps, which we set to two million, and sequence length, which we set to 512 from the beginning with up to 80 masked tokens per sequence.2 We follow the BERT code and initialize the layer parameters from a truncated normal distribution, using mean 0 and standard deviation 0.02. We train using the same configuration as Devlin et al. (2019)3, with each run taking about 4.5 days on 16 Cloud TPU v2 chips. Environmental Impact Statement. We estimate compute costs at around 1728 TPU-hours for each pre-training run, and around 208 GPU-hours plus 8 TPU-hours for associated fine-tuning experiments (§2.2, including hyperparameter search and 5x replication). Using the calculations of Luccioni et al. (2019)4, we estimate this as about 250 kg CO2e for each of our 25 models. Counting the 25 runs each of CDA-incr and CDA-full from §4, associated coreference models (20 GPU-hours per pretraining model), and additional experiments of Appendix E, this gives a total of about 12.0 metric tons CO2e before accounting for offsets or clean energy. Based on the report by Patterson et al. (2021) of 78% carbon-free energy in Google Iowa (us-central1), we estimate that reproducing these experiments would emit closer to 2.6 tons CO2e, or slightly more than two passengers on a round-trip flight between San Francisco and New York. By releasing the trained checkpoints publicly, we aim to enable many research efforts on reproducibility and robustness without requiring this cost to be incurred for every subsequent study. 2.2 PERFORMANCE BENCHMARKS GLUE Setup. We report results on the development sets of the GLUE tasks: CoLA (Warstadt et al., 2019), MNLI (matched) (Williams et al., 2018), MRPC (Dolan & Brockett, 2005), QNLI (v2) (Rajpurkar et al., 2016; Wang et al., 2019), QQP (Chen et al., 2018), RTE (Bentivogli et al., 2009), SST-2 (Socher et al., 2013), and SST-B (Cer et al., 2017). In all cases we follow the same approach as Devlin et al. (2019). For each task, we fine-tune BERT for 3 epochs using a batch 2Specifically, we keep the sequence length constant (the paper uses 128 tokens for 90% of the training then 512 for the remaining 10%) to expose the model to more tokens and simplify the implementation. 
As we were not able to reproduce original BERT exactly using either 1M or 2M steps (see Appendix E for discussion), we release MultiBERTs trained with 2M steps under the assumption that higher-performing models are more interesting objects of study. 3We use https://github.com/google-research/bert with TensorFlow (Abadi et al., 2015) version 2.5 in v1 compatibility mode. 4https://mlco2.github.io/impact/ size of 32. We run a parameter sweep on learning rates [5e-5, 4e-5, 3e-5, 2e-5] and report the best score. We run the procedure five times for each of the 25 models and average the results. SQuAD Setup. We report results on the development sets of SQuAD versions 1.1 and 2.0 (Rajpurkar et al., 2016; 2018), using a setup similar to that of Devlin et al. (2019). For both sets of experiments, we use batch size 48, learning rate 5e-5, and train for 2 epochs. Results. Figures 1 and 2 show the distribution of the MultiBERTs models’ performance on the development sets of GLUE and SQuAD, in comparison to the original BERT checkpoint.5 On most tasks, original BERT’s performance falls within the same range as MultiBERTs (i.e., original BERT is between the minimum and maximum of the MultiBERTs’ scores). However, original BERT outperforms all MultiBERTs models on QQP, and under-performs them on SQuAD. The discrepancies may be explained by both randomness and differences in training setups, as investigated further in Appendix E. To further illustrate the performance variability inherent to pre-training and fine-tuning, we analyze the instance-level agreement between the models in Appendix C. 3 HYPOTHESIS TESTING USING MULTIPLE CHECKPOINTS The previous section compared MultiBERTs with the original BERT, finding many similarities but also some differences (e.g., in the case of SQuAD). To what extent can these results be explained by random noise? More generally, how can we quantify the uncertainty of a set of experimental results when there are multiple sources of randomness? In parallel to the MultiBERTs release, we propose a more principled and standardized method to compare training procedures. We recommend a non-parametric bootstrapping procedure, the “Multi-Bootstrap”, which enables us to make inference about model performance in the face of multiple sources of uncertainty: the randomness due to the pre-training seed, the fine-tuning seed, and the finite test data. The main idea is to use the average behavior over seeds as a means of summarizing expected behavior in an ideal world with infinite samples. Although we present Multi-Bootstrap in the context of analyzing the MultiBERTs, the method could be applied in all setups that involve a set of checkpoints pre-trained with the same method, a finite test set, and (possibly) multiple rounds of fine-tuning. The Multi-Bootstrap is implemented as a Python library, included with the MultiBERTs release. 3.1 INTERPRETING STATISTICAL RESULTS The Multi-Bootstrap provides an estimate of the amount of remaining uncertainty when summarizing the performance over multiple seeds. The following notation will help us state this precisely. We assume access to model predictions f(x) for each instance x in the evaluation set. We consider randomness arising from: 1. The choice of pre-training seed S ∼ M 2. The choice of fine-tuning seed T ∼ N 3. The choice of test sample (X,Y ) ∼ D The Multi-Bootstrap procedure allows us to account for all of the above. 
Specifically, MultiBERTs enables us to estimate the variance due to the choice of pre-training seed (1), which would not be possible with a single artifact. Note that multiple fine-tuning runs are not required in order to use the procedure. 5We used https://storage.googleapis.com/bert_models/2020_02_20/uncased_ L-12_H-768_A-12.zip, as linked from https://github.com/google-research/bert. For each pre-training seed s, let fs(x) denote the learned model’s prediction on input features x and let L(s) denote the expected performance metric of fs on a test distribution D over features X and labels Y . For example, the accuracy would be L(s) = E[1{Y = fs(X)}]. We can use the test sample (which we will assume has nx examples) to estimate the performance for each of the seeds in MultiBERTs, which we denote as L̂(s). The performance L(s) depends on the seed, but we are interested in summarizing the model over all seeds. A natural summary is the average over seeds, ES∼M [L(S)], which we will denote by θ. Then, using ns independently sampled seeds, we can compute an estimate θ̂ as θ̂ = 1 ns ns∑ j=1 L̂(Sj) . Because θ̂ is computed under a finite evaluation set and finite number of seeds, it is necessary to quantify the uncertainty of the estimate. The goal of Multi-Bootstrap is to estimate the distribution of the error in this estimate, θ̂ − θ, in order to compute confidence intervals and test hypotheses about θ, such as whether it is above some threshold of interest. Below, we describe a few common experimental designs in NLP that can be studied with these tools. Design 1: Comparison to a Fixed Baseline. In many use cases, we want to compare BERT’s behavior to that of a single, fixed baseline. For instance, does BERT encode information about syntax as a feature-engineered model would (Tenney et al., 2019; Hewitt & Manning, 2019)? Does it encode social stereotypes, and how does it compare to human biases (Nadeem et al., 2021)? Does it encode world knowledge, similarly to explicit knowledge bases (Petroni et al., 2019)? Does another model such as RoBERTa (Liu et al., 2019) outperform BERT on common tasks such as those from the GLUE benchmark? In all these cases, we compare MultiBERTs to some external baseline of which we only have a single estimate (e.g., random or human performance), or against an existing model that is not derived from the MultiBERTs checkpoints. We treat the baseline as fixed, and assess only the uncertainty that arises from MultiBERTs’ random seeds and the test examples. Design 2: Paired Samples. Alternatively, we might seek to assess the effectiveness of a specific intervention on model behavior. In such studies, an intervention is proposed (e.g., representation learning via a specific intermediate task, or a specific architecture change) which can be applied to any pre-trained BERT checkpoint. The question is whether the procedure results in an improvement over the original BERT pre-training method: does the intervention reliably produce the desired effect, or is the observed effect due to the idiosyncracies of a particular model artifact? Examples of such studies include: Does intermediate tuning on NLI after pre-training make models more robust across language understanding tasks (Phang et al., 2018)? Does pruning attention heads degrade model performance on downstream tasks (Voita et al., 2019)? Does augmenting BERT with information about semantic roles improve performance on benchmark tasks (Zhang et al., 2020)? 
We refer to studies like the above as paired since each instance of the baseline model fs (which does not receive the intervention) can be paired with an instance of the proposed model f ′s (which receives the stated intervention) such that fs and f ′ s are based on the same pretrained checkpoint produced using the same seed. Denoting θf and θf ′ as the expected performance defined above for the baseline and intervention model respectively, our goal is to test hypotheses about the true difference in performance δ = θf ′ − θf using the estimated difference δ̂ = θ̂f ′ − θ̂f . In a paired study, Multi-Bootstrap allows us to estimate both of the errors θ̂f − θf and θ̂f ′ − θf ′ , as well as the correlation between the two. Together, these allow us to approximate the distribution of the overall estimation error δ̂ − δ = (θ̂f − θ̂f ′) − (θf − θf ′), between the estimate δ̂ and the truth δ. With this, we can compute confidence intervals for δ, the true average effect of the intervention on performance over seeds, and test hypotheses about δ, as well. Design 3: Unpaired Samples. Finally, we might seek to compare a number of seeds for both the intervention and baseline models, but may not expect them to be aligned in their dependence on the seed. For example, the second model may use a different architecture so that they cannot be built from the same checkpoints, or the models may be generated from entirely separate initialization schemes. We refer to such studies as unpaired. Like in a paired study, the Multi-Bootstrap allows us to estimate the errors θ̂f − θf and θ̂f ′ − θf ′ ; however, in an unpaired study, we cannot estimate the correlation between the errors. Thus, we assume that the correlation is zero. This will give a conservative estimate of the error (θ̂f − θ̂f ′) − (θf − θf ′), as long as θ̂f − θf and θ̂f ′ − θf ′ are not negatively correlated. Since there is little reason to believe that the random seeds used for two different models would induce a negative correlation between the models’ performance, we take this assumption to be relatively safe. Hypothesis Testing. Given the measured uncertainty, we recommend testing whether or not the difference is meaningfully different from some arbitrary predefined threshold (i.e., 0 in the typical case). Specifically, we are often interested in rejecting the null hypothesis that the intervention does not improve over the baseline model, i.e., H0 : δ ≤ 0 (1) in a statistically rigorous way. This can be done with the Multi-Bootstrap procedure described below. 3.2 MULTI-BOOTSTRAP PROCEDURE The Multi-Bootstrap is a non-parametric bootstrapping procedure that allows us to estimate the distribution of the error θ̂ − θ over the seeds and test instances. The algorithm supports both paired and unpaired study designs, differentiating the two settings only in the way the sampling is performed. To keep the presentation simple, we will assume that the performance L(s) is an average of a perexample metric ℓ(x, y, fs) over the distribution D of (X,Y ), such as accuracy or the log likelihood, and L̂(s) is similarly an empirical average with the observed nx test examples, L(s) = ED[ℓ(X,Y, fs)], and L̂(s) = 1 nx nx∑ i=1 ℓ(Xi, Yi, fs). We note that the mapping D 7→ L(s) is linear in D, which is required for our result in Theorem 1. 
However, we conjecture that this is an artifact of the proof; like most bootstrap methods, the method here likely generalizes to any performance metric which behaves asymptotically like a linear mapping of D, including AUC, BLEU score (Papineni et al., 2002), and expected calibration error. Building on the rich literature on bootstrap methods (e.g., Efron & Tibshirani, 1994), the MultiBootstrap is a new procedure which accounts for the way that the combined randomness from the seeds and test set creates error in the estimate θ̂. The statistical underpinnings of this approach have theoretical and methodological connections to inference procedures for two-sample tests (Van der Vaart, 2000), where the samples from each population are independent. However, in those settings, the test statistics naturally differ as a result of the scientific question at hand. In our procedure, we generate a bootstrap sample from the full sample with replacement separately over both the randomness from the pre-training seed s and from the test set (X,Y ). That is, we generate a sample of pre-training seeds (S∗1 , S ∗ 2 , . . . , S ∗ ns) with each S ∗ j drawn randomly with replacement from the pre-training seeds, and we generate a test set sample ((X∗1 , Y ∗ 1 ), (X ∗ 2 , Y ∗ 2 ), . . . , (X ∗ nx , Y ∗ nx)) with each (X,Y ) pair drawn randomly with replacement from the full test set. Then, we compute the bootstrap estimate θ̂∗ as θ̂∗ = 1 ns ns∑ j=1 L̂∗(S∗j ), where L̂ ∗(s) = 1 nx nx∑ i=1 ℓ(X∗i , Y ∗ i , fs). To illustrate the procedure, we present a minimal Python implementation in Appendix A. For sufficiently large nx and ns, the distribution of the estimation error θ̂ − θ is approximated well by the distribution of θ̂∗ − θ̂ over re-draws of the bootstrap samples, as stated precisely in Theorem 1. Theorem 1. Assume that E[ℓ2(X,Y, fS)] < ∞. Furthermore, assume that for each s, E[ℓ2(X,Y, fs)] < ∞, and for almost every (x, y) pair, E[ℓ2(X,Y, fS) | X = x, Y = y] < ∞. Let n = nx +ns, and assume that 0 < ps = ns/n < 1 stays fixed (up to rounding error) as n → ∞. Then, there exists 0 < σ2 < ∞ such that √n(θ̂ − θ) d→ G with G ∼ N (0, σ2). Furthermore, conditionally on ((X1, Y1), (X2, Y2), . . . ), √ n(θ̂∗ − θ̂) d→ G. The proof of Theorem 1 is in Appendix B, along with a comment on the rate of convergence for the approximation error. The challenge with applying existing theory to our method is that while the seeds and data points are each marginally iid, the observed losses depend on both, and therefore are not iid. Therefore, we need to handle this non-iid structure in our method and proof. For nested sources of randomness (e.g., if for each pre-training seed s, we have estimates from multiple fine-tuning seeds), we average over all of the inner samples (fine-tuning seeds) in every bootstrap sample, motivated by Field & Welsh (2007)’s recommendations for bootstrapping clustered data. Paired Samples (design 2, continued). In a paired design, the Multi-Bootstrap procedure can additionally tell us the joint distribution of θ̂f ′ − θf ′ and θ̂f − θf . To do so, one must use the same bootstrap samples of the seeds (S∗1 , S ∗ 2 , . . . , S ∗ ns) and test examples ((X∗1 , Y ∗ 1 ), (X ∗ 2 , Y ∗ 2 ), . . . , (X ∗ nx , Y ∗ nx)) for both models. Then, the correlation between the errors θ̂f ′ − θf ′ and θ̂f − θf is well approximated by the correlation between the bootstrap errors θ̂∗f ′ − θ∗f ′ and θ̂∗f − θ∗f . 
In particular, recall that we defined the difference in performance between the intervention f ′ and the baseline f to be δ, and defined its estimator to be δ̂. With the Multi-Bootstrap, we can estimate the bootstrapped difference δ̂∗ = θ̂∗f ′ − θ̂∗f . With this, the distribution of the estimation error δ̂ − δ is well approximated by the distribution of δ̂∗ − δ̂ over bootstrap samples. Unpaired Samples (design 3, continued). For studies that do not match the paired format, we adapt the Multi-Bootstrap procedure so that, instead of sampling a single pre-training seed that is shared between f and f ′, we sample pre-training seeds for each one independently. The remainder of the algorithm proceeds as in the paired case. Relative to the paired design discussed above, this additionally assumes that the errors due to differences in pre-training seed between θ̂f ′ − θf ′ and θ̂f − θf are independent. Comparison to a Fixed Baseline (design 1, continued). Often, we do not have access to multiple estimates of L(s), for example, when the baseline f against which we are comparing is an estimate of human performance for which only mean accuracy was reported, or when f is the performance of a previously-published model for which there only exists a single artifact or for which we do not have direct access to model predictions. When we have only a point estimate θ̂f = L̂(S1) of θf for the baseline f with a single seed S1, we recommend using Multi-Bootstrap to compute a confidence interval around θf ′ and reporting where the given estimate of baseline performance falls within that distribution. An example of such a case is Figure 1, in which the distribution of MultiBERTs performance is compared to that from the single checkpoint of the original BERT release. In general such results should be interpreted conservatively, as we cannot make any claims about the variance of the baseline model. Hypothesis Testing. A valid p-value for the hypothesis test described in Equation 1 is the fraction of bootstrap samples from the above procedure for which the estimate δ̂ is negative. 4 APPLICATION: GENDER BIAS IN COREFERENCE SYSTEMS We present a case study to illustrate how MultiBERTs and the Multi-Bootstrap can help us draw more robust conclusions about model behavior. The use case is based on gendered correlations. For a particular measure of gender bias, we take a single BERT checkpoint and measure a value of 0.35. We then apply an intervention, foo, designed to reduce this correlation, and measure 0.25. In an effort to do even better, we create a whole new checkpoint by applying the foo procedure from the very beginning of pre-training. On this checkpoint, we measure 0.3. How does one make sense of this result? As a concrete example, we analyze gender bias in coreference systems (Rudinger et al., 2018) and showing how MultiBERTs and the Multi-Bootstrap can help us understand the effect of an intervention, counterfactual data augmentation (CDA). We follow a set-up similar to Webster et al. (2020), which augments the BERT pretraining data with counterfactual sentences created by randomly swapping English binary-gendered pronouns. The goal is to weaken the correlation between gendered pronouns and other words such as occupation terms (e.g., doctor, nurse). We compare our baseline MultiBERTs models to two strategies for CDA. In the first (CDA-incr), we continue pre-training each MultiBERTs model for an additional 50K steps on the counterfactual data of Webster et al. (2020). 
In the second, we train BERT models from scratch (CDA-full) on the same dataset. The Winogender dataset consists of template sentences covering 60 occupation terms and instantiated with either male, female, or neutral pronouns. We follow Webster et al. (2020) and train a gold-mention coreference system using a two-layer feedforward network that takes span representations from a frozen BERT encoder as input and makes binary predictions for mention-referent pairs. The model is trained on OntoNotes (Hovy et al., 2006) and evaluated on the Winogender examples for both per-sentence accuracy and a bias score, defined as the Pearson correlation between the peroccupation bias score (Figure 4 of Rudinger et al. 2018) and the occupational gender statistics from the U.S. Bureau of Labor Statistics.6 For each pre-training run, we train five coreference models, using the same encoder but different random seeds to initialize the classifier weights and to shuffle the training data. 4.1 PAIRED ANALYSIS: CDA-INCR VS. BASE We investigate the impact of the intervention on performance and bias. Overall accuracy is fairly consistent across pre-training seeds, at 62.6±1.2% for the base model, with only a small and not statistically significant change under CDA-incr (Table 1). However, as shown in Figure 3, there is considerable variation in bias correlation, with r values between 0.1 and 0.7 depending on pretraining seed.7 The range for CDA-incr overlaps somewhat, with values between 0.0 and 0.4; however, because the incremental CDA is an intervention on each base checkpoint, we can look at the individual seeds and see that in most cases there appears to be a significant improvement. A paired Multi-Bootstrap allows us to quantify this and further account for noise due to the finite evaluation 6We use the occupation data as distributed with the Winogender dataset, https://github.com/ rudinger/winogender-schemas. 7Some of this variation is due to the classifier training, but on this task there is a large intrinsic contribution from the pretraining seed. See Appendix D for a detailed analysis. sample of 60 occupations. The results are shown in Table 1, which show that CDA-incr significantly reduces bias by δ̂ = −0.162 with p = 0.001. 4.2 UNPAIRED ANALYSIS: CDA-FULL VS. CDA-INCR We can also test if we get any additional benefit from running the entire pre-training with counterfactually-augmented data. Similar to MultiBERTs, we trained 25 CDA-full checkpoints for 2M steps on the CDA dataset.8 Because these are entirely new checkpoints, independent from the base MultiBERTs runs, we use an unpaired version of the Multi-Bootstrap, which uses the same set of examples but samples pretraining seeds independently for CDA-incr and CDA-full. As shown in Table 2, overall accuracy does not change appreciably (0.622 vs. 0.623, p = 0.416), while bias correlation seems to decrease but not significantly (0.256 vs 0.192, δ = -0.064 with p = 0.132). As an ablation, we also experiment with sampling over either only seeds (taking the set of examples, i.e. occupations, as fixed), or over examples (taking the set of 25 seeds as fixed). As shown in Table 2, we find lower p-values (0.005 and 0.053) in both cases—showing that failing to account for finite samples along either dimension could lead to overconfident conclusions. In Appendix E, we present two additional examples: a paired study where we increase pretraining time from 1M to 2M steps, as well as an unpaired comparison to the original bert-base-uncased checkpoint. 
5 CONCLUSION

To make progress on language model pre-training, it is essential to distinguish between the properties of specific model artifacts and those of the training procedures that generated them. To this end, we have presented two resources: the MultiBERTs, a set of 25 model checkpoints to support robust research on BERT, and the Multi-Bootstrap, a non-parametric statistical method to estimate the uncertainty of model comparisons across multiple training seeds. We demonstrated the utility of these resources by showing how to quantify the effect of an intervention to reduce a type of gender bias in coreference systems built on BERT. We hope that the release of multiple checkpoints and the use of principled hypothesis testing will become standard practices in research on pre-trained language models.

A MINIMAL IMPLEMENTATION OF THE MULTI-BOOTSTRAP

Below, we present a simplified Python implementation of the Multi-Bootstrap algorithm presented in Section 3.2. It describes a single-sided version of the procedure, which could be used, e.g., to test that a model's performance is greater than 0. The input is a matrix of predictions where row indices correspond to test examples and column indices to random seeds. The function returns an array of nboot samples [θ̂1, . . . , θ̂nboot].

import numpy as np

def multibootstrap(predictions, labels, metric_fun, nboot):
  """
  Generates bootstrap samples of a model's performance.

  Input:
    predictions: 2D Numpy array with the predictions for different seeds.
    labels: 1D Numpy array with the labels.
    metric_fun: Python function. Takes a pair of arrays as input, and
      returns a metric or loss.
    nboot: Number of bootstrap samples to generate.

  Output:
    Numpy array with nboot samples.
  """
  # Checks the data format.
  n_samples, n_seeds = predictions.shape
  assert labels.shape == (n_samples,)

  thetas = np.zeros(nboot)
  for boot_ix in range(nboot):
    # Samples n_samples test examples and n_seeds pre-training seeds.
    x_samples = np.random.choice(n_samples, size=n_samples, replace=True)
    s_samples = np.random.choice(n_seeds, size=n_seeds, replace=True)

    # Computes the metric over the bootstrapping samples.
    sampled_predictions = predictions[np.ix_(x_samples, s_samples)]
    sampled_labels = labels[x_samples]
    sampled_metrics = [
        metric_fun(sampled_predictions[:, j], sampled_labels)
        for j in range(n_seeds)
    ]

    # Averages over the random seeds.
    thetas[boot_ix] = np.mean(sampled_metrics)

  return thetas

We provide the complete version of the algorithm in our repository, http://goo.gle/multiberts. Our implementation is optimized and supports all the experiment designs described in Section 3, including paired and unpaired analysis as well as multiple fine-tuning runs for each pre-training seed.

B PROOF OF THEOREM 1

Before giving the proof, we define some useful notation that will simplify the argument considerably. We let $D_n$ be the empirical measure over the $n_x$ observations $(Z_i = (X_i, Y_i))_{i=1}^{n_x}$, and $M_n$ be the empirical measure over the $n_s$ observations $(S_j)_{j=1}^{n_s}$. For a function $f : V \to \mathbb{R}$ and a distribution $P$ over $V$, we will use the shorthand $Pf$ to denote the expectation of $f$ under $P$, $Pf = E_{V \sim P}[f(V)]$. For example, this allows us to write
$$\theta = DM\ell = E_{Z \sim D} E_{S \sim M}\, \ell(Z, f_S), \qquad \hat\theta = D_n M_n \ell = \frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} \ell(Z_i, f_{S_j}).$$
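Purely as an illustration of this shorthand, and not part of the proof, $\hat\theta$ is simply the grand mean of the $n_x \times n_s$ matrix of per-example, per-seed losses; the numbers below are made up.

import numpy as np

# losses[i, j] = loss of the model trained with seed S_j on test example Z_i.
# Hypothetical values, only to illustrate the D_n M_n notation.
losses = np.array([[0.2, 0.4, 0.1],
                   [0.3, 0.5, 0.2]])   # n_x = 2 examples, n_s = 3 seeds

theta_hat = losses.mean()              # (1/n_x) sum_i (1/n_s) sum_j losses[i, j]
per_seed = losses.mean(axis=0)         # L_hat(S_j) for each seed
assert np.isclose(theta_hat, per_seed.mean())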
For the bootstrapped distributions, let $D_n^*$ denote the distribution over the bootstrap data samples $(Z_1^*, Z_2^*, \ldots, Z_{n_x}^*)$ and $M_n^*$ denote the distribution over the bootstrapped seed samples $(S_1^*, S_2^*, \ldots, S_{n_s}^*)$, both conditional on the observed samples $(Z_i)_{i=1}^{n_x}$ and $(S_j)_{j=1}^{n_s}$. Note that the empirical average over a bootstrapped sample,
$$\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} \ell(Z_i^*, f_{S_j^*}),$$
can be written as
$$\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} A_i B_j\, \ell(Z_i, f_{S_j}),$$
where $A_i$ is the number of times $Z_i$ appears in the bootstrapped sample $(Z_k^*)_{k=1}^{n_x}$, and $B_j$ is the number of times $S_j$ appears in the bootstrapped sample $(S_k^*)_{k=1}^{n_s}$. With this in mind, we will abuse notation and also denote $D_n^*$ as the distribution over the $A_i$ and $M_n^*$ as the distribution over the $B_j$. Finally, we will use $E^*$ and $\mathrm{Var}^*$ to denote the expectation and variance of random variables defined with respect to $D_n^*$ or $M_n^*$, conditional on $D_n$ and $M_n$. We will use $P$ to denote the distribution $P = D \times M$. Throughout, all assertions about random variables made without a note about their probability of occurrence hold $P$-almost surely.

Proof. The challenge with applying existing theory to our method is that, because the performance metrics $(\ell(Z_i, f_{S_j}))_{i=1}^{n_x}$ over the $n_x$ observations for a given seed $S_j$ all depend on the same $S_j$, they are not independent; similarly for the performance on a given observation, over seeds. Therefore, we need to handle this non-iid structure in our proof for the Multi-Bootstrap. There are conceptually three steps to our proof that allow us to do just that. The first is to show that $\hat\theta$ has an asymptotically linear representation as
$$\sqrt{n}(\hat\theta - \theta) = \sqrt{n}(D_n - D)M\ell + \sqrt{n}(M_n - M)D\ell + o_P(1). \tag{2}$$
The second is to show that, conditional on $D_n$ and $M_n$, the multi-bootstrapped statistic $\hat\theta^* \triangleq D_n^* M_n^* \ell$ has an asymptotically linear representation as
$$\sqrt{n}(\hat\theta^* - \hat\theta) = \sqrt{n}(D_n^\circ - D_n)M\ell + \sqrt{n}(M_n^\circ - M_n)D\ell + o_{P^*}(1), \tag{3}$$
where $D_n^\circ$ and $M_n^\circ$ are multiplier bootstrap samples coupled to the bootstrap $D_n^*$ and $M_n^*$, which we define formally at the beginning of Step 2. The third step is to use standard results for the multiplier bootstrap of the mean of iid data to show that the distributions of the above linearized statistics converge to the same limit.

Because we have assumed that $\ell(Z, f_S) < \infty$, $E[\ell(Z, f_S) \mid S] < \infty$, and $E[\ell(Z, f_S) \mid Z] < \infty$, Fubini's theorem allows us to switch the order of integration over $Z$ and $S$ as needed.

We will assume that $DM\ell(X, Y, f_S) = 0$. This is without loss of generality, because adding and subtracting $\sqrt{n}\,DM\ell$ to the bootstrap expression gives
$$\sqrt{n}(\hat\theta^* - \hat\theta) = \sqrt{n}(D_n^* M_n^* \ell - D_n M_n \ell) = \sqrt{n}(D_n^* M_n^* \ell - DM\ell + DM\ell - D_n M_n \ell) = \sqrt{n}\big(D_n^* M_n^*(\ell - DM\ell) - D_n M_n(\ell - DM\ell)\big),$$
so if we prove that the result holds under the mean-zero assumption, it will imply that the result holds for $\ell$ with a nonzero mean.

This theorem guarantees consistency of the Multi-Bootstrap estimates. One question that comes up is whether it is possible to get meaningful, tight rates of convergence for the approximation. Unfortunately, getting $O_P(1/n)$ convergence as found in many bootstrap methods (Van der Vaart, 2000) is difficult without the use of Edgeworth expansions, to which the Multi-Bootstrap is not well adapted. That said, many of the remainder terms already have variance of order $O(1/n)$, or could easily be adapted to the same, suggesting an $O_P(1/\sqrt{n})$ convergence. The main difficulty, however, is showing rates of convergence for the strong law on separately exchangeable arrays (see the proofs of Lemmas 2 and 4-5).
Showing a weaker notion of convergence, such as convergence in probability, may perhaps allow one to show that the remainder is $O_P(1/\sqrt{n})$; however, the adaptation of the aforementioned lemmas is nontrivial.

Step 1. Recalling that $\hat\theta \triangleq D_n M_n \ell$ and $\theta \triangleq DM\ell$, we can expand $\sqrt{n}(\hat\theta - \theta)$ as follows:
$$\begin{aligned}
\sqrt{n}(D_n M_n \ell - DM\ell) &= \sqrt{n}(D_n M_n \ell - D M_n \ell + D M_n \ell - DM\ell) \\
&= \sqrt{n}\big((D_n - D)M_n\ell + D(M_n - M)\ell\big) \\
&= \sqrt{n}\big((D_n - D)M_n\ell + (D_n - D)M\ell - (D_n - D)M\ell + D(M_n - M)\ell\big) \\
&= \sqrt{n}\big((D_n - D)M\ell + (D_n - D)(M_n - M)\ell + D(M_n - M)\ell\big).
\end{aligned}$$
The following lemma shows that $\sqrt{n}(D_n - D)(M_n - M)\ell$ is a lower-order term.

Lemma 1. Under the assumptions of Theorem 1, $\sqrt{n}(D_n - D)(M_n - M)\ell = o_P(1)$.

Therefore,
$$\sqrt{n}(D_n M_n \ell - DM\ell) = \frac{1}{\sqrt{1 - p_s}}\sqrt{n_x}(D_n - D)M\ell + \frac{1}{\sqrt{p_s}}\sqrt{n_s}(M_n - M)D\ell + o_P(1).$$

Step 2. One of the challenges with working with the bootstrap samples $D_n^*$ and $M_n^*$ is that the induced per-sample weights $\{A_i\}_{i=1}^{n_x}$ and $\{B_j\}_{j=1}^{n_s}$ do not have independent components, because they each follow a multinomial distribution over $n_x$ items and $n_s$ items, respectively. However, they are close enough to independent that we can define a coupled set of random variables $\{A_i^\circ\}_{i=1}^{n_x}$ and $\{B_j^\circ\}_{j=1}^{n_s}$ that do have independent components, but behave similarly enough to $\{A_i\}$ and $\{B_j\}$ that using these weights has a negligible effect on the distribution of the bootstrapped estimator, as described concretely below.

First, we discuss the coupled multiplier bootstrap samples $D_n^\circ$ and $M_n^\circ$. The creation of this sequence, called "Poissonization", is a standard technique for proving results about the empirical bootstrap that require independence of the bootstrap weights (van der Vaart et al., 1996). We describe this for $D_n^\circ$, as the idea is identical for $M_n^\circ$. Because our goal is to couple this distribution to $D_n^*$, we define it on the same sample space, and extend the distribution $P^*$, expectation $E^*$, and variance $\mathrm{Var}^*$ to be over $D_n^\circ$ and $M_n^\circ$, conditionally on $D_n$ and $M_n$, as with $D_n^*$ and $M_n^*$.

To construct the distribution $D_n^\circ$ from the empirical distribution $D_n$ and a bootstrap sample $D_n^*$, start with the distribution $D_n^*$ and modify it as follows. We draw a Poisson random variable $N_{n_x}$ with mean $n_x$. If $N_{n_x} > n_x$, then we sample $N_{n_x} - n_x$ iid observations from $D_n$, with replacement, and add them to the bootstrap sample initialized with $D_n^*$ to produce the distribution $D_n^\circ$. If $N_{n_x} < n_x$, we sample $n_x - N_{n_x}$ observations from $D_n^*$, without replacement, and remove them from the bootstrap sample to produce the distribution $D_n^\circ$. If $N_{n_x} = n_x$, then $D_n^\circ = D_n^*$. Recalling that $A_i$ is the number of times the $i$-th sample is included in $D_n^*$, similarly define $A_i^\circ$ as the number of times the $i$-th sample is included in $D_n^\circ$. Note that by the properties of the Poisson distribution, $A_i^\circ \sim \mathrm{Poisson}(1)$, and the $\{A_i^\circ\}_{i=1}^{n_x}$ are independent. (An illustrative simulation of this coupling is sketched below.)

Note that the natural normalization for $D_n^\circ$ would be $N_{n_x}$. However, it will be useful to maintain the normalization by $n_x$, so, abusing notation, for a function $f(z)$ we will say that $D_n^\circ f = \frac{1}{n_x}\sum_{i=1}^{n_x} A_i^\circ f(Z_i)$.

Define $\hat\theta^\circ$ as the following empirical estimator of $\theta$ under the distribution $D_n^\circ \times M_n^\circ$:
$$\hat\theta^\circ = D_n^\circ M_n^\circ \ell = \frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} A_i^\circ B_j^\circ\, \ell(Z_i, f_{S_j}).$$
Lemma 2 shows that $\sqrt{n}(\hat\theta^* - \hat\theta^\circ) = o_{P^*}(1)$, and so $\sqrt{n}(\hat\theta^* - \theta) = \sqrt{n}(\hat\theta^\circ - \theta) + o_{P^*}(1)$.

Lemma 2. Under the assumptions of Theorem 1, and assuming that $DM\ell = 0$, $\sqrt{n}(\hat\theta^* - \hat\theta^\circ) = o_{P^*}(1)$.

With this, the expansion of $\sqrt{n}(\hat\theta^\circ - \hat\theta)$ begins mutatis mutandis the same as in Step 1, to get that
$$\sqrt{n}(\hat\theta^\circ - \hat\theta) = \frac{1}{\sqrt{1 - p_s}}\sqrt{n_x}(D_n^\circ - D_n)M_n\ell + \sqrt{n}(D_n^\circ - D_n)(M_n^\circ - M_n)\ell + \frac{1}{\sqrt{p_s}}\sqrt{n_s}(M_n^\circ - M_n)D_n\ell.$$
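As a purely illustrative aside, and not part of the proof or of the released library, the Poissonization coupling described above can be simulated directly. The sketch below assumes the multinomial bootstrap counts are given as an integer array A of length n_x summing to n_x; the function name poissonize_counts and its interface are hypothetical.

import numpy as np

def poissonize_counts(A, rng=None):
  # A[i] = number of times example i appears in the bootstrap sample D*_n.
  rng = np.random.default_rng(rng)
  n = int(A.sum())
  A_circ = A.copy()
  N = rng.poisson(n)  # N_{n_x} ~ Poisson(n_x)
  if N > n:
    # Add N - n iid draws from the empirical distribution D_n.
    extra = rng.integers(0, len(A), size=N - n)
    np.add.at(A_circ, extra, 1)
  elif N < n:
    # Remove n - N draws from the bootstrap sample, without replacement.
    flat = np.repeat(np.arange(len(A)), A)
    drop = rng.choice(len(flat), size=n - N, replace=False)
    np.subtract.at(A_circ, flat[drop], 1)
  return A_circ  # Coupled weights; per the construction above, each entry is Poisson(1).

Averaging the A_circ-weighted losses then gives the multiplier-bootstrap quantity D°_n f used in the remainder of this step.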
As with Step 1, we provide Lemma 3, showing that the remainder term $\sqrt{n}(D_n^\circ - D_n)(M_n^\circ - M_n)\ell$ will be of lower order.

Lemma 3. Under the assumptions of Theorem 1, $\sqrt{n}(D_n^\circ - D_n)(M_n^\circ - M_n)\ell = o_{P^*}(1)$.

Therefore,
$$\sqrt{n}(D_n^\circ M_n^\circ \ell - D_n M_n \ell) = \frac{1}{\sqrt{1 - p_s}}\sqrt{n_x}(D_n^\circ - D_n)M_n\ell + \frac{1}{\sqrt{p_s}}\sqrt{n_s}(M_n^\circ - M_n)D_n\ell + o_{P^*}(1).$$
Then, to write $\sqrt{n}(\hat\theta^* - \hat\theta)$ in terms of $\sqrt{n_s}(M_n^\circ - M_n)D\ell$ as wanted in Eq. (3), instead of $\sqrt{n_s}(M_n^\circ - M_n)D_n\ell$, we must additionally show that the functional has enough continuity that the error term $\sqrt{n_s}(M_n^\circ - M_n)(D_n - D)\ell$ is of lower order. The following lemma shows exactly this.

Lemma 4. Under the assumptions of Theorem 1, conditionally on the sequences $Z_1, Z_2, \ldots$ and $S_1, S_2, \ldots$, (a) $\sqrt{n}(D_n^\circ - D_n)(M_n - M)\ell = o_{P^*}(1)$, and (b) $\sqrt{n}(D_n - D)(M_n^\circ - M_n)\ell = o_{P^*}(1)$.

Altogether, these imply that
$$\sqrt{n}(D_n^* M_n^* \ell - D_n M_n \ell) = \frac{1}{\sqrt{1 - p_s}}\sqrt{n_x}(D_n^\circ - D_n)M\ell + \frac{1}{\sqrt{p_s}}\sqrt{n_s}(M_n^\circ - M_n)D\ell + o_{P^*}(1).$$

Step 3. Noting that $M\ell(\cdot, f_S) = E_{D \times M}[\ell(\cdot, f_S) \mid Z = \cdot]$ is a real-valued random variable with finite variance (similarly for $D\ell(Z, \cdot)$), and recalling that the $n_x$ samples used for $D_n$ and the $n_s$ samples used for $M_n$ satisfy $n = n_x/(1 - p_s)$ and $n = n_s/p_s$, for $0 < p_s < 1$, the conventional central limit theorem shows that for some positive semi-definite matrix $\Sigma \in \mathbb{R}^{2 \times 2}$ and $G \sim \mathcal{N}(0, \Sigma)$,
$$\sqrt{n}\begin{pmatrix} (D_n - D)M\ell \\ (M_n - M)D\ell \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{1 - p_s}}\sqrt{n_x}(D_n - D)M\ell \\ \frac{1}{\sqrt{p_s}}\sqrt{n_s}(M_n - M)D\ell \end{pmatrix} \xrightarrow{d} G.$$
Note that $D_n$ and $M_n$ are independent, so the covariance $\Sigma$ is, in fact, diagonal. Additionally, the conditional multiplier CLT (van der Vaart et al., 1996, Lemma 2.9.5, pg. 181) implies that, conditionally on $Z_1, Z_2, \ldots$ and $S_1, S_2, \ldots$,
$$\sqrt{n}\begin{pmatrix} (D_n^* - D_n)M\ell \\ (M_n^* - M_n)D\ell \end{pmatrix} \xrightarrow{d} G.$$
Finally, applying the delta method (see Theorem 23.5 from Van der Vaart (2000)) along with the results from Steps 1 and 2 shows that the distributions of $\sqrt{n}(\hat\theta - \theta)$ and $\sqrt{n}(\hat\theta^* - \hat\theta)$ both converge to $\mathcal{N}(0, \sigma^2)$, where $\sigma^2 = \Sigma_{11}/(1 - p_s) + \Sigma_{22}/p_s$.

B.1 PROOF OF LEMMA 1

Fix $\epsilon > 0$. Note that $E[(D_n - D)(M_n - M)\ell] = 0$, so by Chebyshev's inequality,
$$P\big(|\sqrt{n}(D_n - D)(M_n - M)\ell| > \epsilon\big) \le \frac{\mathrm{Var}\big(\sqrt{n}(D_n - D)(M_n - M)\ell\big)}{\epsilon^2}.$$
Therefore, it suffices to show that $\lim_{n \to \infty} \mathrm{Var}(\sqrt{n}(D_n - D)(M_n - M)\ell) = 0$. To do so, we apply the law of total variance, conditioning on $D_n$, and bound the resulting expression by $C/n$:
$$\begin{aligned}
\mathrm{Var}\big(\sqrt{n}(D_n - D)(M_n - M)\ell\big) &= n\,E\big[\mathrm{Var}\big((D_n - D)(M_n - M)\ell \mid D_n\big)\big] + n\,\mathrm{Var}\big(E[(D_n - D)(M_n - M)\ell \mid D_n]\big) \\
&= n\,E\big[\mathrm{Var}\big((D_n - D)(M_n - M)\ell \mid D_n\big)\big] \\
&= n\,E\big[\mathrm{Var}\big((M_n - M)(D_n - D)\ell \mid D_n\big)\big] \\
&= E\Bigg[\frac{n}{n_s^2}\sum_{j=1}^{n_s}\mathrm{Var}\big((D_n - D)\ell(\cdot, f_{S_j}) \mid D_n\big)\Bigg] \\
&= E\Big[\frac{n}{n_s}\,\mathrm{Var}\big((D_n - D)\ell(\cdot, f_{S_1}) \mid D_n\big)\Big] \\
&\le E\Bigg[\frac{1}{p_s}\,E\Bigg[\Big(\frac{1}{n_x}\sum_{i=1}^{n_x}\big(\ell(Z_i, f_{S_1}) - E[\ell(Z_i, f_{S_1}) \mid S_1]\big)\Big)^2 \,\Big|\, \{Z_i\}_{i=1}^{n_x}\Bigg]\Bigg] \\
&= E\Bigg[\frac{1}{p_s n_x^2}\sum_{i=1}^{n_x}\sum_{k=1}^{n_x}\big(\ell(Z_i, f_{S_1}) - E[\ell(Z_i, f_{S_1}) \mid S_1]\big)\big(\ell(Z_k, f_{S_1}) - E[\ell(Z_k, f_{S_1}) \mid S_1]\big)\Bigg] \\
&= E\Bigg[\frac{1}{p_s n_x^2}\sum_{i=1}^{n_x}\big(\ell(Z_i, f_{S_1}) - E[\ell(Z_i, f_{S_1}) \mid S_1]\big)^2\Bigg] \\
&= \frac{1}{p_s(1 - p_s)n}\,E\big[\big(\ell(Z_1, f_{S_1}) - E[\ell(Z_1, f_{S_1}) \mid S_1]\big)^2\big] \le \frac{C}{n} \to 0.
\end{aligned}$$

B.2 PROOF OF LEMMA 2

First, note the following representation of $\hat\theta^* - \hat\theta^\circ$:
$$\begin{aligned}
\hat\theta^* - \hat\theta^\circ &= \frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} A_i B_j\,\ell(Z_i, f_{S_j}) - \frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} A_i^\circ B_j^\circ\,\ell(Z_i, f_{S_j}) \\
&= \underbrace{\frac{1}{n_s}\sum_{j=1}^{n_s}\frac{B_j - B_j^\circ}{n_x}\sum_{i=1}^{n_x} A_i\,\ell(Z_i, f_{S_j})}_{\triangleq I_1} + \underbrace{\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{A_i - A_i^\circ}{n_s}\sum_{j=1}^{n_s} B_j^\circ\,\ell(Z_i, f_{S_j})}_{\triangleq I_2}.
\end{aligned}$$
Let $\epsilon > 0$. Noting that $E^*[I_1] = E^*[I_2] = 0$, applying Chebyshev's inequality gives
$$P^*\big(\sqrt{n}\,|\hat\theta^* - \hat\theta^\circ| > \epsilon\big) \le \frac{n\,\mathrm{Var}^*(\hat\theta^* - \hat\theta^\circ)}{\epsilon^2} \le \frac{2n\big(\mathrm{Var}^*(I_1) + \mathrm{Var}^*(I_2)\big)}{\epsilon^2}.$$
It suffices to show that $n\,\mathrm{Var}^*(I_1) \to 0$ and $n\,\mathrm{Var}^*(I_2) \to 0$. The arguments for each term are mutatis mutandis the same, and so we proceed by showing the proof for $I_2$. By the law of total variance,
$$\mathrm{Var}^*(I_2) = \mathrm{Var}^*\big(E^*[I_2 \mid \{B_j\}_{j=1}^{n_s}]\big) + E^*\big[\mathrm{Var}^*(I_2 \mid \{B_j\}_{j=1}^{n_s})\big].$$
Because $E^*[A_i] = E^*[A_i^\circ]$ and $\{B_j\}_{j=1}^{n_s} \perp A_i, A_i^\circ$, it follows that $E^*[I_2 \mid \{B_j\}_{j=1}^{n_s}] = 0$. Taking the remaining term and re-organizing the sums in $I_2$,
$$\mathrm{Var}^*(I_2) = E^*\Bigg[\mathrm{Var}^*\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}(A_i - A_i^\circ)\frac{1}{n_s}\sum_{j=1}^{n_s} B_j\,\ell(Z_i, f_{S_j}) \,\Big|\, \{B_j\}_{j=1}^{n_s}\Bigg)\Bigg]. \tag{4}$$
Next, we apply the law of total variance again, conditioning on $N_{n_x} = \sum_i A_i^\circ$. First,
$$E^*[I_2 \mid N_{n_x}, \{B_j\}_{j=1}^{n_s}] = \frac{N_{n_x} - n_x}{n_x}\,\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} B_j\,\ell(Z_i, f_{S_j}),$$
and so
$$\mathrm{Var}^*\big(E^*[I_2 \mid N_{n_x}, \{B_j\}_{j=1}^{n_s}] \,\big|\, \{B_j\}_{j=1}^{n_s}\big) = \frac{1}{n_x}\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} B_j\,\ell(Z_i, f_{S_j})\Bigg)^2.$$
Then, conditionally on $N_{n_x}$ (and the $\{B_j\}$), $I_2$ is the (centered) empirical average of $|N_{n_x} - n_x|$ samples from a finite population of size $n_x$, rescaled by $|N_{n_x} - n_x|/n_x$. Therefore, applying Theorem 2.2 of Cochran (2007) gives the conditional variance as
$$\frac{|N_{n_x} - n_x|}{n_x^2}\underbrace{\Bigg[\frac{1}{n_x - 1}\sum_{i=1}^{n_x}\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s} B_j\,\ell(Z_i, f_{S_j})\Bigg)^2 - \frac{n_x}{n_x - 1}\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} B_j\,\ell(Z_i, f_{S_j})\Bigg)^2\Bigg]}_{\triangleq V^2}.$$
To take the expectation over $N_{n_x}$, notice that because $E^*[N_{n_x}] = n_x$, this expectation is the mean absolute deviation (MAD) of $N_{n_x}$. Using the expression for the MAD of a Poisson variable from Ramasubban (1958) gives
$$E^*|N_{n_x} - n_x| = 2 n_x \frac{n_x^{n_x}\exp(-n_x)}{n_x!},$$
and using Stirling's approximation, this is bounded by $C\sqrt{n_x}$ for some $0 < C < \infty$. Combining this with the above term for the variance of the conditional expectation, we have
$$\mathrm{Var}^*\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}(A_i - A_i^\circ)\frac{1}{n_s}\sum_{j=1}^{n_s} B_j\,\ell(Z_i, f_{S_j}) \,\Big|\, \{B_j\}_{j=1}^{n_s}\Bigg) \le \frac{1}{n_x}\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s} B_j\,\ell(Z_i, f_{S_j})\Bigg)^2 + \frac{1}{n_x^{1.5}}\,V^2. \tag{5}$$
Noting that $E^*[B_j^2] = E^*[B_j B_k] = 1$, we get the following bound:
$$\mathrm{Var}^*(I_2) \le \frac{1}{n_x}\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j})\Bigg)^2 + \frac{1}{n_x^{1.5}}\,\bar V^2,$$
where
$$\bar V^2 = \frac{1}{n_x - 1}\sum_{i=1}^{n_x}\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j})\Bigg)^2 - \frac{n_x}{n_x - 1}\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j})\Bigg)^2.$$
Because of the assumption that $DM\ell = 0$, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that
$$\lim_{n \to \infty}\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j}) = 0, \quad \text{almost surely}.$$
Therefore, the first term of (5) is $o(1/n)$. Note that $\bar V^2$ is the empirical variance of the conditional expectation of $\ell(Z_i, f_{S_j})$ given $\{Z_i\}_{i=1}^{n_x}$. Therefore, the law of total variance shows that
$$\bar V^2 \le \frac{1}{n_x}\frac{1}{n_s}\sum_{i=1}^{n_x}\sum_{j=1}^{n_s}\ell^2(Z_i, f_{S_j}) - \Bigg(\frac{1}{n_x}\frac{1}{n_s}\sum_{i=1}^{n_x}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j})\Bigg)^2.$$
By the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4), the two terms converge almost surely to $DM\ell^2 < \infty$ and $(DM\ell)^2$, respectively, and therefore
$$\lim_{n \to \infty} n\,\mathrm{Var}^*(I_2) \le \lim_{n \to \infty}\Bigg[\frac{n}{n_x}\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j})\Bigg)^2 + \frac{n}{n_x^{1.5}}\,\bar V^2\Bigg] = 0.$$

B.3 PROOF OF LEMMA 3

As with Lemma 1, the main idea of the proof is to apply Chebyshev's inequality and show that the variance tends to zero. Indeed, choosing an arbitrary $\epsilon > 0$,
$$P^*\big(|\sqrt{n}(D_n^\circ - D_n)(M_n^\circ - M_n)\ell| \ge \epsilon\big) \le \frac{\mathrm{Var}^*\big(\sqrt{n}(D_n^\circ - D_n)(M_n^\circ - M_n)\ell\big)}{\epsilon^2}.$$
Therefore, it suffices to show that the variance in the above display goes to zero. To do this, we start by re-writing the expression in terms of $A_i^\circ$ and $B_j^\circ$, and then apply the law of total variance:
$$\begin{aligned}
\mathrm{Var}^*\big(\sqrt{n}(D_n^\circ - D_n)(M_n^\circ - M_n)\ell\big) &= n\,\mathrm{Var}^*\Bigg(\frac{1}{n_x n_s}\sum_{i=1}^{n_x}\sum_{j=1}^{n_s}(A_i^\circ - 1)(B_j^\circ - 1)\,\ell(Z_i, f_{S_j})\Bigg) \\
&= n\,\mathrm{Var}^*\Bigg(E^*\Bigg[\frac{1}{n_x n_s}\sum_{i=1}^{n_x}\sum_{j=1}^{n_s}(A_i^\circ - 1)(B_j^\circ - 1)\,\ell(Z_i, f_{S_j}) \,\Big|\, \{A_i^\circ\}_{i=1}^{n_x}\Bigg]\Bigg) \\
&\quad + n\,E^*\Bigg[\mathrm{Var}^*\Bigg(\frac{1}{n_x n_s}\sum_{i=1}^{n_x}\sum_{j=1}^{n_s}(A_i^\circ - 1)(B_j^\circ - 1)\,\ell(Z_i, f_{S_j}) \,\Big|\, \{A_i^\circ\}_{i=1}^{n_x}\Bigg)\Bigg].
\end{aligned}$$
Because the $\{B_j^\circ\}_{j=1}^{n_s}$ are independent of the $\{A_i^\circ\}_{i=1}^{n_x}$ and have mean 1, the conditional expectation in the first term is 0 almost surely.
Expanding out the second term, using that $\mathrm{Var}^*(B_j^\circ) = 1$ and that the $\{B_j^\circ\}_{j=1}^{n_s}$ are uncorrelated,
$$\begin{aligned}
n\,E^*\Bigg[\mathrm{Var}^*\Bigg(\frac{1}{n_x n_s}\sum_{i=1}^{n_x}\sum_{j=1}^{n_s}(A_i^\circ - 1)(B_j^\circ - 1)\,\ell(Z_i, f_{S_j}) \,\Big|\, \{A_i^\circ\}_{i=1}^{n_x}\Bigg)\Bigg]
&= n\,E^*\Bigg[\frac{1}{n_s^2}\sum_{j=1}^{n_s}\mathrm{Var}^*\Bigg((B_j^\circ - 1)\frac{1}{n_x}\sum_{i=1}^{n_x}(A_i^\circ - 1)\,\ell(Z_i, f_{S_j}) \,\Big|\, \{A_i^\circ\}_{i=1}^{n_x}\Bigg)\Bigg] \\
&= n\,E^*\Bigg[\frac{1}{n_s^2}\sum_{j=1}^{n_s}\Bigg(\frac{1}{n_x}\sum_{i=1}^{n_x}(A_i^\circ - 1)\,\ell(Z_i, f_{S_j})\Bigg)^2\Bigg] \\
&= n\,E^*\Bigg[\frac{1}{n_s^2}\sum_{j=1}^{n_s}\frac{1}{n_x^2}\sum_{i=1}^{n_x}\sum_{k=1}^{n_x}(A_i^\circ - 1)(A_k^\circ - 1)\,\ell(Z_i, f_{S_j})\,\ell(Z_k, f_{S_j})\Bigg].
\end{aligned}$$
Now, noting that $\mathrm{Var}^*(A_i^\circ) = 1$ and that the $\{A_i^\circ\}_{i=1}^{n_x}$ are uncorrelated, this simplifies to
$$n\,E^*\Bigg[\frac{1}{n_s^2}\sum_{j=1}^{n_s}\frac{1}{n_x^2}\sum_{i=1}^{n_x}(A_i^\circ - 1)^2\,\ell^2(Z_i, f_{S_j})\Bigg] = \frac{n}{n_s n_x}\,\frac{1}{n_s}\sum_{j=1}^{n_s}\frac{1}{n_x}\sum_{i=1}^{n_x}\ell^2(Z_i, f_{S_j}).$$
Because $E_{D \times M}[\ell^2(Z, f_S)] < \infty$, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that this converges almost surely to 0.

B.4 PROOF OF LEMMA 4

We prove (a) of the lemma, as (b) follows from applying Fubini's theorem and following mutatis mutandis the same argument. Without loss of generality, we will assume that $\ell(Z_i, f_{S_j}) \ge 0$. Because $\mathrm{Var}(\ell(Z_i, f_{S_j})) < \infty$, we can always decompose $\ell(\cdot, \cdot)$ into a positive and a negative part, and show that the result holds for each individually.

Once again, we prove (a) by turning to Chebyshev's inequality. Fix $\epsilon > 0$, and observe that
$$P^*\big(|\sqrt{n}(D_n^\circ - D_n)(M_n - M)\ell| > \epsilon\big) \le \frac{\mathrm{Var}^*\big(\sqrt{n}(D_n^\circ - D_n)(M_n - M)\ell\big)}{\epsilon^2},$$
so it is sufficient to show that $\mathrm{Var}^*\big(\sqrt{n}(D_n^\circ - D_n)(M_n - M)\ell\big) \to 0$. Writing the above in terms of the $A_i^\circ$, we have
$$\begin{aligned}
\mathrm{Var}^*\big(\sqrt{n}(D_n^\circ - D_n)(M_n - M)\ell\big) &= \mathrm{Var}^*\Bigg(\frac{\sqrt{n}}{n_x}\sum_{i=1}^{n_x}(A_i^\circ - 1)\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j}) - E[\ell(Z_i, f_{S_j}) \mid Z_i]\Bigg)\Bigg) \\
&= \frac{n}{n_x^2}\sum_{i=1}^{n_x}\mathrm{Var}^*(A_i^\circ - 1)\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j}) - E[\ell(Z_i, f_{S_j}) \mid Z_i]\Bigg)^2 \\
&= \frac{n}{n_x^2}\sum_{i=1}^{n_x}\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j}) - E[\ell(Z_i, f_{S_j}) \mid Z_i]\Bigg)^2.
\end{aligned}$$
Now, we want to show that the last display converges almost surely to 0. Notice that each term within the outer sum will obviously converge due to the SLLN. Showing that the outer sum also converges almost surely is technically difficult, but conceptually follows the same argument used to prove the SLLN (specifically, we follow the one done elegantly by Etemadi (1981); Luzia (2018) provides a more detailed account of this proof technique that is helpful for developing a deeper understanding). We show the following version of almost sure convergence: that for any $\epsilon > 0$,
$$P\Bigg(\frac{n}{n_x^2}\sum_{i=1}^{n_x}\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}\ell(Z_i, f_{S_j}) - E[\ell(Z_i, f_{S_j}) \mid Z_i]\Bigg)^2 > \epsilon \ \text{i.o.}\Bigg) = 0,$$
where i.o. stands for infinitely often. Define the shorthand $L_{ij} = \ell(Z_i, f_{S_j})$ and let $\bar L_{ij} = L_{ij}\,\mathbf{1}\{L_{ij} < ij\}$ be a truncated version of $L_{ij}$. The proof of Theorem 2 of Etemadi (1981) implies that $P(\bar L_{ij} \ne L_{ij} \ \text{i.o.}) = 0$, because the assumption $\mathrm{Var}(L_{ij}) < \infty$ implies the assumption used in Etemadi (1981), and independence of $\{L_{ij}\}_{i,j}$ is not needed for this result. Therefore,
$$\frac{1}{n_x}\sum_{i=1}^{n_x}\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}(L_{ij} - \bar L_{ij})\Bigg)^2 \xrightarrow{a.s.} 0, \quad \text{and} \quad \frac{1}{n_x}\sum_{i=1}^{n_x}\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}\big(E[L_{ij} \mid Z_i] - E[\bar L_{ij} \mid Z_i]\big)\Bigg)^2 \xrightarrow{a.s.} 0.$$
Together, these imply that if we can prove that the truncated sum converges, i.e.,
$$\frac{1}{n_x}\sum_{i=1}^{n_x}\Bigg(\frac{1}{n_s}\sum_{j=1}^{n_s}\big(\bar L_{ij} - E[\bar L_{ij} \mid Z_i]\big)\Bigg)^2 \xrightarrow{a.s.} 0, \tag{6}$$
this is sufficient to show that the un-truncated version converges almost surely.

To prove (6), we show two things: first, that there is a subsequence $k_n$ such that (6) holds when restricted to the subsequence, and then that the sequence is a Cauchy sequence; together these imply the result. Let $\alpha > 1$ and let $k_n = \alpha^n$. For convenience, denote by $k_{nx}$ the number of data samples and by $k_{ns}$ the number of seed samples when $k_{nx} + k_{ns} = k_n$ total samples are drawn. We will ignore integer rounding issues and assume $k_{nx} = (1 - p_s)\alpha^n$ and $k_{ns} = p_s\alpha^n$.
The following lemma shows that the subsequence defined by $k_n$ converges almost surely.

Lemma 5. Let $\alpha > 1$ and $k_n = \alpha^n$. Under the assumptions of Theorem 1, and assuming that $L_{ij} \ge 0$,
$$P\Bigg(\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\Bigg(\sum_{j=1}^{k_{ns}}\big(\bar L_{ij} - E[\bar L_{ij} \mid Z_i]\big)\Bigg)^2 > \epsilon \ \text{i.o.}\Bigg) = 0.$$

We now must show that the sequence in (6) is a Cauchy sequence. Note that the SLLN implies that
$$\frac{1}{n_x}\sum_{i=1}^{n_x} E[\bar L_{ij} \mid Z_i]^2 \xrightarrow{a.s.} E\big[E[\bar L_{ij} \mid Z_i]^2\big],$$
and the LLN for exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that
$$\frac{1}{n_x}\sum_{i=1}^{n_x}\frac{1}{n_s}\sum_{j=1}^{n_s}\bar L_{ij}\,E[\bar L_{ij} \mid Z_i] \xrightarrow{a.s.} E\big[E[\bar L_{ij} \mid Z_i]^2\big].$$
Therefore,
$$\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\Bigg(\sum_{j=1}^{k_{ns}}\bar L_{ij}\Bigg)^2 \xrightarrow{a.s.} E\big[E[\bar L_{ij} \mid Z_i]^2\big]. \tag{7}$$
Notice that because $\bar L_{ij} \ge 0$, the sum $\sum_{i=1}^{n_x}\big(\sum_{j=1}^{n_s}\bar L_{ij}\big)^2$ is monotone increasing in $n_s$ and $n_x$. With this in mind, for any $m > 0$, let $n$ be such that $k_n \le m < k_{n+1}$. Then, by the monotonicity,
$$\Bigg(\frac{k_n}{k_{n+1}}\frac{1}{k_n}\Bigg)^3\sum_{i=1}^{k_{nx}}\Bigg(\sum_{j=1}^{k_{ns}}\bar L_{ij}\Bigg)^2 \le \frac{\sum_{i=1}^{(1-p_s)m}\Big(\sum_{j=1}^{p_s m}\bar L_{ij}\Big)^2}{p_s^2(1 - p_s)m^3} \le \Bigg(\frac{k_{n+1}}{k_n}\frac{1}{k_{n+1}}\Bigg)^3\sum_{i=1}^{k_{(n+1)x}}\Bigg(\sum_{j=1}^{k_{(n+1)s}}\bar L_{ij}\Bigg)^2.$$
From (7), the left-hand side converges to $\frac{1}{\alpha^3}E[E[\bar L_{ij} \mid Z_i]^2]$, and the right-hand side converges to $\alpha^3 E[E[\bar L_{ij} \mid Z_i]^2]$. Because $\alpha$ is arbitrary, this proves that the sequence
$$\Bigg\{\frac{\sum_{i=1}^{(1-p_s)m}\Big(\sum_{j=1}^{p_s m}\bar L_{ij}\Big)^2}{p_s^2(1 - p_s)m^3}\Bigg\}_{m=1,\ldots}$$
is almost surely Cauchy. Together with Lemma 5, this implies (6).

B.5 PROOF OF LEMMA 5

We will show that
$$\sum_{n=1}^{\infty} P\Bigg(\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\Bigg(\sum_{j=1}^{k_{ns}}\big(\bar L_{ij} - E[\bar L_{ij} \mid Z_i]\big)\Bigg)^2 > \epsilon\Bigg) < \infty.$$
This, along with the first Borel-Cantelli lemma (Émile Borel, 1909; Cantelli, 1917), implies the result. Applying Markov's inequality, and using the fact that $\bar L_{ij}$ and $\bar L_{ih}$ are independent conditional on $Z_i$, gives
$$\begin{aligned}
\sum_{n=1}^{\infty} P\Bigg(\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\Bigg(\sum_{j=1}^{k_{ns}}\big(\bar L_{ij} - E[\bar L_{ij} \mid Z_i]\big)\Bigg)^2 > \epsilon\Bigg)
&\le \frac{1}{\epsilon}\sum_{n=1}^{\infty} E\Bigg[\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\Bigg(\sum_{j=1}^{k_{ns}}\big(\bar L_{ij} - E[\bar L_{ij} \mid Z_i]\big)\Bigg)^2\Bigg] \\
&= \frac{1}{\epsilon}\sum_{n=1}^{\infty}\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\sum_{j=1}^{k_{ns}} E\big[\big(\bar L_{ij} - E[\bar L_{ij} \mid Z_i]\big)^2\big] \\
&\le \frac{1}{\epsilon}\sum_{n=1}^{\infty}\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\sum_{j=1}^{k_{ns}} E[\bar L_{ij}^2],
\end{aligned}$$
where the last line follows from the law of total variance. To simplify the remaining algebra, we will write $a \lesssim b$ to denote that there is some constant $0 < c < \infty$ such that $a < cb$. Continuing, we have
$$\begin{aligned}
\frac{1}{\epsilon}\sum_{n=1}^{\infty}\frac{1}{k_{nx} k_{ns}^2}\sum_{i=1}^{k_{nx}}\sum_{j=1}^{k_{ns}} E[\bar L_{ij}^2]
&\lesssim \frac{1}{\epsilon}\sum_{n=1}^{\infty}\sum_{i=1}^{k_{nx}}\sum_{j=1}^{k_{ns}}\frac{1}{k_n^3} E[\bar L_{ij}^2]
= \frac{1}{\epsilon}\sum_{i=1}^{\infty}\sum_{j=1}^{\infty} E[\bar L_{ij}^2]\sum_{n=n(i,j)}^{\infty}\frac{1}{\alpha^{3n}} \\
&\lesssim \frac{1}{\epsilon}\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{1}{\max\{i/(1-p_s),\, j/p_s\}^3}\, E[\bar L_{ij}^2]
\lesssim \frac{1}{\epsilon}\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{1}{\max\{i, j\}^3}\, E[\bar L_{ij}^2],
\end{aligned}$$
where $n(i, j) = \log_\alpha \max\{i/(1 - p_s),\, j/p_s\}$ is the first $n$ such that $k_{nx} \ge i$ and $k_{ns} \ge j$. Now, define $Q$ as the distribution of $L_{11}$ induced by $Z_1$ and $S_1$. Additionally, split the inner sum into two pieces, one for $j < i$ (so that $\max\{i, j\} = i$) and one for $j \ge i$ (so that $\max\{i, j\} = j$):
$$\begin{aligned}
\frac{1}{\epsilon}\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{1}{\max\{i, j\}^3}\, E[\bar L_{ij}^2]
&= \frac{1}{\epsilon}\sum_{i=1}^{\infty}\Bigg(\sum_{j=1}^{i}\frac{1}{i^3}\int_0^{ij} x^2\,dQ(x) + \sum_{j=i}^{\infty}\frac{1}{j^3}\int_0^{ij} x^2\,dQ(x)\Bigg) \\
&= \frac{1}{\epsilon}\sum_{i=1}^{\infty}\Bigg(\sum_{j=1}^{i-1}\frac{1}{i^3}\sum_{k=1}^{ij}\int_{k-1}^{k} x^2\,dQ(x) + \sum_{j=i}^{\infty}\frac{1}{j^3}\sum_{k=1}^{ij}\int_{k-1}^{k} x^2\,dQ(x)\Bigg).
\end{aligned}$$
Switching the order of the indices over $j$ and $k$, using that $1 \le k \le ij$ and the constraints on $j$ relative to $i$,
$$\begin{aligned}
&\lesssim \frac{1}{\epsilon}\sum_{i=1}^{\infty}\Bigg(\sum_{k=1}^{i^2-1}\frac{i - k/i}{i^3}\int_{k-1}^{k} x^2\,dQ(x) + \sum_{k=1}^{\infty}\sum_{j=\max\{i,\,k/i\}}^{\infty}\frac{1}{j^3}\int_{k-1}^{k} x^2\,dQ(x)\Bigg) \\
&\lesssim \frac{1}{\epsilon}\sum_{i=1}^{\infty}\Bigg(\sum_{k=1}^{i^2-1}\frac{i - k/i}{i^3}\int_{k-1}^{k} x^2\,dQ(x) + \sum_{k=1}^{\infty}\frac{1}{\max\{i,\, k/i\}^2}\int_{k-1}^{k} x^2\,dQ(x)\Bigg).
\end{aligned}$$
Switching the order of summation over $i$ and $k$, and separating out the terms where $k/i < i$ and $k/i \ge i$,
$$\begin{aligned}
&= \frac{1}{\epsilon}\sum_{k=1}^{\infty}\Bigg(\int_{k-1}^{k} x^2\,dQ(x)\Bigg)\Bigg(\sum_{i=1}^{\sqrt{k}+1}\frac{i - k/i}{i^3} + \sum_{i=\sqrt{k}}^{\infty}\frac{1}{i^2} + \sum_{i=1}^{\sqrt{k}}\frac{i^2}{k^2}\Bigg) \\
&\lesssim \frac{1}{\epsilon}\sum_{k=1}^{\infty}\frac{1}{\sqrt{k}}\int_{k-1}^{k} x^2\,dQ(x)
\lesssim \frac{1}{\epsilon}\sum_{k=1}^{\infty}\int_{k-1}^{k}\frac{x^2}{\sqrt{x}}\,dQ(x)
\end{aligned}$$
$$\lesssim \frac{1}{\epsilon}\int_0^{\infty} x^{1.5}\,dQ(x) < \infty.$$

C INSTANCE-LEVEL AGREEMENT OF MULTIBERTS ON GLUE

We present additional performance experiments to complement Section 2. Table 3 shows per-example agreement rates on GLUE predictions between pairs of models pre-trained with a single seed ("same") and pairs pre-trained with different seeds ("diff"); in all cases, models are fine-tuned with different seeds. With the exception of RTE, we see high agreement (over 90%) on test examples drawn from the same distribution as the training data, and note that agreement is 1-2% lower on average for the predictions of models pre-trained on different seeds compared to models pre-trained on the same seed. However, this discrepancy becomes significantly more pronounced if we look at out-of-domain "challenge sets", which feature a different data distribution from the training set. For example, if we evaluate our MNLI models on the anti-stereotypical examples from HANS (McCoy et al., 2019), we see agreement drop from 88% to 82% when comparing across pre-training seeds. Figure 4 shows how this can affect overall accuracy, which can vary over a range of nearly 20% depending on the pre-training seed. Such results underscore the need to evaluate multiple pre-training runs, especially when evaluating a model's ability to generalize outside of its training distribution.

D CROSS-SEED VARIATION

Figure 5 shows variation in Winogender bias correlation (§4) between each MultiBERTs pre-training seed. Each box shows the distribution over five runs, and some of the variation between seeds may simply be due to variation in training the coreference model. If we average the scores for each seed and then look at the distribution of this per-seed average score, we get 0.45±0.11. What if pre-training didn't matter? If we ignore the seed and randomly sample sets of five runs from this set with replacement, we get scores of 0.45±0.05, telling us that most of the variance can only be explained by differences between the pre-training checkpoints.

We can confirm this by taking a subset of our pre-training seeds and training an additional 25 randomly-initialized coreference models. Figure 6 shows the result: seeds 0, 2, 3, and 4 appear closer together than in Figure 5, but seed 1 clearly has different properties with respect to our Winogender metric. We can confirm this with an unpaired Multi-Bootstrap analysis, taking seed 0 as base and seed 1 as experiment: we observe a significant effect of δ̂ = 0.203 (p = 0.009), as shown in Table 4.

E CASE STUDY: MULTIBERTS VS. ORIGINAL BERT

As an additional example of application, we discuss challenges in reproducing the performance of the original BERT checkpoint, using the Multi-Bootstrap procedure. The original bert-base-uncased checkpoint appears to be an outlier when viewed against the distribution of scores obtained using the MultiBERTs reproductions. Specifically, in reproducing the training recipe of Devlin et al. (2019), we found it difficult to simultaneously match performance on all tasks using a single set of hyperparameters. Devlin et al. (2019) report training for 1M steps. However, as shown in Figures 1 and 2, models pre-trained for 1M steps matched the original checkpoint on SQuAD but lagged behind on GLUE tasks; if pre-training continues to 2M steps, GLUE performance matches the original checkpoint but SQuAD performance is significantly higher. The above observations suggest two separate but related hypotheses about the BERT pre-training procedure, listed below.
1. On most tasks, running BERT pre-training for 2M steps produces better models than 1M steps.
2. The MultiBERTs training procedure outperforms the original BERT procedure on SQuAD.

Let us use the Multi-Bootstrap
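As an illustrative sketch only, and not the analysis reported here, a comparison of this kind against a single released checkpoint can be set up in the "fixed baseline" style of Section 3 using the minimal multibootstrap implementation from Appendix A. The array contents, the baseline value, and the variable names below are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins: a (n_examples, n_seeds) 0/1 matrix of per-example
# correctness for 25 MultiBERTs runs, and a single reported score for the
# original checkpoint. Both are invented values for illustration only.
correct_multiberts = rng.binomial(1, 0.91, size=(1000, 25))
score_original = 0.90

labels = np.zeros(correct_multiberts.shape[0])   # unused by the metric below
accuracy = lambda preds, _labels: preds.mean()   # predictions are already 0/1 correctness

# Bootstrap distribution of the MultiBERTs average performance.
thetas = multibootstrap(correct_multiberts, labels, accuracy, nboot=1000)

print("MultiBERTs mean:", thetas.mean())
print("95% interval:", np.percentile(thetas, [2.5, 97.5]))
print("Fraction of bootstrap samples at or below the original checkpoint:",
      np.mean(thetas <= score_original))

As noted in Section 3.1, such a comparison should be interpreted conservatively, since the single baseline artifact contributes no variance estimate of its own.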
1. What is the focus and contribution of the paper on MultiBERTs and Multi-Bootstrap?
2. What are the strengths of the proposed approach, particularly in terms of enabling future research on reproducibility and robustness?
3. What are the limitations regarding the practicality and cost of adopting the MultiBERTs and Multi-Bootstrap methods for common NLP tasks?
Summary Of The Paper Review
Summary Of The Paper
This paper releases MultiBERTs, a set of 25 BERT-Base checkpoints, to facilitate studies of robustness to parameter initialization and order of training examples. It also proposes the Multi-Bootstrap method to quantify the uncertainty of experimental results based on multiple pre-training seeds.
Review
Strengths:
- The paper provides various pre-trained BERT models with many checkpoints, which could enable future research on reproducibility and robustness.
- The proposed Multi-Bootstrap procedure could give a reasonable estimate of the distribution of the error over the seeds and test instances, under both paired and unpaired scenarios.
- The provided code for Multi-Bootstrap is easy to understand.
Overall, I feel it is still expensive for common NLP tasks to adopt the MultiBERTs and Multi-Bootstrap methods to draw conclusions, which may hinder their adoption in a wide range of applications.
In particular, recall that we defined the difference in performance between the intervention f ′ and the baseline f to be δ, and defined its estimator to be δ̂. With the Multi-Bootstrap, we can estimate the bootstrapped difference δ̂∗ = θ̂∗f ′ − θ̂∗f . With this, the distribution of the estimation error δ̂ − δ is well approximated by the distribution of δ̂∗ − δ̂ over bootstrap samples. Unpaired Samples (design 3, continued). For studies that do not match the paired format, we adapt the Multi-Bootstrap procedure so that, instead of sampling a single pre-training seed that is shared between f and f ′, we sample pre-training seeds for each one independently. The remainder of the algorithm proceeds as in the paired case. Relative to the paired design discussed above, this additionally assumes that the errors due to differences in pre-training seed between θ̂f ′ − θf ′ and θ̂f − θf are independent. Comparison to a Fixed Baseline (design 1, continued). Often, we do not have access to multiple estimates of L(s), for example, when the baseline f against which we are comparing is an estimate of human performance for which only mean accuracy was reported, or when f is the performance of a previously-published model for which there only exists a single artifact or for which we do not have direct access to model predictions. When we have only a point estimate θ̂f = L̂(S1) of θf for the baseline f with a single seed S1, we recommend using Multi-Bootstrap to compute a confidence interval around θf ′ and reporting where the given estimate of baseline performance falls within that distribution. An example of such a case is Figure 1, in which the distribution of MultiBERTs performance is compared to that from the single checkpoint of the original BERT release. In general such results should be interpreted conservatively, as we cannot make any claims about the variance of the baseline model. Hypothesis Testing. A valid p-value for the hypothesis test described in Equation 1 is the fraction of bootstrap samples from the above procedure for which the estimate δ̂ is negative. 4 APPLICATION: GENDER BIAS IN COREFERENCE SYSTEMS We present a case study to illustrate how MultiBERTs and the Multi-Bootstrap can help us draw more robust conclusions about model behavior. The use case is based on gendered correlations. For a particular measure of gender bias, we take a single BERT checkpoint and measure a value of 0.35. We then apply an intervention, foo, designed to reduce this correlation, and measure 0.25. In an effort to do even better, we create a whole new checkpoint by applying the foo procedure from the very beginning of pre-training. On this checkpoint, we measure 0.3. How does one make sense of this result? As a concrete example, we analyze gender bias in coreference systems (Rudinger et al., 2018) and showing how MultiBERTs and the Multi-Bootstrap can help us understand the effect of an intervention, counterfactual data augmentation (CDA). We follow a set-up similar to Webster et al. (2020), which augments the BERT pretraining data with counterfactual sentences created by randomly swapping English binary-gendered pronouns. The goal is to weaken the correlation between gendered pronouns and other words such as occupation terms (e.g., doctor, nurse). We compare our baseline MultiBERTs models to two strategies for CDA. In the first (CDA-incr), we continue pre-training each MultiBERTs model for an additional 50K steps on the counterfactual data of Webster et al. (2020). 
In the second, we train BERT models from scratch (CDA-full) on the same dataset. The Winogender dataset consists of template sentences covering 60 occupation terms and instantiated with either male, female, or neutral pronouns. We follow Webster et al. (2020) and train a gold-mention coreference system using a two-layer feedforward network that takes span representations from a frozen BERT encoder as input and makes binary predictions for mention-referent pairs. The model is trained on OntoNotes (Hovy et al., 2006) and evaluated on the Winogender examples for both per-sentence accuracy and a bias score, defined as the Pearson correlation between the peroccupation bias score (Figure 4 of Rudinger et al. 2018) and the occupational gender statistics from the U.S. Bureau of Labor Statistics.6 For each pre-training run, we train five coreference models, using the same encoder but different random seeds to initialize the classifier weights and to shuffle the training data. 4.1 PAIRED ANALYSIS: CDA-INCR VS. BASE We investigate the impact of the intervention on performance and bias. Overall accuracy is fairly consistent across pre-training seeds, at 62.6±1.2% for the base model, with only a small and not statistically significant change under CDA-incr (Table 1). However, as shown in Figure 3, there is considerable variation in bias correlation, with r values between 0.1 and 0.7 depending on pretraining seed.7 The range for CDA-incr overlaps somewhat, with values between 0.0 and 0.4; however, because the incremental CDA is an intervention on each base checkpoint, we can look at the individual seeds and see that in most cases there appears to be a significant improvement. A paired Multi-Bootstrap allows us to quantify this and further account for noise due to the finite evaluation 6We use the occupation data as distributed with the Winogender dataset, https://github.com/ rudinger/winogender-schemas. 7Some of this variation is due to the classifier training, but on this task there is a large intrinsic contribution from the pretraining seed. See Appendix D for a detailed analysis. sample of 60 occupations. The results are shown in Table 1, which show that CDA-incr significantly reduces bias by δ̂ = −0.162 with p = 0.001. 4.2 UNPAIRED ANALYSIS: CDA-FULL VS. CDA-INCR We can also test if we get any additional benefit from running the entire pre-training with counterfactually-augmented data. Similar to MultiBERTs, we trained 25 CDA-full checkpoints for 2M steps on the CDA dataset.8 Because these are entirely new checkpoints, independent from the base MultiBERTs runs, we use an unpaired version of the Multi-Bootstrap, which uses the same set of examples but samples pretraining seeds independently for CDA-incr and CDA-full. As shown in Table 2, overall accuracy does not change appreciably (0.622 vs. 0.623, p = 0.416), while bias correlation seems to decrease but not significantly (0.256 vs 0.192, δ = -0.064 with p = 0.132). As an ablation, we also experiment with sampling over either only seeds (taking the set of examples, i.e. occupations, as fixed), or over examples (taking the set of 25 seeds as fixed). As shown in Table 2, we find lower p-values (0.005 and 0.053) in both cases—showing that failing to account for finite samples along either dimension could lead to overconfident conclusions. In Appendix E, we present two additional examples: a paired study where we increase pretraining time from 1M to 2M steps, as well as an unpaired comparison to the original bert-base-uncased checkpoint. 
5 CONCLUSION

To make progress on language model pre-training, it is essential to distinguish between the properties of specific model artifacts and those of the training procedures that generated them. To this end, we have presented two resources: the MultiBERTs, a set of 25 model checkpoints to support robust research on BERT, and the Multi-Bootstrap, a non-parametric statistical method to estimate the uncertainty of model comparisons across multiple training seeds. We demonstrated the utility of these resources by showing how to quantify the effect of an intervention to reduce a type of gender bias in coreference systems built on BERT. We hope that the release of multiple checkpoints and the use of principled hypothesis testing will become standard practices in research on pre-trained language models.

8Following Webster et al. (2020), we use 20 masks per sequence instead of the 80 from Devlin et al. (2019).

A MINIMAL IMPLEMENTATION OF THE MULTI-BOOTSTRAP

Below, we present a simplified Python implementation of the Multi-Bootstrap algorithm presented in Section 3.2. It describes a single-sided version of the procedure, which could be used, e.g., to test that a model's performance is greater than 0. The input is a matrix of predictions where row indices correspond to test examples and column indices to random seeds. The function returns an array of nboot samples [θ̂1, . . . , θ̂nboot].

    import numpy as np

    def multibootstrap(predictions, labels, metric_fun, nboot):
        """
        Generates bootstrap samples of a model's performance.

        Input:
          predictions: 2D Numpy array with the predictions for different seeds.
          labels: 1D Numpy array with the labels.
          metric_fun: Python function. Takes a pair of arrays as input, and
            returns a metric or loss.
          nboot: Number of bootstrap samples to generate.

        Output:
          Numpy array with nboot samples.
        """
        # Checks the data format.
        n_samples, n_seeds = predictions.shape
        assert labels.shape == (n_samples,)

        thetas = np.zeros(nboot)
        for boot_ix in range(nboot):
            # Samples n_samples test examples and n_seeds pre-training seeds.
            x_samples = np.random.choice(n_samples, size=n_samples, replace=True)
            s_samples = np.random.choice(n_seeds, size=n_seeds, replace=True)

            # Computes the metric over the bootstrapping samples.
            sampled_predictions = predictions[np.ix_(x_samples, s_samples)]
            sampled_labels = labels[x_samples]
            sampled_metrics = [
                metric_fun(sampled_predictions[:, j], sampled_labels)
                for j in range(n_seeds)
            ]

            # Averages over the random seeds.
            thetas[boot_ix] = np.mean(sampled_metrics)

        return thetas

We provide the complete version of the algorithm on our repository http://goo.gle/multiberts. Our implementation is optimized and supports all the experiment designs described in Section 3, including paired and unpaired analysis as well as multiple fine-tuning runs for each pretraining seed.

B PROOF OF THEOREM 1

Before giving the proof, we define some useful notation that will simplify the argument considerably. We let Dn be the empirical measure over the nx observations (Zi = (Xi, Yi))_{i=1}^{nx}, and Mn be the empirical measure over the ns observations (Sj)_{j=1}^{ns}. For a function f : V → R and a distribution P over V, we will use the shorthand Pf to denote the expectation of f under P, Pf = E_{V∼P}[f(V)]. For example, this allows us to write θ = DMℓ = E_{Z∼D} E_{S∼M} ℓ(Z, fS), and θ̂ = DnMnℓ = (1/nx) ∑_{i=1}^{nx} (1/ns) ∑_{j=1}^{ns} ℓ(Zi, fSj).
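As a concrete illustration of this notation (a minimal numpy sketch; the loss matrix L and its dimensions below are hypothetical): when ℓ is a per-example loss, θ̂ = DnMnℓ is simply the grand mean of the nx × ns matrix of losses, and a Multi-Bootstrap replicate resamples its rows (examples) and columns (seeds) with replacement, as in the Appendix A code above.

    import numpy as np

    rng = np.random.default_rng(0)
    L = rng.normal(size=(1000, 25))  # L[i, j] = ell(Z_i, f_{S_j}): rows = examples, cols = seeds

    theta_hat = L.mean()             # theta_hat = D_n M_n ell, the double average over i and j

    # One bootstrap replicate: resample examples (rows) and seeds (columns) with replacement.
    rows = rng.choice(L.shape[0], size=L.shape[0], replace=True)
    cols = rng.choice(L.shape[1], size=L.shape[1], replace=True)
    theta_star = L[np.ix_(rows, cols)].mean()   # the bootstrap estimate theta_hat*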
For the bootstrapped distributions, let D∗n denote the distribution over the bootstrap data samples (Z∗1 , Z ∗ 2 , . . . , Z ∗ nx) and M ∗ n denote the distribution over the bootstrapped seed samples, (S∗1 , S ∗ 2 , . . . , S ∗ ns), both conditional on the observed samples (Zi) nx i=1 and (Sj) ns j=1. Note that the empirical average over a bootstrapped sample 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Z∗i , fS∗j ) can be written as 1 nx nx∑ i=1 1 ns ns∑ j=1 AiBjℓ(Zi, fSj ), where Ai is the number of times Zi appears in the bootstrapped sample (Z ∗ k) nx k=1, and Bj is the number of times Sj appears in the bootstrapped sample (S ∗ k) ns k=1. With this in mind, we will abuse notation, and also denote D∗n as the distribution over the Ai and M ∗ n as the distribution over the Bj . Finally, we will use E∗ and Var∗ to denote the expectation and variance of random variables defined with respect to D∗n or M ∗ n, conditional on Dn and Mn. We will use P to denote the distribution P = D×M . Throughout, all assertions made with respect to random variables made without a note about their probability of occurrence hold P -almost surely. Proof. The challenge with applying existing theory to our method is that because the performance metric (ℓ(Zi, fSj ) nx i=1 over the nx observations for a given seed Sj all depend on the same Sj , they are not independent. Similarly for the performance on a given observation, over seeds. Therefore, we need to handle this non-iid structure in our proof for the multi-bootstrap. There are conceptually three steps to our proof that allow us to do just that. The first is to show that θ̂ has an asymptotically linear representation as √ n(θ̂ − θ) = √n(Dn −D)Mℓ+ √ n(Mn −M)Dℓ+ oP (1). (2) The second is to show that conditional on Dn and Mn the multi-bootstrapped statistic θ̂ ∗ ∆= D∗nM ∗ nℓ has an asymptotically linear representation as √ n(θ̂∗ − θ̂) = √n(D◦n −Dn)Mℓ+ √ n(M◦n −Mn)Dℓ+ oP∗(1), (3) where D◦n and M ◦ n are multiplier bootstrap samples coupled to the bootstrap D ∗ n and M ∗ n which we define formally in the beginning of Step 2. The third step is to use standard results for the multiplier bootstrap of the mean of iid data to show that the distributions of the above linearized statistics converge to the same limit. Because we have assumed that ℓ(Z, fS) < ∞, E[ℓ(Z, fS) | S] < ∞, and E[ℓ(Z, fS) | Z] < ∞, Fubini’s theorem allows us to switch the order of integration over Z and S as needed. We will assume that DMℓ(X,Y, fS) = 0. This is without loss of generality, because adding and subtracting √ nDMℓ to the bootstrap expression gives √ n(θ̂∗ − θ̂) = √n(D∗nM∗nℓ−DnMnℓ) = √ n(D∗nM ∗ nℓ−DMℓ+DMℓ−DnMnℓ) = √ n(D∗nM ∗ n(ℓ−DMℓ)−DnMn(ℓ−DMℓ)), so if we prove that the result holds with the mean zero assumption, it will imply that the result holds for ℓ with a nonzero mean. This theorem guarantees consistency of the Multi-Bootstrap estimates. One question that comes up is whether it is possible to get meaningful / tight rates of convergence for the approximation. Unfortunately, getting OP (1/n) convergence as found in many bootstrap methods (Van der Vaart, 2000) is difficult without the use of Edgeworth expansions, by which the Multi-Bootstrap is not welladapted to analysis. That said, many of the remainder terms already have variance of order O(1/n), or could easily be adapted to the same, suggesting an OP (1/ √ n) convergence. The main difficulty, however, is showing rates of convergence for the strong law on separately exchangeable arrays (see the proof of Lemmas 2, 4-5). 
Showing a weaker notion of convergence, such as in probability, may perhaps allow one to show that the remainder is OP (1/ √ n), however the adaptation of the aforementioned Lemmas is nontrivial. Step 1 Recalling that θ̂ ∆ = DnMnℓ and θ ∆ = DMℓ, we can expand √ n(θ̂ − θ) as follows, √ n(DnMnℓ−DMℓ) = √ n(DnMnℓ−DMnℓ+DMnℓ−DMℓ) = √ n((Dn −D)Mnℓ+D(Mn −M)ℓ) = √ n((Dn −D)Mnℓ+ (Dn −D)Mℓ− (Dn −D)Mℓ+D(Mn −M)ℓ) = √ n((Dn −D)Mℓ+ (Dn −D)(Mn −M)ℓ+D(Mn −M)ℓ) The following lemma shows that √ n(Dn −D)(Mn −M)ℓ is a lower order term. Lemma 1. Under the assumptions of Theorem 1, √ n(Dn −D)(Mn −M)ℓ = oP (1). Therefore, √ n(DnMnℓ−DMℓ) = 1√ 1− ps √ nx(Dn −D)Mℓ+ 1√ ps √ ns(Mn −M)Dℓ+ oP (1). Step 2 One of the challenges with working with the bootstrap sample D∗n and M ∗ n is that the induced per-sample weights {Ai}nxi=1 and {Bj}nsj=1 do not have independent components, because they each follow a multinomial distribution over nx items and ns items, respectively. However, they are close enough to independent that we can define a coupled set of random variables {A◦i }nxi=1 and {B◦j }nsj=1 that do have independent components, but behave similarly enough to {Ai} and {Bj} that using these weights has a negligible effect on distribution of the bootstrapped estimator, as described concretely below. First, we discuss the coupled multiplier bootstrap sample D◦n and M ◦ n. The creation of this sequence, called “Poissonization” is a standard technique for proving results about the empirical bootstrap that require independence of the bootstrap weights (van der Vaart et al., 1996). We describe this for D◦n as the idea is identical for M◦n. Because our goal is to couple this distribution to D ∗ n, we define it on the same sample space, and extend the distribution P ∗, expectation E∗ and variance Var∗ to be over D◦n and M ◦ n, conditionally on Dn and Mn, as with D ∗ n and M ∗ n. To construct the distribution D◦n, from the empirical distribution Dn and a bootstrap sample D ∗ n, start with the distribution D∗n and modify it as follows: We draw a Poisson random variable Nnx with mean nx. If Nnx > nx, then we sample Nnx −nx iid observations from Dn, with replacement, and add them to the bootstrap sample initialized with D∗n to produce the distribution D ◦ n. If Nnx < nx, we sample nx − Nnx observations from D∗n, without replacement, and remove them from the bootstrap sample to produce the distribution D◦n. If Nnx = nx, then D ◦ n = D ∗ n. Recalling that Ai is the number of times the i-th sample is included in D ∗ n, similarly define A ◦ i as the number of times the i-th sample is included in D◦n. Note that by the properties of the Poisson distribution, A◦i ∼ Poisson(1), and {A◦i }nxi=1 are independent. Note that the natural normalization for D◦n would be Nnx . However, it will be useful to maintain the normalization by nx, so abusing notation, for a function f(z), we will say that D◦nf = 1 nx ∑nx i=1 A ◦ i f(Zi). Define θ̂◦ as the following empirical estimator of θ under the distribution D◦n ×M◦n, θ̂◦ = D◦nM ◦ nℓ = 1 nx nx∑ i=1 1 ns ns∑ j=1 A◦iB ◦ j ℓ(Zi, fSj ). Lemma 2 shows that √ n(θ̂∗ − θ̂◦) = oP∗(1), and so √ n(θ̂∗ − θ) = √n(θ̂◦ − θ) + oP∗(1). Lemma 2. Under the assumptions of Theorem 1, and that DMℓ = 0, √ n(θ̂∗ − θ̂◦) = oP∗(1). With this, the expansion of √ n(θ̂◦ − θ̂) begins mutatis mutandis the same as in Step 1, to get that √ n(θ̂◦ − θ̂) = 1√ 1− ps √ nx(D ◦ n −Dn)Mnℓ+ √ n(D◦n −Dn)(M◦n −Mn)ℓ + 1√ ps √ ns(M ◦ n −Mn)Dnℓ. 
As with Step 1, we provide Lemma 3 showing that the remainder term √ n(D◦n −Dn)(M◦n −Mn)ℓ will be lower order. Lemma 3. Under the assumptions of Theorem 1, √ n(D◦n −Dn)(M◦n −Mn)ℓ = oP∗(1). Therefore, √ n(D◦nM ◦ nℓ−DnMnℓ) = 1√ 1− ps √ nx(D ◦ n −Dn)Mnℓ+ 1√ ps √ ns(M ◦ n −Mn)Dnℓ+ oP∗(1). Then, to write √ n(θ̂∗−θ̂) in terms of √ns(M◦n−Mn)Dℓ as wanted in Eq. (3), instead of √ ns(M ◦ n− Mn)Dnℓ, we must additionally show that the functional has enough continuity that the error term√ ns(M ◦ n −Mn)(Dn −D)ℓ is lower order. The following lemma shows exactly this. Lemma 4. Under the assumptions of Theorem 1, conditionally on the sequences Z1, Z2, . . . and S1, S2, . . . , (a) √ n(D◦n −Dn)(Mn −M)ℓ = oP∗(1), and (b) √ n(Dn −D)(M◦n −Mn)ℓ = oP∗(1). Altogether, these imply that √ n(D∗nM ∗ nℓ−DnMnℓ) = 1√ 1− ps √ nx(D ◦ n −Dn)Mℓ+ 1√ ps √ ns(M ◦ n −Mn)Dℓ+ oP∗(1). Step 3 Noting that Mℓ(·, fS) = ED×M [ℓ(·, fS) | Z = ·] is a real-valued random variable with finite variance (similarly for Dℓ(Z, ·)), and recalling that the nx samples used for Dn and ns samples for Mn satisfy n = nx/(1 − ps) and n = ns/ps, for 0 < ps < 1, the conventional central limit theorem shows that for some positive semi-definite matrix Σ ∈ R2×2, and G ∼ N (0,Σ), √ n ( (Dn −D)Mℓ (Mn −M)Dℓ ) = ( 1 1−ps √ nx(Dn −D)Mℓ 1 ps √ ns(Mn −M)Dℓ ) d→ G. Note that Dn and Mn are independent, so G is, in fact, a diagonal matrix. Additionally, the conditional multiplier CLT (van der Vaart et al., 1996, Lemma 2.9.5, pg. 181) implies that conditionally on Z1, Z2, . . . and S1, S2, . . . , √ n ( (D∗n −Dn)Mℓ (M∗n −Mn)Dℓ ) d→ G. Finally, applying the delta method (see Theorem 23.5 from Van der Vaart (2000)) along with the results from Steps 1 and 2 shows that the distributions of √ n(θ̂ − θ) and √n(θ̂∗ − θ̂) converge to N (0, σ2), where σ2 = Σ11/(1− ps) + Σ22/ps. B.1 PROOF OF LEMMA 1 Fix ǫ > 0. Note that E[(Dn −D)(Mn −M)ℓ] = 0, so by Chebyshev’s inequality, P ( |√n(Dn −D)(Mn −M)ℓ| > ǫ ) ≤ Var( √ n(Dn −D)(Mn −M)ℓ) ǫ2 . Therefore, it suffices to show that limn→∞ Var( √ n(Dn−D)(Mn−M)ℓ) = 0. To do so, we apply the law of total variance, conditioning on Dn, and bound the resulting expression by C/n. Var( √ n(Dn −D)(Mn −M)ℓ) = nE[Var((Dn −D)(Mn −M)ℓ | Dn)] + nVar(E[(Dn −D)(Mn −M)ℓ | Dn]) = nE[Var((Dn −D)(Mn −M)ℓ | Dn)] = nE[Var((Mn −M)(Dn −D)ℓ | Dn)] = E n n2s ns∑ j=1 Var((Dn −D)ℓ(·, fSj ) | Dn) = E [ n ns Var((Dn −D)ℓ(·, fS1) | Dn) ] = E 1 ps E 1 nx nx∑ i=1 ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1] 2 | {Zi}nxi=1 = E 1 ps 1 nx nx∑ i=1 ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1] 2 = E 1 psn2x nx∑ i=1 nx∑ k=1 (ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1])(ℓ(Zk, fS1)− E[ℓ(Zk, fS1) | S1]) = E 1 psn2x nx∑ i=1 (ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1])2 = 1 ps(1− ps)n E [ (ℓ(Z1, fS1)− E[ℓ(Z1, fS1) | S1])2 ] ≤ C n → 0. B.2 PROOF OF LEMMA 2 First, note the following representation for θ̂∗ − θ̂◦: θ̂∗ − θ̂◦ = 1 nx nx∑ i=1 1 ns ns∑ j=1 AiBjℓ(Zi, fSj )− 1 nx nx∑ i=1 1 ns ns∑ j=1 A◦iB ◦ j ℓ(Zi, fSj ) = 1 ns ns∑ j=1 (Bj −B◦j ) nx nx∑ i=1 Aiℓ(Zi, fSj ) ︸ ︷︷ ︸ ∆ =I1 + 1 nx nx∑ i=1 (Ai −A◦i ) ns ns∑ j=1 B◦j ℓ(Zi, fSj ) ︸ ︷︷ ︸ ∆ =I2 . Let ǫ > 0. Noting that E∗[I1] = E∗[I2] = 0, applying Chebyshev’s inequality gives P ∗ (√ n|θ̂∗ − θ̂◦| > ǫ ) ≤ nVar ∗(θ̂∗ − θ̂◦) ǫ2 ≤ 2nVar ∗(I1) + Var ∗(I2) ǫ2 It suffices to show that nVar∗(I1) → 0 and nVar∗(I2) → 0. The arguments for each term are mutatis mutandis the same, and so we proceed by showing the proof for I2. By the law of total variance, Var∗(I2) = Var ∗(E∗[I2 | {Bj}nsj=1]) + E∗[Var∗(I2 | {Bj}nsj=1)]. 
Because E∗[Ai] = E∗[A◦i ] and {Bj}nsj=1 ⊥ Ai, A◦i , it follows that E∗[I2 | {Bj}nsj=1] = 0. Taking the remaining term and re-organizing the sums in I2, Var∗(I2) = E ∗ Var ∗ 1 nx nx∑ i=1 (Ai −A◦i ) 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) | {Bj}nsj=1 . (4) Next, we apply the law of total variance again, conditioning on Nnx = ∑ i A ◦ i . First, E ∗[I2 | Nnx , {Bj}nsj=1] = Nnx − nx nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ), and so Var∗ ( E ∗[I2 | Nnx , {Bj}nsj=1] | {Bj}nsj=1 ) = 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 Then, conditionally on Nnx (and {Bj}), I2 is the (centered) empirical average of |Nn − n| samples from a finite population of size n, rescaled by |Nn − n|/n. Therefore, applying Theorem 2.2 of Cochran (2007) gives the conditional variance as |Nnx − nx| n2x 1 nx − 1 nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 − nx nx − 1 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 ︸ ︷︷ ︸ ∆ =V 2 . To take the expectation over Nnx , notice that because E ∗[Nnx ] = nx, this is the mean absolute deviation (MAD) of Nnx . Using the expression for the MAD of a Poisson variable from Ramasubban (1958) gives E ∗|Nnx − nx| = 2nx nnxx exp(−nx) nx! , and using Stirling’s approximation, this is bounded by C √ nx, for some 0 < C < ∞. Combining this with the above term for the variance of the conditional expectation, we have Var∗ 1 nx nx∑ i=1 (Ai −A◦i ) 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) | {Bj}nsj=1 ≤ 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 + 1 n1.5x V 2. (5) Noting that E∗[B2j ] = E ∗[BjBk] = 1, we get the following bound: Var∗(I2) ≤ 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 + 1 n1.5x V̄ 2, where V̄ 2 = 1 nx − 1 nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 − nx nx − 1 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 . Because of the assumption that DMℓ = 0, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that lim n→∞ 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) = 0, almost surely. Therefore, the first term of (5) is o(1/n). Note that V̄ 2 is the empirical variance of the conditional expectation of ℓ(Zi, fSj ) given {Zi}ni=1. Therefore, the law of total variance shows that V̄ 2 ≤ 1 nx 1 ns nx∑ i=1 ns∑ j=1 ℓ2(Zi, fSj )− 1 nx 1 ns nx∑ i=1 ns∑ j=1 ℓ(Zi, fSj ) 2 . By the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4), both of the terms converge almost surely to DMℓ2 < ∞ and (DMℓ)2, respectively. and therefore, lim n→∞ nVar∗(Is) ≤ lim n→∞ n nx 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 + n n1.5x V̄ 2 = 0. B.3 PROOF OF LEMMA 3 As with Lemma 1, the main idea of the proof is to apply Chebyshev’s inequality, and show that the variance tends to zero. Indeed, choosing an arbitrary ǫ > 0, P ∗ ( |√n(D◦n −Dn)(M◦n −Mn)ℓ| ≥ ǫ ) ≤ Var ∗ (√n(D◦n −Dn)(M◦n −Mn)ℓ ) ǫ2 . Therefore, it suffices to show that the variance in the above display goes to zero. To do this, we start by re-writing the expression in terms of A◦i and B ◦ j , and then apply the law of total variance. Var∗ (√ n(D◦n −Dn)(M◦n −Mn)ℓ ) = nVar∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) = nVar∗ E∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 + nE∗ Var∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 . Because {B◦j }nsj=1 are independent of {A◦i }nxi=1, and have mean 1, the conditional expectation in the first term is 0 almost surely. 
Expanding out the second term, using that Var∗(B◦j ) = 1, and that the {B◦j }nsj=1 are uncorrelated, nE∗ Var∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {Ai}nxi=1 = nE∗ 1 n2s ns∑ j=1 Var∗ (B◦j − 1) 1 nx nx∑ i=1 (A◦i − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 = nE∗ 1 n2s ns∑ j=1 1 nx nx∑ i=1 (A◦i − 1)ℓ(Zi, fSj ) 2 = nE∗ 1 n2s ns∑ j=1 1 n2x nx∑ i=1 nx∑ k=1 (A◦i − 1)(A◦k − 1)ℓ(Zi, fSj )ℓ(Zk, fSj ) . Now, noting that Var∗(A◦i ) = 1, and that the {A◦i }nxi=1 are uncorrelated, this simplifies to nE∗ 1 n2s ns∑ j=1 1 n2x nx∑ i=1 (A◦i − 1)2ℓ2(Zi, fSj ) = n nsnx 1 ns ns∑ j=1 1 nx nx∑ i=1 ℓ2(Zi, fSj ). Because ED×M [ℓ2(Z, fS)] < ∞, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that this converges almost surely to 0. B.4 PROOF OF LEMMA 4 We prove (a) of the Lemma, as (b) follows from applying Fubini’s theorem and following mutatis mutandis the same argument. Without loss of generality, we will assume that ℓ(Zi, fSj ) ≥ 0. Because Var(ℓ(Zi, fSj )) < ∞, we can always decompose ℓ(·, ·) into a positive and negative part, and show that the result holds for each individually. Once again, we prove (a) by turning to Chebyshev’s inequality. Fix ǫ > 0, and observe that P ∗ ( |√n(D◦n −Dn)(Mn −M)ℓ| > ǫ ) ≤ Var ∗ (√n(D◦n −Dn)(Mn −M) ) ǫ2 , so it is sufficient to show that Var∗ (√ n(D◦n −Dn)(Mn −M) ) → 0. Writing the above in terms of A◦i , we have Var∗ (√ n(D◦n −Dn)(Mn −M) ) = Var∗ √ n nx nx∑ i=1 (A◦i − 1) 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] = n n2x nx∑ i=1 Var∗ (A◦i − 1) 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] 2 = n n2x nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] 2 . Now, we want to show that the last display converges almost surely to 0. Notice that each term within the outer sum will obviously converge due to the SLLN. Showing that the outer sum also converges almost surely is technically difficult, but conceptually follows the same argument used to prove the SLLN (specifically, we follow the one done elegantly by Etemadi (1981); Luzia (2018) provides a more detailed account of this proof technique that is helpful for developing a deeper understanding). We show the following version of almost sure convergence: that for any ǫ > 0, P n n2x nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Sj ] 2 > ǫ i.o. = 0, where i.o. stands for infinitely often. Define the shorthand Lij = ℓ(Zi, fSj ) and let L̄ij = Lij1{Lij < ij} be a truncated version of Lij . The proof of Theorem 2 of Etemadi (1981) implies that P (L̄ij 6= Lij i.o.) = 0, because the assumption Var(Lij) < ∞ implies the assumption used in Etemadi (1981), and independence of {Lij}i,j is not needed for this result. Therefore, 1 nx nx∑ i=1 1 ns ns∑ j=1 Lij − L̄ij 2 a.s.→ 0, and 1 nx nx∑ i=1 1 ns ns∑ j=1 E[Lij | Zi]− E[L̄ij | Zi] 2 a.s.→ 0. Together, these imply that if we can prove that the truncated sum converges, ie., 1 nx n∑ i=1 1 ns ns∑ j=1 L̄ij − E[L̄ij | Zi] 2 a.s.→ 0, (6) this is sufficient to show that the un-truncated version converges almost surely. To prove (6), we show two things: first, that there is a subsequence kn such that (6) holds when restricted to the subsequence, and then we show that the sequence is a Cauchy sequence, which together imply the result. Let α > 1 and let kn = α n. For convenience, denote knx as the number of data samples and kns as the number of seed samples when knx + kns = kn total samples are drawn. We will ignore integer rounding issues, and assume knx = (1− ps)αn, and kns = psαn. 
The following lemma shows that the subsequence defined by kn converges almost surely. Lemma 5. Let α > 1, and kn = α n. Under the assumptions of Theorem 1 and that Lij ≥ 0 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ i.o. = 0. We now must show that the sequence in (6) is a Cauchy sequence. Note that the SLLN implies that 1 nx nx∑ i=1 E[L̄ij | Zi]2 a.s.→ E[E[L̄ij | Zi]2], and the LLN for exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that 1 nx nx∑ i=1 1 ns ns∑ j=1 L̄ijE[L̄ij | Zi] a.s.→ E[E[L̄ij | Zi]2]. Therefore, 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij 2 a.s.→ E[E[L̄ij | Zi]2]. (7) Notice that because L̄ij ≥ 0, the sum ∑nx i=1 (∑ns j=1 L̄ij )2 is monotone increasing in ns and nx. With this in mind, for any m > 0, let n be such that kn ≤ m < kn+1. Then, by the montonicity, ( kn kn+1 1 kn )3 knx∑ i=1 kns∑ j=1 L̄ij 2 ≤ ∑(1−ps)m i=1 (∑psm j=1 L̄ij )2 p2s(1− ps)m3 ≤ ( kn+1 kn 1 kn+1 )3 k(n+1)x∑ i=1 k(n+1)s∑ j=1 L̄ij 2 . From (7), the left hand side converges to 1α3E[E[L̄ij | Zi]2], and the right hand side converges to α3E[E[L̄ij | Zi]2]. Because α is arbitrary, this proves that the sequence ∑(1−ps)m i=1 (∑psm j=1 L̄ij )2 p2s(1− ps)m3 m=1,... is almost surely Cauchy. Together with Lemma 5, this implies (6). B.5 PROOF OF LEMMA 5 We will show that ∞∑ n=1 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ < ∞. This, along with the first Borel-Cantelli lemma (Émile Borel, 1909; Cantelli, 1917) implies the result. Applying Markov’s inequality and using the fact that L̄ij and L̄ih are independent conditional on Zi gives ∞∑ n=1 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ ≤ 1 ǫ ∞∑ n=1 E 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 = 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E [( L̄ij − E[L̄ij | Zi] )2] ≤ 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E[L̄2ij ], where the last line follows from the law of total variance. To simplify the remaining algebra, we will use a . b to denote that there is some constant 0 < c < ∞ such that a < cb. Continuing, we have 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E[L̄2ij ] . 1 ǫ ∞∑ n=1 knx∑ i=1 kns∑ j=1 1 k3n E[L̄2ij ] = 1 ǫ ∞∑ i=1 ∞∑ j=1 E[L̄2ij ] ∞∑ n=n(i,j) 1 α3n . 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i/(1− ps), j/ps}3 E[L̄2ij ] . 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] = 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] where n(i, j) is shorthand for n(i, j) = logα max{i/(1− ps), j/ps} is the first n such that knx ≥ i and kns ≥ j. Now, define Q as the distribution of L11 induced by Z1 and S1. Additionally, split the inner sum into two pieces, one for when j < i and so max{i, j} = i and one for when j ≥ i and so max{i, j} = j. 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] = 1 ǫ ∞∑ i=1 i∑ j=1 1 i3 ∫ ij 0 x2 dQ(x) + ∞∑ j=i ∫ ij 0 x2 dQ(x) = 1 ǫ ∞∑ i=1 i−1∑ j=1 1 i3 ij∑ k=1 ∫ k k−1 x2 dQ(x) + ∞∑ j=i ij∑ k=1 ∫ k k−1 x2 dQ(x) switching the order of the indices over j and k, using that 1 ≤ k ≤ ij and the constraints on j relative to i, 1 ǫ ∞∑ i=1 i−1∑ j=1 1 i3 ij∑ k=1 ∫ k k−1 x2 dQ(x) + ∞∑ j=i ij∑ k=1 ∫ k k−1 x2 dQ(x) . 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 ∞∑ j=max{i,k/i} 1 j3 ∫ k k−1 x2 dQ(x) . 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 1 max{i, k/i}2 ∫ k k−1 x2 dQ(x) . Switching the order of summation over i and k, and separating out the terms where k/i < i and k/i ≥ i, 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 1 max{i, k/i}2 ∫ k k−1 x2 dQ(x) = 1 ǫ ∞∑ k=1 (∫ k k−1 x2 dQ(x) ) √ k+1∑ i=1 (i− k/i) i3 + ∞∑ i= √ k 1 i2 + √ k∑ i=1 i2 k2 . 1 ǫ ∞∑ k=1 1√ k (∫ k k−1 x2 dQ(x) ) . 1 ǫ ∞∑ k=1 (∫ k k−1 x2√ x dQ(x) ) . 
(1/ǫ) ∫_0^∞ x^1.5 dQ(x) < ∞.

C INSTANCE-LEVEL AGREEMENT OF MULTIBERTS ON GLUE

We present additional performance experiments to complement Section 2. Table 3 shows per-example agreement rates on GLUE predictions between pairs of models pretrained with a single seed (“same”) and pairs pre-trained with different seeds (“diff”); in all cases, models are fine-tuned with different seeds. With the exception of RTE, we see high agreement (over 90%) on test examples drawn from the same distribution as the training data, and note that agreement is 1–2% lower on average for the predictions of models pre-trained on different seeds compared to models pre-trained on the same seed. However, this discrepancy becomes significantly more pronounced if we look at out-of-domain “challenge sets” which feature a different data distribution from the training set. For example, if we evaluate our MNLI models on the anti-stereotypical examples from HANS (McCoy et al., 2019), we see agreement drop from 88% to 82% when comparing across pre-training seeds. Figure 4 shows how this can affect overall accuracy, which can vary over a range of nearly 20% depending on the pre-training seed. Such results underscore the need to evaluate multiple pre-training runs, especially when evaluating a model’s ability to generalize outside of its training distribution.

D CROSS-SEED VARIATION

Figure 5 shows variation in Winogender bias correlation (§4) between each MultiBERTs pretraining seed. Each box shows the distribution over five runs, and some of the variation between seeds may simply be due to variation in training the coreference model. If we average the scores for each seed and then look at the distribution of this per-seed average score, we get 0.45±0.11. What if pretraining didn’t matter? If we ignore the seed and randomly sample sets of five runs from this set with replacement, we get scores of 0.45±0.05, telling us that most of the variance can only be explained by differences between the pretraining checkpoints. We can confirm this by taking a subset of our pretraining seeds and training 25 additional randomly-initialized coreference models. Figure 6 shows the result: seeds 0, 2, 3, and 4 appear closer together than in Figure 5, but seed 1 clearly has different properties with respect to our Winogender metric. We can confirm this with an unpaired Multi-Bootstrap analysis, taking seed 0 as base and seed 1 as experiment: we observe a significant effect of δ = 0.203 (p = 0.009), as shown in Table 4.

E CASE STUDY: MULTIBERTS VS. ORIGINAL BERT

As an additional example application, we discuss challenges in reproducing the performance of the original BERT checkpoint, using the Multi-Bootstrap procedure. The original bert-base-uncased checkpoint appears to be an outlier when viewed against the distribution of scores obtained using the MultiBERTs reproductions. Specifically, in reproducing the training recipe of Devlin et al. (2019), we found it difficult to simultaneously match performance on all tasks using a single set of hyperparameters. Devlin et al. (2019) reports training for 1M steps. However, as shown in Figures 1 and 2, models pre-trained for 1M steps matched the original checkpoint on SQuAD but lagged behind on GLUE tasks; if pre-training continues to 2M steps, GLUE performance matches the original checkpoint but SQuAD performance is significantly higher. The above observations suggest two separate but related hypotheses (below) about the BERT pretraining procedure. 1.
On most tasks, running BERT pre-training for 2M steps produces better models than 1M steps. 2. The MultiBERTs training procedure outperforms the original BERT procedure on SQuAD. Let us use the Multi-Bootstrap
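As a minimal sketch of how these two comparisons might be set up (the helper below and the variable names acc_1m, acc_2m, f1_multiberts, and original_bert_f1 are illustrative assumptions, not the released analysis code): hypothesis 1 calls for the paired design, since the 1M- and 2M-step models come from the same pre-training seeds, while hypothesis 2 is a comparison to a fixed baseline, since only a single original checkpoint exists.

    import numpy as np

    def multibootstrap_mean(scores, paired_with=None, nboot=1000, seed=0):
        # Bootstrap the seed-averaged mean of an (n_examples, n_seeds) score matrix.
        # If paired_with is given, the same example/seed resampling is applied to both
        # matrices and the difference (scores - paired_with) is bootstrapped (paired design).
        rng = np.random.default_rng(seed)
        n_x, n_s = scores.shape
        out = np.zeros(nboot)
        for b in range(nboot):
            rows = rng.choice(n_x, size=n_x, replace=True)
            cols = rng.choice(n_s, size=n_s, replace=True)
            theta = scores[np.ix_(rows, cols)].mean()
            if paired_with is not None:
                theta -= paired_with[np.ix_(rows, cols)].mean()
            out[b] = theta
        return out

    # Hypothesis 1, paired design: per-example accuracy matrices for the 2M- and
    # 1M-step checkpoints, with columns aligned by pre-training seed.
    #   deltas = multibootstrap_mean(acc_2m, paired_with=acc_1m)
    #   p = np.mean(deltas <= 0.0)   # H0: 2M steps is no better than 1M steps
    #
    # Hypothesis 2, fixed baseline: bootstrap distribution of MultiBERTs SQuAD F1
    # versus the single reported score of the original bert-base-uncased checkpoint.
    #   thetas = multibootstrap_mean(f1_multiberts)
    #   print(np.quantile(thetas, [0.025, 0.975]), original_bert_f1)

In the second case the resulting interval should be read conservatively, as discussed under design 1: nothing can be said about the variance of the single baseline artifact.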
1. What is the main contribution of the paper regarding the use of large language models in NLP research? 2. What are the strengths and weaknesses of the proposed approach, particularly in its application to BERT models? 3. How does the reviewer assess the clarity and openness of the authors' writing and code sharing? 4. What is the significance of the case study in section 4, and how does it relate to the rest of the contributions? 5. Are there any concerns or suggestions regarding the adaptation of the multi-bootstrap approach for future work?
Summary Of The Paper Review
Summary Of The Paper
Many tasks in contemporary NLP begin by building off of a large language model. This can cast the downstream task as a sort of fine-tuning experiment, whereby the results are heavily influenced by conditioning on the starting point of a single pre-trained version of an LLM. In this work, the authors take BERT as an example, and ask how much a specific artifact, as a single draw from the distribution over (model weights, initialization scheme, training data, loss function), affects downstream tasks built upon it. The authors provide a wide variety of BERT models that are varied in their training and initialization. They define a bootstrap procedure for the scenario where multiple instantiations of base models are available, and tie their findings together in a case study of gender bias in coreference resolution.

Review
The authors address the question of using artifactual large language models as the basis of experimentation in NLP research. In particular, they examine the extent to which results of finetuning or transfer learning can be attributed to the use of a single best checkpoint drawn from a distribution over training data, loss, random seed, and architecture, for common building blocks (here they consider BERT).

Strengths: The biggest strength of the work as I read it is the multi-bootstrap. This is the most likely element to be reused in future work. I commend the openness of the authors; their code, their checkpoints, and the clarity of their writing made this paper a pleasure to review. The inclusion of a case study in section 4 brought the whole of the contributions together, and highlighted the value of both multiple independent samples as well as the bootstrap framework used to assess the contribution of different possible choices.

Weaknesses: I hope that the resources of the MultiBERT models will be reused in NLP research, but fear that since LLM research is progressing at a frenetic pace, BERT is no longer as relevant as it once was even a mere 12 months ago, being supplanted on several tasks by prompt-based language models (e.g., T5, GPT-3) or smaller but more versatile reimplementations (e.g., GPT-J). This is beyond the authors' control, however, and does not much diminish their contribution here. On page 5, it was not clear to me why, in the paired samples design, estimating δ̂ − δ represents the overall error. It would help here to explain (in a footnote if need be) how the overall error is represented in terms of δ or δ̂.

Questions: This may be already established in other bootstrap results, but what is the sample efficiency of the convergence in distribution for Theorem 1? As the authors detail in their environmental impact statement of section 2.1, the cost of producing the MultiBERT checkpoints even in favourable energy generation conditions is not negligible. Could the multi-bootstrap be adapted to probing smaller sets of models that themselves are resampled (e.g., under repeated dropout masks as in MC-Dropout) to approximate posterior distributions over the model parameters? The samples would no longer be IID, but such a scheme would admit many more use cases for the multi-bootstrap. In section 4, CDA-full sounds like a much more drastic intervention. The MultiBERTs in CDA-incr are trained on much larger corpora, but the CDA-full MultiBERTs are trained from initialization only on Webster et al.'s data? Won't this introduce a confound whereby the CDA-full models are less proficient overall compared to the CDA-incr models?
ICLR
Title The MultiBERTs: BERT Reproductions for Robustness Analysis Abstract Experiments with pre-trained models such as BERT are often based on a single checkpoint. While the conclusions drawn apply to the artifact tested in the experiment (i.e., the particular instance of the model), it is not always clear whether they hold for the more general procedure which includes the architecture, training data, initialization scheme, and loss function. Recent work has shown that repeating the pre-training process can lead to substantially different performance, suggesting that an alternate strategy is needed to make principled statements about procedures. To enable researchers to draw more robust conclusions, we introduce the MultiBERTs, a set of 25 BERT-Base checkpoints, trained with similar hyper-parameters as the original BERT model but differing in random weight initialization and shuffling of training data. We also define the Multi-Bootstrap, a non-parametric bootstrap method for statistical inference designed for settings where there are multiple pre-trained models and limited test data. To illustrate our approach, we present a case study of gender bias in coreference resolution, in which the Multi-Bootstrap lets us measure effects that may not be detected with a single checkpoint. We release our models and statistical library, along with an additional set of 140 intermediate checkpoints captured during pre-training to facilitate research on learning dynamics. 1 INTRODUCTION Contemporary natural language processing (NLP) relies heavily on pretrained language models, which are trained using large-scale unlabeled data (Bommasani et al., 2021). BERT (Devlin et al., 2019) is a particularly popular choice: it has been widely adopted in academia and industry, and aspects of its performance have been reported on in thousands of research papers (see, e.g., Rogers et al., 2020, for an overview). Because pre-training large language models is computationally expensive (Strubell et al., 2019), researchers often rely on the release of model checkpoints through libraries such as HuggingFace Transformers (Wolf et al., 2020), which enable them to use large-scale language models without repeating the pre-training work. Consequently, most published results are based on a small number of publicly released model checkpoints. While this reuse of model checkpoints has lowered the cost of research and facilitated head-to-head comparisons, it limits our ability to draw general scientific conclusions about the performance of a particular class of models (Dror et al., 2019; D’Amour et al., 2020; Zhong et al., 2021). The key issue is that reusing model checkpoints makes it hard to generalize observations about the behavior of a single model artifact to statements about the underlying pre-training procedure which created it. Pre-training such models is an inherently stochastic process which depends on the initialization of the model’s parameters and the ordering of training examples; for example, D’Amour et al. ∗ Equal contribution. † Work done as a Google AI resident. ‡ Work done during an internship at Google. 1http://goo.gle/multiberts (2020) report substantial quantitative differences across multiple checkpoints of the same model architecture on several “stress tests” (Naik et al., 2018; McCoy et al., 2019). It is therefore difficult to know how much of the success of a model based on the original BERT checkpoint is due to BERT’s design, and how much is due to idiosyncracies of a particular artifact. 
Understanding this difference is critical if we are to generate reusable insights about deep learning for NLP, and improve the state-of-the-art going forward (Zhou et al., 2020; Dodge et al., 2020; Aribandi et al., 2021). This paper describes the MultiBERTs, an effort to facilitate more robust research on the BERT model. Our primary contributions are: • We release the MultiBERTs, a set of 25 BERT-Base, Uncased checkpoints to facilitate studies of robustness to parameter initialization and order of training examples (§2). Releasing these models preserves the benefits to the community of a single checkpoint release (i.e., low cost of experiments, apples-to-apples comparisons between studies based on these checkpoints), while enabling researchers to draw more general conclusions about the BERT pre-training procedure. • We present the Multi-Bootstrap, a non-parametric method to quantify the uncertainty of experimental results based on multiple pre-training seeds (§3), and provide recommendations for how to use the Multi-Bootstrap and MultiBERTs in typical experimental scenarios. We implement these recommendations in a software library. • We illustrate the approach with a practical use case: we investigate the impact of counterfactual data augmentation on gender bias, in a BERT-based coreference resolution systems (Webster et al., 2020) (§4). Additional examples are provided in Appendix E, where we document challenges with reproducing the widely-used original BERT checkpoint. The release also includes an additional 140 intermediate checkpoints, captured during training for 5 of the runs (28 checkpoints per run), to facilitate studies of learning dynamics. Our checkpoints and statistical libraries are available at: http://goo.gle/multiberts. Additional Related Work. The MultiBERTs release builds on top of a large body of work that seeks to analyze the behavior of BERT (Rogers et al., 2020). In addition to the studies of robustness cited above, several authors have introduced methods to reduce BERT’s variability during finetuning (Zhang et al., 2021; Mosbach et al., 2021; Dodge et al., 2020; Lee et al., 2020; Phang et al., 2018). Other authors have also studied the time dimension, which motivates our release of intermediate checkpoints (Liu et al., 2021; Hao et al., 2020; Saphra & Lopez, 2019; Chiang et al., 2020; Dodge et al., 2020). Similarly to §3, authors in the NLP literature have recommended best practices for statistical testing (Koehn, 2004; Dror et al., 2018; Berg-Kirkpatrick et al., 2012; Card et al., 2020; Søgaard et al., 2014; Peyrard et al., 2021), many of which are based on existing tests to estimate the uncertainty of test sample. In concurrent work, Deutsch et al. (2021) considered bootstrapping methods similar to the Multi-Bootstrap, in the context of summarization metrics evaluation. Also in concurrent work, the Mistral project (Karamcheti et al., 2021) released a set of 10 GPT-2 models with intermediate checkpoints at different stages of pre-training. Our work is complementary, focusing on BERT, introducing a larger number of pre-training seeds, and presenting a methodology to draw robust conclusions about model performance. 2 RELEASE DESCRIPTION We first describe the MultiBERTs release: how the checkpoints were trained and how their performance compares to the original BERT on two common language understanding benchmarks. 2.1 TRAINING Overview. The MultiBERTs checkpoints are trained following the code and procedure of Devlin et al. 
(2019), with minor hyperparameter modifications necessary to obtain comparable results on GLUE (Wang et al., 2019); a detailed discussion of these differences is provided in Appendix E. We use the BERT-Base, Uncased architecture with 12 layers and embedding size 768. We trained the models on a combination of BooksCorpus (Zhu et al., 2015) and English Wikipedia. Since the exact dataset used to train the original BERT is not available, we used a more recent version that was collected by Turc et al. (2019) with the same methodology. Checkpoints. We release 25 models trained for two million steps each, each training step involving a batch of 256 sequences. For five of these models, we release 28 additional checkpoints captured over the course of pre-training (every 20,000 training steps up to 200,000, then every 100,000 steps). In total, we release 165 checkpoints, about 68 GB of data. Training Details. As in the original BERT paper, we used batch size 256 and the Adam optimizer (Kingma & Ba, 2014) with learning rate 1e-4 and 10,000 warm-up steps. We used the default values for all the other parameters, except the number of steps, which we set to two million, and sequence length, which we set to 512 from the beginning with up to 80 masked tokens per sequence.2 We follow the BERT code and initialize the layer parameters from a truncated normal distribution, using mean 0 and standard deviation 0.02. We train using the same configuration as Devlin et al. (2019)3, with each run taking about 4.5 days on 16 Cloud TPU v2 chips. Environmental Impact Statement. We estimate compute costs at around 1728 TPU-hours for each pre-training run, and around 208 GPU-hours plus 8 TPU-hours for associated fine-tuning experiments (§2.2, including hyperparameter search and 5x replication). Using the calculations of Luccioni et al. (2019)4, we estimate this as about 250 kg CO2e for each of our 25 models. Counting the 25 runs each of CDA-incr and CDA-full from §4, associated coreference models (20 GPU-hours per pretraining model), and additional experiments of Appendix E, this gives a total of about 12.0 metric tons CO2e before accounting for offsets or clean energy. Based on the report by Patterson et al. (2021) of 78% carbon-free energy in Google Iowa (us-central1), we estimate that reproducing these experiments would emit closer to 2.6 tons CO2e, or slightly more than two passengers on a round-trip flight between San Francisco and New York. By releasing the trained checkpoints publicly, we aim to enable many research efforts on reproducibility and robustness without requiring this cost to be incurred for every subsequent study. 2.2 PERFORMANCE BENCHMARKS GLUE Setup. We report results on the development sets of the GLUE tasks: CoLA (Warstadt et al., 2019), MNLI (matched) (Williams et al., 2018), MRPC (Dolan & Brockett, 2005), QNLI (v2) (Rajpurkar et al., 2016; Wang et al., 2019), QQP (Chen et al., 2018), RTE (Bentivogli et al., 2009), SST-2 (Socher et al., 2013), and SST-B (Cer et al., 2017). In all cases we follow the same approach as Devlin et al. (2019). For each task, we fine-tune BERT for 3 epochs using a batch 2Specifically, we keep the sequence length constant (the paper uses 128 tokens for 90% of the training then 512 for the remaining 10%) to expose the model to more tokens and simplify the implementation. 
As we were not able to reproduce original BERT exactly using either 1M or 2M steps (see Appendix E for discussion), we release MultiBERTs trained with 2M steps under the assumption that higher-performing models are more interesting objects of study. 3We use https://github.com/google-research/bert with TensorFlow (Abadi et al., 2015) version 2.5 in v1 compatibility mode. 4https://mlco2.github.io/impact/ size of 32. We run a parameter sweep on learning rates [5e-5, 4e-5, 3e-5, 2e-5] and report the best score. We run the procedure five times for each of the 25 models and average the results. SQuAD Setup. We report results on the development sets of SQuAD versions 1.1 and 2.0 (Rajpurkar et al., 2016; 2018), using a setup similar to that of Devlin et al. (2019). For both sets of experiments, we use batch size 48, learning rate 5e-5, and train for 2 epochs. Results. Figures 1 and 2 show the distribution of the MultiBERTs models’ performance on the development sets of GLUE and SQuAD, in comparison to the original BERT checkpoint.5 On most tasks, original BERT’s performance falls within the same range as MultiBERTs (i.e., original BERT is between the minimum and maximum of the MultiBERTs’ scores). However, original BERT outperforms all MultiBERTs models on QQP, and under-performs them on SQuAD. The discrepancies may be explained by both randomness and differences in training setups, as investigated further in Appendix E. To further illustrate the performance variability inherent to pre-training and fine-tuning, we analyze the instance-level agreement between the models in Appendix C. 3 HYPOTHESIS TESTING USING MULTIPLE CHECKPOINTS The previous section compared MultiBERTs with the original BERT, finding many similarities but also some differences (e.g., in the case of SQuAD). To what extent can these results be explained by random noise? More generally, how can we quantify the uncertainty of a set of experimental results when there are multiple sources of randomness? In parallel to the MultiBERTs release, we propose a more principled and standardized method to compare training procedures. We recommend a non-parametric bootstrapping procedure, the “Multi-Bootstrap”, which enables us to make inference about model performance in the face of multiple sources of uncertainty: the randomness due to the pre-training seed, the fine-tuning seed, and the finite test data. The main idea is to use the average behavior over seeds as a means of summarizing expected behavior in an ideal world with infinite samples. Although we present Multi-Bootstrap in the context of analyzing the MultiBERTs, the method could be applied in all setups that involve a set of checkpoints pre-trained with the same method, a finite test set, and (possibly) multiple rounds of fine-tuning. The Multi-Bootstrap is implemented as a Python library, included with the MultiBERTs release. 3.1 INTERPRETING STATISTICAL RESULTS The Multi-Bootstrap provides an estimate of the amount of remaining uncertainty when summarizing the performance over multiple seeds. The following notation will help us state this precisely. We assume access to model predictions f(x) for each instance x in the evaluation set. We consider randomness arising from: 1. The choice of pre-training seed S ∼ M 2. The choice of fine-tuning seed T ∼ N 3. The choice of test sample (X,Y ) ∼ D The Multi-Bootstrap procedure allows us to account for all of the above. 
Specifically, MultiBERTs enables us to estimate the variance due to the choice of pre-training seed (1), which would not be possible with a single artifact. Note that multiple fine-tuning runs are not required in order to use the procedure. 5We used https://storage.googleapis.com/bert_models/2020_02_20/uncased_ L-12_H-768_A-12.zip, as linked from https://github.com/google-research/bert. For each pre-training seed s, let fs(x) denote the learned model’s prediction on input features x and let L(s) denote the expected performance metric of fs on a test distribution D over features X and labels Y . For example, the accuracy would be L(s) = E[1{Y = fs(X)}]. We can use the test sample (which we will assume has nx examples) to estimate the performance for each of the seeds in MultiBERTs, which we denote as L̂(s). The performance L(s) depends on the seed, but we are interested in summarizing the model over all seeds. A natural summary is the average over seeds, ES∼M [L(S)], which we will denote by θ. Then, using ns independently sampled seeds, we can compute an estimate θ̂ as θ̂ = 1 ns ns∑ j=1 L̂(Sj) . Because θ̂ is computed under a finite evaluation set and finite number of seeds, it is necessary to quantify the uncertainty of the estimate. The goal of Multi-Bootstrap is to estimate the distribution of the error in this estimate, θ̂ − θ, in order to compute confidence intervals and test hypotheses about θ, such as whether it is above some threshold of interest. Below, we describe a few common experimental designs in NLP that can be studied with these tools. Design 1: Comparison to a Fixed Baseline. In many use cases, we want to compare BERT’s behavior to that of a single, fixed baseline. For instance, does BERT encode information about syntax as a feature-engineered model would (Tenney et al., 2019; Hewitt & Manning, 2019)? Does it encode social stereotypes, and how does it compare to human biases (Nadeem et al., 2021)? Does it encode world knowledge, similarly to explicit knowledge bases (Petroni et al., 2019)? Does another model such as RoBERTa (Liu et al., 2019) outperform BERT on common tasks such as those from the GLUE benchmark? In all these cases, we compare MultiBERTs to some external baseline of which we only have a single estimate (e.g., random or human performance), or against an existing model that is not derived from the MultiBERTs checkpoints. We treat the baseline as fixed, and assess only the uncertainty that arises from MultiBERTs’ random seeds and the test examples. Design 2: Paired Samples. Alternatively, we might seek to assess the effectiveness of a specific intervention on model behavior. In such studies, an intervention is proposed (e.g., representation learning via a specific intermediate task, or a specific architecture change) which can be applied to any pre-trained BERT checkpoint. The question is whether the procedure results in an improvement over the original BERT pre-training method: does the intervention reliably produce the desired effect, or is the observed effect due to the idiosyncracies of a particular model artifact? Examples of such studies include: Does intermediate tuning on NLI after pre-training make models more robust across language understanding tasks (Phang et al., 2018)? Does pruning attention heads degrade model performance on downstream tasks (Voita et al., 2019)? Does augmenting BERT with information about semantic roles improve performance on benchmark tasks (Zhang et al., 2020)? 
We refer to studies like the above as paired since each instance of the baseline model fs (which does not receive the intervention) can be paired with an instance of the proposed model f ′s (which receives the stated intervention) such that fs and f ′ s are based on the same pretrained checkpoint produced using the same seed. Denoting θf and θf ′ as the expected performance defined above for the baseline and intervention model respectively, our goal is to test hypotheses about the true difference in performance δ = θf ′ − θf using the estimated difference δ̂ = θ̂f ′ − θ̂f . In a paired study, Multi-Bootstrap allows us to estimate both of the errors θ̂f − θf and θ̂f ′ − θf ′ , as well as the correlation between the two. Together, these allow us to approximate the distribution of the overall estimation error δ̂ − δ = (θ̂f − θ̂f ′) − (θf − θf ′), between the estimate δ̂ and the truth δ. With this, we can compute confidence intervals for δ, the true average effect of the intervention on performance over seeds, and test hypotheses about δ, as well. Design 3: Unpaired Samples. Finally, we might seek to compare a number of seeds for both the intervention and baseline models, but may not expect them to be aligned in their dependence on the seed. For example, the second model may use a different architecture so that they cannot be built from the same checkpoints, or the models may be generated from entirely separate initialization schemes. We refer to such studies as unpaired. Like in a paired study, the Multi-Bootstrap allows us to estimate the errors θ̂f − θf and θ̂f ′ − θf ′ ; however, in an unpaired study, we cannot estimate the correlation between the errors. Thus, we assume that the correlation is zero. This will give a conservative estimate of the error (θ̂f − θ̂f ′) − (θf − θf ′), as long as θ̂f − θf and θ̂f ′ − θf ′ are not negatively correlated. Since there is little reason to believe that the random seeds used for two different models would induce a negative correlation between the models’ performance, we take this assumption to be relatively safe. Hypothesis Testing. Given the measured uncertainty, we recommend testing whether or not the difference is meaningfully different from some arbitrary predefined threshold (i.e., 0 in the typical case). Specifically, we are often interested in rejecting the null hypothesis that the intervention does not improve over the baseline model, i.e., H0 : δ ≤ 0 (1) in a statistically rigorous way. This can be done with the Multi-Bootstrap procedure described below. 3.2 MULTI-BOOTSTRAP PROCEDURE The Multi-Bootstrap is a non-parametric bootstrapping procedure that allows us to estimate the distribution of the error θ̂ − θ over the seeds and test instances. The algorithm supports both paired and unpaired study designs, differentiating the two settings only in the way the sampling is performed. To keep the presentation simple, we will assume that the performance L(s) is an average of a perexample metric ℓ(x, y, fs) over the distribution D of (X,Y ), such as accuracy or the log likelihood, and L̂(s) is similarly an empirical average with the observed nx test examples, L(s) = ED[ℓ(X,Y, fs)], and L̂(s) = 1 nx nx∑ i=1 ℓ(Xi, Yi, fs). We note that the mapping D 7→ L(s) is linear in D, which is required for our result in Theorem 1. 
However, we conjecture that this is an artifact of the proof; like most bootstrap methods, the method here likely generalizes to any performance metric which behaves asymptotically like a linear mapping of D, including AUC, BLEU score (Papineni et al., 2002), and expected calibration error. Building on the rich literature on bootstrap methods (e.g., Efron & Tibshirani, 1994), the MultiBootstrap is a new procedure which accounts for the way that the combined randomness from the seeds and test set creates error in the estimate θ̂. The statistical underpinnings of this approach have theoretical and methodological connections to inference procedures for two-sample tests (Van der Vaart, 2000), where the samples from each population are independent. However, in those settings, the test statistics naturally differ as a result of the scientific question at hand. In our procedure, we generate a bootstrap sample from the full sample with replacement separately over both the randomness from the pre-training seed s and from the test set (X,Y ). That is, we generate a sample of pre-training seeds (S∗1 , S ∗ 2 , . . . , S ∗ ns) with each S ∗ j drawn randomly with replacement from the pre-training seeds, and we generate a test set sample ((X∗1 , Y ∗ 1 ), (X ∗ 2 , Y ∗ 2 ), . . . , (X ∗ nx , Y ∗ nx)) with each (X,Y ) pair drawn randomly with replacement from the full test set. Then, we compute the bootstrap estimate θ̂∗ as θ̂∗ = 1 ns ns∑ j=1 L̂∗(S∗j ), where L̂ ∗(s) = 1 nx nx∑ i=1 ℓ(X∗i , Y ∗ i , fs). To illustrate the procedure, we present a minimal Python implementation in Appendix A. For sufficiently large nx and ns, the distribution of the estimation error θ̂ − θ is approximated well by the distribution of θ̂∗ − θ̂ over re-draws of the bootstrap samples, as stated precisely in Theorem 1. Theorem 1. Assume that E[ℓ2(X,Y, fS)] < ∞. Furthermore, assume that for each s, E[ℓ2(X,Y, fs)] < ∞, and for almost every (x, y) pair, E[ℓ2(X,Y, fS) | X = x, Y = y] < ∞. Let n = nx +ns, and assume that 0 < ps = ns/n < 1 stays fixed (up to rounding error) as n → ∞. Then, there exists 0 < σ2 < ∞ such that √n(θ̂ − θ) d→ G with G ∼ N (0, σ2). Furthermore, conditionally on ((X1, Y1), (X2, Y2), . . . ), √ n(θ̂∗ − θ̂) d→ G. The proof of Theorem 1 is in Appendix B, along with a comment on the rate of convergence for the approximation error. The challenge with applying existing theory to our method is that while the seeds and data points are each marginally iid, the observed losses depend on both, and therefore are not iid. Therefore, we need to handle this non-iid structure in our method and proof. For nested sources of randomness (e.g., if for each pre-training seed s, we have estimates from multiple fine-tuning seeds), we average over all of the inner samples (fine-tuning seeds) in every bootstrap sample, motivated by Field & Welsh (2007)’s recommendations for bootstrapping clustered data. Paired Samples (design 2, continued). In a paired design, the Multi-Bootstrap procedure can additionally tell us the joint distribution of θ̂f ′ − θf ′ and θ̂f − θf . To do so, one must use the same bootstrap samples of the seeds (S∗1 , S ∗ 2 , . . . , S ∗ ns) and test examples ((X∗1 , Y ∗ 1 ), (X ∗ 2 , Y ∗ 2 ), . . . , (X ∗ nx , Y ∗ nx)) for both models. Then, the correlation between the errors θ̂f ′ − θf ′ and θ̂f − θf is well approximated by the correlation between the bootstrap errors θ̂∗f ′ − θ∗f ′ and θ̂∗f − θ∗f . 
In particular, recall that we defined the difference in performance between the intervention f ′ and the baseline f to be δ, and defined its estimator to be δ̂. With the Multi-Bootstrap, we can estimate the bootstrapped difference δ̂∗ = θ̂∗f ′ − θ̂∗f . With this, the distribution of the estimation error δ̂ − δ is well approximated by the distribution of δ̂∗ − δ̂ over bootstrap samples. Unpaired Samples (design 3, continued). For studies that do not match the paired format, we adapt the Multi-Bootstrap procedure so that, instead of sampling a single pre-training seed that is shared between f and f ′, we sample pre-training seeds for each one independently. The remainder of the algorithm proceeds as in the paired case. Relative to the paired design discussed above, this additionally assumes that the errors due to differences in pre-training seed between θ̂f ′ − θf ′ and θ̂f − θf are independent. Comparison to a Fixed Baseline (design 1, continued). Often, we do not have access to multiple estimates of L(s), for example, when the baseline f against which we are comparing is an estimate of human performance for which only mean accuracy was reported, or when f is the performance of a previously-published model for which there only exists a single artifact or for which we do not have direct access to model predictions. When we have only a point estimate θ̂f = L̂(S1) of θf for the baseline f with a single seed S1, we recommend using Multi-Bootstrap to compute a confidence interval around θf ′ and reporting where the given estimate of baseline performance falls within that distribution. An example of such a case is Figure 1, in which the distribution of MultiBERTs performance is compared to that from the single checkpoint of the original BERT release. In general such results should be interpreted conservatively, as we cannot make any claims about the variance of the baseline model. Hypothesis Testing. A valid p-value for the hypothesis test described in Equation 1 is the fraction of bootstrap samples from the above procedure for which the estimate δ̂ is negative. 4 APPLICATION: GENDER BIAS IN COREFERENCE SYSTEMS We present a case study to illustrate how MultiBERTs and the Multi-Bootstrap can help us draw more robust conclusions about model behavior. The use case is based on gendered correlations. For a particular measure of gender bias, we take a single BERT checkpoint and measure a value of 0.35. We then apply an intervention, foo, designed to reduce this correlation, and measure 0.25. In an effort to do even better, we create a whole new checkpoint by applying the foo procedure from the very beginning of pre-training. On this checkpoint, we measure 0.3. How does one make sense of this result? As a concrete example, we analyze gender bias in coreference systems (Rudinger et al., 2018) and showing how MultiBERTs and the Multi-Bootstrap can help us understand the effect of an intervention, counterfactual data augmentation (CDA). We follow a set-up similar to Webster et al. (2020), which augments the BERT pretraining data with counterfactual sentences created by randomly swapping English binary-gendered pronouns. The goal is to weaken the correlation between gendered pronouns and other words such as occupation terms (e.g., doctor, nurse). We compare our baseline MultiBERTs models to two strategies for CDA. In the first (CDA-incr), we continue pre-training each MultiBERTs model for an additional 50K steps on the counterfactual data of Webster et al. (2020). 
In the second, we train BERT models from scratch (CDA-full) on the same dataset.
The Winogender dataset consists of template sentences covering 60 occupation terms, instantiated with either male, female, or neutral pronouns. We follow Webster et al. (2020) and train a gold-mention coreference system using a two-layer feedforward network that takes span representations from a frozen BERT encoder as input and makes binary predictions for mention-referent pairs. The model is trained on OntoNotes (Hovy et al., 2006) and evaluated on the Winogender examples for both per-sentence accuracy and a bias score, defined as the Pearson correlation between the per-occupation bias score (Figure 4 of Rudinger et al. 2018) and the occupational gender statistics from the U.S. Bureau of Labor Statistics.6 For each pre-training run, we train five coreference models, using the same encoder but different random seeds to initialize the classifier weights and to shuffle the training data.
4.1 PAIRED ANALYSIS: CDA-INCR VS. BASE
We investigate the impact of the intervention on performance and bias. Overall accuracy is fairly consistent across pre-training seeds, at 62.6±1.2% for the base model, with only a small and not statistically significant change under CDA-incr (Table 1). However, as shown in Figure 3, there is considerable variation in bias correlation, with r values between 0.1 and 0.7 depending on the pre-training seed.7 The range for CDA-incr overlaps somewhat, with values between 0.0 and 0.4; however, because the incremental CDA is an intervention on each base checkpoint, we can look at the individual seeds and see that in most cases there appears to be a significant improvement. A paired Multi-Bootstrap allows us to quantify this and further account for noise due to the finite evaluation sample of 60 occupations. The results are shown in Table 1, which shows that CDA-incr significantly reduces bias by $\hat\delta = -0.162$ with p = 0.001.
6We use the occupation data as distributed with the Winogender dataset, https://github.com/rudinger/winogender-schemas.
7Some of this variation is due to the classifier training, but on this task there is a large intrinsic contribution from the pretraining seed. See Appendix D for a detailed analysis.
4.2 UNPAIRED ANALYSIS: CDA-FULL VS. CDA-INCR
We can also test if we get any additional benefit from running the entire pre-training with counterfactually-augmented data. Similar to MultiBERTs, we trained 25 CDA-full checkpoints for 2M steps on the CDA dataset.8 Because these are entirely new checkpoints, independent from the base MultiBERTs runs, we use an unpaired version of the Multi-Bootstrap, which uses the same set of examples but samples pre-training seeds independently for CDA-incr and CDA-full. As shown in Table 2, overall accuracy does not change appreciably (0.622 vs. 0.623, p = 0.416), while bias correlation seems to decrease but not significantly (0.256 vs. 0.192, $\hat\delta = -0.064$ with p = 0.132). As an ablation, we also experiment with sampling over only seeds (taking the set of examples, i.e. occupations, as fixed), or over only examples (taking the set of 25 seeds as fixed). As shown in Table 2, we find lower p-values (0.005 and 0.053) in both cases, showing that failing to account for finite samples along either dimension could lead to overconfident conclusions. In Appendix E, we present two additional examples: a paired study where we increase pre-training time from 1M to 2M steps, as well as an unpaired comparison to the original bert-base-uncased checkpoint.
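To make the paired comparison above concrete, the following sketch adapts the minimal single-model implementation of Appendix A to the paired two-model case. It is an illustrative sketch rather than the released code: the function name, the placeholder bias_score metric, the prediction arrays, and the one-sided p-value convention (testing whether the apparent bias reduction could be zero or reversed) are our own assumptions.

    import numpy as np

    def paired_multibootstrap(preds_base, preds_interv, labels, metric_fun, nboot):
        # Both prediction matrices are (n_examples, n_seeds); the pairing is preserved
        # by reusing the same resampled examples and seeds for both models.
        n_samples, n_seeds = preds_base.shape
        assert preds_interv.shape == (n_samples, n_seeds)
        assert labels.shape == (n_samples,)

        deltas = np.zeros(nboot)
        for b in range(nboot):
            x_idx = np.random.choice(n_samples, size=n_samples, replace=True)
            s_idx = np.random.choice(n_seeds, size=n_seeds, replace=True)

            def theta(preds):
                # Average the metric over the resampled seeds, computed on the
                # resampled test examples (same indices for both models).
                sub = preds[np.ix_(x_idx, s_idx)]
                return np.mean([metric_fun(sub[:, j], labels[x_idx])
                                for j in range(n_seeds)])

            deltas[b] = theta(preds_interv) - theta(preds_base)
        return deltas

    # Hypothetical usage: bias_score stands in for the Winogender bias correlation.
    # deltas = paired_multibootstrap(preds_base, preds_cda_incr, labels, bias_score, nboot=1000)
    # p_value = np.mean(deltas >= 0)  # fraction of bootstrap samples in which the reduction vanishes

The unpaired variant would differ only in drawing the seed indices independently for the two models, as described above.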
5 CONCLUSION
To make progress on language model pre-training, it is essential to distinguish between the properties of specific model artifacts and those of the training procedures that generated them. To this end, we have presented two resources: the MultiBERTs, a set of 25 model checkpoints to support robust research on BERT, and the Multi-Bootstrap, a non-parametric statistical method to estimate the uncertainty of model comparisons across multiple training seeds. We demonstrated the utility of these resources by showing how to quantify the effect of an intervention to reduce a type of gender bias in coreference systems built on BERT. We hope that the release of multiple checkpoints and the use of principled hypothesis testing will become standard practices in research on pre-trained language models.
8Following Webster et al. (2020), we use 20 masks per sequence instead of the 80 from Devlin et al. (2019).
A MINIMAL IMPLEMENTATION OF THE MULTI-BOOTSTRAP
Below, we present a simplified Python implementation of the Multi-Bootstrap algorithm presented in Section 3.2. It describes a single-sided version of the procedure, which could be used, e.g., to test that a model's performance is greater than 0. The input is a matrix of predictions where row indices correspond to test examples and column indices to random seeds. The function returns an array of nboot samples $[\hat\theta_1, \ldots, \hat\theta_{\text{nboot}}]$.

    import numpy as np

    def multibootstrap(predictions, labels, metric_fun, nboot):
        """
        Generates bootstrap samples of a model's performance.

        Input:
          predictions: 2D Numpy array with the predictions for different seeds.
          labels: 1D Numpy array with the labels.
          metric_fun: Python function. Takes a pair of arrays as input, and
            returns a metric or loss.
          nboot: Number of bootstrap samples to generate.

        Output:
          Numpy array with nboot samples.
        """
        # Checks the data format.
        n_samples, n_seeds = predictions.shape
        assert labels.shape == (n_samples,)

        thetas = np.zeros(nboot)
        for boot_ix in range(nboot):
            # Samples n_samples test examples and n_seeds pre-training seeds.
            x_samples = np.random.choice(n_samples, size=n_samples, replace=True)
            s_samples = np.random.choice(n_seeds, size=n_seeds, replace=True)

            # Computes the metric over the bootstrapping samples.
            sampled_predictions = predictions[np.ix_(x_samples, s_samples)]
            sampled_labels = labels[x_samples]
            sampled_metrics = [
                metric_fun(sampled_predictions[:, j], sampled_labels)
                for j in range(n_seeds)
            ]

            # Averages over the random seeds.
            thetas[boot_ix] = np.mean(sampled_metrics)

        return thetas

We provide the complete version of the algorithm in our repository http://goo.gle/multiberts. Our implementation is optimized and supports all the experiment designs described in Section 3, including paired and unpaired analysis as well as multiple fine-tuning runs for each pre-training seed.
B PROOF OF THEOREM 1
Before giving the proof, we define some useful notation that will simplify the argument considerably. We let $D_n$ be the empirical measure over the $n_x$ observations $(Z_i = (X_i, Y_i))_{i=1}^{n_x}$, and $M_n$ be the empirical measure over the $n_s$ observations $(S_j)_{j=1}^{n_s}$. For a function $f : \mathcal{V} \to \mathbb{R}$ and a distribution $P$ over $\mathcal{V}$, we will use the shorthand $Pf$ to denote the expectation of $f$ under $P$, $Pf = \mathbb{E}_{V\sim P}[f(V)]$. For example, this allows us to write
$$\theta = DM\ell = \mathbb{E}_{Z\sim D}\,\mathbb{E}_{S\sim M}\,\ell(Z, f_S), \quad \text{and} \quad \hat\theta = D_n M_n \ell = \frac{1}{n_x}\sum_{i=1}^{n_x} \frac{1}{n_s}\sum_{j=1}^{n_s} \ell(Z_i, f_{S_j}).$$
For the bootstrapped distributions, let D∗n denote the distribution over the bootstrap data samples (Z∗1 , Z ∗ 2 , . . . , Z ∗ nx) and M ∗ n denote the distribution over the bootstrapped seed samples, (S∗1 , S ∗ 2 , . . . , S ∗ ns), both conditional on the observed samples (Zi) nx i=1 and (Sj) ns j=1. Note that the empirical average over a bootstrapped sample 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Z∗i , fS∗j ) can be written as 1 nx nx∑ i=1 1 ns ns∑ j=1 AiBjℓ(Zi, fSj ), where Ai is the number of times Zi appears in the bootstrapped sample (Z ∗ k) nx k=1, and Bj is the number of times Sj appears in the bootstrapped sample (S ∗ k) ns k=1. With this in mind, we will abuse notation, and also denote D∗n as the distribution over the Ai and M ∗ n as the distribution over the Bj . Finally, we will use E∗ and Var∗ to denote the expectation and variance of random variables defined with respect to D∗n or M ∗ n, conditional on Dn and Mn. We will use P to denote the distribution P = D×M . Throughout, all assertions made with respect to random variables made without a note about their probability of occurrence hold P -almost surely. Proof. The challenge with applying existing theory to our method is that because the performance metric (ℓ(Zi, fSj ) nx i=1 over the nx observations for a given seed Sj all depend on the same Sj , they are not independent. Similarly for the performance on a given observation, over seeds. Therefore, we need to handle this non-iid structure in our proof for the multi-bootstrap. There are conceptually three steps to our proof that allow us to do just that. The first is to show that θ̂ has an asymptotically linear representation as √ n(θ̂ − θ) = √n(Dn −D)Mℓ+ √ n(Mn −M)Dℓ+ oP (1). (2) The second is to show that conditional on Dn and Mn the multi-bootstrapped statistic θ̂ ∗ ∆= D∗nM ∗ nℓ has an asymptotically linear representation as √ n(θ̂∗ − θ̂) = √n(D◦n −Dn)Mℓ+ √ n(M◦n −Mn)Dℓ+ oP∗(1), (3) where D◦n and M ◦ n are multiplier bootstrap samples coupled to the bootstrap D ∗ n and M ∗ n which we define formally in the beginning of Step 2. The third step is to use standard results for the multiplier bootstrap of the mean of iid data to show that the distributions of the above linearized statistics converge to the same limit. Because we have assumed that ℓ(Z, fS) < ∞, E[ℓ(Z, fS) | S] < ∞, and E[ℓ(Z, fS) | Z] < ∞, Fubini’s theorem allows us to switch the order of integration over Z and S as needed. We will assume that DMℓ(X,Y, fS) = 0. This is without loss of generality, because adding and subtracting √ nDMℓ to the bootstrap expression gives √ n(θ̂∗ − θ̂) = √n(D∗nM∗nℓ−DnMnℓ) = √ n(D∗nM ∗ nℓ−DMℓ+DMℓ−DnMnℓ) = √ n(D∗nM ∗ n(ℓ−DMℓ)−DnMn(ℓ−DMℓ)), so if we prove that the result holds with the mean zero assumption, it will imply that the result holds for ℓ with a nonzero mean. This theorem guarantees consistency of the Multi-Bootstrap estimates. One question that comes up is whether it is possible to get meaningful / tight rates of convergence for the approximation. Unfortunately, getting OP (1/n) convergence as found in many bootstrap methods (Van der Vaart, 2000) is difficult without the use of Edgeworth expansions, by which the Multi-Bootstrap is not welladapted to analysis. That said, many of the remainder terms already have variance of order O(1/n), or could easily be adapted to the same, suggesting an OP (1/ √ n) convergence. The main difficulty, however, is showing rates of convergence for the strong law on separately exchangeable arrays (see the proof of Lemmas 2, 4-5). 
Showing a weaker notion of convergence, such as in probability, may perhaps allow one to show that the remainder is OP (1/ √ n), however the adaptation of the aforementioned Lemmas is nontrivial. Step 1 Recalling that θ̂ ∆ = DnMnℓ and θ ∆ = DMℓ, we can expand √ n(θ̂ − θ) as follows, √ n(DnMnℓ−DMℓ) = √ n(DnMnℓ−DMnℓ+DMnℓ−DMℓ) = √ n((Dn −D)Mnℓ+D(Mn −M)ℓ) = √ n((Dn −D)Mnℓ+ (Dn −D)Mℓ− (Dn −D)Mℓ+D(Mn −M)ℓ) = √ n((Dn −D)Mℓ+ (Dn −D)(Mn −M)ℓ+D(Mn −M)ℓ) The following lemma shows that √ n(Dn −D)(Mn −M)ℓ is a lower order term. Lemma 1. Under the assumptions of Theorem 1, √ n(Dn −D)(Mn −M)ℓ = oP (1). Therefore, √ n(DnMnℓ−DMℓ) = 1√ 1− ps √ nx(Dn −D)Mℓ+ 1√ ps √ ns(Mn −M)Dℓ+ oP (1). Step 2 One of the challenges with working with the bootstrap sample D∗n and M ∗ n is that the induced per-sample weights {Ai}nxi=1 and {Bj}nsj=1 do not have independent components, because they each follow a multinomial distribution over nx items and ns items, respectively. However, they are close enough to independent that we can define a coupled set of random variables {A◦i }nxi=1 and {B◦j }nsj=1 that do have independent components, but behave similarly enough to {Ai} and {Bj} that using these weights has a negligible effect on distribution of the bootstrapped estimator, as described concretely below. First, we discuss the coupled multiplier bootstrap sample D◦n and M ◦ n. The creation of this sequence, called “Poissonization” is a standard technique for proving results about the empirical bootstrap that require independence of the bootstrap weights (van der Vaart et al., 1996). We describe this for D◦n as the idea is identical for M◦n. Because our goal is to couple this distribution to D ∗ n, we define it on the same sample space, and extend the distribution P ∗, expectation E∗ and variance Var∗ to be over D◦n and M ◦ n, conditionally on Dn and Mn, as with D ∗ n and M ∗ n. To construct the distribution D◦n, from the empirical distribution Dn and a bootstrap sample D ∗ n, start with the distribution D∗n and modify it as follows: We draw a Poisson random variable Nnx with mean nx. If Nnx > nx, then we sample Nnx −nx iid observations from Dn, with replacement, and add them to the bootstrap sample initialized with D∗n to produce the distribution D ◦ n. If Nnx < nx, we sample nx − Nnx observations from D∗n, without replacement, and remove them from the bootstrap sample to produce the distribution D◦n. If Nnx = nx, then D ◦ n = D ∗ n. Recalling that Ai is the number of times the i-th sample is included in D ∗ n, similarly define A ◦ i as the number of times the i-th sample is included in D◦n. Note that by the properties of the Poisson distribution, A◦i ∼ Poisson(1), and {A◦i }nxi=1 are independent. Note that the natural normalization for D◦n would be Nnx . However, it will be useful to maintain the normalization by nx, so abusing notation, for a function f(z), we will say that D◦nf = 1 nx ∑nx i=1 A ◦ i f(Zi). Define θ̂◦ as the following empirical estimator of θ under the distribution D◦n ×M◦n, θ̂◦ = D◦nM ◦ nℓ = 1 nx nx∑ i=1 1 ns ns∑ j=1 A◦iB ◦ j ℓ(Zi, fSj ). Lemma 2 shows that √ n(θ̂∗ − θ̂◦) = oP∗(1), and so √ n(θ̂∗ − θ) = √n(θ̂◦ − θ) + oP∗(1). Lemma 2. Under the assumptions of Theorem 1, and that DMℓ = 0, √ n(θ̂∗ − θ̂◦) = oP∗(1). With this, the expansion of √ n(θ̂◦ − θ̂) begins mutatis mutandis the same as in Step 1, to get that √ n(θ̂◦ − θ̂) = 1√ 1− ps √ nx(D ◦ n −Dn)Mnℓ+ √ n(D◦n −Dn)(M◦n −Mn)ℓ + 1√ ps √ ns(M ◦ n −Mn)Dnℓ. 
As with Step 1, we provide Lemma 3 showing that the remainder term √ n(D◦n −Dn)(M◦n −Mn)ℓ will be lower order. Lemma 3. Under the assumptions of Theorem 1, √ n(D◦n −Dn)(M◦n −Mn)ℓ = oP∗(1). Therefore, √ n(D◦nM ◦ nℓ−DnMnℓ) = 1√ 1− ps √ nx(D ◦ n −Dn)Mnℓ+ 1√ ps √ ns(M ◦ n −Mn)Dnℓ+ oP∗(1). Then, to write √ n(θ̂∗−θ̂) in terms of √ns(M◦n−Mn)Dℓ as wanted in Eq. (3), instead of √ ns(M ◦ n− Mn)Dnℓ, we must additionally show that the functional has enough continuity that the error term√ ns(M ◦ n −Mn)(Dn −D)ℓ is lower order. The following lemma shows exactly this. Lemma 4. Under the assumptions of Theorem 1, conditionally on the sequences Z1, Z2, . . . and S1, S2, . . . , (a) √ n(D◦n −Dn)(Mn −M)ℓ = oP∗(1), and (b) √ n(Dn −D)(M◦n −Mn)ℓ = oP∗(1). Altogether, these imply that √ n(D∗nM ∗ nℓ−DnMnℓ) = 1√ 1− ps √ nx(D ◦ n −Dn)Mℓ+ 1√ ps √ ns(M ◦ n −Mn)Dℓ+ oP∗(1). Step 3 Noting that Mℓ(·, fS) = ED×M [ℓ(·, fS) | Z = ·] is a real-valued random variable with finite variance (similarly for Dℓ(Z, ·)), and recalling that the nx samples used for Dn and ns samples for Mn satisfy n = nx/(1 − ps) and n = ns/ps, for 0 < ps < 1, the conventional central limit theorem shows that for some positive semi-definite matrix Σ ∈ R2×2, and G ∼ N (0,Σ), √ n ( (Dn −D)Mℓ (Mn −M)Dℓ ) = ( 1 1−ps √ nx(Dn −D)Mℓ 1 ps √ ns(Mn −M)Dℓ ) d→ G. Note that Dn and Mn are independent, so G is, in fact, a diagonal matrix. Additionally, the conditional multiplier CLT (van der Vaart et al., 1996, Lemma 2.9.5, pg. 181) implies that conditionally on Z1, Z2, . . . and S1, S2, . . . , √ n ( (D∗n −Dn)Mℓ (M∗n −Mn)Dℓ ) d→ G. Finally, applying the delta method (see Theorem 23.5 from Van der Vaart (2000)) along with the results from Steps 1 and 2 shows that the distributions of √ n(θ̂ − θ) and √n(θ̂∗ − θ̂) converge to N (0, σ2), where σ2 = Σ11/(1− ps) + Σ22/ps. B.1 PROOF OF LEMMA 1 Fix ǫ > 0. Note that E[(Dn −D)(Mn −M)ℓ] = 0, so by Chebyshev’s inequality, P ( |√n(Dn −D)(Mn −M)ℓ| > ǫ ) ≤ Var( √ n(Dn −D)(Mn −M)ℓ) ǫ2 . Therefore, it suffices to show that limn→∞ Var( √ n(Dn−D)(Mn−M)ℓ) = 0. To do so, we apply the law of total variance, conditioning on Dn, and bound the resulting expression by C/n. Var( √ n(Dn −D)(Mn −M)ℓ) = nE[Var((Dn −D)(Mn −M)ℓ | Dn)] + nVar(E[(Dn −D)(Mn −M)ℓ | Dn]) = nE[Var((Dn −D)(Mn −M)ℓ | Dn)] = nE[Var((Mn −M)(Dn −D)ℓ | Dn)] = E n n2s ns∑ j=1 Var((Dn −D)ℓ(·, fSj ) | Dn) = E [ n ns Var((Dn −D)ℓ(·, fS1) | Dn) ] = E 1 ps E 1 nx nx∑ i=1 ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1] 2 | {Zi}nxi=1 = E 1 ps 1 nx nx∑ i=1 ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1] 2 = E 1 psn2x nx∑ i=1 nx∑ k=1 (ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1])(ℓ(Zk, fS1)− E[ℓ(Zk, fS1) | S1]) = E 1 psn2x nx∑ i=1 (ℓ(Zi, fS1)− E[ℓ(Zi, fS1) | S1])2 = 1 ps(1− ps)n E [ (ℓ(Z1, fS1)− E[ℓ(Z1, fS1) | S1])2 ] ≤ C n → 0. B.2 PROOF OF LEMMA 2 First, note the following representation for θ̂∗ − θ̂◦: θ̂∗ − θ̂◦ = 1 nx nx∑ i=1 1 ns ns∑ j=1 AiBjℓ(Zi, fSj )− 1 nx nx∑ i=1 1 ns ns∑ j=1 A◦iB ◦ j ℓ(Zi, fSj ) = 1 ns ns∑ j=1 (Bj −B◦j ) nx nx∑ i=1 Aiℓ(Zi, fSj ) ︸ ︷︷ ︸ ∆ =I1 + 1 nx nx∑ i=1 (Ai −A◦i ) ns ns∑ j=1 B◦j ℓ(Zi, fSj ) ︸ ︷︷ ︸ ∆ =I2 . Let ǫ > 0. Noting that E∗[I1] = E∗[I2] = 0, applying Chebyshev’s inequality gives P ∗ (√ n|θ̂∗ − θ̂◦| > ǫ ) ≤ nVar ∗(θ̂∗ − θ̂◦) ǫ2 ≤ 2nVar ∗(I1) + Var ∗(I2) ǫ2 It suffices to show that nVar∗(I1) → 0 and nVar∗(I2) → 0. The arguments for each term are mutatis mutandis the same, and so we proceed by showing the proof for I2. By the law of total variance, Var∗(I2) = Var ∗(E∗[I2 | {Bj}nsj=1]) + E∗[Var∗(I2 | {Bj}nsj=1)]. 
Because E∗[Ai] = E∗[A◦i ] and {Bj}nsj=1 ⊥ Ai, A◦i , it follows that E∗[I2 | {Bj}nsj=1] = 0. Taking the remaining term and re-organizing the sums in I2, Var∗(I2) = E ∗ Var ∗ 1 nx nx∑ i=1 (Ai −A◦i ) 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) | {Bj}nsj=1 . (4) Next, we apply the law of total variance again, conditioning on Nnx = ∑ i A ◦ i . First, E ∗[I2 | Nnx , {Bj}nsj=1] = Nnx − nx nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ), and so Var∗ ( E ∗[I2 | Nnx , {Bj}nsj=1] | {Bj}nsj=1 ) = 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 Then, conditionally on Nnx (and {Bj}), I2 is the (centered) empirical average of |Nn − n| samples from a finite population of size n, rescaled by |Nn − n|/n. Therefore, applying Theorem 2.2 of Cochran (2007) gives the conditional variance as |Nnx − nx| n2x 1 nx − 1 nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 − nx nx − 1 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 ︸ ︷︷ ︸ ∆ =V 2 . To take the expectation over Nnx , notice that because E ∗[Nnx ] = nx, this is the mean absolute deviation (MAD) of Nnx . Using the expression for the MAD of a Poisson variable from Ramasubban (1958) gives E ∗|Nnx − nx| = 2nx nnxx exp(−nx) nx! , and using Stirling’s approximation, this is bounded by C √ nx, for some 0 < C < ∞. Combining this with the above term for the variance of the conditional expectation, we have Var∗ 1 nx nx∑ i=1 (Ai −A◦i ) 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) | {Bj}nsj=1 ≤ 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 Bjℓ(Zi, fSj ) 2 + 1 n1.5x V 2. (5) Noting that E∗[B2j ] = E ∗[BjBk] = 1, we get the following bound: Var∗(I2) ≤ 1 nx 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 + 1 n1.5x V̄ 2, where V̄ 2 = 1 nx − 1 nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 − nx nx − 1 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 . Because of the assumption that DMℓ = 0, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that lim n→∞ 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) = 0, almost surely. Therefore, the first term of (5) is o(1/n). Note that V̄ 2 is the empirical variance of the conditional expectation of ℓ(Zi, fSj ) given {Zi}ni=1. Therefore, the law of total variance shows that V̄ 2 ≤ 1 nx 1 ns nx∑ i=1 ns∑ j=1 ℓ2(Zi, fSj )− 1 nx 1 ns nx∑ i=1 ns∑ j=1 ℓ(Zi, fSj ) 2 . By the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4), both of the terms converge almost surely to DMℓ2 < ∞ and (DMℓ)2, respectively. and therefore, lim n→∞ nVar∗(Is) ≤ lim n→∞ n nx 1 nx nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj ) 2 + n n1.5x V̄ 2 = 0. B.3 PROOF OF LEMMA 3 As with Lemma 1, the main idea of the proof is to apply Chebyshev’s inequality, and show that the variance tends to zero. Indeed, choosing an arbitrary ǫ > 0, P ∗ ( |√n(D◦n −Dn)(M◦n −Mn)ℓ| ≥ ǫ ) ≤ Var ∗ (√n(D◦n −Dn)(M◦n −Mn)ℓ ) ǫ2 . Therefore, it suffices to show that the variance in the above display goes to zero. To do this, we start by re-writing the expression in terms of A◦i and B ◦ j , and then apply the law of total variance. Var∗ (√ n(D◦n −Dn)(M◦n −Mn)ℓ ) = nVar∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) = nVar∗ E∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 + nE∗ Var∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 . Because {B◦j }nsj=1 are independent of {A◦i }nxi=1, and have mean 1, the conditional expectation in the first term is 0 almost surely. 
Expanding out the second term, using that Var∗(B◦j ) = 1, and that the {B◦j }nsj=1 are uncorrelated, nE∗ Var∗ 1 nxns nx∑ i=1 ns∑ j=1 (A◦i − 1)(B◦j − 1)ℓ(Zi, fSj ) | {Ai}nxi=1 = nE∗ 1 n2s ns∑ j=1 Var∗ (B◦j − 1) 1 nx nx∑ i=1 (A◦i − 1)ℓ(Zi, fSj ) | {A◦i }nxi=1 = nE∗ 1 n2s ns∑ j=1 1 nx nx∑ i=1 (A◦i − 1)ℓ(Zi, fSj ) 2 = nE∗ 1 n2s ns∑ j=1 1 n2x nx∑ i=1 nx∑ k=1 (A◦i − 1)(A◦k − 1)ℓ(Zi, fSj )ℓ(Zk, fSj ) . Now, noting that Var∗(A◦i ) = 1, and that the {A◦i }nxi=1 are uncorrelated, this simplifies to nE∗ 1 n2s ns∑ j=1 1 n2x nx∑ i=1 (A◦i − 1)2ℓ2(Zi, fSj ) = n nsnx 1 ns ns∑ j=1 1 nx nx∑ i=1 ℓ2(Zi, fSj ). Because ED×M [ℓ2(Z, fS)] < ∞, the SLLN adapted to separately exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that this converges almost surely to 0. B.4 PROOF OF LEMMA 4 We prove (a) of the Lemma, as (b) follows from applying Fubini’s theorem and following mutatis mutandis the same argument. Without loss of generality, we will assume that ℓ(Zi, fSj ) ≥ 0. Because Var(ℓ(Zi, fSj )) < ∞, we can always decompose ℓ(·, ·) into a positive and negative part, and show that the result holds for each individually. Once again, we prove (a) by turning to Chebyshev’s inequality. Fix ǫ > 0, and observe that P ∗ ( |√n(D◦n −Dn)(Mn −M)ℓ| > ǫ ) ≤ Var ∗ (√n(D◦n −Dn)(Mn −M) ) ǫ2 , so it is sufficient to show that Var∗ (√ n(D◦n −Dn)(Mn −M) ) → 0. Writing the above in terms of A◦i , we have Var∗ (√ n(D◦n −Dn)(Mn −M) ) = Var∗ √ n nx nx∑ i=1 (A◦i − 1) 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] = n n2x nx∑ i=1 Var∗ (A◦i − 1) 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] 2 = n n2x nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Zi] 2 . Now, we want to show that the last display converges almost surely to 0. Notice that each term within the outer sum will obviously converge due to the SLLN. Showing that the outer sum also converges almost surely is technically difficult, but conceptually follows the same argument used to prove the SLLN (specifically, we follow the one done elegantly by Etemadi (1981); Luzia (2018) provides a more detailed account of this proof technique that is helpful for developing a deeper understanding). We show the following version of almost sure convergence: that for any ǫ > 0, P n n2x nx∑ i=1 1 ns ns∑ j=1 ℓ(Zi, fSj )− E[ℓ(Zi, fSj ) | Sj ] 2 > ǫ i.o. = 0, where i.o. stands for infinitely often. Define the shorthand Lij = ℓ(Zi, fSj ) and let L̄ij = Lij1{Lij < ij} be a truncated version of Lij . The proof of Theorem 2 of Etemadi (1981) implies that P (L̄ij 6= Lij i.o.) = 0, because the assumption Var(Lij) < ∞ implies the assumption used in Etemadi (1981), and independence of {Lij}i,j is not needed for this result. Therefore, 1 nx nx∑ i=1 1 ns ns∑ j=1 Lij − L̄ij 2 a.s.→ 0, and 1 nx nx∑ i=1 1 ns ns∑ j=1 E[Lij | Zi]− E[L̄ij | Zi] 2 a.s.→ 0. Together, these imply that if we can prove that the truncated sum converges, ie., 1 nx n∑ i=1 1 ns ns∑ j=1 L̄ij − E[L̄ij | Zi] 2 a.s.→ 0, (6) this is sufficient to show that the un-truncated version converges almost surely. To prove (6), we show two things: first, that there is a subsequence kn such that (6) holds when restricted to the subsequence, and then we show that the sequence is a Cauchy sequence, which together imply the result. Let α > 1 and let kn = α n. For convenience, denote knx as the number of data samples and kns as the number of seed samples when knx + kns = kn total samples are drawn. We will ignore integer rounding issues, and assume knx = (1− ps)αn, and kns = psαn. 
The following lemma shows that the subsequence defined by kn converges almost surely. Lemma 5. Let α > 1, and kn = α n. Under the assumptions of Theorem 1 and that Lij ≥ 0 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ i.o. = 0. We now must show that the sequence in (6) is a Cauchy sequence. Note that the SLLN implies that 1 nx nx∑ i=1 E[L̄ij | Zi]2 a.s.→ E[E[L̄ij | Zi]2], and the LLN for exchangeable arrays (Rieders, 1991, Theorem 1.4) implies that 1 nx nx∑ i=1 1 ns ns∑ j=1 L̄ijE[L̄ij | Zi] a.s.→ E[E[L̄ij | Zi]2]. Therefore, 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij 2 a.s.→ E[E[L̄ij | Zi]2]. (7) Notice that because L̄ij ≥ 0, the sum ∑nx i=1 (∑ns j=1 L̄ij )2 is monotone increasing in ns and nx. With this in mind, for any m > 0, let n be such that kn ≤ m < kn+1. Then, by the montonicity, ( kn kn+1 1 kn )3 knx∑ i=1 kns∑ j=1 L̄ij 2 ≤ ∑(1−ps)m i=1 (∑psm j=1 L̄ij )2 p2s(1− ps)m3 ≤ ( kn+1 kn 1 kn+1 )3 k(n+1)x∑ i=1 k(n+1)s∑ j=1 L̄ij 2 . From (7), the left hand side converges to 1α3E[E[L̄ij | Zi]2], and the right hand side converges to α3E[E[L̄ij | Zi]2]. Because α is arbitrary, this proves that the sequence ∑(1−ps)m i=1 (∑psm j=1 L̄ij )2 p2s(1− ps)m3 m=1,... is almost surely Cauchy. Together with Lemma 5, this implies (6). B.5 PROOF OF LEMMA 5 We will show that ∞∑ n=1 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ < ∞. This, along with the first Borel-Cantelli lemma (Émile Borel, 1909; Cantelli, 1917) implies the result. Applying Markov’s inequality and using the fact that L̄ij and L̄ih are independent conditional on Zi gives ∞∑ n=1 P 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 > ǫ ≤ 1 ǫ ∞∑ n=1 E 1 knxk2ns knx∑ i=1 kns∑ j=1 L̄ij − E[L̄ij | Zi] 2 = 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E [( L̄ij − E[L̄ij | Zi] )2] ≤ 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E[L̄2ij ], where the last line follows from the law of total variance. To simplify the remaining algebra, we will use a . b to denote that there is some constant 0 < c < ∞ such that a < cb. Continuing, we have 1 ǫ ∞∑ n=1 1 knxk2ns knx∑ i=1 kns∑ j=1 E[L̄2ij ] . 1 ǫ ∞∑ n=1 knx∑ i=1 kns∑ j=1 1 k3n E[L̄2ij ] = 1 ǫ ∞∑ i=1 ∞∑ j=1 E[L̄2ij ] ∞∑ n=n(i,j) 1 α3n . 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i/(1− ps), j/ps}3 E[L̄2ij ] . 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] = 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] where n(i, j) is shorthand for n(i, j) = logα max{i/(1− ps), j/ps} is the first n such that knx ≥ i and kns ≥ j. Now, define Q as the distribution of L11 induced by Z1 and S1. Additionally, split the inner sum into two pieces, one for when j < i and so max{i, j} = i and one for when j ≥ i and so max{i, j} = j. 1 ǫ ∞∑ i=1 ∞∑ j=1 1 max{i, j}3E[L̄ 2 ij ] = 1 ǫ ∞∑ i=1 i∑ j=1 1 i3 ∫ ij 0 x2 dQ(x) + ∞∑ j=i ∫ ij 0 x2 dQ(x) = 1 ǫ ∞∑ i=1 i−1∑ j=1 1 i3 ij∑ k=1 ∫ k k−1 x2 dQ(x) + ∞∑ j=i ij∑ k=1 ∫ k k−1 x2 dQ(x) switching the order of the indices over j and k, using that 1 ≤ k ≤ ij and the constraints on j relative to i, 1 ǫ ∞∑ i=1 i−1∑ j=1 1 i3 ij∑ k=1 ∫ k k−1 x2 dQ(x) + ∞∑ j=i ij∑ k=1 ∫ k k−1 x2 dQ(x) . 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 ∞∑ j=max{i,k/i} 1 j3 ∫ k k−1 x2 dQ(x) . 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 1 max{i, k/i}2 ∫ k k−1 x2 dQ(x) . Switching the order of summation over i and k, and separating out the terms where k/i < i and k/i ≥ i, 1 ǫ ∞∑ i=1 i2−1∑ k=1 (i− k/i) i3 ∫ k k−1 x2 dQ(x) + ∞∑ k=1 1 max{i, k/i}2 ∫ k k−1 x2 dQ(x) = 1 ǫ ∞∑ k=1 (∫ k k−1 x2 dQ(x) ) √ k+1∑ i=1 (i− k/i) i3 + ∞∑ i= √ k 1 i2 + √ k∑ i=1 i2 k2 . 1 ǫ ∞∑ k=1 1√ k (∫ k k−1 x2 dQ(x) ) . 1 ǫ ∞∑ k=1 (∫ k k−1 x2√ x dQ(x) ) . 
$$\frac{1}{\epsilon}\int_0^\infty x^{1.5}\, dQ(x) < \infty.$$
C INSTANCE-LEVEL AGREEMENT OF MULTIBERTS ON GLUE
We present additional performance experiments to complement Section 2. Table 3 shows per-example agreement rates on GLUE predictions between pairs of models pre-trained with a single seed ("same") and pairs pre-trained with different seeds ("diff"); in all cases, models are fine-tuned with different seeds. With the exception of RTE, we see high agreement (over 90%) on test examples drawn from the same distribution as the training data, and note that agreement is 1-2% lower on average for the predictions of models pre-trained on different seeds compared to models pre-trained on the same seed. However, this discrepancy becomes significantly more pronounced if we look at out-of-domain "challenge sets" which feature a different data distribution from the training set. For example, if we evaluate our MNLI models on the anti-stereotypical examples from HANS (McCoy et al., 2019), we see agreement drop from 88% to 82% when comparing across pre-training seeds. Figure 4 shows how this can affect overall accuracy, which can vary over a range of nearly 20% depending on the pre-training seed. Such results underscore the need to evaluate multiple pre-training runs, especially when evaluating a model's ability to generalize outside of its training distribution.
D CROSS-SEED VARIATION
Figure 5 shows the variation in Winogender bias correlation (§4) across the MultiBERTs pre-training seeds. Each box shows the distribution over five runs, and some of the variation between seeds may simply be due to variation in training the coreference model. If we average the scores for each seed and then look at the distribution of this per-seed average score, we get 0.45±0.11. What if pretraining didn't matter? If we ignore the seed and randomly sample sets of five runs from this set with replacement, we get scores of 0.45±0.05, telling us that most of the variance can only be explained by differences between the pretraining checkpoints.
We can confirm this by taking a subset of our pretraining seeds and training an additional 25 randomly-initialized coreference models. Figure 6 shows the result: seeds 0, 2, 3, and 4 appear closer together than in Figure 5, but seed 1 clearly has different properties with respect to our Winogender metric. We can confirm this with an unpaired Multi-Bootstrap analysis, taking seed 0 as base and seed 1 as experiment: we observe a significant effect of δ = 0.203 (p = 0.009), as shown in Table 4.
E CASE STUDY: MULTIBERTS VS. ORIGINAL BERT
As an additional example of application, we discuss challenges in reproducing the performance of the original BERT checkpoint, using the Multi-Bootstrap procedure. The original bert-base-uncased checkpoint appears to be an outlier when viewed against the distribution of scores obtained using the MultiBERTs reproductions. Specifically, in reproducing the training recipe of Devlin et al. (2019), we found it difficult to simultaneously match performance on all tasks using a single set of hyperparameters. Devlin et al. (2019) reports training for 1M steps. However, as shown in Figures 1 and 2, models pre-trained for 1M steps matched the original checkpoint on SQuAD but lagged behind on GLUE tasks; if pre-training continues to 2M steps, GLUE performance matches the original checkpoint but SQuAD performance is significantly higher. The above observations suggest two separate but related hypotheses (below) about the BERT pretraining procedure. 1.
On most tasks, running BERT pre-training for 2M steps produces better models than 1M steps. 2. The MultiBERTs training procedure outperforms the original BERT procedure on SQuAD. Let us use the Multi-Bootstrap
1. What is the focus of the paper, and what are the contributions of the proposed approach? 2. How does the paper demonstrate the utility of the introduced resources? 3. What are the strengths of the paper regarding its methodology, related work review, and description of the MultiBERTs release? 4. Are there any concerns or suggestions for improving the paper, such as fixing typos or further analyzing the impact of the approach?
Summary Of The Paper Review
Summary Of The Paper The paper presents MultiBERTs, a set of 25 model checkpoints to support robust research on BERT, and the Multi-Bootstrap, a non-parametric statistical method to estimate the uncertainty of model comparisons across multiple training seeds. It demonstrates the utility of these resources by showing how to quantify the effect of an intervention to reduce a type of gender bias in coreference systems built on BERT. Review The paper presents an interesting approach, a novel contribution that might have some applications. From the methodological point of view, it is well-written; there is a good review of related works, a description of the MultiBERTs release, and an application to reduce a type of gender bias in coreference systems built on BERT. There are also comprehensive supplementary materials with proofs of theorems and lemmas. It seems to be a novel approach; it is hard to assess its significance, but the presented application suggests the impact might be important. If the article is accepted, there are some typos that should be fixed before publication:
p. 2: seeks to to analyze -> seeks to analyze
p. 2: the uncertainty the test sample -> the uncertainty of the test sample
p. 6: it looks like in one case, to mark a distribution, D is used, while in another case it is P
p. 8: statiatically -> statistically
ICLR
Title
Exploring semantic information in disease: Simple Data Augmentation Techniques for Chinese Disease Normalization
Abstract
Disease is a core concept in the medical field, and the task of normalizing disease names is the basis of all disease-related tasks. However, due to the multi-axis and multi-grain nature of disease names, incorrect information is often injected and harms performance when general text data augmentation techniques are used. To address the above problem, we propose a set of data augmentation techniques that work together as an augmented training task for disease normalization. Our data augmentation methods are based on both the clinical disease corpus and the standard disease corpus derived from ICD-10 coding. Extensive experiments are conducted to show the effectiveness of our proposed methods. The results demonstrate that our methods can yield up to a 3% performance gain compared to non-augmented counterparts, and they work even better on smaller datasets.
1 Introduction
Disease is a central concept in medical text processing problems. One of the most important tasks, i.e. disease normalization, uses diseases as both input and output, matching the diagnosis terms used in clinical documents to standard names in ICD coding. The disease normalization task mainly faces the following three challenges. First, different writing styles: the writing styles of disease names can be diverse, as different doctors have different writing habits, so a single disease might appear under thousands of name variants. Second, data scarcity: some diseases may not be covered in the training set, which often leads to few-shot or zero-shot scenarios. For example, in the Chinese disease normalization dataset CHIP-CDN, there are 40472 diseases to classify, but data for only 3505 diseases (i.e., less than 10% of all diseases) are provided in the training set. Figure 1 illustrates the data scarcity problem in the CHIP-CDN dataset. Third, semantic density: disease names are usually short, which makes every character carry substantial semantic information. The meanings of diseases can be very different from each other even if they share many common characters, and a single change in characters can result in a dramatic change in semantic meaning. For instance, "髂总动脉夹层 (Common iliac artery dissection)" and "颈总动脉夹层 (Common carotid artery dissection)" differ in only one character, but the anatomical locations of these diseases are far apart, one in the lower half of the body and the other in the upper half.
Among all the challenges we discussed, data scarcity is the biggest one, since the other problems can usually be alleviated by providing larger datasets for models to learn from. A common way to address the data scarcity problem is through data augmentation. There are numerous data augmentation methods for general corpora, such as synonym replacement or back translation. Wei & Zou (2019) have shown that simple text data augmentation methods can be effective for text classification problems. However, because of the unique structure of disease names (i.e., semantic density), general text data augmentation methods do not work well on them and sometimes even hurt the overall performance. For example, if random deletion (Wei & Zou, 2019) is performed on the disease "阻塞性睡眠呼吸暂停 (Obstructive Sleep Apnoea)" and results in "阻塞性睡眠 (Obstructive Sleep)", the meaning of the disease name changes dramatically and it effectively becomes another disease.
Admittedly, general data augmentation methods may be able to address the challenge of different writing styles, as performing random operations on texts can be seen as a way to emulate different writing behaviors. However, for the reasons above, general data augmentation methods tend to hurt performance, which is demonstrated in our experiments. Therefore, designing data augmentation methods specific to disease corpora is necessary. To bridge this gap, we propose a set of disease-oriented data augmentation methods to address this problem.
As with other disease-related tasks, disease normalization can be thought of as a text matching process from clinical names to standard names in ICD coding. Therefore, the key to this task is for the model to learn an encoding of each disease that captures enough information to judge disease similarity. For instance, the model needs to tell that "左肾发育不全 (Left renal agenesis)" and "先天性肾发育不全 (Congenital renal agenesis)" are the same disease while "髂总动脉夹层 (Common iliac artery dissection)" and "颈总动脉夹层 (Common carotid artery dissection)" are not, even though both pairs share many common characters. Our methods are based on the following two assumptions.
First, disease names have the property of structural invariance. A disease name consists of several different types of key elements, such as location, clinical manifestations, etiology, pathology, etc. In a pair of a clinical disease and a standard ICD disease, these elements correspond in most cases. Therefore, we can replace a specific element in the clinical disease and the standard ICD disease at the same time to generate new pairs, and the matching relationship of the newly generated clinical-standard pair is still maintained. We screen the generated standard ICD diseases to ensure that they belong to the correct label and that the pairs are valid. Note that replacing components may produce a new clinical disease name that turns out to be fake (i.e., the disease does not actually exist), but the key point is to make models learn the necessary semantic associations within disease names.
Second, labels in the disease normalization task have transitivity properties. Specifically, a more specific description of an object can be subsumed under a larger group with a coarser description, e.g., a yellow chair is also a chair. In the ICD coding system, there are likewise clear granularities of diseases. Therefore, we can treat fine-grained diseases as their coarse-grained upper diseases by assigning them parent labels.
Normally, a data augmentation method generates new data and trains on them along with the existing data, without altering the training paradigm. However, the disease normalization task assigns each disease a unique label, while our methods augment the labels. Therefore, if the traditional training paradigm were still applied to our augmentation methods, the same input disease in the dataset might receive different labels, which would make the model difficult to train due to label confusion. To overcome this problem, we treat the data augmentation operation as a pre-training task (we call it augmented training) prior to the original task, so that the model can first learn the necessary semantic information within diseases and then leverage that information when fine-tuning on the actual normalization dataset.
Additionally, both the unnormalized disease names from the task and the standard ICD names of the diseases can be used as inputs in the data augmentation process. A unique advantage of using standard ICD names to perform data augmentation as a pre-training task is that the model can get the whole picture of the disease-related information in ICD coding, which covers all classes of diseases, even before the actual training of the downstream task. Therefore, with all this information injected, the model performs much better on smaller datasets where many class labels never appear in the training set.
To the best of our knowledge, we are the first to explore the semantic components and information within disease names. We believe that research on disease name augmentation has high research value and can benefit various downstream tasks. To summarize our contributions:
• We propose a set of data augmentation methods for the Chinese disease normalization task.
• Experiments validate that general data augmentation methods can even impair performance on the disease normalization task, whereas our method yields clear performance gains across various baseline models.
• We also analyze the reasons why the proposed method is effective.
2 Background
ICD coding. ICD, the acronym of the International Classification of Diseases, is an internationally unified classification of diseases developed by the World Health Organization, and ICD-10 is the 10th version of ICD coding, which is used in our work. Each code is a combination of letters and numbers, which classifies diseases according to their etiology, pathology, clinical manifestations, and anatomical locations, so that they form a hierarchical coding structure. ICD also adopts a multi-grain fashion where coarse-grained diseases are followed by fine-grained diseases.
Disease normalization task. In clinical practice, doctors fill in the name of a disease according to clinical diagnosis standards along with their own writing habits, which can produce hundreds of versions of a single disease name. The disease normalization task is to match disease names written in different styles to a single standard name provided by ICD coding. After the disease normalization process, researchers can perform further operations on the normalized names to realize all kinds of functions used in smart healthcare applications. The task can be formalized as the mapping X -> Y, where X represents the clinical disease names and Y represents the standard ICD names.
NER. NER stands for Named Entity Recognition, which is a common task in Natural Language Processing. It aims to identify entities that have practical value, along with their locations, from unstructured texts. The classes of these entities may include persons, organizations, locations, etc. In this work, we use an NER tool that we trained ourselves to identify the elements in disease names in order to perform data augmentation. We argue that any NER tool that can identify elements in disease names would suffice, as our work mainly focuses on the data augmentation methods.
3 Related Work
In this section, we first introduce related work on data augmentation, and then medical data-driven research that is similar to ours.
3.1 Data Augmentation
Data augmentation is a technique for synthesizing new data based on existing data as a way to expand the amount of training data.
It is often used when the amount of data is insufficient, and it can also act as a regularizer to prevent the model from overfitting the training set. Unlike images, where it is relatively easy to augment data while keeping the semantic information intact, data augmentation for text is more difficult due to its unstructured form (Ng et al., 2020). Many works focus on augmentations directly on the input: Wei & Zou (2019) propose four simple augmentation methods based on character-level noise injection, namely replacement, insertion, swap, and deletion. Their methods are straightforward and effective, but the augmented results may contain unwanted noise because they do not follow grammar rules. Back translation augments data by translating the original text into a second language and then translating it back. This method preserves the semantic meaning of the original text well, but the augmented results lack diversity and are sometimes restricted by the translation tool. In order to make the augmented data more realistic, Kim et al. (2022) leverage lexicalized probabilistic context-free grammars to capture the intricate compositional structure of natural language and then perform word replacements. This method yields good results, but grammar-based methods for general text are difficult to generalize to specialized areas such as medicine. There are also methods that leverage pre-trained language models to perform data augmentation. Ng et al. (2020) use the MLM objective in BERT (Devlin et al., 2018) to mask out some words and then regenerate them. Wu et al. (2019) also use the MLM task, as well as changing the segment ids to class labels. Kumar et al. (2020) compare three kinds of data augmentation methods using conditional pre-trained models, namely auto-encoder, auto-regressive, and seq2seq. A problem with these methods is that the semantic meaning of the original sentence may change after several MLM replacements. Semi-supervised learning can also be a way to perform data augmentation by leveraging the vast amount of unlabeled data. Berthelot et al. (2019) use MixUp to guess the low-entropy labels of the augmented data and then mix the labeled and unlabeled data to derive a loss term, and Xie et al. (2020) perform data augmentation on unlabeled data for consistency training. However, in this work we only focus on augmenting the data itself rather than on semi-supervised learning objectives.
3.2 Data approaches on medical data
While most research focuses on the effect of data augmentation on general text data, there are also works that explore the possibility of data augmentation on medical text data. In this section, we mainly introduce data augmentation on medical text data and other related research. Some works focus on synonym replacement of medical terms. Falis et al. (2022) and Abdollahi et al. (2021) leverage the Unified Medical Language System (UMLS) to find medical synonyms and perform replacements after certain medical terms are identified in classification texts. Focusing on the ICD-coding task, Falis et al. (2022) also replace both the medical terms in raw texts and the classification label to obtain new training data. While their works mainly focus on replacing the whole medical term, we investigate the possibility of replacing the components of medical terms by exploring the semantic structures within them. Additionally, Ansari et al.
(2021) investigate the performance of EDA, conditional pre-trained language models, and back translation for data augmentation on social media texts for mental health classification. Wang et al. (2020a) propose Segment Reordering as a data augmentation technique that keeps the medical semantic meaning intact. Wang et al. (2020b) use pre-trained language models fine-tuned on General Semantic Textual Similarity (STS-G) data to generate pseudo-labels on medical STS data, and then perform iterative training.
4 Methods
In this section, we introduce the details of our proposed data augmentation methods and the overall pipeline. Since the point of data augmentation is to inject extra knowledge into the model, the key is to explore the components of and relations between diseases so that the model can gain a broad sense of their internal structure. Therefore, we leverage the multi-axis and multi-grain nature of diseases to design all of the data augmentation methods.
First, disease names are composed of several elements, which include but are not limited to etiology, pathology, clinical manifestations, anatomical location, chronicity, degree type, etc. For ease of expression, we merge and select from these elements three main categories: disease center, anatomical location, and disease quality. This reflects the multi-axis nature of diseases.
• Disease Center: The disease center, which may include etiology and pathology, is the minimal word that describes the nature of a disease. It defines the main category of a disease, such as "disorders" in "Other disorders of the eye with mcc".
• Anatomical Location: An anatomical location is a part of the human body that has an actual meaning in anatomy. It indicates which part of the human body is ill.
• Disease Quality: The quality of a disease, which indicates its subtype, such as "Drug-induced" in "Drug-induced peripheral neuropathy".
All kinds of disease names can be composed from these three types of axis-words.
Second, a disease can be described at multiple granularities. An upper disease is a coarse-grained disease and a lower disease is a fine-grained disease. The ICD coding system contains many upper-lower disease pairs by assigning them codes of different lengths. For example, in "ICD-10 Beijing Clinical Version 601", the disease name of code "A18.2" is "外周结核性淋巴结炎 (Peripheral Tuberculous Lymphadenitis)" and that of "A18.201" is "腹股沟淋巴结结核 (Inguinal lymph node tuberculosis)". "Peripheral Tuberculous Lymphadenitis" is a coarsely-defined disease because it does not specify a single anatomical location. Additionally, a coarsely-defined disease can contain multiple fine-grained diseases in ICD coding.
Intuitively, although two diseases can only be called the same if all of their components are the same, it is still necessary for the model to learn which diseases are more similar than others. Therefore, we define the following data augmentation methods.
4.1 Data Augmentation
We perform data augmentation by assigning pseudo-labels to diseases to describe their relationships so that they form new pairs of diseases, and we use those pairs to perform augmented training of the disease normalization task. We divide our methods into two main categories: Axis-word Replacement and Multi-grain Aggregation. We call our proposed disease name data augmentation method DDA. Figure 2 illustrates the overall pipeline of our methods.
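As a minimal sketch of the structured view of a disease name that the methods below operate on (this is our illustration, not code from the paper: the field names, the composition order, the code-prefix slicing, and the example values are assumptions; in the actual pipeline the axis-words come from the authors' NER tool and the generated standard names are screened against the ICD list), consider:

    from dataclasses import dataclass

    @dataclass
    class DiseaseName:
        location: str       # anatomical location, e.g. "左肾" (left kidney)
        center: str         # disease center, e.g. "发育不全" (agenesis)
        quality: str = ""   # disease quality / subtype, may be empty

        def text(self) -> str:
            # Chinese disease names are often written roughly as quality + location + center.
            return f"{self.quality}{self.location}{self.center}"

    def replace_axis(name: DiseaseName, axis: str, new_value: str) -> DiseaseName:
        """Return a copy of the disease name with one axis-word swapped out."""
        fields = {"location": name.location, "center": name.center, "quality": name.quality}
        fields[axis] = new_value
        return DiseaseName(**fields)

    def parent_label(icd_code: str) -> str:
        # Illustration of the multi-grain structure: a 6-digit code maps to its
        # 4-digit parent, e.g. "A18.201" -> "A18.2" (assumed code format).
        return icd_code[:5]

    # Replacing the same axis-word in a clinical/standard pair in parallel keeps the
    # pair aligned, which is the idea behind the axis-word replacement operations below.
    clinical = DiseaseName(location="左肾", center="发育不全")                  # clinical name
    standard = DiseaseName(location="肾", center="发育不全", quality="先天性")  # ICD standard name
    new_pair = (replace_axis(clinical, "location", "肝"),
                replace_axis(standard, "location", "肝"))

In practice, the replacement value would be drawn from another ICD disease that shares the remaining axis-words, so that the new standard name still exists in the ICD list.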
Axis-word Replacement (AR): We assume that disease names have the property of structural invariance: a name derived by replacing an axis-word in a disease with another axis-word of the same type still makes sense. Since the axis-words of an unnormalized-standard disease pair in the disease normalization task usually correspond to each other, replacing the corresponding axis-word in both the clinical name and the standard name of a pair at the same time ensures that the newly-generated pair still matches. To locate all axis-words in a disease, we leverage a Named Entity Recognition (NER) tool trained by ourselves.1 The entity types include but are not limited to disease center, anatomical location, and disease quality. We note that the NER tool is used only to locate axis-words, and it can be replaced by any module that achieves the same function. We leverage both the ICD coding and the disease normalization training set to perform axis-word replacement. The detailed descriptions of each category of axis-word replacement are as follows:
• AR1: AR1 is illustrated in the top left corner of Figure 2. First, select a pair of diseases (disease A and disease B) that share one or more axes (part 1 in the figure) but differ in another axis (part 2 in the figure). Then, replace part 2 in disease A with part 2 of disease B. (Note: disease A can be chosen from any source, but disease B can only be chosen from the standard ICD coding list, as it serves as the label of a disease normalization pair.)
– AR1-position: Perform AR1 by fixing the disease center and replacing the anatomical location.
– AR1-center: Perform AR1 by fixing the anatomical location and replacing the disease center.
– AR1-quality: Perform AR1 by fixing both the disease center and the anatomical location and replacing the disease quality.
• AR2: AR2 is illustrated in the top right corner of Figure 2. First, select an unnormalized-standard disease pair from the disease normalization training set. Let the unnormalized disease be disease A and the standard disease be disease B. Then, find a disease C from the ICD coding list that shares one or more axes (part 1) with the pair but differs in another axis (part 2). Finally, replace part 2 in disease A with part 2 of disease C, so that the modified disease A and disease C form a new disease normalization pair.
– AR2-position: Perform AR2 by fixing the disease center and replacing the anatomical location.
– AR2-center: Perform AR2 by fixing the anatomical location and replacing the disease center.
– AR2-quality: Perform AR2 by fixing both the disease center and the anatomical location and replacing the disease quality.
Multi-Grain Aggregation (MGA): We assume that labels in the disease normalization task have transitivity properties. Specifically, a more specific description of an object can be subsumed under a larger group with a coarser description. In the ICD coding system, there are clear granularities of diseases. The maximum length of code that can be shared between hospitals is 6, and the multi-grain structure contains 3-digit, 4-digit, and 6-digit codes. We observe that the semantic meaning of diseases that share the first 3-digit code but differ in the 4th digit can be quite different, while the meanings are much more similar if the diseases share the first 4-digit code.
1We will open-source the code of our experiments along with the NER tool for disease names on GitHub.
Based on the observation above, we implement MGA augmentation using the following methods.
• MGA-code: We leverage the multi-grain nature of ICD coding by assigning each 6-digit disease the label of its corresponding 4-digit disease. We call the method "aggregation" because a 4-digit disease can normally be matched to several 6-digit diseases, so the model can learn which diseases are similar. MGA-code is illustrated in the bottom left of Figure 2.
– MGA-code1: The 6-digit diseases are directly derived from the ICD coding list.
– MGA-code2: The 6-digit diseases are derived from the diseases in the CHIP-CDN training set whose labels are 6-digit ICD diseases.
• MGA-position: Apart from the ICD coding, anatomical locations also follow a hierarchical structure, where several smaller positions can be grouped together to form a larger position. Thus, we search for diseases in ICD coding that share the same disease center and whose positions stand in an upper-lower relation, and we group the classification labels of the lower-position diseases into their upper-position diseases. MGA-position is illustrated in the bottom right of Figure 2. (Note: the upper-position diseases must come from the standard ICD coding list.)
– MGA-position1: The lower-position diseases are directly derived from the ICD coding list.
– MGA-position2: The lower-position diseases are derived from the diseases in the CHIP-CDN training set.
(Note: In the human body, we call one location the upper position of another if it covers a larger area. In order to find the upper or lower positions of a given position, we construct a position tree document in which the anatomical positions of the human body are organized into a tree data structure. We use this position tree to recognize the upper-lower relations above. The same goal can be achieved with other knowledge bases of human anatomy.)
4.2 Training Process
• Train the disease normalization task on the augmented data.
• Fine-tune on the original disease normalization dataset.
5 Experiments
5.1 Dataset
We evaluate the effectiveness of our data augmentation methods on a Chinese disease normalization dataset called CHIP-CDN. CHIP-CDN originates from the CHIP-2019 competition and is included in the Chinese Biomedical Language Understanding Evaluation benchmark CBLUE (Zhang et al., 2021). The dataset contains 6000 unnormalized-standard disease pairs in the training set, 1000 pairs in the dev set, and 2000 pairs in the test set.
5.2 Experimental Setup
We evaluate our methods on three baselines: BILSTM (Sak et al., 2014), BERT-base (Devlin et al., 2018), and CDN-Baseline (from CBLUE; Zhang et al., 2021). For BILSTM, we use two BILSTM layers followed by an MLP layer to perform classification. For BERT-based models, we use the CLS vector to perform classification. For CDN-Baseline, we use the original model provided by its git repository,2 which follows a "recall-match" two-step training approach based on pre-trained language models. The baseline models are chosen to demonstrate the effectiveness of our method under different types of models and training settings. Specifically, we verify the effectiveness of DDA on a train-from-scratch model with BILSTM, on a model with pre-trained knowledge with BERT-base, and on a complex pipeline with CDN-Baseline. For the BILSTM model and the BERT-base model, we use accuracy to judge model performance.
2https://github.com/CBLUEbenchmark/CBLUE
In our evaluation, we treat disease normalization as a multi-class classification task rather than a multi-label classification task, even though a few data samples match a single unnormalized disease to several standard diseases. Hence, if an unnormalized disease is matched to several standard diseases, the sample is considered correctly predicted as long as one of the standard diseases is correctly predicted (a minimal code sketch of this lenient accuracy is given below). We design the experiments this way to keep the model as simple as possible and to illustrate the effectiveness of DDA more clearly. For CDN-Baseline, we stick to the settings in CBLUE Zhang et al. (2021): F1 is used as the evaluation metric, BERT-base is used as the baseline model, and the two-step training paradigm provided by CBLUE is used for better comparison. To ensure fairness, we use exactly the same parameter settings for the same model. In particular, for CDN-Baseline, we use almost the same parameter settings as CBLUE's git repository, including random seed numbers. Additionally, we use the dev set for performance comparison, since the labels of the CHIP-CDN test set are not released. For all experiments, we keep the best-performing result as the final score.
5.3 Results
The results are shown in Table 1, where "trainset" denotes the CHIP-CDN training set. From top to bottom, the table reports the performance of different models with different data augmentation methods. Among them, BT is the back-translation data augmentation method (we use the Youdao translation tool, https://fanyi.youdao.com/), and DDA is the semantic-based disease name data augmentation method proposed by us. The experimental results demonstrate that although EDA and back-translation increase diversity, they both hurt performance in some settings (especially EDA). DDA, in contrast, improves performance in every setting: it avoids the problem of EDA, and its effect is much better than BT. We observe that the performance improves for all models after applying DDA, showing the effectiveness of our proposed methods. For the BILSTM model, the relative performance improvement reaches 6%. We further observe that the performance gain on BILSTM is larger than on the BERT-based models and CDN-Baseline, probably because the knowledge in pre-trained language models already covers some of the similar information; nevertheless, our method further improves their performance, showing the effectiveness of DDA.
5.4 Ablation Study
In this section, we evaluate the effectiveness of each data augmentation method on BILSTM, BERT-base, and CDN-Baseline. As we propose two types of data augmentation methods, we evaluate them by removing the methods one by one and observing the resulting performance. The results are shown in Table 2. We observe that removing the data generated by either type of method leads to performance degradation, which demonstrates the effectiveness of each method we propose.
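As a reference for the evaluation protocol above, here is a minimal sketch of the lenient accuracy described in Section 5.2, where a sample counts as correct if the prediction matches any of its acceptable standard names. The data format used here is an illustrative assumption.

```python
# A minimal sketch of the lenient multi-class accuracy: a sample is correct if the
# predicted standard disease is any one of its gold standard diseases.
from typing import List, Set

def lenient_accuracy(predictions: List[str], gold_labels: List[Set[str]]) -> float:
    """predictions[i] is the predicted ICD name; gold_labels[i] is the set of
    acceptable standard names for the i-th unnormalized disease."""
    assert len(predictions) == len(gold_labels)
    correct = sum(1 for pred, gold in zip(predictions, gold_labels) if pred in gold)
    return correct / len(predictions)

# Toy usage:
preds = ["disease X", "disease Y"]
golds = [{"disease X", "disease Z"}, {"disease W"}]
print(lenient_accuracy(preds, golds))  # 0.5
```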
5.5 Smaller datasets experiments
We also evaluate the performance improvements on smaller datasets derived from CHIP-CDN, since the data scarcity problem is more severe on smaller datasets. We evaluate training sets whose sizes range from 5% to 100% of the CHIP-CDN training set size. For convenience, the augmented training in this setting only leverages standard disease names from the ICD coding list; no data from the disease normalization training set are used. We draw curves to compare training with and without our proposed methods, as shown in Figure 3. When the size of the training set increases, both curves steadily improve. We also notice that the performance gain is larger when the training set is smaller.
6 Conclusion
In this paper, we propose two main types of data augmentation methods for Chinese disease normalization tasks based on two hypotheses: disease names have the property of structural invariance, and labels in the disease normalization task have a transitivity property. Our data augmentation methods explore the semantic and relational information in diseases, and are adopted in an augmented-training fashion to avoid introducing misinformation. Experimental results show that our DDA method better addresses the three main challenges of the disease normalization task, namely description diversity, data scarcity, and semantic density. Compared to the EDA and back-translation methods, our method has obvious advantages on the disease normalization task. Furthermore, we show that our data augmentation methods work even better on smaller datasets.
A Appendix
A.1 Data augmentation result statistics
Table 3 reports the statistics of the data obtained with the MGA and AR data augmentation methods (we will open-source the augmentation code and the augmented results on GitHub).
A.2 Hyperparameter settings
Table 4 shows our hyperparameter settings. The way parameters are set differs across methods. For models with word2vec or random initialization, training on the augmented data can be regarded as a special pre-training task, and a large learning rate and a large number of iterations can be used to make the training sufficient. For models with a pre-trained backbone (i.e., BERT), a small learning rate and a small number of training iterations should be used to avoid catastrophic forgetting of the valuable information in the pre-trained models. For each baseline model, we first train on the augmented dataset (augmented training) and then fine-tune on the CHIP-CDN dataset. For the CDN-Baseline model, we use Chinese-bert-wwm as the pre-trained model, and the training method is provided by CBLUE. For the DDA method, we first train on the augmented dataset for 1 epoch with a learning rate of 5e-6 and then fine-tune on CHIP-CDN. The hyperparameter num_negative_sample is 3+3 and recall_k is 2 (the meaning of num_negative_sample and recall_k can be found in the CBLUE github repository).
A.3 Analysis
In Table 5, the first row shows the distribution of how many times each label appears in the training set. The other two rows show the label distributions of the two types of augmented data. The statistics show that DDA effectively covers labels that appear rarely (fewer than 3 times) or not at all in the training set. This helps address the data scarcity problem of disease normalization and the diversity of disease names, and it is the direct reason why DDA works. In contrast, EDA and BT can only increase the number of labels that already appear in the training set, which only addresses expression diversity; hence their abilities are limited.
A.4 Case Study
We give a real example of the augmentation results generated by different data augmentation methods.
We observe that the semantic meaning of the EDA-generated result changes dramatically due to semantic density: it alters the key information within the disease by losing the anatomical location. The results generated by BT are more realistic, but this method cannot generate samples beyond the original label scope and is restricted by the translation tool. Our proposed method DDA (the last two lines in the table) can not only increase the diversity of the input but also generate data whose labels never appear in the training set, so that sparse labels can be trained more thoroughly.
A.5 Future work
So far, we have only demonstrated the effectiveness of our DDA method; no experimental analysis has been done to explore the internal mechanisms of why it is so effective. Moreover, to further avoid the injection of misinformation, we believe that designing loss terms to select more valuable data from the augmentation results is a promising direction. We aim to study these topics in future work.
1. What is the focus and contribution of the paper regarding disease normalization?
2. What are the strengths and weaknesses of the proposed data augmentation framework?
3. Do you have any concerns about the formatting and presentation of the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the training procedure and evaluation metrics used in the study?
Summary Of The Paper
The authors have proposed a data augmentation framework which helps in improving the performance of existing disease normalization models. The framework is compared against two existing methodologies and in a smaller dataset setting as well.
Strengths And Weaknesses
Strengths:
• The authors have focused on an important problem of disease normalization.
• The proposed framework improves the performance of the existing normalization models.
Weaknesses [Questions to the authors]:
• The paper needs some proof-reading and formatting. Some common formatting issues are highlighted below:
– The starting double quotes in LaTeX need to be `` as opposed to ", since they appear inverted in the built PDF.
– All the tables would look a bit cleaner with top and bottom margins.
– Paragraph 1 of the introduction contains multiple sentences with grammatical errors as well.
• The authors have mentioned the work of Falis et al. [2022] and Abdollahi et al. [2021] for data augmentation using UMLS but have not compared against their methodologies for data augmentation.
• The information regarding the NER tool is completely missing from the paper. On which data was the NER model from Section 4.1 trained, and what was its performance? The performance of the framework heavily depends on this model as well.
• Is there a validity study to check that the axis-word replacement method does not create diseases which are not anatomically or medically correct or significant?
• Section 4.2 needs some more details regarding the training procedure.
• Were the experiments done using multiple seeds? If yes, could you please share the average and std for the results?
• Details in Section 5.2 are confusing. Paragraph 2 of the section mentions accuracy as the evaluation metric but also mentions that the F1-score was used for evaluation according to the previous work. Also, was macro or micro F1-score used for evaluation?
• Table 1 mentions that the result is dev-set accuracy. Are all the experiments performed on the dev set? Is the test-set accuracy or F1-score not reported?
Recommendation to the authors: I believe that the work is important and necessary, but I would also like to add that it would be much more suitable for a clinically oriented workshop or conference such as ML4H, MLHC, LOUHI, BioNLP, or ClinicalNLP.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and easy to follow but needs some more work refining the language and presentation. The work is novel but needs some more consideration of the validity of pre-training before the task fine-tuning. Currently, it would be quite hard to reproduce the results.
ICLR
Title
Exploring semantic information in disease: Simple Data Augmentation Techniques for Chinese Disease Normalization
Abstract
Disease is a core concept in the medical field, and the task of normalizing disease names is the basis of all disease-related tasks. However, due to the multi-axis and multi-grain nature of disease names, incorrect information is often injected and harms performance when general text data augmentation techniques are used. To address this problem, we propose a set of data augmentation techniques that work together as an augmented training task for disease normalization. Our data augmentation methods are based on both a clinical disease corpus and the standard disease corpus derived from ICD-10 coding. Extensive experiments are conducted to show the effectiveness of our proposed methods. The results demonstrate that our methods can yield up to a 3% performance gain compared to non-augmented counterparts, and they work even better on smaller datasets.
1 Introduction
Disease is a central concept in medical text processing. One of the most important tasks, disease normalization, uses diseases as both input and output to match the diagnosis terms used in clinical documents to standard names in ICD coding. The disease normalization task mainly faces the following three challenges. First, different writing styles: the writing styles of diseases can be diverse, and different doctors have different writing habits, so a single disease might result in thousands of name variants. Second, data scarcity: some diseases may not be covered in the training set, which often leads to few-shot or zero-shot scenarios. For example, in the Chinese disease normalization dataset CHIP-CDN, there are 40472 diseases to classify, but only 3505 diseases (i.e., less than 10% of all diseases) appear in the training set. Figure 1 illustrates the data scarcity problem in the CHIP-CDN dataset. Third, semantic density: disease names are usually short, so every character carries a large amount of semantic information. The meanings of diseases can be very different from each other even if they share many common characters, and a single character change can result in a dramatic change in meaning. For instance, "髂总动脉夹层 (Common iliac artery dissection)" and "颈总动脉夹层 (Common carotid artery dissection)" differ in only one character, but the locations of these diseases are very distinct, from the lower half of the body to the upper half. Among the challenges we discussed, data scarcity is the biggest one, since the other problems can usually be alleviated by providing larger datasets for models to learn from. A common way to address the data scarcity problem is data augmentation. There are numerous data augmentation methods for general corpora, such as synonym replacement or back translation. Wei & Zou (2019) have shown that simple text data augmentation methods can be effective for text classification problems. However, because of the unique structure of disease names (i.e., semantic density), general text data augmentation methods do not work well on them and sometimes even hurt the overall performance. For example, if random deletion (Wei & Zou, 2019) is performed on the disease "阻塞性睡眠呼吸暂停 (Obstructive Sleep Apnoea)" and results in "阻塞性睡眠 (Obstructive Sleep)", the meaning of the disease name changes dramatically and it becomes another disease.
Admittedly, general data augmentation methods may be able to address the challenge of different writing styles, as performing random operations on text can be seen as a way to emulate different writing behaviors. However, for the reasons above, general data augmentation methods tend to hurt performance, which is demonstrated in our experiments. Therefore, designing data augmentation methods specific to disease corpora is necessary, and to bridge this gap we propose a set of disease-oriented data augmentation methods. As with other disease-related tasks, disease normalization can be thought of as a text matching process from clinical names to standard names in ICD coding. The key to this task is therefore for the model to learn encodings that capture enough similarity information for each disease. For instance, the model needs to tell that "左肾发育不全 (Left renal agenesis)" and "先天性肾发育不全 (Congenital renal agenesis)" are the same disease while "髂总动脉夹层 (Common iliac artery dissection)" and "颈总动脉夹层 (Common carotid artery dissection)" are not, despite both pairs sharing many common characters. Our methods are based on the following two assumptions. First, disease names have the property of structural invariance. A disease name consists of several types of key elements, such as location, clinical manifestation, etiology, and pathology. In a pair of a clinical disease and its standard ICD disease, these elements correspond in most cases. Therefore, we can replace a specific element in the clinical disease and the standard ICD disease at the same time to generate new pairs, and the matching relationship of the newly generated pair is still maintained. We screen the generated standard ICD diseases to ensure that they belong to the correct label and that the pairs are valid. Note that replacing components may produce a clinical disease name that turns out to be fake (i.e., the disease does not actually exist), but the key point is to make models learn the necessary semantic associations within diseases. Second, labels in the disease normalization task have a transitivity property. Specifically, a more specific description of an object can be subsumed into a larger group whose description is more coarse; e.g., a yellow chair is also a chair. The ICD coding system also has different and clear granularities of diseases. Therefore, we can treat fine-grained diseases as their coarse-grained upper diseases by assigning them the parent labels. Normally, a data augmentation method generates new data and trains on it along with the existing data, without altering the training paradigm. However, the disease normalization task assigns each disease a unique label, while our methods augment the labels. If the traditional training paradigm were applied to our augmentation methods, the same input disease could therefore receive different labels, making the model difficult to train due to label confusion. To overcome this problem, we treat the data augmentation operation as a pre-training task (which we call augmented training) prior to the original task, so that the model can first learn the necessary semantic information within diseases and then leverage that information when fine-tuning on the actual normalization dataset (a minimal sketch of this two-stage schedule is given below).
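The augmented-training-then-fine-tuning paradigm can be written as two ordinary training stages. The following PyTorch sketch is a minimal illustration under assumed data loaders, model, and hyperparameters; Appendix A.2 discusses how the learning rate and number of iterations should differ between randomly initialized and pre-trained backbones.

```python
# A minimal sketch of the two-stage "augmented training then fine-tuning" schedule.
# The model, data, learning rates, and epoch counts here are illustrative.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def run_stage(model: nn.Module, loader: DataLoader, lr: float, epochs: int) -> None:
    """One training stage: plain cross-entropy over (disease, ICD label) pairs."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()

def augmented_then_finetune(model, augmented_loader, cdn_loader):
    # Stage 1: augmented training on DDA-generated pairs (acts as pre-training).
    # Appendix A.2 reports 1 epoch at lr 5e-6 for this stage with CDN-Baseline.
    run_stage(model, augmented_loader, lr=5e-6, epochs=1)
    # Stage 2: fine-tuning on the original CHIP-CDN training set (assumed values).
    run_stage(model, cdn_loader, lr=2e-5, epochs=3)

# Toy usage with a linear model over 16 made-up features and 8 made-up labels:
toy_model = nn.Linear(16, 8)
aug = TensorDataset(torch.randn(32, 16), torch.randint(0, 8, (32,)))
cdn = TensorDataset(torch.randn(32, 16), torch.randint(0, 8, (32,)))
augmented_then_finetune(toy_model, DataLoader(aug, batch_size=8),
                        DataLoader(cdn, batch_size=8))
```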
Additionally, both the unnormalized disease names from the task and the standard ICD names of the diseases can be used as inputs in the data augmentation process. A unique advantage of using standard ICD names to perform data augmentation as a pre-training task is that the model can get a whole picture of the disease-related information in ICD coding, covering all classes of diseases, even before the actual training of the downstream task. Therefore, with all this information injected, the model can perform much better on smaller datasets where many class labels never appear in the training set. To the best of our knowledge, we are the first to explore the semantic components and information within disease names. We believe that research on disease name augmentation has high value and can benefit various downstream tasks. To summarize our contributions:
• We propose a set of data augmentation methods for Chinese disease normalization tasks.
• Experiments validate that general data augmentation methods can even impair the disease normalization task, whereas our method yields clear performance gains on the task across various baseline models.
• We also analyze the reasons why the proposed method is effective.
2 Background
ICD coding. ICD, the acronym of the International Classification of Diseases, is an internationally unified classification of diseases developed by the World Health Organization; ICD-10, the 10th version of ICD coding, is used in our work. A code is a combination of letters and numbers that classifies diseases according to their etiology, pathology, clinical manifestations, and anatomical locations, so that the codes form a hierarchical structure. ICD also adopts a multi-grain fashion in which coarse-grained diseases are followed by fine-grained diseases.
Disease normalization task. In clinical practice, doctors fill in the name of a disease according to clinical diagnosis standards along with their own writing habits, which leads to hundreds of versions of a single disease name. The disease normalization task is to match disease names written in different styles to a single standard name provided by ICD coding. After normalization, researchers can perform further operations on the normalized names to support various functions in intelligent healthcare applications. The task can be formalized as a mapping X -> Y, where X represents the clinical disease names and Y represents the standard ICD names.
NER. NER stands for Named Entity Recognition, a common task in natural language processing. It aims to identify entities of practical value, and their locations, from unstructured text. The entity classes may include persons, organizations, locations, etc. In this work, we use an NER tool trained by ourselves to identify the elements in disease names in order to perform data augmentation. We note that any NER tool that can identify the elements in disease names would work, and our contribution mainly lies in the data augmentation methods.
3 Related Work
In this section, we first introduce related work on data augmentation, and then introduce medical data-driven research that is similar to ours.
3.1 Data Augmentation
Data augmentation is a technique that synthesizes new data from existing data as a way to expand the amount of data.
It is often used when the amount of data is not enough, and it can also act as a regularizer to prevent the model from overfitting the training set. Unlike images, where it is relatively easy to augment data while keeping the semantic information intact, data augmentation for text is more difficult due to its unstructured form Ng et al. (2020). Many works focus on augmentations applied directly to the input: Wei & Zou (2019) propose four simple augmentation methods based on character-level noise injection, namely replacement, insertion, swap, and deletion. Their methods are straightforward and effective, but the augmented results may introduce unwanted noise by not following grammar rules. Back translation augments data by translating the original text into a second language and then translating it back. This method preserves the semantic meaning of the original text well, but the augmented results lack diversity and are sometimes restricted by the translation tool. To make augmented data more realistic, Kim et al. (2022) leverage lexicalized probabilistic context-free grammars to capture the intricate compositional structure of natural language and then perform word replacements. This method yields good results, but grammar-based methods for general text are difficult to generalize to specialized areas such as medicine. There are also methods that leverage pre-trained language models to perform data augmentation. Ng et al. (2020) use the MLM objective in BERT Devlin et al. (2018) to mask out some words and then regenerate them. Wu et al. (2019) also use the MLM task and additionally change the segment ids to class labels. Kumar et al. (2020) compare three kinds of data augmentation methods using conditional pre-trained models, namely auto-encoder, auto-regressive, and seq2seq. A problem with these methods is that the semantic meaning of the original sentence may change after several MLM replacements. Semi-supervised learning can also serve as a form of data augmentation by leveraging large amounts of unlabeled data. Berthelot et al. (2019) use MixUp to guess low-entropy labels of the augmented data and then mix the labeled and unlabeled data to derive a loss term, and Xie et al. (2020) perform data augmentation on unlabeled data for consistency training. However, in this work we only focus on augmenting the data itself rather than on semi-supervised learning objectives.
3.2 Data augmentation approaches on medical data
While most research focuses on the effect of data augmentation on general text data, there are also works that explore data augmentation on medical text data. In this section, we mainly introduce data augmentation on medical text data and other related research. Some works focus on synonym replacement of medical terms. Falis et al. (2022) and Abdollahi et al. (2021) leverage the Unified Medical Language System (UMLS) to find medical synonyms for replacement after certain medical terms are identified in classification texts. Focusing on the ICD-coding task, Falis et al. (2022) also replace both the medical terms in raw texts and the classification label to obtain new training data. While their works mainly focus on replacing a whole medical term, we investigate the possibility of replacing the components of medical terms by exploring the semantic structures within them. Additionally, Ansari et al.
(2021) investigate the performance of EDA, conditional pre-trained language models, and back translation for data augmentation on social media texts for mental health classification. Wang et al. (2020a) propose Segment Reordering as a data augmentation technique that keeps the medical semantic meaning intact. Wang et al. (2020b) use pre-trained language models fine-tuned on General Semantic Textual Similarity (STS-G) data to generate pseudo-labels on medical STS data, and then perform iterative training.
4 Methods
In this section, we introduce the details of our proposed data augmentation methods and the overall pipeline. Since the purpose of data augmentation is to inject extra knowledge into the model, the key point is to explore the components of and relations between diseases, so that the model can develop a broad sense of their internal structures. Therefore, we leverage the multi-axis and multi-grain nature of diseases to design all of the data augmentation methods. First, disease names are composed of several elements, which include but are not limited to etiology, pathology, clinical manifestations, anatomical location, chronicity, degree type, etc. For ease of expression, we merge and select these elements into three main categories: disease center, anatomical location, and disease quality. This reflects the multi-axis nature of diseases.
• Disease Center: The disease center, which may include etiology and pathology, is the minimal word that describes the nature of a disease. It defines the main category of a disease, such as "disorders" in "Other disorders of the eye with mcc".
• Anatomical Location: An anatomical location is a part of the human body that has an actual meaning in anatomy. It indicates which part of the human body is ill.
• Disease Quality: The quality of a disease indicates its subtype, such as "Drug-induced" in "Drug-induced peripheral neuropathy".
With these three types of axis-words, all kinds of disease names can be composed. Second, a disease can be described at multiple granularities: an upper disease is a coarse-defined disease and a lower disease is a fine-grained disease. The ICD coding contains many upper-lower disease pairs, distinguished by codes of different lengths. For example, in "ICD-10 Beijing Clinical Version 601", the disease name of code "A18.2" is "外周结核性淋巴结炎 (Peripheral Tuberculous Lymphadenitis)" and that of "A18.201" is "腹股沟淋巴结结核 (Inguinal lymph node tuberculosis)". "Peripheral Tuberculous Lymphadenitis" is a coarse-defined disease because it does not specify a single anatomical location, and a coarse-defined disease can contain multiple fine-grained diseases in ICD coding. Intuitively, although two diseases can only be called the same if all of their components are the same, it is still necessary for the model to learn which diseases are more similar than others. Therefore, we define the following data augmentation methods.
4.1 Data Augmentation
We perform data augmentation by assigning pseudo-labels to diseases to describe their relationships, so that they form new pairs of diseases, and we use those pairs to perform augmented training for the disease normalization task. We divide our methods into two main categories: Axis-word Replacement and Multi-grain Aggregation. We call our proposed disease name data augmentation method DDA. Figure 2 illustrates the overall pipeline of our methods (a code sketch of the three-axis representation and the multi-grain code hierarchy follows).
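As a concrete illustration of the three-axis decomposition and the multi-grain ICD hierarchy, the following Python sketch defines a simple representation of a standard disease entry and a helper that maps a 6-digit code to its 4-digit parent, which is the aggregation step used by MGA-code. The field names and example values are illustrative assumptions; the real axis-words come from the NER tool and the ICD-10 list.

```python
# A minimal sketch of the three-axis disease representation and the 6-digit ->
# 4-digit code aggregation used by MGA-code. Example values are illustrative.
from dataclasses import dataclass

@dataclass
class DiseaseEntry:
    icd_code: str      # e.g. "A18.201" for a 6-digit disease, "A18.2" for 4-digit
    name: str          # standard disease name
    center: str        # disease center axis-word
    location: str      # anatomical location axis-word
    quality: str = ""  # disease quality axis-word, often empty

def parent_code(code: str) -> str:
    """Truncate a 6-digit ICD code to its 4-digit parent (e.g. "A18.201" -> "A18.2").

    MGA-code assigns a 6-digit disease the label of its 4-digit parent, so that
    several 6-digit diseases aggregate under one coarser label.
    """
    compact = code.replace(".", "")
    if len(compact) <= 4:          # already a 3- or 4-digit code
        return code
    return compact[:3] + "." + compact[3]

# Toy usage (the code and parse are illustrative):
entry = DiseaseEntry("A18.201", "Inguinal lymph node tuberculosis",
                     center="tuberculosis", location="inguinal lymph node")
print(parent_code(entry.icd_code))  # "A18.2"
```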
1. What is the focus of the paper on disease diagnosis normalization?
2. What are the strengths and weaknesses of the proposed approach, particularly in its incremental improvement?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns regarding the translation and language components in the study?
5. Can the gains achieved by the proposed method be generalized?
Summary Of The Paper
The paper describes the authors' work on a set of data augmentation techniques that work together as an augmented training task for disease diagnosis normalization. Various techniques are described which show incremental improvement in disease diagnosis normalization against the ICD-10 classification using the CHIP-CDN dataset. The key to their work is the described methods, which augment the labels.
Strengths And Weaknesses
Strengths:
• The authors' work does demonstrate an incremental benefit (3%) over prior related work. Clearly, this is valuable work, as there is significant variability in disease diagnoses as documented by clinicians.
Weaknesses:
• The incremental benefit is limited.
• Some of the grammar could be improved.
• It is unclear whether the Chinese terms have exact English translations for comparison.
Clarity, Quality, Novelty And Reproducibility
Quality: The paper is of reasonable quality.
Clarity: Table 6 is unclear. The comparison between different diagnosis codes goes from ankle to knee, and the label in the DA-AR row is duplicated, so it is not clear how to assess it. It is also not clear whether the challenge is different, or even harder, given the language component (Chinese vs. English).
Originality: While the authors show an incremental benefit of their technique over prior related work, it is not clear whether the gains can be generalized.
Title Exploring semantic information in disease: Simple Data Augmentation Techniques for Chinese Disease Normalization Abstract The disease is a core concept in the medical field, and the task of normalizing disease names is the basis of all disease-related tasks. However, due to the multi-axis and multi-grain nature of disease names, incorrect information is often injected and harms the performance when using general text data augmentation techniques. To address the above problem, we propose a set of data augmentation techniques that work together as an augmented training task for disease normalization. Our data augmentation methods are based on both the clinical disease corpus and standard disease corpus derived from ICD-10 coding. Extensive experiments are conducted to show the effectiveness of our proposed methods. The results demonstrate that our methods can have up to 3% performance gain compared to non-augmented counterparts, and they can work even better on smaller datasets. 1 Introduction The disease is a central concept in medical text processing problems. One of the most important tasks, i.e. disease normalization, uses diseases as both input and output to match the diagnoses terms used in clinical documents to standard names in ICD coding. The disease normalization task mainly faces the following three challenges. First, different writing styles. The writing styles of the diseases can be diversified, where different doctors have different writing habits, so a single disease might result in thousands of versions of names. Second, data scarcity, where some diseases may not be covered in the training set, which often leads to few-shot or zero-shot scenarios. For example, in the Chinese disease normalization dataset CHIP-CDN, there are 40472 diseases to classify, but only data of 3505 diseases (i.e. less than 10% of all diseases) are provided in the training set. Figure 1 illustrates the data scarcity problem in CHIP-CDN dataset. Third, semantics density. The length of disease names is usually short, which makes every character carries huge semantic information. The meanings of the diseases are very different from each other even if they share a lot of common characters, and a single change in characters could result in dramatic change in semantic meaning. For instance, ” 髂总动脉夹层 (Common iliac artery dissection)” and ” 劲总动脉夹层 (Common carotid artery dissection)” are only different in one character, but the positions of those diseases are very distinct, from the upper half of the body part to the lower half. Among all the challenges we discussed, data scarcity is the biggest one, since other problems usually can be solved by providing larger datasets for models to learn. A common way to address the data scarcity problem is through data augmentation. There are numerous data augmentation methods for general corpora such as synonym replacement or back translation. Wei & Zou (2019) has shown that simple text data augmentation methods can be effective for text classification problems. However, because of the unique structure of disease names (i.e. semantics density), general text data augmentation methods do not work well on them, and sometimes even hurt the overall performance. For example, if random deletion Wei & Zou (2019) is performed on disease ” 阻塞性睡眠呼吸暂停 (Obstructive Sleep Apnoea)” and results in ” 阻塞性睡眠 (Obstructive Sleep)”, that would dramatically change the meaning of that disease name and makes it become another disease. 
Admittedly, general data augmentation methods may be able to address the challenge of different writing styles, as performing random operations on texts can be seen as a way to emulate different writing behaviors. However, due to the above reasons, general data augmentation methods tend to hurt performance, which is demonstrated in our experiments. Therefore, designing data augmentation methods specific to disease corpus is necessary. To bridge this gap, we propose a set of disease-oriented data augmentation methods to address this problem. As with other disease-related tasks, disease normalization can be thought as a process of text matching, from clinical names to standard names in ICD coding. Therefore, the key to this task is for the model to learn great encoding that contains enough similar information for each disease. For instance, the model needs to tell that ” 左肾发育不全 (Left renal agenesis)” and ” 先天性肾发育不全 (Congenital renal agenesis)” are the same disease while ” 髂总动脉夹层 (Common iliac artery dissection)” and ” 颈总动脉夹层 (Common carotid artery dissection)” are not, despite that they both share a lot of common characters. Our methods are based on the following two assumptions. First, disease names have the property of structural invariance. A disease name consists of several different types of key elements, such as location, clinical manifestations, etiology, pathology, etc. In the pair of clinical disease and standard ICD disease, the specified elements can correspond in most cases. Therefore, we can replace a specific element between the pair of clinical disease and standard ICD disease at the same time to generate new pairs. The matching relationship of the newly generated clinical disease and the ICD standard disease pairs can still be maintained. We screened the generated standard ICD diseases to ensure that they belonged to the correct label and that the pairs are effective. It should be noticed that replacing components could derive a new clinical disease name that turns out to be fake (i.e. the disease actually does not exist), but the key point here is to make models learn the necessary semantic association within the diseases. Second, labels in the disease normalization task have transitivity properties. In specific, a more specified description of an object can be comprised into a larger group where the descriptions are more coarse, e.g. a yellow chair is also a chair. In the ICD coding system, there are also different and clear granularities of diseases. Therefore, we can treat the fine-grained disease as their coarse-grained upper disease by assigning them father labels. Normally, a data augmentation method generates new data and trains them along with the existing data, without altering the training paradigm. However, the disease normalization task assigns each disease a unique label, while our methods augment the labels. Therefore, if the traditional training paradigm is still applied to our augmentation methods, a same input disease in the dataset may get different labels, which will make the model difficult to train due to label confusion. To overcome this problem, we treat the data augmentation operation as a pre-training task (we call it augmented training) prior to the original task, so that the model can first learn the necessary semantic information within diseases and then leverage that information when fine-tuning on the actual normalization dataset. 
Additionally, both unnormalized disease names from the tasks and standard ICD names of the diseases can be used as inputs in the data augmentation process. A unique advantage of using standard ICD names to perform data augmentation as a pre-training task is that the model can get the whole picture of the disease-related information from ICD coding, which includes all classes of diseases, even before the actual training of the downstream task. Therefore, with all those information injected, the model can perform much stronger on smaller datasets where lots of class labels are not able to be seen in the training set. To the best of our knowledge, we are the first to explore the semantic components and information within disease names. We believe the research on disease name enhancement has high research value and can benefit various downstream tasks. To summarize our contributions: • We propose a set of data augmentation methods for the Chinese disease normalization tasks. • Experiments validate that general data augmentation methods have the potential to impair the disease normalization task. However, our method has obvious performance gain on the task based on various baseline models. • We also analyze the reasons why the proposed method is effective. 2 Background ICD coding. ICD, the acronym of the International Classification of Diseases, is an international unified classification of diseases developed by the World Health Organization, and ICD-10 is the 10th version of ICD coding which is used in our work. The coding is a combination of letters and numbers, which classifies diseases according to their etiology, pathology, clinical manifestations, and anatomical locations, so that they form a hierarchical coding structure. ICD also adopts a multi-grain fashion where coarse-grained disease are followed by fine-grained diseases. Disease normalization task. In clinical practice, doctors will fill in the name of the disease according to clinical diagnosis standards along with their own writing habits, which makes a single disease name hundreds of versions. The disease normalization task is to match disease names written in different styles into a single standard name provided by ICD coding. After the disease normalization process, researchers can perform further operations upon the normalized names to realize all kinds of functions used in wise medical applications. The task can be formalized into the following operation: X -> Y, where X represents the clinical disease names and Y represents the standard ICD names. NER. NER stands for Named Entity Recognition, which is a common task in Natural Language Processing. It aims to identify entities that have practical values and their locations from unstructured texts. The classification of these entities may include persons, organizations, locations, etc. In this work, we use an NER tool trained by ourselves to identify elements in disease names in order to perform data augmentation. Additionally, we argue that any NER tool that can identify elements in disease names should be fine, and our work mainly focus on the data augmentation methods. 3 Related Work In this section, we first introduce related works of data augmentation, then we introduce medical data-driven research works that are similar to ours. 3.1 Data Augmentation Data augmentation is a technology to synthesize new data based on existing data as a way to expand the amount of dataset. 
It is often used when the amount of data is not enough, and it can also act as a regularizer to prevent the model from overfitting the training set. Unlike images, where it is relatively easy to augment data as well as keep the semantic information intact, data augmentation in texts is more difficult, due to its unstructured form Ng et al. (2020). Many works focus on augmentations directly on the input: Wei & Zou (2019) propose four simple augmentation methods base on character-level noise injection, which are replacement, insertion, swap, and deletion. Their methods are quite straightaway and effective, but the augmentation results may cause unwanted noise by not following the grammar rules. Back translation, augments data by translating the original text to a second language and then translating it back. This method can keep the semantic meaning well of the original text, but the augmented results are lack of diversity and sometimes restricted by the translation tool. In order to make the augmented data more realistic, Kim et al. (2022) leverages lexicalized probabilistic context-free grammars to capture the intricate compositional structure of natural language and then perform word replacements. This method yields good results, but grammar-based methods for general text are difficult to generalize to specialized areas, such as medicine. There are also methods that leverage pre-trained language models to perform data augmentation. Ng et al. (2020) use MLM objective in BERT Devlin et al. (2018) to mask out some words and then regenerate it. Wu et al. (2019) also uses MLM task as well as changing the segment ids to class labels. Kumar et al. (2020) compares three kinds of data augmentation methods using a conditional pre-trained model, namely auto-encoder, auto-regressive, and seq2seq. A problem with these methods is that the semantic meaning of the original sentence may change after several MLM replacements. Semi-supervised learning can also be a way to perform data augmentation by leveraging the vast amount of unlabeled data. Berthelot et al. (2019) uses MixUp to guess the low-entropy labels of the augmented data and then mixes the labeled and unlabeled data to derive a loss term, and Xie et al. (2020) performs data augmentation on unlabeled data for consistency training. However, we only focus on augmenting the data itself instead of semi-supervised learning objectives in this work. 3.2 Data approaches on medical data While most researches focus on the effect of data augmentation on general text data, there are also works that try to explore the possibility of data augmentation operations on medical text data. In this section, we mainly introduce data augmentation on medical text data and other related research works. There are works that focus on the synonym replacement in medical terms. Falis et al. (2022) and Abdollahi et al. (2021) leverage Unified Medical Language System (UMLS) to find medical synonyms to perform replacements after certain medical terms are identified in classification texts. Focusing on the ICD-coding task, Falis et al. (2022) also replaces both the medical terms in raw texts and the classification label to get new training data. While their works mainly focus on replacing the whole medical term, we investigate the possibility of replacing the components of the medical terms by exploring the semantic structures within them. Additionally, Ansari et al. 
(2021) investigate the performance of EDA, conditional pre-trained language models, and back translation for data augmentation on social media texts for mental health classification. Wang et al. (2020a) propose Segment Reordering as a data augmentation technique that keeps the medical semantic meaning intact. Wang et al. (2020b) use pre-trained language models fine-tuned on General Semantic Textual Similarity (STS-G) data to generate pseudo-labels on medical STS data, and then perform iterative training. 4 Methods In this section, we introduce the details of our proposed data augmentation methods and the overall pipeline. Since the purpose of data augmentation is to inject extra knowledge into the model, the key is to expose the components of, and relations between, diseases so that the model develops a broad sense of their internal structure. Therefore, we leverage the multi-axis and multi-grain nature of diseases to design all of the data augmentation methods. First, disease names are composed of several elements, which include but are not limited to etiology, pathology, clinical manifestations, anatomical location, chronicity, degree type, etc. For ease of expression, we merge and select from these elements three main categories: disease center, anatomical location, and disease quality. This reflects the multi-axis nature of diseases. • Disease Center: The disease center, which may include etiology and pathology, is the minimal word that describes the nature of a disease. It defines the main category of a disease, such as ”disorders” for ”Other disorders of the eye with mcc”. • Anatomical Location: An anatomical location is a part of the human body that has actual meaning in anatomy. It indicates which part of the human body is ill. • Disease Quality: The quality of a disease indicates its subtype, such as ”Drug-induced” for ”Drug-induced peripheral neuropathy”. With these three types of axis words, all kinds of disease names can be composed. Second, a disease can be described at multiple granularities. An upper disease is a coarse-defined disease and a lower disease is a fine-grained disease. The ICD coding contains many upper-lower disease pairs, distinguished by codes of different lengths. For example, in ”ICD-10 Beijing Clinical Version 601”, the disease name of code ”A18.2” is ”外周结核性淋巴结炎 (Peripheral Tuberculous Lymphadenitis)” and that of ”A18.201” is ”腹股沟淋巴结结核 (Inguinal lymph node tuberculosis)”. ”Peripheral Tuberculous Lymphadenitis” is a coarse-defined disease because it does not specify a single anatomical location. Additionally, a coarse-defined disease can contain multiple fine-grained diseases in the ICD coding. Intuitively, although two diseases can only be considered the same if all of their components are the same, it is still necessary for the model to learn which diseases are more similar than others. Therefore, we define the following data augmentation methods. 4.1 Data Augmentation We perform data augmentation by assigning pseudo-labels to diseases to describe their relationships so that they form new pairs of diseases, and we use those pairs to perform augmented training for the disease normalization task. We divide our methods into two main categories: Axis-word Replacement and Multi-grain Aggregation. We call our proposed disease name data augmentation method DDA. Figure 2 illustrates the overall pipeline of our methods. 
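To make the three-axis view above concrete, the following minimal Python sketch shows one way a parsed disease name could be represented and recombined. It is our own illustration: the NER tool mentioned later is not public, so parse_disease below is a hypothetical stand-in for any tagger that can label the three axes.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DiseaseName:
    text: str                       # raw disease name as written
    center: str                     # disease center, e.g. "neuropathy"
    location: Optional[str] = None  # anatomical location, e.g. "peripheral"
    quality: Optional[str] = None   # disease quality, e.g. "drug-induced"

def parse_disease(text: str) -> DiseaseName:
    """Hypothetical wrapper around an axis-word tagger (e.g. an NER model)."""
    raise NotImplementedError("replace with any module that tags the three axes")

def compose(center: str, location: str = "", quality: str = "") -> str:
    """Recombine axis words into a disease name (quality + location + center)."""
    return f"{quality}{location}{center}"

# e.g. compose("neuropathy", location="peripheral ", quality="drug-induced ")
# -> "drug-induced peripheral neuropathy"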
Axis-word Replacement (AR): We assume that disease names have the property of structural invariance, which means that a name derived by replacing an axis-word in a disease name with another one of the same type also makes sense. Since axis-words often match between the unnormalized and standard names of a disease normalization pair, replacing the corresponding axis-word in the clinical name and in the standard name at the same time ensures that the newly generated pair still matches. To locate all axis-words in a disease name, we leverage a Named Entity Recognition (NER) tool trained by ourselves1 (1 We will open-source the code of our experiments along with the NER tool for disease names on GitHub). The entity types include but are not limited to disease center, anatomical location, and disease quality. We note that the NER tool is only used to locate axis-words, and it can be replaced by any module that achieves the same function. We leverage both the ICD coding and the disease normalization training set to perform axis-word replacement. The detailed descriptions of each category of axis-word replacement are as follows: • AR1: AR1 is illustrated in the top left corner of Figure 2. First, select a pair of diseases (disease A and disease B) that share one or more axes (part1 in the figure) but differ in another axis (part2 in the figure). Then, replace part2 in disease A with part2 from disease B. (Note: disease A can be chosen from any source, but disease B can only be chosen from the standard ICD-coding list, as it serves as the label of a disease normalization pair.) – AR1-position: Perform AR1 by fixing the disease center and replacing the anatomical location. – AR1-center: Perform AR1 by fixing the anatomical location and replacing the disease center. – AR1-quality: Perform AR1 by fixing both the disease center and the anatomical location and replacing the disease quality. • AR2: AR2 is illustrated in the top right corner of Figure 2. First, select a pair of unnormalized-standard diseases from the disease normalization training set. Let the unnormalized disease be disease A and the standard disease be disease B. Then, find a disease C from the ICD-coding list that shares one or more axes (part1) with the pair but differs in another axis (part2). Finally, replace part2 in disease A with part2 from disease C, so that the modified disease A and disease C form a new disease normalization pair. – AR2-position: Perform AR2 by fixing the disease center and replacing the anatomical location. – AR2-center: Perform AR2 by fixing the anatomical location and replacing the disease center. – AR2-quality: Perform AR2 by fixing both the disease center and the anatomical location and replacing the disease quality. Multi-Grain Aggregation (MGA): We assume that labels in the disease normalization task have the transitivity property. Specifically, a more specific description of an object can be subsumed into a larger group with a coarser description. In the ICD coding system, there are also clear granularities of diseases. The maximum code length that can be shared between hospitals is 6, and the multi-grain structure contains 3-digit, 4-digit, and 6-digit codes. We observe that diseases sharing only the first 3-digit code but differing at the 4th digit can have quite different meanings, whereas the meanings are much more similar if the diseases share the first 4-digit code. 
Therefore, we implement MGA augmentation as follows. • MGA-code: We leverage the multi-grain nature of the ICD coding by assigning the label of a 6-digit disease to its corresponding 4-digit disease. We call this method ”aggregation” because a 4-digit disease can normally be matched to several 6-digit diseases, so the model can learn which diseases are similar. MGA-code is illustrated in the bottom left of Figure 2. – MGA-code1: The 6-digit diseases are directly derived from the ICD-coding list. – MGA-code2: The 6-digit diseases are derived from diseases in the CHIP-CDN training set whose labels are 6-digit ICD diseases. • MGA-position: Apart from the ICD coding, anatomical locations also follow a hierarchical structure, where several smaller positions can be grouped together to form a larger position. Thus, we search for diseases in the ICD coding that share the same disease center and whose positions are in an upper-lower relation, and we group the classification labels of the lower-position diseases under their upper-position diseases. MGA-position is illustrated in the bottom right of Figure 2. (Note: the upper-position diseases must come from the standard ICD-coding list.) – MGA-position1: The lower-position diseases are directly derived from the ICD-coding list. – MGA-position2: The lower-position diseases are derived from the diseases in the CHIP-CDN training set. (Note: we call one location the upper position of another if it covers a larger area of the human body. To find the upper or lower positions of a given position, we construct a position tree in which the anatomical positions of the human body are organized into a tree data structure. We use this position tree to recognize the upper-lower relations above. The same goal can be achieved with other knowledge bases of human anatomy.) 4.2 Training Process • First, train on the augmented data for the disease normalization task (augmented training). • Then, fine-tune on the original disease normalization dataset. 5 Experiments 5.1 Dataset We evaluate the effectiveness of our data augmentation methods on a Chinese disease normalization dataset called CHIP-CDN. CHIP-CDN originates from the CHIP-2019 competition and was collected into the Chinese Biomedical Language Understanding Evaluation benchmark (CBLUE) Zhang et al. (2021). The dataset contains 6000 unnormalized-standard disease pairs in the training set, 1000 pairs in the dev set, and 2000 pairs in the test set. 5.2 Experimental Setup We evaluate our methods on three baselines: BILSTM Sak et al. (2014), BERT-base Devlin et al. (2018), and CDN-Baseline (from CBLUE) Zhang et al. (2021). For BILSTM, we use two BILSTM layers followed by an MLP layer to perform classification. For BERT-based models, we use the CLS vector to perform classification. For CDN-Baseline, we use the original model provided by its git repository2 (2 https://github.com/CBLUEbenchmark/CBLUE), which follows a ”recall-match” two-step training approach based on pre-trained language models. The choice of baseline models is meant to demonstrate the effectiveness of our method under different types of models and training settings. Specifically, we verify the effectiveness of DDA on a train-from-scratch model using the BILSTM model, on models with pre-trained knowledge using the BERT-base model, and on complex models using the CDN-Baseline model. For the BILSTM and BERT-base models, we use accuracy to judge the model performance. 
In our evaluation, we treat disease normalization as a multi-class classification task rather than a multi-label classification task, even though there are a few data samples in which a single unnormalized disease is matched to several standard diseases. Hence, if an unnormalized disease is matched to several standard diseases, the data sample is considered correctly predicted as long as one of the standard diseases is correctly predicted. We design the experiments in this way to keep the model as simple as possible and thus more clearly illustrate the effectiveness of DDA. For CDN-Baseline, we stick to the settings in CBLUE Zhang et al. (2021), which use F1 as the evaluation metric, BERT-base as the baseline model, and the two-step training paradigm provided by CBLUE, for better comparison. To ensure fairness, we use exactly the same parameter settings for the same model. In particular, for CDN-Baseline, we use almost the same parameter settings as CBLUE’s git repository, including the random seed numbers. Additionally, we use the dev set for performance comparison, since the labels of the CHIP-CDN test set are not released. For all experiments, we keep the best-performing result as the final score. 5.3 Results The results are shown in Table 1. The trainset in the table refers to the CHIP-CDN training set. From top to bottom, the table reports the performance of different models with different data augmentation methods. Among them, BT is the back-translation data augmentation method3 (3 We use the Youdao translation tool: https://fanyi.youdao.com/), and DDA is the semantics-based disease name data augmentation method proposed by us. The experimental results show that although EDA and back-translation increase diversity, they both hurt performance in some settings (especially EDA). In contrast, DDA improves performance in every setting. Clearly, DDA avoids the problem of EDA, and its effect is much better than that of BT. We observe that performance improves for all models above after applying the DDA methods, showing the effectiveness of our proposed methods. For the BILSTM model, the relative performance improvement reaches 6%. We further observe that the performance gain is larger for BILSTM than for the BERT-based models and CDN-Baseline, probably because the knowledge in pre-trained language models already covers some of the similar information; still, our proposed method further improves their performance, showing the effectiveness of DDA. 5.4 Ablation Study In this section, we evaluate the effectiveness of each data augmentation method on the BILSTM, BERT-base, and CDN-Baseline models. As we propose two types of data augmentation methods, we evaluate them by removing each type in turn and observing the resulting performance. The results are shown in Table 2. We observe that removing the data generated by either type of method leads to performance degradation, demonstrating the effectiveness of every method that we propose. 5.5 Smaller-dataset experiments We also evaluate the performance improvements on smaller datasets derived from CHIP-CDN, since the data scarcity problem is more severe in smaller datasets. We evaluate training sets whose sizes range from 5% to 100% of the CHIP-CDN training set size. For convenience, augmented training in this setting only leverages the standard disease names in the ICD coding; no data from the disease normalization training set are used. 
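The lenient evaluation rule described above (a sample with several gold standard names counts as correct if any one of them is predicted) can be made concrete with a short sketch. This is our own illustration rather than the released evaluation code, and the data layout is an assumption.

def lenient_accuracy(predictions, gold_sets):
    # A prediction is correct if it matches any of the acceptable standard names.
    correct = sum(1 for pred, gold in zip(predictions, gold_sets) if pred in gold)
    return correct / len(predictions)

# Example: the second sample has two acceptable standard names.
preds = ["disease A", "disease B"]
golds = [{"disease C"}, {"disease B", "disease D"}]
print(lenient_accuracy(preds, golds))  # 0.5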
We draw curves to compare training with and without our proposed methods, as shown in Figure 3. When the size of the training set increases, both curves steadily improve. We also notice that the performance gain is larger when the training set is smaller. 6 Conclusion In this paper, we propose two main types of data augmentation methods for the Chinese disease normalization task, based on two hypotheses: disease names have the property of structural invariance, and labels in the disease normalization task have the transitivity property. Our data augmentation methods explore the semantic information and the relational information in diseases, and they are adopted in an augmented-training fashion to avoid introducing misinformation. Experimental results show that our DDA method better addresses the three main challenges in the disease normalization task, namely description diversity, data scarcity, and semantics density. Compared to the EDA and back-translation methods, our method has clear advantages on the disease normalization task. Furthermore, we show that our data augmentation methods work even better on smaller datasets. A Appendix A.1 Data augmentation result statistics Table 3 reports the statistics of the data we obtained using the MGA and AR data augmentation methods4 (4 We will open-source the augmentation code and the augmented results on GitHub). A.2 Hyperparameter settings Table 4 shows our hyperparameter settings. Different methods call for different parameter settings. For models that use word2vec initialization or randomly initialized parameters, training on the augmented data can be regarded as a special pre-training task, and a large learning rate and a large number of iterations can be set to make the training sufficient. For models that use a pre-trained model (i.e., BERT) as the backbone, a small learning rate and a small number of training iterations should be set to avoid catastrophic forgetting of the valuable information in the pre-trained models. For each baseline model, we first train on the augmented dataset (Augmented Training) and then fine-tune on the CHIP-CDN dataset. For the CDN-Baseline model, we use Chinese-BERT-wwm as the pre-trained model, and the training method is the one provided by CBLUE. For the DDA method, we first use the augmented dataset to train for 1 epoch with a learning rate of 5e-6 and then fine-tune on CHIP-CDN. The hyperparameter num_negative_sample is 3+3 and recall_k is 2 (the explanation of the hyperparameters num_negative_sample and recall_k can be found in the CBLUE github repository). A.3 Analysis In Table 5, the first row represents the distribution of the number of times each label appears in the training set. The other two rows represent the label distributions of the two types of augmented data. The statistics show that the data generated by DDA effectively covers labels that appear infrequently (fewer than 3 times) or do not appear at all in the training set. This is beneficial for addressing the data scarcity problem of the disease normalization task and the diversity of disease names, and it is the direct reason why DDA works. As for EDA and BT, they can only increase the number of samples for labels that already appear in the training set, which only addresses the problem of expression diversity. Hence, their abilities are limited. A.4 Case Study We give a real example of the augmentation results generated by different data augmentation methods. 
We observe that the semantic meaning of the EDA-generated result changes dramatically due to the property of semantics density: it alters the key information within the disease name by losing the anatomical location. The results generated by BT are more realistic, but this method cannot generate samples beyond the original label scope, and it also suffers from the restrictions of the translation tool. As for our proposed method DDA (the last two lines in the table), it not only increases the diversity of the input but also generates data whose labels never appear in the training set, so that sparse labels can be trained more thoroughly. A.5 Future work So far, we have only demonstrated the effectiveness of our DDA method; no experimental analysis has been done to explore the internal mechanisms of why it is so effective. Moreover, to further avoid the injection of misinformation, we believe that designing loss terms to effectively select the more valuable data from the augmentation results is a promising direction. We aim to study these topics in future work.
1. What is the main contribution of the paper regarding data augmentation techniques for normalizing disease names in Chinese? 2. What are the strengths and weaknesses of the paper, particularly in terms of experimental results and the acknowledgment of prior work? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns regarding ethical considerations in the study? 5. Is the formatting and writing style of the paper appropriate and consistent with the standards of ICLR 2023 papers?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposed and evaluated data augmentation techniques for normalizing disease names in Chinese. The techniques were derived from disease corpora extracted from an existing dataset of authentic clinical text and the standardized ICD-10 coding system. Experimental results demonstrated a fair 3% performance gain with potential for further gains on smaller datasets. Strengths And Weaknesses Experimental results demonstrated a fair 3% performance gain with potential for further gains on smaller datasets. Medical natural language processing is societally important. Although the evaluation results demonstrated performance gains, no statistical significance testing, confidence intervals, or effect sizes were reported in the paper. However, the authors reported on an ablation study whose merit should be noted. The novelty and clarity of the study - or minimally their clear and convincing argumentation - can be questioned. Ethical considerations of this study were insufficiently addressed. Unfortunately, the paper was formatted using an incorrect template; it did not look like an ICLR 2023 paper and hence should be desk rejected. Clarity, Quality, Novelty And Reproducibility The paper addressed a fundamental task in medical natural language processing: disease name normalization. However, although this task is extensively studied, also in Chinese, the paper did not seem to properly acknowledge and appreciate this prior work. To illustrate, its first two sections (Introduction and Background) did not use peer-reviewed publications to support its argumentation (in total, one paper (arXiv preprint from 2019) was cited in the entire first section and none of the related literature was cited in the second section) and this problem of insufficient positioning of this paper in the prior work was also evident in the last section (Conclusion) of the paper; no discussion section or paragraph to position the contributions of this paper compared to the prior work was included here to close the study. Finally, the third section (Related Work) was hard to understand because the scope of the included studies was not communicated; I would have wanted to learn if, for example, the included papers were targeting or at least applicable to medical natural language processing in Chinese. Consequently, the novelty and clarity of the study - or minimally their clear and convincing argumentation - can be questioned. Given the societal importance of medical natural language processing, I would have expected to see a broader impact statement in this paper. Also, although the CHIP-CDN - Chinese disease normalization dataset used in this study seems to have been collected and released before, the authors should have discussed relevant ethical considerations, minimally describing how the authors analyzed and evaluated that the dataset was created ethically and that their use of it for the purposes of this study was ethical. Unfortunately, the paper was formatted using an incorrect template; it did not look like ICLR 2023 papers and hence should be desk rejected. Also, further care should have been demonstrated in writing; for instance, the paper title was inconsistently capitalized.
ICLR
Title Exploring semantic information in disease: Simple Data Augmentation Techniques for Chinese Disease Normalization Abstract The disease is a core concept in the medical field, and the task of normalizing disease names is the basis of all disease-related tasks. However, due to the multi-axis and multi-grain nature of disease names, incorrect information is often injected and harms the performance when using general text data augmentation techniques. To address the above problem, we propose a set of data augmentation techniques that work together as an augmented training task for disease normalization. Our data augmentation methods are based on both the clinical disease corpus and standard disease corpus derived from ICD-10 coding. Extensive experiments are conducted to show the effectiveness of our proposed methods. The results demonstrate that our methods can have up to 3% performance gain compared to non-augmented counterparts, and they can work even better on smaller datasets. 1 Introduction The disease is a central concept in medical text processing problems. One of the most important tasks, i.e. disease normalization, uses diseases as both input and output to match the diagnoses terms used in clinical documents to standard names in ICD coding. The disease normalization task mainly faces the following three challenges. First, different writing styles. The writing styles of the diseases can be diversified, where different doctors have different writing habits, so a single disease might result in thousands of versions of names. Second, data scarcity, where some diseases may not be covered in the training set, which often leads to few-shot or zero-shot scenarios. For example, in the Chinese disease normalization dataset CHIP-CDN, there are 40472 diseases to classify, but only data of 3505 diseases (i.e. less than 10% of all diseases) are provided in the training set. Figure 1 illustrates the data scarcity problem in CHIP-CDN dataset. Third, semantics density. The length of disease names is usually short, which makes every character carries huge semantic information. The meanings of the diseases are very different from each other even if they share a lot of common characters, and a single change in characters could result in dramatic change in semantic meaning. For instance, ” 髂总动脉夹层 (Common iliac artery dissection)” and ” 劲总动脉夹层 (Common carotid artery dissection)” are only different in one character, but the positions of those diseases are very distinct, from the upper half of the body part to the lower half. Among all the challenges we discussed, data scarcity is the biggest one, since other problems usually can be solved by providing larger datasets for models to learn. A common way to address the data scarcity problem is through data augmentation. There are numerous data augmentation methods for general corpora such as synonym replacement or back translation. Wei & Zou (2019) has shown that simple text data augmentation methods can be effective for text classification problems. However, because of the unique structure of disease names (i.e. semantics density), general text data augmentation methods do not work well on them, and sometimes even hurt the overall performance. For example, if random deletion Wei & Zou (2019) is performed on disease ” 阻塞性睡眠呼吸暂停 (Obstructive Sleep Apnoea)” and results in ” 阻塞性睡眠 (Obstructive Sleep)”, that would dramatically change the meaning of that disease name and makes it become another disease. 
Admittedly, general data augmentation methods may be able to address the challenge of different writing styles, as performing random operations on texts can be seen as a way to emulate different writing behaviors. However, due to the above reasons, general data augmentation methods tend to hurt performance, which is demonstrated in our experiments. Therefore, designing data augmentation methods specific to disease corpus is necessary. To bridge this gap, we propose a set of disease-oriented data augmentation methods to address this problem. As with other disease-related tasks, disease normalization can be thought as a process of text matching, from clinical names to standard names in ICD coding. Therefore, the key to this task is for the model to learn great encoding that contains enough similar information for each disease. For instance, the model needs to tell that ” 左肾发育不全 (Left renal agenesis)” and ” 先天性肾发育不全 (Congenital renal agenesis)” are the same disease while ” 髂总动脉夹层 (Common iliac artery dissection)” and ” 颈总动脉夹层 (Common carotid artery dissection)” are not, despite that they both share a lot of common characters. Our methods are based on the following two assumptions. First, disease names have the property of structural invariance. A disease name consists of several different types of key elements, such as location, clinical manifestations, etiology, pathology, etc. In the pair of clinical disease and standard ICD disease, the specified elements can correspond in most cases. Therefore, we can replace a specific element between the pair of clinical disease and standard ICD disease at the same time to generate new pairs. The matching relationship of the newly generated clinical disease and the ICD standard disease pairs can still be maintained. We screened the generated standard ICD diseases to ensure that they belonged to the correct label and that the pairs are effective. It should be noticed that replacing components could derive a new clinical disease name that turns out to be fake (i.e. the disease actually does not exist), but the key point here is to make models learn the necessary semantic association within the diseases. Second, labels in the disease normalization task have transitivity properties. In specific, a more specified description of an object can be comprised into a larger group where the descriptions are more coarse, e.g. a yellow chair is also a chair. In the ICD coding system, there are also different and clear granularities of diseases. Therefore, we can treat the fine-grained disease as their coarse-grained upper disease by assigning them father labels. Normally, a data augmentation method generates new data and trains them along with the existing data, without altering the training paradigm. However, the disease normalization task assigns each disease a unique label, while our methods augment the labels. Therefore, if the traditional training paradigm is still applied to our augmentation methods, a same input disease in the dataset may get different labels, which will make the model difficult to train due to label confusion. To overcome this problem, we treat the data augmentation operation as a pre-training task (we call it augmented training) prior to the original task, so that the model can first learn the necessary semantic information within diseases and then leverage that information when fine-tuning on the actual normalization dataset. 
Additionally, both unnormalized disease names from the tasks and standard ICD names of the diseases can be used as inputs in the data augmentation process. A unique advantage of using standard ICD names to perform data augmentation as a pre-training task is that the model can get the whole picture of the disease-related information from ICD coding, which includes all classes of diseases, even before the actual training of the downstream task. Therefore, with all those information injected, the model can perform much stronger on smaller datasets where lots of class labels are not able to be seen in the training set. To the best of our knowledge, we are the first to explore the semantic components and information within disease names. We believe the research on disease name enhancement has high research value and can benefit various downstream tasks. To summarize our contributions: • We propose a set of data augmentation methods for the Chinese disease normalization tasks. • Experiments validate that general data augmentation methods have the potential to impair the disease normalization task. However, our method has obvious performance gain on the task based on various baseline models. • We also analyze the reasons why the proposed method is effective. 2 Background ICD coding. ICD, the acronym of the International Classification of Diseases, is an international unified classification of diseases developed by the World Health Organization, and ICD-10 is the 10th version of ICD coding which is used in our work. The coding is a combination of letters and numbers, which classifies diseases according to their etiology, pathology, clinical manifestations, and anatomical locations, so that they form a hierarchical coding structure. ICD also adopts a multi-grain fashion where coarse-grained disease are followed by fine-grained diseases. Disease normalization task. In clinical practice, doctors will fill in the name of the disease according to clinical diagnosis standards along with their own writing habits, which makes a single disease name hundreds of versions. The disease normalization task is to match disease names written in different styles into a single standard name provided by ICD coding. After the disease normalization process, researchers can perform further operations upon the normalized names to realize all kinds of functions used in wise medical applications. The task can be formalized into the following operation: X -> Y, where X represents the clinical disease names and Y represents the standard ICD names. NER. NER stands for Named Entity Recognition, which is a common task in Natural Language Processing. It aims to identify entities that have practical values and their locations from unstructured texts. The classification of these entities may include persons, organizations, locations, etc. In this work, we use an NER tool trained by ourselves to identify elements in disease names in order to perform data augmentation. Additionally, we argue that any NER tool that can identify elements in disease names should be fine, and our work mainly focus on the data augmentation methods. 3 Related Work In this section, we first introduce related works of data augmentation, then we introduce medical data-driven research works that are similar to ours. 3.1 Data Augmentation Data augmentation is a technology to synthesize new data based on existing data as a way to expand the amount of dataset. 
It is often used when the amount of data is not enough, and it can also act as a regularizer to prevent the model from overfitting the training set. Unlike images, where it is relatively easy to augment data as well as keep the semantic information intact, data augmentation in texts is more difficult, due to its unstructured form Ng et al. (2020). Many works focus on augmentations directly on the input: Wei & Zou (2019) propose four simple augmentation methods base on character-level noise injection, which are replacement, insertion, swap, and deletion. Their methods are quite straightaway and effective, but the augmentation results may cause unwanted noise by not following the grammar rules. Back translation, augments data by translating the original text to a second language and then translating it back. This method can keep the semantic meaning well of the original text, but the augmented results are lack of diversity and sometimes restricted by the translation tool. In order to make the augmented data more realistic, Kim et al. (2022) leverages lexicalized probabilistic context-free grammars to capture the intricate compositional structure of natural language and then perform word replacements. This method yields good results, but grammar-based methods for general text are difficult to generalize to specialized areas, such as medicine. There are also methods that leverage pre-trained language models to perform data augmentation. Ng et al. (2020) use MLM objective in BERT Devlin et al. (2018) to mask out some words and then regenerate it. Wu et al. (2019) also uses MLM task as well as changing the segment ids to class labels. Kumar et al. (2020) compares three kinds of data augmentation methods using a conditional pre-trained model, namely auto-encoder, auto-regressive, and seq2seq. A problem with these methods is that the semantic meaning of the original sentence may change after several MLM replacements. Semi-supervised learning can also be a way to perform data augmentation by leveraging the vast amount of unlabeled data. Berthelot et al. (2019) uses MixUp to guess the low-entropy labels of the augmented data and then mixes the labeled and unlabeled data to derive a loss term, and Xie et al. (2020) performs data augmentation on unlabeled data for consistency training. However, we only focus on augmenting the data itself instead of semi-supervised learning objectives in this work. 3.2 Data approaches on medical data While most researches focus on the effect of data augmentation on general text data, there are also works that try to explore the possibility of data augmentation operations on medical text data. In this section, we mainly introduce data augmentation on medical text data and other related research works. There are works that focus on the synonym replacement in medical terms. Falis et al. (2022) and Abdollahi et al. (2021) leverage Unified Medical Language System (UMLS) to find medical synonyms to perform replacements after certain medical terms are identified in classification texts. Focusing on the ICD-coding task, Falis et al. (2022) also replaces both the medical terms in raw texts and the classification label to get new training data. While their works mainly focus on replacing the whole medical term, we investigate the possibility of replacing the components of the medical terms by exploring the semantic structures within them. Additionally, Ansari et al. 
(2021) investigates the performance of EDA, conditional pretrained language models and back translation to perform data augmentation on social media texts for mental health classification. Wang et al. (2020a) proposes Segment Reordering as a data augmentation technique to keep the medical semantic meaning intact. Wang et al. (2020b) use pre-trained language models fine-tuned on General Semantic Textual Similarity (STS-G) data to generate pseudo-labels on medical STS data, and then perform iterative training. 4 Methods In this section, we introduce the details of our proposed data augmentation methods and the overall pipeline. Since the significance of data augmentation is to inject the model with extra knowledge, the key point is to explore the components and relations in diseases so that the model can have a broad sense of the internal structures of the diseases. Therefore, we leverage the multi-axis and multi-grain nature of the diseases to design all of the data augmentation methods. First of all, the disease names are composed of several elements, which include but are not limited to etiology, pathology, clinical manifestations, anatomical location, chronicity, degree type, etc. For ease of expression, we merge and select from all those elements into three main categories, which are disease center, anatomical location and disease quality. This shows the multi-axis nature of the diseases. • Disease Center: Disease center, which may include etiology and pathology, is the minimal word that describes the nature of a disease. It defines the main category of a disease, such as ”disorders” for ”Other disorders of the eye with mcc”. • Anatomical Location: Anatomical Location is a part of the human body that have actual meaning in anatomy. It indicates which part of the human body is ill. • Disease Quality: The quality of a disease which indicates the subtype of the disease, such as ”Drug-induced” for ”Drug-induced peripheral neuropathy”. With these three axis words, all kinds of disease names can be combined by them. Second, a disease can be described by multiple granularities. An upper disease is a coarsedefined disease and a lower disease is a fine-grained disease. The ICD coding contains lots of upper-lower disease pairs by assigning them different lengths of code. For example, in ”ICD-10 Beijing Clinical Version 601”, the disease name of code ”A18.2” is ” 外周结核 性淋巴结炎 (Peripheral Tuberculous Lymphadenitis)” and ”A18.201” is ” 腹股沟淋巴结 结核 (Inguinal lymph node tuberculosis)”. ”Peripheral Tuberculous Lymphadenitis” is a coarse-defined disease due to not specifying a single anatomical location. Additionally, a coarse-defined disease can contain multiple fine-grained diseases in ICD coding. In our intuition, although the disease can only be called the same if all of its components are the same, it is necessary for the model to learn which diseases are more similar than others. Therefore, we define the following data augmentation methods. 4.1 Data Augmentation We perform data augmentation by assigning pseudo-labels to diseases to describe their relationships so that they can form a new pair of diseases, and we use those pairs to perform augmented training of disease normalization tasks. We divide our methods into two main categories: Axis-word Replacement and Multi-grain Aggregation. We call our proposed disease name data augmentation method DDA. Figure 2 illustrates the overall pipeline of our methods. 
Axis-word Replacement (AR): We assume that disease names have the property of structural invariance, which means a name derived by replacing an axis-word in a disease to another one with the same type also makes sense. Since there are often matches of Axis-words between an unnormalized-standard disease pair in the disease normalization task, replacing the corresponding Axis-word in the clinical name with the standard name in the pair at the same time can ensure that the newly-generated pair will still match. To locate all axis-word in the disease, we leverage a Named Entity Recognition (NER) tool trained by ourselves1. The entity type includes but is not limited to disease center, anatomical location, and disease quality. We note that the NER tool is just for the use of locating axis-words, and it can be replaced by any modules that can achieve the same function. We leverage both the ICD-coding and the disease normalization training set to perform axisword replacement. The detailed descriptions of each category of axis-word replacements are as follows: • AR1: AR1 is illustrated in the top left corner of Figure 2. First, select a pair of diseases (disease A and disease B) that shares one or more axis (part1 in figure) but is different in another axis (part 2 in figure). Then, replace the part 2 in disease A to be the same part2 in disease B. (Note: disease A can be chosen from any sources, but disease B can only be chosen from the standard ICD-coding list as it serves as the label of a disease normalization pair.) – AR1-posotion: Perform AR1 by fixing the disease center and replacing the anatomical location. – AR1-center: Perform AR1 by fixing the anatomical location and replacing the disease center. – AR1-quality: Perform AR1 by fixing both the disease center and the anatomical location and replacing the disease quality. • AR2: AR2 is illustrated in the top right corner of Figure 2. First, select a pair of unnormalized-standard diseases from the disease normalization training set. Let the unnormalized disease be disease A, and the standard disease be disease B. Then, find disease C from ICD-coding list that shares one or more axis (part1) but is different in another axis (part2). Finally, replace part2 in disease A to be the same part2 in disease C, so that the replaced disease A and disease C can form a new disease normalization pair. – AR2-position: Perform AR2 by fixing the disease center and replacing the anatomical location. – AR2-center: Perform AR2 by fixing the anatomical location and replacing the disease center. – AR2-quality: Perform AR2 by fixing both the disease center and the anatomical location and replacing the disease quality. Multi-Grain Aggregation (MGA): We assume that labels in the disease normalization task have transitivity properties. In specific, a more specified description of an object can be comprised into a larger group where the descriptions are more coarse. In the ICD coding system, there are also clear granularities of diseases. The maximum length of code that can be shared between hospitals is 6, and the multi-grain structure contains 3-digit, 4-digit, and 6-digit codes. We observe that the semantic meaning between diseases that share the first 1We will open source the code of our experiment along with the NER tool for disease names on Github. 3-digit code but are different in the 4th-digit code can be quite different, but the meaning would be a lot similar if the diseases share the first 4-digit code. 
Therefore, We implement MGA augmentation using the following method. • MGA-code: we leverage the multi-grain nature of the ICD coding by assigning the label of a 6-digit disease to its corresponding 4-digit disease. We call the method ”aggregation” because normally a 4-digit disease can be matched to several 6-digit diseases, so the model can learn which diseases are similar. MGA-code is illustrated in the left bottom of Figure 2. – MGA-code1: The 6-digit diseases are directly derived from the ICD-coding list. – MGA-code2: The 6-digit diseases are derived from the diseases in CHIP-CDN training set whose labels are a 6-digit ICD disease. • MGA-position: Apart from the ICD coding, anatomical locations also follow a hierarchical structure, where several smaller positions can be grouped together to form a larger position. Thus, we search for diseases in ICD coding that share the same center and one position is the upper position of another one, and we grouped the classification labels of the lower position diseases to their upper position diseases. MGA-position is illustrated in the right bottom of Figure 2. (Note: the upper position diseases must come from the standard ICD-coding list.) – MGA-position1: The lower position diseases are directly derived from the ICDcoding list. – MGA-position2: The lower position diseases are derived from the diseases in CHIP-CDN training set. (Note: In the human body, we call a location the upper position to another position if that location covers a larger area than another. In order to find the upper or lower positions of a position, we construct a position tree document where the anatomical positions in the human body are organized into a tree data structure. We use the constructed position tree to recognize the upper and lower relations above. The same goal can be achieved with other sources containing knowledge bases of human anatomy.) 4.2 Training Process • Taking the augmented data to train the disease normalization task. • Fine-tuning the original disease normalization dataset. 5 Experiments 5.1 Dataset We evaluate the effectiveness of our data augmentation methods on a Chinese disease normalization dataset called CHIP-CDN. CHIP-CDN originates in the CHIP-2019 competition and was collected in A Chinese Biomedical Language Understanding Evaluation Benchmark called CBLUE Zhang et al. (2021). The dataset contains 6000 unnormalized-standard disease pairs in the training set, 1000 pairs in the dev set, and 2000 pairs in the test set. 5.2 Experimental Setup We evaluate our methods on three baselines: BILSTM Sak et al. (2014)and BERT-base Devlin et al. (2018), CDN-Baseline(from CBLUE)Zhang et al. (2021). For BILSTM, we use two BILSTM layers followed by a MLP layer to perform classification. For BERTbased models, we use the CLS vector to perform classification. For CDN-Baseline, we use the original model provided by its git repository2, which follows a ”recall-match” two step training approach based on pre-trained language models. The choose of the baseline models is to demonstrate the effectiveness of our method under different types of models and training 2https://github.com/CBLUEbenchmark/CBLUE settings. In specific, we verify the effectiveness of DDA to a train-from-scratch model using a BILSTM model, we verify the effectiveness to models with pre-trained knowledge using the BERT-base model, and we verify the effectiveness to complex models using CDN-Baseline model. For the BILSTM model and BERT-base model, we use accuracy to judge the model performance. 
In our evaluation, we treat this disease normalization as a multi-class classification rather than multi-label classification task despite that there are few data samples that a single unnormalized disease is matched to several standard diseases. Hence, if an unnormalized disease is matched to several standard diseases, this data sample is considered correctly predicted as long as one of the standard diseases is correctly predicted. We design the experiments in this way to simplify the model as much as possible to more clearly illustrate the effectiveness of DDA. For CDN-Baseline, we stick to the settings in CBLUE Zhang et al. (2021), which use the F1 as the evaluation metric, use BERT-base as the baseline model, and use the two step training paradigm provided by CBLUE for better comparison. To ensure fairness, we use the exact same parameter settings for the same model. In particular, for CDN-Baseline, we use almost the same parameter settings as CBLUE’s git repository, including random seed numbers. Additionally, we use devset for performance comparison, since the label of test set of the CHIP-CDN dataset is not given. For all experiments, we keep the best performing result as the final score. 5.3 Results The results are shown in Table 1. The trainset in the table represents CHIP-CDN training set. From top to bottom, the performance of different models using different data augmentation methods is represented. Among them, BT is the back-translation data augment method3, and DDA is the semantic-based disease name data augmentation method proposed by us. The experimental results demonstrate that although EDA and back-translation increase diversity, they both hurt performances in some settings (especially for EDA). However, DDA improves the performance in every settings. Clearly, DDA avoids the problem of EDA, and its effect is much better than BT. We observe that the performances improve for all models above after applying the DDA methods, showing the effectiveness of our proposed methods. For the BILSTM model, the relative performance improvement reaches 6%. We further observe that there is more performance gain on BILSTM than BERT-based models and CDN-Baseline, probably because the knowledge in pre-trained language models has already covered some of the similar information, but our proposed method can further improve their performance, showing the effectiveness of DDA. 5.4 Ablation Study In this section, we evaluate the effectiveness of every data augmentation methods on BILSTM, BERT-base models and CDN-Baseline. As we propose two types of data augmentation methods, we evaluate them by taking out these methods one by one to see the resulting performances. The results are shown in Table 2. We observe that removing data gener- 3we use the youdao translation tool and the URL is https://fanyi.youdao.com/. ated by either types of methods would lead to performance degradation, thus proving the effectiveness of every method that we propose. 5.5 Smaller datasets experiments We also evaluate the performance improvements over smaller datasets that derived from CHIP-CDN since the data scarcity problem is more severe in smaller datasets. We evaluate the training set whose sizes range from 5%, to 100% of the CHIP-CDN training set size. For the convenience of training, for augmented training in this setting, we only leverage standard disease names in ICD-coding. No data from disease normalization training set are used. 
We draw curves to illustrate the comparison on whether to use our proposed methods or not, which is shown in figure 3. When the size of the training set increase, both curves steadily improve. We also notice that the performance gain is higher when the size of the training set is smaller. 6 Conclusion In this paper, we propose two main types of data augmentation methods for Chinese disease normalization tasks based on two hypothesis respectively, where the disease names have the property of structural invariance, and the labels in disease normalization task have the transitivity properties. Our data augmentation methods explore the semantic information and the relation information in diseases, and are adopted in augmented training fashion to avoid introducing misinformation. Experimental results show that our DDA method can better solve the three main challenges in disease normalization task, namely description diversity, data scarcity, and semantics density. Compared to EDA and back-translation methods, our method has obvious advantages on the disease normalization task. Furthermore, we prove that our data augmentation methods work even better on smaller datasets. A Appendix A.1 data augment result statics The table 3 is all the statistical results of the data we obtained using MGA and AR data augmentation methods 4. A.2 Hyperparameter settings Table 4 shows the hyperparameter settings of our choices. For different methods, the way of parameter setting is different. For models that use word2vec initialization or random initialization parameters, the training on augmented data can be regarded as a special pretraining task, and a large learning rate and a large number of iterations can be set to make the training sufficient. For models that use a pre-trained model (i.e. BERT) as the backbone, a small learning rate and a small number of training iterations should be set to avoid the catastrophic forgetting of valuable information in the pre-trained models. For each baseline model, we first train on the augmented dataset (Augmented Training), and then fine-tune on CHIP-CDN dataset. For the CDN-Baseline model, we use Chinese-bertwwm as the pre-training model, and the training method is provided by CBLUE. For the DDA method, we first use the augmented dataset to train for 1 epoch with a learning rate of 5e-6 and then fine-tune on CHIP-CDN. The hyperparameter of the num_negative_sample is 3+3 and the recall_k is 2 (The explanation of hyperparameter num_negative_sample and recall_k can be found in their github repository). A.3 Analysis In table 5, the first row represents the distribution of the number of times the label appears in the training set. The other two rows represent the label distribution of the two types of the augmented data. The statistical result shows that the data of DDA can effectively improve the labels that appear less frequently (the number of times < 3) and the labels that do not appear at all (did not appear) in the training set. This is beneficial for addressing the data scarcity problem of disease normalization tasks and the diversity of disease names. This is the direct reason why DDA works. As for EDA and BT, they can only increase the number of labels that are already appeared in the training set, which only solve the problem of expression diversity. Hence, their abilities are limited. A.4 Case Study We give a real example of the augmentation results generated by different data augmentation methods. 
We observe that the semantic meaning of the EDA-generated result dramatically 4We will open source the augmentation code and the augmented result on Github. changes due to the property of semantics density, and it changes the key information within the disease by losing the anatomical location. The results generated by BT is more realistic, but this method cannot generate samples beyond the original label scope, and it also suffers from the restrictions of the translation tools. As for our proposed method DDA (last two lines in the table), it can not only increase the diversity of the input, but also generates data where their labels are never appeared in the training set, so that sparse labels can be trained more thoroughly. A.5 Future work So far, we have only proved the effectiveness of our DDA method, but no experimental analysis is done to explore the internal mechanisms of why it is so effective. Moreover, to further avoid the injection of misinformation, we believe designing loss function terms to effectively select more valuable data from the data augmentation results can be a promising direction. We aim to perform researches on those topics in the future.
1. What is the main contribution of the paper regarding data augmentation for Chinese disease normalization? 2. What are the strengths and weaknesses of the proposed method compared to other existing methods? 3. Do you have any concerns or suggestions regarding the experiment design and comparison with other methods? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor issues or typos in the paper that need to be addressed?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper introduces a new data augmentation method for a Chinese disease normalization dataset after analysing unnormalized and standard disease names. The main contribution is a novel data augmentation method adapted to a new dataset, which consists of axis-word replacement and multi-grain aggregation. Strengths And Weaknesses Strength: The paper proposes an interesting method to augment the dataset; it achieves good performance compared with other augmentation methods. It demonstrated that the EDA and BT methods are harmful on CDN datasets. Weakness: It would be nice if the authors could check the ratio of wrong labels introduced by the proposed DA and the existing ones, either using expert annotation or automatic evaluation. This could be tested using only a few examples. There are no details about how many augmented examples are used for the baseline DA methods, including EDA and BT. This should be carefully compared for a fair comparison. Only a small-scale Chinese dataset was used. It would be better if the authors could evidence their method across more datasets. The task is only for the Chinese language, making it too narrow to fit ICLR. The authors should detail the motivation to use EDA and BT as the baselines, since there are many other DA methods in NLP; see https://arxiv.org/abs/2110.01852 Minor issue: the font and the reference format seem wrong. Clarity, Quality, Novelty And Reproducibility The paper seems a little bit redundant, e.g., the two assumptions (structural invariance, transitivity) are repeated many times -- and the naming of these two properties does not seem appropriate. Questions: Why don't the authors also benchmark the DA method on an English disease normalization dataset (if there is any)? From my understanding, some of the DA methods are specific to the Chinese language, right? If yes, this should be clearly stated. Otherwise, an English disease normalization dataset should be considered. Do you think disease normalization is more challenging for Chinese than for English? If so, please illustrate a little bit more.
ICLR
Title AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation Abstract Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. N/A Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. 1 INTRODUCTION Normalizing flow (NF) is a probabilistic modeling tool that has been widely used in density estimation, generative models, and random sampling. Various flow models have been proposed in recent years to improve their expressive power. Discrete flow models are either built based on elementalwise monotonical functions, named autoregressive flow or coupling layers (Papamakarios et al., 2017), or built with transformations where the determinant of the flow can be easily calculated with matrix determinant lemma (Rezende & Mohamed, 2015). In the continuous flow family, the models are constructed by neural ODE (Grathwohl et al., 2019). Despite the variety of flow models, there’s yet no perfect flow concerning the expressive power and the computation cost. The flow models with higher expressive power usually have higher computational costs in either forward and inverse pass. In contrast, flows which are fast to compute are not able to model rich distributions and are limited to simple applications. For instance, autoregressive flows (Papamakarios et al., 2017) are universal probability approximators but are D times slower to invert than forward calculation, where D is the dimension of the modeled random variable x (Papamakarios et al., 2021). Flows based on coupling layers (Dinh et al., 2015; 2017; Kingma & Dhariwal, 2018) have an analytic one-pass inverse but are less expressive than their autoregressive counterparts. 
Other highly expressive NF models (Rezende & Mohamed, 2015; Behrmann et al., 2019) cannot provide an analytic inverses and relies on numerical optimizations. For different applications, the optimal flow model can be drastically different, especially if the computation cost is taken into consideration. For generative models (Dinh et al., 2015; Kingma & Dhariwal, 2018), flows with the fast forward pass are preferable since the forward transformations need to be applied to every sample from the base distribution. For density estimation (Papamakarios et al., 2017; Rippel & Adams, 2013), flows with cheap inverse will prevail. For applications where flow is utilized as a co-trained kernel (Mazoure et al., 2020), the computation cost and performance trade-off are more important, i.e., having a fast model with relatively good performance. However, in the current body of work, the architecture designs of the flow models are all based on manual configuration and tuning. To this date, there is a lack of a systematic way that could automatically construct an optimal flow architecture with a preferred cost. In this paper, we propose AutoNF, an automated method for normalizing flow architecture optimization. AutoNF has a better performance-cost trade-off than hand-tuned SOTA flow models based on a given set of transformations. Our approach employs a mixture distribution formulation that can search a large design space of different transformations while still satisfying the invertibility requirement of normalizing flow. The proposed mixture NF is optimized via approximate upper bound which provides a better optimization landscape for finding the desired flow architecture. Besides, to deal with exponentially growing optimization complexity, we introduce a block-wise optimization method to enable efficient optimization of deep flow models. 2 RELATED WORK Normalizing Flows: Various normalizing flow models have been proposed since the first concept in (Tabak & Turner, 2013). Current flow models can be classified into two categories: finite flows based on layer structure, and continuous flow based on neural ODE (Grathwohl et al., 2019). The finite flow family includes flows based on elemental-wise transformation (Papamakarios et al., 2017; Kingma & Dhariwal, 2018) and flows whose transformations are restricted to be contractive (Behrmann et al., 2019). In elemental-wise transformation flows, autoregressive flow and coupling layers are two major flavors and extensive work has been proposed to improve the expressive power of both flow models. In Huang et al. (2018), the dimension-wise scalar transformation is implemented by a sigmoid neural network, which increases the expressive power at the cost of being not analytically invertible. In Durkan et al. (2019), piecewise splines are used as drop-in replacement of affine or additive transformations (Dinh et al., 2015; 2017) and is the current SOTA flow model. Consequently many recent research efforts have been devoted to closing the gap of expressive power, albeit at the cost of more complex and expensive transformations. Moreover, there has been no quantitative trade-off analysis between the performance and cost among different flows. Neural Architecture Search: Many algorithms have been proposed or applied for neural architecture search. 
For instance, reinforcement learning (Zoph & Le, 2017), genetic algorithm (Real et al., 2017; Suganuma et al., 2018; Liu et al., 2018), Monte Carlo tree search (Negrinho & Gordon, 2017) or Bayesian optimization (Kandasamy et al., 2018). However, these methods all face the challenge of optimizing on a large discrete space and can take thousand of GPU days to find a good architecture. To address this issue, DARTS (Liu et al., 2019) proposes to relax the search space from discrete to continuous and allows efficient differentiable architecture search with gradient method which could reduce the search time to a single GPU day while still producing the SOTA architecture. However, all current NAS methods focus on optimizing traditional neural network structures (CNN, RNN) and there has yet been any implementation on normalizing flow. Necessity for the Trade-off Between Performance and Cost: Despite various transformations proposed in the literature, there is no perfect transformation with strong expressive power and low computational cost. Autoregressive flows have better expressive power, but the inverse computation cost grows linearly with data dimension. Coupling layers’ inverse calculation is as fast as the forward pass, but their expressive power is generally worse than autoregressive flow with the same element-wise transformation. Even in the same autoregressive flow or coupling layer family, flows with different element-wise transformations have different performance and computation costs. For instance, additive or affine coupling layers (Dinh et al., 2017; 2015) have very fast forward and inverse calculation with limited expressive power while the flow in (Durkan et al., 2019) are highly expressive but are more demanding on computation. In most applications, it is necessary to find the best performance while minimizing at least one specific component of the cost. Unfortunately, the current design of flow models is empirical and therefore cannot ensure the optimal trade-offs. 3 METHOD In this work, we aim to tackle the challenge of finding an optimal flow model for a given task via an automated architecture search algorithm. Assumptions: In the remaining part of this paper, without losing generality, we assume that the transformation is properly modeled such that during the training process, only forward computation is needed. Under this assumption, when the flow model is used for density modeling (Durkan et al., 2019), the forward calculation is the dominant computation. When the flow model is used for random sampling (Kingma & Dhariwal, 2018), the inverse calculation is computationally intensive. When the flow model is utilized as a module and trained together with other components, e.g., policy network in maximum entropy learning (Mazoure et al., 2020), the training cost of the flow model is an important consideration. Problem Definition: Given a transformation set with m options {T 1, T 2, ...Tm}, the goal is to construct an optimal flow model with n layers of transformations from the set. The flow model pNF (x;θ) = pT1T2...Tn(x;θ) should minimize the KL divergence between the target distribution p∗(x) and itself while minimizing its computational cost CNF . Here, θ are the parameters of the transformation in the flow model. In this paper, we use the forward KL divergence as our target loss function (Papamakarios et al., 2021): θ∗ =argmin θ {DKL[p∗(x) || pT1T2...Tn(x;θ)] + λ · CNF } s.t. 
Ti ∈ {T 1, T 2, ...Tm} (1) While λ is a tuning factor capturing the relative importance of the performance-cost trade-off. Finding this optimal flow model is a discrete optimization problem with exponential complexity. To enable efficient architecture optimization, we use proposed method of relaxing the discrete search space to continuous space as suggested in Liu et al. (2019). 3.1 MIXED FLOW ENSEMBLE For the ith transformation layer with m options, we introduce a corresponding weight w j i for each option T j which reflects how likely the transformation will be selected. The weight is parameterized by a vector α and made continuous via softmax: wji = exp(αji )∑m j=1 exp(α j i ) (2) By applying this parameterization for each transformation layer, we can construct a mixed flow ensemble pMix(x;θ,α), where each layer in this mixed model reflects a weighted combination of the effect of all possible transformations. In this case, the architecture optimization problem is reduced to learning the weight vector for each layer and, at the end of the optimization process, weights will be binarized and the transformation with the highest weight in one layer will be selected as the final transformation. The mixed flow ensemble thus degrades to a normal flow model. The whole procedure is illustrated in Fig. 1 (left). As adopted in (Liu et al., 2019), training of the flow ensemble becomes joint optimization of the architecture parameterα and the model parameter θ over the training and validation datasets, which could be written as the following bi-level optimization problem: α∗ =argmin α DvalKL[p ∗(x) || pMix(x;θ∗,α)] + λ · CMix(α) s.t. θ∗ = argmin θ DtrainKL [p ∗(x) || pMix(x;θ,α)], ∀ T ∈ pMix, T ∈ {T 1, T 2, ...Tm}, (3) While the optimization problem is well defined, the key challenge is to construct the flow ensemble within the normalizing flow framework. This is different from traditional neural architecture search, which can mix various operations with no additional issue. Normalizing flow has its unique requirement for the invertibility of transformations and a preferred simple Jacobian calculation, which requires careful handling. The mixed flow ensemble pMix(x;θ∗,α) must satisfy two requirements. First, it must be a legal density function such that it can be optimized by the KL divergence formulation. Second, each transformation layer in pMix(x;θ∗,α) should represent a weighted combination of all possible transformations. Consider the ith layer in the mixed flow ensemble with input random variable xin and output random variable xout, and pxin(xin) and pxout(xout) are their corresponding density functions. This layer has m transformation options in {T 1i , T 2i , ...Tmi } and w j i is the corresponding weight for each transformation. As discussed in Assumption, we assume all transformations directly model the inverse transformation, i.e. xin = T j i (xout). Two approaches can be used to construct the mixed flow ensemble. Construction by Mixed Transformations: The straight forward way of building the ith mix flow ensemble layer is to mix all transformations by weighted summation, as shown in Fig. 1 (right-top). The final weighted transformation for this layer can be thus represented as: Ti(xin) = m∑ j=1 wji · T j i (xout) (4) There are two drawbacks of this formulation despite its simplicity. First, definition of normalizing flow requires the mixed transformation Ti be invertible and differentiable in order to ensure pxout(xout) legal density function. 
However, this invertibility is not guaranteed even if all candidate transformations are invertible. Second, even if the mixed transformation is invertible, there is no easy way to calculate the Jacobian determinant of this weighted summation of transformations. Meeting the requirement of invertibility and ease of calculating Jacobian determinant brings strict restrictions on the candidate transformations and prevents the optimization of flow architectures on a wider search space. As a result, the construction of the mixed flow ensemble by weighted summation of transformations is not adopted in this paper. Construction by Mixed Distributions: An alternating way is to build the mixed flow ensemble by mixing distributions. For a given transformation T ji in this ith layer, applying the transformation to the input random variable will result in a new distribution: pT ji (xout) = pxin(T j i (xout)) · | detJT ji (xout)| (5) By applying this to every transformation option in {T 1i , T 2i , ...Tmi }, we can obtain k different distributions, and it is possible to mix all the density functions together by their weighted summation, to get a mixture model as shown in eq.(6). pTi(xout) = m∑ j=1 wji · pT ji (xout) (6) An illustration of this process is shown in Fig. 1 (right-bottom). Different from the previous approach, the mixture model has a legal density function as: pTi(xout). By the definition of normalizing flow, we can assume that there exists an invertible and differentiable transformation Ti, which transforms xin to xout, although the transformation itself can not be explicitly written out. For the next (i + 1)th layer, the density of the mixture model will be used as the input density function pxin(xin) as in the previous layer. By applying this formulation for n layers, the final mixed flow ensemble can be written as: pMix(x;θ,a) = mn∑ k=1 Wk · pT1T2...Tn(x,θ) = mn∑ k=1 Wk · pi(x;θi) where each Wk = n∏ i=1 wi and mn∑ k Wk = 1 (7) Each wi is defined in eq.(2) and we use pk(x;θk) to represent a “normal flow architecture” with n transformation layers. Clearly, the final mixed flow ensemble is a legal density function which is in fact, a weighted summation of all possible flow models built with n layers of transformations. 3.2 OPTIMIZATION WITH APPROXIMATED UPPER BOUND Optimizing the forward KL divergence between the target distribution and the mixed flow ensemble can be written as: LOpMix = DKL [p ∗(x) || pMix(x;θ,α)] = −Ep∗(x)[log( mn∑ k=1 Wk · pk(x;θk))] (8) We will demonstrate that direct optimization of this original loss can lead to underside mixture models. In the whole search space of the flow ensemble, we are interested only in ”normal flow architectures” points, i.e. the points where the weight of one architecture is 1 and others are all 0. However, it can be easily proven that the global optimum of LOpMix may not be the desired normal flow architecture (the red points in Fig. 2). Instead, optimization is very likely to end up in a mixture model that is globally optimal with similar weight for each possible flow architecture (the green point in Fig. 2). In this case, we will encounter difficulty when extracting a normal flow architecture with the search result. A common practice in differentiable architecture search (Liu et al., 2019) is to binarize the weights and select corresponding transformations. However, there is no guarantee that the binarized architecture will have a lower loss, and finding this nearest binarization may lead to performance drop. 
As a result, optimization with the original loss function is not suitable, and could be risky. In this paper, we propose to optimize an upper bound of the original loss function to provide a better global optimum for the search of best normal flow architectures. Our method utilizes Jensen’s inequality log( ∑ W · x) ≥ ∑ W · log(x) as follows, since we have ∑ W = 1 and the log function is concave, we can obtain an upper bound of the KL divergence given as: LOpMix = −Ep∗(x)[log( mn∑ k Wk · pk(x;θk)] ≤ LUpMix = −Ep∗(x)[ mn∑ k Wk · log(pk(x;θk))] (9) The benefit of optimizing the upper bound can be summarized as follows: Proposition 1: The global minimum point of LUpMix is defined by a normal flow architecture. Proof: Suppose each flow model pk(x;θk) has an optimal parameter θ∗k that minimizes the KL divergence between p∗(x) and it: −Ep∗(x)[log(pk(x;θ∗k)] ≤ −Ep∗(x)[log(pk(x;θk)] (10) There also exists a flow architecture (pz(x;θ∗z)) that has the minimal KL divergence: −Ep∗(x)[log(pz(x;θ∗z)] ≤ −Ep∗(x)[log(pk(x;θ∗k)], ∀k ∈ mn (11) We can then prove the proposition by showing that: LUpMix = −Ep∗(x)[ mn∑ k Wk · log(pk(x;θk))] ≥ −Ep∗(x)[ mn∑ k Wk · log(pk(x;θ∗k))] ≥ −Ep∗(x)[ mn∑ k Wk · log(pz(x;θ∗z))] = −Ep∗(x)[log(pz(x;θ∗z)] (12) Proposition 2: At normal architecture points (Wk = 1,W−k = 0), LUpMix = L O pMix . The proof of proposition 2 is apparent. With the above propositions and under the assumption that the global optimum can be reached at the end of the optimization, we can show that the solution set, i.e. all possible normal flow architectures are the same in both LOpMix and L U pMix , and we can do optimization with proposed upper bound without violating the original definition. Furthermore, since the global optimum of the upper bound will always lead to a normal flow architecture, we will not end up in finding a mixture model with the need to do heuristic and risky binarization of weights W . 3.3 EFFICIENT ARCHITECTURE OPTIMIZATION FOR DEEP FLOW MODELS While the flow ensemble by mixed density formulation could reflect the weighted effect of all possible transformation combinations, the architecture optimization complexity grows exponentially with respect to the number of considered transformation types and the number of transformation layers. In this scenario, efficient optimization of the whole flow architecture will not be possible. It is natural to decompose the original problem into sequential optimization of few different blocks, where each block could be optimized in one time with a limited number of layers. We propose two methods to decompose the problem. Grow Method: The first approach is a straightforward greedy method which we call ”Grow”. Each time, a block is optimized until convergence, and the weights of the transformation layer are binarized. The searched transformations in this block will be directly added to the searched layer in the previous block. The architecture optimization of later blocks will be based on the existing layers and, the growth of layers stops when reaching the total number of layers constraint. Despite its simplicity, the downside of the “Grow” method is that the optimization is short-sighted. The block being optimized has no information about the architectures which could be added later, and the whole architecture is more likely to be trapped in local minimum. Block Method: To avoid the issue of getting stuck in a local minimum, we propose another method named “Block” optimization. 
Blocks B in this approach are optimized alternatively to allow each block to adjust their architectures with respect to other blocks. In fact, the first “Grow” approach is a specific case of the “Block” method, where all the blocks are initialized as identity transformations and optimized only once. Algorithm 1 Algorithm flow for AutoNF Require: Transformations: {T 1, T 2, ...Tm}, Blocks: B = {B1, B2, ...Bl}, Cost: CMix Ensure: n-layer flow model: 1: while not converged do 2: for each Bi ∈B do 3: while not convergence do 4: αBi = argminαBi D val KL[p ∗(x) || pMix(x;θ∗B,αBi)] + λ · CMix(αBi) 5: θB = argminθB D train KL [p ∗(x) || pMix(x;θB,αBi)] 6: end while 7: Fix architecture for Bi 8: end for 9: end while 3.4 COST MODEL AND ALGORITHM FLOW As discussed in section II, we are interested in modeling the training cost (forward calculation cost) and the inverse calculation cost, since each of them plays a different role based on desired applications. We use an independent experiment to model the cost of different types of flows and summarized in a table which are included in Appendix B. With the cost model, the total cost of the mixed flow ensemble could be extracted based on emphasize on different costs, e.g. if training cost is the major concern, only training cost of different flows will be calculated. This total cost CMix is then added as an regularization term into the training loss function. In our paper, gradient based method is used for optimization which is efficient in this very high dimensional search space. The architecture parameter α and the flow model parameter θ are optimized alternatively with first order approximation in (Liu et al., 2019). The final algorithm flow of our proposed AutoNF method can be summarized in Algorithm 1. 4 EXPERIMENTS 4.1 EVALUATION OF PROPOSED UPPER BOUND Setup: We use a simple example to demonstrate the necessity of doing optimization with our proposed upper bound. We use AutoNF to build a 4 layer flow model with 2 transformation options including planar flow and radial flow from (Rezende & Mohamed, 2015). We use the POWER dataset as the target and optimize with original loss (name M1) and our proposed upper bound (named M2). We use Adam optimizer for both architecture parameter and model parameter with a learning rate of 0.002. The batch size is 512 and the training iteration is 10000. The results are shown in Fig.3. For both M1 and M2, we present the weight for planar and radial flow for each layer as well as the training and validation loss during the search process. The final weight for each layer, searched architectures after binarization and the test score are shown in the right-bottom table. Analysis: Optimization with our proposed upper bound (M2) shows a concrete convergence of weight to 0 or 1 for each layer, which leads to a desired normal flow architecture, while the optimization with the original loss function (M1) ends up in a mixture model instead of a normal flow architecture, as shown in Fig.3(left). This is within in our expectation as shown in Fig.2. Moreover, although the mixture model is mostly likely to be the optimal in the original loss, the normal flow architecture after binarization however, is not an optimal model. As shown in the right-bottom table, the architecture found by M2 has a significantly better test score than M1, and this clearly supports our statement of doing optimization with our proposed upper bound. 
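Before moving on to the trade-off experiments, the following minimal sketch pulls together the two ingredients of the formulation described above: the softmax-relaxed, per-layer mixture of transformed densities (Eqs. 2 and 5-6) and the Jensen upper bound used as the search objective (Eqs. 8-9). It is only an illustration of the formulation, not the authors' implementation; the tensor shapes, the candidate-transformation interface and all function names are assumptions made for the sketch.

```python
import torch

def layer_weights(alpha):
    """Softmax relaxation of the per-layer choice among m candidates (Eq. 2).
    alpha: (n_layers, m) unconstrained architecture logits."""
    return torch.softmax(alpha, dim=-1)

def mixed_layer_log_prob(x_out, candidates, w, base_log_prob):
    """One mixture-of-distributions layer (Eqs. 5-6).

    candidates:    list of m callables, each mapping x_out to
                   (x_in, log|det J|) for its inverse-direction transform T_j,
                   both returned per sample, i.e. with shape (batch,)
    w:             (m,) softmax weights of this layer
    base_log_prob: callable returning log p_{x_in}(x_in) of the layer input
    """
    per_option = []
    for T_j in candidates:
        x_in, logdet = T_j(x_out)
        per_option.append(base_log_prob(x_in) + logdet)        # log p_{T_j}(x_out)
    stacked = torch.stack(per_option)                           # (m, batch)
    # log sum_j w_j p_{T_j}(x), evaluated stably in log space
    return torch.logsumexp(torch.log(w).unsqueeze(-1) + stacked, dim=0)

def original_loss(W, log_probs):
    """L^O of Eq. (8): -E[ log sum_k W_k p_k(x) ] over a batch.
    W: (K,) weights over the K candidate architectures, summing to one.
    log_probs: (K, batch) per-architecture log-likelihoods log p_k(x)."""
    mix = torch.logsumexp(torch.log(W).unsqueeze(-1) + log_probs, dim=0)
    return -mix.mean()

def upper_bound_loss(W, log_probs):
    """L^U of Eq. (9): -E[ sum_k W_k log p_k(x) ], the Jensen upper bound
    whose global minimum is attained at a one-hot W (a normal architecture)."""
    return -(W.unsqueeze(-1) * log_probs).sum(dim=0).mean()

# Toy check of the bound with hypothetical numbers: L^U >= L^O always holds.
W = torch.tensor([0.3, 0.7])
log_probs = torch.randn(2, 8)
assert upper_bound_loss(W, log_probs) >= original_loss(W, log_probs)
```

A full AutoNF layer would additionally feed the mixture density of layer i in as the base density of layer i+1, and add the cost regularizer λ · CMix to the architecture loss; both are omitted here for brevity.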
4.2 SEARCH FOR FLOW MODELS WITH BEST PERFORMANCE-COST TRADE-OFF Transformation Options and Reference Designs: To evaluate our AutoNF framework, we set up our experiments with four types of non-linear flows and one linear flow. In the autoregressive family, we choose the affine autoregressive flow (Papamakarios et al., 2017) and the rational quadratic autoregressive flow (Durkan et al., 2019). The affine autoregressive flow has limited expressive power but a lower computation cost, while the latter has state-of-the-art performance in the autoregressive family at a higher cost. The affine coupling layer (Dinh et al., 2015) and the rational quadratic coupling layer (Durkan et al., 2019) are selected from the coupling layer family. For the linear transformation, we combine a reverse permutation and an LU linear layer together as a single layer. Random permutation (Durkan et al., 2019; Oliva et al., 2018) is not used since it is difficult to reproduce in architecture optimization. Every non-linear transformation layer is paired with a linear transformation layer as suggested by Durkan et al. (2019) to form a final transformation option, i.e., a layer in our experiment contains a reverse permutation, an LU-linear layer and one of the non-linear transformation layers listed above. We use the rational quadratic flow family, including the rational quadratic autoregressive flow (RQ-AF) and the rational quadratic coupling layer (RQ-C) in (Durkan et al., 2019), which have the top-2 performance, as the baseline. For a fair comparison, we use RQ-AF as the baseline when emphasizing forward cost since it has better performance, and use RQ-C as the baseline when emphasizing inverse cost since RQ-C has a significantly lower inverse cost. Evaluation Metric and Datasets: Evaluating the performance-cost trade-off is an open question in NF; we propose a new metric to address the difficulty of using negative log-likelihood (NLL). NLL is a common measurement for density estimation (lower is better); however, the order of magnitude of NLL differs across datasets, so a percentage difference is not a suitable measure of how much better one model is than another. In this paper, we propose to utilize density and coverage (Naeem et al., 2020) to evaluate the performance of NF models. Density and coverage are recently proposed metrics for evaluating the sample quality of generative models. The density metric reflects the fidelity of the model and is consistent with the NLL metric. Across different datasets, density and coverage are of the same order of magnitude, which allows evaluation of architectures across datasets. In our experiments, 10000 samples are drawn from the trained flow models and compared with 10000 samples from the test data. The results of three independent runs are averaged as the final reported results. To evaluate the performance-cost trade-off, we define a figure of merit (FOM) as FOM = cost reduction% + density drop% compared to reference SOTA designs (a minimal illustrative sketch of this figure of merit is given below). In principle, the weights of the two terms can be manually adjusted to reflect their importance. For demonstration purposes, we use the equally weighted summation to report the results. The performance of the flow models is evaluated with density estimation on the UCI (Dua & Graff, 2017) and BSDS300 (Martin et al., 2001) datasets. Analysis: The architecture search results are reported in Table 1, which includes the test NLL, density, coverage, cost and corresponding FOM. Table 1 shows that our AutoNF clearly helps to find architectures that have a better performance-cost trade-off.
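As referenced above, here is a minimal sketch of the figure of merit used to report the trade-off numbers that follow. It is a literal transcription of the definition in the text (equal weighting of the two percentage terms, measured against a reference design); the function name and argument order are assumptions, and the sign convention of the density term follows the text as written, so it may need to be flipped depending on how the "drop" is reported.

```python
def figure_of_merit(cost, density, ref_cost, ref_density):
    """FOM = cost reduction % + density drop %, relative to a reference
    (SOTA) design, with the equal weighting used in the paper.  The two
    terms are returned as well so they can be reweighted if desired."""
    cost_reduction = 100.0 * (ref_cost - cost) / ref_cost          # > 0 when the searched model is cheaper
    density_drop = 100.0 * (ref_density - density) / ref_density   # > 0 when fidelity is lower than the reference
    return cost_reduction, density_drop, cost_reduction + density_drop
```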
Our AutoNF can reach up to 3.66X cost reduction and up to 75.2% improvement in FOM compared with SOTA literature results. Across all five datasets, AutoNF demonstrates an average improvement of 58.67% in FOM with emphasis on forward cost and an average improvement of 52.57% in FOM with emphasis on inverse cost. 5 DISCUSSION A normalizing flow is a highly parameterized module, and designing a flow model and using it in an application requires a lot of hands-on experience and domain knowledge. In this paper, we show that the AutoNF framework is very effective in balancing performance-cost trade-offs when building complex flow models. Moreover, although not demonstrated in this paper, the framework could also be used to help decide hyperparameters in complex flow models, e.g., the hidden features and number of bins in the SOTA coupling layer (Durkan et al., 2019). In addition, the proposed optimization method with the upper bound can easily be extended to other suitable probabilistic kernels; one example is to identify the best parameterized distribution(s) within a mixture model. We believe our framework will be very useful in many machine learning applications where normalizing flows are needed.
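For completeness, the sketch below mirrors the block-wise alternating search loop summarized in Algorithm 1: the architecture logits alpha and flow parameters theta of one block are updated alternately on the validation and training objectives, and each block's choice is binarized before moving on. The step functions and the block interface are assumptions made for illustration; the actual updates in the paper use the first-order approximation of Liu et al. (2019).

```python
def autonf_block_search(blocks, outer_rounds, inner_steps, arch_step, model_step):
    """Block-wise alternating architecture search (cf. Algorithm 1), as a sketch.

    blocks:      iterable of block objects, each holding architecture logits
                 alpha and transformation parameters theta
    arch_step:   callable(block) -> None, one gradient step on alpha using the
                 validation loss  D_KL^val + lambda * C_Mix
    model_step:  callable(block) -> None, one gradient step on theta using the
                 training loss    D_KL^train
    """
    for _ in range(outer_rounds):            # outer "while not converged"
        for block in blocks:
            for _ in range(inner_steps):     # inner convergence loop for this block
                arch_step(block)             # update architecture weights alpha
                model_step(block)            # update flow parameters theta
            block.fix_architecture()         # binarize and freeze this block's choice
```

Under this reading, the "Grow" variant corresponds to a single pass (outer_rounds = 1) in which each block starts from identity transformations and is never revisited.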
1. What is the focus and contribution of the paper regarding flow architectures? 2. What are the strengths of the proposed approach, particularly in its motivation and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its experiment section and limitations? 4. Do you have any concerns about the generalizability of the approach to other types of flows or layers? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This work provides a novel approach for searching flow architectures. Compared to the standard approaches used to find the best deep architecture in standard neural networks, this problem is more challenging due to the need for an invertible transform and the requirement that the determinant of the Jacobian be easy to calculate. To solve the problem, the authors propose to apply weighting to candidate transformations. In order to enforce the invertibility of the model, the authors suggest using a mixing-distribution approach instead of mixing the base transformations. They formulate the problem of learning the best weights and show that the optimal solution for soft weights is not optimal for the binarised versions. Therefore, they propose to optimize an upper bound of the proposed loss function instead. Further, they show how to deal with the problem for a larger number of layers. Some experiments are also performed to show the quality of the approach with respect to the baseline, which is expert-based selection. Review Strengths The problem considered in this work is novel and important for the community that works with flows. The problem is significantly more challenging than standard architecture search. The proposed solution is well-motivated. The theoretical claims seem to be correct. The flow of the paper is easy to follow and each step of the proposed approach is justified. The novelty and contribution of the paper are high in my opinion. The idea of mixing the probabilities and applying an upper bound instead of direct optimisation is non-trivial. Weaknesses The empirical evaluation of the proposed method should be more extensive. The selection of the manual flow baseline is a bit tricky to me. How far is the manual approach from the optimal combination of transformations on the validation set? It would also be beneficial to create a baseline where the transformations are selected randomly. Such a baseline would show what the gain in NLL is with respect to the random approach. I would also suggest taking into consideration an evolutionary approach that optimizes the binary vectors behind the selection process as a reference method. My second concern is about the generalisation of the proposed approach to other types of flows. The approach seems to be generic and to scale to any possible transformations, but the experimental evaluation is mainly focused on autoregressive flows. Is it possible to adapt this approach to the various types of layers that represent the dynamics in CNF? It would also be beneficial to see the quality of this approach for various CNF layers in the experimental part. The third concern is about the limitation of the approach due to the fact that the complexity grows exponentially, so some decomposition methods are essential to apply the architecture search effectively. For this part, it would be interesting to know how much we lose during the decomposition process.
ICLR
Title AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation Abstract Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. N/A Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. 1 INTRODUCTION Normalizing flow (NF) is a probabilistic modeling tool that has been widely used in density estimation, generative models, and random sampling. Various flow models have been proposed in recent years to improve their expressive power. Discrete flow models are either built based on elementalwise monotonical functions, named autoregressive flow or coupling layers (Papamakarios et al., 2017), or built with transformations where the determinant of the flow can be easily calculated with matrix determinant lemma (Rezende & Mohamed, 2015). In the continuous flow family, the models are constructed by neural ODE (Grathwohl et al., 2019). Despite the variety of flow models, there’s yet no perfect flow concerning the expressive power and the computation cost. The flow models with higher expressive power usually have higher computational costs in either forward and inverse pass. In contrast, flows which are fast to compute are not able to model rich distributions and are limited to simple applications. For instance, autoregressive flows (Papamakarios et al., 2017) are universal probability approximators but are D times slower to invert than forward calculation, where D is the dimension of the modeled random variable x (Papamakarios et al., 2021). Flows based on coupling layers (Dinh et al., 2015; 2017; Kingma & Dhariwal, 2018) have an analytic one-pass inverse but are less expressive than their autoregressive counterparts. 
Other highly expressive NF models (Rezende & Mohamed, 2015; Behrmann et al., 2019) cannot provide an analytic inverses and relies on numerical optimizations. For different applications, the optimal flow model can be drastically different, especially if the computation cost is taken into consideration. For generative models (Dinh et al., 2015; Kingma & Dhariwal, 2018), flows with the fast forward pass are preferable since the forward transformations need to be applied to every sample from the base distribution. For density estimation (Papamakarios et al., 2017; Rippel & Adams, 2013), flows with cheap inverse will prevail. For applications where flow is utilized as a co-trained kernel (Mazoure et al., 2020), the computation cost and performance trade-off are more important, i.e., having a fast model with relatively good performance. However, in the current body of work, the architecture designs of the flow models are all based on manual configuration and tuning. To this date, there is a lack of a systematic way that could automatically construct an optimal flow architecture with a preferred cost. In this paper, we propose AutoNF, an automated method for normalizing flow architecture optimization. AutoNF has a better performance-cost trade-off than hand-tuned SOTA flow models based on a given set of transformations. Our approach employs a mixture distribution formulation that can search a large design space of different transformations while still satisfying the invertibility requirement of normalizing flow. The proposed mixture NF is optimized via approximate upper bound which provides a better optimization landscape for finding the desired flow architecture. Besides, to deal with exponentially growing optimization complexity, we introduce a block-wise optimization method to enable efficient optimization of deep flow models. 2 RELATED WORK Normalizing Flows: Various normalizing flow models have been proposed since the first concept in (Tabak & Turner, 2013). Current flow models can be classified into two categories: finite flows based on layer structure, and continuous flow based on neural ODE (Grathwohl et al., 2019). The finite flow family includes flows based on elemental-wise transformation (Papamakarios et al., 2017; Kingma & Dhariwal, 2018) and flows whose transformations are restricted to be contractive (Behrmann et al., 2019). In elemental-wise transformation flows, autoregressive flow and coupling layers are two major flavors and extensive work has been proposed to improve the expressive power of both flow models. In Huang et al. (2018), the dimension-wise scalar transformation is implemented by a sigmoid neural network, which increases the expressive power at the cost of being not analytically invertible. In Durkan et al. (2019), piecewise splines are used as drop-in replacement of affine or additive transformations (Dinh et al., 2015; 2017) and is the current SOTA flow model. Consequently many recent research efforts have been devoted to closing the gap of expressive power, albeit at the cost of more complex and expensive transformations. Moreover, there has been no quantitative trade-off analysis between the performance and cost among different flows. Neural Architecture Search: Many algorithms have been proposed or applied for neural architecture search. 
For instance, reinforcement learning (Zoph & Le, 2017), genetic algorithm (Real et al., 2017; Suganuma et al., 2018; Liu et al., 2018), Monte Carlo tree search (Negrinho & Gordon, 2017) or Bayesian optimization (Kandasamy et al., 2018). However, these methods all face the challenge of optimizing on a large discrete space and can take thousand of GPU days to find a good architecture. To address this issue, DARTS (Liu et al., 2019) proposes to relax the search space from discrete to continuous and allows efficient differentiable architecture search with gradient method which could reduce the search time to a single GPU day while still producing the SOTA architecture. However, all current NAS methods focus on optimizing traditional neural network structures (CNN, RNN) and there has yet been any implementation on normalizing flow. Necessity for the Trade-off Between Performance and Cost: Despite various transformations proposed in the literature, there is no perfect transformation with strong expressive power and low computational cost. Autoregressive flows have better expressive power, but the inverse computation cost grows linearly with data dimension. Coupling layers’ inverse calculation is as fast as the forward pass, but their expressive power is generally worse than autoregressive flow with the same element-wise transformation. Even in the same autoregressive flow or coupling layer family, flows with different element-wise transformations have different performance and computation costs. For instance, additive or affine coupling layers (Dinh et al., 2017; 2015) have very fast forward and inverse calculation with limited expressive power while the flow in (Durkan et al., 2019) are highly expressive but are more demanding on computation. In most applications, it is necessary to find the best performance while minimizing at least one specific component of the cost. Unfortunately, the current design of flow models is empirical and therefore cannot ensure the optimal trade-offs. 3 METHOD In this work, we aim to tackle the challenge of finding an optimal flow model for a given task via an automated architecture search algorithm. Assumptions: In the remaining part of this paper, without losing generality, we assume that the transformation is properly modeled such that during the training process, only forward computation is needed. Under this assumption, when the flow model is used for density modeling (Durkan et al., 2019), the forward calculation is the dominant computation. When the flow model is used for random sampling (Kingma & Dhariwal, 2018), the inverse calculation is computationally intensive. When the flow model is utilized as a module and trained together with other components, e.g., policy network in maximum entropy learning (Mazoure et al., 2020), the training cost of the flow model is an important consideration. Problem Definition: Given a transformation set with m options {T 1, T 2, ...Tm}, the goal is to construct an optimal flow model with n layers of transformations from the set. The flow model pNF (x;θ) = pT1T2...Tn(x;θ) should minimize the KL divergence between the target distribution p∗(x) and itself while minimizing its computational cost CNF . Here, θ are the parameters of the transformation in the flow model. In this paper, we use the forward KL divergence as our target loss function (Papamakarios et al., 2021): θ∗ =argmin θ {DKL[p∗(x) || pT1T2...Tn(x;θ)] + λ · CNF } s.t. 
Ti ∈ {T 1, T 2, ...Tm} (1) While λ is a tuning factor capturing the relative importance of the performance-cost trade-off. Finding this optimal flow model is a discrete optimization problem with exponential complexity. To enable efficient architecture optimization, we use proposed method of relaxing the discrete search space to continuous space as suggested in Liu et al. (2019). 3.1 MIXED FLOW ENSEMBLE For the ith transformation layer with m options, we introduce a corresponding weight w j i for each option T j which reflects how likely the transformation will be selected. The weight is parameterized by a vector α and made continuous via softmax: wji = exp(αji )∑m j=1 exp(α j i ) (2) By applying this parameterization for each transformation layer, we can construct a mixed flow ensemble pMix(x;θ,α), where each layer in this mixed model reflects a weighted combination of the effect of all possible transformations. In this case, the architecture optimization problem is reduced to learning the weight vector for each layer and, at the end of the optimization process, weights will be binarized and the transformation with the highest weight in one layer will be selected as the final transformation. The mixed flow ensemble thus degrades to a normal flow model. The whole procedure is illustrated in Fig. 1 (left). As adopted in (Liu et al., 2019), training of the flow ensemble becomes joint optimization of the architecture parameterα and the model parameter θ over the training and validation datasets, which could be written as the following bi-level optimization problem: α∗ =argmin α DvalKL[p ∗(x) || pMix(x;θ∗,α)] + λ · CMix(α) s.t. θ∗ = argmin θ DtrainKL [p ∗(x) || pMix(x;θ,α)], ∀ T ∈ pMix, T ∈ {T 1, T 2, ...Tm}, (3) While the optimization problem is well defined, the key challenge is to construct the flow ensemble within the normalizing flow framework. This is different from traditional neural architecture search, which can mix various operations with no additional issue. Normalizing flow has its unique requirement for the invertibility of transformations and a preferred simple Jacobian calculation, which requires careful handling. The mixed flow ensemble pMix(x;θ∗,α) must satisfy two requirements. First, it must be a legal density function such that it can be optimized by the KL divergence formulation. Second, each transformation layer in pMix(x;θ∗,α) should represent a weighted combination of all possible transformations. Consider the ith layer in the mixed flow ensemble with input random variable xin and output random variable xout, and pxin(xin) and pxout(xout) are their corresponding density functions. This layer has m transformation options in {T 1i , T 2i , ...Tmi } and w j i is the corresponding weight for each transformation. As discussed in Assumption, we assume all transformations directly model the inverse transformation, i.e. xin = T j i (xout). Two approaches can be used to construct the mixed flow ensemble. Construction by Mixed Transformations: The straight forward way of building the ith mix flow ensemble layer is to mix all transformations by weighted summation, as shown in Fig. 1 (right-top). The final weighted transformation for this layer can be thus represented as: Ti(xin) = m∑ j=1 wji · T j i (xout) (4) There are two drawbacks of this formulation despite its simplicity. First, definition of normalizing flow requires the mixed transformation Ti be invertible and differentiable in order to ensure pxout(xout) legal density function. 
However, this invertibility is not guaranteed even if all candidate transformations are invertible. Second, even if the mixed transformation is invertible, there is no easy way to calculate the Jacobian determinant of this weighted summation of transformations. Meeting the requirement of invertibility and ease of calculating Jacobian determinant brings strict restrictions on the candidate transformations and prevents the optimization of flow architectures on a wider search space. As a result, the construction of the mixed flow ensemble by weighted summation of transformations is not adopted in this paper. Construction by Mixed Distributions: An alternating way is to build the mixed flow ensemble by mixing distributions. For a given transformation T ji in this ith layer, applying the transformation to the input random variable will result in a new distribution: pT ji (xout) = pxin(T j i (xout)) · | detJT ji (xout)| (5) By applying this to every transformation option in {T 1i , T 2i , ...Tmi }, we can obtain k different distributions, and it is possible to mix all the density functions together by their weighted summation, to get a mixture model as shown in eq.(6). pTi(xout) = m∑ j=1 wji · pT ji (xout) (6) An illustration of this process is shown in Fig. 1 (right-bottom). Different from the previous approach, the mixture model has a legal density function as: pTi(xout). By the definition of normalizing flow, we can assume that there exists an invertible and differentiable transformation Ti, which transforms xin to xout, although the transformation itself can not be explicitly written out. For the next (i + 1)th layer, the density of the mixture model will be used as the input density function pxin(xin) as in the previous layer. By applying this formulation for n layers, the final mixed flow ensemble can be written as: pMix(x;θ,a) = mn∑ k=1 Wk · pT1T2...Tn(x,θ) = mn∑ k=1 Wk · pi(x;θi) where each Wk = n∏ i=1 wi and mn∑ k Wk = 1 (7) Each wi is defined in eq.(2) and we use pk(x;θk) to represent a “normal flow architecture” with n transformation layers. Clearly, the final mixed flow ensemble is a legal density function which is in fact, a weighted summation of all possible flow models built with n layers of transformations. 3.2 OPTIMIZATION WITH APPROXIMATED UPPER BOUND Optimizing the forward KL divergence between the target distribution and the mixed flow ensemble can be written as: LOpMix = DKL [p ∗(x) || pMix(x;θ,α)] = −Ep∗(x)[log( mn∑ k=1 Wk · pk(x;θk))] (8) We will demonstrate that direct optimization of this original loss can lead to underside mixture models. In the whole search space of the flow ensemble, we are interested only in ”normal flow architectures” points, i.e. the points where the weight of one architecture is 1 and others are all 0. However, it can be easily proven that the global optimum of LOpMix may not be the desired normal flow architecture (the red points in Fig. 2). Instead, optimization is very likely to end up in a mixture model that is globally optimal with similar weight for each possible flow architecture (the green point in Fig. 2). In this case, we will encounter difficulty when extracting a normal flow architecture with the search result. A common practice in differentiable architecture search (Liu et al., 2019) is to binarize the weights and select corresponding transformations. However, there is no guarantee that the binarized architecture will have a lower loss, and finding this nearest binarization may lead to performance drop. 
As a result, optimization with the original loss function is not suitable, and could be risky. In this paper, we propose to optimize an upper bound of the original loss function to provide a better global optimum for the search of best normal flow architectures. Our method utilizes Jensen’s inequality log( ∑ W · x) ≥ ∑ W · log(x) as follows, since we have ∑ W = 1 and the log function is concave, we can obtain an upper bound of the KL divergence given as: LOpMix = −Ep∗(x)[log( mn∑ k Wk · pk(x;θk)] ≤ LUpMix = −Ep∗(x)[ mn∑ k Wk · log(pk(x;θk))] (9) The benefit of optimizing the upper bound can be summarized as follows: Proposition 1: The global minimum point of LUpMix is defined by a normal flow architecture. Proof: Suppose each flow model pk(x;θk) has an optimal parameter θ∗k that minimizes the KL divergence between p∗(x) and it: −Ep∗(x)[log(pk(x;θ∗k)] ≤ −Ep∗(x)[log(pk(x;θk)] (10) There also exists a flow architecture (pz(x;θ∗z)) that has the minimal KL divergence: −Ep∗(x)[log(pz(x;θ∗z)] ≤ −Ep∗(x)[log(pk(x;θ∗k)], ∀k ∈ mn (11) We can then prove the proposition by showing that: LUpMix = −Ep∗(x)[ mn∑ k Wk · log(pk(x;θk))] ≥ −Ep∗(x)[ mn∑ k Wk · log(pk(x;θ∗k))] ≥ −Ep∗(x)[ mn∑ k Wk · log(pz(x;θ∗z))] = −Ep∗(x)[log(pz(x;θ∗z)] (12) Proposition 2: At normal architecture points (Wk = 1,W−k = 0), LUpMix = L O pMix . The proof of proposition 2 is apparent. With the above propositions and under the assumption that the global optimum can be reached at the end of the optimization, we can show that the solution set, i.e. all possible normal flow architectures are the same in both LOpMix and L U pMix , and we can do optimization with proposed upper bound without violating the original definition. Furthermore, since the global optimum of the upper bound will always lead to a normal flow architecture, we will not end up in finding a mixture model with the need to do heuristic and risky binarization of weights W . 3.3 EFFICIENT ARCHITECTURE OPTIMIZATION FOR DEEP FLOW MODELS While the flow ensemble by mixed density formulation could reflect the weighted effect of all possible transformation combinations, the architecture optimization complexity grows exponentially with respect to the number of considered transformation types and the number of transformation layers. In this scenario, efficient optimization of the whole flow architecture will not be possible. It is natural to decompose the original problem into sequential optimization of few different blocks, where each block could be optimized in one time with a limited number of layers. We propose two methods to decompose the problem. Grow Method: The first approach is a straightforward greedy method which we call ”Grow”. Each time, a block is optimized until convergence, and the weights of the transformation layer are binarized. The searched transformations in this block will be directly added to the searched layer in the previous block. The architecture optimization of later blocks will be based on the existing layers and, the growth of layers stops when reaching the total number of layers constraint. Despite its simplicity, the downside of the “Grow” method is that the optimization is short-sighted. The block being optimized has no information about the architectures which could be added later, and the whole architecture is more likely to be trapped in local minimum. Block Method: To avoid the issue of getting stuck in a local minimum, we propose another method named “Block” optimization. 
Blocks B in this approach are optimized alternatively to allow each block to adjust their architectures with respect to other blocks. In fact, the first “Grow” approach is a specific case of the “Block” method, where all the blocks are initialized as identity transformations and optimized only once. Algorithm 1 Algorithm flow for AutoNF Require: Transformations: {T 1, T 2, ...Tm}, Blocks: B = {B1, B2, ...Bl}, Cost: CMix Ensure: n-layer flow model: 1: while not converged do 2: for each Bi ∈B do 3: while not convergence do 4: αBi = argminαBi D val KL[p ∗(x) || pMix(x;θ∗B,αBi)] + λ · CMix(αBi) 5: θB = argminθB D train KL [p ∗(x) || pMix(x;θB,αBi)] 6: end while 7: Fix architecture for Bi 8: end for 9: end while 3.4 COST MODEL AND ALGORITHM FLOW As discussed in section II, we are interested in modeling the training cost (forward calculation cost) and the inverse calculation cost, since each of them plays a different role based on desired applications. We use an independent experiment to model the cost of different types of flows and summarized in a table which are included in Appendix B. With the cost model, the total cost of the mixed flow ensemble could be extracted based on emphasize on different costs, e.g. if training cost is the major concern, only training cost of different flows will be calculated. This total cost CMix is then added as an regularization term into the training loss function. In our paper, gradient based method is used for optimization which is efficient in this very high dimensional search space. The architecture parameter α and the flow model parameter θ are optimized alternatively with first order approximation in (Liu et al., 2019). The final algorithm flow of our proposed AutoNF method can be summarized in Algorithm 1. 4 EXPERIMENTS 4.1 EVALUATION OF PROPOSED UPPER BOUND Setup: We use a simple example to demonstrate the necessity of doing optimization with our proposed upper bound. We use AutoNF to build a 4 layer flow model with 2 transformation options including planar flow and radial flow from (Rezende & Mohamed, 2015). We use the POWER dataset as the target and optimize with original loss (name M1) and our proposed upper bound (named M2). We use Adam optimizer for both architecture parameter and model parameter with a learning rate of 0.002. The batch size is 512 and the training iteration is 10000. The results are shown in Fig.3. For both M1 and M2, we present the weight for planar and radial flow for each layer as well as the training and validation loss during the search process. The final weight for each layer, searched architectures after binarization and the test score are shown in the right-bottom table. Analysis: Optimization with our proposed upper bound (M2) shows a concrete convergence of weight to 0 or 1 for each layer, which leads to a desired normal flow architecture, while the optimization with the original loss function (M1) ends up in a mixture model instead of a normal flow architecture, as shown in Fig.3(left). This is within in our expectation as shown in Fig.2. Moreover, although the mixture model is mostly likely to be the optimal in the original loss, the normal flow architecture after binarization however, is not an optimal model. As shown in the right-bottom table, the architecture found by M2 has a significantly better test score than M1, and this clearly supports our statement of doing optimization with our proposed upper bound. 
4.2 SEARCH FOR FLOW MODELS WITH BEST PERFORMANCE COST TRADE-OFF Transformation Options and Reference Designs: To evaluate our AutoNF framework, we setup our experiments with four types of non-linear flows and one linear flow. In autoregressive family, we choose affine autoregressive flow (Papamakarios et al., 2017) and rational quadratic autoregressive flow (Durkan et al., 2019). Affine autoregressive flow has limited expressive power but the computation cost is lower, while the later has the state of art performance in autoregressive family with higher cost. Affine coupling layer (Dinh et al., 2015) and rational quadratic coupling layer (Durkan et al., 2019) are selected from coupling layer family. For linear transformation, we combine a reverse permutation and an LU linear layer together as a single layer. Random permutation (Durkan et al., 2019; Oliva et al., 2018) is not used since it is difficult to reproduce in architecture optimization. Every non-linear transformation layer is paired with a linear transformation layer suggested by Durkan et al. (2019) as a final transformation option, i.e., a layer in our experiment contains a reverse permutation, an LU-linear layer and one of the non-linear transformation layer listed above. We use the rational quadratic flows family, including rational quadratic autoregressive flow (RQ-AF) and Rational quadratic coupling layer (RQ-C) in (Durkan et al., 2019) which have top 2 performance as the baseline. For fair comparison, we use RQ-AF as the baseline when emphasizing forward cost since it has better performance and use RQ-C as the baseline when emphasizing inverse cost since RQ-C has significantly lower inverse cost. Evaluation Metric and Datasets: Evaluating the performance-cost trade-off is an open question in NF, we propose to use a new metric to address the difficulty of negative log-likelyhood (NLL). NLL is a common measurement for density estimation (lower, the better), however, the order of magnitude of NLL is different across different datasets and it is not suitable to use percentage difference to measure how a model is exactly better than another. In this paper, We proposed to utilize density and coverage (Naeem et al., 2020) to evaluate the performance of NF models. Density and coverage are recently proposed method to evaluate the sample quality of generative models. The density metric reflects the fidelity of the model and is consistent with NLL metric. Across different datasets, density and coverage are at the same order of magnitude and allows evaluation of architecture across datasets. In our experiments, 10000 samples are drawn from the trained flow models and compare with 10000 samples from the test data. The results of three independent runs are averaged as the final reported results. To evaluate the performance-cost trade-off, we define a figure of merit (FOM) as FOM = cost reduction% + density drop% compared to reference SOTA designs. In principle, the weight of the two terms can be manually adjusted to reflect the importance. For demonstration purpose, we use the equally weighted summation to report the results. The performance of the flow models are evaluated with density estimation for UCI (Dua & Graff, 2017) and BSDS300 (Martin et al., 2001) datasets. Analysis: The architecture search results are reported in Table.1 which includes the test NLL, density, coverage, cost and corresponding FOM. Table.1 shows that our AutoNF clearly helps to find architectures that have better performance-cost trade-off. 
Our AutoNF achieves up to 3.66X cost reduction and up to 75.2% improvement in FOM compared with SOTA literature results. Across all five datasets, AutoNF demonstrates an average FOM improvement of 58.67% when emphasizing forward cost and an average FOM improvement of 52.57% when emphasizing inverse cost. 5 DISCUSSION A normalizing flow is a highly parameterized module, and designing a flow model and using it in an application requires a lot of hands-on experience and domain knowledge. In this paper, we show that the AutoNF framework is very effective in balancing performance-cost trade-offs when building complex flow models. Moreover, although not demonstrated in this paper, the framework could also be used to help choose hyperparameters in complex flow models, e.g., the number of hidden features and the number of bins in the SOTA coupling layer (Durkan et al., 2019). In addition, the proposed optimization method with the upper bound can easily be extended to other suitable probabilistic kernels; one example is identifying the best parameterized distribution(s) within a mixture model. We believe our framework will be useful in many machine learning applications where normalizing flows are needed.
1. What is the focus and contribution of the paper on normalizing flow architecture optimization? 2. What are the strengths of the proposed approach, particularly in terms of its adaptation from Liu et al. (2019)? 3. What are the weaknesses of the paper, especially regarding its experimental evaluation and limitations in the search space? 4. How does the reviewer assess the novelty of the paper's contributions compared to prior works?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors proposed to adapt a differentiable architecture search formulation (Liu et al., 2019), based on learned weighting of an ensemble of modules, to automated search for normalizing flow architectures. The authors made several adaptations to the original approach for the normalizing flow problem, due to the invertibility constraints that prevent direct linear summation of different transform operations. Furthermore, the authors proposed to optimize the full network using an approximate upper bound of the KL divergence, instead of direct optimization. The authors proposed two methods to decompose the optimization problem: the grow method, which is more straightforward and greedy, and the block method, which alternately adjusts each block. The authors experimentally compared their proposed method with manually specified architectures across various datasets, including POWER, GAS, HEPMASS, MINIBOONE and BSDS300. The results seem mixed, as the searched model outperforms the manual model in some contexts but not others. Review Pros: Overall the paper is well written and easy to follow. The main idea is to adapt the differentiable NAS formulation to normalizing flow architecture optimization, and the authors made interesting theoretical contributions in reformulating the mixture-of-experts setup for normalizing flow models, as well as in proposing novel optimization strategies. Overall the proposed method is sound and coherent. Cons: The experimental evaluation seems relatively weak. Though the optimized architectures seem to consistently have lower train, forward and inverse costs, the test performance is mixed, and in many scenarios worse than the manually designed architecture. The search space that the authors experimented with is extremely limited, covering only planar flows and radial flows, with the weights between the two transformations being the only architectural hyperparameter learned; it does not account for other hyperparameters such as the number of stacked flows or the network complexity (number of feature layers, etc.) of each network, which gives it little generalizability to more modern and useful architectures (e.g., RealNVP, GLOW, FFJORD). On the novelty side, though the authors made problem-specific adaptations of the differentiable architecture search algorithm for normalizing flows, the main idea is very much adapted from Liu et al. 2019 and is somewhat marginally novel.
ICLR
Title AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation Abstract Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. N/A Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. 1 INTRODUCTION Normalizing flow (NF) is a probabilistic modeling tool that has been widely used in density estimation, generative models, and random sampling. Various flow models have been proposed in recent years to improve their expressive power. Discrete flow models are either built based on elementalwise monotonical functions, named autoregressive flow or coupling layers (Papamakarios et al., 2017), or built with transformations where the determinant of the flow can be easily calculated with matrix determinant lemma (Rezende & Mohamed, 2015). In the continuous flow family, the models are constructed by neural ODE (Grathwohl et al., 2019). Despite the variety of flow models, there’s yet no perfect flow concerning the expressive power and the computation cost. The flow models with higher expressive power usually have higher computational costs in either forward and inverse pass. In contrast, flows which are fast to compute are not able to model rich distributions and are limited to simple applications. For instance, autoregressive flows (Papamakarios et al., 2017) are universal probability approximators but are D times slower to invert than forward calculation, where D is the dimension of the modeled random variable x (Papamakarios et al., 2021). Flows based on coupling layers (Dinh et al., 2015; 2017; Kingma & Dhariwal, 2018) have an analytic one-pass inverse but are less expressive than their autoregressive counterparts. 
Other highly expressive NF models (Rezende & Mohamed, 2015; Behrmann et al., 2019) cannot provide an analytic inverse and rely on numerical optimization. For different applications, the optimal flow model can be drastically different, especially if the computation cost is taken into consideration. For generative models (Dinh et al., 2015; Kingma & Dhariwal, 2018), flows with a fast forward pass are preferable since the forward transformations need to be applied to every sample from the base distribution. For density estimation (Papamakarios et al., 2017; Rippel & Adams, 2013), flows with a cheap inverse prevail. For applications where the flow is utilized as a co-trained kernel (Mazoure et al., 2020), the computation cost and performance trade-off is more important, i.e., having a fast model with relatively good performance. However, in the current body of work, the architecture designs of flow models are all based on manual configuration and tuning. To date, there is no systematic way to automatically construct an optimal flow architecture with a preferred cost. In this paper, we propose AutoNF, an automated method for normalizing flow architecture optimization. AutoNF achieves a better performance-cost trade-off than hand-tuned SOTA flow models built from a given set of transformations. Our approach employs a mixture distribution formulation that can search a large design space of different transformations while still satisfying the invertibility requirement of normalizing flows. The proposed mixture NF is optimized via an approximate upper bound, which provides a better optimization landscape for finding the desired flow architecture. Moreover, to deal with the exponentially growing optimization complexity, we introduce a block-wise optimization method to enable efficient optimization of deep flow models. 2 RELATED WORK Normalizing Flows: Various normalizing flow models have been proposed since the concept was first introduced in (Tabak & Turner, 2013). Current flow models can be classified into two categories: finite flows based on layer structures, and continuous flows based on neural ODEs (Grathwohl et al., 2019). The finite flow family includes flows based on element-wise transformations (Papamakarios et al., 2017; Kingma & Dhariwal, 2018) and flows whose transformations are restricted to be contractive (Behrmann et al., 2019). Among element-wise transformation flows, autoregressive flows and coupling layers are the two major flavors, and extensive work has been devoted to improving the expressive power of both. In Huang et al. (2018), the dimension-wise scalar transformation is implemented by a sigmoid neural network, which increases the expressive power at the cost of not being analytically invertible. In Durkan et al. (2019), piecewise splines are used as drop-in replacements for affine or additive transformations (Dinh et al., 2015; 2017), yielding the current SOTA flow model. Consequently, many recent research efforts have been devoted to closing the gap in expressive power, albeit at the cost of more complex and expensive transformations. Moreover, there has been no quantitative trade-off analysis between performance and cost among different flows. Neural Architecture Search: Many algorithms have been proposed or applied for neural architecture search.
For instance, reinforcement learning (Zoph & Le, 2017), genetic algorithms (Real et al., 2017; Suganuma et al., 2018; Liu et al., 2018), Monte Carlo tree search (Negrinho & Gordon, 2017), and Bayesian optimization (Kandasamy et al., 2018) have all been applied. However, these methods all face the challenge of optimizing over a large discrete space and can take thousands of GPU days to find a good architecture. To address this issue, DARTS (Liu et al., 2019) proposes to relax the search space from discrete to continuous, allowing efficient differentiable architecture search with gradient methods, which reduces the search time to a single GPU day while still producing SOTA architectures. However, all current NAS methods focus on optimizing traditional neural network structures (CNNs, RNNs), and there has not yet been any implementation for normalizing flows. Necessity for the Trade-off Between Performance and Cost: Despite the various transformations proposed in the literature, there is no perfect transformation with strong expressive power and low computational cost. Autoregressive flows have better expressive power, but their inverse computation cost grows linearly with the data dimension. Coupling layers' inverse calculation is as fast as the forward pass, but their expressive power is generally worse than that of an autoregressive flow with the same element-wise transformation. Even within the same autoregressive flow or coupling layer family, flows with different element-wise transformations have different performance and computation costs. For instance, additive or affine coupling layers (Dinh et al., 2017; 2015) have very fast forward and inverse calculation but limited expressive power, while the flow in (Durkan et al., 2019) is highly expressive but computationally more demanding. In most applications, it is necessary to find the best performance while minimizing at least one specific component of the cost. Unfortunately, the current design of flow models is empirical and therefore cannot ensure optimal trade-offs. 3 METHOD In this work, we aim to tackle the challenge of finding an optimal flow model for a given task via an automated architecture search algorithm. Assumptions: In the remainder of this paper, without loss of generality, we assume that each transformation is modeled such that only forward computation is needed during training. Under this assumption, when the flow model is used for density modeling (Durkan et al., 2019), the forward calculation is the dominant computation. When the flow model is used for random sampling (Kingma & Dhariwal, 2018), the inverse calculation is computationally intensive. When the flow model is utilized as a module and trained together with other components, e.g., a policy network in maximum entropy learning (Mazoure et al., 2020), the training cost of the flow model is an important consideration. Problem Definition: Given a transformation set with m options {T^1, T^2, ..., T^m}, the goal is to construct an optimal flow model with n layers of transformations from this set. The flow model p_NF(x; θ) = p_{T_1 T_2 ... T_n}(x; θ) should minimize the KL divergence between the target distribution p*(x) and itself while minimizing its computational cost C_NF. Here, θ are the parameters of the transformations in the flow model. In this paper, we use the forward KL divergence as our target loss function (Papamakarios et al., 2021): θ* = argmin_θ {D_KL[p*(x) || p_{T_1 T_2 ... T_n}(x; θ)] + λ · C_NF} s.t.
T_i ∈ {T^1, T^2, ..., T^m}, i = 1, ..., n, (1) where λ is a tuning factor capturing the relative importance of the performance-cost trade-off. Finding this optimal flow model is a discrete optimization problem with exponential complexity. To enable efficient architecture optimization, we relax the discrete search space to a continuous one, as suggested in Liu et al. (2019). 3.1 MIXED FLOW ENSEMBLE For the i-th transformation layer with m options, we introduce a weight w_i^j for each option T^j, which reflects how likely that transformation is to be selected. The weights are parameterized by a vector α and made continuous via a softmax: w_i^j = exp(α_i^j) / Σ_{j=1}^{m} exp(α_i^j). (2) By applying this parameterization to each transformation layer, we can construct a mixed flow ensemble p_Mix(x; θ, α), where each layer of this mixed model reflects a weighted combination of the effects of all possible transformations. The architecture optimization problem then reduces to learning the weight vector of each layer; at the end of the optimization process, the weights are binarized and the transformation with the highest weight in each layer is selected as the final transformation. The mixed flow ensemble thus degrades to a normal flow model. The whole procedure is illustrated in Fig. 1 (left). As in (Liu et al., 2019), training of the flow ensemble becomes a joint optimization of the architecture parameters α and the model parameters θ over the training and validation datasets, which can be written as the following bi-level optimization problem: α* = argmin_α D^{val}_{KL}[p*(x) || p_Mix(x; θ*, α)] + λ · C_Mix(α) s.t. θ* = argmin_θ D^{train}_{KL}[p*(x) || p_Mix(x; θ, α)], ∀ T ∈ p_Mix, T ∈ {T^1, T^2, ..., T^m}. (3) While the optimization problem is well defined, the key challenge is to construct the flow ensemble within the normalizing flow framework. This differs from traditional neural architecture search, which can mix various operations without additional issues. Normalizing flows have a unique requirement of invertible transformations and prefer a simple Jacobian calculation, which requires careful handling. The mixed flow ensemble p_Mix(x; θ, α) must satisfy two requirements. First, it must be a legal density function so that it can be optimized with the KL divergence formulation. Second, each transformation layer in p_Mix(x; θ, α) should represent a weighted combination of all possible transformations. Consider the i-th layer of the mixed flow ensemble with input random variable x_in and output random variable x_out, whose density functions are p_{x_in}(x_in) and p_{x_out}(x_out). This layer has m transformation options {T_i^1, T_i^2, ..., T_i^m}, and w_i^j is the weight of each transformation. As discussed in the Assumptions, all transformations directly model the inverse transformation, i.e., x_in = T_i^j(x_out). Two approaches can be used to construct the mixed flow ensemble. Construction by Mixed Transformations: The straightforward way of building the i-th layer of the mixed flow ensemble is to mix all transformations by weighted summation, as shown in Fig. 1 (right-top). The resulting weighted transformation of this layer can be represented as: x_in = T_i(x_out) = Σ_{j=1}^{m} w_i^j · T_i^j(x_out). (4) Despite its simplicity, this formulation has two drawbacks. First, the definition of a normalizing flow requires the mixed transformation T_i to be invertible and differentiable in order to ensure that p_{x_out}(x_out) is a legal density function.
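As a small illustration of the relaxation in Eq. (2), each layer's architecture choice can be kept as free logits and turned into continuous weights by a softmax; the same weights also yield the expected per-layer cost used in the regularizer of Eq. (3). The variable names below are ours and the per-candidate costs are placeholder values, not entries of our cost model.

```python
import torch

m = 4                                             # number of candidate transformations in one layer
alpha = torch.zeros(m, requires_grad=True)        # architecture logits alpha_i for this layer
layer_costs = torch.tensor([1.0, 2.5, 1.2, 4.0])  # per-candidate cost (placeholder values)

w = torch.softmax(alpha, dim=0)                   # Eq. (2): continuous relaxation of the choice
expected_cost = (w * layer_costs).sum()           # differentiable cost term for this layer

# After the search, the layer is binarized to the highest-weight candidate:
chosen = int(torch.argmax(w))
```

With this relaxation in mind, we return to the invertibility requirement of the mixed-transformation construction.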
However, this invertibility is not guaranteed even if all candidate transformations are invertible. Second, even if the mixed transformation is invertible, there is no easy way to calculate the Jacobian determinant of this weighted summation of transformations. Meeting the requirements of invertibility and of an easily computed Jacobian determinant places strict restrictions on the candidate transformations and prevents the optimization of flow architectures over a wider search space. As a result, construction of the mixed flow ensemble by weighted summation of transformations is not adopted in this paper. Construction by Mixed Distributions: An alternative way is to build the mixed flow ensemble by mixing distributions. For a given transformation T_i^j in the i-th layer, applying the transformation to the input random variable results in a new distribution: p_{T_i^j}(x_out) = p_{x_in}(T_i^j(x_out)) · |det J_{T_i^j}(x_out)|. (5) By applying this to every transformation option in {T_i^1, T_i^2, ..., T_i^m}, we obtain m different distributions, and we can mix all the density functions by their weighted summation to get a mixture model as shown in Eq. (6): p_{T_i}(x_out) = Σ_{j=1}^{m} w_i^j · p_{T_i^j}(x_out). (6) An illustration of this process is shown in Fig. 1 (right-bottom). Different from the previous approach, the mixture model has a legal density function p_{T_i}(x_out). By the definition of normalizing flow, we can assume there exists an invertible and differentiable transformation T_i that transforms x_in to x_out, although this transformation cannot be written out explicitly. For the next, (i + 1)-th layer, the density of the mixture model is used as the input density function p_{x_in}(x_in), as in the previous layer. Applying this formulation for n layers, the final mixed flow ensemble can be written as: p_Mix(x; θ, α) = Σ_{k=1}^{m^n} W_k · p_k(x; θ_k), where each W_k = Π_{i=1}^{n} w_i^{j_i} is the product of the weights of the transformations selected at each layer for the k-th architecture, and Σ_{k=1}^{m^n} W_k = 1. (7) Each w_i is defined in Eq. (2), and we use p_k(x; θ_k) to denote a "normal flow architecture" with n transformation layers. Clearly, the final mixed flow ensemble is a legal density function which is, in fact, a weighted summation of all possible flow models built with n layers of transformations. 3.2 OPTIMIZATION WITH APPROXIMATED UPPER BOUND Optimizing the forward KL divergence between the target distribution and the mixed flow ensemble can be written as: L^O_pMix = D_KL[p*(x) || p_Mix(x; θ, α)] = −E_{p*(x)}[log(Σ_{k=1}^{m^n} W_k · p_k(x; θ_k))]. (8) We will demonstrate that direct optimization of this original loss can lead to undesired mixture models. In the whole search space of the flow ensemble, we are interested only in "normal flow architecture" points, i.e., points where the weight of one architecture is 1 and all others are 0. However, it can easily be shown that the global optimum of L^O_pMix may not be a desired normal flow architecture (the red points in Fig. 2). Instead, the optimization is very likely to end up in a mixture model that is globally optimal with similar weights for each possible flow architecture (the green point in Fig. 2). In this case, we encounter difficulty when extracting a normal flow architecture from the search result. A common practice in differentiable architecture search (Liu et al., 2019) is to binarize the weights and select the corresponding transformations. However, there is no guarantee that the binarized architecture will have a lower loss, and this nearest binarization may lead to a performance drop.
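Returning to the construction in Eqs. (5)-(7), one mixed layer amounts to a weighted mixture over the densities induced by each candidate transformation, which can be computed stably with a log-sum-exp. The sketch below is illustrative only: it assumes each candidate exposes an inverse and a log-absolute-determinant in the direction used by Eq. (5), and the interface names are ours.

```python
import torch

def mixed_layer_log_prob(x_out, candidates, alpha, base_log_prob):
    """Log-density of one mixed layer, Eq. (6), via logsumexp.

    `candidates[j].inverse(x)` is assumed to return T_i^j(x_out) = x_in and
    `candidates[j].log_abs_det_jacobian(x)` the value log|det J_{T_i^j}(x_out)|;
    `base_log_prob` is the density of the previous layer (p_{x_in}).
    """
    log_w = torch.log_softmax(alpha, dim=0)        # log of the Eq. (2) weights
    per_candidate = []
    for j, t in enumerate(candidates):
        x_in = t.inverse(x_out)                    # T_i^j(x_out)
        log_p = base_log_prob(x_in) + t.log_abs_det_jacobian(x_out)   # Eq. (5)
        per_candidate.append(log_w[j] + log_p)
    # log of the weighted sum in Eq. (6), numerically stable
    return torch.logsumexp(torch.stack(per_candidate, dim=0), dim=0)
```

As noted above, after such a search the weights still have to be binarized, with no guarantee on the loss of the binarized architecture.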
As a result, optimization with the original loss function is not suitable and could be risky. In this paper, we propose to optimize an upper bound of the original loss function, which provides a better global optimum for the search for the best normal flow architecture. Our method utilizes Jensen's inequality, log(Σ W · x) ≥ Σ W · log(x): since Σ W = 1 and the log function is concave, we obtain an upper bound of the KL divergence: L^O_pMix = −E_{p*(x)}[log(Σ_{k=1}^{m^n} W_k · p_k(x; θ_k))] ≤ L^U_pMix = −E_{p*(x)}[Σ_{k=1}^{m^n} W_k · log(p_k(x; θ_k))]. (9) The benefit of optimizing the upper bound can be summarized as follows. Proposition 1: The global minimum of L^U_pMix is attained at a normal flow architecture. Proof: Suppose each flow model p_k(x; θ_k) has an optimal parameter θ*_k that minimizes the KL divergence between p*(x) and itself: −E_{p*(x)}[log(p_k(x; θ*_k))] ≤ −E_{p*(x)}[log(p_k(x; θ_k))]. (10) There also exists a flow architecture p_z(x; θ*_z) with the minimal KL divergence among all architectures: −E_{p*(x)}[log(p_z(x; θ*_z))] ≤ −E_{p*(x)}[log(p_k(x; θ*_k))], ∀k ∈ {1, ..., m^n}. (11) We can then prove the proposition by showing that: L^U_pMix = −E_{p*(x)}[Σ_{k=1}^{m^n} W_k · log(p_k(x; θ_k))] ≥ −E_{p*(x)}[Σ_{k=1}^{m^n} W_k · log(p_k(x; θ*_k))] ≥ −E_{p*(x)}[Σ_{k=1}^{m^n} W_k · log(p_z(x; θ*_z))] = −E_{p*(x)}[log(p_z(x; θ*_z))], (12) where the last equality uses Σ_k W_k = 1, and the lower bound is attained by the normal flow architecture p_z(x; θ*_z) itself. Proposition 2: At normal architecture points (W_k = 1, W_{−k} = 0), L^U_pMix = L^O_pMix. The proof of Proposition 2 is immediate. With the above propositions, and under the assumption that the global optimum is reached at the end of optimization, the solution set, i.e., the set of all possible normal flow architectures, is the same under both L^O_pMix and L^U_pMix, so we can optimize the proposed upper bound without violating the original objective. Furthermore, since the global optimum of the upper bound always corresponds to a normal flow architecture, we will not end up with a mixture model that requires a heuristic and risky binarization of the weights W. 3.3 EFFICIENT ARCHITECTURE OPTIMIZATION FOR DEEP FLOW MODELS While the mixed-density flow ensemble reflects the weighted effect of all possible transformation combinations, the architecture optimization complexity grows exponentially with the number of considered transformation types and the number of transformation layers. In this scenario, efficient optimization of the whole flow architecture is not possible. It is therefore natural to decompose the original problem into a sequential optimization of a few blocks, where each block with a limited number of layers is optimized at a time. We propose two methods to decompose the problem. Grow Method: The first approach is a straightforward greedy method which we call "Grow". Each time, a block is optimized until convergence and the weights of its transformation layers are binarized. The searched transformations of this block are appended directly to the layers searched in the previous blocks. The architecture optimization of later blocks is based on the existing layers, and the growth of layers stops when the constraint on the total number of layers is reached. Despite its simplicity, the downside of the "Grow" method is that the optimization is short-sighted: the block being optimized has no information about the architectures that may be added later, and the whole architecture is more likely to be trapped in a local minimum. Block Method: To avoid the issue of getting stuck in a local minimum, we propose another method named "Block" optimization.
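Before detailing the Block method, the difference between Eq. (8) and Eq. (9) can be made concrete with the following sketch, which contrasts the two losses for a batch of samples given per-architecture log-densities and architecture weights. In practice the m^n architectures are never enumerated, so this is purely illustrative and the tensor names are ours.

```python
import torch

def original_loss(log_p, log_W):
    """Eq. (8): negative log of the weighted sum of architecture densities.

    log_p: [K, B] log-densities of K candidate architectures on B samples.
    log_W: [K] log architecture weights, normalized so that logsumexp(log_W) = 0.
    """
    log_mix = torch.logsumexp(log_W.unsqueeze(1) + log_p, dim=0)   # log sum_k W_k p_k(x)
    return -log_mix.mean()

def upper_bound_loss(log_p, log_W):
    """Eq. (9): weighted sum of negative log-densities (Jensen upper bound)."""
    W = log_W.exp()
    return -(W.unsqueeze(1) * log_p).sum(dim=0).mean()

# By Jensen's inequality, upper_bound_loss(log_p, log_W) >= original_loss(log_p, log_W)
# for any valid architecture weights.
```

We now return to the "Block" method.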
Blocks B in this approach are optimized alternately so that each block can adjust its architecture with respect to the other blocks. In fact, the "Grow" approach is a special case of the "Block" method in which all blocks are initialized as identity transformations and each block is optimized only once.
Algorithm 1 Algorithm flow for AutoNF
Require: transformations {T^1, T^2, ..., T^m}, blocks B = {B_1, B_2, ..., B_l}, cost C_Mix
Ensure: n-layer flow model
1: while not converged do
2:   for each B_i ∈ B do
3:     while not converged do
4:       α_{B_i} = argmin_{α_{B_i}} D^{val}_{KL}[p*(x) || p_Mix(x; θ*_B, α_{B_i})] + λ · C_Mix(α_{B_i})
5:       θ_B = argmin_{θ_B} D^{train}_{KL}[p*(x) || p_Mix(x; θ_B, α_{B_i})]
6:     end while
7:     Fix the architecture of B_i
8:   end for
9: end while
3.4 COST MODEL AND ALGORITHM FLOW As discussed in Section 2, we are interested in modeling the training cost (forward calculation cost) and the inverse calculation cost, since each of them plays a different role depending on the target application. We use an independent experiment to model the cost of the different types of flows and summarize the results in a table included in Appendix B. With this cost model, the total cost of the mixed flow ensemble can be computed with an emphasis on the cost of interest, e.g., if training cost is the major concern, only the training cost of the different flows is accumulated. This total cost C_Mix is then added as a regularization term to the training loss. We use a gradient-based method for optimization, which is efficient in this very high-dimensional search space. The architecture parameters α and the flow model parameters θ are optimized alternately with the first-order approximation of (Liu et al., 2019). The overall algorithm flow of the proposed AutoNF method is summarized in Algorithm 1. 4 EXPERIMENTS 4.1 EVALUATION OF PROPOSED UPPER BOUND Setup: We use a simple example to demonstrate the necessity of optimizing with our proposed upper bound. We use AutoNF to build a 4-layer flow model with 2 transformation options, planar flow and radial flow from (Rezende & Mohamed, 2015). We use the POWER dataset as the target and optimize with the original loss (named M1) and with our proposed upper bound (named M2). We use the Adam optimizer for both the architecture parameters and the model parameters with a learning rate of 0.002. The batch size is 512 and the number of training iterations is 10000. The results are shown in Fig. 3. For both M1 and M2, we present the weights of the planar and radial flows in each layer as well as the training and validation losses during the search. The final weight of each layer, the searched architecture after binarization, and the test score are shown in the bottom-right table. Analysis: With our proposed upper bound (M2), the weight of each layer converges cleanly to 0 or 1, which yields the desired normal flow architecture, whereas optimization with the original loss (M1) ends up in a mixture model rather than a normal flow architecture, as shown in Fig. 3 (left). This matches our expectation illustrated in Fig. 2. Moreover, although the mixture model is most likely optimal under the original loss, the normal flow architecture obtained after binarization is not optimal. As shown in the bottom-right table, the architecture found by M2 has a significantly better test score than M1, which clearly supports our choice of optimizing with the proposed upper bound.
4.2 SEARCH FOR FLOW MODELS WITH THE BEST PERFORMANCE-COST TRADE-OFF Transformation Options and Reference Designs: To evaluate our AutoNF framework, we set up our experiments with four types of non-linear flows and one linear flow. From the autoregressive family, we choose the affine autoregressive flow (Papamakarios et al., 2017) and the rational quadratic autoregressive flow (Durkan et al., 2019). The affine autoregressive flow has limited expressive power but a lower computation cost, while the latter has state-of-the-art performance in the autoregressive family at a higher cost. The affine coupling layer (Dinh et al., 2015) and the rational quadratic coupling layer (Durkan et al., 2019) are selected from the coupling layer family. For the linear transformation, we combine a reverse permutation and an LU linear layer into a single layer. Random permutation (Durkan et al., 2019; Oliva et al., 2018) is not used since it is difficult to reproduce in architecture optimization. Every non-linear transformation layer is paired with the linear transformation layer suggested by Durkan et al. (2019) to form a final transformation option, i.e., a layer in our experiment contains a reverse permutation, an LU-linear layer, and one of the non-linear transformation layers listed above. As baselines we use the rational quadratic flow family of (Durkan et al., 2019), namely the rational quadratic autoregressive flow (RQ-AF) and the rational quadratic coupling layer (RQ-C), which have the top-2 performance. For a fair comparison, we use RQ-AF as the baseline when emphasizing forward cost, since it has better performance, and RQ-C as the baseline when emphasizing inverse cost, since RQ-C has a significantly lower inverse cost. Evaluation Metric and Datasets: Evaluating the performance-cost trade-off is an open question for NF, and we propose a new metric to address a difficulty with negative log-likelihood (NLL). NLL is a common measurement for density estimation (lower is better); however, the order of magnitude of NLL differs across datasets, so a percentage difference in NLL is not a suitable measure of how much better one model is than another. In this paper, we propose to utilize density and coverage (Naeem et al., 2020) to evaluate the performance of NF models. Density and coverage are recently proposed metrics for evaluating the sample quality of generative models. The density metric reflects the fidelity of the model and is consistent with the NLL metric. Across different datasets, density and coverage are on the same order of magnitude, which allows comparing architectures across datasets. In our experiments, 10000 samples are drawn from each trained flow model and compared with 10000 samples from the test data. The results of three independent runs are averaged as the final reported results. To evaluate the performance-cost trade-off, we define a figure of merit (FOM) as FOM = cost reduction% + density drop% relative to the reference SOTA designs. In principle, the weights of the two terms can be adjusted manually to reflect their relative importance; for demonstration purposes, we report results with an equally weighted summation. The performance of the flow models is evaluated with density estimation on the UCI (Dua & Graff, 2017) and BSDS300 (Martin et al., 2001) datasets. Analysis: The architecture search results are reported in Table 1, which includes the test NLL, density, coverage, cost, and the corresponding FOM. Table 1 shows that AutoNF clearly helps to find architectures with a better performance-cost trade-off.
Our AutoNF achieves up to 3.66X cost reduction and up to 75.2% improvement in FOM compared with SOTA literature results. Across all five datasets, AutoNF demonstrates an average FOM improvement of 58.67% when emphasizing forward cost and an average FOM improvement of 52.57% when emphasizing inverse cost. 5 DISCUSSION A normalizing flow is a highly parameterized module, and designing a flow model and using it in an application requires a lot of hands-on experience and domain knowledge. In this paper, we show that the AutoNF framework is very effective in balancing performance-cost trade-offs when building complex flow models. Moreover, although not demonstrated in this paper, the framework could also be used to help choose hyperparameters in complex flow models, e.g., the number of hidden features and the number of bins in the SOTA coupling layer (Durkan et al., 2019). In addition, the proposed optimization method with the upper bound can easily be extended to other suitable probabilistic kernels; one example is identifying the best parameterized distribution(s) within a mixture model. We believe our framework will be useful in many machine learning applications where normalizing flows are needed.
1. What is the main contribution of the paper regarding automated normalization flow models? 2. What are the strengths of the proposed approach, particularly in its application of NAS? 3. What are the weaknesses of the paper, especially regarding the proof of upper bound optimization and the experiments? 4. Do you have any questions regarding the proposed method's similarity to DARTS? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a DARTS-like method for automated search of normalizing flow models. Instead of directly using output ensembles, which leads to infeasible flow models, this work proposes a distribution mixture to guarantee that the supernet is always a valid flow model. An upper bound of the loss function is optimized jointly with resource constraints. Experiments on small-to-medium scale datasets validate the effectiveness of the proposed method. Review [Strength] This is the first work using NAS to optimize flow models. Although the proposed method is based on DARTS, it requires some effort to make it work on flow models, such as the distribution mixture. [Weakness] The paper is a bit difficult to follow if the reader is not very familiar with flow models. In particular, the following things need to be clarified in greater detail: a) How did you get Eq. (7) from Eq. (6)? In Eq. (7), W_k = ∏_i w_i, so the right-hand side does not contain k at all? b) The proof of the upper bound optimization should be further clarified. It is hard to follow (at least to me). The upper bound argument seems questionable. While I can understand that we need to binarize the weights α in order to get a simple and valid flow model, I still cannot understand why optimizing Jensen's upper bound is a good (or better) idea. At least from the experiments, it seems that binarization is still a necessary step. The proof of Proposition 1 seems questionable. The second ≥ is not obvious. The experiments focus on density estimation problems, which is somewhat insufficient. I would expect more real-world applications and comparisons to strong baseline methods. Table 1 needs improvements. For example, it would be better to explicitly align on one cost and then compare the test scores. For now it is difficult to compare results since they have different costs. The proposed method almost directly follows DARTS. Although the distribution mixture is novel, it is still more or less an incremental improvement.
ICLR
Title AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation Abstract Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. N/A Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models. 1 INTRODUCTION Normalizing flow (NF) is a probabilistic modeling tool that has been widely used in density estimation, generative models, and random sampling. Various flow models have been proposed in recent years to improve their expressive power. Discrete flow models are either built based on elementalwise monotonical functions, named autoregressive flow or coupling layers (Papamakarios et al., 2017), or built with transformations where the determinant of the flow can be easily calculated with matrix determinant lemma (Rezende & Mohamed, 2015). In the continuous flow family, the models are constructed by neural ODE (Grathwohl et al., 2019). Despite the variety of flow models, there’s yet no perfect flow concerning the expressive power and the computation cost. The flow models with higher expressive power usually have higher computational costs in either forward and inverse pass. In contrast, flows which are fast to compute are not able to model rich distributions and are limited to simple applications. For instance, autoregressive flows (Papamakarios et al., 2017) are universal probability approximators but are D times slower to invert than forward calculation, where D is the dimension of the modeled random variable x (Papamakarios et al., 2021). Flows based on coupling layers (Dinh et al., 2015; 2017; Kingma & Dhariwal, 2018) have an analytic one-pass inverse but are less expressive than their autoregressive counterparts. 
Other highly expressive NF models (Rezende & Mohamed, 2015; Behrmann et al., 2019) cannot provide an analytic inverses and relies on numerical optimizations. For different applications, the optimal flow model can be drastically different, especially if the computation cost is taken into consideration. For generative models (Dinh et al., 2015; Kingma & Dhariwal, 2018), flows with the fast forward pass are preferable since the forward transformations need to be applied to every sample from the base distribution. For density estimation (Papamakarios et al., 2017; Rippel & Adams, 2013), flows with cheap inverse will prevail. For applications where flow is utilized as a co-trained kernel (Mazoure et al., 2020), the computation cost and performance trade-off are more important, i.e., having a fast model with relatively good performance. However, in the current body of work, the architecture designs of the flow models are all based on manual configuration and tuning. To this date, there is a lack of a systematic way that could automatically construct an optimal flow architecture with a preferred cost. In this paper, we propose AutoNF, an automated method for normalizing flow architecture optimization. AutoNF has a better performance-cost trade-off than hand-tuned SOTA flow models based on a given set of transformations. Our approach employs a mixture distribution formulation that can search a large design space of different transformations while still satisfying the invertibility requirement of normalizing flow. The proposed mixture NF is optimized via approximate upper bound which provides a better optimization landscape for finding the desired flow architecture. Besides, to deal with exponentially growing optimization complexity, we introduce a block-wise optimization method to enable efficient optimization of deep flow models. 2 RELATED WORK Normalizing Flows: Various normalizing flow models have been proposed since the first concept in (Tabak & Turner, 2013). Current flow models can be classified into two categories: finite flows based on layer structure, and continuous flow based on neural ODE (Grathwohl et al., 2019). The finite flow family includes flows based on elemental-wise transformation (Papamakarios et al., 2017; Kingma & Dhariwal, 2018) and flows whose transformations are restricted to be contractive (Behrmann et al., 2019). In elemental-wise transformation flows, autoregressive flow and coupling layers are two major flavors and extensive work has been proposed to improve the expressive power of both flow models. In Huang et al. (2018), the dimension-wise scalar transformation is implemented by a sigmoid neural network, which increases the expressive power at the cost of being not analytically invertible. In Durkan et al. (2019), piecewise splines are used as drop-in replacement of affine or additive transformations (Dinh et al., 2015; 2017) and is the current SOTA flow model. Consequently many recent research efforts have been devoted to closing the gap of expressive power, albeit at the cost of more complex and expensive transformations. Moreover, there has been no quantitative trade-off analysis between the performance and cost among different flows. Neural Architecture Search: Many algorithms have been proposed or applied for neural architecture search. 
For instance, reinforcement learning (Zoph & Le, 2017), genetic algorithm (Real et al., 2017; Suganuma et al., 2018; Liu et al., 2018), Monte Carlo tree search (Negrinho & Gordon, 2017) or Bayesian optimization (Kandasamy et al., 2018). However, these methods all face the challenge of optimizing on a large discrete space and can take thousand of GPU days to find a good architecture. To address this issue, DARTS (Liu et al., 2019) proposes to relax the search space from discrete to continuous and allows efficient differentiable architecture search with gradient method which could reduce the search time to a single GPU day while still producing the SOTA architecture. However, all current NAS methods focus on optimizing traditional neural network structures (CNN, RNN) and there has yet been any implementation on normalizing flow. Necessity for the Trade-off Between Performance and Cost: Despite various transformations proposed in the literature, there is no perfect transformation with strong expressive power and low computational cost. Autoregressive flows have better expressive power, but the inverse computation cost grows linearly with data dimension. Coupling layers’ inverse calculation is as fast as the forward pass, but their expressive power is generally worse than autoregressive flow with the same element-wise transformation. Even in the same autoregressive flow or coupling layer family, flows with different element-wise transformations have different performance and computation costs. For instance, additive or affine coupling layers (Dinh et al., 2017; 2015) have very fast forward and inverse calculation with limited expressive power while the flow in (Durkan et al., 2019) are highly expressive but are more demanding on computation. In most applications, it is necessary to find the best performance while minimizing at least one specific component of the cost. Unfortunately, the current design of flow models is empirical and therefore cannot ensure the optimal trade-offs. 3 METHOD In this work, we aim to tackle the challenge of finding an optimal flow model for a given task via an automated architecture search algorithm. Assumptions: In the remaining part of this paper, without losing generality, we assume that the transformation is properly modeled such that during the training process, only forward computation is needed. Under this assumption, when the flow model is used for density modeling (Durkan et al., 2019), the forward calculation is the dominant computation. When the flow model is used for random sampling (Kingma & Dhariwal, 2018), the inverse calculation is computationally intensive. When the flow model is utilized as a module and trained together with other components, e.g., policy network in maximum entropy learning (Mazoure et al., 2020), the training cost of the flow model is an important consideration. Problem Definition: Given a transformation set with m options {T 1, T 2, ...Tm}, the goal is to construct an optimal flow model with n layers of transformations from the set. The flow model pNF (x;θ) = pT1T2...Tn(x;θ) should minimize the KL divergence between the target distribution p∗(x) and itself while minimizing its computational cost CNF . Here, θ are the parameters of the transformation in the flow model. In this paper, we use the forward KL divergence as our target loss function (Papamakarios et al., 2021): θ∗ =argmin θ {DKL[p∗(x) || pT1T2...Tn(x;θ)] + λ · CNF } s.t. 
Ti ∈ {T 1, T 2, ...Tm} (1) While λ is a tuning factor capturing the relative importance of the performance-cost trade-off. Finding this optimal flow model is a discrete optimization problem with exponential complexity. To enable efficient architecture optimization, we use proposed method of relaxing the discrete search space to continuous space as suggested in Liu et al. (2019). 3.1 MIXED FLOW ENSEMBLE For the ith transformation layer with m options, we introduce a corresponding weight w j i for each option T j which reflects how likely the transformation will be selected. The weight is parameterized by a vector α and made continuous via softmax: wji = exp(αji )∑m j=1 exp(α j i ) (2) By applying this parameterization for each transformation layer, we can construct a mixed flow ensemble pMix(x;θ,α), where each layer in this mixed model reflects a weighted combination of the effect of all possible transformations. In this case, the architecture optimization problem is reduced to learning the weight vector for each layer and, at the end of the optimization process, weights will be binarized and the transformation with the highest weight in one layer will be selected as the final transformation. The mixed flow ensemble thus degrades to a normal flow model. The whole procedure is illustrated in Fig. 1 (left). As adopted in (Liu et al., 2019), training of the flow ensemble becomes joint optimization of the architecture parameterα and the model parameter θ over the training and validation datasets, which could be written as the following bi-level optimization problem: α∗ =argmin α DvalKL[p ∗(x) || pMix(x;θ∗,α)] + λ · CMix(α) s.t. θ∗ = argmin θ DtrainKL [p ∗(x) || pMix(x;θ,α)], ∀ T ∈ pMix, T ∈ {T 1, T 2, ...Tm}, (3) While the optimization problem is well defined, the key challenge is to construct the flow ensemble within the normalizing flow framework. This is different from traditional neural architecture search, which can mix various operations with no additional issue. Normalizing flow has its unique requirement for the invertibility of transformations and a preferred simple Jacobian calculation, which requires careful handling. The mixed flow ensemble pMix(x;θ∗,α) must satisfy two requirements. First, it must be a legal density function such that it can be optimized by the KL divergence formulation. Second, each transformation layer in pMix(x;θ∗,α) should represent a weighted combination of all possible transformations. Consider the ith layer in the mixed flow ensemble with input random variable xin and output random variable xout, and pxin(xin) and pxout(xout) are their corresponding density functions. This layer has m transformation options in {T 1i , T 2i , ...Tmi } and w j i is the corresponding weight for each transformation. As discussed in Assumption, we assume all transformations directly model the inverse transformation, i.e. xin = T j i (xout). Two approaches can be used to construct the mixed flow ensemble. Construction by Mixed Transformations: The straight forward way of building the ith mix flow ensemble layer is to mix all transformations by weighted summation, as shown in Fig. 1 (right-top). The final weighted transformation for this layer can be thus represented as: Ti(xin) = m∑ j=1 wji · T j i (xout) (4) There are two drawbacks of this formulation despite its simplicity. First, definition of normalizing flow requires the mixed transformation Ti be invertible and differentiable in order to ensure pxout(xout) legal density function. 
However, this invertibility is not guaranteed even if all candidate transformations are invertible. Second, even if the mixed transformation is invertible, there is no easy way to calculate the Jacobian determinant of this weighted summation of transformations. Meeting the requirement of invertibility and ease of calculating Jacobian determinant brings strict restrictions on the candidate transformations and prevents the optimization of flow architectures on a wider search space. As a result, the construction of the mixed flow ensemble by weighted summation of transformations is not adopted in this paper. Construction by Mixed Distributions: An alternating way is to build the mixed flow ensemble by mixing distributions. For a given transformation T ji in this ith layer, applying the transformation to the input random variable will result in a new distribution: pT ji (xout) = pxin(T j i (xout)) · | detJT ji (xout)| (5) By applying this to every transformation option in {T 1i , T 2i , ...Tmi }, we can obtain k different distributions, and it is possible to mix all the density functions together by their weighted summation, to get a mixture model as shown in eq.(6). pTi(xout) = m∑ j=1 wji · pT ji (xout) (6) An illustration of this process is shown in Fig. 1 (right-bottom). Different from the previous approach, the mixture model has a legal density function as: pTi(xout). By the definition of normalizing flow, we can assume that there exists an invertible and differentiable transformation Ti, which transforms xin to xout, although the transformation itself can not be explicitly written out. For the next (i + 1)th layer, the density of the mixture model will be used as the input density function pxin(xin) as in the previous layer. By applying this formulation for n layers, the final mixed flow ensemble can be written as: pMix(x;θ,a) = mn∑ k=1 Wk · pT1T2...Tn(x,θ) = mn∑ k=1 Wk · pi(x;θi) where each Wk = n∏ i=1 wi and mn∑ k Wk = 1 (7) Each wi is defined in eq.(2) and we use pk(x;θk) to represent a “normal flow architecture” with n transformation layers. Clearly, the final mixed flow ensemble is a legal density function which is in fact, a weighted summation of all possible flow models built with n layers of transformations. 3.2 OPTIMIZATION WITH APPROXIMATED UPPER BOUND Optimizing the forward KL divergence between the target distribution and the mixed flow ensemble can be written as: LOpMix = DKL [p ∗(x) || pMix(x;θ,α)] = −Ep∗(x)[log( mn∑ k=1 Wk · pk(x;θk))] (8) We will demonstrate that direct optimization of this original loss can lead to underside mixture models. In the whole search space of the flow ensemble, we are interested only in ”normal flow architectures” points, i.e. the points where the weight of one architecture is 1 and others are all 0. However, it can be easily proven that the global optimum of LOpMix may not be the desired normal flow architecture (the red points in Fig. 2). Instead, optimization is very likely to end up in a mixture model that is globally optimal with similar weight for each possible flow architecture (the green point in Fig. 2). In this case, we will encounter difficulty when extracting a normal flow architecture with the search result. A common practice in differentiable architecture search (Liu et al., 2019) is to binarize the weights and select corresponding transformations. However, there is no guarantee that the binarized architecture will have a lower loss, and finding this nearest binarization may lead to performance drop. 
As a result, optimization with the original loss function is not suitable, and could be risky. In this paper, we propose to optimize an upper bound of the original loss function to provide a better global optimum for the search of best normal flow architectures. Our method utilizes Jensen’s inequality log( ∑ W · x) ≥ ∑ W · log(x) as follows, since we have ∑ W = 1 and the log function is concave, we can obtain an upper bound of the KL divergence given as: LOpMix = −Ep∗(x)[log( mn∑ k Wk · pk(x;θk)] ≤ LUpMix = −Ep∗(x)[ mn∑ k Wk · log(pk(x;θk))] (9) The benefit of optimizing the upper bound can be summarized as follows: Proposition 1: The global minimum point of LUpMix is defined by a normal flow architecture. Proof: Suppose each flow model pk(x;θk) has an optimal parameter θ∗k that minimizes the KL divergence between p∗(x) and it: −Ep∗(x)[log(pk(x;θ∗k)] ≤ −Ep∗(x)[log(pk(x;θk)] (10) There also exists a flow architecture (pz(x;θ∗z)) that has the minimal KL divergence: −Ep∗(x)[log(pz(x;θ∗z)] ≤ −Ep∗(x)[log(pk(x;θ∗k)], ∀k ∈ mn (11) We can then prove the proposition by showing that: LUpMix = −Ep∗(x)[ mn∑ k Wk · log(pk(x;θk))] ≥ −Ep∗(x)[ mn∑ k Wk · log(pk(x;θ∗k))] ≥ −Ep∗(x)[ mn∑ k Wk · log(pz(x;θ∗z))] = −Ep∗(x)[log(pz(x;θ∗z)] (12) Proposition 2: At normal architecture points (Wk = 1,W−k = 0), LUpMix = L O pMix . The proof of proposition 2 is apparent. With the above propositions and under the assumption that the global optimum can be reached at the end of the optimization, we can show that the solution set, i.e. all possible normal flow architectures are the same in both LOpMix and L U pMix , and we can do optimization with proposed upper bound without violating the original definition. Furthermore, since the global optimum of the upper bound will always lead to a normal flow architecture, we will not end up in finding a mixture model with the need to do heuristic and risky binarization of weights W . 3.3 EFFICIENT ARCHITECTURE OPTIMIZATION FOR DEEP FLOW MODELS While the flow ensemble by mixed density formulation could reflect the weighted effect of all possible transformation combinations, the architecture optimization complexity grows exponentially with respect to the number of considered transformation types and the number of transformation layers. In this scenario, efficient optimization of the whole flow architecture will not be possible. It is natural to decompose the original problem into sequential optimization of few different blocks, where each block could be optimized in one time with a limited number of layers. We propose two methods to decompose the problem. Grow Method: The first approach is a straightforward greedy method which we call ”Grow”. Each time, a block is optimized until convergence, and the weights of the transformation layer are binarized. The searched transformations in this block will be directly added to the searched layer in the previous block. The architecture optimization of later blocks will be based on the existing layers and, the growth of layers stops when reaching the total number of layers constraint. Despite its simplicity, the downside of the “Grow” method is that the optimization is short-sighted. The block being optimized has no information about the architectures which could be added later, and the whole architecture is more likely to be trapped in local minimum. Block Method: To avoid the issue of getting stuck in a local minimum, we propose another method named “Block” optimization. 
Blocks B in this approach are optimized alternatively to allow each block to adjust their architectures with respect to other blocks. In fact, the first “Grow” approach is a specific case of the “Block” method, where all the blocks are initialized as identity transformations and optimized only once. Algorithm 1 Algorithm flow for AutoNF Require: Transformations: {T 1, T 2, ...Tm}, Blocks: B = {B1, B2, ...Bl}, Cost: CMix Ensure: n-layer flow model: 1: while not converged do 2: for each Bi ∈B do 3: while not convergence do 4: αBi = argminαBi D val KL[p ∗(x) || pMix(x;θ∗B,αBi)] + λ · CMix(αBi) 5: θB = argminθB D train KL [p ∗(x) || pMix(x;θB,αBi)] 6: end while 7: Fix architecture for Bi 8: end for 9: end while 3.4 COST MODEL AND ALGORITHM FLOW As discussed in section II, we are interested in modeling the training cost (forward calculation cost) and the inverse calculation cost, since each of them plays a different role based on desired applications. We use an independent experiment to model the cost of different types of flows and summarized in a table which are included in Appendix B. With the cost model, the total cost of the mixed flow ensemble could be extracted based on emphasize on different costs, e.g. if training cost is the major concern, only training cost of different flows will be calculated. This total cost CMix is then added as an regularization term into the training loss function. In our paper, gradient based method is used for optimization which is efficient in this very high dimensional search space. The architecture parameter α and the flow model parameter θ are optimized alternatively with first order approximation in (Liu et al., 2019). The final algorithm flow of our proposed AutoNF method can be summarized in Algorithm 1. 4 EXPERIMENTS 4.1 EVALUATION OF PROPOSED UPPER BOUND Setup: We use a simple example to demonstrate the necessity of doing optimization with our proposed upper bound. We use AutoNF to build a 4 layer flow model with 2 transformation options including planar flow and radial flow from (Rezende & Mohamed, 2015). We use the POWER dataset as the target and optimize with original loss (name M1) and our proposed upper bound (named M2). We use Adam optimizer for both architecture parameter and model parameter with a learning rate of 0.002. The batch size is 512 and the training iteration is 10000. The results are shown in Fig.3. For both M1 and M2, we present the weight for planar and radial flow for each layer as well as the training and validation loss during the search process. The final weight for each layer, searched architectures after binarization and the test score are shown in the right-bottom table. Analysis: Optimization with our proposed upper bound (M2) shows a concrete convergence of weight to 0 or 1 for each layer, which leads to a desired normal flow architecture, while the optimization with the original loss function (M1) ends up in a mixture model instead of a normal flow architecture, as shown in Fig.3(left). This is within in our expectation as shown in Fig.2. Moreover, although the mixture model is mostly likely to be the optimal in the original loss, the normal flow architecture after binarization however, is not an optimal model. As shown in the right-bottom table, the architecture found by M2 has a significantly better test score than M1, and this clearly supports our statement of doing optimization with our proposed upper bound. 
4.2 SEARCH FOR FLOW MODELS WITH BEST PERFORMANCE-COST TRADE-OFF

Transformation Options and Reference Designs: To evaluate our AutoNF framework, we set up our experiments with four types of non-linear flows and one linear flow. From the autoregressive family, we choose the affine autoregressive flow (Papamakarios et al., 2017) and the rational quadratic autoregressive flow (Durkan et al., 2019). The affine autoregressive flow has limited expressive power but a lower computation cost, while the latter has state-of-the-art performance in the autoregressive family at a higher cost. The affine coupling layer (Dinh et al., 2015) and the rational quadratic coupling layer (Durkan et al., 2019) are selected from the coupling-layer family. For the linear transformation, we combine a reverse permutation and an LU linear layer into a single layer. Random permutation (Durkan et al., 2019; Oliva et al., 2018) is not used since it is difficult to reproduce in architecture optimization. Every non-linear transformation layer is paired with a linear transformation layer, as suggested by Durkan et al. (2019), to form a final transformation option; i.e., a layer in our experiment contains a reverse permutation, an LU-linear layer, and one of the non-linear transformation layers listed above. We use the rational quadratic flow family, including the rational quadratic autoregressive flow (RQ-AF) and the rational quadratic coupling layer (RQ-C) of (Durkan et al., 2019), which have the top-2 performance, as the baselines. For a fair comparison, we use RQ-AF as the baseline when emphasizing forward cost, since it has better performance, and RQ-C as the baseline when emphasizing inverse cost, since RQ-C has a significantly lower inverse cost.

Evaluation Metric and Datasets: Evaluating the performance-cost trade-off is an open question for NF, and we propose a new metric to address the limitations of negative log-likelihood (NLL). NLL is a common measurement for density estimation (the lower, the better); however, the order of magnitude of NLL differs across datasets, so percentage differences in NLL are not a suitable measure of how much better one model is than another. In this paper, we propose to use density and coverage (Naeem et al., 2020) to evaluate the performance of NF models. Density and coverage are recently proposed metrics for evaluating the sample quality of generative models. The density metric reflects the fidelity of the model and is consistent with the NLL metric. Across different datasets, density and coverage are of the same order of magnitude, which allows evaluation of architectures across datasets. In our experiments, 10000 samples are drawn from the trained flow models and compared with 10000 samples from the test data. The results of three independent runs are averaged as the final reported results. To evaluate the performance-cost trade-off, we define a figure of merit (FOM) as FOM = cost reduction% + density drop% compared to the reference SOTA designs. In principle, the weights of the two terms can be adjusted manually to reflect their importance; for demonstration purposes, we use an equally weighted sum to report the results. The performance of the flow models is evaluated with density estimation on the UCI (Dua & Graff, 2017) and BSDS300 (Martin et al., 2001) datasets.

Analysis: The architecture search results are reported in Table 1, which includes the test NLL, density, coverage, cost, and the corresponding FOM. Table 1 shows that our AutoNF clearly helps to find architectures with a better performance-cost trade-off.
Our AutoNF achieves up to 3.66X cost reduction and up to 75.2% improvement in FOM compared with SOTA literature results. Across all five datasets, AutoNF demonstrates an average improvement of 58.67% in FOM when emphasizing forward cost and an average improvement of 52.57% in FOM when emphasizing inverse cost.

5 DISCUSSION

Normalizing flows are highly parameterized modules, and designing a flow model and using it in an application requires substantial hands-on experience and domain knowledge. In this paper, we show that the AutoNF framework is very effective in balancing performance-cost trade-offs when building complex flow models. Moreover, although not demonstrated in this paper, the framework could also be used to help decide hyperparameters in complex flow models, e.g., the hidden features and the number of bins in the SOTA coupling layer (Durkan et al., 2019). In addition, the proposed optimization method with the upper bound can easily be extended to other suitable probabilistic kernels; one example is to identify the best parameterized distribution(s) within a mixture model. We believe our framework will be very useful in many machine learning applications where normalizing flows are needed.
1. What is the focus and contribution of the paper on normalizing flow architecture search? 2. What are the strengths of the proposed approach, particularly in its ability to construct an optimal flow model and deal with exponentially growing optimization complexity? 3. What are the weaknesses of the paper, especially regarding its novelty and comparisons with other works in the field of Normalizing Flow and NAS? 4. Do you have any concerns or questions about the technical aspects of the proposed method, such as the mixture distribution formulation, block-wise optimization method, and the use of approximate upper bound? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding its experimental results and comparisons with other works?
Summary Of The Paper Review
Summary Of The Paper
In this paper, the authors present an automated normalizing flow (NF) architecture search method. The method employs a mixture distribution formulation that can construct an optimal flow model with n layers of transformations from the transformation set. Besides, the authors introduce a block-wise optimization method to deal with the exponentially growing optimization complexity. In the experiments, the authors demonstrate the effectiveness of the optimization via the approximate upper bound and show that AutoNF achieves a better performance-cost trade-off than hand-tuned SOTA flow models.

Review
Positive points:
In this paper, the authors propose an automated normalizing flow architecture search method which can find the best distribution for each layer from a set of given distribution sequences.
When constructing each layer of the model, the authors use a weighted summation of probability densities rather than of the individual distributions, to ensure the reversibility of the model and keep the calculation simple.
The authors optimize the model via an approximate upper bound instead of the KL divergence between the target distribution and the mixed flow, so that the model can escape local minima.

Negative points:
The novelty of this paper seems limited. The authors directly apply NAS techniques to the field of Normalizing Flow (NF). It would be better to clarify whether there are technical contributions with regard to the search algorithm.
There seem to be some errors in Eqn. (4): the mixed transformation should be the weighted sum of T_i^j rather than T_i. Eqn. (8) should use the expectation symbol instead of directly using a capital E.
As mentioned in Section 3.2, the global minimum may not be the desired architecture. Why can optimizing the upper bound find the desired architecture? It seems that both cases suffer from the same issue.
Since this paper is essentially a NAS paper, it is necessary to compare the proposed method with existing NAS methods, e.g., DARTS [a], ENAS [b], MnasNet [c].
The performance-cost trade-off seems to depend on a parameter lambda that needs to be manually adjusted. Thus, the impact of lambda should be investigated in the experiment section.

Reference:
[a] DARTS: Differentiable architecture search. ICLR 2019.
[b] Efficient neural architecture search via parameter sharing. ICML 2018.
[c] MnasNet: Platform-aware neural architecture search for mobile. CVPR 2019.
ICLR
Title Efficient Data Subset Selection to Generalize Training Across Models: Transductive and Inductive Networks Abstract Subset selection, in recent times, has emerged as a successful approach toward efficient training of models by significantly reducing the amount of data and computational resources required. However, existing methods employ discrete combinatorial and model-specific approaches which lack generalizability— for each new model, the algorithm has to be executed from the beginning. Therefore, for data subset selection for an unseen architecture, one cannot use the subset chosen for a different model. In this work, we propose SUBSELNET, a nonadaptive subset selection framework, which tackles these problems with two main components. First, we introduce an attention-based neural gadget that leverages the graph structure of architectures and acts as a surrogate to trained deep neural networks for quick model prediction. Then, we use these predictions to build subset samplers. This leads us to develop two variants of SUBSELNET. The first variant is transductive (called as Transductive-SUBSELNET) which computes the subset separately for each model by solving a small optimization problem. Such an optimization is still super fast, thanks to the replacement of explicit model training by the model approximator. The second variant is inductive (called as Inductive-SUBSELNET) which computes the subset using a trained subset selector, without any optimization. Most state-of-the-art data subset selection approaches are adaptive, in that the subset selection adapts as the training progresses, and as a result, they require access to the entire data at training time. Our approach, in contrast, is non-adaptive and does the subset selection only once in the beginning, thereby achieving resource and memory efficiency along with compute-efficiency at training time. Our experiments show that both the variants of our model outperform several methods on the quality of the subset chosen and further demonstrate that our method can be used for choosing the best architecture from a set of architectures. 1 INTRODUCTION In the last decade, deep neural networks have enhanced the performance of the state-of-the-art ML models dramatically. However, these neural networks often demand massive data to train, which renders them heavily contingent on availability of high performance computing machinery, e.g., GPUs, CPUs, RAMs, storage disks, etc. However, such resources entail heavy energy consumption, excessive CO2 emission and maintenance cost. Driven by this challenge, a recent body of work focus on suitably selecting a subset of instances, so that the model can be quickly trained using lightweight computing infrastructure (Boutsidis et al., 2013; Kirchhoff & Bilmes, 2014; Wei et al., 2014a; Bairi et al., 2015; Liu et al., 2015; Wei et al., 2015; Lucic et al., 2017; Mirzasoleiman et al., 2020b; Kaushal et al., 2019; Killamsetty et al., 2021a;b;c). However, these existing data subset selection algorithm are discrete combinatorial algorithms, which share three key limitations. (1) Scaling up the combinatorial algorithms is often difficult, which imposes significant barrier against achieving efficiency gains as compared to training with entire data. (2) Many of these approaches are adaptive in nature, i.e, the subset changes as the model training progresses. 
As a result, they require access to the entire training dataset and while they provide compute-efficiency, they do not address memory and resource efficiency challenges of deep model training. (3) The subset selected by the algorithm is tailored to train only a given specific model and it cannot be used to train another model. Therefore, the algorithm cannot be shared across different models. We discuss the related work in detail in Appendix A. 1.1 PRESENT WORK Responding to the above limitations, we develop SUBSELNET, a trainable subset selection framework, which— once trained on a set of model architectures and a dataset— can quickly select a small training subset such that it can be used to train a new (test) model, without a significant drop in accuracy. Our setup is non-adaptive in that it learns to select the subset before the training starts for a new architecture, instead of adaptively selecting the subset during the training process. We initiate our investigation by writing down an instance of combinatorial optimization problem that outputs a subset specifically for one given model architecture. Then, we gradually develop SUBSELNET, by building upon this setup. SUBSELNET comprises of the following novel components. Neural model approximator. The key blocker in scaling up a model-specific combinatorial subset selector across different architectures is the involvement of the model parameters as optimization variables along with the candidate data subset. To circumvent this blocker, we design a neural model approximator which aims to approximate the predictions of a trained model for any given architecture. Thus, such a model approximator can provide per instance accuracy provided by a new (test) model without explicitly training it. This model approximator works in two steps. First, it translates a given model architecture into a set of embedding vectors using graph neural networks (GNNs). Similar to the proposal of Yan et al. (2020) it views a given model architecture as a directed graph between different operations and, then outputs the node embeddings by learning a variational graph autoencoder (VAE) in an unsupervised manner. Due to such nature of the training, these node embeddings represent only the underlying architecture— they do not capture any signal from the predictions of the trained model. Hence, in the next step, we build a neural model encoder which uses these node embeddings and the given instance to approximate the prediction made by the trained model. The model encoder is a transformer based neural network which combines the node embedding using self-attention induced weights to obtain an intermediate graph representation. This intermediate representation finally combines with the instance vector x to provide the prediction of the trained architecture. Subset sampler. Having computed the prediction of a trained architecture, we aim to choose a subset of instances that would minimize the predicted loss and at the same time, offers a good representation of the data. Our subset sampler takes the approximate model output and an instance as input and computes a selection score. Then it builds a logit vector using all these selection scores, feeds it into a multinomial distribution and samples a subset from it. This naturally leads to two variants of the model. Transductive-SUBSELNET: The first variant is transductive in nature. 
Here, for each new architecture, we utilize the predictions from the model approximator to build a continuous surrogate of the original combinatorial problem and solve it to obtain the underlying selection scores. Thus, we still need to solve a fresh optimization problem for every new architecture. However, the direct predictions from the model approximator allow us to skip explicit model training. This makes this strategy extremely fast both in terms of memory and time. We call this transductive subset selector as Transductive-SUBSELNET. Inductive-SUBSELNET: In contrast to Transductive-SUBSELNET, the second variant does not require to solve any optimization problem. Consequently, it is extremely fast. Instead, it models the scores using a neural network which is trained across different architectures to minimize the entropy regularized sum of the prediction loss. We call this variant as Inductive-SUBSELNET. We compare our method against six state-of-the-art methods on three real world datasets, which show that Transductive-SUBSELNET (Inductive-SUBSELNET) provides the best (second best) trade off between accuracy and inference time as well as accuracy and memory usage, among all the methods. This is because (1) our subset selection method does not require any training at any stage of subset selection for a new model; and, (2) our approach is non-adaptive and does the subset selection before the training starts. In contrast, most state-of-the-art data subset selection approaches are adaptive, in that the subset selection adapts as the training progresses, and as a result, they require access to the entire data at training time. Finally, we design a hybrid version of the model, where given a budget, we first select a larger set of instances using Inductive-SUBSELNET, and then extract the required number of instances using Transductive-SUBSELNET. We observe that such a hybrid approach allow us to make a smooth transition between the trade off curves from Inductive-SUBSELNET to Transductive-SUBSELNET. 2 DEVELOPMENT OF PROPOSED MODEL: SUBSELNET In this section, we setup the notations and write down the combinatorial subset selection problem for efficient training. This leads us to develop a continuous optimization problem which would allow us to generalize the combinatorial setup across different models. 2.1 NOTATIONS We are given a set of training instances {(xi, yi)}i∈D where we use D to index the data. Here, xi ∈ Rdx are features and yi ∈ Y as the labels. In our experiments, we consider Y as a set of categorical labels. However, our framework can also be used for continuous labels. We use m to denote a neural architecture and represent its parameterization as mθ. We also useM to denote the set of neural architectures. Given an architecture m ∈ M, Gm = (Vm, Em) provides the graph representation of m, where the nodes u ∈ Vm represent the operations and the e = (um, vm) indicates an edge, where the output given by the operation represented by the node um is fed to one of the operands of the operation given by the node vm. Finally, we use H(·) to denote the entropy of a probability distribution and ℓ(mθ(x), y) as the cross entropy loss hereafter. 2.2 COMBINATORIAL SUBSET SELECTION FOR EFFICIENT LEARNING We are given a dataset {(xi, yi)}i∈D and a model architecture m ∈ M with its neural parameterization mθ. 
The goal of a subset selection algorithm is to select a small subset of instances S with |S| = b << |D| such that training mθ on the subset S gives nearly the same accuracy as training on the entire dataset D. Existing works (Killamsetty et al., 2021b; Sivasubramanian et al., 2021; Killamsetty et al., 2021a) adopt different strategies to achieve this goal, but all of them aim to simultaneously optimize the model parameters θ and the candidate subset S. At the outset, we may consider the following optimization problem:

minimize_{θ, S⊂D: |S|=b}  Σ_{i∈S} ℓ(mθ(xi), yi) − λ · DIVERSITY(S),   (1)

where b is the budget, DIVERSITY(S) measures the representativeness of S with respect to the whole dataset D, and λ is a regularization coefficient. One can use submodular functions (Fujishige, 2005; Iyer, 2015) such as facility location, graph cut, or log-determinant to model DIVERSITY(S). Here, λ trades off training loss against diversity. Such an optimization problem indeed provides an optimal subset S that results in high accuracy.

Bottlenecks of the combinatorial optimization. The optimization problem (1) poses the following challenges. (1) It demands explicit training of mθ, which can be expensive in terms of both memory and time. (2) Training mθ anew for every architecture m prevents the subset S from being generalizable: one needs to solve the optimization (1) again to find S for an unseen model architecture. We address these challenges by designing a neural surrogate of the objective (1), which lets the subset selection generalize across the efficient training of different models.

2.3 COMPONENTS OF SUBSELNET MODEL

Next, we sketch our proposed model SUBSELNET, which substitutes the optimization (1) with its neural surrogate. It consists of two key components: (i) a neural approximator of the trained model and (ii) a subset sampler. Figure 4 in Appendix B illustrates our model.

Approximator of the trained model mθ∗. First, we design a neural network Fϕ which approximates the predictions of the trained model mθ∗ for different architectures m ∈ M. Given the dataset {(xi, yi)}i∈D and a model architecture m ∈ M, we first feed the underlying DAG Gm into a graph neural network GNNα with parameters α, which outputs the representations of the nodes of Gm, i.e., Hm = {hu}u∈Vm. Next, we feed Hm and the instance xi into an encoder gβ:

Fϕ(Gm, xi) ≈ mθ∗(xi) for m ∈ M,   (2)
where Fϕ(Gm, xi) = gβ(GNNα(Gm), xi).   (3)

Here, ϕ = {α, β}, and θ∗ is the set of learned parameters of the model mθ on the dataset D.

Subset sampler. We design a subset sampler using a probabilistic model Pr_π(•). Given a budget |S| ≤ b, it sequentially draws instances S = {s1, ..., sb} from a softmax distribution over the logit vector π ∈ R^|D|, where π(xi, yi) indicates a score for the element (xi, yi). Having chosen the first t instances St = {s1, ..., st} from D, it draws the (t+1)-th element (x, y) from the remaining instances in D with probability proportional to exp(π(x, y)), and repeats this b times. Thus, the probability of selecting the ordered set of elements S = {s1, ..., sb} is given by

Pr_π(S) = Π_{t=0}^{b−1}  exp(π(x_{s_{t+1}}, y_{s_{t+1}})) / Σ_{τ∈D\S_t} exp(π(x_τ, y_τ))   (4)

We would like to highlight that we treat S as an ordered set of elements, selected in a sequential manner. However, such an order does not affect the trained model, which is inherently invariant to permutations of the training data; it only affects the choice of S.
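A minimal sketch of this sampler (sequential draws without replacement from the softmax of π, as in Eq. (4)); PyTorch-style, with illustrative names:

import torch

def sample_subset(pi, budget):
    # pi: tensor of shape [|D|] holding the selection scores pi(x_i, y_i)
    # Returns the ordered indices of S and the log-probability of the draw under Eq. (4).
    pi = pi.clone()
    selected = []
    log_prob = torch.tensor(0.0)
    for _ in range(budget):
        probs = torch.softmax(pi, dim=0)              # renormalize over the remaining items
        idx = torch.multinomial(probs, 1).item()      # draw the (t+1)-th element
        log_prob = log_prob + torch.log(probs[idx])
        selected.append(idx)
        pi[idx] = float('-inf')                       # exclude it from subsequent draws
    return selected, log_prob

The accumulated log-probability is the quantity that the log-derivative trick mentioned in Appendix C.3 differentiates when the sampler is trained.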
Training objective. Using Eqs. (2) and (4), we replace the combinatorial optimization problem in Eq. (1) with a continuous optimization problem across different model architectures m ∈ M. To that end, we define

Λ(S; m; π, Fϕ) = Σ_{i∈S} ℓ(Fϕ(Gm, xi), yi) − λ H(Pr_π(•))   (5)

minimize_{π, ϕ}  Σ_{m∈M} E_{S∼Pr_π(•)} [ Λ(S; m; π, Fϕ) + Σ_{i∈S} γ KL(Fϕ(Gm, xi), mθ∗(xi)) ]   (6)

Here, we use the entropy of the subset sampler, H(Pr_π(•)), to model the diversity of the samples in the selected subset. We call our neural pipeline, which consists of the model approximator Fϕ and the subset selector π, SUBSELNET. In the above, γ penalizes the difference between the output of the model approximator and the prediction made by the trained model, which allows us to generalize across the training of different models m ∈ M through the model Fϕ(Gm, xi).

2.4 TRANSDUCTIVE-SUBSELNET AND INDUCTIVE-SUBSELNET MODELS

The optimization (6) suggests that once Fϕ is trained, we can use it to compute the output of the trained model mθ∗ for an unseen architecture m′ and use it to compute π. This already removes a significant overhead of model training and facilitates fast computation of π. It leads us to two types of models, based on how we compute π, as follows.

Transductive-SUBSELNET. The first variant is transductive in terms of the computation of π. Here, once we have trained the model approximator Fϕ, we compute π by explicitly solving the optimization problem with respect to π every time we wish to select a data subset for a new architecture. Given a trained model Fϕ and a new model architecture m′ ∈ M, we solve the optimization problem min_π E_{S∼Pr_π(•)}[Λ(S; m′; π, Fϕ)] to find the subset sampler Pr_π at inference time for the new architecture m′. Such an optimization still consumes time during inference. However, it is still significantly faster than the combinatorial methods (Killamsetty et al., 2021b;a; Mirzasoleiman et al., 2020a; Sivasubramanian et al., 2021), thanks to sidestepping explicit model training using the model approximator.

Inductive-SUBSELNET. In contrast to the transductive model, the inductive model does not require explicit optimization of π in the face of a new architecture. To that aim, we approximate π using a neural network πψ. It takes two signals as inputs, the dataset D and the outputs of the model approximator for the different instances {Fϕ(Gm, xi) | i ∈ D}, and outputs a score πψ(xi, yi) for each instance. Under Inductive-SUBSELNET, the optimization (6) becomes:

minimize_{ψ, ϕ}  Σ_{m∈M} E_{S∼Pr_{πψ}(•)} [ Λ(S; m; πψ, Fϕ) + Σ_{i∈S} γ KL(Fϕ(Gm, xi), mθ∗(xi)) ]   (7)

Such an inductive model can select an optimal distribution of the subset that should be used to efficiently train any model mθ, without explicitly training θ or searching for the underlying subset.
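As a rough illustration of the transductive step, the optimization min_π E_{S∼Pr_π}[Λ(S; m′; π, Fϕ)] can be approached with a score-function (log-derivative) gradient estimate, reusing sample_subset from the sketch above. Everything below is an assumption-laden sketch: the step count and learning rate are arbitrary, and the entropy of the full subset distribution H(Pr_π) is approximated by the entropy of the per-item categorical purely for illustration.

import torch

def ordered_log_prob(pi, subset):
    # Differentiable log Pr_pi(S) of an ordered subset, following Eq. (4)
    remaining = torch.ones_like(pi, dtype=torch.bool)
    log_p = torch.tensor(0.0)
    for idx in subset:
        logits = pi.masked_fill(~remaining, float('-inf'))
        log_p = log_p + torch.log_softmax(logits, dim=0)[idx]
        remaining[idx] = False
    return log_p

def fit_pi_for_new_architecture(predicted_losses, budget, lam=0.1, steps=200, lr=0.05):
    # predicted_losses: tensor [|D|] of l(F_phi(G_m', x_i), y_i) from the model approximator
    pi = torch.zeros_like(predicted_losses, requires_grad=True)
    opt = torch.optim.Adam([pi], lr=lr)
    for _ in range(steps):
        subset, _ = sample_subset(pi.detach(), budget)         # draw S ~ Pr_pi
        loss_on_subset = predicted_losses[subset].sum()         # predicted-loss term of Lambda
        entropy = torch.distributions.Categorical(logits=pi).entropy()
        # single-sample REINFORCE surrogate for the loss term, minus the entropy regularizer
        surrogate = loss_on_subset * ordered_log_prob(pi, subset) - lam * entropy
        opt.zero_grad()
        surrogate.backward()
        opt.step()
    return pi.detach()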
3 NEURAL PARAMETERIZATION OF SUBSELNET

In this section, we describe the neural parameterization of SUBSELNET, which consists of two key components, Fϕ and πψ. Specifically, Transductive-SUBSELNET has only one neural component, Fϕ, whereas Inductive-SUBSELNET has both Fϕ and πψ.

3.1 NEURAL PARAMETERIZATION OF Fϕ

The approximator Fϕ consists of two components: (i) a graph neural network GNNα which maps Gm, the DAG of an architecture, to the node representations Hm = {hu}u∈Vm, and (ii) a model encoder gβ which takes Hm and the instance xi as input and approximates mθ∗(xi), i.e., the prediction made by the trained model. Therefore, Fϕ(Gm, xi) = gβ(GNNα(Gm), xi), with ϕ = {α, β}.

Computation of architecture embedding using GNNα. Given a model m ∈ M, we compute the representations Hm = {hu | u ∈ Vm} using a graph neural network GNNα parameterized by α, following the proposal of Yan et al. (2020). We first compute a feature vector fu for each node u ∈ Vm using the one-hot encoding of the associated operation (e.g., max, sum, etc.) and then feed it into a neural network to compute an initial node representation:

hu[0] = INITNODEα(fu)   (8)

Then, we use a message passing network, which collects signals from the neighborhood of each node and recursively computes the node representations (Yan et al., 2020; Xu et al., 2018b; Gilmer et al., 2017). Given a maximum number of recursive layers K and a node u, we compute the node embeddings Hm = {hu | u ∈ Vm} by gathering information from the k < K hops using K recursive layers as follows:

h(u,v)[k−1] = EDGEEMBEDα(hu[k−1], hv[k−1])
h′u[k−1] = SYMMAGGRα({ h(u,v)[k−1] | v ∈ Nbr(u) })
hu[k] = UPDATEα(hu[k−1], h′u[k−1])   (9)

Here, Nbr(u) is the set of neighbors of u. We use SYMMAGGR as a simple sum aggregator, and both UPDATE and EDGEEMBED are injective mappings, as used in (Xu et al., 2018b). Note that the trainable parameters of EDGEEMBED, SYMMAGGR and UPDATE are decoupled; collectively, they form the set of parameters α. Finally, we obtain our node representations as:

hu = [hu[0], ..., hu[K−1]]   (10)

Model encoder gβ. Having computed the architecture representation {hu | u ∈ Vm}, we next design the model encoder, which leverages these embeddings to predict the output of the trained model mθ∗(xi). To this aim, we develop a model encoder gβ parameterized by β that takes Hm and xi as input and attempts to predict mθ∗(xi), i.e., gβ(Hm, xi) ≈ mθ∗(xi). It consists of three steps. In the first step, we generate a permutation-invariant order on the nodes. Next, we feed the representations {hu} in this order into a self-attention based transformer layer. Finally, we combine the output of the transformer and the instance xi using a feedforward network to approximate the model output.

Node ordering using BFS order. We first sort the nodes using a breadth-first-search (BFS) order ρ. Similar to You et al. (2018), this sorting method produces a permutation-invariant sequence of nodes and captures subtleties like skip connections in the network structure Gm.

Attention layer. Given the BFS order ρ, we pass the representations Hm = {hu | u ∈ Vm} in the sequence ρ through a self-attention based transformer network. Here, the Query, Key and Value functions are realized by matrices Wquery, Wkey, Wvalue ∈ R^{dim(h)×k}, where k is a tunable width. Thus, for each node u ∈ Vm, we have:

Query(hu) = W⊤query hu,  Key(hu) = W⊤key hu,  Value(hu) = W⊤value hu   (11)

Using these quantities, we compute an attention-weighted vector Attu given by:

Attu = W⊤c Σ_v a_{u,v} Value(hv),  with  a_{u,v} = SOFTMAX_v( Query(hu)⊤ Key(hv) / √k )   (12)

Here, k is the dimension of the latent space, the softmax operation is over the nodes v, and Wc ∈ R^{k×dim(h)}. Subsequently, for each node u, we use a feedforward network, preceded and succeeded by layer normalization operations, given by the following set of equations:

ζu,1 = LN(Attu + hu; γ1, γ2),  ζu,2 = W⊤2 RELU(W⊤1 ζu,1),  ζu,3 = LN(ζu,1 + ζu,2; γ3, γ4)

Here, LN is the layer normalization operation (Ba et al., 2016). Finally, we feed the vector ζu,3 for the last node u in the sequence ρ, i.e., u = ρ(|Vm|), along with the feature vector xi into a feed-forward network parameterized by WF to model the prediction mθ∗(xi). Thus, the final output of the model encoder gβ(Hm, xi) is given by

om,xi = FFβ2(ζ_{ρ(|Vm|),3}, xi)   (13)

Here, W• and γ• are trainable parameters and collectively form the set of parameters β.
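A compact sketch of this encoder (a single attention head acting on the BFS-ordered node embeddings, Eqs. (11)-(13)); the dimensions follow Appendix C.3, while the module and variable names, the output head, and the use of log-probabilities for the later KL objective are our illustrative choices:

import torch
import torch.nn as nn

class ModelEncoder(nn.Module):
    def __init__(self, d_h=16, k=8, d_x=2048, d_hidden=256, n_classes=10):
        super().__init__()
        self.W_query = nn.Linear(d_h, k, bias=False)
        self.W_key = nn.Linear(d_h, k, bias=False)
        self.W_value = nn.Linear(d_h, k, bias=False)
        self.W_c = nn.Linear(k, d_h, bias=False)
        self.ln1 = nn.LayerNorm(d_h)
        self.ln2 = nn.LayerNorm(d_h)
        self.ff = nn.Sequential(nn.Linear(d_h, 64), nn.ReLU(), nn.Linear(64, d_h))
        self.head = nn.Sequential(nn.Linear(d_h + d_x, d_hidden), nn.ReLU(),
                                  nn.Linear(d_hidden, n_classes))

    def forward(self, H, x):
        # H: [num_nodes, d_h] node embeddings in BFS order; x: [d_x] instance embedding
        q, kk, v = self.W_query(H), self.W_key(H), self.W_value(H)
        a = torch.softmax(q @ kk.T / kk.shape[-1] ** 0.5, dim=-1)     # attention weights a_{u,v}
        att = self.W_c(a @ v)                                         # Att_u, Eq. (12)
        z1 = self.ln1(att + H)                                        # zeta_{u,1}
        z3 = self.ln2(z1 + self.ff(z1))                               # zeta_{u,3}
        logits = self.head(torch.cat([z3[-1], x], dim=-1))            # last BFS node with x, Eq. (13)
        return torch.log_softmax(logits, dim=-1)                      # log-probabilities o_{m,x}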
3.2 NEURAL ARCHITECTURE OF INDUCTIVE-SUBSELNET

We approximate π using a neural network πψ which takes three inputs – (xj, yj), the corresponding output of the model approximator, i.e., om,xj = Fϕ(Gm, xj), and the node representation matrix Hm – and provides a positive selection score πψ(Hm, xj, yj, om,xj). In practice, πψ is a three-layer feed-forward network with Leaky-ReLU activations after the first two layers and a sigmoid activation at the last layer.

4 PARAMETER ESTIMATION AND INFERENCE

Given a dataset {(xi, yi) | i ∈ D} and the outputs of the trained models {mθ∗(xi)}i∈D, our goal is to estimate ϕ and π (resp. ψ) for the transductive (inductive) model. We first illustrate the bottlenecks that prevent us from estimating these parameters with end-to-end training. Then, we introduce a multi-stage training method to overcome these limitations. Finally, we present the inference method.

4.1 BOTTLENECK FOR END TO END TRAINING

End-to-end optimization of the above problem is difficult for the following reasons. (i) Our architecture representation Hm only represents the architectures and thus should be independent of the architecture parameters θ and the instances x; end-to-end training can make it sensitive to these quantities. (ii) To enable the model approximator Fϕ to accurately fit the output of the trained model mθ, we need explicit training of ϕ with the target mθ; adding the corresponding loss as an additional regularizer imposes additional hyperparameter tuning.

4.2 MULTI-STAGE TRAINING

In our multi-stage training method, we first train the model approximator Fϕ by minimizing the KL divergence between its predictions and the gold output probabilities, and then train our subset sampler Prπ (resp. Prπψ) for the transductive (inductive) model while also fine-tuning ϕ.

Training the model approximator Fϕ. We train Fϕ in two steps. In the first step, we perform unsupervised training of GNNα using a graph variational autoencoder (GVAE). This ensures that the architecture representations Hm remain insensitive to the model parameters. We build the encoder and decoder of our GVAE by following existing works on graph VAEs (Yan et al., 2020) in the context of graph-based modeling of neural architectures. Given a graph Gm, the encoder q(Zm | Gm) takes the node embeddings {hu}u∈Vm and maps them into the latent space Zm = {zu}u∈Vm. Specifically, we model the encoder q(Zm | Gm) as q(zu | Gm) = N(µ(hu), Σ(hu)), where both µ and Σ are neural networks. Given a latent representation Zm = {zu}u∈Vm, the decoder models a generative distribution of the graph Gm, where the presence of an edge is modeled as a Bernoulli distribution BERNOULLI(σ(z⊤u zv)). Thus, we model the decoder as:

p(Gm | Z) = Π_{(u,v)∈Em} σ(z⊤u zv) · Π_{(u,v)∉Em} [1 − σ(z⊤u zv)]   (14)

Here, σ is a parameterized sigmoid function. Finally, we estimate α, µ, Σ and σ by maximizing the evidence lower bound (ELBO):

max_{α,µ,Σ,σ}  E_{Z∼q(• | Gm)}[ log p(Gm | Z) ] − KL( q(• | Gm) || prior(•) )   (15)

Next, we train our model encoder gβ by minimizing the KL divergence between the approximated prediction gβ(Hm, xi) and the ground-truth prediction mθ∗(xi), where both quantities are probability distributions over the classes. Hence, the training problem is:

minimize_β  Σ_{i∈D, m∈M} KL( mθ∗(xi) || gβ(Hm, xi) )   (16)
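A minimal sketch of this second training stage (Eq. (16)), assuming the predictions mθ∗(xi) of the trained architectures have been cached as probability vectors and reusing the ModelEncoder sketch above; the loop structure (batching omitted), epoch count, and names are illustrative, while the AdamW settings follow Appendix C.3:

import torch
import torch.nn.functional as F

def train_model_encoder(encoder, arch_embeddings, data, gold_probs, epochs=50, lr=1e-3):
    # arch_embeddings: dict m -> H_m, BFS-ordered node embeddings from the frozen GNN
    # data:            list of (x_i, y_i) pairs from D
    # gold_probs:      dict (m, i) -> probability vector m_theta*(x_i)
    opt = torch.optim.AdamW(encoder.parameters(), lr=lr, weight_decay=0.005)
    for _ in range(epochs):
        for m, H in arch_embeddings.items():
            for i, (x, _) in enumerate(data):
                log_pred = encoder(H, x)                            # log g_beta(H_m, x_i)
                target = gold_probs[(m, i)]                         # m_theta*(x_i)
                loss = F.kl_div(log_pred, target, reduction='sum')  # KL(m_theta*(x_i) || g_beta)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return encoder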
Training of the subset sampler. Finally, we fine-tune gβ and train π by solving (6) for Transductive-SUBSELNET (and likewise train πψ by solving (7) for Inductive-SUBSELNET).

4.3 INFERENCE

During inference, our goal is to select a subset S with |S| = b for a new model m′, which facilitates efficient training of m′. As discussed in Section 2.4, we compute π for Transductive-SUBSELNET by explicitly solving the optimization problem min_π E_{S∼Pr_π(•)}[Λ(S; m′; π, Fϕ)] and then drawing S ∼ Pr_π(•). For Inductive-SUBSELNET, we draw S ∼ Pr_{πψ̂}(•), where ψ̂ is the value of ψ learned during training.

4.4 OVERVIEW OF TRAINING AND INFERENCE ROUTINES

Algorithms 1 and 2 summarize the training and inference procedures.

Algorithm 1 Training procedure
function TRAINTRANSDUCTIVE(D, M, {θ∗})
  α̂, β̂, Hm ← TRAINAPPROX(D, M, {θ∗})
function TRAININDUCTIVE(D, M, {θ∗})
  α̂, β̂, Hm ← TRAINAPPROX(D, M, {θ∗})
  o ← [gβ̂(Hm, xi)]_{i,m}
  ψ̂ ← TRAINPI(o, {Hm}, {xi})
function TRAINAPPROX(D, M, {θ∗})
  α̂ ← TRAINGNN(M)
  for m ∈ Mtrain do
    Hm ← GNNα̂(m)
    POS ← BFSORDERING(Gm)
    β̂ ← TRAINMODELENC({xi}, POS, {θ∗})

Algorithm 2 Inference procedure
function INFERTRANSDUCTIVE(D, α̂, β̂, m′)
  Hm′ ← GNNα̂(m′)
  Fϕ(Gm′, xi) ← gβ̂(Hm′, xi) ∀i ∈ D
  π∗ ← argmin_π E_{S∼Pr_π(•)}[Λ(S; m′; π; Fϕ)]
  S∗ ∼ Pr_{π∗}(•)
  TRAINNEWMODEL(m′; S∗)
function INFERINDUCTIVE(D, α̂, β̂, m′)
  Hm′ ← GNNα̂(m′)
  Fϕ(Gm′, xi) ← gβ̂(Hm′, xi) ∀i ∈ D
  Compute πψ̂(xi, yi) ∀i ∈ D
  S∗ ∼ Pr_{πψ̂}(•)
  TRAINNEWMODEL(m′; S∗)

Training Subroutines. The training phase for both variants first uses the TRAINAPPROX routine to train the model approximator given the dataset, the trained model parameters, and the set of neural architectures. Internally, this routine calls the TRAINGNN subroutine to train the parameters α of the GNN, the BFSORDERING subroutine to reorder the embeddings by BFS order, and the TRAINMODELENC subroutine to train the parameters β of the attention-based model encoder. The TRAININDUCTIVE routine further calls the TRAINPI subroutine to train the parameters of the neural subset selector.

Inference Subroutines. Given an unseen architecture and the parameters of the trained networks, the inference phase for both variants of SUBSELNET first generates the model encoder output for all the data points. After this, the INFERTRANSDUCTIVE routine solves the optimization problem on π explicitly for the unseen architecture and selects the subset from the dataset. On the other hand, INFERINDUCTIVE utilizes the trained parameters of the neural subset selector. Finally, both routines call TRAINNEWMODEL to train and evaluate the unseen architecture on the selected subset.

5 EXPERIMENTS

In this section, we provide a comprehensive evaluation of SUBSELNET against several strong baselines on three real-world datasets. In Appendix D, we present additional results.

5.1 EXPERIMENTAL SETUP

Datasets. We use the FMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2014) and CIFAR100 (Krizhevsky et al., 2009) datasets for our experiments. We transform an input image Xi to a vector xi of dimension 2048 by feeding it to a pre-trained ResNet50 v1.5 model and using the output from the penultimate layer as the image representation.
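For instance, such image embeddings can be extracted with torchvision as below; the exact ResNet50 v1.5 checkpoint and preprocessing used by the authors are not specified, so the standard ImageNet transforms here are an assumption:

import torch
import torchvision.models as models
import torchvision.transforms as T

# Pre-trained ResNet50; removing the final classification layer exposes the
# 2048-dimensional penultimate representation used as x_i.
resnet = models.resnet50(pretrained=True)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image):
    x = preprocess(image).unsqueeze(0)         # [1, 3, 224, 224]
    return feature_extractor(x).flatten(1)     # [1, 2048]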
Model architectures and baselines. We use model architectures from NAS-Bench-101 (Ying et al., 2019) for our experiments. We compare Transductive-SUBSELNET and Inductive-SUBSELNET against two non-adaptive subset selection methods – (i) Facility location (Fujishige, 2005; Iyer, 2015), where we maximize FL(S) = Σ_{j∈D} max_{i∈S} x⊤i xj to find S, and (ii) Pruning (Sorscher et al., 2022) – four adaptive subset selection methods – (iii) Glister (Killamsetty et al., 2021b), (iv) Grad-Match (Killamsetty et al., 2021a), (v) EL2N (Paul et al., 2021), and (vi) GraNd (Paul et al., 2021) – and (vii) Full selection, where we use the complete training data. The non-adaptive subset selectors select the subset before training begins and thus never access the rest of the training set again during the training iterations. On the other hand, the adaptive subset selectors refine the choice of subset during the training iterations and thus need to access the full training set at each training iteration. Appendix C contains additional details about the baselines.

Evaluation protocol. We split the model architectures M into 60% training (Mtr), 20% validation (Mval) and 20% test (Mtest) folds. Similarly, we split the dataset D into Dtr, Dval and Dtest. We present Mtr, Mval, Dtr and Dval to our method and estimate ϕ̂ and ψ̂ (for the Inductive-SUBSELNET model). None of the baseline methods supports a generalizable learning protocol across different model architectures, so they cannot leverage the training architectures at test time. Given an architecture m′ ∈ Mtest, we select the subset S from Dtr using our subset sampler (Prπ for Transductive-SUBSELNET or Prπψ̂ for Inductive-SUBSELNET). Similarly, all the non-adaptive subset selectors select S ⊂ Dtr using their own algorithms. Once S is selected, we train the test models m′ ∈ Mtest on S. We perform our experiments with different |S| = b ∈ (0.005|D|, 0.05|D|) and compare the performance of the different methods using three quantities: (1) Accuracy Pr(y = ŷ), measured as

(1 / |Dtest|) Σ_{i∈Dtest} Σ_{m′∈Mtest} 1( argmax_j m′θ∗(xi)[j] = yi ).

(2) Computational efficiency, i.e., the speedup achieved with respect to training with the full dataset, measured as Tf / T. Here, Tf is the time taken for training with the full dataset, and T is the time taken for the entire inference task, i.e., the average time for selecting subsets across the test models m′ ∈ Mtest plus the average training time of these test models on the respective selected subsets. (3) Resource efficiency, in terms of the amount of memory consumed during the inference task described in item (2), measured as ∫_0^T memory(t) dt, where memory(t) is the amount of memory consumed at time t.

5.2 RESULTS

Comparison with baselines. Here, we compare the different methods in terms of the trade-off between accuracy and computational efficiency as well as accuracy and resource efficiency. In Figure 1, we probe the variation between these quantities by varying the size of the selected subset |S| = b ∈ (0.005|D|, 0.05|D|). We make the following observations. (1) Our methods trade off accuracy against computational efficiency, as well as accuracy against resource efficiency, more effectively than all the other methods. For FMNIST, both variants of our method strikingly attain 75% accuracy while being 100 times faster than full selection. Transductive-SUBSELNET performs slightly better than Inductive-SUBSELNET in terms of the overall trade-off between accuracy and efficiency for the FMNIST and CIFAR10 datasets. However, for CIFAR100, Transductive-SUBSELNET performs significantly better than Inductive-SUBSELNET.
The time taken for both Transductive-SUBSELNET and Inductive-SUBSELNET seems comparable— this is because the subset selection time for both of them are significantly less than the final training time on the selected subset. (2) EL2N is the second best method. It provides the best trade-off between accuracy and time as well as accuracy and GPU memory, among all the baselines. It aims at choosing difficult training instances having high prediction error. As a result, once trained on them, the model can predict the labels of easy instances too. However, it chooses instances after running the initial few epochs. (3) FL adopts a greedy algorithm for subset selection and therefore, it consumes a large time and memory during subset selection itself. Consequently, the overall efficiency significantly decreases although the complexity of the training time on the selected subset remains the same as our models in terms of time and memory. (4) In addition to EL2N, Glister, Grad-Match and GraNd are adaptive subset selection methods that operate with moderately small (> 5%) subset sizes. In a region, where the subset size is extremely small, i.e., 1% − 5%, they perform very poorly. Moreover, they maximize a monotone function at each gradient update step, which results in significant overhead in terms of time. These methods process the entire training data to refine the choice of the subset and consequently, they end up consuming a lot of memory. (5) GraNd selects the instances having high uncertainty after running each model for five epochs and often the model is not well trained by then. Finer analysis of the inference time. Next, we demarcate the subset selection phase from the training phase of the test models on the selected subset during the inference time analysis. Table 2 summarizes the results for top three non-adaptive subset selection methods for b = 0.005|D| on CIFAR100. We observe that: (1) the final training times of all three methods are roughly same; (2) the selection time for TransductiveSUBSELNET is significantly more than Inductive-SUBSELNET, although it remains extremely small as compared to the final training on the inferred subset; and, (3) the selection time of FL is large— as close as 323% of the training time. Hybrid-SUBSELNET. From Figure 1, we observe that Transductive-SUBSELNET performs significantly better than Inductive-SUBSELNET. However, since Transductive-SUBSELNET solves a fresh optimization problem for each new architecture, it performs better at the cost of time and GPU memory. On the other hand, InductiveSUBSELNET performs significantly worse as it relies on a trained neural network to learn the same optimization problem. Here, we design a hybrid version of our model, called as Hybrid-SUBSELNET. Here, given the budget of the subset b, we first choose B > b instances using InductiveSUBSELNET and the final b instances by running the explicit optimization routines in Transductive-SUBSELNET. Figure 3 sum- marizes the results for B = {25K, 30K, 35K, 45K, 50K} . We observe that the trade off curves for the Hybrid-SUBSELNET lie in between Inductive-SUBSELNET and Transductive-SUBSELNET. For low value of B, i.e., B = 25K, the trade off line of Hybrid-SUBSELNET remains close to Inductive-SUBSELNET. As we increase B, the trade-off curve of accuracy vs speed up as well as the accuracy vs GPU usage becomes better, which allows Hybrid-SUBSELNET to smoothly transition from the trade off curve of Inductive-SUBSELNET to Transductive-SUBSELNET. 
At B = 45K, the trade-off curve almost coincides with Transductive-SUBSELNET. Such properties allow a user to choose an appropriate B that can accurately correspond to a target operating point in the form of (Accuracy, Speed up) or (Accuracy, memory usage). 6 CONCLUSION In this work, we develop SUBSELNET, a subset selection framework, which can be trained on a set of model architectures, to be able to predict a suitable training subset before training a model, for an unseen architecture. To do so, we first design a neural model approximator, which predicts the output of a new candidate architecture without explicitly training it. We use that output to design transductive and inductive variants of our model. The transductive model solves a small optimization problem to compute the subset for a new architecture m every single time. In contrast, the inductive model resorts to a neural subset sampler instead of an optimizer. Our work does not incorporate the gradients of the trained model in model approximator and it would be interesting to explore its impact on the subset selection. Further we can extend our setup to an adaptive setting, where we can incorporate signals from different epochs with a sequence encoder to train a subset selector. 7 ETHICS STATEMENT We do not foresee any negative impact of our work from ethics viewpoint. 8 REPRODUCIBILITY STATEMENT We uploaded the code in supplementary material. Details of implementation are given in Appendix C. A RELATED WORK Our work is closely related to representation learning for model architectures, network architecture search, data subset selection. Representation learning for model architectures. Recent work in network representation learning use GNN based encoder-decoder to encapsulate the local structural information of a neural network into a fixed-length latent space (Zhang et al., 2019; Ning et al., 2020; Yan et al., 2020; Lukasik et al., 2021). By employing an asynchronous message passing scheme over the directed acyclic graph (DAG), GNN-based methods model the propagation of input data over the actual network structure. Apart from encodings based solely on the structure of the network, White et al. (2020); Yan et al. (2021) produce computation-aware encodings that map architectures with similar performance to the same region in the latent space. Following the work of Yan et al. (2020), we use a graph isomorphism network as an encoder but instead of producing a single graph embedding, our method produces a collection of node embeddings, ordered by breadth-first-search (BFS) ordering of the nodes. Our work also differs in that we do not employ network embeddings to perform downstream search strategies. Instead, architecture embeddings are used in training a novel model approximator that predicts the logits of a particular architecture, given an architecture embedding and a data embedding. Network architecture search. There is an ever-increasing demand for the automatic search of neural networks for various tasks. The networks discovered by NAS methods often come from an underlying search space, usually designed to constrain the search space size. One such method is to use cell-based search spaces (Luo et al., 2018; Zoph et al., 2017; Liu et al., 2017; Pham et al., 2018; Ying et al., 2019; Dong & Yang, 2020). Although we utilize the NAS-Bench-101 search space for architecture retrieval, our work is fundamentally different from NAS. 
In contrast to the NAS methods, which search for the best possible architecture from the search space using either sampling or gradient-descent based methods (Baker et al., 2017; Zoph & Le, 2016; Real et al., 2017; 2018; Liu et al., 2018; Tan et al., 2018), our work focuses on efficient data subset selection given a dataset and an architecture, which is sampled from a search space. Our work utilizes graph representation learning on the architectures sampled from the mentioned search spaces to project an architecture under consideration to a continuous latent space, utilize the model expression from the latent space as proxies for the actual model and proceed with data subset selection using the generated embedding, model proxy and given dataset. Data subset selection. Data subset selection is widely used in literature for efficient learning, coreset selection, human centric learning, etc. Several works cast the efficient data subset selection task as instance of submodular or approximate-submodular optimization problem (Killamsetty et al., 2021a; Wei et al., 2014a;b;c; Killamsetty et al., 2021b; Sivasubramanian et al., 2021). Another line of work focus on selecting coresets which are expressed as the weighted combination of subset of data, approximating some characteristics, e.g., loss function, model prediction (Feldman, 2020; Mirzasoleiman et al., 2020b; Har-Peled & Mazumdar, 2004; Boutsidis et al., 2013; Lucic et al., 2017). Our work is closely connected to simultaneous model learning and subset selection (De et al., 2021; 2020; Sivasubramanian et al., 2021). These existing works focus on jointly optimizing the training loss, with respect to the subset of instances and the parameters of the underlying model. Among them (De et al., 2021; 2020) focus on distributing decisions between human and machines, whereas (Sivasubramanian et al., 2021) aims for efficient learning. However, these methods adopt a combinatorial approach for selecting subsets and consequently, they are not generalizable across architectures. In contrast, our work focuses on differentiable subset selection mechanism, which can generalize across architectures. B ILLUSTRATION OF SUBSELNET C ADDITIONAL DETAILS ABOUT EXPERIMENTAL SETUP C.1 DATASET Datasets (D). Architectures (M). Although our task is not Neural Architecture Search, we leverage the NASBench101 search space as an architecture pool. The cell-based search space was designed for the benchmarking of various NAS methods. It consists of 423, 624 unique architectures with the following constraints – (1) number of nodes in each cell is at most 7, (2) number of edges in each cell is at most 9, (3) barring the input and output, there are three unique operations, namely 1× 1 convolution, 3× 3 convolution and 3× 3 max-pool. We utilize the architectures from the search space in generating the sequence of embeddings along with sampling architectures for the training and testing of the encoder and datasets for the subset selector. C.2 IMPLEMENTATION DETAILS ABOUT BASELINES Facility Location (FL). We implemented facility location on all the three datasets using the apricot 1 library. The similarity matrix was computed using Euclidean distance between data points, and the objective function was maximized using the naive greedy algorithm. Pruning. It selects a subset from the entire dataset based on the uncertainty of the datapoints while partial training. In our setup, we considered ResNet-18 as a master model, which is trained on each dataset for 5 epochs. 
Post training, the uncertainty measure is calculated based on the class probabilities, and the points with the highest uncertainty are included in the subset. We train the master model at a learning rate of 0.025.

Glister and Grad-Match. We implemented GLISTER (Killamsetty et al., 2021b) and Grad-Match (Killamsetty et al., 2021a) using the CORDS library. We trained the models for 50 epochs, using a batch size of 20, and selected the subset after every 10 epochs. The loss was minimized using SGD with a learning rate of 0.01, momentum of 0.9, and weight decay with a regularization constant of 5 × 10−4. We used cosine annealing for scheduling the learning rate with Tmax of 50 epochs, and used 10% of the training data as the validation set. Details of the method-specific hyperparameters are as follows. Glister uses a greedy selection approach to minimize a bi-level objective function. In our implementation, we used stochastic greedy optimization with learning rate 0.01, applied to the data points of each mini-batch. Online-Glister approximates the objective function with a Taylor series expansion up to an arbitrary number of terms to speed up the process; we used 15 terms in our experiments. Grad-Match applies the orthogonal matching pursuit (OMP) algorithm to the data points of each mini-batch to match the gradient of a subset to that of the entire training/validation set. Here, the learning rate is set to 0.01. The regularization constant in OMP is 1.0 and the algorithm optimizes the objective function within an error margin of 10−4.

GraNd. This is an adaptive subset selection strategy in which the norm of the gradient of the loss function is used as a score to rank a data point. The gradient scores are computed after the model has trained on the full dataset for the first few epochs. For the remaining epochs, the model is trained only on the top-k data points, selected using the gradient scores. In our implementation, we let the model train on the full dataset for the first 5 epochs, and computed the gradient of the loss only with respect to the last fully connected layer.

EL2N. When the loss function used to compute the GraNd scores is the cross entropy loss, the norm of the gradient for a data point x can be approximated by E||p(x) − y||_2, where p(x) is the discrete probability distribution over the classes, computed by taking the softmax of the logits, and y is the one-hot encoded true label corresponding to the data point x. Similar to our implementation of GraNd, we computed the EL2N scores after letting the models train on the full data for the first 5 epochs.

1 https://github.com/jmschrei/apricot

C.3 IMPLEMENTATION DETAILS ABOUT OUR MODEL

GNNα. As we utilize the NAS-Bench-101 space as the underlying set of neural architectures, each computational node in an architecture carries one of five operations, represented by the one-hot-encoded feature vector fu. Since the space is cell-based, there is an injective mapping between the neural architecture and the cell structure. We aim to produce a sequence of embeddings for the cell, which in turn corresponds to that of the architecture. For each architecture, we use the initial feature fu ∈ R5 in (8) as a five-dimensional one-hot encoding of the operation. This is fed into INITNODE (8) to obtain a 16-dimensional output. Here, INITNODE consists of a 5 × 16 linear layer, a ReLU, and a 16 × 16 linear layer cascaded together. Each of EDGEEMBED and UPDATE consists of a 5 × 128 linear-BatchNorm-ReLU block cascaded with a 128 × 16 linear layer.
Moreover, the symmetric aggregator is a sum aggregator. We repeat this layer K times, and each iteration gathers information from k < K hops. After all the iterations, we generate an embedding for each node, and following (You et al., 2018) we use the BFS-tree based node-ordering scheme to generate the sequence of embeddings for each network. The GVAE-based architecture was trained for 10 epochs with the number of recursive layers K set to 5, and the Adam optimizer was used with learning rate of 10−3. The entire search space was considered as the dataset, and a batch-size of 32 was used. Post training, we call the node embeddings collectively as the architecture representation. To train the latent space embeddings, the parameters α are trained in an encoder-decoder fashion using a variational autoencoder. The mean µ and variance σ on the final node embeddings hu are: µ = FCN ([ hu ] u∈Vm ) and σ = exp ( FCN ([ hu ] u∈Vm )) The decoder aims to reconstruct the original cell structure (i.e the nodes and the corresponding operations), which are one-hot encoded. It is modeled using single-layer fully connected networks followed by a sigmoid layer. Model Encoder gβ . The model encoder gβ is essentially a single-head attention block that acts on a sequence of node embeddings Hm = {hu|u ∈ Vm}. The Query, Key and Value matrices, Wquery, Wkey and Wvalue ∈ R16×8, and the matrix WC ∈ R8×16. The fully connected network acting on ζu,1 consists of matrices W1 ∈ R16×64 and W2 ∈ R64×16. All the trainable matrices along with the layer normalizations were implemented using the Linear and LayerNorm functions in Pytorch. The last item of the output sequence ζu,3 is concatenated with the data embedding xi and fed to another 2-layer fully-connected network with hidden dimension 256 and dropout probability of 0.3. The model encoder is trained by minimizing the KL-divergence between gβ(Hm,xi) and mθ∗(xi). We used an AdamW optimizer with learning rate of 10−3, ϵ = 10−8, betas = (0.9, 0.999) and weight decay of 0.005. We also used Cosine Annealing to decay the learning rate, and used gradient clipping with maximum norm set to 5. Figure 6 shows the convergence of the outputs of the model encoder gβ(Hm,xi) with the outputs of the model mθ∗(xi). Neural Network πψ. The inductive model is a three-layer fully-connected neural network with two Leaky ReLU activations and a sigmoid activation after the last layer. The input to πψ is the concatenation (Hm;om,i;xi; yi). The hidden dimensions of the two intermediary layers are 64 and 16, and the final layer is a single neuron that outputs the score corresponding to a data point xi. While training πψ we add a regularization term λ′( ∑ i∈D πψ(Hm,om,i,xi, yi)− |S|) to ensure that nearly |S| samples have high scores out of the entire dataset D. Both the regularization constants λ (in equation 6) and λ′ are set to 0.1. We train the model weights using an Adam optimizer with a learning rate of 0.001. During training, at each iteration we draw instances using Prπ and use the log-derivative trick to compute the gradient of the objective. During each computation step, we use one instance of the ranked list to compute the unbiased estimate of the objective in (6) . D ADDITIONAL EXPERIMENTS D.1 ABLATION STUDY We perform ablation study of SUBSELNET from three perspectives. Impact of ablation of subset sampler. First, we attempt to understand the impact of the subset sampler. 
To that aim, we compare the performance of SUBSELNET against two baselines, namely - Bottom-b-loss and Bottom-b-loss+gumbel. In Bottom-b-loss, we sort the data instances based on their predicted loss ℓ(Fϕ(Gm,x), y) and consider those points with the bottom b values. In Bottomb-loss+gumbel, we add noise sampled from the gumbel distribution with µ = 0 and β = 0.025, and sort the instances based on these noisy loss values, i.e., ℓ(Fϕ(Gm,x), y) + Gumbel(0, β = 0.025). We observe that Bottom-b-loss and Bottom-b-loss+gumbel do not perform that well in spite of being efficient in terms of time and memory. Figure 7 compares the performance of the variants of SUBSELNET, Bottom-b-loss and Bottom-b-loss+gumbel. Exploring alternative architecture of the model encoder gβ . We consider three alternative architecture to our current model encoder gβ . • FEEDFORWARD: We consider a two-layer fully-connected network, in which we concatenate the mean of Hm with xi. We used ReLu activation between the layers and the hidden dimension was set to 256. We used dropout for regularization with probability 0.3. • DEEPSET: We consider permutation invariant networks of the form ρ( ∑ h∈H ϕ(h);xi) where ρ and ϕ are neural networks and H is the sequence under consideration. We ρ is a fully-connected network with 4 layers, ReLU activation, and hidden dimension of 64, and ϕ is a two-layer fullyconnected network with ReLU activation and has output dimension 10. • LSTM: We consider an LSTM-based encoder with hidden dimension of 16 and dropout probability of 0.2. The output of the last LSTM block is concatenated with xi and fed to a linear layer with hidden dimension 256, dropout probability of 0.3 and ReLU as the activation function. Since the goal of the model encoder is to produce outputs which mimic the architectures, we measure the KL divergence between the outputs of the gold models and of the encoder to denote the closeness of the output distribution. Table. 8 summarizes performance of different model encoders. We make the following observations: (1) Transformer-based model encoder outperforms every other method by a significant margin across both the datasets. (2) The BFS sequential modeling of an architecture with transformers leads to better representation that enables closer model approximation compared to other sequential methods like LSTM. (3) Non-sequential model approximators like Feedforward and DeepSets led to poor model approximation. Performance of subset selectors using different model encoders. We consider three different design choices of model approximator (our (Transformer), Feedforward, and LSTM) along with three different subset selection strategies (Our subset sampler, top-b instances based on uncertainty, and top-b based on loss) which result in nine different combinations of model approximation and subset selection strategies. We measure uncertainty using the entropy of the predicted distribution of the target classes and report the average test accuracy of the models when they are trained on the underlying pre-selected subset in the following table - We make the following observations - 1. The complete design of our method, i.e., Our model approximator (Transformer) + Our subset sampler (SUBSELNET) performs best. 2. If we use simple unsupervised subset selection heuristics, e.g., loss or uncertainty based subset selection, then our model approximator performs much worse than Feedforward or LSTM, whereas this trend is opposite if we use our subset sampler for selecting the subset. 
This may be due to overfitting of the transformer architecture in the presence of uncertainty- or loss-based selection, which is compensated by our subset sampler. D.2 RECOMMENDING MODEL ARCHITECTURE When dealing with a pool of architectures designed for the same task, choosing the right architecture can be daunting, since it is impractical to train all the architectures from scratch. In view of this problem, we show that training on small, carefully chosen subsets offers a quicker alternative for choosing the correct architectures. We first extract the top 15 best-performing architectures A∗, i.e., those with the highest accuracy when trained on the full data. We mark them as "gold". Then, we gather the top 15 architectures A when trained on the subset provided by our models. Then, we compare A and A∗ using the Kendall tau rank correlation coefficient (KTau) along with the Jaccard coefficient |A ∩ A∗|/|A ∪ A∗|. Figure 10 summarizes the results for the top three non-adaptive subset selectors in terms of accuracy, namely Transductive-SUBSELNET, Inductive-SUBSELNET and FL. We make the following observations: (1) One of our variants outperforms FL in most of the cases in CIFAR10 and CIFAR100. (2) There is no consistent winner between Transductive-SUBSELNET and Inductive-SUBSELNET, although Inductive-SUBSELNET outperforms both Transductive-SUBSELNET and FL consistently in CIFAR100 in terms of the Jaccard coefficient. D.3 AVOIDING UNDERFITTING AND OVERFITTING Since the amount of training data is small, there is a possibility of overfitting. However, the coefficient λ of the entropy regularizer λH(Prπ) can be increased to draw instances from different regions of the feature space, which in turn can reduce the overfitting. In practice, we tuned λ on the validation set to control such overfitting. We present the accuracies on the (training, validation, test) folds for both Transductive-SUBSELNET and Inductive-SUBSELNET in Table 11. We make the following observations: 1. From training to test, in most cases, the decrease in accuracy is ∼ 7%. 2. This small accuracy gap is further reduced from validation to test. Here, in most cases, the decrease in accuracy is ∼ 4%. We perform early stopping using the validation set, which acts as an additional regularizer, and therefore the amount of overfitting is significantly low. D.4 PERFORMANCE OF SUBSET SELECTION STRATEGIES ON LARGER SUBSET SIZES We conducted experiments similar to those in Section 5.1 for CIFAR10 and FMNIST on larger subset sizes (b) of 0.1|D|, 0.2|D|, 0.4|D| and 0.7|D|. For each dataset and the above-mentioned subset sizes, we evaluate the decrease in accuracy (the ratio of the accuracy on the subset to the accuracy on the full dataset), the speed-up (the ratio of the time taken to train on the full dataset to the sum of the times taken for subset selection and subset training), and the GPU usage in GB-min. We report the variation of these metrics with respect to the subset sizes in the following tables. Note that in the case of CIFAR10, we denote the decrease factors of 0.91-0.96 in green, and the decrease factors of 0.85-0.88 in purple. In the case of FMNIST, we denote the decrease factors of 0.94-0.97 in green and the decrease factors of 0.90-0.93 in purple. We make the following observations: 1. We show a better trade-off between accuracy and time, and between accuracy and memory, than almost all the baselines. 2.
Observations in CIFAR10: When we tune the subset sizes, we notice that SUBSELNET, GLISTER, Grad-Match and EL2N can achieve a comparable decrease factor of 0.91-0.93. In terms of speed-up and memory usage, we see that (a) SUBSELNET achieves a 1.3x speed-up compared to GLISTER and a 1.1x speed-up compared to Grad-Match and EL2N, and (b) GLISTER consumes 3.7x GPU memory, Grad-Match consumes 3.1x GPU memory and EL2N consumes 2.5x GPU memory compared to SUBSELNET. We notice that none of the other subset selection strategies achieve a high enough accuracy, and we beat them in terms of speed-up and memory usage. Moreover, for the case when the subset selection methods achieve a decrease factor of 0.85-0.88, we see that (a) SUBSELNET achieves a 2.4x speed-up compared to FacLoc, a 1.8x speed-up compared to Pruning, a 1.4x speed-up compared to GLISTER, a 1.2x speed-up compared to Grad-Match and a 1.1x speed-up compared to EL2N, and (b) FacLoc consumes 4.8x GPU memory, Pruning consumes 1.7x GPU memory, GLISTER consumes 4x GPU memory, Grad-Match consumes 3.4x GPU memory and EL2N consumes 2.6x GPU memory compared to SUBSELNET. 3. Observations in FMNIST: When we tune the subset sizes, we notice that SUBSELNET, FacLoc, GLISTER, Grad-Match and EL2N can achieve a comparable decrease factor of 0.94-0.97. In terms of speed-up and memory usage, we see that (a) SUBSELNET achieves a 3.8x speed-up compared to FacLoc, a 1.4x speed-up compared to GLISTER and Grad-Match, and a 2.2x speed-up compared to EL2N, and (b) FacLoc consumes 12.5x GPU memory, and GLISTER, Grad-Match and EL2N consume 2.9x GPU memory compared to SUBSELNET. We notice that none of the other subset selection strategies achieve a high enough accuracy, and we beat them in terms of speed-up and memory usage. Moreover, for the case when the subset selection methods achieve a decrease factor of 0.90-0.93, we see that (a) SUBSELNET achieves a 7.4x speed-up compared to FacLoc, a 2.1x speed-up compared to GLISTER, a 2.9x speed-up compared to Grad-Match and a 2.1x speed-up compared to EL2N, and (b) FacLoc consumes 28.5x GPU memory, GLISTER consumes 4.5x GPU memory, Grad-Match consumes 6.1x GPU memory and EL2N consumes 3.7x GPU memory compared to SUBSELNET. We present the trade-off between accuracy and speed-up, and between accuracy and memory consumption, in Figure 15. E PROS AND CONS OF USING GNNS We have used a GNN to encode the architecture representations into an embedding. We chose a GNN for the task for the following reasons: 1. Message passing between the nodes (which may be the input, output, or any of the operations) allows us to generate embeddings that capture the contextual structural information of the node, i.e., the embedding of each node captures not only the operation for that node but also, to a large extent, the operations preceding that node. 2. It has been shown by Morris et al. (2019) and Xu et al. (2018a) that GNNs are as powerful as the Weisfeiler-Lehman algorithm and thus give a powerful representation for the graph. Thus, we obtain smooth embeddings of the nodes/edges that can effectively distill information from their neighborhoods without significant compression. 3. GNNs embed a model architecture into representations that are independent of the underlying dataset and the model parameters. This is because the GNN operates only on the nodes and edges, i.e., the structure of the architecture, and does not use the parameter values or input data. However, the GNN has the following drawbacks: 1.
The GNN uses a symmetric aggregator for message passing over node neighbors to ensure that the representation of any node is invariant to a permutation of its neighbors. Such a symmetric aggregator renders it a low-pass filter, as shown by NT & Maehara (2019), which attenuates important high-frequency signals. 2. We are training one GNN using several architectures. This can lead to the embedding being insensitive to changes in the architecture. In the context of model architectures, if we change the operation of one node in the architecture (either remove, add or change the operation), then the model's output can change significantly. However, the GNN embedding may become immune to such changes, since the GNN is trained over many architectures. F CHOICE OF SUBMODULAR FUNCTION FOR THE OPTIMIZATION PROBLEM In (1), we introduced the original combinatorial problem for subset selection, where the optimization variable S — the subset of instances — makes the underlying problem combinatorial. Here, we can use submodular functions like Graph-Cut, Facility-Location, and Log-Determinant as the diversity functions, which would allow us to use greedy algorithms to maximize the objective in (1). But, as discussed in Section 4.1, this suffers from two bottlenecks — expensive computation and lack of generalizability. Therefore, we do not follow these approaches and instead resort to our proposed approach, SUBSELNET. In contrast to the optimization problem in (1), which was a combinatorial set optimization problem, the optimization problem in SUBSELNET (6) is a continuous optimization problem whose goal is to estimate Prπ. In such a problem, where the probability distribution is the key optimization variable, entropy is a more natural measure of diversity than the other submodular measures.
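As a point of contrast with the continuous, entropy-regularized objective above, the sketch below shows the kind of naive greedy maximization one would run for a submodular diversity function such as Facility-Location; the function name and the dot-product similarity are illustrative assumptions (the experiments use the apricot library rather than this code).

```python
import numpy as np

def greedy_facility_location(X, b):
    """Naive greedy maximization of FL(S) = sum_j max_{i in S} <x_i, x_j>.

    X : (n, d) array of instance embeddings.
    b : budget, number of instances to select.
    Returns the indices of the selected subset.
    """
    sim = X @ X.T                      # pairwise similarities
    n = sim.shape[0]
    best = np.zeros(n)                 # current best similarity of each point to the subset
    selected = []
    for _ in range(b):
        # marginal gain of adding each candidate to the current subset
        gains = np.maximum(sim, best[None, :]).sum(axis=1) - best.sum()
        gains[selected] = -np.inf
        i = int(np.argmax(gains))
        selected.append(i)
        best = np.maximum(best, sim[i])
    return selected
```

Each greedy step above scans all candidates against all points, and it is this per-step quadratic cost that makes such combinatorial selection expensive on large datasets, in contrast to the continuous formulation used by SUBSELNET.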
1. What is the main contribution of the paper regarding subset selection? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of efficiency and memory utilization? 3. Do you have any concerns regarding the graph network used in the approach? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What additional information would have been helpful to include in the paper regarding the loss function, objective functions, and the approach used?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper In this paper, the authors present SUBSELNET, a non-adaptive subset selection framework for addressing a particular aspect of the subset selection problem: improving the generalizability of the subset selection approach, since with existing methods the algorithm has to be executed from the beginning for each new model. The authors introduce an attention-based neural approach that uses the graph structure of the architectures, which is then used to build subset samplers. Their approach has two variants: transductive and inductive. They claim that their approach is more efficient than the existing approaches since the subset is chosen at the beginning of the training process, and the entire dataset is not required throughout the training process. Strengths And Weaknesses The authors have provided the motivation for the problem and presented a solution to address it. They have also evaluated their approach against 6 other approaches on 3 different datasets, showing the speed-up and memory utilization. The approach overall is interesting since they are able to preselect the dataset for the training process. However, there are a few areas that are not clear from the paper. It would have been great if the authors spent more time discussing the pros and cons of their graph network. In general, graph networks themselves can be large, slow and memory-consuming. It seems like the comparison is performed on the output of the graph network rather than the end-to-end approach. The details about the GNN and the graph embedding are important. Many of the details, including the step-by-step algorithm, are relegated to the appendix. For instance, the diagram in Appendix B and the pseudocode in Appendix C would both have helped the reader understand the paper better if they were in the main text. Since the approach relies on pre-selecting, it is not clear how the approach is able to avoid overfitting or underfitting. The authors have split the data into train, validation and test sets. Including a report on the accuracy on these datasets, and the time/computation resources required for these, would have been helpful. The loss function (Eq. 1) and the objective functions (5 and 6) require more explanation. For instance, the authors state on page 3 after Eq. 1 that: "One can use submodular functions (Fujishige, 2005; Iyer, 2015) like Facility Location, graph cut, or Log-Determinants to model DIVERSITY(S)". However, they haven't mentioned which approach they have used in the paper. Later, they mention the use of entropy on the subset sampler H(Prπ(•)) to model the diversity on page 4 after Eq. 5, and KL divergence after Eq. 6. The choice of these functions needs to be elaborated to appreciate the approach better. Clarity, Quality, Novelty And Reproducibility The approach overall is interesting since they are able to preselect the dataset for the training process. It seems novel in that aspect. The paper's clarity can be improved: important and interesting parts have been moved to the appendix, whereas the math behind the model could have been explained better. The paper, as presented, is less easy to reproduce: details about the GNN, the embeddings, etc. are probably missing.
ICLR
Title Efficient Data Subset Selection to Generalize Training Across Models: Transductive and Inductive Networks Abstract Subset selection, in recent times, has emerged as a successful approach toward efficient training of models by significantly reducing the amount of data and computational resources required. However, existing methods employ discrete combinatorial and model-specific approaches which lack generalizability— for each new model, the algorithm has to be executed from the beginning. Therefore, for data subset selection for an unseen architecture, one cannot use the subset chosen for a different model. In this work, we propose SUBSELNET, a nonadaptive subset selection framework, which tackles these problems with two main components. First, we introduce an attention-based neural gadget that leverages the graph structure of architectures and acts as a surrogate to trained deep neural networks for quick model prediction. Then, we use these predictions to build subset samplers. This leads us to develop two variants of SUBSELNET. The first variant is transductive (called as Transductive-SUBSELNET) which computes the subset separately for each model by solving a small optimization problem. Such an optimization is still super fast, thanks to the replacement of explicit model training by the model approximator. The second variant is inductive (called as Inductive-SUBSELNET) which computes the subset using a trained subset selector, without any optimization. Most state-of-the-art data subset selection approaches are adaptive, in that the subset selection adapts as the training progresses, and as a result, they require access to the entire data at training time. Our approach, in contrast, is non-adaptive and does the subset selection only once in the beginning, thereby achieving resource and memory efficiency along with compute-efficiency at training time. Our experiments show that both the variants of our model outperform several methods on the quality of the subset chosen and further demonstrate that our method can be used for choosing the best architecture from a set of architectures. 1 INTRODUCTION In the last decade, deep neural networks have enhanced the performance of the state-of-the-art ML models dramatically. However, these neural networks often demand massive data to train, which renders them heavily contingent on availability of high performance computing machinery, e.g., GPUs, CPUs, RAMs, storage disks, etc. However, such resources entail heavy energy consumption, excessive CO2 emission and maintenance cost. Driven by this challenge, a recent body of work focus on suitably selecting a subset of instances, so that the model can be quickly trained using lightweight computing infrastructure (Boutsidis et al., 2013; Kirchhoff & Bilmes, 2014; Wei et al., 2014a; Bairi et al., 2015; Liu et al., 2015; Wei et al., 2015; Lucic et al., 2017; Mirzasoleiman et al., 2020b; Kaushal et al., 2019; Killamsetty et al., 2021a;b;c). However, these existing data subset selection algorithm are discrete combinatorial algorithms, which share three key limitations. (1) Scaling up the combinatorial algorithms is often difficult, which imposes significant barrier against achieving efficiency gains as compared to training with entire data. (2) Many of these approaches are adaptive in nature, i.e, the subset changes as the model training progresses. 
As a result, they require access to the entire training dataset and while they provide compute-efficiency, they do not address memory and resource efficiency challenges of deep model training. (3) The subset selected by the algorithm is tailored to train only a given specific model and it cannot be used to train another model. Therefore, the algorithm cannot be shared across different models. We discuss the related work in detail in Appendix A. 1.1 PRESENT WORK Responding to the above limitations, we develop SUBSELNET, a trainable subset selection framework, which— once trained on a set of model architectures and a dataset— can quickly select a small training subset such that it can be used to train a new (test) model, without a significant drop in accuracy. Our setup is non-adaptive in that it learns to select the subset before the training starts for a new architecture, instead of adaptively selecting the subset during the training process. We initiate our investigation by writing down an instance of combinatorial optimization problem that outputs a subset specifically for one given model architecture. Then, we gradually develop SUBSELNET, by building upon this setup. SUBSELNET comprises of the following novel components. Neural model approximator. The key blocker in scaling up a model-specific combinatorial subset selector across different architectures is the involvement of the model parameters as optimization variables along with the candidate data subset. To circumvent this blocker, we design a neural model approximator which aims to approximate the predictions of a trained model for any given architecture. Thus, such a model approximator can provide per instance accuracy provided by a new (test) model without explicitly training it. This model approximator works in two steps. First, it translates a given model architecture into a set of embedding vectors using graph neural networks (GNNs). Similar to the proposal of Yan et al. (2020) it views a given model architecture as a directed graph between different operations and, then outputs the node embeddings by learning a variational graph autoencoder (VAE) in an unsupervised manner. Due to such nature of the training, these node embeddings represent only the underlying architecture— they do not capture any signal from the predictions of the trained model. Hence, in the next step, we build a neural model encoder which uses these node embeddings and the given instance to approximate the prediction made by the trained model. The model encoder is a transformer based neural network which combines the node embedding using self-attention induced weights to obtain an intermediate graph representation. This intermediate representation finally combines with the instance vector x to provide the prediction of the trained architecture. Subset sampler. Having computed the prediction of a trained architecture, we aim to choose a subset of instances that would minimize the predicted loss and at the same time, offers a good representation of the data. Our subset sampler takes the approximate model output and an instance as input and computes a selection score. Then it builds a logit vector using all these selection scores, feeds it into a multinomial distribution and samples a subset from it. This naturally leads to two variants of the model. Transductive-SUBSELNET: The first variant is transductive in nature. 
Here, for each new architecture, we utilize the predictions from the model approximator to build a continuous surrogate of the original combinatorial problem and solve it to obtain the underlying selection scores. Thus, we still need to solve a fresh optimization problem for every new architecture. However, the direct predictions from the model approximator allow us to skip explicit model training. This makes this strategy extremely fast both in terms of memory and time. We call this transductive subset selector as Transductive-SUBSELNET. Inductive-SUBSELNET: In contrast to Transductive-SUBSELNET, the second variant does not require to solve any optimization problem. Consequently, it is extremely fast. Instead, it models the scores using a neural network which is trained across different architectures to minimize the entropy regularized sum of the prediction loss. We call this variant as Inductive-SUBSELNET. We compare our method against six state-of-the-art methods on three real world datasets, which show that Transductive-SUBSELNET (Inductive-SUBSELNET) provides the best (second best) trade off between accuracy and inference time as well as accuracy and memory usage, among all the methods. This is because (1) our subset selection method does not require any training at any stage of subset selection for a new model; and, (2) our approach is non-adaptive and does the subset selection before the training starts. In contrast, most state-of-the-art data subset selection approaches are adaptive, in that the subset selection adapts as the training progresses, and as a result, they require access to the entire data at training time. Finally, we design a hybrid version of the model, where given a budget, we first select a larger set of instances using Inductive-SUBSELNET, and then extract the required number of instances using Transductive-SUBSELNET. We observe that such a hybrid approach allow us to make a smooth transition between the trade off curves from Inductive-SUBSELNET to Transductive-SUBSELNET. 2 DEVELOPMENT OF PROPOSED MODEL: SUBSELNET In this section, we setup the notations and write down the combinatorial subset selection problem for efficient training. This leads us to develop a continuous optimization problem which would allow us to generalize the combinatorial setup across different models. 2.1 NOTATIONS We are given a set of training instances {(xi, yi)}i∈D where we use D to index the data. Here, xi ∈ Rdx are features and yi ∈ Y as the labels. In our experiments, we consider Y as a set of categorical labels. However, our framework can also be used for continuous labels. We use m to denote a neural architecture and represent its parameterization as mθ. We also useM to denote the set of neural architectures. Given an architecture m ∈ M, Gm = (Vm, Em) provides the graph representation of m, where the nodes u ∈ Vm represent the operations and the e = (um, vm) indicates an edge, where the output given by the operation represented by the node um is fed to one of the operands of the operation given by the node vm. Finally, we use H(·) to denote the entropy of a probability distribution and ℓ(mθ(x), y) as the cross entropy loss hereafter. 2.2 COMBINATORIAL SUBSET SELECTION FOR EFFICIENT LEARNING We are given a dataset {(xi, yi)}i∈D and a model architecture m ∈ M with its neural parameterization mθ. 
The goal of a subset selection algorithm is to select a small subset of instances S with |S| = b << |D| such that training mθ on the subset S gives nearly the same accuracy as training on the entire dataset D. Existing works (Killamsetty et al., 2021b; Sivasubramanian et al., 2021; Killamsetty et al., 2021a) adopt different strategies to achieve this goal, but all of them aim to simultaneously optimize the model parameters θ as well as the candidate subset S. At the outset, we may consider the following optimization problem:

minimize_{θ, S⊂D: |S|=b}  ∑_{i∈S} ℓ(mθ(xi), yi) − λ DIVERSITY(S),   (1)

where b is the budget, DIVERSITY(S) measures the representativeness of S with respect to the whole dataset D, and λ is a regularizing coefficient. One can use submodular functions (Fujishige, 2005; Iyer, 2015) like Facility Location, graph cut, or Log-Determinants to model DIVERSITY(S). Here, λ trades off between training loss and diversity. Such an optimization problem indeed provides an optimal subset S that results in high accuracy. Bottlenecks of the combinatorial optimization. The optimization problem (1) imposes the following challenges. (1) It demands explicit training of mθ, which can be expensive in terms of both memory and time. (2) Training mθ every time for a new architecture m prevents the subset S from being generalizable — one needs to solve the optimization (1) again to find S for an unseen model architecture. We address these challenges by designing a neural surrogate of the objective (1), which leads to a subset selection method that generalizes across the efficient training of different models. 2.3 COMPONENTS OF SUBSELNET MODEL Next, we sketch our proposed model SUBSELNET, which substitutes the optimization (1) with its neural surrogate. It consists of two key components: (i) a neural approximator of the trained model and (ii) the subset sampler. Figure 4 in Appendix B illustrates our model. Approximator of the trained model mθ∗. First, we design a neural network Fϕ which approximates the predictions of the trained model mθ∗ for different architectures m ∈ M. Given the dataset {(xi, yi)}i∈D and a model architecture m ∈ M, we first feed the underlying DAG Gm into a graph neural network GNNα with parameters α, which outputs the representations of the nodes of Gm, i.e., Hm = {hu}u∈Vm. Next, we feed Hm and the instance xi into an encoder gβ, so that

Fϕ(Gm, xi) ≈ mθ∗(xi) for m ∈ M,   (2)
where Fϕ(Gm, xi) = gβ(GNNα(Gm), xi).   (3)

Here, ϕ = {α, β}, and θ∗ is the set of learned parameters of the model mθ on the dataset D. Subset sampler. We design a subset sampler using a probabilistic model Prπ(•). Given a budget |S| ≤ b, it sequentially draws instances S = {s1, ..., sb} from a softmax distribution of the logit vector π ∈ R|D|, where π(xi, yi) indicates a score for the element (xi, yi). Having chosen the first t instances St = {s1, ..., st} from D, it draws the (t+1)-th element (x, y) from the remaining instances in D with probability proportional to exp(π(x, y)), and repeats this for b steps. Thus, the probability of selecting the ordered set of elements S = {s1, ..., sb} is given by

Prπ(S) = ∏_{t=0}^{b−1} [ exp(π(x_{s_{t+1}}, y_{s_{t+1}})) / ∑_{τ∈D\St} exp(π(xτ, yτ)) ].   (4)

We would like to highlight that we use S as an ordered set of elements, selected in a sequential manner. However, such an order does not affect the trained model, which is inherently invariant to permutations of the training data; it only affects the choice of S.
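As a concrete illustration of Eq. (4), the following is a minimal PyTorch sketch of drawing an ordered subset from Prπ; the function name and the use of torch.distributions are assumptions made for the example, not the paper's code.

```python
import torch

def draw_subset(pi, b):
    """Sample an ordered subset S of size b: at each step an element is drawn
    from the remaining pool with probability proportional to exp(pi)."""
    available = torch.ones_like(pi, dtype=torch.bool)
    subset, log_prob = [], torch.zeros(())
    for _ in range(b):
        logits = pi.masked_fill(~available, float('-inf'))
        step = torch.distributions.Categorical(logits=logits)
        idx = step.sample()
        log_prob = log_prob + step.log_prob(idx)   # accumulates log Pr_pi(S)
        subset.append(int(idx))
        available[idx] = False
    return subset, log_prob
```

The accumulated log-probability is the quantity that a score-function (log-derivative) estimator differentiates when π is optimized, since the sampled subset itself is discrete.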
Training objective. Using Eqs. (2) and (4), we replace the combinatorial optimization problem in Eq. (1) with a continuous optimization problem across different model architectures m ∈ M. To that goal, we define

Λ(S; m; π, Fϕ) = ∑_{i∈S} ℓ(Fϕ(Gm, xi), yi) − λ H(Prπ(•)),   (5)

minimize_{π, ϕ}  ∑_{m∈M} E_{S∼Prπ(•)} [ Λ(S; m; π, Fϕ) + ∑_{i∈S} γ KL(Fϕ(Gm, xi), mθ∗(xi)) ].   (6)

Here, we use the entropy of the subset sampler, H(Prπ(•)), to model the diversity of samples in the selected subset. We call our neural pipeline, which consists of the model approximator Fϕ and the subset selector π, SUBSELNET. In the above, γ penalizes the difference between the output of the model approximator and the prediction made by the trained model, which allows us to generalize the training of different models m ∈ M through the model Fϕ(Gm, xi). 2.4 TRANSDUCTIVE-SUBSELNET AND INDUCTIVE-SUBSELNET MODELS The optimization (6) suggests that once Fϕ is trained, we can use it to compute the output of the trained model mθ∗ for an unseen architecture m′ and use it to compute π. This already removes a significant overhead of model training and facilitates fast computation of π. This leads us to develop two types of models based on how we compute π, as follows. Transductive-SUBSELNET. The first variant of the model is transductive in terms of the computation of π. Here, once we train the model approximator Fϕ, we compute π by explicitly solving the optimization problem with respect to π every time we wish to select a data subset for a new architecture. Given a trained model Fϕ and a new model architecture m′ ∈ M, we solve the optimization problem min_π E_{S∼Prπ(•)}[Λ(S; m′; π, Fϕ)] to find the subset sampler Prπ at inference time for the new architecture m′. Such an optimization still consumes time during inference. However, it is still significantly faster than the combinatorial methods (Killamsetty et al., 2021b;a; Mirzasoleiman et al., 2020a; Sivasubramanian et al., 2021), thanks to sidestepping explicit model training using the model approximator. Inductive-SUBSELNET. In contrast to the transductive model, the inductive model does not require explicit optimization of π in the face of a new architecture. To that aim, we approximate π using a neural network πψ. This takes two signals as inputs: the dataset D and the outputs of the model approximator for different instances {Fϕ(Gm, xi) | i ∈ D}, and it finally outputs a score πψ(xi, yi) for each instance. Under Inductive-SUBSELNET, the optimization (6) becomes:

minimize_{ψ, ϕ}  ∑_{m∈M} E_{S∼Prπψ(•)} [ Λ(S; m; πψ, Fϕ) + ∑_{i∈S} γ KL(Fϕ(Gm, xi), mθ∗(xi)) ].   (7)

Such an inductive model can select an optimal distribution over subsets that can be used to efficiently train any model mθ, without explicitly training θ or searching for the underlying subset. 3 NEURAL PARAMETERIZATION OF SUBSELNET In this section, we describe the neural parameterization of SUBSELNET. SUBSELNET consists of two key components, Fϕ and πψ. Specifically, Transductive-SUBSELNET has only one neural component, Fϕ, whereas Inductive-SUBSELNET has both Fϕ and πψ. 3.1 NEURAL PARAMETERIZATION OF Fϕ The approximator Fϕ consists of two components: (i) a graph neural network GNNα which maps Gm, the DAG of an architecture, to the node representations Hm = {hu}u∈Vm, and (ii) a model encoder gβ which takes Hm and the instance xi as input and approximates mθ∗(xi), i.e., the prediction made by the trained model. Therefore, Fϕ(Gm, xi) = gβ(GNNα(Gm), xi). Here, ϕ = {α, β}. Computation of architecture embedding using GNNα.
Given a model m ∈ M, we compute the representations Hm = {hu | u ∈ Vm} by using a graph neural network GNNα parameterized with α, following the proposal of Yan et al. (2020). We first compute the feature vector fu for each node u ∈ Vm using the one-hot encoding of the associated operation (e.g., max, sum, etc.) and then feed it into a neural network to compute an initial node representation, as given below:

hu[0] = INITNODEα(fu).   (8)

Then, we use a message passing network, which collects signals from the neighborhood of different nodes and recursively computes the node representations (Yan et al., 2020; Xu et al., 2018b; Gilmer et al., 2017). Given a maximum number of recursive layers K and the node u, we compute the node embeddings Hm = {hu | u ∈ Vm} by gathering information from the k < K hops using K recursive layers as follows:

h(u,v)[k − 1] = EDGEEMBEDα(hu[k − 1], hv[k − 1]),
h′u[k − 1] = SYMMAGGRα({ h(u,v)[k − 1] | v ∈ Nbr(u) }),
hu[k] = UPDATEα(hu[k − 1], h′u[k − 1]).   (9)

Here, Nbr(u) is the set of neighbors of u. We use SYMMAGGR as a simple sum aggregator, and both UPDATE and EDGEEMBED are injective mappings, as used in (Xu et al., 2018b). Note that the trainable parameters of EDGEEMBED, SYMMAGGR and UPDATE are decoupled. They are collectively represented as the set of parameters α. Finally, we obtain our node representations as:

hu = [hu[0], ..., hu[K − 1]].   (10)

Model encoder gβ. Having computed the architecture representation {hu | u ∈ Vm}, we next design the model encoder, which leverages these embeddings to predict the output of the trained model mθ∗(xi). To this aim, we develop a model encoder gβ parameterized by β that takes Hm and xi as input and attempts to predict mθ∗(xi), i.e., gβ(Hm, xi) ≈ mθ∗(xi). It consists of three steps. In the first step, we generate a permutation-invariant order on the nodes. Next, we feed the representations {hu} in this order into a self-attention based transformer layer. Finally, we combine the output of the transformer and the instance xi using a feedforward network to approximate the model output. Node ordering using BFS order. We first sort the nodes using a breadth-first-search (BFS) order ρ. Similar to You et al. (2018), this sorting method produces a permutation-invariant sequence of nodes and captures subtleties like skip connections in the network structure Gm. Attention layer. Given the BFS order ρ, we pass the representations Hm = {hu | u ∈ Vm} in the sequence ρ through a self-attention based transformer network. Here, the Query, Key and Value functions are realized by matrices Wquery, Wkey, Wvalue ∈ R^{dim(h)×k}, where k is a tunable width. Thus, for each node u ∈ Vm, we have:

Query(hu) = W⊤query hu,  Key(hu) = W⊤key hu,  Value(hu) = W⊤value hu.   (11)

Using these quantities, we compute an attention-weighted vector Attu, given by:

Attu = W⊤c ∑_v au,v Value(hv),  with  au,v = SOFTMAXv(Query(hu)⊤ Key(hv) / √k).   (12)

Here, k is the dimension of the latent space, the softmax operation is over the node v, and Wc ∈ R^{k×dim(h)}. Subsequently, for each node u, we use a feedforward network, preceded and succeeded by layer normalization operations, given by the following set of equations:

ζu,1 = LN(Attu + hu; γ1, γ2),  ζu,2 = W⊤2 RELU(W⊤1 ζu,1),  ζu,3 = LN(ζu,1 + ζu,2; γ3, γ4).

Here, LN is the layer normalization operation (Ba et al., 2016). Finally, we feed the vector ζu,3 for the last node u in the sequence ρ, i.e., u = ρ(|Vm|), along with the feature vector xi into a feed-forward network parameterized by WF to model the prediction mθ∗(xi). Thus, the final output of the model encoder gβ(Hm, xi) is given by

om,xi = FFβ2(ζρ(|Vm|),3, xi).   (13)

Here, W• and γ• are trainable parameters and collectively form the set of parameters β.
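A minimal PyTorch sketch of this single-head attention encoder is given below; it follows the dimensions reported in the appendix (16-dimensional node embeddings, width k = 8, a 16/64/16 feedforward block, and a 2-layer head with hidden size 256), while the class name, the 10-class output, and the output normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ModelEncoder(nn.Module):
    """Single-head attention over BFS-ordered node embeddings, followed by a
    head that combines the last node's representation with the data embedding."""
    def __init__(self, d_h=16, k=8, d_x=2048, n_classes=10, p_drop=0.3):
        super().__init__()
        self.W_q = nn.Linear(d_h, k, bias=False)   # W_query in R^{16x8}
        self.W_k = nn.Linear(d_h, k, bias=False)   # W_key   in R^{16x8}
        self.W_v = nn.Linear(d_h, k, bias=False)   # W_value in R^{16x8}
        self.W_c = nn.Linear(k, d_h, bias=False)   # W_C     in R^{8x16}
        self.ln1 = nn.LayerNorm(d_h)
        self.ln2 = nn.LayerNorm(d_h)
        self.ffn = nn.Sequential(nn.Linear(d_h, 64), nn.ReLU(), nn.Linear(64, d_h))
        self.head = nn.Sequential(
            nn.Linear(d_h + d_x, 256), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(256, n_classes),
        )

    def forward(self, H, x):
        # H: (num_nodes, d_h) node embeddings in BFS order; x: (d_x,) instance embedding.
        q, kk, v = self.W_q(H), self.W_k(H), self.W_v(H)
        att = torch.softmax(q @ kk.T / kk.shape[-1] ** 0.5, dim=-1)  # (n, n) attention weights
        z1 = self.ln1(self.W_c(att @ v) + H)       # attention output plus residual
        z3 = self.ln2(z1 + self.ffn(z1))           # feedforward block plus residual
        last = z3[-1]                              # representation of the last BFS node
        return torch.softmax(self.head(torch.cat([last, x])), dim=-1)
```

Training such an encoder would minimize KL(mθ∗(xi) || gβ(Hm, xi)) over architectures and instances, as in Eq. (16).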
3.2 NEURAL ARCHITECTURE OF INDUCTIVE-SUBSELNET We approximate π using a neural network πψ which takes three inputs – (xj, yj), the corresponding output of the model approximator, i.e., om,xj = Fϕ(Gm, xj), and the node representation matrix Hm – and provides a positive selection score πψ(Hm, xj, yj, om,xj). In practice, πψ is a three-layer feed-forward network, which contains Leaky-ReLU activation functions for the first two layers and a sigmoid activation at the last layer. 4 PARAMETER ESTIMATION AND INFERENCE Given a dataset {(xi, yi) | i ∈ D} and the outputs of the trained models {mθ∗(xi)}i∈D, our goal is to estimate ϕ and π (resp. ψ) for the transductive (inductive) model. We first illustrate the bottlenecks that prevent us from end-to-end training for estimating these parameters. Then, we introduce a multi-stage training method to overcome these limitations. Finally, we present the inference method. 4.1 BOTTLENECK FOR END-TO-END TRAINING End-to-end optimization of the above problem is difficult for the following reasons. (i) Our architecture representation Hm only represents the architectures and thus should be independent of the architecture parameters θ and the instances x. End-to-end training can make it sensitive to these quantities. (ii) To enable the model approximator Fϕ to accurately fit the output of the trained model mθ, we need explicit training of ϕ with the target mθ. Adding the corresponding loss as an additional regularizer imposes additional hyperparameter tuning. 4.2 MULTI-STAGE TRAINING In our multi-stage training method, we first train the model approximator Fϕ by minimizing the sum of the KL divergences between its predictions and the gold output probabilities, and then train our subset sampler Prπ (resp. Prπψ) for the transductive (inductive) model while fine-tuning ϕ. Training the model approximator Fϕ. We train Fϕ in two steps. In the first step, we perform unsupervised training of GNNα using a graph variational autoencoder (GVAE). This ensures that the architecture representations Hm remain insensitive to the model parameters. We build the encoder and decoder of our GVAE by following existing works on graph VAEs (Yan et al., 2020) in the context of graph-based modeling of neural architectures. Given a graph Gm, the encoder q(Zm | Gm) takes the node embeddings {hu}u∈Vm and maps them into the latent space Zm = {zu}u∈Vm. Specifically, we model the encoder q(Zm | Gm) as q(zu | Gm) = N(µ(hu), Σ(hu)). Here, both µ and Σ are neural networks. Given a latent representation Zm = {zu}u∈Vm, the decoder models a generative distribution of the graph Gm, where the presence of an edge is modeled as a Bernoulli distribution BERNOULLI(σ(z⊤u zv)). Thus, we model the decoder as:

p(Gm | Z) = ∏_{(u,v)∈Em} σ(z⊤u zv) · ∏_{(u,v)∉Em} [1 − σ(z⊤u zv)].   (14)

Here, σ is a parameterized sigmoid function. Finally, we estimate α, µ, Σ and σ by maximizing the evidence lower bound (ELBO) as follows:

max_{α, µ, Σ, σ}  E_{Z∼q(• | Gm)}[log p(Gm | Z)] − KL(q(• | Gm) || prior(•)).   (15)

Next, we train our model encoder gβ by minimizing the KL divergence between the approximated prediction gβ(Hm, xi) and the ground truth prediction mθ∗(xi), where both quantities are probability distributions over the classes. Hence, the training problem is as follows:

minimize_β  ∑_{i∈D, m∈M} KL(mθ∗(xi) || gβ(Hm, xi)).   (16)

Training of the subset sampler.
Finally, we fine-tune gβ and train π by solving (6) for the Transductive-SUBSELNET (likewise train πψ by solving (7) for Inductive-SUBSELNET). 4.3 INFERENCE During inference, our goal is to select a subset S with |S| = b for a new model m′, which would facilitate efficient training of m′. As discussed in Section 2.4, we compute π for TransductiveSUBSELNET by explicitly solving the optimization problem: minπ ES∈Pr π(•)[Λ(S;m;π, Fϕ)] and then draw S ∼ Prπ(•). For Inductive-SUBSELNET, we draw S ∼ Prπψ̂ (•) where ψ̂ is the learned value of ψ during training. 4.4 OVERVIEW OF TRAINING AND INFERENCE ROUTINES Algorithms 1 and 2 summarize the algorithms for the training and inference procedure. Algorithm 1 Training procedure 1: function TRAINTRANSDUCTIVE(D,M, {θ∗}) 2: α̂, β̂,Hm ←TRAINAPPROX(D,M, {θ∗}) 1: function TRAININDUCTIVE(D,M, {θ∗}) 2: α̂, β̂,Hm ←TRAINAPPROX(D,M, {θ∗}) 3: o← [gβ̂({Hm,xi})]i,m 4: ψ̂ ← TRAINPI(o, {Hm}, {xi}) 1: function TRAINAPPROX(D,M, {θ∗}) 2: α̂← TRAINGNN(M) 3: for m ∈Mtrain do 4: Hm ← GNNα̂(m) 5: POS ← BFSORDERING(Gm) 6: β̂ ← TRAINMODELENC({xi}, POS, {θ∗}) Algorithm 2 Inference procedure 1: function INFERTRANSDUCTIVE(D, α̂, β̂,m′) 2: Hm′ ← GNNα̂(m′) 3: Fϕ(Gm′ ,xi)← gβ̂(Hm′ ,xi) ∀i ∈ D 4: π∗ ← minπ ES∈Prπ(•)[Λ(S;m′;π;Fϕ)] 5: S∗ ∼ Prπ∗(•) 6: TRAINNEWMODEL(m′;S∗) 1: function INFERINDUCTIVE(D, α̂, β̂,m′) 2: Hm′ ← GNNα̂(m′) 3: Fϕ(Gm′ ,xi)← gβ̂(Hm′ ,xi) ∀i ∈ D 4: Compute πψ̂(xi, yi) ∀i ∈ D 5: S∗ ∼ Prπψ̂ (•) 6: TRAINNEWMODEL(m′;S∗) Training Subroutines. The training phase for both, Transductive-SUBSELNET first utilizes the TRAINAPPROX routine to train the model approximator given the dataset, trained model parameters, and the set of neural architectures. Internally, the routine calls the TRAINGNN subroutine to train the parameters (α) of the GNN network, BFSORDERING subroutine to reorder the embeddings based on the BFS order and the TRAINMODELENC subroutine to train the attention-based model encoder’s parameters (β). The TRAININDUCTIVE routine further calls the TRAINPI subroutine to train the parameters of the neural subset selector. Inference Subroutines. Given an unseen architecture and parameters of the trained neural networks, the inference phase for both variants of SUBSELNET first generates the model encoder output for all the data points. Post this, the INFERTRANSDUCTIVE routine solves the optimization problem on π explicitly for the unseen architecture and selects the subset from the dataset. On the other hand, INFERINDUCTIVE utilizes the trained parameters of the neural subset selector. Finally, both routines call the TRAINNEWMODEL to train and evaluate the unseen architecture on selected subset. 5 EXPERIMENTS In this section, we provide comprehensive evaluation of SUBSELNET against several strong baselines on three real world datasets. In Appendix D, we present additional results. 5.1 EXPERIMENTAL SETUP Datasets. We use FMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2014) and CIFAR100 (Krizhevsky et al., 2009) datasets for our experiments. We transform an input image Xi to a vector xi of dimension 2048 by feeding it to a pre-trained ResNet50 v1.5 (?) model and using the output from the penultimate layer as the image representation. Model architectures and baselines. We use model architectures from NAS-Bench-101 (Ying et al., 2019) for our experiments. 
We compare Transductive-SUBSELNET and Inductive-SUBSELNET against two non-adaptive subset selection methods – (i) Facility Location (Fujishige, 2005; Iyer, 2015), where we maximize FL(S) = ∑_{j∈D} max_{i∈S} x⊤i xj to find S, and (ii) Pruning (Sorscher et al., 2022); four adaptive subset selection methods – (iii) Glister (Killamsetty et al., 2021b), (iv) Grad-Match (Killamsetty et al., 2021a), (v) EL2N (Paul et al., 2021), and (vi) GraNd (Paul et al., 2021); and (vii) Full selection, where we use the complete training data. The non-adaptive subset selectors select the subset before the training begins and thus never access the rest of the training set again during the training iterations. On the other hand, the adaptive subset selectors refine the choice of subset during training iterations and thus need to access the full training set at each training iteration. Appendix C contains additional details about the baselines. Evaluation protocol. We split the model architectures M into 60% training (Mtr), 20% validation (Mval) and 20% test (Mtest) folds. Similarly, we split the dataset D into Dtr, Dval and Dtest. We present Mtr, Mval, Dtr and Dval to our method and estimate ϕ̂ and ψ̂ (for the Inductive-SUBSELNET model). None of the baseline methods supports a generalizable learning protocol across different model architectures and thus cannot leverage the training architectures at test time. Given an architecture m′ ∈ Mtest, we select the subset S from Dtr using our subset sampler (Prπ for Transductive-SUBSELNET or Prπψ̂ for Inductive-SUBSELNET). Similarly, all the non-adaptive subset selectors select S ⊂ Dtr using their own algorithms. Once S is selected, we train the test models m′ ∈ Mtest on S. We perform our experiments with different |S| = b ∈ (0.005|D|, 0.05|D|) and compare the performance of different methods using three quantities: (1) Accuracy Pr(y = ŷ), measured as (1/|Dtest|) ∑_{i∈Dtest} ∑_{m′∈Mtest} 1(argmax_j m′θ∗(xi)[j] = yi). (2) Computational efficiency, i.e., the speedup achieved with respect to training with the full dataset, measured as Tf/T. Here, Tf is the time taken for training with the full dataset, and T is the time taken for the entire inference task, which is the average time for selecting subsets across the test models m′ ∈ Mtest plus the average training time of these test models on the respective selected subsets. (3) Resource efficiency, in terms of the amount of memory consumed during the entire inference task described in item (2), measured as ∫_0^T memory(t) dt, where memory(t) is the amount of memory consumed at time t. 5.2 RESULTS Comparison with baselines. Here, we compare different methods in terms of the trade-off between accuracy and computational efficiency as well as between accuracy and resource efficiency. In Figure 1, we probe the variation between these quantities by varying the size of the selected subset |S| = b ∈ (0.005|D|, 0.05|D|). We make the following observations. (1) Our methods trade off between accuracy and computational efficiency, as well as between accuracy and resource efficiency, more effectively than all other methods. For FMNIST, both variants of our method strikingly reach 75% accuracy while being 100 times faster than full selection. Transductive-SUBSELNET performs slightly better than Inductive-SUBSELNET in terms of the overall trade-off between accuracy and efficiency for the FMNIST and CIFAR10 datasets. However, for CIFAR100, Transductive-SUBSELNET performs significantly better than Inductive-SUBSELNET.
The time taken for both Transductive-SUBSELNET and Inductive-SUBSELNET seems comparable — this is because the subset selection time for both of them is significantly less than the final training time on the selected subset. (2) EL2N is the second best method. It provides the best trade-off between accuracy and time as well as accuracy and GPU memory among all the baselines. It aims at choosing difficult training instances having high prediction error. As a result, once trained on them, the model can predict the labels of easy instances too. However, it chooses instances only after running the initial few epochs. (3) FL adopts a greedy algorithm for subset selection and therefore consumes a large amount of time and memory during subset selection itself. Consequently, the overall efficiency decreases significantly, although the training on the selected subset has the same time and memory cost as for our models. (4) In addition to EL2N, Glister, Grad-Match and GraNd are adaptive subset selection methods that operate with moderately small (> 5%) subset sizes. In the regime where the subset size is extremely small, i.e., 1% − 5%, they perform very poorly. Moreover, they maximize a monotone function at each gradient update step, which results in significant overhead in terms of time. These methods process the entire training data to refine the choice of the subset and consequently end up consuming a lot of memory. (5) GraNd selects the instances having high uncertainty after running each model for five epochs, and often the model is not well trained by then. Finer analysis of the inference time. Next, we demarcate the subset selection phase from the training phase of the test models on the selected subset during the inference time analysis. Table 2 summarizes the results for the top three non-adaptive subset selection methods for b = 0.005|D| on CIFAR100. We observe that: (1) the final training times of all three methods are roughly the same; (2) the selection time for Transductive-SUBSELNET is significantly more than for Inductive-SUBSELNET, although it remains extremely small compared to the final training on the inferred subset; and, (3) the selection time of FL is large — as high as 323% of the training time. Hybrid-SUBSELNET. From Figure 1, we observe that Transductive-SUBSELNET performs significantly better than Inductive-SUBSELNET. However, since Transductive-SUBSELNET solves a fresh optimization problem for each new architecture, it performs better at the cost of time and GPU memory. On the other hand, Inductive-SUBSELNET performs significantly worse as it relies on a trained neural network to learn the same optimization problem. Here, we design a hybrid version of our model, called Hybrid-SUBSELNET. Given the budget of the subset b, we first choose B > b instances using Inductive-SUBSELNET and then select the final b instances by running the explicit optimization routines in Transductive-SUBSELNET. Figure 3 summarizes the results for B = {25K, 30K, 35K, 45K, 50K}. We observe that the trade-off curves for Hybrid-SUBSELNET lie in between those of Inductive-SUBSELNET and Transductive-SUBSELNET. For a low value of B, i.e., B = 25K, the trade-off line of Hybrid-SUBSELNET remains close to Inductive-SUBSELNET. As we increase B, the trade-off curve of accuracy vs. speed-up as well as accuracy vs. GPU usage becomes better, which allows Hybrid-SUBSELNET to smoothly transition from the trade-off curve of Inductive-SUBSELNET to that of Transductive-SUBSELNET.
At B = 45K, the trade-off curve almost coincides with Transductive-SUBSELNET. Such properties allow a user to choose an appropriate B that can accurately correspond to a target operating point in the form of (Accuracy, Speed up) or (Accuracy, memory usage). 6 CONCLUSION In this work, we develop SUBSELNET, a subset selection framework, which can be trained on a set of model architectures, to be able to predict a suitable training subset before training a model, for an unseen architecture. To do so, we first design a neural model approximator, which predicts the output of a new candidate architecture without explicitly training it. We use that output to design transductive and inductive variants of our model. The transductive model solves a small optimization problem to compute the subset for a new architecture m every single time. In contrast, the inductive model resorts to a neural subset sampler instead of an optimizer. Our work does not incorporate the gradients of the trained model in model approximator and it would be interesting to explore its impact on the subset selection. Further we can extend our setup to an adaptive setting, where we can incorporate signals from different epochs with a sequence encoder to train a subset selector. 7 ETHICS STATEMENT We do not foresee any negative impact of our work from ethics viewpoint. 8 REPRODUCIBILITY STATEMENT We uploaded the code in supplementary material. Details of implementation are given in Appendix C. A RELATED WORK Our work is closely related to representation learning for model architectures, network architecture search, data subset selection. Representation learning for model architectures. Recent work in network representation learning use GNN based encoder-decoder to encapsulate the local structural information of a neural network into a fixed-length latent space (Zhang et al., 2019; Ning et al., 2020; Yan et al., 2020; Lukasik et al., 2021). By employing an asynchronous message passing scheme over the directed acyclic graph (DAG), GNN-based methods model the propagation of input data over the actual network structure. Apart from encodings based solely on the structure of the network, White et al. (2020); Yan et al. (2021) produce computation-aware encodings that map architectures with similar performance to the same region in the latent space. Following the work of Yan et al. (2020), we use a graph isomorphism network as an encoder but instead of producing a single graph embedding, our method produces a collection of node embeddings, ordered by breadth-first-search (BFS) ordering of the nodes. Our work also differs in that we do not employ network embeddings to perform downstream search strategies. Instead, architecture embeddings are used in training a novel model approximator that predicts the logits of a particular architecture, given an architecture embedding and a data embedding. Network architecture search. There is an ever-increasing demand for the automatic search of neural networks for various tasks. The networks discovered by NAS methods often come from an underlying search space, usually designed to constrain the search space size. One such method is to use cell-based search spaces (Luo et al., 2018; Zoph et al., 2017; Liu et al., 2017; Pham et al., 2018; Ying et al., 2019; Dong & Yang, 2020). Although we utilize the NAS-Bench-101 search space for architecture retrieval, our work is fundamentally different from NAS. 
In contrast to the NAS methods, which search for the best possible architecture from the search space using either sampling or gradient-descent-based methods (Baker et al., 2017; Zoph & Le, 2016; Real et al., 2017; 2018; Liu et al., 2018; Tan et al., 2018), our work focuses on efficient data subset selection given a dataset and an architecture, which is sampled from a search space. Our work utilizes graph representation learning on the architectures sampled from the mentioned search spaces to project an architecture under consideration into a continuous latent space, uses the resulting latent representation as a proxy for the actual model, and proceeds with data subset selection using the generated embedding, the model proxy and the given dataset. Data subset selection. Data subset selection is widely used in the literature for efficient learning, coreset selection, human-centric learning, etc. Several works cast the efficient data subset selection task as an instance of a submodular or approximate-submodular optimization problem (Killamsetty et al., 2021a; Wei et al., 2014a;b;c; Killamsetty et al., 2021b; Sivasubramanian et al., 2021). Another line of work focuses on selecting coresets, which are expressed as a weighted combination of a subset of the data that approximates some characteristic, e.g., the loss function or the model prediction (Feldman, 2020; Mirzasoleiman et al., 2020b; Har-Peled & Mazumdar, 2004; Boutsidis et al., 2013; Lucic et al., 2017). Our work is closely connected to simultaneous model learning and subset selection (De et al., 2021; 2020; Sivasubramanian et al., 2021). These existing works focus on jointly optimizing the training loss with respect to the subset of instances and the parameters of the underlying model. Among them, (De et al., 2021; 2020) focus on distributing decisions between humans and machines, whereas (Sivasubramanian et al., 2021) aims for efficient learning. However, these methods adopt a combinatorial approach for selecting subsets and consequently are not generalizable across architectures. In contrast, our work focuses on a differentiable subset selection mechanism, which can generalize across architectures. B ILLUSTRATION OF SUBSELNET C ADDITIONAL DETAILS ABOUT EXPERIMENTAL SETUP C.1 DATASET Datasets (D). Architectures (M). Although our task is not Neural Architecture Search, we leverage the NAS-Bench-101 search space as an architecture pool. The cell-based search space was designed for the benchmarking of various NAS methods. It consists of 423,624 unique architectures with the following constraints – (1) the number of nodes in each cell is at most 7, (2) the number of edges in each cell is at most 9, and (3) barring the input and output, there are three unique operations, namely 1×1 convolution, 3×3 convolution and 3×3 max-pool. We utilize the architectures from the search space to generate the sequence of embeddings, and to sample architectures for the training and testing of the encoder and datasets for the subset selector. C.2 IMPLEMENTATION DETAILS ABOUT BASELINES Facility Location (FL). We implemented facility location on all three datasets using the apricot library (https://github.com/jmschrei/apricot). The similarity matrix was computed using the Euclidean distance between data points, and the objective function was maximized using the naive greedy algorithm. Pruning. It selects a subset from the entire dataset based on the uncertainty of the data points during partial training. In our setup, we considered ResNet-18 as a master model, which is trained on each dataset for 5 epochs. Post training, the uncertainty measure is calculated based on the probabilities of each class, and the points with the highest uncertainty are included in the subset. We train the master model at a learning rate of 0.025.
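For illustration, the following is a minimal PyTorch sketch of this uncertainty-based pruning step; treating predictive entropy as the uncertainty measure, as well as the function name, are assumptions made for the example rather than details confirmed by the text.

```python
import torch

def uncertainty_prune(probs, b):
    """Select the b most uncertain points given class probabilities.

    probs : (n, num_classes) tensor of predicted class probabilities from the
            partially trained master model.
    b     : number of points to keep.
    """
    # Predictive entropy as the uncertainty score (one common choice).
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)
    return torch.topk(entropy, k=b).indices
```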
Glister and Grad-Match. We implemented GLISTER (Killamsetty et al., 2021b) and Grad-Match (Killamsetty et al., 2021a) using the CORDS library. We trained the models for 50 epochs, using a batch size of 20, and selected the subset after every 10 epochs. The loss was minimized using SGD with a learning rate of 0.01, momentum of 0.9 and weight decay with a regularization constant of 5 × 10−4. We used cosine annealing for scheduling the learning rate with Tmax of 50 epochs, and used 10% of the training data as the validation set. Details of the specific hyperparameters are stated as follows. Glister uses a greedy selection approach to minimize a bi-level objective function. In our implementation, we used stochastic greedy optimization with learning rate 0.01, applied to the data points of each mini-batch. Online-Glister approximates the objective function with a Taylor series expansion up to an arbitrary number of terms to speed up the process; we used 15 terms in our experiments. Grad-Match applies the orthogonal matching pursuit (OMP) algorithm to the data points of each mini-batch to match the gradient of a subset to that of the entire training/validation set. Here, the learning rate is set to 0.01. The regularization constant in OMP is 1.0 and the algorithm optimizes the objective function within an error margin of 10−4. GraNd. This is an adaptive subset selection strategy in which the norm of the gradient of the loss function is used as a score to rank a data point. The gradient scores are computed after the model has trained on the full dataset for the first few epochs. For the rest of the epochs, the model is trained only on the top-k data points, selected using the gradient scores. In our implementation, we let the model train on the full dataset for the first 5 epochs, and computed the gradient of the loss only with respect to the last fully connected layer. EL2N. When the loss function used to compute the GraNd scores is the cross entropy loss, the norm of the gradient for a data point x can be approximated by E||p(x) − y||2, where p(x) is the discrete probability distribution over the classes, computed by taking the softmax of the logits, and y is the one-hot encoded true label corresponding to the data point x. Similar to our implementation of GraNd, we computed the EL2N scores after letting the models train on the full data for the first 5 epochs. C.3 IMPLEMENTATION DETAILS ABOUT OUR MODEL GNNα. As we utilize the NAS-Bench-101 search space as the underlying set of neural architectures, each computational node in the architecture corresponds to one of five operations, encoded in the one-hot feature vector fu. Since the set is cell-based, there is an injective mapping between the neural architecture and the cell structure. We aim to produce a sequence of embeddings for the cell, which in turn corresponds to that of the architecture. For each architecture, we use the initial feature fu ∈ R5 in (8) as a five-dimensional one-hot encoding of the operation. This is fed into INITNODE (8) to obtain a 16-dimensional output. Here, INITNODE consists of a 5 × 16 linear layer, a ReLU, and a 16 × 16 linear layer cascaded with each other. Each of EDGEEMBED and UPDATE consists of a 5 × 128 linear-BatchNorm-ReLU cascaded with a 128 × 16 linear layer.
Moreover, the symmetric aggregator is a sum aggregator. We repeat this layer K times, and each iteration gathers information from k < K hops. After all the iterations, we generate an embedding for each node, and following You et al. (2018) we use the BFS-tree based node-ordering scheme to generate the sequence of embeddings for each network. The GVAE-based architecture was trained for 10 epochs with the number of recursive layers K set to 5, and the Adam optimizer was used with a learning rate of 10−3. The entire search space was considered as the dataset, and a batch size of 32 was used. Post training, we refer to the node embeddings collectively as the architecture representation. To train the latent space embeddings, the parameters α are trained in an encoder-decoder fashion using a variational autoencoder. The mean µ and variance σ on the final node embeddings hu are computed as µ = FCN([hu]_{u∈Vm}) and σ = exp(FCN([hu]_{u∈Vm})), where FCN denotes a fully connected network. The decoder aims to reconstruct the original cell structure (i.e., the nodes and the corresponding operations), which are one-hot encoded. It is modeled using single-layer fully connected networks followed by a sigmoid layer. Model Encoder gβ. The model encoder gβ is essentially a single-head attention block that acts on a sequence of node embeddings Hm = {hu | u ∈ Vm}. The Query, Key and Value matrices Wquery, Wkey and Wvalue lie in R16×8, and the matrix WC ∈ R8×16. The fully connected network acting on ζu,1 consists of matrices W1 ∈ R16×64 and W2 ∈ R64×16. All the trainable matrices along with the layer normalizations were implemented using the Linear and LayerNorm modules in PyTorch. The last item of the output sequence ζu,3 is concatenated with the data embedding xi and fed to another 2-layer fully-connected network with hidden dimension 256 and dropout probability of 0.3. The model encoder is trained by minimizing the KL-divergence between gβ(Hm,xi) and mθ∗(xi). We used an AdamW optimizer with a learning rate of 10−3, ϵ = 10−8, betas = (0.9, 0.999) and weight decay of 0.005. We also used cosine annealing to decay the learning rate, and used gradient clipping with the maximum norm set to 5. Figure 6 shows the convergence of the outputs of the model encoder gβ(Hm,xi) to the outputs of the model mθ∗(xi). Neural Network πψ. The inductive model is a three-layer fully-connected neural network with two Leaky ReLU activations and a sigmoid activation after the last layer. The input to πψ is the concatenation (Hm; om,i; xi; yi). The hidden dimensions of the two intermediate layers are 64 and 16, and the final layer is a single neuron that outputs the score corresponding to a data point xi. While training πψ, we add a regularization term λ′(Σ_{i∈D} πψ(Hm, om,i, xi, yi) − |S|) to ensure that nearly |S| samples have high scores out of the entire dataset D. Both the regularization constants λ (in equation 6) and λ′ are set to 0.1. We train the model weights using an Adam optimizer with a learning rate of 0.001. During training, at each iteration we draw instances using Prπ and use the log-derivative trick to compute the gradient of the objective. During each computation step, we use one instance of the ranked list to compute the unbiased estimate of the objective in (6). D ADDITIONAL EXPERIMENTS D.1 ABLATION STUDY We perform an ablation study of SUBSELNET from three perspectives. Impact of ablation of subset sampler. First, we attempt to understand the impact of the subset sampler.
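Before the ablation details, the snippet below illustrates the log-derivative (score-function) trick mentioned above for the subset sampler: a subset is drawn sequentially from the softmax over the scores π, and the gradient of the expected loss is estimated as loss(S)·∇ log Prπ(S). The tensor sizes and the per-instance losses are hypothetical placeholders for ℓ(Fϕ(Gm, xi), yi), so this is a sketch of the estimator, not our training code.

import torch
import torch.nn.functional as F

def sample_subset_logprob(pi, b):
    """Sequentially draw b indices without replacement from softmax(pi); return indices and log Pr_pi(S)."""
    mask = torch.zeros_like(pi, dtype=torch.bool)
    idx, logp = [], torch.zeros((), dtype=pi.dtype)
    for _ in range(b):
        logits = pi.masked_fill(mask, float("-inf"))   # remove already-chosen instances from the pool
        probs = F.softmax(logits, dim=0)
        i = torch.multinomial(probs, 1).item()
        logp = logp + torch.log(probs[i])
        idx.append(i)
        mask[i] = True
    return torch.tensor(idx), logp

# One score-function step: grad E_S[loss(S)] is approximated by loss(S) * grad log Pr_pi(S).
pi = torch.randn(1000, requires_grad=True)      # hypothetical selection scores over |D| = 1000 instances
per_instance_loss = torch.rand(1000)             # hypothetical predicted losses from the model approximator
S, logp = sample_subset_logprob(pi, b=50)
objective = per_instance_loss[S].sum().detach() * logp
objective.backward()                              # pi.grad now holds the single-sample gradient estimate

In practice, a single sampled subset per iteration is used to form the unbiased estimate of the objective, as described above.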
To understand this impact, we compare the performance of SUBSELNET against two baselines, namely Bottom-b-loss and Bottom-b-loss+gumbel. In Bottom-b-loss, we sort the data instances based on their predicted loss ℓ(Fϕ(Gm,x), y) and consider the points with the bottom b values. In Bottom-b-loss+gumbel, we add noise sampled from the Gumbel distribution with µ = 0 and β = 0.025, and sort the instances based on these noisy loss values, i.e., ℓ(Fϕ(Gm,x), y) + Gumbel(0, β = 0.025). We observe that Bottom-b-loss and Bottom-b-loss+gumbel do not perform well despite being efficient in terms of time and memory. Figure 7 compares the performance of the variants of SUBSELNET, Bottom-b-loss and Bottom-b-loss+gumbel. Exploring alternative architectures of the model encoder gβ. We consider three alternative architectures to our current model encoder gβ. • FEEDFORWARD: We consider a two-layer fully-connected network, in which we concatenate the mean of Hm with xi. We used ReLU activation between the layers and the hidden dimension was set to 256. We used dropout for regularization with probability 0.3. • DEEPSET: We consider permutation-invariant networks of the form ρ(Σ_{h∈H} ϕ(h); xi), where ρ and ϕ are neural networks and H is the sequence under consideration. Here, ρ is a fully-connected network with 4 layers, ReLU activations and a hidden dimension of 64, and ϕ is a two-layer fully-connected network with ReLU activation and output dimension 10. • LSTM: We consider an LSTM-based encoder with a hidden dimension of 16 and dropout probability of 0.2. The output of the last LSTM block is concatenated with xi and fed to a linear layer with hidden dimension 256, dropout probability of 0.3 and ReLU as the activation function. Since the goal of the model encoder is to produce outputs that mimic those of the trained architectures, we measure the KL divergence between the outputs of the gold models and those of the encoder to quantify the closeness of the output distributions. Table 8 summarizes the performance of different model encoders. We make the following observations: (1) The transformer-based model encoder outperforms every other method by a significant margin across both datasets. (2) The BFS sequential modeling of an architecture with transformers leads to better representations that enable closer model approximation compared to other sequential methods like the LSTM. (3) Non-sequential model approximators like Feedforward and DeepSets lead to poor model approximation. Performance of subset selectors using different model encoders. We consider three different design choices of the model approximator (ours (Transformer), Feedforward, and LSTM) along with three different subset selection strategies (our subset sampler, top-b instances based on uncertainty, and top-b based on loss), which results in nine different combinations of model approximation and subset selection strategies. We measure uncertainty using the entropy of the predicted distribution over the target classes and report the average test accuracy of the models when they are trained on the underlying pre-selected subset in the following table. We make the following observations: 1. The complete design of our method, i.e., our model approximator (Transformer) + our subset sampler (SUBSELNET), performs best. 2. If we use simple unsupervised subset selection heuristics, e.g., loss or uncertainty based subset selection, then our model approximator performs much worse than Feedforward or LSTM, whereas the trend is the opposite if we use our subset sampler for selecting the subset.
This may be due to overfitting of the transformer architecture in presence of uncertainty or loss based selection, which is compensated by our subset sampler. D.2 RECOMMENDING MODEL ARCHITECTURE When dealing with a pool of architectures designed for the same task, choosing the correct architecture for the task might be a daunting task - since it is impractical to train all the architectures from scratch. In view of this problem, we show that training on smaller carefully chosen subsets might be beneficial for a quicker alternative to choosing the correct architectures. We first extract the top 15 best performing architectures A∗ having highest accuracy, when trained on full data. We mark them as "gold". Then, we gather top 15 architectures A when trained on the subset provided by our models. Then, we compare A and A∗ using the Kendall tau rank correlation coefficient (KTau) along with Jaccard coefficent |A ∩ A∗|/|A ∪ A∗|. Figure 10 summarizes the results for top three non-adaptive subset selectors in terms of the accuracy, namely - Transductive-SUBSELNET, Inductive-SUBSELNET and FL. We make the following observations: (1) One of our variant outperforms FL in most of the cases in CIFAR10 and CIFAR100. (2) There is no consistent winner between Transductive-SUBSELNET and Inductive-SUBSELNET, although Inductive-SUBSELNET outperforms both Transductive-SUBSELNET and FL consistently in CIFAR100 in terms of the Jaccard coefficient. D.3 AVOIDING UNDERFITTING AND OVERFITTING Since the amount of training data is small, there is a possibility of overfitting. However, the coefficient λ of the entropy regularizer λH(Prπ), can be increased to draw instances from the different regions of the feature space, which in turn can reduce the overfitting. In practice, we tuned λ on the validation set to control such overfitting. We present the accuracies on (training, validation, test) folds for both Transductive-SUBSELNET and Inductive-SUBSELNET in Table 11. We make the following observations: 1. From training to test, in most cases, the decrease in accuracy is ∼ 7%. 2. This small accuracy gap is further reduced from validation to test. Here, in most cases, the decrease in accuracy is ∼ 4%. We perform early stopping using the validation set which acts as an additional regularizer and therefore, the amount of overfitting is significantly low. D.4 PERFORMANCE OF SUBSET SELECTION STRATEGIES ON LARGER SUBSET SIZES We conducted similar experiments as Section 5.1 for CIFAR10 and FMNIST on larger subset sizes (b) of 0.1|D|, 0.2|D|, 0.4|D| and 0.7|D|. For each dataset and the above mentioned subset sizes, we evaluate the decrease in accuracy (ratio of the accuracy on the subset to accuracy on the full dataset), speed-up (ratio of the time taken to train the full dataset to the sum of times taken for subset selection and subset training), and GPU usage in GB-min. We report the variation of these metrics with respect to the subset sizes in the following tables – Note that in the case of CIFAR10, we denote the decrease factors of 0.91-0.96 in green, and the decrease factors of 0.85 - 0.88 in purple. In case of FMNIST, we denote the decrease factors of 0.94-0.97 in green and the decrease factors of 0.90 - 0.93 in purple. We make the following observations: 1. We show a better trade-off between accuracy and time and accuracy and memory than almost all the baselines. 2. 
Observations in CIFAR10: When we tuned the subset sizes, we notice that SUBSELNET, GLISTER, Grad-Match and EL2N can achieve a comparable decrease factor of 0.91-0.93. In terms of speed-up and memory usage, we see that (a) SUBSELNET achieves a 1.3x speed-up as compared to GLISTER and 1.1x speed-up as compared to Grad-Match and EL2N (b) GLISTER consumes 3.7x GPU memory, Grad-Match consumes 3.1x GPU memory and EL2N consumes 2.5x GPU memory as compared to SUBSELNET We notice that none of the other subset selection strategies achieve a high-enough accuracy, and we beat them in terms of speed-up and memory usage. Moreover, for the case when the subset selection methods achieve a decrease factor of 0.85 - 0.88, we see that (a) SUBSELNET achieves a 2.4x speed-up as compared to FacLoc, 1.8x speed-up as compared to Pruning, 1.4x speed-up as compared to GLISTER, 1.2x speed-up as compared to Grad-Match and 1.1x speed-up as compared to EL2N (b) FacLoc consumes 4.8x GPU memory, Pruning consumes 1.7x GPU memory, GLISTER consumes 4x GPU memory, Grad-Match consumes 3.4x GPU memory and EL2N consumes 2.6x GPU memory as compared to SUBSELNET. 3. Observations in FMNIST: When we tuned the subset sizes, we notice that SUBSELNET, Facloc, GLISTER, Grad-Match and EL2N can achieve a comparable decrease factor of 0.94-0.97. In terms of speed-up and memory usage, we see that (a) SUBSELNET achieves a 3.8x speed-up as compared to FacLoc, 1.4x speed-up as compared to GLISTER and Grad-Match, and 2.2x speed-up as compared to EL2N. (b) FacLoc consumes 12.5x GPU Memory, and GLISTER, Grad-Match and EL2N con- sume 2.9x GPU memory as compared to SUBSELNET. We notice that none of the other subset selection strategies achieve a high-enough accuracy, and we beat them in terms of speed-up and memory usage. Moreover, for the case when the subset selection methods achieve a decrease factor of 0.90-0.93, we see that (a) SUBSELNET achieves a 7.4x speed-up as compared to FacLoc, 2.1x speed-up as compared to GLISTER, 2.9x speed-up as compared to Grad-Match and 2.1x speed-up as compared to EL2N (b) FacLoc consumes 28.5x GPU memory, GLISTER consumes 4.5x GPU memory, GradMatch consumes 6.1x GPU memory and EL2N consumes 3.7x GPU memory as compared to SUBSELNET. We present the trade-off between the accuracy and speed-up, and accuracy and memory consumption in Figure 15. E PROS AND CONS OF USING GNNS We have used a GNN in our model encoder to encode the architecture representations into an embedding. We chose a GNN for the task due to following reasons - 1. Message passing between the nodes (which may be the input, output, or any of the operations) allows us to generate embeddings that capture the contextual structural information of the node, i.e., the embedding of each node captures not only the operation for that node but also the operations preceding that node to a large extent. 2. It has been shown by (Morris et al., 2019) and (Xu et al., 2018a) that GNNs are as powerful as the Weisfeiler-Lehman algorithm and thus give a powerful representation for the graph. Thus, we obtain smooth embeddings of the nodes/edges that can effectively distill information from its neighborhood without significant compression. 3. GNNs embed model architecture into representations independent of the underlying dataset and the model parameters. This is because it operates on only the nodes and edges— the structure of the architecture and does not use the parameter values or input data. However, the GNN faces the following drawbacks - 1. 
A GNN uses a symmetric aggregator for message passing over node neighbors to ensure that the representation of any node is invariant to a permutation of its neighbors. Such a symmetric aggregator renders the GNN a low-pass filter, as shown in (NT & Maehara, 2019), which attenuates important high-frequency signals. 2. We are training one GNN using several architectures. This can make the embedding insensitive to changes in the architecture. In the context of model architectures, if we change the operation of one node in the architecture (either remove, add or change the operation), then the model's output can change significantly. However, the GNN embedding may become immune to such changes, since the GNN is trained over many architectures. F CHOICE OF SUBMODULAR FUNCTION FOR THE OPTIMIZATION PROBLEM In (1) we introduced the original combinatorial problem for subset selection, where the optimization variable S— the subset of instances —makes the underlying problem combinatorial. Here, we can use submodular functions like Graph-Cut, Facility-Location, and Log-Determinant as the diversity functions, which would allow us to use greedy algorithms to maximize the objective in (1). But, as discussed in Section 4.1, this suffers from two bottlenecks — expensive computation and lack of generalizability. Therefore, we do not follow these approaches and resort to our proposed approach, SUBSELNET. In contrast to the optimization problem in (1), which was a combinatorial set optimization problem, the optimization problem in SUBSELNET (Eq. (6)) is a continuous optimization problem whose goal is to estimate Prπ. In such a problem, where the probability distribution is the key optimization variable, entropy is a more natural measure of diversity than the other submodular measures.
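To make this contrast concrete, the toy snippet below computes the entropy of the softmax distribution induced by the scores π (a stand-in for the H(Prπ) regularizer) next to a facility-location style submodular score of a chosen subset; all inputs are random placeholders. The entropy term is differentiable in π and can sit directly inside the continuous objective, whereas the submodular score has to be maximized greedily over discrete sets.

import torch
import torch.nn.functional as F

def sampler_entropy(pi):
    """H of the softmax distribution over instances induced by the scores pi."""
    logp = F.log_softmax(pi, dim=0)
    return -(logp.exp() * logp).sum()

def facility_location(X, S):
    """Submodular alternative: sum_j max_{i in S} <x_i, x_j>, as used by the FL baseline."""
    return (X @ X[S].T).max(dim=1).values.sum()

pi = torch.randn(1000, requires_grad=True)               # hypothetical selection scores
X = F.normalize(torch.randn(1000, 2048), dim=1)          # hypothetical instance embeddings
S = torch.topk(pi, k=50).indices                          # a candidate subset of size b = 50
print(sampler_entropy(pi), facility_location(X, S))       # the first term is differentiable in pi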
1. What is the focus and contribution of the paper on data subset selection? 2. What are the strengths of the proposed approach, particularly in its decomposition into model approximator and subset sampler? 3. What are the weaknesses of the paper regarding its writing errors, formulae, and optimization objectives? 4. Do you have any concerns or questions about the experimental setup and comparisons with other works? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper In this paper, a new non-adaptive data subset selection method is proposed. Traditional adaptive methods interleave training and subset selection, whereas in the newly proposed method the subset selection is done before training. Furthermore, the paper also proposes transductive and inductive variants. The experimental results verify that both variants outperform the baselines on subset selection and can also be used to choose the best architecture. Strengths And Weaknesses This paper has a clear decomposition of the model into parts, including the model approximator and the subset sampler. For each part, it has clear annotations to explain the whole process. For the subset sampler, two variants are proposed and the experimental results show the trade-off between them; furthermore, a combination of these two variants is tested as well. The comprehensive experimental results demonstrate the effectiveness of the proposed method. Some questions: (1) There are some writing errors; for example, “viz.” appears a couple of times. (2) Both formulas (5) and (6) have the E_S term; is that correct? (3) For the E_S optimization objective such as (6), since the parameter \pi is under the probability distribution and needs sampling, how do you optimize \pi? Do you use some reparameterization trick which is not shown in the paper? (4) Can you elaborate more on why you don’t jointly optimize the GNN parameters and the transformer parameters? (5) In the experimental setup, the paper mentions all the baselines are non-adaptive; is that correct? (6) Do you have any results to show the accuracy gap between the neural model approximator and the fully trained model? Clarity, Quality, Novelty And Reproducibility This paper proposes a new non-adaptive method for the data subset selection problem and has a clear description of the proposed method. The experimental results also verify the paper’s arguments, mainly on the advantage of the trade-off between speedup and memory.
ICLR
Title Efficient Data Subset Selection to Generalize Training Across Models: Transductive and Inductive Networks Abstract Subset selection, in recent times, has emerged as a successful approach toward efficient training of models by significantly reducing the amount of data and computational resources required. However, existing methods employ discrete combinatorial and model-specific approaches which lack generalizability— for each new model, the algorithm has to be executed from the beginning. Therefore, for data subset selection for an unseen architecture, one cannot use the subset chosen for a different model. In this work, we propose SUBSELNET, a nonadaptive subset selection framework, which tackles these problems with two main components. First, we introduce an attention-based neural gadget that leverages the graph structure of architectures and acts as a surrogate to trained deep neural networks for quick model prediction. Then, we use these predictions to build subset samplers. This leads us to develop two variants of SUBSELNET. The first variant is transductive (called as Transductive-SUBSELNET) which computes the subset separately for each model by solving a small optimization problem. Such an optimization is still super fast, thanks to the replacement of explicit model training by the model approximator. The second variant is inductive (called as Inductive-SUBSELNET) which computes the subset using a trained subset selector, without any optimization. Most state-of-the-art data subset selection approaches are adaptive, in that the subset selection adapts as the training progresses, and as a result, they require access to the entire data at training time. Our approach, in contrast, is non-adaptive and does the subset selection only once in the beginning, thereby achieving resource and memory efficiency along with compute-efficiency at training time. Our experiments show that both the variants of our model outperform several methods on the quality of the subset chosen and further demonstrate that our method can be used for choosing the best architecture from a set of architectures. 1 INTRODUCTION In the last decade, deep neural networks have enhanced the performance of the state-of-the-art ML models dramatically. However, these neural networks often demand massive data to train, which renders them heavily contingent on availability of high performance computing machinery, e.g., GPUs, CPUs, RAMs, storage disks, etc. However, such resources entail heavy energy consumption, excessive CO2 emission and maintenance cost. Driven by this challenge, a recent body of work focus on suitably selecting a subset of instances, so that the model can be quickly trained using lightweight computing infrastructure (Boutsidis et al., 2013; Kirchhoff & Bilmes, 2014; Wei et al., 2014a; Bairi et al., 2015; Liu et al., 2015; Wei et al., 2015; Lucic et al., 2017; Mirzasoleiman et al., 2020b; Kaushal et al., 2019; Killamsetty et al., 2021a;b;c). However, these existing data subset selection algorithm are discrete combinatorial algorithms, which share three key limitations. (1) Scaling up the combinatorial algorithms is often difficult, which imposes significant barrier against achieving efficiency gains as compared to training with entire data. (2) Many of these approaches are adaptive in nature, i.e, the subset changes as the model training progresses. 
As a result, they require access to the entire training dataset and while they provide compute-efficiency, they do not address memory and resource efficiency challenges of deep model training. (3) The subset selected by the algorithm is tailored to train only a given specific model and it cannot be used to train another model. Therefore, the algorithm cannot be shared across different models. We discuss the related work in detail in Appendix A. 1.1 PRESENT WORK Responding to the above limitations, we develop SUBSELNET, a trainable subset selection framework, which— once trained on a set of model architectures and a dataset— can quickly select a small training subset such that it can be used to train a new (test) model, without a significant drop in accuracy. Our setup is non-adaptive in that it learns to select the subset before the training starts for a new architecture, instead of adaptively selecting the subset during the training process. We initiate our investigation by writing down an instance of combinatorial optimization problem that outputs a subset specifically for one given model architecture. Then, we gradually develop SUBSELNET, by building upon this setup. SUBSELNET comprises of the following novel components. Neural model approximator. The key blocker in scaling up a model-specific combinatorial subset selector across different architectures is the involvement of the model parameters as optimization variables along with the candidate data subset. To circumvent this blocker, we design a neural model approximator which aims to approximate the predictions of a trained model for any given architecture. Thus, such a model approximator can provide per instance accuracy provided by a new (test) model without explicitly training it. This model approximator works in two steps. First, it translates a given model architecture into a set of embedding vectors using graph neural networks (GNNs). Similar to the proposal of Yan et al. (2020) it views a given model architecture as a directed graph between different operations and, then outputs the node embeddings by learning a variational graph autoencoder (VAE) in an unsupervised manner. Due to such nature of the training, these node embeddings represent only the underlying architecture— they do not capture any signal from the predictions of the trained model. Hence, in the next step, we build a neural model encoder which uses these node embeddings and the given instance to approximate the prediction made by the trained model. The model encoder is a transformer based neural network which combines the node embedding using self-attention induced weights to obtain an intermediate graph representation. This intermediate representation finally combines with the instance vector x to provide the prediction of the trained architecture. Subset sampler. Having computed the prediction of a trained architecture, we aim to choose a subset of instances that would minimize the predicted loss and at the same time, offers a good representation of the data. Our subset sampler takes the approximate model output and an instance as input and computes a selection score. Then it builds a logit vector using all these selection scores, feeds it into a multinomial distribution and samples a subset from it. This naturally leads to two variants of the model. Transductive-SUBSELNET: The first variant is transductive in nature. 
Here, for each new architecture, we utilize the predictions from the model approximator to build a continuous surrogate of the original combinatorial problem and solve it to obtain the underlying selection scores. Thus, we still need to solve a fresh optimization problem for every new architecture. However, the direct predictions from the model approximator allow us to skip explicit model training. This makes this strategy extremely fast both in terms of memory and time. We call this transductive subset selector as Transductive-SUBSELNET. Inductive-SUBSELNET: In contrast to Transductive-SUBSELNET, the second variant does not require to solve any optimization problem. Consequently, it is extremely fast. Instead, it models the scores using a neural network which is trained across different architectures to minimize the entropy regularized sum of the prediction loss. We call this variant as Inductive-SUBSELNET. We compare our method against six state-of-the-art methods on three real world datasets, which show that Transductive-SUBSELNET (Inductive-SUBSELNET) provides the best (second best) trade off between accuracy and inference time as well as accuracy and memory usage, among all the methods. This is because (1) our subset selection method does not require any training at any stage of subset selection for a new model; and, (2) our approach is non-adaptive and does the subset selection before the training starts. In contrast, most state-of-the-art data subset selection approaches are adaptive, in that the subset selection adapts as the training progresses, and as a result, they require access to the entire data at training time. Finally, we design a hybrid version of the model, where given a budget, we first select a larger set of instances using Inductive-SUBSELNET, and then extract the required number of instances using Transductive-SUBSELNET. We observe that such a hybrid approach allow us to make a smooth transition between the trade off curves from Inductive-SUBSELNET to Transductive-SUBSELNET. 2 DEVELOPMENT OF PROPOSED MODEL: SUBSELNET In this section, we setup the notations and write down the combinatorial subset selection problem for efficient training. This leads us to develop a continuous optimization problem which would allow us to generalize the combinatorial setup across different models. 2.1 NOTATIONS We are given a set of training instances {(xi, yi)}i∈D where we use D to index the data. Here, xi ∈ Rdx are features and yi ∈ Y as the labels. In our experiments, we consider Y as a set of categorical labels. However, our framework can also be used for continuous labels. We use m to denote a neural architecture and represent its parameterization as mθ. We also useM to denote the set of neural architectures. Given an architecture m ∈ M, Gm = (Vm, Em) provides the graph representation of m, where the nodes u ∈ Vm represent the operations and the e = (um, vm) indicates an edge, where the output given by the operation represented by the node um is fed to one of the operands of the operation given by the node vm. Finally, we use H(·) to denote the entropy of a probability distribution and ℓ(mθ(x), y) as the cross entropy loss hereafter. 2.2 COMBINATORIAL SUBSET SELECTION FOR EFFICIENT LEARNING We are given a dataset {(xi, yi)}i∈D and a model architecture m ∈ M with its neural parameterization mθ. 
The goal of a subset selection algorithm is to select a small subset of instances S with |S| = n << |D| such that training mθ on the subset S gives nearly the same accuracy as training on the entire dataset D. Existing works (Killamsetty et al., 2021b; Sivasubramanian et al., 2021; Killamsetty et al., 2021a) adopt different strategies to achieve this goal, but all of them aim to simultaneously optimize for the model parameters θ as well as the candidate subset S. At the outset, we may consider the following optimization problem: minimize_{θ, S⊂D: |S|=b} Σ_{i∈S} ℓ(mθ(xi), yi) − λ DIVERSITY(S), (1) where b is the budget, DIVERSITY(S) measures the representativeness of S with respect to the whole dataset D and λ is a regularizing coefficient. One can use submodular functions (Fujishige, 2005; Iyer, 2015) like Facility Location, Graph Cut, or Log-Determinant to model DIVERSITY(S). Here, λ trades off between training loss and diversity. Such an optimization problem indeed provides an optimal subset S that results in high accuracy. Bottlenecks of the combinatorial optimization. The optimization problem (1) imposes the following challenges. (1) It demands explicit training of mθ, which can be expensive in terms of both memory and time. (2) The training of mθ every time for a new architecture m prevents the subset S from being generalizable— one needs to solve the optimization (1) again to find S for an unseen model architecture. We address these challenges by designing a neural surrogate of the objective (1), which leads to generalization of subset selection across efficient training of different models. 2.3 COMPONENTS OF SUBSELNET MODEL Next, we sketch our proposed model SUBSELNET that leads to substituting the optimization (1) with its neural surrogate. It consists of two key components: (i) a neural approximator of the trained model and (ii) the subset sampler. Figure 4 in Appendix B illustrates our model. Approximator of the trained model mθ∗. First, we design a neural network Fϕ which approximates the predictions of the trained model mθ∗ for different architectures m ∈ M. Given the dataset {(xi, yi)}i∈D and a model architecture m ∈ M, we first feed the underlying DAG Gm into a graph neural network GNNα with parameters α, which outputs the representations of the nodes of Gm, i.e., Hm = {hu}u∈Vm. Next, we feed Hm and the instance xi into an encoder gβ, so that Fϕ(Gm, xi) ≈ mθ∗(xi) for m ∈ M, (2) where Fϕ(Gm, xi) = gβ(GNNα(Gm), xi). (3) Here, ϕ = {α, β}, and θ∗ is the set of learned parameters of the model mθ on the dataset D. Subset sampler. We design a subset sampler using a probabilistic model Prπ(•). Given a budget |S| ≤ b, it sequentially draws instances S = {s1, ..., sb} from a softmax distribution of the logit vector π ∈ R^{|D|}, where π(xi, yi) indicates a score for the element (xi, yi). Having chosen the first t instances St = {s1, ..., st}, it draws the (t+1)-th element (x, y) from the remaining instances in D with probability proportional to exp(π(x, y)), and repeats this for b steps. Thus, the probability of selecting the ordered set of elements S = {s1, ..., sb} is given by Prπ(S) = ∏_{t=0}^{b−1} exp(π(x_{s_{t+1}}, y_{s_{t+1}})) / Σ_{τ∈D∖St} exp(π(x_τ, y_τ)). (4) We would like to highlight that S is an ordered set of elements, selected in a sequential manner. However, such an order does not affect the trained model, which is inherently invariant to permutations of the training data; it only affects the choice of S. Training objective. Using Eqs.
(2) and (4), we replace the combinatorial optimization problem in Eq. (1) with a continuous optimization problem, across different model architectures m ∈M. To that goal, we define Λ(S;m;π, Fϕ) = ∑ i∈S ℓ(Fϕ(Gm,xi), yi)− λH(Pr π(•)) (5) minimize π,ϕ ∑ m∈M E S∈Prπ(•) [ Λ(S;m;π, Fϕ) + ∑ i∈S γKL(Fϕ(Gm,xi),mθ∗(xi)) ] (6) Here, we use entropy on the subset sampler H(Prπ(•)) to model the diversity of samples in the selected subset. We call our neural pipeline, which consists of the model approximator Fϕ and the subset selector π, as SUBSELNET. In the above, γ penalizes the difference between the output of model approximator and the prediction made by the trained model, which allows us to generalize the training of different models m ∈M through the model Fϕ(Gm,xi). 2.4 TRANSDUCTIVE-SUBSELNET AND INDUCTIVE-SUBSELNET MODELS The optimization (6) suggests that once Fϕ is trained, we can use it to compute the output of the trained model mθ∗ for an unseen architecture m′ and use it to compute π. This already removes a significant overhead of model training and facilitates fast computation of π. This leads us to develop two types of models based on how we can compute π, as follows. Transductive-SUBSELNET. The first variant of the model is transductive in terms of computation of π. Here, once we train the model approximator Fϕ, then we compute π by solving the optimization problem explicitly with respect to π, every time when we wish to select data subset for a new architecture. Given a trained model Fϕ and a new model architecture m′ ∈M, we solve the optimization problem: minπ ES∈Pr π(•)[Λ(S;m;π, Fϕ)] to find the subset sampler Prπ during inference time for a new architecture m′. Such an optimization still consumes time during inference. However, it is still significantly faster than the combinatorial methods (Killamsetty et al., 2021b;a; Mirzasoleiman et al., 2020a; Sivasubramanian et al., 2021) thanks to sidestepping the explicit model training using a model approximator. Inductive-SUBSELNET. In contrast to the transductive model, the inductive model does not require explicit optimization of π in the face of a new architecture. To that aim, we approximate π using a neural network πψ. This takes two signals as inputs - the dataset D and the outputs of the model approximator for different instances {Fϕ(Gm,xi) | i ∈ D}, and finally outputs a score for each instance πψ(xi, yi). Under Inductive-SUBSELNET, the optimization (6) becomes: minimize ψ,ϕ ∑ m∈M E S∈Prπψ (•) [ Λ(S;m;πψ, Fϕ) + ∑ i∈S γKL(Fϕ(Gm,xi),mθ∗(xi)) ] (7) Such an inductive model can select an optimal distribution of the subset that should be used to efficiently train any model mθ, without explicitly training θ or searching for the underlying subset. 3 NEURAL PARAMETERIZATION OF SUBSELNET In this section, we describe the neural parametrization of SUBSELNET. SUBSELNET consists of two key components, Fϕ and πψ . Specifically, Transductive-SUBSELNET has only one neural component which is Fϕ, whereas, Inductive-SUBSELNET has both Fϕ and πψ . 3.1 NEURAL PARAMETERIZATION OF Fϕ The approximator Fϕ consists of two components: (i) a graph neural network GNNα which mapsGm, the DAG of an architecture, to the node representations Hm = {hu}u∈Vm and (ii) a model encoder gβ which takes Hm and the instance xi as input and approximates mθ∗(xi), i.e., the prediction made by the trained model. Therefore, Fϕ(Gm,x) = gβ(GNNα(Gm),xi). Here, ϕ = {α, β}. Computation of architecture embedding using GNNα. 
Given a model m ∈M, we compute the representations Hm = {hu|u ∈ Vm} by using a graph neural network GNNα parameterized with α, following the proposal of Yan et al. (2020). We first compute the feature vector fu for each node u ∈ Vm using the one-hot encoding of the associated operation (e.g., max, sum, etc.) and then feed it into a neural network to compute an initial node representation, as given below. hu[0] = INITNODEα(fu) (8) Then, we use a message passing network, which collects signals from the neighborhood of different nodes and recursively compute the node representations (Yan et al., 2020; Xu et al., 2018b; Gilmer et al., 2017). Given a maximum number of recursive layers K and the node u, we compute the node embeddings Hm = {hu|u ∈ Vm} by gathering information from the k < K hops using K recursive layers as follows. h(u,v)[k − 1] = EDGEEMBEDα(hu[k − 1],hv[k − 1]) h′u[k − 1] = SYMMAGGRα( { h(u,v)[k − 1] | v ∈ Nbr(u) } ) hu[k] = UPDATEα(hu[k − 1],h′u[k − 1]). (9) Here, Nbr(u) is the set of neighbors of u. We use SYMMAGGR as a simple sum aggregator and both UPDATE and EDGEEMBED are injective mappings, as used in (Xu et al., 2018b). Note that trainable parameters from EDGEEMBED, SYMMAGGR and UPDATE are decoupled. They are represented as the set of parameters α. Finally, we obtain our node representations as: hu = [hu[0], ..,hu[K − 1]]. (10) Model encoder gβ . Having computed the architecture representation {hu |u ∈ Vm}, we next design the model encoder which leverages these embeddings to predict the output of the trained model mθ∗(xi). To this aim, we developed a model encoder gβ parameterized by β that takes Hm and xi as input and attempts to predict mθ∗(xi), i.e., gβ(Hm,xi) ≈ mθ∗(xi). It consists of three steps. In the first step, we generate a permutation invariant order on the nodes. Next, we feed the representations {hu} in this order into a self-attention based transformer layer. Finally, we combine the output of the transformer and the instance xi using a feedforward network to approximate the model output. Node ordering using BFS order. We first sort the nodes using breadth-first-search (BFS) order ρ. Similar to You et al. (2018), this sorting method produces a permutation-invariant sequence of nodes and captures subtleties like skip connections in the network structure Gm Attention layer. Given the BFS order ρ, we pass the representations Hm = {hu |u ∈ Vm} in the sequence ρ through a self-attention based transformer network. Here, the Query, Key and Value functions are realized by matrices Wquery,Wkey,Wvalue ∈ Rdim(h)×k where k is a tunable width. Thus, for each node u ∈ Vm, we have: Query(hu) = W ⊤ queryhu, Key(hu) = W ⊤ keyhu, Value(hu) = W ⊤ valuehu (11) Using these quantities, we compute an attention weighted vector ζu given by: Attu = W T c ∑ v au,vValue(hv) with, au,v = SOFTMAXv ( Query(hu) ⊤Key(hv)/ √ k ) (12) Here k is the dimension of the latent space, the softmax operation is over the node v, and Wc ∈ Rk×dim(h). Subsequently, for each node u, we use a feedforward network, preceded and succeeded by layer normalization operations, which are given by the following set of equations. ζu,1 = LN(Attu + hu; γ1, γ2), ζu,2 = W⊤2 RELU(W ⊤ 1 ζu,1), ζu,3 = LN(ζu,1 + ζu,2; γ3, γ4) Here, LN is the layer normalization operation (Ba et al., 2016). Finally, we feed the vector ζu,3 for the last node u in the sequence ρ, i.e., u = ρ(|Vm|) along with the feature vector xi into a feed-forward network parameterized by WF to model the prediction mθ∗(xi). 
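For reference, a minimal single-head implementation of the attention step in Equations (11)-(12) and the subsequent layer-normalization and feed-forward steps is sketched below; the widths follow the appendix (d = 16, k = 8), but the exact wiring and initialization are our own assumptions.

import math
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d=16, dk=8, ff=64):
        super().__init__()
        self.Wq, self.Wk, self.Wv = (nn.Linear(d, dk, bias=False) for _ in range(3))
        self.Wc = nn.Linear(dk, d, bias=False)
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, ff), nn.ReLU(), nn.Linear(ff, d))

    def forward(self, H):                                   # H: [num_nodes, d], in BFS order
        q, k, v = self.Wq(H), self.Wk(H), self.Wv(H)
        a = torch.softmax(q @ k.T / math.sqrt(k.shape[-1]), dim=-1)   # attention weights a_{u,v}
        att = self.Wc(a @ v)                                           # Att_u
        z1 = self.ln1(att + H)                                         # zeta_{u,1}
        z2 = self.ffn(z1)                                              # zeta_{u,2}
        return self.ln2(z1 + z2)                                       # zeta_{u,3}

A single head is shown because the appendix specifies a single-head attention block; a multi-head variant would be a straightforward extension.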
Thus, the final output of the model encoder gβ(Hm,xi) is given by om,xi = FFβ2(ζρ|Vm|,3 ,xi) (13) Here, W• and γ• are trainable parameters and collectively form the set of parameters β. 3.2 NEURAL ARCHITECTURE OF INDUCTIVE-SUBSELNET We approximate π using a neural network πψ using a neural network which takes three inputs – (xj , yj), the corresponding output of the model approximator, i.e., om,xj = Fϕ(Gm,xj) and the node representation matrix Hm and provides us a positive selection score πψ(Hm,xj , yj ,om,xj ). In practice, πψ is a three-layer feed-forward network, which contains Leaky-ReLU activation functions for the first two layers and sigmoid activation at the last layer. 4 PARAMETER ESTIMATION AND INFERENCE Given a dataset {(xi, yi) | i ∈ D} and the output of the trained models {mθ∗(xi)}i∈D, our goal is to estimate ϕ and π (resp. ψ) for the transductive (inductive) model. We first illustrate the bottlenecks that prevent us from end-to-end training for estimating these parameters. Then, we introduce a multi-stage training method to overcome these limitations. Finally, we present the inference method. 4.1 BOTTLENECK FOR END TO END TRAINING End to end optimization of the above problem is difficult for the following reasons. (i) Our architecture representation Hm only represents the architectures and thus should be independent of parameter of the architecture θ and the instances x. End to end training can make them sensitive to these quantities. (ii) To enable the model approximator Fϕ accurately fit the output of the trained model mθ, we need an explicit training for ϕ with the target mθ. Adding the corresponding loss as an additional regularizer imposes an additional hyperparameter tuning. 4.2 MULTI-STAGE TRAINING In our multi-stage training method, we first train the model approximator Fϕ by minimizing the sum of the KL divergence between the gold output probabilities, and then train our subset sampler Prπ (resp. Prπψ ) for the transductive (inductive) model as well as fine-tuning ϕ. Training the model approximator Fϕ. We train Fϕ in two steps. In the first step, we perform unsupervised training of GNNα using graph variational autoencoder (GVAE). This ensures that the architecture representations Hm remain insensitive to the model parameters. We build the encoder and decoder of our GVAE by following existing works on graph VAEs (Yan et al., 2020) in the context graph based modeling of neural architectures. Given a graph Gm, the encoder q(Zm |Gm) which takes the node embeddings {hu}u∈Vm and maps it into the latent space Zm = {zu}u∈Vm . Specifically, we model the encoder q(Zm |Gm) as: q(zu |Gm) = N (µ(hu),Σ(hu)). Here, both µ and Σ are neural networks. Given a latent representation Zm = {zu}u∈Vm , the decoder models a generative distribution of the graph Gm where the presence of an edge is modeled as Bernoulli distribution BERNOULLI(σ(z⊤u zv)). Thus, we model the decoder as: p(Gm | Z) = ∏ (u,v)∈Em σ(z ⊤ u zv) · ∏ (u,v) ̸∈Em [1− σ(z⊤u zv)] (14) Here, σ is a parameterized sigmoid function. Finally, we estimate α, µ,Σ and σ by maximizing the evidence lower bound (ELBO) as follows: max α,µ,Σ,σ EZ∼q(• |Gm)[p(Gm | Z)]− KL(q(• |Gm)||prior(•)) (15) Next, we train our model encoder gβ by minimizing the KL-Divergence between the approximated prediction gβ(Hm,xi) and the ground truth prediction mθ∗(xi), where both these quantities are probabilities across different classes. Hence, the training problem is as follows: minimize β ∑ i∈D,m∈M KL(mθ∗(xi)||gβ(Hm,xi)) (16) Training of the subset sampler. 
Finally, we fine-tune gβ and train π by solving (6) for the Transductive-SUBSELNET (likewise train πψ by solving (7) for Inductive-SUBSELNET). 4.3 INFERENCE During inference, our goal is to select a subset S with |S| = b for a new model m′, which would facilitate efficient training of m′. As discussed in Section 2.4, we compute π for TransductiveSUBSELNET by explicitly solving the optimization problem: minπ ES∈Pr π(•)[Λ(S;m;π, Fϕ)] and then draw S ∼ Prπ(•). For Inductive-SUBSELNET, we draw S ∼ Prπψ̂ (•) where ψ̂ is the learned value of ψ during training. 4.4 OVERVIEW OF TRAINING AND INFERENCE ROUTINES Algorithms 1 and 2 summarize the algorithms for the training and inference procedure. Algorithm 1 Training procedure 1: function TRAINTRANSDUCTIVE(D,M, {θ∗}) 2: α̂, β̂,Hm ←TRAINAPPROX(D,M, {θ∗}) 1: function TRAININDUCTIVE(D,M, {θ∗}) 2: α̂, β̂,Hm ←TRAINAPPROX(D,M, {θ∗}) 3: o← [gβ̂({Hm,xi})]i,m 4: ψ̂ ← TRAINPI(o, {Hm}, {xi}) 1: function TRAINAPPROX(D,M, {θ∗}) 2: α̂← TRAINGNN(M) 3: for m ∈Mtrain do 4: Hm ← GNNα̂(m) 5: POS ← BFSORDERING(Gm) 6: β̂ ← TRAINMODELENC({xi}, POS, {θ∗}) Algorithm 2 Inference procedure 1: function INFERTRANSDUCTIVE(D, α̂, β̂,m′) 2: Hm′ ← GNNα̂(m′) 3: Fϕ(Gm′ ,xi)← gβ̂(Hm′ ,xi) ∀i ∈ D 4: π∗ ← minπ ES∈Prπ(•)[Λ(S;m′;π;Fϕ)] 5: S∗ ∼ Prπ∗(•) 6: TRAINNEWMODEL(m′;S∗) 1: function INFERINDUCTIVE(D, α̂, β̂,m′) 2: Hm′ ← GNNα̂(m′) 3: Fϕ(Gm′ ,xi)← gβ̂(Hm′ ,xi) ∀i ∈ D 4: Compute πψ̂(xi, yi) ∀i ∈ D 5: S∗ ∼ Prπψ̂ (•) 6: TRAINNEWMODEL(m′;S∗) Training Subroutines. The training phase for both, Transductive-SUBSELNET first utilizes the TRAINAPPROX routine to train the model approximator given the dataset, trained model parameters, and the set of neural architectures. Internally, the routine calls the TRAINGNN subroutine to train the parameters (α) of the GNN network, BFSORDERING subroutine to reorder the embeddings based on the BFS order and the TRAINMODELENC subroutine to train the attention-based model encoder’s parameters (β). The TRAININDUCTIVE routine further calls the TRAINPI subroutine to train the parameters of the neural subset selector. Inference Subroutines. Given an unseen architecture and parameters of the trained neural networks, the inference phase for both variants of SUBSELNET first generates the model encoder output for all the data points. Post this, the INFERTRANSDUCTIVE routine solves the optimization problem on π explicitly for the unseen architecture and selects the subset from the dataset. On the other hand, INFERINDUCTIVE utilizes the trained parameters of the neural subset selector. Finally, both routines call the TRAINNEWMODEL to train and evaluate the unseen architecture on selected subset. 5 EXPERIMENTS In this section, we provide comprehensive evaluation of SUBSELNET against several strong baselines on three real world datasets. In Appendix D, we present additional results. 5.1 EXPERIMENTAL SETUP Datasets. We use FMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2014) and CIFAR100 (Krizhevsky et al., 2009) datasets for our experiments. We transform an input image Xi to a vector xi of dimension 2048 by feeding it to a pre-trained ResNet50 v1.5 (?) model and using the output from the penultimate layer as the image representation. Model architectures and baselines. We use model architectures from NAS-Bench-101 (Ying et al., 2019) for our experiments. 
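Before listing the baselines, here is a minimal sketch of the image-embedding step described above, which extracts 2048-dimensional penultimate-layer features from a pre-trained ResNet-50. The torchvision weights and the preprocessing shown are our assumptions, since the exact transform used in the paper is not specified.

import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

# Pre-trained ResNet-50 with the classification head removed -> 2048-d embeddings x_i
backbone = models.resnet50(pretrained=True)
backbone.fc = nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),           # assumed: upsample 32x32 images to the ImageNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

loader = DataLoader(CIFAR10(root="data", train=True, download=True, transform=preprocess),
                    batch_size=256)
with torch.no_grad():
    # Embeds the full training split; restrict the loader for a quick test.
    feats = torch.cat([backbone(xb) for xb, _ in loader])   # [|D|, 2048] instance embeddings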
We compare Transductive-SUBSELNET and Inductive-SUBSELNET against two non-adaptive subset selection methods – (i) Facility location (Fujishige, 2005; Iyer, 2015) where we maximize FL(S) = ∑ j∈Dmaxi∈S x ⊤ i xj to find S, (ii) Pruning (Sorscher et al., 2022), and four adaptive subset selection methods – (iii) Glister (Killamsetty et al., 2021b), (iv) Grad– Match (Killamsetty et al., 2021a), (v) EL2N (Paul et al., 2021), (vi) GraNd (Paul et al., 2021); and; (vii) Full selection where we use complete training data. The non-adaptive subset selectors select the subset before the training begins and thus, never access the rest of the training set again during the training iterations. On the other hand, the adaptive subset selectors refine the choice of subset during training iterations and thus they need to access the full training set at each training iteration. Appendix C contains additional details about the baselines. Evaluation protocol. We split the model architecturesM into 60% training (Mtr), 20% validation (Mval) and 20% test (Mtest) folds. Similarly, we split the dataset D into Dtr, Dval and Dtest. We presentMtr,Mval, Dtr and Dval to our method and estimate ϕ̂ and ψ̂ (for Inductive-SUBSELNET model). None of the baseline methods supports any generalizable learning protocol across different model architectures and thus cannot leverage the training architectures during test. Given an architecture m′ ∈ Mtest, we select the subset S from Dtr using our subset sampler (Prπ for Transductive-SUBSELNET or Prπ ψ̂ for Inductive-SUBSELNET). Similarly, all the non-adaptive subset selectors select S ⊂ Dtr using their own algorithms. Once S is selected, we train the test models m′ ∈Mtest on S. We perform our experiments with different |S| = b ∈ (0.005|D|, 0.05|D|) and compare the performance between different methods using three quantities: (1) Accuracy Pr(y = ŷ) measured using 1|Dtest| ∑ i∈Dtest ∑ m′∈Mtest 1(maxjm ′ θ∗(xi)[j] = yi). (2) Computational efficiency, i.e., the speedup achieved with respect to training with full dataset. It is measured with respect to Tf/T . Here, Tf is the time taken for training with full dataset; and, T is the time taken for the entire inference task, which is the average time for selecting subsets across the test models m′ ∈ Mtest plus the average training time of these test models on the respective selected subsets. (3) Resource efficiency in terms of the amount of memory consumed during the entire inference task, described in item (2), which is measured as ∫ T 0 memory(t) dt where memory(t) is amount of memory consumed at timestamp t. 5.2 RESULTS Comparison with baselines. Here, we compare different methods in terms of the trade off between accuracy and computational efficiency as well as accuracy and resource efficiency. In Figure 1, we probe the variation between these quantities by varying the size of the selected subset |S| = b ∈ (0.005|D|, 0.05|D|). We make the following observations. (1) Our methods trade-off between accuracy vs. computational efficiency as well as accuracy vs. resource efficiency more effectively than all the methods. For FMNIST, both the variants of our method strikingly output 75% accuracy, whereas they are 100 times faster than full selection. Transductive-SUBSELNET performs slightly better than Inductive-SUBSELNET in terms of the overall trade-off between accuracy and efficiency for FMNIST and CIFAR10 datasets. However, for CIFAR100, Transductive-SUBSELNET performs significantly better than Inductive-SUBSELNET. 
The time taken for both Transductive-SUBSELNET and Inductive-SUBSELNET seems comparable— this is because the subset selection time for both of them are significantly less than the final training time on the selected subset. (2) EL2N is the second best method. It provides the best trade-off between accuracy and time as well as accuracy and GPU memory, among all the baselines. It aims at choosing difficult training instances having high prediction error. As a result, once trained on them, the model can predict the labels of easy instances too. However, it chooses instances after running the initial few epochs. (3) FL adopts a greedy algorithm for subset selection and therefore, it consumes a large time and memory during subset selection itself. Consequently, the overall efficiency significantly decreases although the complexity of the training time on the selected subset remains the same as our models in terms of time and memory. (4) In addition to EL2N, Glister, Grad-Match and GraNd are adaptive subset selection methods that operate with moderately small (> 5%) subset sizes. In a region, where the subset size is extremely small, i.e., 1% − 5%, they perform very poorly. Moreover, they maximize a monotone function at each gradient update step, which results in significant overhead in terms of time. These methods process the entire training data to refine the choice of the subset and consequently, they end up consuming a lot of memory. (5) GraNd selects the instances having high uncertainty after running each model for five epochs and often the model is not well trained by then. Finer analysis of the inference time. Next, we demarcate the subset selection phase from the training phase of the test models on the selected subset during the inference time analysis. Table 2 summarizes the results for top three non-adaptive subset selection methods for b = 0.005|D| on CIFAR100. We observe that: (1) the final training times of all three methods are roughly same; (2) the selection time for TransductiveSUBSELNET is significantly more than Inductive-SUBSELNET, although it remains extremely small as compared to the final training on the inferred subset; and, (3) the selection time of FL is large— as close as 323% of the training time. Hybrid-SUBSELNET. From Figure 1, we observe that Transductive-SUBSELNET performs significantly better than Inductive-SUBSELNET. However, since Transductive-SUBSELNET solves a fresh optimization problem for each new architecture, it performs better at the cost of time and GPU memory. On the other hand, InductiveSUBSELNET performs significantly worse as it relies on a trained neural network to learn the same optimization problem. Here, we design a hybrid version of our model, called as Hybrid-SUBSELNET. Here, given the budget of the subset b, we first choose B > b instances using InductiveSUBSELNET and the final b instances by running the explicit optimization routines in Transductive-SUBSELNET. Figure 3 sum- marizes the results for B = {25K, 30K, 35K, 45K, 50K} . We observe that the trade off curves for the Hybrid-SUBSELNET lie in between Inductive-SUBSELNET and Transductive-SUBSELNET. For low value of B, i.e., B = 25K, the trade off line of Hybrid-SUBSELNET remains close to Inductive-SUBSELNET. As we increase B, the trade-off curve of accuracy vs speed up as well as the accuracy vs GPU usage becomes better, which allows Hybrid-SUBSELNET to smoothly transition from the trade off curve of Inductive-SUBSELNET to Transductive-SUBSELNET. 
At B = 45K, the trade-off curve almost coincides with Transductive-SUBSELNET. Such properties allow a user to choose an appropriate B that can accurately correspond to a target operating point in the form of (Accuracy, Speed up) or (Accuracy, memory usage). 6 CONCLUSION In this work, we develop SUBSELNET, a subset selection framework, which can be trained on a set of model architectures, to be able to predict a suitable training subset before training a model, for an unseen architecture. To do so, we first design a neural model approximator, which predicts the output of a new candidate architecture without explicitly training it. We use that output to design transductive and inductive variants of our model. The transductive model solves a small optimization problem to compute the subset for a new architecture m every single time. In contrast, the inductive model resorts to a neural subset sampler instead of an optimizer. Our work does not incorporate the gradients of the trained model in model approximator and it would be interesting to explore its impact on the subset selection. Further we can extend our setup to an adaptive setting, where we can incorporate signals from different epochs with a sequence encoder to train a subset selector. 7 ETHICS STATEMENT We do not foresee any negative impact of our work from ethics viewpoint. 8 REPRODUCIBILITY STATEMENT We uploaded the code in supplementary material. Details of implementation are given in Appendix C. A RELATED WORK Our work is closely related to representation learning for model architectures, network architecture search, data subset selection. Representation learning for model architectures. Recent work in network representation learning use GNN based encoder-decoder to encapsulate the local structural information of a neural network into a fixed-length latent space (Zhang et al., 2019; Ning et al., 2020; Yan et al., 2020; Lukasik et al., 2021). By employing an asynchronous message passing scheme over the directed acyclic graph (DAG), GNN-based methods model the propagation of input data over the actual network structure. Apart from encodings based solely on the structure of the network, White et al. (2020); Yan et al. (2021) produce computation-aware encodings that map architectures with similar performance to the same region in the latent space. Following the work of Yan et al. (2020), we use a graph isomorphism network as an encoder but instead of producing a single graph embedding, our method produces a collection of node embeddings, ordered by breadth-first-search (BFS) ordering of the nodes. Our work also differs in that we do not employ network embeddings to perform downstream search strategies. Instead, architecture embeddings are used in training a novel model approximator that predicts the logits of a particular architecture, given an architecture embedding and a data embedding. Network architecture search. There is an ever-increasing demand for the automatic search of neural networks for various tasks. The networks discovered by NAS methods often come from an underlying search space, usually designed to constrain the search space size. One such method is to use cell-based search spaces (Luo et al., 2018; Zoph et al., 2017; Liu et al., 2017; Pham et al., 2018; Ying et al., 2019; Dong & Yang, 2020). Although we utilize the NAS-Bench-101 search space for architecture retrieval, our work is fundamentally different from NAS. 
In contrast to the NAS methods, which search for the best possible architecture from the search space using either sampling or gradient-descent based methods (Baker et al., 2017; Zoph & Le, 2016; Real et al., 2017; 2018; Liu et al., 2018; Tan et al., 2018), our work focuses on efficient data subset selection given a dataset and an architecture, which is sampled from a search space. Our work utilizes graph representation learning on the architectures sampled from the mentioned search spaces to project an architecture under consideration to a continuous latent space, utilize the model expression from the latent space as proxies for the actual model and proceed with data subset selection using the generated embedding, model proxy and given dataset. Data subset selection. Data subset selection is widely used in literature for efficient learning, coreset selection, human centric learning, etc. Several works cast the efficient data subset selection task as instance of submodular or approximate-submodular optimization problem (Killamsetty et al., 2021a; Wei et al., 2014a;b;c; Killamsetty et al., 2021b; Sivasubramanian et al., 2021). Another line of work focus on selecting coresets which are expressed as the weighted combination of subset of data, approximating some characteristics, e.g., loss function, model prediction (Feldman, 2020; Mirzasoleiman et al., 2020b; Har-Peled & Mazumdar, 2004; Boutsidis et al., 2013; Lucic et al., 2017). Our work is closely connected to simultaneous model learning and subset selection (De et al., 2021; 2020; Sivasubramanian et al., 2021). These existing works focus on jointly optimizing the training loss, with respect to the subset of instances and the parameters of the underlying model. Among them (De et al., 2021; 2020) focus on distributing decisions between human and machines, whereas (Sivasubramanian et al., 2021) aims for efficient learning. However, these methods adopt a combinatorial approach for selecting subsets and consequently, they are not generalizable across architectures. In contrast, our work focuses on differentiable subset selection mechanism, which can generalize across architectures. B ILLUSTRATION OF SUBSELNET C ADDITIONAL DETAILS ABOUT EXPERIMENTAL SETUP C.1 DATASET Datasets (D). Architectures (M). Although our task is not Neural Architecture Search, we leverage the NASBench101 search space as an architecture pool. The cell-based search space was designed for the benchmarking of various NAS methods. It consists of 423, 624 unique architectures with the following constraints – (1) number of nodes in each cell is at most 7, (2) number of edges in each cell is at most 9, (3) barring the input and output, there are three unique operations, namely 1× 1 convolution, 3× 3 convolution and 3× 3 max-pool. We utilize the architectures from the search space in generating the sequence of embeddings along with sampling architectures for the training and testing of the encoder and datasets for the subset selector. C.2 IMPLEMENTATION DETAILS ABOUT BASELINES Facility Location (FL). We implemented facility location on all the three datasets using the apricot 1 library. The similarity matrix was computed using Euclidean distance between data points, and the objective function was maximized using the naive greedy algorithm. Pruning. It selects a subset from the entire dataset based on the uncertainty of the datapoints while partial training. In our setup, we considered ResNet-18 as a master model, which is trained on each dataset for 5 epochs. 
Post training, the uncertainty measure is calculated based on the probabilities of each class, and the points with the highest uncertainty are considered in the subset. We train the master model at a learning rate of 0.025. Glister and Grad-Match. We implemented GLISTER (Killamsetty et al., 2021b) and Grad-Match (Killamsetty et al., 2021a) using the CORDS library. We trained the models for 50 epochs, using a batch size of 20, and selected the subset after every 10 epochs. The loss was minimized using SGD with a learning rate of 0.01, momentum of 0.9, and weight decay with a regularization constant of 5× 10−4. We used cosine annealing for scheduling the learning rate with Tmax of 50 epochs, and used 10% of the training data as the validation set. Details of the method-specific hyperparameters are stated as follows. Glister uses a greedy selection approach to minimize a bi-level objective function. In our implementation, we used stochastic greedy optimization with learning rate 0.01, applied on the data points of each mini-batch. Online-Glister approximates the objective function with a Taylor series expansion up to an arbitrary number of terms to speed up the process; we used 15 terms in our experiments. Grad-Match applies the orthogonal matching pursuit (OMP) algorithm to the data points of each mini-batch to match the gradient of a subset to that of the entire training/validation set. Here, the learning rate is set to 0.01. The regularization constant in OMP is 1.0 and the algorithm optimizes the objective function within an error margin of 10−4. GraNd. This is an adaptive subset selection strategy in which the norm of the gradient of the loss function is used as a score to rank a data point. The gradient scores are computed after the model has trained on the full dataset for the first few epochs. For the rest of the epochs, the model is trained only on the top-k data points, selected using the gradient scores. In our implementation, we let the model train on the full dataset for the first 5 epochs, and computed the gradient of the loss only with respect to the last fully connected layer. EL2N. When the loss function used to compute the GraNd scores is the cross-entropy loss, the norm of the gradient for a data point x can be approximated by E||p(x)− y||2, where p(x) is the discrete probability distribution over the classes, computed by taking the softmax of the logits, and y is the one-hot encoded true label corresponding to the data point x. Similar to our implementation of GraNd, we computed the EL2N scores after letting the models train on the full data for the first 5 epochs. C.3 IMPLEMENTATION DETAILS ABOUT OUR MODEL GNNα. As we utilize the NAS-Bench-101 space as the underlying set of neural architectures, each computational node in the architecture comprises one of five operations, encoded by the one-hot feature vector fu. Since the set is cell-based, there is an injective mapping between the neural architecture and the cell structure. We aim to produce a sequence of embeddings for the cell, which in turn corresponds to that of the architecture. For each architecture, we use the initial feature fu ∈ R5 in (8) as a five-dimensional one-hot encoding of the operation. This is fed into INITNODE (8) to obtain a 16-dimensional output. Here, INITNODE consists of a 5 × 16 linear, ReLU and 16 × 16 linear layers cascaded with each other. Each of EDGEEMBED and UPDATE consists of a 5× 128 linear-BatchNorm-ReLU block cascaded with a 128× 16 linear layer.
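For concreteness, here is a minimal PyTorch sketch of one such message-passing layer. It is our illustration rather than the authors' code: the INITNODE dimensions follow the text above, but the input dimensions of EDGEEMBED and UPDATE and the exact forward logic are assumptions made so that the sketch runs end to end.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of message passing over a cell DAG (illustrative sketch only)."""
    def __init__(self, op_dim=5, hid=16, inner=128):
        super().__init__()
        # INITNODE: 5 -> 16 linear, ReLU, 16 -> 16 linear (as described in the text)
        self.init_node = nn.Sequential(nn.Linear(op_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        # EDGEEMBED / UPDATE: linear-BatchNorm-ReLU cascaded with a linear layer;
        # here we assume they act on 16-dimensional node embeddings.
        self.edge_embed = nn.Sequential(nn.Linear(hid, inner), nn.BatchNorm1d(inner),
                                        nn.ReLU(), nn.Linear(inner, hid))
        self.update = nn.Sequential(nn.Linear(2 * hid, inner), nn.BatchNorm1d(inner),
                                    nn.ReLU(), nn.Linear(inner, hid))

    def forward(self, f, adj):
        # f: (num_nodes, op_dim) one-hot operation features
        # adj: (num_nodes, num_nodes) float adjacency of the DAG, adj[i, j] = 1 if edge i -> j
        h = self.init_node(f)                 # initial node embeddings
        msgs = self.edge_embed(h)             # messages emitted by every node
        agg = adj.t() @ msgs                  # sum aggregation over in-neighbours of each node
        return self.update(torch.cat([h, agg], dim=-1))

# Toy usage on a 3-node cell: input -> conv -> output.
f = torch.eye(5)[:3]                          # pretend one-hot operation codes
adj = torch.tensor([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
print(MessagePassingLayer()(f, adj).shape)    # torch.Size([3, 16])
```

Stacking K such layers and reading the resulting node embeddings off in BFS order would produce the sequence of embeddings described in the text.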
Moreover, the symmetric aggregator is a sum aggregator. We repeat this layer K times, and each iteration gathers information from k < K hops. After all the iterations, we generate an embedding for each node, and following (You et al., 2018) we use the BFS-tree based node-ordering scheme to generate the sequence of embeddings for each network. The GVAE-based architecture was trained for 10 epochs with the number of recursive layers K set to 5, and the Adam optimizer was used with learning rate of 10−3. The entire search space was considered as the dataset, and a batch-size of 32 was used. Post training, we call the node embeddings collectively as the architecture representation. To train the latent space embeddings, the parameters α are trained in an encoder-decoder fashion using a variational autoencoder. The mean µ and variance σ on the final node embeddings hu are: µ = FCN ([ hu ] u∈Vm ) and σ = exp ( FCN ([ hu ] u∈Vm )) The decoder aims to reconstruct the original cell structure (i.e the nodes and the corresponding operations), which are one-hot encoded. It is modeled using single-layer fully connected networks followed by a sigmoid layer. Model Encoder gβ . The model encoder gβ is essentially a single-head attention block that acts on a sequence of node embeddings Hm = {hu|u ∈ Vm}. The Query, Key and Value matrices, Wquery, Wkey and Wvalue ∈ R16×8, and the matrix WC ∈ R8×16. The fully connected network acting on ζu,1 consists of matrices W1 ∈ R16×64 and W2 ∈ R64×16. All the trainable matrices along with the layer normalizations were implemented using the Linear and LayerNorm functions in Pytorch. The last item of the output sequence ζu,3 is concatenated with the data embedding xi and fed to another 2-layer fully-connected network with hidden dimension 256 and dropout probability of 0.3. The model encoder is trained by minimizing the KL-divergence between gβ(Hm,xi) and mθ∗(xi). We used an AdamW optimizer with learning rate of 10−3, ϵ = 10−8, betas = (0.9, 0.999) and weight decay of 0.005. We also used Cosine Annealing to decay the learning rate, and used gradient clipping with maximum norm set to 5. Figure 6 shows the convergence of the outputs of the model encoder gβ(Hm,xi) with the outputs of the model mθ∗(xi). Neural Network πψ. The inductive model is a three-layer fully-connected neural network with two Leaky ReLU activations and a sigmoid activation after the last layer. The input to πψ is the concatenation (Hm;om,i;xi; yi). The hidden dimensions of the two intermediary layers are 64 and 16, and the final layer is a single neuron that outputs the score corresponding to a data point xi. While training πψ we add a regularization term λ′( ∑ i∈D πψ(Hm,om,i,xi, yi)− |S|) to ensure that nearly |S| samples have high scores out of the entire dataset D. Both the regularization constants λ (in equation 6) and λ′ are set to 0.1. We train the model weights using an Adam optimizer with a learning rate of 0.001. During training, at each iteration we draw instances using Prπ and use the log-derivative trick to compute the gradient of the objective. During each computation step, we use one instance of the ranked list to compute the unbiased estimate of the objective in (6) . D ADDITIONAL EXPERIMENTS D.1 ABLATION STUDY We perform ablation study of SUBSELNET from three perspectives. Impact of ablation of subset sampler. First, we attempt to understand the impact of the subset sampler. 
To that aim, we compare the performance of SUBSELNET against two baselines, namely Bottom-b-loss and Bottom-b-loss+gumbel. In Bottom-b-loss, we sort the data instances based on their predicted loss ℓ(Fϕ(Gm,x), y) and consider the points with the bottom b values. In Bottom-b-loss+gumbel, we add noise sampled from the Gumbel distribution with µ = 0 and β = 0.025, and sort the instances based on these noisy loss values, i.e., ℓ(Fϕ(Gm,x), y) + Gumbel(0, β = 0.025). We observe that Bottom-b-loss and Bottom-b-loss+gumbel do not perform that well in spite of being efficient in terms of time and memory. Figure 7 compares the performance of the variants of SUBSELNET, Bottom-b-loss and Bottom-b-loss+gumbel. Exploring alternative architectures of the model encoder gβ. We consider three alternative architectures to our current model encoder gβ. • FEEDFORWARD: We consider a two-layer fully-connected network, in which we concatenate the mean of Hm with xi. We used ReLU activations between the layers and the hidden dimension was set to 256. We used dropout for regularization with probability 0.3. • DEEPSET: We consider permutation-invariant networks of the form ρ( ∑ h∈H ϕ(h); xi), where ρ and ϕ are neural networks and H is the sequence under consideration. Here, ρ is a fully-connected network with 4 layers, ReLU activations, and a hidden dimension of 64, and ϕ is a two-layer fully-connected network with ReLU activations and output dimension 10. • LSTM: We consider an LSTM-based encoder with a hidden dimension of 16 and dropout probability of 0.2. The output of the last LSTM block is concatenated with xi and fed to a linear layer with hidden dimension 256, dropout probability of 0.3 and ReLU as the activation function. Since the goal of the model encoder is to produce outputs which mimic the architectures, we measure the KL divergence between the outputs of the gold models and of the encoder to quantify the closeness of the output distributions. Table 8 summarizes the performance of the different model encoders. We make the following observations: (1) The Transformer-based model encoder outperforms every other method by a significant margin across both datasets. (2) The BFS sequential modeling of an architecture with transformers leads to better representations that enable closer model approximation compared to other sequential methods like LSTM. (3) Non-sequential model approximators like Feedforward and DeepSets lead to poor model approximation. Performance of subset selectors using different model encoders. We consider three different design choices of the model approximator (ours (Transformer), Feedforward, and LSTM) along with three different subset selection strategies (our subset sampler, top-b instances based on uncertainty, and top-b based on loss), which result in nine different combinations of model approximation and subset selection strategies. We measure uncertainty using the entropy of the predicted distribution over the target classes and report the average test accuracy of the models when they are trained on the underlying pre-selected subset in the following table. We make the following observations: 1. The complete design of our method, i.e., our model approximator (Transformer) + our subset sampler (SUBSELNET), performs best. 2. If we use simple unsupervised subset selection heuristics, e.g., loss- or uncertainty-based subset selection, then our model approximator performs much worse than Feedforward or LSTM, whereas this trend is opposite if we use our subset sampler for selecting the subset.
This may be due to overfitting of the transformer architecture in presence of uncertainty or loss based selection, which is compensated by our subset sampler. D.2 RECOMMENDING MODEL ARCHITECTURE When dealing with a pool of architectures designed for the same task, choosing the correct architecture for the task might be a daunting task - since it is impractical to train all the architectures from scratch. In view of this problem, we show that training on smaller carefully chosen subsets might be beneficial for a quicker alternative to choosing the correct architectures. We first extract the top 15 best performing architectures A∗ having highest accuracy, when trained on full data. We mark them as "gold". Then, we gather top 15 architectures A when trained on the subset provided by our models. Then, we compare A and A∗ using the Kendall tau rank correlation coefficient (KTau) along with Jaccard coefficent |A ∩ A∗|/|A ∪ A∗|. Figure 10 summarizes the results for top three non-adaptive subset selectors in terms of the accuracy, namely - Transductive-SUBSELNET, Inductive-SUBSELNET and FL. We make the following observations: (1) One of our variant outperforms FL in most of the cases in CIFAR10 and CIFAR100. (2) There is no consistent winner between Transductive-SUBSELNET and Inductive-SUBSELNET, although Inductive-SUBSELNET outperforms both Transductive-SUBSELNET and FL consistently in CIFAR100 in terms of the Jaccard coefficient. D.3 AVOIDING UNDERFITTING AND OVERFITTING Since the amount of training data is small, there is a possibility of overfitting. However, the coefficient λ of the entropy regularizer λH(Prπ), can be increased to draw instances from the different regions of the feature space, which in turn can reduce the overfitting. In practice, we tuned λ on the validation set to control such overfitting. We present the accuracies on (training, validation, test) folds for both Transductive-SUBSELNET and Inductive-SUBSELNET in Table 11. We make the following observations: 1. From training to test, in most cases, the decrease in accuracy is ∼ 7%. 2. This small accuracy gap is further reduced from validation to test. Here, in most cases, the decrease in accuracy is ∼ 4%. We perform early stopping using the validation set which acts as an additional regularizer and therefore, the amount of overfitting is significantly low. D.4 PERFORMANCE OF SUBSET SELECTION STRATEGIES ON LARGER SUBSET SIZES We conducted similar experiments as Section 5.1 for CIFAR10 and FMNIST on larger subset sizes (b) of 0.1|D|, 0.2|D|, 0.4|D| and 0.7|D|. For each dataset and the above mentioned subset sizes, we evaluate the decrease in accuracy (ratio of the accuracy on the subset to accuracy on the full dataset), speed-up (ratio of the time taken to train the full dataset to the sum of times taken for subset selection and subset training), and GPU usage in GB-min. We report the variation of these metrics with respect to the subset sizes in the following tables – Note that in the case of CIFAR10, we denote the decrease factors of 0.91-0.96 in green, and the decrease factors of 0.85 - 0.88 in purple. In case of FMNIST, we denote the decrease factors of 0.94-0.97 in green and the decrease factors of 0.90 - 0.93 in purple. We make the following observations: 1. We show a better trade-off between accuracy and time and accuracy and memory than almost all the baselines. 2. 
Observations in CIFAR10: When we tuned the subset sizes, we noticed that SUBSELNET, GLISTER, Grad-Match and EL2N can achieve a comparable decrease factor of 0.91-0.93. In terms of speed-up and memory usage, we see that (a) SUBSELNET achieves a 1.3x speed-up as compared to GLISTER and a 1.1x speed-up as compared to Grad-Match and EL2N, and (b) GLISTER consumes 3.7x GPU memory, Grad-Match consumes 3.1x GPU memory and EL2N consumes 2.5x GPU memory as compared to SUBSELNET. We notice that none of the other subset selection strategies achieve a high-enough accuracy, and we beat them in terms of speed-up and memory usage. Moreover, for the case when the subset selection methods achieve a decrease factor of 0.85-0.88, we see that (a) SUBSELNET achieves a 2.4x speed-up as compared to FacLoc, a 1.8x speed-up as compared to Pruning, a 1.4x speed-up as compared to GLISTER, a 1.2x speed-up as compared to Grad-Match and a 1.1x speed-up as compared to EL2N, and (b) FacLoc consumes 4.8x GPU memory, Pruning consumes 1.7x GPU memory, GLISTER consumes 4x GPU memory, Grad-Match consumes 3.4x GPU memory and EL2N consumes 2.6x GPU memory as compared to SUBSELNET. 3. Observations in FMNIST: When we tuned the subset sizes, we noticed that SUBSELNET, FacLoc, GLISTER, Grad-Match and EL2N can achieve a comparable decrease factor of 0.94-0.97. In terms of speed-up and memory usage, we see that (a) SUBSELNET achieves a 3.8x speed-up as compared to FacLoc, a 1.4x speed-up as compared to GLISTER and Grad-Match, and a 2.2x speed-up as compared to EL2N, and (b) FacLoc consumes 12.5x GPU memory, and GLISTER, Grad-Match and EL2N consume 2.9x GPU memory as compared to SUBSELNET. We notice that none of the other subset selection strategies achieve a high-enough accuracy, and we beat them in terms of speed-up and memory usage. Moreover, for the case when the subset selection methods achieve a decrease factor of 0.90-0.93, we see that (a) SUBSELNET achieves a 7.4x speed-up as compared to FacLoc, a 2.1x speed-up as compared to GLISTER, a 2.9x speed-up as compared to Grad-Match and a 2.1x speed-up as compared to EL2N, and (b) FacLoc consumes 28.5x GPU memory, GLISTER consumes 4.5x GPU memory, Grad-Match consumes 6.1x GPU memory and EL2N consumes 3.7x GPU memory as compared to SUBSELNET. We present the trade-off between accuracy and speed-up, and between accuracy and memory consumption, in Figure 15. E PROS AND CONS OF USING GNNS We have used a GNN in our model encoder to encode the architecture representations into an embedding. We chose a GNN for the task for the following reasons: 1. Message passing between the nodes (which may be the input, the output, or any of the operations) allows us to generate embeddings that capture the contextual structural information of each node, i.e., the embedding of each node captures not only the operation at that node but also, to a large extent, the operations preceding it. 2. It has been shown by Morris et al. (2019) and Xu et al. (2018a) that GNNs are as powerful as the Weisfeiler-Lehman algorithm and thus give a powerful representation of the graph. Thus, we obtain smooth embeddings of the nodes/edges that can effectively distill information from their neighborhoods without significant compression. 3. GNNs embed a model architecture into representations that are independent of the underlying dataset and the model parameters. This is because they operate only on the nodes and edges—the structure of the architecture—and do not use the parameter values or input data. However, the GNN faces the following drawbacks: 1.
GNN uses a symmetric aggregator for message passing over node neighbors to ensure that the representation of any node should be invariant to a permutation of its neighbors. Such a symmetric aggregator renders it a low-pass filter, as shown in (NT & Maehara, 2019), which attenuates important high-frequency signals. 2. We are training one GNN using several architectures. This can lead to the insensitivity of the embedding to change in the architecture. In the context of model architecture, if we change the operation of one node in the architecture (either remove, add or change the operation), then the model’s output can significantly change. However, the embedding of GNN may become immune to such changes, since the GNN is being trained over many architectures. F CHOICE OF SUBMODULAR FUNCTION FOR THE OPTIMIZATION PROBLEM In ( 1) we introduced the original combinatorial problem for subset selection where optimization variable S— the subset of instances — makes the underlying problem combinatorial. Here, we can use submodular functions like Graph-Cut, Facility-Location, and Log-Determinant as the diversity functions, which would allow us to use greedy algorithms to maximize the objective in ( 1). But, as discussed in Section 4.1, this suffers from two bottlenecks — expensive computation issues and lack of generalizability. Therefore, we do not follow these approaches and resort to our proposed approach called SUBSELNET. In contrast to the optimization problem in (1), which was a combinatorial set optimization problem, the optimization problem in SUBSELNET(6) is a continuous optimization problem where the goal is to estimate Prπ. In such a problem, where the probability distribution is the key optimization variable, entropy is a more natural measure of diversity than the other submodular measures.
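For reference, the combinatorial alternative discussed above would look roughly like the following greedy facility-location routine. This is our sketch with toy data, not the authors' implementation; it illustrates the per-architecture discrete optimization that SUBSELNET is designed to avoid.

```python
import numpy as np

def facility_location_greedy(X, b):
    """Greedily maximize the facility-location function
    f(S) = sum_i max_{j in S} sim(i, j), a standard submodular diversity measure."""
    sim = X @ X.T                        # similarity matrix (here: inner products)
    n = sim.shape[0]
    selected, best = [], np.zeros(n)     # best[i] = max similarity of point i to the current subset
    for _ in range(b):
        # marginal gain of adding candidate j, for every j at once
        gains = np.maximum(sim, best[:, None]).sum(axis=0) - best.sum()
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    return selected

# Toy usage: pick 10 diverse points out of 1000 random feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 32))
print(facility_location_greedy(feats, b=10))
```

The greedy loop has to be re-run from scratch for every new architecture and dataset, which is exactly the generalization bottleneck that the differentiable sampler with the entropy regularizer sidesteps.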
1. What is the focus of the paper regarding data selection for an unseen architecture? 2. What are the strengths and weaknesses of the proposed SUBSELNET method? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What questions does the reviewer have regarding the experiment setup and the training cost of the model approximator?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper introduces SUBSELNET to select a subset of training data. SUBSELNET is a non-adaptive method as it is agnostic to the model architecture and the training stage. The authors design a neural model approximator to approximate the output of any given architecture. Two variants of the subset sampler are proposed. SUBSELNET is compared against state-of-the-art methods on three datasets to demonstrate the trade-off advantage between accuracy and speed-up. Strengths And Weaknesses Strengths: (1) Well-motivated problem. Data selection for an unseen architecture is an important problem with real-world impact. Weaknesses: (1) The main problem of this work is the experiment. The authors choose 0.5% and 5% of the training data; however, with such low subsample rates, we notice a dramatic accuracy drop in Figure 1. It is hard to judge the effectiveness of any method with such a significant accuracy drop. The experiment should reflect at what speed-up the model can maintain the same accuracy, essentially focusing on the left side of Figure 1. The authors may consider a larger range of sampling rates (5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%), following the setup of previous work such as GraNd and Glister. Questions: (1) When calculating inference time, do you assume selection and training are conducted sequentially? Is it possible to perform selection in parallel with training when loading a batch of data? (2) What is the training cost of training the model approximator? Have the authors evaluated the approximator error? Clarity, Quality, Novelty And Reproducibility Clarity: Clarity is fine. Figures are generally easy to read. Most descriptions are clear. Quality: Experiment setups can be improved (see Weakness 1). Writing can be polished; there are noticeable grammar mistakes. For example, Section 5.1 Model architectures and baselines: four non-adaptive -> four adaptive. Figure 1 caption: ";and;" -> ", and". Novelty: A neural model approximator using a GNN seems new for data selection.
ICLR
Title Top-label calibration and multiclass-to-binary reductions Abstract We propose a new notion of multiclass calibration called top-label calibration. A classifier is said to be top-label calibrated if the reported probability for the predicted class label—the top-label—is calibrated, conditioned on the top-label. This conditioning is essential for practical utility of the calibration property, since the top-label is always reported and we must condition on what is reported. However, the popular notion of confidence calibration erroneously skips this conditioning. Furthermore, we outline a multiclass-to-binary (M2B) reduction framework that unifies confidence, top-label, and class-wise calibration, among others. As its name suggests, M2B works by reducing multiclass calibration to different binary calibration problems; various types of multiclass calibration can then be achieved using simple binary calibration routines. We instantiate the M2B framework with the well-studied histogram binning (HB) binary calibrator, and prove that the overall procedure is multiclass calibrated without making any assumptions on the underlying data distribution. In an empirical evaluation with four deep net architectures on CIFAR-10 and CIFAR-100, we find that the M2B + HB procedure achieves lower top-label and class-wise calibration error than other approaches such as temperature scaling. Code for this work is available at https://github.com/aigen/df-posthoc-calibration. 1 INTRODUCTION Machine learning models often make probabilistic predictions. The ideal prediction is the true conditional distribution of the output given the input. However, nature never reveals true probability distributions, making it infeasible to achieve this ideal in most situations. Instead, there is significant interest towards designing models that are calibrated, which is often feasible. We motivate the definition of calibration using a standard example of predicting the probability of rain. Suppose a meteorologist claims that the probability of rain on a particular day is 0.7. Regardless of whether it rains on that day or not, we cannot know if 0.7 was the underlying probability of rain. However, we can test if the meteorologist is calibrated in the long run, by checking if on the D days when 0.7 was predicted, it indeed rained on around 0.7D days (and the same is true for other probabilities). This example is readily converted to a formal binary calibration setting. Denote a random (feature, label)-pair as (X, Y) ∈ X × {0, 1}, where X is the feature space. A probabilistic predictor h : X → [0, 1] is said to be calibrated if for every prediction q ∈ [0, 1], Pr(Y = 1 | h(X) = q) = q (almost surely). Arguably, if an ML classification model produces such calibrated scores for the classes, downstream users of the model can reliably use its predictions for a broader set of tasks. Our focus in this paper is calibration for multiclass classification, with L ≥ 3 classes and Y ∈ [L] := {1, 2, . . . , L}. We assume all (training and test) data is drawn i.i.d. from a fixed distribution P, and denote a general point from this distribution as (X, Y) ∼ P. Consider a typical multiclass predictor, h : X → ∆^{L−1}, whose range ∆^{L−1} is the probability simplex in R^L. A natural notion of calibration for h, called canonical calibration, is the following: for every l ∈ [L], P(Y = l | h(X) = q) = q_l (q_l denotes the l-th component of q).
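As a quick numerical illustration of the basic binary definition above (ours, not from the paper), the following sketch groups predictions with the same reported probability and compares them with the empirical frequency of the positive label:

```python
import numpy as np

def empirical_calibration_table(pred_prob, outcome):
    """For each distinct predicted probability q, report the empirical frequency of Y = 1
    and the number of such predictions. A calibrated forecaster has frequency close to q."""
    table = {}
    for q in np.unique(pred_prob):
        mask = pred_prob == q
        table[float(q)] = (float(outcome[mask].mean()), int(mask.sum()))
    return table

# Toy forecaster: predicts 0.3 or 0.7; nature then follows those probabilities exactly.
rng = np.random.default_rng(1)
q = rng.choice([0.3, 0.7], size=50_000)
y = (rng.random(q.shape) < q).astype(int)
print(empirical_calibration_table(q, y))   # roughly {0.3: (0.30, ...), 0.7: (0.70, ...)}
```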
However, canonical calibration becomes infeasible to achieve or verify once L is even 4 or 5 (Vaicenavicius et al., 2019). Thus, there is interest in studying statistically feasible relaxations of the canonical notion, such as confidence calibration (Guo et al., 2017) and class-wise calibration (Kull et al., 2017). In particular, the notion of confidence calibration (Guo et al., 2017) has been popular recently. A model is confidence calibrated if the following is true: “when the reported confidence for the predicted class is q ∈ [0, 1], the accuracy is also q”. In any practical setting, the confidence q is never reported alone; it is always reported along with the actual class prediction l ∈ [L]. One may expect that if a model is confidence calibrated, the following also holds: “when the class l is predicted with confidence q, the probability of the actual class being l is also q”? Unfortunately, this expectation is rarely met—there exist confidence calibrated classifiers for which the latter statement is grossly violated for all classes (Example 1). On the other hand, our proposed notion of top-label calibration enforces the latter statement. It is philosophically more coherent, because it requires conditioning on all relevant reported quantities (both the predicted top label and our confidence in it). In Section 2, we argue further that top-label calibration is a simple and practically meaningful replacement of confidence calibration. In Section 3, we unify top-label, confidence, and a number of other popular notions of multiclass calibration into the framework of multiclass-to-binary (M2B) reductions. The M2B framework relies on the simple observation that each of these notions internally verifies binary calibration claims. As a consequence, each M2B notion of calibration can be achieved by solving a number of binary calibration problems. With the M2B framework at our disposal, all of the rich literature on binary calibration can now be used for multiclass calibration. We illustrate this by instantiating the M2B framework with the binary calibration algorithm of histogram binning or HB (Zadrozny and Elkan, 2001; Gupta and Ramdas, 2021). The M2B + HB procedure achieves state-of-the-art results with respect to standard notions of calibration error (Section 4). Further, we show that our procedure is provably calibrated for arbitrary data-generating distributions. The formal theorems are delayed to Appendices B, C (due to space limitations), but an informal result is presented in Section 4. 2 MODIFYING CONFIDENCE CALIBRATION TO TOP-LABEL CALIBRATION Let c : X → [L] denote a classifier or top-label predictor and h : X → [0, 1] a function that provides a confidence or probability score for the top-label c(X). The predictor (c, h) is said to be confidence calibrated (for the data-generating distribution P) if P(Y = c(X) | h(X)) = h(X). (1) In other words, when the reported confidence h(X) equals p ∈ [0, 1], then the fraction of instances where the predicted label is correct also approximately equals p. Note that for an L-dimensional predictor h : X → ∆^{L−1}, one would use c(·) = arg max_{l∈[L]} h_l(·) and h(·) = h_{c(·)}(·); ties are broken arbitrarily. Then h is confidence calibrated if the corresponding (c, h) satisfies (1). Confidence calibration is most applicable in high-accuracy settings where we trust the label prediction c(x).
For instance, if a high-accuracy cancer-grade-prediction model predicts a patient as having “95% grade III, 3% grade II, and 2% grade I”, we would suggest that the patient undergo an invasive treatment. However, we may want to know (and control) the number of non-grade-III patients that were given this suggestion incorrectly. In other words, is Pr(cancer is not grade III | cancer is predicted to be of grade III with confidence 95%) equal to 5%? It would appear that by focusing on the probability of the predicted label, confidence calibration enforces such control. However, as we illustrate next, confidence calibration fails at this goal by providing a guarantee that is neither practically interpretable, nor actionable. Translating the probabilistic statement (1) into words, we ascertain that confidence calibration leads to guarantees of the form: “if the confidence h(X) in the top-label is 0.6, then the accuracy (frequency with which Y equals c(X)) is 0.6”. Such a guarantee is not very useful. Suppose a patient P is informed (based on their symptoms X) that they are most likely to have a certain disease D with probability 0.6. Further, patient P is told that this score is confidence calibrated. P can now infer the following: “among all patients who have probability 0.6 of having some unspecified disease, the fraction who have that unspecified disease is also 0.6.” However, P is concerned only about disease D, and not about other diseases. That is, P wants to know the probability of having D among patients who were predicted to have disease D with confidence 0.6, not among patients who were predicted to have some disease with confidence 0.6. In other words, P cares about the occurrence of D among patients who were told the same thing that P has been told. It is tempting to wish that the confidence calibrated probability 0.6 has any bearing on what P cares about. However, this faith is misguided, as the above reasoning suggests, and further illustrated through the following example. Example 1. Suppose the instance space is (X, Y) ∈ {a, b} × {1, 2, . . .}. (X can be seen as the random patient, and Y as the disease they are suffering from.) Consider a predictor (c, h) and let the values taken by (X, Y, c, h) be as follows:
Feature x | P(X = x) | Class prediction c(x) | Confidence h(x) | P(Y = c(X) | X = x)
a | 0.5 | 1 | 0.6 | 0.2
b | 0.5 | 2 | 0.6 | 1.0
The table specifies only the probabilities P(Y = c(X) | X = x); the probabilities P(Y = l | X = x), l ≠ c(x), can be set arbitrarily. We verify that (c, h) is confidence calibrated: P(Y = c(X) | h(X) = 0.6) = 0.5 (P(Y = 1 | X = a) + P(Y = 2 | X = b)) = 0.5 (0.2 + 1) = 0.6. However, whether the actual instance is X = a or X = b, the probabilistic claim of 0.6 bears no correspondence with reality. If X = a, h(X) = 0.6 is extremely overconfident since P(Y = 1 | X = a) = 0.2. Contrarily, if X = b, h(X) = 0.6 is extremely underconfident. The reason for the strange behavior above is that the probability P(Y = c(X) | h(X)) is not interpretable from a decision-making perspective. In practice, we never report just the confidence h(X), but also the class prediction c(X) (obviously!). Thus it is more reasonable to talk about the conditional probability of Y = c(X), given what is reported, that is, both c(X) and h(X). We make a small but critical change to (1); we say that (c, h) is top-label calibrated if P(Y = c(X) | h(X), c(X)) = h(X). (2) (See the disambiguating Remark 2 on terminology.)
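The following short script (our illustration, not code from the paper) simulates Example 1 and confirms that the predictor is confidence calibrated overall, yet severely miscalibrated once we also condition on the predicted label:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Example 1: X is 'a' or 'b' with probability 0.5 each; c(a) = 1, c(b) = 2; h(x) = 0.6 always.
x = rng.choice(["a", "b"], size=n)
c = np.where(x == "a", 1, 2)                       # predicted top label
# P(Y = c(X) | X = a) = 0.2 and P(Y = c(X) | X = b) = 1.0
correct = np.where(x == "a", rng.random(n) < 0.2, True)

# Confidence calibration conditions on h(X) only (h is constant at 0.6 here).
print("P(Y = c(X) | h = 0.6)        ~", correct.mean())            # close to 0.6
# Top-label calibration also conditions on c(X).
print("P(Y = c(X) | h = 0.6, c = 1) ~", correct[c == 1].mean())    # close to 0.2
print("P(Y = c(X) | h = 0.6, c = 2) ~", correct[c == 2].mean())    # close to 1.0
```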
Going back to the patient-disease example, top-label calibration would tell patient P the following: “among all patients, who (just like you) are predicted to have disease D with probability 0.6, the fraction who actually have disease D is also 0.6.” Philosophically, it makes sense to condition on what is reported—both the top label and its confidence—because that is what is known to the recipient of the information; and there is no apparent justification for not conditioning on both. A commonly used metric for quantifying the miscalibration of a model is the expected-calibration-error (ECE) metric. The ECE associated with confidence calibration is defined as conf-ECE(c, h) := E_X |P(Y = c(X) | h(X)) − h(X)|. (3) We define top-label-ECE (TL-ECE) in an analogous fashion, but also condition on c(X): TL-ECE(c, h) := E_X |P(Y = c(X) | c(X), h(X)) − h(X)|. (4) Higher values of ECE indicate worse calibration performance. The predictor in Example 1 has conf-ECE(c, h) = 0. However, it has TL-ECE(c, h) = 0.4, revealing its miscalibration. More generally, it can be deduced as a straightforward consequence of Jensen’s inequality that conf-ECE(c, h) is always smaller than TL-ECE(c, h) (see Proposition 4 in Appendix H). As illustrated by Example 1, the difference can be significant. In the following subsection we illustrate that the difference can be significant on a real dataset as well. First, we make a couple of remarks. Remark 1 (ECE estimation using binning). Estimating the ECE requires estimating probabilities conditional on some prediction such as h(x). A common strategy to do this is to bin together nearby values of h(x) using binning schemes (Nixon et al., 2020, Section 2.1), and compute a single estimate for the predicted and true probabilities using all the points in a bin. The calibration method we espouse in this work, histogram binning (HB), produces discrete predictions whose ECE can be estimated without further binning. Based on this, we use the following experimental protocol: we report unbinned ECE estimates while assessing HB, and binned ECE estimates for all other compared methods, which are continuous output methods (deep-nets, temperature scaling, etc). It is commonly understood that binning leads to underestimation of the effective ECE (Vaicenavicius et al., 2019; Kumar et al., 2019). Thus, using unbinned ECE estimates for HB gives HB a disadvantage compared to the binned ECE estimates we use for other methods. (This further strengthens our positive results for HB.) The binning scheme we use is equal-width binning, where the interval [0, 1] is divided into B equal-width intervals. Equal-width binning typically leads to lower ECE estimates compared to adaptive-width binning (Nixon et al., 2020). Remark 2 (Terminology). The term conf-ECE was introduced by Kull et al. (2019). Most works refer to conf-ECE as just ECE (Guo et al., 2017; Nixon et al., 2020; Mukhoti et al., 2020; Kumar et al., 2018). However, some papers refer to conf-ECE as top-label-ECE (Kumar et al., 2019; Zhang et al., 2020), resulting in two different terms for the same concept. We call the older notion conf-ECE, and our definition of top-label calibration/ECE (4) is different from previous ones. (a) Confidence reliability diagram (points marked ‹) and top-label reliability diagram (points marked `) for a ResNet-50 model on the CIFAR-10 dataset; see further details in points (a) and (b) below. The gray bars denote the fraction of predictions in each bin.
The confidence reliability diagram (mistakenly) suggests better calibration than the top-label reliability diagram. (b) Class-wise and zoomed-in version of Figure 1a for bin 6 (top) and bin 10 (bottom); see further details in point (c) below. The ‹ markers are in the same position as Figure 1a, and denote the average predicted and true probabilities. The colored points denote the predicted and true probabilities when seen class-wise. The histograms on the right show the number of test points per class within bins 6 and 10. Figure 1: Confidence reliability diagrams misrepresent the effective miscalibration. 2.1 AN ILLUSTRATIVE EXPERIMENT WITH RESNET-50 ON CIFAR-10 We now compare confidence and top-label calibration using ECE estimates and reliability diagrams (Niculescu-Mizil and Caruana, 2005). This experiment can be seen as a less malignant version of Example 1. Here, confidence calibration is not completely meaningless, but can nevertheless be misleading. Figure 1 illustrates the (test-time) calibration performance of a ResNet-50 model (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009). In the following summarizing points, the pc, hq correspond to the ResNet-50 model. (a) The ‹ markers in Figure 1a form the confidence reliability diagram (Guo et al., 2017), con- structed as follows. First, the hpxq values on the test set are binned into one of B “ 10 bins, r0, 0.1q, r0.1, 0.2q, . . . , r0.9, 1s, depending on the interval to which hpxq belongs. The gray bars in Figure 1a indicate the fraction of hpxq values in each bin—nearly 92% points belong to bin r0.9, 1s and no points belong to bin r0, 0.1q. Next, for every bin b, we plot ‹ “ pconfb, accbq, which are the plugin estimates of E rhpXq | hpXq P Bin bs and P pY “ cpXq | hpXq P Bin bq respectively. The dashed X “ Y line indicates perfect confidence calibration. (b) The ` markers in Figure 1a form the top-label reliability diagram. Unlike the confidence reliability diagram, the top-label reliability diagram shows the average miscalibration across classes in a given bin. For a given class l and bin b, define ∆b,l :“ | pP pY “ cpXq | cpXq “ l, hpXq P Bin bq ´ pE rhpXq | cpXq “ l, hpXq P Bin bs |, where pP , pE denote empirical estimates based on the test data. The overall miscalibration is then ∆b :“ Weighted-averagep∆b,lq “ ř lPrLs pP pcpXq “ l | hpXq P Bin bq ∆b,l. Note that ∆b is always non-negative and does not indicate whether the overall miscalibration occurs due to under- or over-confidence; also, if the absolute-values were dropped from ∆b,l, then ∆b would simply equal accb´ confb. In order to plot ∆b in a reliability diagram, we obtain the direction for the corresponding point from the confidence reliability diagram. Thus for every ‹ “ pconfb, accbq, we plot` “ pconfb, confb`∆bq if accb ą confb and` “ pconfb, confb´∆bq otherwise, for every b. This scatter plot of the `’s gives us the top-label reliability diagram. Figure 1a shows that there is a visible increase in miscalibration when going from confidence calibration to top-label calibration. To understand why this change occurs, Figure 1b zooms into the sixth bin (hpXq P r0.5, 0.6q) and bin 10 (hpXq P r0.9, 1.0s), as described next. (c) Figure 1b displays the class-wise top-label reliability diagrams for bins 6 and 10. 
Note that for bin 6, the ‹ marker is nearly on the X = Y line, indicating that the overall accuracy in that bin matches the average confidence. [Figure 2: estimated conf-ECE and top-label-ECE as a function of the number of bins (5 to 25) for the base model, temperature scaling, and histogram binning, shown for ResNet-50, ResNet-110, Wide-ResNet-26-10, and DenseNet-121.] Figure 2 displays the aggregate effect of the above phenomenon (across bins and classes) through estimates of the conf-ECE and TL-ECE. The precise experimental setup is described in Section 4. These plots display the ECE estimates of the base model, as well as the base model when recalibrated using temperature scaling (Guo et al., 2017) and our upcoming formulation of top-label histogram binning (Section 3). Since ECE estimates depend on the number of bins B used (see Roelofs et al. (2020) for empirical work around this), we plot the ECE estimate for every value B ∈ [5, 25] in order to obtain clear and unambiguous results. We find that the TL-ECE is significantly higher than the conf-ECE for most values of B, the architectures, and the pre- and post-recalibration models. This figure also previews the performance of our forthcoming top-label histogram binning algorithm. Top-label HB has smaller estimated TL-ECE than temperature scaling for most values of B and the architectures. Except for ResNet-50, the conf-ECE estimates are also better. To summarize, top-label calibration captures the intuition of confidence calibration by focusing on the predicted class. However, top-label calibration also conditions on the predicted class, which is always part of the prediction in any practical setting. Further, TL-ECE estimates can be substantially different from conf-ECE estimates. Thus, while it is common to compare predictors based on the conf-ECE, the TL-ECE comparison is more meaningful, and can potentially be different. 3 CALIBRATION ALGORITHMS FROM CALIBRATION METRICS In this section, we unify a number of notions of multiclass calibration as multiclass-to-binary (or M2B) notions, and propose a general-purpose calibration algorithm that achieves the corresponding M2B notion of calibration. The M2B framework yields multiple novel post-hoc calibration algorithms, each of which is tuned to a specific M2B notion of calibration. 3.1 MULTICLASS-TO-BINARY (M2B) NOTIONS OF CALIBRATION In Section 2, we defined confidence calibration (1) and top-label calibration (2). These notions verify calibration claims for the highest predicted probability. Other popular notions of calibration verify calibration claims for other entries in the full L-dimensional prediction vector. A predictor h = (h_1, h_2, . . . , h_L) is said to be class-wise calibrated (Kull et al., 2017) if (class-wise calibration) ∀l ∈ [L], P(Y = l | h_l(X)) = h_l(X). (5) Another recently proposed notion is top-K-confidence calibration (Gupta et al., 2021). For some l ∈ [L], let c^{(l)} : X → [L] denote the l-th highest class prediction, and let h^{(l)} : X → [0, 1] denote the confidence associated with it (c = c^{(1)} and h = h^{(1)} are special cases). For a given K ≤ L,
For a given K ď L, (top-K-confidence calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXqq “ hpkqpXq. (6) As we did in Section 2 for confidenceÑtop-label, top-K-confidence calibration can be modified to the more interpretable top-K-label calibration by further conditioning on the predicted labels: (top-K-label calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXq, cpkqpXqq “ hpkqpXq. (7) Each of these notions reduce multiclass calibration to one or more binary calibration requirements, where each binary calibration requirement corresponds to verifying if the distribution of Y , conditioned on some prediction predpXq, satisfies a single binary calibration claim associated with predpXq. Table 1 illustrates how the calibration notions discussed so far internally verify a number of binary calibration claims, making them M2B notions. For example, for class-wise calibration, for every l P rLs, the conditioning is on predpXq “ hlpXq, and a single binary calibration statement is verified: P pY “ l | predpXqq “ hlpXq. Based on this property, we call each of these notions multiclass-to-binary or M2B notions. The notion of canonical calibration mentioned in the introduction is not an M2B notion. Canonical calibration is discussed in detail in Appendix G. Due to the conditioning on a multi-dimensional prediction, non-M2B notions of calibration are harder to achieve or verify. For the same reason, it is possibly easier for humans to interpret binary calibration claims when taking decisions/actions. 3.2 ACHIEVING M2B NOTIONS OF CALIBRATION USING M2B CALIBRATORS The M2B framework illustrates how multiclass calibration can typically be viewed via a reduction to binary calibration. The immediate consequence of this reduction is that one can now solve multiclass calibration problems by leveraging the well-developed methodology for binary calibration. The upcoming M2B calibrators belong to the standard recalibration or post-hoc calibration setting. In this setting, one starts with a fixed pre-learnt base model g : X Ñ ∆L´1. The base model g can correspond to a deep-net, a random forest, or any 1-v-all (one-versus-all) binary classification model such as logistic regression. The base model is typically optimized for classification accuracy and may not be calibrated. The goal of post-hoc calibration is to use some given calibration data D “ pX1, Y1q, pX2, Y2q, . . . , pXn, Ynq P pX ˆ rLsqn, typically data on which g was not learnt, to recalibrate g. In practice, the calibration data is usually the same as the validation data. To motivate M2B calibrators, suppose we want to verify if g is calibrated on a certain test set, based on a given M2B notion of calibration. Then, the verifying process will split the test data into a number of sub-datasets, each of which will verify one of the binary calibration claims. In Appendix A.2, we argue that the calibration data can also be viewed as a test set, and every step in the verification process can be used to provide a signal for improving calibration. M2B calibrators take the form of wrapper methods that work on top of a given binary calibrator. Denote an arbitrary black-box binary calibrator as At0,1u : r0, 1sXˆpXˆt0, 1uq‹ Ñ r0, 1sX , where the first argument is a mapping X Ñ r0, 1s that denotes a (miscalibrated) binary predicor, and the second argument is a calibration data sequence of arbitrary length. The output is a (better calibrated) binary predictor. 
Examples of At0,1u are histogram binning (Zadrozny and Elkan, 2001), isotonic regression (Zadrozny and Elkan, 2002), and Platt scaling (Platt, 1999). In the upcoming descriptions, we use the indicator function 1 ta “ bu P t0, 1u which takes the value 1 if a “ b, and 0 if a ‰ b. The general formulation of our M2B calibrator is delayed to Appendix A since the description is a bit involved. To ease readability and adhere to the space restrictions, in the main paper we describe the calibrators corresponding to top-label, class-wise, and confidence calibration (Algorithms 1–3). Each of these calibrators are different from the classical M2B calibrator (Algorithm 4) that has been used by Zadrozny and Elkan (2002), Guo et al. (2017), Kull et al. (2019), and most other papers M2B calibrators: Post-hoc multiclass calibration using binary calibrators Input in each case: Binary calibrator At0,1u : r0, 1sX ˆ pX ˆ t0, 1uq‹ Ñ r0, 1sX , base multiclass predictor g : X Ñ ∆L´1, calibration data D “ pX1, Y1q, . . . , pXn, Ynq. Algorithm 1: Confidence calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 D1 Ð tpXi,1 tYi “ cpXiquq : i P rnsu; 4 hÐ At0,1upg,D1q; 5 return pc, hq; Algorithm 2: Top-label calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 for lÐ 1 to L do 4 Dl Ð tpXi,1 tYi “ luq : cpXiq “ lqu; 5 hl Ð At0,1upg,Dlq; 6 end 7 hp¨q Ð hcp¨qp¨q (predict hlpxq if cpxq “ l); 8 return pc, hq; Algorithm 3: Class-wise calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 hl Ð At0,1upgl,Dlq; 5 end 6 return ph1, h2, . . . , hLq; Algorithm 4: Normalized calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 rhl Ð At0,1upgl,Dlq; 5 end 6 Normalize: for every l P rLs, hlp¨q :“ rhlp¨q{ řL k“1 rhkp¨q; 7 return ph1, h2, . . . , hLq; we are aware of, with the most similar one being Algorithm 3. Top-K-label and top-K-confidence calibrators are also explicitly described in Appendix A (Algorithms 6 and 7). Top-label calibration requires that for every class l P rLs, P pY “ l | cpXq “ l, hpXqq “ hpXq. Thus, to achieve top-label calibration, we must solve L calibration problems. Algorithm 2 constructs L datasets tDl : l P rLsu (line 4). The features in Dl are the Xi’s for which cpXiq “ l, and the labels are 1 tYi “ lu. Now for every l P rLs, we calibrate g to hl : X Ñ r0, 1s using Dl and any binary calibrator. The final probabilistic predictor is hp¨q “ hcp¨qp¨q (that is, it predicts hlpxq if cpxq “ l). The top-label predictor c does not change in this process. Thus the accuracy of pc, hq is the same as the accuracy of g irrespective of which At0,1u is used. Unlike the top-label calibrator, the confidence calibrator merges all classes together into a single dataset D1 “ Ť lPrLsDl. To achieve class-wise calibration, Algorithm 3 also solves L calibration problems, but these correspond to satisfying P pY “ l | hlpXqq “ hlpXq. Unlike top-label calibration, the dataset Dl for class-wise calibration contains all the Xi’s (even if cpXiq ‰ l), and hl is passed to At0,1u instead of h. Also, unlike confidence calibration, Yi is replaced with 1 tYi “ lu instead of 1 tYi “ cpXiqu. The overall process is similar to reducing multiclass classification to L 1-v-all binary classification problem, but our motivation is intricately tied to the notion of class-wise calibration. 
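To illustrate how little machinery the top-label calibrator needs, here is a minimal Python sketch of that M2B wrapper. It follows the per-class data splitting described above for Algorithm 2, but the function names and the handling of classes that are never predicted on the calibration data are our own illustrative choices.

```python
import numpy as np

def top_label_calibrate(g_probs, X, Y, binary_calibrator, num_classes):
    """M2B wrapper for top-label calibration (a sketch in the spirit of Algorithm 2).

    g_probs            : function x -> probability vector of length num_classes (base model g)
    binary_calibrator  : any binary calibrator, e.g. histogram binning or Platt scaling
    Returns (c, h): the unchanged top-label predictor and a recalibrated confidence function.
    """
    c = lambda x: int(np.argmax(g_probs(x)))        # top-label predictor (never changed)
    g_top = lambda x: float(np.max(g_probs(x)))     # top-label confidence of the base model
    h_per_class = {}
    for l in range(num_classes):
        # Only calibration points whose predicted class is l, with binary labels 1{Y = l}.
        D_l = [(x, int(y == l)) for x, y in zip(X, Y) if c(x) == l]
        if D_l:                                     # skip classes never predicted on this data
            h_per_class[l] = binary_calibrator(g_top, D_l)
    h = lambda x: h_per_class[c(x)](x) if c(x) in h_per_class else g_top(x)
    return c, h
```

Because the top-label predictor c is returned unchanged, the accuracy of the calibrated model is identical to that of the base model, exactly as noted in the text.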
Most popular empirical works that have discussed binary calibrators for multiclass calibration have done so using the normalized calibrator, Algorithm 4. This is almost identical to Algorithm 3, except that there is an additional normalization step (line 6 of Algorithm 4). This normalization was first proposed by Zadrozny and Elkan (2002, Section 5.2), and has been used unaltered by most other works1 where the goal has been to simply compare direct multiclass calibrators such as temperature scaling, Dirichlet scaling, etc., to a calibrator based on binary methods (for instance, see Section 4.2 of Guo et al. (2017)). In contrast to these papers, we investigate multiple M2B reductions in an effort to identify the right reduction of multiclass calibration to binary calibration. To summarize, the M2B characterization immediately yields a novel and different calibrator for every M2B notion. In the following section, we instantiate M2B calibrators on the binary calibrator of histogram binning (HB), leading to two new algorithms: top-label-HB and class-wise-HB, that achieve strong empirical results and satisfy distribution-free calibration guarantees. 1the only exception we are aware of is the recent work of Patel et al. (2021) who also suggest skipping normalization (see their Appendix A1); however they use a common I-Max binning scheme across classes, whereas in Algorithm 3 the predictor hl for each class is learnt completely independently of other classes 4 EXPERIMENTS: M2B CALIBRATION WITH HISTOGRAM BINNING Histogram binning or HB was proposed by Zadrozny and Elkan (2001) with strong empirical results for binary calibration. In HB, a base binary calibration model g : X Ñ r0, 1s is used to partition the calibration data into a number of bins so that each bin has roughly the same number of points. Then, for each bin, the probability of Y “ 1 is estimated using the empirical distribution on the calibration data. This estimate forms the new calibrated prediction for that bin. Recently, Gupta and Ramdas (2021) showed that HB satisfies strong distribution-free calibration guarantees, which are otherwise impossible for scaling methods (Gupta et al., 2020). Despite these results for binary calibration, studies for multiclass calibration have reported that HB typically performs worse than scaling methods such as temperature scaling (TS), vector scaling (VS), and Dirichlet scaling (DS) (Kull et al., 2019; Roelofs et al., 2020; Guo et al., 2017). In our experiments, we find that the issue is not HB but the M2B wrapper used to produce the HB baseline. With the right M2B wrapper, HB beats TS, VS, and DS. A number of calibrators have been proposed recently (Zhang et al., 2020; Rahimi et al., 2020; Patel et al., 2021; Gupta et al., 2021), but VS and DS continue to remain strong baselines which are often close to the best in these papers. We do not compare to each of these calibrators; our focus is on the M2B reduction and the message that the baselines dramatically improve with the right M2B wrapper. We use three metrics for comparison: the first is top-label-ECE or TL-ECE (defined in (4)), which we argued leads to a more meaningful comparison compared to conf-ECE. Second, we consider the more stringent maximum-calibration-error (MCE) metric that assesses the worst calibration across predictions (see more details in Appendix E.3). For top-label calibration MCE is given by TL-MCEpc, hq :“ maxlPrLs suprPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r|. 
To assess classwise calibration, we use class-wise-ECE defined as the average calibration error across classes: CW-ECEpc,hq :“ L´1 řL l“1 EX |P pY “ l | hlpXqq ´ hlpXq|. All ECE/MCE estimation is performed as described in Remark 1. For further details, see Appendix E.2. Formal algorithm and theoretical guarantees. Top-label-HB (TL-HB) and class-wise-HB (CWHB) are explicitly stated in Appendices B and C respectively; these are instantiations of the top-label calibrator and class-wise calibrator with HB. N-HB is the the normalized calibrator (Algorithm 4) with HB, which is the same as CW-HB, but with an added normalization step. In the Appendix, we extend the binary calibration guarantees of Gupta and Ramdas (2021) to TL-HB and CW-HB (Theorems 1 and 2). We informally summarize one of the results here: if there are at least k calibration points-per-bin, then the expected-ECE is bounded as: E r(TL-) or (CW-) ECEs ď a 1{2k, for TL-HB and CW-HB respectively. The outer E above is an expectation over the calibration data, and corresponds to the randomness in the predictor learnt on the calibration data. Note that the ECE itself is an expected error over an unseen i.i.d. test-point pX,Y q „ P . Experimental details. We experimented on the CIFAR-10 and CIFAR-100 datasets, which have 10 and 100 classes each. The base models are deep-nets with the following architectures: ResNet50, Resnet-110, Wide-ResNet-26-10 (WRN) (Zagoruyko and Komodakis, 2016), and DenseNet121 (Huang et al., 2017). Both CIFAR datasets consist of 60K (60,000) points, which are split as 45K/5K/10K to form the train/validation/test sets. The validation set was used for post-hoc calibration and the test set was used for evaluation through ECE/MCE estimates. Instead of training new models, we used the pre-trained models of Mukhoti et al. (2020). We then ask: “which post-hoc calibrator improves the calibration the most?” We used their Brier score and focal loss models in our experiments (Mukhoti et al. (2020) report that these are the empirically best performing loss functions). All results in the main paper are with Brier score, and results with focal loss are in Appendix E.4. Implementation details for TS, VS, and DS are in Appendix E. Findings. In Table 2, we report the binned ECE and MCE estimates when B “ 15 bins are used by HB, and for ECE estimation. We make the following observations: (a) For TL-ECE, N-HB is the best performing method for both CIFAR-10 and CIFAR-100. While most methods perform similarly across architectures for CIFAR-10, there is high variation in CIFAR-100. DS is the worst performing method on CIFAR-100, but TL-HB also performs poorly. We believe that this could be because the data splitting scheme of the TL-calibrator (line 4 of Algorithm 2) splits datasets across the predicted classes, and some classes in CIFAR-100 occur very rarely. This is further discussed in Appendix E.6. (b) For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform slightly better than TL-HB. Since HB ensures that each bin gets roughly the same number of points, the predictions are well calibrated across bins, leading to smaller TL-MCE. A similar observation was also made by Gupta and Ramdas (2021). (c) For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures. The N-HB method which has been used in many CW-ECE baseline experiments performs terribly. In other words, skipping the normalization step leads to a large improvement in CW-ECE. 
This observation is one of our most striking findings. To shed further light on this, we note that the distribution-free calibration guarantees for CW-HB shown in Appendix C no longer hold post-normalization. Thus, both our theory and experiments indicate that skipping normalization improves CW-ECE performance. Additional experiments in the Appendix. In Appendix E.5, we report each of the results in Tables 2 and 3 with the number of bins taking every value in the range r5, 25s. Most observations remain the same under this expanded study. In Appendix B.2, we consider top-label calibration for the class imbalanced COVTYPE-7 dataset, and show that TL-HB adapts to tail/infrequent classes. 5 CONCLUSION We make two contributions to the study of multiclass calibration: (i) defining the new notion of top-label calibration which enforces a natural minimal requirement on a multiclass predictor—the probability score for the top class prediction should be calibrated; (ii) developing a multiclass-tobinary (M2B) framework which posits that various notions of multiclass calibration can be achieved via reduction to binary calibration, balancing practical utility with statistically tractability. Since it is important to identify appropriate notions of calibration in any structured output space (Kuleshov et al., 2018; Gneiting et al., 2007), we anticipate that the philosophy behind the M2B framework could find applications in other structured spaces. 6 REPRODUCIBILITY STATEMENT Some reproducibility desiderata, such as external code and libraries that were used are summarized in Appendix E.1. All code to generate results with the CIFAR datasets is attached in the supplementary material. Our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ (corresponding to ‘brier score’ and ‘focal loss adaptive 53’ at the above link). By avoiding training of new deep-net models with multiple hyperparameters, we also consequently avoided selection biases that inevitably creep in due to test-data-peeking. The predictions of the pre-trained models were obtained using the code at https://github.com/torrvision/focal calibration. 7 ETHICS STATEMENT Post-hoc calibration is a post-processing step that can be applied on top of miscalibrated machine learning models to increase their reliability. As such, we believe our work should improve the transparency and explainability of machine learning models. However, we outline a few limitations. Post-hoc calibration requires keeping aside a fresh, representative dataset, that was not used for training. If this dataset is too small, the resulting calibration guarantee can be too weak to be meaningful in practice. Further, if the test data distribution shifts in significant ways, additional corrections may be needed to recalibrate (Gupta et al., 2020; Podkopaev and Ramdas, 2021). A well calibrated classifier is not necessarily an accurate or a fair one, and vice versa (Kleinberg et al., 2017). Deploying calibrated models in critical applications like medicine, criminal law, banking, etc. does not preclude the possibility of the model being frequently wrong or unfair. ACKNOWLEDGEMENTS This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562 (Towns et al., 2014). Specifically, it used the Bridges-2 system, which is supported by NSF award number ACI-1928147, at the Pittsburgh Supercomputing Center (PSC). 
CG’s research was supported by the generous Bloomberg Data Science Ph.D. Fellowship. CG would like to thank Saurabh Garg and Youngseog Chung for interesting discussions, and Viveka Kulharia for help with the focal calibration repository. Finally, we thank Zack Lipton, the ICLR reviewers, and the ICLR area chair, for excellent feedback that helped improve the writing of the paper. A ADDENDUM TO SECTION 3 “CALIBRATION ALGORITHMS FROM CALIBRATION METRICS” In Section 3, we introduced the concept of M2B calibration, and showed that popular calibration notions are in fact M2B notions (Table 1). We showed how the calibration notions of top-label, class-wise, and confidence calibration can be achieved using a corresponding M2B calibrator. In the following subsection, we present the general-purpose wrapper Algorithm 5 that can be used to derive an M2B calibrator from any given M2B calibration notion that follows the rubric specified by Table 1. In Appendix A.2, we illustrate the philosophy of M2B calibration using a simple example with a dataset that contains 6 points. This example also illustrates the top-label-calibrator, the classwise-calibrator, and the confidence-calibrator. A.1 GENERAL-PURPOSE M2B CALIBRATOR Denote some M2B notion of calibration as C. Suppose C corresponds toK binary calibration claims. The outer for-loop in Algorithm 5, runs over each such claim in C. For example, for class-wise calibration, K “ L and for confidence and top-label calibration, K “ 1. Corresponding to each claim, there is a probability-predictor that the conditioning is to be done on, such as g or gl or gpkq. Additionally, there may be conditioning on the label predictor such as c or cpkq. These are denoted as prc, rgq in Algorithm 5. For confidence and top-label calibration, rc “ c, the top-label-confidence. For class-wise calibration, when rg “ gl, we have rcp¨q “ l. If there is no label conditioning in the calibration notion, such as in confidence, top-K-confidence, and class-wise calibration, then we enter the if-condition inside the for-loop. Here hk is learnt using a single calibration dataset and a single call to At0,1u. Otherwise, if there is label conditioning, such as in top-label and top-K-label calibration, we enter the else-condition, where we learn a separate hk,l for every l P rLs, using a different part of the dataset Dl in each case. Then hkpxq equals hk,lpxq if rcpxq “ l. Finally, since C is verifying a sequence of claims, the output of Algorithm 5 is a sequence of predictors. Each original prediction prc, rgq corresponding to the C is replaced with prc, hkq. This is the output of the M2B calibrator. Note that the rc values are not changed. This output appears abstract, but normally, it can be represented in an interpretable way. For example, for class-wise calibration, the output is just a sequence of predictors, one for each class: ph1, h2, . . . , hLq. This general-purpose M2B calibrators can be used to achieve any M2B calibration notion: toplabel calibration (Algorithm 2), class-wise calibration (Algorithm 3), confidence calibration (Algorithm 1), top-K-label calibration (Algorithm 6), and top-K-confidence calibration (Algorithm 7). A.2 AN EXAMPLE TO ILLUSTRATE THE PHILOSOPHY OF M2B CALIBRATION Figure 3a shows the predictions of a given base model g on a given dataset D. Suppose D is a test set, and we are testing confidence calibration. Then the only predictions that matter are the top-predictions corresponding to the shaded values. 
These are stripped out and shown in Figure 3b, in the g(·) row. Note that the indicator 1{Y = c(·)} is sufficient to test confidence calibration and, given this, the c(X) are not needed. Thus the second row in Figure 3b only shows these indicators.

Algorithm 8: Top-label histogram binning
Input: Base multiclass predictor g, calibration data D = (X_1, Y_1), . . . , (X_n, Y_n)
Hyperparameters: # points per bin k ∈ N (say 50), tie-breaking parameter δ > 0 (say 10^{-10})
Output: Top-label calibrated predictor (c, h)
1: c ← classifier or top-class based on g;
2: g ← top-class-probability based on g;
3: for l ← 1 to L do
4:     D_l ← {(X_i, 1{Y_i = l}) : c(X_i) = l} and n_l ← |D_l|;
5:     h_l ← Binary-histogram-binning(g, D_l, ⌊n_l/k⌋, δ);
6: end
7: h(·) ← h_{c(·)}(·);
8: return (c, h);

Verifying top-label calibration is similar (Figure 3c), but in addition to the predictions g(·), we also retain the values of c(·). Thus the g(·) and 1{Y = c(·)} are shown, but split across the 4 classes. Class-wise calibration requires access to all the predictions; however, each class is considered separately, as indicated by Figure 3d. Canonical calibration looks at the full prediction vector in each case. However, in doing so, it becomes unlikely that g(x) = g(y) for any x, y, since the number of values that g can take is now exponential. Let us turn this around and suppose that D were a calibration set instead of a test set. We argue that D should be used in the same way, whether testing or calibrating. Thus, if confidence calibration is to be achieved, we should focus on the (g, 1{Y = c(·)}) corresponding to g. If top-label calibration is to be achieved, we should use the (c, g) values. If class-wise calibration is to be achieved, we should look at each g_l separately and solve L different problems. Finally, for canonical calibration, we must look at the entire g vector as a single unit. This is the core philosophy behind M2B calibrators: if binary claims are being verified, solve binary calibration problems.

B DISTRIBUTION-FREE TOP-LABEL CALIBRATION USING HISTOGRAM BINNING

In this section, we formally describe histogram binning (HB) with the top-label calibrator (Algorithm 2) and provide methodological insights through theory and experiments.

B.1 FORMAL ALGORITHM AND THEORETICAL GUARANTEES

Algorithm 8 describes the top-label calibrator formally, using HB as the binary calibration algorithm. The function called in line 5 is Algorithm 2 of Gupta and Ramdas (2021). The first argument in the call is the top-label confidence predictor, the second argument is the dataset to be used, the third argument is the number of bins to be used, and the fourth argument is a tie-breaking parameter (described shortly). While previous empirical works on HB fixed the number of bins per class, the analysis of Gupta and Ramdas (2021) suggests that a more principled way of choosing the number of bins is to fix the number of points per bin. This is the parameter k of Algorithm 8. Given k, the number of bins is decided separately for every class as ⌊n_l/k⌋, where n_l is the number of points predicted as class l. This choice is particularly relevant for top-label calibration since n_l can be highly non-uniform (we illustrate this empirically in Section B.2).
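To make the call in line 5 concrete, the following is a minimal sketch of a uniform-mass binary histogram-binning routine: the calibration scores are split into bins holding roughly the same number of points, and each score is replaced by the empirical frequency of the positive label in its bin. This is an illustrative stand-in for Algorithm 2 of Gupta and Ramdas (2021), not that algorithm verbatim; in particular, the δ tie-breaking noise is omitted here and discussed next.

```python
import numpy as np

def binary_histogram_binning(scores, labels, n_bins):
    """Uniform-mass binary histogram binning (illustrative sketch).

    scores : (m,) uncalibrated scores g(X_i) in [0, 1].
    labels : (m,) binary labels, here 1{Y_i = l} for the class at hand.
    n_bins : number of bins, e.g. floor(n_l / k) for k points per bin.
    Returns a function mapping new scores to recalibrated probabilities.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # bin edges at empirical quantiles, so each bin holds roughly m / n_bins points
    inner = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    edges = np.quantile(scores, inner) if n_bins > 1 else np.array([])
    bin_ids = np.searchsorted(edges, scores, side="right")
    bin_means = np.array([
        labels[bin_ids == b].mean() if np.any(bin_ids == b) else 0.5
        for b in range(n_bins)
    ])  # empirical frequency of the positive label per bin

    def h(new_scores):
        ids = np.searchsorted(edges, np.asarray(new_scores, dtype=float), side="right")
        return bin_means[ids]

    return h
```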
The tie-breaking parameter δ can be arbitrarily small (like 10^{-10}), and its significance is mostly theoretical—it is used to ensure that outputs of different bins are not exactly identical by chance, so that conditioning on a calibrated probability output is equivalent to conditioning on a bin; this leads to a cleaner theoretical guarantee. HB recalibrates g to a piecewise-constant function h that takes one value per bin. Consider a specific bin b; the h value for this bin is computed as the average of the indicators {1{Y_i = c(X_i)} : X_i ∈ Bin b}. This is an estimate of the bias of the bin, P(Y = c(X) | X ∈ Bin b). A concentration inequality can then be used to bound the deviation between the estimate and the true bias to prove distribution-free calibration guarantees. In the forthcoming Theorem 1, we show high-probability and in-expectation bounds on the TL-ECE of HB. Additionally, we show marginal and conditional top-label calibration bounds, defined next. These notions were proposed in the binary calibration setting by Gupta et al. (2020) and Gupta and Ramdas (2021). In the definition below, A refers to any algorithm that takes as input calibration data D and an initial classifier g to produce a top-label predictor c and an associated probability map h. Algorithm 8 is an example of A.

Definition 1 (Marginal and conditional top-label calibration). Let ε, α ∈ (0, 1) be some given levels of approximation and failure respectively. An algorithm A : (g, D) ↦ (c, h) is
(a) (ε, α)-marginally top-label calibrated if for every distribution P over X × [L],
P( |P(Y = c(X) | c(X), h(X)) − h(X)| ≤ ε ) ≥ 1 − α.  (8)
(b) (ε, α)-conditionally top-label calibrated if for every distribution P over X × [L],
P( ∀ l ∈ [L], r ∈ Range(h), |P(Y = c(X) | c(X) = l, h(X) = r) − r| ≤ ε ) ≥ 1 − α.  (9)

To clarify, all probabilities are taken over the test point (X, Y) ∼ P, the calibration data D ∼ P^n, and any other inherent algorithmic randomness in A; these are all implicit in (c, h) = A(D, g). Marginal calibration asserts that with high probability, on average over the distribution of D, X, the probability P(Y = c(X) | c(X), h(X)) is at most ε away from h(X). In comparison, TL-ECE is the average of these deviations over X. Marginal calibration may be a more appropriate metric for calibration than TL-ECE if we are somewhat agnostic to probabilistic errors less than some fixed threshold ε (like 0.05). Conditional calibration is a strictly stronger definition that requires the deviation to be at most ε for every possible prediction (l, r), including rare ones, not just on average over predictions. This may be relevant in medical settings where we want the prediction on every patient to be reasonably calibrated. Algorithm 8 satisfies the following calibration guarantees.

Theorem 1. Fix hyperparameters δ > 0 (arbitrarily small) and points per bin k ≥ 2, and assume n_l ≥ k for every l ∈ [L]. Then, for any α ∈ (0, 1), Algorithm 8 is (ε_1, α)-marginally and (ε_2, α)-conditionally top-label calibrated for
ε_1 = √( log(2/α) / (2(k − 1)) ) + δ, and ε_2 = √( log(2n/(kα)) / (2(k − 1)) ) + δ.  (10)
Further, for any distribution P over X × [L], we have P(TL-ECE(c, h) ≤ ε_2) ≥ 1 − α, and E[TL-ECE(c, h)] ≤ √(1/2k) + δ.

The proof in Appendix H is a multiclass top-label adaptation of the guarantee in the binary setting by Gupta and Ramdas (2021). The Õ(1/√k) dependence of the bound relies on Algorithm 8 delegating at least k points to every bin. Since δ can be chosen to be arbitrarily small, setting k = 50 gives roughly E_D[TL-ECE(h)] ≤ 0.1.
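As a quick numerical illustration of equation (10) and the in-expectation bound (our own arithmetic, evaluating the expressions exactly as stated above; the function is hypothetical):

```python
import math

def theorem1_bounds(k, alpha, n, delta=1e-10):
    """Evaluate the Theorem 1 quantities for k points per bin."""
    eps1 = math.sqrt(math.log(2 / alpha) / (2 * (k - 1))) + delta            # marginal, eq. (10)
    eps2 = math.sqrt(math.log(2 * n / (k * alpha)) / (2 * (k - 1))) + delta  # conditional, eq. (10)
    expected_tl_ece = math.sqrt(1 / (2 * k)) + delta                         # E[TL-ECE] bound
    return eps1, eps2, expected_tl_ece

# With k = 50 points per bin, the expected TL-ECE bound is sqrt(1/100) = 0.1,
# matching the remark above (alpha and n only affect the high-probability bounds).
print(theorem1_bounds(k=50, alpha=0.1, n=5000))
```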
Based on this, we suggest setting k ∈ [50, 150] in practice.

B.2 TOP-LABEL HISTOGRAM BINNING ADAPTS TO CLASS-IMBALANCED DATASETS

The principled methodology of fixing the number of points per bin reaps practical benefits. Figure 4 illustrates this through the performance of HB for the class-imbalanced COVTYPE-7 dataset (Blackard and Dean, 1999), with class ratio approximately 36% for class 1 and 49% for class 2. The entire dataset has 581012 points, which is divided into train-test in the ratio 70:30. Then, 10% of the training points are held out for calibration (n = |D| = 40671). The base classifier is a random forest (RF) trained on the remaining training points (it achieves around 95% test accuracy). The RF is then recalibrated using HB. The top-label reliability diagrams in Figure 4a illustrate that the original RF (in orange) is underconfident on both the most likely and least likely classes. Additional figures in Appendix F show that the RF is always underconfident no matter which class is predicted as the top-label. HB (in green) recalibrates the RF effectively across all classes. Validity plots (Gupta and Ramdas, 2021) estimate how the LHS of condition (8), denoted as V(ε), varies with ε. We observe that for all ε, V(ε) is higher for HB. The rightmost barplot compares the estimated TL-ECE for all classes, and also shows the class proportions. While the original RF is significantly miscalibrated for

[Figure 4 panels: per-class TL-ECE barplots with class ratios, and reliability diagrams and validity plots for classes 2 and 4, comparing the random forest and histogram binning.]

(a) Top-label histogram binning (Algorithm 8) with k = 100 points per bin. Class 4 has only 183 calibration points. Algorithm 8 adapts and uses only a single bin to ensure that the TL-ECE on class 4 is comparable to the TL-ECE on class 2. Overall, the random forest classifier has significantly higher TL-ECE for the least likely classes (4, 5, and 6), but the post-calibration TL-ECE using binning is quite uniform.
(b) Histogram binning with B = 50 bins for every class. Compared to Figure 4a, the post-calibration TL-ECE for the most likely classes decreases while the TL-ECE for the least likely classes increases.
Figure 4: Recalibration of a random forest using histogram binning on the class-imbalanced COVTYPE-7 dataset (class 2 is roughly 100 times likelier than class 4).
By ensuring a fixed number of calibration points per bin, Algorithm 8 obtains relatively uniform top-label calibration across classes (Figure 4a). In comparison, if a fixed number of bins are chosen for all classes, the performance deteriorates for the least likely classes (Figure 4b). the less likely classes, HB has a more uniform miscalibration across classes. Figure 4b considers a slightly different HB algorithm where the number of points per class is not adapted to the number of times the class is predicted, but is fixed beforehand (this corresponds to replacing tnl{ku in line 5 of Algorithm 8 with a fixed B P N). While even in this setting there is a drop in the TL-ECE compared to the RF model, the final profile is less uniform compared to fixing the number of points per bin. The validity plots and top-label reliability diagrams for all the 7 classes are reported in Figure 9 in Appendix F, along with some additional observations. C DISTRIBUTION-FREE CLASS-WISE CALIBRATION USING HISTOGRAM BINNING In this section, we formally describe histogram binning (HB) with the class-wise-calibrator (Algorithm 3) and provide theoretical guarantees for it. The overall procedure is called class-wise-HB. Further details and background on HB are contained in Appendix B, where top-label-HB is described. C.1 FORMAL ALGORITHM To achieve class-wise calibration using binary routines, we learn each component function hl in a 1- v-all fashion as described in Algorithm 3. Algorithm 9 contains the pseudocode with the underlying routine as binary HB. To learn hl, we use a dataset Dl, which unlike top-label HB (Algorithm 8), contains Xi even if cpXiq ‰ l. However the Yi is replaced with 1 tYi “ lu. The number of points per bin kl can be different for different classes, but generally one would set k1 “ . . . “ kL “ k P N. Larger values of kl will lead to smaller εl and δl in the guarantees, at loss of sharpness since the number of bins tn{klu would be smaller. Algorithm 9: Class-wise histogram binning Input: Base multiclass predictor g : X Ñ ∆L´1, calibration data D “ pX1, Y1q, . . . , pXn, Ynq Hyperparameter: # points per bin k1, k2, . . . , kl P NL (say each kl “ 50), tie-breaking parameter δ ą 0 (say 10´10) Output: L class-wise calibrated predictors h1, h2, . . . , hL 1 for lÐ 1 to L do 2 Dl Ð tpXi,1 tYi “ luq : i P rnsqu; 3 hl Ð Binary-histogram-binningpgl,Dl, tn{klu , δq; 4 end 5 return ph1, h2, . . . , hLq; C.2 CALIBRATION GUARANTEES A general algorithm A for class-wise calibration takes as input calibration data D and an initial classifier g to produce an approximately class-wise calibrated predictor h : X Ñ r0, 1sL. Define the notation ε “ pε1, ε2, . . . , εLq P p0, 1qL and α “ pα1, α2, . . . , αLq P p0, 1qL. Definition 2 (Marginal and conditional class-wise calibration). Let ε,α P p0, 1qL be some given levels of approximation and failure respectively. An algorithm A : pg,Dq ÞÑ h is (a) pε,αq-marginally class-wise calibrated if for every distribution P over X ˆ rLs and for every l P rLs P ´ |P pY “ l | hlpXqq ´ hlpXq| ď εl ¯ ě 1´ αl. (11) (b) pε,αq-conditionally class-wise calibrated if for every distribution P over X ˆ rLs and for every l P rLs, P ´ @r P Rangephlq, |P pY “ l | hlpXq “ rq ´ r| ď εl ¯ ě 1´ αl. (12) Definition 2 requires that each hl is pεl, αlq calibrated in the binary senses defined by Gupta et al. (2021, Definitions 1 and 2). From Definition 2, we can also uniform bounds that hold simultaneously over every l P rLs. Let α “ řL l“1 αl and ε “ maxlPrLs εl. 
Then (11) implies P ´ @l P rLs, |P pY “ l | hlpXqq ´ hlpXq| ď ε ¯ ě 1´ α, (13) and (12) implies P ´ @l P rLs, r P Rangephlq, |P pY “ l | hlpXq “ rq ´ r| ď ε ¯ ě 1´ α. (14) The choice of not including the uniformity over L in Definition 2 reveals the nature of our class-wise HB algorithm and the upcoming theoretical guarantees: (a) we learn the hl’s separately for each l and do not combine the learnt functions in any way (such as normalization), (b) we do not combine the calibration inequalities for different rLs in any other way other than a union bound. Thus the only way we can show (13) (or (14)) is by using a union bound over (11) (or (12)). We now state the distribution-free calibration guarantees satisfied by Algorithm 9. Theorem 2. Fix hyperparameters δ ą 0 (arbitrarily small) and points per bin k1, k2, . . . , kl ě 2, and assume nl ě kl for every l P rLs. Then, for every l P rLs, for any αl P p0, 1q, Algorithm 9 is pεp1q,αq-marginally and pεp2q,αq-conditionally class-wise calibrated with ε p1q l “ d logp2{αlq 2pkl ´ 1q ` δ, and εp2ql “ d logp2n{klαlq 2pkl ´ 1q ` δ. (15) Further, for any distribution P over X ˆ rLs, (a) P pCW-ECEpc, hq ď maxlPrLs ε p2q l q ě 1´ ř lPrLs αl, and (b) E rCW-ECEpc, hqs ď maxlPrLs a 1{2kl ` δ. Theorem 2 is proved in Appendix H. The proof follows by using the result of Gupta and Ramdas (2021, Theorem 2), derived in the binary calibration setting, for each hl separately. Gupta and Ramdas (2021) proved a more general result for general `p-ECE bounds. Similar results can also be derived for the suitably defined `p-CW-ECE. As discussed in Section 3.2, unlike previous works (Zadrozny and Elkan, 2002; Guo et al., 2017; Kull et al., 2019), Algorithm 9 does not normalize the hl’s. We do not know how to derive Theorem 2 style results for a normalized version of Algorithm 9. D FIGURES FOR APPENDIX E Appendix E begins on page 23. The relevant figures for Appendix E are displayed on the following pages. E ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS FOR CIFAR-10 AND CIFAR-100 We present additional details and results to supplement the experiments with CIFAR-10 and CIFAR100 in Sections 2 and 4 of the main paper. E.1 EXTERNAL LIBRARIES USED All our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ and used along with the code at https://github.com/torrvision/focal calibration to obtain base predictions. We focused on the models trained with Brier score and focal loss, since it was found to perform the best for calibration. All reports in the main paper are with the Brier score; in Appendix E.4, we report corresponding results with focal loss. We also used the code at https://github.com/torrvision/focal calibration for temperature scaling (TS). For vector scaling (VS) and Dirichlet scaling (DS), we used the code of Kull et al. (2019), hosted at https://github.com/dirichletcal/dirichlet python. For VS, we used the file dirichletcal/calib/vectorscaling.py, and for DS, we used the file dirichletcal/calib/fulldirichlet.py. No hyperparameter tuning was performed in any of our histogram binning experiments or baseline experiments; default settings were used in every case. The random seed was fixed so that every run of the experiment gives the same result. In particular, by relying on pre-trained models, we avoid training new deep-net models with multiple hyperparameters, thus avoiding any selection biases that may arise due to test-data peeking across multiple settings. 
E.2 FURTHER COMMENTS ON BINNING FOR ECE ESTIMATION As mentioned in Remark 1, ECE estimates for all methods except TL-HB and CW-HB was done using fixed-width bins r0, 1{Bq, r1{B, 2{Bq, . . . r1´ 1{B, 1s for various values of B P r5, 25s. For TL-HB and CW-HB, B is the number of bins used for each call to binary HB. For TL-HB, note that we actually proposed that the number of bins-per-class should be fixed; see Section B.2. However, for ease of comparison to other methods, we simply set the number of bins to B for each call to binary HB. That is, in line 5, we replace tnl{ku with B. For CW-HB, we described Algorithm 9 with different values of kl corresponding to the number of bins per class. For the CIFAR-10 and CIFAR-100 comparisons, we set each k1 “ k2 “ . . . “ kL “ k, where k P N satisfies tn{ku “ B. Tables 2,3, 4, and 5 report estimates with B “ 15, which has been commonly used in many works (Guo et al., 2017; Kull et al., 2019; Mukhoti et al., 2020). Corresponding to each table, we have a figure where ECE estimates with varying B are reported to strengthen conclusions: these are Figure 5,7, 6, and 8 respectively. Plugin estimates of the ECE were used, same as Guo et al. (2017). Further binning was not done for TL-HB and CW-HB since the output is already discrete and sufficiently many points take each of the predicted values. Note that due to Jensen’s inequality, any further binning will only decrease the ECE estimate (Kumar et al., 2019). Thus, using unbinned estimates may give TL-HB and CW-HB a disadvantage. E.3 SOME REMARKS ON MAXIMUM-CALIBRATION-ERROR (MCE) Guo et al. (2017) defined MCE with respect to confidence calibration, as follows: conf-MCEpc, hq :“ sup rPRangephq |P pY “ cpXq | hpXq “ rq ´ r| . (16) Conf-MCE suffers from the same issue illustrated in Figure 2 for conf-ECE. In Figure 1b, we looked at the reliability diagram within two bins. These indicate two of the values over which the supremum is taken in equation (16): these are the Y-axis distances between the ‹ markers and the X “ Y line for bins 6 and 10 (both are less than 0.02). On the other hand, the effective maximum miscalibration for bin 6 is roughly 0.15 (for class 1), and roughly 0.045 (for class 4), and the maximum should be taken with respect to these values across all bins. To remedy the underestimation of the effective MCE, we can consider the top-label-MCE, defined as TL-MCEpc, hq :“ max lPrLs sup rPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r| . (17) Interpreted in words, the TL-MCE assesses the maximum deviation between the predicted and true probabilities across all predictions and all classes. Following the same argument as in the proof of Proposition 4, it can be shown that for any c, h, conf-MCEpc, hq ď TL-MCEpc, hq. The TL-MCE is closely related to conditional top-label calibration (Definition 1b). Clearly, an algorithm is pε, αqconditionally top-label calibrated if and only if for every distribution P , P pTL-MCEpc, hq ď εq ě 1´ α. Thus the conditional top-label calibration guarantee of Theorem 1 implies a high probability bound on the TL-MCE as well. E.4 TABLE 2 AND 3 STYLE RESULTS WITH FOCAL LOSS Results for top-label-ECE and top-label-MCE with the base deep net model being trained using focal loss are reported in Table 4. Corresponding results for class-wise-ECE are reported in Table 5. The observations are similar to the ones reported for Brier score: 1. For TL-ECE, TL-HB is either the best or close to the best performing method on CIFAR10, but suffers on CIFAR-100. 
This phenomenon is discussed further in Appendix E.6. N-HB is the best or close to the best for both CIFAR-10 and CIFAR-100. 2. For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform better than TL-HB, but not by a huge margin. 3. For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures. E.5 ECE AND MCE ESTIMATES WITH VARYING NUMBER OF BINS Corresponding to each entry in Tables 2 and 4, we perform an ablation study with the number of bins varying as B P r5, 25s. This is in keeping with the findings of Roelofs et al. (2020) that the ECE/MCE estimate can vary with different numbers of bins, along with the relative performance of the various models. The results are reported in Figure 5 (ablation of Table 2) and Figure 7 (ablation of Table 3). The captions of these figures contain further details on the findings. Most findings are similar to those in the main paper, but the findings in the tables are strengthened through this ablation. The same ablations are performed for focal loss as well. The results are reported in Figure 6 (ablation of Metric Dataset Architecture Base TS VS DS N-HB CW-HB Table 4) and Figure 8 (ablation of Table 5). The captions of these figures contain further details on the findings. The ablation results in the figures support those in the tables. E.6 ANALYZING THE POOR PERFORMANCE OF TL-HB ON CIFAR-100 CIFAR-100 is an imbalanced dataset with 100 classes and 5000 points for validation/calibration (as per the default splits). Due to random subsampling, the validation split we used had one of the classes predicted as the top-label only 31 times. Thus, based on Theorem 1, we do not expect HB to have small TL-ECE. This is confirmed by the empirical results presented in Tables 2/4, and Figures 5b/6b. We observe that HB has higher estimated TL-ECE than all methods except DS, for most values of the number of bins. The performance of TL-HB for TL-MCE however is much much closer to the other methods since HB uses the same number of points per bin, ensuring that the predictions are somewhat equally calibrated across bins (Figures 5d/6d). In comparison, for CWECE, CW-HB is the best performing method. This is because in the class-wise setting, 5000 points are available for recalibration irrespective of the class, which is sufficient for HB. The deterioration in performance of HB when few calibration points are available was also observed in the binary setting by Gupta and Ramdas (2021, Appendix C). Niculescu-Mizil and Caruana (2005) noted in the conclusion of their paper that Platt scaling (Platt, 1999), which is closely related to TS, performs well when the data is small, but another nonparametric binning method, isotonic regression (Zadroz
1. What is the focus of the paper regarding calibration in multi-class settings?
2. What are the strengths of the proposed definition for calibration, particularly in its comparison to previous definitions?
3. What are the weaknesses of the definition, according to the reviewer?
4. How does the reviewer suggest improving the explanation of the definition and its relation to confidence calibration?
5. Can the authors provide more discussion on the good properties of confidence calibration and how they compare to their definition?
6. Does the reviewer fully understand the contribution of Section 3, and what additional information would they like to see regarding the reduction of notions to binary classifiers?
7. Are there any concerns regarding the scaling algorithms used in Table 2 and 3, specifically in regards to their effectiveness in bringing down ECE or TL ECE?
Summary Of The Paper Review
Summary Of The Paper
The paper suggests a definition for calibration in the multi-class setting named 'top label calibration'. The idea is to have only the most likely class calibrated. The small difference with previous definitions is that instead of conditioning only on the confidence value, the conditioning here is also on the identity of the class. The authors argue convincingly that this conditioning renders the definition more meaningful. The authors then observe that many definitions for multi-class calibration can be reduced to multiple instances of binary calibration and suggest an algorithmic framework where a binary calibrator is used as a black box to achieve the multiclass calibrator. They then test this by instantiating it with histogram binning and measuring the corresponding notion of expected calibration error.

Review
Strengths: The paper makes a convincing argument that, on an individual level, confidence calibration may be misguided while their definition makes more sense. It is also true that a natural algorithm to achieve many notions of multi-label calibration is to reduce it to the binary case, as the paper suggests.

Weaknesses: I think the definition also has obvious drawbacks that the authors do not discuss. In particular, I am not convinced that calibrating only the top label makes sense. In practice it just means that you partition the data by the top label and then calibrate each partition separately (as the algorithm they suggest effectively does). The predictions outside the top label make no difference. It should be noted that satisfying this requirement is very easy: pick the most common label and assign to all points the expectation of that label. Thus, the point of calibration is to do it to an existing classifier without sacrificing other good properties such as loss minimization. Since the top label isn't changed by the calibrator, the accuracy is unchanged, but the typical loss function for multi-class is cross entropy.

Detailed comments:
- Please explain better why calibrating only the top label is sufficient. Typically we assume that c(x) returns a vector of probabilities of dimension L, not just the top label out of the L.
- At the beginning of Section 2, please define confidence better. The arg max of the expression is a class (a number in [L]), not a pair (c, h), so the definition is confusing.
- I am not very familiar with confidence calibration as it is defined here, but it must have some good properties, no? Please discuss them and contrast with your definition as well.
- I'm not sure I fully understand the contribution of Section 3. Sure, some notions are reductions to binary classifiers, so they lend themselves to being computed via binary calibrators. Is there anything more you can say? (For instance, is error being compounded? Is loss minimization being affected? Are there computational tradeoffs?)
- In Tables 2 and 3 it is worth noting that scaling algorithms are not designed to bring down ECE or TL-ECE.
ICLR
Title Top-label calibration and multiclass-to-binary reductions Abstract We propose a new notion of multiclass calibration called top-label calibration. A classifier is said to be top-label calibrated if the reported probability for the predicted class label—the top-label—is calibrated, conditioned on the top-label. This conditioning is essential for practical utility of the calibration property, since the top-label is always reported and we must condition on what is reported. However, the popular notion of confidence calibration erroneously skips this conditioning. Furthermore, we outline a multiclass-to-binary (M2B) reduction framework that unifies confidence, top-label, and class-wise calibration, among others. As its name suggests, M2B works by reducing multiclass calibration to different binary calibration problems; various types of multiclass calibration can then be achieved using simple binary calibration routines. We instantiate the M2B framework with the well-studied histogram binning (HB) binary calibrator, and prove that the overall procedure is multiclass calibrated without making any assumptions on the underlying data distribution. In an empirical evaluation with four deep net architectures on CIFAR-10 and CIFAR-100, we find that the M2B + HB procedure achieves lower top-label and class-wise calibration error than other approaches such as temperature scaling. Code for this work is available at https://github.com/aigen/df-posthoc-calibration. 1 INTRODUCTION Machine learning models often make probabilistic predictions. The ideal prediction is the true conditional distribution of the output given the input. However, nature never reveals true probability distributions, making it infeasible to achieve this ideal in most situations. Instead, there is significant interest towards designing models that are calibrated, which is often feasible. We motivate the definition of calibration using a standard example of predicting the probability of rain. Suppose a meteorologist claims that the probability of rain on a particular day is 0.7. Regardless of whether it rains on that day or not, we cannot know if 0.7 was the underlying probability of rain. However, we can test if the meteorologist is calibrated in the long run, by checking if on the D days when 0.7 was predicted, it indeed rained on around 0.7D days (and the same is true for other probabilities). This example is readily converted to a formal binary calibration setting. Denote a random (feature, label)-pair as pX,Y q P X ˆt0, 1u, where X is the feature space. A probabilistic predictor h : X Ñ r0, 1s is said to be calibrated if for every prediction q P r0, 1s, PrpY “ 1 | hpXq “ qq “ q (almost surely). Arguably, if an ML classification model produces such calibrated scores for the classes, downstream users of the model can reliably use its predictions for a broader set of tasks. Our focus in this paper is calibration for multiclass classification, with L ě 3 classes and Y P rLs :“ t1, 2, . . . , L ě 3u. We assume all (training and test) data is drawn i.i.d. from a fixed distribution P , and denote a general point from this distribution as pX,Y q „ P . Consider a typical multiclass predictor, h : X Ñ ∆L´1, whose range ∆L´1 is the probability simplex in RL. A natural notion of calibration for h, called canonical calibration is the following: for every l P rLs, P pY “ l | hpXq “ qq “ ql (ql denotes the l-th component of q). 
However, canonical calibration becomes infeasible to achieve or verify once L is even 4 or 5 (Vaicenavicius et al., 2019). Thus, there is interest in studying statistically feasible relaxations of canonical notion, such as confidence calibration (Guo et al., 2017) and class-wise calibration (Kull et al., 2017). In particular, the notion of confidence calibration (Guo et al., 2017) has been popular recently. A model is confidence calibrated if the following is true: “when the reported confidence for the predicted class is q P r0, 1s, the accuracy is also q”. In any practical setting, the confidence q is never reported alone; it is always reported along with the actual class prediction l P rLs. One may expect that if a model is confidence calibrated, the following also holds: “when the class l is predicted with confidence q, the probability of the actual class being l is also q”? Unfortunately, this expectation is rarely met—there exist confidence calibrated classifier for whom the latter statement is grossly violated for all classes (Example 1). On the other hand, our proposed notion of top-label calibration enforces the latter statement. It is philosophically more coherent, because it requires conditioning on all relevant reported quantities (both the predicted top label and our confidence in it). In Section 2, we argue further that top-label calibration is a simple and practically meaningful replacement of confidence calibration. In Section 3, we unify top-label, confidence, and a number of other popular notions of multiclass calibration into the framework of multiclass-to-binary (M2B) reductions. The M2B framework relies on the simple observation that each of these notions internally verifies binary calibration claims. As a consequence, each M2B notion of calibration can be achieved by solving a number of binary calibration problems. With the M2B framework at our disposal, all of the rich literature on binary calibration can now be used for multiclass calibration. We illustrate this by instantiating the M2B framework with the binary calibration algorithm of histogram binning or HB (Zadrozny and Elkan, 2001; Gupta and Ramdas, 2021). The M2B + HB procedure achieves state-of-the-art results with respect to standard notions of calibration error (Section 4). Further, we show that our procedure is provably calibrated for arbitrary data-generating distributions. The formal theorems are delayed to Appendices B, C (due to space limitations), but an informal result is presented in Section 4. 2 MODIFYING CONFIDENCE CALIBRATION TO TOP-LABEL CALIBRATION Let c : X Ñ rLs denote a classifier or top-label predictor and h : X Ñ r0, 1s a function that provides a confidence or probability score for the top-label cpXq. The predictor pc, hq is said to be confidence calibrated (for the data-generating distribution P ) if P pY “ cpXq | hpXqq “ hpXq. (1) In other words, when the reported confidence hpXq equals p P r0, 1s, then the fraction of instances where the predicted label is correct also approximately equals p. Note that for an L-dimensional predictor h : X Ñ ∆L´1, one would use cp¨q “ arg maxlPrLs hlp¨q and hp¨q “ hcp¨qp¨q; ties are broken arbitrarily. Then h is confidence calibrated if the corresponding pc, hq satisfies (1). Confidence calibration is most applicable in high-accuracy settings where we trust the label prediction cpxq. 
For instance, if a high-accuracy cancer-grade-prediction model predicts a patient as having "95% grade III, 3% grade II, and 2% grade I", we would suggest that the patient undergo an invasive treatment. However, we may want to know (and control) the number of non-grade-III patients that were given this suggestion incorrectly. In other words, is Pr(cancer is not grade III | cancer is predicted to be of grade III with confidence 95%) equal to 5%? It would appear that by focusing on the probability of the predicted label, confidence calibration enforces such control. However, as we illustrate next, confidence calibration fails at this goal by providing a guarantee that is neither practically interpretable, nor actionable.

Translating the probabilistic statement (1) into words, we ascertain that confidence calibration leads to guarantees of the form: "if the confidence h(X) in the top-label is 0.6, then the accuracy (frequency with which Y equals c(X)) is 0.6". Such a guarantee is not very useful. Suppose a patient P is informed (based on their symptoms X) that they are most likely to have a certain disease D with probability 0.6. Further, patient P is told that this score is confidence calibrated. P can now infer the following: "among all patients who have probability 0.6 of having some unspecified disease, the fraction who have that unspecified disease is also 0.6." However, P is concerned only about disease D, and not about other diseases. That is, P wants to know the probability of having D among patients who were predicted to have disease D with confidence 0.6, not among patients who were predicted to have some disease with confidence 0.6. In other words, P cares about the occurrence of D among patients who were told the same thing that P has been told. It is tempting to wish that the confidence-calibrated probability 0.6 has some bearing on what P cares about. However, this faith is misguided, as the above reasoning suggests, and as further illustrated through the following example.

Example 1. Suppose the instance space is (X, Y) ∈ {a, b} × {1, 2, . . .}. (X can be seen as the random patient, and Y as the disease they are suffering from.) Consider a predictor (c, h) and let the values taken by (X, Y, c, h) be as follows:

Feature x | P(X = x) | Class prediction c(x) | Confidence h(x) | P(Y = c(X) | X = x)
    a     |   0.5    |          1            |       0.6       |         0.2
    b     |   0.5    |          2            |       0.6       |         1.0

The table specifies only the probabilities P(Y = c(X) | X = x); the probabilities P(Y = l | X = x), l ≠ c(x), can be set arbitrarily. We verify that (c, h) is confidence calibrated:
P(Y = c(X) | h(X) = 0.6) = 0.5 (P(Y = 1 | X = a) + P(Y = 2 | X = b)) = 0.5 (0.2 + 1) = 0.6.
However, whether the actual instance is X = a or X = b, the probabilistic claim of 0.6 bears no correspondence with reality. If X = a, h(X) = 0.6 is extremely overconfident since P(Y = 1 | X = a) = 0.2. Contrarily, if X = b, h(X) = 0.6 is extremely underconfident.

The reason for the strange behavior above is that the probability P(Y = c(X) | h(X)) is not interpretable from a decision-making perspective. In practice, we never report just the confidence h(X), but also the class prediction c(X) (obviously!). Thus it is more reasonable to talk about the conditional probability of Y = c(X) given what is reported, that is, both c(X) and h(X). We make a small but critical change to (1); we say that (c, h) is top-label calibrated if
P(Y = c(X) | h(X), c(X)) = h(X).  (2)
(See the disambiguating Remark 2 on terminology.)
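The arithmetic of Example 1 is easy to verify mechanically; the short script below (ours, purely illustrative) recovers both the confidence-calibration identity and the class-conditional probabilities that it hides.

```python
# Example 1 as a discrete distribution over X in {a, b}.
p_x       = {"a": 0.5, "b": 0.5}   # P(X = x)
c         = {"a": 1,   "b": 2}     # class prediction c(x)
h         = {"a": 0.6, "b": 0.6}   # reported confidence h(x)
p_correct = {"a": 0.2, "b": 1.0}   # P(Y = c(X) | X = x)

# Confidence calibration conditions only on h(X) = 0.6:
mass = sum(p_x[x] for x in p_x if h[x] == 0.6)
acc = sum(p_x[x] * p_correct[x] for x in p_x if h[x] == 0.6) / mass
print(acc)  # 0.6, equal to h(X), so (c, h) is confidence calibrated

# Top-label calibration conditions on the pair (c(X), h(X)); each pair is
# realised by exactly one x, so the conditional accuracies are 0.2 and 1.0:
for x in ("a", "b"):
    print((c[x], h[x]), "->", p_correct[x])
```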
Going back to the patient-disease example, top-label calibration would tell patient P the following: “among all patients, who (just like you) are predicted to have disease D with probability 0.6, the fraction who actually have disease D is also 0.6.” Philosophically, it makes sense to condition on what is reported—both the top label and its confidence—because that is what is known to the recipient of the information; and there is no apparent justification for not conditioning on both. A commonly used metric for quantifying the miscalibration of a model is the expected-calibrationerror (ECE) metric. The ECE associated with confidence calibration is defined as conf-ECEpc, hq :“ EX |P pY “ cpXq | hpXqq ´ hpXq| . (3) We define top-label-ECE (TL-ECE) in an analogous fashion, but also condition on cpXq: TL-ECEpc, hq :“ EX |P pY “ cpXq | cpXq, hpXqq ´ hpXq| . (4) Higher values of ECE indicate worse calibration performance. The predictor in Example 1 has conf-ECEpc, hq “ 0. However, it has TL-ECEpc, hq “ 0.4, revealing its miscalibration. More generally, it can be deduced as a straightforward consequence of Jensen’s inequality that conf-ECEpc, hq is always smaller than the TL-ECEpc, hq (see Proposition 4 in Appendix H). As illustrated by Example 1, the difference can be significant. In the following subsection we illustrate that the difference can be significant on a real dataset as well. First, we make a couple of remarks. Remark 1 (ECE estimation using binning). Estimating the ECE requires estimating probabilities conditional on some prediction such as hpxq. A common strategy to do this is to bin together nearby values of hpxq using binning schemes (Nixon et al., 2020, Section 2.1), and compute a single estimate for the predicted and true probabilities using all the points in a bin. The calibration method we espouse in this work, histogram binning (HB), produces discrete predictions whose ECE can be estimated without further binning. Based on this, we use the following experimental protocol: we report unbinned ECE estimates while assessing HB, and binned ECE estimates for all other compared methods, which are continuous output methods (deep-nets, temperature scaling, etc). It is commonly understood that binning leads to underestimation of the effective ECE (Vaicenavicius et al., 2019; Kumar et al., 2019). Thus, using unbinned ECE estimates for HB gives HB a disadvantage compared to the binned ECE estimates we use for other methods. (This further strengthens our positive results for HB.) The binning scheme we use is equal-width binning, where the interval r0, 1s is divided into B equal-width intervals. Equal-width binning typically leads to lower ECE estimates compared to adaptive-width binning (Nixon et al., 2020). Remark 2 (Terminology). The term conf-ECE was introduced by Kull et al. (2019). Most works refer to conf-ECE as just ECE (Guo et al., 2017; Nixon et al., 2020; Mukhoti et al., 2020; Kumar et al., 2018). However, some papers refer to conf-ECE as top-label-ECE (Kumar et al., 2019; Zhang et al., 2020), resulting in two different terms for the same concept. We call the older notion as conf-ECE, and our definition of top-label calibration/ECE (4) is different from previous ones. (a) Confidence reliability diagram (points marked ‹) and top-label reliability diagram (points marked `) for a ResNet-50 model on the CIFAR-10 dataset; see further details in points (a) and (b) below. The gray bars denote the fraction of predictions in each bin. 
The confidence reliability diagram (mistakenly) suggests better calibration than the top-label reliability diagram. (b) Class-wise and zoomed-in version of Figure 1a for bin 6 (top) and bin 10 (bottom); see further details in point (c) below. The ‹ markers are in the same position as Figure 1a, and denote the average predicted and true probabilities. The colored points denote the predicted and true probabilities when seen class-wise. The histograms on the right show the number of test points per class within bins 6 and 10. Figure 1: Confidence reliability diagrams misrepresent the effective miscalibration. 2.1 AN ILLUSTRATIVE EXPERIMENT WITH RESNET-50 ON CIFAR-10 We now compare confidence and top-label calibration using ECE estimates and reliability diagrams (Niculescu-Mizil and Caruana, 2005). This experiment can be seen as a less malignant version of Example 1. Here, confidence calibration is not completely meaningless, but can nevertheless be misleading. Figure 1 illustrates the (test-time) calibration performance of a ResNet-50 model (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009). In the following summarizing points, the pc, hq correspond to the ResNet-50 model. (a) The ‹ markers in Figure 1a form the confidence reliability diagram (Guo et al., 2017), con- structed as follows. First, the hpxq values on the test set are binned into one of B “ 10 bins, r0, 0.1q, r0.1, 0.2q, . . . , r0.9, 1s, depending on the interval to which hpxq belongs. The gray bars in Figure 1a indicate the fraction of hpxq values in each bin—nearly 92% points belong to bin r0.9, 1s and no points belong to bin r0, 0.1q. Next, for every bin b, we plot ‹ “ pconfb, accbq, which are the plugin estimates of E rhpXq | hpXq P Bin bs and P pY “ cpXq | hpXq P Bin bq respectively. The dashed X “ Y line indicates perfect confidence calibration. (b) The ` markers in Figure 1a form the top-label reliability diagram. Unlike the confidence reliability diagram, the top-label reliability diagram shows the average miscalibration across classes in a given bin. For a given class l and bin b, define ∆b,l :“ | pP pY “ cpXq | cpXq “ l, hpXq P Bin bq ´ pE rhpXq | cpXq “ l, hpXq P Bin bs |, where pP , pE denote empirical estimates based on the test data. The overall miscalibration is then ∆b :“ Weighted-averagep∆b,lq “ ř lPrLs pP pcpXq “ l | hpXq P Bin bq ∆b,l. Note that ∆b is always non-negative and does not indicate whether the overall miscalibration occurs due to under- or over-confidence; also, if the absolute-values were dropped from ∆b,l, then ∆b would simply equal accb´ confb. In order to plot ∆b in a reliability diagram, we obtain the direction for the corresponding point from the confidence reliability diagram. Thus for every ‹ “ pconfb, accbq, we plot` “ pconfb, confb`∆bq if accb ą confb and` “ pconfb, confb´∆bq otherwise, for every b. This scatter plot of the `’s gives us the top-label reliability diagram. Figure 1a shows that there is a visible increase in miscalibration when going from confidence calibration to top-label calibration. To understand why this change occurs, Figure 1b zooms into the sixth bin (hpXq P r0.5, 0.6q) and bin 10 (hpXq P r0.9, 1.0s), as described next. (c) Figure 1b displays the class-wise top-label reliability diagrams for bins 6 and 10. 
Note that for bin 6, the ‹ marker is nearly on the X = Y line, indicating that the overall accuracy matches the

[Figure 2: estimated conf-ECE and TL-ECE as the number of bins used for ECE estimation varies from 5 to 25, for ResNet-50, ResNet-110, Wide-ResNet-26-10, and DenseNet-121; curves are shown for the base model, temperature scaling, and histogram binning.]

Figure 2 displays the aggregate effect of the above phenomenon (across bins and classes) through estimates of the conf-ECE and TL-ECE. The precise experimental setup is described in Section 4. These plots display the ECE estimates of the base model, as well as the base model when recalibrated using temperature scaling (Guo et al., 2017) and our upcoming formulation of top-label histogram binning (Section 3). Since ECE estimates depend on the number of bins B used (see Roelofs et al. (2020) for empirical work around this), we plot the ECE estimate for every value B ∈ [5, 25] in order to obtain clear and unambiguous results. We find that the TL-ECE is significantly higher than the conf-ECE for most values of B, the architectures, and the pre- and post-recalibration models. This figure also previews the performance of our forthcoming top-label histogram binning algorithm. Top-label HB has smaller estimated TL-ECE than temperature scaling for most values of B and the architectures. Except for ResNet-50, the conf-ECE estimates are also better.

To summarize, top-label calibration captures the intuition of confidence calibration by focusing on the predicted class. However, top-label calibration also conditions on the predicted class, which is always part of the prediction in any practical setting. Further, TL-ECE estimates can be substantially different from conf-ECE estimates. Thus, while it is common to compare predictors based on the conf-ECE, the TL-ECE comparison is more meaningful, and can potentially be different.

3 CALIBRATION ALGORITHMS FROM CALIBRATION METRICS

In this section, we unify a number of notions of multiclass calibration as multiclass-to-binary (or M2B) notions, and propose a general-purpose calibration algorithm that achieves the corresponding M2B notion of calibration. The M2B framework yields multiple novel post-hoc calibration algorithms, each of which is tuned to a specific M2B notion of calibration.

3.1 MULTICLASS-TO-BINARY (M2B) NOTIONS OF CALIBRATION

In Section 2, we defined confidence calibration (1) and top-label calibration (2). These notions verify calibration claims for the highest predicted probability. Other popular notions of calibration verify calibration claims for other entries in the full L-dimensional prediction vector. A predictor h = (h_1, h_2, . . . , h_L) is said to be class-wise calibrated (Kull et al., 2017) if
(class-wise calibration)  ∀ l ∈ [L], P(Y = l | h_l(X)) = h_l(X).  (5)
Another recently proposed notion is top-K confidence calibration (Gupta et al., 2021). For some l ∈ [L], let c_(l) : X → [L] denote the l-th highest class prediction, and let h_(l) : X → [0, 1] denote the confidence associated with it (c = c_(1) and h = h_(1) are special cases). For a given K ≤ L,
For a given K ď L, (top-K-confidence calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXqq “ hpkqpXq. (6) As we did in Section 2 for confidenceÑtop-label, top-K-confidence calibration can be modified to the more interpretable top-K-label calibration by further conditioning on the predicted labels: (top-K-label calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXq, cpkqpXqq “ hpkqpXq. (7) Each of these notions reduce multiclass calibration to one or more binary calibration requirements, where each binary calibration requirement corresponds to verifying if the distribution of Y , conditioned on some prediction predpXq, satisfies a single binary calibration claim associated with predpXq. Table 1 illustrates how the calibration notions discussed so far internally verify a number of binary calibration claims, making them M2B notions. For example, for class-wise calibration, for every l P rLs, the conditioning is on predpXq “ hlpXq, and a single binary calibration statement is verified: P pY “ l | predpXqq “ hlpXq. Based on this property, we call each of these notions multiclass-to-binary or M2B notions. The notion of canonical calibration mentioned in the introduction is not an M2B notion. Canonical calibration is discussed in detail in Appendix G. Due to the conditioning on a multi-dimensional prediction, non-M2B notions of calibration are harder to achieve or verify. For the same reason, it is possibly easier for humans to interpret binary calibration claims when taking decisions/actions. 3.2 ACHIEVING M2B NOTIONS OF CALIBRATION USING M2B CALIBRATORS The M2B framework illustrates how multiclass calibration can typically be viewed via a reduction to binary calibration. The immediate consequence of this reduction is that one can now solve multiclass calibration problems by leveraging the well-developed methodology for binary calibration. The upcoming M2B calibrators belong to the standard recalibration or post-hoc calibration setting. In this setting, one starts with a fixed pre-learnt base model g : X Ñ ∆L´1. The base model g can correspond to a deep-net, a random forest, or any 1-v-all (one-versus-all) binary classification model such as logistic regression. The base model is typically optimized for classification accuracy and may not be calibrated. The goal of post-hoc calibration is to use some given calibration data D “ pX1, Y1q, pX2, Y2q, . . . , pXn, Ynq P pX ˆ rLsqn, typically data on which g was not learnt, to recalibrate g. In practice, the calibration data is usually the same as the validation data. To motivate M2B calibrators, suppose we want to verify if g is calibrated on a certain test set, based on a given M2B notion of calibration. Then, the verifying process will split the test data into a number of sub-datasets, each of which will verify one of the binary calibration claims. In Appendix A.2, we argue that the calibration data can also be viewed as a test set, and every step in the verification process can be used to provide a signal for improving calibration. M2B calibrators take the form of wrapper methods that work on top of a given binary calibrator. Denote an arbitrary black-box binary calibrator as At0,1u : r0, 1sXˆpXˆt0, 1uq‹ Ñ r0, 1sX , where the first argument is a mapping X Ñ r0, 1s that denotes a (miscalibrated) binary predicor, and the second argument is a calibration data sequence of arbitrary length. The output is a (better calibrated) binary predictor. 
Examples of At0,1u are histogram binning (Zadrozny and Elkan, 2001), isotonic regression (Zadrozny and Elkan, 2002), and Platt scaling (Platt, 1999). In the upcoming descriptions, we use the indicator function 1 ta “ bu P t0, 1u which takes the value 1 if a “ b, and 0 if a ‰ b. The general formulation of our M2B calibrator is delayed to Appendix A since the description is a bit involved. To ease readability and adhere to the space restrictions, in the main paper we describe the calibrators corresponding to top-label, class-wise, and confidence calibration (Algorithms 1–3). Each of these calibrators are different from the classical M2B calibrator (Algorithm 4) that has been used by Zadrozny and Elkan (2002), Guo et al. (2017), Kull et al. (2019), and most other papers M2B calibrators: Post-hoc multiclass calibration using binary calibrators Input in each case: Binary calibrator At0,1u : r0, 1sX ˆ pX ˆ t0, 1uq‹ Ñ r0, 1sX , base multiclass predictor g : X Ñ ∆L´1, calibration data D “ pX1, Y1q, . . . , pXn, Ynq. Algorithm 1: Confidence calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 D1 Ð tpXi,1 tYi “ cpXiquq : i P rnsu; 4 hÐ At0,1upg,D1q; 5 return pc, hq; Algorithm 2: Top-label calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 for lÐ 1 to L do 4 Dl Ð tpXi,1 tYi “ luq : cpXiq “ lqu; 5 hl Ð At0,1upg,Dlq; 6 end 7 hp¨q Ð hcp¨qp¨q (predict hlpxq if cpxq “ l); 8 return pc, hq; Algorithm 3: Class-wise calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 hl Ð At0,1upgl,Dlq; 5 end 6 return ph1, h2, . . . , hLq; Algorithm 4: Normalized calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 rhl Ð At0,1upgl,Dlq; 5 end 6 Normalize: for every l P rLs, hlp¨q :“ rhlp¨q{ řL k“1 rhkp¨q; 7 return ph1, h2, . . . , hLq; we are aware of, with the most similar one being Algorithm 3. Top-K-label and top-K-confidence calibrators are also explicitly described in Appendix A (Algorithms 6 and 7). Top-label calibration requires that for every class l P rLs, P pY “ l | cpXq “ l, hpXqq “ hpXq. Thus, to achieve top-label calibration, we must solve L calibration problems. Algorithm 2 constructs L datasets tDl : l P rLsu (line 4). The features in Dl are the Xi’s for which cpXiq “ l, and the labels are 1 tYi “ lu. Now for every l P rLs, we calibrate g to hl : X Ñ r0, 1s using Dl and any binary calibrator. The final probabilistic predictor is hp¨q “ hcp¨qp¨q (that is, it predicts hlpxq if cpxq “ l). The top-label predictor c does not change in this process. Thus the accuracy of pc, hq is the same as the accuracy of g irrespective of which At0,1u is used. Unlike the top-label calibrator, the confidence calibrator merges all classes together into a single dataset D1 “ Ť lPrLsDl. To achieve class-wise calibration, Algorithm 3 also solves L calibration problems, but these correspond to satisfying P pY “ l | hlpXqq “ hlpXq. Unlike top-label calibration, the dataset Dl for class-wise calibration contains all the Xi’s (even if cpXiq ‰ l), and hl is passed to At0,1u instead of h. Also, unlike confidence calibration, Yi is replaced with 1 tYi “ lu instead of 1 tYi “ cpXiqu. The overall process is similar to reducing multiclass classification to L 1-v-all binary classification problem, but our motivation is intricately tied to the notion of class-wise calibration. 
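To make the wrappers concrete, here is a minimal sketch of the top-label calibrator (Algorithm 2) and the class-wise calibrator (Algorithm 3) around an arbitrary binary calibrator. The code is our own illustration: `binary_cal` stands in for any routine A_{0,1} (histogram binning, Platt scaling, isotonic regression) that maps calibration scores and binary targets to a recalibrated score function, and we assume every class is predicted at least once on the calibration data.

```python
import numpy as np

def top_label_calibrator(probs, labels, binary_cal):
    """Algorithm 2 sketch: one binary problem per *predicted* class.

    probs  : (n, L) base-model probabilities g(X_i) on calibration data.
    labels : (n,) true labels in {0, ..., L-1}.
    binary_cal(scores, targets) -> callable mapping scores to calibrated scores.
    """
    n, L = probs.shape
    c = probs.argmax(axis=1)        # top label c(X_i)
    g = probs.max(axis=1)           # top-label confidence g(X_i)
    h = {l: binary_cal(g[c == l], (labels[c == l] == l).astype(int)) for l in range(L)}

    def predict(new_probs):
        cls = new_probs.argmax(axis=1)
        conf = new_probs.max(axis=1)
        cal = np.array([float(h[int(k)](np.array([p]))[0]) for k, p in zip(cls, conf)])
        return cls, cal             # the top label is unchanged; only its confidence is recalibrated

    return predict

def class_wise_calibrator(probs, labels, binary_cal):
    """Algorithm 3 sketch: one 1-vs-all binary problem per class, with no normalization."""
    _, L = probs.shape
    return [binary_cal(probs[:, l], (labels == l).astype(int)) for l in range(L)]
```

Renormalizing the L outputs of `class_wise_calibrator` to sum to one would instead give the normalized calibrator (Algorithm 4), discussed next.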
Most popular empirical works that have discussed binary calibrators for multiclass calibration have done so using the normalized calibrator, Algorithm 4. This is almost identical to Algorithm 3, except that there is an additional normalization step (line 6 of Algorithm 4). This normalization was first proposed by Zadrozny and Elkan (2002, Section 5.2), and has been used unaltered by most other works1 where the goal has been to simply compare direct multiclass calibrators such as temperature scaling, Dirichlet scaling, etc., to a calibrator based on binary methods (for instance, see Section 4.2 of Guo et al. (2017)). In contrast to these papers, we investigate multiple M2B reductions in an effort to identify the right reduction of multiclass calibration to binary calibration. To summarize, the M2B characterization immediately yields a novel and different calibrator for every M2B notion. In the following section, we instantiate M2B calibrators on the binary calibrator of histogram binning (HB), leading to two new algorithms: top-label-HB and class-wise-HB, that achieve strong empirical results and satisfy distribution-free calibration guarantees. 1the only exception we are aware of is the recent work of Patel et al. (2021) who also suggest skipping normalization (see their Appendix A1); however they use a common I-Max binning scheme across classes, whereas in Algorithm 3 the predictor hl for each class is learnt completely independently of other classes 4 EXPERIMENTS: M2B CALIBRATION WITH HISTOGRAM BINNING Histogram binning or HB was proposed by Zadrozny and Elkan (2001) with strong empirical results for binary calibration. In HB, a base binary calibration model g : X Ñ r0, 1s is used to partition the calibration data into a number of bins so that each bin has roughly the same number of points. Then, for each bin, the probability of Y “ 1 is estimated using the empirical distribution on the calibration data. This estimate forms the new calibrated prediction for that bin. Recently, Gupta and Ramdas (2021) showed that HB satisfies strong distribution-free calibration guarantees, which are otherwise impossible for scaling methods (Gupta et al., 2020). Despite these results for binary calibration, studies for multiclass calibration have reported that HB typically performs worse than scaling methods such as temperature scaling (TS), vector scaling (VS), and Dirichlet scaling (DS) (Kull et al., 2019; Roelofs et al., 2020; Guo et al., 2017). In our experiments, we find that the issue is not HB but the M2B wrapper used to produce the HB baseline. With the right M2B wrapper, HB beats TS, VS, and DS. A number of calibrators have been proposed recently (Zhang et al., 2020; Rahimi et al., 2020; Patel et al., 2021; Gupta et al., 2021), but VS and DS continue to remain strong baselines which are often close to the best in these papers. We do not compare to each of these calibrators; our focus is on the M2B reduction and the message that the baselines dramatically improve with the right M2B wrapper. We use three metrics for comparison: the first is top-label-ECE or TL-ECE (defined in (4)), which we argued leads to a more meaningful comparison compared to conf-ECE. Second, we consider the more stringent maximum-calibration-error (MCE) metric that assesses the worst calibration across predictions (see more details in Appendix E.3). For top-label calibration MCE is given by TL-MCEpc, hq :“ maxlPrLs suprPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r|. 
To assess class-wise calibration, we use class-wise-ECE defined as the average calibration error across classes: CW-ECEpc,hq :“ L´1 řL l“1 EX |P pY “ l | hlpXqq ´ hlpXq|. All ECE/MCE estimation is performed as described in Remark 1. For further details, see Appendix E.2. Formal algorithm and theoretical guarantees. Top-label-HB (TL-HB) and class-wise-HB (CW-HB) are explicitly stated in Appendices B and C respectively; these are instantiations of the top-label calibrator and class-wise calibrator with HB. N-HB is the normalized calibrator (Algorithm 4) with HB, which is the same as CW-HB, but with an added normalization step. In the Appendix, we extend the binary calibration guarantees of Gupta and Ramdas (2021) to TL-HB and CW-HB (Theorems 1 and 2). We informally summarize one of the results here: if there are at least k calibration points-per-bin, then the expected-ECE is bounded as: E r(TL-) or (CW-) ECEs ď a 1{2k, for TL-HB and CW-HB respectively. The outer E above is an expectation over the calibration data, and corresponds to the randomness in the predictor learnt on the calibration data. Note that the ECE itself is an expected error over an unseen i.i.d. test-point pX,Y q „ P . Experimental details. We experimented on the CIFAR-10 and CIFAR-100 datasets, which have 10 and 100 classes respectively. The base models are deep-nets with the following architectures: ResNet-50, ResNet-110, Wide-ResNet-26-10 (WRN) (Zagoruyko and Komodakis, 2016), and DenseNet-121 (Huang et al., 2017). Both CIFAR datasets consist of 60K (60,000) points, which are split as 45K/5K/10K to form the train/validation/test sets. The validation set was used for post-hoc calibration and the test set was used for evaluation through ECE/MCE estimates. Instead of training new models, we used the pre-trained models of Mukhoti et al. (2020). We then ask: “which post-hoc calibrator improves the calibration the most?” We used their Brier score and focal loss models in our experiments (Mukhoti et al. (2020) report that these are the empirically best performing loss functions). All results in the main paper are with Brier score, and results with focal loss are in Appendix E.4. Implementation details for TS, VS, and DS are in Appendix E. Findings. In Table 2, we report the binned ECE and MCE estimates when B “ 15 bins are used by HB, and for ECE estimation. We make the following observations: (a) For TL-ECE, N-HB is the best performing method for both CIFAR-10 and CIFAR-100. While most methods perform similarly across architectures for CIFAR-10, there is high variation in CIFAR-100. DS is the worst performing method on CIFAR-100, but TL-HB also performs poorly. We believe that this could be because the data splitting scheme of the TL-calibrator (line 4 of Algorithm 2) splits datasets across the predicted classes, and some classes in CIFAR-100 occur very rarely. This is further discussed in Appendix E.6. (b) For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform slightly better than TL-HB. Since HB ensures that each bin gets roughly the same number of points, the predictions are well calibrated across bins, leading to smaller TL-MCE. A similar observation was also made by Gupta and Ramdas (2021). (c) For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures. The N-HB method, which has been used in many CW-ECE baseline experiments, performs terribly. In other words, skipping the normalization step leads to a large improvement in CW-ECE. 
This observation is one of our most striking findings. To shed further light on this, we note that the distribution-free calibration guarantees for CW-HB shown in Appendix C no longer hold post-normalization. Thus, both our theory and experiments indicate that skipping normalization improves CW-ECE performance. Additional experiments in the Appendix. In Appendix E.5, we report each of the results in Tables 2 and 3 with the number of bins taking every value in the range r5, 25s. Most observations remain the same under this expanded study. In Appendix B.2, we consider top-label calibration for the class imbalanced COVTYPE-7 dataset, and show that TL-HB adapts to tail/infrequent classes. 5 CONCLUSION We make two contributions to the study of multiclass calibration: (i) defining the new notion of top-label calibration which enforces a natural minimal requirement on a multiclass predictor—the probability score for the top class prediction should be calibrated; (ii) developing a multiclass-tobinary (M2B) framework which posits that various notions of multiclass calibration can be achieved via reduction to binary calibration, balancing practical utility with statistically tractability. Since it is important to identify appropriate notions of calibration in any structured output space (Kuleshov et al., 2018; Gneiting et al., 2007), we anticipate that the philosophy behind the M2B framework could find applications in other structured spaces. 6 REPRODUCIBILITY STATEMENT Some reproducibility desiderata, such as external code and libraries that were used are summarized in Appendix E.1. All code to generate results with the CIFAR datasets is attached in the supplementary material. Our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ (corresponding to ‘brier score’ and ‘focal loss adaptive 53’ at the above link). By avoiding training of new deep-net models with multiple hyperparameters, we also consequently avoided selection biases that inevitably creep in due to test-data-peeking. The predictions of the pre-trained models were obtained using the code at https://github.com/torrvision/focal calibration. 7 ETHICS STATEMENT Post-hoc calibration is a post-processing step that can be applied on top of miscalibrated machine learning models to increase their reliability. As such, we believe our work should improve the transparency and explainability of machine learning models. However, we outline a few limitations. Post-hoc calibration requires keeping aside a fresh, representative dataset, that was not used for training. If this dataset is too small, the resulting calibration guarantee can be too weak to be meaningful in practice. Further, if the test data distribution shifts in significant ways, additional corrections may be needed to recalibrate (Gupta et al., 2020; Podkopaev and Ramdas, 2021). A well calibrated classifier is not necessarily an accurate or a fair one, and vice versa (Kleinberg et al., 2017). Deploying calibrated models in critical applications like medicine, criminal law, banking, etc. does not preclude the possibility of the model being frequently wrong or unfair. ACKNOWLEDGEMENTS This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562 (Towns et al., 2014). Specifically, it used the Bridges-2 system, which is supported by NSF award number ACI-1928147, at the Pittsburgh Supercomputing Center (PSC). 
CG’s research was supported by the generous Bloomberg Data Science Ph.D. Fellowship. CG would like to thank Saurabh Garg and Youngseog Chung for interesting discussions, and Viveka Kulharia for help with the focal calibration repository. Finally, we thank Zack Lipton, the ICLR reviewers, and the ICLR area chair, for excellent feedback that helped improve the writing of the paper. A ADDENDUM TO SECTION 3 “CALIBRATION ALGORITHMS FROM CALIBRATION METRICS” In Section 3, we introduced the concept of M2B calibration, and showed that popular calibration notions are in fact M2B notions (Table 1). We showed how the calibration notions of top-label, class-wise, and confidence calibration can be achieved using a corresponding M2B calibrator. In the following subsection, we present the general-purpose wrapper Algorithm 5 that can be used to derive an M2B calibrator from any given M2B calibration notion that follows the rubric specified by Table 1. In Appendix A.2, we illustrate the philosophy of M2B calibration using a simple example with a dataset that contains 6 points. This example also illustrates the top-label-calibrator, the classwise-calibrator, and the confidence-calibrator. A.1 GENERAL-PURPOSE M2B CALIBRATOR Denote some M2B notion of calibration as C. Suppose C corresponds toK binary calibration claims. The outer for-loop in Algorithm 5, runs over each such claim in C. For example, for class-wise calibration, K “ L and for confidence and top-label calibration, K “ 1. Corresponding to each claim, there is a probability-predictor that the conditioning is to be done on, such as g or gl or gpkq. Additionally, there may be conditioning on the label predictor such as c or cpkq. These are denoted as prc, rgq in Algorithm 5. For confidence and top-label calibration, rc “ c, the top-label-confidence. For class-wise calibration, when rg “ gl, we have rcp¨q “ l. If there is no label conditioning in the calibration notion, such as in confidence, top-K-confidence, and class-wise calibration, then we enter the if-condition inside the for-loop. Here hk is learnt using a single calibration dataset and a single call to At0,1u. Otherwise, if there is label conditioning, such as in top-label and top-K-label calibration, we enter the else-condition, where we learn a separate hk,l for every l P rLs, using a different part of the dataset Dl in each case. Then hkpxq equals hk,lpxq if rcpxq “ l. Finally, since C is verifying a sequence of claims, the output of Algorithm 5 is a sequence of predictors. Each original prediction prc, rgq corresponding to the C is replaced with prc, hkq. This is the output of the M2B calibrator. Note that the rc values are not changed. This output appears abstract, but normally, it can be represented in an interpretable way. For example, for class-wise calibration, the output is just a sequence of predictors, one for each class: ph1, h2, . . . , hLq. This general-purpose M2B calibrators can be used to achieve any M2B calibration notion: toplabel calibration (Algorithm 2), class-wise calibration (Algorithm 3), confidence calibration (Algorithm 1), top-K-label calibration (Algorithm 6), and top-K-confidence calibration (Algorithm 7). A.2 AN EXAMPLE TO ILLUSTRATE THE PHILOSOPHY OF M2B CALIBRATION Figure 3a shows the predictions of a given base model g on a given dataset D. Suppose D is a test set, and we are testing confidence calibration. Then the only predictions that matter are the top-predictions corresponding to the shaded values. 
These are stripped out and shown in Figure 3b, in the gp¨q row. Note that the indicator 1 tY “ cp¨qu is sufficient to test confidence calibration and given this, the cpXq are not needed. Thus the second row in Figure 3b only shows these indicators. Algorithm 8: Top-label histogram binning Input: Base multiclass predictor g, calibration data D “ pX1, Y1q, . . . , pXn, Ynq Hyperparameter: # points per bin k P N (say 50), tie-breaking parameter δ ą 0 (say 10´10) Output: Top-label calibrated predictor pc, hq 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 for lÐ 1 to L do 4 Dl Ð tpXi,1 tYi “ luq : cpXiq “ lqu and nl Ð |Dl|; 5 hl Ð Binary-histogram-binningpg,Dl, tnl{ku , δq; 6 end 7 hp¨q Ð hcp¨qp¨q; 8 return pc, hq; Verifying top-label calibration is similar (Figure 3c), but in addition to the predictions gp¨q, we also retain the values of cp¨q. Thus the gp¨q and 1 tY “ cp¨qu are shown, but split across the 4 classes. Class-wise calibration requires access to all the predictions, however, each class is considered separately as indicated by Figure 3d. Canonical calibration looks at the full prediction vector in each case. However, in doing so, it becomes unlikely that gpxq “ gpyq for any x,y since the number of values that g can take is now exponential. Let us turn this around and suppose that D were a calibration set instead of a test set. We argue that D should be used in the same way, whether testing or calibrating. Thus, if confidence calibration is to be achieved, we should focus on the pg,1 tY “ cp¨quq corresponding to g. If top-label calibration is to be achieved, we should use the pc, gq values. If class-wise calibration is to be achieved, we should look at each gl separately and solve L different problems. Finally, for canonical calibration, we must look at the entire g vector as a single unit. This is the core philosophy behind M2B calibrators: if binary claims are being verified, solve binary calibration problems. B DISTRIBUTION-FREE TOP-LABEL CALIBRATION USING HISTOGRAM BINNING In this section, we formally describe histogram binning (HB) with the top-label-calibrator (Algorithm 2) and provide methodological insights through theory and experiments. B.1 FORMAL ALGORITHM AND THEORETICAL GUARANTEES Algorithm 8 describes the top-label calibrator formally using HB as the binary calibration algorithm. The function called in line 5 is Algorithm 2 of Gupta and Ramdas (2021). The first argument in the call is the top-label confidence predictor, the second argument is the dataset to be used, the third argument is the number of bins to be used, and the fourth argument is a tie-breaking parameter (described shortly). While previous empirical works on HB fixed the number of bins per class, the analysis of Gupta and Ramdas (2021) suggests that a more principled way of choosing the number of bins is to fix the number of points per bin. This is parameter k of Algorithm 8. Given k, the number of bins is decided separately for every class as tnl{ku where nl is the number of points predicted as class l. This choice is particularly relevant for top-label calibration since nl can be highly non-uniform (we illustrate this empirically in Section B.2). 
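A compact Python sketch of Algorithm 8 follows (ours; the released code may differ). The inner routine is standard uniform-mass binary histogram binning: split the sorted calibration scores into tnl{ku roughly equal-count bins and predict the empirical frequency of the positive label in each bin. The tie-breaking perturbation δ is omitted for readability, and labels are assumed to be 0-indexed.

import numpy as np

def binary_histogram_binning(scores, targets, n_bins):
    # Uniform-mass binary HB: returns a function mapping scores to per-bin frequencies.
    n_bins = max(1, n_bins)
    # bin edges chosen so each bin holds roughly the same number of calibration points
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))[1:-1]
    bin_ids = np.searchsorted(edges, scores, side="right")
    bin_means = np.array([targets[bin_ids == b].mean() if np.any(bin_ids == b)
                          else targets.mean() for b in range(n_bins)])
    def h(new_scores):
        return bin_means[np.searchsorted(edges, new_scores, side="right")]
    return h

def top_label_histogram_binning(probs, labels, k=100):
    # Algorithm 8 (sketch): one binary HB per predicted class, with floor(n_l / k) bins each.
    c = probs.argmax(axis=1)
    g = probs.max(axis=1)
    calibrators = {}
    for l in np.unique(c):
        mask = (c == l)
        n_l = int(mask.sum())
        calibrators[l] = binary_histogram_binning(
            g[mask], (labels[mask] == l).astype(float), n_bins=n_l // k)
    def predict(test_probs):
        c_t, g_t = test_probs.argmax(axis=1), test_probs.max(axis=1)
        h_t = np.array([calibrators[l]([s])[0] if l in calibrators else s
                        for l, s in zip(c_t, g_t)])
        return c_t, h_t
    return predict

Note how the number of bins n_l // k adapts to how often each class is predicted; this is exactly the behaviour examined on the class-imbalanced dataset in Section B.2.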
The tie-breaking parameter δ can be arbitrarily small (like 10´10), and its significance is mostly theoretical—it is used to ensure that outputs of different bins are not exactly identical by chance, so that conditioning on a calibrated probability output is equivalent to conditioning on a bin; this leads to a cleaner theoretical guarantee. HB recalibrates g to a piecewise constant function h that takes one value per bin. Consider a specific bin b; the h value for this bin is computed as the average of the indicators t1 tYi “ cpXiqu : Xi P Bin bu. This is an estimate of the bias of the bin P pY “ cpXq | X P Bin bq. A concentration inequality can then be used to bound the deviation between the estimate and the true bias to prove distribution-free calibration guarantees. In the forthcoming Theorem 1, we show high-probability and in-expectation bounds on the the TL-ECE of HB. Additionally, we show marginal and condi- tional top-label calibration bounds, defined next. These notions were proposed in the binary calibration setting by Gupta et al. (2020) and Gupta and Ramdas (2021). In the definition below, A refers to any algorithm that takes as input calibration data D and an initial classifier g to produce a top-label predictor c and an associated probability map h. Algorithm 8 is an example of A. Definition 1 (Marginal and conditional top-label calibration). Let ε, α P p0, 1q be some given levels of approximation and failure respectively. An algorithm A : pg,Dq ÞÑ pc, hq is (a) pε, αq-marginally top-label calibrated if for every distribution P over X ˆ rLs, P ´ |P pY “ cpXq | cpXq, hpXqq ´ hpXq| ď ε ¯ ě 1´ α. (8) (b) pε, αq-conditionally top-label calibrated if for every distribution P over X ˆ rLs, P ´ @ l P rLs, r P Rangephq, |P pY “ cpXq | cpXq “ l, hpXq “ rq ´ r| ď ε ¯ ě 1´ α. (9) To clarify, all probabilities are taken over the test point pX,Y q „ P , the calibration data D „ Pn, and any other inherent algorithmic randomness in A; these are all implicit in pc, hq “ ApD,gq. Marginal calibration asserts that with high probability, on average over the distribution of D, X , P pY “ cpXq | cpXq, hpXqq is at most ε away from hpXq. In comparison, TL-ECE is the average of these deviations over X . Marginal calibration may be a more appropriate metric for calibration than TL-ECE if we are somewhat agnostic to probabilistic errors less than some fixed threshold ε (like 0.05). Conditional calibration is a strictly stronger definition that requires the deviation to be at most ε for every possible prediction pl, rq, including rare ones, not just on average over predictions. This may be relevant in medical settings where we want the prediction on every patient to be reasonably calibrated. Algorithm 8 satisfies the following calibration guarantees. Theorem 1. Fix hyperparameters δ ą 0 (arbitrarily small) and points per bin k ě 2, and assume nl ě k for every l P rLs. Then, for any α P p0, 1q, Algorithm 8 is pε1, αq-marginally and pε2, αqconditionally top-label calibrated for ε1 “ d logp2{αq 2pk ´ 1q ` δ, and ε2 “ d logp2n{kαq 2pk ´ 1q ` δ. (10) Further, for any distribution P over X ˆ rLs, we have P pTL-ECEpc, hq ď ε2q ě 1 ´ α, and E rTL-ECEpc, hqs ď a 1{2k ` δ. The proof in Appendix H is a multiclass top-label adaption of the guarantee in the binary setting by Gupta and Ramdas (2021). The rOp1{ ? kq dependence of the bound relies on Algorithm 8 delegating at least k points to every bin. Since δ can be chosen to be arbitrarily small, setting k “ 50 gives roughly ED rTL-ECEphqs ď 0.1. 
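As a quick numerical illustration of these bounds (our own computation with representative values, not results from the paper), the snippet below evaluates (10) and the in-expectation bound:

import math

k, delta, alpha, n = 50, 1e-10, 0.1, 5000   # illustrative values
eps1 = math.sqrt(math.log(2 / alpha) / (2 * (k - 1))) + delta            # marginal bound in (10)
eps2 = math.sqrt(math.log(2 * n / (k * alpha)) / (2 * (k - 1))) + delta  # conditional bound in (10)
exp_bound = math.sqrt(1 / (2 * k)) + delta                               # bound on E[TL-ECE]
print(eps1, eps2, exp_bound)   # approximately 0.175, 0.279, 0.1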
Based on this, we suggest setting k P r50, 150s in practice. B.2 TOP-LABEL HISTOGRAM BINNING ADAPTS TO CLASS IMBALANCED DATASETS The principled methodology of fixing the number of points per bin reaps practical benefits. Figure 4 illustrates this through the performance of HB for the class imbalanced COVTYPE-7 dataset (Blackard and Dean, 1999), with class ratio approximately 36% for class 1 and 49% for class 2. The entire dataset has 581012 points, which is divided into train-test in the ratio 70:30. Then, 10% of the training points are held out for calibration (n “ |D| “ 40671). The base classifier is a random forest (RF) trained on the remaining training points (it achieves around 95% test accuracy). The RF is then recalibrated using HB. The top-label reliability diagrams in Figure 4a illustrate that the original RF (in orange) is underconfident on both the most likely and least likely classes. Additional figures in Appendix F show that the RF is always underconfident no matter which class is predicted as the top-label. HB (in green) recalibrates the RF effectively across all classes. Validity plots (Gupta and Ramdas, 2021) estimate how the LHS of condition (8), denoted as V pεq, varies with ε. We observe that for all ε, V pεq is higher for HB. The rightmost barplot compares the estimated TL-ECE for all classes, and also shows the class proportions. While the original RF is significantly miscalibrated for the less likely classes, HB has a more uniform miscalibration across classes.
[Figure 4: Recalibration of a random forest using histogram binning on the class imbalanced COVTYPE-7 dataset (class 2 is roughly 100 times likelier than class 4). (a) Top-label histogram binning (Algorithm 8) with k “ 100 points per bin. Class 4 has only 183 calibration points. Algorithm 8 adapts and uses only a single bin to ensure that the TL-ECE on class 4 is comparable to the TL-ECE on class 2. Overall, the random forest classifier has significantly higher TL-ECE for the least likely classes (4, 5, and 6), but the post-calibration TL-ECE using binning is quite uniform. (b) Histogram binning with B “ 50 bins for every class. Compared to Figure 4a, the post-calibration TL-ECE for the most likely classes decreases while the TL-ECE for the least likely classes increases. By ensuring a fixed number of calibration points per bin, Algorithm 8 obtains relatively uniform top-label calibration across classes (Figure 4a). In comparison, if a fixed number of bins are chosen for all classes, the performance deteriorates for the least likely classes (Figure 4b).]
Figure 4b considers a slightly different HB algorithm where the number of bins per class is not adapted to the number of times the class is predicted, but is fixed beforehand (this corresponds to replacing tnl{ku in line 5 of Algorithm 8 with a fixed B P N). While even in this setting there is a drop in the TL-ECE compared to the RF model, the final profile is less uniform compared to fixing the number of points per bin. The validity plots and top-label reliability diagrams for all the 7 classes are reported in Figure 9 in Appendix F, along with some additional observations. C DISTRIBUTION-FREE CLASS-WISE CALIBRATION USING HISTOGRAM BINNING In this section, we formally describe histogram binning (HB) with the class-wise-calibrator (Algorithm 3) and provide theoretical guarantees for it. The overall procedure is called class-wise-HB. Further details and background on HB are contained in Appendix B, where top-label-HB is described. C.1 FORMAL ALGORITHM To achieve class-wise calibration using binary routines, we learn each component function hl in a 1-v-all fashion as described in Algorithm 3. Algorithm 9 contains the pseudocode with the underlying routine as binary HB. To learn hl, we use a dataset Dl, which unlike top-label HB (Algorithm 8), contains Xi even if cpXiq ‰ l. However, the Yi is replaced with 1 tYi “ lu. The number of points per bin kl can be different for different classes, but generally one would set k1 “ . . . “ kL “ k P N. Larger values of kl will lead to smaller εl and δl in the guarantees, at a loss of sharpness since the number of bins tn{klu would be smaller. Algorithm 9: Class-wise histogram binning Input: Base multiclass predictor g : X Ñ ∆L´1, calibration data D “ pX1, Y1q, . . . , pXn, Ynq Hyperparameter: # points per bin k1, k2, . . . , kL P NL (say each kl “ 50), tie-breaking parameter δ ą 0 (say 10´10) Output: L class-wise calibrated predictors h1, h2, . . . , hL 1 for lÐ 1 to L do 2 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 3 hl Ð Binary-histogram-binningpgl,Dl, tn{klu , δq; 4 end 5 return ph1, h2, . . . , hLq; C.2 CALIBRATION GUARANTEES A general algorithm A for class-wise calibration takes as input calibration data D and an initial classifier g to produce an approximately class-wise calibrated predictor h : X Ñ r0, 1sL. Define the notation ε “ pε1, ε2, . . . , εLq P p0, 1qL and α “ pα1, α2, . . . , αLq P p0, 1qL. Definition 2 (Marginal and conditional class-wise calibration). Let ε,α P p0, 1qL be some given levels of approximation and failure respectively. An algorithm A : pg,Dq ÞÑ h is (a) pε,αq-marginally class-wise calibrated if for every distribution P over X ˆ rLs and for every l P rLs, P ´ |P pY “ l | hlpXqq ´ hlpXq| ď εl ¯ ě 1´ αl. (11) (b) pε,αq-conditionally class-wise calibrated if for every distribution P over X ˆ rLs and for every l P rLs, P ´ @r P Rangephlq, |P pY “ l | hlpXq “ rq ´ r| ď εl ¯ ě 1´ αl. (12) Definition 2 requires that each hl is pεl, αlq calibrated in the binary senses defined by Gupta et al. (2021, Definitions 1 and 2). From Definition 2, we can also derive uniform bounds that hold simultaneously over every l P rLs. Let α “ řL l“1 αl and ε “ maxlPrLs εl. 
Then (11) implies P ´ @l P rLs, |P pY “ l | hlpXqq ´ hlpXq| ď ε ¯ ě 1´ α, (13) and (12) implies P ´ @l P rLs, r P Rangephlq, |P pY “ l | hlpXq “ rq ´ r| ď ε ¯ ě 1´ α. (14) The choice of not including the uniformity over L in Definition 2 reveals the nature of our class-wise HB algorithm and the upcoming theoretical guarantees: (a) we learn the hl’s separately for each l and do not combine the learnt functions in any way (such as normalization), (b) we do not combine the calibration inequalities for different rLs in any other way other than a union bound. Thus the only way we can show (13) (or (14)) is by using a union bound over (11) (or (12)). We now state the distribution-free calibration guarantees satisfied by Algorithm 9. Theorem 2. Fix hyperparameters δ ą 0 (arbitrarily small) and points per bin k1, k2, . . . , kl ě 2, and assume nl ě kl for every l P rLs. Then, for every l P rLs, for any αl P p0, 1q, Algorithm 9 is pεp1q,αq-marginally and pεp2q,αq-conditionally class-wise calibrated with ε p1q l “ d logp2{αlq 2pkl ´ 1q ` δ, and εp2ql “ d logp2n{klαlq 2pkl ´ 1q ` δ. (15) Further, for any distribution P over X ˆ rLs, (a) P pCW-ECEpc, hq ď maxlPrLs ε p2q l q ě 1´ ř lPrLs αl, and (b) E rCW-ECEpc, hqs ď maxlPrLs a 1{2kl ` δ. Theorem 2 is proved in Appendix H. The proof follows by using the result of Gupta and Ramdas (2021, Theorem 2), derived in the binary calibration setting, for each hl separately. Gupta and Ramdas (2021) proved a more general result for general `p-ECE bounds. Similar results can also be derived for the suitably defined `p-CW-ECE. As discussed in Section 3.2, unlike previous works (Zadrozny and Elkan, 2002; Guo et al., 2017; Kull et al., 2019), Algorithm 9 does not normalize the hl’s. We do not know how to derive Theorem 2 style results for a normalized version of Algorithm 9. D FIGURES FOR APPENDIX E Appendix E begins on page 23. The relevant figures for Appendix E are displayed on the following pages. E ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS FOR CIFAR-10 AND CIFAR-100 We present additional details and results to supplement the experiments with CIFAR-10 and CIFAR100 in Sections 2 and 4 of the main paper. E.1 EXTERNAL LIBRARIES USED All our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ and used along with the code at https://github.com/torrvision/focal calibration to obtain base predictions. We focused on the models trained with Brier score and focal loss, since it was found to perform the best for calibration. All reports in the main paper are with the Brier score; in Appendix E.4, we report corresponding results with focal loss. We also used the code at https://github.com/torrvision/focal calibration for temperature scaling (TS). For vector scaling (VS) and Dirichlet scaling (DS), we used the code of Kull et al. (2019), hosted at https://github.com/dirichletcal/dirichlet python. For VS, we used the file dirichletcal/calib/vectorscaling.py, and for DS, we used the file dirichletcal/calib/fulldirichlet.py. No hyperparameter tuning was performed in any of our histogram binning experiments or baseline experiments; default settings were used in every case. The random seed was fixed so that every run of the experiment gives the same result. In particular, by relying on pre-trained models, we avoid training new deep-net models with multiple hyperparameters, thus avoiding any selection biases that may arise due to test-data peeking across multiple settings. 
E.2 FURTHER COMMENTS ON BINNING FOR ECE ESTIMATION As mentioned in Remark 1, ECE estimates for all methods except TL-HB and CW-HB was done using fixed-width bins r0, 1{Bq, r1{B, 2{Bq, . . . r1´ 1{B, 1s for various values of B P r5, 25s. For TL-HB and CW-HB, B is the number of bins used for each call to binary HB. For TL-HB, note that we actually proposed that the number of bins-per-class should be fixed; see Section B.2. However, for ease of comparison to other methods, we simply set the number of bins to B for each call to binary HB. That is, in line 5, we replace tnl{ku with B. For CW-HB, we described Algorithm 9 with different values of kl corresponding to the number of bins per class. For the CIFAR-10 and CIFAR-100 comparisons, we set each k1 “ k2 “ . . . “ kL “ k, where k P N satisfies tn{ku “ B. Tables 2,3, 4, and 5 report estimates with B “ 15, which has been commonly used in many works (Guo et al., 2017; Kull et al., 2019; Mukhoti et al., 2020). Corresponding to each table, we have a figure where ECE estimates with varying B are reported to strengthen conclusions: these are Figure 5,7, 6, and 8 respectively. Plugin estimates of the ECE were used, same as Guo et al. (2017). Further binning was not done for TL-HB and CW-HB since the output is already discrete and sufficiently many points take each of the predicted values. Note that due to Jensen’s inequality, any further binning will only decrease the ECE estimate (Kumar et al., 2019). Thus, using unbinned estimates may give TL-HB and CW-HB a disadvantage. E.3 SOME REMARKS ON MAXIMUM-CALIBRATION-ERROR (MCE) Guo et al. (2017) defined MCE with respect to confidence calibration, as follows: conf-MCEpc, hq :“ sup rPRangephq |P pY “ cpXq | hpXq “ rq ´ r| . (16) Conf-MCE suffers from the same issue illustrated in Figure 2 for conf-ECE. In Figure 1b, we looked at the reliability diagram within two bins. These indicate two of the values over which the supremum is taken in equation (16): these are the Y-axis distances between the ‹ markers and the X “ Y line for bins 6 and 10 (both are less than 0.02). On the other hand, the effective maximum miscalibration for bin 6 is roughly 0.15 (for class 1), and roughly 0.045 (for class 4), and the maximum should be taken with respect to these values across all bins. To remedy the underestimation of the effective MCE, we can consider the top-label-MCE, defined as TL-MCEpc, hq :“ max lPrLs sup rPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r| . (17) Interpreted in words, the TL-MCE assesses the maximum deviation between the predicted and true probabilities across all predictions and all classes. Following the same argument as in the proof of Proposition 4, it can be shown that for any c, h, conf-MCEpc, hq ď TL-MCEpc, hq. The TL-MCE is closely related to conditional top-label calibration (Definition 1b). Clearly, an algorithm is pε, αqconditionally top-label calibrated if and only if for every distribution P , P pTL-MCEpc, hq ď εq ě 1´ α. Thus the conditional top-label calibration guarantee of Theorem 1 implies a high probability bound on the TL-MCE as well. E.4 TABLE 2 AND 3 STYLE RESULTS WITH FOCAL LOSS Results for top-label-ECE and top-label-MCE with the base deep net model being trained using focal loss are reported in Table 4. Corresponding results for class-wise-ECE are reported in Table 5. The observations are similar to the ones reported for Brier score: 1. For TL-ECE, TL-HB is either the best or close to the best performing method on CIFAR10, but suffers on CIFAR-100. 
This phenomenon is discussed further in Appendix E.6. N-HB is the best or close to the best for both CIFAR-10 and CIFAR-100. 2. For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform better than TL-HB, but not by a huge margin. 3. For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures. E.5 ECE AND MCE ESTIMATES WITH VARYING NUMBER OF BINS Corresponding to each entry in Tables 2 and 4, we perform an ablation study with the number of bins varying as B P r5, 25s. This is in keeping with the findings of Roelofs et al. (2020) that the ECE/MCE estimate can vary with different numbers of bins, along with the relative performance of the various models. The results are reported in Figure 5 (ablation of Table 2) and Figure 7 (ablation of Table 3). The captions of these figures contain further details on the findings. Most findings are similar to those in the main paper, but the findings in the tables are strengthened through this ablation. The same ablations are performed for focal loss as well. The results are reported in Figure 6 (ablation of Table 4) and Figure 8 (ablation of Table 5). The captions of these figures contain further details on the findings. The ablation results in the figures support those in the tables. E.6 ANALYZING THE POOR PERFORMANCE OF TL-HB ON CIFAR-100 CIFAR-100 is an imbalanced dataset with 100 classes and 5000 points for validation/calibration (as per the default splits). Due to random subsampling, the validation split we used had one of the classes predicted as the top-label only 31 times. Thus, based on Theorem 1, we do not expect HB to have small TL-ECE. This is confirmed by the empirical results presented in Tables 2/4, and Figures 5b/6b. We observe that HB has higher estimated TL-ECE than all methods except DS, for most values of the number of bins. The performance of TL-HB for TL-MCE, however, is much closer to the other methods since HB uses the same number of points per bin, ensuring that the predictions are somewhat equally calibrated across bins (Figures 5d/6d). In comparison, for CW-ECE, CW-HB is the best performing method. This is because in the class-wise setting, 5000 points are available for recalibration irrespective of the class, which is sufficient for HB. The deterioration in performance of HB when few calibration points are available was also observed in the binary setting by Gupta and Ramdas (2021, Appendix C). Niculescu-Mizil and Caruana (2005) noted in the conclusion of their paper that Platt scaling (Platt, 1999), which is closely related to TS, performs well when the data is small, but another nonparametric binning method, isotonic regression (Zadroz
1. What is the focus of the paper regarding multiclass classification? 2. What are the different types of calibrations previously studied, and how do they compare to the novel notion proposed in this paper? 3. How does the proposed calibration provide a more intuitive interpretation, and what are the advantages of using this approach? 4. Can you explain the concept of matching calibration algorithms and metrics, and how does it benefit the analysis? 5. What are some potential limitations of only considering histogram binning as a post-hoc calibration method?
Summary Of The Paper Review
Summary Of The Paper The authors consider the problem of calibrated probabilistic outputs for multiclass classifiers. They consider the various types of calibration previously studied, such as confidence calibration and class-wise calibration, and then propose a novel notion of calibration based on the choice of top-labels. Review This paper is very well written, tackles an interesting area, proposes a novel calibration, and demonstrates the utility in empirical experiments. Strengths: The proposed calibration results in a more intuitive interpretation than previous approaches and the authors argue so convincingly. Matching calibration algorithms and metrics places everything into a nice framework, and the experiments demonstrate clearly the value of matching the algorithm and metric. Histogram binning, within the appropriate algorithm, is shown to be a competitive post-hoc calibration technique. Weaknesses: Only histogram binning is considered, though there are other post-hoc calibration methods that clearly fit in the algorithms described. It would have been good to see some results, or at least a discussion.
ICLR
Title Top-label calibration and multiclass-to-binary reductions Abstract We propose a new notion of multiclass calibration called top-label calibration. A classifier is said to be top-label calibrated if the reported probability for the predicted class label—the top-label—is calibrated, conditioned on the top-label. This conditioning is essential for practical utility of the calibration property, since the top-label is always reported and we must condition on what is reported. However, the popular notion of confidence calibration erroneously skips this conditioning. Furthermore, we outline a multiclass-to-binary (M2B) reduction framework that unifies confidence, top-label, and class-wise calibration, among others. As its name suggests, M2B works by reducing multiclass calibration to different binary calibration problems; various types of multiclass calibration can then be achieved using simple binary calibration routines. We instantiate the M2B framework with the well-studied histogram binning (HB) binary calibrator, and prove that the overall procedure is multiclass calibrated without making any assumptions on the underlying data distribution. In an empirical evaluation with four deep net architectures on CIFAR-10 and CIFAR-100, we find that the M2B + HB procedure achieves lower top-label and class-wise calibration error than other approaches such as temperature scaling. Code for this work is available at https://github.com/aigen/df-posthoc-calibration. 1 INTRODUCTION Machine learning models often make probabilistic predictions. The ideal prediction is the true conditional distribution of the output given the input. However, nature never reveals true probability distributions, making it infeasible to achieve this ideal in most situations. Instead, there is significant interest towards designing models that are calibrated, which is often feasible. We motivate the definition of calibration using a standard example of predicting the probability of rain. Suppose a meteorologist claims that the probability of rain on a particular day is 0.7. Regardless of whether it rains on that day or not, we cannot know if 0.7 was the underlying probability of rain. However, we can test if the meteorologist is calibrated in the long run, by checking if on the D days when 0.7 was predicted, it indeed rained on around 0.7D days (and the same is true for other probabilities). This example is readily converted to a formal binary calibration setting. Denote a random (feature, label)-pair as pX,Y q P X ˆt0, 1u, where X is the feature space. A probabilistic predictor h : X Ñ r0, 1s is said to be calibrated if for every prediction q P r0, 1s, PrpY “ 1 | hpXq “ qq “ q (almost surely). Arguably, if an ML classification model produces such calibrated scores for the classes, downstream users of the model can reliably use its predictions for a broader set of tasks. Our focus in this paper is calibration for multiclass classification, with L ě 3 classes and Y P rLs :“ t1, 2, . . . , L ě 3u. We assume all (training and test) data is drawn i.i.d. from a fixed distribution P , and denote a general point from this distribution as pX,Y q „ P . Consider a typical multiclass predictor, h : X Ñ ∆L´1, whose range ∆L´1 is the probability simplex in RL. A natural notion of calibration for h, called canonical calibration is the following: for every l P rLs, P pY “ l | hpXq “ qq “ ql (ql denotes the l-th component of q). 
However, canonical calibration becomes infeasible to achieve or verify once L is even 4 or 5 (Vaicenavicius et al., 2019). Thus, there is interest in studying statistically feasible relaxations of canonical notion, such as confidence calibration (Guo et al., 2017) and class-wise calibration (Kull et al., 2017). In particular, the notion of confidence calibration (Guo et al., 2017) has been popular recently. A model is confidence calibrated if the following is true: “when the reported confidence for the predicted class is q P r0, 1s, the accuracy is also q”. In any practical setting, the confidence q is never reported alone; it is always reported along with the actual class prediction l P rLs. One may expect that if a model is confidence calibrated, the following also holds: “when the class l is predicted with confidence q, the probability of the actual class being l is also q”? Unfortunately, this expectation is rarely met—there exist confidence calibrated classifier for whom the latter statement is grossly violated for all classes (Example 1). On the other hand, our proposed notion of top-label calibration enforces the latter statement. It is philosophically more coherent, because it requires conditioning on all relevant reported quantities (both the predicted top label and our confidence in it). In Section 2, we argue further that top-label calibration is a simple and practically meaningful replacement of confidence calibration. In Section 3, we unify top-label, confidence, and a number of other popular notions of multiclass calibration into the framework of multiclass-to-binary (M2B) reductions. The M2B framework relies on the simple observation that each of these notions internally verifies binary calibration claims. As a consequence, each M2B notion of calibration can be achieved by solving a number of binary calibration problems. With the M2B framework at our disposal, all of the rich literature on binary calibration can now be used for multiclass calibration. We illustrate this by instantiating the M2B framework with the binary calibration algorithm of histogram binning or HB (Zadrozny and Elkan, 2001; Gupta and Ramdas, 2021). The M2B + HB procedure achieves state-of-the-art results with respect to standard notions of calibration error (Section 4). Further, we show that our procedure is provably calibrated for arbitrary data-generating distributions. The formal theorems are delayed to Appendices B, C (due to space limitations), but an informal result is presented in Section 4. 2 MODIFYING CONFIDENCE CALIBRATION TO TOP-LABEL CALIBRATION Let c : X Ñ rLs denote a classifier or top-label predictor and h : X Ñ r0, 1s a function that provides a confidence or probability score for the top-label cpXq. The predictor pc, hq is said to be confidence calibrated (for the data-generating distribution P ) if P pY “ cpXq | hpXqq “ hpXq. (1) In other words, when the reported confidence hpXq equals p P r0, 1s, then the fraction of instances where the predicted label is correct also approximately equals p. Note that for an L-dimensional predictor h : X Ñ ∆L´1, one would use cp¨q “ arg maxlPrLs hlp¨q and hp¨q “ hcp¨qp¨q; ties are broken arbitrarily. Then h is confidence calibrated if the corresponding pc, hq satisfies (1). Confidence calibration is most applicable in high-accuracy settings where we trust the label prediction cpxq. 
For instance, if a high-accuracy cancer-grade-prediction model predicts a patient as having “95% grade III, 3% grade II, and 2% grade I”, we would suggest that the patient undergo an invasive treatment. However, we may want to know (and control) the number of non-grade-III patients that were given this suggestion incorrectly. In other words, is Prpcancer is not grade III | cancer is predicted to be of grade III with confidence 95%q equal to 5%? It would appear that by focusing on the probability of the predicted label, confidence calibration enforces such control. However, as we illustrate next, confidence calibration fails at this goal by providing a guarantee that is neither practically interpretable, nor actionable. Translating the probabilistic statement (1) into words, we ascertain that confidence calibration leads to guarantees of the form: “if the confidence hpXq in the top-label is 0.6, then the accuracy (frequency with which Y equals cpXq) is 0.6”. Such a guarantee is not very useful. Suppose a patient P is informed (based on their symptoms X) that they are most likely to have a certain disease D with probability 0.6. Further, patient P is told that this score is confidence calibrated. P can now infer the following: “among all patients who have probability 0.6 of having some unspecified disease, the fraction who have that unspecified disease is also 0.6.” However, P is concerned only about disease D, and not about other diseases. That is, P wants to know the probability of having D among patients who were predicted to have disease D with confidence 0.6, not among patients who were predicted to have some disease with confidence 0.6. In other words, P cares about the occurrence of D among patients who were told the same thing that P has been told. It is tempting to wish that the confidence calibrated probability 0.6 has some bearing on what P cares about. However, this faith is misguided, as the above reasoning suggests, and as is further illustrated through the following example. Example 1. Suppose the instance space is pX,Y q P ta, bu ˆ t1, 2, . . .u. (X can be seen as the random patient, and Y as the disease they are suffering from.) Consider a predictor pc, hq and let the values taken by pX,Y, c, hq be as follows:
Feature x    P pX “ xq    Class prediction cpxq    Confidence hpxq    P pY “ cpXq | X “ xq
a            0.5          1                        0.6                0.2
b            0.5          2                        0.6                1.0
The table specifies only the probabilities P pY “ cpXq | X “ xq; the probabilities P pY “ l | X “ xq, l ‰ cpxq, can be set arbitrarily. We verify that pc, hq is confidence calibrated: P pY “ cpXq | hpXq “ 0.6q “ 0.5pP pY “ 1 | X “ aq ` P pY “ 2 | X “ bqq “ 0.5p0.2` 1q “ 0.6. However, whether the actual instance is X “ a or X “ b, the probabilistic claim of 0.6 bears no correspondence with reality. If X “ a, hpXq “ 0.6 is extremely overconfident since P pY “ 1 | X “ aq “ 0.2. Contrarily, if X “ b, hpXq “ 0.6 is extremely underconfident.
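The arithmetic in Example 1 can also be checked mechanically; the short computation below (ours) confirms that the predictor is perfectly confidence calibrated, yet the reported 0.6 is off by 0.4 on every instance once the predicted label is conditioned on as well.

# Example 1: two features a, b with P(X=a) = P(X=b) = 0.5, both predicted with
# confidence 0.6; P(Y = c(X) | X = a) = 0.2 and P(Y = c(X) | X = b) = 1.0.
p_x = {"a": 0.5, "b": 0.5}
conf = {"a": 0.6, "b": 0.6}
p_correct = {"a": 0.2, "b": 1.0}

# Confidence calibration conditions only on h(X) = 0.6, which pools a and b together:
p_correct_given_conf = sum(p_x[x] * p_correct[x] for x in p_x)    # = 0.6
conf_ece = abs(p_correct_given_conf - 0.6)                         # = 0.0

# Top-label calibration also conditions on c(X), which separates x = a from x = b:
tl_ece = sum(p_x[x] * abs(p_correct[x] - conf[x]) for x in p_x)    # = 0.5*0.4 + 0.5*0.4 = 0.4
print(conf_ece, tl_ece)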
Going back to the patient-disease example, top-label calibration would tell patient P the following: “among all patients, who (just like you) are predicted to have disease D with probability 0.6, the fraction who actually have disease D is also 0.6.” Philosophically, it makes sense to condition on what is reported—both the top label and its confidence—because that is what is known to the recipient of the information; and there is no apparent justification for not conditioning on both. A commonly used metric for quantifying the miscalibration of a model is the expected-calibrationerror (ECE) metric. The ECE associated with confidence calibration is defined as conf-ECEpc, hq :“ EX |P pY “ cpXq | hpXqq ´ hpXq| . (3) We define top-label-ECE (TL-ECE) in an analogous fashion, but also condition on cpXq: TL-ECEpc, hq :“ EX |P pY “ cpXq | cpXq, hpXqq ´ hpXq| . (4) Higher values of ECE indicate worse calibration performance. The predictor in Example 1 has conf-ECEpc, hq “ 0. However, it has TL-ECEpc, hq “ 0.4, revealing its miscalibration. More generally, it can be deduced as a straightforward consequence of Jensen’s inequality that conf-ECEpc, hq is always smaller than the TL-ECEpc, hq (see Proposition 4 in Appendix H). As illustrated by Example 1, the difference can be significant. In the following subsection we illustrate that the difference can be significant on a real dataset as well. First, we make a couple of remarks. Remark 1 (ECE estimation using binning). Estimating the ECE requires estimating probabilities conditional on some prediction such as hpxq. A common strategy to do this is to bin together nearby values of hpxq using binning schemes (Nixon et al., 2020, Section 2.1), and compute a single estimate for the predicted and true probabilities using all the points in a bin. The calibration method we espouse in this work, histogram binning (HB), produces discrete predictions whose ECE can be estimated without further binning. Based on this, we use the following experimental protocol: we report unbinned ECE estimates while assessing HB, and binned ECE estimates for all other compared methods, which are continuous output methods (deep-nets, temperature scaling, etc). It is commonly understood that binning leads to underestimation of the effective ECE (Vaicenavicius et al., 2019; Kumar et al., 2019). Thus, using unbinned ECE estimates for HB gives HB a disadvantage compared to the binned ECE estimates we use for other methods. (This further strengthens our positive results for HB.) The binning scheme we use is equal-width binning, where the interval r0, 1s is divided into B equal-width intervals. Equal-width binning typically leads to lower ECE estimates compared to adaptive-width binning (Nixon et al., 2020). Remark 2 (Terminology). The term conf-ECE was introduced by Kull et al. (2019). Most works refer to conf-ECE as just ECE (Guo et al., 2017; Nixon et al., 2020; Mukhoti et al., 2020; Kumar et al., 2018). However, some papers refer to conf-ECE as top-label-ECE (Kumar et al., 2019; Zhang et al., 2020), resulting in two different terms for the same concept. We call the older notion as conf-ECE, and our definition of top-label calibration/ECE (4) is different from previous ones. (a) Confidence reliability diagram (points marked ‹) and top-label reliability diagram (points marked `) for a ResNet-50 model on the CIFAR-10 dataset; see further details in points (a) and (b) below. The gray bars denote the fraction of predictions in each bin. 
The confidence reliability diagram (mistakenly) suggests better calibration than the top-label reliability diagram. (b) Class-wise and zoomed-in version of Figure 1a for bin 6 (top) and bin 10 (bottom); see further details in point (c) below. The ‹ markers are in the same position as Figure 1a, and denote the average predicted and true probabilities. The colored points denote the predicted and true probabilities when seen class-wise. The histograms on the right show the number of test points per class within bins 6 and 10. Figure 1: Confidence reliability diagrams misrepresent the effective miscalibration. 2.1 AN ILLUSTRATIVE EXPERIMENT WITH RESNET-50 ON CIFAR-10 We now compare confidence and top-label calibration using ECE estimates and reliability diagrams (Niculescu-Mizil and Caruana, 2005). This experiment can be seen as a less malignant version of Example 1. Here, confidence calibration is not completely meaningless, but can nevertheless be misleading. Figure 1 illustrates the (test-time) calibration performance of a ResNet-50 model (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009). In the following summarizing points, the pc, hq correspond to the ResNet-50 model. (a) The ‹ markers in Figure 1a form the confidence reliability diagram (Guo et al., 2017), con- structed as follows. First, the hpxq values on the test set are binned into one of B “ 10 bins, r0, 0.1q, r0.1, 0.2q, . . . , r0.9, 1s, depending on the interval to which hpxq belongs. The gray bars in Figure 1a indicate the fraction of hpxq values in each bin—nearly 92% points belong to bin r0.9, 1s and no points belong to bin r0, 0.1q. Next, for every bin b, we plot ‹ “ pconfb, accbq, which are the plugin estimates of E rhpXq | hpXq P Bin bs and P pY “ cpXq | hpXq P Bin bq respectively. The dashed X “ Y line indicates perfect confidence calibration. (b) The ` markers in Figure 1a form the top-label reliability diagram. Unlike the confidence reliability diagram, the top-label reliability diagram shows the average miscalibration across classes in a given bin. For a given class l and bin b, define ∆b,l :“ | pP pY “ cpXq | cpXq “ l, hpXq P Bin bq ´ pE rhpXq | cpXq “ l, hpXq P Bin bs |, where pP , pE denote empirical estimates based on the test data. The overall miscalibration is then ∆b :“ Weighted-averagep∆b,lq “ ř lPrLs pP pcpXq “ l | hpXq P Bin bq ∆b,l. Note that ∆b is always non-negative and does not indicate whether the overall miscalibration occurs due to under- or over-confidence; also, if the absolute-values were dropped from ∆b,l, then ∆b would simply equal accb´ confb. In order to plot ∆b in a reliability diagram, we obtain the direction for the corresponding point from the confidence reliability diagram. Thus for every ‹ “ pconfb, accbq, we plot` “ pconfb, confb`∆bq if accb ą confb and` “ pconfb, confb´∆bq otherwise, for every b. This scatter plot of the `’s gives us the top-label reliability diagram. Figure 1a shows that there is a visible increase in miscalibration when going from confidence calibration to top-label calibration. To understand why this change occurs, Figure 1b zooms into the sixth bin (hpXq P r0.5, 0.6q) and bin 10 (hpXq P r0.9, 1.0s), as described next. (c) Figure 1b displays the class-wise top-label reliability diagrams for bins 6 and 10. 
Note that for bin 6, the ‹ marker is nearly on the X “ Y line, indicating that the overall accuracy matches the average confidence in the bin, even though the class-wise view (the colored points) shows that individual classes within the bin are miscalibrated.
[Figure 2: estimated conf-ECE and TL-ECE for every number of bins B P r5, 25s, for the base model, temperature scaling, and histogram binning, across four architectures (ResNet-50, ResNet-110, Wide-ResNet-26-10, DenseNet-121); the y-axis is the estimated ECE and the x-axis is the number of bins.]
Figure 2 displays the aggregate effect of the above phenomenon (across bins and classes) through estimates of the conf-ECE and TL-ECE. The precise experimental setup is described in Section 4. These plots display the ECE estimates of the base model, as well as the base model when recalibrated using temperature scaling (Guo et al., 2017) and our upcoming formulation of top-label histogram binning (Section 3). Since ECE estimates depend on the number of bins B used (see Roelofs et al. (2020) for empirical work around this), we plot the ECE estimate for every value B P r5, 25s in order to obtain clear and unambiguous results. We find that the TL-ECE is significantly higher than the conf-ECE for most values of B, the architectures, and the pre- and post-recalibration models. This figure also previews the performance of our forthcoming top-label histogram binning algorithm. Top-label HB has smaller estimated TL-ECE than temperature scaling for most values of B and the architectures. Except for ResNet-50, the conf-ECE estimates are also better. To summarize, top-label calibration captures the intuition of confidence calibration by focusing on the predicted class. However, top-label calibration also conditions on the predicted class, which is always part of the prediction in any practical setting. Further, TL-ECE estimates can be substantially different from conf-ECE estimates. Thus, while it is common to compare predictors based on the conf-ECE, the TL-ECE comparison is more meaningful, and can potentially be different. 3 CALIBRATION ALGORITHMS FROM CALIBRATION METRICS In this section, we unify a number of notions of multiclass calibration as multiclass-to-binary (or M2B) notions, and propose a general-purpose calibration algorithm that achieves the corresponding M2B notion of calibration. The M2B framework yields multiple novel post-hoc calibration algorithms, each of which is tuned to a specific M2B notion of calibration. 3.1 MULTICLASS-TO-BINARY (M2B) NOTIONS OF CALIBRATION In Section 2, we defined confidence calibration (1) and top-label calibration (2). These notions verify calibration claims for the highest predicted probability. Other popular notions of calibration verify calibration claims for other entries in the full L-dimensional prediction vector. A predictor h “ ph1, h2, . . . , hLq is said to be class-wise calibrated (Kull et al., 2017) if (class-wise calibration) @l P rLs, P pY “ l | hlpXqq “ hlpXq. (5) Another recently proposed notion is top-K confidence calibration (Gupta et al., 2021). For some l P rLs, let cplq : X Ñ rLs denote the l-th highest class prediction, and let hplq : X Ñ r0, 1s denote the confidence associated with it (c “ cp1q and h “ hp1q are special cases). 
For a given K ď L, (top-K-confidence calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXqq “ hpkqpXq. (6) As we did in Section 2 for confidenceÑtop-label, top-K-confidence calibration can be modified to the more interpretable top-K-label calibration by further conditioning on the predicted labels: (top-K-label calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXq, cpkqpXqq “ hpkqpXq. (7) Each of these notions reduce multiclass calibration to one or more binary calibration requirements, where each binary calibration requirement corresponds to verifying if the distribution of Y , conditioned on some prediction predpXq, satisfies a single binary calibration claim associated with predpXq. Table 1 illustrates how the calibration notions discussed so far internally verify a number of binary calibration claims, making them M2B notions. For example, for class-wise calibration, for every l P rLs, the conditioning is on predpXq “ hlpXq, and a single binary calibration statement is verified: P pY “ l | predpXqq “ hlpXq. Based on this property, we call each of these notions multiclass-to-binary or M2B notions. The notion of canonical calibration mentioned in the introduction is not an M2B notion. Canonical calibration is discussed in detail in Appendix G. Due to the conditioning on a multi-dimensional prediction, non-M2B notions of calibration are harder to achieve or verify. For the same reason, it is possibly easier for humans to interpret binary calibration claims when taking decisions/actions. 3.2 ACHIEVING M2B NOTIONS OF CALIBRATION USING M2B CALIBRATORS The M2B framework illustrates how multiclass calibration can typically be viewed via a reduction to binary calibration. The immediate consequence of this reduction is that one can now solve multiclass calibration problems by leveraging the well-developed methodology for binary calibration. The upcoming M2B calibrators belong to the standard recalibration or post-hoc calibration setting. In this setting, one starts with a fixed pre-learnt base model g : X Ñ ∆L´1. The base model g can correspond to a deep-net, a random forest, or any 1-v-all (one-versus-all) binary classification model such as logistic regression. The base model is typically optimized for classification accuracy and may not be calibrated. The goal of post-hoc calibration is to use some given calibration data D “ pX1, Y1q, pX2, Y2q, . . . , pXn, Ynq P pX ˆ rLsqn, typically data on which g was not learnt, to recalibrate g. In practice, the calibration data is usually the same as the validation data. To motivate M2B calibrators, suppose we want to verify if g is calibrated on a certain test set, based on a given M2B notion of calibration. Then, the verifying process will split the test data into a number of sub-datasets, each of which will verify one of the binary calibration claims. In Appendix A.2, we argue that the calibration data can also be viewed as a test set, and every step in the verification process can be used to provide a signal for improving calibration. M2B calibrators take the form of wrapper methods that work on top of a given binary calibrator. Denote an arbitrary black-box binary calibrator as At0,1u : r0, 1sXˆpXˆt0, 1uq‹ Ñ r0, 1sX , where the first argument is a mapping X Ñ r0, 1s that denotes a (miscalibrated) binary predicor, and the second argument is a calibration data sequence of arbitrary length. The output is a (better calibrated) binary predictor. 
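For concreteness, here is a rough sketch of one such binary calibrator A_{0,1}, previewing the histogram binning method studied in Section 4: it maps a (miscalibrated) binary score function g and binary calibration data to a recalibrated score function. This is our own illustration of the interface described above, not the paper's implementation, and the names (binary_histogram_binning, num_bins) are ours.

```python
# Sketch (ours) of a binary calibrator A_{0,1}: uniform-mass (equal-frequency)
# histogram binning. Input: score function g : X -> [0,1] and binary calibration
# data; output: a recalibrated score function h : X -> [0,1].
import numpy as np

def binary_histogram_binning(g, calib_x, calib_y, num_bins):
    scores = np.asarray([g(x) for x in calib_x], dtype=float)
    labels = np.asarray(calib_y, dtype=float)            # entries in {0, 1}
    # Bin edges chosen as empirical quantiles of the scores, so that each bin
    # receives roughly the same number of calibration points.
    edges = np.quantile(scores, np.linspace(0.0, 1.0, num_bins + 1))
    edges[0], edges[-1] = 0.0, 1.0
    which_bin = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, num_bins - 1)
    # Recalibrated output of a bin = empirical frequency of Y = 1 within that bin.
    bin_means = np.array([labels[which_bin == b].mean() if np.any(which_bin == b) else 0.5
                          for b in range(num_bins)])
    def h(x):
        b = int(np.clip(np.searchsorted(edges, g(x), side="right") - 1, 0, num_bins - 1))
        return float(bin_means[b])
    return h
```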
Examples of At0,1u are histogram binning (Zadrozny and Elkan, 2001), isotonic regression (Zadrozny and Elkan, 2002), and Platt scaling (Platt, 1999). In the upcoming descriptions, we use the indicator function 1 ta “ bu P t0, 1u which takes the value 1 if a “ b, and 0 if a ‰ b. The general formulation of our M2B calibrator is delayed to Appendix A since the description is a bit involved. To ease readability and adhere to the space restrictions, in the main paper we describe the calibrators corresponding to top-label, class-wise, and confidence calibration (Algorithms 1–3). Each of these calibrators are different from the classical M2B calibrator (Algorithm 4) that has been used by Zadrozny and Elkan (2002), Guo et al. (2017), Kull et al. (2019), and most other papers M2B calibrators: Post-hoc multiclass calibration using binary calibrators Input in each case: Binary calibrator At0,1u : r0, 1sX ˆ pX ˆ t0, 1uq‹ Ñ r0, 1sX , base multiclass predictor g : X Ñ ∆L´1, calibration data D “ pX1, Y1q, . . . , pXn, Ynq. Algorithm 1: Confidence calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 D1 Ð tpXi,1 tYi “ cpXiquq : i P rnsu; 4 hÐ At0,1upg,D1q; 5 return pc, hq; Algorithm 2: Top-label calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 for lÐ 1 to L do 4 Dl Ð tpXi,1 tYi “ luq : cpXiq “ lqu; 5 hl Ð At0,1upg,Dlq; 6 end 7 hp¨q Ð hcp¨qp¨q (predict hlpxq if cpxq “ l); 8 return pc, hq; Algorithm 3: Class-wise calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 hl Ð At0,1upgl,Dlq; 5 end 6 return ph1, h2, . . . , hLq; Algorithm 4: Normalized calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 rhl Ð At0,1upgl,Dlq; 5 end 6 Normalize: for every l P rLs, hlp¨q :“ rhlp¨q{ řL k“1 rhkp¨q; 7 return ph1, h2, . . . , hLq; we are aware of, with the most similar one being Algorithm 3. Top-K-label and top-K-confidence calibrators are also explicitly described in Appendix A (Algorithms 6 and 7). Top-label calibration requires that for every class l P rLs, P pY “ l | cpXq “ l, hpXqq “ hpXq. Thus, to achieve top-label calibration, we must solve L calibration problems. Algorithm 2 constructs L datasets tDl : l P rLsu (line 4). The features in Dl are the Xi’s for which cpXiq “ l, and the labels are 1 tYi “ lu. Now for every l P rLs, we calibrate g to hl : X Ñ r0, 1s using Dl and any binary calibrator. The final probabilistic predictor is hp¨q “ hcp¨qp¨q (that is, it predicts hlpxq if cpxq “ l). The top-label predictor c does not change in this process. Thus the accuracy of pc, hq is the same as the accuracy of g irrespective of which At0,1u is used. Unlike the top-label calibrator, the confidence calibrator merges all classes together into a single dataset D1 “ Ť lPrLsDl. To achieve class-wise calibration, Algorithm 3 also solves L calibration problems, but these correspond to satisfying P pY “ l | hlpXqq “ hlpXq. Unlike top-label calibration, the dataset Dl for class-wise calibration contains all the Xi’s (even if cpXiq ‰ l), and hl is passed to At0,1u instead of h. Also, unlike confidence calibration, Yi is replaced with 1 tYi “ lu instead of 1 tYi “ cpXiqu. The overall process is similar to reducing multiclass classification to L 1-v-all binary classification problem, but our motivation is intricately tied to the notion of class-wise calibration. 
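To illustrate how Algorithms 2 and 3 wrap an arbitrary binary calibrator, here is a minimal sketch; it assumes base_probs(x) returns the L-dimensional vector g(x), and binary_calibrate(g, xs, ys) is any binary calibrator (for instance, the histogram-binning sketch above with the number of bins fixed, e.g. lambda g, xs, ys: binary_histogram_binning(g, xs, ys, num_bins=10)). The code and its names are ours, not from the paper's repository.

```python
# Sketch (ours) of the M2B wrappers: the top-label calibrator (Algorithm 2) and the
# class-wise calibrator (Algorithm 3), written on top of a generic binary calibrator.
import numpy as np

def top_label_calibrate(base_probs, calib_x, calib_y, binary_calibrate):
    c = lambda x: int(np.argmax(base_probs(x)))          # top-label predictor (left unchanged)
    g = lambda x: float(np.max(base_probs(x)))           # top-label confidence
    preds = np.array([c(x) for x in calib_x])
    h_per_class = {}
    for l in np.unique(preds):
        # D_l: points predicted as class l, with binary labels 1{Y = l}  (Algorithm 2, line 4)
        xs = [x for x, p in zip(calib_x, preds) if p == l]
        ys = [int(y == l) for y, p in zip(calib_y, preds) if p == l]
        h_per_class[l] = binary_calibrate(g, xs, ys)
    h = lambda x: h_per_class[c(x)](x)                   # predict h_l(x) whenever c(x) = l
    return c, h

def class_wise_calibrate(base_probs, calib_x, calib_y, binary_calibrate, num_classes):
    hs = []
    for l in range(num_classes):
        g_l = lambda x, l=l: float(base_probs(x)[l])     # l-th component of g
        ys = [int(y == l) for y in calib_y]              # all points, labels 1{Y = l}  (Algorithm 3)
        hs.append(binary_calibrate(g_l, calib_x, ys))
    return hs                                            # no normalization step (contrast Algorithm 4)
```

The confidence calibrator (Algorithm 1) would instead pool all classes into a single dataset with labels 1{Y_i = c(X_i)} and make a single call to the binary calibrator.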
Most popular empirical works that have discussed binary calibrators for multiclass calibration have done so using the normalized calibrator, Algorithm 4. This is almost identical to Algorithm 3, except that there is an additional normalization step (line 6 of Algorithm 4). This normalization was first proposed by Zadrozny and Elkan (2002, Section 5.2), and has been used unaltered by most other works1 where the goal has been to simply compare direct multiclass calibrators such as temperature scaling, Dirichlet scaling, etc., to a calibrator based on binary methods (for instance, see Section 4.2 of Guo et al. (2017)). In contrast to these papers, we investigate multiple M2B reductions in an effort to identify the right reduction of multiclass calibration to binary calibration. To summarize, the M2B characterization immediately yields a novel and different calibrator for every M2B notion. In the following section, we instantiate M2B calibrators on the binary calibrator of histogram binning (HB), leading to two new algorithms: top-label-HB and class-wise-HB, that achieve strong empirical results and satisfy distribution-free calibration guarantees. 1the only exception we are aware of is the recent work of Patel et al. (2021) who also suggest skipping normalization (see their Appendix A1); however they use a common I-Max binning scheme across classes, whereas in Algorithm 3 the predictor hl for each class is learnt completely independently of other classes 4 EXPERIMENTS: M2B CALIBRATION WITH HISTOGRAM BINNING Histogram binning or HB was proposed by Zadrozny and Elkan (2001) with strong empirical results for binary calibration. In HB, a base binary calibration model g : X Ñ r0, 1s is used to partition the calibration data into a number of bins so that each bin has roughly the same number of points. Then, for each bin, the probability of Y “ 1 is estimated using the empirical distribution on the calibration data. This estimate forms the new calibrated prediction for that bin. Recently, Gupta and Ramdas (2021) showed that HB satisfies strong distribution-free calibration guarantees, which are otherwise impossible for scaling methods (Gupta et al., 2020). Despite these results for binary calibration, studies for multiclass calibration have reported that HB typically performs worse than scaling methods such as temperature scaling (TS), vector scaling (VS), and Dirichlet scaling (DS) (Kull et al., 2019; Roelofs et al., 2020; Guo et al., 2017). In our experiments, we find that the issue is not HB but the M2B wrapper used to produce the HB baseline. With the right M2B wrapper, HB beats TS, VS, and DS. A number of calibrators have been proposed recently (Zhang et al., 2020; Rahimi et al., 2020; Patel et al., 2021; Gupta et al., 2021), but VS and DS continue to remain strong baselines which are often close to the best in these papers. We do not compare to each of these calibrators; our focus is on the M2B reduction and the message that the baselines dramatically improve with the right M2B wrapper. We use three metrics for comparison: the first is top-label-ECE or TL-ECE (defined in (4)), which we argued leads to a more meaningful comparison compared to conf-ECE. Second, we consider the more stringent maximum-calibration-error (MCE) metric that assesses the worst calibration across predictions (see more details in Appendix E.3). For top-label calibration MCE is given by TL-MCEpc, hq :“ maxlPrLs suprPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r|. 
To assess class-wise calibration, we use the class-wise-ECE, defined as the average calibration error across classes:
CW-ECE(c, h) := L^{-1} Σ_{l=1}^{L} E_X |P(Y = l | h_l(X)) − h_l(X)|.
All ECE/MCE estimation is performed as described in Remark 1. For further details, see Appendix E.2.

Formal algorithm and theoretical guarantees. Top-label-HB (TL-HB) and class-wise-HB (CW-HB) are explicitly stated in Appendices B and C respectively; these are instantiations of the top-label calibrator and class-wise calibrator with HB. N-HB is the normalized calibrator (Algorithm 4) with HB, which is the same as CW-HB but with an added normalization step. In the Appendix, we extend the binary calibration guarantees of Gupta and Ramdas (2021) to TL-HB and CW-HB (Theorems 1 and 2). We informally summarize one of the results here: if there are at least k calibration points per bin, then the expected ECE is bounded as E[TL-ECE] ≤ √(1/2k) for TL-HB and E[CW-ECE] ≤ √(1/2k) for CW-HB. The outer expectation is over the calibration data, and corresponds to the randomness in the predictor learnt on the calibration data. Note that the ECE itself is an expected error over an unseen i.i.d. test point (X, Y) ~ P.

Experimental details. We experimented on the CIFAR-10 and CIFAR-100 datasets, which have 10 and 100 classes respectively. The base models are deep nets with the following architectures: ResNet-50, ResNet-110, Wide-ResNet-26-10 (WRN) (Zagoruyko and Komodakis, 2016), and DenseNet-121 (Huang et al., 2017). Both CIFAR datasets consist of 60K (60,000) points, which are split as 45K/5K/10K to form the train/validation/test sets. The validation set was used for post-hoc calibration and the test set was used for evaluation through ECE/MCE estimates. Instead of training new models, we used the pre-trained models of Mukhoti et al. (2020). We then ask: "which post-hoc calibrator improves the calibration the most?" We used their Brier score and focal loss models in our experiments (Mukhoti et al. (2020) report that these are the empirically best performing loss functions). All results in the main paper are with Brier score, and results with focal loss are in Appendix E.4. Implementation details for TS, VS, and DS are in Appendix E.

Findings. In Table 2 (and Table 3 for CW-ECE), we report the binned ECE and MCE estimates when B = 15 bins are used, both by HB and for ECE estimation. We make the following observations:
(a) For TL-ECE, N-HB is the best performing method for both CIFAR-10 and CIFAR-100. While most methods perform similarly across architectures for CIFAR-10, there is high variation on CIFAR-100. DS is the worst performing method on CIFAR-100, but TL-HB also performs poorly. We believe that this could be because the data splitting scheme of the TL-calibrator (line 4 of Algorithm 2) splits the dataset across the predicted classes, and some classes in CIFAR-100 occur very rarely. This is further discussed in Appendix E.6.
(b) For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform slightly better than TL-HB. Since HB ensures that each bin gets roughly the same number of points, the predictions are well calibrated across bins, leading to smaller TL-MCE. A similar observation was also made by Gupta and Ramdas (2021).
(c) For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures. The N-HB method, which has been used in many CW-ECE baseline experiments, performs terribly. In other words, skipping the normalization step leads to a large improvement in CW-ECE.
This observation is one of our most striking findings. To shed further light on this, we note that the distribution-free calibration guarantees for CW-HB shown in Appendix C no longer hold post-normalization. Thus, both our theory and experiments indicate that skipping normalization improves CW-ECE performance. Additional experiments in the Appendix. In Appendix E.5, we report each of the results in Tables 2 and 3 with the number of bins taking every value in the range r5, 25s. Most observations remain the same under this expanded study. In Appendix B.2, we consider top-label calibration for the class imbalanced COVTYPE-7 dataset, and show that TL-HB adapts to tail/infrequent classes. 5 CONCLUSION We make two contributions to the study of multiclass calibration: (i) defining the new notion of top-label calibration which enforces a natural minimal requirement on a multiclass predictor—the probability score for the top class prediction should be calibrated; (ii) developing a multiclass-tobinary (M2B) framework which posits that various notions of multiclass calibration can be achieved via reduction to binary calibration, balancing practical utility with statistically tractability. Since it is important to identify appropriate notions of calibration in any structured output space (Kuleshov et al., 2018; Gneiting et al., 2007), we anticipate that the philosophy behind the M2B framework could find applications in other structured spaces. 6 REPRODUCIBILITY STATEMENT Some reproducibility desiderata, such as external code and libraries that were used are summarized in Appendix E.1. All code to generate results with the CIFAR datasets is attached in the supplementary material. Our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ (corresponding to ‘brier score’ and ‘focal loss adaptive 53’ at the above link). By avoiding training of new deep-net models with multiple hyperparameters, we also consequently avoided selection biases that inevitably creep in due to test-data-peeking. The predictions of the pre-trained models were obtained using the code at https://github.com/torrvision/focal calibration. 7 ETHICS STATEMENT Post-hoc calibration is a post-processing step that can be applied on top of miscalibrated machine learning models to increase their reliability. As such, we believe our work should improve the transparency and explainability of machine learning models. However, we outline a few limitations. Post-hoc calibration requires keeping aside a fresh, representative dataset, that was not used for training. If this dataset is too small, the resulting calibration guarantee can be too weak to be meaningful in practice. Further, if the test data distribution shifts in significant ways, additional corrections may be needed to recalibrate (Gupta et al., 2020; Podkopaev and Ramdas, 2021). A well calibrated classifier is not necessarily an accurate or a fair one, and vice versa (Kleinberg et al., 2017). Deploying calibrated models in critical applications like medicine, criminal law, banking, etc. does not preclude the possibility of the model being frequently wrong or unfair. ACKNOWLEDGEMENTS This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562 (Towns et al., 2014). Specifically, it used the Bridges-2 system, which is supported by NSF award number ACI-1928147, at the Pittsburgh Supercomputing Center (PSC). 
CG’s research was supported by the generous Bloomberg Data Science Ph.D. Fellowship. CG would like to thank Saurabh Garg and Youngseog Chung for interesting discussions, and Viveka Kulharia for help with the focal calibration repository. Finally, we thank Zack Lipton, the ICLR reviewers, and the ICLR area chair, for excellent feedback that helped improve the writing of the paper. A ADDENDUM TO SECTION 3 “CALIBRATION ALGORITHMS FROM CALIBRATION METRICS” In Section 3, we introduced the concept of M2B calibration, and showed that popular calibration notions are in fact M2B notions (Table 1). We showed how the calibration notions of top-label, class-wise, and confidence calibration can be achieved using a corresponding M2B calibrator. In the following subsection, we present the general-purpose wrapper Algorithm 5 that can be used to derive an M2B calibrator from any given M2B calibration notion that follows the rubric specified by Table 1. In Appendix A.2, we illustrate the philosophy of M2B calibration using a simple example with a dataset that contains 6 points. This example also illustrates the top-label-calibrator, the classwise-calibrator, and the confidence-calibrator. A.1 GENERAL-PURPOSE M2B CALIBRATOR Denote some M2B notion of calibration as C. Suppose C corresponds toK binary calibration claims. The outer for-loop in Algorithm 5, runs over each such claim in C. For example, for class-wise calibration, K “ L and for confidence and top-label calibration, K “ 1. Corresponding to each claim, there is a probability-predictor that the conditioning is to be done on, such as g or gl or gpkq. Additionally, there may be conditioning on the label predictor such as c or cpkq. These are denoted as prc, rgq in Algorithm 5. For confidence and top-label calibration, rc “ c, the top-label-confidence. For class-wise calibration, when rg “ gl, we have rcp¨q “ l. If there is no label conditioning in the calibration notion, such as in confidence, top-K-confidence, and class-wise calibration, then we enter the if-condition inside the for-loop. Here hk is learnt using a single calibration dataset and a single call to At0,1u. Otherwise, if there is label conditioning, such as in top-label and top-K-label calibration, we enter the else-condition, where we learn a separate hk,l for every l P rLs, using a different part of the dataset Dl in each case. Then hkpxq equals hk,lpxq if rcpxq “ l. Finally, since C is verifying a sequence of claims, the output of Algorithm 5 is a sequence of predictors. Each original prediction prc, rgq corresponding to the C is replaced with prc, hkq. This is the output of the M2B calibrator. Note that the rc values are not changed. This output appears abstract, but normally, it can be represented in an interpretable way. For example, for class-wise calibration, the output is just a sequence of predictors, one for each class: ph1, h2, . . . , hLq. This general-purpose M2B calibrators can be used to achieve any M2B calibration notion: toplabel calibration (Algorithm 2), class-wise calibration (Algorithm 3), confidence calibration (Algorithm 1), top-K-label calibration (Algorithm 6), and top-K-confidence calibration (Algorithm 7). A.2 AN EXAMPLE TO ILLUSTRATE THE PHILOSOPHY OF M2B CALIBRATION Figure 3a shows the predictions of a given base model g on a given dataset D. Suppose D is a test set, and we are testing confidence calibration. Then the only predictions that matter are the top-predictions corresponding to the shaded values. 
These are stripped out and shown in Figure 3b, in the gp¨q row. Note that the indicator 1 tY “ cp¨qu is sufficient to test confidence calibration and given this, the cpXq are not needed. Thus the second row in Figure 3b only shows these indicators. Algorithm 8: Top-label histogram binning Input: Base multiclass predictor g, calibration data D “ pX1, Y1q, . . . , pXn, Ynq Hyperparameter: # points per bin k P N (say 50), tie-breaking parameter δ ą 0 (say 10´10) Output: Top-label calibrated predictor pc, hq 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 for lÐ 1 to L do 4 Dl Ð tpXi,1 tYi “ luq : cpXiq “ lqu and nl Ð |Dl|; 5 hl Ð Binary-histogram-binningpg,Dl, tnl{ku , δq; 6 end 7 hp¨q Ð hcp¨qp¨q; 8 return pc, hq; Verifying top-label calibration is similar (Figure 3c), but in addition to the predictions gp¨q, we also retain the values of cp¨q. Thus the gp¨q and 1 tY “ cp¨qu are shown, but split across the 4 classes. Class-wise calibration requires access to all the predictions, however, each class is considered separately as indicated by Figure 3d. Canonical calibration looks at the full prediction vector in each case. However, in doing so, it becomes unlikely that gpxq “ gpyq for any x,y since the number of values that g can take is now exponential. Let us turn this around and suppose that D were a calibration set instead of a test set. We argue that D should be used in the same way, whether testing or calibrating. Thus, if confidence calibration is to be achieved, we should focus on the pg,1 tY “ cp¨quq corresponding to g. If top-label calibration is to be achieved, we should use the pc, gq values. If class-wise calibration is to be achieved, we should look at each gl separately and solve L different problems. Finally, for canonical calibration, we must look at the entire g vector as a single unit. This is the core philosophy behind M2B calibrators: if binary claims are being verified, solve binary calibration problems. B DISTRIBUTION-FREE TOP-LABEL CALIBRATION USING HISTOGRAM BINNING In this section, we formally describe histogram binning (HB) with the top-label-calibrator (Algorithm 2) and provide methodological insights through theory and experiments. B.1 FORMAL ALGORITHM AND THEORETICAL GUARANTEES Algorithm 8 describes the top-label calibrator formally using HB as the binary calibration algorithm. The function called in line 5 is Algorithm 2 of Gupta and Ramdas (2021). The first argument in the call is the top-label confidence predictor, the second argument is the dataset to be used, the third argument is the number of bins to be used, and the fourth argument is a tie-breaking parameter (described shortly). While previous empirical works on HB fixed the number of bins per class, the analysis of Gupta and Ramdas (2021) suggests that a more principled way of choosing the number of bins is to fix the number of points per bin. This is parameter k of Algorithm 8. Given k, the number of bins is decided separately for every class as tnl{ku where nl is the number of points predicted as class l. This choice is particularly relevant for top-label calibration since nl can be highly non-uniform (we illustrate this empirically in Section B.2). 
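As a rough illustration, Algorithm 8 can be sketched as follows, reusing the binary histogram-binning sketch given earlier (Section 3.2); the number of bins for class l is set to ⌊n_l/k⌋, and the tie-breaking parameter δ (discussed next) is omitted. The code and its names are ours, not the paper's implementation.

```python
# Sketch (ours) of Algorithm 8: top-label histogram binning with roughly k
# calibration points per bin for every predicted class.
import numpy as np

def top_label_hb(base_probs, calib_x, calib_y, k=50):
    c = lambda x: int(np.argmax(base_probs(x)))
    g = lambda x: float(np.max(base_probs(x)))
    preds = np.array([c(x) for x in calib_x])
    h_per_class = {}
    for l in np.unique(preds):
        xs = [x for x, p in zip(calib_x, preds) if p == l]   # points with c(X_i) = l
        ys = [int(y == l) for y, p in zip(calib_y, preds) if p == l]
        n_l = len(xs)
        num_bins = max(n_l // k, 1)              # bins adapt to how often class l is predicted
        h_per_class[l] = binary_histogram_binning(g, xs, ys, num_bins)
    # Classes never predicted on the calibration data are not handled in this sketch.
    return c, (lambda x: h_per_class[c(x)](x))
```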
The tie-breaking parameter δ can be arbitrarily small (like 10^{-10}), and its significance is mostly theoretical—it is used to ensure that outputs of different bins are not exactly identical by chance, so that conditioning on a calibrated probability output is equivalent to conditioning on a bin; this leads to a cleaner theoretical guarantee. HB recalibrates g to a piecewise constant function h that takes one value per bin. Consider a specific bin b; the h value for this bin is computed as the average of the indicators {1{Y_i = c(X_i)} : X_i ∈ Bin b}. This is an estimate of the bias of the bin, P(Y = c(X) | X ∈ Bin b). A concentration inequality can then be used to bound the deviation between the estimate and the true bias, yielding distribution-free calibration guarantees. In the forthcoming Theorem 1, we show high-probability and in-expectation bounds on the TL-ECE of HB. Additionally, we show marginal and conditional top-label calibration bounds, defined next. These notions were proposed in the binary calibration setting by Gupta et al. (2020) and Gupta and Ramdas (2021). In the definition below, A refers to any algorithm that takes as input calibration data D and an initial classifier g to produce a top-label predictor c and an associated probability map h. Algorithm 8 is an example of A.

Definition 1 (Marginal and conditional top-label calibration). Let ε, α ∈ (0, 1) be some given levels of approximation and failure respectively. An algorithm A : (g, D) ↦ (c, h) is
(a) (ε, α)-marginally top-label calibrated if for every distribution P over X × [L],
P( |P(Y = c(X) | c(X), h(X)) − h(X)| ≤ ε ) ≥ 1 − α.   (8)
(b) (ε, α)-conditionally top-label calibrated if for every distribution P over X × [L],
P( for all l ∈ [L], r ∈ Range(h), |P(Y = c(X) | c(X) = l, h(X) = r) − r| ≤ ε ) ≥ 1 − α.   (9)

To clarify, all probabilities are taken over the test point (X, Y) ~ P, the calibration data D ~ P^n, and any other inherent algorithmic randomness in A; these are all implicit in (c, h) = A(D, g). Marginal calibration asserts that with high probability, on average over the distribution of D and X, P(Y = c(X) | c(X), h(X)) is at most ε away from h(X). In comparison, TL-ECE is the average of these deviations over X. Marginal calibration may be a more appropriate metric for calibration than TL-ECE if we are somewhat agnostic to probabilistic errors less than some fixed threshold ε (like 0.05). Conditional calibration is a strictly stronger definition that requires the deviation to be at most ε for every possible prediction (l, r), including rare ones, not just on average over predictions. This may be relevant in medical settings where we want the prediction on every patient to be reasonably calibrated. Algorithm 8 satisfies the following calibration guarantees.

Theorem 1. Fix hyperparameters δ > 0 (arbitrarily small) and points per bin k ≥ 2, and assume n_l ≥ k for every l ∈ [L]. Then, for any α ∈ (0, 1), Algorithm 8 is (ε_1, α)-marginally and (ε_2, α)-conditionally top-label calibrated for
ε_1 = √( log(2/α) / (2(k − 1)) ) + δ, and ε_2 = √( log(2n/(kα)) / (2(k − 1)) ) + δ.   (10)
Further, for any distribution P over X × [L], we have P(TL-ECE(c, h) ≤ ε_2) ≥ 1 − α, and E[TL-ECE(c, h)] ≤ √(1/2k) + δ.

The proof in Appendix H is a multiclass top-label adaptation of the guarantee in the binary setting by Gupta and Ramdas (2021). The Õ(1/√k) dependence of the bound relies on Algorithm 8 delegating at least k points to every bin. Since δ can be chosen to be arbitrarily small, setting k = 50 gives roughly E_D[TL-ECE(h)] ≤ 0.1.
Based on this, we suggest setting k ∈ [50, 150] in practice.

B.2 TOP-LABEL HISTOGRAM BINNING ADAPTS TO CLASS-IMBALANCED DATASETS

The principled methodology of fixing the number of points per bin reaps practical benefits. Figure 4 illustrates this through the performance of HB on the class-imbalanced COVTYPE-7 dataset (Blackard and Dean, 1999), with class ratio approximately 36% for class 1 and 49% for class 2. The entire dataset has 581012 points, which is divided into train and test sets in the ratio 70:30. Then, 10% of the training points are held out for calibration (n = |D| = 40671). The base classifier is a random forest (RF) trained on the remaining training points (it achieves around 95% test accuracy). The RF is then recalibrated using HB. The top-label reliability diagrams in Figure 4a illustrate that the original RF (in orange) is underconfident on both the most likely and least likely classes. Additional figures in Appendix F show that the RF is always underconfident no matter which class is predicted as the top-label. HB (in green) recalibrates the RF effectively across all classes. Validity plots (Gupta and Ramdas, 2021) estimate how the LHS of condition (8), denoted as V(ε), varies with ε. We observe that for all ε, V(ε) is higher for HB. The rightmost barplot compares the estimated TL-ECE for all classes, and also shows the class proportions. While the original RF is significantly miscalibrated for the less likely classes, HB has a more uniform miscalibration across classes.

[Figure 4 (plots omitted here): class-wise top-label reliability diagrams and validity plots for classes 2 and 4, and a per-class TL-ECE barplot with class ratios, for the random forest and for histogram binning.]

(a) Top-label histogram binning (Algorithm 8) with k = 100 points per bin. Class 4 has only 183 calibration points. Algorithm 8 adapts and uses only a single bin, to ensure that the TL-ECE on class 4 is comparable to the TL-ECE on class 2. Overall, the random forest classifier has significantly higher TL-ECE for the least likely classes (4, 5, and 6), but the post-calibration TL-ECE using binning is quite uniform.
(b) Histogram binning with B = 50 bins for every class. Compared to Figure 4a, the post-calibration TL-ECE for the most likely classes decreases, while the TL-ECE for the least likely classes increases.
Figure 4: Recalibration of a random forest using histogram binning on the class-imbalanced COVTYPE-7 dataset (class 2 is roughly 100 times likelier than class 4). By ensuring a fixed number of calibration points per bin, Algorithm 8 obtains relatively uniform top-label calibration across classes (Figure 4a). In comparison, if a fixed number of bins is chosen for all classes, the performance deteriorates for the least likely classes (Figure 4b).
Figure 4b considers a slightly different HB algorithm, where the number of bins per class is not adapted to the number of times the class is predicted but is fixed beforehand (this corresponds to replacing ⌊n_l/k⌋ in line 5 of Algorithm 8 with a fixed B ∈ N). While even in this setting there is a drop in TL-ECE compared to the RF model, the final profile is less uniform than when the number of points per bin is fixed. The validity plots and top-label reliability diagrams for all 7 classes are reported in Figure 9 in Appendix F, along with some additional observations.

C DISTRIBUTION-FREE CLASS-WISE CALIBRATION USING HISTOGRAM BINNING

In this section, we formally describe histogram binning (HB) with the class-wise calibrator (Algorithm 3) and provide theoretical guarantees for it. The overall procedure is called class-wise-HB. Further details and background on HB are contained in Appendix B, where top-label-HB is described.

C.1 FORMAL ALGORITHM

To achieve class-wise calibration using binary routines, we learn each component function h_l in a 1-v-all fashion as described in Algorithm 3. Algorithm 9 contains the pseudocode, with binary HB as the underlying routine. To learn h_l, we use a dataset D_l which, unlike in top-label HB (Algorithm 8), contains X_i even if c(X_i) ≠ l; however, Y_i is replaced with 1{Y_i = l}. The number of points per bin k_l can be different for different classes, but generally one would set k_1 = ... = k_L = k ∈ N. Larger values of k_l lead to smaller ε_l and δ_l in the guarantees, at a loss of sharpness since the number of bins ⌊n/k_l⌋ becomes smaller.

Algorithm 9: Class-wise histogram binning
Input: base multiclass predictor g : X → ∆^{L−1}, calibration data D = (X_1, Y_1), ..., (X_n, Y_n).
Hyperparameters: points per bin k_1, k_2, ..., k_L ∈ N (say each k_l = 50), tie-breaking parameter δ > 0 (say 10^{-10}).
Output: L class-wise calibrated predictors h_1, h_2, ..., h_L.
1: for l = 1 to L do
2:   D_l ← {(X_i, 1{Y_i = l}) : i ∈ [n]}
3:   h_l ← Binary-histogram-binning(g_l, D_l, ⌊n/k_l⌋, δ)
4: end for
5: return (h_1, h_2, ..., h_L)

C.2 CALIBRATION GUARANTEES

A general algorithm A for class-wise calibration takes as input calibration data D and an initial classifier g to produce an approximately class-wise calibrated predictor h : X → [0, 1]^L. Define the notation ε = (ε_1, ε_2, ..., ε_L) ∈ (0, 1)^L and α = (α_1, α_2, ..., α_L) ∈ (0, 1)^L.

Definition 2 (Marginal and conditional class-wise calibration). Let ε, α ∈ (0, 1)^L be some given levels of approximation and failure respectively. An algorithm A : (g, D) ↦ h is
(a) (ε, α)-marginally class-wise calibrated if for every distribution P over X × [L] and every l ∈ [L],
P( |P(Y = l | h_l(X)) − h_l(X)| ≤ ε_l ) ≥ 1 − α_l.   (11)
(b) (ε, α)-conditionally class-wise calibrated if for every distribution P over X × [L] and every l ∈ [L],
P( for all r ∈ Range(h_l), |P(Y = l | h_l(X) = r) − r| ≤ ε_l ) ≥ 1 − α_l.   (12)
Definition 2 requires that each h_l is (ε_l, α_l)-calibrated in the binary senses defined by Gupta et al. (2021, Definitions 1 and 2). From Definition 2, we can also derive uniform bounds that hold simultaneously over every l ∈ [L]. Let α = Σ_{l=1}^{L} α_l and ε = max_{l ∈ [L]} ε_l.
Then (11) implies P ´ @l P rLs, |P pY “ l | hlpXqq ´ hlpXq| ď ε ¯ ě 1´ α, (13) and (12) implies P ´ @l P rLs, r P Rangephlq, |P pY “ l | hlpXq “ rq ´ r| ď ε ¯ ě 1´ α. (14) The choice of not including the uniformity over L in Definition 2 reveals the nature of our class-wise HB algorithm and the upcoming theoretical guarantees: (a) we learn the hl’s separately for each l and do not combine the learnt functions in any way (such as normalization), (b) we do not combine the calibration inequalities for different rLs in any other way other than a union bound. Thus the only way we can show (13) (or (14)) is by using a union bound over (11) (or (12)). We now state the distribution-free calibration guarantees satisfied by Algorithm 9. Theorem 2. Fix hyperparameters δ ą 0 (arbitrarily small) and points per bin k1, k2, . . . , kl ě 2, and assume nl ě kl for every l P rLs. Then, for every l P rLs, for any αl P p0, 1q, Algorithm 9 is pεp1q,αq-marginally and pεp2q,αq-conditionally class-wise calibrated with ε p1q l “ d logp2{αlq 2pkl ´ 1q ` δ, and εp2ql “ d logp2n{klαlq 2pkl ´ 1q ` δ. (15) Further, for any distribution P over X ˆ rLs, (a) P pCW-ECEpc, hq ď maxlPrLs ε p2q l q ě 1´ ř lPrLs αl, and (b) E rCW-ECEpc, hqs ď maxlPrLs a 1{2kl ` δ. Theorem 2 is proved in Appendix H. The proof follows by using the result of Gupta and Ramdas (2021, Theorem 2), derived in the binary calibration setting, for each hl separately. Gupta and Ramdas (2021) proved a more general result for general `p-ECE bounds. Similar results can also be derived for the suitably defined `p-CW-ECE. As discussed in Section 3.2, unlike previous works (Zadrozny and Elkan, 2002; Guo et al., 2017; Kull et al., 2019), Algorithm 9 does not normalize the hl’s. We do not know how to derive Theorem 2 style results for a normalized version of Algorithm 9. D FIGURES FOR APPENDIX E Appendix E begins on page 23. The relevant figures for Appendix E are displayed on the following pages. E ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS FOR CIFAR-10 AND CIFAR-100 We present additional details and results to supplement the experiments with CIFAR-10 and CIFAR100 in Sections 2 and 4 of the main paper. E.1 EXTERNAL LIBRARIES USED All our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ and used along with the code at https://github.com/torrvision/focal calibration to obtain base predictions. We focused on the models trained with Brier score and focal loss, since it was found to perform the best for calibration. All reports in the main paper are with the Brier score; in Appendix E.4, we report corresponding results with focal loss. We also used the code at https://github.com/torrvision/focal calibration for temperature scaling (TS). For vector scaling (VS) and Dirichlet scaling (DS), we used the code of Kull et al. (2019), hosted at https://github.com/dirichletcal/dirichlet python. For VS, we used the file dirichletcal/calib/vectorscaling.py, and for DS, we used the file dirichletcal/calib/fulldirichlet.py. No hyperparameter tuning was performed in any of our histogram binning experiments or baseline experiments; default settings were used in every case. The random seed was fixed so that every run of the experiment gives the same result. In particular, by relying on pre-trained models, we avoid training new deep-net models with multiple hyperparameters, thus avoiding any selection biases that may arise due to test-data peeking across multiple settings. 
E.2 FURTHER COMMENTS ON BINNING FOR ECE ESTIMATION As mentioned in Remark 1, ECE estimates for all methods except TL-HB and CW-HB was done using fixed-width bins r0, 1{Bq, r1{B, 2{Bq, . . . r1´ 1{B, 1s for various values of B P r5, 25s. For TL-HB and CW-HB, B is the number of bins used for each call to binary HB. For TL-HB, note that we actually proposed that the number of bins-per-class should be fixed; see Section B.2. However, for ease of comparison to other methods, we simply set the number of bins to B for each call to binary HB. That is, in line 5, we replace tnl{ku with B. For CW-HB, we described Algorithm 9 with different values of kl corresponding to the number of bins per class. For the CIFAR-10 and CIFAR-100 comparisons, we set each k1 “ k2 “ . . . “ kL “ k, where k P N satisfies tn{ku “ B. Tables 2,3, 4, and 5 report estimates with B “ 15, which has been commonly used in many works (Guo et al., 2017; Kull et al., 2019; Mukhoti et al., 2020). Corresponding to each table, we have a figure where ECE estimates with varying B are reported to strengthen conclusions: these are Figure 5,7, 6, and 8 respectively. Plugin estimates of the ECE were used, same as Guo et al. (2017). Further binning was not done for TL-HB and CW-HB since the output is already discrete and sufficiently many points take each of the predicted values. Note that due to Jensen’s inequality, any further binning will only decrease the ECE estimate (Kumar et al., 2019). Thus, using unbinned estimates may give TL-HB and CW-HB a disadvantage. E.3 SOME REMARKS ON MAXIMUM-CALIBRATION-ERROR (MCE) Guo et al. (2017) defined MCE with respect to confidence calibration, as follows: conf-MCEpc, hq :“ sup rPRangephq |P pY “ cpXq | hpXq “ rq ´ r| . (16) Conf-MCE suffers from the same issue illustrated in Figure 2 for conf-ECE. In Figure 1b, we looked at the reliability diagram within two bins. These indicate two of the values over which the supremum is taken in equation (16): these are the Y-axis distances between the ‹ markers and the X “ Y line for bins 6 and 10 (both are less than 0.02). On the other hand, the effective maximum miscalibration for bin 6 is roughly 0.15 (for class 1), and roughly 0.045 (for class 4), and the maximum should be taken with respect to these values across all bins. To remedy the underestimation of the effective MCE, we can consider the top-label-MCE, defined as TL-MCEpc, hq :“ max lPrLs sup rPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r| . (17) Interpreted in words, the TL-MCE assesses the maximum deviation between the predicted and true probabilities across all predictions and all classes. Following the same argument as in the proof of Proposition 4, it can be shown that for any c, h, conf-MCEpc, hq ď TL-MCEpc, hq. The TL-MCE is closely related to conditional top-label calibration (Definition 1b). Clearly, an algorithm is pε, αqconditionally top-label calibrated if and only if for every distribution P , P pTL-MCEpc, hq ď εq ě 1´ α. Thus the conditional top-label calibration guarantee of Theorem 1 implies a high probability bound on the TL-MCE as well. E.4 TABLE 2 AND 3 STYLE RESULTS WITH FOCAL LOSS Results for top-label-ECE and top-label-MCE with the base deep net model being trained using focal loss are reported in Table 4. Corresponding results for class-wise-ECE are reported in Table 5. The observations are similar to the ones reported for Brier score: 1. For TL-ECE, TL-HB is either the best or close to the best performing method on CIFAR10, but suffers on CIFAR-100. 
This phenomenon is discussed further in Appendix E.6. N-HB is the best or close to the best for both CIFAR-10 and CIFAR-100.
2. For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform better than TL-HB, but not by a huge margin.
3. For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures.

E.5 ECE AND MCE ESTIMATES WITH VARYING NUMBER OF BINS

Corresponding to each entry in Tables 2 and 4, we perform an ablation study with the number of bins varying as B ∈ [5, 25]. This is in keeping with the findings of Roelofs et al. (2020) that ECE/MCE estimates, as well as the relative performance of the compared models, can vary with the number of bins. The results are reported in Figure 5 (ablation of Table 2) and Figure 7 (ablation of Table 3). The captions of these figures contain further details on the findings. Most findings are similar to those in the main paper, but the findings in the tables are strengthened through this ablation. The same ablations are performed for focal loss as well; the results are reported in Figure 6 (ablation of Table 4) and Figure 8 (ablation of Table 5), and the captions of these figures contain further details. The ablation results in the figures support those in the tables.

E.6 ANALYZING THE POOR PERFORMANCE OF TL-HB ON CIFAR-100

CIFAR-100 is an imbalanced dataset with 100 classes and 5000 points for validation/calibration (as per the default splits). Due to random subsampling, the validation split we used had one of the classes predicted as the top-label only 31 times. Thus, based on Theorem 1, we do not expect HB to have small TL-ECE. This is confirmed by the empirical results presented in Tables 2/4 and Figures 5b/6b. We observe that HB has higher estimated TL-ECE than all methods except DS, for most values of the number of bins. The performance of TL-HB with respect to TL-MCE, however, is much closer to the other methods, since HB uses the same number of points per bin, ensuring that the predictions are somewhat equally calibrated across bins (Figures 5d/6d). In comparison, for CW-ECE, CW-HB is the best performing method. This is because in the class-wise setting, 5000 points are available for recalibration irrespective of the class, which is sufficient for HB. The deterioration in performance of HB when few calibration points are available was also observed in the binary setting by Gupta and Ramdas (2021, Appendix C). Niculescu-Mizil and Caruana (2005) noted in the conclusion of their paper that Platt scaling (Platt, 1999), which is closely related to TS, performs well when the data is small, but that another nonparametric binning method, isotonic regression (Zadrozny and Elkan, 2002), performs better once sufficient data is available.
1. What is the focus of the paper, and what are the proposed approaches? 2. What are the strengths and weaknesses of the paper regarding its contributions, experiments, and comparisons with other works? 3. Do you have any concerns or questions about the paper's content, such as the application of top-label calibration, the use of separate calibration models for each predicted class value, or the modification of the calibration process? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, particularly regarding the presentation of the experimental results and the lack of analysis of certain findings? 5. Are there any suggestions or recommendations for improving the paper, such as including more datasets, comparing with isotonic regression, or providing a deeper analysis of the results?
Summary Of The Paper

The paper proposes a new method to fix confidence calibration by conditioning on the predicted label when formulating the calibration task (as well as on the class probability estimate for that label, as done in confidence calibration). This new approach is called "top-label" calibration. The paper shows how to modify the calibration process to achieve this by building a separate calibration model for each predicted class value. Experiments compare this approach, using binning as the calibration method, to confidence calibration as well as standard 1-vs-rest calibration (both with and without normalization in the latter method), based on calibration of pre-trained CIFAR-10/100 CNNs. Temperature scaling, vector scaling, and Dirichlet scaling are also included. Evaluation is performed using 1-vs-rest ECE, a new top-label ECE, and a maximum calibration error (MCE) variant of the latter. Surprisingly, the proposed approach does not perform as well as 1-vs-rest calibration when using the top-label variant of ECE; it does perform a lot better wrt the MCE variant on the CIFAR-10 data though. When considering 1-vs-rest ECE, the paper reports that 1-vs-rest without normalization performs better than the same approach with normalization, and also better than all scaling-based methods. The paper also explains how to adapt top-K-confidence calibration to obtain top-K-label calibration, but this is not evaluated.

Review

The first seven pages of the paper are very nice, and I very much enjoyed reading this material. The problems start with the experiments:

Reading through the appendix, the authors are obviously aware that a calibration set is normally used to calibrate a model before it is evaluated on the test set to compute ECE. However, there is no reference to how this is done for the experiments in the paper, which are based on pre-trained models. It seems that the validation set was perhaps used for both building the discretization-based calibration models and their evaluation, which seems highly problematic.

One of the three empirical observations on Page 9 states that the new TL-HB is the best performing method for 1-vs-rest ECE. In fact, it is the un-normalized variant of 1-vs-rest calibration that is shown in Table 3 and that performs best! Assuming this is just a typo and "TL-HB" in Observation (c) was meant to be "CW-HB", this is the most surprising and potentially most impactful finding in the paper. (The other two observations are about the behaviour wrt TL-ECE/MCE, which are the new, less obvious ECE metrics, and the results are also more mixed wrt these metrics.) However, there is no analysis at all in the paper of why leaving out normalization in the 1-vs-rest method (CW-HB) is so much better than using it (N-HB)!

There is no comparison to isotonic regression, which is trivially applied using the 1-vs-rest method and, like binning, is also a non-parametric method. The scaling-based methods are all parametric.

The number of datasets is very limited (CIFAR-10 and CIFAR-100).

The actual 1-vs-rest discretization-based calibration algorithm used in the paper (CW-HB) seems non-standard, because a separate binning is applied for each class (see Algorithm 9 in Appendix C.1, assuming that Binary-histogram-binning performs equal-frequency binning as indicated elsewhere in the paper). My understanding is that, normally, the same bin boundaries are used for all classes when this method is applied. The effect of this is unclear.
Could it explain why normalization performs so poorly in Table 3?

The submission describes the discretization-based algorithms being evaluated as special instances of a general M2B "notion" for calibration of multi-class problems, and there is a corresponding "general-purpose" calibration algorithm that includes the evaluated algorithms as special cases. It is unclear how helpful this algorithm and the M2B "notion" are. This aspect of the paper seems somewhat trivial. There are no results or proofs regarding this general-purpose formulation of the algorithm.

Longer and longer appendices seem to be becoming the norm, but this submission is quite extreme in this regard, particularly because only some appendices are referred to in the main text (G, A, D.3, D.2, D.4, D.5, B, D.1) while others aren't: E (random forest experiments, extend Appendix B.2), C (CW-HB), F (canonical multi-class calibration, 8 pages). The role of appendix F in particular is mysterious: it is only tangentially related to what is presented in the main text.

Other questions and comments:
- Do the calibrated probabilities obtained using top-label calibration sum to 1?
- "However, the distribution of g can be different for different labels, thus they should be treated differently" -- I don't understand the reason for having this sentence here. Is this to reinforce that point? I would delete it.
- Renumber the algorithms to follow the order in which they are discussed.
- "the expectation is over the calibration data" - isn't it trivial to fit the calibration data arbitrarily well? Why is this bound useful then?
- Is N-HB defined in the text or just in the caption?
- Why is CW-HB not included in Table 2? Is it obvious that it performs worse than TL-HB for these metrics? Conversely, doesn't it make sense to include TL-HB in Table 3?
ICLR
Title Top-label calibration and multiclass-to-binary reductions Abstract We propose a new notion of multiclass calibration called top-label calibration. A classifier is said to be top-label calibrated if the reported probability for the predicted class label—the top-label—is calibrated, conditioned on the top-label. This conditioning is essential for practical utility of the calibration property, since the top-label is always reported and we must condition on what is reported. However, the popular notion of confidence calibration erroneously skips this conditioning. Furthermore, we outline a multiclass-to-binary (M2B) reduction framework that unifies confidence, top-label, and class-wise calibration, among others. As its name suggests, M2B works by reducing multiclass calibration to different binary calibration problems; various types of multiclass calibration can then be achieved using simple binary calibration routines. We instantiate the M2B framework with the well-studied histogram binning (HB) binary calibrator, and prove that the overall procedure is multiclass calibrated without making any assumptions on the underlying data distribution. In an empirical evaluation with four deep net architectures on CIFAR-10 and CIFAR-100, we find that the M2B + HB procedure achieves lower top-label and class-wise calibration error than other approaches such as temperature scaling. Code for this work is available at https://github.com/aigen/df-posthoc-calibration. 1 INTRODUCTION Machine learning models often make probabilistic predictions. The ideal prediction is the true conditional distribution of the output given the input. However, nature never reveals true probability distributions, making it infeasible to achieve this ideal in most situations. Instead, there is significant interest towards designing models that are calibrated, which is often feasible. We motivate the definition of calibration using a standard example of predicting the probability of rain. Suppose a meteorologist claims that the probability of rain on a particular day is 0.7. Regardless of whether it rains on that day or not, we cannot know if 0.7 was the underlying probability of rain. However, we can test if the meteorologist is calibrated in the long run, by checking if on the D days when 0.7 was predicted, it indeed rained on around 0.7D days (and the same is true for other probabilities). This example is readily converted to a formal binary calibration setting. Denote a random (feature, label)-pair as pX,Y q P X ˆt0, 1u, where X is the feature space. A probabilistic predictor h : X Ñ r0, 1s is said to be calibrated if for every prediction q P r0, 1s, PrpY “ 1 | hpXq “ qq “ q (almost surely). Arguably, if an ML classification model produces such calibrated scores for the classes, downstream users of the model can reliably use its predictions for a broader set of tasks. Our focus in this paper is calibration for multiclass classification, with L ě 3 classes and Y P rLs :“ t1, 2, . . . , L ě 3u. We assume all (training and test) data is drawn i.i.d. from a fixed distribution P , and denote a general point from this distribution as pX,Y q „ P . Consider a typical multiclass predictor, h : X Ñ ∆L´1, whose range ∆L´1 is the probability simplex in RL. A natural notion of calibration for h, called canonical calibration is the following: for every l P rLs, P pY “ l | hpXq “ qq “ ql (ql denotes the l-th component of q). 
However, canonical calibration becomes infeasible to achieve or verify once L is even 4 or 5 (Vaicenavicius et al., 2019). Thus, there is interest in studying statistically feasible relaxations of canonical notion, such as confidence calibration (Guo et al., 2017) and class-wise calibration (Kull et al., 2017). In particular, the notion of confidence calibration (Guo et al., 2017) has been popular recently. A model is confidence calibrated if the following is true: “when the reported confidence for the predicted class is q P r0, 1s, the accuracy is also q”. In any practical setting, the confidence q is never reported alone; it is always reported along with the actual class prediction l P rLs. One may expect that if a model is confidence calibrated, the following also holds: “when the class l is predicted with confidence q, the probability of the actual class being l is also q”? Unfortunately, this expectation is rarely met—there exist confidence calibrated classifier for whom the latter statement is grossly violated for all classes (Example 1). On the other hand, our proposed notion of top-label calibration enforces the latter statement. It is philosophically more coherent, because it requires conditioning on all relevant reported quantities (both the predicted top label and our confidence in it). In Section 2, we argue further that top-label calibration is a simple and practically meaningful replacement of confidence calibration. In Section 3, we unify top-label, confidence, and a number of other popular notions of multiclass calibration into the framework of multiclass-to-binary (M2B) reductions. The M2B framework relies on the simple observation that each of these notions internally verifies binary calibration claims. As a consequence, each M2B notion of calibration can be achieved by solving a number of binary calibration problems. With the M2B framework at our disposal, all of the rich literature on binary calibration can now be used for multiclass calibration. We illustrate this by instantiating the M2B framework with the binary calibration algorithm of histogram binning or HB (Zadrozny and Elkan, 2001; Gupta and Ramdas, 2021). The M2B + HB procedure achieves state-of-the-art results with respect to standard notions of calibration error (Section 4). Further, we show that our procedure is provably calibrated for arbitrary data-generating distributions. The formal theorems are delayed to Appendices B, C (due to space limitations), but an informal result is presented in Section 4. 2 MODIFYING CONFIDENCE CALIBRATION TO TOP-LABEL CALIBRATION Let c : X Ñ rLs denote a classifier or top-label predictor and h : X Ñ r0, 1s a function that provides a confidence or probability score for the top-label cpXq. The predictor pc, hq is said to be confidence calibrated (for the data-generating distribution P ) if P pY “ cpXq | hpXqq “ hpXq. (1) In other words, when the reported confidence hpXq equals p P r0, 1s, then the fraction of instances where the predicted label is correct also approximately equals p. Note that for an L-dimensional predictor h : X Ñ ∆L´1, one would use cp¨q “ arg maxlPrLs hlp¨q and hp¨q “ hcp¨qp¨q; ties are broken arbitrarily. Then h is confidence calibrated if the corresponding pc, hq satisfies (1). Confidence calibration is most applicable in high-accuracy settings where we trust the label prediction cpxq. 
For instance, if a high-accuracy cancer-grade-prediction model predicts a patient as having “95% grade III, 3% grade II, and 2% grade I”, we would suggest that the patient undergo an invasive treatment. However, we may want to know (and control) the number of non-grade-III patients that were given this suggestion incorrectly. In other words, is Pr(cancer is not grade III | cancer is predicted to be of grade III with confidence 95%) equal to 5%? It would appear that by focusing on the probability of the predicted label, confidence calibration enforces such control. However, as we illustrate next, confidence calibration fails at this goal by providing a guarantee that is neither practically interpretable nor actionable. Translating the probabilistic statement (1) into words, we ascertain that confidence calibration leads to guarantees of the form: “if the confidence h(X) in the top label is 0.6, then the accuracy (frequency with which Y equals c(X)) is 0.6”. Such a guarantee is not very useful. Suppose a patient P is informed (based on their symptoms X) that they are most likely to have a certain disease D with probability 0.6. Further, patient P is told that this score is confidence calibrated. P can now infer the following: “among all patients who have probability 0.6 of having some unspecified disease, the fraction who have that unspecified disease is also 0.6.” However, P is concerned only about disease D, and not about other diseases. That is, P wants to know the probability of having D among patients who were predicted to have disease D with confidence 0.6, not among patients who were predicted to have some disease with confidence 0.6. In other words, P cares about the occurrence of D among patients who were told the same thing that P has been told. It is tempting to hope that the confidence calibrated probability 0.6 has some bearing on what P cares about. However, this faith is misguided, as the above reasoning suggests, and as further illustrated through the following example.

Example 1. Suppose the instance space is (X, Y) ∈ {a, b} × {1, 2, . . .}. (X can be seen as the random patient, and Y as the disease they are suffering from.) Consider a predictor (c, h) and let the values taken by (X, Y, c, h) be as follows:

Feature x | P(X = x) | Class prediction c(x) | Confidence h(x) | P(Y = c(X) | X = x)
a         | 0.5      | 1                     | 0.6             | 0.2
b         | 0.5      | 2                     | 0.6             | 1.0

The table specifies only the probabilities P(Y = c(X) | X = x); the probabilities P(Y = l | X = x) for l ≠ c(x) can be set arbitrarily. We verify that (c, h) is confidence calibrated:

P(Y = c(X) | h(X) = 0.6) = 0.5 (P(Y = 1 | X = a) + P(Y = 2 | X = b)) = 0.5 (0.2 + 1) = 0.6.

However, whether the actual instance is X = a or X = b, the probabilistic claim of 0.6 bears no correspondence with reality. If X = a, h(X) = 0.6 is extremely overconfident since P(Y = 1 | X = a) = 0.2. Contrarily, if X = b, h(X) = 0.6 is extremely underconfident. The reason for the strange behavior above is that the probability P(Y = c(X) | h(X)) is not interpretable from a decision-making perspective. In practice, we never report just the confidence h(X), but also the class prediction c(X) (obviously!). Thus it is more reasonable to talk about the conditional probability of Y = c(X) given what is reported, that is, both c(X) and h(X). We make a small but critical change to (1); we say that (c, h) is top-label calibrated if

P(Y = c(X) | h(X), c(X)) = h(X). (2)

(See the disambiguating Remark 2 on terminology.)
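A minimal numeric check of Example 1 (illustrative Python; the variable names are ours and the values are taken directly from the table above):

```python
# Values from the table in Example 1.
p_x       = {"a": 0.5, "b": 0.5}   # P(X = x)
h         = {"a": 0.6, "b": 0.6}   # reported confidence h(x)
p_correct = {"a": 0.2, "b": 1.0}   # P(Y = c(X) | X = x)

# Confidence calibration conditions only on h(X) = 0.6, so both x's are pooled:
conf = sum(p_x[x] * p_correct[x] for x in p_x) / sum(p_x.values())
print(conf)   # 0.6 -> definition (1) holds, so (c, h) is confidence calibrated

# Top-label calibration also conditions on c(X); since c(a) = 1 and c(b) = 2,
# the two feature values can no longer be pooled:
for x in p_x:
    print(x, abs(p_correct[x] - h[x]))   # 0.4 for both -> definition (2) is grossly violated
```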
Going back to the patient-disease example, top-label calibration would tell patient P the following: “among all patients who (just like you) are predicted to have disease D with probability 0.6, the fraction who actually have disease D is also 0.6.” Philosophically, it makes sense to condition on what is reported—both the top label and its confidence—because that is what is known to the recipient of the information; and there is no apparent justification for not conditioning on both. A commonly used metric for quantifying the miscalibration of a model is the expected-calibration-error (ECE) metric. The ECE associated with confidence calibration is defined as

conf-ECE(c, h) := E_X |P(Y = c(X) | h(X)) − h(X)|. (3)

We define top-label-ECE (TL-ECE) in an analogous fashion, but also condition on c(X):

TL-ECE(c, h) := E_X |P(Y = c(X) | c(X), h(X)) − h(X)|. (4)

Higher values of ECE indicate worse calibration performance. The predictor in Example 1 has conf-ECE(c, h) = 0. However, it has TL-ECE(c, h) = 0.4, revealing its miscalibration. More generally, it can be deduced as a straightforward consequence of Jensen’s inequality that conf-ECE(c, h) is never larger than TL-ECE(c, h) (see Proposition 4 in Appendix H). As illustrated by Example 1, the difference can be significant. In the following subsection we illustrate that the difference can be significant on a real dataset as well. First, we make a couple of remarks.

Remark 1 (ECE estimation using binning). Estimating the ECE requires estimating probabilities conditional on some prediction such as h(x). A common strategy to do this is to bin together nearby values of h(x) using binning schemes (Nixon et al., 2020, Section 2.1), and compute a single estimate for the predicted and true probabilities using all the points in a bin. The calibration method we espouse in this work, histogram binning (HB), produces discrete predictions whose ECE can be estimated without further binning. Based on this, we use the following experimental protocol: we report unbinned ECE estimates while assessing HB, and binned ECE estimates for all other compared methods, which are continuous-output methods (deep nets, temperature scaling, etc.). It is commonly understood that binning leads to underestimation of the effective ECE (Vaicenavicius et al., 2019; Kumar et al., 2019). Thus, using unbinned ECE estimates for HB gives HB a disadvantage compared to the binned ECE estimates we use for other methods. (This further strengthens our positive results for HB.) The binning scheme we use is equal-width binning, where the interval [0, 1] is divided into B equal-width intervals. Equal-width binning typically leads to lower ECE estimates compared to adaptive-width binning (Nixon et al., 2020).

Remark 2 (Terminology). The term conf-ECE was introduced by Kull et al. (2019). Most works refer to conf-ECE as just ECE (Guo et al., 2017; Nixon et al., 2020; Mukhoti et al., 2020; Kumar et al., 2018). However, some papers refer to conf-ECE as top-label-ECE (Kumar et al., 2019; Zhang et al., 2020), resulting in two different terms for the same concept. We call the older notion conf-ECE; our definition of top-label calibration/ECE (4) is different from previous ones.

(a) Confidence reliability diagram (points marked ‹) and top-label reliability diagram (points marked `) for a ResNet-50 model on the CIFAR-10 dataset; see further details in points (a) and (b) below. The gray bars denote the fraction of predictions in each bin.
The confidence reliability diagram (mistakenly) suggests better calibration than the top-label reliability diagram. (b) Class-wise and zoomed-in version of Figure 1a for bin 6 (top) and bin 10 (bottom); see further details in point (c) below. The ‹ markers are in the same position as Figure 1a, and denote the average predicted and true probabilities. The colored points denote the predicted and true probabilities when seen class-wise. The histograms on the right show the number of test points per class within bins 6 and 10. Figure 1: Confidence reliability diagrams misrepresent the effective miscalibration. 2.1 AN ILLUSTRATIVE EXPERIMENT WITH RESNET-50 ON CIFAR-10 We now compare confidence and top-label calibration using ECE estimates and reliability diagrams (Niculescu-Mizil and Caruana, 2005). This experiment can be seen as a less malignant version of Example 1. Here, confidence calibration is not completely meaningless, but can nevertheless be misleading. Figure 1 illustrates the (test-time) calibration performance of a ResNet-50 model (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009). In the following summarizing points, the pc, hq correspond to the ResNet-50 model. (a) The ‹ markers in Figure 1a form the confidence reliability diagram (Guo et al., 2017), con- structed as follows. First, the hpxq values on the test set are binned into one of B “ 10 bins, r0, 0.1q, r0.1, 0.2q, . . . , r0.9, 1s, depending on the interval to which hpxq belongs. The gray bars in Figure 1a indicate the fraction of hpxq values in each bin—nearly 92% points belong to bin r0.9, 1s and no points belong to bin r0, 0.1q. Next, for every bin b, we plot ‹ “ pconfb, accbq, which are the plugin estimates of E rhpXq | hpXq P Bin bs and P pY “ cpXq | hpXq P Bin bq respectively. The dashed X “ Y line indicates perfect confidence calibration. (b) The ` markers in Figure 1a form the top-label reliability diagram. Unlike the confidence reliability diagram, the top-label reliability diagram shows the average miscalibration across classes in a given bin. For a given class l and bin b, define ∆b,l :“ | pP pY “ cpXq | cpXq “ l, hpXq P Bin bq ´ pE rhpXq | cpXq “ l, hpXq P Bin bs |, where pP , pE denote empirical estimates based on the test data. The overall miscalibration is then ∆b :“ Weighted-averagep∆b,lq “ ř lPrLs pP pcpXq “ l | hpXq P Bin bq ∆b,l. Note that ∆b is always non-negative and does not indicate whether the overall miscalibration occurs due to under- or over-confidence; also, if the absolute-values were dropped from ∆b,l, then ∆b would simply equal accb´ confb. In order to plot ∆b in a reliability diagram, we obtain the direction for the corresponding point from the confidence reliability diagram. Thus for every ‹ “ pconfb, accbq, we plot` “ pconfb, confb`∆bq if accb ą confb and` “ pconfb, confb´∆bq otherwise, for every b. This scatter plot of the `’s gives us the top-label reliability diagram. Figure 1a shows that there is a visible increase in miscalibration when going from confidence calibration to top-label calibration. To understand why this change occurs, Figure 1b zooms into the sixth bin (hpXq P r0.5, 0.6q) and bin 10 (hpXq P r0.9, 1.0s), as described next. (c) Figure 1b displays the class-wise top-label reliability diagrams for bins 6 and 10. 
Note that for bin 6, the ‹ marker is nearly on the X = Y line, indicating that the overall accuracy matches the average confidence in the bin.

[Figure 2 panels: estimated conf-ECE and top-label-ECE (Y-axis) versus the number of bins, 5 to 25 (X-axis), for the base model, temperature scaling, and histogram binning, on four architectures: ResNet-50, ResNet-110, Wide-ResNet-26-10, and DenseNet-121.]

Figure 2 displays the aggregate effect of the above phenomenon (across bins and classes) through estimates of the conf-ECE and TL-ECE. The precise experimental setup is described in Section 4. These plots display the ECE estimates of the base model, as well as the base model when recalibrated using temperature scaling (Guo et al., 2017) and our upcoming formulation of top-label histogram binning (Section 3). Since ECE estimates depend on the number of bins B used (see Roelofs et al. (2020) for empirical work around this), we plot the ECE estimate for every value B ∈ [5, 25] in order to obtain clear and unambiguous results. We find that the TL-ECE is significantly higher than the conf-ECE for most values of B, the architectures, and the pre- and post-recalibration models. This figure also previews the performance of our forthcoming top-label histogram binning algorithm. Top-label HB has smaller estimated TL-ECE than temperature scaling for most values of B and the architectures. Except for ResNet-50, the conf-ECE estimates are also better. To summarize, top-label calibration captures the intuition of confidence calibration by focusing on the predicted class. However, top-label calibration also conditions on the predicted class, which is always part of the prediction in any practical setting. Further, TL-ECE estimates can be substantially different from conf-ECE estimates. Thus, while it is common to compare predictors based on the conf-ECE, the TL-ECE comparison is more meaningful, and can potentially be different.

3 CALIBRATION ALGORITHMS FROM CALIBRATION METRICS

In this section, we unify a number of notions of multiclass calibration as multiclass-to-binary (or M2B) notions, and propose a general-purpose calibration algorithm that achieves the corresponding M2B notion of calibration. The M2B framework yields multiple novel post-hoc calibration algorithms, each of which is tuned to a specific M2B notion of calibration.

3.1 MULTICLASS-TO-BINARY (M2B) NOTIONS OF CALIBRATION

In Section 2, we defined confidence calibration (1) and top-label calibration (2). These notions verify calibration claims for the highest predicted probability. Other popular notions of calibration verify calibration claims for other entries in the full L-dimensional prediction vector. A predictor h = (h_1, h_2, . . . , h_L) is said to be class-wise calibrated (Kull et al., 2017) if

(class-wise calibration)  ∀ l ∈ [L], P(Y = l | h_l(X)) = h_l(X). (5)

Another recently proposed notion is top-K-confidence calibration (Gupta et al., 2021). For some l ∈ [L], let c^(l) : X → [L] denote the l-th highest class prediction, and let h^(l) : X → [0, 1] denote the confidence associated with it (c = c^(1) and h = h^(1) are special cases).
For a given K ď L, (top-K-confidence calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXqq “ hpkqpXq. (6) As we did in Section 2 for confidenceÑtop-label, top-K-confidence calibration can be modified to the more interpretable top-K-label calibration by further conditioning on the predicted labels: (top-K-label calibration) @k P rKs, P pY “ cpkqpXq | hpkqpXq, cpkqpXqq “ hpkqpXq. (7) Each of these notions reduce multiclass calibration to one or more binary calibration requirements, where each binary calibration requirement corresponds to verifying if the distribution of Y , conditioned on some prediction predpXq, satisfies a single binary calibration claim associated with predpXq. Table 1 illustrates how the calibration notions discussed so far internally verify a number of binary calibration claims, making them M2B notions. For example, for class-wise calibration, for every l P rLs, the conditioning is on predpXq “ hlpXq, and a single binary calibration statement is verified: P pY “ l | predpXqq “ hlpXq. Based on this property, we call each of these notions multiclass-to-binary or M2B notions. The notion of canonical calibration mentioned in the introduction is not an M2B notion. Canonical calibration is discussed in detail in Appendix G. Due to the conditioning on a multi-dimensional prediction, non-M2B notions of calibration are harder to achieve or verify. For the same reason, it is possibly easier for humans to interpret binary calibration claims when taking decisions/actions. 3.2 ACHIEVING M2B NOTIONS OF CALIBRATION USING M2B CALIBRATORS The M2B framework illustrates how multiclass calibration can typically be viewed via a reduction to binary calibration. The immediate consequence of this reduction is that one can now solve multiclass calibration problems by leveraging the well-developed methodology for binary calibration. The upcoming M2B calibrators belong to the standard recalibration or post-hoc calibration setting. In this setting, one starts with a fixed pre-learnt base model g : X Ñ ∆L´1. The base model g can correspond to a deep-net, a random forest, or any 1-v-all (one-versus-all) binary classification model such as logistic regression. The base model is typically optimized for classification accuracy and may not be calibrated. The goal of post-hoc calibration is to use some given calibration data D “ pX1, Y1q, pX2, Y2q, . . . , pXn, Ynq P pX ˆ rLsqn, typically data on which g was not learnt, to recalibrate g. In practice, the calibration data is usually the same as the validation data. To motivate M2B calibrators, suppose we want to verify if g is calibrated on a certain test set, based on a given M2B notion of calibration. Then, the verifying process will split the test data into a number of sub-datasets, each of which will verify one of the binary calibration claims. In Appendix A.2, we argue that the calibration data can also be viewed as a test set, and every step in the verification process can be used to provide a signal for improving calibration. M2B calibrators take the form of wrapper methods that work on top of a given binary calibrator. Denote an arbitrary black-box binary calibrator as At0,1u : r0, 1sXˆpXˆt0, 1uq‹ Ñ r0, 1sX , where the first argument is a mapping X Ñ r0, 1s that denotes a (miscalibrated) binary predicor, and the second argument is a calibration data sequence of arbitrary length. The output is a (better calibrated) binary predictor. 
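For concreteness, this interface can be sketched in Python as follows (a hypothetical sketch: the names BinaryCalibrator and platt_scaling are ours, and the calibrator is given the scores g(X_i) directly rather than the function g, since in practice the base predictor is pre-applied to the calibration features). A logistic-regression-based calibrator in the spirit of Platt scaling serves as one simple instance:

```python
from typing import Callable
import numpy as np
from sklearn.linear_model import LogisticRegression

# A binary calibrator takes miscalibrated scores g(X_i) in [0, 1] and binary labels,
# and returns a recalibrated score map [0, 1] -> [0, 1].
BinaryCalibrator = Callable[[np.ndarray, np.ndarray], Callable[[np.ndarray], np.ndarray]]

def platt_scaling(scores: np.ndarray, labels: np.ndarray) -> Callable[[np.ndarray], np.ndarray]:
    """Simplified Platt-style scaling: a 1-d logistic regression fit on the raw scores
    (assumes both label values occur in the calibration data)."""
    lr = LogisticRegression().fit(np.asarray(scores).reshape(-1, 1), np.asarray(labels))
    return lambda s: lr.predict_proba(np.asarray(s, dtype=float).reshape(-1, 1))[:, 1]
```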
Examples of At0,1u are histogram binning (Zadrozny and Elkan, 2001), isotonic regression (Zadrozny and Elkan, 2002), and Platt scaling (Platt, 1999). In the upcoming descriptions, we use the indicator function 1 ta “ bu P t0, 1u which takes the value 1 if a “ b, and 0 if a ‰ b. The general formulation of our M2B calibrator is delayed to Appendix A since the description is a bit involved. To ease readability and adhere to the space restrictions, in the main paper we describe the calibrators corresponding to top-label, class-wise, and confidence calibration (Algorithms 1–3). Each of these calibrators are different from the classical M2B calibrator (Algorithm 4) that has been used by Zadrozny and Elkan (2002), Guo et al. (2017), Kull et al. (2019), and most other papers M2B calibrators: Post-hoc multiclass calibration using binary calibrators Input in each case: Binary calibrator At0,1u : r0, 1sX ˆ pX ˆ t0, 1uq‹ Ñ r0, 1sX , base multiclass predictor g : X Ñ ∆L´1, calibration data D “ pX1, Y1q, . . . , pXn, Ynq. Algorithm 1: Confidence calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 D1 Ð tpXi,1 tYi “ cpXiquq : i P rnsu; 4 hÐ At0,1upg,D1q; 5 return pc, hq; Algorithm 2: Top-label calibrator 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 for lÐ 1 to L do 4 Dl Ð tpXi,1 tYi “ luq : cpXiq “ lqu; 5 hl Ð At0,1upg,Dlq; 6 end 7 hp¨q Ð hcp¨qp¨q (predict hlpxq if cpxq “ l); 8 return pc, hq; Algorithm 3: Class-wise calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 hl Ð At0,1upgl,Dlq; 5 end 6 return ph1, h2, . . . , hLq; Algorithm 4: Normalized calibrator 1 Write g “ pg1, g2, . . . , gLq; 2 for lÐ 1 to L do 3 Dl Ð tpXi,1 tYi “ luq : i P rnsu; 4 rhl Ð At0,1upgl,Dlq; 5 end 6 Normalize: for every l P rLs, hlp¨q :“ rhlp¨q{ řL k“1 rhkp¨q; 7 return ph1, h2, . . . , hLq; we are aware of, with the most similar one being Algorithm 3. Top-K-label and top-K-confidence calibrators are also explicitly described in Appendix A (Algorithms 6 and 7). Top-label calibration requires that for every class l P rLs, P pY “ l | cpXq “ l, hpXqq “ hpXq. Thus, to achieve top-label calibration, we must solve L calibration problems. Algorithm 2 constructs L datasets tDl : l P rLsu (line 4). The features in Dl are the Xi’s for which cpXiq “ l, and the labels are 1 tYi “ lu. Now for every l P rLs, we calibrate g to hl : X Ñ r0, 1s using Dl and any binary calibrator. The final probabilistic predictor is hp¨q “ hcp¨qp¨q (that is, it predicts hlpxq if cpxq “ l). The top-label predictor c does not change in this process. Thus the accuracy of pc, hq is the same as the accuracy of g irrespective of which At0,1u is used. Unlike the top-label calibrator, the confidence calibrator merges all classes together into a single dataset D1 “ Ť lPrLsDl. To achieve class-wise calibration, Algorithm 3 also solves L calibration problems, but these correspond to satisfying P pY “ l | hlpXqq “ hlpXq. Unlike top-label calibration, the dataset Dl for class-wise calibration contains all the Xi’s (even if cpXiq ‰ l), and hl is passed to At0,1u instead of h. Also, unlike confidence calibration, Yi is replaced with 1 tYi “ lu instead of 1 tYi “ cpXiqu. The overall process is similar to reducing multiclass classification to L 1-v-all binary classification problem, but our motivation is intricately tied to the notion of class-wise calibration. 
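The confidence, top-label, and class-wise calibrators can be sketched compactly on top of any binary calibrator with the two-argument interface sketched above (illustrative Python under simplifying assumptions: scores is an n × L array of base-model probabilities, class indices run over 0, . . . , L−1, and, for the top-label case, every class is predicted at least once in the calibration data):

```python
import numpy as np

def confidence_calibrator(scores, labels, binary_cal):
    # Algorithm 1 (sketch): one binary problem on (top probability, correctness indicator).
    c, g = scores.argmax(axis=1), scores.max(axis=1)
    h = binary_cal(g, (labels == c).astype(int))
    return lambda s: (s.argmax(axis=1), h(s.max(axis=1)))

def top_label_calibrator(scores, labels, binary_cal):
    # Algorithm 2 (sketch): one binary problem per *predicted* class, using only the
    # calibration points whose top label is that class.
    c, g, L = scores.argmax(axis=1), scores.max(axis=1), scores.shape[1]
    h_l = [binary_cal(g[c == l], (labels[c == l] == l).astype(int)) for l in range(L)]
    def predict(s):
        cs, gs = s.argmax(axis=1), s.max(axis=1)
        return cs, np.array([h_l[ci](np.array([gi]))[0] for ci, gi in zip(cs, gs)])
    return predict

def class_wise_calibrator(scores, labels, binary_cal):
    # Algorithm 3 (sketch): one binary problem per class, using all calibration points
    # and the class-l probability column; no normalization across classes.
    L = scores.shape[1]
    h_l = [binary_cal(scores[:, l], (labels == l).astype(int)) for l in range(L)]
    return lambda s: np.column_stack([h_l[l](s[:, l]) for l in range(L)])
```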
Most popular empirical works that have discussed binary calibrators for multiclass calibration have done so using the normalized calibrator, Algorithm 4. This is almost identical to Algorithm 3, except that there is an additional normalization step (line 6 of Algorithm 4). This normalization was first proposed by Zadrozny and Elkan (2002, Section 5.2), and has been used unaltered by most other works1 where the goal has been to simply compare direct multiclass calibrators such as temperature scaling, Dirichlet scaling, etc., to a calibrator based on binary methods (for instance, see Section 4.2 of Guo et al. (2017)). In contrast to these papers, we investigate multiple M2B reductions in an effort to identify the right reduction of multiclass calibration to binary calibration. To summarize, the M2B characterization immediately yields a novel and different calibrator for every M2B notion. In the following section, we instantiate M2B calibrators on the binary calibrator of histogram binning (HB), leading to two new algorithms: top-label-HB and class-wise-HB, that achieve strong empirical results and satisfy distribution-free calibration guarantees. 1the only exception we are aware of is the recent work of Patel et al. (2021) who also suggest skipping normalization (see their Appendix A1); however they use a common I-Max binning scheme across classes, whereas in Algorithm 3 the predictor hl for each class is learnt completely independently of other classes 4 EXPERIMENTS: M2B CALIBRATION WITH HISTOGRAM BINNING Histogram binning or HB was proposed by Zadrozny and Elkan (2001) with strong empirical results for binary calibration. In HB, a base binary calibration model g : X Ñ r0, 1s is used to partition the calibration data into a number of bins so that each bin has roughly the same number of points. Then, for each bin, the probability of Y “ 1 is estimated using the empirical distribution on the calibration data. This estimate forms the new calibrated prediction for that bin. Recently, Gupta and Ramdas (2021) showed that HB satisfies strong distribution-free calibration guarantees, which are otherwise impossible for scaling methods (Gupta et al., 2020). Despite these results for binary calibration, studies for multiclass calibration have reported that HB typically performs worse than scaling methods such as temperature scaling (TS), vector scaling (VS), and Dirichlet scaling (DS) (Kull et al., 2019; Roelofs et al., 2020; Guo et al., 2017). In our experiments, we find that the issue is not HB but the M2B wrapper used to produce the HB baseline. With the right M2B wrapper, HB beats TS, VS, and DS. A number of calibrators have been proposed recently (Zhang et al., 2020; Rahimi et al., 2020; Patel et al., 2021; Gupta et al., 2021), but VS and DS continue to remain strong baselines which are often close to the best in these papers. We do not compare to each of these calibrators; our focus is on the M2B reduction and the message that the baselines dramatically improve with the right M2B wrapper. We use three metrics for comparison: the first is top-label-ECE or TL-ECE (defined in (4)), which we argued leads to a more meaningful comparison compared to conf-ECE. Second, we consider the more stringent maximum-calibration-error (MCE) metric that assesses the worst calibration across predictions (see more details in Appendix E.3). For top-label calibration MCE is given by TL-MCEpc, hq :“ maxlPrLs suprPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r|. 
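For reference, here is a minimal sketch of the binary HB routine just described (a simplification with quantile-based bin edges and the empirical Y = 1 frequency per bin; it omits the tie-breaking details of the algorithm of Gupta and Ramdas (2021) and any handling of empty bins):

```python
import numpy as np

def binary_histogram_binning(scores, labels, n_bins):
    """Equal-mass histogram binning (sketch): bin edges are empirical quantiles of the
    calibration scores; each bin predicts the empirical frequency of Y = 1 within it."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # every score falls into some bin
    assign = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, n_bins - 1)
    bin_mean = np.array([labels[assign == b].mean() for b in range(n_bins)])

    def h(new_scores):
        b = np.searchsorted(edges, np.asarray(new_scores, dtype=float), side="right") - 1
        return bin_mean[np.clip(b, 0, n_bins - 1)]
    return h
```

To plug this into the M2B wrapper sketches above, the number of bins can be fixed in advance, e.g. functools.partial(binary_histogram_binning, n_bins=15).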
To assess classwise calibration, we use class-wise-ECE defined as the average calibration error across classes: CW-ECEpc,hq :“ L´1 řL l“1 EX |P pY “ l | hlpXqq ´ hlpXq|. All ECE/MCE estimation is performed as described in Remark 1. For further details, see Appendix E.2. Formal algorithm and theoretical guarantees. Top-label-HB (TL-HB) and class-wise-HB (CWHB) are explicitly stated in Appendices B and C respectively; these are instantiations of the top-label calibrator and class-wise calibrator with HB. N-HB is the the normalized calibrator (Algorithm 4) with HB, which is the same as CW-HB, but with an added normalization step. In the Appendix, we extend the binary calibration guarantees of Gupta and Ramdas (2021) to TL-HB and CW-HB (Theorems 1 and 2). We informally summarize one of the results here: if there are at least k calibration points-per-bin, then the expected-ECE is bounded as: E r(TL-) or (CW-) ECEs ď a 1{2k, for TL-HB and CW-HB respectively. The outer E above is an expectation over the calibration data, and corresponds to the randomness in the predictor learnt on the calibration data. Note that the ECE itself is an expected error over an unseen i.i.d. test-point pX,Y q „ P . Experimental details. We experimented on the CIFAR-10 and CIFAR-100 datasets, which have 10 and 100 classes each. The base models are deep-nets with the following architectures: ResNet50, Resnet-110, Wide-ResNet-26-10 (WRN) (Zagoruyko and Komodakis, 2016), and DenseNet121 (Huang et al., 2017). Both CIFAR datasets consist of 60K (60,000) points, which are split as 45K/5K/10K to form the train/validation/test sets. The validation set was used for post-hoc calibration and the test set was used for evaluation through ECE/MCE estimates. Instead of training new models, we used the pre-trained models of Mukhoti et al. (2020). We then ask: “which post-hoc calibrator improves the calibration the most?” We used their Brier score and focal loss models in our experiments (Mukhoti et al. (2020) report that these are the empirically best performing loss functions). All results in the main paper are with Brier score, and results with focal loss are in Appendix E.4. Implementation details for TS, VS, and DS are in Appendix E. Findings. In Table 2, we report the binned ECE and MCE estimates when B “ 15 bins are used by HB, and for ECE estimation. We make the following observations: (a) For TL-ECE, N-HB is the best performing method for both CIFAR-10 and CIFAR-100. While most methods perform similarly across architectures for CIFAR-10, there is high variation in CIFAR-100. DS is the worst performing method on CIFAR-100, but TL-HB also performs poorly. We believe that this could be because the data splitting scheme of the TL-calibrator (line 4 of Algorithm 2) splits datasets across the predicted classes, and some classes in CIFAR-100 occur very rarely. This is further discussed in Appendix E.6. (b) For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform slightly better than TL-HB. Since HB ensures that each bin gets roughly the same number of points, the predictions are well calibrated across bins, leading to smaller TL-MCE. A similar observation was also made by Gupta and Ramdas (2021). (c) For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures. The N-HB method which has been used in many CW-ECE baseline experiments performs terribly. In other words, skipping the normalization step leads to a large improvement in CW-ECE. 
This observation is one of our most striking findings. To shed further light on this, we note that the distribution-free calibration guarantees for CW-HB shown in Appendix C no longer hold post-normalization. Thus, both our theory and experiments indicate that skipping normalization improves CW-ECE performance. Additional experiments in the Appendix. In Appendix E.5, we report each of the results in Tables 2 and 3 with the number of bins taking every value in the range r5, 25s. Most observations remain the same under this expanded study. In Appendix B.2, we consider top-label calibration for the class imbalanced COVTYPE-7 dataset, and show that TL-HB adapts to tail/infrequent classes. 5 CONCLUSION We make two contributions to the study of multiclass calibration: (i) defining the new notion of top-label calibration which enforces a natural minimal requirement on a multiclass predictor—the probability score for the top class prediction should be calibrated; (ii) developing a multiclass-tobinary (M2B) framework which posits that various notions of multiclass calibration can be achieved via reduction to binary calibration, balancing practical utility with statistically tractability. Since it is important to identify appropriate notions of calibration in any structured output space (Kuleshov et al., 2018; Gneiting et al., 2007), we anticipate that the philosophy behind the M2B framework could find applications in other structured spaces. 6 REPRODUCIBILITY STATEMENT Some reproducibility desiderata, such as external code and libraries that were used are summarized in Appendix E.1. All code to generate results with the CIFAR datasets is attached in the supplementary material. Our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ (corresponding to ‘brier score’ and ‘focal loss adaptive 53’ at the above link). By avoiding training of new deep-net models with multiple hyperparameters, we also consequently avoided selection biases that inevitably creep in due to test-data-peeking. The predictions of the pre-trained models were obtained using the code at https://github.com/torrvision/focal calibration. 7 ETHICS STATEMENT Post-hoc calibration is a post-processing step that can be applied on top of miscalibrated machine learning models to increase their reliability. As such, we believe our work should improve the transparency and explainability of machine learning models. However, we outline a few limitations. Post-hoc calibration requires keeping aside a fresh, representative dataset, that was not used for training. If this dataset is too small, the resulting calibration guarantee can be too weak to be meaningful in practice. Further, if the test data distribution shifts in significant ways, additional corrections may be needed to recalibrate (Gupta et al., 2020; Podkopaev and Ramdas, 2021). A well calibrated classifier is not necessarily an accurate or a fair one, and vice versa (Kleinberg et al., 2017). Deploying calibrated models in critical applications like medicine, criminal law, banking, etc. does not preclude the possibility of the model being frequently wrong or unfair. ACKNOWLEDGEMENTS This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562 (Towns et al., 2014). Specifically, it used the Bridges-2 system, which is supported by NSF award number ACI-1928147, at the Pittsburgh Supercomputing Center (PSC). 
CG’s research was supported by the generous Bloomberg Data Science Ph.D. Fellowship. CG would like to thank Saurabh Garg and Youngseog Chung for interesting discussions, and Viveka Kulharia for help with the focal calibration repository. Finally, we thank Zack Lipton, the ICLR reviewers, and the ICLR area chair, for excellent feedback that helped improve the writing of the paper. A ADDENDUM TO SECTION 3 “CALIBRATION ALGORITHMS FROM CALIBRATION METRICS” In Section 3, we introduced the concept of M2B calibration, and showed that popular calibration notions are in fact M2B notions (Table 1). We showed how the calibration notions of top-label, class-wise, and confidence calibration can be achieved using a corresponding M2B calibrator. In the following subsection, we present the general-purpose wrapper Algorithm 5 that can be used to derive an M2B calibrator from any given M2B calibration notion that follows the rubric specified by Table 1. In Appendix A.2, we illustrate the philosophy of M2B calibration using a simple example with a dataset that contains 6 points. This example also illustrates the top-label-calibrator, the classwise-calibrator, and the confidence-calibrator. A.1 GENERAL-PURPOSE M2B CALIBRATOR Denote some M2B notion of calibration as C. Suppose C corresponds toK binary calibration claims. The outer for-loop in Algorithm 5, runs over each such claim in C. For example, for class-wise calibration, K “ L and for confidence and top-label calibration, K “ 1. Corresponding to each claim, there is a probability-predictor that the conditioning is to be done on, such as g or gl or gpkq. Additionally, there may be conditioning on the label predictor such as c or cpkq. These are denoted as prc, rgq in Algorithm 5. For confidence and top-label calibration, rc “ c, the top-label-confidence. For class-wise calibration, when rg “ gl, we have rcp¨q “ l. If there is no label conditioning in the calibration notion, such as in confidence, top-K-confidence, and class-wise calibration, then we enter the if-condition inside the for-loop. Here hk is learnt using a single calibration dataset and a single call to At0,1u. Otherwise, if there is label conditioning, such as in top-label and top-K-label calibration, we enter the else-condition, where we learn a separate hk,l for every l P rLs, using a different part of the dataset Dl in each case. Then hkpxq equals hk,lpxq if rcpxq “ l. Finally, since C is verifying a sequence of claims, the output of Algorithm 5 is a sequence of predictors. Each original prediction prc, rgq corresponding to the C is replaced with prc, hkq. This is the output of the M2B calibrator. Note that the rc values are not changed. This output appears abstract, but normally, it can be represented in an interpretable way. For example, for class-wise calibration, the output is just a sequence of predictors, one for each class: ph1, h2, . . . , hLq. This general-purpose M2B calibrators can be used to achieve any M2B calibration notion: toplabel calibration (Algorithm 2), class-wise calibration (Algorithm 3), confidence calibration (Algorithm 1), top-K-label calibration (Algorithm 6), and top-K-confidence calibration (Algorithm 7). A.2 AN EXAMPLE TO ILLUSTRATE THE PHILOSOPHY OF M2B CALIBRATION Figure 3a shows the predictions of a given base model g on a given dataset D. Suppose D is a test set, and we are testing confidence calibration. Then the only predictions that matter are the top-predictions corresponding to the shaded values. 
These are stripped out and shown in Figure 3b, in the gp¨q row. Note that the indicator 1 tY “ cp¨qu is sufficient to test confidence calibration and given this, the cpXq are not needed. Thus the second row in Figure 3b only shows these indicators. Algorithm 8: Top-label histogram binning Input: Base multiclass predictor g, calibration data D “ pX1, Y1q, . . . , pXn, Ynq Hyperparameter: # points per bin k P N (say 50), tie-breaking parameter δ ą 0 (say 10´10) Output: Top-label calibrated predictor pc, hq 1 cÐ classifier or top-class based on g; 2 g Ð top-class-probability based on g; 3 for lÐ 1 to L do 4 Dl Ð tpXi,1 tYi “ luq : cpXiq “ lqu and nl Ð |Dl|; 5 hl Ð Binary-histogram-binningpg,Dl, tnl{ku , δq; 6 end 7 hp¨q Ð hcp¨qp¨q; 8 return pc, hq; Verifying top-label calibration is similar (Figure 3c), but in addition to the predictions gp¨q, we also retain the values of cp¨q. Thus the gp¨q and 1 tY “ cp¨qu are shown, but split across the 4 classes. Class-wise calibration requires access to all the predictions, however, each class is considered separately as indicated by Figure 3d. Canonical calibration looks at the full prediction vector in each case. However, in doing so, it becomes unlikely that gpxq “ gpyq for any x,y since the number of values that g can take is now exponential. Let us turn this around and suppose that D were a calibration set instead of a test set. We argue that D should be used in the same way, whether testing or calibrating. Thus, if confidence calibration is to be achieved, we should focus on the pg,1 tY “ cp¨quq corresponding to g. If top-label calibration is to be achieved, we should use the pc, gq values. If class-wise calibration is to be achieved, we should look at each gl separately and solve L different problems. Finally, for canonical calibration, we must look at the entire g vector as a single unit. This is the core philosophy behind M2B calibrators: if binary claims are being verified, solve binary calibration problems. B DISTRIBUTION-FREE TOP-LABEL CALIBRATION USING HISTOGRAM BINNING In this section, we formally describe histogram binning (HB) with the top-label-calibrator (Algorithm 2) and provide methodological insights through theory and experiments. B.1 FORMAL ALGORITHM AND THEORETICAL GUARANTEES Algorithm 8 describes the top-label calibrator formally using HB as the binary calibration algorithm. The function called in line 5 is Algorithm 2 of Gupta and Ramdas (2021). The first argument in the call is the top-label confidence predictor, the second argument is the dataset to be used, the third argument is the number of bins to be used, and the fourth argument is a tie-breaking parameter (described shortly). While previous empirical works on HB fixed the number of bins per class, the analysis of Gupta and Ramdas (2021) suggests that a more principled way of choosing the number of bins is to fix the number of points per bin. This is parameter k of Algorithm 8. Given k, the number of bins is decided separately for every class as tnl{ku where nl is the number of points predicted as class l. This choice is particularly relevant for top-label calibration since nl can be highly non-uniform (we illustrate this empirically in Section B.2). 
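A compact Python sketch of Algorithm 8 (illustrative: it omits the tie-breaking parameter δ, reuses the binary_histogram_binning sketch from Section 4, and assumes every class appears at least once as a top label in the calibration data; classes are indexed 0, . . . , L−1):

```python
import numpy as np

def top_label_histogram_binning(scores, labels, k=50):
    # For each class l, run binary HB on the points whose top label is l,
    # with floor(n_l / k) bins (and at least one bin if n_l < k).
    c, g, L = scores.argmax(axis=1), scores.max(axis=1), scores.shape[1]
    h_l = []
    for l in range(L):
        idx = np.where(c == l)[0]
        n_bins = max(1, len(idx) // k)
        h_l.append(binary_histogram_binning(g[idx], (labels[idx] == l).astype(int), n_bins))
    def predict(s):
        cs, gs = s.argmax(axis=1), s.max(axis=1)
        return cs, np.array([h_l[ci](np.array([gi]))[0] for ci, gi in zip(cs, gs)])
    return predict
```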
The tie-breaking parameter δ can be arbitrarily small (like 10´10), and its significance is mostly theoretical—it is used to ensure that outputs of different bins are not exactly identical by chance, so that conditioning on a calibrated probability output is equivalent to conditioning on a bin; this leads to a cleaner theoretical guarantee. HB recalibrates g to a piecewise constant function h that takes one value per bin. Consider a specific bin b; the h value for this bin is computed as the average of the indicators t1 tYi “ cpXiqu : Xi P Bin bu. This is an estimate of the bias of the bin P pY “ cpXq | X P Bin bq. A concentration inequality can then be used to bound the deviation between the estimate and the true bias to prove distribution-free calibration guarantees. In the forthcoming Theorem 1, we show high-probability and in-expectation bounds on the the TL-ECE of HB. Additionally, we show marginal and condi- tional top-label calibration bounds, defined next. These notions were proposed in the binary calibration setting by Gupta et al. (2020) and Gupta and Ramdas (2021). In the definition below, A refers to any algorithm that takes as input calibration data D and an initial classifier g to produce a top-label predictor c and an associated probability map h. Algorithm 8 is an example of A. Definition 1 (Marginal and conditional top-label calibration). Let ε, α P p0, 1q be some given levels of approximation and failure respectively. An algorithm A : pg,Dq ÞÑ pc, hq is (a) pε, αq-marginally top-label calibrated if for every distribution P over X ˆ rLs, P ´ |P pY “ cpXq | cpXq, hpXqq ´ hpXq| ď ε ¯ ě 1´ α. (8) (b) pε, αq-conditionally top-label calibrated if for every distribution P over X ˆ rLs, P ´ @ l P rLs, r P Rangephq, |P pY “ cpXq | cpXq “ l, hpXq “ rq ´ r| ď ε ¯ ě 1´ α. (9) To clarify, all probabilities are taken over the test point pX,Y q „ P , the calibration data D „ Pn, and any other inherent algorithmic randomness in A; these are all implicit in pc, hq “ ApD,gq. Marginal calibration asserts that with high probability, on average over the distribution of D, X , P pY “ cpXq | cpXq, hpXqq is at most ε away from hpXq. In comparison, TL-ECE is the average of these deviations over X . Marginal calibration may be a more appropriate metric for calibration than TL-ECE if we are somewhat agnostic to probabilistic errors less than some fixed threshold ε (like 0.05). Conditional calibration is a strictly stronger definition that requires the deviation to be at most ε for every possible prediction pl, rq, including rare ones, not just on average over predictions. This may be relevant in medical settings where we want the prediction on every patient to be reasonably calibrated. Algorithm 8 satisfies the following calibration guarantees. Theorem 1. Fix hyperparameters δ ą 0 (arbitrarily small) and points per bin k ě 2, and assume nl ě k for every l P rLs. Then, for any α P p0, 1q, Algorithm 8 is pε1, αq-marginally and pε2, αqconditionally top-label calibrated for ε1 “ d logp2{αq 2pk ´ 1q ` δ, and ε2 “ d logp2n{kαq 2pk ´ 1q ` δ. (10) Further, for any distribution P over X ˆ rLs, we have P pTL-ECEpc, hq ď ε2q ě 1 ´ α, and E rTL-ECEpc, hqs ď a 1{2k ` δ. The proof in Appendix H is a multiclass top-label adaption of the guarantee in the binary setting by Gupta and Ramdas (2021). The rOp1{ ? kq dependence of the bound relies on Algorithm 8 delegating at least k points to every bin. Since δ can be chosen to be arbitrarily small, setting k “ 50 gives roughly ED rTL-ECEphqs ď 0.1. 
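As a quick numeric check of this constant, the in-expectation bound √(1/(2k)) (ignoring the arbitrarily small δ) evaluates as follows for a few values of k:

```python
import math
for k in (50, 100, 150):
    print(k, round(math.sqrt(1 / (2 * k)), 3))   # 50 -> 0.1, 100 -> 0.071, 150 -> 0.058
```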
Based on this, we suggest setting k ∈ [50, 150] in practice.

B.2 TOP-LABEL HISTOGRAM BINNING ADAPTS TO CLASS IMBALANCED DATASETS

The principled methodology of fixing the number of points per bin reaps practical benefits. Figure 4 illustrates this through the performance of HB on the class-imbalanced COVTYPE-7 dataset (Blackard and Dean, 1999), with class ratio approximately 36% for class 1 and 49% for class 2. The entire dataset has 581,012 points, which are divided into train and test sets in the ratio 70:30. Then, 10% of the training points are held out for calibration (n = |D| = 40,671). The base classifier is a random forest (RF) trained on the remaining training points (it achieves around 95% test accuracy). The RF is then recalibrated using HB. The top-label reliability diagrams in Figure 4a illustrate that the original RF (in orange) is underconfident on both the most likely and least likely classes. Additional figures in Appendix F show that the RF is always underconfident no matter which class is predicted as the top label. HB (in green) recalibrates the RF effectively across all classes. Validity plots (Gupta and Ramdas, 2021) estimate how the LHS of condition (8), denoted as V(ε), varies with ε. We observe that for all ε, V(ε) is higher for HB. The rightmost barplot compares the estimated TL-ECE for all classes, and also shows the class proportions.

[Figure 4 panels: top-label reliability diagrams and validity plots for classes 2 and 4, and a per-class bar plot of top-label-ECE together with the class ratios, comparing the random forest and histogram binning.]
(a) Top-label histogram binning (Algorithm 8) with k = 100 points per bin. Class 4 has only 183 calibration points. Algorithm 8 adapts and uses only a single bin to ensure that the TL-ECE on class 4 is comparable to the TL-ECE on class 2. Overall, the random forest classifier has significantly higher TL-ECE for the least likely classes (4, 5, and 6), but the post-calibration TL-ECE using binning is quite uniform.
(b) Histogram binning with B = 50 bins for every class. Compared to Figure 4a, the post-calibration TL-ECE for the most likely classes decreases while the TL-ECE for the least likely classes increases.
Figure 4: Recalibration of a random forest using histogram binning on the class imbalanced COVTYPE-7 dataset (class 2 is roughly 100 times likelier than class 4).

While the original RF is significantly miscalibrated for
By ensuring a fixed number of calibration points per bin, Algorithm 8 obtains relatively uniform top-label calibration across classes (Figure 4a). In comparison, if a fixed number of bins are chosen for all classes, the performance deteriorates for the least likely classes (Figure 4b). the less likely classes, HB has a more uniform miscalibration across classes. Figure 4b considers a slightly different HB algorithm where the number of points per class is not adapted to the number of times the class is predicted, but is fixed beforehand (this corresponds to replacing tnl{ku in line 5 of Algorithm 8 with a fixed B P N). While even in this setting there is a drop in the TL-ECE compared to the RF model, the final profile is less uniform compared to fixing the number of points per bin. The validity plots and top-label reliability diagrams for all the 7 classes are reported in Figure 9 in Appendix F, along with some additional observations. C DISTRIBUTION-FREE CLASS-WISE CALIBRATION USING HISTOGRAM BINNING In this section, we formally describe histogram binning (HB) with the class-wise-calibrator (Algorithm 3) and provide theoretical guarantees for it. The overall procedure is called class-wise-HB. Further details and background on HB are contained in Appendix B, where top-label-HB is described. C.1 FORMAL ALGORITHM To achieve class-wise calibration using binary routines, we learn each component function hl in a 1- v-all fashion as described in Algorithm 3. Algorithm 9 contains the pseudocode with the underlying routine as binary HB. To learn hl, we use a dataset Dl, which unlike top-label HB (Algorithm 8), contains Xi even if cpXiq ‰ l. However the Yi is replaced with 1 tYi “ lu. The number of points per bin kl can be different for different classes, but generally one would set k1 “ . . . “ kL “ k P N. Larger values of kl will lead to smaller εl and δl in the guarantees, at loss of sharpness since the number of bins tn{klu would be smaller. Algorithm 9: Class-wise histogram binning Input: Base multiclass predictor g : X Ñ ∆L´1, calibration data D “ pX1, Y1q, . . . , pXn, Ynq Hyperparameter: # points per bin k1, k2, . . . , kl P NL (say each kl “ 50), tie-breaking parameter δ ą 0 (say 10´10) Output: L class-wise calibrated predictors h1, h2, . . . , hL 1 for lÐ 1 to L do 2 Dl Ð tpXi,1 tYi “ luq : i P rnsqu; 3 hl Ð Binary-histogram-binningpgl,Dl, tn{klu , δq; 4 end 5 return ph1, h2, . . . , hLq; C.2 CALIBRATION GUARANTEES A general algorithm A for class-wise calibration takes as input calibration data D and an initial classifier g to produce an approximately class-wise calibrated predictor h : X Ñ r0, 1sL. Define the notation ε “ pε1, ε2, . . . , εLq P p0, 1qL and α “ pα1, α2, . . . , αLq P p0, 1qL. Definition 2 (Marginal and conditional class-wise calibration). Let ε,α P p0, 1qL be some given levels of approximation and failure respectively. An algorithm A : pg,Dq ÞÑ h is (a) pε,αq-marginally class-wise calibrated if for every distribution P over X ˆ rLs and for every l P rLs P ´ |P pY “ l | hlpXqq ´ hlpXq| ď εl ¯ ě 1´ αl. (11) (b) pε,αq-conditionally class-wise calibrated if for every distribution P over X ˆ rLs and for every l P rLs, P ´ @r P Rangephlq, |P pY “ l | hlpXq “ rq ´ r| ď εl ¯ ě 1´ αl. (12) Definition 2 requires that each hl is pεl, αlq calibrated in the binary senses defined by Gupta et al. (2021, Definitions 1 and 2). From Definition 2, we can also uniform bounds that hold simultaneously over every l P rLs. Let α “ řL l“1 αl and ε “ maxlPrLs εl. 
Then (11) implies P ´ @l P rLs, |P pY “ l | hlpXqq ´ hlpXq| ď ε ¯ ě 1´ α, (13) and (12) implies P ´ @l P rLs, r P Rangephlq, |P pY “ l | hlpXq “ rq ´ r| ď ε ¯ ě 1´ α. (14) The choice of not including the uniformity over L in Definition 2 reveals the nature of our class-wise HB algorithm and the upcoming theoretical guarantees: (a) we learn the hl’s separately for each l and do not combine the learnt functions in any way (such as normalization), (b) we do not combine the calibration inequalities for different rLs in any other way other than a union bound. Thus the only way we can show (13) (or (14)) is by using a union bound over (11) (or (12)). We now state the distribution-free calibration guarantees satisfied by Algorithm 9. Theorem 2. Fix hyperparameters δ ą 0 (arbitrarily small) and points per bin k1, k2, . . . , kl ě 2, and assume nl ě kl for every l P rLs. Then, for every l P rLs, for any αl P p0, 1q, Algorithm 9 is pεp1q,αq-marginally and pεp2q,αq-conditionally class-wise calibrated with ε p1q l “ d logp2{αlq 2pkl ´ 1q ` δ, and εp2ql “ d logp2n{klαlq 2pkl ´ 1q ` δ. (15) Further, for any distribution P over X ˆ rLs, (a) P pCW-ECEpc, hq ď maxlPrLs ε p2q l q ě 1´ ř lPrLs αl, and (b) E rCW-ECEpc, hqs ď maxlPrLs a 1{2kl ` δ. Theorem 2 is proved in Appendix H. The proof follows by using the result of Gupta and Ramdas (2021, Theorem 2), derived in the binary calibration setting, for each hl separately. Gupta and Ramdas (2021) proved a more general result for general `p-ECE bounds. Similar results can also be derived for the suitably defined `p-CW-ECE. As discussed in Section 3.2, unlike previous works (Zadrozny and Elkan, 2002; Guo et al., 2017; Kull et al., 2019), Algorithm 9 does not normalize the hl’s. We do not know how to derive Theorem 2 style results for a normalized version of Algorithm 9. D FIGURES FOR APPENDIX E Appendix E begins on page 23. The relevant figures for Appendix E are displayed on the following pages. E ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS FOR CIFAR-10 AND CIFAR-100 We present additional details and results to supplement the experiments with CIFAR-10 and CIFAR100 in Sections 2 and 4 of the main paper. E.1 EXTERNAL LIBRARIES USED All our base models were pre-trained deep-net models generated by Mukhoti et al. (2020), obtained from www.robots.ox.ac.uk/„viveka/focal calibration/ and used along with the code at https://github.com/torrvision/focal calibration to obtain base predictions. We focused on the models trained with Brier score and focal loss, since it was found to perform the best for calibration. All reports in the main paper are with the Brier score; in Appendix E.4, we report corresponding results with focal loss. We also used the code at https://github.com/torrvision/focal calibration for temperature scaling (TS). For vector scaling (VS) and Dirichlet scaling (DS), we used the code of Kull et al. (2019), hosted at https://github.com/dirichletcal/dirichlet python. For VS, we used the file dirichletcal/calib/vectorscaling.py, and for DS, we used the file dirichletcal/calib/fulldirichlet.py. No hyperparameter tuning was performed in any of our histogram binning experiments or baseline experiments; default settings were used in every case. The random seed was fixed so that every run of the experiment gives the same result. In particular, by relying on pre-trained models, we avoid training new deep-net models with multiple hyperparameters, thus avoiding any selection biases that may arise due to test-data peeking across multiple settings. 
E.2 FURTHER COMMENTS ON BINNING FOR ECE ESTIMATION As mentioned in Remark 1, ECE estimates for all methods except TL-HB and CW-HB was done using fixed-width bins r0, 1{Bq, r1{B, 2{Bq, . . . r1´ 1{B, 1s for various values of B P r5, 25s. For TL-HB and CW-HB, B is the number of bins used for each call to binary HB. For TL-HB, note that we actually proposed that the number of bins-per-class should be fixed; see Section B.2. However, for ease of comparison to other methods, we simply set the number of bins to B for each call to binary HB. That is, in line 5, we replace tnl{ku with B. For CW-HB, we described Algorithm 9 with different values of kl corresponding to the number of bins per class. For the CIFAR-10 and CIFAR-100 comparisons, we set each k1 “ k2 “ . . . “ kL “ k, where k P N satisfies tn{ku “ B. Tables 2,3, 4, and 5 report estimates with B “ 15, which has been commonly used in many works (Guo et al., 2017; Kull et al., 2019; Mukhoti et al., 2020). Corresponding to each table, we have a figure where ECE estimates with varying B are reported to strengthen conclusions: these are Figure 5,7, 6, and 8 respectively. Plugin estimates of the ECE were used, same as Guo et al. (2017). Further binning was not done for TL-HB and CW-HB since the output is already discrete and sufficiently many points take each of the predicted values. Note that due to Jensen’s inequality, any further binning will only decrease the ECE estimate (Kumar et al., 2019). Thus, using unbinned estimates may give TL-HB and CW-HB a disadvantage. E.3 SOME REMARKS ON MAXIMUM-CALIBRATION-ERROR (MCE) Guo et al. (2017) defined MCE with respect to confidence calibration, as follows: conf-MCEpc, hq :“ sup rPRangephq |P pY “ cpXq | hpXq “ rq ´ r| . (16) Conf-MCE suffers from the same issue illustrated in Figure 2 for conf-ECE. In Figure 1b, we looked at the reliability diagram within two bins. These indicate two of the values over which the supremum is taken in equation (16): these are the Y-axis distances between the ‹ markers and the X “ Y line for bins 6 and 10 (both are less than 0.02). On the other hand, the effective maximum miscalibration for bin 6 is roughly 0.15 (for class 1), and roughly 0.045 (for class 4), and the maximum should be taken with respect to these values across all bins. To remedy the underestimation of the effective MCE, we can consider the top-label-MCE, defined as TL-MCEpc, hq :“ max lPrLs sup rPRangephq |P pY “ l | cpXq “ l, hpXq “ rq ´ r| . (17) Interpreted in words, the TL-MCE assesses the maximum deviation between the predicted and true probabilities across all predictions and all classes. Following the same argument as in the proof of Proposition 4, it can be shown that for any c, h, conf-MCEpc, hq ď TL-MCEpc, hq. The TL-MCE is closely related to conditional top-label calibration (Definition 1b). Clearly, an algorithm is pε, αqconditionally top-label calibrated if and only if for every distribution P , P pTL-MCEpc, hq ď εq ě 1´ α. Thus the conditional top-label calibration guarantee of Theorem 1 implies a high probability bound on the TL-MCE as well. E.4 TABLE 2 AND 3 STYLE RESULTS WITH FOCAL LOSS Results for top-label-ECE and top-label-MCE with the base deep net model being trained using focal loss are reported in Table 4. Corresponding results for class-wise-ECE are reported in Table 5. The observations are similar to the ones reported for Brier score: 1. For TL-ECE, TL-HB is either the best or close to the best performing method on CIFAR10, but suffers on CIFAR-100. 
This phenomenon is discussed further in Appendix E.6. N-HB is the best or close to the best for both CIFAR-10 and CIFAR-100. 2. For TL-MCE, TL-HB is the best performing method on CIFAR-10, by a huge margin. For CIFAR-100, TS or VS perform better than TL-HB, but not by a huge margin. 3. For CW-ECE, CW-HB is the best performing method across the two datasets and all four architectures. E.5 ECE AND MCE ESTIMATES WITH VARYING NUMBER OF BINS Corresponding to each entry in Tables 2 and 4, we perform an ablation study with the number of bins varying as B P r5, 25s. This is in keeping with the findings of Roelofs et al. (2020) that the ECE/MCE estimate can vary with different numbers of bins, along with the relative performance of the various models. The results are reported in Figure 5 (ablation of Table 2) and Figure 7 (ablation of Table 3). The captions of these figures contain further details on the findings. Most findings are similar to those in the main paper, but the findings in the tables are strengthened through this ablation. The same ablations are performed for focal loss as well. The results are reported in Figure 6 (ablation of Metric Dataset Architecture Base TS VS DS N-HB CW-HB Table 4) and Figure 8 (ablation of Table 5). The captions of these figures contain further details on the findings. The ablation results in the figures support those in the tables. E.6 ANALYZING THE POOR PERFORMANCE OF TL-HB ON CIFAR-100 CIFAR-100 is an imbalanced dataset with 100 classes and 5000 points for validation/calibration (as per the default splits). Due to random subsampling, the validation split we used had one of the classes predicted as the top-label only 31 times. Thus, based on Theorem 1, we do not expect HB to have small TL-ECE. This is confirmed by the empirical results presented in Tables 2/4, and Figures 5b/6b. We observe that HB has higher estimated TL-ECE than all methods except DS, for most values of the number of bins. The performance of TL-HB for TL-MCE however is much much closer to the other methods since HB uses the same number of points per bin, ensuring that the predictions are somewhat equally calibrated across bins (Figures 5d/6d). In comparison, for CWECE, CW-HB is the best performing method. This is because in the class-wise setting, 5000 points are available for recalibration irrespective of the class, which is sufficient for HB. The deterioration in performance of HB when few calibration points are available was also observed in the binary setting by Gupta and Ramdas (2021, Appendix C). Niculescu-Mizil and Caruana (2005) noted in the conclusion of their paper that Platt scaling (Platt, 1999), which is closely related to TS, performs well when the data is small, but another nonparametric binning method, isotonic regression (Zadroz
1. What is the focus of the paper regarding classifier calibration? 2. What are the shortcomings of the commonly used confidence calibration method according to the authors? 3. What is the alternative proposal by the authors, and how does it differ from confidence calibration? 4. What are the advantages of the proposed method, particularly in comparison to confidence calibration? 5. Are there any limitations or potential issues with the approach suggested by the authors, especially in certain types of classification problems?
Summary Of The Paper Review
Summary Of The Paper

This paper discusses the topic of classifier calibration in multi-class classification. The focus is on obtaining well-calibrated probabilities for the top-predicted classes. The authors give arguments, examples and experimental results to show that the commonly-used confidence calibration method suffers from a number of shortcomings that make it less useful in practice. As an alternative, top-label calibration is proposed, where calibration is analyzed on a per-class basis, when a specific class is the top class. In addition, the authors discuss shortcomings of confidence reliability diagrams, and they propose multi-class-to-binary reductions for achieving top-label calibration. In the experiments, several methods are compared on two classical image classification benchmarks.

Review

This is a pretty dense paper that contains a lot of material, but it is very well written. I do not consider myself an expert on the topic, but I could follow the flow of the paper quite well. The authors have been able to convince me of the disadvantages of confidence calibration, and the advantages of the method they introduce. To this end, Example 1 and the case study in Figure 1 really helped. The experimental results also seem to support the claims of the authors. From a more conceptual perspective, the proposed algorithms are also appealing. However, I do see potential problems with the approach of the authors for classification problems with infrequent classes, as in extreme multi-class classification, where long-tail classes have very few observations. In such situations, confidence calibration will probably work much better, as one doesn't have to condition on rare classes. Conversely, for the approach of the authors, one needs many more observations per class. So, in this regard, the experiments are currently somewhat limited and probably tell an overly optimistic story. I would have liked to see some experiments with extreme classification datasets that have long-tail class distributions.
ICLR
Title Scalable Private Learning with PATE

Abstract The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a “student” model the knowledge of an ensemble of “teacher” models, with intuitive privacy provided by training teachers on disjoint data and strong privacy guaranteed by noisy aggregation of teachers’ answers. However, PATE has so far been evaluated only on simple classification tasks like MNIST, leaving unclear its utility when applied to larger-scale learning tasks and real-world datasets. In this work, we show how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors. For this, we introduce new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and prove their tighter differential-privacy guarantees. Our new mechanisms build on two insights: the chance of teacher consensus is increased by using more concentrated noise and, lacking consensus, no answer need be given to a student. The consensus answers used are more likely to be correct, offer better intuitive privacy, and incur a lower differential-privacy cost. Our evaluation shows our mechanisms improve on the original PATE on all measures, and scale to larger tasks with both high utility and very strong privacy (ε < 1.0).

1 INTRODUCTION

Many attractive applications of modern machine-learning techniques involve training models using highly sensitive data. For example, models trained on people’s personal messages or detailed medical information can offer invaluable insights into real-world language usage or the diagnoses and treatment of human diseases (McMahan et al., 2017; Liu et al., 2017). A key challenge in such applications is to prevent models from revealing inappropriate details of the sensitive data—a nontrivial task, since models are known to implicitly memorize such details during training and also to inadvertently reveal them during inference (Zhang et al., 2017; Shokri et al., 2017).

Recently, two promising, new model-training approaches have offered the hope that practical, high-utility machine learning may be compatible with strong privacy-protection guarantees for sensitive training data (Abadi et al., 2017). This paper revisits one of these approaches, Private Aggregation of Teacher Ensembles, or PATE (Papernot et al., 2017), and develops techniques that improve its scalability and practical applicability. PATE has the advantage of being able to learn from the aggregated consensus of separate “teacher” models trained on disjoint data, in a manner that both provides intuitive privacy guarantees and is agnostic to the underlying machine-learning techniques (cf. the approach of differentially-private stochastic gradient descent (Abadi et al., 2016)). In the PATE approach, multiple teachers are trained on disjoint sensitive data (e.g., different users’ data), and the teachers’ aggregate consensus answers are used in a black-box fashion to supervise the training of a “student” model. By publishing only the student model (keeping the teachers private) and by adding carefully-calibrated Laplacian noise to the aggregate answers used to train the student, the
original PATE work showed how to establish rigorous (ε, δ) differential-privacy guarantees (Papernot et al., 2017)—a gold standard of privacy (Dwork et al., 2006). However, to date, PATE has been applied to only simple tasks, like MNIST, without any realistic, larger-scale evaluation.

The techniques presented in this paper allow PATE to be applied on a larger scale to build more accurate models, in a manner that improves both on PATE’s intuitive privacy-protection due to the teachers’ independent consensus as well as its differential-privacy guarantees. As shown in our experiments, the result is a gain in privacy, utility, and practicality—an uncommon joint improvement.

The primary technical contributions of this paper are new mechanisms for aggregating teachers’ answers that are more selective and add less noise. On all measures, our techniques improve on the original PATE mechanism when evaluated on the same tasks using the same datasets, as described in Section 5. Furthermore, we evaluate both variants of PATE on a new, large-scale character recognition task with 150 output classes, inspired by MNIST. The results show that PATE can be successfully applied even to uncurated datasets—with significant class imbalance as well as erroneous class labels—and that our new aggregation mechanisms improve both privacy and model accuracy.

To be more selective, our new mechanisms leverage some pleasant synergies between privacy and utility in PATE aggregation. For example, when teachers disagree, and there is no real consensus, the privacy cost is much higher; however, since such disagreement also suggests that the teachers may not give a correct answer, the answer may simply be omitted. Similarly, teachers may avoid giving an answer where the student is already confidently predicting the right answer. Additionally, we ensure that these selection steps are themselves done in a private manner.

To add less noise, our new PATE aggregation mechanisms sample Gaussian noise, since the tails of that distribution diminish far more rapidly than those of the Laplacian noise used in the original PATE work. This reduction greatly increases the chance that the noisy aggregation of teachers’ votes results in the correct consensus answer, which is especially important when PATE is scaled to learning tasks with large numbers of output classes. However, changing the sampled noise requires redoing the entire PATE privacy analysis from scratch (see Section 4 and details in Appendix A).

Finally, of independent interest are the details of our evaluation extending that of the original PATE work. In particular, we find that the virtual adversarial training (VAT) technique of Miyato et al. (2017) is a good basis for semi-supervised learning on tasks with many classes, outperforming the improved GANs by Salimans et al. (2016) used in the original PATE work. Furthermore, we explain how to tune the PATE approach to achieve very strong privacy (ε ≈ 1.0) along with high utility, for our real-world character recognition learning task.

This paper is structured as follows: Section 2 is the related work section; Section 3 gives a background on PATE and an overview of our work; Section 4 describes our improved aggregation mechanisms; Section 5 details our experimental evaluation; Section 6 offers conclusions; and proofs are deferred to the Appendices.

2 RELATED WORK

Differential privacy is by now the gold standard of privacy.
It offers a rigorous framework whose threat model makes few assumptions about the adversary’s capabilities, allowing differentially private algorithms to effectively cope against strong adversaries. This is not the case of all privacy definitions, as demonstrated by successful attacks against anonymization techniques (Aggarwal, 2005; Narayanan & Shmatikov, 2008; Bindschaedler et al., 2017). The first learning algorithms adapted to provide differential privacy with respect to their training data were often linear and convex (Pathak et al., 2010; Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014; Hamm et al., 2016). More recently, successful developments in deep learning called for differentially private stochastic gradient descent algorithms (Abadi et al., 2016), some of which have been tailored to learn in federated (McMahan et al., 2017) settings. Differentially private selection mechanisms like GNMax (Section 4.1) are commonly used in hypothesis testing, frequent itemset mining, and as building blocks of more complicated private mechanisms. The most commonly used differentially private selection mechanisms are exponential mechanism (McSherry & Talwar, 2007) and LNMax (Bhaskar et al., 2010). Recent works offer lower bounds on sample complexity of such problem (Steinke & Ullman, 2017; Bafna & Ullman, 2017). The Confident and Interactive Aggregator proposed in our work (Section 4.2 and Section 4.3 resp.) use the intuition that selecting samples under certain constraints could result in better training than using samples uniformly at random. In Machine Learning Theory, active learning (Cohn et al., 1994) has been shown to allow learning from fewer labeled examples than the passive case (see e.g. Hanneke (2014)). Similarly, in model stealing (Tramèr et al., 2016), a goal is to learn a model from limited access to a teacher network. There is previous work in differential privacy literature (Hardt & Rothblum, 2010; Roth & Roughgarden, 2010) where the mechanism first decides whether or not to answer a query, and then privately answers the queries it chooses to answer using a traditional noiseaddition mechanism. In these cases, the sparse vector technique (Dwork & Roth, 2014, Chapter 3.6) helps bound the privacy cost in terms of the number of answered queries. This is in contrast to our work where a constant fraction of queries get answered and the sparse vector technique does not seem to help reduce the privacy cost. Closer to our work, Bun et al. (2017) consider a setting where the answer to a query of interest is often either very large or very small. They show that a sparse vector-like analysis applies in this case, where one pays only for queries that are in the middle. 3 BACKGROUND AND OVERVIEW We introduce essential components of our approach towards a generic and flexible framework for machine learning with provable privacy guarantees for training data. 3.1 THE PATE FRAMEWORK Here, we provide an overview of the PATE framework. To protect the privacy of training data during learning, PATE transfers knowledge from an ensemble of teacher models trained on partitions of the data to a student model. Privacy guarantees may be understood intuitively and expressed rigorously in terms of differential privacy. Illustrated in Figure 2, the PATE framework consists of three key parts: (1) an ensemble of n teacher models, (2) an aggregation mechanism and (3) a student model. 
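These three parts compose into a simple pipeline. The following sketch is only a schematic illustration of that data flow, not the framework's actual API: the helpers train_model and noisy_aggregate and the .predict interface are placeholders for whatever learning technique and aggregation mechanism (Section 4) are used. The individual parts are described in detail next.

```python
import numpy as np

def pate_pipeline(sensitive_data, public_inputs, num_teachers, train_model, noisy_aggregate):
    """Schematic PATE flow: disjoint teachers -> noisy aggregation -> student."""
    # (1) Teachers: one model per disjoint partition of the sensitive data.
    partitions = np.array_split(sensitive_data, num_teachers)
    teachers = [train_model(part) for part in partitions]

    # (2) Aggregation: label a limited number of public inputs with a noisy
    #     aggregate of the teachers' votes (LNMax below, GNMax in Section 4).
    labeled = []
    for x in public_inputs:
        votes = np.bincount([t.predict(x) for t in teachers])
        label = noisy_aggregate(votes)
        if label is not None:            # selective aggregators may decline to answer
            labeled.append((x, label))

    # (3) Student: trained only on public inputs and privacy-preserving labels.
    return train_model(labeled)
```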
Teacher models: Each teacher is a model trained independently on a subset of the data whose privacy one wishes to protect. The data is partitioned to ensure no pair of teachers will have trained on overlapping data. Any learning technique suitable for the data can be used for any teacher. Training each teacher on a partition of the sensitive data produces n different models solving the same task. At inference, teachers independently predict labels.

Aggregation mechanism: When there is a strong consensus among teachers, the label they almost all agree on does not depend on the model learned by any given teacher. Hence, this collective decision is intuitively private with respect to any given training point—because such a point could have been included only in one of the teachers’ training sets. To provide rigorous guarantees of differential privacy, the aggregation mechanism of the original PATE framework counts votes assigned to each class, adds carefully calibrated Laplacian noise to the resulting vote histogram, and outputs the class with the most noisy votes as the ensemble’s prediction. This mechanism is referred to as the max-of-Laplacian mechanism, or LNMax, going forward. For samples x and classes 1, . . . , m, let fj(x) ∈ [m] denote the j-th teacher model’s prediction and ni denote the vote count for the i-th class (i.e., ni ≜ |{j : fj(x) = i}|). The output of the mechanism is A(x) ≜ argmax_i (ni(x) + Lap(1/γ)). Through a rigorous analysis of this mechanism, the PATE framework provides a differentially private API: the privacy cost of each aggregated prediction made by the teacher ensemble is known.

Student model: PATE’s final step involves the training of a student model by knowledge transfer from the teacher ensemble using access to public—but unlabeled—data. To limit the privacy cost of labeling them, queries are only made to the aggregation mechanism for a subset of public data to train the student in a semi-supervised way using a fixed number of queries. The authors note that every additional ensemble prediction increases the privacy cost spent, and thus the approach cannot work with unbounded queries. Using a fixed number of queries fixes the privacy cost and also diminishes the value of attacks analyzing model parameters to recover training data (Zhang et al., 2017). The student only sees public data and privacy-preserving labels.

3.2 DIFFERENTIAL PRIVACY

Differential privacy (Dwork et al., 2006) requires that the sensitivity of the distribution of an algorithm’s output to small perturbations of its input be limited. The following variant of the definition captures this intuition formally:

Definition 1. A randomized mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any two adjacent inputs D, D′ ∈ D and for any subset of outputs S ⊆ R it holds that:

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.    (1)

For our application of differential privacy to ML, adjacent inputs are defined as two datasets that only differ by one training example and the randomized mechanism M would be the model training algorithm. The privacy parameters have the following natural interpretation: ε is an upper bound on the loss of privacy, and δ is the probability with which this guarantee may not hold. Composition theorems (Dwork & Roth, 2014) allow us to keep track of the privacy cost when we run a sequence of mechanisms.

3.3 RÉNYI DIFFERENTIAL PRIVACY
Papernot et al. (2017) note that the natural approach to bounding PATE’s privacy loss—by bounding the privacy cost of each label queried and using strong composition (Dwork et al., 2010) to derive the total cost—yields loose privacy guarantees. Instead, their approach uses data-dependent privacy analysis. This takes advantage of the fact that when the consensus among the teachers is very strong, the plurality outcome has overwhelming likelihood, leading to a very small privacy cost whenever the consensus occurs. To capture this effect quantitatively, Papernot et al. (2017) rely on the moments accountant, introduced by Abadi et al. (2016) and building on previous work (Bun & Steinke, 2016; Dwork & Rothblum, 2016).

In this section, we recall the language of Rényi Differential Privacy or RDP (Mironov, 2017). RDP generalizes pure differential privacy (δ = 0) and is closely related to the moments accountant. We choose to use RDP as a more natural analysis framework when dealing with our mechanisms that use Gaussian noise. Defined below, the RDP of a mechanism is stated in terms of the Rényi divergence.

Definition 2 (Rényi Divergence). The Rényi divergence of order λ between two distributions P and Q is defined as:

Dλ(P‖Q) ≜ 1/(λ−1) · log E_{x∼Q}[(P(x)/Q(x))^λ] = 1/(λ−1) · log E_{x∼P}[(P(x)/Q(x))^{λ−1}].

Definition 3 (Rényi Differential Privacy (RDP)). A randomized mechanism M is said to guarantee (λ, ε)-RDP with λ ≥ 1 if for any neighboring datasets D and D′,

Dλ(M(D)‖M(D′)) = 1/(λ−1) · log E_{x∼M(D)}[(Pr[M(D) = x] / Pr[M(D′) = x])^{λ−1}] ≤ ε.

RDP generalizes pure differential privacy in the sense that ε-differential privacy is equivalent to (∞, ε)-RDP. Mironov (2017) proves the following key facts that allow easy composition of RDP guarantees and their conversion to (ε, δ)-differential privacy bounds.

Theorem 4 (Composition). If a mechanism M consists of a sequence of adaptive mechanisms M1, . . . , Mk such that for any i ∈ [k], Mi guarantees (λ, εi)-RDP, then M guarantees (λ, Σ_{i=1}^{k} εi)-RDP.

Theorem 5 (From RDP to DP). If a mechanism M guarantees (λ, ε)-RDP, then M guarantees (ε + log(1/δ)/(λ−1), δ)-differential privacy for any δ ∈ (0, 1).

While both (ε, δ)-differential privacy and RDP are relaxations of pure ε-differential privacy, the two main advantages of RDP are as follows. First, it composes nicely; second, it captures the privacy guarantee of Gaussian noise in a much cleaner manner compared to (ε, δ)-differential privacy. This lets us do a careful privacy analysis of the GNMax mechanism as stated in Theorem 6. While the analysis of Papernot et al. (2017) leverages the first aspect of such frameworks with the Laplace noise (LNMax mechanism), our analysis of the GNMax mechanism relies on both.
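Theorems 4 and 5 are what make RDP convenient in practice: per-query RDP costs at a fixed order simply add up, and the total is converted to an (ε, δ) guarantee at the end. A minimal sketch of that bookkeeping follows; the function names and the example numbers are ours, chosen only for illustration.

```python
import numpy as np

def compose_rdp(per_query_costs):
    # Theorem 4: RDP costs at a fixed order lambda add up across adaptive mechanisms.
    return float(np.sum(per_query_costs))

def rdp_to_dp(lam, rdp_eps, delta):
    # Theorem 5: (lambda, eps)-RDP implies (eps + log(1/delta)/(lambda - 1), delta)-DP.
    return rdp_eps + np.log(1.0 / delta) / (lam - 1.0)

# Hypothetical run: 4000 queries, each costing lambda / sigma^2 in RDP
# (the data-independent Gaussian bound discussed in Section 4.1).
lam, sigma, delta = 20.0, 100.0, 1e-8
total = compose_rdp([lam / sigma**2] * 4000)
print(rdp_to_dp(lam, total, delta))  # eps of the final (eps, delta) guarantee
```

The data-dependent analysis of Section 4 replaces the fixed per-query cost in this sketch with a much smaller, vote-dependent one whenever the teacher consensus is strong.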
3.4 PATE AGGREGATION MECHANISMS

The aggregation step is a crucial component of PATE. It enables knowledge transfer from the teachers to the student while enforcing privacy. We improve the LNMax mechanism used by Papernot et al. (2017), which adds Laplace noise to teacher votes and outputs the class with the highest votes. First, we add Gaussian noise with an accompanying privacy analysis in the RDP framework. This modification effectively reduces the noise needed to achieve the same privacy cost per student query. Second, the aggregation mechanism is now selective: teacher votes are analyzed to decide which student queries are worth answering. This takes into account both the privacy cost of each query and its payout in improving the student’s utility. Surprisingly, our analysis shows that these two metrics are not at odds and in fact align with each other: the privacy cost is the smallest when teachers agree, and when teachers agree, the label is more likely to be correct, thus being more useful to the student. Third, we propose and study an interactive mechanism that takes into account not only teacher votes on a queried example but possible student predictions on that query. Now, queries worth answering are those where the teachers agree on a class but the student is not confident in its prediction on that class. This third modification aligns the two metrics discussed above even further: queries where the student already agrees with the consensus of teachers are not worth expending our privacy budget on, but queries where the student is less confident are useful and answered at a small privacy cost.

3.5 DATA-DEPENDENT PRIVACY IN PATE

A direct privacy analysis of the aggregation mechanism, for reasonable values of the noise parameter, allows answering only a few queries before the privacy cost becomes prohibitive. The original PATE proposal used a data-dependent analysis, exploiting the fact that when the teachers have large agreement, the privacy cost is usually much smaller than the data-independent bound would suggest. In our work, we perform a data-dependent privacy analysis of the aggregation mechanism with Gaussian noise. This change of noise distribution turns out to be technically much more challenging than the Laplace noise case and we defer the details to Appendix A. This increased complexity of the analysis however does not make the algorithm any more complicated and thus allows us to improve the privacy-utility tradeoff.

Sanitizing the privacy cost via smooth sensitivity analysis. An additional challenge with data-dependent privacy analyses arises from the fact that the privacy cost itself is now a function of the private data. Further, the data-dependent bound on the privacy cost has large global sensitivity (a metric used in differential privacy to calibrate the noise injected) and is therefore difficult to sanitize. To remedy this, we use the smooth sensitivity framework proposed by Nissim et al. (2007). Appendix B describes how we add noise to the computed privacy cost using this framework to publish a sanitized version of the privacy cost. Section B.1 defines smooth sensitivity and outlines Algorithms 3–5 that compute it. The rest of Appendix B argues the correctness of these algorithms. The final analysis shows that the incremental cost of sanitizing our privacy estimates is modest—less than 50% of the raw estimates—thus enabling us to use precise data-dependent privacy analysis while taking into account its privacy implications.

4 IMPROVED AGGREGATION MECHANISMS FOR PATE

The privacy guarantees provided by PATE stem from the design and analysis of the aggregation step. Here, we detail our improvements to the mechanism used by Papernot et al. (2017). As outlined in Section 3.4, we first replace the Laplace noise added to teacher votes with Gaussian noise, adapting the data-dependent privacy analysis. Next, we describe the Confident and Interactive Aggregators that select queries worth answering in a privacy-preserving way: the privacy budget is shared between the query selection and answer computation. The aggregators use different heuristics to select queries: the former does not take into account student predictions, while the latter does.
4.1 THE GNMAX AGGREGATOR AND ITS PRIVACY GUARANTEE

This section uses the following notation. For a sample x and classes 1 to m, let fj(x) ∈ [m] denote the j-th teacher model’s prediction on x and ni(x) denote the vote count for the i-th class (i.e., ni(x) = |{j : fj(x) = i}|). We define a Gaussian NoisyMax (GNMax) aggregation mechanism as:

Mσ(x) ≜ argmax_i {ni(x) + N(0, σ²)},

where N(0, σ²) is the Gaussian distribution with mean 0 and variance σ². The aggregator outputs the class with noisy plurality after adding Gaussian noise to each vote count. In what follows, plurality more generally refers to the highest number of teacher votes assigned among the classes.

The Gaussian distribution is more concentrated than the Laplace distribution used by Papernot et al. (2017). This concentration directly improves the aggregation’s utility when the number of classes m is large. The GNMax mechanism satisfies (λ, λ/σ²)-RDP, which holds for all inputs and all λ ≥ 1 (precise statements and proofs of claims in this section are deferred to Appendix A). A straightforward application of composition theorems leads to loose privacy bounds. As an example, the standard advanced composition theorem applied to experiments in the last two rows of Table 1 would give us ε = 8.42 and ε = 10.14 resp. at δ = 10⁻⁸ for the Glyph dataset. To refine these, we work out a careful data-dependent analysis that yields values of ε smaller than 1 for the same δ. The following theorem translates data-independent RDP guarantees for higher orders into a data-dependent RDP guarantee for a smaller order λ. We use it in conjunction with Proposition 7 to bound the privacy cost of each query to the GNMax algorithm as a function of q̃, the probability that the most common answer will not be output by the mechanism.

Theorem 6 (informal). Let M be a randomized algorithm with (µ1, ε1)-RDP and (µ2, ε2)-RDP guarantees and suppose that given a dataset D, there exists a likely outcome i∗ such that Pr[M(D) ≠ i∗] ≤ q̃. Then the data-dependent Rényi differential privacy for M of order λ ≤ µ1, µ2 at D is bounded by a function of q̃, µ1, ε1, µ2, ε2, which approaches 0 as q̃ → 0.

The new bound improves on the data-independent privacy for λ as long as the distribution of the algorithm’s output on that input has a strong peak (i.e., q̃ ≪ 1). Values of q̃ close to 1 could result in a looser bound. Therefore, in practice we take the minimum between this bound and λ/σ² (the data-independent one). The theorem generalizes Theorem 3 from Papernot et al. (2017), where it was shown for a mechanism satisfying ε-differential privacy (i.e., µ1 = µ2 = ∞ and ε1 = ε2).

The final step in our analysis uses the following lemma to bound the probability q̃ when i∗ corresponds to the class with the true plurality of teacher votes.

Proposition 7. For any i∗ ∈ [m], we have Pr[Mσ(D) ≠ i∗] ≤ (1/2) Σ_{i≠i∗} erfc((ni∗ − ni)/(2σ)), where erfc is the complementary error function.

In Appendix A, we detail how these results translate to privacy bounds. In short, for each query to the GNMax aggregator, given teacher votes ni and the class i∗ with maximal support, Proposition 7 gives us the value of q̃ to use in Theorem 6. We optimize over µ1 and µ2 to get a data-dependent RDP guarantee for any order λ. Finally, we use composition properties of RDP to analyze a sequence of queries, and translate the RDP bound back to an (ε, δ)-DP bound.
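A minimal sketch of the GNMax aggregator and of the bound in Proposition 7, with the LNMax variant of Section 3.1 included for comparison. The vote histogram and noise scales are made-up numbers, and the code is our own illustration rather than the released implementation.

```python
import numpy as np
from scipy.special import erfc

def lnmax(votes, gamma):
    # LNMax (Section 3.1): argmax of vote counts perturbed with Laplace(1/gamma) noise.
    return int(np.argmax(votes + np.random.laplace(scale=1.0 / gamma, size=len(votes))))

def gnmax(votes, sigma):
    # GNMax (Section 4.1): argmax of vote counts perturbed with N(0, sigma^2) noise.
    return int(np.argmax(votes + np.random.normal(scale=sigma, size=len(votes))))

def q_bound(votes, sigma):
    # Proposition 7: Pr[GNMax != true plurality] <= 1/2 * sum_{i != i*} erfc((n_i* - n_i) / (2 sigma)).
    i_star = int(np.argmax(votes))
    gaps = votes[i_star] - np.delete(votes, i_star)
    return min(1.0, 0.5 * float(np.sum(erfc(gaps / (2.0 * sigma)))))

votes = np.array([850.0, 120.0, 30.0])   # hypothetical histogram from 1000 teachers over 3 classes
print(gnmax(votes, sigma=100.0), q_bound(votes, sigma=100.0))
```

With a strong consensus, q_bound is tiny, and the data-dependent analysis above then charges far less than the data-independent λ/σ² for that query.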
Expensive queries. This data-dependent privacy analysis leads us to the concept of an expensive query in terms of its privacy cost. When teacher votes largely disagree, some ni∗ − ni values may be small, leading to a large value for q̃: i.e., the lack of consensus amongst teachers indicates that the aggregator is likely to output a wrong label. Thus expensive queries from a privacy perspective are often bad for training too. Conversely, queries with strong consensus enable tight privacy bounds. This synergy motivates the aggregation mechanisms discussed in the following sections: they evaluate the strength of the consensus before answering a query.

4.2 THE CONFIDENT-GNMAX AGGREGATOR

In this section, we propose a refinement of the GNMax aggregator that enables us to filter out queries for which teachers do not have a sufficiently strong consensus. This filtering enables the teachers to avoid answering expensive queries. We also take note to do this selection step itself in a private manner.

The proposed Confident Aggregator is described in Algorithm 1. To select queries with overwhelming consensus, the algorithm checks if the plurality vote crosses a threshold T. To enforce privacy in this step, the comparison is done after adding Gaussian noise with variance σ1². Then, for queries that pass this noisy threshold check, the aggregator proceeds with the usual GNMax mechanism with a smaller variance σ2². For queries that do not pass the noisy threshold check, the aggregator simply returns ⊥ and the student discards this example in its training. In practice, we often choose significantly higher values for σ1 compared to σ2. This is because we pay the cost of the noisy threshold check always, and without the benefit of knowing that the consensus is strong. We pick T so that queries where the plurality gets less than half the votes (often very expensive) are unlikely to pass the threshold after adding noise, but we still have a high enough yield amongst the queries with a strong consensus. This tradeoff leads us to look for T’s between 0.6× to 0.8× the number of teachers.

The privacy cost of this aggregator is intuitive: we pay for the threshold check for every query, and for the GNMax step only for queries that pass the check. In the work of Papernot et al. (2017), the mechanism paid a privacy cost for every query, expensive or otherwise. In comparison, the Confident Aggregator expends a much smaller privacy cost to check against the threshold, and by answering a significantly smaller fraction of expensive queries, it expends a lower privacy cost overall.

Algorithm 1 – Confident-GNMax Aggregator: given a query, consensus among teachers is first estimated in a privacy-preserving way to then only reveal confident teacher predictions.
Input: input x, threshold T, noise parameters σ1 and σ2
1: if max_j {nj(x)} + N(0, σ1²) ≥ T then        ▷ Privately check for consensus
2:   return argmax_j {nj(x) + N(0, σ2²)}        ▷ Run the usual max-of-Gaussian
3: else
4:   return ⊥
5: end if
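Algorithm 1 is only a few lines when written out. The sketch below is schematic (it reuses the noisy-argmax style of the earlier snippet and our own function name), not the actual implementation; the threshold and noise scales echo values used later in the experiments.

```python
import numpy as np

def confident_gnmax(votes, T, sigma1, sigma2):
    # Line 1 of Algorithm 1: privately check for consensus against the threshold T.
    if np.max(votes) + np.random.normal(scale=sigma1) >= T:
        # Line 2: answer with the usual max-of-Gaussian at the smaller scale sigma2.
        return int(np.argmax(votes + np.random.normal(scale=sigma2, size=len(votes))))
    return None  # Line 4: return "no answer"; the student discards this example

# Hypothetical query with 5000 teachers; T sits between 0.6x and 0.8x the ensemble size.
votes = np.array([4100.0, 700.0, 200.0])
print(confident_gnmax(votes, T=3500, sigma1=1500, sigma2=100))
```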
4.3 THE INTERACTIVE-GNMAX AGGREGATOR

While the Confident Aggregator excludes expensive queries, it ignores the possibility that the student might receive labels that contribute little to learning, and in turn to its utility. By incorporating the student’s current predictions for its public training data, we design an Interactive Aggregator that discards queries where the student already confidently predicts the same label as the teachers.

Given a set of queries, the Interactive Aggregator (Algorithm 2) selects those answered by comparing student predictions to teacher votes for each class. Similar to Step 1 in the Confident Aggregator, queries where the plurality of these noised differences crosses a threshold are answered with GNMax. This noisy threshold suffices to enforce privacy of the first step because student predictions can be considered public information (the student is trained in a differentially private manner). For queries that fail this check, the mechanism reinforces the predicted student label if the student is confident enough and does this without looking at teacher votes again. This limited form of supervision comes at a small privacy cost. Moreover, the order of the checks ensures that a student falsely confident in its predictions on a query is not accidentally reinforced if it disagrees with the teacher consensus. The privacy accounting is identical to the Confident Aggregator except in considering the difference between teachers and the student instead of only the teachers’ votes.

In practice, the Confident Aggregator can be used to start training a student when it can make no meaningful predictions and training can be finished off with the Interactive Aggregator after the student gains some proficiency.

Algorithm 2 – Interactive-GNMax Aggregator: the protocol first compares student predictions to the teacher votes in a privacy-preserving way to then either (a) reinforce the student prediction for the given query or (b) provide the student with a new label predicted by the teachers.
Input: input x, confidence γ, threshold T, noise parameters σ1 and σ2, total number of teachers M
1: Ask the student to provide prediction scores p(x)
2: if max_j {nj(x) − M·pj(x)} + N(0, σ1²) ≥ T then    ▷ Student does not agree with teachers
3:   return argmax_j {nj(x) + N(0, σ2²)}              ▷ Teachers provide new label
4: else if max_i {pi(x)} > γ then                      ▷ Student agrees with teachers and is confident
5:   return argmax_j pj(x)                             ▷ Reinforce student’s prediction
6: else
7:   return ⊥                                          ▷ No output given for this label
8: end if

5 EXPERIMENTAL EVALUATION

Our goal is first to show that the improved aggregators introduced in Section 4 enable the application of PATE to uncurated data, thus departing from previous results on tasks with balanced and well-separated classes. We experiment with the Glyph dataset described below to address two aspects left open by Papernot et al. (2017): (a) the performance of PATE on a task with a larger number of classes (the framework was only evaluated on datasets with at most 10 classes) and (b) the privacy-utility tradeoffs offered by PATE on data that is class imbalanced and partly mislabeled. In Section 5.2, we evaluate the improvements given by the GNMax aggregator over its Laplace counterpart (LNMax) and demonstrate the necessity of the Gaussian mechanism for uncurated tasks. In Section 5.3, we then evaluate the performance of PATE with both the Confident and Interactive Aggregators on all datasets used to benchmark the original PATE framework, in addition to Glyph. With the right teacher and student training, the two mechanisms from Section 4 achieve high accuracy with very tight privacy bounds. Not answering queries for which teacher consensus is too low (Confident-GNMax) or for which the student’s predictions already agree with teacher votes (Interactive-GNMax) better aligns utility and privacy: queries are answered at a significantly reduced cost.

5.1 EXPERIMENTAL SETUP

MNIST, SVHN, and the UCI Adult databases.
We evaluate with two computer vision tasks (MNIST and Street View House Numbers (Netzer et al., 2011)) and census data from the UCI Adult dataset (Kohavi, 1996). This enables a comparative analysis of the utility-privacy tradeoff achieved with our Confident-GNMax aggregator and the LNMax originally used in PATE. We replicate the experimental setup and results found in Papernot et al. (2017) with code and teacher votes made available online. The source code for the privacy analysis in this paper as well as supporting data required to run this analysis is available on Github.¹ A detailed description of the experimental setup can be found in Papernot et al. (2017); we provide here only a brief overview. For MNIST and SVHN, teachers are convolutional networks trained on partitions of the training set. For UCI Adult, each teacher is a random forest. The test set is split into two halves: the first is used as unlabeled inputs to simulate the student’s public data and the second is used as a holdout to evaluate test performance. The MNIST and SVHN students are convolutional networks trained using semi-supervised learning with GANs à la Salimans et al. (2016). The student for the Adult dataset is a fully supervised random forest.

Glyph. This optical character recognition task has an order of magnitude more classes than all previous applications of PATE. The Glyph dataset also possesses many characteristics shared by real-world tasks: e.g., it is imbalanced and some inputs are mislabeled. Each input is a 28 × 28 grayscale image containing a single glyph generated synthetically from a collection of over 500K computer fonts.² Samples representative of the difficulties raised by the data are depicted in Figure 3. The task is to classify inputs as one of the 150 Unicode symbols used to generate them. This set of 150 classes results from pre-processing efforts. We discarded additional classes that had few samples; some classes had at least 50 times fewer inputs than the most popular classes, and these were almost exclusively incorrectly labeled inputs. We also merged classes that were too ambiguous for even a human to differentiate them. Nevertheless, a manual inspection of samples grouped by classes—favorably to the human observer—led to the conservative estimate that some classes remain 5 times more frequent, and mislabeled inputs represent at least 10% of the data.

To simulate the availability of private and public data (see Section 3.1), we split data originally marked as the training set (about 65M points) into partitions given to the teachers. Each teacher is a ResNet (He et al., 2016) made of 32 leaky ReLU layers. We train on batches of 100 inputs for 40K steps using SGD with momentum. The learning rate, initially set to 0.1, is decayed after 10K steps to 0.01 and again after 20K steps to 0.001. These parameters were found with a grid search. We split holdout data into two subsets of 100K and 400K samples: the first acts as public data to train the student and the second as its testing data. The student architecture is a convolutional network learnt in a semi-supervised fashion with virtual adversarial training (VAT) from Miyato et al. (2017). Using unlabeled data, we show how VAT can regularize the student by making predictions constant in adversarial³ directions. Indeed, we found that GANs did not yield as much utility for Glyph as for MNIST or SVHN. We train with Adam for 400 epochs and a learning rate of 6 · 10⁻⁵.

¹ https://github.com/tensorflow/models/tree/master/research/differential_privacy
² Glyph data is not public but similar data is available publicly as part of the notMNIST dataset.
³ In this context, the adversarial component refers to the phenomenon commonly referred to as adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) and not to the adversarial training approach taken in GANs.
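The (ε, δ) costs reported in the following subsections charge each query the smaller of the data-dependent RDP bound (Theorem 6, with the orders µ1 and µ2 chosen as in Proposition 10 of Appendix A) and the data-independent bound λ/σ², then compose over queries and convert as in the earlier RDP sketch. The following function is our own rendering of that per-query computation, not the released analysis code.

```python
import numpy as np

def per_query_rdp(q, lam, sigma):
    # Data-dependent RDP cost of one GNMax query at order lam (Theorem 6, Propositions 8 and 10),
    # falling back to the data-independent bound lam / sigma^2 when the preconditions fail.
    data_independent = lam / sigma**2
    if q <= 0.0:
        return 0.0            # the data-dependent bound vanishes as q -> 0
    if q >= 1.0:
        return data_independent
    mu2 = sigma * np.sqrt(np.log(1.0 / q))
    mu1 = mu2 + 1.0
    eps1, eps2 = mu1 / sigma**2, mu2 / sigma**2
    q_max = np.exp((mu2 - 1.0) * eps2) / ((mu1 / (mu1 - 1.0)) * (mu2 / (mu2 - 1.0)))**mu2
    if mu1 < lam or mu2 <= 1.0 or q > q_max:
        return data_independent
    A = (1.0 - q) / (1.0 - (q * np.exp(eps2))**((mu2 - 1.0) / mu2))
    B = np.exp(eps1) / q**(1.0 / (mu1 - 1.0))
    bound = np.log((1.0 - q) * A**(lam - 1.0) + q * B**(lam - 1.0)) / (lam - 1.0)
    return float(min(bound, data_independent))
```

When the consensus is strong (small q from Proposition 7), this per-query cost falls far below λ/σ², which is exactly what the data-dependent analysis exploits.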
5.2 COMPARING THE LNMAX AND GNMAX MECHANISMS

Section 4.1 introduces the GNMax mechanism and the accompanying privacy analysis. With a Gaussian distribution, whose tail diminishes more rapidly than the Laplace distribution, we expect better utility when using the new mechanism (albeit with a more involved privacy analysis).

To study the tradeoff between privacy and accuracy with the two mechanisms, we run experiments training several ensembles of M teachers for M ∈ {100, 500, 1000, 5000} on the Glyph data. Recall that 65 million training inputs are partitioned and distributed among the M teachers, with each teacher receiving between 650K and 13K inputs for the values of M above. The test data is used to query the teacher ensemble and the resulting labels (after the LNMax and GNMax mechanisms) are compared with the ground truth labels provided in the dataset. This predictive performance of the teachers is essential to good student training with accurate labels and is a useful proxy for utility.

For each mechanism, we compute (ε, δ)-differential privacy guarantees. As is common in the literature, for a dataset on the order of 10⁸ samples, we choose δ = 10⁻⁸ and denote the corresponding ε as the privacy cost. The total ε is calculated on a subset of 4,000 queries, which is representative of the number of labels needed by a student for accurate training (see Section 5.3). We visualize in Figure 4 the effect of the noise distribution (left) and the number of teachers (right) on the tradeoff between privacy costs and label accuracy.

Observations. On the left of Figure 4, we compare our GNMax aggregator to the LNMax aggregator used by the original PATE proposal, on an ensemble of 1000 teachers and for varying noise scales σ. At fixed test accuracy, the GNMax algorithm consistently outperforms the LNMax mechanism in terms of privacy cost. To explain this improved performance, recall notation from Section 4.1. For both mechanisms, the data-dependent privacy cost scales linearly with q̃—the likelihood of an answer other than the true plurality. The value of q̃ falls off as exp(−x²) for GNMax and exp(−x) for LNMax, where x is the ratio (ni∗ − ni)/σ. Thus, when ni∗ − ni is (say) 4σ, LNMax would have q̃ ≈ e⁻⁴ ≈ 0.018, whereas GNMax would have q̃ ≈ e⁻¹⁶ ≈ 10⁻⁷, thereby leading to a much higher likelihood of returning the true plurality. Moreover, this reduced q̃ translates to a smaller privacy cost for a given σ, leading to a better utility-privacy tradeoff.

As long as each teacher has sufficient data to learn a good-enough model, increasing the number M of teachers improves the tradeoff—as illustrated on the right of Figure 4 with GNMax. The larger ensembles lower the privacy cost of answering queries by tolerating larger σ’s. Combining the two observations made in this figure, for a fixed label accuracy, we lower privacy costs by switching to the GNMax aggregator and training a larger number M of teachers.

5.3 STUDENT TRAINING WITH THE GNMAX AGGREGATION MECHANISMS

As outlined in Section 3, we train a student on public data labeled by the aggregation mechanisms.
We take advantage of PATE’s flexibility and apply the technique that performs best on each dataset: semi-supervised learning with Generative Adversarial Networks (Salimans et al., 2016) for MNIST and SVHN, Virtual Adversarial Training (Miyato et al., 2017) for Glyph, and fully-supervised random forests for UCI Adult. In addition to evaluating the total privacy cost associated with training the student model, we compare its utility to a non-private baseline obtained by training on the sensitive data (used to train teachers in PATE): we use the baselines of 99.2%, 92.8%, and 85.0% reported by Papernot et al. (2017) respectively for MNIST, SVHN, and UCI Adult, and we measure a baseline of 82.2% for Glyph. We compute (ε, δ)-privacy bounds and denote the privacy cost as the ε value at a value of δ set accordingly to number of training samples. Confident-GNMax Aggregator. Given a pool of 500 to 12,000 samples to learn from (depending on the dataset), the student submits queries to the teacher ensemble running the Confident-GNMax aggregator from Section 4.2. A grid search over a range of plausible values for parameters T , σ1 and σ2 yielded the values reported in Table 1, illustrating the tradeoff between utility and privacy achieved. We additionally measure the number of queries selected by the teachers to be answered and compare student utility to a non-private baseline. The Confident-GNMax aggregator outperforms LNMax for the four datasets considered in the original PATE proposal: it reduces the privacy cost ε, increases student accuracy, or both simultaneously. On the uncurated Glyph data, despite the imbalance of classes and mislabeled data (as evidenced by the 82.2% baseline), the Confident Aggregator achieves 73.5% accuracy with a privacy cost of just ε = 1.02. Roughly 1,300 out of 12,000 queries made are not answered, indicating that several expensive queries were successfully avoided. This selectivity is analyzed in more details in Section 5.4. Interactive-GNMax Aggregator. On Glyph, we evaluate the utility and privacy of an interactive training routine that proceeds in two rounds. Round one runs student training with a Confident Aggregator. A grid search targeting the best privacy for roughly 3,400 answered queries (out of 6,000)—sufficient to bootstrap a student—led us to setting (T=3500, σ1=1500, σ2=100) and a privacy cost of ε ≈ 0.59. In round two, this student was then trained with 10,000 more queries made with the InteractiveGNMax Aggregator (T=3500, σ1=2000, σ2=200). We computed the resulting (total) privacy cost and utility at an exemplar data point through another grid search of plausible parameter values. The result appears in the last row of Table 1. With just over 10,422 answered queries in total at a privacy cost of ε = 0.84, the trained student was able to achieve 73.2% accuracy. Note that this students required fewer answered queries compared to the Confident Aggregator. The best overall cost of student training occurred when the privacy costs for the first and second rounds of training were roughly the same. (The total ε is less than 0.59 × 2 = 1.18 due to better composition—via Theorems 4 and 5.) Comparison with Baseline. Note that the Glyph student’s accuracy remains seven percentage points below the non-private model’s accuracy achieved by training on the 65M sensitive inputs. We hypothesize that this is due to the uncurated nature of the data considered. Indeed, the class imbalance naturally requires more queries to return labels from the less represented classes. 
For instance, a model trained on 200K queries is only 77% accurate on test data. In addition, the large fraction of mislabeled inputs are likely to have a large privacy cost: these inputs are sensitive because they are outliers of the distribution, which is reflected by the weak consensus among teachers on these inputs. 5.4 NOISY THRESHOLD CHECKS AND PRIVACY COSTS Sections 4.1 and 4.2 motivated the need for a noisy threshold checking step before having the teachers answer queries: it prevents most of the privacy budget being consumed by few queries that are expensive and also likely to be incorrectly answered. In Figure 5, we compare the privacy cost ε of answering all queries to only answering confident queries for a fixed number of queries. We run additional experiments to support the evaluation from Section 5.3. With the votes of 5,000 teachers on the Glyph dataset, we plot in Figure 5 the histogram of the plurality vote counts (ni∗ in the notation of Section 4.1) across 25,000 student queries. We compare these values to the vote counts of queries that passed the noisy threshold check for two sets of parameters T and σ1 in Algorithm 1. Smaller values imply weaker teacher agreements and consequently more expensive queries. When (T=3500, σ1=1500) we capture a significant fraction of queries where teachers have a strong consensus (roughly > 4000 votes) while managing to filter out many queries with poor consensus. This moderate check ensures that although many queries with plurality votes between 2,500 and 3,500 are answered (i.e., only 50–70% of teachers agree on a label) the expensive ones are most likely discarded. For (T=5000, σ1=1500), queries with poor consensus are completely culled out. This selectivity comes at the expense of a noticeable drop for queries that might have had a strong consensus and little-to-no privacy cost. Thus, this aggressive check answer fewer queries with very strong privacy guarantees. We reiterate that this threshold checking step itself is done in a private manner. Empirically, in our Interactive Aggregator experiments, we expend about a third to a half of our privacy budget on this step, which still yields a very small cost per query across 6,000 queries. 6 CONCLUSIONS The key insight motivating the addition of a noisy thresholding step to the two aggregation mechanisms proposed in our work is that there is a form of synergy between the privacy and accuracy of labels output by the aggregation: labels that come at a small privacy cost also happen to be more likely to be correct. As a consequence, we are able to provide more quality supervision to the student by choosing not to output labels when the consensus among teachers is too low to provide an aggregated prediction at a small cost in privacy. This observation was further confirmed in some of our experiments where we observed that if we trained the student on either private or non-private labels, the former almost always gave better performance than the latter—for a fixed number of labels. Complementary with these aggregation mechanisms is the use of a Gaussian (rather than Laplace) distribution to perturb teacher votes. In our experiments with Glyph data, these changes proved essential to preserve the accuracy of the aggregated labels—because of the large number of classes. The analysis presented in Section 4 details the delicate but necessary adaptation of analogous results for the Laplace NoisyMax. 
As was the case for the original PATE proposal, semi-supervised learning was instrumental to ensure the student achieves strong utility given a limited set of labels from the aggregation mechanism. However, we found that virtual adversarial training outperforms the approach from Salimans et al. (2016) in our experiments with Glyph data. These results establish lower bounds on the performance that a student can achieve when supervised with our aggregation mechanisms; future work may continue to investigate virtual adversarial training, semi-supervised generative adversarial networks and other techniques for learning the student in these particular settings with restricted supervision. ACKNOWLEDGMENTS We are grateful to Martín Abadi, Vincent Vanhoucke, and Daniel Levy for their useful inputs and discussions towards this paper. A APPENDIX: PRIVACY ANALYSIS In this appendix, we provide the proofs of Theorem 6 and Proposition 7. Moreover, we present Proposition 10, which provides optimal values of µ1 and µ2 to apply towards Theorem 6 for the GNMax mechanism. We start off with a statement about the Rényi differential privacy guarantee of the GNMax. Proposition 8. The GNMax aggregatorMσ guarantees ( λ, λ/σ2 ) -RDP for all λ ≥ 1. Proof. The result follows from observing thatMσ can be decomposed into applying the argmax operator to a noisy histogram resulted from adding Gaussian noise to each dimension of the original histogram. The Gaussian mechanism satisfies (λ, λ/2σ2)-RDP (Mironov, 2017), and since each teacher may change two counts (incrementing one and decrementing the other), the overall RDP guarantee is as claimed. Proposition 7. For a GNMax aggregator Mσ , the teachers’ votes histogram n̄ = (n1, . . . , nm), and for any i∗ ∈ [m], we have Pr [Mσ(D) 6= i∗] ≤ q(n̄), where q(n̄) , 1 2 ∑ i 6=i∗ erfc ( ni∗ − ni 2σ ) . Proof. Recall thatMσ(D) = argmax(ni + Zi), where Zi are distributed as N (0, σ2). Then for any i∗ ∈ [m], we have Pr[Mσ(D) 6= i∗] = Pr [∃i, ni + Zi > ni∗ + Zi∗ ] ≤ ∑ i 6=i∗ Pr [ni + Zi > ni∗ + Zi∗ ] = ∑ i 6=i∗ Pr [Zi − Zi∗ > ni∗ − ni] = ∑ i 6=i∗ 1 2 ( 1− erf ( ni∗ − ni 2σ )) . where the last equality follows from the fact that Zi − Zj is a Gaussian random variable with mean zero and variance 2σ2. We now present a precise statement of Theorem 6. Theorem 6. LetM be a randomized algorithm with (µ1, ε1)-RDP and (µ2, ε2)-RDP guarantees and suppose that there exists a likely outcome i∗ given a dataset D and a bound q̃ ≤ 1 such that q̃ ≥ Pr [M(D) 6= i∗]. Additionally suppose that λ ≤ µ1 and q̃ ≤ e(µ2−1)ε2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 . Then, for any neighboring dataset D′ of D, we have: Dλ(M(D)‖M(D′)) ≤ 1 λ− 1 log ( (1− q̃) ·A(q̃, µ2, ε2)λ−1 + q̃ ·B(q̃, µ1, ε1)λ−1 ) (2) whereA(q̃, µ2, ε2) , (1− q̃)/ ( 1− (q̃eε2) µ2−1 µ2 ) andB(q̃, µ1, ε1) , eε1/q̃ 1 µ1−1 . Proof. Before we proceed to the proof, we introduce some simplifying notation. For a randomized mechanismM and neighboring datasets D and D′, we define βM(λ;D,D ′) , Dλ(M(D)‖M(D′)) = 1 λ− 1 logEx∼M(D) [( Pr [M(D) = x] Pr [M(D′) = x] )λ−1] . As the proof involves working with the RDP bounds in the exponent, we set ζ1 , eε1(µ1−1) and ζ2 , eε2(µ2−1). Finally, we define the following shortcuts: qi , Pr [M(D) = i] and q , ∑ i 6=i∗ qi = Pr [M(D) 6= i∗] , pi , Pr [M(D′) = i] and p , ∑ i6=i∗ pi = Pr [M(D′) 6= i∗] , and note that q ≤ q̃. From the definition of Rényi differential privacy, (µ1, ε1)-RDP implies: exp (βM(µ1;D,D ′)) = (1− q)µ1 (1− p)µ1−1 + ∑ i6=i∗ qµ1i pµ1−1i 1/(µ1−1) ≤ exp(ε1) =⇒ ∑ i>1 qµ1i pµ1−1i = ∑ i>1 qi ( qi pi )µ1−1 ≤ ζ1. 
(3) Since µ1 ≥ λ, f(x) , x µ1−1 λ−1 is convex. Applying Jensen’s Inequality we have the following: ∑ i 6=i∗ qi ( qi pi )λ−1 q µ1−1 λ−1 ≤ ∑ i 6=i∗ qi ( qi pi )µ1−1 q =⇒ ∑ i6=i∗ qi ( qi pi )λ−1 ≤ q ∑ i 6=i∗ qi ( qi pi )µ1−1 q λ−1 µ1−1 (3) =⇒ ∑ i6=i∗ qi ( qi pi )λ−1 ≤ ζ1 λ−1 µ1−1 · q1− λ−1 µ1−1 . (4) Next, by the bound at order µ2, we have: exp (βM(µ2;D ′, D)) = (1− p)µ2 (1− q)µ2−1 + ∑ i 6=i∗ pµ2i qµ2−1i 1/(µ2−1) ≤ exp(ε2) =⇒ (1− p) µ2 (1− q)µ2−1 + ∑ i6=i∗ pµ2i qµ2−1i ≤ ζ2. By the data processing inequality of Rényi divergence, we have (1− p)µ2 (1− q)µ2−1 + pµ2 qµ2−1 ≤ ζ2, which implies p µ2 qµ2−1 ≤ ζ2 and thus p ≤ ( qµ2−1ζ2 ) 1 µ2 . (5) Combining (4) and (5), we can derive a bound at λ. exp (βM(λ,D,D ′)) = (1− q)λ (1− p)λ−1 + ∑ i6=i∗ qλi pλ−1i 1/(λ−1) ≤ (1− q)λ( 1− (qµ2−1ζ2) 1 µ2 )λ−1 + ζ1 λ−1µ1−1 · q1− λ−1µ1−1 1/(λ−1) . (6) Although Equation (6) is very close to the corresponding statement in the theorem’s claim, one subtlety remains. The bound (6) applies to the exact probability q = Pr [M(D) 6= i∗]. In the theorem statement, and in practice, we can only derive an upper bound q̃ on Pr [M(D) 6= i∗]. The last step of the proof requires showing that the expression in Equation (6) is monotone in the range of values of q that we care about. Lemma 9 (Monotonicity of the bound). Let the functions f1(·) and f2(·) be f1(x) , (1− x)λ( 1− (xµ2−1ζ2) 1 µ2 )λ−1 and f2(x) , ζ1 λ−1µ1−1 · x1− λ−1µ1−1 , Then f1(x) + f2(x) is increasing in [ 0,min ( 1, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2)] . Proof. Taking the derivative of f1(x), we have: f ′1(x) = −λ(1− x)λ−1(1− (xµ2−1ζ2) 1 µ2 )λ−1 (1− (xµ2−1ζ2) 1 µ2 )2λ−2 + (1− x)λ(λ− 1)(1− (xµ2−1ζ2) 1 µ2 )λ−2ζ2 1 µ2 · µ2−1µ2 · x − 1µ2 (1− (xµ2−1ζ2) 1 µ2 )2λ−2 = (1− x)λ−1 (1− (xµ2−1ζ2) 1 µ2 )λ−1 ( −λ+ (λ− 1) ( 1− 1 µ2 ) 1− x 1− (xµ2−1ζ2) 1 µ2 ( ζ2 x ) 1 µ2 ) . We intend to show that: f ′1(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 . (7) For x ∈ [ 0, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2] and y ∈ [1,∞), define g(x, y) as: g(x, y) , −λ · yλ−1 + (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 yλ. We claim that g(x, y) is increasing in y and therefore g(x, y) ≥ g(x, 1), and prove it by showing the partial derivative of g(x, y) with respect to y is non-negative. Take a derivative with respect to y as: g′y(x, y) = −λ(λ− 1)yλ−2 + λ(λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 yλ−1 = λ(λ− 1)yλ−2 ( −1 + ( 1− 1 µ2 )( ζ2 x ) 1 µ2 y ) . To see why g′y(x, y) is non-negative in the respective ranges of x and y, note that: x ≤ ζ2/ ( µ1 µ1 − 1 · µ2 µ2 − 1 )µ2 =⇒ x ≤ ζ2/ ( µ2 µ2 − 1 )µ2 =⇒ 1 ≤ ζ2 x · ( µ2 − 1 µ2 )µ2 =⇒ 1 ≤ µ2 − 1 µ2 ( ζ2 x ) 1 µ2 =⇒ 1 ≤ µ2 − 1 µ2 ( ζ2 x ) 1 µ2 y (as y ≥ 1) =⇒ 0 ≤ −1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 y =⇒ 0 ≤ g′y(x, y). (in the resp. range of x and y) Consider 1−x 1−(xµ2−1ζ2)1/µ2 . Since ζ2 ≥ 1 and x ≤ 1, we have x ≤ ζ2 and hence 1− x 1− (xµ2−1ζ2) 1 µ2 ≥ 1− x 1− (xµ2−1x) 1 µ2 = 1. Therefore we can set y = 1−x 1−(xµ2−1ζ2)1/µ2 and apply the fact that g(x, y) ≥ g(x, 1) for all y ≥ 1 to get f ′1(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 , as required by (7). Taking the derivative of f2(x), we have: f ′2(x) = ζ1 λ−1 µ1−1 · ( 1− λ− 1 µ1 − 1 ) x− λ−1 µ1−1 = ( ζ1 x ) λ−1 µ1−1 ( 1− λ− 1 µ1 − 1 ) ≥ 1− λ− 1 µ1 − 1 . Combining the two terms together, we have: f ′(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 + 1− λ− 1 µ1 − 1 = (λ− 1) ( − µ1 µ1 − 1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 ) . For f ′(x) to be non-negative we need: − µ1 µ1 − 1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 ≥ 0 ⇐⇒ ( µ1 µ1 − 1 · µ2 µ2 − 1 )µ2 ≤ ζ2 x . So f(x) is increasing for x ∈ [ 0, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2] . 
This means for q ≤ q̃ ≤ ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 , we have f(q) ≤ f(q̃). This completes the proof of the lemma and that of the theorem. Theorem 6 yields data-dependent Rényi differential privacy bounds for any value of µ1 and µ2 larger than λ. The following proposition simplifies this search by calculating optimal higher moments µ1 and µ2 for the GNMax mechanism with variance σ2. Proposition 10. When applying Theorem 6 and Proposition 8 for GNMax with Gaussian of variance σ2, the right-hand side of (2) is minimized at µ2 = σ · √ log(1/q̃), and µ1 = µ2 + 1. Proof. We can minimize both terms in (2) independently. To minimize the first term in (6), we minimize (q̃eε2)1−1/µ2 by considering logarithms: log { (q̃eε2) 1−1/µ2 } = log { q̃1− 1 µ2 exp ( µ2 − 1 σ2 )} = ( 1− 1 µ2 ) · log q̃ + µ2 − 1 σ2 = 1 µ2 log 1 q̃ + µ2 σ2 − 1 σ2 − log 1 q̃ , which is minimized at µ2 = σ · √ log(1/q̃). To minimize the second term in (6), we minimize eε1/q̃1/(µ1−1) as follows: log { eε1 q̃1/(µ1−1) } = log { q̃−1/(µ1−1) exp (µ1 σ2 )} = µ1 σ2 + 1 µ1 − 1 log 1 q̃ = 1 σ2 + µ1 − 1 σ2 + 1 µ1 − 1 log 1 q̃ , which is minimized at µ1 = 1 + σ · √ log(1/q̃) completing the proof. Putting this together, we apply the following steps to calculate RDP of order λ for GNMax with variance σ2 on a given dataset D. First, we compute a bound q according to Proposition 7. Then we use the smaller of two bounds: a data-dependent (Theorem 6) and a data-independent one (Proposition 8) : βσ(q) , min { 1 λ− 1 log { (1− q) ·A(q, µ2, ε2)λ−1 + q ·B(q, µ1, ε1)λ−1 } , λ/σ2 } , whereA andB are defined as in the statement of Theorem 6, the parameters µ1 and µ2 are selected according to Proposition 10, and ε1 , µ1/σ2 and ε2 , µ2/σ2 (Proposition 8). Importantly, the first expression is evaluated only when q < 1, µ1 ≥ λ, µ2 > 1, and q ≤ e(µ2−1)ε2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 . These conditions can either be checked for each application of the aggregation mechanism, or a critical value of q0 that separates the range of applicability of the data-dependent and data-independent bounds can be computed for given σ and λ. In our implementation we pursue the second approach. The following corollary offers a simple asymptotic expression of the privacy of GNMax for the case when there are large (relative to σ) gaps between the highest three vote counts. Corollary 11. If the top three vote counts are n1 > n2 > n3 and n1 − n2, n2 − n3 σ, then the mechanism GNMax with Gaussian of variance σ2 satisfies (λ, exp(−2λ/σ2)/λ)-RDP for λ = (n1 − n2)/4. Proof. Denote the noisy counts as ñi = ni + N (0, σ2). Ignoring outputs other than those with the highest and the second highest counts, we bound q = Pr [M(D) 6= 1] as Pr[ñ1 < ñ2] = Pr[N(0, 2σ2) > n1 − n2] < exp ( −(n1 − n2)2/4σ2 ) , which we use as q̃. Plugging q̃ in Proposition 10, we have µ1 − 1 = µ2 = (n1 − n2)/2, limiting the range of applicability of Theorem 6 to λ < (n1 − n2)/2. Choosing λ = (n1−n2)/4 ensuresA(q̃, µ2, ε2) ≈ 1, which allows approximating the bound (2) as q̃ ·B(q̃, µ1, ε1)λ−1/(λ− 1). The proof follows by straightforward calculation. B SMOOTH SENSITIVITY AND PUBLISHING THE PRIVACY PARAMETER The privacy guarantees obtained for the mechanisms in this paper via Theorem 6 take as input q̃, an upper bound on the probability that the aggregate mechanism returns the true plurality. This means that the resulting privacy parameters computed depend on teacher votes and hence the underlying data. 
To avoid potential privacy breaches from simply publishing the data-dependent parameter, we need to publish a sanitized version of the privacy loss. This is done by adding noise to the computed privacy loss estimates using the smooth sensitivity algorithm proposed by Nissim et al. (2007). This section has the following structure. First we recall the notion of smooth sensitivity and introduce an algorithm for computing the smooth sensitivity of the privacy loss function of the GNMax mechanism. In the rest of the section we prove correctness of these algorithms by stating several conditions on the mechanism, proving that these conditions are sufficient for correctness of the algorithm, and finally demonstrating that GNMax satisfies these conditions. B.1 COMPUTING SMOOTH SENSITIVITY Any dataset D defines a histogram n̄ = (n1, . . . , nm) ∈ Nm of the teachers’ votes. We have a natural notion of the distance between two histograms dist(n̄, n̄′) and a function q:Nm → [0, 1] on these histograms computing the bound according to Proposition 7. The value q(n̄) can be used as q̃ in the application of Theorem 6. Additionally we have n(i) denote the i-th highest bar in the histogram. We aim at calculating a smooth sensitivity of β (q(n̄)) whose definition we recall now. Definition 12 (Smooth Sensitivity). Given the smoothness parameter β, a β-smooth sensitivity of f(n) is defined as SSβ(n̄) , max d≥0 e−βd · max n̄′:dist(n̄,n̄′)≤d L̃S(n̄′), where L̃S(n̄) ≥ max n̄′:dist(n̄,n̄′)=1 |f(n)− f(n′)| is an upper bound on the local sensitivity. We now describe Algorithms 3–5 computing a smooth sensitivity of β (q(·)). The algorithms assume the existence of efficiently computable functions q:Nm → [0, 1], BL,BU: [0, 1] → [0, 1], and a constant q0. Informally, the functions BU and BL respectively upper and lower bound the value of q evaluated at any neighbor of n̄ given q(n̄), and [0, q0) limits the range of applicability of data-dependent analysis. The functions BL and BU are defined as follows. Their derivation appears in Section B.4. BU(q) , min { m− 1 2 erfc ( erfc-1 ( 2q m− 1 ) − 1 σ ) , 1 } , BL(q) , m− 1 2 erfc ( erfc-1 ( 2q m− 1 ) + 1 σ ) , Algorithm 3 – Local Sensitivity: use the functions BU and BL to compute (an upper bound) of the local sensitivity at a given q value by looking at the difference of β (·) evaluated on the bounds. 1: procedure L̃S(q) 2: if q1 ≤ q ≤ q0 then . q1 = BL(q0). Interpolate the middle part. 3: q ← q1 4: end if 5: return max{β (BU(q))− β (q) ,β (q)− β (BL(q))} 6: end procedure B.2 NOTATION AND CONDITIONS Notation. We find that the algorithm and the proof of its correctness are more naturally expressed if we relax the notions of a histogram and its neighbors to allow non-integer values. • We generalize histograms to be any vector with non-negative real values. This relaxation is used only in the analysis of algorithms; the actual computations are performed exclusively over integer-valued inputs. • Let n̄ = [n1, . . . , nm] ∈ Rm, ni ≥ 0 denote a histogram. Let n(i) denote the i-th bar in the descending order. • Define a “move” as increasing one bar by some value in [0, 1] and decreasing one bar by a (possibly different) value in [0, 1] subject to the resulting value be non-negative. Notice the difference between the original problem and our relaxation. In the original formulation, the histogram takes only integer values and we can only increase/decrease them by exactly 1. In contrast, we allow real values and a teacher can contribute an arbitrary amount in [0, 1] to any one class. 
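A direct transcription of BU, BL, and Algorithm 3 into code is straightforward. The sketch below is ours (assuming numpy/scipy); beta(·) stands for the data-dependent accountant viewed as a function of the q value, and q0 for the critical value separating the data-dependent and data-independent regimes:

import numpy as np
from scipy.special import erfc, erfcinv

def BU(q, sigma, m):
    # Upper bound on the q value of any neighboring histogram.
    return min((m - 1) / 2.0 * erfc(erfcinv(2.0 * q / (m - 1)) - 1.0 / sigma), 1.0)

def BL(q, sigma, m):
    # Lower bound on the q value of any neighboring histogram.
    return (m - 1) / 2.0 * erfc(erfcinv(2.0 * q / (m - 1)) + 1.0 / sigma)

def local_sensitivity(q, beta, sigma, m, q0):
    # Algorithm 3: upper bound on the local sensitivity of beta(q(.)) at this q value.
    q1 = BL(q0, sigma, m)
    if q1 <= q <= q0:          # interpolate the flat middle region
        q = q1
    return max(beta(BU(q, sigma, m)) - beta(q),
               beta(q) - beta(BL(q, sigma, m)))

Algorithms 4 and 5 below then compute the sensitivity at a given distance and take the maximum over d of e^(-βd) times these sensitivities.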
Algorithm 4 – Sensitivity at a distance: given a histogram n̄, compute the sensitivity of β (·) at distance at most d using the procedure L̃S, function q(·), constants q0 and q1 = BL(q0), and careful case analysis that finds the neighbor at distance d with the maximum sensitivity. 1: procedure ATDISTANCED(n̄, d) 2: q ← q(n̄) 3: if q1 ≤ q ≤ q0 then . q is in the flat region. 4: return L̃S(q), STOP 5: end if 6: if q < q1 then . Need to increase q. 7: if n(1) − n(2) < 2d then . n(i) is the ith largest element. 8: return L̃S(q1), STOP 9: else 10: n̄′ ← SORT(n̄) + [−d, d, 0, . . . , 0] 11: q′ ← q(n̄′) 12: if q′ > q1 then 13: return L̃S(q0), STOP 14: else 15: return L̃S(q′), CONTINUE 16: end if 17: end if 18: else . Need to decrease q. 19: if ∑d i=2 n (i) ≤ d then 20: n̄′ ← [n, 0, . . . , 0] 21: q′ ← q(n̄′) 22: return L̃S(q′), STOP 23: else 24: n̄′ ← SORT(n̄) + [d, 0, . . . , 0] 25: for d′ = 1, . . . , d do 26: n′(2) ← n′(2) − 1 . The index of n′(2) may change. 27: end for 28: q′ ← q(n̄′) 29: if q′ < q0 then 30: return L̃S(q0), STOP 31: else 32: return L̃S(q′), CONTINUE 33: end if 34: end if 35: end if 36: end procedure Algorithm 5 – Smooth Sensitivity: Compute the β smooth sensitivity of β (·) via Definition 12 by looking at sensitivities at various distances and returning the maximum weighted by e−βd. 1: procedure SMOOTHSENSITIVITY(n̄, β) 2: S ← 0 3: d← 0 4: repeat 5: c,StoppingCondition← ATDISTANCED(n̄, d) 6: S ← max{S, c · e−βd} 7: d← d+ 1 8: until StoppingCondition = STOP 9: end procedure • Define the distance between two histograms n̄ = (n1, . . . , nm) and n̄′ = (n′1, . . . , n ′ m) as d(n̄, n̄′) , max ∑ i:ni>n′i dni − n′ie, ∑ i:ni<n′i dn′i − nie , which is equal to the smallest number of “moves” needed to make the two histograms identical. We use the ceiling function since a single step can increase/decrease one bar by at most 1. We say that two histograms are neighbors if their distance d is 1. Notice that analyses of Rényi differential privacy for LNMax, GNMax and the exponential mechanism are still applicable when the neighboring datasets are defined in this manner. • Given a randomized aggregatorM:Rm≥0 → [m], let q:Rm≥0 → [0, 1] be so that q(n̄) ≥ Pr[M(n̄) 6= argmax(n̄)]. When the context is clear, we use q to denote a specific value of the function, which, in particular, can be used as q̃ in applications of Theorem 6. • Let β: [0, 1]→ R be the function that maps a q value to the value of the Rényi accountant. Conditions. Throughout this section we will be referring to the list of conditions on q(·) and β (·): C1. The function q(·) is continuous in each argument ni. C2. There exist functions BU,BL: [0, 1] → [0, 1] such that for any neighbor n̄′ of n̄, we have BL(q(n̄)) ≤ q(n̄′) ≤ BU(q(n̄)), i.e., BU and BL provide upper and lower bounds on the q value of any neighbor of n̄. C3. BL(q) is increasing in q. C4. BU and BL are functional inverses of each other in part of the range, i.e., q = BL(BU(q)) for all q ∈ [0, q0], where q0 is defined below. Additionally BL(q) ≤ q ≤ BU(q) for all q ∈ [0, 1]. C5. β (·) has the following shape: there exist constants β∗ and q0 ≤ 0.5, such that β (q) nondecreasing in [0, q0] and β (q) = β∗ ≥ β (q0) for q > q0. The constant β∗ corresponds to a data-independent bound. C6. ∆β (q) , β (BU(q))− β (q) is non-decreasing in [0,BL(q0)], i.e., when BU(q) ≤ q0. C7. Recall that n(i) is the i-th largest coordinate of a histogram n̄. Then, if q(n̄) ≤ BU(q0), then q(n̄) is differentiable in all coordinates and ∀i > j ≥ 2 ∂q ∂n(j) (n̄) ≥ ∂q ∂n(i) (n̄) ≥ 0. C8. 
The function q(n̄) is invariant under addition of a constant, i.e., q(n̄) = q(n̄+ [x, . . . , x]) for all n̄ and x ≥ 0, and q(n̄) is invariant under permutation of n̄, i.e., q(n̄) = q(π(n̄)) for all permutations π on [m]. Finally, we require that if n(1) = n(2), then q(n̄) ≥ q0. We may additionally assume that q0 ≥ q([n, 0, . . . , 0]). Indeed, if this condition is not satisfied, then the data-dependent analysis is not going to be used anywhere. The most extreme histogram— [n, 0, . . . , 0]—is the most advantageous setting for applying data-dependent bounds. If we cannot use the data-dependent bound even in that case, we would be using the data-independent bound everywhere and do not need to compute smooth sensitivity anyway. Yet this condition is not automatically satisfied. For example, if m (the number of classes) is large compared to n (the number of teachers), we might have large q([n, 0, . . . , 0]). So we need to check this condition in the code before doing smooth sensitivity calculation. B.3 CORRECTNESS OF ALGORITHMS 3–5 Recall that local sensitivity of a deterministic function f is defined as max f(D)− f(D′), where D and D′ are neighbors. Proposition 13. Under conditions C2–C6, Algorithm 3 computes an upper bound on local sensitivity of β (q(n̄)). Proof. Since β (·) is non-decreasing everywhere (by C5), and for any neighbors n̄ and n̄′ it holds that BL(q(n̄)) ≤ q(n̄′) ≤ BU(q(n̄)) (by C2), we have the following |β (q(n̄))− β (q(n̄′))| ≤ max { β ( BU(q(n̄)) ) − β ( q(n̄) ) , β ( q(n̄) ) − β ( BL(q(n̄)) )} = max { ∆β ( q(n̄) ) , ∆β ( BL(q(n̄)) )} as an upper bound on the local sensitivity of β (q(·)) at input n̄. The function computed by Algorithm 3 differs from above when q(n̄) ∈ (BL(q0), q0). To complete the proof we need to argue that the local sensitivity is upper bounded by ∆β (BL(q0)) for q(n̄) in this interval. The bound follows from the following three observations. First, ∆β (q) is non-increasing in the range (BL(q0), 1], since β (BU(q)) is constant (by BU(q) ≥ BU(BL(q0)) = q0 and C5) and β (q) is non-decreasing in the range (by C5). In particular, ∆β (q) ≤ ∆β (BL(q0)) if q ≥ BL(q0). (8) Second, ∆β (BL(q)) is non-decreasing in the range [0, q0] since BL(q) is increasing (by C3 and C6). This implies that ∆β (BL(q)) ≤ ∆β (BL(q0)) if q ≤ q0. (9) By (8) and (9) applied to the intersection of the two ranges, it holds that max { ∆β ( q(n̄) ) , ∆β ( BL(q(n̄)) )} ≤ ∆β (BL(q0)) if BL(q0) ≤ q ≤ q0, as needed. We thus established that the function computed by Algorithm 3, which we call L̃S(q) from now on, is an upper bound on the local sensitivity. Formally, L̃S(q) , { ∆β (BL(q0)) if q ∈ (BL(q0), q0), max {∆β (q) ,∆β (BL(q))} otherwise. The following proposition characterizes the growth of L̃S(q). Proposition 14. Assuming conditions C2–C6, the function L̃S(q) is non-decreasing in [0,BL(q0)], constant in [BL(q0), q0], and non-increasing in [q0, 1]. Proof. Consider separately three intervals. • By construction, L̃S is constant in [BL(q0), q0]. • Since both functions ∆β (·) and ∆β (BL(·)) are each non-decreasing in [0,BL(q0)), so is their max. • In the interval (q0, 1], β (q) is constant. Hence ∆β (q) = 0 and ∆β (BL(q)) = β (q) − β (BL(q)) is non-decreasing. Their maximum value ∆β (BL(q)) is non-decreasing. The claim follows. We next prove correctness of Algorithm 4, which computes the maximal sensitivity of β at a fixed distance. The proof relies on the following notion of a partial order between histograms. Definition
1. How does the proposed technique utilize Gaussian noise in the PATE framework, and what are its advantages over Laplace noise?
2. Can you provide a detailed explanation of the selective answering strategy used in the teacher ensemble, and how does it impact the privacy-utility tradeoff?
3. How does the privacy cost of selective aggregation work, especially when the teachers do not agree, and what are the implications for privacy preservation?
4. What are the main strengths and weaknesses of the paper regarding its contributions to private learning using the PATE framework?
5. Are there any limitations or areas for improvement in the proposed techniques, particularly regarding their applicability to various privacy-preserving scenarios?
Review
The paper proposes novel techniques for private learning with the PATE framework. The two key ideas are the use of Gaussian noise, instead of Laplace noise, in PATE's aggregation mechanism, and a selective answering strategy for the teacher ensemble. The experiments demonstrate the efficacy of the proposed techniques. I am not familiar with private learning, but it is interesting to see that a more concentrated noise distribution (Gaussian) and cleverer aggregators provide a better utility-privacy tradeoff.
1. Regarding the noise distribution, I wonder whether its variance also plays a role in maintaining a good utility-privacy trade-off. It would be great to discuss this and to show experimental results for the utility-privacy tradeoff under different variances of the Laplace and Gaussian noise.
2. It would be great to have an intuitive explanation of differential privacy and of the selective aggregation mechanisms, with examples.
3. It would be great to have an explanation of the privacy cost of selective aggregation. Intuitively, if the teacher ensemble declines to answer, that would seem to reveal that the teachers do not agree, and thus to incur some privacy cost.
ICLR
Title Scalable Private Learning with PATE Abstract The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a “student” model the knowledge of an ensemble of “teacher” models, with intuitive privacy provided by training teachers on disjoint data and strong privacy guaranteed by noisy aggregation of teachers’ answers. However, PATE has so far been evaluated only on simple classification tasks like MNIST, leaving unclear its utility when applied to larger-scale learning tasks and real-world datasets. In this work, we show how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors. For this, we introduce new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and prove their tighter differential-privacy guarantees. Our new mechanisms build on two insights: the chance of teacher consensus is increased by using more concentrated noise and, lacking consensus, no answer need be given to a student. The consensus answers used are more likely to be correct, offer better intuitive privacy, and incur lower-differential privacy cost. Our evaluation shows our mechanisms improve on the original PATE on all measures, and scale to larger tasks with both high utility and very strong privacy (ε < 1.0). 1 INTRODUCTION Many attractive applications of modern machine-learning techniques involve training models using highly sensitive data. For example, models trained on people’s personal messages or detailed medical information can offer invaluable insights into real-world language usage or the diagnoses and treatment of human diseases (McMahan et al., 2017; Liu et al., 2017). A key challenge in such applications is to prevent models from revealing inappropriate details of the sensitive data—a nontrivial task, since models are known to implicitly memorize such details during training and also to inadvertently reveal them during inference (Zhang et al., 2017; Shokri et al., 2017). Recently, two promising, new model-training approaches have offered the hope that practical, highutility machine learning may be compatible with strong privacy-protection guarantees for sensitive training data (Abadi et al., 2017). This paper revisits one of these approaches, Private Aggregation of Teacher Ensembles, or PATE (Papernot et al., 2017), and develops techniques that improve its scalability and practical applicability. PATE has the advantage of being able to learn from the aggregated consensus of separate “teacher” models trained on disjoint data, in a manner that both provides intuitive privacy guarantees and is agnostic to the underlying machine-learning techniques (cf. the approach of differentially-private stochastic gradient descent (Abadi et al., 2016)). In the PATE approach multiple teachers are trained on disjoint sensitive data (e.g., different users’ data), and uses the teachers’ aggregate consensus answers in a black-box fashion to supervise the training of a “student” model. By publishing only the student model (keeping the teachers private) and by adding carefully-calibrated Laplacian noise to the aggregate answers used to train the student, the ∗Equal contributions, authors ordered alphabetically. 
Work done while the authors were at Google Brain. original PATE work showed how to establish rigorous (ε, δ) differential-privacy guarantees (Papernot et al., 2017)—a gold standard of privacy (Dwork et al., 2006). However, to date, PATE has been applied to only simple tasks, like MNIST, without any realistic, larger-scale evaluation. The techniques presented in this paper allow PATE to be applied on a larger scale to build more accurate models, in a manner that improves both on PATE’s intuitive privacy-protection due to the teachers’ independent consensus as well as its differential-privacy guarantees. As shown in our experiments, the result is a gain in privacy, utility, and practicality—an uncommon joint improvement. The primary technical contributions of this paper are new mechanisms for aggregating teachers’ answers that are more selective and add less noise. On all measures, our techniques improve on the original PATE mechanism when evaluated on the same tasks using the same datasets, as described in Section 5. Furthermore, we evaluate both variants of PATE on a new, large-scale character recognition task with 150 output classes, inspired by MNIST. The results show that PATE can be successfully utilized even to uncurated datasets—with significant class imbalance as well as erroneous class labels—and that our new aggregation mechanisms improve both privacy and model accuracy. To be more selective, our new mechanisms leverage some pleasant synergies between privacy and utility in PATE aggregation. For example, when teachers disagree, and there is no real consensus, the privacy cost is much higher; however, since such disagreement also suggest that the teachers may not give a correct answer, the answer may simply be omitted. Similarly, teachers may avoid giving an answer where the student already is confidently predicting the right answer. Additionally, we ensure that these selection steps are themselves done in a private manner. To add less noise, our new PATE aggregation mechanisms sample Gaussian noise, since the tails of that distribution diminish far more rapidly than those of the Laplacian noise used in the original PATE work. This reduction greatly increases the chance that the noisy aggregation of teachers’ votes results in the correct consensus answer, which is especially important when PATE is scaled to learning tasks with large numbers of output classes. However, changing the sampled noise requires redoing the entire PATE privacy analysis from scratch (see Section 4 and details in Appendix A). Finally, of independent interest are the details of our evaluation extending that of the original PATE work. In particular, we find that the virtual adversarial training (VAT) technique of Miyato et al. (2017) is a good basis for semi-supervised learning on tasks with many classes, outperforming the improved GANs by Salimans et al. (2016) used in the original PATE work. Furthermore, we explain how to tune the PATE approach to achieve very strong privacy (ε ≈ 1.0) along with high utility, for our real-world character recognition learning task. This paper is structured as follows: Section 2 is the related work section; Section 3 gives a background on PATE and an overview of our work; Section 4 describes our improved aggregation mechanisms; Section 5 details our experimental evaluation; Section 6 offers conclusions; and proofs are deferred to the Appendices. 2 RELATED WORK Differential privacy is by now the gold standard of privacy. 
It offers a rigorous framework whose threat model makes few assumptions about the adversary’s capabilities, allowing differentially private algorithms to effectively cope against strong adversaries. This is not the case of all privacy definitions, as demonstrated by successful attacks against anonymization techniques (Aggarwal, 2005; Narayanan & Shmatikov, 2008; Bindschaedler et al., 2017). The first learning algorithms adapted to provide differential privacy with respect to their training data were often linear and convex (Pathak et al., 2010; Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014; Hamm et al., 2016). More recently, successful developments in deep learning called for differentially private stochastic gradient descent algorithms (Abadi et al., 2016), some of which have been tailored to learn in federated (McMahan et al., 2017) settings. Differentially private selection mechanisms like GNMax (Section 4.1) are commonly used in hypothesis testing, frequent itemset mining, and as building blocks of more complicated private mechanisms. The most commonly used differentially private selection mechanisms are exponential mechanism (McSherry & Talwar, 2007) and LNMax (Bhaskar et al., 2010). Recent works offer lower bounds on sample complexity of such problem (Steinke & Ullman, 2017; Bafna & Ullman, 2017). The Confident and Interactive Aggregator proposed in our work (Section 4.2 and Section 4.3 resp.) use the intuition that selecting samples under certain constraints could result in better training than using samples uniformly at random. In Machine Learning Theory, active learning (Cohn et al., 1994) has been shown to allow learning from fewer labeled examples than the passive case (see e.g. Hanneke (2014)). Similarly, in model stealing (Tramèr et al., 2016), a goal is to learn a model from limited access to a teacher network. There is previous work in differential privacy literature (Hardt & Rothblum, 2010; Roth & Roughgarden, 2010) where the mechanism first decides whether or not to answer a query, and then privately answers the queries it chooses to answer using a traditional noiseaddition mechanism. In these cases, the sparse vector technique (Dwork & Roth, 2014, Chapter 3.6) helps bound the privacy cost in terms of the number of answered queries. This is in contrast to our work where a constant fraction of queries get answered and the sparse vector technique does not seem to help reduce the privacy cost. Closer to our work, Bun et al. (2017) consider a setting where the answer to a query of interest is often either very large or very small. They show that a sparse vector-like analysis applies in this case, where one pays only for queries that are in the middle. 3 BACKGROUND AND OVERVIEW We introduce essential components of our approach towards a generic and flexible framework for machine learning with provable privacy guarantees for training data. 3.1 THE PATE FRAMEWORK Here, we provide an overview of the PATE framework. To protect the privacy of training data during learning, PATE transfers knowledge from an ensemble of teacher models trained on partitions of the data to a student model. Privacy guarantees may be understood intuitively and expressed rigorously in terms of differential privacy. Illustrated in Figure 2, the PATE framework consists of three key parts: (1) an ensemble of n teacher models, (2) an aggregation mechanism and (3) a student model. 
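For readers who prefer code, the following is a toy sketch of this three-part pipeline; it is our own illustration with placeholder data, a generic scikit-learn classifier, and Gaussian-noise aggregation, not the implementation used in our experiments. Each part is described in detail in the paragraphs that follow.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def noisy_aggregate(preds, num_classes, sigma):
    # Count teacher votes for one query and return the noisy plurality label.
    votes = np.bincount(preds, minlength=num_classes)
    return int(np.argmax(votes + rng.normal(0.0, sigma, size=num_classes)))

# (1) Teachers: disjoint partitions of the sensitive data, one model per partition.
n_teachers, num_classes = 50, 2
X_priv = rng.normal(size=(50000, 5))
y_priv = (X_priv[:, 0] + X_priv[:, 1] > 0).astype(int)
teachers = [LogisticRegression(max_iter=200).fit(Xi, yi)
            for Xi, yi in zip(np.array_split(X_priv, n_teachers),
                              np.array_split(y_priv, n_teachers))]

# (2) Aggregation: label a small public set with the noisy plurality of teacher votes.
X_pub = rng.normal(size=(500, 5))
all_preds = np.stack([t.predict(X_pub) for t in teachers]).astype(int)
y_pub = np.array([noisy_aggregate(all_preds[:, j], num_classes, sigma=5.0)
                  for j in range(X_pub.shape[0])])

# (3) Student: trained only on the public inputs and their noisy aggregate labels.
student = LogisticRegression(max_iter=200).fit(X_pub, y_pub)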
Teacher models: Each teacher is a model trained independently on a subset of the data whose privacy one wishes to protect. The data is partitioned to ensure no pair of teachers will have trained on overlapping data. Any learning technique suitable for the data can be used for any teacher. Training each teacher on a partition of the sensitive data produces n different models solving the same task. At inference, teachers independently predict labels. Aggregation mechanism: When there is a strong consensus among teachers, the label they almost all agree on does not depend on the model learned by any given teacher. Hence, this collective decision is intuitively private with respect to any given training point—because such a point could have been included only in one of the teachers’ training set. To provide rigorous guarantees of differential privacy, the aggregation mechanism of the original PATE framework counts votes assigned to each class, adds carefully calibrated Laplacian noise to the resulting vote histogram, and outputs the class with the most noisy votes as the ensemble’s prediction. This mechanism is referred to as the max-of-Laplacian mechanism, or LNMax, going forward. For samples x and classes 1, . . . ,m, let fj(x) ∈ [m] denote the j-th teacher model’s prediction and ni denote the vote count for the i-th class (i.e., ni , |fj(x) = i|). The output of the mechanism is A(x) , argmaxi (ni(x) + Lap (1/γ)). Through a rigorous analysis of this mechanism, the PATE framework provides a differentially private API: the privacy cost of each aggregated prediction made by the teacher ensemble is known. Student model: PATE’s final step involves the training of a student model by knowledge transfer from the teacher ensemble using access to public—but unlabeled—data. To limit the privacy cost of labeling them, queries are only made to the aggregation mechanism for a subset of public data to train the student in a semi-supervised way using a fixed number of queries. The authors note that every additional ensemble prediction increases the privacy cost spent and thus cannot work with unbounded queries. Fixed queries fixes privacy costs as well as diminishes the value of attacks analyzing model parameters to recover training data (Zhang et al., 2017). The student only sees public data and privacy-preserving labels. 3.2 DIFFERENTIAL PRIVACY Differential privacy (Dwork et al., 2006) requires that the sensitivity of the distribution of an algorithm’s output to small perturbations of its input be limited. The following variant of the definition captures this intuition formally: Definition 1. A randomized mechanismM with domain D and rangeR satisfies (ε, δ)-differential privacy if for any two adjacent inputs D,D′ ∈ D and for any subset of outputs S ⊆ R it holds that: Pr[M(D) ∈ S] ≤ eε ·Pr[M(D′) ∈ S] + δ. (1) For our application of differential privacy to ML, adjacent inputs are defined as two datasets that only differ by one training example and the randomized mechanismM would be the model training algorithm. The privacy parameters have the following natural interpretation: ε is an upper bound on the loss of privacy, and δ is the probability with which this guarantee may not hold. Composition theorems (Dwork & Roth, 2014) allow us to keep track of the privacy cost when we run a sequence of mechanisms. 3.3 RÉNYI DIFFERENTIAL PRIVACY Papernot et al. 
(2017) note that the natural approach to bounding PATE’s privacy loss—by bounding the privacy cost of each label queried and using strong composition (Dwork et al., 2010) to derive the total cost—yields loose privacy guarantees. Instead, their approach uses data-dependent privacy analysis. This takes advantage of the fact that when the consensus among the teachers is very strong, the plurality outcome has overwhelming likelihood leading to a very small privacy cost whenever the consensus occurs. To capture this effect quantitatively, Papernot et al. (2017) rely on the moments accountant, introduced by Abadi et al. (2016) and building on previous work (Bun & Steinke, 2016; Dwork & Rothblum, 2016). In this section, we recall the language of Rényi Differential Privacy or RDP (Mironov, 2017). RDP generalizes pure differential privacy (δ = 0) and is closely related to the moments accountant. We choose to use RDP as a more natural analysis framework when dealing with our mechanisms that use Gaussian noise. Defined below, the RDP of a mechanism is stated in terms of the Rényi divergence. Definition 2 (Rényi Divergence). The Rényi divergence of order λ between two distributions P and Q is defined as: Dλ(P‖Q) , 1 λ− 1 logEx∼Q [ (P (x)/Q(x)) λ ] = 1 λ− 1 logEx∼P [ (P (x)/Q(x)) λ−1 ] . Definition 3 (Rényi Differential Privacy (RDP)). A randomized mechanismM is said to guarantee (λ, ε)-RDP with λ ≥ 1 if for any neighboring datasets D and D′, Dλ(M(D)‖M(D′)) = 1 λ− 1 logEx∼M(D) [( Pr [M(D) = x] Pr [M(D′) = x] )λ−1] ≤ ε. RDP generalizes pure differential privacy in the sense that ε-differential privacy is equivalent to (∞, ε)-RDP. Mironov (2017) proves the following key facts that allow easy composition of RDP guarantees and their conversion to (ε, δ)-differential privacy bounds. Theorem 4 (Composition). If a mechanism M consists of a sequence of adaptive mechanisms M1, . . . ,Mk such that for any i ∈ [k], Mi guarantees (λ, εi)-RDP, then M guarantees (λ, ∑k i=1 εi)-RDP. Theorem 5 (From RDP to DP). If a mechanism M guarantees (λ, ε)-RDP, then M guarantees (ε+ log 1/δλ−1 , δ)-differential privacy for any δ ∈ (0, 1). While both (ε, δ)-differential privacy and RDP are relaxations of pure ε-differential privacy, the two main advantages of RDP are as follows. First, it composes nicely; second, it captures the privacy guarantee of Gaussian noise in a much cleaner manner compared to (ε, δ)-differential privacy. This lets us do a careful privacy analysis of the GNMax mechanism as stated in Theorem 6. While the analysis of Papernot et al. (2017) leverages the first aspect of such frameworks with the Laplace noise (LNMax mechanism), our analysis of the GNMax mechanism relies on both. 3.4 PATE AGGREGATION MECHANISMS The aggregation step is a crucial component of PATE. It enables knowledge transfer from the teachers to the student while enforcing privacy. We improve the LNMax mechanism used by Papernot et al. (2017) which adds Laplace noise to teacher votes and outputs the class with the highest votes. First, we add Gaussian noise with an accompanying privacy analysis in the RDP framework. This modification effectively reduces the noise needed to achieve the same privacy cost per student query. Second, the aggregation mechanism is now selective: teacher votes are analyzed to decide which student queries are worth answering. This takes into account both the privacy cost of each query and its payout in improving the student’s utility. 
Surprisingly, our analysis shows that these two metrics are not at odds and in fact align with each other: the privacy cost is the smallest when teachers agree, and when teachers agree, the label is more likely to be correct thus being more useful to the student. Third, we propose and study an interactive mechanism that takes into account not only teacher votes on a queried example but possible student predictions on that query. Now, queries worth answering are those where the teachers agree on a class but the student is not confident in its prediction on that class. This third modification aligns the two metrics discussed above even further: queries where the student already agrees with the consensus of teachers are not worth expending our privacy budget on, but queries where the student is less confident are useful and answered at a small privacy cost. 3.5 DATA-DEPENDENT PRIVACY IN PATE A direct privacy analysis of the aggregation mechanism, for reasonable values of the noise parameter, allows answering only few queries before the privacy cost becomes prohibitive. The original PATE proposal used a data-dependent analysis, exploiting the fact that when the teachers have large agreement, the privacy cost is usually much smaller than the data-independent bound would suggest. In our work, we perform a data-dependent privacy analysis of the aggregation mechanism with Gaussian noise. This change of noise distribution turns out be technically much more challenging than the Laplace noise case and we defer the details to Appendix A. This increased complexity of the analysis however does not make the algorithm any more complicated and thus allows us to improve the privacy-utility tradeoff. Sanitizing the privacy cost via smooth sensitivity analysis. An additional challenge with datadependent privacy analyses arises from the fact that the privacy cost itself is now a function of the private data. Further, the data-dependent bound on the privacy cost has large global sensitivity (a metric used in differential privacy to calibrate the noise injected) and is therefore difficult to sanitize. To remedy this, we use the smooth sensitivity framework proposed by Nissim et al. (2007). Appendix B describes how we add noise to the computed privacy cost using this framework to publish a sanitized version of the privacy cost. Section B.1 defines smooth sensitivity and outlines algorithms 3–5 that compute it. The rest of Appendix B argues the correctness of these algorithms. The final analysis shows that the incremental cost of sanitizing our privacy estimates is modest— less than 50% of the raw estimates—thus enabling us to use precise data-dependent privacy analysis while taking into account its privacy implications. 4 IMPROVED AGGREGATION MECHANISMS FOR PATE The privacy guarantees provided by PATE stem from the design and analysis of the aggregation step. Here, we detail our improvements to the mechanism used by Papernot et al. (2017). As outlined in Section 3.4, we first replace the Laplace noise added to teacher votes with Gaussian noise, adapting the data-dependent privacy analysis. Next, we describe the Confident and Interactive Aggregators that select queries worth answering in a privacy-preserving way: the privacy budget is shared between the query selection and answer computation. The aggregators use different heuristics to select queries: the former does not take into account student predictions, while the latter does. 
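Before describing the aggregators, we note how the RDP bookkeeping of Section 3.3 is applied: per-query RDP costs are summed across answered queries (Theorem 4) and the total is converted to an (ε, δ) guarantee (Theorem 5), minimizing over the order λ. A minimal sketch of ours, with purely illustrative numbers rather than the settings reported later:

import numpy as np

def rdp_to_dp(orders, total_rdp, delta):
    # Theorem 5: eps = rdp(lambda) + log(1/delta) / (lambda - 1); keep the best order.
    orders = np.asarray(orders, dtype=float)
    eps = np.asarray(total_rdp, dtype=float) + np.log(1.0 / delta) / (orders - 1.0)
    best = int(np.argmin(eps))
    return float(eps[best]), float(orders[best])

# Illustrative: 4000 queries, each charged a per-query cost of lambda / sigma^2
# (sigma = 100), composed additively (Theorem 4) and converted at delta = 1e-8.
orders = np.arange(2, 513, dtype=float)
total_rdp = 4000 * (orders / 100.0**2)
eps, best_order = rdp_to_dp(orders, total_rdp, delta=1e-8)
print(f"(eps, delta) = ({eps:.2f}, 1e-8) at order lambda = {best_order:.0f}")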
4.1 THE GNMAX AGGREGATOR AND ITS PRIVACY GUARANTEE This section uses the following notation. For a sample x and classes 1 to m, let fj(x) ∈ [m] denote the j-th teacher model’s prediction on x and ni(x) denote the vote count for the i-th class (i.e., ni(x) = |{j: fj(x) = i}|). We define a Gaussian NoisyMax (GNMax) aggregation mechanism as: Mσ(x) , argmax i { ni(x) +N (0, σ2) } , where N (0, σ2) is the Gaussian distribution with mean 0 and variance σ2. The aggregator outputs the class with noisy plurality after adding Gaussian noise to each vote count. In what follow, plurality more generally refers to the highest number of teacher votes assigned among the classes. The Gaussian distribution is more concentrated than the Laplace distribution used by Papernot et al. (2017). This concentration directly improves the aggregation’s utility when the number of classesm is large. The GNMax mechanism satisfies (λ, λ/σ2)-RDP, which holds for all inputs and all λ ≥ 1 (precise statements and proofs of claims in this section are deferred to Appendix A). A straightforward application of composition theorems leads to loose privacy bounds. As an example, the standard advanced composition theorem applied to experiments in the last two rows of Table 1 would give us ε = 8.42 and ε = 10.14 resp. at δ = 10−8 for the Glyph dataset. To refine these, we work out a careful data-dependent analysis that yields values of ε smaller than 1 for the same δ. The following theorem translates data-independent RDP guarantees for higher orders into a data-dependent RDP guarantee for a smaller order λ. We use it in conjunction with Proposition 7 to bound the privacy cost of each query to the GNMax algorithm as a function of q̃, the probability that the most common answer will not be output by the mechanism. Theorem 6 (informal). Let M be a randomized algorithm with (µ1, ε1)-RDP and (µ2, ε2)RDP guarantees and suppose that given a dataset D, there exists a likely outcome i∗ such that Pr [M(D) 6= i∗] ≤ q̃. Then the data-dependent Rényi differential privacy for M of order λ ≤ µ1, µ2 at D is bounded by a function of q̃, µ1, ε1, µ2, ε2, which approaches 0 as q̃ → 0. The new bound improves on the data-independent privacy for λ as long as the distribution of the algorithm’s output on that input has a strong peak (i.e., q̃ 1). Values of q̃ close to 1 could result in a looser bound. Therefore, in practice we take the minimum between this bound and λ/σ2 (the data-independent one). The theorem generalizes Theorem 3 from Papernot et al. (2017), where it was shown for a mechanism satisfying ε-differential privacy (i.e., µ1 = µ2 =∞ and ε1 = ε2). The final step in our analysis uses the following lemma to bound the probability q̃ when i∗ corresponds to the class with the true plurality of teacher votes. Proposition 7. For any i∗ ∈ [m], we have Pr [Mσ(D) 6= i∗] ≤ 12 ∑ i 6=i∗ erfc ( ni∗−ni 2σ ) , where erfc is the complementary error function. In Appendix A, we detail how these results translate to privacy bounds. In short, for each query to the GNMax aggregator, given teacher votes ni and the class i∗ with maximal support, Proposition 7 gives us the value of q̃ to use in Theorem 6. We optimize over µ1 and µ2 to get a data-dependent RDP guarantee for any order λ. Finally, we use composition properties of RDP to analyze a sequence of queries, and translate the RDP bound back to an (ε, δ)-DP bound. Expensive queries. This data-dependent privacy analysis leads us to the concept of an expensive query in terms of its privacy cost. 
When teacher votes largely disagree, some ni∗ − ni values may be small leading to a large value for q̃: i.e., the lack of consensus amongst teachers indicates that the aggregator is likely to output a wrong label. Thus expensive queries from a privacy perspective are often bad for training too. Conversely, queries with strong consensus enable tight privacy bounds. This synergy motivates the aggregation mechanisms discussed in the following sections: they evaluate the strength of the consensus before answering a query. 4.2 THE CONFIDENT-GNMAX AGGREGATOR In this section, we propose a refinement of the GNMax aggregator that enables us to filter out queries for which teachers do not have a sufficiently strong consensus. This filtering enables the teachers to avoid answering expensive queries. We also take note to do this selection step itself in a private manner. The proposed Confident Aggregator is described in Algorithm 1. To select queries with overwhelming consensus, the algorithm checks if the plurality vote crosses a threshold T . To enforce privacy in this step, the comparison is done after adding Gaussian noise with variance σ21 . Then, for queries that pass this noisy threshold check, the aggregator proceeds with the usual GNMax mechanism with a smaller variance σ22 . For queries that do not pass the noisy threshold check, the aggregator simply returns ⊥ and the student discards this example in its training. In practice, we often choose significantly higher values for σ1 compared to σ2. This is because we pay the cost of the noisy threshold check always, and without the benefit of knowing that the consensus is strong. We pick T so that queries where the plurality gets less than half the votes (often very expensive) are unlikely to pass the threshold after adding noise, but we still have a high enough yield amongst the queries with a strong consensus. This tradeoff leads us to look for T ’s between 0.6× to 0.8× the number of teachers. The privacy cost of this aggregator is intuitive: we pay for the threshold check for every query, and for the GNMax step only for queries that pass the check. In the work of Papernot et al. (2017), the mechanism paid a privacy cost for every query, expensive or otherwise. In comparison, the Confident Aggregator expends a much smaller privacy cost to check against the threshold, and by answering a significantly smaller fraction of expensive queries, it expends a lower privacy cost overall. 4.3 THE INTERACTIVE-GNMAX AGGREGATOR While the Confident Aggregator excludes expensive queries, it ignores the possibility that the student might receive labels that contribute little to learning, and in turn to its utility. By incorporating the Algorithm 1 – Confident-GNMax Aggregator: given a query, consensus among teachers is first estimated in a privacy-preserving way to then only reveal confident teacher predictions. Input: input x, threshold T , noise parameters σ1 and σ2 1: if maxi{nj(x)}+N (0, σ21) ≥ T then . Privately check for consensus 2: return argmaxj { nj(x) +N (0, σ22) } . Run the usual max-of-Gaussian 3: else 4: return ⊥ 5: end if Algorithm 2 – Interactive-GNMax Aggregator: the protocol first compares student predictions to the teacher votes in a privacy-preserving way to then either (a) reinforce the student prediction for the given query or (b) provide the student with a new label predicted by the teachers. 
Input: input x, confidence γ, threshold T , noise parameters σ1 and σ2, total number of teachers M 1: Ask the student to provide prediction scores p(x) 2: if maxj{nj(x)−Mpj(x)}+N (0, σ21) ≥ T then . Student does not agree with teachers 3: return argmaxj{nj(x) +N (0, σ22)} . Teachers provide new label 4: else if max{pi(x)} > γ then . Student agrees with teachers and is confident 5: return arg maxj pj(x) . Reinforce student’s prediction 6: else 7: return ⊥ . No output given for this label 8: end if student’s current predictions for its public training data, we design an Interactive Aggregator that discards queries where the student already confidently predicts the same label as the teachers. Given a set of queries, the Interactive Aggregator (Algorithm 2) selects those answered by comparing student predictions to teacher votes for each class. Similar to Step 1 in the Confident Aggregator, queries where the plurality of these noised differences crosses a threshold are answered with GNMax. This noisy threshold suffices to enforce privacy of the first step because student predictions can be considered public information (the student is trained in a differentially private manner). For queries that fail this check, the mechanism reinforces the predicted student label if the student is confident enough and does this without looking at teacher votes again. This limited form of supervision comes at a small privacy cost. Moreover, the order of the checks ensures that a student falsely confident in its predictions on a query is not accidentally reinforced if it disagrees with the teacher consensus. The privacy accounting is identical to the Confident Aggregator except in considering the difference between teachers and the student instead of only the teachers votes. In practice, the Confident Aggregator can be used to start training a student when it can make no meaningful predictions and training can be finished off with the Interactive Aggregator after the student gains some proficiency. 5 EXPERIMENTAL EVALUATION Our goal is first to show that the improved aggregators introduced in Section 4 enable the application of PATE to uncurated data, thus departing from previous results on tasks with balanced and wellseparated classes. We experiment with the Glyph dataset described below to address two aspects left open by Papernot et al. (2017): (a) the performance of PATE on a task with a larger number of classes (the framework was only evaluated on datasets with at most 10 classes) and (b) the privacy-utility tradeoffs offered by PATE on data that is class imbalanced and partly mislabeled. In Section 5.2, we evaluate the improvements given by the GNMax aggregator over its Laplace counterpart (LNMax) and demonstrate the necessity of the Gaussian mechanism for uncurated tasks. In Section 5.3, we then evaluate the performance of PATE with both the Confident and Interactive Aggregators on all datasets used to benchmark the original PATE framework, in addition to Glyph. With the right teacher and student training, the two mechanisms from Section 4 achieve high accuracy with very tight privacy bounds. Not answering queries for which teacher consensus is too low (Confident-GNMax) or the student’s predictions already agree with teacher votes (InteractiveGNMax) better aligns utility and privacy: queries are answered at a significantly reduced cost. 5.1 EXPERIMENTAL SETUP MNIST, SVHN, and the UCI Adult databases. 
We evaluate with two computer vision tasks (MNIST and Street View House Numbers (Netzer et al., 2011)) and census data from the UCI Adult dataset (Kohavi, 1996). This enables a comparative analysis of the utility-privacy tradeoff achieved with our Confident-GNMax aggregator and the LNMax originally used in PATE. We replicate the experimental setup and results found in Papernot et al. (2017) with code and teacher votes made available online. The source code for the privacy analysis in this paper as well as supporting data required to run this analysis is available on Github.1 A detailed description of the experimental setup can be found in Papernot et al. (2017); we provide here only a brief overview. For MNIST and SVHN, teachers are convolutional networks trained on partitions of the training set. For UCI Adult, each teacher is a random forest. The test set is split in two halves: the first is used as unlabeled inputs to simulate the student’s public data and the second is used as a hold out to evaluate test performance. The MNIST and SVHN students are convolutional networks trained using semi-supervised learning with GANs à la Salimans et al. (2016). The student for the Adult dataset are fully supervised random forests. Glyph. This optical character recognition task has an order of magnitude more classes than all previous applications of PATE. The Glyph dataset also possesses many characteristics shared by real-world tasks: e.g., it is imbalanced and some inputs are mislabeled. Each input is a 28 × 28 grayscale image containing a single glyph generated synthetically from a collection of over 500K computer fonts.2 Samples representative of the difficulties raised by the data are depicted in Figure 3. The task is to classify inputs as one of the 150 Unicode symbols used to generate them. This set of 150 classes results from pre-processing efforts. We discarded additional classes that had few samples; some classes had at least 50 times fewer inputs than the most popular classes, and these were almost exclusively incorrectly labeled inputs. We also merged classes that were too ambiguous for even a human to differentiate them. Nevertheless, a manual inspection of samples grouped by classes—favorably to the human observer—led to the conservative estimate that some classes remain 5 times more frequent, and mislabeled inputs represent at least 10% of the data. To simulate the availability of private and public data (see Section 3.1), we split data originally marked as the training set (about 65M points) into partitions given to the teachers. Each teacher is a ResNet (He et al., 2016) made of 32 leaky ReLU layers. We train on batches of 100 inputs for 40K steps using SGD with momentum. The learning rate, initially set to 0.1, is decayed after 10K steps to 0.01 and again after 20K steps to 0.001. These parameters were found with a grid search. We split holdout data in two subsets of 100K and 400K samples: the first acts as public data to train the student and the second as its testing data. The student architecture is a convolutional network learnt in a semi-supervised fashion with virtual adversarial training (VAT) from Miyato et al. (2017). Using unlabeled data, we show how VAT can regularize the student by making predictions constant in adversarial3 directions. Indeed, we found that GANs did not yield as much utility for Glyph as for MNIST or SVHN. We train with Adam for 400 epochs and a learning rate of 6 · 10−5. 
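The schedule and splits just described are easy to restate in code; the small sketch below is ours (the helper names are placeholders, and no model or training-loop code is shown).

import numpy as np

def teacher_learning_rate(step):
    # Piecewise-constant schedule for the Glyph teachers: 0.1, then 0.01 after
    # 10K steps, then 0.001 after 20K steps (batches of 100, 40K steps total).
    if step < 10000:
        return 0.1
    if step < 20000:
        return 0.01
    return 0.001

def split_glyph(train_indices, holdout_indices, num_teachers, rng):
    # ~65M training points partitioned across the teachers; the holdout is split
    # into 100K public examples (student training) and 400K test examples.
    teacher_parts = np.array_split(rng.permutation(train_indices), num_teachers)
    holdout = rng.permutation(holdout_indices)
    student_public, student_test = holdout[:100000], holdout[100000:]
    return teacher_parts, student_public, student_test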
5.2 COMPARING THE LNMAX AND GNMAX MECHANISMS Section 4.1 introduces the GNMax mechanism and the accompanying privacy analysis. With a Gaussian distribution, whose tail diminishes more rapidly than the Laplace distribution, we expect better utility when using the new mechanism (albeit with a more involved privacy analysis). To study the tradeoff between privacy and accuracy with the two mechanisms, we run experiments training several ensembles of M teachers for M ∈ {100, 500, 1000, 5000} on the Glyph data. Re- 1https://github.com/tensorflow/models/tree/master/research/differential_privacy 2Glyph data is not public but similar data is available publicly as part of the notMNIST dataset. 3In this context, the adversarial component refers to the phenomenon commonly referred to as adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) and not to the adversarial training approach taken in GANs. call that 65 million training inputs are partitioned and distributed among the M teachers with each teacher receiving between 650K and 13K inputs for the values of M above. The test data is used to query the teacher ensemble and the resulting labels (after the LNMax and GNMax mechanisms) are compared with the ground truth labels provided in the dataset. This predictive performance of the teachers is essential to good student training with accurate labels and is a useful proxy for utility. For each mechanism, we compute (ε, δ)-differential privacy guarantees. As is common in literature, for a dataset on the order of 108 samples, we choose δ = 10−8 and denote the corresponding ε as the privacy cost. The total ε is calculated on a subset of 4,000 queries, which is representative of the number of labels needed by a student for accurate training (see Section 5.3). We visualize in Figure 4 the effect of the noise distribution (left) and the number of teachers (right) on the tradeoff between privacy costs and label accuracy. Observations. On the left of Figure 1, we compare our GNMax aggregator to the LNMax aggregator used by the original PATE proposal, on an ensemble of 1000 teachers and for varying noise scales σ. At fixed test accuracy, the GNMax algorithm consistently outperforms the LNMax mechanism in terms of privacy cost. To explain this improved performance, recall notation from Section 4.1. For both mechanisms, the data dependent privacy cost scales linearly with q̃—the likelihood of an answer other than the true plurality. The value of q̃ falls of as exp(−x2) for GNMax and exp(−x) for LNMax, where x is the ratio (ni∗−ni)/σ. Thus, when ni∗−ni is (say) 4σ, LNMax would have q̃ ≈ e−4 = 0.018..., whereas GNMax would have q̃ ≈ e−16 ≈ 10−7, thereby leading to a much higher likelihood of returning the true plurality. Moreover, this reduced q̃ translates to a smaller privacy cost for a given σ leading to a better utility-privacy tradeoff. As long as each teacher has sufficient data to learn a good-enough model, increasing the number M of teachers improves the tradeoff—as illustrated on the right of Figure 4 with GNMax. The larger ensembles lower the privacy cost of answering queries by tolerating larger σ’s. Combining the two observations made in this Figure, for a fixed label accuracy, we lower privacy costs by switching to the GNMax aggregator and training a larger number M of teachers. 5.3 STUDENT TRAINING WITH THE GNMAX AGGREGATION MECHANISMS As outlined in Section 3, we train a student on public data labeled by the aggregation mechanisms. 
We take advantage of PATE’s flexibility and apply the technique that performs best on each dataset: semi-supervised learning with Generative Adversarial Networks (Salimans et al., 2016) for MNIST and SVHN, Virtual Adversarial Training (Miyato et al., 2017) for Glyph, and fully-supervised random forests for UCI Adult. In addition to evaluating the total privacy cost associated with training the student model, we compare its utility to a non-private baseline obtained by training on the sensitive data (used to train teachers in PATE): we use the baselines of 99.2%, 92.8%, and 85.0% reported by Papernot et al. (2017) respectively for MNIST, SVHN, and UCI Adult, and we measure a baseline of 82.2% for Glyph. We compute (ε, δ)-privacy bounds and denote the privacy cost as the ε value at a value of δ set accordingly to number of training samples. Confident-GNMax Aggregator. Given a pool of 500 to 12,000 samples to learn from (depending on the dataset), the student submits queries to the teacher ensemble running the Confident-GNMax aggregator from Section 4.2. A grid search over a range of plausible values for parameters T , σ1 and σ2 yielded the values reported in Table 1, illustrating the tradeoff between utility and privacy achieved. We additionally measure the number of queries selected by the teachers to be answered and compare student utility to a non-private baseline. The Confident-GNMax aggregator outperforms LNMax for the four datasets considered in the original PATE proposal: it reduces the privacy cost ε, increases student accuracy, or both simultaneously. On the uncurated Glyph data, despite the imbalance of classes and mislabeled data (as evidenced by the 82.2% baseline), the Confident Aggregator achieves 73.5% accuracy with a privacy cost of just ε = 1.02. Roughly 1,300 out of 12,000 queries made are not answered, indicating that several expensive queries were successfully avoided. This selectivity is analyzed in more details in Section 5.4. Interactive-GNMax Aggregator. On Glyph, we evaluate the utility and privacy of an interactive training routine that proceeds in two rounds. Round one runs student training with a Confident Aggregator. A grid search targeting the best privacy for roughly 3,400 answered queries (out of 6,000)—sufficient to bootstrap a student—led us to setting (T=3500, σ1=1500, σ2=100) and a privacy cost of ε ≈ 0.59. In round two, this student was then trained with 10,000 more queries made with the InteractiveGNMax Aggregator (T=3500, σ1=2000, σ2=200). We computed the resulting (total) privacy cost and utility at an exemplar data point through another grid search of plausible parameter values. The result appears in the last row of Table 1. With just over 10,422 answered queries in total at a privacy cost of ε = 0.84, the trained student was able to achieve 73.2% accuracy. Note that this students required fewer answered queries compared to the Confident Aggregator. The best overall cost of student training occurred when the privacy costs for the first and second rounds of training were roughly the same. (The total ε is less than 0.59 × 2 = 1.18 due to better composition—via Theorems 4 and 5.) Comparison with Baseline. Note that the Glyph student’s accuracy remains seven percentage points below the non-private model’s accuracy achieved by training on the 65M sensitive inputs. We hypothesize that this is due to the uncurated nature of the data considered. Indeed, the class imbalance naturally requires more queries to return labels from the less represented classes. 
For instance, a model trained on 200K queries is only 77% accurate on test data. In addition, the large fraction of mislabeled inputs are likely to have a large privacy cost: these inputs are sensitive because they are outliers of the distribution, which is reflected by the weak consensus among teachers on these inputs. 5.4 NOISY THRESHOLD CHECKS AND PRIVACY COSTS Sections 4.1 and 4.2 motivated the need for a noisy threshold checking step before having the teachers answer queries: it prevents most of the privacy budget being consumed by few queries that are expensive and also likely to be incorrectly answered. In Figure 5, we compare the privacy cost ε of answering all queries to only answering confident queries for a fixed number of queries. We run additional experiments to support the evaluation from Section 5.3. With the votes of 5,000 teachers on the Glyph dataset, we plot in Figure 5 the histogram of the plurality vote counts (ni∗ in the notation of Section 4.1) across 25,000 student queries. We compare these values to the vote counts of queries that passed the noisy threshold check for two sets of parameters T and σ1 in Algorithm 1. Smaller values imply weaker teacher agreements and consequently more expensive queries. When (T=3500, σ1=1500) we capture a significant fraction of queries where teachers have a strong consensus (roughly > 4000 votes) while managing to filter out many queries with poor consensus. This moderate check ensures that although many queries with plurality votes between 2,500 and 3,500 are answered (i.e., only 50–70% of teachers agree on a label) the expensive ones are most likely discarded. For (T=5000, σ1=1500), queries with poor consensus are completely culled out. This selectivity comes at the expense of a noticeable drop for queries that might have had a strong consensus and little-to-no privacy cost. Thus, this aggressive check answer fewer queries with very strong privacy guarantees. We reiterate that this threshold checking step itself is done in a private manner. Empirically, in our Interactive Aggregator experiments, we expend about a third to a half of our privacy budget on this step, which still yields a very small cost per query across 6,000 queries. 6 CONCLUSIONS The key insight motivating the addition of a noisy thresholding step to the two aggregation mechanisms proposed in our work is that there is a form of synergy between the privacy and accuracy of labels output by the aggregation: labels that come at a small privacy cost also happen to be more likely to be correct. As a consequence, we are able to provide more quality supervision to the student by choosing not to output labels when the consensus among teachers is too low to provide an aggregated prediction at a small cost in privacy. This observation was further confirmed in some of our experiments where we observed that if we trained the student on either private or non-private labels, the former almost always gave better performance than the latter—for a fixed number of labels. Complementary with these aggregation mechanisms is the use of a Gaussian (rather than Laplace) distribution to perturb teacher votes. In our experiments with Glyph data, these changes proved essential to preserve the accuracy of the aggregated labels—because of the large number of classes. The analysis presented in Section 4 details the delicate but necessary adaptation of analogous results for the Laplace NoisyMax. 
As was the case for the original PATE proposal, semi-supervised learning was instrumental to ensure the student achieves strong utility given a limited set of labels from the aggregation mechanism. However, we found that virtual adversarial training outperforms the approach from Salimans et al. (2016) in our experiments with Glyph data. These results establish lower bounds on the performance that a student can achieve when supervised with our aggregation mechanisms; future work may continue to investigate virtual adversarial training, semi-supervised generative adversarial networks and other techniques for learning the student in these particular settings with restricted supervision. ACKNOWLEDGMENTS We are grateful to Martín Abadi, Vincent Vanhoucke, and Daniel Levy for their useful inputs and discussions towards this paper. A APPENDIX: PRIVACY ANALYSIS In this appendix, we provide the proofs of Theorem 6 and Proposition 7. Moreover, we present Proposition 10, which provides optimal values of µ1 and µ2 to apply towards Theorem 6 for the GNMax mechanism. We start off with a statement about the Rényi differential privacy guarantee of the GNMax. Proposition 8. The GNMax aggregatorMσ guarantees ( λ, λ/σ2 ) -RDP for all λ ≥ 1. Proof. The result follows from observing thatMσ can be decomposed into applying the argmax operator to a noisy histogram resulted from adding Gaussian noise to each dimension of the original histogram. The Gaussian mechanism satisfies (λ, λ/2σ2)-RDP (Mironov, 2017), and since each teacher may change two counts (incrementing one and decrementing the other), the overall RDP guarantee is as claimed. Proposition 7. For a GNMax aggregator Mσ , the teachers’ votes histogram n̄ = (n1, . . . , nm), and for any i∗ ∈ [m], we have Pr [Mσ(D) 6= i∗] ≤ q(n̄), where q(n̄) , 1 2 ∑ i 6=i∗ erfc ( ni∗ − ni 2σ ) . Proof. Recall thatMσ(D) = argmax(ni + Zi), where Zi are distributed as N (0, σ2). Then for any i∗ ∈ [m], we have Pr[Mσ(D) 6= i∗] = Pr [∃i, ni + Zi > ni∗ + Zi∗ ] ≤ ∑ i 6=i∗ Pr [ni + Zi > ni∗ + Zi∗ ] = ∑ i 6=i∗ Pr [Zi − Zi∗ > ni∗ − ni] = ∑ i 6=i∗ 1 2 ( 1− erf ( ni∗ − ni 2σ )) . where the last equality follows from the fact that Zi − Zj is a Gaussian random variable with mean zero and variance 2σ2. We now present a precise statement of Theorem 6. Theorem 6. LetM be a randomized algorithm with (µ1, ε1)-RDP and (µ2, ε2)-RDP guarantees and suppose that there exists a likely outcome i∗ given a dataset D and a bound q̃ ≤ 1 such that q̃ ≥ Pr [M(D) 6= i∗]. Additionally suppose that λ ≤ µ1 and q̃ ≤ e(µ2−1)ε2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 . Then, for any neighboring dataset D′ of D, we have: Dλ(M(D)‖M(D′)) ≤ 1 λ− 1 log ( (1− q̃) ·A(q̃, µ2, ε2)λ−1 + q̃ ·B(q̃, µ1, ε1)λ−1 ) (2) whereA(q̃, µ2, ε2) , (1− q̃)/ ( 1− (q̃eε2) µ2−1 µ2 ) andB(q̃, µ1, ε1) , eε1/q̃ 1 µ1−1 . Proof. Before we proceed to the proof, we introduce some simplifying notation. For a randomized mechanismM and neighboring datasets D and D′, we define βM(λ;D,D ′) , Dλ(M(D)‖M(D′)) = 1 λ− 1 logEx∼M(D) [( Pr [M(D) = x] Pr [M(D′) = x] )λ−1] . As the proof involves working with the RDP bounds in the exponent, we set ζ1 , eε1(µ1−1) and ζ2 , eε2(µ2−1). Finally, we define the following shortcuts: qi , Pr [M(D) = i] and q , ∑ i 6=i∗ qi = Pr [M(D) 6= i∗] , pi , Pr [M(D′) = i] and p , ∑ i6=i∗ pi = Pr [M(D′) 6= i∗] , and note that q ≤ q̃. From the definition of Rényi differential privacy, (µ1, ε1)-RDP implies: exp (βM(µ1;D,D ′)) = (1− q)µ1 (1− p)µ1−1 + ∑ i6=i∗ qµ1i pµ1−1i 1/(µ1−1) ≤ exp(ε1) =⇒ ∑ i>1 qµ1i pµ1−1i = ∑ i>1 qi ( qi pi )µ1−1 ≤ ζ1. 
(3) Since µ1 ≥ λ, f(x) , x µ1−1 λ−1 is convex. Applying Jensen’s Inequality we have the following: ∑ i 6=i∗ qi ( qi pi )λ−1 q µ1−1 λ−1 ≤ ∑ i 6=i∗ qi ( qi pi )µ1−1 q =⇒ ∑ i6=i∗ qi ( qi pi )λ−1 ≤ q ∑ i 6=i∗ qi ( qi pi )µ1−1 q λ−1 µ1−1 (3) =⇒ ∑ i6=i∗ qi ( qi pi )λ−1 ≤ ζ1 λ−1 µ1−1 · q1− λ−1 µ1−1 . (4) Next, by the bound at order µ2, we have: exp (βM(µ2;D ′, D)) = (1− p)µ2 (1− q)µ2−1 + ∑ i 6=i∗ pµ2i qµ2−1i 1/(µ2−1) ≤ exp(ε2) =⇒ (1− p) µ2 (1− q)µ2−1 + ∑ i6=i∗ pµ2i qµ2−1i ≤ ζ2. By the data processing inequality of Rényi divergence, we have (1− p)µ2 (1− q)µ2−1 + pµ2 qµ2−1 ≤ ζ2, which implies p µ2 qµ2−1 ≤ ζ2 and thus p ≤ ( qµ2−1ζ2 ) 1 µ2 . (5) Combining (4) and (5), we can derive a bound at λ. exp (βM(λ,D,D ′)) = (1− q)λ (1− p)λ−1 + ∑ i6=i∗ qλi pλ−1i 1/(λ−1) ≤ (1− q)λ( 1− (qµ2−1ζ2) 1 µ2 )λ−1 + ζ1 λ−1µ1−1 · q1− λ−1µ1−1 1/(λ−1) . (6) Although Equation (6) is very close to the corresponding statement in the theorem’s claim, one subtlety remains. The bound (6) applies to the exact probability q = Pr [M(D) 6= i∗]. In the theorem statement, and in practice, we can only derive an upper bound q̃ on Pr [M(D) 6= i∗]. The last step of the proof requires showing that the expression in Equation (6) is monotone in the range of values of q that we care about. Lemma 9 (Monotonicity of the bound). Let the functions f1(·) and f2(·) be f1(x) , (1− x)λ( 1− (xµ2−1ζ2) 1 µ2 )λ−1 and f2(x) , ζ1 λ−1µ1−1 · x1− λ−1µ1−1 , Then f1(x) + f2(x) is increasing in [ 0,min ( 1, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2)] . Proof. Taking the derivative of f1(x), we have: f ′1(x) = −λ(1− x)λ−1(1− (xµ2−1ζ2) 1 µ2 )λ−1 (1− (xµ2−1ζ2) 1 µ2 )2λ−2 + (1− x)λ(λ− 1)(1− (xµ2−1ζ2) 1 µ2 )λ−2ζ2 1 µ2 · µ2−1µ2 · x − 1µ2 (1− (xµ2−1ζ2) 1 µ2 )2λ−2 = (1− x)λ−1 (1− (xµ2−1ζ2) 1 µ2 )λ−1 ( −λ+ (λ− 1) ( 1− 1 µ2 ) 1− x 1− (xµ2−1ζ2) 1 µ2 ( ζ2 x ) 1 µ2 ) . We intend to show that: f ′1(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 . (7) For x ∈ [ 0, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2] and y ∈ [1,∞), define g(x, y) as: g(x, y) , −λ · yλ−1 + (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 yλ. We claim that g(x, y) is increasing in y and therefore g(x, y) ≥ g(x, 1), and prove it by showing the partial derivative of g(x, y) with respect to y is non-negative. Take a derivative with respect to y as: g′y(x, y) = −λ(λ− 1)yλ−2 + λ(λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 yλ−1 = λ(λ− 1)yλ−2 ( −1 + ( 1− 1 µ2 )( ζ2 x ) 1 µ2 y ) . To see why g′y(x, y) is non-negative in the respective ranges of x and y, note that: x ≤ ζ2/ ( µ1 µ1 − 1 · µ2 µ2 − 1 )µ2 =⇒ x ≤ ζ2/ ( µ2 µ2 − 1 )µ2 =⇒ 1 ≤ ζ2 x · ( µ2 − 1 µ2 )µ2 =⇒ 1 ≤ µ2 − 1 µ2 ( ζ2 x ) 1 µ2 =⇒ 1 ≤ µ2 − 1 µ2 ( ζ2 x ) 1 µ2 y (as y ≥ 1) =⇒ 0 ≤ −1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 y =⇒ 0 ≤ g′y(x, y). (in the resp. range of x and y) Consider 1−x 1−(xµ2−1ζ2)1/µ2 . Since ζ2 ≥ 1 and x ≤ 1, we have x ≤ ζ2 and hence 1− x 1− (xµ2−1ζ2) 1 µ2 ≥ 1− x 1− (xµ2−1x) 1 µ2 = 1. Therefore we can set y = 1−x 1−(xµ2−1ζ2)1/µ2 and apply the fact that g(x, y) ≥ g(x, 1) for all y ≥ 1 to get f ′1(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 , as required by (7). Taking the derivative of f2(x), we have: f ′2(x) = ζ1 λ−1 µ1−1 · ( 1− λ− 1 µ1 − 1 ) x− λ−1 µ1−1 = ( ζ1 x ) λ−1 µ1−1 ( 1− λ− 1 µ1 − 1 ) ≥ 1− λ− 1 µ1 − 1 . Combining the two terms together, we have: f ′(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 + 1− λ− 1 µ1 − 1 = (λ− 1) ( − µ1 µ1 − 1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 ) . For f ′(x) to be non-negative we need: − µ1 µ1 − 1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 ≥ 0 ⇐⇒ ( µ1 µ1 − 1 · µ2 µ2 − 1 )µ2 ≤ ζ2 x . So f(x) is increasing for x ∈ [ 0, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2] . 
This means for q ≤ q̃ ≤ ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 , we have f(q) ≤ f(q̃). This completes the proof of the lemma and that of the theorem. Theorem 6 yields data-dependent Rényi differential privacy bounds for any value of µ1 and µ2 larger than λ. The following proposition simplifies this search by calculating optimal higher moments µ1 and µ2 for the GNMax mechanism with variance σ2. Proposition 10. When applying Theorem 6 and Proposition 8 for GNMax with Gaussian of variance σ2, the right-hand side of (2) is minimized at µ2 = σ · √ log(1/q̃), and µ1 = µ2 + 1. Proof. We can minimize both terms in (2) independently. To minimize the first term in (6), we minimize (q̃eε2)1−1/µ2 by considering logarithms: log { (q̃eε2) 1−1/µ2 } = log { q̃1− 1 µ2 exp ( µ2 − 1 σ2 )} = ( 1− 1 µ2 ) · log q̃ + µ2 − 1 σ2 = 1 µ2 log 1 q̃ + µ2 σ2 − 1 σ2 − log 1 q̃ , which is minimized at µ2 = σ · √ log(1/q̃). To minimize the second term in (6), we minimize eε1/q̃1/(µ1−1) as follows: log { eε1 q̃1/(µ1−1) } = log { q̃−1/(µ1−1) exp (µ1 σ2 )} = µ1 σ2 + 1 µ1 − 1 log 1 q̃ = 1 σ2 + µ1 − 1 σ2 + 1 µ1 − 1 log 1 q̃ , which is minimized at µ1 = 1 + σ · √ log(1/q̃) completing the proof. Putting this together, we apply the following steps to calculate RDP of order λ for GNMax with variance σ2 on a given dataset D. First, we compute a bound q according to Proposition 7. Then we use the smaller of two bounds: a data-dependent (Theorem 6) and a data-independent one (Proposition 8) : βσ(q) , min { 1 λ− 1 log { (1− q) ·A(q, µ2, ε2)λ−1 + q ·B(q, µ1, ε1)λ−1 } , λ/σ2 } , whereA andB are defined as in the statement of Theorem 6, the parameters µ1 and µ2 are selected according to Proposition 10, and ε1 , µ1/σ2 and ε2 , µ2/σ2 (Proposition 8). Importantly, the first expression is evaluated only when q < 1, µ1 ≥ λ, µ2 > 1, and q ≤ e(µ2−1)ε2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 . These conditions can either be checked for each application of the aggregation mechanism, or a critical value of q0 that separates the range of applicability of the data-dependent and data-independent bounds can be computed for given σ and λ. In our implementation we pursue the second approach. The following corollary offers a simple asymptotic expression of the privacy of GNMax for the case when there are large (relative to σ) gaps between the highest three vote counts. Corollary 11. If the top three vote counts are n1 > n2 > n3 and n1 − n2, n2 − n3 σ, then the mechanism GNMax with Gaussian of variance σ2 satisfies (λ, exp(−2λ/σ2)/λ)-RDP for λ = (n1 − n2)/4. Proof. Denote the noisy counts as ñi = ni + N (0, σ2). Ignoring outputs other than those with the highest and the second highest counts, we bound q = Pr [M(D) 6= 1] as Pr[ñ1 < ñ2] = Pr[N(0, 2σ2) > n1 − n2] < exp ( −(n1 − n2)2/4σ2 ) , which we use as q̃. Plugging q̃ in Proposition 10, we have µ1 − 1 = µ2 = (n1 − n2)/2, limiting the range of applicability of Theorem 6 to λ < (n1 − n2)/2. Choosing λ = (n1−n2)/4 ensuresA(q̃, µ2, ε2) ≈ 1, which allows approximating the bound (2) as q̃ ·B(q̃, µ1, ε1)λ−1/(λ− 1). The proof follows by straightforward calculation. B SMOOTH SENSITIVITY AND PUBLISHING THE PRIVACY PARAMETER The privacy guarantees obtained for the mechanisms in this paper via Theorem 6 take as input q̃, an upper bound on the probability that the aggregate mechanism returns the true plurality. This means that the resulting privacy parameters computed depend on teacher votes and hence the underlying data. 
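For concreteness, this data-dependent quantity, i.e., the per-query bound assembled from Propositions 7, 8 and 10 and Theorem 6 and then composed via Theorems 4 and 5, can be sketched as follows. This is a minimal illustration assuming NumPy and SciPy, not the authors' released implementation.

import numpy as np
from scipy.special import erfc

def q_bound_gnmax(votes, sigma):
    # Proposition 7: upper bound q~ on Pr[GNMax output != true plurality].
    votes = np.asarray(votes, dtype=float)
    i_star = int(np.argmax(votes))
    gaps = votes[i_star] - np.delete(votes, i_star)
    return float(0.5 * np.sum(erfc(gaps / (2.0 * sigma))))

def rdp_gnmax(q, sigma, lam):
    # Data-dependent RDP of order lam (Theorem 6 with the orders of Proposition 10),
    # capped by the data-independent bound lam / sigma^2 (Proposition 8).
    data_independent = lam / sigma ** 2
    if q <= 0.0 or q >= 1.0:
        return data_independent
    mu2 = sigma * np.sqrt(np.log(1.0 / q))           # Proposition 10
    mu1 = mu2 + 1.0
    eps1, eps2 = mu1 / sigma ** 2, mu2 / sigma ** 2  # Proposition 8
    if lam > mu1 or mu2 <= 1.0:                      # range of applicability
        return data_independent
    if q > np.exp((mu2 - 1.0) * eps2) / (mu1 / (mu1 - 1.0) * mu2 / (mu2 - 1.0)) ** mu2:
        return data_independent
    A = (1.0 - q) / (1.0 - (q * np.exp(eps2)) ** ((mu2 - 1.0) / mu2))
    B = np.exp(eps1) / q ** (1.0 / (mu1 - 1.0))
    bound = np.log((1.0 - q) * A ** (lam - 1.0) + q * B ** (lam - 1.0)) / (lam - 1.0)
    return float(min(bound, data_independent))

def total_epsilon(per_query_rdp, lam, delta):
    # Theorem 4 (additive composition at a fixed order) + Theorem 5 (RDP to DP).
    return float(np.sum(per_query_rdp) + np.log(1.0 / delta) / (lam - 1.0))

A full accountant would typically evaluate these bounds over a range of orders λ and keep the smallest resulting ε; the sanitization discussed next is applied before such a data-dependent value is released.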
To avoid potential privacy breaches from simply publishing the data-dependent parameter, we need to publish a sanitized version of the privacy loss. This is done by adding noise to the computed privacy loss estimates using the smooth sensitivity algorithm proposed by Nissim et al. (2007). This section has the following structure. First we recall the notion of smooth sensitivity and introduce an algorithm for computing the smooth sensitivity of the privacy loss function of the GNMax mechanism. In the rest of the section we prove correctness of these algorithms by stating several conditions on the mechanism, proving that these conditions are sufficient for correctness of the algorithm, and finally demonstrating that GNMax satisfies these conditions. B.1 COMPUTING SMOOTH SENSITIVITY Any dataset D defines a histogram n̄ = (n1, . . . , nm) ∈ Nm of the teachers’ votes. We have a natural notion of the distance between two histograms dist(n̄, n̄′) and a function q:Nm → [0, 1] on these histograms computing the bound according to Proposition 7. The value q(n̄) can be used as q̃ in the application of Theorem 6. Additionally we have n(i) denote the i-th highest bar in the histogram. We aim at calculating a smooth sensitivity of β (q(n̄)) whose definition we recall now. Definition 12 (Smooth Sensitivity). Given the smoothness parameter β, a β-smooth sensitivity of f(n) is defined as SSβ(n̄) , max d≥0 e−βd · max n̄′:dist(n̄,n̄′)≤d L̃S(n̄′), where L̃S(n̄) ≥ max n̄′:dist(n̄,n̄′)=1 |f(n)− f(n′)| is an upper bound on the local sensitivity. We now describe Algorithms 3–5 computing a smooth sensitivity of β (q(·)). The algorithms assume the existence of efficiently computable functions q:Nm → [0, 1], BL,BU: [0, 1] → [0, 1], and a constant q0. Informally, the functions BU and BL respectively upper and lower bound the value of q evaluated at any neighbor of n̄ given q(n̄), and [0, q0) limits the range of applicability of data-dependent analysis. The functions BL and BU are defined as follows. Their derivation appears in Section B.4. BU(q) , min { m− 1 2 erfc ( erfc-1 ( 2q m− 1 ) − 1 σ ) , 1 } , BL(q) , m− 1 2 erfc ( erfc-1 ( 2q m− 1 ) + 1 σ ) , Algorithm 3 – Local Sensitivity: use the functions BU and BL to compute (an upper bound) of the local sensitivity at a given q value by looking at the difference of β (·) evaluated on the bounds. 1: procedure L̃S(q) 2: if q1 ≤ q ≤ q0 then . q1 = BL(q0). Interpolate the middle part. 3: q ← q1 4: end if 5: return max{β (BU(q))− β (q) ,β (q)− β (BL(q))} 6: end procedure B.2 NOTATION AND CONDITIONS Notation. We find that the algorithm and the proof of its correctness are more naturally expressed if we relax the notions of a histogram and its neighbors to allow non-integer values. • We generalize histograms to be any vector with non-negative real values. This relaxation is used only in the analysis of algorithms; the actual computations are performed exclusively over integer-valued inputs. • Let n̄ = [n1, . . . , nm] ∈ Rm, ni ≥ 0 denote a histogram. Let n(i) denote the i-th bar in the descending order. • Define a “move” as increasing one bar by some value in [0, 1] and decreasing one bar by a (possibly different) value in [0, 1] subject to the resulting value be non-negative. Notice the difference between the original problem and our relaxation. In the original formulation, the histogram takes only integer values and we can only increase/decrease them by exactly 1. In contrast, we allow real values and a teacher can contribute an arbitrary amount in [0, 1] to any one class. 
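Before turning to Algorithms 4 and 5, the bounds BU and BL and the local-sensitivity procedure of Algorithm 3 above can be sketched as follows. This is a minimal illustration assuming SciPy's erfc and erfcinv; here beta denotes the RDP-versus-q function of Appendix A (for instance rdp_gnmax above at a fixed order), and q0 the cutoff separating the data-dependent and data-independent regimes.

import numpy as np
from scipy.special import erfc, erfcinv

def BU(q, sigma, m):
    # Upper bound on the q value at any neighboring histogram (Section B.1).
    return float(min((m - 1) / 2.0 * erfc(erfcinv(2.0 * q / (m - 1)) - 1.0 / sigma), 1.0))

def BL(q, sigma, m):
    # Lower bound on the q value at any neighboring histogram (Section B.1).
    return float((m - 1) / 2.0 * erfc(erfcinv(2.0 * q / (m - 1)) + 1.0 / sigma))

def local_sensitivity(q, q0, sigma, m, beta):
    # Algorithm 3: upper bound on the local sensitivity of beta(q(.)).
    q1 = BL(q0, sigma, m)
    if q1 <= q <= q0:            # interpolate the flat middle part
        q = q1
    return max(beta(BU(q, sigma, m)) - beta(q), beta(q) - beta(BL(q, sigma, m)))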
Algorithm 4 – Sensitivity at a distance: given a histogram n̄, compute the sensitivity of β (·) at distance at most d using the procedure L̃S, function q(·), constants q0 and q1 = BL(q0), and careful case analysis that finds the neighbor at distance d with the maximum sensitivity. 1: procedure ATDISTANCED(n̄, d) 2: q ← q(n̄) 3: if q1 ≤ q ≤ q0 then . q is in the flat region. 4: return L̃S(q), STOP 5: end if 6: if q < q1 then . Need to increase q. 7: if n(1) − n(2) < 2d then . n(i) is the ith largest element. 8: return L̃S(q1), STOP 9: else 10: n̄′ ← SORT(n̄) + [−d, d, 0, . . . , 0] 11: q′ ← q(n̄′) 12: if q′ > q1 then 13: return L̃S(q0), STOP 14: else 15: return L̃S(q′), CONTINUE 16: end if 17: end if 18: else . Need to decrease q. 19: if ∑d i=2 n (i) ≤ d then 20: n̄′ ← [n, 0, . . . , 0] 21: q′ ← q(n̄′) 22: return L̃S(q′), STOP 23: else 24: n̄′ ← SORT(n̄) + [d, 0, . . . , 0] 25: for d′ = 1, . . . , d do 26: n′(2) ← n′(2) − 1 . The index of n′(2) may change. 27: end for 28: q′ ← q(n̄′) 29: if q′ < q0 then 30: return L̃S(q0), STOP 31: else 32: return L̃S(q′), CONTINUE 33: end if 34: end if 35: end if 36: end procedure Algorithm 5 – Smooth Sensitivity: Compute the β smooth sensitivity of β (·) via Definition 12 by looking at sensitivities at various distances and returning the maximum weighted by e−βd. 1: procedure SMOOTHSENSITIVITY(n̄, β) 2: S ← 0 3: d← 0 4: repeat 5: c,StoppingCondition← ATDISTANCED(n̄, d) 6: S ← max{S, c · e−βd} 7: d← d+ 1 8: until StoppingCondition = STOP 9: end procedure • Define the distance between two histograms n̄ = (n1, . . . , nm) and n̄′ = (n′1, . . . , n ′ m) as d(n̄, n̄′) , max ∑ i:ni>n′i dni − n′ie, ∑ i:ni<n′i dn′i − nie , which is equal to the smallest number of “moves” needed to make the two histograms identical. We use the ceiling function since a single step can increase/decrease one bar by at most 1. We say that two histograms are neighbors if their distance d is 1. Notice that analyses of Rényi differential privacy for LNMax, GNMax and the exponential mechanism are still applicable when the neighboring datasets are defined in this manner. • Given a randomized aggregatorM:Rm≥0 → [m], let q:Rm≥0 → [0, 1] be so that q(n̄) ≥ Pr[M(n̄) 6= argmax(n̄)]. When the context is clear, we use q to denote a specific value of the function, which, in particular, can be used as q̃ in applications of Theorem 6. • Let β: [0, 1]→ R be the function that maps a q value to the value of the Rényi accountant. Conditions. Throughout this section we will be referring to the list of conditions on q(·) and β (·): C1. The function q(·) is continuous in each argument ni. C2. There exist functions BU,BL: [0, 1] → [0, 1] such that for any neighbor n̄′ of n̄, we have BL(q(n̄)) ≤ q(n̄′) ≤ BU(q(n̄)), i.e., BU and BL provide upper and lower bounds on the q value of any neighbor of n̄. C3. BL(q) is increasing in q. C4. BU and BL are functional inverses of each other in part of the range, i.e., q = BL(BU(q)) for all q ∈ [0, q0], where q0 is defined below. Additionally BL(q) ≤ q ≤ BU(q) for all q ∈ [0, 1]. C5. β (·) has the following shape: there exist constants β∗ and q0 ≤ 0.5, such that β (q) nondecreasing in [0, q0] and β (q) = β∗ ≥ β (q0) for q > q0. The constant β∗ corresponds to a data-independent bound. C6. ∆β (q) , β (BU(q))− β (q) is non-decreasing in [0,BL(q0)], i.e., when BU(q) ≤ q0. C7. Recall that n(i) is the i-th largest coordinate of a histogram n̄. Then, if q(n̄) ≤ BU(q0), then q(n̄) is differentiable in all coordinates and ∀i > j ≥ 2 ∂q ∂n(j) (n̄) ≥ ∂q ∂n(i) (n̄) ≥ 0. C8. 
The function q(n̄) is invariant under addition of a constant, i.e., q(n̄) = q(n̄+ [x, . . . , x]) for all n̄ and x ≥ 0, and q(n̄) is invariant under permutation of n̄, i.e., q(n̄) = q(π(n̄)) for all permutations π on [m]. Finally, we require that if n(1) = n(2), then q(n̄) ≥ q0. We may additionally assume that q0 ≥ q([n, 0, . . . , 0]). Indeed, if this condition is not satisfied, then the data-dependent analysis is not going to be used anywhere. The most extreme histogram— [n, 0, . . . , 0]—is the most advantageous setting for applying data-dependent bounds. If we cannot use the data-dependent bound even in that case, we would be using the data-independent bound everywhere and do not need to compute smooth sensitivity anyway. Yet this condition is not automatically satisfied. For example, if m (the number of classes) is large compared to n (the number of teachers), we might have large q([n, 0, . . . , 0]). So we need to check this condition in the code before doing smooth sensitivity calculation. B.3 CORRECTNESS OF ALGORITHMS 3–5 Recall that local sensitivity of a deterministic function f is defined as max f(D)− f(D′), where D and D′ are neighbors. Proposition 13. Under conditions C2–C6, Algorithm 3 computes an upper bound on local sensitivity of β (q(n̄)). Proof. Since β (·) is non-decreasing everywhere (by C5), and for any neighbors n̄ and n̄′ it holds that BL(q(n̄)) ≤ q(n̄′) ≤ BU(q(n̄)) (by C2), we have the following |β (q(n̄))− β (q(n̄′))| ≤ max { β ( BU(q(n̄)) ) − β ( q(n̄) ) , β ( q(n̄) ) − β ( BL(q(n̄)) )} = max { ∆β ( q(n̄) ) , ∆β ( BL(q(n̄)) )} as an upper bound on the local sensitivity of β (q(·)) at input n̄. The function computed by Algorithm 3 differs from above when q(n̄) ∈ (BL(q0), q0). To complete the proof we need to argue that the local sensitivity is upper bounded by ∆β (BL(q0)) for q(n̄) in this interval. The bound follows from the following three observations. First, ∆β (q) is non-increasing in the range (BL(q0), 1], since β (BU(q)) is constant (by BU(q) ≥ BU(BL(q0)) = q0 and C5) and β (q) is non-decreasing in the range (by C5). In particular, ∆β (q) ≤ ∆β (BL(q0)) if q ≥ BL(q0). (8) Second, ∆β (BL(q)) is non-decreasing in the range [0, q0] since BL(q) is increasing (by C3 and C6). This implies that ∆β (BL(q)) ≤ ∆β (BL(q0)) if q ≤ q0. (9) By (8) and (9) applied to the intersection of the two ranges, it holds that max { ∆β ( q(n̄) ) , ∆β ( BL(q(n̄)) )} ≤ ∆β (BL(q0)) if BL(q0) ≤ q ≤ q0, as needed. We thus established that the function computed by Algorithm 3, which we call L̃S(q) from now on, is an upper bound on the local sensitivity. Formally, L̃S(q) , { ∆β (BL(q0)) if q ∈ (BL(q0), q0), max {∆β (q) ,∆β (BL(q))} otherwise. The following proposition characterizes the growth of L̃S(q). Proposition 14. Assuming conditions C2–C6, the function L̃S(q) is non-decreasing in [0,BL(q0)], constant in [BL(q0), q0], and non-increasing in [q0, 1]. Proof. Consider separately three intervals. • By construction, L̃S is constant in [BL(q0), q0]. • Since both functions ∆β (·) and ∆β (BL(·)) are each non-decreasing in [0,BL(q0)), so is their max. • In the interval (q0, 1], β (q) is constant. Hence ∆β (q) = 0 and ∆β (BL(q)) = β (q) − β (BL(q)) is non-decreasing. Their maximum value ∆β (BL(q)) is non-decreasing. The claim follows. We next prove correctness of Algorithm 4, which computes the maximal sensitivity of β at a fixed distance. The proof relies on the following notion of a partial order between histograms. Definition
1. What is the focus of the paper regarding privacy-preserving methods?
2. What are the strengths of the proposed approach, particularly in terms of scalability and utility?
3. What are the weaknesses of the paper, especially regarding its claims and applications?
4. Do you have any suggestions for improving the presentation of the results or the clarity of the writing?
Review
Review Summary: In this work, PATE, an approach for learning with privacy, is modified to scale its application to real-world data sets. This is done by leveraging the synergy between privacy and utility to make better use of the privacy budget spent when transferring knowledge from teachers to the student. Two aggregation mechanisms are introduced for this purpose. It is demonstrated that sampling from a Gaussian distribution (instead of a Laplacian distribution) facilitates the aggregation of teacher votes in tasks with a large number of output classes.

On the positive side: Having scalable models is important, especially models that can be applied to data with privacy concerns. The extension of an approach for learning with privacy to make it scalable is of merit. The paper is well written, and the idea of the model is clear.

On the negative side: In the introduction, the authors motivate the problem with the importance of privacy issues in medical and healthcare data. This is certainly an important topic. However, in the remainder of the paper, the model is applied to neither medical nor healthcare data. The authors mention that the original PATE model was applied to medical record and census data with the UCI diabetes and Adult data sets. I would personally prefer to see the proposed model applied to these kinds of data sets as well.

Minor comments: In Figure 2, the legend needs to be outside the figure; in the current figure, a lot is covered by the legend.
ICLR
Title Scalable Private Learning with PATE Abstract The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a “student” model the knowledge of an ensemble of “teacher” models, with intuitive privacy provided by training teachers on disjoint data and strong privacy guaranteed by noisy aggregation of teachers’ answers. However, PATE has so far been evaluated only on simple classification tasks like MNIST, leaving unclear its utility when applied to larger-scale learning tasks and real-world datasets. In this work, we show how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors. For this, we introduce new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and prove their tighter differential-privacy guarantees. Our new mechanisms build on two insights: the chance of teacher consensus is increased by using more concentrated noise and, lacking consensus, no answer need be given to a student. The consensus answers used are more likely to be correct, offer better intuitive privacy, and incur lower-differential privacy cost. Our evaluation shows our mechanisms improve on the original PATE on all measures, and scale to larger tasks with both high utility and very strong privacy (ε < 1.0). 1 INTRODUCTION Many attractive applications of modern machine-learning techniques involve training models using highly sensitive data. For example, models trained on people’s personal messages or detailed medical information can offer invaluable insights into real-world language usage or the diagnoses and treatment of human diseases (McMahan et al., 2017; Liu et al., 2017). A key challenge in such applications is to prevent models from revealing inappropriate details of the sensitive data—a nontrivial task, since models are known to implicitly memorize such details during training and also to inadvertently reveal them during inference (Zhang et al., 2017; Shokri et al., 2017). Recently, two promising, new model-training approaches have offered the hope that practical, highutility machine learning may be compatible with strong privacy-protection guarantees for sensitive training data (Abadi et al., 2017). This paper revisits one of these approaches, Private Aggregation of Teacher Ensembles, or PATE (Papernot et al., 2017), and develops techniques that improve its scalability and practical applicability. PATE has the advantage of being able to learn from the aggregated consensus of separate “teacher” models trained on disjoint data, in a manner that both provides intuitive privacy guarantees and is agnostic to the underlying machine-learning techniques (cf. the approach of differentially-private stochastic gradient descent (Abadi et al., 2016)). In the PATE approach multiple teachers are trained on disjoint sensitive data (e.g., different users’ data), and uses the teachers’ aggregate consensus answers in a black-box fashion to supervise the training of a “student” model. By publishing only the student model (keeping the teachers private) and by adding carefully-calibrated Laplacian noise to the aggregate answers used to train the student, the ∗Equal contributions, authors ordered alphabetically. 
Work done while the authors were at Google Brain. original PATE work showed how to establish rigorous (ε, δ) differential-privacy guarantees (Papernot et al., 2017)—a gold standard of privacy (Dwork et al., 2006). However, to date, PATE has been applied to only simple tasks, like MNIST, without any realistic, larger-scale evaluation. The techniques presented in this paper allow PATE to be applied on a larger scale to build more accurate models, in a manner that improves both on PATE’s intuitive privacy-protection due to the teachers’ independent consensus as well as its differential-privacy guarantees. As shown in our experiments, the result is a gain in privacy, utility, and practicality—an uncommon joint improvement. The primary technical contributions of this paper are new mechanisms for aggregating teachers’ answers that are more selective and add less noise. On all measures, our techniques improve on the original PATE mechanism when evaluated on the same tasks using the same datasets, as described in Section 5. Furthermore, we evaluate both variants of PATE on a new, large-scale character recognition task with 150 output classes, inspired by MNIST. The results show that PATE can be successfully utilized even to uncurated datasets—with significant class imbalance as well as erroneous class labels—and that our new aggregation mechanisms improve both privacy and model accuracy. To be more selective, our new mechanisms leverage some pleasant synergies between privacy and utility in PATE aggregation. For example, when teachers disagree, and there is no real consensus, the privacy cost is much higher; however, since such disagreement also suggest that the teachers may not give a correct answer, the answer may simply be omitted. Similarly, teachers may avoid giving an answer where the student already is confidently predicting the right answer. Additionally, we ensure that these selection steps are themselves done in a private manner. To add less noise, our new PATE aggregation mechanisms sample Gaussian noise, since the tails of that distribution diminish far more rapidly than those of the Laplacian noise used in the original PATE work. This reduction greatly increases the chance that the noisy aggregation of teachers’ votes results in the correct consensus answer, which is especially important when PATE is scaled to learning tasks with large numbers of output classes. However, changing the sampled noise requires redoing the entire PATE privacy analysis from scratch (see Section 4 and details in Appendix A). Finally, of independent interest are the details of our evaluation extending that of the original PATE work. In particular, we find that the virtual adversarial training (VAT) technique of Miyato et al. (2017) is a good basis for semi-supervised learning on tasks with many classes, outperforming the improved GANs by Salimans et al. (2016) used in the original PATE work. Furthermore, we explain how to tune the PATE approach to achieve very strong privacy (ε ≈ 1.0) along with high utility, for our real-world character recognition learning task. This paper is structured as follows: Section 2 is the related work section; Section 3 gives a background on PATE and an overview of our work; Section 4 describes our improved aggregation mechanisms; Section 5 details our experimental evaluation; Section 6 offers conclusions; and proofs are deferred to the Appendices. 2 RELATED WORK Differential privacy is by now the gold standard of privacy. 
It offers a rigorous framework whose threat model makes few assumptions about the adversary’s capabilities, allowing differentially private algorithms to effectively cope against strong adversaries. This is not the case of all privacy definitions, as demonstrated by successful attacks against anonymization techniques (Aggarwal, 2005; Narayanan & Shmatikov, 2008; Bindschaedler et al., 2017). The first learning algorithms adapted to provide differential privacy with respect to their training data were often linear and convex (Pathak et al., 2010; Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014; Hamm et al., 2016). More recently, successful developments in deep learning called for differentially private stochastic gradient descent algorithms (Abadi et al., 2016), some of which have been tailored to learn in federated (McMahan et al., 2017) settings. Differentially private selection mechanisms like GNMax (Section 4.1) are commonly used in hypothesis testing, frequent itemset mining, and as building blocks of more complicated private mechanisms. The most commonly used differentially private selection mechanisms are exponential mechanism (McSherry & Talwar, 2007) and LNMax (Bhaskar et al., 2010). Recent works offer lower bounds on sample complexity of such problem (Steinke & Ullman, 2017; Bafna & Ullman, 2017). The Confident and Interactive Aggregator proposed in our work (Section 4.2 and Section 4.3 resp.) use the intuition that selecting samples under certain constraints could result in better training than using samples uniformly at random. In Machine Learning Theory, active learning (Cohn et al., 1994) has been shown to allow learning from fewer labeled examples than the passive case (see e.g. Hanneke (2014)). Similarly, in model stealing (Tramèr et al., 2016), a goal is to learn a model from limited access to a teacher network. There is previous work in differential privacy literature (Hardt & Rothblum, 2010; Roth & Roughgarden, 2010) where the mechanism first decides whether or not to answer a query, and then privately answers the queries it chooses to answer using a traditional noiseaddition mechanism. In these cases, the sparse vector technique (Dwork & Roth, 2014, Chapter 3.6) helps bound the privacy cost in terms of the number of answered queries. This is in contrast to our work where a constant fraction of queries get answered and the sparse vector technique does not seem to help reduce the privacy cost. Closer to our work, Bun et al. (2017) consider a setting where the answer to a query of interest is often either very large or very small. They show that a sparse vector-like analysis applies in this case, where one pays only for queries that are in the middle. 3 BACKGROUND AND OVERVIEW We introduce essential components of our approach towards a generic and flexible framework for machine learning with provable privacy guarantees for training data. 3.1 THE PATE FRAMEWORK Here, we provide an overview of the PATE framework. To protect the privacy of training data during learning, PATE transfers knowledge from an ensemble of teacher models trained on partitions of the data to a student model. Privacy guarantees may be understood intuitively and expressed rigorously in terms of differential privacy. Illustrated in Figure 2, the PATE framework consists of three key parts: (1) an ensemble of n teacher models, (2) an aggregation mechanism and (3) a student model. 
Teacher models: Each teacher is a model trained independently on a subset of the data whose privacy one wishes to protect. The data is partitioned to ensure no pair of teachers will have trained on overlapping data. Any learning technique suitable for the data can be used for any teacher. Training each teacher on a partition of the sensitive data produces n different models solving the same task. At inference, teachers independently predict labels. Aggregation mechanism: When there is a strong consensus among teachers, the label they almost all agree on does not depend on the model learned by any given teacher. Hence, this collective decision is intuitively private with respect to any given training point—because such a point could have been included only in one of the teachers’ training set. To provide rigorous guarantees of differential privacy, the aggregation mechanism of the original PATE framework counts votes assigned to each class, adds carefully calibrated Laplacian noise to the resulting vote histogram, and outputs the class with the most noisy votes as the ensemble’s prediction. This mechanism is referred to as the max-of-Laplacian mechanism, or LNMax, going forward. For samples x and classes 1, . . . ,m, let fj(x) ∈ [m] denote the j-th teacher model’s prediction and ni denote the vote count for the i-th class (i.e., ni , |fj(x) = i|). The output of the mechanism is A(x) , argmaxi (ni(x) + Lap (1/γ)). Through a rigorous analysis of this mechanism, the PATE framework provides a differentially private API: the privacy cost of each aggregated prediction made by the teacher ensemble is known. Student model: PATE’s final step involves the training of a student model by knowledge transfer from the teacher ensemble using access to public—but unlabeled—data. To limit the privacy cost of labeling them, queries are only made to the aggregation mechanism for a subset of public data to train the student in a semi-supervised way using a fixed number of queries. The authors note that every additional ensemble prediction increases the privacy cost spent and thus cannot work with unbounded queries. Fixed queries fixes privacy costs as well as diminishes the value of attacks analyzing model parameters to recover training data (Zhang et al., 2017). The student only sees public data and privacy-preserving labels. 3.2 DIFFERENTIAL PRIVACY Differential privacy (Dwork et al., 2006) requires that the sensitivity of the distribution of an algorithm’s output to small perturbations of its input be limited. The following variant of the definition captures this intuition formally: Definition 1. A randomized mechanismM with domain D and rangeR satisfies (ε, δ)-differential privacy if for any two adjacent inputs D,D′ ∈ D and for any subset of outputs S ⊆ R it holds that: Pr[M(D) ∈ S] ≤ eε ·Pr[M(D′) ∈ S] + δ. (1) For our application of differential privacy to ML, adjacent inputs are defined as two datasets that only differ by one training example and the randomized mechanismM would be the model training algorithm. The privacy parameters have the following natural interpretation: ε is an upper bound on the loss of privacy, and δ is the probability with which this guarantee may not hold. Composition theorems (Dwork & Roth, 2014) allow us to keep track of the privacy cost when we run a sequence of mechanisms. 3.3 RÉNYI DIFFERENTIAL PRIVACY Papernot et al. 
(2017) note that the natural approach to bounding PATE’s privacy loss—by bounding the privacy cost of each label queried and using strong composition (Dwork et al., 2010) to derive the total cost—yields loose privacy guarantees. Instead, their approach uses data-dependent privacy analysis. This takes advantage of the fact that when the consensus among the teachers is very strong, the plurality outcome has overwhelming likelihood leading to a very small privacy cost whenever the consensus occurs. To capture this effect quantitatively, Papernot et al. (2017) rely on the moments accountant, introduced by Abadi et al. (2016) and building on previous work (Bun & Steinke, 2016; Dwork & Rothblum, 2016). In this section, we recall the language of Rényi Differential Privacy or RDP (Mironov, 2017). RDP generalizes pure differential privacy (δ = 0) and is closely related to the moments accountant. We choose to use RDP as a more natural analysis framework when dealing with our mechanisms that use Gaussian noise. Defined below, the RDP of a mechanism is stated in terms of the Rényi divergence. Definition 2 (Rényi Divergence). The Rényi divergence of order λ between two distributions P and Q is defined as: Dλ(P‖Q) , 1 λ− 1 logEx∼Q [ (P (x)/Q(x)) λ ] = 1 λ− 1 logEx∼P [ (P (x)/Q(x)) λ−1 ] . Definition 3 (Rényi Differential Privacy (RDP)). A randomized mechanismM is said to guarantee (λ, ε)-RDP with λ ≥ 1 if for any neighboring datasets D and D′, Dλ(M(D)‖M(D′)) = 1 λ− 1 logEx∼M(D) [( Pr [M(D) = x] Pr [M(D′) = x] )λ−1] ≤ ε. RDP generalizes pure differential privacy in the sense that ε-differential privacy is equivalent to (∞, ε)-RDP. Mironov (2017) proves the following key facts that allow easy composition of RDP guarantees and their conversion to (ε, δ)-differential privacy bounds. Theorem 4 (Composition). If a mechanism M consists of a sequence of adaptive mechanisms M1, . . . ,Mk such that for any i ∈ [k], Mi guarantees (λ, εi)-RDP, then M guarantees (λ, ∑k i=1 εi)-RDP. Theorem 5 (From RDP to DP). If a mechanism M guarantees (λ, ε)-RDP, then M guarantees (ε+ log 1/δλ−1 , δ)-differential privacy for any δ ∈ (0, 1). While both (ε, δ)-differential privacy and RDP are relaxations of pure ε-differential privacy, the two main advantages of RDP are as follows. First, it composes nicely; second, it captures the privacy guarantee of Gaussian noise in a much cleaner manner compared to (ε, δ)-differential privacy. This lets us do a careful privacy analysis of the GNMax mechanism as stated in Theorem 6. While the analysis of Papernot et al. (2017) leverages the first aspect of such frameworks with the Laplace noise (LNMax mechanism), our analysis of the GNMax mechanism relies on both. 3.4 PATE AGGREGATION MECHANISMS The aggregation step is a crucial component of PATE. It enables knowledge transfer from the teachers to the student while enforcing privacy. We improve the LNMax mechanism used by Papernot et al. (2017) which adds Laplace noise to teacher votes and outputs the class with the highest votes. First, we add Gaussian noise with an accompanying privacy analysis in the RDP framework. This modification effectively reduces the noise needed to achieve the same privacy cost per student query. Second, the aggregation mechanism is now selective: teacher votes are analyzed to decide which student queries are worth answering. This takes into account both the privacy cost of each query and its payout in improving the student’s utility. 
Surprisingly, our analysis shows that these two metrics are not at odds and in fact align with each other: the privacy cost is the smallest when teachers agree, and when teachers agree, the label is more likely to be correct thus being more useful to the student. Third, we propose and study an interactive mechanism that takes into account not only teacher votes on a queried example but possible student predictions on that query. Now, queries worth answering are those where the teachers agree on a class but the student is not confident in its prediction on that class. This third modification aligns the two metrics discussed above even further: queries where the student already agrees with the consensus of teachers are not worth expending our privacy budget on, but queries where the student is less confident are useful and answered at a small privacy cost. 3.5 DATA-DEPENDENT PRIVACY IN PATE A direct privacy analysis of the aggregation mechanism, for reasonable values of the noise parameter, allows answering only few queries before the privacy cost becomes prohibitive. The original PATE proposal used a data-dependent analysis, exploiting the fact that when the teachers have large agreement, the privacy cost is usually much smaller than the data-independent bound would suggest. In our work, we perform a data-dependent privacy analysis of the aggregation mechanism with Gaussian noise. This change of noise distribution turns out be technically much more challenging than the Laplace noise case and we defer the details to Appendix A. This increased complexity of the analysis however does not make the algorithm any more complicated and thus allows us to improve the privacy-utility tradeoff. Sanitizing the privacy cost via smooth sensitivity analysis. An additional challenge with datadependent privacy analyses arises from the fact that the privacy cost itself is now a function of the private data. Further, the data-dependent bound on the privacy cost has large global sensitivity (a metric used in differential privacy to calibrate the noise injected) and is therefore difficult to sanitize. To remedy this, we use the smooth sensitivity framework proposed by Nissim et al. (2007). Appendix B describes how we add noise to the computed privacy cost using this framework to publish a sanitized version of the privacy cost. Section B.1 defines smooth sensitivity and outlines algorithms 3–5 that compute it. The rest of Appendix B argues the correctness of these algorithms. The final analysis shows that the incremental cost of sanitizing our privacy estimates is modest— less than 50% of the raw estimates—thus enabling us to use precise data-dependent privacy analysis while taking into account its privacy implications. 4 IMPROVED AGGREGATION MECHANISMS FOR PATE The privacy guarantees provided by PATE stem from the design and analysis of the aggregation step. Here, we detail our improvements to the mechanism used by Papernot et al. (2017). As outlined in Section 3.4, we first replace the Laplace noise added to teacher votes with Gaussian noise, adapting the data-dependent privacy analysis. Next, we describe the Confident and Interactive Aggregators that select queries worth answering in a privacy-preserving way: the privacy budget is shared between the query selection and answer computation. The aggregators use different heuristics to select queries: the former does not take into account student predictions, while the latter does. 
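Both aggregators are built on top of a noisy argmax over teacher votes; the swap from Laplace to Gaussian noise described above can be sketched as follows, as a minimal illustration assuming NumPy (LNMax follows the definition in Section 3.1, GNMax the definition given in Section 4.1 below).

import numpy as np

def lnmax(vote_counts, gamma, rng=None):
    # Original PATE aggregation: argmax of vote counts perturbed with Lap(1/gamma) noise.
    rng = rng or np.random.default_rng()
    noisy = np.asarray(vote_counts, dtype=float) + rng.laplace(scale=1.0 / gamma, size=len(vote_counts))
    return int(np.argmax(noisy))

def gnmax(vote_counts, sigma, rng=None):
    # GNMax aggregation: argmax of vote counts perturbed with N(0, sigma^2) noise.
    rng = rng or np.random.default_rng()
    noisy = np.asarray(vote_counts, dtype=float) + rng.normal(scale=sigma, size=len(vote_counts))
    return int(np.argmax(noisy))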
4.1 THE GNMAX AGGREGATOR AND ITS PRIVACY GUARANTEE This section uses the following notation. For a sample x and classes 1 to m, let fj(x) ∈ [m] denote the j-th teacher model’s prediction on x and ni(x) denote the vote count for the i-th class (i.e., ni(x) = |{j: fj(x) = i}|). We define a Gaussian NoisyMax (GNMax) aggregation mechanism as: Mσ(x) , argmax i { ni(x) +N (0, σ2) } , where N (0, σ2) is the Gaussian distribution with mean 0 and variance σ2. The aggregator outputs the class with noisy plurality after adding Gaussian noise to each vote count. In what follow, plurality more generally refers to the highest number of teacher votes assigned among the classes. The Gaussian distribution is more concentrated than the Laplace distribution used by Papernot et al. (2017). This concentration directly improves the aggregation’s utility when the number of classesm is large. The GNMax mechanism satisfies (λ, λ/σ2)-RDP, which holds for all inputs and all λ ≥ 1 (precise statements and proofs of claims in this section are deferred to Appendix A). A straightforward application of composition theorems leads to loose privacy bounds. As an example, the standard advanced composition theorem applied to experiments in the last two rows of Table 1 would give us ε = 8.42 and ε = 10.14 resp. at δ = 10−8 for the Glyph dataset. To refine these, we work out a careful data-dependent analysis that yields values of ε smaller than 1 for the same δ. The following theorem translates data-independent RDP guarantees for higher orders into a data-dependent RDP guarantee for a smaller order λ. We use it in conjunction with Proposition 7 to bound the privacy cost of each query to the GNMax algorithm as a function of q̃, the probability that the most common answer will not be output by the mechanism. Theorem 6 (informal). Let M be a randomized algorithm with (µ1, ε1)-RDP and (µ2, ε2)RDP guarantees and suppose that given a dataset D, there exists a likely outcome i∗ such that Pr [M(D) 6= i∗] ≤ q̃. Then the data-dependent Rényi differential privacy for M of order λ ≤ µ1, µ2 at D is bounded by a function of q̃, µ1, ε1, µ2, ε2, which approaches 0 as q̃ → 0. The new bound improves on the data-independent privacy for λ as long as the distribution of the algorithm’s output on that input has a strong peak (i.e., q̃ 1). Values of q̃ close to 1 could result in a looser bound. Therefore, in practice we take the minimum between this bound and λ/σ2 (the data-independent one). The theorem generalizes Theorem 3 from Papernot et al. (2017), where it was shown for a mechanism satisfying ε-differential privacy (i.e., µ1 = µ2 =∞ and ε1 = ε2). The final step in our analysis uses the following lemma to bound the probability q̃ when i∗ corresponds to the class with the true plurality of teacher votes. Proposition 7. For any i∗ ∈ [m], we have Pr [Mσ(D) 6= i∗] ≤ 12 ∑ i 6=i∗ erfc ( ni∗−ni 2σ ) , where erfc is the complementary error function. In Appendix A, we detail how these results translate to privacy bounds. In short, for each query to the GNMax aggregator, given teacher votes ni and the class i∗ with maximal support, Proposition 7 gives us the value of q̃ to use in Theorem 6. We optimize over µ1 and µ2 to get a data-dependent RDP guarantee for any order λ. Finally, we use composition properties of RDP to analyze a sequence of queries, and translate the RDP bound back to an (ε, δ)-DP bound. Expensive queries. This data-dependent privacy analysis leads us to the concept of an expensive query in terms of its privacy cost. 
When teacher votes largely disagree, some ni∗ − ni values may be small leading to a large value for q̃: i.e., the lack of consensus amongst teachers indicates that the aggregator is likely to output a wrong label. Thus expensive queries from a privacy perspective are often bad for training too. Conversely, queries with strong consensus enable tight privacy bounds. This synergy motivates the aggregation mechanisms discussed in the following sections: they evaluate the strength of the consensus before answering a query. 4.2 THE CONFIDENT-GNMAX AGGREGATOR In this section, we propose a refinement of the GNMax aggregator that enables us to filter out queries for which teachers do not have a sufficiently strong consensus. This filtering enables the teachers to avoid answering expensive queries. We also take note to do this selection step itself in a private manner. The proposed Confident Aggregator is described in Algorithm 1. To select queries with overwhelming consensus, the algorithm checks if the plurality vote crosses a threshold T . To enforce privacy in this step, the comparison is done after adding Gaussian noise with variance σ21 . Then, for queries that pass this noisy threshold check, the aggregator proceeds with the usual GNMax mechanism with a smaller variance σ22 . For queries that do not pass the noisy threshold check, the aggregator simply returns ⊥ and the student discards this example in its training. In practice, we often choose significantly higher values for σ1 compared to σ2. This is because we pay the cost of the noisy threshold check always, and without the benefit of knowing that the consensus is strong. We pick T so that queries where the plurality gets less than half the votes (often very expensive) are unlikely to pass the threshold after adding noise, but we still have a high enough yield amongst the queries with a strong consensus. This tradeoff leads us to look for T ’s between 0.6× to 0.8× the number of teachers. The privacy cost of this aggregator is intuitive: we pay for the threshold check for every query, and for the GNMax step only for queries that pass the check. In the work of Papernot et al. (2017), the mechanism paid a privacy cost for every query, expensive or otherwise. In comparison, the Confident Aggregator expends a much smaller privacy cost to check against the threshold, and by answering a significantly smaller fraction of expensive queries, it expends a lower privacy cost overall. 4.3 THE INTERACTIVE-GNMAX AGGREGATOR While the Confident Aggregator excludes expensive queries, it ignores the possibility that the student might receive labels that contribute little to learning, and in turn to its utility. By incorporating the Algorithm 1 – Confident-GNMax Aggregator: given a query, consensus among teachers is first estimated in a privacy-preserving way to then only reveal confident teacher predictions. Input: input x, threshold T , noise parameters σ1 and σ2 1: if maxi{nj(x)}+N (0, σ21) ≥ T then . Privately check for consensus 2: return argmaxj { nj(x) +N (0, σ22) } . Run the usual max-of-Gaussian 3: else 4: return ⊥ 5: end if Algorithm 2 – Interactive-GNMax Aggregator: the protocol first compares student predictions to the teacher votes in a privacy-preserving way to then either (a) reinforce the student prediction for the given query or (b) provide the student with a new label predicted by the teachers. 
Input: input x, confidence γ, threshold T , noise parameters σ1 and σ2, total number of teachers M 1: Ask the student to provide prediction scores p(x) 2: if maxj{nj(x)−Mpj(x)}+N (0, σ21) ≥ T then . Student does not agree with teachers 3: return argmaxj{nj(x) +N (0, σ22)} . Teachers provide new label 4: else if max{pi(x)} > γ then . Student agrees with teachers and is confident 5: return arg maxj pj(x) . Reinforce student’s prediction 6: else 7: return ⊥ . No output given for this label 8: end if student’s current predictions for its public training data, we design an Interactive Aggregator that discards queries where the student already confidently predicts the same label as the teachers. Given a set of queries, the Interactive Aggregator (Algorithm 2) selects those answered by comparing student predictions to teacher votes for each class. Similar to Step 1 in the Confident Aggregator, queries where the plurality of these noised differences crosses a threshold are answered with GNMax. This noisy threshold suffices to enforce privacy of the first step because student predictions can be considered public information (the student is trained in a differentially private manner). For queries that fail this check, the mechanism reinforces the predicted student label if the student is confident enough and does this without looking at teacher votes again. This limited form of supervision comes at a small privacy cost. Moreover, the order of the checks ensures that a student falsely confident in its predictions on a query is not accidentally reinforced if it disagrees with the teacher consensus. The privacy accounting is identical to the Confident Aggregator except in considering the difference between teachers and the student instead of only the teachers votes. In practice, the Confident Aggregator can be used to start training a student when it can make no meaningful predictions and training can be finished off with the Interactive Aggregator after the student gains some proficiency. 5 EXPERIMENTAL EVALUATION Our goal is first to show that the improved aggregators introduced in Section 4 enable the application of PATE to uncurated data, thus departing from previous results on tasks with balanced and wellseparated classes. We experiment with the Glyph dataset described below to address two aspects left open by Papernot et al. (2017): (a) the performance of PATE on a task with a larger number of classes (the framework was only evaluated on datasets with at most 10 classes) and (b) the privacy-utility tradeoffs offered by PATE on data that is class imbalanced and partly mislabeled. In Section 5.2, we evaluate the improvements given by the GNMax aggregator over its Laplace counterpart (LNMax) and demonstrate the necessity of the Gaussian mechanism for uncurated tasks. In Section 5.3, we then evaluate the performance of PATE with both the Confident and Interactive Aggregators on all datasets used to benchmark the original PATE framework, in addition to Glyph. With the right teacher and student training, the two mechanisms from Section 4 achieve high accuracy with very tight privacy bounds. Not answering queries for which teacher consensus is too low (Confident-GNMax) or the student’s predictions already agree with teacher votes (InteractiveGNMax) better aligns utility and privacy: queries are answered at a significantly reduced cost. 5.1 EXPERIMENTAL SETUP MNIST, SVHN, and the UCI Adult databases. 
We evaluate with two computer vision tasks (MNIST and Street View House Numbers (Netzer et al., 2011)) and census data from the UCI Adult dataset (Kohavi, 1996). This enables a comparative analysis of the utility-privacy tradeoff achieved with our Confident-GNMax aggregator and the LNMax originally used in PATE. We replicate the experimental setup and results found in Papernot et al. (2017) with code and teacher votes made available online. The source code for the privacy analysis in this paper as well as supporting data required to run this analysis is available on Github.1 A detailed description of the experimental setup can be found in Papernot et al. (2017); we provide here only a brief overview. For MNIST and SVHN, teachers are convolutional networks trained on partitions of the training set. For UCI Adult, each teacher is a random forest. The test set is split in two halves: the first is used as unlabeled inputs to simulate the student’s public data and the second is used as a hold out to evaluate test performance. The MNIST and SVHN students are convolutional networks trained using semi-supervised learning with GANs à la Salimans et al. (2016). The student for the Adult dataset are fully supervised random forests. Glyph. This optical character recognition task has an order of magnitude more classes than all previous applications of PATE. The Glyph dataset also possesses many characteristics shared by real-world tasks: e.g., it is imbalanced and some inputs are mislabeled. Each input is a 28 × 28 grayscale image containing a single glyph generated synthetically from a collection of over 500K computer fonts.2 Samples representative of the difficulties raised by the data are depicted in Figure 3. The task is to classify inputs as one of the 150 Unicode symbols used to generate them. This set of 150 classes results from pre-processing efforts. We discarded additional classes that had few samples; some classes had at least 50 times fewer inputs than the most popular classes, and these were almost exclusively incorrectly labeled inputs. We also merged classes that were too ambiguous for even a human to differentiate them. Nevertheless, a manual inspection of samples grouped by classes—favorably to the human observer—led to the conservative estimate that some classes remain 5 times more frequent, and mislabeled inputs represent at least 10% of the data. To simulate the availability of private and public data (see Section 3.1), we split data originally marked as the training set (about 65M points) into partitions given to the teachers. Each teacher is a ResNet (He et al., 2016) made of 32 leaky ReLU layers. We train on batches of 100 inputs for 40K steps using SGD with momentum. The learning rate, initially set to 0.1, is decayed after 10K steps to 0.01 and again after 20K steps to 0.001. These parameters were found with a grid search. We split holdout data in two subsets of 100K and 400K samples: the first acts as public data to train the student and the second as its testing data. The student architecture is a convolutional network learnt in a semi-supervised fashion with virtual adversarial training (VAT) from Miyato et al. (2017). Using unlabeled data, we show how VAT can regularize the student by making predictions constant in adversarial3 directions. Indeed, we found that GANs did not yield as much utility for Glyph as for MNIST or SVHN. We train with Adam for 400 epochs and a learning rate of 6 · 10−5. 
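Two pieces of plumbing underlying this setup, i.e., the disjoint partitioning of the sensitive training set across teachers and the per-query vote histogram consumed by the aggregators, can be sketched as follows. This is a minimal illustration assuming NumPy; the function names are ours and not taken from the released code.

import numpy as np

def partition_for_teachers(num_examples, num_teachers, seed=0):
    # Disjoint, roughly equal-sized partitions of example indices, one per teacher,
    # so that no pair of teachers trains on overlapping sensitive data.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(num_examples), num_teachers)

def vote_histogram(teacher_predictions, num_classes):
    # n_i(x): the number of teachers predicting class i on a given query x.
    return np.bincount(np.asarray(teacher_predictions), minlength=num_classes)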
5.2 COMPARING THE LNMAX AND GNMAX MECHANISMS Section 4.1 introduces the GNMax mechanism and the accompanying privacy analysis. With a Gaussian distribution, whose tail diminishes more rapidly than the Laplace distribution, we expect better utility when using the new mechanism (albeit with a more involved privacy analysis). To study the tradeoff between privacy and accuracy with the two mechanisms, we run experiments training several ensembles of M teachers for M ∈ {100, 500, 1000, 5000} on the Glyph data. Re- 1https://github.com/tensorflow/models/tree/master/research/differential_privacy 2Glyph data is not public but similar data is available publicly as part of the notMNIST dataset. 3In this context, the adversarial component refers to the phenomenon commonly referred to as adversarial examples (Biggio et al., 2013; Szegedy et al., 2014) and not to the adversarial training approach taken in GANs. call that 65 million training inputs are partitioned and distributed among the M teachers with each teacher receiving between 650K and 13K inputs for the values of M above. The test data is used to query the teacher ensemble and the resulting labels (after the LNMax and GNMax mechanisms) are compared with the ground truth labels provided in the dataset. This predictive performance of the teachers is essential to good student training with accurate labels and is a useful proxy for utility. For each mechanism, we compute (ε, δ)-differential privacy guarantees. As is common in literature, for a dataset on the order of 108 samples, we choose δ = 10−8 and denote the corresponding ε as the privacy cost. The total ε is calculated on a subset of 4,000 queries, which is representative of the number of labels needed by a student for accurate training (see Section 5.3). We visualize in Figure 4 the effect of the noise distribution (left) and the number of teachers (right) on the tradeoff between privacy costs and label accuracy. Observations. On the left of Figure 1, we compare our GNMax aggregator to the LNMax aggregator used by the original PATE proposal, on an ensemble of 1000 teachers and for varying noise scales σ. At fixed test accuracy, the GNMax algorithm consistently outperforms the LNMax mechanism in terms of privacy cost. To explain this improved performance, recall notation from Section 4.1. For both mechanisms, the data dependent privacy cost scales linearly with q̃—the likelihood of an answer other than the true plurality. The value of q̃ falls of as exp(−x2) for GNMax and exp(−x) for LNMax, where x is the ratio (ni∗−ni)/σ. Thus, when ni∗−ni is (say) 4σ, LNMax would have q̃ ≈ e−4 = 0.018..., whereas GNMax would have q̃ ≈ e−16 ≈ 10−7, thereby leading to a much higher likelihood of returning the true plurality. Moreover, this reduced q̃ translates to a smaller privacy cost for a given σ leading to a better utility-privacy tradeoff. As long as each teacher has sufficient data to learn a good-enough model, increasing the number M of teachers improves the tradeoff—as illustrated on the right of Figure 4 with GNMax. The larger ensembles lower the privacy cost of answering queries by tolerating larger σ’s. Combining the two observations made in this Figure, for a fixed label accuracy, we lower privacy costs by switching to the GNMax aggregator and training a larger number M of teachers. 5.3 STUDENT TRAINING WITH THE GNMAX AGGREGATION MECHANISMS As outlined in Section 3, we train a student on public data labeled by the aggregation mechanisms. 
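As a rough illustration of this labeling step, the sketch below applies a Confident-GNMax-style aggregator to per-query teacher vote histograms and keeps only the answered queries. The function and parameter names are ours (the example thresholds mirror the Glyph round-one values reported below), and the student's semi-supervised training on the resulting labels is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def confident_gnmax(votes, T, sigma1, sigma2):
    """Answer a query only if the noisy plurality count crosses the threshold T;
    otherwise return None, i.e., no label is released for this query."""
    votes = np.asarray(votes, dtype=float)
    if votes.max() + rng.normal(0.0, sigma1) >= T:                  # noisy consensus check
        noisy = votes + rng.normal(0.0, sigma2, size=votes.shape)   # GNMax answer
        return int(np.argmax(noisy))
    return None

def label_public_data(vote_histograms, T=3500, sigma1=1500, sigma2=100):
    """Map per-query teacher vote histograms to (query index, label) pairs for the student."""
    labeled = []
    for idx, votes in enumerate(vote_histograms):
        label = confident_gnmax(votes, T, sigma1, sigma2)
        if label is not None:
            labeled.append((idx, label))
    return labeled
```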
We take advantage of PATE’s flexibility and apply the technique that performs best on each dataset: semi-supervised learning with Generative Adversarial Networks (Salimans et al., 2016) for MNIST and SVHN, Virtual Adversarial Training (Miyato et al., 2017) for Glyph, and fully-supervised random forests for UCI Adult. In addition to evaluating the total privacy cost associated with training the student model, we compare its utility to a non-private baseline obtained by training on the sensitive data (used to train teachers in PATE): we use the baselines of 99.2%, 92.8%, and 85.0% reported by Papernot et al. (2017) respectively for MNIST, SVHN, and UCI Adult, and we measure a baseline of 82.2% for Glyph. We compute (ε, δ)-privacy bounds and denote the privacy cost as the ε value at a value of δ set accordingly to number of training samples. Confident-GNMax Aggregator. Given a pool of 500 to 12,000 samples to learn from (depending on the dataset), the student submits queries to the teacher ensemble running the Confident-GNMax aggregator from Section 4.2. A grid search over a range of plausible values for parameters T , σ1 and σ2 yielded the values reported in Table 1, illustrating the tradeoff between utility and privacy achieved. We additionally measure the number of queries selected by the teachers to be answered and compare student utility to a non-private baseline. The Confident-GNMax aggregator outperforms LNMax for the four datasets considered in the original PATE proposal: it reduces the privacy cost ε, increases student accuracy, or both simultaneously. On the uncurated Glyph data, despite the imbalance of classes and mislabeled data (as evidenced by the 82.2% baseline), the Confident Aggregator achieves 73.5% accuracy with a privacy cost of just ε = 1.02. Roughly 1,300 out of 12,000 queries made are not answered, indicating that several expensive queries were successfully avoided. This selectivity is analyzed in more details in Section 5.4. Interactive-GNMax Aggregator. On Glyph, we evaluate the utility and privacy of an interactive training routine that proceeds in two rounds. Round one runs student training with a Confident Aggregator. A grid search targeting the best privacy for roughly 3,400 answered queries (out of 6,000)—sufficient to bootstrap a student—led us to setting (T=3500, σ1=1500, σ2=100) and a privacy cost of ε ≈ 0.59. In round two, this student was then trained with 10,000 more queries made with the InteractiveGNMax Aggregator (T=3500, σ1=2000, σ2=200). We computed the resulting (total) privacy cost and utility at an exemplar data point through another grid search of plausible parameter values. The result appears in the last row of Table 1. With just over 10,422 answered queries in total at a privacy cost of ε = 0.84, the trained student was able to achieve 73.2% accuracy. Note that this students required fewer answered queries compared to the Confident Aggregator. The best overall cost of student training occurred when the privacy costs for the first and second rounds of training were roughly the same. (The total ε is less than 0.59 × 2 = 1.18 due to better composition—via Theorems 4 and 5.) Comparison with Baseline. Note that the Glyph student’s accuracy remains seven percentage points below the non-private model’s accuracy achieved by training on the 65M sensitive inputs. We hypothesize that this is due to the uncurated nature of the data considered. Indeed, the class imbalance naturally requires more queries to return labels from the less represented classes. 
For instance, a model trained on 200K queries is only 77% accurate on test data. In addition, the large fraction of mislabeled inputs are likely to have a large privacy cost: these inputs are sensitive because they are outliers of the distribution, which is reflected by the weak consensus among teachers on these inputs. 5.4 NOISY THRESHOLD CHECKS AND PRIVACY COSTS Sections 4.1 and 4.2 motivated the need for a noisy threshold checking step before having the teachers answer queries: it prevents most of the privacy budget being consumed by few queries that are expensive and also likely to be incorrectly answered. In Figure 5, we compare the privacy cost ε of answering all queries to only answering confident queries for a fixed number of queries. We run additional experiments to support the evaluation from Section 5.3. With the votes of 5,000 teachers on the Glyph dataset, we plot in Figure 5 the histogram of the plurality vote counts (ni∗ in the notation of Section 4.1) across 25,000 student queries. We compare these values to the vote counts of queries that passed the noisy threshold check for two sets of parameters T and σ1 in Algorithm 1. Smaller values imply weaker teacher agreements and consequently more expensive queries. When (T=3500, σ1=1500) we capture a significant fraction of queries where teachers have a strong consensus (roughly > 4000 votes) while managing to filter out many queries with poor consensus. This moderate check ensures that although many queries with plurality votes between 2,500 and 3,500 are answered (i.e., only 50–70% of teachers agree on a label) the expensive ones are most likely discarded. For (T=5000, σ1=1500), queries with poor consensus are completely culled out. This selectivity comes at the expense of a noticeable drop for queries that might have had a strong consensus and little-to-no privacy cost. Thus, this aggressive check answer fewer queries with very strong privacy guarantees. We reiterate that this threshold checking step itself is done in a private manner. Empirically, in our Interactive Aggregator experiments, we expend about a third to a half of our privacy budget on this step, which still yields a very small cost per query across 6,000 queries. 6 CONCLUSIONS The key insight motivating the addition of a noisy thresholding step to the two aggregation mechanisms proposed in our work is that there is a form of synergy between the privacy and accuracy of labels output by the aggregation: labels that come at a small privacy cost also happen to be more likely to be correct. As a consequence, we are able to provide more quality supervision to the student by choosing not to output labels when the consensus among teachers is too low to provide an aggregated prediction at a small cost in privacy. This observation was further confirmed in some of our experiments where we observed that if we trained the student on either private or non-private labels, the former almost always gave better performance than the latter—for a fixed number of labels. Complementary with these aggregation mechanisms is the use of a Gaussian (rather than Laplace) distribution to perturb teacher votes. In our experiments with Glyph data, these changes proved essential to preserve the accuracy of the aggregated labels—because of the large number of classes. The analysis presented in Section 4 details the delicate but necessary adaptation of analogous results for the Laplace NoisyMax. 
As was the case for the original PATE proposal, semi-supervised learning was instrumental to ensure the student achieves strong utility given a limited set of labels from the aggregation mechanism. However, we found that virtual adversarial training outperforms the approach from Salimans et al. (2016) in our experiments with Glyph data. These results establish lower bounds on the performance that a student can achieve when supervised with our aggregation mechanisms; future work may continue to investigate virtual adversarial training, semi-supervised generative adversarial networks and other techniques for learning the student in these particular settings with restricted supervision. ACKNOWLEDGMENTS We are grateful to Martín Abadi, Vincent Vanhoucke, and Daniel Levy for their useful inputs and discussions towards this paper. A APPENDIX: PRIVACY ANALYSIS In this appendix, we provide the proofs of Theorem 6 and Proposition 7. Moreover, we present Proposition 10, which provides optimal values of µ1 and µ2 to apply towards Theorem 6 for the GNMax mechanism. We start off with a statement about the Rényi differential privacy guarantee of the GNMax. Proposition 8. The GNMax aggregatorMσ guarantees ( λ, λ/σ2 ) -RDP for all λ ≥ 1. Proof. The result follows from observing thatMσ can be decomposed into applying the argmax operator to a noisy histogram resulted from adding Gaussian noise to each dimension of the original histogram. The Gaussian mechanism satisfies (λ, λ/2σ2)-RDP (Mironov, 2017), and since each teacher may change two counts (incrementing one and decrementing the other), the overall RDP guarantee is as claimed. Proposition 7. For a GNMax aggregator Mσ , the teachers’ votes histogram n̄ = (n1, . . . , nm), and for any i∗ ∈ [m], we have Pr [Mσ(D) 6= i∗] ≤ q(n̄), where q(n̄) , 1 2 ∑ i 6=i∗ erfc ( ni∗ − ni 2σ ) . Proof. Recall thatMσ(D) = argmax(ni + Zi), where Zi are distributed as N (0, σ2). Then for any i∗ ∈ [m], we have Pr[Mσ(D) 6= i∗] = Pr [∃i, ni + Zi > ni∗ + Zi∗ ] ≤ ∑ i 6=i∗ Pr [ni + Zi > ni∗ + Zi∗ ] = ∑ i 6=i∗ Pr [Zi − Zi∗ > ni∗ − ni] = ∑ i 6=i∗ 1 2 ( 1− erf ( ni∗ − ni 2σ )) . where the last equality follows from the fact that Zi − Zj is a Gaussian random variable with mean zero and variance 2σ2. We now present a precise statement of Theorem 6. Theorem 6. LetM be a randomized algorithm with (µ1, ε1)-RDP and (µ2, ε2)-RDP guarantees and suppose that there exists a likely outcome i∗ given a dataset D and a bound q̃ ≤ 1 such that q̃ ≥ Pr [M(D) 6= i∗]. Additionally suppose that λ ≤ µ1 and q̃ ≤ e(µ2−1)ε2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 . Then, for any neighboring dataset D′ of D, we have: Dλ(M(D)‖M(D′)) ≤ 1 λ− 1 log ( (1− q̃) ·A(q̃, µ2, ε2)λ−1 + q̃ ·B(q̃, µ1, ε1)λ−1 ) (2) whereA(q̃, µ2, ε2) , (1− q̃)/ ( 1− (q̃eε2) µ2−1 µ2 ) andB(q̃, µ1, ε1) , eε1/q̃ 1 µ1−1 . Proof. Before we proceed to the proof, we introduce some simplifying notation. For a randomized mechanismM and neighboring datasets D and D′, we define βM(λ;D,D ′) , Dλ(M(D)‖M(D′)) = 1 λ− 1 logEx∼M(D) [( Pr [M(D) = x] Pr [M(D′) = x] )λ−1] . As the proof involves working with the RDP bounds in the exponent, we set ζ1 , eε1(µ1−1) and ζ2 , eε2(µ2−1). Finally, we define the following shortcuts: qi , Pr [M(D) = i] and q , ∑ i 6=i∗ qi = Pr [M(D) 6= i∗] , pi , Pr [M(D′) = i] and p , ∑ i6=i∗ pi = Pr [M(D′) 6= i∗] , and note that q ≤ q̃. From the definition of Rényi differential privacy, (µ1, ε1)-RDP implies: exp (βM(µ1;D,D ′)) = (1− q)µ1 (1− p)µ1−1 + ∑ i6=i∗ qµ1i pµ1−1i 1/(µ1−1) ≤ exp(ε1) =⇒ ∑ i>1 qµ1i pµ1−1i = ∑ i>1 qi ( qi pi )µ1−1 ≤ ζ1. 
(3) Since µ1 ≥ λ, f(x) , x µ1−1 λ−1 is convex. Applying Jensen’s Inequality we have the following: ∑ i 6=i∗ qi ( qi pi )λ−1 q µ1−1 λ−1 ≤ ∑ i 6=i∗ qi ( qi pi )µ1−1 q =⇒ ∑ i6=i∗ qi ( qi pi )λ−1 ≤ q ∑ i 6=i∗ qi ( qi pi )µ1−1 q λ−1 µ1−1 (3) =⇒ ∑ i6=i∗ qi ( qi pi )λ−1 ≤ ζ1 λ−1 µ1−1 · q1− λ−1 µ1−1 . (4) Next, by the bound at order µ2, we have: exp (βM(µ2;D ′, D)) = (1− p)µ2 (1− q)µ2−1 + ∑ i 6=i∗ pµ2i qµ2−1i 1/(µ2−1) ≤ exp(ε2) =⇒ (1− p) µ2 (1− q)µ2−1 + ∑ i6=i∗ pµ2i qµ2−1i ≤ ζ2. By the data processing inequality of Rényi divergence, we have (1− p)µ2 (1− q)µ2−1 + pµ2 qµ2−1 ≤ ζ2, which implies p µ2 qµ2−1 ≤ ζ2 and thus p ≤ ( qµ2−1ζ2 ) 1 µ2 . (5) Combining (4) and (5), we can derive a bound at λ. exp (βM(λ,D,D ′)) = (1− q)λ (1− p)λ−1 + ∑ i6=i∗ qλi pλ−1i 1/(λ−1) ≤ (1− q)λ( 1− (qµ2−1ζ2) 1 µ2 )λ−1 + ζ1 λ−1µ1−1 · q1− λ−1µ1−1 1/(λ−1) . (6) Although Equation (6) is very close to the corresponding statement in the theorem’s claim, one subtlety remains. The bound (6) applies to the exact probability q = Pr [M(D) 6= i∗]. In the theorem statement, and in practice, we can only derive an upper bound q̃ on Pr [M(D) 6= i∗]. The last step of the proof requires showing that the expression in Equation (6) is monotone in the range of values of q that we care about. Lemma 9 (Monotonicity of the bound). Let the functions f1(·) and f2(·) be f1(x) , (1− x)λ( 1− (xµ2−1ζ2) 1 µ2 )λ−1 and f2(x) , ζ1 λ−1µ1−1 · x1− λ−1µ1−1 , Then f1(x) + f2(x) is increasing in [ 0,min ( 1, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2)] . Proof. Taking the derivative of f1(x), we have: f ′1(x) = −λ(1− x)λ−1(1− (xµ2−1ζ2) 1 µ2 )λ−1 (1− (xµ2−1ζ2) 1 µ2 )2λ−2 + (1− x)λ(λ− 1)(1− (xµ2−1ζ2) 1 µ2 )λ−2ζ2 1 µ2 · µ2−1µ2 · x − 1µ2 (1− (xµ2−1ζ2) 1 µ2 )2λ−2 = (1− x)λ−1 (1− (xµ2−1ζ2) 1 µ2 )λ−1 ( −λ+ (λ− 1) ( 1− 1 µ2 ) 1− x 1− (xµ2−1ζ2) 1 µ2 ( ζ2 x ) 1 µ2 ) . We intend to show that: f ′1(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 . (7) For x ∈ [ 0, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2] and y ∈ [1,∞), define g(x, y) as: g(x, y) , −λ · yλ−1 + (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 yλ. We claim that g(x, y) is increasing in y and therefore g(x, y) ≥ g(x, 1), and prove it by showing the partial derivative of g(x, y) with respect to y is non-negative. Take a derivative with respect to y as: g′y(x, y) = −λ(λ− 1)yλ−2 + λ(λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 yλ−1 = λ(λ− 1)yλ−2 ( −1 + ( 1− 1 µ2 )( ζ2 x ) 1 µ2 y ) . To see why g′y(x, y) is non-negative in the respective ranges of x and y, note that: x ≤ ζ2/ ( µ1 µ1 − 1 · µ2 µ2 − 1 )µ2 =⇒ x ≤ ζ2/ ( µ2 µ2 − 1 )µ2 =⇒ 1 ≤ ζ2 x · ( µ2 − 1 µ2 )µ2 =⇒ 1 ≤ µ2 − 1 µ2 ( ζ2 x ) 1 µ2 =⇒ 1 ≤ µ2 − 1 µ2 ( ζ2 x ) 1 µ2 y (as y ≥ 1) =⇒ 0 ≤ −1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 y =⇒ 0 ≤ g′y(x, y). (in the resp. range of x and y) Consider 1−x 1−(xµ2−1ζ2)1/µ2 . Since ζ2 ≥ 1 and x ≤ 1, we have x ≤ ζ2 and hence 1− x 1− (xµ2−1ζ2) 1 µ2 ≥ 1− x 1− (xµ2−1x) 1 µ2 = 1. Therefore we can set y = 1−x 1−(xµ2−1ζ2)1/µ2 and apply the fact that g(x, y) ≥ g(x, 1) for all y ≥ 1 to get f ′1(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 , as required by (7). Taking the derivative of f2(x), we have: f ′2(x) = ζ1 λ−1 µ1−1 · ( 1− λ− 1 µ1 − 1 ) x− λ−1 µ1−1 = ( ζ1 x ) λ−1 µ1−1 ( 1− λ− 1 µ1 − 1 ) ≥ 1− λ− 1 µ1 − 1 . Combining the two terms together, we have: f ′(x) ≥ −λ+ (λ− 1) ( 1− 1 µ2 )( ζ2 x ) 1 µ2 + 1− λ− 1 µ1 − 1 = (λ− 1) ( − µ1 µ1 − 1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 ) . For f ′(x) to be non-negative we need: − µ1 µ1 − 1 + µ2 − 1 µ2 ( ζ2 x ) 1 µ2 ≥ 0 ⇐⇒ ( µ1 µ1 − 1 · µ2 µ2 − 1 )µ2 ≤ ζ2 x . So f(x) is increasing for x ∈ [ 0, ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2] . 
This means for q ≤ q̃ ≤ ζ2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 , we have f(q) ≤ f(q̃). This completes the proof of the lemma and that of the theorem. Theorem 6 yields data-dependent Rényi differential privacy bounds for any value of µ1 and µ2 larger than λ. The following proposition simplifies this search by calculating optimal higher moments µ1 and µ2 for the GNMax mechanism with variance σ2. Proposition 10. When applying Theorem 6 and Proposition 8 for GNMax with Gaussian of variance σ2, the right-hand side of (2) is minimized at µ2 = σ · √ log(1/q̃), and µ1 = µ2 + 1. Proof. We can minimize both terms in (2) independently. To minimize the first term in (6), we minimize (q̃eε2)1−1/µ2 by considering logarithms: log { (q̃eε2) 1−1/µ2 } = log { q̃1− 1 µ2 exp ( µ2 − 1 σ2 )} = ( 1− 1 µ2 ) · log q̃ + µ2 − 1 σ2 = 1 µ2 log 1 q̃ + µ2 σ2 − 1 σ2 − log 1 q̃ , which is minimized at µ2 = σ · √ log(1/q̃). To minimize the second term in (6), we minimize eε1/q̃1/(µ1−1) as follows: log { eε1 q̃1/(µ1−1) } = log { q̃−1/(µ1−1) exp (µ1 σ2 )} = µ1 σ2 + 1 µ1 − 1 log 1 q̃ = 1 σ2 + µ1 − 1 σ2 + 1 µ1 − 1 log 1 q̃ , which is minimized at µ1 = 1 + σ · √ log(1/q̃) completing the proof. Putting this together, we apply the following steps to calculate RDP of order λ for GNMax with variance σ2 on a given dataset D. First, we compute a bound q according to Proposition 7. Then we use the smaller of two bounds: a data-dependent (Theorem 6) and a data-independent one (Proposition 8) : βσ(q) , min { 1 λ− 1 log { (1− q) ·A(q, µ2, ε2)λ−1 + q ·B(q, µ1, ε1)λ−1 } , λ/σ2 } , whereA andB are defined as in the statement of Theorem 6, the parameters µ1 and µ2 are selected according to Proposition 10, and ε1 , µ1/σ2 and ε2 , µ2/σ2 (Proposition 8). Importantly, the first expression is evaluated only when q < 1, µ1 ≥ λ, µ2 > 1, and q ≤ e(µ2−1)ε2/ ( µ1 µ1−1 · µ2 µ2−1 )µ2 . These conditions can either be checked for each application of the aggregation mechanism, or a critical value of q0 that separates the range of applicability of the data-dependent and data-independent bounds can be computed for given σ and λ. In our implementation we pursue the second approach. The following corollary offers a simple asymptotic expression of the privacy of GNMax for the case when there are large (relative to σ) gaps between the highest three vote counts. Corollary 11. If the top three vote counts are n1 > n2 > n3 and n1 − n2, n2 − n3 σ, then the mechanism GNMax with Gaussian of variance σ2 satisfies (λ, exp(−2λ/σ2)/λ)-RDP for λ = (n1 − n2)/4. Proof. Denote the noisy counts as ñi = ni + N (0, σ2). Ignoring outputs other than those with the highest and the second highest counts, we bound q = Pr [M(D) 6= 1] as Pr[ñ1 < ñ2] = Pr[N(0, 2σ2) > n1 − n2] < exp ( −(n1 − n2)2/4σ2 ) , which we use as q̃. Plugging q̃ in Proposition 10, we have µ1 − 1 = µ2 = (n1 − n2)/2, limiting the range of applicability of Theorem 6 to λ < (n1 − n2)/2. Choosing λ = (n1−n2)/4 ensuresA(q̃, µ2, ε2) ≈ 1, which allows approximating the bound (2) as q̃ ·B(q̃, µ1, ε1)λ−1/(λ− 1). The proof follows by straightforward calculation. B SMOOTH SENSITIVITY AND PUBLISHING THE PRIVACY PARAMETER The privacy guarantees obtained for the mechanisms in this paper via Theorem 6 take as input q̃, an upper bound on the probability that the aggregate mechanism returns the true plurality. This means that the resulting privacy parameters computed depend on teacher votes and hence the underlying data. 
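The sketch below makes this data dependence explicit: it computes the outcome bound q of Proposition 7 directly from the teachers' vote histogram, chooses the orders µ1 and µ2 as in Proposition 10, and returns the smaller of the data-dependent bound of Theorem 6 and the data-independent bound λ/σ² of Proposition 8. This is a minimal sketch using only the Python standard library; the function and variable names are ours and numerical edge cases are not handled carefully.

```python
import math

def gnmax_outcome_bound(votes, sigma):
    """Proposition 7: upper bound q on Pr[GNMax(votes) != plurality winner]."""
    winner = max(range(len(votes)), key=lambda i: votes[i])
    q = 0.5 * sum(math.erfc((votes[winner] - votes[i]) / (2.0 * sigma))
                  for i in range(len(votes)) if i != winner)
    return min(q, 1.0)

def gnmax_rdp(votes, sigma, lam):
    """Per-query RDP cost of order lam for GNMax with noise variance sigma**2:
    the smaller of the data-dependent bound (Theorem 6) and lam / sigma**2 (Proposition 8)."""
    data_independent = lam / sigma ** 2
    q = gnmax_outcome_bound(votes, sigma)
    if not 0.0 < q < 1.0:
        return data_independent
    # Proposition 10: orders at which the two auxiliary RDP bounds are evaluated.
    mu2 = sigma * math.sqrt(math.log(1.0 / q))
    mu1 = mu2 + 1.0
    eps1, eps2 = mu1 / sigma ** 2, mu2 / sigma ** 2   # Proposition 8 at orders mu1 and mu2
    # Applicability conditions of Theorem 6; otherwise fall back to the data-independent bound.
    if mu1 < lam or mu2 <= 1.0:
        return data_independent
    if q > math.exp((mu2 - 1.0) * eps2) / (mu1 / (mu1 - 1.0) * mu2 / (mu2 - 1.0)) ** mu2:
        return data_independent
    A = (1.0 - q) / (1.0 - (q * math.exp(eps2)) ** ((mu2 - 1.0) / mu2))
    B = math.exp(eps1) / q ** (1.0 / (mu1 - 1.0))
    data_dependent = math.log((1.0 - q) * A ** (lam - 1.0) + q * B ** (lam - 1.0)) / (lam - 1.0)
    return min(data_dependent, data_independent)
```

Per-query costs of this form add up across answered queries under Rényi DP composition before being converted into a final (ε, δ) guarantee; the remainder of this appendix describes how such a data-dependent value can then be released safely.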
To avoid potential privacy breaches from simply publishing the data-dependent parameter, we need to publish a sanitized version of the privacy loss. This is done by adding noise to the computed privacy loss estimates using the smooth sensitivity algorithm proposed by Nissim et al. (2007). This section has the following structure. First we recall the notion of smooth sensitivity and introduce an algorithm for computing the smooth sensitivity of the privacy loss function of the GNMax mechanism. In the rest of the section we prove correctness of these algorithms by stating several conditions on the mechanism, proving that these conditions are sufficient for correctness of the algorithm, and finally demonstrating that GNMax satisfies these conditions. B.1 COMPUTING SMOOTH SENSITIVITY Any dataset D defines a histogram n̄ = (n1, . . . , nm) ∈ Nm of the teachers’ votes. We have a natural notion of the distance between two histograms dist(n̄, n̄′) and a function q:Nm → [0, 1] on these histograms computing the bound according to Proposition 7. The value q(n̄) can be used as q̃ in the application of Theorem 6. Additionally we have n(i) denote the i-th highest bar in the histogram. We aim at calculating a smooth sensitivity of β (q(n̄)) whose definition we recall now. Definition 12 (Smooth Sensitivity). Given the smoothness parameter β, a β-smooth sensitivity of f(n) is defined as SSβ(n̄) , max d≥0 e−βd · max n̄′:dist(n̄,n̄′)≤d L̃S(n̄′), where L̃S(n̄) ≥ max n̄′:dist(n̄,n̄′)=1 |f(n)− f(n′)| is an upper bound on the local sensitivity. We now describe Algorithms 3–5 computing a smooth sensitivity of β (q(·)). The algorithms assume the existence of efficiently computable functions q:Nm → [0, 1], BL,BU: [0, 1] → [0, 1], and a constant q0. Informally, the functions BU and BL respectively upper and lower bound the value of q evaluated at any neighbor of n̄ given q(n̄), and [0, q0) limits the range of applicability of data-dependent analysis. The functions BL and BU are defined as follows. Their derivation appears in Section B.4. BU(q) , min { m− 1 2 erfc ( erfc-1 ( 2q m− 1 ) − 1 σ ) , 1 } , BL(q) , m− 1 2 erfc ( erfc-1 ( 2q m− 1 ) + 1 σ ) , Algorithm 3 – Local Sensitivity: use the functions BU and BL to compute (an upper bound) of the local sensitivity at a given q value by looking at the difference of β (·) evaluated on the bounds. 1: procedure L̃S(q) 2: if q1 ≤ q ≤ q0 then . q1 = BL(q0). Interpolate the middle part. 3: q ← q1 4: end if 5: return max{β (BU(q))− β (q) ,β (q)− β (BL(q))} 6: end procedure B.2 NOTATION AND CONDITIONS Notation. We find that the algorithm and the proof of its correctness are more naturally expressed if we relax the notions of a histogram and its neighbors to allow non-integer values. • We generalize histograms to be any vector with non-negative real values. This relaxation is used only in the analysis of algorithms; the actual computations are performed exclusively over integer-valued inputs. • Let n̄ = [n1, . . . , nm] ∈ Rm, ni ≥ 0 denote a histogram. Let n(i) denote the i-th bar in the descending order. • Define a “move” as increasing one bar by some value in [0, 1] and decreasing one bar by a (possibly different) value in [0, 1] subject to the resulting value be non-negative. Notice the difference between the original problem and our relaxation. In the original formulation, the histogram takes only integer values and we can only increase/decrease them by exactly 1. In contrast, we allow real values and a teacher can contribute an arbitrary amount in [0, 1] to any one class. 
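Before moving on to Algorithms 4 and 5, a minimal sketch of the neighbour bounds BU and BL and of Algorithm 3's local-sensitivity bound may help. It assumes SciPy for the inverse complementary error function, and `beta_fn` stands for the accountant β(·), for instance the capped GNMax bound from Appendix A; all names are ours.

```python
from scipy.special import erfc, erfcinv

def BU(q, m, sigma):
    """Upper bound on the q value of any neighbouring histogram."""
    return min((m - 1) / 2.0 * erfc(erfcinv(2.0 * q / (m - 1)) - 1.0 / sigma), 1.0)

def BL(q, m, sigma):
    """Lower bound on the q value of any neighbouring histogram."""
    return (m - 1) / 2.0 * erfc(erfcinv(2.0 * q / (m - 1)) + 1.0 / sigma)

def local_sensitivity(q, beta_fn, q0, m, sigma):
    """Algorithm 3: upper bound on the local sensitivity of beta(q(.)) at a given q value."""
    q1 = BL(q0, m, sigma)
    if q1 <= q <= q0:     # flat middle region: interpolate by moving q down to q1
        q = q1
    return max(beta_fn(BU(q, m, sigma)) - beta_fn(q),
               beta_fn(q) - beta_fn(BL(q, m, sigma)))
```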
Algorithm 4 – Sensitivity at a distance: given a histogram n̄, compute the sensitivity of β (·) at distance at most d using the procedure L̃S, function q(·), constants q0 and q1 = BL(q0), and careful case analysis that finds the neighbor at distance d with the maximum sensitivity. 1: procedure ATDISTANCED(n̄, d) 2: q ← q(n̄) 3: if q1 ≤ q ≤ q0 then . q is in the flat region. 4: return L̃S(q), STOP 5: end if 6: if q < q1 then . Need to increase q. 7: if n(1) − n(2) < 2d then . n(i) is the ith largest element. 8: return L̃S(q1), STOP 9: else 10: n̄′ ← SORT(n̄) + [−d, d, 0, . . . , 0] 11: q′ ← q(n̄′) 12: if q′ > q1 then 13: return L̃S(q0), STOP 14: else 15: return L̃S(q′), CONTINUE 16: end if 17: end if 18: else . Need to decrease q. 19: if ∑d i=2 n (i) ≤ d then 20: n̄′ ← [n, 0, . . . , 0] 21: q′ ← q(n̄′) 22: return L̃S(q′), STOP 23: else 24: n̄′ ← SORT(n̄) + [d, 0, . . . , 0] 25: for d′ = 1, . . . , d do 26: n′(2) ← n′(2) − 1 . The index of n′(2) may change. 27: end for 28: q′ ← q(n̄′) 29: if q′ < q0 then 30: return L̃S(q0), STOP 31: else 32: return L̃S(q′), CONTINUE 33: end if 34: end if 35: end if 36: end procedure Algorithm 5 – Smooth Sensitivity: Compute the β smooth sensitivity of β (·) via Definition 12 by looking at sensitivities at various distances and returning the maximum weighted by e−βd. 1: procedure SMOOTHSENSITIVITY(n̄, β) 2: S ← 0 3: d← 0 4: repeat 5: c,StoppingCondition← ATDISTANCED(n̄, d) 6: S ← max{S, c · e−βd} 7: d← d+ 1 8: until StoppingCondition = STOP 9: end procedure • Define the distance between two histograms n̄ = (n1, . . . , nm) and n̄′ = (n′1, . . . , n ′ m) as d(n̄, n̄′) , max ∑ i:ni>n′i dni − n′ie, ∑ i:ni<n′i dn′i − nie , which is equal to the smallest number of “moves” needed to make the two histograms identical. We use the ceiling function since a single step can increase/decrease one bar by at most 1. We say that two histograms are neighbors if their distance d is 1. Notice that analyses of Rényi differential privacy for LNMax, GNMax and the exponential mechanism are still applicable when the neighboring datasets are defined in this manner. • Given a randomized aggregatorM:Rm≥0 → [m], let q:Rm≥0 → [0, 1] be so that q(n̄) ≥ Pr[M(n̄) 6= argmax(n̄)]. When the context is clear, we use q to denote a specific value of the function, which, in particular, can be used as q̃ in applications of Theorem 6. • Let β: [0, 1]→ R be the function that maps a q value to the value of the Rényi accountant. Conditions. Throughout this section we will be referring to the list of conditions on q(·) and β (·): C1. The function q(·) is continuous in each argument ni. C2. There exist functions BU,BL: [0, 1] → [0, 1] such that for any neighbor n̄′ of n̄, we have BL(q(n̄)) ≤ q(n̄′) ≤ BU(q(n̄)), i.e., BU and BL provide upper and lower bounds on the q value of any neighbor of n̄. C3. BL(q) is increasing in q. C4. BU and BL are functional inverses of each other in part of the range, i.e., q = BL(BU(q)) for all q ∈ [0, q0], where q0 is defined below. Additionally BL(q) ≤ q ≤ BU(q) for all q ∈ [0, 1]. C5. β (·) has the following shape: there exist constants β∗ and q0 ≤ 0.5, such that β (q) nondecreasing in [0, q0] and β (q) = β∗ ≥ β (q0) for q > q0. The constant β∗ corresponds to a data-independent bound. C6. ∆β (q) , β (BU(q))− β (q) is non-decreasing in [0,BL(q0)], i.e., when BU(q) ≤ q0. C7. Recall that n(i) is the i-th largest coordinate of a histogram n̄. Then, if q(n̄) ≤ BU(q0), then q(n̄) is differentiable in all coordinates and ∀i > j ≥ 2 ∂q ∂n(j) (n̄) ≥ ∂q ∂n(i) (n̄) ≥ 0. C8. 
The function q(n̄) is invariant under addition of a constant, i.e., q(n̄) = q(n̄+ [x, . . . , x]) for all n̄ and x ≥ 0, and q(n̄) is invariant under permutation of n̄, i.e., q(n̄) = q(π(n̄)) for all permutations π on [m]. Finally, we require that if n(1) = n(2), then q(n̄) ≥ q0. We may additionally assume that q0 ≥ q([n, 0, . . . , 0]). Indeed, if this condition is not satisfied, then the data-dependent analysis is not going to be used anywhere. The most extreme histogram— [n, 0, . . . , 0]—is the most advantageous setting for applying data-dependent bounds. If we cannot use the data-dependent bound even in that case, we would be using the data-independent bound everywhere and do not need to compute smooth sensitivity anyway. Yet this condition is not automatically satisfied. For example, if m (the number of classes) is large compared to n (the number of teachers), we might have large q([n, 0, . . . , 0]). So we need to check this condition in the code before doing smooth sensitivity calculation. B.3 CORRECTNESS OF ALGORITHMS 3–5 Recall that local sensitivity of a deterministic function f is defined as max f(D)− f(D′), where D and D′ are neighbors. Proposition 13. Under conditions C2–C6, Algorithm 3 computes an upper bound on local sensitivity of β (q(n̄)). Proof. Since β (·) is non-decreasing everywhere (by C5), and for any neighbors n̄ and n̄′ it holds that BL(q(n̄)) ≤ q(n̄′) ≤ BU(q(n̄)) (by C2), we have the following |β (q(n̄))− β (q(n̄′))| ≤ max { β ( BU(q(n̄)) ) − β ( q(n̄) ) , β ( q(n̄) ) − β ( BL(q(n̄)) )} = max { ∆β ( q(n̄) ) , ∆β ( BL(q(n̄)) )} as an upper bound on the local sensitivity of β (q(·)) at input n̄. The function computed by Algorithm 3 differs from above when q(n̄) ∈ (BL(q0), q0). To complete the proof we need to argue that the local sensitivity is upper bounded by ∆β (BL(q0)) for q(n̄) in this interval. The bound follows from the following three observations. First, ∆β (q) is non-increasing in the range (BL(q0), 1], since β (BU(q)) is constant (by BU(q) ≥ BU(BL(q0)) = q0 and C5) and β (q) is non-decreasing in the range (by C5). In particular, ∆β (q) ≤ ∆β (BL(q0)) if q ≥ BL(q0). (8) Second, ∆β (BL(q)) is non-decreasing in the range [0, q0] since BL(q) is increasing (by C3 and C6). This implies that ∆β (BL(q)) ≤ ∆β (BL(q0)) if q ≤ q0. (9) By (8) and (9) applied to the intersection of the two ranges, it holds that max { ∆β ( q(n̄) ) , ∆β ( BL(q(n̄)) )} ≤ ∆β (BL(q0)) if BL(q0) ≤ q ≤ q0, as needed. We thus established that the function computed by Algorithm 3, which we call L̃S(q) from now on, is an upper bound on the local sensitivity. Formally, L̃S(q) , { ∆β (BL(q0)) if q ∈ (BL(q0), q0), max {∆β (q) ,∆β (BL(q))} otherwise. The following proposition characterizes the growth of L̃S(q). Proposition 14. Assuming conditions C2–C6, the function L̃S(q) is non-decreasing in [0,BL(q0)], constant in [BL(q0), q0], and non-increasing in [q0, 1]. Proof. Consider separately three intervals. • By construction, L̃S is constant in [BL(q0), q0]. • Since both functions ∆β (·) and ∆β (BL(·)) are each non-decreasing in [0,BL(q0)), so is their max. • In the interval (q0, 1], β (q) is constant. Hence ∆β (q) = 0 and ∆β (BL(q)) = β (q) − β (BL(q)) is non-decreasing. Their maximum value ∆β (BL(q)) is non-decreasing. The claim follows. We next prove correctness of Algorithm 4, which computes the maximal sensitivity of β at a fixed distance. The proof relies on the following notion of a partial order between histograms. Definition
1. What is the focus of the paper regarding private learning? 2. What are the novel aspects introduced by the paper in the PATE framework for differential privacy? 3. How does the reviewer assess the practicality and modularity of the proposed framework? 4. What is the concern raised by the reviewer regarding the data-dependent privacy guarantee? 5. How does the reviewer suggest resolving the issue of data-dependent privacy guarantees? 6. What other works are similar to the proposed algorithm, and how do they compare regarding filtering uninformative queries?
Review
Review This paper considers the problem of private learning and uses the PATE framework to achieve differential privacy. The dataset is partitioned and multiple learning algorithms produce so-called teacher classifiers. The labels produced by the teachers are aggregated in a differentially private manner and the aggregated labels are then used to train a student classifier, which forms the final output. The novelty of this work is a refined aggregation process, which is improved in three ways: a) Gaussian instead of Laplace noise is used to achieve differential privacy. b) Queries to the aggregator are "filtered" so that the limited privacy budget is only expended on queries where the teachers are confident and the student is uncertain or wrong. c) A data-dependent privacy analysis is used to attain sharper bounds on the privacy loss with each query. I think this is a nice modular framework form private learning, with significant refinements relative to previous work that make the algorithm more practical. On this basis, I think the paper should be accepted. However, I think some clarification is needed with regard to item c above: Theorem 2 gives a data-dependent privacy guarantee. That is, if there is one label backed by a clear majority of teachers, then the privacy loss (as measured by Renyi divergence) is low. This data-dependent privacy guarantee is likely to be much tighter than the data-independent guarantee. However, since the privacy guarantee now depends on the data, it is itself sensitive information. How is this issue resolved? If the final privacy guarantee is data-dependent, then this is very different to the way differential privacy is usually applied. This would resemble the "privacy odometer" setting of Rogers-Roth-Ullman-Vadhan [ https://arxiv.org/abs/1605.08294 ]. Another way to resolve this would be to have an output-dependent privacy guarantee. That is, the privacy guarantee would depend only on public information, rather than the private data. The widely-used "sparse vector" technique [ http://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf#page=59 ] does this. In any case, this is an important issue that needs to be clarified, as it is not clear to me how this is resolved. The algorithm in this work is similar to the so-called median mechanism [ https://www.cis.upenn.edu/~aaroth/Papers/onlineprivacy.pdf ] and private multiplicative weights [ http://mrtz.org/papers/HR10mult.pdf ]. These works also involve a "student" being trained using sensitive data with queries being answered in a differentially private manner. And, in particular, these works also filter out uninformative queries using the sparse vector technique. It would be helpful to add a comparison.
ICLR
Title AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples Abstract With the rapid development of adversarial machine learning, numerous adversarial attack methods have been proposed. Typical attacks are based on a search in the neighborhood of input image to generate a perturbed adversarial example. Since 2017, generative models are adopted for adversarial attacks, and most of them focus on generating adversarial perturbations from input noise or input image. Thus the output is restricted by input for these works. A recent work targets “unrestricted adversarial example” using generative model but their method is based on a search in the neighborhood of input noise, so actually their output is still constrained by input. In this work, we propose AT-GAN (Adversarial Transfer on Generative Adversarial Net) to train an adversarial generative model that can directly produce adversarial examples. Different from previous works, we aim to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries. AT-GAN achieves this goal by first learning a generative model for real data, followed by transfer learning to obtain the desired generative model. Once trained and transferred, AT-GAN could generate adversarial examples directly and quickly for any input noise, denoted as non-constrained adversarial examples. Extensive experiments and visualizations show that AT-GAN can efficiently generate diverse adversarial examples that are realistic to human perception, and yields higher attack success rates against adversarially trained models. 1 INTRODUCTION In recent years, Deep Neural Networks (DNNs) have been found vulnerable to adversarial examples (Szegedy et al., 2014), which are well-crafted samples with tiny perturbations imperceptible to humans but can fool the learning models. Despite the great success of the deep learning empowered applications, many of them are safety-critical, for example under the scenario of self-driving cars (Eykholt et al., 2018; Cao et al., 2019), raising serious concerns in academy and industry. Numerous works of adversarial examples have been developed on adversarial attacks (Goodfellow et al., 2015; Carlini & Wagner, 2017; Madry et al., 2018), adversarial defenses (Goodfellow et al., 2015; Kurakin et al., 2017; Song et al., 2019) and exploring the property of adversarial examples (He et al., 2018; Shamir et al., 2019). For adversarial attacks, most studies focus on the perturbation-based adversarial examples constrained by input images, which is also the generally accepted conception of adversarial examples. Generative models are also adopted recently to generate adversarial perturbations from an input noise (Reddy Mopuri et al., 2018; Omid et al., 2018) or from a given image (Xiao et al., 2018; Bai et al., 2020), and such perturbations are added to the original image to craft adversarial examples. Song et al. (2018) propose to search a neighborhood noise around the input noise of a Generative Adversarial Net (GAN) (Goodfellow et al., 2014) such that the output is an adversarial example, which they denoted as unrestricted adversarial example as there is no original image in their method. However, their output is still constrained by the input noise, and the search is time-consuming. In this work, we propose an adversarial generative model called AT-GAN (Adversarial Transfer on Generative Adversarial Net), which aims to learn the distribution of adversarial examples. 
Unlike previous works that constrain the adversaries in the neighborhood of input image or input noise, including the prominent work of Song et al. (2018) that searches over the neighborhood of the input noise of a pre-trained GAN in order to find a noise whose output image is misclassified by the target classifier, AT-GAN is an adversarial generative model that could produce semantically meaningful adversarial examples directly from any input noise, and we call such examples the non-constrained adversarial examples. Specifically, we first develop a normal GAN to learn the distribution of benign data so that it can produce plausible images that the classifier and a human oracle will classify in the same way. Then we transfer the pre-trained GAN into an adversarial GAN called AT-GAN that can fool the target classifier while being still well recognized by the human oracle. AT-GAN is a conditional GAN that has learned to estimate the distribution of adversarial examples for the target classifier, so AT-GAN can directly generate adversarial examples from any random noise, leading to high diversity and efficiency. We implement AT-GAN by adopting AC-GAN (Odena et al., 2017) and WGAN-GP (Gulrajani et al., 2017) in the pre-training stage, then do transfer learning for the adversary generation. Here we develop AT-GAN on three benchmark datasets, namely MNIST, Fashion-MNIST and CelebA, and apply typical defense methods to compare AT-GAN with existing search-based attacks. Empirical results show that the non-constrained adversarial examples generated by AT-GAN yield higher attack success rates, and state-of-the-art adversarially trained models exhibit little robustness against ATGAN, indicating the high diversity of our adversaries. In addition, AT-GAN, as a generation-based adversarial attack, is more efficient than the search-based adversarial attacks. Note that all conditional GANs that can craft realistic examples could be used for the implementation of AT-GAN. For another demonstration, we adopt StyleGAN2-ada (Karras et al., 2020a) and develop AT-GAN on CIFAR-10 benchmark dataset using wide ResNet w32-10 (Zagoruyko & Komodakis, 2016) as the target classifier. Empirical results show that AT-GAN can produce plausible adversarial images, and yield higher attack success rates on the adversarially trained models. 2 PRELIMINARIES In this section, we provide definitions on several types of adversarial examples and adversarial attacks, and give a brief overview of adversarial attacks using GAN. Other related works on typical adversarial attacks and defenses (Goodfellow et al., 2015; Madry et al., 2018; Tramèr et al., 2018), as well as some typical GANs (Goodfellow et al., 2014; Radford et al., 2016; Odena et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017) are introduced in Appendix A. 2.1 DEFINITIONS ON ADVERSARIES Let X be the set of all digital images under consideration for a learning task, Y ∈ R be the output label space and pz ∈ Rm be an arbitrary probability distribution (e.g. Gaussian distribution) where m is the dimension of pz . A deep learning classifier f : X → Y takes an image x ∈ X and predicts its label f(x). Suppose px and padv are the distributions of benign images and adversarial examples, respectively. Assume we have an oracle classifier o : X → Y , which could always predict the correct label for any image x ∈ X , we define several types of adversarial examples as follows. 
For perturbation-based adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015; MoosaviDezfooli et al., 2016), tiny perturbations are added to the input images, which are imperceptible to humans but can cause the target classifier to make wrong predictions. Definition 1. Perturbation-based Adversarial Examples. Given a subset (trainset or testset) images T ⊂ X and a small constant > 0, the perturbation-based adversarial examples can be defined as: Ap = {xadv ∈ X |∃x ∈ T , ‖x− xadv‖p < ∧ f(xadv) 6= o(xadv) = f(x) = o(x)}. Song et al. (2018) define a new type of adversarial examples called unrestricted adversarial examples, which is not related to the subset (trainset or testset) images, by adding adversarial perturbation to the input noise of a mapping, such as GAN, so that the output of the perturbed noise is an adversary to the target classifier. Definition 2. Unrestricted Adversarial Examples. Given a mappingG from z ∼ pz toG(z, y) ∼ pθ, where pθ is an approximated distribution of px, and a small constant > 0, the unrestricted adversarial examples can be defined as: Au = {G(z∗, ys) ∈ X |∃z ∼ pz, z∗ ∼ pz, ‖z − z∗‖p < ∧ f(G(z∗, ys)) 6= o(G(z∗, ys)) = f(G(z, ys)) = o(G(z, ys)) = ys} where ys is the source label. In this work, we train a conditional GAN to learn the distribution of adversarial examples and output the corresponding adversary directly from any input noise. To clarify the difference with Song et al. (2018), we call our generated adversaries the non-constrained adversarial examples. Definition 3. Non-constrained Adversarial Examples. If there is a mapping G∗ from z ∼ pz to G∗(z, y) ∼ qθ, where qθ is an approximated distribution of padv, the non-constrained adversarial examples can be defined as An = {G∗(z, ys) ∈ X |f(G∗(z, ys)) 6= o(G∗(z, ys)) = ys} where ys is the source label. Here we need to find a mapping G∗, e.g. a generative model, such that for z ∼ pz , G∗(z, y) is an image in X and the output distribution is an approximated distribution of padv , for example using the Kullback-Leibler divergence (Kullback & Leibler, 1951), KL(qθ||padv) < for a small constant . In summary, perturbation-based adversarial examples are based on perturbing an image x ∈ X , and unrestricted adversarial examples (Song et al., 2018) perturbs an input noise z ∼ pz for an existing mapping G. Most perturbation-based adversarial attacks and Song et al. (2018) fall into the search-based adversarial attack. Definition 4. Search-based Adversarial Attack. Given an input vector v ∈ V (either benign image x or random vector z), the search-based adversarial attack searches a vector v′ : ‖v− v′‖p < where v′ leads to an adversarial example for the target classifier. In contrast, non-constrained adversarial examples are more generalized so that we need to learn a mapping G∗ such that for any input noise sampled from distribution pz , the output is an adversarial image. Such a mapping to be learned is called an adversarial generative model, and our method falls into the generation-based adversarial attack. Definition 5. Generation-based Adversarial Attack. Given an input vector v ∈ V (either benign image x or random vector z), the generation-based adversarial attack generates adversarial perturbation or adversarial example directly from v, usually adopting generative models. 2.2 GENERATIVE MODELS FOR ADVERSARIAL ATTACK Generative models have been adopted for adversarial attack in recent works (Baluja & Fischer, 2017). Reddy Mopuri et al. 
(2018) propose a Network for Adversary Generation (NAG) that models the distribution of adversarial perturbations for a target classifier so that their NAG can craft adversarial perturbations from any given random noise, which will be added to the natural image to fool the target classifier. Omid et al. (2018) propose to generate universal or image-dependent adversarial perturbations using U-Net (Ronneberger et al., 2015) or ResNet Generator (He et al., 2016) from any given random noise. Xiao et al. (2018) propose to train AdvGAN that takes an original image as the input and generate adversarial perturbation for the input to craft an adversarial example. Bai et al. (2020) further propose AI-GAN that adopts projected gradient descent (PGD) (Madry et al., 2018) in the training stage to train a GAN to generate target adversarial perturbation for the input image and target class. The above attack methods all fall into the generation-based adversarial attack, and their crafted examples fall into the perturbation-based adversarial examples. Another recent work called PS-GAN (Liu et al., 2019) pre-processes an input seed patch (a small image) to adversarial patch that will be added to a natural image to craft an adversarial example, and an attention model is used to locate the attack area on the natural image. Different from the above methods that generate adversarial perturbations or patches, Song et al. (2018) propose to search a random noise z∗ around the input noise z of AC-GAN (Odena et al., 2017) such that the corresponding output of AC-GAN is an adversarial example for the target classifier. Their method falls into the search-based adversarial attack, and their crafted examples fall into the unrestricted adversarial examples as there is no original image in their method. AT-GAN falls into the generation-based adversarial attack, and the crafted examples fall into the non-constrained adversarial examples. To clearly distinguish our work, we highlight the differences with most related works as follows: NAG, AdvGAN and AI-GAN vs. AT-GAN. NAG (Reddy Mopuri et al., 2018), AdvGAN (Xiao et al., 2018) and AI-GAN (Bai et al., 2020) focus on crafting adversarial perturbations by GANs. NAG takes random noise as input and crafts image-agnostic adversarial perturbation. AdvGAN and AI-GAN both use natural images as inputs, and generate the corresponding adversarial perturbations for the input image. AI-GAN uses adversarial examples generated by PGD for the training. In contrast, AT-GAN does not use any natural image as the input, and generates adversarial examples directly from any random noise. Further, compared with AI-GAN, we do not use any adversarial examples for the training. Song’s vs. AT-GAN. Song’s method (Song et al., 2018) searches over the neighborhood of the input noise for the pre-trained AC-GAN in order to find a noise whose output image is misclassified by the target classifier. They define such adversaries as the unrestricted adversarial examples, however, their adversaries are still constrained by the original input noise. Their method is essentially based on search, while AT-GAN is trained as an adversarial generative model, and our output is not constrained by any neighborhood. 3 AT-GAN: AN ADVERSARIAL GENERATIVE MODEL Here we first introduce the estimation on the distribution of adversarial examples, then propose the AT-GAN framework, a generation-based adversarial attack for crafting non-constrained adversarial examples. 
Further analysis is provided that AT-GAN could learn the adversary distribution. 3.1 ESTIMATING THE ADVERSARIAL DISTRIBUTION In order to generate non-constrained adversarial examples, we need to estimate the distribution of adversarial examples padv(xadv|ytrue) where ytrue is the true label. Given the parameterized estimated distribution of adversarial examples qθ(x|ytrue), we can define the estimation problem as: qθ∗(xadv|ytrue) = arg min θ∈Ω KL(qθ(xadv|ytrue)‖padv(xadv|ytrue)), (1) where θ indicates trainable parameters and Ω is the parameter space. It is hard to calculate equation 1 directly as padv(xadv|ytrue) is unknown. Inspired by the perturbationbased adversarial examples, as shown in Figure 1, we postulate that for each adversarial example xadv , there exists some benign examples x where ‖x−xadv‖p < . In other words, padv(xadv|ytrue) is close to p(x|ytrue) to some extent and we can obtain p(x|ytrue) by Bayes’ theorem, p(x|ytrue) = p(ytrue|x)·p(x) p(ytrue) , where p(ytrue|x), p(x) and p(ytrue) can be obtained directly from the trainset. Thus, we can approximately solve equation 1 in two stages: 1) Fit the distribution of benign data pθ. 2) Transfer pθ to estimate the distribution of adversarial examples qθ. Specifically, we propose an adversarial generative model called AT-GAN to learn the distribution of adversarial examples. The overall architecture of AT-GAN is illustrated in Figure 2. Corresponding to the above two stages, we implement AT-GAN by first training a GAN model called AC-WGAN_GP, which combines AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017) to get a generator Goriginal, to learn pθ (See Appendix B), then transfering Goriginal to attack the target classifier f for the learning of qθ. We adopt AC-GAN and WGAN-GP for the AT-GAN implementation as they could build a powerful generative model on three evaluated datasets, and Song et al. (2018) also utilize the same combination. But AT-GAN is not limited to the above GANs, and we also implement AT-GAN using StyleGAN2-ada (Karras et al., 2020a) on a different dataset. 3.2 TRANSFERRING THE GENERATOR FOR ATTACK After the original generator Goriginal is trained, we transfer the generator Goriginal to learn the distribution of adversarial examples in order to attack the target model. As illustrated in Figure 2 (b), there are three neural networks, including the original generator Goriginal, the attack generator Gattack to be transferred that is initialized by the weights of Goriginal, and the classifier f to be attacked. The goal of the second stage can be described as: G∗attack = arg min Gattack ||Goriginal(z, ys)−Gattack(z, ys)||p s. t. f(G(z, ys)) = yt 6= ys, (2) where yt denotes the target label, ‖ · ‖p denotes the `p norm and we focus on p = 2 in this work. To optimize equation 2, we construct the loss function by L1 and L2, where L1 aims to assure that f yields the target label yt that is fixed for target attack for each category: L1 = Ez∼pz [H(f(Gattack(z, ys)), yt)]. (3) Here H(·, ·) denotes the cross entropy between the two terms and ys is sampled from Y . L2 aims to assure that the adversarial generator Gattack generates realistic examples: L2 = Ez∼pz [||Goriginal(z, ys) + ρ−Gattack(z, ys)||p]. (4) Here ρ is a small uniform random noise constrained by both l0 and l∞ norm. We add ρ to constrain Gattack(z, ys) to be in the neighborhood of Goriginal(z, ys) rather than be exactly the same as Goriginal(z, ys). 
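For concreteness, a PyTorch-style sketch of the two losses in equation 3 and equation 4 is given below. The generators and the target classifier are treated as callables, ρ is sampled here only with an l∞-style bound (the additional l0 constraint is omitted), and all names are ours rather than the original implementation's.

```python
import torch
import torch.nn.functional as F

def transfer_losses(G_attack, G_original, classifier, z, y_src, y_tgt, rho_eps=0.1, p=2):
    """Compute the attack loss L1 (eq. 3) and the fidelity loss L2 (eq. 4) for one batch."""
    x_adv = G_attack(z, y_src)                                 # candidate adversarial images
    with torch.no_grad():
        x_ref = G_original(z, y_src)                           # frozen original generator output
        rho = (torch.rand_like(x_ref) * 2.0 - 1.0) * rho_eps   # small uniform noise rho
    # L1: push the target classifier towards the target label y_tgt.
    L1 = F.cross_entropy(classifier(x_adv), y_tgt)
    # L2: keep x_adv close, in Lp norm, to a noisy copy of the original generator's output.
    L2 = torch.norm((x_ref + rho - x_adv).flatten(1), p=p, dim=1).mean()
    return L1, L2
```

In a transfer step, the two terms would be combined with the weights α and β as in the objective below, with only Gattack updated while Goriginal and the target classifier f stay fixed.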
The objective function for transferring Goriginal to Gattack can be formulated as L = 〈αL1, βL2〉, where α and β are hyper-parameters to control the training process. Note that in the case that α = 1 and β →∞, the objective function is similar to that of the perturbation-based attacks (Goodfellow et al., 2015; Tramèr et al., 2018; Madry et al., 2018). For the untargeted attack, we can replace yt in La with the maximum confidence of prediction label y except for ys, maxy 6=ys f(y|Gattack(z, ys)). 3.3 THEORETICAL ANALYSIS ON AT-GAN This subsection provides theoretical analysis on why AT-GAN can generate as realistic and diverse non-constrained adversarial examples as real data. We will prove that under ideal condition, AT-GAN can estimate the distribution of adversarial examples, which is close to that of real data. Suppose pdata is the distribution of real data, pg and pa are the distribution learned by the generator of AC-WGAN_GP and AT-GAN respectively. For the optimization of equation 4, L2 aims to constrain the image generated by Gattack in the -neighborhood of Goriginal. We prove that under the ideal condition that L2 guaranteesGattack(z, ys) to be close enough toGoriginal(z, ys) for any input noise z, the distribution of AT-GAN almost coincides the distribution of AC-WGAN_GP. Formally, we state our result for the two distributions as follows. Theorem 1. Suppose maxz,y L2 < , we have KL(pa‖pg)→ 0 when → 0. The proof of Theorem 1 is in Appendix C. Samangouei et al. (2018) prove that the global optimum of WGAN is pg = pdata and we show that the optimum of AC-WGAN_GP has the same property. We formalize the property as follows. Theorem 2. The global minimum of the virtual training of AC-WGAN_GP is achieved if and only if pg = pdata. The proof of Theorem 2 is in Appendix C. According to Theorem 1 and 2, under the ideal condition, we conclude pa ≈ pg = pdata, which indicates that the distribution of non-constrained adversarial examples learned by AT-GAN is very close to that of real data as discussed in Section 3.1, so that the non-constrained adversarial instances are as realistic and diverse as the real data. 4 EXPERIMENTS In this section, we provide two implementations of AT-GAN to validate the effectiveness and efficiency of the proposed approach. Empirical experiments demonstrate that AT-GAN yields higher attack success rates against adversarially trained models with higher efficiency. Besides, AT-GAN can learn a distribution of adversarial examples which is close to the real data distribution, and generate realistic and diverse adversarial examples. 4.1 EXPERIMENTAL SETUP Datasets. We consider four standard datasets, namely MNIST (LeCun et al., 1989), Fashion-MNIST (Xiao et al., 2017), CelebA (Liu et al., 2015) on the AT-GAN implementation using AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), and CIFAR-10 dataset (Krizhevsky et al., 2009) on the AT-GAN implementation of StyleGAN2-ada (StyleGAN2 with adaptive discriminator augmentation) (Karras et al., 2020a). MNIST is a dataset of hand written digits from 0 to 9. FashionMNIST is similar to MNIST with 10 categories of fashion clothes. CelebA contains more than 200, 000 celebrity faces. We group them according to female/male and focus on gender classification as in Song et al. (2018). CIFAR-10 consists of 32× 32 color images in 10 classes, with 6, 000 images per class. For all datasets, we normalize the pixel values into range [0, 1]. Baselines. 
We compare AT-GAN with the search-based attack methods, including Song’s (Song et al., 2018) for unrestricted adversarial examples, as well as FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018) and R+FGSM (Tramèr et al., 2018) for perturbation-based adversarial examples. Note that although the perturbation-based results are not directly comparable to ours as they are limited to small perturbations on real images, they can provide a good sense on the model robustness. Models. For MNIST and Fashion-MNIST, we adopt four models used in Tramèr et al. (2018), denoted as Model A to D. For CelebA, we consider three models, i.e. CNN, VGG16 (Simonyan & Zisserman, 2015) and ResNet (He et al., 2016). Details of Model A to D and CNN are described in Table 1. The ResNet is same as in Song et al. (2018). For CIFAR-10, we adopt the wide ResNet w32-10 (Zagoruyko & Komodakis, 2016). Details about the architectures of AT-GAN are provided in Appendix D. Evaluation Setup. We consider normal training and existing advanced defenses, namely adversarial training (Goodfellow et al., 2015), ensemble adversarial training (Tramèr et al., 2018) and iterative adversarial training (Madry et al., 2018). All experiments are conducted on a single Titan X GPU and the hyper-parameters used for attacks are described in Appendix D. 4.2 EVALUATION RESULTS For evaluation, we report the comparisons on attack success rate, attack efficiency and visualize some adversarial examples for AT-GAN and the baselines. More evaluation results on the transferability, ablation study, human evaluation, and the attack results on CIFAR-10, are provided in Appendix D. 4.2.1 COMPARISON ON ATTACK SUCCESS RATE To validate the attack effectiveness, we compare AT-GAN with the baselines under white-box setting. Since Athalye et al. (2018) show that the currently most effective defense method is adversarial training, we consider adversarially trained models as the defense models. The attack success rates are reported in Table 2. On MNIST, AT-GAN achieves the highest Attack Success Rate (ASR) against the baselines on all defense models. As for normal training, AT-GAN achieves the highest ASR on Model D, and the second highest ASR of over 98% on the other models. On Fashion-MNIST, AT-GAN achieves the highest ASR on average. On CelebA, AT-GAN achieves the highest ASR on almost all the models, with two exceptions under normal training but the results of AT-GAN are close to the highest. In general, AT-GAN achieves the highest attack performance above 90% on all the defense models. As AT-GAN aims to estimate the distribution of adversarial examples, adversarial training with some specific attacks has little robustness against AT-GAN, raising a new security issue for the development of more generalized adversarial training models. 4.2.2 COMPARISON ON ATTACK EFFICIENCY There are many scenarios where one needs a large amount of adversarial examples, such as adversarial training or exploring the property of adversarial examples. Thus, the efficiency of generating adversarial examples is very important, but such metric is ignored in most existing works. As an adversarial generative model, once trained, AT-GAN can generate adversarial examples very quickly. Here we evaluate the efficiency of each attack method for Model A on MNIST. The average time of generating/searching 1000 adversarial examples is summarized in Table 3. 
Among the five attack methods, AT-GAN is the fastest, as it crafts adversarial examples without querying the target classifier or calculating gradients. Note that Song's method needs much longer time than the others, as it requires multiple searches and queries to generate one adversarial example. Transferring the generator of AT-GAN takes about 8 minutes. Here we focus only on the efficiency of generating adversarial examples after AT-GAN has been transferred, i.e., after we have already found the generator G∗, since in that case we can generate as many adversarial examples as we need.

4.2.3 VISUALIZATION ON ADVERSARIAL EXAMPLES

Since the goal of adversarial examples is to fool target neural networks rather than the human oracle, in Figure 3 we illustrate some adversarial examples generated by different attacks for Model A on MNIST and Fashion-MNIST, and for the CNN on CelebA. On MNIST, AT-GAN generates slightly more realistic images than Song's method, e.g., "0" and "3". On Fashion-MNIST and CelebA, some adversarial examples generated by Song's method are not as realistic as those of AT-GAN to human perception, for example "t-shirt/top (0)", "sandal (5)" and some facial details. Note that Song's method tends to distort the foreground, which makes the images on MNIST cleaner but leaves some of them unrealistic, while AT-GAN tends to distort the background. As for the perturbation-based attacks, their adversarial examples are not clear enough, especially on MNIST and Fashion-MNIST, due to the adversarial perturbations. There are also some unnatural samples generated by AT-GAN due to the limitations of GANs, and we hope better generative models can address this issue. For the target attack, please see more examples crafted by AT-GAN in Appendix D. In general, AT-GAN can generate realistic and diverse adversarial examples, as equation 2 forces the generated non-constrained adversarial examples to be close to the benign examples generated by the original generator.

4.3 VISUALIZATION ON ADVERSARIAL DISTRIBUTION

As discussed in Section 3.3, AT-GAN is expected to learn a distribution of adversarial examples close to the distribution of real image data. To verify this empirically, we randomly choose 5,000 benign images and 5,000 adversarial examples generated by the different attack methods, and merge these images according to their real label for MNIST and Fashion-MNIST. Then we apply t-SNE (Maaten & Hinton, 2008) to these images to illustrate the distributions in two dimensions. t-SNE models each high-dimensional object in such a way that similar objects are modeled by nearby points and dissimilar objects by distant points with high probability. Consequently, if the adversarial examples follow a distribution different from the benign data, t-SNE cannot handle them well and points from different categories will overlap with each other after the dimension reduction, i.e., the visualization will be chaotic. The results are illustrated in Figure 4. For AT-GAN, the different categories are separated as in the test set, while those of the other methods are mixed with each other, especially on MNIST (top). This indicates that the distribution AT-GAN learned is indeed very close to the distribution of real data.
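The visualization procedure described above can be reproduced with a short script along the following lines; the array shapes and plotting details are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_compare(benign_images, benign_labels, adv_images, adv_labels, out_path="tsne.png"):
    """Project benign and adversarial images (numpy arrays) into 2D with t-SNE, colored by class."""
    x = np.concatenate([benign_images.reshape(len(benign_images), -1),
                        adv_images.reshape(len(adv_images), -1)], axis=0)
    y = np.concatenate([benign_labels, adv_labels], axis=0)
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(x)
    plt.figure(figsize=(6, 6))
    plt.scatter(emb[:, 0], emb[:, 1], c=y, s=2, cmap="tab10")  # well-separated clusters per class
    plt.savefig(out_path, dpi=200)
```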
To further validate that AT-GAN learns a distribution different from that of the original GAN, rather than one obtained by simply adding a constant universal perturbation vector, we illustrate in Appendix E some instances generated by the original generator and by AT-GAN for the same input. We find that for different inputs, the original generator outputs different images, and the difference between the instances generated by the original generator and AT-GAN also varies, indicating that AT-GAN indeed learns a distribution different from that of the original GAN.

5 CONCLUSION

In this work, we propose a generation-based adversarial attack method, called AT-GAN (Adversarial Transfer on Generative Adversarial Net), that aims to learn the distribution of adversarial examples for the target classifier. The generated adversaries are "non-constrained" as we perform no search at all in the neighborhood of the input, and once trained, AT-GAN can output adversarial examples directly for any input noise drawn from an arbitrary distribution (e.g., a Gaussian distribution). Extensive experiments and visualizations show that AT-GAN achieves the highest attack success rates against adversarially trained models and can generate diverse and realistic adversarial examples efficiently. Our work also suggests that adversarial training, a popular defense method based on perturbation-based adversarial examples, cannot guarantee robustness against non-constrained adversarial examples. A possible reason is that AT-GAN learns a more complete version of the adversarial example distribution, which is much more diverse than that of the perturbation-based methods.

Note that any conditional GAN that crafts realistic examples could be used for the implementation of AT-GAN. In this work, we provide two implementations on four datasets. In future work we plan to try advanced GANs for generating high-resolution images. Our method also suggests a new way of adversarial attack by designing an adversarial generative model directly. There are several other interesting questions related to our work that can be explored in the future. For instance, what is the distribution of adversarial examples really like? Is it a continuous or smooth manifold? How closely can we learn such a distribution through a GAN? We hope our work can inspire more research in this direction.

APPENDIX

In the appendix, we provide additional related work on gradient-based adversarial attack methods, adversarial training methods and typical generative adversarial nets. Then we describe how to obtain the original generator and provide the theoretical analysis, as well as experimental details and additional results. In the end, we visualize examples generated by the original GAN and by AT-GAN.

A ADDITIONAL RELATED WORK

A.1 GRADIENT-BASED ATTACKS

Numerous adversarial attacks have been proposed in recent years (Carlini & Wagner, 2017; Liu et al., 2017; Bhagoji et al., 2017; Li et al., 2019). In this part, we introduce three typical adversarial attack methods. Here the components of all adversarial examples are clipped to [0, 1].

Fast Gradient Sign Method (FGSM). FGSM (Goodfellow et al., 2015) adds a perturbation in the gradient direction of the training loss J on the input x to generate adversarial examples:
x_adv = x + ε · sign(∇_x J(θ, x, y_true)),
where y_true is the true label of a sample x, θ is the model parameter and ε specifies the ℓ∞ distortion between x and x_adv.

Projected Gradient Descent (PGD). The PGD adversary (Madry et al., 2018) is a multi-step variant of FGSM, which applies FGSM for k iterations with step size α:
x_adv_{t+1} = clip(x_adv_t + α · sign(∇_x J(θ, x_adv_t, y_true)), x − ε, x + ε),  x_adv_0 = x,  x_adv = x_adv_k.
Here clip(x′, p, q) forces its input x′ to reside in the range [p, q].
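For reference, a compact PyTorch sketch of FGSM and PGD as summarized above is given below; the model and loss interfaces are generic assumptions, and the projection step follows the common formulation that keeps x_adv within the ε-ball around the original input x.

```python
import torch

def fgsm(model, loss_fn, x, y_true, eps):
    """FGSM: one signed-gradient step of size eps, clipped to the valid pixel range [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y_true)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, loss_fn, x, y_true, eps, alpha, k):
    """PGD: k FGSM-style steps of size alpha, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(k):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y_true)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project to the eps-ball
    return x_adv.detach()
```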
Rand FGSM (R+FGSM). R+FGSM (Tramèr et al., 2018) first applies a small random perturbation to the benign image with a parameter α (α < ε), and then uses FGSM to generate an adversarial example based on the perturbed image:
x_adv = x′ + (ε − α) · sign(∇_{x′} J(θ, x′, y_true)), where x′ = x + α · sign(N(0, I)).

A.2 ADVERSARIAL TRAINING

There are many defense strategies, such as detecting adversarial perturbations (Metzen et al., 2017), obfuscating gradients (Buckman et al., 2018; Guo et al., 2018) and eliminating perturbations (Shen et al., 2017; Liao et al., 2018), among which adversarial training is the most effective (Athalye et al., 2018). We list several adversarial training methods as follows.

Adversarial training. Goodfellow et al. (2015) first introduce adversarial training, where the standard loss function Jf of a neural network f is modified as:
J̃(θ, x, y_true) = α Jf(θ, x, y_true) + (1 − α) Jf(θ, x_adv, y_true).
Here y_true is the true label of a sample x and θ is the model's parameter. The modified objective makes the neural network more robust by penalizing it to account for adversarial samples. During training, the adversarial samples are computed with respect to the current state of the network. Taking FGSM as an example, the loss function can be written as:
J̃(θ, x, y_true) = α Jf(θ, x, y_true) + (1 − α) Jf(θ, x + ε · sign(∇_x J(θ, x, y_true)), y_true).

Ensemble adversarial training. Tramèr et al. (2018) propose an ensemble adversarial training method, in which the DNN is trained with adversarial examples transferred from a number of fixed pre-trained models.

Iterative adversarial training. Madry et al. (2018) propose to train a DNN with adversarial examples generated by iterative methods such as PGD.

A.3 GENERATIVE ADVERSARIAL NET

A Generative Adversarial Net (GAN) (Goodfellow et al., 2014) consists of two neural networks, G and D, trained in opposition to each other. The generator G is optimized to estimate the data distribution, and the discriminator D aims to distinguish fake samples from G and real samples from the training data. The objective of D and G can be formalized as a min-max value function V(G, D):
min_G max_D V(G, D) = E_{x∼px}[log D(x)] + E_{z∼pz}[log(1 − D(G(z)))].
The Deep Convolutional Generative Adversarial Net (DCGAN) (Radford et al., 2016) is the convolutional version of GAN, which implements GAN with convolutional networks and stabilizes the training process. The Auxiliary Classifier GAN (AC-GAN) (Odena et al., 2017) is another variant that extends GAN with conditions through an extra classifier C. The objective function of AC-GAN can be formalized as follows:
min_G max_D min_C V(G, D, C) = E_{x∼px}[log D(x)] + E_{z∼pz}[log(1 − D(G(z, ys)))] + E_{x∼px}[log(1 − C(x, ys))] + E_{z∼pz}[log(1 − C(G(z, ys), ys))].
To make GANs more trainable in practice, Arjovsky et al. (2017) propose the Wasserstein GAN (WGAN), which uses the Wasserstein distance so that the loss function has more desirable properties. Gulrajani et al. (2017) introduce WGAN with gradient penalty (WGAN_GP), which outperforms WGAN in practice. Its objective function is formulated as:
min_G max_D V(D, G) = E_{x∼px}[D(x)] − E_{z∼pz}[D(G(z))] − λ E_{x̂∼px̂}[(‖∇_{x̂} D(x̂)‖_2 − 1)^2],
where x̂ ∼ px̂ is sampled uniformly along straight lines between pairs of points drawn from the data distribution px and the generator distribution pg.
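As an illustration of the gradient-penalty term in the WGAN_GP objective, the following is a minimal PyTorch sketch of how it is typically computed for a batch of image tensors; the discriminator interface and the assumption of 4D (batch, channel, height, width) inputs are illustrative.

```python
import torch

def gradient_penalty(discriminator, x_real, x_fake, lambda_gp=10.0):
    """WGAN_GP penalty: (||grad_{x_hat} D(x_hat)||_2 - 1)^2 on random interpolates x_hat."""
    batch_size = x_real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=x_real.device)      # one mixing weight per sample
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)  # points on straight lines
    d_hat = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_hat.sum(), inputs=x_hat,
                                create_graph=True)[0]                 # keep graph for D's update
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```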
B TRAINING THE ORIGINAL GENERATOR

Figure 2 (a) illustrates the overall architecture of AC-WGAN_GP that we use as the normal GAN. AC-WGAN_GP is the combination of AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), composed of three neural networks: a generator G, a discriminator D and a classifier f. The generator G takes a random noise z and a source label ys as inputs and generates an image G(z, ys). It aims to generate an image G(z, ys) that is indistinguishable to the discriminator D and makes the classifier f output the label ys. The loss function of G can be formulated as:
LG = E_{z∼pz(z)}[H(f(G(z, ys)), ys)] − E_{z∼pz(z)}[D(G(z, ys))].
Here H(a, b) is the cross entropy between a and b. The discriminator D takes the training data x or the generated data G(z, ys) as input and tries to distinguish them. The loss function of D with gradient penalty for samples x̂ ∼ px̂ can be formulated as:
LD = −E_{x∼pdata(x)}[D(x)] + E_{z∼pz(z)}[D(G(z, ys))] + λ E_{x̂∼px̂(x̂)}[(‖∇_{x̂} D(x̂)‖_2 − 1)^2].
The classifier f takes the training data x or the generated data G(z, ys) as input and predicts the corresponding label. Its loss function is:
Lf = E_{x∼pdata(x)}[H(f(x), y_true)] + E_{z∼pz(z)}[H(f(G(z, ys)), ys)].
Different from AC-WGAN_GP, StyleGAN2-ada (Karras et al., 2020a) trains StyleGAN2 (Karras et al., 2020b) with adaptive discriminator augmentation. We obtain the network and weights from Karras et al. (2020a).

C THEORETICAL ANALYSIS OF AT-GAN

In this section, we provide proofs for the theorems in Section 3.3.

Theorem 1. Suppose max_{z,y} L2 < ε. Then KL(pa‖pg) → 0 as ε → 0.

Proof. We first consider that, for a distribution p(x) in a space X, we construct another distribution q(x) by selecting points p_ε(x) in the ε-neighborhood of p(x) for any x ∈ X. Obviously, when p_ε(x) is close enough to p(x), q(x) has almost the same distribution as p(x). Formally, we have the following lemma.

Lemma 1. Given two distributions P and Q with probability density functions p(x) and q(x) in a space X, if there exists a constant ε that satisfies ‖q(x) − p(x)‖ < ε for any x ∈ X, then KL(P‖Q) → 0 as ε → 0.

Proof. For two distributions P and Q with probability density functions p(x) and q(x), we can write q(x) = p(x) + r(x) where ‖r(x)‖ < ε. Then
KL(P‖Q) = ∫ p(x) log (p(x) / q(x)) dx
= ∫ p(x) log p(x) dx − ∫ p(x) log q(x) dx
= ∫ (q(x) − r(x)) log p(x) dx − ∫ (q(x) − r(x)) log q(x) dx
= ∫ q(x) log p(x) dx − ∫ q(x) log q(x) dx − ∫ r(x) log p(x) dx + ∫ r(x) log q(x) dx
= ∫ r(x) log (q(x) / p(x)) dx − KL(Q‖P)
≤ ∫ ε log (1 + ε / p(x)) dx.
Obviously, when ε → 0, we get ∫ ε log(1 + ε / p(x)) dx → 0, which means KL(P‖Q) → 0.

Now we return to Theorem 1. For the two distributions pa and pg, max_{y,z} L2 < ε indicates that ∀z ∼ pz, ‖pa(z, ·) − pg(z, ·)‖ < ε. According to Lemma 1, we have KL(pa‖pg) → 0 as ε → 0. This concludes the proof.

Theorem 2. The global minimum of the virtual training of AC-WGAN_GP is achieved if and only if pg = pdata.

Proof. To simplify the analysis, we choose a category y of AC-WGAN_GP and denote by pg(x|y) and pdata(x|y) the distribution that the generator learns and the distribution of real data, respectively. Then, for each category, the loss function is equivalent to that of WGAN_GP. We follow Samangouei et al. (2018) to prove this property.
The WGAN_GP min-max loss is given by:
min_G max_D V(D, G) = E_{x∼pdata(x)}[D(x)] − E_{z∼pz(z)}[D(G(z))] − λ E_{x̂∼px̂(x̂)}[(‖∇_{x̂} D(x̂)‖_2 − 1)^2]
= ∫_x pdata(x) D(x) dx − ∫_z pz(z) D(G(z)) dz − λ ∫_{x̂} px̂(x̂) [(‖∇_{x̂} D(x̂)‖_2 − 1)^2] dx̂
= ∫_x [pdata(x) − pg(x)] D(x) dx − λ ∫_{x̂} px̂(x̂) [(‖∇_{x̂} D(x̂)‖_2 − 1)^2] dx̂.   (5)
For a fixed G, the optimal discriminator D that maximizes V(D, G) is:
D*_G(x) = 1 if pdata(x) ≥ pg(x), and 0 otherwise.   (6)
According to equation 5 and equation 6, we get:
V(D, G) = ∫_x [pdata(x) − pg(x)] D(x) dx − λ ∫_{x̂} px̂(x̂) [(‖∇_{x̂} D(x̂)‖_2 − 1)^2] dx̂
= ∫_{{x | pdata(x) ≥ pg(x)}} (pdata(x) − pg(x)) dx − λ ∫_{x̂} px̂(x̂) dx̂
= ∫_{{x | pdata(x) ≥ pg(x)}} (pdata(x) − pg(x)) dx − λ.   (7)
Let X = {x | pdata(x) ≥ pg(x)}. In order to minimize equation 7, we set pdata(x) = pg(x) for any x ∈ X. Then, since both pg and pdata integrate to 1, we get
∫_{X^c} pg(x) dx = ∫_{X^c} pdata(x) dx.
However, this contradicts equation 6, where pdata(x) < pg(x) for x ∈ X^c, unless µ(X^c) = 0, where µ is the Lebesgue measure. Therefore, for each category we have pg(x|y) = pdata(x|y), which means pg(x) = pdata(x) for AC-WGAN_GP.

D ADDITIONAL DETAILS ON EXPERIMENTS

In this section, we provide more details on the experimental setup, report results on transferability, conduct an ablation study on the hyper-parameters, investigate the generation capacity through human evaluation, and give details for another implementation of AT-GAN on the CIFAR-10 dataset. In the end, we illustrate some non-constrained adversarial examples generated by AT-GAN on MNIST, Fashion-MNIST and CelebA for the target attack.

D.1 MORE EXPERIMENTAL SETUP

We first provide more details on the experimental setup, including the model architectures and attack hyper-parameters.

Model Architectures for AT-GAN. We first describe the neural network architectures used for AT-GAN in the experiments. The abbreviations for components in the networks are described in Table 4. The architecture of AC-WGAN_GP for MNIST and Fashion-MNIST is shown in Table 5, where the generator and discriminator are the same as in Chen et al. (2016), while the architecture of AC-WGAN_GP for CelebA is the same as in Gulrajani et al. (2017), and the architecture of StyleGAN2-ada for CIFAR-10 is the same as in Karras et al. (2020a).

Hyper-parameters for Attacks. The hyper-parameters used in the experiments for each attack method are described in Table 6 for the MNIST, Fashion-MNIST and CelebA datasets. For the CIFAR-10 dataset, we set ε = 0.03 for FGSM; ε = 0.03, α = 0.0075 and epochs = 20 for PGD; and α = 3, β = 2 and epochs = 1,000 for AT-GAN.

D.2 TRANSFERABILITY OF AT-GAN

Another important issue for adversarial examples is their transferability across different models. To demonstrate the transferability of non-constrained adversarial examples, we use adversarial examples generated by attacking Model A (MNIST and Fashion-MNIST) and the CNN (CelebA) to evaluate the attack success rates on Model C (MNIST and Fashion-MNIST) and VGG16 (CelebA). As shown in Table 7, the non-constrained adversarial examples generated by AT-GAN exhibit moderate transferability.

D.3 ABLATION STUDY

In this subsection, we investigate the impact of using different ρ in the loss function. As ρ is constrained by both the ℓ0 and the ℓ∞ norm, we test various bounds for ρ under ℓ0 and ℓ∞ respectively, using Model A on the MNIST dataset; a sketch of one way to sample such a ρ is given below. We first fix ‖ρ‖∞ = 0.5 and try various values for ‖ρ‖0, i.e., 0, 100, 200, 300 and 400 (the maximum possible value is 784 for a 28 × 28 input).
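A minimal sketch of one way to draw such a ρ for flattened 28 × 28 images is shown below; the exact sampling procedure is an assumption, here uniform values inside the ℓ∞ bound on at most ‖ρ‖0 randomly chosen pixels.

```python
import torch

def sample_rho(batch_size, dim=784, l0=200, linf=0.5):
    """Draw rho with at most `l0` nonzero pixels, each uniform in [-linf, linf]."""
    rho = torch.zeros(batch_size, dim)
    for i in range(batch_size):
        idx = torch.randperm(dim)[:l0]                  # positions allowed to be nonzero
        rho[i, idx] = (torch.rand(l0) * 2 - 1) * linf   # uniform values within the l_inf bound
    return rho
```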
The attack success rates are in Table 8. We can observe that different values of ‖ρ‖0 have only a small impact on the attack success rates, and the performance is very close for ‖ρ‖0 = 0, 100 and 200. Figure 5 further illustrates some generated adversarial examples, among which we can see slight differences. When ‖ρ‖0 = 0, AT-GAN tends to change the foreground (body) of the digits. When we increase the value of ‖ρ‖0 (100 and 200), AT-GAN is more likely to add tiny noise to the background and the crafted examples are more realistic to humans (for instance, smoother on digit 4). But if we continue to increase ‖ρ‖0 (300 or 400), AT-GAN tends to add more noise and the quality of the generated examples decays. To achieve a good trade-off between attack performance and generation quality, we set ‖ρ‖0 = 200.

We then fix ‖ρ‖0 = 200 and test different values for ‖ρ‖∞, i.e., 0, 0.1, 0.2, 0.3, 0.4 and 0.5 (the maximum possible value is 1). The attack success rates are in Table 9. We can observe that different values of ‖ρ‖∞ have very little impact on the attack performance. Figure 6 further illustrates some generated adversarial examples, among which we can see that a bit more noise is added for larger ‖ρ‖∞, but the differences are very small for ‖ρ‖∞ = 0.2 to 0.5. So we simply set ‖ρ‖∞ = 0.5 in the experiments, though other values of ‖ρ‖∞ (0.2, 0.3, 0.4) also work.

D.4 HUMAN EVALUATION

To investigate the generation capacity of AT-GAN, we use the same input and randomly pick 100 images for each category of MNIST generated by AT-GAN and the original generator, respectively. We then conduct a human evaluation to determine whether each example is realistic. The evaluation results are in Table 10. We see that adversarial examples in some categories (e.g., 2 and 4) are harder to make semantically meaningful than in other categories (e.g., 0 and 1). On average, however, the generation capability is close to that of the original generator.

D.5 AT-GAN ON CIFAR-10 DATASET

To further demonstrate the flexibility of AT-GAN, we implement AT-GAN on the CIFAR-10 dataset using StyleGAN2-ada (Karras et al., 2020a), a recently proposed conditional GAN. The target classifier is the wide ResNet w32-10 (Zagoruyko & Komodakis, 2016) under normal training (Nor.) and iterative adversarial training (Iter.). The attack success rates are in Table 11. On the normally trained model, PGD achieves an attack success rate of 100% while AT-GAN achieves an attack success rate of 93.5%. However, the adversarially trained model exhibits little robustness against AT-GAN, which achieves an attack success rate of 73.0%. In Figure 7, we illustrate some generated adversarial examples on the CIFAR-10 dataset.

D.6 AT-GAN ON TARGET ATTACK

Here we show some non-constrained adversarial examples generated by AT-GAN for the target attack. The results are illustrated in Figure 8 for MNIST and Fashion-MNIST, and in Figure 9 for CelebA. Instead of adding perturbations to the original images, AT-GAN transfers the generative model (GAN) so that the generated adversarial instances do not have the same shape as the initial examples (on the diagonal) generated by the original generator. Note that for CelebA, the target adversarial attack is equivalent to the untargeted adversarial attack as it is a binary classification task.

E VISUALIZATIONS FOR THE ORIGINAL GAN AND AT-GAN

Here we provide some instances generated by the original GAN and AT-GAN with the same input noise, together with their differences, on MNIST and Fashion-MNIST. The results are depicted in Figures 10 and 11.
For different input noises, both the original GAN and AT-GAN output different instances. For each category with the same input noise, the difference between the original GAN and AT-GAN is mainly related to the main content of the image. For two different input noises, the differences between the original GAN and AT-GAN are not the same as each other, indicating that AT-GAN learns a distribution of adversarial examples different from that of the original GAN, rather than simply adding some universal perturbation vectors to the outputs of the original GAN.
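This check can be scripted along the following lines; the generator interfaces, the grayscale image shape and the grid layout are illustrative assumptions.

```python
import torch
import matplotlib.pyplot as plt

def compare_generators(g_original, g_attack, z_dim=100, num_classes=10, n_per_class=2):
    """For the same (z, y), show the original output, the AT-GAN output, and their difference."""
    g_original.eval(); g_attack.eval()
    fig, axes = plt.subplots(num_classes, 3 * n_per_class, figsize=(3 * n_per_class, num_classes))
    with torch.no_grad():
        for y in range(num_classes):
            for j in range(n_per_class):
                z = torch.randn(1, z_dim)
                y_s = torch.tensor([y])
                x_orig = g_original(z, y_s)[0].squeeze().cpu()   # assumes single-channel images
                x_adv = g_attack(z, y_s)[0].squeeze().cpu()
                diff = (x_adv - x_orig).abs()                    # varies with z if not a constant shift
                for k, img in enumerate([x_orig, x_adv, diff]):
                    ax = axes[y, 3 * j + k]
                    ax.imshow(img, cmap="gray")
                    ax.axis("off")
    fig.savefig("original_vs_atgan.png", dpi=200)
```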
Review
The paper proposes AT-GAN (Adversarial Transfer on Generative Adversarial Net) to train an adversarial generative model that can directly produce adversarial examples. Different from previous works, the study aims to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries. AT-GAN achieves this goal by first learning a generative model for real data, followed by transfer learning to obtain the desired generative model. Once trained and transferred, AT-GAN can generate adversarial examples directly for any input noise, denoted as non-constrained adversarial examples. Some experiments and visualizations show that AT-GAN can generate diverse adversarial examples that are realistic to human perception, and yields higher attack success rates against adversarially trained models.

Overall, the idea seems straightforward. Benefiting from the GAN, the proposed model can learn the distribution of adversarial examples to attack the target models. The paper is clearly written and some experiments are conducted. However, I have some concerns, as below:
1. In the loss function, ρ controls the difference between the outputs of the original and attack GANs; it is expected to see the performance and the generated examples under different ρ.
2. The idea seems incremental. The main contribution is to transfer a pre-trained GAN into an attack GAN to fool the classifiers. The novelty could be further summarized by highlighting the differences with the most related works, including but not limited to the aforementioned ones. The current manuscript makes the work seem like a straightforward combination of many existing approaches.
3. Some experiment settings are not clear. A brief introduction to Models A to D should be given in the main paper, even though the details are provided in the Appendix.

As most of my concerns are addressed by the rebuttal, I would like to raise my score.
The objective function for transferring Goriginal to Gattack can be formulated as L = 〈αL1, βL2〉, where α and β are hyper-parameters to control the training process. Note that in the case that α = 1 and β →∞, the objective function is similar to that of the perturbation-based attacks (Goodfellow et al., 2015; Tramèr et al., 2018; Madry et al., 2018). For the untargeted attack, we can replace yt in La with the maximum confidence of prediction label y except for ys, maxy 6=ys f(y|Gattack(z, ys)). 3.3 THEORETICAL ANALYSIS ON AT-GAN This subsection provides theoretical analysis on why AT-GAN can generate as realistic and diverse non-constrained adversarial examples as real data. We will prove that under ideal condition, AT-GAN can estimate the distribution of adversarial examples, which is close to that of real data. Suppose pdata is the distribution of real data, pg and pa are the distribution learned by the generator of AC-WGAN_GP and AT-GAN respectively. For the optimization of equation 4, L2 aims to constrain the image generated by Gattack in the -neighborhood of Goriginal. We prove that under the ideal condition that L2 guaranteesGattack(z, ys) to be close enough toGoriginal(z, ys) for any input noise z, the distribution of AT-GAN almost coincides the distribution of AC-WGAN_GP. Formally, we state our result for the two distributions as follows. Theorem 1. Suppose maxz,y L2 < , we have KL(pa‖pg)→ 0 when → 0. The proof of Theorem 1 is in Appendix C. Samangouei et al. (2018) prove that the global optimum of WGAN is pg = pdata and we show that the optimum of AC-WGAN_GP has the same property. We formalize the property as follows. Theorem 2. The global minimum of the virtual training of AC-WGAN_GP is achieved if and only if pg = pdata. The proof of Theorem 2 is in Appendix C. According to Theorem 1 and 2, under the ideal condition, we conclude pa ≈ pg = pdata, which indicates that the distribution of non-constrained adversarial examples learned by AT-GAN is very close to that of real data as discussed in Section 3.1, so that the non-constrained adversarial instances are as realistic and diverse as the real data. 4 EXPERIMENTS In this section, we provide two implementations of AT-GAN to validate the effectiveness and efficiency of the proposed approach. Empirical experiments demonstrate that AT-GAN yields higher attack success rates against adversarially trained models with higher efficiency. Besides, AT-GAN can learn a distribution of adversarial examples which is close to the real data distribution, and generate realistic and diverse adversarial examples. 4.1 EXPERIMENTAL SETUP Datasets. We consider four standard datasets, namely MNIST (LeCun et al., 1989), Fashion-MNIST (Xiao et al., 2017), CelebA (Liu et al., 2015) on the AT-GAN implementation using AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), and CIFAR-10 dataset (Krizhevsky et al., 2009) on the AT-GAN implementation of StyleGAN2-ada (StyleGAN2 with adaptive discriminator augmentation) (Karras et al., 2020a). MNIST is a dataset of hand written digits from 0 to 9. FashionMNIST is similar to MNIST with 10 categories of fashion clothes. CelebA contains more than 200, 000 celebrity faces. We group them according to female/male and focus on gender classification as in Song et al. (2018). CIFAR-10 consists of 32× 32 color images in 10 classes, with 6, 000 images per class. For all datasets, we normalize the pixel values into range [0, 1]. Baselines. 
We compare AT-GAN with the search-based attack methods, including Song’s (Song et al., 2018) for unrestricted adversarial examples, as well as FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018) and R+FGSM (Tramèr et al., 2018) for perturbation-based adversarial examples. Note that although the perturbation-based results are not directly comparable to ours as they are limited to small perturbations on real images, they can provide a good sense on the model robustness. Models. For MNIST and Fashion-MNIST, we adopt four models used in Tramèr et al. (2018), denoted as Model A to D. For CelebA, we consider three models, i.e. CNN, VGG16 (Simonyan & Zisserman, 2015) and ResNet (He et al., 2016). Details of Model A to D and CNN are described in Table 1. The ResNet is same as in Song et al. (2018). For CIFAR-10, we adopt the wide ResNet w32-10 (Zagoruyko & Komodakis, 2016). Details about the architectures of AT-GAN are provided in Appendix D. Evaluation Setup. We consider normal training and existing advanced defenses, namely adversarial training (Goodfellow et al., 2015), ensemble adversarial training (Tramèr et al., 2018) and iterative adversarial training (Madry et al., 2018). All experiments are conducted on a single Titan X GPU and the hyper-parameters used for attacks are described in Appendix D. 4.2 EVALUATION RESULTS For evaluation, we report the comparisons on attack success rate, attack efficiency and visualize some adversarial examples for AT-GAN and the baselines. More evaluation results on the transferability, ablation study, human evaluation, and the attack results on CIFAR-10, are provided in Appendix D. 4.2.1 COMPARISON ON ATTACK SUCCESS RATE To validate the attack effectiveness, we compare AT-GAN with the baselines under white-box setting. Since Athalye et al. (2018) show that the currently most effective defense method is adversarial training, we consider adversarially trained models as the defense models. The attack success rates are reported in Table 2. On MNIST, AT-GAN achieves the highest Attack Success Rate (ASR) against the baselines on all defense models. As for normal training, AT-GAN achieves the highest ASR on Model D, and the second highest ASR of over 98% on the other models. On Fashion-MNIST, AT-GAN achieves the highest ASR on average. On CelebA, AT-GAN achieves the highest ASR on almost all the models, with two exceptions under normal training but the results of AT-GAN are close to the highest. In general, AT-GAN achieves the highest attack performance above 90% on all the defense models. As AT-GAN aims to estimate the distribution of adversarial examples, adversarial training with some specific attacks has little robustness against AT-GAN, raising a new security issue for the development of more generalized adversarial training models. 4.2.2 COMPARISON ON ATTACK EFFICIENCY There are many scenarios where one needs a large amount of adversarial examples, such as adversarial training or exploring the property of adversarial examples. Thus, the efficiency of generating adversarial examples is very important, but such metric is ignored in most existing works. As an adversarial generative model, once trained, AT-GAN can generate adversarial examples very quickly. Here we evaluate the efficiency of each attack method for Model A on MNIST. The average time of generating/searching 1000 adversarial examples is summarized in Table 3. 
Among the five attack methods, AT-GAN is the fastest as it could craft adversarial examples without target classifier and gradient calculation. Note that Song’s needs much longer time than others as it needs multiple searches and queries to generate one adversarial example. It takes about 8 minutes for transferring the generator of AT-GAN. Here we only focus on the efficiency of generating adversarial examples after AT-GAN is transferred, i.e. we have already found the generator G∗, as in such case we could generate as many adversarial examples as we need. 4.2.3 VISUALIZATION ON ADVERSARIAL EXAMPLES Since the goal of adversarial examples is to fool target neural networks but not to fool human oracle, in Figure 3 we illustrate some adversarial examples generated by different attacks for Modle A on MNIST and Fashion-MNIST, and CNN on CelebA. On MNIST, AT-GAN generates slightly more realistic images than Song’s, e.g. “0” and “3”. On Fashion-MNIST and CelebA, some adversarial examples generated by Song’s method are not as realistic as AT-GAN to human perception, for example “t-shirt/top (0) ”, “sandal (5)” and some facial details. Note that Song’s method tends to distort the foreground that makes the images on MNIST more clean but some images are not realistic while AT-GAN tends to distort the background. As for perturbation-based attacks, their adversarial examples are not clear enough, especially on MNIST and Fashion-MNIST, due to the adversarial perturbations. There are also some unnatural samples generated by AT-GAN due to the limitation of GAN and we hope some better generative models can solve such issue. For target attack, please see more examples crafted by AT-GAN in Appendix D. In general, AT-GAN can generate realistic and diverse adversarial examples as equation 1 forces the generated non-constrained adversarial examples to be close to the benign examples generated by the original generator. 4.3 VISUALIZATION ON ADVERSARIAL DISTRIBUTION As discussed in Section 3.3, we provide a brief analysis that AT-GAN can learn a distribution of adversarial examples close to the distribution of real image data. To identify it empirically, we randomly choose 5, 000 benign images and 5, 000 adversarial examples generated by different attack methods, and merge these images according to their real label for MNIST and Fashion-MNIST. Then we use t-SNE (Maaten & Hinton, 2008) on these images to illustrate the distributions in two dimensions. t-SNE models each high-dimensional object in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. It indicates that, if the adversarial examples have different distribution to the benign data, t-SNE could not deal with them well and the points with different categories will overlap with each other after the dimension reduction, i.e. the results will be in chaos. The results are illustrated in Figure 4. For AT-GAN, different categories are separated as that of the test set while those of other methods are mixed with each other, especially on MNIST (top). It indicates the distribution AT-GAN learned is indeed very close to the distribution of real data. To further validate that AT-GAN learns a different distribution from the original GAN rather than just adding some constant universal perturbation vector. In Appendix E, we illustrate some instances generated by the original generator and AT-GAN for the same input. 
We find that for different inputs, the original generator outputs different images and the difference between the instances generated by the original generator and AT-GAN is also different, indicating that AT-GAN indeed learns a different distribution from the original GAN. 5 CONCLUSION In this work, we propose a generation-based adversarial attack method, called AT-GAN (Adversarial Transfer on Generative Adversarial Net), that aims to learn the distribution of adversarial examples for the target classifier. The generated adversaries are “non-constrained” as we do no search at all in the neighborhood of the input, and once trained AT-GAN can output adversarial examples directly for any input noise drawn from arbitrary distribution (e.g. Gaussian distribution). Extensive experiments and visualizations show that AT-GAN achieves highest attack success rates against adversarially trained models and can generate diverse and realistic adversarial examples efficiently. Our work also suggests that adversarial training, a popular defense method based on perturbationbased adversarial examples, could not guarantee robustness against non-constrained adversarial examples. A possible reason is that AT-GAN learns a more complete version of the adversarial example distribution, which is much more diverse than that of the perturbation-based method. Note that any conditional GANs that craft realistic examples could be used for the implementation of AT-GAN. In this work, we provide two implementations on four datasets. In future work we plan to try advanced GANs for generating high resolution images. Our method also suggests a new way of adversarial attack by designing an adversarial generative model directly. There are several other interesting questions related to our work that can be explored in future work. For instance, what is the distribution of adversarial examples really like? Is it a continuous or smooth manifold? How close could we learn such distribution through GAN? We hope our work could inspire more researches in this direction. APPENDIX In the appendix, we provide additional related work on gradient-based adversarial attack methods, adversarial training methods and typical generative adversarial nets. Then we describe how to obtain the original generator and provide theoretical analysis, as well as experimental details and additional results. In the end, we visualize the examples generated by original GAN and AT-GAN. A ADDITIONAL RELATED WORK A.1 GRADIENT-BASED ATTACKS Numerous adversarial attacks have been proposed in recent years (Carlini & Wagner, 2017; Liu et al., 2017; Bhagoji et al., 2017; Li et al., 2019). In this part, we will introduce three typical adversarial attack methods. Here the components of all adversarial examples are clipped in [0, 1]. Fast Gradient Sign Method (FGSM). FGSM (Goodfellow et al., 2015) adds perturbation in the gradient direction of the training loss J on the input x to generate adversarial examples. xadv = x+ · sign(∇xJ(θ, x, ytrue)), where ytrue is the true label of a sample x, θ is the model parameter and specifies the `∞ distortion between x and xadv . Projected Gradient Descent (PGD). PGD adversary (Madry et al., 2018) is a multi-step variant of FGSM, which applies FGSM for k iterations with a budget α. xadvt+1 = clip(xadvt+αsign(∇xJ(θ, xadvt , ytrue)), xadvt − , xadvt + ) xadv0 = x, xadv = xadvk Here clip(x′, p, q) forces its input x′ to reside in the range of [p, q]. Rand FGSM (R+FGSM). 
R+FGSM (Tramèr et al., 2018) first applies a small random perturbation on the benign image with a parameter α (α < ), then it uses FGSM to generate an adversarial example based on the perturbed image. xadv = x ′ + ( − α) · sign(∇x′J(θ, x′, ytrue)) where x′ = x+ α · sign(N (0, I)). A.2 ADVERSARIAL TRAINING There are many defense strategies, such as detecting adversarial perturbations (Metzen et al., 2017), obfuscating gradients (Buckman et al., 2018; Guo et al., 2018) and eliminating perturbations (Shen et al., 2017; Liao et al., 2018), among which adversarial training is the most effective method (Athalye et al., 2018). We list several adversarial training methods as follows. Adversarial training. Goodfellow et al. (2015) first introduce the method of adversarial training, where the standard loss function f for a neural network is modified as: J̃(θ, x, ytrue) = αJf (θ, x, ytrue) + (1− α)Jf (θ, xadv, ytrue). Here ytrue is the true label of a sample x and θ is the model’s parameter. The modified objective is to make the neural network more robust by penalizing it to count for adversarial samples. During the training, the adversarial samples are calculated with respect to the current status of the network. Taking FGSM for example, the loss function could be written as: J̃(θ, x, ytrue) =αJf (θ, x, ytrue) + (1− α)Jf (θ, x+ sign(∇xJ(θ, x, ytrue)), ytrue). Ensemble adversarial training. Tramèr et al. (2018) propose an ensemble adversarial training method, in which DNN is trained with adversarial examples transferred from a number of fixed pre-trained models. Iterative adversarial training. Madry et al. (2018) propose to train a DNN with adversarial examples generated by iterative methods such as PGD. A.3 GENERATIVE ADVERSARIAL NET Generative Adversarial Net (GAN) (Goodfellow et al., 2014) consists of two neural networks, G and D, trained in opposition to each other. The generator G is optimized to estimate the data distribution and the discriminator D aims to distinguish fake samples from G and real samples from the training data. The objective of D and G can be formalized as a min-max value function V (G,D): min G max D V (G,D) = Ex∼px [logD(x)] + Ez∼pz [log(1−D(G(z)))]. Deep Convolutional Generative Adversarial Net (DCGAN) (Radford et al., 2016) is the convolutional version of GAN, which implements GAN with convolutional networks and stabilizes the training process. Auxiliary Classifier GAN (AC-GAN) (Odena et al., 2017) is another variant that extends GAN with some conditions by an extra classifier C. The objective function of AC-GAN can be formalized as follows: min G max D min C V (G,D,C) =Ex∼px [logD(x)] + Ez∼pz [log(1−D(G(z, ys)))] + Ex∼px [log(1− C(x, ys))] + Ez∼pz [log(1− C(G(z, ys), ys))]. To make GAN more trainable in practice, Arjovsky et al. (2017) propose Wasserstein GAN (WGAN) that uses Wassertein distance so that the loss function has more desirable properties. Gulrajani et al. (2017) introduce WGAN with gradient penalty (WGAN_GP) that outperforms WGAN in practice. Its objective function is formulated as: min G max D V (D,G) = Ex∼px [D(x)]− Ez∼pz [D(G(z))]− λEx̂∼px̂ [(‖∇x̂D(x̂)‖2 − 1)2], where px̂ is uniformly sampled along straight lines between pairs of points sampled from the data distribution px and the generator distribution pg . B TRAINING THE ORIGINAL GENERATOR Figure 2 (a) illustrates the overall architecture of AC-WGAN_GP that we used as the normal GAN. 
B TRAINING THE ORIGINAL GENERATOR Figure 2 (a) illustrates the overall architecture of AC-WGAN_GP that we used as the normal GAN. AC-WGAN_GP is the combination of AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), composed of three neural networks: a generator G, a discriminator D and a classifier f. The generator G takes a random noise z and a source label y_s as inputs and generates an image G(z, y_s). It aims to generate an image G(z, y_s) that is indistinguishable to the discriminator D and makes the classifier f output the label y_s. The loss function of G can be formulated as: L_G = E_{z∼p_z(z)}[H(f(G(z, y_s)), y_s)] − E_{z∼p_z(z)}[D(G(z, y_s))]. Here H(a, b) is the cross entropy between a and b. The discriminator D takes the training data x or the generated data G(z, y_s) as input and tries to distinguish them. The loss function of D with gradient penalty for samples x̂ ∼ p_x̂ can be formulated as: L_D = −E_{x∼p_data(x)}[D(x)] + E_{z∼p_z(z)}[D(G(z, y_s))] + λ E_{x̂∼p_x̂(x̂)}[(‖∇_x̂ D(x̂)‖_2 − 1)^2]. The classifier f takes the training data x or the generated data G(z, y_s) as input and predicts the corresponding label. Its loss function is: L_f = E_{x∼p_data(x)}[H(f(x), y_true)] + E_{z∼p_z(z)}[H(f(G(z, y_s)), y_s)]. Different from AC-WGAN_GP, StyleGAN2-ada (Karras et al., 2020a) trains StyleGAN2 (Karras et al., 2020b) with adaptive discriminator augmentation. We obtain the network and weights from Karras et al. (2020a).

C THEORETICAL ANALYSIS OF AT-GAN In this section, we provide proofs for the theorems in Section 3.3.

Theorem 1. Suppose max_{z,y} L2 < ε. Then KL(p_a‖p_g) → 0 as ε → 0.

Proof. We first consider that, for a distribution p(x) in space X, we construct another distribution q(x) by selecting points p_ε(x) in the ε-neighborhood of p(x) for any x ∈ X. Obviously, when p_ε(x) is close enough to p(x), q(x) has almost the same distribution as p(x). Formally, we have the following lemma.

Lemma 1. Given two distributions P and Q with probability density functions p(x) and q(x) in space X, if there exists a constant ε such that ‖q(x) − p(x)‖ < ε for any x ∈ X, then KL(P‖Q) → 0 as ε → 0.

Proof. For the two distributions P and Q with probability density functions p(x) and q(x), we can write q(x) = p(x) + r(x) where ‖r(x)‖ < ε. Then
KL(P‖Q) = ∫ p(x) log (p(x)/q(x)) dx
= ∫ p(x) log p(x) dx − ∫ p(x) log q(x) dx
= ∫ (q(x) − r(x)) log p(x) dx − ∫ (q(x) − r(x)) log q(x) dx
= ∫ q(x) log p(x) dx − ∫ q(x) log q(x) dx − ∫ r(x) log p(x) dx + ∫ r(x) log q(x) dx
= ∫ r(x) log (q(x)/p(x)) dx − KL(Q‖P)
≤ ∫ ε log(1 + ε/p(x)) dx.
Obviously, when ε → 0 we get ∫ ε log(1 + ε/p(x)) dx → 0, which means KL(P‖Q) → 0.

Now we get back to Theorem 1. For the two distributions p_a and p_g, max_{y,z} L2 < ε indicates that ∀z ∼ p_z, ‖p_a(z, ·) − p_g(z, ·)‖ < ε. According to Lemma 1, we have KL(p_a‖p_g) → 0 as ε → 0. This concludes the proof.
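As a quick numerical sanity check of Lemma 1 (not part of the original paper), the sketch below perturbs a discretized density by bounded noise of size ε and verifies that the KL divergence shrinks as ε decreases; the grid and density are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-5, 5, 2001)
dx = xs[1] - xs[0]
p = np.exp(-0.5 * xs**2)
p /= (p * dx).sum()                              # discretized density p(x)

for eps in [1e-1, 1e-2, 1e-3]:
    r = rng.uniform(-eps, eps, size=xs.shape)    # |r(x)| < eps
    q = np.clip(p + r, 1e-12, None)
    q /= (q * dx).sum()                          # q(x) = p(x) + r(x), renormalized
    kl = (p * np.log(p / q) * dx).sum()
    print(f"eps={eps:g}  KL(P||Q)={kl:.3e}")
# KL(P||Q) decreases toward 0 as eps -> 0, in line with Lemma 1.
```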
Theorem 2. The global minimum of the virtual training of AC-WGAN_GP is achieved if and only if p_g = p_data.

Proof. To simplify the analysis, we choose a category y of AC-WGAN_GP and denote by p_g(x|y) and p_data(x|y) the distribution that the generator learns and the distribution of real data, respectively. Then, for each category, the loss function is equivalent to that of WGAN_GP. We refer to Samangouei et al. (2018) for the proof of this property. The WGAN_GP min-max loss is given by:
min_G max_D V(D, G) = E_{x∼p_data(x)}[D(x)] − E_{z∼p_z(z)}[D(G(z))] − λ E_{x̂∼p_x̂(x̂)}[(‖∇_x̂ D(x̂)‖_2 − 1)^2]
= ∫_x p_data(x) D(x) dx − ∫_z p_z(z) D(G(z)) dz − λ ∫_x̂ p_x̂(x̂)[(‖∇_x̂ D(x̂)‖_2 − 1)^2] dx̂
= ∫_x [p_data(x) − p_g(x)] D(x) dx − λ ∫_x̂ p_x̂(x̂)[(‖∇_x̂ D(x̂)‖_2 − 1)^2] dx̂.   (5)
For a fixed G, the optimal discriminator D that maximizes V(D, G) is:
D*_G(x) = 1 if p_data(x) ≥ p_g(x), and 0 otherwise.   (6)
According to equation 5 and equation 6, we get:
V(D, G) = ∫_x [p_data(x) − p_g(x)] D(x) dx − λ ∫_x̂ p_x̂(x̂)[(‖∇_x̂ D(x̂)‖_2 − 1)^2] dx̂
= ∫_{{x | p_data(x) ≥ p_g(x)}} (p_data(x) − p_g(x)) dx − λ ∫_x̂ p_x̂(x̂) dx̂
= ∫_{{x | p_data(x) ≥ p_g(x)}} (p_data(x) − p_g(x)) dx − λ.   (7)
Let X = {x | p_data(x) ≥ p_g(x)}. In order to minimize equation 7, we set p_data(x) = p_g(x) for any x ∈ X. Then, since both p_g and p_data integrate to 1, we get ∫_{X^c} p_g(x) dx = ∫_{X^c} p_data(x) dx, while equation 6 gives p_data(x) < p_g(x) for x ∈ X^c; this is impossible unless µ(X^c) = 0, where µ is the Lebesgue measure. Therefore, for each category we have p_g(x|y) = p_data(x|y), which means p_g(x) = p_data(x) for AC-WGAN_GP.

D ADDITIONAL DETAILS ON EXPERIMENTS In this section, we provide more details on the experimental setup, report results on transferability, conduct an ablation study on hyper-parameters, investigate the generating capacity by human evaluation, and give details for another implementation of AT-GAN on the CIFAR-10 dataset. In the end, we illustrate some non-constrained adversarial examples generated by AT-GAN on MNIST, Fashion-MNIST and CelebA for the target attack.

D.1 MORE EXPERIMENTAL SETUP We first provide more details on the experimental setup, including the model architectures and attack hyper-parameters.

Model Architectures for AT-GAN. We first describe the neural network architectures used for AT-GAN in the experiments. The abbreviations for components in the network are described in Table 4. The architecture of AC-WGAN_GP for MNIST and Fashion-MNIST is shown in Table 5, where the generator and discriminator are the same as in Chen et al. (2016), while the architecture of AC-WGAN_GP for CelebA is the same as in Gulrajani et al. (2017) and the architecture of StyleGAN2-ada for CIFAR-10 is the same as in Karras et al. (2020a).

Hyper-parameters for Attacks. The hyper-parameters used in the experiments for each attack method are described in Table 6 for the MNIST, Fashion-MNIST and CelebA datasets. For the CIFAR-10 dataset, we set ε = 0.03 for FGSM; ε = 0.03, α = 0.0075 and 20 iterations for PGD; and α = 3, β = 2 and 1,000 epochs for AT-GAN.

D.2 TRANSFERABILITY OF AT-GAN Another important issue for adversarial examples is their transferability across different models. To demonstrate the transferability of non-constrained adversarial examples, we use adversarial examples generated by attacking Model A (MNIST and Fashion-MNIST) and CNN (CelebA) to evaluate the attack success rates on Model C (MNIST and Fashion-MNIST) and VGG16 (CelebA). As shown in Table 7, non-constrained adversarial examples generated by AT-GAN exhibit moderate transferability.

D.3 ABLATION STUDY In this subsection, we investigate the impact of using different ρ in the loss function. As ρ can be constrained by both the ℓ0 and ℓ∞ norm, we test various bounds for ρ in ℓ0 and ℓ∞, respectively, using Model A on the MNIST dataset; a sketch of how such a constrained ρ can be sampled is given below. We first fix ‖ρ‖∞ = 0.5 and try various values for ‖ρ‖0, i.e. 0, 100, 200, 300, 400 (the maximum possible value is 784 for a 28×28 input). The attack success rates are in Table 8.
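The ablation above varies the ℓ0 and ℓ∞ bounds on the random noise ρ used in L2. As a minimal illustration (assuming 28×28 inputs and NumPy; the exact sampling procedure is not specified in the paper), a jointly constrained uniform noise can be drawn as follows.

```python
import numpy as np

def sample_rho(shape=(28, 28), l0_bound=200, linf_bound=0.5, rng=None):
    # Uniform noise with at most `l0_bound` non-zero pixels, each bounded in
    # magnitude by `linf_bound`, so ||rho||_0 <= l0_bound and ||rho||_inf <= linf_bound.
    rng = rng or np.random.default_rng()
    rho = np.zeros(shape)
    n_pixels = int(np.prod(shape))
    idx = rng.choice(n_pixels, size=min(l0_bound, n_pixels), replace=False)
    rho.flat[idx] = rng.uniform(-linf_bound, linf_bound, size=idx.size)
    return rho

rho = sample_rho()
print(np.count_nonzero(rho), np.abs(rho).max())   # <= 200, <= 0.5
```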
We can observe that different values of ‖ρ‖0 only have a little impact on the attack success rates, and the performances are very close for ‖ρ‖0 = 0, 100, 200. Figure 5 further illustrates some generated adversarial examples, among which we can see that there exist some slight differences on the examples. When ‖ρ‖0 = 0, AT-GAN tends to change the foreground (body) of the digits. When we increase the value of ‖ρ‖0 (100 and 200), AT-GAN is more likely to add tiny noise to the background and the crafted examples are more realistic to humans (for instance, smoother on digit 4). But if we continue to increase ‖ρ‖0 (300 or 400), AT-GAN tends to add more noise and the quality of the generated examples decays. To have a good tradeoff on attack performance and generation quality, we set ‖ρ‖0 = 200. We then fix ‖ρ‖0 = 200 and test different values for ‖ρ‖∞, i.e. 0, 0.1, 0.2, 0.3, 0.4, 0.5 (the maximum possible value is 1). The attack success rates are in Table 9. We can observe that different values of ‖ρ‖∞ have very little impact on the attack performance. Figure 6 further illustrates some generated adversarial examples, among which we can see that a little bit more noises are added for bigger ‖ρ‖∞ but the differences are very tiny when ‖ρ‖∞ = 0.2 to 0.5. So we simply set ‖ρ‖∞ = 0.5 in experiments, but other values of ‖ρ‖∞ (0.2, 0.3, 0.4) also work. D.4 HUMAN EVALUATION To investigate the generating capacity of AT-GAN, we use the same input, and randomly pick 100 images for each category of MNIST generated by AT-GAN and the original generator, respectively. We then conduct human evaluation to determine whether each example is realistic. The evaluation results are in Table 10. We see that adversarial examples in some categories (e.g. 2, 4) are harder to be semantically meaningful than other categories (e.g. 0, 1). On average, however, the generating capability is close to that of the original generator. D.5 AT-GAN ON CIFAR-10 DATASET To further demonstrate the flexibility of AT-GAN, we implement AT-GAN on CIFAR-10 dataset using StyleGAN2-ada (Karras et al., 2020a), a recently proposed conditional GAN. The target classifier is wide ResNet w32-10 (Zagoruyko & Komodakis, 2016) by normal training (Nor.) and Iterative adversarial training (Iter.). The attack success rates are in Table 11. On normally trained models, PGD achieves the attack success rate of 100% while AT-GAN achieves the attack success rate of 93.5%. However, the adversarially trained model exhibits little robustness against AT-GAN and AT-GAN achieves attack success rate of 73.0%. In Figure 7, we illustrate some generated adversarial examples on CIFAR-10 dataset. D.6 AT-GAN ON TARGET ATTACK Here we show some non-constrained adversarial examples generated by AT-GAN for the target attack. The results are illustrated in Figure 8 for MNIST and Fashion-MNIST, and Figure 9 for CelebA. Instead of adding perturbations to the original images, AT-GAN transfers the generative model (GAN) so that the generated adversarial instances are not in the same shape of the initial examples (in diagonal) generated by the original generator. Note that for CelebA, the target adversarial attack is equivalent to the untarget adversarial attack as it is a binary classification task. E VISUALIZATIONS FOR THE ORIGINAL GAN AND AT-GAN Here we provide some instances generated by the original GAN and AT-GAN with the same input noise and their difference on MNIST and Fashion-MNIST. The results are depicted in Figure 10 and 11. 
For different input noise, both the original GAN and AT-GAN output different instances. For each category with the same input noise, the difference between the original GAN and AT-GAN is mainly related to the main content of the image. For two different input noises, the differences between the original GAN and AT-GAN are not the same as each other, indicating that AT-GAN learns a distribution of adversarial examples different from that of the original GAN, rather than just adding some universal perturbation vectors to the original GAN's outputs.
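The per-noise comparison described above can be reproduced with a short sketch like the following; it is a minimal illustration assuming PyTorch generators with the interface G(z, y), where the generator objects, noise dimension and class count are placeholders rather than the released code.

```python
import torch

@torch.no_grad()
def compare_generators(G_original, G_attack, num_classes=10, z_dim=100, n_noises=2):
    # For a few shared noise vectors, generate one image per class from both
    # generators and report how large the per-class differences are
    # (this is what the difference maps in Figures 10 and 11 visualize).
    for i in range(n_noises):
        z = torch.randn(1, z_dim)
        diffs = []
        for y in range(num_classes):
            y_s = torch.tensor([y])
            diffs.append((G_attack(z, y_s) - G_original(z, y_s)).flatten())
        diffs = torch.stack(diffs)
        print(f"noise {i}: mean |diff| per class =",
              diffs.abs().mean(dim=1).tolist())
```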
1. What is the focus and contribution of the paper on adversarial generative models? 2. What are the strengths of the proposed approach, particularly in terms of its ability to generate non-constrained adversarial examples? 3. What are the weaknesses of the paper, especially regarding the choice of pre-training stages? 4. Do you have any concerns or suggestions regarding the paper's organization, clarity, or experimental validation? 5. How does the reviewer assess the novelty and significance of the proposed approach in the context of existing works on generative adversarial networks?
Review
Review This paper proposes adversarial transfer on generative adversarial nets (AT-GAN) to train an adversarial generative model that can directly produce adversarial examples. In other words, AT-GAN can generate adversarial examples directly from any input noise. Such a generative model is able to draw non-constrained adversarial examples. Pros: The paper is clearly written with a reasonable organization covering background, model design, mathematical formulation and experiments. The goal of this work is clear and is supported by experimental justification. The mathematical description and experimental illustrations show the merit of the method. Cons: The reasons for using AC-GAN and WGAN-GP in the pre-training stage are missing.
ICLR
Title AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples Abstract With the rapid development of adversarial machine learning, numerous adversarial attack methods have been proposed. Typical attacks are based on a search in the neighborhood of input image to generate a perturbed adversarial example. Since 2017, generative models are adopted for adversarial attacks, and most of them focus on generating adversarial perturbations from input noise or input image. Thus the output is restricted by input for these works. A recent work targets “unrestricted adversarial example” using generative model but their method is based on a search in the neighborhood of input noise, so actually their output is still constrained by input. In this work, we propose AT-GAN (Adversarial Transfer on Generative Adversarial Net) to train an adversarial generative model that can directly produce adversarial examples. Different from previous works, we aim to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries. AT-GAN achieves this goal by first learning a generative model for real data, followed by transfer learning to obtain the desired generative model. Once trained and transferred, AT-GAN could generate adversarial examples directly and quickly for any input noise, denoted as non-constrained adversarial examples. Extensive experiments and visualizations show that AT-GAN can efficiently generate diverse adversarial examples that are realistic to human perception, and yields higher attack success rates against adversarially trained models. 1 INTRODUCTION In recent years, Deep Neural Networks (DNNs) have been found vulnerable to adversarial examples (Szegedy et al., 2014), which are well-crafted samples with tiny perturbations imperceptible to humans but can fool the learning models. Despite the great success of the deep learning empowered applications, many of them are safety-critical, for example under the scenario of self-driving cars (Eykholt et al., 2018; Cao et al., 2019), raising serious concerns in academy and industry. Numerous works of adversarial examples have been developed on adversarial attacks (Goodfellow et al., 2015; Carlini & Wagner, 2017; Madry et al., 2018), adversarial defenses (Goodfellow et al., 2015; Kurakin et al., 2017; Song et al., 2019) and exploring the property of adversarial examples (He et al., 2018; Shamir et al., 2019). For adversarial attacks, most studies focus on the perturbation-based adversarial examples constrained by input images, which is also the generally accepted conception of adversarial examples. Generative models are also adopted recently to generate adversarial perturbations from an input noise (Reddy Mopuri et al., 2018; Omid et al., 2018) or from a given image (Xiao et al., 2018; Bai et al., 2020), and such perturbations are added to the original image to craft adversarial examples. Song et al. (2018) propose to search a neighborhood noise around the input noise of a Generative Adversarial Net (GAN) (Goodfellow et al., 2014) such that the output is an adversarial example, which they denoted as unrestricted adversarial example as there is no original image in their method. However, their output is still constrained by the input noise, and the search is time-consuming. In this work, we propose an adversarial generative model called AT-GAN (Adversarial Transfer on Generative Adversarial Net), which aims to learn the distribution of adversarial examples. 
Unlike previous works that constrain the adversaries in the neighborhood of input image or input noise, including the prominent work of Song et al. (2018) that searches over the neighborhood of the input noise of a pre-trained GAN in order to find a noise whose output image is misclassified by the target classifier, AT-GAN is an adversarial generative model that could produce semantically meaningful adversarial examples directly from any input noise, and we call such examples the non-constrained adversarial examples. Specifically, we first develop a normal GAN to learn the distribution of benign data so that it can produce plausible images that the classifier and a human oracle will classify in the same way. Then we transfer the pre-trained GAN into an adversarial GAN called AT-GAN that can fool the target classifier while being still well recognized by the human oracle. AT-GAN is a conditional GAN that has learned to estimate the distribution of adversarial examples for the target classifier, so AT-GAN can directly generate adversarial examples from any random noise, leading to high diversity and efficiency. We implement AT-GAN by adopting AC-GAN (Odena et al., 2017) and WGAN-GP (Gulrajani et al., 2017) in the pre-training stage, then do transfer learning for the adversary generation. Here we develop AT-GAN on three benchmark datasets, namely MNIST, Fashion-MNIST and CelebA, and apply typical defense methods to compare AT-GAN with existing search-based attacks. Empirical results show that the non-constrained adversarial examples generated by AT-GAN yield higher attack success rates, and state-of-the-art adversarially trained models exhibit little robustness against ATGAN, indicating the high diversity of our adversaries. In addition, AT-GAN, as a generation-based adversarial attack, is more efficient than the search-based adversarial attacks. Note that all conditional GANs that can craft realistic examples could be used for the implementation of AT-GAN. For another demonstration, we adopt StyleGAN2-ada (Karras et al., 2020a) and develop AT-GAN on CIFAR-10 benchmark dataset using wide ResNet w32-10 (Zagoruyko & Komodakis, 2016) as the target classifier. Empirical results show that AT-GAN can produce plausible adversarial images, and yield higher attack success rates on the adversarially trained models. 2 PRELIMINARIES In this section, we provide definitions on several types of adversarial examples and adversarial attacks, and give a brief overview of adversarial attacks using GAN. Other related works on typical adversarial attacks and defenses (Goodfellow et al., 2015; Madry et al., 2018; Tramèr et al., 2018), as well as some typical GANs (Goodfellow et al., 2014; Radford et al., 2016; Odena et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017) are introduced in Appendix A. 2.1 DEFINITIONS ON ADVERSARIES Let X be the set of all digital images under consideration for a learning task, Y ∈ R be the output label space and pz ∈ Rm be an arbitrary probability distribution (e.g. Gaussian distribution) where m is the dimension of pz . A deep learning classifier f : X → Y takes an image x ∈ X and predicts its label f(x). Suppose px and padv are the distributions of benign images and adversarial examples, respectively. Assume we have an oracle classifier o : X → Y , which could always predict the correct label for any image x ∈ X , we define several types of adversarial examples as follows. 
For perturbation-based adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016), tiny perturbations are added to the input images, which are imperceptible to humans but can cause the target classifier to make wrong predictions.

Definition 1. Perturbation-based Adversarial Examples. Given a subset (trainset or testset) of images T ⊂ X and a small constant ε > 0, the perturbation-based adversarial examples can be defined as: A_p = {x_adv ∈ X | ∃x ∈ T, ‖x − x_adv‖_p < ε ∧ f(x_adv) ≠ o(x_adv) = f(x) = o(x)}.

Song et al. (2018) define a new type of adversarial examples called unrestricted adversarial examples, which is not related to the subset (trainset or testset) of images, by adding an adversarial perturbation to the input noise of a mapping, such as a GAN, so that the output of the perturbed noise is an adversary to the target classifier.

Definition 2. Unrestricted Adversarial Examples. Given a mapping G from z ∼ p_z to G(z, y) ∼ p_θ, where p_θ is an approximated distribution of p_x, and a small constant ε > 0, the unrestricted adversarial examples can be defined as: A_u = {G(z*, y_s) ∈ X | ∃z ∼ p_z, z* ∼ p_z, ‖z − z*‖_p < ε ∧ f(G(z*, y_s)) ≠ o(G(z*, y_s)) = f(G(z, y_s)) = o(G(z, y_s)) = y_s}, where y_s is the source label.

In this work, we train a conditional GAN to learn the distribution of adversarial examples and output the corresponding adversary directly from any input noise. To clarify the difference with Song et al. (2018), we call our generated adversaries the non-constrained adversarial examples.

Definition 3. Non-constrained Adversarial Examples. If there is a mapping G* from z ∼ p_z to G*(z, y) ∼ q_θ, where q_θ is an approximated distribution of p_adv, the non-constrained adversarial examples can be defined as A_n = {G*(z, y_s) ∈ X | f(G*(z, y_s)) ≠ o(G*(z, y_s)) = y_s}, where y_s is the source label. Here we need to find a mapping G*, e.g. a generative model, such that for z ∼ p_z, G*(z, y) is an image in X and the output distribution is an approximated distribution of p_adv, for example under the Kullback-Leibler divergence (Kullback & Leibler, 1951), KL(q_θ‖p_adv) < ε for a small constant ε.

In summary, perturbation-based adversarial examples are based on perturbing an image x ∈ X, and unrestricted adversarial examples (Song et al., 2018) perturb an input noise z ∼ p_z for an existing mapping G. Most perturbation-based adversarial attacks and Song et al. (2018) fall into the search-based adversarial attack.

Definition 4. Search-based Adversarial Attack. Given an input vector v ∈ V (either a benign image x or a random vector z), the search-based adversarial attack searches for a vector v′ with ‖v − v′‖_p < ε such that v′ leads to an adversarial example for the target classifier.

In contrast, non-constrained adversarial examples are more general, so we need to learn a mapping G* such that for any input noise sampled from the distribution p_z, the output is an adversarial image. Such a mapping to be learned is called an adversarial generative model, and our method falls into the generation-based adversarial attack.

Definition 5. Generation-based Adversarial Attack. Given an input vector v ∈ V (either a benign image x or a random vector z), the generation-based adversarial attack generates an adversarial perturbation or adversarial example directly from v, usually adopting generative models.

2.2 GENERATIVE MODELS FOR ADVERSARIAL ATTACK Generative models have been adopted for adversarial attack in recent works (Baluja & Fischer, 2017). Reddy Mopuri et al.
(2018) propose a Network for Adversary Generation (NAG) that models the distribution of adversarial perturbations for a target classifier so that their NAG can craft adversarial perturbations from any given random noise, which will be added to the natural image to fool the target classifier. Omid et al. (2018) propose to generate universal or image-dependent adversarial perturbations using U-Net (Ronneberger et al., 2015) or ResNet Generator (He et al., 2016) from any given random noise. Xiao et al. (2018) propose to train AdvGAN that takes an original image as the input and generate adversarial perturbation for the input to craft an adversarial example. Bai et al. (2020) further propose AI-GAN that adopts projected gradient descent (PGD) (Madry et al., 2018) in the training stage to train a GAN to generate target adversarial perturbation for the input image and target class. The above attack methods all fall into the generation-based adversarial attack, and their crafted examples fall into the perturbation-based adversarial examples. Another recent work called PS-GAN (Liu et al., 2019) pre-processes an input seed patch (a small image) to adversarial patch that will be added to a natural image to craft an adversarial example, and an attention model is used to locate the attack area on the natural image. Different from the above methods that generate adversarial perturbations or patches, Song et al. (2018) propose to search a random noise z∗ around the input noise z of AC-GAN (Odena et al., 2017) such that the corresponding output of AC-GAN is an adversarial example for the target classifier. Their method falls into the search-based adversarial attack, and their crafted examples fall into the unrestricted adversarial examples as there is no original image in their method. AT-GAN falls into the generation-based adversarial attack, and the crafted examples fall into the non-constrained adversarial examples. To clearly distinguish our work, we highlight the differences with most related works as follows: NAG, AdvGAN and AI-GAN vs. AT-GAN. NAG (Reddy Mopuri et al., 2018), AdvGAN (Xiao et al., 2018) and AI-GAN (Bai et al., 2020) focus on crafting adversarial perturbations by GANs. NAG takes random noise as input and crafts image-agnostic adversarial perturbation. AdvGAN and AI-GAN both use natural images as inputs, and generate the corresponding adversarial perturbations for the input image. AI-GAN uses adversarial examples generated by PGD for the training. In contrast, AT-GAN does not use any natural image as the input, and generates adversarial examples directly from any random noise. Further, compared with AI-GAN, we do not use any adversarial examples for the training. Song’s vs. AT-GAN. Song’s method (Song et al., 2018) searches over the neighborhood of the input noise for the pre-trained AC-GAN in order to find a noise whose output image is misclassified by the target classifier. They define such adversaries as the unrestricted adversarial examples, however, their adversaries are still constrained by the original input noise. Their method is essentially based on search, while AT-GAN is trained as an adversarial generative model, and our output is not constrained by any neighborhood. 3 AT-GAN: AN ADVERSARIAL GENERATIVE MODEL Here we first introduce the estimation on the distribution of adversarial examples, then propose the AT-GAN framework, a generation-based adversarial attack for crafting non-constrained adversarial examples. 
We further provide an analysis showing that AT-GAN can learn the adversarial distribution.

3.1 ESTIMATING THE ADVERSARIAL DISTRIBUTION In order to generate non-constrained adversarial examples, we need to estimate the distribution of adversarial examples p_adv(x_adv|y_true), where y_true is the true label. Given the parameterized estimated distribution of adversarial examples q_θ(x|y_true), we can define the estimation problem as: q_θ*(x_adv|y_true) = argmin_{θ∈Ω} KL(q_θ(x_adv|y_true)‖p_adv(x_adv|y_true)),   (1) where θ indicates the trainable parameters and Ω is the parameter space. It is hard to optimize equation 1 directly, as p_adv(x_adv|y_true) is unknown. Inspired by perturbation-based adversarial examples, as shown in Figure 1, we postulate that for each adversarial example x_adv there exist some benign examples x with ‖x − x_adv‖_p < ε. In other words, p_adv(x_adv|y_true) is close to p(x|y_true) to some extent, and we can obtain p(x|y_true) by Bayes’ theorem, p(x|y_true) = p(y_true|x) · p(x) / p(y_true), where p(y_true|x), p(x) and p(y_true) can be obtained directly from the trainset. Thus, we can approximately solve equation 1 in two stages: 1) Fit the distribution of benign data p_θ. 2) Transfer p_θ to estimate the distribution of adversarial examples q_θ. Specifically, we propose an adversarial generative model called AT-GAN to learn the distribution of adversarial examples. The overall architecture of AT-GAN is illustrated in Figure 2. Corresponding to the above two stages, we implement AT-GAN by first training a GAN model called AC-WGAN_GP, which combines AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017) to obtain a generator G_original, to learn p_θ (see Appendix B), and then transferring G_original to attack the target classifier f for the learning of q_θ. We adopt AC-GAN and WGAN-GP for the AT-GAN implementation as they can build a powerful generative model on the three evaluated datasets, and Song et al. (2018) also utilize the same combination. But AT-GAN is not limited to the above GANs, and we also implement AT-GAN using StyleGAN2-ada (Karras et al., 2020a) on a different dataset.

3.2 TRANSFERRING THE GENERATOR FOR ATTACK After the original generator G_original is trained, we transfer it to learn the distribution of adversarial examples in order to attack the target model. As illustrated in Figure 2 (b), there are three neural networks: the original generator G_original, the attack generator G_attack to be transferred, which is initialized with the weights of G_original, and the classifier f to be attacked. The goal of the second stage can be described as: G*_attack = argmin_{G_attack} ‖G_original(z, y_s) − G_attack(z, y_s)‖_p s.t. f(G_attack(z, y_s)) = y_t ≠ y_s,   (2) where y_t denotes the target label, ‖·‖_p denotes the ℓ_p norm, and we focus on p = 2 in this work. To optimize equation 2, we construct the loss function from L1 and L2, where L1 aims to assure that f yields the target label y_t, which is fixed per category for the target attack: L1 = E_{z∼p_z}[H(f(G_attack(z, y_s)), y_t)].   (3) Here H(·, ·) denotes the cross entropy between the two terms and y_s is sampled from Y. L2 aims to assure that the adversarial generator G_attack generates realistic examples: L2 = E_{z∼p_z}[‖G_original(z, y_s) + ρ − G_attack(z, y_s)‖_p].   (4) Here ρ is a small uniform random noise constrained by both the ℓ0 and ℓ∞ norm. We add ρ to constrain G_attack(z, y_s) to lie in the neighborhood of G_original(z, y_s) rather than be exactly the same as G_original(z, y_s); a minimal sketch of one such transfer update is provided below.
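The following PyTorch-style sketch shows one transfer update combining L1 (equation 3) and L2 (equation 4). It assumes the combined objective is the weighted sum αL1 + βL2, with the hyper-parameters α and β introduced in the next paragraph; the generator/classifier interfaces, the target-label mapping, the ρ sampler and the batch settings are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def at_gan_transfer_step(G_attack, G_original, classifier, optimizer,
                         z_dim=100, num_classes=10, batch=64,
                         alpha=3.0, beta=2.0, rho_sampler=None):
    # One transfer step: L1 pushes f(G_attack(z, y_s)) toward the target label y_t,
    # L2 keeps G_attack(z, y_s) close to G_original(z, y_s) + rho (eqs. 3-4).
    # `optimizer` is assumed to update only the parameters of G_attack.
    z = torch.randn(batch, z_dim)
    y_s = torch.randint(0, num_classes, (batch,))
    y_t = (y_s + 1) % num_classes                 # illustrative fixed target mapping
    x_attack = G_attack(z, y_s)
    with torch.no_grad():
        x_ref = G_original(z, y_s)
        if rho_sampler is not None:
            x_ref = x_ref + rho_sampler(x_ref.shape)   # small l0/l_inf-bounded noise
    l1 = F.cross_entropy(classifier(x_attack), y_t)
    l2 = (x_ref - x_attack).flatten(1).norm(p=2, dim=1).mean()
    loss = alpha * l1 + beta * l2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```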
The objective function for transferring G_original to G_attack can be formulated as the weighted combination L = αL1 + βL2, where α and β are hyper-parameters that control the training process. Note that in the case where α = 1 and β → ∞, the objective function is similar to that of the perturbation-based attacks (Goodfellow et al., 2015; Tramèr et al., 2018; Madry et al., 2018). For the untargeted attack, we can replace y_t in L1 with the prediction label of maximum confidence other than y_s, i.e. max_{y≠y_s} f(y|G_attack(z, y_s)).

3.3 THEORETICAL ANALYSIS ON AT-GAN This subsection provides a theoretical analysis of why AT-GAN can generate non-constrained adversarial examples as realistic and diverse as real data. We will prove that, under an ideal condition, AT-GAN can estimate the distribution of adversarial examples, which is close to that of real data. Suppose p_data is the distribution of real data, and p_g and p_a are the distributions learned by the generator of AC-WGAN_GP and AT-GAN, respectively. For the optimization of equation 4, L2 aims to constrain the image generated by G_attack to the ε-neighborhood of G_original. We prove that, under the ideal condition that L2 guarantees G_attack(z, y_s) is close enough to G_original(z, y_s) for any input noise z, the distribution of AT-GAN almost coincides with the distribution of AC-WGAN_GP. Formally, we state our result for the two distributions as follows. Theorem 1. Suppose max_{z,y} L2 < ε. Then KL(p_a‖p_g) → 0 as ε → 0. The proof of Theorem 1 is in Appendix C. Samangouei et al. (2018) prove that the global optimum of WGAN is p_g = p_data, and we show that the optimum of AC-WGAN_GP has the same property. We formalize the property as follows. Theorem 2. The global minimum of the virtual training of AC-WGAN_GP is achieved if and only if p_g = p_data. The proof of Theorem 2 is in Appendix C. According to Theorems 1 and 2, under the ideal condition we conclude p_a ≈ p_g = p_data, which indicates that the distribution of non-constrained adversarial examples learned by AT-GAN is very close to that of real data, as discussed in Section 3.1, so that the non-constrained adversarial instances are as realistic and diverse as the real data.

4 EXPERIMENTS In this section, we provide two implementations of AT-GAN to validate the effectiveness and efficiency of the proposed approach. Empirical experiments demonstrate that AT-GAN yields higher attack success rates against adversarially trained models with higher efficiency. Besides, AT-GAN can learn a distribution of adversarial examples which is close to the real data distribution, and generate realistic and diverse adversarial examples.

4.1 EXPERIMENTAL SETUP Datasets. We consider four standard datasets, namely MNIST (LeCun et al., 1989), Fashion-MNIST (Xiao et al., 2017) and CelebA (Liu et al., 2015) for the AT-GAN implementation using AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), and the CIFAR-10 dataset (Krizhevsky et al., 2009) for the AT-GAN implementation using StyleGAN2-ada (StyleGAN2 with adaptive discriminator augmentation) (Karras et al., 2020a). MNIST is a dataset of handwritten digits from 0 to 9. Fashion-MNIST is similar to MNIST with 10 categories of fashion clothes. CelebA contains more than 200,000 celebrity faces. We group them according to female/male and focus on gender classification as in Song et al. (2018). CIFAR-10 consists of 32 × 32 color images in 10 classes, with 6,000 images per class. For all datasets, we normalize the pixel values into the range [0, 1]. Baselines.
We compare AT-GAN with the search-based attack methods, including Song’s (Song et al., 2018) for unrestricted adversarial examples, as well as FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018) and R+FGSM (Tramèr et al., 2018) for perturbation-based adversarial examples. Note that although the perturbation-based results are not directly comparable to ours as they are limited to small perturbations on real images, they can provide a good sense on the model robustness. Models. For MNIST and Fashion-MNIST, we adopt four models used in Tramèr et al. (2018), denoted as Model A to D. For CelebA, we consider three models, i.e. CNN, VGG16 (Simonyan & Zisserman, 2015) and ResNet (He et al., 2016). Details of Model A to D and CNN are described in Table 1. The ResNet is same as in Song et al. (2018). For CIFAR-10, we adopt the wide ResNet w32-10 (Zagoruyko & Komodakis, 2016). Details about the architectures of AT-GAN are provided in Appendix D. Evaluation Setup. We consider normal training and existing advanced defenses, namely adversarial training (Goodfellow et al., 2015), ensemble adversarial training (Tramèr et al., 2018) and iterative adversarial training (Madry et al., 2018). All experiments are conducted on a single Titan X GPU and the hyper-parameters used for attacks are described in Appendix D. 4.2 EVALUATION RESULTS For evaluation, we report the comparisons on attack success rate, attack efficiency and visualize some adversarial examples for AT-GAN and the baselines. More evaluation results on the transferability, ablation study, human evaluation, and the attack results on CIFAR-10, are provided in Appendix D. 4.2.1 COMPARISON ON ATTACK SUCCESS RATE To validate the attack effectiveness, we compare AT-GAN with the baselines under white-box setting. Since Athalye et al. (2018) show that the currently most effective defense method is adversarial training, we consider adversarially trained models as the defense models. The attack success rates are reported in Table 2. On MNIST, AT-GAN achieves the highest Attack Success Rate (ASR) against the baselines on all defense models. As for normal training, AT-GAN achieves the highest ASR on Model D, and the second highest ASR of over 98% on the other models. On Fashion-MNIST, AT-GAN achieves the highest ASR on average. On CelebA, AT-GAN achieves the highest ASR on almost all the models, with two exceptions under normal training but the results of AT-GAN are close to the highest. In general, AT-GAN achieves the highest attack performance above 90% on all the defense models. As AT-GAN aims to estimate the distribution of adversarial examples, adversarial training with some specific attacks has little robustness against AT-GAN, raising a new security issue for the development of more generalized adversarial training models. 4.2.2 COMPARISON ON ATTACK EFFICIENCY There are many scenarios where one needs a large amount of adversarial examples, such as adversarial training or exploring the property of adversarial examples. Thus, the efficiency of generating adversarial examples is very important, but such metric is ignored in most existing works. As an adversarial generative model, once trained, AT-GAN can generate adversarial examples very quickly. Here we evaluate the efficiency of each attack method for Model A on MNIST. The average time of generating/searching 1000 adversarial examples is summarized in Table 3. 
Among the five attack methods, AT-GAN is the fastest, as it can craft adversarial examples without querying the target classifier or computing gradients. Note that Song's method takes much longer than the others, as it needs multiple searches and queries to generate one adversarial example. It takes about 8 minutes to transfer the generator of AT-GAN. Here we focus only on the efficiency of generating adversarial examples after AT-GAN is transferred, i.e. once we have already found the generator G*, since in that case we can generate as many adversarial examples as we need.

4.2.3 VISUALIZATION ON ADVERSARIAL EXAMPLES Since the goal of adversarial examples is to fool the target neural networks but not to fool the human oracle, in Figure 3 we illustrate some adversarial examples generated by different attacks for Model A on MNIST and Fashion-MNIST, and for CNN on CelebA. On MNIST, AT-GAN generates slightly more realistic images than Song's, e.g. “0” and “3”. On Fashion-MNIST and CelebA, some adversarial examples generated by Song's method are not as realistic as those of AT-GAN to human perception, for example “t-shirt/top (0)”, “sandal (5)” and some facial details. Note that Song's method tends to distort the foreground, which makes the images on MNIST cleaner but leaves some images unrealistic, while AT-GAN tends to distort the background. As for the perturbation-based attacks, their adversarial examples are not clear enough, especially on MNIST and Fashion-MNIST, due to the adversarial perturbations. There are also some unnatural samples generated by AT-GAN due to the limitations of GANs, and we hope better generative models can address this issue. For the target attack, please see more examples crafted by AT-GAN in Appendix D. In general, AT-GAN can generate realistic and diverse adversarial examples, as equation 1 forces the generated non-constrained adversarial examples to be close to the benign examples generated by the original generator.

4.3 VISUALIZATION ON ADVERSARIAL DISTRIBUTION As discussed in Section 3.3, we provide a brief analysis that AT-GAN can learn a distribution of adversarial examples close to the distribution of real image data. To verify this empirically, we randomly choose 5,000 benign images and 5,000 adversarial examples generated by different attack methods, and merge these images according to their real label for MNIST and Fashion-MNIST. Then we apply t-SNE (Maaten & Hinton, 2008) to these images to illustrate the distributions in two dimensions (a minimal sketch of this procedure is given at the end of this section). t-SNE models each high-dimensional object in such a way that similar objects are modeled by nearby points and dissimilar objects by distant points with high probability. This implies that, if the adversarial examples have a different distribution from the benign data, t-SNE cannot handle them well and the points of different categories will overlap with each other after the dimension reduction, i.e. the result will be chaotic. The results are illustrated in Figure 4. For AT-GAN, the different categories are separated just as in the test set, while those of the other methods are mixed with each other, especially on MNIST (top). This indicates that the distribution AT-GAN learned is indeed very close to the distribution of real data. To further validate that AT-GAN learns a different distribution from the original GAN, rather than just adding some constant universal perturbation vector, we illustrate in Appendix E some instances generated by the original generator and AT-GAN for the same input.
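Below is a minimal sketch of the t-SNE visualization procedure described in Section 4.3. It assumes scikit-learn and matplotlib and uses randomly generated stand-in images; the actual image sets and plotting details in the paper may differ.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-ins for 5,000 benign and 5,000 adversarial 28x28 images with true labels.
rng = np.random.default_rng(0)
benign = rng.random((5000, 784))
adv = rng.random((5000, 784))
labels = np.concatenate([rng.integers(0, 10, 5000), rng.integers(0, 10, 5000)])

merged = np.concatenate([benign, adv], axis=0)
emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(merged)

plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap="tab10")
plt.title("t-SNE of merged benign and adversarial examples (by true label)")
plt.savefig("tsne_adv_distribution.png", dpi=150)
```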
1. What is the main contribution of the paper regarding generative neural networks and adversarial examples? 2. What are the strengths and weaknesses of the proposed approach compared to existing methods such as AdvGAN and Song's attack procedure? 3. How does the reviewer assess the training aspects and generating capability of the proposed method? 4. What are the limitations of the proposed method in terms of attack transferability and required adversarial examples? 5. What additional information would the reviewer like to see in the experimental results, such as the source of adversarial examples, training time, and generating failure ratio?
Review
Review This paper aims to train a generative neural network that can output adversarial examples. The main idea is to first train a normal GAN and then use the idea of transfer learning based on adversarial examples. The aim sounds good, but the authors fail to clearly distinguish the idea from the existing related methods, theoretically or numerically. The idea of transferring is good (although not new), but after checking the implementation details, I have to say that in the current version the effect of transferring is quite limited. Details: The idea of generating adversarial examples by a trained GAN is interesting. The writing is quite clear. There is a lack of comparison with existing related methods. Consider the core formulation, namely (2), which describes the authors' idea well. But it is necessary to consider the following ideas: 1) generating adversarial perturbations (AdvGAN, AI-GAN): min_G |G(z, y)|_p, s.t. f(z + G(z, y)) = y_t ≠ y_s. This trains the difference between G_original and G_attack, and I think that in terms of training this is almost equivalent to the proposed idea. The authors try to argue that the proposed model does not require an input. But in my opinion, requiring no input is a disadvantage: if only adversarial examples are needed, AdvGAN etc. can feed a random input to the original GAN and then add perturbations; but if one wants to attack a specific image, the proposed method will fail. 2) attacking a GAN to generate adversarial examples (Song's): min_{z'} |z' − z|, s.t. f(G(z', y)) ≠ f(G(z, y)). The authors may argue that Song's attack procedure takes a longer time. However, it requires no additional training time. Moreover, I suspect the generation capability of Song's idea, which relies on an existing GAN (and there are many well-designed ones), is better than that of the proposed one. I would like to see the generation performance of the proposed method on more complicated datasets, e.g., on CIFAR or other HIGH-RESOLUTION images. Another good point of Song's idea is that almost all attacks on images could be used in parallel; I do not know whether its ASR could be easily improved. The idea of transferring the original GAN to the attacking one is interesting. However, except for using the original GAN as the starting point, I cannot find other aspects of "transferring". I would like to know whether transfer learning techniques could be used to reduce the number of required adversarial examples. The attack transferability has not been tested. Since adversarial samples are involved, the obtained GAN is expected to be related to the victim model. Additional questions, mainly about the experimental results: It is good that attack performance on adversarially trained NNs is included. But where do the adversarial examples come from? Are the examples generated by AT-GAN? How many examples and how much time are needed to train AT-GAN? Since the GAN has been changed, the generation capability, i.e., the generation failure ratio of AT-GAN, should be reported.
ICLR
Title i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable? Abstract Masked image modeling (MIM) has been recognized as a strong and popular self-supervised pre-training approach in the vision domain. However, the interpretability of the mechanism and the properties of the representations learned by such a scheme are so far not well explored. In this work, through comprehensive experiments and empirical studies on Masked Autoencoders (MAE), we address two critical questions to explore the behaviors of the learned representations: (i) Are the latent representations in Masked Autoencoders linearly separable if the input is a mixture of two images instead of one? This can provide concrete evidence to explain why MAE-learned representations perform so well on downstream tasks, as impressively demonstrated in much of the literature. (ii) What is the degree of semantics encoded in the latent feature space by Masked Autoencoders? To explore these two problems, we propose a simple yet effective Interpretable MAE (i-MAE) framework with two-way image reconstruction and latent feature reconstruction with a distillation loss, to help us understand the behaviors inside the MAE structure. Extensive experiments are conducted on the CIFAR-10/100, Tiny-ImageNet and ImageNet-1K datasets to verify the observations we discovered. Furthermore, in addition to qualitatively analyzing the characteristics of the latent representations, we also examine the existence of linear separability and the degree of semantics in the latent space by proposing two novel metrics. The consistent results between the qualitative and quantitative experiments demonstrate that i-MAE is a superior framework design for interpretability research on MAE frameworks, as well as achieving better representational ability. 1 INTRODUCTION Self-supervised learning aims to learn representations from abundant unlabeled data that benefit various downstream tasks. Recently, many self-supervised approaches have been proposed in the vision domain, such as pretext-based methods (Doersch et al., 2015; Zhang et al., 2016; Gidaris et al., 2018), contrastive learning with Siamese networks (Oord et al., 2018; He et al., 2020; Chen et al., 2020; Henaff, 2020), masked image modeling (MIM) (He et al., 2022; Bao et al., 2022; Xie et al., 2022), etc. Among them, MIM has shown a clear advantage in performance, and the representative method Masked Autoencoders (MAE) (He et al., 2022) has attracted much attention in the field. A natural question is raised: Where does the benefit of transferability to downstream tasks come from in MAE-based training? This motivates us to develop a framework to shed light on the reasons for the superior latent representations from MAE. Also, as the interpretability of the MAE framework is still under-studied, it is crucial to explore it in a specific and exhaustive way. Intuitively, a good representation should be separable and contain enough semantics from the input, so that it can distinguish different classes and achieve better performance on downstream tasks. However, how to evaluate the separability and the degree of semantics of latent features has so far been unclear. Moreover, the autoencoder mechanism of compressing information from the input by reconstructing the input itself has been a strong self-supervised learning architecture, but an explanation of the features learned in this way is still under-explored.
To address the difficulties of identifying separability and semantics in the latent features, we first propose a novel framework, i-MAE, built upon the vanilla MAE. It consists of a mixture-based masked autoencoder branch that disentangles the mixed representations by linearly separating two different instances, and a pre-trained vanilla MAE used as guidance to distill the disentangled representations. An illustration of the overall framework architecture is shown in Fig. 2. This framework is designed to answer two interesting questions: (i) Are the latent representations in Masked Autoencoders linearly separable? (ii) What is the degree of semantics encoded in the latent feature space by Masked Autoencoders? These two questions can reveal why MAE-learned features are good at separating different classes. We attribute the superior representation of MAE to its learning of separable features with enough semantics for downstream tasks. In addition to qualitative studies, we also develop two metrics to address the two questions quantitatively. In the first metric, we employ the ℓ2 distance in the high-dimensional Euclidean space to measure the similarity between i-MAE's disentangled feature and the "ground-truth" feature from a pre-trained MAE on the same image. In the second metric, we control the ratio of same-class mixtures within a mini-batch and evaluate the finetuning and linear probing results of the model to reflect the learned semantic information. More details are provided in Section 3. We conduct extensive experiments on datasets of different scales: small CIFAR-10/100, medium Tiny-ImageNet and large ImageNet-1K, to verify the linear separability and the degree of semantics in the latent representations. We also provide both qualitative and quantitative results to explain our observations and discoveries. The characteristics we observed in the latent representations with our proposed i-MAE framework are: (I) i-MAE's learned feature representation has good linear separability for the input data, which is beneficial for downstream tasks. (II) Though the training scheme of MAE is different from the instance-classification pretext in contrastive learning, its representation still encodes sufficient semantic information from the input data. Moreover, mixing same-class images as the input substantially improves the quality of the learned features. (III) We can reconstruct an image from a mixture by i-MAE effortlessly. To the best of our knowledge, this is the first study to explicitly explore the separability and semantics inside MAE's features with extensive, well-designed qualitative and quantitative experiments. Our contributions in this work are: • We propose an i-MAE framework with two-way image reconstruction and latent feature reconstruction by a distillation loss, to explore the interpretability of the mechanisms and properties inside the learned representations of the MAE framework. • We introduce two metrics to examine the linear separability and the degree of semantics quantitatively on the learned latent representations. • We conduct extensive experiments on datasets of different scales: CIFAR-10/100, Tiny-ImageNet and ImageNet-1K, and provide sufficient qualitative and quantitative results. 2 RELATED WORK Masked image modeling. Motivated by masked language modeling's success in language tasks (Devlin et al., 2018; Radford & Narasimhan, 2018), Masked Image Modeling (MIM) in the vision domain learns representations from images corrupted by masking.
State-of-the-art results on downstream tasks are achieved by several approaches. BEiT (Bao et al., 2022) proposes to recover discrete visual tokens, whereas SimMIM (Xie et al., 2022) addresses the MIM task as a pixel-level reconstruction. In this work, we focus on MAE (He et al., 2022), which proposes to use a high masking ratio and a non-arbitrary ViT decoder. Despite the great popularity of MIM approaches and their conceptual similarity to language modeling, the question of why they work has not been addressed in the visual domain. Moreover, as revealed by MAE, pixels are semantically sparse, and we newly examine semantic-level information quantitatively. Image mixtures. Widely adopted mixture methods in visual supervised learning include Mixup (Zhang et al., 2017) and CutMix (Yun et al., 2019). However, these methods require ground-truth labels for calculating mixed labels; in this work, we adapt Mixup to our unsupervised framework by formulating losses on only one of the two input images. On the other hand, in very recent visual SSL, joint embedding methods and contrastive learning approaches such as MoCo (He et al., 2020), SimCLR (Chen et al., 2020), and more recently UnMix (Shen et al., 2022), have achieved success and predominance in mixing visual inputs. These approaches promote instance discrimination by aligning features of augmented views of the same image. However, unlike joint embedding methods, i-MAE does not heavily rely on data augmentation and negative sampling. Moreover, whereas most MIM methods are generative tasks, i-MAE also utilizes characteristics of discriminative tasks in learning linearly separable representations. Invariance and disentangling representation learning in Autoencoders. Representation learning focuses on the properties of the features learned by the layers of deep models while remaining agnostic to the particular optimization process. Variance and entanglement are two commonly discussed factors of the data distribution in representation learning. In this work, we focus on latent disentanglement, i.e., whether one feature is correlated or connected to other vectors in the latent space. The autoencoder is a classical generative unsupervised representation learning framework that uses image reconstruction as the loss function. Specifically, autoencoders learn both the mapping of inputs to latent features and the reconstruction of the original input. Denoising autoencoders reconstruct the original input from a corrupted input, and most MIM methods can be categorized as denoising autoencoders that use masking as the noise type. We note that recent work in the literature (He et al., 2022; Bao et al., 2022) performs many experiments on masking strategies, but to the best of our knowledge, we are the first to introduce image mixtures into the pre-training of MIM. 3 I-MAE In this section, we first give an overview of our proposed framework. Then, we present each component in detail. Next, we elaborate on the metrics we propose for evaluating linear separability and the degree of semantics, and broadly discuss the observations and discoveries. 3.1 FRAMEWORK OVERVIEW As shown in Fig. 2, our framework consists of three submodules: (i) a mixture encoder module that takes the masked mixture image as input and outputs mixed features; (ii) a disentanglement module that splits the mixed feature into the individual ones; (iii) an MAE teacher module that provides the pre-trained embedding for guiding the splitting process in the disentanglement module.
3.1.1 COMPONENTS Input Mixture with MAE Encoder. Inspired by Mixup, we use an unsupervised mixture of inputs formulated from α ∗ I_1 and (1 − α) ∗ I_2, where I_1, I_2 are the input images. Essentially, our encoder extrapolates mixed features from a tiny fraction (e.g., 25%) of visible patches, which we then tune to represent only the subordinate image. The mixed image is: I_m = α · I_1 + (1 − α) · I_2 (1) where α is the coefficient used to mix the two images, sampled from a Beta distribution. Two-branch Masked Autoencoders with Shared Decoder. Although sufficient semantic information of both images is embedded in the mixed representation to reconstruct both images, the vanilla MAE cannot by itself associate separated features with either input. The MAE structure does not retain identification information of the two mixed inputs (e.g., order or positional information), i.e., the model cannot tell which of the two images to reconstruct, since both are sampled from the same distribution and mixed randomly. The consequence is that both reconstructions look identical to each other and fail to look similar to either original input. Similar to how positional embeddings are needed to explicitly encode spatial information, i-MAE implicitly encodes the semantic difference between the two inputs by using a dominant-subordinate mixture strategy. Concretely, through an unbalanced mix ratio and a reconstruction loss targeting only one of the inputs, our framework encodes sufficient information for i-MAE to linearly map the input mixture to two outputs. Two-way Image Reconstruction Loss. Formally, we build our reconstruction loss to recover individual images from a mixed input, which is first fed into the encoder to generate mixed features: h_m = E_i-MAE(I_m) (2) where E_i-MAE is i-MAE's encoder and h_m is the latent mixed representation. Then, we employ two non-shared linear embedding layers to separate the mixed representation into the individual ones: h_1 = f_1(h_m), h_2 = f_2(h_m) (3) where f_1, f_2 are two linear layers with different parameters for disentanglement, and h_1 and h_2 are the corresponding representations. After that, we feed the individual representations into the shared decoder with the corresponding reconstruction losses: L^{I_1}_recon = E_{I_1 ∼ p(I_1)} [ ‖D_shared(h_1) − I_1‖_2 ], L^{I_2}_recon = E_{I_2 ∼ p(I_2)} [ ‖D_shared(h_2) − I_2‖_2 ] (4) In practice, we train the linear separation layers to distinguish between the dominant input I_d (higher mix ratio) and the subordinate input I_s (lower ratio). To show that our encoder learns to embed representations of both images, we intentionally choose to reconstruct only the subordinate image I_s, preventing I_d from guiding the reconstruction. Essentially, successful reconstructions from only I_s prove that representations of both images can be learned and that the subordinate image is not filtered out as noise. Patch-wise Distillation Loss for Latent Reconstruction. With the linear separation layers and an unbalanced mixture, the i-MAE encoder is presented with sufficient information about both images to perform visual reconstructions; however, information is inevitably lost during the mixture process, harming the value of the learned features in downstream tasks such as classification. To mitigate this effect, we propose a knowledge distillation module, both to enhance the quality of the learned features and because a successful distillation provides clear evidence of the linear separability of our features.
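Before turning to the distillation loss in detail, a minimal PyTorch sketch of Eqs. (1)-(4) is given below: the mixture, the two disentanglement layers and the shared-decoder reconstruction losses. The flat linear modules only stand in for i-MAE's ViT encoder/decoder, patch masking is omitted, and all names are illustrative placeholders rather than the exact implementation.

import torch
import torch.nn as nn

# toy stand-ins for E_i-MAE, f_1, f_2 and D_shared (a real model uses ViT blocks and masking)
encoder = nn.Linear(3 * 32 * 32, 512)
f1, f2 = nn.Linear(512, 512), nn.Linear(512, 512)
decoder = nn.Linear(512, 3 * 32 * 32)

def two_way_reconstruction(I1, I2, alpha):
    Im = alpha * I1 + (1 - alpha) * I2        # Eq. (1): unsupervised input mixture
    hm = encoder(Im.flatten(1))               # Eq. (2): mixed latent representation
    h1, h2 = f1(hm), f2(hm)                   # Eq. (3): linear disentanglement layers
    rec1 = decoder(h1).view_as(I1)
    rec2 = decoder(h2).view_as(I2)
    loss1 = ((rec1 - I1) ** 2).mean()         # Eq. (4), branch for I_1
    loss2 = ((rec2 - I2) ** 2).mean()         # Eq. (4), branch for I_2
    return loss1, loss2                       # in practice only the subordinate branch is trained on

I1, I2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
loss1, loss2 = two_way_reconstruction(I1, I2, alpha=0.35)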
Intuitively, MAE's feature can be regarded as the "ground-truth", and i-MAE learns features distilled from the original MAE. Specifically, our loss function computes the ℓ2 loss between the disentangled representations and the original representations to help our encoder learn useful features of both inputs. Our patch-wise latent reconstruction loss can be formulated as: L^{h_1}_recon = E_{h_1 ∼ q(h_1)} [ ‖E_p-MAE(I_1) − h_1‖_2 ], L^{h_2}_recon = E_{h_2 ∼ q(h_2)} [ ‖E_p-MAE(I_2) − h_2‖_2 ] (5) where E_p-MAE is the pre-trained MAE encoder. 3.2 LINEAR SEPARABILITY For i-MAE to reconstruct the subordinate image from a linear mixture, not only does the encoder have to be general enough to retain information of both inputs, but it must also generate embeddings that are specific enough for the decoder to distinguish them into their pixel-level forms. A straightforward interpretation of how i-MAE fulfills both conditions is that the latent mixture h_m is a linear combination of features that closely relate to h_1 and h_2, e.g., in a linear relationship. Our distillation module mitigates the information loss. To verify this explanation, we employ a linear separability metric to experimentally observe such behavior. Metric of Linear Separability. A core contribution of our i-MAE is the quantitative analysis of features. In general, linear separability is a property of two sets of features that can be separated into their respective sets by a hyperplane. In our case, the sets of latent representations H_1 and H_2 are linearly separable if there exist n + 1 real numbers w_1, w_2, ..., w_n, b, such that every h ∈ H_1 satisfies ∑_i w_i h_i > b and every h ∈ H_2 satisfies ∑_i w_i h_i < b. It is common practice to train a classical linear classifier (e.g., an SVM) and evaluate whether two sets of data are linearly separable. However, to quantitatively measure the separation of latent representations, we devised a more intuitive yet effective metric. Our metric computes the Mean Squared Error (MSE) distance between the disentangled feature of the subordinate image I_s and the vanilla MAE feature of the single input I_s. Since the disentangled feature without constraints is unlikely to resemble the vanilla feature, we utilize a linear layer to transform the disentangled feature space to the vanilla feature space. Note that this is similar to knowledge distillation, but happens after the pre-training process without finetuning the parameters, and it conceptually measures the distance between the two latent representations; thus the linear transformation is not needed for i-MAE with distillation. The detailed formulation of the metric is: M_ls = (1/N) ∑_{n=1}^{N} ‖h^n_s − f_θ(I^n_s)‖_2^2 (6) where N is the total number of samples, f_θ is the encoder of the vanilla MAE, and I_s is the subordinate image with I_s ∈ {I_1, I_2}. 3.3 SEMANTICS Metric of Semantics. Vanilla MAE exhibits strong signs of semantic understanding (He et al., 2022). However, studying the abstract concept of semantics in the visual domain is difficult due to its semantic sparsity and repetitiveness. To address this problem, we propose a metric unique to i-MAE that is readily available for examining the degree of semantics learned by the model. Aside from straightforwardly evaluating classification accuracy to measure the quality of the latent representation, i-MAE utilizes the mixing of semantically similar instances to determine to what degree the disentangled latent representations can reflect image-level meaning.
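Before moving on to the semantics metric, the separability metric M_ls of Eq. (6) can be computed as sketched below: a small linear map is fitted post hoc from i-MAE's disentangled subordinate features to the frozen vanilla-MAE features of the same images, and the resulting MSE is reported. The function name and the fitting budget are illustrative assumptions, not the authors' exact procedure.

import torch
import torch.nn as nn

def linear_separability_metric(h_sub, h_vanilla, steps=200, lr=1e-2):
    # h_sub: disentangled features of the subordinate images from i-MAE, shape (N, d)
    # h_vanilla: f_theta(I_s), features of the same images from the frozen vanilla MAE, shape (N, d')
    proj = nn.Linear(h_sub.shape[1], h_vanilla.shape[1])  # post-hoc linear transform (not needed with distillation)
    opt = torch.optim.SGD(proj.parameters(), lr=lr)
    for _ in range(steps):
        loss = ((proj(h_sub) - h_vanilla) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((proj(h_sub) - h_vanilla) ** 2).mean().item()  # M_ls, Eq. (6)

score = linear_separability_metric(torch.randn(256, 512), torch.randn(256, 512))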
Naturally, separating different instances from the same class is a more difficult task than classification between different classes: intra-class separation requires the understanding of high-level visual concepts, i.e., semantic differences, rather than lower-level patterns, i.e., shape or color. While data transformations (Olah et al., 2017) can generally help mitigate overfitting, our semantic disentanglement module is likewise a form of data augmentation that introduces significantly more mixtures of the same class into the training process. We find that this semantics-controllable mixture scheme boosts the semantics of the learned features. Specifically, we choose training instances from the same or different classes following different distributions to constitute an input mixture, so as to examine the quality of the learned features as follows: p = f_m(I_{c_a} + I_{c_b}) (7) where f_m is the backbone network for the mixture input and p is the corresponding prediction. I_{c_a} and I_{c_b} are the input samples, and for a certain percentage r of the pairs the classes c_a and c_b are the same. For instance, r = 0.1 indicates that 10% of the images in a mini-batch are mixed with an image of the same class. When r = 1.0, all training images are mixed with another image from the same class, which can be regarded as a semantically enhanced augmentation. During training, r is fixed for each individual model, and we study the degree of semantics that the model encodes by changing the percentage value r. After the model is trained by i-MAE using such input data, we finetune the model with the Mixup strategy (for both the baseline and our models) and a cross-entropy loss. We use the accuracy as the metric of semantics under this percentage of instance mixture: M_sem = − ∑_{i=1}^{n} t_i log(p_i) (8) where t_i is the ground-truth. The insight behind this is that if the input mixture is composed of two images or instances with the same semantics (i.e., the same category), it will confuse the model during training and i-MAE will struggle to disentangle them. Thus, the encoded information/semantics may be weakened during training, and this can be reflected in the quality of the learned representation. It is interesting to see whether this conjecture is supported by the empirical results. We use the representation quality, measured through finetuned accuracy, to monitor the degree of semantics with this semantics-controllable mixture scheme. 4 EMPIRICAL RESULTS AND ANALYSIS In this section, we analyze the properties of i-MAE's disentangled representations on an extensive range of datasets. First, we describe the datasets used and our implementation details. Then, we thoroughly ablate our experiments, focusing on the properties of linear separation and the semantics-controllable mixture. Lastly, we give the final evaluation of our results. 4.1 DATASETS AND TRAINING IMPLEMENTATION FOR BASELINE AND I-MAE. Settings: We perform empirical experiments of i-MAE on CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K. On CIFAR-10/100, we pre-train i-MAE in an unsupervised manner and adjust MAE's structure to better fit the smaller datasets: ViT-Tiny (Touvron et al., 2021) as the encoder and a lite version of ViT-Tiny (4 layers) as the decoder. Our pre-training lasts 2,000 epochs with a learning rate of 1.5 × 10−4 and 200 warm-up epochs. On Tiny-ImageNet, i-MAE's encoder is ViT-Small and its decoder is ViT-Tiny, trained for 1,000 epochs with a learning rate of 1.5 × 10−4. Additionally, we apply warm-up for the first 100 epochs and use cosine learning rate decay with the AdamW optimizer, as in MAE.
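Returning to the semantics-controllable mixture of Eq. (7) used during pre-training, one plausible mini-batch implementation is sketched below: a fraction r of the samples are paired with a partner drawn from the same class and the rest with a random partner. The function name and the fixed α are assumptions for illustration, not the authors' exact dataloader.

import torch

def semantics_controllable_mix(images, labels, r=0.5, alpha=0.35):
    # Pair every sample with a partner; a fraction r of the partners share the sample's class.
    B = images.size(0)
    partner = torch.randperm(B)            # default: random (mostly inter-class) partners
    intra = torch.rand(B) < r              # which samples receive an intra-class partner
    for i in torch.nonzero(intra).flatten().tolist():
        same = torch.nonzero(labels == labels[i]).flatten()
        partner[i] = same[torch.randint(len(same), (1,)).item()]
    mixed = alpha * images + (1 - alpha) * images[partner]
    return mixed, partner

images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))
mixed, partner = semantics_controllable_mix(images, labels, r=0.5)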
Supervised Finetuning: In the finetuning process, we apply Mixup for all experiments to match our pre-training scheme, and compare our results with baselines under the same configuration. On CIFAR-10/100, we finetune for 100 epochs with a learning rate of 1.5 × 10−3 and the AdamW optimizer. Linear Probing: For linear evaluation, we follow MAE (He et al., 2022) to train with no extra augmentations and use zero weight decay. We also adopt an additional BatchNorm layer without affine transformation. 4.2 ABLATION STUDY In this section, we perform ablation studies on i-MAE to demonstrate the invariant property of linear separability and to what extent i-MAE can separate features. Then, we analyze the effect of semantic-level instance mixing on the quality of i-MAE's learned representations. 4.2.1 ABLATION FOR LINEAR SEPARABILITY To begin, we thoroughly ablate our experiments on small-scale datasets and demonstrate how i-MAE's learned features display linear separability. Specifically, we experiment with the separability of the following aspects of our method: (i) constant or probabilistic mix factor; (ii) masking ratio of the input mixtures; (iii) different ViT architectures. Unless otherwise stated, the default settings used in our ablation experiments are ViT-Tiny, a masking ratio of 75%, a fixed mixing ratio of 35%, and reconstructing only the subordinate image for a harder task. Mix Ratio. To demonstrate the separable nature of input mixtures, we compare different fixed mixture ratios and random mixture ratios drawn from a Beta distribution. Intuitively, low mixing ratios contain less information that is easily confused with noise, whereas higher ratios destroy the subordinate-dominant relationship. Experimentally, we observe matching results, shown in the Appendix (Fig. 10 and Fig. 1). The better separation performance around the 0.3 range indicates that i-MAE features are better dichotomized when balanced between noise and useful information. Whereas below 0.15 the subordinate image is noisy and reconstructions are not interpretable, mixing ratios above 0.45 break the balance between the two images, and the two features cannot be distinguished. Moreover, notice that at 0.45 the reconstruction patches turn green and resemble the pepper. Mask Ratio. In i-MAE, visible information of the subordinate image is inherently limited due to the unbalanced mix ratio in addition to masking. Therefore, a high masking ratio (75% (He et al., 2022)) may not be necessary to suppress the amount of information the encoder sees, so we try ratios of 50% and 60% to introduce more information about the subordinate target. As shown in Fig. 3, a lower masking ratio can improve the reconstruction quality. Combining our findings on mix and mask ratios, we empirically find that i-MAE can compensate for the information loss at low mix ratios with more visible patches (a lower mask ratio). As illustrated in Fig. 1, we display a case where i-MAE qualitatively succeeds in separating the features of an α = 0.1 mix and a 0.5 masking ratio. Our core finding in the separability ablation is that i-MAE can learn linearly separable features under two conditions: (i) enough information about both images must be present (this can be alleviated by lowering the mask ratio); (ii) the image-level distinction between minority and majority (determined by the mix ratio) must be clear enough. ViT Backbone Architecture. We study the effect of different ViT scales on linear separation in Appendix Fig.
5, and find that larger backbones are not necessary for small datasets with i-MAE, although they are crucial on the large-scale ImageNet-1K. 4.2.2 ABLATION FOR DEGREE OF SEMANTICS Semantic Mixes. Depending on the number of classes and overall size, pristine datasets usually contain from around 10% (e.g., CIFAR-10) down to less than 1% (e.g., ImageNet-1K) samples of the same class. By default, uniformly random sampling of mixture pairs yields same-class mixtures with this likelihood. However, in the semantics-controllable mixture scheme, we test whether the introduction of semantically homogeneous mixtures, in different amounts, affects the classification performance. We intentionally test whether similar instances during pre-training can negatively affect the classification performance. As shown in Tab. 1, after i-MAE pre-training, we perform finetuning and linear probing on classification tasks to evaluate the degree of semantics learned under different amounts of intra-class mixing r. From Tab. 1, we discover that i-MAE overall has stronger performance in finetuning and linear probing with a non-zero same-class ratio. Specifically, a high r increases the accuracy in linear evaluation the most on all datasets, meaning that the learned features are of the best quality and well separated. On the other hand, setting r = 0.5 is advantageous during finetuning, as it gains a balanced prior for separating both intra- and inter-class mixtures. 4.3 RESULTS OF FINAL EVALUATION In this section, we provide a summary of our main findings: how separable i-MAE's embedded features are, and how much semantics is embedded in the mixed representations. Then, we evaluate the quality of our features with classification and analyze the features. 4.3.1 SEPARABILITY In this section, we show how i-MAE displays properties of linear separability, visually and quantitatively, and demonstrate our advantage over the baseline (vanilla MAE). In a visual comparison of disentanglement capability, shown in Fig. 4, the vanilla MAE does not perform well out of the box. In fact, its reconstructions represent the mixed input more than the subordinate image. Since the mixture input of i-MAE is a linear combination of the two images and our results show i-MAE's potent ability to reconstruct both images even at very low mixture ratios, we attribute this ability to i-MAE's disentangled features correlating strongly with vanilla MAE's features. As aforementioned, we gave the formal definition of linear separability; we now empirically illustrate the strength of the linear relationship between MAE's features and i-MAE's disentangled features with a linear regressor. We employ the ℓ2 distance as our criterion, and the results are reported in Tab. 2. Experimentally, we feed mixed inputs to i-MAE and the single image to the target model (vanilla MAE). "Before" indicates that we directly calculate the distance between the "ground-truth" features from the pre-trained MAE and our disentangled features; "After" indicates that we train the linear regressor's parameters to fit the "ground-truth". "Baseline" is the model trained without the disentanglement module. It can be observed that our i-MAE has a significantly smaller distance than the vanilla model, reflecting that such a scheme obtains better separability. 4.3.2 SEMANTICS Finetune and Linear Evaluation. We evaluate i-MAE's performance with finetuning and linear evaluation on regular inputs and targets. For all approaches, we use Mixup as augmentation in the finetuning phase and no extra augmentations for linear evaluation.
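As a side note on the linear evaluation protocol mentioned above (an extra BatchNorm layer without affine transformation before the linear classifier, following MAE), a minimal sketch of such a probing head is given below; the feature dimension and class count are placeholders.

import torch.nn as nn

# frozen-feature linear probe: BatchNorm without affine parameters, then a linear classifier
probe_head = nn.Sequential(
    nn.BatchNorm1d(512, affine=False),  # normalizes the frozen features, no learnable scale/shift
    nn.Linear(512, 100),                # e.g. 100 classes for CIFAR-100
)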
Classification performance is outlined in Tab. 3 and Tab. 4. As our features are learned from a harder scenario, they encode more information, yielding a more robust representation and higher classification accuracy. Besides, i-MAE shows a considerable performance boost with both evaluation methods. Analysis. We emphasize that our enhanced performance comes from i-MAE's ability to learn more separable features with the disentanglement module, and from the enhanced semantics learned from training with the semantics-controllable mixture. Our classification results show that it is crucial for MAE to learn features that are linearly separable, which helps distinguish between different classes. However, to correctly associate features with their corresponding classes, semantically rich features are needed, and they can be enhanced by the intra-class mixing strategy. 5 CONCLUSION It is non-trivial to understand why Masked Image Modeling (MIM) in the self-supervised scheme can learn useful representations for downstream tasks without labels. In this work, we have introduced a novel interpretable framework upon Masked Autoencoders (i-MAE) to explore two critical properties of latent features: linear separability and degree of semantics. We identified these two properties as the core of superior latent representations and revealed where the good transferability of MAE comes from. Moreover, we proposed two metrics to evaluate these two properties quantitatively. Extensive experiments are conducted on the CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K datasets to demonstrate our discoveries and observations in this work. We also provided sufficient qualitative results and analyses of different hyperparameters. We hope this work can inspire more studies on the interpretability of MIM frameworks in the future. A DATASETS CIFAR-10/100 (Krizhevsky, 2009) Both CIFAR datasets contain 60,000 tiny colored images sized 32×32. CIFAR-10 and CIFAR-100 are split into 10 and 100 classes, respectively. Tiny-ImageNet Tiny-ImageNet is a scaled-down version of the standard ImageNet-1K consisting of 100,000 64×64 colored images, categorized into 200 classes. ImageNet-1K (Deng et al., 2009) The ILSVRC 2012 ImageNet-1K classification dataset consists of 1.28 million training images and 50,000 validation images from 1,000 classes. B IMPLEMENTATION DETAILS IN SELF-SUPERVISED PRE-TRAINING, FINETUNING, AND LINEAR EVALUATION ViT architecture. For our non-ImageNet datasets, we adopt smaller ViT backbones that generally follow (Touvron et al., 2021). The central implementation of linear separation happens between the MAE encoder and decoder, with a linear projection layer for each branch of reconstruction. A shared decoder is used to reconstruct both images. A qualitative evaluation of different ViT sizes on Tiny-ImageNet is displayed in Fig. 5; the perceptual difference is not large and, generally, ViT-Small/Tiny are sufficient for non-ImageNet datasets. Pre-training. The default settings for pre-training are listed in Tab. 5. On ImageNet-1K, we strictly use MAE's specifications. For better classification performance, we use normalized pixels (He et al., 2022) and a high masking ratio (0.75); for better visual reconstructions, we use a lower masking ratio (0.5) without normalizing the target pixels. On CIFAR-10/100 and Tiny-ImageNet, we reconstruct ordinary pixels. Semantics-controllable mixture The default settings for our semantics-controllable mixtures are listed in Tab. 6.
We modified the dataloader to mix, within a mini-batch, a fraction r of samples with a partner of the same class, and a fraction 1 − r with a partner of a different class. Classification For the classification task, we provide the detailed settings of our finetuning process in Tab. 7 and of the linear evaluation process in Tab. 8. C VISUALIZATION We provide extra examples of a single-branch trained i-MAE reconstructing the subordinate image. Fig. 10 shows visualizations on CIFAR-100 at mix ratios from 0.1 to 0.45, in steps of 0.05. As shown in Fig. 6 and Fig. 7, we produce finer ranges of reconstructions from 0.05 to 0.45. Notice that in most cases, mixture rates above 0.4 tend to show features of the dominant image. This observation demonstrates that a low mixture rate can better embed the important information separating the subordinate image. D PYTORCH (PASZKE ET AL., 2019) STYLED PSEUDOCODE The pseudocode of our mixture and subordinate reconstruction approach is shown in Algorithm 1. This is only a simple demonstration of our most basic framework without distillation losses.
Algorithm 1: PyTorch-style pseudocode for a single subordinate reconstruction in i-MAE.
# alpha: mixture ratio
# args.beta: hyperparameter for the Beta distribution (args.beta = 1.0)
for x in loader:  # minibatch x of N samples
    alpha = np.random.beta(args.beta, args.beta)
    perm = torch.randperm(batch_size)  # inner-batch mix
    im_1, im_2 = x, x[perm, :]
    mixed_images = alpha * im_1 + (1 - alpha) * im_2
    # identify the subordinate (target) image, i.e. the one with the smaller mixing weight
    im_sub = im_1 if alpha < 1 - alpha else im_2
    # subordinate loss
    loss_sub = loss_fn(model(mixed_images), im_sub)
    # update gradients
    optimizer.zero_grad()
    loss_sub.backward()
    optimizer.step()
In our full-fledged i-MAE, we employ two additional distillation losses, an additional linear separation branch, and the semantics-controllable mixture scheme; nonetheless, the key implementation remains the same as the pseudocode presented here.
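For completeness, the sketch below indicates how the patch-wise distillation terms of Eq. (5) could be added to the loop of Algorithm 1. It reuses the undefined names from Algorithm 1 (loader, args, batch_size, loss_fn, optimizer), and the head-to-image assignment (f1 for the subordinate image), the frozen teacher, and the unit loss weighting are assumptions for illustration, not the exact released code.

# extends Algorithm 1 with the distillation losses of Eq. (5); teacher is a frozen pre-trained MAE
for x in loader:
    alpha = np.random.beta(args.beta, args.beta)
    alpha = min(alpha, 1 - alpha)                  # make im_1 the subordinate image
    perm = torch.randperm(batch_size)
    im_1, im_2 = x, x[perm, :]
    mixed_images = alpha * im_1 + (1 - alpha) * im_2

    h_m = model.encoder(mixed_images)              # mixed latent, Eq. (2)
    h_1, h_2 = model.f1(h_m), model.f2(h_m)        # disentanglement layers, Eq. (3)

    loss_rec = loss_fn(model.decoder(h_1), im_1)   # subordinate reconstruction, Eq. (4)
    with torch.no_grad():
        t_1, t_2 = teacher.encoder(im_1), teacher.encoder(im_2)  # E_p-MAE targets
    loss_dist = ((h_1 - t_1) ** 2).mean() + ((h_2 - t_2) ** 2).mean()  # Eq. (5)

    loss = loss_rec + loss_dist                    # loss weighting omitted
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()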
1. What is the main contribution of the paper, and how does it differ from other masked image modeling frameworks? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its writing clarity and experimental support? 3. How does the reviewer assess the novelty and reproducibility of the paper's content? 4. Are there any concerns or questions regarding the paper's methodology, such as the design of the distillation loss and the choice of metrics?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a new masked image modeling framework called i-MAE with two-way image reconstruction and latent feature reconstruction by a distillation loss. Besides, two metrics are proposed in this paper in order to examine the linear separability and the degree of semantics. Strengths And Weaknesses Strengths Inspecting the linear separability and the degree of semantics in masked image modeling is a good perspective for understanding MIM. Weaknesses The writing is overall confusing. For example, Equation 4 seems to state that reconstruction is performed for both images contained in the mixture. However, from the subsequent description and Algorithm 1 in the Appendix, it seems that only the minor (subordinate) image is reconstructed. Besides, many figures may be missing legends, such as Figure 3 or Figure 4. In the introduction, the authors mention that "Intuitively, a good representation should be separable and contain enough semantics from input, so that it can have a qualified ability to distinguish different classes with better performance on downstream tasks". Is there any evidence to support this intuition? Considering that one of the best metrics for semantically separable information is linear probing accuracy, most masked image modeling methods (e.g., MAE, SimMIM) do not outperform their contrastive learning counterparts on this benchmark. However, MIM shows superior performance on several downstream tasks with fine-tuning, such as ImageNet-1K classification, COCO object detection and ADE-20K semantic segmentation. Therefore this intuition seems to be untenable. About linear separability, Eq. 6 seems to be calculated in the same way as Eq. 5, so a model trained with this distillation loss is certain to perform better on this metric. Also, from the results in Table 2, there is not much difference in the performance of these three methods after linear regression. Regarding the semantic part, the mix ratio of 0.5 is a problematic design. Since f1 and f2 do not distinguish their order, the network does not have any ability to distinguish between these two parts of the output. In addition, both f1 and f2 have only one linear layer, which means that the disentangling of the mixture feature is still done in the same linear space. This also means that when the weights are full rank, f1 and f2 just perform a projection without information loss. These designs need further consideration. The paper lacks experiments on larger models and more representative datasets (e.g., ImageNet-1K). In addition, more experiments on downstream tasks (e.g., COCO / ADE-20K) need to be provided to support the conclusions. Clarity, Quality, Novelty And Reproducibility Please refer to the previous section for possible problems in these four aspects.
Title i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable? Abstract Masked image modeling (MIM) has been recognized as a strong and popular self-supervised pre-training approach in the vision domain. However, the interpretability of the mechanism and properties in the learned representations by such a scheme is so far not well explored. In this work, through comprehensive experiments and empirical studies on Masked Autoencoders (MAE), we address two critical questions to explore the behaviors of the learned representations: (i) Are the latent representations in Masked Autoencoders linearly separable if the input is a mixture of two images instead of one? This can be concrete evidence to explain why MAE-learned representations have superior performance on downstream tasks, as proven by many literatures impressively. (ii) What is the degree of semantics encoded in the latent feature space by Masked Autoencoders? To explore these two problems, we propose a simple yet effective Interpretable MAE (i-MAE) framework with a two-way image reconstruction and a latent feature reconstruction with distillation loss, to help us understand the behaviors inside MAE structure. Extensive experiments are conducted on CIFAR-10/100, TinyImageNet and ImageNet-1K datasets to verify the observations we discovered. Furthermore, in addition to qualitatively analyzing the characteristics in the latent representations, we also examine the existence of linear separability and the degree of semantics in the latent space by proposing two novel metrics. The surprising and consistent results between the qualitative and quantitative experiments demonstrate that i-MAE is a superior framework design for interpretability research of MAE frameworks, as well as achieving better representational ability. 1 INTRODUCTION Self-supervised learning aims to learn representations from abundant unlabeled data for benefiting various downstream tasks. Recently, many self-supervised approaches have been proposed in the vision domain, such as pre-text based methods (Doersch et al., 2015; Zhang et al., 2016; Gidaris et al., 2018), contrastive learning with Siamese networks (Oord et al., 2018; He et al., 2020; Chen et al., 2020; Henaff, 2020), masked image modeling (MIM) (He et al., 2022; Bao et al., 2022; Xie et al., 2022), etc. Among them, the MIM has shown a preponderant advantage in performance and the representative method Masked Autoencoders (MAE) (He et al., 2022) has attracted much attention in the field. A natural question is raised: Where is the benefit of the transferability to downstream tasks from in MAE-based training? This motivates us to develop a framework to shed light on the reasons for the superior latent representation from MAE. Also, as the interpretability of MAE framework is still under-studied in this area, it is crucial to explore this in a specific and exhaustive way. Intuitively, a good representation should be separable and contain enough semantics from input, so that it can have a qualified ability to distinguish different classes with better performance on downstream tasks. While, how to evaluate the separability and the degree of semantics on latent features is not clear so far. Moreover, the mechanism of an Autoencoder to compress the information from input by reconstructing itself, has been a strong self-supervised learning architecture, but the explanation of the learned features through this way is still under-explored. 
To address the difficulties of identifying separability and semantics in the latent features, we first propose a novel framework i-MAE upon vanilla MAE. It consists of a mixture-based masked au- toencoder branch for disentangling the mixed representations by linearly separating two different instances, and a pre-trained vanilla MAE as the guidance to distill the disentangled representations. An illustration of the overview framework architecture is shown in Fig. 2. This framework is designed for answering two interesting questions: (i) Are the latent representations in Masked Autoencoders linearly separable? (ii) What is the degree of semantics encoded in the latent feature space by Masked Autoencoders? These two questions can reveal the factor that MAE learned features are good at separating different classes. We attribute the superior representation of MAE to it learning separable features for downstream tasks with enough semantics. In addition to qualitative studies, we also develop two metrics to address the two questions quantitatively. In the first metric, we employ ℓ2 distance from the high-dimensional Euclidean spaces to measure the similarity between i-MAE’s disentangled feature and “ground-truth” feature from pre-trained MAE on the same image. In the second metric, we control different ratios of semantic classes as a mixture within a mini-batch and evaluate the finetuning and linear probing results of the model to reflect the learned semantic information. More details will be provided in Section 3. We conduct extensive experiments on different scales of datasets: small CIFAR-10/100, medium Tiny-ImageNet and large ImageNet-1K to verify the linear separability and the degree of semantics in the latent representations. We also provide both qualitative and quantitative results to explain our observations and discoveries. The characteristics we observed in latent representations according to our proposed i-MAE framework: (I) i-MAE learned feature representation has good linear separability for input data, which is beneficial for downstream tasks. (II) Though the training scheme of MAE is different from instance classification pre-text in contrastive learning, its representation still encodes sufficient semantic information from input data. Moreover, mixing the same class images as the input substantially improves the quality of learned features. (III) We can reconstruct an image from a mixture by i-MAE effortlessly. To the best of our knowledge, this is the pioneer study to explicitly explore the separability and semantics inside MAE’s features with extensive well-designed qualitative and quantitative experiments. Our contributions in this work are: • We propose an i-MAE framework with two-way image reconstruction and latent feature reconstruction by a distillation loss, to explore the interpretability of mechanisms and properties inside the learned representations of MAE framework. • We introduce two metrics to examine the linear separability and the degree of semantics quantitatively on the learned latent representations. • We conduct extensive experiments on different scales of datasets: CIFAR-10/100, Tiny-ImageNet and ImageNet-1K and provide sufficient qualitative and quantitative results. 2 RELATED WORK Masked image modeling. Motivated by masked language modeling’s success in language tasks (Devlin et al., 2018; Radford & Narasimhan, 2018), Masked Image Modeling (MIM) in the vision domain learn representations from images corrupted by masking. 
State-of-the-art results on downstream tasks are achieved by several approaches. BEiT (Bao et al., 2022)proposes to recover discrete visual tokens, whereas SimMIM (Xie et al., 2022) addresses the MIM task as a pixel-level reconstruction. In this work, we focus on MAE (He et al., 2022), which proposes to use a high masking ratio and a non-arbitrary ViT decoder. Despite the great popularity of MIM approaches and their conceptual similarity to language modeling, the question of why has not been addressed in the visual domain. Moreover, as revealed by MAE, pixels are semantically sparse, and we novelly examine semantic-level information quantitatively. Image mixtures. Widely adopted mixture methods in visual supervised learning include Mixup (Zhang et al., 2017) and Cutmix (Yun et al., 2019). However, these methods require ground-truth labels for calculating mixed labels; in this work, we adapt Mixup to our unsupervised framework by formulating losses on only one of the two input images. On the other hand, in very recent visual SSL, joint embedding methods and contrastive learning approaches such as MoCo (He et al., 2020), SimCLR (Chen et al., 2020), and more recently UnMix (Shen et al., 2022) have acquired success and predominance in mixing visual inputs. These approaches promote instance discrimination by aligning features of augmented views of the same image. However, unlike joint embedding methods, i-MAE does not heavily rely on data augmentation and negative sampling. Moreover, whereas most MIM methods are generative tasks, i-MAE also utilizes characteristics of discriminative tasks in learning linearly separable representations. Invariance and disentangling representation learning in Autoencoders. Representation learning focuses on the properties of the features learned by the layers of deep models while remaining agnostic to the particular optimization process. Variance and entanglement are two commonly discussed factors that occur in data distribution for representation learning. In this work, we focus on the latent disentanglement that one feature is correlated or connected to other vectors in the latent space. Autoencoder is a classical generative unsupervised representation learning framework based on image reconstruction as loss function. Specifically, autoencoders learn both the mapping of inputs to latent features and then the reconstruction of the original input. Denoising autoencoders reconstruct the original input from a corrupted input, and most MIM methods are categorized as denoising autoencoders that use masking as a noise type. We notice, that recent work in the literature (He et al., 2022; Bao et al., 2022) performs many experiments in masking strategies, but to the best of our knowledge, we are the first to introduce image mixtures in the pre-training of MIM. 3 I-MAE In this section, we first introduce an overview of our proposed framework. Then, we present each component in detail. Ensuing, we elaborate on the metrics we proposed evaluating linear separability and degree of semantics, as well as broadly discuss the observations and discoveries. 3.1 FRAMEWORK OVERVIEW As shown in Fig 2, our framework consists of three submodules: (i) a mixture encoder module that takes the masked mixture image as the input and output mixed features; (ii) a disentanglement module that splits the mixed feature to the individual ones; (iii) MAE teacher module that provides the pre-trained embedding for guiding the splitting process in the disentanglement module. 
3.1.1 COMPONENTS Input Mixture with MAE Encoder. Inspired by Mixup, we use an unsupervised mixture of inputs formulated by α ∗ I1 and (1− α) ∗ I2, I1, I2 are the input mixes. Essentially, our encoder extrapolates mixed features from a tiny fraction (e.g., 25%) of visible patches, which we then tune to only represent the subordinate image. The mixed image will be: Im = α ∗ I1 + (1− α) ∗ I2 (1) where α is the coefficient to mix two images following a Beta distribution. Two-branch Masked Autoencoders with Shared Decoder. Although sufficient semantic information of both images is embedded in the mixed representation to reconstruct both images, the vanilla MAE cannot by itself associate separated features to either input. The MAE structure does not retain identification information of the two mixed inputs (e.g., order or positional information), i.e., the model cannot tell which of the two images to deconstruct to, since both are sampled from the same distribution and mixed randomly. The consequence is that both reconstructions look identical to each other and fail to look similar to either original input. Similar to how positional embeddings are needed to explicitly encode spatial information, i-MAE implicitly encodes the semantic difference between the two inputs by using a dominant and subordinate mixture strategy. Concretely, through an unbalanced mix ratio and a reconstruction loss targeting only one of the inputs, our framework encodes sufficient information for i-MAE to linearly map the input mixture to two outputs. Two-way Image Reconstruction Loss. Formally, we build our reconstruction loss to recover individual images from a mixed input, which is first fed into the encoder to generate mixed features: hm = Ei-MAE(Im) (2) where Ei-MAE is i-MAE’s encoder, hm is the latent mixed representation. Then, we employ two nonshareable linear embedding layers to separate the mixed representation from the individual ones: h1 = f1(hm) h2 = f2(hm) (3) where f1, f2 are two linear layers with different parameters for disentanglement and h1 and h2 are corresponding representations. After that, we feed the individual representations into the shared decoder with the corresponding reconstruction losses: LI1recon = EI1∼p(I1) [∥Dshared(h1))− I1∥2 LI2recon = EI2∼p(I2) [∥Dshared(h2))− I2∥2 (4) In practice, we train the linear separation layers to distinguish between the dominant input Id (higher mix ratio) and the subordinate input Is (lower ratio). Showing that our encoder learns to embed representations of both images, we intentionally choose to reconstruct only the subordinate image Is to prevent the Id from guiding the reconstruction. Essentially, successful reconstructions from only the Is prove that representations of both images can be learned and that the subordinate image is not filtered out as noise. Patch-wise Distillation Loss for Latent Reconstruction. With the linear separation layers and an in-balanced mixture, the i-MAE encoder is presented with sufficient information about both images to perform visual reconstructions; however, information is inevitably lost during the mixture process, harming the value of the learned features in downstream tasks such as classification. To mitigate such an effect, we propose a knowledge distillation module both for enhancing the learned feature’s quality, and that a successful distillation can evidently prove the linear separability of our features. 
Intuitively, MAE’s feature can be regarded as ”ground-truth” and i-MAE learns features distilled from the original MAE. Specifically, our loss function computes ℓ2 loss between disentangled representations and original representations to help our encoder learn useful features of both inputs. Our Patch-wise latent reconstruction loss can be formulated as: Lh1recon = Eh1∼q(h1) [∥Ep-MAE(I1))− h1∥2 Lh2recon = Eh2∼q(h2) [∥Ep-MAE(I2))− h2∥2 (5) where Ep-MAE is the pre-trained MAE encoder. 3.2 LINEAR SEPARABILITY For i-MAE to reconstruct the subordinate image from a linear mixture, not only does the encoder have to be general enough to retain information of both inputs, but it must also generate embeddings that are specific enough for the decoder to distinguish them into their pixel-level forms. A straightforward interpretation of how i-MAE fulfills both conditions is that the latent mixture hm is a linear combination of features that closely relate to h1 and h2, e.g. in a linear relationship. Our distillation module aids the information loss. To verify this explanation, we employ a linear separability metric to experimentally observe such behavior. Metric of Linear Separability. A core contribution of our i-MAE is the quantitative analysis of features. In general, linear separability is a property of two sets of features that can be separated into their respective sets by a hyperplane. In our example, the set of latent representations H1 and H2 are linearly separable if there exists n+ 1 real numbers w1,w2, ...,wn, b, such that every h ∈ H1 satisfies ∑ wihi > b and every h ∈ H2 satisfies ∑ wihi < b. It is a common practice to train a classical linear classifier (e.g., SVM) and evaluate if two sets of data are linearly separable. However, to quantitatively measure the separation of latent representations, we devised a more intuitive yet effective metric. Our metric computes the Mean Squared Error (MSE) distance between the disentangled feature of the subordinate image Is and the vanilla MAE feature of a single input Is. Since the disentangled feature without constraints will unlikely resemble the vanilla feature, we utilize a linear layer to transform the disentangled feature space to the vanilla feature space. Note that this is similar to knowledge distillation, but happens after the pre-training process without finetuning the parameters and conceptually measures the distance between the two latent representations, and thus the linear transformation will not be needed for i-MAE with distillation. The detailed formulation of the metric is: Mls = 1 N N∑ n=1 ∥hns − fθ (Ins )∥ 2 2 (6) where N is the total number of samples. fθ is the encoder of vanilla MAE. Is is the subordinate image and Is ∈ {I1, I2}. 3.3 SEMANTICS Metric of Semantics. Vanilla MAE exhibits strong signs of semantics understanding (He et al., 2022). However, studying the abstract concept of semantics in the visual domain is difficult due to its semantic sparsity and repetitiveness. Addressing this problem, we propose a metric unique to i-MAE that is readily available for examining the degree of semantics learned in the model. Asides from straightforwardly evaluating classification accuracy to measure the quality of latent representation, i-MAE utilizes the mixing of semantically similar instances to determine to what degree the disentangled latent representations can reflect image-level meaning. 
Naturally, the segmentation of different instances from the same class is a more difficult task than classification between different classes: intra-class separation requires the understanding of high-level visual concepts, i.e., semantic differences, rather than lower-level patterns such as shape or color. Generally, data transformation (Olah et al., 2017) can help mitigate overfitting; similarly, our semantic disentanglement module is another data augmentation that introduces significantly more mixtures of the same class into the training process. We find that this semantics-controllable mixture scheme boosts the semantics of the learned features. Specifically, we choose training instances from the same or different classes following different distributions to constitute an input mixture, so as to examine the quality of the learned features as follows:

p = fm(Ica + Icb)    (7)

where fm is the backbone network for the mixture input and p is the corresponding prediction. Ica and Icb are the input samples, and a certain percentage r of the class pairs (ca, cb) belong to the same category. For instance, r = 0.1 indicates that 10% of the images in a mini-batch are mixed with an image of the same class. When r = 1.0, all training images will be mixed with another one from the same class, which can be regarded as a semantically enhanced augmentation. During training, r is fixed for individual models, and we study the degree of semantics that the model encodes by changing the percentage value r. After the model is trained by i-MAE using such input data, we finetune the model with the Mixup strategy (both the baseline and our models) and a cross-entropy loss. We use the accuracy as the metric of semantics under this percentage of instance mixture:

M_{sem} = -\sum_{i=1}^{n} t_i \log(p_i)    (8)

where ti is the ground-truth. The insight behind this is that if the input mixture is composed of two images or instances with the same semantics (i.e., the same category), it will confuse the model during training, and i-MAE will struggle to disentangle them. Thus, the encoded information/semantics may be weakened in training, which can be reflected by the quality of the learned representation. It is interesting to see whether this conjecture is supported by the empirical results. We use the representation quality, measured through finetuned accuracy, to monitor the degree of semantics under this semantics-controllable mixture scheme.

4 EMPIRICAL RESULTS AND ANALYSIS

In the experiments section, we analyze the properties of i-MAE's disentangled representations on an extensive range of datasets. First, we provide the datasets used and our implementation details. Then, we thoroughly ablate our experiments, focusing on the properties of linear separation and the semantics-controllable mixture. Lastly, we give the final evaluation of our results.

4.1 DATASETS AND TRAINING IMPLEMENTATION FOR BASELINE AND I-MAE

Settings: We perform empirical experiments of i-MAE on CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K. On CIFAR-10/100, we pre-train i-MAE in an unsupervised manner and adjust MAE's structure to better fit the smaller datasets: ViT-Tiny (Touvron et al., 2021) as the encoder and a lite version of ViT-Tiny (4 layers) as the decoder. Our pre-training lasts 2,000 epochs with a learning rate of 1.5×10−4 and 200 warm-up epochs. On Tiny-ImageNet, i-MAE's encoder is ViT-Small and its decoder is ViT-Tiny, trained for 1,000 epochs with a learning rate of 1.5×10−4. Additionally, we apply warm-up for the first 100 epochs and use cosine learning rate decay with the AdamW optimizer, as in MAE.
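Because the semantics-controllable mixture of Section 3.3 is applied during this unsupervised pre-training stage, one possible way to construct the intra-/inter-class pairing within a mini-batch is sketched below; the pairing logic is an assumption about one plausible dataloader modification, not necessarily the authors' implementation.

import torch

def semantics_controllable_perm(labels, r):
    # Build an in-batch pairing so that roughly a fraction r of samples are mixed
    # with another sample of the SAME class and the rest with a random sample,
    # following the ratio r of Section 3.3.
    n = labels.size(0)
    perm = torch.randperm(n)
    intra = torch.rand(n) < r
    for i in torch.nonzero(intra).flatten().tolist():
        candidates = torch.nonzero(labels == labels[i]).flatten()
        candidates = candidates[candidates != i]
        if len(candidates) > 0:
            j = torch.randint(len(candidates), (1,)).item()
            perm[i] = candidates[j]
    return perm

# Usage during pre-training (alpha drawn as in Eq. 1):
#   perm = semantics_controllable_perm(y, r=0.5)
#   mixed = alpha * x + (1 - alpha) * x[perm]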
Supervised Finetuning: In the finetuning process, we apply Mixup for all experiments to fit our pre-training scheme, and compare our results with baselines of the same configuration. On CIFAR-10/100, we finetune for 100 epochs with a learning rate of 1.5×10−3 and the AdamW optimizer.

Linear Probing: For linear evaluation, we follow MAE (He et al., 2022) to train with no extra augmentations and use zero weight decay. We also adopt an additional BatchNorm layer without affine transformation.

4.2 ABLATION STUDY

In this section, we perform ablation studies on i-MAE to demonstrate the invariant property of linear separability and to what extent i-MAE can separate features. Then, we analyze the effect of semantic-level instance mixing on the quality of i-MAE's learned representations.

4.2.1 ABLATION FOR LINEAR SEPARABILITY

To begin, we thoroughly ablate our experiments on small-scale datasets and demonstrate how i-MAE's learned features display linear separability. Specifically, we experiment with the separability of the following aspects of our method: (i) a constant or probabilistic mix factor; (ii) the masking ratio of input mixtures; (iii) different ViT architectures. Unless otherwise stated, the default settings used in our ablation experiments are ViT-Tiny, a masking ratio of 75%, a fixed mixing ratio of 35%, and reconstructing only the subordinate image as the harder task.

Mix Ratio. To demonstrate the separable nature of input mixtures, we compared different fixed mixture ratios and random mixture ratios drawn from a Beta distribution. Intuitively, low mixing ratios contain less information, which is easily confused with noise, whereas higher ratios destroy the subordinate–dominant relationship. Experimentally, we observe matching results, shown in the Appendix (Fig. 10) and Fig. 1. The better separation performance around the 0.3 range indicates that i-MAE features are better dichotomized when balanced between noise and useful information. Below 0.15, the subordinate image is noisy and reconstructions are not interpretable, whereas mixing ratios above 0.45 break the balance between the two images and the two features cannot be distinguished. Moreover, notice that at 0.45, the reconstruction patches turn green and resemble the pepper.

Mask Ratio. In i-MAE, visible information of the subordinate image is inherently limited due to the unbalanced mix ratio in addition to masking. Therefore, a high masking ratio (75% (He et al., 2022)) may not be necessary to suppress the amount of information the encoder sees, so we attempt ratios of 50% and 60% to introduce more information about the subordinate target. As shown in Fig. 3, a lower masking ratio can improve the reconstruction quality. Combining our findings on mix and mask ratios, we empirically find that i-MAE can compensate for the information loss at low mix ratios with the additional alleviation of more visible patches (a lower mask ratio). As illustrated in Fig. 1, we display a case where i-MAE qualitatively succeeds in separating the features of an α = 0.1 mix with a 0.5 masking ratio. Our core finding in the separability ablation is that i-MAE can learn linearly separable features under two conditions: (i) enough information about both images must be present (this can be alleviated by the mask ratio); (ii) the image-level distinction between the minority and majority components (determined by the mix ratio) must be clear enough.

ViT Backbone Architecture. We study the effect of different ViT scales on linear separation in Appendix Fig. 5, and find that larger backbones are not necessary for small datasets with i-MAE, although they are crucial on large-scale ImageNet-1K.

4.2.2 ABLATION FOR DEGREE OF SEMANTICS

Semantic Mixes. Depending on the number of classes and the overall size, pristine datasets usually contain from around 10% (e.g., CIFAR-10) down to less than 1% (e.g., ImageNet-1K) samples of the same class. By default, uniformly random sampling produces same-class mixtures at this same likelihood. However, in the semantics-controllable mixture scheme, we test whether introducing semantically homogeneous mixtures in different amounts affects the classification performance. In particular, we test whether similar instances during pre-training can negatively affect classification performance. As shown in Tab. 1, after i-MAE pre-training, we perform finetuning and linear probing on classification tasks to evaluate the degree of semantics learned given different amounts of intra-class mix r. From Tab. 1, we discover that i-MAE overall has stronger performance in finetuning and linear probing with a non-zero same-class ratio. Specifically, a high r increases the accuracy in linear evaluation the most on all datasets, meaning that the learned features are of the highest quality and well separated. On the other hand, setting r = 0.5 is advantageous during finetuning, as it provides a balanced prior for separating both intra- and inter-class mixtures.

4.3 RESULTS OF FINAL EVALUATION

In this section, we provide a summary of our main findings: how separable i-MAE's embedded features are, and how much semantics is embedded in the mixed representations. Then, we evaluate the quality of our features with classification and analyze the features.

4.3.1 SEPARABILITY

In this section, we show how i-MAE displays properties of linear separability, visually and quantitatively, and demonstrate our advantage over the baseline (vanilla MAE). In a visual comparison of disentanglement capability, shown in Fig. 4, the vanilla MAE does not perform well out-of-the-box. In fact, the reconstructions represent the mixed input more than the subordinate image. Since the mixture input of i-MAE is a linear combination of the two images, and our results show i-MAE's potent ability to reconstruct both images even at very low mixture ratios, we attribute this ability to i-MAE's disentangled features correlating strongly with vanilla MAE's features. As mentioned above, we gave the formal definition of linear separability; we now empirically illustrate the strength of the linear relationship between MAE's features and i-MAE's disentangled features with a linear regressor. We employ the ℓ2 distance as our criterion, and the results are reported in Tab. 2. Experimentally, we feed mixed inputs to i-MAE and the single image to the target model (vanilla MAE). Before indicates that we directly calculate the distance between the "ground-truth" features from the pre-trained MAE and our disentangled features; After indicates that we train the linear regressor's parameters to fit the "ground-truth". Baseline is the model trained without the disentanglement module. It can be observed that our i-MAE has a significantly smaller distance than the vanilla model, reflecting that such a scheme yields better separability.

4.3.2 SEMANTICS

Finetune and Linear Evaluation. We evaluate i-MAE's performance with finetuning and linear evaluation on regular inputs and targets. For all approaches, we use Mixup as augmentation in the finetuning phase and no extra augmentations for linear evaluation.
Classification performance is outlined in Tab. 3 and Tab. 4. As our features are learned from a harder scenario, they encode more information, yielding a more robust representation and higher classification accuracy. Moreover, i-MAE shows a considerable performance boost with both evaluation methods.

Analysis. We emphasize that our enhanced performance comes from i-MAE's ability to learn more separable features with the disentanglement module, and from the enhanced semantics learned by training with the semantics-controllable mixture. Our classification results show that it is crucial for MAE to learn features that are linearly separable, which helps discriminate between different classes. However, to correctly associate features with their corresponding classes, semantically rich features are needed, and these can be enhanced by the intra-class mixing strategy.

5 CONCLUSION

It is non-trivial to understand why Masked Image Modeling (MIM) in the self-supervised scheme can learn useful representations for downstream tasks without labels. In this work, we have introduced a novel interpretable framework built upon Masked Autoencoders (i-MAE) to explore two critical properties of latent features: linear separability and degree of semantics. We identified that these two properties are at the core of superior latent representations and shed light on where the good transferability of MAE comes from. Moreover, we proposed two metrics to evaluate these two properties quantitatively. Extensive experiments are conducted on the CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K datasets to demonstrate our discoveries and observations in this work. We also provided sufficient qualitative results and analyses of different hyperparameters. We hope this work can inspire more studies on the interpretability of MIM frameworks in the future.

A DATASETS

CIFAR-10/100 (Krizhevsky, 2009) Both CIFAR datasets contain 60,000 tiny colored images sized 32×32. CIFAR-10 and CIFAR-100 are split into 10 and 100 classes, respectively.

Tiny-ImageNet Tiny-ImageNet is a scaled-down version of the standard ImageNet-1K, consisting of 100,000 64×64 colored images categorized into 200 classes.

ImageNet-1K (Deng et al., 2009) The ILSVRC 2012 ImageNet-1K classification dataset consists of 1.28 million training images and 50,000 validation images from 1,000 classes.

B IMPLEMENTATION DETAILS IN SELF-SUPERVISED PRE-TRAINING, FINETUNING, AND LINEAR EVALUATION

ViT architecture. For our non-ImageNet datasets, we adopt smaller ViT backbones that generally follow (Touvron et al., 2021). The central implementation of linear separation happens between the MAE encoder and decoder, with a linear projection layer for each branch of reconstruction. A shared decoder is used to reconstruct both images. A qualitative evaluation of different ViT sizes on Tiny-ImageNet is displayed in Fig. 5; the perceptual difference is not large and, generally, ViT-Small/Tiny are sufficient for non-ImageNet datasets.

Pre-training. The default settings for pre-training are listed in Tab. 5. On ImageNet-1K, we strictly use MAE's specifications. For better classification performance, we use normalized pixels (He et al., 2022) and a high masking ratio (0.75); for better visual reconstructions, we use a lower masking ratio (0.5) without normalizing target pixels. On CIFAR-10/100 and Tiny-ImageNet, we reconstruct ordinary pixels.

Semantics-controllable mixture. The default settings for our semantics-controllable mixtures are listed in Tab. 6.
We modified the dataloader so that, within a mini-batch, a fraction r of the samples are mixed with samples of the same class and the remaining 1 − r fraction with samples of different classes.

Classification. For the classification task, we provide the detailed settings of our finetuning process in Tab. 7 and of our linear evaluation process in Tab. 8.

C VISUALIZATION

We provide extra examples of a single-branch trained i-MAE reconstructing the subordinate image. Fig. 10 shows visualizations on CIFAR-100 at mix ratios from 0.1 to 0.45, in steps of 0.05. As shown in Fig. 6 and Fig. 7, we produce finer-grained ranges of reconstructions from 0.05 to 0.45. Notice that, in most cases, mixture rates above 0.4 tend to show features of the dominant image. This observation demonstrates that a low mixture rate better embeds the important information that separates out the subordinate image.

D PYTORCH (Paszke et al., 2019) STYLE PSEUDOCODE

The pseudocode of our mixture and subordinate reconstruction approach is shown in Algorithm 1. This is only a simple demonstration of our most basic framework, without the distillation losses.

Algorithm 1: PyTorch-style pseudocode for a single subordinate reconstruction on i-MAE.

# alpha: mixture ratio
# args.beta: hyperparameter for the Beta distribution; args.beta = 1.0
for x in loader:  # minibatch x of N samples
    alpha = np.random.beta(args.beta, args.beta)
    alpha = max(alpha, 1 - alpha)  # keep im_2 (weight 1 - alpha) as the subordinate (target) image
    perm = torch.randperm(batch_size)  # inner-batch mix
    im_1, im_2 = x, x[perm, :]
    mixed_images = alpha * im_1 + (1 - alpha) * im_2
    # subordinate loss: reconstruct only the subordinate image
    loss_sub = loss_fn(model(mixed_images), im_2)
    # update gradients
    optimizer.zero_grad()
    loss_sub.backward()
    optimizer.step()

In our full-fledged i-MAE, we employ two additional distillation losses, an additional linear separation branch, and the semantics-controllable mixture scheme; nonetheless, the key implementation remains the same as the pseudocode presented here.
1. What is the main contribution of the paper, and how does it relate to the success of masked autoencoders (MAE)? 2. What are the strengths and weaknesses of the proposed method, i-MAE, compared to other MAE variants? 3. How does the paper evaluate the effectiveness of i-MAE, and what are the results? 4. What are the novel analysis metrics proposed by the paper for measuring linear separability and semantics, and how do they compare to prior works in this area? 5. How does the paper justify the claim that a good representation should have good linear separability for input data and contain the original semantics? 6. What are some potential issues with the presentation of the paper, and how could they be improved? 7. What related work has been missed by the paper, and how does it impact the novelty and reproducibility of the proposed method?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a new method called i-MAE that combines Mixup and masked autoencoders (MAE). i-MAE outperforms MAE in terms of accuracy, and the paper suggests two metrics for linear separability and semantics on CIFAR-10 and CIFAR-100.

Strengths And Weaknesses
Strength
Analyzing the success of MAE is an important problem and could provide good insights for the self-supervised learning community.

Weakness
Goal of the paper. My biggest concern is that the goal of the paper is unclear. The title and abstract look like an interpretable method for analyzing masked autoencoders (MAE). However, it turns out that the paper suggests a new method that combines mixup and MAE, which may have better properties than MAE. This paper cannot be positioned as an interpretability paper since it does NOT analyze why MAE is successful but just studies the properties of its own proposed i-MAE. In terms of a new method, the paper has a very weak evaluation compared to other MAE variants. I suggest the next version of the paper clarify what its purpose is and focus on that direction.
Novelty of analysis metrics. The paper claims that it proposes novel analysis metrics for measuring linear separability and semantics. However, there are many prior works along this line, and the paper should survey prior art rather than reinvent the wheel. Moreover, those metrics can often be used without assuming mixed inputs and can be generally applied to the vanilla MAE. For example, StyleGAN [1] proposes a linear separability metric using a binary SVM on the feature space, and Deep-InfoMax [2] measures the mutual information between input and feature. I could find many more related metrics by googling for a second. So why should one use the suggested i-MAE-specific metrics instead of them?
Justification for the "good" representation. The paper claims that a good representation should have good linear separability for input data and contain the original semantics. However, how to define a good representation is controversial, and one needs a deeper analysis for this claim. For example, MAE does not show a good linear probing accuracy for the original embedding but outperforms contrastive learning after fine-tuning. Table 2 of this paper also shows that MAE does not show good linear separability before fine-tuning. Then, how can one conclude that linear separability is an important measure for evaluating representations? Does it provide more insight than accuracy after fine-tuning?
Presentation. The current presentation is somewhat verbose yet often missing the important parts. For example, the abstract is verbose in that the first 5 lines do not have much information, as do the last 3 lines. Instead of saying "The surprising and consistent results between the qualitative and quantitative experiments demonstrate that i-MAE is a superior framework design for interpretability research of MAE frameworks, as well as achieving better representational ability.", which does not add new information for readers, the paper could state the exact novel findings more concisely. The caption of Figure 1 is also confusing in that the reader often jumps to this concept figure first, but it is hard to understand what's going on solely from the caption before reading the details later.
Missing related work. "Invariance and disentangling representation learning in Autoencoders" is a very widely studied topic. However, the related work does not discuss ANY prior work along this line.
Also, there is a prior work [3] combining mixup and MAE, although the motivation is different. [1] Karras et al. A Style-Based Generator Architecture for Generative Adversarial Networks. CVPR 2019. [2] Hjelm et al. Learning deep representations by mutual information estimation and maximization. ICLR 2019. [3] Liu et al. MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning. arXiv 2022. Clarity, Quality, Novelty And Reproducibility Clarity: Should be improved, as I mentioned above. Quality: The paper should clarify the goal first. Novelty: Somewhat novel, as it proposes a new mixup-based variant of MAE. Reproducibility: The paper does not provide code in supplementary.
ICLR
Title i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?

Abstract Masked image modeling (MIM) has been recognized as a strong and popular self-supervised pre-training approach in the vision domain. However, the interpretability of the mechanism and the properties of the representations learned by such a scheme are so far not well explored. In this work, through comprehensive experiments and empirical studies on Masked Autoencoders (MAE), we address two critical questions to explore the behaviors of the learned representations: (i) Are the latent representations in Masked Autoencoders linearly separable if the input is a mixture of two images instead of one? This can be concrete evidence to explain why MAE-learned representations have superior performance on downstream tasks, as impressively proven by much of the literature. (ii) What is the degree of semantics encoded in the latent feature space by Masked Autoencoders? To explore these two problems, we propose a simple yet effective Interpretable MAE (i-MAE) framework with a two-way image reconstruction and a latent feature reconstruction with a distillation loss, to help us understand the behaviors inside the MAE structure. Extensive experiments are conducted on CIFAR-10/100, Tiny-ImageNet and ImageNet-1K datasets to verify the observations we discovered. Furthermore, in addition to qualitatively analyzing the characteristics of the latent representations, we also examine the existence of linear separability and the degree of semantics in the latent space by proposing two novel metrics. The surprising and consistent results between the qualitative and quantitative experiments demonstrate that i-MAE is a superior framework design for interpretability research of MAE frameworks, as well as achieving better representational ability.

1 INTRODUCTION

Self-supervised learning aims to learn representations from abundant unlabeled data to benefit various downstream tasks. Recently, many self-supervised approaches have been proposed in the vision domain, such as pretext-based methods (Doersch et al., 2015; Zhang et al., 2016; Gidaris et al., 2018), contrastive learning with Siamese networks (Oord et al., 2018; He et al., 2020; Chen et al., 2020; Henaff, 2020), masked image modeling (MIM) (He et al., 2022; Bao et al., 2022; Xie et al., 2022), etc. Among them, MIM has shown a preponderant advantage in performance, and the representative method Masked Autoencoders (MAE) (He et al., 2022) has attracted much attention in the field. A natural question is raised: where does the benefit of transferability to downstream tasks come from in MAE-based training? This motivates us to develop a framework to shed light on the reasons for the superior latent representation from MAE. Also, as the interpretability of the MAE framework is still under-studied in this area, it is crucial to explore it in a specific and exhaustive way. Intuitively, a good representation should be separable and contain enough semantics from the input, so that it can distinguish different classes and deliver better performance on downstream tasks. However, how to evaluate the separability and the degree of semantics of latent features is so far not clear. Moreover, the autoencoder mechanism of compressing the information from the input by reconstructing it has been a strong self-supervised learning architecture, but the explanation of the features learned this way is still under-explored.
To address the difficulties of identifying separability and semantics in the latent features, we first propose a novel framework, i-MAE, built upon the vanilla MAE. It consists of a mixture-based masked autoencoder branch that disentangles the mixed representations by linearly separating two different instances, and a pre-trained vanilla MAE that serves as guidance to distill the disentangled representations. An illustration of the overall framework architecture is shown in Fig. 2. This framework is designed to answer two interesting questions: (i) Are the latent representations in Masked Autoencoders linearly separable? (ii) What is the degree of semantics encoded in the latent feature space by Masked Autoencoders? These two questions can reveal the factor that MAE learned features are good at separating different classes. We attribute the superior representation of MAE to its learning of separable features with enough semantics for downstream tasks. In addition to qualitative studies, we also develop two metrics to address the two questions quantitatively. In the first metric, we employ the ℓ2 distance in high-dimensional Euclidean space to measure the similarity between i-MAE's disentangled feature and the "ground-truth" feature from a pre-trained MAE on the same image. In the second metric, we control the ratio of same-class mixtures within a mini-batch and evaluate the finetuning and linear probing results of the model to reflect the learned semantic information. More details will be provided in Section 3. We conduct extensive experiments on datasets of different scales: small CIFAR-10/100, medium Tiny-ImageNet, and large ImageNet-1K, to verify the linear separability and the degree of semantics in the latent representations. We also provide both qualitative and quantitative results to explain our observations and discoveries. The characteristics we observed in the latent representations through our proposed i-MAE framework are: (I) the i-MAE-learned feature representation has good linear separability for the input data, which is beneficial for downstream tasks. (II) Although the training scheme of MAE is different from the instance classification pretext task in contrastive learning, its representation still encodes sufficient semantic information from the input data. Moreover, mixing same-class images as the input substantially improves the quality of the learned features. (III) We can reconstruct an individual image from a mixture with i-MAE effortlessly. To the best of our knowledge, this is the pioneering study that explicitly explores the separability and semantics inside MAE's features with extensive, well-designed qualitative and quantitative experiments.

Our contributions in this work are:
• We propose the i-MAE framework, with a two-way image reconstruction and a latent feature reconstruction via a distillation loss, to explore the interpretability of the mechanisms and properties inside the learned representations of the MAE framework.
• We introduce two metrics to quantitatively examine the linear separability and the degree of semantics of the learned latent representations.
• We conduct extensive experiments on datasets of different scales: CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K, and provide sufficient qualitative and quantitative results.

2 RELATED WORK

Masked image modeling. Motivated by masked language modeling's success in language tasks (Devlin et al., 2018; Radford & Narasimhan, 2018), Masked Image Modeling (MIM) in the vision domain learns representations from images corrupted by masking.
State-of-the-art results on downstream tasks are achieved by several approaches. BEiT (Bao et al., 2022) proposes to recover discrete visual tokens, whereas SimMIM (Xie et al., 2022) addresses the MIM task as a pixel-level reconstruction. In this work, we focus on MAE (He et al., 2022), which proposes to use a high masking ratio and a non-arbitrary ViT decoder. Despite the great popularity of MIM approaches and their conceptual similarity to language modeling, the question of why they work has not been addressed in the visual domain. Moreover, as revealed by MAE, pixels are semantically sparse, and we examine semantic-level information quantitatively.

Image mixtures. Widely adopted mixture methods in visual supervised learning include Mixup (Zhang et al., 2017) and CutMix (Yun et al., 2019). However, these methods require ground-truth labels for calculating mixed labels; in this work, we adapt Mixup to our unsupervised framework by formulating losses on only one of the two input images. On the other hand, in very recent visual SSL, joint embedding methods and contrastive learning approaches such as MoCo (He et al., 2020), SimCLR (Chen et al., 2020), and more recently UnMix (Shen et al., 2022) have achieved success and predominance in mixing visual inputs. These approaches promote instance discrimination by aligning features of augmented views of the same image. However, unlike joint embedding methods, i-MAE does not heavily rely on data augmentation and negative sampling. Moreover, whereas most MIM methods are generative tasks, i-MAE also utilizes characteristics of discriminative tasks in learning linearly separable representations.

Invariance and disentangling representation learning in Autoencoders. Representation learning focuses on the properties of the features learned by the layers of deep models while remaining agnostic to the particular optimization process. Variance and entanglement are two commonly discussed factors that occur in data distribution for representation learning. In this work, we focus on latent disentanglement, i.e., whether one feature is correlated or connected to other vectors in the latent space. The autoencoder is a classical generative unsupervised representation learning framework that uses image reconstruction as its loss function. Specifically, autoencoders learn both the mapping of inputs to latent features and then the reconstruction of the original input. Denoising autoencoders reconstruct the original input from a corrupted input, and most MIM methods are categorized as denoising autoencoders that use masking as the noise type. We notice that recent work in the literature (He et al., 2022; Bao et al., 2022) performs many experiments on masking strategies, but, to the best of our knowledge, we are the first to introduce image mixtures into the pre-training of MIM.

3 I-MAE

In this section, we first introduce an overview of our proposed framework. Then, we present each component in detail. Finally, we elaborate on the metrics we propose for evaluating linear separability and the degree of semantics, and broadly discuss our observations and discoveries.

3.1 FRAMEWORK OVERVIEW

As shown in Fig. 2, our framework consists of three submodules: (i) a mixture encoder module that takes the masked mixture image as input and outputs mixed features; (ii) a disentanglement module that splits the mixed feature into the individual ones; (iii) an MAE teacher module that provides the pre-trained embedding to guide the splitting process in the disentanglement module.
3.1.1 COMPONENTS

Input Mixture with MAE Encoder. Inspired by Mixup, we use an unsupervised mixture of inputs formulated by α ∗ I1 and (1 − α) ∗ I2, where I1, I2 are the input mixes. Essentially, our encoder extrapolates mixed features from a tiny fraction (e.g., 25%) of visible patches, which we then tune to only represent the subordinate image. The mixed image will be:

Im = α ∗ I1 + (1 − α) ∗ I2    (1)

where α is the coefficient used to mix the two images and follows a Beta distribution.

Two-branch Masked Autoencoders with Shared Decoder. Although sufficient semantic information of both images is embedded in the mixed representation to reconstruct both images, the vanilla MAE cannot by itself associate separated features with either input. The MAE structure does not retain identification information of the two mixed inputs (e.g., order or positional information), i.e., the model cannot tell which of the two images to reconstruct, since both are sampled from the same distribution and mixed randomly. The consequence is that both reconstructions look identical to each other and fail to look similar to either original input. Similar to how positional embeddings are needed to explicitly encode spatial information, i-MAE implicitly encodes the semantic difference between the two inputs by using a dominant–subordinate mixture strategy. Concretely, through an unbalanced mix ratio and a reconstruction loss targeting only one of the inputs, our framework encodes sufficient information for i-MAE to linearly map the input mixture to two outputs.

Two-way Image Reconstruction Loss. Formally, we build our reconstruction loss to recover individual images from a mixed input, which is first fed into the encoder to generate mixed features:

hm = E_i-MAE(Im)    (2)

where E_i-MAE is i-MAE's encoder and hm is the latent mixed representation. Then, we employ two non-shared linear embedding layers to separate the mixed representation into the individual ones:

h1 = f1(hm), h2 = f2(hm)    (3)

where f1, f2 are two linear layers with different parameters for disentanglement, and h1 and h2 are the corresponding representations. After that, we feed the individual representations into the shared decoder with the corresponding reconstruction losses:

\mathcal{L}^{I_1}_{recon} = \mathbb{E}_{I_1 \sim p(I_1)} [ \| D_{shared}(h_1) - I_1 \|_2^2 ], \quad \mathcal{L}^{I_2}_{recon} = \mathbb{E}_{I_2 \sim p(I_2)} [ \| D_{shared}(h_2) - I_2 \|_2^2 ]    (4)

In practice, we train the linear separation layers to distinguish between the dominant input Id (higher mix ratio) and the subordinate input Is (lower ratio). To show that our encoder learns to embed representations of both images, we intentionally choose to reconstruct only the subordinate image Is, preventing Id from guiding the reconstruction. Essentially, successful reconstructions from only Is prove that representations of both images can be learned and that the subordinate image is not filtered out as noise.

Patch-wise Distillation Loss for Latent Reconstruction. With the linear separation layers and an unbalanced mixture, the i-MAE encoder is presented with sufficient information about both images to perform visual reconstructions; however, information is inevitably lost during the mixture process, harming the value of the learned features in downstream tasks such as classification. To mitigate such an effect, we propose a knowledge distillation module, both to enhance the quality of the learned features and because a successful distillation is itself evidence of their linear separability.
Intuitively, MAE's features can be regarded as "ground-truth" and i-MAE learns features distilled from the original MAE. Specifically, our loss function computes an ℓ2 loss between the disentangled representations and the original representations to help our encoder learn useful features of both inputs. Our patch-wise latent reconstruction loss can be formulated as:

\mathcal{L}^{h_1}_{recon} = \mathbb{E}_{h_1 \sim q(h_1)} [ \| E_{p\text{-}MAE}(I_1) - h_1 \|_2^2 ], \quad \mathcal{L}^{h_2}_{recon} = \mathbb{E}_{h_2 \sim q(h_2)} [ \| E_{p\text{-}MAE}(I_2) - h_2 \|_2^2 ]    (5)

where E_p-MAE is the pre-trained MAE encoder.

3.2 LINEAR SEPARABILITY

For i-MAE to reconstruct the subordinate image from a linear mixture, not only does the encoder have to be general enough to retain information of both inputs, but it must also generate embeddings that are specific enough for the decoder to distinguish them into their pixel-level forms. A straightforward interpretation of how i-MAE fulfills both conditions is that the latent mixture hm is a linear combination of features that closely relate to h1 and h2. Our distillation module aids the information loss. To verify this explanation, we employ a linear separability metric to experimentally observe such behavior.

Metric of Linear Separability. A core contribution of our i-MAE is the quantitative analysis of features. In general, linear separability is a property of two sets of features that can be separated into their respective sets by a hyperplane. In our example, the sets of latent representations H1 and H2 are linearly separable if there exist n + 1 real numbers w1, w2, ..., wn, b such that every h ∈ H1 satisfies Σi wi hi > b and every h ∈ H2 satisfies Σi wi hi < b. It is common practice to train a classical linear classifier (e.g., an SVM) and evaluate whether two sets of data are linearly separable. However, to quantitatively measure the separation of latent representations, we devised a more intuitive yet effective metric. Our metric computes the Mean Squared Error (MSE) distance between the disentangled feature of the subordinate image Is and the vanilla MAE feature of a single input Is. Since the disentangled feature without constraints will unlikely resemble the vanilla feature, we utilize a linear layer to transform the disentangled feature space to the vanilla feature space. Note that this is similar to knowledge distillation, but it happens after the pre-training process without finetuning the parameters and conceptually measures the distance between the two latent representations; the linear transformation is therefore not needed for i-MAE trained with distillation. The detailed formulation of the metric is:

M_{ls} = \frac{1}{N} \sum_{n=1}^{N} \| h^n_s - f_\theta(I^n_s) \|_2^2    (6)

where N is the total number of samples, fθ is the encoder of the vanilla MAE, and Is is the subordinate image with Is ∈ {I1, I2}.

3.3 SEMANTICS

Metric of Semantics. Vanilla MAE exhibits strong signs of semantic understanding (He et al., 2022). However, studying the abstract concept of semantics in the visual domain is difficult due to its semantic sparsity and repetitiveness. To address this problem, we propose a metric unique to i-MAE that is readily available for examining the degree of semantics learned in the model. Aside from straightforwardly evaluating classification accuracy to measure the quality of the latent representation, i-MAE utilizes the mixing of semantically similar instances to determine to what degree the disentangled latent representations can reflect image-level meaning.
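Before continuing with the semantics metric, the patch-wise distillation loss of Eq. 5 above can be written roughly as in the following sketch; the frozen-teacher call and the token shapes are assumptions about one possible implementation rather than the authors' exact code.

import torch

def patchwise_distillation_loss(h1, h2, im1, im2, teacher_encoder):
    # Eq. 5: l2 distance between i-MAE's separated tokens (h1, h2) and the tokens the
    # frozen, pre-trained vanilla MAE encoder produces for the corresponding clean images.
    with torch.no_grad():                  # the teacher E_p-MAE is kept frozen
        t1 = teacher_encoder(im1)          # E_p-MAE(I_1)
        t2 = teacher_encoder(im2)          # E_p-MAE(I_2)
    loss1 = ((t1 - h1) ** 2).mean()        # patch-wise squared l2, branch 1
    loss2 = ((t2 - h2) ** 2).mean()        # branch 2
    return loss1 + loss2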
Naturally, the segmentation of different instances from the same class is a more difficult task than classification between different classes: intra-class separation requires the understanding of high-level visual concepts, i.e., semantic differences, rather than lower-level patterns such as shape or color. Generally, data transformation (Olah et al., 2017) can help mitigate overfitting; similarly, our semantic disentanglement module is another data augmentation that introduces significantly more mixtures of the same class into the training process. We find that this semantics-controllable mixture scheme boosts the semantics of the learned features. Specifically, we choose training instances from the same or different classes following different distributions to constitute an input mixture, so as to examine the quality of the learned features as follows:

p = fm(Ica + Icb)    (7)

where fm is the backbone network for the mixture input and p is the corresponding prediction. Ica and Icb are the input samples, and a certain percentage r of the class pairs (ca, cb) belong to the same category. For instance, r = 0.1 indicates that 10% of the images in a mini-batch are mixed with an image of the same class. When r = 1.0, all training images will be mixed with another one from the same class, which can be regarded as a semantically enhanced augmentation. During training, r is fixed for individual models, and we study the degree of semantics that the model encodes by changing the percentage value r. After the model is trained by i-MAE using such input data, we finetune the model with the Mixup strategy (both the baseline and our models) and a cross-entropy loss. We use the accuracy as the metric of semantics under this percentage of instance mixture:

M_{sem} = -\sum_{i=1}^{n} t_i \log(p_i)    (8)

where ti is the ground-truth. The insight behind this is that if the input mixture is composed of two images or instances with the same semantics (i.e., the same category), it will confuse the model during training, and i-MAE will struggle to disentangle them. Thus, the encoded information/semantics may be weakened in training, which can be reflected by the quality of the learned representation. It is interesting to see whether this conjecture is supported by the empirical results. We use the representation quality, measured through finetuned accuracy, to monitor the degree of semantics under this semantics-controllable mixture scheme.

4 EMPIRICAL RESULTS AND ANALYSIS

In the experiments section, we analyze the properties of i-MAE's disentangled representations on an extensive range of datasets. First, we provide the datasets used and our implementation details. Then, we thoroughly ablate our experiments, focusing on the properties of linear separation and the semantics-controllable mixture. Lastly, we give the final evaluation of our results.

4.1 DATASETS AND TRAINING IMPLEMENTATION FOR BASELINE AND I-MAE

Settings: We perform empirical experiments of i-MAE on CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K. On CIFAR-10/100, we pre-train i-MAE in an unsupervised manner and adjust MAE's structure to better fit the smaller datasets: ViT-Tiny (Touvron et al., 2021) as the encoder and a lite version of ViT-Tiny (4 layers) as the decoder. Our pre-training lasts 2,000 epochs with a learning rate of 1.5×10−4 and 200 warm-up epochs. On Tiny-ImageNet, i-MAE's encoder is ViT-Small and its decoder is ViT-Tiny, trained for 1,000 epochs with a learning rate of 1.5×10−4. Additionally, we apply warm-up for the first 100 epochs and use cosine learning rate decay with the AdamW optimizer, as in MAE.
Supervised Finetuning: In the finetuning process, we apply Mixup for all experiments to fit our pre-training scheme, and compare our results with baselines of the same configuration. On CIFAR-10/100, we finetune for 100 epochs with a learning rate of 1.5×10−3 and the AdamW optimizer.

Linear Probing: For linear evaluation, we follow MAE (He et al., 2022) to train with no extra augmentations and use zero weight decay. We also adopt an additional BatchNorm layer without affine transformation.

4.2 ABLATION STUDY

In this section, we perform ablation studies on i-MAE to demonstrate the invariant property of linear separability and to what extent i-MAE can separate features. Then, we analyze the effect of semantic-level instance mixing on the quality of i-MAE's learned representations.

4.2.1 ABLATION FOR LINEAR SEPARABILITY

To begin, we thoroughly ablate our experiments on small-scale datasets and demonstrate how i-MAE's learned features display linear separability. Specifically, we experiment with the separability of the following aspects of our method: (i) a constant or probabilistic mix factor; (ii) the masking ratio of input mixtures; (iii) different ViT architectures. Unless otherwise stated, the default settings used in our ablation experiments are ViT-Tiny, a masking ratio of 75%, a fixed mixing ratio of 35%, and reconstructing only the subordinate image as the harder task.

Mix Ratio. To demonstrate the separable nature of input mixtures, we compared different fixed mixture ratios and random mixture ratios drawn from a Beta distribution. Intuitively, low mixing ratios contain less information, which is easily confused with noise, whereas higher ratios destroy the subordinate–dominant relationship. Experimentally, we observe matching results, shown in the Appendix (Fig. 10) and Fig. 1. The better separation performance around the 0.3 range indicates that i-MAE features are better dichotomized when balanced between noise and useful information. Below 0.15, the subordinate image is noisy and reconstructions are not interpretable, whereas mixing ratios above 0.45 break the balance between the two images and the two features cannot be distinguished. Moreover, notice that at 0.45, the reconstruction patches turn green and resemble the pepper.

Mask Ratio. In i-MAE, visible information of the subordinate image is inherently limited due to the unbalanced mix ratio in addition to masking. Therefore, a high masking ratio (75% (He et al., 2022)) may not be necessary to suppress the amount of information the encoder sees, so we attempt ratios of 50% and 60% to introduce more information about the subordinate target. As shown in Fig. 3, a lower masking ratio can improve the reconstruction quality. Combining our findings on mix and mask ratios, we empirically find that i-MAE can compensate for the information loss at low mix ratios with the additional alleviation of more visible patches (a lower mask ratio). As illustrated in Fig. 1, we display a case where i-MAE qualitatively succeeds in separating the features of an α = 0.1 mix with a 0.5 masking ratio. Our core finding in the separability ablation is that i-MAE can learn linearly separable features under two conditions: (i) enough information about both images must be present (this can be alleviated by the mask ratio); (ii) the image-level distinction between the minority and majority components (determined by the mix ratio) must be clear enough.

ViT Backbone Architecture. We study the effect of different ViT scales on linear separation in Appendix Fig. 5, and find that larger backbones are not necessary for small datasets with i-MAE, although they are crucial on large-scale ImageNet-1K.

4.2.2 ABLATION FOR DEGREE OF SEMANTICS

Semantic Mixes. Depending on the number of classes and the overall size, pristine datasets usually contain from around 10% (e.g., CIFAR-10) down to less than 1% (e.g., ImageNet-1K) samples of the same class. By default, uniformly random sampling produces same-class mixtures at this same likelihood. However, in the semantics-controllable mixture scheme, we test whether introducing semantically homogeneous mixtures in different amounts affects the classification performance. In particular, we test whether similar instances during pre-training can negatively affect classification performance. As shown in Tab. 1, after i-MAE pre-training, we perform finetuning and linear probing on classification tasks to evaluate the degree of semantics learned given different amounts of intra-class mix r. From Tab. 1, we discover that i-MAE overall has stronger performance in finetuning and linear probing with a non-zero same-class ratio. Specifically, a high r increases the accuracy in linear evaluation the most on all datasets, meaning that the learned features are of the highest quality and well separated. On the other hand, setting r = 0.5 is advantageous during finetuning, as it provides a balanced prior for separating both intra- and inter-class mixtures.

4.3 RESULTS OF FINAL EVALUATION

In this section, we provide a summary of our main findings: how separable i-MAE's embedded features are, and how much semantics is embedded in the mixed representations. Then, we evaluate the quality of our features with classification and analyze the features.

4.3.1 SEPARABILITY

In this section, we show how i-MAE displays properties of linear separability, visually and quantitatively, and demonstrate our advantage over the baseline (vanilla MAE). In a visual comparison of disentanglement capability, shown in Fig. 4, the vanilla MAE does not perform well out-of-the-box. In fact, the reconstructions represent the mixed input more than the subordinate image. Since the mixture input of i-MAE is a linear combination of the two images, and our results show i-MAE's potent ability to reconstruct both images even at very low mixture ratios, we attribute this ability to i-MAE's disentangled features correlating strongly with vanilla MAE's features. As mentioned above, we gave the formal definition of linear separability; we now empirically illustrate the strength of the linear relationship between MAE's features and i-MAE's disentangled features with a linear regressor. We employ the ℓ2 distance as our criterion, and the results are reported in Tab. 2. Experimentally, we feed mixed inputs to i-MAE and the single image to the target model (vanilla MAE). Before indicates that we directly calculate the distance between the "ground-truth" features from the pre-trained MAE and our disentangled features; After indicates that we train the linear regressor's parameters to fit the "ground-truth". Baseline is the model trained without the disentanglement module. It can be observed that our i-MAE has a significantly smaller distance than the vanilla model, reflecting that such a scheme yields better separability.

4.3.2 SEMANTICS

Finetune and Linear Evaluation. We evaluate i-MAE's performance with finetuning and linear evaluation on regular inputs and targets. For all approaches, we use Mixup as augmentation in the finetuning phase and no extra augmentations for linear evaluation.
Classification performance is outlined in Tab. 3 and Tab. 4. As our features are learned from a harder scenario, they encode more information, yielding a more robust representation and higher classification accuracy. Moreover, i-MAE shows a considerable performance boost with both evaluation methods.

Analysis. We emphasize that our enhanced performance comes from i-MAE's ability to learn more separable features with the disentanglement module, and from the enhanced semantics learned by training with the semantics-controllable mixture. Our classification results show that it is crucial for MAE to learn features that are linearly separable, which helps discriminate between different classes. However, to correctly associate features with their corresponding classes, semantically rich features are needed, and these can be enhanced by the intra-class mixing strategy.

5 CONCLUSION

It is non-trivial to understand why Masked Image Modeling (MIM) in the self-supervised scheme can learn useful representations for downstream tasks without labels. In this work, we have introduced a novel interpretable framework built upon Masked Autoencoders (i-MAE) to explore two critical properties of latent features: linear separability and degree of semantics. We identified that these two properties are at the core of superior latent representations and shed light on where the good transferability of MAE comes from. Moreover, we proposed two metrics to evaluate these two properties quantitatively. Extensive experiments are conducted on the CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K datasets to demonstrate our discoveries and observations in this work. We also provided sufficient qualitative results and analyses of different hyperparameters. We hope this work can inspire more studies on the interpretability of MIM frameworks in the future.

A DATASETS

CIFAR-10/100 (Krizhevsky, 2009) Both CIFAR datasets contain 60,000 tiny colored images sized 32×32. CIFAR-10 and CIFAR-100 are split into 10 and 100 classes, respectively.

Tiny-ImageNet Tiny-ImageNet is a scaled-down version of the standard ImageNet-1K, consisting of 100,000 64×64 colored images categorized into 200 classes.

ImageNet-1K (Deng et al., 2009) The ILSVRC 2012 ImageNet-1K classification dataset consists of 1.28 million training images and 50,000 validation images from 1,000 classes.

B IMPLEMENTATION DETAILS IN SELF-SUPERVISED PRE-TRAINING, FINETUNING, AND LINEAR EVALUATION

ViT architecture. For our non-ImageNet datasets, we adopt smaller ViT backbones that generally follow (Touvron et al., 2021). The central implementation of linear separation happens between the MAE encoder and decoder, with a linear projection layer for each branch of reconstruction. A shared decoder is used to reconstruct both images. A qualitative evaluation of different ViT sizes on Tiny-ImageNet is displayed in Fig. 5; the perceptual difference is not large and, generally, ViT-Small/Tiny are sufficient for non-ImageNet datasets.

Pre-training. The default settings for pre-training are listed in Tab. 5. On ImageNet-1K, we strictly use MAE's specifications. For better classification performance, we use normalized pixels (He et al., 2022) and a high masking ratio (0.75); for better visual reconstructions, we use a lower masking ratio (0.5) without normalizing target pixels. On CIFAR-10/100 and Tiny-ImageNet, we reconstruct ordinary pixels.

Semantics-controllable mixture. The default settings for our semantics-controllable mixtures are listed in Tab. 6.
We modified the dataloader so that, within a mini-batch, a fraction r of the samples are mixed with samples of the same class and the remaining 1 − r fraction with samples of different classes.

Classification. For the classification task, we provide the detailed settings of our finetuning process in Tab. 7 and of our linear evaluation process in Tab. 8.

C VISUALIZATION

We provide extra examples of a single-branch trained i-MAE reconstructing the subordinate image. Fig. 10 shows visualizations on CIFAR-100 at mix ratios from 0.1 to 0.45, in steps of 0.05. As shown in Fig. 6 and Fig. 7, we produce finer-grained ranges of reconstructions from 0.05 to 0.45. Notice that, in most cases, mixture rates above 0.4 tend to show features of the dominant image. This observation demonstrates that a low mixture rate better embeds the important information that separates out the subordinate image.

D PYTORCH (Paszke et al., 2019) STYLE PSEUDOCODE

The pseudocode of our mixture and subordinate reconstruction approach is shown in Algorithm 1. This is only a simple demonstration of our most basic framework, without the distillation losses.

Algorithm 1: PyTorch-style pseudocode for a single subordinate reconstruction on i-MAE.

# alpha: mixture ratio
# args.beta: hyperparameter for the Beta distribution; args.beta = 1.0
for x in loader:  # minibatch x of N samples
    alpha = np.random.beta(args.beta, args.beta)
    alpha = max(alpha, 1 - alpha)  # keep im_2 (weight 1 - alpha) as the subordinate (target) image
    perm = torch.randperm(batch_size)  # inner-batch mix
    im_1, im_2 = x, x[perm, :]
    mixed_images = alpha * im_1 + (1 - alpha) * im_2
    # subordinate loss: reconstruct only the subordinate image
    loss_sub = loss_fn(model(mixed_images), im_2)
    # update gradients
    optimizer.zero_grad()
    loss_sub.backward()
    optimizer.step()

In our full-fledged i-MAE, we employ two additional distillation losses, an additional linear separation branch, and the semantics-controllable mixture scheme; nonetheless, the key implementation remains the same as the pseudocode presented here.
1. What are the main contributions and novel aspects introduced by the paper regarding MAE latent representations? 2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to disentangle the representations and measure semantic meaning? 3. Do you have any concerns or confusions regarding the paper's content, such as the choice of Euclidean distance, the interpretation of Fig. 1, or the vagueness of certain terms like "semantically enhanced augmentation"? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors study two questions relating to the linear separability of MAE latent representations and the semantic meaning of the encoded MAE features. The authors also propose i-MAE, which mixes two images and then has a reconstruction objective relating to only one of the two mixed images, which causes the autoencoder to disentangle the representation.

Strengths And Weaknesses
Strengths
Disentangling the representations is a novel task to present to an MAE, which intuitively seems like a task that would generalize well.

Weaknesses
The caption of Figure 1 is confusing. How does I_s at 0.1 reconstruct the individual image well? What is meant by the individual image? I don't see it reconstructing either the dominant or the subordinate image well, so I am not sure how to interpret this. Likewise, how does 0.45 show the appearance of the dominant image? All I see is a better reconstruction of the subordinate image because the subordinate image has received more weight in the mixing. I am left very confused after this figure.
Why is Euclidean distance a good choice for the first metric? How can one be sure that a small Euclidean distance means that the features are linearly separable? If the data exist on two different manifolds that are close in Euclidean space, then the geodesic distance could be very high, but the Euclidean distance could be low.
There is no mention of what p(I_n) is in Equation 4. Is this a sample from the distribution which produced I_n or from the masking distribution?
The outlined procedure requires a fully pretrained MAE to work.
Section 3.2: "disentangled feature without constraints will unlikely resemble the vanilla feature." What is the vanilla feature? Also, this sentence needs to be rewritten for clarity; "will unlikely resemble" is hard to interpret.
Section 3.3: "which can be regarded as a semantically enhanced augmentation" What does semantically enhanced mean? What is enhanced, and how?
Section 3.3: I am left confused after reading this paragraph. Throughout the paper, this metric sounded like something which was used to measure an effect of i-MAE. Upon reading this paragraph I am not sure if this is a method for training or evaluation.
Figure 4 is hard to interpret. What are the different entries in the row (x-axis)? Furthermore, what does the figure show? All of the images look basically the same to me. Whatever small differences there may be, I have no idea what the significance of those differences is.
Table 2: I am confused by this and the section which explains it. What is the linear regressor? Why does one need a linear regressor?

Minor
Page 2: "These two questions can reveal the factor that MAE learned features are good at separating different classes." This sentence is confusing and should be rewritten.
Section 2 (Invariance heading): Shouldn't the first word in the second sentence be "Invariance" and not "Variance"?
Section 3.1.1: "I1, I2 are the input mixes." Does this mean that they are already mixed images? That is what the sentence says, but I do not think that is the case unless I have misunderstood something.
Section 3.2: "Our distillation module aids the information loss." What does aid mean here? It can be interpreted as increasing the information loss.
After Equation 8: "where ti is the ground-truth." This means the ground-truth label, right?

Clarity, Quality, Novelty And Reproducibility
Clarity
The work is overall confusing in almost every section, as indicated by the above comments.
The clarity needs to be improved in both terms of language and presentation of figures, and discussion. Quality The quality is overall mediocre. I remain unconvinced of the overall claims in the paper. The authors claim tehy study two questions relating to linear separability (metric) and encoded semantics (metric). But I remain unconvinced that the linear separability metric can actually shows anything concrete and and I am not sure that the encoded semantics metric is actually a metric as stated in Section 3.3. I see a training routine, and not a metric. Novelty The idea seems novel and intuitive, and I do not doubt that it can deliver some promising results.
ICLR
Title Light-weight probing of unsupervised representations for Reinforcement Learning Abstract Unsupervised visual representation learning offers the opportunity to leverage large corpora of unlabeled trajectories to form useful visual representations, which can benefit the training of reinforcement learning (RL) algorithms. However, evaluating the fitness of such representations requires training RL algorithms which is computationally intensive and has high variance outcomes. To alleviate this issue, we design an evaluation protocol for unsupervised RL representations with lower variance and up to 600x lower computational cost. Inspired by the vision community, we propose two linear probing tasks: predicting the reward observed in a given state, and predicting the action of an expert in a given state. These two tasks are generally applicable to many RL domains, and we show through rigorous experimentation that they correlate strongly with the actual downstream control performance on the Atari100k Benchmark. This provides a better method for exploring the space of pretraining algorithms without the need of running RL evaluations for every setting. Leveraging this framework, we further improve existing self-supervised learning (SSL) recipes for RL, highlighting the importance of the forward model, the size of the visual backbone, and the precise formulation of the unsupervised objective. Code will be released upon acceptance. 1 INTRODUCTION Learning visual representations is a critical step towards solving many kinds of tasks, from supervised tasks such as image classification or object detection, to reinforcement learning (RL). Ever since the early successes of deep reinforcement learning (Mnih et al., 2015), neural networks have been widely adopted to solve pixel-based reinforcement learning tasks such as arcade games (Bellemare et al., 2013), physical continuous control (Todorov et al., 2012; Tassa et al., 2018), and complex video games (Synnaeve et al., 2018; Oh et al., 2016). However, learning deep representations directly from rewards is a challenging task, since this learning signal is often noisy, sparse and delayed. With ongoing progress in unsupervised visual representation learning for vision tasks (Zbontar et al., 2021; Chen et al., 2020a;b; Grill et al., 2020; Caron et al., 2020; 2021), recent efforts have likewise applied self-supervised techniques and ideas to improve representation learning for RL. Some promising approaches include supplementing the RL loss with self-supervised objectives (Laskin et al., 2020; Schwarzer et al., 2021a), or first pre-training the representations on a corpus of trajectories (Schwarzer et al., 2021b; Stooke et al., 2021). However, the diversity in the settings considered, as well as the self-supervised methods used, make it difficult to identify the core principles of successful self-supervised methods in RL. Moreover, estimating the performance of RL algorithms is notoriously challenging (Henderson et al., 2018; Agarwal et al., 2021): it often requires repeating the same experience with a different random seed, and the high CPU-to-GPU ratio is a compute requirement of most online RL methods that is inefficient for typical research compute clusters. This hinders systematic exploration of the many design choices that characterize SSL methods. In this paper, we strive to provide a reliable and lightweight evaluation scheme for unsupervised visual representation in the context of RL. 
Inspired by the vision community, we propose to evaluate the representations using linear probing, by training a linear prediction head on top of frozen features. We devise two probing tasks that we deem widely applicable: predicting the reward in a given state, and predicting the action that would be taken by a fixed policy in a given state (for example that of an expert). We stress that these probing tasks are only used as a means of evaluation. Because very little supervised data is required, they are particularly suitable for situations where obtaining the expert trajectories or reward labels is expensive. Through thorough experimentation, we show that the performance of the SSL algorithms (in terms of their downstream RL outcomes) correlates with the performance in both probing tasks with statistically significant (p<0.001) Spearman’s rank correlation, making them particularly effective proxies. Given the vastly reduced computational burden of linear evaluations, we argue that this enables much easier and more straightforward experimentation with SSL design choices, paving the way for a more systematic exploration of the design space. Finally, we leverage this framework to systematically assess some key attributes of SSL methods. First off, we explore the utility and role of learning a forward model as part of the self-supervised objective. We investigate whether its expressiveness matters and show that equipping it with the ability to model uncertainty (through a random latent variable) significantly improves the quality of the representations. Next, we identify several knobs in the self-supervised objective, allowing us to carefully tune the parameters in a principled way. Finally, we confirm the previous finding (Schwarzer et al., 2021b) that bigger architectures, when adequately pre-trained, tend to perform better. Our contributions can be summarized as follows:
• Design of a rigorous and efficient SSL evaluation protocol in the context of RL
• Empirical demonstration that this evaluation scheme correlates with downstream RL performance
• Systematic exploration of design choices in existing SSL methods.
2 RELATED WORK 2.1 REPRESENTATION LEARNING There has recently been a surge in interest and advances in the domain of self-supervised learning in computer vision. Some state-of-the-art techniques include contrastive learning methods SimCLR and MoCov2 (Chen et al., 2020a;b); clustering methods such as SwAV (Caron et al., 2020); distillation methods BYOL, SimSiam, and OBoW (Grill et al., 2020; Chen and He, 2021; Gidaris et al., 2020); and information maximization methods Barlow Twins and VicReg (Zbontar et al., 2021; Bardes et al., 2021). These advances have likewise stimulated development in representation learning for reinforcement learning. A line of work includes unsupervised losses as an auxiliary objective during RL training to improve data efficiency. Such objectives can be contrastive (Laskin et al., 2020; Zhu et al., 2020) or non-contrastive (Schwarzer et al., 2021a; Yu et al., 2022). ST-DIM (Anand et al., 2019), ATC (Stooke et al., 2021) and BVS-DIM (Mengistu et al., 2022) incorporate temporal information in their contrastive objective, adapting similar techniques from unsupervised video representation learning (Sermanet et al., 2018). Proto-RL (Yarats et al., 2021a) uses a SwAV-like objective to learn representations as well as to guide effective exploration during pre-training.
Similarly, CRL (Du et al., 2021) trains a policy to optimize a SimCLR loss, then shows transfer to RL, imitation learning and image classification. Closer to our approach, SGI (Schwarzer et al., 2021b) pretrains both an encoder and a forward prediction model by minimizing the distance between predictions and target latents using BYOL, and the encoder is recycled during RL for improved data efficiency. While different in spirit, many model-based methods also train an encoder from a corpus of trajectories, either by explicit pixel reconstruction Kaiser et al. (2020); Hafner et al. (2021) or in embedding space Ye et al. (2021); Schrittwieser et al. (2020). Self-supervised representations have also been used for imitation learning (Aytar et al., 2018; Pari et al., 2021) as well as exploration (Burda et al., 2019a). 2.2 REPRESENTATION PROBING IN REINFORCEMENT LEARNING Some prior work (Racah and Pal, 2019; Guo et al., 2018; Anand et al., 2019; Higgins et al., 2018; Dittadi et al., 2022) evaluates the quality of their pretrained representations by probing for ground truth state variables such as agent/object locations, game scores or model-specific quantities (e.g. ELBO). Das et al. (2020) propose to probe representations with natural language question-answering. Despite the efficiency of these probing methods, their designs are highly domain-specific and require careful handcrafting for each environment. In addition, they fail to demonstrate the actual correlation between probing and RL performances, which makes their practical usefulness uncertain. On the other hand, the authors of ATC (Stooke et al., 2021) propose to evaluate representations by finetuning for RL tasks using the pretrained encoder with weights frozen. Similarly, Laskin et al. (2021) propose a unified benchmark for SSL methods in continuous control but still require full RL training. Our work seeks to bridge these two approaches by demonstrating the correlation between linear probing and RL performances, as well as designing probing tasks that are generalizable across environments. 3 A FRAMEWORK TO DEVELOP UNSUPERVISED REPRESENTATIONS FOR RL In this section, we detail our proposed framework for training and evaluating unsupervised representations for reinforcement learning. 3.1 UNSUPERVISED PRE-TRAINING The network is first pre-trained on a large corpus of trajectories. Formally, we define a trajectory $\mathcal{T}_i$ of length $T_i$ as a sequence of tuples $\mathcal{T}_i = [(o_t, a_t) \mid t \in [1, T_i]]$, where $o_t$ is the observation of the state at time $t$ in the environment and $a_t$ was the action taken in this state. This setting is closely related to Batch RL (Lange et al., 2012), with the crucial difference that the reward is not being observed. In particular, it should be possible to use the learned representations to maximize any reward (Touati and Ollivier, 2021). The training corpus corresponds to a set of such trajectories: $D_{\text{unsup}} = \{\mathcal{T}_1, \cdots, \mathcal{T}_n\}$. We note that the policy used to generate this data is left unspecified in this formulation, and is bound to be environment-specific. Since unsupervised methods usually necessitate a lot of data, this pre-training corpus is required to be substantial. In some domains, it might be straightforward to collect a large number of random trajectories to constitute $D_{\text{unsup}}$. In some other cases, like self-driving, where generating random trajectories is undesirable, expert trajectories from humans can be used instead.
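To make the data format concrete, the following minimal Python sketch shows one way such a reward-free trajectory corpus could be represented; the class and field names are illustrative assumptions of ours rather than the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Trajectory:
    """One trajectory T_i = [(o_t, a_t)]: observations and the actions taken.
    Rewards are deliberately absent from the unsupervised pre-training corpus."""
    observations: List[np.ndarray] = field(default_factory=list)  # o_1 ... o_T (e.g. stacked frames)
    actions: List[int] = field(default_factory=list)              # a_1 ... a_T (discrete actions)

    def append(self, obs: np.ndarray, action: int) -> None:
        self.observations.append(obs)
        self.actions.append(action)

    def __len__(self) -> int:
        return len(self.observations)

# D_unsup is then simply a collection of such trajectories, e.g.:
# d_unsup: List[Trajectory] = [traj_1, traj_2, ...]
```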
The goal of the pre-training step is to learn the parameters $\theta$ of an encoder $\mathrm{ENC}_\theta$ which maps any observation $o$ of the state $s$ (for example raw pixels) to a representation $e = \mathrm{ENC}_\theta(o)$. This representation must be amenable to the downstream control task, for example learning a policy. 3.2 EVALUATION In general, the evaluation of RL algorithms is tricky due to the high variance in performance (Henderson et al., 2018). This requires evaluating many random seeds, which creates a computational burden. We side-step this issue by formulating an evaluation protocol which is light-weight and purely supervised. Specifically, we identify two proxy supervised tasks that are broadly applicable and relevant for control. We further show in the experiment section that they are sound, in the sense that models’ performance on the proxy tasks strongly correlates with their performance in the downstream control task of interest. Similar to the evaluation protocol typically used for computer vision models, we rely on linear probing, meaning that we train only a linear layer on top of the representations, which are kept frozen. Reward Probing Our first task consists in predicting the reward observed in a given state. For this task, we require a corpus of trajectories $D_{\text{rew}} = \{\mathcal{T}'_1, \cdots, \mathcal{T}'_m\}$ for which the observed rewards are known, i.e. $\mathcal{T}'_i = [(o_t, a_t, r_t) \mid t \in [1, T_i]]$. In the most general setting, it can be formulated as a regression problem, where the goal is to minimize the following loss:
$$\mathcal{L}(\psi)_{\text{reward-reg}} = \frac{1}{|D_{\text{rew}}|} \sum_{\mathcal{T}'_i \in D_{\text{rew}}} \frac{1}{|\mathcal{T}'_i|} \sum_{(o_t, a_t, r_t) \in \mathcal{T}'_i} \left\| l_\psi(\mathrm{ENC}_\theta(o_t)) - r_t \right\|^2$$
Here, the only learnt parameters $\psi$ are those of the linear prediction layer $l_\psi$. In practice, in many environments where rewards are sparse, the presence or absence of a reward is more important than its magnitude. To simplify the problem in those cases, we can cast it as a binary prediction problem instead (this could be extended to ternary classification if the sign of the reward is of interest):
$$\mathcal{L}(\psi)_{\text{reward-classif}} = \frac{1}{|D_{\text{rew}}|} \sum_{\mathcal{T}'_i \in D_{\text{rew}}} \frac{1}{|\mathcal{T}'_i|} \sum_{(o_t, a_t, r_t) \in \mathcal{T}'_i} \mathrm{BinaryCE}\!\left(\mathbb{1}_{R>0}(r_t),\; l_\psi(\mathrm{ENC}_\theta(o_t))\right)$$
Reward prediction is closely related to value prediction, a central objective in RL that is essential for value-based control and the critic in actor-critic methods. The ability to predict instantaneous reward, akin to predicting value with a very small discount factor, can be viewed as a lower bound on the learned representation’s ability to encode the value function, and has been demonstrably helpful for control, particularly in sparse reward tasks (Jaderberg et al., 2017). Thus, we hypothesize reward prediction accuracy to be a good probing proxy task for our setting as well. Action prediction Our second task consists in predicting the action taken by an expert in a given state. For this task, we require a corpus of trajectories $D_{\text{exp}} = \{\mathcal{T}_1, \cdots, \mathcal{T}_n\}$ generated by an expert policy. We stress that this dataset may be much smaller than the pretraining corpus since we only need to fit and evaluate a linear model. The corresponding objective is as follows:
$$\mathcal{L}(\psi)_{\text{action-classif}} = \frac{1}{|D_{\text{exp}}|} \sum_{\mathcal{T}_i \in D_{\text{exp}}} \frac{1}{|\mathcal{T}_i|} \sum_{(o_t, a_t) \in \mathcal{T}_i} \mathrm{CrossEntropy}\!\left(a_t,\; l_\psi(\mathrm{ENC}_\theta(o_t))\right)$$
This task is closely related to imitation learning; however, we are not concerned with the performance of the policy that we learn as a by-product.
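To illustrate the protocol, here is a minimal PyTorch-style sketch of the two linear probes trained on top of frozen features. The `encoder` and `loader` objects are hypothetical placeholders, and for brevity both heads are trained from a single loader, whereas the paper uses separate corpora D_rew and D_exp.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_probes(encoder, loader, feat_dim, num_actions, epochs=10, lr=1e-3, device="cpu"):
    encoder.eval()                                    # the encoder stays frozen throughout
    reward_head = nn.Linear(feat_dim, 1).to(device)   # l_psi for binary reward probing
    action_head = nn.Linear(feat_dim, num_actions).to(device)  # l_psi for action probing
    opt = torch.optim.Adam(list(reward_head.parameters()) + list(action_head.parameters()), lr=lr)

    for _ in range(epochs):
        for obs, reward, action in loader:            # action is an integer class index
            obs, reward, action = obs.to(device), reward.to(device), action.to(device)
            with torch.no_grad():                     # gradients never reach the encoder
                feats = encoder(obs)
            # Reward probing as binary classification: is there a non-zero reward in this state?
            reward_logit = reward_head(feats).squeeze(-1)
            loss_reward = F.binary_cross_entropy_with_logits(reward_logit, (reward > 0).float())
            # Action probing: predict the expert / fixed-policy action with cross-entropy.
            loss_action = F.cross_entropy(action_head(feats), action)
            opt.zero_grad()
            (loss_reward + loss_action).backward()
            opt.step()
    return reward_head, action_head
```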
4 SELF PREDICTIVE REPRESENTATION LEARNING FOR RL In our work, we focus on evaluating and improving a particular class of unsupervised pretraining algorithms that involves using a transition model to predict its own representations in the future (Schwarzer et al., 2021b; Guo et al., 2018; Gelada et al., 2019). This pretraining modality is especially well suited for RL, since the transition model can be conditioned on agent actions, and can be repurposed for model-based RL after pretraining. Our framework is depicted in Fig. 2. In this section, we present the main design choices, and we investigate their performance in Section 5. 4.1 TRANSITION MODELS Our baseline transition model is a 2D convolutional network applied directly to the spatial output of the convolutional encoder (Schwarzer et al., 2021b; Schrittwieser et al., 2020). The network consists of two 64-channel convolutional layers with 3x3 filters. The action is represented as a one-hot encoding spatially replicated (in a 2D map) and concatenated with the representation input along the channel dimension. We believe a well-established sequence modeling architecture such as a GRU can serve as a superior transition model. Its gating mechanisms should be better at retaining information from both the immediate and distant past, which is especially helpful for learning dynamics in a partially observable environment.
$$\text{Encoder:}\quad \hat{e}_0 = e_0 = \mathrm{ENC}_\theta(o_0) \qquad\qquad \text{Recurrent model:}\quad \hat{e}_t = f_\phi(\hat{e}_{t-1}, a_{t-1})$$
In addition to the deterministic GRU model above, we also experiment with a GRU variant where we introduce stochastic states to allow our model to generalize better to stochastic environments, such as Atari with sticky actions (Machado et al., 2018). Our model is based on the RSSM from DreamerV2 (Hafner et al., 2021), with the main difference being that while pixel reconstruction is used as the SSL objective in the original work, we minimize the distance between predictions and targets purely in the latent space. Following DreamerV2, we optimize the latent variables using straight-through gradients (Bengio et al., 2013), and minimize the distance between posterior (z) and prior (ẑ) distributions using a KL loss.
$$\begin{aligned}
\text{Encoder:}&\quad e_t = \mathrm{ENC}_\theta(o_t)\\
\text{Recurrent model:}&\quad h_t = f_\phi(h_{t-1}, z_{t-1}, a_{t-1})\\
\text{Posterior model:}&\quad z_t \sim p_\phi(z_t \mid h_t, e_t)\\
\text{Prior predictor:}&\quad \hat{z}_t \sim j_\phi(\hat{z}_t \mid h_t)\\
\text{Latent merger:}&\quad \hat{e}_t = g_\phi(h_t, z_t)
\end{aligned}$$
4.2 PREDICTION OBJECTIVES The objective of self predictive representation learning is to minimize the distance between the predicted and the target representations, while ensuring that they do not collapse to a trivial solution. Our baseline prediction objective is BYOL (Grill et al., 2020), which is used in SGI (Schwarzer et al., 2021b). The predicted representation $\hat{e}_{t+k}$ and the target representation $\tilde{e}_{t+k}$ are first projected to lower dimensions to produce $\hat{y}_{t+k}$ and $\tilde{y}_{t+k}$. BYOL then maximizes the cosine similarity between the predicted and target projections, using a linear prediction function $q$ to translate from $\hat{y}$ to $\tilde{y}$:
$$\mathcal{L}^{BYOL}_\theta(\hat{y}_{t:t+K}, \tilde{y}_{t:t+K}) = -\sum_{k=1}^{K} \frac{q(\hat{y}_{t+k}) \cdot \tilde{y}_{t+k}}{\|q(\hat{y}_{t+k})\|_2 \cdot \|\tilde{y}_{t+k}\|_2}$$
In the case of BYOL, the target encoder and projection module are the exponential moving average of the online weights, and the gradients are blocked on the target branch. As an alternative prediction objective, we experiment with Barlow Twins (Zbontar et al., 2021). Similar to BYOL, Barlow Twins minimizes the distance of the latent representations between the online and target branches; however, instead of using a predictor module and a stop gradient on the target branch, Barlow Twins avoids collapse by pushing the cross-correlation matrix between the projection outputs on the two branches to be as close to the identity matrix as possible. To adapt Barlow Twins, we calculate the cross correlation across batch and time dimensions:
$$\mathcal{L}^{BT}(\hat{y}_{t:t+K}, \tilde{y}_{t:t+K}) = \sum_i (1 - C_{ii})^2 + \lambda \sum_i \sum_{j \neq i} C_{ij}^2, \qquad\text{where}\qquad C_{ij} = \frac{\sum_{b,t} \hat{y}_{b,t,i}\, \tilde{y}_{b,t,j}}{\sqrt{\sum_{b,t} \hat{y}_{b,t,i}^2} \cdot \sqrt{\sum_{b,t} \tilde{y}_{b,t,j}^2}}$$
where λ is a positive constant trading off the importance of the invariance and covariance terms of the loss, C is the cross-correlation matrix computed between the projection outputs of the two branches along the batch and time dimensions, b indexes batch samples, t indexes time, and i, j index the vector dimension of the projection output. By enabling gradients on both the prediction and target branches, the Barlow objective pushes the predictions towards the representations, while regularizing the representations toward the predictions. In practice, learning the transition model takes time and we want to avoid regularizing the representations towards poorly trained predictions. To address this, we apply a higher learning rate to the prediction branch. We call this technique Barlow Balancing, and implement it in Algorithm 1.
Algorithm 1: PyTorch-style pseudocode for Barlow Balancing
BarlowLoss = mu * L_BT(y_hat, y_tilde.detach()) + (1 - mu) * L_BT(y_hat.detach(), y_tilde)
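As a concrete illustration of the two prediction objectives and the Barlow Balancing rule in Algorithm 1, a minimal PyTorch-style sketch follows. The tensor shapes, the normalization details and the default λ are our own assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def byol_prediction_loss(pred_proj, target_proj, predictor):
    # pred_proj, target_proj: [B, K, D] projected predictions y_hat and EMA-branch targets y_tilde.
    q_pred = predictor(pred_proj)                # q(y_hat), the linear prediction head
    target_proj = target_proj.detach()           # stop-gradient on the target branch
    cos = F.cosine_similarity(q_pred, target_proj, dim=-1)   # [B, K]
    return -cos.sum(dim=1).mean()                # negative cosine similarity, summed over the K steps

def barlow_twins_loss(y_hat, y_tilde, lam=5e-3, eps=1e-9):
    # y_hat, y_tilde: [N, D], where N collapses the batch and time dimensions.
    y_hat = (y_hat - y_hat.mean(0)) / (y_hat.std(0) + eps)       # per-dimension standardization
    y_tilde = (y_tilde - y_tilde.mean(0)) / (y_tilde.std(0) + eps)
    n = y_hat.shape[0]
    c = (y_hat.T @ y_tilde) / n                                   # cross-correlation matrix C
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()                # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()   # redundancy-reduction term
    return on_diag + lam * off_diag

def barlow_balanced_loss(y_hat, y_tilde, mu=0.7, lam=5e-3):
    # Barlow Balancing (Algorithm 1): gradients into the prediction branch weighted by mu,
    # gradients into the target branch weighted by (1 - mu).
    return mu * barlow_twins_loss(y_hat, y_tilde.detach(), lam) + \
           (1 - mu) * barlow_twins_loss(y_hat.detach(), y_tilde, lam)
```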
4.3 OTHER SSL OBJECTIVES SGI’s authors (Schwarzer et al., 2021b) showed that in the absence of other SSL objectives, pretraining with the BYOL prediction objective alone results in representation collapse; the addition of an inverse dynamics modeling loss is necessary to prevent collapse, while the addition of a goal-oriented RL loss results in a minor downstream RL performance improvement. In inverse dynamics modeling, the model is trained using cross-entropy to model $p(a_t \mid \hat{y}_{t+k}, \tilde{y}_{t+k+1})$, effectively predicting the transition action between two adjacent states. The goal-oriented loss tries to predict the distance to states in the near future from the sampled trajectories (details in Appendix). 5 RESULTS 5.1 EXPERIMENTAL DETAILS We conduct experiments on the Arcade Learning Environment benchmark (Bellemare et al., 2013). Given the multitude of pretraining setups we investigate, we limit our experiments to 9 Atari games (Amidar, Assault, Asterix, Boxing, Demon Attack, Frostbite, Gopher, Krull, Seaquest). Pretraining We use the publicly-available DQN replay dataset (Agarwal et al., 2020), which contains data from training a DQN agent for 50M steps with sticky actions (Machado et al., 2018). We select 1.5 million frames from the 3.5 to 5 millionth steps of the replay dataset, which constitute trajectories of a weak, partially trained agent. We largely follow the recipe of SGI (Schwarzer et al., 2021b), where we jointly optimize the self prediction, goal-conditioned RL, and inverse dynamics modeling losses for 20 epochs; in some of our experiments we remove one or both of the last two objectives. We use the data augmentations introduced by Yarats et al. (2021b). All experiments are performed on a single MI50 AMD GPU, and the pretraining process took 2 to 8 days depending on the model. Reward probing We focus on the simplified binary classification task of whether a reward occurs in a given state.
We use 100k frames from the 1-1.1 millionth step of the replay dataset, with a 4:1 train/eval split. We train a logistic regression model on frozen features using the Cyanure (Mairal, 2019) library, with the MISO algorithm (Mairal, 2015) coupled with QNING acceleration (Lin et al., 2019) for a maximum of 300 steps. We do not use any data augmentation. We report the mean F1 averaged across all 9 games. On a MI50 AMD GPU, each probing run takes 10 minutes. Action probing We use the last 100k (4:1 train/eval split) frames of the DQN replay dataset, which correspond to a fully trained DQN agent. We train a linear layer on top of frozen, un-augmented features for 12 epochs with a softmax focal loss (Lin et al., 2017) using the SGD optimizer with learning rate 0.2, batch size 256, weight decay 1e-6, and a stepwise scheduler with step size 10 and gamma 0.1. We report the Multiclass F1 (weighted average of the F1 scores of each class) averaged across all games. RL evaluation We focus on the Atari 100k benchmark (Kaiser et al., 2020), where only 100k interactive steps are allowed by the agent. This is roughly equivalent to two hours of human play, providing an approximation for human-level sample efficiency. We follow the training protocol of Schwarzer et al. (2021b) using the Rainbow algorithm (Hessel et al., 2018) with the following differences: we freeze the pretrained encoder (thus only training the Q head), do not apply auxiliary SSL losses while fine-tuning, and finally disable noisy layers and rely instead on ϵ-greedy exploration. These changes are made to make the RL results reflect as closely as possible the performance induced by the quality of the representations. On a MI50 AMD GPU, each run takes between 8 and 12 hours. We evaluate the agent’s performance using the human-normalized score (HNS), defined as (agent score − random score)/(human score − random score). We calculate this per game, per seed by averaging scores over 100 evaluation trajectories at the end of training. For aggregate metrics across games and seeds, we report the median and interquartile mean (IQM). For the median, we first average the HNS across seeds for each game, and report the median of the averaged HNS values. For the IQM, we first take the middle 50% of scores across both seeds and games, then report the average. While the median is commonly reported for Atari100k, recent work has recommended IQM as a superior aggregate metric for the RL setting due to its smaller uncertainty (Agarwal et al., 2021); we also follow the cited work to report the 95% bootstrapped confidence intervals for these aggregate metrics. Unless specified otherwise, the experiments use the medium ResNet-M from Schwarzer et al. (2021b), and the inverse dynamics loss as an auxiliary loss. In BYOL experiments, the target network is an exponential moving average of the online network, while in Barlow Twins both networks are identical, following the original papers. For additional details regarding model architectures and hyperparameters used during pretraining and RL evaluation, please refer to the Appendix. 5.2 IMPACT OF TRANSITION MODELS AND PREDICTION OBJECTIVES
Table 1: F1 scores on probing tasks for different transition models and prediction objectives. All standard deviations are on the order of 1e-4.
Pred Obj    Transition   Reward   Action
BYOL        Conv-det     64.9     22.7
BYOL        GRU-det      62.2     26.8
BYOL        GRU-latent   63.4     23.2
Barlow0.7   Conv-det     52.7     24.9
Barlow0.7   GRU-latent   67.5     26.2
Table 2: F1 scores on probing tasks for different Barlow variants. All standard deviations are on the order of 1e-4, which we omit below.
Pred Obj     Reward   Action
Barlow0.5    65.0     26.3
Barlow0.7    67.5     26.2
Barlow1      65.0     24.7
Barlowrand   67.7     25.8
In table 1, we report the mean probing F1 scores for the convolutional, deterministic GRU, and latent GRU transition models trained using either the BYOL or Barlow prediction objective. When using the BYOL objective, the relative probing strengths for the different transition models are somewhat ambiguous: while the convolutional model results in better reward probing F1, the GRU models are superior in terms of expert action probing. Interestingly, we observe that after replacing BYOL with Barlow, the probing scores for the latent model improve, while those of the deterministic models deteriorate. Overall, the particular combination of pre-training using the GRU-latent transition model with the Barlow prediction objective results in representations with the best overall probing qualities. Since the deterministic model’s predictions are likely to regress to the mean, allowing gradients to flow through the target branch in the case of the Barlow objective can regularize the representations towards poor predictions, which can explain their inferior probing performance. Introducing latent variables can alleviate this issue through better predictions. We stress that the transition models are not used during probing, only the encoder is. These experiments show that having a more expressive forward model during the pre-training has a direct impact on the quality of the learnt representations. In Fig. 3, we investigate the impact of the latent variable on the information contained in the representations, by training a decoder on frozen features. In table 2, we show the results from experimenting with different variants of the Barlow objective. We find that using a higher learning rate for the prediction branch (Barlow0.7, with a 7:3 prediction to target lr ratio) results in better probing outcomes than using equal learning rates (Barlow0.5) or not letting gradients flow in the target branch altogether (Barlow1, where the target encoder is a copy of the online encoder). This suggests that while it is helpful to regularize the representations towards the predictions, there is a potential for them being regularized towards poorly trained ones. This can be addressed by applying a higher learning rate on the prediction branch. We also demonstrate that using a frozen, random target network (Barlowrand) results in good features, and in our experiments it gets the best reward probing performance. This contradicts findings from the vision domain (Grill et al., 2020), but corroborates self-supervised results from other domains such as speech (Chiu et al., 2022). Random networks have also been shown to exhibit useful inductive biases for exploration (Burda et al., 2019b;a). An explanation is that random targets act as a regularization that prevents partial collapse by enforcing a wide range of features to be encoded by the model. 5.3 IMPACT OF AUXILIARY SSL OBJECTIVES AND ENCODERS SSL objective Although pretraining with multiple objectives can sometimes result in better downstream performance, in practice they also make it harder to tune hyperparameters and debug; therefore, it is desirable to use the smallest number of objectives that can result in comparable performance. In table 4, we show the effects of inverse dynamics modeling (inv) and goal-conditioned RL (goal) objectives on probing performance.
The BYOL model experiences partial collapse without the inverse dynamics modeling loss, while the addition of goal loss improves the probing performance slightly. This is in congruence with results reported by Schwarzer et al. (2021b) for the same ablations. The Barlow-only model performs significantly better than the BYOL-only model in terms of probing scores, indicating that the Barlow objective is less prone to collapse in the predictive SSL setting. Similar to the BYOL model, the Barlow model can also be improved with inverse dynamics modeling, while the addition of goal loss has a slight negative impact. Encoders SGI (Schwarzer et al., 2021b) showed that using bigger encoders during pretraining results in improved downstream RL performance. We revisit this topic from the point of finding out whether the pretrained representations from bigger networks also have better probing qualities. We experiment with the medium (ResNet-M) and large (ResNet-L) residual networks from SGI. In table 5 we show that Barlow models pretrained using the larger ResNet have improved probing scores. 5.4 CORRELATIONS BETWEEN PROBING AND RL PERFORMANCES If our goal is to use linear probing as a guide to identify superior pretraining setup for RL, then they are only useful to the extent to which they correlate with the actual downstream RL performance. We perform RL evaluations for 9 representative setups (the best settings from each of table 1,2,4,5), as well as two contrastive methods: ST-DIM (Anand et al., 2019) and ATC (Stooke et al., 2021); and a reconstruction-based method VAE-T (Stooke et al., 2021)2. We report their probing and aggregate RL metrics in table 3, with the confidence intervals of the aggregate RL metrics depicted on the right. We find that the rank correlations between reward and action probing F1 scores and the RL aggregate metrics are significant (Figure 1). In summary, our results show the proposed probing scheme is a reliable guide for designing pretraining setups that deliver significant downstream RL performance improvements. 6 CONCLUSION In this paper we have investigated the opportunity to replace costly RL evaluation with lightweight linear probing task to assess the quality of learned representations. Reward and action probing are task-agnostic and should cover most practical applications. Using this methodology to guide us, we have demonstrated the impact of a number of key design choices in the pre-training methodology. We hope that these results encourage the research community to systematically explore the design space to further improve the quality of self-supervised representations for RL. 2See appendix for details on ATC, ST-DIM and VAE-T A MODELS AND HYPER-PARAMETERS A.1 BACKBONES M and L models are ResNet-M and ResNet-L from SGI (Schwarzer et al., 2021b). The ResNet-M encoder consists of inverted residual blocked with an expansion ratio of 2, with batch normalization applied after each convolutional layer; it uses 3 groups with 32, 64, and 64 channels, and has 3 residual blocks per group; it down-scales the input by a factor of 3 in the first group and 2 in the latter 2 groups. This yields a representation of shape 64x7x7 when applied to 84x84-dimensional Atari frames. ResNet-L uses 3 groups with 48, 96, and 96 channels, and has 5 residual blocks per group; it uses a larger expansion ratio of 4, producing a representation shape of 96x7x7 from an 84x84 frame. This enlargement increases the number of parameters by approximately a factor of 5. 
S model is the model used in Stooke et al. (2021). It consists of three convolutional layers, with [32, 64, 64] channels , kernel sizes [8, 4, 3], and strides [4, 2, 1], listed from first to last layer. A.2 TRANSITION MODELS We experimented with three transition models: convolutional model, deterministic GRU, and latent GRU. Our convolutional model is based on SGI (Schwarzer et al., 2021b). The input into the convolutional transition model is the concatenation of the spatially replicated 2D action map and the representation et along the channel dimension. The network itself consists of two 64-channel convolutional layers with 3x3 filters, separated by ReLU activation and batch normalization layers. The deterministic GRU has hidden dimension 600 and input dimension 250. The input at is prepared by passing the one-hot action vector through a 250 dimensional embedding layer. The initial hidden state ê0 is generated by projecting the representation e0 through a 600 dimensional linear layer with ELU activation and dropout. Layer normalization is applied to the hidden input at all timesteps. The latent GRU model is based on Dreamerv2’s RSSM (Hafner et al., 2021), and is consisted of a recurrent model, posterior model, prior predictor, and latent merger. The recurrent model has a hidden dimension and input dimension of 600. The initial hidden state h0 and input z0 are zero vectors. The flattened stochastic variables zt and one-hot action vector at are first concatenated and then projected to 600 dimension through a linear layer with ELU activation, before being passed into the recurrent model as input. Layer normalization is applied to the hidden input at all non-zero timesteps. The posterior model is a two-layer MLP with 600 dimensional bottleneck separated by ELU activation. It takes the concatenation of representation et and recurrent hidden output ht as input, and outputs a 1024 dimensional vector representing the 32 dimensional logits for 32 latent categorical variables. zt is sampled from the posterior logits. The prior model is a two-layer MLP with 600 dimensional bottleneck separated by ELU activation. Its output format is same as that of the posterior model. ẑt is sampled from the prior logits. The latent merger is a linear layer that projects the concatenation of ht and flattened zt to the same dimension of representation et. A.3 SSL PROJECTION MODULE In the case of the deterministic GRU, ê is first projected to the same dimension of representation through a linear layer. Henceforth we shall assume that ê underwent this step for GRUdet. The predicted representation ê and target representation ẽ are projected to 1024 dimensional vectors ŷ and ỹ through a linear layer. The BYOL objective involves processing ŷ with an additional linear layer q with output dimension 1024. The Barlow objective involves applying batch normalization to ŷ and ỹ prior to taking the covariance and variance losses. The inverse dynamics model is a two-layer MLP with 256 dimensional bottleneck separated by ReLU activation. It takes the concatenation of ŷt and ỹt+1 as input, and outputs logits with dimension equivalent to number of actions. A.4 ATC, VAE-T, ST-DIM We use the implementation, hyperparameters and architecture from the codebase of (Stooke et al., 2021) and (Stooke and Abbeel, 2019) for these models. We change the dataset to the one used in all our experiments We use the dataset described in section 5 to train these models, and train all methods for 58,500 updates. 
ATC (Augmented-Temporal Contrast) uses an InfoNCE loss between the outputs of the momentum encoder and the online branch applied to different augmentations of an image to pre-train the encoder. VAE-T from Stooke et al. (2021) uses a variational auto-encoder (Kingma and Welling, 2014) objective to reconstruct the frame from the next time step given an image at the current time step. ST-DIM (Anand et al., 2019) also uses an InfoNCE objective, and in addition to traditional global-global infomax, introduces global-local infomax by using local representations taken from the feature map output of the convolutional encoder and the globally pooled feature vector as positive pairs. For more details, we refer the reader to the referenced works. A.5 IMAGE RECONSTRUCTION MODEL We used a decoder architecture that mirrors the structure of the ResNet-M encoder. In decoding, instead of transposed convolutions we used upsampling with the nearest value followed by a regular convolution (Odena et al., 2016). We used the mean squared error between the reconstructed pixels and the target image as the training criterion. Models were trained and evaluated on the same data as reward and action probing, for 30 epochs using the Adam optimizer with learning rate 0.001. A.6 HYPERPARAMETERS See tables 6, 7, 8, 9 for hyperparameter values. For ATC, ST-DIM and VAE-T hyperparameters, see Stooke et al. (2021). A.7 IMAGE AUGMENTATION We use the same image augmentations as used in SGI (Schwarzer et al., 2021b), which itself used the augmentations in DrQ (Yarats et al., 2021b), in both pretraining and fine-tuning. We specifically apply random crops (4 pixel padding and 84x84 crops) and image intensity jittering. A.8 GOAL-ORIENTED RL LOSS The goal-oriented RL loss is taken directly from SGI (Schwarzer et al., 2021b). This objective trains a goal-conditional DQN, with rewards specified by proximity to sampled goals. First, a goal g is sampled to be the state encoding either of the near future in the current trajectory (up to 50 steps in the future), or, with probability of 20%, of the future state in another trajectory in the current batch. Then, we add Gaussian noise to obtain the final goal g: $g \leftarrow \alpha n + (1 - \alpha) g$, where $\alpha \sim \mathrm{Uniform}(0.5)$, and n is a vector sampled from an isotropic Gaussian normalized to have length 1. Then, in order to obtain the reward of taking action $a_t$ going from state $s_t$ to $s_{t+1}$, we first encode the states with the target encoder $\tilde{e}_t = \mathrm{ENC}_{\text{target}}(o_t)$, $\tilde{e}_{t+1} = \mathrm{ENC}_{\text{target}}(o_{t+1})$. Then, we calculate the reward as:
$$R(\tilde{e}_t, \tilde{e}_{t+1}) = d(\tilde{e}_t, g) - d(\tilde{e}_{t+1}, g), \qquad\text{where}\qquad d(\tilde{e}_t, g) = \exp\!\left( 2\,\frac{\tilde{e}_t \cdot g}{\|\tilde{e}_t\|_2 \cdot \|g\|_2} - 2 \right)$$
We use FiLM (Perez et al., 2018) to condition the Q-function $Q(o_t, a_t, g)$ on g, and optimize the model using DQN (Mnih et al., 2015). B FORWARD MODEL PROBING While our principal goal is to demonstrate the correlation between representation probing and offline RL performances, we also apply the reward probing technique to predictions in order to evaluate the qualities of transition models under different pretraining setups. In table 10, we show the effects of using different transition models during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder and inverse loss. Goal loss is also applied to the BYOL models.
Table 8: GRU-latent specific hyperparameters.
Parameter        Setting
kl loss weight   0.1
kl balance       0.95
Table 10: Mean reward probing F1 scores for pretraining setups with different transition models. Evaluated on 5th and 10th predictions. All standard deviations are on the order of 1e-4.
Pred Obj    Transition   Pred 5   Pred 10
BYOL        Conv-det     33.1     28.4
BYOL        GRU-det      33.0     27.4
BYOL        GRU-latent   33.4     28.9
Barlow0.7   Conv-det     32.0     27.6
Barlow0.7   GRU-det      30.1     25.0
Barlow0.7   GRU-latent   39.5     30.2
Table 11: Mean reward probing F1 scores for pretraining setups with different prediction objectives. Evaluated on 5th and 10th predictions. All standard deviations are on the order of 1e-4.
Pred Obj     Pred 5   Pred 10
BYOL         33.4     28.9
Barlow0.5    40.2     30.2
Barlow0.7    39.5     30.2
Barlow1      37.4     29.7
Barlowrand   36.8     27.5
In the deterministic setting, the predictions of the GRU model are worse than those of the convolutional model. The introduction of stochasticity appears to fix the underlying issue for predictions, resulting in the latent GRU model having the best overall prediction probing performance. One possible explanation for Conv-det having better predictions than GRU-det is that the spatial inductive bias in the convolutional kernels acts as a constraint and helps keep the predictions from regressing to the mean. However, this is more effectively solved by the introduction of latent variables into the GRU during training and inference. In table 11, we show the effects of using different prediction objectives during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder, the GRU-latent transition model, and inverse loss; goal loss is also applied to the BYOL model. Compared to the BYOL model, Barlow models generally have higher probing scores for predictions. We also note that for Barlow models, regularizing the representations towards the predictions (by setting Barlow Balance < 1) improves the quality of the predictions. This is likely because it makes the prediction task easier, making it more likely to learn a capable transition model. This reasoning can also explain why the Barlow model with a frozen, random target network achieves a superior probing result for representations (table 2) but a worse result for predictions compared to the other Barlow versions. Predicting a random target representation is likely more difficult than predicting a learned representation, and this may in turn encourage the model to rely more on learning a powerful encoder and posterior model, and less on learning an accurate transition model. C FULL RL RESULTS D STATISTICAL HYPOTHESIS TESTING OF RANK CORRELATION In Fig. 5, we show the correlation results for both the action and reward predictions. We estimate Spearman’s rank correlation coefficient (Spearman’s r) between the linear probing performance and the (interquartile) mean RL human-normalized score (HNS) over 9 Atari games. The reason for using Spearman’s r instead of the Pearson correlation coefficient is that we are interested in whether the relative ranking of the models on the linear probing tasks is indicative of the relative ranking of the same models when RL is trained on top of them. As an example, this allows us to say that if model A out-ranks model B in the reward prediction task, an RL model trained on top of model A’s representations will likely out-perform an RL model trained on top of model B’s representations. However, it does not let us predict by how much model A will out-perform model B. Let $d_i$ denote the difference in ranking between the linear probing performance and the RL performance; Spearman’s r (denoted as $\rho$ below) is computed as
$$\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}, \qquad (1)$$
where $d_i$ is the difference in ranking for the i-th model, and n is the total number of models we have.
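For reference, a small Python sketch of how the rank correlation in Eq. (1) can be computed; the input arrays are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy import stats

def spearman_rho(probe_scores, rl_scores):
    """Spearman's rank correlation, Eq. (1): rho = 1 - 6 * sum(d_i^2) / (n (n^2 - 1)).

    probe_scores, rl_scores: one value per model (e.g. probing F1 and IQM HNS).
    Assumes no ties in the rankings; with ties, prefer scipy.stats.spearmanr.
    """
    probe_scores = np.asarray(probe_scores, dtype=float)
    rl_scores = np.asarray(rl_scores, dtype=float)
    n = len(probe_scores)
    d = stats.rankdata(probe_scores) - stats.rankdata(rl_scores)   # per-model rank differences
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# Hypothetical usage with placeholder numbers (not results from the paper):
# rho = spearman_rho([0.60, 0.65, 0.68], [0.20, 0.35, 0.40])
# rho_scipy, p_value = stats.spearmanr([0.60, 0.65, 0.68], [0.20, 0.35, 0.40])
```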
We perform statistical hypothesis testing on ρ with null hypothesis ρ = 0 (no correlation between linear probing performance and RL performance) and alternative hypothesis ρ > 0 (positive correlation). The null distribution is constructed nonparametrically using permutation testing: we sample random orderings of the observed linear probing performance and RL performance independently and compute ρ. This is repeated 50,000 times to generate the null distribution (which is centered at ρ = 0, as we do not expect randomly ordered values to be correlated). We then compare our observed ρ to this distribution and perform a one-tailed test for the proportion of samples larger than our observed ρ to report our p-value. D.1 RANK CORRELATION ON A DIFFERENT DATASET In Fig. 1, we explored the correlation between the RL performance and the reward probing task, where the dataset used for the reward probing was a set of quasi-random trajectories from the DQN dataset, coming from the very beginning of the training run of the DQN agent used to collect the data. It is natural to ask whether the correlation results we obtain are sensitive to the specific dataset used. To put this question to the test, we re-run the same reward probing task, this time on the "expert" dataset, i.e. the last trajectories of the DQN dataset, corresponding to a fully trained agent. The results are shown in Fig. 6. The Spearman’s correlation coefficient that we obtain is exactly the same as the one for the random trajectory dataset (even though the reward statistics are different, see Table 14), showing that the correlation result is not sensitive to the probing dataset used. D.2 CONFIDENCE INTERVAL OF RL PERFORMANCE AS A FUNCTION OF INDEPENDENT RUNS We further show the confidence interval of the estimated mean RL performance as the number of independent runs increases. From our total of 10 independent runs per game, we sample with replacement k ≤ 10 runs (k being the number of independent runs we “pretend” to have instead of the full 10), independently for each game. We can compute the IQM over this sample to get an estimate of the IQM as if we only had k independent runs. We repeat this process 10,000 times to construct the 95% confidence interval of the empirical IQM for different k’s. Illustrative examples of how much this confidence interval shrinks for different pairs of models are shown in Fig. 7. We observe in Fig. 7 that the mean RL performance estimates have CIs that eventually separate with many independent runs. This is an unbiased but high variance and computationally intensive estimator of the true expected RL performance. On the other hand, the reward prediction F1 score is a computationally cheap, low variance and accurate estimator of the relative model ranks in mean RL performance. This further corroborates our previous results of a positive correlation between the reward prediction F1 score and mean RL performance (Fig. 1). E COMPARISON WITH DOMAIN SPECIFIC PROBING BENCHMARKS One of the key advantages of our probing method is that it is domain agnostic, unlike the previously proposed AtariARI benchmark (Anand et al., 2019) which acquires probing labels through the RAM state of the emulator, making that method impractical for image-based trajectories. To better understand how our probing metrics compare with the domain specific ones in terms of correlations with RL performances, we perform the AtariARI probing benchmarks using our pretrained encoders on the 4 overlapping games (Boxing, Seaquest, Frostbite, DemonAttack) used in both works.
For AtariARI, we first calculate the average probe F1 scores across categories, then average this quantity across the games. For reward probing, we apply our own protocol detailed in section 5.1. For RL performance we use the IQM. We report the correlation between the probing metrics and RL performances across different models. Our results are summarized in Table 13. We find that the correlation between the average probing F1s and RL performances is stronger for our reward probing method. In particular, our probing method has a significant correlation with RL performances (p < 0.05), while the AtariARI probing method does not. F PROBING DURING TRAINING We show the evolution of probing performance as training progresses in figure 8. G REWARD STATISTICS IN PROBING DATASETS In table 14, we report the percentage of states that have a non-zero reward in each of the 9 games, for two different subsets of data:
• Checkpoint 1, which corresponds to quasi-random trajectories from the beginning of the training process of DQN. This is the data used for the reward probing in Fig. 1.
• Checkpoint 50, which is the last checkpoint of the DQN replay dataset, and corresponds to the fully trained DQN agent, which we treat as an expert agent. This data is used for action probing, and for reward probing in Fig. 6.
All the games have a fairly small percentage of positive reward states, and we generally observe a higher percentage of reward in checkpoint 50, which is expected since the agent is more capable by then. G.1 IMPACT OF SPARSITY ON THE CORRELATION In Fig. 9, we plot the Spearman’s correlation coefficient between the RL performance on each individual game and the reward probing F1, as a function of the percentage of reward observed in each game (see Table 14). We do not observe any particular pattern with respect to the sparsity, suggesting that the probing task is not very sensitive to the sparsity level of each individual game. Note however that, as usual in the Atari benchmark, it is difficult to draw conclusions from any given individual game, and the statistical significance of our results only emerges when considering the set of games as a whole. Indeed, only 3 games achieve individual statistical significance at p < 0.01 (Boxing, Seaquest and Assault), while the others do not obtain statistically significant correlations. H LIMITATIONS One limitation of the current work is that for the presented probing methods to work, one needs a subset of the data either with known rewards, where ideally rewards are not too sparse, or with expert actions. If neither of the two is available, our method cannot be used. For the reward probing task, the usefulness of the method also depends on the hardness of the reward prediction itself. If the prediction task is too easy, for example because there are rewards at every step, or because the states with rewards are completely different from the ones without (such that even a randomly initialized model would yield features allowing linear separation between the two types of states), then the performance of all the models on this task is going to be extremely similar, with the only differences coming from random noise. In such a case, the performance of the prediction task cannot be used to accurately rank the quality of the features of each of the models. For future work we would also like to extend the findings of this paper to more settings, for example different environments.
1. What is the focus and contribution of the paper regarding unsupervised representation learning in reinforcement learning? 2. What are the strengths of the proposed method, particularly in its ability to predict reward values and expert actions? 3. What are the weaknesses of the paper, especially regarding the lack of diversity in methods used for evaluation? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions regarding the robustness of the linear probe F1 score and its correlation with full reinforcement learning performance?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a method for evaluating unsupervised representation learning in reinforcement learning. Using a linear probe on top of frozen, pretrained representations, the paper suggests learning to predict reward values from various states in downstream tasks. Additionally, the paper uses a linear probe to predict expert actions from learned representations. They authors show evidence that, for a selection of representation learning approaches, the F1 score of the linear probe correlates strongly with full reinforcement learning on the downstream task. Strengths And Weaknesses The paper tackles a very difficult and relevant problem, that of evaluating self-supervised representations. The paper shows evidence that linear probing can give strong indications of eventual RL training performance, which promises to shorten evaluation time and could be impactful in the representation learning for reinforcement learning field. My main concern with the paper is the lack of diversity in methods used to assess the correlation between linear probes and RL training performance. All methods compared are ablations of the self-predictive representation approach described in the paper. While these are important and elucidating experiments, I would like to see a broader set of methods compared, like augmentation-based representations (DrQ or CURL). Do these correlations hold in these cases as well? Also, I'm curious about the noise in the linear probe F1 score. Do the numbers reported in the tables stay the same regardless of random seed? Clarity, Quality, Novelty And Reproducibility The paper is clearly written, the experiments are carefully done and interesting ablations are conducted. Although linear probing is common in computer vision representation evaluation, the generalization to RL and reward prediction is novel as far as I am aware.
ICLR
Title Light-weight probing of unsupervised representations for Reinforcement Learning Abstract Unsupervised visual representation learning offers the opportunity to leverage large corpora of unlabeled trajectories to form useful visual representations, which can benefit the training of reinforcement learning (RL) algorithms. However, evaluating the fitness of such representations requires training RL algorithms which is computationally intensive and has high variance outcomes. To alleviate this issue, we design an evaluation protocol for unsupervised RL representations with lower variance and up to 600x lower computational cost. Inspired by the vision community, we propose two linear probing tasks: predicting the reward observed in a given state, and predicting the action of an expert in a given state. These two tasks are generally applicable to many RL domains, and we show through rigorous experimentation that they correlate strongly with the actual downstream control performance on the Atari100k Benchmark. This provides a better method for exploring the space of pretraining algorithms without the need of running RL evaluations for every setting. Leveraging this framework, we further improve existing self-supervised learning (SSL) recipes for RL, highlighting the importance of the forward model, the size of the visual backbone, and the precise formulation of the unsupervised objective. Code will be released upon acceptance. 1 INTRODUCTION Learning visual representations is a critical step towards solving many kinds of tasks, from supervised tasks such as image classification or object detection, to reinforcement learning (RL). Ever since the early successes of deep reinforcement learning (Mnih et al., 2015), neural networks have been widely adopted to solve pixel-based reinforcement learning tasks such as arcade games (Bellemare et al., 2013), physical continuous control (Todorov et al., 2012; Tassa et al., 2018), and complex video games (Synnaeve et al., 2018; Oh et al., 2016). However, learning deep representations directly from rewards is a challenging task, since this learning signal is often noisy, sparse and delayed. With ongoing progress in unsupervised visual representation learning for vision tasks (Zbontar et al., 2021; Chen et al., 2020a;b; Grill et al., 2020; Caron et al., 2020; 2021), recent efforts have likewise applied self-supervised techniques and ideas to improve representation learning for RL. Some promising approaches include supplementing the RL loss with self-supervised objectives (Laskin et al., 2020; Schwarzer et al., 2021a), or first pre-training the representations on a corpus of trajectories (Schwarzer et al., 2021b; Stooke et al., 2021). However, the diversity in the settings considered, as well as the self-supervised methods used, make it difficult to identify the core principles of successful self-supervised methods in RL. Moreover, estimating the performance of RL algorithms is notoriously challenging (Henderson et al., 2018; Agarwal et al., 2021): it often requires repeating the same experience with a different random seed, and the high CPU-to-GPU ratio is a compute requirement of most online RL methods that is inefficient for typical research compute clusters. This hinders systematic exploration of the many design choices that characterize SSL methods. In this paper, we strive to provide a reliable and lightweight evaluation scheme for unsupervised visual representation in the context of RL. 
Inspired by the vision community, we propose to evaluate the representations using linear probing, by training a linear prediction head on top of frozen features. We devise two probing tasks that we deem widely applicable: predicting the reward in a given state, and predicting the action that would be taken by a fixed policy in a given state (for example that of an expert). We stress that these probing tasks are only used as a means of evaluation. Because very little supervised data is required, they are particularly suitable for situations where obtaining the expert trajectories or reward labels is expensive. Through thorough experimentation, we show that the performance of the SSL algorithms (in terms of their downstream RL outcomes) correlates with the performance in both probing tasks with statistically significant (p<0.001) Spearman’s rank correlation, making them particularly effective proxies. Given the vastly reduced computational burden of linear evaluations, we argue that it enables much easier and straightforward experimentation of SSL design choices, paving the way for a more systematic exploration of the design space. Finally, we leverage this framework to systematically assess some key attributes of SSL methods. First off, we explore the utility and role of learning a forward model as part of the self-supervised objective. We investigate whether its expressiveness matters and show that equipping it with the ability to model uncertainty (through random latent variable) significantly improves the quality of the representations. Next, we identify several knobs in the self-supervised objective, allowing us to carefully tune the parameters in a principled way. Finally, we confirm the previous finding (Schwarzer et al., 2021b) that bigger architectures, when adequately pre-trained, tend to perform better. Our contributions can be summarized as follows: • Design of a rigorous and efficient SSL evaluation protocol in the context of RL • Empirical demonstration that this evaluation scheme correlates with downstream RL perfor- mance • Systematic exploration of design choices in existing SSL methods. 2 RELATED WORK 2.1 REPRESENTATION LEARNING There has recently been a surge in interest and advances in the domain of self-supervised learning in computer vision. Some state-of-art techniques include contrastive learning methods SimCLR, MoCov2 (Chen et al., 2020a;b); clustering methods SwAV (Caron et al., 2020); distillation methods BYOL, SimSiam, OBoW (Grill et al., 2020; Chen and He, 2021; Gidaris et al., 2020); and information maximization methods Barlow Twins and VicReg (Zbontar et al., 2021; Bardes et al., 2021). These advances have likewise stimulated development in representation learning for reinforcement learning. A line of work includes unsupervised losses as an auxiliary objective during RL training to improve data efficiency. Such objective can be contrastive (Laskin et al., 2020; Zhu et al., 2020) or non-contrastive (Schwarzer et al., 2021a; Yu et al., 2022). ST-DIM (Anand et al., 2019), ATC (Stooke et al., 2021) and BVS-DIM (Mengistu et al., 2022) incorporate temporal information in their contrastive objective, adapting similar techniques from the unsupervised video representation learning (Sermanet et al., 2018). Proto-RL (Yarats et al., 2021a) uses a SwAV-like objective to learn representation as well as guide effective exploration during pre-training. 
Similarly, CRL (Du et al., 2021) trains a policy to optimize a SimCLR loss, then shows transfer to RL, imitation learning and image classification. Closer to our approach, SGI (Schwarzer et al., 2021b) pretrains both an encoder and forward prediction model by minimizing the distance between predictions and target latents using BYOL, and the encoder is recycled during RL for improved data efficiency. While different in spirit, many model-based methods also train an encoder from a corpus of trajectories, either by explicit pixel reconstruction (Kaiser et al., 2020; Hafner et al., 2021) or in embedding space (Ye et al., 2021; Schrittwieser et al., 2020). Self-supervised representations have also been used for imitation learning (Aytar et al., 2018; Pari et al., 2021) as well as exploration (Burda et al., 2019a). 2.2 REPRESENTATION PROBING IN REINFORCEMENT LEARNING Some prior works (Racah and Pal, 2019; Guo et al., 2018; Anand et al., 2019; Higgins et al., 2018; Dittadi et al., 2022) evaluate the quality of their pretrained representations by probing for ground truth state variables such as agent/object locations, game scores or model-specific quantities (e.g., the ELBO). Das et al. (2020) propose to probe representations with natural language question-answering. Despite the efficiency of these probing methods, their designs are highly domain-specific and require careful handcrafting for each environment. In addition, they fail to demonstrate the actual correlation between probing and RL performances, which makes their practical usefulness uncertain. On the other hand, the authors of ATC (Stooke et al., 2021) propose to evaluate representations by finetuning for RL tasks using the pretrained encoder with weights frozen. Similarly, Laskin et al. (2021) propose a unified benchmark for SSL methods in continuous control but still require full RL training. Our work seeks to bridge these two approaches by demonstrating the correlation between linear probing and RL performances, as well as designing probing tasks that are generalizable across environments. 3 A FRAMEWORK TO DEVELOP UNSUPERVISED REPRESENTATIONS FOR RL In this section, we detail our proposed framework for training and evaluating unsupervised representations for reinforcement learning. 3.1 UNSUPERVISED PRE-TRAINING The network is first pre-trained on a large corpus of trajectories. Formally, we define a trajectory τi of length Ti as a sequence of tuples τi = [(ot, at) | t ∈ [1, Ti]], where ot is the observation of the state at time t in the environment and at is the action taken in this state. This setting is closely related to Batch RL (Lange et al., 2012), with the crucial difference that the reward is not being observed. In particular, it should be possible to use the learned representations to maximize any reward (Touati and Ollivier, 2021). The training corpus corresponds to a set of such trajectories: Dunsup = {τ1, · · · , τn}. We note that the policy used to generate this data is left unspecified in this formulation, and is bound to be environment-specific. Since unsupervised methods usually necessitate a lot of data, this pre-training corpus is required to be substantial. In some domains, it might be straightforward to collect a large number of random trajectories to constitute Dunsup. In some other cases, like self-driving, where generating random trajectories is undesirable, expert trajectories from humans can be used instead.
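To make the data layout of Section 3.1 concrete, here is a minimal Python sketch of how a reward-free pretraining corpus could be represented; the names Step and Trajectory and the array shapes are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Step:
    observation: np.ndarray  # o_t, e.g. a stack of 84x84 grayscale frames
    action: int              # a_t, the discrete action taken in this state

@dataclass
class Trajectory:
    steps: List[Step]        # tau_i = [(o_t, a_t) for t = 1..T_i]

    def __len__(self) -> int:
        return len(self.steps)

# D_unsup = {tau_1, ..., tau_n}: observations and actions only, no rewards are stored.
unsup_corpus: List[Trajectory] = []
```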
The goal of the pre-training step is to learn the parameters θ of an encoder ENCθ which maps any observation o of the state s (for example raw pixels) to a representation e = ENCθ(o). This representation must be amenable to the downstream control task, for example learning a policy. 3.2 EVALUATION In general, the evaluation of RL algorithms is tricky due to the high variance in performance (Henderson et al., 2018). This requires evaluating many random seeds, which creates a computational burden. We side-step this issue by formulating an evaluation protocol which is light-weight and purely supervised. Specifically, we identify two proxy supervised tasks that are broadly applicable and relevant for control. We further show in the experiment section that they are sound, in the sense that models’ performance on the proxy tasks strongly correlates with their performance in the downstream control task of interest. Similar to the evaluation protocol typically used for computer vision models, we rely on linear probing, meaning that we train only a linear layer on top of the representations, which are kept frozen. Reward Probing Our first task consists in predicting the reward observed in a given state. For this task, we require a corpus of trajectories Drew = {τ′1, · · · , τ′m} for which the observed rewards are known, i.e. τ′i = [(ot, at, rt) | t ∈ [1, Ti]]. In the most general setting, it can be formulated as a regression problem, where the goal is to minimize the following loss: $\mathcal{L}_{\text{reward-reg}}(\psi) = \frac{1}{|D_{\text{rew}}|} \sum_{\tau'_i \in D_{\text{rew}}} \frac{1}{|\tau'_i|} \sum_{(o_t, a_t, r_t) \in \tau'_i} \left\lVert l_\psi(\mathrm{ENC}_\theta(o_t)) - r_t \right\rVert^2$. Here, the only learnt parameters ψ are those of the linear prediction layer lψ. In practice, in many environments where rewards are sparse, the presence or absence of a reward is more important than its magnitude. To simplify the problem in those cases, we can cast it as a binary prediction problem instead (this could be extended to ternary classification if the sign of the reward is of interest): $\mathcal{L}_{\text{reward-classif}}(\psi) = \frac{1}{|D_{\text{rew}}|} \sum_{\tau'_i \in D_{\text{rew}}} \frac{1}{|\tau'_i|} \sum_{(o_t, a_t, r_t) \in \tau'_i} \mathrm{BinaryCE}\left(\mathbb{1}_{r_t > 0},\; l_\psi(\mathrm{ENC}_\theta(o_t))\right)$. Reward prediction is closely related to value prediction, a central objective in RL that is essential for value-based control and the critic in actor-critic methods. The ability to predict instantaneous reward, akin to predicting value with a very small discount factor, can be viewed as a lower bound on the learned representation’s ability to encode the value function, and has been demonstrably helpful for control, particularly in sparse reward tasks (Jaderberg et al., 2017). Thus, we hypothesize reward prediction accuracy to be a good probing proxy task for our setting as well. Action prediction Our second task consists in predicting the action taken by an expert in a given state. For this task, we require a corpus of trajectories Dexp = {τ1, · · · , τn} generated by an expert policy. We stress that this dataset may be much smaller than the pretraining corpus since we only require to fit and evaluate a linear model. The corresponding objective is as follows: $\mathcal{L}_{\text{action-classif}}(\psi) = \frac{1}{|D_{\text{exp}}|} \sum_{\tau_i \in D_{\text{exp}}} \frac{1}{|\tau_i|} \sum_{(o_t, a_t) \in \tau_i} \mathrm{CrossEntropy}\left(a_t,\; l_\psi(\mathrm{ENC}_\theta(o_t))\right)$. This task is closely related to imitation learning; however, we are not concerned with the performance of the policy that we learn as a by-product.
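To illustrate the probing protocol, the following is a minimal PyTorch-style sketch of the binary reward probe: a single linear layer lψ trained on frozen encoder features with a binary cross-entropy loss. The encoder interface, tensor shapes and full-batch training loop are simplifying assumptions; the action probe is obtained analogously by swapping in a cross-entropy loss over the expert's actions.

```python
import torch
import torch.nn as nn

def train_reward_probe(encoder: nn.Module, obs: torch.Tensor, rewards: torch.Tensor,
                       epochs: int = 100, lr: float = 1e-2) -> nn.Linear:
    """Fit l_psi on frozen features to predict whether a reward occurs (r_t > 0)."""
    encoder.eval()
    with torch.no_grad():
        feats = encoder(obs).flatten(1)      # frozen representations ENC_theta(o_t)
    labels = (rewards > 0).float()           # indicator 1_{r_t > 0}
    probe = nn.Linear(feats.shape[1], 1)     # the only trainable parameters psi
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):                  # full-batch for brevity; use minibatches in practice
        opt.zero_grad()
        loss = loss_fn(probe(feats).squeeze(1), labels)
        loss.backward()
        opt.step()
    return probe
```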
4 SELF PREDICTIVE REPRESENTATION LEARNING FOR RL In our work, we focus on evaluating and improving a particular class of unsupervised pretraining algorithms that involves using a transition model to predict its own representations in the future (Schwarzer et al., 2021b; Guo et al., 2018; Gelada et al., 2019). This pretraining modality is especially well suited for RL, since the transition model can be conditioned on agent actions, and can be repurposed for model-based RL after pretraining. Our framework is depicted in Fig. 2. In this section, we present the main design choices, and we investigate their performance in Section 5. 4.1 TRANSITION MODELS Our baseline transition model is a 2D convolutional network applied directly to the spatial output of the convolutional encoder (Schwarzer et al., 2021b; Schrittwieser et al., 2020). The network consists of two 64-channel convolutional layers with 3x3 filters. The action is represented as a one-hot encoding spatially replicated (in a 2D map) and concatenated with the representation input along the channel dimension. We believe a well-established sequence modeling architecture such as a GRU can serve as a superior transition model. Its gating mechanisms should be better at retaining information from both the immediate and distant past, which is especially helpful for learning dynamics in a partially observable environment. Encoder: ê0 = e0 = ENCθ(o0); RecurrentModel: êt = fϕ(êt−1, at−1). In addition to the deterministic GRU model above, we also experiment with a GRU variant where we introduce stochastic states to allow our model to generalize better to stochastic environments, such as Atari with sticky actions (Machado et al., 2018). Our model is based on the RSSM from DreamerV2 (Hafner et al., 2021), with the main difference being that while pixel reconstruction is used as the SSL objective in the original work, we minimize the distance between predictions and targets purely in the latent space. Following DreamerV2, we optimize the latent variables using straight-through gradients (Bengio et al., 2013), and minimize the distance between the posterior (z) and prior (ẑ) distributions using a KL loss. Encoder: et = ENCθ(ot); RecurrentModel: ht = fϕ(ht−1, zt−1, at−1); PosteriorModel: zt ∼ pϕ(zt|ht, et); PriorPredictor: ẑt ∼ jϕ(ẑt|ht); LatentMerger: êt = gϕ(ht, zt). 4.2 PREDICTION OBJECTIVES The objective of self-predictive representation learning is to minimize the distance between the predicted and the target representations, while ensuring that they do not collapse to a trivial solution. Our baseline prediction objective is BYOL (Grill et al., 2020), which is used in SGI (Schwarzer et al., 2021b). The predicted representation êt+k and the target representation ẽt+k are first projected to lower dimensions to produce ŷt+k and ỹt+k. BYOL then maximizes the cosine similarity between the predicted and target projections, using a linear prediction function q to translate from ŷ to ỹ: $\mathcal{L}^{\mathrm{BYOL}}_{\theta}(\hat{y}_{t:t+k}, \tilde{y}_{t:t+k}) = -\sum_{k=1}^{K} \frac{q(\hat{y}_{t+k}) \cdot \tilde{y}_{t+k}}{\lVert q(\hat{y}_{t+k}) \rVert_2 \cdot \lVert \tilde{y}_{t+k} \rVert_2}$. In the case of BYOL, the target encoder and projection module are exponential moving averages of the online weights, and the gradients are blocked on the target branch.
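The sketch below illustrates the deterministic GRU rollout êt = fϕ(êt−1, at−1) and a BYOL-style prediction loss; the layer sizes, the initialization of the hidden state, and the omission of the projection and predictor heads are simplifying assumptions rather than the exact SGI architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetGRUTransition(nn.Module):
    """Action-conditioned latent rollout: e_hat_t = f_phi(e_hat_{t-1}, a_{t-1})."""
    def __init__(self, rep_dim: int, num_actions: int, hidden_dim: int = 600):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, 250)
        self.cell = nn.GRUCell(250, hidden_dim)
        self.to_hidden = nn.Linear(rep_dim, hidden_dim)
        self.to_rep = nn.Linear(hidden_dim, rep_dim)

    def rollout(self, e0: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.to_hidden(e0))            # initialize the hidden state from e_0
        preds = []
        for k in range(actions.shape[1]):             # unroll K prediction steps
            h = self.cell(self.action_emb(actions[:, k]), h)
            preds.append(self.to_rep(h))              # e_hat_{t+k}
        return torch.stack(preds, dim=1)              # (batch, K, rep_dim)

def byol_loss(pred_proj: torch.Tensor, target_proj: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between predictions and stop-gradient targets."""
    pred = F.normalize(pred_proj, dim=-1)
    target = F.normalize(target_proj.detach(), dim=-1)  # gradients blocked on the target branch
    return -(pred * target).sum(dim=-1).mean()
```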
As an alternative prediction objective, we experiment with Barlow Twins (Zbontar et al., 2021). Similar to BYOL, Barlow Twins minimizes the distance between the latent representations of the online and target branches; however, instead of using a predictor module and a stop gradient on the target branch, Barlow Twins avoids collapse by pushing the cross-correlation matrix between the projection outputs of the two branches to be as close to the identity matrix as possible. To adapt Barlow Twins, we calculate the cross-correlation across the batch and time dimensions: $\mathcal{L}_{\mathrm{BT}}(\hat{y}_{t:t+k}, \tilde{y}_{t:t+k}) = \sum_{i} (1 - C_{ii})^2 + \lambda \sum_{i} \sum_{j \neq i} C_{ij}^2$, where $C_{ij} = \frac{\sum_{b,t} \hat{y}_{b,t,i} \, \tilde{y}_{b,t,j}}{\sqrt{\sum_{b,t} \hat{y}_{b,t,i}^2} \cdot \sqrt{\sum_{b,t} \tilde{y}_{b,t,j}^2}}$, λ is a positive constant trading off the importance of the invariance and covariance terms of the loss, C is the cross-correlation matrix computed between the projection outputs of the two branches along the batch and time dimensions, b indexes batch samples, t indexes time, and i, j index the vector dimension of the projection output. By enabling gradients on both the prediction and target branches, the Barlow objective pushes the predictions towards the representations, while regularizing the representations toward the predictions. In practice, learning the transition model takes time and we want to avoid regularizing the representations towards poorly trained predictions. To address this, we apply a higher learning rate to the prediction branch. We call this technique Barlow Balancing, and implement it in Algorithm 1. Algorithm 1: PyTorch-style pseudocode for Barlow Balancing: BarlowLoss = µ ∗ LBT(ŷ, ỹ.detach()) + (1 − µ) ∗ LBT(ŷ.detach(), ỹ). 4.3 OTHER SSL OBJECTIVES SGI’s authors (Schwarzer et al., 2021b) showed that in the absence of other SSL objectives, pretraining with the BYOL prediction objective alone results in representation collapse; the addition of an inverse dynamics modeling loss is necessary to prevent collapse, while the addition of a goal-oriented RL loss results in a minor downstream RL performance improvement. In inverse dynamics modeling, the model is trained using cross-entropy to model p(at|ŷt+k, ỹt+k+1), effectively predicting the transition action between two adjacent states. The goal-oriented loss tries to predict the distance to states in the near future from the sampled trajectories (details in the Appendix). 5 RESULTS 5.1 EXPERIMENTAL DETAILS We conduct experiments on the Arcade Learning Environment benchmark (Bellemare et al., 2013). Given the multitude of pretraining setups we investigate, we limit our experiments to 9 Atari games (Amidar, Assault, Asterix, Boxing, Demon Attack, Frostbite, Gopher, Krull, and Seaquest). Pretraining We use the publicly-available DQN replay dataset (Agarwal et al., 2020), which contains data from training a DQN agent for 50M steps with sticky actions (Machado et al., 2018). We select 1.5 million frames from the 3.5 to 5 millionth steps of the replay dataset, which constitutes trajectories of a weak, partially trained agent. We largely follow the recipe of SGI (Schwarzer et al., 2021b), where we jointly optimize the self-prediction, goal-conditioned RL, and inverse dynamics modeling losses for 20 epochs; in some of our experiments we remove one or both of the last two objectives. We use the data augmentations introduced by Yarats et al. (2021b). All experiments are performed on a single MI50 AMD GPU, and the pretraining process took 2 to 8 days depending on the model.
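Expanding on the pseudocode of Algorithm 1, here is a minimal PyTorch sketch of the Barlow Twins objective computed over the combined batch and time dimensions, together with the Barlow Balancing weighting; the standardization of the projections and the value of λ are assumptions made for the example.

```python
import torch

def barlow_twins_loss(y_hat: torch.Tensor, y_tilde: torch.Tensor, lam: float = 5e-3) -> torch.Tensor:
    """y_hat, y_tilde: (batch, time, proj_dim) projections of predictions and targets."""
    b, t, d = y_hat.shape
    zh = y_hat.reshape(b * t, d)
    zt = y_tilde.reshape(b * t, d)
    # standardize each feature over the combined batch/time dimension (batch-norm-like)
    zh = (zh - zh.mean(0)) / (zh.std(0) + 1e-5)
    zt = (zt - zt.mean(0)) / (zt.std(0) + 1e-5)
    c = zh.T @ zt / (b * t)                                        # cross-correlation matrix C
    diag = torch.diagonal(c)
    on_diag = (1.0 - diag).pow(2).sum()                            # invariance term
    off_diag = c.pow(2).sum() - diag.pow(2).sum()                  # redundancy-reduction term
    return on_diag + lam * off_diag

def barlow_balanced_loss(y_hat, y_tilde, mu: float = 0.7) -> torch.Tensor:
    """Barlow Balancing: mu > 0.5 applies a higher effective learning rate to the prediction branch."""
    return mu * barlow_twins_loss(y_hat, y_tilde.detach()) + (1.0 - mu) * barlow_twins_loss(y_hat.detach(), y_tilde)
```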
Reward probing We focus on the simplified binary classification task of whether a reward occurs in a given state. We use 100k frames from the 1-1.1 millionth step of the replay dataset, with a 4:1 train/eval split. We train a logistic regression model on frozen features using the Cyanure (Mairal, 2019) library, with the MISO algorithm (Mairal, 2015) coupled with QNING acceleration (Lin et al., 2019) for a maximum of 300 steps. We do not use any data augmentation. We report the mean F1 averaged across all 9 games. On an MI50 AMD GPU, each probing run takes 10 minutes. Action probing We use the last 100k (4:1 train/eval split) frames of the DQN replay dataset, which correspond to a fully trained DQN agent. We train a linear layer on top of frozen, un-augmented features for 12 epochs with a softmax focal loss (Lin et al., 2017) using the SGD optimizer with learning rate 0.2, batch size 256, 1e-6 weight decay, and a stepwise scheduler with step size 10 and gamma 0.1. We report the multiclass F1 (weighted average of the F1 scores of each class) averaged across all games. RL evaluation We focus on the Atari 100k benchmark (Kaiser et al., 2020), where only 100k interactive steps are allowed by the agent. This is roughly equivalent to two hours of human play, providing an approximation of human-level sample efficiency. We follow the training protocol of Schwarzer et al. (2021b) using the Rainbow algorithm (Hessel et al., 2018) with the following differences: we freeze the pretrained encoder (thus only training the Q head), do not apply auxiliary SSL losses while fine-tuning, and finally disable noisy layers and rely instead on ϵ-greedy exploration. These changes are made to make the RL results reflect as closely as possible the performance induced by the quality of the representations. On an MI50 AMD GPU, each run takes between 8 and 12 hours. We evaluate the agent’s performance using the human-normalized score (HNS), defined as (agent score − random score) / (human score − random score). We calculate this per game, per seed by averaging scores over 100 evaluation trajectories at the end of training. For aggregate metrics across games and seeds, we report the median and interquartile mean (IQM). For the median, we first average the HNS across seeds for each game, and report the median of the averaged HNS values. For the IQM, we first take the middle 50% of scores across both seeds and games, then report the average. While the median is commonly reported for Atari100k, recent work has recommended the IQM as a superior aggregate metric for the RL setting due to its smaller uncertainty (Agarwal et al., 2021); we also follow the cited work to report the 95% bootstrapped confidence intervals for these aggregate metrics. Unless specified otherwise, the experiments use the medium ResNet-M from Schwarzer et al. (2021b), and the inverse dynamics loss as an auxiliary loss. In BYOL experiments, the target network is an exponential moving average of the online network, while in Barlow Twins both networks are identical, following the original papers. For additional details regarding model architectures and hyperparameters used during pretraining and RL evaluation, please refer to the Appendix. 5.2 IMPACT OF TRANSITION MODELS AND PREDICTION OBJECTIVES
Table 1: F1 scores on probing tasks for different transition models and prediction objectives. All standard deviations are on the order of 1e-4.
Pred Obj | Transition | Reward | Action
BYOL | Conv-det | 64.9 | 22.7
BYOL | GRU-det | 62.2 | 26.8
BYOL | GRU-latent | 63.4 | 23.2
Barlow0.7 | Conv-det | 52.7 | 24.9
Barlow0.7 | GRU-latent | 67.5 | 26.2
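For reference, the aggregate metrics described in the RL evaluation paragraph above (human-normalized score and interquartile mean) can be computed with a short sketch like the following; the example scores are placeholders.

```python
import numpy as np

def human_normalized_score(agent: float, random: float, human: float) -> float:
    """HNS = (agent_score - random_score) / (human_score - random_score)."""
    return (agent - random) / (human - random)

def interquartile_mean(scores: np.ndarray) -> float:
    """IQM: mean of the middle 50% of values (trim 25% from each tail)."""
    s = np.sort(scores)
    k = len(s) // 4
    return float(s[k:len(s) - k].mean())

# Example with placeholder HNS values pooled across games and seeds.
hns = np.array([0.12, 0.40, 0.05, 0.33, 0.21, 0.58, 0.02, 0.47])
print(interquartile_mean(hns), np.median(hns))
```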
Table 2: F1 scores on probing tasks for different Barlow variants. All standard deviations are on the order of 1e-4 and are omitted below.
Pred Obj | Reward | Action
Barlow0.5 | 65.0 | 26.3
Barlow0.7 | 67.5 | 26.2
Barlow1 | 65.0 | 24.7
Barlowrand | 67.7 | 25.8
In Table 1, we report the mean probing F1 scores for the convolutional, deterministic GRU, and latent GRU transition models trained using either the BYOL or Barlow prediction objective. When using the BYOL objective, the relative probing strengths of the different transition models are somewhat ambiguous: while the convolutional model results in a better reward probing F1, the GRU models are superior in terms of expert action probing. Interestingly, we observe that after replacing BYOL with Barlow, the probing scores of the latent model improve, while those of the deterministic models deteriorate. Overall, pre-training with the GRU-latent transition model and the Barlow prediction objective results in the representations with the best overall probing quality. Since the deterministic models’ predictions are likely to regress to the mean, allowing gradients to flow through the target branch in the case of the Barlow objective can regularize the representations towards poor predictions, which can explain their inferior probing performance. Introducing latent variables can alleviate this issue through better predictions. We stress that the transition models are not used during probing, only the encoder is. These experiments show that having a more expressive forward model during pre-training has a direct impact on the quality of the learnt representations. In Fig. 3, we investigate the impact of the latent variable on the information contained in the representations, by training a decoder on frozen features. In Table 2, we show the results from experimenting with different variants of the Barlow objective. We find that using a higher learning rate for the prediction branch (Barlow0.7, with a 7:3 prediction-to-target lr ratio) results in better probing outcomes than using equal learning rates (Barlow0.5) or not letting gradients flow in the target branch altogether (Barlow1, where the target encoder is a copy of the online encoder). This suggests that while it is helpful to regularize the representations towards the predictions, there is a risk of regularizing them towards poorly trained ones. This can be addressed by applying a higher learning rate on the prediction branch. We also demonstrate that using a frozen, random target network (Barlowrand) results in good features, and in our experiments it achieves the best reward probing performance. This contradicts findings from the vision domain (Grill et al., 2020), but corroborates self-supervised results from other domains such as speech (Chiu et al., 2022). Random networks have also been shown to exhibit useful inductive biases for exploration (Burda et al., 2019b;a). An explanation is that random targets act as a regularizer that prevents partial collapse by enforcing a wide range of features to be encoded by the model. 5.3 IMPACT OF AUXILIARY SSL OBJECTIVES AND ENCODERS SSL objective Although pretraining with multiple objectives can sometimes result in better downstream performance, in practice multiple objectives also make hyperparameter tuning and debugging harder; it is therefore desirable to use the smallest number of objectives that yields comparable performance. In Table 4, we show the effects of the inverse dynamics modeling (inv) and goal-conditioned RL (goal) objectives on probing performance.
The BYOL model experiences partial collapse without the inverse dynamics modeling loss, while the addition of the goal loss improves the probing performance slightly. This is in congruence with the results reported by Schwarzer et al. (2021b) for the same ablations. The Barlow-only model performs significantly better than the BYOL-only model in terms of probing scores, indicating that the Barlow objective is less prone to collapse in the predictive SSL setting. Similar to the BYOL model, the Barlow model can also be improved with inverse dynamics modeling, while the addition of the goal loss has a slight negative impact. Encoders SGI (Schwarzer et al., 2021b) showed that using bigger encoders during pretraining results in improved downstream RL performance. We revisit this topic to find out whether the pretrained representations from bigger networks also have better probing qualities. We experiment with the medium (ResNet-M) and large (ResNet-L) residual networks from SGI. In Table 5 we show that Barlow models pretrained using the larger ResNet have improved probing scores. 5.4 CORRELATIONS BETWEEN PROBING AND RL PERFORMANCES If our goal is to use linear probing as a guide to identify superior pretraining setups for RL, the probes are only useful to the extent that they correlate with the actual downstream RL performance. We perform RL evaluations for 9 representative setups (the best settings from each of Tables 1, 2, 4, and 5), as well as two contrastive methods, ST-DIM (Anand et al., 2019) and ATC (Stooke et al., 2021), and a reconstruction-based method, VAE-T (Stooke et al., 2021); see the appendix for details on ATC, ST-DIM and VAE-T. We report their probing and aggregate RL metrics in Table 3, with the confidence intervals of the aggregate RL metrics depicted on the right. We find that the rank correlations between the reward and action probing F1 scores and the RL aggregate metrics are significant (Figure 1). In summary, our results show that the proposed probing scheme is a reliable guide for designing pretraining setups that deliver significant downstream RL performance improvements. 6 CONCLUSION In this paper we have investigated the opportunity to replace costly RL evaluation with lightweight linear probing tasks to assess the quality of learned representations. Reward and action probing are task-agnostic and should cover most practical applications. Using this methodology to guide us, we have demonstrated the impact of a number of key design choices in the pre-training methodology. We hope that these results encourage the research community to systematically explore the design space to further improve the quality of self-supervised representations for RL. A MODELS AND HYPER-PARAMETERS A.1 BACKBONES The M and L models are ResNet-M and ResNet-L from SGI (Schwarzer et al., 2021b). The ResNet-M encoder consists of inverted residual blocks with an expansion ratio of 2, with batch normalization applied after each convolutional layer; it uses 3 groups with 32, 64, and 64 channels, and has 3 residual blocks per group; it down-scales the input by a factor of 3 in the first group and 2 in the latter 2 groups. This yields a representation of shape 64x7x7 when applied to 84x84-dimensional Atari frames. ResNet-L uses 3 groups with 48, 96, and 96 channels, and has 5 residual blocks per group; it uses a larger expansion ratio of 4, producing a representation shape of 96x7x7 from an 84x84 frame. This enlargement increases the number of parameters by approximately a factor of 5.
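A minimal sketch of the group structure of the ResNet-M-style encoder described above (3 groups of 32/64/64 channels, 3 residual blocks per group, downscaling by 3, 2, 2); the residual block itself is simplified and does not reproduce the exact inverted-residual design.

```python
import torch
import torch.nn as nn

class SimpleResidual(nn.Module):
    """Simplified stand-in for an inverted residual block with expansion ratio 2."""
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.BatchNorm2d(hidden), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.block(x))

def resnet_m_like(in_channels: int = 4) -> nn.Sequential:
    groups, strides, layers = [32, 64, 64], [3, 2, 2], []
    prev = in_channels
    for ch, stride in zip(groups, strides):
        layers += [nn.Conv2d(prev, ch, 3, stride=stride, padding=1), nn.BatchNorm2d(ch), nn.ReLU()]
        layers += [SimpleResidual(ch) for _ in range(3)]
        prev = ch
    return nn.Sequential(*layers)

# A stack of 4 frames at 84x84 yields a 64x7x7 feature map.
print(resnet_m_like()(torch.zeros(1, 4, 84, 84)).shape)
```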
The S model is the model used in Stooke et al. (2021). It consists of three convolutional layers, with [32, 64, 64] channels, kernel sizes [8, 4, 3], and strides [4, 2, 1], listed from first to last layer. A.2 TRANSITION MODELS We experimented with three transition models: a convolutional model, a deterministic GRU, and a latent GRU. Our convolutional model is based on SGI (Schwarzer et al., 2021b). The input to the convolutional transition model is the concatenation of the spatially replicated 2D action map and the representation et along the channel dimension. The network itself consists of two 64-channel convolutional layers with 3x3 filters, separated by ReLU activation and batch normalization layers. The deterministic GRU has hidden dimension 600 and input dimension 250. The input at is prepared by passing the one-hot action vector through a 250-dimensional embedding layer. The initial hidden state ê0 is generated by projecting the representation e0 through a 600-dimensional linear layer with ELU activation and dropout. Layer normalization is applied to the hidden input at all timesteps. The latent GRU model is based on DreamerV2’s RSSM (Hafner et al., 2021), and consists of a recurrent model, a posterior model, a prior predictor, and a latent merger. The recurrent model has a hidden dimension and input dimension of 600. The initial hidden state h0 and input z0 are zero vectors. The flattened stochastic variables zt and the one-hot action vector at are first concatenated and then projected to 600 dimensions through a linear layer with ELU activation, before being passed into the recurrent model as input. Layer normalization is applied to the hidden input at all non-zero timesteps. The posterior model is a two-layer MLP with a 600-dimensional bottleneck separated by ELU activation. It takes the concatenation of the representation et and the recurrent hidden output ht as input, and outputs a 1024-dimensional vector representing the 32-dimensional logits for 32 latent categorical variables. zt is sampled from the posterior logits. The prior model is a two-layer MLP with a 600-dimensional bottleneck separated by ELU activation. Its output format is the same as that of the posterior model. ẑt is sampled from the prior logits. The latent merger is a linear layer that projects the concatenation of ht and the flattened zt to the same dimension as the representation et. A.3 SSL PROJECTION MODULE In the case of the deterministic GRU, ê is first projected to the same dimension as the representation through a linear layer. Henceforth we shall assume that ê underwent this step for GRU-det. The predicted representation ê and target representation ẽ are projected to 1024-dimensional vectors ŷ and ỹ through a linear layer. The BYOL objective involves processing ŷ with an additional linear layer q with output dimension 1024. The Barlow objective involves applying batch normalization to ŷ and ỹ prior to taking the covariance and variance losses. The inverse dynamics model is a two-layer MLP with a 256-dimensional bottleneck separated by ReLU activation. It takes the concatenation of ŷt and ỹt+1 as input, and outputs logits with dimension equal to the number of actions. A.4 ATC, VAE-T, ST-DIM We use the implementation, hyperparameters and architecture from the codebases of Stooke et al. (2021) and Stooke and Abbeel (2019) for these models. We change the dataset to the one used in all our experiments: we use the dataset described in Section 5 to train these models, and train all methods for 58,500 updates.
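Returning to the convolutional transition model of A.2, the following is a minimal sketch of the action-conditioned convolutional step, in which the one-hot action is spatially replicated and concatenated with the latent feature map before two 3x3 convolutions; the exact placement of normalization and activations is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvTransition(nn.Module):
    """Predict the next latent feature map from the current one and a discrete action."""
    def __init__(self, rep_channels: int = 64, num_actions: int = 18):
        super().__init__()
        self.num_actions = num_actions
        self.conv1 = nn.Conv2d(rep_channels + num_actions, 64, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2 = nn.Conv2d(64, rep_channels, 3, padding=1)

    def forward(self, rep: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        b, _, h, w = rep.shape
        # one-hot action replicated into a (B, num_actions, H, W) spatial map
        a_map = F.one_hot(action, self.num_actions).float().view(b, -1, 1, 1).expand(b, self.num_actions, h, w)
        x = torch.cat([rep, a_map], dim=1)
        return self.conv2(F.relu(self.bn1(self.conv1(x))))

# Usage: next_rep = ConvTransition()(rep, action) with rep of shape (B, 64, 7, 7).
```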
ATC (Augmented Temporal Contrast) uses an InfoNCE loss between the outputs of the momentum encoder and the online branch, applied to different augmentations of an image, to pre-train the encoder. VAE-T from Stooke et al. (2021) uses a variational auto-encoder (Kingma and Welling, 2014) objective to reconstruct the frame from the next time step given an image at the current time step. ST-DIM (Anand et al., 2019) also uses an InfoNCE objective, and in addition to the traditional global-global infomax, introduces a global-local infomax by using local representations taken from the feature map output of the convolutional encoder and the globally pooled feature vector as positive pairs. For more details, we refer the reader to the referenced works. A.5 IMAGE RECONSTRUCTION MODEL We used a decoder architecture that mirrors the structure of the ResNet-M encoder. In decoding, instead of transposed convolutions we used nearest-neighbor upsampling followed by a regular convolution (Odena et al., 2016). We used the mean squared error between the reconstructed pixels and the target image as the training criterion. Models were trained and evaluated on the same data as reward and action probing, for 30 epochs using the Adam optimizer with learning rate 0.001. A.6 HYPERPARAMETERS See Tables 6, 7, 8, and 9 for hyperparameter values. For ATC, ST-DIM and VAE-T hyperparameters, see Stooke et al. (2021). A.7 IMAGE AUGMENTATION We use the same image augmentations as used in SGI (Schwarzer et al., 2021b), which itself used the augmentations in DrQ (Yarats et al., 2021b), in both pretraining and fine-tuning. We specifically apply random crops (4 pixel padding and 84x84 crops) and image intensity jittering. A.8 GOAL-ORIENTED RL LOSS The goal-oriented RL loss is taken directly from SGI (Schwarzer et al., 2021b). This objective trains a goal-conditioned DQN, with rewards specified by proximity to sampled goals. First, a goal g is sampled to be the state encoding either of a near-future state in the current trajectory (up to 50 steps in the future), or, with probability 20%, of a future state in another trajectory in the current batch. Then, we add Gaussian noise to obtain the final goal g: g ← αn + (1 − α)g, where α ∼ Uniform(0.5), and n is a vector sampled from an isotropic Gaussian and normalized to have length 1. Then, in order to obtain the reward of taking action at going from state st to st+1, we first encode the states with the target encoder: ẽt = ENCtarget(ot) and ẽt+1 = ENCtarget(ot+1). We then calculate the reward as R(ẽt, ẽt+1) = d(ẽt, g) − d(ẽt+1, g), where $d(\tilde{e}_t, g) = \exp\left(2\,\frac{\tilde{e}_t \cdot g}{\lVert \tilde{e}_t \rVert_2 \cdot \lVert g \rVert_2} - 2\right)$. We use FiLM (Perez et al., 2018) to condition the Q-function Q(ot, at, g) on g, and optimize the model using DQN (Mnih et al., 2015). B FORWARD MODEL PROBING While our principal goal is to demonstrate the correlation between representation probing and offline RL performances, we also apply the reward probing technique to predictions in order to evaluate the quality of transition models under different pretraining setups. In Table 10, we show the effects of using different transition models during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder and the inverse dynamics loss. The goal loss is also applied to the BYOL models.
Table 8: GRU-latent specific hyperparameters.
Parameter | Setting
kl loss weight | 0.1
kl balance | 0.95
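As a concrete illustration of the goal-conditioned reward of A.8, here is a minimal sketch of the noisy goal construction and the latent-space reward; the range of α and the tensor shapes are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def sample_noisy_goal(goal: torch.Tensor) -> torch.Tensor:
    """g <- alpha * n + (1 - alpha) * g, with n a unit-norm Gaussian direction."""
    alpha = torch.rand(goal.shape[0], 1) * 0.5           # assuming alpha ~ Uniform(0, 0.5)
    n = F.normalize(torch.randn_like(goal), dim=-1)
    return alpha * n + (1.0 - alpha) * goal

def similarity(e: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    """d(e, g) = exp(2 * cos(e, g) - 2), largest when e is aligned with the goal."""
    return torch.exp(2.0 * F.cosine_similarity(e, g, dim=-1) - 2.0)

def goal_reward(e_t: torch.Tensor, e_t1: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
    """Reward for the transition, following the formula in A.8: R = d(e_t, g) - d(e_{t+1}, g)."""
    return similarity(e_t, goal) - similarity(e_t1, goal)
```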
Table 10: Mean reward probing F1 scores for pretraining setups with different transition models, evaluated on the 5th and 10th predictions. All standard deviations are on the order of 1e-4.
Pred Obj | Transition | Pred 5 | Pred 10
BYOL | Conv-det | 33.1 | 28.4
BYOL | GRU-det | 33.0 | 27.4
BYOL | GRU-latent | 33.4 | 28.9
Barlow0.7 | Conv-det | 32.0 | 27.6
Barlow0.7 | GRU-det | 30.1 | 25.0
Barlow0.7 | GRU-latent | 39.5 | 30.2
Table 11: Mean reward probing F1 scores for pretraining setups with different prediction objectives, evaluated on the 5th and 10th predictions. All standard deviations are on the order of 1e-4.
Pred Obj | Pred 5 | Pred 10
BYOL | 33.4 | 28.9
Barlow0.5 | 40.2 | 30.2
Barlow0.7 | 39.5 | 30.2
Barlow1 | 37.4 | 29.7
Barlowrand | 36.8 | 27.5
In the deterministic setting, the predictions of the GRU model are worse than those of the convolutional model. The introduction of stochasticity appears to fix the underlying issue for predictions, resulting in the latent GRU model having the best overall prediction probing performance. One possible explanation for Conv-det having better predictions than GRU-det is that the spatial inductive bias in the convolutional kernels acts as a constraint and helps keep the predictions from regressing to the mean. However, this is more effectively solved by the introduction of latent variables into the GRU during training and inference. In Table 11, we show the effects of using different prediction objectives during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder, the GRU-latent transition model, and the inverse dynamics loss; the goal loss is also applied to the BYOL model. Compared to the BYOL model, the Barlow models generally have higher probing scores for predictions. We also note that for Barlow models, regularizing the representations towards the predictions (by setting the Barlow Balance < 1) improves the quality of the predictions. This is likely because it makes the prediction task easier, making it more likely to learn a capable transition model. This reasoning can also explain why the Barlow model with a frozen, random target network achieves superior probing results for representations (Table 2) but worse results for predictions compared to the other Barlow versions. Predicting a random target representation is likely more difficult than predicting a learned representation, and this may in turn encourage the model to rely more on learning a powerful encoder and posterior model, and less on learning an accurate transition model. C FULL RL RESULTS D STATISTICAL HYPOTHESIS TESTING OF RANK CORRELATION In Fig. 5, we show the correlation results for both the action and reward predictions. We estimate Spearman’s rank correlation coefficient (Spearman’s r) between the linear probing performance and the (interquartile) mean RL human-normalized score (HNS) over 9 Atari games. The reason for using Spearman’s r instead of the Pearson correlation coefficient is that we are interested in whether the relative ranking of the models on the linear probing tasks is indicative of the relative ranking of the same models when RL is trained on top of them. As an example, this allows us to say that if model A out-ranks model B on the reward prediction task, an RL model trained on top of model A’s representations will likely out-perform an RL model trained on top of model B’s representations. However, it does not let us predict by how much model A will out-perform model B. Let di denote the difference in ranking between the linear probing performance and the RL performance for the i-th model; Spearman’s r (denoted as ρ below) is computed as $\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}$ (1), where n is the total number of models we have.
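A small sketch of the rank correlation in Equation (1), together with the permutation-based p-value described in the next paragraph; it assumes there are no ties in the scores, and in practice a library routine such as scipy.stats.spearmanr computes the same coefficient.

```python
import numpy as np

def spearman_rho(probe_scores, rl_scores) -> float:
    """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), with d_i the rank difference for model i (no ties)."""
    probe_ranks = np.argsort(np.argsort(probe_scores))
    rl_ranks = np.argsort(np.argsort(rl_scores))
    d = probe_ranks - rl_ranks
    n = len(probe_scores)
    return 1.0 - 6.0 * float(np.sum(d ** 2)) / (n * (n ** 2 - 1))

def permutation_p_value(probe_scores, rl_scores, n_perm: int = 50_000, seed: int = 0) -> float:
    """One-tailed p-value against the null of no correlation, via random re-orderings."""
    rng = np.random.default_rng(seed)
    observed = spearman_rho(probe_scores, rl_scores)
    null = [spearman_rho(rng.permutation(probe_scores), rl_scores) for _ in range(n_perm)]
    return float(np.mean(np.array(null) >= observed))
```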
We perform statistical hypothesis testing on ρ with the null hypothesis ρ = 0 (no correlation between linear probing performance and RL performance) and the alternative hypothesis ρ > 0 (positive correlation). The null distribution is constructed nonparametrically using permutation testing: we sample random orderings of the observed linear probing performance and RL performance independently and compute ρ. This is repeated 50,000 times to generate the null distribution (which is centered at ρ = 0, as we do not expect randomly ordered values to be correlated). We then compare our observed ρ to this distribution and perform a one-tailed test, reporting the proportion of samples larger than our observed ρ as our p-value. D.1 RANK CORRELATION ON A DIFFERENT DATASET In Fig. 1, we explored the correlation between the RL performance and the reward probing task, where the dataset used for the reward probing was a set of quasi-random trajectories from the DQN dataset, coming from the very beginning of the training run of the DQN agent used to collect the data. It is natural to ask whether the correlation results we obtain are sensitive to the specific dataset used. To put this question to the test, we re-run the same reward probing task, this time on the "expert" dataset, i.e. the last trajectories of the DQN dataset, corresponding to a fully trained agent. The results are shown in Fig. 6. The Spearman’s correlation coefficient that we obtain is exactly the same as the one for the random trajectory dataset (even though the reward statistics are different, see Table 14), showing that the correlation result is not sensitive to the probing dataset used. D.2 CONFIDENCE INTERVAL OF RL PERFORMANCE AS A FUNCTION OF INDEPENDENT RUNS We further show the confidence interval of the estimated mean RL performance as the number of independent runs increases. From our total of 10 independent runs for each game, we sample with replacement k ≤ 10 runs (k being the number of independent runs we “pretend” to have instead of the full 10), independently for each game. We can compute the IQM over this sample to get an estimate of the IQM as if we only had k independent runs. We repeat this process 10,000 times to construct the 95% confidence interval of the empirical IQM for different k’s. Illustrative examples of how much this confidence interval shrinks for different pairs of models are shown in Fig. 7. We observe in Fig. 7 that the mean RL performance estimates have CIs that eventually separate with many independent runs. This is an unbiased but high-variance and computationally intensive estimator of the true expected RL performance. On the other hand, the reward prediction F1 score is a computationally cheap, low-variance and accurate estimator of the relative model ranks in mean RL performance. This further corroborates our previous results of a positive correlation between the reward prediction F1 score and mean RL performance (Fig. 1). E COMPARISON WITH DOMAIN SPECIFIC PROBING BENCHMARKS One of the key advantages of our probing method is that it is domain-agnostic, unlike the previously proposed AtariARI benchmark (Anand et al., 2019), which acquires probing labels through the RAM state of the emulator, making their method impractical for image-based trajectories. To better understand how our probing metrics compare with the domain-specific ones in terms of correlations with RL performances, we perform the AtariARI probing benchmarks using our pretrained encoders on the 4 overlapping games (Boxing, Seaquest, Frostbite, DemonAttack) used in both works.
For AtariARI, we first calculate the average probe F1 scores across categories, then average this quantity across the games. For reward probing, we apply our own protocol detailed in Section 5.1. For RL performance, we use the IQM. We report the correlation between the probing metrics and RL performances across different models. Our results are summarized in Table 13. We find that the correlation between the average probing F1s and RL performances is stronger for our reward probing method. In particular, our probing method has a significant correlation with RL performances (p < 0.05), while the AtariARI probing method does not. F PROBING DURING TRAINING We show the evolution of probing performance as training progresses in Figure 8. G REWARD STATISTICS IN PROBING DATASETS In Table 14, we report the percentage of states that have a non-zero reward in each of the 9 games, for two different subsets of data: • Checkpoint 1, which corresponds to quasi-random trajectories from the beginning of the training process of DQN. This is the data used for the reward probing in Fig. 1. • Checkpoint 50, which is the last checkpoint of the DQN replay dataset, and corresponds to the fully trained DQN agent, which we treat as an expert agent. This data is used for action probing, and for reward probing in Fig. 6. All the games have a fairly small percentage of positive reward states, and we generally observe a higher percentage of reward in checkpoint 50, which is expected since the agent is more capable by then. G.1 IMPACT OF SPARSITY ON THE CORRELATION In Fig. 9, we plot Spearman’s correlation coefficient between the RL performance on each individual game and the reward probing F1, as a function of the percentage of reward observed in each game (see Table 14). We do not observe any particular pattern with respect to the sparsity, suggesting that the probing task is not very sensitive to the sparsity level of each individual game. Note however that, as usual in the Atari benchmark, it is difficult to draw conclusions from any given individual game, and the statistical significance of our results only emerges when considering the set of games as a whole. Indeed, only 3 games achieve individual statistical significance at p < 0.01 (Boxing, Seaquest and Assault), while the others do not obtain statistically significant correlations. H LIMITATIONS One limitation of the current work is that, for the presented probing methods to work, one needs a subset of the data either with known rewards, where ideally rewards are not too sparse, or with expert actions. If neither is available, our method cannot be used. For the reward probing task, the usefulness of the method also depends on the hardness of the reward prediction itself. If the prediction task is too easy, for example because there are rewards at every step, or because the states with rewards are completely different from the ones without (such that even a randomly initialized model would yield features allowing linear separation between the two types of states), then the performance of all the models on this task is going to be extremely similar, with the only differences coming from random noise. In such a case, the performance on the prediction task cannot be used to accurately rank the quality of the features of each of the models. For future work, we would also like to extend the findings of this paper to more settings, for example different environments.
1. What is the focus and contribution of the paper regarding unsupervised visual pretraining? 2. What are the strengths and weaknesses of the proposed evaluation protocol? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions regarding the paper, particularly on its limitations and potential applications?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper develops an evaluation protocol for unsupervised visual pretraining. They learn linear probes to predict expert agent actions and rewards from encoded states. These probes provide a more cost-efficient way of comparing visual representation learning methods for RL. The evaluation protocol is tested on a handful of Atari tasks and the authors show that the performance of networks on the linear probes well correlates with RL performance. Strengths And Weaknesses This paper introduces an interesting idea for the important problem of cost-effective evaluations of visual representations. Currently results are limited to Atari and the evaluation is only performed for a small number of models + self-supervised losses. Because the main goal of this paper is to provide an evaluation protocol that can be used in place of downstream RL performance, it would be helpful to see a much broader range of losses and models as well as more difficult control tasks. It's also unclear how well these evaluation protocols predict downstream performance in the presence of task transfer: e.g., one goal in developing visual representation pre-training methods is to get a good generalizable encoder. Because the evaluation leverages reward information and information about the optimal policy, it doesn't seem like it would predict the fitness of an encoder for new tasks. Clarity, Quality, Novelty And Reproducibility Overall I thought the presentation was straightforward save a few minor confusions: Is section 4.1 a contribution of this paper? From the introduction and abstract, I assumed that the paper's main contribution was the evaluation protocol, but it was unclear if the architecture in Figure 2 was adapted from past work or newly introduced for this task. I think this would be helpful to clarify because it would be useful to see the evaluation protocol on multiple kinds of models or on models developed in past work.
ICLR
Title Light-weight probing of unsupervised representations for Reinforcement Learning Abstract Unsupervised visual representation learning offers the opportunity to leverage large corpora of unlabeled trajectories to form useful visual representations, which can benefit the training of reinforcement learning (RL) algorithms. However, evaluating the fitness of such representations requires training RL algorithms which is computationally intensive and has high variance outcomes. To alleviate this issue, we design an evaluation protocol for unsupervised RL representations with lower variance and up to 600x lower computational cost. Inspired by the vision community, we propose two linear probing tasks: predicting the reward observed in a given state, and predicting the action of an expert in a given state. These two tasks are generally applicable to many RL domains, and we show through rigorous experimentation that they correlate strongly with the actual downstream control performance on the Atari100k Benchmark. This provides a better method for exploring the space of pretraining algorithms without the need of running RL evaluations for every setting. Leveraging this framework, we further improve existing self-supervised learning (SSL) recipes for RL, highlighting the importance of the forward model, the size of the visual backbone, and the precise formulation of the unsupervised objective. Code will be released upon acceptance. 1 INTRODUCTION Learning visual representations is a critical step towards solving many kinds of tasks, from supervised tasks such as image classification or object detection, to reinforcement learning (RL). Ever since the early successes of deep reinforcement learning (Mnih et al., 2015), neural networks have been widely adopted to solve pixel-based reinforcement learning tasks such as arcade games (Bellemare et al., 2013), physical continuous control (Todorov et al., 2012; Tassa et al., 2018), and complex video games (Synnaeve et al., 2018; Oh et al., 2016). However, learning deep representations directly from rewards is a challenging task, since this learning signal is often noisy, sparse and delayed. With ongoing progress in unsupervised visual representation learning for vision tasks (Zbontar et al., 2021; Chen et al., 2020a;b; Grill et al., 2020; Caron et al., 2020; 2021), recent efforts have likewise applied self-supervised techniques and ideas to improve representation learning for RL. Some promising approaches include supplementing the RL loss with self-supervised objectives (Laskin et al., 2020; Schwarzer et al., 2021a), or first pre-training the representations on a corpus of trajectories (Schwarzer et al., 2021b; Stooke et al., 2021). However, the diversity in the settings considered, as well as the self-supervised methods used, make it difficult to identify the core principles of successful self-supervised methods in RL. Moreover, estimating the performance of RL algorithms is notoriously challenging (Henderson et al., 2018; Agarwal et al., 2021): it often requires repeating the same experience with a different random seed, and the high CPU-to-GPU ratio is a compute requirement of most online RL methods that is inefficient for typical research compute clusters. This hinders systematic exploration of the many design choices that characterize SSL methods. In this paper, we strive to provide a reliable and lightweight evaluation scheme for unsupervised visual representation in the context of RL. 
Inspired by the vision community, we propose to evaluate the representations using linear probing, by training a linear prediction head on top of frozen features. We devise two probing tasks that we deem widely applicable: predicting the reward in a given state, and predicting the action that would be taken by a fixed policy in a given state (for example that of an expert). We stress that these probing tasks are only used as a means of evaluation. Because very little supervised data is required, they are particularly suitable for situations where obtaining the expert trajectories or reward labels is expensive. Through thorough experimentation, we show that the performance of the SSL algorithms (in terms of their downstream RL outcomes) correlates with the performance in both probing tasks with statistically significant (p<0.001) Spearman’s rank correlation, making them particularly effective proxies. Given the vastly reduced computational burden of linear evaluations, we argue that it enables much easier and straightforward experimentation of SSL design choices, paving the way for a more systematic exploration of the design space. Finally, we leverage this framework to systematically assess some key attributes of SSL methods. First off, we explore the utility and role of learning a forward model as part of the self-supervised objective. We investigate whether its expressiveness matters and show that equipping it with the ability to model uncertainty (through random latent variable) significantly improves the quality of the representations. Next, we identify several knobs in the self-supervised objective, allowing us to carefully tune the parameters in a principled way. Finally, we confirm the previous finding (Schwarzer et al., 2021b) that bigger architectures, when adequately pre-trained, tend to perform better. Our contributions can be summarized as follows: • Design of a rigorous and efficient SSL evaluation protocol in the context of RL • Empirical demonstration that this evaluation scheme correlates with downstream RL perfor- mance • Systematic exploration of design choices in existing SSL methods. 2 RELATED WORK 2.1 REPRESENTATION LEARNING There has recently been a surge in interest and advances in the domain of self-supervised learning in computer vision. Some state-of-art techniques include contrastive learning methods SimCLR, MoCov2 (Chen et al., 2020a;b); clustering methods SwAV (Caron et al., 2020); distillation methods BYOL, SimSiam, OBoW (Grill et al., 2020; Chen and He, 2021; Gidaris et al., 2020); and information maximization methods Barlow Twins and VicReg (Zbontar et al., 2021; Bardes et al., 2021). These advances have likewise stimulated development in representation learning for reinforcement learning. A line of work includes unsupervised losses as an auxiliary objective during RL training to improve data efficiency. Such objective can be contrastive (Laskin et al., 2020; Zhu et al., 2020) or non-contrastive (Schwarzer et al., 2021a; Yu et al., 2022). ST-DIM (Anand et al., 2019), ATC (Stooke et al., 2021) and BVS-DIM (Mengistu et al., 2022) incorporate temporal information in their contrastive objective, adapting similar techniques from the unsupervised video representation learning (Sermanet et al., 2018). Proto-RL (Yarats et al., 2021a) uses a SwAV-like objective to learn representation as well as guide effective exploration during pre-training. 
Similarly, CRL (Du et al., 2021) trains a policy to optimize a SimCLR loss, then shows transfer to RL, imitation learning and image classification. Closer to our approach, SGI (Schwarzer et al., 2021b) pretrains both an encoder and forward prediction model by minimizing the distance between predictions and target latents using BYOL, and the encoder is recycled during RL for improved data efficiency. While different in spirit, many model based methods also train an encoder from a corpus of trajectory, either by explicit pixel reconstruction Kaiser et al. (2020); Hafner et al. (2021) or in embedding space Ye et al. (2021); Schrittwieser et al. (2020). Self-supervised representations have also been used for imitation learning (Aytar et al., 2018; Pari et al., 2021) as well as exploration (Burda et al., 2019a). 2.2 REPRESENTATION PROBING IN REINFORCEMENT LEARNING Some prior work (Racah and Pal, 2019; Guo et al., 2018; Anand et al., 2019; Higgins et al., 2018; Dittadi et al., 2022) evaluate the quality of their pretrained representations by probing for ground truth state variables such as agent/object locations, game scores or model-specific quantities (eg. ELBO). Das et al. (2020) propose to probe representations with natural language question-answering. Despite the efficiency of these probing methods, their designs are highly domain-specific and require careful handcrafting for each environment. In addition, they fail to demonstrate the actual correlation between probing and RL performances, which makes their practical usefulness uncertain. On the other hand, the authors of ATC (Stooke et al., 2021) propose to evaluate representations by finetuning for RL tasks using the pretrained encoder with weights frozen. Similarly, Laskin et al. (2021) propose a unified benchmark for SSL methods in continuous control but still require full RL training. Our work seeks to bridge these two approaches by demonstrating the correlation between linear probing and RL performances, as well as designing probing tasks that are generalizable across environments. 3 A FRAMEWORK TO DEVELOP UNSUPERVISED REPRESENTATIONS FOR RL In this section, we detail our proposed framework for training and evaluating unsupervised representations for reinforcement learning. 3.1 UNSUPERVISED PRE-TRAINING The network is first pre-trained on a large corpus of trajectories. Formally, we define a trajectory Ti of length Ti as a sequence of tuples Ti = [(ot, at) | t ∈ [1, Ti]], where ot is the observation of the state at time t in the environment and at was the action taken in this state. This setting is closely related to Batch RL (Lange et al., 2012), with the crucial difference that the reward is not being observed. In particular, it should be possible to use the learned representations to maximize any reward (Touati and Ollivier, 2021). The training corpus corresponds to a set of such trajectories: Dunsup {T1, · · · , Tn}. We note that the policy used to generate this data is left unspecified in this formulation, and is bound to be environment-specific. Since unsupervised methods usually necessitate a lot of data, this pre-training corpus is required to be substantial. In some domains, it might be straightforward to collect a large number of random trajectories to constitute Dunsup. In some other cases, like self-driving, where generating random trajectories is undesirable, expert trajectories from humans can be used instead. 
The goal of the pre-training step is to learn the parameters θ of an encoder ENCθ which maps any observation o of the state s (for example raw pixels) to a representation e = ENCθ(o). This representation must be amenable for the downstream control task, for example learning a policy. 3.2 EVALUATION In general, the evaluation of RL algorithms is tricky due to the high variance in performance (Henderson et al., 2018). This requires evaluating many random seeds, which creates a computational burden. We side-step this issue by formulating an evaluation protocol which is light-weight and purely supervised. Specifically, we identify two proxy supervised tasks that are broadly applicable and relevant for control. We further show in the experiment section that they are sound, in the sense that models’ performance on the proxy tasks strongly correlates with their performance in the downstream control task of interest. Similar to the evaluation protocol typically used for computer vision models, we rely on linear probing, meaning that we train only a linear layer on top of the representations, which are kept frozen. Reward Probing Our first task consists in predicting the reward observed in a given state. For this task, we require a corpus of trajectories Drew = {T ′1, · · · , T ′m} for which the observed rewards are known, i.e. T ′i = [(ot, at, rt) | t ∈ [1, Ti]] In the most general setting, it can be formulated as a regression problem, where the goal is to minimize the following loss: L(ψ)reward-reg = 1 |Drew| ∑ T ′i∈Drew 1 |T ′i| ∑ (ot,at,rt∈T ′i) ∥lψ(ENCθ(ot))− rt∥2 Here, the only learnt parameters ψ are those of the linear prediction layer lψ . In practice, in many environments where rewards are sparse, the presence or absence of a reward is more important than its magnitude. To simplify the problem in those cases, we can cast it as a binary prediction problem instead (this could be extended to ternary classification if the sign of the reward is of interest): L(ψ)reward-classif = 1 |Drew| ∑ T ′i∈Drew 1 |T ′i| ∑ (ot,at,rt∈T ′i) BinaryCE(1R>0(rt), lψ(ENCθ(ot))) Reward prediction is closely related to value prediction, a central objective in RL that is essential for value-based control and the critic in actor-critic methods. The ability to predict instantaneous reward, akin to predicting value with a very small discount factor, can be viewed as a lower bound on the learned representation’s ability to encode the value function, and has been demonstrably helpful for control, particularly in sparse reward tasks (Jaderberg et al., 2017). Thus, we hypothesize reward prediction accuracy to be a good probing proxy task for our setting as well. Action prediction Our second task consists in predicting the action taken by an expert in a given state. For this task, we require a corpus of trajectories Dexp = {T1, · · · , Tn} generated by an expert policy. We stress that this dataset may be much smaller than the pretraining corpus since we only require to fit and evaluate a linear model. The corresponding objective is as follows: L(ψ)action-classif = 1 |Dexp| ∑ Ti∈Dexp 1 |Ti| ∑ (ot,at∈T ′i) CrossEntropy(at, lψ(ENCθ(ot))) This task is closely related to imitation learning, however, we are not concerned with the performance of the policy that we learn as a by-product. 
4 SELF PREDICTIVE REPRESENTATION LEARNING FOR RL

In our work, we focus on evaluating and improving a particular class of unsupervised pretraining algorithms that involves using a transition model to predict its own representations in the future (Schwarzer et al., 2021b; Guo et al., 2018; Gelada et al., 2019). This pretraining modality is especially well suited for RL, since the transition model can be conditioned on agent actions, and can be repurposed for model-based RL after pretraining. Our framework is depicted in Fig. 2. In this section, we present the main design choices, and we investigate their performance in Section 5.

4.1 TRANSITION MODELS

Our baseline transition model is a 2D convolutional network applied directly to the spatial output of the convolutional encoder (Schwarzer et al., 2021b; Schrittwieser et al., 2020). The network consists of two 64-channel convolutional layers with 3x3 filters. The action is represented as a one-hot encoding spatially replicated (in a 2D map) and concatenated with the representation input along the channel dimension. We believe a well-established sequence modeling architecture such as a GRU can serve as a superior transition model. Its gating mechanisms should be better at retaining information from both the immediate and distant past, which is especially helpful for learning dynamics in a partially observable environment.

\mathrm{Encoder:}\quad \hat{e}_0 = e_0 = \mathrm{ENC}_\theta(o_0) \qquad\quad \mathrm{RecurrentModel:}\quad \hat{e}_t = f_\phi(\hat{e}_{t-1}, a_{t-1})

In addition to the deterministic GRU model above, we also experiment with a GRU variant where we introduce stochastic states to allow our model to generalize better to stochastic environments, such as Atari with sticky actions (Machado et al., 2018). Our model is based on the RSSM from DreamerV2 (Hafner et al., 2021), with the main difference being that while pixel reconstruction is used as the SSL objective in the original work, we minimize the distance between predictions and targets purely in the latent space. Following DreamerV2, we optimize the latent variables using straight-through gradients (Bengio et al., 2013), and minimize the distance between posterior (z) and prior (ẑ) distributions using a KL loss.

\mathrm{Encoder:}\quad e_t = \mathrm{ENC}_\theta(o_t)
\mathrm{RecurrentModel:}\quad h_t = f_\phi(h_{t-1}, z_{t-1}, a_{t-1})
\mathrm{PosteriorModel:}\quad z_t \sim p_\phi(z_t \mid h_t, e_t)
\mathrm{PriorPredictor:}\quad \hat{z}_t \sim j_\phi(\hat{z}_t \mid h_t)
\mathrm{LatentMerger:}\quad \hat{e}_t = g_\phi(h_t, z_t)

4.2 PREDICTION OBJECTIVES

The objective of self-predictive representation learning is to minimize the distance between the predicted and the target representations, while ensuring that they do not collapse to a trivial solution. Our baseline prediction objective is BYOL (Grill et al., 2020), which is used in SGI (Schwarzer et al., 2021b). The predicted representation ê_{t+k} and the target representation ẽ_{t+k} are first projected to lower dimensions to produce ŷ_{t+k} and ỹ_{t+k}. BYOL then maximizes the cosine similarity between the predicted and target projections, using a linear prediction function q to translate from ŷ to ỹ:

\mathcal{L}^{\mathrm{BYOL}}_\theta(\hat{y}_{t:t+K}, \tilde{y}_{t:t+K}) = -\sum_{k=1}^{K} \frac{q(\hat{y}_{t+k}) \cdot \tilde{y}_{t+k}}{\|q(\hat{y}_{t+k})\|_2 \; \|\tilde{y}_{t+k}\|_2}

In the case of BYOL, the target encoder and projection module are exponential moving averages of the online weights, and gradients are blocked on the target branch. As an alternative prediction objective, we experiment with Barlow Twins (Zbontar et al., 2021).
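Before turning to Barlow Twins, here is a minimal PyTorch sketch of the self-predictive setup described above: a deterministic GRU transition model unrolled over actions, and a BYOL-style cosine loss between predicted and (detached) target projections. The module sizes and the tanh initialisation of the hidden state are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeterministicGRUTransition(nn.Module):
    """Predicts future latents e_{t+1..t+K} from e_t and the actions (Sec. 4.1)."""
    def __init__(self, feat_dim, n_actions, hidden_dim=600, action_dim=250):
        super().__init__()
        self.action_emb = nn.Embedding(n_actions, action_dim)
        self.init_proj = nn.Linear(feat_dim, hidden_dim)   # e_0 -> initial hidden state
        self.cell = nn.GRUCell(action_dim, hidden_dim)
        self.out_proj = nn.Linear(hidden_dim, feat_dim)    # hidden state -> predicted latent

    def forward(self, e0, actions):
        # e0: (B, feat_dim) encoding of the first observation; actions: (B, K) long tensor
        h = torch.tanh(self.init_proj(e0))
        preds = []
        for k in range(actions.shape[1]):
            h = self.cell(self.action_emb(actions[:, k]), h)
            preds.append(self.out_proj(h))                 # predicted latent \hat{e}_{t+k+1}
        return torch.stack(preds, dim=1)                   # (B, K, feat_dim)

def byol_loss(pred_proj, target_proj, predictor):
    """Negative cosine similarity between q(y_hat) and the stop-gradient target y_tilde."""
    p = F.normalize(predictor(pred_proj), dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)          # gradients blocked on target branch
    return -(p * z).sum(dim=-1).mean()
```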
Similar to BYOL, Barlow Twins minimizes the distance of the latent representations between the online and target branches; however, instead of using a predictor module and a stop-gradient on the target branch, Barlow Twins avoids collapse by pushing the cross-correlation matrix between the projection outputs of the two branches to be as close to the identity matrix as possible. To adapt Barlow Twins, we calculate the cross-correlation across the batch and time dimensions:

\mathcal{L}_{\mathrm{BT}}(\hat{y}_{t:t+K}, \tilde{y}_{t:t+K}) = \sum_i (1 - \mathcal{C}_{ii})^2 + \lambda \sum_i \sum_{j \neq i} \mathcal{C}_{ij}^2,
\qquad \text{where} \qquad
\mathcal{C}_{ij} = \frac{\sum_{b,t} \hat{y}_{b,t,i}\, \tilde{y}_{b,t,j}}{\sqrt{\sum_{b,t} \hat{y}_{b,t,i}^2}\; \sqrt{\sum_{b,t} \tilde{y}_{b,t,j}^2}}

where λ is a positive constant trading off the importance of the invariance and covariance terms of the loss, C is the cross-correlation matrix computed between the projection outputs of the two branches along the batch and time dimensions, b indexes batch samples, t indexes time, and i, j index the vector dimension of the projection output. By enabling gradients on both the prediction and target branches, the Barlow objective pushes the predictions towards the representations, while regularizing the representations toward the predictions. In practice, learning the transition model takes time and we want to avoid regularizing the representations towards poorly trained predictions. To address this, we apply a higher learning rate to the prediction branch. We call this technique Barlow Balancing, and implement it in Algorithm 1.

Algorithm 1: PyTorch-style pseudocode for Barlow Balancing
barlow_loss = mu * L_BT(y_hat, y_tilde.detach()) + (1 - mu) * L_BT(y_hat.detach(), y_tilde)

4.3 OTHER SSL OBJECTIVES

SGI’s authors (Schwarzer et al., 2021b) showed that in the absence of other SSL objectives, pretraining with the BYOL prediction objective alone results in representation collapse; the addition of an inverse dynamics modeling loss is necessary to prevent collapse, while the addition of a goal-oriented RL loss results in a minor downstream RL performance improvement. In inverse dynamics modeling, the model is trained using cross-entropy to model p(a_t | ŷ_{t+k}, ỹ_{t+k+1}), effectively predicting the transition action between two adjacent states. The goal-oriented loss tries to predict the distance to states in the near future from the sampled trajectories (details in Appendix).

5 RESULTS

5.1 EXPERIMENTAL DETAILS

We conduct experiments on the Arcade Learning Environment benchmark (Bellemare et al., 2013). Given the multitude of pretraining setups we investigate, we limit our experiments to 9 Atari games (Amidar, Assault, Asterix, Boxing, Demon Attack, Frostbite, Gopher, Krull, Seaquest).

Pretraining. We use the publicly-available DQN replay dataset (Agarwal et al., 2020), which contains data from training a DQN agent for 50M steps with sticky actions (Machado et al., 2018). We select 1.5 million frames from the 3.5 to 5 millionth steps of the replay dataset, which constitutes trajectories of a weak, partially trained agent. We largely follow the recipe of SGI (Schwarzer et al., 2021b), where we jointly optimize the self-prediction, goal-conditioned RL, and inverse dynamics modeling losses for 20 epochs; in some of our experiments we remove one or both of the last two objectives. We use the data augmentations introduced by Yarats et al. (2021b). All experiments are performed on a single MI50 AMD GPU, and the pretraining process took 2 to 8 days depending on the model.

Reward probing. We focus on the simplified binary classification task of whether a reward occurs in a given state.
We use 100k frames from the 1 to 1.1 millionth steps of the replay dataset, with a 4:1 train/eval split. We train a logistic regression model on frozen features using the Cyanure (Mairal, 2019) library, with the MISO algorithm (Mairal, 2015) coupled with QNING acceleration (Lin et al., 2019) for a maximum of 300 steps. We do not use any data augmentation. We report the mean F1 averaged across all 9 games. On an MI50 AMD GPU, each probing run takes 10 minutes.

Action probing. We use the last 100k (4:1 train/eval split) frames of the DQN replay dataset, which correspond to a fully trained DQN agent. We train a linear layer on top of frozen, un-augmented features for 12 epochs with a softmax focal loss (Lin et al., 2017), using the SGD optimizer with learning rate 0.2, batch size 256, 1e-6 weight decay, and a stepwise scheduler with step size 10 and gamma 0.1. We report the Multiclass F1 (weighted average of the F1 scores of each class) averaged across all games.

RL evaluation. We focus on the Atari 100k benchmark (Kaiser et al., 2020), where the agent is allowed only 100k interactive steps. This is roughly equivalent to two hours of human play, providing an approximation of human-level sample efficiency. We follow the training protocol of Schwarzer et al. (2021b) using the Rainbow algorithm (Hessel et al., 2018), with the following differences: we freeze the pretrained encoder (thus only training the Q head), do not apply auxiliary SSL losses while fine-tuning, and finally disable noisy layers and rely instead on ϵ-greedy exploration. These changes are made to make the RL results reflect as closely as possible the performance induced by the quality of the representations. On an MI50 AMD GPU, each run takes between 8 and 12 hours. We evaluate the agent’s performance using the human-normalized score (HNS), defined as (agent score − random score) / (human score − random score). We calculate this per game, per seed by averaging scores over 100 evaluation trajectories at the end of training. For aggregate metrics across games and seeds, we report the median and the interquartile mean (IQM). For the median, we first average the HNS across seeds for each game, and report the median of the averaged HNS values. For the IQM, we first take the middle 50% of scores across both seeds and games, then report the average. While the median is commonly reported for Atari100k, recent work has recommended the IQM as a superior aggregate metric for the RL setting due to its smaller uncertainty (Agarwal et al., 2021); we also follow the cited work to report the 95% bootstrapped confidence intervals for these aggregate metrics. Unless specified otherwise, the experiments use the medium ResNet-M from Schwarzer et al. (2021b), and the inverse dynamics loss as an auxiliary loss. In BYOL experiments, the target network is an exponential moving average of the online network, while in Barlow Twins both networks are identical, following the original papers. For additional details regarding model architectures and hyperparameters used during pretraining and RL evaluation, please refer to the Appendix.

5.2 IMPACT OF TRANSITION MODELS AND PREDICTION OBJECTIVES

Table 1: F1 scores on probing tasks for different transition models and prediction objectives. All standard deviations are on the order of 1e-4.

Pred Obj    Transition   Reward   Action
BYOL        Conv-det     64.9     22.7
BYOL        GRU-det      62.2     26.8
BYOL        GRU-latent   63.4     23.2
Barlow0.7   Conv-det     52.7     24.9
Barlow0.7   GRU-latent   67.5     26.2

Table 2: F1 scores on probing tasks for different Barlow variants. All standard deviations are on the order of 1e-4, which we omit below.
Pred Obj     Reward   Action
Barlow0.5    65.0     26.3
Barlow0.7    67.5     26.2
Barlow1      65.0     24.7
Barlowrand   67.7     25.8

In Table 1, we report the mean probing F1 scores for the convolutional, deterministic GRU, and latent GRU transition models trained using either the BYOL or the Barlow prediction objective. When using the BYOL objective, the relative probing strengths of the different transition models are somewhat ambiguous: while the convolutional model results in a better reward probing F1, the GRU models are superior in terms of expert action probing. Interestingly, we observe that after replacing BYOL with Barlow, the probing scores for the latent model improve, while those of the deterministic models deteriorate. Overall, the particular combination of pre-training using the GRU-latent transition model with the Barlow prediction objective results in representations with the best overall probing qualities. Since the deterministic model’s predictions are likely to regress to the mean, allowing gradients to flow through the target branch in the case of the Barlow objective can regularize the representations towards poor predictions, which can explain their inferior probing performance. Introducing latent variables can alleviate this issue through better predictions. We stress that the transition models are not used during probing; only the encoder is. These experiments show that having a more expressive forward model during pre-training has a direct impact on the quality of the learnt representations. In Fig. 3, we investigate the impact of the latent variable on the information contained in the representations, by training a decoder on frozen features.

In Table 2, we show the results from experimenting with different variants of the Barlow objective. We find that using a higher learning rate for the prediction branch (Barlow0.7, with a 7:3 prediction-to-target lr ratio) results in a better probing outcome than using equal learning rates (Barlow0.5) or not letting gradients flow in the target branch altogether (Barlow1, where the target encoder is a copy of the online encoder). This suggests that while it is helpful to regularize the representations towards the predictions, there is a risk of regularizing them towards poorly trained ones. This can be addressed by applying a higher learning rate on the prediction branch. We also demonstrate that using a frozen, random target network (Barlowrand) results in good features, and in our experiments it achieves the best reward probing performance. This contradicts findings from the vision domain (Grill et al., 2020), but corroborates self-supervised results from other domains such as speech (Chiu et al., 2022). Random networks have also been shown to exhibit useful inductive biases for exploration (Burda et al., 2019b;a). An explanation is that random targets act as a form of regularization that prevents partial collapse by forcing the model to encode a wide range of features.

5.3 IMPACT OF AUXILIARY SSL OBJECTIVES AND ENCODERS

SSL objective. Although pretraining with multiple objectives can sometimes result in better downstream performance, in practice it also makes hyperparameter tuning and debugging harder; it is therefore desirable to use the smallest number of objectives that yields comparable performance. In Table 4, we show the effects of the inverse dynamics modeling (inv) and goal-conditioned RL (goal) objectives on probing performance.
The BYOL model experiences partial collapse without the inverse dynamics modeling loss, while the addition of the goal loss improves the probing performance slightly. This is in line with the results reported by Schwarzer et al. (2021b) for the same ablations. The Barlow-only model performs significantly better than the BYOL-only model in terms of probing scores, indicating that the Barlow objective is less prone to collapse in the predictive SSL setting. Similar to the BYOL model, the Barlow model can also be improved with inverse dynamics modeling, while the addition of the goal loss has a slight negative impact.

Encoders. SGI (Schwarzer et al., 2021b) showed that using bigger encoders during pretraining results in improved downstream RL performance. We revisit this topic to find out whether the pretrained representations from bigger networks also have better probing qualities. We experiment with the medium (ResNet-M) and large (ResNet-L) residual networks from SGI. In Table 5 we show that Barlow models pretrained using the larger ResNet have improved probing scores.

5.4 CORRELATIONS BETWEEN PROBING AND RL PERFORMANCES

If our goal is to use linear probing as a guide to identify superior pretraining setups for RL, the probing tasks are only useful to the extent to which they correlate with the actual downstream RL performance. We perform RL evaluations for 9 representative setups (the best settings from each of Tables 1, 2, 4, and 5), as well as two contrastive methods, ST-DIM (Anand et al., 2019) and ATC (Stooke et al., 2021), and a reconstruction-based method, VAE-T (Stooke et al., 2021); see the Appendix for details on ATC, ST-DIM and VAE-T. We report their probing and aggregate RL metrics in Table 3, with the confidence intervals of the aggregate RL metrics depicted on the right. We find that the rank correlations between the reward and action probing F1 scores and the RL aggregate metrics are significant (Figure 1). In summary, our results show the proposed probing scheme is a reliable guide for designing pretraining setups that deliver significant downstream RL performance improvements.

6 CONCLUSION

In this paper we have investigated the opportunity to replace costly RL evaluation with lightweight linear probing tasks to assess the quality of learned representations. Reward and action probing are task-agnostic and should cover most practical applications. Using this methodology to guide us, we have demonstrated the impact of a number of key design choices in the pre-training methodology. We hope that these results encourage the research community to systematically explore the design space to further improve the quality of self-supervised representations for RL.

A MODELS AND HYPER-PARAMETERS

A.1 BACKBONES

The M and L models are ResNet-M and ResNet-L from SGI (Schwarzer et al., 2021b). The ResNet-M encoder consists of inverted residual blocks with an expansion ratio of 2, with batch normalization applied after each convolutional layer; it uses 3 groups with 32, 64, and 64 channels, and has 3 residual blocks per group; it down-scales the input by a factor of 3 in the first group and 2 in the latter 2 groups. This yields a representation of shape 64x7x7 when applied to 84x84-dimensional Atari frames. ResNet-L uses 3 groups with 48, 96, and 96 channels, and has 5 residual blocks per group; it uses a larger expansion ratio of 4, producing a representation shape of 96x7x7 from an 84x84 frame. This enlargement increases the number of parameters by approximately a factor of 5.
The S model is the model used in Stooke et al. (2021). It consists of three convolutional layers, with [32, 64, 64] channels, kernel sizes [8, 4, 3], and strides [4, 2, 1], listed from first to last layer.

A.2 TRANSITION MODELS

We experimented with three transition models: a convolutional model, a deterministic GRU, and a latent GRU. Our convolutional model is based on SGI (Schwarzer et al., 2021b). The input to the convolutional transition model is the concatenation of the spatially replicated 2D action map and the representation et along the channel dimension. The network itself consists of two 64-channel convolutional layers with 3x3 filters, separated by ReLU activation and batch normalization layers.

The deterministic GRU has hidden dimension 600 and input dimension 250. The input at is prepared by passing the one-hot action vector through a 250-dimensional embedding layer. The initial hidden state ê0 is generated by projecting the representation e0 through a 600-dimensional linear layer with ELU activation and dropout. Layer normalization is applied to the hidden input at all timesteps.

The latent GRU model is based on DreamerV2’s RSSM (Hafner et al., 2021), and consists of a recurrent model, a posterior model, a prior predictor, and a latent merger. The recurrent model has a hidden dimension and input dimension of 600. The initial hidden state h0 and input z0 are zero vectors. The flattened stochastic variables zt and the one-hot action vector at are first concatenated and then projected to 600 dimensions through a linear layer with ELU activation, before being passed into the recurrent model as input. Layer normalization is applied to the hidden input at all non-zero timesteps. The posterior model is a two-layer MLP with a 600-dimensional bottleneck separated by ELU activation. It takes the concatenation of the representation et and the recurrent hidden output ht as input, and outputs a 1024-dimensional vector representing the 32-dimensional logits for 32 latent categorical variables. zt is sampled from the posterior logits. The prior model is a two-layer MLP with a 600-dimensional bottleneck separated by ELU activation. Its output format is the same as that of the posterior model. ẑt is sampled from the prior logits. The latent merger is a linear layer that projects the concatenation of ht and the flattened zt to the same dimension as the representation et.

A.3 SSL PROJECTION MODULE

In the case of the deterministic GRU, ê is first projected to the same dimension as the representation through a linear layer. Henceforth we shall assume that ê underwent this step for GRU-det. The predicted representation ê and the target representation ẽ are projected to 1024-dimensional vectors ŷ and ỹ through a linear layer. The BYOL objective involves processing ŷ with an additional linear layer q with output dimension 1024. The Barlow objective involves applying batch normalization to ŷ and ỹ prior to taking the covariance and variance losses. The inverse dynamics model is a two-layer MLP with a 256-dimensional bottleneck separated by ReLU activation. It takes the concatenation of ŷt and ỹt+1 as input, and outputs logits with dimension equal to the number of actions.

A.4 ATC, VAE-T, ST-DIM

We use the implementation, hyperparameters and architecture from the codebases of Stooke et al. (2021) and Stooke and Abbeel (2019) for these models. The only change is the dataset: we train these models on the dataset described in Section 5, and train all methods for 58,500 updates.
ATC (Augmented Temporal Contrast) pre-trains the encoder using an InfoNCE loss between the outputs of the momentum encoder and the online branch, applied to different augmentations of an image. VAE-T from Stooke et al. (2021) uses a variational auto-encoder (Kingma and Welling, 2014) objective to reconstruct the frame at the next time step given the image at the current time step. ST-DIM (Anand et al., 2019) also uses an InfoNCE objective and, in addition to the traditional global-global infomax, introduces a global-local infomax by using local representations taken from the feature map output of the convolutional encoder and the globally pooled feature vector as positive pairs. For more details, we refer the reader to the referenced works.

A.5 IMAGE RECONSTRUCTION MODEL

We used a decoder architecture that mirrors the structure of the ResNet-M encoder. In decoding, instead of transposed convolutions we used nearest-neighbour upsampling followed by a regular convolution (Odena et al., 2016). We used the mean squared error between the reconstructed pixels and the target image as the training criterion. Models were trained and evaluated on the same data as reward and action probing, for 30 epochs using the Adam optimizer with learning rate 0.001.

A.6 HYPERPARAMETERS

See Tables 6, 7, 8, and 9 for hyperparameter values. For ATC, ST-DIM and VAE-T hyperparameters, see Stooke et al. (2021).

A.7 IMAGE AUGMENTATION

We use the same image augmentations as used in SGI (Schwarzer et al., 2021b), which itself used the augmentations in DrQ (Yarats et al., 2021b), in both pretraining and fine-tuning. We specifically apply random crops (4-pixel padding and 84x84 crops) and image intensity jittering.

A.8 GOAL-ORIENTED RL LOSS

The goal-oriented RL loss is taken directly from SGI (Schwarzer et al., 2021b). This objective trains a goal-conditional DQN, with rewards specified by proximity to sampled goals. First, a goal g is sampled as the state encoding either of the near future in the current trajectory (up to 50 steps in the future) or, with probability 20%, of a future state in another trajectory in the current batch. Then, we add Gaussian noise to obtain the final goal g: g ← αn + (1 − α)g, where α ∼ Uniform(0.5) and n is a vector sampled from an isotropic Gaussian and normalized to have length 1. Then, in order to obtain the reward for taking action at going from state st to st+1, we first encode the states with the target encoder: ẽt = ENCtarget(ot), ẽt+1 = ENCtarget(ot+1). Then, we calculate the reward as:

R(\tilde{e}_t, \tilde{e}_{t+1}) = d(\tilde{e}_t, g) - d(\tilde{e}_{t+1}, g), \qquad \text{where} \qquad d(\tilde{e}_t, g) = \exp\left( 2\,\frac{\tilde{e}_t \cdot g}{\|\tilde{e}_t\|_2 \, \|g\|_2} - 2 \right)

We use FiLM (Perez et al., 2018) to condition the Q-function Q(ot, at, g) on g, and optimize the model using DQN (Mnih et al., 2015).

B FORWARD MODEL PROBING

While our principal goal is to demonstrate the correlation between representation probing and offline RL performances, we also apply the reward probing technique to predictions in order to evaluate the quality of the transition models under different pretraining setups. In Table 10, we show the effects of using different transition models during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder and the inverse loss. The goal loss is also applied to the BYOL models.

Table 8: GRU-latent specific hyperparameters.

Parameter        Setting
KL loss weight   0.1
KL balance       0.95

Table 10: Mean reward probing F1 scores for pretraining setups with different transition models. Evaluated on the 5th and 10th predictions. All standard deviations are on the order of 1e-4.
Pred Obj    Transition   Pred 5   Pred 10
BYOL        Conv-det     33.1     28.4
BYOL        GRU-det      33.0     27.4
BYOL        GRU-latent   33.4     28.9
Barlow0.7   Conv-det     32.0     27.6
Barlow0.7   GRU-det      30.1     25.0
Barlow0.7   GRU-latent   39.5     30.2

Table 11: Mean reward probing F1 scores for pretraining setups with different prediction objectives. Evaluated on the 5th and 10th predictions. All standard deviations are on the order of 1e-4.

Pred Obj     Pred 5   Pred 10
BYOL         33.4     28.9
Barlow0.5    40.2     30.2
Barlow0.7    39.5     30.2
Barlow1      37.4     29.7
Barlowrand   36.8     27.5

In the deterministic setting, the predictions of the GRU model are worse than those of the convolutional model. The introduction of stochasticity appears to fix the underlying issue for predictions, resulting in the latent GRU model having the best overall prediction probing performance. One possible explanation for Conv-det having better predictions than GRU-det is that the spatial inductive bias in the convolutional kernels acts as a constraint and helps keep the predictions from regressing to the mean. However, this is more effectively solved by the introduction of latent variables into the GRU during training and inference. In Table 11, we show the effects of using different prediction objectives during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder, the GRU-latent transition model, and the inverse loss; the goal loss is also applied to the BYOL model. Compared to the BYOL model, Barlow models generally have higher probing scores for predictions. We also note that for Barlow models, regularizing the representations towards the predictions (by setting Barlow Balance < 1) improves the quality of the predictions. This is likely because it makes the prediction task easier, making it more likely to learn a capable transition model. This reasoning can also explain why the Barlow model with a frozen, random target network achieves a superior probing result for the representations (Table 2) but a worse result for predictions compared to the other Barlow versions. Predicting a random target representation is likely more difficult than predicting a learned representation, and this may in turn encourage the model to rely more on learning a powerful encoder and posterior model, and less on learning an accurate transition model.

C FULL RL RESULTS

D STATISTICAL HYPOTHESIS TESTING OF RANK CORRELATION

In Fig. 5, we show the correlation results for both the action and reward predictions. We estimate Spearman’s rank correlation coefficient (Spearman’s r) between the linear probing performance and the (interquartile) mean RL human-normalized score (HNS) over 9 Atari games. The reason for using Spearman’s r instead of the Pearson correlation coefficient is that we are interested in whether the relative ranking of the models on the linear probing tasks is indicative of the relative ranking of the same models when RL is trained on top of them. As an example, this allows us to say that if model A out-ranks model B in the reward prediction task, an RL model trained on top of model A’s representations will likely out-perform an RL model trained on top of model B’s representations. However, it does not let us predict by how much model A will out-perform model B. Let d denote the difference in ranking between the linear probing performance and the RL performance. Spearman’s r (denoted as ρ below) is computed as

\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}, \qquad (1)

where di is the difference in ranking for the i-th model, and n is the total number of models we have.
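As a concrete reference, here is a minimal sketch of Eq. (1) and of the permutation test described next; the scipy rankdata helper and the function names are assumptions of this sketch, and the closed-form Eq. (1) assumes no ties in the rankings.

```python
import numpy as np
from scipy.stats import rankdata

def spearman_rho(probe_scores, rl_scores):
    """Spearman's r via Eq. (1): rank both metrics, then use squared rank differences.
    The closed form assumes no ties; with ties it is only an approximation."""
    d = rankdata(probe_scores) - rankdata(rl_scores)
    n = len(d)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

def permutation_p_value(probe_scores, rl_scores, n_perm=50_000, seed=0):
    """One-tailed permutation test of H0: rho = 0 against H1: rho > 0."""
    rng = np.random.default_rng(seed)
    observed = spearman_rho(probe_scores, rl_scores)
    null = np.array([
        spearman_rho(rng.permutation(probe_scores), rng.permutation(rl_scores))
        for _ in range(n_perm)
    ])
    return observed, float(np.mean(null >= observed))
```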
We perform statistical hypothesis testing on ρ, with null hypothesis ρ = 0 (no correlation between linear probing performance and RL performance) and alternative hypothesis ρ > 0 (positive correlation). The null distribution is constructed nonparametrically using permutation testing: we sample random orderings of the observed linear probing performance and RL performance independently and compute ρ. This is repeated 50,000 times to generate the null distribution (which is centered at ρ = 0, as we do not expect randomly ordered values to be correlated). We then compare our observed ρ to this distribution and perform a one-tailed test using the proportion of samples larger than our observed ρ to report our p-value.

D.1 RANK CORRELATION ON A DIFFERENT DATASET

In Fig. 1, we explored the correlation between the RL performance and the reward probing task, where the dataset used for the reward probing was a set of quasi-random trajectories from the DQN dataset, coming from the very beginning of the training run of the DQN agent used to collect the data. It is natural to ask whether the correlation results we obtain are sensitive to the specific dataset used. To put this question to the test, we re-run the same reward probing task, this time on the "expert" dataset, i.e. the last trajectories of the DQN dataset, corresponding to a fully trained agent. The results are shown in Fig. 6. The Spearman correlation coefficient that we obtain is exactly the same as the one for the random-trajectory dataset (even though the reward statistics are different, see Table 14), showing that the correlation result is not sensitive to the probing dataset used.

D.2 CONFIDENCE INTERVAL OF RL PERFORMANCE AS A FUNCTION OF INDEPENDENT RUNS

We further show the confidence interval of the estimated mean RL performance as the number of independent runs increases. From our total of 10 independent runs per game, we sample with replacement k ≤ 10 runs (k being the number of independent runs we “pretend” to have instead of the full 10), independently for each game. We can compute the IQM over this sample to get an estimate of the IQM as if we only had k independent runs. We repeat this process 10,000 times to construct the 95% confidence interval of the empirical IQM for different k’s. Illustrative examples of how much this confidence interval shrinks for different pairs of models are shown in Fig. 7. We observe in Fig. 7 that the mean RL performance estimates have CIs that eventually separate with many independent runs. This is an unbiased but high-variance and computationally intensive estimator of the true expected RL performance. On the other hand, the reward prediction F1 score is a computationally cheap, low-variance and accurate estimator of the relative model ranks in mean RL performance. This further corroborates our previous results of a positive correlation between reward prediction F1 score and mean RL performance (Fig. 1).

E COMPARISON WITH DOMAIN-SPECIFIC PROBING BENCHMARKS

One of the key advantages of our probing method is that it is domain-agnostic, unlike the previously proposed AtariARI benchmark (Anand et al., 2019), which acquires probing labels through the RAM state of the emulator, making their method impractical for image-based trajectories. To better understand how our probing metrics compare with the domain-specific ones in terms of correlations with RL performance, we perform the AtariARI probing benchmarks using our pretrained encoders on the 4 overlapping games (Boxing, Seaquest, Frostbite, DemonAttack) used in both works.
For AtariARI, we first calculate the average probe F1 scores across categories, then average this quantity across the games. For reward probing, we apply our own protocol detailed in Section 5.1. For RL performance we use the IQM. We report the correlation between the probing metrics and RL performance across different models. Our results are summarized in Table 13. We find that the correlation between the average probing F1s and RL performance is stronger for our reward probing method. In particular, our probing method has a significant correlation with RL performance (p < 0.05), while the AtariARI probing method does not.

F PROBING DURING TRAINING

We show the evolution of probing performance as training progresses in Figure 8.

G REWARD STATISTICS IN PROBING DATASETS

In Table 14, we report the percentage of states that have a non-zero reward in each of the 9 games, for two different subsets of data:
• Checkpoint 1, which corresponds to quasi-random trajectories from the beginning of the training process of DQN. This is the data used for the reward probing in Fig. 1.
• Checkpoint 50, which is the last checkpoint of the DQN replay dataset, and corresponds to the fully trained DQN agent, which we treat as an expert agent. This data is used for action probing, and for reward probing in Fig. 6.
All the games have a fairly small percentage of positive-reward states, and we generally observe a higher percentage of rewards in checkpoint 50, which is expected since the agent is more capable by then.

G.1 IMPACT OF SPARSITY ON THE CORRELATION

In Fig. 9, we plot the Spearman correlation coefficient between the RL performance on each individual game and the reward probing F1, as a function of the percentage of rewards observed in each game (see Table 14). We do not observe any particular pattern with respect to the sparsity, suggesting that the probing task is not very sensitive to the sparsity level of each individual game. Note however that, as usual in the Atari benchmark, it is difficult to draw conclusions from any given individual game, and the statistical significance of our results only emerges when considering the set of games as a whole. Indeed, only 3 games achieve individual statistical significance at p < 0.01 (Boxing, Seaquest and Assault), while the others do not obtain statistically significant correlations.

H LIMITATIONS

One limitation of the current work is that for the presented probing methods to work, one needs a subset of the data either with known rewards, where ideally rewards are not too sparse, or with expert actions. If neither is available, our method cannot be used. For the reward probing task, the usefulness of the method also depends on the hardness of the reward prediction itself. If the prediction task is too easy, for example because there are rewards at every step, or because the states with rewards are completely different from the ones without (such that even a randomly initialized model would yield features allowing linear separation between the two types of states), then the performance of all the models on this task is going to be extremely similar, with the only differences coming from random noise. In such a case, the performance on the prediction task cannot be used to accurately rank the quality of the features of each of the models. For future work, we would also like to extend the findings of this paper to more settings, for example different environments.
1. What is the focus of the paper regarding unsupervised representation learning?
2. What are the strengths and weaknesses of the proposed evaluation protocol for lightweight probing?
3. Do you have any concerns about the correlation analysis between probing and RL performance?
4. How does the reviewer assess the novelty and originality of the paper's content?
5. Are there any suggestions for improving the clarity and presentation of the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper attempts to propose an evaluation protocol for lightweight probing of unsupervised representations and investigates the correlation between RL performance and linear probing from a pretrained representation. Authors are testing this on a very specific class of self-predictive (recurrent) representation models that are being trained with SSL objectives. The authors extensively analyse several design choices of their studied model regarding performance on the two probing tasks of predicting reward or action from a held-out labelled train set. Finally, the paper aims to present some correlation between probing and RL performance on 9 Atari games. Strengths And Weaknesses Strengths Identification of an important problem setup in RL, that is how to assess which representations and pretraining data is best suited for improving downstream RL performance. Extensive analysis of various design choices of the model architecture studied Weaknesses It is not very clear what the goal or objective of the paper is. Authors say they propose an evaluation protocol for unsupervised RL representations that saves up to 600x computation cost. However, this computational cost saving and protocol seems to be not addressed or described in detail in the main paper and doesn't appear to be the main focus of the paper. Instead, the authors analyse a very specific class of self-predictive (recurrent) representation models that are being trained with SSL objectives. I would have expected to see a more thorough coverage of different unsupervised representation learning methods and more empirical analysis to support these claims (see additional comments in point 3) It is not very clear what is actually novel about the proposed model and what is based on prior work. The listed contributions are very vague. I would hope the authors can clarify what exactly their contributions are. Also, there exist additional prior works that very extensively studied various light-weight probing tasks on unsupervised representations and how their performance relates to RL performances [1,2] I am a bit confused by the experiments, especially the correlative analysis and it is not very clear to me if this holds beyond the very particular method and environments. In particular Figure 3 is very confusing and I do not fully understand what the 7 representative setups are supposed to be; how this relates to the 9 rows/models presented in Figure 3 (where do they come from?); and how this relates to the 7 Atari games studied in Figure 1. Without this additional information, it is not clear to me if it is sound to draw a general conclusion about the correlation between probing task performance and RL performance or if there are any other confounders. Also, can authors provide error bars for results in Table 1-5? Minor: There appears to be a slight mismatch between the title/abstract and the presented experiments and contributions of the main paper. The authors study a very particular narrow setting but the title suggests a generally applicable evaluation protocol for different unsupervised representation learning methods. Regarding the two probing tasks I wonder how generally applicable they really are. E.g. the reward task seems to be constrained to have labelled data from a very early point in RL training (i.e. rather random policy), whereas the action prediction is limited to labelled data from close to expert trajectories. 
To be generally applicable I would have hoped to see both probes being trained on the same labelled dataset to compare apples to apples. [1] Dittadi, Andrea et al. “The Role of Pretrained Representations for the OOD Generalization of Reinforcement Learning Agents”, ICLR 2022. [2] Higgins, Irina, et al. "Darla: Improving zero-shot transfer in reinforcement learning." International Conference on Machine Learning. PMLR, 2017. Clarity, Quality, Novelty And Reproducibility I think the paper could benefit a lot from improving clarity and presentation. I would especially suggest that authors more explicitly specify the novel contributions and the scope of the work. Maybe the authors could provide more detail in section 5.4. It is possible that I missed parts when reading the manuscript but I believe novelty and originality are limited in light of my comments above.
ICLR
Title Light-weight probing of unsupervised representations for Reinforcement Learning Abstract Unsupervised visual representation learning offers the opportunity to leverage large corpora of unlabeled trajectories to form useful visual representations, which can benefit the training of reinforcement learning (RL) algorithms. However, evaluating the fitness of such representations requires training RL algorithms which is computationally intensive and has high variance outcomes. To alleviate this issue, we design an evaluation protocol for unsupervised RL representations with lower variance and up to 600x lower computational cost. Inspired by the vision community, we propose two linear probing tasks: predicting the reward observed in a given state, and predicting the action of an expert in a given state. These two tasks are generally applicable to many RL domains, and we show through rigorous experimentation that they correlate strongly with the actual downstream control performance on the Atari100k Benchmark. This provides a better method for exploring the space of pretraining algorithms without the need of running RL evaluations for every setting. Leveraging this framework, we further improve existing self-supervised learning (SSL) recipes for RL, highlighting the importance of the forward model, the size of the visual backbone, and the precise formulation of the unsupervised objective. Code will be released upon acceptance. 1 INTRODUCTION Learning visual representations is a critical step towards solving many kinds of tasks, from supervised tasks such as image classification or object detection, to reinforcement learning (RL). Ever since the early successes of deep reinforcement learning (Mnih et al., 2015), neural networks have been widely adopted to solve pixel-based reinforcement learning tasks such as arcade games (Bellemare et al., 2013), physical continuous control (Todorov et al., 2012; Tassa et al., 2018), and complex video games (Synnaeve et al., 2018; Oh et al., 2016). However, learning deep representations directly from rewards is a challenging task, since this learning signal is often noisy, sparse and delayed. With ongoing progress in unsupervised visual representation learning for vision tasks (Zbontar et al., 2021; Chen et al., 2020a;b; Grill et al., 2020; Caron et al., 2020; 2021), recent efforts have likewise applied self-supervised techniques and ideas to improve representation learning for RL. Some promising approaches include supplementing the RL loss with self-supervised objectives (Laskin et al., 2020; Schwarzer et al., 2021a), or first pre-training the representations on a corpus of trajectories (Schwarzer et al., 2021b; Stooke et al., 2021). However, the diversity in the settings considered, as well as the self-supervised methods used, make it difficult to identify the core principles of successful self-supervised methods in RL. Moreover, estimating the performance of RL algorithms is notoriously challenging (Henderson et al., 2018; Agarwal et al., 2021): it often requires repeating the same experience with a different random seed, and the high CPU-to-GPU ratio is a compute requirement of most online RL methods that is inefficient for typical research compute clusters. This hinders systematic exploration of the many design choices that characterize SSL methods. In this paper, we strive to provide a reliable and lightweight evaluation scheme for unsupervised visual representation in the context of RL. 
Inspired by the vision community, we propose to evaluate the representations using linear probing, by training a linear prediction head on top of frozen features. We devise two probing tasks that we deem widely applicable: predicting the reward in a given state, and predicting the action that would be taken by a fixed policy in a given state (for example that of an expert). We stress that these probing tasks are only used as a means of evaluation. Because very little supervised data is required, they are particularly suitable for situations where obtaining the expert trajectories or reward labels is expensive. Through thorough experimentation, we show that the performance of the SSL algorithms (in terms of their downstream RL outcomes) correlates with the performance in both probing tasks with statistically significant (p<0.001) Spearman’s rank correlation, making them particularly effective proxies. Given the vastly reduced computational burden of linear evaluations, we argue that it enables much easier and straightforward experimentation of SSL design choices, paving the way for a more systematic exploration of the design space. Finally, we leverage this framework to systematically assess some key attributes of SSL methods. First off, we explore the utility and role of learning a forward model as part of the self-supervised objective. We investigate whether its expressiveness matters and show that equipping it with the ability to model uncertainty (through random latent variable) significantly improves the quality of the representations. Next, we identify several knobs in the self-supervised objective, allowing us to carefully tune the parameters in a principled way. Finally, we confirm the previous finding (Schwarzer et al., 2021b) that bigger architectures, when adequately pre-trained, tend to perform better. Our contributions can be summarized as follows: • Design of a rigorous and efficient SSL evaluation protocol in the context of RL • Empirical demonstration that this evaluation scheme correlates with downstream RL perfor- mance • Systematic exploration of design choices in existing SSL methods. 2 RELATED WORK 2.1 REPRESENTATION LEARNING There has recently been a surge in interest and advances in the domain of self-supervised learning in computer vision. Some state-of-art techniques include contrastive learning methods SimCLR, MoCov2 (Chen et al., 2020a;b); clustering methods SwAV (Caron et al., 2020); distillation methods BYOL, SimSiam, OBoW (Grill et al., 2020; Chen and He, 2021; Gidaris et al., 2020); and information maximization methods Barlow Twins and VicReg (Zbontar et al., 2021; Bardes et al., 2021). These advances have likewise stimulated development in representation learning for reinforcement learning. A line of work includes unsupervised losses as an auxiliary objective during RL training to improve data efficiency. Such objective can be contrastive (Laskin et al., 2020; Zhu et al., 2020) or non-contrastive (Schwarzer et al., 2021a; Yu et al., 2022). ST-DIM (Anand et al., 2019), ATC (Stooke et al., 2021) and BVS-DIM (Mengistu et al., 2022) incorporate temporal information in their contrastive objective, adapting similar techniques from the unsupervised video representation learning (Sermanet et al., 2018). Proto-RL (Yarats et al., 2021a) uses a SwAV-like objective to learn representation as well as guide effective exploration during pre-training. 
Similarly, CRL (Du et al., 2021) trains a policy to optimize a SimCLR loss, then shows transfer to RL, imitation learning and image classification. Closer to our approach, SGI (Schwarzer et al., 2021b) pretrains both an encoder and forward prediction model by minimizing the distance between predictions and target latents using BYOL, and the encoder is recycled during RL for improved data efficiency. While different in spirit, many model based methods also train an encoder from a corpus of trajectory, either by explicit pixel reconstruction Kaiser et al. (2020); Hafner et al. (2021) or in embedding space Ye et al. (2021); Schrittwieser et al. (2020). Self-supervised representations have also been used for imitation learning (Aytar et al., 2018; Pari et al., 2021) as well as exploration (Burda et al., 2019a). 2.2 REPRESENTATION PROBING IN REINFORCEMENT LEARNING Some prior work (Racah and Pal, 2019; Guo et al., 2018; Anand et al., 2019; Higgins et al., 2018; Dittadi et al., 2022) evaluate the quality of their pretrained representations by probing for ground truth state variables such as agent/object locations, game scores or model-specific quantities (eg. ELBO). Das et al. (2020) propose to probe representations with natural language question-answering. Despite the efficiency of these probing methods, their designs are highly domain-specific and require careful handcrafting for each environment. In addition, they fail to demonstrate the actual correlation between probing and RL performances, which makes their practical usefulness uncertain. On the other hand, the authors of ATC (Stooke et al., 2021) propose to evaluate representations by finetuning for RL tasks using the pretrained encoder with weights frozen. Similarly, Laskin et al. (2021) propose a unified benchmark for SSL methods in continuous control but still require full RL training. Our work seeks to bridge these two approaches by demonstrating the correlation between linear probing and RL performances, as well as designing probing tasks that are generalizable across environments. 3 A FRAMEWORK TO DEVELOP UNSUPERVISED REPRESENTATIONS FOR RL In this section, we detail our proposed framework for training and evaluating unsupervised representations for reinforcement learning. 3.1 UNSUPERVISED PRE-TRAINING The network is first pre-trained on a large corpus of trajectories. Formally, we define a trajectory Ti of length Ti as a sequence of tuples Ti = [(ot, at) | t ∈ [1, Ti]], where ot is the observation of the state at time t in the environment and at was the action taken in this state. This setting is closely related to Batch RL (Lange et al., 2012), with the crucial difference that the reward is not being observed. In particular, it should be possible to use the learned representations to maximize any reward (Touati and Ollivier, 2021). The training corpus corresponds to a set of such trajectories: Dunsup {T1, · · · , Tn}. We note that the policy used to generate this data is left unspecified in this formulation, and is bound to be environment-specific. Since unsupervised methods usually necessitate a lot of data, this pre-training corpus is required to be substantial. In some domains, it might be straightforward to collect a large number of random trajectories to constitute Dunsup. In some other cases, like self-driving, where generating random trajectories is undesirable, expert trajectories from humans can be used instead. 
The goal of the pre-training step is to learn the parameters θ of an encoder ENCθ which maps any observation o of the state s (for example raw pixels) to a representation e = ENCθ(o). This representation must be amenable for the downstream control task, for example learning a policy. 3.2 EVALUATION In general, the evaluation of RL algorithms is tricky due to the high variance in performance (Henderson et al., 2018). This requires evaluating many random seeds, which creates a computational burden. We side-step this issue by formulating an evaluation protocol which is light-weight and purely supervised. Specifically, we identify two proxy supervised tasks that are broadly applicable and relevant for control. We further show in the experiment section that they are sound, in the sense that models’ performance on the proxy tasks strongly correlates with their performance in the downstream control task of interest. Similar to the evaluation protocol typically used for computer vision models, we rely on linear probing, meaning that we train only a linear layer on top of the representations, which are kept frozen. Reward Probing Our first task consists in predicting the reward observed in a given state. For this task, we require a corpus of trajectories Drew = {T ′1, · · · , T ′m} for which the observed rewards are known, i.e. T ′i = [(ot, at, rt) | t ∈ [1, Ti]] In the most general setting, it can be formulated as a regression problem, where the goal is to minimize the following loss: L(ψ)reward-reg = 1 |Drew| ∑ T ′i∈Drew 1 |T ′i| ∑ (ot,at,rt∈T ′i) ∥lψ(ENCθ(ot))− rt∥2 Here, the only learnt parameters ψ are those of the linear prediction layer lψ . In practice, in many environments where rewards are sparse, the presence or absence of a reward is more important than its magnitude. To simplify the problem in those cases, we can cast it as a binary prediction problem instead (this could be extended to ternary classification if the sign of the reward is of interest): L(ψ)reward-classif = 1 |Drew| ∑ T ′i∈Drew 1 |T ′i| ∑ (ot,at,rt∈T ′i) BinaryCE(1R>0(rt), lψ(ENCθ(ot))) Reward prediction is closely related to value prediction, a central objective in RL that is essential for value-based control and the critic in actor-critic methods. The ability to predict instantaneous reward, akin to predicting value with a very small discount factor, can be viewed as a lower bound on the learned representation’s ability to encode the value function, and has been demonstrably helpful for control, particularly in sparse reward tasks (Jaderberg et al., 2017). Thus, we hypothesize reward prediction accuracy to be a good probing proxy task for our setting as well. Action prediction Our second task consists in predicting the action taken by an expert in a given state. For this task, we require a corpus of trajectories Dexp = {T1, · · · , Tn} generated by an expert policy. We stress that this dataset may be much smaller than the pretraining corpus since we only require to fit and evaluate a linear model. The corresponding objective is as follows: L(ψ)action-classif = 1 |Dexp| ∑ Ti∈Dexp 1 |Ti| ∑ (ot,at∈T ′i) CrossEntropy(at, lψ(ENCθ(ot))) This task is closely related to imitation learning, however, we are not concerned with the performance of the policy that we learn as a by-product. 
4 SELF PREDICTIVE REPRESENTATION LEARNING FOR RL In our work, we focus on evaluating and improving a particular class of unsupervised pretraining algorithms that involves using a transition model to predict its own representations in the future (Schwarzer et al., 2021b; Guo et al., 2018; Gelada et al., 2019). This pretraining modality is especially well suited for RL, since the transition model can be conditioned on agent actions, and can be repurposed for model-based RL after pretraining. Our framework is depicted in Fig.2. In this section, we present the main design choices, and we investigate their performance in Section 5. 4.1 TRANSITION MODELS Our baseline transition model is a 2D convolutional network applied directly to the spatial output of the convolutional encoder (Schwarzer et al., 2021b; Schrittwieser et al., 2020). The network consists of two 64-channel convolutional layers with 3x3 filters. The action is represented as a one-hot encoding spatially replicated (in a 2D map) and concatenated with the representation input along the channel dimension. We believe a well-established sequence modeling architecture such as GRU can serve as a superior transition model. Its gating mechanisms should be better at retaining information from both the immediate and distant past, especially helpful for learning dynamics in a partially observable environment. Encoder : ê0 = e0 = ENCθ(o0) RecurrentModel : êt = fϕ(êt−1, at−1) In addition to the deterministic GRU model above, we also experiment with a GRU variant where we introduce stochastic states to allow our model to generalize better to stochastic environments, such as Atari with sticky actions (Machado et al., 2018). Our model is based on the RSSM from DreamerV2 (Hafner et al., 2021), with the main difference being that while pixel reconstruction is used as the SSL objective in the original work, we minimize the distance between predictions and targets purely in the latent space. Following DreamerV2, we optimize the latent variables using straight-through gradients (Bengio et al., 2013), and minimize the distance between posterior (z) and prior (ẑ) distributions using KL loss. Encoder : et = ENCθ(ot) RecurrentModel : ht = fϕ(ht−1, zt−1, at−1) PosteriorModel : zt ∼ pϕ(zt|ht, et) PriorPredictor : ẑt ∼ jϕ(ẑt|ht) LatentMerger : êt = gϕ(ht, zt) 4.2 PREDICTION OBJECTIVES The objective of self predictive representation learning is to minimize the distance between the predicted and the target representations, while ensuring that they do not collapse to a trivial solution. Our baseline prediction objective is BYOL (Grill et al., 2020), which is used in SGI (Schwarzer et al., 2021b). The predicted representation êt+k, and the target representation ẽt+k are first projected to lower dimensions to produce ŷt+k and ỹt+k. BYOL then maximizes the cosine similarity between the predicted and target projections, using a linear prediction function q to translate from ŷ to ỹ: LBY OLθ (ŷt:t+k, ỹt:t+k) = − K∑ k=1 q(ŷt+k) · ỹt+k ∥q(ŷt+k)∥2 · ∥ỹt+k∥2 In the case of BYOL, the target encoder and projection module are the exponentially moving average of the online weights, and the gradients are blocked on the target branch. As an alternative prediction objective, we experiment with Barlow Twins (Zbontar et al., 2021). 
Similar to BYOL, Barlow Twins minimizes the distance of the latent representations between the online and target branches; however, instead of using a predictor module and stop gradient on the target branch, Barlow Twins avoids collapse by pushing the cross-correlation matrix between the projection outputs on the two branches to be as close to the identity matrix as possible. To adapt Barlow Twins, we calculate the cross correlation across batch and time dimensions: LBT (ŷt:t+k, ỹt:t+k) = ∑ i (1− Cii)2 + λ ∑ i,j ̸=i C2ij where Cij = ∑ b,t(ŷb,t,i) · (ỹb,t,j)√∑ b,t(ŷb,t,i) 2 · √∑ b,t(ỹb,t,j) 2 where λ is a positive constant trading off the importance of the invariance and covariance terms of the loss, C is the cross-correlation matrix computed between the projection outputs of two branches along the batch and time dimensions, b indexes batch samples, t indexes time, and i, j index the vector dimension of the projection output. By enabling gradients on both the prediction and target branches, the Barlow objective pushes the predictions towards the representations, while regularizing the representations toward the predictions. In practice, learning the transition model takes time and we want to avoid regularizing the representations towards poorly trained predictions. To address this, we apply a higher learning rate to the prediction branch. We call this technique Barlow Balancing, and implement it in Algorithm 1. Algorithm 1: PyTorch-style pseudocode for Barlow Balancing BarlowLoss = µ ∗ LBT (ŷ, ỹ.detach()) + (1− µ) ∗ LBT (ŷ.detach(), ỹ) 4.3 OTHER SSL OBJECTIVES SGI’s authors (Schwarzer et al., 2021b) showed that in the absence of other SSL objectives, pretraining with BYOL prediction objective alone results in representation collapse; the addition of inverse dynamics modeling loss is necessary to prevent collapse, while the addition of goal-oriented RL loss results in minor downstream RL performance improvement. In inverse dynamics modeling, the model is trained using cross-entropy to model p(at|ŷt+k, ỹt+k+1), effectively predicting the transition action between two adjacent states. The goal-oriented loss tries to predict distance to states in the near future from the sampled trajectories (details in Appendix). 5 RESULTS 5.1 EXPERIMENTAL DETAILS We conduct experiments on the Arcade Learning Environment benchmark (Bellemare et al., 2013). Given the multitude of pretraining setups we investigate, we limit our experiment to 9 Atari games1. Pretraining We use the publicly-available DQN replay dataset (Agarwal et al., 2020), which contains data from training a DQN agent for 50M steps with sticky action (Machado et al., 2018). We select 1.5 million frames from the 3.5 to 5 millionth steps of the replay dataset, which constitutes trajectories of a weak, partially trained agent. We largely follow the recipe of SGI (Schwarzer et al., 2021b), where we jointly optimize the self prediction, goal-conditioned RL, and inverse dynamics modeling 1Amidar, Assault, Asterix, Boxing, Demon Attack, Frostbite, Gopher, Krull, Seaquest losses for 20 epochs; in some of our experiments we remove one or both of the last two objectives. We use the data-augmentations introduced by Yarats et al. (2021b). All experiments are performed on a single MI50 AMD GPU, and the pretraining process took 2 to 8 days depending on the model. Reward probing We focus on the simplified binary classification task of whether a reward occurs in a given state. 
We use 100k frames from the 1-1.1 millionth step of the replay dataset, with a 4:1 train/eval split. We train a logistic regression model on frozen features using the Cyanure (Mairal, 2019) library, with the MISO algorithm (Mairal, 2015) coupled with QNING acceleration (Lin et al., 2019) for a maximum of 300 steps. We do not use any data augmentation. We report the mean F1 averaged across all 9 games. On a MI50 AMD GPU, each probing run takes 10 minutes.

Action probing We use the last 100k (4:1 train/eval split) frames of the DQN replay dataset, which correspond to a fully trained DQN agent. We train a linear layer on top of frozen, un-augmented features for 12 epochs with softmax focal loss (Lin et al., 2017) using the SGD optimizer with learning rate 0.2, batch size 256, 1e-6 weight decay, and a stepwise scheduler with step size 10 and gamma 0.1. We report the Multiclass F1 (weighted average of the F1 scores of each class) averaged across all games.

RL evaluation We focus on the Atari 100k benchmark (Kaiser et al., 2020), where only 100k interactive steps are allowed by the agent. This is roughly equivalent to two hours of human play, providing an approximation for human-level sample efficiency. We follow the Schwarzer et al. (2021b) training protocol using the Rainbow algorithm (Hessel et al., 2018) with the following differences: we freeze the pretrained encoder (thus only training the Q head), do not apply auxiliary SSL losses while fine-tuning, and finally disable noisy layers and rely instead on ϵ-greedy exploration. These changes are made to make the RL results reflect as closely as possible the performance induced by the quality of the representations. On a MI50 AMD GPU, each run takes between 8 and 12 hours.

We evaluate the agent’s performance using the human-normalized score (HNS), defined as (agent score − random score)/(human score − random score). We calculate this per game, per seed by averaging scores over 100 evaluation trajectories at the end of training. For aggregate metrics across games and seeds, we report the median and interquartile mean (IQM). For the median, we first average the HNS across seeds for each game, and report the median of the averaged HNS values. For IQM, we first take the middle 50% of scores across both seeds and games, then report the average. While the median is commonly reported for Atari100k, recent work has recommended IQM as a superior aggregate metric for the RL setting due to its smaller uncertainty (Agarwal et al., 2021); we also follow the cited work to report the 95% bootstrapped confidence intervals for these aggregate metrics.

Unless specified otherwise, the experiments use the medium ResNet-M from Schwarzer et al. (2021b), and the inverse dynamics loss as an auxiliary loss. In BYOL experiments, the target network is an exponential moving average of the online network, while in Barlow Twins both networks are identical, following the original papers. For additional details regarding model architectures and hyperparameters used during pretraining and RL evaluation, please refer to the Appendix.

Table 1: F1 scores on probing tasks for different transition models and prediction objectives. All standard deviations are on the order of 1e-4.
Pred Obj    Transition   Reward  Action
BYOL        Conv-det     64.9    22.7
BYOL        GRU-det      62.2    26.8
BYOL        GRU-latent   63.4    23.2
Barlow0.7   Conv-det     52.7    24.9
Barlow0.7   GRU-latent   67.5    26.2

Table 2: F1 scores on probing tasks for different Barlow variants. All standard deviations are on the order of 1e-4, which we omit below.
Pred Obj     Reward  Action
Barlow0.5    65.0    26.3
Barlow0.7    67.5    26.2
Barlow1      65.0    24.7
Barlowrand   67.7    25.8

In table 1, we report the mean probing F1 scores for the convolutional, deterministic GRU, and latent GRU transition models trained using either the BYOL or Barlow prediction objective. When using the BYOL objective, the relative probing strengths for the different transition models are somewhat ambiguous: while the convolutional model results in better reward probing F1, the GRU models are superior in terms of expert action probing. Interestingly, we observe that after replacing BYOL with Barlow, the probing scores for the latent model improve, while those of the deterministic models deteriorate. Overall, the particular combination of pre-training using the GRU-latent transition model with the Barlow prediction objective results in representations with the best overall probing qualities. Since the deterministic model’s predictions are likely to regress to the mean, allowing gradients to flow through the target branch in the case of the Barlow objective can regularize the representations towards poor predictions, which can explain their inferior probing performance. Introducing latent variables can alleviate this issue through better predictions. We stress that the transition models are not used during probing, only the encoder is. These experiments show that having a more expressive forward model during the pre-training has a direct impact on the quality of the learnt representations. In Fig.3, we investigate the impact of the latent variable on the information contained in the representations, by training a decoder on frozen features.

In table 2, we show the results from experimenting with different variants of the Barlow objective. We find that using a higher learning rate for the prediction branch (Barlow0.7, with a 7:3 prediction-to-target lr ratio) results in a better probing outcome than using equal learning rates (Barlow0.5) or not letting gradients flow in the target branch altogether (Barlow1, where the target encoder is a copy of the online encoder). This suggests that while it is helpful to regularize the representations towards the predictions, there is a potential for them being regularized towards poorly trained ones. This can be addressed by applying a higher learning rate on the prediction branch. We also demonstrate that using a frozen, random target network (Barlowrand) results in good features, and in our experiments it gets the best reward probing performance. This contradicts findings from the vision domain (Grill et al., 2020), but corroborates self-supervised results from other domains such as speech (Chiu et al., 2022). Random networks have also been shown to exhibit useful inductive biases for exploration (Burda et al., 2019b;a). An explanation is that random targets act as a regularization that prevents partial collapse by enforcing a wide range of features to be encoded by the model.

5.3 IMPACT OF AUXILIARY SSL OBJECTIVES AND ENCODERS

SSL objective Although pretraining with multiple objectives can sometimes result in better downstream performance, in practice they also make it harder to tune hyperparameters and debug, therefore it is desirable to use the smallest number of objectives that can result in comparable performance. In table 4, we show the effects of the inverse dynamics modeling (inv) and goal-conditioned RL (goal) objectives on probing performance.
The BYOL model experiences partial collapse without the inverse dynamics modeling loss, while the addition of goal loss improves the probing performance slightly. This is in congruence with results reported by Schwarzer et al. (2021b) for the same ablations. The Barlow-only model performs significantly better than the BYOL-only model in terms of probing scores, indicating that the Barlow objective is less prone to collapse in the predictive SSL setting. Similar to the BYOL model, the Barlow model can also be improved with inverse dynamics modeling, while the addition of goal loss has a slight negative impact. Encoders SGI (Schwarzer et al., 2021b) showed that using bigger encoders during pretraining results in improved downstream RL performance. We revisit this topic from the point of finding out whether the pretrained representations from bigger networks also have better probing qualities. We experiment with the medium (ResNet-M) and large (ResNet-L) residual networks from SGI. In table 5 we show that Barlow models pretrained using the larger ResNet have improved probing scores. 5.4 CORRELATIONS BETWEEN PROBING AND RL PERFORMANCES If our goal is to use linear probing as a guide to identify superior pretraining setup for RL, then they are only useful to the extent to which they correlate with the actual downstream RL performance. We perform RL evaluations for 9 representative setups (the best settings from each of table 1,2,4,5), as well as two contrastive methods: ST-DIM (Anand et al., 2019) and ATC (Stooke et al., 2021); and a reconstruction-based method VAE-T (Stooke et al., 2021)2. We report their probing and aggregate RL metrics in table 3, with the confidence intervals of the aggregate RL metrics depicted on the right. We find that the rank correlations between reward and action probing F1 scores and the RL aggregate metrics are significant (Figure 1). In summary, our results show the proposed probing scheme is a reliable guide for designing pretraining setups that deliver significant downstream RL performance improvements. 6 CONCLUSION In this paper we have investigated the opportunity to replace costly RL evaluation with lightweight linear probing task to assess the quality of learned representations. Reward and action probing are task-agnostic and should cover most practical applications. Using this methodology to guide us, we have demonstrated the impact of a number of key design choices in the pre-training methodology. We hope that these results encourage the research community to systematically explore the design space to further improve the quality of self-supervised representations for RL. 2See appendix for details on ATC, ST-DIM and VAE-T A MODELS AND HYPER-PARAMETERS A.1 BACKBONES M and L models are ResNet-M and ResNet-L from SGI (Schwarzer et al., 2021b). The ResNet-M encoder consists of inverted residual blocked with an expansion ratio of 2, with batch normalization applied after each convolutional layer; it uses 3 groups with 32, 64, and 64 channels, and has 3 residual blocks per group; it down-scales the input by a factor of 3 in the first group and 2 in the latter 2 groups. This yields a representation of shape 64x7x7 when applied to 84x84-dimensional Atari frames. ResNet-L uses 3 groups with 48, 96, and 96 channels, and has 5 residual blocks per group; it uses a larger expansion ratio of 4, producing a representation shape of 96x7x7 from an 84x84 frame. This enlargement increases the number of parameters by approximately a factor of 5. 
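As a reference point, the ResNet-M structure described above can be sketched in PyTorch as follows. The exact internals of SGI's inverted-residual blocks are not specified in the text, so the block implementation, the grayscale 4-frame input, and the helper names are assumptions; the sketch only reproduces the stated channel counts, strides, and output shape.

import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    # Simplified inverted-residual block: expansion ratio 2, batch norm after each conv.
    def __init__(self, in_ch, out_ch, stride=1, expansion=2):
        super().__init__()
        hidden = in_ch * expansion
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        self.skip = stride == 1 and in_ch == out_ch  # residual connection only when shapes match

    def forward(self, x):
        out = self.conv(x)
        return x + out if self.skip else out

def resnet_m(in_ch=4):
    # 3 groups of (32, 64, 64) channels, 3 blocks per group, downscale factors 3, 2, 2.
    layers, ch = [], in_ch
    for out_ch, stride in [(32, 3), (64, 2), (64, 2)]:
        layers.append(InvertedResidual(ch, out_ch, stride))
        layers += [InvertedResidual(out_ch, out_ch) for _ in range(2)]
        ch = out_ch
    return nn.Sequential(*layers)

# A stack of 4 Atari frames at 84x84 yields a 64x7x7 representation, as stated above:
# resnet_m()(torch.zeros(1, 4, 84, 84)).shape == torch.Size([1, 64, 7, 7])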
The S model is the model used in Stooke et al. (2021). It consists of three convolutional layers, with [32, 64, 64] channels, kernel sizes [8, 4, 3], and strides [4, 2, 1], listed from first to last layer.

A.2 TRANSITION MODELS

We experimented with three transition models: a convolutional model, a deterministic GRU, and a latent GRU. Our convolutional model is based on SGI (Schwarzer et al., 2021b). The input into the convolutional transition model is the concatenation of the spatially replicated 2D action map and the representation $e_t$ along the channel dimension. The network itself consists of two 64-channel convolutional layers with 3x3 filters, separated by ReLU activation and batch normalization layers.

The deterministic GRU has hidden dimension 600 and input dimension 250. The input $a_t$ is prepared by passing the one-hot action vector through a 250-dimensional embedding layer. The initial hidden state $\hat{e}_0$ is generated by projecting the representation $e_0$ through a 600-dimensional linear layer with ELU activation and dropout. Layer normalization is applied to the hidden input at all timesteps.

The latent GRU model is based on DreamerV2’s RSSM (Hafner et al., 2021), and consists of a recurrent model, posterior model, prior predictor, and latent merger. The recurrent model has a hidden dimension and input dimension of 600. The initial hidden state $h_0$ and input $z_0$ are zero vectors. The flattened stochastic variables $z_t$ and one-hot action vector $a_t$ are first concatenated and then projected to 600 dimensions through a linear layer with ELU activation, before being passed into the recurrent model as input. Layer normalization is applied to the hidden input at all non-zero timesteps. The posterior model is a two-layer MLP with a 600-dimensional bottleneck separated by ELU activation. It takes the concatenation of the representation $e_t$ and the recurrent hidden output $h_t$ as input, and outputs a 1024-dimensional vector representing the 32-dimensional logits for 32 latent categorical variables. $z_t$ is sampled from the posterior logits. The prior model is a two-layer MLP with a 600-dimensional bottleneck separated by ELU activation. Its output format is the same as that of the posterior model. $\hat{z}_t$ is sampled from the prior logits. The latent merger is a linear layer that projects the concatenation of $h_t$ and the flattened $z_t$ to the same dimension as the representation $e_t$.

A.3 SSL PROJECTION MODULE

In the case of the deterministic GRU, $\hat{e}$ is first projected to the same dimension as the representation through a linear layer. Henceforth we shall assume that $\hat{e}$ underwent this step for GRU-det. The predicted representation $\hat{e}$ and target representation $\tilde{e}$ are projected to 1024-dimensional vectors $\hat{y}$ and $\tilde{y}$ through a linear layer. The BYOL objective involves processing $\hat{y}$ with an additional linear layer $q$ with output dimension 1024. The Barlow objective involves applying batch normalization to $\hat{y}$ and $\tilde{y}$ prior to taking the covariance and variance losses. The inverse dynamics model is a two-layer MLP with a 256-dimensional bottleneck separated by ReLU activation. It takes the concatenation of $\hat{y}_t$ and $\tilde{y}_{t+1}$ as input, and outputs logits with dimension equal to the number of actions.

A.4 ATC, VAE-T, ST-DIM

We use the implementation, hyperparameters and architecture from the codebases of (Stooke et al., 2021) and (Stooke and Abbeel, 2019) for these models. We change the dataset to the one used in all our experiments, i.e., the dataset described in Section 5, and train all methods for 58,500 updates.
ATC (Augmented-Temporal Contrast) uses an InfoNCE loss between the outputs of the momentum encoder and the online branch, applied to different augmentations of an image, to pre-train the encoder. VAE-T from Stooke et al. (2021) uses a variational auto-encoder (Kingma and Welling, 2014) objective to reconstruct the frame from the next time step given an image at the current time step. ST-DIM (Anand et al., 2019) also uses an InfoNCE objective, and in addition to traditional global-global infomax, introduces global-local infomax by using local representations taken from the feature map output of the convolutional encoder and the global pooled feature vector as positive pairs. For more details, we refer the reader to the referenced works.

A.5 IMAGE RECONSTRUCTION MODEL

We used a decoder architecture that mirrors the structure of the ResNet-M encoder. In decoding, instead of transposed convolutions we used upsampling with the nearest value followed by a regular convolution (Odena et al., 2016). We used the mean squared error between the reconstructed pixels and the target image as the training criterion. Models were trained and evaluated on the same data as reward and action probing, for 30 epochs using the Adam optimizer with learning rate 0.001.

A.6 HYPERPARAMETERS

See tables 6, 7, 8, 9 for hyperparameter values. For ATC, ST-DIM and VAE-T hyperparameters, see Stooke et al. (2021).

A.7 IMAGE AUGMENTATION

We use the same image augmentations as used in SGI (Schwarzer et al., 2021b), which itself used the augmentations in DrQ (Yarats et al., 2021b), in both pretraining and fine-tuning. We specifically apply random crops (4 pixel padding and 84x84 crops) and image intensity jittering.

A.8 GOAL-ORIENTED RL LOSS

The goal-oriented RL loss is taken directly from SGI (Schwarzer et al., 2021b). This objective trains a goal-conditional DQN, with rewards specified by proximity to sampled goals. First, a goal $g$ is sampled to be the state encoding either of the near future in the current trajectory (up to 50 steps in the future), or, with a probability of 20%, of the future state in another trajectory in the current batch. Then, we add Gaussian noise to obtain the final goal $g$: $g \leftarrow \alpha n + (1 - \alpha) g$, where $\alpha \sim \mathrm{Uniform}(0.5)$, and $n$ is a vector sampled from an isotropic Gaussian, normalized to have length 1. Then, in order to obtain the reward of taking action $a_t$ going from state $s_t$ to $s_{t+1}$, we first encode the states with the target encoder, $\tilde{e}_t = \mathrm{ENC}_{\mathrm{target}}(o_t)$, $\tilde{e}_{t+1} = \mathrm{ENC}_{\mathrm{target}}(o_{t+1})$. Then, we calculate the reward as:

$$R(\tilde{e}_t, \tilde{e}_{t+1}) = d(\tilde{e}_t, g) - d(\tilde{e}_{t+1}, g), \quad \text{where} \quad d(\tilde{e}_t, g) = \exp\left( 2\,\frac{\tilde{e}_t \cdot g}{\lVert \tilde{e}_t \rVert_2 \cdot \lVert g \rVert_2} - 2 \right)$$

We use FiLM (Perez et al., 2018) to condition the Q-function $Q(o_t, a_t, g)$ on $g$, and optimize the model using DQN (Mnih et al., 2015).

B FORWARD MODEL PROBING

While our principal goal is to demonstrate the correlation between representation probing and offline RL performances, we also apply the reward probing technique to predictions in order to evaluate the qualities of transition models under different pretraining setups. In table 10, we show the effects of using different transition models during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder and the inverse loss. The goal loss is also applied to the BYOL models.

Table 8: GRU-latent specific hyperparameters.
Parameter        Setting
kl loss weight   0.1
kl balance       0.95

Table 10: Mean reward probing F1 scores for pretraining setups with different transition models. Evaluated on the 5th and 10th predictions. All standard deviations are on the order of 1e-4.
Pred Obj    Transition   Pred 5  Pred 10
BYOL        Conv-det     33.1    28.4
BYOL        GRU-det      33.0    27.4
BYOL        GRU-latent   33.4    28.9
Barlow0.7   Conv-det     32.0    27.6
Barlow0.7   GRU-det      30.1    25.0
Barlow0.7   GRU-latent   39.5    30.2

Table 11: Mean reward probing F1 scores for pretraining setups with different prediction objectives. Evaluated on the 5th and 10th predictions. All standard deviations are on the order of 1e-4.
Pred Obj     Pred 5  Pred 10
BYOL         33.4    28.9
Barlow0.5    40.2    30.2
Barlow0.7    39.5    30.2
Barlow1      37.4    29.7
Barlowrand   36.8    27.5

In the deterministic setting, the predictions of the GRU model are worse than those of the convolutional model. The introduction of stochasticity appears to fix the underlying issue for predictions, resulting in the latent GRU model having the best overall prediction probing performance. One possible explanation for Conv-det having better predictions than GRU-det is that the spatial inductive bias in the convolutional kernels acts as a constraint and helps regularize the predictions from regressing to the mean. However, this is more effectively solved by the introduction of latent variables into the GRU during training and inference.

In table 11, we show the effects of using different prediction objectives during pretraining on prediction probing performance. All models are trained with the ResNet-M encoder, the GRU-latent transition model, and the inverse loss; the goal loss is also applied to the BYOL model. Compared to the BYOL model, the Barlow models generally have higher probing scores for predictions. We also note that for Barlow models, regularizing the representations towards the predictions (by setting Barlow Balance < 1) improves the quality of the predictions. This is likely because it makes the prediction task easier, making it more likely to learn a capable transition model. This reasoning can also explain why the Barlow model with a frozen, random target network achieves a superior probing result for representations (Table 2) but a worse result for predictions compared to the other Barlow versions. Predicting a random target representation is likely more difficult than predicting a learned representation, and this may in turn encourage the model to rely more on learning a powerful encoder and posterior model, and less on learning an accurate transition model.

C FULL RL RESULTS

D STATISTICAL HYPOTHESIS TESTING OF RANK CORRELATION

In Fig. 5, we show the correlation results for both the action and reward predictions. We estimate Spearman’s rank correlation coefficient (Spearman’s r) between the linear probing performance and the (interquartile) mean RL human-normalized score (HNS) over 9 Atari games. The reason for using Spearman’s r instead of the Pearson correlation coefficient is that we are interested in whether the relative ranking of the models on the linear probing tasks is indicative of the relative ranking of the same models when RL is trained on top of them. As an example, this allows us to say that if model A out-ranks model B in the reward prediction task, an RL model trained on top of model A’s representations will likely out-perform an RL model trained on top of model B’s representations. However, it does not let us predict by how much model A will out-perform model B. Let $d$ denote the difference in ranking between the linear probing performance and the RL performance; Spearman’s r (denoted as $\rho$ below) is computed as

$$\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}, \qquad (1)$$

where $d_i$ is the difference in ranking for the i-th model, and $n$ is the total number of models we have.
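A small Python sketch of equation (1), together with the permutation test described in the next paragraph. The use of scipy, the function names, and the default number of permutations are tooling assumptions rather than the exact analysis code.

import numpy as np
from scipy.stats import spearmanr

def spearman_with_pvalue(probe_f1, rl_score, n_perm=50_000, seed=0):
    # probe_f1, rl_score: 1-D arrays of per-model probing F1 and aggregate RL scores
    rho_obs = spearmanr(probe_f1, rl_score).correlation
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # independently permute both orderings to build the null distribution (rho = 0)
        null[i] = spearmanr(rng.permutation(probe_f1), rng.permutation(rl_score)).correlation
    p_value = float((null >= rho_obs).mean())  # one-tailed test for rho > 0
    return rho_obs, p_value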
We perform statistical hypothesis testing on ρ with null hypothesis ρ = 0 (no correlation between linear probing performance and RL performance) and alternative hypothesis ρ > 0 (positive correlation). The null distribution is constructed nonparametrically using permutation testing: we sample random orderings of the observed linear probing performance and RL performance independently and compute ρ. This is repeated 50,000 times to generate the null distribution (which is centered at ρ = 0 as we do not expect randomly ordered values to be correlated). We then compare our observed ρ to this distribution and perform one-tailed test for the proportion of samples larger than our observed ρ to report our p-value. D.1 RANK CORRELATION ON A DIFFERENT DATASET In Fig. 1, we explored the correlation between the RL performance and the reward probing task, where the dataset used for the reward probing was a set of quasi-random trajectories from the DQN dataset, coming from very beginning of the training run of the DQN agent used to collect the data. It is natural to ask whether the correlation results we obtain are sensitive to the specific dataset used. To put this question to the test, we re-run the same reward probing task, this time on the "expert" dataset, i.e. the last trajectories of the DQN dataset, corresponding to a fully trained agent. The results are shown in Fig.6. The Spearman’s correlation coefficient that we obtain is the exact same as the one for the random trajectory dataset (even though the reward statistic are different, see Table 14), showing that the correlation result is not sensitive to the probing dataset used. D.2 CONFIDENCE INTERVAL OF RL PERFORMANCE AS A FUNCTION OF INDEPENDENT RUNS We further show the confidence interval of the estimated mean RL performance as the number of independent runs increase. From our total of 10 independent runs each game, we sample with replacement k ≤ 10 runs (k being number of independent runs we “pretend” to have instead of the full 10), independently for each game. We can compute the IQM over this sample to get an estimate for the IQM as if we only have k independent runs. We repeat this process 10,000 times to construct the 95 confidence interval of the empirical IQM for different k’s. Illustrative examples of how much this confidence interval shrinks for different pairs of models is shown in Fig. 7. We observe in Fig. 7 the mean RL performance estimates have CIs that eventually separate with many independent runs. This is an unbiased but high variance and computationally intensive estimator of the true expected RL performance. On the other hand, the reward prediction F1 score is a computationally cheap, low variance and accurate estimator of the relative model ranks in mean RL performance. This further corroborates our previous results of positive correlation between reward prediction F1 score and mean RL performance (Fig. 1). E COMPARISON WITH DOMAIN SPECIFIC PROBING BENCHMARKS One of the key advantages of our probing method is that it is domain agnostic, unlike the previously proposed AtariARI benchmark (Anand et al., 2019) which acquires probing labels through the RAM state of the emulator, making their method impractical for image-based trajectories. To better understand how our probing metrics compare with the domain specific ones in terms of correlations with RL performances, we perform the AtariARI probing benchmarks using our pretrained encoders on the 4 overlapping games (Boxing, Seaquest, Frostbite, DemonAttack) used in both works. 
For AtariARI, we first calculate the average probe F1 scores across categories, then average this quantity across the games. For reward probing, we apply our own protocol detailed in section 5.1. For RL performance we use the IQM. We report the correlation between the probing metrics and RL performances across different models. Our results are summarized in Table 13. We find that the correlation between the average probing F1s and RL performances is stronger for our reward probing method. In particular, our probing method has a significant correlation with RL performances (p < 0.05), while the AtariARI probing method does not. F PROBING DURING TRAINING We show evolution of probing performance as training progresses in figure 8. G REWARD STATISTICS IN PROBING DATASETS In table 14, we report the percentage of states that have a non-zero reward in each of the 9 games, for two different subsets of data: • Checkpoint 1, which correspond to quasi-random trajectories from the beginning of the training process of DQN. This is the data used for the reward probing in Fig 1. • Checkpoint 50, which is the last checkpoint of the DQN replay dataset, and corresponds to the fully trained DQN agent, that we assimilate to an expert agent. This data is used for action probing, and for reward probing in Fig.6 All the games have a fairly small percentage of positive reward states, and we generally observe a higher percentage of reward in checkpoint 50, which is expected since the agent is more capable by then. G.1 IMPACT OF SPARSITY ON THE CORRELATION In Fig.9, we plot the Spearman’s correlation coefficient between the RL performance on each individual game and the reward probing F1, as a function of the percentage of reward observed in each game (see Table 14). We do not observe any particular pattern with respect to the sparsity, suggesting that the probing task is not very sensitive to the sparsity level of each individual game. Note however that, as usual in the Atari benchmark, it is difficult to draw conclusion from any given individual game, and the statistical significance of our results only emerge when considering the set of games as a whole. Indeed, only 3 games achieve individual statistical significance at p < 0.01 (Boxing, Seaquest and Assault), while the other do not obtain statistically significant correlations. H LIMITATIONS One limitation of the current work is that for the presented probing methods to work one needs a subset of the data either with known rewards, where ideally rewards are not too sparse, or with expert actions. If none of the two is available, our method cannot be used. For the reward probing task, the usefulness of the method also depends on the hardness of the reward prediction itself. If the prediction task is too easy, for example because there are rewards at every step, or because the states with rewards are completely different than the ones without (such that even a randomly initialized model would yield features allowing linear separation between the two types of states), then the performance of all the models on this task are going to be extremely similar, with the only differences coming from random noise. In such a case, the performance of the prediction task cannot be used to accurately rank the quality of the features of each of the models. For future work we also would like to extend the findings of this paper to more settings, for example different environments.
1. What is the focus of the paper regarding light-weight probings in RL tasks? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of variations and ablations? 3. Do you have any concerns or questions about the equations and their inputs? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions or ideas that could improve the performance or the scope of the research?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper investigates whether or not light-weight probes, for the action between consecutive states and for reward, can measure pretrained encoder performance on RL tasks. To do this, they pretrained the encoders with a self-supervised learning loss and a transition model implemented as a recurrent module or a recurrent state-space model, in several variations. They tested the pretrained encoders on linear probing and RL tasks and showed the correlations between the two. They found that linear probing for reward is highly correlated with performance on RL tasks, while the relationship between action linear probing and RL performance is weaker.

Strengths And Weaknesses

Strengths
- The motivation behind this paper is good. Applying pretrained encoders to RL tasks has been investigated [1,2] and requires lots of resources; this investigation could provide good evidence for skipping the costly evaluation.
- Many variations and ablations are evaluated. For transition modeling, deterministic recurrent modules and RSSM are validated. For SSL, BYOL and Barlow Twins are tested with various configurations. The ablation studies are reported with and without each objective, such as inverse dynamics modeling and goal-conditioned RL.

Weaknesses
- They only investigated SSL methods, not other unsupervised methods such as VAEs. I expected they would cover all such methods given their title, but they do not.
- They only evaluated nine games, which may not be enough to back up the conclusion.
- In the equation for the reward-reg loss in Reward Probing in Section 3.2, should the encoder get o_{t+1} rather than o_t? The reward r_t is given when the action is taken on the observation o_t.
- In the equation for the action-classif loss in Action Prediction in Section 3.2, shouldn't the inputs be o_t and o_{t+1}? In Figure 2, the consecutive observations are given to the loss, but not in this equation.
- In Figure 2, I cannot understand the sentence below. Why is the stacked observation related to data augmentation? "The observations consist of a stack of 4 frames, to which we apply data augmentation before passing them to a convolutional encoder. The action is represented as a 2D one-hot vector and appended to the input to the first convolutional layer." Also, why is the action represented through a 2D one-hot vector? The action space is larger than 2 dimensions.
- For the RSSM, you used discrete latents. Why didn't you try the continuous latent version [3]? Perhaps because the discrete version outperforms the continuous latent version in [4]; however, discrete latent variable training is more unstable than continuous latent variable training, so the RSSM with continuous latent variables could perhaps outperform the discrete latent version.
- In the BYOL loss equation in Section 4.2, should q(ŷ_{t+k}) be q(ê_{t+k})?
- For Algorithm 1, why did you use a pseudocode block? It is just a single equation.
- For the goal-oriented RL loss, please introduce it roughly in the main paper even though the details are in the Appendix.
- In A.7, there is a typo: "ẽt + 1".
- "Similar to the BYOL model, the Barlow model can also be improved with inverse dynamics modeling, while the addition of goal loss has a slight negative impact." This is interesting. Could you analyze this?

[1] Schwarzer, Max, et al. "Pretraining representations for data-efficient reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 12686-12699.
[2] Dittadi, Andrea, et al.
"The role of pretrained representations for the ood generalization of rl agents." arXiv preprint arXiv:2107.05686 (2021). [3] Hafner, Danijar, et al. "Dream to control: Learning behaviors by latent imagination." arXiv preprint arXiv:1912.01603 (2019). [4] Hafner, Danijar, et al. "Mastering atari with discrete world models." arXiv preprint arXiv:2010.02193 (2020). Clarity, Quality, Novelty And Reproducibility Their motivation, model design to evaluate, and evaluation are clearly written except for some minor things, such as the equation typos. Their investigation is novel and looks reproducible through the hyperparameters shared in Appendix.
ICLR
Title A novel Bayesian estimation-based word embedding model for sentiment analysis Abstract The word embedding models have achieved state-of-the-art results in a variety of natural language processing tasks. Whereas, current word embedding models mainly focus on the rich semantic meanings while are challenged by capturing the sentiment information. For this reason, we propose a novel sentiment word embedding model. In line with the working principle, the parameter estimating method is highlighted. On the task of semantic and sentiment embeddings, the parameters in the proposed model are determined by using both the maximum likelihood estimation and the Bayesian estimation. Experimental results show the proposed model significantly outperforms the baseline methods in sentiment analysis for low-frequency words and sentences. Besides, it is also effective in conventional semantic and sentiment analysis tasks. 1 Introduction Word embeddings provide continuous low-dimensional vector representations of words from documents (Li et al., 2017). Aiming to capture semantic and syntactic contextual information from large datasets, the word embedding models are extensively employed to represent words in natural language processing tasks (Levy and Goldberg, 2014). For this reason, many modelling methods are proposed to generate dense representations of words (Rath, 2017). Seeing the flourish of word embeddings, Word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) are considered as the edge-cutting approaches to deal with the word contexts. C&W is another most widespread method due to the progress in neural networks (Collobert and Weston, 2008). Besides, other algorithms are integrated into the existing models. For instance, Jameel and Schockaert proposes D-GloVe by combing GloVe and Dirichlet-Multinomial language modeling (Jameel and Schockaert, 2016). More recently, the contextualized word embedding models, which improve the accuracy to a large extent, are put forward. As such, the traditional approaches are concluded as pre-trained word embeddings. Whereas, the newly proposed methods, such as ELMo by Peters M. E. et al.(Peters et al., 2018) , BERT by Devlin J. et al.(Devlin et al., 2018) and XLNet by Yang Z et al.(Yang et al., 2019) , cost large amount of computing resource for training whilst obtain a better working performance in downstream tasks. In this way, the pre-trained word embeddings still hold a great promise in handling complicated natural language processing tasks. The aforementioned models are effective in dealing with semantic-oriented tasks. Likewise, in sentiment analysis, research is still ongoing to capture sufficient sentiment information while the sentiment embeddings typically depend on the sentiment polarity labels provided by labeled corpora to guide learning processes via objective functions (Yu et al., 2017). Tang et al. propose a method for learning sentiment embeddings by regulating the C&W model, which encodes sentiment information in the continuous word representations (Tang et al., 2015). By exploiting the prior knowledge, Li et al. incorporate the sentiment information to analyze the sentiment label of each word in target and contexts (Li et al., 2017). Maas et al. apply a semi-supervised method to get sentiment information and carry out the maximum likelihood estimation for parameter determination (Maas et al., 2011). 
Notwithstanding, the pre-trained word embeddings still have challenges in tackling sentiment analysis tasks, which are concluded as the following two aspects. On the one hand, the semantically similar word may have opposite sentiment polarities. Thus the sentiment polarity identification process has to be dedicatedly designed (Tang et al., 2015) (Li et al., 2017) (Shi et al., 2018). On the other hand, the capturing of sentiment information from low-frequency words is most pronounced. Typically, the low-frequency words can be regarded as the derivation of entity nouns, new terms and some deformation high-frequency words, which also contain significant semantic information. Nevertheless, due to the low frequency, current models are absent of processing their sentiment. The objective of this work is to devise a sentiment word embedding model. Specifically, the issue of parameter setting is deeply studied. Methods for effectively estimating the involving parameters based on Bayesian estimation and maximum likelihood estimation are proposed. For low-frequency word analysis, the Bayesian estimation is applied to determine the co-occurrence probabilities and the sentiment probabilities. This work describes current parameter estimation approaches and the model of GloVe in Section 2, illustrates our sentiment word embedding model in Section 3, shows the experiments in Section 4, and presents the research findings in Section 5. 2 Preliminary This section introduces the basic theory related to parameter estimating algorithms and the GloVe model, so as to facilitate the description of subsequent model architecture. 2.1 Parameter Estimating Algorithms Typically, word vectors are taken as learning variables in the word embeddings, which results in the use of parameter estimating algorithms. The way of establishing objective function is therefore be employed. According to (Tang et al., 2014), the objective function of the Skip-Gram model is to maximize the average log probability, which is expressed as: J = 1 T T∑ i=1 ∑ −c⩽j⩽c,j ̸=0 ln p (wi+j | ei) (1) where T is the number of words in the corpus and c indicates the size of window. We take ei as the embedding of target word wi and wi+j as the context of wi. The outcome p (wi+j |ei) is obtained via the hierarchical softmax. Similarly, the objective of GloVe refers to the maximum of likelihood probability and is defined as (Jameel et al., 2019): J = ∏ i,j N ( lnxij ;wi · w̃j + bi + b̃j , σ2 ) (2) where N ( .;µ, σ2 ) represents the normal distribution with the mean µ and the variance σ2. In GloVe, the variance is determined by each word couple (i, j). In addition to objective function constructing, the estimation algorithms are applied to compute other parameters within word embeddings. In (Maas et al., 2011), the maximum posterior probability estimation identifies the parameter to weigh the semantic information (Maas et al., 2011). In D-GloVe, Jameel and Schockaert also use the semantic information weighing parameter, whose value corresponds to the Bayesian estimating outcome (Jameel and Schockaert, 2016). 2.2 The GloVe model Basically, the GloVe model is a word-embedding method that combines evidence from the local context and the global counts. Typically, three distinguished words are used in this model, which are Wi,Wj and Wk. Both Wi and Wj are target words while Wk is the context. Let x be the matrix representing the word-word co-occurrence counts. We define the element xik as the times for word Wk appearing in the context of Wi. 
Correspondingly, xi = ∑ k xik indicates the frequency of each word occurs in the context of Wi. The cooccurrence probability of Wk being the context word of Wi is given as Pik = P (Wk|Wi) = xik/xi (3) The parameter Pik/Pjk is taken to determine the relation of Wi to Wk and Wj to Wk. For Wk has a similar relation to Wi and Wj , i.e. both relevant or irrelevant, the ratio approaches 1. The information in the ratio of co-occurrence probabilities is: F ( wTi w̃k − wTj w̃k ) = Pik/Pjk (4) where wϵRn refers to the target word vector and w̃ϵRn to the context vector. Commonly, GloVe extracts the semantic relation between different words by using the ratio of cooccurrence probabilities while the semantic information are identified via the maximum likelihood estimation (Maas et al., 2011). 3 Methodology This section depicts the architecture of the sentiment word embedding, working principle of the parameter estimating process using two different estimation algorithms. 3.1 Sentiment Word Embedding Model Architecture In sentiment analysis tasks, the sentiment information is captured during processing. Aiming to identify the sentiment polarities of different words, a word embedding model, incorporating the sentiment information, is established. Typically, we tend to characterize the proposed model by the loss function. To compute the sentiment embeddings, we define the probability of Wi being positive as Bi and negative as 1−Bi. Assuming that Wi = good and Wj = bad, the value of Bi/Bj is larger than 1, which indicates good is more positive than bad. In turn, the value (1−Bi) / (1−Bj) is less than 1 since bad shows a negative polarity. In this way, the relations of the word sentiment are expressed by the ratio of sentiment probabilities. For Bi+(1−Bi) = 1, Bi/Bj and (1−Bi) / (1−Bj) make the same sense in conveying the sentiment, we take Bi/Bj to construct the sentiment relation of Wi and Wj . More details of the words’ relation and the ratios are presented in Appendix 1. Considering the bias vector corresponds to positive sentiment polarity, we take sϵRn to indicate the bias vector to match the size of word vector. By transforming Wi and Wj into word vectors wi and wj , the difference established upon si and sj is written as: F (wTi si − wTj sj) = Bi/Bj (5) Assuming that F is confirming to the homomorphisms between groups (R,+) and (R>0,×), the semantic and sentiment information is combined to get: F (( wTi w̃k − wTj w̃k ) + ( wTi si − wTj sj )) =F ( wTi w̃k − wTj w̃k ) · F ( wTi si − wTj sj ) = Pik Pjk · Bi Bj (6) Due to properties of group homomorphism, eqn.6 is transformed into F ( wTi w̃k − wTj w̃k + wTi si − wTj sj ) =F (( wTi w̃k + w T i si ) − ( wTj w̃k + w T j sj )) = F ( wTi w̃k + w T i si ) F ( wTj w̃k + w T j sj ) = Pik Pjk · Bi Bj (7) in line with F ( wTi w̃k + w T i si ) = Pik ·Bi (8) According to eqn.7,the basic objective function F is in the form of exponential, that is F (x) = exp (x). Thus, we apply the logarithm operation to each side and have: wTi w̃k + w T i si = ln (Pik ·Bi) = lnPik + lnBi (9) By incorporating the sentiment information, the loss function of the word embedding model is defined as loss (wi, w̃k, si) = V∑ i,k=1 [ wTi w̃k + w T i si − lnPik − lnBi ]2 (10) where V indicates the size of the vocabulary. The parameters wTi , w̃k and si are computed via gradient descent algorithms. 3.2 Incorporating Sentiment Information As pointed out in the Introduction, current models use the maximum likelihood estimating algorithm for parameter determination. 
In this part, we preliminarily carry out the parameter estimation based on the maximum likelihood principle. For each target word Wi, xi times Bernoulli experiments are conducted to extract the context independently with V different outcomes in each experiment (Djuric and Huang, 2000). The occurrence number of the kth outcome and its probability are represented by xik and Pik. If the random variable Xi = (Xi1, Xi2, · · ·, XiV ) stands for the occurrence times of all the possibilities, i.e. Xik is the number for the kth one, the parameter Xi obeys the Multinomial distribution, i.e. Xi ∼ Multinomial (−→xi ,−→Pi) with −→Pi = (Pi1, Pi2, · · · , PiV ) and −→xi = (xi1, xi2, · · · , xiV ). Hence, a log-likelihood function is constructed: max Pi1,Pi2,··· ,Pik,··· ,PiV lnL(Pi1, Pi2, · · · , Pik, · · · , PiV ) = max Pi1,Pi2,··· ,Pik,··· ,PiV ln [(Pi1)xi1 · (Pi2)xi2 · · · (Pik)xik · · · (PiV )xiV ] = max Pi1,Pi2,··· ,Pik,··· ,PiV V∑ k=1 xik · lnPik s.t. V∑ k=1 Pik = 1 (11) According to eqn.11, the objective function can be resolved as an optimal problem that equality constraints. Thus, the corresponding Lagrangian function is formulated as J (Pi1, Pi2, · · · , PiV , λ) = V∑ k=1 xik · logPik + λ ( 1− V∑ k=1 Pik ) (12) where we have Pik = xikλ determined by ∂J(Pi1,Pi2,··· ,PiV ,λ) ∂Pik = xikPik − λ = 0. Likewise, λ = ∑V k=1 xik = xi is calculated with respect to ∑V k=1 Pik = ∑V k=1 ( xik λ ) = ∑V k=1 xik/λ = 1. Thus, the estimation of Pik is written as P̂ik = xik/xi (13) Notably, the obtained Pik is the same with that from GloVe according to eqn.3, which demonstrates the feasibility for parameter estimation in our model. In this way, the outcome of parameter sentiment probability can also be computed by using the maximum likelihood estimator. As such, a maximum likelihood estimation-based sentiment word embedding, namely MLESWE, is put forward. In this case, the Bernoulli experiments are applied to pick up the sentiment polarity of the target word Wi and the outcome can be either positive or negative. Since Bi is the probability of Wi being positive, we designate the distribution of Wi obeys−→ ti = (ti0, ti1) where ti0 is the number of negative texts and ti1 indicates that of the positive ones. Thus, the total number of texts including Wi is expressed as ti = ti0 + ti1. Support a random variable Ti = (Ti0, Ti1) denotes the times of all the possibilities of outcomes and Ti conforms to the binomial distribution, i.e. Ti ∼ Binomial (−→ ti , −→ Bi ) where −→Bi = (Bi, 1−Bi). The log-likelihood function of sentiment probabilities is delivered as: max Bi lnL (Bi) = max Bi ln [ (Bi) ti1 · (1−Bi)ti0 ] = max Bi [ti1 · lnBi + ti0 · ln (1−Bi)] (14) Similarly, B̂i = ti1ti is obtained based on ∂(ln L) ∂Bi = ti1Bi − ti0 1−Bi = 0. Combining the semantic and sentiment information, the final loss function based on maximum likelihood principle is loss (wi, w̃k, si) = V∑ i,k=1 [ wTi w̃k + w T i si − ln xik xi − ln ti1 ti ]2 (15) 3.3 Parameter Estimating using Bayesian Estimation The Bayesian estimating method is highlighted due to its not sensitive to initialization via proper prior distributions to parameters (Ma et al., 2018). By using the prior knowledge, the deficiency of lacking information of small datasets can be resolved, which leads to the converge to the actual value (Phoong and Ismail, 2015). Accordingly, the generalization ability of the model can be improved (Wu et al., 2018). 
The Bayesian approach, in this way, is able to present an elegant solution for automatically determining the parameters (Ferguson, 1973). We thus employ the Bayesian estimation for the parameter estimating of the proposed model. The Bayesian estimation-based sentiment word embedding, namely BESWE, is performed. In accordance to the assumption of maximum likelihood principle mentioned before, the prior distribution P (−→ Pi ) is assumed to obey the Dirichlet distribution of −→α = (αi, αi, · · · , αV ). The prior distribution is converted to P (−→ Pi ) = Dir (−→α ) = Γ( ∑ k αk)∏ k Γ(αk) ∏ k P αk−1 ik , with the identical likelihood function: P (−→x i|−→P i) = Mult(−→x i,−→P i) = xi!∏V k (xik!) · V∏ k P xikik (16) Considering the Dirichlet-Multinomial conjugate structure, the posterior distribution is P (−→ P i|−→x i ) = Dir (−→α +−→x i) = Γ ( ∑ k αk + xik)∏ k Γ (αk + xik) · ∏ k Pαk+xik−1ik (17) where αk = λ1 · nk∑ k nk , nk is the total number of occurrences of word Wk in the corpus and λ1 > 0 is determined by tuning data. By satisfying cik = EP (−→ P i|−→x i ) [lnPik] (18) we compute the Bayesian estimating outcome of lnPik in the loss function provided by eqn.10, which is also the mean value of posterior probability in line with Pik. As stated in (Jameel and Schockaert, 2016), the computation of E P (−→ P i|−→x i ) [lnPik] is facilitated via Taylor expansion: E P (−→ P i|−→x i ) [lnPik] ≈ lnEP(−→P i|−→x i) [Pik]− V ar P (−→ P i|−→x i ) [Pik] 2 · E2 P (−→ P i|−→x i ) [Pik] (19) where we have V ar P (−→ P i|−→x i ) [Pik] = αk+xik∑ k(αk+xik) · ( 1− αk+xik∑ k(αk+xik) ) · 1∑ k(αk+xik)+1 and E P (−→ P i|−→x i ) [Pik] = αk+xik∑ k(αk+xik) . Note that lnPik is estimated via Bayesian principle in eqn.18 whose form is unlike that of eqn.13. Comparing to the maximum likelihood estimation, a direct outcome is obtained without using Laplace smoothing in experiment. Comparatively, P (−→ Bi ) is designed to obey Beta distribution with the parameter −→β = (β0, β1), along with the prior distribution given as P (−→ B i ) = Beta (−→ β ) = Γ (β0 + β1) Γ (β0) Γ (β1) (1−Bi)β0−1 ·Bβ1−1i (20) from which the log-likelihood function is P (−→ t i| −→ B i ) = b (−→ t i, −→ B i ) = Cti1ti · (1−Bi) ti0 ·Bti1i (21) and the posterior distribution subject to the conjugate structure of Beta-Binomial is P (−→ B i| −→ t i ) = Beta (−→ β + −→ t i ) = Γ (β0 + ti0 + β1 + ti1) Γ (β0 + ti0) Γ (β1 + ti1) · (1−Bi)β0+ti0−1 ·Bβ1+ti1−1i (22) where mk stands for the texts of the sentiment label k, λ2 > 0 is a parameter depending on tuning data and βk = λ2 · mk∑ k mk . Thereupon, to determine the lnBi in eqn.10, we take the Bayesian estimation approach. The solution to the posterior probability expectation of lnBi, which is involved with Bi is characterized as ei = EP (−→ B i| −→ t i ) [lnBi] (23) Furthermore, the Taylor expansion is employed to update the equation: E P (−→ B i| −→ t i ) [lnBi] ≈ lnEP(−→B i|−→t i) [Bi]− V ar P (−→ B i| −→ t i ) [Bi] 2 · E2 P (−→ B i| −→ t i ) [Bi] (24) where we have V ar P (−→ B i| −→ t i ) [Bi] = β1+ti1∑ k(βk+tik) · ( 1− β1+ti1∑ k(βk+tik) ) · 1∑ k(βk+tik)+1 and E P (−→ B i| −→ t i ) [Bi] = β1+ti1∑ k(βk+tik) . Hence, the final loss function of BESWE can be obtained: loss (wi, w̃k, si) = V∑ i,k=1 [ wTi w̃k + w T i si − cik − ei ]2 (25) 4 Experiments In this section, the working performance of BESWE and MLESWE are evaluated. The task of word similarity analysis is carried out. To deliver the sentiment embeddings, both wordand sentence-level sentiment analysis using different models is conducted. 
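Before the experimental setup, here is a small NumPy sketch of the Bayesian estimates c_ik (eqns. 18-19) and e_i (eqns. 23-24) that enter the BESWE loss. The array shapes, the helper names, and the default values of λ1 and λ2 are illustrative assumptions, not the authors' implementation.

import numpy as np

def bayes_log_cooccurrence(x, n_counts, lam1=0.1):
    # x: (V, V) co-occurrence counts x_ik; n_counts: (V,) corpus frequencies n_k.
    # Returns c_ik = E[ln P_ik] under the Dirichlet posterior, via the Taylor approximation.
    alpha = lam1 * n_counts / n_counts.sum()        # prior pseudo-counts alpha_k
    post = x + alpha                                # Dirichlet posterior parameters
    total = post.sum(axis=1, keepdims=True)
    mean = post / total                             # E[P_ik]
    var = mean * (1.0 - mean) / (total + 1.0)       # Var[P_ik]
    return np.log(mean) - var / (2.0 * mean ** 2)

def bayes_log_positive(t_pos, t_neg, m_pos, m_neg, lam2=0.01):
    # t_pos, t_neg: per-word counts of positive/negative texts containing the word;
    # m_pos, m_neg: total numbers of positive/negative texts in the corpus.
    # Returns e_i = E[ln B_i] under the Beta posterior, via the Taylor approximation.
    beta1 = lam2 * m_pos / (m_pos + m_neg)
    beta0 = lam2 * m_neg / (m_pos + m_neg)
    a, b = beta1 + np.asarray(t_pos, float), beta0 + np.asarray(t_neg, float)
    mean = a / (a + b)                              # E[B_i]
    var = mean * (1.0 - mean) / (a + b + 1.0)       # Var[B_i]
    return np.log(mean) - var / (2.0 * mean ** 2)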
4.1 Experiment Settings Datasets. The dataset SST (Stanford Sentiment Tree) is employed for the mode training. There are five classes annotations within SST, which are very negative, negative, neutral, positive and very positive. Typically, we assign the value 3 and 4 to represent the positive polarity, 0 and 1 to negative and 2 to else. For our models, the word representation dimension is 50, the learning rate is 0.05 and the iteration number is 50. Besides, the loss function is optimized with the deployment of AdaGrad. Baseline Methods. We evaluate the proposed model in comparison to other state-of-theart models. The models of word embeddings, such as C&W, word2vec and GloVe, together with models of sentiment embeddings, including SE-HyRank and DLJT2, are implemented. For the baseline methods, we use default settings in the provided implementations or described as their papers and the word representation dimension is 50. Word Similarity. Computing word similarity (Levy et al., 2015) aims to capture the general meanings of words. In this research, the word similarity tasks are conducted on the dataset EN-WS-353-ALL, EN-WS-353-SIM and SCWS, which are detailed illustrated in (Jameel et al., 2019). Word-level Sentiment Analysis. We conduct word-level sentiment analysis on two sentiment lexicons, namely MPQA and NRC. The number of positive and negative items for MPQA is 2301 and 4151 while for NRC is 2231 and 3324. The N-fold cross validation with N=5 and N=10 is performed. An SVM classifier is trained whose average accuracy is the evaluation metric. Specifically, the words from SST corpus are extracted and converted into word embeddings, which are taken as the features of SVM. As Bayesian estimating principle is capable of tackling low-frequency words, we distinctively pick up the words with a frequency less than 5 for analysis. Statistically, the SST corpus contains 9984 low-frequency words. Sentence-level Sentiment Analysis. Considering the sentiment analysis for sentence, the movie review polarity datasets MovieReview is employed (Pang and Lee, 2005), which contains 10622 samples with the proportion of each polarity 1:1. We use a convolutional neural network (CNN) model, namely Text-CNN, with its online implementation (Kim, 2014). Likewise, the inputs of Text-CNN are word embeddings as well. The training episode is set as 200 epochs using the default settings. Similarly, we also pick the low-frequency words with the occupation over 10% as the lowfrequency sentences for testing. There are totally 1258 sentences cater to the demands and are all sent to Text-CNN classifier for processing. 4.2 Experimental Results Word Similarity. On the task of working performance evaluation, we first present the results for of word similarity analysis (Fig. 1). It can be observed BESWE outperforms other algorithms on all datasets, indicating that our model is capable to capture sufficient semantic information. Distinctively, the implementation of MLESWE, although not as good as BESWE, still achieves a better result on the average accuracy (i.e. Ave_Acc in Fig. 1) than the original GloVe. Yet the maximum likelihood estimating algorithm can also be applied to parameter determination of the word embeddings. Word-level Sentiment Analysis Results. The word-level sentiment analysis task is conducted on the dataset of single-word entries. The DLJT2 model outperforms other word embedding models by incorporating sentiment information into the learning processes, as shown in Fig.2. 
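For the word-level task above, a minimal scikit-learn sketch of the N-fold SVM evaluation; the SVM kernel and other hyperparameters are assumptions, since the paper does not specify them.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def word_level_accuracy(word_vectors, labels, n_folds=5):
    # word_vectors: (N, 50) embeddings of lexicon words; labels: (N,) 0/1 sentiment polarity.
    clf = SVC(kernel="linear")
    scores = cross_val_score(clf, word_vectors, labels, cv=n_folds, scoring="accuracy")
    return float(np.mean(scores))  # average accuracy over the N folds, as reported in the paper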
Compared to the state-of-the-arts, our model fails to exceed the outcome of the best method on average accuracy. Encouragingly, the BESWE model shows an even better performance in tackling the low-frequency words. Sentence-level Sentiment Analysis Results. The working performance of the proposed model is further evaluated on the sentence-level sentiment analysis task. From Fig.3, we see that SE-HyRank has a better performance than any other algorithms in average accuracy. Clearly, the outcome of BESWE is anyhow decent which is comparable with that of SEHyRank. Regarding low-frequency sentences, the minimum performance gap of over 9% against SE-HyRank is reported. Consequently, for the sentiment analysis of low-frequency words or low-frequency sentences, BESWE always obtain the best and most consistent results in the identification of sentiment polarity. Effects of λ1 and λ2. The hyperparameters in BESWE, i.e. regulatory factors λ1 and λ2, are used to represent the semantic and the sentiment information. The settings of the involving parameter can be therefore determined. The values of λ1 and λ2 are varied within the collection of {1, 0.75, 0.5, 0.25, 0.1, 0.05, 0.02, 0.01}. Firstly, the value of λ1 is set as {1, 0.75, 0.5, 0.25, 0.1, 0.05, 0.02, 0.01}. When λ2 = 1, we name BESWE as BESWE#1 and λ2 = 0.75 as BESWE#2, and so on so forth. Correspondingly, the value of λ2 is also picked from the same set and named from BESWE#9 to BESWE#16 in the same order. Totally, we get 16 different models. The results on the sentence-level sentiment analysis against different hyperparameter settings are shown in Fig.4(a) and Fig.4(c). Likewise, the results for low-frequency sentences are in Fig.4(b) and Fig.4(d). We take LowFreSentence#n to nominate the outcomes from low-frequency sentences. The sentence-level sentiment analysis reaches the highest accuracy 71.64% at the point λ1 = 0.05 and λ2 = 0.02. For the analysis of low-frequency sentences, the optimal values of λ1 and λ2 are 0.1 and 0.01, which results in an accuracy of 79.92%. The experimental results verify the effectiveness of the proposed sentiment word embedding. The BESWE model outperforms other state-of-the-art in word similarity identification. In the sentiment analysis of both word level and sentence level, our method still presents comparable outcomes. Specifically, by integrating the prior knowledge into sentiment probabilities estimating, the BESWE model is a better alternative for low-frequency-word sentiment capturing. It is reasonable to expect better performance in sentiment analysis tasks, as it is the case. 5 Conclusion In this work, the designing and deploying of the sentiment word embeddings is deeply studied. On the foundation of current word embedding models, the estimation principle of the objective function, together with other parameters, are investigated. Motivated by the significance of sentiment information, a novel word embedding model for sentiment analysis is established. Within the proposed model, both semantic and sentiment information is integrated into the word vectors. Aiming to construct the objective function, the group homomorphism theory is applied. As for the parameter determination, the maximum likelihood estimator and the Bayesian estimator are employed. Experiments are conducted on various tasks to evaluate the working performance. In comparison to the baseline models, our model is capable of tackling word similarity tasks. 
As a sentiment embedding representation, the proposed model is effective in word-level and sentence-level sentiment analysis. In particular, it outperforms the other methods in identifying the sentiment polarity of low-frequency words and sentences, demonstrating its efficacy. A Appendix I For Wi = good and Wj = bad, we have Bi/Bj > 1 and (1−Bi)/(1−Bj) < 1, i.e. Wi = good is more positive than Wj = bad. For Wi = good and Wj = great, we have Bi/Bj ≈ 1 and (1−Bi)/(1−Bj) ≈ 1, i.e. Wi = good and Wj = great both have positive polarity. For Wi = then and Wj = home, we have Bi/Bj ≈ 1 and (1−Bi)/(1−Bj) ≈ 1, i.e. Wi = then and Wj = home both have neutral polarity. The specific sentiment probabilities calculated by maximum likelihood estimation are presented in Table 1. Likewise, the outcomes based on Bayesian estimation are in Table 2.
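As a concrete illustration of the ratios discussed in Appendix I, the sentiment probability Bi of a word can be estimated by simple counting: the fraction of labeled texts containing Wi that carry a positive label (the maximum-likelihood estimate ti1/ti used in the paper). The Python sketch below assumes a corpus of (tokens, label) pairs with binary labels; the function name and input format are illustrative, not from the paper.

```python
from collections import defaultdict

def sentiment_probabilities(labeled_sentences):
    """B_i = (number of positive texts containing W_i) / (number of texts containing W_i)."""
    pos, total = defaultdict(int), defaultdict(int)
    for tokens, label in labeled_sentences:   # label is "positive" or "negative"
        for w in set(tokens):
            total[w] += 1
            if label == "positive":
                pos[w] += 1
    return {w: pos[w] / total[w] for w in total}

# With such estimates, B["good"] / B["bad"] > 1 expresses that "good" is more
# positive than "bad", while B["good"] / B["great"] stays close to 1.
```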
1. What is the main contribution of the paper regarding sentiment analysis? 2. What are the strengths of the proposed approach, particularly in terms of its extension term and Bayesian estimation? 3. What are the weaknesses of the paper, especially regarding its experimental design and limitations? 4. How does the reviewer assess the quality and reliability of the learned word embeddings and their performance comparisons? 5. Are there any suggestions or recommendations for future research directions or improvements to the current approach?
Review
Review The paper aims at extending the GloVe word embedding model so that the resulting embeddings capture sentiment (e.g. "good" is positive while "bad" is negative). The key idea is to employ an extension term to exploit the fact that some words appear in text carrying sentiment information. Furthermore, to deal with the fact that many words are infrequent, besides maximum likelihood estimation, the paper proposes to use Bayesian estimation. In the experiments, the Stanford Sentiment Treebank (SST) corpus is used. The word embeddings from the two models (each trained with a different estimation method) show their capability of expressing sentiment, compared with popular methods like GloVe and word2vec. I would accept this paper because: - This paper is well written, with thoughtful mathematical details. - The proposed models, although extensions of GloVe, give interesting (and rigorous) points on how to add sentiment information. - The experiments do support what the paper claims. I would reject it because of the experiments. The dataset (SST) is so small that the quality of the learned word embeddings and of the comparisons is questionable. I think there should be better ways, such as training word embeddings on massive data (as for GloVe and word2vec) and then fine-tuning them on a sentiment analysis dataset. Also, I was wondering whether there is a way to collect more sentiment data (as in the SE-HyRank paper).
ICLR
1. What is the main contribution of the paper, and how does it extend from D-GloVe? 2. What are the strengths and weaknesses of the proposed method, particularly in its ability to learn word embedding with sentiment information? 3. How convincing are the experimental results, and do they support the claim of learning better embeddings for rare words? 4. Are there any concerns regarding the writing style, clarity, and organization of the paper? 5. Are there any suggestions for improving the paper, such as revising the format, citations, and writing quality?
Review
Review This paper proposes a method to learn word embeddings by incorporating additional sentiment information. The proposed method extends D-GloVe by adding the probability of positive sentiment to the loss function. The paper presents three experiments: word similarity, word-level sentiment analysis, and sentence-level sentiment analysis. The experiments show that the method performs comparably with other baseline methods and outperforms them in the low-frequency sentence setting (i.e. sentences containing lower-frequency words). I recommend rejecting this paper because (1) the writing is unclear and hard to follow, and (2) the experiment results are not convincing. From what I can understand of the model part, many clarifications are needed, not to mention the writing style. I think the re-derivations of GloVe and D-GloVe are not helpful as they cloud the main contribution of the paper. The authors should clearly highlight the differences between the main subjects of the experiments: MLESWE and BESWE. In addition, it is not clearly motivated why we need a Dirichlet prior for the sentiment variable. While the claim is to learn better embeddings for rare words, the experiments show that the proposed methods have similar results to the previous work. The only gain we can observe is in the sentence-level experiments, in which other factors could affect the performance. Thus, it is hard to draw a supportive conclusion. Finally, the writing quality must be improved. The paper contains a lot of unrelated and redundant text (although it could be that I could not follow the paper). 1. I do not think eq. 2 is representative of how the paper trains the model, nor of what it attempts to compare with. 2. As mentioned earlier, in sections 3.2 and 3.3, the re-derivation is not particularly helpful. I think the paper should put more emphasis on the novelty of the work. 3. Plots in the experiment results are illegible. Tables would be more suitable than Figures 1, 2, and 3. I urge the authors to revise this paper and make sure it follows the formatting guideline, especially the citations. Finally, I'd recommend the authors have a professional English writer review the paper before submission.
ICLR
1. What is the focus of the paper, particularly regarding word embedding and sentiment information? 2. What are the strengths of the proposed approach, especially in terms of its ability to incorporate sentiment information? 3. What are the weaknesses of the paper, specifically concerning Bayesian inference and prior knowledge? 4. How does the reviewer assess the effectiveness of the Laplace approximation for posterior distribution in the proposed model? 5. How might the prior introduced into the model impact the performance of the embedding, especially in low-frequency examples?
Review
Review The paper proposes a word embedding model that incorporates sentiment information. The paper provides both maximum likelihood estimation and maximum posterior estimation for the proposed framework. Improved experimental results on word similarity and on low-frequency embeddings are presented. Overall, the paper incorporates the sentiment information in a neat way, and my main concern is around the Bayesian inference and the prior knowledge distilled into the model. Detailed comments are as follows. 1. The model employs a Laplace approximation for the posterior distribution. I am not quite sure this is a good idea for the Bernoulli case, since the Laplace approximation uses a Gaussian distribution to approximate the region around the mode. How will the MAP solution compare with a full Bayesian solution such as VB or sampling-based methods? 2. Another concern is the prior introduced into the model. Normally, prior information will be washed away as the training data grow; this is not the case for the low-frequency examples on which the model performed well. Is it possible that the improved performance on low-frequency examples is just a side effect of the bias introduced by the prior? How sensitive is the embedding's performance with respect to the selected prior?
ICLR
Title Functional Relation Field: A Model-Agnostic Framework for Multivariate Time Series Forecasting Abstract In multivariate time series forecasting, the most popular strategy for modeling the relationship between multiple time series is the construction of a graph, where each time series is represented as a node and related nodes are connected by edges, i.e. spatial-temporal graph neural networks. The graph structure is either given a priori or learned based on the similarity between nodes. However, the relationship between multiple time series is typically complicated; for instance, the sum of outflows from upstream nodes may be equal to the inflows of downstream nodes. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, a graph might only be a crude description of the dependency between nodes. To this end, we explore a new framework to model the inter-node relationship in a more precise way based on our proposed inductive bias for graphs, the Functional Relation Field, where a group of functions parameterized by neural networks are learned to characterize the dependency between multiple time series. These learned functions are versatile: they can be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; and on the other hand, the learned functions form a “field” where the nodes in the backbone prediction networks are enforced to satisfy the constraints defined by these functions. An experiment on one toy dataset shows that our approach can well recover the true constraint relationship between nodes, and two real-world datasets, MiniApp calling traffic and road networks, are also considered with various backbone networks. Results show that the prediction error can be reduced remarkably with the aid of the proposed functional relation field framework. N/A 1 INTRODUCTION Multivariate time series forecasting has surged recently due to its strong expressiveness of the spatio-temporal dependence among the data and its enormous popularity in vast application areas, such as the prediction of urban traffic, computer network flow, cloud micro-services calling flow, and rigid body motion, to name a few (Li et al., 2018; Yu et al., 2018; Bai et al., 2020; Yan et al., 2018; Liu et al., 2020). The most popular and straightforward strategy for modeling the relationship between multiple time series is the introduction of a graph, where each time series is represented as a node and related nodes are connected by edges. This particular inductive bias for multivariate time series prediction results in the so-called spatial-temporal graph neural networks (Yu et al., 2018). The graph structure is either given a priori (e.g. in traffic flow prediction, each road is a node and connected roads form the graph) or learned based on the similarity between nodes (Yu et al., 2019; Bai et al., 2020; Shang et al., 2021). However, in practice, the relationship between multiple time series is typically complicated. For instance, there often exist constraints among the nodes, ranging from the equality between the inflow and the outflow of a node in a traffic network to the geometric constraints of rigid body motion. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, a graph might not be sufficient for characterizing the dependency between nodes.
As a remedy, in this work, we explore a new framework to model the inter-node relationship in a more precise manner than graph, Functional Relation Field (FRF), where a group of functions parameterized by neural networks are learned to characterize the dependency between multiple time series explicitly. These learned functions are versatile: first they can then be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; and on the other hand, the learned functions will form a “field” where the nodes in the backbone prediction networks are further enforced to satisfy the constraints defined by these functions. As illustrated in Fig.1, the left panel shows the traditional graph neural networks assuming similar time series have edge connections, while our framework on the right panel models the dependency between nodes through a functional relationship, e.g. a linear form to enforce the constraints between the flows of target and dependent nodes. In our framework, we mainly solve the following two issues: (i) How to learn the functional field? We need to select the dependent nodes that have a relationship with the target node, and express the constraint in a functional form; (ii) How to guarantee the constraints satisfaction? The (functional) constraints relationship should be maintained in the predicted output in both training and test process. To address these issues, we propose a two-stage approach that can discover the functional relations (i.e. constraints) from data and further integrate the constraints seamlessly when forecasting the multivariate time series. Specifically, we first train a neural network with a selected target node as its output and all the other nodes as dependent variables (i.e. the input of this neural network), and identify the most relevant dependent nodes based on this trained network. We then re-train it to learn the relationship among the target and the discovered relevant nodes. Next, we incorporate these functional constraints into the network backbones by imposing them to the predicted output during both training and test process. More precisely, the output of the network could be guaranteed to satisfy the constraints by utilizing the constraint-satisfied transformation and loss minimization. We compare the proposed approach with SVM, fully connected networks, fully connected LSTM, and five backbone models (i.e., STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022), SCINet (Liu et al., 2022)). Experimental results show that our approach significantly improves the performance over the original network backbones and other baseline models. RELATED WORK Univariate time series forecasting. Recently, much research focuses on time series forecasting with deep learning models due to their powerful representational capability and prediction performance, including feed-forward neural network, RNN (Rumelhart, 1986) and its variants LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014). The transformer architecture and its variants (Vaswani et al., 2017; Simm et al., 2020; Zhou et al., 2021; Child et al., 2019; Lim et al., 2020; Li et al., 2019; Wu et al., 2021; Zhou et al., 2022) also made much progress on univariate time-series forecasting on learning long-range dependence. 
In order to model the trend and seasonality of time series in an interpretable way, the N-BEATS network (Oreshkin et al., 2020), which stacks very deep fully-connected networks with backward and forward residual links, has improved multi-horizon prediction accuracy significantly. Moreover, DeepAR (Salinas et al., 2020) and the Deep State-Space Model (DSSM) (Rangapuram et al., 2018) stack multi-layer LSTM networks to generate the parameters of one-step-ahead Gaussian predictive distributions for multi-horizon prediction. Multivariate time series forecasting. Spatio-temporal graph neural networks (Yu et al., 2018; Chen et al., 2019; Pan et al., 2021; Li et al., 2020) have been proposed to model the spatial correlation and temporal dependency in multivariate time series. Apart from capturing the temporal dependence, these methods further model the spatial dependence among all time series via graph neural networks, leveraging information from the neighboring time series to help forecast the target one. It is well known that an informative graph structure is important for graph time series forecasting. Therefore, many algorithms (Bai et al., 2020; Seo et al., 2016; Shang et al., 2021) were proposed to discover the underlying graph structure. AGCRN (Bai et al., 2020) assumed the graph structure is unknown and adopted an adaptive approach that learns embedding vectors for all nodes and then replaces the adjacency matrix in the graph convolutions with a function of the node embeddings. However, the similarity graph calculated from the learned node embeddings is a dense and continuous graph rather than a sparse and discrete one. Therefore, GTS (Shang et al., 2021) formulated graph structure learning as a probabilistic graph model and learns a discrete graph by optimizing the mean performance over the graph distribution. Different from existing multivariate time series prediction methods such as AGCRN (Bai et al., 2020) (with a fully connected graph) and STGCN (Yu et al., 2018) (with a given graph), we consider a more precise way, i.e. functional relations as constraints, to learn the connection between time series. The new inductive bias expressed by these functional relations can be applied to different backbone networks to help recover the graph structure and to act as a regularizer in both the training and test processes. 2 METHODOLOGY: FUNCTIONAL RELATION FIELD Multivariate time series forecasting. Suppose we have N time series {x_i}_{i=1}^N of length T, written compactly as X ∈ R^{N×T}. Each time series can be regarded as a node, where x_{i,t} ∈ R for each node i and time step t, and x_t ∈ R^N is the time slice of X at the t-th time step. The multi-step forecasting problem for a multivariate time series can be formulated as predicting the future M frames given the last H time slices: {ŷ_{t+1}, ..., ŷ_{t+M}} = arg max P({y_{t+1}, ..., y_{t+M}} | {x_{t−H+1}, ..., x_t}), (1) where {y_{t+1}, ..., y_{t+M}} and {ŷ_{t+1}, ..., ŷ_{t+M}} represent the true and predicted values at the future time steps and M is the number of future steps. Note that here we use y to denote the output so as to differentiate it from the input x. Forecasting with functional relations. In many real-world scenarios, the relationship between multiple time series is typically complicated, and a graph might not be sufficient for modelling their dependency, particularly in cases where the values of the multivariate time series at each time step are subject to intrinsic constraints.
Existing methods have not incorporated these constraints into their models. In this work, we intend to show that models that account for constraints (expressed as functional relationships) are superior to those without constraints in terms of prediction performance. As an example, suppose that the flow in a computer network satisfies homogeneous linear constraints; at each time step t, the following linear constraints hold for the slice x_t: A x_t = 0, ∀t, (2) where A ∈ R^{M×N} is a matrix that is constant across time. In other more complex cases, the constraints can be non-homogeneous, non-linear, or even intertemporal. Here, we concentrate on time-invariant constraints that are not intertemporal. As such, the constraints can be described by a set of m functions f, i.e. the functional relation field, f = (f_1, f_2, ..., f_m), with f_i(x_t) = 0, ∀i, ∀t. (3) Based on the constraints defined above, we consider the following constrained multivariate time series prediction problem, {ŷ_{t+1}, ..., ŷ_{t+M}} = arg max P({y_{t+1}, ..., y_{t+M}} | {x_{t−H+1}, ..., x_t}), s.t. f_i(ŷ_{t+τ}) = 0, 1 ≤ τ ≤ M, 1 ≤ i ≤ m. (4) However, in most real-world scenarios, neither the functional form f nor the specific variables involved in the constraints are given, and one of our objectives is to extract such information from the data and solve problem (4). We now elaborate the functional relation field for multivariate time series prediction in the following. The schematic diagram of the proposed framework is depicted in Figure 2, which consists of two parts. The first part, displayed in Figure 2(a), shows how we learn the functional relations, i.e. the constraints between nodes. Assuming that the constraints are unknown, we aim to find the constrained nodes and the specific functional form of these constraints. Figure 2: The schematic diagram of the functional relation field framework. The two subfigures denote the two stages: (a) the training data is employed to discover the nodes in each constraint function and these functions are expressed by a constraint network; (b) the learned constraints are incorporated in the backbone models (cf. Section 2.2) in three complementary ways so as to improve the forecasting performance. The constraint function in this paper is approximated by a neural network, named the functional relation network or constraint network. After training the functional relation network, we can identify the most relevant neighbors and produce a more informative graph structure. Then we can proceed to integrate the learned constraints into the backbone graph neural networks for multivariate time series prediction, as shown in Figure 2(b). We enforce these constraints on the output of the spatio-temporal graph neural networks during both training and test phases. For the outputs of the networks, we add a constraint-satisfied transformation layer during the inference process such that the outputs strictly satisfy the constraints.
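As a concrete illustration of the linear constraint A x_t = 0 in Eq. (2), the toy example below encodes a single flow-conservation relation; the matrix A and the values are made up for illustration and are not taken from any of the datasets used later.

```python
import numpy as np

# Three time series: two inflows (x0, x1) and one outflow (x2) of a node,
# so conservation reads x0 + x1 - x2 = 0, i.e. A = [[1, 1, -1]].
A = np.array([[1.0, 1.0, -1.0]])

x_t = np.array([3.0, 2.0, 5.0])     # satisfies the constraint
x_bad = np.array([3.0, 2.0, 4.2])   # violates it

print(A @ x_t)    # [0.]   -> constraint holds
print(A @ x_bad)  # [0.8]  -> constraint violated by 0.8
```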
Altogether, we refer to the proposed framework as functional relation field-enhanced spatio-temporal graph networks (FRF-STG). It is model-agnostic and can be applied to different backbone graph networks. In the following, we describe the two stages, namely learning the functional relation network and applying the constraints induced by the functional relations between nodes, in more detail. 2.1 LEARNING THE FUNCTIONAL RELATION NETWORK We start by discussing the first question: how to learn the unknown constraints (i.e. the functional relations) from the multivariate time series data? As demonstrated in Figure 2(a), we assume that there exists a constraint for each node. We first discover the relevant nodes involved in these constraints and then express the constraint functions via neural networks. Identifying constrained nodes and their relevant nodes. Here we consider a simplified case where the functional relation between nodes can be formulated as x_{t,i} = g_i(x_{t,\i}), ∀t, (5) i.e. for each target node i, we use a constraint network g_i to approximate the functional relation, taking all the remaining (N − 1) nodes as input. We then train the constraint network to predict the value of the i-th node with the loss function L_{pred,(i)} = ‖x̂_{t,i} − x_{t,i}‖^2, (6) where x̂_{t,i} and x_{t,i} represent the estimated and observed values of node i at time step t. Second, a threshold err is set, and we treat x_i as a constrained node if both the training and validation errors are smaller than err. Otherwise, x_i is unpredictable from the other nodes, indicating that it has only weak dependency on them. Then, to identify the set of most relevant nodes N_i for target node i, we introduce the sensitivity of the output of the trained constraint network to an input change, measured by the absolute value of the partial derivative: δ_{i,j} = |∂g_i/∂x_{t,j}|, j ≠ i. (7) We calculate the average gradient over the training and validation sets for each node j. Then, we specify another threshold grad and consider node j as a most relevant node of target i if δ_{i,j} is larger than grad. Besides, if the cardinality of N_i is larger than the scale threshold J, we further shrink N_i by only keeping the top-J nodes with the largest δ_{i,j}. Retraining the functional relation network. Since we filter out the irrelevant nodes for the discovered constrained node x_i, it is necessary to re-train the constraint network using the relevant nodes in N_i as inputs, denoted as x_{t,N_i} = {x_{t,j} | j ∈ N_i}: x̂_{t,i} = g̃_i(x_{t,N_i}). (8) Regarding the architecture of the functional relation network g̃_i, we adopt a simple attention-based structure for each node i, described as follows: α_{t,i} = Softmax(MLP_i(x_{t,N_i})), x̂_{t,i} = α_{t,i}^T x_{t,N_i}, (9) where α_{t,i} is the attention weight vector generated from the relevant nodes x_{t,N_i}, and x̂_{t,i} is the target value reconstructed from the constraint nodes. Other alternatives for designing the functional relation network are also possible. 2.2 APPLYING THE CONSTRAINTS The constraints learned by the functional relation network are versatile. A naive usage is to construct a meaningful graph structure by drawing edges between each identified target node and its dependent nodes. Secondly, we propose to incorporate the learned constraints into the backbone prediction network in both the training and test processes through constraint-satisfaction loss minimization and constraint-satisfaction transformation, respectively. Both are used to guarantee that the constraints are maintained in the outputs of the backbone network.
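A minimal PyTorch sketch of the attention-based constraint network g̃_i of Eq. (9), trained with the reconstruction loss of Eq. (6), is given below; the hidden size, optimizer settings and the random data are assumptions for illustration, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class ConstraintNet(nn.Module):
    """Reconstruct a target node from its relevant neighbors x_{t, N_i}."""
    def __init__(self, num_neighbors: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_neighbors, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_neighbors),
        )

    def forward(self, x_neighbors: torch.Tensor) -> torch.Tensor:
        # x_neighbors: (batch, |N_i|); attention weights over the neighbors.
        alpha = torch.softmax(self.mlp(x_neighbors), dim=-1)
        # x_hat_i = alpha^T x_{t, N_i}: a convex combination of neighbor values.
        return (alpha * x_neighbors).sum(dim=-1)

# Training against the observed target values with the L2 loss of Eq. (6).
g_i = ConstraintNet(num_neighbors=4)
opt = torch.optim.Adam(g_i.parameters(), lr=1e-3)
x_nbr, x_i = torch.randn(64, 4), torch.randn(64)   # placeholder data
opt.zero_grad()
loss = ((g_i(x_nbr) - x_i) ** 2).mean()
loss.backward()
opt.step()
```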
Constraint satisfaction in the training phase. We expect the output of the backbone network, ŷ = {ŷ_{t+1}, ŷ_{t+2}, ..., ŷ_{t+M}}, to satisfy the learned constraints that reveal the underlying structure of the multivariate time series. A straightforward yet effective way of implementing the constraint satisfaction is loss minimization over the functional relation network based on the output of the backbone prediction network, L_{FRF}(ŷ) = Σ_{i=1}^{N} Σ_{τ=1}^{M} ‖ŷ_{t+τ,i} − g̃_i({ŷ_{t+τ,j}}, j ∈ N_i)‖_2^2. (10) Therefore, the overall loss function for training the backbone prediction network includes two terms, L_{total} = L(ŷ, y) + λ L_{FRF}(ŷ), (11) where λ is a tradeoff coefficient balancing the supervised term and constraint satisfaction. Constraint satisfaction in the testing phase. Furthermore, although the constraints are fully utilized during training, there is no guarantee that the constraints hold for the outputs during the inference process. Therefore, it is necessary to perform a constraint-satisfaction transformation on the outputs of the prediction networks. Let us first consider the linear constraint A x_t = 0, ∀t. Suppose that ŷ = {ŷ_{t+1}, ŷ_{t+2}, ..., ŷ_{t+M}} and y = {y_{t+1}, y_{t+2}, ..., y_{t+M}} denote the predicted output of the backbone network and the ground truth, respectively. To make the output ŷ_{t+τ} satisfy the linear constraint, we can project the predicted output onto the hyperplane A x = 0 as ỹ_{t+τ} with a closed-form solution, ỹ_{t+τ} = ŷ_{t+τ} − A^T (A A^T)^{−1} A ŷ_{t+τ}. (12) On the other hand, for a non-linear constraint set f(y) = (f_1(y), ..., f_m(y))^T = 0, where each constraint f_i(y) = 0 represents y_i − g̃_i(y_{N_i}) = 0, there is no analytical solution, but we can solve an optimization problem with nonlinear equality constraints, i.e. find the nearest projection point on the surface f(y) = 0 given the reference point ŷ_{t+τ} for τ = 1, ..., M: min_{ỹ_{t+τ}} ‖ỹ_{t+τ} − ŷ_{t+τ}‖_2^2, s.t. f(ỹ_{t+τ}) = 0. (13) A simple approximate method for solving this equality-constrained quadratic program is to conduct iterative projections. Denote J = ∂f/∂x as the Jacobian matrix. Assuming ŷ_{t+τ} ≈ ỹ_{t+τ}, i.e. close to the surface f(x) = 0, we derive the first-order Taylor expansion of f(x) at ŷ_{t+τ} as f(x) ≈ f(ŷ_{t+τ}) + J^T · (x − ŷ_{t+τ}). (14) Setting f(x) to zero with x = ỹ_{t+τ} yields ỹ_{t+τ} = ŷ_{t+τ} − J (J^T J)^{−1} f(ŷ_{t+τ}). (15) We can then repeat the above transformation several times (e.g. K = 10 projections in our experiments) until the constraints are well satisfied, as judged by whether F(x) = Σ_{j=1}^{m} |f_j(x)| is small enough. 2.3 FUNCTIONAL RELATION FIELD-ENHANCED SPATIO-TEMPORAL GRAPH NETWORKS In this part, we integrate the proposed functional relation field framework into five representative backbone models, STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022) and SCINet (Liu et al., 2022), to boost their prediction performance, referred to as FRF-STGCN, FRF-AGCRN, FRF-Autoformer, FRF-FEDformer and FRF-SCINet, respectively. In the first stage, we learn the functional relation network, based on which the most relevant nodes can be identified, and the resultant graph structure can be used by the five backbone networks. In the second stage, we enforce the learned constraints in the training and inference processes, as described in Figure 2. Since different backbone networks have their own specific designs, we need to adapt FRF to these backbones.
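The iterative constraint-satisfaction transformation of Eqs. (13)-(15) can be sketched as follows; the functions f and jac stand in for the learned functional relation network and its Jacobian, and the one-constraint toy usage is purely illustrative (this is not the released implementation).

```python
import numpy as np

def project_onto_constraints(y_hat, f, jac, num_iters=10, tol=1e-6):
    """Gauss-Newton-style projection of y_hat onto the surface f(y) = 0."""
    y = y_hat.copy()
    for _ in range(num_iters):
        r = f(y)                        # constraint residuals, shape (m,)
        if np.sum(np.abs(r)) < tol:     # F(y) = sum_j |f_j(y)| small enough
            break
        J = jac(y)                      # Jacobian, shape (m, N)
        # Linearized step y <- y - J^T (J J^T)^{-1} f(y), equivalent to Eq. (15)
        # up to the transposed Jacobian convention used in the paper.
        y = y - J.T @ np.linalg.solve(J @ J.T, r)
    return y

# Toy usage: one constraint y0 + y1 - y2 = 0 on a 3-dimensional prediction.
f = lambda y: np.array([y[0] + y[1] - y[2]])
jac = lambda y: np.array([[1.0, 1.0, -1.0]])
print(project_onto_constraints(np.array([3.0, 2.0, 4.2]), f, jac))
```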
For the constraint satisfaction of the output, in AGCRN and SCINet the networks produce the prediction results at all future time steps in one batch, and therefore the constraint-satisfied transformation of Eq. (15) is applied K times to the prediction at each time step separately. For STGCN, we apply the above transformation sequentially: we transform the prediction at the current future time step and then feed the transformed prediction back to STGCN to produce the prediction at the next time step, repeating this procedure until the multi-step forecasting task is finished. Algorithm 1 (Training and inference of the functional relation field). Input: trained functional relation networks f, hyper-parameters λ and K. Output: constraint-satisfied output ỹ_{t+τ}. Training phase: repeat (1) forward the backbone network on the training data to get ŷ_{t+τ}; (2) back-propagate the loss L_total of Eq. (11), which includes the constraint-satisfaction loss, and run Adam; until the stopping criterion is met. Inference phase: forward the trained backbone network on the test data to obtain ŷ_{t+τ}; then, for k = 1, ..., K, update ỹ_{t+τ} by Eq. (15) (constraint-satisfaction transformation). 3 EXPERIMENT In this section, we conduct experiments on five datasets, including one synthetic graph dataset, two real-world MiniApp calling flow datasets and two traffic flow datasets, to demonstrate the effectiveness of FRF in learning the underlying relationship between nodes and boosting the prediction performance of the backbone networks. The code for reproducibility is attached in the Supplementary Materials. The baseline models. We first compare our framework with two traditional forecasting models, Historical Average (HA) and Support Vector Regression (SVR). Then, we also conduct experiments on two classical univariate time series prediction models, a Feed-Forward Neural Network (FNN) and a Fully-Connected LSTM (FC-LSTM (Sutskever et al., 2014)). We select the widely used graph time series models STGCN (Yu et al., 2018) and AGCRN (Bai et al., 2020), the transformer-based univariate time series forecasting models Autoformer (Wu et al., 2021) and FEDformer (Zhou et al., 2022), and another state-of-the-art univariate prediction model, SCINet (Liu et al., 2022), as our backbone networks. We refer the readers to the supplementary materials for the detailed experimental settings. 3.1 DATASETS AND SETTINGS Binary tree dataset. We first generate an artificial graph time series dataset. The graph structure for this dataset is a complete binary tree with 255 nodes. For each leaf node i, its value is a noisy sinusoidal wave across time, x_{i,t} = n_{i,t} A_i sin(2πt/T_i + φ), where n_{i,t} ∼ U(0.95, 1.05). We sort all leaf nodes from left to right in increasing order of their periods. For a non-leaf node p, we denote its left and right children as l and r, and set the value of node p to be the geometric mean of its two children, x_{p,t} = sqrt(x_{l,t} · x_{r,t}). We sample one point every 5 minutes, so there are 288 points per day. We generate the data for 40 days, including 30 days for training (i.e., 30 × 288 = 8640 time points), 5 days for validation, and 5 days for testing. We intentionally design this dataset because it has a true graph structure between the different time series and the constraints between nodes are explicit, and thus it is a suitable testbed for comparing FRF-enhanced models against those without FRF.
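The binary tree dataset described above can be reproduced roughly as in the sketch below; the amplitudes, the period range and the positive offset (added so that the geometric mean stays well-defined) are our assumptions, since the paper does not specify them.

```python
import numpy as np

def binary_tree_series(depth=7, steps=288 * 40, seed=0):
    rng = np.random.default_rng(seed)
    num_nodes = 2 ** (depth + 1) - 1          # complete binary tree: 255 nodes
    num_leaves = 2 ** depth                   # 128 leaves on the last level
    X = np.zeros((num_nodes, steps))
    t = np.arange(steps)
    # Leaves: noisy sinusoids, sorted by increasing period from left to right.
    periods = np.linspace(50, 500, num_leaves)          # assumed period range
    for k in range(num_leaves):
        i = num_nodes - num_leaves + k                   # heap index of the leaf
        noise = rng.uniform(0.95, 1.05, size=steps)
        # Offset of +2.0 keeps values positive (our assumption) so the
        # geometric mean of children below is well-defined.
        X[i] = noise * (1.0 * np.sin(2 * np.pi * t / periods[k]) + 2.0)
    # Internal nodes: geometric mean of the two children, built bottom-up.
    for p in range(num_nodes - num_leaves - 1, -1, -1):
        l, r = 2 * p + 1, 2 * p + 2
        X[p] = np.sqrt(X[l] * X[r])
    return X

X = binary_tree_series()
print(X.shape)   # (255, 11520): 40 days x 288 points per day
```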
In the experiments, for the backbones with FRF, we assume the constraints are unknown and learn them using the proposed method of Section 2.1. MiniApp calling flow datasets 1 and 2. These two datasets are real-world flow data from two popular online payment MiniApps, attached in the Supplementary Materials. For the two MiniApps, there are N = 30 and N = 23 filtered pages, respectively, linking to each other in the calling process, which produces visiting request flow from one page to another and constitutes graphs with N = 30 and 23 nodes. We aggregate the flow into an averaged value every 5 minutes for each node, so there are 288 points per day. For the first MiniApp, we collect 21 days of data, including 15 days for training, 3 days for validation, and 3 days for testing. For the second one, 24 days of data are collected, including 18 days for training, 3 days for validation, and 3 days for testing. PEMSD4 and PEMSD8 traffic datasets. These benchmark datasets are popular for multivariate time series prediction: the first describes the traffic speed in the San Francisco Bay Area with 307 sensors on 29 roads (https://paperswithcode.com/dataset/pemsd4), and the other consists of 170 detectors on 8 roads in the San Bernardino area (https://paperswithcode.com/dataset/pemsd8). Settings of the constraint network and hyper-parameters. For the architecture of the constraint network, we compare a 4-layer MLP and a self-attention network, and the results show the latter is more effective. We measure the constraint relationship with MAPE, where a large MAPE indicates that the time-invariant constraint is weak. Specifically, the MAPEs for the BinaryTree, MiniApp1, MiniApp2, PEMSD4 and PEMSD8 datasets are 0.10, 0.008, 0.01, 0.02 and 0.07, respectively. A larger MAPE means a weaker constraint relationship; therefore the proposed FRF model is applicable to a backbone network only when the MAPE of the constraint network is small. In addition, we only tune the parameters of FRF while keeping the other hyper-parameter settings the same as those of the backbone networks. 3.2 RESULTS Overall performance. Table 1 summarizes the performance of all the compared models on the five datasets, including the proposed FRF approach coupled with STGCN, AGCRN, Autoformer, FEDformer and SCINet, denoted as FRF-STGCN, FRF-AGCRN, FRF-Autoformer, FRF-FEDformer and FRF-SCINet, respectively. For the binary tree dataset, we predict the future 12 time steps and evaluate the performance in terms of three metrics (MAE, RMSE, MAPE). Since the underlying true constraints are known, we report the experimental results of our models with both true and learned constraints, denoted as “T” and “L”. We can observe that deep learning-based models typically outperform the traditional ones, as expected. Furthermore, the proposed functional relation field can further improve the performance of the original backbone models. Regardless of the differences between the two backbone networks, FRF consistently improves the prediction accuracy for both of them, indicating that the FRF framework could potentially be applied to a wide variety of backbones. For the two MiniApp datasets, we omit the metric MAPE since the scale of the data changes so dramatically across time that MAPE fails to characterize the performance of different models. Due to the error accumulation problem of multi-step prediction in STGCN, the performance of this model pales in comparison with its non-iterative counterpart. As a result, we only report the results of the non-iterative version of STGCN.
Since the underlying true constraint relationship between nodes is not available, we only report the FRF with learned constraints. We can easily observe that augmentation with the proposed FRF consistently boosts the performance of the five backbone networks. Specifically, FRF improves STGCN by 36.3% and 6.9% on the two datasets, and improves AGCRN by 14.6% and 7.0%, respectively. For the traffic datasets PEMSD4 and PEMSD8, one particular reason we choose SCINet as a baseline is that its reported results achieve state-of-the-art prediction performance on this task. We can observe that even relying on such a strong baseline, the FRF framework can still improve its performance with margins of 0.6% and 0.3% on the two datasets, respectively. For the other backbones, we again see that FRF further improves the prediction performance, showing the effectiveness of FRF as a model-agnostic framework. Learning the relationship between nodes. We further test whether FRF can discover the underlying true constraints between nodes. First, we investigate whether we can reliably estimate the target node given the values of the constraint nodes. To be exact, we compute x̂_{t,i} = g̃({x_{t,N_i}}) and compare x̂_{t,i} with x_{t,i} in terms of MAPE. For the test data of the synthetic binary tree, the resulting MAPE is 0.399%. Note that the MAPE of AGCRN or STGCN reported in Table 1 is around 4% without considering the constraints. Therefore, using the learned constraints can well regularize the predictions given by the original network backbones as well as further improve the forecasting performance. On the other hand, we compare the performance of the proposed algorithm when using the true and the estimated constraints, showing the results in Table 1. We can observe that the performance based on the true and the estimated constraints is almost the same, indicating that the constraints are accurately learned. Additionally, we visualize the learned constraints by connecting each constrained node with its most relevant neighbors as a graph, shown in Figure 4. The structure of the binary tree is well recovered, although some extra edges are involved. Hyperparameter Sensitivity. The FRF-enhanced model introduces three additional kinds of hyperparameters: the validation error threshold err, the loss tradeoff coefficient λ and the number of output transformations K. Therefore, we conduct hyper-parameter sensitivity experiments on the binary tree dataset using the AGCRN backbone, as shown in Fig 3. We can observe that the performance slightly improves when err increases because more constraints are discovered, while the performance decreases for large err because of the introduced noise; the FRF-enhanced model even performs worse than the backbone network when err = 5.0. Consistently, the FRF-enhanced model performs better when λ = 0.1 and worse than the backbone with large λ. For K, a larger K improves the backbone more significantly than a smaller K because iterating more times solves the non-linear constraint optimization problem more accurately. Ablation Study. We first conduct an ablation study on the constraint graph learned from the constraint network, using STGCN as the backbone network, in Table 3. We can observe that the constraint graph performs better than the explicit graph extracted from prior knowledge on both the traffic and the MiniApp datasets.
In addition, for backbone networks without an explicit graph structure, such as AGCRN and SCINet, we investigate the effectiveness of constraint-satisfaction loss minimization and constraint-satisfaction transformation, as shown in Table 4, finding that both components contribute to the forecasting performance. Specifically, for the backbone network AGCRN, which achieves state-of-the-art performance on the binary tree dataset, FRF enhances the backbone by 1.95% in the training phase and by 9.0% in the inference phase, while the combination of the two components improves the performance by 10.16% in total. 4 CONCLUSION In this paper, we have proposed to enhance multivariate time series forecasting with a new, model-agnostic inductive bias, the functional relation field (FRF). FRF can discover the intrinsic graph structure, as well as improve flow forecasting performance by applying the constraint functional relationship to the output in the training and testing phases. The constraints learned by FRF can be incorporated into existing backbone networks, consistently improving the prediction performance. Experimental results show that the proposed FRF framework can reliably learn the constraints from the time series data and restore the graph structure. Moreover, these constraints in turn help improve the prediction accuracy by a notable margin, regardless of the diversity of the network architectures in the different backbone models. We expect that this FRF inductive bias could potentially be employed in other multivariate settings beyond time series scenarios. A PERFORMANCES ON MORE BACKBONES GTS (Shang et al., 2021). The discrete graph structure learning model learns a graph structure among multiple time series and forecasts them simultaneously with a GNN. There are two differences between GTS and our proposed FRF. On the one hand, GTS performs prediction under the GNN paradigm, which is model-specific, while FRF is model-agnostic and applies the functional field to the forecasting loss optimization. On the other hand, existing studies including AGCRN and GTS construct the graph based on time-series similarity, while FRF is, to our knowledge, the first to exploit the constraint functional relation to enhance multivariate time-series forecasting. We conduct experiments on the Binary tree, MiniApp1 and MiniApp2 datasets using the open-source code (https://github.com/chaoshangcs/GTS.git), shown in Table 5, demonstrating that FRF can also improve the forecasting performance of GTS. The code of FRF-GTS and the running log are released in the supplementary material. NRI (Kipf et al., 2018). The neural relational inference (NRI) model is an unsupervised model that learns to infer interactions and to forecast with an LSTM. We conduct experiments on the Binary tree, MiniApp1 and MiniApp2 datasets using the open-source code (https://github.com/ethanfetaya/NRI.git). The results of the NRI network in Table 5 show that there is a large margin to the SOTA backbone AGCRN (Bai et al., 2020). B EXPERIMENTAL SETTINGS The error threshold. For the binary tree dataset and the MiniApp calling flow datasets, which have strong constraint relationships, we set err = 0.01 to filter the constraint nodes. However, for the traffic datasets PEMSD4 and PEMSD8, which have relatively weak constraints, we set err = 0.025 to achieve the best performance. The hyper-parameter sensitivity experiments of err on the PEMSD4 and PEMSD8 datasets are shown in Fig 5. The function relation graph. Note that for the real datasets, the graph structure is not given in advance.
In order to use STGCN, we adopt Gaussian copula graphical models (Liu et al., 2009; Yu et al., 2020) to learn the graph structure from the data, and take the learned graph as the benchmark graph. For the FRF-enhanced backbone network STGCN (Yu et al., 2018), we replace the fixed graph structure with the learned constraint graph and then achieve better performance. As the results in Table 3 show, the constraint graph performs better than the graph learned with the copula graphical model. Besides, for the univariate backbones SCINet, Autoformer and FEDformer, which take no time-series relationship into consideration, as well as the graph model AGCRN, which is optimized with dynamically learned node embeddings and ignores the original graph, we do not exploit the constraint relation at the graph construction stage; the functional relation is only applied in the training stage and as output constraints. The setting of J. For the binary tree dataset, we set J = 4 to recover the functional relation shown in Fig 4. We set J = 6 for the two MiniApp calling flow datasets. For the traffic datasets PEMSD4 with 307 nodes and PEMSD8 with 170 nodes, we achieve the best performance when J = 30. The detailed settings of λ and K. In the training stage, we only tune the trade-off coefficient λ and the number of iterations K, while keeping all other parameters the same as the SOTA settings in the benchmark. The detailed settings are shown in Table 6. C VISUALIZATION OF LEARNED FUNCTION RELATION The flow visualization of different relations. We show a comparison of the learned functional relation and the original relation on the MiniApp1 dataset in Figure 6. Note that the original relation of the MiniApp is learned by Gaussian copula graphical models (Liu et al., 2009; Yu et al., 2020). We can observe that the flow of the target node has the same pattern and scale as its relevant node under the learned function, while it has a different scale under the original graph. The results demonstrate that the learned function is more effective at capturing the flow relationship. D DISCUSSION ON HYPERPARAMETERS AND COMPUTATIONAL COMPLEXITY Hyper-parameters. There are three newly-introduced hyper-parameters: the error threshold err, the trade-off coefficient λ and the number of iterations K. The err and λ can be easily chosen based on the validation loss, and a larger K can be used to obtain a more accurate optimization and achieve better performance, so there is a balance between performance gain and computation. We typically set K = 10, which works well for all the tasks we have considered. Computational complexity. On the one hand, the computational complexity of training the forecasting network increases because of the K iterations of output constraint satisfaction. K is usually set to a small number such as 5 or 10, which is computationally cheap, and the main time-consuming operations come from the forward and backward propagation of the backbones rather than the output constraint. On the other hand, we need to train the constraint network for all time series. Fortunately, the constraint network is a simple two-layer attention network, which only has a small number of parameters yet is effective enough to capture the complex functional relation. For example, in the MiniApp1 task, each constraint network only has around 3,000 parameters and the training time is on the scale of seconds. Thus, we believe training a constraint network is very fast and does not require much computational resource. The small size of the constraint networks also makes them amenable to large-scale multivariate time series.
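As a summary of how the thresholds err, grad and J from Appendix B interact, the helper below builds the mapping from constrained nodes to their relevant neighbors. It is a schematic reconstruction, not the released code, and the default value of grad is arbitrary since the paper does not report it.

```python
import numpy as np

def build_constraint_graph(val_err, delta, err=0.01, grad=0.05, J=6):
    """Return {constrained node i: list of its <= J most relevant neighbors}.

    val_err[i]  : validation error of the constraint network for node i.
    delta[i, j] : averaged gradient sensitivity of Eq. (7).
    """
    neighbors = {}
    for i, e in enumerate(val_err):
        if e >= err:                     # node i is not (strongly) constrained
            continue
        scores = np.asarray(delta[i], dtype=float).copy()
        scores[i] = -np.inf              # never pick the node itself
        cand = np.where(scores > grad)[0]
        cand = cand[np.argsort(-scores[cand])][:J]   # keep the top-J by |gradient|
        neighbors[i] = cand.tolist()
    return neighbors
```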
1. What is the focus of the paper regarding multivariate time series forecasting? 2. What are the strengths of the proposed approach, particularly in its technical soundness and novelty? 3. What are the weaknesses of the paper, especially regarding its comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper is about multivariate time series forecasting with structure learning. Existing works usually assume the graph structure of the multiple time series is given or learned from node similarity. However, in some applications, the relationship between time series can be much more complicated and the graph structure is not enough. This paper proposes to use a functional relation field to model the inter-node relationship. Experiments on one synthetic and two real-world datasets show that the proposed method can enhance existing spatial-temporal forecasting models. Strengths And Weaknesses Strength: The proposed method is novel. There are no existing papers that consider the complicated constraint relationships among nodes. The proposed method is technically sound. Empirical performance is promising. Weakness: The paper did not compare with some baselines that perform joint forecasting and structure learning, e.g., GTS [1] and NRI [2]. [1] DISCRETE GRAPH STRUCTURE LEARNING FOR FORECASTING MULTIPLE TIME SERIES, ICLR 2021 [2] Neural Relational Inference for Interacting Systems. ICML 2018 Clarity, Quality, Novelty And Reproducibility The paper is written well and provides source code in the supplementary for reproducibility.
ICLR
Title Functional Relation Field: A Model-Agnostic Framework for Multivariate Time Series Forecasting Abstract In multivariate time series forecasting, the most popular strategy for modeling the relationship between multiple time series is the construction of graph, where each time series is represented as a node and related nodes are connected by edges, i.e. spatial-temporal graph neural networks. The graph structure is either given apriori or learned based the similarity between nodes. However, the relationship between multiple time series is typically complicated, for instance, the sum of outflows from upstream nodes may be equal to the inflows of downstream nodes. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, graph might only be a crude description on the dependency between nodes. To this end, we explore a new framework to model the inter-node relationship in a more precise way based our proposed inductive bias for graphs, Functional Relation Field, where a group of functions parameterized by neural networks are learned to characterize the dependency between multiple time series. These learned functions are versatile: they can then be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; and on the other hand, the learned functions will form a “field” where the nodes in the backbone prediction networks are enforced to satisfy the constraints defined by these functions. The experiment is conducted on one toy dataset to show our approach can well recover the true constraint relationship between nodes. And two real-world MiniApp calling traffic and road networks datasets are also considered with various different backbone networks. Results show that the prediction error can be reduced remarkably with the aid of the proposed functional relation field framework. N/A 1 INTRODUCTION Multivariate time series forecasting has surged recently due to its strong expressiveness of the spatio-temporal dependence among the data and its enormous popularity in vast application areas, such as the prediction of urban traffic, computer network flow, cloud micro-services calling flow, and rigid body motion, to name a few (Li et al., 2018; Yu et al., 2018; Bai et al., 2020; Yan et al., 2018; Liu et al., 2020). The most popular and straightforward strategy for modeling the relationship between multiple time series is the introduction of graph, where each time series is represented as a node and related nodes are connected by edges. This particular inductive bias for multivariate time series prediction results in the so called spatial-temporal graph neural networks (Yu et al., 2018). The graph structure is either given apriori (e.g. in traffic flow prediction, each road as a node has connected roads forming the graph.) or learned based the similarity between nodes (Yu et al., 2019; Bai et al., 2020; Shang et al., 2021). However, in practice, the relationship between multiple time series is typically complicated. For instance, there often exist constraints among the nodes, ranging from the equality between the inflow and the outflow for a node in a traffic network to the geometric constraints of the rigid body motion. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, graph might not be sufficient for characterizing the dependency between nodes. 
As a remedy, in this work, we explore a new framework to model the inter-node relationship in a more precise manner than graph, Functional Relation Field (FRF), where a group of functions parameterized by neural networks are learned to characterize the dependency between multiple time series explicitly. These learned functions are versatile: first they can then be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; and on the other hand, the learned functions will form a “field” where the nodes in the backbone prediction networks are further enforced to satisfy the constraints defined by these functions. As illustrated in Fig.1, the left panel shows the traditional graph neural networks assuming similar time series have edge connections, while our framework on the right panel models the dependency between nodes through a functional relationship, e.g. a linear form to enforce the constraints between the flows of target and dependent nodes. In our framework, we mainly solve the following two issues: (i) How to learn the functional field? We need to select the dependent nodes that have a relationship with the target node, and express the constraint in a functional form; (ii) How to guarantee the constraints satisfaction? The (functional) constraints relationship should be maintained in the predicted output in both training and test process. To address these issues, we propose a two-stage approach that can discover the functional relations (i.e. constraints) from data and further integrate the constraints seamlessly when forecasting the multivariate time series. Specifically, we first train a neural network with a selected target node as its output and all the other nodes as dependent variables (i.e. the input of this neural network), and identify the most relevant dependent nodes based on this trained network. We then re-train it to learn the relationship among the target and the discovered relevant nodes. Next, we incorporate these functional constraints into the network backbones by imposing them to the predicted output during both training and test process. More precisely, the output of the network could be guaranteed to satisfy the constraints by utilizing the constraint-satisfied transformation and loss minimization. We compare the proposed approach with SVM, fully connected networks, fully connected LSTM, and five backbone models (i.e., STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022), SCINet (Liu et al., 2022)). Experimental results show that our approach significantly improves the performance over the original network backbones and other baseline models. RELATED WORK Univariate time series forecasting. Recently, much research focuses on time series forecasting with deep learning models due to their powerful representational capability and prediction performance, including feed-forward neural network, RNN (Rumelhart, 1986) and its variants LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014). The transformer architecture and its variants (Vaswani et al., 2017; Simm et al., 2020; Zhou et al., 2021; Child et al., 2019; Lim et al., 2020; Li et al., 2019; Wu et al., 2021; Zhou et al., 2022) also made much progress on univariate time-series forecasting on learning long-range dependence. 
In order to model the trend and seasonality of time series in an interpretable way, N-beats (Oreshkin et al., 2020) network that stacked very deep fullconnection network based on backward and forward residual links has improved the multi-horizon prediction accuracy significantly. Moreover, DeepAR (Salinas et al., 2020) and Deep State-Space Model (DSSM) (Rangapuram et al., 2018) stack multi-layer LSTM network to generate parameters of one-step-ahead Gaussian predictive distributions for multi-horizon prediction. Multivariate time series forecasting. Spatio-temporal graph neural networks (Yu et al., 2018; Chen et al., 2019; Pan et al., 2021; Li et al., 2020) have been proposed to model the spatial correlation and temporal dependency in multivariate time-series. Apart from capturing the temporal dependence, these methods further model the spatial dependence among all time series via graph neural networks, leveraging the information from the neighboring time series to help forecasting the target one. It is well known that an informative graph structure is important to the graph time series forecasting. Therefore, many algorithms (Bai et al., 2020; Seo et al., 2016; Shang et al., 2021) were proposed to discovery the underlying graph structure. AGCRN (Bai et al., 2020) assumed the graph structure is unknown and adopted an adaptive approach to learn the embedding vectors for all nodes, and then replaced the adjacency matrix in graph convolutions with a function of the node embeddings. However, the similarity graph calculated with the learned node embedding is a dense and continuous graph instead of a sparse and discrete graph. Therefore, GTS (Shang et al., 2021) formulated the graph structure learning problem as a probabilistic graph model to learn the discrete graph through optimizing the mean performance over the graph distribution. Different from the existing multivariate time series prediction methods, AGCRN (Bai et al., 2020) (with a fully connected graph) and STGCN (Yu et al., 2018) (with a given graph), we consider a more precise way, i.e. functional relations as constraints, to learn the connection between time series. The new inductive bias expressed by these functional relations can be applied to different backbone networks to help recover the graph structure and act as regularization in both training and test process. 2 METHODOLOGY: FUNCTIONAL RELATION FIELD Multivariate time series forecasting. Suppose we have N time series {xi}Ni=1 with length T , written compactly as X ∈ RN×T . Each time series can be denoted as a node, where xi,t ∈ R for each node i and time step t. xt ∈ RN is the time slice of X at the t-th time step. The multi-step forecasting problem of a multivariate time series can be formulated as predicting the future M frames of the multivariates given the last H time slices: {ŷt+1, ..., ŷt+M} = argmax P ({yt+1, ..., yt+M}|{xt−H+1, ..., xt}), (1) where {yt+1, · · · , yt+M} and {ŷt+1, · · · , ŷt+M} represent the true and predicted values at the future time steps, M is the number of future steps. Note that here we use y to denote the output so as to differentiate it from the input x. Forecasting with functional relations. In many real-world scenarios, the relationship between multiple time series is typically complicate, graph might not be sufficient for modelling their dependency, particularly for the cases values of multivariate time series at each time step are subject to some intrinsic constraints. 
Existing methods have not incorporated these constraints into their models. In this work, we intend to show that models that account for constraints (expressed as functional relationships) are superior to those without constraints in terms of prediction performance. As an example, suppose that the flow in a computer network satisfies homogeneous linear constraints; at each time step t, the following linear constraints hold for the slice x_t: A x_t = 0, ∀t, (2) where A ∈ R^{M×N} is a matrix that is constant across time. In other more complex cases, the constraints can be non-homogeneous, non-linear, or even intertemporal. Here, we concentrate on time-invariant constraints that are not intertemporal. As such, the constraints can be described by a set of m functions f, i.e. the functional relation field, f = (f_1, f_2, ..., f_m), with f_i(x_t) = 0, ∀i, ∀t. (3) Based on the constraints defined above, we consider the following constrained multivariate time series prediction problem, {ŷ_{t+1}, ..., ŷ_{t+M}} = arg max P({y_{t+1}, ..., y_{t+M}} | {x_{t−H+1}, ..., x_t}), s.t. f_i(ŷ_{t+τ}) = 0, 1 ≤ τ ≤ M, 1 ≤ i ≤ m. (4) However, in most real-world scenarios, neither the functional form f nor the specific variables involved in the constraints are given, and one of our objectives is to extract such information from the data and solve problem (4). We now elaborate the functional relation field for multivariate time series prediction in the following. The schematic diagram of the proposed framework is depicted in Figure 2, which consists of two parts. The first part, displayed in Figure 2(a), shows how we learn the functional relations, i.e. the constraints between nodes. Assuming that the constraints are unknown, we aim to find the constrained nodes and the specific functional form of these constraints. Figure 2: The schematic diagram of the functional relation field framework. The two subfigures denote the two stages: (a) the training data is employed to discover the nodes in each constraint function and these functions are expressed by a constraint network; (b) the learned constraints are incorporated in the backbone models (cf. Section 2.2) in three complementary ways so as to improve the forecasting performance. The constraint function in this paper is approximated by a neural network, named the functional relation network or constraint network. After training the functional relation network, we can identify the most relevant neighbors and produce a more informative graph structure. Then we can proceed to integrate the learned constraints into the backbone graph neural networks for multivariate time series prediction, as shown in Figure 2(b). We enforce these constraints on the output of the spatio-temporal graph neural networks during both training and test phases. For the outputs of the networks, we add a constraint-satisfied transformation layer during the inference process such that the outputs strictly satisfy the constraints.
Altogether, we refer to the proposed framework as functional relation field-enhanced spatio-temporal graph networks (FRF-STG). It is model-agnostic and can be applied to different backbone graph networks. In the following, we will describe the two stages including learning functional relation network and how to apply the constraints induced by the functional relation between nodes in more details. 2.1 LEARNING THE FUNCTIONAL RELATION NETWORK We start with discussing the first question: how to learn the unknown constraints (i.e. the functional relations) from the multivariate time series data? As demonstrated in Figure 2(a), we assume that there exists a constraint for each node. We first discover the relevant nodes involved in these constraints and then express the constraint functions via neural networks. Identifying constrained nodes and their relevant nodes. Here we consider a simplified case where the functional relation between nodes can be formulated as: xt,i = gi(xt,\i),∀t (5) i.e. for each target node i, we use a constraint network gi to approximate the function relation taking all the remaining (N − 1) nodes as input. We then train the constraint network to predict the value of the i-th node with the loss function : Lpred,(i) = ‖x̂t,i − xt,i‖2 (6) where x̂t,i and xt,i represent the estimated and observed values of node i at time step t. Second, a threshold err is set, and treat xi as a constrained node if both the training and validation error are smaller than err. Otherwise, xi is unpredictable with the other nodes, indicating it has weak dependency with other nodes. Then, to identify the most relevant nodes set Ni for target node i, we introduce the sensitivity of input change to the output for the trained constraint network, measured by the absolute value of the partial derivative: δi,j = ∣∣∣∣ ∂g∂xt,j ∣∣∣∣ , j 6= i (7) We calculate the average gradients over the training and the validation set for node j. Then, we specify another threshold grad here and consider the node j as the most relevant node of target i if δi,j is larger than grad. Besides, if the cardinality of Ni is larger than the scale threshold J , we further shrink Ni by only keeping the top-J nodes with the largest δi,j . Retraining the functional relation network. Since we filter out the irrelevant nodes for the discovered constrained node xi, it is necessary to re-train the constraint network using the relevant nodes in Ni as inputs, denoted as xt,Ni = {xt,ij |j ∈ Ni}, x̂t,i = g̃i(xt,Ni). (8) Regarding the architecture of the functional relation network g̃i, we adopt a simple attention-based structure for each node i, described as follows. αt,i = Softmax(MLP i(xt,Ni)), x̂t,i = α T t,ixt,Ni , (9) where αt,i is the attention weight vector generated from the relevant nodes xt,Ni , and x̂t,i is the reconstructed input with the constraint nodes. Others alternatives for designing the functional relation network is also possible. 2.2 APPLYING THE CONSTRAINTS The constraints learned by the functional relation network are versatile. A naive usage is to construct meaningful graph structure by drawing edges between the identified target and its dependent nodes. Secondly, we propose to incorporate the learned constraints into the backbone prediction network in both training and test process through constraint-satisfaction loss minimization and constraintsatisfaction transformation, respectively. Both of them are used to guarantee that the constraints are maintained in the outputs of the backbone network. 
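As a concrete illustration of the sensitivity score δ_{i,j} in Eq. (7) above, the following autograd sketch averages the absolute partial derivatives of a (stand-in) constraint network over a batch of time slices; the toy function at the end is only meant to show that an irrelevant input receives a near-zero score.

```python
import torch

def sensitivity_scores(g_i, x_rest):
    """x_rest: (batch, N-1) slice values of all nodes except target node i."""
    x = x_rest.clone().requires_grad_(True)
    g_i(x).sum().backward()            # per-sample gradients accumulate in x.grad
    return x.grad.abs().mean(dim=0)    # delta_{i,j}: mean |d g_i / d x_{t,j}|

# Toy stand-in constraint function: depends on inputs 0 and 1, not on input 2.
g_i = lambda x: 2 * x[:, 0] + x[:, 1] ** 2 + 0 * x[:, 2]
print(sensitivity_scores(g_i, torch.randn(1000, 3)))
# roughly [2.0, ~1.6, 0.0]: the last input is correctly flagged as irrelevant
```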
Constraint satisfaction in the training phase. We expect the output of the backbone network, ŷ = {ŷ_{t+1}, ŷ_{t+2}, ..., ŷ_{t+M}}, to satisfy the learned constraints that reveal the underlying structure of the multivariate time series. A straightforward yet effective way of implementing constraint satisfaction is loss minimization over the functional relation network applied to the output of the backbone prediction network: L_FRF(ŷ) = Σ_{i=1}^{N} Σ_{τ=1}^{M} ‖ŷ_{t+τ,i} − g̃_i({ŷ_{t+τ,j}, j ∈ N_i})‖_2^2. (10) Therefore, the overall loss function for training the backbone prediction network includes two terms, L_total = L(ŷ, y) + λ L_FRF(ŷ), (11) where λ is a trade-off coefficient balancing the supervised term and constraint satisfaction. Constraint satisfaction in the testing phase. Although the constraints are fully utilized during training, there is no guarantee that they hold for the outputs during the inference process. Therefore, it is necessary to perform a constraint-satisfaction transformation on the outputs of the prediction network. Let us first consider the linear constraint A x_t = 0, ∀t. Suppose that ŷ = {ŷ_{t+1}, ŷ_{t+2}, ..., ŷ_{t+M}} and y = {y_{t+1}, y_{t+2}, ..., y_{t+M}} denote the predicted output of the backbone network and the ground truth, respectively. To make the output ŷ_{t+τ} satisfy the linear constraint, we can project the predicted output onto the hyperplane A x = 0 as ỹ_{t+τ}, with the closed-form solution ỹ_{t+τ} = ŷ_{t+τ} − A^T (A A^T)^{−1} A ŷ_{t+τ}. (12) On the other hand, for the non-linear constraint set f(y) = (f_1(y), ..., f_m(y))^T = 0, where each constraint f_i(y) = 0 represents y_i − g̃_i(y_{N_i}) = 0, there is no analytical solution, but we can solve an optimization problem with nonlinear equality constraints, i.e. finding the nearest projection point on the surface f(y) = 0 given the reference point ŷ_{t+τ} for τ = 1, ..., M: min_{ỹ_{t+τ}} ‖ỹ_{t+τ} − ŷ_{t+τ}‖_2^2, s.t. f(ỹ_{t+τ}) = 0. (13) A simple approximate method for solving this equality-constrained quadratic program is to conduct iterative projections. Denote J = ∂f/∂x as the Jacobian matrix and assume ŷ_{t+τ} ≈ ỹ_{t+τ}, i.e., ŷ_{t+τ} is close to the surface f(x) = 0. The first-order Taylor expansion of f(x) at ŷ_{t+τ} is f(x) ≈ f(ŷ_{t+τ}) + J^T · (x − ŷ_{t+τ}). (14) Equating f(x) to zero with x = ỹ_{t+τ} yields ỹ_{t+τ} = ŷ_{t+τ} − J (J^T J)^{−1} f(ŷ_{t+τ}). (15) We can repeat the above transformation several times (e.g. K = 10 projections are used in our experiments) until the constraints are well satisfied, as checked by whether F(x) = Σ_{j=1}^{m} |f_j(x)| is small enough. 2.3 FUNCTIONAL RELATION FIELD-ENHANCED SPATIO-TEMPORAL GRAPH NETWORKS In this part, we integrate the proposed functional relation field framework into five representative backbone models, STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022) and SCINet (Liu et al., 2022), to boost their prediction performance, referred to as FRF-STGCN, FRF-AGCRN, FRF-Autoformer, FRF-FEDformer and FRF-SCINet, respectively. In the first stage, we learn the functional relation network, based on which the most relevant nodes can be identified, and the resultant graph structure can be used by the five backbone networks. In the second stage, we enforce the learned constraints in the training and inference processes, as described in Figure 2. Since different backbone networks have their own specific designs, we need to adapt FRF to these backbones.
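Before describing the per-backbone adaptations, the PyTorch sketch below summarizes the constraint-application machinery of Section 2.2: the auxiliary loss of Eqs. (10)-(11) and the two constraint-satisfaction transformations of Eqs. (12) and (15). It is a minimal illustration under the assumption that the backbone returns predictions of shape (batch, M, N) and that the g̃_i are pre-trained and frozen; the helper names and the toy constraint in the demo are ours, not the authors'.

```python
import torch

def frf_loss(y_hat, g_nets, neighbor_sets):
    """Eq. (10): sum_i sum_tau || y_hat[..., i] - g_i(y_hat[..., N_i]) ||^2 (g_i frozen)."""
    total = 0.0
    for i, (g_i, N_i) in enumerate(zip(g_nets, neighbor_sets)):
        total = total + ((y_hat[..., i] - g_i(y_hat[..., N_i])) ** 2).sum()
    return total

def train_step(backbone, optimizer, x, y, g_nets, neighbor_sets, lam=0.1):
    """Eq. (11): minimize L(y_hat, y) + lambda * L_FRF(y_hat)."""
    optimizer.zero_grad()
    y_hat = backbone(x)                                     # (batch, M, N)
    loss = torch.nn.functional.mse_loss(y_hat, y) + lam * frf_loss(y_hat, g_nets, neighbor_sets)
    loss.backward()
    optimizer.step()
    return float(loss)

def project_linear(y_hat, A):
    """Eq. (12): closed-form projection of a prediction vector onto {y : A y = 0}."""
    return y_hat - A.T @ torch.linalg.solve(A @ A.T, A @ y_hat)

def project_nonlinear(y_hat, f, n_iters=10):
    """Eq. (15): iterate y <- y - J (J^T J)^{-1} f(y), with J = df/dy of shape (N, m)."""
    y = y_hat.clone()
    for _ in range(n_iters):
        J = torch.autograd.functional.jacobian(f, y).T      # (N, m)
        y = y - J @ torch.linalg.solve(J.T @ J, f(y))
    return y

# toy checks on a single N = 3 prediction vector
A = torch.tensor([[1.0, -1.0, -1.0]])                       # linear constraint y0 = y1 + y2
print(A @ project_linear(torch.tensor([3.2, 1.0, 1.9]), A))             # ~0

f = lambda y: torch.stack([y[0] - y[1] * y[2]])              # hypothetical nonlinear constraint
print(f(project_nonlinear(torch.tensor([2.5, 1.4, 1.6]), f)))           # ~0
```

As in the text, the iterative projection re-linearizes f at the current point each pass, so a modest K (e.g. 10) is enough to drive the residual F(x) close to zero.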
For the constraint satisfaction of the output, in AGCRN and SCINet the networks produce the prediction results at all future time steps in one pass; therefore, the constraint-satisfaction transformation is applied to the prediction at each time step for K iterations, as described in Eq. (15). For STGCN, we apply the above transformation sequentially to each future time step, obtain the transformed predictions, and then feed these predictions back into STGCN to produce the predictions at the next time step. We repeat this procedure until the multi-step forecasting task is finished.
Algorithm 1: Training and inference of the functional relation field
Input: trained functional relation networks f, hyper-parameters λ and K.
Output: constraint-satisfied output ỹ_{t+τ}.
// Training phase
repeat
  1. Forward the backbone network on the training data to get ŷ_{t+τ}.
  2. Back-propagate the loss L_total in Eq. (11) and run Adam. // constraint-satisfaction loss
until the stopping criterion is met.
// Inference phase
Forward the trained backbone network on the test data to obtain ŷ_{t+τ}.
for k = 1, ..., K do
  Update ỹ_{t+τ} by Eq. (15). // constraint-satisfaction transformation
end
3 EXPERIMENT In this section, we conduct experiments on five datasets, including one synthetic graph dataset, two real-world MiniApp calling flow datasets and two traffic flow datasets, to demonstrate the effectiveness of FRF in learning the underlying relationships between nodes and boosting the prediction performance of the backbone networks. The code for reproducibility is attached in the Supplementary Materials. The baseline models. We first compare our framework with two traditional forecasting models, Historical Average (HA) and Support Vector Regression (SVR). We also conduct experiments on two classical univariate time series prediction models, a Feed-Forward Neural Network (FNN) and a Fully-Connected LSTM (FC-LSTM (Sutskever et al., 2014)). We select the widely used graph time series models STGCN (Yu et al., 2018) and AGCRN (Bai et al., 2020), the transformer-based univariate time series forecasting models Autoformer (Wu et al., 2021) and FEDformer (Zhou et al., 2022), and another state-of-the-art univariate prediction model, SCINet (Liu et al., 2022), as our backbone networks. We refer the readers to the supplementary materials for the detailed experimental settings. 3.1 DATASETS AND SETTINGS Binary tree dataset. We first generate an artificial graph time series dataset. The graph structure for this dataset is a complete binary tree with 255 nodes. For each leaf node i, its value is a noisy sinusoidal wave across time, x_{i,t} = n_{i,t} A_i sin(2πt/T_i + φ), where n_{i,t} ∼ U(0.95, 1.05). We sort all leaf nodes from left to right in increasing order of their periods. For a non-leaf node p, we denote its left and right children as l and r, and set the value of node p to be the geometric mean of its two children, x_{p,t} = sqrt(x_{l,t} · x_{r,t}). We sample one point every 5 minutes, so there are 288 points per day. We generate data for 40 days, including 30 days for training (i.e., 30 × 288 = 8640 time points), 5 days for validation, and 5 days for testing. We intentionally design this dataset because the true graph structure between the time series and the constraints between nodes are explicit, making it a suitable testbed for demonstrating the advantage of FRF-enhanced models over those without FRF.
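To give a sense of what this synthetic data looks like, here is a small NumPy reconstruction of the binary-tree dataset described above. The amplitudes, periods, and phase are not stated exactly in the text, so the values below are illustrative guesses; we also take the absolute value inside the square root so that the geometric mean of the (possibly negative) sinusoidal leaves stays real, a detail the description leaves open.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, steps = 255, 40 * 288                    # complete binary tree, 40 days x 288 points
t = np.arange(steps)

x = np.zeros((n_nodes, steps))
leaves = list(range(n_nodes // 2, n_nodes))       # heap layout: nodes 127..254 are the 128 leaves
periods = np.sort(rng.uniform(50, 600, size=len(leaves)))    # sorted so periods increase left to right
for k, i in enumerate(leaves):
    noise = rng.uniform(0.95, 1.05, size=steps)               # n_{i,t} ~ U(0.95, 1.05)
    x[i] = noise * 2.0 * np.sin(2 * np.pi * t / periods[k])   # A_i = 2, phase = 0 (assumed)

for p in reversed(range(n_nodes // 2)):           # parent = geometric mean of its two children
    x[p] = np.sqrt(np.abs(x[2 * p + 1] * x[2 * p + 2]))

train, val, test = x[:, :30 * 288], x[:, 30 * 288:35 * 288], x[:, 35 * 288:]
print(train.shape, val.shape, test.shape)         # (255, 8640) (255, 1440) (255, 1440)
```

Every internal node is thus an exact, deterministic function of its children, which is precisely the kind of functional relation the constraint network is expected to recover.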
In the experiments, for the backbones with FRF, we assume the constraints are unknown and learn them using the method proposed in Section 2.1. MiniApp calling flow datasets 1 and 2. These two datasets are real-world flow data from two popular online payment MiniApps, attached in the Supplementary Materials. For the two MiniApps, there are N = 30 and N = 23 filtered pages, respectively, linking to each other in the calling process; the visiting request flow from one page to another constitutes a graph with N = 30 and N = 23 nodes. We aggregate the flow into 5-minute averages for each node, so there are 288 points per day. For the first MiniApp, we collect 21 days of data, including 15 days for training, 3 days for validation, and 3 days for testing. For the second one, 24 days of data are collected, including 18 days for training, 3 days for validation, and 3 days for testing. PEMSD4 and PEMSD8 traffic datasets. These benchmark datasets are popular for multivariate time series prediction. PEMSD4 describes the traffic speed in the San Francisco Bay Area with 307 sensors on 29 roads (https://paperswithcode.com/dataset/pemsd4); PEMSD8 consists of 170 detectors on 8 roads in the San Bernardino area (https://paperswithcode.com/dataset/pemsd8). Settings of the constraint network and hyper-parameters. For the architecture of the constraint network, we compare a 4-layer MLP and a self-attention network, and the results show that the latter is more effective. We measure the strength of the constraint relationship with MAPE, where a large MAPE indicates that the time-invariant constraint is weak. Specifically, the MAPEs for the BinaryTree, MiniApp1, MiniApp2, PEMSD4 and PEMSD8 datasets are 0.10, 0.008, 0.01, 0.02 and 0.07, respectively. A larger MAPE means a weaker constraint relationship; therefore, the proposed FRF model is applicable to a backbone network only when the MAPE of the constraint network is small. In addition, we only tune the parameters of FRF while keeping the other hyper-parameter settings the same as those of the backbone networks. 3.2 RESULTS Overall performance. Table 1 summarizes the performance of all compared models on the five datasets, including the proposed FRF approach coupled with STGCN, AGCRN, Autoformer, FEDformer and SCINet, denoted as FRF-STGCN, FRF-AGCRN, FRF-Autoformer, FRF-FEDformer and FRF-SCINet, respectively. For the binary tree dataset, we predict the future 12 time steps and evaluate the performance in terms of three metrics (MAE, RMSE, MAPE). Since the underlying true constraints are known, we report the experimental results of our models with both true and learned constraints, denoted as "T" and "L". We observe that deep learning-based models typically outperform the traditional ones, as expected. Furthermore, the proposed functional relation field further improves the performance of the original backbone models. Regardless of the differences between the two backbone networks, FRF consistently improves the prediction accuracy for both of them, indicating that the FRF framework could potentially be applied to a wide variety of backbones. For the two MiniApp datasets, we omit the metric MAPE since the scale of the data changes dramatically across time, such that MAPE fails to characterize the performance of different models. Due to the error accumulation problem of multi-step prediction in STGCN, the performance of this model pales in comparison with its non-iterative counterpart. As a result, we only report the results of the non-iterative version of STGCN.
Since the underlying true constraint relationships between nodes are not available, we only report FRF with learned constraints. We can easily observe that augmenting with the proposed FRF consistently boosts the performance of the five backbone networks. Specifically, FRF improves STGCN by 36.3% and 6.9% on the two datasets, and also improves AGCRN by 14.6% and 7.0%, respectively. For the traffic datasets PEMSD4 and PEMSD8, one particular reason we choose SCINet as a baseline is that its reported results achieve state-of-the-art prediction performance on this task. We observe that even on such a strong baseline, the FRF framework can still improve the performance by margins of 0.6% and 0.3% on the two datasets, respectively. For the other backbones, we again see that FRF further improves the prediction performance, showing the effectiveness of FRF as a model-agnostic framework. Learning the relationships between nodes. We further test whether FRF can discover the underlying true constraints between nodes. First, we investigate whether we can reliably estimate the target node given the values of its constraint nodes. To be exact, we compute x̂_{t,i} = g̃_i(x_{t,N_i}) and compare x̂_{t,i} with x_{t,i} in terms of MAPE. For the test data of the synthetic binary tree, the resulting MAPE is 0.399%. Note that the MAPE of AGCRN or STGCN reported in Table 1 is around 4% without considering the constraints. Therefore, using the learned constraints can well regularize the predictions given by the original network backbones and further improve the forecasting performance. On the other hand, we compare the performance of the proposed algorithm when using the true and the estimated constraints, with results shown in Table 1. We observe that the performance with the true and the estimated constraints is almost the same, indicating that the constraints are learned accurately. Additionally, we visualize the learned constraints by connecting each constrained node with its most relevant neighbors as a graph, shown in Figure 4. The structure of the binary tree is well recovered, although some extra edges are involved. Hyper-parameter sensitivity. The FRF-enhanced model introduces three additional kinds of hyper-parameters: the validation error threshold err, the loss trade-off coefficient λ, and the number of output transformations K. We therefore conduct hyper-parameter sensitivity experiments on the binary tree dataset using the AGCRN backbone, as shown in Fig. 3. We observe that the performance improves slightly as err increases, because more constraints are discovered, while it decreases for large err because of the noise introduced; the FRF-enhanced model even performs worse than the backbone network when err = 5.0. Consistently, the FRF-enhanced model performs better with λ = 0.1 and worse than the backbone with large λ. For K, a larger K improves the backbone more significantly than a smaller one, because iterating more times solves the non-linear constraint optimization problem more accurately. Ablation study. We first conduct an ablation study on the constraint graph learned from the constraint network, using STGCN as the backbone network, in Table 3. We observe that the constraint graph performs better than the explicit graph extracted from prior knowledge on both the traffic and MiniApp datasets.
In addition, for backbone networks without an explicit graph structure, such as AGCRN and SCINet, we investigate the effectiveness of constraint-satisfaction loss minimization and constraint-satisfaction transformation, as shown in Table 4, finding that both components contribute to the forecasting performance. Specifically, for the backbone network AGCRN, which achieves state-of-the-art performance on the binary tree dataset, FRF enhances the backbone by 1.95% in the training phase and by 9.0% in the inference phase, while the combination of the two components improves the performance by 10.16% in total. 4 CONCLUSION In this paper, we have proposed to enhance multivariate time series forecasting with a new, model-agnostic inductive bias, the functional relation field (FRF). FRF can discover the intrinsic graph structure and improve flow forecasting performance by applying the constraint functional relationships to the output in the training and testing phases. The constraints learned by FRF can be incorporated into existing backbone networks, consistently improving their prediction performance. Experimental results show that the proposed FRF framework can reliably learn the constraints from the time series data and restore the graph structure. Moreover, these constraints in turn help improve the prediction accuracy by a notable margin, regardless of the diversity of the network architectures in the different backbone models. We expect that this FRF inductive bias could potentially be employed in other multivariate settings beyond time series scenarios. A PERFORMANCES ON MORE BACKBONES GTS (Shang et al., 2021). This discrete graph structure learning model learns a graph structure among multiple time series and forecasts them simultaneously with a GNN. There are two differences between GTS and our proposed FRF. On the one hand, GTS performs prediction under the GNN paradigm, which is model-specific, while FRF is model-agnostic, applying the functional relation field to the forecasting loss optimization. On the other hand, existing studies including AGCRN and GTS construct the graph based on time-series similarity, while FRF is the first to exploit constraint functional relations to enhance multivariate time series forecasting. We conduct experiments on the Binary tree, MiniApp1 and MiniApp2 datasets using the open-source code (https://github.com/chaoshangcs/GTS.git), with results shown in Table 5, demonstrating that FRF can also improve the forecasting performance of GTS. The code of FRF-GTS and the running log are released in the supplementary material. NRI (Kipf et al., 2018). The neural relational inference (NRI) model is an unsupervised model that learns to infer interactions and forecasts with an LSTM. We conduct experiments on the Binary tree, MiniApp1 and MiniApp2 datasets using the open-source code (https://github.com/ethanfetaya/NRI.git). The results for the NRI network in Table 5 show that it lags behind the SOTA backbone AGCRN (Bai et al., 2020) by a large margin. B EXPERIMENTAL SETTINGS The error threshold. For the binary tree dataset and the MiniApp calling flow datasets, which have strong constraint relationships, we set err = 0.01 to filter the constrained nodes. However, for the traffic datasets PEMSD4 and PEMSD8, with relatively weak constraints, we set err = 0.025 to achieve the best performance. The hyper-parameter sensitivity experiments for err on the PEMSD4 and PEMSD8 datasets are shown in Fig. 5. The functional relation graph. Note that for the real datasets, the graph structure is not given in advance.
In order to use STGCN, we adopt Gaussian copula graphical models (Liu et al., 2009; Yu et al., 2020) to learn the graph structure from the data, and take the learned graph as the benchmark graph. For the FRF-enhanced backbone network STGCN (Yu et al., 2018), we replace the fixed graph structure with the learned constraint graph and achieve better performance. As shown in Table 3, the constraint graph performs better than the graph learned with the copula graphical model. Besides, for the univariate backbones SCINet, Autoformer and FEDformer, which take no inter-series relationship into consideration, as well as for the graph model AGCRN, which dynamically optimizes learned node embeddings and ignores the original graph, we do not exploit the constraint relations at the graph construction stage; the functional relations are only applied through the training-stage loss and the output constraints. The setting of J. For the binary tree dataset, we set J = 4 to recover the functional relation shown in Fig. 4. We set J = 6 for the two MiniApp calling flow datasets. For the traffic datasets PEMSD4 with 307 nodes and PEMSD8 with 170 nodes, we achieve the best performance when J = 30. The detailed settings of λ and K. In the training stage, we only tune the trade-off coefficient λ and the number of iterations K, while keeping all other parameters the same as the SOTA settings in the benchmarks. The detailed settings are shown in Table 6. C VISUALIZATION OF THE LEARNED FUNCTIONAL RELATION The flow visualization of different relations. We show a comparison of the learned functional relation and the original relation on the MiniApp1 dataset in Table 6. Note that the original relation of the MiniApp is learned by Gaussian copula graphical models (Liu et al., 2009; Yu et al., 2020). We observe that the flow of the target node has the same pattern and scale as its relevant nodes under the learned functional relation, while the scales differ under the original graph. These results demonstrate that the learned functional relation is more effective at capturing the flow relationship. D DISCUSSION ON HYPER-PARAMETERS AND COMPUTATIONAL COMPLEXITY Hyper-parameters. There are three newly introduced hyper-parameters: the error threshold err, the trade-off coefficient λ, and the number of iterations K. err and λ can be chosen easily based on the validation loss, and a larger K can be used to obtain a more accurate optimization and better performance, so there is a balance between performance gain and computation. We typically set K = 10, which works well for all the tasks we have considered. Computational complexity. On the one hand, the computational complexity of training the forecasting network increases because of the K iterations of output constraint satisfaction. K is usually set to a small number such as 5 or 10, which is computationally cheap, and the main time-consuming operations come from the forward and backward propagation of the backbones rather than from the output constraint. On the other hand, we need to train a constraint network for every time series. Fortunately, the constraint network is a simple two-layer attention network, which has only a small number of parameters but is effective enough to capture the complex functional relations. For example, in the MiniApp1 task, each constraint network has only around 3,000 parameters and its training time is on the scale of seconds. Thus, training the constraint networks is very fast and does not require many computational resources, and their small size makes the approach amenable to large-scale multivariate time series.
1. What is the focus of the paper regarding time series forecasting? 2. What are the strengths and weaknesses of the proposed functional relation field framework? 3. Do you have any concerns regarding the presented constraint in the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions for improving the presentation and evaluation of the proposed method?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper focuses on the problem of time series forecasting with constraints. The proposed functional relation field framework is aimed at learning constraints from multi-variate time series data. Then, the authors develop the training and inference method incorporating the learned constraints. The proposed method is evaluated on both synthetic and real datasets. Strengths And Weaknesses Strength: This paper explores the idea of capturing hidden inter-variate constraints/relations in multivariate time series and imposing the discovered constraints on the training and inference process. In the experiments, the proposed framework is applied to baselines to compare the performance difference. This is a valid idea. Weakness: The problem of identifying inter-variate constraints/relations and applying them in forecasting is not new, and several works have studied it, e.g., [1, 2]. The technical contribution in this paper looks marginal, given that the technique used in this paper is standard and no news insights seem uncovered. Meanwhile, the presentation, some design choices, and evaluation have the following issues. (a) Eq.(1) seems to formulate the time series forecasting from the perspective of the probability model, while the rest of the paper follows the standard point forecast paradigm. In the probability model, the forecast is not necessarily the mode and could be mean, quantile, or interval. Eq.(1) seems disconnected from the problem of this paper. (b) Eq.(4)-(6) present the "constraint", which is not rigorous w.r.t. the concept of constraint. The presented constraint is essentially closer to the concept of correlation or relation, since it is simply derived from how well the other variables fit the target variable. This is highly data or observation-dependent. Moreover, it needs a threshold to determine the set of relevant variables. For multi-variables in different value domains or distributions, finding proper thresholds seems nontrivial and would affect the overall performance. This way of identifying "constraint" seems ad-hoc and arbitrary. (c) From Eq.(10), the constraint discovered from X seems to be applied to Y. It is a bit confusing to present this way. (d) Eq.13 - Eq.15 seems problematic. In Eq.(13), the minimization and the constraint are mutually exclusive, i.e., the minimization problem is for relaxing the constraint, and if the constraint is to behold, the minimization is unnecessary. Meanwhile, Eq.(13) is simply a least-squares problem w.r.t. y ~ and the iterative process seems redundant. (e) In the experiment, only the newly introduced hyperparameter is compared. It would be good to also show the hyperparameters in training, since they affect the end performance significantly in many cases. [1] Wu, Zonghan, et al. "Connecting the dots: Multivariate time series forecasting with graph neural networks." Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 2020. [2] Li, Zhuoling, et al. "Dynamic Graph Learning-Neural Network for Multivariate Time Series Modeling." arXiv preprint arXiv:2112.03273 (2021). Clarity, Quality, Novelty And Reproducibility In this paper, some formulation is not precise and clear enough, and the presentation needs improvements as listed above. The problem studied in this paper is not new. The technical contribution seems marginal.
ICLR
Title Functional Relation Field: A Model-Agnostic Framework for Multivariate Time Series Forecasting Abstract In multivariate time series forecasting, the most popular strategy for modeling the relationship between multiple time series is the construction of graph, where each time series is represented as a node and related nodes are connected by edges, i.e. spatial-temporal graph neural networks. The graph structure is either given apriori or learned based the similarity between nodes. However, the relationship between multiple time series is typically complicated, for instance, the sum of outflows from upstream nodes may be equal to the inflows of downstream nodes. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, graph might only be a crude description on the dependency between nodes. To this end, we explore a new framework to model the inter-node relationship in a more precise way based our proposed inductive bias for graphs, Functional Relation Field, where a group of functions parameterized by neural networks are learned to characterize the dependency between multiple time series. These learned functions are versatile: they can then be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; and on the other hand, the learned functions will form a “field” where the nodes in the backbone prediction networks are enforced to satisfy the constraints defined by these functions. The experiment is conducted on one toy dataset to show our approach can well recover the true constraint relationship between nodes. And two real-world MiniApp calling traffic and road networks datasets are also considered with various different backbone networks. Results show that the prediction error can be reduced remarkably with the aid of the proposed functional relation field framework. N/A 1 INTRODUCTION Multivariate time series forecasting has surged recently due to its strong expressiveness of the spatio-temporal dependence among the data and its enormous popularity in vast application areas, such as the prediction of urban traffic, computer network flow, cloud micro-services calling flow, and rigid body motion, to name a few (Li et al., 2018; Yu et al., 2018; Bai et al., 2020; Yan et al., 2018; Liu et al., 2020). The most popular and straightforward strategy for modeling the relationship between multiple time series is the introduction of graph, where each time series is represented as a node and related nodes are connected by edges. This particular inductive bias for multivariate time series prediction results in the so called spatial-temporal graph neural networks (Yu et al., 2018). The graph structure is either given apriori (e.g. in traffic flow prediction, each road as a node has connected roads forming the graph.) or learned based the similarity between nodes (Yu et al., 2019; Bai et al., 2020; Shang et al., 2021). However, in practice, the relationship between multiple time series is typically complicated. For instance, there often exist constraints among the nodes, ranging from the equality between the inflow and the outflow for a node in a traffic network to the geometric constraints of the rigid body motion. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, graph might not be sufficient for characterizing the dependency between nodes. 
As a remedy, in this work, we explore a new framework to model the inter-node relationship in a more precise manner than graph, Functional Relation Field (FRF), where a group of functions parameterized by neural networks are learned to characterize the dependency between multiple time series explicitly. These learned functions are versatile: first they can then be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; and on the other hand, the learned functions will form a “field” where the nodes in the backbone prediction networks are further enforced to satisfy the constraints defined by these functions. As illustrated in Fig.1, the left panel shows the traditional graph neural networks assuming similar time series have edge connections, while our framework on the right panel models the dependency between nodes through a functional relationship, e.g. a linear form to enforce the constraints between the flows of target and dependent nodes. In our framework, we mainly solve the following two issues: (i) How to learn the functional field? We need to select the dependent nodes that have a relationship with the target node, and express the constraint in a functional form; (ii) How to guarantee the constraints satisfaction? The (functional) constraints relationship should be maintained in the predicted output in both training and test process. To address these issues, we propose a two-stage approach that can discover the functional relations (i.e. constraints) from data and further integrate the constraints seamlessly when forecasting the multivariate time series. Specifically, we first train a neural network with a selected target node as its output and all the other nodes as dependent variables (i.e. the input of this neural network), and identify the most relevant dependent nodes based on this trained network. We then re-train it to learn the relationship among the target and the discovered relevant nodes. Next, we incorporate these functional constraints into the network backbones by imposing them to the predicted output during both training and test process. More precisely, the output of the network could be guaranteed to satisfy the constraints by utilizing the constraint-satisfied transformation and loss minimization. We compare the proposed approach with SVM, fully connected networks, fully connected LSTM, and five backbone models (i.e., STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022), SCINet (Liu et al., 2022)). Experimental results show that our approach significantly improves the performance over the original network backbones and other baseline models. RELATED WORK Univariate time series forecasting. Recently, much research focuses on time series forecasting with deep learning models due to their powerful representational capability and prediction performance, including feed-forward neural network, RNN (Rumelhart, 1986) and its variants LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014). The transformer architecture and its variants (Vaswani et al., 2017; Simm et al., 2020; Zhou et al., 2021; Child et al., 2019; Lim et al., 2020; Li et al., 2019; Wu et al., 2021; Zhou et al., 2022) also made much progress on univariate time-series forecasting on learning long-range dependence. 
In order to model the trend and seasonality of time series in an interpretable way, N-beats (Oreshkin et al., 2020) network that stacked very deep fullconnection network based on backward and forward residual links has improved the multi-horizon prediction accuracy significantly. Moreover, DeepAR (Salinas et al., 2020) and Deep State-Space Model (DSSM) (Rangapuram et al., 2018) stack multi-layer LSTM network to generate parameters of one-step-ahead Gaussian predictive distributions for multi-horizon prediction. Multivariate time series forecasting. Spatio-temporal graph neural networks (Yu et al., 2018; Chen et al., 2019; Pan et al., 2021; Li et al., 2020) have been proposed to model the spatial correlation and temporal dependency in multivariate time-series. Apart from capturing the temporal dependence, these methods further model the spatial dependence among all time series via graph neural networks, leveraging the information from the neighboring time series to help forecasting the target one. It is well known that an informative graph structure is important to the graph time series forecasting. Therefore, many algorithms (Bai et al., 2020; Seo et al., 2016; Shang et al., 2021) were proposed to discovery the underlying graph structure. AGCRN (Bai et al., 2020) assumed the graph structure is unknown and adopted an adaptive approach to learn the embedding vectors for all nodes, and then replaced the adjacency matrix in graph convolutions with a function of the node embeddings. However, the similarity graph calculated with the learned node embedding is a dense and continuous graph instead of a sparse and discrete graph. Therefore, GTS (Shang et al., 2021) formulated the graph structure learning problem as a probabilistic graph model to learn the discrete graph through optimizing the mean performance over the graph distribution. Different from the existing multivariate time series prediction methods, AGCRN (Bai et al., 2020) (with a fully connected graph) and STGCN (Yu et al., 2018) (with a given graph), we consider a more precise way, i.e. functional relations as constraints, to learn the connection between time series. The new inductive bias expressed by these functional relations can be applied to different backbone networks to help recover the graph structure and act as regularization in both training and test process. 2 METHODOLOGY: FUNCTIONAL RELATION FIELD Multivariate time series forecasting. Suppose we have N time series {xi}Ni=1 with length T , written compactly as X ∈ RN×T . Each time series can be denoted as a node, where xi,t ∈ R for each node i and time step t. xt ∈ RN is the time slice of X at the t-th time step. The multi-step forecasting problem of a multivariate time series can be formulated as predicting the future M frames of the multivariates given the last H time slices: {ŷt+1, ..., ŷt+M} = argmax P ({yt+1, ..., yt+M}|{xt−H+1, ..., xt}), (1) where {yt+1, · · · , yt+M} and {ŷt+1, · · · , ŷt+M} represent the true and predicted values at the future time steps, M is the number of future steps. Note that here we use y to denote the output so as to differentiate it from the input x. Forecasting with functional relations. In many real-world scenarios, the relationship between multiple time series is typically complicate, graph might not be sufficient for modelling their dependency, particularly for the cases values of multivariate time series at each time step are subject to some intrinsic constraints. 
Existing methods have not incorporated these constraints into their models. In this work, we intend to show that models with the account of constraints (expressed with functional relationship) are superior to those without constraints in terms of prediction performance. As an example, suppose that the flow in a computer network satisfies the homogeneous linear constraints, at each time step t, the following linear constraints hold for slice xt: Axt = 0,∀t, (2) where A ∈ RM×N is a matrix that is constant across time. In other more complex cases, the constraints can be non-homogeneous, non-linear, or even intertemporal. Here, we concentrate on time-invariant constraints that is not intertemporal. As such, the constraints can be described by a set of functions f with size m, i.e. functional relation field, f = (f1, f2, ..., fm). fi(xt) = 0, ∀i, ∀t. (3) Based on the constraints defined above, we consider the following constrained multivariate time series prediction problem, {ŷt+1, ..., ŷt+M} = arg max P ({yt+1, ..., yt+M}|{xt−H+1, ..., xt}), s.t. fi(ŷt+τ ) = 0, 1 ≤ τ ≤M, 1 ≤ i ≤ m. (4) However, in most real-world scenarios, neither the functional form F nor the specific weights variables involved in the constraints are given, and one of our objectives is to extract such information from the data and solve the problem (4). We now elaborate the functional relation field for multivariate times series prediction in the following. The schematic diagram of the proposed framework is depicted in Figure 2, including two parts. The first part displayed Figure 2(a) shows how we learn the functional relations, i.e. the constraints between nodes. Assuming that the constraints are unknown, we aim to find the constrained nodes and the specific functional form for these constraints. The constraint function in this paper is Constraint nodes set and relevant nodes 𝒩! Retraining the functional relation networkTrain constraint network 𝑤! 𝑤" 𝑤# 𝑤$ 𝑤% 𝑤& Training Phase: Constraint-Satisfaction loss minimization in in Eq.(10) (a) Functional Relation Field (b) Applying Functional Relation Field Testing Phase: Constraint-Satisfaction transformation in Eq.(15)Output Layer Input Backbone Network Predict value "𝑦"#$ ℒ%&% Function: 𝐟 "𝑦"#$ = 0 Predict output "𝑦"#$ Constraint-satisfied output 1𝑦"#$ Predict output "𝑦"#$ Learned function relation: 𝐟 𝑥 = 0 𝑤! 𝑤" 𝑤# 𝑤$𝑤% 𝑤& Independent nodes Learned function relation ℱ 1𝑔 "𝑦"#$,( Figure 2: The schematic diagram of functional relation field framework. The two subfigures denote the two stages: (a) The training data is employed to discover the nodes in each constraint function and these functions are expressed by constraint network; (b) The learned constraints are incorporated in the backbone models (cf. Section 2.2) in three complementary ways so as to improve the forecasting performance. approximated by a neural network, named as functional relation network or constraint network. After training the functional relation network, we can identify the most relevant neighbors and produce a more informative graph structure. Then we can proceed to integrate the learned constraints into the backbone graph neural networks for multivariate time series prediction, as shown in Figure 2(b). We enforce these constraints to the output of spatio-temporal graph neural networks during both training and test phases. For the outputs of the networks, we add a constraint-satisfied transformation layer during the inference process such that the outputs strictly satisfy the constraints. 
Altogether, we refer to the proposed framework as functional relation field-enhanced spatio-temporal graph networks (FRF-STG). It is model-agnostic and can be applied to different backbone graph networks. In the following, we will describe the two stages including learning functional relation network and how to apply the constraints induced by the functional relation between nodes in more details. 2.1 LEARNING THE FUNCTIONAL RELATION NETWORK We start with discussing the first question: how to learn the unknown constraints (i.e. the functional relations) from the multivariate time series data? As demonstrated in Figure 2(a), we assume that there exists a constraint for each node. We first discover the relevant nodes involved in these constraints and then express the constraint functions via neural networks. Identifying constrained nodes and their relevant nodes. Here we consider a simplified case where the functional relation between nodes can be formulated as: xt,i = gi(xt,\i),∀t (5) i.e. for each target node i, we use a constraint network gi to approximate the function relation taking all the remaining (N − 1) nodes as input. We then train the constraint network to predict the value of the i-th node with the loss function : Lpred,(i) = ‖x̂t,i − xt,i‖2 (6) where x̂t,i and xt,i represent the estimated and observed values of node i at time step t. Second, a threshold err is set, and treat xi as a constrained node if both the training and validation error are smaller than err. Otherwise, xi is unpredictable with the other nodes, indicating it has weak dependency with other nodes. Then, to identify the most relevant nodes set Ni for target node i, we introduce the sensitivity of input change to the output for the trained constraint network, measured by the absolute value of the partial derivative: δi,j = ∣∣∣∣ ∂g∂xt,j ∣∣∣∣ , j 6= i (7) We calculate the average gradients over the training and the validation set for node j. Then, we specify another threshold grad here and consider the node j as the most relevant node of target i if δi,j is larger than grad. Besides, if the cardinality of Ni is larger than the scale threshold J , we further shrink Ni by only keeping the top-J nodes with the largest δi,j . Retraining the functional relation network. Since we filter out the irrelevant nodes for the discovered constrained node xi, it is necessary to re-train the constraint network using the relevant nodes in Ni as inputs, denoted as xt,Ni = {xt,ij |j ∈ Ni}, x̂t,i = g̃i(xt,Ni). (8) Regarding the architecture of the functional relation network g̃i, we adopt a simple attention-based structure for each node i, described as follows. αt,i = Softmax(MLP i(xt,Ni)), x̂t,i = α T t,ixt,Ni , (9) where αt,i is the attention weight vector generated from the relevant nodes xt,Ni , and x̂t,i is the reconstructed input with the constraint nodes. Others alternatives for designing the functional relation network is also possible. 2.2 APPLYING THE CONSTRAINTS The constraints learned by the functional relation network are versatile. A naive usage is to construct meaningful graph structure by drawing edges between the identified target and its dependent nodes. Secondly, we propose to incorporate the learned constraints into the backbone prediction network in both training and test process through constraint-satisfaction loss minimization and constraintsatisfaction transformation, respectively. Both of them are used to guarantee that the constraints are maintained in the outputs of the backbone network. 
Constraint satisfaction in training phase. We expect the output of the backbone network, ŷ = {ŷt+1, ŷt+2..., ŷt+M}, to satisfy the learned constraints that could reveal the underlying structure of the multivariate time series. A straightforward yet effective way of implementing the constraint satisfaction is loss minimization over the functional relation network based on the output of the backbone prediction network, LFRF (ŷ) = N∑ i=1 M∑ τ=1 ‖ŷt+τ,i − g̃({ŷt+τ,j}, j ∈ Ni)‖22 (10) Therefore, the overall loss function for training the backbone prediction network include two terms, Ltotal = L(ŷ, y) + λLFRF (ŷ), (11) where λ is a tradeoff coefficient for balancing the supervised term and constraint satisfaction. Constraint satisfaction in testing phase. Furthermore, although the constraints are fully utilized during training, there is no guarantee that the constraints hold for the outputs during the inference process. Therefore, it is necessary to perform constraint-satisfaction transformation on outputs of the prediction networks. Let us first consider the linear constraint Axt = 0,∀t. Suppose that ŷ = {ŷt+1, ŷt+2..., ŷt+M} and y = {yt+1, yt+2, ..., yt+M} denote the predicted output of the backbone network and the ground truth, respectively. To make the output ŷt+τ to satisfy the linear constraint, we can project the predicted output onto the hyperplane Axt = 0 as ỹt+τ with a closed-form solution, ỹt+τ = ŷt+τ −AT (AAT )−1Aŷt+τ . (12) On the other hand, for non-linear constraint set f(y) = (f1(y), ..., fm(y))T = 0, where each constraint fi(y) = 0 represents yi− g̃i(yt,Ni) = 0, there are no analytical solutions, but we can solve an optimization problem with nonlinear equality constraints, i.e. finding the nearest projection point on the plane f(y) = 0 given the reference point ŷt+τ for τ = 1, . . . ,m min ỹt+τ ‖ỹt+τ − ŷt+τ‖22, s.t. f(ỹt+τ ) = 0. (13) A simple approximate method for solving this equality-constrained quadratic programming is to conduct iterative projections. Denote J = ∂f∂x as the Jacobian matrix. Assuming ŷt+τ ≈ ỹt+τ , closed to the surface f(x) = 0. We derive the first-order Taylor expansion of f(x) at ŷt+τ as f(x) ≈ f(ŷt+τ ) + J T · (x− ŷt+τ ). (14) Equating f(x) to zero with x = ỹt+τ yields ỹt+τ = ŷt+τ − J (J TJ )−1f(ŷt+τ ). (15) Then we can repeat the above transformation several times (e.g. number of projections K = 10 times used in our experiments) until the constraints are well satisfied by evaluating whether F (x) =∑m j=1 |fj(x)| is small enough. 2.3 FUNCTIONAL RELATION FIELD-ENHANCED SPATIO-TEMPORAL GRAPH NETWORKS In this part, we integrate the proposed functional relation field framework into five representative backbone models, STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022) and SCINet (Liu et al., 2022) to boost their prediction performance, referred as FRF-STGCN, FRF-AGCRN, FRF-Autoformer, FRF-FEDformer and FRFSCINet, respectively. In the first stage, we learn the functional relation network, based on which the most relevant nodes can be identified. And the resultant graph structure could be used for the five backbone networks. In the second stage, we enforce the learned constraints in the training and inference process, as described in Figure 2. Since different backbone networks has their own specific design, we need adapt FRF to these backbones. 
For the constraint satisfaction of output, in AGCRN and SCINet, the networks produce all the prediction results at multiple time steps in one batch, and therefore, the constraint-satisfied transformation is applied to the prediction at each time step respectively for K times as described in Eq. (15). For STGCN, we apply the above transformation sequentially to each future time step, obtain the transformed predictions, and then feed the predictions to STGCN to produce the predictions at the next time step. We repeat this procedure until we finish the multi-step forecasting task. Algorithm 1: Training and inference of functional relation field Input: Trained function relation networks f , hyper-parameters λ and K. Output: constraint-satisfied output ỹt+τ // Training Phase; repeat 1 Forward on backbone network to get ŷt+τ . on training dataset; 2 Back-propagate with the loss Ltotal in Eq. 2.2 and run Adam. . constraint-satisfaction loss 3 until stopping criteria is met; // Inference Phase; Forward on the trained backbone network to obtain ŷt+τ . on test dataset; 4 for k in K do 5 Calculate ỹt+τ by Eq.(15) . constraint-satisfaction transformation; 6 end 3 EXPERIMENT In this section, we conduct experiments on five datasets including one synthetic graph dataset, two real-word MiniApp calling flow datasets and two traffic flow datasets to demonstrate the effectiveness of FRF on learning the underlying relationship between nodes and boosting prediction performance of these backbone networks. The code for reproducibility is attached in the Supplementary Materials. The baseline models. We first compare our framework with two traditional forecasting models including Historical Average (HA) and Support Vector Regression (SVR). Then, we also conduct experiments on two classical univariate time series prediction models, including Feed-Forward Neural Network (FNN) and Full-Connected LSTM (FC-LSTM (Sutskever et al., 2014)). We select the widely used graph time series model STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), and the univariate time series forecasting models based on transformer architectures Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022) and another state-of-the-art univariate prediction model SCINet (Liu et al., 2022)) as our backbone networks. We refer the readers to the supplementary materials for the detailed experimental settings. 3.1 DATASETS AND SETTINGS Binary tree dataset. We first generate an artificial graph time series dataset. The graph structure for this dataset is a complete binary tree with 255 nodes. For each leaf node i, its value is a noisy sinusoidal wave across time, xi,t = ni,tAi sin( 2πtTi +φ), where ni,t ∼ U(0.95, 1.05). We sort all leaf nodes from left to right in an increasing order of their periods. For a non-leaf node p, we denote its left and right child as l and r. We further set the value of node p to be the geometric mean of its two children l and r, xp,t = √ xl,t · xr,t. We sample one point every 5 minutes, so there are 288 points per day. We generate the data for 40 days, including 30 days for training (i.e., 30× 288 = 8640 time points), 5 days for validation, and 5 days for testing. We intentionally design this dataset since it has true graph structure between different time series and the constraints between nodes are explicit, and thus it is a suitable testbed to compare the superiority of FRF over those without FRF. 
In the experiments, for the backbone with FRF, we assume the constraints are unknown and learn them using the proposed method in Section 2.1. MiniApp calling flow dataset 1 and 2. These two datasets are real-word flow data from two popular online payment MiniApps, attached in the Supplementary Materials. For the two MiniApps, there are N = 30, 23 filtered pages linking to each other in the calling process, which produces visiting request flow from one page to another, constituting a graph with N = 30, 23 nodes. We aggregate the flow with averaged value every 5 minutes for each node, so there are 288 points per day. For the first MiniApp, we collect 21 days of data, including 15 days for training, 3 days for validation, and 3 days for test. For the second one, 24 days of data are collected, including 18 days for training, 3 days for validation, and 3 days for testing. PEMSD4 and PEMSD8 traffic datasets. This benchmark dataset is popular for multi-variate time series prediction, describing the traffic speed in San Francisco Bay Area with 307 sensors on 29 roads (https://paperswithcode.com/dataset/pemsd4). The other one consists of 170 detectors on 8 roads in San Bernardino area (https://paperswithcode.com/dataset/pemsd8). Settings of constraint network and hyper-parameters. For the architectures of the constraint network, we compare two a 4-layer MLP and a self-attention network, and the results show the latter is more effective. We measure the constraint relationship with MAPE, where the large MAPE indicates the time-invariate constraint is weak. Specifically, the MAPEs for BinaryTree, MiniAPP1, MiniApp2, PEMSD4, PEMSD8 datasets are 0.10, 0.008, 0.01, 0.02, 0.07 respectively. The larger MAPE means the weaker constraint relationship, therefore the proposed FRF model is applicable to backbone network only when the MAPE of constraint network is small. In addition, we only tune the parameters of FRF while keeping the other hyper-parameters setting the same as backbone networks. 3.2 RESULTS Overall performance Table 1 summarizes the performance of all the compared models on the five datasets, including the proposed FRF approach coupled with STGCN, AGCRN, Autoformer, FEDformer and SCINet, denoted as FRF-STGCN and FRF-AGCRN, FRF-Autoformer, FRF-FEDformer and FRF-SCINet, respectively. For the binary tree dataset, we predict the future 12 time steps and evaluate the performance in terms of three metrics (MAE, RMSE, MAPE). Since the underlying true constraints are known, we report the experimental results of our models with both true and learned constraints, denoted as “T” and “L”. We can observe that deep learning-based models typically outperform the traditional ones, as expected. Furthermore, the proposed functional relation field can further improve the performance of the original backbone models. Regardless of the differences between the two backbone networks, FRF can consistently improve the prediction accuracy for both of the backbones. indicating that the FRF framework could be potentially applied to a wide variety of backbones. For the two MiniApp datasets, we omit the metric MAPE since the scale of data changes dramatically across time such that MAPE fails to characterize the performance of different models. Due to the error accumulation problem for multi-step prediction in STGCN, the performance of this model pales in comparison with its non-iterative counterpart. As a result, we only report the results of the non-iterative version of STGCN. 
Since the underlying true constraint relationship between nodes are not available, we only report the FRF with learned constraints. We can easily observe that augmentation of the proposed FRF can consistently boost the performance of the five backbone networks. Specifically, FRF improves STGCN by 36.3% and 6.9% on the two datasets, also improves AGCRN by 14.6% and 7.0%, respectively. For traffic datasets PEMSD4 and PEMSD8, one particular reason we choose SCINet as the baseline is that the reported results can achieve state-of-the-art prediction performance on this task. We can observe that even relying on such a strong baseline, FRF framework can still improve its performance of with a margin 0.6% and 0.3% on both datasets, respectively. For other backbones, we again see that FRF further improves the prediction performance, showing the effectiveness of FRF as a model-agnostic framework. Learning the relationship between nodes. We further test whether FRF could discover the underlying true constraints between nodes. First, we investigate whether we can reliably estimate the target node given the values of constraint nodes. To be exact, we compute x̂t,i = g̃({xt,Ni}) and compare x̂t,i with xt,i in terms of MAPE. For the test data of the synthetic binary tree, the resulting MAPE is 0.399%. Note that the MAPE of AGCRN or STGCN reported in Table 1 is around 4% without considering the constraints. Therefore, using the learned constraints can well regularize the predictions given by the original network backbones as well as further improve the forecasting performance. On the other hand, we compare the performance of the proposed algorithm when using the true and estimated constraints, showing the results in Table 1. We can observe that the performance based on both the true and estimated constraints is almost the same, indicating that the constraints are accurately learned. Additionally, we visualize the learned constraints by connecting each constrained node with their most relevant neighbors as a graph, shown in Figure 4. The structure of the binary tree is well recovered, although some extra edges are involved. Hyperparameters Sensitivity. FRF enhanced model introduces additional three kinds of hyperparameters including validation error threshold err, the loss tradeoff coefficient λ and the number of output transformation K. Therefore, we conduct hyper-parameters sensitivity experiments on binary tree dataset using backbone AGCRN as shown in Fig 3. We can observe that the performance slightly improves when the err increases due to more constraints are discovered, while the performance decreases with large err because of the introduced noise. Even more, the FRF enhanced model performs worse than backbone network when err = 5.0. Consistently, FRF enhanced model performs better when λ = 0.1 and worse than backbone with large λ. For the K, the larger K improves the backbone more significantly than smaller k because iterating more times makes the non-linear constraint optimization problem more accurate. Ablation Study. We first conduct an ablation study on the constraint graph learned from constraint network using the STGCN as backbone network in Table 3. We can observe that the constraint graph performs better than explicit graph extracted from prior knowledge on both traffic and MiniApp datases. 
In addition, for backbone networks without explicit graph structure such as AGCRN and SCINet, we investigate the effectiveness of constraint-satisfaction loss minimization and constraintsatisfaction transformation as shown in Table 4, finding that both of the two components contribute to the forecasting performance. Specifically, for the backbone network AGCRN which achieves the state-of-the-art performance on binary tree dataset, FRF enhances the backbone by 1.95% in training phase and by 9.0% in inference phase, while the combination of two components improves the performance by 10.16% in total. 4 CONCLUSION In this paper, we have proposed to enhance the multivariate time series forecasting with a new inductive bias, function relation fieild (FRF), which is model-agnostic. FRF can discover the intrinsic graph structure, as well as improve flow forecasting performance by applying constraint function relationship to the output in training and testing phases. The constraints learned by FRF can be incorporated into existing backbone networks, consistently improving the prediction performance. Experimental results show that the proposed FRF framework can reliably learn the constraints from the time-series data and restore the graph structure. Moreover, these constraints in turn help improve the prediction accuracy by a notable margin, regardless of the diversity of the network architecture in different backbone models. We expect that this FRF inductive bias could be potentially employed in other multivariate settings beyond times series scenarios. A PERFORMANCES ON MORE BACKBONES GTS Shang et al. (2021). The discrete graph structure learning model learns a graph structure among multiple time series and forecasts them simultaneously with a GNN. There are two differences between GTS and our proposed FRF. On one hand, GTS performs prediction under GNN paradigm which is model-specific while FRF is model-agnostic applying the function field to forecasting loss optimization. On the other hand, existing studies including AGCRN and GTS construct the graph based on the time-series similarity, while the FRF is the first proposed to exploiting the the constraint function relation to enhance the multi-variate time-series forecasting. We conduct experiments on Binary tree, Miniapp1 and Miniapp2 datasets using the opensource code (https://github.com/ chaoshangcs/GTS.git) shown in table.5, demonstrating that FRF can also improve the forecasting performance on GTS. The code of FRF-GTS and the running log is released in the supplementary material. NRI Kipf et al. (2018). The neural relational inference (NRI) model is an unsupervised model that learns to infer interactions and forecasting with a lstm. We conduct experiments on Binary tree, Miniapp1 and Miniapp2 dataset using the opensource code (https://github.com/ ethanfetaya/NRI.git). The results on NRI network in table.5 showing that there is a large margin from the SOTA backbone AGCRN Bai et al. (2020). B EXPERIMENTAL SETTINGS The error threshold. For the binary tree dataset and MiniApp calling flow datasets which have strong constraint relationships, we set err = 0.01 to filter the constaint nodes. However, for traffic dataset PEMSD4 and PEMSD8 with relative weak constraints, we set err = 0.025 to achieve the best performance. The hyper-parameters sensitivity experiments of err on PEMSD4 and PEMSD8 datasets are shown in Fig 5. The function relation graph. Note that for the real datasets, the graph structure is not given in advance. 
In order to use STGCN, we adopt Gaussian copula graphical models Liu et al. (2009); Yu et al. (2020) to learn the graph structure from the data and take the learned graph as the benchmark graph. For the FRF-enhanced backbone network STGCN Yu et al. (2018), we replace the fixed graph structure with the learned constraint graph and achieve better performance. As the results in Table 3 show, the constraint graph performs better than the graph learned with the copula graphical model. Besides, for the univariate backbones SCINet, Autoformer, and FEDformer, which take no inter-series relationships into consideration, as well as the graph model AGCRN, which optimizes dynamically learned node embeddings and ignores the original graph, we do not exploit the constraint relation at the graph construction stage; the functional relation is only applied through the training loss and the output constraints. The setting of J. For the binary tree dataset, we set J = 4 to recover the functional relation shown in Fig. 4. We set J = 6 for the two MiniApp flow calling datasets. For the traffic datasets PEMSD4 with 307 nodes and PEMSD8 with 170 nodes, we achieve the best performance with J = 30. The detailed settings of λ and K. In the training stage, we only tune the trade-off coefficient λ and the number of iterations K, while keeping all other parameters the same as the SOTA settings in the benchmark. The detailed settings are shown in Table 6. C VISUALIZATION OF LEARNED FUNCTION RELATION The flow visualization of different relations. We show the comparison of the learned functional relation and the original relation on the MiniApp1 dataset in Table 6. Note that the original relation of the MiniApp is learned by Gaussian copula graphical models Liu et al. (2009); Yu et al. (2020). We can observe that the flow of the target node has the same pattern and scale as its relevant node under the learned functional relation, while it has a different scale under the original graph. The results demonstrate that the learned function is more effective at capturing the flow relationship. D DISCUSSION ON HYPERPARAMETERS AND COMPUTATIONAL COMPLEXITY Hyper-parameters. There are three newly introduced hyper-parameters: the error threshold err, the trade-off coefficient λ, and the number of iterations K. The err and λ can easily be chosen based on the validation loss. A larger K yields a more accurate optimization and better performance, so there is a balance between performance gain and computation. We typically set K = 10, which works well for all the tasks we have considered. Computational complexity. On one hand, the computational complexity of forecasting network training increases because of the K iterations of output constraint satisfaction. K is usually set to a small number such as 5 or 10, which is computationally cheap, and the main time-consuming operations come from the forward and backward propagation of the backbones rather than from the output constraint. On the other hand, we need to train a constraint network for every time series. Fortunately, the constraint network is a simple two-layer attention network, which has only a small number of parameters yet is effective enough to capture the complex functional relation. For example, in the MiniApp1 task, each constraint network has only around 3,000 parameters and its training time is on the scale of seconds. Thus, training a constraint network is very fast and does not require much computational resource. The small size of the constraint networks makes the approach amenable to large-scale multivariate time series.
1. What is the main contribution of the paper regarding multivariate forecasting? 2. How does the proposed method differ from other approaches in encoding functional relationships between time series? 3. What are the strengths and weaknesses of the proposed method, particularly in terms of its novelty, generalization, and scalability? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or suggestions regarding the paper's experimental design, hyperparameter tuning, and computational complexity?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors propose a new model for multivariate forecasting with structured relationship between time series. They propose to learn a static functional relationship between each node and every other node (where the target node/series value is equal to a function of the values of the other nodes/series), and then incorporate these functional relationships into any existing forecast model - via adding a regularization term (penalizing forecasts that deviate from the relationships) to the regular forecast loss (weighted by a hyper parameter lambda), and also via subsequently projecting the final forecasts onto the (learned) constraints. The authors validate their approach by comparing results on several datasets for several recent forecast methods with and without the proposed functional relationship component and demonstrate consistently improved forecast error metrics using the proposed approach. They also perform a variety of hyper parameter sensitivity and ablation analysis, and some analysis around learned functional relationships. Strengths And Weaknesses Strengths: I find the approach to be interesting and novel - I personally have not seen this kind of approach taken before, and I feel it can be seen as generalizing a couple common classes of approaches: hierarchical forecasting - which does incorporate functional relationships but of a fixed form (hierarchical aggregates), and graph-based forecasting - which does not specify the particular relationship between nodes as is done here. This enhances the hierarchical approach by generalizing the functional form encoded to be not just hierarchical or linear, and also generalizes the graph approach to encode more specific relationships between variables explicitly. The paper was well-written and organized so it was easy to follow and understand. The method is logical and sound. The number of datasets and different models compared to and enhanced with the proposed method lend good credence to the proposed approach, along with the additional analyses - ablation study, hyperparameter sensitivity, and analysis of the relationships learned. Weaknesses: A graph learning / graph based approach (e.g., with graph neural nets) could theoretically learn arbitrary relationships between nodes as well (as for example this would be encoded in multiple layers of graph neural networks) - the authors never really say what is the advantage of the proposed approach compared to this idea. The learned relationships are static / stationary - that is for the same series values x, the relationship will be the same regardless of time. This seems a bit over-restricting and not realistic for time series which often have non-stationary relationships as well. It seems like this approach might be over-constraining to enforce some relationship learned across all time points - i.e., in many cases it may be more realistic if the functional relationship changes over time or context (which could also be captured by encoding context in additional exogenous series). Additionally, some prior work on learning the graph structure for forecasting allows the graph structure learned to be influenced by the forecast modeling as well - which is a disadvantage of this approach as it is learned separately. 
It would have been useful to also see a comparison with prior work on reconciliation (at least given a known structure) - as typical approaches can encode arbitrary linear relationships between time series as well As the constraints are learned (in a completely uncontrolled fashion with flexible neural nets) - how can you guarantee the constraints don't contradict each other at all? If they do, how can the constraint projection work and what do you do in those cases? This could easily happen for example if the model learns some (incorrect) relationship like: x_1 = 2 * x_2 and x_2 = 2 * x_1 for 2 variables / time series x_1 and x_2 (as a simplified case to illustrate the point). A major weakness is the large number of hyper parameters introduced by the approach - beyond the hyper parameters used for each forecast model itself as well. There are even more than really tested and pointed out because the relationship network architecture and training introduces its own set of hyper parameters as well (and this could even be done for the two different types of networks for learning the relationship as well). Several hyper parameters are chosen seemingly without the typical validation process (seemingly choosing whatever gave the best results) - along with the fact of being so many that need to be precisely tuned this limits practical usefulness of the method and confidence it will work well on other datasets for real use (where we have to do our best to select hyper parameters for everything). Applying this method, the number of hyper parameters that need to be tuned is quite daunting - tuning the hyper parameters (architecture and learning) for the relationship networks. Tuning the various relationship thresholds. Tuning lambda and K in the learning objective. And finally tuning the multiple hyper parameters of the backbone forecast model as well. It would be helpful for the authors to add some discussion on how to address this complexity and for clear description of how all hyper parameters were chosen (some hyper parameter numbers are just reported in tables and mentioned they worked better, but the process for choosing them is not clearly explained). Discussing this issue and how it could be addressed, and further study if certain fixed values or procedures would work sufficiently, could strengthen this work. Another major weakness of the proposed methods is the limited scalability - and lack of discussion around this weakness along with lack of analysis of computational complexity / reported run times as a function of the number of time series. Adding these could help strengthen this work. In particular the computational complexity of the method seems daunting, as we have to train a neural network with the roughly the same amount of data as for the forecast model itself, for every single time series, twice, just for learning the functional relationships. So for thousands of time series this amounts to training thousands of neural networks, which can be further multiplied for hyper parameter tuning / optimization. Some discussion should really be added around this, and also if there are any ways to address it, and would strengthen the paper. Also repeated experiments are not performed - and std. dev. of metric scores / confidence intervals are not reported so it's hard to determine the significance of differences in metric scores, and robustness of the results. Ideally metric scores would be averaged over multiple random runs and multiple test time series windows. 
Clarity, Quality, Novelty And Reproducibility Clearly and well written, organized and explained - with some grammar and typos that should be fixed (see below). I find the approach to be highly novel as mentioned, and reproducibility is good as code is provided and details around experiments, datasets, and hyper parameters are reported. I do feel the complete experiment process / how all hyper parameters and architectures are chosen is not fully explained. Grammar issues and typos throughout hurt readability - e.g., in intro: "...learned based the similarity..." instead of "...learned based on the similarity...", "...a more precise manner than graph...", "...the introduction of graph...", "...were proposed to discovery the...", "...relationship between multiple time series is typically complicate...", "Others alternatives ... is also possible", (typo) "function relation fieild" Incorrect statement: "finding the nearest projection point on the plane f(y) = 0" if f is nonlinear as described, this may not define a plane
ICLR
Title Functional Relation Field: A Model-Agnostic Framework for Multivariate Time Series Forecasting Abstract In multivariate time series forecasting, the most popular strategy for modeling the relationship between multiple time series is the construction of a graph, where each time series is represented as a node and related nodes are connected by edges, i.e., spatial-temporal graph neural networks. The graph structure is either given a priori or learned based on the similarity between nodes. However, the relationship between multiple time series is typically complicated; for instance, the sum of outflows from upstream nodes may be equal to the inflows of downstream nodes. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, a graph may only be a crude description of the dependency between nodes. To this end, we explore a new framework to model the inter-node relationship in a more precise way based on our proposed inductive bias for graphs, the Functional Relation Field, where a group of functions parameterized by neural networks is learned to characterize the dependency between multiple time series. These learned functions are versatile: they can be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; on the other hand, the learned functions form a “field” in which the nodes of the backbone prediction networks are enforced to satisfy the constraints defined by these functions. Experiments are conducted on one toy dataset to show that our approach can well recover the true constraint relationship between nodes, and on two real-world MiniApp calling flow datasets and road network traffic datasets with various backbone networks. Results show that the prediction error can be reduced remarkably with the aid of the proposed functional relation field framework. N/A 1 INTRODUCTION Multivariate time series forecasting has surged recently due to its strong expressiveness of the spatio-temporal dependence among the data and its enormous popularity in vast application areas, such as the prediction of urban traffic, computer network flow, cloud micro-service calling flow, and rigid body motion, to name a few (Li et al., 2018; Yu et al., 2018; Bai et al., 2020; Yan et al., 2018; Liu et al., 2020). The most popular and straightforward strategy for modeling the relationship between multiple time series is the introduction of a graph, where each time series is represented as a node and related nodes are connected by edges. This particular inductive bias for multivariate time series prediction results in the so-called spatial-temporal graph neural networks (Yu et al., 2018). The graph structure is either given a priori (e.g., in traffic flow prediction, each road is a node and connected roads form the graph) or learned based on the similarity between nodes (Yu et al., 2019; Bai et al., 2020; Shang et al., 2021). However, in practice, the relationship between multiple time series is typically complicated. For instance, there often exist constraints among the nodes, ranging from the equality between the inflow and the outflow of a node in a traffic network to the geometric constraints of rigid body motion. Such relations widely exist in many real-world multivariate time series forecasting scenarios, yet are far from well studied. In these cases, a graph might not be sufficient for characterizing the dependency between nodes.
As a remedy, in this work, we explore a new framework to model the inter-node relationship in a more precise manner than a graph, the Functional Relation Field (FRF), where a group of functions parameterized by neural networks is learned to characterize the dependency between multiple time series explicitly. These learned functions are versatile: on one hand, they can be used to discover the underlying graph structure by identifying the most relevant neighbors of the target node; on the other hand, the learned functions form a “field” in which the nodes of the backbone prediction networks are further enforced to satisfy the constraints defined by these functions. As illustrated in Fig. 1, the left panel shows the traditional graph neural networks, which assume that similar time series have edge connections, while our framework on the right panel models the dependency between nodes through a functional relationship, e.g., a linear form to enforce the constraints between the flows of target and dependent nodes. In our framework, we mainly solve the following two issues: (i) How to learn the functional field? We need to select the dependent nodes that have a relationship with the target node and express the constraint in a functional form. (ii) How to guarantee constraint satisfaction? The (functional) constraint relationship should be maintained in the predicted output during both the training and test processes. To address these issues, we propose a two-stage approach that discovers the functional relations (i.e., constraints) from data and further integrates the constraints seamlessly when forecasting the multivariate time series. Specifically, we first train a neural network with a selected target node as its output and all the other nodes as dependent variables (i.e., the input of this neural network), and identify the most relevant dependent nodes based on this trained network. We then re-train it to learn the relationship between the target and the discovered relevant nodes. Next, we incorporate these functional constraints into the network backbones by imposing them on the predicted output during both the training and test processes. More precisely, the output of the network is guaranteed to satisfy the constraints by utilizing the constraint-satisfaction transformation and loss minimization. We compare the proposed approach with SVM, fully connected networks, fully connected LSTM, and five backbone models (i.e., STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022), SCINet (Liu et al., 2022)). Experimental results show that our approach significantly improves the performance over the original network backbones and other baseline models. RELATED WORK Univariate time series forecasting. Recently, much research has focused on time series forecasting with deep learning models due to their powerful representational capability and prediction performance, including feed-forward neural networks, RNNs (Rumelhart, 1986) and their variants LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014). The transformer architecture and its variants (Vaswani et al., 2017; Simm et al., 2020; Zhou et al., 2021; Child et al., 2019; Lim et al., 2020; Li et al., 2019; Wu et al., 2021; Zhou et al., 2022) have also made much progress on univariate time-series forecasting by learning long-range dependencies.
In order to model the trend and seasonality of time series in an interpretable way, the N-BEATS network (Oreshkin et al., 2020), which stacks very deep fully connected blocks with backward and forward residual links, has improved multi-horizon prediction accuracy significantly. Moreover, DeepAR (Salinas et al., 2020) and the Deep State-Space Model (DSSM) (Rangapuram et al., 2018) stack multi-layer LSTM networks to generate the parameters of one-step-ahead Gaussian predictive distributions for multi-horizon prediction. Multivariate time series forecasting. Spatio-temporal graph neural networks (Yu et al., 2018; Chen et al., 2019; Pan et al., 2021; Li et al., 2020) have been proposed to model the spatial correlation and temporal dependency in multivariate time series. Apart from capturing the temporal dependence, these methods further model the spatial dependence among all time series via graph neural networks, leveraging the information from neighboring time series to help forecast the target one. It is well known that an informative graph structure is important for graph time series forecasting. Therefore, many algorithms (Bai et al., 2020; Seo et al., 2016; Shang et al., 2021) were proposed to discover the underlying graph structure. AGCRN (Bai et al., 2020) assumed the graph structure is unknown, adopted an adaptive approach to learn embedding vectors for all nodes, and then replaced the adjacency matrix in graph convolutions with a function of the node embeddings. However, the similarity graph calculated with the learned node embeddings is dense and continuous instead of sparse and discrete. Therefore, GTS (Shang et al., 2021) formulated graph structure learning as a probabilistic graph model and learns the discrete graph by optimizing the mean performance over the graph distribution. Different from the existing multivariate time series prediction methods, such as AGCRN (Bai et al., 2020) (with a fully connected graph) and STGCN (Yu et al., 2018) (with a given graph), we consider a more precise way, i.e., functional relations as constraints, to learn the connections between time series. The new inductive bias expressed by these functional relations can be applied to different backbone networks to help recover the graph structure and to act as regularization in both the training and test processes. 2 METHODOLOGY: FUNCTIONAL RELATION FIELD Multivariate time series forecasting. Suppose we have N time series {x_i}_{i=1}^N of length T, written compactly as X ∈ R^{N×T}. Each time series is denoted as a node, with x_{i,t} ∈ R for node i and time step t, and x_t ∈ R^N is the time slice of X at the t-th time step. The multi-step forecasting problem of a multivariate time series can be formulated as predicting the future M frames given the last H time slices: {ŷ_{t+1}, ..., ŷ_{t+M}} = arg max P({y_{t+1}, ..., y_{t+M}} | {x_{t−H+1}, ..., x_t}), (1) where {y_{t+1}, ..., y_{t+M}} and {ŷ_{t+1}, ..., ŷ_{t+M}} represent the true and predicted values at the future time steps and M is the number of future steps. Note that here we use y to denote the output so as to differentiate it from the input x. Forecasting with functional relations. In many real-world scenarios, the relationship between multiple time series is typically complicated, and a graph might not be sufficient for modelling their dependency, particularly when the values of the multivariate time series at each time step are subject to some intrinsic constraints.
Existing methods have not incorporated these constraints into their models. In this work, we intend to show that models that take the constraints into account (expressed as functional relationships) are superior to those without constraints in terms of prediction performance. As an example, suppose that the flow in a computer network satisfies homogeneous linear constraints; at each time step t, the following linear constraints hold for the slice x_t: A x_t = 0, ∀t, (2) where A ∈ R^{m×N} is a matrix that is constant across time. In other, more complex cases, the constraints can be non-homogeneous, non-linear, or even intertemporal. Here, we concentrate on time-invariant constraints that are not intertemporal. As such, the constraints can be described by a set of m functions, i.e., the functional relation field, f = (f_1, f_2, ..., f_m), with f_i(x_t) = 0, ∀i, ∀t. (3) Based on the constraints defined above, we consider the following constrained multivariate time series prediction problem: {ŷ_{t+1}, ..., ŷ_{t+M}} = arg max P({y_{t+1}, ..., y_{t+M}} | {x_{t−H+1}, ..., x_t}), s.t. f_i(ŷ_{t+τ}) = 0, 1 ≤ τ ≤ M, 1 ≤ i ≤ m. (4) However, in most real-world scenarios, neither the functional form f nor the specific variables involved in the constraints are given, and one of our objectives is to extract such information from the data and solve problem (4). We now elaborate on the functional relation field for multivariate time series prediction. The schematic diagram of the proposed framework is depicted in Figure 2 and consists of two parts. The first part, displayed in Figure 2(a), shows how we learn the functional relations, i.e., the constraints between nodes. Assuming that the constraints are unknown, we aim to find the constrained nodes and the specific functional form of these constraints. [Figure 2: The schematic diagram of the functional relation field framework. The two subfigures denote the two stages: (a) the training data is employed to discover the nodes in each constraint function, and these functions are expressed by the constraint network; (b) the learned constraints are incorporated into the backbone models (cf. Section 2.2) in three complementary ways so as to improve the forecasting performance.] The constraint function in this paper is approximated by a neural network, named the functional relation network or constraint network. After training the functional relation network, we can identify the most relevant neighbors and produce a more informative graph structure. We then proceed to integrate the learned constraints into the backbone graph neural networks for multivariate time series prediction, as shown in Figure 2(b). We enforce these constraints on the output of the spatio-temporal graph neural networks during both the training and test phases. For the outputs of the networks, we add a constraint-satisfaction transformation layer during the inference process such that the outputs strictly satisfy the constraints.
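To make the linear special case of Eq. (2) concrete, the short sketch below builds a flow-conservation constraint matrix A for a hypothetical three-node network (one upstream node splitting into two downstream nodes) and applies the closed-form projection that later appears as Eq. (12). The network layout and the numbers are illustrative assumptions, not data from the paper.

import numpy as np

# Hypothetical 3-node flow network: node 0 splits into nodes 1 and 2,
# so conservation requires x0 - x1 - x2 = 0 at every time step.
A = np.array([[1.0, -1.0, -1.0]])     # one linear constraint over N = 3 series

# A consistent time slice (values are made up for illustration).
x_t = np.array([10.0, 6.0, 4.0])
print(A @ x_t)                        # ~0: the constraint holds

# A raw model prediction will generally violate the constraint;
# the projection of Eq. (12) pulls it back onto the subspace A x = 0.
y_hat = np.array([10.3, 5.8, 4.1])
proj = y_hat - A.T @ np.linalg.solve(A @ A.T, A @ y_hat)
print(A @ proj)                       # ~0 after projection

The same pattern extends to any number of conservation equations by adding rows to A, one per constrained node.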
Altogether, we refer to the proposed framework as functional relation field-enhanced spatio-temporal graph networks (FRF-STG). It is model-agnostic and can be applied to different backbone graph networks. In the following, we describe the two stages, namely learning the functional relation network and applying the constraints induced by the functional relation between nodes, in more detail. 2.1 LEARNING THE FUNCTIONAL RELATION NETWORK We start by discussing the first question: how can the unknown constraints (i.e., the functional relations) be learned from the multivariate time series data? As demonstrated in Figure 2(a), we assume that there exists a constraint for each node. We first discover the relevant nodes involved in these constraints and then express the constraint functions via neural networks. Identifying constrained nodes and their relevant nodes. Here we consider a simplified case where the functional relation between nodes can be formulated as x_{t,i} = g_i(x_{t,\i}), ∀t, (5) i.e., for each target node i, we use a constraint network g_i to approximate the functional relation, taking all the remaining (N − 1) nodes as input. We then train the constraint network to predict the value of the i-th node with the loss function L_{pred,(i)} = ‖x̂_{t,i} − x_{t,i}‖^2, (6) where x̂_{t,i} and x_{t,i} represent the estimated and observed values of node i at time step t. Second, a threshold err is set, and x_i is treated as a constrained node if both the training and validation errors are smaller than err. Otherwise, x_i is not predictable from the other nodes, indicating that it has only a weak dependency on them. Then, to identify the most relevant node set N_i for target node i, we introduce the sensitivity of the output of the trained constraint network to a change of each input, measured by the absolute value of the partial derivative: δ_{i,j} = |∂g_i/∂x_{t,j}|, j ≠ i. (7) We calculate the average gradient over the training and validation sets for each node j, specify another threshold ϵ_grad, and consider node j a most relevant node of target i if δ_{i,j} is larger than ϵ_grad. Besides, if the cardinality of N_i is larger than the scale threshold J, we further shrink N_i by keeping only the top-J nodes with the largest δ_{i,j}. Retraining the functional relation network. Since we filter out the irrelevant nodes for the discovered constrained node x_i, it is necessary to re-train the constraint network using only the relevant nodes in N_i as inputs, denoted as x_{t,N_i} = {x_{t,j} | j ∈ N_i}: x̂_{t,i} = g̃_i(x_{t,N_i}). (8) Regarding the architecture of the functional relation network g̃_i, we adopt a simple attention-based structure for each node i, described as follows: α_{t,i} = Softmax(MLP_i(x_{t,N_i})), x̂_{t,i} = α_{t,i}^T x_{t,N_i}, (9) where α_{t,i} is the attention weight vector generated from the relevant nodes x_{t,N_i}, and x̂_{t,i} is the reconstruction of the target node from the constraint nodes. Other alternatives for designing the functional relation network are also possible. 2.2 APPLYING THE CONSTRAINTS The constraints learned by the functional relation network are versatile. A first, straightforward usage is to construct a meaningful graph structure by drawing edges between each identified target and its dependent nodes. Secondly, we propose to incorporate the learned constraints into the backbone prediction network in the training and test processes through constraint-satisfaction loss minimization and constraint-satisfaction transformation, respectively. Both are used to guarantee that the constraints are maintained in the outputs of the backbone network.
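As a rough illustration of the first stage described in Section 2.1, the sketch below implements a small attention-style constraint network in the spirit of Eq. (9) together with the gradient-based sensitivity score of Eq. (7) used to rank candidate neighbors. Layer sizes, the batch of values, and all variable names are our own assumptions; the paper does not prescribe them.

import torch
import torch.nn as nn

class ConstraintNet(nn.Module):
    """Attention-style constraint network: reconstructs node i from other nodes (Eq. 9)."""
    def __init__(self, n_inputs, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_inputs, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_inputs))

    def forward(self, x_neighbors):                 # x_neighbors: (batch, n_inputs)
        alpha = torch.softmax(self.mlp(x_neighbors), dim=-1)   # attention weights
        return (alpha * x_neighbors).sum(dim=-1)               # weighted sum = x_hat_i

def sensitivity(g, x):
    """Average |d g / d x_j| over a batch (Eq. 7), one score per candidate neighbor."""
    x = x.clone().requires_grad_(True)
    g(x).sum().backward()
    return x.grad.abs().mean(dim=0)

# Hypothetical usage: predict node i from the remaining nodes, then prune to the top-J.
g_i = ConstraintNet(n_inputs=7)
x_others = torch.randn(64, 7)                       # made-up batch of the other nodes' values
scores = sensitivity(g_i, x_others)
top_j = torch.topk(scores, k=4).indices             # keep the J = 4 most relevant neighbors

In the full procedure, g_i would first be trained on all remaining nodes, the sensitivity scores computed over the training and validation data, and the network re-trained on the selected neighbor set N_i only.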
Constraint satisfaction in the training phase. We expect the output of the backbone network, ŷ = {ŷ_{t+1}, ŷ_{t+2}, ..., ŷ_{t+M}}, to satisfy the learned constraints, which reveal the underlying structure of the multivariate time series. A straightforward yet effective way of implementing constraint satisfaction is loss minimization over the functional relation network applied to the output of the backbone prediction network: L_FRF(ŷ) = Σ_{i=1}^{N} Σ_{τ=1}^{M} ‖ŷ_{t+τ,i} − g̃_i({ŷ_{t+τ,j} | j ∈ N_i})‖_2^2. (10) Therefore, the overall loss function for training the backbone prediction network includes two terms, L_total = L(ŷ, y) + λ L_FRF(ŷ), (11) where λ is a trade-off coefficient balancing the supervised term and constraint satisfaction. Constraint satisfaction in the testing phase. Furthermore, although the constraints are fully utilized during training, there is no guarantee that they hold for the outputs during the inference process. Therefore, it is necessary to perform a constraint-satisfaction transformation on the outputs of the prediction networks. Let us first consider the linear constraint A x_t = 0, ∀t. Suppose that ŷ = {ŷ_{t+1}, ŷ_{t+2}, ..., ŷ_{t+M}} and y = {y_{t+1}, y_{t+2}, ..., y_{t+M}} denote the predicted output of the backbone network and the ground truth, respectively. To make the output ŷ_{t+τ} satisfy the linear constraint, we can project it onto the hyperplane A x = 0, obtaining ỹ_{t+τ} with the closed-form solution ỹ_{t+τ} = ŷ_{t+τ} − A^T (A A^T)^{−1} A ŷ_{t+τ}. (12) On the other hand, for a non-linear constraint set f(y) = (f_1(y), ..., f_m(y))^T = 0, where each constraint f_i(y) = 0 represents y_i − g̃_i(y_{N_i}) = 0, there is no analytical solution, but we can solve an optimization problem with nonlinear equality constraints, i.e., find the nearest projection point on the surface f(y) = 0 given the reference point ŷ_{t+τ} for τ = 1, ..., M: min_{ỹ_{t+τ}} ‖ỹ_{t+τ} − ŷ_{t+τ}‖_2^2, s.t. f(ỹ_{t+τ}) = 0. (13) A simple approximate method for solving this equality-constrained quadratic program is to conduct iterative projections. Denote J = ∂f/∂x as the Jacobian matrix and assume ŷ_{t+τ} ≈ ỹ_{t+τ}, i.e., close to the surface f(x) = 0. The first-order Taylor expansion of f(x) at ŷ_{t+τ} is f(x) ≈ f(ŷ_{t+τ}) + J^T (x − ŷ_{t+τ}). (14) Setting f(x) to zero with x = ỹ_{t+τ} yields ỹ_{t+τ} = ŷ_{t+τ} − J (J^T J)^{−1} f(ŷ_{t+τ}). (15) We then repeat the above transformation several times (e.g., K = 10 projections in our experiments) until the constraints are well satisfied, as judged by whether F(x) = Σ_{j=1}^{m} |f_j(x)| is small enough. 2.3 FUNCTIONAL RELATION FIELD-ENHANCED SPATIO-TEMPORAL GRAPH NETWORKS In this part, we integrate the proposed functional relation field framework into five representative backbone models, STGCN (Yu et al., 2018), AGCRN (Bai et al., 2020), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022), and SCINet (Liu et al., 2022), to boost their prediction performance, referred to as FRF-STGCN, FRF-AGCRN, FRF-Autoformer, FRF-FEDformer, and FRF-SCINet, respectively. In the first stage, we learn the functional relation network, based on which the most relevant nodes can be identified, and the resulting graph structure can be used by the five backbone networks. In the second stage, we enforce the learned constraints in the training and inference processes, as described in Figure 2. Since different backbone networks have their own specific designs, we need to adapt FRF to these backbones.
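The two ways the learned constraints enter the backbone, Eqs. (10)-(11) and (13)-(15), can be sketched as follows: a penalty added to the forecasting loss during training, and an iterative linearized projection applied to the predictions at inference. The Jacobian used here is the m × N matrix ∂f/∂y, the transpose of the paper's convention, so the update reads Jᵀ(JJᵀ)⁻¹f; the constraint functions and numbers are made up for illustration, and a practical implementation would batch these operations over nodes and time steps.

import torch
from torch.autograd.functional import jacobian

def frf_loss(y_pred, constraint_fns, lam=0.1):
    """Training-time penalty: lam * sum of squared constraint violations (Eqs. 10-11).
    The returned value is added to the supervised forecasting loss to form L_total."""
    violation = torch.stack([f(y_pred) for f in constraint_fns])
    return lam * (violation ** 2).sum()

def project_onto_constraints(y_hat, constraint_fns, n_iters=10):
    """Inference-time constraint-satisfaction transformation, iterated K times (Eq. 15)."""
    y = y_hat.clone()
    for _ in range(n_iters):
        f = lambda v: torch.stack([fn(v) for fn in constraint_fns])   # f: R^N -> R^m
        J = jacobian(f, y)                                            # (m, N)
        residual = f(y)                                               # (m,)
        # Linearized projection step: y <- y - J^T (J J^T)^(-1) f(y)
        y = y - J.T @ torch.linalg.solve(J @ J.T, residual)
    return y

# Hypothetical constraints on a 3-node slice: y0 = y1 + y2 and y2 = 2 * y1.
cons = [lambda y: y[0] - y[1] - y[2], lambda y: y[2] - 2.0 * y[1]]
y_hat = torch.tensor([10.3, 3.1, 6.5])
print(project_onto_constraints(y_hat, cons))   # the violations shrink toward zero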
For the constraint satisfaction of the output, AGCRN and SCINet produce the predictions for all time steps in one batch; therefore, the constraint-satisfaction transformation of Eq. (15) is applied K times to the prediction at each time step. For STGCN, we apply the above transformation sequentially to each future time step, obtain the transformed predictions, and then feed them back to STGCN to produce the predictions at the next time step; we repeat this procedure until the multi-step forecasting task is finished. Algorithm 1: Training and inference of the functional relation field. Input: trained functional relation networks f and hyper-parameters λ and K. Output: constraint-satisfied output ỹ_{t+τ}. Training phase: repeat (1) forward the backbone network on the training set to obtain ŷ_{t+τ}; (2) back-propagate the loss L_total of Eq. (11) (which contains the constraint-satisfaction loss) and run Adam; until the stopping criterion is met. Inference phase: forward the trained backbone network on the test set to obtain ŷ_{t+τ}; then, for k = 1, ..., K, compute ỹ_{t+τ} by Eq. (15) (constraint-satisfaction transformation). 3 EXPERIMENT In this section, we conduct experiments on five datasets, including one synthetic graph dataset, two real-world MiniApp calling flow datasets, and two traffic flow datasets, to demonstrate the effectiveness of FRF in learning the underlying relationships between nodes and boosting the prediction performance of the backbone networks. The code for reproducibility is attached in the Supplementary Materials. The baseline models. We first compare our framework with two traditional forecasting models, Historical Average (HA) and Support Vector Regression (SVR). We also conduct experiments on two classical univariate time series prediction models, a Feed-Forward Neural Network (FNN) and a Fully-Connected LSTM (FC-LSTM (Sutskever et al., 2014)). We select the widely used graph time series models STGCN (Yu et al., 2018) and AGCRN (Bai et al., 2020), the transformer-based univariate time series forecasting models Autoformer (Wu et al., 2021) and FEDformer (Zhou et al., 2022), and another state-of-the-art univariate prediction model, SCINet (Liu et al., 2022), as our backbone networks. We refer the reader to the supplementary materials for the detailed experimental settings. 3.1 DATASETS AND SETTINGS Binary tree dataset. We first generate an artificial graph time series dataset. The graph structure for this dataset is a complete binary tree with 255 nodes. For each leaf node i, the value is a noisy sinusoidal wave across time, x_{i,t} = n_{i,t} A_i sin(2πt/T_i + φ), where n_{i,t} ∼ U(0.95, 1.05). We sort all leaf nodes from left to right in increasing order of their periods. For a non-leaf node p with left and right children l and r, we set the value of node p to the geometric mean of its two children, x_{p,t} = √(x_{l,t} · x_{r,t}). We sample one point every 5 minutes, so there are 288 points per day. We generate the data for 40 days, including 30 days for training (i.e., 30 × 288 = 8640 time points), 5 days for validation, and 5 days for testing. We intentionally design this dataset because it has a true graph structure between the different time series and the constraints between nodes are explicit; it is thus a suitable testbed for comparing backbones with and without FRF. A small generation script is sketched below.
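Assuming the description above, a minimal generator for this dataset could look as follows; the amplitude range, the period grid, and the use of |·| inside the geometric mean (to keep the toy signal real-valued) are our own choices, since the paper does not list them.

import numpy as np

def binary_tree_series(depth=8, days=40, rng=np.random.default_rng(0)):
    """Generate the binary-tree dataset: 2**depth - 1 = 255 nodes, 288 samples per day."""
    n_leaves = 2 ** (depth - 1)
    T = days * 288                                   # one sample every 5 minutes
    t = np.arange(T)

    # Leaves: noisy sinusoids with periods increasing from left to right (assumed grid).
    periods = np.linspace(24, 288, n_leaves)
    amps = rng.uniform(1.0, 5.0, n_leaves)
    leaves = [rng.uniform(0.95, 1.05, T) * amps[i] * np.sin(2 * np.pi * t / periods[i])
              for i in range(n_leaves)]

    # Internal nodes: geometric mean of the two children, built bottom-up.
    levels = [leaves]
    while len(levels[-1]) > 1:
        children = levels[-1]
        parents = [np.sqrt(np.abs(children[2 * i] * children[2 * i + 1]))
                   for i in range(len(children) // 2)]
        levels.append(parents)
    return np.stack([x for level in reversed(levels) for x in level])

X = binary_tree_series()
print(X.shape)   # (255, 11520): 255 nodes, 40 days x 288 points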
In the experiments, for the backbones with FRF, we assume the constraints are unknown and learn them using the method proposed in Section 2.1. MiniApp calling flow datasets 1 and 2. These two datasets are real-world flow data from two popular online payment MiniApps, attached in the Supplementary Materials. For the two MiniApps, there are N = 30 and N = 23 filtered pages, respectively, linking to each other in the calling process, which produces visiting request flow from one page to another and constitutes a graph with N = 30 or N = 23 nodes. We aggregate the flow by averaging the values every 5 minutes for each node, so there are 288 points per day. For the first MiniApp, we collect 21 days of data, including 15 days for training, 3 days for validation, and 3 days for testing. For the second one, 24 days of data are collected, including 18 days for training, 3 days for validation, and 3 days for testing. PEMSD4 and PEMSD8 traffic datasets. These benchmark datasets are popular for multivariate time series prediction. The first describes the traffic speed in the San Francisco Bay Area with 307 sensors on 29 roads (https://paperswithcode.com/dataset/pemsd4); the other consists of 170 detectors on 8 roads in the San Bernardino area (https://paperswithcode.com/dataset/pemsd8). Settings of the constraint network and hyper-parameters. For the architecture of the constraint network, we compare a 4-layer MLP and a self-attention network, and the results show that the latter is more effective. We measure the strength of the constraint relationship with MAPE, where a large MAPE indicates that the time-invariant constraint is weak. Specifically, the constraint-network MAPEs for the BinaryTree, MiniApp1, MiniApp2, PEMSD4, and PEMSD8 datasets are 0.10, 0.008, 0.01, 0.02, and 0.07, respectively. A larger MAPE means a weaker constraint relationship; therefore, the proposed FRF model is applicable to a backbone network only when the MAPE of the constraint network is small. In addition, we only tune the parameters of FRF while keeping the other hyper-parameter settings the same as for the backbone networks. 3.2 RESULTS Overall performance. Table 1 summarizes the performance of all the compared models on the five datasets, including the proposed FRF approach coupled with STGCN, AGCRN, Autoformer, FEDformer, and SCINet, denoted as FRF-STGCN, FRF-AGCRN, FRF-Autoformer, FRF-FEDformer, and FRF-SCINet, respectively. For the binary tree dataset, we predict the future 12 time steps and evaluate the performance in terms of three metrics (MAE, RMSE, MAPE). Since the underlying true constraints are known, we report the experimental results of our models with both true and learned constraints, denoted as “T” and “L”. We observe that deep learning-based models typically outperform the traditional ones, as expected. Furthermore, the proposed functional relation field further improves the performance of the original backbone models. Regardless of the differences between the backbone networks, FRF consistently improves the prediction accuracy, indicating that the FRF framework could potentially be applied to a wide variety of backbones. For the two MiniApp datasets, we omit the MAPE metric since the scale of the data changes dramatically across time, such that MAPE fails to characterize the performance of the different models. Due to the error accumulation problem of multi-step prediction in STGCN, the iterative version of this model pales in comparison with its non-iterative counterpart. As a result, we only report the results of the non-iterative version of STGCN.
Since the underlying true constraint relationship between nodes is not available, we only report FRF with learned constraints. We can easily observe that the proposed FRF consistently boosts the performance of the five backbone networks. Specifically, FRF improves STGCN by 36.3% and 6.9% on the two datasets, and improves AGCRN by 14.6% and 7.0%, respectively. For the traffic datasets PEMSD4 and PEMSD8, one particular reason we choose SCINet as a baseline is that its reported results achieve state-of-the-art prediction performance on this task. We observe that even on such a strong baseline, the FRF framework still improves performance by a margin of 0.6% and 0.3% on the two datasets, respectively. For the other backbones, we again see that FRF further improves the prediction performance, showing the effectiveness of FRF as a model-agnostic framework. Learning the relationship between nodes. We further test whether FRF can discover the underlying true constraints between nodes. First, we investigate whether we can reliably estimate the target node given the values of its constraint nodes. To be exact, we compute x̂_{t,i} = g̃({x_{t,N_i}}) and compare x̂_{t,i} with x_{t,i} in terms of MAPE. On the test data of the synthetic binary tree, the resulting MAPE is 0.399%. Note that the MAPE of AGCRN or STGCN reported in Table 1 is around 4% without considering the constraints. Therefore, the learned constraints can effectively regularize the predictions of the original network backbones and further improve the forecasting performance. On the other hand, we compare the performance of the proposed algorithm when using the true and the estimated constraints, with results shown in Table 1. The performance based on the true and the estimated constraints is almost the same, indicating that the constraints are accurately learned. Additionally, we visualize the learned constraints by connecting each constrained node with its most relevant neighbors as a graph, shown in Figure 4. The structure of the binary tree is well recovered, although some extra edges are involved. Hyperparameter Sensitivity. The FRF-enhanced model introduces three additional kinds of hyperparameters: the validation error threshold err, the loss trade-off coefficient λ, and the number of output transformations K. We therefore conduct hyperparameter sensitivity experiments on the binary tree dataset using the AGCRN backbone, as shown in Fig. 3. The performance improves slightly as err increases because more constraints are discovered, while it degrades for large err because of the noise introduced; the FRF-enhanced model even performs worse than the backbone network when err = 5.0. Consistently, the FRF-enhanced model performs better when λ = 0.1 and worse than the backbone for large λ. For K, a larger K improves the backbone more than a smaller K, because iterating more times solves the non-linear constraint optimization problem more accurately. Ablation Study. We first conduct an ablation study on the constraint graph learned by the constraint network, using STGCN as the backbone network, in Table 3. We can observe that the constraint graph performs better than the explicit graph extracted from prior knowledge on both the traffic and MiniApp datasets.
In addition, for backbone networks without an explicit graph structure, such as AGCRN and SCINet, we investigate the effectiveness of constraint-satisfaction loss minimization and constraint-satisfaction transformation, as shown in Table 4, and find that both components contribute to the forecasting performance. Specifically, for the backbone network AGCRN, which achieves state-of-the-art performance on the binary tree dataset, FRF enhances the backbone by 1.95% through the training-phase component and by 9.0% through the inference-phase component, while the combination of the two components improves the performance by 10.16% in total. 4 CONCLUSION In this paper, we have proposed to enhance multivariate time series forecasting with a new, model-agnostic inductive bias, the functional relation field (FRF). FRF can discover the intrinsic graph structure and improve flow forecasting performance by applying the constraint functional relationship to the output in both the training and testing phases. The constraints learned by FRF can be incorporated into existing backbone networks, consistently improving the prediction performance. Experimental results show that the proposed FRF framework can reliably learn the constraints from the time-series data and restore the graph structure. Moreover, these constraints in turn help improve the prediction accuracy by a notable margin, regardless of the diversity of the network architectures in the different backbone models. We expect that this FRF inductive bias could potentially be employed in other multivariate settings beyond time series scenarios. A PERFORMANCES ON MORE BACKBONES GTS Shang et al. (2021). This discrete graph structure learning model learns a graph structure among multiple time series and forecasts them simultaneously with a GNN. There are two differences between GTS and our proposed FRF. On one hand, GTS performs prediction under the GNN paradigm, which is model-specific, while FRF is model-agnostic and applies the functional relation field to the forecasting loss optimization. On the other hand, existing studies including AGCRN and GTS construct the graph based on time-series similarity, while FRF is the first to exploit the constraint functional relation to enhance multivariate time-series forecasting. We conduct experiments on the Binary tree, MiniApp1, and MiniApp2 datasets using the open-source code (https://github.com/chaoshangcs/GTS.git), with results shown in Table 5, demonstrating that FRF can also improve the forecasting performance of GTS. The code of FRF-GTS and the running log are released in the supplementary material. NRI Kipf et al. (2018). The neural relational inference (NRI) model is an unsupervised model that learns to infer interactions and to forecast with an LSTM. We conduct experiments on the Binary tree, MiniApp1, and MiniApp2 datasets using the open-source code (https://github.com/ethanfetaya/NRI.git). The results for the NRI network in Table 5 show that it lags behind the SOTA backbone AGCRN Bai et al. (2020) by a large margin. B EXPERIMENTAL SETTINGS The error threshold. For the binary tree dataset and the MiniApp calling flow datasets, which have strong constraint relationships, we set err = 0.01 to filter the constraint nodes. For the traffic datasets PEMSD4 and PEMSD8, which have relatively weak constraints, we set err = 0.025 to achieve the best performance. The hyperparameter sensitivity experiments for err on the PEMSD4 and PEMSD8 datasets are shown in Fig. 5. The function relation graph. Note that for the real datasets, the graph structure is not given in advance.
In order to use STGCN, we adopt Gaussian copula graphical models Liu et al. (2009); Yu et al. (2020) to learn the graph structure from the data and take the learned graph as the benchmark graph. For the FRF-enhanced backbone network STGCN Yu et al. (2018), we replace the fixed graph structure with the learned constraint graph and achieve better performance. As the results in Table 3 show, the constraint graph performs better than the graph learned with the copula graphical model. Besides, for the univariate backbones SCINet, Autoformer, and FEDformer, which take no inter-series relationships into consideration, as well as the graph model AGCRN, which optimizes dynamically learned node embeddings and ignores the original graph, we do not exploit the constraint relation at the graph construction stage; the functional relation is only applied through the training loss and the output constraints. The setting of J. For the binary tree dataset, we set J = 4 to recover the functional relation shown in Fig. 4. We set J = 6 for the two MiniApp flow calling datasets. For the traffic datasets PEMSD4 with 307 nodes and PEMSD8 with 170 nodes, we achieve the best performance with J = 30. The detailed settings of λ and K. In the training stage, we only tune the trade-off coefficient λ and the number of iterations K, while keeping all other parameters the same as the SOTA settings in the benchmark. The detailed settings are shown in Table 6. C VISUALIZATION OF LEARNED FUNCTION RELATION The flow visualization of different relations. We show the comparison of the learned functional relation and the original relation on the MiniApp1 dataset in Table 6. Note that the original relation of the MiniApp is learned by Gaussian copula graphical models Liu et al. (2009); Yu et al. (2020). We can observe that the flow of the target node has the same pattern and scale as its relevant node under the learned functional relation, while it has a different scale under the original graph. The results demonstrate that the learned function is more effective at capturing the flow relationship. D DISCUSSION ON HYPERPARAMETERS AND COMPUTATIONAL COMPLEXITY Hyper-parameters. There are three newly introduced hyper-parameters: the error threshold err, the trade-off coefficient λ, and the number of iterations K. The err and λ can easily be chosen based on the validation loss. A larger K yields a more accurate optimization and better performance, so there is a balance between performance gain and computation. We typically set K = 10, which works well for all the tasks we have considered. Computational complexity. On one hand, the computational complexity of forecasting network training increases because of the K iterations of output constraint satisfaction. K is usually set to a small number such as 5 or 10, which is computationally cheap, and the main time-consuming operations come from the forward and backward propagation of the backbones rather than from the output constraint. On the other hand, we need to train a constraint network for every time series. Fortunately, the constraint network is a simple two-layer attention network, which has only a small number of parameters yet is effective enough to capture the complex functional relation. For example, in the MiniApp1 task, each constraint network has only around 3,000 parameters and its training time is on the scale of seconds. Thus, training a constraint network is very fast and does not require much computational resource. The small size of the constraint networks makes the approach amenable to large-scale multivariate time series.
1. What is the focus and contribution of the paper regarding spatial-temporal forecasting? 2. What are the strengths of the proposed approach, particularly in terms of motivation and novelty? 3. What are the weaknesses of the paper, especially regarding the writing, explanations, and hyperparameters? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a method to automatically learn constraints over nodes in a graph and applies the constraints to spatial-temporal forecasting. The major contribution is the proposed constraint learning framework. Strengths And Weaknesses Strength: The motivation for exploring relationships between nodes in terms of constraints is insightful. Weaknesses: The writing is hard to follow. Some parts are not clear: (1) What is the physical meaning of the linear constraints (equ. 2)? (2) In equ. 3, why set f_i(x_t)=0? (3) In section 2.2, is the target-dependent relationship bidirectional? (4) It seems the proposed model needs to go over all the nodes to check targets and dependents; what is the complexity of the method? \epsilon_{grad} is a hyperparameter, but the experiments do not evaluate the effects of \epsilon_{grad}. There seem to be too many hyperparameters controlling the constraint learning. This may require much more effort for hyperparameter tuning, which may jeopardize the applicability of the method. Typo: the second paragraph on page 2, "...while our framework on the left panel models the..." — it should be "the right panel"? Clarity, Quality, Novelty And Reproducibility The motivation is strong and the proposed method is novel, but the writing is hard to follow and some concepts are not well-explained.
ICLR
Title Bidirectional global to local attention for deep metric learning. Abstract Deep metric learning (DML) provides rich measures of content-based visual similarity, which have become an essential component for many downstream tasks in computer vision and beyond. This paper questions a central paradigm of DML, the process of embedding individual images before comparing their embedding vectors. The embedding drastically reduces image information, removing all spatial information and pooling local image characteristics into a holistic representation. But how can we determine for an individual image the characteristics that would render it similar to a particular other image without having seen the other one? Rather than aiming for the least common denominator and requiring a common embedding space for all training images, our approach identifies for each pair of input images the locations and features that should be considered to compare them. We follow a cross-attention approach to determine these meaningful local features in one image by measuring their correspondences to the other image. Overall image similarity is then a non-linear aggregation of these meaningful local comparisons. The experimental evaluation on standard DML benchmarks shows this approach to significantly improve over the state of the art. 1 INTRODUCTION Similarity learning is important for many different tasks in computer vision: classification, detection, face recognition, zero-shot and few-shot learning. Usually, similarity learning is trained on one set of examples of similar and dissimilar pairs and later applied to a different set of pairs. Thus, a certain amount of generalization is required when training a model to find similarities between objects. The main goal of the conventional approach to deep metric learning is to train an encoder function E and an embedding function ϕ such that the composition ϕ ◦ E yields a representation that fully describes the input image. This representation is later used to measure similarities to other images and to retrieve nearest neighbours, i.e., the most similar objects with respect to the notion of similarity. The conventional approach thus focuses largely on the problem of finding an image representation; the comparison to another image is performed by feeding the individual image representations to the loss function. What is important here is that the representation of an image is fixed and does not change regardless of which image it is compared with. Hence this approach is unnatural for the problem of similarity estimation: given a query image, the most decisive parts for similarity estimation may change depending on the image we compare it to. Let us illustrate this idea with the following example. When working with the SOP dataset, we noticed that images of the same bike vary a lot in viewpoint. One image can focus on the saddle, another on the gears and the wheel, see Fig. 1. So it can be hard to determine whether these images show the same bike if one only looks at the bike-specific details. However, it might be useful to notice a unique joint pattern, for example the green carpet on the floor or the frame color, and to amplify those details when performing similarity estimation: a unique visual feature that can be amplified and focused on only if we observe the two images jointly. But how do we learn this joint similarity? We need to design a mechanism that will somehow blend together the two images we want to compare.
Furthermore, we need a mechanism to blend information, and we also must decide at which level to fuse the images. Taking the input pixel representation may be too coarse, but if we take the final representation yielded by ϕ ◦ E, we may already lose too much information at this point. This happens mostly because of the output of the encoder E, usually the convolutional part of a ResNet-50 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009). For an image of size 224 × 224 px we get a tensor of size 7 × 7 × 2048 as the output of E. The projection ϕ includes a pooling operation of some kind and an embedding projection onto the unit sphere of dimensionality 512, so we have a compression rate of ≥ 200. Moreover, this projection also removes all the spatial information. We see that the image first undergoes a severe compression operation and only afterwards is compared with another image. This is also problematic because it may disregard relations between different image parts. This leads to the following necessity: we want to fuse the information of a pair of samples together as early as possible, and we want our representation to be as rich as possible. First, the aggregation method is one of the crucial things to redesign. Second, information from both images must be fused at the output of E. There is also a technical side to the problem with the conventional approach: pooling the features into a single representation is a bottleneck for the information flow between the loss and the weights we want to adjust. With recent advancements in computing hardware, the trend towards increasing image resolution for deep learning becomes apparent. Regarding our problem, the higher the input resolution, the more lossy the aggregation method described above becomes. For that reason, it becomes necessary to find an alternative to the lossy aggregation operation in particular and to the holistic approach based on finding a fixed representation in general. Novel approaches must focus on fusing rich image representations and finding features adjusted to a particular image pair, namely for a particular comparison. Moreover, simple pooling (aggregation) methods like average pooling or max pooling of features result in information blurring, which becomes a bigger problem when scaling image resolutions and prevents effective training with high-resolution input. We discuss further results on this experiment later: we do not need truly high-resolution inputs but use an upsampled version, and we do not need extra parameters. We suggest an alternative to the holistic approach. We design a novel bidirectional global to local attention mechanism that facilitates more direct similarity learning between rich image representations and aggregates all individual similarities better than the conventional approaches. Our attention mechanism can better fuse features together and turn similarity into a truly pair-based concept. Through extensive experiments we show that pair-based similarity learning is superior to image-based similarity learning in terms of retrieval performance. We study the individual elements of the novel bidirectional global to local attention mechanism and provide meaningful insights into the decision-making process of our approach. We also show that our method can be combined with classic DML losses, significantly boosting their performance and making them outperform state-of-the-art approaches that rely on heavy machinery for training.
We also observe that our method scales much better with the input image resolution compared to other methods, thus indicating that we have a better training signal.

2 RELATED WORK
2.1 DEEP METRIC LEARNING.
Deep Metric Learning (DML) (Roth et al., 2020b; Musgrave et al., 2020; Milbich et al., 2021) is one of the leading lines of research on similarity learning and related applications, such as image retrieval and search (Sohn, 2016; Wu et al., 2017; Roth et al., 2019; Jacob et al., 2019) or face recognition (Schroff et al., 2015; Hu et al., 2014; Liu et al., 2017; Deng et al., 2019), and even influenced the advance of self-supervised, contrastive representation learning (He et al., 2020; Chen et al., 2020; Misra & Maaten, 2020). With the goal of optimizing individual image projections into an expressive embedding space such that similarity relations between the images are reflected by a given distance metric, a multitude of different approaches for learning have been proposed. The main problem formulations of DML are surrogate ranking tasks over tuples of images, ranging from simple pairs (Hadsell et al., 2006) and triplets (Wu et al., 2017; Schroff et al., 2015) to higher-order quadruplets (Chen et al., 2017) and more generic n-tuples (Sohn, 2016; Oh Song et al., 2016; Hermans et al., 2017; Wang et al., 2019). These ranking tasks sometimes include geometrical constraints (Wang et al., 2017; Deng et al., 2019). To make learning feasible despite the exponential complexity of tuple combinations, such methods are often combined with tuple sampling strategies following either manually defined (Wu et al., 2017; Schroff et al., 2015; Xuan et al., 2020) or learned heuristics (Ge, 2018; Harwood et al., 2017; Roth et al., 2020a). Often, this issue is also successfully alleviated by class proxies representing entire sets of training images, such as NCA formulations (Goldberger et al., 2005; Movshovitz-Attias et al., 2017; Kim et al., 2020; Teh et al., 2020; Qian et al., 2019) or classification-based approaches (Deng et al., 2019; Zhai & Wu, 2018). Finally, extensions of these basic formulations further improved the out-of-distribution generalization capabilities of the learned embedding spaces, e.g. by leveraging multi-task and ensemble learning (Opitz et al., 2017; 2018; Sanakoyeu et al., 2021; Roth et al., 2019; Milbich et al., 2020; Kim et al., 2018), generating synthetic training samples (Duan et al., 2018; Lin et al., 2018; Zheng et al., 2019; Gu et al., 2021; Ko & Gu, 2020), diverse, complementary feature semantics (Milbich et al., 2020; Milbich et al., 2020), self-distillation (Roth et al., 2021) or sample memory banks (Wang et al., 2020). All the above works follow the predominating paradigm of determining image similarity by comparing mutually independent, holistic image projections in the embedding space. Thereby, they rely on the rationale that features shared by similar images are implicitly similarly encoded in the latent encoding. In our work, we break this paradigm and design a bidirectional global to local attention module that explicitly identifies and links local, shared image features for estimating similarity. Most similar to our work are the works of Seidenschwarz et al. (Seidenschwarz et al., 2021) and Elezi et al. (Elezi et al., 2020), which use self-attention and label propagation, respectively, to exchange messages between standard, holistic image embeddings to incorporate global structure into the embedding space.
Moreover, DIML (Zhao et al., 2021), similarly to our work, proposed an interpretable DML framework operating on local features. However, correspondences are established by solving an expensive optimal transport problem. In contrast, our approach is based on an efficient cross-image attention mechanism, thus allowing us to greatly scale the spatial maps of local features.

2.2 ATTENTION MECHANISMS.
The attention mechanism allows neural networks to explicitly focus on dedicated parts of the model input (Jaderberg et al., 2015), feature representations (Vaswani et al., 2017) and even output (Jaegle et al., 2021a). Introduced as hard attention, Spatial Transformers (Jaderberg et al., 2015) proposed a differentiable input sampler. The powerful formulation of soft (self-)attention was pioneered by transformers (Vaswani et al., 2017), which revolutionized the field of natural language processing and recently also gained influence in the vision domain (Dosovitskiy et al., 2021). Finally, cross attention has been shown to be a flexible concept for relating two arbitrary data representations (Jaegle et al., 2021b;a), e.g. for effectively scaling Vision Transformers (Dosovitskiy et al., 2021) to large input images. In our work, we formulate a bidirectional global to local attention mechanism to find correspondences between images.

2.3 EXPLAINABILITY IN DEEP LEARNING.
Deep Metric Learning methods are typically difficult to interpret due to the holistic nature of the optimized latent embedding spaces. ABE (Kim et al., 2018) uses a self-attention mechanism for learning an ensemble of global learners to implicitly focus on different parts of the input image. However, (i) attention is not performed between images, thus only masked image regions that are captured by a particular learner can be visualized, and (ii) those image regions are only consistent for very few attention channels. In contrast, our approach explicitly establishes local correspondences between images, which are used to determine individual similarities between object parts. These correspondences naturally allow visualizing fine-grained relations between objects that the model considers crucial for similarity assessment. Similarly, DIML (Zhao et al., 2021) aims at finding local object correspondences, which, however, are limited to coarse object parts only, due to computational restrictions limiting the number of independent image regions that can be represented. Widely used visualizations in DML are UMAP (McInnes et al., 2018) or t-SNE (Maaten & Hinton, 2008) projections of the holistic image embeddings. While such visualizations help to show which images are overall similar and dissimilar, they only implicitly provide insights into why a model puts two images next to each other on the embedding manifold.

3 APPROACH
Let us first recap the conventional approach to Deep Metric Learning. The task is: given an input image $I$, find an embedding $e$ that satisfies the label relations to the other samples in the dataset. Usually, the image $I$ is first fed into the encoder network $E$ and then mapped onto the manifold using the embedding function $\phi$. This gives us a representation $e = \phi(E(I))$ in a $d$-dimensional space, on the $(d-1)$-dimensional unit sphere $S^{d-1} := \{x \in \mathbb{R}^d \mid \|x\| = 1\}$. To satisfy the relationships between dataset labels, networks measure the similarity between images $I_1, I_2$ by computing a distance between the embeddings $\phi(E(I_1))$ and $\phi(E(I_2))$. Thus, it is assumed that an image is fully represented by its embedding $\phi(E(I))$.
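For concreteness, the following is a minimal sketch of this conventional pipeline in plain NumPy. It is illustrative only: the average pooling and the single projection matrix W stand in for an arbitrary $\phi$ and are assumptions, not the implementation of any particular method.

import numpy as np

def embed(feature_map, W):
    # feature_map: (h, w, d_backbone) output of E; W: (d_backbone, d_emb) projection inside phi
    pooled = feature_map.mean(axis=(0, 1))      # global pooling removes all spatial information
    e = pooled @ W                              # project to the embedding dimension
    return e / np.linalg.norm(e)                # map onto the unit sphere S^{d_emb - 1}

rng = np.random.default_rng(0)
F1, F2 = rng.normal(size=(7, 7, 2048)), rng.normal(size=(7, 7, 2048))
W = rng.normal(size=(2048, 512))
e1, e2 = embed(F1, W), embed(F2, W)
similarity = float(e1 @ e2)                     # the two images interact only at this final step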
The training signal is computed only after plugging the distances between embeddings, $d(\phi(E(I_1)), \phi(E(I_2)))$, into the loss function used for optimization. As the reader may notice, the images do not interact until the distance between the points is computed, hence all the computations are performed on a per-image basis. Moreover, the training signal passes through the lossy compression inside the embedding function $\phi$. However, images contain plenty of information, and compressing this information by means of a simple pooling method in the function $\phi$ can be detrimental to the performance. To give exact numbers: the most widely used encoder network $E$ is the convolutional part of the ResNet-50 network. For an input image $I_1$ of size 224 × 224 pixels we obtain a spatial tensor $F_1 := E(I_1) \in \mathbb{R}^{h \times w \times d}$, where $h = 7$, $w = 7$, $d = 2048$. This representation has much more space to store useful information compared to the final embedding $e_1 := \phi(E(I_1)) \in \mathbb{R}^{d}$, where $d$ is usually 128 or 512. This results in a compression rate of $\approx 200$ between $F_1$ and $e_1$. These are the two flaws of the representation-seeking approach when applied to the problem of similarity learning: no interaction between images when computing their embeddings, and a lossy aggregation procedure. Additionally, a holistic approach cannot explain which parts of an image are important for similarity and which are not. Thus, we need a mechanism to directly compare $F_1$ with $F_2 := E(I_2)$, not $e_1$ with $e_2 = \phi(E(I_2))$. Since $F_1, F_2 \in \mathbb{R}^{h \times w \times d}$ are of extremely high dimensionality, we cannot just flatten these representations and feed them into a fully connected layer; this would be computationally ineffective. Instead, we need a mechanism that can effectively estimate which parts across a pair of images to compare and how to weight those similarities. If we do not know what to compare, we may throw away information we need before even having a chance to find out that this information was useful. The well-established way to estimate which parts of an input must be related and processed jointly is the attention mechanism introduced by (Vaswani et al., 2017). However, if we compute attention between $F_1$ and $F_2$, the result is a matrix of size $hw \times hw$ which indicates the correlation between different sites of those images. This set of correlations can be dominated by correlations between irrelevant parts of an image. For example, for a bird classification task we can have the highest correlations between blue sky segments in both images, though this information is useless for the task of discriminating birds. For that reason we must know what to relate: which part is it and how meaningful is it? Additionally, we want to learn how similar two different parts are. For that reason we split the representation $F = E(I)$ into parts embeddings $F^P := \pi^P(F) \in \mathbb{R}^{h \times w \times d}$ and similarity embeddings $F^S := \pi^S(F) \in \mathbb{R}^{h \times w \times d}$. $\pi^P$ and $\pi^S$ are defined in Sec. 4.1. Hence, we need to compare images not only on the level of individual parts but also with an image as a whole. To have an additional global representation of an image, we max-pool the parts representation $F^P$ across the $h \times w$ dimensions and obtain $g := \pi^G(F^P)$. A detailed description of $\pi^G$ is provided in Sec. 4.1. That means we want to compare $g_1$ with all parts from $F^P_2$ and $g_2$ with all parts from $F^P_1$. This is more efficient than exhaustively comparing individual tokens from $F^P_1$ and $F^P_2$. For example, sky patches are present in both images and have high correlations, but they are not important for discrimination.
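As a minimal sketch, assuming plain NumPy and illustrative weight matrices Wp, Ws, Wg (the exact design of $\pi^P$, $\pi^S$, $\pi^G$ follows in Sec. 4.1), the splitting described above could look as follows.

import numpy as np

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def split_representation(F, Wp, Ws, Wg):
    # F: (h, w, d) backbone output, flattened to an (hw, d) token matrix
    F = F.reshape(-1, F.shape[-1])
    Fp = layer_norm(F) @ Wp                    # parts embeddings F^P, (hw, d)
    Fs = layer_norm(F) @ Ws                    # similarity embeddings F^S, (hw, d)
    g = Fp.max(axis=0) @ Wg                    # global token: max-pool over the hw positions, then project
    return Fp, Fs, g / np.linalg.norm(g)       # L2-normalized global representation g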
For the sake of simplicity, from now on we assume that all $F^P$, $F^S$ are reshaped to the shape $hw \times d$. This should remove any ambiguity in the matrix calculus below. Moreover, we want our method to focus on those details of image $I_2$ which are important for image $I_1$. Therefore, we want to relate $g_1$ with $F_2$ to enable the amplification of tokens of $F_2$ which are highly correlated with $g_1$, even though they might be unnoticeable in $F_2$ on its own. We find the importance of the parts of image $I_2$ to image $I_1$ as a whole by computing the attention between the local parts of $I_2$ and the global representation $g_1$ of $I_1$: $\mathrm{softmax}(g_1 {F^P_2}^\top / \sqrt{d}) \in \mathbb{R}^{1 \times hw}$. And vice versa, we compute the attention $\mathrm{softmax}(F^P_1 g_2^\top / \sqrt{d}) \in \mathbb{R}^{hw \times 1}$ between the parts of $I_1$ and image $I_2$ as a whole. The expressions above tell us which parts must be related. Now we need to estimate the similarity between individual local parts. This can be formulated as $S := F^S_1 (F^S_2)^\top$. Now we have similarities between individual parts and the importance of individual parts. Next we combine those two concepts together:

$$s(F_1, F_2) := \mathrm{softmax}\left(\frac{g_1 {F^P_2}^\top}{\sqrt{d}}\right) \left(F^S_2 {F^S_1}^\top\right) \mathrm{softmax}\left(\frac{F^P_1 g_2^\top}{\sqrt{d}}\right). \qquad (1)$$

We call this computation block, consisting of $\pi^P$, $\pi^S$, $\pi^G$, a bidirectional global to local attention for similarity estimation. The reader may note a connection of the equation above to the renowned attention mechanism widely used for establishing correlations between objects of different nature. Given queries $Q$, keys $K$ and values $V$, one first estimates the correlation between queries and keys as $\mathrm{softmax}(QK^\top)$. In our case, attention is applied from both sides, the values $V$ are the individual similarities $S$ between different image parts, and the attention weighting matrix is the global to local attention between the images. Given the similarity scores between all pairs of points, we plug them into any loss function used as a training objective in DML. We use the multi-similarity loss (Wang et al., 2019) to compute the loss for every batch:

$$\mathcal{L} := \frac{1}{b} \left( \sum_{i=1}^{b} \frac{1}{\alpha} \log\left[\sum_{k \in P_i} e^{-\alpha (s(F_i, F_k) - \lambda)}\right] + \frac{1}{\beta} \log\left[\sum_{k \in N_i} e^{\beta (s(F_i, F_k) - \lambda)}\right] \right). \qquad (2)$$

The training algorithm is summarized in Alg. 1.

Algorithm 1 Training
Require: $E$ - pretrained ResNet-50, $X$ - dataset with images and class labels, $b$ - batch size
Initialize $E$
Initialize layers $\pi^S$, $\pi^P$, $\pi^G$ of the similarity cross attention
while not converged do
  Sample $b$ images with labels $(I_i, l_i) \in X$, $i \in \{1, \dots, b\}$
  for all $i \in \{1, \dots, b\}$ do
    Compute backbone output $F_i$
    Compute similarities $F^S_i = \pi^S(F_i)$ and parts $F^P_i = \pi^P(F_i)$
    Compute global representation $g_i = \pi^G(\pi^P(F_i))$
  end for
  for all $i, j \in \{1, \dots, b\}$ with $i \neq j$ do
    Compute local similarities $S_{ij} = F^S_j {F^S_i}^\top$
    Compute global to local attentions $\mathrm{softmax}(g_i {F^P_j}^\top / \sqrt{d})$ and $\mathrm{softmax}(F^P_i g_j^\top / \sqrt{d})$
    Compute the final similarity $s(F_i, F_j)$ using Eq. 1
  end for
  Compute the loss $\mathcal{L}$ specified in Eq. 2
  Backpropagate gradients of $\mathcal{L}$ into the weights $\theta_{\pi^S}$, $\theta_{\pi^P}$, $\theta_{\pi^G}$
end while

4 EXPERIMENTS
4.1 IMPLEMENTATION DETAILS.
Implementation details. We follow the common training protocol (Wu et al., 2017; Roth et al., 2019; Sanakoyeu et al., 2021) for DML and utilize a ResNet-50 (He et al., 2016) encoder $E$ pretrained on the ImageNet dataset. The model is implemented in the Tensorflow2 framework. All the experiments are conducted on a single RTX 8000 or a single RTX 6000 GPU. For training, we use the Adam (Kingma & Ba, 2015) optimizer with a fixed learning rate of $10^{-5}$ and default $\beta_1$, $\beta_2$ parameters, with no learning rate scheduling being applied. A default batch size of 32 is used unless stated otherwise.
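As an illustration, a minimal NumPy sketch of how the pair similarity of Eq. (1) and the loss of Eq. (2) fit together is given below. It is not our Tensorflow2 implementation; the function names and the hyperparameter values shown are placeholders, and the inputs are assumed to be the reshaped $(hw \times d)$ matrices $F^P$, $F^S$ and the global vector $g$ defined above.

import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pair_similarity(Fp1, Fs1, g1, Fp2, Fs2, g2):
    d = Fp1.shape[-1]
    a12 = softmax(g1 @ Fp2.T / np.sqrt(d))   # importance of the parts of image 2 for image 1, (hw,)
    a21 = softmax(Fp1 @ g2 / np.sqrt(d))     # importance of the parts of image 1 for image 2, (hw,)
    S = Fs2 @ Fs1.T                          # (hw, hw) similarities between individual local parts
    return float(a12 @ S @ a21)              # scalar s(F1, F2) of Eq. (1)

def multi_similarity_loss(sim, labels, alpha=2.0, beta=50.0, lam=0.5):
    # sim: (b, b) NumPy array of pairwise similarities s(F_i, F_k); labels: (b,) class labels;
    # alpha, beta, lam are placeholder values, not necessarily the defaults of Wang et al. (2019)
    b = len(labels)
    loss = 0.0
    for i in range(b):
        pos = [k for k in range(b) if k != i and labels[k] == labels[i]]
        neg = [k for k in range(b) if labels[k] != labels[i]]
        if pos:
            loss += np.log(np.sum(np.exp(-alpha * (sim[i, pos] - lam)))) / alpha
        if neg:
            loss += np.log(np.sum(np.exp(beta * (sim[i, neg] - lam)))) / beta
    return loss / b

During training, such pairwise scores are computed for all pairs in the batch and plugged into the loss, as summarized in Alg. 1.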
We choose the popular multi-similarity loss (Wang et al., 2019) as our DML objective function, using the default parameters stated in the original paper. For all experiments, unless stated otherwise, we first resize input images to the size 256 × 256px following standard practice (Musgrave et al., 2020; Roth et al., 2020a) and afterwards artificially upsample them to the size 608 × 608px. At inference time, to further follow the standard protocol, we apply center cropping to the size 224 × 224px after the initial resize to 256 × 256px and then upsample the crop back to our final input size of 608 × 608px. We discuss the rationale of the upsampling and its benefit for our approach in Sec. 4.3.1.

Datasets. We evaluate the performance on three standard DML benchmark datasets using the default train-test splits:
• CARS196 (Krause et al., 2013), which contains 16,185 images from 196 car classes. The first 98 classes containing 8054 images are used for training, while the remaining 98 classes with 8131 images are used for testing.
• CUB200-2011 (Wah et al., 2011) with 11,788 bird images from 200 classes. Training/test sets contain the first/last 100 classes with 5864/5924 images respectively.
• Stanford Online Products (SOP) (Oh Song et al., 2016) provides 120,053 images divided into 22,634 product classes. 11,318 classes with 59,551 images are used for training, while the remaining 11,316 classes with 60,502 images are used for testing.

Architecture design. The design of the mappings $\pi^P$, $\pi^S$, $\pi^G$ is inspired by the design of the transformer encoder of vision transformers (Dosovitskiy et al., 2021). Both $\pi^P$ and $\pi^S$ perform layer normalization of the input, followed by a single fully connected layer. $\pi^G$ performs max pooling across the $h \times w$ positions, followed by another fully connected layer and L2-normalization.

Evaluation procedure. Our method computes the similarity score directly between a pair of images. In order to compute R@k, for every query image we need to compute its similarities to all the other neighbours in the dataset. This results in a quadratic complexity at the evaluation step, since we need to process all pairs of images. To circumvent this nuisance, we compute and store all intermediate embeddings $F$ and the global parts embeddings $g$. The latter are used to compute the nearest 100 neighbours. Only for those approximate nearest neighbours do we compute similarities with our full method. Using these similarities we rerank the approximate neighbours accordingly and compute the final retrieval scores. This gives a reasonable time overhead, especially when compared to the exhaustive pairwise similarity computation for all pairs in the dataset. In practice it results in a 15% increase in evaluation time.

4.2 COMPARISON TO THE STATE OF THE ART METHODS
First of all, we present how our approach stands against other methods. We evaluate performance on three standard datasets, i.e. CUB200 (Wah et al., 2011), CARS196 (Krause et al., 2013) and SOP (Oh Song et al., 2016). We measure the retrieval performance using the widely used Recall@k score (Jegou et al., 2011). Results are summarized in Tab. 1. They indicate that our approach significantly outperforms other approaches and validate the efficiency of our cross-image similarity estimation.
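As a sketch of the evaluation procedure described in Sec. 4.1 (global shortlist followed by reranking), assuming plain NumPy, precomputed per-image tuples features[i] = (F^P_i, F^S_i, g_i), and the pair_similarity sketch from the previous section, the retrieval for one query could look as follows.

import numpy as np

def retrieve(query_idx, G, features, pair_similarity, k_shortlist=100):
    # G: (n, d) matrix of L2-normalized global embeddings g of all n images
    scores = G @ G[query_idx]                        # cosine scores of the query against all images
    scores[query_idx] = -np.inf                      # exclude the query itself
    shortlist = np.argsort(-scores)[:k_shortlist]    # approximate nearest neighbours via g only
    Fp_q, Fs_q, g_q = features[query_idx]
    reranked = sorted(shortlist,
                      key=lambda j: -pair_similarity(Fp_q, Fs_q, g_q, *features[j]))
    return reranked                                  # ranking used to compute Recall@k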
Please note that, for the sake of fairness, all experiments are performed after applying the standard DML image preprocessing: the image is first scaled to the size of 256 × 256px, then we take a central crop of size 224 × 224px, and only afterwards is the image upsampled to the size 608 × 608px. Thus our approach cannot benefit from minuscule details visible only in high-resolution input; see Sec. 4.3.1 for a detailed study on the importance of resolution and fine details. Another popular metric in DML is the NMI (Normalized Mutual Information) score (Manning et al., 2010). We do not report it because our approach yields a single similarity score and essentially eliminates the concept of an embedding, thus making the NMI score inapplicable to our approach.

4.3 COMPONENTS OF THE BIDIRECTIONAL GLOBAL TO LOCAL ATTENTION MODULE
Let us have a closer look at Eq. 1. It consists of two main components: the attention between the holistic parts embedding of the first image and the parts embeddings of the second image, $\mathrm{softmax}(g_1 {F^P_2}^\top / \sqrt{d}) \in \mathbb{R}^{1 \times hw}$, and the matrix of local similarities $S = F^S_2 {F^S_1}^\top$. We can study the effect of each individual component separately. At first we can assume that we do not need any attention between image parts across images. In that case our similarity boils down to the average of the local similarities $S$, namely the final similarity is $\mathbf{1}^\top F^S_2 {F^S_1}^\top \mathbf{1}$, where $\mathbf{1} \in \mathbb{R}^{hw \times 1}$ is a vector of all ones. The R@1 score drops by 8.9pp on the CUB dataset and by 6.5pp on the Cars196 dataset for the image resolution 608 × 608px. We conclude that the parts embeddings $F^P$ are crucial for similarity learning. We can also ablate the effect of the attention between the global embedding $g_1$ and $F^P_2$ and replace it with attention between local parts, namely replace Eq. 1 with

$$\mathrm{softmax}\left(\frac{F^P_1 (F^P_2)^\top}{\sqrt{d}}\right) \odot \left(F^S_2 (F^S_1)^\top\right) \odot \mathrm{softmax}\left(\frac{F^P_2 (F^P_1)^\top}{\sqrt{d}}\right). \qquad (3)$$

This has a smaller effect on the final score, with a 3.5pp and 2.9pp drop in R@1 on the CUB200 and Cars196 datasets, respectively. This indicates that the relation between local and global representations in Eq. 1 helps similarity learning. We can also completely remove the bidirectional global to local attention mechanism, use the baseline projection function $\phi$ for finding the representation, and use cosine similarity for computing the similarity between points. This experiment is provided in Sec. 4.3.2, where we study how our model performs when coupled with different losses.

4.3.1 RESOLUTION EFFECT
We see an increase in performance with the increase of the image size. In Fig. 2 we summarize the effect of the increase in image resolution for different methods on different datasets. The majority of the methods benefit to some extent from the increase in image size. However, our attention mechanism, which replaces the pooling operation, helps to unleash the benefits of high-resolution training.

Fine-grained details importance. As an additional experiment, we verify how much performance is lost due to the intermediate downsampling to the size 256 × 256px by training a variant without this downsampling. When no downsampling is performed we reach a 0.7pp higher R@1 on the CUB200 dataset and only a 0.15pp higher R@1 on Cars196. As we see, our model does not significantly suffer from the missing information of real high-resolution input. Hence, it is not additional fine-grained information that is crucial for performance, but the increased number of “tokens” in the tensors $F^S$ and $F^P$ entailed by larger input image resolutions.
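For reference, the attention-free ablation discussed in Sec. 4.3 (uniform aggregation of the local similarities instead of the weighting of Eq. (1)) corresponds to the following minimal NumPy snippet; it is illustrative only, not our implementation.

import numpy as np

def similarity_no_attention(Fs1, Fs2):
    # Fs1, Fs2: (hw, d) similarity embeddings of the two images
    S = Fs2 @ Fs1.T            # (hw, hw) local part-to-part similarities
    return float(S.sum())      # 1^T S 1 of the ablation above: no global to local weighting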
4.3.2 OTHER LOSSES
We also apply our method using other losses used for similarity learning and observe a consistent improvement when scaling to larger image sizes. Thus, our bidirectional global to local attention mechanism for similarity learning is applicable to other methods as well. Though other methods also increase their recall scores with the increase in resolution, our method helps to boost this effect. This becomes especially prominent when we go for higher resolutions, reaching an image size of 608 × 608px. In Fig. 3 we visualize results for the multi-similarity loss and for the margin loss (Wu et al., 2017) on the Cars196 and CUB200 datasets.

5 CONCLUSIONS
We have presented a novel approach to visual similarity learning by abandoning the common paradigm of holistic image encodings. Rather, we have framed similarity learning as a pair-based task instead of an image-based task, the latter being more suitable for general representation learning. We have designed a novel way to learn and utilize similarities between local regions of the image without any extra labels. Our novel bidirectional global to local attention module splits the task into two parts: what is related, and how similar it is. We have provided visual evidence that similarity learning may alter its focus within the same image depending on the image it is compared to. On the technical side, we address the problem of the high compression rate of the embedding mapping function. We have shown that our bidirectional global to local attention similarity learning scales better with increasing resolution compared to other state-of-the-art approaches and significantly outperforms them in retrieval metrics on all three datasets. Our approach is generic and easy to combine with other losses or even more sophisticated approaches to DML. We have also studied the effect of each individual component of our bidirectional global to local attention block.
1. What is the main contribution of the paper regarding image matching? 2. What are the strengths and weaknesses of the proposed method, particularly in its attention mechanism and feature extraction? 3. How does the reviewer assess the novelty and originality of the paper's ideas compared to prior works in image retrieval and deep learning literature? 4. What are the concerns regarding the paper's experimental setup and comparisons with other works? 5. How would you rate the clarity, quality, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes to help deep metric learning focus on localized parts of the image by using attention. From an image, a global vector and a set of localized features are computed, then cross-attention between the global vector of image I and the local features of image J (and vice versa) provides a weighting scheme that enables focusing on the local features that are relevant, using a bilinear product.

Strengths And Weaknesses
The idea of using local features to match images is very sound. Indeed, it has been the backbone of image retrieval for the past 20 years (See, e.g., Video Google by Sivic et al, which is not mentioned in the paper). Using global to local attention to select relevant parts of a dense feature map is sound.
The paper is very unpolished which makes it difficult to read. See, e.g., the comments left in plain text in page 2 "Talk about other results on this experiments. Mention that we do not need true hi-res, but we use the upsampled version. Also mention that we do no need extra parameters.". In the same direction, the first half of page 5 is very difficult to follow: there is no clear direction of where this goes.
There is a lack of positioning with respect to the literature that is hindering the paper. As already mentioned, using local features to compute image similarities is more than 20 years old now, yet no reference in the paper is more than 5 years old. Essentially, all the non-deep learning literature is ignored which is not a good sign, because maybe the idea already existed before deep learning. Even in the deep learning literature, there are missing references that the paper should compare to. More specifically, 2 papers come to mind: NetVLAD which uses a pooling that allows to keep more local information than the usual average pooling, and R-MAC which was designed to keep localization information.
Tolias, Giorgos, Ronan Sicre, and Hervé Jégou. "Particular object retrieval with integral max-pooling of CNN activations." ICLR 2016.
Arandjelovic, Relja, et al. "NetVLAD: CNN architecture for weakly supervised place recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
The experiments are also not convincing: the authors mention Musgrave et al, 2020, and then proceed to completely ignore the recommendations that were made with respect to DML evaluation and keep the old setup that was shown to be flawed.
Minor: Algorithm 1 is very inefficient. Computing the attention before the similarity leads to O(hw) cost instead of the O((hw)^2) of the proposed approach.

Clarity, Quality, Novelty And Reproducibility
The paper is not very clear. There are sections that were not polished and are difficult to follow. The reproducibility is questionable due to the lack of following established setups.
ICLR
1. What is the main contribution of the paper in the context of deep metric learning? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its assumption about global feature vectors? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions or concerns regarding the paper's experimental design or results?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a method to compare images based on local feature maps, to improve similarity estimation. In contrast to standard deep metric learning methods, it does not compare images based on global image representations. A cross-attention method is employed to compare features from two images in a learnable manner, and a non-linear aggregation of local comparisons is the final similarity. Experiments are conducted on standard deep metric learning datasets.

Strengths And Weaknesses
Strengths:
S1) The paper tackles deep metric learning, which is an important problem in computer vision.
Weaknesses:
W1) The paper is rather naive in assuming that previous work on deep metric learning expects that the whole image contents can be captured in a global feature vector. The community already knows and expects that better performance can be obtained if spatial feature maps are preserved for more detailed comparisons. The paper is written as if a discovery was just made, that if a pre-pooling feature map is used better performance can be obtained. Instead, what the deep metric learning community is actually trying to achieve is precisely to compress as much relevant information as possible into a single global image vector, efficiently. This paper should be compared with methods that do spatial matching, rather than with global-feature DML techniques. Some passages that show this:
Page 2: “Through extensive experiments we show that pairbased similarity learning is being superior to the image-based similarity learning in terms of retrieval performance.”
Page 3: “Thereby, they rely on the rationale that features shared by similar images are implicitly similarly encoded in the latent encoding”
Page 4: “Thus, it is assumed that image is fully represented using its embedding”
Page 4: “ However, images contain plenty of information and compressing this information by means of some simple pooling method in the function ϕ can be detrimental to the performance”
W2) The paper is rather unpolished and written in a very informal way. Examples:
Page 2: remaining draft comments in the paper’s text: “Talk about other results on this experiment. Mention that we do not need true hi-res, but we use the upsampled version. Also mention that we do not need extra parameters.”
Page 4: “To give you exact numbers”
Furthermore, there are lots of small writing issues as described below.
Writing issues:
Page 2: “better better then the conventional approaches.” -> “better than the conventional approaches.”
Page 2: “ our method can scales” -> “ our method can scale”
Page 4: “for very attention channels”
Page 5: “ In our case we attention applied from both sides”
Page 7: “a pair of images images”
Page 7: “we need to porcess”
Page 8: “hi-resolution training”
Page 9: “We see that the our mechanism”

Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is clear in what it proposes.
Quality: Low. The paper is not well written and proposes something assuming that the whole body of work in deep metric learning is naive.
Novelty: No novelty in this work. It is known that using spatial feature maps preserves much more spatial information and can improve image matching. The whole field of local image features is dedicated to spatial image matching, and the paper does not even cite it.
Reproducibility: The method is simple, so there should be no issues with reproducibility.
ICLR
Title: Bidirectional global to local attention for deep metric learning.
Abstract: Deep metric learning (DML) provides rich measures of content-based visual similarity, which have become an essential component for many downstream tasks in computer vision and beyond. This paper questions a central paradigm of DML, the process of embedding individual images before comparing their embedding vectors. The embedding drastically reduces image information, removing all spatial information and pooling local image characteristics into a holistic representation. But how can we determine for an individual image the characteristics that would render it similar to a particular other image without having seen the other one? Rather than aiming for the least common denominator and requiring a common embedding space for all training images, our approach identifies for each pair of input images the locations and features that should be considered to compare them. We follow a cross-attention approach to determine these meaningful local features in one image by measuring their correspondences to the other image. Overall image similarity is then a non-linear aggregation of these meaningful local comparisons. The experimental evaluation on standard DML benchmarks shows this approach to significantly improve over the state of the art.

1 INTRODUCTION
Similarity learning is important for many different tasks in computer vision: classification, detection, face recognition, zero-shot and few-shot learning. Usually similarity learning is trained on one set of examples of similar and dissimilar pairs and later applied to a different set of pairs. In this way, a certain amount of generalization is required when training a model to find similarities between objects. The main goal of the conventional approach to deep metric learning is to train an encoder function E and an embedding function ϕ such that the composition ϕ ◦ E yields a representation that fully describes the input image. This representation is later used to measure similarities to other images and to retrieve nearest neighbours, i.e. the most similar objects with respect to the notion of similarity. The conventional approach thus focuses largely on the problem of finding an image representation; the comparison to another image is performed by feeding the individual image representations into the loss function. What is important here is that the representation of an image is fixed and does not change regardless of which image it is compared with. Hence this approach is ill-suited to similarity estimation: given a query image, the most decisive parts for similarity estimation may change depending on the image we compare it to. Let us illustrate this idea with the following example. While working with the SOP dataset, we noticed that images of the same bike vary a lot in viewpoint. One image can focus on the saddle, another on the gears and the wheel, see Fig. 1. So it can be hard to determine whether these images show the same bike if one only looks at bike-specific details. However, it might be useful to notice a unique joint pattern, for example a green carpet on the floor or the frame color, and to amplify those details when performing similarity estimation: a unique visual feature that can be amplified and focused on only if we observe the two images jointly. But how do we learn this joint similarity? We need to design a mechanism that will somehow blend together the two images we want to compare.
Furthermore, we need a mechanism to blend information, and we also must decide at which level to fuse the images. The input pixel representation can be too coarse, but if we take the final representation yielded by ϕ ◦ E, we may already have lost too much information at this point. This is mostly due to the structure of the encoder E, usually the convolutional part of a ResNet-50 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009), and of the projection ϕ. For an image of size 224 × 224px we get a tensor of size 7 × 7 × 2048 as the output of E. The projection ϕ includes a pooling operation of some kind and an embedding projection onto the unit sphere of dimensionality 512, so we have a compression rate of ≥ 200. Moreover, this projection also removes all the spatial information. The image thus first undergoes a severe compression operation and is only afterwards compared with another image. This is also problematic because it may disregard relations between different image parts. This leads to the following requirements: we want to fuse the information of a pair of samples as early as possible, and we want our representation to be as rich as possible. First, the aggregation method is one of the crucial things to redesign. Second, information from both images must be fused at the output of E. There is also a technical side to the problem with the conventional approach: pooling the features into a single representation is a bottleneck for the information flow between the loss and the weights we want to adjust. With recent advancements in computing hardware, the trend towards increasing image resolution in deep learning has become apparent. For our problem, the higher the input resolution, the more lossy the aggregation method described above becomes. It therefore becomes necessary to find an alternative to the lossy aggregation operation in particular, and to the holistic approach based on a fixed representation in general. Novel approaches must focus on fusing rich image representations and on finding features adjusted to a particular image pair, i.e. to a particular comparison. Moreover, simple pooling (aggregation) methods like average pooling or max pooling of features blur information, which becomes a bigger problem when scaling image resolutions and prevents effective training on high-resolution input. As we discuss in Sec. 4.3.1, our approach does not require true high-resolution input; upsampling standard-resolution images is sufficient. We suggest an alternative to the holistic approach. We design a novel bidirectional global to local attention mechanism that facilitates more direct similarity learning between rich image representations and aggregates all individual similarities better than the conventional approaches. Our attention mechanism can better fuse features together and turn similarity into a truly pair-based concept. Through extensive experiments we show that pair-based similarity learning is superior to image-based similarity learning in terms of retrieval performance. We study the individual elements of the novel bidirectional global to local attention mechanism and provide meaningful insights into the decision-making process of our approach. We also show that our method can be combined with classic DML losses and significantly boosts their performance, making them outperform state-of-the-art approaches that rely on heavy machinery for training.
We also observe that our method can scale much better with the input image resolution compared to other methods, thus indicating that we have a better training signal.

2 RELATED WORK
2.1 DEEP METRIC LEARNING
Deep Metric Learning (DML) (Roth et al., 2020b; Musgrave et al., 2020; Milbich et al., 2021) is one of the leading lines of research on similarity learning and related applications, such as image retrieval and search (Sohn, 2016; Wu et al., 2017; Roth et al., 2019; Jacob et al., 2019) or face recognition (Schroff et al., 2015; Hu et al., 2014; Liu et al., 2017; Deng et al., 2019), and has even influenced the advance of self-supervised, contrastive representation learning (He et al., 2020; Chen et al., 2020; Misra & Maaten, 2020). With the goal of optimizing individual image projections into an expressive embedding space such that similarity relations between the images are reflected by a given distance metric, a multitude of different approaches for learning have been proposed. The main problem formulations of DML are surrogate ranking tasks over tuples of images, ranging from simple pairs (Hadsell et al., 2006) and triplets (Wu et al., 2017; Schroff et al., 2015) to higher-order quadruplets (Chen et al., 2017) and more generic n-tuples (Sohn, 2016; Oh Song et al., 2016; Hermans et al., 2017; Wang et al., 2019). These ranking tasks sometimes include geometrical constraints (Wang et al., 2017; Deng et al., 2019). To make learning feasible despite the exponential complexity of tuple combinations, such methods are often combined with tuple sampling strategies following either manually defined (Wu et al., 2017; Schroff et al., 2015; Xuan et al., 2020) or learned heuristics (Ge, 2018; Harwood et al., 2017; Roth et al., 2020a). Often, this issue is also successfully alleviated by class proxies representing entire sets of training images, such as NCA formulations (Goldberger et al., 2005; Movshovitz-Attias et al., 2017; Kim et al., 2020; Teh et al., 2020; Qian et al., 2019) or classification-based approaches (Deng et al., 2019; Zhai & Wu, 2018). Finally, extensions of these basic formulations further improved the out-of-distribution generalization capabilities of the learned embedding spaces, e.g. by leveraging multi-task and ensemble learning (Opitz et al., 2017; 2018; Sanakoyeu et al., 2021; Roth et al., 2019; Milbich et al., 2020; Kim et al., 2018), generating synthetic training samples (Duan et al., 2018; Lin et al., 2018; Zheng et al., 2019; Gu et al., 2021; Ko & Gu, 2020), diverse, complementary feature semantics (Milbich et al., 2020; Milbich et al., 2020), self-distillation (Roth et al., 2021) or sample memory banks (Wang et al., 2020). All the above works follow the predominating paradigm of determining image similarity by comparing mutually independent, holistic image projections in the embedding space. Thereby, they rely on the rationale that features shared by similar images are implicitly similarly encoded in the latent encoding. In our work, we break this paradigm and design a bidirectional global to local attention module that explicitly identifies and links local, shared image features for estimating similarity. Most similar to our work is the work of Seidenschwarz et al. (Seidenschwarz et al., 2021) and Elezi et al. (Elezi et al., 2020), which use self-attention and label propagation, respectively, to exchange messages between standard, holistic image embeddings in order to incorporate global structure into the embedding space.
Moreover, DIML (Zhao et al., 2021), similarly to our work, proposed an interpretable DML framework operating on local features. However, correspondences are established by solving an expensive optimal transport problem. In contrast, our approach is based on an efficient cross-image attention mechanism, thus allowing us to greatly scale the spatial maps of local features.

2.2 ATTENTION MECHANISMS
The attention mechanism allows neural networks to explicitly focus on dedicated parts of the model input (Jaderberg et al., 2015), feature representations (Vaswani et al., 2017) and even output (Jaegle et al., 2021a). Introduced as hard attention, Spatial Transformers (Jaderberg et al., 2015) proposed a differentiable input sampler. The powerful formulation of soft (self-)attention was pioneered by transformers (Vaswani et al., 2017), which revolutionized the field of natural language processing and have recently also gained influence in the vision domain (Dosovitskiy et al., 2021). Finally, cross attention has been shown to be a flexible concept for relating two arbitrary data representations (Jaegle et al., 2021b;a), e.g. for effectively scaling Vision Transformers (Dosovitskiy et al., 2021) to large input images. In our work, we formulate a bidirectional global to local attention mechanism to find correspondences between images.

2.3 EXPLAINABILITY IN DEEP LEARNING
Deep Metric Learning methods are typically difficult to interpret due to the holistic nature of the optimized latent embedding spaces. ABE (Kim et al., 2018) uses a self-attention mechanism for learning an ensemble of global learners to implicitly focus on different parts of the input image. However, (i) attention is not performed between images, thus only masked image regions that are captured by a particular learner can be visualized, and (ii) those image regions are only consistent for very few attention channels. In contrast, our approach explicitly establishes local correspondences between images, which are used to determine individual similarities between object parts. These correspondences naturally allow visualizing fine-grained relations between objects that the model considers crucial for similarity assessment. Similarly, DIML (Zhao et al., 2021) aims at finding local object correspondences, which, however, are limited to coarse object parts only, due to computational restrictions limiting the number of independent image regions that can be represented. Widely used visualizations in DML are UMAP (McInnes et al., 2018) or tSNE (Maaten & Hinton, 2008) projections of the holistic image embeddings. While such visualizations help to show which images are overall similar and dissimilar, they only implicitly provide insights into why a model puts two images next to each other on the embedding manifold.

3 APPROACH
Let us first recap the conventional approach to Deep Metric Learning. The task is, given an input image I, to find an embedding e that satisfies the label relations to the other samples in the dataset. Usually, the image I is first fed into the encoder network E and then mapped onto the manifold using the embedding function ϕ. This gives us a representation e = ϕ(E(I)) in a d-dimensional space, on the (d−1)-dimensional unit sphere S^{d−1} := {x ∈ R^d | ∥x∥ = 1}. To satisfy the relationships between dataset labels, networks measure the similarity between images I_1, I_2 by computing a distance between the embeddings ϕ(E(I_1)) and ϕ(E(I_2)). Thus, it is assumed that an image is fully represented by its embedding ϕ(E(I)).
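As a reference point for the recap above, the following is a minimal sketch of the conventional holistic pipeline that the paper argues against: an encoder E, a pooling plus projection ϕ, and a fixed per-image embedding compared by cosine similarity. The paper itself reports a TensorFlow2 implementation; this PyTorch-style snippet, its class name, and the 512-d embedding size are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

class HolisticEmbedder(torch.nn.Module):
    """Conventional DML baseline (illustrative): encoder E, then pooling + projection phi."""
    def __init__(self, dim=512):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V1")
        # E: the convolutional trunk; yields a (b, 2048, 7, 7) feature map for 224px input.
        self.encoder = torch.nn.Sequential(*list(backbone.children())[:-2])
        # phi: projection applied after pooling, mapping to the embedding dimension.
        self.proj = torch.nn.Linear(2048, dim)

    def forward(self, images):                        # images: (b, 3, 224, 224)
        feats = self.encoder(images)                  # (b, 2048, 7, 7)
        pooled = feats.mean(dim=(2, 3))               # lossy global average pooling
        return F.normalize(self.proj(pooled), dim=1)  # e = phi(E(I)) on the unit sphere S^{d-1}

# Similarity between two images is then a fixed function of their independent embeddings,
# e.g. cosine similarity: (embedder(img_1) * embedder(img_2)).sum(dim=1).
```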
The training signal is computed only after plugging the distances between embeddings, d(ϕ(E(I_1)), ϕ(E(I_2))), into the loss function used for optimization. As the reader may notice, the images do not interact until the distance between the points is computed, hence all computations are performed on a per-image basis. Moreover, the training signal passes through the lossy process of compression inside the embedding function ϕ. However, images contain plenty of information, and compressing this information by means of some simple pooling method in the function ϕ can be detrimental to the performance. To give exact numbers: the most widely used encoder network E is the convolutional part of the ResNet-50 network. For an input image I_1 of size 224 × 224 pixels we obtain a spatial tensor F_1 := E(I_1) ∈ R^{h×w×d}, where h = 7, w = 7, d = 2048. This representation has much more space to store useful information compared to the final embedding e_1 := ϕ(E(I_1)) ∈ R^d, where d is usually 128 or 512. This results in a compression rate of ≈ 200 between F_1 and e_1. These are the two flaws of the representation-seeking approach when applied to the problem of similarity learning: no interaction between images when computing their embeddings, and a lossy aggregation procedure. Additionally, a holistic approach cannot explain which parts of an image are important for similarity and which are not. Thus, we need a mechanism to directly compare F_1 with F_2 := E(I_2), not e_1 with e_2 = ϕ(E(I_2)). Since F_1, F_2 ∈ R^{h×w×d} are of extremely high dimensionality, we cannot just flatten this representation and feed it into a fully connected layer; this would be computationally inefficient. Instead, we need a mechanism that can effectively estimate which parts across a pair of images to compare and how to weight those similarities. If we do not know what to compare, we may throw away information before even having a chance to find out that this information was useful. The well-established way to estimate which parts of an input must be related and processed jointly is the attention mechanism introduced by (Vaswani et al., 2017). However, if we compute attention between F_1 and F_2, the result is a matrix of size hw × hw which indicates the correlation between different sites of those images. This set of correlations can be dominated by correlations between irrelevant parts of an image. For example, for a bird classification task we can have the highest correlations between blue sky segments in both images, though this information is useless for the task of bird discrimination. For that reason we must know what to relate: which part it is and how meaningful it is. Additionally, we want to learn how similar two different parts are. For that reason we split the representation F = E(I) into parts embeddings F^P := π^P(F) ∈ R^{h×w×d} and similarity embeddings F^S := π^S(F) ∈ R^{h×w×d}; π^P and π^S are defined in Sec. 4.1. Hence, we need to compare images with each other not only on the level of individual parts but also as a whole. To obtain an additional global representation of an image, we max-pool the parts representation F^P across the h × w dimension and obtain g := π^G(F^P). A detailed description of π^G is provided in Sec. 4.1. That means we want to compare g_1 with all parts from F_2^P and g_2 with all parts from F_1^P. This is more efficient than exhaustively comparing individual tokens from F_1^P and F_2^P. For example, sky patches are present in both images and have high correlations, but they are not important for discrimination.
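A minimal sketch of how the three mappings π^P, π^S and π^G could look, following their later description in Sec. 4.1 (layer normalization plus a fully connected layer for π^P and π^S; max pooling, a fully connected layer and L2-normalization for π^G). The class name, layer widths and tensor layout are assumptions where the text is silent.

```python
import torch
import torch.nn.functional as F

class PartsAndSimilarityHeads(torch.nn.Module):
    """Sketch of the mappings pi^P, pi^S and pi^G described in the text (widths are assumptions)."""
    def __init__(self, d=2048):
        super().__init__()
        self.pi_p = torch.nn.Sequential(torch.nn.LayerNorm(d), torch.nn.Linear(d, d))  # parts embeddings F^P
        self.pi_s = torch.nn.Sequential(torch.nn.LayerNorm(d), torch.nn.Linear(d, d))  # similarity embeddings F^S
        self.pi_g = torch.nn.Linear(d, d)                                               # global head, applied after max pooling

    def forward(self, feats):                      # feats: (b, h, w, d) backbone output
        b, h, w, d = feats.shape
        tokens = feats.reshape(b, h * w, d)        # flatten the spatial grid into hw tokens
        f_p = self.pi_p(tokens)                    # (b, hw, d) parts embeddings
        f_s = self.pi_s(tokens)                    # (b, hw, d) similarity embeddings
        g = F.normalize(self.pi_g(f_p.max(dim=1).values), dim=-1)  # (b, d) global representation g = pi^G(F^P)
        return f_p, f_s, g
```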
For the sake of simplicity, from now on we assume that all F^P, F^S are reshaped to the shape hw × d. This removes ambiguity from the matrix calculus below. Moreover, we want our method to focus on those details of image I_2 which are important for image I_1. Therefore, we want to relate g_1 with F_2 to enable the amplification of tokens of F_2 which are highly correlated with g_1, even though they might be unnoticeable in F_2 on its own. We find the importance of the parts of image I_2 to image I_1 as a whole by computing the attention between the local parts of I_2 and the global representation g_1 of I_1, softmax(g_1 F_2^{P⊤} / √d) ∈ R^{1×hw}. Vice versa, we compute the attention softmax(F_1^P g_2^⊤ / √d) ∈ R^{hw×1} between the parts of I_1 and the global representation g_2 of I_2. The expressions above tell us which parts must be related. Now we need to estimate the similarity between individual local parts, which can be formulated as S := F_2^S F_1^{S⊤}. Now we have similarities between individual parts and the importance of individual parts. Next we combine those two concepts together:

s(F_1, F_2) := softmax(g_1 F_2^{P⊤} / √d) (F_2^S F_1^{S⊤}) softmax(F_1^P g_2^⊤ / √d).   (1)

We call this computation block, consisting of π^P, π^S, π^G, a bidirectional global to local attention for similarity estimation. The reader may note a connection of the equation above to the renowned attention mechanism widely used for establishing correlations between objects of different nature. Given queries Q, keys K and values V, one first estimates the correlation between queries and keys, softmax(QK^⊤). In our case, attention is applied from both sides, the values V are the individual similarities S between different image parts, and the attention weighting matrices are the global to local attentions between the images. Given the similarity scores between all pairs of points, we plug them into any loss function used as a training objective in DML. We use the multi-similarity loss (Wang et al., 2019) to compute the loss for every batch:

L := (1/b) Σ_{i=1}^{b} [ (1/α) log ( Σ_{k∈P_i} exp(−α (s(F_i, F_k) − λ)) ) + (1/β) log ( Σ_{k∈N_i} exp(β (s(F_i, F_k) − λ)) ) ].   (2)

The training algorithm is summarized in Alg. 1.

Algorithm 1 Training
Require: E, a pretrained ResNet-50; X, a dataset with images and class labels; b, the batch size
  Initialize E
  Initialize the layers π^S, π^P, π^G of the similarity cross attention
  while not converged do
    Sample b images with labels (I_i, l_i) ∈ X, i ∈ {1, .., b}
    for all i ∈ {1, .., b} do
      Compute the backbone output F_i
      Compute similarity embeddings F_i^S = π^S(F_i) and parts embeddings F_i^P = π^P(F_i)
      Compute the global representation g_i = π^G(π^P(F_i))
    end for
    for all i, j ∈ {1, .., b}, i ≠ j do
      Compute local similarities S_{ij} = F_j^S F_i^{S⊤}
      Compute the global to local attentions softmax(g_i F_j^{P⊤} / √d) and softmax(F_i^P g_j^⊤ / √d)
      Compute the final similarity s(F_i, F_j) using Eq. 1
    end for
    Compute the loss L specified in Eq. 2
    Backpropagate the gradients of L into the weights θ_{π^S}, θ_{π^P}, θ_{π^G}
  end while

4 EXPERIMENTS
4.1 IMPLEMENTATION DETAILS
Implementation details. We follow the common training protocol (Wu et al., 2017; Roth et al., 2019; Sanakoyeu et al., 2021) for DML and utilize a ResNet50 (He et al., 2016) encoder E pretrained on the ImageNet dataset. The model is implemented in the Tensorflow2 framework. All experiments are conducted on a single RTX 8000 or a single RTX 6000 GPU. For training, we use the Adam (Kingma & Ba, 2015) optimizer with a fixed learning rate of 10^−5 and default β1, β2 parameters, with no learning rate scheduling applied. A default batch size of 32 is used unless stated otherwise.
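For concreteness, here is a sketch of the pair similarity of Eq. 1 and the multi-similarity loss of Eq. 2 exactly as written above, for one image pair and one batch respectively. The paper reports a TensorFlow2 implementation; the function names, batching scheme, and hyper-parameter defaults below are illustrative assumptions rather than the authors' code.

```python
import torch

def pair_similarity(f_p1, f_s1, g1, f_p2, f_s2, g2):
    """Bidirectional global-to-local similarity of Eq. 1 for one image pair (sketch).
    f_p*, f_s*: (hw, d) parts / similarity embeddings; g*: (d,) global representations."""
    d = f_p1.size(-1)
    attn_1_to_2 = torch.softmax(g1 @ f_p2.T / d ** 0.5, dim=-1)   # (hw,) weights over parts of image 2
    attn_2_to_1 = torch.softmax(f_p1 @ g2 / d ** 0.5, dim=-1)     # (hw,) weights over parts of image 1
    local_sims = f_s2 @ f_s1.T                                    # (hw, hw) similarities between individual parts
    return attn_1_to_2 @ local_sims @ attn_2_to_1                 # scalar s(F1, F2)

def multi_similarity_loss(sim, labels, alpha=2.0, beta=50.0, lam=1.0):
    """Eq. 2 as written above, for a (b, b) matrix of pairwise similarities (sketch).
    Assumes each sample has at least one positive and one negative in the batch;
    alpha, beta, lam are placeholder values, not necessarily those used in the paper."""
    b = sim.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(b, dtype=torch.bool, device=sim.device)
    loss = sim.new_zeros(())
    for i in range(b):
        pos, neg = same[i] & ~eye[i], ~same[i]
        loss = loss + torch.logsumexp(-alpha * (sim[i][pos] - lam), dim=0) / alpha \
                    + torch.logsumexp(beta * (sim[i][neg] - lam), dim=0) / beta
    return loss / b
```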
We choose the popular multi-similarity loss (Wang et al., 2019) as our DML objective function, using the default parameters stated in the original paper. For all experiments, unless stated otherwise, we first resize input images to 256 × 256px following standard practice (Musgrave et al., 2020; Roth et al., 2020a) and afterwards artificially upsample them to 608 × 608px. At inference time, to further follow the standard protocol, we apply center cropping to 224 × 224px after the initial resize to 256 × 256px and then upsample the crop back to our final input size of 608 × 608px. We discuss the rationale of the upsampling and its benefit for our approach in Sec. 4.3.1.
Datasets. We evaluate the performance on three standard DML benchmark datasets using the default train-test splits:
• CARS196 (Krause et al., 2013), which contains 16,185 images from 196 car classes. The first 98 classes containing 8054 images are used for training, while the remaining 98 classes with 8131 images are used for testing.
• CUB200-2011 (Wah et al., 2011) with 11,788 bird images from 200 classes. Training/test sets contain the first/last 100 classes with 5864/5924 images respectively.
• Stanford Online Products (SOP) (Oh Song et al., 2016) provides 120,053 images divided into 22,634 product classes. 11318 classes with 59551 images are used for training, while the remaining 11316 classes with 60502 images are used for testing.
Architecture design. The design of the mappings π^P, π^S, π^G is inspired by the design of the transformer encoder of vision transformers (Dosovitskiy et al., 2021). Both π^P and π^S perform layer normalization of the input, followed by a single fully connected layer. π^G performs max pooling across the hw positions, followed by another fully connected layer and L2-normalization.
Evaluation procedure. Our method computes a similarity score directly between a pair of images. In order to compute R@k for every query image, we need to compute its similarities to all other images in the dataset. This results in quadratic complexity at the evaluation step, since we need to process all pairs of images. To circumvent this nuisance, we compute and store all intermediate embeddings F and the global parts embeddings g. The latter are used to compute the nearest 100 neighbours. Only for those approximate nearest neighbours do we compute similarities with our full method. Using these similarities we rerank the approximate neighbours accordingly and compute the final retrieval scores. This gives a reasonable time overhead, especially when compared to the exhaustive pairwise similarity computation for all pairs in the dataset; in practice it results in a 15% increase in evaluation time.

4.2 COMPARISON TO THE STATE OF THE ART METHODS
First, we present how our approach compares against other methods. We evaluate performance on three standard datasets, i.e. CUB200 (Wah et al., 2011), CARS196 (Krause et al., 2013) and SOP (Oh Song et al., 2016). We measure the retrieval performance using the widely used Recall@k score (Jegou et al., 2011). Results are summarized in Tab. 1. They indicate that our approach significantly outperforms other approaches and validate the effectiveness of our cross-image similarity estimation.
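The two-stage evaluation procedure described above (a shortlist of the nearest 100 neighbours from the global embeddings g, then reranking of that shortlist with the full cross-attention similarity) could be sketched as follows. The similarity_fn argument stands for a pairwise scorer such as the pair_similarity sketch above; all names and the in-memory storage of local embeddings are assumptions.

```python
import torch

def retrieve(query_idx, g_all, f_p_all, f_s_all, similarity_fn, k=100):
    """Two-stage retrieval sketch: cheap global shortlist, then reranking with the full similarity.
    g_all: (N, d) L2-normalized global embeddings; f_p_all, f_s_all: (N, hw, d) stored local embeddings."""
    # Stage 1: approximate neighbours from the global embeddings g.
    coarse = g_all @ g_all[query_idx]              # (N,) inner-product scores
    coarse[query_idx] = float("-inf")              # exclude the query itself
    shortlist = coarse.topk(k).indices
    # Stage 2: rerank the shortlist with the bidirectional global-to-local similarity of Eq. 1.
    fine = torch.stack([
        similarity_fn(f_p_all[query_idx], f_s_all[query_idx], g_all[query_idx],
                      f_p_all[j], f_s_all[j], g_all[j])
        for j in shortlist
    ])
    return shortlist[fine.argsort(descending=True)]  # neighbour indices, best first

# Usage (illustrative): ranked = retrieve(0, g_all, f_p_all, f_s_all, pair_similarity)
```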
Please note that, for the sake of fairness, all experiments are performed after applying the standard DML image preprocessing: the image is first scaled to 256 × 256px, then we take a central crop of size 224 × 224px, and only afterwards is the image upsampled to 608 × 608px. Thus our approach cannot benefit from minuscule details visible only in high-resolution input; see Sec. 4.3.1 for a detailed study of the importance of resolution and fine details. Another popular metric in DML is the NMI (Normalized Mutual Information) score (Manning et al., 2010). We do not report it because our approach yields a single similarity score and essentially eliminates the concept of an embedding, making the NMI score inapplicable to our approach.

4.3 COMPONENTS OF THE BIDIRECTIONAL GLOBAL TO LOCAL ATTENTION MODULE
Let us have a closer look at Eq. 1. It consists of two main components: the attention between the holistic parts embedding of the first image and the parts embeddings of the second image, softmax(g_1 F_2^{P⊤} / √d) ∈ R^{1×hw}, and the matrix of local similarities S = F_2^S F_1^{S⊤}. We can study the effect of each individual component separately. First, we can assume that we do not need any attention between image parts across images. In that case our similarity boils down to the average of the local similarities S, namely the final similarity is 1^⊤ F_2^S F_1^{S⊤} 1, where 1 ∈ R^{hw×1} is a vector of all ones. The R@1 score drops by 8.9pp on the CUB dataset and by 6.5pp on the Cars196 dataset for the image resolution 608 × 608px. We conclude that the parts embeddings F^P are crucial for similarity learning. We can also ablate the effect of the similarities between the global embedding g_1 and F_2^P and replace them with attention between local parts, namely replace Eq. 1 with

softmax(F_1^P (F_2^P)^⊤ / √d) ⊙ (F_2^S (F_1^S)^⊤) ⊙ softmax(F_2^P (F_1^P)^⊤ / √d).   (3)

This has a smaller effect on the final score, with drops of 3.5pp and 2.9pp in R@1 on CUB200 and Cars196 respectively. This indicates that the relation between local and global representations in Eq. 1 helps similarity learning. We can also completely remove the bidirectional global to local attention mechanism, use the baseline projection function ϕ for finding the representation, and use cosine similarity for computing the similarity between points. This experiment is provided in Sec. 4.3.2, where we study how our model performs when coupled with different losses.

4.3.1 RESOLUTION EFFECT
We see an increase in performance with the increase of the image size. In Fig. 2 we summarize the effect of increasing the image resolution for different methods on different datasets. The majority of methods benefit to some extent from the increase in image size. However, our attention mechanism, which replaces the pooling operation, helps to unleash the benefits of high-resolution training.
Fine-grained detail importance. As an additional experiment we verify how much performance is lost due to the intermediate downsampling to 256 × 256px. When no downsampling is performed we reach 0.7pp higher R@1 on the CUB200 dataset and only 0.15pp higher R@1 on Cars-196. As we see, our model does not significantly suffer from the missing information of real high-resolution input. Hence, it is not additional fine-grained information that is crucial for performance, but rather the increased number of "tokens" of the tensors F^S and F^P entailed by larger input image resolutions.
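The two ablations discussed above can be written as small variations of the pair similarity: dropping the attention entirely and averaging the local similarities, or replacing the global-to-local attention with the purely local attention of Eq. 3. As before, this is only an illustrative sketch; in particular, the orientation of the transposes and the final aggregation are assumptions where the text leaves them implicit.

```python
import torch

def mean_local_similarity(f_s1, f_s2):
    """Ablation 1: no attention at all; the average of the local similarities S = F_2^S F_1^S^T."""
    return (f_s2 @ f_s1.T).mean()

def local_to_local_similarity(f_p1, f_s1, f_p2, f_s2):
    """Ablation 2 (Eq. 3): weight local similarities by local-to-local instead of
    global-to-local attention, then aggregate to a scalar score."""
    d = f_p1.size(-1)
    attn_1_to_2 = torch.softmax(f_p1 @ f_p2.T / d ** 0.5, dim=-1)  # (hw, hw), rows normalized over parts of image 2
    attn_2_to_1 = torch.softmax(f_p2 @ f_p1.T / d ** 0.5, dim=-1)  # (hw, hw), rows normalized over parts of image 1
    local_sims = f_s1 @ f_s2.T                                      # (hw, hw) part-to-part similarities
    return (attn_1_to_2 * local_sims * attn_2_to_1.T).sum()         # elementwise product as in Eq. 3, then aggregated
```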
4.3.2 OTHER LOSSES
We also apply our method with other losses used for similarity learning and observe a consistent improvement when scaling to larger image sizes. Thus, our bidirectional global to local attention mechanism for similarity learning is applicable to other methods as well. Although other methods also increase their recall scores as the resolution grows, our method boosts this effect. This becomes especially prominent when we go to higher resolutions, reaching an image size of 608 × 608. In Fig. 3 we visualize results for the multi-similarity loss and for the margin loss (Wu et al., 2017) on the Cars-196 and CUB200 datasets.

5 CONCLUSIONS
We have presented a novel approach to visual similarity learning by abandoning the common paradigm of holistic image encodings. Instead, we have framed similarity learning as a pair-based rather than an image-based task, the latter being more suitable for general representation learning. We have designed a novel way to learn and utilize similarities between local regions of images without any extra labels. Our novel bidirectional global to local attention module splits the task into two parts: what is related, and how similar it is. We have provided visual evidence that similarity learning may alter its focus within the same image depending on the image it is compared to. On the technical side, we address the problem of the high compression rate of the embedding mapping function. We have shown that our bidirectional global to local attention similarity learning scales better with increasing resolution compared to other state-of-the-art approaches and significantly outperforms them in retrieval metrics on all three datasets. Our approach is generic and easy to combine with other losses or even more sophisticated approaches to DML. We have also studied the effect of each individual component of our bidirectional global to local attention block.
1. What is the focus and contribution of the paper on similarity learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to highlight attention?
3. What are the weaknesses of the paper, especially regarding its dependence on upsampling and the lack of clarity in certain parts of the introduction?
4. Do you have any concerns or suggestions regarding the visualization of attention in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper / Strengths And Weaknesses / Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper shows that similarity learning may alter its focus within the same image depending on the image we compare it to. It designs a novel cross-attention mechanism to learn this joint similarity. This improves results against several baselines and achieves state of the art.

Strengths And Weaknesses
Strengths:
- Computing similarities between global and local parts as weights to highlight attention is shown to work.
- Many experiments are performed, plus additional analysis supported by plots/tables.
Weaknesses:
1. This paper presents a cross-attention approach to determine meaningful local features, but Figure 2 shows that the performance increase largely depends on upsampling. It would be good to state the relation between cross-attention and upsampling, e.g., why does cross-attention have a bigger effect as the image resolution increases?
2. The attention visualization in Figure 1 is interesting. It would be more convincing if the authors showed more image pairs from various datasets. In addition, what would the attention visualization be if upsampling were not used?
3. The writing about the connection between high resolution and pooling in the Introduction is not clear.
4. Bidirectional vs. unidirectional: it would be good to show some unidirectional analysis or experiments.
Minor comments: proofreading is needed for this paper.
1. Typos: "better better then" should be "better than", "images images" should be "images", "porcess" should be "process", etc.
2. Some sentences are not grammatical, e.g., one sentence with two verbs: "The task is given an input image I find such an embedding e such that it satisfies label relations to the other samples in the dataset".

Clarity, Quality, Novelty And Reproducibility
The logical connection of paragraph 7 in the Introduction is confusing and should be stated more clearly.
ICLR
Title Bidirectional global to local attention for deep metric learning. Abstract Deep metric learning (DML) provides rich measures of content-based visual similarity, which have become an essential component for many downstream tasks in computer vision and beyond. This paper questions a central paradigm of DML, the process of embedding individual images before comparing their embedding vectors. The embedding drastically reduces image information, removing all spatial information and pooling local image characteristics into a holistic representation. But how can we determine for an individual image the characteristics that would render it similar to a particular other image without having seen the other one? Rather than aiming for the least common denominator and requiring a common embedding space for all training images, our approach identifies for each pair of input images the locations and features that should be considered to compare them. We follow a cross-attention approach to determine these meaningful local features in one image by measuring their correspondences to the other image. Overall image similarity is then a non-linear aggregation of these meaningful local comparisons. The experimental evaluation on standard DML benchmarks shows this approach to significantly improve over the state of the art. 1 INTRODUCTION Similarity learning is important for many different tasks in computer vision: classification, detection, face recognition, zero-shot and few-shot learning. Usually similarity learning is trained on one set of examples of similar and dissimilar pairs and later applied to a different set of pairs. In such a way a certain amount of generalization is required when training a model to find similarities between objects. The main goal of the conventional approach to deep metric learning is to train an encoder function E and an embedding function ϕ such that composition ϕ ◦ E yields a representation that can fully describe input image. And this representation is later used to measure similarities to other images and to retrieve nearest neighbours, i.e. most similar objects with respect to the notion of similarity. Moreover, we see that conventional approach focuses a lot on the problem of finding image representation. The comparison to another image is performed via feeding individual image representations to the loss function. What is important here is that the representation of an image is fixed and does not change whatever image it is compared with. Hence this approach is unnatural to the problem of similarities estimation: given a query image - most decisive parts for similarity estimation may change depending on the image we compare it to. Let us illustrate this idea with the following example. When we have been working with the SOP dataset we have noticed that images of the same bike vary a lot in viewpoint. One image can focus on a saddle another one on the gears and the wheel, see Fig.1. So it can be hard to determine whether these images are of the same bike if only look on the bike specific details. However, it might be useful to notice a unique joint pattern, for example a green carpet on the floor frame color to amplify those details when perform similarity estimation. unique visual feature that can be amplified and focused on only if we observe two images jointly. But how do we learn this joint similarity? We need to design a mechanism that will somehow blend two images we want to compare together. 
Furthermore, we need a mechanism to blend information and we also must decide at which level to fuse images. Taking the input pixel representation can be too coarse, but if we take the final representation yielded by the ϕ ◦ E, we may already loose too much information at this point. This happens mostly because the output of the encoder E, usually a pretrained on ImageNet(Deng et al., 2009) convolutional part of the Resnet-50(He et al., 2016). For an image of size 224 × 224px we get a tensor of size 7 × 7 × 2048 as the output of E. The projection ϕ includes a pooling operation of some kind and an embedding projection onto the unit sphere of dimensionality 512. So we have a compression rate of ≥ 200. Moreover, this projection also removes all the spatial information. We see that the image first undergoes a severe compression operation and only afterwards is being compared with another image. This is also bad because it may disregard relations between different image parts. This leads to the following necessity we want to fuse information of a pair of samples as early as possible together and we want our representation to be as rich as possible. First, aggregation methods is on of the crucial thing to redesign. Second, information from both images must be fused at the output of E. There is also a technical side of the problem of conventional approach: pooling of the features into a single representation is a bottleneck for information flow between the loss and the weights we want to adjust. With the recent advancements in computing hardware the trend for increase of image resolution for deep learning becomes apparent. Regarding our problem, the higher the input resolution is, the more lossy becomes the aggregation method described above. For that reason it becomes necessary to find an alternative to the lossy aggregation operation in particular and to the holistic approach based on finding fixed representation in general. Novel approaches must focus on fusing rich image representation and finding features adjusted to a particular image pair, namely for a particular comparison. Moreover simple pooling methods(aggregation method) like average pooling or max pooling of features result in information blurring, which becomes a bigger problem when scaling image resolutions which prevents effective training of high resolution input. Talk about other results on this experiment. Mention that we do not need true hi-res, but we use the upsampled version. Also mention that we do not need extra parameters. We suggest an alternative to the holistic approach. We design a novel bidirectional global to local attention mechanism that facilitates more direct similarity learning between rich image representation and aggregates all individual similarities better better then the conventional approaches. Our attention mechanism can better fuse features together and turn a similarity into a truly pair-based concept. Through extensive experiments we show that pairbased similarity learning is being superior to the image-based similarity learning in terms of retrieval performance. We study individual elements of the novel bidirectional global to local attention mechanism and provide meaningful insights into the decision making process of our approach. We also show that our method can be combined with classic DML losses and can significantly boosts their performance and make them outperform stateof-the-art approaches which are full of heavy machinery used for training them. 
We also observe that our method can scales much better with the input image resolution compared to other methods, thus indicating that we have a better training signal. 2 RELATED WORK 2.1 DEEP METRIC LEARNING. Deep Metric Learning (DML) (Roth et al., 2020b; Musgrave et al., 2020; Milbich et al., 2021) is one of the leading lines of research on similarity learning and related applications, such as image retrieval and search (Sohn, 2016; Wu et al., 2017; Roth et al., 2019; Jacob et al., 2019) or face recognition (Schroff et al., 2015; Hu et al., 2014; Liu et al., 2017; Deng et al., 2019), and even influenced the advance of self-supervised, contrastive representation learning (He et al., 2020; Chen et al., 2020; Misra & Maaten, 2020). With the goal of optimizing individual image projections into an expressive embedding space such that similarity relations between the images are reflected by a given distance metric, a multitude of different approaches for learning have been proposed. The main problem formulation of DML are surrogate ranking tasks over tuples of images, ranging from simple pairs (Hadsell et al., 2006) and triplets (Wu et al., 2017; Schroff et al., 2015) to higherorder quadruplets (Chen et al., 2017) and more generic n-tuples (Sohn, 2016; Oh Song et al., 2016; Hermans et al., 2017; Wang et al., 2019). These ranking tasks sometimes include geometrical constraints (Wang et al., 2017; Deng et al., 2019). To make learning feasible despite the exponential complexity of tuple combinations, such methods are often combined with tuple sampling strategies following either manually defined (Wu et al., 2017; Schroff et al., 2015; Xuan et al., 2020) or learned heuristics (Ge, 2018; Harwood et al., 2017; Roth et al., 2020a). Often, this issue is also successfully alleviated by class proxies representing entire sets of training images such as NCA formulations (Goldberger et al., 2005; Movshovitz-Attias et al., 2017; Kim et al., 2020; Teh et al., 2020; Qian et al., 2019) or classification-based approaches (Deng et al., 2019; Zhai & Wu, 2018). Finally, extensions of these basic formulations further improved the out-of-distribution generalization capabilities of the learned embedding spaces, e.g by leveraging multi-task and ensemble learning (Opitz et al., 2017; 2018; Sanakoyeu et al., 2021; Roth et al., 2019; Milbich et al., 2020; Kim et al., 2018), generating synthetic training samples (Duan et al., 2018; Lin et al., 2018; Zheng et al., 2019; Gu et al., 2021; Ko & Gu, 2020), diverse, complementary feature semantics (Milbich et al., 2020; Milbich et al., 2020), self-distillation (Roth et al., 2021) or sample memory banks (Wang et al., 2020). All the above works follow the predominating paradigm of determining image similarity by comparing mutually independent, holistic image projections in the embedding space. Thereby, they rely on the rationale that features shared by similar images are implicitly similarly encoded in the latent encoding. In our work, we break this paradigm and design a bidirectional global to local attention module that explicitly identifies and links local, shared image features for estimating similarity. Most similar to our work is the work of Seidenschwarz et al. (Seidenschwarz et al., 2021) and Elezi et al. (Elezi et al., 2020), which use self-attention, respectively label-propagation to exchange messages between standard, holistic image embeddings to incorporate global structure into the embedding space. 
Moreover, DIML (Zhao et al., 2021) similarly to our work proposed an interpretable DML framework operating on local features. However, correspondences are established by solving an expensive optimal transport problem. In contrast, our approach is based on an efficient cross-images attention mechanism, thus allowing us to greatly scale the spatial maps of local features. 2.2 ATTENTION MECHANISMS. The attention mechanism allows neural networks to explicitly focus on dedicated parts of the model input (Jaderberg et al., 2015), feature representations (Vaswani et al., 2017) and even output (Jaegle et al., 2021a). Introduced as hard attention, Spatial Transformers (Jaderberg et al., 2015) proposed a differentiable input sampler. The powerful formulation of soft (self-)attention was pioneered by transformers (Vaswani et al., 2017) which revolutionized the field of natural language processing and recently also gain influence in the vision domain (Dosovitskiy et al., 2021). Finally, cross attention has been shown to be a flexible concept for relating two arbitrary data representations (Jaegle et al., 2021b;a), e.g. for effectively scaling Vision Transformers (Dosovitskiy et al., 2021) to large input images. In our work, we formulate a bidirectional global to local attention mechanism to find correspondences between images. 2.3 EXPLAINABILITY IN DEEP LEARNING. Deep Metric Learning methods typically are difficult to interpret due to the holistic nature of the optimized latent embedding spaces. ABE (Kim et al., 2018) uses an self-attention mechanism for learning an ensemble of global learners to implicitly focus on different parts of the input image. However, (i) attention is not performed between images, thus only masked image regions that are captured by a particular learner can be visualized and (ii) those image regions are only consistent for very attention channels. In contrast, our approach explicitly establishes local correspondences between images, which are used to determine individual similarities between object parts. These correspondences naturally allow to visualize fine-grained relations between objects that the model considers crucial for similarity assessment. Similarly, DIML (Zhao et al., 2021) aims at finding local object correspondences, which, however, are limited to coarse object parts only, due to computational restrictions limiting the number of independent image regions to be represented. A widely used visualization in DML are UMAP (McInnes et al., 2018) or tSNE (Maaten & Hinton, 2008) projections of the holistic image embeddings. While such visualizations help to show which images are overall similar and dissimilar, they only implicitly provide insights into why a model puts two images next to each other on the embedding manifold. 3 APPROACH Lets first recap the conventional approach to Deep Metric Learning. The task is given an input image I find such an embedding e such that it satisfies label relations to the other samples in the dataset. Usually, the image I is fed first into the encoder network E and then mapped onto the manifold using embedding function ϕ. This gives us a representation e = ϕ(E(I)) in a d dimensional space on a d− 1 dimensional unit sphere Sd−1 := {x ∈ Rd | ∥x∥ = 1}. To satisfy relationships between dataset labels networks measure similarity between images I1, I2 by computing a distance between embeddings ϕ(E(I1)) and ϕ(E(I2)). Thus, it is assumed that image is fully represented using its embedding ϕ(E(I)). 
The training signal is computed only after plugging distances between embeddings d(ϕ(E(I1)), ϕ(E(I2))) into the loss function used for optimization. As the reader can notice, the images do not interact until the distance between the points is computed, hence all the computations are performed on the per image basis. Moreover, training signal passes though the lossy process of compression inside of an embedding function ϕ. However, images contain plenty of information and compressing this information by means of some simple pooling method in the function ϕ can be detrimental to the performance. To give you exact numbers: the most widely used encoder network E is the convolutional part of the Resnet-50 network. For an input image I1 of size 224 × 224 pixels we obtain a spatial tensor F1 := E(I1) ∈ Rh×w×d, where h = 7, w = 7, d = 2048. This representation has much more space to store useful information compared to the final embedding e1 := ϕ(E(I1)) ∈ Rd, where d is usually 128 or 512. This results in a compression rate of ≈ 200 between F1 and e1. These are two flaws of the representation seeking approach when applied to the problem of similarity learning - no interaction between images when computing their embeddings and lossy aggregation procedure. Additionally, a holistic approach can not explain which parts of an image are important for similarity and which are not. Thus, we need a mechanism to directly compare F1 with F2 := E(I2), not e1 with e2 = ϕ(E(I2)). Since F1, F2 ∈ Rh×w×d are of extremely high dimensionality, we can not just flatten this representation and feed it into the fully connected layer - this would have been computationally ineffective. Instead, we need a mechanism that can effectively estimate which parts across a pair of images to compare and how to weight those similarities. If we do not know what to compare we may throw information we need before even having a chance to find out this information was useful. The well established way to estimate which parts of an input must be related and processed jointly is the attention mechanism introduced by (Vaswani et al., 2017). However, if we compute attention between F1 and F2, the result is a matrix of size hw × hw which indicates correlation between different sites of those images. This set of correlations can be dominated by correlations between irrelevant parts of an image. For example, for birds classification task we can have the highest correlations between blue sky segments in both images, though this information is useless for the task of birds discrimination. For that reason we must know what to relate - what part is that and how meaningful it is? Additionally, we want to learn how similar two different parts are? For that reason we split the representation F = E(I) into parts embeddings FP := πP (F ) ∈ Rh×w×d and similarities embeddings FS := πS(F ) ∈ Rh×w×d. πP , πS are defined in the Sec.4.1. Hence, we need to compare with each other not only on the level of individual parts but with an image as a whole. To have an additional global representation of an image we maxpool the parts representation FP together across dimension h×w and obtain g := πG(FP ). Detailed description of πG is provided in the Sec.4.1. That means we want to compare g1 with all parts from FP2 and g2 with all parts from F P 1 . This is more efficient then comparing exhaustively individual tokens from FP1 and F P 2 . For example, sky patches are present in both images and have high correlations but they are not important for discrimination. 
For the sake of simplicity from now on we assume that all FP , FS are reshaped to the shape hw×d. This should remove ambiguity of the matrix calculus below. Moreover, we want our method to focus on those details of image I2 which are important for image I1. Therefore, we want to relate g1 with F2 to enable amplification of tokens of F2 which are highly correlated with g1. Even though they might have been unnoticeable in F2 on its own. We find the importance of parts of image I2 to image I1 as a whole by computing the attention of between local parts of I2 and global representation g1 of I1 softmax( g1F p 2√ d ) ∈ R1×hw). And vice versa we compute attention softmax( g1F p 2√ d ) ∈ R1×hw) for attention between parts of I2 to image I1 as a whole. Expressions above tell us which parts must be related. Now we need to estimate similarity between individual local parts. This can be formulated as S := FS1 ( FS2 )T . Now we have similarities between individual parts and importance of inidividual parts. Next we combine those two concepts together: s(F1, F2) := softmax ( g1F p 2 ⊤ √ d )( F s2F s 1 ⊤ ) softmax ( F p1 g ⊤ 2√ d ) . (1) We call this computation block consisting of πP , πS , πG a bidirectional global to local attention for similarity estimation. Reader may note a connection of the equation above to the renown attention mechanism widely used for establishing correlations between objects of different nature. Given queries Q, keys K and values V we estimate first correlation between queries and keys softmax(QK⊤). In our case we attention applied from both sides and values V being individual similarities S between different image parts, while attention weighting matrix is the global to local attention between images. Given the similarity scores between all pairs of points we plug them into any loss function used as a training objective in DML. We use the multi-similarity loss (Wang et al., 2019) to compute the loss for every batch: L := 1 b ( b∑ i=1 1 α log [∑ k∈Pi exp−α(s(Fi,Fk)−λ) ] + 1 β log [∑ k∈Ni expβ(s(Fi,Fk)−λ) ]) . (2) The training algorithm is summarized in Alg.1. Algorithm 1 Training Require: E - pretrained ResNet-50, X - dataset with images and class labels, b - batch size Initialize E Initialize layers πS , πP , πG of the similarity cross attention. while not converged do Sample b Images with labels (Ii, li) ∈ X , i ∈ {1, .., b} for ∀i ∈ {1, .., b} do Compute backbone output F̄i Compute similarities FSi = π S(Fi), parts FPi = π P (Fi) Compute global representation gi = πG(πP (Fi)) end for for ∀i, j ∈ {1, .., b} | i ̸= j do Compute local similarities Sij = FSj F S i ⊤ Compute global to local attentions softmax ( giF p j ⊤ √ d ) and softmax ( Fpi g ⊤ j√ d ) Compute final similarity s(Fi, Fj) using Eq.1 end for Compute loss L specified in Eq.2 Backpropagate gradients of L into weights θπS , θπP , θπG . end while 4 EXPERIMENTS 4.1 IMPLEMENTATION DETAILS. Implementation details. We follow the common training protocol (Wu et al., 2017; Roth et al., 2019; Sanakoyeu et al., 2021) for DML and utilize an ResNet50 (He et al., 2016) encoder E pretrained on the ImageNet dataset. The model is implemented in the Tensorflow2 framework. All the experiments are conducted on a single RTX 8000 or a single RTX 6000 GPU. For training, we use the Adam (Kingma & Ba, 2015) optimizer with a fixed learning rate of 10−5 and default β1, β2 parameters with no learning rate scheduling being applied. A default batch size of 32 is used unless stated otherwise. 
We choose the popular multi-similarity loss (Wang et al., 2019) as our DML objective function using default parameters stated in the original paper. For all the experiments unless stated otherwise we first resize input images to the size 256× 256px following standard practice (Musgrave et al., 2020; Roth et al., 2020a) and afterwards artificially upsample them to size 608 × 608px. At inference time, to further follow standard protocol, we apply center cropping to size 224× 224px after the initial resize to 256× 256px and then upsamle it back to the our final input size of 608 × 608px. We discuss the rationale of the upsampling and its benefit for our approach in Sec. 4.3.1. Datasets. We evaluate the performance on three standard DML benchmark datasets using the default train-test splits: • CARS196(Krause et al., 2013), which contains 16,185 images from 196 car classes. The first 98 classes containing 8054 images are used for training, while the remaining 98 classes with 8131 images are used for testing. • CUB200-2011(Wah et al., 2011) with 11,788 bird images from 200 classes. Training/test sets contain the first/last 100 classes with 5864/5924 images respectively. • Stanford Online Products (SOP)(Oh Song et al., 2016) provides 120,053 images divided in 22,634 product classes. 11318 classes with 59551 images are used for training, while the remaining 11316 classes with 60502 images are used for testing. Architecture design. The design of the mappings πP , πS , πG is inspired by the design of the transformer encoder of the vision transformers(Dosovitskiy et al., 2021). Both πP , πS perform layer normalization of the input and follow that by a single fully connected layer. πG performs max pooling across hw channels, followed by another fully connected layer and L2-normalization. Evaluation procedure. Our method computes similarity score directly between a pair of images images. In order to compute R@k for every query image we need to compute its similarities to all the other neighbours in the dataset. This results in a quadratic complexity at evaluation step, since we need to porcess all pairs of images. To circumvent this nuisance we compute and store all intermediate embeddings F and the global parts embeddings g. The latter is used to compute nearest 100 neighbours using these global embeddings. And only for those approximate nearest neighbours we compute similarities with our full method. Using these similarities we rerank approximate neighbours accordingly and compute final retrieval scores. This gives a reasonable time overhead, especially when compared to the exhaustive pairwise similarity computation for all pairs in the dataset. In practice it results in 15% increase in evaluation time. 4.2 COMPARISON TO THE STATE OF THE ART METHODS First of all we present how our approach stands against other methods. We evaluate performance on three standard datasets i.e. CUB200 (Wah et al., 2011), CARS196 (Krause et al., 2013) and SOP (Oh Song et al., 2016). We measure the retrieval performance using the widely used Recall@k score (Jegou et al., 2011). Results are summarized in Tab.1. They indicate that our approach significantly outperforms other approaches and validates efficiency of our cross-image similarity estimation. 
Please note that for the sake of fairness all experiments are performed after applying standard DML image preprocessing - image is first scaled to the size of 256× 256px , then we take a central crop of size 224×224px and only afterwards image is upsampled to the size 608×608px. Thus our approach can not benefit from minuscule details visible only in high-resolutional input, see Sec.4.3.1 for the detailed study on the importance of the resolution and fine details. There is another popular metrics in DML is the NMI (Manning et al., 2010) (Normalized Mutual information) score. We do not report it because our approach yields a single similarity score and essentially eliminates a concept of embedding, thus making NMI score inapplicable to our approach. 4.3 COMPONENTS OF THE BIDIRECTIONAL GLOBAL TO LOCAL ATTENTION MODULE Let us have a closer look at Eq.1 closely. It consists of two main components: attention between holistic parts embeddings of the first image and parts embedding of the second image softmax(g1F p 2 )R1×hw and the matrix of local similarities S = F s2F s1 . We can study the effect of each individual component separately. At first we can assume that we do not need any attention between image parts across images. In that case our similarity boils down to the average of the local similarities S, namely final similarity is 1TF s2F s 11, where 1 ∈ Rd×1 is a vector of all ones. The R@1 score drops by 8.9pp on CUB dataset and by 6.5pp on the Cars196 dataset for the image resolution 608×608px. We conclude that the parts embeddings FP are crucial for similarity learning. We can also ablate effect of individual similarities between global embeddings g1 and FP2 and replace it with attention between local parts, namely replace eq.1 with softmax ( F p1 (F p 2 ) ⊤ √ d ) ⊙ ( F s2 (F s 1 ) ⊤)⊙ softmax(F p2 (F p1 )⊤√ d ) . (3) . This has less effect on the final score with 3.5pp and 2.9pp drop in R@1 on CUB200 and Cars196 datasets respectively. This indicates that relation between local and global representation in Eq.1 helps similarity learning. We can completely remove the bidirectional global to local attention mechanism and use baseline projection function ϕ for finding the representation and use cosine similarity for computing the similarity between points. This experiment is provided in Sec.4.3.2. Where we study how does our model performs if coupled with different losses. 4.3.1 RESOLUTION EFFECT We see an increase in performance with the increase of the image size. In Fig.2 we summarize effect of the increase in image resolution for different methods on different datasets. Majority of the methods benefit to some extend from the increase in image size. However, our attention mechanism that replaces pooling operation helps to unleash the benefits of hi-resolution training. Fine-grained details importance. As an additional experiment we verify how much performance is lost due to the intermediate downsampling (no downsampling) to the size 256×256px. When no downsampling is performed we can reach 0.7pp higher on R@1 on the CUB200 dataset and only 0.15pp R@1 on the Cars1-196. As we see, our model does not significantly suffer from the missing information of real high-resolution input. Hence, not additional, fine-grained information is crucial for performance, but the increased number of “tokens” entailed by larger input image resolutions of tensors FS and FP . 
4.3.2 OTHER LOSSES We also apply our method with other losses used for similarity learning and observe consistent improvement when scaling to larger image sizes. Thus, our bidirectional global to local attention mechanism for similarity learning is applicable to other methods as well. While other methods also increase their recall scores with increasing resolution, our method amplifies this effect. This becomes especially prominent when we move to higher resolutions, reaching an image size of 608×608px. In Fig.3 we visualize results for the multi-similarity loss and for the margin loss (Wu et al., 2017) on the Cars-196 and CUB200 datasets. 5 CONCLUSIONS We have presented a novel approach to visual similarity learning by abandoning the common paradigm of holistic image encodings. Rather, we have framed similarity learning as a pair-based task instead of an image-based one better suited to general representation learning. We have designed a novel way to learn and utilize similarities between local regions of an image without any extra labels. Our novel bidirectional global to local attention module splits the task into two parts: what is related, and how similar is it. We have provided visual evidence that similarity learning may alter its focus within the same image depending on the image we compare it to. On the technical side, we address the problem of the high compression rate of the embedding mapping function. We have shown that our bidirectional global to local attention similarity learning scales better with increasing resolution compared to other state-of-the-art approaches and significantly outperforms them in retrieval metrics on all three datasets. Our approach is generic and easy to combine with other losses or even more sophisticated approaches to DML. We have also studied the effect of each individual component of our bidirectional global to local attention block.
1. What is the main contribution of the paper in deep metric learning? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and comparison with other works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or questions about the computation cost and memory consumption of the proposed method? 5. Does the reviewer have any suggestions for improving the presentation and completeness of the literature review?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper claims to replace the embedding features of deep metric learning with a larger local feature map to keep the spatial information. The matching/retrieval is performed using a global-to-local attention pipeline. The performance on CUB, CARS, SOP seems to be improved over the baseline method (MS loss). The computation is not discussed. Strengths And Weaknesses Strength: Using more detailed local features for deep metric learning is interesting. The presentation of results looks good to me. Weakness: The idea of using local features is not new. Actually, using local-feature-based geometric verification as reranking for a retrieval system was a standard process for image retrieval. Recently there are also deep-learning-based reranking methods using local features, for example [A]. Local-feature-based retrieval methods should also be compared since this paper is using a large local feature map. [A] Tan, Fuwen, Jiangbo Yuan, and Vicente Ordonez. "Instance-level image retrieval using reranking transformers." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. It is necessary to provide a detailed comparison and discussion about the inference speed, given that the proposed method should be slower than all the previous methods. The writing of the introduction is kind of misleading for me. Previous deep metric learning methods use a small embedding feature because the similarity computation is a simple matrix multiplication and can be easily scaled to real-world applications with over 1M reference images. Directly using a local feature (7x7x2048) has large memory consumption and cannot scale to a large-scale real-world system. Apparently using more information has the potential to get better results, and this is not a new insight even for deep metric learning (DIML, Zhao et al., 2021). The authors might want to focus more on the key contribution of this work. The comparison seems to be unfair to previous works (Table 1). Previous methods use a resolution of 224 but this paper upsamples the resolution to 608. Why not use the same resolution? Using the large resolution would also increase the computation cost. In Figure 2, the performance of the proposed method is much lower if using 224 as the resolution. The writing could also be improved. The literature review could be more complete. Clarity, Quality, Novelty And Reproducibility Overall, I feel that the quality does not meet the bar of ICLR. Some evaluation details are not very clear. Since the evaluation process is significantly different from previous works, the description should be more detailed to guarantee reproducibility. The idea is interesting but not significant enough, given that using local features is not new for deep metric learning/image retrieval systems.
ICLR
Title Differentially Private Conditional Text Generation For Synthetic Data Production Abstract Companies have faced increasing pressure in recent years to anonymize user-collected data when sharing it internally or with third parties. Text data in particular contains copious amounts of personally identifiable information that has proven to be difficult to de-identify while remaining useful for the party of interest. Previous works have suggested that synthetic text generation could provide a promising avenue to curate highly performant and private datasets. In this paper, we introduce an approach to synthesize high utility text classification datasets by performing conditional generation through a large language model, distilGPT2, while providing measurable guarantees via differential privacy. We show that naive approaches suffer heavily from utility loss by entangling task-relevant factors in the transformer embedding space, making controlled generation more difficult. We analyze how incorporating a secondary learning objective can improve the performance of the generative model, improving the utility of the generated data. 1 INTRODUCTION In recent years, language models have seen dramatic improvements in performance over NLP tasks. In large part, this has been due to the rapid accumulation of user generated text on the internet. Companies have been able to aggregate millions of documents available online as well as their user data to train these large language models. However, lawmakers and their constituents have grown wary of data collection and usage practices, urging more stringent regulations. In 2018, the EU set the General Data Protection Regulation (GDPR) into motion, with the goal to increase transparency about collected information and give users more control over how their data is handled (Voigt & Bussche, 2017). Consequently, companies are now searching for ways to utilize user data without exploiting user privacy. The GDPR begins with the statement: “The protection of natural persons in relation to the processing of personal data is a fundamental right”; it is imperative that we innovate on methods to use data effectively without risking user privacy. In this paper, we study privatization of unstructured text data. Even with safety measures in mind, there has been massive exploitation of user text data. For example, in 2006, as part of their algorithm contest, Netflix released a de-identified dataset of user generated movie reviews. Researchers discovered that surprisingly little information was required to reconstruct the identities of users that contributed to the reviews (Narayanan & Shmatikov, 2006). Further studies have shown how other methods, such as authorship and membership inference attacks (Carlini et al., 2020), can be utilized to reconstruct user identities. All this to say, without proper privacy guarantees and careful data analysis, companies risk exposing user data to exploitation. Dwork (2006) and Abadi et al. (2016) proposed differential privacy (DP) and DP-SGD/DP-Adam, respectively, as methods to provide provable and quantifiable guarantees about privacy. Generally, we say that a randomized algorithm satisfies DP if the output distribution is indistinguishable when run on neighboring datasets. However, the current trade-offs between privacy and utility, particularly in synthetic text generation, make it impractical for companies to create useful data with strong privacy guarantees.
A common approach for anonymization is to de-identify (redact) personally identifiable tokens in text, such as names and addresses. While this may seem like a reasonable approach on paper, with SOTA models reporting accuracies of nearly 97%, the 3% of tokens that are misidentified could be used by an adversary to re-identify users. Consequently, this approach isn’t a strong enough guarantee of privacy. A permissible error from such a model should be lower than 1% (Yogarajan et al., 2020; Al Aziz et al., 2021), something that has not been achieved today for arbitrary datasets. Synthetic data is promising because it avoids the problem of anonymizing an individual’s data by instead producing information about non-existent persons. Other approaches to anonymize unstructured text data have focused on word or sentence level perturbations in order to reduce vulnerability to membership inference and authorship attacks. These approaches often heavily degrade the semantic quality of the text and may struggle to provide overall privacy guarantees in the context of language peculiarities, such as with the leakage of PII. Other approaches seek to generate data synthetically, such as Libbi et al. (2021) and Al Aziz et al. (2021). However, such studies often show a large tradeoff between privacy and utility or make differentially private guarantees with a potentially unreasonable epsilon parameter (e.g. ϵ > 10). In this paper, we present an approach for generating synthetic text data by performing controllable generation through a large language model. We show it is possible to synthesize text classification datasets with rigorous privacy guarantees. We hope this method will enable companies to share data and train high utility models without putting their users’ data at risk. Our contributions are as follows: 1. We present findings on problems that arise when performing conditional finetuning of large language models with DP-Adam. Particularly, we find that it becomes difficult to conditionally prompt the model towards a desired class and generate synthetic data that mimics desired attributes of the original. We propose using a task-relevant loss via a secondary learning objective to solve this issue. 2. We generate synthetic versions of the SST-2 and AG News datasets by performing conditional text generation over a language model. We incorporate a combination of generation techniques: attribute conditioning and a gradient based approach (Dathathri et al., 2019) to further steer generation. We show minimal loss in utility of our synthetic datasets (6.3%) with strong privacy guarantees (ϵ = 3). Code to recreate our results is available here: (redacted for review) 2 BACKGROUND 2.1 LANGUAGE MODELING Given a sequence of tokens $X = x_0, \ldots, x_n$, language models (LMs) are trained to compute the unconditional probability of the sequence $p(X)$. This probability can be rewritten as a product of conditional probabilities by recursively applying the chain rule (Bengio et al., 2003): $$p(X) = \prod_{i=1}^{N} p(x_i \mid x_0, \ldots, x_{i-1}) \quad (1)$$ This allows modeling the language via next-word prediction. We use the transformer architecture (Vaswani et al., 2017) to model the distribution of natural language. Generation of a new sequence $y$ can be performed by sequentially sampling its constituents: $p_\theta(y_0), p_\theta(y_1 \mid y_0), \ldots, p_\theta(y_m \mid y_{<m})$. 2.2 CONDITIONAL TEXT GENERATION Conditional generation of text attempts to steer the output of a LM given a desired condition or control variable. Keskar et al.
(2019) introduced a method to accomplish this goal by training a LM over a dataset such that the desired condition is prepended to the text body: “BOS [condition] SEP text” (BOS and SEP are special tokens that indicate the beginning of the sentence and separate the label from the text body, respectively). On the other hand, plug and play controllable language generation (PPLM) (Dathathri et al., 2019) combines an attribute model (such as a discriminator) with a LM to manipulate its output and perform controllable text generation. Given an attribute $a$ and generated text $x$, let the output of the discriminator model represent $p(a|x)$. In order to control generation, we shift the latent hidden state of the language model at step $i$, $h_i$, by $\Delta h_i$ in the direction of the sum of two gradients: (1) towards a smaller cross entropy loss in the attribute model $p(a|x)$ for the desired attribute $a$ and (2) towards a higher log likelihood of the language modeling head $p(x)$ to preserve generation quality and fluency. In this paper, we use a combination of the two approaches in order to generate high-quality data. We first fine-tune a large language model over the desired dataset with conditional prompting similar to Keskar et al. (2019) and then use the gradient-based approach as described by Dathathri et al. (2019) to steer generation with high likelihood towards the desired attribute. With this process, we can generate labeled data for our synthetic dataset. 2.3 DIFFERENTIAL PRIVACY Differential Privacy (DP) is a formal definition of privacy which offers strong assurances against various re-identification and re-construction attacks (Dwork, 2006; Dwork & Roth, 2013). In recent years, DP has attracted significant attention due to its mathematically sound and provable privacy guarantees. Moreover, it has unique properties such as robustness to auxiliary information and post-processing, composability to enable modular design, and group privacy (Dwork & Roth, 2013; Abadi et al., 2016). Definition 1. (Differential Privacy (Dwork, 2006)) A randomized function $M$ provides $(\epsilon, \delta)$-differential privacy if for all adjacent datasets $X, X' \in \mathcal{X}$ and all $Y \subseteq \mathcal{Y}$, $$\Pr[M(X) \in Y] \le \exp(\epsilon) \cdot \Pr[M(X') \in Y] + \delta \quad (2)$$ This is a standard definition of DP, which implies that the outputs of a DP model/algorithm for neighboring datasets are indistinguishable, bounded by the privacy parameter $\epsilon$. $\epsilon$ is a non-negative number which represents the privacy budget. Smaller $\epsilon$ values more rigorously enforce privacy, but may have the effect of decreasing data utility. DP also allows for tracking privacy loss throughout the execution of a program by computing its leakage parameters. In this paper, we use Renyi Differential Privacy for accounting the privacy budget (Mironov, 2017). Composability and robustness to post-processing are important properties of DP that are necessary for the guarantees in our paper. Composability allows for reasoning about the overall privacy loss from the composition of multiple DP algorithms releasing multiple statistics about a particular dataset. Robustness to post-processing implies that if some mechanism $M$ satisfies $\epsilon$-differential privacy, then for any deterministic or randomized function $F$, so does $F(M)$. This allows us to make $\epsilon$-DP guarantees about the generated text from our $\epsilon$-DP trained language model. Definition 2.
Differentially Private Stochastic Gradient Descent (DP-SGD) modifies the update step during backpropagation by (1) clipping the gradient for each example in the mini-batch to a maximal norm $C$ and (2) adding Gaussian noise with standard deviation proportional to $C$ to the mean of the clipped gradients: $$w^{(t+1)} = w^{(t)} - \eta_t \cdot \frac{1}{B}\Big\{ \sum_{i \in B_t} \mathrm{clip}_C(\nabla L_i(w^{(t)})) + \mathcal{N}(0, \sigma^2 C^2 I) \Big\} \quad (3)$$ where $\mathrm{clip}_C(v) = v \cdot \min\big(1, \frac{C}{\|v\|_2}\big)$. Intuitively, the DP-SGD mechanism preserves privacy by mitigating the impact of out-of-distribution samples on the model, and is used during fine-tuning of our language models. DP-Adam is the differentially private version of the Adam optimizer (Kingma & Ba, 2014), using the same gradient privatization as outlined in DP-SGD. 3 RELATED WORKS Current methods on text privatization fall into three general categories: word/sentence level perturbations, private text embeddings, and synthetically generated text. Here, we discuss each method. Word/Sentence Level Perturbations: Many works have discussed anonymizing text by perturbing word or sentence level embeddings to satisfy $\epsilon$-differential privacy. This set of approaches changes individual words in a document, often following a variant of metric-based DP (Alvim et al., 2018), which has been shown to be a more utilitarian perspective on privacy in the context of NLP. However, as discussed by Mattern et al. (2022), these perturbations struggle to provide overall privacy guarantees in the context of language peculiarities and leakage of other personally identifiable information (PII) that allows for re-identification. They also suffer from utility losses since grammatical and syntactic structure are degraded. Other methods suggested by Weggenmann & Kerschbaum (2018) and Bo et al. (2019) investigate differentially private mechanisms via latent space perturbations and adversarial training, respectively, to reduce the impact of authorship inference attacks. However, these methods, again, do not address the issue of PII leakage and suffer from significant utility losses. Private Text Embeddings: Other methods have investigated releasing private text embeddings instead of the original text content. Recent work such as Lyu et al. (2020) and Xu et al. (2021) proposes randomization mechanisms that can transform text embedding vectors into ones that satisfy metric space differential privacy guarantees. This method has shown promise in providing formal guarantees while also retaining high utility. However, this process does not yield human readable text, which is a desired property for companies performing internal data sharing; thus, we examine our approach independently of this body of work. Synthetic Text: Other methods, particularly in the medical domain, have attempted to address the issue of privacy via synthetic text generation. Synthetic data addresses the problems of de-identification by simply not describing real people, and thus retaining plausible deniability over the data produced. Recent methods like Libbi et al. (2021) and Al Aziz et al. (2021) have proposed text generation approaches; this paper goes further, investigating the impact of a large range of parameter selections in conditional text generation and, most importantly, demonstrating high utility even with strong privacy parameters (e.g. ϵ = 3), something previous works have not done. 4 DATASETS AND PREPROCESSING In this paper, we generate artificial datasets for text classification. We choose this task because it allows us to best compare utility and privacy in one dataset.
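As a concrete illustration of the DP-SGD update in Eq. (3), the following is a minimal sketch of a single step. It assumes the per-example gradients are already available as lists of tensors; practical implementations (e.g. the ghost-clipping mechanism in the private-transformers library used later in this paper) avoid materializing them for memory reasons, so this is only an illustration of the mechanism, not the implementation we use.

import torch

def dp_sgd_step(params, per_example_grads, lr, clip_C, noise_sigma):
    # per_example_grads: list over examples, each a list of tensors matching `params`
    B = len(per_example_grads)
    accum = [torch.zeros_like(p) for p in params]
    for grads in per_example_grads:
        # Clip each example's full gradient to norm C.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_C / (float(total_norm) + 1e-12))
        for acc, g in zip(accum, grads):
            acc.add_(g * scale)
    with torch.no_grad():
        for p, acc in zip(params, accum):
            # Add Gaussian noise scaled by sigma * C, average over the batch, then step.
            acc.add_(torch.randn_like(acc) * noise_sigma * clip_C)
            p.add_(acc, alpha=-lr / B)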
We experiment over two datasets. Each dataset is split 80:20 for train and test. We represent datasets as $D = \{(x_i, y_i)\}_{i=1}^{n}$. 4.1 SST-2 The SST-2 corpus consists of 11,855 movie review samples, each labeled with positive or negative sentiment by human annotators. This dataset was perfectly balanced, with each class having equal representation (Socher et al., 2013). 4.2 AG NEWS The AG News corpus is a topic classification task. This dataset consists of over 120,000 samples, each labeled under a topic from: Sports, World, Business, Sci/Tech. This dataset was perfectly balanced, with each topic having equal representation (Zhang et al., 2015). 5 EXPERIMENTS This paper improves on existing methods for generating high-utility synthetic text data with differential privacy guarantees. Bommasani et al. (2019) argued that for successful private synthetic text data, we must have formal guarantees of privacy and distributional similarity to the original dataset. We achieve this by conditionally finetuning a LM (distilGPT2) over the original text data, the intuition being that we can reconstruct a similar distribution via generation. Since the model is learned privately, the post-processing theorem (Dwork, 2006) allows us to make the same ϵ guarantees about the generated samples. We show that with this approach, we are able to construct private, synthetic data that retains high utility. We hope that this will enable companies to utilize synthetic data, reducing reliance on private user information. All our experiments were run on one NVIDIA V100 GPU instance. 5.1 FINE-TUNING The baseline language model that we use for training is a pretrained distilgpt2 from HuggingFace (Sanh et al., 2019). We use this model over the larger versions to provide faster iteration of training under different configurations. We fine-tune the language model G to the task of synthesizing labeled sentences to obtain the finetuned language model Gtuned. Here, G is specifically fine-tuned to the linguistic domain of Dtrain (that is, the sentences, vocabulary, style, etc.), as well as the particular classes in Dtrain. The language modeling head, a feed forward network attached to the transformer architecture, is used to model the distribution of the next word given an input sequence. During generation, we sample from this head. Generally speaking, we would like to use Gtuned to generate a sentence set of any length with the conditioned attribute a being the class label. We fine-tune G by training it over the data from $D_{train} = \{(x_i, y_i)\}_{i=1}^{n}$. We generate training samples for conditional finetuning by prepending the label to the text body so that we end up with: U = BOS yi SEP xi. We fine-tune this model under different privacy settings, specified by the epsilon parameter. When training with DP, the Adam optimizer is substituted with the DP-Adam optimizer implemented in the private-transformers library (https://github.com/lxuechen/private-transformers), provided by Li et al. (2021). We also use the ghost-clipping mechanism outlined by Li et al. (2021), which introduces a memory efficient method to perform per-example gradient clipping. Renyi differential privacy (Mironov, 2017) was used to account for the privacy budget during training. 5.1.1 BASELINE METHOD 1: CONDITIONAL FINE-TUNING WITH FILTER In our first approach, we (1) perform full fine-tuning of G with the training procedure described above to produce Gtuned. (2) We independently train a discriminator to model p(a|x), the probability of a generated sample, x, belonging to the class a.
In our work, we model the discriminator by fine-tuning a language model for classification over the dataset. (3) We conditionally generate $n_a$ samples for each class a from G and filter out any samples that do not meet a desired threshold score from the discriminator (e.g. only include the sample if p(a|x) > 0.5). Specifically, generation was done by performing nucleus sampling (Holtzman et al., 2019) over the output distribution of Gtuned. The described approach is similar to several methods used in data augmentation (Anaby-Tavor et al., 2019; Bayer et al., 2022; Queiroz Abonizio & Barbon Junior, 2020). This approach worked well for generating artificial datasets for SST-2 and AG News in the non-private setting. We synthesized datasets for each by generating the same number of samples for each class as the original. Generation was done by simply prompting the model with “BOS class SEP”. In the private setting, we replaced the Adam optimizer with DP-Adam and tracked the total privacy budget with the RDP accountant. As we improved the privacy guarantee with smaller epsilon parameters (e.g. ϵ = 8), the quality of conditional generation quickly degraded. While the private LM generated text that appropriately mimicked the linguistic domain of the training data, conditional prompting did not produce consistent results; prompting the model with attribute a would infrequently meet the threshold requirement from p(a|x). We also analyzed samples qualitatively and found the same results. For example, the non-private Gtuned generally produced samples that fit the class it was prompted with (e.g. “BOS positive SEP” might yield “a sensitive and heartwarming story of an aging man...”). However, the same approach with the private Gtuned produced samples that very inconsistently fit the prompted attribute (e.g. “BOS positive SEP” might yield “an emotional slap in the face, and...”). See Appendix B for more examples. Without high confidence in our model being able to generate text conditionally for a desired class, the labels in the synthesized dataset may be meaningless. This would severely degrade the utility of the artificial data. This result suggests that a stronger mechanism than prompting alone is required to steer the model towards high-quality class conditional samples. 5.1.2 BASELINE METHOD 2: CONDITIONAL FINE-TUNING WITH PPLM GENERATION Iterating on Baseline 1, we attempted to use an approach similar to PPLM (Dathathri et al., 2019), a gradient based steering mechanism, to guide the private Gtuned models towards higher quality generation. Similar to Baseline 1, we (1) train Gtuned, then (2) train a discriminator to estimate the attribute model p(a|x) by training a discriminator head over the frozen Gtuned model. The discriminator head is a simple MLP with non-linear activations. Lastly, (3) we perform PPLM-based conditional generation (see Section 5.2) to generate the synthetic labeled text classification dataset. The intuition for this approach is that the gradient based generation mechanism will guide Gtuned into generating samples that align strongly with the desired label. In order to effectively use the discriminator to perform gradient updates on the hidden states of Gtuned, we trained the discriminator over the fine-tuned LM’s frozen embeddings. Again, while this approach worked well in the non-private setting, it became infeasible to train the discriminator at strong epsilon settings.
At ϵ = 3, 8 the discriminator was not strong enough to properly contribute to generation. We hypothesized that this issue indicated that Gtuned was not preserving information about the attribute labels during private fine-tuning, making it difficult for the discriminator to learn separation, and simultaneously making it more difficult for the LM to generate label-aligned samples, as observed in the previous section. We investigated this hypothesis by visualizing the embedding space of Gtuned at different epsilon settings and estimating the mutual information between the transformer embedding space and class labels by training a Random Forest classifier (see Figures 1 and 2). We hypothesize that in order to strongly reconstruct distributional properties from the original dataset, the generative model should produce embeddings that are separable with respect to those task-relevant factors. 5.1.3 OUR METHOD: MULTITASK CONDITIONAL FINE-TUNING WITH PPLM GENERATION In order to address this issue we introduce a secondary learning objective and perform multitask learning during fine-tuning. In Baselines 1 and 2, the transformer is only attached to a linear language modeling head that models the probability distribution of the next word. In our approach, we simultaneously train a discriminator head, as shown in the diagram above. The discriminator head is, like in Baseline 2, a simple MLP head. We now perform two gradient updates at every step – one to update the language modeling head and the other to update the discriminator head. We add the appropriate amount of noise to the gradients to maintain ϵ-DP guarantees and track the privacy budget throughout training with RDP (Mironov, 2017). Since we still want to retain conditional prompting for the model, we want the language model to be able to see the conditional prompt, i.e. “BOS positive SEP text”, which includes the prepended label, so that the model is able to understand prompting. Meanwhile, the discriminator head should learn to model p(a|x) for a label a and generated sample x without seeing the label in the input. So, for the language head, we feed the label-prompted text data and perform a gradient update. Then, for the discriminator head, we replace the label in the input with a random token, the intuition being that the discriminator head will pay less attention to the embeddings at that location and be a more informative guide during generation. We also train this discriminator head to classify text at different prefix lengths. For example, if the prefix step was specified to be 2, we would compute the loss given the transformer output for the second token, fourth token, sixth token, and so on. The loss is linearly weighted such that the first prefix is weighted the least and the last prefix is weighted the most. Lastly, this loss is averaged, and then the gradient update is computed. This loss procedure ensures the discriminator head is robust enough to provide meaningful classifications at different lengths of a sequence, to improve its contribution during gradient based generation.
Figure 1: UMAP projection of SST-2 embeddings from Gtuned with ϵ = 3. Baseline 2 (top); Ours (bottom).
Figure 2: Random Forest classifier test accuracies over SST-2 embeddings from Gtuned. The multitask approach (ours) shows marginal loss in performance at high privacy settings.
DP Guarantee | Baseline 2 | Ours
ϵ = inf | 0.803 | 0.883
ϵ = 256 | 0.792 | 0.873
ϵ = 16 | 0.773 | 0.869
ϵ = 8 | 0.739 | 0.865
ϵ = 3 | 0.693 | 0.866
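The prefix-length classification loss described above can be sketched as follows. The exact weighting scheme is not fully specified in the text, so the linear weights and the use of per-position hidden states are assumptions, and discrim_head stands in for the MLP discriminator head.

import torch
import torch.nn.functional as F

def prefix_weighted_discriminator_loss(hidden_states, discrim_head, label, prefix_step=2):
    # hidden_states: (seq_len, d_model) transformer outputs for one sequence
    # discrim_head:  MLP mapping a d_model vector to class logits
    seq_len = hidden_states.size(0)
    positions = range(prefix_step - 1, seq_len, prefix_step)
    losses, weights = [], []
    for rank, pos in enumerate(positions, start=1):
        logits = discrim_head(hidden_states[pos])      # classify from this prefix length
        losses.append(F.cross_entropy(logits.unsqueeze(0), label.view(1)))
        weights.append(float(rank))                    # later (longer) prefixes weighted more
    weights = torch.tensor(weights)
    return (torch.stack(losses) * (weights / weights.sum())).sum()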
Algorithm 1 DP Multitask Conditional Training
Data: Gpretrained, Dtrain = {(xi, yi)}, i = 1..N; number of iterations T; learning rates η_lm, η_discrim; noise multiplier σ; clipping bound C; initial parameter vectors θ_transf^(0), θ_lm^(0), θ_discrim^(0); batch size B; initial moment estimates m0, v0 ∈ R^p; exponential decay rates β1, β2 ∈ R; and constant γ
for t ∈ [E · N/B] do
    Draw batch bt from D with sampling probability q.
    for (xi, yi) ∈ bt do
        rand ← random token from vocabulary
        s_lm ← “BOS yi SEP xi”, s_discrim ← “BOS rand SEP xi”
        g_lm^(t) ← ∇L(G_{θ_transf^(t), lm}(s_lm), s_lm), g_discrim^(t) ← ∇L(G_{θ_transf^(t), discrim}(s_discrim), yi)
        g_lm^(t) ← g_lm^(t) · min(1, C / ||g_lm^(t)||_2), g_discrim^(t) ← g_discrim^(t) · min(1, C / ||g_discrim^(t)||_2)
    end
    g_lm^(t) ← (1/B) (Σ_{i ∈ bt} g_lm^(t) + N(0, σ²C²I))
    g_discrim^(t) ← (1/B) (Σ_{i ∈ bt} g_discrim^(t) + N(0, σ²C²I))
    θ_transf,lm^(t+1) ← AdamUpdate(θ_transf,lm^(t), mt, vt, g_lm^(t), β1, β2, γ)
    θ_transf,discrim^(t+1) ← AdamUpdate(θ_transf,discrim^(t), mt, vt, g_discrim^(t), β1, β2, γ)
end
Output: Trained model θ_transf^(T), θ_lm^(T), θ_discrim^(T)
Ultimately, we find that by training both the discriminator and language modeling head simultaneously, Gtuned is able to generate conditionally even when trained with strong privacy guarantees. In Figure 1, we show how this approach impacts the embedding space of models trained under rigorous privacy constraints compared to the naive approach, via a UMAP projection. We find that the noise injected via differential privacy does not encourage the model to implicitly learn particular distributional factors about the original dataset, such as separation of class labels, and that an explicit loss mechanism can recover this and improve the quality of generation. 5.2 GENERATION Next, we describe in detail the conditional generation procedure used to synthesize a private version of the dataset. We aim to generate labeled samples of text that reconstruct distributional properties similar to those of the original. In order to guide generation towards a particular class, we apply a PPLM-based (Dathathri et al., 2019) gradient approach. We utilize the discriminator trained in the previous step to perform gradient updates over the hidden states of the model to steer generation towards the desired class. The steps for generation of a single sample are as follows: 1. Prompt the model with BOS class SEP and generate the distribution of the next word via the language modeling head. 2. Compute the hidden embedding states of the generated text. Pass this embedding through the discriminator, which models p(a|x). 3. We now shift the hidden state hi by summing two gradients: (1) the gradient of the cross entropy loss between the discriminator output and the desired class vector, and (2) the gradient towards a higher log likelihood of the language modeling head, which models p(x); the latter is done by minimizing the KL divergence between the modified and unmodified language modeling head distributions. 4. Compute the new LM head distribution from the updated latent space. 5. Sample the next word from the new language modeling head distribution by performing nucleus sampling (Holtzman et al., 2019). 6. Repeat steps 2-5 until the termination token or the specified maximum length is reached. We discuss further implications and limitations of this approach in Section 7. 6 EVALUATION With the described approach, we generate synthetic versions of the SST-2 and AG News datasets. Five variations are generated with different differential privacy settings: ϵ ∈ {256, 16, 8, 3} and a non-private version.
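For illustration, the sketch below shows attribute-conditioned generation with nucleus sampling followed by discriminator filtering, which is closest to Baseline 1; the PPLM hidden-state updates of steps 2-4 above are omitted for brevity. The BOS/SEP strings are handled as plain text here, and a conditionally fine-tuned checkpoint plus a trained discriminator callable are assumed to be available; this is not the exact pipeline used in the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_filtered_samples(model_dir, class_token, discriminator, target_class,
                              n_samples=100, threshold=0.5, top_p=0.9, max_length=64):
    # model_dir points to a conditionally fine-tuned (and DP-trained) checkpoint.
    tok = AutoTokenizer.from_pretrained(model_dir)
    lm = AutoModelForCausalLM.from_pretrained(model_dir)
    inputs = tok(f"BOS {class_token} SEP", return_tensors="pt")
    kept = []
    while len(kept) < n_samples:
        out = lm.generate(**inputs, do_sample=True, top_p=top_p,
                          max_length=max_length, pad_token_id=tok.eos_token_id)
        text = tok.decode(out[0], skip_special_tokens=True)
        body = text.split("SEP", 1)[-1].strip()           # drop the conditional prompt
        if discriminator(body)[target_class] > threshold:  # keep only confidently on-class samples
            kept.append((body, target_class))
    return kept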
The only change between the non-private and private versions is replacing the optimizer from Adam to DP-Adam provided by the private-transformers library (Li et al., 2021). The gradients in the non-private version are still clipped to the maximum gradient norm parameter, C. 6.1 PRIVACY Differentially private training provides formal guarantees about the generated text as a consequence of the post-processing theorem. However, recent works have shown that the impact of epsilon DP on large language model training is still unclear, and we could observe empirical privacy preservation even at high epsilon levels. To test this, we test the artificial dataset for memorization by comparing the proportion of n-grams (for n ∈ [3...7]) in the synthesized data to those present in the original dataset. Our findings are consistent with previous studies on language modeling. Empirically, we see even large epsilon settings dramatically decrease memorization in the synthesized data (Ponomareva et al., 2022). 6.2 UTILITY We measure the utility of the synthetic dataset by training a classifier over the synthesized data and evaluating the performance on the held-out test dataset. We don’t experiment with different classification models since our goal is to strictly evaluate the performance of the synthesized dataset. So, we choose to use a state-of-the-art classifier, DistilBERTForSequenceClassification, from the HuggingFace transformers library. We first train a classifier over the original dataset to produce baseline accuracies to compare the utility of the synthetic data to. Next, for each dataset variant, ϵ ∈ {inf, 256, 16, 8, 3}, we train a classifier. To measure the performance of the model, we compute the accuracy of the model over the held-out test set. These results are shown in Table 1. We do not modify any hyperparameters of the classifier for each dataset. The selected parameters can be seen in Appendix A. 7 DISCUSSION In this paper, we propose a method for generating synthetic text classification datasets with differential privacy guarantees by performing conditional text generation via large language models. We show the difficulties in doing this naively, particularly exploring how strong settings of privacy impact the conditional prompting scheme, which has performed well in non-DP settings. By utilizing a task-relevant second learning objective and gradient based steering of generation towards a desired class, we show conditional generation is possible even at strong privacy settings. We believe this method has potential for creating synthetic datasets that will enable companies to share and train on information without putting users’ personal information at risk. However, we want to point out some limitations and future directions for this line of work. Firstly, previous studies have shown that training neural network models with DP-SGD can result in increased bias (Bagdasaryan et al., 2019). In our work, we chose to use perfectly balanced datasets in order to mitigate the problems of unequal representation of classes. This could potentially lead to fairness issues when generating synthetic data, and biases from the original data may be amplified in the new dataset (Kuppam et al., 2019; Zhu et al., 2020). Future work may investigate how using this method affects fairness among groups represented in a dataset.
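To make the memorization check of Sec. 6.1 concrete, the following is a minimal sketch of the n-gram overlap computation; whitespace tokenization and exact matching are simplifying assumptions rather than the paper's exact protocol.

def ngram_overlap(synthetic_texts, original_texts, n_values=range(3, 8)):
    # Fraction of n-grams in the synthetic data that also occur verbatim in the original data.
    def ngrams(texts, n):
        grams = set()
        for t in texts:
            tokens = t.split()
            grams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        return grams
    overlap = {}
    for n in n_values:
        synth, orig = ngrams(synthetic_texts, n), ngrams(original_texts, n)
        overlap[n] = len(synth & orig) / max(len(synth), 1)
    return overlap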
A HYPERPARAMETERS AND TRAINING RESULTS
Table: DP Guarantee | Loss (Naive) | Loss (Multitask)
Overall, we found that the only hyperparameters that had a significant impact on the performance of the language model were the learning rate and batch size, consistent with other works. B TEXT GENERATION EXAMPLES When performing generation through the naive model with DP guarantees, we noticed that it was often unpredictable whether the model would output text according to its conditional prompting. This is undesirable when generating text for a synthetic dataset, where the samples need to be generated for a particular class. We see that the output is much more consistent in our approach with the multitask model. This is evidence that separating transformer embeddings with respect to task-relevant factors enables more consistent text generation towards a desired class.
1. What is the focus and contribution of the paper on synthetic data generation? 2. What are the strengths and weaknesses of the proposed DP-SGD based learning algorithm? 3. Are there any concerns regarding the novelty and technical contribution of the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What are some missing comparisons with other works that the reviewer suggests?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a DP-SGD based learning algorithm to achieve synthetic data generation. The proposed method achieves DP to protect the privacy of the training data. Strengths And Weaknesses This paper simply combines existing methods (DP-SGD and GPT-2). The novelty and the technical contribution to the community are limited. The model performance (utility) is far from real use. If the \epsilon reaches a meaningful range (<10), the performance drops a lot compared with the baseline (in Table 1). The related work section misses some important papers or fails to cite some papers. 3.1. GPT-2 + DP in text generation. SeqPATE: Differentially Private Text Generation via Knowledge Distillation. I admit this paper was only accepted by NIPS'22 just now. The authors are not required to compare this paper as a baseline. However, it's necessary to discuss the difference between your paper and this paper. 3.2. GPT-2 + DP-SGD: Xuechen Li, Florian Tramèr, Percy Liang, and Tatsunori Hashimoto. Large language models can be strong differentially private learners. In ICLR, 2022. This paper was published at ICLR'22 instead of arxiv only. 3.3. GPT-2 + DP-SGD: Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, et al. Differentially private fine-tuning of language models. In ICLR, 2022. Some competitive DP-based baselines are missing. The techniques proposed in [Xuechen Li, ICLR'22] and [Da Yu, ICLR'22] can be easily used in this method. The organization of this paper should be improved. Clarity, Quality, Novelty And Reproducibility The organization of this paper is confusing. The proposed method should not be put into the experiment section. The data processing section is not so important.
ICLR
Title Differentially Private Conditional Text Generation For Synthetic Data Production Abstract Companies have faced increasing pressure in recent years to anonymize user collected data when sharing internally or to third parties. Text data in particular contains copious amounts of personally identifiable information that has proven to be difficult to de-identify while remain useful for the party of interest. Previous works have suggested that synthetic text generation could provide a promising avenue to curate high performant and private datasets. In this paper, we introduce an approach to synthesize high utility text classification datasets by performing conditional generation through a large language model, distilGPT2, while providing measurable guarantees via differential privacy. We show that naive approaches suffer heavily from utility loss by entangling task-relevant factors in the transformer embedding space, making controlled generation more difficult. We analyze how incorporating a secondary learning objective can improve the performance of the generative model, improving utility of the generated data. 1 INTRODUCTION In recent years, language models have seen dramatic improvements in performance over NLP tasks. In large part, this has been due to the rapid accumulation of user generated text on the internet. Companies have been able to aggregate millions of documents available online as well as their user data to train these large language models. However, lawmakers and their constituents have grown wary of data collection and usage practices, urging more stringent regulations. In 2018, the EU set the General Data Protection Regulation (GDPR) into motion, with the goal to increase transparency about collected information and give users more control over how their data is handled. (Voigt & Bussche, 2017). Consequently, companies are now searching for ways to utilize user data without exploiting user privacy. The GDPR begins with the statement: “The protection of natural persons in relation to the processing of personal data is a fundamental right”; it is imperative that we innovate on methods to use data effectively without risking user privacy. In this paper, we study privatization of unstructured text data. Even with safety measures in mind, there has been massive exploitation of user text data. For example, in 2006, as part of their algorithm contest, Netflix released a de-identified dataset of user generated movie reviews. Researchers discovered that surprisingly little information was required to reconstruct the identities of users that contributed to the reviews (Narayanan & Shmatikov, 2006). Further studies have shown how other methods, such as authorship and membership inference attacks (Carlini et al., 2020), can be utilized to reconstruct user identities. All this to say, without proper privacy guarantees and careful data analysis, companies risk user data to exploitation. Dwork (2006) and Abadi et al. (2016) proposed differential privacy (DP) and DP-SGD/DP-Adam, respectively, as methods to provide provable and quantifiable guarantees about privacy. Generally, we say that a randomized algorithm satisfies DP if the output distribution is indistinguisable when run on neighboring datasets. However, current trade-offs between privacy and utility, particularly in synthetic text generation, makes it impractical for companies to create useful data with strong privacy guarantees. 
A common approach for anonymization is to de-identify (redact) personally identifiable tokens in text, such as names and addresses. While this may seem like a reasonable approach on paper with SOTA models reporting accuracies of nearly than 97%, the 3% of tokens that are misidentified could be used by an adversary to re-identify users. Consequently, this approach isn’t a strong enough guarantee of privacy. A permissible error from such a model should be lower than 1% (Yogarajan et al., 2020; Al Aziz et al., 2021), something that has not been achieved today for abitrary datasets. Synthetic data is promising because it avoids the problem of anonymizing an individual’s data by instead producing information about non-existent persons. Other approaches to anonymize unstructured text data have focused on word or sentence level perturbations in order to reduce vulnerability to membership inference and authorship attacks. These approaches often heavily degrade semantic quality of the text and may struggle to provide overall privacy guarantees in the context of language peculiarities, such as with the leakage of PII. Other approaches seek to generate data synthetically, such as Libbi et al. (2021) and Al Aziz et al. (2021). However, such studies often show a large tradeoff between privacy and utility or make differentially private guarantees with a potentially unreasonable epsilon parameter (e.g. ϵ > 10). In this paper, we present an approach of generating synthetic text data by performing controllable generation through a large language model. We show it is possible to synthesize text classification datasets with rigorous privacy guarantees. We hope this method will enable companies to share data and train high utility models without putting their users’ data at risk. Our contributions are as follows: 1. We present findings on problems that arise when performing conditional finetuning of large language models with DP-Adam. Particulary, we find that it becomes difficult to conditionally prompt the model towards a desired class and generate synthetic data that mimics desired attributes of the original. We propose using a task-relevant loss via a secondary learning objective to solve this issue. 2. We generate synthetic versions of the SST-2 and AG News datasets by performing conditional text generation over a langauge model. We incorporate a combination of generation techniques: attribute conditioning and a gradient based approach (Dathathri et al., 2019) to further steer generation. We show minimal loss in utility of our synthetic datasets (6.3%) with strong privacy guarantees (ϵ = 3). Code to recreate our results are available here: (redacted for review) 2 BACKGROUND 2.1 LANGUAGE MODELING Given a sequence of tokens X = x0, ... , xn , language models (LMs) are trained to compute the unconditional probability of the sequence p(X). This probability can be rewritten in terms of product of conditional probabilities by recursively applying the chain-rule (Bengio et al., 2003) as: p(X) = N∏ i=1 p(xi|x0, ..., xi−1) (1) This allows modeling the language via next-word prediction. We use the transformer architecture (Vaswani et al., 2017) to model the distribution of natural language. Generation of a new sequence y can be created by sequentially sampling its constituents: pθ(y0), pθ(y1|y0), ..., pθ(ym|y<m). 2.2 CONDITIONAL TEXT GENERATION Conditional generation of text attempts to steer the output of a LM given a desired condition or control variable. Keskar et al. 
(2019) introduced a method to accomplish this goal by performing training a LM over a dataset, such that the desired condition is prepended to the text body: “BOS [condition] SEP text” (BOS and SEP are special tokens to indiciate the beginning of the sentence and to separate label from the text body, respectively). On the other hand, plug and play controllable language generation (PPLM) (Dathathri et al., 2019) combines an attribute model (such as a discriminator) with a LM to manipulate its output and perform controllable text generation. Given an attribute a and generated text x, let the output of the discriminator model represent p(a|x). In order to control generation, we shift the latent hidden state of the language model at step i, hi by ∆hi in the direction of the sum of two gradients: (1) towards a smaller cross entropy loss in the attribute model p(a|x) for the desired attribute a and (2) toward higher log likelihood of the language modeling head p(x) to preserve the generation quality and fluency. In this paper, we use a combination of the two approaches in order to generate high-quality data. We first fine-tune a large language model over the desired dataset with conditional prompting similar to Keskar et al. (2019) and then use the gradient-based approach as described by Dathathri et al. (2019) to steer generation with high likelihood towards the desired attribute. With this process, we can generate labeled data for our synthetic dataset. 2.3 DIFFERENTIAL PRIVACY Differential Privacy (DP) is a formal definition of privacy which offers strong assurances against various re-identification and re-construction attacks (Dwork, 2006; Dwork & Roth, 2013). In recent years, DP has attracted significant attention due to its mathematically sound and provable privacy guarantees. Moreover, it has unique properties such as robustness to auxillary information and postprocessing, composability to enable modular design, and group privacy. (Dwork & Roth, 2013; Abadi et al., 2016). Definition 1. (Differential Privacy (Dwork, 2006)) A randomized function M provides (ϵ, δ)differential privacy if for all adjacent datasets X,X ′ ∈ X and all Y ⊂ Y, P r[M(X) ∈ Y ] ≤ exp (ϵ) · Pr[M(X ′) ∈ Y ] + δ (2) This is a standard definition of DP, which implies that the outputs of a DP model/algorithm for neighboring datasets are indistinguishable, bounded by the privacy parameter ϵ. ϵ is a non-negative number which represents the privacy budget. Smaller ϵ values more rigorously enforce privacy, but may have the effect of decreasing data utility. DP also allows for tracking privacy loss throughout the execution of a program by computing its leakage parameters. In this paper, we use Renyi Differential Privacy for accounting privacy budget (Mironov, 2017). Composability and robustness to post-processing are important properties of DP that are necessary for the guarantees in our paper. Composability allows for reasoning about overall privacy loss from the composition of multiple DP algorithms releasing multiple statistics about a particular dataset. Robustness to post-processing implies that if some mechanism M satisfies ϵ-differential privacy, then for any deterministic or randomized function F , so does F(M). This allows us to make ϵ-DP guarantees about the generated text from our ϵ-DP trained language model. Definition 2. 
Differentially Private Stochastic Gradient Descent (DP-SGD) modifies the update step during backpropagation by (1) clipping the gradient for each example in the mini-batch to a maximal norm C and (2) adding Gaussian noise with standard deviation proportional to C to the mean of the clipped gradients. w(t+1) = w(t) − ηt · 1 B { ∑ i∈Bt clipC(∇Li(wt)) +N(0, σ2C2I)} (3) Where clipC = v · min(1, C||v||2 ). Intuitively, the DP-SGD mechanism preserves privacy by mitigating the impact of out-of-distribution samples on the model, and is used during fine-tuning of our language models. DP-Adam is the differentially private version of the Adam optimizer (Kingma & Ba, 2014), using the same gradient privitization as outlined in DP-SGD. 3 RELATED WORKS Current methods on text privitization fall into three general categories: word/sentence level perturbations, private text embeddings, and synthetically generated text. Here, we discuss each method. Word/Sentence Level Perturbations: Many works have discussed anonymizing text by perturbing word or sentence level embeddings to satisfy ϵ-differential privacy. This set of approaches change individual words in a document, often following a variant of metric based DP (Alvim et al., 2018) which has shown to be a more utilitarian perspective of privacy in the context of NLP. However, as discussed by Mattern et al. (2022), these perturbations struggle to provide overall privacy guarantees in the context of language peculiarities and leakage of other personally identifiable information (PII) that allow for re-identification. They also suffer from utility losses since grammatical and syntactic structure are degraded. Other methods suggested by Weggenmann & Kerschbaum (2018) and Bo et al. (2019) investigate differentially private mechanisms via latent space perturbations and adversarial training, respectively, to reduce the impact of authorship inference attacks. However, these methods, again, do not address the issue of PII leakage and suffer from significant uility losses. Private Text Embeddings: Other methods have investigated releasing private text embeddings instead of the original text content. Recent work such as Lyu et al. (2020) and Xu et al. (2021) propose randomization mechanisms that can transform text embedding vectors into one that satisfies metric space differential privacy guarantees. This method has shown promise in providing formal guarantees while also retaining high utility. However, this process does not leave human readable text, which is a desired property for companies performing internal data sharing; thus, we examine our approach independent of this body of work. Synthetic Text: Other methods, particularly in the medical domain, have attempted to address the issue of privacy via synthetic text generation. Synthetic data addresses the problems of deidentification by simply not describing real people, and thus retaining plausible deniability over the data produced. Recent methods like Libbi et al. (2021) and Al Aziz et al. (2021) have proposed text generation approaches; This paper goes further, investigating the impact of a large range of parameter selection in conditional text generation and most importantly, demonstrating high utility even with strong privacy parameters (e.g. ϵ = 3), something previous works have not done. 4 DATASETS AND PREPROCESSING In this paper, we generate artificial datasets for text classification. We choose this task because it allows us to best compare utility and privacy in one dataset. 
We experiment over two datasets. Each dataset is split 80:20 for train and test. We represent datasets as D = {(xi, yi)}ni=1 4.1 SST-2 The SST-2 corpus consists of 11,855 movie review samples, each labeled with positive orn egative sentiment by human annotators. This dataset was perfectly balanced with each class having equal representation (Socher et al., 2013). 4.2 AG NEWS The AG News corpus is a topic classification task. This dataset consists of over 120,000 samples, each labeled under a topic from: Sports, World, Business, Sci/Tech. This dataset was perfectly balanced with each topic having equal representation (Zhang et al., 2015). 5 EXPERIMENTS This paper improves on existing methods for generating high-utility synthetic text data with differential privacy guarantees. Bommasani et al. (2019) argued that for successful private synthetic text data, we must have formal guarnatees of privacy and have distributional similarity to the original dataset. We achieve this by conditionally finetuning a LM (distilGPT2) over the original text data, the intuition being that we can reconstruct a similar distribution via generation. Since the model is learned privately, the post-processing theorem (Dwork, 2006) allows us to make the same ϵ guarantees about the generated samples. We show that with this approach, we are able to construct private, synthetic data that retains high utility. We hope that this will enable companies to utilize synthetic data, reducing reliance on private user information. All our experiments were run on one NVIDIA V100 GPU instance. 5.1 FINE-TUNING The baseline language model that we use for training is a pretrained distilgpt2 from HuggingFace Sanh et al. (2019). We use this model over the larger versions to provide faster iteration of training under different configurations. We fine-tune the language model G to the task of synthesizing labeled sentences to obtain the finetuned language model Gtuned. Here, G is specifically fine-tuned to the linguistic domain of Dtrain (that is, the sentences, vocabulary, style, etc.), as well as the particular classes in Dtrain. The language modeling head, a feed forward network attached to the transformer architecture, is used to model the distribution of the next-word from an input sequence. During generation, we sample from this head. Generally speaking, we would like to use Gtuned to generate a sentence set of any length with conditioned attribute a being the class label. We fine-tune G by training it over the data from Dtrain = {(xi, yi)}ni=1. We generate training samples for conditional finetuning by prepending the label with the text body so that we end up with: U = BOS yi SEP xi. We fine-tune this model under different privacy settings, specified by the epsilon parameter. When training with DP, the Adam optimizer is substituted with the DP-Adam optimizer implemented from the private-transformers library 1, provided by Li et al. (2021). We also use the ghost-clipping mechanism outlined by Li et al. (2021) which introduces a memory efficient method to perform per-example gradient clipping. Renyi differential privacy (Mironov, 2017) was used to account privacy budget during training. 5.1.1 BASELINE METHOD 1: CONDITIONAL FINE-TUNING WITH FILTER In our first approach, we (1) perform full fine-tuning of G with the training procedure described above to produce Gtuned. (2) We independently train a discriminator to model p(a|x), the probability of generated sample, x, to belong to the class a. 
5.1.1 BASELINE METHOD 1: CONDITIONAL FINE-TUNING WITH FILTER

In our first approach, we (1) perform full fine-tuning of G with the training procedure described above to produce G_tuned. (2) We independently train a discriminator to model p(a|x), the probability that a generated sample x belongs to the class a. In our work, we model the discriminator by fine-tuning a language model for classification over the dataset. (3) We conditionally generate n_a samples for each class a from G_tuned and filter out any samples that do not meet a desired threshold score from the discriminator (e.g., only include the sample if p(a|x) > 0.5). Specifically, generation was done by performing nucleus sampling (Holtzman et al., 2019) over the output distribution of G_tuned. The described approach is similar to several methods used in data augmentation (Anaby-Tavor et al., 2019; Bayer et al., 2022; Queiroz Abonizio & Barbon Junior, 2020).

This approach worked well for generating artificial datasets for SST-2 and AG News in the non-private setting. We synthesized datasets for each by generating the same number of samples per class as the original. Generation was done by simply prompting the model with “BOS class SEP”. In the private setting, we replaced the Adam optimizer with DP-Adam and tracked the total privacy budget with the RDP accountant. As we strengthened the privacy guarantee with smaller epsilon parameters (e.g., ϵ = 8), the quality of conditional generation quickly degraded. While the private LM generated text that appropriately mimicked the linguistic domain of the training data, conditional prompting did not produce consistent results; samples generated by prompting with attribute a frequently failed to meet the discriminator's threshold for p(a|x). We also analyzed samples qualitatively and found the same pattern. For example, the non-private G_tuned generally produced samples that fit the class it was prompted with (e.g., “BOS positive SEP” might yield “a sensitive and heartwarming story of an aging man...”). However, the same approach with the private G_tuned produced samples that very inconsistently fit the prompted attribute (e.g., “BOS positive SEP” might yield “an emotional slap in the face, and...”). See Appendix B for more examples. Without high confidence that our model can generate text conditionally for a desired class, the labels in the synthesized dataset may be meaningless. This would severely degrade the utility of the artificial data. This result suggests that a stronger mechanism than prompting alone is required to steer the model towards high-quality class-conditional samples.
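The snippet below sketches Baseline 1's generate-then-filter loop: nucleus sampling from the fine-tuned LM, followed by discarding samples the discriminator scores below the threshold. The checkpoint paths, label handling, and generation hyperparameters are illustrative assumptions, not the exact configuration used in the paper.

# Sketch of Baseline 1: conditionally prompt G_tuned, sample with nucleus sampling,
# and keep only samples the discriminator assigns p(a|x) above the threshold.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

lm = AutoModelForCausalLM.from_pretrained("path/to/G_tuned")        # fine-tuned distilGPT2
tok = AutoTokenizer.from_pretrained("path/to/G_tuned")
discriminator = pipeline("text-classification", model="path/to/discriminator")

def generate_filtered(label_word, n_samples, threshold=0.5, max_length=64):
    kept = []
    prompt = f"{tok.bos_token} {label_word} {tok.sep_token}"
    while len(kept) < n_samples:
        ids = tok(prompt, return_tensors="pt").input_ids
        out = lm.generate(ids, do_sample=True, top_p=0.9, max_length=max_length,
                          pad_token_id=tok.eos_token_id)
        text = tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
        pred = discriminator(text)[0]   # e.g. {"label": "positive", "score": 0.93}
        # label naming depends on how the discriminator was trained
        if pred["label"] == label_word and pred["score"] > threshold:
            kept.append((text, label_word))
    return kept

synthetic_positive = generate_filtered("positive", n_samples=10)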
5.1.2 BASELINE METHOD 2: CONDITIONAL FINE-TUNING WITH PPLM GENERATION

Iterating on Baseline 1, we attempted to use an approach similar to PPLM (Dathathri et al., 2019), a gradient-based steering mechanism, to guide the private G_tuned models towards higher-quality generation. Similar to Baseline 1, we (1) train G_tuned, then (2) train a discriminator to estimate the attribute model p(a|x) by training a discriminator head over the frozen G_tuned model. The discriminator head is a simple MLP with non-linear activations. Lastly, (3) we perform PPLM-based conditional generation (see Section 5.2) to generate the synthetic labeled text classification dataset. The intuition for this approach is that the gradient-based generation mechanism will guide G_tuned into generating samples that align strongly with the desired label. In order to effectively use the discriminator to perform gradient updates on the hidden states of G_tuned, we trained the discriminator over the fine-tuned LM's frozen embeddings. Again, while this approach worked well in the non-private setting, it became infeasible to train a useful discriminator at strong privacy settings.

At ϵ = 3 and ϵ = 8, the discriminator was not strong enough to properly contribute to generation. We hypothesized that this issue indicated that G_tuned was not preserving information about the attribute labels during private fine-tuning, making it difficult for the discriminator to learn separation, and simultaneously making it more difficult for the LM to generate label-aligned samples, as observed in the previous section. We investigated this hypothesis by visualizing the embedding space of G_tuned at different epsilon settings and estimating the mutual information between the transformer embedding space and class labels by training a Random Forest classifier (see Figures 1 and 2). We hypothesize that in order to strongly reconstruct distributional properties of the original dataset, the generative model should produce embeddings that are separable with respect to those task-relevant factors.

Figure 1: UMAP projection of SST-2 embeddings from G_tuned with ϵ = 3. Baseline 2 (top); ours (bottom).

Figure 2: Random Forest classifier test accuracies over SST-2 embeddings from G_tuned. The multitask approach (ours) shows only a marginal loss in performance at high privacy settings.

DP Guarantee    Baseline 2    Ours
ϵ = inf         0.803         0.883
ϵ = 256         0.792         0.873
ϵ = 16          0.773         0.869
ϵ = 8           0.739         0.865
ϵ = 3           0.693         0.866

5.1.3 OUR METHOD: MULTITASK CONDITIONAL FINE-TUNING WITH PPLM GENERATION

In order to address this issue, we introduce a secondary learning objective and perform multitask learning during fine-tuning. In Baselines 1 and 2, the transformer is only attached to a linear language modeling head that models the probability distribution of the next word. In our approach, we simultaneously train a discriminator head, as shown in the accompanying architecture diagram. The discriminator head is, like in Baseline 2, a simple MLP head. We now perform two gradient updates at every step: one to update the language modeling head and the other to update the discriminator head. We add the appropriate amount of noise to the gradients to maintain ϵ-DP guarantees and track the privacy budget throughout training with RDP (Mironov, 2017).

Since we still want to retain conditional prompting for the model, we want the language model to see the conditional prompt, i.e., “BOS positive SEP text”, which includes the prepended label, so that the model is able to understand prompting. Meanwhile, the discriminator head should learn to model p(a|x) for a label a and generated sample x without seeing the label in the input. So, for the language head, we feed the label-prompted text data and perform a gradient update. Then, for the discriminator head, we replace the label in the input with a random token, the intuition being that the discriminator head will pay less attention to the embeddings at that location and be a more informative guide during generation. We also train this discriminator head to classify text at different prefix lengths. For example, if the prefix step is specified to be 2, we compute the loss given the transformer output for the second token, fourth token, sixth token, and so on. The loss is linearly weighted such that the first prefix is weighted the least and the last prefix is weighted the most. Lastly, this loss is averaged, and then the gradient update is computed. This procedure ensures the discriminator head is robust enough to provide meaningful classifications at different lengths of a sequence, improving its contribution during gradient-based generation.
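A minimal sketch of the prefix-weighted discriminator loss described above is given below. The tensor shapes, the helper name, and the exact weighting/normalization scheme are assumptions, since the paper only specifies that later prefixes receive linearly larger weights before the losses are combined.

# Sketch of the multi-prefix discriminator loss: classify from the hidden state at
# every `prefix_step`-th position, weight later prefixes more, then combine.
import torch
import torch.nn.functional as F

def prefix_weighted_loss(hidden_states, labels, discrim_head, prefix_step=2):
    # hidden_states: (batch, seq_len, hidden_dim); labels: (batch,)
    seq_len = hidden_states.size(1)
    positions = list(range(prefix_step - 1, seq_len, prefix_step))  # 2nd, 4th, 6th, ... token
    weights = torch.linspace(1.0, float(len(positions)), len(positions))
    weights = weights / weights.sum()   # linearly increasing, averaged via normalization
    loss = hidden_states.new_zeros(())
    for w, pos in zip(weights, positions):
        logits = discrim_head(hidden_states[:, pos, :])  # prediction from this prefix only
        loss = loss + w * F.cross_entropy(logits, labels)
    return loss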
Algorithm 1: DP Multitask Conditional Training

Data: G_pretrained, D_train = {(x_i, y_i)}_{i=1}^{N}, number of iterations T = E · N / B (E passes over the data), learning rates η_lm, η_discrim, noise multiplier σ, clipping bound C, initial parameter vectors θ_transf^(0), θ_lm^(0), θ_discrim^(0), batch size B, initial moment estimates m_0, v_0 ∈ R^p, exponential decay rates β_1, β_2 ∈ R, and constant γ

for t = 1, ..., T do
    Draw batch b_t from D_train with sampling probability q.
    for (x_i, y_i) ∈ b_t do
        rand ← random token from the vocabulary
        s_lm ← “BOS y_i SEP x_i”;   s_discrim ← “BOS rand SEP x_i”
        g_lm^(t) ← ∇L(G_{θ_transf^(t), lm}(s_lm), s_lm);   g_discrim^(t) ← ∇L(G_{θ_transf^(t), discrim}(s_discrim), y_i)
        g_lm^(t) ← g_lm^(t) · min(1, C / ||g_lm^(t)||_2);   g_discrim^(t) ← g_discrim^(t) · min(1, C / ||g_discrim^(t)||_2)
    end
    g_lm^(t) ← (1/B) (Σ_{i ∈ b_t} g_lm^(t) + N(0, σ²C²I))
    g_discrim^(t) ← (1/B) (Σ_{i ∈ b_t} g_discrim^(t) + N(0, σ²C²I))
    θ_{transf, lm}^(t+1) ← AdamUpdate(θ_{transf, lm}^(t), m_t, v_t, g_lm^(t), β_1, β_2, γ)
    θ_{transf, discrim}^(t+1) ← AdamUpdate(θ_{transf, discrim}^(t), m_t, v_t, g_discrim^(t), β_1, β_2, γ)
end
Output: trained model θ_transf^(T), θ_lm^(T), θ_discrim^(T)

Ultimately, we find that by training both the discriminator and language modeling head simultaneously, G_tuned is able to generate conditionally even when trained with strong privacy guarantees. In Figure 1, we show, via a UMAP projection, how this approach impacts the embedding space of models trained under rigorous privacy constraints compared to the naive approach. We find that the noise injected via differential privacy does not, by itself, push the model to implicitly learn particular distributional factors about the original dataset, such as separation of class labels; an explicit loss mechanism can recover this and improve the quality of generation.

5.2 GENERATION

Next, we describe in detail the conditional generation procedure used to synthesize a private version of the dataset. We aim to generate labeled samples of text that reconstruct distributional properties similar to the original. In order to guide generation towards a particular class, we apply a PPLM-based (Dathathri et al., 2019) gradient approach. We utilize the discriminator trained in the previous step to perform gradient updates over the hidden states of the model to steer the generation towards the desired class. The steps for generating a single sample are as follows:

1. Prompt the model with BOS class SEP and generate the distribution of the next word via the language modeling head.
2. Compute the hidden embedding states of the generated text. Pass this embedding through the discriminator, which models p(a|x).
3. Shift the hidden state h_i by the sum of two gradients: (1) the gradient of the cross-entropy loss between the discriminator output and the desired class vector, and (2) a gradient towards higher log likelihood of the language modeling head, which models p(x). The latter is obtained by minimizing the KL divergence between the modified and unmodified language modeling head distributions.
4. Compute the new LM head distribution from the updated latent space.
5. Sample the next word from the new language modeling head distribution by performing nucleus sampling (Holtzman et al., 2019).
6. Repeat steps 2-5 until the termination token or the specified maximum length is reached.

We discuss further implications and limitations of this approach in Section 7.
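A simplified sketch of the hidden-state steering in step 3 is shown below. It collapses several details of full PPLM (perturbing past key/value activations, multiple candidate steps, the exact fusion of the two terms) into a single gradient update on the current hidden state, and all names, shapes, and coefficients are illustrative assumptions.

# Simplified sketch of the PPLM-style steering step: nudge the current hidden state
# toward the desired class under the discriminator head, while a KL term keeps the
# shifted next-token distribution close to the unshifted one (fluency).
import torch
import torch.nn.functional as F

def steer_hidden_state(h, target_class, discrim_head, lm_head,
                       step_size=0.02, kl_scale=0.01, n_iters=3):
    # h: (batch, hidden_dim) hidden state at the current position
    # target_class: (batch,) LongTensor with the desired class index
    delta = torch.zeros_like(h, requires_grad=True)
    base_log_probs = F.log_softmax(lm_head(h), dim=-1).detach()
    for _ in range(n_iters):
        shifted = h + delta
        attr_loss = F.cross_entropy(discrim_head(shifted), target_class)   # (1) class loss
        shifted_log_probs = F.log_softmax(lm_head(shifted), dim=-1)
        kl_loss = F.kl_div(shifted_log_probs, base_log_probs,              # (2) fluency term
                           log_target=True, reduction="batchmean")
        grad, = torch.autograd.grad(attr_loss + kl_scale * kl_loss, delta)
        with torch.no_grad():
            delta -= step_size * grad
    return (h + delta).detach()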
6 EVALUATION

With the described approach, we generate synthetic versions of the SST-2 and AG News datasets. Five variations are generated under different differential privacy settings: ϵ ∈ {256, 16, 8, 3} and a non-private version. The only change between the non-private and private versions is replacing the Adam optimizer with the DP-Adam optimizer provided by the private-transformers library (Li et al., 2021). The gradients in the non-private version are still clipped to the maximum gradient norm parameter C.

6.1 PRIVACY

Differentially private training provides formal guarantees about the generated text as a consequence of the post-processing theorem. However, recent works have shown that the empirical privacy impact of a given ϵ in large language model training is still unclear, and empirical privacy preservation may be observed even at high ϵ levels. To test this, we check the artificial dataset for memorization by computing the proportion of n-grams (for n ∈ {3, ..., 7}) in the synthesized data that also appear in the original dataset. Our findings are consistent with previous studies on language modeling: empirically, even large epsilon settings dramatically decrease memorization in the synthesized data (Ponomareva et al., 2022).

6.2 UTILITY

We measure the utility of a synthetic dataset by training a classifier over the synthesized data and evaluating its performance on the held-out test set. We do not experiment with different classification models, since our goal is strictly to evaluate the quality of the synthesized dataset; we use a strong, widely used pretrained classifier, DistilBERTForSequenceClassification, from the HuggingFace transformers library. We first train a classifier over the original dataset to produce baseline accuracies against which to compare the utility of the synthetic data. Next, for each dataset variant, ϵ ∈ {inf, 256, 16, 8, 3}, we train a classifier and compute its accuracy over the held-out test set. These results are shown in Table 1. We do not modify any hyperparameters of the classifier across datasets. The selected parameters can be seen in Appendix A.
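The memorization check in Section 6.1 reduces to an n-gram overlap statistic; a small sketch is shown below. Whitespace tokenization and lower-casing are simplifying assumptions, and the function and variable names are illustrative.

# Sketch of the Section 6.1 memorization metric: the fraction of n-grams in the
# synthetic corpus that also appear verbatim in the original training corpus.
def ngrams(text, n):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(original_texts, synthetic_texts, n):
    original, synthetic = set(), set()
    for t in original_texts:
        original |= ngrams(t, n)
    for t in synthetic_texts:
        synthetic |= ngrams(t, n)
    return len(synthetic & original) / max(len(synthetic), 1)

# Example usage: report overlap for n = 3..7 on hypothetical corpora.
# for n in range(3, 8):
#     print(n, ngram_overlap(train_texts, synth_texts, n))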
7 DISCUSSION

In this paper, we propose a method for generating synthetic text classification datasets with differential privacy guarantees by performing conditional text generation via large language models. We show the difficulties of doing this naively, particularly exploring how strong privacy settings impact the conditional prompting scheme that has performed well in non-DP settings. By utilizing a task-relevant secondary learning objective and gradient-based steering of generation towards a desired class, we show that conditional generation is possible even at strong privacy settings. We believe this method has potential for creating synthetic datasets that will enable companies to share and train on information without putting users' personal information at risk.

However, we want to point out some limitations and future directions for this line of work. Previous studies have shown that training neural network models with DP-SGD can result in increased bias (Bagdasaryan et al., 2019). In our work, we chose to use perfectly balanced datasets in order to mitigate the problems of unequal representation of classes. This could potentially lead to fairness issues when generating synthetic data, and biases from the original data may be amplified in the new dataset (Kuppam et al., 2019; Zhu et al., 2020). Future work may investigate how using this method affects fairness among groups represented in a dataset.

A HYPERPARAMETERS AND TRAINING RESULTS

[Table omitted: final training loss of the naive and multitask models at each DP guarantee.]

Overall, we found that the only hyperparameters with a significant impact on the performance of the language model were the learning rate and batch size, consistent with other works.

B TEXT GENERATION EXAMPLES

When performing generation through the naive model with DP guarantees, we noticed that it was often unpredictable whether the model would output text according to its conditional prompting. This is undesirable when generating text for a synthetic dataset, where samples need to be generated for a particular class. The output is much more consistent with our multitask model. This is evidence that separating transformer embeddings with respect to task-relevant factors enables more consistent text generation towards a desired class.
1. What is the main contribution of the paper regarding synthesizing datasets from private data using differential privacy?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of experimental comparisons and overhead costs?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some relevant related works that the paper has missed, and how do they compare to the proposed method?
5. How does the paper handle privacy accounting, particularly for the two-stage approach of Baseline 2?
6. What are some potential improvements or suggestions for future work regarding the proposed method and its applications?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper aims at synthesizing datasets from private data using differential privacy. The main goal is to generate data for NLP classification tasks, and the approach this paper takes is to do so through conditional generation, as opposed to just training a generative model with DP-SGD and then taking free-form samples from it. The paper discusses three main methods, two as baselines and one as the main proposed method. The first baseline augments the private training data with its class label (for instance, a positive-class example is prepended with a positive token and a negative-sentiment example with a negative token), which means that during generation you would basically prompt the model with the class label and have it generate. The second baseline improves on this by adding controllable generation (using PPLM), which uses a discriminator at decoding time to help enforce given attributes, for example sentiment; so apart from the prompt, the PPLM controllable generation would also help enforce the attribute. This setup is two-stage: first the generator is trained, and then the discriminator for PPLM is trained on top of it. Finally, they propose their method, which uses multi-task learning to train both the language generator and the discriminator at the same time, as opposed to consecutively, to help improve separability of the representations learned by the main model, thereby improving its downstream utility on the classification task.

Strengths And Weaknesses
Strengths:
1. The paper studies a very relevant problem, as synthesizing DP data (especially text) could have many applications, especially at the enterprise level.
2. The proposed method of conditioning on the class label and using controllable generation is new and has not been explored before.

Weaknesses:
The paper lacks experiments that would help make its case and show the superiority of the proposed method, and how it is actually better given the overheads it has. More specifically, these are the questions I have regarding experiments:
a. Table 1, which is basically the main downstream evaluation result, seems to only show results for the proposed method, and not Baselines 1 and 2. This makes the evaluation not very helpful, as we don't know how the baselines are doing and how much improvement we are getting from each trick. To clarify: there should be comparisons with at least one baseline, and ablations against the others, so we can see how much benefit (a) conditioning on the label, (b) PPLM, and (c) multi-task learning each have. Right now we don't even know if the proposed method does better than vanilla DP-SGD training of a transformer-based classifier. The only comparison we do get with one of the baselines is in Figure 2, where the experimental setup is not even explained, but it seems to be a comparison of the methods' embeddings using a Random Forest, and is not an end-to-end downstream comparison.
b. The overheads of the proposed method are not discussed at all. By overhead I mean any extra cost, at training or inference. For instance, decoding (inference) with PPLM is actually quite expensive (due to the gradient flow and backprop, and also the use of a discriminator) compared to non-controllable generation. These costs are not discussed, explored, or measured in the paper.
c. The paper has missed a very relevant related work, SubMix [1], which tackles the same problem of DP generation of text. I think a comparison with this paper is necessary.
The privacy budgeting of Baseline 2 is not discussed at all: given that this is a two-stage approach, where outputs from stage 1 plus inputs to stage 1 (the labels) go to stage 2, the privacy accounting is actually non-trivial. A conservative approach would be to compose the budgets used for stages 1 and 2, as the labels are re-used; however, I am sure better accounting could be done. This affects the reported ϵ and therefore the privacy-utility trade-off.

Using n-gram counts as a measure of privacy is very unorthodox and uninformative, as privacy is not necessarily violated if a unigram is regurgitated. Researchers usually use metrics like the recall of membership inference attacks [2-4], the exposure metric [5], or extraction attack success [6].

Minor issues:
1. The figures explaining Baseline 2 and the proposed method don't really have captions or labels.
2. Figures 3 and 4 aren't referenced anywhere in the text. I assume they relate to Section 6.1?

Refs:
[1] Ginart, Antonio, et al. "Submix: Practical private prediction for large-scale language models." arXiv preprint arXiv:2201.00971 (2022).
[2] Shokri, Reza, et al. "Membership inference attacks against machine learning models." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.
[3] Mireshghallah, Fatemehsadat, et al. "Quantifying privacy risks of masked language models using membership inference attacks." arXiv preprint arXiv:2203.03929 (2022).
[4] Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.
[5] Carlini, Nicholas, et al. "The secret sharer: Evaluating and testing unintended memorization in neural networks." 28th USENIX Security Symposium (USENIX Security 19). 2019.
[6] Carlini, Nicholas, et al. "Extracting training data from large language models." 30th USENIX Security Symposium (USENIX Security 21). 2021.

Clarity, Quality, Novelty And Reproducibility
The paper's writing is somewhat unclear, with the missing captions/figure labels and the unexplained figures (as explained above under weaknesses). The idea and proposed method are fairly novel and interesting; however, the experiments are very lacking and do not provide sufficient evidence. Also, the experiments do not seem reproducible, as many details are missing, especially regarding budgeting, as explained above.
ICLR
Title
Differentially Private Conditional Text Generation For Synthetic Data Production

Abstract
Companies have faced increasing pressure in recent years to anonymize user-collected data when sharing it internally or with third parties. Text data in particular contains copious amounts of personally identifiable information that has proven difficult to de-identify while remaining useful for the party of interest. Previous works have suggested that synthetic text generation could provide a promising avenue for curating high-performing and private datasets. In this paper, we introduce an approach to synthesize high-utility text classification datasets by performing conditional generation through a large language model, distilGPT2, while providing measurable guarantees via differential privacy. We show that naive approaches suffer heavily from utility loss by entangling task-relevant factors in the transformer embedding space, making controlled generation more difficult. We analyze how incorporating a secondary learning objective can improve the performance of the generative model, improving the utility of the generated data.

1 INTRODUCTION

In recent years, language models have seen dramatic improvements in performance on NLP tasks. In large part, this has been due to the rapid accumulation of user-generated text on the internet. Companies have been able to aggregate millions of documents available online as well as their user data to train these large language models. However, lawmakers and their constituents have grown wary of data collection and usage practices, urging more stringent regulations. In 2018, the EU set the General Data Protection Regulation (GDPR) into motion, with the goal of increasing transparency about collected information and giving users more control over how their data is handled (Voigt & Bussche, 2017). Consequently, companies are now searching for ways to utilize user data without exploiting user privacy. The GDPR begins with the statement: “The protection of natural persons in relation to the processing of personal data is a fundamental right”; it is imperative that we innovate on methods to use data effectively without risking user privacy. In this paper, we study privatization of unstructured text data.

Even with safety measures in mind, there has been massive exploitation of user text data. For example, in 2006, as part of their algorithm contest, Netflix released a de-identified dataset of user movie ratings. Researchers discovered that surprisingly little information was required to reconstruct the identities of users who contributed to the dataset (Narayanan & Shmatikov, 2006). Further studies have shown how other methods, such as authorship and membership inference attacks (Carlini et al., 2020), can be utilized to reconstruct user identities. All this to say, without proper privacy guarantees and careful data analysis, companies put user data at risk of exploitation.

Dwork (2006) and Abadi et al. (2016) proposed differential privacy (DP) and DP-SGD/DP-Adam, respectively, as methods to provide provable and quantifiable guarantees about privacy. Generally, we say that a randomized algorithm satisfies DP if its output distribution is indistinguishable when run on neighboring datasets. However, the current trade-offs between privacy and utility, particularly in synthetic text generation, make it impractical for companies to create useful data with strong privacy guarantees.
A common approach to anonymization is to de-identify (redact) personally identifiable tokens in text, such as names and addresses. While this may seem like a reasonable approach on paper, with SOTA models reporting accuracies of nearly 97%, the 3% of tokens that are misidentified could be used by an adversary to re-identify users. Consequently, this approach is not a strong enough guarantee of privacy. A permissible error rate for such a model should be lower than 1% (Yogarajan et al., 2020; Al Aziz et al., 2021), something that has not been achieved today for arbitrary datasets. Synthetic data is promising because it avoids the problem of anonymizing an individual's data by instead producing information about non-existent persons.

Other approaches to anonymizing unstructured text data have focused on word- or sentence-level perturbations in order to reduce vulnerability to membership inference and authorship attacks. These approaches often heavily degrade the semantic quality of the text and may struggle to provide overall privacy guarantees in the context of language peculiarities, such as the leakage of PII. Other approaches seek to generate data synthetically, such as Libbi et al. (2021) and Al Aziz et al. (2021). However, such studies often show a large trade-off between privacy and utility or make differentially private guarantees with a potentially unreasonable epsilon parameter (e.g., ϵ > 10).

In this paper, we present an approach to generating synthetic text data by performing controllable generation through a large language model. We show it is possible to synthesize text classification datasets with rigorous privacy guarantees. We hope this method will enable companies to share data and train high-utility models without putting their users' data at risk. Our contributions are as follows:

1. We present findings on problems that arise when performing conditional fine-tuning of large language models with DP-Adam. Particularly, we find that it becomes difficult to conditionally prompt the model towards a desired class and generate synthetic data that mimics desired attributes of the original. We propose using a task-relevant loss via a secondary learning objective to solve this issue.

2. We generate synthetic versions of the SST-2 and AG News datasets by performing conditional text generation over a language model. We incorporate a combination of generation techniques: attribute conditioning and a gradient-based approach (Dathathri et al., 2019) to further steer generation. We show minimal loss in utility of our synthetic datasets (6.3%) with strong privacy guarantees (ϵ = 3).

Code to recreate our results is available here: (redacted for review)

2 BACKGROUND

2.1 LANGUAGE MODELING

Given a sequence of tokens X = x_0, ..., x_n, language models (LMs) are trained to compute the unconditional probability of the sequence, p(X). This probability can be rewritten as a product of conditional probabilities by recursively applying the chain rule (Bengio et al., 2003):

p(X) = \prod_{i=1}^{N} p(x_i | x_0, ..., x_{i-1})    (1)

This allows modeling the language via next-word prediction. We use the transformer architecture (Vaswani et al., 2017) to model the distribution of natural language. A new sequence y can be generated by sequentially sampling its constituents: p_θ(y_0), p_θ(y_1 | y_0), ..., p_θ(y_m | y_{<m}).

2.2 CONDITIONAL TEXT GENERATION

Conditional generation of text attempts to steer the output of an LM given a desired condition or control variable. Keskar et al.
(2019) introduced a method to accomplish this goal by training an LM over a dataset in which the desired condition is prepended to the text body: “BOS [condition] SEP text” (BOS and SEP are special tokens that indicate the beginning of the sentence and separate the label from the text body, respectively). On the other hand, plug and play language models (PPLM) (Dathathri et al., 2019) combine an attribute model (such as a discriminator) with an LM to manipulate its output and perform controllable text generation. Given an attribute a and generated text x, let the output of the discriminator model represent p(a|x). In order to control generation, we shift the latent hidden state of the language model at step i, h_i, by ∆h_i in the direction of the sum of two gradients: (1) towards a smaller cross-entropy loss in the attribute model p(a|x) for the desired attribute a, and (2) towards a higher log likelihood of the language modeling head p(x), to preserve generation quality and fluency.

In this paper, we use a combination of the two approaches in order to generate high-quality data. We first fine-tune a large language model over the desired dataset with conditional prompting, similar to Keskar et al. (2019), and then use the gradient-based approach described by Dathathri et al. (2019) to steer generation with high likelihood towards the desired attribute. With this process, we can generate labeled data for our synthetic dataset.

2.3 DIFFERENTIAL PRIVACY

Differential privacy (DP) is a formal definition of privacy which offers strong assurances against various re-identification and reconstruction attacks (Dwork, 2006; Dwork & Roth, 2013). In recent years, DP has attracted significant attention due to its mathematically sound and provable privacy guarantees. Moreover, it has unique properties such as robustness to auxiliary information and post-processing, composability to enable modular design, and group privacy (Dwork & Roth, 2013; Abadi et al., 2016).

Definition 1. (Differential Privacy (Dwork, 2006)) A randomized function M provides (ϵ, δ)-differential privacy if for all adjacent datasets X, X' ∈ \mathcal{X} and all Y ⊆ \mathcal{Y},

Pr[M(X) ∈ Y] ≤ exp(ϵ) · Pr[M(X') ∈ Y] + δ    (2)

This is the standard definition of DP, which implies that the outputs of a DP model/algorithm on neighboring datasets are indistinguishable, bounded by the privacy parameter ϵ. Here ϵ is a non-negative number which represents the privacy budget. Smaller ϵ values enforce privacy more rigorously, but may have the effect of decreasing data utility. DP also allows for tracking privacy loss throughout the execution of a program by computing its leakage parameters. In this paper, we use Rényi differential privacy to account the privacy budget (Mironov, 2017).

Composability and robustness to post-processing are important properties of DP that are necessary for the guarantees in our paper. Composability allows reasoning about the overall privacy loss from the composition of multiple DP algorithms releasing multiple statistics about a particular dataset. Robustness to post-processing implies that if some mechanism M satisfies ϵ-differential privacy, then for any deterministic or randomized function F, so does F(M). This allows us to make ϵ-DP guarantees about the text generated from our ϵ-DP trained language model.
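Relating Equation 1 in Section 2.1 to code, the following minimal sketch computes the next-token distribution with distilgpt2 and samples a short continuation one token at a time. The prompt, the continuation length, and the use of plain multinomial sampling (rather than nucleus sampling) are illustrative choices, not the paper's configuration.

# Minimal sketch of the autoregressive factorization in Equation 1: the LM head
# gives p(x_i | x_0, ..., x_{i-1}), and a sequence is generated token by token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
lm = AutoModelForCausalLM.from_pretrained("distilgpt2")

ids = tok("The movie was", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):                                    # sample 10 continuation tokens
        logits = lm(ids).logits[:, -1, :]                  # next-token distribution
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)  # x_i ~ p(x_i | x_{<i})
        ids = torch.cat([ids, next_id], dim=-1)
print(tok.decode(ids[0]))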
1. What is the focus of the paper regarding data sharing using differential privacy methods?
2. What are the strengths of the proposed approach, particularly in its ability to generate high-fidelity and high-utility synthetic data?
3. What are the weaknesses of the paper, especially regarding the novelty of the proposed approach and the remaining gap between original and synthetic versions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper tackles the problem of sharing text data by utilizing differential privacy methods. The authors show how a naive approach does not give good text aligned with the class labels, but their newly proposed architecture does. Experiments showcasing the efficacy of their approaches use a pretrained GPT-2 model adapted to two datasets, namely SST-2 and AG News.

Strengths And Weaknesses
Pros:
(A) They are able to generate high-fidelity and high-utility synthetic data, as shown on two datasets in the experiments section.
(B) The novel multi-task learning approach gives them an additional boost over the naive sequential approach. Also, the UMAP projections show a clear separation between the positive and negative labels.
Cons:
(i) There is not much novelty in terms of the proposed approach. Existing DP settings and code (RDP) are used for implementing the privacy aspects, and similarly the multi-task learning is also relatively straightforward.
(ii) There is still a substantial gap between the original and the synthetic versions (0.94 vs 0.89), and it would have been interesting to address that gap even before attempting the DP guarantees.

Clarity, Quality, Novelty And Reproducibility
The paper is easy to read and understand. It proposes a multi-task learning approach to combine the language modeling and the discriminator training simultaneously. The architecture diagrams for the various approaches make it amenable to reimplementation, and potentially the code will be released in the future.
ICLR
Title Differentially Private Conditional Text Generation For Synthetic Data Production Abstract Companies have faced increasing pressure in recent years to anonymize user collected data when sharing internally or to third parties. Text data in particular contains copious amounts of personally identifiable information that has proven to be difficult to de-identify while remain useful for the party of interest. Previous works have suggested that synthetic text generation could provide a promising avenue to curate high performant and private datasets. In this paper, we introduce an approach to synthesize high utility text classification datasets by performing conditional generation through a large language model, distilGPT2, while providing measurable guarantees via differential privacy. We show that naive approaches suffer heavily from utility loss by entangling task-relevant factors in the transformer embedding space, making controlled generation more difficult. We analyze how incorporating a secondary learning objective can improve the performance of the generative model, improving utility of the generated data. 1 INTRODUCTION In recent years, language models have seen dramatic improvements in performance over NLP tasks. In large part, this has been due to the rapid accumulation of user generated text on the internet. Companies have been able to aggregate millions of documents available online as well as their user data to train these large language models. However, lawmakers and their constituents have grown wary of data collection and usage practices, urging more stringent regulations. In 2018, the EU set the General Data Protection Regulation (GDPR) into motion, with the goal to increase transparency about collected information and give users more control over how their data is handled. (Voigt & Bussche, 2017). Consequently, companies are now searching for ways to utilize user data without exploiting user privacy. The GDPR begins with the statement: “The protection of natural persons in relation to the processing of personal data is a fundamental right”; it is imperative that we innovate on methods to use data effectively without risking user privacy. In this paper, we study privatization of unstructured text data. Even with safety measures in mind, there has been massive exploitation of user text data. For example, in 2006, as part of their algorithm contest, Netflix released a de-identified dataset of user generated movie reviews. Researchers discovered that surprisingly little information was required to reconstruct the identities of users that contributed to the reviews (Narayanan & Shmatikov, 2006). Further studies have shown how other methods, such as authorship and membership inference attacks (Carlini et al., 2020), can be utilized to reconstruct user identities. All this to say, without proper privacy guarantees and careful data analysis, companies risk user data to exploitation. Dwork (2006) and Abadi et al. (2016) proposed differential privacy (DP) and DP-SGD/DP-Adam, respectively, as methods to provide provable and quantifiable guarantees about privacy. Generally, we say that a randomized algorithm satisfies DP if the output distribution is indistinguisable when run on neighboring datasets. However, current trade-offs between privacy and utility, particularly in synthetic text generation, makes it impractical for companies to create useful data with strong privacy guarantees. 
A common approach for anonymization is to de-identify (redact) personally identifiable tokens in text, such as names and addresses. While this may seem like a reasonable approach on paper, with SOTA models reporting accuracies of nearly 97%, the 3% of tokens that are misidentified could be used by an adversary to re-identify users. Consequently, this approach isn't a strong enough guarantee of privacy. A permissible error from such a model should be lower than 1% (Yogarajan et al., 2020; Al Aziz et al., 2021), something that has not been achieved today for arbitrary datasets. Synthetic data is promising because it avoids the problem of anonymizing an individual’s data by instead producing information about non-existent persons. Other approaches to anonymize unstructured text data have focused on word- or sentence-level perturbations in order to reduce vulnerability to membership inference and authorship attacks. These approaches often heavily degrade the semantic quality of the text and may struggle to provide overall privacy guarantees in the context of language peculiarities, such as the leakage of PII. Other approaches seek to generate data synthetically, such as Libbi et al. (2021) and Al Aziz et al. (2021). However, such studies often show a large tradeoff between privacy and utility or make differentially private guarantees with a potentially unreasonable epsilon parameter (e.g. ϵ > 10). In this paper, we present an approach for generating synthetic text data by performing controllable generation through a large language model. We show it is possible to synthesize text classification datasets with rigorous privacy guarantees. We hope this method will enable companies to share data and train high-utility models without putting their users’ data at risk. Our contributions are as follows: 1. We present findings on problems that arise when performing conditional finetuning of large language models with DP-Adam. Particularly, we find that it becomes difficult to conditionally prompt the model towards a desired class and generate synthetic data that mimics desired attributes of the original. We propose using a task-relevant loss via a secondary learning objective to solve this issue. 2. We generate synthetic versions of the SST-2 and AG News datasets by performing conditional text generation over a language model. We incorporate a combination of generation techniques: attribute conditioning and a gradient-based approach (Dathathri et al., 2019) to further steer generation. We show minimal loss in utility of our synthetic datasets (6.3%) with strong privacy guarantees (ϵ = 3). Code to recreate our results is available here: (redacted for review) 2 BACKGROUND 2.1 LANGUAGE MODELING Given a sequence of tokens X = x_0, ..., x_n, language models (LMs) are trained to compute the unconditional probability of the sequence, p(X). This probability can be rewritten as a product of conditional probabilities by recursively applying the chain rule (Bengio et al., 2003):

p(X) = ∏_{i=1}^{N} p(x_i | x_0, ..., x_{i−1})    (1)

This allows modeling the language via next-word prediction. We use the transformer architecture (Vaswani et al., 2017) to model the distribution of natural language. A new sequence y can be generated by sequentially sampling its constituents: p_θ(y_0), p_θ(y_1 | y_0), ..., p_θ(y_m | y_{<m}). 2.2 CONDITIONAL TEXT GENERATION Conditional generation of text attempts to steer the output of an LM given a desired condition or control variable. Keskar et al.
(2019) introduced a method to accomplish this goal by training an LM over a dataset such that the desired condition is prepended to the text body: “BOS [condition] SEP text” (BOS and SEP are special tokens that indicate the beginning of the sentence and separate the label from the text body, respectively). On the other hand, plug and play controllable language generation (PPLM) (Dathathri et al., 2019) combines an attribute model (such as a discriminator) with an LM to manipulate its output and perform controllable text generation. Given an attribute a and generated text x, let the output of the discriminator model represent p(a|x). In order to control generation, we shift the latent hidden state of the language model at step i, h_i, by Δh_i in the direction of the sum of two gradients: (1) towards a smaller cross-entropy loss in the attribute model p(a|x) for the desired attribute a, and (2) towards a higher log likelihood of the language modeling head p(x) to preserve generation quality and fluency. In this paper, we use a combination of the two approaches in order to generate high-quality data. We first fine-tune a large language model over the desired dataset with conditional prompting similar to Keskar et al. (2019) and then use the gradient-based approach described by Dathathri et al. (2019) to steer generation with high likelihood towards the desired attribute. With this process, we can generate labeled data for our synthetic dataset. 2.3 DIFFERENTIAL PRIVACY Differential Privacy (DP) is a formal definition of privacy which offers strong assurances against various re-identification and re-construction attacks (Dwork, 2006; Dwork & Roth, 2013). In recent years, DP has attracted significant attention due to its mathematically sound and provable privacy guarantees. Moreover, it has unique properties such as robustness to auxiliary information and post-processing, composability to enable modular design, and group privacy (Dwork & Roth, 2013; Abadi et al., 2016). Definition 1. (Differential Privacy (Dwork, 2006)) A randomized function M provides (ϵ, δ)-differential privacy if for all adjacent datasets X, X′ ∈ X and all Y ⊂ Y,

Pr[M(X) ∈ Y] ≤ exp(ϵ) · Pr[M(X′) ∈ Y] + δ    (2)

This is a standard definition of DP, which implies that the outputs of a DP model/algorithm for neighboring datasets are indistinguishable, bounded by the privacy parameter ϵ. Here ϵ is a non-negative number which represents the privacy budget. Smaller ϵ values enforce privacy more rigorously, but may have the effect of decreasing data utility. DP also allows for tracking privacy loss throughout the execution of a program by computing its leakage parameters. In this paper, we use Renyi Differential Privacy to account for the privacy budget (Mironov, 2017). Composability and robustness to post-processing are important properties of DP that are necessary for the guarantees in our paper. Composability allows for reasoning about the overall privacy loss from the composition of multiple DP algorithms releasing multiple statistics about a particular dataset. Robustness to post-processing implies that if some mechanism M satisfies ϵ-differential privacy, then for any deterministic or randomized function F, so does F(M). This allows us to make ϵ-DP guarantees about the generated text from our ϵ-DP trained language model. Definition 2.
Differentially Private Stochastic Gradient Descent (DP-SGD) modifies the update step during backpropagation by (1) clipping the gradient for each example in the mini-batch to a maximal norm C and (2) adding Gaussian noise with standard deviation proportional to C to the mean of the clipped gradients:

w^{(t+1)} = w^{(t)} − η_t · (1/B) { Σ_{i∈B_t} clip_C(∇L_i(w_t)) + N(0, σ²C²I) }    (3)

where clip_C(v) = v · min(1, C/||v||_2). Intuitively, the DP-SGD mechanism preserves privacy by mitigating the impact of out-of-distribution samples on the model, and it is used during fine-tuning of our language models. DP-Adam is the differentially private version of the Adam optimizer (Kingma & Ba, 2014), using the same gradient privatization as outlined in DP-SGD. 3 RELATED WORKS Current methods for text privatization fall into three general categories: word/sentence level perturbations, private text embeddings, and synthetically generated text. Here, we discuss each method. Word/Sentence Level Perturbations: Many works have discussed anonymizing text by perturbing word- or sentence-level embeddings to satisfy ϵ-differential privacy. This set of approaches changes individual words in a document, often following a variant of metric-based DP (Alvim et al., 2018), which has been shown to be a more utilitarian perspective of privacy in the context of NLP. However, as discussed by Mattern et al. (2022), these perturbations struggle to provide overall privacy guarantees in the context of language peculiarities and leakage of other personally identifiable information (PII) that allows for re-identification. They also suffer from utility losses since grammatical and syntactic structure are degraded. Other methods suggested by Weggenmann & Kerschbaum (2018) and Bo et al. (2019) investigate differentially private mechanisms via latent space perturbations and adversarial training, respectively, to reduce the impact of authorship inference attacks. However, these methods, again, do not address the issue of PII leakage and suffer from significant utility losses. Private Text Embeddings: Other methods have investigated releasing private text embeddings instead of the original text content. Recent works such as Lyu et al. (2020) and Xu et al. (2021) propose randomization mechanisms that can transform text embedding vectors into ones that satisfy metric-space differential privacy guarantees. This method has shown promise in providing formal guarantees while also retaining high utility. However, this process does not leave human-readable text, which is a desired property for companies performing internal data sharing; thus, we examine our approach independent of this body of work. Synthetic Text: Other methods, particularly in the medical domain, have attempted to address the issue of privacy via synthetic text generation. Synthetic data addresses the problems of de-identification by simply not describing real people, and thus retains plausible deniability over the data produced. Recent methods like Libbi et al. (2021) and Al Aziz et al. (2021) have proposed text generation approaches; this paper goes further, investigating the impact of a large range of parameter selections in conditional text generation and, most importantly, demonstrating high utility even with strong privacy parameters (e.g. ϵ = 3), something previous works have not done. 4 DATASETS AND PREPROCESSING In this paper, we generate artificial datasets for text classification. We choose this task because it allows us to best compare utility and privacy in one dataset.
We experiment over two datasets. Each dataset is split 80:20 for train and test. We represent datasets as D = {(x_i, y_i)}_{i=1}^{n}. 4.1 SST-2 The SST-2 corpus consists of 11,855 movie review samples, each labeled with positive or negative sentiment by human annotators. This dataset was perfectly balanced, with each class having equal representation (Socher et al., 2013). 4.2 AG NEWS The AG News corpus is a topic classification task. This dataset consists of over 120,000 samples, each labeled under a topic from: Sports, World, Business, Sci/Tech. This dataset was perfectly balanced, with each topic having equal representation (Zhang et al., 2015). 5 EXPERIMENTS This paper improves on existing methods for generating high-utility synthetic text data with differential privacy guarantees. Bommasani et al. (2019) argued that successful private synthetic text data must come with formal guarantees of privacy and have distributional similarity to the original dataset. We achieve this by conditionally finetuning a LM (distilGPT2) over the original text data, the intuition being that we can reconstruct a similar distribution via generation. Since the model is learned privately, the post-processing theorem (Dwork, 2006) allows us to make the same ϵ guarantees about the generated samples. We show that with this approach, we are able to construct private, synthetic data that retains high utility. We hope that this will enable companies to utilize synthetic data, reducing reliance on private user information. All our experiments were run on one NVIDIA V100 GPU instance. 5.1 FINE-TUNING The baseline language model that we use for training is a pretrained distilgpt2 from HuggingFace (Sanh et al., 2019). We use this model over the larger versions to provide faster iteration of training under different configurations. We fine-tune the language model G to the task of synthesizing labeled sentences to obtain the fine-tuned language model Gtuned. Here, G is specifically fine-tuned to the linguistic domain of Dtrain (that is, the sentences, vocabulary, style, etc.), as well as the particular classes in Dtrain. The language modeling head, a feed-forward network attached to the transformer architecture, is used to model the distribution of the next word given an input sequence. During generation, we sample from this head. Generally speaking, we would like to use Gtuned to generate a sentence set of any length with conditioned attribute a being the class label. We fine-tune G by training it over the data from Dtrain = {(x_i, y_i)}_{i=1}^{n}. We generate training samples for conditional finetuning by prepending the label to the text body so that we end up with: U = BOS y_i SEP x_i. We fine-tune this model under different privacy settings, specified by the epsilon parameter. When training with DP, the Adam optimizer is substituted with the DP-Adam optimizer implemented in the private-transformers library (https://github.com/lxuechen/private-transformers) provided by Li et al. (2021). We also use the ghost-clipping mechanism outlined by Li et al. (2021), which introduces a memory-efficient method to perform per-example gradient clipping. Renyi differential privacy (Mironov, 2017) was used to account for the privacy budget during training. 5.1.1 BASELINE METHOD 1: CONDITIONAL FINE-TUNING WITH FILTER In our first approach, we (1) perform full fine-tuning of G with the training procedure described above to produce Gtuned. (2) We independently train a discriminator to model p(a|x), the probability that a generated sample x belongs to class a.
In our work, we model the discriminator by fine-tuning a language model for classification over the dataset. (3) We conditionally generate n_a samples for each class a from G and filter out any samples that do not meet a desired threshold score from the discriminator (e.g. only include the sample if p(a|x) > 0.5). Specifically, generation was done by performing nucleus sampling (Holtzman et al., 2019) over the output distribution of Gtuned. The described approach is similar to several methods used in data augmentation (Anaby-Tavor et al., 2019; Bayer et al., 2022; Queiroz Abonizio & Barbon Junior, 2020). This approach worked well for generating artificial datasets for SST-2 and AG News in the non-private setting. We synthesized datasets for each by generating the same number of samples for each class as the original. Generation was done by simply prompting the model with “BOS class SEP”. In the private setting, we replaced the Adam optimizer with DP-Adam and tracked the total privacy budget with the RDP accountant. As we improved the privacy guarantee with smaller epsilon parameters (e.g. ϵ = 8), the quality of conditional generation quickly degraded. While the private LM generated text that appropriately mimicked the linguistic domain of the training data, conditional prompting did not produce consistent results; prompting the model with attribute a would only infrequently produce samples meeting the threshold requirement on p(a|x). We also analyzed samples qualitatively and found the same results. For example, the non-private Gtuned generally produced samples that fit the class it was prompted with (e.g. “BOS positive SEP” might yield “a sensitive and heartwarming story of an aging man...”). However, the same approach with the private Gtuned produced samples that very inconsistently fit the prompted attribute (e.g. “BOS positive SEP” might yield “an emotional slap in the face, and...”). See Appendix B for more examples. Without high confidence in our model being able to generate text conditionally for a desired class, the labels in the synthesized dataset may be meaningless. This would severely degrade the utility of the artificial data. This result suggests that a stronger mechanism than just prompting is required to steer the model towards high-quality class-conditional samples. 5.1.2 BASELINE METHOD 2: CONDITIONAL FINE-TUNING WITH PPLM GENERATION Iterating from Baseline 1, we attempted to use an approach similar to PPLM (Dathathri et al., 2019), a gradient-based steering mechanism, to guide the private Gtuned models towards higher quality generation. Similar to Baseline 1, we (1) train Gtuned, then (2) train a discriminator to estimate the attribute model p(a|x) by training a discriminator head over the frozen Gtuned model. The discriminator head is a simple MLP with non-linear activations. Lastly, (3) we perform PPLM-based conditional generation (see Section 5.2) to generate the synthetic labeled text classification dataset. The intuition for this approach is that the gradient-based generation mechanism will guide Gtuned into generating samples that align strongly with the desired label. In order to effectively use the discriminator to perform gradient updates on the hidden states of Gtuned, we trained the discriminator over the fine-tuned LM’s frozen embeddings. Again, while this approach worked well in the non-private setting, it became infeasible to train the discriminator at strong epsilon settings.
At ϵ = 3 and ϵ = 8, the discriminator was not strong enough to properly contribute to generation. We hypothesized that this issue indicated that Gtuned was not preserving information about the attribute labels during private fine-tuning, making it difficult for the discriminator to learn separation, and simultaneously making it more difficult for the LM to generate label-aligned samples, as observed in the previous section. We investigated this hypothesis by visualizing the embedding space of Gtuned at different epsilon settings and estimating the mutual information between the transformer embedding space and class labels by training a Random Forest classifier (see Figure 1).

Figure 1: UMAP projection of SST-2 embeddings from Gtuned with ϵ = 3. Baseline 2 (top); ours (bottom).

Figure 2: Random Forest classifier test accuracies over SST-2 embeddings from Gtuned. The multitask approach (ours) shows marginal loss in performance at high privacy settings.
DP Guarantee    Baseline 2    Ours
ϵ = inf         0.803         0.883
ϵ = 256         0.792         0.873
ϵ = 16          0.773         0.869
ϵ = 8           0.739         0.865
ϵ = 3           0.693         0.866

We hypothesize that in order to strongly reconstruct distributional properties from the original dataset, the generative model should produce embeddings that are separable with respect to those task-relevant factors. 5.1.3 OUR METHOD: MULTITASK CONDITIONAL FINE-TUNING WITH PPLM GENERATION In order to address this issue we introduce a secondary learning objective and perform multitask learning during fine-tuning. In Baselines 1 and 2, the transformer is only attached to a linear language modeling head that models the probability distribution of the next word. In our approach, we simultaneously train a discriminator head, as shown in the diagram above. The discriminator head is, like Baseline 2, a simple MLP head. We now perform two gradient updates at every step – one to update the language modeling head and the other to update the discriminator head. We add the appropriate amount of noise to the gradients to maintain ϵ-DP guarantees and track the privacy budget throughout training with RDP (Mironov, 2017). Since we still want to retain conditional prompting for the model, we want the language model to be able to see the conditional prompt, i.e. “BOS positive SEP text”, which includes the prepended label so that the model is able to understand prompting. Meanwhile, the discriminator head should be able to learn to model p(a|x) for a label a and generated sample x without seeing the label in the input. So, for the language head, we feed the label-prompted text data and perform a gradient update. Then, for the discriminator head, we replace the label in the input with a random token, the intuition being that the discriminator head will pay less attention to the embeddings at that location and be a more informative guide during generation. We also train this discriminator head to classify text at different prefix lengths. For example, if the prefix step was specified to be 2, we would compute the loss given the transformer output for the second token, fourth token, sixth token, and so on. The loss is linearly weighted such that the first prefix is weighted the least and the last prefix is weighted the most. Lastly, this loss is averaged, and then the gradient update is computed. This loss procedure ensures the discriminator head is robust enough to provide meaningful classifications at different lengths of a sequence, improving its contribution during gradient-based generation.
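To make the prefix-weighted discriminator objective described above concrete, the following is a minimal PyTorch-style sketch; the function and argument names are illustrative assumptions rather than the authors' released code.

import torch
import torch.nn.functional as F

def prefix_weighted_discriminator_loss(hidden_states, discrim_head, label, prefix_step=2):
    # hidden_states: (seq_len, d_model) transformer outputs for one example
    # discrim_head:  MLP mapping d_model -> num_classes
    # label:         scalar tensor holding the class index
    seq_len = hidden_states.size(0)
    positions = list(range(prefix_step - 1, seq_len, prefix_step))  # e.g. 2nd, 4th, 6th token
    weights = torch.arange(1, len(positions) + 1, dtype=torch.float)
    weights = weights / weights.sum()  # linear weighting: later (longer) prefixes count more
    losses = [F.cross_entropy(discrim_head(hidden_states[pos]).unsqueeze(0), label.unsqueeze(0))
              for pos in positions]
    return (weights * torch.stack(losses)).sum()

During multitask fine-tuning, a loss of this form would be computed on the input with the label replaced by a random token, clipped and noised per example, and applied alongside the standard language modeling loss, as in Algorithm 1 below.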
Algorithm 1 DP Multitask Conditional Training
Data: G_pretrained, D_train = {(x_i, y_i)}_{i=1}^{N}, number of iterations T, learning rates η_lm, η_discrim, noise multiplier σ, clipping bound C, initial parameter vectors θ^{(0)}_transf, θ^{(0)}_lm, θ^{(0)}_discrim, batch size B, initial moment estimates m_0, v_0 ∈ R^p, exponential decay rates β_1, β_2 ∈ R and constant γ
for t ∈ [E · N/B] do
    Draw batch b_t from D with sampling probability q.
    for (x_i, y_i) ∈ b_t do
        rand ← random token from vocabulary
        s_lm ← “BOS y_i SEP x_i”,  s_discrim ← “BOS rand SEP x_i”
        g^{(t)}_lm ← ∇L(G_{θ^{(t)}_transf, lm}(s_lm), s_lm),  g^{(t)}_discrim ← ∇L(G_{θ^{(t)}_transf, discrim}(s_discrim), y_i)
        g^{(t)}_lm ← g^{(t)}_lm · min(1, C/||g^{(t)}_lm||_2),  g^{(t)}_discrim ← g^{(t)}_discrim · min(1, C/||g^{(t)}_discrim||_2)
    end
    g^{(t)}_lm ← (1/B) (Σ_{i∈b_t} g^{(t)}_lm + N(0, σ²C²I))
    g^{(t)}_discrim ← (1/B) (Σ_{i∈b_t} g^{(t)}_discrim + N(0, σ²C²I))
    θ^{(t+1)}_transf, lm ← AdamUpdate(θ^{(t)}_transf, lm, m_t, v_t, g^{(t)}_lm, β_1, β_2, γ)
    θ^{(t+1)}_transf, discrim ← AdamUpdate(θ^{(t)}_transf, discrim, m_t, v_t, g^{(t)}_discrim, β_1, β_2, γ)
end
Output: Trained model θ^{(T)}_transf, θ^{(T)}_lm, θ^{(T)}_discrim

Ultimately, we find that by training both the discriminator and language modeling head simultaneously, Gtuned is able to generate conditionally even when trained with strong privacy guarantees. In Figure 1, we show via a UMAP projection how this approach impacts the embedding space of models trained under rigorous privacy constraints compared to the naive approach. We find that the noise injected via differential privacy does not encourage the model to implicitly learn particular distributional factors about the original dataset, such as separation of class labels, and that an explicit loss mechanism can recover this and improve the quality of generation. 5.2 GENERATION Next, we describe in detail the conditional generation procedure used to synthesize a private version of the dataset. We aim to generate labeled samples of text that reconstruct distributional properties similar to the original. In order to guide generation towards a particular class, we apply a PPLM-based (Dathathri et al., 2019) gradient approach. We utilize the discriminator trained in the previous step to perform gradient updates over the hidden states of the model to steer generation towards the desired class. The steps for generation of a single sample are as follows:
1. Prompt the model with BOS class SEP and generate the distribution of the next word via the language modeling head.
2. Compute the hidden embedding states of the generated text. Pass this embedding through the discriminator, which models p(a|x).
3. Shift the hidden state h_i by summing two gradients: (1) the gradient of the cross-entropy loss between the discriminator output and the desired class vector, and (2) the gradient towards higher log likelihood of the language modeling head, which models p(x). The latter is done by minimizing the KL divergence between the modified and unmodified language modeling head distributions.
4. Compute the new LM head distribution from the updated latent space.
5. Sample the next word from the new language modeling head distribution by performing nucleus sampling (Holtzman et al., 2019).
6. Repeat steps 1-3 until the termination token or the specified maximum length is reached.
We discuss further implications and limitations of this approach in Section 7. 6 EVALUATION With the described approach, we generate synthetic versions of the SST-2 and AG News datasets. Five variations are generated with different differential privacy settings: ϵ ∈ {256, 16, 8, 3} and a non-private version.
The only change between the non-private and private versions are replacing the optimizer from Adam to DP-Adam provided by the private-transformers library (Li et al., 2021). The gradients in the non-private version are still clipped to the maximum gradient norm parameter, C. 6.1 PRIVACY Differentially private training provides formal guarantees about the generated text as a consequence of the post-processing theorem. However, recent works have shown that the impact of epsilon DP on large language model training is still unclear, and we could observe empirical privacy-preservation even at high epsilon levels. To test this, we test the artificial dataset for memorization by comparing the proportion of n-grams (for n ∈ [3...7]) in the synthesized data to those present in the original dataset. Our findings are consistent with previous studies with language modeling. Empirically, we see even large epsilon settings dramatically decrease memorization in the synthesized data (Ponomareva et al., 2022). 6.2 UTILITY We measure the utility of the synthetic dataset by training a classifier over the synthesized data and evaluate the performance on the held-out test dataset. We don’t experiment with different classi- fication models since our goal is to strictly evaluate the performance of the synthesized dataset. So, we choose to use a state of the art classifier, DistilBERTForSequenceClassification, from the HuggingFace transformers library. We first train a classifier over the original dataset to produce baseline accuracies to compare the utility of the synthetic data to. Next, for each dataset variant, ϵ ∈ {inf, 256, 16, 8, 3}, we train a classifier. To measure the performance of the model, we compute the accuracy of the model over the held out test set. These results are shown in Table 1. We do not modify any hyperparameters of the classifier for each dataset. The selected parameters can be seen in Appendix A. 7 DISCUSSION In this paper, we propose a method for generating synthetic text classification datasets with differential privacy guarantees by performing conditional text generation via large language models. We show the difficulties in doing this naively, particularly exploring how strong settings of privacy impact the conditional prompting scheme which has performed well in non-DP settings. By utilizing a task-relevant second learning objective and gradient based steering of generation towards a desired class, we show conditional generation is possible even at strong privacy settings. We believe this method has potential for creating synthetic datasets that will enable companies to share and train on information without putting users’ personal information at risk. However, we want to point out some limitations and future directions for this line of work. Firstly, previous studies have shown that training neural network models with DP-SGD can result in increased bias Bagdasaryan et al. (2019). In our work, we chose to use perfectly balanced datasets in order to mitigate the problems of unequal representation of classes. This could potentially lead to fairness issues when generating synthetic data, and biases from the original data may be amplified in the new dataset (Kuppam et al., 2019; Zhu et al., 2020). Future work may investigate how using this method affects fairness among groups represented in a dataset. 
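As a concrete illustration of the memorization check described in Section 6.1, the following is a minimal sketch that compares n-gram overlap between the synthetic and original corpora; the function name and whitespace tokenization are simplifying assumptions for illustration, not the authors' code.

def ngram_overlap(synthetic_texts, original_texts, n=5):
    # Fraction of n-grams in the synthetic corpus that also occur in the original corpus.
    # Higher overlap is a rough proxy for more verbatim memorization.
    def ngrams(texts):
        grams = set()
        for text in texts:
            tokens = text.split()
            grams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        return grams
    synth, orig = ngrams(synthetic_texts), ngrams(original_texts)
    return len(synth & orig) / max(len(synth), 1)

# Report overlap for n = 3..7, as in Section 6.1:
# for n in range(3, 8):
#     print(n, ngram_overlap(synthetic_texts, original_texts, n=n))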
A HYPERPARAMETERS AND TRAINING RESULTS [Table: training loss by DP guarantee for the naive and multitask models; column headers were DP Guarantee, Loss (Naive), Loss (Multitask), but the values were not recovered.] Overall, we found that the only hyperparameters that had a significant impact on the performance of the language model were the learning rate and batch size, consistent with other works. B TEXT GENERATION EXAMPLES When performing generation through the naive model with DP guarantees, we noticed that it was often unpredictable whether the model would output text according to its conditional prompting. This is undesirable when generating text for a synthetic dataset, where the samples need to be generated for a particular class. We see that the output is much more consistent in our approach with the multitask model. This is evidence that separating transformer embeddings with respect to task-relevant factors enables more consistent text generation towards a desired class.
1. What is the focus of the paper regarding data generation and differential privacy? 2. What are the strengths and weaknesses of the proposed approach, particularly in its organization and understanding? 3. Do you have any concerns or suggestions regarding the comparisons with other works and baselines? 4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes a synthetic data generation method with differential privacy guarantees for large language models. More specifically, the authors apply the PPLM-based gradient approach with a discriminator. The proposed method is evaluated on the SST-2 and AG News datasets. Strengths And Weaknesses Strengths: The paper addresses an important problem of differential privacy for language modeling. The motivation is clearly stated. Weaknesses: The paper is not well organized. The proposed method is described in the experimental part, which leaves the reader confused about the motivation and the target of the paper. The main sampling strategy is based on a prior work called PPLM. However, PPLM is not well introduced, which makes the method part hard to understand for a reviewer without prior knowledge of it. Some baselines are not mentioned or compared against in the experimental part. For example, [1] also discussed the differential privacy problem for language modeling. [1] Shi W. et al., Selective Differential Privacy for Language Modeling, NAACL 2022 Clarity, Quality, Novelty And Reproducibility The quality and clarity are not good enough. The novelty is fair. The experimental part seems reproducible.
ICLR
Title Safe Exploration in Linear Equality Constraint Abstract With the extensive research and application, some shortcomings of reinforcement learning methods are gradually revealed. One of the considerable problems is that it is difficult for reinforcement learning methods to strictly satisfy the constraints. In this paper, a Singular Value Decomposition-based non-training method called ‘Action Decomposition Regular’ is proposed to achieve safe exploration. By adopting linear dynamics model, our method decomposes the action space into a constraint dimension and a free dimension for separate control, making policy strictly satisfy the linear equality constraint without limiting the exploration region. In addition, we show how our method should be used when the action space is limited and convex, which makes the method more suitable for real-world scenarios. Finally, we show the effectiveness of our method in a physically-based environment and prevail where reward shaping fails. N/A With the extensive research and application, some shortcomings of reinforcement learning methods are gradually revealed. One of the considerable problems is that it is difficult for reinforcement learning methods to strictly satisfy the constraints. In this paper, a Singular Value Decomposition-based non-training method called ‘Action Decomposition Regular’ is proposed to achieve safe exploration. By adopting linear dynamics model, our method decomposes the action space into a constraint dimension and a free dimension for separate control, making policy strictly satisfy the linear equality constraint without limiting the exploration region. In addition, we show how our method should be used when the action space is limited and convex, which makes the method more suitable for real-world scenarios. Finally, we show the effectiveness of our method in a physically-based environment and prevail where reward shaping fails. 1 INTRODUCTION In the past ten years, reinforcement learning(RL)(Sutton & Barto, 2018) has made significant breakthroughs in many fields, such as games(Mnih et al., 2013; Schaul et al., 2015; Mnih et al., 2015; Hasselt et al., 2015; Wang et al., 2016), robotics(Gu et al., 2017), autonomous vehicles(Sallab et al., 2017), healthcare(Yu et al., 2019). In the reinforcement learning task, the agent can obtain the policy of making the action that maximizes the long-term return. Although it can improve one’s own policy through trial and error learning under the interaction with the environment, it is difficult to strictly ensure the safety of the actions output by its policy(Garcı́a et al., 2015). Therefore, the constraint problem has become one of the active research contents in reinforcement learning recently. In the application, making such actions that violate constraints will bring serious consequences in some fields. Therefore never violating these constraints is a strict necessity in many scenarios, such as the stability of robots and avoidance of pedestrians or obstacles appearing in front of the vehicle during autonomous driving(Levinson et al., 2011; Amodei et al., 2016). In the real world, the linear equality constraints are relatively common, for example, we want the robot to achieve a certainly required configuration on a certain trajectory, where the constraint may appear at different instants in any dimension; or the robot center of mass is restricted at the beginning of the movement(Laine & Tomlin, 2019). 
And all these complex constraints typically take the form of linear equality constraints. Therefore, it is necessary to have a method that can ensure these constraints to be strictly satisfied in the real world. Researchers have carried out much meaningful research on how to better satisfy the constraint. Dalal et al. (2018) achieve good results in satisfying hard constraints, but it relies heavily on the security layer of data training and cannot cross domains. Tessler et al. (2019) can solve the mean value constraints or discounted sum constraints, but there is no guarantee that the constraints can be met during the training process. More importantly, the existing learning-based methods can hardly satisfy the constraints. In fact, the constraint guarantee for the agent’s behavioral decisionmaking benefits from knowledge about the causal mechanism that controls it, such as the dynamic model(Fisac et al., 2019). Fortunately, the designer of an agent always knows or approximately knows its dynamics(Fisac et al., 2019). For example, Lutter et al. (2020) adopt the linear dynamic model of the robot and finally, make the optimal strategy policy their action limit. This inspires people to find a balance between data-driven and model-based technology. Among the existing model-based methods, the idea of using the linear dynamic model is common(Aswani et al., 2013; 2012). Although most robots have nonlinear dynamic models, there are already many methods based on the linearization of the model. For example, sequential quadratic programming requires the continuous local approximation of the problem and then transforms it into the constrained linear quadratic regulator problem(Giftthaler et al., 2018). And iLQR(Levine & Koltun, 2013) is a method with linearizing a nonlinear model, which often appears as baselines in experiments about model-based reinforcement learning. And there are many theories about the stability of linearized systems(Spong, 1995; Russ, 2021). For convenience, this paper only discusses the case of the linear dynamic model. In this paper, we propose the ‘Action Decomposition Regular’(ADR) as shown in Fig 1. Using Singular Value Decomposition(SVD) approach, ADR decomposes the action space into a constraint dimension containing all constraint information and the remaining free dimension. The goal is to achieve better policy exploration without violating linear equality constraints at all. Under the above idea, we find a balance between the model-based technology’s control of constraints and the data-driven policy learning method. It is worth mentioning that our method is non-training and can conjunct any efficient continuous-control RL method. The main contributions of this paper are as follows: 1. We propose a non-training method called ADR that can make the reinforcement learning strictly satisfy the constraints without restricting the system’s ability to explore. And the method does not need to make assumptions about the dimensions of the constraints. 2. We give an action correction scheme with the property of Pareto optimal solution(Van Moffaert & Nowé, 2014) in convex action space and give the proof. 3. The effectiveness of the method is verified in a simulation environment with physical properties. The simulation shows good results where reward shaping fails. 2 RELATED WORK Implementing policy security through constrained reinforcement learning is an active research content(Amodei et al., 2016). 
The algorithm based on Constrained Markov Decision Processes (CMDP)(Kallenberg, 1983; Ross, 1985; Ross & Varadarajan, 1989; Altman, 1999; Le et al., 2019) is a common method. CPO(Achiam et al., 2017) is an algorithm based on CMDP, mainly inspired by TRPO(Schulman et al., 2015), to find a surrogate function that is the lower bound of the original objective function and the upper bound of the original constraint. RCPO(Tessler et al., 2019) uses the idea of PPO(Schulman et al., 2017; Heess et al., 2017), introduces the lagrange method, and solves the problem based on the adaptively updated lagrange multiplier. And a RCPO-based method uses PID to control the lagrange multiplier(Stooke et al., 2020). Recently Zhang et al. (2020) propose FOCOPS, which first finds the optimal update policy by solving a constrained optimization problem in the non-parameterized policy space, then projects the updated policy back into the parametric policy space. However, these methods require a long training process. They are shown to solve the mean value constraints or discounted sum constraints. As such, it is difficult to ensure that the constraints are met as much as possible during the training process, even for any simple constraints. Modifying the exploration process is another way to solve the constraint problem. In Dalal et al. (2018), their method requires first using data to train a security layer to modify actions according to certain criteria. Although they have achieved excellent results in their experiments, the problem is that security is very dependent on the security layer, and the linear relationship of the predicted cost may not be established. The solution of Amos & Kolter (2017) relies on a complete Quadratic Programming solver, but their solution is too expensive to calculate. In addition, there are many methods that agree to be model-based. One possible approach is to try to perform imitation learning on the trajectory obtained by the model-based optimal control policy, i.e., DAgger(Ross et al., 2011). But as stated by Bellegarda & Byl (2020), when facing with areas of state space that the expert trajectory has not visited before, policy learned only from expert data may perform poorly in these areas. And Fisac et al. (2019) propose a general safety framework based on Hamilton–Jacobi reachability methods. This safety framework also can work in conjunction with any efficient learning algorithm. But this method is computationally intensive and limited in dimension. Aswani et al. (2013) use the method about the robust model-predictive control approach and achieve good results in some problems such as quadrotor flight. But it limits the exploration ability of the system. And Berkenkamp et al. (2016; 2017) both limit the exploration region of the method. The method in Sadraddini & Belta (2016) is conservative since it does not update the model. Reward shaping is a natural alternative to constraints, influencing the agent by artificially shaping negative rewards in the state space(Dalal et al., 2018; Ng et al., 1999). But it often needs to design a modified reward function through expert knowledge(Randløv & Alstrøm, 1998) or neural network methods(Burda et al., 2018) in advance. In other words, it needs to know the occurrence of constraints in advance, but many urgent constraints are sudden. Our method overcomes the shortcomings mentioned above. A comparison with the different approaches is provided in Table 1. 
3 PRELIMINARIES

3.1 MARKOV DECISION PROCESS (MDP)

A Markov Decision Process (MDP) (Sutton & Barto, 2018) is defined by the 5-tuple (S, A, R, P, µ), where S is the state space; A is the action space; R : S × A → R is the reward function; P : S × A × S → [0, 1] is the transition kernel; and µ : S → [0, 1] is the initial state distribution. Let s_0 ∼ µ denote that the initial state s_0 is drawn from µ; likewise a_t ∼ π(· | s_t) and s_{t+1} ∼ P(· | s_t, a_t). This yields a trajectory τ = (s_0, a_0, s_1, ...). Consider a policy π = {π(a | s) : s ∈ S, a ∈ A}; the aim is to find a stationary policy that maximizes the expected discounted return, i.e., the objective function

J_R(π) = E_{π, s∼µ} [ Σ_{t=0}^{∞} γ^t r_t ],

where γ is the discount factor and r_t is the reward at time t. The update and improvement of π are therefore based on a comprehensive judgment of the rewards. For a deterministic policy, a = π(s); for a stochastic policy, a ∼ π(a | s).

3.2 EQUALITY CONSTRAINT ACTION SPACE EXPLORATION

The proposed method is based on knowledge of the dynamics. To highlight its effectiveness and its applicability to any continuous-control reinforcement learning method, the dynamics knowledge only affects the action selection stage. We first formulate the constraints and dynamics following the notation of Laine & Tomlin (2019). Without loss of generality, the constraint occurs at t = 0, 1, ..., T − 1, T. For convenience, let s ∈ R^n and a ∈ R^m, and consider the following policy problem:

a ∼ π(a | s)
s.t.  dynamics: s_{t+1} − (F_{s_t} s_t + F_{a_t} a_t + f_{1_t}) = 0,  t = 0, 1, ..., T − 1
      initial condition: s_0 ∼ µ
      constraint at t: G_{s_t} s_t + G_{a_t} a_t + g_{1_t} = 0,  t = 0, 1, ..., T − 1
      constraint at T: G_{s_T} s_T + g_{1_T} = 0

where F_{s_t}, F_{a_t} and f_{1_t} define the agent dynamics at time t = 0, 1, ..., T − 1, T; G_{s_t}, G_{a_t} and g_{1_t} define the constraints at t = 0, 1, ..., T − 1; and G_{s_T} and g_{1_T} define the constraint at t = T. The deterministic-policy case is analogous. In addition, we introduce the 'constraint-to-go' function C(s_t) used in Laine & Tomlin (2019): C(s_t) = H_{s_t} s_t + h_{1_t}, t = 0, 1, ..., T, which is analogous to the value function and stacks the residual constraint terms from s_t onward. At time T we have C(s_T) = G_{s_T} s_T + g_{1_T}.

4 ACTION DECOMPOSITION REGULAR

4.1 ACTION DECOMPOSITION

We first explain the idea of action decomposition with a simple example. As shown in the velocity coordinate system in Fig. 2, when a constraint requiring u_x = u_y occurs, we can linearly combine u_x and u_y into w = (√2/2) u_x + (√2/2) u_y and y = −(√2/2) u_x + (√2/2) u_y, so that we only need to keep y = 0 to satisfy the constraint, while the w dimension remains completely free.

4.2 SAFETY REGULAR BASED ON ACTION DECOMPOSITION

Based on the above idea, we solve the problem of safe exploration in the action space under linear equality constraints. In our method, the treatment of the constraint dimension matches the programming technique in Laine & Tomlin (2019), while the free dimension is handled according to the exploratory nature of the policy. The solution process goes backwards, starting from t = T − 1:

a ∼ π(a | s)
s.t.  s_T − (F_{s_{T−1}} s_{T−1} + F_{a_{T−1}} a_{T−1} + f_{1_{T−1}}) = 0
      a ∈ argmin_a ‖ [ G_{s_{T−1}} s_{T−1} + G_{a_{T−1}} a_{T−1} + g_{1_{T−1}} ; H_{s_T} s_T + h_{1_T} ] ‖²

Using the dynamics equation to eliminate s_T, only s_{T−1} and a_{T−1} remain in the problem. Rearranging, the problem can be rewritten as:

a ∼ π(a | s)
s.t.  a ∈ argmin_a ‖ N_{s_{T−1}} s_{T−1} + N_{a_{T−1}} a_{T−1} + n_{1_{T−1}} ‖²

where we define

N_{s_{T−1}} = [ G_{s_{T−1}} ; H_{s_T} F_{s_{T−1}} ],  N_{a_{T−1}} = [ G_{a_{T−1}} ; H_{s_T} F_{a_{T−1}} ],  n_{1_{T−1}} = [ g_{1_{T−1}} ; H_{s_T} f_{1_{T−1}} + h_{1_T} ].

At this step, the dependence of the constraint term on a is only through N_{a_{T−1}}; that is, all of the constraint information is contained in N_{a_{T−1}}. Performing SVD on N_{a_{T−1}} gives N_{a_{T−1}} = U_{T−1} Σ_{T−1} V_{T−1}^T, and we define V_{T−1}^T = [ P_{T−1}^T ; Z_{T−1}^T ], where the first r rows of V_{T−1}^T are denoted P_{T−1}^T, the last (m − r) rows are denoted Z_{T−1}^T, and r is the rank of N_{a_{T−1}}. Then we make use of the following result:

Corollary 4.1. The action a formulated at time t can be decomposed in the following form: â_t = P_t y_t + Z_t w_t.

Proof. The proof is provided in Appendix D.

We can regard y_t as the constraint dimension and w_t as the free dimension. Since policy learning also receives penalty feedback for constraint violations, the original problem is transformed into:

w_{T−1} = Z^T · a,  a ∼ π(a | s)
y_{T−1} = argmin_a ‖ N_{s_{T−1}} s_{T−1} + N_{a_{T−1}} P_{T−1} y_{T−1} + n_{1_{T−1}} ‖² = −(N_{a_{T−1}} P_{T−1})^† (N_{s_{T−1}} s_{T−1} + n_{1_{T−1}})

From these steps, the solution â_{T−1} is easily obtained: â_{T−1} = P_{T−1} y_{T−1} + Z_{T−1} w_{T−1}. We then update C(s_{T−1}) by combining â_{T−1} and (N_{s_{T−1}} s_{T−1} + N_{a_{T−1}} â_{T−1} + n_{1_{T−1}}):

C(s_{T−1}) = H_{s_{T−1}} s_{T−1} + h_{1_{T−1}} = (I − N_{a_{T−1}} P_{T−1} (N_{a_{T−1}} P_{T−1})^†) N_{s_{T−1}} s_{T−1} + (I − N_{a_{T−1}} P_{T−1} (N_{a_{T−1}} P_{T−1})^†) n_{1_{T−1}}

where (N_{a_{T−1}} P_{T−1})^† is the pseudo-inverse of N_{a_{T−1}} P_{T−1}. We can show that C(s_t) = 0 if N_{a_{T−1}} P_{T−1} is an invertible matrix.
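To make the per-step computation above concrete, here is a minimal NumPy sketch of the ADR correction; the matrix names follow the paper, but the function itself is an illustrative assumption rather than the authors' released code.

import numpy as np

def adr_step(N_s, N_a, n_1, s, a_policy):
    # Split the policy's proposed action into a constraint part (y) and a free part (w),
    # then recombine: a_hat = P y + Z w.
    r = np.linalg.matrix_rank(N_a)
    _, _, Vt = np.linalg.svd(N_a)                  # N_a = U Sigma V^T
    P = Vt[:r].T                                   # constraint directions (first r rows of V^T)
    Z = Vt[r:].T                                   # free directions (remaining m - r rows)
    # Constraint dimension: least-squares solution of min_y ||N_s s + N_a P y + n_1||^2
    y = -np.linalg.pinv(N_a @ P) @ (N_s @ s + n_1)
    # Free dimension: keep the RL policy's component in the unconstrained subspace
    w = Z.T @ a_policy
    return P @ y + Z @ w

In practice this correction would be applied to the policy's proposed action at each constrained time step, with N_s, N_a and n_1 built backwards in time from the dynamics and the constraint-to-go terms as described above.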
The above method (also called -constraint method or main objective method) is widely used, and its optimal solution is the effective solution of MOO(also called the Pareto optimum solving) when the limited action space is a convex set. We define the following problems: Problem 1. min ( f1(a) f2(a) ) = ( ‖PTa− PT â‖2 ‖ZTa− ZT â‖2 ) s.t. a ∈ D Problem 2. min ( ‖PTa− PT â‖2 ) s.t. a ∈ D Problem 3. min ( ‖ZTa− ZT â‖2 ) s.t. a ∈ H where H is the efficient solution set of Problem. 2. This result can be demonstrated by the following Theorem 5.1. Theorem 5.1. Suppose to exist ā ∈ D, D is a convex set, subject to ā is the optimal solution of Problem. 3, then ā is not only the weakly effective solution of Problem. 1, but also the Pareto optimal solution of Problem. 1, and it is unique. Proof. See Appendix E of the supplementary materials. 6 EXPERIMENTS Although we expect to show benefits from combining ADR with any continuous-control RL method, for the following experiments, we use the Deep Deterministic Policy Gradient (DDPG)(Lillicrap et al., 2015). Although DDPG (Lillicrap et al., 2015) is a deterministic policy that can directly output actions, in fact, our method is not only suitable for reinforcement learning algorithms for deterministic policy, but also has applicability for stochastic policy. Our experiments are based on the current popular multi-agent particle world (Lowe et al., 2017) with continuous observation and action space and some basic simulated physics. We design two new sets of simulation experiments based on physical constraints to test the effectiveness of ADR as shown in Fig 4. It is worth mentioning that no new hyperparameters are introduced in the process of our experiment. We provide exact details about experiments in Appendix B and hyperparameters about the method in Appendix C. 6.1 KEEP IT STRAIGHT 6.1.1 EXPERIMENT DESCRIPTION The agent starts from a random starting point to a random final landmark. But we require the agent to maintain a straight line movement as accurately as possible in a certain direction during the first period of time. Although this task seems simple, it is not easy to satisfy the accuracy requirements for RL. That is because the larger learning rate of the algorithm leads the faster convergence and the poorer stability, and the smaller learning rate of the algorithm leads to slow convergence and waste of time(Smith, 2017). In this experiment, the reward is set based on the negative Euclidean distance from the final landmark at each moment. At each step, the agent also obtains the reward for minimizing energy consumption based on the negative two-norm of action. The penalty is set based on the two-norm of the velocity deviating from the current motion direction. Finally, the violated constraint is equal to the accumulation of the two-norm of the distance from the original straight line at each time step when the constraint occurs. In fact, this will require the agent to learn to approach the landmark more quickly while keeping the direction of motion stable in the early stage. 6.1.2 EXPERIMENT ANALYSIS Learning curves are provided in the Fig. 5. For the reward curve, DDPG needs a lot of episodes of training to obtain higher rewards, but DDPG+ADR gets higher rewards at the beginning and is always higher than DDPG in the whole training process. For the violated constraint curve, DDPG seriously violates the constraints at the beginning of training, and can not strictly satisfy the constraints in the whole training process. 
In fact, the minimum value of constraint violation in a single round of DDPG is 7.4 × 10−8. But DDPG+ADR can keep the violation of constraints in the order of 10−16 in the whole process, which can be considered negligible. The experiments show that, on the one hand, DDPG+ADR can indeed make the actions output by RL’s policy strictly satsify the linear equality constraints, even in the training process. On the other hand, compared with DDPG, DDPG+ADR shows better performance in obtaining rewards. 6.2 PASSING THE INTERMEDIATE STATION 6.2.1 EXPERIMENT DESCRIPTION The agent is still required to go from a random starting point to a random final landmark. And the agent will suddenly receive a constraint signal to go to an intermediate station at the intermediate moment. Note that since the agent is constrained only at the intermediate moment, the agent will exceed its physical limitations due to the distance of the intermediate station, which is too far away. In this case, the agent can only approach as close as possible and never satisfy the constraint. In fact, this experiment requires the algorithm to be robust when the agent encounters a sudden constraint that exceeds its physical limit. In this experiment, the reward is set based on the negative Euclidean distance from the final landmark at each moment. At the same time, the agent also obtains the negative two-norm of action as the reward for minimizing energy consumption. The penalty for the agent receives and the violated constraint in each episode are set based on the Euclidean distance from the intermediate station. 6.2.2 REWARD SHAPING For comparison, we also conduct reward shaping experiments on the DDPG algorithm. At each time step before the end of the constraint, we set the modified reward function(Ng et al., 1999) to the same scale as the original reward, which is set by the following formula: rF = φ(st)− φ(st−1), φ(s0) = 0 Where φ is set based on the distance from the intermediate station, see Appendix B for details. 6.2.3 EXPERIMENT ANALYSIS The experimental results are shown in the Fig. 6. Compared with DDPG, DDPG+ADR has demonstrated superior performance, not only in terms of cumulative rewards much higher, but also much smaller in violation of constraints. Surprisingly, the design of reward shaping does not make DDPG run better but have an adverse effect. It means that the value function of this task is complicated, and the reward shaping that only relies on constraints is quite different from the value function. This shows that at the moment when the constraint occurs, DDPG+ADR really shows robustness. It helps the agent make the action that satisfies the constraint as much as possible and minimizes the missed reward. 7 DISCUSSION In this paper, we propose a simple and practical approach that can effectively solve the problem of action exploration in reinforcement learning under the linear equality constraints. Our method ADR is based on the linear dynamics model and uses the idea of SVD to decompose the action space into constrained dimension and free dimension to control separately. At the same time, we propose feasible solutions to the situation that constraints exceed convex action space, and ensure that actions satisfy the constraints as much as possible within a single time step, and the loss of rewards can be minimized. In the experiment, compared with DDPG, DDPG+ADR can obtain more rewards and stricter constraints satisfaction in both tasks. At the same time, DDPG+ADR shows its robustness in sudden constrained tasks. 
It is worth mentioning that our method has the advantages of no training and does not need to make assumptions about the dimensions of constraints. An exciting feature is that our method can be combined with any continuous-control RL method. In addition, there are many promising ideas for future work: the use of interior point methods to improve the equality constraints; the deeper integration of SVD ideas with reinforcement learning(Gemp et al., 2020). And in the real world, some dynamic models are too complicated to be researched. In future work, we plan to use Piecewise Linear Neural Networks(PLNN) which can explain the non-linear dynamic model of an object(Nagabandi et al., 2018; Chu et al., 2018) to extend the applicability of our method. A PSEUDO CODE Algorithm 1: Action Decomposition Regular Input: constraintGst , Gat , g1t , GsT , g1T ; policy network πθ ; dynamics Fst , Fat , f1t ; t = 0, 1, . . . , T − 1 Output: action at; t = 0, 1, . . . , T − 1 1: if T > 0 then 2: HsT ← GsT 3: hsT ← gsT 4: for t = T − 1, T − 2, . . . , 0 do 5: Nat ← ( Gat Hst+1Fat ) 6: Nst ← ( Gst Hst+1Fst ) 7: n1t ← ( g1t Hst+1f1t + h1t+1 ) 8: V Tt ← SVD(Nat) 9: Pt, Zt ← V Tt 10: Hst ← (I −NatPt(NatPt)†)Nst 11: h1t ← (I −NatPt(NatPt)†)n1t 12: end for 13: end if 14: if T > 0 then 15: for t = 0, 1, . . . , T − 1 do 16: at ← πθ 17: Receive st 18: yt ← −(NatPt)†(Nstst + n1t) 19: wt ← ZTt at 20: at ← Ptyt + Ztwt 21: end for 22: else 23: at ← πθ 24: end if B EXPERIMENT DETAILS All the experiments we conducted are built on Python(3.6) and Tensorflow (1.8.0) in Intel i7-10875H CPU. B.1 KEEP IT STRAIGHT We used the multi-agent particle environment (Mordatch & Abbeel, 2017) provided by OpenAI Gym(Brockman et al., 2016) for this set of tasks. The agent moves on a two-dimensional plane and travels from a random starting point to a random goal point. At the beginning of each episode, we require the agent to accurately move in a straight line in the y-axis direction, similar to walking out of a parking space or crossing a narrow road. In our setting, the step length of an episode is 26 steps, so the duration of this straight-going phase should not be too long, and our setting is 5 steps. For the reward of the agent in the experiment, we set the following: 1. Reward for the agent to go to the goal: rgoal = −‖pagent − pgoal‖22 2. Reward for the agent to keep moving in a straight line: rkeep = −10000|vy|2 3. Reward for the agent about control Effort Penalty: rcontrol = −0.01‖a‖22 Usually when we face such a multi-objective optimization problem(MOO), we always impose a large weight on the hard constraint. In order to let DDPG and DDPG+ADR learn to keep the straight line as hard as possible, we both set the weight to 10000. This has no effect on the comparison of our method ADR. where pagent, pgoal are the positions of the agent and goal point. And vy is the velocity of the agent in y-axis. And the constraint setting is: constraint : vy = 0 In the multi-agent particle environment (Mordatch & Abbeel, 2017), a ∈ R5 represents the join forces of the agent, and s ∈ R4 is composed of the speed of the agent and the distance to the goal point. Regardless of noise, and let the mass m of the agent be 1, we fully follow the dynamic equation in multi-agent particle environment(Mordatch & Abbeel, 2017): v = Amadt+ (1− d)v x = vdt+ x A = ( 0, 1,−1, 0, 0 0, 0, 0, 1,−1 ) where A is the matrix that turn the resultant force into the driving force of agent. And dt = 0.1 is the step size of a single time step. 
B.2 PASSING THE INTERMEDIATE STATION
The agent is again on a two-dimensional plane, going from a random starting point to a random goal point. Unlike before, there is an intermediate station offset by (0.3, 0.3) from the goal point. At the intermediate moment the agent receives the constraint of getting as close to the intermediate station as possible. Each episode again lasts 26 time steps, and we choose the intermediate moment t = 12; the constraint takes effect only at that moment. This requires the agent to learn to take appropriate actions in emergency situations. The rewards in this task are:
1. Reward for going to the goal: r_goal = −‖p_agent − p_goal‖_2^2
2. Reward for moving towards the intermediate station: r_pass = −10000 ‖(0.3, 0.3) − (p_agent − p_goal)‖_2^2
3. Control-effort penalty: r_control = −0.01 ‖a‖_2^2
Similarly, so that the policies of DDPG and DDPG+ADR learn to satisfy the hard constraint as much as possible, we place a larger weight on the second reward. The constraint is:
constraint: p_agent − p_goal = (0.3, 0.3)
The dynamics are exactly the same as in the task above. For reward shaping, r_pass is replaced by the shaping term r_{F_t}, which is active from t = 1 to t = 12 and defined as
r_{F_t} = r_{pass_t} − r_{pass_{t−1}}, t = 1, 2, ..., 12,
where r_{pass_t} is r_pass evaluated at the current state and r_{pass_{t−1}} at the state of the previous moment, with r_{pass_0} = φ(s_0) = 0 (Ng et al., 1999).

C HYPERPARAMETERS FOR EXPERIMENTS
DDPG and DDPG+ADR use exactly the same hyperparameters, and ADR introduces no additional ones; in fact, no parameter tuning was needed in our experiments. The activation function of the MLP is ReLU. Table 2 lists the hyperparameters used in the experiments.

D PROOF OF COROLLARY 4.1
Proof. Since V_t is orthogonal (its columns form an orthonormal basis), we have â_t = V_t V_t^T â_t, where V_t^T = [ P_t^T ; Z_t^T ]. Therefore â_t = (P_t, Z_t) V_t^T â_t = P_t P_t^T â_t + Z_t Z_t^T â_t = P_t y_t + Z_t w_t, with y_t = P_t^T â_t and w_t = Z_t^T â_t.

E PROOF OF THEOREM 5.1
Proof. The objective functions of Problem 2 and Problem 3 are convex, D is a convex set, and a local minimum of a convex function is a global minimum, so Problem 2 and Problem 3 always admit optimal solutions. Suppose ā is not a Pareto optimal solution of Problem 1. Then there exists a ∈ D satisfying one of the following two cases: either f_1(a) ≤ f_1(ā) and f_2(a) < f_2(ā), or f_1(a) < f_1(ā) and f_2(a) ≤ f_2(ā). The first case contradicts the optimality of ā in Problem 3, and the second case contradicts its optimality in Problem 2. Uniqueness follows because V^T = [ P^T ; Z^T ] forms an orthonormal basis of the action space.
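To make the correction of Section 5.2 concrete, the following rough sketch approximates the two-stage scheme of Problems 2 and 3 for a box-shaped action space. It is our illustration, not the authors' implementation: instead of solving the two problems lexicographically, it places a large weight on the constrained component, which approximates the same priority.

import numpy as np
from scipy.optimize import lsq_linear

def project_to_box(a_hat, P, Z, low, high, lam=1e6):
    # Approximate Problems 2 and 3 for D = [low, high]^m: first stay close to the
    # constrained component P^T a_hat (large weight lam), then to the free component.
    A = np.vstack([np.sqrt(lam) * P.T, Z.T])
    b = np.concatenate([np.sqrt(lam) * (P.T @ a_hat), Z.T @ a_hat])
    return lsq_linear(A, b, bounds=(low, high)).x

An exact implementation would instead first minimize ‖P^T a − P^T â‖_2 over D and then minimize ‖Z^T a − Z^T â‖_2 over the resulting solution set, as stated in Theorem 5.1.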
1. What is the focus and contribution of the paper on action exploration? 2. What are the strengths of the proposed approach, particularly in terms of its ability to satisfy safety criteria? 3. What are the limitations of the paper's experimental results, especially regarding their applicability to realistic domains? 4. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
In this paper, the authors propose an SVD-based method for action exploration under certain safety criteria. The intuition is clear: an SVD-based decomposition can split the action space into two subspaces, one constrained and the other free.
Review
This paper proposes a method that makes the policy satisfy the constraint as much as possible while minimizing the loss of reward. The authors also show experimentally that off-the-shelf RL algorithms can be augmented with the ADR module so that certain constraints are satisfied. The paper is clearly written, both theoretically and empirically. The experiments are valid on simple environments such as "keep it straight", and the authors also show that the method works under sudden constraints. The main concern about this paper is whether the proposed ideas can be applied to more realistic domains such as games and robotics.
ICLR
Title Safe Exploration in Linear Equality Constraint

Abstract
With extensive research and application, some shortcomings of reinforcement learning methods have gradually been revealed. One considerable problem is that it is difficult for reinforcement learning methods to strictly satisfy constraints. In this paper, a Singular Value Decomposition-based, non-training method called 'Action Decomposition Regular' is proposed to achieve safe exploration. By adopting a linear dynamics model, our method decomposes the action space into a constraint dimension and a free dimension for separate control, making the policy strictly satisfy the linear equality constraint without limiting the exploration region. In addition, we show how our method should be used when the action space is limited and convex, which makes the method more suitable for real-world scenarios. Finally, we show the effectiveness of our method in a physically-based environment, where it prevails where reward shaping fails.

1 INTRODUCTION
In the past ten years, reinforcement learning (RL) (Sutton & Barto, 2018) has made significant breakthroughs in many fields, such as games (Mnih et al., 2013; Schaul et al., 2015; Mnih et al., 2015; Hasselt et al., 2015; Wang et al., 2016), robotics (Gu et al., 2017), autonomous vehicles (Sallab et al., 2017), and healthcare (Yu et al., 2019). In a reinforcement learning task, the agent learns a policy that selects actions to maximize the long-term return. Although the agent can improve its policy through trial-and-error interaction with the environment, it is difficult to strictly guarantee the safety of the actions its policy outputs (García et al., 2015). The constraint problem has therefore recently become an active research topic in reinforcement learning. In applications, actions that violate constraints can have serious consequences, so never violating them is a strict necessity in many scenarios, such as maintaining the stability of robots or avoiding pedestrians and obstacles that appear in front of the vehicle during autonomous driving (Levinson et al., 2011; Amodei et al., 2016). In the real world, linear equality constraints are relatively common: for example, we may want a robot to achieve a required configuration on a certain trajectory, where the constraint may appear at different instants in any dimension, or the robot's center of mass may be restricted at the beginning of the movement (Laine & Tomlin, 2019).
All of these complex constraints typically take the form of linear equality constraints. It is therefore necessary to have a method that can ensure such constraints are strictly satisfied in the real world. Researchers have carried out much meaningful work on how to better satisfy constraints. Dalal et al. (2018) achieve good results on hard constraints, but their method relies heavily on a safety layer trained from data and cannot transfer across domains. Tessler et al. (2019) can handle mean-value or discounted-sum constraints, but there is no guarantee that the constraints are met during the training process. More importantly, existing learning-based methods can hardly satisfy constraints strictly. In fact, guaranteeing constraints on the agent's behavioral decision-making benefits from knowledge of the causal mechanism that governs it, such as the dynamics model (Fisac et al., 2019). Fortunately, the designer of an agent always knows, or approximately knows, its dynamics (Fisac et al., 2019). For example, Lutter et al. (2020) adopt the linear dynamics model of the robot and ultimately use the optimal policy as their action limit. This inspires the search for a balance between data-driven and model-based techniques. Among existing model-based methods, the idea of using a linear dynamics model is common (Aswani et al., 2013; 2012). Although most robots have nonlinear dynamics, there are already many methods based on linearizing the model. For example, sequential quadratic programming requires successive local approximations of the problem and transforms it into a constrained linear-quadratic-regulator problem (Giftthaler et al., 2018), and iLQR (Levine & Koltun, 2013) linearizes a nonlinear model and often appears as a baseline in experiments on model-based reinforcement learning. There are also many results on the stability of linearized systems (Spong, 1995; Russ, 2021). For convenience, this paper only discusses the case of a linear dynamics model.

In this paper, we propose the 'Action Decomposition Regular' (ADR), as shown in Fig. 1. Using the Singular Value Decomposition (SVD), ADR decomposes the action space into a constraint dimension that contains all constraint information and a remaining free dimension. The goal is to achieve better policy exploration without violating the linear equality constraints at all. With this idea, we find a balance between model-based control of the constraints and data-driven policy learning. It is worth mentioning that our method requires no training and can be used in conjunction with any efficient continuous-control RL method. The main contributions of this paper are as follows:
1. We propose a non-training method called ADR that makes reinforcement learning strictly satisfy the constraints without restricting the system's ability to explore. The method does not need to make assumptions about the dimension of the constraints.
2. We give an action correction scheme with the Pareto-optimality property (Van Moffaert & Nowé, 2014) in a convex action space, together with a proof.
3. The effectiveness of the method is verified in a simulation environment with physical properties. The simulation shows good results where reward shaping fails.

2 RELATED WORK
Implementing policy safety through constrained reinforcement learning is an active research area (Amodei et al., 2016).
Algorithms based on Constrained Markov Decision Processes (CMDPs) (Kallenberg, 1983; Ross, 1985; Ross & Varadarajan, 1989; Altman, 1999; Le et al., 2019) are a common approach. CPO (Achiam et al., 2017), mainly inspired by TRPO (Schulman et al., 2015), finds a surrogate function that is a lower bound of the original objective and an upper bound of the original constraint. RCPO (Tessler et al., 2019) builds on PPO (Schulman et al., 2017; Heess et al., 2017), introduces the Lagrangian method, and solves the problem with an adaptively updated Lagrange multiplier; a follow-up RCPO-based method controls the Lagrange multiplier with a PID controller (Stooke et al., 2020). Recently, Zhang et al. (2020) proposed FOCOPS, which first finds the optimal updated policy by solving a constrained optimization problem in the non-parameterized policy space and then projects the updated policy back into the parametric policy space. However, these methods require a long training process and are only shown to handle mean-value or discounted-sum constraints; as such, it is difficult to ensure that the constraints are met as much as possible during training, even for simple constraints.

Modifying the exploration process is another way to handle constraints. The method of Dalal et al. (2018) first uses data to train a safety layer that modifies actions according to certain criteria. Although they achieve excellent results in their experiments, safety depends heavily on the learned safety layer, and the assumed linear relationship of the predicted cost may not hold. The solution of Amos & Kolter (2017) relies on a complete quadratic-programming solver, but it is too expensive to compute. In addition, many methods are model-based. One possible approach is to perform imitation learning on the trajectories produced by a model-based optimal control policy, e.g., DAgger (Ross et al., 2011); but as stated by Bellegarda & Byl (2020), when facing areas of the state space that the expert trajectories have not visited, a policy learned only from expert data may perform poorly. Fisac et al. (2019) propose a general safety framework based on Hamilton–Jacobi reachability methods; it can also work in conjunction with any efficient learning algorithm, but it is computationally intensive and limited in dimension. Aswani et al. (2013) use a robust model-predictive control approach and achieve good results on problems such as quadrotor flight, but it limits the exploration ability of the system, and Berkenkamp et al. (2016; 2017) likewise limit the exploration region. The method in Sadraddini & Belta (2016) is conservative since it does not update the model.

Reward shaping is a natural alternative to constraints, influencing the agent by artificially shaping negative rewards in the state space (Dalal et al., 2018; Ng et al., 1999). However, it usually requires designing a modified reward function in advance, either through expert knowledge (Randløv & Alstrøm, 1998) or neural-network methods (Burda et al., 2018). In other words, it needs to know the occurrence of the constraints in advance, while many urgent constraints are sudden. Our method overcomes the shortcomings mentioned above. A comparison with the different approaches is provided in Table 1.
3 PRELIMINARIES

3.1 MARKOV DECISION PROCESS (MDP)
A Markov Decision Process (MDP) (Sutton & Barto, 2018) is defined by the 5-tuple (S, A, R, P, µ), where S is the state space, A is the action space, R : S × A → R is the reward function, P : S × A × S → [0, 1] is the transition kernel, and µ : S → [0, 1] is the initial state distribution. Let s_0 ∼ µ denote that the initial state s_0 is drawn from µ; similarly, a_t ∼ π(· | s_t) and s_{t+1} ∼ P(· | s_t, a_t). This yields a trajectory τ = (s_0, a_0, s_1, ...). Consider a policy π = {π(a | s) : s ∈ S, a ∈ A}; the aim is to find a stationary policy that maximizes the expected discounted return, i.e., the objective function
J_R(π) = E_{π, s_0 ∼ µ} [ Σ_{t=0}^{∞} γ^t r_t ],
where γ is the discount factor and r_t is the reward at time t. The update and improvement of π is therefore based on a comprehensive judgment of all the rewards. For a deterministic policy, a = π(s); for a stochastic policy, a ∼ π(a | s).

3.2 EQUALITY CONSTRAINT ACTION SPACE EXPLORATION
The method we propose relies on knowledge of the dynamics. To highlight the actual effectiveness of the method and its applicability to any continuous-control reinforcement learning method, this knowledge affects only the action-selection stage. We first formulate the constraints and dynamics following the notation of Laine & Tomlin (2019). Without loss of generality, the constraint occurs at t = 0, 1, ..., T−1, T. For convenience, let s ∈ R^n and a ∈ R^m. We address the following policy problem:
a ∼ π(a | s)
s.t. dynamics: s_{t+1} − (F_{s_t} s_t + F_{a_t} a_t + f_{1_t}) = 0, t = 0, 1, ..., T−1
initial condition: s_0 ∼ µ
constraint at t: G_{s_t} s_t + G_{a_t} a_t + g_{1_t} = 0, t = 0, 1, ..., T−1
constraint at T: G_{s_T} s_T + g_{1_T} = 0
where F_{s_t}, F_{a_t} and f_{1_t} define the agent dynamics at time t = 0, 1, ..., T−1; G_{s_t}, G_{a_t} and g_{1_t} define the constraints at t = 0, 1, ..., T−1; and G_{s_T}, g_{1_T} define the constraint at t = T. The deterministic-policy case is analogous. In addition, we introduce the 'constraint-to-go' function C(s_t) used in Laine & Tomlin (2019): C(s_t) = H_{s_t} s_t + h_{1_t}, t = 0, 1, ..., T, which plays a role similar to the value function and stacks the residual constraints from state s_t onward. At time T, C(s_T) = G_{s_T} s_T + g_{1_T}.

4 ACTION DECOMPOSITION REGULAR

4.1 ACTION DECOMPOSITION
We first explain the idea of action decomposition with a simple example. As shown in the velocity coordinate system of Fig. 2, when a constraint requiring u_x = u_y occurs, we can linearly combine u_x and u_y into w = (√2/2) u_x + (√2/2) u_y and y = −(√2/2) u_x + (√2/2) u_y, so that we only need to keep y = 0 to satisfy the constraint, while the w dimension is completely free.
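As an informal numerical illustration of this change of coordinates (a sketch added for clarity; the variable names are ours, not the paper's):

import numpy as np

# Rotation that maps (u_x, u_y) to (w, y) exactly as above.
R = np.array([[ np.sqrt(2) / 2,  np.sqrt(2) / 2],
              [-np.sqrt(2) / 2,  np.sqrt(2) / 2]])

u = np.array([0.7, 0.7])      # a velocity with u_x = u_y
w, y = R @ u
print(np.isclose(y, 0.0))     # True: the constraint u_x = u_y is exactly y = 0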
4.2 SAFETY REGULAR BASED ON ACTION DECOMPOSITION
We solve the problem of safe exploration in the action space under the linear equality constraint based on the above idea. In our method, the treatment of the constraint dimension matches the programming technique in Laine & Tomlin (2019), while the free dimension is handled according to the exploratory nature of the policy. The solution process goes backwards, starting from t = T−1:
a ∼ π(a | s)
s.t. s_T − (F_{s_{T−1}} s_{T−1} + F_{a_{T−1}} a_{T−1} + f_{1_{T−1}}) = 0
a ∈ arg min_a ‖ [ G_{s_{T−1}} s_{T−1} + G_{a_{T−1}} a_{T−1} + g_{1_{T−1}} ; H_{s_T} s_T + h_{1_T} ] ‖_2
We use the dynamics equation to eliminate s_T, so that only s_{T−1} and a_{T−1} appear in the problem. Rearranging, the problem is rewritten as
a ∼ π(a | s)
s.t. a ∈ arg min_a ‖ N_{s_{T−1}} s_{T−1} + N_{a_{T−1}} a_{T−1} + n_{1_{T−1}} ‖_2
where we define N_{s_{T−1}} = [ G_{s_{T−1}} ; H_{s_T} F_{s_{T−1}} ], N_{a_{T−1}} = [ G_{a_{T−1}} ; H_{s_T} F_{a_{T−1}} ], and n_{1_{T−1}} = [ g_{1_{T−1}} ; H_{s_T} f_{1_{T−1}} + h_{1_T} ]. At this step, the constraint term depends on a only through N_{a_{T−1}}; that is, all information about the constraint is contained in N_{a_{T−1}}. Performing SVD on N_{a_{T−1}} gives N_{a_{T−1}} = U_{T−1} Σ_{T−1} V_{T−1}^T, and we write V_{T−1}^T = [ P_{T−1}^T ; Z_{T−1}^T ], where P_{T−1}^T denotes the first r rows of V_{T−1}^T, Z_{T−1}^T the last (m − r) rows, and r is the rank of N_{a_{T−1}}. We then make use of the following result:
Corollary 4.1. The action formulated at time t can be decomposed as â_t = P_t y_t + Z_t w_t.
Proof. The proof is provided in Appendix D.
We can regard y_t as the constraint dimension and w_t as the free dimension. Since policy learning already receives the penalty feedback caused by constraint violations, the original problem is transformed into
w_{T−1} = Z_{T−1}^T a, a ∼ π(a | s)
y_{T−1} = arg min_y ‖ N_{s_{T−1}} s_{T−1} + N_{a_{T−1}} P_{T−1} y + n_{1_{T−1}} ‖_2 = −(N_{a_{T−1}} P_{T−1})^† (N_{s_{T−1}} s_{T−1} + n_{1_{T−1}})
From these two components the corrected action is obtained as â_{T−1} = P_{T−1} y_{T−1} + Z_{T−1} w_{T−1}. Combining â_{T−1} with (N_{s_{T−1}} s_{T−1} + N_{a_{T−1}} â_{T−1} + n_{1_{T−1}}), we update C(s_{T−1}):
C(s_{T−1}) = H_{s_{T−1}} s_{T−1} + h_{1_{T−1}} = (I − N_{a_{T−1}} P_{T−1} (N_{a_{T−1}} P_{T−1})^†) N_{s_{T−1}} s_{T−1} + (I − N_{a_{T−1}} P_{T−1} (N_{a_{T−1}} P_{T−1})^†) n_{1_{T−1}},
where (N_{a_{T−1}} P_{T−1})^† is the pseudo-inverse of N_{a_{T−1}} P_{T−1}. In particular, C(s_{T−1}) vanishes identically if N_{a_{T−1}} P_{T−1} is an invertible matrix. The same recursion is then applied backwards for t = T−2, ..., 0.

5 PRACTICAL IMPLEMENTATION

5.1 IMPLEMENTATION DETAIL
We divide the use of ADR into two cases, shown in Fig. 3. In the first case, when the agent receives no constraint signal, the action produced by the policy network is executed directly, whether the policy is deterministic or stochastic. In the second case, when the agent receives constraint signals that it must comply with over a period of time (t = 0, 1, ..., T−1, T) in the future, we start counting time from the moment the signals are received. The constraint signals for this period are processed by ADR to obtain the constraint-dimension action and the free-dimension projection matrix, and the action output by the RL policy is corrected with these quantities to obtain the action that is actually executed; a minimal code sketch of this correction step is given below. In addition, we give a method for handling situations where the selected action violates the convex action space; the details are provided in Subsection 5.2. A detailed pseudo code is provided in Appendix A of the supplementary materials.
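To make the correction step concrete, here is a minimal numpy sketch (our illustration, not the authors' implementation; it assumes the stacked matrices N_{a_t}, N_{s_t}, n_{1_t} of Section 4.2 have already been formed for the current step):

import numpy as np

def adr_correct(a_policy, s, N_a, N_s, n_1):
    # Split the action space via SVD of N_a into constraint directions P and free directions Z.
    _, _, Vt = np.linalg.svd(N_a)
    r = np.linalg.matrix_rank(N_a)
    P, Z = Vt[:r].T, Vt[r:].T
    # Constrained dimension: least-squares solve y = -(N_a P)^+ (N_s s + n_1).
    y = -np.linalg.pinv(N_a @ P) @ (N_s @ s + n_1)
    # Free dimension: keep the component of the RL policy's action.
    w = Z.T @ a_policy
    return P @ y + Z @ w

Here a_policy is the raw output of the policy network; when no constraint signal is active, the raw action would be executed unchanged.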
5.2 CONVEX ACTION SPACE
Various physical limitations on the action appear in real-world applications, and such physical limits lead to a limited action space. In this section we discuss the most common case, a convex action space. In fact, the physical limitation is itself a constraint whenever the selected action exceeds it. In order to satisfy the hard constraint (Chen et al., 2021) as much as possible, we suggest that when the action exceeds the physical limitation, one first optimizes over the constraint dimension of the action space to find the constraint-dimension action closest to ADR's recommendation, and then finds the closest free-dimension action to the one suggested by the RL policy. This ensures that actions obtain higher rewards while satisfying the constraints as much as possible. This is in fact a multi-objective optimization problem (MOO) (Miettinen, 2012; Lin et al., 2019). The above method (also called the ε-constraint method or main-objective method) is widely used, and its optimal solution is an efficient solution of the MOO (also called a Pareto optimum) when the limited action space is a convex set. We define the following problems:
Problem 1. min ( f_1(a), f_2(a) ) = ( ‖P^T a − P^T â‖_2, ‖Z^T a − Z^T â‖_2 ) s.t. a ∈ D
Problem 2. min ‖P^T a − P^T â‖_2 s.t. a ∈ D
Problem 3. min ‖Z^T a − Z^T â‖_2 s.t. a ∈ H
where H is the efficient solution set of Problem 2. This construction is justified by the following Theorem 5.1.
Theorem 5.1. Suppose there exists ā ∈ D, where D is a convex set, such that ā is the optimal solution of Problem 3. Then ā is not only a weakly efficient solution of Problem 1 but also a Pareto optimal solution of Problem 1, and it is unique.
Proof. See Appendix E of the supplementary materials.

6 EXPERIMENTS
Although we expect benefits from combining ADR with any continuous-control RL method, for the following experiments we use Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015). Although DDPG is a deterministic policy that directly outputs actions, our method is applicable not only to reinforcement learning algorithms with deterministic policies but also to stochastic policies. Our experiments are based on the popular multi-agent particle world (Lowe et al., 2017), with continuous observation and action spaces and some basic simulated physics. We design two new sets of simulation experiments based on physical constraints to test the effectiveness of ADR, as shown in Fig. 4. It is worth mentioning that no new hyperparameters are introduced in our experiments. We provide exact details about the experiments in Appendix B and the hyperparameters in Appendix C.

6.1 KEEP IT STRAIGHT

6.1.1 EXPERIMENT DESCRIPTION
The agent travels from a random starting point to a random final landmark, but we require it to maintain a straight-line movement as accurately as possible in a given direction during the first period of time. Although this task seems simple, it is not easy for RL to satisfy the accuracy requirement: a larger learning rate gives faster convergence but poorer stability, while a smaller learning rate gives slow convergence and wastes time (Smith, 2017). In this experiment, the reward is based on the negative Euclidean distance to the final landmark at each moment. At each step, the agent also obtains a reward for minimizing energy consumption, based on the negative two-norm of the action. The penalty is based on the two-norm of the velocity component that deviates from the required motion direction. Finally, the violated constraint equals the accumulation, over the time steps at which the constraint is active, of the two-norm of the distance from the original straight line. In effect, the agent must learn to approach the landmark quickly while keeping the direction of motion stable in the early stage.

6.1.2 EXPERIMENT ANALYSIS
Learning curves are provided in Fig. 5. For the reward curve, DDPG needs many episodes of training to obtain higher rewards, whereas DDPG+ADR obtains higher rewards from the beginning and remains above DDPG throughout training. For the violated-constraint curve, DDPG seriously violates the constraints at the beginning of training and cannot strictly satisfy them at any point in the whole training process.
1. What is the focus and contribution of the paper regarding constrained reinforcement learning? 2. What are the strengths of the proposed method, particularly in its novel approach to decomposing the action space? 3. What are the weaknesses of the paper regarding its writing, organization, and formatting? 4. Do you have any questions or suggestions regarding the notation and definitions used in the paper? 5. What are your concerns regarding the experimental results and comparisons with other methods?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a non-training method, 'Action Decomposition Regular' (ADR), for the constrained reinforcement learning problem, which decomposes the action space into a constrained dimension and a free dimension using SVD on top of a linear dynamics model. It theoretically proves the applicability of ADR to convex action spaces, and the experimental results demonstrate its advantage over unconstrained methods and reward shaping.
Review
Strengths: The method is well motivated and the technical contribution is significant. The problem considered is interesting and the proposed method is novel, especially the idea of decomposing the action space, which I believe will be inspiring for future work.
Weaknesses: Basically, the writing and organization of this paper need improvement. Some parts of the paper are very confusing; I have the following questions and suggestions for the authors.
(1) I highly recommend the authors number the important equations.
(2) For the first objective function in Section 4.2, I don't understand why the constraints inside the arg min are equations, while in the following text these equations become functions, i.e., without "= 0".
(3) Clarify the definition of some notations, including F for the agent dynamics and G for the constraints, whose dimensions should be defined; also the pseudo-inverse at the very end of Section 4.2 and the convex set D in Section 5.2. The definitions do not follow their first appearances, and the organization here needs improvement.
(4) In the main paper, only the decomposition of the action at time T − 1 is considered.
(5) The equation for reward shaping is in the wrong format.
For the experiments, I also have the following concerns.
(1) Testing ADR on DDPG only is not convincing; I expect more continuous-control RL algorithms to be tested.
(2) Could the authors explain why no constrained RL algorithms are used as baselines?
(3) For the two experiments, especially "passing the intermediate station", the performance of DDPG and DDPG+ADR looks almost the same at the end of training in the plots, but the analysis claims that DDPG+ADR achieves much higher cumulative rewards and much smaller constraint violations. I expect the authors to detail the scales here.
ICLR
Title Safe Exploration in Linear Equality Constraint Abstract With the extensive research and application, some shortcomings of reinforcement learning methods are gradually revealed. One of the considerable problems is that it is difficult for reinforcement learning methods to strictly satisfy the constraints. In this paper, a Singular Value Decomposition-based non-training method called ‘Action Decomposition Regular’ is proposed to achieve safe exploration. By adopting linear dynamics model, our method decomposes the action space into a constraint dimension and a free dimension for separate control, making policy strictly satisfy the linear equality constraint without limiting the exploration region. In addition, we show how our method should be used when the action space is limited and convex, which makes the method more suitable for real-world scenarios. Finally, we show the effectiveness of our method in a physically-based environment and prevail where reward shaping fails. N/A With the extensive research and application, some shortcomings of reinforcement learning methods are gradually revealed. One of the considerable problems is that it is difficult for reinforcement learning methods to strictly satisfy the constraints. In this paper, a Singular Value Decomposition-based non-training method called ‘Action Decomposition Regular’ is proposed to achieve safe exploration. By adopting linear dynamics model, our method decomposes the action space into a constraint dimension and a free dimension for separate control, making policy strictly satisfy the linear equality constraint without limiting the exploration region. In addition, we show how our method should be used when the action space is limited and convex, which makes the method more suitable for real-world scenarios. Finally, we show the effectiveness of our method in a physically-based environment and prevail where reward shaping fails. 1 INTRODUCTION In the past ten years, reinforcement learning(RL)(Sutton & Barto, 2018) has made significant breakthroughs in many fields, such as games(Mnih et al., 2013; Schaul et al., 2015; Mnih et al., 2015; Hasselt et al., 2015; Wang et al., 2016), robotics(Gu et al., 2017), autonomous vehicles(Sallab et al., 2017), healthcare(Yu et al., 2019). In the reinforcement learning task, the agent can obtain the policy of making the action that maximizes the long-term return. Although it can improve one’s own policy through trial and error learning under the interaction with the environment, it is difficult to strictly ensure the safety of the actions output by its policy(Garcı́a et al., 2015). Therefore, the constraint problem has become one of the active research contents in reinforcement learning recently. In the application, making such actions that violate constraints will bring serious consequences in some fields. Therefore never violating these constraints is a strict necessity in many scenarios, such as the stability of robots and avoidance of pedestrians or obstacles appearing in front of the vehicle during autonomous driving(Levinson et al., 2011; Amodei et al., 2016). In the real world, the linear equality constraints are relatively common, for example, we want the robot to achieve a certainly required configuration on a certain trajectory, where the constraint may appear at different instants in any dimension; or the robot center of mass is restricted at the beginning of the movement(Laine & Tomlin, 2019). 
And all these complex constraints typically take the form of linear equality constraints. Therefore, it is necessary to have a method that can ensure these constraints to be strictly satisfied in the real world. Researchers have carried out much meaningful research on how to better satisfy the constraint. Dalal et al. (2018) achieve good results in satisfying hard constraints, but it relies heavily on the security layer of data training and cannot cross domains. Tessler et al. (2019) can solve the mean value constraints or discounted sum constraints, but there is no guarantee that the constraints can be met during the training process. More importantly, the existing learning-based methods can hardly satisfy the constraints. In fact, the constraint guarantee for the agent’s behavioral decisionmaking benefits from knowledge about the causal mechanism that controls it, such as the dynamic model(Fisac et al., 2019). Fortunately, the designer of an agent always knows or approximately knows its dynamics(Fisac et al., 2019). For example, Lutter et al. (2020) adopt the linear dynamic model of the robot and finally, make the optimal strategy policy their action limit. This inspires people to find a balance between data-driven and model-based technology. Among the existing model-based methods, the idea of using the linear dynamic model is common(Aswani et al., 2013; 2012). Although most robots have nonlinear dynamic models, there are already many methods based on the linearization of the model. For example, sequential quadratic programming requires the continuous local approximation of the problem and then transforms it into the constrained linear quadratic regulator problem(Giftthaler et al., 2018). And iLQR(Levine & Koltun, 2013) is a method with linearizing a nonlinear model, which often appears as baselines in experiments about model-based reinforcement learning. And there are many theories about the stability of linearized systems(Spong, 1995; Russ, 2021). For convenience, this paper only discusses the case of the linear dynamic model. In this paper, we propose the ‘Action Decomposition Regular’(ADR) as shown in Fig 1. Using Singular Value Decomposition(SVD) approach, ADR decomposes the action space into a constraint dimension containing all constraint information and the remaining free dimension. The goal is to achieve better policy exploration without violating linear equality constraints at all. Under the above idea, we find a balance between the model-based technology’s control of constraints and the data-driven policy learning method. It is worth mentioning that our method is non-training and can conjunct any efficient continuous-control RL method. The main contributions of this paper are as follows: 1. We propose a non-training method called ADR that can make the reinforcement learning strictly satisfy the constraints without restricting the system’s ability to explore. And the method does not need to make assumptions about the dimensions of the constraints. 2. We give an action correction scheme with the property of Pareto optimal solution(Van Moffaert & Nowé, 2014) in convex action space and give the proof. 3. The effectiveness of the method is verified in a simulation environment with physical properties. The simulation shows good results where reward shaping fails. 2 RELATED WORK Implementing policy security through constrained reinforcement learning is an active research content(Amodei et al., 2016). 
The algorithm based on Constrained Markov Decision Processes (CMDP)(Kallenberg, 1983; Ross, 1985; Ross & Varadarajan, 1989; Altman, 1999; Le et al., 2019) is a common method. CPO(Achiam et al., 2017) is an algorithm based on CMDP, mainly inspired by TRPO(Schulman et al., 2015), to find a surrogate function that is the lower bound of the original objective function and the upper bound of the original constraint. RCPO(Tessler et al., 2019) uses the idea of PPO(Schulman et al., 2017; Heess et al., 2017), introduces the lagrange method, and solves the problem based on the adaptively updated lagrange multiplier. And a RCPO-based method uses PID to control the lagrange multiplier(Stooke et al., 2020). Recently Zhang et al. (2020) propose FOCOPS, which first finds the optimal update policy by solving a constrained optimization problem in the non-parameterized policy space, then projects the updated policy back into the parametric policy space. However, these methods require a long training process. They are shown to solve the mean value constraints or discounted sum constraints. As such, it is difficult to ensure that the constraints are met as much as possible during the training process, even for any simple constraints. Modifying the exploration process is another way to solve the constraint problem. In Dalal et al. (2018), their method requires first using data to train a security layer to modify actions according to certain criteria. Although they have achieved excellent results in their experiments, the problem is that security is very dependent on the security layer, and the linear relationship of the predicted cost may not be established. The solution of Amos & Kolter (2017) relies on a complete Quadratic Programming solver, but their solution is too expensive to calculate. In addition, there are many methods that agree to be model-based. One possible approach is to try to perform imitation learning on the trajectory obtained by the model-based optimal control policy, i.e., DAgger(Ross et al., 2011). But as stated by Bellegarda & Byl (2020), when facing with areas of state space that the expert trajectory has not visited before, policy learned only from expert data may perform poorly in these areas. And Fisac et al. (2019) propose a general safety framework based on Hamilton–Jacobi reachability methods. This safety framework also can work in conjunction with any efficient learning algorithm. But this method is computationally intensive and limited in dimension. Aswani et al. (2013) use the method about the robust model-predictive control approach and achieve good results in some problems such as quadrotor flight. But it limits the exploration ability of the system. And Berkenkamp et al. (2016; 2017) both limit the exploration region of the method. The method in Sadraddini & Belta (2016) is conservative since it does not update the model. Reward shaping is a natural alternative to constraints, influencing the agent by artificially shaping negative rewards in the state space(Dalal et al., 2018; Ng et al., 1999). But it often needs to design a modified reward function through expert knowledge(Randløv & Alstrøm, 1998) or neural network methods(Burda et al., 2018) in advance. In other words, it needs to know the occurrence of constraints in advance, but many urgent constraints are sudden. Our method overcomes the shortcomings mentioned above. A comparison with the different approaches is provided in Table 1. 
3 PRELIMINARIES 3.1 MARKOV DECISION PROCESS (MDP) A Markov Decision Process (MDP) (Sutton & Barto, 2018) is defined by 5-tuple (S,A,R,P ,µ). Where S is the state space; A is the action space; R : S × A → R is the reward function; P : S × A × S → [0, 1] is the transition kernel and µ : S → [0, 1] is the initial state distribution. Let s0 ∼ µ denote that the initial state s0 depends on µ, then at ∼ π(· | st) and st+1 ∼ P (· | st, at) are similar. This can set a simple trajectory τ = (s0, a0, s1, . . .). Consider a policy denoted by π = {π(a | s) : s ∈ S, a ∈ A} and aim to find a stationary policy that maximizes the expected discounted return, i.e., objective function: JR(π) = Eπs∼µ[ ∞∑ t=0 γtrt] , where γ is the discount factor, and rt is the reward at time t. Therefore, the update and improvement of π is based on the comprehensive judgment of each reward. If adopting the deterministic policy, a = π(s), else the stochastic policy a ∼ π(a | s). 3.2 EQUALITY CONSTRAINT ACTION SPACE EXPLORATION The method we propose is based on the dynamic knowledge. Therefore, in order to highlight the actual effectiveness of the method and the scalability applicable to any continuous-control reinforcement learning method, dynamic knowledge only affects action selection stage. We first formulate the constraints and dynamics following the notation used in Laine & Tomlin (2019). Without loss of generality, the constraint occurs at t = 0, 1, . . . , T − 1, T . As a matter of convenience, let s ∈ Rn, a ∈ Rm, and we address the following policy problem: a ∼ π(a | s) s.t. dynamics : st+1 − (Fstst + Fatat + f1t) = 0, t = 0, 1, . . . , T − 1 initial condition : s0 ∼ µ constraint at t : Gstst +Gatat + g1t = 0, t = 0, 1, . . . , T − 1 constraint at T : GsT sT + g1T = 0 Where Fst , Fat and f1t define the agent dynamics at time t = 0, 1, . . . , T − 1, T . Gst , Gat and g1t define the constraints at t = 0, 1, . . . , T − 1. GsT and g1T define the constraint at t = T . The deterministic policy is similar. In addition, we introduce the function C(st) called ‘constraint-to-go’ used in Laine & Tomlin (2019): C(st) = Hstst + h1t, t = 0, 1, . . . , T , which is similar to the value function and representing the stacking of values that the residual constraint from the beginning of st to the back. So at time T there is: C(sT ) = GsT sT + g1T . 4 ACTION DECOMPOSITION REGULAR 4.1 ACTION DECOMPOSITION We first explain the idea of action decomposition. For a simple example, as shown in the speed coordinate system in the Fig. 2, when a constraint that requires ux = uy occurs, we can linearly combine ux and uy into w = √ 2 2 ux + √ 2 2 uy and y = − √ 2 2 ux + √ 2 2 uy , so that we only need to keep y = 0 to satisfy the constraint , and the w dimension will be completely free. 4.2 SAFETY REGULAR BASED ON ACTION DECOMPOSITION We solve the problem of safety exploration in the action space under the linear equality constraint based on the above ideas. In our method, the solving technology of the constraint dimension matches the programming technology in Laine & Tomlin (2019), but the solution of the free dimension is expanded according to the property of the policy exploration. The solution process firstly goes backwards from t = T − 1: a ∼ π(a | s) s.t. sT − (FsT−1sT−1 + FaT−1aT−1 + f1T−1) = 0 a ∈ arg min a ‖ ( GsT−1sT−1 +GaT−1aT−1 + g1T−1 = 0 HsT sT + h1T = 0 ) ‖2 and use the dynamic equation to eliminate sT . In this way, only sT−1 and aT−1 exist in the problem. Organize the formula and rewrite the above question as: a ∼ π(a | s) s.t. 
a ∈ arg min a ‖NsT−1sT−1 +NaT−1aT−1 + n1T−1‖2 where we define as follows: NsT−1 = ( GsT−1 HsTFsT−1 ) , NaT−1 = ( GaT−1 HsTFaT−1 ) , n1T−1 =( g1T−1 HsT f1T−1 + h1T ) . Obviously, at this step, a of the constraint item is only related to NaT−1 , that is, all the information of the constraint item is contained in NaT−1 .Perform SVD on NaT−1 to get NaT−1 = UT−1ΣT−1V T T−1, and define V T T−1 = ( PTT−1 ZTT−1 ) , where the first r rows of the V TT−1 are denoted as PTT−1, the last (m− r) rows are denoted as ZTT−1. And r is the rank of NaT−1 . Then we make use of the following result: Corollary 4.1. The action a formulated at time t can be decomposed in the following form: ât = Ptyt + Ztwt Proof. The proof is provided in Appendix D. We can regard yt as a constraint dimension and wt as a free dimension. Since the learning of the policy also includes the punishment feedback caused by the violation of constraints, so the original problem is transformed into: wT−1 = Z T · a, a ∼ π(a | s) yT−1 = arg min a ‖NsT−1sT−1 +NaT−1PT−1yT−1 + n1T−1‖2 = −(NaT−1PT−1)†(NsT−1sT−1 + n1T−1) Through the above steps, the solution âT−1 will be easily obtained: âT−1 = PT−1yT−1 + ZT−1wT−1. And update C(sT−1) by combining âT−1 and (NsT−1sT−1 +NaT−1 âT−1 + n1T−1): C(sT−1) = HsT−1sT−1 + h1T−1 = (I −NaT−1PT−1(NaT−1PT−1)†)NsT−1sT−1 + (I −NaT−1PT−1(NaT−1PT−1)†)n1T−1 Where (NaT−1PT−1) † is the pseudo inverse of NaT−1PT−1. We can show that C(st) = 0, if NaT−1PT−1 is an invertible matrix. 5 PRACTICAL IMPLEMENTATION 5.1 IMPLEMENTATION DETAIL We divide the use of ADR into two types, which is shown in the Fig. 3. The first type: When the agent does not receive any constraint signals, the policy generated by the policy network directly obtains executable actions in the form of deterministic policy or stochastic policy. The second type: When the agent receives the constraint signals that it needs to comply within a period of time(t = 0, 1, . . . , T − 1, T ) in the future, we might as well start counting the time from receiving the constraint signals. We require that the constraint signals of this period of time be processed through ADR to obtain the constraint dimension action and the free dimension projection matrix, and the output action of the RL also needs to be corrected by the above result for obtaining the actual execution. In addition, we give the method to deal with situations where the selected action violates convex action space. The details are provided in Subsection 5.2. A detailed pseudo code is provided in Appendix A of the supplementary materials. 5.2 CONVEX ACTION SPACE Various physical limitations of action will appear in real-world applications. And physical limits can lead to the limited action space. In this section, we discuss the most common convex action space. In fact, the physical limitation is also a constraint when the selected action exceeds it. In order to satisfy the hard constraint(Chen et al., 2021) as much as possible, we suggest that when the action exceeds the physical limitation, first program the constraint dimension in the action space to find the constraint dimension action closest to ADR’s recommendation, and then find the closest free dimension action suggested by the RL. This ensures that actions get higher rewards under conditions that satisfy the constraints as much as possible. In fact, this is a multi-objective optimization problem(MOO)(Miettinen, 2012; Lin et al., 2019). 
The above method (also called -constraint method or main objective method) is widely used, and its optimal solution is the effective solution of MOO(also called the Pareto optimum solving) when the limited action space is a convex set. We define the following problems: Problem 1. min ( f1(a) f2(a) ) = ( ‖PTa− PT â‖2 ‖ZTa− ZT â‖2 ) s.t. a ∈ D Problem 2. min ( ‖PTa− PT â‖2 ) s.t. a ∈ D Problem 3. min ( ‖ZTa− ZT â‖2 ) s.t. a ∈ H where H is the efficient solution set of Problem. 2. This result can be demonstrated by the following Theorem 5.1. Theorem 5.1. Suppose to exist ā ∈ D, D is a convex set, subject to ā is the optimal solution of Problem. 3, then ā is not only the weakly effective solution of Problem. 1, but also the Pareto optimal solution of Problem. 1, and it is unique. Proof. See Appendix E of the supplementary materials. 6 EXPERIMENTS Although we expect to show benefits from combining ADR with any continuous-control RL method, for the following experiments, we use the Deep Deterministic Policy Gradient (DDPG)(Lillicrap et al., 2015). Although DDPG (Lillicrap et al., 2015) is a deterministic policy that can directly output actions, in fact, our method is not only suitable for reinforcement learning algorithms for deterministic policy, but also has applicability for stochastic policy. Our experiments are based on the current popular multi-agent particle world (Lowe et al., 2017) with continuous observation and action space and some basic simulated physics. We design two new sets of simulation experiments based on physical constraints to test the effectiveness of ADR as shown in Fig 4. It is worth mentioning that no new hyperparameters are introduced in the process of our experiment. We provide exact details about experiments in Appendix B and hyperparameters about the method in Appendix C. 6.1 KEEP IT STRAIGHT 6.1.1 EXPERIMENT DESCRIPTION The agent starts from a random starting point to a random final landmark. But we require the agent to maintain a straight line movement as accurately as possible in a certain direction during the first period of time. Although this task seems simple, it is not easy to satisfy the accuracy requirements for RL. That is because the larger learning rate of the algorithm leads the faster convergence and the poorer stability, and the smaller learning rate of the algorithm leads to slow convergence and waste of time(Smith, 2017). In this experiment, the reward is set based on the negative Euclidean distance from the final landmark at each moment. At each step, the agent also obtains the reward for minimizing energy consumption based on the negative two-norm of action. The penalty is set based on the two-norm of the velocity deviating from the current motion direction. Finally, the violated constraint is equal to the accumulation of the two-norm of the distance from the original straight line at each time step when the constraint occurs. In fact, this will require the agent to learn to approach the landmark more quickly while keeping the direction of motion stable in the early stage. 6.1.2 EXPERIMENT ANALYSIS Learning curves are provided in the Fig. 5. For the reward curve, DDPG needs a lot of episodes of training to obtain higher rewards, but DDPG+ADR gets higher rewards at the beginning and is always higher than DDPG in the whole training process. For the violated constraint curve, DDPG seriously violates the constraints at the beginning of training, and can not strictly satisfy the constraints in the whole training process. 
In fact, the minimum constraint violation in any single episode of DDPG is 7.4 × 10−8, whereas DDPG+ADR keeps the constraint violation on the order of 10−16 throughout the whole process, which can be considered negligible. The experiments show that, on the one hand, DDPG+ADR can indeed make the actions output by the RL policy strictly satisfy the linear equality constraints, even during training. On the other hand, compared with DDPG, DDPG+ADR also performs better in terms of the rewards obtained.

6.2 PASSING THE INTERMEDIATE STATION

6.2.1 EXPERIMENT DESCRIPTION

The agent is still required to go from a random starting point to a random final landmark, but at the intermediate moment it suddenly receives a constraint signal to go to an intermediate station. Note that because the agent is constrained only at this single moment, and the intermediate station is too far away, satisfying the constraint exactly would exceed the agent's physical limitations; the agent can only approach the station as closely as possible and can never satisfy the constraint. This experiment therefore tests whether the algorithm is robust when the agent encounters a sudden constraint that exceeds its physical limits. In this experiment, the reward at each time step is based on the negative Euclidean distance to the final landmark. The agent also obtains the negative two-norm of the action as a reward for minimizing energy consumption. The penalty the agent receives, and the violated constraint in each episode, are based on the Euclidean distance to the intermediate station.

6.2.2 REWARD SHAPING

For comparison, we also conduct reward-shaping experiments with the DDPG algorithm. At each time step before the constraint ends, we set the modified reward function (Ng et al., 1999) to the same scale as the original reward, using the following formula:

r_F = \varphi(s_t) - \varphi(s_{t-1}), \qquad \varphi(s_0) = 0,

where \varphi is based on the distance to the intermediate station; see Appendix B for details.

6.2.3 EXPERIMENT ANALYSIS

The experimental results are shown in Fig. 6. Compared with DDPG, DDPG+ADR demonstrates superior performance: its cumulative rewards are much higher and its constraint violations much smaller. Surprisingly, reward shaping does not improve DDPG but instead has an adverse effect. This suggests that the value function of this task is complicated, and a shaping term that relies only on the constraint is quite different from the value function. It also shows that DDPG+ADR is genuinely robust at the moment the constraint occurs: it helps the agent take the action that satisfies the constraint as far as possible while minimizing the reward that is missed.

7 DISCUSSION

In this paper, we propose a simple and practical approach that effectively solves the problem of action exploration in reinforcement learning under linear equality constraints. Our method, ADR, is based on a linear dynamics model and uses the idea of SVD to decompose the action space into a constrained dimension and a free dimension that are controlled separately. We also propose a feasible solution for the situation where the constraints exceed the convex action space, ensuring that within a single time step the actions satisfy the constraints as far as possible while the loss of reward is minimized. In the experiments, compared with DDPG, DDPG+ADR obtains more reward and satisfies the constraints more strictly in both tasks. At the same time, DDPG+ADR shows its robustness in tasks with sudden constraints.
It is worth mentioning that our method has the advantages of no training and does not need to make assumptions about the dimensions of constraints. An exciting feature is that our method can be combined with any continuous-control RL method. In addition, there are many promising ideas for future work: the use of interior point methods to improve the equality constraints; the deeper integration of SVD ideas with reinforcement learning(Gemp et al., 2020). And in the real world, some dynamic models are too complicated to be researched. In future work, we plan to use Piecewise Linear Neural Networks(PLNN) which can explain the non-linear dynamic model of an object(Nagabandi et al., 2018; Chu et al., 2018) to extend the applicability of our method. A PSEUDO CODE Algorithm 1: Action Decomposition Regular Input: constraintGst , Gat , g1t , GsT , g1T ; policy network πθ ; dynamics Fst , Fat , f1t ; t = 0, 1, . . . , T − 1 Output: action at; t = 0, 1, . . . , T − 1 1: if T > 0 then 2: HsT ← GsT 3: hsT ← gsT 4: for t = T − 1, T − 2, . . . , 0 do 5: Nat ← ( Gat Hst+1Fat ) 6: Nst ← ( Gst Hst+1Fst ) 7: n1t ← ( g1t Hst+1f1t + h1t+1 ) 8: V Tt ← SVD(Nat) 9: Pt, Zt ← V Tt 10: Hst ← (I −NatPt(NatPt)†)Nst 11: h1t ← (I −NatPt(NatPt)†)n1t 12: end for 13: end if 14: if T > 0 then 15: for t = 0, 1, . . . , T − 1 do 16: at ← πθ 17: Receive st 18: yt ← −(NatPt)†(Nstst + n1t) 19: wt ← ZTt at 20: at ← Ptyt + Ztwt 21: end for 22: else 23: at ← πθ 24: end if B EXPERIMENT DETAILS All the experiments we conducted are built on Python(3.6) and Tensorflow (1.8.0) in Intel i7-10875H CPU. B.1 KEEP IT STRAIGHT We used the multi-agent particle environment (Mordatch & Abbeel, 2017) provided by OpenAI Gym(Brockman et al., 2016) for this set of tasks. The agent moves on a two-dimensional plane and travels from a random starting point to a random goal point. At the beginning of each episode, we require the agent to accurately move in a straight line in the y-axis direction, similar to walking out of a parking space or crossing a narrow road. In our setting, the step length of an episode is 26 steps, so the duration of this straight-going phase should not be too long, and our setting is 5 steps. For the reward of the agent in the experiment, we set the following: 1. Reward for the agent to go to the goal: rgoal = −‖pagent − pgoal‖22 2. Reward for the agent to keep moving in a straight line: rkeep = −10000|vy|2 3. Reward for the agent about control Effort Penalty: rcontrol = −0.01‖a‖22 Usually when we face such a multi-objective optimization problem(MOO), we always impose a large weight on the hard constraint. In order to let DDPG and DDPG+ADR learn to keep the straight line as hard as possible, we both set the weight to 10000. This has no effect on the comparison of our method ADR. where pagent, pgoal are the positions of the agent and goal point. And vy is the velocity of the agent in y-axis. And the constraint setting is: constraint : vy = 0 In the multi-agent particle environment (Mordatch & Abbeel, 2017), a ∈ R5 represents the join forces of the agent, and s ∈ R4 is composed of the speed of the agent and the distance to the goal point. Regardless of noise, and let the mass m of the agent be 1, we fully follow the dynamic equation in multi-agent particle environment(Mordatch & Abbeel, 2017): v = Amadt+ (1− d)v x = vdt+ x A = ( 0, 1,−1, 0, 0 0, 0, 0, 1,−1 ) where A is the matrix that turn the resultant force into the driving force of agent. And dt = 0.1 is the step size of a single time step. 
The physical damping coefficient d = 0.25. B.2 PASSING THE INTERMEDIATE STATION The agent is also on a two-dimensional plane, going from a random starting point to a random goal point. But unlike before, there is an intermediate station at a distance of (0.3, 0.3) from the goal point. The agent will receive the constraint of going to the intermediate station as much as possible in the middle moment. The time step of each episode is also 26 steps, this time we chose the intermediate time t = 12. And it only takes effect at this moment. This requires the agent to learn to take corresponding actions in emergency situations. We set the agent’s reward in this task as follows: 1. Reward for the agent to go to the goal: rgoal = −‖pagent − pgoal‖22 2. Reward for the agent to move towards the intermediate station: rpass = −10000‖(0.3, 0.3)− (pagent − pgoal)‖22 3. Reward for the agent about control Effort Penalty: rcontrol = −0.01‖a‖22 Similarly, in order for the policy in DDPG and DDPG+ADR to learn to satisfy the hard constraints as much as possible, we set a larger weight for the second reward. And the constraint setting is: constraint : pagent − pgoal = (0.3, 0.3) As for the dynamic equation, it is exactly the same as the task setting above. And in reward shaping, rpass is modified to rF t. The effective time of rFt is modified from t=1 to t=12. The formula of rF is modified to: rFt = rpasst − rpasst−1 , t = 1, 2 . . . , 12, where rpasst means that the argument of the function rpass is the current state, and the argument of rpasst−1 is the state at the previous moment. And rpass0 = φ(s0) = 0 (Ng et al., 1999). C HYPERPARAMETERS FOR EXPERIMENTS The hyperparameter settings of DDPG and DDPG+ADR are exactly the same, and there are no additional parameters introduced. And in fact, there is no need to adjust the parameters in our experiment. Activation function for MLP is ReLU. Table 2 shows the hyperparameters used in the experiment. D PROOF OF COROLLARY 4.1 Proof. Since V is composed of normal orthogonal basis, then we have ât = Vt · V Tt · ât, where V Tt = ( PTt ZTt ) . We can therefore derive ât = ( Pt, Zt ) · V Tt · ât = Ptyt + Ztwt. E PROOF OF THEOREM 5.1 Proof. Since the objective functions of Problem. 2 and Problem. 3 are both convex functions, D is a convex set, and the local minimum of the convex function is the global minimum, so Problem. 2 and Problem. 3 always have optimal solutions. If ā is not the Pareto optimal solution of Problem. 1, then ∃a ∈ D, which satisfies one of the following two cases: either f1(a) ≤ f1(ā) and f2(a) < f2(ā), or f1(a) < f1(ā) and f2(a) ≤ f2(ā). But the first case contradicts Problem. 3, and the second case contradicts Problem. 2. Uniqueness is obvious, because V T = ( PT ZT ) constitutes a set of Orthonormal basis in the action space.
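To complement the pseudo-code in Appendix A, the following is a minimal, self-contained NumPy sketch of the ADR backward pass and per-step action correction, instantiated on the linear particle dynamics of Appendix B. It is not the implementation used for the experiments: the horizon, the toy constraint (keep v_y = 0), the initial state, and the random stand-in for the RL policy are illustrative choices.

```python
import numpy as np

# ---- Linear particle dynamics from Appendix B (illustrative values) ----
dt, d = 0.1, 0.25                          # step size and damping
A = np.array([[0., 1., -1., 0., 0.],       # maps the 5-dim force action to a
              [0., 0., 0., 1., -1.]])      # 2-dim net force (mass m = 1)
I2, Z22 = np.eye(2), np.zeros((2, 2))
# state s = (vx, vy, x, y);  s_{t+1} = Fs @ s_t + Fa @ a_t + f1
Fs = np.block([[(1 - d) * I2, Z22],
               [dt * (1 - d) * I2, I2]])
Fa = np.vstack([dt * A, dt * dt * A])
f1 = np.zeros(4)

# ---- Toy linear equality constraints: keep v_y = 0 over a short horizon ----
T = 5
Gs = [np.array([[0., 1 - d, 0., 0.]])] * T   # (1-d) v_{y,t} + dt (A a_t)_y = v_{y,t+1} = 0
Ga = [dt * A[1:2, :]] * T
g1 = [np.zeros(1)] * T
GsT, g1T = np.array([[0., 1., 0., 0.]]), np.zeros(1)   # terminal: v_{y,T} = 0

# ---- Backward pass of Algorithm 1: build N, P, Z and the constraint-to-go ----
H, h = GsT, g1T
Ns, Na, n1, P, Zf = [None] * T, [None] * T, [None] * T, [None] * T, [None] * T
for t in reversed(range(T)):
    Na[t] = np.vstack([Ga[t], H @ Fa])
    Ns[t] = np.vstack([Gs[t], H @ Fs])
    n1[t] = np.concatenate([g1[t], H @ f1 + h])
    _, _, Vt = np.linalg.svd(Na[t])
    r = np.linalg.matrix_rank(Na[t])
    P[t], Zf[t] = Vt[:r].T, Vt[r:].T                 # constraint / free directions
    M = Na[t] @ P[t]
    proj = np.eye(M.shape[0]) - M @ np.linalg.pinv(M)
    H, h = proj @ Ns[t], proj @ n1[t]

# ---- Forward pass: correct an arbitrary RL action at each step ----
rng = np.random.default_rng(0)
s = np.array([0.5, 0.3, 1.0, -1.0])                  # some initial state
for t in range(T):
    a_rl = rng.uniform(-1, 1, size=5)                # stand-in for pi(a|s)
    y = -np.linalg.pinv(Na[t] @ P[t]) @ (Ns[t] @ s + n1[t])
    w = Zf[t].T @ a_rl
    a = P[t] @ y + Zf[t] @ w                         # Corollary 4.1
    s = Fs @ s + Fa @ a + f1
    print(f"t={t}  |v_y| after step = {abs(s[1]):.2e}")
```

With this dynamics model and constraint, the printed residuals should be at the level of floating-point round-off, consistent with the orders of magnitude reported for DDPG+ADR in Section 6.1; the free-dimension components of the RL action pass through unchanged.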
1. What is the focus of the paper regarding reinforcement learning and constraints satisfaction? 2. What are the strengths and weaknesses of the proposed Action Decomposition Regular (ADR) technique? 3. How does the reviewer assess the novelty, quality, clarity, and significance of the paper's content? 4. What are some suggestions for improving the paper, such as providing real-world examples, addressing technical novelty, and better explaining theoretical guarantees? 5. What questions does the reviewer have regarding the paper, such as understanding the timing of constraints, the purpose of experiments, and the relationship between constraints and rewards?
Summary Of The Paper Review
Summary Of The Paper This paper considers reinforcement learning, a common model which has seen success recently in many areas (e.g. games, robotics, autonomous vehicles). Prototypical reinforcement learning algorithms explore the action space in order to maximize their rewards as much as possible, potentially ignoring impacts of the chosen actions or safety constraints. Typical to many real-world scenarios, though, are constraints on the selected actions allowing the algorithm designer or practitioner to enforce certain constraints on the selected actions in the environment (e.g. ensure that the autonomous vehicle stays within the dictated lines on the road, etc). While many RL algorithms have been designed in this setting, they mostly focus on modifying existing training algorithms for RL to ensure satisfying the constraints by either modifying the objective used in policy gradient algorithms, projecting back to a 'safe' policy set, or linearizing certain objective functions for more readily easy computations. In contrast, the authors in this paper take a different view, ensuring a post-processing step which projects the chosen RL action to the 'feasibility set' which is closest, simultaneously ensuring the constraints are satisfied while also maximizing the reward (by picking an action close to the action dictated by the RL algorithm). To be more specific, the authors consider the typical RL model with an MDP characterized via ( S , A , P , r , γ ) . They consider a (potentially random? but known time) in which there are a finite set of linear equality constraints that need to be satisfied over the next T periods (the time horizon here being artificial and just being a time horizon established to have to satisfy the constraints). Once this event occurs, the algorithm must satisfy a specific set of constraints where the dynamics satisfy s t + 1 = θ ( s t , a t ) (i.e. linear dynamics on the state space), ψ ( s t , a t ) = 0 (i.e. linear constraint on the state and actions selected). The hope is to pick actions which satisfy the constraints and equations dictating the dynamics over the next T time periods while also maximizing the observed rewards. The authors propose a novel method, termed 'Action Decomposition Regular' which is a post-processing step using action decomposition. They provide a simple example, but the intuitive description is as follows. The set of actions must satisfy a specific set of equations in order to satisfy all of the given constraints. Write these linear constraints as an optimization problem, minimizing the ℓ 2 norm of the equation (without the equal to zero) constraint. Clearly a solution which satisfies the equation will have that value equal to zero, and hence any solution satisfies the constraint. This optimization problem can be solved, in closed form, giving that the actions and states must satisfy a specific series of equations. Taking the SVD of these constraint matrices allows you to decompose the actions into two different terms. One term, the 'free' dimension, can be used in order to steer the algorithm to take actions which are 'close' to the one dictated by the RL algorithm. The other, the 'constrained' dimension, is then used to ensure that the actions satisfy the set of constraints. To complement the algorithmic framework, the authors present a set of synthetic experiments to compare the efficiency and constraint violation of the resulting policies of their method and others in the literature. 
In particular, they test the algorithms on a set of movement tasks with the goal of 'keeping the algorithm straight' or 'passing an intermediate station'. They compare their ADR technique combined with a deep RL algorithm to just the naive deep RL algorithm. Obviously, their algorithm which additionally enforces the constraints will satisfy them, with a minor loss in performance. Review Originality: The authors present a novel technique for RL settings with the addition of linear equality constraints. The approach presented is a simple post-processing step (taking the SVD of the constraint matrices to decompose the action space into two terms, one of which used to ensure constraints and the other to best approximate the RL algorithm). However, the authors make it clear that their approach has advantages in that it can easily be included in any RL algorithm. Moreover, there are no theoretical guarantees for their approach (outside of a pareto-type guarantee which is not explained well). Quality: The submission seems technically sound and the theoretical claims are moderately well-supported, but the notation and description of the algorithm are confusing and difficult to follow, with some notation which is never explicitly outlined in the main paper. The authors are honest and upfront in the new techniques used in their modeling and algorithm development, namely: Knowledge of the set of constraints which needs to be satisfied However, the numerical experiments done are not robust and there are no comparisons to other related algorithms in the literature. They simply compare their post-processing step to an algorithm without the post-processing, and obviously their algorithm will satisfy the constraints as it is designed to do so. Clarity: The submission is well-organized. However, the submission could use some extensive rewriting to help with clarity to better describe their algorithm design, approach, provide real-world examples with these constraints, and highlight the technical novelty. The authors should take a pass through the paper and address the following: At a high level please address the following: Include spaces before a parenthesis (including a citation) to help the paper read better Do not start sentences with 'And' And more specifically: The first two sentences in the abstract need to be rewritten, don't provide anything useful 'making policy strictly' in the abstract 'physically-based environment'? in abstract 'prevail' in abstract- Typo in first sentence of introduction Grammar issues in last sentence of first paragraph of introduction Second paragraph 'in the application' - which applications? The example provided in the second paragraph should be more concrete Grammar issues in first sentence of third paragraph 'In fact, the constraint guarantee for the agent's behavior....' is awkwardly written On page 2 - what is 'model-based technology'? The major contributions on page 2 (i.e. points 1-3) could be better explained and clarified (e.g. 'show good results') 'agree to be model-based' 'when facing with' Last paragraph of section two is confusing as your algorithm also requires knowledge of the constraints 'dynamic knowledge' in start of section 3.2 - what is this referring to? Make equations on top of page 4 have text for the words 'speed coordinate system' - also, shouldn't the picture just be an exact circle instead of an oval? 
'the solving technology' Equal to zero inside of norm on bottom of page 4 should not be there double space in "Appendix A" on page 5 What is the set D in section 5.2 - this entire section was confusing as the two 'objectives' are never described in words, and never referenced explicitly with respect to the RL algorithm "Suppose to exist" in Theorem 5.1 "constraints exceed convex action space" Significance: The algorithm presented in the paper is a simple SVD style approach to decompose actions to ensure satisfying the constraints while simultaneously selecting actions close to the one provided by the RL algorithm. While the novel theoretical techniques are very limited, the work can be built upon by performing more robust experiments, explaining more scenarios which satisfy these simple linear constraints, and better - explaining the theoretical guarantees. Strengths: The main strengths of the paper are as follows: simple approach that can be included with any RL algortihm, requires no additional training or meta-parameters easy to compute and does not add to the computational complexity of the algorithms Weaknesses: The main weaknesses of the paper are as follows: the final model, experimental details, and algorithmic approach should be better explained (see clarity section) experiments should be compared against other algorithms designed to satisfy constraints (instead of the simple comparison provided here) Questions: When do the "constraints" happen? It is interesting the constraints are modeled with a finite time horizon but the authors consider the MDP setting with an infinite horizon and discount. The notion of the time horizon is a bit confusing. What are the point of the experiments? Clearly your algorithm will satisfy the constraints, but why are there no 'learning' in the algorithms? Or is it difficult to see the curve of the line due to the axis chosen? What is the relation between constraint-to-go and the rewards, seems like they got lost somehow?
ICLR
Title Safe Exploration in Linear Equality Constraint Abstract With the extensive research and application, some shortcomings of reinforcement learning methods are gradually revealed. One of the considerable problems is that it is difficult for reinforcement learning methods to strictly satisfy the constraints. In this paper, a Singular Value Decomposition-based non-training method called ‘Action Decomposition Regular’ is proposed to achieve safe exploration. By adopting linear dynamics model, our method decomposes the action space into a constraint dimension and a free dimension for separate control, making policy strictly satisfy the linear equality constraint without limiting the exploration region. In addition, we show how our method should be used when the action space is limited and convex, which makes the method more suitable for real-world scenarios. Finally, we show the effectiveness of our method in a physically-based environment and prevail where reward shaping fails. N/A With the extensive research and application, some shortcomings of reinforcement learning methods are gradually revealed. One of the considerable problems is that it is difficult for reinforcement learning methods to strictly satisfy the constraints. In this paper, a Singular Value Decomposition-based non-training method called ‘Action Decomposition Regular’ is proposed to achieve safe exploration. By adopting linear dynamics model, our method decomposes the action space into a constraint dimension and a free dimension for separate control, making policy strictly satisfy the linear equality constraint without limiting the exploration region. In addition, we show how our method should be used when the action space is limited and convex, which makes the method more suitable for real-world scenarios. Finally, we show the effectiveness of our method in a physically-based environment and prevail where reward shaping fails. 1 INTRODUCTION In the past ten years, reinforcement learning(RL)(Sutton & Barto, 2018) has made significant breakthroughs in many fields, such as games(Mnih et al., 2013; Schaul et al., 2015; Mnih et al., 2015; Hasselt et al., 2015; Wang et al., 2016), robotics(Gu et al., 2017), autonomous vehicles(Sallab et al., 2017), healthcare(Yu et al., 2019). In the reinforcement learning task, the agent can obtain the policy of making the action that maximizes the long-term return. Although it can improve one’s own policy through trial and error learning under the interaction with the environment, it is difficult to strictly ensure the safety of the actions output by its policy(Garcı́a et al., 2015). Therefore, the constraint problem has become one of the active research contents in reinforcement learning recently. In the application, making such actions that violate constraints will bring serious consequences in some fields. Therefore never violating these constraints is a strict necessity in many scenarios, such as the stability of robots and avoidance of pedestrians or obstacles appearing in front of the vehicle during autonomous driving(Levinson et al., 2011; Amodei et al., 2016). In the real world, the linear equality constraints are relatively common, for example, we want the robot to achieve a certainly required configuration on a certain trajectory, where the constraint may appear at different instants in any dimension; or the robot center of mass is restricted at the beginning of the movement(Laine & Tomlin, 2019). 
And all these complex constraints typically take the form of linear equality constraints. Therefore, it is necessary to have a method that can ensure these constraints to be strictly satisfied in the real world. Researchers have carried out much meaningful research on how to better satisfy the constraint. Dalal et al. (2018) achieve good results in satisfying hard constraints, but it relies heavily on the security layer of data training and cannot cross domains. Tessler et al. (2019) can solve the mean value constraints or discounted sum constraints, but there is no guarantee that the constraints can be met during the training process. More importantly, the existing learning-based methods can hardly satisfy the constraints. In fact, the constraint guarantee for the agent’s behavioral decisionmaking benefits from knowledge about the causal mechanism that controls it, such as the dynamic model(Fisac et al., 2019). Fortunately, the designer of an agent always knows or approximately knows its dynamics(Fisac et al., 2019). For example, Lutter et al. (2020) adopt the linear dynamic model of the robot and finally, make the optimal strategy policy their action limit. This inspires people to find a balance between data-driven and model-based technology. Among the existing model-based methods, the idea of using the linear dynamic model is common(Aswani et al., 2013; 2012). Although most robots have nonlinear dynamic models, there are already many methods based on the linearization of the model. For example, sequential quadratic programming requires the continuous local approximation of the problem and then transforms it into the constrained linear quadratic regulator problem(Giftthaler et al., 2018). And iLQR(Levine & Koltun, 2013) is a method with linearizing a nonlinear model, which often appears as baselines in experiments about model-based reinforcement learning. And there are many theories about the stability of linearized systems(Spong, 1995; Russ, 2021). For convenience, this paper only discusses the case of the linear dynamic model. In this paper, we propose the ‘Action Decomposition Regular’(ADR) as shown in Fig 1. Using Singular Value Decomposition(SVD) approach, ADR decomposes the action space into a constraint dimension containing all constraint information and the remaining free dimension. The goal is to achieve better policy exploration without violating linear equality constraints at all. Under the above idea, we find a balance between the model-based technology’s control of constraints and the data-driven policy learning method. It is worth mentioning that our method is non-training and can conjunct any efficient continuous-control RL method. The main contributions of this paper are as follows: 1. We propose a non-training method called ADR that can make the reinforcement learning strictly satisfy the constraints without restricting the system’s ability to explore. And the method does not need to make assumptions about the dimensions of the constraints. 2. We give an action correction scheme with the property of Pareto optimal solution(Van Moffaert & Nowé, 2014) in convex action space and give the proof. 3. The effectiveness of the method is verified in a simulation environment with physical properties. The simulation shows good results where reward shaping fails. 2 RELATED WORK Implementing policy security through constrained reinforcement learning is an active research content(Amodei et al., 2016). 
The algorithm based on Constrained Markov Decision Processes (CMDP)(Kallenberg, 1983; Ross, 1985; Ross & Varadarajan, 1989; Altman, 1999; Le et al., 2019) is a common method. CPO(Achiam et al., 2017) is an algorithm based on CMDP, mainly inspired by TRPO(Schulman et al., 2015), to find a surrogate function that is the lower bound of the original objective function and the upper bound of the original constraint. RCPO(Tessler et al., 2019) uses the idea of PPO(Schulman et al., 2017; Heess et al., 2017), introduces the lagrange method, and solves the problem based on the adaptively updated lagrange multiplier. And a RCPO-based method uses PID to control the lagrange multiplier(Stooke et al., 2020). Recently Zhang et al. (2020) propose FOCOPS, which first finds the optimal update policy by solving a constrained optimization problem in the non-parameterized policy space, then projects the updated policy back into the parametric policy space. However, these methods require a long training process. They are shown to solve the mean value constraints or discounted sum constraints. As such, it is difficult to ensure that the constraints are met as much as possible during the training process, even for any simple constraints. Modifying the exploration process is another way to solve the constraint problem. In Dalal et al. (2018), their method requires first using data to train a security layer to modify actions according to certain criteria. Although they have achieved excellent results in their experiments, the problem is that security is very dependent on the security layer, and the linear relationship of the predicted cost may not be established. The solution of Amos & Kolter (2017) relies on a complete Quadratic Programming solver, but their solution is too expensive to calculate. In addition, there are many methods that agree to be model-based. One possible approach is to try to perform imitation learning on the trajectory obtained by the model-based optimal control policy, i.e., DAgger(Ross et al., 2011). But as stated by Bellegarda & Byl (2020), when facing with areas of state space that the expert trajectory has not visited before, policy learned only from expert data may perform poorly in these areas. And Fisac et al. (2019) propose a general safety framework based on Hamilton–Jacobi reachability methods. This safety framework also can work in conjunction with any efficient learning algorithm. But this method is computationally intensive and limited in dimension. Aswani et al. (2013) use the method about the robust model-predictive control approach and achieve good results in some problems such as quadrotor flight. But it limits the exploration ability of the system. And Berkenkamp et al. (2016; 2017) both limit the exploration region of the method. The method in Sadraddini & Belta (2016) is conservative since it does not update the model. Reward shaping is a natural alternative to constraints, influencing the agent by artificially shaping negative rewards in the state space(Dalal et al., 2018; Ng et al., 1999). But it often needs to design a modified reward function through expert knowledge(Randløv & Alstrøm, 1998) or neural network methods(Burda et al., 2018) in advance. In other words, it needs to know the occurrence of constraints in advance, but many urgent constraints are sudden. Our method overcomes the shortcomings mentioned above. A comparison with the different approaches is provided in Table 1. 
3 PRELIMINARIES

3.1 MARKOV DECISION PROCESS (MDP)

A Markov Decision Process (MDP) (Sutton & Barto, 2018) is defined by the 5-tuple (S, A, R, P, µ), where S is the state space, A is the action space, R : S × A → R is the reward function, P : S × A × S → [0, 1] is the transition kernel, and µ : S → [0, 1] is the initial state distribution. We write s_0 ∼ µ to denote that the initial state is drawn from µ, and similarly a_t ∼ π(· | s_t) and s_{t+1} ∼ P(· | s_t, a_t). This defines a trajectory τ = (s_0, a_0, s_1, . . .). Consider a policy π = {π(a | s) : s ∈ S, a ∈ A}; we aim to find a stationary policy that maximizes the expected discounted return, i.e., the objective function

J_R(\pi) = \mathbb{E}_{\pi,\, s_0 \sim \mu}\Big[\sum_{t=0}^{\infty} \gamma^t r_t\Big],

where γ is the discount factor and r_t is the reward at time t. The update and improvement of π is therefore based on a comprehensive judgment of each reward. A deterministic policy outputs a = π(s), while a stochastic policy samples a ∼ π(a | s).

3.2 EQUALITY-CONSTRAINED ACTION SPACE EXPLORATION

Our method is based on knowledge of the dynamics. To highlight the effectiveness of the method, and to keep it applicable to any continuous-control reinforcement learning method, this dynamics knowledge only affects the action-selection stage. We first formulate the constraints and dynamics following the notation used in Laine & Tomlin (2019). Without loss of generality, the constraint occurs at t = 0, 1, . . . , T − 1, T. For convenience, let s ∈ R^n and a ∈ R^m; we address the following policy problem:

a ∼ π(a | s)
s.t.  dynamics:           s_{t+1} − (F_{s_t} s_t + F_{a_t} a_t + f_{1_t}) = 0,  t = 0, 1, . . . , T − 1
      initial condition:  s_0 ∼ µ
      constraint at t:    G_{s_t} s_t + G_{a_t} a_t + g_{1_t} = 0,  t = 0, 1, . . . , T − 1
      constraint at T:    G_{s_T} s_T + g_{1_T} = 0

where F_{s_t}, F_{a_t} and f_{1_t} define the agent dynamics at times t = 0, 1, . . . , T − 1, while G_{s_t}, G_{a_t} and g_{1_t} define the constraints at t = 0, 1, . . . , T − 1, and G_{s_T} and g_{1_T} define the constraint at t = T. The deterministic-policy case is analogous. In addition, we introduce the 'constraint-to-go' function C(s_t) used in Laine & Tomlin (2019), C(s_t) = H_{s_t} s_t + h_{1_t} for t = 0, 1, . . . , T, which plays a role similar to the value function: it stacks the residual constraints from state s_t onwards. In particular, at time T we have C(s_T) = G_{s_T} s_T + g_{1_T}.

4 ACTION DECOMPOSITION REGULAR

4.1 ACTION DECOMPOSITION

We first explain the idea of action decomposition with a simple example. As shown in the speed coordinate system of Fig. 2, when a constraint requiring u_x = u_y occurs, we can linearly combine u_x and u_y into w = (√2/2) u_x + (√2/2) u_y and y = −(√2/2) u_x + (√2/2) u_y, so that we only need to keep y = 0 to satisfy the constraint, while the w dimension remains completely free.

4.2 SAFETY REGULAR BASED ON ACTION DECOMPOSITION

We solve the problem of safe exploration in the action space under linear equality constraints based on the above idea. In our method, the treatment of the constraint dimension matches the dynamic-programming technique of Laine & Tomlin (2019), while the treatment of the free dimension is extended to preserve the exploratory behaviour of the policy. The solution process goes backwards, starting from t = T − 1:

a ∼ π(a | s)
s.t.  s_T − (F_{s_{T−1}} s_{T−1} + F_{a_{T−1}} a_{T−1} + f_{1_{T−1}}) = 0
      a \in \arg\min_a \Big\| \begin{pmatrix} G_{s_{T−1}} s_{T−1} + G_{a_{T−1}} a_{T−1} + g_{1_{T−1}} \\ H_{s_T} s_T + h_{1_T} \end{pmatrix} \Big\|_2

We then use the dynamics equation to eliminate s_T, so that only s_{T−1} and a_{T−1} appear in the problem. Rearranging, the problem becomes:

a ∼ π(a | s) s.t.
a ∈ arg min a ‖NsT−1sT−1 +NaT−1aT−1 + n1T−1‖2 where we define as follows: NsT−1 = ( GsT−1 HsTFsT−1 ) , NaT−1 = ( GaT−1 HsTFaT−1 ) , n1T−1 =( g1T−1 HsT f1T−1 + h1T ) . Obviously, at this step, a of the constraint item is only related to NaT−1 , that is, all the information of the constraint item is contained in NaT−1 .Perform SVD on NaT−1 to get NaT−1 = UT−1ΣT−1V T T−1, and define V T T−1 = ( PTT−1 ZTT−1 ) , where the first r rows of the V TT−1 are denoted as PTT−1, the last (m− r) rows are denoted as ZTT−1. And r is the rank of NaT−1 . Then we make use of the following result: Corollary 4.1. The action a formulated at time t can be decomposed in the following form: ât = Ptyt + Ztwt Proof. The proof is provided in Appendix D. We can regard yt as a constraint dimension and wt as a free dimension. Since the learning of the policy also includes the punishment feedback caused by the violation of constraints, so the original problem is transformed into: wT−1 = Z T · a, a ∼ π(a | s) yT−1 = arg min a ‖NsT−1sT−1 +NaT−1PT−1yT−1 + n1T−1‖2 = −(NaT−1PT−1)†(NsT−1sT−1 + n1T−1) Through the above steps, the solution âT−1 will be easily obtained: âT−1 = PT−1yT−1 + ZT−1wT−1. And update C(sT−1) by combining âT−1 and (NsT−1sT−1 +NaT−1 âT−1 + n1T−1): C(sT−1) = HsT−1sT−1 + h1T−1 = (I −NaT−1PT−1(NaT−1PT−1)†)NsT−1sT−1 + (I −NaT−1PT−1(NaT−1PT−1)†)n1T−1 Where (NaT−1PT−1) † is the pseudo inverse of NaT−1PT−1. We can show that C(st) = 0, if NaT−1PT−1 is an invertible matrix. 5 PRACTICAL IMPLEMENTATION 5.1 IMPLEMENTATION DETAIL We divide the use of ADR into two types, which is shown in the Fig. 3. The first type: When the agent does not receive any constraint signals, the policy generated by the policy network directly obtains executable actions in the form of deterministic policy or stochastic policy. The second type: When the agent receives the constraint signals that it needs to comply within a period of time(t = 0, 1, . . . , T − 1, T ) in the future, we might as well start counting the time from receiving the constraint signals. We require that the constraint signals of this period of time be processed through ADR to obtain the constraint dimension action and the free dimension projection matrix, and the output action of the RL also needs to be corrected by the above result for obtaining the actual execution. In addition, we give the method to deal with situations where the selected action violates convex action space. The details are provided in Subsection 5.2. A detailed pseudo code is provided in Appendix A of the supplementary materials. 5.2 CONVEX ACTION SPACE Various physical limitations of action will appear in real-world applications. And physical limits can lead to the limited action space. In this section, we discuss the most common convex action space. In fact, the physical limitation is also a constraint when the selected action exceeds it. In order to satisfy the hard constraint(Chen et al., 2021) as much as possible, we suggest that when the action exceeds the physical limitation, first program the constraint dimension in the action space to find the constraint dimension action closest to ADR’s recommendation, and then find the closest free dimension action suggested by the RL. This ensures that actions get higher rewards under conditions that satisfy the constraints as much as possible. In fact, this is a multi-objective optimization problem(MOO)(Miettinen, 2012; Lin et al., 2019). 
The above method (also called -constraint method or main objective method) is widely used, and its optimal solution is the effective solution of MOO(also called the Pareto optimum solving) when the limited action space is a convex set. We define the following problems: Problem 1. min ( f1(a) f2(a) ) = ( ‖PTa− PT â‖2 ‖ZTa− ZT â‖2 ) s.t. a ∈ D Problem 2. min ( ‖PTa− PT â‖2 ) s.t. a ∈ D Problem 3. min ( ‖ZTa− ZT â‖2 ) s.t. a ∈ H where H is the efficient solution set of Problem. 2. This result can be demonstrated by the following Theorem 5.1. Theorem 5.1. Suppose to exist ā ∈ D, D is a convex set, subject to ā is the optimal solution of Problem. 3, then ā is not only the weakly effective solution of Problem. 1, but also the Pareto optimal solution of Problem. 1, and it is unique. Proof. See Appendix E of the supplementary materials. 6 EXPERIMENTS Although we expect to show benefits from combining ADR with any continuous-control RL method, for the following experiments, we use the Deep Deterministic Policy Gradient (DDPG)(Lillicrap et al., 2015). Although DDPG (Lillicrap et al., 2015) is a deterministic policy that can directly output actions, in fact, our method is not only suitable for reinforcement learning algorithms for deterministic policy, but also has applicability for stochastic policy. Our experiments are based on the current popular multi-agent particle world (Lowe et al., 2017) with continuous observation and action space and some basic simulated physics. We design two new sets of simulation experiments based on physical constraints to test the effectiveness of ADR as shown in Fig 4. It is worth mentioning that no new hyperparameters are introduced in the process of our experiment. We provide exact details about experiments in Appendix B and hyperparameters about the method in Appendix C. 6.1 KEEP IT STRAIGHT 6.1.1 EXPERIMENT DESCRIPTION The agent starts from a random starting point to a random final landmark. But we require the agent to maintain a straight line movement as accurately as possible in a certain direction during the first period of time. Although this task seems simple, it is not easy to satisfy the accuracy requirements for RL. That is because the larger learning rate of the algorithm leads the faster convergence and the poorer stability, and the smaller learning rate of the algorithm leads to slow convergence and waste of time(Smith, 2017). In this experiment, the reward is set based on the negative Euclidean distance from the final landmark at each moment. At each step, the agent also obtains the reward for minimizing energy consumption based on the negative two-norm of action. The penalty is set based on the two-norm of the velocity deviating from the current motion direction. Finally, the violated constraint is equal to the accumulation of the two-norm of the distance from the original straight line at each time step when the constraint occurs. In fact, this will require the agent to learn to approach the landmark more quickly while keeping the direction of motion stable in the early stage. 6.1.2 EXPERIMENT ANALYSIS Learning curves are provided in the Fig. 5. For the reward curve, DDPG needs a lot of episodes of training to obtain higher rewards, but DDPG+ADR gets higher rewards at the beginning and is always higher than DDPG in the whole training process. For the violated constraint curve, DDPG seriously violates the constraints at the beginning of training, and can not strictly satisfy the constraints in the whole training process. 
In fact, the minimum value of constraint violation in a single round of DDPG is 7.4 × 10−8. But DDPG+ADR can keep the violation of constraints in the order of 10−16 in the whole process, which can be considered negligible. The experiments show that, on the one hand, DDPG+ADR can indeed make the actions output by RL’s policy strictly satsify the linear equality constraints, even in the training process. On the other hand, compared with DDPG, DDPG+ADR shows better performance in obtaining rewards. 6.2 PASSING THE INTERMEDIATE STATION 6.2.1 EXPERIMENT DESCRIPTION The agent is still required to go from a random starting point to a random final landmark. And the agent will suddenly receive a constraint signal to go to an intermediate station at the intermediate moment. Note that since the agent is constrained only at the intermediate moment, the agent will exceed its physical limitations due to the distance of the intermediate station, which is too far away. In this case, the agent can only approach as close as possible and never satisfy the constraint. In fact, this experiment requires the algorithm to be robust when the agent encounters a sudden constraint that exceeds its physical limit. In this experiment, the reward is set based on the negative Euclidean distance from the final landmark at each moment. At the same time, the agent also obtains the negative two-norm of action as the reward for minimizing energy consumption. The penalty for the agent receives and the violated constraint in each episode are set based on the Euclidean distance from the intermediate station. 6.2.2 REWARD SHAPING For comparison, we also conduct reward shaping experiments on the DDPG algorithm. At each time step before the end of the constraint, we set the modified reward function(Ng et al., 1999) to the same scale as the original reward, which is set by the following formula: rF = φ(st)− φ(st−1), φ(s0) = 0 Where φ is set based on the distance from the intermediate station, see Appendix B for details. 6.2.3 EXPERIMENT ANALYSIS The experimental results are shown in the Fig. 6. Compared with DDPG, DDPG+ADR has demonstrated superior performance, not only in terms of cumulative rewards much higher, but also much smaller in violation of constraints. Surprisingly, the design of reward shaping does not make DDPG run better but have an adverse effect. It means that the value function of this task is complicated, and the reward shaping that only relies on constraints is quite different from the value function. This shows that at the moment when the constraint occurs, DDPG+ADR really shows robustness. It helps the agent make the action that satisfies the constraint as much as possible and minimizes the missed reward. 7 DISCUSSION In this paper, we propose a simple and practical approach that can effectively solve the problem of action exploration in reinforcement learning under the linear equality constraints. Our method ADR is based on the linear dynamics model and uses the idea of SVD to decompose the action space into constrained dimension and free dimension to control separately. At the same time, we propose feasible solutions to the situation that constraints exceed convex action space, and ensure that actions satisfy the constraints as much as possible within a single time step, and the loss of rewards can be minimized. In the experiment, compared with DDPG, DDPG+ADR can obtain more rewards and stricter constraints satisfaction in both tasks. At the same time, DDPG+ADR shows its robustness in sudden constrained tasks. 
It is worth mentioning that our method has the advantages of no training and does not need to make assumptions about the dimensions of constraints. An exciting feature is that our method can be combined with any continuous-control RL method. In addition, there are many promising ideas for future work: the use of interior point methods to improve the equality constraints; the deeper integration of SVD ideas with reinforcement learning(Gemp et al., 2020). And in the real world, some dynamic models are too complicated to be researched. In future work, we plan to use Piecewise Linear Neural Networks(PLNN) which can explain the non-linear dynamic model of an object(Nagabandi et al., 2018; Chu et al., 2018) to extend the applicability of our method. A PSEUDO CODE Algorithm 1: Action Decomposition Regular Input: constraintGst , Gat , g1t , GsT , g1T ; policy network πθ ; dynamics Fst , Fat , f1t ; t = 0, 1, . . . , T − 1 Output: action at; t = 0, 1, . . . , T − 1 1: if T > 0 then 2: HsT ← GsT 3: hsT ← gsT 4: for t = T − 1, T − 2, . . . , 0 do 5: Nat ← ( Gat Hst+1Fat ) 6: Nst ← ( Gst Hst+1Fst ) 7: n1t ← ( g1t Hst+1f1t + h1t+1 ) 8: V Tt ← SVD(Nat) 9: Pt, Zt ← V Tt 10: Hst ← (I −NatPt(NatPt)†)Nst 11: h1t ← (I −NatPt(NatPt)†)n1t 12: end for 13: end if 14: if T > 0 then 15: for t = 0, 1, . . . , T − 1 do 16: at ← πθ 17: Receive st 18: yt ← −(NatPt)†(Nstst + n1t) 19: wt ← ZTt at 20: at ← Ptyt + Ztwt 21: end for 22: else 23: at ← πθ 24: end if B EXPERIMENT DETAILS All the experiments we conducted are built on Python(3.6) and Tensorflow (1.8.0) in Intel i7-10875H CPU. B.1 KEEP IT STRAIGHT We used the multi-agent particle environment (Mordatch & Abbeel, 2017) provided by OpenAI Gym(Brockman et al., 2016) for this set of tasks. The agent moves on a two-dimensional plane and travels from a random starting point to a random goal point. At the beginning of each episode, we require the agent to accurately move in a straight line in the y-axis direction, similar to walking out of a parking space or crossing a narrow road. In our setting, the step length of an episode is 26 steps, so the duration of this straight-going phase should not be too long, and our setting is 5 steps. For the reward of the agent in the experiment, we set the following: 1. Reward for the agent to go to the goal: rgoal = −‖pagent − pgoal‖22 2. Reward for the agent to keep moving in a straight line: rkeep = −10000|vy|2 3. Reward for the agent about control Effort Penalty: rcontrol = −0.01‖a‖22 Usually when we face such a multi-objective optimization problem(MOO), we always impose a large weight on the hard constraint. In order to let DDPG and DDPG+ADR learn to keep the straight line as hard as possible, we both set the weight to 10000. This has no effect on the comparison of our method ADR. where pagent, pgoal are the positions of the agent and goal point. And vy is the velocity of the agent in y-axis. And the constraint setting is: constraint : vy = 0 In the multi-agent particle environment (Mordatch & Abbeel, 2017), a ∈ R5 represents the join forces of the agent, and s ∈ R4 is composed of the speed of the agent and the distance to the goal point. Regardless of noise, and let the mass m of the agent be 1, we fully follow the dynamic equation in multi-agent particle environment(Mordatch & Abbeel, 2017): v = Amadt+ (1− d)v x = vdt+ x A = ( 0, 1,−1, 0, 0 0, 0, 0, 1,−1 ) where A is the matrix that turn the resultant force into the driving force of agent. And dt = 0.1 is the step size of a single time step. 
The physical damping coefficient d = 0.25. B.2 PASSING THE INTERMEDIATE STATION The agent is also on a two-dimensional plane, going from a random starting point to a random goal point. But unlike before, there is an intermediate station at a distance of (0.3, 0.3) from the goal point. The agent will receive the constraint of going to the intermediate station as much as possible in the middle moment. The time step of each episode is also 26 steps, this time we chose the intermediate time t = 12. And it only takes effect at this moment. This requires the agent to learn to take corresponding actions in emergency situations. We set the agent’s reward in this task as follows: 1. Reward for the agent to go to the goal: rgoal = −‖pagent − pgoal‖22 2. Reward for the agent to move towards the intermediate station: rpass = −10000‖(0.3, 0.3)− (pagent − pgoal)‖22 3. Reward for the agent about control Effort Penalty: rcontrol = −0.01‖a‖22 Similarly, in order for the policy in DDPG and DDPG+ADR to learn to satisfy the hard constraints as much as possible, we set a larger weight for the second reward. And the constraint setting is: constraint : pagent − pgoal = (0.3, 0.3) As for the dynamic equation, it is exactly the same as the task setting above. And in reward shaping, rpass is modified to rF t. The effective time of rFt is modified from t=1 to t=12. The formula of rF is modified to: rFt = rpasst − rpasst−1 , t = 1, 2 . . . , 12, where rpasst means that the argument of the function rpass is the current state, and the argument of rpasst−1 is the state at the previous moment. And rpass0 = φ(s0) = 0 (Ng et al., 1999). C HYPERPARAMETERS FOR EXPERIMENTS The hyperparameter settings of DDPG and DDPG+ADR are exactly the same, and there are no additional parameters introduced. And in fact, there is no need to adjust the parameters in our experiment. Activation function for MLP is ReLU. Table 2 shows the hyperparameters used in the experiment. D PROOF OF COROLLARY 4.1 Proof. Since V is composed of normal orthogonal basis, then we have ât = Vt · V Tt · ât, where V Tt = ( PTt ZTt ) . We can therefore derive ât = ( Pt, Zt ) · V Tt · ât = Ptyt + Ztwt. E PROOF OF THEOREM 5.1 Proof. Since the objective functions of Problem. 2 and Problem. 3 are both convex functions, D is a convex set, and the local minimum of the convex function is the global minimum, so Problem. 2 and Problem. 3 always have optimal solutions. If ā is not the Pareto optimal solution of Problem. 1, then ∃a ∈ D, which satisfies one of the following two cases: either f1(a) ≤ f1(ā) and f2(a) < f2(ā), or f1(a) < f1(ā) and f2(a) ≤ f2(ā). But the first case contradicts Problem. 3, and the second case contradicts Problem. 2. Uniqueness is obvious, because V T = ( PT ZT ) constitutes a set of Orthonormal basis in the action space.
1. What is the focus and contribution of the paper on safe reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its inspiration from SVD? 3. What are the weaknesses of the paper regarding its experiments and comparisons with other works? 4. How can the mathematical presentation in the paper be improved for better understanding? 5. Are there any concerns regarding the effectiveness and efficiency of the proposed method in guaranteeing safety? 6. What are the limitations of the paper regarding its discussion of important aspects, such as safety guarantees and violations?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a method called Action Decomposition Regular (ADR) for guaranteeing safety in RL problems with linear equality constraints. ADR is inspired by SVD and decompose the action space into constrained dimension and free dimension separately. The authors empirically compared their proposed method with several baselines (e.g., safety-agnostic one, reward-shaping). Review I think it is an interesting idea to use SVD for safe RL is interesting. As far as I know, the key idea is novel and would be helpful under a strong assumption that the dynamics are linear. I have several concerns. First concern is experiment. The environments used in this paper are multi-agent particle world, and this task is known to be easy. Given there is no theoretical result (regarding the performance of the algorithm) in this paper, I think the authors should have tested their proposed method in more complicated environments. Recently, Safety-Gym has been a popular as a testbed. Also, the baselines are safety-agnostic DDPG in Section 6.1 and safety-agnostic DDPG and reward-shaping in Section 6.2, which are also rather weak baselines. I do feel that the authors should have compared their proposed method with more powerful baselines (e.g., CPO, PPO-Lagrangian) in more complicated tasks (i.e., Safety-Gym). I think it is possible to implement CPO or PPO-Lagrangian as the authors conducted for reward-shaping (i.e., Section 6.2.2). Second concern is that mathematics in this paper is hard to understand. I think this situation may be resolved by, for example in Section 4.2 Use ⊤ instead of T . In this paper, symbol for transpose and time are same, which is very confusing. Clearly write the dimension of each matrix (e.g., R d 1 × d 2 ) and add some figures. r is defined in two different meanings (reward and rank) Also, Section 5.2 is hard to follow. I guess major reasons would be that the "effective solution", "efficient solution", and H are not explained. I would recommend the authors to explain the motivation or intuition behind this analysis. Finally, I consider that important aspects have not fully discussed. in the Introduction, the authors emphasize the importance of guaranteeing safety and never violating constraint. However, in Figure 6, the authors' proposed method violate the safety constraint even after a long training episodes. The authors should have discussed why this happens (I guess it is due to non-linearity).
ICLR
Title Semi-supervised learning objectives as log-likelihoods in a generative model of data curation Abstract We currently do not have an understanding of semi-supervised learning (SSL) objectives such as pseudo-labelling and entropy minimization as log-likelihoods, which precludes the development of e.g. Bayesian SSL. Here, we note that benchmark image datasets such as CIFAR-10 are carefully curated, and we formulate SSL objectives as a log-likelihood in a generative model of data curation that was initially developed to explain the cold-posterior effect (Aitchison 2020). SSL methods, from entropy minimization and pseudo-labelling, to state-of-the-art techniques similar to FixMatch can be understood as lower-bounds on our principled log-likelihood. We are thus able to give a proof-of-principle for Bayesian SSL on toy data. Finally, our theory suggests that SSL is effective in part due to the statistical patterns induced by data curation. This provides an explanation of past results which show SSL performs better on clean datasets without any “out of distribution” examples. Confirming these results we find that SSL gave much larger performance improvements on curated than on uncurated data, using matched curated and uncurated datasets based on Galaxy Zoo 2.1 1 INTRODUCTION To build high-performing deep learning models for industrial and medical applications, it is necessary to train on large human-labelled datasets. For instance, Imagenet (Deng et al., 2009), a classic benchmark dataset for object recognition, contains over 1 million labelled examples. Unfortunately, human labelling is often prohibitively expensive. In contrast obtaining unlabelled data is usually very straightforward. For instance, unlabelled image data can be obtained in almost unlimited volumes from the internet. Semi-supervised learning (SSL) attempts to leverage this unlabelled data to reduce the required number of human labels (Seeger, 2000; Zhu, 2005; Chapelle et al., 2006; Zhu & Goldberg, 2009; Van Engelen & Hoos, 2020). One family of SSL methods — those based on low-density separation — assume that decision boundaries lie in regions of low probability density, far from all labelled and unlabelled points. To achieve this, pre deep learning (DL) low-density separation SSL methods such as entropy minimization and pseudo-labelling (Grandvalet & Bengio, 2005; Lee, 2013) use objectives that repel decision boundaries away from unlabelled points by encouraging the network to make more certain predictions on those points. Entropy minimization (as the name suggests) minimizes the predictive entropy, whereas pseudo-labelling treats the currently most-probable label as a pseudo-label, and minimizes the cross entropy to that pseudo-label. More modern work uses the notion of consistency regularisation, which augments the unlabelled data (e.g. using translations and rotations), then encourages the neural network to produce similar outputs for different augmentations of the same underlying image (Sajjadi et al., 2016; Xie et al., 2019; Berthelot et al., 2019b; Sohn et al., 2020). 
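As a concrete illustration of the consistency idea just described (and not of any particular algorithm cited above), the following is a minimal NumPy sketch of a consistency penalty between the class probabilities a classifier assigns to two augmentations of the same unlabelled image; the squared-difference form and the stand-in probability vectors are illustrative assumptions.

```python
import numpy as np

def consistency_loss(p_aug1, p_aug2):
    """Penalise disagreement between the class-probability vectors predicted for
    two different augmentations of the same unlabelled image (batched, shape (N, C))."""
    return np.mean(np.sum((p_aug1 - p_aug2) ** 2, axis=-1))

# In practice these would be two forward passes of the network on independently
# augmented copies of the same batch; here we just use stand-in probabilities.
p_a = np.array([[0.7, 0.2, 0.1]])
p_b = np.array([[0.5, 0.3, 0.2]])
print(consistency_loss(p_a, p_b))   # 0.06
```

The resulting penalty would be added, with some weighting, to the supervised loss computed on the labelled batch.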
Further developments of this line of work have resulted in many variants/combinations of these algorithms, from directly encouraging the smoothness of the classifier outputs around unlabelled datapoints (Miyato et al., 2018) to the “FixMatch” family of algorithms (Berthelot et al., 2019b;a; Sohn et al., 2020), which combine pseudo-labelling and consistency regularisation by augmenting each image twice, and using one of the augmented images to provide a pseudo-label for the other augmentation. 1Our code: https://anonymous.4open.science/r/GZ_SSL-B6CC; MIT Licensed However, some of the biggest successes of deep learning, from supervised learning to many generative models, have been built on a principled statistical framework as maximum (marginal) likelihood inference (e.g. the cross-entropy objective in supervised learning can be understood as the log-likelihood for a Categorical-softmax model of the class-label MacKay, 2003). Low-density separation SSL methods such as pseudo-labelling and entropy minimization are designed primarily to encourage the class-boundary to lie in low-density regions. Therefore they cannot be understood as log-likelihoods and cannot be combined with principled statistical methods such as Bayesian inference. Here, we give a formal account of SSL methods based on low-density separation (Chapelle et al., 2006) as lower bounds on a principled log-likelihood. In particular, we consider pseudo-labelling (Lee, 2013), entropy minimization (Grandvalet & Bengio, 2005), and modern methods similar to FixMatch (Sohn et al., 2020). This log-likelihood arises from a generative model of data curation that was initially developed to explain the cold-posterior effect (Aitchison, 2021). Critically, this approach gives an explanation for previous findings that SSL is most effective when unlabelled data is obtained by throwing away labels from the carefully curated training set, and is less effective when unlabelled data is taken from uncurated images, especially those that do not depict one of the classes of interest (Cozman et al., 2003; Oliver et al., 2018; Chen et al., 2020; Guo et al., 2020). We confirmed the importance of data curation for SSL on toy data generated from a known model and on real data from Galaxy Zoo 2 (Willett et al., 2013). 2 BACKGROUND Our work brings together many disparate areas. Here, we give an introduction to a generative model of data curation (Aitchison, 2021) initially developed to explain the cold posterior effect (Wenzel et al., 2020), pseudo-labelling and entropy minimization (Grandvalet & Bengio, 2005; Lee, 2013), and the treatment of unlabelled points in the standard supervised learning setup. 2.1 A GENERATIVE MODEL OF DATA CURATION To develop a model of data curation, remember that image datasets including CIFAR-10 and ImageNet are curated to ensure they only contain images whose class-labels are unambiguous. For instance, in CIFAR-10, annotators were instructed that “It’s worse to include one that shouldn’t be included than to exclude one.”, and Krizhevsky (2009) “personally verified every label submitted by the annotators”. In creating ImageNet, Deng et al. (2009) made sure that a number of Amazon Mechanical Turk annotators agreed upon the class before including an image in the dataset. Thus, these datasets have two odd properties. First, consensus labels exist only for a subset of images, e.g. for a white-noise image, consensus cannot be reached and the image cannot be labelled. 
Second, inclusion of an image in a dataset like CIFAR-10 is informative in and of itself, as it indicates that the image shows an unambiguous example of one of the ten classes.

To understand these odd properties of curated datasets, consider a simplified generative model of consensus-formation: draw a random image, X, from the distribution over images, P(X), and ask S human annotators, indexed s, to give a label, {Y_s}_{s=1}^S (e.g. using Mechanical Turk). Importantly, every annotator is forced to label every image, and if the image is ambiguous they should give a random label. If all the annotators agree, Y_1 = Y_2 = ... = Y_S, they have consensus and the datapoint is included in the dataset. However, in the case of any disagreement, consensus is not reached and the datapoint is excluded (Fig. 1). Concretely, the final label, Y, is Y_1 (which is the same as all the other labels) if consensus was reached and None otherwise (Fig. 2C),

Y \mid \{Y_s\}_{s=1}^S = \begin{cases} Y_1 & \text{if } Y_1 = Y_2 = \cdots = Y_S \\ \text{None} & \text{otherwise} \end{cases}   (1)

Taking \mathcal{Y} to be the label set, we have Y_s \in \mathcal{Y}, and the final label, Y, could be any of the underlying labels in \mathcal{Y}, or None if consensus is not reached, so Y \in \mathcal{Y} \cup \{\text{None}\}. When consensus was reached, the likelihood is,

P(Y{=}y \mid X, \theta) = P(\{Y_s{=}y\}_{s=1}^S \mid X, \theta) = \prod_{s=1}^S P(Y_s{=}y \mid X, \theta) = P(Y_s{=}y \mid X, \theta)^S = (p_y(X))^S   (2)

where we have assumed annotators are IID, and p_y(X) = P(Y_s{=}y \mid X, \theta) is the single-annotator probability. From here, it is possible to see how this model might be taken to give an account of tempering, as we have taken the underlying single-annotator likelihood, p_y(X), to the power S (for further details see Aitchison, 2021).

2.2 LOW-DENSITY SEPARATION SEMI-SUPERVISED LEARNING OBJECTIVES

The intuition behind low-density separation objectives for semi-supervised learning is that decision boundaries should be in low-density regions away from both labelled and unlabelled data. As such, it is sensible to "repel" decision boundaries away from labelled and unlabelled datapoints, and this can be achieved by making the classifier as certain as possible on those points. This happens automatically for labelled points, as the standard supervised objective encourages the classifier to be as certain as possible about the true class label. But for unlabelled points we need a new objective that encourages certainty, and we focus on two approaches. First, and perhaps most direct, is entropy minimization (Grandvalet & Bengio, 2005),

\mathcal{L}_{\text{entropy}}(X) = \sum_{y \in \mathcal{Y}} p_y(X) \log p_y(X)   (3)

where, following the typical probabilistic approach, we write the negative entropy as an objective to be maximized. Alternatively, we could use pseudo-labelling, which takes the current classification, y^*, to be the true label, and maximizes the log-probability of that label (Lee, 2013),

\mathcal{L}_{\text{pseudo}}(X) = \log p_{y^*}(X), \qquad y^* = \operatorname*{argmax}_{y \in \mathcal{Y}} \log p_y(X).   (4)

Lee (2013) regarded pseudo-labelling as closely related to entropy minimization, as the optimal value of both objectives is reached when all the probability mass is assigned to one class. However, neither is formulated as a principled log-likelihood, which gives rise to at least three problems. First, these methods cannot be combined with other principled statistical methods such as Bayesian inference. Second, it is unclear how to combine these objectives with standard supervised objectives, except by taking a weighted sum and doing hyperparameter optimization over the weight. Third, these objectives risk reinforcing any initial poor classifications, and it is unclear whether this is desirable.
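Both objectives are a few lines of code on top of a classifier's logits. The sketch below is our own illustration (the paper does not prescribe an implementation): it returns per-example values of L_entropy (Eq. 3) and L_pseudo (Eq. 4), written as quantities to be maximized; in practice one minimizes their negatives, typically added to the supervised loss with a hand-tuned weight, which is exactly the ad-hoc step the likelihood view of Section 3 removes.

```python
import torch
import torch.nn.functional as F

def entropy_objective(logits):
    # L_entropy(X) = sum_y p_y(X) log p_y(X): the negative predictive entropy (Eq. 3).
    log_p = F.log_softmax(logits, dim=-1)
    return (log_p.exp() * log_p).sum(dim=-1)   # one value per example, to be maximized

def pseudo_label_objective(logits):
    # L_pseudo(X) = log p_{y*}(X) with y* = argmax_y log p_y(X) (Eq. 4).
    log_p = F.log_softmax(logits, dim=-1)
    return log_p.max(dim=-1).values            # log-probability of the pseudo-label
```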
2.3 IN STANDARD SUPERVISED LEARNING, UNLABELLED POINTS SHOULD BE UNINFORMATIVE

It is important to note that under the standard supervised-learning generative model (Fig. 2A), unlabelled points should not give any information about the weights. Omitting the label, Y_sup, we obtain the graphical model in Fig. 2B. This model emphasises that the images, X, and the model parameters, θ, are marginally independent, so we cannot obtain any information about θ from X alone (Fig. 2B). Formally, the posterior over θ conditioned on X is equal to the prior,

P(\theta \mid X) = \frac{P(\theta, X)}{P(X)} = \frac{\sum_{y \in \mathcal{Y}} P(\theta, X, Y_{\text{sup}}{=}y)}{P(X)} = \frac{P(\theta)\,P(X)}{P(X)} \sum_{y \in \mathcal{Y}} P(Y_{\text{sup}}{=}y \mid \theta, X) = P(\theta),   (5)

as 1 = \sum_{y \in \mathcal{Y}} P(Y_{\text{sup}}{=}y \mid \theta, X).

To confirm this result is intuitively sensible, note that there are many situations where encouraging the decision boundary to lie in low-density regions would be very detrimental to performance. Consider a classifier with two input features: x_0 and x_1 (Fig. 4A). The class boundary lies in the high-density region crossing both clusters, so to obtain a reasonable result, the classifier should ignore the low-density region lying between the clusters. However, strong low-density separation SSL terms in the objective may align the cluster boundaries with the class boundaries, leading the classifier to wrongly believe that one cluster is entirely one class and the other cluster is entirely the other class. In contrast, supervised learning without SSL will ignore clustering and obtain a reasonable answer close to the grey dashed line. Importantly, this is just an illustrative example to demonstrate that, without further assumptions, the standard supervised approach of ignoring unlabelled data is sensible; semi-supervised learning without loss of performance in such settings has been studied and is known as Safe SSL (Li & Zhou, 2014; Krijthe & Loog, 2014; Kawakita & Takeuchi, 2014; Loog, 2015; Krijthe & Loog, 2016).

3 THEORY

SSL methods are usually applied to benchmark datasets such as CIFAR-10 or ImageNet. These datasets were first carefully curated during the labelling process (Fig. 3A), implying that ambiguous images close to the decision boundary were excluded. Critically, unlabelled points for these benchmark datasets are obtained by taking labelled points (which have reached consensus) and throwing away their labels (Fig. 3B). The likelihood for consensus (Y ≠ None) is

P(Y \neq \text{None} \mid X, \theta) = \sum_{y \in \mathcal{Y}} (p_y(X))^S.   (6)

This probability is close to 1 (for S > 1) if the underlying distribution, p_y(X), puts most of its mass onto one class, and the probability is smaller if the mass is spread out over classes. As such, the likelihood "repels" decision boundaries away from unlabelled points, which is the common intuition behind low-density separation SSL methods, and which should be beneficial if class boundaries indeed lie in regions of low probability density away from both labelled and unlabelled points. If noconsensus images are observed (Fig. 2C), we can include a likelihood term for those images,

P(Y = \text{None} \mid X, \theta) = 1 - P(Y \neq \text{None} \mid X, \theta) = 1 - \sum_{y \in \mathcal{Y}} (p_y(X))^S.   (7)

If noconsensus images are not observed, we could in principle integrate over the underlying distribution over images, P(X=x). However, we do not even have samples from the underlying distribution over images (and if we did, we would have the noconsensus images, so we could use Eq. 7). As such, this term is usually omitted (e.g.
Aitchison, 2021), but the use of out-of-distribution (OOD) datasets as surrogate noconsensus points is an important direction for future work.

3.1 ENTROPY MINIMIZATION AND PSEUDO-LABELS ARE LOWER BOUNDS ON OUR PRINCIPLED LOG-LIKELIHOOD

To prove that entropy minimization forms a lower-bound on our log-likelihood (Eq. 6), we begin by writing the log-likelihood of consensus in terms of an expectation over labels, y,

\log P(Y \neq \text{None} \mid X, \theta) = \log \sum_{y \in \mathcal{Y}} p_y(X)\,(p_y(X))^{S-1} = \log \mathbb{E}_{p_y(X)}\!\left[(p_y(X))^{S-1}\right].   (8)

Applying Jensen's inequality, the negative entropy gives a lower-bound on our log-likelihood,

\log P(Y \neq \text{None} \mid X, \theta) \geq \mathbb{E}_{p_y(X)}\!\left[\log (p_y(X))^{S-1}\right] = (S-1) \sum_{y \in \mathcal{Y}} p_y(X) \log p_y(X) = (S-1)\,\mathcal{L}_{\text{entropy}}(X).   (9)

This bound is tight for a uniform predictive distribution, p_y(X) = 1/|\mathcal{Y}|,

\log P(Y \neq \text{None} \mid X, \theta) = \log \sum_{y \in \mathcal{Y}} (p_y(X))^S = \log |\mathcal{Y}| \left(\tfrac{1}{|\mathcal{Y}|}\right)^{S} = -(S-1) \log |\mathcal{Y}|   (10)

(S-1)\,\mathcal{L}_{\text{entropy}}(X) = (S-1) \sum_{y \in \mathcal{Y}} p_y(X) \log p_y(X) = -(S-1) \log |\mathcal{Y}|.   (11)

Pseudo-labelling forms an alternative lower bound on the log-likelihood, which is obtained by noting that all (p_y(X))^S are positive, so selecting any subset of terms in the sum gives a lower bound,

\log P(Y \neq \text{None} \mid X, \theta) = \log \sum_{y \in \mathcal{Y}} (p_y(X))^S \geq \log (p_{y^*}(X))^S = S \log p_{y^*}(X) = S\,\mathcal{L}_{\text{pseudo}}(X).   (12)

The inequality holds if we choose y^* to be any class, but will be tightest if we choose the highest-probability class. This bound is tight for a predictive distribution that puts all its mass on y^*, so p_{y^*}(X) = 1 and p_y(X) = 0 for y ≠ y^*,

\log P(Y \neq \text{None} \mid X, \theta) = \log \sum_{y \in \mathcal{Y}} (p_y(X))^S = \log (p_{y^*}(X))^S = \log 1 = 0   (13)

S\,\mathcal{L}_{\text{pseudo}}(X) = S \log p_{y^*}(X) = S \log 1 = 0.   (14)

As such, entropy minimization and pseudo-labelling optimize different lower-bounds on our principled log-likelihood, \log P(Y \neq \text{None} \mid X, \theta), which gives a potential explanation for the effectiveness of pseudo-labelling and entropy minimization. Additionally, low-density separation SSL objectives encourage class-labels to be more certain. We can therefore expect pseudo-labelling to be the more relevant bound, as that bound is tight when the predictive distribution puts all its mass onto one class. In contrast, the entropy minimization bound is tight when the predictive distribution is uniform, which is discouraged by all low-density separation SSL objectives. This provides a potential explanation for the use of pseudo-labelling rather than entropy regularisation in modern SSL approaches such as FixMatch (Sohn et al., 2020).

3.2 DATA AUGMENTATION PRIORS AND FIXMATCH FAMILY METHODS

FixMatch family methods combine data augmentation and pseudo-labelling. To understand FixMatch as a bound on a principled log-likelihood, we therefore need a principled account of data augmentation as a likelihood. Inspired by Wenzel et al. (2020) (their Appendix K), we consider a distribution, P(X' \mid X), over augmented images, X', given the underlying unaugmented image, X. We choose the single-annotator predictive distribution as the average over predictive distributions for many different augmented images,

P(Y_s{=}y \mid X, \theta) = \mathbb{E}\left[p_y(X') \mid X\right]   (15)

where p_y(X') is the predictive probability obtained by applying the neural network to the augmented image, and remember that s ∈ {1, ..., S} indexes the annotator. This is a sensible prior because we expect the neural network to be invariant under data-augmentation, and if the predictions are approximately invariant, then averaging the predictive distributions has little impact (Fig. 4B left). However, if the predictions do vary dramatically with different data augmentations then we should not trust the network's classifications (i.e.
we should have an uncertain predictive distribution), and averaging over very different predictive distributions for different augmentations indeed gives rise to broader, more uncertain predictions (Fig. 4B right). To obtain a tractable objective in the supervised setting, we use a multi-sample version of Jensen's inequality, with K augmented images denoted X'_k,

\log P(Y_s{=}y \mid X, \theta) \geq \mathbb{E}\!\left[\log \tfrac{1}{K} \textstyle\sum_k p_y(X'_k) \,\middle|\, X\right].   (16)

Combining this single-annotator probability with our generative model of curation, we obtain,

\log P(Y{=}y \mid X, \theta) = S \log P(Y_s{=}y \mid X, \theta) = S \log \mathbb{E}\left[p_y(X') \mid X\right] \geq S\, \mathbb{E}\!\left[\log \tfrac{1}{K} \textstyle\sum_k p_y(X'_k) \,\middle|\, X\right].   (17)

The resulting objective for unlabelled points is,

\log P(Y \neq \text{None} \mid X, \theta) = \log \sum_{y \in \mathcal{Y}} P(Y{=}y \mid X, \theta) = \log \sum_{y \in \mathcal{Y}} \mathbb{E}\left[p_y(X') \mid X\right]^S \approx \log \sum_{y \in \mathcal{Y}} \left(\tfrac{1}{K} \textstyle\sum_k p_y(X'_k)\right)^{S},   (18)

where we approximate the expectation with K different samples of X', denoted X'_k. Unfortunately, this approach does not immediately form a bound on the log-likelihood due to the convex nonlinearity in taking the power of S. One key problem with approximating machine learning losses is that the optimizer learns to exploit approximation errors to find a pathological solution that makes the objective unboundedly large. We appear to be safe from that pathology here, as we are simply forming predictions by averaging over K augmentations of the underlying image. Nonetheless, to form a lower bound, we can follow FixMatch family algorithms by pseudo-labelling, i.e. by taking only one term in the sum, for class y^*. FixMatch chooses y^* by using the highest-probability class for a weakly-augmented image. An alternative approach is to choose the y^* giving the tightest bound, i.e. \operatorname{argmax}_y \tfrac{1}{K} \sum_k p_y(X'_k). In either case,

\log P(Y \neq \text{None} \mid X, \theta) \geq \log \mathbb{E}\left[p_{y^*}(X') \mid X\right]^S \geq S\, \mathbb{E}\!\left[\log \tfrac{1}{K} \textstyle\sum_k p_{y^*}(X'_k) \,\middle|\, X\right].   (19)

If K = 1 and y^* is chosen using a separate "weak" augmentation, then this is exactly equal to the FixMatch objective for unlabelled points.

Note that both of these objectives (Eqs. 18 and 19) promote reduced predictive uncertainty. Importantly, this does not just increase confidence in the single-augmentation predictive distributions, p_y(X'_k), but also increases alignment between the predictive distributions for different augmentations (Fig. 4B). In particular, if the single-augmentation predictives are all highly confident, but place that high confidence on different classes, then the multi-augmentation predictive formed by averaging will have low confidence (Fig. 4B right). The only way for the multi-augmentation predictive to have high confidence is if the underlying single-augmentation predictive distributions have high confidence in the same class (Fig. 4B left), which encourages the underlying network to become more invariant. This makes sense: if data-augmentation changes the class predicted by the neural network, then any predictions should be low confidence. And it implies that combining principled data augmentation with a generative model of data curation automatically gives rise to an objective encouraging invariance.

4 RESULTS

We begin by giving a proof-of-principle for Bayesian SSL on a toy dataset generated from a known model. Next, we tested our theoretical results (rather than trying to achieve SOTA performance) on real-world datasets. In particular, our theory gives one explanation for why SSL is typically more effective when unlabelled data is taken from the original, curated training set.
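For concreteness, the consensus likelihood of Eq. (6) and the multi-augmentation objective of Eq. (18) can be computed stably in log-space. The PyTorch sketch below is our reading of those equations, not the authors' released code; `model`, `aug_batches`, and the small 1e-12 floor inside the log are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def consensus_log_lik(logits, S):
    # Eq. 6: log P(Y != None | X) = log sum_y p_y(X)^S, computed in log-space.
    log_p = F.log_softmax(logits, dim=-1)
    return torch.logsumexp(S * log_p, dim=-1)   # one value per unlabelled example

def multi_aug_consensus_log_lik(model, aug_batches, S):
    # Eq. 18: average the per-augmentation predictives over K augmentations of each
    # image, then apply the consensus likelihood to the averaged distribution.
    # aug_batches: list of K tensors, each a differently augmented copy of the same batch.
    probs = torch.stack([F.softmax(model(x), dim=-1) for x in aug_batches]).mean(dim=0)
    return torch.logsumexp(S * torch.log(probs + 1e-12), dim=-1)
```

Keeping only the y^* term of the sum inside the logsumexp, rather than all classes, corresponds to the pseudo-labelled lower bound of Eq. (19).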
To confirm these results, we used Galaxy Zoo 2 as this was a real-world dataset which allowed us to generate matched curated and uncurated datasets. 4.1 BAYESIAN SSL ON A GENERATED DATASET Our formulation of SSL as a likelihood implies that it should be possible to take entirely novel approaches, such as using low-density separation SSL in a Bayesian neural network (BNN). We considered a toy dataset generated from a “true” neural network model with one hidden layer and 30 hidden units, 5 dimensional inputs and 2 output classes. We generated inputs IID from a Gaussian, then passed them through the “true” neural network, then sampled multiple categorical class-labels corresponding to different annotators. If all the simulated annotators agreed, consensus was reached and if any simulated annotators disagreed, consensus was not reached. We used 100 labelled datapoints, though not all of them will have reached consensus, and we used up to 1600 unlabelled points, though again not all of them will have reached consensus. Note that as the consensus/noconsensus status of a point arises from the generative model, we cannot independently specify the number of consensus/noconsensus points. We used Eq. (2) as the likelihood for labelled points, Eq. (6) as the likelihood for unlabelled points and Eq. (7) as the likelihood for noconsensus points. We sampled (and trained networks on) 500 datasets in parallel. We trained using Langevin dynamics with all data simultaneously (no minibatching) with no momentum and no rejection. For a generative model with S = 1, consensus is always reached and the problem is equivalent to standard supervised learning. As such, we found no benefits from including unlabelled points for S = 1. In contrast, for any setting of S > 1 we found that increasing the number of unlabelled points improved the test log-likelihood (Fig. 5A–B) and the test accuracy (Fig. 5C–D). 4.2 GALAXY ZOO 2 Our data curation based theory predicts that low-density separation based SSL should be much more effective on curated than uncurated data. To test this prediction on real-world data, we turned to Galaxy Zoo 22 (GZ2) (Willett et al., 2013) which uses images from the Sloan Digital Sky Survey. This dataset is particularly useful for us as it has received only very minimal filtering based on criteria such as object brightness and spatial extent. We defined 9 labels by truncating the complex decision tree followed by the annotators (for further details see Aitchison, 2021). Further, as each GZ2 image has received ∼ 50 labels, we can define a consensus coefficient by taking the fraction of annotators that agreed upon the highest probability class. We can then define a curated dataset by taking the images with consensus coefficient above some threshold within each class. Note that we needed to select images on a per-class basis, because annotators tend to be more confident on some classes than others, so taking the highest consensus coefficients overall would dramatically change the class balance. In particular, we used the top 8.2% of images, which gave a full curated dataset of just over 20,000 images. Of those, we randomly selected 2000 as labelled examples, 10000 as test examples, and 0 – 6000 as unlabelled examples. The images were preprocessed by center-cropping to 212× 212 and then scaled to 32× 32. We applied a FixMatch-inspired semi-supervised learning algorithm, with a standard supervised objective, with unlabelled objective given by Eq. (18) with K = 2. 
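As a concrete sketch of the Section 4.1 setup, the generator below draws Gaussian inputs, pushes them through a fixed random one-hidden-layer "true" network, and simulates S annotators by sampling categorical labels; points are then split into consensus and noconsensus exactly as in the Section 2.1 model. The ReLU nonlinearity and all function and argument names are our assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def make_toy_curated_dataset(n, S, rng, d_in=5, d_hidden=30, n_classes=2):
    # Fixed random "true" network defining the single-annotator class probabilities.
    W1 = rng.normal(size=(d_in, d_hidden));      b1 = rng.normal(size=d_hidden)
    W2 = rng.normal(size=(d_hidden, n_classes)); b2 = rng.normal(size=n_classes)
    X = rng.normal(size=(n, d_in))                          # IID Gaussian inputs
    logits = np.maximum(X @ W1 + b1, 0.0) @ W2 + b2         # ReLU hidden layer (assumed)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Each of the S simulated annotators samples a label from the same categorical.
    votes = np.array([[rng.choice(n_classes, p=pi) for pi in p] for _ in range(S)])
    consensus = (votes == votes[0]).all(axis=0)             # all annotators agree
    X_con, y_con = X[consensus], votes[0][consensus]        # usable as labelled/unlabelled points
    X_noc = X[~consensus]                                   # noconsensus points (Eq. 7)
    return X_con, y_con, X_noc

# Example: rng = np.random.default_rng(0); Xc, yc, Xn = make_toy_curated_dataset(1700, 3, rng)
```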
Data augmentation was given by vertical and horizontal flips, rotations from −180° to 180°, translations by up to 40% on both axes, and scaling from 20% to 180%. Note that, as we were trying to mirror the standard SSL setup, we did not include noconsensus points in the objective. We trained a ResNet18 with our maximum likelihood objective using SGD with a batch size of 500, a learning rate of 0.01 and 1500 epochs. We used an internal cluster of nVidia 1080 and 2080 GPUs, and the experiments took roughly 300 GPU hours.

We found that the test-log-likelihood for curated data improved slightly as more unlabelled points were included, whereas the test-log-likelihood for uncurated data dramatically declined as unlabelled points were added (Fig. 6A–B). We saw strong improvements in test accuracy with the number of unlabelled points for curated datasets (Fig. 6C–D). Note that in Fig. 6C the error rate for curated datasets is already very small, so to see any effect we needed to plot the test error, normalized to the initial test error (Fig. 6D). For uncurated data, the inclusion of large numbers of unlabelled points dramatically worsened performance, though the inclusion of a small number of unlabelled points gave very small performance improvements (Fig. 6C–D). Thus, this experiment is consistent with the idea that the effectiveness of SSL arises at least in part from curation of the underlying dataset.

2 Galaxy Zoo 2 data: https://data.galaxyzoo.org; image-use policy: www.sdss.org/collaboration/image-use-policy/

5 RELATED WORK

There are at least three main approaches to semi-supervised learning (Seeger, 2000; Zhu, 2005; Chapelle et al., 2006; Zhu & Goldberg, 2009). First, there is low-density separation, where we assume that the class boundary lies in a region of low probability density away from both labelled and unlabelled points. This approach dates back at least to transductive support vector machines (SVMs), where the model is to be tested on a finite number of known test locations (Vapnik, 1998; Chapelle et al., 1999). Those known test locations are treated as unlabelled points, and we find the decision boundary that perfectly classifies the limited number of labelled points, while at the same time being as far as possible from labelled and unlabelled data. Alternative approaches include pseudo-labelling and entropy minimization (Grandvalet & Bengio, 2005; Lee, 2013). Second, there are graph-based methods such as label propagation (Zhu & Ghahramani, 2002), which are very different from the methods considered here. Third, there are approaches that use unlabelled points to build a generative model of the inputs and leverage that model to improve classification (e.g. Kingma et al., 2014; Odena, 2016; Gordon & Hernández-Lobato, 2017). This approach was originally explored in a considerable body of classical work (e.g. McLachlan, 1975; Castelli & Cover, 1995; Druck et al., 2007); for a review, see Seeger (2000) and references therein. These approaches are fundamentally different from the SSL approaches considered here, as they require a generative model of inputs, while low-density separation methods do not. Generative modelling can be problematic, as training a generative model can be more involved than training a discriminative model, and because even when the model can produce excellent samples, the high-level representation may be "entangled" (Higgins et al., 2017), in which case it may not offer benefits for classification.
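Putting the pieces of Sections 2–4 together, the overall maximum-likelihood objective used in the experiments combines a supervised term (Eq. 2) for labelled points with the consensus term (Eq. 6) for unlabelled points and, where available, the noconsensus term (Eq. 7). The sketch below is our reconstruction of how these terms combine, not the released implementation; because every term is a log-likelihood under the same generative model, no hand-tuned weight between the supervised and unsupervised parts is needed.

```python
import torch
import torch.nn.functional as F

def curation_nll(model, S, x_lab, y_lab, x_unlab, x_nocon=None):
    # Negative log-likelihood under the curation model (sketch of Eqs. 2, 6 and 7).
    loss = 0.0
    log_p = F.log_softmax(model(x_lab), dim=-1)      # labelled: S * log p_y(X)      (Eq. 2)
    loss = loss - S * log_p.gather(1, y_lab[:, None]).squeeze(1).sum()
    log_p = F.log_softmax(model(x_unlab), dim=-1)    # unlabelled: log sum_y p_y^S   (Eq. 6)
    loss = loss - torch.logsumexp(S * log_p, dim=-1).sum()
    if x_nocon is not None:                          # noconsensus: log(1 - sum_y p_y^S)  (Eq. 7)
        log_p = F.log_softmax(model(x_nocon), dim=-1)
        p_consensus = torch.logsumexp(S * log_p, dim=-1).exp().clamp(max=1 - 1e-6)
        loss = loss - torch.log1p(-p_consensus).sum()
    return loss
```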
6 DISCUSSION Our theory provides a theoretical understanding of past results showing that SSL is more effective when unlabelled data is drawn from the original, curated training set (Cozman et al., 2003; Oliver et al., 2018; Chen et al., 2020; Guo et al., 2020). In the extreme, our theory might be taken to imply that if data has not been curated, then SSL cannot work, and therefore that low-density separation SSL methods will not be effective in messy, uncurated real-world datasets. However, this is not the complete picture. Low-density separation SSL methods, including our log-likelihood, fundamentally exploit class-boundaries lying in low-density regions. As such, low-density separation could equally come from the real underlying data or could be artificially induced by data curation (Fig. 3). None of these methods are able to distinguish between these different underlying sources of low-density separation and as such any of them may work on uncurated data where the underlying distribution displays low-density separation. However, the possibility for curation to artificially induce low-density separation does imply that we should be cautious about overinterpreting spectacular results obtained on very carefully curated benchmark datasets such as CIFAR-10. Surprisingly, the generative model of data curation used here also explains the cold-posterior effect in Bayesian neural networks (Wenzel et al., 2020; Aitchison, 2021), revealing a profound and previously unsuspected connection. In conclusion, we showed that low-density separation SSL objectives can be understood as a lowerbound on a log-probability which arises from a principled generative model of data curation. This gives a theoretical understanding of recent results showing that SSL is more effective on curated data, which we confirmed by developing a Bayesian SSL model applied to toy data, using GZ2, which allowed us to consider a completely uncurated dataset.
1. What is the focus of the paper regarding semi-supervised learning and curation processes? 2. What are the strengths and weaknesses of the proposed theoretical results connecting popular objectives and the generative model? 3. How does the reviewer assess the insights provided by the experimental results? 4. Are there any concerns or questions regarding the exposition and clarity of the paper's content? 5. What are some minor issues mentioned by the reviewer, such as Equation 5's redundancy?
Summary Of The Paper Review
Summary Of The Paper The paper analyses semi-supervised methods taking into account the curation process employed in the creation of popular datasets, like ImageNet. The authors model this process via a generative model which they use to show two common objectives in semi-supervised learning, namely entropy minimisation and pseudo-labelling, are actually lower-bounds on the log-likelihood of the data under this generative model. The paper also shows experimentally that unlabelled data improves the accuracy of classifiers when the dataset has been curated but actually hurts performance when uncurated datasets are used. Review Strengths The theoretical results seem sound and connect popular semi-supervised objectives to the proposed generative model for curated datasets. Using a generative model of the curation process for popular datasets in the context of semi-supervised learning is insightful, even though the model itself has already been proposed in previous work. Weaknesses The paper shows entropy minimisation and pseudo-labelling form a lower bound on the log-likelihood of the proposed generative model. However, optimising lower bounds is only useful when the bounds are tight, which does not seem to be the case here. The authors could expand their discussion on the insights these bounds provide. The theoretical results motivate, to some extent, the use of semi-supervised learning when curated data is available but do not tell us much about uncurated data. We could argue that uncurated data does not match the proposed generative model, but then that is just a case of model misspecification, which is a rather well-known problem and not directly connected to the theoretical results of the paper. In that light, I do not find the experimental results very insightful. There is no discussion about other works that tried to explain the occasional poor performance of semi-supervised methods. I know of at least one [2] that provides theoretical results showing that unlabelled data (even when matching the distribution of labelled data) might hurt performance if the model is misspecified. They cover classifiers based on generative models, but there might still be parallels worth discussing. Even though the text is well written, I do not find the exposition very clear. For instance, it was not clear if θ refers to the parameters of the 'true' data-generating process or the parameters of the classifier. Moreover, when I checked [1] for further background information, I found the exact same description, almost word for word, of the generative model for curated datasets. I find this problematic, as at best this is a bad case of text recycling. Questions In section 4.1., it is not clear how the annotators are simulated. Could the authors give more details on how the data was generated? Were the annotations just sampled from the categorical distribution defined by the 'true' neural network? Still in section 4.1., how was the test data constructed? This is not entirely clear in the text. Could there not be other factors influencing the poor performance when using uncurated data? For instance, uncurated data could be simply harder to model, with more ambiguous and noisier samples. Did the authors control for this somehow? Minor Issues Equation 5 seems a bit redundant. The result follows directly from the assumptions in the generative model, and this equation does not add much to the discussion. [1] Aitchison, Laurence. "A statistical theory of cold posteriors in deep neural networks."
International Conference on Learning Representations. 2021. [2] Cozman, Fabio Gagliardi, Ira Cohen, and Marcelo Cesar Cirelo. "Semi-supervised learning of mixture models." ICML. Vol. 4. 2003.
ICLR
Summary Of The Paper Review
Summary Of The Paper This paper uses a generative model for "curated" labeled datasets that was initially developed by Aitchison 2020 for another seemingly unrelated purpose (explaining the "cold posterior" effect). The assumed task is classification, where there is a set of mutually exclusive possible class we wish to assign to each image. To obtain labels for a training set given specific images (or other example features), the generative model assumes that each example is assigned a label from S independent, identically-distributed annotators. Only if all annotators agree is consensus reached and the class label is provided, otherwise the image is considered "noconsensus" and is usually not included. It is suggested that many common SSL datasets (e.g. CIFAR-10) use such a consensus curation process, and that even unlabeled sets are subjected to consensus curation. Using this model, they provide a principled explanation for the empirical success of several well-known semi-supervised learning (SSL) objectives -- entropy minimization, pseudo-label, and FixMatch -- applicable to discriminative deep learning. The key idea is that each objective can be viewed as a lower bound of the log likelihood of the unlabeled set under the proposed model for curation, assuming that the unlabeled set contains only "consensus" images. In Sec. 4.1, the paper presents a Bayesian SSL analysis on a toy dataset. The key result is that if unlabeled data is generated without multi-annotator curation, then the unlabeled data does not improve classifier accuracy when included in training using SSL. However, if only consensus examples are included in the unlabeled set, then reasonable improvements in accuracy might be expected from SSL vs. labeled-set-only learning. In Sec. 4.2, the paper analyzes astronomy images with 9 class labels, prepared using the Galaxy Zoo dataset (to my knowledge not a common dataset for SSL). The existence of multiple annotators for this dataset allows creating both curated and uncurated versions. Again, they find that for curated datasets (where many annotators agree), both test-log-likelihood and test accuracy improve from the inclusion of unlabeled examples. However, for uncurated datasets, including too many unlabeled examples "dramatically worsened" test-set performance, though small unlabeled sets help. Review Strengths Focus on the curated aspect of common SSL benchmark datasets Goal of relating objectives to a proper generative model is valuable Insightful use of the same generative model to offer support for 3 different common SSL objectives arising from last 20 years of SSL research Nice toy data experiments in Figure 5, showing impact of more unlabeled data on SSL methods for both curated and uncurated unlabeled sets Weaknesses Missing previous known connections between Pseudolabel and Entropy regularization See W1 below. Results on Galaxy Zoo (Fig. 6) could use more explanation: why does performance benefit from a few (not too many) unlabeled examples even in noconsensus case? See Q3 below. W1: Missing previous known connections between Pseudolabel and Entropy regularization In the presented background in Sec. 2, entropy regularization in Eq 3 and pseudolabel in Eq 4 are presented as seemingly separate alternatives. I think the authors could do a better job explaining that pseudolabel was originally motivated by entropy minimization -- the original PseudoLabel paper dedicates all of Sec. 3.2 to explaining that "... our method is equivalent to Entropy Regularization." 
Not a dealbreaker, but making this connection could help the audience understand why both methods might be lower bounds to the same log likelihood. Questions for Rebuttal Q1: What insight about which SSL objectives to prefer is offered by your lower-bound arguments? The paper offers justification of several SSL objectives -- pseudolabel, entropy minimization, and fixmatch -- by deriving each as a lower bound of the log likelihood of the label-consensus model. However, there is little insight here about when each bound might be expected to be "the best"... does FixMatch always dominate? This may seem a bit out of scope, but I think offering some analysis here would help practitioners in thinking about which methods to apply in which circumstances. Q2: Questions about Fig. 6 What do "exact-" and "pseudo-" prefixes mean here? Can you adjust the caption in revision to clarify? Q3: Results on Galaxy Zoo (Fig. 6) could use more explanation In Fig. 6, it seems like both log likelihood and accuracy improve slightly with modest unlabeled data in the uncurated case, but then decline with large unlabeled data. The authors acknowledge this in the paper by saying "though the inclusion of a small number of unlabelled points gave very small performance improvements (Fig. 6CD)". Is there any insight about why the curves look U-shaped here, rather than showing a strict decline as the theory might suggest? Perhaps the data augmentation related to FixMatch used here is driving this? Would the same trend be seen if we just used entropy minimization? Minor Presentation Comments No need to mention these in rebuttal, but I hope you consider them to improve the paper. Comments on Figure 2 I felt Fig. 2 could communicate more than it did. Perhaps in the right-most panel that shows your consensus model, you could somehow visually emphasize that for unlabeled examples, you are assuming that "consensus" was reached, and thus that the distribution p(θ|X) does not reduce to the prior marginal p(θ) under the assumed model? Superscript Notation could be improved This is somewhat minor and perhaps a personal nitpick, but I dislike the notation p_y^S(X) in Eq. 2. I'd prefer to see it like this: (p_y(X))^S. This makes it clear that the quantity is raised to the S-th power. The current notation is a bit less clear to me on that front. Lots of use of Y_s without defining s In many cases, the paper writes P(Y_s = y | X, θ) without defining a value for the index s (as an example, see Eq. 2). I guess I found this a little confusing, since elsewhere s requires a concrete value to be well understood. I'd rather there was some explicit comment like "where s can be any value in 1 to S" or instead some other notation. Comments on Figure 3 Why in Figure 3 are points labeled "bus" and "train"? Aren't these just toy examples? Not sure that both panels (3A and 3B) are needed in this figure. Could use just one, and verbally explain how unlabeled data would be selected. Or, at the very least, the second panel could remove the noconsensus points, to help drive home visually the separation that occurs in the unlabeled set. Comments on Sec. 4.2 Current text says: "data-curation based theory predicts that SSL should be much more effective on curated than uncurated data". I'd clarify that the theory only applies to SSL methods using the low-density separation principle. Other SSL principles (e.g. those based on generative models) aren't applicable.
ICLR
Title Semi-supervised learning objectives as log-likelihoods in a generative model of data curation Abstract We currently do not have an understanding of semi-supervised learning (SSL) objectives such as pseudo-labelling and entropy minimization as log-likelihoods, which precludes the development of e.g. Bayesian SSL. Here, we note that benchmark image datasets such as CIFAR-10 are carefully curated, and we formulate SSL objectives as a log-likelihood in a generative model of data curation that was initially developed to explain the cold-posterior effect (Aitchison 2020). SSL methods, from entropy minimization and pseudo-labelling, to state-of-the-art techniques similar to FixMatch can be understood as lower-bounds on our principled log-likelihood. We are thus able to give a proof-of-principle for Bayesian SSL on toy data. Finally, our theory suggests that SSL is effective in part due to the statistical patterns induced by data curation. This provides an explanation of past results which show SSL performs better on clean datasets without any “out of distribution” examples. Confirming these results we find that SSL gave much larger performance improvements on curated than on uncurated data, using matched curated and uncurated datasets based on Galaxy Zoo 2.1 1 INTRODUCTION To build high-performing deep learning models for industrial and medical applications, it is necessary to train on large human-labelled datasets. For instance, Imagenet (Deng et al., 2009), a classic benchmark dataset for object recognition, contains over 1 million labelled examples. Unfortunately, human labelling is often prohibitively expensive. In contrast obtaining unlabelled data is usually very straightforward. For instance, unlabelled image data can be obtained in almost unlimited volumes from the internet. Semi-supervised learning (SSL) attempts to leverage this unlabelled data to reduce the required number of human labels (Seeger, 2000; Zhu, 2005; Chapelle et al., 2006; Zhu & Goldberg, 2009; Van Engelen & Hoos, 2020). One family of SSL methods — those based on low-density separation — assume that decision boundaries lie in regions of low probability density, far from all labelled and unlabelled points. To achieve this, pre deep learning (DL) low-density separation SSL methods such as entropy minimization and pseudo-labelling (Grandvalet & Bengio, 2005; Lee, 2013) use objectives that repel decision boundaries away from unlabelled points by encouraging the network to make more certain predictions on those points. Entropy minimization (as the name suggests) minimizes the predictive entropy, whereas pseudo-labelling treats the currently most-probable label as a pseudo-label, and minimizes the cross entropy to that pseudo-label. More modern work uses the notion of consistency regularisation, which augments the unlabelled data (e.g. using translations and rotations), then encourages the neural network to produce similar outputs for different augmentations of the same underlying image (Sajjadi et al., 2016; Xie et al., 2019; Berthelot et al., 2019b; Sohn et al., 2020). 
Further developments of this line of work have resulted in many variants/combinations of these algorithms, from directly encouraging the smoothness of the classifier outputs around unlabelled datapoints (Miyato et al., 2018) to the “FixMatch” family of algorithms (Berthelot et al., 2019b;a; Sohn et al., 2020), which combine pseudo-labelling and consistency regularisation by augmenting each image twice, and using one of the augmented images to provide a pseudo-label for the other augmentation. 1Our code: https://anonymous.4open.science/r/GZ_SSL-B6CC; MIT Licensed However, some of the biggest successes of deep learning, from supervised learning to many generative models, have been built on a principled statistical framework as maximum (marginal) likelihood inference (e.g. the cross-entropy objective in supervised learning can be understood as the log-likelihood for a Categorical-softmax model of the class-label MacKay, 2003). Low-density separation SSL methods such as pseudo-labelling and entropy minimization are designed primarily to encourage the class-boundary to lie in low-density regions. Therefore they cannot be understood as log-likelihoods and cannot be combined with principled statistical methods such as Bayesian inference. Here, we give a formal account of SSL methods based on low-density separation (Chapelle et al., 2006) as lower bounds on a principled log-likelihood. In particular, we consider pseudo-labelling (Lee, 2013), entropy minimization (Grandvalet & Bengio, 2005), and modern methods similar to FixMatch (Sohn et al., 2020). This log-likelihood arises from a generative model of data curation that was initially developed to explain the cold-posterior effect (Aitchison, 2021). Critically, this approach gives an explanation for previous findings that SSL is most effective when unlabelled data is obtained by throwing away labels from the carefully curated training set, and is less effective when unlabelled data is taken from uncurated images, especially those that do not depict one of the classes of interest (Cozman et al., 2003; Oliver et al., 2018; Chen et al., 2020; Guo et al., 2020). We confirmed the importance of data curation for SSL on toy data generated from a known model and on real data from Galaxy Zoo 2 (Willett et al., 2013). 2 BACKGROUND Our work brings together many disparate areas. Here, we give an introduction to a generative model of data curation (Aitchison, 2021) initially developed to explain the cold posterior effect (Wenzel et al., 2020), pseudo-labelling and entropy minimization (Grandvalet & Bengio, 2005; Lee, 2013), and the treatment of unlabelled points in the standard supervised learning setup. 2.1 A GENERATIVE MODEL OF DATA CURATION To develop a model of data curation, remember that image datasets including CIFAR-10 and ImageNet are curated to ensure they only contain images whose class-labels are unambiguous. For instance, in CIFAR-10, annotators were instructed that “It’s worse to include one that shouldn’t be included than to exclude one.”, and Krizhevsky (2009) “personally verified every label submitted by the annotators”. In creating ImageNet, Deng et al. (2009) made sure that a number of Amazon Mechanical Turk annotators agreed upon the class before including an image in the dataset. Thus, these datasets have two odd properties. First, consensus labels exist only for a subset of images, e.g. for a white-noise image, consensus cannot be reached and the image cannot be labelled. 
Second, inclusion of an image in a dataset like CIFAR-10 is informative in and of itself, as it indicates that the image shows an unambiguous example of one of the ten classes. To understand these odd properties of curated datasets, consider a simplified generative model of consensus-formation: draw a random image, X , from the distribution over images, P (X), and ask S human annotators, indexed s, to give a label, {Ys}Ss=1 (e.g. using Mechanical Turk). Importantly, every annotator is forced to label every image and if the image is ambiguous they should give a random label. If all the annotators agree, Y1=Y2= · · · =YS , they have consensus and the datapoint is included in the dataset. However, in the case of any disagreement, consensus is not reached and the datapoint is excluded (Fig. 1), Concretely, the final label, Y is Y1 (which is the same as all the other labels) if consensus was reached and None otherwise (Fig. 2C), Y |{Ys}Ss=1 = { Y1 if Y1=Y2= · · · =YS None otherwise (1) Taking Y to be the label set, we have Ys ∈ Y , and the final label, Y , could be any of the underlying labels in Y , or None if consensus is not reached, so Y ∈ Y ∪ {None}. When consensus was reached, the likelihood is, P (Y =y|X, θ) = P ( {Ys=y}Ss=1|X, θ ) = ∏S s=1 P (Ys=y|X, θ) = P (Ys=y|X, θ) S = (py(X)) S (2) where we have assumed annotators are IID, and py(X) = P (Ys=y|X, θ) is the single-annotator probability. From here, it is possible to see how this model might be taken to give an account of tempering, as we have taken the underlying single-annotator likelihood, py(X) to the power S (for further details see Aitchison, 2021). 2.2 LOW-DENSITY SEPARATION SEMI-SUPERVISED LEARNING OBJECTIVES The intuition behind low-density separation objectives for semi-supervised learning is that decision boundaries should be in low-density regions away from both labelled and unlabelled data. As such, it is sensible to “repel” decision boundaries away from labelled and unlabelled datapoints and this can be achieved by making the classifier as certain as possible on those points. This happens automatically for labelled points as the standard supervised objective encourages the classifier to be as certain as possible about the true class label. But for unlabelled points we need a new objective that encourages certainty, and we focus on two approaches. First, and perhaps most direct is entropy minimization (Grandvalet & Bengio, 2005) Lentropy(X) = ∑ y∈Y py(X) log py(X) (3) where, following the typical probabilistic approach, we write the negative entropy as an objective to be maximized. Alternatively, we could use pseudo-labelling, which takes the current classification, y∗, to be the true label, and maximizes the log-probability of that label (Lee, 2013), Lpseudo(X) = log py∗(X) y∗ = argmax y∈Y log py(X). (4) Lee (2013) regarded pseudo-labelling as closely related to entropy miminization as the optimal value of both objectives is reached when all the probability mass is assigned to one class. However, they are not formulated as a principled log-likelihood, which gives rise to at least three problems. First, these methods cannot be combined with other principled statistical methods such as Bayesian inference. Second, it is unclear how to combine these objectives with standard supervised objectives, except by taking a weighted sum and doing hyperparameter optimization over the weight. Third, these objectives risk reinforcing any initial poor classifications and it is unclear whether this is desirable. 
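For concreteness, here is a minimal sketch of the two unlabelled objectives in Eqs. (3)-(4), written as quantities to be maximized; `probs` is assumed to hold the classifier's predictive distribution p_y(X) over classes, and all names are illustrative rather than taken from any reference implementation.

```python
import torch

def entropy_objective(probs, eps=1e-12):
    """Negative entropy (Eq. 3): sum_y p_y(X) log p_y(X); larger means more confident."""
    return (probs * (probs + eps).log()).sum(dim=-1)

def pseudo_label_objective(probs, eps=1e-12):
    """Pseudo-labelling (Eq. 4): log p_{y*}(X) with y* the current argmax class."""
    return (probs.max(dim=-1).values + eps).log()

# Both objectives prefer confident predictions on unlabelled points.
confident = torch.tensor([[0.97, 0.01, 0.01, 0.01]])
uniform = torch.tensor([[0.25, 0.25, 0.25, 0.25]])
print(entropy_objective(confident), entropy_objective(uniform))            # ~ -0.17 vs ~ -1.39
print(pseudo_label_objective(confident), pseudo_label_objective(uniform))  # ~ -0.03 vs ~ -1.39
```

Maximizing either quantity pushes decision boundaries away from unlabelled points, matching the intuition above.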
2.3 IN STANDARD SUPERVISED LEARNING, UNLABELLED POINTS SHOULD BE UNINFORMATIVE It is important to note that under the standard supervised-learning generative model (Fig. 2A), unlabelled points should not give any information about the weights. Omitting the label, Ysup, we obtain the graphical model in Fig. 2B. This model emphasises that the images, X , and the model parameters, θ, are marginally independent, so we cannot obtain any information about θ from X alone (Fig. 2B). Formally, the posterior over θ conditioned on X is equal to the prior, P (θ|X) = P (θ,X) P (X) = ∑ y∈Y P (θ,X, Ysup=y) P (X) (5) = P (θ) P (X) ∑ y∈Y P (Ysup=y|θ,X) P (X) = P (θ) . as 1 = ∑ y∈Y P (Ysup=y|θ,X). To confirm this result is intuitively sensible, note that are many situations where encouraging the decision boundary to lie in low density regions would be very detrimental to performance. Consider a classifier with two input features: x0 and x1 (Fig. 4A). The class boundary lies in the high-density region crossing both clusters, so to obtain a reasonable result, the classifier should ignore the low-density region lying between the clusters. However, strong low-density separation SSL terms in the objective may align the cluster boundaries with the class boundaries, leading the classifier to wrongly believe that one cluster is entirely one class and the other cluster is entirely the other class. In contrast, supervised learning without SSL will ignore clustering and obtain a reasonable answer close to the grey dashed line. Importantly, this is just an illustrative example to demonstrate that without further assumptions, the standard supervised approach of ignoring unlabelled data is sensible; semi-supervised learning without loss of performance in such settings has been studied and is known as Safe SSL (Li & Zhou, 2014; Krijthe & Loog, 2014; Kawakita & Takeuchi, 2014; Loog, 2015; Krijthe & Loog, 2016). 3 THEORY SSL methods are usually applied to benchmark datasets such as CIFAR-10 or ImageNet. These datasets were first carefully curated during the labelling process: (Fig. 3A), implying that ambiguous images close to the decision boundary were excluded. Critically, unlabelled points for these benchmark datasets are obtained by taking labelled points (which have reached consensus) and throwing away their labels (Fig. 3B). The likelihood for consensus (Y 6=None) is P (Y 6=None|X, θ) = ∑ y∈Y (py(X)) S . (6) This probability is close to 1 (for S > 1) if the underlying distribution, (py(X)) S puts most of its mass onto one class, and the probability is smaller if the mass is spread out over classes. As such, the likelihood “repels” decision boundaries away from unlabelled points, which is the common intuition behind low-density separation SSL methods, and which should be beneficial if class boundaries indeed lie in regions of low probability density away from both labelled and unlabelled points. If noconsensus images are observed (Fig. 2C), we can include a likelihood term for those images, P (Y =None|X, θ) = 1− P (Y 6= None|X, θ) = 1− ∑ y∈Y (py(X)) S . (7) If noconsensus images are not observed, we could in principle integrate over the underlying distribution over images, P (X=x). However, we do not even have samples from the underlying distributions over images (and if we did, we would have the noconsensus images so we could use Eq. 7). As such this term is usually omitted (e.g. 
Aitchison, 2021), but the use of out-of-distribution (OOD) datasets as surrogate noconsensus points is an important direction for future work. 3.1 ENTROPY MINIMIZATION AND PSEUDO-LABELS ARE LOWER BOUNDS ON OUR PRINCIPLED LOG-LIKELIHOOD To prove that entropy minimization forms a lower-bound on our log-likelihood (Eq. 6), we begin by writing the log-likelihood of consensus in terms of an expectation over labels, y, log P (Y 6=None|X, θ) = log ∑ y∈Y py(X) (py(X)) S−1 = logEpy(X) [ (py(X)) S−1 ] . (8) Applying Jensen’s inequality, the negative entropy gives a lower-bound on our log-likelihood, log P (Y 6=None|X, θ) ≥ Epy(X) [ log (py(X)) S−1 ] = (S − 1) ∑ y∈Y py(X) log py(X) = (S − 1)Lentropy(X) (9) This bound is tight for a uniform predictive distribution, log P (Y 6=None|X, θ) = log ∑ y∈Y (py(X)) S = logS ( 1 S )S = (S − 1) logS (10) (S − 1)Lentropy(X) = −(S − 1) log ∑ y∈Ypy(X) log py(X) = (S − 1) logS. (11) Pseudo-labelling forms an alternative lower bound on the log-likelihood which is obtained by noting that all (py(X)) S are positive, so selecting any subset of terms in the sum gives a lower bound, log P (Y 6=None|X, θ) = log ∑ y∈Y (py(X)) S ≥ log (py∗(X))S = S log py∗(X) = SLpseudo(X). (12) The inequality holds if we choose y∗ to be any class, but will be tightest if we choose the highest probability class. This bound is tight for a predictive distribution that puts all its mass on y∗, so py∗(X) = 1 and py 6=y∗ = 0 log P (Y 6=None|X, θ) = log ∑ y∈Y (py(X)) S = log (py∗(X)) S = log 1 = 0 (13) SLpseudo(X) = S log p∗y(X) = S log 1 = 0. (14) As such, entropy minimization and pseudo-labelling optimize different lower-bounds on our principled log-likelihood, log P (Y 6=None|X, θ), which gives a potential explanation for the effectiveness of pseudo-labelling and entropy minimization. Additionally, low-density separation SSL objectives encourages class-labels to be more certain. We can therefore expect pseudo-labelling to be the more relevant bound, as that bound is tight when the predictive distribution puts all its mass onto one class. In contrast, the entropy maximization bound is tight when the predictive distribution is uniform, which is discouraged by all low-density separation SSL objectives. This provides a potential explanation for the use of psuedo-labelling rather than entropy regularisation in modern SSL approaches such as (Sohn et al., 2020). 3.2 DATA AUGMENTATION PRIORS AND FIXMATCH FAMILY METHODS FixMatch family methods combine data augmentation and pseudo-labelling. To understand FixMatch as a bound on a principled log-likelihood, we therefore need a principled account of data augmentation as a likelihood. Inspired by Wenzel et al. (2020) (their Appendix K), we consider a distribution, P (X ′|X), over augmented images, X ′, given the underlying unaugmented image, X . We choose the single-annotator predictive distribution as the average over predictive distributions for many different augmented images, P (Ys=y|X, θ) = E [py(X ′)|X] (15) where py(X ′) is the predictive probabilities resulting from applying the neural network to the augmented image, and remember s ∈ {1, . . . , S} indexes the annotator. This is a sensible prior because we expect the neural network to be invariant under data-augmentation, and if the predictions are approximately invariant, then averaging the predictive distributions has little impact (Fig. 4B left). However, if the predictions do vary dramatically with different data augmentations then we should not trust the network’s classifications (i.e. 
we should have an uncertain predictive distribution), and averaging over very different predictive distributions for different augmentations indeed gives rise to broader, more uncertain predictions (Fig. 4B right). To obtain a tractable objective in the supervised setting, we use a multi-sample version of Jensen’s inequality, with K augmented images denoted X ′k, log P (Ys=y|X, θ) ≥ E [ log 1K ∑ kpy(X ′ k) ∣∣X] . (16) Combining this single-annotator probability with our generative model of curation, we obtain, log P (Y =y|X, θ) = S log P (Ys=y|X, θ) = S logE [py(X ′)|X] ≥ S E [ log 1K ∑ kpy(X ′ k) ∣∣X] , (17) The resulting objective for unlabelled points is, log P (Y 6=None|X, θ) = log ∑ y∈Y P (Y =y|X, θ) = log ∑ y∈YE [py(X ′)|X] S ≈ log ∑ y∈Y ( 1 K ∑ kpy(X ′ k) )S , (18) where we approximate the expectation with K different samples of X ′, denoted X ′k. Unfortunately, this approach does not immediately form a bound on the log-likelihood due to the convex nonlinearity in taking the power of S. Nonetheless, one key problem with approximating machine learning losses is that the optimizer learns to exploit approximation errors to find a pathological solution that makes the objective unboundedly large. We appear to be safe from that pathology here, as we are simply forming predictions by averaging over K augmentations of the underlying image. Nonetheless, to form a lower bound, we can follow FixMatch family algorithms by pseudo-labelling, i.e. by taking only one term in the sum for class y∗. FixMatch chooses y∗ by using the highest-probability class for a weakly-augmented image. An alternative approach is to choose the y∗ giving the tightest bound, i.e. argmaxy 1 K ∑ kpy(X ′ k). In either case, log P (Y 6=None|X, θ) ≥ logE [py∗(X ′)|X] S ≥ S E [ log 1K ∑ kpy∗(X ′ k) ∣∣X] , (19) If K = 1 and y∗ is chosen using a separate “weak” augmentation, then this is exactly equal to the FixMatch objective for unlabelled points. Note that both of these objectives (Eq. 18 and 19) promote reduced predictive uncertainty. Importantly, this does not just increase confidence in the single-augmentation predictive distributions, py(X ′ k), but also increases alignment between the predictive distributions for different augmentations (Fig. 4B). In particular, if the single-augmentation predictives are all highly confident, but place that high-confidence on different classes, then the multi-augmentation predictive formed by averaging will have low-confidence (Fig. 4B right). The only way for the multi-augmentation predictive to have high confidence is if the underlying single-augmentation predictive distributions have high confidence in the same class (Fig. 4B left), which encourages the underlying network to become more invariant. This makes sense: if data-augmentation changes the class predicted by the neural network, then any predictions should be low confidence. And it implies that combining principled data augmentation with a generative model of data curation automatically gives rise to an objective encouraging invariance. 4 RESULTS We begin by giving a proof-of-principle for Bayesian SSL on a toy dataset generated from a known model. Next, we tested our theoretical results (rather than trying to achieve SOTA performance) on real-world datasets. In particular, our theory gives one explanation for why SSL is typically more effective when unlabelled data is taken from the original, curated training set. 
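As a quick numerical illustration of Sec. 3.1 (ours, not the paper's), the consensus log-likelihood of Eq. (6) can be compared against its entropy-minimization and pseudo-labelling lower bounds (Eqs. 9 and 12) for an arbitrary single-annotator predictive distribution:

```python
import torch

S = 5
p = torch.tensor([0.6, 0.3, 0.1])              # single-annotator predictive p_y(X)

log_consensus = (p ** S).sum().log()           # Eq. (6): log sum_y p_y(X)^S   (~ -2.52)
entropy_bound = (S - 1) * (p * p.log()).sum()  # Eq. (9)                       (~ -3.59)
pseudo_bound = S * p.max().log()               # Eq. (12)                      (~ -2.55)

assert entropy_bound <= log_consensus and pseudo_bound <= log_consensus
```

Both bounds hold for any distribution, and the pseudo-label bound is the tighter of the two here, consistent with the argument that it is the more relevant bound for confident predictions.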
To confirm these results, we used Galaxy Zoo 2 as this was a real-world dataset which allowed us to generate matched curated and uncurated datasets. 4.1 BAYESIAN SSL ON A GENERATED DATASET Our formulation of SSL as a likelihood implies that it should be possible to take entirely novel approaches, such as using low-density separation SSL in a Bayesian neural network (BNN). We considered a toy dataset generated from a “true” neural network model with one hidden layer and 30 hidden units, 5 dimensional inputs and 2 output classes. We generated inputs IID from a Gaussian, then passed them through the “true” neural network, then sampled multiple categorical class-labels corresponding to different annotators. If all the simulated annotators agreed, consensus was reached and if any simulated annotators disagreed, consensus was not reached. We used 100 labelled datapoints, though not all of them will have reached consensus, and we used up to 1600 unlabelled points, though again not all of them will have reached consensus. Note that as the consensus/noconsensus status of a point arises from the generative model, we cannot independently specify the number of consensus/noconsensus points. We used Eq. (2) as the likelihood for labelled points, Eq. (6) as the likelihood for unlabelled points and Eq. (7) as the likelihood for noconsensus points. We sampled (and trained networks on) 500 datasets in parallel. We trained using Langevin dynamics with all data simultaneously (no minibatching) with no momentum and no rejection. For a generative model with S = 1, consensus is always reached and the problem is equivalent to standard supervised learning. As such, we found no benefits from including unlabelled points for S = 1. In contrast, for any setting of S > 1 we found that increasing the number of unlabelled points improved the test log-likelihood (Fig. 5A–B) and the test accuracy (Fig. 5C–D). 4.2 GALAXY ZOO 2 Our data curation based theory predicts that low-density separation based SSL should be much more effective on curated than uncurated data. To test this prediction on real-world data, we turned to Galaxy Zoo 22 (GZ2) (Willett et al., 2013) which uses images from the Sloan Digital Sky Survey. This dataset is particularly useful for us as it has received only very minimal filtering based on criteria such as object brightness and spatial extent. We defined 9 labels by truncating the complex decision tree followed by the annotators (for further details see Aitchison, 2021). Further, as each GZ2 image has received ∼ 50 labels, we can define a consensus coefficient by taking the fraction of annotators that agreed upon the highest probability class. We can then define a curated dataset by taking the images with consensus coefficient above some threshold within each class. Note that we needed to select images on a per-class basis, because annotators tend to be more confident on some classes than others, so taking the highest consensus coefficients overall would dramatically change the class balance. In particular, we used the top 8.2% of images, which gave a full curated dataset of just over 20,000 images. Of those, we randomly selected 2000 as labelled examples, 10000 as test examples, and 0 – 6000 as unlabelled examples. The images were preprocessed by center-cropping to 212× 212 and then scaled to 32× 32. We applied a FixMatch-inspired semi-supervised learning algorithm, with a standard supervised objective, with unlabelled objective given by Eq. (18) with K = 2. 
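A minimal sketch of that unlabelled objective (Eq. 18) follows; `model` is assumed to map a batch of images to class probabilities (e.g. softmax outputs) and `augment` to draw one random augmentation of the batch, so all names here are illustrative rather than the paper's code.

```python
import torch

def unlabelled_objective(model, augment, x, S, K=2):
    """Per-example log P(Y != None | X) ~ log sum_y (mean_k p_y(x'_k))^S (Eq. 18)."""
    probs = torch.stack([model(augment(x)) for _ in range(K)], dim=0)  # (K, batch, classes)
    avg = probs.mean(dim=0)                                            # average over K augmentations
    return (avg ** S).sum(dim=-1).log()
```

Because this term and the labelled-point term (Eq. 2) are both log-likelihoods under the same model, they can simply be summed, with no separate weighting hyperparameter.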
Data augmentation was given by vertical and horizontal flips, rotations from −180° to 180°, translations by up to 40% on both axes and scaling from 20% to 180%. Note that as we were trying to mirror the standard SSL setup, we did not include noconsensus points in the objective. We trained a ResNet18 with our maximum likelihood objective using SGD with a batch size of 500, a learning rate of 0.01 and 1500 epochs. We used an internal cluster of nVidia 1080 and 2080 GPUs, and the experiments took roughly 300 GPU hours. We found that the test-log-likelihood for curated data improved slightly as more unlabelled points were included, whereas the test-log-likelihood for uncurated data dramatically declined as unlabelled points were added (Fig. 6A–B). We saw strong improvements in test accuracy with the number of unlabelled points for curated datasets (Fig. 6C–D) (Galaxy Zoo data: https://data.galaxyzoo.org; www.sdss.org/collaboration/image-use-policy/). Note that in Fig. 6C the error rate for curated datasets is already very small, so to see any effect we needed to plot the test error, normalized to the initial test error (Fig. 6D). For uncurated data, the inclusion of large numbers of unlabelled points dramatically worsened performance, though the inclusion of a small number of unlabelled points gave very small performance improvements (Fig. 6C–D). Thus, this experiment is consistent with the idea that the effectiveness of SSL arises at least in part from curation of the underlying dataset. 5 RELATED WORK There are at least three main approaches to semi-supervised learning (Seeger, 2000; Zhu, 2005; Chapelle et al., 2006; Zhu & Goldberg, 2009). First there is low-density separation, where we assume that the class boundary lies in a region of low probability density away from both labelled and unlabelled points. This approach dates back at least to transductive support vector machines (SVMs), where the model is to be tested on a finite number of known test locations (Vapnik, 1998; Chapelle et al., 1999). Those known test locations are treated as unlabelled points, and we find the decision boundary that perfectly classifies the limited number of labelled points, while at the same time being as far as possible from labelled and unlabelled data. Alternative approaches include pseudo-labelling and entropy minimization (Grandvalet & Bengio, 2005; Lee, 2013). Second, there are graph-based methods such as Zhu & Ghahramani (2002), which are very different from the methods considered here. Third, there are approaches that use unlabelled points to build a generative model of the inputs and leverage that model to improve classification (e.g. Kingma et al., 2014; Odena, 2016; Gordon & Hernández-Lobato, 2017). This approach was originally explored in a considerable body of classical work (e.g. McLachlan, 1975; Castelli & Cover, 1995; Druck et al., 2007); for a review, see Seeger (2000) and references therein. These approaches are fundamentally different from the SSL approaches considered here, as they require a generative model of inputs, while low-density separation methods do not. Generative modelling can be problematic because training a generative model can be more involved than training a discriminative model, and because even when the model can produce excellent samples, the high-level representation may be “entangled” (Higgins et al., 2017), in which case it may not offer benefits for classification.
6 DISCUSSION Our theory provides a theoretical understanding of past results showing that SSL is more effective when unlabelled data is drawn from the original, curated training set (Cozman et al., 2003; Oliver et al., 2018; Chen et al., 2020; Guo et al., 2020). In the extreme, our theory might be taken to imply that if data has not been curated, then SSL cannot work, and therefore that low-density separation SSL methods will not be effective in messy, uncurated real-world datasets. However, this is not the complete picture. Low-density separation SSL methods, including our log-likelihood, fundamentally exploit class boundaries lying in low-density regions. As such, low-density separation could equally come from the real underlying data or could be artificially induced by data curation (Fig. 3). None of these methods are able to distinguish between these different underlying sources of low-density separation, and as such any of them may work on uncurated data where the underlying distribution displays low-density separation. However, the possibility for curation to artificially induce low-density separation does imply that we should be cautious about overinterpreting spectacular results obtained on very carefully curated benchmark datasets such as CIFAR-10. Surprisingly, the generative model of data curation used here also explains the cold-posterior effect in Bayesian neural networks (Wenzel et al., 2020; Aitchison, 2021), revealing a profound and previously unsuspected connection. In conclusion, we showed that low-density separation SSL objectives can be understood as a lower bound on a log-probability which arises from a principled generative model of data curation. This gives a theoretical understanding of recent results showing that SSL is more effective on curated data, which we confirmed by developing a Bayesian SSL model applied to toy data, and by experiments on GZ2, which allowed us to consider a completely uncurated dataset.
1. What is the focus of the paper in terms of semi-supervised learning? 2. What is the main contribution of the paper regarding data curation? 3. Are there any concerns or suggestions regarding the experiments presented in the paper? 4. Do you have any questions regarding the paper's analysis of the relationship between data curation and semi-supervised learning? 5. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper shows that low-density separation semi-supervised learning (SSL) objectives can be understood as a lower-bound on a log-probability that arises from a principled generative model of data curation. This gives a theoretical understanding of recent results showing that SSL is more effective when unlabelled data is obtained by throwing away labels from the carefully curated training set, and is less effective when unlabeled data is taken from uncurated images. Experiments on toy data generated from a known model and on real data from Galaxy Zoo confirm the importance of data curation for SSL. Review Minor comments (language mistakes, etc.): Figure 1 caption: The first "annotators" should have its first letter capitalized. Section 2.1: "ensure sure" should be "ensure". Section 4.1: "Fig. 5AB" and "Fig. 5CD" should be written as "Fig. 5A-B" and "Fig. 5C-D" (i.e., with a hyphen inserted between).
ICLR
Title Semi-supervised learning objectives as log-likelihoods in a generative model of data curation Abstract We currently do not have an understanding of semi-supervised learning (SSL) objectives such as pseudo-labelling and entropy minimization as log-likelihoods, which precludes the development of e.g. Bayesian SSL. Here, we note that benchmark image datasets such as CIFAR-10 are carefully curated, and we formulate SSL objectives as a log-likelihood in a generative model of data curation that was initially developed to explain the cold-posterior effect (Aitchison 2020). SSL methods, from entropy minimization and pseudo-labelling, to state-of-the-art techniques similar to FixMatch can be understood as lower-bounds on our principled log-likelihood. We are thus able to give a proof-of-principle for Bayesian SSL on toy data. Finally, our theory suggests that SSL is effective in part due to the statistical patterns induced by data curation. This provides an explanation of past results which show SSL performs better on clean datasets without any “out of distribution” examples. Confirming these results we find that SSL gave much larger performance improvements on curated than on uncurated data, using matched curated and uncurated datasets based on Galaxy Zoo 2.1 1 INTRODUCTION To build high-performing deep learning models for industrial and medical applications, it is necessary to train on large human-labelled datasets. For instance, Imagenet (Deng et al., 2009), a classic benchmark dataset for object recognition, contains over 1 million labelled examples. Unfortunately, human labelling is often prohibitively expensive. In contrast obtaining unlabelled data is usually very straightforward. For instance, unlabelled image data can be obtained in almost unlimited volumes from the internet. Semi-supervised learning (SSL) attempts to leverage this unlabelled data to reduce the required number of human labels (Seeger, 2000; Zhu, 2005; Chapelle et al., 2006; Zhu & Goldberg, 2009; Van Engelen & Hoos, 2020). One family of SSL methods — those based on low-density separation — assume that decision boundaries lie in regions of low probability density, far from all labelled and unlabelled points. To achieve this, pre deep learning (DL) low-density separation SSL methods such as entropy minimization and pseudo-labelling (Grandvalet & Bengio, 2005; Lee, 2013) use objectives that repel decision boundaries away from unlabelled points by encouraging the network to make more certain predictions on those points. Entropy minimization (as the name suggests) minimizes the predictive entropy, whereas pseudo-labelling treats the currently most-probable label as a pseudo-label, and minimizes the cross entropy to that pseudo-label. More modern work uses the notion of consistency regularisation, which augments the unlabelled data (e.g. using translations and rotations), then encourages the neural network to produce similar outputs for different augmentations of the same underlying image (Sajjadi et al., 2016; Xie et al., 2019; Berthelot et al., 2019b; Sohn et al., 2020). 
Further developments of this line of work have resulted in many variants/combinations of these algorithms, from directly encouraging the smoothness of the classifier outputs around unlabelled datapoints (Miyato et al., 2018) to the “FixMatch” family of algorithms (Berthelot et al., 2019b;a; Sohn et al., 2020), which combine pseudo-labelling and consistency regularisation by augmenting each image twice, and using one of the augmented images to provide a pseudo-label for the other augmentation. 1Our code: https://anonymous.4open.science/r/GZ_SSL-B6CC; MIT Licensed However, some of the biggest successes of deep learning, from supervised learning to many generative models, have been built on a principled statistical framework as maximum (marginal) likelihood inference (e.g. the cross-entropy objective in supervised learning can be understood as the log-likelihood for a Categorical-softmax model of the class-label MacKay, 2003). Low-density separation SSL methods such as pseudo-labelling and entropy minimization are designed primarily to encourage the class-boundary to lie in low-density regions. Therefore they cannot be understood as log-likelihoods and cannot be combined with principled statistical methods such as Bayesian inference. Here, we give a formal account of SSL methods based on low-density separation (Chapelle et al., 2006) as lower bounds on a principled log-likelihood. In particular, we consider pseudo-labelling (Lee, 2013), entropy minimization (Grandvalet & Bengio, 2005), and modern methods similar to FixMatch (Sohn et al., 2020). This log-likelihood arises from a generative model of data curation that was initially developed to explain the cold-posterior effect (Aitchison, 2021). Critically, this approach gives an explanation for previous findings that SSL is most effective when unlabelled data is obtained by throwing away labels from the carefully curated training set, and is less effective when unlabelled data is taken from uncurated images, especially those that do not depict one of the classes of interest (Cozman et al., 2003; Oliver et al., 2018; Chen et al., 2020; Guo et al., 2020). We confirmed the importance of data curation for SSL on toy data generated from a known model and on real data from Galaxy Zoo 2 (Willett et al., 2013). 2 BACKGROUND Our work brings together many disparate areas. Here, we give an introduction to a generative model of data curation (Aitchison, 2021) initially developed to explain the cold posterior effect (Wenzel et al., 2020), pseudo-labelling and entropy minimization (Grandvalet & Bengio, 2005; Lee, 2013), and the treatment of unlabelled points in the standard supervised learning setup. 2.1 A GENERATIVE MODEL OF DATA CURATION To develop a model of data curation, remember that image datasets including CIFAR-10 and ImageNet are curated to ensure they only contain images whose class-labels are unambiguous. For instance, in CIFAR-10, annotators were instructed that “It’s worse to include one that shouldn’t be included than to exclude one.”, and Krizhevsky (2009) “personally verified every label submitted by the annotators”. In creating ImageNet, Deng et al. (2009) made sure that a number of Amazon Mechanical Turk annotators agreed upon the class before including an image in the dataset. Thus, these datasets have two odd properties. First, consensus labels exist only for a subset of images, e.g. for a white-noise image, consensus cannot be reached and the image cannot be labelled. 
Second, inclusion of an image in a dataset like CIFAR-10 is informative in and of itself, as it indicates that the image shows an unambiguous example of one of the ten classes. To understand these odd properties of curated datasets, consider a simplified generative model of consensus-formation: draw a random image, X , from the distribution over images, P (X), and ask S human annotators, indexed s, to give a label, {Ys}Ss=1 (e.g. using Mechanical Turk). Importantly, every annotator is forced to label every image and if the image is ambiguous they should give a random label. If all the annotators agree, Y1=Y2= · · · =YS , they have consensus and the datapoint is included in the dataset. However, in the case of any disagreement, consensus is not reached and the datapoint is excluded (Fig. 1), Concretely, the final label, Y is Y1 (which is the same as all the other labels) if consensus was reached and None otherwise (Fig. 2C), Y |{Ys}Ss=1 = { Y1 if Y1=Y2= · · · =YS None otherwise (1) Taking Y to be the label set, we have Ys ∈ Y , and the final label, Y , could be any of the underlying labels in Y , or None if consensus is not reached, so Y ∈ Y ∪ {None}. When consensus was reached, the likelihood is, P (Y =y|X, θ) = P ( {Ys=y}Ss=1|X, θ ) = ∏S s=1 P (Ys=y|X, θ) = P (Ys=y|X, θ) S = (py(X)) S (2) where we have assumed annotators are IID, and py(X) = P (Ys=y|X, θ) is the single-annotator probability. From here, it is possible to see how this model might be taken to give an account of tempering, as we have taken the underlying single-annotator likelihood, py(X) to the power S (for further details see Aitchison, 2021). 2.2 LOW-DENSITY SEPARATION SEMI-SUPERVISED LEARNING OBJECTIVES The intuition behind low-density separation objectives for semi-supervised learning is that decision boundaries should be in low-density regions away from both labelled and unlabelled data. As such, it is sensible to “repel” decision boundaries away from labelled and unlabelled datapoints and this can be achieved by making the classifier as certain as possible on those points. This happens automatically for labelled points as the standard supervised objective encourages the classifier to be as certain as possible about the true class label. But for unlabelled points we need a new objective that encourages certainty, and we focus on two approaches. First, and perhaps most direct is entropy minimization (Grandvalet & Bengio, 2005) Lentropy(X) = ∑ y∈Y py(X) log py(X) (3) where, following the typical probabilistic approach, we write the negative entropy as an objective to be maximized. Alternatively, we could use pseudo-labelling, which takes the current classification, y∗, to be the true label, and maximizes the log-probability of that label (Lee, 2013), Lpseudo(X) = log py∗(X) y∗ = argmax y∈Y log py(X). (4) Lee (2013) regarded pseudo-labelling as closely related to entropy miminization as the optimal value of both objectives is reached when all the probability mass is assigned to one class. However, they are not formulated as a principled log-likelihood, which gives rise to at least three problems. First, these methods cannot be combined with other principled statistical methods such as Bayesian inference. Second, it is unclear how to combine these objectives with standard supervised objectives, except by taking a weighted sum and doing hyperparameter optimization over the weight. Third, these objectives risk reinforcing any initial poor classifications and it is unclear whether this is desirable. 
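To make the consensus-formation model of Sec. 2.1 (Eqs. 1-2) concrete, here is a small simulation sketch; the function and variable names are illustrative and not taken from the paper. Each example is labelled by S independent annotators and kept only if they all agree.

```python
import numpy as np

rng = np.random.default_rng(0)

def curate(probs, S):
    """Simulate S IID annotators per example; return the consensus label or None (Eq. 1)."""
    labels = []
    for p in probs:
        votes = rng.choice(len(p), size=S, p=p)   # one label per annotator, drawn from p_y(X)
        labels.append(int(votes[0]) if np.all(votes == votes[0]) else None)
    return labels

# Confident examples reach consensus with probability sum_y p_y(X)^S; ambiguous ones rarely do.
confident = curate([[0.95, 0.05]] * 1000, S=3)   # expect ~0.95^3 + 0.05^3 ~ 0.86 consensus
ambiguous = curate([[0.55, 0.45]] * 1000, S=3)   # expect ~0.55^3 + 0.45^3 ~ 0.26 consensus
print(sum(l is not None for l in confident) / 1000,
      sum(l is not None for l in ambiguous) / 1000)
```

The curated set therefore over-represents examples far from the decision boundary, which is exactly the property the theory below exploits.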
2.3 IN STANDARD SUPERVISED LEARNING, UNLABELLED POINTS SHOULD BE UNINFORMATIVE It is important to note that under the standard supervised-learning generative model (Fig. 2A), unlabelled points should not give any information about the weights. Omitting the label, Ysup, we obtain the graphical model in Fig. 2B. This model emphasises that the images, X , and the model parameters, θ, are marginally independent, so we cannot obtain any information about θ from X alone (Fig. 2B). Formally, the posterior over θ conditioned on X is equal to the prior, P (θ|X) = P (θ,X) P (X) = ∑ y∈Y P (θ,X, Ysup=y) P (X) (5) = P (θ) P (X) ∑ y∈Y P (Ysup=y|θ,X) P (X) = P (θ) . as 1 = ∑ y∈Y P (Ysup=y|θ,X). To confirm this result is intuitively sensible, note that are many situations where encouraging the decision boundary to lie in low density regions would be very detrimental to performance. Consider a classifier with two input features: x0 and x1 (Fig. 4A). The class boundary lies in the high-density region crossing both clusters, so to obtain a reasonable result, the classifier should ignore the low-density region lying between the clusters. However, strong low-density separation SSL terms in the objective may align the cluster boundaries with the class boundaries, leading the classifier to wrongly believe that one cluster is entirely one class and the other cluster is entirely the other class. In contrast, supervised learning without SSL will ignore clustering and obtain a reasonable answer close to the grey dashed line. Importantly, this is just an illustrative example to demonstrate that without further assumptions, the standard supervised approach of ignoring unlabelled data is sensible; semi-supervised learning without loss of performance in such settings has been studied and is known as Safe SSL (Li & Zhou, 2014; Krijthe & Loog, 2014; Kawakita & Takeuchi, 2014; Loog, 2015; Krijthe & Loog, 2016). 3 THEORY SSL methods are usually applied to benchmark datasets such as CIFAR-10 or ImageNet. These datasets were first carefully curated during the labelling process: (Fig. 3A), implying that ambiguous images close to the decision boundary were excluded. Critically, unlabelled points for these benchmark datasets are obtained by taking labelled points (which have reached consensus) and throwing away their labels (Fig. 3B). The likelihood for consensus (Y 6=None) is P (Y 6=None|X, θ) = ∑ y∈Y (py(X)) S . (6) This probability is close to 1 (for S > 1) if the underlying distribution, (py(X)) S puts most of its mass onto one class, and the probability is smaller if the mass is spread out over classes. As such, the likelihood “repels” decision boundaries away from unlabelled points, which is the common intuition behind low-density separation SSL methods, and which should be beneficial if class boundaries indeed lie in regions of low probability density away from both labelled and unlabelled points. If noconsensus images are observed (Fig. 2C), we can include a likelihood term for those images, P (Y =None|X, θ) = 1− P (Y 6= None|X, θ) = 1− ∑ y∈Y (py(X)) S . (7) If noconsensus images are not observed, we could in principle integrate over the underlying distribution over images, P (X=x). However, we do not even have samples from the underlying distributions over images (and if we did, we would have the noconsensus images so we could use Eq. 7). As such this term is usually omitted (e.g. 
Aitchison, 2021), but the use of out-of-distribution (OOD) datasets as surrogate noconsensus points is an important direction for future work. 3.1 ENTROPY MINIMIZATION AND PSEUDO-LABELS ARE LOWER BOUNDS ON OUR PRINCIPLED LOG-LIKELIHOOD To prove that entropy minimization forms a lower-bound on our log-likelihood (Eq. 6), we begin by writing the log-likelihood of consensus in terms of an expectation over labels, y, log P (Y 6=None|X, θ) = log ∑ y∈Y py(X) (py(X)) S−1 = logEpy(X) [ (py(X)) S−1 ] . (8) Applying Jensen’s inequality, the negative entropy gives a lower-bound on our log-likelihood, log P (Y 6=None|X, θ) ≥ Epy(X) [ log (py(X)) S−1 ] = (S − 1) ∑ y∈Y py(X) log py(X) = (S − 1)Lentropy(X) (9) This bound is tight for a uniform predictive distribution, log P (Y 6=None|X, θ) = log ∑ y∈Y (py(X)) S = logS ( 1 S )S = (S − 1) logS (10) (S − 1)Lentropy(X) = −(S − 1) log ∑ y∈Ypy(X) log py(X) = (S − 1) logS. (11) Pseudo-labelling forms an alternative lower bound on the log-likelihood which is obtained by noting that all (py(X)) S are positive, so selecting any subset of terms in the sum gives a lower bound, log P (Y 6=None|X, θ) = log ∑ y∈Y (py(X)) S ≥ log (py∗(X))S = S log py∗(X) = SLpseudo(X). (12) The inequality holds if we choose y∗ to be any class, but will be tightest if we choose the highest probability class. This bound is tight for a predictive distribution that puts all its mass on y∗, so py∗(X) = 1 and py 6=y∗ = 0 log P (Y 6=None|X, θ) = log ∑ y∈Y (py(X)) S = log (py∗(X)) S = log 1 = 0 (13) SLpseudo(X) = S log p∗y(X) = S log 1 = 0. (14) As such, entropy minimization and pseudo-labelling optimize different lower-bounds on our principled log-likelihood, log P (Y 6=None|X, θ), which gives a potential explanation for the effectiveness of pseudo-labelling and entropy minimization. Additionally, low-density separation SSL objectives encourages class-labels to be more certain. We can therefore expect pseudo-labelling to be the more relevant bound, as that bound is tight when the predictive distribution puts all its mass onto one class. In contrast, the entropy maximization bound is tight when the predictive distribution is uniform, which is discouraged by all low-density separation SSL objectives. This provides a potential explanation for the use of psuedo-labelling rather than entropy regularisation in modern SSL approaches such as (Sohn et al., 2020). 3.2 DATA AUGMENTATION PRIORS AND FIXMATCH FAMILY METHODS FixMatch family methods combine data augmentation and pseudo-labelling. To understand FixMatch as a bound on a principled log-likelihood, we therefore need a principled account of data augmentation as a likelihood. Inspired by Wenzel et al. (2020) (their Appendix K), we consider a distribution, P (X ′|X), over augmented images, X ′, given the underlying unaugmented image, X . We choose the single-annotator predictive distribution as the average over predictive distributions for many different augmented images, P (Ys=y|X, θ) = E [py(X ′)|X] (15) where py(X ′) is the predictive probabilities resulting from applying the neural network to the augmented image, and remember s ∈ {1, . . . , S} indexes the annotator. This is a sensible prior because we expect the neural network to be invariant under data-augmentation, and if the predictions are approximately invariant, then averaging the predictive distributions has little impact (Fig. 4B left). However, if the predictions do vary dramatically with different data augmentations then we should not trust the network’s classifications (i.e. 
we should have an uncertain predictive distribution), and averaging over very different predictive distributions for different augmentations indeed gives rise to broader, more uncertain predictions (Fig. 4B right). To obtain a tractable objective in the supervised setting, we use a multi-sample version of Jensen’s inequality, with K augmented images denoted X ′k, log P (Ys=y|X, θ) ≥ E [ log 1K ∑ kpy(X ′ k) ∣∣X] . (16) Combining this single-annotator probability with our generative model of curation, we obtain, log P (Y =y|X, θ) = S log P (Ys=y|X, θ) = S logE [py(X ′)|X] ≥ S E [ log 1K ∑ kpy(X ′ k) ∣∣X] , (17) The resulting objective for unlabelled points is, log P (Y 6=None|X, θ) = log ∑ y∈Y P (Y =y|X, θ) = log ∑ y∈YE [py(X ′)|X] S ≈ log ∑ y∈Y ( 1 K ∑ kpy(X ′ k) )S , (18) where we approximate the expectation with K different samples of X ′, denoted X ′k. Unfortunately, this approach does not immediately form a bound on the log-likelihood due to the convex nonlinearity in taking the power of S. Nonetheless, one key problem with approximating machine learning losses is that the optimizer learns to exploit approximation errors to find a pathological solution that makes the objective unboundedly large. We appear to be safe from that pathology here, as we are simply forming predictions by averaging over K augmentations of the underlying image. Nonetheless, to form a lower bound, we can follow FixMatch family algorithms by pseudo-labelling, i.e. by taking only one term in the sum for class y∗. FixMatch chooses y∗ by using the highest-probability class for a weakly-augmented image. An alternative approach is to choose the y∗ giving the tightest bound, i.e. argmaxy 1 K ∑ kpy(X ′ k). In either case, log P (Y 6=None|X, θ) ≥ logE [py∗(X ′)|X] S ≥ S E [ log 1K ∑ kpy∗(X ′ k) ∣∣X] , (19) If K = 1 and y∗ is chosen using a separate “weak” augmentation, then this is exactly equal to the FixMatch objective for unlabelled points. Note that both of these objectives (Eq. 18 and 19) promote reduced predictive uncertainty. Importantly, this does not just increase confidence in the single-augmentation predictive distributions, py(X ′ k), but also increases alignment between the predictive distributions for different augmentations (Fig. 4B). In particular, if the single-augmentation predictives are all highly confident, but place that high-confidence on different classes, then the multi-augmentation predictive formed by averaging will have low-confidence (Fig. 4B right). The only way for the multi-augmentation predictive to have high confidence is if the underlying single-augmentation predictive distributions have high confidence in the same class (Fig. 4B left), which encourages the underlying network to become more invariant. This makes sense: if data-augmentation changes the class predicted by the neural network, then any predictions should be low confidence. And it implies that combining principled data augmentation with a generative model of data curation automatically gives rise to an objective encouraging invariance. 4 RESULTS We begin by giving a proof-of-principle for Bayesian SSL on a toy dataset generated from a known model. Next, we tested our theoretical results (rather than trying to achieve SOTA performance) on real-world datasets. In particular, our theory gives one explanation for why SSL is typically more effective when unlabelled data is taken from the original, curated training set. 
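The toy Bayesian SSL experiment described below combines three per-example log-likelihood terms, for labelled, unlabelled and noconsensus points respectively (Eqs. 2, 6 and 7). A minimal sketch follows, with illustrative names; `probs` is assumed to be the single-annotator predictive distribution.

```python
import torch

def log_lik_labelled(probs, y, S):
    """Eq. (2): log (p_y(X))^S = S * log p_y(X) for a consensus-labelled point."""
    return S * probs.gather(-1, y.unsqueeze(-1)).squeeze(-1).log()

def log_lik_unlabelled(probs, S):
    """Eq. (6): log sum_y (p_y(X))^S for an unlabelled (but curated) point."""
    return (probs ** S).sum(dim=-1).log()

def log_lik_noconsensus(probs, S):
    """Eq. (7): log (1 - sum_y (p_y(X))^S) for a point on which annotators disagreed."""
    return (1.0 - (probs ** S).sum(dim=-1)).log()
```

Summing these terms over the corresponding points, together with the log-prior over weights, gives (up to constants) the log-density targeted by the Langevin sampler in Sec. 4.1.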
4 RESULTS

We begin by giving a proof-of-principle for Bayesian SSL on a toy dataset generated from a known model. Next, we tested our theoretical results (rather than trying to achieve SOTA performance) on real-world datasets. In particular, our theory gives one explanation for why SSL is typically more effective when unlabelled data is taken from the original, curated training set. To confirm these results, we used Galaxy Zoo 2, as this is a real-world dataset which allowed us to generate matched curated and uncurated datasets.

4.1 BAYESIAN SSL ON A GENERATED DATASET

Our formulation of SSL as a likelihood implies that it should be possible to take entirely novel approaches, such as using low-density separation SSL in a Bayesian neural network (BNN). We considered a toy dataset generated from a "true" neural network model with one hidden layer and 30 hidden units, 5-dimensional inputs and 2 output classes. We generated inputs IID from a Gaussian, then passed them through the "true" neural network, then sampled multiple categorical class-labels corresponding to different annotators. If all the simulated annotators agreed, consensus was reached, and if any simulated annotators disagreed, consensus was not reached. We used 100 labelled datapoints, though not all of them will have reached consensus, and we used up to 1600 unlabelled points, though again not all of them will have reached consensus. Note that as the consensus/noconsensus status of a point arises from the generative model, we cannot independently specify the number of consensus/noconsensus points. We used Eq. (2) as the likelihood for labelled points, Eq. (6) as the likelihood for unlabelled points and Eq. (7) as the likelihood for noconsensus points. We sampled (and trained networks on) 500 datasets in parallel. We trained using Langevin dynamics with all data simultaneously (no minibatching), with no momentum and no rejection. For a generative model with S = 1, consensus is always reached and the problem is equivalent to standard supervised learning. As such, we found no benefits from including unlabelled points for S = 1. In contrast, for any setting of S > 1 we found that increasing the number of unlabelled points improved the test log-likelihood (Fig. 5A–B) and the test accuracy (Fig. 5C–D).

4.2 GALAXY ZOO 2

Our data-curation-based theory predicts that low-density separation based SSL should be much more effective on curated than uncurated data. To test this prediction on real-world data, we turned to Galaxy Zoo 2² (GZ2) (Willett et al., 2013), which uses images from the Sloan Digital Sky Survey. This dataset is particularly useful for us as it has received only very minimal filtering based on criteria such as object brightness and spatial extent. We defined 9 labels by truncating the complex decision tree followed by the annotators (for further details see Aitchison, 2021). Further, as each GZ2 image has received ∼50 labels, we can define a consensus coefficient by taking the fraction of annotators that agreed upon the highest-probability class. We can then define a curated dataset by taking the images with consensus coefficient above some threshold within each class. Note that we needed to select images on a per-class basis, because annotators tend to be more confident on some classes than others, so taking the highest consensus coefficients overall would dramatically change the class balance. In particular, we used the top 8.2% of images, which gave a full curated dataset of just over 20,000 images. Of those, we randomly selected 2000 as labelled examples, 10000 as test examples, and 0–6000 as unlabelled examples. The images were preprocessed by center-cropping to 212 × 212 and then scaling to 32 × 32. We applied a FixMatch-inspired semi-supervised learning algorithm with a standard supervised objective and with the unlabelled objective given by Eq. (18), using K = 2.
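The per-class curation step described above can be sketched as follows; the label and consensus-coefficient arrays are random stand-ins for the real GZ2 annotations, and the quantile-based thresholding is one plausible way to implement "top 8.2% within each class", not necessarily the authors' exact procedure.

```python
# Per-class curation: keep images whose consensus coefficient is in the top
# fraction of their class, so the class balance is preserved.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_classes, keep_frac = 10000, 9, 0.082
labels = rng.integers(0, n_classes, size=n_images)      # highest-probability class per image
consensus = rng.uniform(0.2, 1.0, size=n_images)        # fraction of annotators agreeing

keep = np.zeros(n_images, dtype=bool)
for c in range(n_classes):
    idx = np.flatnonzero(labels == c)
    threshold = np.quantile(consensus[idx], 1.0 - keep_frac)
    keep[idx] = consensus[idx] >= threshold              # curated subset for class c
print(keep.sum(), "curated images out of", n_images)
```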
Data augmentation was given by vertical and horizontal flips, rotations from −180° to 180°, translations by up to 40% on both axes and scaling from 20% to 180%. Note that as we were trying to mirror the standard SSL setup, we did not include noconsensus points in the objective. We trained a ResNet18 with our maximum likelihood objective using SGD with a batch size of 500, a learning rate of 0.01 and 1500 epochs. We used an internal cluster of nVidia 1080 and 2080 GPUs, and the experiments took roughly 300 GPU hours.

We found that the test log-likelihood for curated data improved slightly as more unlabelled points were included, whereas the test log-likelihood for uncurated data dramatically declined as unlabelled points were added (Fig. 6A–B). We saw strong improvements in test accuracy with the number of unlabelled points for curated datasets (Fig. 6C–D). Note that in Fig. 6C the error rate for curated datasets is already very small, so to see any effect we needed to plot the test error, normalized to the initial test error (Fig. 6D). For uncurated data, the inclusion of large numbers of unlabelled points dramatically worsened performance, though the inclusion of a small number of unlabelled points gave very small performance improvements (Fig. 6C–D). Thus, this experiment is consistent with the idea that the effectiveness of SSL arises at least in part from curation of the underlying dataset.

²https://data.galaxyzoo.org; www.sdss.org/collaboration/image-use-policy/

5 RELATED WORK

There are at least three main approaches to semi-supervised learning (Seeger, 2000; Zhu, 2005; Chapelle et al., 2006; Zhu & Goldberg, 2009). First there is low-density separation, where we assume that the class boundary lies in a region of low probability density away from both labelled and unlabelled points. This approach dates back at least to transductive support vector machines (SVMs), where the model is to be tested on a finite number of known test locations (Vapnik, 1998; Chapelle et al., 1999). Those known test locations are treated as unlabelled points, and we find the decision boundary that perfectly classifies the limited number of labelled points, while at the same time being as far as possible from labelled and unlabelled data. Alternative approaches include pseudo-labelling and entropy minimization (Grandvalet & Bengio, 2005; Lee, 2013). Second, there are graph-based methods such as Zhu & Ghahramani (2002), which are very different from the methods considered here. Third, there are approaches that use unlabelled points to build a generative model of the inputs and leverage that model to improve classification (e.g. Kingma et al., 2014; Odena, 2016; Gordon & Hernández-Lobato, 2017). This approach was originally explored in a considerable body of classical work (e.g. McLachlan, 1975; Castelli & Cover, 1995; Druck et al., 2007); for a review, see Seeger (2000) and references therein. These approaches are fundamentally different from the SSL approaches considered here, as they require a generative model of inputs, while low-density separation methods do not. Generative modelling can be problematic as training a generative model can be more involved than training a discriminative model, and because even when the model can produce excellent samples, the high-level representation may be "entangled" (Higgins et al., 2017), in which case it may not offer benefits for classification.
6 DISCUSSION

Our theory provides a theoretical understanding of past results showing that SSL is more effective when unlabelled data is drawn from the original, curated training set (Cozman et al., 2003; Oliver et al., 2018; Chen et al., 2020; Guo et al., 2020). In the extreme, our theory might be taken to imply that if data has not been curated, then SSL cannot work, and therefore that low-density separation SSL methods will not be effective in messy, uncurated real-world datasets. However, this is not the complete picture. Low-density separation SSL methods, including our log-likelihood, fundamentally exploit class boundaries lying in low-density regions. As such, low-density separation could equally come from the real underlying data or could be artificially induced by data curation (Fig. 3). None of these methods are able to distinguish between these different underlying sources of low-density separation, and as such any of them may work on uncurated data where the underlying distribution displays low-density separation. However, the possibility for curation to artificially induce low-density separation does imply that we should be cautious about overinterpreting spectacular results obtained on very carefully curated benchmark datasets such as CIFAR-10. Surprisingly, the generative model of data curation used here also explains the cold-posterior effect in Bayesian neural networks (Wenzel et al., 2020; Aitchison, 2021), revealing a profound and previously unsuspected connection. In conclusion, we showed that low-density separation SSL objectives can be understood as a lower bound on a log-probability which arises from a principled generative model of data curation. This gives a theoretical understanding of recent results showing that SSL is more effective on curated data, which we confirmed by developing a Bayesian SSL model applied to toy data, and by using GZ2, which allowed us to consider a completely uncurated dataset.
1. What is the focus of the paper regarding data curation? 2. What are the two main lower bounds shown in the paper for the log likelihood of the consensus? 3. How do the authors demonstrate the effectiveness of adding unlabeled curated data in their experiments? 4. What is the reviewer's concern regarding the connection between the theoretical results and the experimental outcomes? 5. How does the reviewer assess the technical contributions of the paper, both theoretically and experimentally?
Summary Of The Paper Review
Summary Of The Paper The paper is built on the previous work (Aitchison, 2021) that provides a generative model of data curation. In this model, the likelihood of a labeled data obtained with consensus is given by product of the probability that each labeler labels correctly. Under this model, the paper shows 1) the log likelihood of the consensus is lower bounded by entropy or pseudo labeling, 2) the log likelihood of the consensus is also lower bounded by the FixMatch type average likelihood of the augmented data. In toy experiments and Galaxy Zoo experiments, authors show that the test likelihood improves when unlabelled curated data is added, but decreases on the uncurated data. Review Under the generative model of data curation, the paper shows 1) the log likelihood of the consensus is lower bounded by entropy or pseudo labeling, 2) the log likelihood of the consensus is also lower bounded by the FixMatch type average likelihood of the augmented data. These results are potentially interesting. In toy experiments and Galaxy Zoo data experiments, authors show that the test likelihood improves when unlabelled curated data is added, but decreases on the uncurated data. However, this is not directly the consequence of the theoretical results. I am not clear on this connection. The theory shows that log likelihood is an upper bound on the entropy, pseudo labelling loss under the generative model of curated data. It does not say what happens when the data is uncurated. Also, I am not clear how the authors measured the test likelihood. Besides, the technical contribution, both theoretical and experimental are limited. On the experiments, it would be nicer to see the consequences of the theoretical results on modern semi supervised learning approaches and datasets they use.
ICLR
Title Switching Linear Dynamics for Variational Bayes Filtering Abstract System identification of complex and nonlinear systems is a central problem for model predictive control and model-based reinforcement learning. Despite their complexity, such systems can often be approximated well by a set of linear dynamical systems if broken into appropriate subsequences. This mechanism not only helps us find good approximations of dynamics, but also gives us deeper insight into the underlying system. Leveraging Bayesian inference and Variational Autoencoders, we show how to learn a richer and more meaningful state space, e.g. encoding joint constraints and collisions with walls in a maze, from partial and high-dimensional observations. This representation translates into a gain of accuracy of the learned dynamics which we showcase on various simulated tasks. 1 INTRODUCTION Learning dynamics from raw data (also known as system identification) is a key component of model predictive control and model-based reinforcement learning. Problematically, environments of interest often give rise to very complex and highly nonlinear dynamics which are seemingly difficult to approximate. However, switching linear dynamical systems (SLDS) approaches claim that those environments can often be broken down into simpler units made up of areas of equal and linear dynamics (Ackerson & Fu, 1970; Chang & Athans, 1978). Not only are those approaches capable of good predictive performance, which often is the sole goal of learning a system’s dynamics, they also encode valuable information into so called switching variables which determine the dynamics of the next transition. For example, when looking at the movement of an arm, one is intuitively aware of certain restrictions of possible movements, e.g. constraints to the movement due to joint constraints or obstacles. The knowledge is present without the need to simulate; it’s explicit. Exactly this kind of information will be encoded when successfully learning switching dynamics. Our goal in this work will therefore entail the search for richer representations in the form of latent state space models which encode knowledge about the underlying system dynamics. In turn, we expect this to improve the accuracy of our simulation as well. Such a representation alone could then be used in a reinforcement learning approach that possibly only takes advantage of the learned latent features but not necessarily its learned dynamics. To learn richer representations, we identify one common problem with prevalent recurrent Variational Autoencoder models (Karl et al., 2017a; Krishnan et al., 2015; Chung et al., 2015; Fraccaro et al., 2016): the non-probabilistic treatment of the transition dynamics often modeled by a powerful nonlinear function approximator. From the history of the Autoencoder to the Variational Autoencoder, we know that in order to detect features in an unsupervised manner, probabilistic treatment of the latent space is paramount. As our starting point, we will build on previously proposed approaches by Krishnan et al. (2017) and Karl et al. (2017a). The latter already made use of locally linear dynamics, but only in a deterministic fashion. We extend their approaches by a stochastic switching LDS model and show that such treatment is vital for learning richer representations and simulation accuracy. 
2 BACKGROUND

We consider discretized time-series data consisting of continuous observations x_t ∈ X ⊂ R^{n_x} and control inputs u_t ∈ U ⊂ R^{n_u} that we would like to model by corresponding latent states z_t ∈ Z ⊂ R^{n_z}. We'll denote sequences of variables by x_{1:T} = (x_1, x_2, ..., x_T).

2.1 SWITCHING LINEAR DYNAMICAL SYSTEMS

Switching Linear Dynamical System models (SLDS) enable us to model nonlinear time series data by splitting it into sequences of linear dynamical models. At each time t = 1, 2, ..., T, a discrete switch variable s_t ∈ {1, ..., M} chooses from a set of LDSs the system used to transform our continuous latent state to the next time step (Barber, 2012).

z_t = A(s_t) z_{t−1} + B(s_t) u_{t−1} + ε(s_t),   ε(s_t) ∼ N(0, Q(s_t))
x_t = H(s_t) z_t + η(s_t),   η(s_t) ∼ N(0, R(s_t)) (1)

Here A ∈ R^{n_z×n_z} is the state matrix, B ∈ R^{n_z×n_u} the control matrix, ε the transition noise with covariance matrix Q, and η the emission/sensor noise with covariance matrix R. Finally, the observation matrix H ∈ R^{n_x×n_z} defines a linear mapping from latent to observation space which we will replace by a nonlinear transformation parameterized by a neural net. These equations imply the following joint distribution:

p(x_{1:T}, z_{1:T}, s_{1:T} | u_{1:T}) = ∏_{t=1}^{T} p(x_t | z_t) p(z_t | z_{t−1}, u_{t−1}, s_t) p(s_t | z_{t−1}, u_{t−1}, s_{t−1}) (2)

with p(z_1 | z_0, u_0, s_1) = p(z_1) being the initial state distribution. The corresponding graphical model is shown in figure 1a.

2.2 STOCHASTIC GRADIENT VARIATIONAL BAYES

p(x) = ∫ p(x, z) dz = ∫ p(x | z) p(z) dz (3)

Given the simple graphical model in equation (3), Kingma & Welling (2014) and Rezende et al. (2014) introduced the Variational Autoencoder (VAE), which overcomes the intractability of posterior inference of q(z | x) by maximizing the evidence lower bound (ELBO) of the model log-likelihood.

L_ELBO(x; θ, φ) = E_{q_φ(z|x)}[ln p_θ(x | z)] − D_KL(q_φ(z | x) || p(z)) ≤ log p(x) (4)

Their main innovation was to approximate the intractable posterior distribution by a recognition network q_φ(z|x) from which they can sample via the reparameterization trick to allow for stochastic backpropagation through both the recognition and generative model at once. Assuming that the latent state is normally distributed, a simple transformation allows us to obtain a Monte Carlo gradient estimate of E_{q_φ(z|x)}[ln p_θ(x|z)] w.r.t. φ. Given that z ∼ N(µ, σ²), we can generate samples by drawing an auxiliary variable ε ∼ N(0, 1) and applying the deterministic and differentiable transformation z = µ + σε.

2.3 THE CONCRETE DISTRIBUTION

One simple and efficient way to obtain samples d from a k-dimensional categorical distribution with class probabilities α is the Gumbel-Max trick:

d = one_hot(argmax_i [g_i + log α_i]),   with g_1, . . . , g_k ∼ Gumbel(0, 1) (5)

However, since the derivative of the argmax is 0 everywhere except at the boundary of state changes, where it is undefined, we can't learn a parameterization by backpropagation. The Gumbel-Softmax trick approximates the argmax by a softmax, which gives us a probability vector (Maddison et al., 2017; Jang et al., 2017). We can then draw samples via

d_k = exp((log α_k + g_k)/λ) / ∑_{i=1}^{k} exp((log α_i + g_i)/λ),   with g_1, . . . , g_k ∼ Gumbel(0, 1) (6)

This softmax computation approaches the discrete argmax as the temperature λ → 0; for λ → ∞ it approaches a uniform distribution.
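The following minimal sketch implements the Gumbel-Max trick (Eq. 5) and its softmax relaxation (Eq. 6); the class probabilities and temperature are illustrative values, not hyperparameters from the paper.

```python
# Gumbel-Max sampling (Eq. 5) and its Gumbel-Softmax relaxation (Eq. 6).
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.2, 0.5, 0.3])     # class probabilities of the categorical
lam = 0.75                            # temperature lambda (illustrative)

g = -np.log(-np.log(rng.uniform(size=alpha.shape)))          # Gumbel(0, 1) samples
d_hard = np.eye(len(alpha))[np.argmax(np.log(alpha) + g)]    # Eq. (5): one-hot sample

logits = (np.log(alpha) + g) / lam
d_soft = np.exp(logits - logits.max())
d_soft = d_soft / d_soft.sum()                               # Eq. (6): relaxed sample on the simplex
print(d_hard, d_soft)
```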
3 RELATED WORK

Our model can be viewed as a Deep Kalman Filter (Krishnan et al., 2015) with structured inference (Krishnan et al., 2017). In our case, structured inference entails another stochastic variable model with parameter sharing inspired by Karl et al. (2017b) and Karl et al. (2017a), who pointed out the importance of backpropagating the reconstruction error through the transition. We differ from a number of stochastic sequential models (Bayer & Osendorfer, 2014; Chung et al., 2015; Shabanian et al., 2017; Goyal et al., 2017) by directly transitioning the stochastic latent variable over time instead of having an RNN augmented by stochastic inputs. Fraccaro et al. (2016) have a transition over both a deterministic and a stochastic latent state sequence, aiming to combine the best of both worlds. Previous models (Watter et al., 2015; Karl et al., 2017a; Fraccaro et al., 2017) have already combined locally linear models with recurrent Variational Autoencoders; however, they provide a weaker structural incentive for learning latent variables determining the transition function. Van Steenkiste et al. (2018) approach a similar multiple bouncing balls problem (see section 5.1) by first distributing the representation of different balls into their own entities without supervision and then structurally hardwiring a transition function with interactions based on an attention mechanism. Recurrent switching linear dynamical systems (Linderman et al., 2016) use message passing for approximate inference, but are restricted to low-dimensional observations and a multi-stage training process. Johnson et al. (2016) propose a similar model to ours but combine message passing for discrete switching variables with a neural network encoder for observations learned by stochastic backpropagation. Tackling the problem of propagating state uncertainty over time, various combinations of neural networks for inference and Gaussian processes for transition dynamics have been proposed (Eleftheriadis et al., 2017; Doerr et al., 2018). However, these models have not been demonstrated to work with high-dimensional observation spaces like images. One kind of feature a switching LDS model may learn is interactions, which have recently been approached by employing Graph Neural Networks (Battaglia et al., 2016; Kipf et al., 2018). These methods are similar in that they predict edges which encode interactions between components of the state space (nodes).

4 PROPOSED APPROACH

Our goal is to fit a sequence of continuous states z_{1:T} and switching variables s_{2:T} to a given sequence of observations x_{1:T}. We assume a nonlinear mapping between observations and latent space which we generally approximate by neural networks, apart from the transition, which is modeled by a locally linear function. Our generative model is shown in figure 1b and our inference model in figure 2a.

4.1 GENERATIVE MODEL

Our generative model for a single x_t is described by

p(x_t) = ∫_{s_{≤t}} ∫_{z_{≤t}} p(x_t | z_t) p(z_t | z_{t−1}, s_t, u_{t−1}) p(s_t | s_{t−1}, z_{t−1}, u_{t−1}) p(z_{t−1}, s_{t−1}) (7)

which is close to the one of the original SLDS model (see figure 1a). Latent states z_t are continuous and represent the state of the system, while states s_t are the switching variables determining the transition. We approximate the discrete switching variables by a continuous relaxation, namely the Concrete distribution.¹ Differently to the original model, we do not condition the likelihood of the current observation p_θ(x_t | z_t) directly on the switching variables. This limits the influence of the switching variables to choosing a proper transition dynamic for the continuous latent space.
The likelihood model is parameterized by a neural network with either a Gaussian or a Bernoulli distribution as output, depending on the data. There is both a transition on the continuous states z_t and on the discrete latent states s_t. For the continuous state transition p(z_t | z_{t−1}, s_t, u_{t−1}) we follow (1) and maintain a set of M base matrices { (A^{(i)}, B^{(i)}, Q^{(i)}) | 1 ≤ i ≤ M } as our linear dynamical systems to choose from. For the transition on discrete latent states p(s_t | s_{t−1}, z_{t−1}, u_{t−1}), we would usually require learning a Markov transition matrix. However, since we approximate our discrete switching variables by a continuous relaxation, we can parameterize this transition by a neural network. Therefore, our entire generative model can be learned end-to-end by (stochastic) backpropagation. Finally, the resulting dynamics matrices are computed through a linear combination of the base matrices:

A(s_t) = ∑_{i=1}^{M} s_t^{(i)} A^{(i)},   B(s_t) = ∑_{i=1}^{M} s_t^{(i)} B^{(i)},   Q(s_t) = ∑_{i=1}^{M} s_t^{(i)} Q^{(i)} (8)

Both transition models – the continuous state transition p_θ(z_t | z_{t−1}, s_t, u_{t−1}) and the Concrete switching variable transition p_θ(s_t | s_{t−1}, z_{t−1}, u_{t−1}) – are shared with the inference model, which is key for good performance.

p_θ(z_t | z_{t−1}, s_t, u_{t−1}) = N(µ, σ²) where [µ, σ²] = f_θ(z_{t−1}, s_t, u_{t−1})
p_θ(s_t | s_{t−1}, z_{t−1}, u_{t−1}) = Concrete(α, λ_prior) where α = g_θ(z_{t−1}, s_{t−1}, u_{t−1}) (9)

4.2 INFERENCE

4.2.1 STRUCTURED INFERENCE OF CONTINUOUS LATENT STATE

We split our inference model q_φ(z_t | z_{t−1}, s_t, x_{≥t}, u_{≥t−1}) into two parts: 1) transition model q_trans(z_t | z_{t−1}, s_t, u_{t−1}) and 2) inverse measurement model q_meas(z_t | x_{≥t}, u_{≥t}), as previously proposed in Karl et al. (2017b). This split allows us to reuse our generative transition model in place of q_trans(z_t | z_{t−1}, s_t, u_{t−1}). This sharing of variables is essential for good performance, as it forces the reconstruction error to be backpropagated through the transition model. For practical reasons, we only share the computation of the transition mean µ_trans but not the variance σ²_trans between inference and generative model. Both parts, q_meas and q_trans, will give us independent predictions about the new state z_t, which will be combined in a manner akin to a Bayesian update in a Kalman Filter.

q_φ(z_t | z_{t−1}, s_t, x_{≥t}, u_{≥t−1}) ∝ q_meas(z_t | x_{≥t}, u_{≥t}) × q_trans(z_t | z_{t−1}, s_t, u_{t−1}) = N(µ_q, σ²_q)
q_meas(z_t | x_{≥t}, u_{≥t}) = N(µ_meas, σ²_meas) where [µ_meas, σ²_meas] = h_φ(x_{≥t}, u_{≥t})
q_trans(z_t | z_{t−1}, s_t, u_{t−1}) = N(µ_trans, σ²_trans) where [µ_trans, σ²_trans] = f_θ(z_{t−1}, s_t, u_{t−1}) (10)

The densities of q_meas and q_trans are multiplied, resulting in another Gaussian density:

µ_q = (µ_trans σ²_meas + µ_meas σ²_trans) / (σ²_meas + σ²_trans),   σ²_q = (σ²_meas σ²_trans) / (σ²_meas + σ²_trans) (11)

This update scheme is highlighted in figure 2b. We found empirically that conditioning the inverse measurement model q_meas(z_t | x_{≥t}, u_{≥t}) solely on the current observation x_t instead of the entire remaining trajectory leads to better results. We hypothesize that the recurrent model needlessly introduces very high-dimensional and complicated dynamics which are harder to approximate with our locally linear transition model.
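The following sketch illustrates Eq. (8) and the Gaussian multiplication in Eq. (11): mix M base systems with the relaxed switching weights s_t, apply the locally linear transition, and fuse the transition and measurement estimates. All dimensions and values are illustrative stand-ins for the learned parameters and network outputs.

```python
# Locally linear transition from mixed base matrices (Eq. 8) and the
# measurement/transition fusion (Eq. 11), diagonal covariances for simplicity.
import numpy as np

rng = np.random.default_rng(0)
M, nz, nu = 3, 4, 2
A = rng.normal(size=(M, nz, nz)) * 0.1 + np.eye(nz)   # base state matrices A^(i)
B = rng.normal(size=(M, nz, nu)) * 0.1                 # base control matrices B^(i)

s_t = np.array([0.7, 0.2, 0.1])                        # relaxed switching sample (sums to 1)
A_t = np.einsum('i,ijk->jk', s_t, A)                   # A(s_t) = sum_i s_t^(i) A^(i)
B_t = np.einsum('i,ijk->jk', s_t, B)

z_prev, u_prev = rng.normal(size=nz), rng.normal(size=nu)
mu_trans = A_t @ z_prev + B_t @ u_prev                 # mean of q_trans
var_trans = np.full(nz, 0.1)
mu_meas, var_meas = rng.normal(size=nz), np.full(nz, 0.2)   # stand-in for the measurement net

mu_q = (mu_trans * var_meas + mu_meas * var_trans) / (var_meas + var_trans)
var_q = var_meas * var_trans / (var_meas + var_trans)
print(mu_q, var_q)
```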
For the initial state z_1 we do not have a conditional prior from the transition model as in the rest of the sequence. Other methods (Krishnan et al., 2015) have used a standard normal prior; however, this is not a good fit. We therefore decided not to predict z_1 directly but instead to predict an auxiliary variable w that is then mapped deterministically to a starting state z_1. A standard Gaussian prior is then applied to w. Alternatively, we could specify a more complex or learned prior for the initial state like the VampPrior (Tomczak & Welling, 2017). Empirically, this has led to worse results.

q_φ(w | x_{1:T}, u_{1:T}) = N(w; µ_w, σ²_w) where [µ_w, σ²_w] = i_φ(x_{1:T}, u_{1:T}),   z_1 = f_φ(w) (12)

While we could condition on the entire sequence, we restrict it to just the first couple of observations.

¹As an ablation study, we will compare this to modeling switching variables by a Gaussian distribution.

4.2.2 INFERENCE OF SWITCHING VARIABLES

Following Maddison et al. (2017) and Jang et al. (2017), we can reparameterize a discrete latent variable with the Gumbel-softmax trick. Again, we split our inference network q_φ(s_t | s_{t−1}, z_{t−1}, x_{≥t}, u_{≥t−1}) in an identical fashion into two components: 1) transition model q_trans(s_t | s_{t−1}, z_{t−1}, u_{t−1}) and 2) inverse measurement model q_meas(s_t | x_{≥t}, u_{≥t}). The transition model is again shared with the generative model and is implemented via a neural network, as we potentially require quick changes to the chosen dynamics. The inverse measurement model is parametrized by a backward LSTM. However, for the case of Concrete variables, we cannot do the same Gauss multiplication as in the previous case. Therefore, we let each network predict the logits of a Concrete distribution, and our inverse measurement model q_φ(s_t | x_{≥t}, u_{≥t}) produces an additional vector γ, which determines the value of a gate deciding how the two predictions are to be weighted:

q_φ(s_t | s_{t−1}, z_{t−1}, x_{≥t}, u_{≥t−1}) = Concrete(α, λ_posterior) with α = γ α_trans + (1 − γ) α_meas
q_meas(s_t | x_{≥t}, u_{≥t}) = Concrete(α_meas, λ_posterior) where [α_meas, γ] = k_φ(x_{≥t}, u_{≥t})
q_trans(s_t | s_{t−1}, z_{t−1}, u_{t−1}) = Concrete(α_trans, λ_prior) where α_trans = g_θ(z_{t−1}, s_{t−1}, u_{t−1}) (13)

The temperatures λ_posterior and λ_prior are set as hyperparameters and can be set differently for the prior and approximate posterior. The gating mechanism gives the model the option to balance between prior and approximate posterior. If the prior is good enough to explain the next observation, γ will be pushed to 1, which ignores the measurement and minimizes the KL between prior and posterior by only propagating the prior. If the prior is not sufficient, information from the inverse measurement model can flow by decreasing γ and incurring a KL penalty. Since the Concrete distribution is a relaxation of the categorical, our sample will not be a one-hot vector, but a vector whose elements sum up to 1. We face two options here: we could take a categorical sample by choosing the linear system corresponding to the highest value in the sample (hard forward pass) and only use the relaxation for our backward pass. This, however, means that we will follow a biased gradient. Alternatively, we can use the relaxed version for our forward pass and aggregate the linear systems based on their corresponding weighting (see (8)). Here, we lose the discrete switching of linear systems, but maintain a valid lower bound. We note that the hard forward pass has led to worse results, and we focus on the soft forward pass for this paper. Lastly, we could go further away from the theory and instead treat the switching variables as normally distributed. If this worked better than the approach with Concrete variables, it would highlight still-existing optimization problems of discrete random variables.
As such, it will act as an ablation study for our model. The mixing coefficients for linear systems would then be determined by a linear combination of these latent variables:

α = softmax(W s_t + b) ∈ R^M (14)

Our inference scheme for normally distributed switching variables is then identical to the one described in the previous section. We compare both approaches throughout our experimental section.

4.3 TRAINING

Our objective function is the commonly used evidence lower bound for our hierarchical model.

L_{θ,φ}(x_{1:T} | u_{1:T}) ≥ E_{q_φ(z_{1:T}, s_{1:T} | x_{1:T})}[log p_θ(x_{1:T} | z_{1:T}, s_{1:T}, u_{1:T})] − D_KL(q_φ(z_{1:T}, s_{1:T} | x_{1:T}, u_{1:T}) || p(z_{1:T}, s_{1:T} | u_{1:T})) (15)

We choose to factorize over time, so the loss for a single observation x_t becomes:

L_{θ,φ}(x_t | u_{1:T}) = E_{q_φ(s_t | s_{t−1}, z_{t−1}, x_{≥t}, u_{≥t−1})}[ E_{q_φ(z_t | s_t, z_{t−1}, x_{≥t}, u_{≥t−1})}[log p_θ(x_t | z_t)] ]
 − E_{s_{t−1}}[ E_{z_{t−1}}[ D_KL(q_φ(s_t | s_{t−1}, z_{t−1}, x_{≥t}, u_{≥t−1}) || p_θ(s_t | s_{t−1}, z_{t−1}, u_{t−1})) ] ]
 − E_{z_{t−1}}[ E_{s_t}[ D_KL(q_φ(z_t | z_{t−1}, s_t, x_{≥t}, u_{≥t−1}) || p_θ(z_t | z_{t−1}, s_t, u_{t−1})) ] ] (16)

The full derivation can be found in appendix A. We learn the parameters of our model by backpropagation through time, and we (generally) approximate the expectations with one sample by using the reparametrization trick. The exception is the KL between two Concrete random variables, in which case we take 10 samples for the approximation. For the KL on the switching variables, we further introduce a scaling factor β < 1 (as first suggested in Higgins et al. (2016), although they suggested increasing the KL term) to down-weight its importance. More details on the training procedure can be found in appendix B.2.

5 EXPERIMENTS

In this section, we evaluate our approach on a diverse set of physics and robotics simulations based on partially observable system states or high-dimensional images as observations. We show that our model outperforms previous models and that our switching variables learn meaningful representations. Models we compare to are the Deep Variational Bayes Filter (DVBF) (Karl et al., 2017a), DVBF Fusion (Karl et al., 2017b) (called fusion as they do the same Gauss multiplication in the inference network), which is closest to our model but doesn't have a stochastic treatment of the transition, the Kalman VAE (KVAE) (Fraccaro et al., 2017) and an LSTM (Hochreiter & Schmidhuber, 1997).

Figure 3: (a) Multi-agent maze environment. (b) Variable encoding free space for agent 2. (c) Variable encoding walls for agent 1. (d) System activation for the deterministic transition. Figures (b) and (c) depict an agent's position colored by the average value of a single latent variable s, marginalized over all control inputs u and velocities. Figure (d) highlights a representative activation for a single transition system under the deterministic treatment of the transition dynamics; it doesn't generalize to the entire maze and stays fairly active in proximity to the wall.

5.1 MULTIPLE BOUNCING BALLS IN A MAZE

Our first experiment is a custom 3-agent maze environment simulated with Box2D. Each agent is fully described by its x and y coordinates and its current velocity, and has the capability to accelerate in either direction. We learn in a partially observable setting and limit the observations to the agents' positions, therefore x ∈ R^6 while the true state space is in R^12 and u ∈ R^6. First, we train a linear regression model on the latent space z to see if we have recovered a linear encoding of the unobserved velocities. We achieve an R² score of 0.92 averaged over all agents and velocity directions.
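The linear probe mentioned above can be sketched as follows: fit ordinary least squares from the inferred latent states z to the ground-truth velocities and report R². The arrays here are random placeholders for the real inferred latents and simulator velocities.

```python
# Linear regression probe from latent states to unobserved velocities, with per-dimension R^2.
import numpy as np

rng = np.random.default_rng(0)
N, nz, nv = 1000, 16, 6
Z = rng.normal(size=(N, nz))                                     # inferred latent states (placeholder)
V = Z[:, :nv] @ rng.normal(size=(nv, nv)) + 0.1 * rng.normal(size=(N, nv))  # ground-truth velocities (placeholder)

Z1 = np.concatenate([Z, np.ones((N, 1))], axis=1)                # add bias term
W, *_ = np.linalg.lstsq(Z1, V, rcond=None)
resid = V - Z1 @ W
r2 = 1 - resid.var(axis=0) / V.var(axis=0)                       # per-dimension R^2
print(r2.mean())
```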
Our focus shifts now to our switching variables which we expect to encode interactions with walls. We provide a visual confirmation of that in figure 3 where we see switching variables encoding all space where there is no interaction in the next time step, and variables which encode walls, distinguishing between vertical and horizontal ones. In figure 3d one can see show that if the choice of locally linear transition is treated deterministically, we don’t learn global features of the same kind. To confirm our visual inspection, we train a simple decision tree based on latent space s in order to predict interaction with a wall. Here, we achieve an F1 score of 0.46. It is difficult to say what a good value should look like as collisions with low velocity are virtually indistinguishable from no collision. We compare our prediction quality to several other methods in table 1 where we outperform all of our chosen baselines. Also, modeling switching variables by a Normal distribution outperforms the Concrete distribution in all of our experiments. Aside from known practical issues with training a discrete variable via backpropagation, we explore one reason why that may be in section 5.4, which is the greater susceptibility to the scale of temporal discretization. We provide plots of predicted trajectories in appendix D. Transitioning multiple agents with a single transition matrix comes with scalability issues with regards to switching dynamics which we explore further in appendix C. 5.2 REACHER We then evaluate our model on the Roboschool reacher environment. To make things more interesting, we learn only on partial observations, removing time derivative information (velocities), leaving us with just the positions or angles of various joints as observations. Table 1 shows a comparison of various methods on predicting the next couple of time steps. One critical point is the possible collision2 between lower and upper joint which is one we’d like our model to capture. We again learn a linear classifier based on latent space s to see if this is successfully encoded and reach an F1 score of 0.46. 5.3 BALL IN A BOX ON IMAGE DATA Finally, we evaluate our method on high-dimensional image observations using the single bouncing ball environment used by Fraccaro et al. (2017). They simulated 5000 sequences of 20 time steps each of a ball moving in a two-dimensional box, where each video frame is a 32× 32 binary image. There are no forces applied to the ball, except for the fully elastic collisions with the walls. Initial position and velocity are randomly sampled. 2We roughly identify a collision to be the point where the lower joint decelerates by over a fixed value of 2. In figure 5a we compare our model to both the smoothed and generative version of the KVAE. The smoothed version receives the final state of the trajectory after the n predicted steps which is fed into the smoothing capability of the KVAE. One can see that our model learns a better transition model, even outperforming the smoothed KVAE for longer sequences. For short sequences, KVAE performs better which highlights the value of it disentangling the latent space into separate object and dynamics representation. A sample trajectory is plotted in figure 4. 5.4 SUSCEPTIBILITY TO THE SCALE OF TEMPORAL DISCRETIZATION In this section, we’d like to explore how the choice of ∆t when discretizing a system influences our results. 
In particular, we’d expect our model with discrete (concrete) switching latent variables to be more susceptible to it than when modeled by a continuous distribution. This is because in the latter case the switching variables can scale the various matrices more freely, while in the former scaling up one system necessitates scaling down another. For empirical comparison, we go back to our custom maze environment (this time with only one agent as this is not pertinent to our question at hand) and learn the dynamics on various discretization scales. Then we compare the absolute error’s growth for both approaches in figure 5b which supports our hypothesis. While the discrete approximation even outperforms for small ∆t, there is a point where it rapidly becomes worse and gets overtaken by the continuous approximation. This suggests that ∆t was simply chosen to be too large in both the reacher and the ball in a box with image observations experiment. 6 DISCUSSION We want to emphasize some subtle differences to previously proposed architectures that make an empirical difference, in particular for the case when st is chosen to be continuous. In Watter et al. (2015) and Karl et al. (2017a), the latent space is already used to draw transition matrices, however they do not extract features such as walls or joint constraints. There are a few key differences from our approach. First, our latent switching variables st are only involved in predicting the current observation xt through the transition selection process. The likelihood model therefore doesn’t need to learn to ignore some input dimensions which are only helpful for reconstructing future observations but not the current one. There is also a clearer restriction on how st and zt may interact: st may now only influence zt by determining the dynamics, while previously zt influenced both the choice of transition function as well as acted inside the transition. These two opposing roles lead to conflicting gradients as to what should be improved. Furthermore, the learning signal for st is rather weak so that scaling down the KL-regularization was necessary to detect good features. Lastly, a (locally) linear transition may not be a good fit for variables determining dynamics as such variables may change very abruptly. 7 CONCLUSION We have shown that our construction of using switching variables encourages learning a richer and more interpretable latent space. In turn, the richer representation led to an improvement of simulation accuracy in various tasks. In the future, we’d like to look at other ways to approximate the discrete switching variables and exploit this approach for model-based control on real hardware systems. Furthermore, addressing the open problem of disentangling latent spaces is essential to fitting simple dynamics and would lead to significant improvements of this approach. A LOWER BOUND DERIVATION For brevity we omit conditioning on control inputs u1:T . log p(xT ) = log ∫ z1:T ∫ s1:T qφ(s1:T , z1:T | x1:T ) pθ(x1:T | z1:T )pθ(z1:T , s1:T ) qφ(s1:T , z1:T | x1:T ) ≥ ∫ z1:T ∫ s1:T qφ(s1:T , z1:T | x1:T ) log pθ(x1:T | z1:T )pθ(z1:T , s1:T ) qφ(s1:T , z1:T | x1:T ) = T∑ t=1 Est [Ezt [p(xt | zt, st)]]− DKL(q(z1:T , s1:T | x1:T ) || p(z1:T , s1:T )) A.1 FACTORIZATION OF THE KL DIVERGENCE The dependencies on data xT and uT as well as parameters φ and θ are omitted in the following for convenience. DKL(q(z1, s2, . . . , sT , zT ) || p(z1, s2, . . . 
, sT , zT )) (Factorization of the variational approximation) = ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) log q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) p(z1, s2, . . . , sT , zT ) (Factorization of the prior) = ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) log q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) p(z1)p(s2 | z1) . . . p(sT | zT−1, sT−1)p(zT | zT−1, sT ) (Expanding the logarithm by the product rule) = ∫ z1 q(z1) log q(z1) p(z1) + ∫ z1 ∫ s1 q(z1)q(s1 | z1) log q(s1 | z1) p(s1 | z1) + T∑ t=2 ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(zT | zT−1, sT ) log q(zt | zt−1, st) p(zt | zt−1, st) + T∑ t=3 ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(zT | zT−1, sT ) log q(st | zt−1, st−1) p(st | zt−1, st−1) (Ignoring constants) = DKL(q(z1) || p(z1)) + Ez1∼q(z1)[DKL(q(s2 | z1) || p(s2 | z1))] + T−1∑ t=2 Est,zt−1 [DKL(q(zt | zt−1, st) || p(zt | zt−1, st))] + T−1∑ t=3 Est−1,zt−1 [DKL(q(st | zt−1, st−1) || p(st | zt−1, st−1))] B DETAILS OF THE EXPERIMENTAL SETUP B.1.1 ROBOSCHOOL REACHER To generate data, we follow a Uniform distribution U ∼ [−1, 1] as the exploration policy. Before we record data, we take 20 warm-up steps in the environment to randomize our starting state. We take the data as is without any other preprocessing. B.1.2 MULTI AGENT MAZE Observations are normalized to be in [−1, 1]. Both position and velocity is randomized for the starting state. We again follow a Uniform distribution U ∼ [−1, 1] as the exploration policy. B.2 TRAINING Overall, training the Concrete distribution has given us the biggest challenge as it was very susceptible to various hyperparameters. We made use of the fact that we can use a different temperature for the prior and approximate posterior (Maddison et al., 2017) and we do independent hyperparameter search over both. For us, the best values were 0.75 for the posterior and 2 for the prior. Additionally, we employ an exponential annealing scheme for the temperature hyperparameter of the Concrete distribution. This leads to a more uniform combination of base matrices early in training which has two desirable effects. First, all matrices are scaled to a similar magnitude, making initialization less critical. Second, the model initially tries to fit a globally linear model, leading to a good starting state for optimization. We also tried increasing the number of samples taken (up to 100) to approximate the KL between the Concrete distributions, however we have not observed an improvement of performance. We therefore restrict ourselves to 10 samples for all experiments. In all experiments, we train everything end-to-end with the ADAM optimizer.(Kingma & Ba, 2015) We start with learning rate of 5e−4 and use an exponential decay schedule with rate 0.97 every 2000 iterations. B.3 NETWORK ARCHITECTURE For most networks, we use MLPs implemented as residual nets (He et al., 2016) with ReLU activations. Networks used for the reacher and maze experiments. • qmeas(zt | ·): MLP consisting of two residual blocks with 256 neurons each. We only condition on the current observation xt although we could condition on the entire sequence. This decision was taken based on empirical results. • qtrans(zt | ·): In the case of Concrete random variables, we just combine the base matrices and apply the transition dynamics to zt−1. For the Normal case, the combination of matrices is preceded by a linear combination with softmax activation. 
(see equation 14) • qmeas(st | ·): is implemented by a backward LSTM with 256 hidden units. We reuse the preprocessing of qmeas(zt | xt) and take the last hidden layer of that network as the input to the LSTM. • qtrans(st | ·): MLP consisting of one residual block with 256 neurons. • qinitial(w | ·): MLP consisting of two residual block with 256 neurons optionally followed by a backward LSTM. We only condition on the first 3 or 4 observations for our experiments. • qinitial(s2): The first switching variable in the sequence has no predecessor. We there- fore require a replacement for qtrans(st | ·) in the first time step, which we achieve by independently parameterizing another MLP. • p(xt | zt): MLP consisting of two residual block with 256 neurons. • p(zt | ·): Shared parameters with qtrans(zt | ·). • p(st | ·): Shared parameters with qtrans(st | ·). We use the same architecture for the image ball in a box experiment, however we increase number of neurons of qmeas(zt | ·) to 1024. B.4 HYPERPARAMETERS C ON SCALING ISSUES OF SWITCHING LINEAR DYNAMICAL SYSTEMS Let’s consider a simple representation of a ball in a rectangular box where its state is represented by its position and velocity. Given a small enough ∆t, we can approximate the dynamics decently by just 3 systems: no interaction with the wall, interaction with a vertical or horizontal wall (ignoring the corner case of interacting with two walls at the same time). Now consider the growth of required base systems if we increase the number of balls in the box (even if these balls cannot interact with each other). We would require a system for all combinations of a single ball’s possible states: 32. This will grow exponentially with the number of balls in the environment. One way to alleviate this problem that requires only a linear growth in base systems is to independently turn individual systems on and off and let the resulting system the sum of all activated systems. A base system may then represent solely the transition for a single ball being in specific state, while the complete system is then a combination ofN such systems whereN is the number of balls. Practically, this can be achieved by replacing the softmax by a sigmoid activation function or by replacing the categorical variable s of dimension M by M Bernoulli variables indicating whether a single system is active or not. We do this for our multiple agents in a maze environment. Theoretically, a preferred approach would be to disentangle multiple systems (like balls, joints) and apply transitions only to their respective states. This, however, would require a proper and unsupervised separation of (mostly) independent components. We defer this to future work. D FURTHER RESULTS D.1 3-AGENT MAZE D.2 IMAGE BALL IN A BOX
1. What is the main contribution of the paper regarding SLDS + neural network observation models? 2. What are the strengths and weaknesses of the proposed inference procedure compared to other works, specifically the SLDS-VAE model in Johnson et al? 3. How does the reviewer assess the complexity of the inference procedure and its room for improvement in the paper? 4. What are the similarities and differences between the proposed latent SLDS generative models and the SLDS-VAE model? 5. How does the reviewer suggest the authors better motivate the use of RNNs over message-passing ideas in Johnson et al? 6. What are the minor comments regarding the structure of the paper and the choice of hyperparameters?
Review
Review Thank you for the detailed reply and for updating the draft The authors have added in a sentence about the SLDS-VAE from Johnson et al and I agree that reproducing their results from the open source code is difficult. I think my concerns about similarities have been sufficiently addressed. My main concerns about the paper still stem from the complexity of the inference procedure. Although the inference section is still a bit dense, I think the restructuring helped quite a bit. I am changing my score to a 6 to reflect the authors' efforts to improve the clarity of the paper. The discussion in the comments has been helpful in better understanding the paper but there is still room for improvement in the paper itself. ============= Summary: The authors present an SLDS + neural network observation model for the purpose of fitting complex dynamical systems. They introduce an RNN-based inference procedure and evaluate how well this model fits various systems. (I’ll refer to the paper as SLDVBF for the rest of the review.) Writing: The paper is well-written and explains its ideas clearly Major Comments: There are many similarities between SLDVBF and the SLDS-VAE model in Johnson et al [1] and I think the authors need to address them, or at least properly compare the models and justify their choices: - The first is that the proposed latent SLDS generative models are very similar: both papers connect an SLDS with a neural network observation model. Johnson et al [1] present a slightly simpler SLDS (with no edges from z_t -> s_{t + 1} or s_t -> x_t) whereas LDVBF uses the “augmented SLDS” from Barber et al. It is unclear what exactly z_t -> s_{t + 1} is in the LDVBF model, as there is no stated form for p(s_t | s_{t -1}, z_{t - 1}). - When performing inference, Johnson et al use a recognition network that outputs potentials used for Kalman filtering for z_t and then do conjugate message passing for s_t. I see this as a simpler alternative to the inference algorithm proposed in SLDVBF. SLDVBF proposes relaxing the discrete random variables using Concrete distributions and using LSTMs to output potentials used in computing variational posteriors. There are few additional tricks used, such as having these networks output parameters that gate potentials from other sources. The authors state that this strategy allows reconstruction signal to backpropagate through transitions, but Johnson et al accomplish this (in theory) by backpropagating through the message passing fixed-point iteration itself. I think the authors need to better motivate the use of RNNs over the message-passing ideas presented in Johnson et al. - Although SLDVBF provides more experiments evaluating the SLDS than Johnson, there is an overlap. Johnson et al successfully simulates dynamics in toy image systems in an image-based ball-bouncing task (in 1d, not 2d). I find that the results from SLDVBF, on their own, are not quite convincing enough to distinguish their methods from those from Johnson et al and a direct comparison is necessary. Despite these similarities, I think this paper is a step in the right direction, though it needs to far more to differentiate it from Johnson et al. The paper draws on many ideas from recent literature for inference, and incorporating these ideas is a good start. Minor Comments: - Structurally, I found it odd that the authors present the inference algorithm before fully defining the generative model. 
I think it would be clearer if the authors provided a clear description of the model before describing variational approximations and inference strategies. - The authors do not justify setting $\beta = 0.1$ when training the model. Is there a particular reason you need to downweight the KL term as opposed to annealing? [1] Johnson, Matthew, et al. "Composing graphical models with neural networks for structured representations and fast inference." Advances in neural information processing systems. 2016.
ICLR
Title Switching Linear Dynamics for Variational Bayes Filtering Abstract System identification of complex and nonlinear systems is a central problem for model predictive control and model-based reinforcement learning. Despite their complexity, such systems can often be approximated well by a set of linear dynamical systems if broken into appropriate subsequences. This mechanism not only helps us find good approximations of dynamics, but also gives us deeper insight into the underlying system. Leveraging Bayesian inference and Variational Autoencoders, we show how to learn a richer and more meaningful state space, e.g. encoding joint constraints and collisions with walls in a maze, from partial and high-dimensional observations. This representation translates into a gain of accuracy of the learned dynamics which we showcase on various simulated tasks. 1 INTRODUCTION Learning dynamics from raw data (also known as system identification) is a key component of model predictive control and model-based reinforcement learning. Problematically, environments of interest often give rise to very complex and highly nonlinear dynamics which are seemingly difficult to approximate. However, switching linear dynamical systems (SLDS) approaches claim that those environments can often be broken down into simpler units made up of areas of equal and linear dynamics (Ackerson & Fu, 1970; Chang & Athans, 1978). Not only are those approaches capable of good predictive performance, which often is the sole goal of learning a system’s dynamics, they also encode valuable information into so called switching variables which determine the dynamics of the next transition. For example, when looking at the movement of an arm, one is intuitively aware of certain restrictions of possible movements, e.g. constraints to the movement due to joint constraints or obstacles. The knowledge is present without the need to simulate; it’s explicit. Exactly this kind of information will be encoded when successfully learning switching dynamics. Our goal in this work will therefore entail the search for richer representations in the form of latent state space models which encode knowledge about the underlying system dynamics. In turn, we expect this to improve the accuracy of our simulation as well. Such a representation alone could then be used in a reinforcement learning approach that possibly only takes advantage of the learned latent features but not necessarily its learned dynamics. To learn richer representations, we identify one common problem with prevalent recurrent Variational Autoencoder models (Karl et al., 2017a; Krishnan et al., 2015; Chung et al., 2015; Fraccaro et al., 2016): the non-probabilistic treatment of the transition dynamics often modeled by a powerful nonlinear function approximator. From the history of the Autoencoder to the Variational Autoencoder, we know that in order to detect features in an unsupervised manner, probabilistic treatment of the latent space is paramount. As our starting point, we will build on previously proposed approaches by Krishnan et al. (2017) and Karl et al. (2017a). The latter already made use of locally linear dynamics, but only in a deterministic fashion. We extend their approaches by a stochastic switching LDS model and show that such treatment is vital for learning richer representations and simulation accuracy. 
2 BACKGROUND We consider discretized time-series data consisting of continuous observations xt ∈ X ⊂ Rnx and control inputs ut ∈ U ⊂ Rnu that we would like to model by corresponding latent states zt ∈ Z ⊂ Rnz . We’ll denote sequences of variables by x1:T = (x1, x2, ..., xT ). 2.1 SWITCHING LINEAR DYNAMICAL SYSTEMS Switching Linear Dynamical System models (SLDS) enable us to model nonlinear time series data by splitting it into sequences of linear dynamical models. At each time t = 1, 2, ..., T , a discrete switch variable st ∈ 1, ...,M chooses of a set LDSs a system which is to be used to transform our continuous latent state zt to the next time step (Barber, 2012). zt = A(st)zt−1 +B(st)ut−1 + (st) (st) ∼ N (0, Q(st)) xt = H(st)zt + η(st) η(st) ∼ N (0, R(st)) (1) Here A ∈ Rnz×nz is the state matrix, B ∈ Rnz×nu control matrix, the transition noise with covariance matrix Q and η the emission/sensor noise with covariance matrix R. Finally, the observation matrix H ∈ Rnx×nz defines a linear mapping from latent to observation space which we will replace by a nonlinear transformation parameterized by a neural net. These equations imply the following joint distribution: p(x1:T , z1:T , s1:T | u1:T ) = T∏ t=1 p(xt | zt) p(zt | zt−1, ut−1, st) p(st | zt−1, ut−1, st−1) (2) with p(z1 | z0, u0, s1) = p(z1) being the initial state distribution. The corresponding graphical model is shown in figure 1a. 2.2 STOCHASTIC GRADIENT VARIATIONAL BAYES p(x) = ∫ p(x, z) dz = ∫ p(x | z)p(z) dz (3) Given the simple graphical model in equation (3), Kingma & Welling (2014) and Rezende et al. (2014) introduced the Variational Autoencoder (VAE) which overcomes the intractability of posterior inference of q(z | x) by maximizing the evidence lower bound (ELBO) of the model log-likelihood. LELBO(x; θ, φ) = Eqφ(z|x)[ln pθ(x | z)]− DKL(qφ(z | x) || p(z)) ≤ log p(x) (4) Their main innovation was to approximate the intractable posterior distribution by a recognition network qφ(z|x) from which they can sample via the reparameterization trick to allow for stochastic backpropagation through both the recognition and generative model at once. Assuming that the latent state is normally distributed, a simple transformation allows us to obtain a Monte Carlo gradient estimate of Eqφ(z|x) [ln pθ(x|z)] w.r.t. to φ. Given that z ∼ N (µ, σ2), we can generate samples by drawing from an auxiliary variable ∼ N (0, 1) and applying the deterministic and differentiable transformation z = µ+ σ . 2.3 THE CONCRETE DISTRIBUTION One simple and efficient way to obtain samples d from a k-dimensional categorical distribution with class probabilities α is the Gumbel-Max trick: d = one_hot (argmax[gi + logαi]) , with g1, . . . , gk ∼ Gumbel(0, 1) (5) However, since the derivative of the argmax is 0 everywhere except at the boundary of state changes, where it is undefined, we can’t learn a parameterization by backpropagation. The Gumbel-Softmax trick approximates the argmax by a softmax which gives us a probability vector (Maddison et al., 2017; Jang et al., 2017). We can then draw samples via dk = exp((logαk + gk)/λ)∑n i=1 exp((logαi + gi)/λ) , with g1, . . . , gk ∼ Gumbel(0, 1) (6) This softmax computation approaches the discrete argmax as temperature λ → 0, for λ → ∞ it approaches a uniform distribution. 3 RELATED WORK Our model can be viewed as a Deep Kalman Filter (Krishnan et al., 2015) with structured inference (Krishnan et al., 2017). 
In our case, structured inference entails another stochastic variable model with parameter sharing inspired by Karl et al. (2017b) and Karl et al. (2017a) which pointed out the importance of backpropagating the reconstruction error through the transition. We are different to a number of stochastic sequential models like Bayer & Osendorfer (2014); Chung et al. (2015); Shabanian et al. (2017); Goyal et al. (2017) by directly transitioning the stochastic latent variable over time instead of having an RNN augmented by stochastic inputs. Fraccaro et al. (2016) has a transition over both a deterministic and a stochastic latent state sequence, wanting to combine the best of both worlds. Previous models (Watter et al., 2015; Karl et al., 2017a; Fraccaro et al., 2017) have already combined locally linear models with recurrent Variational Autoencoders, however they provide a weaker structural incentive for learning latent variables determining the transition function. Van Steenkiste et al. (2018) approach a similar multi bouncing ball problem (see section 5.1) by first distributing the representation of different balls into their own entities without supervision and then structurally hardwiring a transition function with interactions based on an attention mechanism. Recurrent switching linear dynamical systems (Linderman et al., 2016) uses message passing for approximate inference, but has restricted itself to low-dimensional observations and a multi-stage training process. Johnson et al. (2016) propose a similar model to ours but combine message passing for discrete switching variables with a neural network encoder for observations learned by stochastic backpropagation. Tackling the problem of propagating state uncertainty over time, various combinations of neural networks for inference and Gaussian processes for transition dynamics have been proposed (Eleftheriadis et al., 2017; Doerr et al., 2018). However, these models have not been demonstrated to work with high-dimensional observation spaces like images. One feature a switching LDS model may learn are interactions which have recently been approached by employing Graph Neural Networks (Battaglia et al., 2016; Kipf et al., 2018). These methods are similar in that they predict edges which encode interactions between components of the state space (nodes). 4 PROPOSED APPROACH Our goal is to fit a series of continuous state z1:T and switching variables s2:T to a given sequence of observations x1:T . We assume a nonlinear mapping between observations and latent space which we generally approximate by neural networks, apart from the transition which is modeled by a locally linear function. Our generative model is shown in figure 1b an our inference model in figure 2a. 4.1 GENERATIVE MODEL Our generative model for a single xt is described by p(xt) = ∫ s≤t ∫ z≤t p(xt | zt)p(zt | zt−1, st, ut−1)p(st | st−1, zt−1, ut−1)p(zt−1, st−1) (7) which is close to the one of the original SLDS model (see figure 1a). Latent states zt are continuous and represent the state of the system while states st are the switching variables determining the transition. We approximate the discrete switching variables by a continuous relaxation, namely the Concrete distribution.1 Differently to the original model, we do not condition the likelihood of the current observation pθ(xt | zt) directly on the switching variables. This limits the influence of the switching variables to choosing a proper transition dynamic for the continuous latent space. 
The likelihood model is parameterized by a neural network with either a Gaussian or a Bernoulli distribution as output depending on the data. There is a transition on both the continuous states zt and the discrete latent states st. For the continuous state transition p(zt | zt−1, st, ut−1) we follow (1) and maintain a set of M base matrices {(A^(i), B^(i), Q^(i)) | i = 1, ..., M} as our linear dynamical systems to choose from. For the transition on discrete latent states p(st | st−1, zt−1, ut−1), we usually require the learning of a Markov transition matrix. However, since we approximate our discrete switching variables by a continuous relaxation, we can parameterize this transition by a neural network. Therefore, our entire generative model can be learned end-to-end by (stochastic) backpropagation. Finally, the resulting dynamics matrices are computed through a linear combination of the base matrices: A(st) = Σ_{i=1}^{M} st^(i) A^(i), B(st) = Σ_{i=1}^{M} st^(i) B^(i), Q(st) = Σ_{i=1}^{M} st^(i) Q^(i) (8) Both transition models – the continuous state transition pθ(zt | zt−1, st, ut−1) and the Concrete switching-variable transition pθ(st | st−1, zt−1, ut−1) – are shared with the inference model, which is key for good performance. pθ(zt | zt−1, st, ut−1) = N(µ, σ²) where [µ, σ²] = fθ(zt−1, st, ut−1); pθ(st | st−1, zt−1, ut−1) = Concrete(α, λ_prior) where α = gθ(zt−1, st−1, ut−1) (9) 4.2 INFERENCE 4.2.1 STRUCTURED INFERENCE OF CONTINUOUS LATENT STATE We split our inference model qφ(zt | zt−1, st, x≥t, u≥t−1) into two parts: 1) transition model qtrans(zt | zt−1, st, ut−1) and 2) inverse measurement model qmeas(zt | x≥t, u≥t), as previously proposed in Karl et al. (2017b). This split allows us to reuse our generative transition model in place of qtrans(zt | zt−1, st, ut−1). This sharing of variables is essential for good performance as it forces the reconstruction error to be backpropagated through the transition model. For practical reasons, we only share the computation of the transition mean µtrans but not the variance σ²trans between inference and generative model. Both parts, qmeas and qtrans, will give us independent predictions about the new state zt which will be combined in a manner akin to a Bayesian update in a Kalman Filter. qφ(zt | zt−1, st, x≥t, u≥t−1) ∝ qmeas(zt | x≥t, u≥t) × qtrans(zt | zt−1, st, ut−1) = N(µq, σ²q); qmeas(zt | x≥t, u≥t) = N(µmeas, σ²meas) where [µmeas, σ²meas] = hφ(x≥t, u≥t); qtrans(zt | zt−1, st, ut−1) = N(µtrans, σ²trans) where [µtrans, σ²trans] = fθ(zt−1, st, ut−1) (10) The densities of qmeas and qtrans are multiplied, resulting in another Gaussian density: µq = (µtrans σ²meas + µmeas σ²trans) / (σ²meas + σ²trans), σ²q = (σ²meas σ²trans) / (σ²meas + σ²trans) (11) This update scheme is highlighted in figure 2b. We found empirically that conditioning the inverse measurement model qmeas(zt | x≥t, u≥t) solely on the current observation xt instead of the entire remaining trajectory leads to better results. We hypothesize that the recurrent model needlessly introduces very high-dimensional and complicated dynamics which are harder to approximate with our locally linear transition model. For the initial state z1 we do not have a conditional prior from the transition model as in the rest of the sequence. Other methods (Krishnan et al., 2015) have used a standard normal prior; however, this is not a good fit.
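Returning briefly to the fusion step before dealing with the initial state: the precision-weighted update in (10)–(11) is just a product of two Gaussian densities, which the short helper below makes explicit. It is a sketch for intuition only (NumPy, diagonal variances, illustrative numbers), not the paper's implementation.

```python
import numpy as np

def fuse_gaussians(mu_meas, var_meas, mu_trans, var_trans):
    """Combine measurement and transition estimates of z_t as in eq. (11)."""
    var_q = (var_meas * var_trans) / (var_meas + var_trans)
    mu_q = (mu_trans * var_meas + mu_meas * var_trans) / (var_meas + var_trans)
    return mu_q, var_q

# Illustrative numbers: a confident transition prediction pulls the posterior towards itself
mu_q, var_q = fuse_gaussians(mu_meas=np.array([1.0]), var_meas=np.array([1.0]),
                             mu_trans=np.array([0.0]), var_trans=np.array([0.1]))
print(mu_q, var_q)   # mean close to 0, variance smaller than either input
```

The resulting variance is always smaller than both inputs, mirroring the measurement update of a Kalman filter.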
For the initial state z1, we therefore decided, instead of predicting z1 directly, to predict an auxiliary variable w that is then mapped deterministically to a starting state z1. (¹ As an ablation study, we will compare this to modeling switching variables by a Gaussian distribution.) A standard Gaussian prior is then applied to w. Alternatively, we could specify a more complex or learned prior for the initial state like the VampPrior (Tomczak & Welling, 2017). Empirically, this has led to worse results. qφ(w | x1:T, u1:T) = N(w; µw, σ²w) where [µw, σ²w] = iφ(x1:T, u1:T), z1 = fφ(w) (12) While we could condition on the entire sequence, we restrict it to just the first couple of observations. 4.2.2 INFERENCE OF SWITCHING VARIABLES Following Maddison et al. (2017) and Jang et al. (2017), we can reparameterize a discrete latent variable with the Gumbel-softmax trick. Again, we split our inference network qφ(st | st−1, zt−1, x≥t, u≥t−1) in an identical fashion into two components: 1) transition model qtrans(st | st−1, zt−1, ut−1) and 2) inverse measurement model qmeas(st | x≥t, u≥t). The transition model is again shared with the generative model and is implemented via a neural network as we potentially require quick changes to chosen dynamics. The inverse measurement model is parametrized by a backward LSTM. However, for the case of Concrete variables, we cannot do the same Gauss multiplication as in the previous case. Therefore, we let each network predict the logits of a Concrete distribution and our inverse measurement model qφ(st | x≥t, u≥t) produces an additional vector γ, which determines the value of a gate deciding how the two predictions are to be weighted: qφ(st | st−1, zt−1, x≥t, u≥t−1) = Concrete(α, λ_posterior) with α = γ αtrans + (1 − γ) αmeas; qmeas(st | x≥t, u≥t) = Concrete(αmeas, λ_posterior) where [αmeas, γ] = kφ(x≥t, u≥t); qtrans(st | st−1, zt−1, ut−1) = Concrete(αtrans, λ_prior) where αtrans = gθ(zt−1, st−1, ut−1) (13) The temperatures λ_posterior and λ_prior are set as hyperparameters and can be set differently for the prior and approximate posterior. The gating mechanism gives the model the option to balance between prior and approximate posterior. If the prior is good enough to explain the next observation, γ will be pushed to 1, which ignores the measurement and minimizes the KL between prior and posterior by only propagating the prior. If the prior is not sufficient, information from the inverse measurement model can flow by decreasing γ and incurring a KL penalty. Since the Concrete distribution is a relaxation of the categorical, our sample will not be a one-hot vector, but a vector whose elements sum up to 1. We face two options here: we could take a categorical sample by choosing the linear system corresponding to the highest value in the sample (hard forward pass) and only use the relaxation for our backward pass. This, however, means that we will follow a biased gradient. Alternatively, we can use the relaxed version for our forward pass and aggregate the linear systems based on their corresponding weighting (see (8)). Here, we lose the discrete switching of linear systems, but maintain a valid lower bound. We note that the hard forward pass has led to worse results and focus on the soft forward pass for this paper. Lastly, we could go further away from the theory and instead treat the switching variables also as normally distributed. If this worked better than the approach with Concrete variables, it would highlight still existing optimization problems of discrete random variables. (A small illustrative sketch of the gated Concrete update in (13) follows before we continue with this Normal-distribution ablation.)
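The sketch is illustrative only (PyTorch): k_phi and g_theta stand in for the paper's inverse measurement and transition networks, the interface that returns γ alongside the measurement logits is our assumption, squashing γ through a sigmoid is likewise an assumption, and the posterior temperature of 0.75 is the value reported in Appendix B.2.

```python
import torch

def infer_switching(z_prev, s_prev, u_prev, x_future, k_phi, g_theta, temp_posterior=0.75):
    """Gated combination of transition and measurement logits for s_t, as in eq. (13)."""
    alpha_trans = g_theta(torch.cat([z_prev, s_prev, u_prev], dim=-1))   # transition (prior) logits
    alpha_meas, gamma_logit = k_phi(x_future)                            # measurement logits and gate
    gamma = torch.sigmoid(gamma_logit)                                   # gate value in (0, 1)
    alpha = gamma * alpha_trans + (1.0 - gamma) * alpha_meas             # convex combination of logits
    # Relaxed (Concrete) sample of the posterior switching variable (soft forward pass)
    g = -torch.log(-torch.log(torch.rand_like(alpha).clamp_(1e-10, 1.0)))
    s_t = torch.softmax((alpha + g) / temp_posterior, dim=-1)
    return s_t, alpha, alpha_trans
```

Pushing γ towards 1 makes the posterior logits coincide with the prior logits, which is how the model can avoid a KL penalty whenever the transition alone explains the data.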
As such, it will act as an ablation study for our model. The mixing coefficients for linear systems would then be determined by a linear combination of these latent variables: α = softmax(Wst + b) ∈ RM (14) Our inference scheme for normally distributed switching variables is then identical to the one described in the previous section. We compare both approaches throughout our experimental section. 4.3 TRAINING Our objective function is the commonly used evidence lower bound for our hierarchical model. Lθ,φ(x1:T | u1:T ) ≥ Eqφ(z1:T ,s1:T |x1:T )[log pθ(x1:T | z1:T , s1:T , u1:T )] − DKL(qφ(z1:T , s1:T | x1:T , u1:T ) || p(z1:T , s1:T | u1:T )) (15) We choose to factorize over time, so the loss for a single observation xt becomes: Lθ,φ(xt | u1:T ) = Eqφ(st|st−1,zt−1,x≥t,u≥t−1) [ Eqφ(zt|st,zt−1,x≥t,u≥t−1)[log pθ(xt | zt)] ] − Est−1 [ Ezt−1 [DKL(qφ(st | st−1, zt−1, x≥t, u≥t−1) || pθ(st | st−1, zt−1, ut−1))] ] − Ezt−1 [Est [DKL(qφ(zt | zt−1, st, x≥t, u≥t−1) || pθ(zt | zt−1, st, ut−1))]] (16) The full derivation can be found in appendix A. We learn the parameters of our model by backpropagation through time and we (generally) approximate the expectations with one sample by using the reparametrization trick. The exception is the KL between two Concrete random variables in which case we take 10 samples for the approximation. For the KL on the switching variables, we further introduce a scaling factor β < 1 (as first suggested in Higgins et al. (2016), although they suggested increasing the KL term) to down weigh its importance. More details on the training procedure can be found in appendix B.2. 5 EXPERIMENTS In this section, we evaluate our approach on a diverse set of physics and robotics simulations based on partially observable system states or high-dimensional images as observations. We show that our model outperforms previous models and that our switching variables learn meaningful representations. Models we compare to are Deep Variational Bayes Filter (DVBF) (Karl et al., 2017a), DVBF Fusion (Karl et al., 2017b) (called fusion as they do the same Gauss multiplication in the inference network) which is closest to our model but doesn’t have a stochastic treatment of the transition, the Kalman VAE (KVAE) (Fraccaro et al., 2017) and a LSTM (Hochreiter & Schmidhuber, 1997). (a) Multi agent maze environment. (b) Variable encoding free space for agent 2. (c) Variable encoding walls for agent 1. (d) System activation for deterministic transition. Figure 3: Figures (b) and (c) depict an agent’s position colored by the average value of a single latent variable s marginalized over all control inputs u and velocities. Figure (d) highlights a representative activation for a single transition system for the deterministic treatment of the transition dynamics. It doesn’t generalize to the entire maze and stays fairly active in proximity to the wall. 5.1 MULTIPLE BOUNCING BALLS IN A MAZE Our first experiment is a custom 3-agent maze environment simulated with Box2D. Each agent is fully described by its x and y coordinates and its current velocity and has the capability to accelerate in either direction. We learn in a partially observable setting and limit the observations to the agents’ positions, therefore x ∈ R6 while the true state space is in R12 and u ∈ R6. First, we train a linear regression model on the latent space z to see if we have recovered a linear encoding of the unobserved velocities. We achieve an R2 score of 0.92 averaged over all agents and velocity directions. 
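A probe of this kind is straightforward to set up; the sketch below shows the general recipe with scikit-learn, assuming the inferred latents and the ground-truth velocities have already been collected into arrays (file names, shapes, and the split are illustrative, not the paper's code).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# latents: (N, n_z) inferred states z; velocities: (N, 6) ground-truth velocities hidden from the model
latents = np.load("latents.npy")
velocities = np.load("velocities.npy")

z_tr, z_te, v_tr, v_te = train_test_split(latents, velocities, test_size=0.2, random_state=0)
probe = LinearRegression().fit(z_tr, v_tr)
print("R2 per dimension:", r2_score(v_te, probe.predict(z_te), multioutput="raw_values"))
print("average R2:", r2_score(v_te, probe.predict(z_te)))
```

The wall-interaction probe discussed next follows the same recipe, with a DecisionTreeClassifier on the switching variables s and an F1 score in place of R².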
Our focus shifts now to our switching variables, which we expect to encode interactions with walls. We provide a visual confirmation of that in figure 3, where we see switching variables encoding all space where there is no interaction in the next time step, and variables which encode walls, distinguishing between vertical and horizontal ones. In figure 3d one can see that if the choice of locally linear transition is treated deterministically, we don’t learn global features of the same kind. To confirm our visual inspection, we train a simple decision tree based on latent space s in order to predict interaction with a wall. Here, we achieve an F1 score of 0.46. It is difficult to say what a good value should look like as collisions with low velocity are virtually indistinguishable from no collision. We compare our prediction quality to several other methods in table 1, where we outperform all of our chosen baselines. Also, modeling switching variables by a Normal distribution outperforms the Concrete distribution in all of our experiments. Aside from known practical issues with training a discrete variable via backpropagation, we explore one reason why that may be in section 5.4, which is the greater susceptibility to the scale of temporal discretization. We provide plots of predicted trajectories in appendix D. Transitioning multiple agents with a single transition matrix comes with scalability issues with regard to switching dynamics which we explore further in appendix C. 5.2 REACHER We then evaluate our model on the Roboschool reacher environment. To make things more interesting, we learn only on partial observations, removing time derivative information (velocities), leaving us with just the positions or angles of various joints as observations. Table 1 shows a comparison of various methods on predicting the next couple of time steps. One critical point is the possible collision² between the lower and upper joint, which is one we’d like our model to capture. (² We roughly identify a collision to be the point where the lower joint decelerates by more than a fixed value of 2.) We again learn a linear classifier based on latent space s to see if this is successfully encoded and reach an F1 score of 0.46. 5.3 BALL IN A BOX ON IMAGE DATA Finally, we evaluate our method on high-dimensional image observations using the single bouncing ball environment used by Fraccaro et al. (2017). They simulated 5000 sequences of 20 time steps each of a ball moving in a two-dimensional box, where each video frame is a 32 × 32 binary image. There are no forces applied to the ball, except for the fully elastic collisions with the walls. Initial position and velocity are randomly sampled. In figure 5a we compare our model to both the smoothed and generative version of the KVAE. The smoothed version receives the final state of the trajectory after the n predicted steps, which is fed into the smoothing capability of the KVAE. One can see that our model learns a better transition model, even outperforming the smoothed KVAE for longer sequences. For short sequences, KVAE performs better, which highlights the value of its disentangling of the latent space into separate object and dynamics representations. A sample trajectory is plotted in figure 4. 5.4 SUSCEPTIBILITY TO THE SCALE OF TEMPORAL DISCRETIZATION In this section, we’d like to explore how the choice of ∆t when discretizing a system influences our results.
In particular, we’d expect our model with discrete (concrete) switching latent variables to be more susceptible to it than when modeled by a continuous distribution. This is because in the latter case the switching variables can scale the various matrices more freely, while in the former scaling up one system necessitates scaling down another. For empirical comparison, we go back to our custom maze environment (this time with only one agent as this is not pertinent to our question at hand) and learn the dynamics on various discretization scales. Then we compare the absolute error’s growth for both approaches in figure 5b which supports our hypothesis. While the discrete approximation even outperforms for small ∆t, there is a point where it rapidly becomes worse and gets overtaken by the continuous approximation. This suggests that ∆t was simply chosen to be too large in both the reacher and the ball in a box with image observations experiment. 6 DISCUSSION We want to emphasize some subtle differences to previously proposed architectures that make an empirical difference, in particular for the case when st is chosen to be continuous. In Watter et al. (2015) and Karl et al. (2017a), the latent space is already used to draw transition matrices, however they do not extract features such as walls or joint constraints. There are a few key differences from our approach. First, our latent switching variables st are only involved in predicting the current observation xt through the transition selection process. The likelihood model therefore doesn’t need to learn to ignore some input dimensions which are only helpful for reconstructing future observations but not the current one. There is also a clearer restriction on how st and zt may interact: st may now only influence zt by determining the dynamics, while previously zt influenced both the choice of transition function as well as acted inside the transition. These two opposing roles lead to conflicting gradients as to what should be improved. Furthermore, the learning signal for st is rather weak so that scaling down the KL-regularization was necessary to detect good features. Lastly, a (locally) linear transition may not be a good fit for variables determining dynamics as such variables may change very abruptly. 7 CONCLUSION We have shown that our construction of using switching variables encourages learning a richer and more interpretable latent space. In turn, the richer representation led to an improvement of simulation accuracy in various tasks. In the future, we’d like to look at other ways to approximate the discrete switching variables and exploit this approach for model-based control on real hardware systems. Furthermore, addressing the open problem of disentangling latent spaces is essential to fitting simple dynamics and would lead to significant improvements of this approach. A LOWER BOUND DERIVATION For brevity we omit conditioning on control inputs u1:T . log p(xT ) = log ∫ z1:T ∫ s1:T qφ(s1:T , z1:T | x1:T ) pθ(x1:T | z1:T )pθ(z1:T , s1:T ) qφ(s1:T , z1:T | x1:T ) ≥ ∫ z1:T ∫ s1:T qφ(s1:T , z1:T | x1:T ) log pθ(x1:T | z1:T )pθ(z1:T , s1:T ) qφ(s1:T , z1:T | x1:T ) = T∑ t=1 Est [Ezt [p(xt | zt, st)]]− DKL(q(z1:T , s1:T | x1:T ) || p(z1:T , s1:T )) A.1 FACTORIZATION OF THE KL DIVERGENCE The dependencies on data xT and uT as well as parameters φ and θ are omitted in the following for convenience. DKL(q(z1, s2, . . . , sT , zT ) || p(z1, s2, . . . 
, sT , zT )) (Factorization of the variational approximation) = ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) log q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) p(z1, s2, . . . , sT , zT ) (Factorization of the prior) = ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) log q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) p(z1)p(s2 | z1) . . . p(sT | zT−1, sT−1)p(zT | zT−1, sT ) (Expanding the logarithm by the product rule) = ∫ z1 q(z1) log q(z1) p(z1) + ∫ z1 ∫ s1 q(z1)q(s1 | z1) log q(s1 | z1) p(s1 | z1) + T∑ t=2 ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(zT | zT−1, sT ) log q(zt | zt−1, st) p(zt | zt−1, st) + T∑ t=3 ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(zT | zT−1, sT ) log q(st | zt−1, st−1) p(st | zt−1, st−1) (Ignoring constants) = DKL(q(z1) || p(z1)) + Ez1∼q(z1)[DKL(q(s2 | z1) || p(s2 | z1))] + T−1∑ t=2 Est,zt−1 [DKL(q(zt | zt−1, st) || p(zt | zt−1, st))] + T−1∑ t=3 Est−1,zt−1 [DKL(q(st | zt−1, st−1) || p(st | zt−1, st−1))] B DETAILS OF THE EXPERIMENTAL SETUP B.1.1 ROBOSCHOOL REACHER To generate data, we follow a Uniform distribution U ∼ [−1, 1] as the exploration policy. Before we record data, we take 20 warm-up steps in the environment to randomize our starting state. We take the data as is without any other preprocessing. B.1.2 MULTI AGENT MAZE Observations are normalized to be in [−1, 1]. Both position and velocity is randomized for the starting state. We again follow a Uniform distribution U ∼ [−1, 1] as the exploration policy. B.2 TRAINING Overall, training the Concrete distribution has given us the biggest challenge as it was very susceptible to various hyperparameters. We made use of the fact that we can use a different temperature for the prior and approximate posterior (Maddison et al., 2017) and we do independent hyperparameter search over both. For us, the best values were 0.75 for the posterior and 2 for the prior. Additionally, we employ an exponential annealing scheme for the temperature hyperparameter of the Concrete distribution. This leads to a more uniform combination of base matrices early in training which has two desirable effects. First, all matrices are scaled to a similar magnitude, making initialization less critical. Second, the model initially tries to fit a globally linear model, leading to a good starting state for optimization. We also tried increasing the number of samples taken (up to 100) to approximate the KL between the Concrete distributions, however we have not observed an improvement of performance. We therefore restrict ourselves to 10 samples for all experiments. In all experiments, we train everything end-to-end with the ADAM optimizer.(Kingma & Ba, 2015) We start with learning rate of 5e−4 and use an exponential decay schedule with rate 0.97 every 2000 iterations. B.3 NETWORK ARCHITECTURE For most networks, we use MLPs implemented as residual nets (He et al., 2016) with ReLU activations. Networks used for the reacher and maze experiments. • qmeas(zt | ·): MLP consisting of two residual blocks with 256 neurons each. We only condition on the current observation xt although we could condition on the entire sequence. This decision was taken based on empirical results. • qtrans(zt | ·): In the case of Concrete random variables, we just combine the base matrices and apply the transition dynamics to zt−1. For the Normal case, the combination of matrices is preceded by a linear combination with softmax activation. 
(see equation 14) • qmeas(st | ·): implemented by a backward LSTM with 256 hidden units. We reuse the preprocessing of qmeas(zt | xt) and take the last hidden layer of that network as the input to the LSTM. • qtrans(st | ·): MLP consisting of one residual block with 256 neurons. • qinitial(w | ·): MLP consisting of two residual blocks with 256 neurons, optionally followed by a backward LSTM. We only condition on the first 3 or 4 observations for our experiments. • qinitial(s2): The first switching variable in the sequence has no predecessor. We therefore require a replacement for qtrans(st | ·) in the first time step, which we achieve by independently parameterizing another MLP. • p(xt | zt): MLP consisting of two residual blocks with 256 neurons. • p(zt | ·): Shared parameters with qtrans(zt | ·). • p(st | ·): Shared parameters with qtrans(st | ·). We use the same architecture for the image ball in a box experiment, however we increase the number of neurons of qmeas(zt | ·) to 1024. B.4 HYPERPARAMETERS C ON SCALING ISSUES OF SWITCHING LINEAR DYNAMICAL SYSTEMS Let’s consider a simple representation of a ball in a rectangular box where its state is represented by its position and velocity. Given a small enough ∆t, we can approximate the dynamics decently by just 3 systems: no interaction with a wall, interaction with a vertical wall, and interaction with a horizontal wall (ignoring the corner case of interacting with two walls at the same time). Now consider the growth of required base systems if we increase the number of balls in the box (even if these balls cannot interact with each other). We would require a system for all combinations of a single ball’s possible states: 3². This will grow exponentially with the number of balls in the environment. One way to alleviate this problem that requires only a linear growth in base systems is to independently turn individual systems on and off and let the resulting system be the sum of all activated systems. A base system may then represent solely the transition for a single ball being in a specific state, while the complete system is then a combination of N such systems where N is the number of balls. Practically, this can be achieved by replacing the softmax by a sigmoid activation function or by replacing the categorical variable s of dimension M by M Bernoulli variables indicating whether a single system is active or not. We do this for our multiple agents in a maze environment. Theoretically, a preferred approach would be to disentangle multiple systems (like balls, joints) and apply transitions only to their respective states. This, however, would require a proper and unsupervised separation of (mostly) independent components. We defer this to future work. (A small sketch contrasting the softmax and sigmoid combinations is given after the appendix outline below.) D FURTHER RESULTS D.1 3-AGENT MAZE D.2 IMAGE BALL IN A BOX
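Relating to Appendix C, the difference between a categorical (softmax) choice of system and independently gated (sigmoid/Bernoulli-style) systems is small in code. The sketch below is illustrative only; the function and tensor names are ours, not the paper's.

```python
import torch

def combine_systems(logits, base_A, independent=True):
    """Combine M base matrices either with softmax weights (one categorical choice)
    or with independent sigmoid gates, as suggested in Appendix C."""
    if independent:
        w = torch.sigmoid(logits)            # each base system switched on/off independently
    else:
        w = torch.softmax(logits, dim=-1)    # base systems compete for a single choice
    return torch.einsum("m,mij->ij", w, base_A)

# Illustrative usage: M = 3 base systems acting on a 4-dimensional latent state
base_A = torch.randn(3, 4, 4)
logits = torch.tensor([2.0, -3.0, 0.5])
A_categorical = combine_systems(logits, base_A, independent=False)
A_gated = combine_systems(logits, base_A, independent=True)
```

With sigmoid gates, adding another independently moving object only requires adding its own base systems rather than enumerating all joint combinations, which is the linear-versus-exponential growth argument made above.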
1. What is the main contribution of the paper, and how does it differ from previous works? 2. How well are the standard and proposed models described, and what are their strengths and weaknesses? 3. What are the issues with the variational inference procedure, and how does it relate to the posterior distribution? 4. What is the problem with equation (7), and how does it connect to the splitting of the a posteriori distribution? 5. How does the generative model work, and what are the difficulties with its definition? 6. Are the numerical and quantitative results convincing, and how do they compare to other methods? 7. Should the paper be accepted or rejected, and what changes would improve its quality?
Review
Review This paper proposes a new model for switching linear dynamical systems. The standard model and the proposed model are presented. Together with the inference procedure associated to the new model. This inference procedure is based on variational auto-encoders, which model the transition and measurement posterior distributions, which is exactly the methodological contribution of the manuscript. Experiments on three different tasks are reported, and qualitative and quantitative results (comparing with different state-of-the-art methods) are reported. The standard model is very well described, formally and graphically, except for the dynamic model of the switching variable, and its dependence on z_t-1. The proposed model has a clear graphical representation, but its formal counterpart is a bit more difficult to grasp, we need to reach 4.2 (after the inference procedure is discussed) to understand the main difference (the switching variable does not influence the observation model). Still, the dependency of the dynamics of s_t on z_t is not discussed. In my opinion, another issue is the discussion of the variational inference procedure, mainly because it is unclear what additional assumptions are made. This is because the procedure does not seem to derive from the a posteriori distribution (at least it is not presented like this). Sometimes we do not know if the authors are assuming further hypothesis or if there are typos in the equations. For instance (7) is quite problematic. Indeed, the starting point of (7) is the approximation of the a posteriori distribution q_phi(z_t|z_t-1,x_1:T,u_1:T), that is split into two parts, a transition model and an inverse measurement model. First, this split is neither well motivated nor justified: does it come from smartly using the Bayes and other probability rules? In particular, I do not understand how come, given that q_phi is not conditioned on s_t, the past measurements and control inputs can be discarded. Second, do the authors impose that this a posteriori probability is a Gaussian? Third, the variable s_t seems to be in and out at the authors discretion, which is not correct from a mathematical point of view, and critical since the interesting part of the model is exactly the existence of a switching variable and its relationship with the other latent/observed variables. Finally, if the posterior q_phi is conditioned to s_t (and I am sure it must), then the measurement model also has to be conditioned on s_t, which poses perhaps another inference problem. Equation (10) has the same problem, in the sense that we do not understand where does it derive from, why is the chosen split justified and why the convex sum of the two distributions is the appropriate way to merge the information of the inverse measurements and the transition model. Another difficulty is found in the generative model, when it is announced that the model uses M base matrices (but there are S possibilities for the switching variable). s_t(i) is not defined and the transition model for the switching variable is not defined. This part is difficult to understand and confusing. At the end, since we do not understand the basic assumptions of the model, it is very hard to grasp the contribution of the paper. In addition, the interpretation of the results is much harder, since we are missing an overall understanding of the proposed approach. 
The numerical and quantitative results demonstrate the ability of the approach to outperform the state-of-the-art (at least for the normal distribution and on the first two tasks). Due to the lack of discussion, motivation, justification and details of the proposed approach, I recommend this paper be rejected and resubmitted once all these concerns have been addressed.
ICLR
Title Switching Linear Dynamics for Variational Bayes Filtering Abstract System identification of complex and nonlinear systems is a central problem for model predictive control and model-based reinforcement learning. Despite their complexity, such systems can often be approximated well by a set of linear dynamical systems if broken into appropriate subsequences. This mechanism not only helps us find good approximations of dynamics, but also gives us deeper insight into the underlying system. Leveraging Bayesian inference and Variational Autoencoders, we show how to learn a richer and more meaningful state space, e.g. encoding joint constraints and collisions with walls in a maze, from partial and high-dimensional observations. This representation translates into a gain of accuracy of the learned dynamics which we showcase on various simulated tasks. 1 INTRODUCTION Learning dynamics from raw data (also known as system identification) is a key component of model predictive control and model-based reinforcement learning. Problematically, environments of interest often give rise to very complex and highly nonlinear dynamics which are seemingly difficult to approximate. However, switching linear dynamical systems (SLDS) approaches claim that those environments can often be broken down into simpler units made up of areas of equal and linear dynamics (Ackerson & Fu, 1970; Chang & Athans, 1978). Not only are those approaches capable of good predictive performance, which often is the sole goal of learning a system’s dynamics, they also encode valuable information into so called switching variables which determine the dynamics of the next transition. For example, when looking at the movement of an arm, one is intuitively aware of certain restrictions of possible movements, e.g. constraints to the movement due to joint constraints or obstacles. The knowledge is present without the need to simulate; it’s explicit. Exactly this kind of information will be encoded when successfully learning switching dynamics. Our goal in this work will therefore entail the search for richer representations in the form of latent state space models which encode knowledge about the underlying system dynamics. In turn, we expect this to improve the accuracy of our simulation as well. Such a representation alone could then be used in a reinforcement learning approach that possibly only takes advantage of the learned latent features but not necessarily its learned dynamics. To learn richer representations, we identify one common problem with prevalent recurrent Variational Autoencoder models (Karl et al., 2017a; Krishnan et al., 2015; Chung et al., 2015; Fraccaro et al., 2016): the non-probabilistic treatment of the transition dynamics often modeled by a powerful nonlinear function approximator. From the history of the Autoencoder to the Variational Autoencoder, we know that in order to detect features in an unsupervised manner, probabilistic treatment of the latent space is paramount. As our starting point, we will build on previously proposed approaches by Krishnan et al. (2017) and Karl et al. (2017a). The latter already made use of locally linear dynamics, but only in a deterministic fashion. We extend their approaches by a stochastic switching LDS model and show that such treatment is vital for learning richer representations and simulation accuracy. 
2 BACKGROUND We consider discretized time-series data consisting of continuous observations xt ∈ X ⊂ Rnx and control inputs ut ∈ U ⊂ Rnu that we would like to model by corresponding latent states zt ∈ Z ⊂ Rnz . We’ll denote sequences of variables by x1:T = (x1, x2, ..., xT ). 2.1 SWITCHING LINEAR DYNAMICAL SYSTEMS Switching Linear Dynamical System models (SLDS) enable us to model nonlinear time series data by splitting it into sequences of linear dynamical models. At each time t = 1, 2, ..., T , a discrete switch variable st ∈ 1, ...,M chooses of a set LDSs a system which is to be used to transform our continuous latent state zt to the next time step (Barber, 2012). zt = A(st)zt−1 +B(st)ut−1 + (st) (st) ∼ N (0, Q(st)) xt = H(st)zt + η(st) η(st) ∼ N (0, R(st)) (1) Here A ∈ Rnz×nz is the state matrix, B ∈ Rnz×nu control matrix, the transition noise with covariance matrix Q and η the emission/sensor noise with covariance matrix R. Finally, the observation matrix H ∈ Rnx×nz defines a linear mapping from latent to observation space which we will replace by a nonlinear transformation parameterized by a neural net. These equations imply the following joint distribution: p(x1:T , z1:T , s1:T | u1:T ) = T∏ t=1 p(xt | zt) p(zt | zt−1, ut−1, st) p(st | zt−1, ut−1, st−1) (2) with p(z1 | z0, u0, s1) = p(z1) being the initial state distribution. The corresponding graphical model is shown in figure 1a. 2.2 STOCHASTIC GRADIENT VARIATIONAL BAYES p(x) = ∫ p(x, z) dz = ∫ p(x | z)p(z) dz (3) Given the simple graphical model in equation (3), Kingma & Welling (2014) and Rezende et al. (2014) introduced the Variational Autoencoder (VAE) which overcomes the intractability of posterior inference of q(z | x) by maximizing the evidence lower bound (ELBO) of the model log-likelihood. LELBO(x; θ, φ) = Eqφ(z|x)[ln pθ(x | z)]− DKL(qφ(z | x) || p(z)) ≤ log p(x) (4) Their main innovation was to approximate the intractable posterior distribution by a recognition network qφ(z|x) from which they can sample via the reparameterization trick to allow for stochastic backpropagation through both the recognition and generative model at once. Assuming that the latent state is normally distributed, a simple transformation allows us to obtain a Monte Carlo gradient estimate of Eqφ(z|x) [ln pθ(x|z)] w.r.t. to φ. Given that z ∼ N (µ, σ2), we can generate samples by drawing from an auxiliary variable ∼ N (0, 1) and applying the deterministic and differentiable transformation z = µ+ σ . 2.3 THE CONCRETE DISTRIBUTION One simple and efficient way to obtain samples d from a k-dimensional categorical distribution with class probabilities α is the Gumbel-Max trick: d = one_hot (argmax[gi + logαi]) , with g1, . . . , gk ∼ Gumbel(0, 1) (5) However, since the derivative of the argmax is 0 everywhere except at the boundary of state changes, where it is undefined, we can’t learn a parameterization by backpropagation. The Gumbel-Softmax trick approximates the argmax by a softmax which gives us a probability vector (Maddison et al., 2017; Jang et al., 2017). We can then draw samples via dk = exp((logαk + gk)/λ)∑n i=1 exp((logαi + gi)/λ) , with g1, . . . , gk ∼ Gumbel(0, 1) (6) This softmax computation approaches the discrete argmax as temperature λ → 0, for λ → ∞ it approaches a uniform distribution. 3 RELATED WORK Our model can be viewed as a Deep Kalman Filter (Krishnan et al., 2015) with structured inference (Krishnan et al., 2017). 
In our case, structured inference entails another stochastic variable model with parameter sharing inspired by Karl et al. (2017b) and Karl et al. (2017a) which pointed out the importance of backpropagating the reconstruction error through the transition. We are different to a number of stochastic sequential models like Bayer & Osendorfer (2014); Chung et al. (2015); Shabanian et al. (2017); Goyal et al. (2017) by directly transitioning the stochastic latent variable over time instead of having an RNN augmented by stochastic inputs. Fraccaro et al. (2016) has a transition over both a deterministic and a stochastic latent state sequence, wanting to combine the best of both worlds. Previous models (Watter et al., 2015; Karl et al., 2017a; Fraccaro et al., 2017) have already combined locally linear models with recurrent Variational Autoencoders, however they provide a weaker structural incentive for learning latent variables determining the transition function. Van Steenkiste et al. (2018) approach a similar multi bouncing ball problem (see section 5.1) by first distributing the representation of different balls into their own entities without supervision and then structurally hardwiring a transition function with interactions based on an attention mechanism. Recurrent switching linear dynamical systems (Linderman et al., 2016) uses message passing for approximate inference, but has restricted itself to low-dimensional observations and a multi-stage training process. Johnson et al. (2016) propose a similar model to ours but combine message passing for discrete switching variables with a neural network encoder for observations learned by stochastic backpropagation. Tackling the problem of propagating state uncertainty over time, various combinations of neural networks for inference and Gaussian processes for transition dynamics have been proposed (Eleftheriadis et al., 2017; Doerr et al., 2018). However, these models have not been demonstrated to work with high-dimensional observation spaces like images. One feature a switching LDS model may learn are interactions which have recently been approached by employing Graph Neural Networks (Battaglia et al., 2016; Kipf et al., 2018). These methods are similar in that they predict edges which encode interactions between components of the state space (nodes). 4 PROPOSED APPROACH Our goal is to fit a series of continuous state z1:T and switching variables s2:T to a given sequence of observations x1:T . We assume a nonlinear mapping between observations and latent space which we generally approximate by neural networks, apart from the transition which is modeled by a locally linear function. Our generative model is shown in figure 1b an our inference model in figure 2a. 4.1 GENERATIVE MODEL Our generative model for a single xt is described by p(xt) = ∫ s≤t ∫ z≤t p(xt | zt)p(zt | zt−1, st, ut−1)p(st | st−1, zt−1, ut−1)p(zt−1, st−1) (7) which is close to the one of the original SLDS model (see figure 1a). Latent states zt are continuous and represent the state of the system while states st are the switching variables determining the transition. We approximate the discrete switching variables by a continuous relaxation, namely the Concrete distribution.1 Differently to the original model, we do not condition the likelihood of the current observation pθ(xt | zt) directly on the switching variables. This limits the influence of the switching variables to choosing a proper transition dynamic for the continuous latent space. 
The likelihood model is parameterized by a neural network with either a Gaussian or a Bernoulli distribution as output depending on the data. There is both a transition on the continuous states zt and discrete latent states st. For the continuous state transition p(zt | zt−1, st, ut−1) we follow (1) and maintain a set of M base matrices { ( A(i), B(i), Q(i) ) | ∀i. 0 < i < M} as our linear dynamical systems to choose from. For the transition on discrete latent states p(st | st−1, zt−1, ut−1), we usually require the learning of a Markov transition matrix. However, since we approximate our discrete switching variables by a continuous relaxation, we can parameterize this transition by a neural network. Therefore, our entire generative model can be learned end-to-end by (stochastic) backpropagation. Finally, the resulting dynamics matrices are computed through a linear combination of the base matrices: At(st) = M∑ i=1 s (i) t A (i), B(st) = M∑ i=1 s (i) t B (i), Q(st) = M∑ i=1 s (i) t Q (i) (8) Both transition models – the continuous state transition pθ(zt | zt−1, st, ut−1) and concrete switching variables transition pθ(st | st−1, zt−1, ut−1) – are shared with the inference model which is key for good performance. pθ(zt | zt−1, st, ut−1) = N ( µ, σ2 ) where [µ, σ2] = fθ(zt−1, st, ut−1) pθ(st | st−1, zt−1, ut−1) = Concrete(α, λprior) where α = gθ(zt−1, st−1, ut−1) (9) 4.2 INFERENCE 4.2.1 STRUCTURED INFERENCE OF CONTINUOUS LATENT STATE We split our inference model qφ(zt | zt−1, st, x≥t, u≥t−1) into two parts: 1) transition model qtrans(zt | zt−1, st, ut−1) and 2) inverse measurement model qmeas(zt | x≥t, u≥t) as previously proposed in Karl et al. (2017b). This split allows us to reuse our generative transition model in place of qtrans(zt | zt−1, st, ut−1). This sharing of variables is essential for good performance as it forces the reconstruction error to be backpropagated through the transition model. For practical reasons, we only share the computation of the transition mean µtrans but not the variance σ2trans between inference and generative model. Both parts, qmeas and qtrans, will give us independent predictions about the new state zt which will be combined in a manner akin to a Bayesian update in a Kalman Filter. qφ(zt | zt−1, st, x≥t, u≥t−1) ∝ qmeas(zt | x≥t, u≥t)× qtrans(zt | zt−1, st, ut−1) = N ( µq, σ 2 q ) qmeas(zt | x≥t, u≥t) = N ( µmeas, σ 2 meas ) where [µmeas, σ2meas] = hφ(x≥t, u≥t) qtrans(zt | zt−1, st, ut−1) = N ( µtrans, σ 2 trans ) where [µtrans, σ2trans] = fθ(zt−1, st, ut−1) (10) The densities of qmeas and qtrans are multiplied resulting in another Gaussian density: µq = µtransσ 2 meas + µmeasσ 2 trans σ2meas + σ 2 trans , σ2q = σ2measσ 2 trans σ2meas + σ 2 trans (11) This update scheme is highlighted in figure 2b. We found empirically that conditioning the inverse measurement model qmeas(zt | x≥t, u≥t) solely on the current observation xt instead of the entire remaining trajectory to lead to better results. We hypothesize that the recurrent model needlessly introduces very high-dimensional and complicated dynamics which are harder to approximate with our locally linear transition model. For the initial state z1 we do not have a conditional prior from the transition model as in the rest of the sequence. Other methods (Krishnan et al., 2015) have used a standard normal prior, however this is not a good fit. 
We therefore decided that instead of predicting z1 directly to predict an auxiliary 1As an ablation study, we will compare this to modeling switching variables by a Gaussian distribution. variable w that is then mapped deterministically to a starting state z1. A standard Gaussian prior is then applied to w. Alternatively, we could specify a more complex or learned prior for the initial state like the VampPrior (Tomczak & Welling, 2017). Empirically, this has lead to worse results. qφ(w | x1:T , u1:T ) = N ( w;µw, σ 2 w ) where [µw, σ2w] = iφ(x1:T , u1:T ) z1 = fφ(w) (12) While we could condition on the entire sequence, we restrict it to just the first couple of observations. 4.2.2 INFERENCE OF SWITCHING VARIABLES Following Maddison et al. (2017) and Jang et al. (2017), we can reparameterize a discrete latent variable with the Gumbel-softmax trick. Again, we split our inference network qφ(st | st−1, zt−1, x≥t, u≥t−1) in an identical fashion into two components: 1) Transition model qtrans(st | st−1, zt−1, ut−1) and 2) inverse measurement model qmeas(st | x≥t, u≥t). The transition model is again shared with the generative model and is implemented via a neural network as we potentially require quick changes to chosen dynamics. The inverse measurement model is parametrized by a backward LSTM. However, for the case of concrete variables, we cannot do the same Gauss multiplication as in the previous case. Therefore, we let each network predict the logits of a Concrete distribution and our inverse measurement model qφ(st | x≥t, u≥t) produces an additional vector γ, which determines the value of a gate deciding how the two predictions are to be weighted: qφ(st | st−1, zt−1, x≥t, u≥t−1) = Concrete(α, λposterior) with α = γαtrans + (1− γ)αmeas qmeas(st | x≥t, u≥t) = Concrete(αmeas, λposterior) where [αmeas, γ] = kφ(x≥t, u≥t) qtrans(st | st−1, zt−1, ut−1) = Concrete(αtrans, λprior) where α = gθ(zt−1, st−1, ut−1) (13) The temperatures λposterior and λprior are set as a hyperparameter and can be set differently for the prior and approximate posterior. The gating mechanism gives the model the option to balance between prior and approximate posterior. If the prior is good enough to explain the next observation, γ will be pushed to 1 which ignores the measurement and minimizes the KL between prior and posterior by only propagating the prior. If the prior is not sufficient, information from the inverse measurement model can flow by decreasing γ and incurring a KL penalty. Since the concrete distribution is a relaxation of the categorical, our sample will not be a one-hot vector, but a vector whose elements sum up to 1. We face two options here: we could take a categorical sample by choosing the linear system corresponding to the highest value in the sample (hard forward pass) and only use the relaxation for our backward pass. This, however, means that we will follow a biased gradient. Alternatively, we can use the relaxed version for our forward pass and aggregate the linear systems based on their corresponding weighting (see (8)). Here, we lose the discrete switching of linear systems, but maintain a valid lower bound. We note that the hard forward pass has led to worse results and focus on the soft forward pass for this paper. Lastly, we could go further away from the theory and instead treat the switching variables also as normally distributed. If this worked better than the approach with Concrete variables, it would highlight still existing optimization problems of discrete random variables. 
As such, it will act as an ablation study for our model. The mixing coefficients for linear systems would then be determined by a linear combination of these latent variables: α = softmax(Wst + b) ∈ RM (14) Our inference scheme for normally distributed switching variables is then identical to the one described in the previous section. We compare both approaches throughout our experimental section. 4.3 TRAINING Our objective function is the commonly used evidence lower bound for our hierarchical model. Lθ,φ(x1:T | u1:T ) ≥ Eqφ(z1:T ,s1:T |x1:T )[log pθ(x1:T | z1:T , s1:T , u1:T )] − DKL(qφ(z1:T , s1:T | x1:T , u1:T ) || p(z1:T , s1:T | u1:T )) (15) We choose to factorize over time, so the loss for a single observation xt becomes: Lθ,φ(xt | u1:T ) = Eqφ(st|st−1,zt−1,x≥t,u≥t−1) [ Eqφ(zt|st,zt−1,x≥t,u≥t−1)[log pθ(xt | zt)] ] − Est−1 [ Ezt−1 [DKL(qφ(st | st−1, zt−1, x≥t, u≥t−1) || pθ(st | st−1, zt−1, ut−1))] ] − Ezt−1 [Est [DKL(qφ(zt | zt−1, st, x≥t, u≥t−1) || pθ(zt | zt−1, st, ut−1))]] (16) The full derivation can be found in appendix A. We learn the parameters of our model by backpropagation through time and we (generally) approximate the expectations with one sample by using the reparametrization trick. The exception is the KL between two Concrete random variables in which case we take 10 samples for the approximation. For the KL on the switching variables, we further introduce a scaling factor β < 1 (as first suggested in Higgins et al. (2016), although they suggested increasing the KL term) to down weigh its importance. More details on the training procedure can be found in appendix B.2. 5 EXPERIMENTS In this section, we evaluate our approach on a diverse set of physics and robotics simulations based on partially observable system states or high-dimensional images as observations. We show that our model outperforms previous models and that our switching variables learn meaningful representations. Models we compare to are Deep Variational Bayes Filter (DVBF) (Karl et al., 2017a), DVBF Fusion (Karl et al., 2017b) (called fusion as they do the same Gauss multiplication in the inference network) which is closest to our model but doesn’t have a stochastic treatment of the transition, the Kalman VAE (KVAE) (Fraccaro et al., 2017) and a LSTM (Hochreiter & Schmidhuber, 1997). (a) Multi agent maze environment. (b) Variable encoding free space for agent 2. (c) Variable encoding walls for agent 1. (d) System activation for deterministic transition. Figure 3: Figures (b) and (c) depict an agent’s position colored by the average value of a single latent variable s marginalized over all control inputs u and velocities. Figure (d) highlights a representative activation for a single transition system for the deterministic treatment of the transition dynamics. It doesn’t generalize to the entire maze and stays fairly active in proximity to the wall. 5.1 MULTIPLE BOUNCING BALLS IN A MAZE Our first experiment is a custom 3-agent maze environment simulated with Box2D. Each agent is fully described by its x and y coordinates and its current velocity and has the capability to accelerate in either direction. We learn in a partially observable setting and limit the observations to the agents’ positions, therefore x ∈ R6 while the true state space is in R12 and u ∈ R6. First, we train a linear regression model on the latent space z to see if we have recovered a linear encoding of the unobserved velocities. We achieve an R2 score of 0.92 averaged over all agents and velocity directions. 
Our focus shifts now to our switching variables which we expect to encode interactions with walls. We provide a visual confirmation of that in figure 3 where we see switching variables encoding all space where there is no interaction in the next time step, and variables which encode walls, distinguishing between vertical and horizontal ones. In figure 3d one can see show that if the choice of locally linear transition is treated deterministically, we don’t learn global features of the same kind. To confirm our visual inspection, we train a simple decision tree based on latent space s in order to predict interaction with a wall. Here, we achieve an F1 score of 0.46. It is difficult to say what a good value should look like as collisions with low velocity are virtually indistinguishable from no collision. We compare our prediction quality to several other methods in table 1 where we outperform all of our chosen baselines. Also, modeling switching variables by a Normal distribution outperforms the Concrete distribution in all of our experiments. Aside from known practical issues with training a discrete variable via backpropagation, we explore one reason why that may be in section 5.4, which is the greater susceptibility to the scale of temporal discretization. We provide plots of predicted trajectories in appendix D. Transitioning multiple agents with a single transition matrix comes with scalability issues with regards to switching dynamics which we explore further in appendix C. 5.2 REACHER We then evaluate our model on the Roboschool reacher environment. To make things more interesting, we learn only on partial observations, removing time derivative information (velocities), leaving us with just the positions or angles of various joints as observations. Table 1 shows a comparison of various methods on predicting the next couple of time steps. One critical point is the possible collision2 between lower and upper joint which is one we’d like our model to capture. We again learn a linear classifier based on latent space s to see if this is successfully encoded and reach an F1 score of 0.46. 5.3 BALL IN A BOX ON IMAGE DATA Finally, we evaluate our method on high-dimensional image observations using the single bouncing ball environment used by Fraccaro et al. (2017). They simulated 5000 sequences of 20 time steps each of a ball moving in a two-dimensional box, where each video frame is a 32× 32 binary image. There are no forces applied to the ball, except for the fully elastic collisions with the walls. Initial position and velocity are randomly sampled. 2We roughly identify a collision to be the point where the lower joint decelerates by over a fixed value of 2. In figure 5a we compare our model to both the smoothed and generative version of the KVAE. The smoothed version receives the final state of the trajectory after the n predicted steps which is fed into the smoothing capability of the KVAE. One can see that our model learns a better transition model, even outperforming the smoothed KVAE for longer sequences. For short sequences, KVAE performs better which highlights the value of it disentangling the latent space into separate object and dynamics representation. A sample trajectory is plotted in figure 4. 5.4 SUSCEPTIBILITY TO THE SCALE OF TEMPORAL DISCRETIZATION In this section, we’d like to explore how the choice of ∆t when discretizing a system influences our results. 
In particular, we’d expect our model with discrete (concrete) switching latent variables to be more susceptible to it than when modeled by a continuous distribution. This is because in the latter case the switching variables can scale the various matrices more freely, while in the former scaling up one system necessitates scaling down another. For empirical comparison, we go back to our custom maze environment (this time with only one agent as this is not pertinent to our question at hand) and learn the dynamics on various discretization scales. Then we compare the absolute error’s growth for both approaches in figure 5b which supports our hypothesis. While the discrete approximation even outperforms for small ∆t, there is a point where it rapidly becomes worse and gets overtaken by the continuous approximation. This suggests that ∆t was simply chosen to be too large in both the reacher and the ball in a box with image observations experiment. 6 DISCUSSION We want to emphasize some subtle differences to previously proposed architectures that make an empirical difference, in particular for the case when st is chosen to be continuous. In Watter et al. (2015) and Karl et al. (2017a), the latent space is already used to draw transition matrices, however they do not extract features such as walls or joint constraints. There are a few key differences from our approach. First, our latent switching variables st are only involved in predicting the current observation xt through the transition selection process. The likelihood model therefore doesn’t need to learn to ignore some input dimensions which are only helpful for reconstructing future observations but not the current one. There is also a clearer restriction on how st and zt may interact: st may now only influence zt by determining the dynamics, while previously zt influenced both the choice of transition function as well as acted inside the transition. These two opposing roles lead to conflicting gradients as to what should be improved. Furthermore, the learning signal for st is rather weak so that scaling down the KL-regularization was necessary to detect good features. Lastly, a (locally) linear transition may not be a good fit for variables determining dynamics as such variables may change very abruptly. 7 CONCLUSION We have shown that our construction of using switching variables encourages learning a richer and more interpretable latent space. In turn, the richer representation led to an improvement of simulation accuracy in various tasks. In the future, we’d like to look at other ways to approximate the discrete switching variables and exploit this approach for model-based control on real hardware systems. Furthermore, addressing the open problem of disentangling latent spaces is essential to fitting simple dynamics and would lead to significant improvements of this approach. A LOWER BOUND DERIVATION For brevity we omit conditioning on control inputs u1:T . log p(xT ) = log ∫ z1:T ∫ s1:T qφ(s1:T , z1:T | x1:T ) pθ(x1:T | z1:T )pθ(z1:T , s1:T ) qφ(s1:T , z1:T | x1:T ) ≥ ∫ z1:T ∫ s1:T qφ(s1:T , z1:T | x1:T ) log pθ(x1:T | z1:T )pθ(z1:T , s1:T ) qφ(s1:T , z1:T | x1:T ) = T∑ t=1 Est [Ezt [p(xt | zt, st)]]− DKL(q(z1:T , s1:T | x1:T ) || p(z1:T , s1:T )) A.1 FACTORIZATION OF THE KL DIVERGENCE The dependencies on data xT and uT as well as parameters φ and θ are omitted in the following for convenience. DKL(q(z1, s2, . . . , sT , zT ) || p(z1, s2, . . . 
, sT , zT )) (Factorization of the variational approximation) = ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) log q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) p(z1, s2, . . . , sT , zT ) (Factorization of the prior) = ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) log q(z1)q(s2 | z1) . . . q(sT | zT−1, sT−1)q(zT | zT−1, sT ) p(z1)p(s2 | z1) . . . p(sT | zT−1, sT−1)p(zT | zT−1, sT ) (Expanding the logarithm by the product rule) = ∫ z1 q(z1) log q(z1) p(z1) + ∫ z1 ∫ s1 q(z1)q(s1 | z1) log q(s1 | z1) p(s1 | z1) + T∑ t=2 ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(zT | zT−1, sT ) log q(zt | zt−1, st) p(zt | zt−1, st) + T∑ t=3 ∫ z1 ∫ s2 · · · ∫ sT ∫ zT q(z1)q(s2 | z1) . . . q(zT | zT−1, sT ) log q(st | zt−1, st−1) p(st | zt−1, st−1) (Ignoring constants) = DKL(q(z1) || p(z1)) + Ez1∼q(z1)[DKL(q(s2 | z1) || p(s2 | z1))] + T−1∑ t=2 Est,zt−1 [DKL(q(zt | zt−1, st) || p(zt | zt−1, st))] + T−1∑ t=3 Est−1,zt−1 [DKL(q(st | zt−1, st−1) || p(st | zt−1, st−1))] B DETAILS OF THE EXPERIMENTAL SETUP B.1.1 ROBOSCHOOL REACHER To generate data, we follow a Uniform distribution U ∼ [−1, 1] as the exploration policy. Before we record data, we take 20 warm-up steps in the environment to randomize our starting state. We take the data as is without any other preprocessing. B.1.2 MULTI AGENT MAZE Observations are normalized to be in [−1, 1]. Both position and velocity is randomized for the starting state. We again follow a Uniform distribution U ∼ [−1, 1] as the exploration policy. B.2 TRAINING Overall, training the Concrete distribution has given us the biggest challenge as it was very susceptible to various hyperparameters. We made use of the fact that we can use a different temperature for the prior and approximate posterior (Maddison et al., 2017) and we do independent hyperparameter search over both. For us, the best values were 0.75 for the posterior and 2 for the prior. Additionally, we employ an exponential annealing scheme for the temperature hyperparameter of the Concrete distribution. This leads to a more uniform combination of base matrices early in training which has two desirable effects. First, all matrices are scaled to a similar magnitude, making initialization less critical. Second, the model initially tries to fit a globally linear model, leading to a good starting state for optimization. We also tried increasing the number of samples taken (up to 100) to approximate the KL between the Concrete distributions, however we have not observed an improvement of performance. We therefore restrict ourselves to 10 samples for all experiments. In all experiments, we train everything end-to-end with the ADAM optimizer.(Kingma & Ba, 2015) We start with learning rate of 5e−4 and use an exponential decay schedule with rate 0.97 every 2000 iterations. B.3 NETWORK ARCHITECTURE For most networks, we use MLPs implemented as residual nets (He et al., 2016) with ReLU activations. Networks used for the reacher and maze experiments. • qmeas(zt | ·): MLP consisting of two residual blocks with 256 neurons each. We only condition on the current observation xt although we could condition on the entire sequence. This decision was taken based on empirical results. • qtrans(zt | ·): In the case of Concrete random variables, we just combine the base matrices and apply the transition dynamics to zt−1. For the Normal case, the combination of matrices is preceded by a linear combination with softmax activation. 
B.3 NETWORK ARCHITECTURE
For most networks, we use MLPs implemented as residual nets (He et al., 2016) with ReLU activations. Networks used for the reacher and maze experiments:
• qmeas(zt | ·): MLP consisting of two residual blocks with 256 neurons each. We only condition on the current observation xt, although we could condition on the entire sequence. This decision was taken based on empirical results.
• qtrans(zt | ·): In the case of Concrete random variables, we just combine the base matrices and apply the transition dynamics to zt−1. For the Normal case, the combination of matrices is preceded by a linear combination with softmax activation (see equation 14).
• qmeas(st | ·): implemented by a backward LSTM with 256 hidden units. We reuse the preprocessing of qmeas(zt | xt) and take the last hidden layer of that network as the input to the LSTM.
• qtrans(st | ·): MLP consisting of one residual block with 256 neurons.
• qinitial(w | ·): MLP consisting of two residual blocks with 256 neurons, optionally followed by a backward LSTM. We only condition on the first 3 or 4 observations for our experiments.
• qinitial(s2): The first switching variable in the sequence has no predecessor. We therefore require a replacement for qtrans(st | ·) in the first time step, which we achieve by independently parameterizing another MLP.
• p(xt | zt): MLP consisting of two residual blocks with 256 neurons.
• p(zt | ·): Shared parameters with qtrans(zt | ·).
• p(st | ·): Shared parameters with qtrans(st | ·).
We use the same architecture for the image ball-in-a-box experiment; however, we increase the number of neurons of qmeas(zt | ·) to 1024.
B.4 HYPERPARAMETERS
C ON SCALING ISSUES OF SWITCHING LINEAR DYNAMICAL SYSTEMS
Let us consider a simple representation of a ball in a rectangular box where its state is represented by its position and velocity. Given a small enough ∆t, we can approximate the dynamics decently with just 3 systems: no interaction with a wall, interaction with a vertical wall, and interaction with a horizontal wall (ignoring the corner case of interacting with two walls at the same time). Now consider the growth of required base systems if we increase the number of balls in the box (even if these balls cannot interact with each other). We would require a system for every combination of the individual balls' possible states: already 3² = 9 for two balls. This number grows exponentially with the number of balls in the environment. One way to alleviate this problem that requires only a linear growth in base systems is to independently turn individual systems on and off and let the resulting system be the sum of all activated systems. A base system may then represent solely the transition for a single ball being in a specific state, while the complete system is a combination of N such systems, where N is the number of balls. Practically, this can be achieved by replacing the softmax by a sigmoid activation function or by replacing the categorical variable s of dimension M by M Bernoulli variables indicating whether a single system is active or not. We do this for our multiple agents in a maze environment. Theoretically, a preferred approach would be to disentangle multiple systems (like balls, joints) and apply transitions only to their respective states. This, however, would require a proper and unsupervised separation of (mostly) independent components. We defer this to future work.
D FURTHER RESULTS
D.1 3-AGENT MAZE
D.2 IMAGE BALL IN A BOX
1. What is the main contribution of the paper, and how does it fit into recent research in combining probabilistic models and deep learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and originality?
3. How does the reviewer assess the clarity and notation of the paper's explanation, especially in section 4?
4. What are the potential limitations and practical considerations of using switching variables, and could alternative approaches achieve similar or better results?
5. How would training the model from scratch for new environments impact its applicability, and what might be required for adaptation?
6. Are there any minor errors or suggestions for improvement in the paper's presentation or equations?
Review
This paper proposes a deep probabilistic model for temporal data that leverages latent variables to switch between different learned linear dynamics. The probability distributions are parameterized by deep neural networks and learning is performed end-to-end with amortized variational inference using inference networks. There has been a lot of recent research trying to combine probabilistic models and deep learning to define powerful transition models that can be learned in an unsupervised way, to be used for model-based RL. This paper fits in this research area, and presents a nice combination of several interesting ideas presented in related works (switching variables, structured inference networks, merging updates as in the Kalman filter). The novelty of this paper in terms of original ideas is limited; the novel part lies in the clever combination of known approaches. The paper reads well, but I found the explanation and notation in section 4 quite confusing (although easy to improve). The authors propose a structured variational approximation, but the factorization assumptions are not clear from the notation (I had to rely on Figure 2a to fully understand them).
- In the first line of equation 7 it seems that the variational approximation q_phi for z_t only depends on x_t, but it is actually dependent also on the future x through s_t and q_meas.
- The first line of section 4.1.1 shows that q_phi depends on x_{1:T}. However, from figure 2a it seems that it only directly depends on x_{t:T}, and that the dependence on x_{1:t-1} is modelled through the dependence on z_{t-1}.
- Is there a missing s_t in q_trans in the first line of (7)?
- Why do you keep the dependence on future outputs in q_meas if it is not used in the experiments and not shown in figure 2a? It only makes the notation more confusing.
- You use f_phi to denote all the functions in 4.1.1 (with different inputs). It would be clearer to use a different letter or, for example, to add numbers (e.g. f^1_\phi).
- Despite being often done in VAE papers, it feels strange to me to introduce the inference model (4.1) before the generative model (4.2), as the inference model defines an approximation to the true posterior, which is derived from the generative model. One could in principle use other types of approximate inference techniques while keeping the generative model unchanged.
It is difficult for me to understand how useful the switching variables are in practice. Reading the first part of the paper it seems that the authors will use discrete random variables, but they actually use for s_t continuous relaxations of discrete variables (Concrete distribution), or Gaussian variables. As described in appendix B2 by the authors, training models with such continuous relaxations is often challenging in terms of hyper-parameter tuning. One may even wonder if it is worth the effort: could you have used instead a deterministic s_t, parameterized for example as a bidirectional LSTM with softmax output? This may give equivalent results and remove a lot of complexity. Also, the fact that the Gaussian switching variables perform better in the experiments is an indication that this may be the case. To be able to detect walls, the z variables basically need to learn to represent the position of the agent and encode the information on the position of the walls in the connection to s_t. Would you then need to train the model from scratch for any new environment?
Minor comment:
- In the softmax equation (6) there are missing brackets: lambda should be in the denominator for both g and the log term.
ICLR
Title Learning Altruistic Behaviours in Reinforcement Learning without External Rewards Abstract Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. N/A Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. 1 INTRODUCTION Altruistic behaviour is often described as behaviour that is intended to benefit others, sometimes at a cost for the actor (Dowding and Monroe, 1997; Fehr and Fischbacher, 2003). 
Such behaviour is a desirable trait when integrating artificial intelligence into various aspects of human life and society – such as personal artificial assistants, house or warehouse robots, autonomous vehicles, and even recommender systems for news and entertainment. By observing and interacting with us, we may expect that artificial agents could adapt to our behaviour and objectives, and learn to act helpfully and selflessly. Altruistic behaviour could be a step towards value alignment (Allen et al., 2005; Gabriel, 2020), which aims to incorporate common-sense human values into artificial agents. Typically, we could achieve such an altruistic behaviour through various forms of supervision such as providing ground-truth actions at each time step, training agents with reinforcement learning (RL) and suitable rewards, or through imitation learning (Song et al., 2018). However, none of the approaches above scale up easily. They either require a large amount of supervision or carefully crafted rewards that can easily be misstated, leading to unwanted behaviour (Russell, 2019, ch. 1). How can one agent support another agent without knowing its goals? One clue might be the instrumental convergence hypothesis (Bostrom, 2017; Omohundro, 2008; Russell, 2019), which states that intelligent agents with varied goals are likely to pursue common subgoals which are generally useful (instrumental). Some examples are resource acquisition, cognitive enhancement or self-preservation, which all increase an agent’s chance of achieving almost arbitrary final goals. This hypothesis has been validated theoretically under many models, including resource games (BensonTilsen and Soares, 2016) and large classes of policies in discrete MDPs (Turner et al., 2019). While instrumental convergence is central to the discussion of value alignment and safe AI (Bostrom, 2017), since many instrumental subgoals have harmful effects, we believe that it is also a key to supporting agents with ill-defined goals and values, such as humans. The reason is that enabling instrumental subgoals for other agents (or not impeding them) can be beneficial, for a wide variety of goals and preferences. Since these subgoals occur frequently for rational agents, enabling them has the highest chance of success in the absence of more information about the other agent’s preferences, even if it is not guaranteed in the worst case. We speculate that having the ability to reach many future states is one of the most general convergent subgoals. It subsumes self-preservation (avoiding absorbent states), resource acquisition (if they are prerequisites to some actions), and generally maintaining the ability to pursue many goals. There is theoretical evidence that many optimal agents pursue this subgoal (Turner et al., 2019) (see sec. 3.2). Thus, we propose to train agents to support other agents by maximizing their choice (future state availability). This unsupervised approach learns altruistic behaviour without any extrinsic supervision such as rewards or expert demonstrations. We evaluate our methods in three diverse multi-agent environments. We always assume there are at least two agents: the leader agent that executes its own policy and can be trained using standard supervised methods, and an altruistic agent whose role is to help the leader. The performance of the altruistic agent is thus defined as the reward (success) achieved by the leader agent. 
In all our environments, the overall success of the leader agent depends on the altruistic agents' behaviour. We show that our unsupervised approach outperforms unsupervised baselines by a large margin and, in some cases, also outperforms the supervised ones. Finally, we demonstrate possible failure cases of our approach where maximising the leader agent's choice can lead to suboptimal behaviour. Our work makes the following three contributions:
• We devise a multi-agent RL framework for intrinsically motivated artificial agents that act altruistically by maximising the choice of others.
• We define and evaluate three task-agnostic methods to estimate the choice that an agent has in a given situation, which are all related to the variety in states it can reach.
• We experimentally evaluate our unsupervised approach in three multi-agent environments and are able to match and, in some cases, outperform supervised baselines.
2 RELATED WORK
To the best of our knowledge, we are the first to experimentally evaluate unsupervised agents with purely altruistic objectives. However, there are many related concepts in the literature. In human-robot cooperation, a robotic agent aids a human agent in achieving its goals (Pérez-D'Arpino and Shah, 2015; Hadfield-Menell et al., 2016; Baker et al., 2006; Dragan and Srinivasa, 2013; Fisac et al., 2017; 2020; Javdani et al., 2015; Macindoe et al., 2012; Pellegrinelli et al., 2016). Methods from Inverse RL (IRL) are often employed to infer human goals, which are then utilized by the robot agent to support the human. IRL itself aims to learn objectives from observations and can be used in single-agent (Fu et al., 2017) and multi-agent scenarios (Song et al., 2018; Yu et al., 2019; Jeon et al., 2020). However, IRL relies on the existence of expert demonstrations, which are often difficult to get at scale. In complex environments, it also often suffers from ambiguity of solutions (Arora and Doshi, 2021). In single-agent reinforcement learning, empowerment – which measures an agent's capacity to affect its environment (Klyubin et al., 2005; 2008) – is used to enable intrinsically-motivated exploration (Gregor et al., 2016; Volpi and Polani, 2020). Empowerment is also used for multi-agent cooperation (Guckelsberger et al., 2016; Du et al., 2020). Du et al. (2020) use empowerment to develop a helper agent that assists a (simulated) human agent by maximizing the human's empowerment, constituting the research work most similar to ours. In contrast to our approach, it requires privileged access to an environment simulator and therefore does not allow learning helpful or altruistic behaviour only from observation. Furthermore, the approach is not unsupervised. There are also mathematical formalizations of instrumental convergence (Bostrom, 2017). Benson-Tilsen and Soares (2016) analyze an MDP that makes finite resource allocation explicit, and find that optimal agents with arbitrary reward functions tend to deplete available resources. Turner et al. (2019) propose "power" as a convergent subgoal, which they define as the average difference between the state value of an optimal policy and the reward in the same state. They show that, for environments with certain symmetries, a larger proportion of optimal agents prefer states with higher power. In sec. 3.2 we will describe these symmetries and relate the result to our method.
3 METHODS
In this section, we formalize our framework. We start with the generic definition of the multi-agent setting.
Next, we describe our framework, where we show various approaches to estimate the choice of a single agent and how they can be applied in a two-agent Markov Game.
Markov Game. We consider a Markov Game (Littman, 1994), which generalizes a Markov Decision Process (MDP) to a multi-agent scenario. In a Markov Game, agents interact in the same environment. At time step t, each agent (the i-th of a total of N agents) takes the action a_i^t, receives a reward r_i^t, and finally the environment transitions from state s^t to s^{t+1}. A Markov Game is then defined by a state space S (s^t ∈ S), a distribution of initial states η, the action space A_i (a_i^t ∈ A_i) and reward function r_i(s, a_1, ..., a_N) of each agent i, an environment state transition probability P(s^{t+1} | s^t, a_1, ..., a_N), and finally the agents' discount factors γ_i.
3.1 ESTIMATING CHOICE FOR A SINGLE AGENT
We first consider a single-agent scenario, i.e. N = 1, where only a leader agent, indicated by the subscript L, interacts with the environment through its pretrained stochastic policy π_L. We assume that the leader acts Boltzmann-rationally, i.e. that it chooses high-value actions with higher probability. We believe this to be a reasonable assumption, as, in comparison to deterministic policies, stochastic policies are more robust (Zhang et al., 2020) and often achieve better results in real-world-like partially observable stochastic domains (Kaelbling et al., 1998). We denote the leader agent's generic choice in a given state s as C_L(s), for which we propose concrete realizations below. Each method relies on the random variable S^{t+n}, with values s^{t+n} ∈ S, which refers to the leader agent's state after n environment transitions from a starting state s^t. Its probability mass function is defined as the n-step state distribution of the underlying single-agent MDP, conditioned on the current state: p(s^{t+n} | s^t) = P(S^{t+n} = s | π_L, s^t).
Discrete choice. Our first method simply defines the choice of the leader agent in state s^t as the number of states that it can reach within n transitions, which we refer to as its discrete choice:

DC^n_L(s^t) = \left| \mathrm{range}\big(S^{t+n} \mid s^t\big) \right|,   (1)

where range(X) is the set of all values that a random variable X takes on with positive probability and | · | measures the size of that set. While this count-based estimator of choice is intuitive and easily interpretable, it can hardly be estimated practically in large or continuous state spaces. It also discards information about the probability of reaching these states.
Entropic choice. It can be shown that the entropy of a random variable X acts as a lower bound for the size of the set of values that X takes on with positive probability (Galvin, 2014, Property 2.6), i.e. H(X) ≤ log |range(X)|. We define a lower bound of the discrete choice by computing the Shannon entropy of the n-step state distribution, which we refer to as the agent's entropic choice:

EC^n_L(s^t) = H(S^{t+n} \mid s^t) = -\sum_{s \in S} P(S^{t+n} = s \mid \pi_L, s^t) \log P(S^{t+n} = s \mid \pi_L, s^t),   (2)

which estimates the agent's choice as the variety in its state after n transitions. Unlike eq. 1, EC^n_L can be computed in continuous state spaces or efficiently estimated by Monte Carlo sampling.
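As a concrete illustration of such a Monte Carlo estimate, the following hedged sketch (not the authors' code) rolls the leader's policy forward for n steps many times and estimates discrete and entropic choice from the empirical n-step state distribution; the environment and policy interfaces (env.step_from, policy.sample) and the discrete, hashable states are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo estimates of discrete choice DC^n_L (eq. 1) and
# entropic choice EC^n_L (eq. 2) for a discrete-state environment.
from collections import Counter
import math

def estimate_choice(env, policy, state, n=3, n_rollouts=500):
    counts = Counter()
    for _ in range(n_rollouts):
        s = state
        for _ in range(n):                       # roll the leader policy forward n steps
            s = env.step_from(s, policy.sample(s))
        counts[s] += 1
    probs = [c / n_rollouts for c in counts.values()]
    discrete_choice = len(counts)                # |range(S^{t+n} | s^t)|
    entropic_choice = -sum(p * math.log(p) for p in probs)
    return discrete_choice, entropic_choice
```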
Immediate choice. To further simplify entropic choice and reduce its computational complexity, we may limit the look-ahead horizon to n = 1 and assume an injective relationship from actions to states, i.e. that no two actions taken at s^t lead to the same state s^{t+1}. This assumption is often true in navigation environments, where different step-actions result in different states. We can then simplify the one-step state distribution of the leader agent to p(s^{t+1} | s^t) = P(S^{t+1} = s | π_L, s^t) = π_L(a^t_L = a | s^t), and compute a simplified, short-horizon entropic choice, the immediate choice:

IC_L(s^t) = H(S^{t+1} \mid s^t) = H(\pi^t_L(a \mid s^t)).   (3)

Immediate choice (IC) can be easily computed as the entropy of the leader's policy conditioned on the current state. Even though the assumptions made for immediate choice often do not hold in complex or real-world environments, we found empirically that this objective can yield good results.
3.2 OPTIMALITY OF CHOICE AS AN INSTRUMENTAL CONVERGENT SUBGOAL
Turner et al. (2019) analyze the instrumental convergence of optimal agents on power-seeking subgoals and show that optimal policies tend to keep their options open (Prop. 6.9). They consider two distinct actions a and a′ taken at a state s′, leading into two sets of possible future states (for an infinite horizon). These sets of future states are represented as nodes in two graphs, respectively G and G′ (with edges weighted by the probability of transitioning from one state to another). They also assume that the states in G ∪ G′ can only be reached from s′ by taking actions a or a′. In the case where G is "similar" to a subgraph of G′, in the sense that they are equivalent up to arbitrary swapping of pairs of states, the authors prove that the probability of a′ being optimal is higher than the probability of a being optimal (for most reward function distributions). Therefore, if G′ contains more states than G, an optimal agent will choose a′ over a. Turner et al. (2019) thus lend theoretical support to our proposal: while there is no guarantee that any one optimal policy (corresponding to a rational agent with an arbitrary reward function) pursues higher choice, in expectation (over a bounded space of reward functions) most policies do choose actions that lead to higher choice, all else being equal. As such, while we may not know a rational agent's concrete goals, there is a high chance that choice works as an instrumental subgoal.
3.3 COMPARISON BETWEEN CHOICE AND EMPOWERMENT
The empowerment (Klyubin et al., 2005) of a leader agent in a given state s^t and for horizon n is

E^n_L(s^t) = \max_{\omega(a^n \mid s^t)} I(S^{t+n}; A^n \mid s^t) = \max_{\omega(a^n \mid s^t)} \big[ H(S^{t+n} \mid s^t) - H(S^{t+n} \mid A^n, s^t) \big],

with a^n as a sequence of n actions of the leader agent and ω as a probing distribution over its n-step action sequences. When setting the probing distribution ω equal to the leader agent's policy, this simplifies to E^n_L(s^t) = EC^n_L(s^t) − H(S^{t+n} | A^n, s^t), with EC^n_L(s^t) as the entropic choice of the leader agent introduced in equation 2. If we further assume deterministic environment transitions, then empowerment becomes equal to entropic choice, i.e. E^n_L(s^t) = EC^n_L(s^t). In contrast to the previously introduced methods to estimate the choice of another agent, the empowerment of another agent cannot be estimated from observations of the environment transitions. To estimate another agent's empowerment in a given state (E^n_L(s^t)), access to its action space as well as privileged access to an environment simulator are required, which violates the main assumption of our research work, i.e. learning to assist others only from observations of the environment transitions.
Even when assuming privileged access, computing empowerment in large or continuous-state environments often remains infeasible (Mohamed and Rezende, 2015; Gregor et al., 2016; Zhao et al., 2020), as it requires maximizing over all possible probing distributions ω of the leader agent. In contrast, estimating state entropy, as needed for the computation of the metrics introduced in this work, is feasible in large and continuous environments (Seo et al., 2021; Mutti et al., 2020).
3.4 BEHAVING ALTRUISTICALLY BY MAXIMIZING ANOTHER AGENT'S CHOICE
Having considered three methods to estimate an agent's choice (eq. 1-3), we now apply them to a Markov Game of two agents. The main hypothesis is that maximizing the choice of another agent is likely to allow it to reach more favourable regions of the state space (for many possible policies of that agent), thus supporting it without a task-specific reward signal.
Altruistic agent's policy definition. In this Markov Game, one agent is the leader, with the subscript L, and another one is the altruistic agent, with the subscript A. We define the optimal policy of the altruistic agent as the one that maximizes the future discounted choice of the leader,

\pi^*_A = \arg\max_{\pi_A} \sum_{t=0}^{\infty} \gamma_A^t \, C_L(s^t),   (4)

where the generic choice C_L(s^t) can be estimated by one of several methods: discrete choice DC^n_L(s^t), entropic choice EC^n_L(s^t) or immediate choice IC_L(s^t).
Conditional estimates of choice. As the agents interact in the same environment, they both have influence over the system state s, which contains the state of both agents. This makes single-agent objectives based on the state distribution (such as eq. 1 and 2) difficult to translate to a multi-agent setting, since the states of both agents are intermingled. For example, an altruistic agent that maximizes entropic choice naively (eq. 2) will maximize both the state availability of the leader agent (which mirrors the single-agent entropic choice) and its own state availability (which does not contribute towards the altruism goal). To maximize entropic choice without also increasing the entropy of the altruistic agent's actions, we propose to condition the choice estimate on the altruistic agent's actions over the same time horizon, denoted by the random variable A_A^{t:t+n-1}:

EC^n_L(s^t) = H(S^{t+n} \mid A_A^{t:t+n-1}, \pi_L, s^t).   (5)

In order to better understand eq. 5, we can use the chain rule of conditional entropy (Cover and Thomas, 2005, ch. 2) to decompose it into two terms: EC^n_L(s^t) = H(S^{t+n}, A_A^{t:t+n-1} | π_L, s^t) − H(A_A^{t:t+n-1} | π_L, s^t), respectively the joint entropy of states and actions, and the entropy of the actions. Therefore, we can interpret this objective as the altruistic agent maximizing the variety of states and actions, but subtracting the variety of its own actions, which is the undesired quantity. We can also relate eq. 5 to discrete choice (eq. 1). Using the fact that H(X | E) ≤ log |range(P(X | E))| for a random variable X and event E (Galvin, 2014, Property 2.12), we see that eq. 5 is a lower bound for a count-based choice estimate (analogous to eq. 1), also conditioned on the altruistic agent's actions: EC^n_L(s^t) ≤ log DC^n_L(s^t) = log |range(S^{t+n} | A_A^{t:t+n-1}, π_L, s^t)|. However, assuming simultaneous actions, the immediate choice estimate (eq. 3) stays unchanged, i.e. IC_L(s^t) = H(π^t_L(a | s^t) | a^t_A) = H(π^t_L(a | s^t)). The technical details of how these estimates can be computed from observations of the environment transitions are given in Appendix A.
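To make the resulting training signal concrete, the following is a minimal, hedged sketch (not the authors' code) of the altruistic agent's per-step intrinsic reward as the leader's immediate choice, i.e. the entropy of the leader's policy at the current state; deriving that policy as a softmax over the leader's Q-values follows the description in Appendix C, while the network interface and the temperature value are illustrative assumptions.

```python
# Hedged sketch: intrinsic reward for the altruistic agent, given as the leader's
# immediate choice IC_L(s) = H(pi_L(. | s)). The leader's policy is taken to be a
# softmax over its Q-values; `leader_q_net` is an assumed callable mapping a state
# tensor to a vector of Q-values.
import torch
import torch.nn.functional as F

def altruistic_reward(leader_q_net, state, temperature=1.0):
    with torch.no_grad():
        q_values = leader_q_net(state)                     # shape: (num_actions,)
        probs = F.softmax(q_values / temperature, dim=-1)  # Boltzmann-rational leader
        log_probs = torch.log(probs.clamp_min(1e-12))
        return -(probs * log_probs).sum()                  # policy entropy = IC_L(s)
```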
4 EXPERIMENTAL EVALUATION
We introduce three multi-agent environments of increasing complexity, in which another agent's success depends on altruistic behaviour (in appendix E, we additionally evaluate performance in a non-spatial environment). In each environment, we first evaluate a subset of the proposed methods for choice estimation (DC^n_L, EC^n_L and IC_L) by comparing the estimated choice of the leader agent in minimalistic scenarios. We then evaluate our approach of behaving altruistically towards others by maximizing their choice (section 3.4) and measure the performance of our approach as the reward achieved by the leader agent. We provide videos of the emergent behaviours in the supp. mat. (see appendix F). We compare our method to both an unsupervised and a supervised approach. Note that the supervised approach has stronger assumptions, as it requires direct access to the leader agent's reward function. We do not consider inverse RL (IRL) as a relevant baseline, as it would rely on demonstrations of expert behaviour, which we do not assume. Even if perfect knowledge of the state transition probabilities is assumed, this does not allow generating expert demonstrations of the leader agent's policy, as its expert policy would in turn depend on the policy of the altruistic agent, which is yet to be found by IRL.
4.1 DISCRETE ENVIRONMENTS WITH CONTROLLABLE GATES
We start by considering three different scenarios on a grid, illustrated in Fig. 1 (top row), with the starting positions of the leader (green) and an additional agent (blue) shown in faded colors; obstacles are gray, and agents may move in one of the four cardinal directions or stay still.
Choice estimate analysis. We first verify whether the estimated choice for each state (agent position) correctly maps to our intuitive understanding of choice (that is, the diversity of actions that can be taken). Therefore, we conducted an analysis of the estimated choice of the leader agent using a simplified version of the environment (Fig. 1, top left), in which only the leader agent is present and selects actions uniformly at random. Fig. 1 (bottom row) shows the three different methods of estimating choice evaluated for each possible cell position of the leader agent. We can observe that states in less confined areas, e.g. further away from walls, generally feature higher choice estimates, with the least choice being afforded by the dead end at the right. All three methods' estimates are qualitatively similar, which validates the chosen approximations. In line with the simplifications made, the immediate choice (IC) estimates tend to be more local, as can be observed when comparing the estimates for the cell at row 2, column 4. In conclusion, these results qualitatively agree with an intuitive understanding of an agent's choice in a grid environment.
Environment setup. In the Door Scenario (Fig. 1, top center), the door switch (row 1, col. 8) can only be operated by the altruistic agent. The door (row 2, col. 4) remains open as long as the altruistic agent is on the switch cell and is closed otherwise. As the leader agent always starts to the left of the door and the altruistic agent to the right, the leader agent can only attain its goal, the apple (row 2, col. 6), if the altruistic agent uses the door switch to enable the leader agent to pass through the door. In the Dead End Scenario (Fig. 1, top right), the door is always open, and the leader agent's target object (green apple) is moved to the top right cell.
Hence, the leader agent can obtain the apple without additional help from the altruistic agent. However, the altruistic agent could potentially block the path by positioning itself at the entry to the dead end. This situation would be the opposite of altruistic behaviour and is, of course, undesired. We compare to a supervised approach, to Assistance via Empowerment (AvE; Du et al., 2020), and to a random-policy baseline.
Assistance via Empowerment baseline. We compare with the recently-proposed AvE, which has a similar goal (Du et al., 2020). There are two major differences: AvE is not unsupervised, and it requires privileged access to an environment simulator to produce estimates. Hence, its use in real or black-box environments is limited. We used the authors' implementation with fixed hyperparameters, except for the crucial horizon n, for which we present a sweep in app. B.
Training. We start by pretraining the leader agent with Q-Learning (Watkins and Dayan, 1992), with the altruistic agent executing a random policy. Hence, after convergence, the leader agent's policy targets the green apple. Appendix B lists all details and parameters. Afterwards, the leader agent's learning is frozen and the altruistic agent is trained; it always observes the position of the leader agent sL, its own position sA, and the environment state senv, which is composed of the door state (open, closed) and the food state (present, eaten). The altruistic agent is trained with Q-Learning to maximize the discounted future choice of the leader agent (see eq. 4). For that, it uses one of the three proposed methods, i.e. eq. 1, eq. 2 or eq. 3, as detailed in appendix A.1.
Results. We investigate the developed behaviour of the altruistic agent after convergence for different choices of the hyperparameters – look-ahead horizon n ∈ {1, 3, 12} (which determines the scale at which choices are considered) and discount factor γ_A ∈ {0.1, 0.7} (which defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent). Success is binary: either the leader agent attains its goal (green apple), or not. In the Door Scenario (Fig. 1, top center), we found that, for longer horizons n and higher discount factors γ_A, the altruistic agent opens the door to allow the leader agent to reach its target, by occupying the switch position (square outline; row 1, col. 8). For smaller n and lower γ_A, the altruistic agent does not execute any coordinated policy and the leader does not succeed. Using the AvE method, we find that it only opens the door for n = 3, but fails to do so for n = 1 and n = 12. In the Dead End Scenario (Fig. 1, top right), we observe that, for longer horizons n and large discount factors γ_A, the altruistic agent stays out of the leader agent's way by occupying a far-away cell (square outline; row 1, col. 6). For short horizons n and high discount factors γ_A, the altruistic agent actively blocks the entry to the hallway that contains the target (circle outline; row 3, col. 7), to prohibit the leader agent from entering this region of low estimated choice (recall that the choice for each cell is visualized in Fig. 1, bottom right). This failure case can be prevented by having a large enough horizon n and discount factor γ_A, analogously to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018). We find that this configuration performs consistently better than others in both scenarios, and is hence preferred.
On the other hand, the AvE method does not block the path of the leader agent for n = 1, but blocks its path for n = 3 and n = 12. We found that the resulting behaviour of our approach is independent of the used method for choice estimation, i.e. either discrete choice (eq. 1) or entropic choice (eq. 2) yield the same outcome, with immediate choice (eq. 3) being a special case of entropic choice. As for the AvE baseline, we hypothesize that the variance of results is due to the nature of the proxy used in practice, which includes components of empowerment from both agents (sec. 3.4). The binary outcomes for all hyperparameter combinations are given in appendix B. We also compare to a supervised baseline (receiving a reward when the leader obtains the apple), in which case the leader always succeeds. 4.2 LEVEL-BASED FORAGING EXPERIMENTS Computational efficiency. Due to the computational complexity resulting from the need to estimate a long-term distribution of states, p(st+n|st), we focus on immediate choice (IC) to estimate the leader agent’s choice in the remaining sections. Furthermore, in rare state-action sequences, the assumptions made for IC, i.e. deterministic environment transitions and an injective relationship from actions to states, may not hold. Nonetheless, we did not find this to adversely affect the results. Due to its dependence on access to the environment simulator and its computational complexity, we do not consider the AvE baseline for the remainder of experiments. Setup. We use a fully-observable multi-agent environment that enables us to assess the level of cooperation among agents (level-based foraging, LBF, Christianos et al. (2020)) to evaluate the performance of altruistic agents in more complex environments with discrete state spaces. We compare our method to a maximum-entropy approach from single-agent RL (Mutti et al., 2020) and a random-policy baseline. A visualization of the environment is depicted in Fig. 2 (left). The two agents can forage apples by simultaneously taking positions at different sides of a targeted apple, yielding a fixed reward. We first train two agents – which receive an equal reward for foraging – using Deep Q-Learning (DQL, Van Hasselt et al. (2015)), corresponding to fully-supervised sharedreward in multi-agent reinforcement learning (MARL). We then take one of these pretrained agents that has learned to forage apples when accompanied by a cooperating agent, freeze its policy, and place it as the leader agent (green) into the test scenario (additional details are provided in app. C). Choice estimate analysis. We first qualitatively evaluate IC as an estimator for choice in Fig. 3, by comparing representative scenarios. To quantitatively analyse IC as an estimator for the leader agent’s choice, we compare the leader agent’s average IC (over 100 episodes) in two scenarios, one in which it can acquire many rewards, i.e. the other agent acts cooperatively, and one where it can acquire only few rewards, i.e. the other agent takes random actions. We show the results in Table 1. We observe that the leader agent’s estimated choice is substantially higher when it is able to acquire high rewards. Note that the IC estimate does not have privileged access to the reward function of the leader agent, and so this experiment evaluates its worth as a generic proxy for the leader’s reward. 
Assuming that an agent is able to acquire higher rewards when having more choice, these results indicate that IC is a reasonable estimator for the leader agent’s choice in LBF. Training. We now consider an environment that consists of the previously pretrained leader and an additional altruistic agent, which is trained from scratch and does not receive a reward for foraging apples, but is rewarded according to the leader agent’s choice. Its reward is given as the current estimate of the leader agent’s IC (eq. 3) and it is trained using DQL. To compute its internal reward signal, the altruistic agent would therefore need to estimate the state transition probabilities, as detailed in A.2. To decouple our approach’s performance from that of the state transition estimator, we instead directly compute the altruistic agent’s reward using the leader agent’s policy. Results. We define the performance of the altruistic agent not as its achieved internal reward but as the reward achieved by the leader agent, i.e. its performance in enabling the leader agent to forage apples. Fig. 4 shows a comparison of the altruistic agent’s performance to that achieved by 3 baselines (two unsupervised and one supervised), averaged over 5 random seeds, with the standard deviation as the shaded area. It can be observed that the performance of the altruistic agent converges to a similar performance to that of the supervised agent, and outperforms the baseline approaches by a large margin. Furthermore, the IC improvement of the leader agent is correlated with its reward improvement, which supports using IC as a reasonable proxy for the choice of the leader agent. 4.3 MULTI-AGENT TAG GAME WITH PROTECTIVE AGENTS Setup. We use a multi-agent tag environment (Tag, Mordatch and Abbeel (2018); Lowe et al. (2017); Terry et al. (2020)), illustrated in Fig. 2 (right), to evaluate the capabilities of altruistic agents in complex environments with continuous state spaces. Adversaries are rewarded for catching the leader, which in turn receives a negative reward for being caught or crossing the environment boundaries. To speed up training, altruistic agents additionally receive a small negative reward for violating the environment boundaries. We pretrain the adversaries and the leader (without the presence of altruistic agents) using MADDPG (Lowe et al., 2017) and DDPG (Lillicrap et al., 2016) respectively. After pretraining, the adversary agents have learned to cooperatively chase the leader agent, which in turn has learned to flee from the adversaries. Exact setup specifications and all parameters are given in appendix D. Choice estimate analysis. As done for LBF, we evaluate the IC of the leader agent in representative scenarios in Fig. 3. We also quantitatively evaluate IC as an estimator for the leader agent’s choice, by comparing the leader agent’s IC per timestep for a scenario in which it receives high rewards to one where it receives low rewards. We again hypothesize that the leader agent is able to acquire higher rewards when having more choice. Table 1 shows that the estimated choice is substantially higher in the high-success scenario, indicating that IC is a reasonable estimator also in Tag. Training. We freeze the pretrained policies of the adversary agents and the leader agent and insert three additional altruistic agents which observe all agents but are not observed themselves. 
Each additional altruistic agent’s internal reward signal is given as the IC of the leader agent (equation 3), which is directly computed as done in LBF (see 4.2). Results. Performance of the altruistic agents is defined as the times per episode that the leader agent is caught by the adversaries, i.e. the lower the better. In Table 2, the performance of the team of three altruistically trained agents (ours) is compared to three relevant baselines, with the altruistic agents either removed (None), acting randomly (random), or solely receiving a small negative reward for violating the environment boundaries (cage). In contrast to LBF, we do not compare to an unsupervised exploration approach, as we are not aware of such an implementation for cooperative MARL. Additionally, we report results for the case in which the altruistic agents receive the same reward as the leader agent (supervised), possibly appended by a negative reward for violating the environment boundaries (supervised + cage). It can be observed that our approach outperforms all relevant baselines by a substantial margin and also outperforms the supervised approach. We hypothesize this to be due to the dense internal reward signal that our approach provides, as compared to the sparse rewards in the supervised scenario: recall that in the supervised scenario the additional altruistic agents receive a large negative reward only when the leader agent is caught by the adversaries, whereas our approach provides a dense reward signal that corresponds to the current estimate of the leader agent’s choice. Fig. 5 displays the emerging protective behaviour of altruistic agents trained with our approach. Results videos are found in the supplemental material. 5 CONCLUSIONS We lay out some initial steps into developing artificial agents that learn altruistic behaviour from observations and interactions with other agents. Our experimental results demonstrate that artificial agents can behave altruistically towards other agents without knowledge of their objective or any external supervision, by actively maximizing their choice. This objective is justified by theoretical work on instrumental convergence, which shows that for a large proportion of rational agents this will be a useful subgoal, and thus can be leveraged to design generally altruistic agents. This work was motivated by a desire to address the potential negative outcomes of deploying agents that are oblivious to the values and objectives of others into the real world. As such, we hope that our work serves both as a baseline and facilitator for future research into value alignment in simulation settings, and as a complementary objective to standard RL that biases the behaviour towards more altruistic policies. In addition to the positive impacts of deployed altruistic agents outside of simulation, we remark that altruistic proxy objectives do not yet come with strict guarantees of optimizing for other agents’ rewards, and identify failure modes (sec. 4.1) which are hyperparameter-dependent, and which we hope provide interesting starting points for future work. 6 ETHICS STATEMENT We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare. 7 REPRODUCIBILITY STATEMENT We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in table 4. All experiments were run on single cores of Intel Xeon E7-8867v3 processors (2.5 GHz). Training times are given in the respective sections in the appendix. 
For the LBF and Tag experiments, we report mean and standard deviation over five different random seeds. The Gridworld experiments yield deterministic results. We will provide the source code for all conducted experiments with the final version of this publication. We created detailed instructions on how to run the code in order to replicate the experimental outcomes presented in this work.
8 ACKNOWLEDGEMENTS
We thank Thore Graepel and Yoram Bachrach for their helpful feedback. We are also grateful to the anonymous reviewers for their valuable suggestions. This work was supported by the Royal Academy of Engineering (RF\201819\18\163).
A ESTIMATION OF LEADER AGENT'S CHOICE FROM OBSERVATION
A.1 MODEL-BASED ESTIMATION OF CHOICE FROM OBSERVATIONS
We introduce a model-based estimator of choice that is suitable for small-scale discrete-state environments, having the advantage that it is easily interpretable. Recalling how we compute the discrete choice and entropic choice estimates for the leader agent, an estimate of the n-step state distribution conditioned on the altruistic agent's actions is needed, i.e. P(s^{t+n} | π_L, a_A^{t:t+n-1}, s^t). To simplify this computation, we assume that the altruistic agent takes the hold (stay) action for the next n steps. More specifically, we assume that the altruistic agent's state is unchanged for the next n steps. Further assuming that both the state and the action space are discrete, we compute

P(s^{t+n} \mid \pi_L, a_A^{t:t+n-1}, s^t) = s_1^t \, T(s_A^t)^n,   (6)

with

T(s_A^t)_{ij} = P(s^{t+1} = s_j \mid s^t = s_i, s_A^{t+1} = s_A^t),   (7)

where the state transition matrix T(s_A) holds the transition probabilities between all possible states, as a function of the state of the altruistic agent s_A, and the system state s^t is encoded into a one-hot vector s_1^t. The n-step discrete choice of the leader agent can then be computed as

DC^n_L(s^t) = \| s_1^t \, T(s_A^t)^n \|_0,   (8)

its n-step entropic choice as

EC^n_L(s^t) = H\big( s_1^t \, T(s_A^t)^n \big),   (9)

and its immediate choice as

IC_L(s^t) = H\big( \pi^t_L(a \mid s^t) \big) = H\big( s_1^t \, T(s_A^t) \big).   (10)

In environments with a discrete state and action space, the altruistic agent can hence use an estimate of the state transition matrix T to estimate the choice of the leader agent using any of the proposed methods, i.e. DC, EC or IC. An estimate of T can be built over time, by observing the environment transitions and computing the transition probabilities as relative frequencies of observed transitions.
A.2 MODEL-FREE ESTIMATION OF CHOICE FROM OBSERVATIONS
To limit the computational complexity, which is important for environments with large or continuous state spaces, we also consider immediate choice as an estimator for the leader agent's choice (IC_L(s^t) = H(S^{t+1} | s^t)). As shown in section 3.1, this estimate can be simplified to H(S^{t+1} | s^t) = H(π^t_L(a | s^t)), under the named assumptions. Hence, to compute the immediate choice of the leader, the altruistic agent requires an estimate of the leader agent's policy entropy, which can be learned from observation using a policy estimation network (Hong et al., 2018; Papoudakis et al., 2020; Mao et al., 2019; Grover et al., 2018).
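The model-based estimator of A.1 could be realized roughly as follows; this is a hedged sketch (not the authors' code) for a small discrete environment, in which the empirical transition matrix is accumulated from observed transitions and the three choice estimates of equations 8-10 are computed from it. The data structures and function signatures are illustrative assumptions.

```python
# Hedged sketch: model-based estimation of DC, EC and IC (eqs. 8-10) for a
# discrete environment. Transition counts are accumulated per altruistic-agent
# state; probabilities are the relative frequencies of observed transitions.
import numpy as np

class ChoiceEstimator:
    def __init__(self, n_states, n_altruistic_states):
        # counts[s_A][i, j]: transitions i -> j observed while the altruistic agent is in s_A
        self.counts = np.zeros((n_altruistic_states, n_states, n_states))

    def observe(self, s, s_next, s_altruistic):
        self.counts[s_altruistic, s, s_next] += 1.0

    def transition_matrix(self, s_altruistic):
        c = self.counts[s_altruistic]
        row_sums = c.sum(axis=1, keepdims=True)
        # unvisited states fall back to a uniform row
        uniform = np.full_like(c, 1.0 / c.shape[1])
        return np.divide(c, row_sums, out=uniform, where=row_sums > 0)

    def choices(self, s, s_altruistic, n=3):
        T = self.transition_matrix(s_altruistic)
        one_hot = np.zeros(T.shape[0])
        one_hot[s] = 1.0
        dist_n = one_hot @ np.linalg.matrix_power(T, n)   # n-step state distribution
        dist_1 = one_hot @ T                              # one-step state distribution
        entropy = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
        dc = np.count_nonzero(dist_n)                     # eq. (8), discrete choice
        ec = entropy(dist_n)                              # eq. (9), entropic choice
        ic = entropy(dist_1)                              # eq. (10), immediate choice
        return dc, ec, ic
```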
We found 2https://github.com/yuqingd/ave that results are equal for both n = 10 and n = 12, which is why we only display results for n = 12. We further evaluated the AvE baseline for n between 1 and 12. For the Opens door task, we found that AvE yields success for n = 2, 3, 4, 5 and failing for the remaining. For the Non blocking task, we found that AvE yields success for n = 1, 2 and failing for the remaining. B.1.2 PRETRAINING We first pretrain the leader agent using tabular Q-Learning, with learning parameters given in Table 4. During this pretraining, the altruistic agent takes random actions. We train until all Q-Values are fully converged, i.e. training runs for 300000 environment steps. B.1.3 REWARD COMPUTATION FOR ALTRUISTIC AGENTS The altruistic agent is then also trained using tabular Q-Learning, and its internal reward signal is given as the choice estimate of the leader agent, i.e. either DCnL(s t), ECnL(s t) or ICL(st), which is computed with the model based-estimation introduced in appendix A.1. The altruistic agent records all environment transitions and frequently updates its estimate of the state transition matrix T (sA), which is needed to compute the internal reward signal for the altruistic agent. All training parameters can be found in Table 4. Training time is about 15 minutes per experiment. B.2 PERFORMANCE EVALUATION Performance of the altruistic agent is reported for two different categories, as shown in Table 3. For each category, we report success or failure for choice estimate look-ahead horizons n ∈ {1, 3, 12} and discount factors of the altruistic agent γa ∈ {0.1, 0.7}. Success or failure was always deterministic, conditioned on the experiment setup, i.e. 10 simulations were run for each setup which always yielded the same outcome. To estimate the leader agent’s choice, the altruistic agent uses either discrete choice (D, equations 1 and 8) or entropic choice (E, equations 2 and 9). It must be noted that horizon n = 12 is equivalent to an infinite horizon look-ahead for the given environment size and that entropic choice is equivalent to immediate choice (equations 3 and 10) at horizon n = 1, as the environment satisfies the necessary conditions listed for equation 3. Table 3 displays the results of this experiment. In the first row, it is evaluated whether the altruistic agent opens the door at all times, such that the leader agent can eat the green apple. It can be observed that the altruistic agent only opens the door for longer horizons n, respectively higher discount factors γa. Given the definitions of discrete choice (Equation 1) and entropic choice (Equation 2), it can be assumed that the choice horizon n determines the locality for which choice is considered and that the discount factor γa defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent. This is in line with the observed results for the first category (Opens door). It can be assumed that, for short horizons n, the altruistic agent does not open the door, as it does not estimate that this would lead to an increase in the leader agent’s choice. A similar argumentation follows for low discount factors γa. The bottom-row category evaluates whether the altruistic agent does not block the hallway that leads up to the leader agent’s target apple in the top right environment cell. This category demonstrates a possible failure case of the proposed approach of maximizing another agent’s choice. 
For short horizons n and high discount factors γ_A, the altruistic agent actively blocks the entry to the low-entropy hallway towards the top right cell – by constantly occupying cell (2, 6) – to prohibit the leader agent from entering this region of low estimated choice. This failure case can be prevented by an appropriate selection of the hyperparameters – horizon n and discount factor γ_A. It is related to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018); if chosen incorrectly, the agent does not pursue environment rewards that lie in low-entropy regions. A possible solution to this problem would be to define a constrained optimization problem, as shown by Haarnoja and Abbeel (2018).
B.3 ABLATION STUDY ON JOINT LEARNING
Training. To investigate the effects of joint learning of the leader agent's and the altruistic agent's policy, we adapted the training process described in section 4.1 for the Gridworld experiments as follows. Instead of first learning the policy of the leader agent while the altruistic agent takes random actions, we initialized both policies from scratch and trained both agents simultaneously with the parameters given in Table 4.
Results. We evaluated the outcome for the same scenarios, i.e. the scenarios described in section 4.1. We found that the results for the individual test cases were equivalent to those achieved when training the leader and the altruistic agent sequentially, i.e. the results are equivalent to those displayed in Table 3.
C LEVEL BASED FORAGING EXPERIMENTS
C.1 TRAINING PROCEDURE
C.1.1 SETUP
We adopted the Level Based Foraging environment (https://github.com/semitable/lb-foraging) as given in Christianos et al. (2020). We only focus on two-agent scenarios and only consider the subset of possible environments that require full cooperation among agents, i.e. those where food can only be foraged by two agents cooperatively. We therefore only consider environments where both agents are at level one, and all present food is at level two. In the original implementation, both agents have to simultaneously select the eat action while docking at different sides of a food object to forage the object and receive the reward. To reduce training time, we simplify this setup by reducing the action space to up, down, left, right, stay, i.e. we remove the eat action and enable agents to forage food by being simultaneously at different sides of a food object, with no further action required.
C.1.2 PRETRAINING
To obtain a pretrained leader agent, we first train two agents in the environment that are equally rewarded for foraging food. This setup corresponds to shared-reward cooperative MARL (Tan, 1993). Both agents are trained using Deep Q Learning (DQL; Van Hasselt et al., 2015), using a fully connected neural network with two hidden layers and five output values, representing the Q-values of the five possible actions. The exact training parameters are listed in Table 4. We then take either one of the two agents and set it as the pretrained leader agent for the subsequent evaluation of the altruistic agent.
C.1.3 TRAINING OF ADDITIONAL AGENTS
We then insert an additional agent into the environment that shall act altruistically towards the leader agent. This additional agent is trained in the same fashion and with the same parameters as the previously trained leader agents. Only its reward signal is different, as laid out in the next section.
C.1.4 REWARD COMPUTATION FOR ADDITIONAL AGENTS
We compare four different approaches that define how the reward of the additional agent is computed, and hence how it behaves. Random: The agent takes random actions. Supervised: The agent receives the same reward as the leader agent, i.e. a shared reward as in cooperative MARL. Ours: The reward of the additional agent is defined as the immediate choice of the leader agent, as detailed in equation 3. We compute the leader agent's policy entropy by computing the entropy of the softmax of the leader agent's Q-values in the given state. We further consider an unsupervised baseline, as detailed in the next paragraph.
Unsupervised baseline (MaxEnt). As an unsupervised baseline, we implemented the MEPOL approach of Mutti et al. (2020). Their task-agnostic unsupervised exploration approach maximizes the entropy over the state distribution of trajectory rollouts. For this baseline, the additional agent is trained with the implementation given by the authors (https://github.com/muttimirco/mepol), which itself builds on TRPO (Schulman et al., 2015). We leave all parameters unchanged but evaluate different learning rates, lr ∈ {1e−6, 1e−5, 1e−4, 1e−3, 1e−2, 1e−1}. Best results were achieved for a learning rate of 1e−5, which was hence picked as the relevant baseline.
C.2 PERFORMANCE EVALUATION
Each experiment was run for 5 different random seeds and mean and standard deviation are reported. Training progress is shown in Figure 4. Evaluations are computed every 10000 environment steps for 200 episodes, with the exploration set to zero. Training time was about 14 hours for each run. Results are shown in Fig. 4.
D TAG EXPERIMENTS
D.1 TRAINING PROCEDURE
D.1.1 PRETRAINING
We use the Simple Tag (Tag) implementation by Terry et al. (2020) (https://github.com/PettingZoo-Team/PettingZoo), which is unchanged compared to the original implementation of Mordatch and Abbeel (2018) (https://github.com/openai/multiagent-particle-envs), apart from fixes of minor errors. We first adopt the original configuration and pretrain three adversaries and one good agent (leader agent) using the parameters listed in Table 4. We use MADDPG (Lowe et al., 2017) (https://github.com/starry-sky6688/MADDPG) to train adversary agents, and modify the framework as follows. The last layer of each agent's actor-network outputs one value for each of the environment's five possible actions, over which the softmax is computed. We then sample the agent's action from the output softmax vector, which corresponds to the probabilities with which the agent takes a specific action in a given state. We train the leader agent with DDPG (Lillicrap et al., 2016), where we equally modify the output layer. Each actor and critic network is implemented as a fully-connected neural network with two hidden layers, with layer sizes as given in Table 4. To make the environment more challenging for the leader agent, we decrease its maximum speed and acceleration to 70% of the original value. We next insert three additional agents into the environment whose observations include all agents and objects. These additional agents are not observed by adversary agents or the leader agent. The additional agents are of the same size as the adversary agents, and their acceleration and maximum velocity are equal to that of the leader agent. To speed up training, we made the following changes to the environment, which are applied to our approach as well as to all baselines. First, we spawn the three additional agents in the vicinity of the leader agent, which itself is spawned at a random position.
Furthermore, we randomly pick two out of the three adversary agents and decrease their maximum acceleration and maximum speed by 50%. We made these changes to be able to observe substantial differences between the different approaches after a training time of less than 24h. D.1.2 TRAINING OF ADDITIONAL AGENTS We train these three additional agents with the previously described modified version of MADDPG. The reward for each agent is defined either according to our approach or according to one of the baselines, as detailed in the next section. D.1.3 REWARD COMPUTATION FOR ADDITIONAL AGENTS FOR DIFFERENT BASELINES We consider the following reward definitions for the additional agents, and the corresponding environment configurations. None: For this scenario, the additional agents are removed from the environment. The remaining approaches differ only in the way that the reward of the additional agents is computed; no other changes are made. Random: The additional agents take random actions. Cage: The additional agents receive a negative reward for violating the environment boundaries, which is equal to the negative reward that the leader agent receives for itself violating the environment boundaries (part of the original Tag implementation). Supervised: The additional agents receive the same reward as the leader agent. That is, they receive a reward of -10 if the leader agent is caught by the adversaries and a small negative reward if the leader agent violates the environment boundaries. Supervised + Cage: The additional agents receive the same reward as the leader agent, and an additional small negative reward if they themselves violate the environment boundaries. Ours: The reward of the additional agents is defined as the immediate choice of the leader agent, as detailed in eq. 3. To reduce the variance in the estimate of the leader agent’s immediate choice, we implement an ensemble of five pretrained actor-networks for the leader agent, evaluate the policy entropy of each network, and take the median of the achieved values as the reward for the altruistic agents. Furthermore, the additional agents receive a small negative reward if they themselves violate the environment boundaries. D.2 PERFORMANCE EVALUATION We train Cage, Supervised, Supervised + Cage and Ours for five different random seeds with parameters as detailed in Table 4. We then compute the results listed in Table 2 by freezing all weights across all networks, setting the exploration noise to zero and computing the average and standard deviation over 500 rollout episodes. E RESOURCE ENVIRONMENT E.0.1 MOTIVATION AND OVERVIEW This environment is a special case of the general resource-based MDP proposed by Benson-Tilsen and Soares (2016), which they used to show that intelligent agents pursue instrumentally useful subgoals. We chose this environment to evaluate our proposal in non-spatial and non-navigation settings. In the environment, there are three resource types, which two “consumer” agents may consume as an action. Each consumer has different preferences (reward function), and so will only consume two of the resource types.
A third, altruistic agent receives one resource unit of each type to distribute among the consumers, and its goal is to satisfy the preferences of the consumers without knowing their reward function. We define its performance as the average number of times that the consumers fail to consume their preferred resource (so lower is better). We compare our method to a supervised agent that is explicitly trained with the consumers’ reward function, as well as to an agent that assigns the resources randomly. E.0.2 ENVIRONMENT DESCRIPTION The environment is expressed as a Markov Game (see section 3). The Markov Game is composed of two human-inspired consumers with subscripts C1 and C2 and an altruistic agent with subscript A. Three types of resources exist, R_X, R_Y and R_Z. The environment state s is given by the number of resources of each type available to each of the consumers. For example, s = [(1, 1, 0), (0, 0, 1)] means that agent C1 has one resource each of type X and Y available, while agent C2 only has one resource of type Z available. At the beginning of each time step, the altruistic agent is provided with one resource per category, i.e. R_X, R_Y and R_Z. The altruistic agent can assign each resource individually to any agent or discard the resource. The altruistic agent’s action space is hence defined by one sub-action per resource, i.e. a_A = (a_A^X, a_A^Y, a_A^Z). Each sub-action assigns the resource either to one of the consumers or discards it. The resources are then distributed according to the action taken by the altruistic agent and the environment state is updated. Resources cannot be stacked, which means that each agent can only have one resource per category available at a time. Next, the consumers attempt to consume one resource each, according to their preference. Agent C1 dislikes resource R_Z, hence it chooses R_X or R_Y with equal probability. Agent C2 dislikes resource R_X, hence it chooses R_Y or R_Z with equal probability. The actions of agents C1 and C2 are sampled accordingly and the environment state is updated. For each round, we record how many consumers attempted to consume a resource that was not available. E.1 TRAINING The altruistic agent is trained with Q-Learning (Watkins and Dayan, 1992) to maximize the discounted future choice of the consumers (see eq. 4). For that, it uses one of the three proposed objectives, namely IC (eq. 3), EC (eq. 2) or DC (eq. 1), which it estimates as detailed in appendix A.1. The exact hyper-parameters are given in Table 4. We compare the performance of the altruistic agent that maximizes the choice of the consumers to that of a supervised agent. The reward of the supervised agent is the negative of the number of consumers that attempted to consume a resource in that time step and failed. Further, we compare to a random-policy baseline that distributes the resources randomly but does not discard any resources. E.2 RESULTS Table 5 shows that the results achieved by the altruistic agent trained with choice are equivalent to those achieved by the supervised agent. Furthermore, they are significantly better than those achieved by an agent with a random policy. F VIDEOS OF BEHAVIOUR OF ALTRUISTIC AGENT We provide videos for the most relevant outcomes of our experiments in the supplementary material. F.1 VIDEOS FOR RESULTS OF GRIDWORLD EXPERIMENTS (SECTION 4.1) F.1.1 DOOR SCENARIO IN FIG.
1 TOP CENTER 01 Altruistic agent opens door for leader agent: It can be observed that the altruistic agent has learned to operate the door switch to enable the leader agent to pass through the door and reach its target on the other side. 02 Altruistic agent does not open door for leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent does not open the door. F.1.2 DEAD END SCENARIO IN FIG. 1 TOP RIGHT 03 Altruistic agent gives way to leader agent: It can be observed that the altruistic agent does not get in the way of the leader agent, which is hence able to reach its target in the top right cell. 04 Altruistic agent blocks path of leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent blocks the entry to the hallway towards the right side of the environment such that the leader agent cannot reach its target at the top right cell. This happens as the altruistic agent forcefully maximizes the estimated choice of the leader agent by hindering it from entering the hallway, which is a region of lower estimated choice. F.2 VIDEO FOR RESULTS OF LEVEL BASED FORAGING (SECTION 4.2) 05 Altruistic agent enables leader to forage apples: It can be observed how the altruistic agent (blue) learned to coordinate its movements with the leader agent (green), to enable the leader agent to forage apples. It has learned this behaviour purely through optimizing for the leader agent’s choice and is itself not rewarded for foraging apples. F.3 VIDEO FOR RESULTS OF TAG (SECTION 4.3) 06 Altruistic agents protect leader from adversaries: It can be observed how the altruistic agents (blue colors) learned to coordinate their movements to protect the leader agent (green) from its adversaries. The adversaries (red colors) try to catch the leader, which in turn tries to flee from them. The altruistic agents protect the leader by actively intercepting the paths of the adversaries. They have learned this behaviour purely through optimizing for the leader agent’s choice.
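To make the resource-assignment dynamics of the environment in E.0.2 concrete, below is a minimal Python sketch of one environment round under the stated consumer preferences (C1 consumes X or Y, C2 consumes Y or Z). The state encoding, function names and example action are illustrative assumptions, not the exact implementation behind Table 5.

```python
import random

RESOURCES = ["X", "Y", "Z"]
# Consumer preferences as described in E.0.2: C1 dislikes Z, C2 dislikes X.
PREFERENCES = {"C1": ["X", "Y"], "C2": ["Y", "Z"]}

def step(state, altruistic_action):
    """One round of the resource environment.

    state: dict mapping consumer -> dict of resource -> 0/1 availability.
    altruistic_action: dict mapping resource -> 'C1', 'C2' or 'discard'.
    Returns the updated state and the number of failed consumption attempts.
    """
    # The altruistic agent distributes one fresh unit of each resource type.
    for resource, target in altruistic_action.items():
        if target in state:
            state[target][resource] = 1  # resources cannot be stacked (capped at 1)

    failures = 0
    # Each consumer picks one of its two preferred resources uniformly at random.
    for consumer, prefs in PREFERENCES.items():
        wanted = random.choice(prefs)
        if state[consumer][wanted] == 1:
            state[consumer][wanted] = 0  # consume it
        else:
            failures += 1  # attempted to consume an unavailable resource
    return state, failures

# Example round (hypothetical): give X to C1, Z to C2, discard Y.
state = {c: {r: 0 for r in RESOURCES} for c in PREFERENCES}
state, failed = step(state, {"X": "C1", "Y": "discard", "Z": "C2"})
```

Each round, the altruistic agent's three sub-actions assign the fresh resources; the recorded failures correspond to the performance measure defined in E.0.1 (lower is better).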
1. What is the main contribution of the paper in multi-agent RL? 2. What are the strengths and weaknesses of the proposed method in developing altruistic agents? 3. Do you have any concerns regarding the theoretical support for the objective in Section 3.2? 4. How does the reviewer assess the correlation between "choice" and the true environment reward in the presented environments? 5. What additional settings would you suggest to evaluate the method's performance in various environments? 6. Can you explain the contradiction in the leader agent's IC in the LBF environment as mentioned by the reviewer? 7. What do the scores in Table 1 represent, and why are they different for LBF and Tag environments?
Summary Of The Paper Review
Summary Of The Paper This paper introduces a method for developing altruistic agents in a multi-agent RL (MARL) setting. The core idea is that an altruistic agent, in the absence of any further reward or goal information, may try to increase the “choices” for the agent it is cooperating with as a proxy. The paper argues that this is a suitable proxy for the unknown true reward function in many environments, because optimal policies tend to choose actions which lead to greater choice (larger coverage of state visitation) in the future, using analysis of instrumental convergence. The method is evaluated on discrete environments where the altruistic agent has to help open a gate, a level-based foraging environment, and a continuous state space tag environment, and the paper shows that the method can lead to altruistic behavior and improved rewards obtained by the leader agent in these settings. Review Strong points of the paper: The main idea is quite conceptually simple and an interesting approach to develop altruism for MARL. The experimental section of the paper is well executed, and the analysis of the results is thorough. Particularly the choice estimate analysis for each environment is helpful for understanding why the method may help. The analysis of the failure case is also quite insightful. Weak points of the paper: The exposition in Section 3.2 about the theoretical support to the objective is rather nonspecific -- I feel that it would be helpful to introduce more specific, technical claims which follow from the arguments in Turner (2019) While the results on both the discrete gate and the continuous environment are strong, they seem less surprising because they are both navigation-type environments where the value of a state should correlate well with the “choice” afforded by being at that state. The analysis in the results of the hide-and-seek environment mention that it may have outperformed the supervised baseline because it provided a denser reward than the sparse supervised “catching” signal. So, these are environments where it’s unsurprising that the choice heuristic would work well. Because the contribution of this paper is quite dependent on the empirical performance of the method, I think that it needs to be evaluated on additional settings where it is less clear that “choice” is directly correlated with the reward, to be convincing as a generally useful metric when the true environment reward is unavailable. Questions: In Figure 3, for the LBF environment, I can see how the leader agent should have low IC when waiting at the apple at the top and lower IC when it can choose either of the two apples at the bottom. However, to me this seems to contradict the point that IC is a good proxy for the leader agent receiving high rewards, because the altruistic agent needs to help the leader to harvest the apple regardless, and the altruistic agent is closer to the apple at the top? What do the scores in Table 1 indicate? It is not described in the caption or in the text. Why is the score for LBF in percentages but it is not for Tag?
ICLR
Title Learning Altruistic Behaviours in Reinforcement Learning without External Rewards Abstract Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. N/A Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. 1 INTRODUCTION Altruistic behaviour is often described as behaviour that is intended to benefit others, sometimes at a cost for the actor (Dowding and Monroe, 1997; Fehr and Fischbacher, 2003). 
Such behaviour is a desirable trait when integrating artificial intelligence into various aspects of human life and society – such as personal artificial assistants, house or warehouse robots, autonomous vehicles, and even recommender systems for news and entertainment. By observing and interacting with us, we may expect that artificial agents could adapt to our behaviour and objectives, and learn to act helpfully and selflessly. Altruistic behaviour could be a step towards value alignment (Allen et al., 2005; Gabriel, 2020), which aims to incorporate common-sense human values into artificial agents. Typically, we could achieve such an altruistic behaviour through various forms of supervision such as providing ground-truth actions at each time step, training agents with reinforcement learning (RL) and suitable rewards, or through imitation learning (Song et al., 2018). However, none of the approaches above scale up easily. They either require a large amount of supervision or carefully crafted rewards that can easily be misstated, leading to unwanted behaviour (Russell, 2019, ch. 1). How can one agent support another agent without knowing its goals? One clue might be the instrumental convergence hypothesis (Bostrom, 2017; Omohundro, 2008; Russell, 2019), which states that intelligent agents with varied goals are likely to pursue common subgoals which are generally useful (instrumental). Some examples are resource acquisition, cognitive enhancement or self-preservation, which all increase an agent’s chance of achieving almost arbitrary final goals. This hypothesis has been validated theoretically under many models, including resource games (BensonTilsen and Soares, 2016) and large classes of policies in discrete MDPs (Turner et al., 2019). While instrumental convergence is central to the discussion of value alignment and safe AI (Bostrom, 2017), since many instrumental subgoals have harmful effects, we believe that it is also a key to supporting agents with ill-defined goals and values, such as humans. The reason is that enabling instrumental subgoals for other agents (or not impeding them) can be beneficial, for a wide variety of goals and preferences. Since these subgoals occur frequently for rational agents, enabling them has the highest chance of success in the absence of more information about the other agent’s preferences, even if it is not guaranteed in the worst case. We speculate that having the ability to reach many future states is one of the most general convergent subgoals. It subsumes self-preservation (avoiding absorbent states), resource acquisition (if they are prerequisites to some actions), and generally maintaining the ability to pursue many goals. There is theoretical evidence that many optimal agents pursue this subgoal (Turner et al., 2019) (see sec. 3.2). Thus, we propose to train agents to support other agents by maximizing their choice (future state availability). This unsupervised approach learns altruistic behaviour without any extrinsic supervision such as rewards or expert demonstrations. We evaluate our methods in three diverse multi-agent environments. We always assume there are at least two agents: the leader agent that executes its own policy and can be trained using standard supervised methods, and an altruistic agent whose role is to help the leader. The performance of the altruistic agent is thus defined as the reward (success) achieved by the leader agent. 
In all our environments, the overall success of the leader agent depends on the altruistic agents’ behaviour. We show that our unsupervised approach outperforms unsupervised baselines by a large margin and, in some cases, also outperforms the supervised ones. Finally, we demonstrate possible failure cases of our approach where maximising the leader agent’s choice can lead to suboptimal behaviour. Our work makes the following three contributions: • We devise a multi-agent RL framework for intrinsically motivated artificial agents that act altruistically by maximising the choice of others. • We define and evaluate three task-agnostic methods to estimate the choice that an agent has in a given situation, which are all related to the variety in states it can reach. • We experimentally evaluate our unsupervised approach in three multi-agent environments and are able to match and, in some cases, outperform supervised baselines. 2 RELATED WORK To the best of our knowledge, we are the first to experimentally evaluate unsupervised agents with purely altruistic objectives. However, there are many related concepts in the literature. In human-robot cooperation, a robotic agent aids a human agent in achieving its goals (PérezD’Arpino and Shah, 2015; Hadfield-Menell et al., 2016; Baker et al., 2006; Dragan and Srinivasa, 2013; Fisac et al., 2017; 2020; Javdani et al., 2015; Dragan and Srinivasa, 2013; Macindoe et al., 2012; Pellegrinelli et al., 2016). Methods from Inverse RL (IRL) are often employed to infer human goals, which are then utilized by the robot agent to support the human. IRL itself aims to learn objectives from observations and can be used in single-agent (Fu et al., 2017) and multi-agent scenarios (Song et al., 2018; Yu et al., 2019; Jeon et al., 2020). However, IRL relies on the existence of expert demonstrations, which are often difficult to get at scale. In complex environments, it also often suffers from ambiguity of solutions (Arora and Doshi, 2021). In single-agent reinforcement learning, empowerment – which measures an agent’s capacity to affect its environment (Klyubin et al., 2005; 2008) – is used to enable intrinsically-motivated exploration (Gregor et al., 2016; Volpi and Polani, 2020). Empowerment is also used for multiagent cooperation (Guckelsberger et al., 2016; Du et al., 2020). Du et al. (2020) use empowerment to develop a helper agent that assists a (simulated) human agent by maximizing the human’s empowerment, constituting the research work most similar to ours. In contrast to our approach, it requires privileged access to an environment simulator and therefore does not allow to learn helpful or altruistic behaviour only from observation. Furthermore, the approach is not unsupervised. There are also mathematical formalizations of instrumental convergence (Bostrom, 2017). BensonTilsen and Soares (2016) analyze a MDP that makes finite resource allocation explicit, and find that optimal agents with arbitrary reward functions tend to deplete available resources. Turner et al. (2019) propose “power” as a convergent subgoal, which they define as the average difference between the state value of an optimal policy and the reward in the same state. They show that, for environments with certain symmetries, a larger proportion of optimal agents prefer states with higher power. In sec. 3.2 we will describe these symmetries and relate the result to our method. 3 METHODS In this section, we formalize our framework. We start with the generic definition describing multiagent setting. 
Next, we describe our framework where we show various approaches to estimate choice for a single agent, and how it can be applied to a two-agent Markov Game. Markov Game. We consider a Markov Game (Littman, 1994), which generalizes a Markov Decision Process (MDP) to a multi-agent scenario. In a Markov Game, agents interact in the same environment. At time step t, each agent (the i-th of a total of N agents) takes the action a_i^t, receives a reward r_i^t, and finally the environment transitions from state s^t to s^{t+1}. A Markov Game is then defined by a state space S (s^t ∈ S), a distribution of initial states η, the action space A_i (a_i^t ∈ A_i) and reward function r_i(s, a_1, . . . , a_N) of each agent i, an environment state transition probability P(s^{t+1} | s^t, a_1, . . . , a_N), and finally the agents’ discount factors γ_i. 3.1 ESTIMATING CHOICE FOR A SINGLE AGENT We first consider a single-agent scenario, i.e. N = 1, where only a leader agent, indicated by the subscript L, interacts with the environment through its pretrained stochastic policy π_L. We assume that the leader acts Boltzmann-rationally, i.e. that it chooses high-value actions with higher probability. We believe this to be a reasonable assumption, as, in comparison to deterministic policies, stochastic policies are more robust (Zhang et al., 2020), and often achieve better results in real-world-like partially observable stochastic domains (Kaelbling et al., 1998). We denote the leader agent’s generic choice in a given state s as C_L(s), for which we propose concrete realizations below. Each method relies on the random variable S^{t+n}, with values s^{t+n} ∈ S, which refers to the leader agent’s state after n environment transitions from a starting state s^t. Its probability mass function is defined as the n-step state distribution of the underlying single-agent MDP, conditioned on the current state: p(s^{t+n} | s^t) = P(S^{t+n} = s | π_L, s^t). Discrete choice. Our first derived method simply defines the choice of the leader agent in state s^t as the number of states that it can reach within n transitions, which we refer to as its discrete choice: DC_L^n(s^t) = |range(S^{t+n} | s^t)|, (1) where range(X) is the set of all values that a random variable X takes on with positive probability and | · | measures the size of that set. While this count-based estimator of choice is intuitive and easily interpretable, it can hardly be estimated practically in large or continuous state spaces. It also discards information about the probability of reaching these states. Entropic choice. It can be shown that the entropy of a random variable X acts as a lower bound for the size of the set of values that X takes on with positive probability (Galvin, 2014, Property 2.6), i.e. H(X) ≤ log |range(X)|. We define a lower bound of the discrete choice by computing the Shannon entropy of the n-step state distribution, which we refer to as the agent’s entropic choice: EC_L^n(s^t) = H(S^{t+n} | s^t) = − Σ_{s∈S} P(S^{t+n} = s | π_L, s^t) log P(S^{t+n} = s | π_L, s^t), (2) which estimates the agent’s choice as the variety in its state after n transitions. Unlike eq. 1, EC_L^n can be computed in continuous state spaces or efficiently estimated by Monte Carlo sampling. Immediate choice. To further simplify entropic choice and reduce its computational complexity, we may limit the look-ahead horizon to n = 1 and assume an injective relationship from actions to states, i.e. no two actions taken at s^t lead to the same state s^{t+1}.
This assumption is often true in navigation environments, where different step-actions result in different states. We can then simplify the one-step state distribution of the leader agent to p(s^{t+1} | s^t) = P(S^{t+1} = s | π_L, s^t) = π_L(a_L^t = a | s^t), and compute a simplified, short-horizon entropic choice, the immediate choice: IC_L(s^t) = H(S^{t+1} | s^t) = H(π_L(a | s^t)). (3) Immediate choice (IC) can be easily computed as the entropy of the leader agent’s policy conditioned on the current state. Even though the assumptions made for immediate choice often do not hold in complex or real-world environments, we found empirically that this objective can yield good results. 3.2 OPTIMALITY OF CHOICE AS AN INSTRUMENTAL CONVERGENT SUBGOAL Turner et al. (2019) analyze the instrumental convergence of optimal agents on power-seeking subgoals and show that optimal policies tend to keep their options open (Prop. 6.9). They consider two distinct actions a and a′ taken at a state s′, leading into two sets of possible future states (for an infinite horizon). These sets of future states are represented as nodes in two graphs, respectively G and G′ (with edges weighted by the probability of transitioning from one state to another). They also assume that the states in G ∪ G′ can only be reached from s′ by taking actions a or a′. In the case where G is “similar” to a subgraph of G′, in the sense that they are equivalent up to arbitrary swapping of pairs of states, the authors prove that the probability of a′ being optimal is higher than the probability of a being optimal (for most reward function distributions). Therefore, if G′ contains more states than G, an optimal agent will choose a′ over a. Turner et al. (2019) thus lend theoretical support to our proposal: while there is no guarantee that any one optimal policy (corresponding to a rational agent with arbitrary reward function) pursues higher choice, in expectation (over a bounded space of reward functions) most policies do choose actions that lead to higher choice, all else being equal. As such, while we may not know a rational agent’s concrete goals, there is a high chance that choice works as an instrumental subgoal. 3.3 COMPARISON BETWEEN CHOICE AND EMPOWERMENT The empowerment (Klyubin et al., 2005) of a leader agent in a given state s^t and for horizon n is E_L^n(s^t) = max_{ω(a^n|s^t)} I(S^{t+n}; A^n | s^t) = max_{ω(a^n|s^t)} [H(S^{t+n} | s^t) − H(S^{t+n} | A^n, s^t)], with a^n as a sequence of n actions of the leader agent and ω as a probing distribution over its n-step action sequences. When setting the probing distribution ω equal to the leader agent’s policy, this expression simplifies to E_L^n(s^t) = EC_L^n(s^t) − H(S^{t+n} | A^n, s^t), with EC_L^n(s^t) as the entropic choice of the leader agent introduced in equation 2. If we further assume deterministic environment transitions, then empowerment becomes equal to entropic choice, i.e. E_L^n(s^t) = EC_L^n(s^t). In contrast to the previously introduced methods to estimate the choice of another agent, the empowerment of another agent cannot be estimated from observations of the environment transitions. To estimate another agent’s empowerment in a given state (E_L^n(s^t)), access to its action space as well as privileged access to an environment simulator is required, which violates the main assumption of our research work, i.e. learning to assist others only from observations of the environment transitions.
Even when assuming privileged access, computing empowerment in large or continuous-state environments often remains infeasible (Mohamed and Rezende, 2015; Gregor et al., 2016; Zhao et al., 2020), as it requires maximizing over all possible probing distributions ω of the leader agent. In contrast, estimating state entropy, as needed for the computation of the metrics introduced in this work, is feasible in large and continuous environments (Seo et al., 2021; Mutti et al., 2020). 3.4 BEHAVING ALTRUISTICALLY BY MAXIMIZING ANOTHER AGENT’S CHOICE Having considered three methods to estimate an agent’s choice (eq. 1-3), we now apply them to a Markov Game of two agents. The main hypothesis is that maximizing the choice of another agent is likely to allow it to reach more favourable regions of the state-space (for many possible policies of the agent), thus supporting it without a task-specific reward signal. Altruistic agent’s policy definition. In this Markov Game, one agent is the leader, with the subscript L, and another one is the altruistic agent, with the subscript A. We define the optimal policy of the altruistic agent as the one that maximizes the future discounted choice of the leader, π_A* = argmax_{π_A} Σ_{t=0}^{∞} γ_A^t C_L(s^t), (4) where the generic choice C_L(s^t) can be estimated by one of several methods: discrete choice DC_L^n(s^t), entropic choice EC_L^n(s^t) or immediate choice IC_L(s^t). Conditional estimates of choice. As the agents interact in the same environment, they both have influence over the system state s, which contains the state of both agents. This makes applying single-agent objectives based on the state distribution (such as eq. 1 and 2) difficult to translate to a multi-agent setting, since the states of both agents are intermingled. For example, an altruistic agent that maximizes entropic choice naively (eq. 2) will maximize both the state availability of the leader agent (which mirrors the single-agent entropic choice) and its own state availability (which does not contribute towards the altruism goal). To maximize entropic choice without also increasing the entropy of the altruistic agent’s actions, we propose to condition the choice estimate on the altruistic agent’s actions over the same time horizon, denoted by the random variable A_A^{t:t+n−1}: EC_L^n(s^t) = H(S^{t+n} | A_A^{t:t+n−1}, π_L, s^t). (5) In order to better understand eq. 5, we can use the chain rule of conditional entropy (Cover and Thomas, 2005, ch. 2) to decompose it into two terms: EC_L^n(s^t) = H(S^{t+n}, A_A^{t:t+n−1} | π_L, s^t) − H(A_A^{t:t+n−1} | π_L, s^t), respectively the joint entropy of the states and actions, and the entropy of the actions. Therefore, we can interpret this objective as the altruistic agent maximizing the variety of states and actions, but subtracting the variety of its own actions, which is the undesired quantity. We can also relate eq. 5 to discrete choice (eq. 1). Using the fact that H(X|E) ≤ log |range(X|E)| for a random variable X and event E (Galvin, 2014, Property 2.12), we see that eq. 5 is a lower bound for a count-based choice estimate (analogous to eq. 1), also conditioned on the altruistic agent’s actions: EC_L^n(s^t) ≤ log DC_L^n(s^t) = log |range(S^{t+n} | A_A^{t:t+n−1}, π_L, s^t)|. However, assuming simultaneous actions, the immediate choice estimate (eq. 3) stays unchanged, i.e. IC_L(s^t) = H(π_L(a | s^t) | a_A^t) = H(π_L(a | s^t)). The technical details of how these estimates can be computed from observations of the environment transitions are given in Appendix A.
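The three estimators in eq. 1-3 can be made concrete for a small discrete MDP in which the transition matrix induced by the leader's policy is known or has been estimated from observed transitions (cf. appendix A.1). The following numpy sketch is illustrative only; the transition matrix, state indexing and function names are hypothetical.

```python
import numpy as np

def n_step_distribution(p0: np.ndarray, T: np.ndarray, n: int) -> np.ndarray:
    """n-step state distribution p(s^{t+n} | s^t) for a one-hot start p0 and a
    row-stochastic transition matrix T induced by the leader's policy."""
    return p0 @ np.linalg.matrix_power(T, n)

def discrete_choice(p0, T, n):
    """Eq. 1: number of states reachable with positive probability in n steps."""
    return int(np.count_nonzero(n_step_distribution(p0, T, n) > 0))

def entropic_choice(p0, T, n):
    """Eq. 2: Shannon entropy of the n-step state distribution."""
    p = n_step_distribution(p0, T, n)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def immediate_choice(policy_probs):
    """Eq. 3: entropy of the leader's policy in the current state."""
    p = np.asarray(policy_probs)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Tiny 3-state example (hypothetical transition matrix under the leader's policy).
T = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
p0 = np.array([1.0, 0.0, 0.0])   # start in state 0
print(discrete_choice(p0, T, 2), entropic_choice(p0, T, 2), immediate_choice([0.5, 0.5, 0.0]))
```

Conditioning on the altruistic agent's actions (eq. 5) then amounts to using the transition matrix that corresponds to the altruistic agent's assumed action sequence, as done in the model-based estimator of appendix A.1.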
4 EXPERIMENTAL EVALUATION We introduce three multi-agent environments of increasing complexity (in appendix E, we additionally evaluate performance in a non-spatial environment), in which the success of a leader agent depends on the behaviour of one or more additional agents. In each environment, we first evaluate a subset of the proposed methods for choice estimation (DC_L^n, EC_L^n and IC_L) by comparing the estimated choice of the leader agent in minimalistic scenarios. We then evaluate our approach of behaving altruistically towards others by maximizing their choice (section 3.4) and measure the performance of our approach as the reward achieved by the leader agent. We provide videos of the emergent behaviours in the supp. mat. (see appendix F). We compare our method to both an unsupervised and a supervised approach. Note that the supervised approach has stronger assumptions, as it requires direct access to the leader agent’s reward function. We do not consider inverse RL (IRL) as a relevant baseline, as it would rely on demonstrations of expert behaviour, which we do not assume. Even if perfect knowledge of the state transition probabilities is assumed, this does not allow generating expert demonstrations of the leader agent’s policy, as its expert policy would in turn depend on the policy of the altruistic agent, which is yet to be found by IRL. 4.1 DISCRETE ENVIRONMENTS WITH CONTROLLABLE GATES We start by considering three different scenarios on a grid, illustrated in Fig. 1 (top row), with the starting positions of the leader (green) and an additional agent (blue) shown in faded colors; obstacles are gray, and agents may move in one of the four cardinal directions or stay still. Choice estimate analysis. We first verify whether the estimated choice for each state (agent position) correctly maps to our intuitive understanding of choice (that is, the diversity of actions that can be taken). Therefore, we conducted an analysis of the estimated choice of the leader agent using a simplified version of the environment (Fig. 1, top left), in which only the leader agent is present and selects actions uniformly at random. Fig. 1 (bottom row) shows the three different methods of estimating choice evaluated for each possible cell position of the leader agent. We can observe that states in less confined areas, e.g. further away from walls, generally feature higher choice estimates, with the least choice being afforded by the dead end at the right. All three methods’ estimates are qualitatively similar, which validates the chosen approximations. In line with the simplifications made, the immediate choice (IC) estimates tend to be more local, as can be observed when comparing the estimates for the cell at row 2, column 4. In conclusion, these results qualitatively agree with an intuitive understanding of the choice of an agent in a grid environment. Environment setup. In the Door Scenario (Fig. 1, top center), the door switch (row 1, col. 8) can only be operated by the altruistic agent. The door (row 2, col. 4) remains open as long as the altruistic agent is on the switch cell and is closed otherwise. As the leader agent always starts to the left of the door and the altruistic agent to the right, the leader agent can only attain its goal, the apple (row 2, col. 6), if the altruistic agent uses the door switch to enable the leader agent to pass through the door. In the Dead End Scenario (Fig. 1, top right), the door is always open, and the leader agent’s target object (green apple) is moved to the top right cell.
Hence, the leader agent can obtain the apple without additional help from the altruistic agent. However, the altruistic agent could potentially block the path by positioning itself at the entry to the dead end. This situation would be the opposite of altruistic behaviour and is, of course, undesired. We compare to a supervised approach, to Assistance via Empowerment (AvE, (Du et al., 2020)) and a random-policy baseline. Assistance via Empowerment baseline. We compare with the recently-proposed AvE, which has a similar goal (Du et al., 2020). There are two major differences: AvE is not unsupervised, and it requires privileged access to an environment simulator to produce estimates. Hence, its use in real or black-box environments is limited. We used the authors’ implementation with fixed hyperparameters, except for the crucial horizon n, for which we present a sweep in app. B. Training. We start by pretraining the leader agent with Q-Learning (Watkins and Dayan, 1992), with the altruistic agent executing a random policy. Hence, after convergence, the leader agent’s policy targets the green apple. Appendix B lists all details and parameters. Afterwards, the leader agent’s learning is frozen and the altruistic agent is trained; it always observes the position of the leader agent sL, its own position sA, and the environment state senv, which is composed of the door state (open, closed) and the food state (present, eaten). The altruistic agent is trained with Q-Learning to maximize the discounted future choice of the leader agent (see eq.. 4. For that, it uses one of the three proposed methods such as eq. 3, eq. 2 or eq. 1, as detailed in appendix A.1. Results. We investigate the developed behaviour of the altruistic agent after convergence for different choices of the hyperparameters – look-ahead horizon n ∈ {1, 3, 12} (which determines the scale at which choices are considered) and discount factor γa ∈ {0.1, 0.7} (which defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent). Success is binary: either the leader agent attains its goal (green apple), or not. In the Door Scenario (Fig. 1, top center), we found that, for longer horizons n and higher discount factors γa, the altruistic agent opens the door to allow the leader agent to reach its target, by occupying the switch position (square outline; row 1, col. 8). For smaller n and lower γa, the altruistic agent does not execute any coordinated policy and the leader does not succeed. Using the AvE method, we find that it only opens the door for n = 3, but fails to do so for n = 1 and n = 12. In the Dead End Scenario (Fig. 1, top right), we observe that, for longer horizons n and large discount factors γa, the altruistic agent stays out of the leader agent’s way by occupying a far-away cell (square outline; row 1, col. 6). For short horizons n and high discount factors γa, the altruistic agent actively blocks the entry to the hallway that contains the target (circle outline; row 3, col. 7), to prohibit the leader agent from entering this region of low estimated choice (recall that the choice for each cell is visualized in Fig. 1, bottom right). This failure case can be prevented by having a large enough horizon n and discount factor γa, analogously to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018). We find that this configuration performs consistently better than others in both scenarios, and hence is more preferred. 
On the other hand, the AvE method does not block the path of the leader agent for n = 1, but blocks its path for n = 3 and n = 12. We found that the resulting behaviour of our approach is independent of the used method for choice estimation, i.e. either discrete choice (eq. 1) or entropic choice (eq. 2) yield the same outcome, with immediate choice (eq. 3) being a special case of entropic choice. As for the AvE baseline, we hypothesize that the variance of results is due to the nature of the proxy used in practice, which includes components of empowerment from both agents (sec. 3.4). The binary outcomes for all hyperparameter combinations are given in appendix B. We also compare to a supervised baseline (receiving a reward when the leader obtains the apple), in which case the leader always succeeds. 4.2 LEVEL-BASED FORAGING EXPERIMENTS Computational efficiency. Due to the computational complexity resulting from the need to estimate a long-term distribution of states, p(st+n|st), we focus on immediate choice (IC) to estimate the leader agent’s choice in the remaining sections. Furthermore, in rare state-action sequences, the assumptions made for IC, i.e. deterministic environment transitions and an injective relationship from actions to states, may not hold. Nonetheless, we did not find this to adversely affect the results. Due to its dependence on access to the environment simulator and its computational complexity, we do not consider the AvE baseline for the remainder of experiments. Setup. We use a fully-observable multi-agent environment that enables us to assess the level of cooperation among agents (level-based foraging, LBF, Christianos et al. (2020)) to evaluate the performance of altruistic agents in more complex environments with discrete state spaces. We compare our method to a maximum-entropy approach from single-agent RL (Mutti et al., 2020) and a random-policy baseline. A visualization of the environment is depicted in Fig. 2 (left). The two agents can forage apples by simultaneously taking positions at different sides of a targeted apple, yielding a fixed reward. We first train two agents – which receive an equal reward for foraging – using Deep Q-Learning (DQL, Van Hasselt et al. (2015)), corresponding to fully-supervised sharedreward in multi-agent reinforcement learning (MARL). We then take one of these pretrained agents that has learned to forage apples when accompanied by a cooperating agent, freeze its policy, and place it as the leader agent (green) into the test scenario (additional details are provided in app. C). Choice estimate analysis. We first qualitatively evaluate IC as an estimator for choice in Fig. 3, by comparing representative scenarios. To quantitatively analyse IC as an estimator for the leader agent’s choice, we compare the leader agent’s average IC (over 100 episodes) in two scenarios, one in which it can acquire many rewards, i.e. the other agent acts cooperatively, and one where it can acquire only few rewards, i.e. the other agent takes random actions. We show the results in Table 1. We observe that the leader agent’s estimated choice is substantially higher when it is able to acquire high rewards. Note that the IC estimate does not have privileged access to the reward function of the leader agent, and so this experiment evaluates its worth as a generic proxy for the leader’s reward. 
Assuming that an agent is able to acquire higher rewards when having more choice, these results indicate that IC is a reasonable estimator for the leader agent’s choice in LBF. Training. We now consider an environment that consists of the previously pretrained leader and an additional altruistic agent, which is trained from scratch and does not receive a reward for foraging apples, but is rewarded according to the leader agent’s choice. Its reward is given as the current estimate of the leader agent’s IC (eq. 3) and it is trained using DQL. To compute its internal reward signal, the altruistic agent would therefore need to estimate the state transition probabilities, as detailed in A.2. To decouple our approach’s performance from that of the state transition estimator, we instead directly compute the altruistic agent’s reward using the leader agent’s policy. Results. We define the performance of the altruistic agent not as its achieved internal reward but as the reward achieved by the leader agent, i.e. its performance in enabling the leader agent to forage apples. Fig. 4 shows a comparison of the altruistic agent’s performance to that achieved by 3 baselines (two unsupervised and one supervised), averaged over 5 random seeds, with the standard deviation as the shaded area. It can be observed that the performance of the altruistic agent converges to a similar performance to that of the supervised agent, and outperforms the baseline approaches by a large margin. Furthermore, the IC improvement of the leader agent is correlated with its reward improvement, which supports using IC as a reasonable proxy for the choice of the leader agent. 4.3 MULTI-AGENT TAG GAME WITH PROTECTIVE AGENTS Setup. We use a multi-agent tag environment (Tag, Mordatch and Abbeel (2018); Lowe et al. (2017); Terry et al. (2020)), illustrated in Fig. 2 (right), to evaluate the capabilities of altruistic agents in complex environments with continuous state spaces. Adversaries are rewarded for catching the leader, which in turn receives a negative reward for being caught or crossing the environment boundaries. To speed up training, altruistic agents additionally receive a small negative reward for violating the environment boundaries. We pretrain the adversaries and the leader (without the presence of altruistic agents) using MADDPG (Lowe et al., 2017) and DDPG (Lillicrap et al., 2016) respectively. After pretraining, the adversary agents have learned to cooperatively chase the leader agent, which in turn has learned to flee from the adversaries. Exact setup specifications and all parameters are given in appendix D. Choice estimate analysis. As done for LBF, we evaluate the IC of the leader agent in representative scenarios in Fig. 3. We also quantitatively evaluate IC as an estimator for the leader agent’s choice, by comparing the leader agent’s IC per timestep for a scenario in which it receives high rewards to one where it receives low rewards. We again hypothesize that the leader agent is able to acquire higher rewards when having more choice. Table 1 shows that the estimated choice is substantially higher in the high-success scenario, indicating that IC is a reasonable estimator also in Tag. Training. We freeze the pretrained policies of the adversary agents and the leader agent and insert three additional altruistic agents which observe all agents but are not observed themselves. 
Each additional altruistic agent’s internal reward signal is given as the IC of the leader agent (equation 3), which is directly computed as done in LBF (see 4.2). Results. Performance of the altruistic agents is defined as the times per episode that the leader agent is caught by the adversaries, i.e. the lower the better. In Table 2, the performance of the team of three altruistically trained agents (ours) is compared to three relevant baselines, with the altruistic agents either removed (None), acting randomly (random), or solely receiving a small negative reward for violating the environment boundaries (cage). In contrast to LBF, we do not compare to an unsupervised exploration approach, as we are not aware of such an implementation for cooperative MARL. Additionally, we report results for the case in which the altruistic agents receive the same reward as the leader agent (supervised), possibly appended by a negative reward for violating the environment boundaries (supervised + cage). It can be observed that our approach outperforms all relevant baselines by a substantial margin and also outperforms the supervised approach. We hypothesize this to be due to the dense internal reward signal that our approach provides, as compared to the sparse rewards in the supervised scenario: recall that in the supervised scenario the additional altruistic agents receive a large negative reward only when the leader agent is caught by the adversaries, whereas our approach provides a dense reward signal that corresponds to the current estimate of the leader agent’s choice. Fig. 5 displays the emerging protective behaviour of altruistic agents trained with our approach. Results videos are found in the supplemental material. 5 CONCLUSIONS We lay out some initial steps into developing artificial agents that learn altruistic behaviour from observations and interactions with other agents. Our experimental results demonstrate that artificial agents can behave altruistically towards other agents without knowledge of their objective or any external supervision, by actively maximizing their choice. This objective is justified by theoretical work on instrumental convergence, which shows that for a large proportion of rational agents this will be a useful subgoal, and thus can be leveraged to design generally altruistic agents. This work was motivated by a desire to address the potential negative outcomes of deploying agents that are oblivious to the values and objectives of others into the real world. As such, we hope that our work serves both as a baseline and facilitator for future research into value alignment in simulation settings, and as a complementary objective to standard RL that biases the behaviour towards more altruistic policies. In addition to the positive impacts of deployed altruistic agents outside of simulation, we remark that altruistic proxy objectives do not yet come with strict guarantees of optimizing for other agents’ rewards, and identify failure modes (sec. 4.1) which are hyperparameter-dependent, and which we hope provide interesting starting points for future work. 6 ETHICS STATEMENT We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare. 7 REPRODUCIBILITY STATEMENT We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in table 4. All experiments were run on single cores of Intel Xeon E7-8867v3 processors (2.5 GHz). Training times are given in the respective sections in the appendix. 
For the LBF and Tag experiments, we report mean and standard deviation over five different random seeds. The Gridworld experiments yield deterministic results. We will provide the source code for all experiments conducted with the final version of this publication. We created detailed instructions on how to run the code in order to replicate the experimental outcomes presented in this work. 8 ACKNOWLEDGEMENTS We thank Thore Graepel and Yoram Bachrach for their helpful feedback. We are also grateful to the anonymous reviewers for their valuable suggestions. This work was supported by the Royal Academy of Engineering (RF\201819\18\163). A ESTIMATION OF LEADER AGENT’S CHOICE FROM OBSERVATION A.1 MODEL-BASED ESTIMATION OF CHOICE FROM OBSERVATIONS We introduce a model-based estimator of choice that is suitable for small-scale discrete-state environments, having the advantage that it is easily interpretable. Recalling how we compute the discrete choice and entropic choice estimates for the leader agent, an estimate of the n-step state distribution conditioned on the altruistic agent’s actions is needed, i.e. P(s^{t+n} | π_L, a_A^{t:t+n−1}, s^t). To simplify this computation, we assume that the altruistic agent’s action equals hold for the next n steps. More specifically, we assume that the altruistic agent’s state is unchanged for the next n steps. Further assuming that both the state and the action space are discrete, we compute P(s^{t+n} | π_L, a_A^{t:t+n−1}, s^t) = s_1^t T(s_A^t)^n, (6) with T(s_A^t)_{ij} = P(s^{t+1} = s_j | s^t = s_i, s_A^{t+1} = s_A^t), (7) where the state transition matrix T(s_A) holds the transition probabilities between all possible states, as a function of the state of the altruistic agent s_A. To compute these quantities, the system state is encoded into a one-hot vector s_1. The n-step discrete choice of the leader agent can then be computed as DC_L^n(s^t) = ‖s_1^t T(s_A^t)^n‖_0, (8) its n-step entropic choice as EC_L^n(s^t) = H(s_1^t T(s_A^t)^n), (9) and its immediate choice as IC_L(s^t) = H(π_L(a | s^t)) = H(s_1^t T(s_A^t)). (10) In environments with a discrete state and action space, the altruistic agent can hence use an estimate of the state transition matrix T to estimate the choice of the leader agent using either of the proposed methods, i.e. DC, EC or IC. An estimate of T can be built over time, by observing the environment transitions and computing the transition probabilities as relative frequencies of observed transitions. A.2 MODEL-FREE ESTIMATION OF CHOICE FROM OBSERVATIONS To limit the computational complexity, which is important for environments with large or continuous state spaces, we also consider immediate choice as an estimator for the leader agent’s choice (IC_L(s^t) = H(S^{t+1} | s^t)). As shown in section 3.1, this estimate can be simplified to H(S^{t+1} | s^t) = H(π_L(a | s^t)), under the named assumptions. Hence, to compute the immediate choice of the leader, the altruistic agent requires an estimate of the leader agent’s policy entropy, which can be learned from observation using a policy estimation network (Hong et al., 2018; Papoudakis et al., 2020; Mao et al., 2019; Grover et al., 2018). B GRIDWORLD EXPERIMENTS B.1 TRAINING PROCEDURE B.1.1 SETUP The environment setup is described and displayed in section 4.1. AvE baseline. We evaluate the AvE baseline for different horizons n. For each horizon, we tested the AvE baseline as implemented in the provided source code (https://github.com/yuqingd/ave), using the hyper-parameters suggested by the authors. The original implementation uses a look-ahead horizon n = 10. We found
We found 2https://github.com/yuqingd/ave that results are equal for both n = 10 and n = 12, which is why we only display results for n = 12. We further evaluated the AvE baseline for n between 1 and 12. For the Opens door task, we found that AvE yields success for n = 2, 3, 4, 5 and failing for the remaining. For the Non blocking task, we found that AvE yields success for n = 1, 2 and failing for the remaining. B.1.2 PRETRAINING We first pretrain the leader agent using tabular Q-Learning, with learning parameters given in Table 4. During this pretraining, the altruistic agent takes random actions. We train until all Q-Values are fully converged, i.e. training runs for 300000 environment steps. B.1.3 REWARD COMPUTATION FOR ALTRUISTIC AGENTS The altruistic agent is then also trained using tabular Q-Learning, and its internal reward signal is given as the choice estimate of the leader agent, i.e. either DCnL(s t), ECnL(s t) or ICL(st), which is computed with the model based-estimation introduced in appendix A.1. The altruistic agent records all environment transitions and frequently updates its estimate of the state transition matrix T (sA), which is needed to compute the internal reward signal for the altruistic agent. All training parameters can be found in Table 4. Training time is about 15 minutes per experiment. B.2 PERFORMANCE EVALUATION Performance of the altruistic agent is reported for two different categories, as shown in Table 3. For each category, we report success or failure for choice estimate look-ahead horizons n ∈ {1, 3, 12} and discount factors of the altruistic agent γa ∈ {0.1, 0.7}. Success or failure was always deterministic, conditioned on the experiment setup, i.e. 10 simulations were run for each setup which always yielded the same outcome. To estimate the leader agent’s choice, the altruistic agent uses either discrete choice (D, equations 1 and 8) or entropic choice (E, equations 2 and 9). It must be noted that horizon n = 12 is equivalent to an infinite horizon look-ahead for the given environment size and that entropic choice is equivalent to immediate choice (equations 3 and 10) at horizon n = 1, as the environment satisfies the necessary conditions listed for equation 3. Table 3 displays the results of this experiment. In the first row, it is evaluated whether the altruistic agent opens the door at all times, such that the leader agent can eat the green apple. It can be observed that the altruistic agent only opens the door for longer horizons n, respectively higher discount factors γa. Given the definitions of discrete choice (Equation 1) and entropic choice (Equation 2), it can be assumed that the choice horizon n determines the locality for which choice is considered and that the discount factor γa defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent. This is in line with the observed results for the first category (Opens door). It can be assumed that, for short horizons n, the altruistic agent does not open the door, as it does not estimate that this would lead to an increase in the leader agent’s choice. A similar argumentation follows for low discount factors γa. The bottom-row category evaluates whether the altruistic agent does not block the hallway that leads up to the leader agent’s target apple in the top right environment cell. This category demonstrates a possible failure case of the proposed approach of maximizing another agent’s choice. 
For short horizons n and high discount factors γa, the altruistic agent actively blocks the entry to the lowentropy hallway towards the top right cell – by constantly occupying cell (2, 6) – to prohibit the leader agent from entering this region of low estimated choice. This failure case can be prevented by an appropriate selection of the hyperparameters – horizon n and discount factor γa. It is related to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018); if chosen incorrectly, the agent does not foster environment rewards in lowentropy regions. A possible solution to this problem would be to define a constrained optimization problem, as shown by Haarnoja and Abbeel (2018). B.3 ABLATION STUDY ON JOINT LEARNING Training. To investigate the effects of joint learning of the leader agent’s and the altruistic agent’s policy, we adapted the training process described in section 4.1 for the Gridworld experiments as following. Instead of first learning the policy of the leader agent while the altruistic agent takes random actions, we initialized both policies from scratch and trained both agents simultaneously with the parameters given in Table 4. Results. We evaluated the outcome for the same scenarios, i.e the scenarios described in section 4.1. We found that the results for the individual test cases were equivalent to those achieved when training the leader and the altruistic agent subsequently, i.e. the results are equivalent to those displayed in Table 3. C LEVEL BASED FORAGING EXPERIMENTS C.1 TRAINING PROCEDURE C.1.1 SETUP We adopted the Level Based Foraging3 environment as given in Christianos et al. (2020). We only focus on two-agent scenarios and only consider the subset of possible environments that require full cooperation among agents, i.e. those where food can only be foraged by two agents cooperatively. We therefore only consider environments where both agents are at level one, and all present food is at level two. In the original implementation, both agents have to simultaneously select the eat action while docking at different sides of a food object to forage the object and receive the reward. To reduce training time, we simplify this setup by reducing the action space to up, down, left, right, stay, i.e. we remove the eat action and enable agents to forage food by being simultaneously at different sides of a food object, with no further action required. C.1.2 PRETRAINING To obtain a pretrained leader agent, we first train two agents in the environment that are equally rewarded for foraging food. This setup corresponds to shared-reward cooperative MARL (Tan, 1993). Both agents are trained using Deep Q Learning (DQL, (Van Hasselt et al., 2015)), using a fully connected neural network with two hidden layers and five output values, resembling the Q values of the five possible actions. The exact training parameters are listed in Table 4. We then take either one of the two agents and set it as the pretrained leader agent for the subsequent evaluation of the altruistic agent. C.1.3 TRAINING OF ADDITIONAL AGENTS We then insert an additional agent into the environment that shall act altruistically towards the leader agent. This additional agent is trained in the same fashion and with the same parameters as the previously trained leader agents. Only its reward signal is different, as laid out in the next section. 
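For concreteness, the sketch below illustrates the training loop for this additional agent. All names (`env`, `leader_policy`, `altruistic_agent`, `compute_altruistic_reward`) are assumed, simplified interfaces rather than the released implementation; the point is that the loop is identical to the leader agents' DQL training, with only the reward call differing, which is left abstract here because it is specified in the next section.

```python
# Minimal DQL-style training loop for the additional (altruistic) agent.
# All interfaces are assumed for illustration; the reward function is
# defined in section C.1.4 and simply plugs into `compute_altruistic_reward`.
def train_altruistic_agent(env, leader_policy, altruistic_agent,
                           compute_altruistic_reward, num_steps=100_000):
    state = env.reset()
    for step in range(num_steps):
        a_leader = leader_policy.act(state)           # frozen, pretrained leader
        a_altruistic = altruistic_agent.act(state)    # epsilon-greedy DQL action
        next_state, leader_reward, done, _ = env.step([a_leader, a_altruistic])

        # The only difference to the leader's own training: the reward signal.
        r = compute_altruistic_reward(leader_policy, state, next_state)

        altruistic_agent.store(state, a_altruistic, r, next_state, done)
        altruistic_agent.update()                     # one DQL gradient step
        state = env.reset() if done else next_state
```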
C.1.4 REWARD COMPUTATION FOR ADDITIONAL AGENTS We compare four different approaches for how the reward of the additional agent is defined, respectively how it behaves. Random: The agent takes random actions. Supervised: The agent receives the same reward as the leader agent, i.e. a shared reward as in cooperative MARL. Ours: 3https://github.com/semitable/lb-foraging The reward of the additional agent is defined as the immediate choice of the leader agent, as detailed in equation 3. We compute the leader agent’s policy entropy by computing the entropy of the softmax of the leader agent’s Q values in the given state. We further consider an unsupervised baseline, as detailed in the next paragraph. Unsupervised baseline (MaxEnt). As an unsupervised baseline, we implemented the MEPOL approach of Mutti et al. (2020). Their task-agnostic unsupervised exploration approach maximizes the entropy over the state distribution of trajectory rollouts. For this baseline, the additional agent is trained with the implementation given by the authors4, which itself builds on TRPO (Schulman et al., 2015). We leave all parameters unchanged but evaluate different learning rates; lr ∈ {1e − 6, 1e− 5, 1e− 4, 1e− 3, 1e− 2, 1e− 1}. Best results were achieved for a learning rate of 1e− 5, which was hence picked as the relevant baseline. C.2 PERFORMANCE EVALUATION Each experiment was run for 5 different random seeds and mean and standard deviation are reported. Training progress is shown in Figure 4. Evaluations are computed every 10000 environment steps for 200 episodes, with the exploration set to zero. Training time was about 14 hours for each run. Results are shown in Fig. 4. D TAG EXPERIMENTS D.1 TRAINING PROCEDURE D.1.1 PRETRAINING We use the Simple Tag (Tag) implementation by Terry et al. (2020)5 which is unchanged as compared to the original implementation of Mordatch and Abbeel (2018)6, only fixing minor errors. We first adopt the original configuration and pretrain three adversaries and one good agent (leader agent) using the parameters listed in Table 4. We use MADDPG (Lowe et al., 2017)7 to train adversary agents, and modify the framework as follows. The last layer of each agent’s actor-network outputs one value for each of the environment’s five possible actions, over which the softmax is computed. We then sample the agent’s action from the output softmax vector, which corresponds to the probabilities with which the agent takes a specific action in a given state. We train the leader agent with DDPG (Lillicrap et al., 2016),7 where we equally modify the output layer. Each actor and critic network is implemented as a fully-connected neural network with two hidden layers, with layer sizes as given in Table 4. To make the environment more challenging for the leader agent, we decrease its maximum speed and acceleration to 70% of the original value. We next insert three additional agents into the environment whose observations include all agents and objects. These additional agents are not observed by adversary agents or the leader agent. The additional agents are of the same size as the adversary agents, and their acceleration and maximum velocity are equal to that of the leader agent. To speed up training, we made the following changes to the environment, which are applied to our approach as well as to all baselines. First, we spawn the three additional agents in the vicinity of the leader agent, which itself is spawned at a random position. 
Furthermore, we randomly pick two out of the three adversary agents and decrease their maximum acceleration and maximum speed by 50%. We made these changes to be able to observe substantial differences between the different approaches after a training time of less than 24h. D.1.2 TRAINING OF ADDITIONAL AGENTS We train these three additionally inserted agents with the previously described modified version of MADDPG. The reward for each agent is defined either according to our developed approach, or any of the given baselines, as detailed in the next section. 4https://github.com/muttimirco/mepol 5https://github.com/PettingZoo-Team/PettingZoo 6https://github.com/openai/multiagent-particle-envs 7https://github.com/starry-sky6688/MADDPG D.1.3 REWARD COMPUTATION FOR ADDITIONAL AGENTS FOR DIFFERENT BASELINES We consider the following implementations for the reward computation of the additional agents, respectively different environment configurations. None: For this scenario, the additional agents are removed from the environment. The remaining approaches purely differ in the way that the reward of the additional agents is computed. No other changes are made. Random: The additional agents take random actions. Cage: The additional agents receive a negative reward for violating the environment boundaries, which is equal to the negative reward that the leader agent receives for itself violating the environment boundaries (part of the original Tag implementation). Supervised: The additional agents receive the same reward as the leader agent. That is, they receive a reward of -10 if the leader agent is caught by the adversaries and a small negative reward if the leader agent violates the environment boundaries. Supervised + Cage: The additional agents receive the same reward as the leader agent, and an additional small negative reward if they themselves violate the environment boundaries. Ours: The reward of the additional agents is defined as the immediate choice of the leader agent, as detailed in eq. 3. To reduce the variance in the estimate of the leader agent’s immediate choice, we implement an ensemble of five pretrained actor-networks for the leader agent, evaluate the policy entropy of each network, and take the median of the achieved values as the reward for the altruistic agents. Furthermore, the additional agents receive a small negative reward for themselves violating the environment boundaries. D.2 PERFORMANCE EVALUATION We train Cage, Supervised, Supervised + Cage and Ours for five different random seeds with parameters as detailed in Table 4. We then compute the results listed in Table 2 by freezing all weights across all networks, setting the exploration noise to zero and computing the average and standard deviation over 500 rollout episodes. E RESOURCE ENVIRONMENT E.0.1 MOTIVATION AND OVERVIEW This environment is a special case of the general resource-based MDP proposed by Benson-Tilsen and Soares (2016), which they used to show that intelligent agents pursue instrumentally useful subgoals. The motivation behind the choice for this environment is to evaluate our proposal in non-spatial and non-navigation environments. In the environment, there are 3 resource types, which two “consumer” agents may consume as an action. Each consumer has different preferences (reward function), and so will only consume 2 of the resource types. 
A third, altruistic agent, receives one resource unit of each type to distribute among the consumers, and its goal is to satisfy the preferences of the consumers without knowing their reward function. We define its performance as the average number of times that the consumers fail to consume their preferred resource (so lower is better). We compare our method to a supervised agent that is explicitly trained with the consumers’ reward function, as well as to an agent that assigns the resources randomly. E.0.2 ENVIRONMENT DESCRIPTION The environment is expressed as a Markov Game (see section 3). The Markov game is composed of two human-inspired consumers with subscript C1, C2 and an altruistic agent with subscript A. Three types of resources exist, RX , RY and RZ . The environment state s is given by the number of resources of each type available to each of the consumers. For example, s = [(1, 0, 1), (0, 1, 0)] means that agent C1 has one resource each of type X and Y available, while agent C2 only has one resource of type Z available. At the beginning of each time step, the altruistic agent is provided with one resource per category, i.e. RX , RY and RZ . The altruistic agent can assign each resource individually to any agent or discard the resource. The altruistic agent’s action space is hence defined by one sub-action per resource, i.e. aA = (aXA , a Y A , a Z A). Each sub-action assigns the resource either to one of the consumers or discards it. The resources are then distributed according to the action taken by the altruistic agent and the environment state is updated. Resources cannot be stacked, which means that each agent can only have one resource per category available at a time. Next, the consumers attempt to consume one resource each, according to their preference. Agent C1 dislikes resource RZ , hence it chooses RX or RY with equal probability. Agent C2 dislikes resource RX , hence it chooses RY or RZ with equal probability. The actions of agents C1 and C2 are sampled accordingly and the environment state is updated. For each round, we record how many agents failed to consume a resource that was not available. E.1 TRAINING The altruistic agent is trained with Q-Learning (Watkins and Dayan, 1992) to maximize the discounted future choice of the consumers (see eq. 4). For that, it uses one of the three proposed objectives, namely IC (eq. 3), EC (eq. 2) or DC (eq. 1), which it estimates as detailed in appendix A.1. The exact hyper-parameters are given in Table 4. We compare the performance of the altruistic agent that maximizes the choice of the consumers to that of a supervised agent. The reward of the supervised agent is the negative of the number of consumers that attempted to consume a resource, in that time step, and failed. Further, we compare to a random-policy baseline that distributes the resources randomly but does not discard any resources. E.2 RESULTS Table 5 shows that the results achieved by the altruistic agent trained with choice are equivalent to those achieved by the supervised agent. Furthermore, they are significantly better than those achieved by an agent with a random policy. F VIDEOS OF BEHAVIOUR OF ALTRUISTIC AGENT We provide videos for the most relevant outcomes of our experiments in the supplementary material. F.1 VIDEOS FOR RESULTS OF GRIDWORLD EXPERIMENTS (SECTION 4.1) F.1.1 DOOR SCENARIO IN FIG. 
1 TOP CENTER

01 Altruistic agent opens door for leader agent: It can be observed that the altruistic agent has learned to operate the door switch to enable the leader agent to pass through the door and reach its target on the other side.

02 Altruistic agent does not open door for leader agent (failure case): It can be observed that, for an unfavourable choice of hyperparameters, the altruistic agent does not open the door.

F.1.2 DEAD END SCENARIO IN FIG. 1 TOP RIGHT

03 Altruistic agent gives way to leader agent: It can be observed that the altruistic agent does not get in the way of the leader agent, which is hence able to reach its target in the top right cell.

04 Altruistic agent blocks path of leader agent (failure case): It can be observed that, for an unfavourable choice of hyperparameters, the altruistic agent blocks the entry to the hallway towards the right side of the environment such that the leader agent cannot reach its target at the top right cell. This happens as the altruistic agent forcefully maximizes the estimated choice of the leader agent by hindering it from entering the hallway, which is a region of less estimated choice.

F.2 VIDEO FOR RESULTS OF LEVEL BASED FORAGING (SECTION 4.2)

05 Altruistic agent enables leader to forage apples: It can be observed how the altruistic agent (blue) learned to coordinate its movements with the leader agent (green) to enable the leader agent to forage apples. It has learned this behaviour purely through optimizing for the leader agent's choice and is itself not rewarded for foraging apples.

F.3 VIDEO FOR RESULTS OF TAG (SECTION 4.3)

06 Altruistic agents protect leader from adversaries: It can be observed how the altruistic agents (blue colors) learned to coordinate their movements to protect the leader agent (green) from its adversaries. The adversaries (red colors) try to catch the leader, which in turn tries to flee from them. The altruistic agents protect the leader by actively intercepting the paths of the adversaries. They have learned this behaviour purely through optimizing for the leader agent's choice.
1. What is the main contribution of the paper regarding altruistic behavior in artificial agents?
2. What are the strengths of the proposed approach, particularly in its generality and ability to assist other agents without knowing their reward functions or policies?
3. What are the weaknesses of the paper, such as the need for hyperparameter search and comparison with other baselines?
4. How does the method assume the leader can solve the task during training, and what are the potential issues with this assumption?
5. Aren't there cases where the altruistic agent can hurt the leader's performance, and how can the set of states enabled by the altruistic agent be biased towards desirable states for the leader?
6. Can the authors confirm whether the understanding of the method's limitations is correct and explain why they decided not to tackle the setting of training both agents simultaneously?
7. How does the approach require more training, and what is the trade-off between final performance and learning efficiency?
8. Could the algorithm be extended to non-deterministic environments or allow the two agents to learn at the same time?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes an unsupervised learning method for training an agent to assist another agent (called the leader) in solving its task (thus displaying a type of altruistic behavior) without access to the other agent's reward function or policy. The authors propose to use the notion of maximizing the leader's choice, which is formalized as maximizing the number of different states the leader can reach at any point (within a given number of steps from its current state). The authors propose three variants of this method and evaluate them on three different domains, while also comparing them with other approaches.

Review
Strengths
I think the paper aims to tackle an important problem, namely that of inducing more altruistic behavior in artificial agents acting in the same environment with others and assisting others in achieving their goals without knowing what those are. I also like the proposed approach because it seems quite general and doesn't assume access to the leader's reward function, state goal, policy, or trajectories (unlike other methods in this space). The method is new as far as I know, although some of its elements have been used in other contexts, but I think the authors do a good job at explaining the connections to related work. I also found the analysis of the different choice formulations from Figure 1 to be quite insightful.

Weaknesses
Baselines
In the paper, you write that for the AvE baseline, you used the hyperparameters suggested by the authors, but this doesn't seem fair since typically these methods need to be fine-tuned for each task / domain, as they may require very different hyperparameters than the ones used in the original paper. Could you do a hyperparameter search for AvE and present the results with the best HPs found on the tasks used for evaluation? It wasn't very clear to me why you are not using the AvE baseline for the LBF and Tag domains and also not using the Supervised baseline for the Gridworld tasks. Could you please add these for completion or explain in more detail why they are not used for comparison? I think it would be useful to compare the methods with an oracle, which would be the optimal policy for assisting the leader agent. This could provide insight into how far the current method is from such a policy and whether there is still potential for improvement on these tasks by future work.

Clarity
One of my biggest concerns is that it seems like there might be some significant modes and implicit assumptions made by the method which are not openly discussed in the paper. First of all, it seems like the method assumes that the leader can solve the task during its training stage, while the altruistic agent is taking random actions. This implies that the altruistic agent doesn't need to learn a very complex behavior and is not necessary for the leader's success (even if it might help the leader achieve the goal sooner). A more realistic setting would be one in which the two are trained at the same time or, at least, there are multiple training stages for both of them (that alternate). Aren't there cases where the altruistic agent can hurt the leader's performance? For example, there might be dead-end states in the environment which can be activated by the altruistic agent's actions but which would be better avoided by the leader.
Given that the two agents are not training at the same time, the leader's policy may not be robust to such changes in the environment / new states, so they may end up in them if the altruistic agent aims to increase the number of reachable states. Would it be possible to bias the set of states enabled by the altruistic agent towards the set of states which are desirable for the leader to reach? Could the authors confirm whether this understanding is correct and explain why they decided to not tackle this setting / train in this way? I think these issues should at least be discussed in more depth in the paper. It would also be great if the authors could train the method on a similar scenario (where the approach isn't necessarily expected to do well) to better understand the limitations of this approach and when it can be expected to be effective. Another thing which is not openly discussed in the paper is the fact that it seems like your approach may actually require more training, since you have two separate training stages (i.e. first you train the leader on the task and then you train the altruistic agent to assist the leader). Can you comment on the trade-off between final performance and learning efficiency and make this fact more transparent in the paper? It would be great to include a graph with performance as a function of the number of samples used for training for the entire training process, with a breakdown for the leader and altruistic agent. At the beginning of the paper, it is not very clear what metrics you are looking to improve, so I suggest mentioning that in the introduction. Initially, it is not clear whether the altruistic agent is supposed to help with a) the percentage of times the leader can solve the goal, b) the number of steps needed to solve the goal, c) the leader's sample / computational efficiency, or something else.

Limitations
Could you extend this algorithm to non-deterministic environments? Would you just need to replace the unsupervised learning objective with empowerment? Along similar lines, is it possible to extend the algorithm so that the two agents learn at the same time? This would involve dealing with the non-stationarity of the leader's policy. This seems like it would be a more general setting and might be in a better position to handle more challenging tasks, so it would be great to at least discuss it in the conclusion section.
ICLR
Title Learning Altruistic Behaviours in Reinforcement Learning without External Rewards Abstract Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. N/A Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. 1 INTRODUCTION Altruistic behaviour is often described as behaviour that is intended to benefit others, sometimes at a cost for the actor (Dowding and Monroe, 1997; Fehr and Fischbacher, 2003). 
Such behaviour is a desirable trait when integrating artificial intelligence into various aspects of human life and society – such as personal artificial assistants, house or warehouse robots, autonomous vehicles, and even recommender systems for news and entertainment. By observing and interacting with us, we may expect that artificial agents could adapt to our behaviour and objectives, and learn to act helpfully and selflessly. Altruistic behaviour could be a step towards value alignment (Allen et al., 2005; Gabriel, 2020), which aims to incorporate common-sense human values into artificial agents. Typically, we could achieve such an altruistic behaviour through various forms of supervision such as providing ground-truth actions at each time step, training agents with reinforcement learning (RL) and suitable rewards, or through imitation learning (Song et al., 2018). However, none of the approaches above scale up easily. They either require a large amount of supervision or carefully crafted rewards that can easily be misstated, leading to unwanted behaviour (Russell, 2019, ch. 1). How can one agent support another agent without knowing its goals? One clue might be the instrumental convergence hypothesis (Bostrom, 2017; Omohundro, 2008; Russell, 2019), which states that intelligent agents with varied goals are likely to pursue common subgoals which are generally useful (instrumental). Some examples are resource acquisition, cognitive enhancement or self-preservation, which all increase an agent’s chance of achieving almost arbitrary final goals. This hypothesis has been validated theoretically under many models, including resource games (BensonTilsen and Soares, 2016) and large classes of policies in discrete MDPs (Turner et al., 2019). While instrumental convergence is central to the discussion of value alignment and safe AI (Bostrom, 2017), since many instrumental subgoals have harmful effects, we believe that it is also a key to supporting agents with ill-defined goals and values, such as humans. The reason is that enabling instrumental subgoals for other agents (or not impeding them) can be beneficial, for a wide variety of goals and preferences. Since these subgoals occur frequently for rational agents, enabling them has the highest chance of success in the absence of more information about the other agent’s preferences, even if it is not guaranteed in the worst case. We speculate that having the ability to reach many future states is one of the most general convergent subgoals. It subsumes self-preservation (avoiding absorbent states), resource acquisition (if they are prerequisites to some actions), and generally maintaining the ability to pursue many goals. There is theoretical evidence that many optimal agents pursue this subgoal (Turner et al., 2019) (see sec. 3.2). Thus, we propose to train agents to support other agents by maximizing their choice (future state availability). This unsupervised approach learns altruistic behaviour without any extrinsic supervision such as rewards or expert demonstrations. We evaluate our methods in three diverse multi-agent environments. We always assume there are at least two agents: the leader agent that executes its own policy and can be trained using standard supervised methods, and an altruistic agent whose role is to help the leader. The performance of the altruistic agent is thus defined as the reward (success) achieved by the leader agent. 
In all our environments, the overall success of the leader agent depends on the altruistic agents’ behaviour. We show that our unsupervised approach outperforms unsupervised baselines by a large margin and, in some cases, also outperforms the supervised ones. Finally, we demonstrate possible failure cases of our approach where maximising the leader agent’s choice can lead to suboptimal behaviour. Our work makes the following three contributions: • We devise a multi-agent RL framework for intrinsically motivated artificial agents that act altruistically by maximising the choice of others. • We define and evaluate three task-agnostic methods to estimate the choice that an agent has in a given situation, which are all related to the variety in states it can reach. • We experimentally evaluate our unsupervised approach in three multi-agent environments and are able to match and, in some cases, outperform supervised baselines. 2 RELATED WORK To the best of our knowledge, we are the first to experimentally evaluate unsupervised agents with purely altruistic objectives. However, there are many related concepts in the literature. In human-robot cooperation, a robotic agent aids a human agent in achieving its goals (PérezD’Arpino and Shah, 2015; Hadfield-Menell et al., 2016; Baker et al., 2006; Dragan and Srinivasa, 2013; Fisac et al., 2017; 2020; Javdani et al., 2015; Dragan and Srinivasa, 2013; Macindoe et al., 2012; Pellegrinelli et al., 2016). Methods from Inverse RL (IRL) are often employed to infer human goals, which are then utilized by the robot agent to support the human. IRL itself aims to learn objectives from observations and can be used in single-agent (Fu et al., 2017) and multi-agent scenarios (Song et al., 2018; Yu et al., 2019; Jeon et al., 2020). However, IRL relies on the existence of expert demonstrations, which are often difficult to get at scale. In complex environments, it also often suffers from ambiguity of solutions (Arora and Doshi, 2021). In single-agent reinforcement learning, empowerment – which measures an agent’s capacity to affect its environment (Klyubin et al., 2005; 2008) – is used to enable intrinsically-motivated exploration (Gregor et al., 2016; Volpi and Polani, 2020). Empowerment is also used for multiagent cooperation (Guckelsberger et al., 2016; Du et al., 2020). Du et al. (2020) use empowerment to develop a helper agent that assists a (simulated) human agent by maximizing the human’s empowerment, constituting the research work most similar to ours. In contrast to our approach, it requires privileged access to an environment simulator and therefore does not allow to learn helpful or altruistic behaviour only from observation. Furthermore, the approach is not unsupervised. There are also mathematical formalizations of instrumental convergence (Bostrom, 2017). BensonTilsen and Soares (2016) analyze a MDP that makes finite resource allocation explicit, and find that optimal agents with arbitrary reward functions tend to deplete available resources. Turner et al. (2019) propose “power” as a convergent subgoal, which they define as the average difference between the state value of an optimal policy and the reward in the same state. They show that, for environments with certain symmetries, a larger proportion of optimal agents prefer states with higher power. In sec. 3.2 we will describe these symmetries and relate the result to our method. 3 METHODS In this section, we formalize our framework. We start with the generic definition describing multiagent setting. 
Next, we describe our framework, where we show various approaches to estimate choice for a single agent, and how it can be applied to a two-agent Markov Game.

Markov Game. We consider a Markov Game (Littman, 1994), which generalizes a Markov Decision Process (MDP) to a multi-agent scenario. In a Markov Game, agents interact in the same environment. At time step t, each agent (the i-th of a total of N agents) takes the action $a_i^t$, receives a reward $r_i^t$, and finally the environment transitions from state $s^t$ to $s^{t+1}$. A Markov Game is then defined by a state space $S$ ($s^t \in S$), a distribution of initial states $\eta$, the action space $A_i$ ($a_i^t \in A_i$) and reward function $r_i(s, a_1, \ldots, a_N)$ of each agent i, an environment state transition probability $P(s^{t+1} \mid s^t, a_1, \ldots, a_N)$, and finally the agents' discount factors $\gamma_i$.

3.1 ESTIMATING CHOICE FOR A SINGLE AGENT

We first consider a single-agent scenario, i.e. N = 1, where only a leader agent, indicated by the subscript L, interacts with the environment through its pretrained stochastic policy $\pi_L$. We assume that the leader acts Boltzmann-rationally, i.e. that it chooses high-value actions with higher probability. We believe this to be a reasonable assumption, as, in comparison to deterministic policies, stochastic policies are more robust (Zhang et al., 2020) and often achieve better results in real-world-like partially observable stochastic domains (Kaelbling et al., 1998). We denote the leader agent's generic choice in a given state s as $C_L(s)$, for which we propose concrete realizations below. Each method relies on the random variable $S^{t+n}$, with values $s^{t+n} \in S$, which refers to the leader agent's state after n environment transitions from a starting state $s^t$. Its probability mass function is defined as the n-step state distribution of the underlying single-agent MDP, conditioned on the current state: $p(s^{t+n} \mid s^t) = P(S^{t+n} = s \mid \pi_L, s^t)$.

Discrete choice. Our first method simply defines the choice of the leader agent in state $s^t$ as the number of states that it can reach within n transitions, which we refer to as its discrete choice:
$$DC_L^n(s^t) = \left| \mathrm{range}\!\left( S^{t+n} \mid s^t \right) \right|, \quad (1)$$
where range(X) is the set of all values that a random variable X takes on with positive probability and $|\cdot|$ measures the size of that set. While this count-based estimator of choice is intuitive and easily interpretable, it can hardly be estimated practically in large or continuous state spaces. It also discards information about the probability of reaching these states.

Entropic choice. It can be shown that the entropy of a random variable X acts as a lower bound for the size of the set of values that X takes on with positive probability (Galvin, 2014, Property 2.6), i.e. $H(X) \leq \log |\mathrm{range}(X)|$. We define a lower bound of the discrete choice by computing the Shannon entropy of the n-step state distribution, which we refer to as the agent's entropic choice:
$$EC_L^n(s^t) = H(S^{t+n} \mid s^t) = - \sum_{s \in S} P(S^{t+n} = s \mid \pi_L, s^t) \log P(S^{t+n} = s \mid \pi_L, s^t), \quad (2)$$
which estimates the agent's choice as the variety in its state after n transitions. Unlike eq. 1, $EC_L^n$ can be computed in continuous state spaces or efficiently estimated by Monte Carlo sampling.

Immediate choice. To further simplify entropic choice and reduce its computational complexity, we may limit the look-ahead horizon to n = 1 and assume an injective relationship from actions to states, i.e. no two actions taken at $s^t$ lead to the same state $s^{t+1}$.
This assumption is often true in navigation environments, where different step-actions result in different states. We can then simplify the one-step state distribution of the leader agent to $p(s^{t+1} \mid s^t) = P(S^{t+1} = s \mid \pi_L, s^t) = \pi_L(a_L^t = a \mid s^t)$, and compute a simplified, short-horizon entropic choice, the immediate choice:
$$IC_L(s^t) = H(S^{t+1} \mid s^t) = H\!\left( \pi_L^t(a \mid s^t) \right). \quad (3)$$
Immediate choice (IC) can be easily computed as the entropy of the leader's policy conditioned on the current state. Even though the assumptions made for immediate choice often do not hold in complex or real-world environments, we found empirically that this objective can yield good results.

3.2 OPTIMALITY OF CHOICE AS AN INSTRUMENTAL CONVERGENT SUBGOAL

Turner et al. (2019) analyze the instrumental convergence of optimal agents on power-seeking subgoals and show that optimal policies tend to keep their options open (Prop. 6.9). They consider two distinct actions a and a′ taken at a state s′, leading into two sets of possible future states (for an infinite horizon). These sets of future states are represented as nodes in two graphs, respectively G and G′ (with edges weighted by the probability of transitioning from one state to another). They also assume that the states in G ∪ G′ can only be reached from s′ by taking actions a or a′. In the case where G is "similar" to a subgraph of G′, in the sense that they are equivalent up to arbitrary swapping of pairs of states, the authors prove that the probability of a′ being optimal is higher than the probability of a being optimal (for most reward function distributions). Therefore, if G′ contains more states than G, an optimal agent will choose a′ over a. Turner et al. (2019) thus lend theoretical support to our proposal: while there is no guarantee that any one optimal policy (corresponding to a rational agent with arbitrary reward function) pursues higher choice, in expectation (over a bounded space of reward functions) most policies do choose actions that lead to higher choice, all else being equal. As such, while we may not know a rational agent's concrete goals, there is a high chance that choice works as an instrumental subgoal.

3.3 COMPARISON BETWEEN CHOICE AND EMPOWERMENT

The empowerment (Klyubin et al., 2005) of a leader agent in a given state $s^t$ and for horizon n is
$$E_L^n(s^t) = \max_{\omega(a^n \mid s^t)} I(S^{t+n}; A^n \mid s^t) = \max_{\omega(a^n \mid s^t)} \left[ H(S^{t+n} \mid s^t) - H(S^{t+n} \mid A^n, s^t) \right],$$
with $a^n$ as a sequence of n actions of the leader agent and $\omega$ as a probing distribution over its n-step action sequences. When setting the probing distribution $\omega$ equal to the leader agent's policy, the expression above simplifies to $E_L^n(s^t) = EC_L^n(s^t) - H(S^{t+n} \mid A^n, s^t)$, with $EC_L^n(s^t)$ as the entropic choice of the leader agent introduced in equation 2. If we further assume deterministic environment transitions, then empowerment becomes equal to entropic choice, i.e. $E_L^n(s^t) = EC_L^n(s^t)$. In contrast to the previously introduced methods to estimate the choice of another agent, the empowerment of another agent cannot be estimated from observations of the environment transitions alone. To estimate another agent's empowerment in a given state, $E_L^n(s^t)$, access to its action space as well as privileged access to an environment simulator is required, which violates the main assumption of our research work, i.e. learning to assist others only from observations of the environment transitions.
Even when assuming privileged access, computing empowerment in large or continuous-state environments often remains infeasible (Mohamed and Rezende, 2015; Gregor et al., 2016; Zhao et al., 2020), as it requires maximizing over all possible probing distributions $\omega$ of the leader agent. In contrast, estimating state entropy, as needed for the computation of the metrics introduced in this work, is feasible in large and continuous environments (Seo et al., 2021; Mutti et al., 2020).

3.4 BEHAVING ALTRUISTICALLY BY MAXIMIZING ANOTHER AGENT'S CHOICE

Having considered three methods to estimate an agent's choice (eq. 1-3), we now apply them to a Markov Game of two agents. The main hypothesis is that maximizing the choice of another agent is likely to allow it to reach more favourable regions of the state-space (for many possible policies of the agent), thus supporting it without a task-specific reward signal.

Altruistic agent's policy definition. In this Markov Game, one agent is the leader, with the subscript L, and another one is the altruistic agent, with the subscript A. We define the optimal policy of the altruistic agent as the one that maximizes the future discounted choice of the leader,
$$\pi_A^* = \arg\max_{\pi_A} \sum_{t=0}^{\infty} \gamma_A^t \, C_L(s^t), \quad (4)$$
where the generic choice $C_L(s^t)$ can be estimated by one of several methods: discrete choice $DC_L^n(s^t)$, entropic choice $EC_L^n(s^t)$ or immediate choice $IC_L(s^t)$.

Conditional estimates of choice. As the agents interact in the same environment, they both have influence over the system state s, which contains the state of both agents. This makes single-agent objectives based on the state distribution (such as eq. 1 and 2) difficult to translate to a multi-agent setting, since the states of both agents are intermingled. For example, an altruistic agent that maximizes entropic choice naively (eq. 2) will maximize both the state availability of the leader agent (which mirrors the single-agent entropic choice) and its own state availability (which does not contribute towards the altruism goal). To maximize entropic choice without also increasing the entropy of the altruistic agent's actions, we propose to condition the choice estimate on the altruistic agent's actions over the same time horizon, denoted by the random variable $A_A^{t:t+n-1}$:
$$EC_L^n(s^t) = H(S^{t+n} \mid A_A^{t:t+n-1}, \pi_L, s^t). \quad (5)$$
In order to better understand eq. 5, we can use the chain rule of conditional entropy (Cover and Thomas, 2005, ch. 2) to decompose it into two terms, $EC_L^n(s^t) = H(S^{t+n}, A_A^{t:t+n-1} \mid \pi_L, s^t) - H(A_A^{t:t+n-1} \mid \pi_L, s^t)$, respectively the joint entropy of the states and actions, and the entropy of the actions. Therefore, we can interpret this objective as the altruistic agent maximizing the variety of states and actions, but subtracting the variety of its own actions, which is the undesired quantity. We can also relate eq. 5 to discrete choice (eq. 1). Using the fact that $H(X \mid E) \leq \log |\mathrm{range}(P(X \mid E))|$ for a random variable X and event E (Galvin, 2014, Property 2.12), we see that eq. 5 is a lower bound for a count-based choice estimate (analogous to eq. 1), also conditioned on the altruistic agent's actions: $EC_L^n(s^t) \leq \log DC_L^n(s^t) = \log |\mathrm{range}(S^{t+n} \mid A_A^{t:t+n-1}, \pi_L, s^t)|$. However, assuming simultaneous actions, the immediate choice estimate (eq. 3) stays unchanged, i.e. $IC_L(s^t) = H(\pi_L^t(a \mid s^t) \mid a_A^t) = H(\pi_L^t(a \mid s^t))$. The technical details of how these estimates can be computed from observations of the environment transitions are given in Appendix A.
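To make the estimators above concrete, the following minimal sketch (Python/NumPy, with assumed variable names; an illustration of eqs. 1-3 rather than the authors' released implementation) computes discrete, entropic and immediate choice for a small discrete MDP, given an estimate of the state transition matrix T(s_A) described in Appendix A. Conditioning on the altruistic agent's actions (eq. 5) corresponds here to propagating the state through the transition matrix for the altruistic agent's held state.

```python
import numpy as np

def n_step_distribution(s_onehot, T, n):
    # p(s^{t+n} | s^t): propagate the one-hot state through T(s_A) n times (eq. 6),
    # assuming the altruistic agent holds its state for the next n steps.
    p = s_onehot.astype(float)
    for _ in range(n):
        p = p @ T
    return p

def discrete_choice(s_onehot, T, n, tol=1e-12):
    # DC^n_L(s^t): number of states reachable with positive probability (eqs. 1, 8).
    p = n_step_distribution(s_onehot, T, n)
    return int(np.sum(p > tol))

def entropic_choice(s_onehot, T, n):
    # EC^n_L(s^t): Shannon entropy of the n-step state distribution (eqs. 2, 9).
    p = n_step_distribution(s_onehot, T, n)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def immediate_choice(policy_probs):
    # IC_L(s^t): entropy of the leader's policy in the current state (eqs. 3, 10).
    p = np.asarray(policy_probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Example with an assumed, illustrative 4-state transition matrix (not learned here):
T = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])
s = np.eye(4)[0]                                    # leader starts in state 0
print(discrete_choice(s, T, n=3))                   # states reachable within 3 steps
print(entropic_choice(s, T, n=3))                   # entropy lower-bounds log of the above
print(immediate_choice([0.25, 0.25, 0.25, 0.25]))   # log(4) for a uniform policy
```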
4 EXPERIMENTAL EVALUATION We introduce three multi-agent environments of increasing complexity1, in which the success of a leader agent depends on the behaviour of one or more additional agents. In each environment, we first evaluate a subset of the proposed methods for choice estimation (DCnL, EC n L and ICL) by comparing the estimated choice of the leader agent in minimalistic scenarios. We then evaluate our approach of behaving altruistically towards others by maximizing their choice (section 3.4) and measure performance of our approach as the reward achieved by the leader agent. We provide videos of the emergent behaviours in the supp. mat. (see appendix F). We compare our method to both an unsupervised and a supervised approach. Note that the supervised approach has stronger assumptions, as it requires direct access to the leader agent’s reward function. We do not consider inverse RL (IRL) as a relevant baseline, as it would rely on demonstrations of expert behaviour, which we do not assume. Even if perfect knowledge of the state transition probabilities is assumed, this does not allow generating expert demonstrations of the leader agent’s policy, as its expert policy would in turn depend on the policy of the altruistic agent, which is yet to be found by IRL. 4.1 DISCRETE ENVIRONMENTS WITH CONTROLLABLE GATES We start by considering three different scenarios on a grid, illustrated in Fig. 1 (top row), with the starting positions of the leader (green) and an additional agent (blue) shown in faded colors, obstacles are gray, and agents may move in one of the four cardinal directions or stay still. Choice estimate analysis. We first verify whether the estimated choice for each state (agent position) correctly maps to our intuitive understanding of choice (that is, the diversity of actions that can be taken). Therefore, we conducted an analysis of the estimated choice of the leader agent using a simplified version of the environment (Fig. 1, top left), in which only the leader agent is present and selects actions uniformly at random. Fig. 1 (bottom row) shows the three different methods of estimating choice evaluated for each possible cell position of the leader agent. We can observe that states in less confined areas, e.g. further away from walls, generally feature higher choice estimates, with the least choice being afforded by the dead end at the right. All three method’s estimates are qualitatively similar, which validates the chosen approximations. In line 1In appendix E, we evaluate performance in a non-spatial environment. with the simplifications made, the immediate choice (IC) estimates tend to be more local, as can be observed when comparing the estimates for the cell at row 2, column 4. In conclusion, these results qualitatively agree with an intuitive understanding of choice of an agent in a grid environment. Environment setup. In the Door Scenario (Fig. 1, top center), the door switch (row 1, col. 8) can only be operated by the altruistic agent. The door (row 2, col. 4) remains open as long as the altruistic agent is on the switch cell and is closed otherwise. As the leader agent always starts to the left of the door and the altruistic agent to the right, the leader agent can only attain its goal, the apple (row 2, col. 6), if the altruistic agent uses the door switch to enable the leader agent to pass through the door. In the Dead End Scenario (Fig. 1, top right), the door is always open, and the leader agent’s target object (green apple) is moved to the top right cell. 
Hence, the leader agent can obtain the apple without additional help from the altruistic agent. However, the altruistic agent could potentially block the path by positioning itself at the entry to the dead end. This situation would be the opposite of altruistic behaviour and is, of course, undesired. We compare to a supervised approach, to Assistance via Empowerment (AvE, (Du et al., 2020)) and a random-policy baseline. Assistance via Empowerment baseline. We compare with the recently-proposed AvE, which has a similar goal (Du et al., 2020). There are two major differences: AvE is not unsupervised, and it requires privileged access to an environment simulator to produce estimates. Hence, its use in real or black-box environments is limited. We used the authors’ implementation with fixed hyperparameters, except for the crucial horizon n, for which we present a sweep in app. B. Training. We start by pretraining the leader agent with Q-Learning (Watkins and Dayan, 1992), with the altruistic agent executing a random policy. Hence, after convergence, the leader agent’s policy targets the green apple. Appendix B lists all details and parameters. Afterwards, the leader agent’s learning is frozen and the altruistic agent is trained; it always observes the position of the leader agent sL, its own position sA, and the environment state senv, which is composed of the door state (open, closed) and the food state (present, eaten). The altruistic agent is trained with Q-Learning to maximize the discounted future choice of the leader agent (see eq.. 4. For that, it uses one of the three proposed methods such as eq. 3, eq. 2 or eq. 1, as detailed in appendix A.1. Results. We investigate the developed behaviour of the altruistic agent after convergence for different choices of the hyperparameters – look-ahead horizon n ∈ {1, 3, 12} (which determines the scale at which choices are considered) and discount factor γa ∈ {0.1, 0.7} (which defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent). Success is binary: either the leader agent attains its goal (green apple), or not. In the Door Scenario (Fig. 1, top center), we found that, for longer horizons n and higher discount factors γa, the altruistic agent opens the door to allow the leader agent to reach its target, by occupying the switch position (square outline; row 1, col. 8). For smaller n and lower γa, the altruistic agent does not execute any coordinated policy and the leader does not succeed. Using the AvE method, we find that it only opens the door for n = 3, but fails to do so for n = 1 and n = 12. In the Dead End Scenario (Fig. 1, top right), we observe that, for longer horizons n and large discount factors γa, the altruistic agent stays out of the leader agent’s way by occupying a far-away cell (square outline; row 1, col. 6). For short horizons n and high discount factors γa, the altruistic agent actively blocks the entry to the hallway that contains the target (circle outline; row 3, col. 7), to prohibit the leader agent from entering this region of low estimated choice (recall that the choice for each cell is visualized in Fig. 1, bottom right). This failure case can be prevented by having a large enough horizon n and discount factor γa, analogously to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018). We find that this configuration performs consistently better than others in both scenarios, and hence is more preferred. 
On the other hand, the AvE method does not block the path of the leader agent for n = 1, but blocks its path for n = 3 and n = 12. We found that the resulting behaviour of our approach is independent of the used method for choice estimation, i.e. either discrete choice (eq. 1) or entropic choice (eq. 2) yield the same outcome, with immediate choice (eq. 3) being a special case of entropic choice. As for the AvE baseline, we hypothesize that the variance of results is due to the nature of the proxy used in practice, which includes components of empowerment from both agents (sec. 3.4). The binary outcomes for all hyperparameter combinations are given in appendix B. We also compare to a supervised baseline (receiving a reward when the leader obtains the apple), in which case the leader always succeeds. 4.2 LEVEL-BASED FORAGING EXPERIMENTS Computational efficiency. Due to the computational complexity resulting from the need to estimate a long-term distribution of states, p(st+n|st), we focus on immediate choice (IC) to estimate the leader agent’s choice in the remaining sections. Furthermore, in rare state-action sequences, the assumptions made for IC, i.e. deterministic environment transitions and an injective relationship from actions to states, may not hold. Nonetheless, we did not find this to adversely affect the results. Due to its dependence on access to the environment simulator and its computational complexity, we do not consider the AvE baseline for the remainder of experiments. Setup. We use a fully-observable multi-agent environment that enables us to assess the level of cooperation among agents (level-based foraging, LBF, Christianos et al. (2020)) to evaluate the performance of altruistic agents in more complex environments with discrete state spaces. We compare our method to a maximum-entropy approach from single-agent RL (Mutti et al., 2020) and a random-policy baseline. A visualization of the environment is depicted in Fig. 2 (left). The two agents can forage apples by simultaneously taking positions at different sides of a targeted apple, yielding a fixed reward. We first train two agents – which receive an equal reward for foraging – using Deep Q-Learning (DQL, Van Hasselt et al. (2015)), corresponding to fully-supervised sharedreward in multi-agent reinforcement learning (MARL). We then take one of these pretrained agents that has learned to forage apples when accompanied by a cooperating agent, freeze its policy, and place it as the leader agent (green) into the test scenario (additional details are provided in app. C). Choice estimate analysis. We first qualitatively evaluate IC as an estimator for choice in Fig. 3, by comparing representative scenarios. To quantitatively analyse IC as an estimator for the leader agent’s choice, we compare the leader agent’s average IC (over 100 episodes) in two scenarios, one in which it can acquire many rewards, i.e. the other agent acts cooperatively, and one where it can acquire only few rewards, i.e. the other agent takes random actions. We show the results in Table 1. We observe that the leader agent’s estimated choice is substantially higher when it is able to acquire high rewards. Note that the IC estimate does not have privileged access to the reward function of the leader agent, and so this experiment evaluates its worth as a generic proxy for the leader’s reward. 
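As a concrete illustration of the internal reward used for the altruistic agent in this setting, the sketch below computes IC as the entropy of a Boltzmann (softmax) policy over the frozen leader's Q-values, mirroring the description in Appendix C. Function and variable names are assumed for illustration and are not the released implementation.

```python
import numpy as np

def immediate_choice_reward(leader_q_values, temperature=1.0):
    """Altruistic agent's reward: IC of the leader (eq. 3), computed as the
    entropy of the softmax over the frozen leader's Q-values."""
    q = np.asarray(leader_q_values, dtype=float) / temperature
    q = q - q.max()                      # subtract max for numerical stability
    probs = np.exp(q) / np.exp(q).sum()
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log(probs)))

# Usage inside the altruistic agent's environment step (assumed names):
# q_leader = leader_q_network(state)     # frozen, pretrained leader
# r_altruistic = immediate_choice_reward(q_leader)
```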
Assuming that an agent is able to acquire higher rewards when having more choice, these results indicate that IC is a reasonable estimator for the leader agent’s choice in LBF. Training. We now consider an environment that consists of the previously pretrained leader and an additional altruistic agent, which is trained from scratch and does not receive a reward for foraging apples, but is rewarded according to the leader agent’s choice. Its reward is given as the current estimate of the leader agent’s IC (eq. 3) and it is trained using DQL. To compute its internal reward signal, the altruistic agent would therefore need to estimate the state transition probabilities, as detailed in A.2. To decouple our approach’s performance from that of the state transition estimator, we instead directly compute the altruistic agent’s reward using the leader agent’s policy. Results. We define the performance of the altruistic agent not as its achieved internal reward but as the reward achieved by the leader agent, i.e. its performance in enabling the leader agent to forage apples. Fig. 4 shows a comparison of the altruistic agent’s performance to that achieved by 3 baselines (two unsupervised and one supervised), averaged over 5 random seeds, with the standard deviation as the shaded area. It can be observed that the performance of the altruistic agent converges to a similar performance to that of the supervised agent, and outperforms the baseline approaches by a large margin. Furthermore, the IC improvement of the leader agent is correlated with its reward improvement, which supports using IC as a reasonable proxy for the choice of the leader agent. 4.3 MULTI-AGENT TAG GAME WITH PROTECTIVE AGENTS Setup. We use a multi-agent tag environment (Tag, Mordatch and Abbeel (2018); Lowe et al. (2017); Terry et al. (2020)), illustrated in Fig. 2 (right), to evaluate the capabilities of altruistic agents in complex environments with continuous state spaces. Adversaries are rewarded for catching the leader, which in turn receives a negative reward for being caught or crossing the environment boundaries. To speed up training, altruistic agents additionally receive a small negative reward for violating the environment boundaries. We pretrain the adversaries and the leader (without the presence of altruistic agents) using MADDPG (Lowe et al., 2017) and DDPG (Lillicrap et al., 2016) respectively. After pretraining, the adversary agents have learned to cooperatively chase the leader agent, which in turn has learned to flee from the adversaries. Exact setup specifications and all parameters are given in appendix D. Choice estimate analysis. As done for LBF, we evaluate the IC of the leader agent in representative scenarios in Fig. 3. We also quantitatively evaluate IC as an estimator for the leader agent’s choice, by comparing the leader agent’s IC per timestep for a scenario in which it receives high rewards to one where it receives low rewards. We again hypothesize that the leader agent is able to acquire higher rewards when having more choice. Table 1 shows that the estimated choice is substantially higher in the high-success scenario, indicating that IC is a reasonable estimator also in Tag. Training. We freeze the pretrained policies of the adversary agents and the leader agent and insert three additional altruistic agents which observe all agents but are not observed themselves. 
Each additional altruistic agent’s internal reward signal is given as the IC of the leader agent (equation 3), which is directly computed as done in LBF (see 4.2). Results. Performance of the altruistic agents is defined as the times per episode that the leader agent is caught by the adversaries, i.e. the lower the better. In Table 2, the performance of the team of three altruistically trained agents (ours) is compared to three relevant baselines, with the altruistic agents either removed (None), acting randomly (random), or solely receiving a small negative reward for violating the environment boundaries (cage). In contrast to LBF, we do not compare to an unsupervised exploration approach, as we are not aware of such an implementation for cooperative MARL. Additionally, we report results for the case in which the altruistic agents receive the same reward as the leader agent (supervised), possibly appended by a negative reward for violating the environment boundaries (supervised + cage). It can be observed that our approach outperforms all relevant baselines by a substantial margin and also outperforms the supervised approach. We hypothesize this to be due to the dense internal reward signal that our approach provides, as compared to the sparse rewards in the supervised scenario: recall that in the supervised scenario the additional altruistic agents receive a large negative reward only when the leader agent is caught by the adversaries, whereas our approach provides a dense reward signal that corresponds to the current estimate of the leader agent’s choice. Fig. 5 displays the emerging protective behaviour of altruistic agents trained with our approach. Results videos are found in the supplemental material. 5 CONCLUSIONS We lay out some initial steps into developing artificial agents that learn altruistic behaviour from observations and interactions with other agents. Our experimental results demonstrate that artificial agents can behave altruistically towards other agents without knowledge of their objective or any external supervision, by actively maximizing their choice. This objective is justified by theoretical work on instrumental convergence, which shows that for a large proportion of rational agents this will be a useful subgoal, and thus can be leveraged to design generally altruistic agents. This work was motivated by a desire to address the potential negative outcomes of deploying agents that are oblivious to the values and objectives of others into the real world. As such, we hope that our work serves both as a baseline and facilitator for future research into value alignment in simulation settings, and as a complementary objective to standard RL that biases the behaviour towards more altruistic policies. In addition to the positive impacts of deployed altruistic agents outside of simulation, we remark that altruistic proxy objectives do not yet come with strict guarantees of optimizing for other agents’ rewards, and identify failure modes (sec. 4.1) which are hyperparameter-dependent, and which we hope provide interesting starting points for future work. 6 ETHICS STATEMENT We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare. 7 REPRODUCIBILITY STATEMENT We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in table 4. All experiments were run on single cores of Intel Xeon E7-8867v3 processors (2.5 GHz). Training times are given in the respective sections in the appendix. 
For the LBF and Tag experiments, we report mean and standard deviation over five different random seeds. The Gridworld experiments yield deterministic results. We will provide the source code for all experiments conducted with the final version of this publication. We created detailed instructions on how to run the code in order to replicate the experimental outcomes presented in this work. 8 ACKNOWLEDGEMENTS We thank Thore Graepel and Yoram Bachrach for their helpful feedback. We are also grateful to the anonymous reviewers for their valuable suggestions. This work was supported by the Royal Academy of Engineering (RF\201819\18\163). A ESTIMATION OF LEADER AGENT'S CHOICE FROM OBSERVATION A.1 MODEL-BASED ESTIMATION OF CHOICE FROM OBSERVATIONS We introduce a model-based estimator of choice that is suitable for small-scale discrete-state environments, having the advantage that it is easily interpretable. Recalling how we compute the discrete choice and entropic choice estimates for the leader agent, an estimate of the n-step state distribution conditioned on the altruistic agent's actions is needed, i.e. $P(s^{t+n} \mid \pi_L, a_A^{t:t+n-1}, s^t)$. To simplify this computation, we assume the altruistic agent's action to equal hold for the next n steps. More specifically, we assume that the altruistic agent's state is unchanged for the next n steps. Furthermore assuming that both the state and the action space are discrete, we compute $P(s^{t+n} \mid \pi_L, a_A^{t:t+n-1}, s^t) = s^t \, T(s_A^t)^n$, (6) with $T(s_A^t)_{ij} = P(s^{t+1} = s_j \mid s^t = s_i, s_A^{t+1} = s_A^t)$, (7) where the state transition matrix $T(s_A)$ holds the transition probabilities between all possible states, as a function of the state of the altruistic agent $s_A$. To compute $T(s_A)$, the system state is encoded into a one-hot vector $s_1$. The n-step discrete choice of the leader agent can then be computed as $DC_L^n(s^t) = \| s_1^t \, T(s_A^t)^n \|_0$, (8) its n-step entropic choice as $EC_L^n(s^t) = H\big(s_1^t \, T(s_A^t)^n\big)$, (9) and its immediate choice as $IC_L(s^t) = H\big(\pi_L^t(a \mid s^t)\big) = H\big(s_1^t \, T(s_A^t)\big)$. (10) In environments with a discrete state and action space, the altruistic agent can hence use an estimate of the state transition matrix T to estimate the choice of the leader agent using either of the proposed methods, i.e. DC, EC or IC. An estimate of T can be built over time, by observing the environment transitions and computing the transition probabilities as relative frequencies of observed transitions. A.2 MODEL-FREE ESTIMATION OF CHOICE FROM OBSERVATIONS To limit the computational complexity, which is important for environments with large or continuous state spaces, we also consider immediate choice as an estimator for the leader agent's choice ($IC_L(s^t) = H(S^{t+1} \mid s^t)$). As shown in section 3.1, this estimate can be simplified to $H(S^{t+1} \mid s^t) = H(\pi_L^t(a \mid s^t))$, under the named assumptions. Hence, to compute the immediate choice of the leader, the altruistic agent requires an estimate of the leader agent's policy entropy, which can be learned from observation using a policy estimation network (Hong et al., 2018; Papoudakis et al., 2020; Mao et al., 2019; Grover et al., 2018). B GRIDWORLD EXPERIMENTS B.1 TRAINING PROCEDURE B.1.1 SETUP The environment setup is described and displayed in section 4.1. AvE baseline. We evaluate the AvE baseline for different horizons n. For each horizon, we tested the AvE baseline as implemented in the provided source code (https://github.com/yuqingd/ave), using the hyper-parameters suggested by the authors. The original implementation uses a look-ahead horizon n = 10.
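For concreteness, the model-based estimators of appendix A.1 (eqs. 6-10) can be sketched in a few lines of numpy; the toy transition matrix and all variable names below are illustrative assumptions, not the actual implementation or environment.

import numpy as np

def choice_estimates(T, state_index, n):
    # T: (num_states, num_states) row-stochastic transition matrix for a fixed
    # altruistic-agent state s_A (eq. 7); state_index: index of the current system state.
    num_states = T.shape[0]
    s1 = np.zeros(num_states)
    s1[state_index] = 1.0                       # one-hot encoding of s^t
    dist_n = s1 @ np.linalg.matrix_power(T, n)  # n-step state distribution (eq. 6)
    dc = int((dist_n > 1e-12).sum())            # discrete choice, eq. (8)
    p = dist_n[dist_n > 1e-12]
    ec = float(-(p * np.log(p)).sum())          # entropic choice, eq. (9)
    dist_1 = s1 @ T
    q = dist_1[dist_1 > 1e-12]
    ic = float(-(q * np.log(q)).sum())          # immediate choice, eq. (10)
    return dc, ec, ic

# Toy 3-state chain with an absorbing state 2, used only to exercise the estimators.
T = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
print(choice_estimates(T, state_index=0, n=3))

As noted in A.1, an estimate of T can then be maintained online from relative frequencies of observed transitions.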
We found that results are equal for both n = 10 and n = 12, which is why we only display results for n = 12. We further evaluated the AvE baseline for n between 1 and 12. For the Opens door task, we found that AvE yields success for n = 2, 3, 4, 5 and fails for the remaining. For the Non blocking task, we found that AvE yields success for n = 1, 2 and fails for the remaining. B.1.2 PRETRAINING We first pretrain the leader agent using tabular Q-Learning, with learning parameters given in Table 4. During this pretraining, the altruistic agent takes random actions. We train until all Q-Values are fully converged, i.e. training runs for 300000 environment steps. B.1.3 REWARD COMPUTATION FOR ALTRUISTIC AGENTS The altruistic agent is then also trained using tabular Q-Learning, and its internal reward signal is given as the choice estimate of the leader agent, i.e. either $DC_L^n(s^t)$, $EC_L^n(s^t)$ or $IC_L(s^t)$, which is computed with the model-based estimation introduced in appendix A.1. The altruistic agent records all environment transitions and frequently updates its estimate of the state transition matrix $T(s_A)$, which is needed to compute the internal reward signal for the altruistic agent. All training parameters can be found in Table 4. Training time is about 15 minutes per experiment. B.2 PERFORMANCE EVALUATION Performance of the altruistic agent is reported for two different categories, as shown in Table 3. For each category, we report success or failure for choice estimate look-ahead horizons n ∈ {1, 3, 12} and discount factors of the altruistic agent γa ∈ {0.1, 0.7}. Success or failure was always deterministic, conditioned on the experiment setup, i.e. 10 simulations were run for each setup which always yielded the same outcome. To estimate the leader agent's choice, the altruistic agent uses either discrete choice (D, equations 1 and 8) or entropic choice (E, equations 2 and 9). It must be noted that horizon n = 12 is equivalent to an infinite horizon look-ahead for the given environment size and that entropic choice is equivalent to immediate choice (equations 3 and 10) at horizon n = 1, as the environment satisfies the necessary conditions listed for equation 3. Table 3 displays the results of this experiment. In the first row, it is evaluated whether the altruistic agent opens the door at all times, such that the leader agent can eat the green apple. It can be observed that the altruistic agent only opens the door for longer horizons n, respectively higher discount factors γa. Given the definitions of discrete choice (Equation 1) and entropic choice (Equation 2), it can be assumed that the choice horizon n determines the locality for which choice is considered and that the discount factor γa defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent. This is in line with the observed results for the first category (Opens door). It can be assumed that, for short horizons n, the altruistic agent does not open the door, as it does not estimate that this would lead to an increase in the leader agent's choice. A similar argumentation follows for low discount factors γa. The bottom-row category evaluates whether the altruistic agent does not block the hallway that leads up to the leader agent's target apple in the top right environment cell. This category demonstrates a possible failure case of the proposed approach of maximizing another agent's choice.
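As a rough illustration of the training loop described in B.1.3 (tabular Q-learning of the altruistic agent with the leader's choice estimate as internal reward), the sketch below maintains a count-based estimate of the transition matrix and uses the resulting entropic choice as reward. For brevity it keeps a single transition matrix rather than one per altruistic-agent state, and all names, shapes and hyper-parameter values are placeholders rather than the actual implementation.

import numpy as np

num_states, num_actions = 64, 5
alpha, gamma_a, eps = 0.1, 0.7, 1e-12

counts = np.full((num_states, num_states), eps)  # transition counts (simplified: not conditioned on s_A)
Q = np.zeros((num_states, num_actions))          # Q-table of the altruistic agent

def update_transition_estimate(s, s_next):
    counts[s, s_next] += 1.0

def entropic_choice(s, n=3):
    T = counts / counts.sum(axis=1, keepdims=True)          # relative frequencies -> T
    dist = np.eye(num_states)[s] @ np.linalg.matrix_power(T, n)
    p = dist[dist > 1e-12]
    return float(-(p * np.log(p)).sum())

def q_learning_step(s, a, s_next):
    update_transition_estimate(s, s_next)
    r = entropic_choice(s_next)                  # internal reward = leader's choice estimate
    td_target = r + gamma_a * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

q_learning_step(0, 1, 2)  # toy call with made-up state/action indices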
For short horizons n and high discount factors γa, the altruistic agent actively blocks the entry to the low-entropy hallway towards the top right cell – by constantly occupying cell (2, 6) – to prohibit the leader agent from entering this region of low estimated choice. This failure case can be prevented by an appropriate selection of the hyperparameters – horizon n and discount factor γa. It is related to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018); if chosen incorrectly, the agent does not foster environment rewards in low-entropy regions. A possible solution to this problem would be to define a constrained optimization problem, as shown by Haarnoja and Abbeel (2018). B.3 ABLATION STUDY ON JOINT LEARNING Training. To investigate the effects of joint learning of the leader agent's and the altruistic agent's policy, we adapted the training process described in section 4.1 for the Gridworld experiments as follows. Instead of first learning the policy of the leader agent while the altruistic agent takes random actions, we initialized both policies from scratch and trained both agents simultaneously with the parameters given in Table 4. Results. We evaluated the outcome for the same scenarios, i.e. the scenarios described in section 4.1. We found that the results for the individual test cases were equivalent to those achieved when training the leader and the altruistic agent sequentially, i.e. the results are equivalent to those displayed in Table 3. C LEVEL BASED FORAGING EXPERIMENTS C.1 TRAINING PROCEDURE C.1.1 SETUP We adopted the Level Based Foraging environment (https://github.com/semitable/lb-foraging) as given in Christianos et al. (2020). We only focus on two-agent scenarios and only consider the subset of possible environments that require full cooperation among agents, i.e. those where food can only be foraged by two agents cooperatively. We therefore only consider environments where both agents are at level one, and all present food is at level two. In the original implementation, both agents have to simultaneously select the eat action while docking at different sides of a food object to forage the object and receive the reward. To reduce training time, we simplify this setup by reducing the action space to up, down, left, right, stay, i.e. we remove the eat action and enable agents to forage food by being simultaneously at different sides of a food object, with no further action required. C.1.2 PRETRAINING To obtain a pretrained leader agent, we first train two agents in the environment that are equally rewarded for foraging food. This setup corresponds to shared-reward cooperative MARL (Tan, 1993). Both agents are trained using Deep Q Learning (DQL, Van Hasselt et al., 2015), using a fully connected neural network with two hidden layers and five output values, representing the Q values of the five possible actions. The exact training parameters are listed in Table 4. We then take either one of the two agents and set it as the pretrained leader agent for the subsequent evaluation of the altruistic agent. C.1.3 TRAINING OF ADDITIONAL AGENTS We then insert an additional agent into the environment that shall act altruistically towards the leader agent. This additional agent is trained in the same fashion and with the same parameters as the previously trained leader agents. Only its reward signal is different, as laid out in the next section.
C.1.4 REWARD COMPUTATION FOR ADDITIONAL AGENTS We compare four different approaches for how the reward of the additional agent is defined, respectively how it behaves. Random: The agent takes random actions. Supervised: The agent receives the same reward as the leader agent, i.e. a shared reward as in cooperative MARL. Ours: The reward of the additional agent is defined as the immediate choice of the leader agent, as detailed in equation 3. We compute the leader agent's policy entropy by computing the entropy of the softmax of the leader agent's Q values in the given state. We further consider an unsupervised baseline, as detailed in the next paragraph. Unsupervised baseline (MaxEnt). As an unsupervised baseline, we implemented the MEPOL approach of Mutti et al. (2020). Their task-agnostic unsupervised exploration approach maximizes the entropy over the state distribution of trajectory rollouts. For this baseline, the additional agent is trained with the implementation given by the authors (https://github.com/muttimirco/mepol), which itself builds on TRPO (Schulman et al., 2015). We leave all parameters unchanged but evaluate different learning rates; lr ∈ {1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1}. Best results were achieved for a learning rate of 1e-5, which was hence picked as the relevant baseline. C.2 PERFORMANCE EVALUATION Each experiment was run for 5 different random seeds and mean and standard deviation are reported. Training progress is shown in Figure 4. Evaluations are computed every 10000 environment steps for 200 episodes, with the exploration set to zero. Training time was about 14 hours for each run. Results are shown in Fig. 4. D TAG EXPERIMENTS D.1 TRAINING PROCEDURE D.1.1 PRETRAINING We use the Simple Tag (Tag) implementation by Terry et al. (2020) (https://github.com/PettingZoo-Team/PettingZoo), which is unchanged as compared to the original implementation of Mordatch and Abbeel (2018) (https://github.com/openai/multiagent-particle-envs), only fixing minor errors. We first adopt the original configuration and pretrain three adversaries and one good agent (leader agent) using the parameters listed in Table 4. We use MADDPG (Lowe et al., 2017; implementation: https://github.com/starry-sky6688/MADDPG) to train adversary agents, and modify the framework as follows. The last layer of each agent's actor-network outputs one value for each of the environment's five possible actions, over which the softmax is computed. We then sample the agent's action from the output softmax vector, which corresponds to the probabilities with which the agent takes a specific action in a given state. We train the leader agent with DDPG (Lillicrap et al., 2016), where we equally modify the output layer. Each actor and critic network is implemented as a fully-connected neural network with two hidden layers, with layer sizes as given in Table 4. To make the environment more challenging for the leader agent, we decrease its maximum speed and acceleration to 70% of the original value. We next insert three additional agents into the environment whose observations include all agents and objects. These additional agents are not observed by adversary agents or the leader agent. The additional agents are of the same size as the adversary agents, and their acceleration and maximum velocity are equal to that of the leader agent. To speed up training, we made the following changes to the environment, which are applied to our approach as well as to all baselines. First, we spawn the three additional agents in the vicinity of the leader agent, which itself is spawned at a random position.
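As an aside, a minimal sketch of the immediate-choice reward described under "Ours" in C.1.4 (entropy of the softmax over the leader agent's Q-values in the current state); the function and array names are illustrative, not the actual implementation.

import numpy as np

def softmax(x):
    z = np.asarray(x, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ic_reward_from_q(leader_q_values, eps=1e-12):
    # Immediate choice (eq. 3): entropy of the leader's Boltzmann policy,
    # obtained here as the softmax of its Q-values in the given state.
    p = softmax(leader_q_values)
    return float(-(p * np.log(p + eps)).sum())

print(ic_reward_from_q([1.2, 0.3, 0.3, 0.1, 0.1]))  # toy Q-values for the 5 actions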
Furthermore, we randomly pick two out of the three adversary agents and decrease their maximum acceleration and maximum speed by 50%. We made these changes to be able to observe substantial differences between the different approaches after a training time of less than 24h. D.1.2 TRAINING OF ADDITIONAL AGENTS We train these three additionally inserted agents with the previously described modified version of MADDPG. The reward for each agent is defined either according to our developed approach, or any of the given baselines, as detailed in the next section. D.1.3 REWARD COMPUTATION FOR ADDITIONAL AGENTS FOR DIFFERENT BASELINES We consider the following implementations for the reward computation of the additional agents, respectively different environment configurations. None: For this scenario, the additional agents are removed from the environment. The remaining approaches purely differ in the way that the reward of the additional agents is computed. No other changes are made. Random: The additional agents take random actions. Cage: The additional agents receive a negative reward for violating the environment boundaries, which is equal to the negative reward that the leader agent receives for itself violating the environment boundaries (part of the original Tag implementation). Supervised: The additional agents receive the same reward as the leader agent. That is, they receive a reward of -10 if the leader agent is caught by the adversaries and a small negative reward if the leader agent violates the environment boundaries. Supervised + Cage: The additional agents receive the same reward as the leader agent, and an additional small negative reward if they themselves violate the environment boundaries. Ours: The reward of the additional agents is defined as the immediate choice of the leader agent, as detailed in eq. 3. To reduce the variance in the estimate of the leader agent's immediate choice, we implement an ensemble of five pretrained actor-networks for the leader agent, evaluate the policy entropy of each network, and take the median of the achieved values as the reward for the altruistic agents. Furthermore, the additional agents receive a small negative reward for themselves violating the environment boundaries. D.2 PERFORMANCE EVALUATION We train Cage, Supervised, Supervised + Cage and Ours for five different random seeds with parameters as detailed in Table 4. We then compute the results listed in Table 2 by freezing all weights across all networks, setting the exploration noise to zero and computing the average and standard deviation over 500 rollout episodes. E RESOURCE ENVIRONMENT E.0.1 MOTIVATION AND OVERVIEW This environment is a special case of the general resource-based MDP proposed by Benson-Tilsen and Soares (2016), which they used to show that intelligent agents pursue instrumentally useful subgoals. The motivation behind the choice for this environment is to evaluate our proposal in non-spatial and non-navigation environments. In the environment, there are 3 resource types, which two "consumer" agents may consume as an action. Each consumer has different preferences (reward function), and so will only consume 2 of the resource types.
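Returning briefly to the reward definition under "Ours" in D.1.3, the ensemble-median immediate-choice reward might look roughly as sketched below; the actor interface (an observation mapped to a probability vector over the five discrete actions) and all names are assumptions for illustration, not the actual code.

import numpy as np

def policy_entropy(probs, eps=1e-12):
    p = np.asarray(probs, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def ensemble_ic_reward(actor_ensemble, obs, boundary_penalty=0.0):
    # actor_ensemble: list of pretrained leader actor networks; each is assumed to map
    # an observation to a probability vector over the 5 discrete actions.
    entropies = [policy_entropy(actor(obs)) for actor in actor_ensemble]
    return float(np.median(entropies)) + boundary_penalty

# Toy usage with dummy "actors" standing in for the five pretrained networks.
dummy_actors = [lambda o: np.full(5, 0.2) for _ in range(5)]
print(ensemble_ic_reward(dummy_actors, obs=None))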
A third, altruistic agent, receives one resource unit of each type to distribute among the consumers, and its goal is to satisfy the preferences of the consumers without knowing their reward function. We define its performance as the average number of times that the consumers fail to consume their preferred resource (so lower is better). We compare our method to a supervised agent that is explicitly trained with the consumers’ reward function, as well as to an agent that assigns the resources randomly. E.0.2 ENVIRONMENT DESCRIPTION The environment is expressed as a Markov Game (see section 3). The Markov game is composed of two human-inspired consumers with subscript C1, C2 and an altruistic agent with subscript A. Three types of resources exist, RX , RY and RZ . The environment state s is given by the number of resources of each type available to each of the consumers. For example, s = [(1, 0, 1), (0, 1, 0)] means that agent C1 has one resource each of type X and Y available, while agent C2 only has one resource of type Z available. At the beginning of each time step, the altruistic agent is provided with one resource per category, i.e. RX , RY and RZ . The altruistic agent can assign each resource individually to any agent or discard the resource. The altruistic agent’s action space is hence defined by one sub-action per resource, i.e. aA = (aXA , a Y A , a Z A). Each sub-action assigns the resource either to one of the consumers or discards it. The resources are then distributed according to the action taken by the altruistic agent and the environment state is updated. Resources cannot be stacked, which means that each agent can only have one resource per category available at a time. Next, the consumers attempt to consume one resource each, according to their preference. Agent C1 dislikes resource RZ , hence it chooses RX or RY with equal probability. Agent C2 dislikes resource RX , hence it chooses RY or RZ with equal probability. The actions of agents C1 and C2 are sampled accordingly and the environment state is updated. For each round, we record how many agents failed to consume a resource that was not available. E.1 TRAINING The altruistic agent is trained with Q-Learning (Watkins and Dayan, 1992) to maximize the discounted future choice of the consumers (see eq. 4). For that, it uses one of the three proposed objectives, namely IC (eq. 3), EC (eq. 2) or DC (eq. 1), which it estimates as detailed in appendix A.1. The exact hyper-parameters are given in Table 4. We compare the performance of the altruistic agent that maximizes the choice of the consumers to that of a supervised agent. The reward of the supervised agent is the negative of the number of consumers that attempted to consume a resource, in that time step, and failed. Further, we compare to a random-policy baseline that distributes the resources randomly but does not discard any resources. E.2 RESULTS Table 5 shows that the results achieved by the altruistic agent trained with choice are equivalent to those achieved by the supervised agent. Furthermore, they are significantly better than those achieved by an agent with a random policy. F VIDEOS OF BEHAVIOUR OF ALTRUISTIC AGENT We provide videos for the most relevant outcomes of our experiments in the supplementary material. F.1 VIDEOS FOR RESULTS OF GRIDWORLD EXPERIMENTS (SECTION 4.1) F.1.1 DOOR SCENARIO IN FIG. 
1 TOP CENTER 01 Altruistic agent opens door for leader agent: It can be observed that the altruistic agent has learned to operate the door switch to enable the leader agent to pass through the door and reach its target on the other side. 02 Altruistic agent does not open door for leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent does not open the door. F.1.2 DEAD END SCENARIO IN FIG. 1 TOP RIGHT 03 Altruistic agent gives way to leader agent: It can be observed that the altruistic agent does not get into the way of the leader agent, which is hence able to reach its target in the top right cell. 04 Altruistic agent blocks path of leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent blocks the entry to the hallway towards the right side of the environment such that the leader agent cannot reach its target at the top right cell. This happens as the altruistic agent forcefully maximizes the estimated choice of the leader agent by hindering it from entering the hallway, which is a region of fewer estimated choice. F.2 VIDEO FOR RESULTS OF LEVEL BASED FORAGING (SECTION 4.2) 05 Altruistic agent enables leader to forage apples: It can be observed how the altruistic agent (blue) learned to coordinate its movements with the leader agent (green), to enable the leader agent to forage apples. It has learned this behaviour purely through optimizing for the leader agents choice and is itself not rewarded for foraging apples. F.3 VIDEO FOR RESULTS OF TAG (SECTION 4.3) 06 Altruistic agents protect leader from adversaries: It can be observed how the altruistic agents (blue colors) learned to coordinate their movements to protect the leader agent (green) from its adversaries. The adversaries (red colors) try to catch the leader, which in turn tries to flee from them. The altruistic agents protect the leader by actively intercepting the paths of the adversaries. They have learned this behaviour purely through optimizing for the leader agents choice.
1. What is the main contribution of the paper regarding altruistic behavior learning for agents? 2. What are the strengths of the proposed approach, particularly in terms of task-agnostic methods for estimating agent choices? 3. What are the weaknesses of the paper, especially regarding the clarity and confusion in certain concepts and technical details? 4. How does the reviewer assess the experimental evaluation and comparison with baselines? 5. Are there any specific questions or concerns regarding the methodology, such as the estimation of policy entropy or the loss function used?
Summary Of The Paper Review
Summary Of The Paper The paper posits a method for agents that learn altruistic behaviour from observations and interactions with other agents, without knowledge of their objective function, by estimating the sub-goals of the other agent and actively maximizing its choice. The authors define 3 different task-agnostic methods to estimate the choices made by the agents. The experimental evaluation is extensive, testing the agents in 3 different environments and reporting the results in comparison with three baselines - Discrete Choice, Entropic Choice, and Assistance via Empowerment. Review The paper is technically sound but very hard to follow. For instance, I found it very hard to find which baselines they had implemented until I referred to Table 3 in the appendix. I still don't understand what the "choice" of the agent means and how it is different from the sub-goals of the agent. I read it multiple times but it's very confusing. For the model-free estimation of choice from observations, it is unclear how the altruistic agent estimates the leader agent's policy entropy from observations. Can you provide the details, such as the policy feature vector, FC layers, and a softmax layer? Additionally, what was the loss function used? In the Unsupervised baseline (MaxEnt) implementation, it would be helpful to provide the entropy index, since you are trying to maximize this entropy value over the state distribution of trajectory rollouts.
ICLR
Title Learning Altruistic Behaviours in Reinforcement Learning without External Rewards Abstract Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. N/A Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. 1 INTRODUCTION Altruistic behaviour is often described as behaviour that is intended to benefit others, sometimes at a cost for the actor (Dowding and Monroe, 1997; Fehr and Fischbacher, 2003). 
Such behaviour is a desirable trait when integrating artificial intelligence into various aspects of human life and society – such as personal artificial assistants, house or warehouse robots, autonomous vehicles, and even recommender systems for news and entertainment. By observing and interacting with us, we may expect that artificial agents could adapt to our behaviour and objectives, and learn to act helpfully and selflessly. Altruistic behaviour could be a step towards value alignment (Allen et al., 2005; Gabriel, 2020), which aims to incorporate common-sense human values into artificial agents. Typically, we could achieve such an altruistic behaviour through various forms of supervision such as providing ground-truth actions at each time step, training agents with reinforcement learning (RL) and suitable rewards, or through imitation learning (Song et al., 2018). However, none of the approaches above scale up easily. They either require a large amount of supervision or carefully crafted rewards that can easily be misstated, leading to unwanted behaviour (Russell, 2019, ch. 1). How can one agent support another agent without knowing its goals? One clue might be the instrumental convergence hypothesis (Bostrom, 2017; Omohundro, 2008; Russell, 2019), which states that intelligent agents with varied goals are likely to pursue common subgoals which are generally useful (instrumental). Some examples are resource acquisition, cognitive enhancement or self-preservation, which all increase an agent’s chance of achieving almost arbitrary final goals. This hypothesis has been validated theoretically under many models, including resource games (BensonTilsen and Soares, 2016) and large classes of policies in discrete MDPs (Turner et al., 2019). While instrumental convergence is central to the discussion of value alignment and safe AI (Bostrom, 2017), since many instrumental subgoals have harmful effects, we believe that it is also a key to supporting agents with ill-defined goals and values, such as humans. The reason is that enabling instrumental subgoals for other agents (or not impeding them) can be beneficial, for a wide variety of goals and preferences. Since these subgoals occur frequently for rational agents, enabling them has the highest chance of success in the absence of more information about the other agent’s preferences, even if it is not guaranteed in the worst case. We speculate that having the ability to reach many future states is one of the most general convergent subgoals. It subsumes self-preservation (avoiding absorbent states), resource acquisition (if they are prerequisites to some actions), and generally maintaining the ability to pursue many goals. There is theoretical evidence that many optimal agents pursue this subgoal (Turner et al., 2019) (see sec. 3.2). Thus, we propose to train agents to support other agents by maximizing their choice (future state availability). This unsupervised approach learns altruistic behaviour without any extrinsic supervision such as rewards or expert demonstrations. We evaluate our methods in three diverse multi-agent environments. We always assume there are at least two agents: the leader agent that executes its own policy and can be trained using standard supervised methods, and an altruistic agent whose role is to help the leader. The performance of the altruistic agent is thus defined as the reward (success) achieved by the leader agent. 
In all our environments, the overall success of the leader agent depends on the altruistic agents’ behaviour. We show that our unsupervised approach outperforms unsupervised baselines by a large margin and, in some cases, also outperforms the supervised ones. Finally, we demonstrate possible failure cases of our approach where maximising the leader agent’s choice can lead to suboptimal behaviour. Our work makes the following three contributions: • We devise a multi-agent RL framework for intrinsically motivated artificial agents that act altruistically by maximising the choice of others. • We define and evaluate three task-agnostic methods to estimate the choice that an agent has in a given situation, which are all related to the variety in states it can reach. • We experimentally evaluate our unsupervised approach in three multi-agent environments and are able to match and, in some cases, outperform supervised baselines. 2 RELATED WORK To the best of our knowledge, we are the first to experimentally evaluate unsupervised agents with purely altruistic objectives. However, there are many related concepts in the literature. In human-robot cooperation, a robotic agent aids a human agent in achieving its goals (PérezD’Arpino and Shah, 2015; Hadfield-Menell et al., 2016; Baker et al., 2006; Dragan and Srinivasa, 2013; Fisac et al., 2017; 2020; Javdani et al., 2015; Dragan and Srinivasa, 2013; Macindoe et al., 2012; Pellegrinelli et al., 2016). Methods from Inverse RL (IRL) are often employed to infer human goals, which are then utilized by the robot agent to support the human. IRL itself aims to learn objectives from observations and can be used in single-agent (Fu et al., 2017) and multi-agent scenarios (Song et al., 2018; Yu et al., 2019; Jeon et al., 2020). However, IRL relies on the existence of expert demonstrations, which are often difficult to get at scale. In complex environments, it also often suffers from ambiguity of solutions (Arora and Doshi, 2021). In single-agent reinforcement learning, empowerment – which measures an agent’s capacity to affect its environment (Klyubin et al., 2005; 2008) – is used to enable intrinsically-motivated exploration (Gregor et al., 2016; Volpi and Polani, 2020). Empowerment is also used for multiagent cooperation (Guckelsberger et al., 2016; Du et al., 2020). Du et al. (2020) use empowerment to develop a helper agent that assists a (simulated) human agent by maximizing the human’s empowerment, constituting the research work most similar to ours. In contrast to our approach, it requires privileged access to an environment simulator and therefore does not allow to learn helpful or altruistic behaviour only from observation. Furthermore, the approach is not unsupervised. There are also mathematical formalizations of instrumental convergence (Bostrom, 2017). BensonTilsen and Soares (2016) analyze a MDP that makes finite resource allocation explicit, and find that optimal agents with arbitrary reward functions tend to deplete available resources. Turner et al. (2019) propose “power” as a convergent subgoal, which they define as the average difference between the state value of an optimal policy and the reward in the same state. They show that, for environments with certain symmetries, a larger proportion of optimal agents prefer states with higher power. In sec. 3.2 we will describe these symmetries and relate the result to our method. 3 METHODS In this section, we formalize our framework. We start with the generic definition describing multiagent setting. 
Next, we describe our framework, where we show various approaches to estimate choice for a single agent, and how it can be applied to a two-agent Markov Game. Markov Game. We consider a Markov Game (Littman, 1994), which generalizes a Markov Decision Process (MDP) to a multi-agent scenario. In a Markov Game, agents interact in the same environment. At time step t, each agent (the ith of a total of N agents) takes the action $a_i^t$, receives a reward $r_i^t$, and finally the environment transitions from state $s^t$ to $s^{t+1}$. A Markov Game is then defined by a state space S ($s^t \in S$), a distribution of initial states η, the action space $A_i$ ($a_i^t \in A_i$) and reward function $r_i(s, a_1, \ldots, a_N)$ of each agent i, an environment state transition probability $P(s^{t+1} \mid s^t, a_1, \ldots, a_N)$, and finally the agents' discount factors $\gamma_i$. 3.1 ESTIMATING CHOICE FOR A SINGLE AGENT We first consider a single-agent scenario, i.e. N = 1, where only a leader agent, indicated by the subscript L, interacts with the environment through its pretrained stochastic policy $\pi_L$. We assume that the leader acts Boltzmann-rationally, i.e. that it chooses high-value actions with higher probability. We believe this to be a reasonable assumption, as, in comparison to deterministic policies, stochastic policies are more robust (Zhang et al., 2020), and often achieve better results in real-world-like partially observable stochastic domains (Kaelbling et al., 1998). We denote the leader agent's generic choice in a given state s as $C_L(s)$, for which we propose concrete realizations below. Each method relies on the random variable $S^{t+n}$, with values $s^{t+n} \in S$, which refers to the leader agent's state after n environment transitions from a starting state $s^t$. Its probability mass function is defined as the n-step state distribution of the underlying single-agent MDP, conditioned on the current state: $p(s^{t+n} \mid s^t) = P(S^{t+n} = s \mid \pi_L, s^t)$. Discrete choice. Our first derived method simply defines the choice of the leader agent in state $s^t$ as the number of states that it can reach within n transitions, which we refer to as its discrete choice: $DC_L^n(s^t) = |\mathrm{range}(S^{t+n} \mid s^t)|$, (1) where range(X) is the set of all values that a random variable X takes on with positive probability and |·| measures the size of that set. While this count-based estimator of choice is intuitive and easily interpretable, it can hardly be estimated practically in large or continuous state spaces. It also discards information about the probability of reaching these states. Entropic choice. It can be shown that the entropy of a random variable X acts as a lower bound for the size of the set of values that X takes on with positive probability (Galvin, 2014, Property 2.6), i.e. $H(X) \le \log|\mathrm{range}(X)|$. We define a lower bound of the discrete choice by computing the Shannon entropy of the n-step state distribution, which we refer to as the agent's entropic choice: $EC_L^n(s^t) = H(S^{t+n} \mid s^t) = -\sum_{s \in S} P(S^{t+n} = s \mid \pi_L, s^t) \log P(S^{t+n} = s \mid \pi_L, s^t)$, (2) which estimates the agent's choice as the variety in its state after n transitions. Unlike eq. 1, $EC_L^n$ can be computed in continuous state spaces or efficiently estimated by Monte Carlo sampling. Immediate choice. To further simplify entropic choice and reduce its computational complexity, we may limit the look-ahead horizon to n = 1 and assume an injective relationship from actions to states, i.e. no two actions taken at $s^t$ lead to the same state $s^{t+1}$.
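As an aside on the remark that entropic choice can be estimated by Monte Carlo sampling, a minimal sketch for a discrete state space follows: it rolls out the leader's policy for n steps many times and computes the entropy of the empirical n-step state distribution. The environment and policy interfaces (and the toy random-walk classes) are assumptions made purely for illustration.

import numpy as np
from collections import Counter

def entropic_choice_mc(env, leader_policy, state, n, num_samples=1000):
    # Monte Carlo estimate of EC^n_L(s^t) = H(S^{t+n} | s^t)  (eq. 2).
    # Assumed interfaces: env.reset_to(state) -> state, env.step(action) -> next_state,
    # leader_policy.sample(state) -> action; states must be hashable.
    counts = Counter()
    for _ in range(num_samples):
        s = env.reset_to(state)
        for _ in range(n):
            s = env.step(leader_policy.sample(s))
        counts[s] += 1
    probs = np.array([c / num_samples for c in counts.values()])
    return float(-(probs * np.log(probs)).sum())

class _ToyChain:
    # 5-state random walk, used only to exercise the estimator.
    def reset_to(self, s):
        self.s = s
        return s
    def step(self, a):
        self.s = int(np.clip(self.s + a, 0, 4))
        return self.s

class _ToyPolicy:
    def sample(self, s):
        return int(np.random.choice([-1, 1]))

print(entropic_choice_mc(_ToyChain(), _ToyPolicy(), state=2, n=3))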
This assumption is often true in navigation environments, where different step-actions result in different states. We can then simplify the one-step state distribution of the leader agent to $p(s^{t+1} \mid s^t) = P(S^{t+1} = s \mid \pi_L, s^t) = \pi_L(a_L^t = a \mid s^t)$, and compute a simplified, short-horizon entropic choice, the immediate choice: $IC_L(s^t) = H(S^{t+1} \mid s^t) = H(\pi_L^t(a \mid s^t))$. (3) Immediate choice (IC) can be easily computed as the entropy over its policy conditioned on the current state. Even though the assumptions made for immediate choice often do not hold in complex or real-world environments, we found empirically that this objective can yield good results. 3.2 OPTIMALITY OF CHOICE AS AN INSTRUMENTAL CONVERGENT SUBGOAL Turner et al. (2019) analyze the instrumental convergence of optimal agents on power-seeking subgoals and show that optimal policies tend to keep their options open (Prop. 6.9). They consider two distinct actions a and a′ taken at a state s′, leading into two sets of possible future states (for an infinite horizon). These sets of future states are represented as nodes in two graphs, respectively G and G′ (with edges weighted by the probability of transitioning from one state to another). They also assume that the states in G ∪ G′ can only be reached from s′ by taking actions a or a′. In the case where G is "similar" to a subgraph of G′, in the sense that they are equivalent up to arbitrary swapping of pairs of states, the authors prove that the probability of a′ being optimal is higher than the probability of a being optimal (for most reward function distributions). Therefore, if G′ contains more states than G, an optimal agent will choose a′ over a. Turner et al. (2019) thus lend theoretical support to our proposal: while there is no guarantee that any one optimal policy (corresponding to a rational agent with arbitrary reward function) pursues higher choice, in expectation (over a bounded space of reward functions) most policies do choose actions that lead to higher choice, all else being equal. As such, while we may not know a rational agent's concrete goals, there is a high chance that choice works as an instrumental subgoal. 3.3 COMPARISON BETWEEN CHOICE AND EMPOWERMENT The empowerment (Klyubin et al., 2005) of a leader agent in a given state $s^t$ and for horizon n is $E_L^n(s^t) = \max_{\omega(a^n \mid s^t)} I(S^{t+n}; A^n \mid s^t) = \max_{\omega(a^n \mid s^t)} \big[ H(S^{t+n} \mid s^t) - H(S^{t+n} \mid A^n, s^t) \big]$, with $a^n$ as a sequence of n actions of the leader agent and ω as a probing distribution over its n-step action sequences. When setting the probing distribution ω equal to the leader agent's policy, the expression above simplifies to $E_L^n(s^t) = EC_L^n(s^t) - H(S^{t+n} \mid A^n, s^t)$, with $EC_L^n(s^t)$ as the entropic choice of the leader agent introduced in equation 2. If we further assume deterministic environment transitions, then empowerment becomes equal to entropic choice, i.e. $E_L^n(s^t) = EC_L^n(s^t)$. In contrast to the previously introduced methods to estimate choice of another agent, empowerment of another agent cannot be estimated from observations of the environment transitions. To estimate another agent's empowerment in a given state ($E_L^n(s^t)$), access to its action space as well as privileged access to an environment simulator are required, which violates the main assumption of our research work, i.e. learning to assist others only from observations of the environment transitions.
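As a small numeric illustration of the two simplifications discussed above (eq. 3, and the claim that with the probing distribution fixed to the leader's policy and deterministic transitions empowerment reduces to entropic choice), the sketch below compares the quantities for a one-step horizon in a toy deterministic MDP; the dynamics table and policy values are made up for illustration.

import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

# Toy deterministic one-step dynamics: next_state[s, a]; 3 states, 2 actions.
next_state = np.array([[1, 2],
                       [0, 2],
                       [0, 1]])
policy = np.array([0.7, 0.3])  # leader's action distribution in state s (illustrative)

s = 0
# Entropic choice with n = 1: entropy of the next-state distribution.
p_next = np.zeros(3)
for a, pa in enumerate(policy):
    p_next[next_state[s, a]] += pa
ec1 = entropy(p_next)

# "Empowerment" with the probing distribution fixed to the policy:
# H(S'|s) - H(S'|A, s); the second term is zero for deterministic transitions.
emp = entropy(p_next) - 0.0

# With an injective action-to-state mapping, all three quantities coincide.
print(ec1, emp, entropy(policy))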
Even when assuming privileged access, computing empowerment in large or continuous-state environments often remains infeasible (Mohamed and Rezende, 2015; Gregor et al., 2016; Zhao et al., 2020), as it requires maximizing over all possible probing distributions ω of the leader agent. In contrast, estimating state entropy, as needed for the computation of the metrics introduced in this work, is feasible in large and continuous environments (Seo et al., 2021; Mutti et al., 2020). 3.4 BEHAVING ALTRUISTICALLY BY MAXIMIZING ANOTHER AGENT'S CHOICE Having considered three methods to estimate an agent's choice (eq. 1-3), we now apply them to a Markov Game of two agents. The main hypothesis is that maximizing the choice of another agent is likely to allow it to reach more favourable regions of the state-space (for many possible policies of the agent), thus supporting it without a task-specific reward signal. Altruistic agent's policy definition. In this Markov Game, one agent is the leader, with the subscript L, and another one is the altruistic agent, with the subscript A. We define the optimal policy of the altruistic agent as the one that maximizes the future discounted choice of the leader, $\pi_A^* = \arg\max_{\pi_A} \sum_{t=0}^{\infty} \gamma_A^t \, C_L(s^t)$, (4) where the generic choice $C_L(s^t)$ can be estimated by one of several methods: discrete choice $DC_L^n(s^t)$, entropic choice $EC_L^n(s^t)$ or immediate choice $IC_L(s^t)$. Conditional estimates of choice. As the agents interact in the same environment, they both have influence over the system state s, which contains the state of both agents. This makes it difficult to translate single-agent objectives based on the state distribution (such as eq. 1 and 2) to a multi-agent setting, since the states of both agents are intermingled. For example, an altruistic agent that maximizes entropic choice naively (eq. 2) will maximize both the state availability of the leader agent (which mirrors the single-agent entropic choice) and its own state availability (which does not contribute towards the altruism goal). To maximize entropic choice without also increasing the entropy of the altruistic agent's actions, we propose to condition the choice estimate on the altruistic agent's actions over the same time horizon, denoted by the random variable $A_A^{t:t+n-1}$: $EC_L^n(s^t) = H(S^{t+n} \mid A_A^{t:t+n-1}, \pi_L, s^t)$. (5) In order to better understand eq. 5, we can use the chain rule of conditional entropy (Cover and Thomas, 2005, ch. 2) to decompose it into two terms: $EC_L^n(s^t) = H(S^{t+n}, A_A^{t:t+n-1} \mid \pi_L, s^t) - H(A_A^{t:t+n-1} \mid \pi_L, s^t)$, respectively the joint entropy of the states and actions, and the entropy of the actions. Therefore, we can interpret this objective as the altruistic agent maximizing the variety of states and actions, but subtracting the variety of its own actions, which is the undesired quantity. We can also relate eq. 5 to discrete choice (eq. 1). Using the fact that $H(X \mid E) \le \log|\mathrm{range}(P(X \mid E))|$ for a random variable X and event E (Galvin, 2014, Property 2.12), we see that eq. 5 is a lower bound for a count-based choice estimate (analogous to eq. 1), also conditioned on the altruistic agent's actions: $EC_L^n(s^t) \le \log DC_L^n(s^t) = \log|\mathrm{range}(S^{t+n} \mid A_A^{t:t+n-1}, \pi_L, s^t)|$. However, assuming simultaneous actions, the immediate choice estimate (eq. 3) stays unchanged, i.e. $IC_L(s^t) = H(\pi_L^t(a \mid s^t) \mid a_A^t) = H(\pi_L^t(a \mid s^t))$. The technical details of how these estimates can be computed from observations of the environment transitions are given in Appendix A.
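Before moving to the experiments, a minimal sketch of the conditional entropic-choice estimate of eq. (5) may help fix ideas: the altruistic agent's action sequence is held fixed while the leader follows its policy, and the entropy of the resulting n-step state distribution is estimated by sampling. The two-agent environment and policy interfaces are assumptions for illustration, not the actual implementation.

import numpy as np
from collections import Counter

def conditional_entropic_choice(env, leader_policy, state, altruistic_actions, num_samples=500):
    # Monte Carlo estimate of eq. (5): H(S^{t+n} | A_A^{t:t+n-1}, pi_L, s^t), with the
    # altruistic agent's action sequence held fixed while the leader follows its policy.
    # Assumed interfaces: env.reset_to(state) -> state,
    # env.step(a_leader, a_altruistic) -> next_state, leader_policy.sample(state) -> action;
    # states must be hashable.
    counts = Counter()
    for _ in range(num_samples):
        s = env.reset_to(state)
        for a_alt in altruistic_actions:
            s = env.step(leader_policy.sample(s), a_alt)
        counts[s] += 1
    p = np.array(list(counts.values()), dtype=float) / num_samples
    return float(-(p * np.log(p)).sum())

The altruistic agent's objective in eq. (4) then discounts and sums such per-step choice estimates along its trajectory.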
4 EXPERIMENTAL EVALUATION We introduce three multi-agent environments of increasing complexity (in appendix E, we evaluate performance in a non-spatial environment), in which the success of a leader agent depends on the behaviour of one or more additional agents. In each environment, we first evaluate a subset of the proposed methods for choice estimation ($DC_L^n$, $EC_L^n$ and $IC_L$) by comparing the estimated choice of the leader agent in minimalistic scenarios. We then evaluate our approach of behaving altruistically towards others by maximizing their choice (section 3.4) and measure performance of our approach as the reward achieved by the leader agent. We provide videos of the emergent behaviours in the supp. mat. (see appendix F). We compare our method to both an unsupervised and a supervised approach. Note that the supervised approach has stronger assumptions, as it requires direct access to the leader agent's reward function. We do not consider inverse RL (IRL) as a relevant baseline, as it would rely on demonstrations of expert behaviour, which we do not assume. Even if perfect knowledge of the state transition probabilities is assumed, this does not allow generating expert demonstrations of the leader agent's policy, as its expert policy would in turn depend on the policy of the altruistic agent, which is yet to be found by IRL. 4.1 DISCRETE ENVIRONMENTS WITH CONTROLLABLE GATES We start by considering three different scenarios on a grid, illustrated in Fig. 1 (top row), with the starting positions of the leader (green) and an additional agent (blue) shown in faded colors; obstacles are gray, and agents may move in one of the four cardinal directions or stay still. Choice estimate analysis. We first verify whether the estimated choice for each state (agent position) correctly maps to our intuitive understanding of choice (that is, the diversity of actions that can be taken). Therefore, we conducted an analysis of the estimated choice of the leader agent using a simplified version of the environment (Fig. 1, top left), in which only the leader agent is present and selects actions uniformly at random. Fig. 1 (bottom row) shows the three different methods of estimating choice evaluated for each possible cell position of the leader agent. We can observe that states in less confined areas, e.g. further away from walls, generally feature higher choice estimates, with the least choice being afforded by the dead end at the right. All three methods' estimates are qualitatively similar, which validates the chosen approximations. In line with the simplifications made, the immediate choice (IC) estimates tend to be more local, as can be observed when comparing the estimates for the cell at row 2, column 4. In conclusion, these results qualitatively agree with an intuitive understanding of the choice of an agent in a grid environment. Environment setup. In the Door Scenario (Fig. 1, top center), the door switch (row 1, col. 8) can only be operated by the altruistic agent. The door (row 2, col. 4) remains open as long as the altruistic agent is on the switch cell and is closed otherwise. As the leader agent always starts to the left of the door and the altruistic agent to the right, the leader agent can only attain its goal, the apple (row 2, col. 6), if the altruistic agent uses the door switch to enable the leader agent to pass through the door. In the Dead End Scenario (Fig. 1, top right), the door is always open, and the leader agent's target object (green apple) is moved to the top right cell.
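As an aside on the per-cell choice maps discussed above (Fig. 1, bottom row): in a deterministic gridworld with a stay action, discrete choice reduces to counting the cells reachable within n moves, which can be sketched as below. The grid layout in the example is made up and does not correspond to the environment in Fig. 1.

import numpy as np
from collections import deque

def discrete_choice_map(grid, n):
    # grid: 2-D array with 1 = free cell, 0 = obstacle. For every free cell, count the
    # cells reachable within n steps (up/down/left/right/stay) -- eq. (1) under
    # deterministic transitions.
    rows, cols = grid.shape
    out = np.zeros_like(grid, dtype=float)
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] == 0:
                continue
            seen, frontier = {(r, c)}, deque([((r, c), 0)])
            while frontier:
                (y, x), d = frontier.popleft()
                if d == n:
                    continue
                for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and grid[ny, nx] == 1 and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        frontier.append(((ny, nx), d + 1))
            out[r, c] = len(seen)
    return out

grid = np.array([[1, 1, 1, 1],
                 [1, 0, 0, 1],
                 [1, 1, 1, 1]])
print(discrete_choice_map(grid, n=3))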
Hence, the leader agent can obtain the apple without additional help from the altruistic agent. However, the altruistic agent could potentially block the path by positioning itself at the entry to the dead end. This situation would be the opposite of altruistic behaviour and is, of course, undesired. We compare to a supervised approach, to Assistance via Empowerment (AvE, (Du et al., 2020)) and a random-policy baseline. Assistance via Empowerment baseline. We compare with the recently-proposed AvE, which has a similar goal (Du et al., 2020). There are two major differences: AvE is not unsupervised, and it requires privileged access to an environment simulator to produce estimates. Hence, its use in real or black-box environments is limited. We used the authors’ implementation with fixed hyperparameters, except for the crucial horizon n, for which we present a sweep in app. B. Training. We start by pretraining the leader agent with Q-Learning (Watkins and Dayan, 1992), with the altruistic agent executing a random policy. Hence, after convergence, the leader agent’s policy targets the green apple. Appendix B lists all details and parameters. Afterwards, the leader agent’s learning is frozen and the altruistic agent is trained; it always observes the position of the leader agent sL, its own position sA, and the environment state senv, which is composed of the door state (open, closed) and the food state (present, eaten). The altruistic agent is trained with Q-Learning to maximize the discounted future choice of the leader agent (see eq.. 4. For that, it uses one of the three proposed methods such as eq. 3, eq. 2 or eq. 1, as detailed in appendix A.1. Results. We investigate the developed behaviour of the altruistic agent after convergence for different choices of the hyperparameters – look-ahead horizon n ∈ {1, 3, 12} (which determines the scale at which choices are considered) and discount factor γa ∈ {0.1, 0.7} (which defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent). Success is binary: either the leader agent attains its goal (green apple), or not. In the Door Scenario (Fig. 1, top center), we found that, for longer horizons n and higher discount factors γa, the altruistic agent opens the door to allow the leader agent to reach its target, by occupying the switch position (square outline; row 1, col. 8). For smaller n and lower γa, the altruistic agent does not execute any coordinated policy and the leader does not succeed. Using the AvE method, we find that it only opens the door for n = 3, but fails to do so for n = 1 and n = 12. In the Dead End Scenario (Fig. 1, top right), we observe that, for longer horizons n and large discount factors γa, the altruistic agent stays out of the leader agent’s way by occupying a far-away cell (square outline; row 1, col. 6). For short horizons n and high discount factors γa, the altruistic agent actively blocks the entry to the hallway that contains the target (circle outline; row 3, col. 7), to prohibit the leader agent from entering this region of low estimated choice (recall that the choice for each cell is visualized in Fig. 1, bottom right). This failure case can be prevented by having a large enough horizon n and discount factor γa, analogously to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018). We find that this configuration performs consistently better than others in both scenarios, and hence is more preferred. 
On the other hand, the AvE method does not block the path of the leader agent for n = 1, but blocks its path for n = 3 and n = 12. We found that the resulting behaviour of our approach is independent of the used method for choice estimation, i.e. either discrete choice (eq. 1) or entropic choice (eq. 2) yield the same outcome, with immediate choice (eq. 3) being a special case of entropic choice. As for the AvE baseline, we hypothesize that the variance of results is due to the nature of the proxy used in practice, which includes components of empowerment from both agents (sec. 3.4). The binary outcomes for all hyperparameter combinations are given in appendix B. We also compare to a supervised baseline (receiving a reward when the leader obtains the apple), in which case the leader always succeeds. 4.2 LEVEL-BASED FORAGING EXPERIMENTS Computational efficiency. Due to the computational complexity resulting from the need to estimate a long-term distribution of states, p(st+n|st), we focus on immediate choice (IC) to estimate the leader agent’s choice in the remaining sections. Furthermore, in rare state-action sequences, the assumptions made for IC, i.e. deterministic environment transitions and an injective relationship from actions to states, may not hold. Nonetheless, we did not find this to adversely affect the results. Due to its dependence on access to the environment simulator and its computational complexity, we do not consider the AvE baseline for the remainder of experiments. Setup. We use a fully-observable multi-agent environment that enables us to assess the level of cooperation among agents (level-based foraging, LBF, Christianos et al. (2020)) to evaluate the performance of altruistic agents in more complex environments with discrete state spaces. We compare our method to a maximum-entropy approach from single-agent RL (Mutti et al., 2020) and a random-policy baseline. A visualization of the environment is depicted in Fig. 2 (left). The two agents can forage apples by simultaneously taking positions at different sides of a targeted apple, yielding a fixed reward. We first train two agents – which receive an equal reward for foraging – using Deep Q-Learning (DQL, Van Hasselt et al. (2015)), corresponding to fully-supervised sharedreward in multi-agent reinforcement learning (MARL). We then take one of these pretrained agents that has learned to forage apples when accompanied by a cooperating agent, freeze its policy, and place it as the leader agent (green) into the test scenario (additional details are provided in app. C). Choice estimate analysis. We first qualitatively evaluate IC as an estimator for choice in Fig. 3, by comparing representative scenarios. To quantitatively analyse IC as an estimator for the leader agent’s choice, we compare the leader agent’s average IC (over 100 episodes) in two scenarios, one in which it can acquire many rewards, i.e. the other agent acts cooperatively, and one where it can acquire only few rewards, i.e. the other agent takes random actions. We show the results in Table 1. We observe that the leader agent’s estimated choice is substantially higher when it is able to acquire high rewards. Note that the IC estimate does not have privileged access to the reward function of the leader agent, and so this experiment evaluates its worth as a generic proxy for the leader’s reward. 
Assuming that an agent is able to acquire higher rewards when having more choice, these results indicate that IC is a reasonable estimator for the leader agent’s choice in LBF. Training. We now consider an environment that consists of the previously pretrained leader and an additional altruistic agent, which is trained from scratch and does not receive a reward for foraging apples, but is rewarded according to the leader agent’s choice. Its reward is given as the current estimate of the leader agent’s IC (eq. 3) and it is trained using DQL. To compute its internal reward signal, the altruistic agent would therefore need to estimate the state transition probabilities, as detailed in A.2. To decouple our approach’s performance from that of the state transition estimator, we instead directly compute the altruistic agent’s reward using the leader agent’s policy. Results. We define the performance of the altruistic agent not as its achieved internal reward but as the reward achieved by the leader agent, i.e. its performance in enabling the leader agent to forage apples. Fig. 4 shows a comparison of the altruistic agent’s performance to that achieved by 3 baselines (two unsupervised and one supervised), averaged over 5 random seeds, with the standard deviation as the shaded area. It can be observed that the performance of the altruistic agent converges to a similar performance to that of the supervised agent, and outperforms the baseline approaches by a large margin. Furthermore, the IC improvement of the leader agent is correlated with its reward improvement, which supports using IC as a reasonable proxy for the choice of the leader agent. 4.3 MULTI-AGENT TAG GAME WITH PROTECTIVE AGENTS Setup. We use a multi-agent tag environment (Tag, Mordatch and Abbeel (2018); Lowe et al. (2017); Terry et al. (2020)), illustrated in Fig. 2 (right), to evaluate the capabilities of altruistic agents in complex environments with continuous state spaces. Adversaries are rewarded for catching the leader, which in turn receives a negative reward for being caught or crossing the environment boundaries. To speed up training, altruistic agents additionally receive a small negative reward for violating the environment boundaries. We pretrain the adversaries and the leader (without the presence of altruistic agents) using MADDPG (Lowe et al., 2017) and DDPG (Lillicrap et al., 2016) respectively. After pretraining, the adversary agents have learned to cooperatively chase the leader agent, which in turn has learned to flee from the adversaries. Exact setup specifications and all parameters are given in appendix D. Choice estimate analysis. As done for LBF, we evaluate the IC of the leader agent in representative scenarios in Fig. 3. We also quantitatively evaluate IC as an estimator for the leader agent’s choice, by comparing the leader agent’s IC per timestep for a scenario in which it receives high rewards to one where it receives low rewards. We again hypothesize that the leader agent is able to acquire higher rewards when having more choice. Table 1 shows that the estimated choice is substantially higher in the high-success scenario, indicating that IC is a reasonable estimator also in Tag. Training. We freeze the pretrained policies of the adversary agents and the leader agent and insert three additional altruistic agents which observe all agents but are not observed themselves. 
Each additional altruistic agent’s internal reward signal is given as the IC of the leader agent (equation 3), which is directly computed as done in LBF (see 4.2). Results. Performance of the altruistic agents is defined as the times per episode that the leader agent is caught by the adversaries, i.e. the lower the better. In Table 2, the performance of the team of three altruistically trained agents (ours) is compared to three relevant baselines, with the altruistic agents either removed (None), acting randomly (random), or solely receiving a small negative reward for violating the environment boundaries (cage). In contrast to LBF, we do not compare to an unsupervised exploration approach, as we are not aware of such an implementation for cooperative MARL. Additionally, we report results for the case in which the altruistic agents receive the same reward as the leader agent (supervised), possibly appended by a negative reward for violating the environment boundaries (supervised + cage). It can be observed that our approach outperforms all relevant baselines by a substantial margin and also outperforms the supervised approach. We hypothesize this to be due to the dense internal reward signal that our approach provides, as compared to the sparse rewards in the supervised scenario: recall that in the supervised scenario the additional altruistic agents receive a large negative reward only when the leader agent is caught by the adversaries, whereas our approach provides a dense reward signal that corresponds to the current estimate of the leader agent’s choice. Fig. 5 displays the emerging protective behaviour of altruistic agents trained with our approach. Results videos are found in the supplemental material. 5 CONCLUSIONS We lay out some initial steps into developing artificial agents that learn altruistic behaviour from observations and interactions with other agents. Our experimental results demonstrate that artificial agents can behave altruistically towards other agents without knowledge of their objective or any external supervision, by actively maximizing their choice. This objective is justified by theoretical work on instrumental convergence, which shows that for a large proportion of rational agents this will be a useful subgoal, and thus can be leveraged to design generally altruistic agents. This work was motivated by a desire to address the potential negative outcomes of deploying agents that are oblivious to the values and objectives of others into the real world. As such, we hope that our work serves both as a baseline and facilitator for future research into value alignment in simulation settings, and as a complementary objective to standard RL that biases the behaviour towards more altruistic policies. In addition to the positive impacts of deployed altruistic agents outside of simulation, we remark that altruistic proxy objectives do not yet come with strict guarantees of optimizing for other agents’ rewards, and identify failure modes (sec. 4.1) which are hyperparameter-dependent, and which we hope provide interesting starting points for future work. 6 ETHICS STATEMENT We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare. 7 REPRODUCIBILITY STATEMENT We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in table 4. All experiments were run on single cores of Intel Xeon E7-8867v3 processors (2.5 GHz). Training times are given in the respective sections in the appendix. 
For the LBF and Tag experiments, we report mean and standard deviation over five different random seeds. The Gridworld experiments yield deterministic results. We will provide the source code for all experiments conducted with the final version of this publication. We created detailed instructions on how to run the code in order to replicate the experimental outcomes presented in this work. 8 ACKNOWLEDGEMENTS We thank Thore Graepel and Yoram Bachrach for their helpful feedback. We are also grateful to the anonymous reviewers for their valuable suggestions. This work was supported by the Royal Academy of Engineering (RF\201819\18\163). A ESTIMATION OF LEADER AGENT'S CHOICE FROM OBSERVATION A.1 MODEL-BASED ESTIMATION OF CHOICE FROM OBSERVATIONS We introduce a model-based estimator of choice that is suitable for small-scale discrete-state environments, having the advantage that it is easily interpretable. Recalling how we compute the discrete choice and entropic choice estimates for the leader agent, an estimate of the n-step state distribution conditioned on the altruistic agent's actions is needed, i.e. P(s^{t+n} | π_L, a_A^{t:t+n-1}, s^t). To simplify this computation, we assume the altruistic agent's action to equal hold for the next n steps. More specifically, we assume that the altruistic agent's state is unchanged for the next n steps. Furthermore assuming that both the state and the action space are discrete, we compute
P(s^{t+n} | π_L, a_A^{t:t+n-1}, s^t) = s_1^t T(s_A^t)^n,   (6)
with
T(s_A^t)_{ij} = P(s^{t+1} = s_j | s^t = s_i, s_A^{t+1} = s_A^t),   (7)
where the state transition matrix T(s_A) holds the transition probabilities between all possible states, as a function of the state of the altruistic agent s_A. To compute T(s_A), the system state is encoded into a one-hot vector s_1. The n-step discrete choice of the leader agent can then be computed as
DC_L^n(s^t) = || s_1^t T(s_A^t)^n ||_0,   (8)
its n-step entropic choice as
EC_L^n(s^t) = H( s_1^t T(s_A^t)^n ),   (9)
and its immediate choice as
IC_L(s^t) = H( π_L^t(a | s^t) ) = H( s_1^t T(s_A^t) ).   (10)
In environments with a discrete state and action space, the altruistic agent can hence use an estimate of the state transition matrix T to estimate the choice of the leader agent using either of the proposed methods, i.e. DC, EC or IC. An estimate of T can be built over time, by observing the environment transitions and computing the transition probabilities as relative frequencies of observed transitions. A.2 MODEL-FREE ESTIMATION OF CHOICE FROM OBSERVATIONS To limit the computational complexity, which is important for environments with large or continuous state spaces, we also consider immediate choice as an estimator for the leader agent's choice (IC_L(s^t) = H(S^{t+1} | s^t)). As shown in section 3.1, this estimate can be simplified to H(S^{t+1} | s^t) = H(π_L^t(a | s^t)), under the named assumptions. Hence, to compute the immediate choice of the leader, the altruistic agent requires an estimate of the leader agent's policy entropy, which can be learned from observation using a policy estimation network (Hong et al., 2018; Papoudakis et al., 2020; Mao et al., 2019; Grover et al., 2018).
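To make equations (6)-(10) concrete, the following minimal sketch computes the three choice estimates from a given state transition matrix. It is purely illustrative: the 4-state transition matrix and the one-hot state below are placeholders, not the matrices used in our experiments, and the conditioning on the altruistic agent's state is left implicit.

import numpy as np

def entropy(p, eps=1e-12):
    p = p[p > eps]
    return float(-(p * np.log(p)).sum())

def discrete_choice(s_onehot, T, n):
    # Eq. (8): number of states reachable with positive probability within n steps.
    dist = s_onehot @ np.linalg.matrix_power(T, n)
    return int(np.count_nonzero(dist > 0))

def entropic_choice(s_onehot, T, n):
    # Eq. (9): Shannon entropy of the n-step state distribution.
    return entropy(s_onehot @ np.linalg.matrix_power(T, n))

def immediate_choice(s_onehot, T):
    # Eq. (10): entropy of the one-step state distribution.
    return entropy(s_onehot @ T)

# Placeholder row-stochastic transition matrix T(s_A) for a toy 4-state system;
# in practice it would be estimated from observed transitions (see appendix B.1.3).
T = np.array([[0.1, 0.9, 0.0, 0.0],
              [0.3, 0.1, 0.6, 0.0],
              [0.0, 0.5, 0.1, 0.4],
              [0.0, 0.0, 0.2, 0.8]])
s1 = np.eye(4)[0]  # one-hot encoding of the current system state s^t
print(discrete_choice(s1, T, 3), entropic_choice(s1, T, 3), immediate_choice(s1, T))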
B GRIDWORLD EXPERIMENTS B.1 TRAINING PROCEDURE B.1.1 SETUP The environment setup is described and displayed in section 4.1. AvE baseline. We evaluate the AvE baseline for different horizons n. For each horizon, we tested the AvE baseline as implemented in the provided source code (https://github.com/yuqingd/ave), using the hyper-parameters suggested by the authors. The original implementation uses a look-ahead horizon n = 10. We found that results are equal for both n = 10 and n = 12, which is why we only display results for n = 12. We further evaluated the AvE baseline for n between 1 and 12. For the Opens door task, we found that AvE yields success for n = 2, 3, 4, 5 and fails for the remaining horizons. For the Non blocking task, we found that AvE yields success for n = 1, 2 and fails for the remaining horizons. B.1.2 PRETRAINING We first pretrain the leader agent using tabular Q-Learning, with learning parameters given in Table 4. During this pretraining, the altruistic agent takes random actions. We train until all Q-Values are fully converged, i.e. training runs for 300000 environment steps. B.1.3 REWARD COMPUTATION FOR ALTRUISTIC AGENTS The altruistic agent is then also trained using tabular Q-Learning, and its internal reward signal is given as the choice estimate of the leader agent, i.e. either DC_L^n(s^t), EC_L^n(s^t) or IC_L(s^t), which is computed with the model-based estimation introduced in appendix A.1. The altruistic agent records all environment transitions and frequently updates its estimate of the state transition matrix T(s_A), which is needed to compute the internal reward signal for the altruistic agent. All training parameters can be found in Table 4. Training time is about 15 minutes per experiment. A short illustrative sketch of this reward computation is given below, after B.2. B.2 PERFORMANCE EVALUATION Performance of the altruistic agent is reported for two different categories, as shown in Table 3. For each category, we report success or failure for choice estimate look-ahead horizons n ∈ {1, 3, 12} and discount factors of the altruistic agent γ_a ∈ {0.1, 0.7}. Success or failure was always deterministic, conditioned on the experiment setup, i.e. 10 simulations were run for each setup which always yielded the same outcome. To estimate the leader agent's choice, the altruistic agent uses either discrete choice (D, equations 1 and 8) or entropic choice (E, equations 2 and 9). It must be noted that horizon n = 12 is equivalent to an infinite-horizon look-ahead for the given environment size and that entropic choice is equivalent to immediate choice (equations 3 and 10) at horizon n = 1, as the environment satisfies the necessary conditions listed for equation 3. Table 3 displays the results of this experiment. In the first row, it is evaluated whether the altruistic agent opens the door at all times, such that the leader agent can eat the green apple. It can be observed that the altruistic agent only opens the door for longer horizons n and higher discount factors γ_a. Given the definitions of discrete choice (Equation 1) and entropic choice (Equation 2), it can be assumed that the choice horizon n determines the locality for which choice is considered and that the discount factor γ_a defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent. This is in line with the observed results for the first category (Opens door). It can be assumed that, for short horizons n, the altruistic agent does not open the door, as it does not estimate that this would lead to an increase in the leader agent's choice. A similar argumentation follows for low discount factors γ_a. The bottom-row category evaluates whether the altruistic agent does not block the hallway that leads up to the leader agent's target apple in the top right environment cell. This category demonstrates a possible failure case of the proposed approach of maximizing another agent's choice. For short horizons n and high discount factors γ_a, the altruistic agent actively blocks the entry to the low-entropy hallway towards the top right cell – by constantly occupying cell (2, 6) – to prohibit the leader agent from entering this region of low estimated choice. This failure case can be prevented by an appropriate selection of the hyperparameters – horizon n and discount factor γ_a. It is related to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018); if chosen incorrectly, the agent does not pursue environment rewards in low-entropy regions. A possible solution to this problem would be to define a constrained optimization problem, as shown by Haarnoja and Abbeel (2018).
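As a rough illustration of how the pieces in B.1.3 fit together, the sketch below maintains a relative-frequency estimate of the transition matrix and uses the resulting entropic-choice estimate of the leader as the internal reward in a standard tabular Q-learning update. The environment stub, the state and action counts, and the hyperparameter values are placeholders, and the conditioning of T on the altruistic agent's state is dropped for brevity; this is not the exact experiment implementation.

import numpy as np

n_states, n_actions = 20, 5
counts = np.full((n_states, n_states), 1e-6)  # transition counts, smoothed so rows stay stochastic
Q = np.zeros((n_states, n_actions))           # Q-table of the altruistic agent
alpha, gamma_a, horizon, eps = 0.1, 0.7, 3, 0.1

def transition_matrix():
    # Relative frequencies of observed transitions (appendix A.1 / B.1.3).
    return counts / counts.sum(axis=1, keepdims=True)

def entropic_choice(state, n):
    dist = np.eye(n_states)[state] @ np.linalg.matrix_power(transition_matrix(), n)
    p = dist[dist > 1e-12]
    return float(-(p * np.log(p)).sum())

def env_step(state, action):
    return np.random.randint(n_states)        # placeholder environment dynamics

state = 0
for step in range(10000):
    action = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[state].argmax())
    next_state = env_step(state, action)
    counts[state, next_state] += 1            # update the estimate of T from the observed transition
    reward = entropic_choice(next_state, horizon)  # internal reward: the leader's estimated choice
    Q[state, action] += alpha * (reward + gamma_a * Q[next_state].max() - Q[state, action])
    state = next_state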
B.3 ABLATION STUDY ON JOINT LEARNING Training. To investigate the effects of joint learning of the leader agent's and the altruistic agent's policy, we adapted the training process described in section 4.1 for the Gridworld experiments as follows. Instead of first learning the policy of the leader agent while the altruistic agent takes random actions, we initialized both policies from scratch and trained both agents simultaneously with the parameters given in Table 4. Results. We evaluated the outcome for the same scenarios, i.e. the scenarios described in section 4.1. We found that the results for the individual test cases were equivalent to those achieved when training the leader and the altruistic agent sequentially, i.e. the results are equivalent to those displayed in Table 3. C LEVEL BASED FORAGING EXPERIMENTS C.1 TRAINING PROCEDURE C.1.1 SETUP We adopted the Level Based Foraging environment (https://github.com/semitable/lb-foraging) as given in Christianos et al. (2020). We only focus on two-agent scenarios and only consider the subset of possible environments that require full cooperation among agents, i.e. those where food can only be foraged by two agents cooperatively. We therefore only consider environments where both agents are at level one, and all present food is at level two. In the original implementation, both agents have to simultaneously select the eat action while docking at different sides of a food object to forage the object and receive the reward. To reduce training time, we simplify this setup by reducing the action space to up, down, left, right, stay, i.e. we remove the eat action and enable agents to forage food by being simultaneously at different sides of a food object, with no further action required. C.1.2 PRETRAINING To obtain a pretrained leader agent, we first train two agents in the environment that are equally rewarded for foraging food. This setup corresponds to shared-reward cooperative MARL (Tan, 1993). Both agents are trained using Deep Q-Learning (DQL; Van Hasselt et al., 2015), using a fully connected neural network with two hidden layers and five output values, representing the Q values of the five possible actions. The exact training parameters are listed in Table 4. We then take either one of the two agents and set it as the pretrained leader agent for the subsequent evaluation of the altruistic agent. C.1.3 TRAINING OF ADDITIONAL AGENTS We then insert an additional agent into the environment that shall act altruistically towards the leader agent. This additional agent is trained in the same fashion and with the same parameters as the previously trained leader agents. Only its reward signal is different, as laid out in the next section.
C.1.4 REWARD COMPUTATION FOR ADDITIONAL AGENTS We compare four different approaches that define how the reward of the additional agent is computed, and hence how it behaves. Random: The agent takes random actions. Supervised: The agent receives the same reward as the leader agent, i.e. a shared reward as in cooperative MARL. Ours: The reward of the additional agent is defined as the immediate choice of the leader agent, as detailed in equation 3. We compute the leader agent's policy entropy by computing the entropy of the softmax of the leader agent's Q values in the given state (an illustrative sketch is given at the end of this appendix section). We further consider an unsupervised baseline, as detailed in the next paragraph. Unsupervised baseline (MaxEnt). As an unsupervised baseline, we implemented the MEPOL approach of Mutti et al. (2020). Their task-agnostic unsupervised exploration approach maximizes the entropy over the state distribution of trajectory rollouts. For this baseline, the additional agent is trained with the implementation given by the authors (https://github.com/muttimirco/mepol), which itself builds on TRPO (Schulman et al., 2015). We leave all parameters unchanged but evaluate different learning rates; lr ∈ {1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1}. Best results were achieved for a learning rate of 1e-5, which was hence picked as the relevant baseline. C.2 PERFORMANCE EVALUATION Each experiment was run for 5 different random seeds and mean and standard deviation are reported. Training progress is shown in Figure 4. Evaluations are computed every 10000 environment steps for 200 episodes, with the exploration set to zero. Training time was about 14 hours for each run. Results are shown in Fig. 4.
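To illustrate the reward computation of the Ours configuration in C.1.4: the immediate choice of the leader is the entropy of the Boltzmann policy obtained from its Q-values. The sketch below is illustrative only; the Q-vector is a placeholder and not an output of the actual pretrained network.

import numpy as np

def softmax(q):
    z = np.exp(q - q.max())
    return z / z.sum()

def immediate_choice_reward(q_values):
    # IC (eq. 3): entropy of the leader's policy, derived from its Q-values via a softmax.
    p = softmax(np.asarray(q_values, dtype=float))
    return float(-(p * np.log(p + 1e-12)).sum())

q_leader = [1.3, 0.2, 0.2, -0.5, 0.9]  # placeholder Q-values for the five LBF actions
print(immediate_choice_reward(q_leader))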
D TAG EXPERIMENTS D.1 TRAINING PROCEDURE D.1.1 PRETRAINING We use the Simple Tag (Tag) implementation by Terry et al. (2020) (https://github.com/PettingZoo-Team/PettingZoo), which is unchanged as compared to the original implementation of Mordatch and Abbeel (2018) (https://github.com/openai/multiagent-particle-envs), only fixing minor errors. We first adopt the original configuration and pretrain three adversaries and one good agent (leader agent) using the parameters listed in Table 4. We use MADDPG (Lowe et al., 2017) (https://github.com/starry-sky6688/MADDPG) to train adversary agents, and modify the framework as follows. The last layer of each agent's actor-network outputs one value for each of the environment's five possible actions, over which the softmax is computed. We then sample the agent's action from the output softmax vector, which corresponds to the probabilities with which the agent takes a specific action in a given state. We train the leader agent with DDPG (Lillicrap et al., 2016), where we equally modify the output layer. Each actor and critic network is implemented as a fully-connected neural network with two hidden layers, with layer sizes as given in Table 4. To make the environment more challenging for the leader agent, we decrease its maximum speed and acceleration to 70% of the original value. We next insert three additional agents into the environment whose observations include all agents and objects. These additional agents are not observed by adversary agents or the leader agent. The additional agents are of the same size as the adversary agents, and their acceleration and maximum velocity are equal to that of the leader agent. To speed up training, we made the following changes to the environment, which are applied to our approach as well as to all baselines. First, we spawn the three additional agents in the vicinity of the leader agent, which itself is spawned at a random position. Furthermore, we randomly pick two out of the three adversary agents and decrease their maximum acceleration and maximum speed by 50%. We made these changes to be able to observe substantial differences between the different approaches after a training time of less than 24h. D.1.2 TRAINING OF ADDITIONAL AGENTS We train these three additionally inserted agents with the previously described modified version of MADDPG. The reward for each agent is defined either according to our developed approach, or any of the given baselines, as detailed in the next section. D.1.3 REWARD COMPUTATION FOR ADDITIONAL AGENTS FOR DIFFERENT BASELINES We consider the following implementations for the reward computation of the additional agents, and the corresponding environment configurations. None: For this scenario, the additional agents are removed from the environment. The remaining approaches purely differ in the way that the reward of the additional agents is computed. No other changes are made. Random: The additional agents take random actions. Cage: The additional agents receive a negative reward for violating the environment boundaries, which is equal to the negative reward that the leader agent receives for itself violating the environment boundaries (part of the original Tag implementation). Supervised: The additional agents receive the same reward as the leader agent. That is, they receive a reward of -10 if the leader agent is caught by the adversaries and a small negative reward if the leader agent violates the environment boundaries. Supervised + Cage: The additional agents receive the same reward as the leader agent, and an additional small negative reward if they themselves violate the environment boundaries. Ours: The reward of the additional agents is defined as the immediate choice of the leader agent, as detailed in eq. 3. To reduce the variance in the estimate of the leader agent's immediate choice, we implement an ensemble of five pretrained actor-networks for the leader agent, evaluate the policy entropy of each network, and take the median of the achieved values as the reward for the altruistic agents (an illustrative sketch is given below, after D.2). Furthermore, the additional agents receive a small negative reward for themselves violating the environment boundaries. D.2 PERFORMANCE EVALUATION We train Cage, Supervised, Supervised + Cage and Ours for five different random seeds with parameters as detailed in Table 4. We then compute the results listed in Table 2 by freezing all weights across all networks, setting the exploration noise to zero and computing the average and standard deviation over 500 rollout episodes.
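A rough sketch of the Ours reward in D.1.3: the policy entropy is evaluated for each member of an ensemble of pretrained leader actor-networks, and the median is taken, plus a small penalty when the altruistic agent itself leaves the arena. The placeholder actors below simply return fixed action distributions; they stand in for the real pretrained networks.

import numpy as np

def policy_entropy(probs):
    p = np.asarray(probs, dtype=float)
    return float(-(p * np.log(p + 1e-12)).sum())

def altruistic_reward(obs, leader_ensemble, out_of_bounds, boundary_penalty=-0.1):
    # Median over the ensemble reduces the variance of the immediate-choice estimate.
    entropies = [policy_entropy(actor(obs)) for actor in leader_ensemble]
    reward = float(np.median(entropies))
    if out_of_bounds:
        reward += boundary_penalty
    return reward

rng = np.random.default_rng(0)
def make_placeholder_actor():
    w = rng.random(5)
    return lambda obs: np.exp(w) / np.exp(w).sum()  # fixed softmax distribution over 5 actions

ensemble = [make_placeholder_actor() for _ in range(5)]
print(altruistic_reward(obs=None, leader_ensemble=ensemble, out_of_bounds=False))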
E RESOURCE ENVIRONMENT E.0.1 MOTIVATION AND OVERVIEW This environment is a special case of the general resource-based MDP proposed by Benson-Tilsen and Soares (2016), which they used to show that intelligent agents pursue instrumentally useful subgoals. The motivation behind the choice for this environment is to evaluate our proposal in non-spatial and non-navigation environments. In the environment, there are 3 resource types, which two "consumer" agents may consume as an action. Each consumer has different preferences (reward function), and so will only consume 2 of the resource types. A third, altruistic agent receives one resource unit of each type to distribute among the consumers, and its goal is to satisfy the preferences of the consumers without knowing their reward function. We define its performance as the average number of times that the consumers fail to consume their preferred resource (so lower is better). We compare our method to a supervised agent that is explicitly trained with the consumers' reward function, as well as to an agent that assigns the resources randomly. E.0.2 ENVIRONMENT DESCRIPTION The environment is expressed as a Markov Game (see section 3). The Markov game is composed of two human-inspired consumers with subscripts C1 and C2 and an altruistic agent with subscript A. Three types of resources exist, R_X, R_Y and R_Z. The environment state s is given by the number of resources of each type available to each of the consumers. For example, s = [(1, 0, 1), (0, 1, 0)] means that agent C1 has one resource each of type X and Z available, while agent C2 only has one resource of type Y available. At the beginning of each time step, the altruistic agent is provided with one resource per category, i.e. R_X, R_Y and R_Z. The altruistic agent can assign each resource individually to any agent or discard the resource. The altruistic agent's action space is hence defined by one sub-action per resource, i.e. a_A = (a_A^X, a_A^Y, a_A^Z). Each sub-action assigns the resource either to one of the consumers or discards it. The resources are then distributed according to the action taken by the altruistic agent and the environment state is updated. Resources cannot be stacked, which means that each agent can only have one resource per category available at a time. Next, the consumers attempt to consume one resource each, according to their preference. Agent C1 dislikes resource R_Z, hence it chooses R_X or R_Y with equal probability. Agent C2 dislikes resource R_X, hence it chooses R_Y or R_Z with equal probability. The actions of agents C1 and C2 are sampled accordingly and the environment state is updated. For each round, we record how many agents attempted to consume a resource that was not available. E.1 TRAINING The altruistic agent is trained with Q-Learning (Watkins and Dayan, 1992) to maximize the discounted future choice of the consumers (see eq. 4). For that, it uses one of the three proposed objectives, namely IC (eq. 3), EC (eq. 2) or DC (eq. 1), which it estimates as detailed in appendix A.1. The exact hyper-parameters are given in Table 4. We compare the performance of the altruistic agent that maximizes the choice of the consumers to that of a supervised agent. The reward of the supervised agent is the negative of the number of consumers that attempted to consume a resource, in that time step, and failed. Further, we compare to a random-policy baseline that distributes the resources randomly but does not discard any resources. E.2 RESULTS Table 5 shows that the results achieved by the altruistic agent trained with choice are equivalent to those achieved by the supervised agent. Furthermore, they are significantly better than those achieved by an agent with a random policy.
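The per-step dynamics described in E.0.2 can be summarized in a short sketch. The state encoding, action encoding and helper names below are an illustrative reconstruction of the textual description, not the exact implementation.

import random

RESOURCES = ["X", "Y", "Z"]
PREFS = {"C1": ["X", "Y"], "C2": ["Y", "Z"]}  # C1 dislikes R_Z, C2 dislikes R_X

def step(state, altruistic_action):
    # state: {consumer: {resource: 0 or 1}}; altruistic_action: {resource: "C1" | "C2" | None}.
    # 1) Distribute the three fresh resources; resources cannot be stacked.
    for res, target in altruistic_action.items():
        if target is not None:
            state[target][res] = 1
    # 2) Each consumer attempts to consume one uniformly chosen preferred resource.
    failures = 0
    for consumer, prefs in PREFS.items():
        wanted = random.choice(prefs)
        if state[consumer][wanted] == 1:
            state[consumer][wanted] = 0
        else:
            failures += 1                     # attempted to consume an unavailable resource
    return state, failures

state = {"C1": {r: 0 for r in RESOURCES}, "C2": {r: 0 for r in RESOURCES}}
state, failures = step(state, {"X": "C1", "Y": "C2", "Z": "C2"})
print(failures)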
F VIDEOS OF BEHAVIOUR OF ALTRUISTIC AGENT We provide videos for the most relevant outcomes of our experiments in the supplementary material. F.1 VIDEOS FOR RESULTS OF GRIDWORLD EXPERIMENTS (SECTION 4.1) F.1.1 DOOR SCENARIO IN FIG. 1 TOP CENTER 01 Altruistic agent opens door for leader agent: It can be observed that the altruistic agent has learned to operate the door switch to enable the leader agent to pass through the door and reach its target on the other side. 02 Altruistic agent does not open door for leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent does not open the door. F.1.2 DEAD END SCENARIO IN FIG. 1 TOP RIGHT 03 Altruistic agent gives way to leader agent: It can be observed that the altruistic agent does not get in the way of the leader agent, which is hence able to reach its target in the top right cell. 04 Altruistic agent blocks path of leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent blocks the entry to the hallway towards the right side of the environment such that the leader agent cannot reach its target at the top right cell. This happens as the altruistic agent forcefully maximizes the estimated choice of the leader agent by hindering it from entering the hallway, which is a region of lower estimated choice. F.2 VIDEO FOR RESULTS OF LEVEL BASED FORAGING (SECTION 4.2) 05 Altruistic agent enables leader to forage apples: It can be observed how the altruistic agent (blue) learned to coordinate its movements with the leader agent (green), to enable the leader agent to forage apples. It has learned this behaviour purely through optimizing for the leader agent's choice and is itself not rewarded for foraging apples. F.3 VIDEO FOR RESULTS OF TAG (SECTION 4.3) 06 Altruistic agents protect leader from adversaries: It can be observed how the altruistic agents (blue colors) learned to coordinate their movements to protect the leader agent (green) from its adversaries. The adversaries (red colors) try to catch the leader, which in turn tries to flee from them. The altruistic agents protect the leader by actively intercepting the paths of the adversaries. They have learned this behaviour purely through optimizing for the leader agent's choice.
1. What is the focus and contribution of the paper regarding goal-agnostic assistance RL policy training? 2. What are the strengths of the proposed approach, particularly in its novelty and clarity? 3. What are the weaknesses of the paper, especially regarding comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What are the concerns or questions raised by the reviewer regarding the comparison between the proposed approach and prior works, such as empowerment and MADDPG-like methods?
Summary Of The Paper Review
Summary Of The Paper This paper provides an alternative to empowerment for goal agnostic assistance RL policy training -- choice-based optimization. The core idea is to model possible subgoals as the choices of others and train policy to maximize the choices of other agents for achieving goal agnostic assistance. Evaluation against standard deep RL and the recent assistance via empowerment (AvE) method shows that the choice-based approach can be effective in multiple tasks. Review =====Strengths===== It is an interesting and novel way to train goal agnostic helping policy by maximizing the choice of other agents. The choice-based modeling is clearly motivated and defined, well situated in prior work, sufficiently compared against the similar yet different framework, empowerment. The writing is very clear and the core ideas are conveyed successfully. The experiments and discussions are thorough and informative. The results are promising. The implementation was well documented and the details seem to be sufficient for reproducing the results. =====Weaknesses===== While the overall writing is clear and the experiments are comprehensive, I do however still have confusion/concerns about the comparison between this approach and the prior work, specifically i) empowerment and ii) MADDPG-like approach that learns other agents' policies. It seems to me that choice is similar to empowerment, with the addition of using agent-specific policy instead of arbitrary probing policy. If I understand this correctly, this means that empowerment only depends on the environment and is agent agnostic, while choice is agent dependent -- the definition of choice depends on a specific agent's policy. If this understanding is correct, then I have two questions: Q1: Can you train a policy using a MADDPG-like method that jointly learns the policy of other agents and the policy of its own? Crucially, this is closer to your approach than the vanilla single agent DRL is, since it also explicitly depends on the policies of others. Q2: Empowerment is agent agnostic, so in theory, it generalizes better to other situations where agents will have different policies than the ones seen during training. Could you comment on the generalization?
ICLR
Title Learning Altruistic Behaviours in Reinforcement Learning without External Rewards Abstract Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. N/A Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents’ goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents’ goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent’s success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them. 1 INTRODUCTION Altruistic behaviour is often described as behaviour that is intended to benefit others, sometimes at a cost for the actor (Dowding and Monroe, 1997; Fehr and Fischbacher, 2003). 
Such behaviour is a desirable trait when integrating artificial intelligence into various aspects of human life and society – such as personal artificial assistants, house or warehouse robots, autonomous vehicles, and even recommender systems for news and entertainment. By observing and interacting with us, we may expect that artificial agents could adapt to our behaviour and objectives, and learn to act helpfully and selflessly. Altruistic behaviour could be a step towards value alignment (Allen et al., 2005; Gabriel, 2020), which aims to incorporate common-sense human values into artificial agents. Typically, we could achieve such an altruistic behaviour through various forms of supervision such as providing ground-truth actions at each time step, training agents with reinforcement learning (RL) and suitable rewards, or through imitation learning (Song et al., 2018). However, none of the approaches above scale up easily. They either require a large amount of supervision or carefully crafted rewards that can easily be misstated, leading to unwanted behaviour (Russell, 2019, ch. 1). How can one agent support another agent without knowing its goals? One clue might be the instrumental convergence hypothesis (Bostrom, 2017; Omohundro, 2008; Russell, 2019), which states that intelligent agents with varied goals are likely to pursue common subgoals which are generally useful (instrumental). Some examples are resource acquisition, cognitive enhancement or self-preservation, which all increase an agent’s chance of achieving almost arbitrary final goals. This hypothesis has been validated theoretically under many models, including resource games (BensonTilsen and Soares, 2016) and large classes of policies in discrete MDPs (Turner et al., 2019). While instrumental convergence is central to the discussion of value alignment and safe AI (Bostrom, 2017), since many instrumental subgoals have harmful effects, we believe that it is also a key to supporting agents with ill-defined goals and values, such as humans. The reason is that enabling instrumental subgoals for other agents (or not impeding them) can be beneficial, for a wide variety of goals and preferences. Since these subgoals occur frequently for rational agents, enabling them has the highest chance of success in the absence of more information about the other agent’s preferences, even if it is not guaranteed in the worst case. We speculate that having the ability to reach many future states is one of the most general convergent subgoals. It subsumes self-preservation (avoiding absorbent states), resource acquisition (if they are prerequisites to some actions), and generally maintaining the ability to pursue many goals. There is theoretical evidence that many optimal agents pursue this subgoal (Turner et al., 2019) (see sec. 3.2). Thus, we propose to train agents to support other agents by maximizing their choice (future state availability). This unsupervised approach learns altruistic behaviour without any extrinsic supervision such as rewards or expert demonstrations. We evaluate our methods in three diverse multi-agent environments. We always assume there are at least two agents: the leader agent that executes its own policy and can be trained using standard supervised methods, and an altruistic agent whose role is to help the leader. The performance of the altruistic agent is thus defined as the reward (success) achieved by the leader agent. 
In all our environments, the overall success of the leader agent depends on the altruistic agents’ behaviour. We show that our unsupervised approach outperforms unsupervised baselines by a large margin and, in some cases, also outperforms the supervised ones. Finally, we demonstrate possible failure cases of our approach where maximising the leader agent’s choice can lead to suboptimal behaviour. Our work makes the following three contributions: • We devise a multi-agent RL framework for intrinsically motivated artificial agents that act altruistically by maximising the choice of others. • We define and evaluate three task-agnostic methods to estimate the choice that an agent has in a given situation, which are all related to the variety in states it can reach. • We experimentally evaluate our unsupervised approach in three multi-agent environments and are able to match and, in some cases, outperform supervised baselines. 2 RELATED WORK To the best of our knowledge, we are the first to experimentally evaluate unsupervised agents with purely altruistic objectives. However, there are many related concepts in the literature. In human-robot cooperation, a robotic agent aids a human agent in achieving its goals (PérezD’Arpino and Shah, 2015; Hadfield-Menell et al., 2016; Baker et al., 2006; Dragan and Srinivasa, 2013; Fisac et al., 2017; 2020; Javdani et al., 2015; Dragan and Srinivasa, 2013; Macindoe et al., 2012; Pellegrinelli et al., 2016). Methods from Inverse RL (IRL) are often employed to infer human goals, which are then utilized by the robot agent to support the human. IRL itself aims to learn objectives from observations and can be used in single-agent (Fu et al., 2017) and multi-agent scenarios (Song et al., 2018; Yu et al., 2019; Jeon et al., 2020). However, IRL relies on the existence of expert demonstrations, which are often difficult to get at scale. In complex environments, it also often suffers from ambiguity of solutions (Arora and Doshi, 2021). In single-agent reinforcement learning, empowerment – which measures an agent’s capacity to affect its environment (Klyubin et al., 2005; 2008) – is used to enable intrinsically-motivated exploration (Gregor et al., 2016; Volpi and Polani, 2020). Empowerment is also used for multiagent cooperation (Guckelsberger et al., 2016; Du et al., 2020). Du et al. (2020) use empowerment to develop a helper agent that assists a (simulated) human agent by maximizing the human’s empowerment, constituting the research work most similar to ours. In contrast to our approach, it requires privileged access to an environment simulator and therefore does not allow to learn helpful or altruistic behaviour only from observation. Furthermore, the approach is not unsupervised. There are also mathematical formalizations of instrumental convergence (Bostrom, 2017). BensonTilsen and Soares (2016) analyze a MDP that makes finite resource allocation explicit, and find that optimal agents with arbitrary reward functions tend to deplete available resources. Turner et al. (2019) propose “power” as a convergent subgoal, which they define as the average difference between the state value of an optimal policy and the reward in the same state. They show that, for environments with certain symmetries, a larger proportion of optimal agents prefer states with higher power. In sec. 3.2 we will describe these symmetries and relate the result to our method. 3 METHODS In this section, we formalize our framework. We start with the generic definition describing multiagent setting. 
Next, we describe our framework where we show various approaches to estimate choice for a single agent, and how it can be applied to a two-agents Markov Game. Markov Game. We consider a Markov Game (Littman, 1994), which generalizes a Markov Decision Process (MDP) to a multi-agent scenario. In a Markov Game, agents interact in the same environment. At time step t, each agent (the ith of a total of N agents) takes the action ati, receives a reward rti , and finally the environment transitions from state s t to st+1. A Markov Game is then defined by a state space S (st ∈ S), a distribution of initial states η, the action space Ai (ati ∈ Ai) and reward function ri(s, a1, . . . , aN ) of each agent i, an environment state transition probability P (st+1|st, a1, . . . , aN ), and finally the agents’ discount factors γi. 3.1 ESTIMATING CHOICE FOR A SINGLE AGENT We first consider a single-agent scenario, i.e. N = 1, where only a leader agent, indicated by the subscript L, interacts with the environment through its pretrained stochastic policy πL. We assume that the leader acts Boltzmann-rationally, i.e. that it chooses high-value actions with higher probability. We believe this to be a reasonable assumption, as, in comparison to deterministic policies, stochastic policies are more robust (Zhang et al., 2020), and often achieve better results in real-world-alike partially observable stochastic domains (Kaelbling et al., 1998). We denote the leader agent’s generic choice in a given state s as CL(s), for which we propose concrete realizations below. Each method relies on the random variable St+n, with values st+n ∈ S, which refers to the leader agent’s state after n environment transitions from a starting state st. Its probability mass function is defined as the n-step state distribution of the underlying single-agent MDP, conditioned on the current state: p(st+n|st) = P (St+n = s|πL, st). Discrete choice. Our first derived method simply defines the choice of the leader agent in state st as the number of states that it can reach within n transitions, which we refer to as its discrete choice: DCnL(s t) = |range ( St+n|st ) |, (1) where range(X) is the set of all values that a random variable X takes on with positive probability and | · | measures the size of that set. While this count-based estimator of choice is intuitive and easily interpretable, it can hardly be estimated practically in large or continuous state spaces. It also discards information about the probability of reaching these states. Entropic choice. It can be shown that the entropy of a random variable X acts as a lower bound for the size of the set of values that X takes on with positive probability (Galvin, 2014, Property 2.6), i.e. H(X) ≤ log |range(X)|.We define a lower bound of the discrete choice by computing the Shannon entropy of the n-step state distribution, which we refer to as the agent’s entropic choice: ECnL(s t) = H(St+n|st) = − ∑ s∈S P (St+n = s|πL, st) log ( P (St+n = s|πL, st) ) , (2) which estimates the agent’s choice as the variety in its state after n transitions. Unlike eq. 1, ECnL can be computed in continuous state spaces or efficiently estimated by Monte Carlo sampling. Immediate choice. To further simplify entropic choice and reduce its computational complexity, we may limit the look-ahead horizon to n = 1 and assume an injective relationship from actions to states, i.e. no two actions taken at st lead to the equivalent state st+1. 
Immediate choice. To further simplify entropic choice and reduce its computational complexity, we may limit the look-ahead horizon to n = 1 and assume an injective relationship from actions to states, i.e. no two actions taken at s^t lead to the same state s^{t+1}. This assumption is often true in navigation environments, where different step-actions result in different states. We can then simplify the one-step state distribution of the leader agent to p(s^{t+1} | s^t) = P(S^{t+1} = s | π_L, s^t) = π_L(a_L^t = a | s^t), and compute a simplified, short-horizon entropic choice, the immediate choice:
IC_L(s^t) = H(S^{t+1} | s^t) = H(π_L^t(a | s^t)).   (3)
Immediate choice (IC) can be easily computed as the entropy over its policy conditioned on the current state. Even though the assumptions made for immediate choice often do not hold in complex or real-world environments, we found empirically that this objective can yield good results. 3.2 OPTIMALITY OF CHOICE AS AN INSTRUMENTAL CONVERGENT SUBGOAL Turner et al. (2019) analyze the instrumental convergence of optimal agents on power-seeking subgoals and show that optimal policies tend to keep their options open (Prop. 6.9). They consider two distinct actions a and a′ taken at a state s′, leading into two sets of possible future states (for an infinite horizon). These sets of future states are represented as nodes in two graphs, respectively G and G′ (with edges weighted by the probability of transitioning from one state to another). They also assume that the states in G ∪ G′ can only be reached from s′ by taking actions a or a′. In the case where G is "similar" to a subgraph of G′, in the sense that they are equivalent up to arbitrary swapping of pairs of states, the authors prove that the probability of a being optimal is higher than the probability of a′ being optimal (for most reward function distributions). Therefore, if G′ contains more states than G, an optimal agent will choose a′ over a. Turner et al. (2019) thus lend theoretical support to our proposal: while there is no guarantee that any one optimal policy (corresponding to a rational agent with arbitrary reward function) pursues higher choice, in expectation (over a bounded space of reward functions) most policies do choose actions that lead to higher choice, all else being equal. As such, while we may not know a rational agent's concrete goals, there is a high chance that choice works as an instrumental subgoal. 3.3 COMPARISON BETWEEN CHOICE AND EMPOWERMENT The empowerment (Klyubin et al., 2005) of a leader agent in a given state s^t and for horizon n is
E_L^n(s^t) = max_{ω(a^n | s^t)} I(S^{t+n}; A^n | s^t) = max_{ω(a^n | s^t)} [ H(S^{t+n} | s^t) − H(S^{t+n} | A^n, s^t) ],
with a^n as a sequence of n actions of the leader agent and ω as a probing distribution over its n-step action sequences. When setting the probing distribution ω equal to the leader agent's policy, this expression simplifies to E_L^n(s^t) = EC_L^n(s^t) − H(S^{t+n} | A^n, s^t), with EC_L^n(s^t) as the entropic choice of the leader agent introduced in equation 2. If we further assume deterministic environment transitions, then empowerment becomes equal to entropic choice, i.e. E_L^n(s^t) = EC_L^n(s^t). In contrast to the previously introduced methods to estimate choice of another agent, empowerment of another agent cannot be estimated from observations of the environment transitions. To estimate another agent's empowerment in a given state (E_L^n(s^t)), access to its action space as well as privileged access to an environment simulator are required, which violates the main assumption of our research work, i.e. learning to assist others only from observations of the environment transitions.
Even when assuming privileged access, computing empowerment in large or continuous-state environments often remains infeasible (Mohamed and Rezende, 2015; Gregor et al., 2016; Zhao et al., 2020), as it requires maximizing over all possible probing distributions ω of the leader agent. In contrast, estimating state entropy, as needed for the computation of the metrics introduced in this work, is feasible in large and continuous environments (Seo et al., 2021; Mutti et al., 2020). 3.4 BEHAVING ALTRUISTICALLY BY MAXIMIZING ANOTHER AGENT'S CHOICE Having considered three methods to estimate an agent's choice (eq. 1-3) we now apply them to a Markov Game of two agents. The main hypothesis is that maximizing the choice of another agent is likely to allow it to reach more favourable regions of the state-space (for many possible policies of the agent), thus supporting it without a task-specific reward signal. Altruistic agent's policy definition. In this Markov Game, one agent is the leader, with the subscript L, and another one is the altruistic agent, with the subscript A. We define the optimal policy of the altruistic agent as the one that maximizes the future discounted choice of the leader,
π*_A = argmax_{π_A} Σ_{t=0}^{∞} γ_A^t C_L(s^t),   (4)
where the generic choice C_L(s^t) can be estimated by one of several methods: discrete choice DC_L^n(s^t), entropic choice EC_L^n(s^t) or immediate choice IC_L(s^t). Conditional estimates of choice. As the agents interact in the same environment, they both have influence over the system state s, which contains the state of both agents. This makes applying single-agent objectives based on the state distribution (such as eq. 1 and 2) difficult to translate to a multi-agent setting, since the states of both agents are intermingled. For example, an altruistic agent that maximizes entropic choice naively (eq. 2) will maximize both the state availability of the leader agent (which mirrors the single-agent entropic choice) and its own state availability (which does not contribute towards the altruism goal). To maximize entropic choice without also increasing the entropy of the altruistic agent's actions, we propose to condition the choice estimate on the altruistic agent's actions over the same time horizon, denoted by the random variable A_A^{t:t+n-1}:
EC_L^n(s^t) = H(S^{t+n} | A_A^{t:t+n-1}, π_L, s^t).   (5)
In order to better understand eq. 5, we can use the chain rule of conditional entropy (Cover and Thomas, 2005, ch. 2) to decompose it into two terms: EC_L^n(s^t) = H(S^{t+n}, A_A^{t:t+n-1} | π_L, s^t) − H(A_A^{t:t+n-1} | π_L, s^t), respectively the joint entropy of the states and actions, and the entropy of the actions. Therefore, we can interpret this objective as the altruistic agent maximizing the variety of states and actions, but subtracting the variety of its own actions, which is the undesired quantity. We can also relate eq. 5 to discrete choice (eq. 1). Using the fact that H(X|E) ≤ log |range(P(X|E))| for a random variable X and event E (Galvin, 2014, Property 2.12), we see that eq. 5 is a lower bound for a count-based choice estimate (analogous to eq. 1), also conditioned on the altruistic agent's actions: EC_L^n(s^t) ≤ log DC_L^n(s^t) = log |range(S^{t+n} | A_A^{t:t+n-1}, π_L, s^t)|. However, assuming simultaneous actions, the immediate choice estimate (eq. 3) stays unchanged, i.e. IC_L(s^t) = H(π_L^t(a | s^t) | a_A^t) = H(π_L^t(a | s^t)). The technical details of how these estimates can be computed from observations of the environment transitions are given in Appendix A.
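The conditional estimate in eq. 5 can likewise be approximated by sampling: it is the expectation, over the altruistic agent's own action sequences, of the entropy of the leader's resulting state distribution. The following sketch uses a placeholder two-agent transition function and placeholder policies, and samples the altruistic action sequence open-loop from the initial state for simplicity; it is meant only to make the conditioning explicit, not to reproduce the exact estimator used in the experiments.

import numpy as np
from collections import Counter

def state_entropy(states):
    counts = np.array(list(Counter(states).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def conditional_entropic_choice(state, pi_L, pi_A, transition, n, n_seq=20, n_roll=200):
    # Monte Carlo estimate of eq. 5: average, over sampled altruistic action sequences,
    # of the entropy of the leader's state after n steps with that sequence held fixed.
    entropies = []
    for _ in range(n_seq):
        a_seq = [np.random.choice(len(pi_A(state)), p=pi_A(state)) for _ in range(n)]
        finals = []
        for _ in range(n_roll):
            s = state
            for a_A in a_seq:
                a_L = np.random.choice(len(pi_L(s)), p=pi_L(s))
                s = transition(s, a_L, a_A)
            finals.append(s)
        entropies.append(state_entropy(finals))
    return float(np.mean(entropies))

# Placeholder joint dynamics on 6 states; both agents have 2 actions.
pi_L = lambda s: np.array([0.5, 0.5])
pi_A = lambda s: np.array([0.5, 0.5])
transition = lambda s, a_L, a_A: (s + a_L - a_A) % 6
print(conditional_entropic_choice(0, pi_L, pi_A, transition, n=3))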
4 EXPERIMENTAL EVALUATION We introduce three multi-agent environments of increasing complexity (in appendix E, we evaluate performance in a non-spatial environment), in which the success of a leader agent depends on the behaviour of one or more additional agents. In each environment, we first evaluate a subset of the proposed methods for choice estimation (DC_L^n, EC_L^n and IC_L) by comparing the estimated choice of the leader agent in minimalistic scenarios. We then evaluate our approach of behaving altruistically towards others by maximizing their choice (section 3.4) and measure the performance of our approach as the reward achieved by the leader agent. We provide videos of the emergent behaviours in the supp. mat. (see appendix F). We compare our method to both an unsupervised and a supervised approach. Note that the supervised approach has stronger assumptions, as it requires direct access to the leader agent's reward function. We do not consider inverse RL (IRL) as a relevant baseline, as it would rely on demonstrations of expert behaviour, which we do not assume. Even if perfect knowledge of the state transition probabilities is assumed, this does not allow generating expert demonstrations of the leader agent's policy, as its expert policy would in turn depend on the policy of the altruistic agent, which is yet to be found by IRL. 4.1 DISCRETE ENVIRONMENTS WITH CONTROLLABLE GATES We start by considering three different scenarios on a grid, illustrated in Fig. 1 (top row), with the starting positions of the leader (green) and an additional agent (blue) shown in faded colors; obstacles are gray, and agents may move in one of the four cardinal directions or stay still. Choice estimate analysis. We first verify whether the estimated choice for each state (agent position) correctly maps to our intuitive understanding of choice (that is, the diversity of actions that can be taken). Therefore, we conducted an analysis of the estimated choice of the leader agent using a simplified version of the environment (Fig. 1, top left), in which only the leader agent is present and selects actions uniformly at random. Fig. 1 (bottom row) shows the three different methods of estimating choice evaluated for each possible cell position of the leader agent. We can observe that states in less confined areas, e.g. further away from walls, generally feature higher choice estimates, with the least choice being afforded by the dead end at the right. All three methods' estimates are qualitatively similar, which validates the chosen approximations. In line with the simplifications made, the immediate choice (IC) estimates tend to be more local, as can be observed when comparing the estimates for the cell at row 2, column 4. In conclusion, these results qualitatively agree with an intuitive understanding of choice of an agent in a grid environment. Environment setup. In the Door Scenario (Fig. 1, top center), the door switch (row 1, col. 8) can only be operated by the altruistic agent. The door (row 2, col. 4) remains open as long as the altruistic agent is on the switch cell and is closed otherwise. As the leader agent always starts to the left of the door and the altruistic agent to the right, the leader agent can only attain its goal, the apple (row 2, col. 6), if the altruistic agent uses the door switch to enable the leader agent to pass through the door. In the Dead End Scenario (Fig. 1, top right), the door is always open, and the leader agent's target object (green apple) is moved to the top right cell.
Hence, the leader agent can obtain the apple without additional help from the altruistic agent. However, the altruistic agent could potentially block the path by positioning itself at the entry to the dead end. This situation would be the opposite of altruistic behaviour and is, of course, undesired. We compare to a supervised approach, to Assistance via Empowerment (AvE, (Du et al., 2020)) and a random-policy baseline. Assistance via Empowerment baseline. We compare with the recently-proposed AvE, which has a similar goal (Du et al., 2020). There are two major differences: AvE is not unsupervised, and it requires privileged access to an environment simulator to produce estimates. Hence, its use in real or black-box environments is limited. We used the authors’ implementation with fixed hyperparameters, except for the crucial horizon n, for which we present a sweep in app. B. Training. We start by pretraining the leader agent with Q-Learning (Watkins and Dayan, 1992), with the altruistic agent executing a random policy. Hence, after convergence, the leader agent’s policy targets the green apple. Appendix B lists all details and parameters. Afterwards, the leader agent’s learning is frozen and the altruistic agent is trained; it always observes the position of the leader agent sL, its own position sA, and the environment state senv, which is composed of the door state (open, closed) and the food state (present, eaten). The altruistic agent is trained with Q-Learning to maximize the discounted future choice of the leader agent (see eq.. 4. For that, it uses one of the three proposed methods such as eq. 3, eq. 2 or eq. 1, as detailed in appendix A.1. Results. We investigate the developed behaviour of the altruistic agent after convergence for different choices of the hyperparameters – look-ahead horizon n ∈ {1, 3, 12} (which determines the scale at which choices are considered) and discount factor γa ∈ {0.1, 0.7} (which defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent). Success is binary: either the leader agent attains its goal (green apple), or not. In the Door Scenario (Fig. 1, top center), we found that, for longer horizons n and higher discount factors γa, the altruistic agent opens the door to allow the leader agent to reach its target, by occupying the switch position (square outline; row 1, col. 8). For smaller n and lower γa, the altruistic agent does not execute any coordinated policy and the leader does not succeed. Using the AvE method, we find that it only opens the door for n = 3, but fails to do so for n = 1 and n = 12. In the Dead End Scenario (Fig. 1, top right), we observe that, for longer horizons n and large discount factors γa, the altruistic agent stays out of the leader agent’s way by occupying a far-away cell (square outline; row 1, col. 6). For short horizons n and high discount factors γa, the altruistic agent actively blocks the entry to the hallway that contains the target (circle outline; row 3, col. 7), to prohibit the leader agent from entering this region of low estimated choice (recall that the choice for each cell is visualized in Fig. 1, bottom right). This failure case can be prevented by having a large enough horizon n and discount factor γa, analogously to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018). We find that this configuration performs consistently better than others in both scenarios, and hence is more preferred. 
On the other hand, the AvE method does not block the path of the leader agent for n = 1, but blocks its path for n = 3 and n = 12. We found that the resulting behaviour of our approach is independent of the used method for choice estimation, i.e. either discrete choice (eq. 1) or entropic choice (eq. 2) yield the same outcome, with immediate choice (eq. 3) being a special case of entropic choice. As for the AvE baseline, we hypothesize that the variance of results is due to the nature of the proxy used in practice, which includes components of empowerment from both agents (sec. 3.4). The binary outcomes for all hyperparameter combinations are given in appendix B. We also compare to a supervised baseline (receiving a reward when the leader obtains the apple), in which case the leader always succeeds. 4.2 LEVEL-BASED FORAGING EXPERIMENTS Computational efficiency. Due to the computational complexity resulting from the need to estimate a long-term distribution of states, p(st+n|st), we focus on immediate choice (IC) to estimate the leader agent’s choice in the remaining sections. Furthermore, in rare state-action sequences, the assumptions made for IC, i.e. deterministic environment transitions and an injective relationship from actions to states, may not hold. Nonetheless, we did not find this to adversely affect the results. Due to its dependence on access to the environment simulator and its computational complexity, we do not consider the AvE baseline for the remainder of experiments. Setup. We use a fully-observable multi-agent environment that enables us to assess the level of cooperation among agents (level-based foraging, LBF, Christianos et al. (2020)) to evaluate the performance of altruistic agents in more complex environments with discrete state spaces. We compare our method to a maximum-entropy approach from single-agent RL (Mutti et al., 2020) and a random-policy baseline. A visualization of the environment is depicted in Fig. 2 (left). The two agents can forage apples by simultaneously taking positions at different sides of a targeted apple, yielding a fixed reward. We first train two agents – which receive an equal reward for foraging – using Deep Q-Learning (DQL, Van Hasselt et al. (2015)), corresponding to fully-supervised sharedreward in multi-agent reinforcement learning (MARL). We then take one of these pretrained agents that has learned to forage apples when accompanied by a cooperating agent, freeze its policy, and place it as the leader agent (green) into the test scenario (additional details are provided in app. C). Choice estimate analysis. We first qualitatively evaluate IC as an estimator for choice in Fig. 3, by comparing representative scenarios. To quantitatively analyse IC as an estimator for the leader agent’s choice, we compare the leader agent’s average IC (over 100 episodes) in two scenarios, one in which it can acquire many rewards, i.e. the other agent acts cooperatively, and one where it can acquire only few rewards, i.e. the other agent takes random actions. We show the results in Table 1. We observe that the leader agent’s estimated choice is substantially higher when it is able to acquire high rewards. Note that the IC estimate does not have privileged access to the reward function of the leader agent, and so this experiment evaluates its worth as a generic proxy for the leader’s reward. 
Assuming that an agent is able to acquire higher rewards when having more choice, these results indicate that IC is a reasonable estimator for the leader agent’s choice in LBF. Training. We now consider an environment that consists of the previously pretrained leader and an additional altruistic agent, which is trained from scratch and does not receive a reward for foraging apples, but is rewarded according to the leader agent’s choice. Its reward is given as the current estimate of the leader agent’s IC (eq. 3) and it is trained using DQL. To compute its internal reward signal, the altruistic agent would therefore need to estimate the state transition probabilities, as detailed in A.2. To decouple our approach’s performance from that of the state transition estimator, we instead directly compute the altruistic agent’s reward using the leader agent’s policy. Results. We define the performance of the altruistic agent not as its achieved internal reward but as the reward achieved by the leader agent, i.e. its performance in enabling the leader agent to forage apples. Fig. 4 shows a comparison of the altruistic agent’s performance to that achieved by 3 baselines (two unsupervised and one supervised), averaged over 5 random seeds, with the standard deviation as the shaded area. It can be observed that the performance of the altruistic agent converges to a similar performance to that of the supervised agent, and outperforms the baseline approaches by a large margin. Furthermore, the IC improvement of the leader agent is correlated with its reward improvement, which supports using IC as a reasonable proxy for the choice of the leader agent. 4.3 MULTI-AGENT TAG GAME WITH PROTECTIVE AGENTS Setup. We use a multi-agent tag environment (Tag, Mordatch and Abbeel (2018); Lowe et al. (2017); Terry et al. (2020)), illustrated in Fig. 2 (right), to evaluate the capabilities of altruistic agents in complex environments with continuous state spaces. Adversaries are rewarded for catching the leader, which in turn receives a negative reward for being caught or crossing the environment boundaries. To speed up training, altruistic agents additionally receive a small negative reward for violating the environment boundaries. We pretrain the adversaries and the leader (without the presence of altruistic agents) using MADDPG (Lowe et al., 2017) and DDPG (Lillicrap et al., 2016) respectively. After pretraining, the adversary agents have learned to cooperatively chase the leader agent, which in turn has learned to flee from the adversaries. Exact setup specifications and all parameters are given in appendix D. Choice estimate analysis. As done for LBF, we evaluate the IC of the leader agent in representative scenarios in Fig. 3. We also quantitatively evaluate IC as an estimator for the leader agent’s choice, by comparing the leader agent’s IC per timestep for a scenario in which it receives high rewards to one where it receives low rewards. We again hypothesize that the leader agent is able to acquire higher rewards when having more choice. Table 1 shows that the estimated choice is substantially higher in the high-success scenario, indicating that IC is a reasonable estimator also in Tag. Training. We freeze the pretrained policies of the adversary agents and the leader agent and insert three additional altruistic agents which observe all agents but are not observed themselves. 
Each additional altruistic agent’s internal reward signal is given as the IC of the leader agent (equation 3), which is directly computed as done in LBF (see 4.2). Results. Performance of the altruistic agents is defined as the times per episode that the leader agent is caught by the adversaries, i.e. the lower the better. In Table 2, the performance of the team of three altruistically trained agents (ours) is compared to three relevant baselines, with the altruistic agents either removed (None), acting randomly (random), or solely receiving a small negative reward for violating the environment boundaries (cage). In contrast to LBF, we do not compare to an unsupervised exploration approach, as we are not aware of such an implementation for cooperative MARL. Additionally, we report results for the case in which the altruistic agents receive the same reward as the leader agent (supervised), possibly appended by a negative reward for violating the environment boundaries (supervised + cage). It can be observed that our approach outperforms all relevant baselines by a substantial margin and also outperforms the supervised approach. We hypothesize this to be due to the dense internal reward signal that our approach provides, as compared to the sparse rewards in the supervised scenario: recall that in the supervised scenario the additional altruistic agents receive a large negative reward only when the leader agent is caught by the adversaries, whereas our approach provides a dense reward signal that corresponds to the current estimate of the leader agent’s choice. Fig. 5 displays the emerging protective behaviour of altruistic agents trained with our approach. Results videos are found in the supplemental material. 5 CONCLUSIONS We lay out some initial steps into developing artificial agents that learn altruistic behaviour from observations and interactions with other agents. Our experimental results demonstrate that artificial agents can behave altruistically towards other agents without knowledge of their objective or any external supervision, by actively maximizing their choice. This objective is justified by theoretical work on instrumental convergence, which shows that for a large proportion of rational agents this will be a useful subgoal, and thus can be leveraged to design generally altruistic agents. This work was motivated by a desire to address the potential negative outcomes of deploying agents that are oblivious to the values and objectives of others into the real world. As such, we hope that our work serves both as a baseline and facilitator for future research into value alignment in simulation settings, and as a complementary objective to standard RL that biases the behaviour towards more altruistic policies. In addition to the positive impacts of deployed altruistic agents outside of simulation, we remark that altruistic proxy objectives do not yet come with strict guarantees of optimizing for other agents’ rewards, and identify failure modes (sec. 4.1) which are hyperparameter-dependent, and which we hope provide interesting starting points for future work. 6 ETHICS STATEMENT We addressed the relevant aspects in our conclusion and have no conflicts of interest to declare. 7 REPRODUCIBILITY STATEMENT We provide detailed descriptions of our experiments in the appendix and list all relevant parameters in table 4. All experiments were run on single cores of Intel Xeon E7-8867v3 processors (2.5 GHz). Training times are given in the respective sections in the appendix. 
For the LBF and Tag experiments, we report mean and standard deviation over five different random seeds. The Gridworld experiments yield deterministic results. We will provide the source code for all experiments conducted with the final version of this publication. We created detailed instructions on how to run the code in order to replicate the experimental outcomes presented in this work.

8 ACKNOWLEDGEMENTS

We thank Thore Graepel and Yoram Bachrach for their helpful feedback. We are also grateful to the anonymous reviewers for their valuable suggestions. This work was supported by the Royal Academy of Engineering (RF\201819\18\163).

A ESTIMATION OF LEADER AGENT'S CHOICE FROM OBSERVATION

A.1 MODEL-BASED ESTIMATION OF CHOICE FROM OBSERVATIONS

We introduce a model-based estimator of choice that is suitable for small-scale discrete-state environments, having the advantage that it is easily interpretable. Recalling how we compute the discrete choice and entropic choice estimates for the leader agent, an estimate of the n-step state distribution conditioned on the altruistic agent's actions is needed, i.e. P(s^{t+n} | π_L, a^{t:t+n−1}_A, s^t). To simplify this computation, we assume the altruistic agent's action to equal 'hold' for the next n steps. More specifically, we assume that the altruistic agent's state is unchanged for the next n steps. Furthermore assuming that both the state and the action space are discrete, we compute

P(s^{t+n} | π_L, a^{t:t+n−1}_A, s^t) = s^t T(s^t_A)^n,   (6)

with

T(s^t_A)_{ij} = P(s^{t+1} = s_j | s^t = s_i, s^{t+1}_A = s^t_A),   (7)

where the state transition matrix T(s_A) holds the transition probabilities between all possible states, as a function of the state of the altruistic agent s_A. To compute T(s_A), the system state is encoded into a one-hot vector s_1. The n-step discrete choice of the leader agent can then be computed as

DC^n_L(s^t) = ‖s^t_1 T(s^t_A)^n‖_0,   (8)

its n-step entropic choice as

EC^n_L(s^t) = H(s^t_1 T(s^t_A)^n),   (9)

and its immediate choice as

IC_L(s^t) = H(π^t_L(a|s^t)) = H(s^t_1 T(s^t_A)).   (10)

In environments with a discrete state and action space, the altruistic agent can hence use an estimate of the state transition matrix T to estimate the choice of the leader agent using either of the proposed methods, i.e. DC, EC or IC. An estimate of T can be built over time, by observing the environment transitions and computing the transition probabilities as relative frequencies of observed transitions.

A.2 MODEL-FREE ESTIMATION OF CHOICE FROM OBSERVATIONS

To limit the computational complexity, which is important for environments with large or continuous state spaces, we also consider immediate choice as an estimator for the leader agent's choice (IC_L(s^t) = H(S^{t+1}|s^t)). As shown in section 3.1, this estimate can be simplified to H(S^{t+1}|s^t) = H(π^t_L(a|s^t)), under the named assumptions. Hence, to compute the immediate choice of the leader, the altruistic agent requires an estimate of the leader agent's policy entropy, which can be learned from observation using a policy estimation network (Hong et al., 2018; Papoudakis et al., 2020; Mao et al., 2019; Grover et al., 2018).

B GRIDWORLD EXPERIMENTS

B.1 TRAINING PROCEDURE

B.1.1 SETUP

The environment setup is described and displayed in section 4.1.

AvE baseline. We evaluate the AvE baseline for different horizons n. For each horizon, we tested the AvE baseline as implemented in the provided source code2, using the hyper-parameters suggested by the authors. The original implementation uses a look-ahead horizon n = 10.
We found 2https://github.com/yuqingd/ave that results are equal for both n = 10 and n = 12, which is why we only display results for n = 12. We further evaluated the AvE baseline for n between 1 and 12. For the Opens door task, we found that AvE yields success for n = 2, 3, 4, 5 and failing for the remaining. For the Non blocking task, we found that AvE yields success for n = 1, 2 and failing for the remaining. B.1.2 PRETRAINING We first pretrain the leader agent using tabular Q-Learning, with learning parameters given in Table 4. During this pretraining, the altruistic agent takes random actions. We train until all Q-Values are fully converged, i.e. training runs for 300000 environment steps. B.1.3 REWARD COMPUTATION FOR ALTRUISTIC AGENTS The altruistic agent is then also trained using tabular Q-Learning, and its internal reward signal is given as the choice estimate of the leader agent, i.e. either DCnL(s t), ECnL(s t) or ICL(st), which is computed with the model based-estimation introduced in appendix A.1. The altruistic agent records all environment transitions and frequently updates its estimate of the state transition matrix T (sA), which is needed to compute the internal reward signal for the altruistic agent. All training parameters can be found in Table 4. Training time is about 15 minutes per experiment. B.2 PERFORMANCE EVALUATION Performance of the altruistic agent is reported for two different categories, as shown in Table 3. For each category, we report success or failure for choice estimate look-ahead horizons n ∈ {1, 3, 12} and discount factors of the altruistic agent γa ∈ {0.1, 0.7}. Success or failure was always deterministic, conditioned on the experiment setup, i.e. 10 simulations were run for each setup which always yielded the same outcome. To estimate the leader agent’s choice, the altruistic agent uses either discrete choice (D, equations 1 and 8) or entropic choice (E, equations 2 and 9). It must be noted that horizon n = 12 is equivalent to an infinite horizon look-ahead for the given environment size and that entropic choice is equivalent to immediate choice (equations 3 and 10) at horizon n = 1, as the environment satisfies the necessary conditions listed for equation 3. Table 3 displays the results of this experiment. In the first row, it is evaluated whether the altruistic agent opens the door at all times, such that the leader agent can eat the green apple. It can be observed that the altruistic agent only opens the door for longer horizons n, respectively higher discount factors γa. Given the definitions of discrete choice (Equation 1) and entropic choice (Equation 2), it can be assumed that the choice horizon n determines the locality for which choice is considered and that the discount factor γa defines whether the altruistic agent gives higher importance to the short-term or long-term choice of the leader agent. This is in line with the observed results for the first category (Opens door). It can be assumed that, for short horizons n, the altruistic agent does not open the door, as it does not estimate that this would lead to an increase in the leader agent’s choice. A similar argumentation follows for low discount factors γa. The bottom-row category evaluates whether the altruistic agent does not block the hallway that leads up to the leader agent’s target apple in the top right environment cell. This category demonstrates a possible failure case of the proposed approach of maximizing another agent’s choice. 
For short horizons n and high discount factors γa, the altruistic agent actively blocks the entry to the lowentropy hallway towards the top right cell – by constantly occupying cell (2, 6) – to prohibit the leader agent from entering this region of low estimated choice. This failure case can be prevented by an appropriate selection of the hyperparameters – horizon n and discount factor γa. It is related to the selection of the temperature hyperparameter in maximum entropy single-agent RL (Haarnoja and Abbeel, 2018); if chosen incorrectly, the agent does not foster environment rewards in lowentropy regions. A possible solution to this problem would be to define a constrained optimization problem, as shown by Haarnoja and Abbeel (2018). B.3 ABLATION STUDY ON JOINT LEARNING Training. To investigate the effects of joint learning of the leader agent’s and the altruistic agent’s policy, we adapted the training process described in section 4.1 for the Gridworld experiments as following. Instead of first learning the policy of the leader agent while the altruistic agent takes random actions, we initialized both policies from scratch and trained both agents simultaneously with the parameters given in Table 4. Results. We evaluated the outcome for the same scenarios, i.e the scenarios described in section 4.1. We found that the results for the individual test cases were equivalent to those achieved when training the leader and the altruistic agent subsequently, i.e. the results are equivalent to those displayed in Table 3. C LEVEL BASED FORAGING EXPERIMENTS C.1 TRAINING PROCEDURE C.1.1 SETUP We adopted the Level Based Foraging3 environment as given in Christianos et al. (2020). We only focus on two-agent scenarios and only consider the subset of possible environments that require full cooperation among agents, i.e. those where food can only be foraged by two agents cooperatively. We therefore only consider environments where both agents are at level one, and all present food is at level two. In the original implementation, both agents have to simultaneously select the eat action while docking at different sides of a food object to forage the object and receive the reward. To reduce training time, we simplify this setup by reducing the action space to up, down, left, right, stay, i.e. we remove the eat action and enable agents to forage food by being simultaneously at different sides of a food object, with no further action required. C.1.2 PRETRAINING To obtain a pretrained leader agent, we first train two agents in the environment that are equally rewarded for foraging food. This setup corresponds to shared-reward cooperative MARL (Tan, 1993). Both agents are trained using Deep Q Learning (DQL, (Van Hasselt et al., 2015)), using a fully connected neural network with two hidden layers and five output values, resembling the Q values of the five possible actions. The exact training parameters are listed in Table 4. We then take either one of the two agents and set it as the pretrained leader agent for the subsequent evaluation of the altruistic agent. C.1.3 TRAINING OF ADDITIONAL AGENTS We then insert an additional agent into the environment that shall act altruistically towards the leader agent. This additional agent is trained in the same fashion and with the same parameters as the previously trained leader agents. Only its reward signal is different, as laid out in the next section. 
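The next subsection details the reward definitions compared for this additional agent. As a preview of the choice-based variant, the sketch below (ours; the function name and the temperature argument are illustrative additions) computes the leader's immediate choice as the entropy of the softmax over the leader's Q-values in the current state:

```python
import numpy as np

def immediate_choice_reward(leader_q_values, temperature=1.0):
    """Internal reward for the altruistic agent: entropy of the leader's
    softmax policy in the current state (immediate choice, eq. 3)."""
    q = np.asarray(leader_q_values, dtype=np.float64) / temperature
    q -= q.max()                                   # numerical stability
    probs = np.exp(q) / np.exp(q).sum()
    return float(-np.sum(probs * np.log(probs + 1e-12)))
```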
C.1.4 REWARD COMPUTATION FOR ADDITIONAL AGENTS We compare four different approaches for how the reward of the additional agent is defined, respectively how it behaves. Random: The agent takes random actions. Supervised: The agent receives the same reward as the leader agent, i.e. a shared reward as in cooperative MARL. Ours: 3https://github.com/semitable/lb-foraging The reward of the additional agent is defined as the immediate choice of the leader agent, as detailed in equation 3. We compute the leader agent’s policy entropy by computing the entropy of the softmax of the leader agent’s Q values in the given state. We further consider an unsupervised baseline, as detailed in the next paragraph. Unsupervised baseline (MaxEnt). As an unsupervised baseline, we implemented the MEPOL approach of Mutti et al. (2020). Their task-agnostic unsupervised exploration approach maximizes the entropy over the state distribution of trajectory rollouts. For this baseline, the additional agent is trained with the implementation given by the authors4, which itself builds on TRPO (Schulman et al., 2015). We leave all parameters unchanged but evaluate different learning rates; lr ∈ {1e − 6, 1e− 5, 1e− 4, 1e− 3, 1e− 2, 1e− 1}. Best results were achieved for a learning rate of 1e− 5, which was hence picked as the relevant baseline. C.2 PERFORMANCE EVALUATION Each experiment was run for 5 different random seeds and mean and standard deviation are reported. Training progress is shown in Figure 4. Evaluations are computed every 10000 environment steps for 200 episodes, with the exploration set to zero. Training time was about 14 hours for each run. Results are shown in Fig. 4. D TAG EXPERIMENTS D.1 TRAINING PROCEDURE D.1.1 PRETRAINING We use the Simple Tag (Tag) implementation by Terry et al. (2020)5 which is unchanged as compared to the original implementation of Mordatch and Abbeel (2018)6, only fixing minor errors. We first adopt the original configuration and pretrain three adversaries and one good agent (leader agent) using the parameters listed in Table 4. We use MADDPG (Lowe et al., 2017)7 to train adversary agents, and modify the framework as follows. The last layer of each agent’s actor-network outputs one value for each of the environment’s five possible actions, over which the softmax is computed. We then sample the agent’s action from the output softmax vector, which corresponds to the probabilities with which the agent takes a specific action in a given state. We train the leader agent with DDPG (Lillicrap et al., 2016),7 where we equally modify the output layer. Each actor and critic network is implemented as a fully-connected neural network with two hidden layers, with layer sizes as given in Table 4. To make the environment more challenging for the leader agent, we decrease its maximum speed and acceleration to 70% of the original value. We next insert three additional agents into the environment whose observations include all agents and objects. These additional agents are not observed by adversary agents or the leader agent. The additional agents are of the same size as the adversary agents, and their acceleration and maximum velocity are equal to that of the leader agent. To speed up training, we made the following changes to the environment, which are applied to our approach as well as to all baselines. First, we spawn the three additional agents in the vicinity of the leader agent, which itself is spawned at a random position. 
Furthermore, we randomly pick two out of the three adversary agents and decrease their maximum acceleration and maximum speed by 50%. We made these changes to be able to observe substantial differences between the different approaches after a training time of less than 24h. D.1.2 TRAINING OF ADDITIONAL AGENTS We train these three additionally inserted agents with the previously described modified version of MADDPG. The reward for each agent is defined either according to our developed approach, or any of the given baselines, as detailed in the next section. 4https://github.com/muttimirco/mepol 5https://github.com/PettingZoo-Team/PettingZoo 6https://github.com/openai/multiagent-particle-envs 7https://github.com/starry-sky6688/MADDPG D.1.3 REWARD COMPUTATION FOR ADDITIONAL AGENTS FOR DIFFERENT BASELINES We consider the following implementations for the reward computation of the additional agents, respectively different environment configurations. None: For this scenario, the additional agents are removed from the environment. The remaining approaches purely differ in the way that the reward of the additional agents is computed. No other changes are made. Random: The additional agents take random actions. Cage: The additional agents receive a negative reward for violating the environment boundaries, which is equal to the negative reward that the leader agent receives for itself violating the environment boundaries (part of the original Tag implementation). Supervised: The additional agents receive the same reward as the leader agent. That is, they receive a reward of -10 if the leader agent is caught by the adversaries and a small negative reward if the leader agent violates the environment boundaries. Supervised + Cage: The additional agents receive the same reward as the leader agent, and an additional small negative reward if they themselves violate the environment boundaries. Ours: The reward of the additional agents is defined as the immediate choice of the leader agent, as detailed in eq. 3. To reduce the variance in the estimate of the leader agent’s immediate choice, we implement an ensemble of five pretrained actor-networks for the leader agent, evaluate the policy entropy of each network, and take the median of the achieved values as the reward for the altruistic agents. Furthermore, the additional agents receive a small negative reward for themselves violating the environment boundaries. D.2 PERFORMANCE EVALUATION We train Cage, Supervised, Supervised + Cage and Ours for five different random seeds with parameters as detailed in Table 4. We then compute the results listed in Table 2 by freezing all weights across all networks, setting the exploration noise to zero and computing the average and standard deviation over 500 rollout episodes. E RESOURCE ENVIRONMENT E.0.1 MOTIVATION AND OVERVIEW This environment is a special case of the general resource-based MDP proposed by Benson-Tilsen and Soares (2016), which they used to show that intelligent agents pursue instrumentally useful subgoals. The motivation behind the choice for this environment is to evaluate our proposal in non-spatial and non-navigation environments. In the environment, there are 3 resource types, which two “consumer” agents may consume as an action. Each consumer has different preferences (reward function), and so will only consume 2 of the resource types. 
A third, altruistic agent, receives one resource unit of each type to distribute among the consumers, and its goal is to satisfy the preferences of the consumers without knowing their reward function. We define its performance as the average number of times that the consumers fail to consume their preferred resource (so lower is better). We compare our method to a supervised agent that is explicitly trained with the consumers’ reward function, as well as to an agent that assigns the resources randomly. E.0.2 ENVIRONMENT DESCRIPTION The environment is expressed as a Markov Game (see section 3). The Markov game is composed of two human-inspired consumers with subscript C1, C2 and an altruistic agent with subscript A. Three types of resources exist, RX , RY and RZ . The environment state s is given by the number of resources of each type available to each of the consumers. For example, s = [(1, 0, 1), (0, 1, 0)] means that agent C1 has one resource each of type X and Y available, while agent C2 only has one resource of type Z available. At the beginning of each time step, the altruistic agent is provided with one resource per category, i.e. RX , RY and RZ . The altruistic agent can assign each resource individually to any agent or discard the resource. The altruistic agent’s action space is hence defined by one sub-action per resource, i.e. aA = (aXA , a Y A , a Z A). Each sub-action assigns the resource either to one of the consumers or discards it. The resources are then distributed according to the action taken by the altruistic agent and the environment state is updated. Resources cannot be stacked, which means that each agent can only have one resource per category available at a time. Next, the consumers attempt to consume one resource each, according to their preference. Agent C1 dislikes resource RZ , hence it chooses RX or RY with equal probability. Agent C2 dislikes resource RX , hence it chooses RY or RZ with equal probability. The actions of agents C1 and C2 are sampled accordingly and the environment state is updated. For each round, we record how many agents failed to consume a resource that was not available. E.1 TRAINING The altruistic agent is trained with Q-Learning (Watkins and Dayan, 1992) to maximize the discounted future choice of the consumers (see eq. 4). For that, it uses one of the three proposed objectives, namely IC (eq. 3), EC (eq. 2) or DC (eq. 1), which it estimates as detailed in appendix A.1. The exact hyper-parameters are given in Table 4. We compare the performance of the altruistic agent that maximizes the choice of the consumers to that of a supervised agent. The reward of the supervised agent is the negative of the number of consumers that attempted to consume a resource, in that time step, and failed. Further, we compare to a random-policy baseline that distributes the resources randomly but does not discard any resources. E.2 RESULTS Table 5 shows that the results achieved by the altruistic agent trained with choice are equivalent to those achieved by the supervised agent. Furthermore, they are significantly better than those achieved by an agent with a random policy. F VIDEOS OF BEHAVIOUR OF ALTRUISTIC AGENT We provide videos for the most relevant outcomes of our experiments in the supplementary material. F.1 VIDEOS FOR RESULTS OF GRIDWORLD EXPERIMENTS (SECTION 4.1) F.1.1 DOOR SCENARIO IN FIG. 
1 TOP CENTER 01 Altruistic agent opens door for leader agent: It can be observed that the altruistic agent has learned to operate the door switch to enable the leader agent to pass through the door and reach its target on the other side. 02 Altruistic agent does not open door for leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent does not open the door. F.1.2 DEAD END SCENARIO IN FIG. 1 TOP RIGHT 03 Altruistic agent gives way to leader agent: It can be observed that the altruistic agent does not get into the way of the leader agent, which is hence able to reach its target in the top right cell. 04 Altruistic agent blocks path of leader agent (failure case): It can be observed that for an unfavourable choice of hyperparameters, the altruistic agent blocks the entry to the hallway towards the right side of the environment such that the leader agent cannot reach its target at the top right cell. This happens as the altruistic agent forcefully maximizes the estimated choice of the leader agent by hindering it from entering the hallway, which is a region of fewer estimated choice. F.2 VIDEO FOR RESULTS OF LEVEL BASED FORAGING (SECTION 4.2) 05 Altruistic agent enables leader to forage apples: It can be observed how the altruistic agent (blue) learned to coordinate its movements with the leader agent (green), to enable the leader agent to forage apples. It has learned this behaviour purely through optimizing for the leader agents choice and is itself not rewarded for foraging apples. F.3 VIDEO FOR RESULTS OF TAG (SECTION 4.3) 06 Altruistic agents protect leader from adversaries: It can be observed how the altruistic agents (blue colors) learned to coordinate their movements to protect the leader agent (green) from its adversaries. The adversaries (red colors) try to catch the leader, which in turn tries to flee from them. The altruistic agents protect the leader by actively intercepting the paths of the adversaries. They have learned this behaviour purely through optimizing for the leader agents choice.
1. What is the focus and contribution of the paper on training an altruistic agent?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and potential impact?
3. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
4. Are there any concerns or limitations regarding the proposed method or its applications?
Summary Of The Paper Review
Summary Of The Paper
This paper is the first attempt to train an agent to behave altruistically towards others without knowledge of their objective or any external supervision. The main idea is to train the altruistic agent to give the leader agent more choice, thereby allowing it to better achieve its goals. The authors introduce three multi-agent environments of increasing complexity to evaluate the proposed method. The results show that it can, in some cases, outperform the supervised baselines.
Review
I am not an expert in the specified field, but in my opinion:
1. The writing of this paper is generally well-organized and of good quality.
2. This paper, as far as I know, is the first to try to address this important problem.
3. This paper can serve as a baseline and also proposes three testbeds for future research.
ICLR
Title Evaluating Predictive Distributions: Does Bayesian Deep Learning Work? Abstract Posterior predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions. Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs. Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community. We benchmark several approaches to uncertainty estimation using a neural-network-based data generating process. Our results reveal the importance of evaluation beyond marginal predictions. Further, they reconcile sources of confusion in the field, such as why Bayesian deep learning approaches that generate accurate marginal predictions perform poorly in sequential decision tasks, how incorporating priors can be helpful, and what roles epistemic versus aleatoric uncertainty play when evaluating performance. We also present experiments on real-world challenge datasets, which show a high correlation with testbed results, and that the importance of evaluating joint predictive distributions carries over to real data. As part of this effort, we opensource The Neural Testbed, including all implementations from this paper. 1 Introduction Deep learning has emerged as the state-of-the-art approach across a number of application domains in which agents learn from large amounts of data (LeCun et al., 2015). Neural networks are increasingly used not only to predict outcomes but also to inform decisions. Common approaches in deep learning produce point estimates but not uncertainty estimates, which are often required for effective decision-making. Bayesian deep learning extends the methodology to produce such uncertainty estimates (MacKay, 1992; Neal, 2012). We consider agents that are trained on data pairs ((Xt, Yt+1) : t = 0, 1, . . . , T − 1) and subsequently generate predictions given new inputs. When presented with an input XT , a Bayesian neural network can generate a predictive distribution of the outcome YT+1 that is yet to be observed. This distribution characterizes the agent’s uncertainty about YT+1. We refer to such a prediction as marginal to distinguish it from a joint predictive distribution over a list (YT+1, . . . , YT+τ ) of prospective outcomes corresponding to inputs (XT , . . . , XT+τ−1). The importance of uncertainty estimation has motivated a great deal of research over recent years (Kendall & Gal, 2017). This research has produced a variety of agents that learn to generate predictive distributions. With this proliferation of alternatives, it is increasingly important to analyze and compare their performance (Filos et al., 2019; Nado et al., 2021). In this paper, we introduce new tools for systematic evaluation of such agents. Our tools overcome several limitations faced by previous methods of evaluation. First, by focusing purely on predictive distributions, we allow for a unified treatment of approaches developed within the Bayesian neural network community and beyond. This sidesteps the Open source code available at https://anonymous.4open.science/r/neural-testbed-B839. question of whether any approach ‘is really Bayesian’ (Wilson & Izmailov, 2020). Second, our tools evaluate the quality of higher-order joint predictions (τ > 1). 
Until now, the Bayesian deep learning literature has focused almost exclusively on evaluating marginal predictions (Wang et al., 2021). Finally, we develop a neural-network-based data generating process for Bayesian deep learning that can be used to drive insight and algorithm development. Where research has focused on a small set of challenge datasets, this might introduce bias through overfitting via multiple iterations of algorithm development. We use these tools to compare hundreds of agent variants. Further, we show that performance on our synthetic data generating process data is highly correlated with performance on real-world challenge datasets. We opensource all code used in this paper as The Neural Testbed. Our results reconcile several sources of confusion in the field. One concerns why particular approaches developed by the Bayesian deep learning community, such as Bayes-by-backprop, dropout, and deep ensembles, perform poorly in sequential decision tasks despite faring well based on evaluation metrics of that community (Osband et al., 2018). Our results demonstrate that, while such methods produce accurate marginal predictions, they are no longer competitive when it comes to high-order joint predictions. Joint predictions play a critical role in sequential decision-making (Lu et al., 2021). Another puzzling issue is that state-of-the-art methods do not employ domain-specific priors. Whether Bayesian deep learning approaches should at all is a subject of controversy (Wenzel et al., 2020). We show that the benefits of domain-specific priors can be pronounced when evaluating high-order joint predictions, even where they are negligible for marginals. We also help to resolve a point of philosophical debate within the deep learning community: the importance of epistemic versus aleatoric uncertainty1. The strangeness of this distinction has even made its way into wider popular culture, as satirized in the XKCD comic of Figure 1 (Munroe, 2021). For a given parametric model, we can clearly distinguish parameter uncertainty from noise, or reducible from irreducible uncertainty. However, from the perspective of a learning agent, the choice of model is subjective; different models can lead to the same marginal predictions. Our formulation provides a clear and objective way to assess the quality of predictive distributions, without reliance on this subjective distinction between knowledge and chance. Crucially, we show that this can be judged via the quality of joint predictions, but that marginals are not sufficient. It is worth mentioning another notable contribution of this work. The quality of a predictive distribution is commonly assessed in terms of cross-entropy loss. While this measure is welldefined for both marginal and joint predictions, to the best of our knowledge, the literature has only addressed computation in the former case. For high-order joint predictions, the straightforward approach would require computing sums over exponentially many values. To render this computationally tractable, we developed a novel approximation algorithm that leverages a random partitioning operation and Monte Carlo simulation. While this approach is motivated by concepts from high-dimensional geometry (Kaski, 1998; Donoho, 2006), we leave its analysis as a topic for future theoretical research. 1Epistemic uncertainty relates to knowledge (ancient Greek episteme↔knowledge), as opposed to aleatoric uncertainty relating to chance (Latin alea↔dice) (Der Kiureghian & Ditlevsen, 2009). 
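Before formalizing the setup, the straightforward (non-partitioned) Monte Carlo estimate alluded to above can be sketched as follows. This is our illustrative sketch, not the paper's randomized-partitioning algorithm, and it assumes the agent's posterior is represented by a list of sampled models:

```python
import numpy as np

def joint_nll_monte_carlo(sampled_models, X, y):
    """Plain Monte Carlo estimate of the joint negative log-likelihood.

    sampled_models: list of callables, each mapping inputs X of shape (tau, d)
                    to class probabilities of shape (tau, num_classes); these
                    represent samples from the agent's posterior predictive.
    X, y:           tau test inputs and their integer labels.
    """
    likelihoods = []
    for model in sampled_models:
        probs = model(X)                                    # (tau, num_classes)
        likelihoods.append(np.prod(probs[np.arange(len(y)), y]))
    return -np.log(np.mean(likelihoods) + 1e-32)            # joint cross-entropy term
```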
2 Evaluating predictive distributions

In this section, we introduce notation for the standard supervised learning framework we will consider (classification) as well as our evaluation metric (the KL-loss). We also explain how we estimate the KL-loss for high-order joint predictions where exact computation is infeasible, using random partitions and Monte Carlo simulation.

2.1 Kullback–Leibler loss

Consider a sequence of pairs ((X_t, Y_{t+1}) : t = 0, 1, 2, . . .), where each X_t is a feature vector and each Y_{t+1} is its target label. This sequence is i.i.d. conditioned on the environment E, which produces the data, and which we view as a latent random variable. We consider an agent that is uncertain about the environment and predicts class labels Y_{T+1:T+τ} ≡ (Y_{T+1}, . . . , Y_{T+τ}) given training data pairs D_T ≡ ((X_t, Y_{t+1}) : t = 0, 1, 2, . . . , T − 1) and unlabelled feature vectors X_{T:T+τ−1} ≡ (X_T, . . . , X_{T+τ−1}). From the agent's perspective, each feature vector X_t is generated i.i.d. from a fixed distribution P(X_t ∈ ·), and each class label Y_{t+1} is then drawn from P(Y_{t+1} ∈ ·|E, X_t).

We describe the agent's predictions in terms of a generative model, parameterized by a vector θ_T that the agent learns from the training data D_T. For any inputs X_{T:T+τ−1}, θ_T determines a predictive distribution, which could be used to sample imagined outcomes Ŷ_{T+1:T+τ}. We define the τth-order expected KL-loss by

d^τ_KL = E[ d_KL( P(Y_{T+1:T+τ} ∈ ·|E, X_{T:T+τ−1}) ‖ P(Ŷ_{T+1:T+τ} ∈ ·|θ_T, X_{T:T+τ−1}) ) ]   (1)
       = −E[ log P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | θ_T, X_{T:T+τ−1}, Y_{T+1:T+τ}) ] + C,

where the first argument of d_KL is the environment likelihood and the second is the agent likelihood, the negated expected log term is the cross-entropy loss (equivalently, the negative log-likelihood), and C = E[log(P(Y_{T+1:T+τ}|E, X_{T:T+τ−1}))] is independent of θ_T. The expectation is taken over all random variables, including the environment E, the parameters θ_T, X_{T:T+τ−1}, and Y_{T+1:T+τ}. Note that d^τ_KL is equivalent to the widely used notion of cross-entropy loss, though offset by a quantity that is independent of θ_T (Kullback & Leibler, 1951). For τ > 1, d^τ_KL assesses joint rather than the marginal predictions.

2.2 Marginal Versus Joint Predictions

Evaluating an agent's ability to estimate uncertainty on joint instead of marginal predictions can result in very different answers. We provide a simple example that illustrates the point. Suppose the data ((X_t, Y_{t+1}) : t = 0, 1, 2, . . .) is generated by repeated tosses of a possibly biased coin with unknown probability p of heads.2 Let X_t = 0, to indicate that there is no input, and let each outcome Y_{t+1} be 0 or 1 to indicate tails or heads, respectively.

Consider two agents that, without any training, predict outcomes. Agent 1 assumes p = 2/3 and models the outcome of each flip as pure chance. Agent 2 assumes that the coin is fully biased, meaning that p ∈ {0, 1}, but assigns probabilities 1/3 and 2/3 to 0 and 1. Let Ŷ^1_{t+1} and Ŷ^2_{t+1} denote the outcomes imagined by the two agents. Despite their differing assumptions, the two agents generate identical marginal predictive distributions: P(Ŷ^1_{t+1} = 0) = P(Ŷ^2_{t+1} = 0) = 1/3. On the other hand, joint predictions greatly differ for large τ:

P(Ŷ^1_1 = 0, . . . , Ŷ^1_τ = 0) = 1/3^τ ≪ 1/3 = P(Ŷ^2_1 = 0, . . . , Ŷ^2_τ = 0).

We can say that agent 1 attributes all uncertainty to aleatoric sources and agent 2, epistemic sources (although as Figure 1 alludes, there are many ways an agent can attribute sources of uncertainty).
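A minimal numerical sketch of this example (ours; plain Python, with an illustrative horizon) confirms that the marginals coincide while the joint predictions diverge:

```python
tau = 10  # illustrative joint-prediction horizon

# Agent 1: every flip is pure chance with P(heads) = 2/3, outcomes independent.
marginal_tails_1 = 1 / 3
joint_all_tails_1 = (1 / 3) ** tau

# Agent 2: the coin is fully biased; P(always tails) = 1/3, P(always heads) = 2/3.
marginal_tails_2 = 1 / 3
joint_all_tails_2 = 1 / 3                      # independent of tau

print(marginal_tails_1 == marginal_tails_2)    # True: marginals are identical
print(joint_all_tails_1, joint_all_tails_2)    # ~1.7e-05 vs 0.333...: joints differ greatly
```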
Evaluating marginal predictions cannot distinguish between the two possibilities, though for a specific prior distribution over p, one agent could be right and the other wrong. One must evaluate joint predictions to make this distinction. 2We consider this coin as an illustrative model of more complex binary outcomes, such as whether a user will click on an ad, or whether a given mortgage will default on payments. When it comes to decision-making, this distinction can be critical (Lu et al., 2021). In a casino, under the first agent's assumption, there is large upside and little risk on repeatedly betting on heads in the long run. However, if there is a 1/3 chance the coin will always land tails, as is the case in the second agent's prediction, there is a ruinous risk to repeatedly betting heads. Evaluating joint predictions beyond marginals distinguishes these cases.

2.3 Computation of Kullback–Leibler loss

In contexts we will consider, it is not possible to compute d^τ_KL exactly. As such, we will approximate d^τ_KL via Monte Carlo simulation. This section provides a high-level overview of our approach; we push the full details to Appendix A.

Algorithm 1 outlines a basic approach to estimating d^τ_KL with respect to a synthetic data generating process. The algorithm samples a set of environments and a training dataset for each environment. For each of these pairs, the agent is re-initialized, trained, and then tested on N independent test data τ-samples. Note that each test data τ-sample includes τ data pairs. For each test data τ-sample, the likelihood of the environment is computed exactly, but that of the agent's belief distribution is approximated. The estimate of d^τ_KL is taken to be the sample mean of the log-likelihood-ratios (Algorithm 2).

Algorithm 1 KL-Loss Computation
1: for j = 1, 2, . . . , J do
2:   sample environment and training dataset, and train agent
3:   for n = 1, 2, . . . , N do
4:     sample a test data τ-sample with τ feature-label pairs
5:     compute p_{j,n}   ▷ likelihood of environment
6:     compute p̂_{j,n}   ▷ estimated likelihood of agent's belief distribution
7: return (1/JN) Σ_{j=1}^{J} Σ_{n=1}^{N} log(p_{j,n}/p̂_{j,n})   ▷ estimated log-likelihood-ratio

While the likelihood of an environment can be efficiently computed, that of an agent's belief distribution poses a computational challenge. One approach is to estimate this likelihood via Monte Carlo simulation (Algorithm 3). This produces unbiased estimates, which can be accurate when τ is small. However, maintaining accuracy requires the number of samples to grow exponentially with τ, as discussed in Appendix A.1. To overcome this challenge, we propose a novel approach that estimates the likelihood of the agent's beliefs via a combination of randomized partitioning and Monte Carlo simulation (Algorithm 4) (Kaski, 1998). We conjecture that, under suitable regularity conditions, this novel approach produces accurate estimates even when τ is large, but leave a formal analysis to future work.

Even though Algorithm 1 is developed for a synthetic data generating process, it is straightforward to extend it to evaluate agents on real data. We outline our approach to real data in Section 5.1, with full details in Appendix A.2.

3 Benchmark agents

In this section we outline the baseline agents that we use to benchmark canonical approaches to uncertainty estimation in deep learning. Table 1 links to papers that introduce these agents, as well as the hyperparameters that we tuned to optimize their performance via gridsearch.
In each case, we attempt to match ‘canonical’ implementations, which we open source at https://anonymous.4open.science/r/neural-testbed-B839. In addition to these agent implementations, our opensource project contains all the evaluation code to reproduce the results of this paper. Our code is written in Python and makes use of Jax internally (Bradbury et al., 2018). However, our evaluation procedure is framework agnostic, and can equally be used with any Python package including Tensorflow, Pytorch or even SKlearn. Over the course of this paper, we have made extensive use of parallel computation to facilitate large hyperparameter sweeps over many problems. Nevertheless, the overall computational cost is relatively low by modern deep learning standards and relies only on standard CPU. For reference, evaluating the mlp agent across all the problems in our testbed and real data requires less than 3 CPU-hours. We view our opensource effort as one of the major contributions of this work. We provide clear and strong baselines, together with an objective and accessible method for assessing uncertainty estimates beyond marginal distributions. 4 The Neural Testbed In this section we introduce the Neural Testbed, a system for assessing and comparing agent performance. The Testbed implements synthetic data generating processes and streamlines the process of sampling data, training agents, and evaluating test performance by estimating KL-loss for marginal and high-order joint predictions. Since independent data can be generated for each execution, the Testbed can drive insight and multiple iterations of algorithm development without risk of overfitting to a fixed dataset. We begin by describing the simple generative model based around a random 2-layer MLP. We then apply this testbed to evaluate a comprehensive set of benchmark agents. 4.1 Synthetic data generating processes By data generating process, we do not mean only the conditional distribution of data pairs (Xt, Yt+1)|E but also the distribution of the environment E . The Testbed considers 2- dimensional inputs and binary classification problems, although the generating processes can be easily extended to any input dimension and number of classes. The Testbed offers three data generating processes distinguished by a “temperature” setting, which signifies the signal-to-noise ratio (SNR) regime of the generated data. The agent can be tuned separately for each setting. This reflects prior knowledge of whether the agent is operating in a high SNR regime such as image recognition or a low SNR regime such as weather forecasting. To generate a model, the Testbed samples a 2-hidden-layer ReLU MLP with 2 output units, which are scaled by 1/temperature and passed through a softmax function to produce class probabilities. The MLP is sampled according to standard Xavier initialization (Glorot & Bengio, 2010), with the exception that biases in the first layer are drawn from N(0, 12 ). The inputs (Xt : t = 0, 1, . . .) are drawn i.i.d. from N(0, I). The agent is provided with the data generating process as prior knowledge. In Section 2.1, we described KL-loss as a metric for evaluating performance of an agent. The Neural Testbed estimates KL-loss, with τ ∈ {1, 100}, for three temperature settings and several training dataset sizes. For each value of τ , the KL-losses are averaged to produce an aggregate performance measure. Further details concerning data generation and agent evaluation are offered in Appendix A. 
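As a concrete illustration of this generative process, the following sketch (ours, in NumPy) samples an environment and data in its spirit; the hidden-layer width, the first-layer bias scale and the label-sampling loop are illustrative choices rather than the exact Testbed configuration:

```python
import numpy as np

def sample_environment(input_dim=2, hidden=50, classes=2, temperature=0.1,
                       bias_scale=1.0, rng=None):
    """Sample a random 2-hidden-layer ReLU MLP that defines P(y | x)."""
    rng = np.random.default_rng() if rng is None else rng

    def glorot(n_in, n_out):  # Xavier/Glorot-style normal initialization
        return rng.normal(0.0, np.sqrt(2.0 / (n_in + n_out)), size=(n_in, n_out))

    W1, b1 = glorot(input_dim, hidden), rng.normal(0.0, bias_scale, size=hidden)
    W2, b2 = glorot(hidden, hidden), np.zeros(hidden)
    W3, b3 = glorot(hidden, classes), np.zeros(classes)

    def class_probs(x):
        h = np.maximum(x @ W1 + b1, 0.0)
        h = np.maximum(h @ W2 + b2, 0.0)
        logits = (h @ W3 + b3) / temperature        # lower temperature = higher SNR
        logits -= logits.max(axis=-1, keepdims=True)
        p = np.exp(logits)
        return p / p.sum(axis=-1, keepdims=True)

    return class_probs

def sample_data(class_probs, n, input_dim=2, rng=None):
    """Draw inputs from N(0, I) and labels from the sampled environment."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.normal(size=(n, input_dim))
    P = class_probs(X)
    Y = np.array([rng.choice(P.shape[1], p=p) for p in P])
    return X, Y
```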
4.2 Performance in marginal predictions We begin our evaluation of benchmark approaches to Bayesian deep learning in marginal predictions (τ = 1). This setting has been the main focus of the Bayesian deep learning literature. Despite this focus, it is surprising to see in Figure 2 that none of the benchmark methods significantly outperform a well-tuned MLP baseline according to d1KL. Of course, there are many other metrics one might consider, but in this fundamental metric of prediction quality, the mlp agent presents a baseline that is difficult to outperform. One of the keys to this result is that all of the agents are able to tune their hyperparameters, such as L2 weight decay, to the SNR regime and number of training points. This matches the way deep learning systems are typically implemented in practice, with extensive hyperparameter tuning on validation data. This methodology has led many practitioners to doubt the usefulness of automatic tuning procedures such as bootstrap sampling (Nixon et al., 2020). In Figure 3, we compare the performance of an ensemble+ agent that uses bootstrapping with and without the ability to tune the hyperparameters per problem setting. We see that bootstrap sampling is beneficial when the agent is expected to work robustly over a wide range of problem settings. However, the benefits are no longer apparent when the agent is allowed to tune its hyperparameters to individual tasks. 4.3 Performance beyond marginals One of the key contributions of this paper is to evaluate predictive distributions beyond marginals. In Figure 2, the red bars show the results of benchmark agents evaluated on joint predictive distributions with τ = 100. Unlike when evaluating on marginal predictions, where no method significantly outperforms a well-tuned MLP, the potential benefits afforded by Bayesian deep learning become clear when examining higher-order predictive distributions. Our results refute prior works’ claims that examining dτKL beyond marginals provides little new information (Wang et al., 2021). Figure 2 shows that sgmcmc is the top-performing agent overall. This should be reassuring to the Bayesian deep learning community and beyond. In the limit of large compute this agent should recover the ‘gold-standard’ of Bayesian inference, and it does indeed perform best (Welling & Teh, 2011). However, some of the most popular approaches in this field (ensemble, dropout) do not actually provide good approximations to the predictive distribution in τ = 100. In fact, we see that even though Bayesian purists may deride ensemble+ and hypermodels as ‘not really Bayesian’, these methods actually provide much better approximations to the Bayesian posterior than ‘fully Bayesian’ VI approaches like bbb. We note too that while sgmcmc performs best, it also requires orders of magnitude more computation than competitive methods even in this toy setting (see Appendix C.2). As we scale to more complex environments, it may therefore be worthwhile to consider alternative approaches to approximate Bayesian inference. For insight into where our top agents are able to outperform, we compare ensemble and ensemble+ under the medium SNR regime in Figures 4 and 5. These methods are identical, except for the addition of a randomized prior function (Osband et al., 2018). 
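A minimal sketch of that construction (ours; framework-agnostic NumPy rather than the opensource JAX implementation): each ensemble+ member combines a trainable network with a fixed, randomly initialized prior network whose output is added at a scale beta.

```python
import numpy as np

class PriorEnsembleMember:
    """One ensemble+ member: trainable net f_theta plus a frozen random prior p.

    Predictions use f_theta(x) + beta * p(x); gradients (not shown) are taken
    with respect to the trainable network only, the prior is never updated.
    """

    def __init__(self, trainable_net, prior_net, beta=1.0):
        self.trainable_net = trainable_net   # callable x -> logits, updated by SGD
        self.prior_net = prior_net           # callable x -> logits, kept fixed
        self.beta = beta                     # prior scale

    def logits(self, x):
        return self.trainable_net(x) + self.beta * self.prior_net(x)

def ensemble_class_probs(members, x):
    """Average the softmax predictions of all ensemble members."""
    probs = []
    for m in members:
        z = m.logits(x)
        z = z - z.max(axis=-1, keepdims=True)
        p = np.exp(z)
        probs.append(p / p.sum(axis=-1, keepdims=True))
    return np.mean(probs, axis=0)
```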
Figure 4 shows that, although these methods perform similarly in the quality of their marginal predictions (τ = 1), the addition of a prior function greatly improves the quality of joint predictive distributions (τ = 100) in the low data regime. Figure 5 provides additional intuition into how the randomized prior functions are able to drive improved performance. Figure 5a shows a sampled generative model from our Testbed, with the training data shown in red and blue circles. Figure 5b shows the mean predictions and 4 randomly sampled ensemble members from each agent (top=ensemble, bottom=ensemble+). We see that, although the agents mostly agree in their mean predictions, ensemble+ produces more diverse sampled outcomes enabled by the addition of randomized prior functions. In contrast, ensemble produces similar samples, which may explain why its performance is close to baseline mlp. 5 Performance on real data Section 4 provides a simple, sanitized testbed for clear insight to the efficacy of Bayesian deep learning techniques. However, most deep learning research is not driven by these sorts of synthetic generative models, but the ultimate goal of performing well on real datasets. In this section, we apply the same benchmark agents to a selection of small challenge datasets. We find that, on average, tuning agents for the synthetic problems leads to better performance on real data. We also find that, just as the synthetic testbed, agents that perform similarly in marginal predictions may be distinguished in the quality of their joint predictions. 5.1 Datasets We focus on 10 benchmark datasets (3 feature-based, 7 image from pixels) drawn from the literature including Iris, MNIST, and CIFAR-10 (TFD). This collection is not intended to be comprehensive, or to include the most challenging large-scale problems, but instead to represent some canonical real-world data that might reasonably be addressed with the MLP models of Section 4.1. We apply a basic pre-processing step to each dataset, normalizing input features and flattening observations. We push full details to Appendix D.1. To assess performance in real datasets, we follow a similar procedure as Algorithm 1. The only difference is that since it is impossible to compute the likelihood of environment for real datasets, we compute the negative log-likelihood (NLL) rather than dτKL. Appendix A.2 provides further details. Note that NLL and dτKL are equivalent for agent comparison since they differ by a constant (see Equation 1). Furthermore, to allow for more direct comparison with the synthetic testbed, we also consider variants of each dataset where the number of training pairs is limited to less than the ‘full’ dataset size. 5.2 Synthetic data is predictive of real data Recall that Figure 2 compares performance across an array of agents, assessed using our synthetic data generating process. Each agent’s hyperparameters were tuned by first enumerating a list of plausibly near-optimal choices and selecting the one that optimizes performance. Each of our real-world datasets can be viewed as generated by an environment sampled from an alternative data generating process. A natural question is whether performance on the synthetic data correlates with performance on the real-world data. The table of Figure 6a displays results pertaining to each of our agents. For each agent, performance for each candidate hyperparameter setting was assessed on synthetic and real data, and the correlation across these pairs is reported. 
The left and right columns restrict attention to datasets with low and high volumes of training data, respectively. If a correlation were equal to 1, the hyperparameter setting that optimizes agent performance on real data would be identical to that on synthetic data. It is reassuring that the correlations are high, reflecting a strong degree of alignment, with the exception of bbb in low data regime, for which there appear to be pathological outcomes distorting performance for small training sets. The values in parentheses express 5th and 95th percentile confidence bounds, measured via the statistical bootstrap. Figure 6b plots performance on real versus synthetic data for the high data regime. Each data point represents one agent-hyperparameter combination. If the correlation were equal to 1, the combination that performs best on the synthetic data would also perform best on the real data. It is reassuring that the correlation is large, and the confidence interval between the 5th and 95th percentiles small. Agent-hyperparameter combinations that perform better on the testbed tend to perform better on real data as well. 5.3 Higher order predictions and informative priors Our synthetic testbed can be helpful in driving innovations that carry over to real data. Section 5.2 indicated that performance on the Testbed is correlated with that on realworld data. We now repeat the observation from Figure 4 on real data; additive prior functions can significantly improve the accuracy of joint predictive distributions generated by ensembles. We show this by comparing the performance of ensemble+ with different forms of prior functions on benchmark datasets. We evaluate an ensemble with no prior function (none), a random MLP prior (MLP), and a random linear function of a 2-dimensional latent representation as the prior, trained via variational autoencoder (VAE) (Kingma & Welling, 2014). We provide full details in Appendix D.3. Figure 7 plots the improvement in NLL for the ensemble agent relative to a baseline MLP (lower is better), and breaks out the result for datasets=MNIST,Iris and τ = 1, 100. We can see that the results for Iris mirror our synthetic data almost exactly. The results for MNIST share some qualitative insights, but also reveal some important differences. For Iris τ = 1 none of the methods outperform the MLP baseline, but for τ = 100 we see significant benefits to an additive MLP prior in the low data regime. For MNIST τ = 1 we actually see benefits to ensembles, even without prior functions and even in the high data regime. This reveals some aspects of this real data that are not captured by our synthetic model, where we did not see this behaviour. For τ = 100 the random MLP prior gives a slight advantage, but the effect is much less pronounced. We hypothesize this is because, unlike the testbed, the MLP prior is not well-matched to the input image data. However, the VAE prior is able to provide significant benefit in the low data regime.3 These benefits also carry over to Iris, even where random MLPs already provided signficant value. Designing architectures that offer useful priors for learning agents is an exciting area for future work. 6 Conclusion This paper highlights the need to evaluate predictive distributions beyond marginals. In addition to this conceptual contribution, we develop a suite of practical computational tools that can evaluate diverse approaches to uncertainty estimation. 
Together with these tools, we provide a neural-network-based data generating process that facilitates research and iteration beyond a small set of challenge datasets. We package these together as The Neural Testbed, including a variety of baseline agent implementations. We believe that this represents an exciting and valuable new benchmark for Bayesian deep learning and beyond. We have already used this testbed to generate several new insights in this paper. We have shown many popular Bayesian deep learning approaches perform similarly in marginal predictions but quite differently in joint predictions. We reveal the importance of bootstrapping for parameter robustness, and also help reconcile the observed lack of improvement when tuned to specific datasets. We have shown that these insights from synthetic data can carry over to real datasets; that performance in these settings is correlated, that agents with similar marginal predictions can be distinguished by their joint predictions, and that suitable prior functions can play an important role in driving good performance.

The results in this paper are in some sense preliminary. The grand challenge for Bayesian deep learning is to provide effective uncertainty estimates in large, rich datasets. While we have demonstrated benefits to predictive evaluation beyond marginals only in the ‘low data’ regime and small-scale problems, we believe that they will extend more broadly to situations where new test inputs appear novel relative to training data. As such, we believe our core insights should carry over to the related problems of nonstationarity and covariate shift that plague modern deep learning systems. As an agent takes on more and more complex tasks, it will continue to run into new and unfamiliar settings and uncertain outcomes; as such, effective predictive distributions will be more important than ever.

3 We hypothesize that appropriately initialized convnet architectures may be able to leverage image structure as noted in prior work (Ulyanov et al., 2018).

A Testbed Pseudocode

We present the testbed pseudocode in this section. Specifically, Algorithm 2 is the pseudocode for our neural testbed, and Algorithms 3 and 4 are two different approaches to estimate the likelihood of a test data τ-sample conditioned on an agent’s belief. Algorithm 3 is based on standard Monte Carlo estimation, while Algorithm 4 adopts a random partitioning approach. The presented testbed pseudocode works for any prior P(E ∈ ·) over the environment and any input distribution P_X, including the ones described in Section 4.1. We also release full code and implementations at https://anonymous.4open.science/r/neural-testbed-B839. In addition to presenting the testbed pseudocode, we also discuss some core technical issues in the neural testbed design. Specifically, Appendix A.1 discusses how to estimate the likelihood of an agent’s belief distribution; Appendix A.2 discusses how to extend the testbed to agent evaluation on real data; finally, Appendix A.3 explains our choices of experiment parameters.

Algorithm 2 Neural Testbed
Require: the testbed requires the following inputs
1. prior distribution over the environment P(E ∈ ·), input distribution P_X
2. agent f_θ
3. number of training data T, test distribution order τ
4. number of sampled problems J, number of test data samples N
5. parameters for agent likelihood estimation, as specified in Algorithms 3 and 4
for j = 1, 2, . . . , J do
  Step 1: sample environment and training data
    1. sample environment E ∼ P(E ∈ ·)
    2. sample T inputs X_0, X_1, . . . , X_{T−1} i.i.d. from P_X
    3. sample the training labels Y_1, . . . , Y_T conditionally i.i.d. as Y_{t+1} ∼ P(Y ∈ · | E, X = X_t) for all t = 0, 1, . . . , T − 1
    4. choose the training dataset as D_T = {(X_t, Y_{t+1}), t = 0, . . . , T − 1}
  Step 2: train agent
    train agent f_{θ_T} based on training dataset D_T
  Step 3: compute likelihoods
  for n = 1, 2, . . . , N do
    1. sample X^{(n)}_T, . . . , X^{(n)}_{T+τ−1} i.i.d. from P_X
    2. generate Y^{(n)}_{T+1}, . . . , Y^{(n)}_{T+τ} conditionally independently as Y^{(n)}_{t+1} ∼ P(Y ∈ · | E, X = X^{(n)}_t) for all t = T, T + 1, . . . , T + τ − 1
    3. compute the likelihood under the environment E as p_{j,n} = P(Y^{(n)}_{T+1:T+τ} | E, X^{(n)}_{T:T+τ−1}) = ∏_{t=T}^{T+τ−1} P(Y^{(n)}_{t+1} | E, X^{(n)}_t)
    4. estimate the likelihood conditioned on the agent’s belief, p̂_{j,n} ≈ P(Ŷ_{T+1:T+τ} = Y^{(n)}_{T+1:T+τ} | θ_T, X^{(n)}_{T:T+τ−1}), based on Algorithm 3 or 4 with test data τ-sample (X^{(n)}_{T:T+τ−1}, Y^{(n)}_{T+1:T+τ})
return (1/(JN)) ∑_{j=1}^{J} ∑_{n=1}^{N} log(p_{j,n} / p̂_{j,n})

Algorithm 3 Monte Carlo Estimation of Likelihood of Agent’s Belief
Require:
1. trained agent f_{θ_T} and number of Monte Carlo samples M
2. test data τ-sample (X_{T:T+τ−1}, Y_{T+1:T+τ})
Step 1: sample M models Ê_1, . . . , Ê_M conditionally i.i.d. from P(Ê ∈ · | f_{θ_T})
Step 2: estimate p̂ as p̂ = (1/M) ∑_{m=1}^{M} P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1})
return p̂

Algorithm 4 Estimation of Likelihood of Agent’s Belief via Random Partitioning
Require:
1. trained agent f_{θ_T}
2. number of Monte Carlo samples M
3. number of hyperplanes d
4. test data τ-sample (X_{T:T+τ−1}, Y_{T+1:T+τ})
Step 1: sample M models Ê_1, . . . , Ê_M conditionally i.i.d. from P(Ê ∈ · | f_{θ_T}); for each model m = 1, . . . , M, class k, and t = T, . . . , T + τ − 1, define p_{m,t,k} = P(Ŷ^{(m)}_{t+1} = k | Ê_m, X_t) and ℓ_{m,t,k} = Φ^{−1}(p_{m,t,k}), where Φ(·) is the CDF of the standard normal distribution. For each model m, stack the probits into a vector ℓ_m = [ℓ_{m,T,1}, ℓ_{m,T,2}, . . . , ℓ_{m,T+τ−1,K}] ∈ R^{Kτ}
Step 2: sample a d × (Kτ) matrix A and a d-dimensional vector b, with each element/component sampled i.i.d. from N(0, 1). For each m = 1, . . . , M, compute ψ_m = 1[Aℓ_m + b ≥ 0] ∈ {0, 1}^d
Step 3: partition the sampled models, with each cell indexed by ψ ∈ {0, 1}^d and defined by M_ψ = {m : ψ_m = ψ}, and assign a probability to each cell: q_ψ = |M_ψ| / M
Step 4: for all ψ ∈ {0, 1}^d and all t = T, T + 1, . . . , T + τ − 1, estimate the probability of predicting Ŷ_{t+1} = k conditioned on the cell: p_{ψ,t,k} = (1/|M_ψ|) ∑_{m ∈ M_ψ} p_{m,t,k} if |M_ψ| > 0, and p_{ψ,t,k} = 1 if |M_ψ| = 0
Step 5: estimate P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | θ_T, X_{T:T+τ−1}) as p̂ = ∑_{ψ ∈ {0,1}^d} q_ψ ∏_{t=T}^{T+τ−1} p_{ψ,t,Y_{t+1}}
return p̂

A.1 Estimating Likelihood of Agent’s Belief Distribution

We have presented two algorithms to estimate the likelihood of a test data τ-sample conditioned on a trained agent: Algorithm 3 is based on standard Monte Carlo estimation, while Algorithm 4 adopts an approach combining random partitioning and Monte Carlo estimation. In this subsection, we briefly discuss the pros and cons of these two algorithms, and provide some general guidelines on how to choose between them.

Algorithm 3 produces unbiased estimates of the likelihoods, which are usually accurate when τ is small (e.g. for τ ≤ 10). However, maintaining accuracy might require the number of samples M to grow exponentially with τ. The following is an illustrative example.

Example 1 (Uniform belief over deterministic models): Consider a scenario where the number of class labels is K = 2. We say a model Ê is deterministic if for any feature vector X_t, P(Ŷ_{t+1} = 1 | Ê, X_t) ∈ {0, 1}. Obviously, for any test data τ-sample (X_{T:T+τ−1}, Y_{T+1:T+τ}) with Y_{T+1:T+τ} ∈ {0, 1}^τ, under a deterministic model Ê we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê, X_{T:T+τ−1}) ∈ {0, 1}. When restricted to the inputs X_{T:T+τ−1}, there are 2^τ distinguishable deterministic models. Assume the agent’s belief distribution is uniform over these 2^τ distinguishable deterministic models; then for any Y_{T+1:T+τ} ∈ {0, 1}^τ, the likelihood of the agent’s belief distribution is P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | θ_T, X_{T:T+τ−1}) = 2^{−τ}. Now let’s consider Algorithm 3. When a model Ê_m is sampled from the agent’s belief distribution, with probability 2^{−τ} we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1}) = 1, and with probability 1 − 2^{−τ} we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1}) = 0. Consequently, in expectation, we need the number of Monte Carlo samples M = Ω(2^τ) to ensure that the estimate p̂ returned by Algorithm 3 is non-zero.

To overcome this challenge, we also propose a novel approach to estimate the likelihood of the agent’s belief via a combination of randomized partitioning and Monte Carlo simulation, as presented in Algorithm 4. This approach proceeds as follows. First, M models are sampled from the agent’s belief distribution. For each sampled model, each test data input X_t, and each class label k, a predictive probability p_{m,t,k} and its probit ℓ_{m,t,k} = Φ^{−1}(p_{m,t,k}) are computed, where Φ(·) is the CDF of the standard normal distribution. For each sampled model, we also stack its probits into a probit vector ℓ_m ∈ R^{Kτ}. Then, d random hyperplanes are sampled and used to partition R^{Kτ} into 2^d cells. Stacked probit vectors place models in cells. Predictive distributions of models in each cell are averaged, and the likelihood is calculated based on these averages, with each cell weighted according to the number of models it contains. The Neural Testbed applies Algorithm 4 with 2^d ≪ M. Hence, some cells are assigned many models. We conjecture that, under suitable regularity conditions, models assigned to the same cell tend to generate similar predictions. If this is the case, this algorithm produces accurate estimates even when τ is large. We leave a formal analysis to future work.

Finally, we briefly discuss how to choose between Algorithm 3 and Algorithm 4. As a rule of thumb, we recommend Algorithm 3 for τ < 10 and Algorithm 4, with the number of hyperplanes d between 5 and 10, for τ ≥ 10.

A.2 Agent Evaluation on Real Data

Algorithm 2 (and its simplified version Algorithm 1) is developed for a synthetic data generating process. We now discuss how to extend it to agent evaluation on real data. Consider a scenario with J real datasets, where each dataset is further partitioned into a training dataset and a test dataset. The main difference between this scenario and a synthetic data generating process is that we cannot compute the likelihood of the environment for real data. Thus, we compute the cross-entropy loss instead (see Equation 1). The computational approach is similar to Algorithm 1: for each real dataset, we use its training dataset to train an agent. Then, we sample N test data τ-samples from the test dataset, and estimate the likelihoods of the agent’s belief distribution. The estimate of the cross-entropy loss is taken to be the sample mean of the negative log-likelihoods.
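To make the two estimators of Appendix A.1 concrete, here is a minimal NumPy/SciPy sketch of Algorithms 3 and 4 operating on an array probs[m, t, k] of per-model class probabilities at the τ test inputs. The array shapes, clipping constant, and function names are our own illustrative choices under stated assumptions, not the open-source implementation.

import numpy as np
from scipy.stats import norm

def mc_likelihood(probs, y):
    # Algorithm 3 (sketch): plain Monte Carlo estimate.
    # probs: [M, tau, K] per-model class probabilities at the test inputs.
    # y:     [tau] integer array of observed test labels.
    t_idx = np.arange(probs.shape[1])
    per_model = probs[:, t_idx, y].prod(axis=1)   # P(Y_{T+1:T+tau} | sampled model)
    return per_model.mean()

def partition_likelihood(probs, y, num_hyperplanes=7, rng=None):
    # Algorithm 4 (sketch): random-partitioning estimate.
    if rng is None:
        rng = np.random.default_rng(0)
    m, tau, k = probs.shape
    # Probits, clipped away from 0 and 1 so the inverse CDF stays finite.
    probits = norm.ppf(np.clip(probs, 1e-6, 1 - 1e-6)).reshape(m, tau * k)
    # d random hyperplanes in R^{K*tau} assign each model a binary cell code.
    a = rng.normal(size=(num_hyperplanes, tau * k))
    b = rng.normal(size=num_hyperplanes)
    codes = (probits @ a.T + b >= 0.0).astype(int)      # [M, d]
    keys = codes @ (1 << np.arange(num_hyperplanes))     # integer cell index
    t_idx = np.arange(tau)
    estimate = 0.0
    for cell in np.unique(keys):
        members = probs[keys == cell]       # models that landed in this cell
        weight = members.shape[0] / m       # q_psi = |M_psi| / M
        cell_probs = members.mean(axis=0)   # averaged predictive distribution
        estimate += weight * cell_probs[t_idx, y].prod()
    return estimate

With the Testbed settings described below (M = 1000 sampled models and d = 7 hyperplanes), 2^d ≪ M, so most occupied cells contain many models.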
Note that when ranking agents, the cross-entropy loss and d^τ_KL will lead to the same ordering of agents, since these two losses differ by a constant independent of the agent (see Equation 1).

A.3 Choices of Experiment Parameters

To apply Algorithm 2, we need to specify an input distribution P_X and a prior distribution on the environment P(E ∈ ·). Recall from Section 4.1 that we consider binary classification problems with input dimension 2. We choose P_X = N(0, I), and we consider three environment priors distinguished by a temperature parameter that controls the signal-to-noise ratio (SNR) regime. We sweep over temperatures in {0.01, 0.1, 0.5}. The prior distribution P(E ∈ ·) is induced by a distribution over MLPs with 2 hidden layers and ReLU activation. The MLP is distributed according to standard Xavier initialization, except that biases in the first layer are drawn from N(0, 1/2). The MLP outputs two units, which are divided by the temperature parameter and passed through the softmax function to produce class probabilities. The implementation of this generative model is in our open source code under the path /generative/factories.py.

We now describe the other parameters we use in the Testbed. In Algorithm 2, we pick the order of predictive distributions τ ∈ {1, 100}, training dataset size T ∈ {1, 3, 10, 30, 100, 300, 1000}, number of sampled problems J = 10, and number of testing data τ-samples N = 1000. We apply Algorithm 3 for evaluation of d^1_KL and Algorithm 4 for evaluation of d^100_KL. In both Algorithms 3 and 4, we sample M = 1000 models from the agent. In Algorithm 4, we set the number of hyperplanes d = 7. The specification of the testbed parameters is in our open source code under the path /leaderboard/sweep.py. On real datasets, we apply the same τ ∈ {1, 100}, N = 1000, and M = 1000. We set the number of hyperplanes d = 10 in Algorithm 4.

B Agents

In this section, we describe the benchmark agents in Section 3 and the choice of various hyperparameters used in the implementation of these agents. The list of agents includes MLP, ensemble, dropout, Bayes by backprop, stochastic Langevin MCMC, ensemble+ and hypermodel. We also implemented other agents such as KNN, random forest, and deep kernel, but the performance of these agents was worse than the other benchmark agents, so we chose not to include them in the comparison in Section 4. In each case, we attempt to match the “canonical” implementation. The complete implementation of these agents, including the hyperparameter sweeps used for the Testbed, is available at https://anonymous.4open.science/r/neural-testbed-B839. We make use of the Epistemic Neural Networks notation from (Osband et al., 2021) in our code. We set the default hyperparameters of each agent to be the ones that minimize the aggregated KL score d^agg_KL = d^1_KL + d^100_KL / 100.

B.1 MLP

The mlp agent learns a 2-layer MLP with 50 hidden units in each layer by minimizing the cross-entropy loss with L2 weight regularization. The L2 weight decay scale is chosen to be either λ/T or λ d√β / T, where d is the input dimension, β is the temperature of the generative process, and T is the size of the training dataset. We sweep over λ ∈ {10^−4, 10^−3, 10^−2, 10^−1, 1, 10, 100}. We implement the MLP agent as a special case of a deep ensemble (B.2). The implementation and hyperparameter sweeps for the mlp agent can be found in our open source code, as a special case of the ensemble agent, under the path /agents/factories/ensemble.py.
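As a small illustration of these hyperparameter conventions, the snippet below computes the two candidate L2 weight-decay scales from B.1 and the aggregate score d^agg_KL used to pick defaults. The function names and example values are ours; this is only a sketch of the convention, not the sweep code itself.

def weight_decay_options(lam, train_size, input_dim, temperature):
    # The two L2 weight-decay scalings described for the mlp agent (B.1):
    # lambda / T and lambda * d * sqrt(beta) / T.
    return {
        "lambda_over_T": lam / train_size,
        "lambda_d_sqrt_beta_over_T": lam * input_dim * temperature ** 0.5 / train_size,
    }

def aggregate_kl(d_kl_1, d_kl_100):
    # Aggregate score used to choose default hyperparameters: d^1_KL + d^100_KL / 100.
    return d_kl_1 + d_kl_100 / 100.0

# Example: the lambda grid from B.1 on a problem with T = 100, d = 2, temperature beta = 0.1.
sweep = [weight_decay_options(lam, train_size=100, input_dim=2, temperature=0.1)
         for lam in (1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0)]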
B.2 Ensemble

We implement the basic “deep ensembles” approach for posterior approximation (Lakshminarayanan et al., 2017). The ensemble agent learns an ensemble of MLPs by minimizing the cross-entropy loss with L2 weight regularization. The only difference between the ensemble members is their independently initialized network weights. We chose the L2 weight decay scale to be either λ/(MT) or λ d√β / (MT), where M is the ensemble size, d is the input dimension, β is the temperature of the generative process, and T is the size of the training dataset. We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10^−4, 10^−3, 10^−2, 10^−1, 1, 10, 100}. We find that larger ensembles work better, but this effect is within margin of error after 10 elements. The implementation and hyperparameter sweeps for the ensemble agent can be found in our open source code under the path /agents/factories/ensemble.py.

B.3 Dropout

We follow Gal & Ghahramani (2016) to build a dropout agent for posterior approximation. The agent applies dropout on each layer of a fully connected MLP with ReLU activation and optimizes the network using the cross-entropy loss combined with L2 weight decay. The L2 weight decay scale is chosen to be either l²(1 − p_drop)/(2T) or d√β l / T, where p_drop is the dropping probability, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We sweep over dropout rate p_drop ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, length scale (used for L2 weight decay) l ∈ {0.01, 0.1, 0.3, 1, 3, 10}, number of neural network layers ∈ {2, 3}, and hidden layer size ∈ {50, 100}. The implementation and hyperparameter sweeps for the dropout agent can be found in our open source code under the path /agents/factories/dropout.py.

B.4 Bayes-by-backprop

We follow Blundell et al. (2015) to build a bbb agent for posterior approximation. We consider a scale mixture of two zero-mean Gaussian densities as the prior. The Gaussian densities have standard deviations σ_1 and σ_2, and they are mixed with probabilities p and 1 − p, respectively. We sweep over σ_1 ∈ {1, 2, 4}, σ_2 ∈ {0.25, 0.5, 0.75}, p ∈ {0, 0.25, 0.5, 0.75, 1}, learning rate ∈ {10^−3, 3 × 10^−3}, number of training steps ∈ {500, 1000, 10000}, number of neural network layers ∈ {2, 3}, hidden layer size ∈ {50, 100}, and the ratio of the complexity cost to the likelihood cost ∈ {1, d√β}, where d is the input dimension and β is the temperature of the data generating process. The implementation and hyperparameter sweeps for the bbb agent can be found in our open source code under the path /agents/factories/bbb.py.

B.5 Stochastic gradient Langevin dynamics

We follow Welling & Teh (2011) to implement a sgmcmc agent using stochastic gradient Langevin dynamics (SGLD). We consider two versions of SGLD, one with momentum and the other without momentum. We consider an independent Gaussian prior on the neural network parameters, where the prior variance is set to be σ² = λ T/(dβ), where λ is a hyperparameter that is swept over {0.01, 0.1, 0.5, 1}, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We consider a constant learning rate that is swept over {10^−5, 5 × 10^−5, 10^−4, 5 × 10^−4, 10^−3, 5 × 10^−3, 10^−2}. For SGLD with momentum, the momentum decay term is always set to be 0.9. The number of training batches is 5 × 10^5, with a burn-in time of 10^5 training batches.
We save a model every 1000 steps after the burn-in time and use these models as an ensemble during the evaluation. The implementation and hyperparameter sweeps for the sgmcmc agent can be found in our open source code under the path /agents/factories/sgmcmc.py.

B.6 Ensemble+

We implement the ensemble+ agent using deep ensembles with randomized prior functions (Osband et al., 2018) and bootstrap sampling (Osband & Van Roy, 2015). Similar to the vanilla ensemble agent in Section B.2, we consider the L2 weight decay scale to be either λ/(MT) or λ d√β / (MT). We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10^−4, 10^−3, 10^−2, 10^−1, 1, 10, 100}. The randomized prior functions are sampled exactly from the data generating process, and we sweep over prior scaling ∈ {0, √β, 1}. In addition, we sweep over bootstrap type (none, exponential, bernoulli). We find that the addition of randomized prior functions is crucial for improvement in performance over vanilla deep ensembles in terms of the quality of joint predictions. We also find that bootstrap sampling improves agent robustness, although the advantage is less apparent when one is allowed to tune the L2 weight decay for each task (see Figure 3). The implementation and hyperparameter sweeps for the ensemble+ agent can be found in our open source code under the path /agents/factories/ensemble_plus.py.

B.7 Hypermodel

We follow Dwaracherla et al. (2020) to build a hypermodel agent for posterior approximation. We consider a linear hypermodel over a 2-layer MLP base model. We sweep over index dimension ∈ {1, 3, 5, 7}. The L2 weight decay is chosen to be either λ/T or λ d√β / T with λ ∈ {0.1, 0.3, 1, 3, 10}, where d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We sweep over three different bootstrapping methods: none, exponential, and bernoulli. We use an additive prior, which is a linear hypermodel prior over an MLP base model similar to the generating process, with the number of hidden layers in {1, 2}, 10 hidden units in each layer, and prior scale from {0, √β, 1}. The implementation and hyperparameter sweeps for the hypermodel agent can be found in our open source code under the path /agents/factories/hypermodel.py.

B.8 Non-parametric classifiers

K-nearest neighbors (k-NN) (Cover & Hart, 1967) and random forest classifiers (Friedman, 2017) are simple and cheap off-the-shelf non-parametric baselines (Murphy, 2012; Pedregosa et al., 2011). The ‘uncertainty’ in these classifiers arises merely from the fact that they produce distributions over the labels, and as such we do not expect them to perform well relative to more principled approaches. Moreover, these methods have no capacity to model d^τ_KL for τ > 1. For the knn agent we swept over the number of neighbors k ∈ {1, 5, 10, 30, 50, 100} and the weighting of the contribution of each neighbor as either uniform or based on distance. For the random forest agent we swept over the number of trees in the forest {10, 100, 1000}, and the splitting criterion, which was either the Gini impurity coefficient or the information gain. To prevent infinite values in the KL we truncate the probabilities produced by these classifiers to be in the interval [0.01, 0.99]. The implementation and hyperparameter sweeps for the knn and random forest agents can be found in our open source code under the paths /agents/factories/knn.py and /agents/factories/random_forest.py.
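Returning to the ensemble+ agent of B.6, the randomized prior function is the key architectural difference from the vanilla ensemble: each member adds a fixed, untrained function to its trainable network. The sketch below shows one way to express this; the class and argument names are illustrative assumptions rather than the Epistemic Neural Networks API.

import numpy as np

class EnsemblePlusMember:
    # One ensemble+ member: trainable net plus a fixed, untrained prior function.
    # `trainable` and `prior` are any callables mapping inputs to logits; only the
    # trainable part receives gradient updates during training.
    def __init__(self, trainable, prior, prior_scale=1.0):
        self.trainable = trainable
        self.prior = prior                 # frozen randomized prior function
        self.prior_scale = prior_scale

    def logits(self, x):
        return self.trainable(x) + self.prior_scale * self.prior(x)

def ensemble_class_probs(members, x):
    # Average the softmax outputs of all ensemble members.
    all_probs = []
    for member in members:
        z = member.logits(x)
        z = z - z.max(axis=-1, keepdims=True)
        p = np.exp(z)
        all_probs.append(p / p.sum(axis=-1, keepdims=True))
    return np.mean(all_probs, axis=0)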
B.9 Gaussian process with learned kernel

A neural network takes input X_t ∈ X and produces output Z_{t+1} = W φ_θ(X_t) + b ∈ R^K, where W ∈ R^{K×m} is a matrix, b ∈ R^K is a bias vector, and φ_θ : X → R^m is the output of the penultimate layer of the neural network. In the case of classification the output Z_{t+1} corresponds to the logits over the class labels, i.e., Ŷ_{t+1} ∝ exp(Z_{t+1}). The neural network should learn a function that maps the input into a space where the classes are linearly distinguishable. In other words, the mapping that the neural network is learning can be considered a form of kernel (Schölkopf & Smola, 2018), where the kernel function k : X × X → R is simply k(X, X′) = φ_θ(X)⊤ φ_θ(X′). With this in mind, we can take a trained neural network and consider the learned mapping to be the kernel in a Gaussian process (GP) (Rasmussen, 2003), from which we can obtain approximate uncertainty estimates.

Concretely, let Φ_{0:T−1} ∈ R^{T×m} be the matrix corresponding to the vectors φ_θ(X_t), t = 0, . . . , T − 1, stacked row-wise, and let Φ_{T:T+τ−1} ∈ R^{τ×m} denote the same quantity for the test set. Fix index i ∈ {0, . . . , K − 1} to be a particular class index. A GP models the joint distribution over the dataset as a multivariate Gaussian, i.e.,

[Z^{(i)}_{1:T}; Z^{(i)}_{T+1:T+τ}] ∼ N( [μ^{(i)}_{1:T}; μ^{(i)}_{T+1:T+τ}], [σ²I + Φ_{0:T−1}Φ_{0:T−1}⊤, Φ_{0:T−1}Φ_{T:T+τ−1}⊤; Φ_{T:T+τ−1}Φ_{0:T−1}⊤, Φ_{T:T+τ−1}Φ_{T:T+τ−1}⊤] ),

where the covariance is written in 2 × 2 block form, σ > 0 models the noise in the training data measurement, and μ^{(i)}_{1:T}, μ^{(i)}_{T+1:T+τ} are the means under the GP. The conditional distribution is given by

P(Z^{(i)}_{T+1:T+τ} | Z^{(i)}_{1:T}, X_{0:T+τ−1}) = N(μ^{(i)}_{T+1:T+τ|1:T}, Σ_{T+1:T+τ|1:T}),

where

Σ_{T+1:T+τ|1:T} = Φ_{T:T+τ−1}Φ_{T:T+τ−1}⊤ − Φ_{T:T+τ−1}Φ_{0:T−1}⊤ (σ²I + Φ_{0:T−1}Φ_{0:T−1}⊤)^{−1} Φ_{0:T−1}Φ_{T:T+τ−1}⊤,

and rather than use the GP to compute μ^{(i)}_{T+1:T+τ|1:T} (which would not be possible since we do not observe the true logits) we just take it to be the output of the neural network when evaluated on the test dataset. The matrix being inverted in the expression for Σ_{T+1:T+τ|1:T} has dimension T × T, which may be quite large. We use the Sherman-Morrison-Woodbury identity to rewrite it as follows (Woodbury, 1950):

Σ_{T+1:T+τ|1:T} = Φ_{T:T+τ−1} (I − Φ_{0:T−1}⊤ (σ²I + Φ_{0:T−1}Φ_{0:T−1}⊤)^{−1} Φ_{0:T−1}) Φ_{T:T+τ−1}⊤ = σ² Φ_{T:T+τ−1} (σ²I + Φ_{0:T−1}⊤Φ_{0:T−1})^{−1} Φ_{T:T+τ−1}⊤,

which instead involves the inverse of an m × m matrix, which may be much smaller. If we perform a Cholesky factorization of the positive definite matrix (σ²I + Φ_{0:T−1}⊤Φ_{0:T−1}) = LL⊤, then samples for all logits simultaneously can be drawn by first sampling ζ ∈ R^{m×K}, with each entry drawn i.i.d. from N(0, 1), and then forming

Ŷ_{T+1:T+τ} ∝ exp(μ_{T+1:T+τ|1:T} + σ Φ_{T:T+τ−1} L^{−⊤} ζ).

The implementation and hyperparameter sweeps for the deep kernel agent can be found in our open source code under the path /agents/factories/deep_kernel.py.

B.10 Other agents

In our paper we have made a concerted effort to include representative and canonical agents across different families of Bayesian deep learning and adjacent research. In addition to these implementations, we performed extensive tuning to make sure that each agent was given a fair shot. However, with the proliferation of research in this area, it was not possible for us to evaluate all competing approaches. We hope that, by opensourcing the Neural Testbed, we can allow researchers in the field to easily assess and compare their agents to these baselines. For example, we highlight a few recent pieces of research that might be interesting to evaluate in our setting. Of course, there are many more methods to compare and benchmark.
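The sampling step at the end of B.9 reduces to a Cholesky factorization of an m × m matrix and a triangular solve. The following NumPy sketch illustrates this under stated assumptions; the function signature and variable names are ours, not the deep_kernel agent's code.

import numpy as np

def sample_deep_kernel_logits(phi_train, phi_test, mu_test, sigma, num_samples, rng):
    # Sketch of posterior logit sampling for the 'deep kernel' GP.
    # phi_train: [T, m] penultimate-layer features on the training inputs.
    # phi_test:  [tau, m] features on the test inputs.
    # mu_test:   [tau, K] network logits on the test inputs, used as the GP mean.
    # sigma:     observation-noise scale. Returns logits of shape [num_samples, tau, K].
    m = phi_train.shape[1]
    # Cholesky of the m x m matrix from the Woodbury form: sigma^2 I + Phi^T Phi = L L^T.
    chol = np.linalg.cholesky(sigma ** 2 * np.eye(m) + phi_train.T @ phi_train)
    # Phi_test @ L^{-T}: solve L A^T = Phi_test^T for A^T, then transpose.
    scale = np.linalg.solve(chol, phi_test.T).T          # [tau, m]
    zeta = rng.normal(size=(num_samples, m, mu_test.shape[1]))
    return mu_test[None] + sigma * np.einsum("tm,smk->stk", scale, zeta)

Class probabilities then follow by applying a softmax over the sampled logits.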
We leave this open as an exciting area for future research. • Neural Tangent Kernel Prior Functions (He et al., 2020). Proposes a specific type of prior function in ensemble+ inspired by connections to the neural tangent kernel. • Functional Variational Bayesian Neural Networks (Sun et al., 2019). Applies variational inference directly to the function outputs, rather than weights like bbb. • Variational normalizing flows (Rezende & Mohamed, 2015). Applies variational inference over a more expressive family than bbb. • No U-Turn Sampler (Hoffman et al., 2014). Another approach to sgmcmc that attempts to compute the posterior directly, computational costs can grow large. C Testbed results In this section, we provide the complete results of the performance of benchmark agents on the Testbed, broken down by the temperature setting, which controls the SNR, and the size of the training dataset. We select the best performing agent within each agent family and plot d1KL and d100KL with the performance of an MLP agent as a reference. We also provide a plot comparing the training time of different agents. C.1 Performance breakdown Figures 8 and 9 show the KL estimates evaluated on τ = 1 and τ = 100, respectively. For each agent, for each SNR regime, for each number of training points we plot the average KL estimate from the Testbed. In each plot, we include the “baseline” mlp agent as a black dashed line to allow for easy comparison across agents. A detailed description of these benchmark agents can be found in Appendix B. C.2 Training time Figure 10 shows a plot comparing the d100KL and training time of different agents normalized with that of an MLP. We can see that sgmcmc agent has the best performance, but at the cost of more training time (computation). Both ensemble+ and hypermodel agents have similar performance as sgmcmc with lower training time. We trained our agents on CPU only systems. D Real data This section provides supplementary details regarding the experiments in Section 5. As before, we include full implementation and source code at https://anonymous.4open. science/r/neural-testbed-B839. D.1 Datasets Table 2 outlines the datasets included in our experiments. Unlike to the synthetic testbed, which evaluates agents over a range of SNR regimes, these datasets are generally all high SNR regime. We can see this since the top-performing agents in the literature are able to obtain high levels of classification accuracy on held out data; something that is impossible if the underlying system has high levels of noise. Each of these datasets is provided with a canonical training/test set of specific sizes. In order to examine performance in different data regimes we augment the default settings of Table 2 by also examining the performance of agents on these datasets with reduced training data. In a way that mirrors the testbed sweep of Section 4.1, we also look at settings where the training data is restricted to T = 1, 10, 100, 1000, 10000 data points respectively. D.2 Correlation Figure 6 breaks down the correlation in performance between testbeds and real data. For the purposes of Table 6a we say that T = 1, 10 is the ‘low data’ regime, and the maximum training dataset size is the ‘high data’ regime. Our results show that, for each agent, for each data regime, performance of hyperparameters is correlated across settings. One concern might be that while performance on real data overall is highly correlated, that this might not necessarily be the case for any individual dataset. 
Or, alternatively, that this correlation is driven by extremely strong relationships in one dataset that are not present in others. Figure 11 shows that this is not the case. In fact, for each of the datasets considered we have strong and positive correlation over agent-hyperparameter pairs. This gives us confidence that the results of Figure 6b are robust not only to choice of agent, but also to some reasonable choice of datasets. D.3 Prior functions We consider two different forms of prior functions for ensemble+: a random MLP of the input data and a random linear function of a 2-dimensional latent trained via variational autoencoder (VAE) (Kingma & Welling, 2014). For the MLP prior, we tried both linear (MLP with no hidden layer) and MLP with hidden layers, and observed that the linear prior works better. To train the 2-dimensional latent, we considered a 2-layer (128, 64) MLP for the Gaussian encoder and a 2-layer (64, 128) MLP for the Bernoulli decoder. We trained the VAE using all unsupervised training data available for each dataset. After training the VAE for 10,000 steps, we used the output mean of the Gaussian encoder as the latent.
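As a rough illustration of the VAE-based prior described in D.3, the sketch below wraps a pre-trained encoder mean in a fixed random linear head. The weight shapes, scaling, and names are our own assumptions for illustration, not the exact training setup.

import numpy as np

def make_vae_prior(encoder_mean, num_classes, prior_scale=1.0, rng=None):
    # Sketch: a fixed random linear map of the 2-d latent mean produced by a
    # pre-trained (and frozen) VAE encoder; `encoder_mean` is assumed to map
    # inputs to a [batch, 2] array.
    if rng is None:
        rng = np.random.default_rng(0)
    w = rng.normal(size=(2, num_classes))
    b = rng.normal(size=num_classes)

    def prior_fn(x):
        z = encoder_mean(x)              # latent mean from the unsupervised encoder
        return prior_scale * (z @ w + b)

    return prior_fn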
1. What is the focus and contribution of the paper regarding predictive distributions in uncertainty quantification? 2. What are the strengths and weaknesses of the proposed evaluation metric based on joint predictions? 3. Do you have any concerns about the significance of using a specific value of τ for joint predictions and its impact on model selection? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or issues regarding the experimental results and their demonstration of the proposed metric's advantages?
Summary Of The Paper Review
Summary Of The Paper This work advocates evaluating predictive distributions via joint predictions (rather than the standard practice of evaluating marginal predictions) and introduces The Neural Testbed, an open-source software which includes the testing suite, along with implementations of a handful of methods in uncertainty quantification (UQ). The core evaluation metric used is KL divergence (cross-entropy loss) between the predictive distribution and the true likelihood of the data-generating process, and this work proposes an algorithm to compute this metric with joint distributions. The empirical evaluations compare numerous standard UQ methods with the Testbed, with both synthetic and real datasets. Review Strengths: This paper is generally well-written and clear. Evaluating predictive distributions is an important and relevant topic to the general UQ community. As far as I can tell, the idea of evaluating predictive distributions via joint distributions over the numerous predictions is quite novel, as well as the proposed algorithm. The experiments section does a good job in demonstrating the discrepancies that can arise between marginal and joint predictions during evaluation, and these findings are interesting. Weaknesses: As this work propose an evaluation metric, I believe it's important to have a discussion on the suite of metrics that are currently used in UQ and how the proposed metric provides an advantage over existing ones. This paper only touches upon KL-divergence (equivalently, cross-entropy, or likelihood), and doesn't mention other widely used metrics, such as calibration, or other proper scoring rules. In fact, would calibration (+multiples notions exist, e.g. ECE, classwise-ECE, adaptive ECE, ...) of the marginal distributions also fail to differentiate between the different methods in Figure 2? What is the significance of using τ = 100 for the joint predictions? Would there be discrepancies in the ranking of methods based on other τ 's? In practice, if this metric was used for model selection, which τ should the practitioner base the metric on to choose a model? On the same note, I believe there needs to be more demonstration/elaboration that a model with better joint predictions is "necessarily better" than another model that has worse joint predictions, but possibly as good or better marginal predictions. In the second paragraph of page 2, the authors hint that some Bayesian models are not performant in sequential decision tasks because they have poor joint predictions. The proposed metric would be more convincing if this point was followed up with, or if other downstream tasks were demonstrated in the experiments. Questions on content: In Section 4.1, what does the environment variable ε represent, concretely? Is it the choice of SNR and the initialization distribution over the data generating NN? Let me know if I am missing something here. In the last sentence of the second paragraph of Section 4.1, I don't understand what is meant by "agent is provided with the data generating process"? How is this done, concretely? In Step 1 of Algorithm 4, what exactly is the distribution, P ( ε ^ | f θ T ) ? How is this distribution produced? Other points: The font and spaces between lines for this submission are quite different from the default style. However, as far as I can tell, it seems to reduce space for content, so I did not flag it. Table 1 protrudes beyond the side margins — I'm not sure if this isn't a formatting violation. 
In Figure 6(b), which τ setting was chosen for each of the blue points?
ICLR
Title Evaluating Predictive Distributions: Does Bayesian Deep Learning Work? Abstract Posterior predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions. Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs. Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community. We benchmark several approaches to uncertainty estimation using a neural-network-based data generating process. Our results reveal the importance of evaluation beyond marginal predictions. Further, they reconcile sources of confusion in the field, such as why Bayesian deep learning approaches that generate accurate marginal predictions perform poorly in sequential decision tasks, how incorporating priors can be helpful, and what roles epistemic versus aleatoric uncertainty play when evaluating performance. We also present experiments on real-world challenge datasets, which show a high correlation with testbed results, and that the importance of evaluating joint predictive distributions carries over to real data. As part of this effort, we opensource The Neural Testbed, including all implementations from this paper. 1 Introduction Deep learning has emerged as the state-of-the-art approach across a number of application domains in which agents learn from large amounts of data (LeCun et al., 2015). Neural networks are increasingly used not only to predict outcomes but also to inform decisions. Common approaches in deep learning produce point estimates but not uncertainty estimates, which are often required for effective decision-making. Bayesian deep learning extends the methodology to produce such uncertainty estimates (MacKay, 1992; Neal, 2012). We consider agents that are trained on data pairs ((Xt, Yt+1) : t = 0, 1, . . . , T − 1) and subsequently generate predictions given new inputs. When presented with an input XT , a Bayesian neural network can generate a predictive distribution of the outcome YT+1 that is yet to be observed. This distribution characterizes the agent’s uncertainty about YT+1. We refer to such a prediction as marginal to distinguish it from a joint predictive distribution over a list (YT+1, . . . , YT+τ ) of prospective outcomes corresponding to inputs (XT , . . . , XT+τ−1). The importance of uncertainty estimation has motivated a great deal of research over recent years (Kendall & Gal, 2017). This research has produced a variety of agents that learn to generate predictive distributions. With this proliferation of alternatives, it is increasingly important to analyze and compare their performance (Filos et al., 2019; Nado et al., 2021). In this paper, we introduce new tools for systematic evaluation of such agents. Our tools overcome several limitations faced by previous methods of evaluation. First, by focusing purely on predictive distributions, we allow for a unified treatment of approaches developed within the Bayesian neural network community and beyond. This sidesteps the Open source code available at https://anonymous.4open.science/r/neural-testbed-B839. question of whether any approach ‘is really Bayesian’ (Wilson & Izmailov, 2020). Second, our tools evaluate the quality of higher-order joint predictions (τ > 1). 
Until now, the Bayesian deep learning literature has focused almost exclusively on evaluating marginal predictions (Wang et al., 2021). Finally, we develop a neural-network-based data generating process for Bayesian deep learning that can be used to drive insight and algorithm development. Where research has focused on a small set of challenge datasets, this might introduce bias through overfitting via multiple iterations of algorithm development. We use these tools to compare hundreds of agent variants. Further, we show that performance on our synthetic data generating process data is highly correlated with performance on real-world challenge datasets. We opensource all code used in this paper as The Neural Testbed. Our results reconcile several sources of confusion in the field. One concerns why particular approaches developed by the Bayesian deep learning community, such as Bayes-by-backprop, dropout, and deep ensembles, perform poorly in sequential decision tasks despite faring well based on evaluation metrics of that community (Osband et al., 2018). Our results demonstrate that, while such methods produce accurate marginal predictions, they are no longer competitive when it comes to high-order joint predictions. Joint predictions play a critical role in sequential decision-making (Lu et al., 2021). Another puzzling issue is that state-of-the-art methods do not employ domain-specific priors. Whether Bayesian deep learning approaches should at all is a subject of controversy (Wenzel et al., 2020). We show that the benefits of domain-specific priors can be pronounced when evaluating high-order joint predictions, even where they are negligible for marginals. We also help to resolve a point of philosophical debate within the deep learning community: the importance of epistemic versus aleatoric uncertainty1. The strangeness of this distinction has even made its way into wider popular culture, as satirized in the XKCD comic of Figure 1 (Munroe, 2021). For a given parametric model, we can clearly distinguish parameter uncertainty from noise, or reducible from irreducible uncertainty. However, from the perspective of a learning agent, the choice of model is subjective; different models can lead to the same marginal predictions. Our formulation provides a clear and objective way to assess the quality of predictive distributions, without reliance on this subjective distinction between knowledge and chance. Crucially, we show that this can be judged via the quality of joint predictions, but that marginals are not sufficient. It is worth mentioning another notable contribution of this work. The quality of a predictive distribution is commonly assessed in terms of cross-entropy loss. While this measure is welldefined for both marginal and joint predictions, to the best of our knowledge, the literature has only addressed computation in the former case. For high-order joint predictions, the straightforward approach would require computing sums over exponentially many values. To render this computationally tractable, we developed a novel approximation algorithm that leverages a random partitioning operation and Monte Carlo simulation. While this approach is motivated by concepts from high-dimensional geometry (Kaski, 1998; Donoho, 2006), we leave its analysis as a topic for future theoretical research. 1Epistemic uncertainty relates to knowledge (ancient Greek episteme↔knowledge), as opposed to aleatoric uncertainty relating to chance (Latin alea↔dice) (Der Kiureghian & Ditlevsen, 2009). 
2 Evaluating predictive distributions

In this section, we introduce notation for the standard supervised learning framework we will consider (classification) as well as our evaluation metric (the KL-loss). We also explain how we estimate the KL-loss for high-order joint predictions where exact computation is infeasible, using random partitions and Monte Carlo simulation.

2.1 Kullback–Leibler loss

Consider a sequence of pairs ((X_t, Y_{t+1}) : t = 0, 1, 2, . . .), where each X_t is a feature vector and each Y_{t+1} is its target label. This sequence is i.i.d. conditioned on the environment E, which produces the data, and which we view as a latent random variable. We consider an agent that is uncertain about the environment and predicts class labels Y_{T+1:T+τ} ≡ (Y_{T+1}, . . . , Y_{T+τ}) given training data pairs D_T ≡ ((X_t, Y_{t+1}) : t = 0, 1, 2, . . . , T − 1) and unlabelled feature vectors X_{T:T+τ−1} ≡ (X_T, . . . , X_{T+τ−1}). From the agent’s perspective, each feature vector X_t is generated i.i.d. from a fixed distribution P(X_t ∈ ·), and each class label Y_{t+1} is then drawn from P(Y_{t+1} ∈ · | E, X_t). We describe the agent’s predictions in terms of a generative model, parameterized by a vector θ_T that the agent learns from the training data D_T. For any inputs X_{T:T+τ−1}, θ_T determines a predictive distribution, which could be used to sample imagined outcomes Ŷ_{T+1:T+τ}. We define the τth-order expected KL-loss by

d^τ_KL = E[ d_KL( P(Y_{T+1:T+τ} ∈ · | E, X_{T:T+τ−1}) ‖ P(Ŷ_{T+1:T+τ} ∈ · | θ_T, X_{T:T+τ−1}) ) ]        (1)
       = −E[ log P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | θ_T, X_{T:T+τ−1}) ] + C,

where the first argument of d_KL is the environment likelihood, the second is the agent likelihood, the negative expected log-probability is the cross-entropy loss (equivalently, the negative log-likelihood), and C = E[log P(Y_{T+1:T+τ} | E, X_{T:T+τ−1})] is independent of θ_T. The expectation is taken over all random variables, including the environment E, the parameters θ_T, X_{T:T+τ−1}, and Y_{T+1:T+τ}. Note that d^τ_KL is equivalent to the widely used notion of cross-entropy loss, though offset by a quantity that is independent of θ_T (Kullback & Leibler, 1951). For τ > 1, d^τ_KL assesses joint rather than marginal predictions.

2.2 Marginal Versus Joint Predictions

Evaluating an agent’s ability to estimate uncertainty on joint instead of marginal predictions can result in very different answers. We provide a simple example that illustrates the point. Suppose the data ((X_t, Y_{t+1}) : t = 0, 1, 2, . . .) is generated by repeated tosses of a possibly biased coin with unknown probability p of heads.² Let X_t = 0, to indicate that there is no input, and let each outcome Y_{t+1} be 0 or 1 to indicate tails or heads, respectively. Consider two agents that, without any training, predict outcomes. Agent 1 assumes p = 2/3 and models the outcome of each flip as pure chance. Agent 2 assumes that the coin is fully biased, meaning that p ∈ {0, 1}, but assigns probabilities 1/3 and 2/3 to 0 and 1. Let Ŷ^1_{t+1} and Ŷ^2_{t+1} denote the outcomes imagined by the two agents. Despite their differing assumptions, the two agents generate identical marginal predictive distributions: P(Ŷ^1_{t+1} = 0) = P(Ŷ^2_{t+1} = 0) = 1/3. On the other hand, joint predictions greatly differ for large τ:

P(Ŷ^1_1 = 0, . . . , Ŷ^1_τ = 0) = 1/3^τ ≪ 1/3 = P(Ŷ^2_1 = 0, . . . , Ŷ^2_τ = 0).

We can say that agent 1 attributes all uncertainty to aleatoric sources and agent 2, epistemic sources (although as Figure 1 alludes, there are many ways an agent can attribute sources of uncertainty). Evaluating marginal predictions cannot distinguish between the two possibilities, though for a specific prior distribution over p, one agent could be right and the other wrong. One must evaluate joint predictions to make this distinction.

2 We consider this coin as an illustrative model of more complex binary outcomes, such as whether a user will click on an ad, or whether a given mortgage will default on payments.
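The coin example can be checked with a few lines of Python; this tiny script is purely illustrative and just reproduces the arithmetic above.

from fractions import Fraction

tau = 10  # length of the joint prediction

# Agent 1: pure chance, p(heads) = 2/3 on every flip, independent across flips.
agent1_marginal_tails = Fraction(1, 3)
agent1_joint_all_tails = Fraction(1, 3) ** tau

# Agent 2: the coin is fully biased; all-tails with probability 1/3, all-heads with 2/3.
agent2_marginal_tails = Fraction(1, 3)
agent2_joint_all_tails = Fraction(1, 3)

assert agent1_marginal_tails == agent2_marginal_tails    # marginals agree
assert agent1_joint_all_tails < agent2_joint_all_tails   # joints differ sharply
print(float(agent1_joint_all_tails), float(agent2_joint_all_tails))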
Evaluating marginal predictions cannot distinguish between the two possibilities, though for a specific prior distribution over p, one agent could be right and the other wrong. One must evaluate joint predictions to make this distinction. 2We consider this coin as an illustrative model of more complex binary outcomes, such as whether a user will click on an ad, or whether a given mortgage will default on payments. When it comes to decision-making, this distinction can be critical (Lu et al., 2021). In a casino, under the first agent’s assumption, there is large upside and little risk on repeatedly betting on heads in the long run. However, if there is a 1/3 chance the coin will always land tails, as is the case in the second agent’s prediction, there is a ruinous risk to repeatedly betting heads. Evaluating joint predictions beyond marginals distinguishes these cases. 2.3 Computation of Kullback–Leibler loss In contexts we will consider, it is not possible to compute dτKL exactly. As such, we will approximate dτKL via Monte Carlo simulation. This section provides a high level overview of our approach, we push the full details to Appendix A. Algorithm 1 outlines a basic approach to estimating dτKL with respect to a synthetic data generating process. The algorithm samples a set of environments and a training dataset for each environment. For each of these pairs, the agent is re-initialized, trained, and then tested on N independent test data τ -samples. Note that each test data τ -sample includes τ data pairs. For each test data τ -sample, the likelihood of the environment is computed exactly, but that of the agent’s belief distribution is approximated. The estimate of dτKL is taken to be the sample mean of the log-likelihood-ratios (Algorithm 2). Algorithm 1 KL-Loss Computation 1: for j = 1, 2, . . . , J do 2: sample environment and training dataset, and train agent 3: for n = 1, 2, . . . , N do 4: sample a test data τ -sample with τ feature-label pairs 5: compute pj,n . likelihood of environment 6: compute p̂j,n . estimated likelihood of agent’s belief distribution 7: return 1JN ∑J j=1 ∑N n=1 log (pj,n/p̂j,n) . estimated log-likelihood-ratio While the likelihood of an environment can be efficiently computed, that of an agent’s belief distribution poses a computational challenge. One approach is to estimate this likelihood via Monte Carlo simulation (Algorithm 3). This produces unbiased estimates, which can be accurate when τ is small. However, maintaining accuracy requires the number of samples to grow exponentially with τ , as discussed in Appendix A.1. To overcome this challenge, we propose a novel approach that estimates the likelihood of the agent’s beliefs via a combination of randomized partitioning and Monte Carlo simulation (Algorithm 4) (Kaski, 1998). We conjecture that, under suitable regularity conditions, this novel approach produces accurate estimates even when τ is large, but leave a formal analysis to future work. Even though Algorithm 1 is developed for a synthetic data generating process, it is straightforward to extend it to evaluate agents on real data. We outline our approach to real data in Section 5.1, with full details in Appendix A.2. 3 Benchmark agents In this section we outline the baseline agents that we use to benchmark canonical approaches to uncertainty estimation in deep learning. Table 1 links to papers that introduce these agents, as well as the hyperparamters that we tuned to optimize their performance via gridsearch. 
In each case, we attempt to match ‘canonical’ implementations, which we open source at https://anonymous.4open.science/r/neural-testbed-B839. In addition to these agent implementations, our opensource project contains all the evaluation code to reproduce the results of this paper. Our code is written in Python and makes use of Jax internally (Bradbury et al., 2018). However, our evaluation procedure is framework agnostic, and can equally be used with any Python package including Tensorflow, Pytorch or even SKlearn. Over the course of this paper, we have made extensive use of parallel computation to facilitate large hyperparameter sweeps over many problems. Nevertheless, the overall computational cost is relatively low by modern deep learning standards and relies only on standard CPU. For reference, evaluating the mlp agent across all the problems in our testbed and real data requires less than 3 CPU-hours. We view our opensource effort as one of the major contributions of this work. We provide clear and strong baselines, together with an objective and accessible method for assessing uncertainty estimates beyond marginal distributions. 4 The Neural Testbed In this section we introduce the Neural Testbed, a system for assessing and comparing agent performance. The Testbed implements synthetic data generating processes and streamlines the process of sampling data, training agents, and evaluating test performance by estimating KL-loss for marginal and high-order joint predictions. Since independent data can be generated for each execution, the Testbed can drive insight and multiple iterations of algorithm development without risk of overfitting to a fixed dataset. We begin by describing the simple generative model based around a random 2-layer MLP. We then apply this testbed to evaluate a comprehensive set of benchmark agents. 4.1 Synthetic data generating processes By data generating process, we do not mean only the conditional distribution of data pairs (Xt, Yt+1)|E but also the distribution of the environment E . The Testbed considers 2- dimensional inputs and binary classification problems, although the generating processes can be easily extended to any input dimension and number of classes. The Testbed offers three data generating processes distinguished by a “temperature” setting, which signifies the signal-to-noise ratio (SNR) regime of the generated data. The agent can be tuned separately for each setting. This reflects prior knowledge of whether the agent is operating in a high SNR regime such as image recognition or a low SNR regime such as weather forecasting. To generate a model, the Testbed samples a 2-hidden-layer ReLU MLP with 2 output units, which are scaled by 1/temperature and passed through a softmax function to produce class probabilities. The MLP is sampled according to standard Xavier initialization (Glorot & Bengio, 2010), with the exception that biases in the first layer are drawn from N(0, 12 ). The inputs (Xt : t = 0, 1, . . .) are drawn i.i.d. from N(0, I). The agent is provided with the data generating process as prior knowledge. In Section 2.1, we described KL-loss as a metric for evaluating performance of an agent. The Neural Testbed estimates KL-loss, with τ ∈ {1, 100}, for three temperature settings and several training dataset sizes. For each value of τ , the KL-losses are averaged to produce an aggregate performance measure. Further details concerning data generation and agent evaluation are offered in Appendix A. 
4.2 Performance in marginal predictions We begin our evaluation of benchmark approaches to Bayesian deep learning in marginal predictions (τ = 1). This setting has been the main focus of the Bayesian deep learning literature. Despite this focus, it is surprising to see in Figure 2 that none of the benchmark methods significantly outperform a well-tuned MLP baseline according to d1KL. Of course, there are many other metrics one might consider, but in this fundamental metric of prediction quality, the mlp agent presents a baseline that is difficult to outperform. One of the keys to this result is that all of the agents are able to tune their hyperparameters, such as L2 weight decay, to the SNR regime and number of training points. This matches the way deep learning systems are typically implemented in practice, with extensive hyperparameter tuning on validation data. This methodology has led many practitioners to doubt the usefulness of automatic tuning procedures such as bootstrap sampling (Nixon et al., 2020). In Figure 3, we compare the performance of an ensemble+ agent that uses bootstrapping with and without the ability to tune the hyperparameters per problem setting. We see that bootstrap sampling is beneficial when the agent is expected to work robustly over a wide range of problem settings. However, the benefits are no longer apparent when the agent is allowed to tune its hyperparameters to individual tasks. 4.3 Performance beyond marginals One of the key contributions of this paper is to evaluate predictive distributions beyond marginals. In Figure 2, the red bars show the results of benchmark agents evaluated on joint predictive distributions with τ = 100. Unlike when evaluating on marginal predictions, where no method significantly outperforms a well-tuned MLP, the potential benefits afforded by Bayesian deep learning become clear when examining higher-order predictive distributions. Our results refute prior works’ claims that examining dτKL beyond marginals provides little new information (Wang et al., 2021). Figure 2 shows that sgmcmc is the top-performing agent overall. This should be reassuring to the Bayesian deep learning community and beyond. In the limit of large compute this agent should recover the ‘gold-standard’ of Bayesian inference, and it does indeed perform best (Welling & Teh, 2011). However, some of the most popular approaches in this field (ensemble, dropout) do not actually provide good approximations to the predictive distribution in τ = 100. In fact, we see that even though Bayesian purists may deride ensemble+ and hypermodels as ‘not really Bayesian’, these methods actually provide much better approximations to the Bayesian posterior than ‘fully Bayesian’ VI approaches like bbb. We note too that while sgmcmc performs best, it also requires orders of magnitude more computation than competitive methods even in this toy setting (see Appendix C.2). As we scale to more complex environments, it may therefore be worthwhile to consider alternative approaches to approximate Bayesian inference. For insight into where our top agents are able to outperform, we compare ensemble and ensemble+ under the medium SNR regime in Figures 4 and 5. These methods are identical, except for the addition of a randomized prior function (Osband et al., 2018). 
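Since the randomized prior function is the only difference between ensemble and ensemble+, it may help to sketch what a single ensemble+ member looks like. The snippet below is a schematic of the additive-prior idea from Osband et al. (2018), not the agent's actual implementation; the class name and prior architecture are placeholders.

```python
import numpy as np

class RandomizedPriorMember:
    """One ensemble+ member: a trainable network plus a fixed random prior.

    Predictions use logits f_theta(x) + beta * p(x), where p is a randomly
    initialized network whose parameters are never updated. Only f_theta is
    fit to the (possibly bootstrapped) training data; beta is the prior scale.
    """

    def __init__(self, trainable_net, prior_net, prior_scale=1.0):
        self.trainable_net = trainable_net   # e.g. a small MLP, trained as usual
        self.prior_net = prior_net           # frozen at its random initialization
        self.prior_scale = prior_scale

    def logits(self, x):
        return self.trainable_net(x) + self.prior_scale * self.prior_net(x)

    def predict_probs(self, x):
        z = self.logits(x)
        z = z - z.max(axis=-1, keepdims=True)
        p = np.exp(z)
        return p / p.sum(axis=-1, keepdims=True)
```

An ensemble+ agent maintains M such members, each with an independently sampled prior network (and, optionally, a bootstrapped resampling of the data); the joint predictive distribution is approximated by averaging over members, exactly as for a vanilla ensemble.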
Figure 4 shows that, although these methods perform similarly in the quality of their marginal predictions (τ = 1), the addition of a prior function greatly improves the quality of joint predictive distributions (τ = 100) in the low data regime. Figure 5 provides additional intuition into how the randomized prior functions are able to drive improved performance. Figure 5a shows a sampled generative model from our Testbed, with the training data shown in red and blue circles. Figure 5b shows the mean predictions and 4 randomly sampled ensemble members from each agent (top=ensemble, bottom=ensemble+). We see that, although the agents mostly agree in their mean predictions, ensemble+ produces more diverse sampled outcomes enabled by the addition of randomized prior functions. In contrast, ensemble produces similar samples, which may explain why its performance is close to baseline mlp. 5 Performance on real data Section 4 provides a simple, sanitized testbed for clear insight to the efficacy of Bayesian deep learning techniques. However, most deep learning research is not driven by these sorts of synthetic generative models, but the ultimate goal of performing well on real datasets. In this section, we apply the same benchmark agents to a selection of small challenge datasets. We find that, on average, tuning agents for the synthetic problems leads to better performance on real data. We also find that, just as the synthetic testbed, agents that perform similarly in marginal predictions may be distinguished in the quality of their joint predictions. 5.1 Datasets We focus on 10 benchmark datasets (3 feature-based, 7 image from pixels) drawn from the literature including Iris, MNIST, and CIFAR-10 (TFD). This collection is not intended to be comprehensive, or to include the most challenging large-scale problems, but instead to represent some canonical real-world data that might reasonably be addressed with the MLP models of Section 4.1. We apply a basic pre-processing step to each dataset, normalizing input features and flattening observations. We push full details to Appendix D.1. To assess performance in real datasets, we follow a similar procedure as Algorithm 1. The only difference is that since it is impossible to compute the likelihood of environment for real datasets, we compute the negative log-likelihood (NLL) rather than dτKL. Appendix A.2 provides further details. Note that NLL and dτKL are equivalent for agent comparison since they differ by a constant (see Equation 1). Furthermore, to allow for more direct comparison with the synthetic testbed, we also consider variants of each dataset where the number of training pairs is limited to less than the ‘full’ dataset size. 5.2 Synthetic data is predictive of real data Recall that Figure 2 compares performance across an array of agents, assessed using our synthetic data generating process. Each agent’s hyperparameters were tuned by first enumerating a list of plausibly near-optimal choices and selecting the one that optimizes performance. Each of our real-world datasets can be viewed as generated by an environment sampled from an alternative data generating process. A natural question is whether performance on the synthetic data correlates with performance on the real-world data. The table of Figure 6a displays results pertaining to each of our agents. For each agent, performance for each candidate hyperparameter setting was assessed on synthetic and real data, and the correlation across these pairs is reported. 
The left and right columns restrict attention to datasets with low and high volumes of training data, respectively. If a correlation were equal to 1, the hyperparameter setting that optimizes agent performance on real data would be identical to that on synthetic data. It is reassuring that the correlations are high, reflecting a strong degree of alignment, with the exception of bbb in low data regime, for which there appear to be pathological outcomes distorting performance for small training sets. The values in parentheses express 5th and 95th percentile confidence bounds, measured via the statistical bootstrap. Figure 6b plots performance on real versus synthetic data for the high data regime. Each data point represents one agent-hyperparameter combination. If the correlation were equal to 1, the combination that performs best on the synthetic data would also perform best on the real data. It is reassuring that the correlation is large, and the confidence interval between the 5th and 95th percentiles small. Agent-hyperparameter combinations that perform better on the testbed tend to perform better on real data as well. 5.3 Higher order predictions and informative priors Our synthetic testbed can be helpful in driving innovations that carry over to real data. Section 5.2 indicated that performance on the Testbed is correlated with that on realworld data. We now repeat the observation from Figure 4 on real data; additive prior functions can significantly improve the accuracy of joint predictive distributions generated by ensembles. We show this by comparing the performance of ensemble+ with different forms of prior functions on benchmark datasets. We evaluate an ensemble with no prior function (none), a random MLP prior (MLP), and a random linear function of a 2-dimensional latent representation as the prior, trained via variational autoencoder (VAE) (Kingma & Welling, 2014). We provide full details in Appendix D.3. Figure 7 plots the improvement in NLL for the ensemble agent relative to a baseline MLP (lower is better), and breaks out the result for datasets=MNIST,Iris and τ = 1, 100. We can see that the results for Iris mirror our synthetic data almost exactly. The results for MNIST share some qualitative insights, but also reveal some important differences. For Iris τ = 1 none of the methods outperform the MLP baseline, but for τ = 100 we see significant benefits to an additive MLP prior in the low data regime. For MNIST τ = 1 we actually see benefits to ensembles, even without prior functions and even in the high data regime. This reveals some aspects of this real data that are not captured by our synthetic model, where we did not see this behaviour. For τ = 100 the random MLP prior gives a slight advantage, but the effect is much less pronounced. We hypothesize this is because, unlike the testbed, the MLP prior is not well-matched to the input image data. However, the VAE prior is able to provide significant benefit in the low data regime.3 These benefits also carry over to Iris, even where random MLPs already provided signficant value. Designing architectures that offer useful priors for learning agents is an exciting area for future work. 6 Conclusion This paper highlights the need to evaluate predictive distributions beyond marginals. In addition to this conceptual contribution, we develop a suite of practical computational tools that can evaluate diverse approaches to uncertainty estimation. 
Together with these tools, we provide a neural-network-based data generating process that facilitates research and iteration beyond a small set of challenge datasets. We package these together as The Neural Testbed, including a variety of baseline agent implementations. We believe that this represents an exciting and valuable new benchmark for Bayesian deep learning and beyond. We have already used this testbed to generate several new insights in this paper. We have shown many popular Bayesian deep learning approaches perform similarly in marginal predictions but quite differently in joint predictions. We reveal the importance of bootstrapping for parameter robustness, and also help reconcile the observed lack of improvement when tuned to specific datasets. We have shown that these insights from synthetic data can carry over to real datasets; that performance in these settings is correlated, that agents with similar marginal predictions can be distinguished by their joint predictions, and that suitable prior functions can play an important role in driving good performance. The results in this paper are in some sense preliminary. The grand challenge for Bayesian deep learning is to provide effective uncertainty estimates in large, rich datasets. While we have demonstrated benefits to predictive evaluation beyond marginals only in the ‘low data’ regime and small-scale problems, we believe that they will extend more broadly to situations where new test inputs appear novel relative to training data. As such, we believe our core insights should carry over to the related problems of nonstationarity and covariate shift that plague modern deep learning systems. As an agent takes on more and more complex tasks, it will continue to run into new and unfamiliar settings and uncertain outcomes; as such, effective predictive distributions will be more important than ever. 3We hypothesize that appropriately initialized convnet architectures may be able to leverage image structure as noted in prior work (Ulyanov et al., 2018). A Testbed Pseudocode We present the testbed pseudocode in this section. Specifically, Algorithm 2 is the pseudocode for our neural testbed, and Algorithm 3 and Algorithm 4 are two different approaches to estimate the likelihood of a test data τ -sample conditioned on an agent’s belief. Algorithm 3 is based on the standard Monte-Carlo estimation, while Algorithm 4 adopts a random partitioning approach. The presented testbed pseudocode works for any prior P(E ∈ ·) over the environment and any input distribution PX , including the ones described in Section 4.1. We also release full code and implementations at https://anonymous.4open.science/r/neural-testbed-B839. In addition to presenting the testbed pseudocode, we also discuss some core technical issues in the neural testbed design. Specifically, Appendix A.1 discusses how to estimate the likelihood of an agent’s belief distribution; Appendix A.2 discusses how to extend the testbed to agent evaluation on real data; finally, Appendix A.3 explains our choices of experiment parameters. Algorithm 2 Neural Testbed Require: the testbed requires the following inputs 1. prior distribution over the environment P(E ∈ ·), input distribution PX 2. agent fθ 3. number of training data T , test distribution order τ 4. number of sampled problems J , number of test data samples N 5. parameters for agent likelihood estimation, as is specified in Algorithm 3 and 4 for j = 1, 2, . . . , J do Step 1: sample environment and training data 1. 
sample environment E ∼ P(E ∈ ·)
2. sample T inputs X_0, X_1, . . . , X_{T−1} i.i.d. from P_X
3. sample the training labels Y_1, . . . , Y_T conditionally i.i.d. as Y_{t+1} ∼ P(Y ∈ · | E, X = X_t) for all t = 0, 1, . . . , T − 1
4. choose the training dataset as D_T = {(X_t, Y_{t+1}) : t = 0, . . . , T − 1}
Step 2: train agent
train agent f_{θ_T} based on training dataset D_T
Step 3: compute likelihoods
for n = 1, 2, . . . , N do
1. sample X^{(n)}_T, . . . , X^{(n)}_{T+τ−1} i.i.d. from P_X
2. generate Y^{(n)}_{T+1}, . . . , Y^{(n)}_{T+τ} conditionally independently as Y^{(n)}_{t+1} ∼ P(Y ∈ · | E, X = X^{(n)}_t) for all t = T, T + 1, . . . , T + τ − 1
3. compute the likelihood under the environment E as p_{j,n} = P(Y^{(n)}_{T+1:T+τ} | E, X^{(n)}_{T:T+τ−1}) = ∏_{t=T}^{T+τ−1} P(Y^{(n)}_{t+1} | E, X^{(n)}_t)
4. estimate the likelihood conditioned on the agent's belief, p̂_{j,n} ≈ P(Ŷ_{T+1:T+τ} = Y^{(n)}_{T+1:T+τ} | θ_T, X^{(n)}_{T:T+τ−1}, Y^{(n)}_{T+1:T+τ}), based on Algorithm 3 or 4 with test data τ-sample (X^{(n)}_{T:T+τ−1}, Y^{(n)}_{T+1:T+τ}).
return (1/JN) ∑_{j=1}^{J} ∑_{n=1}^{N} log(p_{j,n}/p̂_{j,n})
Algorithm 3 Monte Carlo Estimation of Likelihood of Agent's Belief
Require:
1. trained agent f_{θ_T} and number of Monte Carlo samples M
2. test data τ-sample (X_{T:T+τ−1}, Y_{T+1:T+τ})
Step 1: sample M models Ê_1, . . . , Ê_M conditionally i.i.d. from P(Ê ∈ · | f_{θ_T})
Step 2: estimate p̂ as p̂ = (1/M) ∑_{m=1}^{M} P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1}, Y_{T+1:T+τ})
return p̂
Algorithm 4 Estimation of Likelihood of Agent's Belief via Random Partitioning
Require:
1. trained agent f_{θ_T}
2. number of Monte Carlo samples M
3. number of hyperplanes d
4. test data τ-sample (X_{T:T+τ−1}, Y_{T+1:T+τ})
Step 1: sample M models Ê_1, . . . , Ê_M conditionally i.i.d. from P(Ê ∈ · | f_{θ_T}); for each model m = 1, . . . , M, class k, and t = T, . . . , T + τ − 1, define p_{m,t,k} = P(Ŷ_{t+1} = k | Ê_m, X_t) and ℓ_{m,t,k} = Φ^{−1}(p_{m,t,k}), where Φ(·) is the CDF of the standard normal distribution. For each model m, define a vector ℓ_m = [ℓ_{m,T,1}, ℓ_{m,T,2}, . . . , ℓ_{m,T+τ−1,K}] ∈ R^{Kτ}
Step 2: sample a d × (Kτ) matrix A and a d-dimensional vector b, with each element/component sampled i.i.d. from N(0, 1). For each m = 1, . . . , M, compute ψ_m = 1[Aℓ_m + b ≥ 0] ∈ {0, 1}^d.
Step 3: partition the sampled models, with each cell indexed by ψ ∈ {0, 1}^d and defined by M_ψ = {m : ψ_m = ψ}, and assign a probability to each cell: q_ψ = |M_ψ|/M
Step 4: for all ψ ∈ {0, 1}^d and all t = T, T + 1, . . . , T + τ − 1, estimate the probability of predicting Ŷ_{t+1} = k conditioned on the cell: p_{ψ,t,k} = (1/|M_ψ|) ∑_{m ∈ M_ψ} p_{m,t,k} if |M_ψ| > 0, and p_{ψ,t,k} = 1 if |M_ψ| = 0
Step 5: estimate P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | θ_T, X_{T:T+τ−1}, Y_{T+1:T+τ}) as p̂ = ∑_{ψ ∈ {0,1}^d} q_ψ ∏_{t=T}^{T+τ−1} p_{ψ,t,Y_{t+1}}
return p̂
A.1 Estimating Likelihood of Agent's Belief Distribution We have presented two algorithms to estimate the likelihood of a test data τ-sample conditioned on a trained agent: Algorithm 3 is based on standard Monte Carlo estimation, while Algorithm 4 adopts an approach combining random partitioning and Monte Carlo estimation. In this subsection, we briefly discuss the pros and cons of these two algorithms, and provide some general guidelines on how to choose between them. Algorithm 3 produces unbiased estimates of the likelihoods, which are usually accurate when τ is small (e.g. for τ ≤ 10). However, maintaining accuracy might require the number of samples M to grow exponentially with τ. The following is an illustrative example. Example 1 (Uniform belief over deterministic models): Consider a scenario where the number of class labels is K = 2.
We say a model Ê is deterministic if for any feature vector X_t, P(Ŷ_{t+1} = 1 | Ê, X_t) ∈ {0, 1}. Obviously, for any test data τ-sample (X_{T:T+τ−1}, Y_{T+1:T+τ}) with Y_{T+1:T+τ} ∈ {0, 1}^τ, under a deterministic model Ê we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê, X_{T:T+τ−1}, Y_{T+1:T+τ}) ∈ {0, 1}. When restricted to the inputs X_{T:T+τ−1}, there are 2^τ distinguishable deterministic models. Assume the agent's belief distribution is uniform over these 2^τ distinguishable deterministic models; then for any Y_{T+1:T+τ} ∈ {0, 1}^τ, the likelihood of the agent's belief distribution is P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | θ_T, X_{T:T+τ−1}, Y_{T+1:T+τ}) = 2^{−τ}. Now let's consider Algorithm 3. When a model Ê_m is sampled from the agent's belief distribution, with probability 2^{−τ} we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1}, Y_{T+1:T+τ}) = 1, and with probability 1 − 2^{−τ} we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1}, Y_{T+1:T+τ}) = 0. Consequently, in expectation, we need the number of Monte Carlo samples M = Ω(2^τ) to ensure that the estimate p̂ returned by Algorithm 3 is non-zero. To overcome this challenge, we also propose a novel approach to estimate the likelihood of the agent's belief via a combination of randomized partitioning and Monte Carlo simulation, as presented in Algorithm 4. This approach proceeds as follows. First, M models are sampled from the agent's belief distribution. For each sampled model, each test data input X_t, and each class label k, a predictive probability p_{m,t,k} and its probit ℓ_{m,t,k} = Φ^{−1}(p_{m,t,k}) are computed, where Φ(·) is the CDF of the standard normal distribution. For each sampled model, we also stack its probits into a probit vector ℓ_m ∈ R^{Kτ}. Then, d random hyperplanes are sampled and used to partition R^{Kτ} into 2^d cells. Stacked probit vectors place models in cells. Predictive distributions of models in each cell are averaged, and the likelihood is calculated based on these averages, with each cell weighted according to the number of models it contains. The Neural Testbed applies Algorithm 4 with 2^d ≪ M. Hence, some cells are assigned many models. We conjecture that, under suitable regularity conditions, models assigned to the same cell tend to generate similar predictions. If this is the case, this algorithm produces accurate estimates even when τ is large. We leave a formal analysis to future work. Finally, we briefly discuss how to choose between Algorithm 3 and Algorithm 4. As a rule of thumb, we recommend choosing Algorithm 3 for τ < 10 and Algorithm 4 with the number of hyperplanes d between 5 and 10 for τ ≥ 10. A.2 Agent Evaluation on Real Data Algorithm 2 (and its simplified version Algorithm 1) is developed for a synthetic data generating process. We now discuss how to extend it to agent evaluation on real data. Consider a scenario with J real datasets, where each dataset is further partitioned into a training dataset and a test dataset. The main difference between this scenario and a synthetic data generating process is that we cannot compute the likelihood of the environment for real data. Thus, we compute the cross-entropy loss instead (see Equation 1). The computational approach is similar to Algorithm 1: for each real dataset, we use its training dataset to train an agent. Then, we sample N test data τ-samples from the test dataset, and estimate the likelihoods of the agent's belief distribution. The estimate of the cross-entropy loss is taken to be the sample mean of the negative log-likelihoods.
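To make the two likelihood estimators of Appendix A.1 concrete, the following is a minimal NumPy sketch of Algorithms 3 and 4. It takes an array `probs` of shape (M, τ, K) holding each sampled model's class probabilities at the τ test inputs, and the observed labels `y` of shape (τ,); this interface is our own simplification, not the testbed's actual API.

```python
import numpy as np
from scipy.stats import norm

def mc_likelihood(probs, y):
    """Algorithm 3: plain Monte Carlo estimate of the agent's joint likelihood.

    probs: (M, tau, K) class probabilities for M models sampled from the agent.
    y:     (tau,) observed test labels.
    """
    t_idx = np.arange(probs.shape[1])
    per_model = probs[:, t_idx, y].prod(axis=1)        # P(Y_{T+1:T+tau} | model)
    return per_model.mean()

def random_partition_likelihood(probs, y, d=7, rng=None):
    """Algorithm 4: likelihood estimate via random partitioning of probit vectors."""
    if rng is None:
        rng = np.random.default_rng()
    M, tau, K = probs.shape
    eps = 1e-6
    probits = norm.ppf(np.clip(probs, eps, 1 - eps)).reshape(M, tau * K)  # l_m
    A = rng.normal(size=(d, tau * K))                   # d random hyperplanes
    b = rng.normal(size=d)
    codes = (probits @ A.T + b >= 0.0)                  # psi_m in {0,1}^d
    cell_ids = codes.dot(1 << np.arange(d))             # integer index per cell

    t_idx = np.arange(tau)
    likelihood = 0.0
    for cell in np.unique(cell_ids):
        members = probs[cell_ids == cell]               # models falling in this cell
        q = members.shape[0] / M                        # cell weight q_psi
        cell_mean = members.mean(axis=0)                # averaged predictions p_psi
        likelihood += q * cell_mean[t_idx, y].prod()    # q_psi * prod_t p_psi,t,y
    return likelihood
```

The rule of thumb above then amounts to calling `mc_likelihood` when τ < 10 and `random_partition_likelihood` with d between 5 and 10 otherwise.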
Note that when ranking agents, the cross-entropy loss and dτKL will lead to the same order of agents, since these two losses differ by a constant independent of the agent (see Equation 1). A.3 Choices of Experiment Parameters To apply Algorithm 2, we need to specify an input distribution PX and a prior distribution on the environment P(E ∈ ·). Recall from Section 4.1 that we consider binary classification problems with input dimension 2. We choose PX = N(0, I), and we consider three environment priors distinguished by a temperature parameter that controls the signal-to-noise ratio (SNR) regime. We sweep over temperatures in {0.01, 0.1, 0.5}. The prior distribution P(E ∈ ·) is induced by a distribution over MLPs with 2 hidden layers and ReLU activation. The MLP is distributed according to standard Xavier initialization, except that biases in the first layer are drawn from N(0, 12 ). The MLP outputs two units, which are divided by the temperature parameter and passed through the softmax function to produce class probabilities. The implementation of this generative model is in our open source code under the path /generative/factories.py. We now describe the other parameters we use in the Testbed. In Algorithm 2, we pick the order of predictive distributions τ ∈ {1, 100}, training dataset size T ∈ {1, 3, 10, 30, 100, 300, 1000}, number of sampled problems J = 10, and number of testing data τ -samples N = 1000. We apply Algorithm 3 for evaluation of d1KL and Algorithm 4 for evaluation of d100KL . In both Algorithms 3 and 4, we sample M = 1000 models from the agent. In Algorithm 4, we set the number of hyperplanes d = 7. The specification of the testbed parameters is in our open soucre code under the path /leaderboard/sweep.py. On real datasets, we apply the same τ ∈ {1, 100}, N = 1000, and M = 1000. We set the number of hyperplanes d = 10 in Algorithm 4. B Agents In this section, we describe the benchmark agents in Section 3 and the choice of various hyperparameters used in the implementation of these agents. The list of agents include MLP, ensemble, dropout, Bayes by backprop, stochastic Langevin MCMC, ensemble+ and hypermodel. We will also include other agents such as KNN, random forest, and deep kernel, but the performance of these agents was worse than the other benchmark agents, so we chose not to include them in the comparison in Section 4. In each case, we attempt to match the “canonical” implementation. The complete implementation of these agents including the hyperparameter sweeps used for the Testbed are available at https://anonymous.4open.science/r/neural-testbed-B839. We make use of the Epistemic Neural Networks notation from (Osband et al., 2021) in our code. We set the default hyperparameters of each agent to be the ones that minimize the aggregated KL score daggKL = d1KL + d100KL/100. B.1 MLP The mlp agent learns a 2-layer MLP with 50 hidden units in each layer by minimizing the cross-entropy loss with L2 weight regularization. The L2 weight decay scale is chosen either to be λ 1T or λ d √ β T , where d is the input dimension, β is the temperature of the generative process and T is the size of the training dataset. We sweep over λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. We implement the MLP agent as a special case of a deep ensemble (B.2). The implementation and hyperparameter sweeps for the mlp agent can be found in our open source code, as a special case of the ensemble agent, under the path /agents/factories/ensemble.py. 
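The only agent-specific choice for the mlp baseline is the scale of its L2 penalty, so its training objective is short to sketch. The helper below only illustrates how the two weight-decay parameterizations described in B.1 translate into a single regularization coefficient; it is a simplified sketch, not the open-source implementation.

```python
import numpy as np

def weight_decay_scale(lam, num_train, input_dim=None, temperature=None):
    """L2 weight-decay scale for the mlp agent (Appendix B.1).

    Either lambda / T, or lambda * d * sqrt(beta) / T when the scale is tied
    to the input dimension d and the generative temperature beta.
    """
    if input_dim is None or temperature is None:
        return lam / num_train
    return lam * input_dim * np.sqrt(temperature) / num_train

def mlp_objective(cross_entropy, params, lam, num_train, **kwargs):
    """Regularized training loss: cross-entropy plus the scaled L2 penalty."""
    scale = weight_decay_scale(lam, num_train, **kwargs)
    l2 = sum(np.sum(w ** 2) for w in params.values())
    return cross_entropy + scale * l2
```

The sweep over λ ∈ {10^-4, ..., 100} then simply re-evaluates this objective with different scales for each SNR regime and training-set size.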
B.2 Ensemble We implement the basic “deep ensembles” approach for posterior approximation (Lakshminarayanan et al., 2017). The ensemble agent learns an ensemble of MLPs by minimizing the cross-entropy loss with L2 weight regularization. The only difference between the ensemble members is their independently initialized network weights. We chose the L2 weight scale to be either λ 1MT or λ d √ β MT , where M is the ensemble size, d is the input dimension, β is the temperature of the generative process, and T is the size of the training dataset. We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. We find that larger ensembles work better, but this effect is within margin of error after 10 elements. The implementation and hyperparameter sweeps for the ensemble agent can be found in our open source code under the path /agents/factories/ensemble.py. B.3 Dropout We follow Gal & Ghahramani (2016) to build a droput agent for posterior approximation. The agent applies dropout on each layer of a fully connected MLP with ReLU activation and optimizes the network using the cross-entropy loss combined with L2 weight decay. The L2 weight decay scale is chosen to be either l 2 2T (1− pdrop) or d √ βl T where pdrop is the dropping probability, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We sweep over dropout rate pdrop ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, length scale (used for L2 weight decay) l ∈ {0.01, 0.1, 0.3, 1, 3, 10}, number of neural network layers ∈ {2, 3}, and hidden layer size ∈ {50, 100}. The implementation and hyperparameter sweeps for the dropout agent can be found in our open source code under the path /agents/factories/dropout.py. B.4 Bayes-by-backprop We follow Blundell et al. (2015) to build a bbb agent for posterior approximation. We consider a scale mixture of two zero-mean Gaussian densities as the prior. The Gaussian densities have standard deviations σ1 and σ2, and they are mixed with probabilities p and 1− p, respectively. We sweep over σ1 ∈ {1, 2, 4}, σ2 ∈ {0.25, 0.5, 0.75}, p ∈ {0, 0.25, 0.5, 0.75, 1}, learning rate ∈ {10−3, 3× 10−3}, number of training steps ∈ {500, 1000, 10000}, number of neural network layers ∈ {2, 3}, hidden layer size ∈ {50, 100}, and the ratio of the complexity cost to the likelihood cost ∈ {1, d √ β}, where d is the input dimension and β is the temperature of the data generating process. The implementation and hyperparameter sweeps for the bbb agent can be found in our open source code under the path /agents/factories/bbb.py. B.5 Stochastic gradient Langevin dynamics We follow Welling & Teh (2011) to implement a sgmcmc agent using stochastic gradient Langevin dynamics (SGLD). We consider two versions of SGLD, one with momentum and other without the momentum. We consider independent Gaussian prior on the neural network parameters where the prior variance is set to be σ2 = λ T dβ , where λ is a hyperparameter that is swept over {0.01, 0.1, 0.5, 1}, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We consider a constant learning rate that is swept over {10−5, 5× 10−5, 10−4, 5× 10−4, 10−3, 5 × 10−3, 10−2}. For SGLD with momentum, the momentum decay term is always set to be 0.9. The number of training batches is 5 × 105 with burn-in time of 105 training batches. 
We save a model every 1000 steps after the burn-in time and use these models as an ensemble during the evaluation. The implementation and hyperparameter sweeps for the sgmcmc agent can be found in our open source code under the path /agents/ factories/sgmcmc.py. B.6 Ensemble+ We implement the ensemble+ agent using deep ensembles with randomized prior functions (Osband et al., 2018) and bootstrap sampling (Osband & Van Roy, 2015). Similar to the vanilla ensemble agent in Section B.2, we consider L2 weight scale to be either λ 1MT or λ d √ β MT . We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. The randomized prior functions are sampled exactly from the data generating process, and we sweep over prior scaling ∈ {0, √ β, 1}. In addition, we sweep over bootstrap type (none, exponential, bernoulli). We find that the addition of randomized prior functions is crucial for improvement in performance over vanilla deep ensembles in terms of the quality of joint predictions. We also find that bootstrap sampling improves agent robustness, although the advantage is less apparent when one is allowed to tune the L2 weight decay for each task (see Figure 3). The implementation and hyperparameter sweeps for the ensemble+ agent can be found in our open source code under the path /agents/factories/ensemble_plus.py. B.7 Hypermodel We follow Dwaracherla et al. (2020) to build a hypermodel agent for posterior approximation. We consider a linear hypermodel over a 2-layer MLP base model. We sweep over index dimension ∈ {1, 3, 5, 7}. The L2 weight decay is chosen to be either λ 1T or λ d √ β T with λ ∈ {0.1, 0.3, 1, 3, 10}, where d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We chose three different bootstrapping methods of none, exponential, bernoulli. We use an additive prior which is a linear hypermodel prior over an MLP base model, which is similar to the generating process, with number of hidden layers in {1, 2}, 10 hidden units in each layer, and prior scale from {0, √ β, 1}. The implementation and hyperparameter sweeps for the hypermodel agent can be found in our open source code under the path /agents/factories/hypermodel.py. B.8 Non-parametric classifiers K-nearest neighbors (k-NN) (Cover & Hart, 1967) and random forest classifiers (Friedman, 2017) are simple and cheap off-the-shelf non-parametric baselines (Murphy, 2012; Pedregosa et al., 2011). The ‘uncertainty’ in these classifiers arises merely from the fact that they produce distributions over the labels and as such we do not expect them to perform well relative to more principled approaches. Moreover, these methods have no capacity to model dτKL for τ > 1. For the knn agent we swept over the number of neighbors k ∈ {1, 5, 10, 30, 50, 100} and the weighting of the contribution of each neighbor as either uniform or based on distance. For the random forest agent we swept over the number of trees in the forest {10, 100, 1000}, and the splitting criterion which was either the Gini impurity coefficient or the information gain. To prevent infinite values in the KL we truncate the probabilities produced by these classifiers to be in the interval [0.01, 0.99]. The implementation and hyperparameter sweeps for the knn and random forest agents can be found in our open source code under the paths /agents/factories/knn.py and /agents/factories/random_forest.py. 
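Returning to the sgmcmc agent of Appendix B.5, the core of stochastic gradient Langevin dynamics is a single noisy gradient step per minibatch (Welling & Teh, 2011). The sketch below shows that update under the independent Gaussian prior described above; `grad_log_likelihood` is a hypothetical callable and the constant step size mirrors our sweeps.

```python
import numpy as np

def sgld_step(params, minibatch, grad_log_likelihood, step_size,
              prior_variance, num_train, rng):
    """One SGLD update with an independent N(0, prior_variance) prior.

    params: dict of numpy arrays. grad_log_likelihood(params, minibatch)
    returns per-parameter gradients of the average log-likelihood on the
    minibatch, so multiplying by num_train rescales to the full dataset.
    """
    grads = grad_log_likelihood(params, minibatch)
    new_params = {}
    for name, w in params.items():
        # Gradient of the log posterior: N * E[grad log-lik] - w / prior_variance.
        g = num_train * grads[name] - w / prior_variance
        noise = rng.normal(size=w.shape) * np.sqrt(step_size)
        new_params[name] = w + 0.5 * step_size * g + noise
    return new_params
```

As described above, models are snapshotted every 1,000 steps after burn-in and the resulting collection is used as an ensemble at evaluation time; the momentum variant additionally applies a decay of 0.9 to an accumulated velocity.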
B.9 Gaussian process with learned kernel A neural network takes input Xt ∈ X and produces output Zt+1 = Wφθ(Xt) + b ∈ RK , where W ∈ RK×m is a matrix, b ∈ RK is a bias vector, and φθ : X → Rm is the output of the penultimate layer of the neural network. In the case of classification the output Zt+1 corresponds to the logits over the class labels, i.e., Ŷt+1 ∝ exp(Zt+1). The neural network should learn a function that maps the input into a space where the classes are linearly distinguishable. In other words, the mapping that the neural network is learning can be considered a form of kernel (Schölkopf & Smola, 2018), where the kernel function k : X ×X → R is simply k(X,X ′) = φθ(X)>φθ(X ′). With this in mind, we can take a trained neural network and consider the learned mapping to be the kernel in a Gaussian process (GP) (Rasmussen, 2003), from which we can obtain approximate uncertainty estimates. Concretely, let Φ0:T−1 ∈ RT×m be the matrix corresponding to the φθ(Xt), t = 0, . . . , T −1, vectors stacked row-wise and let ΦT :T+τ−1 ∈ Rτ×m denote the same quantity for the test set. Fix index i ∈ {0, . . . ,K − 1} to be a particular class index. A GP models the joint distribution over the dataset to be a multi-variate Gaussian, i.e.,[ Z (i) 1:T Z (i) T+1:T+τ ] ∼ N ([ µ (i) 1:T µ (i) T+1:T+τ ] , [ σ2I + Φ0:T−1Φ>0:T−1 ΦT :T+τ−1Φ>0:T−1 Φ0:T−1Φ>T :T+τ−1 ΦT :T+τ−1Φ>T :T+τ−1 ]) where σ > 0 models the noise in the training data measurement and µ(i)1:T , µ (i) T+1:T+τ are the means under the GP. The conditional distribution is given by P (Z(i)T+1:T+τ | Z (i) 1:T , X0:T+τ−1) = N ( µ (i) T+1:T+τ |1:T ,ΣT+1:T+τ |1:T ) where ΣT+1:T+τ |1:T = ΦT :T+τ−1Φ>T :T+τ−1 − ΦT :T+τ−1Φ>0:T−1(σ2I + Φ0:T−1Φ>0:T−1)−1Φ0:T−1Φ>T :T+τ−1. and rather than use the GP to compute µ(i)T+1:T+τ |0:T (which would not be possible since we do not oberve the true logits) we just take it to be the output of the neural network when evaluated on the test dataset. The matrix being inverted in the expression for ΣT+1:T+τ |0:T has dimension T × T , which may be quite large. We use the Sherman-Morrison-Woodbury identity to rewrite it as follows (Woodbury, 1950) ΣT+1:T+τ |0:T = ΦT :T+τ−1(I − Φ>0:T−1(σ2I + Φ0:T−1Φ>0:T−1)−1Φ0:T−1)Φ>T :T+τ−1 = σ2ΦT :T+τ−1(σ2I + Φ>0:T−1Φ0:T−1)−1Φ>T :T+τ−1, which instead involves the inverse of an m×m matrix, which may be much smaller. If we perform a Cholesky factorization of positive definite matrix (σ2I + Φ>0:T−1Φ0:T−1) = LL> then the samples for all logits simultaneously can be drawn by first sampling ζ ∈ Rm×K , with each entry drawn IID from N (0, 1), then forming ŶT+1:T+τ ∝ exp(µT+1:T+τ |1:T + σΦT :T+τ−1L−>ζ). The implementation and hyperparameter sweeps for the deep kernel agent can be found in our open source code under the path /agents/factories/deep_kernel.py. B.10 Other agents In our paper we have made a concerted effort to include representative and canonical agents across different families of Bayesian deep learning and adjacent research. In addition to these implementations, we performed extensive tuning to make sure that each agent was given a fair shot. However, with the proliferation of research in this area, it was not possible for us to evaluate all competiting approaches. We hope that, by opensourcing the Neural Testbed, we can allow researchers in the field to easily assess and compare their agents to these baselines. For example, we highlight a few recent pieces of research that might be interesting to evaluate in our setting. Of course, there are many more methods to compare and benchmark. 
We leave this open as an exciting area for future research. • Neural Tangent Kernel Prior Functions (He et al., 2020). Proposes a specific type of prior function in ensemble+ inspired by connections to the neural tangent kernel. • Functional Variational Bayesian Neural Networks (Sun et al., 2019). Applies variational inference directly to the function outputs, rather than weights like bbb. • Variational normalizing flows (Rezende & Mohamed, 2015). Applies variational inference over a more expressive family than bbb. • No U-Turn Sampler (Hoffman et al., 2014). Another approach to sgmcmc that attempts to compute the posterior directly, computational costs can grow large. C Testbed results In this section, we provide the complete results of the performance of benchmark agents on the Testbed, broken down by the temperature setting, which controls the SNR, and the size of the training dataset. We select the best performing agent within each agent family and plot d1KL and d100KL with the performance of an MLP agent as a reference. We also provide a plot comparing the training time of different agents. C.1 Performance breakdown Figures 8 and 9 show the KL estimates evaluated on τ = 1 and τ = 100, respectively. For each agent, for each SNR regime, for each number of training points we plot the average KL estimate from the Testbed. In each plot, we include the “baseline” mlp agent as a black dashed line to allow for easy comparison across agents. A detailed description of these benchmark agents can be found in Appendix B. C.2 Training time Figure 10 shows a plot comparing the d100KL and training time of different agents normalized with that of an MLP. We can see that sgmcmc agent has the best performance, but at the cost of more training time (computation). Both ensemble+ and hypermodel agents have similar performance as sgmcmc with lower training time. We trained our agents on CPU only systems. D Real data This section provides supplementary details regarding the experiments in Section 5. As before, we include full implementation and source code at https://anonymous.4open. science/r/neural-testbed-B839. D.1 Datasets Table 2 outlines the datasets included in our experiments. Unlike to the synthetic testbed, which evaluates agents over a range of SNR regimes, these datasets are generally all high SNR regime. We can see this since the top-performing agents in the literature are able to obtain high levels of classification accuracy on held out data; something that is impossible if the underlying system has high levels of noise. Each of these datasets is provided with a canonical training/test set of specific sizes. In order to examine performance in different data regimes we augment the default settings of Table 2 by also examining the performance of agents on these datasets with reduced training data. In a way that mirrors the testbed sweep of Section 4.1, we also look at settings where the training data is restricted to T = 1, 10, 100, 1000, 10000 data points respectively. D.2 Correlation Figure 6 breaks down the correlation in performance between testbeds and real data. For the purposes of Table 6a we say that T = 1, 10 is the ‘low data’ regime, and the maximum training dataset size is the ‘high data’ regime. Our results show that, for each agent, for each data regime, performance of hyperparameters is correlated across settings. One concern might be that while performance on real data overall is highly correlated, that this might not necessarily be the case for any individual dataset. 
Or, alternatively, that this correlation is driven by extremely strong relationships in one dataset that are not present in others. Figure 11 shows that this is not the case. In fact, for each of the datasets considered we have strong and positive correlation over agent-hyperparameter pairs. This gives us confidence that the results of Figure 6b are robust not only to choice of agent, but also to some reasonable choice of datasets. D.3 Prior functions We consider two different forms of prior functions for ensemble+: a random MLP of the input data and a random linear function of a 2-dimensional latent trained via variational autoencoder (VAE) (Kingma & Welling, 2014). For the MLP prior, we tried both linear (MLP with no hidden layer) and MLP with hidden layers, and observed that the linear prior works better. To train the 2-dimensional latent, we considered a 2-layer (128, 64) MLP for the Gaussian encoder and a 2-layer (64, 128) MLP for the Bernoulli decoder. We trained the VAE using all unsupervised training data available for each dataset. After training the VAE for 10,000 steps, we used the output mean of the Gaussian encoder as the latent.
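A minimal sketch of the VAE-based prior described in D.3: the trained encoder's mean maps an input to a 2-dimensional latent, and a fixed random linear function of that latent serves as the additive prior for each ensemble+ member. The encoder interface and names here are placeholders rather than our actual code.

```python
import numpy as np

class VAELatentPrior:
    """Random linear prior over a learned 2-d VAE latent (Appendix D.3).

    encoder_mean: callable mapping a batch of inputs to the Gaussian encoder's
    mean, i.e. an array of shape (batch, 2). The linear map is sampled once at
    construction and never trained.
    """

    def __init__(self, encoder_mean, num_classes, prior_scale=1.0, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        self.encoder_mean = encoder_mean
        self.weights = rng.normal(size=(2, num_classes))  # fixed random linear map
        self.bias = rng.normal(size=num_classes)
        self.prior_scale = prior_scale

    def __call__(self, x):
        z = self.encoder_mean(x)                 # (batch, 2) latent means
        return self.prior_scale * (z @ self.weights + self.bias)
```

Each ensemble+ member then adds this fixed function to its trainable logits, exactly as with the random MLP prior; only the form of the prior changes.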
1. What is the focus of the paper regarding Bayesian deep learning? 2. What are the strengths of the proposed simulation-based framework? 3. Are there any weaknesses or limitations in the paper's contributions? 4. How does the reviewer assess the clarity and organization of the paper's content? 5. What are some minor suggestions for improving the paper?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a simulation-based framework to evaluate different techniques proposed for uncertainty estimation of predictive models. The approach relies on simulated data to control effects such as the environments where data is collected, and data and model uncertainty. This control enables generating interesting insights about various techniques. The paper also goes beyond evaluating marginal posterior predictive distributions and extends its benchmarking to joint distributions that capture sequential decisions that can be made with such models. Some of the highlights from the results: 1) Their results show that Bayesian deep learning is impactful for capturing joint predictive distributions. 2) Priors used in ensemble+ help with diversity and therefore enable better predictive distributions. 3) Bootstrapping helps with robustness of predictions if the model's hyperparameters are not tuned. Review Pros: Bayesian deep learning is an important research area, but evaluation of the progress made in the field has not been consistent so far, which makes it difficult to compare different techniques. Furthermore, the common practice of evaluating on out-of-distribution data provides insight but lacks a quantitative benchmark. The paper addresses an important problem in Bayesian deep learning and provides a framework and tools to evaluate different techniques. The framework is based on simulations, which provides control over variables and enables gathering insights about various techniques. Furthermore, the paper extends the evaluations to sequential decisions, which makes the contributions unique. Overall I found the contributions of the paper very important for the field. I also think the paper is very well written and organized. Cons: Even though in the abstract the authors mention that the proposed method provides insights into aleatoric and epistemic uncertainty, the manuscript does not elaborate on this point. Even without this point, I think the paper is interesting, and removing this point will not hurt its reach. Bayesian deep learning researchers are in general interested in out-of-distribution generalization. I found the discussion on OOD a bit limited. I believe the proposed framework can also enable evaluations on OOD data, and I would recommend the authors add a discussion on this. Minor comments: Abstract: Please explicitly define what you mean by "joint predictions". Figure 6: It would be interesting to color code the agents in the figure.
ICLR
Title Evaluating Predictive Distributions: Does Bayesian Deep Learning Work? Abstract Posterior predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions. Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs. Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community. We benchmark several approaches to uncertainty estimation using a neural-network-based data generating process. Our results reveal the importance of evaluation beyond marginal predictions. Further, they reconcile sources of confusion in the field, such as why Bayesian deep learning approaches that generate accurate marginal predictions perform poorly in sequential decision tasks, how incorporating priors can be helpful, and what roles epistemic versus aleatoric uncertainty play when evaluating performance. We also present experiments on real-world challenge datasets, which show a high correlation with testbed results, and that the importance of evaluating joint predictive distributions carries over to real data. As part of this effort, we opensource The Neural Testbed, including all implementations from this paper. 1 Introduction Deep learning has emerged as the state-of-the-art approach across a number of application domains in which agents learn from large amounts of data (LeCun et al., 2015). Neural networks are increasingly used not only to predict outcomes but also to inform decisions. Common approaches in deep learning produce point estimates but not uncertainty estimates, which are often required for effective decision-making. Bayesian deep learning extends the methodology to produce such uncertainty estimates (MacKay, 1992; Neal, 2012). We consider agents that are trained on data pairs ((Xt, Yt+1) : t = 0, 1, . . . , T − 1) and subsequently generate predictions given new inputs. When presented with an input XT , a Bayesian neural network can generate a predictive distribution of the outcome YT+1 that is yet to be observed. This distribution characterizes the agent’s uncertainty about YT+1. We refer to such a prediction as marginal to distinguish it from a joint predictive distribution over a list (YT+1, . . . , YT+τ ) of prospective outcomes corresponding to inputs (XT , . . . , XT+τ−1). The importance of uncertainty estimation has motivated a great deal of research over recent years (Kendall & Gal, 2017). This research has produced a variety of agents that learn to generate predictive distributions. With this proliferation of alternatives, it is increasingly important to analyze and compare their performance (Filos et al., 2019; Nado et al., 2021). In this paper, we introduce new tools for systematic evaluation of such agents. Our tools overcome several limitations faced by previous methods of evaluation. First, by focusing purely on predictive distributions, we allow for a unified treatment of approaches developed within the Bayesian neural network community and beyond. This sidesteps the Open source code available at https://anonymous.4open.science/r/neural-testbed-B839. question of whether any approach ‘is really Bayesian’ (Wilson & Izmailov, 2020). Second, our tools evaluate the quality of higher-order joint predictions (τ > 1). 
Until now, the Bayesian deep learning literature has focused almost exclusively on evaluating marginal predictions (Wang et al., 2021). Finally, we develop a neural-network-based data generating process for Bayesian deep learning that can be used to drive insight and algorithm development. Where research has focused on a small set of challenge datasets, this might introduce bias through overfitting via multiple iterations of algorithm development. We use these tools to compare hundreds of agent variants. Further, we show that performance on our synthetic data generating process data is highly correlated with performance on real-world challenge datasets. We opensource all code used in this paper as The Neural Testbed. Our results reconcile several sources of confusion in the field. One concerns why particular approaches developed by the Bayesian deep learning community, such as Bayes-by-backprop, dropout, and deep ensembles, perform poorly in sequential decision tasks despite faring well based on evaluation metrics of that community (Osband et al., 2018). Our results demonstrate that, while such methods produce accurate marginal predictions, they are no longer competitive when it comes to high-order joint predictions. Joint predictions play a critical role in sequential decision-making (Lu et al., 2021). Another puzzling issue is that state-of-the-art methods do not employ domain-specific priors. Whether Bayesian deep learning approaches should at all is a subject of controversy (Wenzel et al., 2020). We show that the benefits of domain-specific priors can be pronounced when evaluating high-order joint predictions, even where they are negligible for marginals. We also help to resolve a point of philosophical debate within the deep learning community: the importance of epistemic versus aleatoric uncertainty1. The strangeness of this distinction has even made its way into wider popular culture, as satirized in the XKCD comic of Figure 1 (Munroe, 2021). For a given parametric model, we can clearly distinguish parameter uncertainty from noise, or reducible from irreducible uncertainty. However, from the perspective of a learning agent, the choice of model is subjective; different models can lead to the same marginal predictions. Our formulation provides a clear and objective way to assess the quality of predictive distributions, without reliance on this subjective distinction between knowledge and chance. Crucially, we show that this can be judged via the quality of joint predictions, but that marginals are not sufficient. It is worth mentioning another notable contribution of this work. The quality of a predictive distribution is commonly assessed in terms of cross-entropy loss. While this measure is welldefined for both marginal and joint predictions, to the best of our knowledge, the literature has only addressed computation in the former case. For high-order joint predictions, the straightforward approach would require computing sums over exponentially many values. To render this computationally tractable, we developed a novel approximation algorithm that leverages a random partitioning operation and Monte Carlo simulation. While this approach is motivated by concepts from high-dimensional geometry (Kaski, 1998; Donoho, 2006), we leave its analysis as a topic for future theoretical research. 1Epistemic uncertainty relates to knowledge (ancient Greek episteme↔knowledge), as opposed to aleatoric uncertainty relating to chance (Latin alea↔dice) (Der Kiureghian & Ditlevsen, 2009). 
2 Evaluating predictive distributions In this section, we introduce notation for the standard supervised learning framework we will consider (classification) as well as our evaluation metric (the KL-loss). We also explain how we estimate the KL-loss for high-order joint predictions where exact computation is infeasible, using random partitions and Monte Carlo simulation. 2.1 Kullback–Leibler loss Consider a sequence of pairs ((Xt, Yt+1) : t = 0, 1, 2, . . .), where each Xt is a feature vector and each Yt+1 is its target label. This sequence is i.i.d. conditioned on the environment E , which produces the data, and which we view as a latent random variable. We consider an agent that is uncertain about the environment and predicts class labels YT+1:T+τ ≡ (YT+1, . . . , YT+τ ) given training data pairs DT ≡ ((Xt, Yt+1) : t = 0, 1, 2, . . . , T − 1) and unlabelled feature vectors XT :T+τ−1 ≡ (XT , . . . , XT+τ−1). From the agent’s perspective, each feature vector Xt is generated i.i.d from a fixed distribution P(Xt ∈ ·), and each class label Yt+1 is then drawn from P(Yt+1 ∈ ·|E , Xt). We describe the agent’s predictions in terms of a generative model, parameterized by a vector θT that the agent learns from the training data DT . For any inputs XT :T+τ−1, θT determines a predictive distribution, which could be used to sample imagined outcomes ŶT+1:T+τ . We define the τ th-order expected KL-loss by dτKL =E [ dKL ( P (YT+1:T+τ ∈ ·|E , XT :T+τ−1)︸ ︷︷ ︸ environment likelihood ∥∥P(ŶT+1:T+τ ∈ ·|θT , XT :T+τ−1)︸ ︷︷ ︸ agent likelihood )] (1) =−E [ log ( P ( ŶT+1:T+τ = YT+1:T+τ ∣∣∣θT , XT :T+τ−1, YT+1:T+τ))]︸ ︷︷ ︸ cross-entropy loss ≡ negative log-likelihood + C, where C = E [log (P (YT+1:T+τ |E , XT :T+τ−1))] is independent of θT . The expectation is taken over all random variables, including the environment E , the parameters θT , XT :T+τ−1, and YT+1:T+τ . Note that dτKL is equivalent to the widely used notion of cross-entropy loss, though offset by a quantity that is independent of θT (Kullback & Leibler, 1951). For τ > 1, dτKL assesses joint rather than the marginal predictions. 2.2 Marginal Versus Joint Predictions Evaluating an agent’s ability to estimate uncertainty on joint instead of marginal predictions can result in very different answers. We provide a simple example that illustrates the point. Suppose the data ((Xt, Yt+1) : t = 0, 1, 2, . . .) is generated by repeated tosses of a possibly biased coin with unknown probability p of heads.2 Let Xt = 0, to indicate that there is no input, and let each outcome Yt+1 be 0 or 1 to indicate tails or heads, respectively. Consider two agents that, without any training, predict outcomes. Agent 1 assumes p = 2/3 and models the outcome of each flip as pure chance. Agent 2 assumes that the coin is fully biased, meaning that p ∈ {0, 1}, but assigns probabilities 1/3 and 2/3 to 0 and 1. Let Ŷ 1t+1 and Ŷ 2t+1 denote the outcomes imagined by the two agents. Despite their differing assumptions, the two agents generate identical marginal predictive distributions: P(Ŷ 1t+1 = 0) = P(Ŷ 2t+1 = 0) = 1/3. On the other hand, joint predictions greatly differ for large τ : P(Ŷ 11 = 0, .., Ŷ 1τ = 0) = 1/3τ 1/3 = P(Ŷ 21 = 0, . . . , Ŷ 2τ = 0). We can say that agent 1 attributes all uncertainty to aleatoric sources and agent 2, epistemic sources (although as Figure 1 alludes, there are many ways an agent can attribute sources of uncertainty). 
Evaluating marginal predictions cannot distinguish between the two possibilities, though for a specific prior distribution over p, one agent could be right and the other wrong. One must evaluate joint predictions to make this distinction. 2We consider this coin as an illustrative model of more complex binary outcomes, such as whether a user will click on an ad, or whether a given mortgage will default on payments. When it comes to decision-making, this distinction can be critical (Lu et al., 2021). In a casino, under the first agent’s assumption, there is large upside and little risk on repeatedly betting on heads in the long run. However, if there is a 1/3 chance the coin will always land tails, as is the case in the second agent’s prediction, there is a ruinous risk to repeatedly betting heads. Evaluating joint predictions beyond marginals distinguishes these cases. 2.3 Computation of Kullback–Leibler loss In contexts we will consider, it is not possible to compute dτKL exactly. As such, we will approximate dτKL via Monte Carlo simulation. This section provides a high level overview of our approach, we push the full details to Appendix A. Algorithm 1 outlines a basic approach to estimating dτKL with respect to a synthetic data generating process. The algorithm samples a set of environments and a training dataset for each environment. For each of these pairs, the agent is re-initialized, trained, and then tested on N independent test data τ -samples. Note that each test data τ -sample includes τ data pairs. For each test data τ -sample, the likelihood of the environment is computed exactly, but that of the agent’s belief distribution is approximated. The estimate of dτKL is taken to be the sample mean of the log-likelihood-ratios (Algorithm 2). Algorithm 1 KL-Loss Computation 1: for j = 1, 2, . . . , J do 2: sample environment and training dataset, and train agent 3: for n = 1, 2, . . . , N do 4: sample a test data τ -sample with τ feature-label pairs 5: compute pj,n . likelihood of environment 6: compute p̂j,n . estimated likelihood of agent’s belief distribution 7: return 1JN ∑J j=1 ∑N n=1 log (pj,n/p̂j,n) . estimated log-likelihood-ratio While the likelihood of an environment can be efficiently computed, that of an agent’s belief distribution poses a computational challenge. One approach is to estimate this likelihood via Monte Carlo simulation (Algorithm 3). This produces unbiased estimates, which can be accurate when τ is small. However, maintaining accuracy requires the number of samples to grow exponentially with τ , as discussed in Appendix A.1. To overcome this challenge, we propose a novel approach that estimates the likelihood of the agent’s beliefs via a combination of randomized partitioning and Monte Carlo simulation (Algorithm 4) (Kaski, 1998). We conjecture that, under suitable regularity conditions, this novel approach produces accurate estimates even when τ is large, but leave a formal analysis to future work. Even though Algorithm 1 is developed for a synthetic data generating process, it is straightforward to extend it to evaluate agents on real data. We outline our approach to real data in Section 5.1, with full details in Appendix A.2. 3 Benchmark agents In this section we outline the baseline agents that we use to benchmark canonical approaches to uncertainty estimation in deep learning. Table 1 links to papers that introduce these agents, as well as the hyperparamters that we tuned to optimize their performance via gridsearch. 
In each case, we attempt to match ‘canonical’ implementations, which we open source at https://anonymous.4open.science/r/neural-testbed-B839. In addition to these agent implementations, our opensource project contains all the evaluation code to reproduce the results of this paper. Our code is written in Python and makes use of Jax internally (Bradbury et al., 2018). However, our evaluation procedure is framework agnostic, and can equally be used with any Python package including Tensorflow, Pytorch or even SKlearn. Over the course of this paper, we have made extensive use of parallel computation to facilitate large hyperparameter sweeps over many problems. Nevertheless, the overall computational cost is relatively low by modern deep learning standards and relies only on standard CPU. For reference, evaluating the mlp agent across all the problems in our testbed and real data requires less than 3 CPU-hours. We view our opensource effort as one of the major contributions of this work. We provide clear and strong baselines, together with an objective and accessible method for assessing uncertainty estimates beyond marginal distributions. 4 The Neural Testbed In this section we introduce the Neural Testbed, a system for assessing and comparing agent performance. The Testbed implements synthetic data generating processes and streamlines the process of sampling data, training agents, and evaluating test performance by estimating KL-loss for marginal and high-order joint predictions. Since independent data can be generated for each execution, the Testbed can drive insight and multiple iterations of algorithm development without risk of overfitting to a fixed dataset. We begin by describing the simple generative model based around a random 2-layer MLP. We then apply this testbed to evaluate a comprehensive set of benchmark agents. 4.1 Synthetic data generating processes By data generating process, we do not mean only the conditional distribution of data pairs (Xt, Yt+1)|E but also the distribution of the environment E . The Testbed considers 2- dimensional inputs and binary classification problems, although the generating processes can be easily extended to any input dimension and number of classes. The Testbed offers three data generating processes distinguished by a “temperature” setting, which signifies the signal-to-noise ratio (SNR) regime of the generated data. The agent can be tuned separately for each setting. This reflects prior knowledge of whether the agent is operating in a high SNR regime such as image recognition or a low SNR regime such as weather forecasting. To generate a model, the Testbed samples a 2-hidden-layer ReLU MLP with 2 output units, which are scaled by 1/temperature and passed through a softmax function to produce class probabilities. The MLP is sampled according to standard Xavier initialization (Glorot & Bengio, 2010), with the exception that biases in the first layer are drawn from N(0, 12 ). The inputs (Xt : t = 0, 1, . . .) are drawn i.i.d. from N(0, I). The agent is provided with the data generating process as prior knowledge. In Section 2.1, we described KL-loss as a metric for evaluating performance of an agent. The Neural Testbed estimates KL-loss, with τ ∈ {1, 100}, for three temperature settings and several training dataset sizes. For each value of τ , the KL-losses are averaged to produce an aggregate performance measure. Further details concerning data generation and agent evaluation are offered in Appendix A. 
4.2 Performance in marginal predictions We begin our evaluation of benchmark approaches to Bayesian deep learning in marginal predictions (τ = 1). This setting has been the main focus of the Bayesian deep learning literature. Despite this focus, it is surprising to see in Figure 2 that none of the benchmark methods significantly outperform a well-tuned MLP baseline according to d1KL. Of course, there are many other metrics one might consider, but in this fundamental metric of prediction quality, the mlp agent presents a baseline that is difficult to outperform. One of the keys to this result is that all of the agents are able to tune their hyperparameters, such as L2 weight decay, to the SNR regime and number of training points. This matches the way deep learning systems are typically implemented in practice, with extensive hyperparameter tuning on validation data. This methodology has led many practitioners to doubt the usefulness of automatic tuning procedures such as bootstrap sampling (Nixon et al., 2020). In Figure 3, we compare the performance of an ensemble+ agent that uses bootstrapping with and without the ability to tune the hyperparameters per problem setting. We see that bootstrap sampling is beneficial when the agent is expected to work robustly over a wide range of problem settings. However, the benefits are no longer apparent when the agent is allowed to tune its hyperparameters to individual tasks. 4.3 Performance beyond marginals One of the key contributions of this paper is to evaluate predictive distributions beyond marginals. In Figure 2, the red bars show the results of benchmark agents evaluated on joint predictive distributions with τ = 100. Unlike when evaluating on marginal predictions, where no method significantly outperforms a well-tuned MLP, the potential benefits afforded by Bayesian deep learning become clear when examining higher-order predictive distributions. Our results refute prior works’ claims that examining dτKL beyond marginals provides little new information (Wang et al., 2021). Figure 2 shows that sgmcmc is the top-performing agent overall. This should be reassuring to the Bayesian deep learning community and beyond. In the limit of large compute this agent should recover the ‘gold-standard’ of Bayesian inference, and it does indeed perform best (Welling & Teh, 2011). However, some of the most popular approaches in this field (ensemble, dropout) do not actually provide good approximations to the predictive distribution in τ = 100. In fact, we see that even though Bayesian purists may deride ensemble+ and hypermodels as ‘not really Bayesian’, these methods actually provide much better approximations to the Bayesian posterior than ‘fully Bayesian’ VI approaches like bbb. We note too that while sgmcmc performs best, it also requires orders of magnitude more computation than competitive methods even in this toy setting (see Appendix C.2). As we scale to more complex environments, it may therefore be worthwhile to consider alternative approaches to approximate Bayesian inference. For insight into where our top agents are able to outperform, we compare ensemble and ensemble+ under the medium SNR regime in Figures 4 and 5. These methods are identical, except for the addition of a randomized prior function (Osband et al., 2018). 
Figure 4 shows that, although these methods perform similarly in the quality of their marginal predictions (τ = 1), the addition of a prior function greatly improves the quality of joint predictive distributions (τ = 100) in the low data regime. Figure 5 provides additional intuition into how the randomized prior functions are able to drive improved performance. Figure 5a shows a sampled generative model from our Testbed, with the training data shown in red and blue circles. Figure 5b shows the mean predictions and 4 randomly sampled ensemble members from each agent (top=ensemble, bottom=ensemble+). We see that, although the agents mostly agree in their mean predictions, ensemble+ produces more diverse sampled outcomes enabled by the addition of randomized prior functions. In contrast, ensemble produces similar samples, which may explain why its performance is close to baseline mlp. 5 Performance on real data Section 4 provides a simple, sanitized testbed for clear insight to the efficacy of Bayesian deep learning techniques. However, most deep learning research is not driven by these sorts of synthetic generative models, but the ultimate goal of performing well on real datasets. In this section, we apply the same benchmark agents to a selection of small challenge datasets. We find that, on average, tuning agents for the synthetic problems leads to better performance on real data. We also find that, just as the synthetic testbed, agents that perform similarly in marginal predictions may be distinguished in the quality of their joint predictions. 5.1 Datasets We focus on 10 benchmark datasets (3 feature-based, 7 image from pixels) drawn from the literature including Iris, MNIST, and CIFAR-10 (TFD). This collection is not intended to be comprehensive, or to include the most challenging large-scale problems, but instead to represent some canonical real-world data that might reasonably be addressed with the MLP models of Section 4.1. We apply a basic pre-processing step to each dataset, normalizing input features and flattening observations. We push full details to Appendix D.1. To assess performance in real datasets, we follow a similar procedure as Algorithm 1. The only difference is that since it is impossible to compute the likelihood of environment for real datasets, we compute the negative log-likelihood (NLL) rather than dτKL. Appendix A.2 provides further details. Note that NLL and dτKL are equivalent for agent comparison since they differ by a constant (see Equation 1). Furthermore, to allow for more direct comparison with the synthetic testbed, we also consider variants of each dataset where the number of training pairs is limited to less than the ‘full’ dataset size. 5.2 Synthetic data is predictive of real data Recall that Figure 2 compares performance across an array of agents, assessed using our synthetic data generating process. Each agent’s hyperparameters were tuned by first enumerating a list of plausibly near-optimal choices and selecting the one that optimizes performance. Each of our real-world datasets can be viewed as generated by an environment sampled from an alternative data generating process. A natural question is whether performance on the synthetic data correlates with performance on the real-world data. The table of Figure 6a displays results pertaining to each of our agents. For each agent, performance for each candidate hyperparameter setting was assessed on synthetic and real data, and the correlation across these pairs is reported. 
The left and right columns restrict attention to datasets with low and high volumes of training data, respectively. If a correlation were equal to 1, the hyperparameter setting that optimizes agent performance on real data would be identical to that on synthetic data. It is reassuring that the correlations are high, reflecting a strong degree of alignment, with the exception of bbb in the low data regime, for which there appear to be pathological outcomes distorting performance for small training sets. The values in parentheses express 5th and 95th percentile confidence bounds, measured via the statistical bootstrap. Figure 6b plots performance on real versus synthetic data for the high data regime. Each data point represents one agent-hyperparameter combination. If the correlation were equal to 1, the combination that performs best on the synthetic data would also perform best on the real data. It is reassuring that the correlation is large, and the confidence interval between the 5th and 95th percentiles small. Agent-hyperparameter combinations that perform better on the testbed tend to perform better on real data as well.

5.3 Higher order predictions and informative priors

Our synthetic testbed can be helpful in driving innovations that carry over to real data. Section 5.2 indicated that performance on the Testbed is correlated with that on real-world data. We now repeat the observation from Figure 4 on real data: additive prior functions can significantly improve the accuracy of joint predictive distributions generated by ensembles. We show this by comparing the performance of ensemble+ with different forms of prior functions on benchmark datasets. We evaluate an ensemble with no prior function (none), a random MLP prior (MLP), and a random linear function of a 2-dimensional latent representation as the prior, trained via variational autoencoder (VAE) (Kingma & Welling, 2014). We provide full details in Appendix D.3. Figure 7 plots the improvement in NLL for the ensemble agent relative to a baseline MLP (lower is better), and breaks out the results for the MNIST and Iris datasets with τ ∈ {1, 100}. We can see that the results for Iris mirror our synthetic data almost exactly. The results for MNIST share some qualitative insights, but also reveal some important differences. For Iris with τ = 1, none of the methods outperform the MLP baseline, but for τ = 100 we see significant benefits to an additive MLP prior in the low data regime. For MNIST with τ = 1 we actually see benefits to ensembles, even without prior functions and even in the high data regime. This reveals some aspects of this real data that are not captured by our synthetic model, where we did not see this behaviour. For τ = 100 the random MLP prior gives a slight advantage, but the effect is much less pronounced. We hypothesize this is because, unlike the testbed, the MLP prior is not well-matched to the input image data. However, the VAE prior is able to provide significant benefit in the low data regime (see footnote 3). These benefits also carry over to Iris, even where random MLPs already provided significant value. Designing architectures that offer useful priors for learning agents is an exciting area for future work.
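For readers who want to reproduce the style of analysis behind Figure 6, the snippet below sketches one way to compute the correlation between synthetic-testbed and real-data scores across agent-hyperparameter pairs, together with 5th/95th percentile bootstrap bounds. The score arrays and the use of Pearson correlation are illustrative placeholders; the exact procedure used for the figure is described in Appendix D.2.

import numpy as np

rng = np.random.default_rng(0)

# placeholder scores: one entry per agent-hyperparameter combination
testbed_scores = rng.normal(size=50)
real_scores = 0.8 * testbed_scores + 0.2 * rng.normal(size=50)

def bootstrap_correlation(x, y, num_boot=10_000):
    point = np.corrcoef(x, y)[0, 1]
    boots = []
    for _ in range(num_boot):
        idx = rng.integers(0, len(x), size=len(x))       # resample pairs with replacement
        boots.append(np.corrcoef(x[idx], y[idx])[0, 1])
    lo, hi = np.percentile(boots, [5, 95])
    return point, lo, hi

print(bootstrap_correlation(testbed_scores, real_scores))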
Together with these tools, we provide a neural-network-based data generating process that facilitates research and iteration beyond a small set of challenge datasets. We package these together as The Neural Testbed, including a variety of baseline agent implementations. We believe that this represents an exciting and valuable new benchmark for Bayesian deep learning and beyond. We have already used this testbed to generate several new insights in this paper. We have shown many popular Bayesian deep learning approaches perform similarly in marginal predictions but quite differently in joint predictions. We reveal the importance of bootstrapping for parameter robustness, and also help reconcile the observed lack of improvement when tuned to specific datasets. We have shown that these insights from synthetic data can carry over to real datasets; that performance in these settings is correlated, that agents with similar marginal predictions can be distinguished by their joint predictions, and that suitable prior functions can play an important role in driving good performance. The results in this paper are in some sense preliminary. The grand challenge for Bayesian deep learning is to provide effective uncertainty estimates in large, rich datasets. While we have demonstrated benefits to predictive evaluation beyond marginals only in the ‘low data’ regime and small-scale problems, we believe that they will extend more broadly to situations where new test inputs appear novel relative to training data. As such, we believe our core insights should carry over to the related problems of nonstationarity and covariate shift that plague modern deep learning systems. As an agent takes on more and more complex tasks, it will continue to run into new and unfamiliar settings and uncertain outcomes; as such, effective predictive distributions will be more important than ever. 3We hypothesize that appropriately initialized convnet architectures may be able to leverage image structure as noted in prior work (Ulyanov et al., 2018). A Testbed Pseudocode We present the testbed pseudocode in this section. Specifically, Algorithm 2 is the pseudocode for our neural testbed, and Algorithm 3 and Algorithm 4 are two different approaches to estimate the likelihood of a test data τ -sample conditioned on an agent’s belief. Algorithm 3 is based on the standard Monte-Carlo estimation, while Algorithm 4 adopts a random partitioning approach. The presented testbed pseudocode works for any prior P(E ∈ ·) over the environment and any input distribution PX , including the ones described in Section 4.1. We also release full code and implementations at https://anonymous.4open.science/r/neural-testbed-B839. In addition to presenting the testbed pseudocode, we also discuss some core technical issues in the neural testbed design. Specifically, Appendix A.1 discusses how to estimate the likelihood of an agent’s belief distribution; Appendix A.2 discusses how to extend the testbed to agent evaluation on real data; finally, Appendix A.3 explains our choices of experiment parameters. Algorithm 2 Neural Testbed Require: the testbed requires the following inputs 1. prior distribution over the environment P(E ∈ ·), input distribution PX 2. agent fθ 3. number of training data T , test distribution order τ 4. number of sampled problems J , number of test data samples N 5. parameters for agent likelihood estimation, as is specified in Algorithm 3 and 4 for j = 1, 2, . . . , J do Step 1: sample environment and training data 1. 
sample environment E ∼ P(E ∈ ·) 2. sample T inputs X0, X1, . . . , XT−1 i.i.d. from PX 3. sample the training labels Y1, . . . , YT conditionally i.i.d. as Yt+1 ∼ P (Y ∈ ·|E , X = Xt) ∀t = 0, 1, . . . , T − 1 4. choose the training dataset as DT = {(Xt, Yt+1) , t = 0, . . . , T − 1} Step 2: train agent train agent fθT based on training dataset DT Step 3: compute likelihoods for n = 1, 2, . . . , N do 1. sample X(n)T , . . . , X (n) T+τ−1 i.i.d. from PX 2. generate Y (n)T+1, . . . , Y (n) T+τ conditionally independently as Y (n) t+1 ∼ P ( Y ∈ · ∣∣∣E , X = X(n)t ) ∀t = T, T + 1, . . . , T + τ − 1 3. compute the likelihood under the environment E as pj,n = P ( Y (n) T+1:T+τ ∣∣∣E , X(n)T :T+τ−1) = ∏T+τ−1t=T Pr(Y (n)t+1∣∣∣E , X(n)t ) 4. estimate the likelihood conditioned on the agent’s belief p̂j,n ≈ P ( ŶT+1:T+τ = Y (n)T+1:T+τ ∣∣∣θT , X(n)T :T+τ−1, Y (n)T+1:T+τ) , based on Algorithm 3 or 4 with test data τ -sample ( X (n) T :T+τ−1, Y (n) T+1:T+τ ) . return 1JN ∑J j=1 ∑N n=1 log (pj,n/p̂j,n) Algorithm 3 Monte Carlo Estimation of Likelihood of Agent’s Belief Require: 1. trained agent fθT and number of Monte Carlo samples M 2. test data τ -sample (XT :T+τ−1, YT+1:T+τ ) Step 1: sample M models Ê1, . . . , ÊM conditionally i.i.d. from P ( Ê ∈ · ∣∣∣fθT ) Step 2: estimate p̂ as p̂ = 1 M M∑ m=1 P ( ŶT+1:T+τ = YT+1:T+τ ∣∣∣Êm, XT :T+τ−1, YT+1:T+τ) return p̂ Algorithm 4 Estimation of Likelihood of Agent’s Belief via Random Partitioning Require: 1. trained agent fθT 2. number of Monte Carlo samples M 3. number of hyperplanes d 4. test data τ -sample (XT :T+τ−1, YT+1:T+τ ) Step 1: sample M models Ê1, . . . , ÊM conditionally i.i.d. from P(Ê ∈ ·|fθT ); for each model m = 1, . . . ,M , class k, and t = T, . . . , T + τ − 1, define pm,t,k = P(Ŷ (m)t+1 = k| Êm, Xt), and `m,t,k = Φ−1 (pm,t,k), where Φ(·) is the CDF of the standard normal function. For each model m, define a vector `m = [`m,T,1, `m,T,2, . . . , `m,T+τ−1,K ] ∈ <Kτ Step 2: sample a d × (Kτ) matrix A and a d-dimensional vector b, with each element/component sampled i.i.d. from N(0, 1). For each m = 1, . . . ,M , compute ψm = 1 [A`m + b ≥ 0] ∈ {0, 1}d. Step 3: partition the sampled models, with each cell indexed by ψ ∈ {0, 1}d and defined by Mψ = {m : ψm = ψ} and assign a probability to each cell: qψ = |Mψ| M Step 4: ∀ψ ∈ {0, 1}d and ∀t = T, T + 1, . . . , T + τ − 1, estimate the probability of predicting Ŷt+1 = k conditioned on the cell: pψ,t,k = { 1 |Mψ| ∑ m∈Mψ pm,t,k if |Mψ| > 0 1 if |Mψ| = 0 Step 5: estimate Pr(Ŷt+1:T+τ = Yt+1:T+τ |θT , Xt:T+τ−1, Yt+1:T+τ ) as p̂ = ∑ ψ∈{0,1}d qψ T+τ−1∏ t=T pψ,t,Yt+1 return p̂ A.1 Estimating Likelihood of Agent’s Belief Distribution We have presented two algorithms to estimate the likelihood of a test data τ -sample conditioned on a trained agent: Algorithm 3 is based on the standard Monte Carlo estimation, while Algorithm 4 adopts an approach combining random partitioning and Monte Carlo estimation. In this subsection, we briefly discuss the pros and cons between these two algorithms, and provide some general guidelines on how to choose between them. Algorithm 3 produces unbiased estimates of the likelihoods, which is usually accurate when τ is small (e.g. for τ ≤ 10). However, maintaining accuracy might require the number of samples M to grow exponentially with τ . The following is an illustrative example. Example 1 (Uniform belief over deterministic models): Consider a scenario where the number of class labels is K = 2. 
We say a model Ê is deterministic if, for any feature vector X_t, P(Ŷ_{t+1} = 1 | Ê, X_t) ∈ {0, 1}. Obviously, for any test data τ-sample (X_{T:T+τ−1}, Y_{T+1:T+τ}) with Y_{T+1:T+τ} ∈ {0, 1}^τ, under a deterministic model Ê we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê, X_{T:T+τ−1}, Y_{T+1:T+τ}) ∈ {0, 1}. When restricted to the inputs X_{T:T+τ−1}, there are 2^τ distinguishable deterministic models. If the agent's belief distribution is uniform over these 2^τ distinguishable deterministic models, then for any Y_{T+1:T+τ} ∈ {0, 1}^τ, the likelihood of the agent's belief distribution is P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | θ_T, X_{T:T+τ−1}, Y_{T+1:T+τ}) = 2^{−τ}. Now let's consider Algorithm 3. When a model Ê_m is sampled from the agent's belief distribution, with probability 2^{−τ} we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1}, Y_{T+1:T+τ}) = 1, and with probability 1 − 2^{−τ} we have P(Ŷ_{T+1:T+τ} = Y_{T+1:T+τ} | Ê_m, X_{T:T+τ−1}, Y_{T+1:T+τ}) = 0. Consequently, in expectation, we need the number of Monte Carlo samples M = Ω(2^τ) to ensure that the estimate p̂ returned by Algorithm 3 is non-zero.

To overcome this challenge, we also propose a novel approach to estimate the likelihood of the agent's belief via a combination of randomized partitioning and Monte Carlo simulation, as presented in Algorithm 4. This approach proceeds as follows. First, M models are sampled from the agent's belief distribution. For each sampled model, each test data input X_t, and each class label k, a predictive probability p_{m,t,k} and its probit ℓ_{m,t,k} = Φ^{−1}(p_{m,t,k}) are computed, where Φ(·) is the CDF of the standard normal distribution. For each sampled model, we also stack its probits into a probit vector ℓ_m ∈ ℝ^{Kτ}. Then, d random hyperplanes are sampled and used to partition ℝ^{Kτ} into 2^d cells. Stacked probit vectors place models in cells. Predictive distributions of models in each cell are averaged, and the likelihood is calculated based on these averages, with each cell weighted according to the number of models it contains. The Neural Testbed applies Algorithm 4 with 2^d ≪ M. Hence, some cells are assigned many models. We conjecture that, under suitable regularity conditions, models assigned to the same cell tend to generate similar predictions. If this is the case, this algorithm produces accurate estimates even when τ is large. We leave a formal analysis to future work.

Finally, we briefly discuss how to choose between Algorithm 3 and Algorithm 4. As a rule of thumb, we recommend choosing Algorithm 3 for τ < 10 and Algorithm 4, with the number of hyperplanes d between 5 and 10, for τ ≥ 10.

A.2 Agent Evaluation on Real Data

Algorithm 2 (and its simplified version, Algorithm 1) is developed for a synthetic data generating process. We now discuss how to extend it to agent evaluation on real data. Consider a scenario with J real datasets, where each dataset is further partitioned into a training dataset and a test dataset. The main difference between this scenario and a synthetic data generating process is that we cannot compute the likelihood of the environment for real data. Thus, we compute the cross-entropy loss instead (see Equation 1). The computational approach is similar to Algorithm 1: for each real dataset, we use its training dataset to train an agent. Then, we sample N test data τ-samples from the test dataset, and estimate the likelihoods of the agent's belief distribution. The estimate of the cross-entropy loss is taken to be the sample mean of the negative log-likelihoods.
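To make the structure of Algorithm 4 concrete, here is a schematic re-implementation that operates on an array of per-model class probabilities. It is a sketch written against the description above, with made-up inputs and scipy used for the inverse normal CDF; it is not the reference code released with the Testbed.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def partition_likelihood(probs, labels, d=7):
    # probs: (M, tau, K) per-model class probabilities; labels: (tau,) observed test labels
    M, tau, K = probs.shape
    # Step 1: per-model probit vectors in R^{K*tau}
    probits = norm.ppf(np.clip(probs, 1e-6, 1 - 1e-6)).reshape(M, tau * K)
    # Step 2: d random hyperplanes induce up to 2^d cells
    A = rng.normal(size=(d, tau * K))
    b = rng.normal(size=d)
    codes = (probits @ A.T + b >= 0).astype(int)         # (M, d) binary cell codes
    keys = codes @ (1 << np.arange(d))                   # integer cell index per model
    # Steps 3-5: average predictions within each cell, weight cells by occupancy
    likelihood = 0.0
    for key in np.unique(keys):
        members = probs[keys == key]                     # (num_members, tau, K)
        cell_mean = members.mean(axis=0)                 # averaged predictive distribution
        cell_lik = np.prod(cell_mean[np.arange(tau), labels])
        likelihood += (len(members) / M) * cell_lik
    return likelihood

# made-up example: 1000 sampled models, tau = 100 test pairs, 2 classes
M, tau, K = 1000, 100, 2
probs = rng.dirichlet(np.ones(K), size=(M, tau))
labels = rng.integers(0, K, size=tau)
print(partition_likelihood(probs, labels))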
Note that when ranking agents, the cross-entropy loss and d^τ_KL will lead to the same order of agents, since these two losses differ by a constant independent of the agent (see Equation 1).

A.3 Choices of Experiment Parameters

To apply Algorithm 2, we need to specify an input distribution P_X and a prior distribution on the environment P(E ∈ ·). Recall from Section 4.1 that we consider binary classification problems with input dimension 2. We choose P_X = N(0, I), and we consider three environment priors distinguished by a temperature parameter that controls the signal-to-noise ratio (SNR) regime. We sweep over temperatures in {0.01, 0.1, 0.5}. The prior distribution P(E ∈ ·) is induced by a distribution over MLPs with 2 hidden layers and ReLU activation. The MLP is distributed according to standard Xavier initialization, except that biases in the first layer are drawn from N(0, 1/2). The MLP outputs two units, which are divided by the temperature parameter and passed through the softmax function to produce class probabilities. The implementation of this generative model is in our open source code under the path /generative/factories.py.

We now describe the other parameters we use in the Testbed. In Algorithm 2, we pick the order of predictive distributions τ ∈ {1, 100}, training dataset size T ∈ {1, 3, 10, 30, 100, 300, 1000}, number of sampled problems J = 10, and number of test data τ-samples N = 1000. We apply Algorithm 3 for evaluation of d^1_KL and Algorithm 4 for evaluation of d^100_KL. In both Algorithms 3 and 4, we sample M = 1000 models from the agent. In Algorithm 4, we set the number of hyperplanes d = 7. The specification of the testbed parameters is in our open source code under the path /leaderboard/sweep.py. On real datasets, we apply the same τ ∈ {1, 100}, N = 1000, and M = 1000. We set the number of hyperplanes d = 10 in Algorithm 4.

B Agents

In this section, we describe the benchmark agents in Section 3 and the choice of various hyperparameters used in the implementation of these agents. The list of agents includes MLP, ensemble, dropout, Bayes by backprop, stochastic gradient Langevin MCMC, ensemble+ and hypermodel. We also evaluated other agents such as KNN, random forest, and deep kernel, but the performance of these agents was worse than that of the other benchmark agents, so we chose not to include them in the comparison in Section 4. In each case, we attempt to match the "canonical" implementation. The complete implementation of these agents, including the hyperparameter sweeps used for the Testbed, is available at https://anonymous.4open.science/r/neural-testbed-B839. We make use of the Epistemic Neural Networks notation from (Osband et al., 2021) in our code. We set the default hyperparameters of each agent to be the ones that minimize the aggregated KL score d^agg_KL = d^1_KL + d^100_KL/100.

B.1 MLP

The mlp agent learns a 2-layer MLP with 50 hidden units in each layer by minimizing the cross-entropy loss with L2 weight regularization. The L2 weight decay scale is chosen to be either λ·(1/T) or λ·(d√β/T), where d is the input dimension, β is the temperature of the generative process, and T is the size of the training dataset. We sweep over λ ∈ {10^−4, 10^−3, 10^−2, 10^−1, 1, 10, 100}. We implement the MLP agent as a special case of a deep ensemble (B.2). The implementation and hyperparameter sweeps for the mlp agent can be found in our open source code, as a special case of the ensemble agent, under the path /agents/factories/ensemble.py.
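To show how the per-problem weight-decay scaling in B.1 can be wired into a training loop, here is a minimal JAX-style sketch of an MLP baseline whose L2 penalty is scaled as λ·d√β/T. The layer sizes, optimizer, learning rate, and step count are simplified placeholders; the released agent under /agents/factories/ensemble.py differs in detail.

import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 50, 50, 2)):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) * jnp.sqrt(2.0 / (m + n)), jnp.zeros(n)))
    return params

def logits_fn(params, x):
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def loss_fn(params, x, y, weight_scale):
    logits = logits_fn(params, x)
    nll = -jnp.mean(jax.nn.log_softmax(logits)[jnp.arange(len(y)), y])
    l2 = sum(jnp.sum(w ** 2) for w, _ in params)
    return nll + weight_scale * l2

def train_mlp(x, y, temperature, lam=1.0, steps=1000, lr=1e-3, seed=0):
    T, d = x.shape
    weight_scale = lam * d * jnp.sqrt(temperature) / T   # lambda * d * sqrt(beta) / T
    params = init_params(jax.random.PRNGKey(seed))
    grad_fn = jax.jit(jax.grad(loss_fn))
    for _ in range(steps):
        grads = grad_fn(params, x, y, weight_scale)
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params

# hypothetical usage on a sampled testbed problem
x = jax.random.normal(jax.random.PRNGKey(1), (100, 2))
y = (x[:, 0] > 0).astype(jnp.int32)
params = train_mlp(x, y, temperature=0.1)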
B.2 Ensemble We implement the basic “deep ensembles” approach for posterior approximation (Lakshminarayanan et al., 2017). The ensemble agent learns an ensemble of MLPs by minimizing the cross-entropy loss with L2 weight regularization. The only difference between the ensemble members is their independently initialized network weights. We chose the L2 weight scale to be either λ 1MT or λ d √ β MT , where M is the ensemble size, d is the input dimension, β is the temperature of the generative process, and T is the size of the training dataset. We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. We find that larger ensembles work better, but this effect is within margin of error after 10 elements. The implementation and hyperparameter sweeps for the ensemble agent can be found in our open source code under the path /agents/factories/ensemble.py. B.3 Dropout We follow Gal & Ghahramani (2016) to build a droput agent for posterior approximation. The agent applies dropout on each layer of a fully connected MLP with ReLU activation and optimizes the network using the cross-entropy loss combined with L2 weight decay. The L2 weight decay scale is chosen to be either l 2 2T (1− pdrop) or d √ βl T where pdrop is the dropping probability, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We sweep over dropout rate pdrop ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, length scale (used for L2 weight decay) l ∈ {0.01, 0.1, 0.3, 1, 3, 10}, number of neural network layers ∈ {2, 3}, and hidden layer size ∈ {50, 100}. The implementation and hyperparameter sweeps for the dropout agent can be found in our open source code under the path /agents/factories/dropout.py. B.4 Bayes-by-backprop We follow Blundell et al. (2015) to build a bbb agent for posterior approximation. We consider a scale mixture of two zero-mean Gaussian densities as the prior. The Gaussian densities have standard deviations σ1 and σ2, and they are mixed with probabilities p and 1− p, respectively. We sweep over σ1 ∈ {1, 2, 4}, σ2 ∈ {0.25, 0.5, 0.75}, p ∈ {0, 0.25, 0.5, 0.75, 1}, learning rate ∈ {10−3, 3× 10−3}, number of training steps ∈ {500, 1000, 10000}, number of neural network layers ∈ {2, 3}, hidden layer size ∈ {50, 100}, and the ratio of the complexity cost to the likelihood cost ∈ {1, d √ β}, where d is the input dimension and β is the temperature of the data generating process. The implementation and hyperparameter sweeps for the bbb agent can be found in our open source code under the path /agents/factories/bbb.py. B.5 Stochastic gradient Langevin dynamics We follow Welling & Teh (2011) to implement a sgmcmc agent using stochastic gradient Langevin dynamics (SGLD). We consider two versions of SGLD, one with momentum and other without the momentum. We consider independent Gaussian prior on the neural network parameters where the prior variance is set to be σ2 = λ T dβ , where λ is a hyperparameter that is swept over {0.01, 0.1, 0.5, 1}, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We consider a constant learning rate that is swept over {10−5, 5× 10−5, 10−4, 5× 10−4, 10−3, 5 × 10−3, 10−2}. For SGLD with momentum, the momentum decay term is always set to be 0.9. The number of training batches is 5 × 105 with burn-in time of 105 training batches. 
We save a model every 1000 steps after the burn-in time and use these models as an ensemble during the evaluation. The implementation and hyperparameter sweeps for the sgmcmc agent can be found in our open source code under the path /agents/ factories/sgmcmc.py. B.6 Ensemble+ We implement the ensemble+ agent using deep ensembles with randomized prior functions (Osband et al., 2018) and bootstrap sampling (Osband & Van Roy, 2015). Similar to the vanilla ensemble agent in Section B.2, we consider L2 weight scale to be either λ 1MT or λ d √ β MT . We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. The randomized prior functions are sampled exactly from the data generating process, and we sweep over prior scaling ∈ {0, √ β, 1}. In addition, we sweep over bootstrap type (none, exponential, bernoulli). We find that the addition of randomized prior functions is crucial for improvement in performance over vanilla deep ensembles in terms of the quality of joint predictions. We also find that bootstrap sampling improves agent robustness, although the advantage is less apparent when one is allowed to tune the L2 weight decay for each task (see Figure 3). The implementation and hyperparameter sweeps for the ensemble+ agent can be found in our open source code under the path /agents/factories/ensemble_plus.py. B.7 Hypermodel We follow Dwaracherla et al. (2020) to build a hypermodel agent for posterior approximation. We consider a linear hypermodel over a 2-layer MLP base model. We sweep over index dimension ∈ {1, 3, 5, 7}. The L2 weight decay is chosen to be either λ 1T or λ d √ β T with λ ∈ {0.1, 0.3, 1, 3, 10}, where d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We chose three different bootstrapping methods of none, exponential, bernoulli. We use an additive prior which is a linear hypermodel prior over an MLP base model, which is similar to the generating process, with number of hidden layers in {1, 2}, 10 hidden units in each layer, and prior scale from {0, √ β, 1}. The implementation and hyperparameter sweeps for the hypermodel agent can be found in our open source code under the path /agents/factories/hypermodel.py. B.8 Non-parametric classifiers K-nearest neighbors (k-NN) (Cover & Hart, 1967) and random forest classifiers (Friedman, 2017) are simple and cheap off-the-shelf non-parametric baselines (Murphy, 2012; Pedregosa et al., 2011). The ‘uncertainty’ in these classifiers arises merely from the fact that they produce distributions over the labels and as such we do not expect them to perform well relative to more principled approaches. Moreover, these methods have no capacity to model dτKL for τ > 1. For the knn agent we swept over the number of neighbors k ∈ {1, 5, 10, 30, 50, 100} and the weighting of the contribution of each neighbor as either uniform or based on distance. For the random forest agent we swept over the number of trees in the forest {10, 100, 1000}, and the splitting criterion which was either the Gini impurity coefficient or the information gain. To prevent infinite values in the KL we truncate the probabilities produced by these classifiers to be in the interval [0.01, 0.99]. The implementation and hyperparameter sweeps for the knn and random forest agents can be found in our open source code under the paths /agents/factories/knn.py and /agents/factories/random_forest.py. 
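A compact way to see the ensemble+ construction described in B.6 above is as a sum of a trainable network and a fixed, randomly initialized prior network scaled by a prior scale. The sketch below is a schematic version of that idea; the architectures, the prior scale, and the omitted bootstrapped training loop are placeholders rather than the released agent.

import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 50, 50, 2)):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) * jnp.sqrt(2.0 / (m + n)), jnp.zeros(n)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def make_ensemble_plus(key, num_members=10):
    members = []
    for k in jax.random.split(key, num_members):
        k_train, k_prior = jax.random.split(k)
        trainable = init_mlp(k_train)   # updated by gradient descent on bootstrapped data
        prior = init_mlp(k_prior)       # frozen randomized prior function
        members.append((trainable, prior))
    return members

def member_logits(trainable, prior, x, prior_scale=1.0):
    # prediction = trainable network + fixed additive prior (Osband et al., 2018)
    return mlp(trainable, x) + prior_scale * mlp(prior, x)

members = make_ensemble_plus(jax.random.PRNGKey(0))
x = jax.random.normal(jax.random.PRNGKey(1), (5, 2))
samples = jnp.stack([member_logits(t, p, x) for t, p in members])
print(samples.shape)   # (num_members, batch, classes)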
B.9 Gaussian process with learned kernel

A neural network takes input X_t ∈ X and produces output Z_{t+1} = Wφ_θ(X_t) + b ∈ ℝ^K, where W ∈ ℝ^{K×m} is a matrix, b ∈ ℝ^K is a bias vector, and φ_θ : X → ℝ^m is the output of the penultimate layer of the neural network. In the case of classification, the output Z_{t+1} corresponds to the logits over the class labels, i.e., Ŷ_{t+1} ∝ exp(Z_{t+1}). The neural network should learn a function that maps the input into a space where the classes are linearly distinguishable. In other words, the mapping that the neural network is learning can be considered a form of kernel (Schölkopf & Smola, 2018), where the kernel function k : X × X → ℝ is simply k(X, X′) = φ_θ(X)^⊤ φ_θ(X′). With this in mind, we can take a trained neural network and consider the learned mapping to be the kernel in a Gaussian process (GP) (Rasmussen, 2003), from which we can obtain approximate uncertainty estimates.

Concretely, let Φ_{0:T−1} ∈ ℝ^{T×m} be the matrix corresponding to the φ_θ(X_t), t = 0, . . . , T−1, vectors stacked row-wise, and let Φ_{T:T+τ−1} ∈ ℝ^{τ×m} denote the same quantity for the test set. Fix index i ∈ {0, . . . , K−1} to be a particular class index. A GP models the joint distribution over the dataset to be a multivariate Gaussian, i.e.,
\begin{bmatrix} Z^{(i)}_{1:T} \\ Z^{(i)}_{T+1:T+\tau} \end{bmatrix} \sim N\left(\begin{bmatrix} \mu^{(i)}_{1:T} \\ \mu^{(i)}_{T+1:T+\tau} \end{bmatrix}, \begin{bmatrix} \sigma^2 I + \Phi_{0:T-1}\Phi_{0:T-1}^\top & \Phi_{0:T-1}\Phi_{T:T+\tau-1}^\top \\ \Phi_{T:T+\tau-1}\Phi_{0:T-1}^\top & \Phi_{T:T+\tau-1}\Phi_{T:T+\tau-1}^\top \end{bmatrix}\right),
where σ > 0 models the noise in the training data measurement and μ^{(i)}_{1:T}, μ^{(i)}_{T+1:T+τ} are the means under the GP. The conditional distribution is given by
P\big(Z^{(i)}_{T+1:T+\tau} \mid Z^{(i)}_{1:T}, X_{0:T+\tau-1}\big) = N\big(\mu^{(i)}_{T+1:T+\tau|1:T}, \Sigma_{T+1:T+\tau|1:T}\big),
where
\Sigma_{T+1:T+\tau|1:T} = \Phi_{T:T+\tau-1}\Phi_{T:T+\tau-1}^\top - \Phi_{T:T+\tau-1}\Phi_{0:T-1}^\top\big(\sigma^2 I + \Phi_{0:T-1}\Phi_{0:T-1}^\top\big)^{-1}\Phi_{0:T-1}\Phi_{T:T+\tau-1}^\top,
and rather than use the GP to compute μ^{(i)}_{T+1:T+τ|1:T} (which would not be possible, since we do not observe the true logits) we just take it to be the output of the neural network when evaluated on the test dataset. The matrix being inverted in the expression for Σ_{T+1:T+τ|1:T} has dimension T × T, which may be quite large. We use the Sherman-Morrison-Woodbury identity (Woodbury, 1950) to rewrite it as
\Sigma_{T+1:T+\tau|1:T} = \Phi_{T:T+\tau-1}\big(I - \Phi_{0:T-1}^\top(\sigma^2 I + \Phi_{0:T-1}\Phi_{0:T-1}^\top)^{-1}\Phi_{0:T-1}\big)\Phi_{T:T+\tau-1}^\top = \sigma^2\,\Phi_{T:T+\tau-1}\big(\sigma^2 I + \Phi_{0:T-1}^\top\Phi_{0:T-1}\big)^{-1}\Phi_{T:T+\tau-1}^\top,
which instead involves the inverse of an m × m matrix, which may be much smaller. If we perform a Cholesky factorization of the positive definite matrix (σ^2 I + Φ_{0:T−1}^⊤Φ_{0:T−1}) = LL^⊤, then samples for all logits simultaneously can be drawn by first sampling ζ ∈ ℝ^{m×K}, with each entry drawn i.i.d. from N(0, 1), and then forming Ŷ_{T+1:T+τ} ∝ exp(μ_{T+1:T+τ|1:T} + σΦ_{T:T+τ−1}L^{−⊤}ζ). The implementation and hyperparameter sweeps for the deep kernel agent can be found in our open source code under the path /agents/factories/deep_kernel.py.

B.10 Other agents

In our paper we have made a concerted effort to include representative and canonical agents across different families of Bayesian deep learning and adjacent research. In addition to these implementations, we performed extensive tuning to make sure that each agent was given a fair shot. However, with the proliferation of research in this area, it was not possible for us to evaluate all competing approaches. We hope that, by opensourcing the Neural Testbed, we can allow researchers in the field to easily assess and compare their agents to these baselines. For example, we highlight a few recent pieces of research that might be interesting to evaluate in our setting. Of course, there are many more methods to compare and benchmark.
We leave this open as an exciting area for future research.
• Neural Tangent Kernel Prior Functions (He et al., 2020). Proposes a specific type of prior function in ensemble+ inspired by connections to the neural tangent kernel.
• Functional Variational Bayesian Neural Networks (Sun et al., 2019). Applies variational inference directly to the function outputs, rather than to weights as in bbb.
• Variational normalizing flows (Rezende & Mohamed, 2015). Applies variational inference over a more expressive family than bbb.
• No U-Turn Sampler (Hoffman et al., 2014). Another approach to sgmcmc that attempts to compute the posterior directly, though computational costs can grow large.

C Testbed results

In this section, we provide the complete results of the performance of benchmark agents on the Testbed, broken down by the temperature setting, which controls the SNR, and the size of the training dataset. We select the best-performing agent within each agent family and plot d^1_KL and d^100_KL with the performance of an MLP agent as a reference. We also provide a plot comparing the training time of different agents.

C.1 Performance breakdown

Figures 8 and 9 show the KL estimates evaluated on τ = 1 and τ = 100, respectively. For each agent, for each SNR regime, and for each number of training points, we plot the average KL estimate from the Testbed. In each plot, we include the "baseline" mlp agent as a black dashed line to allow for easy comparison across agents. A detailed description of these benchmark agents can be found in Appendix B.

C.2 Training time

Figure 10 shows a plot comparing the d^100_KL and training time of different agents, normalized by that of an MLP. We can see that the sgmcmc agent has the best performance, but at the cost of more training time (computation). Both the ensemble+ and hypermodel agents have performance similar to sgmcmc with lower training time. We trained our agents on CPU-only systems.

D Real data

This section provides supplementary details regarding the experiments in Section 5. As before, we include full implementation and source code at https://anonymous.4open.science/r/neural-testbed-B839.

D.1 Datasets

Table 2 outlines the datasets included in our experiments. Unlike the synthetic testbed, which evaluates agents over a range of SNR regimes, these datasets generally all lie in the high SNR regime. We can see this because the top-performing agents in the literature are able to obtain high levels of classification accuracy on held-out data, something that would be impossible if the underlying system had high levels of noise. Each of these datasets is provided with a canonical training/test set of specific sizes. In order to examine performance in different data regimes, we augment the default settings of Table 2 by also examining the performance of agents on these datasets with reduced training data. In a way that mirrors the testbed sweep of Section 4.1, we also look at settings where the training data is restricted to T = 1, 10, 100, 1000, 10000 data points, respectively.

D.2 Correlation

Figure 6 breaks down the correlation in performance between testbeds and real data. For the purposes of the table in Figure 6a, we say that T = 1, 10 is the 'low data' regime, and the maximum training dataset size is the 'high data' regime. Our results show that, for each agent and each data regime, the performance of hyperparameters is correlated across settings. One concern might be that, while performance on real data overall is highly correlated, this might not necessarily be the case for any individual dataset.
Or, alternatively, that this correlation is driven by extremely strong relationships in one dataset that are not present in others. Figure 11 shows that this is not the case. In fact, for each of the datasets considered we have strong and positive correlation over agent-hyperparameter pairs. This gives us confidence that the results of Figure 6b are robust not only to choice of agent, but also to some reasonable choice of datasets. D.3 Prior functions We consider two different forms of prior functions for ensemble+: a random MLP of the input data and a random linear function of a 2-dimensional latent trained via variational autoencoder (VAE) (Kingma & Welling, 2014). For the MLP prior, we tried both linear (MLP with no hidden layer) and MLP with hidden layers, and observed that the linear prior works better. To train the 2-dimensional latent, we considered a 2-layer (128, 64) MLP for the Gaussian encoder and a 2-layer (64, 128) MLP for the Bernoulli decoder. We trained the VAE using all unsupervised training data available for each dataset. After training the VAE for 10,000 steps, we used the output mean of the Gaussian encoder as the latent.
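As a rough sketch of this VAE-based prior, one can take the mean output of the trained encoder as a 2-dimensional latent and feed it through a fixed random linear map to produce the additive prior logits. In the snippet below the encoder weights are an untrained stand-in for the VAE encoder that would be fit on the unsupervised training inputs, and all sizes (input dimension, number of classes, prior scale) are placeholders.

import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) * jnp.sqrt(2.0 / (m + n)), jnp.zeros(n)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

# stand-in for the trained VAE encoder mean (2-layer (128, 64) MLP onto a 2-d latent);
# in practice these weights come from fitting the VAE on all unsupervised training inputs
encoder_params = init_mlp(jax.random.PRNGKey(0), sizes=(784, 128, 64, 2))

def make_vae_prior(key, num_classes=10, prior_scale=1.0):
    # fixed random linear map from the 2-d latent mean to class logits
    w = jax.random.normal(key, (2, num_classes))
    def prior_fn(x):
        latent_mean = mlp(encoder_params, x)
        return prior_scale * latent_mean @ w
    return prior_fn

prior_fn = make_vae_prior(jax.random.PRNGKey(1))
x = jax.random.normal(jax.random.PRNGKey(2), (4, 784))
print(prior_fn(x).shape)   # (4, 10): added to each ensemble member's logits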
1. What are the strengths and weaknesses of the proposed benchmark for approximate inference methods?
2. How does the reviewer assess the significance and contribution of the paper regarding its focus on joint predictions and comparison to existing benchmarks?
3. What are the concerns regarding the scale and synthetic nature of the benchmark, and how might they be addressed?
4. What additional downstream tasks could be included in the benchmark to evaluate the quality of joint predictions and marginal uncertainties?
5. How might the proposed benchmark be improved by emphasizing comparisons on downstream tasks and incorporating a variety of evaluations?
6. What reservations does the reviewer have about the focus on approximating true Bayesian posteriors, and how might these be addressed?
7. What additional explanation or motivation would be helpful for Algorithm 4 in Appendix A.1, and why is it important to provide this information?
8. How might experimental comparisons with direct Monte Carlo estimates and documentation of hyperparameter determination improve confidence in the proposed evaluation metrics?
Summary Of The Paper Review
Summary Of The Paper

The paper proposes a new benchmark for approximate inference methods, with emphasis on comparing the quality of joint predictions instead of marginals. The proposed testbed consists of synthetic test functions generated by simple MLPs, and the authors show that results on the synthetic benchmark are well correlated with results on small-scale real-world datasets as well.

Review

I appreciate that the benchmark emphasizes joint predictions as an important component in evaluating uncertainty estimates. I agree that these joint predictions are very important for certain applications of uncertainty estimation, especially in sequential decision-making settings. However, I am not convinced this benchmark will be a significant and useful contribution to the community in its current state, for the reasons below.

Existing benchmarks: Wang et al. 2021 [1] already emphasize the differences between joint and marginal uncertainties and propose various ways of evaluating joint predictions, including a downstream transductive active learning task. Interestingly, Wang et al. find that directly evaluating joint likelihoods gave little more information than simply marginals, while this paper concludes that different approximate inference methods can produce very different joint likelihoods with similar marginal performance. I suspect the difference here is in the sequence length τ used to evaluate joint predictions; Wang et al. appear to only evaluate over 5 data points, while this paper uses 100 to show major performance differences. It would be good to include additional results at different values of τ between 1 and 100 to see how the difference between joint and marginal predictions grows. Another benchmark proposed by Wilson et al. 2021 [2] also focuses on evaluating the faithfulness of Bayesian posterior approximations, though they only evaluate marginal uncertainties. Nonetheless, I find their benchmark has several advantages: they study much larger-scale networks (albeit this implies much higher computational cost) and also consider real-world input distributions instead of only synthetic ones.

Regarding likelihoods as the evaluation metric: While (joint) likelihoods are certainly a very natural choice for comparing predictions, what ultimately matters is how useful the predictions are in making downstream decisions. In addition to just measuring likelihoods, I think the benchmark would be a much stronger contribution if it also emphasized comparisons on downstream tasks, which can be synthetically generated similarly to the existing settings. Examples of possible downstream tasks that really test the quality of joint predictions could include active learning, contextual bandit, and Bayesian optimization tasks, plugging different posterior approximations into standard representative algorithms for each task (though Wang et al. 2021 already do propose transductive active learning as a benchmark task for joint uncertainties). Another set of potential tasks, focusing more on the quality of marginal predictions, could include tasks like selective classification, possibly in conjunction with different levels of synthetic covariate shift. Being able to accurately gauge the (relative) trustworthiness of predictions (especially with distribution shift) is certainly an important task for uncertainty estimation, even if it does not rely on the quality of joint predictions.
Within the scope of synthetically generated tasks, I think the inclusion of additional evaluations on a variety of downstream tasks into a single consolidated benchmark would be a much stronger contribution, and would emphasize how different applications of uncertainty estimation can have different needs and be more amenable to different algorithms.

Regarding trying to approximate true Bayesian posteriors: Another hesitation I have is that quality is being measured against a ground truth consisting of random MLPs, while it is hard to say whether these priors actually perform well when faced with real-world data and tasks. For example, Izmailov et al. [3] find that (in larger-scale settings) exhaustively running HMC, with presumably better approximations of the true posterior, actually performs worse than approximate methods like ensembling when faced with distribution shift, suggesting that simply trying to faithfully approximate exact inference is not necessarily the right (or at least the only) goal to consider.

Scale and synthetic nature of the benchmark: I also have reservations about the scale of the proposed benchmark, as it is focused on a small data regime with very simple models (MLPs). While the authors state that the synthetic results are predictive of real-world data, the real-world datasets evaluated are also quite toy in nature, and extending insights to larger-scale models and datasets as in [2] is quite important.

Regarding likelihood evaluation in Algo 4: I believe it would be helpful to provide more explanation of Algorithm 4 in Appendix A.1. Currently, there is little motivation for the different steps in the algorithm, and I would greatly appreciate it if the authors could summarize the relevant concepts from high-dimensional geometry as applied here. Given that the paper is proposing a new set of evaluations, it would be especially important to make sure that users have confidence that the metrics used are meaningful. If no formal analysis of the accuracy of the approximation is feasible, there should at least be experimental comparisons with the direct Monte Carlo estimate (run with exhaustively many samples to estimate the ground truth) to compare the sample complexities of the proposed estimate at different sequence lengths, as well as documentation of how hyperparameters of the estimator were or should be determined.

Citations:
[1] Wang, Chaoqi, Shengyang Sun, and Roger Grosse. "Beyond Marginal Uncertainty: How Accurately can Bayesian Regression Models Estimate Posterior Predictive Correlations?" International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
[2] Wilson, Andrew Gordon, Pavel Izmailov, Matthew D. Hoffman, Yarin Gal, Yingzhen Li, Melanie F. Pradier, Sharad Vikram, Andrew Foong, Sanae Lotfi, and Sebastian Farquhar. "Evaluating Approximate Inference in Bayesian Deep Learning." https://izmailovpavel.github.io/neurips_bdl_competition/.
[3] Izmailov, Pavel, et al. "What Are Bayesian Neural Network Posteriors Really Like?" arXiv preprint arXiv:2104.14421 (2021).
ICLR
Title Evaluating Predictive Distributions: Does Bayesian Deep Learning Work? Abstract Posterior predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions. Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs. Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community. We benchmark several approaches to uncertainty estimation using a neural-network-based data generating process. Our results reveal the importance of evaluation beyond marginal predictions. Further, they reconcile sources of confusion in the field, such as why Bayesian deep learning approaches that generate accurate marginal predictions perform poorly in sequential decision tasks, how incorporating priors can be helpful, and what roles epistemic versus aleatoric uncertainty play when evaluating performance. We also present experiments on real-world challenge datasets, which show a high correlation with testbed results, and that the importance of evaluating joint predictive distributions carries over to real data. As part of this effort, we opensource The Neural Testbed, including all implementations from this paper. 1 Introduction Deep learning has emerged as the state-of-the-art approach across a number of application domains in which agents learn from large amounts of data (LeCun et al., 2015). Neural networks are increasingly used not only to predict outcomes but also to inform decisions. Common approaches in deep learning produce point estimates but not uncertainty estimates, which are often required for effective decision-making. Bayesian deep learning extends the methodology to produce such uncertainty estimates (MacKay, 1992; Neal, 2012). We consider agents that are trained on data pairs ((Xt, Yt+1) : t = 0, 1, . . . , T − 1) and subsequently generate predictions given new inputs. When presented with an input XT , a Bayesian neural network can generate a predictive distribution of the outcome YT+1 that is yet to be observed. This distribution characterizes the agent’s uncertainty about YT+1. We refer to such a prediction as marginal to distinguish it from a joint predictive distribution over a list (YT+1, . . . , YT+τ ) of prospective outcomes corresponding to inputs (XT , . . . , XT+τ−1). The importance of uncertainty estimation has motivated a great deal of research over recent years (Kendall & Gal, 2017). This research has produced a variety of agents that learn to generate predictive distributions. With this proliferation of alternatives, it is increasingly important to analyze and compare their performance (Filos et al., 2019; Nado et al., 2021). In this paper, we introduce new tools for systematic evaluation of such agents. Our tools overcome several limitations faced by previous methods of evaluation. First, by focusing purely on predictive distributions, we allow for a unified treatment of approaches developed within the Bayesian neural network community and beyond. This sidesteps the Open source code available at https://anonymous.4open.science/r/neural-testbed-B839. question of whether any approach ‘is really Bayesian’ (Wilson & Izmailov, 2020). Second, our tools evaluate the quality of higher-order joint predictions (τ > 1). 
Until now, the Bayesian deep learning literature has focused almost exclusively on evaluating marginal predictions (Wang et al., 2021). Finally, we develop a neural-network-based data generating process for Bayesian deep learning that can be used to drive insight and algorithm development. Where research has focused on a small set of challenge datasets, this might introduce bias through overfitting via multiple iterations of algorithm development. We use these tools to compare hundreds of agent variants. Further, we show that performance on our synthetic data generating process data is highly correlated with performance on real-world challenge datasets. We opensource all code used in this paper as The Neural Testbed. Our results reconcile several sources of confusion in the field. One concerns why particular approaches developed by the Bayesian deep learning community, such as Bayes-by-backprop, dropout, and deep ensembles, perform poorly in sequential decision tasks despite faring well based on evaluation metrics of that community (Osband et al., 2018). Our results demonstrate that, while such methods produce accurate marginal predictions, they are no longer competitive when it comes to high-order joint predictions. Joint predictions play a critical role in sequential decision-making (Lu et al., 2021). Another puzzling issue is that state-of-the-art methods do not employ domain-specific priors. Whether Bayesian deep learning approaches should at all is a subject of controversy (Wenzel et al., 2020). We show that the benefits of domain-specific priors can be pronounced when evaluating high-order joint predictions, even where they are negligible for marginals. We also help to resolve a point of philosophical debate within the deep learning community: the importance of epistemic versus aleatoric uncertainty1. The strangeness of this distinction has even made its way into wider popular culture, as satirized in the XKCD comic of Figure 1 (Munroe, 2021). For a given parametric model, we can clearly distinguish parameter uncertainty from noise, or reducible from irreducible uncertainty. However, from the perspective of a learning agent, the choice of model is subjective; different models can lead to the same marginal predictions. Our formulation provides a clear and objective way to assess the quality of predictive distributions, without reliance on this subjective distinction between knowledge and chance. Crucially, we show that this can be judged via the quality of joint predictions, but that marginals are not sufficient. It is worth mentioning another notable contribution of this work. The quality of a predictive distribution is commonly assessed in terms of cross-entropy loss. While this measure is welldefined for both marginal and joint predictions, to the best of our knowledge, the literature has only addressed computation in the former case. For high-order joint predictions, the straightforward approach would require computing sums over exponentially many values. To render this computationally tractable, we developed a novel approximation algorithm that leverages a random partitioning operation and Monte Carlo simulation. While this approach is motivated by concepts from high-dimensional geometry (Kaski, 1998; Donoho, 2006), we leave its analysis as a topic for future theoretical research. 1Epistemic uncertainty relates to knowledge (ancient Greek episteme↔knowledge), as opposed to aleatoric uncertainty relating to chance (Latin alea↔dice) (Der Kiureghian & Ditlevsen, 2009). 
2 Evaluating predictive distributions

In this section, we introduce notation for the standard supervised learning framework we will consider (classification) as well as our evaluation metric (the KL-loss). We also explain how we estimate the KL-loss for high-order joint predictions where exact computation is infeasible, using random partitions and Monte Carlo simulation.

2.1 Kullback–Leibler loss

Consider a sequence of pairs ((X_t, Y_{t+1}) : t = 0, 1, 2, . . .), where each X_t is a feature vector and each Y_{t+1} is its target label. This sequence is i.i.d. conditioned on the environment E, which produces the data, and which we view as a latent random variable. We consider an agent that is uncertain about the environment and predicts class labels Y_{T+1:T+τ} ≡ (Y_{T+1}, . . . , Y_{T+τ}) given training data pairs D_T ≡ ((X_t, Y_{t+1}) : t = 0, 1, 2, . . . , T − 1) and unlabelled feature vectors X_{T:T+τ−1} ≡ (X_T, . . . , X_{T+τ−1}). From the agent's perspective, each feature vector X_t is generated i.i.d. from a fixed distribution P(X_t ∈ ·), and each class label Y_{t+1} is then drawn from P(Y_{t+1} ∈ ·|E, X_t). We describe the agent's predictions in terms of a generative model, parameterized by a vector θ_T that the agent learns from the training data D_T. For any inputs X_{T:T+τ−1}, θ_T determines a predictive distribution, which could be used to sample imagined outcomes Ŷ_{T+1:T+τ}. We define the τth-order expected KL-loss by
$$d^\tau_{\mathrm{KL}} = \mathbb{E}\Big[ d_{\mathrm{KL}}\Big( \underbrace{P(Y_{T+1:T+\tau} \in \cdot \mid \mathcal{E}, X_{T:T+\tau-1})}_{\text{environment likelihood}} \;\Big\|\; \underbrace{P(\hat{Y}_{T+1:T+\tau} \in \cdot \mid \theta_T, X_{T:T+\tau-1})}_{\text{agent likelihood}} \Big) \Big] \qquad (1)$$
$$= \underbrace{-\mathbb{E}\Big[ \log P\big(\hat{Y}_{T+1:T+\tau} = Y_{T+1:T+\tau} \mid \theta_T, X_{T:T+\tau-1}, Y_{T+1:T+\tau}\big) \Big]}_{\text{cross-entropy loss} \;\equiv\; \text{negative log-likelihood}} \; + \; C,$$
where C = E[log P(Y_{T+1:T+τ} | E, X_{T:T+τ−1})] is independent of θ_T. The expectation is taken over all random variables, including the environment E, the parameters θ_T, X_{T:T+τ−1}, and Y_{T+1:T+τ}. Note that d^τ_KL is equivalent to the widely used notion of cross-entropy loss, though offset by a quantity that is independent of θ_T (Kullback & Leibler, 1951). For τ > 1, d^τ_KL assesses joint rather than marginal predictions.

2.2 Marginal Versus Joint Predictions

Evaluating an agent's ability to estimate uncertainty on joint instead of marginal predictions can result in very different answers. We provide a simple example that illustrates the point. Suppose the data ((X_t, Y_{t+1}) : t = 0, 1, 2, . . .) is generated by repeated tosses of a possibly biased coin with unknown probability p of heads.² Let X_t = 0, to indicate that there is no input, and let each outcome Y_{t+1} be 0 or 1 to indicate tails or heads, respectively. Consider two agents that, without any training, predict outcomes. Agent 1 assumes p = 2/3 and models the outcome of each flip as pure chance. Agent 2 assumes that the coin is fully biased, meaning that p ∈ {0, 1}, but assigns probabilities 1/3 and 2/3 to p = 0 and p = 1, respectively. Let Ŷ^1_{t+1} and Ŷ^2_{t+1} denote the outcomes imagined by the two agents. Despite their differing assumptions, the two agents generate identical marginal predictive distributions: P(Ŷ^1_{t+1} = 0) = P(Ŷ^2_{t+1} = 0) = 1/3. On the other hand, joint predictions differ greatly for large τ: P(Ŷ^1_1 = 0, . . . , Ŷ^1_τ = 0) = (1/3)^τ ≪ 1/3 = P(Ŷ^2_1 = 0, . . . , Ŷ^2_τ = 0). We can say that agent 1 attributes all uncertainty to aleatoric sources and agent 2, epistemic sources (although, as Figure 1 alludes, there are many ways an agent can attribute sources of uncertainty).
Evaluating marginal predictions cannot distinguish between the two possibilities, though for a specific prior distribution over p, one agent could be right and the other wrong. One must evaluate joint predictions to make this distinction. 2We consider this coin as an illustrative model of more complex binary outcomes, such as whether a user will click on an ad, or whether a given mortgage will default on payments. When it comes to decision-making, this distinction can be critical (Lu et al., 2021). In a casino, under the first agent’s assumption, there is large upside and little risk on repeatedly betting on heads in the long run. However, if there is a 1/3 chance the coin will always land tails, as is the case in the second agent’s prediction, there is a ruinous risk to repeatedly betting heads. Evaluating joint predictions beyond marginals distinguishes these cases. 2.3 Computation of Kullback–Leibler loss In contexts we will consider, it is not possible to compute dτKL exactly. As such, we will approximate dτKL via Monte Carlo simulation. This section provides a high level overview of our approach, we push the full details to Appendix A. Algorithm 1 outlines a basic approach to estimating dτKL with respect to a synthetic data generating process. The algorithm samples a set of environments and a training dataset for each environment. For each of these pairs, the agent is re-initialized, trained, and then tested on N independent test data τ -samples. Note that each test data τ -sample includes τ data pairs. For each test data τ -sample, the likelihood of the environment is computed exactly, but that of the agent’s belief distribution is approximated. The estimate of dτKL is taken to be the sample mean of the log-likelihood-ratios (Algorithm 2). Algorithm 1 KL-Loss Computation 1: for j = 1, 2, . . . , J do 2: sample environment and training dataset, and train agent 3: for n = 1, 2, . . . , N do 4: sample a test data τ -sample with τ feature-label pairs 5: compute pj,n . likelihood of environment 6: compute p̂j,n . estimated likelihood of agent’s belief distribution 7: return 1JN ∑J j=1 ∑N n=1 log (pj,n/p̂j,n) . estimated log-likelihood-ratio While the likelihood of an environment can be efficiently computed, that of an agent’s belief distribution poses a computational challenge. One approach is to estimate this likelihood via Monte Carlo simulation (Algorithm 3). This produces unbiased estimates, which can be accurate when τ is small. However, maintaining accuracy requires the number of samples to grow exponentially with τ , as discussed in Appendix A.1. To overcome this challenge, we propose a novel approach that estimates the likelihood of the agent’s beliefs via a combination of randomized partitioning and Monte Carlo simulation (Algorithm 4) (Kaski, 1998). We conjecture that, under suitable regularity conditions, this novel approach produces accurate estimates even when τ is large, but leave a formal analysis to future work. Even though Algorithm 1 is developed for a synthetic data generating process, it is straightforward to extend it to evaluate agents on real data. We outline our approach to real data in Section 5.1, with full details in Appendix A.2. 3 Benchmark agents In this section we outline the baseline agents that we use to benchmark canonical approaches to uncertainty estimation in deep learning. Table 1 links to papers that introduce these agents, as well as the hyperparamters that we tuned to optimize their performance via gridsearch. 
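Before detailing individual agents, the estimators of Section 2.3 can be summarised in code. The sketch below is a minimal NumPy illustration of the plain Monte Carlo estimate of the agent's joint likelihood (the Algorithm 3 approach) and of the log-likelihood-ratio average returned by Algorithm 1; the interface of `model(x_test)` is an assumption for illustration, not the released API.

```python
import numpy as np

def estimate_agent_likelihood(posterior_samples, x_test, y_test):
    """Plain Monte Carlo estimate of P(Y_hat = y_test | theta_T, x_test).

    posterior_samples: sampled models from the agent's belief distribution; each
    model(x_test) is assumed to return class probabilities of shape [tau, num_classes].
    """
    likelihoods = []
    for model in posterior_samples:
        probs = model(x_test)
        # Joint likelihood of the realised labels under this sampled model.
        likelihoods.append(np.prod(probs[np.arange(len(y_test)), y_test]))
    return np.mean(likelihoods)

def kl_loss_estimate(env_likelihoods, agent_likelihoods):
    """Sample-mean log-likelihood ratio, as returned at the end of Algorithm 1."""
    env = np.asarray(env_likelihoods)
    agent = np.asarray(agent_likelihoods)
    return np.mean(np.log(env / agent))
```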
In each case, we attempt to match ‘canonical’ implementations, which we open source at https://anonymous.4open.science/r/neural-testbed-B839. In addition to these agent implementations, our opensource project contains all the evaluation code to reproduce the results of this paper. Our code is written in Python and makes use of Jax internally (Bradbury et al., 2018). However, our evaluation procedure is framework agnostic, and can equally be used with any Python package including Tensorflow, Pytorch or even SKlearn. Over the course of this paper, we have made extensive use of parallel computation to facilitate large hyperparameter sweeps over many problems. Nevertheless, the overall computational cost is relatively low by modern deep learning standards and relies only on standard CPU. For reference, evaluating the mlp agent across all the problems in our testbed and real data requires less than 3 CPU-hours. We view our opensource effort as one of the major contributions of this work. We provide clear and strong baselines, together with an objective and accessible method for assessing uncertainty estimates beyond marginal distributions. 4 The Neural Testbed In this section we introduce the Neural Testbed, a system for assessing and comparing agent performance. The Testbed implements synthetic data generating processes and streamlines the process of sampling data, training agents, and evaluating test performance by estimating KL-loss for marginal and high-order joint predictions. Since independent data can be generated for each execution, the Testbed can drive insight and multiple iterations of algorithm development without risk of overfitting to a fixed dataset. We begin by describing the simple generative model based around a random 2-layer MLP. We then apply this testbed to evaluate a comprehensive set of benchmark agents. 4.1 Synthetic data generating processes By data generating process, we do not mean only the conditional distribution of data pairs (Xt, Yt+1)|E but also the distribution of the environment E . The Testbed considers 2- dimensional inputs and binary classification problems, although the generating processes can be easily extended to any input dimension and number of classes. The Testbed offers three data generating processes distinguished by a “temperature” setting, which signifies the signal-to-noise ratio (SNR) regime of the generated data. The agent can be tuned separately for each setting. This reflects prior knowledge of whether the agent is operating in a high SNR regime such as image recognition or a low SNR regime such as weather forecasting. To generate a model, the Testbed samples a 2-hidden-layer ReLU MLP with 2 output units, which are scaled by 1/temperature and passed through a softmax function to produce class probabilities. The MLP is sampled according to standard Xavier initialization (Glorot & Bengio, 2010), with the exception that biases in the first layer are drawn from N(0, 12 ). The inputs (Xt : t = 0, 1, . . .) are drawn i.i.d. from N(0, I). The agent is provided with the data generating process as prior knowledge. In Section 2.1, we described KL-loss as a metric for evaluating performance of an agent. The Neural Testbed estimates KL-loss, with τ ∈ {1, 100}, for three temperature settings and several training dataset sizes. For each value of τ , the KL-losses are averaged to produce an aggregate performance measure. Further details concerning data generation and agent evaluation are offered in Appendix A. 
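A minimal NumPy sketch of this generative process is shown below. The hidden width and the Glorot-normal variant of Xavier initialization are our assumptions for illustration; the released implementation in /generative/factories.py is authoritative.

```python
import numpy as np

def sample_environment(rng, input_dim=2, hidden=50, num_classes=2, temperature=0.1):
    """Sample a random 2-hidden-layer ReLU MLP environment (hidden width is an assumption)."""
    def xavier(shape):
        return rng.normal(0.0, np.sqrt(2.0 / (shape[0] + shape[1])), size=shape)

    w1, b1 = xavier((input_dim, hidden)), rng.normal(0.0, np.sqrt(0.5), size=hidden)  # biases ~ N(0, 1/2)
    w2, b2 = xavier((hidden, hidden)), np.zeros(hidden)
    w3, b3 = xavier((hidden, num_classes)), np.zeros(num_classes)

    def class_probabilities(x):
        h = np.maximum(x @ w1 + b1, 0.0)
        h = np.maximum(h @ w2 + b2, 0.0)
        logits = (h @ w3 + b3) / temperature           # scale outputs by 1/temperature
        logits -= logits.max(axis=-1, keepdims=True)   # numerically stable softmax
        exp = np.exp(logits)
        return exp / exp.sum(axis=-1, keepdims=True)

    return class_probabilities

rng = np.random.default_rng(0)
env = sample_environment(rng)
x = rng.normal(size=(100, 2))                          # inputs drawn i.i.d. from N(0, I)
y = np.array([rng.choice(2, p=p) for p in env(x)])     # labels sampled from the environment
```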
4.2 Performance in marginal predictions We begin our evaluation of benchmark approaches to Bayesian deep learning in marginal predictions (τ = 1). This setting has been the main focus of the Bayesian deep learning literature. Despite this focus, it is surprising to see in Figure 2 that none of the benchmark methods significantly outperform a well-tuned MLP baseline according to d1KL. Of course, there are many other metrics one might consider, but in this fundamental metric of prediction quality, the mlp agent presents a baseline that is difficult to outperform. One of the keys to this result is that all of the agents are able to tune their hyperparameters, such as L2 weight decay, to the SNR regime and number of training points. This matches the way deep learning systems are typically implemented in practice, with extensive hyperparameter tuning on validation data. This methodology has led many practitioners to doubt the usefulness of automatic tuning procedures such as bootstrap sampling (Nixon et al., 2020). In Figure 3, we compare the performance of an ensemble+ agent that uses bootstrapping with and without the ability to tune the hyperparameters per problem setting. We see that bootstrap sampling is beneficial when the agent is expected to work robustly over a wide range of problem settings. However, the benefits are no longer apparent when the agent is allowed to tune its hyperparameters to individual tasks. 4.3 Performance beyond marginals One of the key contributions of this paper is to evaluate predictive distributions beyond marginals. In Figure 2, the red bars show the results of benchmark agents evaluated on joint predictive distributions with τ = 100. Unlike when evaluating on marginal predictions, where no method significantly outperforms a well-tuned MLP, the potential benefits afforded by Bayesian deep learning become clear when examining higher-order predictive distributions. Our results refute prior works’ claims that examining dτKL beyond marginals provides little new information (Wang et al., 2021). Figure 2 shows that sgmcmc is the top-performing agent overall. This should be reassuring to the Bayesian deep learning community and beyond. In the limit of large compute this agent should recover the ‘gold-standard’ of Bayesian inference, and it does indeed perform best (Welling & Teh, 2011). However, some of the most popular approaches in this field (ensemble, dropout) do not actually provide good approximations to the predictive distribution in τ = 100. In fact, we see that even though Bayesian purists may deride ensemble+ and hypermodels as ‘not really Bayesian’, these methods actually provide much better approximations to the Bayesian posterior than ‘fully Bayesian’ VI approaches like bbb. We note too that while sgmcmc performs best, it also requires orders of magnitude more computation than competitive methods even in this toy setting (see Appendix C.2). As we scale to more complex environments, it may therefore be worthwhile to consider alternative approaches to approximate Bayesian inference. For insight into where our top agents are able to outperform, we compare ensemble and ensemble+ under the medium SNR regime in Figures 4 and 5. These methods are identical, except for the addition of a randomized prior function (Osband et al., 2018). 
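Concretely, an ensemble+ member can be sketched as a trainable network plus a fixed, randomly initialized copy that acts as an additive prior. The helper `make_net` and its interface below are hypothetical; this is a sketch of the idea, not the released agent.

```python
class EnsembleMemberWithPrior:
    """One ensemble+ member: trainable network plus a frozen, randomly drawn prior function.

    `make_net(seed)` is assumed to return (params, apply_fn). Only `self.params` receive
    gradient updates during training; the prior parameters stay frozen.
    """

    def __init__(self, make_net, prior_scale=1.0, seed=0):
        self.params, self.apply = make_net(seed=seed)
        self.prior_params, self.prior_apply = make_net(seed=seed + 10_000)  # never trained
        self.prior_scale = prior_scale

    def logits(self, x, params=None):
        params = self.params if params is None else params
        return self.apply(params, x) + self.prior_scale * self.prior_apply(self.prior_params, x)
```

Because each member carries its own frozen prior, members keep disagreeing away from the training data even when their trainable parts converge to similar fits, which is consistent with the more diverse sampled outcomes discussed next.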
Figure 4 shows that, although these methods perform similarly in the quality of their marginal predictions (τ = 1), the addition of a prior function greatly improves the quality of joint predictive distributions (τ = 100) in the low data regime. Figure 5 provides additional intuition into how the randomized prior functions are able to drive improved performance. Figure 5a shows a sampled generative model from our Testbed, with the training data shown in red and blue circles. Figure 5b shows the mean predictions and 4 randomly sampled ensemble members from each agent (top=ensemble, bottom=ensemble+). We see that, although the agents mostly agree in their mean predictions, ensemble+ produces more diverse sampled outcomes enabled by the addition of randomized prior functions. In contrast, ensemble produces similar samples, which may explain why its performance is close to baseline mlp. 5 Performance on real data Section 4 provides a simple, sanitized testbed for clear insight to the efficacy of Bayesian deep learning techniques. However, most deep learning research is not driven by these sorts of synthetic generative models, but the ultimate goal of performing well on real datasets. In this section, we apply the same benchmark agents to a selection of small challenge datasets. We find that, on average, tuning agents for the synthetic problems leads to better performance on real data. We also find that, just as the synthetic testbed, agents that perform similarly in marginal predictions may be distinguished in the quality of their joint predictions. 5.1 Datasets We focus on 10 benchmark datasets (3 feature-based, 7 image from pixels) drawn from the literature including Iris, MNIST, and CIFAR-10 (TFD). This collection is not intended to be comprehensive, or to include the most challenging large-scale problems, but instead to represent some canonical real-world data that might reasonably be addressed with the MLP models of Section 4.1. We apply a basic pre-processing step to each dataset, normalizing input features and flattening observations. We push full details to Appendix D.1. To assess performance in real datasets, we follow a similar procedure as Algorithm 1. The only difference is that since it is impossible to compute the likelihood of environment for real datasets, we compute the negative log-likelihood (NLL) rather than dτKL. Appendix A.2 provides further details. Note that NLL and dτKL are equivalent for agent comparison since they differ by a constant (see Equation 1). Furthermore, to allow for more direct comparison with the synthetic testbed, we also consider variants of each dataset where the number of training pairs is limited to less than the ‘full’ dataset size. 5.2 Synthetic data is predictive of real data Recall that Figure 2 compares performance across an array of agents, assessed using our synthetic data generating process. Each agent’s hyperparameters were tuned by first enumerating a list of plausibly near-optimal choices and selecting the one that optimizes performance. Each of our real-world datasets can be viewed as generated by an environment sampled from an alternative data generating process. A natural question is whether performance on the synthetic data correlates with performance on the real-world data. The table of Figure 6a displays results pertaining to each of our agents. For each agent, performance for each candidate hyperparameter setting was assessed on synthetic and real data, and the correlation across these pairs is reported. 
The left and right columns restrict attention to datasets with low and high volumes of training data, respectively. If a correlation were equal to 1, the hyperparameter setting that optimizes agent performance on real data would be identical to that on synthetic data. It is reassuring that the correlations are high, reflecting a strong degree of alignment, with the exception of bbb in low data regime, for which there appear to be pathological outcomes distorting performance for small training sets. The values in parentheses express 5th and 95th percentile confidence bounds, measured via the statistical bootstrap. Figure 6b plots performance on real versus synthetic data for the high data regime. Each data point represents one agent-hyperparameter combination. If the correlation were equal to 1, the combination that performs best on the synthetic data would also perform best on the real data. It is reassuring that the correlation is large, and the confidence interval between the 5th and 95th percentiles small. Agent-hyperparameter combinations that perform better on the testbed tend to perform better on real data as well. 5.3 Higher order predictions and informative priors Our synthetic testbed can be helpful in driving innovations that carry over to real data. Section 5.2 indicated that performance on the Testbed is correlated with that on realworld data. We now repeat the observation from Figure 4 on real data; additive prior functions can significantly improve the accuracy of joint predictive distributions generated by ensembles. We show this by comparing the performance of ensemble+ with different forms of prior functions on benchmark datasets. We evaluate an ensemble with no prior function (none), a random MLP prior (MLP), and a random linear function of a 2-dimensional latent representation as the prior, trained via variational autoencoder (VAE) (Kingma & Welling, 2014). We provide full details in Appendix D.3. Figure 7 plots the improvement in NLL for the ensemble agent relative to a baseline MLP (lower is better), and breaks out the result for datasets=MNIST,Iris and τ = 1, 100. We can see that the results for Iris mirror our synthetic data almost exactly. The results for MNIST share some qualitative insights, but also reveal some important differences. For Iris τ = 1 none of the methods outperform the MLP baseline, but for τ = 100 we see significant benefits to an additive MLP prior in the low data regime. For MNIST τ = 1 we actually see benefits to ensembles, even without prior functions and even in the high data regime. This reveals some aspects of this real data that are not captured by our synthetic model, where we did not see this behaviour. For τ = 100 the random MLP prior gives a slight advantage, but the effect is much less pronounced. We hypothesize this is because, unlike the testbed, the MLP prior is not well-matched to the input image data. However, the VAE prior is able to provide significant benefit in the low data regime.3 These benefits also carry over to Iris, even where random MLPs already provided signficant value. Designing architectures that offer useful priors for learning agents is an exciting area for future work. 6 Conclusion This paper highlights the need to evaluate predictive distributions beyond marginals. In addition to this conceptual contribution, we develop a suite of practical computational tools that can evaluate diverse approaches to uncertainty estimation. 
Together with these tools, we provide a neural-network-based data generating process that facilitates research and iteration beyond a small set of challenge datasets. We package these together as The Neural Testbed, including a variety of baseline agent implementations. We believe that this represents an exciting and valuable new benchmark for Bayesian deep learning and beyond. We have already used this testbed to generate several new insights in this paper. We have shown many popular Bayesian deep learning approaches perform similarly in marginal predictions but quite differently in joint predictions. We reveal the importance of bootstrapping for parameter robustness, and also help reconcile the observed lack of improvement when tuned to specific datasets. We have shown that these insights from synthetic data can carry over to real datasets; that performance in these settings is correlated, that agents with similar marginal predictions can be distinguished by their joint predictions, and that suitable prior functions can play an important role in driving good performance. The results in this paper are in some sense preliminary. The grand challenge for Bayesian deep learning is to provide effective uncertainty estimates in large, rich datasets. While we have demonstrated benefits to predictive evaluation beyond marginals only in the ‘low data’ regime and small-scale problems, we believe that they will extend more broadly to situations where new test inputs appear novel relative to training data. As such, we believe our core insights should carry over to the related problems of nonstationarity and covariate shift that plague modern deep learning systems. As an agent takes on more and more complex tasks, it will continue to run into new and unfamiliar settings and uncertain outcomes; as such, effective predictive distributions will be more important than ever. 3We hypothesize that appropriately initialized convnet architectures may be able to leverage image structure as noted in prior work (Ulyanov et al., 2018). A Testbed Pseudocode We present the testbed pseudocode in this section. Specifically, Algorithm 2 is the pseudocode for our neural testbed, and Algorithm 3 and Algorithm 4 are two different approaches to estimate the likelihood of a test data τ -sample conditioned on an agent’s belief. Algorithm 3 is based on the standard Monte-Carlo estimation, while Algorithm 4 adopts a random partitioning approach. The presented testbed pseudocode works for any prior P(E ∈ ·) over the environment and any input distribution PX , including the ones described in Section 4.1. We also release full code and implementations at https://anonymous.4open.science/r/neural-testbed-B839. In addition to presenting the testbed pseudocode, we also discuss some core technical issues in the neural testbed design. Specifically, Appendix A.1 discusses how to estimate the likelihood of an agent’s belief distribution; Appendix A.2 discusses how to extend the testbed to agent evaluation on real data; finally, Appendix A.3 explains our choices of experiment parameters. Algorithm 2 Neural Testbed Require: the testbed requires the following inputs 1. prior distribution over the environment P(E ∈ ·), input distribution PX 2. agent fθ 3. number of training data T , test distribution order τ 4. number of sampled problems J , number of test data samples N 5. parameters for agent likelihood estimation, as is specified in Algorithm 3 and 4 for j = 1, 2, . . . , J do Step 1: sample environment and training data 1. 
sample environment E ∼ P(E ∈ ·) 2. sample T inputs X0, X1, . . . , XT−1 i.i.d. from PX 3. sample the training labels Y1, . . . , YT conditionally i.i.d. as Yt+1 ∼ P (Y ∈ ·|E , X = Xt) ∀t = 0, 1, . . . , T − 1 4. choose the training dataset as DT = {(Xt, Yt+1) , t = 0, . . . , T − 1} Step 2: train agent train agent fθT based on training dataset DT Step 3: compute likelihoods for n = 1, 2, . . . , N do 1. sample X(n)T , . . . , X (n) T+τ−1 i.i.d. from PX 2. generate Y (n)T+1, . . . , Y (n) T+τ conditionally independently as Y (n) t+1 ∼ P ( Y ∈ · ∣∣∣E , X = X(n)t ) ∀t = T, T + 1, . . . , T + τ − 1 3. compute the likelihood under the environment E as pj,n = P ( Y (n) T+1:T+τ ∣∣∣E , X(n)T :T+τ−1) = ∏T+τ−1t=T Pr(Y (n)t+1∣∣∣E , X(n)t ) 4. estimate the likelihood conditioned on the agent’s belief p̂j,n ≈ P ( ŶT+1:T+τ = Y (n)T+1:T+τ ∣∣∣θT , X(n)T :T+τ−1, Y (n)T+1:T+τ) , based on Algorithm 3 or 4 with test data τ -sample ( X (n) T :T+τ−1, Y (n) T+1:T+τ ) . return 1JN ∑J j=1 ∑N n=1 log (pj,n/p̂j,n) Algorithm 3 Monte Carlo Estimation of Likelihood of Agent’s Belief Require: 1. trained agent fθT and number of Monte Carlo samples M 2. test data τ -sample (XT :T+τ−1, YT+1:T+τ ) Step 1: sample M models Ê1, . . . , ÊM conditionally i.i.d. from P ( Ê ∈ · ∣∣∣fθT ) Step 2: estimate p̂ as p̂ = 1 M M∑ m=1 P ( ŶT+1:T+τ = YT+1:T+τ ∣∣∣Êm, XT :T+τ−1, YT+1:T+τ) return p̂ Algorithm 4 Estimation of Likelihood of Agent’s Belief via Random Partitioning Require: 1. trained agent fθT 2. number of Monte Carlo samples M 3. number of hyperplanes d 4. test data τ -sample (XT :T+τ−1, YT+1:T+τ ) Step 1: sample M models Ê1, . . . , ÊM conditionally i.i.d. from P(Ê ∈ ·|fθT ); for each model m = 1, . . . ,M , class k, and t = T, . . . , T + τ − 1, define pm,t,k = P(Ŷ (m)t+1 = k| Êm, Xt), and `m,t,k = Φ−1 (pm,t,k), where Φ(·) is the CDF of the standard normal function. For each model m, define a vector `m = [`m,T,1, `m,T,2, . . . , `m,T+τ−1,K ] ∈ <Kτ Step 2: sample a d × (Kτ) matrix A and a d-dimensional vector b, with each element/component sampled i.i.d. from N(0, 1). For each m = 1, . . . ,M , compute ψm = 1 [A`m + b ≥ 0] ∈ {0, 1}d. Step 3: partition the sampled models, with each cell indexed by ψ ∈ {0, 1}d and defined by Mψ = {m : ψm = ψ} and assign a probability to each cell: qψ = |Mψ| M Step 4: ∀ψ ∈ {0, 1}d and ∀t = T, T + 1, . . . , T + τ − 1, estimate the probability of predicting Ŷt+1 = k conditioned on the cell: pψ,t,k = { 1 |Mψ| ∑ m∈Mψ pm,t,k if |Mψ| > 0 1 if |Mψ| = 0 Step 5: estimate Pr(Ŷt+1:T+τ = Yt+1:T+τ |θT , Xt:T+τ−1, Yt+1:T+τ ) as p̂ = ∑ ψ∈{0,1}d qψ T+τ−1∏ t=T pψ,t,Yt+1 return p̂ A.1 Estimating Likelihood of Agent’s Belief Distribution We have presented two algorithms to estimate the likelihood of a test data τ -sample conditioned on a trained agent: Algorithm 3 is based on the standard Monte Carlo estimation, while Algorithm 4 adopts an approach combining random partitioning and Monte Carlo estimation. In this subsection, we briefly discuss the pros and cons between these two algorithms, and provide some general guidelines on how to choose between them. Algorithm 3 produces unbiased estimates of the likelihoods, which is usually accurate when τ is small (e.g. for τ ≤ 10). However, maintaining accuracy might require the number of samples M to grow exponentially with τ . The following is an illustrative example. Example 1 (Uniform belief over deterministic models): Consider a scenario where the number of class labels is K = 2. 
We say a model Ê is deterministic if for any feature vector Xt, P(Ŷt+1 = 1 | Ê , Xt) ∈ {0, 1}. Obviously, for any test data τ -sample (XT :T+τ−1, YT+1:T+τ ) with YT+1:T+τ ∈ {0, 1}τ , under a deterministic model Ê , we have P ( ŶT+1:T+τ = YT+1:T+τ ∣∣∣ Ê , XT :T+τ−1, YT+1:T+τ) ∈ {0, 1}. When restricted to the inputs XT :T+τ−1, there are 2τ distinguishable deterministic models. Assume the agent’s belief distribution is uniform over these 2τ distinguishable deterministic models, then for any YT+1:T+τ ∈ {0, 1}τ , the likelihood of the agent’s belief distribution is P ( ŶT+1:T+τ = YT+1:T+τ ∣∣∣ θT , XT :T+τ−1, YT+1:T+τ) = 2−τ . Now let’s consider Algorithm 3. When a model Êm is sampled from the agent’s belief distribution, with probability 2−τ , P ( ŶT+1:T+τ = YT+1:T+τ ∣∣∣ Êm, XT :T+τ−1, YT+1:T+τ) = 1, and with probability 1− 2−τ , P ( ŶT+1:T+τ = YT+1:T+τ ∣∣∣ Êm, XT :T+τ−1, YT+1:T+τ) = 0. Consequently, in expectation, we need the number of Monte Carlo samples M = Ω(2τ ) to ensure that the estimate p̂ returned by Algorithm 3 is non-zero. To overcome this challenge, we also propose a novel approach to estimate the likelihood of agent’s belief via a combination of randomized partitioning and Monte Carlo simulation, as is presented in Algorithm 4. This approach proceeds as follows. First, M models are sampled from the agent’s belief distribution. For each sampled model, each test data input Xt, and each class label k, a predictive probability pm,t,k and its probit `m,t,k = Φ−1(pm,t,k) are computed, where Φ(·) is the CDF of the standard normal distribution. For each sampled model, we also stack its probits into a probit vector `m ∈ <Kτ . Then, d random hyperplanes are sampled and used to partition <Kτ into 2d cells. Stacked probit vectors place models in cells. Predictive distributions of models in each cell are averaged, and the likelihood is calculated based on these averages, with each cell weighted according to the number of models it contains. The Neural Testbed applies Algorithm 4 with 2d M . Hence, some cells are assigned many models. We conjecture that, under suitable regularity conditions, models assigned to the same cell tend to generate similar predictions. If this is the case, this algorithm produces accurate estimates even when τ is large. We leave a formal analysis to future work. Finally, we briefly discuss how to choose between Algorithm 3 and Algorithm 4. As a rule of thumb, we recommend to choose Algorithm 3 for τ < 10 and Algorithm 4 with the number of hyperplanes d between 5 and 10 for τ ≥ 10. A.2 Agent Evaluation on Real Data Algorithm 2 (and its simplified version Algorithm 1) is developed for a synthetic data generating processes. We now discuss how to extend it to agent evaluation on real data. Consider a scenario with J real datasets, and each dataset is further partitioned into a training dataset and a test dataset. The main difference between this scenario and a synthetic data generating process is that we cannot compute the likelihood of environment for real data. Thus, we compute the cross-entropy loss instead (see Equation 1). The computational approach is similar to Algorithm 1: for each real dataset, we use its training dataset to train an agent. Then, we sample N test data τ -samples from the test dataset, and estimate the likelihoods of the agent’s belief distribution. The estimate of the cross-entropy loss is taken to be the sample mean of the negative log-likelihoods. 
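A minimal NumPy sketch of this random-partitioning estimator is given below. It assumes the class probabilities of the M sampled models have already been stacked into an array of shape [M, τ, K], and it omits edge cases handled in the released implementation.

```python
import numpy as np
from scipy.stats import norm

def partitioned_likelihood(probs, y_test, num_hyperplanes=7, rng=None):
    """Sketch of Algorithm 4: estimate the agent's joint likelihood via random partitioning.

    probs: array [M, tau, K] of class probabilities from M sampled models.
    y_test: int array [tau] of realised labels.
    """
    rng = rng or np.random.default_rng(0)
    m, tau, k = probs.shape
    # Probit transform of each model's predictive probabilities, stacked per model.
    probits = norm.ppf(np.clip(probs, 1e-6, 1 - 1e-6)).reshape(m, tau * k)

    # d random hyperplanes assign each model to one of 2^d cells.
    a = rng.normal(size=(num_hyperplanes, tau * k))
    b = rng.normal(size=num_hyperplanes)
    codes = (probits @ a.T + b >= 0).astype(int)
    cell_ids = codes @ (2 ** np.arange(num_hyperplanes))

    likelihood = 0.0
    for cell in np.unique(cell_ids):
        members = probs[cell_ids == cell]                      # models falling in this cell
        cell_mean = members.mean(axis=0)                       # cell-averaged predictive dist.
        cell_lik = np.prod(cell_mean[np.arange(tau), y_test])  # joint likelihood of the cell
        likelihood += (len(members) / m) * cell_lik            # weight q_psi = |M_psi| / M
    return likelihood
```

With 2^d much smaller than M, many models share a cell, so the product over time steps is taken over cell-averaged probabilities rather than near-deterministic per-model likelihoods, avoiding the exponential sample requirement illustrated in Example 1.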
Note that when ranking agents, the cross-entropy loss and dτKL will lead to the same order of agents, since these two losses differ by a constant independent of the agent (see Equation 1). A.3 Choices of Experiment Parameters To apply Algorithm 2, we need to specify an input distribution PX and a prior distribution on the environment P(E ∈ ·). Recall from Section 4.1 that we consider binary classification problems with input dimension 2. We choose PX = N(0, I), and we consider three environment priors distinguished by a temperature parameter that controls the signal-to-noise ratio (SNR) regime. We sweep over temperatures in {0.01, 0.1, 0.5}. The prior distribution P(E ∈ ·) is induced by a distribution over MLPs with 2 hidden layers and ReLU activation. The MLP is distributed according to standard Xavier initialization, except that biases in the first layer are drawn from N(0, 12 ). The MLP outputs two units, which are divided by the temperature parameter and passed through the softmax function to produce class probabilities. The implementation of this generative model is in our open source code under the path /generative/factories.py. We now describe the other parameters we use in the Testbed. In Algorithm 2, we pick the order of predictive distributions τ ∈ {1, 100}, training dataset size T ∈ {1, 3, 10, 30, 100, 300, 1000}, number of sampled problems J = 10, and number of testing data τ -samples N = 1000. We apply Algorithm 3 for evaluation of d1KL and Algorithm 4 for evaluation of d100KL . In both Algorithms 3 and 4, we sample M = 1000 models from the agent. In Algorithm 4, we set the number of hyperplanes d = 7. The specification of the testbed parameters is in our open soucre code under the path /leaderboard/sweep.py. On real datasets, we apply the same τ ∈ {1, 100}, N = 1000, and M = 1000. We set the number of hyperplanes d = 10 in Algorithm 4. B Agents In this section, we describe the benchmark agents in Section 3 and the choice of various hyperparameters used in the implementation of these agents. The list of agents include MLP, ensemble, dropout, Bayes by backprop, stochastic Langevin MCMC, ensemble+ and hypermodel. We will also include other agents such as KNN, random forest, and deep kernel, but the performance of these agents was worse than the other benchmark agents, so we chose not to include them in the comparison in Section 4. In each case, we attempt to match the “canonical” implementation. The complete implementation of these agents including the hyperparameter sweeps used for the Testbed are available at https://anonymous.4open.science/r/neural-testbed-B839. We make use of the Epistemic Neural Networks notation from (Osband et al., 2021) in our code. We set the default hyperparameters of each agent to be the ones that minimize the aggregated KL score daggKL = d1KL + d100KL/100. B.1 MLP The mlp agent learns a 2-layer MLP with 50 hidden units in each layer by minimizing the cross-entropy loss with L2 weight regularization. The L2 weight decay scale is chosen either to be λ 1T or λ d √ β T , where d is the input dimension, β is the temperature of the generative process and T is the size of the training dataset. We sweep over λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. We implement the MLP agent as a special case of a deep ensemble (B.2). The implementation and hyperparameter sweeps for the mlp agent can be found in our open source code, as a special case of the ensemble agent, under the path /agents/factories/ensemble.py. 
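As a rough illustration of the mlp agent's objective, the sketch below combines the cross-entropy loss with the scaled L2 weight decay described above; the particular scaling shown (λ d√β / T) is only one of the two swept options, and the function signature is our own.

```python
import numpy as np

def mlp_agent_loss(weights, logits, labels, lam, num_train, input_dim, temperature):
    """Cross-entropy plus scaled L2 weight decay for the mlp agent (illustrative sketch)."""
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -np.mean(log_probs[np.arange(len(labels)), labels])
    decay = lam * input_dim * np.sqrt(temperature) / num_train  # lambda * d * sqrt(beta) / T
    l2 = sum(np.sum(w ** 2) for w in weights)
    return nll + decay * l2
```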
B.2 Ensemble We implement the basic “deep ensembles” approach for posterior approximation (Lakshminarayanan et al., 2017). The ensemble agent learns an ensemble of MLPs by minimizing the cross-entropy loss with L2 weight regularization. The only difference between the ensemble members is their independently initialized network weights. We chose the L2 weight scale to be either λ 1MT or λ d √ β MT , where M is the ensemble size, d is the input dimension, β is the temperature of the generative process, and T is the size of the training dataset. We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. We find that larger ensembles work better, but this effect is within margin of error after 10 elements. The implementation and hyperparameter sweeps for the ensemble agent can be found in our open source code under the path /agents/factories/ensemble.py. B.3 Dropout We follow Gal & Ghahramani (2016) to build a droput agent for posterior approximation. The agent applies dropout on each layer of a fully connected MLP with ReLU activation and optimizes the network using the cross-entropy loss combined with L2 weight decay. The L2 weight decay scale is chosen to be either l 2 2T (1− pdrop) or d √ βl T where pdrop is the dropping probability, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We sweep over dropout rate pdrop ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, length scale (used for L2 weight decay) l ∈ {0.01, 0.1, 0.3, 1, 3, 10}, number of neural network layers ∈ {2, 3}, and hidden layer size ∈ {50, 100}. The implementation and hyperparameter sweeps for the dropout agent can be found in our open source code under the path /agents/factories/dropout.py. B.4 Bayes-by-backprop We follow Blundell et al. (2015) to build a bbb agent for posterior approximation. We consider a scale mixture of two zero-mean Gaussian densities as the prior. The Gaussian densities have standard deviations σ1 and σ2, and they are mixed with probabilities p and 1− p, respectively. We sweep over σ1 ∈ {1, 2, 4}, σ2 ∈ {0.25, 0.5, 0.75}, p ∈ {0, 0.25, 0.5, 0.75, 1}, learning rate ∈ {10−3, 3× 10−3}, number of training steps ∈ {500, 1000, 10000}, number of neural network layers ∈ {2, 3}, hidden layer size ∈ {50, 100}, and the ratio of the complexity cost to the likelihood cost ∈ {1, d √ β}, where d is the input dimension and β is the temperature of the data generating process. The implementation and hyperparameter sweeps for the bbb agent can be found in our open source code under the path /agents/factories/bbb.py. B.5 Stochastic gradient Langevin dynamics We follow Welling & Teh (2011) to implement a sgmcmc agent using stochastic gradient Langevin dynamics (SGLD). We consider two versions of SGLD, one with momentum and other without the momentum. We consider independent Gaussian prior on the neural network parameters where the prior variance is set to be σ2 = λ T dβ , where λ is a hyperparameter that is swept over {0.01, 0.1, 0.5, 1}, d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We consider a constant learning rate that is swept over {10−5, 5× 10−5, 10−4, 5× 10−4, 10−3, 5 × 10−3, 10−2}. For SGLD with momentum, the momentum decay term is always set to be 0.9. The number of training batches is 5 × 105 with burn-in time of 105 training batches. 
We save a model every 1000 steps after the burn-in time and use these models as an ensemble during the evaluation. The implementation and hyperparameter sweeps for the sgmcmc agent can be found in our open source code under the path /agents/ factories/sgmcmc.py. B.6 Ensemble+ We implement the ensemble+ agent using deep ensembles with randomized prior functions (Osband et al., 2018) and bootstrap sampling (Osband & Van Roy, 2015). Similar to the vanilla ensemble agent in Section B.2, we consider L2 weight scale to be either λ 1MT or λ d √ β MT . We sweep over ensemble size M ∈ {1, 3, 10, 30, 100} and λ ∈ {10−4, 10−3, 10−2, 10−1, 1, 10, 100}. The randomized prior functions are sampled exactly from the data generating process, and we sweep over prior scaling ∈ {0, √ β, 1}. In addition, we sweep over bootstrap type (none, exponential, bernoulli). We find that the addition of randomized prior functions is crucial for improvement in performance over vanilla deep ensembles in terms of the quality of joint predictions. We also find that bootstrap sampling improves agent robustness, although the advantage is less apparent when one is allowed to tune the L2 weight decay for each task (see Figure 3). The implementation and hyperparameter sweeps for the ensemble+ agent can be found in our open source code under the path /agents/factories/ensemble_plus.py. B.7 Hypermodel We follow Dwaracherla et al. (2020) to build a hypermodel agent for posterior approximation. We consider a linear hypermodel over a 2-layer MLP base model. We sweep over index dimension ∈ {1, 3, 5, 7}. The L2 weight decay is chosen to be either λ 1T or λ d √ β T with λ ∈ {0.1, 0.3, 1, 3, 10}, where d is the input dimension, β is the temperature of the data generating process, and T is the size of the training dataset. We chose three different bootstrapping methods of none, exponential, bernoulli. We use an additive prior which is a linear hypermodel prior over an MLP base model, which is similar to the generating process, with number of hidden layers in {1, 2}, 10 hidden units in each layer, and prior scale from {0, √ β, 1}. The implementation and hyperparameter sweeps for the hypermodel agent can be found in our open source code under the path /agents/factories/hypermodel.py. B.8 Non-parametric classifiers K-nearest neighbors (k-NN) (Cover & Hart, 1967) and random forest classifiers (Friedman, 2017) are simple and cheap off-the-shelf non-parametric baselines (Murphy, 2012; Pedregosa et al., 2011). The ‘uncertainty’ in these classifiers arises merely from the fact that they produce distributions over the labels and as such we do not expect them to perform well relative to more principled approaches. Moreover, these methods have no capacity to model dτKL for τ > 1. For the knn agent we swept over the number of neighbors k ∈ {1, 5, 10, 30, 50, 100} and the weighting of the contribution of each neighbor as either uniform or based on distance. For the random forest agent we swept over the number of trees in the forest {10, 100, 1000}, and the splitting criterion which was either the Gini impurity coefficient or the information gain. To prevent infinite values in the KL we truncate the probabilities produced by these classifiers to be in the interval [0.01, 0.99]. The implementation and hyperparameter sweeps for the knn and random forest agents can be found in our open source code under the paths /agents/factories/knn.py and /agents/factories/random_forest.py. 
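For reference, the SGLD update used by the sgmcmc agent of Section B.5 can be sketched as follows. This is a minimal NumPy rendering of the Welling & Teh (2011) update with an isotropic Gaussian prior of variance σ², not the exact implementation in /agents/factories/sgmcmc.py.

```python
import numpy as np

def sgld_step(params, grad_log_lik_minibatch, lr, num_train, batch_size, prior_var, rng):
    """One SGLD step with an isotropic Gaussian prior N(0, prior_var * I).

    grad_log_lik_minibatch: per-parameter gradients of the log-likelihood summed over the
    minibatch; they are rescaled by N/B to approximate the full-data gradient.
    """
    new_params = []
    for p, g in zip(params, grad_log_lik_minibatch):
        grad_log_post = (num_train / batch_size) * g - p / prior_var  # likelihood + prior terms
        noise = rng.normal(size=p.shape) * np.sqrt(lr)                # injected Gaussian noise
        new_params.append(p + 0.5 * lr * grad_log_post + noise)
    return new_params
```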
B.9 Gaussian process with learned kernel A neural network takes input Xt ∈ X and produces output Zt+1 = Wφθ(Xt) + b ∈ RK , where W ∈ RK×m is a matrix, b ∈ RK is a bias vector, and φθ : X → Rm is the output of the penultimate layer of the neural network. In the case of classification the output Zt+1 corresponds to the logits over the class labels, i.e., Ŷt+1 ∝ exp(Zt+1). The neural network should learn a function that maps the input into a space where the classes are linearly distinguishable. In other words, the mapping that the neural network is learning can be considered a form of kernel (Schölkopf & Smola, 2018), where the kernel function k : X ×X → R is simply k(X,X ′) = φθ(X)>φθ(X ′). With this in mind, we can take a trained neural network and consider the learned mapping to be the kernel in a Gaussian process (GP) (Rasmussen, 2003), from which we can obtain approximate uncertainty estimates. Concretely, let Φ0:T−1 ∈ RT×m be the matrix corresponding to the φθ(Xt), t = 0, . . . , T −1, vectors stacked row-wise and let ΦT :T+τ−1 ∈ Rτ×m denote the same quantity for the test set. Fix index i ∈ {0, . . . ,K − 1} to be a particular class index. A GP models the joint distribution over the dataset to be a multi-variate Gaussian, i.e.,[ Z (i) 1:T Z (i) T+1:T+τ ] ∼ N ([ µ (i) 1:T µ (i) T+1:T+τ ] , [ σ2I + Φ0:T−1Φ>0:T−1 ΦT :T+τ−1Φ>0:T−1 Φ0:T−1Φ>T :T+τ−1 ΦT :T+τ−1Φ>T :T+τ−1 ]) where σ > 0 models the noise in the training data measurement and µ(i)1:T , µ (i) T+1:T+τ are the means under the GP. The conditional distribution is given by P (Z(i)T+1:T+τ | Z (i) 1:T , X0:T+τ−1) = N ( µ (i) T+1:T+τ |1:T ,ΣT+1:T+τ |1:T ) where ΣT+1:T+τ |1:T = ΦT :T+τ−1Φ>T :T+τ−1 − ΦT :T+τ−1Φ>0:T−1(σ2I + Φ0:T−1Φ>0:T−1)−1Φ0:T−1Φ>T :T+τ−1. and rather than use the GP to compute µ(i)T+1:T+τ |0:T (which would not be possible since we do not oberve the true logits) we just take it to be the output of the neural network when evaluated on the test dataset. The matrix being inverted in the expression for ΣT+1:T+τ |0:T has dimension T × T , which may be quite large. We use the Sherman-Morrison-Woodbury identity to rewrite it as follows (Woodbury, 1950) ΣT+1:T+τ |0:T = ΦT :T+τ−1(I − Φ>0:T−1(σ2I + Φ0:T−1Φ>0:T−1)−1Φ0:T−1)Φ>T :T+τ−1 = σ2ΦT :T+τ−1(σ2I + Φ>0:T−1Φ0:T−1)−1Φ>T :T+τ−1, which instead involves the inverse of an m×m matrix, which may be much smaller. If we perform a Cholesky factorization of positive definite matrix (σ2I + Φ>0:T−1Φ0:T−1) = LL> then the samples for all logits simultaneously can be drawn by first sampling ζ ∈ Rm×K , with each entry drawn IID from N (0, 1), then forming ŶT+1:T+τ ∝ exp(µT+1:T+τ |1:T + σΦT :T+τ−1L−>ζ). The implementation and hyperparameter sweeps for the deep kernel agent can be found in our open source code under the path /agents/factories/deep_kernel.py. B.10 Other agents In our paper we have made a concerted effort to include representative and canonical agents across different families of Bayesian deep learning and adjacent research. In addition to these implementations, we performed extensive tuning to make sure that each agent was given a fair shot. However, with the proliferation of research in this area, it was not possible for us to evaluate all competiting approaches. We hope that, by opensourcing the Neural Testbed, we can allow researchers in the field to easily assess and compare their agents to these baselines. For example, we highlight a few recent pieces of research that might be interesting to evaluate in our setting. Of course, there are many more methods to compare and benchmark. 
We leave this open as an exciting area for future research. • Neural Tangent Kernel Prior Functions (He et al., 2020). Proposes a specific type of prior function in ensemble+ inspired by connections to the neural tangent kernel. • Functional Variational Bayesian Neural Networks (Sun et al., 2019). Applies variational inference directly to the function outputs, rather than weights like bbb. • Variational normalizing flows (Rezende & Mohamed, 2015). Applies variational inference over a more expressive family than bbb. • No U-Turn Sampler (Hoffman et al., 2014). Another approach to sgmcmc that attempts to compute the posterior directly, computational costs can grow large. C Testbed results In this section, we provide the complete results of the performance of benchmark agents on the Testbed, broken down by the temperature setting, which controls the SNR, and the size of the training dataset. We select the best performing agent within each agent family and plot d1KL and d100KL with the performance of an MLP agent as a reference. We also provide a plot comparing the training time of different agents. C.1 Performance breakdown Figures 8 and 9 show the KL estimates evaluated on τ = 1 and τ = 100, respectively. For each agent, for each SNR regime, for each number of training points we plot the average KL estimate from the Testbed. In each plot, we include the “baseline” mlp agent as a black dashed line to allow for easy comparison across agents. A detailed description of these benchmark agents can be found in Appendix B. C.2 Training time Figure 10 shows a plot comparing the d100KL and training time of different agents normalized with that of an MLP. We can see that sgmcmc agent has the best performance, but at the cost of more training time (computation). Both ensemble+ and hypermodel agents have similar performance as sgmcmc with lower training time. We trained our agents on CPU only systems. D Real data This section provides supplementary details regarding the experiments in Section 5. As before, we include full implementation and source code at https://anonymous.4open. science/r/neural-testbed-B839. D.1 Datasets Table 2 outlines the datasets included in our experiments. Unlike to the synthetic testbed, which evaluates agents over a range of SNR regimes, these datasets are generally all high SNR regime. We can see this since the top-performing agents in the literature are able to obtain high levels of classification accuracy on held out data; something that is impossible if the underlying system has high levels of noise. Each of these datasets is provided with a canonical training/test set of specific sizes. In order to examine performance in different data regimes we augment the default settings of Table 2 by also examining the performance of agents on these datasets with reduced training data. In a way that mirrors the testbed sweep of Section 4.1, we also look at settings where the training data is restricted to T = 1, 10, 100, 1000, 10000 data points respectively. D.2 Correlation Figure 6 breaks down the correlation in performance between testbeds and real data. For the purposes of Table 6a we say that T = 1, 10 is the ‘low data’ regime, and the maximum training dataset size is the ‘high data’ regime. Our results show that, for each agent, for each data regime, performance of hyperparameters is correlated across settings. One concern might be that while performance on real data overall is highly correlated, that this might not necessarily be the case for any individual dataset. 
Or, alternatively, that this correlation is driven by extremely strong relationships in one dataset that are not present in others. Figure 11 shows that this is not the case. In fact, for each of the datasets considered we have strong and positive correlation over agent-hyperparameter pairs. This gives us confidence that the results of Figure 6b are robust not only to choice of agent, but also to some reasonable choice of datasets. D.3 Prior functions We consider two different forms of prior functions for ensemble+: a random MLP of the input data and a random linear function of a 2-dimensional latent trained via variational autoencoder (VAE) (Kingma & Welling, 2014). For the MLP prior, we tried both linear (MLP with no hidden layer) and MLP with hidden layers, and observed that the linear prior works better. To train the 2-dimensional latent, we considered a 2-layer (128, 64) MLP for the Gaussian encoder and a 2-layer (64, 128) MLP for the Bernoulli decoder. We trained the VAE using all unsupervised training data available for each dataset. After training the VAE for 10,000 steps, we used the output mean of the Gaussian encoder as the latent.
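To illustrate the VAE-based prior of Section D.3, the sketch below builds an additive prior function for ensemble+ as a fixed random linear map of the 2-dimensional latent mean produced by the pre-trained encoder. The helper `encoder_mean_fn` and the function names are hypothetical placeholders.

```python
import numpy as np

def make_vae_prior(encoder_mean_fn, num_classes, prior_scale=1.0, seed=0):
    """Prior function built from a pre-trained VAE encoder (illustrative sketch).

    encoder_mean_fn: maps a batch of flattened inputs to the 2-d latent means of the
    Gaussian encoder, assumed to have been trained separately on unsupervised data.
    Returns a fixed random linear function of that latent, used as additive prior logits.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(2, num_classes))
    b = rng.normal(size=num_classes)

    def prior_logits(x):
        z = encoder_mean_fn(x)             # [batch, 2] latent means
        return prior_scale * (z @ w + b)   # fixed prior; never updated during training

    return prior_logits
```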
1. What is the main contribution of the paper regarding Bayesian deep learning?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any concerns or questions regarding the presentation and discussion of topics in the paper?
Summary Of The Paper Review
Summary Of The Paper The authors discuss whether it is sufficient to consider the marginal posterior predictive vs considering a joint posterior predictive when evaluating Bayesian deep learning approaches. Introducing a set of experiments (The Neural Testbed), they demonstrate that the performance of common approaches can differ greatly depending on which of these predictive distributions are evaluated. Additionally, extensive code is provided for efficient implementation and evaluation of new models. Review Edit: Thank you for the rebuttal and detailed discussion. I consider the results an interesting contribution that is relevant to the field, which is why I vote for acceptance. The limited score is due to the presentation and discussion of topics in the paper not directly related to the theoretical contribution (see discussion below for details). A proper rewrite of the related sections to guide the reader to focus on the main topic marginal vs joint posterior predictive would strengthen the paper a lot. Strengths The paper is overall well written and structured The experiments demonstrate the difference between marginal and joint posterior predictive the authors ask the community to focus on An extensive library with code is provided, which allows for the implementation and test of further methods Weaknesses The title reads a little bit too attention-grabbing, especially given that the paper itself does not actually target the question of whether Bayesian deep learning (BDL) works, but rather whether evaluating a marginal posterior predictive is sufficient, of whether one should consider joint posterior predictive. The abstract claims to solve "what roles epistemic versus aleatoric uncertainty play", and the end of the introduction also claims to "resolve a point of philosophical debate" on this topic. However, can the authors comment to in which respect they do that? As far as I read it, the authors claim that epistemic and aleatoric uncertainty are highly dependent on the model (e.g. they differ depending on the model, yet can lead to the same predictive distributions for distinct models), and that to compare models, one should rely on the posterior predictive. (To be more precise in the rest of the paper, the joint posterior predictive.) But is that really a point that is being debated and needs to be resolved? That what is reducible and irreducible uncertainty is always conditioned on the model, and that we, therefore, need to focus on the posterior predictive to compare different models with each other properly seems to be completely obvious and not challenged by anybody to my knowledge. (Unfortunately, the supposed philosophical debate lacks all references.) The realization that we cannot usefully compare ir/reducible uncertainties between models does, of course, not at all tackle the question of whether it can be useful to distinguish between them within, i.e. conditioned on a specific model, to gain a deeper understanding of why the current model performs as it does, or how one can improve it (taking e.g. Depeweg et al. (2018) as an example of an active learning context). Section 2.1 and 2.2 read as if they are due to the paper. However, both the τ th-order KL-loss as well as a minor variation of the example are due to Lu et al. (2021), which are only cited in passing without highlighting this fact. 
Given the small nature of the nets further comparisons against an HMC giving access to the true posterior seems feasible and relevant in the set of models discussed, and if it is infeasible after all an SGHMC (Chen et al., 2014) could give a second point similar to the Langevin sgmcmc to have a richer exploration of the posterior. Given the topic, Izmailov et al. (2021) seem highly relevant with similar results concerning ensembles and sgmcmc performances (abbreviated SGLD in their case). This prior work is missing completely in the discussion. Specific additional questions to the authors Can the authors clarify the differences/similarities with Lu et al., 2021? Can the authors comment in greater detail on the claimed different observations between the current work and Wang et al. (2021)? The nets used in the experiments are rather small. Do the authors have any indication that the findings remain stable with deeper nets? Minor Several parts of the paper are "confusing", "puzzling", "surprising", unclear as to what is "really Bayesian", and seem to be formulated with the primary goal of teasing "Bayesian purists". While this is a nice structure for a poster/conference talk to provoke some good discussions in the next break, it feels somewhat unnecessary in a paper. (Feel free to ignore this complaint, as it is only my highly subjective prior) Similar to the comment on the teasing structure of the paper and the earlier comments on aleatoric/epistemic, the XKCD comic seems somewhat out of place. Especially given that it is introduced as if the deep learning community is its cause. While I do not know the original story behind Munroe's comic, the discussion of different types of sources of uncertainty is a lot older than our current deep learning popularization. "We see that ensemble ... actually provide much better approximations to the Bayesian posterior than 'fully Bayesian' VI approaches like bbb". I would claim that this finding is not too surprising. Given that most VI approaches take a unimodal, mean-field approximation as their starting point, it is directly clear that the approximation to a highly multimodal posterior is terrible. It seems reasonable that an ensemble approach gives a richer signal as long as its members do not collapse to the same optimum. This has also been extensively evaluated and demonstrated by Izmailov et al. (2021). I might have overlooked it, but the hyperparameter details in the appendix seem to miss the number of samples from the posterior for the BNNs. Edit: All details are provided, I had just overlooked them. See the author response below Typos The datasets in Sec 5.1 give (TFD) as a reference, while the references only contain the long-form TensorFlow Datasets. Depeweg et al. Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning, ICML 2018 Izmailov et al. What Are Bayesian Neural Network Posteriors Really Like?, ICML 2021
ICLR
Title Revisiting adapters with adversarial training Abstract While adversarial training is generally used as a defense mechanism, recent works show that it can also act as a regularizer. By co-training a deep network on clean and adversarial inputs, it is possible to improve classification accuracy on the clean, non-adversarial inputs. We demonstrate that, contrary to previous findings, it is not necessary to separate batch statistics when co-training on clean and adversarial inputs, and that it is sufficient to use adapters with few domain-specific parameters for each type of input. We establish that using the classification token of a Vision Transformer (VIT) as an adapter is enough to match the classification performance of dual normalization layers, while using significantly less additional parameters. First, we improve upon the top-1 accuracy of a non-adversarially trained VIT-B16 model by +1.12% on IMAGENET (reaching 83.76% top-1 accuracy). Second, and more importantly, we show that training with adapters enables model soups through linear combinations of the clean and adversarial tokens. These model soups, which we call adversarial model soups, allow us to trade-off between clean and robust accuracy without sacrificing efficiency. Finally, we show that we can easily adapt the resulting models in the face of distribution shifts. Our VIT-B16 obtains top-1 accuracies on IMAGENET variants that are on average +4.00% better than those obtained with Masked Autoencoders. 1 INTRODUCTION Deep networks are inherently susceptible to adversarial perturbations. Adversarial perturbations fool deep networks by adding an imperceptible amount of noise which leads to an incorrect prediction with high confidence (Carlini & Wagner, 2017; Goodfellow et al., 2015; Kurakin et al., 2016b; Szegedy et al., 2014). There has been a lot of work on building defenses against adversarial perturbations (Papernot et al., 2016; Kannan et al., 2018); the most commonly used defense is adversarial training as proposed by Madry et al. (2018) and its variants (Zhang et al., 2019; Pang et al., 2020; Huang et al., 2020; Rice et al., 2020; Gowal et al., 2020), which use adversarially perturbed images at each training step as training data. Earlier studies (Kurakin et al., 2016a; Xie et al., 2019b) showed that using adversarial samples during training leads to performance degradation on clean images. However, AdvProp (Xie et al., 2019a) challenged this observation by showing that adversarial training can act as a regularizer, and therefore improve nominal accuracy, when using dual batch normalization (BatchNorm) layers (Ioffe & Szegedy, 2015) to disentangle the clean and adversarial distributions. We draw attention to the broad similarity between the AdvProp approach and the adapters literature (Rebuffi et al., 2017; Houlsby et al., 2019) where a single backbone network is trained on multiple domains by means of adapters, where a few parameters specific to each domain are trained separately while the rest of the parameters are shared. In light of this comparison, we further develop the line of work introduced by AdvProp and analyze it from an adapter perspective. In particular, we explore various adapters and aim to obtain the best classification performance with minimal additional parameters. Our contributions are as follows: • We show that, in order to benefit from co-training on clean and adversarial samples, it is not necessary to separate the batch statistics of clean and adversarial images in BatchNorm layers. 
We demonstrate empirically that it is enough to use domain specific trainable parameters to achieve similar results. ∗Work done during an internship at DeepMind • Inspired by the adapters literature, we evaluate various adapters. We show that training separate classification tokens of a VIT for the clean and adversarial domains is enough to match the classification performance of dual normalization layers with 49× fewer domain specific parameters. This classification token acts as a conditioning token which can modify the behaviour of the network to be either in clean or robust mode (Figure 1). • Unlike Xie et al. (2019a) and Herrmann et al. (2022), we also aim at preserving the robust performance of the network against adversarial attacks. We show that our conditional token can obtain SOTA nominal accuracy in the clean mode while at the same time achieving competitive ℓ∞-robustness in the robust mode. As a by-product of our study, we show that adversarial training of VIT-B16 on IMAGENET leads to state-of-the-art robustness against ℓ∞-norm bounded perturbations of size 4/255. • We empirically demonstrate that training with adapters enables model soups (Wortsman et al., 2022). This allow us to introduce adversarial model soups, models that trade-off between clean and robust accuracy through linear interpolation of the clean and adversarial adapters. To the best of our knowledge, our work is the first to study adversarial model soups. We also show that adversarial model soups perform better on IMAGENET variants than the state-of-the-art with masked auto-encoding (He et al., 2022). 2 RELATED WORK Adversarial training. Although more recent approaches have been proposed, the most successful method to reduce the vulnerability of image classifiers to adversarial attacks is adversarial training, which generates on-the-fly adversarial counterparts for the training images and uses them to augment the training set (Croce et al., 2020). Goodfellow et al. (2015) used the single-step Fast Gradient Sign Method (FGSM) attack to craft such adversarial images. Later, Madry et al. (2018) found that using iterative Projected Gradient Descent (PGD) yields models robust to stronger attacks. Their scheme has been subsequently improved by several modifications, e.g. a different loss function (Zhang et al., 2019), unlabelled or synthetic data (Carmon et al., 2019; Uesato et al., 2019; Gowal et al., 2021), model weight averaging (Gowal et al., 2020), adversarial weight perturbations (Wu et al., 2020), and better data augmentation (Rebuffi et al., 2021). While the main drawback of adversarial training is the degradation of performance of robust models on clean images (Tsipras et al., 2018), Xie et al. (2019a) showed that adversarial images can be leveraged as a strong regularizer to improve the clean accuracy of classifiers on IMAGENET. In particular, they propose AdvProp, which introduces separate BatchNorm layers specific to clean or adversarial inputs, with the remaining layers being shared. This approach and the role of normalization layers when training with both clean and adversarial points has been further studied by (Xie & Yuille, 2019; Walter et al., 2022). Recently, Wang et al. 
(2022) suggest removing BatchNorm layers from the standard RESNET architecture (He et al., 2016) to retain high clean accuracy with adversarial training, but this negatively affects the robustness against stronger attacks (see https://github.com/amazon-research/normalizer-free-robust-training/issues/2). Finally, Kireev et al. (2021) and Herrmann et al. (2022) showed that carefully tuning the threat model in adversarial training might improve the performance on clean images and in the presence of distribution shifts, such as common corruptions (Hendrycks & Dietterich, 2018). Adapters. In early work on deep networks, Caruana (1997) showed that sharing network parameters among tasks acts as a regularizer. Aiming at more efficient parameter sharing, Rebuffi et al. (2017) and Rosenfeld & Tsotsos (2018) introduced adapters – small trainable modules specific to each task which can be stitched all along the network. In other lines of work, Mallya et al. (2018) and Mancini et al. (2018) adapt a model to new tasks using efficient weight masking, and Li et al. (2016) and Maria Carlucci et al. (2017) perform domain adaptation by batch statistics modulation. While these approaches require having as many adapters as tasks, Perez et al. (2018) propose an adapter layer whose weights are generated by a conditioning network. Besides computer vision, adapters are also used in natural language processing for efficient fine-tuning (Houlsby et al., 2019; Pfeiffer et al., 2020; Wang et al., 2020) and multi-task learning (Stickland & Murray, 2019). Merging multiple models. While ensembles are a popular and successful way to combine multiple independently trained classifiers to improve on individual performance (Ovadia et al., 2019; Gontijo-Lopes et al., 2021), they increase the inference cost as they require a forward pass for each sub-network of the ensemble. An alternative approach is taken by Wortsman et al. (2022), who propose to fine-tune a fully trained model with different hyperparameter configurations and then average the entire set of weights of the various networks. The resulting model soups achieve better performance than each individual model and even ensembles. Model soups are similar in spirit to Stochastic Weight Averaging (Izmailov et al., 2018), which consists in averaging weights along an optimization trajectory rather than averaging over independent runs. 3 METHOD 3.1 CO-TRAINING WITH NOMINAL AND ADVERSARIAL TRAINING Goodfellow et al. (2015) propose adversarial training as a way to regularize standard training. They jointly optimize the model parameters θ on clean and adversarial images with the co-training loss αL(f(x; θ), y) + (1 − α) max_{δ∈S} L(f(x + δ; θ), y), (1) where pairs of associated examples x and labels y are sampled from the training dataset, f(·; θ) is a model parametrized by θ, L defines the loss function (such as the cross-entropy loss in the classification context), and S is the set of allowed perturbations. Setting α = 1 boils down to nominal training on clean images and setting α = 0 leads to adversarial training as defined by Madry et al. (2018). In our case, we consider ℓ∞ norm-bounded perturbations of size ϵ = 4/255, so we have S = {δ | ∥δ∥∞ ≤ ϵ}, and we use untargeted attacks to generate the adversarial perturbations δ (see details in Section 4). 3.2 SEPARATING BATCH STATISTICS IS NOT NECESSARY BatchNorm is a widely used normalization layer shown to improve performance and training stability of image classifiers (Ioffe & Szegedy, 2015).
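Before turning to batch statistics, the inner maximization of Eq. 1 over S = {δ | ∥δ∥∞ ≤ ϵ} can be made concrete with a minimal PyTorch-style sketch of an untargeted ℓ∞ PGD attack. This is an illustration only, not the exact implementation used in our experiments; the random start is an assumption, and the default step size and number of steps mirror the PGD2 settings reported in the appendix.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, epsilon=4 / 255, step_size=2.5 / 255, num_steps=2):
    """Approximate max over ||delta||_inf <= epsilon of L(f(x + delta), y) with PGD.

    `model` maps images in [0, 1] to logits; the attack is untargeted and the
    defaults mirror the PGD2 settings of the appendix (illustrative only).
    """
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)  # random start (assumption)
    delta.requires_grad_(True)
    for _ in range(num_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()              # gradient ascent on the loss
            delta.clamp_(-epsilon, epsilon)               # project back onto the eps-ball
            delta.copy_((x + delta).clamp(0.0, 1.0) - x)  # keep x + delta a valid image
    return (x + delta).detach()
```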
We recall that a BatchNorm layer, given a batch as input, first normalizes it by subtracting the mean and dividing by the standard deviation computed over the entire batch, then it applies an affine transformation, with learnable scale and offset parameters. During training, it accumulates these so-called batch statistics to use during test time, so that the output of the classifier for each image is independent of the other images in the batch. The batch statistics can be seen an approximation of the statistics over the image distribution. Xie et al. (2019a) show that optimizing the co-training loss in Eq. 1 can yield worse results on clean images than simple nominal training. This is especially the case when the network has a low capacity or the attack (i.e., the inner maximization) is too strong (such as using a large perturbation radius ϵ). To solve this issue, they propose AdvProp, which consists in using distinct BatchNorm layers for clean and adversarial images. They argue that “maintaining one set of [BatchNorm] statistics results in incorrect statistics estimation”, which could be the reason for the performance degradation. We note that using two sets of BatchNorm layers for the clean and adversarial samples as in AdvProp creates two sets of batch statistics but also two sets of learnable scale and offset parameters. In the following we investigate whether having separate batch statistics is a necessary condition for successful co-training. Figure 2 shows the clean and robust accuracy of various model architectures as training progresses. The left panel demonstrates that, if we share both batch statistics and scales/offsets (Shared BatchNorm, orange curves), the robust accuracy (orange dashed line) quickly drops, far from the one obtained by AdvProp (Dual BatchNorm, blue curve) which is above 34%. However, if we use a single set of batch statistics but specific scales and offsets for clean and adversarial images, we can observe on the right panel of Figure 2 that the robust accuracy (DualParams BatchNorm, orange dashed line) matches the one (blue dashed line) obtained by AdvProp. This demonstrates that it is possible to achieve nominal and robust classification results similar to those of AdvProp without separate batch statistics. Furthermore, there exist normalization layers such as LayerNorm (Ba et al., 2016) or GroupNorm (Wu & He, 2018) which do not use batch statistics, as their normalization step is done per sample and not per batch. Hence, according to the hypothesis of Xie et al. (2019a), these types of normalization layer should not suffer from performance degradation. Nevertheless, the left panel of Figure 2 shows that their robust accuracy (green and red dashed lines) does not match the robust accuracy of AdvProp (Dual BatchNorm), and is unstable over training steps. However, by making the scales and offsets of LayerNorm and GroupNorm specific to clean and adversarial images, their robust accuracy matches that obtained with dual BatchNorm layers, as shown in the right panel of Figure 2. This suggests that a key element to make the co-training loss of Eq. 
1 work for various normalization layers is to have trainable parameters which are specific to the clean and adversarial images.2 3.3 REVISITING ADAPTERS WITH ADVERSARIAL TRAINING The last observation strongly relates this setting to the adapters literature, where a single backbone architecture has some parameters, called adapters, which are specific to different domains while the rest of the parameters are shared among tasks. In our case, the clean images form one domain and the adversarial images constitute another domain. In this work, we go beyond having separate normalization layers for the clean and adversarial images and investigate other types of adapters. 2 Interestingly, contrary to our observation that standard GroupNorm fails to retain robustness, Xie & Yuille (2019) report that GroupNorm matches Dual BatchNorm. We explain this difference by the fact that we use a stronger untargeted attack in this manuscript compared to the targeted attack of Xie & Yuille (2019). Using a stronger attack allows us to reveal failure modes that would have been hidden otherwise. Formally, the model parameters θ can be decomposed into parameters ψ which are shared among domains and parameters ϕ which are specific to a domain. We call ϕclean the parameters used when training on clean images and ϕadv the parameters used when training on adversarial images. For example, in Section 3.2, when we used dual LayerNorm layers, the scales and offsets of these normalization layers are contained in ϕclean and ϕadv whereas the rest of the model parameters are in ψ. Based on Eq. 1, we optimize the following loss: αL(f(x; ψ ∪ ϕclean), y) + (1 − α) max_{δ∈S} L(f(x + δ; ψ ∪ ϕadv), y). (2) Finally, we introduce some notation for models with adapters at inference time: we call f(·; ψ ∪ ϕclean) the clean mode for prediction as we use the adapters ϕclean trained on the clean data. Conversely, we call f(·; ψ ∪ ϕadv) the robust mode when using the adapters ϕadv trained on the perturbed data. 3.4 TRAINING WITH ADAPTERS ENABLES ADVERSARIAL MODEL SOUPS Wortsman et al. (2022) propose model soups, which consist in averaging the weights of multiple models fine-tuned from the same pre-trained model. The resulting weight-averaged model can benefit from the original models without incurring any extra compute and memory cost at inference time. Currently, in our setting the user would have to know at test time whether the network should be in clean or robust mode. A model soup, by its ability to merge models, is a way to bypass this issue. We formulate the hypothesis that training with adapters enables model soups. With this in mind, we observe that training with adapters means that most of the model parameters are already shared, so model souping would simply consist in linearly interpolating the weights of the adapters for the two modes. We call adversarial model soups the model soups obtained from a model co-trained on clean and adversarial samples. We get the following parametrized model: f(·; ψ ∪ (βϕclean + (1 − β)ϕadv)) (3) where β is the weighting factor when averaging the adapters. If β = 1, the model soup boils down to the clean mode and, conversely, β = 0 corresponds to the robust mode. In Section 5.2, we assess this hypothesis and show that forming model soups between independently trained nominal and robust models fails. 4 EXPERIMENTAL SETUP Architecture. We focus our study on the B16 variant of the Vision Transformer (VIT-B16) introduced by Dosovitskiy et al. (2020). We adopt the modifications proposed by He et al.
(2022): the linear classifier is applied on the mean of the final tokens except the classification token. We train this network by using supervised training from scratch as proposed in He et al. (2022) (see the appendix). Attacks. We consider adversarial robustness against untargeted ℓ∞-bounded attacks with radius ϵ = 4/255. This is the most common setup for IMAGENET models, and it is more challenging to defend against than the targeted threat model used by Xie & Yuille (2019). To generate the adversarial perturbations we use Projected Gradient Descent (Madry et al., 2018) with 2 steps named PGD2 (see details in the appendix) at training time and with 40 steps for evaluation (PGD40). Datasets. We focus our experimental evaluation on the IMAGENET dataset (Russakovsky et al., 2015), with images at 224 × 224 resolution for both training and testing, as this is the standard large-scale benchmark for SOTA models and was used by Xie et al. (2019a) for AdvProp. We report clean and adversarial accuracy on the whole validation set. Moreover, we test the robustness against distribution shifts via several IMAGENET variants: IMAGENET-C (Hendrycks & Dietterich, 2018), IMAGENET-A (Hendrycks et al., 2019), IMAGENET-R (Hendrycks et al., 2020), IMAGENET-SKETCH (Wang et al., 2019), and Conflict Stimuli (Geirhos et al., 2018). 5 EXPERIMENTAL RESULTS Similarly to our observation in Section 3.2 for a RESNET-50, a fully shared VIT-B16 trained with the co-training loss Eq. 1 fails to retain any robustness. Therefore, we first investigate various adapters for VIT-B16 to find an efficient training setting in Section 5.1. Then we study adversarial model soups with adapters in Section 5.2 and finally show that training with adapters generalizes to other datasets and threat models. 5.1 FINDING AN EFFICIENT SETTING Choice of adapter. Using adapters increases the number of parameters as the layers which we choose as adapters have twice as many parameters: one set of parameters for clean images and another for adversarial images. Hence, to avoid increasing the network memory footprint too heavily, we restrict our adapters study to layers with few parameters, thus excluding self-attention (Vaswani et al., 2017) layers and MLP layers. This leaves the options of having dual embedder, positional embedding, normalization layers or classification token; among them, the classification token has by far the least amount of parameters, 49-770× fewer than the other candidates (see details in Table 1). We must still verify that so few parameters are enough to preserve the advantages of the AdvProp training scheme. Hence, we train a model for each type of adapter and compare them with two models without adapters, one trained with nominal training and the other with adversarial training. We observe in Table 1 that by using two classification tokens as adapters, which means only 768 extra parameters out of 86M, we reach 83.56% clean accuracy on IMAGENET, which is an improvement of +0.92% over standard training. Moreover, we obtain a robust accuracy of 49.87% in the robust mode, which is close to the robust accuracy given by adversarial training. Notably, we see that adapting other layers with more parameters such as all LayerNorm scales and offsets results in similar performances in both clean and robust modes. 
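To illustrate how lightweight the dual classification token is, the sketch below shows a toy ViT-style model whose only domain-specific parameters are the two tokens ϕclean and ϕadv, together with the co-training objective of Eq. 2. This is a hedged, minimal PyTorch sketch with made-up sizes, not the VIT-B16 implementation used in the paper; the `attack` argument stands for any inner-maximization routine, such as the PGD sketch given earlier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTokenViT(nn.Module):
    """Toy ViT-style classifier: everything is shared except two classification tokens."""

    def __init__(self, num_classes=10, dim=64, depth=2, patch=4, image_size=32):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # shared patch embedder
        num_patches = (image_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches + 1, dim))    # shared positional embedding
        layer = nn.TransformerEncoderLayer(dim, nhead=4, dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)    # shared transformer blocks (psi)
        self.head = nn.Linear(dim, num_classes)                          # shared classifier head
        self.cls_clean = nn.Parameter(torch.zeros(1, 1, dim))            # phi_clean (768 values in ViT-B16)
        self.cls_adv = nn.Parameter(torch.zeros(1, 1, dim))              # phi_adv

    def forward(self, x, mode="clean"):
        tok = self.cls_clean if mode == "clean" else self.cls_adv
        z = self.embed(x).flatten(2).transpose(1, 2)                     # (batch, num_patches, dim)
        z = torch.cat([tok.expand(z.size(0), -1, -1), z], dim=1) + self.pos
        z = self.encoder(z)
        return self.head(z[:, 1:].mean(dim=1))                           # mean of final tokens except the cls token

def co_training_loss(model, x, y, attack, alpha=0.5):
    """Objective of Eq. 2: clean mode on x, robust mode on the attacked x."""
    x_adv = attack(lambda inp: model(inp, mode="adv"), x, y)             # inner maximization in the robust mode
    loss_clean = F.cross_entropy(model(x, mode="clean"), y)
    loss_adv = F.cross_entropy(model(x_adv, mode="adv"), y)
    return alpha * loss_clean + (1 - alpha) * loss_adv
```

In this sketch the adversarial examples are generated and fitted in the robust mode (with ϕadv), while the shared parameters ψ receive gradients from both terms of the loss, mirroring the decomposition of Eq. 2.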
This indicates that (i) it is not necessary to split the normalization layers to reproduce the effect of AdvProp, and (ii) even a very small amount of dual parameters provide sufficient expressiveness to adapt the shared portion of the network to the two modes. Therefore, in the rest of the manuscript we focus on dual classification tokens as it requires the smallest number of extra parameters. Number of attack steps. As the results in Table 1 were obtained with PGD2, we check if we can reduce the number of attack steps to be more computationally efficient. In Table 2, we report the results for two one-step methods: N-FGSM by de Jorge et al. (2022) and FAST-AT by Wong et al. (2020). If we use the step sizes recommended in the corresponding papers, both methods suffer from catastrophic overfitting (Wong et al., 2020) (illustrated in Figure 6 in the appendix) and therefore have no robustness at all. In Table 2 we avoid such catastrophic overfitting by reducing the step sizes to ϵ and 0.75ϵ for FAST-AT and N-FGSM respectively and we observe that both methods perform more than 1% worse in robust accuracy than PGD2. We also increase the number of attack steps to 5 with PGD5. We notice a small improvement over PGD2 of 0.4% in robust accuracy while the clean accuracy is on par. Hence, PGD2 seems to be a good compromise between efficiency and classification performance. Weighting the co-training loss. In the co-training loss in Eq. 1, the α hyperparameter controls how much the loss is weighted towards clean or adversarial samples. For example, setting α = 0 means we train solely on adversarial samples. In Figure 3, where we evaluate several values for α (dividing the range between 0 and 1 into intervals of length 0.1), we notice that only the values between α = 0 and α = 0.4 form a Pareto front that strictly dominates the other intervals. Indeed, between α = 1 and α = 0.4, decreasing α leads to better performance both in clean and robust modes. In fact, setting α = 0.4 leads to 83.76% clean accuracy (in clean mode) and 52.19% robust accuracy (in robust mode) which are both better than the values obtained in Table 1 with α = 0.5. In Figure 7 (in the appendix), we visualize the filters of the embedder when training with various values of α. We observe that for α = 0.2 and for α = 0.8 the filters look quite similar to the filters learned with adversarial training (α = 0) and nominal training (α = 1), respectively. Interestingly, filters learned with α = 0.4 and α = 0.6 are not the simple combination of nominal and adversarial filters but rather new visually distinct filters. This indicates that co-training on clean and adversarial samples can lead to a new hybrid representation for the shared layers compared to nominal and adversarial training. Robustness to stronger attacks. For completeness we further test the robustness of a subset of our models with a mixture of AUTOATTACK (Croce & Hein, 2020) and MULTITARGETED (Gowal et al., 2019), denoted by AA+MT. Pure adversarial training, which obtains 56.19% robust accuracy against PGD40 (Table 1), reaches 54.15% robust accuracy against AA+MT. This is a new state-of-the-art robust accuracy on IMAGENET, improving by +6.55% over the 47.60% reported by Debenedetti et al. (2022). While Debenedetti et al. (2022) advocate for weak data augmentation for training robust VIT, our training procedure follows He et al. 
(2022) and contains heavy augmentations (see appendix): we conclude that large models still benefit from strong data augmentations even with adversarial training. Finally, the robust mode of the model co-trained with α = 0.4 in the previous paragraph reaches 49.55% robust accuracy against AA+MT, which still surpasses the prior art and preserves competitive robust performance. 5.2 EVALUATING MODEL SOUPS Adapters enable adversarial model soups. One downside of using adapters is that one needs to know whether, for a given input image, the network should be put in clean or robust mode. This motivates adversarial model soups, which allow us to create a single model that performs well in both clean and robust accuracy. First, if we independently train two VIT-B16, one nominally and the other adversarially, and then try to perform model soups on them, we notice in Table 9 (in the appendix) that both robust and clean accuracies drop immediately when the weighting factor β between parameters is not equal to 0 or 1. We evaluate various model soups with the models of Table 1, meaning that the parameters specific to the clean and robust domain are averaged with weight β to obtain a single classifier. We notice in Figure 9 (in the appendix) that adversarial model soups work equally well with the various types of adapters, where sliding the value of β allows us to smoothly trade off clean accuracy for robustness. This validates our hypothesis that adapters enable model soups. Soup or ensemble. In Figure 4 we compare the classification performance of adversarial model soups and ensembles obtained by linear combination of the clean and robust modes at the probability prediction level. We notice that ensembling produces a better Pareto front than adversarial model soups, but ensembles, with their two forward passes, require twice the compute of model soups. Hence, adversarial model soups allow us to choose the trade-off between clean and robust accuracy with performance close to ensembling while only requiring the same compute as a single network. Extrapolation. As an aside, we experiment with adversarial model soups for extrapolation with values of the weighting factor β above 1 and below 0. Interestingly, we observe that setting β = 1.05 leads to 83.81% clean accuracy, which is better than the 83.76% obtained in the clean mode. Similarly, setting β = −0.05 leads to 52.26% robust accuracy, which is slightly better than the 52.19% obtained in the robust mode. Hence, it appears that adversarial model soups do not need to be restricted to interpolation. Soups for IMAGENET variants. As adversarial model soups allow us to create models with a chosen trade-off between clean and robust accuracy, we might expect such models to perform better than nominal ones when distribution shifts occur. For example, Kireev et al. (2021) showed that adversarial training can even help with common corruptions when specifically tuned for such a task (note that they use smaller datasets than IMAGENET).
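Because only the adapters differ between the two modes, building the adversarial model soup of Eq. 3 reduces to interpolating two tiny tensors. The following hedged sketch assumes a model shaped like the DualTokenViT sketch above (attributes cls_clean and cls_adv and a mode argument); it is illustrative and not the evaluation code behind the figures, and the evaluate helper in the comment is hypothetical.

```python
import torch

@torch.no_grad()
def soup_token(model, beta):
    """Eq. 3 adapter soup: beta * phi_clean + (1 - beta) * phi_adv."""
    return beta * model.cls_clean + (1 - beta) * model.cls_adv

@torch.no_grad()
def soup_logits(model, x, beta):
    """Run the shared backbone once with the interpolated classification token."""
    saved = model.cls_clean.detach().clone()
    model.cls_clean.copy_(soup_token(model, beta))  # temporarily install the soup token
    logits = model(x, mode="clean")
    model.cls_clean.copy_(saved)                    # restore the original clean token
    return logits

# Sweeping beta trades off clean and robust accuracy: beta = 1 recovers the clean
# mode, beta = 0 the robust mode, and values slightly outside [0, 1] extrapolate.
# A hypothetical evaluation loop could look like:
#   for beta in [b / 10 for b in range(0, 11)]:
#       accuracy = evaluate(lambda images: soup_logits(model, images, beta), loader)
```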
We then compute the accuracy of adversarial model soups with varying β on IMAGENET variants (results in Figure 5): while half of the best performance are obtained with the clean classification token, for some variants such as IMAGENET-R, IMAGENET-C and IMAGENET-SKETCH the best results are obtained with intermediate tokens. Hence, adversarial model soups can be used to reach a compromise between IMAGENET variants to get the best average performance. Here β = 0.9 yields the best mean accuracy 61.23%. In Table 3, we notice that this adversarial model soup improves the mean accuracy by +4.00% over a fine-tuned Masked Autoencoder (MAE-B16) checkpoint from He et al. (2022) and by +2.37% over Pyramid-AT from Herrmann et al. (2022). It also improves by +2.24% over the best performing ensemble of two networks trained independently with nominal and adversarial training respectively. 5.3 EVALUATING ON OTHER THREAT MODELS AND DATASETS Evaluating other threat models. IMAGENET variants are also a good benchmark to compare different types of adversarial attack to generate the perturbations for the co-training loss in Eq. 2: untargeted ℓ∞-bounded perturbations with budget ϵ = 4/255 (our standard setup), untargeted ℓ2bounded with ϵ ∈ {1, 2, 4, 8}, targeted (random target class as in Xie et al., 2019a) ℓ∞-bounded with ϵ ∈ {4/255, 8/255, 12/255}, and the Pyramid attack proposed by Herrmann et al. (2022). In Table 4, we select the best adversarial model soups after training with each method a VIT-B16 with dual classification tokens, and report its results on all variants. We see that the clean accuracy on the IMAGENET validation set improves in all cases compared to standard training. Moreover, although the best performing attack varies across variants, we notice that the untargeted ℓ∞ attack achieves the best average accuracy. Evaluating on other datasets. We further test the effect of using the co-training loss with the classification token as adapter on other datasets. In Table 5, we see that our training procedure provides a consistent performance boost in clean accuracy compared to nominal training on MNIST (LeCun et al., 2010), CIFAR-10, CIFAR-100 (Krizhevsky et al., 2014), SVHN (Netzer et al., 2011), SUN397 (Xiao et al., 2010), RESISC-45 (Cheng et al., 2017) and DMLAB (Beattie et al., 2016). This shows that our method generalizes well across datasets and can help regularize Vision Transformers on these smaller datasets, where they are known to perform worse compared to CNNs without pre-training (Zhang et al., 2021). In Appendix C, we also demonstrate that models pre-trained with co-training on IMAGENET yield significantly better classification results when fine-tuning nominally on small datasets compared to fine-tuning from nominally and adversarially pre-trained models. 6 CONCLUSION In this work we have shown that adapters with a few hundreds of domain specific parameters are sufficient to switch between models with radically different behaviors. In fact, just replacing the classification token of a VIT can turn a classifier with SOTA nominal accuracy and no adversarial robustness into another one with robust accuracy close to that achieved with standard adversarial training. Moreover, merging the adapters allows to smoothly transition between the two modes, finding classifiers (i.e. our adversarial model soups) with better performance on distribution shifts. 
These observations open up new interesting directions for future work to explore how to take advantage of the regularizing effect of adversarial training and whether it is possible to combine via soups other types of models. ACKNOWLEDGEMENTS We are grateful to Evan Shelhamer for reviewing the drafts of the paper and his literature comments, to Olivia Wiles, Florian Stimberg, Taylan Cemgil and others at DeepMind for helpful conversations and feedback on the project. A MORE EXPERIMENTAL DETAILS Training details. In this manuscript we train VIT-B16 models using the training pipeline proposed in He et al. (2022). The model is optimized for 300 epochs using the AdamW optimizer (Loshchilov & Hutter, 2017) with momenta β1 = 0.9, β2 = 0.95, with a weight decay of 0.3 and a cosine learning rate decay with base learning rate 1e-4 and linear ramp-up of 20 epochs. The batch size is set to 4096 and we scale the learning rates using the linear scaling rule of Goyal et al. (2017). We optimize the standard cross-entropy loss and we use a label smoothing of 0.1. We apply stochastic depth (Huang et al., 2016) with base value 0.1 and with a dropping probability linearly increasing with depth. Regarding data augmentation, we use random crops resized to 224 × 224 images, mixup (Zhang et al., 2018), CutMix (Yun et al., 2019) and RandAugment (Cubuk et al., 2020) with two layers, magnitude 9 and a random probability of 0.5. We note that our implementation of RandAugment is based on the version found in the timm library (Wightman, 2019). We also use exponential moving average with momentum 0.9999. For RESNET-50 we keep the same training scheme used for VIT-B16, and the standard architecture, except for combining GroupNorm with Weight Standardization in all convolutional layers following Kolesnikov et al. (2020). For the DualParams BatchNorm version we fix the robust branch to always use the accumulated statistics rather then the batch ones. Training on smaller datasets. When training from scratch on smaller datasets in Section 5.3, we optimize the smaller VIT-S with a batch size of 1024 and a base learning rate of 2e-4. For datasets with small image resolution such as CIFAR-10, we do not rescale the images to 224 × 224 but we use a patch size of 4 and a stride of 2 to get enough vision tokens. Attack details. For PGD2 and PGD5 we use a gradient descent update with a fixed step size of 2.5/255 and 1/255 respectively. For PGD40 we change the optimizer to Adam with step-size 0.1 decayed by 10 × at steps 20 and 30. Regarding one step attacks, we use a step size of 6/255 and initialization radius of 8/255 for N-FGSM and a step size of 5/255 for Fast-AT. B VISUALIZING FILTERS Visualization procedure. We visualize the embedding layer by first standardizing the weights to have zero mean and unit variance. We then extract the first 28 principal components. Finally we reshape them to 16 × 16 × 3 images and rescale them to have their values between 0 and 255 such as to display these components as RGB images. C TRANSFER LEARNING Training details. For completeness we evaluate the transfer learning performance of the VITB16 pre-trained on IMAGENET by co-training on clean and adversarial samples. We choose the model trained with classification token adapter and co-training coefficient α = 0.4, which we finetune nominally on CIFAR-10, CIFAR-100, SUN-397, RESISC-45 and DMLAB using SGD with momentum 0.9, a batch size of 512, gradient clipping at global norm 1 and no weight decay. 
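As a rough sketch of this fine-tuning recipe, the optimizer and training step described above could be set up as follows. It is an assumption-laden illustration (generic model and data, no learning-rate schedule) rather than the exact code behind the transfer-learning results.

```python
import torch
import torch.nn.functional as F

def make_finetune_optimizer(model, base_lr=0.01):
    """SGD with momentum 0.9 and no weight decay, as described above."""
    return torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9, weight_decay=0.0)

def finetune_step(model, optimizer, images, labels):
    """One nominal fine-tuning step with gradient clipping at global norm 1."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels, label_smoothing=0.1)  # label smoothing of 0.1 (see text)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.detach()
```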
We optimize the standard cross-entropy loss and we use a label smoothing of 0.1. For simplicity, we use the same training schedule for all the datasets: a total of 10k training steps and a base learning rate of 0.01 attained after a linear ramp-up of 500 steps followed by a cosine decay. Regarding data pre-processing, we simply rescale the images to 224 × 224 resolution without preserving aspect ratio and we apply random horizontal flipping as data augmentation. Finally, we use exponential moving average with momentum 0.999. Fine-tuning results. As the network was pre-trained with classification token adapter, we have several possibilities for initializing the classification token before fine-tuning: adversarial token, clean token and model soups interpolating between these two tokens. For comparison, we also fine-tune two VIT-B16 pre-trained on IMAGENET with nominal and adversarial training respectively. We report the results in Table 6 where we evaluate several fine-tuning strategies: fine-tuning (i) the classifier head, (ii) the classifier head and the classification token and (iii) all the weights. First, we observe that fine-tuning both the classification token and the classifier head brings only a small improvement (from 79.27% to 80.70% for the best average accuracy) over fine-tuning the classifier head alone. Fine-tuning all the weights is the best strategy as it reaches 88.40% average accuracy. Second, we observe that initializing the classification token with the adversarial token performs consistently better than with the clean token when fine-tuning all the weights. Finally, co-training as pre-training is significantly better than nominal and adversarial pre-training as fine-tuning from a co-trained model reaches 88.40% average accuracy, a +1.05% improvement over nominal and adversarial pre-training. D ACCURACY LANDSCAPE In our case, model soups are obtained by linear interpolation (or extrapolation) of the adversarial and clean tokens. We notice that the clean and adversarial tokens are almost orthogonal (cos(ϕclean,ϕadv) = 0.14), so we can extend our study beyond model soups by taking linear combinations of the two tokens β1ϕclean + β2ϕadv. By taking a sweep over the β1 and β2 coefficients, we obtain in Figure 8 the clean and robust accuracy landscapes in the plane defined by the two tokens and where the diagonal corresponds to the model soups. We observe that the main direction of change for the clean and robust accuracies is the model soups diagonal (top left to bottom right). We can clearly see the trade-off in clean/robust accuracy, but also there seems to be a compromise Table 6: Co-training as pre-training. We compare the transfer learning performance of a model pre-trained using co-training to models pre-trained with nominal and adversarial training. We evaluate various fine-tuning strategies on several datasets (headers in green) and we report the average over datasets in the last rows (orange header). We also assess several initializations for the classification token before fine-tuning: adversarial token, clean token and model soups between these two tokens with various weightings β. All models are pre-trained on IMAGENET and use the same VIT-B16 architecture during fine-tuning. 
SETUP | Nominal (baseline) | Adversarial (baseline) | Robust mode | β = 0.25 | β = 0.5 | β = 0.75 | Clean mode
CIFAR-10
Fine-tune head | 96.07% | 90.95% | 90.28% | 91.17% | 93.61% | 96.50% | 97.15%
Fine-tune head + cls token | 96.62% | 92.76% | 97.73% | 97.70% | 97.77% | 97.82% | 97.84%
Fine-tune all | 98.68% | 98.96% | 99.09% | 99.03% | 99.01% | 99.05% | 99.03%
CIFAR-100
Fine-tune head | 83.30% | 73.80% | 71.94% | 73.52% | 77.78% | 83.99% | 85.47%
Fine-tune head + cls token | 84.59% | 76.79% | 87.26% | 87.49% | 87.55% | 87.45% | 87.43%
Fine-tune all | 91.18% | 91.74% | 92.37% | 92.23% | 92.32% | 92.41% | 92.29%
SUN-397
Fine-tune head | 72.70% | 65.62% | 65.93% | 67.02% | 70.19% | 73.00% | 73.47%
Fine-tune head + cls token | 73.05% | 67.21% | 73.99% | 74.14% | 74.19% | 74.12% | 74.15%
Fine-tune all | 76.48% | 75.66% | 77.87% | 77.75% | 77.74% | 77.67% | 77.72%
RESISC-45
Fine-tune head | 91.69% | 86.70% | 86.54% | 87.37% | 89.64% | 90.58% | 91.12%
Fine-tune head + cls token | 91.95% | 87.52% | 91.04% | 91.07% | 91.04% | 91.49% | 91.23%
Fine-tune all | 96.78% | 96.14% | 97.07% | 96.72% | 96.88% | 97.07% | 96.80%
DMLAB
Fine-tune head | 50.02% | 50.11% | 48.58% | 48.60% | 49.08% | 49.07% | 49.16%
Fine-tune head + cls token | 50.91% | 51.53% | 50.81% | 51.79% | 52.47% | 52.64% | 52.41%
Fine-tune all | 73.65% | 73.93% | 75.61% | 75.66% | 75.74% | 75.35% | 75.58%
AVERAGE
Fine-tune head | 78.76% | 73.44% | 72.65% | 73.54% | 76.06% | 78.63% | 79.27%
Fine-tune head + cls token | 79.42% | 75.16% | 80.17% | 80.44% | 80.60% | 80.70% | 80.61%
Fine-tune all | 87.35% | 87.29% | 88.40% | 88.28% | 88.34% | 88.31% | 88.28%
Figure 8: Linear combination of tokens. We report the clean accuracy (panel (a)) and robust accuracy against PGD2 (panel (b)) on IMAGENET for various linear combinations of the clean and adversarial tokens. Model soups, which are linear interpolation (and extrapolation) between these two tokens, are on the diagonal from top left to bottom right. Panel (c) shows the arithmetic mean between the normalized (with min/max rescaling) clean and robust accuracies (red means higher mean accuracy).
region between clean and robust accuracy as the other diagonal (from bottom left to top right) is visually distinct for clean and robust accuracy. In panel (c) of Figure 8, we plot the arithmetic mean between the normalized (with min/max rescaling) clean and robust accuracies. We observe that the best compromises between clean and robust accuracy have a stronger adversarial token weight than the clean token weight. E LIMITATIONS AND FUTURE WORK We have empirically shown that co-training a fully shared VIT does not retain any robustness whereas having two classification tokens specific to the clean and adversarial images is enough to get competitive performance both in clean and robust accuracy. However, we leave to future work the theoretical explanation on why this small architecture change (adding only 768 parameters) results in such a gap in performance. Similarly, beyond our intuition that parameter sharing when using adapters makes model soups possible, we cannot support our empirical results with theory and leave it to future work. Another direction for future work is the automatic selection of the right soup for each sample which could be inspired by automatic selection modules like in Lo & Patel (2021).
F ADDITIONAL TABLES AND FIGURE In the following we present additional tables and figures of results described in the main part but omitted above because of space limits.
1. What is the focus of the paper regarding co-training on clean and adversarial inputs? 2. What are the strengths and weaknesses of the proposed approach in terms of adapters and domain-specific parameters? 3. Do you have any concerns or questions regarding the paper's contribution? 4. What are the limitations of the adversarial model soup approach? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper introduces a new finding that is contrary to previous works. That is, it is not necessary to separate batch statistics when co-training on clean and adversarial data. It shows that using the classification token of a Vision Transformer (ViT) as an adapter is sufficient to achieve the performance of the dual normalization layers proposed by previous works. This paper also introduces “adversarial model soups” that allow a smooth transition between the clean and robust modes. These observations provide new insights into the regularization effect of adversarial training. Strengths And Weaknesses Strengths: AdvProp (Xie et al. 2019a) improves image recognition by adversarial training with separate batch statistics of clean and adversarial data, first showing that adversarial examples can benefit model accuracy. This paper views AdvProp from a novel adapter perspective. It demonstrates that separating batch statistics is not necessary and that using domain-specific trainable parameters can achieve similar performance. The motivation is clear, and the new finding is interesting. Compared to AdvProp that only consider clean performance, this paper takes both clean and robust performance into consideration. It introduces “adversarial model soups” that can trade off clean and robust accuracy via the adapter. This makes the models more flexible and applicable to more practical scenarios. In addition to adversarial examples, this paper also considers the robustness against distribution shifts, where multiple ImageNet variant datasets are used for benchmarking, such as ImageNet-C and ImageNet-A. It is always good to evaluate on broad datasets. Figure 2 is impressive. It first demonstrates the importance of the “dual” technique, then shows that AdvProp is not the only effective dual technique. Multiple normalization methods are evaluated. The results indicate that domain-specific trainable parameters are key, not batch statistics. Weaknesses / Questions / Suggestions: If my understanding is correct, the beta is a hyper-parameter of adversarial model soups that is manually adjusted. If that is true, it is less practical and less novel. At inference time, the given inputs would be any type (clean, adversarial, distribution shifts, etc). It is impossible to adjust the clean/adversarial modes or the beta value for each input sample. An automatic mechanism is needed. The model is expected to self-decide a proper mode or beta value for each input automatically with an end-to-end pipeline. In this case, the contribution of the “adversarial model soups” would be more significant. The author may refer to [r1], which combines an automatic selection module with separate batch normalization layers to achieve the idea. The author may figure out a way to automate the adversarial model soups. The current method is just a linear combination of domain-specific parameters, so the novelty is limited. Several experimental results are strange. In Table 1, Co-training (Eq. 1) gets 0% robust accuracy, which is unexpected. With alpha=0.5, Co-training should still get robustness to a certain extent (see Xie & Yuille, 2019). Similarly, in Table 2, it is also unexpected that Fast-AT gets 0% robust accuracy. According to (Wong et al. 2020), Fast-AT can achieve decent robustness with proper step size and random initialization. The authors are asked to explain these results, otherwise, the results would be less convincing. Several figures are not clear enough. 
Figure 1 is suggested to denote phi_clean, phi_adv, etc. (corresponds to Equations) on the figure to make it more understandable. Figure 4 (similarly, Figure 9 in the appendix) should provide the corresponding beta values like Figure 3 provides alpha values. Figure 5 should self-contain the x-axis tile (beta). The tables of experimental results can be more complete. Specifically, the section “Robustness to stronger attack” should have a table showing the numbers, which would be much more clear. Or expand columns in Table 1 for stronger attacks. Furthermore, Table 9 (in the appendix) should compare the numbers of adversarial model soups as well. [r1] S.-Y. Lo and V. M. Patel, “Defending Against Multiple and Unforeseen Adversarial Videos,” in IEEE Transactions on Image Processing, 2021. Clarity, Quality, Novelty And Reproducibility This paper has fair clarity, quality, novelty and reproducibility. The main concerns about clarity and novelty are discussed in Weaknesses.
ICLR
Title Revisiting adapters with adversarial training Abstract While adversarial training is generally used as a defense mechanism, recent works show that it can also act as a regularizer. By co-training a deep network on clean and adversarial inputs, it is possible to improve classification accuracy on the clean, non-adversarial inputs. We demonstrate that, contrary to previous findings, it is not necessary to separate batch statistics when co-training on clean and adversarial inputs, and that it is sufficient to use adapters with few domain-specific parameters for each type of input. We establish that using the classification token of a Vision Transformer (VIT) as an adapter is enough to match the classification performance of dual normalization layers, while using significantly less additional parameters. First, we improve upon the top-1 accuracy of a non-adversarially trained VIT-B16 model by +1.12% on IMAGENET (reaching 83.76% top-1 accuracy). Second, and more importantly, we show that training with adapters enables model soups through linear combinations of the clean and adversarial tokens. These model soups, which we call adversarial model soups, allow us to trade-off between clean and robust accuracy without sacrificing efficiency. Finally, we show that we can easily adapt the resulting models in the face of distribution shifts. Our VIT-B16 obtains top-1 accuracies on IMAGENET variants that are on average +4.00% better than those obtained with Masked Autoencoders. 1 INTRODUCTION Deep networks are inherently susceptible to adversarial perturbations. Adversarial perturbations fool deep networks by adding an imperceptible amount of noise which leads to an incorrect prediction with high confidence (Carlini & Wagner, 2017; Goodfellow et al., 2015; Kurakin et al., 2016b; Szegedy et al., 2014). There has been a lot of work on building defenses against adversarial perturbations (Papernot et al., 2016; Kannan et al., 2018); the most commonly used defense is adversarial training as proposed by Madry et al. (2018) and its variants (Zhang et al., 2019; Pang et al., 2020; Huang et al., 2020; Rice et al., 2020; Gowal et al., 2020), which use adversarially perturbed images at each training step as training data. Earlier studies (Kurakin et al., 2016a; Xie et al., 2019b) showed that using adversarial samples during training leads to performance degradation on clean images. However, AdvProp (Xie et al., 2019a) challenged this observation by showing that adversarial training can act as a regularizer, and therefore improve nominal accuracy, when using dual batch normalization (BatchNorm) layers (Ioffe & Szegedy, 2015) to disentangle the clean and adversarial distributions. We draw attention to the broad similarity between the AdvProp approach and the adapters literature (Rebuffi et al., 2017; Houlsby et al., 2019) where a single backbone network is trained on multiple domains by means of adapters, where a few parameters specific to each domain are trained separately while the rest of the parameters are shared. In light of this comparison, we further develop the line of work introduced by AdvProp and analyze it from an adapter perspective. In particular, we explore various adapters and aim to obtain the best classification performance with minimal additional parameters. Our contributions are as follows: • We show that, in order to benefit from co-training on clean and adversarial samples, it is not necessary to separate the batch statistics of clean and adversarial images in BatchNorm layers. 
We demonstrate empirically that it is enough to use domain specific trainable parameters to achieve similar results. ∗Work done during an internship at DeepMind • Inspired by the adapters literature, we evaluate various adapters. We show that training separate classification tokens of a VIT for the clean and adversarial domains is enough to match the classification performance of dual normalization layers with 49× fewer domain specific parameters. This classification token acts as a conditioning token which can modify the behaviour of the network to be either in clean or robust mode (Figure 1). • Unlike Xie et al. (2019a) and Herrmann et al. (2022), we also aim at preserving the robust performance of the network against adversarial attacks. We show that our conditional token can obtain SOTA nominal accuracy in the clean mode while at the same time achieving competitive ℓ∞-robustness in the robust mode. As a by-product of our study, we show that adversarial training of VIT-B16 on IMAGENET leads to state-of-the-art robustness against ℓ∞-norm bounded perturbations of size 4/255. • We empirically demonstrate that training with adapters enables model soups (Wortsman et al., 2022). This allow us to introduce adversarial model soups, models that trade-off between clean and robust accuracy through linear interpolation of the clean and adversarial adapters. To the best of our knowledge, our work is the first to study adversarial model soups. We also show that adversarial model soups perform better on IMAGENET variants than the state-of-the-art with masked auto-encoding (He et al., 2022). 2 RELATED WORK Adversarial training. Although more recent approaches have been proposed, the most successful method to reduce the vulnerability of image classifiers to adversarial attacks is adversarial training, which generates on-the-fly adversarial counterparts for the training images and uses them to augment the training set (Croce et al., 2020). Goodfellow et al. (2015) used the single-step Fast Gradient Sign Method (FGSM) attack to craft such adversarial images. Later, Madry et al. (2018) found that using iterative Projected Gradient Descent (PGD) yields models robust to stronger attacks. Their scheme has been subsequently improved by several modifications, e.g. a different loss function (Zhang et al., 2019), unlabelled or synthetic data (Carmon et al., 2019; Uesato et al., 2019; Gowal et al., 2021), model weight averaging (Gowal et al., 2020), adversarial weight perturbations (Wu et al., 2020), and better data augmentation (Rebuffi et al., 2021). While the main drawback of adversarial training is the degradation of performance of robust models on clean images (Tsipras et al., 2018), Xie et al. (2019a) showed that adversarial images can be leveraged as a strong regularizer to improve the clean accuracy of classifiers on IMAGENET. In particular, they propose AdvProp, which introduces separate BatchNorm layers specific to clean or adversarial inputs, with the remaining layers being shared. This approach and the role of normalization layers when training with both clean and adversarial points has been further studied by (Xie & Yuille, 2019; Walter et al., 2022). Recently, Wang et al. 
(2022) suggest removing BatchNorm layers from the standard RESNET architecture (He et al., 2016) to retain high clean accuracy with adversarial training, but this negatively affects the robustness against stronger attacks.1 Finally, (Kireev et al., 2021; Herrmann et al., 2022) showed that carefully tuning the threat model in adversarial training might improve the performance on clean images and in the presence of distribution shifts, such as common corruptions (Hendrycks & Dietterich, 2018). Adapters. In early work on deep networks, Caruana (1997) showed that sharing network parameters among tasks acts as a regularizer. Aiming at a more efficient parameter sharing, (Rebuffi et al., 2017; Rosenfeld & Tsotsos, 2018) introduced adapters – small training modules specific to each task which can be stitched all along the network. In other lines of work, (Mallya et al., 2018; Mancini et al., 2018) adapt a model to new tasks using efficient weight masking and (Li et al., 2016; Maria Carlucci et al., 2017) perform domain adaptation by batch statistics modulation. While these approaches require having as many adapters as tasks, Perez et al. (2018) propose an adapter layer whose weights are generated by a conditioning network. Besides computer vision, adapters are also used in natural language processing for efficient fine-tuning (Houlsby et al., 2019; Pfeiffer et al., 2020; Wang et al., 2020) and multi-task learning (Stickland & Murray, 2019). Merging multiple models. While ensembles are a popular and successful way to combine multiple independently trained classifiers to improve on individual performance (Ovadia et al., 2019; GontijoLopes et al., 2021), they increase the inference cost as they require a forward pass for each sub-network 1See https://github.com/amazon-research/normalizer-free-robust-training/issues/2. of the ensemble. An alternative approach is taken by Wortsman et al. (2022) who propose to finetune a fully trained model with different hyperparameter configurations and then average the entire set of weights of the various networks. The obtained model soups get better performance than each individual model and even ensembles. Model soups are in spirit similar to Stochastic Weight Averaging (Izmailov et al., 2018) which consists in averaging weights along an optimization trajectory rather than averaging over independent runs. 3 METHOD 3.1 CO-TRAINING WITH NOMINAL AND ADVERSARIAL TRAINING Goodfellow et al. (2015) propose adversarial training as a way to regularize standard training. They jointly optimize the model parameters θ on clean and adversarial images with the co-training loss αL(f(x;θ), y) + (1− α)max δ∈S L(f(x+ δ;θ), y), (1) where pairs of associated examples x and labels y are sampled from the training dataset, f(·;θ) is a model parametrized by θ, L defines the loss function (such as the cross-entropy loss in the classification context), and S is the set of allowed perturbations. Setting α = 1 boils down to nominal training on clean images and setting α = 0 leads to adversarial training as defined by Madry et al. (2018). In our case, we consider ℓ∞ norm-bounded perturbations of size ϵ = 4/255, so we have S = {δ | ∥δ∥∞ ≤ ϵ}, and we use untargeted attacks to generate the adversarial perturbations δ (see details in Section 4). 3.2 SEPARATING BATCH STATISTICS IS NOT NECESSARY BatchNorm is a widely used normalization layer shown to improve performance and training stability of image classifiers (Ioffe & Szegedy, 2015). 
We recall that a BatchNorm layer, given a batch as input, first normalizes it by subtracting the mean and dividing by the standard deviation computed over the entire batch, then it applies an affine transformation, with learnable scale and offset parameters. During training, it accumulates these so-called batch statistics to use during test time, so that the output of the classifier for each image is independent of the other images in the batch. The batch statistics can be seen an approximation of the statistics over the image distribution. Xie et al. (2019a) show that optimizing the co-training loss in Eq. 1 can yield worse results on clean images than simple nominal training. This is especially the case when the network has a low capacity or the attack (i.e., the inner maximization) is too strong (such as using a large perturbation radius ϵ). To solve this issue, they propose AdvProp, which consists in using distinct BatchNorm layers for clean and adversarial images. They argue that “maintaining one set of [BatchNorm] statistics results in incorrect statistics estimation”, which could be the reason for the performance degradation. We note that using two sets of BatchNorm layers for the clean and adversarial samples as in AdvProp creates two sets of batch statistics but also two sets of learnable scale and offset parameters. In the following we investigate whether having separate batch statistics is a necessary condition for successful co-training. Figure 2 shows the clean and robust accuracy of various model architectures as training progresses. The left panel demonstrates that, if we share both batch statistics and scales/offsets (Shared BatchNorm, orange curves), the robust accuracy (orange dashed line) quickly drops, far from the one obtained by AdvProp (Dual BatchNorm, blue curve) which is above 34%. However, if we use a single set of batch statistics but specific scales and offsets for clean and adversarial images, we can observe on the right panel of Figure 2 that the robust accuracy (DualParams BatchNorm, orange dashed line) matches the one (blue dashed line) obtained by AdvProp. This demonstrates that it is possible to achieve nominal and robust classification results similar to those of AdvProp without separate batch statistics. Furthermore, there exist normalization layers such as LayerNorm (Ba et al., 2016) or GroupNorm (Wu & He, 2018) which do not use batch statistics, as their normalization step is done per sample and not per batch. Hence, according to the hypothesis of Xie et al. (2019a), these types of normalization layer should not suffer from performance degradation. Nevertheless, the left panel of Figure 2 shows that their robust accuracy (green and red dashed lines) does not match the robust accuracy of AdvProp (Dual BatchNorm), and is unstable over training steps. However, by making the scales and offsets of LayerNorm and GroupNorm specific to clean and adversarial images, their robust accuracy matches that obtained with dual BatchNorm layers, as shown in the right panel of Figure 2. This suggests that a key element to make the co-training loss of Eq. 
1 work for various normalization layers is to have trainable parameters which are specific to the clean and adversarial images.2 3.3 REVISITING ADAPTERS WITH ADVERSARIAL TRAINING The last observation strongly relates this setting to the adapters literature where a single backbone architecture has some parameters, called adapters, which are specific to different domains while the rest of the parameters are shared among tasks. In our case, the clean images form one domain and the adversarial images constitute another domain. In this work, we go beyond having separate normalization layers for the clean and adversarial images and investigate other types of adapters. 2Interestingly, contrary to our observation that standard GroupNorm fails to retain robustness, Xie & Yuille (2019) report that GroupNorm matches Dual BatchNorm. We explain this difference as we use a stronger untargeted attack in this manuscript compared to the targeted attack of Xie & Yuille (2019). Using a stronger attack allows us to reveal failure modes that would have been hidden otherwise. Formally, the model parameters θ can be decomposed into parameters ψ which are shared among domains and parameters ϕ which are specific to a domain. We call ϕclean the parameters used when training on clean images and ϕadv the parameters used when training on adversarial images. For example, in Section 3.2, when we used dual LayerNorm layers, the scales and offsets of these normalization layers are contained in ϕclean and ϕadv whereas the rest of the model parameters are in ψ. Based on Eq. 1, we optimize the following loss: αL(f(x;ψ ∪ ϕclean), y) + (1− α)max δ∈S L(f(x+ δ;ψ ∪ ϕadv), y). (2) Finally, we introduce some notation for models with adapters at inference time: we call f(·;ψ∪ϕclean) the clean mode for prediction as we use the adapters ϕclean trained on the clean data. Conversely, we call f(·;ψ ∪ ϕadv) the robust mode when using the adapters ϕadv trained on the perturbed data. 3.4 TRAINING WITH ADAPTERS ENABLES ADVERSARIAL MODEL SOUPS Wortsman et al. (2022) propose model soups, which consist in averaging the weights of multiple models fine-tuned from the same pre-trained model. The resulting weight averaged model can benefit from the original models without incurring any extra compute and memory cost at inference time. Currently, in our setting the user would have to know at test time if the network should be in clean or robust mode. A model soup, by its ability to merge models, is a way to bypass this issue. We formulate the hypothesis that training with adapters enables model soups. With this in mind, we observe that training with adapters means that most of the model parameters are already shared, so model souping would simply consist in linearly interpolating the weights of the adapters for the two modes. We call adversarial model soups, the model soups with a model co-trained on clean and adversarial samples. We get the following parametrized model: f(·;ψ ∪ (βϕclean + (1− β)ϕadv)) (3) where β is the weighting factor when averaging the adapters. If β = 1, the model soup boils down to the clean mode and conversely β = 0 corresponds to the robust mode. In Section 5.2, we assess this hypothesis and show that forming model soups between independent nominal and robust models fails. 4 EXPERIMENTAL SETUP Architecture. We focus our study on the B16 variant of the Vision Transformer (VIT-B16) introduced by Dosovitskiy et al. (2020). We adopt the modifications proposed by He et al. 
(2022): the linear classifier is applied on the mean of the final tokens except the classification token. We train this network by using supervised training from scratch as proposed in He et al. (2022) (see the appendix). Attacks. We consider adversarial robustness against untargeted ℓ∞-bounded attacks with radius ϵ = 4/255. This is the most common setup for IMAGENET models, and it is more challenging to defend against than the targeted threat model used by Xie & Yuille (2019). To generate the adversarial perturbations we use Projected Gradient Descent (Madry et al., 2018) with 2 steps named PGD2 (see details in the appendix) at training time and with 40 steps for evaluation (PGD40). Datasets. We focus our experimental evaluation on the IMAGENET dataset (Russakovsky et al., 2015), with images at 224 × 224 resolution for both training and testing, as this is the standard large-scale benchmark for SOTA models and was used by Xie et al. (2019a) for AdvProp. We report clean and adversarial accuracy on the whole validation set. Moreover, we test the robustness against distribution shifts via several IMAGENET variants: IMAGENET-C (Hendrycks & Dietterich, 2018), IMAGENET-A (Hendrycks et al., 2019), IMAGENET-R (Hendrycks et al., 2020), IMAGENET-SKETCH (Wang et al., 2019), and Conflict Stimuli (Geirhos et al., 2018). 5 EXPERIMENTAL RESULTS Similarly to our observation in Section 3.2 for a RESNET-50, a fully shared VIT-B16 trained with the co-training loss Eq. 1 fails to retain any robustness. Therefore, we first investigate various adapters for VIT-B16 to find an efficient training setting in Section 5.1. Then we study adversarial model soups with adapters in Section 5.2 and finally show that training with adapters generalizes to other datasets and threat models. 5.1 FINDING AN EFFICIENT SETTING Choice of adapter. Using adapters increases the number of parameters as the layers which we choose as adapters have twice as many parameters: one set of parameters for clean images and another for adversarial images. Hence, to avoid increasing the network memory footprint too heavily, we restrict our adapters study to layers with few parameters, thus excluding self-attention (Vaswani et al., 2017) layers and MLP layers. This leaves the options of having dual embedder, positional embedding, normalization layers or classification token; among them, the classification token has by far the least amount of parameters, 49-770× fewer than the other candidates (see details in Table 1). We must still verify that so few parameters are enough to preserve the advantages of the AdvProp training scheme. Hence, we train a model for each type of adapter and compare them with two models without adapters, one trained with nominal training and the other with adversarial training. We observe in Table 1 that by using two classification tokens as adapters, which means only 768 extra parameters out of 86M, we reach 83.56% clean accuracy on IMAGENET, which is an improvement of +0.92% over standard training. Moreover, we obtain a robust accuracy of 49.87% in the robust mode, which is close to the robust accuracy given by adversarial training. Notably, we see that adapting other layers with more parameters such as all LayerNorm scales and offsets results in similar performances in both clean and robust modes. 
The results in Table 1 indicate that (i) it is not necessary to split the normalization layers to reproduce the effect of AdvProp, and (ii) even a very small number of dual parameters provides sufficient expressiveness to adapt the shared portion of the network to the two modes. Therefore, in the rest of the manuscript we focus on dual classification tokens as they require the smallest number of extra parameters. Number of attack steps. As the results in Table 1 were obtained with PGD2, we check whether we can reduce the number of attack steps to be more computationally efficient. In Table 2, we report the results for two one-step methods: N-FGSM by de Jorge et al. (2022) and FAST-AT by Wong et al. (2020). If we use the step sizes recommended in the corresponding papers, both methods suffer from catastrophic overfitting (Wong et al., 2020) (illustrated in Figure 6 in the appendix) and therefore have no robustness at all. In Table 2 we avoid such catastrophic overfitting by reducing the step sizes to ϵ and 0.75ϵ for FAST-AT and N-FGSM respectively, and we observe that both methods perform more than 1% worse in robust accuracy than PGD2. We also increase the number of attack steps to 5 with PGD5. We notice a small improvement over PGD2 of 0.4% in robust accuracy while the clean accuracy is on par. Hence, PGD2 seems to be a good compromise between efficiency and classification performance. Weighting the co-training loss. In the co-training loss in Eq. 1, the α hyperparameter controls how much the loss is weighted towards clean or adversarial samples. For example, setting α = 0 means we train solely on adversarial samples. In Figure 3, where we evaluate several values for α (dividing the range between 0 and 1 into intervals of length 0.1), we notice that only the values between α = 0 and α = 0.4 form a Pareto front that strictly dominates the other intervals. Indeed, between α = 1 and α = 0.4, decreasing α leads to better performance in both clean and robust modes. In fact, setting α = 0.4 leads to 83.76% clean accuracy (in clean mode) and 52.19% robust accuracy (in robust mode), which are both better than the values obtained in Table 1 with α = 0.5. In Figure 7 (in the appendix), we visualize the filters of the embedder when training with various values of α. We observe that for α = 0.2 and for α = 0.8 the filters look quite similar to the filters learned with adversarial training (α = 0) and nominal training (α = 1), respectively. Interestingly, filters learned with α = 0.4 and α = 0.6 are not a simple combination of nominal and adversarial filters but rather new, visually distinct filters. This indicates that co-training on clean and adversarial samples can lead to a new hybrid representation for the shared layers compared to nominal and adversarial training. Robustness to stronger attacks. For completeness we further test the robustness of a subset of our models with a mixture of AUTOATTACK (Croce & Hein, 2020) and MULTITARGETED (Gowal et al., 2019), denoted by AA+MT. Pure adversarial training, which obtains 56.19% robust accuracy against PGD40 (Table 1), reaches 54.15% robust accuracy against AA+MT. This is a new state-of-the-art robust accuracy on IMAGENET, improving by +6.55% over the 47.60% reported by Debenedetti et al. (2022). While Debenedetti et al. (2022) advocate for weak data augmentation for training robust VITs, our training procedure follows He et al.
(2022) and contains heavy augmentations (see appendix): we conclude that large models still benefit from strong data augmentations even with adversarial training. Finally, the robust mode of the model co-trained with α = 0.4 in the previous paragraph reaches 49.55% robust accuracy against AA+MT, which still surpasses the prior art and preserves competitive robust performance. 5.2 EVALUATING MODEL SOUPS Adapters enable adversarial model soups. One downside of using adapters is that one needs to know whether, for a given input image, the network should be put in clean or robust mode. This motivates adversarial model soups, which allow to create a single model performing well both in clean and robust accuracy. First, if we independently train two VIT-B16, one nominally and the other adversarially, and then try to perform model soups on them, we notice in Table 9 (in the appendix) that both robust and clean accuracies drop immediately when the weighting factor β between parameters is not equal to 0 or 1. We evaluate various model soups with the models of Table 1, meaning that the parameters specific to the clean and robust domain are averaged with weight β to obtain a single classifier. We notice in Figure 9 (in the appendix) that adversarial model soups work equally well with the various types of adapters, where sliding the value of β allows to smoothly trade off clean accuracy for robustness. This validates our hypothesis that adapters enable model soups. Soup or ensemble. In Figure 4 we compare the classification performance of adversarial model soups and ensembles obtained by linear combination of the clean and robust modes at the probability prediction level. We notice that ensembling produces a better Pareto front than adversarial model soups, but ensembles, with their two forward passes, require twice the compute of model soups. Hence, adversarial model soups allow to choose the trade-off between clean and robust accuracy with performance close to ensembling while only requiring the same compute as a single network. Extrapolation. As an aside, we experiment with adversarial model soups for extrapolation with values of the weighting factor β above 1 and below 0. Interestingly, we observe that setting β = 1.05 leads to 83.81% clean accuracy, which is better than the 83.76% obtained in the clean mode. Similarly, setting β = −0.05 leads to 52.26% robust accuracy, which is slightly better than the 52.19% obtained in the robust mode. Hence, it appears that adversarial model soups do not need to be restricted to interpolation. Soups for IMAGENET variants. As adversarial model soups allow to create models with a chosen trade-off between clean and robust accuracy, we might expect that such models perform better than nominal ones when distribution shifts occur. For example, Kireev et al. (2021) showed that adversarial training can even help with common corruptions when specifically tuned for such a task (note that they use smaller datasets than IMAGENET).
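Before turning to the IMAGENET variants, the soup itself can be written down explicitly: with dual classification tokens it is a single linear combination of two 768-dimensional vectors. The sketch below is illustrative only (random stand-in tokens, hypothetical write-back into the backbone), but it shows why the operation adds no inference cost.

    import torch

    def soup_token(phi_clean, phi_adv, beta):
        # Eq. 3: beta = 1 recovers the clean mode, beta = 0 the robust mode;
        # beta outside [0, 1] extrapolates, as in the paragraph above.
        return beta * phi_clean + (1.0 - beta) * phi_adv

    # Toy example with 768-dimensional tokens (the VIT-B16 token size).
    phi_clean, phi_adv = torch.randn(768), torch.randn(768)
    for beta in [-0.05, 0.0, 0.25, 0.5, 0.75, 1.0, 1.05]:
        token = soup_token(phi_clean, phi_adv, beta)
        # In the assumed setup, `token` would be written back into the shared backbone
        # before running a single forward pass per image.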
We then compute the accuracy of adversarial model soups with varying β on IMAGENET variants (results in Figure 5): while half of the best results are obtained with the clean classification token, for some variants such as IMAGENET-R, IMAGENET-C and IMAGENET-SKETCH the best results are obtained with intermediate tokens. Hence, adversarial model soups can be used to reach a compromise between IMAGENET variants to get the best average performance. Here β = 0.9 yields the best mean accuracy of 61.23%. In Table 3, we notice that this adversarial model soup improves the mean accuracy by +4.00% over a fine-tuned Masked Autoencoder (MAE-B16) checkpoint from He et al. (2022) and by +2.37% over Pyramid-AT from Herrmann et al. (2022). It also improves by +2.24% over the best performing ensemble of two networks trained independently with nominal and adversarial training respectively. 5.3 EVALUATING ON OTHER THREAT MODELS AND DATASETS Evaluating other threat models. IMAGENET variants are also a good benchmark to compare different types of adversarial attack used to generate the perturbations for the co-training loss in Eq. 2: untargeted ℓ∞-bounded perturbations with budget ϵ = 4/255 (our standard setup), untargeted ℓ2-bounded with ϵ ∈ {1, 2, 4, 8}, targeted (random target class as in Xie et al., 2019a) ℓ∞-bounded with ϵ ∈ {4/255, 8/255, 12/255}, and the Pyramid attack proposed by Herrmann et al. (2022). In Table 4, for each attack we train a VIT-B16 with dual classification tokens, select its best adversarial model soup, and report the results on all variants. We see that the clean accuracy on the IMAGENET validation set improves in all cases compared to standard training. Moreover, although the best performing attack varies across variants, we notice that the untargeted ℓ∞ attack achieves the best average accuracy. Evaluating on other datasets. We further test the effect of using the co-training loss with the classification token as the adapter on other datasets. In Table 5, we see that our training procedure provides a consistent performance boost in clean accuracy compared to nominal training on MNIST (LeCun et al., 2010), CIFAR-10, CIFAR-100 (Krizhevsky et al., 2014), SVHN (Netzer et al., 2011), SUN397 (Xiao et al., 2010), RESISC-45 (Cheng et al., 2017) and DMLAB (Beattie et al., 2016). This shows that our method generalizes well across datasets and can help regularize Vision Transformers on these smaller datasets, where they are known to perform worse than CNNs without pre-training (Zhang et al., 2021). In Appendix C, we also demonstrate that models pre-trained with co-training on IMAGENET yield significantly better classification results when fine-tuning nominally on small datasets compared to fine-tuning from nominally and adversarially pre-trained models. 6 CONCLUSION In this work we have shown that adapters with a few hundred domain-specific parameters are sufficient to switch between models with radically different behaviors. In fact, just replacing the classification token of a VIT can turn a classifier with SOTA nominal accuracy and no adversarial robustness into another one with robust accuracy close to that achieved with standard adversarial training. Moreover, merging the adapters allows us to smoothly transition between the two modes, finding classifiers (i.e. our adversarial model soups) with better performance on distribution shifts.
These observations open up interesting new directions for future work: exploring how to take advantage of the regularizing effect of adversarial training, and whether other types of models can be combined via soups. ACKNOWLEDGEMENTS We are grateful to Evan Shelhamer for reviewing the drafts of the paper and his literature comments, to Olivia Wiles, Florian Stimberg, Taylan Cemgil and others at DeepMind for helpful conversations and feedback on the project. A MORE EXPERIMENTAL DETAILS Training details. In this manuscript we train VIT-B16 models using the training pipeline proposed in He et al. (2022). The model is optimized for 300 epochs using the AdamW optimizer (Loshchilov & Hutter, 2017) with momenta β1 = 0.9, β2 = 0.95, with a weight decay of 0.3 and a cosine learning rate decay with base learning rate 1e-4 and linear ramp-up of 20 epochs. The batch size is set to 4096 and we scale the learning rates using the linear scaling rule of Goyal et al. (2017). We optimize the standard cross-entropy loss and we use a label smoothing of 0.1. We apply stochastic depth (Huang et al., 2016) with base value 0.1 and with a dropping probability linearly increasing with depth. Regarding data augmentation, we use random crops resized to 224 × 224 images, mixup (Zhang et al., 2018), CutMix (Yun et al., 2019) and RandAugment (Cubuk et al., 2020) with two layers, magnitude 9 and a random probability of 0.5. We note that our implementation of RandAugment is based on the version found in the timm library (Wightman, 2019). We also use exponential moving average with momentum 0.9999. For RESNET-50 we keep the same training scheme used for VIT-B16, and the standard architecture, except for combining GroupNorm with Weight Standardization in all convolutional layers following Kolesnikov et al. (2020). For the DualParams BatchNorm version we fix the robust branch to always use the accumulated statistics rather than the batch ones. Training on smaller datasets. When training from scratch on smaller datasets in Section 5.3, we optimize the smaller VIT-S with a batch size of 1024 and a base learning rate of 2e-4. For datasets with small image resolution such as CIFAR-10, we do not rescale the images to 224 × 224 but we use a patch size of 4 and a stride of 2 to get enough vision tokens. Attack details. For PGD2 and PGD5 we use a gradient descent update with a fixed step size of 2.5/255 and 1/255 respectively. For PGD40 we change the optimizer to Adam with step-size 0.1 decayed by 10× at steps 20 and 30. Regarding one-step attacks, we use a step size of 6/255 and initialization radius of 8/255 for N-FGSM and a step size of 5/255 for Fast-AT. B VISUALIZING FILTERS Visualization procedure. We visualize the embedding layer by first standardizing the weights to have zero mean and unit variance. We then extract the first 28 principal components. Finally we reshape them to 16 × 16 × 3 images and rescale them to have their values between 0 and 255 so as to display these components as RGB images. C TRANSFER LEARNING Training details. For completeness we evaluate the transfer learning performance of the VIT-B16 pre-trained on IMAGENET by co-training on clean and adversarial samples. We choose the model trained with the classification token adapter and co-training coefficient α = 0.4, which we fine-tune nominally on CIFAR-10, CIFAR-100, SUN-397, RESISC-45 and DMLAB using SGD with momentum 0.9, a batch size of 512, gradient clipping at global norm 1 and no weight decay.
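Both the IMAGENET pre-training above and the fine-tuning recipe described here rely on a linear ramp-up followed by a cosine decay. As a worked example, such a schedule can be written as follows, using the pre-training values quoted above (the function is our own illustrative sketch, not the paper's code):

    import math

    def learning_rate(epoch, base_lr=1e-4, warmup_epochs=20, total_epochs=300):
        # Linear ramp-up to base_lr, then cosine decay to zero (values from Appendix A).
        if epoch < warmup_epochs:
            return base_lr * (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
        return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

    for epoch in [0, 19, 20, 150, 299]:
        print(epoch, f"{learning_rate(epoch):.2e}")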
We optimize the standard cross-entropy loss and we use a label smoothing of 0.1. For simplicity, we use the same training schedule for all the datasets: a total of 10k training steps and a base learning rate of 0.01 attained after a linear ramp-up of 500 steps followed by a cosine decay. Regarding data pre-processing, we simply rescale the images to 224 × 224 resolution without preserving aspect ratio and we apply random horizontal flipping as data augmentation. Finally, we use exponential moving average with momentum 0.999. Fine-tuning results. As the network was pre-trained with the classification token adapter, we have several possibilities for initializing the classification token before fine-tuning: adversarial token, clean token and model soups interpolating between these two tokens. For comparison, we also fine-tune two VIT-B16 pre-trained on IMAGENET with nominal and adversarial training respectively. We report the results in Table 6 where we evaluate several fine-tuning strategies: fine-tuning (i) the classifier head, (ii) the classifier head and the classification token and (iii) all the weights. First, we observe that fine-tuning both the classification token and the classifier head brings only a small improvement (from 79.27% to 80.70% for the best average accuracy) over fine-tuning the classifier head alone. Fine-tuning all the weights is the best strategy as it reaches 88.40% average accuracy. Second, we observe that initializing the classification token with the adversarial token performs consistently better than with the clean token when fine-tuning all the weights. Finally, co-training as pre-training is significantly better than nominal and adversarial pre-training as fine-tuning from a co-trained model reaches 88.40% average accuracy, a +1.05% improvement over nominal and adversarial pre-training. D ACCURACY LANDSCAPE In our case, model soups are obtained by linear interpolation (or extrapolation) of the adversarial and clean tokens. We notice that the clean and adversarial tokens are almost orthogonal (cos(ϕclean, ϕadv) = 0.14), so we can extend our study beyond model soups by taking linear combinations of the two tokens β1ϕclean + β2ϕadv. By taking a sweep over the β1 and β2 coefficients, we obtain in Figure 8 the clean and robust accuracy landscapes in the plane defined by the two tokens, where the diagonal corresponds to the model soups. We observe that the main direction of change for the clean and robust accuracies is the model soups diagonal (top left to bottom right). We can clearly see the trade-off in clean/robust accuracy, but also there seems to be a compromise region between clean and robust accuracy, as the other diagonal (from bottom left to top right) is visually distinct for clean and robust accuracy. In panel (c) of Figure 8, we plot the arithmetic mean between the normalized (with min/max rescaling) clean and robust accuracies. We observe that the best compromises between clean and robust accuracy have a stronger adversarial token weight than the clean token weight.
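A minimal sketch of the two-coefficient sweep behind Figure 8 is given below; the tokens are random stand-ins and the evaluation call is left abstract, so this only illustrates how the landscape is parametrized.

    import torch

    def combined_token(phi_clean, phi_adv, b1, b2):
        # Appendix D: general linear combination b1 * phi_clean + b2 * phi_adv;
        # the model-soup diagonal corresponds to b1 + b2 = 1.
        return b1 * phi_clean + b2 * phi_adv

    phi_clean, phi_adv = torch.randn(768), torch.randn(768)
    # For the trained tokens the paper reports a cosine similarity of about 0.14;
    # random stand-ins will be close to 0.
    cos = torch.nn.functional.cosine_similarity(phi_clean, phi_adv, dim=0)

    grid = [round(-0.2 + 0.2 * i, 1) for i in range(8)]  # -0.2 to 1.2, as in Figure 8
    for b1 in grid:
        for b2 in grid:
            token = combined_token(phi_clean, phi_adv, b1, b2)
            # Evaluating the model with `token` installed would give one cell of the landscape.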
Table 6: Co-training as pre-training. We compare the transfer learning performance of a model pre-trained using co-training to models pre-trained with nominal and adversarial training. We evaluate various fine-tuning strategies on several datasets and we report the average over datasets in the last rows. We also assess several initializations for the classification token before fine-tuning: adversarial token, clean token and model soups between these two tokens with various weightings β. All models are pre-trained on IMAGENET and use the same VIT-B16 architecture during fine-tuning. (Nominal and Adversarial are the baselines; the remaining columns are initializations from the co-trained network.)

SETUP | Nominal | Adversarial | Robust mode | β = 0.25 | β = 0.5 | β = 0.75 | Clean mode
CIFAR-10
Fine-tune head | 96.07% | 90.95% | 90.28% | 91.17% | 93.61% | 96.50% | 97.15%
Fine-tune head + cls token | 96.62% | 92.76% | 97.73% | 97.70% | 97.77% | 97.82% | 97.84%
Fine-tune all | 98.68% | 98.96% | 99.09% | 99.03% | 99.01% | 99.05% | 99.03%
CIFAR-100
Fine-tune head | 83.30% | 73.80% | 71.94% | 73.52% | 77.78% | 83.99% | 85.47%
Fine-tune head + cls token | 84.59% | 76.79% | 87.26% | 87.49% | 87.55% | 87.45% | 87.43%
Fine-tune all | 91.18% | 91.74% | 92.37% | 92.23% | 92.32% | 92.41% | 92.29%
SUN-397
Fine-tune head | 72.70% | 65.62% | 65.93% | 67.02% | 70.19% | 73.00% | 73.47%
Fine-tune head + cls token | 73.05% | 67.21% | 73.99% | 74.14% | 74.19% | 74.12% | 74.15%
Fine-tune all | 76.48% | 75.66% | 77.87% | 77.75% | 77.74% | 77.67% | 77.72%
RESISC-45
Fine-tune head | 91.69% | 86.70% | 86.54% | 87.37% | 89.64% | 90.58% | 91.12%
Fine-tune head + cls token | 91.95% | 87.52% | 91.04% | 91.07% | 91.04% | 91.49% | 91.23%
Fine-tune all | 96.78% | 96.14% | 97.07% | 96.72% | 96.88% | 97.07% | 96.80%
DMLAB
Fine-tune head | 50.02% | 50.11% | 48.58% | 48.60% | 49.08% | 49.07% | 49.16%
Fine-tune head + cls token | 50.91% | 51.53% | 50.81% | 51.79% | 52.47% | 52.64% | 52.41%
Fine-tune all | 73.65% | 73.93% | 75.61% | 75.66% | 75.74% | 75.35% | 75.58%
AVERAGE
Fine-tune head | 78.76% | 73.44% | 72.65% | 73.54% | 76.06% | 78.63% | 79.27%
Fine-tune head + cls token | 79.42% | 75.16% | 80.17% | 80.44% | 80.60% | 80.70% | 80.61%
Fine-tune all | 87.35% | 87.29% | 88.40% | 88.28% | 88.34% | 88.31% | 88.28%

Figure 8: Linear combination of tokens. We report the clean accuracy (panel (a)) and robust accuracy against PGD2 (panel (b)) on IMAGENET for various linear combinations of the clean and adversarial tokens. Model soups, which are linear interpolation (and extrapolation) between these two tokens, are on the diagonal from top left to bottom right. Panel (c) shows the arithmetic mean between the normalized (with min/max rescaling) clean and robust accuracies (red means higher mean accuracy). [Panels (a)-(c) plot clean token weight against adversarial token weight; tick values omitted.]

E LIMITATIONS AND FUTURE WORK We have empirically shown that co-training a fully shared VIT does not retain any robustness whereas having two classification tokens specific to the clean and adversarial images is enough to get competitive performance both in clean and robust accuracy. However, we leave to future work the theoretical explanation on why this small architecture change (adding only 768 parameters) results in such a gap in performance. Similarly, beyond our intuition that parameter sharing when using adapters makes model soups possible, we cannot support our empirical results with theory and leave it to future work. Another direction for future work is the automatic selection of the right soup for each sample which could be inspired by automatic selection modules like in Lo & Patel (2021).
F ADDITIONAL TABLES AND FIGURE In the following we present additional tables and figures of results described in the main part but omitted above because of space limits.
1. What is the focus of the paper on co-training?
2. What are the strengths of the proposed approach, particularly regarding simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding its comparison with prior works and limitations in application?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper shows that separate batch statistics for co-training on clean and adversarial inputs are not necessary. An extremely lightweight adapter using the class token is enough to achieve performance comparable to the dual-norm setting. It also enables model soups instead of model ensembling for faster inference.
Strengths And Weaknesses
Strengths
- The method is simple yet effective; it significantly reduces the number of domain-specific parameters compared with AdvProp.
- The interpolation/extrapolation experiments in Figure 3 are interesting and show the benefit of using the class token as the adapter.
Weaknesses
- Even in the original AdvProp, the number of domain-specific parameters is marginal compared with the size of the whole model, which makes the benefit of further reducing the number of domain-specific parameters less significant.
- The interpolation, extrapolation, and model soups could also be applied to AdvProp, and those results would be interesting.
- Because the class token is used as the adapter, the method does not apply to CNNs or to transformers without a class token, which limits its applicability.
- The comparison with AdvProp is not thorough. AdvProp could be regarded as using dual norms as the adapters, so the influence of different adapters should be studied.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to follow. The idea is interesting but does not show clear advances over the existing AdvProp method.
ICLR
Title Revisiting adapters with adversarial training Abstract While adversarial training is generally used as a defense mechanism, recent works show that it can also act as a regularizer. By co-training a deep network on clean and adversarial inputs, it is possible to improve classification accuracy on the clean, non-adversarial inputs. We demonstrate that, contrary to previous findings, it is not necessary to separate batch statistics when co-training on clean and adversarial inputs, and that it is sufficient to use adapters with few domain-specific parameters for each type of input. We establish that using the classification token of a Vision Transformer (VIT) as an adapter is enough to match the classification performance of dual normalization layers, while using significantly less additional parameters. First, we improve upon the top-1 accuracy of a non-adversarially trained VIT-B16 model by +1.12% on IMAGENET (reaching 83.76% top-1 accuracy). Second, and more importantly, we show that training with adapters enables model soups through linear combinations of the clean and adversarial tokens. These model soups, which we call adversarial model soups, allow us to trade-off between clean and robust accuracy without sacrificing efficiency. Finally, we show that we can easily adapt the resulting models in the face of distribution shifts. Our VIT-B16 obtains top-1 accuracies on IMAGENET variants that are on average +4.00% better than those obtained with Masked Autoencoders. 1 INTRODUCTION Deep networks are inherently susceptible to adversarial perturbations. Adversarial perturbations fool deep networks by adding an imperceptible amount of noise which leads to an incorrect prediction with high confidence (Carlini & Wagner, 2017; Goodfellow et al., 2015; Kurakin et al., 2016b; Szegedy et al., 2014). There has been a lot of work on building defenses against adversarial perturbations (Papernot et al., 2016; Kannan et al., 2018); the most commonly used defense is adversarial training as proposed by Madry et al. (2018) and its variants (Zhang et al., 2019; Pang et al., 2020; Huang et al., 2020; Rice et al., 2020; Gowal et al., 2020), which use adversarially perturbed images at each training step as training data. Earlier studies (Kurakin et al., 2016a; Xie et al., 2019b) showed that using adversarial samples during training leads to performance degradation on clean images. However, AdvProp (Xie et al., 2019a) challenged this observation by showing that adversarial training can act as a regularizer, and therefore improve nominal accuracy, when using dual batch normalization (BatchNorm) layers (Ioffe & Szegedy, 2015) to disentangle the clean and adversarial distributions. We draw attention to the broad similarity between the AdvProp approach and the adapters literature (Rebuffi et al., 2017; Houlsby et al., 2019) where a single backbone network is trained on multiple domains by means of adapters, where a few parameters specific to each domain are trained separately while the rest of the parameters are shared. In light of this comparison, we further develop the line of work introduced by AdvProp and analyze it from an adapter perspective. In particular, we explore various adapters and aim to obtain the best classification performance with minimal additional parameters. Our contributions are as follows: • We show that, in order to benefit from co-training on clean and adversarial samples, it is not necessary to separate the batch statistics of clean and adversarial images in BatchNorm layers. 
We demonstrate empirically that it is enough to use domain specific trainable parameters to achieve similar results. ∗Work done during an internship at DeepMind • Inspired by the adapters literature, we evaluate various adapters. We show that training separate classification tokens of a VIT for the clean and adversarial domains is enough to match the classification performance of dual normalization layers with 49× fewer domain specific parameters. This classification token acts as a conditioning token which can modify the behaviour of the network to be either in clean or robust mode (Figure 1). • Unlike Xie et al. (2019a) and Herrmann et al. (2022), we also aim at preserving the robust performance of the network against adversarial attacks. We show that our conditional token can obtain SOTA nominal accuracy in the clean mode while at the same time achieving competitive ℓ∞-robustness in the robust mode. As a by-product of our study, we show that adversarial training of VIT-B16 on IMAGENET leads to state-of-the-art robustness against ℓ∞-norm bounded perturbations of size 4/255. • We empirically demonstrate that training with adapters enables model soups (Wortsman et al., 2022). This allow us to introduce adversarial model soups, models that trade-off between clean and robust accuracy through linear interpolation of the clean and adversarial adapters. To the best of our knowledge, our work is the first to study adversarial model soups. We also show that adversarial model soups perform better on IMAGENET variants than the state-of-the-art with masked auto-encoding (He et al., 2022). 2 RELATED WORK Adversarial training. Although more recent approaches have been proposed, the most successful method to reduce the vulnerability of image classifiers to adversarial attacks is adversarial training, which generates on-the-fly adversarial counterparts for the training images and uses them to augment the training set (Croce et al., 2020). Goodfellow et al. (2015) used the single-step Fast Gradient Sign Method (FGSM) attack to craft such adversarial images. Later, Madry et al. (2018) found that using iterative Projected Gradient Descent (PGD) yields models robust to stronger attacks. Their scheme has been subsequently improved by several modifications, e.g. a different loss function (Zhang et al., 2019), unlabelled or synthetic data (Carmon et al., 2019; Uesato et al., 2019; Gowal et al., 2021), model weight averaging (Gowal et al., 2020), adversarial weight perturbations (Wu et al., 2020), and better data augmentation (Rebuffi et al., 2021). While the main drawback of adversarial training is the degradation of performance of robust models on clean images (Tsipras et al., 2018), Xie et al. (2019a) showed that adversarial images can be leveraged as a strong regularizer to improve the clean accuracy of classifiers on IMAGENET. In particular, they propose AdvProp, which introduces separate BatchNorm layers specific to clean or adversarial inputs, with the remaining layers being shared. This approach and the role of normalization layers when training with both clean and adversarial points has been further studied by (Xie & Yuille, 2019; Walter et al., 2022). Recently, Wang et al. 
(2022) suggest removing BatchNorm layers from the standard RESNET architecture (He et al., 2016) to retain high clean accuracy with adversarial training, but this negatively affects the robustness against stronger attacks.1 Finally, (Kireev et al., 2021; Herrmann et al., 2022) showed that carefully tuning the threat model in adversarial training might improve the performance on clean images and in the presence of distribution shifts, such as common corruptions (Hendrycks & Dietterich, 2018). Adapters. In early work on deep networks, Caruana (1997) showed that sharing network parameters among tasks acts as a regularizer. Aiming at a more efficient parameter sharing, (Rebuffi et al., 2017; Rosenfeld & Tsotsos, 2018) introduced adapters – small training modules specific to each task which can be stitched all along the network. In other lines of work, (Mallya et al., 2018; Mancini et al., 2018) adapt a model to new tasks using efficient weight masking and (Li et al., 2016; Maria Carlucci et al., 2017) perform domain adaptation by batch statistics modulation. While these approaches require having as many adapters as tasks, Perez et al. (2018) propose an adapter layer whose weights are generated by a conditioning network. Besides computer vision, adapters are also used in natural language processing for efficient fine-tuning (Houlsby et al., 2019; Pfeiffer et al., 2020; Wang et al., 2020) and multi-task learning (Stickland & Murray, 2019). Merging multiple models. While ensembles are a popular and successful way to combine multiple independently trained classifiers to improve on individual performance (Ovadia et al., 2019; GontijoLopes et al., 2021), they increase the inference cost as they require a forward pass for each sub-network 1See https://github.com/amazon-research/normalizer-free-robust-training/issues/2. of the ensemble. An alternative approach is taken by Wortsman et al. (2022) who propose to finetune a fully trained model with different hyperparameter configurations and then average the entire set of weights of the various networks. The obtained model soups get better performance than each individual model and even ensembles. Model soups are in spirit similar to Stochastic Weight Averaging (Izmailov et al., 2018) which consists in averaging weights along an optimization trajectory rather than averaging over independent runs. 3 METHOD 3.1 CO-TRAINING WITH NOMINAL AND ADVERSARIAL TRAINING Goodfellow et al. (2015) propose adversarial training as a way to regularize standard training. They jointly optimize the model parameters θ on clean and adversarial images with the co-training loss αL(f(x;θ), y) + (1− α)max δ∈S L(f(x+ δ;θ), y), (1) where pairs of associated examples x and labels y are sampled from the training dataset, f(·;θ) is a model parametrized by θ, L defines the loss function (such as the cross-entropy loss in the classification context), and S is the set of allowed perturbations. Setting α = 1 boils down to nominal training on clean images and setting α = 0 leads to adversarial training as defined by Madry et al. (2018). In our case, we consider ℓ∞ norm-bounded perturbations of size ϵ = 4/255, so we have S = {δ | ∥δ∥∞ ≤ ϵ}, and we use untargeted attacks to generate the adversarial perturbations δ (see details in Section 4). 3.2 SEPARATING BATCH STATISTICS IS NOT NECESSARY BatchNorm is a widely used normalization layer shown to improve performance and training stability of image classifiers (Ioffe & Szegedy, 2015). 
We recall that a BatchNorm layer, given a batch as input, first normalizes it by subtracting the mean and dividing by the standard deviation computed over the entire batch, then it applies an affine transformation, with learnable scale and offset parameters. During training, it accumulates these so-called batch statistics to use during test time, so that the output of the classifier for each image is independent of the other images in the batch. The batch statistics can be seen an approximation of the statistics over the image distribution. Xie et al. (2019a) show that optimizing the co-training loss in Eq. 1 can yield worse results on clean images than simple nominal training. This is especially the case when the network has a low capacity or the attack (i.e., the inner maximization) is too strong (such as using a large perturbation radius ϵ). To solve this issue, they propose AdvProp, which consists in using distinct BatchNorm layers for clean and adversarial images. They argue that “maintaining one set of [BatchNorm] statistics results in incorrect statistics estimation”, which could be the reason for the performance degradation. We note that using two sets of BatchNorm layers for the clean and adversarial samples as in AdvProp creates two sets of batch statistics but also two sets of learnable scale and offset parameters. In the following we investigate whether having separate batch statistics is a necessary condition for successful co-training. Figure 2 shows the clean and robust accuracy of various model architectures as training progresses. The left panel demonstrates that, if we share both batch statistics and scales/offsets (Shared BatchNorm, orange curves), the robust accuracy (orange dashed line) quickly drops, far from the one obtained by AdvProp (Dual BatchNorm, blue curve) which is above 34%. However, if we use a single set of batch statistics but specific scales and offsets for clean and adversarial images, we can observe on the right panel of Figure 2 that the robust accuracy (DualParams BatchNorm, orange dashed line) matches the one (blue dashed line) obtained by AdvProp. This demonstrates that it is possible to achieve nominal and robust classification results similar to those of AdvProp without separate batch statistics. Furthermore, there exist normalization layers such as LayerNorm (Ba et al., 2016) or GroupNorm (Wu & He, 2018) which do not use batch statistics, as their normalization step is done per sample and not per batch. Hence, according to the hypothesis of Xie et al. (2019a), these types of normalization layer should not suffer from performance degradation. Nevertheless, the left panel of Figure 2 shows that their robust accuracy (green and red dashed lines) does not match the robust accuracy of AdvProp (Dual BatchNorm), and is unstable over training steps. However, by making the scales and offsets of LayerNorm and GroupNorm specific to clean and adversarial images, their robust accuracy matches that obtained with dual BatchNorm layers, as shown in the right panel of Figure 2. This suggests that a key element to make the co-training loss of Eq. 
1 work for various normalization layers is to have trainable parameters which are specific to the clean and adversarial images.2 3.3 REVISITING ADAPTERS WITH ADVERSARIAL TRAINING The last observation strongly relates this setting to the adapters literature where a single backbone architecture has some parameters, called adapters, which are specific to different domains while the rest of the parameters are shared among tasks. In our case, the clean images form one domain and the adversarial images constitute another domain. In this work, we go beyond having separate normalization layers for the clean and adversarial images and investigate other types of adapters. 2Interestingly, contrary to our observation that standard GroupNorm fails to retain robustness, Xie & Yuille (2019) report that GroupNorm matches Dual BatchNorm. We explain this difference as we use a stronger untargeted attack in this manuscript compared to the targeted attack of Xie & Yuille (2019). Using a stronger attack allows us to reveal failure modes that would have been hidden otherwise. Formally, the model parameters θ can be decomposed into parameters ψ which are shared among domains and parameters ϕ which are specific to a domain. We call ϕclean the parameters used when training on clean images and ϕadv the parameters used when training on adversarial images. For example, in Section 3.2, when we used dual LayerNorm layers, the scales and offsets of these normalization layers are contained in ϕclean and ϕadv whereas the rest of the model parameters are in ψ. Based on Eq. 1, we optimize the following loss: αL(f(x;ψ ∪ ϕclean), y) + (1− α)max δ∈S L(f(x+ δ;ψ ∪ ϕadv), y). (2) Finally, we introduce some notation for models with adapters at inference time: we call f(·;ψ∪ϕclean) the clean mode for prediction as we use the adapters ϕclean trained on the clean data. Conversely, we call f(·;ψ ∪ ϕadv) the robust mode when using the adapters ϕadv trained on the perturbed data. 3.4 TRAINING WITH ADAPTERS ENABLES ADVERSARIAL MODEL SOUPS Wortsman et al. (2022) propose model soups, which consist in averaging the weights of multiple models fine-tuned from the same pre-trained model. The resulting weight averaged model can benefit from the original models without incurring any extra compute and memory cost at inference time. Currently, in our setting the user would have to know at test time if the network should be in clean or robust mode. A model soup, by its ability to merge models, is a way to bypass this issue. We formulate the hypothesis that training with adapters enables model soups. With this in mind, we observe that training with adapters means that most of the model parameters are already shared, so model souping would simply consist in linearly interpolating the weights of the adapters for the two modes. We call adversarial model soups, the model soups with a model co-trained on clean and adversarial samples. We get the following parametrized model: f(·;ψ ∪ (βϕclean + (1− β)ϕadv)) (3) where β is the weighting factor when averaging the adapters. If β = 1, the model soup boils down to the clean mode and conversely β = 0 corresponds to the robust mode. In Section 5.2, we assess this hypothesis and show that forming model soups between independent nominal and robust models fails. 4 EXPERIMENTAL SETUP Architecture. We focus our study on the B16 variant of the Vision Transformer (VIT-B16) introduced by Dosovitskiy et al. (2020). We adopt the modifications proposed by He et al. 
(2022): the linear classifier is applied on the mean of the final tokens except the classification token. We train this network by using supervised training from scratch as proposed in He et al. (2022) (see the appendix). Attacks. We consider adversarial robustness against untargeted ℓ∞-bounded attacks with radius ϵ = 4/255. This is the most common setup for IMAGENET models, and it is more challenging to defend against than the targeted threat model used by Xie & Yuille (2019). To generate the adversarial perturbations we use Projected Gradient Descent (Madry et al., 2018) with 2 steps named PGD2 (see details in the appendix) at training time and with 40 steps for evaluation (PGD40). Datasets. We focus our experimental evaluation on the IMAGENET dataset (Russakovsky et al., 2015), with images at 224 × 224 resolution for both training and testing, as this is the standard large-scale benchmark for SOTA models and was used by Xie et al. (2019a) for AdvProp. We report clean and adversarial accuracy on the whole validation set. Moreover, we test the robustness against distribution shifts via several IMAGENET variants: IMAGENET-C (Hendrycks & Dietterich, 2018), IMAGENET-A (Hendrycks et al., 2019), IMAGENET-R (Hendrycks et al., 2020), IMAGENET-SKETCH (Wang et al., 2019), and Conflict Stimuli (Geirhos et al., 2018). 5 EXPERIMENTAL RESULTS Similarly to our observation in Section 3.2 for a RESNET-50, a fully shared VIT-B16 trained with the co-training loss Eq. 1 fails to retain any robustness. Therefore, we first investigate various adapters for VIT-B16 to find an efficient training setting in Section 5.1. Then we study adversarial model soups with adapters in Section 5.2 and finally show that training with adapters generalizes to other datasets and threat models. 5.1 FINDING AN EFFICIENT SETTING Choice of adapter. Using adapters increases the number of parameters as the layers which we choose as adapters have twice as many parameters: one set of parameters for clean images and another for adversarial images. Hence, to avoid increasing the network memory footprint too heavily, we restrict our adapters study to layers with few parameters, thus excluding self-attention (Vaswani et al., 2017) layers and MLP layers. This leaves the options of having dual embedder, positional embedding, normalization layers or classification token; among them, the classification token has by far the least amount of parameters, 49-770× fewer than the other candidates (see details in Table 1). We must still verify that so few parameters are enough to preserve the advantages of the AdvProp training scheme. Hence, we train a model for each type of adapter and compare them with two models without adapters, one trained with nominal training and the other with adversarial training. We observe in Table 1 that by using two classification tokens as adapters, which means only 768 extra parameters out of 86M, we reach 83.56% clean accuracy on IMAGENET, which is an improvement of +0.92% over standard training. Moreover, we obtain a robust accuracy of 49.87% in the robust mode, which is close to the robust accuracy given by adversarial training. Notably, we see that adapting other layers with more parameters such as all LayerNorm scales and offsets results in similar performances in both clean and robust modes. 
This indicates that (i) it is not necessary to split the normalization layers to reproduce the effect of AdvProp, and (ii) even a very small amount of dual parameters provide sufficient expressiveness to adapt the shared portion of the network to the two modes. Therefore, in the rest of the manuscript we focus on dual classification tokens as it requires the smallest number of extra parameters. Number of attack steps. As the results in Table 1 were obtained with PGD2, we check if we can reduce the number of attack steps to be more computationally efficient. In Table 2, we report the results for two one-step methods: N-FGSM by de Jorge et al. (2022) and FAST-AT by Wong et al. (2020). If we use the step sizes recommended in the corresponding papers, both methods suffer from catastrophic overfitting (Wong et al., 2020) (illustrated in Figure 6 in the appendix) and therefore have no robustness at all. In Table 2 we avoid such catastrophic overfitting by reducing the step sizes to ϵ and 0.75ϵ for FAST-AT and N-FGSM respectively and we observe that both methods perform more than 1% worse in robust accuracy than PGD2. We also increase the number of attack steps to 5 with PGD5. We notice a small improvement over PGD2 of 0.4% in robust accuracy while the clean accuracy is on par. Hence, PGD2 seems to be a good compromise between efficiency and classification performance. Weighting the co-training loss. In the co-training loss in Eq. 1, the α hyperparameter controls how much the loss is weighted towards clean or adversarial samples. For example, setting α = 0 means we train solely on adversarial samples. In Figure 3, where we evaluate several values for α (dividing the range between 0 and 1 into intervals of length 0.1), we notice that only the values between α = 0 and α = 0.4 form a Pareto front that strictly dominates the other intervals. Indeed, between α = 1 and α = 0.4, decreasing α leads to better performance both in clean and robust modes. In fact, setting α = 0.4 leads to 83.76% clean accuracy (in clean mode) and 52.19% robust accuracy (in robust mode) which are both better than the values obtained in Table 1 with α = 0.5. In Figure 7 (in the appendix), we visualize the filters of the embedder when training with various values of α. We observe that for α = 0.2 and for α = 0.8 the filters look quite similar to the filters learned with adversarial training (α = 0) and nominal training (α = 1), respectively. Interestingly, filters learned with α = 0.4 and α = 0.6 are not the simple combination of nominal and adversarial filters but rather new visually distinct filters. This indicates that co-training on clean and adversarial samples can lead to a new hybrid representation for the shared layers compared to nominal and adversarial training. Robustness to stronger attacks. For completeness we further test the robustness of a subset of our models with a mixture of AUTOATTACK (Croce & Hein, 2020) and MULTITARGETED (Gowal et al., 2019), denoted by AA+MT. Pure adversarial training, which obtains 56.19% robust accuracy against PGD40 (Table 1), reaches 54.15% robust accuracy against AA+MT. This is a new state-of-the-art robust accuracy on IMAGENET, improving by +6.55% over the 47.60% reported by Debenedetti et al. (2022). While Debenedetti et al. (2022) advocate for weak data augmentation for training robust VIT, our training procedure follows He et al. 
(2022) and contains heavy augmentations (see appendix): we conclude that large models still benefit from strong data augmentations even with adversarial training. Finally, the robust mode of the model co-trained with α = 0.4 in the previous paragraph reaches 49.55% robust accuracy against AA+MT, which still surpasses the prior art and preserves competitive robust performance. 5.2 EVALUATING MODEL SOUPS Adapters enable adversarial model soups. One downside of using adapters is that one needs to know if for an input image the network should be put in clean or robust mode. This motivates adversarial model soups which allow to create a single model performing well both in clean and robust accuracy. First, if we independently train two VIT-B16, one nominally and the other adversarially, and then try to perform model soups on them, we notice in Table 9 (in the appendix) that both robust and clean accuracies drop immediately when the weighting factor β between parameters is not equal to 0 or 1. We evaluate various model soups with the models of Table 1, meaning that the parameters specific to the clean and robust domain are averaged with weight β to obtain a single classifier. We notice in Figure 9 (in the appendix) that adversarial model soups work equally well with the various types of adapters, where sliding the value of β allows to smoothly trade-off clean accuracy for robustness. This validates our hypothesis that adapters enable model soups. Soup or ensemble. In Figure 4 we compare the classification performance of adversarial model soups and ensembles obtained by linear combination of the clean and robust modes at the probability prediction level. We notice that ensembling produces a better Pareto front than adversarial model soup but ensembles, with their two forward passes, require twice the compute of model soups. Hence, 78.2 78.6 79.1 79.7 80.5 81.5 82.4 83.0 83.5 83.7 83.8 84.7 85.0 85.4 85.9 86.5 87.2 87.8 88.3 88.5 88.5 88.5 13.4 14.0 15.1 17.0 19.6 23.6 28.6 33.7 36.8 38.2 38.4 55.2 55.3 55.2 55.4 55.5 55.6 55.6 55.4 55.1 54.7 54.4 39.6 39.8 40.0 40.3 40.5 40.7 41.0 41.2 41.2 41.2 41.1 56.5 55.7 54.4 53.4 51.1 49.1 46.9 44.4 41.6 40.0 39.8 56.7 57.3 58.2 59.7 61.9 64.7 67.5 69.3 70.0 70.1 69.9 adversarial model soups allow to choose the trade-off between clean and robust accuracy with performance close to ensembling while only requiring the same compute as a single network. Extrapolation. For the anecdote, we experiment with adversarial model soups for extrapolation with values of the weighting factor β above 1 and below 0. Interestingly, we observe that setting β = 1.05 leads to 83.81% clean accuracy which is better than the 83.76% obtained in the clean mode. Similarly, setting β = −0.05 leads to 52.26% robust accuracy which is slightly better than the 52.19% obtained in the robust mode. Hence, it appears that adversarial model soups do not need to be restricted to interpolation. Soups for IMAGENET variants. As adversarial model soups allow to create models with chosen trade-off between clean and robust accuracy, we might expect that such models perform better than nominal ones when distribution shifts occur. For example, Kireev et al. (2021) showed that adversarial training can even help with common corruptions when specifically tuned for such task (note that they use smaller datasets than IMAGENET). 
We then compute the accuracy of adversarial model soups with varying β on IMAGENET variants (results in Figure 5): while half of the best performance are obtained with the clean classification token, for some variants such as IMAGENET-R, IMAGENET-C and IMAGENET-SKETCH the best results are obtained with intermediate tokens. Hence, adversarial model soups can be used to reach a compromise between IMAGENET variants to get the best average performance. Here β = 0.9 yields the best mean accuracy 61.23%. In Table 3, we notice that this adversarial model soup improves the mean accuracy by +4.00% over a fine-tuned Masked Autoencoder (MAE-B16) checkpoint from He et al. (2022) and by +2.37% over Pyramid-AT from Herrmann et al. (2022). It also improves by +2.24% over the best performing ensemble of two networks trained independently with nominal and adversarial training respectively. 5.3 EVALUATING ON OTHER THREAT MODELS AND DATASETS Evaluating other threat models. IMAGENET variants are also a good benchmark to compare different types of adversarial attack to generate the perturbations for the co-training loss in Eq. 2: untargeted ℓ∞-bounded perturbations with budget ϵ = 4/255 (our standard setup), untargeted ℓ2bounded with ϵ ∈ {1, 2, 4, 8}, targeted (random target class as in Xie et al., 2019a) ℓ∞-bounded with ϵ ∈ {4/255, 8/255, 12/255}, and the Pyramid attack proposed by Herrmann et al. (2022). In Table 4, we select the best adversarial model soups after training with each method a VIT-B16 with dual classification tokens, and report its results on all variants. We see that the clean accuracy on the IMAGENET validation set improves in all cases compared to standard training. Moreover, although the best performing attack varies across variants, we notice that the untargeted ℓ∞ attack achieves the best average accuracy. Evaluating on other datasets. We further test the effect of using the co-training loss with the classification token as adapter on other datasets. In Table 5, we see that our training procedure provides a consistent performance boost in clean accuracy compared to nominal training on MNIST (LeCun et al., 2010), CIFAR-10, CIFAR-100 (Krizhevsky et al., 2014), SVHN (Netzer et al., 2011), SUN397 (Xiao et al., 2010), RESISC-45 (Cheng et al., 2017) and DMLAB (Beattie et al., 2016). This shows that our method generalizes well across datasets and can help regularize Vision Transformers on these smaller datasets, where they are known to perform worse compared to CNNs without pre-training (Zhang et al., 2021). In Appendix C, we also demonstrate that models pre-trained with co-training on IMAGENET yield significantly better classification results when fine-tuning nominally on small datasets compared to fine-tuning from nominally and adversarially pre-trained models. 6 CONCLUSION In this work we have shown that adapters with a few hundreds of domain specific parameters are sufficient to switch between models with radically different behaviors. In fact, just replacing the classification token of a VIT can turn a classifier with SOTA nominal accuracy and no adversarial robustness into another one with robust accuracy close to that achieved with standard adversarial training. Moreover, merging the adapters allows to smoothly transition between the two modes, finding classifiers (i.e. our adversarial model soups) with better performance on distribution shifts. 
These observations open up new interesting directions for future work to explore how to take advantage of the regularizing effect of adversarial training and whether it is possible to combine via soups other types of models. ACKNOWLEDGEMENTS We are grateful to Evan Shelhamer for reviewing the drafts of the paper and his literature comments, to Olivia Wiles, Florian Stimberg, Taylan Cemgil and others at DeepMind for helpful conversations and feedback on the project. A MORE EXPERIMENTAL DETAILS Training details. In this manuscript we train VIT-B16 models using the training pipeline proposed in He et al. (2022). The model is optimized for 300 epochs using the AdamW optimizer (Loshchilov & Hutter, 2017) with momenta β1 = 0.9, β2 = 0.95, with a weight decay of 0.3 and a cosine learning rate decay with base learning rate 1e-4 and linear ramp-up of 20 epochs. The batch size is set to 4096 and we scale the learning rates using the linear scaling rule of Goyal et al. (2017). We optimize the standard cross-entropy loss and we use a label smoothing of 0.1. We apply stochastic depth (Huang et al., 2016) with base value 0.1 and with a dropping probability linearly increasing with depth. Regarding data augmentation, we use random crops resized to 224 × 224 images, mixup (Zhang et al., 2018), CutMix (Yun et al., 2019) and RandAugment (Cubuk et al., 2020) with two layers, magnitude 9 and a random probability of 0.5. We note that our implementation of RandAugment is based on the version found in the timm library (Wightman, 2019). We also use exponential moving average with momentum 0.9999. For RESNET-50 we keep the same training scheme used for VIT-B16, and the standard architecture, except for combining GroupNorm with Weight Standardization in all convolutional layers following Kolesnikov et al. (2020). For the DualParams BatchNorm version we fix the robust branch to always use the accumulated statistics rather then the batch ones. Training on smaller datasets. When training from scratch on smaller datasets in Section 5.3, we optimize the smaller VIT-S with a batch size of 1024 and a base learning rate of 2e-4. For datasets with small image resolution such as CIFAR-10, we do not rescale the images to 224 × 224 but we use a patch size of 4 and a stride of 2 to get enough vision tokens. Attack details. For PGD2 and PGD5 we use a gradient descent update with a fixed step size of 2.5/255 and 1/255 respectively. For PGD40 we change the optimizer to Adam with step-size 0.1 decayed by 10 × at steps 20 and 30. Regarding one step attacks, we use a step size of 6/255 and initialization radius of 8/255 for N-FGSM and a step size of 5/255 for Fast-AT. B VISUALIZING FILTERS Visualization procedure. We visualize the embedding layer by first standardizing the weights to have zero mean and unit variance. We then extract the first 28 principal components. Finally we reshape them to 16 × 16 × 3 images and rescale them to have their values between 0 and 255 such as to display these components as RGB images. C TRANSFER LEARNING Training details. For completeness we evaluate the transfer learning performance of the VITB16 pre-trained on IMAGENET by co-training on clean and adversarial samples. We choose the model trained with classification token adapter and co-training coefficient α = 0.4, which we finetune nominally on CIFAR-10, CIFAR-100, SUN-397, RESISC-45 and DMLAB using SGD with momentum 0.9, a batch size of 512, gradient clipping at global norm 1 and no weight decay. 
We optimize the standard cross-entropy loss and we use a label smoothing of 0.1. For simplicity, we use the same training schedule for all the datasets: a total of 10k training steps and a base learning rate of 0.01 attained after a linear ramp-up of 500 steps followed by a cosine decay. Regarding data pre-processing, we simply rescale the images to 224 × 224 resolution without preserving aspect ratio and we apply random horizontal flipping as data augmentation. Finally, we use exponential moving average with momentum 0.999. Fine-tuning results. As the network was pre-trained with classification token adapter, we have several possibilities for initializing the classification token before fine-tuning: adversarial token, clean token and model soups interpolating between these two tokens. For comparison, we also fine-tune two VIT-B16 pre-trained on IMAGENET with nominal and adversarial training respectively. We report the results in Table 6 where we evaluate several fine-tuning strategies: fine-tuning (i) the classifier head, (ii) the classifier head and the classification token and (iii) all the weights. First, we observe that fine-tuning both the classification token and the classifier head brings only a small improvement (from 79.27% to 80.70% for the best average accuracy) over fine-tuning the classifier head alone. Fine-tuning all the weights is the best strategy as it reaches 88.40% average accuracy. Second, we observe that initializing the classification token with the adversarial token performs consistently better than with the clean token when fine-tuning all the weights. Finally, co-training as pre-training is significantly better than nominal and adversarial pre-training as fine-tuning from a co-trained model reaches 88.40% average accuracy, a +1.05% improvement over nominal and adversarial pre-training. D ACCURACY LANDSCAPE In our case, model soups are obtained by linear interpolation (or extrapolation) of the adversarial and clean tokens. We notice that the clean and adversarial tokens are almost orthogonal (cos(ϕclean,ϕadv) = 0.14), so we can extend our study beyond model soups by taking linear combinations of the two tokens β1ϕclean + β2ϕadv. By taking a sweep over the β1 and β2 coefficients, we obtain in Figure 8 the clean and robust accuracy landscapes in the plane defined by the two tokens and where the diagonal corresponds to the model soups. We observe that the main direction of change for the clean and robust accuracies is the model soups diagonal (top left to bottom right). We can clearly see the trade-off in clean/robust accuracy, but also there seems to be a compromise Table 6: Co-training as pre-training. We compare the transfer learning performance of a model pre-trained using co-training to models pre-trained with nominal and adversarial training. We evaluate various fine-tuning strategies on several datasets (headers in green) and we report the average over datasets in the last rows (orange header). We also assess several initializations for the classification token before fine-tuning: adversarial token, clean token and model soups between these two tokens with various weightings β. All models are pre-trained on IMAGENET and use the same VIT-B16 architecture during fine-tuning. 
In panel (c) of Figure 8, we plot the arithmetic mean between the normalized (with min/max rescaling) clean and robust accuracies. We observe that the best compromises between clean and robust accuracy have a stronger adversarial token weight than the clean token weight.

Figure 8: Linear combination of tokens. We report the clean accuracy (panel (a)) and robust accuracy against PGD2 (panel (b)) on IMAGENET for various linear combinations of the clean and adversarial tokens. Model soups, which are linear interpolations (and extrapolations) between these two tokens, lie on the diagonal from top left to bottom right. Panel (c) shows the arithmetic mean between the normalized (with min/max rescaling) clean and robust accuracies (red means higher mean accuracy).

Table 6: Co-training as pre-training. We compare the transfer learning performance of a model pre-trained using co-training to models pre-trained with nominal and adversarial training. We evaluate various fine-tuning strategies on several datasets and we report the average over datasets in the last rows. We also assess several initializations for the classification token before fine-tuning: adversarial token, clean token and model soups between these two tokens with various weightings β. All models are pre-trained on IMAGENET and use the same VIT-B16 architecture during fine-tuning. The "Nominal" and "Adversarial" columns are the baselines; the remaining columns are initializations taken from the co-trained network.

| Setup | Fine-tuning strategy | Nominal | Adversarial | Robust mode | β = 0.25 | β = 0.5 | β = 0.75 | Clean mode |
|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | Fine-tune head | 96.07% | 90.95% | 90.28% | 91.17% | 93.61% | 96.50% | 97.15% |
| CIFAR-10 | Fine-tune head + cls token | 96.62% | 92.76% | 97.73% | 97.70% | 97.77% | 97.82% | 97.84% |
| CIFAR-10 | Fine-tune all | 98.68% | 98.96% | 99.09% | 99.03% | 99.01% | 99.05% | 99.03% |
| CIFAR-100 | Fine-tune head | 83.30% | 73.80% | 71.94% | 73.52% | 77.78% | 83.99% | 85.47% |
| CIFAR-100 | Fine-tune head + cls token | 84.59% | 76.79% | 87.26% | 87.49% | 87.55% | 87.45% | 87.43% |
| CIFAR-100 | Fine-tune all | 91.18% | 91.74% | 92.37% | 92.23% | 92.32% | 92.41% | 92.29% |
| SUN-397 | Fine-tune head | 72.70% | 65.62% | 65.93% | 67.02% | 70.19% | 73.00% | 73.47% |
| SUN-397 | Fine-tune head + cls token | 73.05% | 67.21% | 73.99% | 74.14% | 74.19% | 74.12% | 74.15% |
| SUN-397 | Fine-tune all | 76.48% | 75.66% | 77.87% | 77.75% | 77.74% | 77.67% | 77.72% |
| RESISC-45 | Fine-tune head | 91.69% | 86.70% | 86.54% | 87.37% | 89.64% | 90.58% | 91.12% |
| RESISC-45 | Fine-tune head + cls token | 91.95% | 87.52% | 91.04% | 91.07% | 91.04% | 91.49% | 91.23% |
| RESISC-45 | Fine-tune all | 96.78% | 96.14% | 97.07% | 96.72% | 96.88% | 97.07% | 96.80% |
| DMLAB | Fine-tune head | 50.02% | 50.11% | 48.58% | 48.60% | 49.08% | 49.07% | 49.16% |
| DMLAB | Fine-tune head + cls token | 50.91% | 51.53% | 50.81% | 51.79% | 52.47% | 52.64% | 52.41% |
| DMLAB | Fine-tune all | 73.65% | 73.93% | 75.61% | 75.66% | 75.74% | 75.35% | 75.58% |
| AVERAGE | Fine-tune head | 78.76% | 73.44% | 72.65% | 73.54% | 76.06% | 78.63% | 79.27% |
| AVERAGE | Fine-tune head + cls token | 79.42% | 75.16% | 80.17% | 80.44% | 80.60% | 80.70% | 80.61% |
| AVERAGE | Fine-tune all | 87.35% | 87.29% | 88.40% | 88.28% | 88.34% | 88.31% | 88.28% |

E LIMITATIONS AND FUTURE WORK
We have empirically shown that co-training a fully shared VIT does not retain any robustness whereas having two classification tokens specific to the clean and adversarial images is enough to get competitive performance both in clean and robust accuracy. However, we leave to future work the theoretical explanation on why this small architecture change (adding only 768 parameters) results in such a gap in performance. Similarly, beyond our intuition that parameter sharing when using adapters makes model soups possible, we cannot support our empirical results with theory and leave it to future work. Another direction for future work is the automatic selection of the right soup for each sample which could be inspired by automatic selection modules like in Lo & Patel (2021).
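Relating back to the token initializations compared in Table 6, the following is a minimal sketch (with our own, hypothetical attribute names) of building the fine-tuning classification token as a soup of the pre-trained clean and adversarial tokens.

```python
import torch

def init_cls_token_from_soup(model, phi_clean, phi_adv, beta=0.5):
    """Initialize the single fine-tuning token as beta * phi_clean + (1 - beta) * phi_adv."""
    soup = beta * phi_clean + (1.0 - beta) * phi_adv
    with torch.no_grad():
        model.cls_token.copy_(soup)  # `cls_token` is assumed to be an nn.Parameter
    return model
```

With beta = 1 or beta = 0 this reduces to the clean-token and adversarial-token initializations reported as columns in Table 6.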
F ADDITIONAL TABLES AND FIGURE In the following we present additional tables and figures of results described in the main part but omitted above because of space limits.
1. What is the focus and contribution of the paper regarding neural network robustness?
2. What are the strengths of the proposed approach, particularly in terms of practicality and simplicity?
3. What are the weaknesses of the paper, especially regarding comparisons with other works and ablation studies?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions for further research raised by the reviewer?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper works on the robustness of neural networks and aims to boost the accuracy on clean ImageNet and different variants with the help of adversarial samples. Inspired by AdvProp and model soups, the authors propose adversarial model soups, which are trained with adapters through linear combinations of the clean and adversarial tokens. Experiments show model soups can easily strike a balance between clean accuracy and dataset-shifting robustness.

Strengths And Weaknesses
Strength
- Model soups are drawing much attention due to their excellent performance. However, they require a great number of computing resources. The adversarial model soups alleviate the need for storage capacity and computing power and make them practical for mobile devices and independent researchers.
- Findings in Figure 2 are interesting, which show that domain-specific normalization layers are enough to boost the robustness.
- The method is simple, and few hyper-parameters are required.

Weakness
- AdvProp boosts both accuracy and robustness and doesn't need more parameters. I think you had better add the results of AdvProp in Table 1.
- What's the α used in the baseline Adversarial Training in Table 1? Could you do an ablation study here providing the results of different values of α?

Clarity, Quality, Novelty And Reproducibility
- The baselines Adversarial Training and Co-training in Table 1 are confusing. If Co-training uses Eq. 1 as shown in the Table, what's the difference between them? In Section 5.1 (Weighting the co-training loss), you refer to Eq. 1 as the co-training loss. Later, in the caption of Figure 3, you refer to Eq. 2 as the co-training loss. You had better clarify the notions in your paper.
- In Section 5.2 (Adapters enable adversarial model soups), you try to verify that forming model soups between independent nominal and robust models fails. However, model soups work in pretraining-to-finetuning problem settings. Could you fine-tune some adversarial models (with different values of α in Eq. 1) from the same pre-trained model, average the parameters just like standard model soups, and then test the accuracy and robustness on such a model soup?
- In Fig. 5, the model performs differently on IN-R and Conflict Stimuli. Could you explain it?
ICLR
Title Revisiting adapters with adversarial training Abstract While adversarial training is generally used as a defense mechanism, recent works show that it can also act as a regularizer. By co-training a deep network on clean and adversarial inputs, it is possible to improve classification accuracy on the clean, non-adversarial inputs. We demonstrate that, contrary to previous findings, it is not necessary to separate batch statistics when co-training on clean and adversarial inputs, and that it is sufficient to use adapters with few domain-specific parameters for each type of input. We establish that using the classification token of a Vision Transformer (VIT) as an adapter is enough to match the classification performance of dual normalization layers, while using significantly less additional parameters. First, we improve upon the top-1 accuracy of a non-adversarially trained VIT-B16 model by +1.12% on IMAGENET (reaching 83.76% top-1 accuracy). Second, and more importantly, we show that training with adapters enables model soups through linear combinations of the clean and adversarial tokens. These model soups, which we call adversarial model soups, allow us to trade-off between clean and robust accuracy without sacrificing efficiency. Finally, we show that we can easily adapt the resulting models in the face of distribution shifts. Our VIT-B16 obtains top-1 accuracies on IMAGENET variants that are on average +4.00% better than those obtained with Masked Autoencoders. 1 INTRODUCTION Deep networks are inherently susceptible to adversarial perturbations. Adversarial perturbations fool deep networks by adding an imperceptible amount of noise which leads to an incorrect prediction with high confidence (Carlini & Wagner, 2017; Goodfellow et al., 2015; Kurakin et al., 2016b; Szegedy et al., 2014). There has been a lot of work on building defenses against adversarial perturbations (Papernot et al., 2016; Kannan et al., 2018); the most commonly used defense is adversarial training as proposed by Madry et al. (2018) and its variants (Zhang et al., 2019; Pang et al., 2020; Huang et al., 2020; Rice et al., 2020; Gowal et al., 2020), which use adversarially perturbed images at each training step as training data. Earlier studies (Kurakin et al., 2016a; Xie et al., 2019b) showed that using adversarial samples during training leads to performance degradation on clean images. However, AdvProp (Xie et al., 2019a) challenged this observation by showing that adversarial training can act as a regularizer, and therefore improve nominal accuracy, when using dual batch normalization (BatchNorm) layers (Ioffe & Szegedy, 2015) to disentangle the clean and adversarial distributions. We draw attention to the broad similarity between the AdvProp approach and the adapters literature (Rebuffi et al., 2017; Houlsby et al., 2019) where a single backbone network is trained on multiple domains by means of adapters, where a few parameters specific to each domain are trained separately while the rest of the parameters are shared. In light of this comparison, we further develop the line of work introduced by AdvProp and analyze it from an adapter perspective. In particular, we explore various adapters and aim to obtain the best classification performance with minimal additional parameters. Our contributions are as follows: • We show that, in order to benefit from co-training on clean and adversarial samples, it is not necessary to separate the batch statistics of clean and adversarial images in BatchNorm layers. 
We demonstrate empirically that it is enough to use domain-specific trainable parameters to achieve similar results. (∗Work done during an internship at DeepMind.)
• Inspired by the adapters literature, we evaluate various adapters. We show that training separate classification tokens of a VIT for the clean and adversarial domains is enough to match the classification performance of dual normalization layers with 49× fewer domain-specific parameters. This classification token acts as a conditioning token which can modify the behaviour of the network to be either in clean or robust mode (Figure 1).
• Unlike Xie et al. (2019a) and Herrmann et al. (2022), we also aim at preserving the robust performance of the network against adversarial attacks. We show that our conditional token can obtain SOTA nominal accuracy in the clean mode while at the same time achieving competitive ℓ∞-robustness in the robust mode. As a by-product of our study, we show that adversarial training of VIT-B16 on IMAGENET leads to state-of-the-art robustness against ℓ∞-norm bounded perturbations of size 4/255.
• We empirically demonstrate that training with adapters enables model soups (Wortsman et al., 2022). This allows us to introduce adversarial model soups, models that trade off between clean and robust accuracy through linear interpolation of the clean and adversarial adapters. To the best of our knowledge, our work is the first to study adversarial model soups. We also show that adversarial model soups perform better on IMAGENET variants than the state-of-the-art with masked auto-encoding (He et al., 2022).

2 RELATED WORK
Adversarial training. Although more recent approaches have been proposed, the most successful method to reduce the vulnerability of image classifiers to adversarial attacks is adversarial training, which generates on-the-fly adversarial counterparts for the training images and uses them to augment the training set (Croce et al., 2020). Goodfellow et al. (2015) used the single-step Fast Gradient Sign Method (FGSM) attack to craft such adversarial images. Later, Madry et al. (2018) found that using iterative Projected Gradient Descent (PGD) yields models robust to stronger attacks. Their scheme has been subsequently improved by several modifications, e.g. a different loss function (Zhang et al., 2019), unlabelled or synthetic data (Carmon et al., 2019; Uesato et al., 2019; Gowal et al., 2021), model weight averaging (Gowal et al., 2020), adversarial weight perturbations (Wu et al., 2020), and better data augmentation (Rebuffi et al., 2021). While the main drawback of adversarial training is the degradation of performance of robust models on clean images (Tsipras et al., 2018), Xie et al. (2019a) showed that adversarial images can be leveraged as a strong regularizer to improve the clean accuracy of classifiers on IMAGENET. In particular, they propose AdvProp, which introduces separate BatchNorm layers specific to clean or adversarial inputs, with the remaining layers being shared. This approach and the role of normalization layers when training with both clean and adversarial points has been further studied by (Xie & Yuille, 2019; Walter et al., 2022). Recently, Wang et al.
(2022) suggest removing BatchNorm layers from the standard RESNET architecture (He et al., 2016) to retain high clean accuracy with adversarial training, but this negatively affects the robustness against stronger attacks (see https://github.com/amazon-research/normalizer-free-robust-training/issues/2). Finally, (Kireev et al., 2021; Herrmann et al., 2022) showed that carefully tuning the threat model in adversarial training might improve the performance on clean images and in the presence of distribution shifts, such as common corruptions (Hendrycks & Dietterich, 2018).

Adapters. In early work on deep networks, Caruana (1997) showed that sharing network parameters among tasks acts as a regularizer. Aiming at a more efficient parameter sharing, (Rebuffi et al., 2017; Rosenfeld & Tsotsos, 2018) introduced adapters – small training modules specific to each task which can be stitched all along the network. In other lines of work, (Mallya et al., 2018; Mancini et al., 2018) adapt a model to new tasks using efficient weight masking and (Li et al., 2016; Maria Carlucci et al., 2017) perform domain adaptation by batch statistics modulation. While these approaches require having as many adapters as tasks, Perez et al. (2018) propose an adapter layer whose weights are generated by a conditioning network. Besides computer vision, adapters are also used in natural language processing for efficient fine-tuning (Houlsby et al., 2019; Pfeiffer et al., 2020; Wang et al., 2020) and multi-task learning (Stickland & Murray, 2019).

Merging multiple models. While ensembles are a popular and successful way to combine multiple independently trained classifiers to improve on individual performance (Ovadia et al., 2019; Gontijo-Lopes et al., 2021), they increase the inference cost as they require a forward pass for each sub-network of the ensemble. An alternative approach is taken by Wortsman et al. (2022) who propose to fine-tune a fully trained model with different hyperparameter configurations and then average the entire set of weights of the various networks. The obtained model soups get better performance than each individual model and even ensembles. Model soups are in spirit similar to Stochastic Weight Averaging (Izmailov et al., 2018) which consists in averaging weights along an optimization trajectory rather than averaging over independent runs.

3 METHOD
3.1 CO-TRAINING WITH NOMINAL AND ADVERSARIAL TRAINING
Goodfellow et al. (2015) propose adversarial training as a way to regularize standard training. They jointly optimize the model parameters θ on clean and adversarial images with the co-training loss

α L(f(x; θ), y) + (1 − α) max_{δ ∈ S} L(f(x + δ; θ), y),   (1)

where pairs of associated examples x and labels y are sampled from the training dataset, f(·; θ) is a model parametrized by θ, L defines the loss function (such as the cross-entropy loss in the classification context), and S is the set of allowed perturbations. Setting α = 1 boils down to nominal training on clean images and setting α = 0 leads to adversarial training as defined by Madry et al. (2018). In our case, we consider ℓ∞ norm-bounded perturbations of size ϵ = 4/255, so we have S = {δ | ∥δ∥∞ ≤ ϵ}, and we use untargeted attacks to generate the adversarial perturbations δ (see details in Section 4).

3.2 SEPARATING BATCH STATISTICS IS NOT NECESSARY
BatchNorm is a widely used normalization layer shown to improve performance and training stability of image classifiers (Ioffe & Szegedy, 2015).
We recall that a BatchNorm layer, given a batch as input, first normalizes it by subtracting the mean and dividing by the standard deviation computed over the entire batch, then it applies an affine transformation, with learnable scale and offset parameters. During training, it accumulates these so-called batch statistics to use during test time, so that the output of the classifier for each image is independent of the other images in the batch. The batch statistics can be seen an approximation of the statistics over the image distribution. Xie et al. (2019a) show that optimizing the co-training loss in Eq. 1 can yield worse results on clean images than simple nominal training. This is especially the case when the network has a low capacity or the attack (i.e., the inner maximization) is too strong (such as using a large perturbation radius ϵ). To solve this issue, they propose AdvProp, which consists in using distinct BatchNorm layers for clean and adversarial images. They argue that “maintaining one set of [BatchNorm] statistics results in incorrect statistics estimation”, which could be the reason for the performance degradation. We note that using two sets of BatchNorm layers for the clean and adversarial samples as in AdvProp creates two sets of batch statistics but also two sets of learnable scale and offset parameters. In the following we investigate whether having separate batch statistics is a necessary condition for successful co-training. Figure 2 shows the clean and robust accuracy of various model architectures as training progresses. The left panel demonstrates that, if we share both batch statistics and scales/offsets (Shared BatchNorm, orange curves), the robust accuracy (orange dashed line) quickly drops, far from the one obtained by AdvProp (Dual BatchNorm, blue curve) which is above 34%. However, if we use a single set of batch statistics but specific scales and offsets for clean and adversarial images, we can observe on the right panel of Figure 2 that the robust accuracy (DualParams BatchNorm, orange dashed line) matches the one (blue dashed line) obtained by AdvProp. This demonstrates that it is possible to achieve nominal and robust classification results similar to those of AdvProp without separate batch statistics. Furthermore, there exist normalization layers such as LayerNorm (Ba et al., 2016) or GroupNorm (Wu & He, 2018) which do not use batch statistics, as their normalization step is done per sample and not per batch. Hence, according to the hypothesis of Xie et al. (2019a), these types of normalization layer should not suffer from performance degradation. Nevertheless, the left panel of Figure 2 shows that their robust accuracy (green and red dashed lines) does not match the robust accuracy of AdvProp (Dual BatchNorm), and is unstable over training steps. However, by making the scales and offsets of LayerNorm and GroupNorm specific to clean and adversarial images, their robust accuracy matches that obtained with dual BatchNorm layers, as shown in the right panel of Figure 2. This suggests that a key element to make the co-training loss of Eq. 
1 work for various normalization layers is to have trainable parameters which are specific to the clean and adversarial images. (Footnote 2: Interestingly, contrary to our observation that standard GroupNorm fails to retain robustness, Xie & Yuille (2019) report that GroupNorm matches Dual BatchNorm. We explain this difference as we use a stronger untargeted attack in this manuscript compared to the targeted attack of Xie & Yuille (2019). Using a stronger attack allows us to reveal failure modes that would have been hidden otherwise.)

3.3 REVISITING ADAPTERS WITH ADVERSARIAL TRAINING
The last observation strongly relates this setting to the adapters literature where a single backbone architecture has some parameters, called adapters, which are specific to different domains while the rest of the parameters are shared among tasks. In our case, the clean images form one domain and the adversarial images constitute another domain. In this work, we go beyond having separate normalization layers for the clean and adversarial images and investigate other types of adapters. Formally, the model parameters θ can be decomposed into parameters ψ which are shared among domains and parameters ϕ which are specific to a domain. We call ϕclean the parameters used when training on clean images and ϕadv the parameters used when training on adversarial images. For example, in Section 3.2, when we used dual LayerNorm layers, the scales and offsets of these normalization layers are contained in ϕclean and ϕadv whereas the rest of the model parameters are in ψ. Based on Eq. 1, we optimize the following loss:

α L(f(x; ψ ∪ ϕclean), y) + (1 − α) max_{δ ∈ S} L(f(x + δ; ψ ∪ ϕadv), y).   (2)

Finally, we introduce some notation for models with adapters at inference time: we call f(·; ψ ∪ ϕclean) the clean mode for prediction as we use the adapters ϕclean trained on the clean data. Conversely, we call f(·; ψ ∪ ϕadv) the robust mode when using the adapters ϕadv trained on the perturbed data.

3.4 TRAINING WITH ADAPTERS ENABLES ADVERSARIAL MODEL SOUPS
Wortsman et al. (2022) propose model soups, which consist in averaging the weights of multiple models fine-tuned from the same pre-trained model. The resulting weight-averaged model can benefit from the original models without incurring any extra compute and memory cost at inference time. Currently, in our setting the user would have to know at test time if the network should be in clean or robust mode. A model soup, by its ability to merge models, is a way to bypass this issue. We formulate the hypothesis that training with adapters enables model soups. With this in mind, we observe that training with adapters means that most of the model parameters are already shared, so model souping would simply consist in linearly interpolating the weights of the adapters for the two modes. We call adversarial model soups the model soups obtained from a model co-trained on clean and adversarial samples. We get the following parametrized model:

f(·; ψ ∪ (β ϕclean + (1 − β) ϕadv)),   (3)

where β is the weighting factor when averaging the adapters. If β = 1, the model soup boils down to the clean mode and conversely β = 0 corresponds to the robust mode. In Section 5.2, we assess this hypothesis and show that forming model soups between independent nominal and robust models fails.

4 EXPERIMENTAL SETUP
Architecture. We focus our study on the B16 variant of the Vision Transformer (VIT-B16) introduced by Dosovitskiy et al. (2020). We adopt the modifications proposed by He et al.
(2022): the linear classifier is applied on the mean of the final tokens except the classification token. We train this network by using supervised training from scratch as proposed in He et al. (2022) (see the appendix). Attacks. We consider adversarial robustness against untargeted ℓ∞-bounded attacks with radius ϵ = 4/255. This is the most common setup for IMAGENET models, and it is more challenging to defend against than the targeted threat model used by Xie & Yuille (2019). To generate the adversarial perturbations we use Projected Gradient Descent (Madry et al., 2018) with 2 steps named PGD2 (see details in the appendix) at training time and with 40 steps for evaluation (PGD40). Datasets. We focus our experimental evaluation on the IMAGENET dataset (Russakovsky et al., 2015), with images at 224 × 224 resolution for both training and testing, as this is the standard large-scale benchmark for SOTA models and was used by Xie et al. (2019a) for AdvProp. We report clean and adversarial accuracy on the whole validation set. Moreover, we test the robustness against distribution shifts via several IMAGENET variants: IMAGENET-C (Hendrycks & Dietterich, 2018), IMAGENET-A (Hendrycks et al., 2019), IMAGENET-R (Hendrycks et al., 2020), IMAGENET-SKETCH (Wang et al., 2019), and Conflict Stimuli (Geirhos et al., 2018). 5 EXPERIMENTAL RESULTS Similarly to our observation in Section 3.2 for a RESNET-50, a fully shared VIT-B16 trained with the co-training loss Eq. 1 fails to retain any robustness. Therefore, we first investigate various adapters for VIT-B16 to find an efficient training setting in Section 5.1. Then we study adversarial model soups with adapters in Section 5.2 and finally show that training with adapters generalizes to other datasets and threat models. 5.1 FINDING AN EFFICIENT SETTING Choice of adapter. Using adapters increases the number of parameters as the layers which we choose as adapters have twice as many parameters: one set of parameters for clean images and another for adversarial images. Hence, to avoid increasing the network memory footprint too heavily, we restrict our adapters study to layers with few parameters, thus excluding self-attention (Vaswani et al., 2017) layers and MLP layers. This leaves the options of having dual embedder, positional embedding, normalization layers or classification token; among them, the classification token has by far the least amount of parameters, 49-770× fewer than the other candidates (see details in Table 1). We must still verify that so few parameters are enough to preserve the advantages of the AdvProp training scheme. Hence, we train a model for each type of adapter and compare them with two models without adapters, one trained with nominal training and the other with adversarial training. We observe in Table 1 that by using two classification tokens as adapters, which means only 768 extra parameters out of 86M, we reach 83.56% clean accuracy on IMAGENET, which is an improvement of +0.92% over standard training. Moreover, we obtain a robust accuracy of 49.87% in the robust mode, which is close to the robust accuracy given by adversarial training. Notably, we see that adapting other layers with more parameters such as all LayerNorm scales and offsets results in similar performances in both clean and robust modes. 
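As an illustration of the dual classification token adapter and the co-training objective, here is a minimal PyTorch-style sketch. `backbone` stands for the shared ViT layers and is assumed to accept an externally supplied classification token, so the interface below is our own simplification rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTokenViT(nn.Module):
    """Shared ViT backbone (psi) with one classification token per domain (phi_clean, phi_adv)."""
    def __init__(self, backbone, embed_dim=768):
        super().__init__()
        self.backbone = backbone                                      # shared parameters
        self.cls_clean = nn.Parameter(torch.zeros(1, 1, embed_dim))   # phi_clean (768 parameters)
        self.cls_adv = nn.Parameter(torch.zeros(1, 1, embed_dim))     # phi_adv   (768 parameters)

    def forward(self, x, mode="clean"):
        token = self.cls_clean if mode == "clean" else self.cls_adv
        return self.backbone(x, cls_token=token)  # assumed backbone signature

def co_training_loss(model, x, y, attack, alpha=0.4):
    """alpha * clean loss (clean token) + (1 - alpha) * adversarial loss (adv token), as in Eq. 2."""
    loss_clean = F.cross_entropy(model(x, mode="clean"), y)
    x_adv = attack(lambda inp: model(inp, mode="adv"), x, y)  # e.g. the PGD sketch given earlier
    loss_adv = F.cross_entropy(model(x_adv, mode="adv"), y)
    return alpha * loss_clean + (1.0 - alpha) * loss_adv
```

The point of the sketch is that the only duplicated parameters are the two tokens; every other weight is shared between the clean and adversarial branches.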
This indicates that (i) it is not necessary to split the normalization layers to reproduce the effect of AdvProp, and (ii) even a very small amount of dual parameters provide sufficient expressiveness to adapt the shared portion of the network to the two modes. Therefore, in the rest of the manuscript we focus on dual classification tokens as it requires the smallest number of extra parameters. Number of attack steps. As the results in Table 1 were obtained with PGD2, we check if we can reduce the number of attack steps to be more computationally efficient. In Table 2, we report the results for two one-step methods: N-FGSM by de Jorge et al. (2022) and FAST-AT by Wong et al. (2020). If we use the step sizes recommended in the corresponding papers, both methods suffer from catastrophic overfitting (Wong et al., 2020) (illustrated in Figure 6 in the appendix) and therefore have no robustness at all. In Table 2 we avoid such catastrophic overfitting by reducing the step sizes to ϵ and 0.75ϵ for FAST-AT and N-FGSM respectively and we observe that both methods perform more than 1% worse in robust accuracy than PGD2. We also increase the number of attack steps to 5 with PGD5. We notice a small improvement over PGD2 of 0.4% in robust accuracy while the clean accuracy is on par. Hence, PGD2 seems to be a good compromise between efficiency and classification performance. Weighting the co-training loss. In the co-training loss in Eq. 1, the α hyperparameter controls how much the loss is weighted towards clean or adversarial samples. For example, setting α = 0 means we train solely on adversarial samples. In Figure 3, where we evaluate several values for α (dividing the range between 0 and 1 into intervals of length 0.1), we notice that only the values between α = 0 and α = 0.4 form a Pareto front that strictly dominates the other intervals. Indeed, between α = 1 and α = 0.4, decreasing α leads to better performance both in clean and robust modes. In fact, setting α = 0.4 leads to 83.76% clean accuracy (in clean mode) and 52.19% robust accuracy (in robust mode) which are both better than the values obtained in Table 1 with α = 0.5. In Figure 7 (in the appendix), we visualize the filters of the embedder when training with various values of α. We observe that for α = 0.2 and for α = 0.8 the filters look quite similar to the filters learned with adversarial training (α = 0) and nominal training (α = 1), respectively. Interestingly, filters learned with α = 0.4 and α = 0.6 are not the simple combination of nominal and adversarial filters but rather new visually distinct filters. This indicates that co-training on clean and adversarial samples can lead to a new hybrid representation for the shared layers compared to nominal and adversarial training. Robustness to stronger attacks. For completeness we further test the robustness of a subset of our models with a mixture of AUTOATTACK (Croce & Hein, 2020) and MULTITARGETED (Gowal et al., 2019), denoted by AA+MT. Pure adversarial training, which obtains 56.19% robust accuracy against PGD40 (Table 1), reaches 54.15% robust accuracy against AA+MT. This is a new state-of-the-art robust accuracy on IMAGENET, improving by +6.55% over the 47.60% reported by Debenedetti et al. (2022). While Debenedetti et al. (2022) advocate for weak data augmentation for training robust VIT, our training procedure follows He et al. 
(2022) and contains heavy augmentations (see appendix): we conclude that large models still benefit from strong data augmentations even with adversarial training. Finally, the robust mode of the model co-trained with α = 0.4 in the previous paragraph reaches 49.55% robust accuracy against AA+MT, which still surpasses the prior art and preserves competitive robust performance.

5.2 EVALUATING MODEL SOUPS
Adapters enable adversarial model soups. One downside of using adapters is that one needs to know if for an input image the network should be put in clean or robust mode. This motivates adversarial model soups, which allow us to create a single model performing well both in clean and robust accuracy. First, if we independently train two VIT-B16, one nominally and the other adversarially, and then try to perform model soups on them, we notice in Table 9 (in the appendix) that both robust and clean accuracies drop immediately when the weighting factor β between parameters is not equal to 0 or 1. We evaluate various model soups with the models of Table 1, meaning that the parameters specific to the clean and robust domain are averaged with weight β to obtain a single classifier. We notice in Figure 9 (in the appendix) that adversarial model soups work equally well with the various types of adapters, where sliding the value of β allows us to smoothly trade off clean accuracy for robustness. This validates our hypothesis that adapters enable model soups.

Soup or ensemble. In Figure 4 we compare the classification performance of adversarial model soups and ensembles obtained by linear combination of the clean and robust modes at the probability prediction level. We notice that ensembling produces a better Pareto front than adversarial model soups, but ensembles, with their two forward passes, require twice the compute of model soups. Hence,
We then compute the accuracy of adversarial model soups with varying β on IMAGENET variants (results in Figure 5): while half of the best performance are obtained with the clean classification token, for some variants such as IMAGENET-R, IMAGENET-C and IMAGENET-SKETCH the best results are obtained with intermediate tokens. Hence, adversarial model soups can be used to reach a compromise between IMAGENET variants to get the best average performance. Here β = 0.9 yields the best mean accuracy 61.23%. In Table 3, we notice that this adversarial model soup improves the mean accuracy by +4.00% over a fine-tuned Masked Autoencoder (MAE-B16) checkpoint from He et al. (2022) and by +2.37% over Pyramid-AT from Herrmann et al. (2022). It also improves by +2.24% over the best performing ensemble of two networks trained independently with nominal and adversarial training respectively. 5.3 EVALUATING ON OTHER THREAT MODELS AND DATASETS Evaluating other threat models. IMAGENET variants are also a good benchmark to compare different types of adversarial attack to generate the perturbations for the co-training loss in Eq. 2: untargeted ℓ∞-bounded perturbations with budget ϵ = 4/255 (our standard setup), untargeted ℓ2bounded with ϵ ∈ {1, 2, 4, 8}, targeted (random target class as in Xie et al., 2019a) ℓ∞-bounded with ϵ ∈ {4/255, 8/255, 12/255}, and the Pyramid attack proposed by Herrmann et al. (2022). In Table 4, we select the best adversarial model soups after training with each method a VIT-B16 with dual classification tokens, and report its results on all variants. We see that the clean accuracy on the IMAGENET validation set improves in all cases compared to standard training. Moreover, although the best performing attack varies across variants, we notice that the untargeted ℓ∞ attack achieves the best average accuracy. Evaluating on other datasets. We further test the effect of using the co-training loss with the classification token as adapter on other datasets. In Table 5, we see that our training procedure provides a consistent performance boost in clean accuracy compared to nominal training on MNIST (LeCun et al., 2010), CIFAR-10, CIFAR-100 (Krizhevsky et al., 2014), SVHN (Netzer et al., 2011), SUN397 (Xiao et al., 2010), RESISC-45 (Cheng et al., 2017) and DMLAB (Beattie et al., 2016). This shows that our method generalizes well across datasets and can help regularize Vision Transformers on these smaller datasets, where they are known to perform worse compared to CNNs without pre-training (Zhang et al., 2021). In Appendix C, we also demonstrate that models pre-trained with co-training on IMAGENET yield significantly better classification results when fine-tuning nominally on small datasets compared to fine-tuning from nominally and adversarially pre-trained models. 6 CONCLUSION In this work we have shown that adapters with a few hundreds of domain specific parameters are sufficient to switch between models with radically different behaviors. In fact, just replacing the classification token of a VIT can turn a classifier with SOTA nominal accuracy and no adversarial robustness into another one with robust accuracy close to that achieved with standard adversarial training. Moreover, merging the adapters allows to smoothly transition between the two modes, finding classifiers (i.e. our adversarial model soups) with better performance on distribution shifts. 
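To make the selection of the soup weight on the IMAGENET variants concrete, the following is a small sketch that sweeps β and keeps the token giving the best mean accuracy; `set_cls_token` and `evaluate_on_variants` are hypothetical helpers, not part of the paper's code.

```python
import numpy as np

def select_soup_weight(model, phi_clean, phi_adv, set_cls_token, evaluate_on_variants):
    """Pick the interpolation weight beta with the best mean accuracy over the variants."""
    best_beta, best_mean = None, float("-inf")
    for beta in np.linspace(0.0, 1.0, 11).tolist():  # beta = 0 -> robust mode, 1 -> clean mode
        set_cls_token(model, beta * phi_clean + (1.0 - beta) * phi_adv)
        accuracies = evaluate_on_variants(model)     # e.g. {"IN-R": ..., "IN-C": ..., "IN-Sketch": ...}
        mean_acc = sum(accuracies.values()) / len(accuracies)
        if mean_acc > best_mean:
            best_beta, best_mean = beta, mean_acc
    return best_beta, best_mean
```

In the paper's experiments this kind of sweep selects β = 0.9 as the best compromise across the variants.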
1. What is the focus and contribution of the paper regarding VIT and domain-specific training?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and inspiration?
3. What are the weaknesses of the paper, especially regarding the illustration of adversarial examples and reporting performances on other datasets?
4. Do you have any concerns or suggestions regarding the presentation and organization of the paper's content?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes to treat clean and adversarial samples as different domains and co-train them through separate classification tokens of a VIT, which produces adversarial model soups that trade off between clean and robust accuracy by simple interpolation. The authors present a good alternative to AdvProp with fewer parameters and better robustness.

Strengths And Weaknesses
Strength
- It is very novel and inspiring to separate tokens of a VIT for domain-specific training in the adversarial setting, and implementing the well-known idea (clean and adversarial samples come from different domains) in the new tool (VIT) is interesting and inspiring.
- The authors also insightfully demonstrate that training with adapters enables model soups, which allows a simple trade-off between robustness and accuracy in the adversarial model soup by interpolation between the clean and adversarial token. It removes the burden of retraining the whole model when the balance needs to change.
- Extensive experiments show that the proposed method could achieve SOTA clean and robust performance in different modes. Besides, the results on distribution shift are also encouraging.
- The authors also thoroughly analyze the influence of adapters, attack steps, the weighting hyperparameter, extrapolation, etc. The results in Table 9 convincingly justify the weight-sharing design.
- The paper is well-motivated, well-organized, and easy to follow.

Weakness
- The authors should better illustrate how the adversarial examples are included in clean mode training.
- It would be better to report the adversarial model soup performance on other datasets, e.g., CIFAR-10, CIFAR-100, along with other models in RobustBench.
- The authors should use \citep instead of \cite in some places and maintain a larger font size for tables.
- Section 3.2 does not seem necessary to be so long. In contrast, the results of the adversarial model soup, Figure 9, are more important to me.
- Would additional data further increase the performance as in [1,2]?

[1] Improving robustness using generated data, NeurIPS 2021.
[2] Data augmentation can improve robustness, NeurIPS 2021.

Clarity, Quality, Novelty And Reproducibility
See Strengths And Weaknesses.
ICLR
Title ResPerfNet: Deep Residual Learning for Regressional Performance Modeling of Deep Neural Networks Abstract The rapid advancements of computing technology facilitate the development of diverse deep learning applications. Unfortunately, the efficiency of parallel computing infrastructures varies widely with neural network models, which hinders the exploration of the design space to find high-performance neural network architectures on specific computing platforms for a given application. To address such a challenge, we propose a deep learning-based method, ResPerfNet, which trains a residual neural network with representative datasets obtained on the target platform to predict the performance for a deep neural network. Our experimental results show that ResPerfNet can accurately predict the execution time of individual neural network layers and full network models on a variety of platforms. In particular, ResPerfNet achieves 8.4% of mean absolute percentage error for LeNet, AlexNet and VGG16 on the NVIDIA GTX 1080Ti, which is substantially lower than the previously published works. 1 INTRODUCTION Deep learning (DL) has exploded successfully and is applied to many application domains, such as image recognition and object detection Thus, a lot of human experts design high-accuracy neural network architectures for different applications. However, for Internet of Things (IoT) applications, large neural network models cannot fit into resource-constrained devices. On the other hand, a system designer often tries to find a proper computing platform or a deep learning accelerator (DLA) to execute a DL application with acceptable responsiveness. An exhaustive way to optimize the system design is to evaluate the cost and performance of desired DL models on all the available hardware/software options, but it is not only tedious but costly and lengthy in practice. Since DL frameworks and accelerators are evolving rapidly, and even some slight changes could significantly impact the performance of DL applications, it may be necessary to update the performance models frequently. Therefore, we need a systematic and efficient approach to produce accurate performance models when changes occur. While several works (Qi et al.; Justus et al. (2018); Wang et al.) have been proposed to estimate the delivered performance of a given DL model on a specific computing platform, so as to rapidly evaluate design alternatives, the estimates from these efforts are not very accurate. For example, the mean absolute percentage error (MAPE) for estimating full neural network models such as LeNet (LeCun et al. (1998)), AlexNet (Krizhevsky et al. (2012)) and VGG16 (Simonyan & Zisserman) on the NVIDIA GTX 1080Ti is as high as 24% in Wang et al., whose accuracy is the best among the previous works, but still has room for improvement. In this paper, we propose a deep residual network architecture, called ResPerfNet, to efficiently and accurately model the performance of DL models running on a wide range of DL frameworks and DLAs. It is based on the residual function approach proposed by (He et al. (2016) and inspired by the prior works Liu & Yang (2018); Jha et al. (2019); Wan et al. (2019)), which use residual neural networks to solve regression problems. The proposed model can be trained with performance data collected from many system configurations to establish a unified performance predictor which assists the users in selecting the DL model, the DL framework, and the DLA for their applications. 
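For reference, the MAPE figures quoted above can be computed as in the following short sketch; this helper and the example numbers are ours, not code or data from the cited works.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100.0)

# Hypothetical measured vs. predicted inference times (milliseconds).
print(mape([12.0, 45.0, 80.0], [10.8, 47.0, 92.0]))  # ~9.8
```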
Extensive experiments have been done to show that our unified approach not only provides more accurate performance estimates than the previous works, but also enables the users to quickly pre- dict the performance of their DL applications executed with various models-framework-accelerator configurations. The contributions of this paper are summarized as follows. • An unified DL-based approach for estimating the computing performance of DL applications on a variety of models-framework-accelerator configurations, which enables the users to explore the hardware/software design space quickly. • A novel deep residual neural architecture is proposed to deliver the most accurate performance predictions that we are aware of. Experimental results confirm that our approach yields lower prediction errors on across various platforms. The remaining of this paper is organized as follows. Section 2 presents the related work. Section 3 describes the architecture of ResPerfNet. Section 4 shows the proposed systematic modeling method. Section 5 elaborates the dataset and training mechanism to train the ResPerfNet models within a reasonable time span. Section 6 evaluates the efficiency of our approach. Section 7 concludes the paper. 2 BACKGROUND AND RELATED WORK With the rapid evolving of both hardware accelerators and DL models, the performance measure/estimation of the DL models on the DLA platforms is an important task to evaluate the effectiveness of the software/hardware solutions to the given problems. Different approaches have been proposed to serve the purposes. Benchmarking approaches, such as DAWNbench (Coleman et al. (2017)) and MLPerf (Reddi et al. (2020)), aim at the measurements of the training and inference performance of the machine-learning (ML) models on certain software/hardware combinations. By offering a set of standardized machine learning workloads and the instructions for performance benchmarking, these benchmarks are able to measure how fast a system can perform the training and inference for ML models. Analytical approach, as reported in PALEO (Qi et al.), constructs the analytical performance model for DL systems. The execution time is decomposed into the total time for the computation and communication parts, which are derived from the utilization of the computing and communication resources on the target hardware, respectively. For instance, the computation time is estimated by dividing the total floating-point operations required by the DL model to the actual processing speed (i.e., the processed floating-point operations per second for the DL model) delivered by the computing hardware. The communication time is calculated by the similar approach.This approach highly relies on the accuracy of the benchmarking results (i.e., to provide the actual processing speed of the target model on the hardware), which requires its users to choose the benchmarks wisely to perfectly match the program characteristics of their target deep learning models, so as to give a proper estimate of the actual processing speed. However, the manual process (of the benchmarks selection) limit its widespread adoption. DL-based approaches build the DNNs for estimating the DL models’ performance by learning the relationships between the characteristics of the DL models and the specifications of the accelerating hardware. The following works focus on TensorFlow-based DL models. Justus et al. 
Justus et al. (2018) use a fully-connected multi-layer perceptron (MLP) network for performance prediction, taking the configuration of the DL model, the specification of the hardware accelerator, and the training data of the DL model as the input features to the MLP network. However, due to the simplified communication time estimation model, in which the communications from GPU to CPU for each of the DL layers are counted repeatedly when estimating the communication time, their model tends to produce over-estimated results. Wang et al. use PerfNet (an MLP network) to learn the relationships between the configurations and the execution time of the target DL model. They further decompose the execution of a DL model into three phases, preprocessing, execution, and postprocessing, and train multiple PerfNet network instances, each of which learns the relationship between the model configurations and the model execution time for a specific phase. By aggregating the prediction results for the three phases, their proposed work is able to predict the total execution time of a given DL model. Nevertheless, the MLP network has its own limitation: it is hard to further improve its accuracy, since a deeper MLP network leads to lower prediction accuracy.
In consideration of the limitations of the prior works listed above and the need to model optimizing DL frameworks, our work uses a systematic approach to characterize DL models built with various DL frameworks, and adopts a residual neural network to model their delivered performance on the DLAs.

3 RESPERFNET ARCHITECTURE
ResPerfNet adopts an ML-based approach for the performance estimation of different types of neural network layers. Furthermore, ResPerfNet is specifically designed to prevent the degradation problem, which refers to the phenomenon that increasing the depth and/or the width of each layer of a DNN may not necessarily improve the accuracy; instead, the accuracy saturates rapidly and then degrades sharply, as reported in (He & Sun (2015); Srivastava et al. (2015)). In other words, a wider or deeper architecture is more likely to lead to a higher training error. To solve this problem, deep residual learning was proposed and applied to each group of stacked NN layers (He et al. (2016)), where a certain number of stacked layers are logically grouped together to form a residual block. Hence, in this work, to address the degradation problem, we apply deep residual learning to every few stacked layers (He et al. (2016)). The residual block is defined as Equation 1, where x and y represent the input feature maps and the output vectors of the residual layer, respectively. The function $\mathcal{F}(x, \{W_i\})$ performs the residual operations to be learned. The operation $\mathcal{F}(x, \{W_i\}) + x$ is performed by a shortcut connection and element-wise addition. Figure 1 illustrates the network architecture of ResPerfNet. The second, third and fourth layers (i.e., two convolutional layers and one add layer) together form a residual block, and there are a total of six residual blocks in ResPerfNet.

$$y = \mathcal{F}(x, \{W_i\}) + x \quad (1)$$

As shown in Figure 1, ResPerfNet consists of 26 layers, including 15 convolutional layers, 6 add layers, 4 fully-connected (FC) layers and 1 dropout layer.
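As a concrete illustration of Equation 1 and the Conv1D-based residual blocks described here, the following is a minimal tf.keras sketch of one such block; the filter count and kernel size follow the description in the text, but the exact layer ordering, activations, and input feature length of the published model are not specified in this paper, so treat this as an approximation rather than the authors' implementation.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, kernel_size=3):
    # F(x): two Conv1D layers with the same number of filters
    y = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    # shortcut connection and element-wise addition: y = F(x) + x
    return layers.Add()([x, y])

inputs = tf.keras.Input(shape=(32, 1))             # hypothetical feature length
h = layers.Conv1D(128, 3, padding="same")(inputs)  # head convolutional layer
h = residual_block(h, 128)
h = residual_block(h, 128)
model = tf.keras.Model(inputs, h)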
Before the FC layers, every 7 layers contain one head convolutional layer (e.g., Conv1D 3 representing the head convolutional layer for the first residual block) and two residual blocks, each of which consists of two convolutional layers with the same number of filters and an element-wise add layer as the residual function. The first head convolutional layer has 128 filters of kernel size 3 with a stride length of 1. In order to reduce the complexity of ResPerfNet, the second head convolutional layer uses 64 filters of kernel size 3 with a stride length of 1. Moreover, the number of filters for the six residual blocks decreases from 128 filters in the first two blocks to 32 filters in the last two blocks. Three FC layers are attached to the last residual block, where each of the FC layers has 128 neurons. The dropout layer with a ratio of 0.2 is connected to the last FC layer, which uses a single neuron to perform the one-dimensional regression for predicting the elapsed time of the designated type of layer. Our proposed residual neural architecture, ResPerfNet, achieves significant improvements in accuracy compared with traditional machine learning algorithms, such as support vector regression, polynomial regression and XGBoost, and is even better than the MLP network. A series of experiments in Section 6.1 shows that ResPerfNet is superior to the previous works.

4 METHODOLOGY
This section presents the methodology of using ResPerfNet to relate the performance characteristics of a CNN layer to the delivered performance of that layer. We first define the target neural networks for the performance modeling in Section 4.1. The three-phase modeling of a given CNN is presented in Section 4.2. Lastly, the same modeling for a given NN layer is further described in Section 4.3.

4.1 FORMALIZING THE NEURAL NETWORKS
A neural network can be represented by a directed acyclic graph, denoted as $\mathcal{N}(\{u^{(i)}\}_{i=1}^{k})$, consisting of an ordered sequence of k nodes, where each graph node $u^{(i)}$ represents a layer of the neural network $\mathcal{N}$, such as convolutional, pooling, and fully-connected layers. The input and output feature maps of a graph node $u^{(i)}$ performing the operation $f^{(i)}$ are denoted as $input(f^{(i)})$ and $output(f^{(i)})$, respectively. In this work, we assume that a given neural network will be run on the host system h with a single hardware accelerating device d.
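One simple way to realize this formalization in code is to encode $\mathcal{N}$ as an ordered list of layer descriptors; the field names and values below are hypothetical and only meant to illustrate the kind of per-layer features (layer type plus its configuration) that a predictor would consume.

# an ordered sequence of k layer nodes u^(1), ..., u^(k), sketched for a LeNet-like model
lenet_like = [
    {"type": "conv2d",  "matrix_size": 28, "kernel_size": 5, "channels_in": 1,  "filters": 6,  "strides": 1},
    {"type": "pooling", "matrix_size": 28, "pool_size": 2,   "strides": 2},
    {"type": "conv2d",  "matrix_size": 14, "kernel_size": 5, "channels_in": 6,  "filters": 16, "strides": 1},
    {"type": "pooling", "matrix_size": 14, "pool_size": 2,   "strides": 2},
    {"type": "dense",   "dim_input": 400,  "dim_output": 120},
    {"type": "dense",   "dim_input": 120,  "dim_output": 10},
]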
4.2 THE THREE-PHASE PERFORMANCE MODELING
The execution time of a given neural network model includes the computation time spent on the acceleration device d and the data communication time between the host system h and the device d. As most of the computations are performed by the accelerating device and the communications occur merely at the first and the last layers of the given model, the estimated execution time of a given neural network model with k layers is formulated as follows, where the formulation assumes that all k layers within the given model are accelerated by the single device d.

$$T(\mathcal{N}) = T_{pre}(u^{(1)}) + \sum_{i=1}^{k} T_{exe}(u^{(i)}) + T_{post}(u^{(k)}) \quad (2)$$

The above equation shows the three-phase performance modeling approach, where $T_{pre}$, $T_{exe}$, and $T_{post}$ represent the execution time for the preprocess, execution, and postprocess phases, respectively. Specifically, the communication time of bringing the input data from the host system to the accelerating device at the first layer is denoted as $T_{pre}(u^{(1)})$, where the i-th NN layer is represented as $u^{(i)}$. The summation of the execution time over all the NN layers is represented as $\sum_{i=1}^{k} T_{exe}(u^{(i)})$. The communication time of transferring the inference results from the accelerating device back to the host system is denoted as $T_{post}(u^{(k)})$. Our prediction model delivers more accurate performance estimates than previously proposed methods by modeling these three phases, defined in the following subsection, for a DLA separately and adding the predicted results together as in Equation 2.

4.3 MODELING INDIVIDUAL NN LAYERS
A similar approach is used to model the performance of the i-th NN layer $u^{(i)}$. In particular, for each layer $u^{(i)}$, the execution times for the preprocess, execution, and postprocess phases are $T_{pre}(u^{(i)})$, $T_{exe}(u^{(i)})$, and $T_{post}(u^{(i)})$, respectively. These time components constitute the estimated execution time of the layer $u^{(i)}$, as defined in the equation below. The superscript index i is omitted to simplify the equations by using the simpler form u.

$$T(u) = T_{pre}(u) + T_{exe}(u) + T_{post}(u) \quad (3)$$

The preprocess phase prepares the input data for the acceleration on d and involves four operations: 1) issuing the commands for copying the input feature maps between h and d asynchronously, 2) performing the memory copy of the input feature maps in 1), 3) issuing the commands for the operation f on d, and 4) performing the data reshaping operations for the input feature maps. The data reshaping operations, which transform the input/output data into a more efficient format for the next operation on d, usually occur in data transmissions between h and d. The lengths of time for the four operations are $\mathcal{R}(input(f), h, d)$, $\mathcal{M}(input(f), h, d)$, $\mathcal{R}(f, d)$, and $\mathcal{T}(input(f), d)$, respectively. As shown in Equation 4, the time consumed in the preprocess phase is defined as the summation of the time required by the above four operations.

$$T_{pre}(u) = \mathcal{R}(input(f), h, d) + \mathcal{M}(input(f), h, d) + \mathcal{R}(f, d) + \mathcal{T}(input(f), d) \quad (4)$$

Intuitively, the time consumed for computation in the execution phase, $\mathcal{C}(f, d)$, would be identical to the computation time of f on d. Unfortunately, the execution time of a layer measured from the micro-benchmarks includes the time consumed by the data reshaping operations in both directions, from h to d and from d to h, which are $\mathcal{T}(input(f), d)$ and $\mathcal{T}(output(f), d)$, respectively. As the deployed NN layers collectively run on the acceleration device d, isolating the data reshaping time from the measured execution time of the NN layer in each micro-benchmark facilitates the execution time estimation of the deployed NN layers with the formula $\sum_{i=1}^{k} T_{exe}(u^{(i)})$. Accordingly, the time for the execution phase is defined in Equation 5.

$$T_{exe}(u) = \mathcal{C}(f, d) - \mathcal{T}(input(f), d) - \mathcal{T}(output(f), d) \quad (5)$$

The postprocess phase deals with the procedure of returning the inference results back to the invoking application on the host system. That is, it covers reshaping the output vector into the format accepted by h, copying the output vector back to h from d, and moving the prediction result to the application level (i.e., the call site of the model inference) on the host system. The corresponding execution times for the above three operations are denoted as $\mathcal{T}(output(f), d)$, $\mathcal{M}(output(f), d, h)$, and $\mathcal{V}(output(f), h)$, respectively.

$$T_{post}(u) = \mathcal{T}(output(f), d) + \mathcal{M}(output(f), d, h) + \mathcal{V}(output(f), h) \quad (6)$$
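The aggregation in Equations 2 and 3 is straightforward to express in code; the sketch below assumes hypothetical per-phase predictors (predict_pre, predict_exe, predict_post) standing in for the trained per-phase ResPerfNet instances, and simply combines their outputs over a list of layer descriptors.

def predict_model_time(layers, predict_pre, predict_exe, predict_post):
    """Equation 2: T(N) = T_pre(u^(1)) + sum_i T_exe(u^(i)) + T_post(u^(k))."""
    t_pre = predict_pre(layers[0])                 # host-to-device communication, first layer only
    t_exe = sum(predict_exe(u) for u in layers)    # per-layer execution on the device
    t_post = predict_post(layers[-1])              # device-to-host communication, last layer only
    return t_pre + t_exe + t_post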
5 TRAINING DATA AND LOSS FUNCTION
In this section, we present the details of the dataset used to build the proposed performance prediction models. In particular, the configurations of our benchmark tools for generating the training dataset are discussed in Section 5.1. The tools for collecting and extracting the data are described in Section 5.2, the data transformation techniques that facilitate training convergence are introduced in Section 5.3, and the loss function designed to better deal with the unbalanced training data is introduced in Section 5.4.

5.1 DATA PREPARATION
The training data consists of the characteristics of the TensorFlow and TensorRT programs and the performance information of these programs running on the target computing hardware; the proposed model correlates the characteristics with their runtimes during the training process. In order to better capture the characteristics of different TensorFlow and TensorRT configurations (i.e., the code patterns, which are treated as the features during the model training process), we have developed a benchmark tool that generates a set of micro-benchmarks, which are TensorFlow and TensorRT programs with different configurations for the three types of layers: convolution, pooling and dense layers. The micro-benchmarks are generated by randomly selecting the configurations for each layer type, so as to collect the performance for different configurations. The possible configurations (or features) for all three layer types and their ranges are listed in Table 3. These configurations are the function parameters for the three types of layers, extracted from the TensorFlow 1.13 APIs tensorflow.layers.conv2d, tensorflow.layers.max_pooling2d, and tensorflow.layers.dense, and their possible combinations number 7.33 × 10^14, 7.33 × 10^10, and 2.14 × 10^9, respectively. Since each micro-benchmark takes at least several seconds to obtain stable and accurate measurements, it is impossible to cover the entire design space by brute force, which would require over 10^14 micro-benchmark runs.

5.2 DATA COLLECTION AND DATA EXTRACTION
The data preparation step is used to generate the TensorFlow- and TensorRT-based micro-benchmarks. The data collection takes about two weeks to run 100,000 different samples of the TensorFlow micro-benchmarks on the DLAs and collect the performance data. For the TensorRT micro-benchmarks, more than two weeks were spent optimizing and profiling the 25,000 different configurations of the TensorRT programs. It is interesting to note that the TensorRT experiments generate large optimized intermediate files, especially for the dense layer, which requires more than 5TB of storage space to keep its parameters. Due to the disk space limitation, we select 16,000 out of the 25,000 samples to run and profile their performance. For data extraction, our data processing tool filters out the outliers (data with extreme values) before feeding the profiled data to the model training. The total elapsed time of each layer is decomposed into the preprocessing time (T_pre), the execution time (T_exe), and the postprocessing time (T_post), as described in the previous section. In order to test the accuracy of our trained model, the collected samples are split into 80% for training and 20% for testing.
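The random sampling of layer configurations described in Section 5.1 can be sketched as follows; the parameter ranges and field names here are placeholders (Table 3 is not reproduced in this text), so both are assumptions used purely for illustration.

import random

# hypothetical ranges standing in for Table 3
CONV_RANGES = {
    "batch_size":  range(1, 65),
    "matrix_size": range(1, 513),
    "kernel_size": range(1, 8),
    "channels_in": range(1, 2049),
    "filters":     range(1, 2049),
    "strides":     range(1, 5),
}

def sample_conv_config():
    # draw one random convolutional-layer configuration for a micro-benchmark
    return {name: random.choice(rng) for name, rng in CONV_RANGES.items()}

configs = [sample_conv_config() for _ in range(5)]

Each sampled configuration would then be instantiated as a small TensorFlow or TensorRT program and timed on the target platform.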
5.3 DATA TRANSFORMATION
Suppose we are given a training dataset D comprising m observations and p features of X, written as $D = \{t_i, x_{i1}, x_{i2}, ..., x_{ip}\}_{i=1}^{m}$, where t is a vector of observed values $t_i$ (i = 1, ..., m), and X can be seen as a matrix of row-vectors $x_i$ (i = 1, ..., m) or of m-dimensional column-vectors $X_j$ (j = 1, ..., p). The coefficient vector w keeps the weights of the model. The predicted value is denoted as y(x, w) for any given model with weights w and data x. In order to improve the convergence efficiency and stability of the stochastic gradient descent (SGD) algorithm, three types of data transformations are adopted in this work: scalar multiplication, Z-score transformation, and Box-Cox transformation. Scalar multiplication is used to provide fine-grained updates in the SGD procedure and scales each observed value $t_i$. Z-score transformation puts each data feature $X_j$ from different sources onto the same scale to eliminate the prejudicial bias of the feature values. Box-Cox transformation converts the values of the features $X_j$ to approximately standard normal random variables, which further improves the effectiveness of the Z-score transformation. Details of these data transformations are available in Appendixes B, C and D.

5.4 LOSS FUNCTION
As the observed vector t follows a positively skewed distribution and often contains noise contributed by measurement errors, we adopt the mean absolute percentage logarithmic error (MAPLE) as the loss function for the prediction model (Wang et al.), as shown in Equation 7. To deal with the skewed distribution, the logarithmic operations on the predicted values $1 + y(x_i, w)$ and the observed values $1 + t_i$, together with the division by the observed values in MAPLE, are expected to enhance the accuracy on the small values, which occur frequently. On the other hand, the absolute value in MAPLE helps increase the resistance against outliers that may unexpectedly appear in the measured data. Moreover, to prevent over-fitting, L2 regularization is added to the loss function, where $\lambda_2$ is a scaling factor for the regularization.

$$E_n(\mathbf{w}) = \frac{1}{n} \sum_{i=0}^{n} \left| \frac{\log(1 + y(x_i, \mathbf{w})) - \log(1 + t_i)}{\log(1 + t_i)} \right| + \lambda_2 \|\mathbf{w}\|^2 \quad (7)$$

6 EVALUATION
The layer-wise and model-wise performance results are evaluated to demonstrate the effectiveness of ResPerfNet in this section. In particular, we compare the layer-wise execution time estimated by ResPerfNet and the previous works to show that ResPerfNet is superior to other regression-based approaches, such as polynomial regression, support vector regression and PerfNet. Three statistical metrics, mean absolute percentage error (MAPE), root mean squared error (RMSE) and mean absolute error (MAE), are used to quantify the effectiveness of each tested performance modeling approach. In addition, to demonstrate the capability of ResPerfNet for full-model prediction, three popular CNNs, LeNet, AlexNet, and VGG16, are considered in the model-wise experiments. Note that the three data transformations mentioned in Section 5.3 are applied in ResPerfNet by default unless specified otherwise. The details of our experimental environments are listed in Appendix F.
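For concreteness, the MAPLE objective in Equation 7 can be written as a short NumPy function; this is a minimal sketch of our reading of the formula (with the L2 term computed over an explicit list of weight arrays), not the authors' released code.

import numpy as np

def maple_loss(y_pred, t_true, weights, lam2=0.1):
    # Equation 7: mean absolute percentage logarithmic error + L2 penalty
    y_pred = np.asarray(y_pred, dtype=float)
    t_true = np.asarray(t_true, dtype=float)
    ratio = (np.log1p(y_pred) - np.log1p(t_true)) / np.log1p(t_true)
    l2 = lam2 * sum(np.sum(w ** 2) for w in weights)
    return np.mean(np.abs(ratio)) + l2

# e.g., predicted vs. measured layer times (ms), with a dummy weight matrix
print(maple_loss([1.2, 3.9], [1.0, 4.0], [np.zeros((2, 2))], lam2=0.0))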
6.1 LAYER-WISE EXECUTION TIME PREDICTION
Table 1 compares the MAPEs of the execution time for the convolutional layers, as estimated by ResPerfNet and the prior works. Although appropriate parameter adjustments are applied to obtain better results, the MAPEs of polynomial regression, support vector regression, and XGBoost are over 29%, which means the error is quite large and indicates that these approaches are not capable of providing good performance predictions for real applications. In contrast, the DL-based approaches, PerfNet and ResPerfNet, give more accurate estimates, with MAPEs of less than 15%. In particular, ResPerfNet outperforms the other approaches, with MAPEs of 11.75% and 14.23% for the TensorFlow and TensorRT models, respectively. The results suggest that ResPerfNet correctly associates the program characteristics with the performance model.
To further examine the effectiveness of PerfNet and ResPerfNet and the impact of the Box-Cox transformation on the predicted results, Figure 2 plots the error curves of the TensorFlow convolutional layer using PerfNet and ResPerfNet with and without the data transformation. Figure 2(a) shows that most of the MAPEs of ResPerfNet on the testing dataset are below 15%, as depicted by the red/black solid lines. Notably, ResPerfNet with the Box-Cox data transformation reaches the lowest prediction error (11.7%), 2% less than ResPerfNet without the data transformation. Similar trends can be observed in Figure 2(b) using the RMSE metric, in which the black solid line also shows the best performance. The results presented in Figure 2 show that ResPerfNet with the Box-Cox transformation has a better convergence rate for the same number of training epochs. The detailed training process is illustrated in Appendix E. Moreover, the R^2 values for the predicted and measured execution times of the convolutional, pooling and dense layers are all above 0.97, which demonstrates the high prediction quality of ResPerfNet, as illustrated in Figure 4 of Appendix I.
The layer-wise performance results of the TensorFlow and TensorRT models delivered by ResPerfNet are listed in Table 2. Overall, the MAPEs for all phases are under 16%, which alleviates the concern of over-fitting. For RMSE, the value for the TensorFlow version of the convolutional layer is 0.84 ms. It is better than the 0.98 ms reported by PerfNet (Wang et al.), and also better than the 2.55 ms produced by the method in Justus et al. (2018). Detailed prediction results for the three layer types under different phases for TensorFlow (with 3 additional platforms) and for TensorRT are presented in Appendixes G and H, respectively. From the tables, we can see that ResPerfNet delivers better predictions for TensorFlow than for TensorRT. That is because the TensorRT-based ResPerfNet is currently trained with less training data, as described in Section 5.2. We believe that the accuracy of the TensorRT predictions can be further improved with as much data as is available for TensorFlow.

6.2 MODEL-WISE EXECUTION TIME PREDICTION
Figure 3 plots the inference time estimated by PerfNet and ResPerfNet for the three popular DNNs, LeNet, AlexNet, and VGG16, using the TensorFlow and TensorRT frameworks. Figure 3(a)-(c) shows that ResPerfNet gives more accurate estimates than PerfNet, since the averaged MAPE of the three models is 8.4% over all tested batch sizes, while PerfNet has an averaged MAPE of 24.04%. Figure 3(d)-(f) illustrates a similar trend for the TensorRT-based DNNs. The averaged MAPE of these DNNs using ResPerfNet is 17%. The results show that our modeling and methodology are effective on the two popular frameworks.
7 CONCLUSION
In this paper, we proposed a deep residual network architecture, ResPerfNet, to model the performance of neural networks on the target DLAs by considering the interactions between the host and the GPU and by decomposing a neural network operation into three phases. In addition, we apply ResPerfNet to predict the execution time of optimized models, such as TensorRT models, using the same performance characteristics as those used for unoptimized models. Our experimental results show that ResPerfNet is able to provide high-accuracy estimates on various DLAs, which helps facilitate the exploration of proper neural network architectures built with various DL frameworks.

A FEATURES OF TRAINING DATA

B SCALAR MULTIPLICATION
Scalar multiplication is applied to the observed vector t as in Equation 8 to magnify the prediction targets, since the original values are too small to provide accurate estimates. It is interesting to note that, in our experience, scalar multiplication is ineffective for some commonly used loss functions, such as the mean squared error (MSE); nevertheless, it works well with MAPLE by making the gradients converge smoothly without frequently adjusting the learning rate in each epoch.

$$\text{scalar multiplication}: \quad \mathbf{t} = \mathbf{t} \times scaler \quad (8)$$

C Z-SCORES TRANSFORMATION
The Z-score transformation is performed on each column-vector $X_j$ as in Equation 9, where $\bar{X}_j$ is the mean of the column-vector $X_j$ and $\sigma_j$ is its standard deviation. The Z-score transformation rescales the values of the features so that the mean is zero and the standard deviation is one, which is useful for gradient descent algorithms.

$$\text{Z-scores transformation}: \quad X_j = \frac{X_j - \bar{X}_j}{\sigma_j} \quad (9)$$

D BOX-COX TRANSFORM
The Box-Cox transformation transforms the input features $X_j$ towards a normal distribution for the best model accuracy. The Box-Cox transformation is shown in Equation 10, where $\lambda_1$ is the best approximation for the selected features. In our experiments, the Box-Cox transformation is applied to Matrix Size and Kernel Size (see Table 3) for the convolutional layer data, and to Matrix Size for the pooling layer data.

$$\text{Box-Cox transformation}: \quad X_j^{(\lambda_1)} = \begin{cases} \dfrac{X_j^{\lambda_1} - 1}{\lambda_1} & \text{if } \lambda_1 \neq 0 \\ \ln X_j & \text{if } \lambda_1 = 0 \end{cases} \quad (10)$$

E THE PROPOSED GRADIENT DESCENT ALGORITHM
Algorithm 1 is the pseudo-code of our proposed algorithm for training each phase of the layers. The required parameters are defined as follows: optimizer: the algorithm used to update the attributes of the neural network; lr_scheduler: sets the learning rate of each parameter group to the initial lr times a given function; total_epochs: the total number of training epochs; lr: learning rate; bs: the maximum batch size for each epoch; η: period of learning rate decay; γ: multiplicative factor of learning rate decay; and λ2: multiplicative factor for the weight penalty.

Algorithm 1 The stochastic gradient descent algorithm proposed by ResPerfNet, where our default settings for our DL regression problems are optimizer = Adam, total_epochs = 200, lr = 0.1, bs = 128, η = 40, γ = 0.5, λ2 = 0.1, and scaler = 10.
Require: α: multiplicative factor for the weights, n: current batch size, τ: current iteration.
1: t ← scalar_multiplication(t, scaler)  ▷ Update t by Equation 8.
2: x ← Z-scores(Box-Cox(x))  ▷ Update x by Equations 10 and 9.
3: for e in total_epochs do
4:   lr ← lr_scheduler(lr, e, η, γ)  ▷ Update the learning rate by the scheduler.
5:   for b in (m/bs + 1) do
6:     α ← optimizer(lr)  ▷ Update the weight factor by the optimizer.
7:     n ← x[b * bs : min((b+1) * bs, m)]  ▷ Calculate n (current batch size).
8:     ∇E_n ← gradient of E_n on model w_τ  ▷ Calculate E_n by Equation 7.
9:     w_{τ+1} ← w_τ − α∇E_n(w_τ)
10:    τ ← τ + 1
11:  end for
12: end for
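A minimal Python rendering of Algorithm 1 is sketched below; the gradient function, the data arrays, and the optimizer/scheduler details (here a plain SGD step with step decay, rather than Adam) are placeholders, so treat this as an illustration of the control flow rather than the authors' training code.

import numpy as np

def train(model_grad, w, x, t, total_epochs=200, lr=0.1, bs=128,
          eta=40, gamma=0.5, scaler=10.0):
    # x, t are assumed to be NumPy arrays of features and observed times
    t = t * scaler                         # step 1: scalar multiplication (Eq. 8)
    # step 2 (Box-Cox and Z-score transforms on x) is assumed to be applied beforehand
    m = len(x)
    for e in range(total_epochs):
        if e > 0 and e % eta == 0:         # step 4: step-decay learning-rate schedule
            lr *= gamma
        for b in range(m // bs + 1):
            xb = x[b * bs: min((b + 1) * bs, m)]   # step 7: current batch
            tb = t[b * bs: min((b + 1) * bs, m)]
            if len(xb) == 0:
                continue
            grad = model_grad(w, xb, tb)   # step 8: gradient of E_n (Eq. 7)
            w = w - lr * grad              # step 9: weight update
    return w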
F EXPERIMENTAL SETUP
The experiments are done on Intel i7 processors with a variety of hardware accelerators, as listed in Table 4. TensorFlow 1.13.1 and TensorRT 5.0.2.6 with Python 3.6 are used to build the DL models, running on Ubuntu 18.04.4 LTS (kernel version 5.4.0-42-generic).

G LAYER-WISE EXECUTION TIME PREDICTION FOR TENSORFLOW

J MODEL-WISE EXECUTION TIME PREDICTION FOR TENSORFLOW

I PREDICTED VS. MEASURED TIME (TENSORFLOW)

H LAYER-WISE EXECUTION TIME PREDICTION FOR TENSORRT
1. What is the focus of the paper, and what are the proposed contributions?
2. What are the strengths and weaknesses of the proposed method, ResPerfNet?
3. Are there any questions or concerns regarding the architecture and design choices of ResPerfNet?
4. How does the reviewer assess the comparison and evaluation of the proposed method with other approaches?
5. Are there any issues or inconsistencies in the writing and presentation of the paper that need to be addressed?
6. What is the impact of the missing reference [Wang 2020] on the paper's validity and comparability with previous works?
Review
Review
The paper presents a method, called ResPerfNet, to predict the performance of deep neural networks. The method relies on a residual neural network that is trained on a large number of different network architectures and performance measurements on real hardware. The paper evaluates the proposed method on three networks, LeNet, AlexNet, and VGG16, in two different frameworks, i.e., TensorFlow and TensorRT. The results are promising, but the comparison with other approaches is weak. For example, the proposed method, ResPerfNet, is only compared to one other approach, PerfNet (from a paper that doesn't seem to be published yet; at least I couldn't find it), using TensorFlow (but not TensorRT). The paper has the potential to have impact, but it needs to be improved before publication. For example, the following issues need to be addressed:
• Motivation for the selected structure / architecture of ResPerfNet.
• Some confusion about kernels, filters, etc. in the description of the ResPerfNet architecture (Section 3 + Fig. 1).
• I'm a bit surprised that the dropout layer is very close to the end of the network. Why? Why 0.2 dropout (and not 0.1 or 0.4)?
• It's a bit confusing (and inconsistent) that the index i is left out sometimes and sometimes not, e.g., Eq (2) vs. Eq (3) vs. how it is written in the text flow.
• C(f,d) in Eq (5) is never defined.
• The platform for sample selection / data collection should be mentioned in Section 5.2.
• Section 5.4: Although defining the loss function as MAPLE does reduce the problem with a skewed distribution, it does not solve it, so "cope with it" is too strong a formulation.
• It is disturbing that one of the main references [Wang 2020] can't be found, despite extensive searching. This is problematic since the only other solution ResPerfNet is compared to is PerfNet, which is published in [Wang 2020]. This limits the possibility of comparing this work with previous work. The conference where the [Wang 2020] paper was published took place in mid October 2020, which is after the deadline for ICLR.
ICLR
Title ResPerfNet: Deep Residual Learning for Regressional Performance Modeling of Deep Neural Networks Abstract The rapid advancements of computing technology facilitate the development of diverse deep learning applications. Unfortunately, the efficiency of parallel computing infrastructures varies widely with neural network models, which hinders the exploration of the design space to find high-performance neural network architectures on specific computing platforms for a given application. To address such a challenge, we propose a deep learning-based method, ResPerfNet, which trains a residual neural network with representative datasets obtained on the target platform to predict the performance for a deep neural network. Our experimental results show that ResPerfNet can accurately predict the execution time of individual neural network layers and full network models on a variety of platforms. In particular, ResPerfNet achieves 8.4% of mean absolute percentage error for LeNet, AlexNet and VGG16 on the NVIDIA GTX 1080Ti, which is substantially lower than the previously published works. 1 INTRODUCTION Deep learning (DL) has exploded successfully and is applied to many application domains, such as image recognition and object detection Thus, a lot of human experts design high-accuracy neural network architectures for different applications. However, for Internet of Things (IoT) applications, large neural network models cannot fit into resource-constrained devices. On the other hand, a system designer often tries to find a proper computing platform or a deep learning accelerator (DLA) to execute a DL application with acceptable responsiveness. An exhaustive way to optimize the system design is to evaluate the cost and performance of desired DL models on all the available hardware/software options, but it is not only tedious but costly and lengthy in practice. Since DL frameworks and accelerators are evolving rapidly, and even some slight changes could significantly impact the performance of DL applications, it may be necessary to update the performance models frequently. Therefore, we need a systematic and efficient approach to produce accurate performance models when changes occur. While several works (Qi et al.; Justus et al. (2018); Wang et al.) have been proposed to estimate the delivered performance of a given DL model on a specific computing platform, so as to rapidly evaluate design alternatives, the estimates from these efforts are not very accurate. For example, the mean absolute percentage error (MAPE) for estimating full neural network models such as LeNet (LeCun et al. (1998)), AlexNet (Krizhevsky et al. (2012)) and VGG16 (Simonyan & Zisserman) on the NVIDIA GTX 1080Ti is as high as 24% in Wang et al., whose accuracy is the best among the previous works, but still has room for improvement. In this paper, we propose a deep residual network architecture, called ResPerfNet, to efficiently and accurately model the performance of DL models running on a wide range of DL frameworks and DLAs. It is based on the residual function approach proposed by (He et al. (2016) and inspired by the prior works Liu & Yang (2018); Jha et al. (2019); Wan et al. (2019)), which use residual neural networks to solve regression problems. The proposed model can be trained with performance data collected from many system configurations to establish a unified performance predictor which assists the users in selecting the DL model, the DL framework, and the DLA for their applications. 
Extensive experiments have been done to show that our unified approach not only provides more accurate performance estimates than the previous works, but also enables the users to quickly pre- dict the performance of their DL applications executed with various models-framework-accelerator configurations. The contributions of this paper are summarized as follows. • An unified DL-based approach for estimating the computing performance of DL applications on a variety of models-framework-accelerator configurations, which enables the users to explore the hardware/software design space quickly. • A novel deep residual neural architecture is proposed to deliver the most accurate performance predictions that we are aware of. Experimental results confirm that our approach yields lower prediction errors on across various platforms. The remaining of this paper is organized as follows. Section 2 presents the related work. Section 3 describes the architecture of ResPerfNet. Section 4 shows the proposed systematic modeling method. Section 5 elaborates the dataset and training mechanism to train the ResPerfNet models within a reasonable time span. Section 6 evaluates the efficiency of our approach. Section 7 concludes the paper. 2 BACKGROUND AND RELATED WORK With the rapid evolving of both hardware accelerators and DL models, the performance measure/estimation of the DL models on the DLA platforms is an important task to evaluate the effectiveness of the software/hardware solutions to the given problems. Different approaches have been proposed to serve the purposes. Benchmarking approaches, such as DAWNbench (Coleman et al. (2017)) and MLPerf (Reddi et al. (2020)), aim at the measurements of the training and inference performance of the machine-learning (ML) models on certain software/hardware combinations. By offering a set of standardized machine learning workloads and the instructions for performance benchmarking, these benchmarks are able to measure how fast a system can perform the training and inference for ML models. Analytical approach, as reported in PALEO (Qi et al.), constructs the analytical performance model for DL systems. The execution time is decomposed into the total time for the computation and communication parts, which are derived from the utilization of the computing and communication resources on the target hardware, respectively. For instance, the computation time is estimated by dividing the total floating-point operations required by the DL model to the actual processing speed (i.e., the processed floating-point operations per second for the DL model) delivered by the computing hardware. The communication time is calculated by the similar approach.This approach highly relies on the accuracy of the benchmarking results (i.e., to provide the actual processing speed of the target model on the hardware), which requires its users to choose the benchmarks wisely to perfectly match the program characteristics of their target deep learning models, so as to give a proper estimate of the actual processing speed. However, the manual process (of the benchmarks selection) limit its widespread adoption. DL-based approaches build the DNNs for estimating the DL models’ performance by learning the relationships between the characteristics of the DL models and the specifications of the accelerating hardware. The following works focus on TensorFlow-based DL models. Justus et al. 
(2018) use a fully-connected multiple-layer perceptron (MLP) network for performance prediction, using the configurations of the DL model and the specification of the hardware accelerator, and the training data of the DL model as the input features to the MLP network. However, due to the simplified communication time estimation model, where the communications from GPU to CPU for each of the DL layers are counted repeatedly for estimating the communication time, their model tends to provide over-estimated results. Wang et al. use PerfNet (an MLP network) to learn the relationships between the configurations and the execution time of the target DL model. They further decompose the execution of a DL model into three phases, preprocessing, execution, and postprocessing, and train multiple PerfNet network instances, each of which learns the relationships between the model configurations and the model execution time for a specific phase. By aggregating the prediction results for the three phases, their proposed work is able to predict the total execution time of a given DL model. Nevertheless, the MLP network has its own limitation, i.e., it is hard to further enhance its performance since a deeper MLP network will lead to lower prediction accuracy. In consideration of the limitations of the prior works listed above and the need of modeling the optimizing DL frameworks, our work uses the systematical approach to characterize the DL models built with various DL framework, and adopts the residual neural network to model their delivered performance on the DLAs. 3 RESPERFNET ARCHITECTURE ResPerfNet adopts a ML-based approach for the performance estimation of different types of neural network layers. Furthermore, ResPerfNet is specially designed to prevent the degradation problem, which refers to the phenomenon that increasing the depth and/or the width of each layer for the DNN may not only necessarily improve the accuracy, but get saturated rapidly and then degrades sharply as reported in (He & Sun (2015); Srivastava et al. (2015)). In other words, it is more likely to lead to a higher training error on the neural network with a wider or deeper architecture. To solve the problem, the deep residual learning is proposed and applied to each group of the stacked NN layers (He et al. (2016)), where a certain number of stacked layers are logically grouped together to form a residual block. Hence, in this work, to address the degradation problem, we adopt the deep residual learning to every few stacked layers (He et al. (2016)). The residual block is defined as Equation 1, where x and y represent the input feature maps and the output vectors of the residual layer, respectively. The function F(x, {Wi}) performs the residual operations to be learned. The operation F(x, {Wi}) + x is performed by a shortcut connection and element-wise addition. Figure 1 illustrates the network architecture of ResPerfNet. The second, third and fourth layers (i.e., two convolutional and one add layers) together form a residual block, and there are a total of six residual blocks in ResPerfNet. y = F(x, {Wi}) + x (1) As shown in Figure 1, the ResPerfNet consists of 26 layers, including 15 convolutional layers, 6 add layers, 4 fully-connected (FC) layers and 1 dropout layer. 
Before FC layers, every 7 layers contain one head convolutional layer (e.g., Conv1D 3 representing the head convolutional layer for the first residual block) and two residual blocks, each of which consists of two convolutional layers with the same filters and an element-wise add residual function layer. The first head convolutional layer has 128 filters of kernel size 3 with a stride length of 1. In order to reduce the complexity of ResPerfNet, the second head convolutional layer uses 64 filters of kernel size 3 with a stride length of 1. Moreover, the number of filters for the six residual blocks is decreasing from 128 filters in the first two blocks to 32 filters for the last two blocks. Three FC layers are attached to the last residual block, where each of the FC layers has 128 neurons. The dropout layer with the ratio of 0.2 is connected to the last FC layer, which uses a single neuron to perform the one-dimensional regression for predicting the elapsed time of the designated type of the layers. Our proposed residual neural architecture, ResPerfNet, gets significant improvements in accuracy compared with traditional machine learning algorithms, such as support vector regression, polynomial regression and XGBoost, and is even better than the MLP network. A series of experiments has been done to show ResPerfNet is superior to the previous works in Section 6.1. 4 METHODOLOGY This section presents the methodology of using ResPerfNet to relate the performance characteristics of a CNN layer to the delivered performance of the given layer. We first define the target neural networks for the performance modeling in Section 4.1. The three-phase based modeling of a given CNN based is presented in Section 4.2. Lastly, the same modeling for a given NN layer is further described in Section 4.3. 4.1 FORMALIZING THE NEURAL NETWORKS A neural network can be represented by a directed acyclic graph, denoted as N ({u(i)}ki=1), consisting of an ordered sequence of k nodes, where each graph node u(i) represents a layer of the neural networkN , such as convolutional, pooling, and fully-connected layers. The input and output feature maps of a graph node u(i) performing the operation f (i) are denoted as input(f (i)) and output(f (i)), respectively. In this work, we assume that a given neural network will be run on the host system h with a single hardware accelerating device d. 4.2 THE THREE-PHASE PERFORMANCE MODELING The execution time of a given neural network model includes the computation time spent on the acceleration device d and the data communication time between the host system h and the device d. As most of the computations are performed by the accelerating device and the communications occur merely at the first and the last layers of the given model, the estimated execution time of a given neural network model with k layers is formulated as follows, where the formulation assumes that all k layers within the given model are accelerated by the single device d. T (N ) = Tpre(u(1)) + k∑ i=1 Texe(u (i)) + Tpost(u (k)) (2) The above equation shows the three-phase performance modeling approach, where Tpre, Texe, and Tpost represent the execution time for the preprocess, execution, and postprocess phases, respectively. Specifically, the communication time of bringing the input data from the host system to the accelerating device at the first layer is denoted as Tpre(u(1)), where the i-th NN layer is represented as u(i). 
The summation of the execution time for all the NN layers is represented as ∑k i=1 Texe(u (i)). The communication time of transferring the inference results from the accelerating device to the host system is defined as Tpost(u(k)). Our prediction model delivers more accurate performance estimates than previously proposed methods by modeling these three phases defined in the following subsection for a DLA separately and adding the predicted results together as Equation 2. 4.3 MODELING INDIVIDUAL NN LAYERS The similar approach is used to model the performance of the i-th NN layer u(i). In particular, for each layer u(i), the execution times for the preprocess, execution, and postprocess phases are Tpre(u (i)), Texe(u(i)), and Tpost(u(i)), respectively. The above time components constitute the estimated execution time of the layer u(i), as defined in the equation below. The superscript index i is omitted to simplify the looks of the equations by using the simpler form u. T (u) = Tpre(u) + Texe(u) + Tpost(u) (3) The preprocess phase is for preparing the input data for the acceleration in d and involves with the four operations: 1) issuing the commands for copying input feature maps on h and d asynchronously, 2) performing the memory copy of the input feature maps in 1, 3) issuing the commands for the operation f on d, and 4) performing the data reshaping operations for input feature maps. The data reshaping operations, which transform the input/output data to the more efficient format for the next operation on d, usually occur in data transmissions between h and d. The lengths of time for the four operations areR(input(f), h, d),M(input(f),R(f, d), and T (input(f), d), respectively. As shown in Equation 4, the time consumed in the preprocess phase is defined as the summation of the time required by the above four operations. Tpre(u) = R(input(f), h, d) +M(input(f), h, d) +R(f, d) + T (input(f), d) (4) Intuitively, the time consumed for computation, which is C(f, d), in the execution phase would be identical to the computation time of f on d. Unfortunately, the measured execution time of a layer from the micro-benchmarks includes the time consumed by the data reshaping operations in both directions, from h to d and from d to h, which are T (input(f), d) and T (output(f), d), respectively. As the deployed NN layers collectively run on the acceleration device d, isolating the data reshaping time from the measured execution time for the NN layer of each micro-benchmark facilitates the execution time estimation of the deployed NN layers with the formula, ∑k i=1 Texe(u (i)). Regarding this situation, the time for the execution phase is defined in Equation 5. Texe(u) = C(f, d)− T (input(f), d)− T (output(f), d) (5) The postprocess phase is defined for dealing with the procedure of returning the inference computation results back to the invoking application on the host system. That is, it is about reshaping the output vector into the format accepted by h, copying the output vector back to h from d, and moving the prediction result to the application level (i.e., the call site of the model inference) on the host system. The corresponding execution time for the above three operations are denoted as T (output(f), d),M(output(f, d, h), and V(output(f), h), respectively. Tpost(u) = T (output(f), d) +M(output(f), d, h) + V(output(f), h) (6) 5 TRAINING DATA AND LOSS FUNCTION In this section, we present the details of the dataset used to build the proposed performance prediction models. 
In particular, the configurations of our developed benchmark tools for the training dataset is discussed in Section 5.1. The tool collecting and extracting the data is described in Section 5.2, and the data transformation techniques to facilitate the training convergence is introduced in Section 5.3. The specially designed loss function to better deal with the unbalanced training data is introduced in Section 5.4. 5.1 DATA PREPARATION The training data is the characteristics of the TensorFlow and TensorRT programs and the performance information of the programs running on the target computing hardware, where the proposed model helps correlate the characteristics and their runtimes during the training process. In order to better catch the characteristics of different TensorFlow and TensorRT configurations (i.e., the code patterns, which are considered as the features during the model training process), we have developed a benchmark tool to generate a set of micro-benchmarks, which are actually TensorFlow and TensorRT programs with different configurations for the three types of layers, including convolution, pooling and dense layers. The generation of the micro-benchmarks are done by randomly selecting the configurations for each type of the layer, so as to collect the performance for different configurations. The possible configurations (or features) for all three layer types and their ranges are listed in Table 3. These configurations are actually the function parameters for the three types of layers, which are extracted from TensorFlow 1.13 APIs, including tensorflow.layers.conv2d, tensorflow.layers.maxpooling2d, and tensorflow.layers.dense, and their possible combinations are 7.33 × 1014, 7.33 × 1010, and 2.14 × 109, respectively. While each microbenchmark takes at least seconds for the stable and accurate measurements, it is impossible to cover the entire design space with brute force, which requires over 1014 micro-benchmark runs. 5.2 DATA COLLECTION AND DATA EXTRACTION The data preparation is used to generate the TensorFlow- and TensorRT-based micro-benchmarks. The data collection takes about two weeks for running 100,000 different samples of the TensorFlow micro-benchmarks on the DLAs to collect the performance data. On the other hand, for the TensorRT micro-benchmarks, more than two weeks were spent to optimize and profile the 25,000 different configurations of the TensorRT programs. It is interesting to note that the TensorRT experiments generate large optimized intermediate files, especially for the dense layer, where it requires more than 5TB of storage space to keep its parameters. Due to the disk space limitation, we select 16,000 out of 25,000 samples to run and profile their performance. For data extraction, our data processing tool filters out the outliers (data with extreme values) before feeding the profiled data for the model training. The total elapsed time of each layer is decomposed into the preprocessing time (Tpre), the execution time (Texe), and the postprocessing time (Tpost), as mentioned in the previous section. In order to test the accuracy of our trained model, the collected samples are split into 80% of the samples as training datasets and 20% as testing datasets. 
5.3 DATA TRANSFORMATION Now, suppose we are given a training dataset D, which is comprising m observations and p features of X and written as D = {ti, xi1, xi2, ..., xip}mi=1, where t is a vector of observed values ti (i = 1, ...,m), and X could be seen as a matrix of row-vectors xi (i = 1, ...,m) or of mdimensional column-vectors Xj(j = 1, ..., p). The coefficients vector w keeps the weights of the model. The predicted value is denoted as y(x,w), for any given model of weights w and the dataset x. In order to improve the convergence efficiency and stability of the stochastic gradient descent (SGD) algorithm, the three types of data transformations are adopted in this work, including scalar multiplication, Z-scores transformation, and Box-Cox transformation. Scalar multiplication is used to provide fine-grained updates of the SGD procedure and scales each observed value ti. Z-scores transformation puts each data feature Xj from different sources into the same scale to eliminate the prejudicial bias of the features values. Box-Cox transformation converts the values of the features Xj to standard normal random variables, which would further improve the effectiveness of Z-scores transformation. Details of these data transformations are available in Appendixes B, C and D. 5.4 LOSS FUNCTION As the observed vector t is with the positive-skew distribution and often contains some noises contributed by the measurement errors, we fine-tune the loss function as mean absolute percentage logarithmic error (MAPLE) for the prediction model (Wang et al.), as shown in Equation 7. To deal with the situation of the skewed distribution, the logarithmic operations for the predicted values 1 + y(xi,w) and the observed values 1 + ti, and the division operation on the observed values in MAPLE are expected to enhance the accuracy of the small data, which occurs frequently. On the other head, the absolute value of MAPLE helps increase the resistance against outliers that may unexpectedly appear in the measured data. Moreover, to prevent over-fitting, L2 regularization is added to the loss function, where λ2 is a scaling factor for the regularization. En(w) = 1 n n∑ i=0 ∣∣∣∣ log(1 + y(xi,w))− log(1 + ti)log(1 + ti) ∣∣∣∣+ λ2‖w‖2 (7) 6 EVALUATION The layer-wise and model-wise performance results are evaluated to demonstrate the effectiveness of ResPerfNet in this section. In particular, we compare the layer-wise estimated execution time produced by ResPerfNet and the previous works to show that ResPerfNet is superior to other regression based approaches, such as polynomial regression, support vector regression and PerfNet. Three statistical metrics, including mean absolute percentage error (MAPE), root mean squared error (RMSE) and mean absolute error (MAE), are used to quantify the effectiveness for each tested performance modeling approach. In addition, to demonstrate the capability of ResPerfNet for the full model prediction, three popular CNNs are considered in the model-wise experiments, e.g., LeNet, AlexNet, and VGG16. Note that three data transformations mentioned in Section 5.3 are applied in ResPerfNet by default unless specified otherwise. The details of our experimental environments are listed in Appendix F. 6.1 LAYER-WISE EXECUTION TIME PREDICTION Table 1 compares the MAPEs of the execution time for the convolutional layers, estimated by ResPerfNet and the prior works. 
While appropriate parameter adjustments are applied to obtain parame- ters for better results, the MAPEs of polynomial regression, support vector regression, and XGBoost are over 29%, which means the error is quite large and indicates that the corresponding approaches are not capable of doing good performance prediction for the real applications. On the contrary, the DL-based approaches, PerfNet and ResPerfNet, give more accurate estimations, which have less than 15% of the MAPEs. In particular, ResPerfNet outperforms the other approaches and has 11.75% and 14.23% of the MAPEs for the TensorFlow and TensorRT models. The results suggest that ResPerfNet correctly associates the program characteristics to the performance model. To further look into the effectiveness of PerfNet and ResPerfNet and the impact of the Box-Cox transformation on the predicted results, Figure 2 plots the error curves of the TensorFlow convolutional layer using the PerfNet and ResPerfNet with and without performing the data transformation. Figure 2(a) shows that most of the MAPE of ResPerfNet on the testing dataset are below 15%, as depicted by the red/black solid lines. Notably, ResPerfNet applying the Box-Cox data transformation reaches the lowest prediction error (11.7%), 2% less than ResPerfNet without the data transforming. Similar trends can be observed in Figure 2(b) using the RMSE metric, in which the black solid line also shows the best performance. The results presented in Figure 2 show that ResPerfNet with BoxCox transformation has better convergence rate, given the same training epoch. The detailed training process is illustrated in Appendix E. Moreover, the R2 values for the predicted and measured execution time of the convolutional, pooling and dense layers are all above 0.97, which demonstrate high prediction quality of ResPerfNet, as illustrated in Figure 4 of Appendix I. The layer-wise performance results of the TensorFlow and TensorRT models delivered by ResPerfNet are listed in Table 2. Overall, the MAPE for all phases are under 16%, which removes the concern of over-fitting. For RMSE, the value of the TensorFlow version convolutional layer is 0.84ms. It is better than the 0.98ms reported by PerfNet (Wang et al.), and is also better than the 2.55ms produced by the method in (Justus et al. (2018)). Detailed predicted results of the three layers under different phases for TensorFlow (with 3 additional platforms) and for TensorRT are also presented in Appendixes G and H, respectively. From the tables, we can see that ResPerfNet has better predicted results for TensorFlow than TensorRT. That is because currently the TensorRTbased ResPerfNet is trained with less training data, as described in Section 5.2. We believe that the accuracy for TensorRT predictions can be further improved with sufficient data as TensorFlow. 6.2 MODEL-WISE EXECUTION TIME PREDICTION Figure 3 plots the inference time estimated by PerfNet and ResPerfNet for the three popular DNNs, including LeNet, AlexNet, and VGG16, using TensorFlow and TensorRT frameworks. Figure 3(a)(c) shows that ResPerfNet has more accurate estimation than PerfNet since the averaged MAPE of the three models is 8.4% for all tested batch sizes, while PerfNet has the averaged MAPE of 24.04%. Figure 3(d)-(f) illustrates the similar trend for TensorRT based DNNs. The averaged MAPE of these DNNs using ResPerfNet is 17%. The results show that our modeling and methodology are effective on the two popular frameworks. 
7 CONCLUSION
In this paper, we proposed a deep residual network architecture, ResPerfNet, to model the performance of neural networks on the target DLAs by considering the interactions between the host and the GPU and by decomposing a neural network operation into three phases. In addition, we apply ResPerfNet to predict the execution time of optimized models, such as TensorRT models, using the same performance characteristics as those used for unoptimized models. Our experimental results show that ResPerfNet is able to provide high-accuracy estimates on various DLAs, which helps facilitate the exploration of proper neural network architectures built with various DL frameworks.
A FEATURES OF TRAINING DATA
B SCALAR MULTIPLICATION
Scalar multiplication is applied to the observed vector t as in Equation 8 to magnify the observed values, since the original data are too small to provide accurate estimates. It is interesting to note that, in our experience, scalar multiplication is ineffective for some commonly used loss functions, such as the mean squared error (MSE); nevertheless, it works well with MAPLE, making the gradients converge smoothly without frequently adjusting the learning rate in each epoch.
scalar multiplication: t = t × scaler   (8)
C Z-SCORES TRANSFORMATION
The Z-scores transformation is performed on each m-dimensional column vector X_j as in Equation 9, where X̄_j is the mean of the column vector X_j and σ_j is its standard deviation. The Z-scores transformation rescales the values of each feature so that it has zero mean and unit standard deviation, which puts all features on a comparable scale and is useful for gradient descent algorithms.
Z-scores transformation: X_j = (X_j − X̄_j) / σ_j   (9)
D BOX-COX TRANSFORM
The Box-Cox transformation transforms the input features X_j toward a normal distribution for the best model accuracy. It is shown in Equation 10, where λ1 is the best-fitting parameter for the selected features. In our experiments, the Box-Cox transformation is applied to Matrix Size and Kernel Size (see Table 3) for the convolutional layer data, and to Matrix Size for the pooling layer data.
Box-Cox transformation: X_j^(λ1) = (X_j^λ1 − 1) / λ1 if λ1 ≠ 0; ln X_j if λ1 = 0   (10)
E THE PROPOSED GRADIENT DESCENT ALGORITHM
Algorithm 1 is the pseudo-code of our proposed algorithm to train each phase of the layers. The required parameters are defined as follows: optimizer: the algorithm used to update the attributes of the neural network; lr_scheduler: sets the learning rate of each parameter group to the initial lr times a given function; total_epochs: total number of training epochs; lr: learning rate; bs: maximum batch size for each epoch; η: period of learning rate decay; γ: multiplicative factor of learning rate decay; and λ2: multiplicative factor for the weight penalty.
Algorithm 1 The stochastic gradient descent algorithm proposed for ResPerfNet, where our default settings for the DL regression problems are optimizer = Adam, total_epochs = 200, lr = 0.1, bs = 128, η = 40, γ = 0.5, λ2 = 0.1, and scaler = 10.
Require: α: multiplicative factor for the weights, n: current batch size, τ: current iteration.
1: t ← scalar_multiplication(t, scaler)   ▷ Update t by Equation 8.
2: x ← Z-scores(Box-Cox(x))   ▷ Update x by Equations 10 and 9.
3: for e in total_epochs do
4:   lr ← lr_scheduler(lr, e, η, γ)   ▷ Update the learning rate by the scheduler.
5:   for b in (m/bs + 1) do
6:     α ← optimizer(lr)   ▷ Update the weight factor by the optimizer.
7:     n ← x[b ∗ bs : min((b + 1) ∗ bs, m)]   ▷ Select the current mini-batch.
8:     ∇En ← gradient of En at wτ   ▷ En is computed by Equation 7.
9:     wτ+1 ← wτ − α∇En(wτ)
10:    τ ← τ + 1
11:  end for
12: end for
F EXPERIMENTAL SETUP
The experiments are done on Intel i7 processors with a variety of hardware accelerators, listed in Table 4. TensorFlow 1.13.1 and TensorRT 5.0.2.6 with Python 3.6 are used to build the DL models, running on Ubuntu 18.04.4 LTS (kernel version 5.4.0-42-generic).
G LAYER-WISE EXECUTION TIME PREDICTION FOR TENSORFLOW
H LAYER-WISE EXECUTION TIME PREDICTION FOR TENSORRT
I PREDICTED VS. MEASURED TIME (TENSORFLOW)
J MODEL-WISE EXECUTION TIME PREDICTION FOR TENSORFLOW
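To make Appendixes B-D and the loss used in Algorithm 1 (Equation 7) concrete, the following is a minimal NumPy/SciPy sketch of the three data transformations and the MAPLE loss. The function names are ours; only the scaler and λ2 defaults stated above are taken from the text.

```python
import numpy as np
from scipy import stats

def scalar_multiplication(t, scaler=10.0):
    """Appendix B (Eq. 8): magnify the observed execution times."""
    return np.asarray(t, dtype=float) * scaler

def z_scores(X):
    """Appendix C (Eq. 9): standardize each feature column to zero mean, unit std."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

def box_cox(x):
    """Appendix D (Eq. 10): Box-Cox transform of one strictly positive feature column.
    Returns the transformed column and the fitted lambda_1."""
    transformed, lam = stats.boxcox(np.asarray(x, dtype=float))
    return transformed, lam

def maple_loss(y_pred, t, w=None, lam2=0.1):
    """Eq. 7: mean absolute percentage logarithmic error with L2 regularization."""
    y_pred, t = np.asarray(y_pred, float), np.asarray(t, float)
    err = np.mean(np.abs((np.log1p(y_pred) - np.log1p(t)) / np.log1p(t)))
    if w is not None:
        err += lam2 * np.sum(np.square(w))
    return err
```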
1. What is the focus of the paper regarding predicting model execution time? 2. What are the strengths of the proposed approach, particularly in terms of conducting extensive experiments? 3. What are the weaknesses of the paper, especially regarding the idea and motivation behind the proposed method? 4. How does the reviewer assess the significance and applicability of the proposed ResPerfNet in the context of Neural Architecture Search (NAS)? 5. Are there any concerns or suggestions regarding the experimental results and their support for the paper's claims?
Review
Review
Summary: The authors design a specific ResNet for predicting the model execution time on different platforms.
Pros:
- The paper conducts extensive experiments and, in particular, collects a large-scale dataset for measuring different architectures, which can be helpful for further works if it can be released publicly.
Cons:
- The idea is not novel. The main idea is to utilize a ResNet to perform regression on network latency data, which can only be considered a normal application of ResNet.
- The motivation is questionable. In my opinion, making model execution time prediction more accurate should not be the ultimate end. The proposed ResPerfNet should be applied in the network evaluation and search stages of the Neural Architecture Search (NAS) area, to validate that a more accurate model performance predictor is helpful for architecture search. But I didn't see any supporting experimental results in this paper.
ICLR
Title
ResPerfNet: Deep Residual Learning for Regressional Performance Modeling of Deep Neural Networks
Abstract
The rapid advancements of computing technology facilitate the development of diverse deep learning applications. Unfortunately, the efficiency of parallel computing infrastructures varies widely with neural network models, which hinders the exploration of the design space to find high-performance neural network architectures on specific computing platforms for a given application. To address such a challenge, we propose a deep learning-based method, ResPerfNet, which trains a residual neural network with representative datasets obtained on the target platform to predict the performance for a deep neural network. Our experimental results show that ResPerfNet can accurately predict the execution time of individual neural network layers and full network models on a variety of platforms. In particular, ResPerfNet achieves 8.4% mean absolute percentage error for LeNet, AlexNet and VGG16 on the NVIDIA GTX 1080Ti, which is substantially lower than in previously published works.
1 INTRODUCTION
Deep learning (DL) has advanced rapidly and is applied in many application domains, such as image recognition and object detection. Consequently, human experts have designed many high-accuracy neural network architectures for different applications. However, for Internet of Things (IoT) applications, large neural network models cannot fit into resource-constrained devices. On the other hand, a system designer often tries to find a proper computing platform or a deep learning accelerator (DLA) to execute a DL application with acceptable responsiveness. An exhaustive way to optimize the system design is to evaluate the cost and performance of the desired DL models on all the available hardware/software options, but this is not only tedious but also costly and time-consuming in practice. Since DL frameworks and accelerators are evolving rapidly, and even slight changes could significantly impact the performance of DL applications, it may be necessary to update the performance models frequently. Therefore, we need a systematic and efficient approach to produce accurate performance models when changes occur. While several works (Qi et al.; Justus et al. (2018); Wang et al.) have been proposed to estimate the delivered performance of a given DL model on a specific computing platform, so as to rapidly evaluate design alternatives, the estimates from these efforts are not very accurate. For example, the mean absolute percentage error (MAPE) for estimating full neural network models such as LeNet (LeCun et al. (1998)), AlexNet (Krizhevsky et al. (2012)) and VGG16 (Simonyan & Zisserman) on the NVIDIA GTX 1080Ti is as high as 24% in Wang et al., whose accuracy is the best among the previous works but still leaves room for improvement. In this paper, we propose a deep residual network architecture, called ResPerfNet, to efficiently and accurately model the performance of DL models running on a wide range of DL frameworks and DLAs. It is based on the residual learning approach proposed by He et al. (2016) and inspired by prior works (Liu & Yang (2018); Jha et al. (2019); Wan et al. (2019)) that use residual neural networks to solve regression problems. The proposed model can be trained with performance data collected from many system configurations to establish a unified performance predictor, which assists the users in selecting the DL model, the DL framework, and the DLA for their applications.
Extensive experiments have been done to show that our unified approach not only provides more accurate performance estimates than the previous works, but also enables users to quickly predict the performance of their DL applications executed with various model-framework-accelerator configurations. The contributions of this paper are summarized as follows.
• A unified DL-based approach for estimating the computing performance of DL applications on a variety of model-framework-accelerator configurations, which enables users to explore the hardware/software design space quickly.
• A novel deep residual neural architecture that delivers the most accurate performance predictions that we are aware of. Experimental results confirm that our approach yields lower prediction errors across various platforms.
The remainder of this paper is organized as follows. Section 2 presents the related work. Section 3 describes the architecture of ResPerfNet. Section 4 shows the proposed systematic modeling method. Section 5 elaborates the dataset and training mechanism used to train the ResPerfNet models within a reasonable time span. Section 6 evaluates the efficiency of our approach. Section 7 concludes the paper.
2 BACKGROUND AND RELATED WORK
With the rapid evolution of both hardware accelerators and DL models, measuring or estimating the performance of DL models on DLA platforms is an important task for evaluating the effectiveness of software/hardware solutions to a given problem. Different approaches have been proposed to serve this purpose. Benchmarking approaches, such as DAWNbench (Coleman et al. (2017)) and MLPerf (Reddi et al. (2020)), aim at measuring the training and inference performance of machine-learning (ML) models on certain software/hardware combinations. By offering a set of standardized machine learning workloads and instructions for performance benchmarking, these benchmarks are able to measure how fast a system can perform the training and inference for ML models. The analytical approach, as reported in PALEO (Qi et al.), constructs an analytical performance model for DL systems. The execution time is decomposed into the total time for the computation and communication parts, which are derived from the utilization of the computing and communication resources on the target hardware, respectively. For instance, the computation time is estimated by dividing the total floating-point operations required by the DL model by the actual processing speed (i.e., the processed floating-point operations per second for the DL model) delivered by the computing hardware. The communication time is calculated by a similar approach. This approach relies heavily on the accuracy of the benchmarking results (i.e., to provide the actual processing speed of the target model on the hardware), which requires its users to choose the benchmarks wisely to perfectly match the program characteristics of their target deep learning models, so as to give a proper estimate of the actual processing speed. However, the manual benchmark-selection process limits its widespread adoption. DL-based approaches build DNNs for estimating the performance of DL models by learning the relationships between the characteristics of the DL models and the specifications of the accelerating hardware. The following works focus on TensorFlow-based DL models. Justus et al.
(2018) use a fully-connected multi-layer perceptron (MLP) network for performance prediction, taking the configurations of the DL model, the specification of the hardware accelerator, and the training data of the DL model as the input features of the MLP network. However, due to the simplified communication time estimation model, in which the communications from GPU to CPU are counted repeatedly for each DL layer when estimating the communication time, their model tends to provide over-estimated results. Wang et al. use PerfNet (an MLP network) to learn the relationships between the configurations and the execution time of the target DL model. They further decompose the execution of a DL model into three phases, preprocessing, execution, and postprocessing, and train multiple PerfNet network instances, each of which learns the relationships between the model configurations and the model execution time for a specific phase. By aggregating the prediction results for the three phases, their proposed work is able to predict the total execution time of a given DL model. Nevertheless, the MLP network has its own limitation: it is hard to further enhance its performance, since a deeper MLP network leads to lower prediction accuracy. In consideration of the limitations of the prior works listed above and the need to model optimizing DL frameworks, our work uses a systematic approach to characterize DL models built with various DL frameworks, and adopts a residual neural network to model their delivered performance on the DLAs.
3 RESPERFNET ARCHITECTURE
ResPerfNet adopts an ML-based approach for the performance estimation of different types of neural network layers. Furthermore, ResPerfNet is specially designed to prevent the degradation problem, which refers to the phenomenon that increasing the depth and/or the width of each layer of a DNN does not necessarily improve the accuracy; instead, the accuracy saturates rapidly and then degrades sharply, as reported in He & Sun (2015) and Srivastava et al. (2015). In other words, a wider or deeper architecture is likely to lead to a higher training error. To solve this problem, deep residual learning was proposed and applied to each group of stacked NN layers (He et al. (2016)), where a certain number of stacked layers are logically grouped together to form a residual block. Hence, in this work, to address the degradation problem, we adopt deep residual learning for every few stacked layers (He et al. (2016)). The residual block is defined as Equation 1, where x and y represent the input feature maps and the output vectors of the residual layer, respectively. The function F(x, {Wi}) performs the residual operations to be learned. The operation F(x, {Wi}) + x is performed by a shortcut connection and element-wise addition. Figure 1 illustrates the network architecture of ResPerfNet. The second, third and fourth layers (i.e., two convolutional layers and one add layer) together form a residual block, and there are a total of six residual blocks in ResPerfNet.
y = F(x, {Wi}) + x   (1)
As shown in Figure 1, ResPerfNet consists of 26 layers, including 15 convolutional layers, 6 add layers, 4 fully-connected (FC) layers and 1 dropout layer.
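A minimal Keras-style sketch of one residual block from Equation 1 is given below: two stacked Conv1D layers whose output is added element-wise to the block input through a shortcut connection. The activation and padding choices are our assumptions; this illustrates the building block only, not the released ResPerfNet model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, kernel_size=3):
    """One residual block: y = F(x, {Wi}) + x (Eq. 1).
    Assumes x already has `filters` channels, e.g., after a head Conv1D layer."""
    # F(x): two stacked Conv1D layers with the same number of filters.
    f = layers.Conv1D(filters, kernel_size, strides=1, padding="same",
                      activation="relu")(x)
    f = layers.Conv1D(filters, kernel_size, strides=1, padding="same")(f)
    # Shortcut connection and element-wise addition (the "add" layer in Figure 1).
    return layers.Add()([f, x])

# Usage (illustrative): a head Conv1D layer followed by one residual block.
inputs = tf.keras.Input(shape=(32, 1))
h = layers.Conv1D(128, 3, strides=1, padding="same", activation="relu")(inputs)
h = residual_block(h, filters=128)
```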
Before the FC layers, every 7 layers consist of one head convolutional layer (e.g., Conv1D 3 is the head convolutional layer for the first residual block) and two residual blocks, each of which consists of two convolutional layers with the same number of filters and an element-wise add layer as the residual function. The first head convolutional layer has 128 filters of kernel size 3 with a stride length of 1. In order to reduce the complexity of ResPerfNet, the second head convolutional layer uses 64 filters of kernel size 3 with a stride length of 1. Moreover, the number of filters for the six residual blocks decreases from 128 filters in the first two blocks to 32 filters for the last two blocks. Three FC layers are attached to the last residual block, where each of the FC layers has 128 neurons. The dropout layer with a ratio of 0.2 is connected to the last FC layer, which uses a single neuron to perform the one-dimensional regression for predicting the elapsed time of the designated type of layer. Our proposed residual neural architecture, ResPerfNet, achieves significant improvements in accuracy compared with traditional machine learning algorithms, such as support vector regression, polynomial regression and XGBoost, and is even better than the MLP network. A series of experiments in Section 6.1 shows that ResPerfNet is superior to the previous works.
4 METHODOLOGY
This section presents the methodology of using ResPerfNet to relate the performance characteristics of a CNN layer to the delivered performance of the given layer. We first define the target neural networks for the performance modeling in Section 4.1. The three-phase modeling of a given CNN is presented in Section 4.2. Lastly, the same modeling for a given NN layer is further described in Section 4.3.
4.1 FORMALIZING THE NEURAL NETWORKS
A neural network can be represented by a directed acyclic graph, denoted as N({u^(i)}_{i=1}^{k}), consisting of an ordered sequence of k nodes, where each graph node u^(i) represents a layer of the neural network N, such as a convolutional, pooling, or fully-connected layer. The input and output feature maps of a graph node u^(i) performing the operation f^(i) are denoted as input(f^(i)) and output(f^(i)), respectively. In this work, we assume that a given neural network will be run on the host system h with a single hardware accelerating device d.
4.2 THE THREE-PHASE PERFORMANCE MODELING
The execution time of a given neural network model includes the computation time spent on the acceleration device d and the data communication time between the host system h and the device d. As most of the computations are performed by the accelerating device and the communications occur merely at the first and the last layers of the given model, the estimated execution time of a given neural network model with k layers is formulated as follows, where the formulation assumes that all k layers within the given model are accelerated by the single device d:
T(N) = Tpre(u^(1)) + Σ_{i=1}^{k} Texe(u^(i)) + Tpost(u^(k))   (2)
The above equation expresses the three-phase performance modeling approach, where Tpre, Texe, and Tpost represent the execution time for the preprocess, execution, and postprocess phases, respectively. Specifically, the communication time of bringing the input data from the host system to the accelerating device at the first layer is denoted as Tpre(u^(1)), where the i-th NN layer is represented as u^(i).
The summation of the execution times of all the NN layers is represented as Σ_{i=1}^{k} Texe(u^(i)). The communication time of transferring the inference results from the accelerating device back to the host system is defined as Tpost(u^(k)). Our prediction model delivers more accurate performance estimates than previously proposed methods by modeling these three phases (defined in the following subsection) for a DLA separately and adding the predicted results together as in Equation 2.
4.3 MODELING INDIVIDUAL NN LAYERS
A similar approach is used to model the performance of the i-th NN layer u^(i). In particular, for each layer u^(i), the execution times for the preprocess, execution, and postprocess phases are Tpre(u^(i)), Texe(u^(i)), and Tpost(u^(i)), respectively. These time components constitute the estimated execution time of the layer u^(i), as defined in the equation below. The superscript index i is omitted to simplify the equations, using the simpler form u.
T(u) = Tpre(u) + Texe(u) + Tpost(u)   (3)
The preprocess phase prepares the input data for the acceleration on d and involves four operations: 1) issuing the commands for copying input feature maps on h and d asynchronously, 2) performing the memory copy of the input feature maps in 1), 3) issuing the commands for the operation f on d, and 4) performing the data reshaping operations for the input feature maps. The data reshaping operations, which transform the input/output data into a more efficient format for the next operation on d, usually occur in data transmissions between h and d. The lengths of time for the four operations are R(input(f), h, d), M(input(f), h, d), R(f, d), and T(input(f), d), respectively. As shown in Equation 4, the time consumed in the preprocess phase is defined as the summation of the time required by the above four operations.
Tpre(u) = R(input(f), h, d) + M(input(f), h, d) + R(f, d) + T(input(f), d)   (4)
Intuitively, the time consumed for computation in the execution phase, C(f, d), would be identical to the computation time of f on d. Unfortunately, the measured execution time of a layer from the micro-benchmarks includes the time consumed by the data reshaping operations in both directions, from h to d and from d to h, which are T(input(f), d) and T(output(f), d), respectively. As the deployed NN layers collectively run on the acceleration device d, isolating the data reshaping time from the measured execution time of the NN layer in each micro-benchmark facilitates the execution time estimation of the deployed NN layers with the formula Σ_{i=1}^{k} Texe(u^(i)). To account for this, the time for the execution phase is defined in Equation 5.
Texe(u) = C(f, d) − T(input(f), d) − T(output(f), d)   (5)
The postprocess phase deals with the procedure of returning the inference results back to the invoking application on the host system. That is, it covers reshaping the output vector into the format accepted by h, copying the output vector back to h from d, and moving the prediction result to the application level (i.e., the call site of the model inference) on the host system. The corresponding execution times for the above three operations are denoted as T(output(f), d), M(output(f), d, h), and V(output(f), h), respectively.
Tpost(u) = T(output(f), d) + M(output(f), d, h) + V(output(f), h)   (6)
5 TRAINING DATA AND LOSS FUNCTION
In this section, we present the details of the dataset used to build the proposed performance prediction models.
In particular, the configurations used by our benchmark tools to produce the training dataset are discussed in Section 5.1. The tools for collecting and extracting the data are described in Section 5.2, and the data transformation techniques that facilitate training convergence are introduced in Section 5.3. The specially designed loss function that better deals with the unbalanced training data is introduced in Section 5.4.
5.1 DATA PREPARATION
The training data consist of the characteristics of TensorFlow and TensorRT programs and the performance information of these programs running on the target computing hardware; the proposed model correlates the characteristics with their runtimes during the training process. In order to better capture the characteristics of different TensorFlow and TensorRT configurations (i.e., the code patterns, which are considered as the features during the model training process), we have developed a benchmark tool to generate a set of micro-benchmarks, which are actually TensorFlow and TensorRT programs with different configurations for the three types of layers, including convolutional, pooling and dense layers. The micro-benchmarks are generated by randomly selecting the configurations for each type of layer, so as to collect the performance for different configurations. The possible configurations (or features) for the three layer types and their ranges are listed in Table 3. These configurations are actually the function parameters of the three types of layers, which are extracted from the TensorFlow 1.13 APIs, including tensorflow.layers.conv2d, tensorflow.layers.maxpooling2d, and tensorflow.layers.dense, and their possible combinations number 7.33 × 10^14, 7.33 × 10^10, and 2.14 × 10^9, respectively. While each micro-benchmark takes at least several seconds to obtain stable and accurate measurements, it is impossible to cover the entire design space by brute force, which would require over 10^14 micro-benchmark runs.
5.2 DATA COLLECTION AND DATA EXTRACTION
The data preparation step is used to generate the TensorFlow- and TensorRT-based micro-benchmarks. The data collection takes about two weeks to run 100,000 different samples of the TensorFlow micro-benchmarks on the DLAs and collect the performance data. On the other hand, for the TensorRT micro-benchmarks, more than two weeks were spent optimizing and profiling the 25,000 different configurations of the TensorRT programs. It is interesting to note that the TensorRT experiments generate large optimized intermediate files, especially for the dense layer, which requires more than 5 TB of storage space to keep its parameters. Due to the disk space limitation, we select 16,000 out of the 25,000 samples to run and profile their performance. For data extraction, our data processing tool filters out the outliers (data with extreme values) before feeding the profiled data to the model training. The total elapsed time of each layer is decomposed into the preprocessing time (Tpre), the execution time (Texe), and the postprocessing time (Tpost), as mentioned in the previous section. In order to test the accuracy of our trained model, the collected samples are split into 80% for training and 20% for testing.
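To illustrate how such micro-benchmark configurations could be drawn, the sketch below randomly samples convolutional-layer settings. The parameter names and value ranges are placeholders standing in for Table 3, which is not reproduced here, so the real design space differs.

```python
import random

# Hypothetical ranges standing in for Table 3; the actual ranges differ.
CONV_SPACE = {
    "batch_size":   range(1, 65),
    "matrix_size":  range(1, 513),   # input feature-map height/width
    "kernel_size":  range(1, 8),
    "channels_in":  range(1, 1025),
    "channels_out": range(1, 1025),
    "strides":      range(1, 5),
}

def sample_conv_config(rng=random):
    """Randomly pick one convolutional-layer configuration for a micro-benchmark."""
    return {name: rng.choice(list(space)) for name, space in CONV_SPACE.items()}

if __name__ == "__main__":
    for _ in range(3):
        print(sample_conv_config())
```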
5.3 DATA TRANSFORMATION
Suppose we are given a training dataset D comprising m observations with p features, written as D = {t_i, x_{i1}, x_{i2}, ..., x_{ip}}_{i=1}^{m}, where t is a vector of observed values t_i (i = 1, ..., m), and X can be seen as a matrix of row vectors x_i (i = 1, ..., m) or of m-dimensional column vectors X_j (j = 1, ..., p). The coefficient vector w holds the weights of the model. The predicted value is denoted as y(x, w) for any given model with weights w and input x. In order to improve the convergence efficiency and stability of the stochastic gradient descent (SGD) algorithm, three types of data transformations are adopted in this work: scalar multiplication, the Z-scores transformation, and the Box-Cox transformation. Scalar multiplication is used to provide fine-grained updates in the SGD procedure and scales each observed value t_i. The Z-scores transformation puts each data feature X_j from different sources onto the same scale to eliminate the prejudicial bias of the feature values. The Box-Cox transformation converts the values of the features X_j toward normally distributed variables, which further improves the effectiveness of the Z-scores transformation. Details of these data transformations are available in Appendixes B, C and D.
5.4 LOSS FUNCTION
As the observed vector t has a positively skewed distribution and often contains noise contributed by measurement errors, we adopt the mean absolute percentage logarithmic error (MAPLE) as the loss function for the prediction model (Wang et al.), as shown in Equation 7. To deal with the skewed distribution, the logarithmic operations on the predicted values 1 + y(x_i, w) and the observed values 1 + t_i, together with the division by the observed term in MAPLE, are expected to enhance the accuracy for small values, which occur frequently. On the other hand, the absolute value in MAPLE helps increase the resistance against outliers that may unexpectedly appear in the measured data. Moreover, to prevent over-fitting, L2 regularization is added to the loss function, where λ2 is a scaling factor for the regularization.
E_n(w) = (1/n) Σ_{i=0}^{n} | (log(1 + y(x_i, w)) − log(1 + t_i)) / log(1 + t_i) | + λ2 ||w||^2   (7)
6 EVALUATION
The layer-wise and model-wise performance results are evaluated in this section to demonstrate the effectiveness of ResPerfNet. In particular, we compare the layer-wise estimated execution times produced by ResPerfNet and the previous works to show that ResPerfNet is superior to other regression-based approaches, such as polynomial regression, support vector regression and PerfNet. Three statistical metrics, including the mean absolute percentage error (MAPE), the root mean squared error (RMSE) and the mean absolute error (MAE), are used to quantify the effectiveness of each tested performance modeling approach. In addition, to demonstrate the capability of ResPerfNet for full-model prediction, three popular CNNs are considered in the model-wise experiments, i.e., LeNet, AlexNet, and VGG16. Note that the three data transformations mentioned in Section 5.3 are applied in ResPerfNet by default unless specified otherwise. The details of our experimental environments are listed in Appendix F.
6.1 LAYER-WISE EXECUTION TIME PREDICTION
Table 1 compares the MAPEs of the execution time for the convolutional layers, estimated by ResPerfNet and the prior works.
While appropriate parameter adjustments are applied to obtain parameters for better results, the MAPEs of polynomial regression, support vector regression, and XGBoost are over 29%, which means the error is quite large and indicates that these approaches are not capable of accurate performance prediction for real applications. On the contrary, the DL-based approaches, PerfNet and ResPerfNet, give more accurate estimates, with MAPEs below 15%. In particular, ResPerfNet outperforms the other approaches, with MAPEs of 11.75% and 14.23% for the TensorFlow and TensorRT models, respectively. The results suggest that ResPerfNet correctly associates the program characteristics with the performance model.
To further examine the effectiveness of PerfNet and ResPerfNet and the impact of the Box-Cox transformation on the predicted results, Figure 2 plots the error curves of the TensorFlow convolutional layer using PerfNet and ResPerfNet with and without the data transformation. Figure 2(a) shows that most of the MAPEs of ResPerfNet on the testing dataset are below 15%, as depicted by the red/black solid lines. Notably, ResPerfNet with the Box-Cox data transformation reaches the lowest prediction error (11.7%), about 2% lower than ResPerfNet without the transformation. Similar trends can be observed in Figure 2(b) using the RMSE metric, in which the black solid line also shows the best performance. The results presented in Figure 2 show that ResPerfNet with the Box-Cox transformation converges faster for the same number of training epochs. The detailed training process is illustrated in Appendix E. Moreover, the R^2 values for the predicted and measured execution times of the convolutional, pooling and dense layers are all above 0.97, which demonstrates the high prediction quality of ResPerfNet, as illustrated in Figure 4 of Appendix I.
The layer-wise performance results of the TensorFlow and TensorRT models delivered by ResPerfNet are listed in Table 2. Overall, the MAPEs for all phases are under 16%, which alleviates the concern of over-fitting. For RMSE, the value for the TensorFlow convolutional layer is 0.84 ms, which is better than the 0.98 ms reported by PerfNet (Wang et al.) and the 2.55 ms produced by the method of Justus et al. (2018). Detailed prediction results of the three layers under different phases for TensorFlow (with 3 additional platforms) and for TensorRT are presented in Appendixes G and H, respectively. From the tables, we can see that ResPerfNet produces better predictions for TensorFlow than for TensorRT. That is because the TensorRT-based ResPerfNet is currently trained with less training data, as described in Section 5.2. We believe that the accuracy of the TensorRT predictions can be further improved given as much training data as is available for TensorFlow.
6.2 MODEL-WISE EXECUTION TIME PREDICTION
Figure 3 plots the inference time estimated by PerfNet and ResPerfNet for the three popular DNNs, including LeNet, AlexNet, and VGG16, using the TensorFlow and TensorRT frameworks. Figure 3(a)-(c) shows that ResPerfNet gives more accurate estimates than PerfNet: the average MAPE of the three models is 8.4% over all tested batch sizes, while PerfNet has an average MAPE of 24.04%. Figure 3(d)-(f) illustrates a similar trend for the TensorRT-based DNNs, for which the average MAPE using ResPerfNet is 17%. The results show that our modeling and methodology are effective on the two popular frameworks.
7 CONCLUSION
In this paper, we proposed a deep residual network architecture, ResPerfNet, to model the performance of neural networks on the target DLAs by considering the interactions between the host and the GPU and by decomposing a neural network operation into three phases. In addition, we apply ResPerfNet to predict the execution time of optimized models, such as TensorRT models, using the same performance characteristics as those used for unoptimized models. Our experimental results show that ResPerfNet is able to provide high-accuracy estimates on various DLAs, which helps facilitate the exploration of proper neural network architectures built with various DL frameworks.
A FEATURES OF TRAINING DATA
B SCALAR MULTIPLICATION
Scalar multiplication is applied to the observed vector t as in Equation 8 to magnify the observed values, since the original data are too small to provide accurate estimates. It is interesting to note that, in our experience, scalar multiplication is ineffective for some commonly used loss functions, such as the mean squared error (MSE); nevertheless, it works well with MAPLE, making the gradients converge smoothly without frequently adjusting the learning rate in each epoch.
scalar multiplication: t = t × scaler   (8)
C Z-SCORES TRANSFORMATION
The Z-scores transformation is performed on each m-dimensional column vector X_j as in Equation 9, where X̄_j is the mean of the column vector X_j and σ_j is its standard deviation. The Z-scores transformation rescales the values of each feature so that it has zero mean and unit standard deviation, which puts all features on a comparable scale and is useful for gradient descent algorithms.
Z-scores transformation: X_j = (X_j − X̄_j) / σ_j   (9)
D BOX-COX TRANSFORM
The Box-Cox transformation transforms the input features X_j toward a normal distribution for the best model accuracy. It is shown in Equation 10, where λ1 is the best-fitting parameter for the selected features. In our experiments, the Box-Cox transformation is applied to Matrix Size and Kernel Size (see Table 3) for the convolutional layer data, and to Matrix Size for the pooling layer data.
Box-Cox transformation: X_j^(λ1) = (X_j^λ1 − 1) / λ1 if λ1 ≠ 0; ln X_j if λ1 = 0   (10)
E THE PROPOSED GRADIENT DESCENT ALGORITHM
Algorithm 1 is the pseudo-code of our proposed algorithm to train each phase of the layers. The required parameters are defined as follows: optimizer: the algorithm used to update the attributes of the neural network; lr_scheduler: sets the learning rate of each parameter group to the initial lr times a given function; total_epochs: total number of training epochs; lr: learning rate; bs: maximum batch size for each epoch; η: period of learning rate decay; γ: multiplicative factor of learning rate decay; and λ2: multiplicative factor for the weight penalty.
Algorithm 1 The stochastic gradient descent algorithm proposed for ResPerfNet, where our default settings for the DL regression problems are optimizer = Adam, total_epochs = 200, lr = 0.1, bs = 128, η = 40, γ = 0.5, λ2 = 0.1, and scaler = 10.
Require: α: multiplicative factor for the weights, n: current batch size, τ: current iteration.
1: t ← scalar_multiplication(t, scaler)   ▷ Update t by Equation 8.
2: x ← Z-scores(Box-Cox(x))   ▷ Update x by Equations 10 and 9.
3: for e in total_epochs do
4:   lr ← lr_scheduler(lr, e, η, γ)   ▷ Update the learning rate by the scheduler.
5:   for b in (m/bs + 1) do
6:     α ← optimizer(lr)   ▷ Update the weight factor by the optimizer.
7:     n ← x[b ∗ bs : min((b + 1) ∗ bs, m)]   ▷ Select the current mini-batch.
8:     ∇En ← gradient of En at wτ   ▷ En is computed by Equation 7.
9:     wτ+1 ← wτ − α∇En(wτ)
10:    τ ← τ + 1
11:  end for
12: end for
F EXPERIMENTAL SETUP
The experiments are done on Intel i7 processors with a variety of hardware accelerators, listed in Table 4. TensorFlow 1.13.1 and TensorRT 5.0.2.6 with Python 3.6 are used to build the DL models, running on Ubuntu 18.04.4 LTS (kernel version 5.4.0-42-generic).
G LAYER-WISE EXECUTION TIME PREDICTION FOR TENSORFLOW
H LAYER-WISE EXECUTION TIME PREDICTION FOR TENSORRT
I PREDICTED VS. MEASURED TIME (TENSORFLOW)
J MODEL-WISE EXECUTION TIME PREDICTION FOR TENSORFLOW
1. What is the focus of the paper regarding using a residual-based network? 2. What are the strengths and weaknesses of the proposed approach? 3. How does the reviewer assess the novelty and technical contributions of the paper? 4. What are the concerns regarding the efficiency and generalizability of the method? 5. Are there any suggestions for improving the method by considering additional operations?
Review
Review
Topic: Using a residual-based network to predict the performance of another DL-based network.
Contributions:
- Using a residual-based network for estimating the computing performance of DL applications on a variety of model-framework-accelerator configurations, which enables the users to explore the hardware/software design space.
- Using three-phase performance modeling to estimate computation time.
Weaknesses:
1. The main problem is the lack of novelty and technical contributions. Using a DL-based model to predict the performance is not novel. Many NAS methods use the same trick to estimate the performance of an architecture ahead of running it directly on the hardware.
2. Using a DL-based regression is better than normal regression when the sample size is large. The experimental results are not surprising.
3. The Box-Cox transformation is a common technique in regression. That is not new.
4. The method needs to prepare the dataset using a lot of samples, i.e., 100,000, which is computationally heavy, and it requires huge storage space to keep the samples for even one platform. Thus the whole method is not efficient, and it is hard to generalize it to other hardware/platforms.
5. Besides, the method only considers some common operations such as conv, pooling, and FC layers. However, more operations should be considered, for example ROI Pooling, NMS, and Spatial-to-depth. Those operations are also commonly used in tasks such as detection and SR.
ICLR
Title Memory-Driven Text-to-Image Generation Abstract We introduce a memory-driven semi-parametric approach to text-to-image generation, which is based on both parametric and non-parametric techniques. The non-parametric component is a memory bank of image features constructed from a training set of images. The parametric component is a generative adversarial network. Given a new text description at inference time, the memory bank is used to selectively retrieve image features that are provided as basic information of target images, which enables the generator to produce realistic synthetic results. We also incorporate the content information into the discriminator, together with semantic features, allowing the discriminator to make a more reliable prediction. Experimental results demonstrate that the proposed memory-driven semi-parametric approach produces more realistic images than purely parametric approaches, in terms of both visual fidelity and text-image semantic consistency. 1 INTRODUCTION How to effectively produce realistic images from given natural language descriptions with semantic alignment has drawn much attention, because of its tremendous potential applications in art, design, and video games, to name a few. Recently, with the vast development of generative adversarial networks (Goodfellow et al., 2014; Gauthier, 2015; Mirza & Osindero, 2014) in realistic image generation, text-to-image generation has made much progress, where the progress has been mainly driven by parametric models — deep networks use their weights to represent all data concerning realistic appearance (Zhang et al., 2017; 2018; Xu et al., 2018; Li et al., 2019a; Qiao et al., 2019b; Zhu et al., 2019; Hinz et al., 2019; Cheng et al., 2020; Qiao et al., 2019a). Although these approaches can produce realistic results on well-structured datasets, containing a specific class of objects at the image center with fine-grained descriptions, such as birds (Wah et al., 2011) and flowers (Nilsback & Zisserman, 2008), there is still much room to improve. Besides, they usually fail on more complex datasets, which contain multiple objects with diverse backgrounds, e.g., COCO (Lin et al., 2014). This is likely because, for COCO, the generation process involves a large variety in objects (e.g., pose, shape, and location), backgrounds, and scenery settings. Thus, it is much easier for these approaches to only produce text-semantic-matched appearances instead of capturing difficult geometric structure. As shown in Fig. 1, current approaches are only capable of producing required appearances semantically matching the given descriptions (e.g., white and black stripes for zebra), but objects are unrealistic with distorted shape. Furthermore, these approaches are in contrast to earlier works on image synthesis, which were based on non-parametric techniques that could make use of large datasets of images at inference time (Chen et al., 2009; Hays & Efros, 2007; Isola & Liu, 2013; Zhu et al., 2015; Lalonde et al., 2007). Although parametric approaches can enable the benefits of end-to-end training of highly expressive models, they lose a strength of earlier non-parametric techniques, as they fail to make use of large datasets of images at inference time. In this paper, we introduce a memory-driven semi-parametric approach to text-to-image generation, where the approach takes the advantage of both parametric and non-parametric techniques. 
The non-parametric component is a memory bank of disentangled image features constructed from a training set of real images. The parametric component is a generative adversarial network. Given a novel text description at inference time, the memory bank is used to selectively retrieve compatible image features that are provided as basic information, allowing the generator to directly draw clues of target images, and thus to produce realistic synthetic results. Besides, to further improve the differentiation ability of the discriminator, we incorporate the content information into it. This is because, to make a prediction, the discriminator usually relies on semantic A zebra is standing on the grassy field. A white and blue bus is driving down a street. Given Text StackGAN++ (Zhang et al., 2018) AttnGAN (Xu et al., 2018) DF-GAN (Tao et al., 2020) Ours Figure 1: Examples of text-to-image generation on COCO. Current approaches only generate lowquality images with unrealistic objects. In contrast, our method can produce realistic images, in terms of both visual appearances and geometric structure. features, extracted from a given image using a series of convolution operators with local receptive fields. However, when the discriminator goes deeper, less content details are preserved, including the exact geometric structure information (Gatys et al., 2016; Johnson et al., 2016). We think that the loss of content details is likely one of the reasons why current approaches fail to produce realistic shapes for objects on difficult datasets, such as COCO. Thus, the adoption of content information allows the model to exploit the capability of content details and then improve the discriminator to make the final prediction more reliable. Finally, an extensive experimental analysis is performed, which demonstrates that our memory-driven semi-parametric method can generate more realistic images from natural language, compared with purely parametric models, in terms of both visual appearances and geometric structure. 2 RELATED WORK Text-to-image generation has made much progress because of the success of generative adversarial networks (GANs) (Goodfellow et al., 2014) in realistic image generation. Zhang et al. (2017) proposed a multi-stage architecture to generate realistic images progressively. Then, attention-based methods (Xu et al., 2018; Li et al., 2019a) are proposed to further improve the results. Zhu et al. (2019) introduced a dynamic memory module to refine image contents. Qiao et al. (2019a) proposed text-visual co-embeddings to replace input text with corresponding visual features. Cheng et al. (2020) introduced a rich feature generating text-to-image synthesis. Besides, extra information is adopted on the text-to-image generation process, such as scene graphs (Johnson et al., 2018; Ashual & Wolf, 2019) and layout (e.g., bounding boxes or segmentation masks) (Hong et al., 2018; Li et al., 2019b; Hinz et al., 2019). However, none of the above approaches adopt non-parametric techniques to make use of large datasets of images at inference time, neither feed content information into the discriminator to enable a finer training feedback. Also, our method does not make use of any additional semantic information, e.g., scene graphs and layout. 
Text-guided image manipulation is related to our work, where the task also takes natural language descriptions and real images as inputs, but it aims to modify the images using given texts to achieve semantic consistency (Nam et al., 2018; Dong et al., 2017; Li et al., 2020). Differently from it, our work focuses mainly on generating novel images, instead of editing some attributes of the given images. Also, the real images in the text-guided image manipulation task behave as a condition, where the synthetic results should reconstruct all text-irrelevant attributes from the given real images. Differently, the real images in our work are mainly to provide the generator with additional cues of target images, in order to ease the whole generation process. Memory Bank. Qi et al. (2018) introduced a semi-parametric approach to realistic image generation from semantic layouts. Li et al. (2019c) used the stored image crops to determine the appearance of objects. Tseng et al. (2020) used a differentiable retrieval process to select mutually compatible image patches. Li et al. (2021) studied conditional image extrapolation to synthesize new images guided by the input structured text. Differently, instead of using a concise semantic representation (a scene graph as input), which is less user-friendly and has limited context of general descriptions, we use natural language descriptions as input. Also, Liang et al. (2020) designed a memory structure to parse the textual content. Differently, our method simply uses a deep network to extract image features, instead of involving complex image preprocessing to build a memory bank. 3 OVERVIEW Given a sentence S, we aim to generate a fake image I ′ that is semantically aligned with the given S. The proposed model is trained on a set of paired text description and corresponding real image features v, denoted by (S, v). This set is also used to generate a memory bankM of disentangled image features v for different categories, where image features are extracted from the training image by using a pretrained VGG16 network (Simonyan & Zisserman, 2014) (see Fig. 2). Each element in M is an image feature extracted from a training image, associated with corresponding semantically-matched text descriptions from the training datasets. At inference time, we are given a novel text description S that was not seen during training. Then, S is used to retrieve semantically-aligned image features from the memory bank M , based on designed matching algorithms (more details are shown in Sec. 4.2). Next, the retrieved image features v, together with the given text description S, are fed into the generator to synthesize the output image (see Fig. 3). The generator utilizes the information from the image features, fuses them with hidden features produced from the given text description S, and generate realistic images semantically-aligned with S. The architecture and training of the network are described in Sec. 5. To incorporate image features into the generation pipeline, we borrow from the text-guided image manipulation literature (Li et al., 2020), and redesign the architecture to make full use of the given image features in text-to-image generation, shown in Fig. 3. 4 MEMORY BANK 4.1 REPRESENTATION The memory bank M is a set of image features vi extracted from training set images, and each image features vi is associated with matched text descriptions that are provided in the dataset, e.g., in COCO, each image has five matched text descriptions. 
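As an illustration, a minimal sketch of how the memory bank M could be built is given below: each training image is encoded with a pretrained VGG16 (the implementation details in Sec. 6 state that a relu5_3 feature map is used) and stored together with its matched captions. The truncation index, preprocessing, and data structure here are our assumptions, not the released code.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Truncate VGG16 at an intermediate convolutional feature map (illustrative index,
# chosen to approximate the relu5_3 activation mentioned in the paper).
vgg = models.vgg16(pretrained=True).features[:30].eval()

preprocess = T.Compose([
    T.Resize((256, 256)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_path: str) -> torch.Tensor:
    """Return the VGG feature map v for one training image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return vgg(img).squeeze(0)  # shape: (D, H, W)

def build_memory_bank(samples):
    """samples: iterable of (image_path, list_of_captions) pairs."""
    return [{"features": extract_features(path), "captions": caps}
            for path, caps in samples]
```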
The associated text descriptions are used in the matching algorithms, allowing a given text to find the most compatible image features at inference time.
4.2 RETRIEVAL
Given a new text description, in order to effectively retrieve the most compatible image features from the memory bank M, we have designed several matching algorithms and explored the effectiveness of each algorithm. A detailed comparison between the different algorithms is shown in the supplementary material.
4.2.1 SENTENCE-SENTENCE MATCHING
Here, we use the sentences S'_i associated with the image features as keys, to find the most compatible image features v_i for a given unseen sentence S at inference time. First, we feed both S and S'_i into a pretrained text encoder (Xu et al., 2018) to produce sentence features s ∈ R^{D×1} and s'_i ∈ R^{D×1}, respectively, where D is the feature dimension. Then, for the given sentence S, we select the most compatible image features v_i in M based on a cosine similarity score:
α_i = (s^T s'_i) / (||s|| ||s'_i||)   (1)
Finally, we fetch the image features v_i whose key S'_i has the highest similarity score α_i.
4.2.2 SENTENCE-IMAGE MATCHING
Instead of using the associated sentences as keys, we can calculate the similarity between the sentence feature s ∈ R^{D×1} and the image features v_i ∈ R^{D×H×W} stored in M, where D is the number of channels, H is the height, and W is the width. To directly calculate the similarity, we first average the image features along the spatial dimensions to get a global image feature v_i^G ∈ R^{D×1}. So, for a given unseen S, we select the most compatible image features v_i in M based on β_i:
β_i = (s^T v_i^G) / (||s|| ||v_i^G||)   (2)
4.2.3 WORDS-WORDS MATCHING
Moreover, we can use a more fine-grained text representation, namely word embeddings, as keys to find the most compatible image features v_i stored in M for a given unseen sentence S. At inference time, we first feed both S and S'_i into a pretrained text encoder (Xu et al., 2018) to generate word embeddings w ∈ R^{N×D} and w'_i ∈ R^{N×D}, respectively, where N is the number of words and D is the feature dimension. Then, we reshape both w and w'_i to R^{(D∗N)×1}. To find the most compatible image features, the cosine similarity score can be defined as follows:
δ_i = (w^T w'_i) / (||w|| ||w'_i||)   (3)
However, different words in a sentence are not equally important. Thus, if we simply combine all words of a sentence together to calculate the similarity (as above), the similarity score may be less precise. To solve this issue, during training, we reweight each word in a sentence by its importance. We first use convolutional layers to remap the word embeddings, and then calculate the importance λ (and λ'_i) for each word in the word embeddings w ∈ R^{N×D} (and w'_i ∈ R^{N×D}), given by λ = Softmax(w w^T) and λ'_i = Softmax(w'_i w'_i^T), respectively. Each element in λ represents the correlation between different words in a sentence. Then, λw (and λ'_i w'_i) reweight the word embeddings for each word based on its correlation with other words. Using these reweighted word embeddings, we can achieve a more precise similarity calculation between two word embeddings. At inference time, after we reshape both λw and λ'_i w'_i to R^{(D∗N)×1}, the new score is defined as follows:
δ_i = ((λw)^T (λ'_i w'_i)) / (||λw|| ||λ'_i w'_i||)   (4)
4.2.4 WORDS-IMAGE MATCHING
Furthermore, we use the word embeddings w ∈ R^{N×D} and the image features v_i ∈ R^{D×H×W} to directly calculate the similarity score between them. To achieve this, we first reshape the image features to v_i ∈ R^{D×(H∗W)}.
Then, a correlation matrix c_i ∈ R^{N×(H∗W)} can be obtained via c_i = Softmax(w v_i), where each element in c_i represents the correlation between a word and an image spatial location. Then, a reweighted word embedding w̃_i ∈ R^{N×D} containing image information can be obtained by w̃_i = c_i v_i^T. So, to find the most compatible image features, we first reshape both w and w̃_i to R^{(D∗N)×1}, and the similarity score is defined as follows:
γ_i = (w^T w̃_i) / (||w|| ||w̃_i||)   (5)
Similarly, we can also reweight the word embeddings w and the image features v_i based on their importance (see Sec. 4.2.3) to achieve a more precise calculation.
5 GENERATIVE ADVERSARIAL NETWORKS
To generate high-quality synthetic images from natural language descriptions, we propose to incorporate the image features v, along with the given sentence S, into the generator. To incorporate the image features into the generation pipeline, we borrow from the text-guided image manipulation literature (Li et al., 2020) and redesign the architecture to make full use of the given image features in text-to-image generation, as shown in Fig. 3.
5.1 GENERATOR WITH IMAGE FEATURES
To avoid an identity mapping and also to make full use of the image features v in the generator, we first average v within each channel to filter out potential content details (e.g., the overall spatial structure) contained in v, obtaining a global image feature vG, where vG only keeps basic information of the corresponding real image I and serves as a basic image prior. By doing this, the model can effectively avoid copying and pasting from I and largely ensures the diversity of output results, especially at the first stage; the following stages focus more on refining the basic images produced by the first stage by adding more details and increasing their resolution, as shown in Fig. 3. However, if only the global image feature vG is fed at the beginning of the network, the model may fail to fully utilize the cues contained in the image features v. Thus, we further incorporate the image features v at each stage of the network. The reason to feed the image features v rather than the global feature vG at the following stages is that v contains more information about the desired output image, such as image content and the geometric structure of objects, and these details can serve as candidate information for the main generation pipeline to select from. To enable this regional selection effect, we adopt the text-image affine combination module (ACM) (Li et al., 2020), which is able to selectively fuse text-required image information within v into the hidden features h, where h is generated from the given text description S. However, simply fusing the image features v into the generation pipeline may constrain the production of diverse and novel synthetic results, because different image information (e.g., objects and visual attributes) in v may be entangled, which means, for example, that if the model only wants to generate one object, the corresponding entangled parts (e.g., other objects and attributes) may be produced as well. This may cause the additional generation of text-irrelevant objects and attributes. Thus, to avoid these drawbacks, inspired by the study (Karras et al., 2019), we use several fully connected layers to disentangle the image features v, obtaining disentangled image features vD, which allows the model to disconnect the relations between different objects and attributes.
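The global feature and the disentanglement step just described could look roughly like the following PyTorch sketch. The paper only states that "several" fully connected layers are used and does not specify whether they act on the pooled feature or per spatial location; the sketch below applies them per location, and the layer count and width are illustrative guesses rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class FeatureDisentangler(nn.Module):
    """Sketch of Sec. 5.1: compute the global feature vG and disentangled features vD
    from the VGG image features v (assumed shape (B, D, H, W))."""
    def __init__(self, dim=512, num_layers=4):
        super().__init__()
        blocks = []
        for _ in range(num_layers):
            blocks += [nn.Linear(dim, dim), nn.LeakyReLU(0.2)]
        self.mlp = nn.Sequential(*blocks)

    def forward(self, v):
        vG = v.mean(dim=(2, 3))                         # global image feature: (B, D)
        B, D, H, W = v.shape
        x = v.permute(0, 2, 3, 1).reshape(-1, D)        # apply the MLP at every location
        vD = self.mlp(x).view(B, H, W, D).permute(0, 3, 1, 2)  # back to (B, D, H, W)
        return vG, vD
```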
With this disentanglement, the model avoids the constraints introduced by the image features v and can selectively choose the text-required image information within vD, where this information is effectively disentangled without strong connections.
Why does the generator with image features work better? Ideally, the generator produces a sample, e.g., an image, from a latent code, and the distribution of these samples should be indistinguishable from the training distribution, where the training distribution is actually drawn from the real samples in the training dataset. Based on this, incorporating image features from real images in the training dataset into the generator allows the generator to directly draw cues about the desired distribution that it eventually needs to generate. Besides, the global feature vG and the disentangled image features vD can provide basic information about the target results in advance and also work as candidate information, allowing the model to selectively choose text-required information without generating it by itself, thus easing the whole generation process. To some extent, the global feature vG can be seen as the meta-data of the target image, which may contain information about what kinds of objects to generate, e.g., a zebra or a bus, while vD is able to provide basic information about the objects, e.g., the spatial structure, like four legs and one head for the zebra and the rectangular shape for the bus.
5.2 DISCRIMINATOR WITH CONTENT INFORMATION
To further improve the discriminator so that it makes a more reliable prediction, with respect to both visual appearance and geometric structure, we propose to incorporate content information into it. This is mainly because, in a deep convolutional neural network, the deeper the network goes, the fewer content details are preserved, including the exact shape of objects (Gatys et al., 2016; Johnson et al., 2016). We think the loss of content details may prevent the discriminator from providing fine-grained shape-quality feedback to the generator, which may make it difficult for the generator to produce realistic geometric structure. Also, Zhou et al. (2014) showed that the empirical receptive field of a deep convolutional neural network is much smaller than the theoretical one, especially in deep layers. This means that, using convolution operators with a local receptive field only, the network may fail to capture the spatial structure of objects when the size of the objects exceeds the receptive field. To incorporate the content details, we propose to generate a series of image content features, {a_128, a_64, a_32, ..., a_4}, by aggregating different image regions via pooling operators applied to the given real or fake features. The sizes of these content features range from a_128 ∈ R^{C×128×128} to a_4 ∈ R^{C×4×4}, where C represents the number of channels, and the width and the height of the next image content feature are half those of the previous one. Thus, the given image is pooled into representations of different regions, from fine (a_128) to coarse scale (a_4), which preserves content information of different subregions, such as the spatial structure of objects. Then, these features are concatenated with the corresponding hidden features along the channel dimension, incorporating the content information into the discriminator. The number of different-scale content features can be modified, depending on the size of the given images.
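A hedged sketch of how the multi-scale content features {a_128, ..., a_4} could be constructed follows: the input feature map, assumed here to already be 128x128, is repeatedly average-pooled (the pooling type found best in Sec. 6.2), and each scale can then be concatenated with the discriminator's hidden features of matching resolution. The pooling kernel size is our assumption; the paper only states a fixed-size kernel with a small stride.

```python
import torch
import torch.nn.functional as F

def content_features(x, scales=(128, 64, 32, 16, 8, 4)):
    """Build the pyramid {a_128, ..., a_4} from a feature map x of shape (B, C, 128, 128).
    Each level halves the spatial size of the previous one via average pooling."""
    feats = {scales[0]: x}
    cur = x
    for s in scales[1:]:
        cur = F.avg_pool2d(cur, kernel_size=2, stride=2)  # fixed-size kernel, small stride
        feats[s] = cur                                     # shape: (B, C, s, s)
    return feats

# Usage sketch inside the discriminator: concatenate along the channel dimension,
# e.g. h_64 = torch.cat([hidden_64, feats[64]], dim=1)
```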
The content features aggregate different image subregions by repeatedly applying fixed-size pooling kernels with a small stride; thus, they preserve the image information at each scale with only a reasonably small gap. For the type of pooling operation, max versus average, we perform comparison studies to show the difference in Sec. 6.2.
Why does the discriminator with content information work better? Basically, the discriminator in a generative adversarial network is simply a classifier (Goodfellow et al., 2014). It tries to distinguish real data from the data created by the generator (note that in our method, we implement the minimax loss in the loss function, instead of the Wasserstein loss (Arjovsky et al., 2017)). Also, the use of content information has shown great effectiveness in classification (Lazebnik et al., 2006; He et al., 2015) and semantic segmentation (Liu et al., 2015; Zhao et al., 2017). Based on this, incorporating the content information into the discriminator is helpful, allowing the discriminator to make a more reliable prediction on complex datasets, especially datasets with complex image scenery settings, such as COCO.
5.3 TRAINING
To train the network, we follow (Li et al., 2020) and adopt adversarial training. There are three stages in the model, and each stage has a generator network and a discriminator network. The generator and discriminator are trained alternately by minimizing the generator loss LG and the discriminator loss LD. Please see the supplementary material for more details about the training objectives. We only highlight some training differences compared with Li et al. (2020).
Generator objective. The objective functions used to train the generator are similar to those in (Li et al., 2020), but, differently, the inputs to the generator are a pair (S, v) and a noise z, denoted by Gi(z, S, v), where i indicates the stage number.
Discriminator objective. To improve the convergence of our GAN-based generation model, the R1 regularization (Mescheder et al., 2018) is adopted in the discriminator:
R1(ψ) := (γ/2) E_{pD(x)} [ ||∇Dψ(x)||^2 ]   (6)
where ψ represents the parameter values of the discriminator.
6 EXPERIMENTS
To verify the effectiveness of our proposed method in realistic image generation from text descriptions, we conduct extensive experiments on the CUB bird (Wah et al., 2011) dataset and the more complex COCO (Lin et al., 2014) dataset, where COCO contains multiple objects with diverse backgrounds.
Evaluation metrics. We adopt the Fréchet inception distance (FID) (Heusel et al., 2017) as the primary metric to quantitatively evaluate the image quality and diversity. In our experiments, we use 30K synthetic images vs. 30K real test images to calculate the FID value. However, as FID cannot reflect the relevance between an image and a text description, we use the R-precision (Xu et al., 2018) to measure the correlation between a generated image and its corresponding text.
Human evaluation. To better verify the performance of our proposed method, we conducted a user study comparing the current state-of-the-art method DF-GAN (Tao et al., 2020) and ours on CUB and COCO. We randomly selected 100 text descriptions from the test dataset. Then, we asked 5 workers to compare the results after looking at the output images and the given text descriptions, based on two criteria: (1) alignment: whether the synthetic image is semantically aligned with the given description, and (2) realism: whether the synthetic image looks realistic. The results are shown in Tables 1 and 2.
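Before moving to the implementation details, the R1 regularizer in Eq. (6) can be made concrete with a short PyTorch-style sketch: it penalizes the squared gradient norm of the discriminator on real data only. The function and variable names are ours, the discriminator call signature is simplified, and the gamma value shown is only a placeholder, not a setting from the paper.

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1(psi) = (gamma / 2) * E_{x ~ pD} [ || grad_x D_psi(x) ||^2 ]  (Eq. 6)."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)          # real logits, shape (B, ...)
    grad, = torch.autograd.grad(outputs=scores.sum(),
                                inputs=real_images,
                                create_graph=True)
    penalty = grad.pow(2).reshape(grad.size(0), -1).sum(dim=1).mean()
    return 0.5 * gamma * penalty
```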
Please see the supplementary material for more details about the human evaluation.
Implementation. There are three stages in the model, and each stage has a generator network and a discriminator network. The number of stages can be modified, which depends on the resolution of the output image. We utilize the deep layer relu5_3 of a pre-trained VGG-16 to extract image features v, which filters content details in I and keeps more semantic information. In the discriminator, the number of different-scale image content features can be modified, which is related to the size of the given image. A fixed-size pooling kernel with a small stride (stride = 2) is repeatedly applied to the image features, to maximize the preservation of the content information. For the type of pooling operation, average pooling is adopted. For the matching algorithm, words-image matching with reweighting based on importance is adopted. The resolution of synthetic results is 256 × 256. Our method and its variants are trained on a single Quadro RTX 6000 GPU, using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.0002. The hyperparameter λ is set to 5. We preprocess datasets according to the method used in (Xu et al., 2018). No attention module is implemented in the whole architecture.
6.1 COMPARISON WITH OTHER APPROACHES
Quantitative comparison. Quantitative results are shown in Tables 1 and 2. As we can see, compared to other approaches, our method achieves better FID and R-precision scores on both datasets, and even performs better than OP-GAN, which adopts bounding boxes. This indicates that (1) our method can produce more realistic images from given text descriptions, in terms of image quality and diversity, and (2) synthetic results produced by our method are more semantically aligned with the given text descriptions. Besides, in the human evaluation, our method achieves better alignment and realism scores compared with DF-GAN, which indicates that our results are preferred by workers and further verifies the better performance of our method with respect to semantic alignment and image realism.
Qualitative comparison. In Fig. 5, we present synthetic examples produced by our method at 256 × 256, along with the corresponding retrieved images that provide image features. As we can see, our method is able to produce high-quality results on CUB and COCO, with respect to realistic appearances and geometric structure, that also semantically match the given text descriptions. Besides, the synthetic results are different from the retrieved images, which indicates that there is no significant copy-and-paste problem in our method.
Diversity evaluation. To further evaluate the diversity of our method, we fix the given text description and the corresponding retrieved image features, and only change the given noise z to generate output images, shown in Fig. 7. When we fix the sentence and image features and only change the noise, our method can generate obviously different images, but they still semantically match the given sentence and also make use of information from the image features. More evaluations are shown in the supplementary material.
6.2 COMPONENT ANALYSIS
Effectiveness of the image features. To better understand the effectiveness of image features in the generator, we conduct an ablation study shown in Table 3.
Without image features, the model “Ours w/o Feature” achieves worse quantitative results on both FID and R-precision compared with the baseline, which verifies the effectiveness of image features for high-quality image generation. Interestingly, without image features, our method becomes a pure text-to-image generation method, similar to other baselines, but the FID of “Ours w/o Feature” is still competitive with other baselines. This indicates that even without image features fed into our method, it can still generate better synthetic results with respect to image quality and diversity. We think this is mainly because, with the help of content information, our improved discriminator is able to make a more reliable prediction on complex datasets, which in turn encourages the generator to produce better synthetic images.
Effectiveness of the disentanglement. Here, we show the effectiveness of the fully connected layers applied on the image features v. Interestingly, from Table 3, the “model w/o Disen.” achieves better FID and R-precision compared with the baseline. This is likely because the model may suffer from an identity mapping problem. To verify this identity mapping problem, we conduct another experiment, where we feed mismatched sentence and image pairs into the network without using search algorithms, denoted “model w/o Disen.*”. As we can see, on mismatched pairs, although the FID is still low, the R-precision degrades significantly.
Effectiveness of the content information. To verify the effectiveness of the content information adopted in the discriminator, we conduct an ablation study, shown in Table 3. As we can see, FID and R-precision degrade when the discriminator does not adopt the content information. This may indicate that the content information can effectively strengthen the differentiation ability of the discriminator. The improved discriminator is then able to provide the generator with fine-grained training feedback regarding geometric structure, thus facilitating the training of a better generator that produces higher-quality synthetic results.
Comparison between different pooling types. Here, we conduct a comparison study on different pooling types (i.e., max and average) in Table 3. As we can see, the model with average pooling works better than the one with max pooling. We think this is likely because max pooling fails to capture the contextual information between neighboring pixels, as it only picks the maximum value within a region of pixels, while average pooling averages over them.
Effectiveness of the regularization. We evaluate the effectiveness of the adopted regularization in the discriminator. From Table 3, the model without the regularization has worse quantitative results compared with the full model. We think this is because the regularization effectively improves GAN convergence by preventing the generator from training on junk feedback once the discriminator can no longer easily tell the difference between real and fake.
7 CONCLUSION
We have introduced a memory-driven semi-parametric approach to text-to-image generation, which utilizes large datasets of images at inference time. Also, an alternative architecture is proposed for both the generator and the discriminator. Extensive experimental results on two datasets demonstrate the effectiveness of feeding retrieved image features into the generator and incorporating content information into the discriminator.
8 ETHICS STATEMENT
All datasets and baselines used in the paper are public with corresponding citations. Our research mainly explores the interaction between different modal features, and aims to achieve an effective transformation from one domain to the other, which is unlikely to yield significantly harmful insights or to involve conflicts of interest and sponsorship.
9 REPRODUCIBILITY STATEMENT
To reproduce our results, we include the details of the datasets we used in our paper (see Sec. D). In the implementation section (see Sec. 6), we show more details of our network, including how to extract image features and how to generate the content information used in the discriminator. We also include the values of hyperparameters, and the kinds of devices that we used to train our network. Sec. 5.3 and Sec. B show the objective functions used to train our network. Also, all data and baselines used in our paper are public with corresponding citations. We will release our code after the conference.
A ARCHITECTURE
Here we show details about the network architectures for the components of our model.
A.1 TEXT ENCODER
The text encoder used in our method is a pretrained bidirectional LSTM (Xu et al., 2018), which is trained together with an image encoder, Inception-v3 (Szegedy et al., 2016), maximizing the cosine similarity between text features and the corresponding image features. The text features are encoded from a given text description using the text encoder, and the image features are extracted from the corresponding matched image.
A.2 IMAGE ENCODER
The image encoder used in our main architecture is a VGG-16 (Simonyan & Zisserman, 2014) network, pretrained on ImageNet (Russakovsky et al., 2015). The deep layer relu5_3 is adopted to extract image features. Thus, the image features contain more semantic information than content details.
A.3 TEXT-IMAGE AFFINE COMBINATION MODULE
To better fuse different-modal text and image features, and also to enable a regional selection effect, we adopt the text-image affine combination module (Li et al., 2020), shown in Fig. 8. The affine combination module takes two inputs: (1) the hidden features h ∈ RC×H×W from the given text description or the intermediate hidden representation between two stages, where C is the number of channels, H is the height, and W is the width of the feature map, and (2) the corresponding disentangled image features vD ∈ RC×H×W, obtained by applying fully connected layers to the image features. By applying two convolutional layers, the disentangled image features vD are converted into trainable weights W(vD) ∈ RC×H×W and trainable biases b(vD) ∈ RC×H×W. Then, the fused feature h′ ∈ RC×H×W is generated by
$h' = h \odot W(v_D) + b(v_D)$, (7)
where W and b represent the functions that convert the image features vD into weights W(vD) and biases b(vD), and $\odot$ denotes the Hadamard element-wise product.
A.4 REWEIGHTING IMAGE FEATURES BASED ON IMPORTANCE
Here, we show how to reweight image features based on their importance, as mentioned in Sec. 4.2.4. First, during training, we use convolutional layers to remap the image features, and then reshape them into v ∈ RD×(H∗W). Then, to calculate the importance λ for each spatial location in the image features, we apply the following equation: λ = Softmax(vT v), where λ ∈ R(H∗W)×(H∗W), and each element in λ represents the correlation between different spatial locations. Finally, we reweight the image features based on importance by adopting vλ.
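To make Eq. (7) and the reweighting of Sec. A.4 concrete, here is a minimal, hedged sketch in PyTorch-style Python. The two 1x1 convolutions, the module name, and the tensor sizes are illustrative assumptions and do not reproduce the authors' exact layer configuration.

```python
# Hypothetical sketch of Eq. (7) and the importance reweighting of Sec. A.4.
import torch
import torch.nn as nn

class AffineCombination(nn.Module):
    """Fuse hidden features h with disentangled image features vD:
    h' = h * W(vD) + b(vD)  (element-wise, Eq. 7)."""
    def __init__(self, channels):
        super().__init__()
        self.to_weight = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_bias = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, h, v_d):
        return h * self.to_weight(v_d) + self.to_bias(v_d)

def reweight_by_importance(v):
    """v: (B, D, H*W). Importance lambda = softmax(v^T v); return v @ lambda."""
    attn = torch.softmax(v.transpose(1, 2) @ v, dim=-1)  # (B, H*W, H*W)
    return v @ attn                                       # (B, D, H*W)

# Usage (shapes are assumptions):
h = torch.randn(2, 64, 32, 32)
v_d = torch.randn(2, 64, 32, 32)
fused = AffineCombination(64)(h, v_d)
v = torch.randn(2, 512, 16 * 16)
v_weighted = reweight_by_importance(v)
```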
B OBJECTIVE FUNCTIONS
Here we show the complete objective functions for training our method. The discriminator and generator in our model are trained alternately by minimizing the generator loss LG and the discriminator loss LD.
B.1 GENERATOR OBJECTIVE
The generator objective for training a generator at stage i contains an unconditional adversarial loss, a conditional adversarial loss, and a text-image matching loss LDAMSM (Xu et al., 2018):
$L_{G_i} = \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{z\sim P_z,\,v\sim P_{data}}\big[\log(D_i(G_i(z,S,v)))\big]}_{\text{unconditional adversarial loss}} \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{z\sim P_z,\,v\sim P_{data}}\big[\log(D_i(G_i(z,S,v),S))\big]}_{\text{conditional adversarial loss}} + \lambda L_{\mathrm{DAMSM}}$, (8)
where Gi and Di represent the corresponding generator network and discriminator network at stage i, respectively, S is the text description, v denotes the image features extracted from the corresponding real image I that correctly semantically matches S, I is sampled from the true distribution Pdata, and z is a noise vector drawn from the Gaussian distribution Pz. Thus, the complete objective function for training the generator networks is:
$L_G = \sum_{i=1}^{K} L_{G_i}$, (9)
where K is the total number of stages in the network.
B.2 DISCRIMINATOR OBJECTIVE
The discriminator objective for training a discriminator at stage i contains an unconditional adversarial loss and a conditional adversarial loss:
$L_{D_i} = \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{I_i\sim P_{data}}\big[\log(D_i(I_i))\big] - \tfrac{1}{2}\,\mathbb{E}_{z\sim P_z}\big[\log(1-D_i(G_i(z,S,v)))\big]}_{\text{unconditional adversarial loss}} \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{I_i\sim P_{data}}\big[\log(D_i(I_i,S))\big] - \tfrac{1}{2}\,\mathbb{E}_{z\sim P_z}\big[\log(1-D_i(G_i(z,S,v),S))\big]}_{\text{conditional adversarial loss}}$, (10)
where Ii denotes the real image sampled from the true image distribution Pdata at stage i. Thus, the complete objective function for training the discriminator networks is:
$L_D = \sum_{i=1}^{K} L_{D_i} + R_1(\psi)$, (11)
where R1(ψ) is the regularization term described in the paper. This regularization term is derived from zero-centered gradient penalties (Ross & Doshi-Velez, 2017) on local stability, which penalize the discriminator for deviating from the Nash equilibrium. This ensures that when a GAN-based model converges (i.e., the generator produces the true data distribution), the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.
C EVALUATION METRICS
In this section, we show more details about the evaluation metrics used in the paper.
C.1 FRÉCHET INCEPTION DISTANCE
The Fréchet inception distance (FID) (Heusel et al., 2017) measures the Fréchet distance between generated image features and real image features, where both features are extracted by an Inception-v3 network (Szegedy et al., 2016) pretrained on ImageNet (Russakovsky et al., 2015). Consequently, a lower FID implies a closer distance between the synthetic image distribution and the real image distribution.
C.2 R-PRECISION
To measure the semantic alignment between the synthetic image and the given text description, the R-precision (Xu et al., 2018) is adopted. The R-precision is calculated by retrieving relevant text descriptions given an image query. To measure the relevance between the text and the image, the cosine similarity between text and image features is adopted. Thus, we compute a global image vector and 100 candidate sentence vectors, where the 100 candidate sentence vectors contain R ground-truth text descriptions that correctly describe the image and 100−R randomly chosen mismatched descriptions. For each image query, if a of the top-R ranked retrieved text descriptions are relevant, then the R-precision is a/R.
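The following is a minimal sketch of this R-precision computation for a single image query, assuming pre-computed image and sentence embeddings; the function and variable names are illustrative, not the evaluation code used in the paper.

```python
# Hypothetical sketch of the R-precision metric for one image query.
import numpy as np

def r_precision(image_vec, candidate_vecs, relevant_ids, R):
    """image_vec: (D,) global image vector.
    candidate_vecs: (100, D) sentence vectors, R of which are ground truth.
    relevant_ids: set of indices of the R ground-truth descriptions."""
    # Cosine similarity between the image and every candidate sentence.
    sims = candidate_vecs @ image_vec
    sims /= (np.linalg.norm(candidate_vecs, axis=1) * np.linalg.norm(image_vec) + 1e-8)
    top_r = np.argsort(-sims)[:R]                    # indices of the top-R sentences
    a = sum(int(i in relevant_ids) for i in top_r)   # relevant ones among them
    return a / R

# Usage on random toy data; with R = 1 this reduces to top-1 retrieval accuracy.
rng = np.random.default_rng(0)
img = rng.normal(size=256)
cands = rng.normal(size=(100, 256))
print(r_precision(img, cands, relevant_ids={0}, R=1))
```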
In the paper, we measure the top-1 R-precision (i.e., R = 1). D MORE EXPERIMENTS In this section, we show additional experimental results to further evaluate and verify the performance of our proposed method. D.1 DATASETS CUB bird (Wah et al., 2011) contains 8,855 training images and 2,933 test images, and each image has 10 corresponding text descriptions. COCO (Lin et al., 2014) contains 82,783 training images and 40,504 validation images. Each image has 5 descriptions. D.2 QUANTITATIVE COMPARISON BETWEEN DIFFERENT ALGORITHMS Here, we show the quantitative comparison between different matching algorithms, shown in Tables 4 and 5. As we can see, the algorithm word image matching with reweighting based on importance achieves the best FID and R-psr scores on CUB and COCO datasets. Therefore, the algorithm word image matching with reweighting is adopted in our method. D.3 DETAILS OF HUMAN EVALUATION Because the automatic metric cannot comprehensively evaluate the improvement of our proposed method, we conducted a side-by-side human evaluation study to analyze the improvement. The study compares synthetic images from our method and current state-of-the-art text-to-image generation method DF-GAN (Tao et al., 2020) on both CUB and COCO, according to (1) alignment, and (2) realism. We presented synthetic images from different methods along with the given text descriptions. We randomly switch our method and the baseline and also anonymized them. Then, we asked workers to choose the best images based on above two criteria. In this study, we randomly choose 100 text descriptions sampled from the test dataset, and then assign corresponding synthetic images generated by different methods to 5 workers to reduce variance. D.4 QUALITATIVE RESULTS In Fig. 10, we show more qualitative results generated by our method on the CUB bird dataset, along with the corresponding retrieved images that provide image features. As we can see, our method is able to produce high-quality results on CUB, semantically matching the given text descriptions. Also, the synthetic results look obviously different from the retrieved images, but our method can selectively choose information from the retrieved image to generate better synthetic results. D.5 DIVERSITY D.5.1 SSIM We also compare the Structural Similarity Index (SSIM) score (Hore & Ziou, 2010) between the generated images and corresponding ground-truth images to evaluate the diversity of our method. SSIM is originally used to measure the recovery result from distorted images. In our case, higher SSIM means synthetic and real images are more similar, which indicates that there may exist a copy-and-paste problem and the network has a worse diversity. Based on this, for SSIM, lower is better, which means a better diversity. To calculate the SSIM, for other baseline methods, we evaluate them on the test dataset by calculating the SSIM between each synthetic and ground-truth image pairs, and then get the average of all scores; for our method, we calculate the SSIM between the synthetic image and the image that provide image features. As shown in Table 6, our method achieves competitive SSIM scores on both CUB and COCO, compared with other baselines. 
This indicates that (1) even if our method has image features as image priors, it can still produce diverse synthetic results that are different from the corresponding real images, (2) there is no significant copy-and-paste problem in our method, and (3) our method can effectively disentangle objects and attributes in the given image features, which can then work as candidate information for the main generation pipeline to choose from.
D.5.2 SEMANTIC INFORMATION EXPLORATION
Here, we further verify whether our method suffers from a copy-and-paste problem by exploring whether it can make use of the semantic information contained in the retrieved image features. To verify this, instead of extracting image features from RGB images, we use segmentation masks to provide semantic image features, shown in Fig. 11. As we can see, although no content information is provided in the given segmentation masks, our method is still able to generate realistic images, which indicates that our method can make use of the semantic information contained in the image features, instead of simply copying and pasting the retrieved image features to produce output images. Furthermore, as discussed in the following Sec. D.7, given partially matched text and image features, our method is able to pick up the semantic information (e.g., the structure of the train, cat, and bus) and filter out the detailed color information (e.g., yellow and green, brown, and yellow) to generate text-required output images, as shown in Fig. 12.
D.6 EFFECTIVENESS OF IMAGE FEATURES
When no image features are fed into our method, it becomes a traditional text-to-image generation model, where the inputs are only the natural language descriptions and random noise. As shown in Table 7, “Ours w/o Feature” still has a competitive performance compared with other baselines, which means that our method can still generate images with good quality and diversity. We think this is mainly because of the powerful discriminator with content information, which is able to provide fine-grained training feedback to the generator, in terms of realistic appearance and geometric structure. Note that, to build the model “Ours w/o Feature”, we remove the image features and the ACM components from the network, and only keep the new discriminator with content information.
Table 7: Quantitative comparison: Fréchet inception distance (FID) and R-precision (R-prs) of StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), ControlGAN (Li et al., 2019a), DM-GAN (Zhu et al., 2019), OP-GAN (Hinz et al., 2019), and our method on the COCO dataset. “Ours w/o Feature” denotes that our model does not use any image features and has a generation pipeline similar to other traditional text-to-image generation methods. For FID, lower is better; for R-prs, higher is better.
Metric | StackGAN++ | AttnGAN | ControlGAN | DM-GAN | OP-GAN | Ours w/o Feature
FID | 81.59 | 32.32 | 33.58 | 32.64 | 24.70 | 22.20
R-prs (%) | 71.88 | 85.47 | 82.43 | 88.56 | 89.01 | 84.63
D.7 IMAGE GENERATION WITH PARTIAL TEXT-IMAGE MATCHING
Interestingly, when the retrieved image features have good quality (e.g., the desired objects in the image features can provide enough information) but are not perfectly aligned with the given text description, which means that the given text description and the corresponding retrieved image features only partially match in semantic meaning, our method is still able to produce realistic images, as shown in Fig. 12.
As we can see, our method is able to generate the desired objects with required attributes, even if image features only partially match the given text description. For example, in the provided “train” image features, there is a yellow and green train, but the given description requires a red train. However, our method is still able to generate a realistic train with a red color. Besides, our method can even produce a novel composition, e.g., the sign is flying in the sky. We think that this is mainly because the generator can selectively make use of the information provided by the image features, instead of directly copying and pasting information from it. Also, features and attributes are disentangled in the provided image features, which enable this independent selection without additional generation. D.8 REGIONAL SELECTION EFFECT In Fig. 12, we can observe the regional selection effect involved in the generation process. For the train example, our full model is able to selectively keep the relevant information (e.g., train) and filter the irrelevant contents (e.g., yellow and green color) to avoid a wrong object generation (e.g., red color). This effect can be magnified when the given image has multiple objects, and the given text only partially describes it, shown in Fig. 13. There are multiple objects (e.g., vase, flowers, chairs, and window for the top example; three zebras, enclosure, and grass for the bottom one) in the given image features. However, our method only selectively makes use of some information (e.g., shape and texture of flowers and zebra) and generates text-required objects without keeping irrelevant contents in the image features (e.g., chair, window, and multiple zebras). E LIMITATIONS AND FUTURE WORK Here, we discuss some limitations of the proposed method and also the future work. We have observed that our method may fail to produce realistic images when the retrieved image features can only provide limited information, e.g., the target object is too small in the corresponding real image, or there are no desired objects in the retrieved image features. As shown in Fig. 14 left, the stop sign, zebra, bus, and train in the corresponding image are too small, which means that the extracted image features can only provide very limited information about the desired object zebra, stop sign, bus, and train to the generation pipeline. Furthermore, when the retrieved image features have no desired objects, shown in Fig. 14 right, our proposed method may fail to generate high-quality images as well. No desired objects presented in the retrieved image features are mainly caused by the image preprocessing (e.g., crop) and also the limitation of matching algorithms. In such cases, our method is more similar to a pure text-to-image generation method, like other baselines, because the provided image features cannot provide any useful information. To solve these problems, we suggest to build a better memory bank with higher-quality image features, and also improve the matching algorithms to find the most compatible image features for a given text description. Besides, our method is a semi-parametric approach, which needs to retrieve image features from the memory bank. So, it might slow down the inference time, compared with other purely parametric methods. 
To solve this problem, we suggest (1) running the matching algorithms in parallel to speed up the whole inference time, and (2) encouraging users to provide the category of the main object in their text descriptions, so that this category can be used as a key to narrow down the retrieval region.
F ADDITIONAL QUALITATIVE COMPARISON
Here, we show an additional qualitative comparison between the different text-to-image generation approaches StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), and DF-GAN (Tao et al., 2020) and our method on the COCO dataset (Lin et al., 2014).
Figure 15: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
Figure 16: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
Figure 18: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
1. What is the focus and contribution of the paper on text-to-image generation? 2. What are the strengths of the proposed approach, particularly in terms of memory construction and retrieval? 3. What are the weaknesses of the paper, especially regarding the lack of novelty in the proposed methods and the limited human evaluation? 4. Do you have any concerns or suggestions regarding the memory bank size and the similarity scores used for memory bank retrieval? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper introduces a memory-driven semi-parametric approach to text-to-image generation. A memory bank of image features is constructed from a training set of images. Then the retrieved image features are provided to the generator to produce realistic synthetic results. The novelty of this paper is twofold: i) memory construction and retrieval; ii) two new architectures of the generator and discriminator to exploit the memory. Review XMC-GAN is a strong baseline with current state-of-the-art results. Why not do a comparison with XMC-GAN? Even if the FID score is not good, what about the human evaluation? It is good to see improvement on the memory bank, and I wonder about the effects of the similarity score. Could you add an ablation study on its effects on retrieval? The proposed methods lack novelty. The architecture of the generator is adopted from another existing work. The memory bank construction and retrieval are simple heuristics. Human evaluation is not extensive, and more methods should be evaluated. Some other metrics such as SOA-C, SOA-I, and FID 0-1-2-4-8 should also be reported. Equations 1-5 show the similarity scores; which one is used for memory bank retrieval? Besides, the memory bank size seems to be a balance between quality and efficiency.
ICLR
Title Memory-Driven Text-to-Image Generation Abstract We introduce a memory-driven semi-parametric approach to text-to-image generation, which is based on both parametric and non-parametric techniques. The non-parametric component is a memory bank of image features constructed from a training set of images. The parametric component is a generative adversarial network. Given a new text description at inference time, the memory bank is used to selectively retrieve image features that are provided as basic information of target images, which enables the generator to produce realistic synthetic results. We also incorporate the content information into the discriminator, together with semantic features, allowing the discriminator to make a more reliable prediction. Experimental results demonstrate that the proposed memory-driven semi-parametric approach produces more realistic images than purely parametric approaches, in terms of both visual fidelity and text-image semantic consistency. 1 INTRODUCTION How to effectively produce realistic images from given natural language descriptions with semantic alignment has drawn much attention, because of its tremendous potential applications in art, design, and video games, to name a few. Recently, with the vast development of generative adversarial networks (Goodfellow et al., 2014; Gauthier, 2015; Mirza & Osindero, 2014) in realistic image generation, text-to-image generation has made much progress, where the progress has been mainly driven by parametric models — deep networks use their weights to represent all data concerning realistic appearance (Zhang et al., 2017; 2018; Xu et al., 2018; Li et al., 2019a; Qiao et al., 2019b; Zhu et al., 2019; Hinz et al., 2019; Cheng et al., 2020; Qiao et al., 2019a). Although these approaches can produce realistic results on well-structured datasets, containing a specific class of objects at the image center with fine-grained descriptions, such as birds (Wah et al., 2011) and flowers (Nilsback & Zisserman, 2008), there is still much room to improve. Besides, they usually fail on more complex datasets, which contain multiple objects with diverse backgrounds, e.g., COCO (Lin et al., 2014). This is likely because, for COCO, the generation process involves a large variety in objects (e.g., pose, shape, and location), backgrounds, and scenery settings. Thus, it is much easier for these approaches to only produce text-semantic-matched appearances instead of capturing difficult geometric structure. As shown in Fig. 1, current approaches are only capable of producing required appearances semantically matching the given descriptions (e.g., white and black stripes for zebra), but objects are unrealistic with distorted shape. Furthermore, these approaches are in contrast to earlier works on image synthesis, which were based on non-parametric techniques that could make use of large datasets of images at inference time (Chen et al., 2009; Hays & Efros, 2007; Isola & Liu, 2013; Zhu et al., 2015; Lalonde et al., 2007). Although parametric approaches can enable the benefits of end-to-end training of highly expressive models, they lose a strength of earlier non-parametric techniques, as they fail to make use of large datasets of images at inference time. In this paper, we introduce a memory-driven semi-parametric approach to text-to-image generation, where the approach takes the advantage of both parametric and non-parametric techniques. 
The non-parametric component is a memory bank of disentangled image features constructed from a training set of real images. The parametric component is a generative adversarial network. Given a novel text description at inference time, the memory bank is used to selectively retrieve compatible image features that are provided as basic information, allowing the generator to directly draw clues of target images, and thus to produce realistic synthetic results. Besides, to further improve the differentiation ability of the discriminator, we incorporate the content information into it. This is because, to make a prediction, the discriminator usually relies on semantic features, extracted from a given image using a series of convolution operators with local receptive fields. However, when the discriminator goes deeper, fewer content details are preserved, including the exact geometric structure information (Gatys et al., 2016; Johnson et al., 2016). We think that the loss of content details is likely one of the reasons why current approaches fail to produce realistic shapes for objects on difficult datasets, such as COCO. Thus, the adoption of content information allows the model to exploit content details and improves the discriminator, making the final prediction more reliable. Finally, an extensive experimental analysis is performed, which demonstrates that our memory-driven semi-parametric method can generate more realistic images from natural language, compared with purely parametric models, in terms of both visual appearances and geometric structure.
Figure 1: Examples of text-to-image generation on COCO. Current approaches only generate low-quality images with unrealistic objects. In contrast, our method can produce realistic images, in terms of both visual appearances and geometric structure.
2 RELATED WORK
Text-to-image generation has made much progress because of the success of generative adversarial networks (GANs) (Goodfellow et al., 2014) in realistic image generation. Zhang et al. (2017) proposed a multi-stage architecture to generate realistic images progressively. Then, attention-based methods (Xu et al., 2018; Li et al., 2019a) were proposed to further improve the results. Zhu et al. (2019) introduced a dynamic memory module to refine image contents. Qiao et al. (2019a) proposed text-visual co-embeddings to replace input text with corresponding visual features. Cheng et al. (2020) introduced a rich-feature-generating approach to text-to-image synthesis. Besides, extra information has been adopted in the text-to-image generation process, such as scene graphs (Johnson et al., 2018; Ashual & Wolf, 2019) and layout (e.g., bounding boxes or segmentation masks) (Hong et al., 2018; Li et al., 2019b; Hinz et al., 2019). However, none of the above approaches adopt non-parametric techniques to make use of large datasets of images at inference time, nor do they feed content information into the discriminator to enable finer training feedback. Also, our method does not make use of any additional semantic information, e.g., scene graphs and layout.
Text-guided image manipulation is related to our work, where the task also takes natural language descriptions and real images as inputs, but it aims to modify the images using given texts to achieve semantic consistency (Nam et al., 2018; Dong et al., 2017; Li et al., 2020). Differently from it, our work focuses mainly on generating novel images, instead of editing some attributes of the given images. Also, the real images in the text-guided image manipulation task behave as a condition, where the synthetic results should reconstruct all text-irrelevant attributes from the given real images. Differently, the real images in our work are mainly to provide the generator with additional cues of target images, in order to ease the whole generation process. Memory Bank. Qi et al. (2018) introduced a semi-parametric approach to realistic image generation from semantic layouts. Li et al. (2019c) used the stored image crops to determine the appearance of objects. Tseng et al. (2020) used a differentiable retrieval process to select mutually compatible image patches. Li et al. (2021) studied conditional image extrapolation to synthesize new images guided by the input structured text. Differently, instead of using a concise semantic representation (a scene graph as input), which is less user-friendly and has limited context of general descriptions, we use natural language descriptions as input. Also, Liang et al. (2020) designed a memory structure to parse the textual content. Differently, our method simply uses a deep network to extract image features, instead of involving complex image preprocessing to build a memory bank. 3 OVERVIEW Given a sentence S, we aim to generate a fake image I ′ that is semantically aligned with the given S. The proposed model is trained on a set of paired text description and corresponding real image features v, denoted by (S, v). This set is also used to generate a memory bankM of disentangled image features v for different categories, where image features are extracted from the training image by using a pretrained VGG16 network (Simonyan & Zisserman, 2014) (see Fig. 2). Each element in M is an image feature extracted from a training image, associated with corresponding semantically-matched text descriptions from the training datasets. At inference time, we are given a novel text description S that was not seen during training. Then, S is used to retrieve semantically-aligned image features from the memory bank M , based on designed matching algorithms (more details are shown in Sec. 4.2). Next, the retrieved image features v, together with the given text description S, are fed into the generator to synthesize the output image (see Fig. 3). The generator utilizes the information from the image features, fuses them with hidden features produced from the given text description S, and generate realistic images semantically-aligned with S. The architecture and training of the network are described in Sec. 5. To incorporate image features into the generation pipeline, we borrow from the text-guided image manipulation literature (Li et al., 2020), and redesign the architecture to make full use of the given image features in text-to-image generation, shown in Fig. 3. 4 MEMORY BANK 4.1 REPRESENTATION The memory bank M is a set of image features vi extracted from training set images, and each image features vi is associated with matched text descriptions that are provided in the dataset, e.g., in COCO, each image has five matched text descriptions. 
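As a rough illustration of this representation, the snippet below sketches how such a memory bank could be stored and queried with a sentence-level cosine similarity (the simplest of the matching schemes described next); the data layout and class name are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the memory bank M: each entry pairs image features
# from the image encoder with sentence embeddings of its matched captions.
import numpy as np

class MemoryBank:
    def __init__(self):
        self.image_feats = []    # list of (D, H, W) arrays from the image encoder
        self.caption_embs = []   # list of (K, D_s) arrays from the text encoder

    def add(self, image_feat, caption_embeddings):
        self.image_feats.append(image_feat)
        self.caption_embs.append(caption_embeddings)

    def retrieve(self, sentence_emb):
        """Return the image features whose associated captions best match
        the query sentence (cosine similarity, sentence-sentence matching)."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        scores = [max(cos(sentence_emb, c) for c in caps) for caps in self.caption_embs]
        return self.image_feats[int(np.argmax(scores))]
```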
These descriptions are used in the matching algorithms, allowing a given text to find the most compatible image features at inference time.
4.2 RETRIEVAL
Given a new text description, in order to effectively retrieve the most compatible image features from the memory bank M, we have designed several matching algorithms and also explored the effectiveness of each algorithm. A detailed comparison between the different algorithms is shown in the supplementary material.
4.2.1 SENTENCE-SENTENCE MATCHING
Here, we use the image features’ associated sentences S′i as keys, to find the most compatible image features vi for a given unseen sentence S at inference time. First, we feed both S and S′i into a pretrained text encoder (Xu et al., 2018) to produce sentence features s ∈ RD×1 and s′i ∈ RD×1, respectively, where D is the feature dimension. Then, for the given sentence S, we select the most compatible image features vi in M based on a cosine similarity score:
$\alpha_i = \frac{s^\top s'_i}{\|s\|\,\|s'_i\|}$. (1)
Finally, we fetch the image features vi using the key S′i with the highest similarity score αi.
4.2.2 SENTENCE-IMAGE MATCHING
Instead of using the associated sentences as keys, we can calculate the similarity between the sentence feature s ∈ RD×1 and the image features vi ∈ RD×H×W stored in M, where D is the number of channels, H is the height, and W is the width. To directly calculate the similarity, we first average the image features along the spatial dimensions to get a global image feature vGi ∈ RD×1. So, for a given unseen S, we select the most compatible image features vi in M based on βi:
$\beta_i = \frac{s^\top v_{G_i}}{\|s\|\,\|v_{G_i}\|}$. (2)
4.2.3 WORDS-WORDS MATCHING
Moreover, we can use a more fine-grained text representation (namely, word embeddings) as keys to find the most compatible image features vi stored in M for a given unseen sentence S. At inference time, we first feed both S and S′i into a pretrained text encoder (Xu et al., 2018) to generate word embeddings w ∈ RN×D and w′i ∈ RN×D, respectively, where N is the number of words and D is the feature dimension. Then, we reshape both w and w′i to R(D∗N)×1. So, to find the most compatible image features, the cosine similarity score can be defined as follows:
$\delta_i = \frac{w^\top w'_i}{\|w\|\,\|w'_i\|}$. (3)
However, different words in a sentence are not equally important. Thus, if we simply combine all words from a sentence together to calculate the similarity (as above), the similarity score may be less precise. To solve this issue, during training, we reweight each word in a sentence by its importance. We first use convolutional layers to remap the word embeddings, and then calculate the importance λ (and λ′i) for each word in the word embeddings w ∈ RN×D (and w′i ∈ RN×D), denoted by λ = Softmax(wwT) and λ′i = Softmax(w′iw′Ti), respectively. Each element in λ represents the correlation between different words in a sentence. Then, λw (and λ′iw′i) reweights the embedding of each word based on its correlation with other words. So, using these reweighted word embeddings, we can achieve a more precise similarity calculation between two word embeddings. At inference time, after we reshape both λw and λ′iw′i to R(D∗N)×1, the new equation is defined as follows:
$\delta_i = \frac{(\lambda w)^\top \lambda'_i w'_i}{\|\lambda w\|\,\|\lambda'_i w'_i\|}$. (4)
4.2.4 WORDS-IMAGE MATCHING
Furthermore, we use the word embeddings w ∈ RN×D and image features vi ∈ RD×H×W to directly calculate the similarity score between them. To achieve this, we first reshape the image features to vi ∈ RD×(H∗W).
Then, a correlation matrix ci ∈ RN×(H∗W) can be obtained via ci = Softmax(wvi), where each element in ci represents the correlation between each word and each image spatial location. Then, a reweighted word embedding w̃i ∈ RN×D containing image information can be obtained by $\tilde{w}_i = c_i v_i^\top$. So, to find the most compatible image features, we first reshape both w and w̃i to R(D∗N)×1, and the similarity score is defined as follows:
$\gamma_i = \frac{w^\top \tilde{w}_i}{\|w\|\,\|\tilde{w}_i\|}$. (5)
Similarly, we can also reweight the word embeddings w and image features vi based on their importance (see Sec. 4.2.3) to achieve a more precise calculation.
5 GENERATIVE ADVERSARIAL NETWORKS
To generate high-quality synthetic images from natural language descriptions, we propose to incorporate image features v, along with the given sentence S, into the generator. To incorporate image features into the generation pipeline, we borrow from the text-guided image manipulation literature (Li et al., 2020), and redesign the architecture to make full use of the given image features in text-to-image generation, shown in Fig. 3.
5.1 GENERATOR WITH IMAGE FEATURES
To avoid the identity mapping and also to make full use of the image features v in the generator, we first average v on each channel to filter out potential content details (e.g., overall spatial structure) contained in v, getting a global image feature vG, where vG only keeps basic information of the corresponding real image I, serving as a basic image prior. By doing this, the model can effectively avoid copying and pasting from I, and greatly ensure the diversity of output results, especially at the first stage. This is because the following stages focus more on refining the basic images produced by the first stage, by adding more details and improving their resolution, as shown in Fig. 3. However, if we only feed the global image feature vG at the beginning of the network, the model may fail to fully utilize the cues contained in the image features v. Thus, we further incorporate the image features v at each stage of the network. The reason to feed the image features v rather than the global feature vG at the following stages is that v contains more information about the desired output image, such as image contents and the geometric structure of objects, where these details can work as candidate information for the main generation pipeline to select from. To enable this regional selection effect, we adopt the text-image affine combination module (ACM) (Li et al., 2020), which is able to selectively fuse text-required image information within v into the hidden features h, where h is generated from the given text description S. However, simply fusing the image features v into the generation pipeline may introduce constraints on producing diverse and novel synthetic results, because different image information (e.g., objects and visual attributes) in v may be entangled, which means, for example, that if the model only wants to generate one object, the corresponding entangled parts (e.g., objects and attributes) may be produced as well. This may cause an additional generation of text-irrelevant objects and attributes. Thus, to avoid these drawbacks, inspired by the study of Karras et al. (2019), we use several fully connected layers to disentangle the image features v, getting disentangled image features vD, which allows the model to disconnect the relations between different objects and attributes.
By doing this, the model is able to prevent the constraints introduced by the image features v, and then selectively choose text-required image information within vD, where this information is effectively disentangled without a strong connection. Why does the generator with image features work better? Ideally, the generator produces a sample, e.g., an image, from a latent code, and the distribution of these samples should be indistinguishable from the training distribution, where the training distribution is actually drawn from the real samples in the training dataset. Based on this, incorporating image features from real images in training dataest into the generator allows the generator to directly draw cues of the desired distribution that it eventually needs to generate. Besides, the global feature vG and disentangled image features vD can provide basic information of target results in advance, and also work as candidate information, allowing the model to selectively choose text-required information without generating it by the model itself, and thus easing the whole generation process. To some extent, the global feature vG can be seen as the meta-data of target images, which may contain information about what kinds of objects to generate, e.g., zebra or bus, and vD is able to provides basic information of objects, e.g., the spatial structure like four legs and one head for the zebra and the rectangle shape for the bus. 5.2 DISCRIMINATOR WITH CONTENT INFORMATION To further improve the discriminator to make a more reliable prediction, with respect to both visual appearances and geometric structure, we propose to incorporate the content information into it. This is mainly because, in a deep convolution neural network, when the network goes deeper, the less content details are preserved, including the exact shape of objects (Gatys et al., 2016; Johnson et al., 2016). We think the loss of content details may prevent the discriminator to provide finegrained shape-quality-feedback to the genera- tor, which may cause the difficulty for the generator to produce realistic geometric structure. Also, Zhou et al. (2014) showed that the empirical receptive field of a deep convolution neural network is much smaller than the theoretical one especially on deep layers. This means, using convolution operators with a local receptive field only, the network may fail to capture the spatial structure of objects when the size of objects exceeds the receptive field. To incorporate the content details, we propose to generate a series of image content features, {a128, a64, a32, . . . , a4}, by aggregating different image regions via applying pooling operators on the given real or fake features. The size of these content features is from a128 ∈ RC×128×128 to a4 ∈ RC×4×4, where C represents the number of channels, and the width and the height of the next image content features are 1/2 the previous one. Thus, the given image is pooled into representations for different regions, from fine- (a128) to coarse-scale (a4), which is able to preserve content information of different subregions, such as the spatial structure of objects. Then, these features are concatenated with the corresponding hidden features on the channel-wise direction, incorporating the content information into the discriminator. The number of different-scale content features can be modified, which is dependent on the size of given images. 
These features aggregate different image subregions by repetitively adopting fixed-size pooling kernels with a small stride. Thus, these content features maintain a reasonable small gap for image information. For the type of pooling operation between max and average, we perform comparison studies to show the difference in Sec. 6.2. Why does the discriminator with content information work better? Basically, the discriminator in a generative adversarial network is simply a classifier (Goodfellow et al., 2014). It tries to distinguish real data from the data created by the generator (note that in our method, we implement the Minmax loss in the loss function, instead of the Wasserstein loss (Arjovsky et al., 2017)). Also, the implementation of content information has shown its great effectiveness on classification (Lazebnik et al., 2006; He et al., 2015) and semantic segmentation (Liu et al., 2015; Zhao et al., 2017). Based on this, incorporating the content information into the discriminator is helpful, allowing the discriminator to make a more reliable prediction on complex datasets, especially for the datasets with complex image scenery settings, such as COCO. 5.3 TRAINING To train the network, we follow (Li et al., 2020) and adopt adversarial training. There are three stages in the model, and each stage has a generator network and a discriminator network. The generator and discriminator are trained alternatively by minimizing the generator loss LG and discriminator loss LD. Please see the supplementary material for more details about training objectives. We only highlight some training differences compared with Li et al. (2020). Generator objective. The objective functions to train the generator are similar as in (Li et al., 2020), but, differently, the inputs for the generator are a pair of (S, v) and a noise z, denoted by Gi(z, S, v), where i indicates the stage number. Discriminator objective. To improve the convergence of our GAN-based generation model, the R1 regularization (Mescheder et al., 2018) is adopted in the discriminator: R1(ψ) := γ 2 EpD(x) [ ‖5Dψ(x)‖2 ] , (6) where ψ represents parameter values of the discriminator. 6 EXPERIMENTS To verify the effectiveness of our proposed method in realistic image generation from text descriptions, we conduct extensive experiments on the CUB bird (Wah et al., 2011) dataset and more complex COCO (Lin et al., 2014) dataset, where COCO contains multiple objects with diverse backgrounds. Evaluation metrics. We adopt the Fréchet inception distance (FID) (Heusel et al., 2017) as the primary metric to quantitatively evaluate the image quality and diversity. In our experiments, we use 30K synthetic images vs. 30K real test images to calculate the FID value. However, as FID cannot reflect the relevance between an image and a text description, we use the R-precision (Xu et al., 2018) to measure the correlation between a generated image and its corresponding text. Human evaluation. To better verify the performance of our proposed method, we conducted a user study between current state-of-the-art method DF-GAN (Tao et al., 2020) and ours on CUB and COCO. We randomly selected 100 text descriptions from the test dataset. Then, we asked 5 workers to compare the results after looking at the output images and given text descriptions based on two criteria: (1) alignment: whether the synthetic image is semantically aligned with the given description, and (2) realism: whether the synthetic image looks realistic, shown in Tables 1 and 2. 
Please see supplementary material for more details about the human evaluation. Implementation. There are three stages in the model, and each stage has a generator network and a discriminator network. The number of stages can be modified, which depends on the resolution of the output image. We utilize a deep neural network layer relu5 3 of a pre-trained VGG-16 to extract image features v, which is able to filter content details in I and keep more semantic information. In the discriminator, the number of different-scale image content features can be modified, which is related to the size of the given image. A same-size pooling kernel with a small stride (stride = 2) is repeatedly implemented on the image features, to maximize the preservation of the content information. For the type of pooling operation, average pooling is adopted. For the matching algorithms, word image matching with reweighting based on importance is adopted. The resolution of synthetic results is 256× 256. Our method and its variants are trained on a single Quadro RTX 6000 GPU, using the Adam optimizer (Kingma & Ba, 2014) with the learning rate 0.0002. The hyperparameter λ is set to 5. We preprocess datasets according to the method used in (Xu et al., 2018). No attention module is implemented in the whole architecture. 6.1 COMPARISON WITH OTHER APPROACHES Quantitative comparison. Quantitative results are shown in Tables 1 and 2. As we can see, compared to other approaches, our method achieves better FID and R-precision scores on both datasets, and even has a better performance than OP-GAN, where OP-GAN adopts bounding boxes. This indicates that (1) our method can produce more realistic images from given text descriptions, in terms of image quality and diversity, and (2) synthetic results produced by our method are more semantically aligned with the given text descriptions. Besides, in human evaluation, our method achieves better alignment and realism scores, compared with DF-GAN, which indicates that our results are most preferred by workers, which further verifies the better performance of our method, with respect to semantic alignment and image realism. Qualitative comparison. In Fig. 5, we present synthetic examples produced by our method at 256 × 256, along with the corresponding retrieved images that provide image features. As we can see, our method is able to produce highquality results on CUB and COCO, with respect to realistic appearances and geometric structure, and also semantically matching the given text descriptions. Besides, the synthetic results are different from the retrieved image features, which indicates there is no significant copy-and-paste problem in our method. Diversity evaluation. To further evaluate the diversity of our method, we fix the given text description and the corresponding retrieved image features, and only change the given noise z to generate output images, shown in Fig. 7. When we fix the sentence and image features and only change the noise, our method can generate obviously different images, but they still semantically match the given sentence and also make use information from the image features. More evaluations are shown in the supplementary material. 6.2 COMPONENT ANALYSIS Effectiveness of the image features. To better understand the effectiveness of image features in the generator, we conduct an ablation study shown in Table 3. 
Without image features, the model “Ours w/o Feature” achieves worse quantitative results on both FID and R-precision compared with the baseline, which verifies the effectiveness of image features on high-quality image generation. Interestingly, without image features, even our method becomes a pure text-to-image generation method, similar to other baselines, but the FID of “Ours w/o Feature” is still competitive with other baselines. This indicate that even without the image features fed into our method, our method can still generate better synthetic results, with respect to image quality and diversity. We think this is mainly because with the help of content information, our better discriminator is able to make a more reliable prediction on complex datasets, which in turn encourages the generator to produce better synthetic images. Effectiveness of the disentanglement. Here, we show the effectiveness of the fully connected layers applied on the image features v. Interestingly, from Table 3, the “model w/o Disen.” achieves better FID and R-precision compared with the baseline. This is likely because the model may suffer from an identity mapping problem. To verify this identity mapping problem, we conduct another experiment, where we feed mismatched sentence and image pairs into the network without using search algorithms, denoted “model w/o Disen.*”. As we can see, on mismatched pairs, although FID is still low, the R-precision degrades significantly. Effectiveness of the content information. To verify the effectiveness of the content information adopted in the discriminator, we conduct an ablation study, shown in Table 3. As we can see, FID and R-precision degrade when the discriminator without adopting the content information. This may indicate that the content information can effectively strengthens the differentiation abilities of the discriminator. Then, the improved discriminator is able to provide the generator with fine-grained training feedback, regarding to geometric structure, thus facilitating training a better generator to produce higher-quality synthetic results. Comparison between different pooling types. Here, we conduct a comparison study on different pooling types (i.e., max and average) in Table 3. As we can see, the model with the average pooling works better than max pooling. We think that this is likely because max pooling fails to capture the contextual information between neighboring pixels, because it only picks the maximum value among a region of pixels, while average pooling calculates the average value between them. Effectiveness of the regularization. We evaluate the effectiveness of the adopted regularization in the discriminator. From Table 3, the model without the regularization has worse quantitative results, compared with the full model. We think that this is because the regularization effectively improves GAN convergence by preventing the generator from training on junk feedback, once the discriminator cannot easily tell the difference between real and fake. 7 CONCLUSION We have introduced a memory-driven semi-parametric approach to text-to-image generation, which utilizes large datasets of images at inference time. Also, an alternative architecture is proposed for both the generator and the discriminator. Extensive experimental results on two datasets demonstrate the effectiveness of feeding retrieved image features into the generator and incorporating content information into the discriminator. 
8 ETHICS STATEMENT

All datasets and baselines used in the paper are public with corresponding citations. Our research mainly explores the interaction between different modal features, and aims to achieve an effective transformation from one domain to the other, which should not carry significant potential for harm, conflicts of interest, or sponsorship concerns.

9 REPRODUCIBILITY STATEMENT

To reproduce our results, we include the details of the datasets we used in our paper (see Sec. D). In the implementation section (see Sec. 6), we show more details on our network, including how to extract image features and how to generate the content information used in the discriminator. We also include the values of the hyperparameters and the kinds of devices that we used to train our network. Sec. 5.3 and Sec. B show the objective functions used to train our network. Also, all data and baselines used in our paper are public with corresponding citations. We will release our code after the conference.

A ARCHITECTURE

Here we show details about the network architectures for the components of our model.

A.1 TEXT ENCODER

The text encoder used in our method is a pretrained bidirectional LSTM (Xu et al., 2018), which is trained together with an image encoder Inception-v3 (Szegedy et al., 2016), maximizing the cosine similarity between text features and the corresponding image features. The text features are encoded from a given text description using the text encoder, and the image features are extracted from the corresponding matched image.

A.2 IMAGE ENCODER

The image encoder used in our main architecture is a VGG-16 (Simonyan & Zisserman, 2014) network, pretrained on ImageNet (Russakovsky et al., 2015). The relu5_3 layer is used to extract image features. Thus, the image features contain more semantic information than content details.

A.3 TEXT-IMAGE AFFINE COMBINATION MODULE

To better fuse different-modal text and image features, and also to enable a regional selection effect, we adopt the text-image affine combination module (Li et al., 2020), shown in Fig. 8. The affine combination module takes two inputs: (1) the hidden features h ∈ R^{C×H×W} from the given text description or the intermediate hidden representation between two stages, where C is the number of channels, H is the height, and W is the width of the feature map, and (2) the corresponding disentangled image features v_D ∈ R^{C×H×W}, obtained by applying fully connected layers on the image features. By applying two convolutional layers, the disentangled image features v_D are converted into trainable weights W(v_D) ∈ R^{C×H×W} and trainable biases b(v_D) ∈ R^{C×H×W}. Then, the fused feature h′ ∈ R^{C×H×W} is generated by

h′ = h ⊙ W(v_D) + b(v_D),   (7)

where W and b represent the functions that convert the image features v_D into the weights W(v_D) and biases b(v_D), and ⊙ denotes the Hadamard element-wise product.

A.4 REWEIGHTING IMAGE FEATURES BASED ON IMPORTANCE

Here, we show how to reweight image features based on their importance, mentioned in Sec. 4.2.4. First, during training, we use convolutional layers to remap the image features, and then reshape them into v ∈ R^{D×(H∗W)}. To calculate the importance λ of each spatial location in the image features, we apply λ = Softmax(v^T v), where λ ∈ R^{(H∗W)×(H∗W)} and each element in λ represents the correlation between different spatial locations. Finally, we reweight the image features based on importance by computing vλ.
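To make Eq. (7) and the reweighting step above concrete, the following is a minimal PyTorch sketch of the affine combination module and the importance reweighting. The convolution kernel sizes and the two-layer structure are assumptions made for illustration, not details taken from a released implementation.

```python
import torch
import torch.nn as nn

class AffineCombine(nn.Module):
    """Sketch of the text-image affine combination module (Eq. 7):
    h' = h ⊙ W(v_D) + b(v_D)."""
    def __init__(self, channels):
        super().__init__()
        # two convolutional layers turn the disentangled image features into
        # spatial weights and biases with the same shape as the hidden features
        self.to_weight = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.to_bias = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, h, v_d):
        # h, v_d: (B, C, H, W); Hadamard product of h with the weights, plus the biases
        return h * self.to_weight(v_d) + self.to_bias(v_d)

def reweight_by_importance(v):
    """Sketch of Sec. A.4: v has shape (D, H*W); lambda = softmax(v^T v)
    captures correlations between spatial locations, and v @ lambda is the
    importance-reweighted feature map."""
    lam = torch.softmax(v.t() @ v, dim=-1)   # (H*W, H*W)
    return v @ lam                           # (D, H*W)
```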
B OBJECTIVE FUNCTIONS

Here we show the complete objective functions for training our method. The discriminator and generator in our model are trained alternately by minimizing the generator loss L_G and the discriminator loss L_D.

B.1 GENERATOR OBJECTIVE

The generator objective for training the generator at stage i contains an unconditional adversarial loss, a conditional adversarial loss, and a text-image matching loss L_DAMSM (Xu et al., 2018):

L_{G_i} = \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{z\sim P_z,\, v\sim P_{data}}\big[\log\big(D_i(G_i(z, S, v))\big)\big]}_{\text{unconditional adversarial loss}} \;\underbrace{-\tfrac{1}{2}\,\mathbb{E}_{z\sim P_z,\, v\sim P_{data}}\big[\log\big(D_i(G_i(z, S, v), S)\big)\big]}_{\text{conditional adversarial loss}} \;+\; \lambda L_{DAMSM},   (8)

where G_i and D_i represent the corresponding generator network and discriminator network at stage i, respectively, S is the text description, v is the image features extracted from the corresponding real image I that correctly semantically matches S, where I is sampled from the true distribution P_data, and z is a noise vector drawn from the Gaussian distribution P_z. Thus, the complete objective function for training the generator networks is:

L_G = \sum_{i=1}^{K} L_{G_i},   (9)

where K is the total number of stages in the network.

B.2 DISCRIMINATOR OBJECTIVE

The discriminator objective for training the discriminator at stage i contains an unconditional adversarial loss and a conditional adversarial loss:

L_{D_i} = \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{I_i\sim P_{data}}\big[\log D_i(I_i)\big] - \tfrac{1}{2}\,\mathbb{E}_{z\sim P_z}\big[\log\big(1 - D_i(G_i(z, S, v))\big)\big]}_{\text{unconditional adversarial loss}} \;\underbrace{-\tfrac{1}{2}\,\mathbb{E}_{I_i\sim P_{data}}\big[\log D_i(I_i, S)\big] - \tfrac{1}{2}\,\mathbb{E}_{z\sim P_z}\big[\log\big(1 - D_i(G_i(z, S, v), S)\big)\big]}_{\text{conditional adversarial loss}},   (10)

where I_i denotes the real image sampled from the true image distribution P_data at stage i. Thus, the complete objective function for training the discriminator networks is:

L_D = \sum_{i=1}^{K} L_{D_i} + R_1(\psi),   (11)

where R_1(ψ) is the regularization term described in the paper. This regularization term is derived from zero-centered gradient penalties (Ross & Doshi-Velez, 2017) on local stability, which penalizes the discriminator for deviating from the Nash equilibrium. This ensures that when a GAN-based model converges (i.e., the generator produces the true data distribution), the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.

C EVALUATION METRICS

In this section, we show more details about the evaluation metrics used in the paper.

C.1 FRÉCHET INCEPTION DISTANCE

The Fréchet inception distance (FID) (Heusel et al., 2017) measures the Fréchet distance between generated image features and real image features, where both features are extracted by an Inception-v3 network (Szegedy et al., 2016) pretrained on ImageNet (Russakovsky et al., 2015). Consequently, a lower FID implies a closer distance between the synthetic image distribution and the real image distribution.

C.2 R-PRECISION

To measure the semantic alignment between the synthetic image and the given text description, the R-precision (Xu et al., 2018) is adopted. The R-precision is calculated by retrieving relevant text descriptions given an image query. To measure the relevance between the text and the image, the cosine similarity between text and image features is adopted. Thus, we compute a global image vector and 100 candidate sentence vectors, where the 100 candidate sentence vectors contain R ground-truth text descriptions that correctly describe the image, and 100 − R randomly chosen mismatched descriptions. For each image query, if a of the top-R ranked retrieved text descriptions are relevant, then the R-precision is a/R.
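The following is a minimal NumPy sketch of the R-precision computation just described; the candidate construction (R ground-truth captions plus 100 − R mismatched ones) follows Sec. C.2, while the feature extraction itself is assumed to have been done elsewhere.

```python
import numpy as np

def r_precision(img_vec, gt_sent_vecs, mismatched_sent_vecs):
    """Rank the ground-truth captions among 100 candidates by cosine similarity
    to the global image vector and return a/R (Sec. C.2)."""
    cands = np.concatenate([gt_sent_vecs, mismatched_sent_vecs], axis=0)  # (100, D)
    r = len(gt_sent_vecs)
    sims = cands @ img_vec / (
        np.linalg.norm(cands, axis=1) * np.linalg.norm(img_vec) + 1e-8)
    top_r = np.argsort(-sims)[:r]
    a = int(np.sum(top_r < r))   # ground-truth captions occupy indices 0..r-1
    return a / r
```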
In the paper, we measure the top-1 R-precision (i.e., R = 1).

D MORE EXPERIMENTS

In this section, we show additional experimental results to further evaluate and verify the performance of our proposed method.

D.1 DATASETS

CUB bird (Wah et al., 2011) contains 8,855 training images and 2,933 test images, and each image has 10 corresponding text descriptions. COCO (Lin et al., 2014) contains 82,783 training images and 40,504 validation images. Each image has 5 descriptions.

D.2 QUANTITATIVE COMPARISON BETWEEN DIFFERENT ALGORITHMS

Here, we show the quantitative comparison between different matching algorithms in Tables 4 and 5. As we can see, the words-image matching algorithm with reweighting based on importance achieves the best FID and R-psr scores on the CUB and COCO datasets. Therefore, words-image matching with reweighting is adopted in our method.

D.3 DETAILS OF HUMAN EVALUATION

Because the automatic metrics cannot comprehensively evaluate the improvement of our proposed method, we conducted a side-by-side human evaluation study to analyze the improvement. The study compares synthetic images from our method and the current state-of-the-art text-to-image generation method DF-GAN (Tao et al., 2020) on both CUB and COCO, according to (1) alignment and (2) realism. We presented synthetic images from different methods along with the given text descriptions. We randomly switched our method and the baseline and also anonymized them. Then, we asked workers to choose the best images based on the above two criteria. In this study, we randomly chose 100 text descriptions sampled from the test dataset, and then assigned the corresponding synthetic images generated by the different methods to 5 workers to reduce variance.

D.4 QUALITATIVE RESULTS

In Fig. 10, we show more qualitative results generated by our method on the CUB bird dataset, along with the corresponding retrieved images that provide image features. As we can see, our method is able to produce high-quality results on CUB, semantically matching the given text descriptions. Also, the synthetic results look clearly different from the retrieved images, but our method can selectively choose information from the retrieved image to generate better synthetic results.

D.5 DIVERSITY

D.5.1 SSIM

We also compare the Structural Similarity Index (SSIM) score (Hore & Ziou, 2010) between the generated images and the corresponding ground-truth images to evaluate the diversity of our method. SSIM was originally used to measure the recovery result from distorted images. In our case, a higher SSIM means the synthetic and real images are more similar, which indicates that there may exist a copy-and-paste problem and that the network has worse diversity. Based on this, for SSIM, lower is better, which means better diversity. To calculate the SSIM, for the other baseline methods, we evaluate them on the test dataset by calculating the SSIM between each synthetic and ground-truth image pair, and then averaging all scores; for our method, we calculate the SSIM between the synthetic image and the image that provides the image features. As shown in Table 6, our method achieves competitive SSIM scores on both CUB and COCO, compared with other baselines.
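A minimal sketch of this SSIM-based diversity check is given below, using scikit-image; the exact arguments (e.g., channel_axis) depend on the scikit-image version and are an assumption, as is the uint8 image format.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(fake_images, ref_images):
    """Average SSIM between each synthetic image and its reference image
    (Sec. D.5.1); lower values indicate more diverse outputs."""
    scores = []
    for fake, ref in zip(fake_images, ref_images):   # uint8 arrays of shape (H, W, 3)
        scores.append(ssim(fake, ref, channel_axis=-1, data_range=255))
    return float(np.mean(scores))
```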
These competitive SSIM scores indicate that (1) even if our method has image features as image priors, it can still produce diverse synthetic results that are different from the corresponding real images, (2) there is no significant copy-and-paste problem in our method, and (3) our method can effectively disentangle objects and attributes in the given image features, which can then work as candidate information for the main generation pipeline to choose from.

D.5.2 SEMANTIC INFORMATION EXPLORATION

Here, we further verify whether our method suffers from a copy-and-paste problem by exploring whether it can make use of the semantic information contained in the retrieved image features. To verify this, instead of extracting image features from RGB images, we use segmentation masks to provide semantic image features, shown in Fig. 11. As we can see, although no content information is provided in the given segmentation masks, our method is still able to generate realistic images, which indicates that our method makes use of the semantic information contained in the image features, instead of simply copying and pasting the retrieved image features to produce output images. Furthermore, as discussed in Sec. D.7 below, given partially matched text and image features, our method is able to pick the semantic information (e.g., the structure of the train, cat, and bus) and filter the detailed content color information (e.g., yellow and green, brown, and yellow) to generate text-required output images, as shown in Fig. 12.

D.6 EFFECTIVENESS OF IMAGE FEATURES

When no image features are fed into our method, our method becomes a traditional text-to-image generation model, where the inputs are only the natural language description and random noise. As shown in Table 7, “Ours w/o Feature” still has a competitive performance compared with the other baselines, which means that our method can still generate images with good quality and diversity. We think this is mainly because of the powerful discriminator with content information, which is able to provide fine-grained training feedback to the generator, in terms of realistic appearance and geometric structure. Note that the way we block image features to build the model “Ours w/o Feature” is to remove the image features and ACM components from the network, and only keep the new discriminator with content information.

Table 7: Quantitative comparison: Fréchet inception distance (FID) and R-precision (R-psr) of StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), ControlGAN (Li et al., 2019a), DM-GAN (Zhu et al., 2019), OP-GAN (Hinz et al., 2019), and our method on the COCO dataset. “Ours w/o Feature” denotes that our model does not have any image features and just has a similar generation pipeline as other traditional text-to-image generation methods. For FID, lower is better, while for R-psr, higher is better.

Metric      StackGAN++   AttnGAN   ControlGAN   DM-GAN   OP-GAN   Ours w/o Feature
FID         81.59        32.32     33.58        32.64    24.70    22.20
R-prs (%)   71.88        85.47     82.43        88.56    89.01    84.63

D.7 IMAGE GENERATION WITH PARTIAL TEXT-IMAGE MATCHING

Interestingly, when the retrieved image features have good quality (e.g., the desired objects in the image features can provide enough information) but are not perfectly aligned with the given text description, which means that the given text description and the corresponding retrieved image features only partially match in semantic meaning, our method is still able to produce realistic images, shown in Fig. 12.
As we can see, our method is able to generate the desired objects with the required attributes, even if the image features only partially match the given text description. For example, in the provided “train” image features, there is a yellow and green train, but the given description requires a red train. However, our method is still able to generate a realistic train with a red color. Besides, our method can even produce a novel composition, e.g., the sign is flying in the sky. We think that this is mainly because the generator can selectively make use of the information provided by the image features, instead of directly copying and pasting information from them. Also, features and attributes are disentangled in the provided image features, which enables this independent selection without additional generation.

D.8 REGIONAL SELECTION EFFECT

In Fig. 12, we can observe the regional selection effect involved in the generation process. For the train example, our full model is able to selectively keep the relevant information (e.g., the train) and filter the irrelevant contents (e.g., the yellow and green color), so that the required attribute (e.g., the red color) can be generated correctly. This effect is magnified when the given image has multiple objects and the given text only partially describes it, as shown in Fig. 13. There are multiple objects (e.g., vase, flowers, chairs, and window for the top example; three zebras, enclosure, and grass for the bottom one) in the given image features. However, our method only selectively makes use of some of the information (e.g., the shape and texture of the flowers and the zebra) and generates text-required objects without keeping irrelevant contents from the image features (e.g., chair, window, and multiple zebras).

E LIMITATIONS AND FUTURE WORK

Here, we discuss some limitations of the proposed method and also future work. We have observed that our method may fail to produce realistic images when the retrieved image features can only provide limited information, e.g., when the target object is too small in the corresponding real image, or when there are no desired objects in the retrieved image features. As shown in Fig. 14 left, the stop sign, zebra, bus, and train in the corresponding images are too small, which means that the extracted image features can only provide very limited information about the desired objects (zebra, stop sign, bus, and train) to the generation pipeline. Furthermore, when the retrieved image features contain no desired objects, shown in Fig. 14 right, our proposed method may fail to generate high-quality images as well. The absence of desired objects in the retrieved image features is mainly caused by the image preprocessing (e.g., cropping) and also the limitations of the matching algorithms. In such cases, our method behaves more like a pure text-to-image generation method, like the other baselines, because the provided image features cannot supply any useful information. To address these problems, we suggest building a better memory bank with higher-quality image features, and also improving the matching algorithms to find the most compatible image features for a given text description. Besides, our method is a semi-parametric approach, which needs to retrieve image features from the memory bank, so it might slow down inference, compared with purely parametric methods.
To mitigate this, we suggest (1) running the matching algorithms in parallel to speed up the overall inference time, and (2) encouraging users to provide the category of the main object in their text descriptions, so that this category can be used as a key to narrow down the retrieval region.

F ADDITIONAL QUALITATIVE COMPARISON

Here, we show an additional qualitative comparison between the text-to-image generation approaches StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), and DF-GAN (Tao et al., 2020) and our method on the COCO dataset (Lin et al., 2014).

Figure 15: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.

Figure 16: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.

Figure 18: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
1. How does the proposed method directly use training data to extract image features for text-to-image generation? 2. What are the strengths of the proposed method, particularly in its ability to outperform baselines in realism and text alignment? 3. What are some potential weaknesses or limitations of the proposed method, such as time and memory requirements, effectiveness of disentanglement, and word-image matching? 4. How does the proposed method handle issues related to the number of words in test text inputs and training sets? 5. What are some suggestions or potential directions for future research related to this paper's work on semi-parametric text-to-image synthesis?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a semi-parametric method for text-to-image generation. The main idea to directly use the training data to extract image features useful to synthesize novel images from novel text prompts. The proposed method first performs a retrieval step given an input text prompt where the most compatible image features are selected from the training set. Next, the selected image features are given to a generator network as global and disentangled more localized features in order to synthesize the image containing the semantics provided by the text input. The generator network is trained using a content aware discriminator network that uses skip connections to prevent information loss when judging real and fake images. In experiments, the authors show the proposed method outperforms the baselines in FID, R-psr, and a user study testing for realism and text alignment. Review Strengths: Clever direct use of the training data for image generation. Outperforms the baselines in both realism and alignment with text condition. Well written paper. Questions / weaknesses: Time and memory requirements: From the results, we can see that the proposed method in fact generates diverse images that better represent the input text. However, having to search over the entire training data (non-parametric methods) comes at a price. To give a complete picture of the proposed method and to potentially spark followup research, I would suggest the authors to provide time and memory requirements in comparison to the baselines. I believe this is something that the authors need to make sure to mention in the final manuscript. Effectiveness of the disentanglement: The authors check whether the fully connected layers meant to result in disentangled features are doing their job by feeding a mismatched text and image pairs into the network, and conclude that because the R-psr score drops significantly the disentangled features are necessary. I am not sure how this experiment shows this since, as far as I understand, one of the main reasons to use disentangled features is to prevent copy / pasting being done by the generator. If we want to check whether the network is simply learning the copy / paste operation from the non-disentangled features, I would suggest the authors perform a diversity test by providing different noise inputs and checking whether the outputs change. If the network is simply copying the training data as a result of the non-disentangled features, then the outputs should not be diverse. How is the word-image matching exactly done? The authors mention that the final method uses the word-image matching. However, I am not sure how the authors made sure that the image embedding and text embeddings can be compared. The image embedding is extracted with the pre-trained VGG network from Simonyan & Zisserman, 2014. This embedding space was not trained to be aligned with any textual data, and so, I am not sure how the authors are able to make these alignments. Is there some training step where the text embeddings are learned such that they can be aligned with the image features? If so, when is this done? I would appreciate it if the authors can clarify this in the rebuttal or let me know if I am missing something. Section 4.2.3 (Words-Words Matching): This section assumes that the test text input has the same number of words as the text in the training set (w \in R^{NxD} and w_i' \in R^{NxD}). Is this an assumption in this method? 
I would assume that the test text input is free to be any number of words, and the same goes for the training set text. If so, Equation 3 is invalid. Can the authors clarify this in the rebuttal? Fig6 comparisons: In Fig6, we can clearly see the advantage of this method against the baselines. Nevertheless, I think the authors should also provide the training images that were used to synthesize the images highlighted in this Figure. They do show evidence of diversity in the generation in Fig7. However, I feel showing the training images in Fig6 is necessary to show a complete picture of how close these images are to the ones retrieved from the training set. Missing related work: A very influential text-to-image work from Reed et al., 2016 is missing from the related work: Generative Adversarial Text-to-Image Synthesis (https://arxiv.org/pdf/1605.05396v2.pdf) Typo: Page 5, last paragraph, line 5, second word: dataest -> dataset ====================================================================== Suggestions / potentially interesting things to try: One very interesting characteristic of this method is that it models global and disentangled / more localized features. What if you provide mismatching global and disentangled features? Will it make the model, say, generate a tiger with zebra colors? If so, this could open up cool applications for people into art to try.
ICLR
Title Memory-Driven Text-to-Image Generation Abstract We introduce a memory-driven semi-parametric approach to text-to-image generation, which is based on both parametric and non-parametric techniques. The non-parametric component is a memory bank of image features constructed from a training set of images. The parametric component is a generative adversarial network. Given a new text description at inference time, the memory bank is used to selectively retrieve image features that are provided as basic information of target images, which enables the generator to produce realistic synthetic results. We also incorporate the content information into the discriminator, together with semantic features, allowing the discriminator to make a more reliable prediction. Experimental results demonstrate that the proposed memory-driven semi-parametric approach produces more realistic images than purely parametric approaches, in terms of both visual fidelity and text-image semantic consistency. 1 INTRODUCTION How to effectively produce realistic images from given natural language descriptions with semantic alignment has drawn much attention, because of its tremendous potential applications in art, design, and video games, to name a few. Recently, with the vast development of generative adversarial networks (Goodfellow et al., 2014; Gauthier, 2015; Mirza & Osindero, 2014) in realistic image generation, text-to-image generation has made much progress, where the progress has been mainly driven by parametric models — deep networks use their weights to represent all data concerning realistic appearance (Zhang et al., 2017; 2018; Xu et al., 2018; Li et al., 2019a; Qiao et al., 2019b; Zhu et al., 2019; Hinz et al., 2019; Cheng et al., 2020; Qiao et al., 2019a). Although these approaches can produce realistic results on well-structured datasets, containing a specific class of objects at the image center with fine-grained descriptions, such as birds (Wah et al., 2011) and flowers (Nilsback & Zisserman, 2008), there is still much room to improve. Besides, they usually fail on more complex datasets, which contain multiple objects with diverse backgrounds, e.g., COCO (Lin et al., 2014). This is likely because, for COCO, the generation process involves a large variety in objects (e.g., pose, shape, and location), backgrounds, and scenery settings. Thus, it is much easier for these approaches to only produce text-semantic-matched appearances instead of capturing difficult geometric structure. As shown in Fig. 1, current approaches are only capable of producing required appearances semantically matching the given descriptions (e.g., white and black stripes for zebra), but objects are unrealistic with distorted shape. Furthermore, these approaches are in contrast to earlier works on image synthesis, which were based on non-parametric techniques that could make use of large datasets of images at inference time (Chen et al., 2009; Hays & Efros, 2007; Isola & Liu, 2013; Zhu et al., 2015; Lalonde et al., 2007). Although parametric approaches can enable the benefits of end-to-end training of highly expressive models, they lose a strength of earlier non-parametric techniques, as they fail to make use of large datasets of images at inference time. In this paper, we introduce a memory-driven semi-parametric approach to text-to-image generation, where the approach takes the advantage of both parametric and non-parametric techniques. 
The non-parametric component is a memory bank of disentangled image features constructed from a training set of real images. The parametric component is a generative adversarial network. Given a novel text description at inference time, the memory bank is used to selectively retrieve compatible image features that are provided as basic information, allowing the generator to directly draw clues of target images, and thus to produce realistic synthetic results. Besides, to further improve the differentiation ability of the discriminator, we incorporate the content information into it. This is because, to make a prediction, the discriminator usually relies on semantic features, extracted from a given image using a series of convolution operators with local receptive fields.

Figure 1: Examples of text-to-image generation on COCO. Current approaches only generate low-quality images with unrealistic objects. In contrast, our method can produce realistic images, in terms of both visual appearances and geometric structure.

However, when the discriminator goes deeper, fewer content details are preserved, including the exact geometric structure information (Gatys et al., 2016; Johnson et al., 2016). We think that the loss of content details is likely one of the reasons why current approaches fail to produce realistic shapes for objects on difficult datasets, such as COCO. Thus, the adoption of content information allows the model to exploit the capability of content details and then improves the discriminator to make the final prediction more reliable. Finally, an extensive experimental analysis is performed, which demonstrates that our memory-driven semi-parametric method can generate more realistic images from natural language, compared with purely parametric models, in terms of both visual appearances and geometric structure.

2 RELATED WORK

Text-to-image generation has made much progress because of the success of generative adversarial networks (GANs) (Goodfellow et al., 2014) in realistic image generation. Zhang et al. (2017) proposed a multi-stage architecture to generate realistic images progressively. Then, attention-based methods (Xu et al., 2018; Li et al., 2019a) were proposed to further improve the results. Zhu et al. (2019) introduced a dynamic memory module to refine image contents. Qiao et al. (2019a) proposed text-visual co-embeddings to replace the input text with corresponding visual features. Cheng et al. (2020) introduced a rich-feature-generating text-to-image synthesis. Besides, extra information has been adopted in the text-to-image generation process, such as scene graphs (Johnson et al., 2018; Ashual & Wolf, 2019) and layout (e.g., bounding boxes or segmentation masks) (Hong et al., 2018; Li et al., 2019b; Hinz et al., 2019). However, none of the above approaches adopt non-parametric techniques to make use of large datasets of images at inference time, nor do they feed content information into the discriminator to enable finer-grained training feedback. Also, our method does not make use of any additional semantic information, e.g., scene graphs and layout.
Text-guided image manipulation is related to our work, where the task also takes natural language descriptions and real images as inputs, but it aims to modify the images using given texts to achieve semantic consistency (Nam et al., 2018; Dong et al., 2017; Li et al., 2020). Differently from it, our work focuses mainly on generating novel images, instead of editing some attributes of the given images. Also, the real images in the text-guided image manipulation task behave as a condition, where the synthetic results should reconstruct all text-irrelevant attributes from the given real images. Differently, the real images in our work are mainly to provide the generator with additional cues of target images, in order to ease the whole generation process. Memory Bank. Qi et al. (2018) introduced a semi-parametric approach to realistic image generation from semantic layouts. Li et al. (2019c) used the stored image crops to determine the appearance of objects. Tseng et al. (2020) used a differentiable retrieval process to select mutually compatible image patches. Li et al. (2021) studied conditional image extrapolation to synthesize new images guided by the input structured text. Differently, instead of using a concise semantic representation (a scene graph as input), which is less user-friendly and has limited context of general descriptions, we use natural language descriptions as input. Also, Liang et al. (2020) designed a memory structure to parse the textual content. Differently, our method simply uses a deep network to extract image features, instead of involving complex image preprocessing to build a memory bank. 3 OVERVIEW Given a sentence S, we aim to generate a fake image I ′ that is semantically aligned with the given S. The proposed model is trained on a set of paired text description and corresponding real image features v, denoted by (S, v). This set is also used to generate a memory bankM of disentangled image features v for different categories, where image features are extracted from the training image by using a pretrained VGG16 network (Simonyan & Zisserman, 2014) (see Fig. 2). Each element in M is an image feature extracted from a training image, associated with corresponding semantically-matched text descriptions from the training datasets. At inference time, we are given a novel text description S that was not seen during training. Then, S is used to retrieve semantically-aligned image features from the memory bank M , based on designed matching algorithms (more details are shown in Sec. 4.2). Next, the retrieved image features v, together with the given text description S, are fed into the generator to synthesize the output image (see Fig. 3). The generator utilizes the information from the image features, fuses them with hidden features produced from the given text description S, and generate realistic images semantically-aligned with S. The architecture and training of the network are described in Sec. 5. To incorporate image features into the generation pipeline, we borrow from the text-guided image manipulation literature (Li et al., 2020), and redesign the architecture to make full use of the given image features in text-to-image generation, shown in Fig. 3. 4 MEMORY BANK 4.1 REPRESENTATION The memory bank M is a set of image features vi extracted from training set images, and each image features vi is associated with matched text descriptions that are provided in the dataset, e.g., in COCO, each image has five matched text descriptions. 
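As an illustration of Sec. 4.1, the following is a minimal sketch of how such a memory bank could be assembled; the extract_features and encode_sentence helpers and the training_set iterable are hypothetical placeholders for the VGG-16 and text-encoder components described elsewhere in the paper.

```python
import torch

def build_memory_bank(training_set, extract_features, encode_sentence):
    """Sketch of the memory bank M (Sec. 4.1): each entry stores the image
    features of one training image together with its matched captions, which
    later serve as retrieval keys."""
    memory_bank = []
    with torch.no_grad():
        for image, captions in training_set:  # hypothetical (image, list[str]) pairs
            entry = {
                "features": extract_features(image),   # (D, H, W) relu5_3-style feature map
                "captions": captions,                   # e.g., five sentences per COCO image
                "sentence_features": [encode_sentence(c) for c in captions],  # (D,) each
            }
            memory_bank.append(entry)
    return memory_bank
```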
These descriptions are used in the matching algorithms, allowing a given text to find the most compatible image features at inference time.

4.2 RETRIEVAL

Given a new text description, in order to effectively retrieve the most compatible image features from the memory bank M, we have designed several matching algorithms and also explored the effectiveness of each algorithm. A detailed comparison between the different algorithms is shown in the supplementary material.

4.2.1 SENTENCE-SENTENCE MATCHING

Here, we use the image features' associated sentences S′_i as keys, to find the most compatible image features v_i for a given unseen sentence S at inference time. First, we feed both S and S′_i into a pretrained text encoder (Xu et al., 2018) to produce sentence features s ∈ R^{D×1} and s′_i ∈ R^{D×1}, respectively, where D is the feature dimension. Then, for the given sentence S, we select the most compatible image features v_i in M based on a cosine similarity score:

\alpha_i = \frac{s^{\top} s'_i}{\|s\|\,\|s'_i\|}.   (1)

Finally, we fetch the image features v_i whose key S′_i has the highest similarity score α_i.

4.2.2 SENTENCE-IMAGE MATCHING

Instead of using the associated sentences as keys, we can calculate the similarity between the sentence feature s ∈ R^{D×1} and the image features v_i ∈ R^{D×H×W} stored in M, where D is the number of channels, H is the height, and W is the width. To directly calculate the similarity, we first average the image features along the spatial dimensions to get a global image feature v_{G_i} ∈ R^{D×1}. So, for a given unseen S, we select the most compatible image features v_i in M based on β_i:

\beta_i = \frac{s^{\top} v_{G_i}}{\|s\|\,\|v_{G_i}\|}.   (2)

4.2.3 WORDS-WORDS MATCHING

Moreover, we can use a more fine-grained text representation (namely, word embeddings) as keys to find the most compatible image features v_i stored in M for a given unseen sentence S. At inference time, we first feed both S and S′_i into a pretrained text encoder (Xu et al., 2018) to generate word embeddings w ∈ R^{N×D} and w′_i ∈ R^{N×D}, respectively, where N is the number of words and D is the feature dimension. Then, we reshape both w and w′_i to R^{(D∗N)×1}. So, to find the most compatible image features, the cosine similarity score can be defined as follows:

\delta_i = \frac{w^{\top} w'_i}{\|w\|\,\|w'_i\|}.   (3)

However, different words in a sentence are not equally important. Thus, if we simply combine all words of a sentence together to calculate the similarity (as above), the similarity score may be less precise. To solve this issue, during training, we reweight each word in a sentence by its importance. We first use convolutional layers to remap the word embeddings, and then calculate the importance λ (and λ′_i) of each word in the word embeddings w ∈ R^{N×D} (and w′_i ∈ R^{N×D}), denoted by λ = Softmax(w w^{\top}) and λ′_i = Softmax(w′_i w′^{\top}_i), respectively. Each element in λ represents the correlation between different words in a sentence. Then, λw (and λ′_i w′_i) reweights the word embeddings for each word based on its correlation with the other words. Using these reweighted word embeddings, we can achieve a more precise similarity calculation between two word embeddings. At inference time, after we reshape both λw and λ′_i w′_i to R^{(D∗N)×1}, the new equation is defined as follows:

\delta_i = \frac{(\lambda w)^{\top} (\lambda'_i w'_i)}{\|\lambda w\|\,\|\lambda'_i w'_i\|}.   (4)
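A minimal PyTorch sketch of the sentence-level retrieval variants (Eqs. 1–2) over a memory bank like the one sketched earlier is given below; the entry keys and the choice to score against either the stored sentence feature or the spatially averaged image feature are assumptions used for illustration.

```python
import torch
import torch.nn.functional as F

def retrieve(memory_bank, s, use_image_key=False):
    """Return the stored image features whose key is most similar to the query
    sentence feature s (shape (D,)), using cosine similarity as in Eqs. 1-2."""
    best_score, best_features = float("-inf"), None
    for entry in memory_bank:
        if use_image_key:
            # Eq. 2 style: compare against the spatially averaged image feature v_G
            key = entry["features"].mean(dim=(1, 2))      # (D,)
        else:
            # Eq. 1 style: compare against a stored sentence feature of the entry
            key = entry["sentence_features"][0]            # (D,)
        score = F.cosine_similarity(s, key, dim=0).item()
        if score > best_score:
            best_score, best_features = score, entry["features"]
    return best_features
```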
4.2.4 WORDS-IMAGE MATCHING

Furthermore, we can use the word embeddings w ∈ R^{N×D} and the image features v_i ∈ R^{D×H×W} to directly calculate the similarity score between them. To achieve this, we first reshape the image features to v_i ∈ R^{D×(H∗W)}. Then, a correlation matrix c_i ∈ R^{N×(H∗W)} can be obtained via c_i = Softmax(w v_i), where each element in c_i represents the correlation between a word and an image spatial location. Then, a reweighted word embedding \tilde{w}_i ∈ R^{N×D} containing image information can be obtained by \tilde{w}_i = c_i v_i^{\top}. So, to find the most compatible image features, we first reshape both w and \tilde{w}_i to R^{(D∗N)×1}, and the similarity score is defined as follows:

\gamma_i = \frac{w^{\top} \tilde{w}_i}{\|w\|\,\|\tilde{w}_i\|}.   (5)

Similarly, we can also reweight the word embeddings w and image features v_i based on their importance (see Sec. 4.2.3) to achieve a more precise calculation.

5 GENERATIVE ADVERSARIAL NETWORKS

To generate high-quality synthetic images from natural language descriptions, we propose to incorporate the image features v, along with the given sentence S, into the generator. To incorporate image features into the generation pipeline, we borrow from the text-guided image manipulation literature (Li et al., 2020), and redesign the architecture to make full use of the given image features in text-to-image generation, shown in Fig. 3.

5.1 GENERATOR WITH IMAGE FEATURES

To avoid the identity mapping and also to make full use of the image features v in the generator, we first average v on each channel to filter out potential content details (e.g., the overall spatial structure) contained in v, getting a global image feature v_G, where v_G only keeps basic information of the corresponding real image I, serving as a basic image prior. By doing this, the model can effectively avoid copying and pasting from I, and largely ensure the diversity of output results, especially at the first stage. This is because the following stages focus more on refining the basic images produced by the first stage, by adding more details and improving their resolution, shown in Fig. 3. However, if only the global image feature v_G is fed at the beginning of the network, the model may fail to fully utilize the cues contained in the image features v. Thus, we further incorporate the image features v at each stage of the network. The reason to feed the image features v rather than the global feature v_G at the following stages is that v contains more information about the desired output image, such as image contents and the geometric structure of objects, where these details can work as candidate information for the main generation pipeline to select from. To enable this regional selection effect, we adopt the text-image affine combination module (ACM) (Li et al., 2020), which is able to selectively fuse text-required image information within v into the hidden features h, where h is generated from the given text description S. However, simply fusing the image features v into the generation pipeline may introduce constraints on producing diverse and novel synthetic results, because different image information (e.g., objects and visual attributes) in v may be entangled, which means, for example, that if the model only wants to generate one object, the corresponding entangled parts (e.g., objects and attributes) may be produced as well. This may cause an additional generation of text-irrelevant objects and attributes. Thus, to avoid these drawbacks, inspired by the study (Karras et al., 2019), we use several fully connected layers to disentangle the image features v, getting disentangled image features v_D, which allows the model to disconnect the relations between different objects and attributes.
By doing this, the model is able to avoid the constraints introduced by the image features v, and then selectively choose text-required image information within v_D, where this information is effectively disentangled without strong connections.

Why does the generator with image features work better? Ideally, the generator produces a sample, e.g., an image, from a latent code, and the distribution of these samples should be indistinguishable from the training distribution, where the training distribution is actually drawn from the real samples in the training dataset. Based on this, incorporating image features from real images in the training dataset into the generator allows the generator to directly draw cues of the desired distribution that it eventually needs to generate. Besides, the global feature v_G and the disentangled image features v_D can provide basic information of the target results in advance, and also work as candidate information, allowing the model to selectively choose text-required information without generating it by itself, thus easing the whole generation process. To some extent, the global feature v_G can be seen as the meta-data of the target images, which may contain information about what kinds of objects to generate, e.g., zebra or bus, and v_D is able to provide basic information of objects, e.g., the spatial structure, like four legs and one head for the zebra and the rectangular shape for the bus.

5.2 DISCRIMINATOR WITH CONTENT INFORMATION

To further improve the discriminator to make a more reliable prediction, with respect to both visual appearances and geometric structure, we propose to incorporate the content information into it. This is mainly because, in a deep convolutional neural network, when the network goes deeper, fewer content details are preserved, including the exact shape of objects (Gatys et al., 2016; Johnson et al., 2016). We think the loss of content details may prevent the discriminator from providing fine-grained shape-quality feedback to the generator, which may make it difficult for the generator to produce realistic geometric structure. Also, Zhou et al. (2014) showed that the empirical receptive field of a deep convolutional neural network is much smaller than the theoretical one, especially in deep layers. This means that, using convolution operators with a local receptive field only, the network may fail to capture the spatial structure of objects when the size of the objects exceeds the receptive field. To incorporate the content details, we propose to generate a series of image content features, {a_128, a_64, a_32, ..., a_4}, by aggregating different image regions via pooling operators applied on the given real or fake features. The size of these content features ranges from a_128 ∈ R^{C×128×128} to a_4 ∈ R^{C×4×4}, where C represents the number of channels, and the width and height of the next image content features are half those of the previous one. Thus, the given image is pooled into representations for different regions, from fine- (a_128) to coarse-scale (a_4), which is able to preserve content information of different subregions, such as the spatial structure of objects. Then, these features are concatenated with the corresponding hidden features along the channel dimension, incorporating the content information into the discriminator. The number of different-scale content features can be modified, which depends on the size of the given images. These features aggregate different image subregions by repetitively applying a fixed-size pooling kernel with a small stride, so the content features keep the information gap between neighboring scales reasonably small.
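A minimal PyTorch sketch of these multi-scale content features is shown below; the kernel size of 4 with stride 2 and the 256 × 256 input resolution are assumptions, since the text only specifies a same-size kernel with stride 2 and average pooling.

```python
import torch
import torch.nn.functional as F

def content_pyramid(feat, scales=(128, 64, 32, 16, 8, 4)):
    """Sketch of the content features {a_128, ..., a_4} (Sec. 5.2): average
    pooling with a small stride is applied repeatedly, halving the spatial
    resolution each time; each level is later concatenated channel-wise with
    the discriminator's hidden features of the same resolution."""
    pyramid = {}
    x = feat                      # (B, C, 256, 256) real or fake image features
    for s in scales:
        x = F.avg_pool2d(x, kernel_size=4, stride=2, padding=1)  # halves H and W
        pyramid[s] = x            # e.g., pyramid[128] has shape (B, C, 128, 128)
    return pyramid
```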
For the type of pooling operation, max versus average, we perform a comparison study to show the difference in Sec. 6.2.

Why does the discriminator with content information work better? Basically, the discriminator in a generative adversarial network is simply a classifier (Goodfellow et al., 2014). It tries to distinguish real data from the data created by the generator (note that in our method, we implement the minimax loss in the loss function, instead of the Wasserstein loss (Arjovsky et al., 2017)). Also, the use of content information has shown great effectiveness in classification (Lazebnik et al., 2006; He et al., 2015) and semantic segmentation (Liu et al., 2015; Zhao et al., 2017). Based on this, incorporating the content information into the discriminator is helpful, allowing the discriminator to make a more reliable prediction on complex datasets, especially datasets with complex image scenery settings, such as COCO.

5.3 TRAINING

To train the network, we follow (Li et al., 2020) and adopt adversarial training. There are three stages in the model, and each stage has a generator network and a discriminator network. The generator and discriminator are trained alternately by minimizing the generator loss L_G and the discriminator loss L_D. Please see the supplementary material for more details about the training objectives. We only highlight some training differences compared with Li et al. (2020).

Generator objective. The objective functions to train the generator are similar to those in (Li et al., 2020), but, differently, the inputs for the generator are a pair (S, v) and a noise z, denoted by G_i(z, S, v), where i indicates the stage number.

Discriminator objective. To improve the convergence of our GAN-based generation model, the R1 regularization (Mescheder et al., 2018) is adopted in the discriminator:

R_1(\psi) := \frac{\gamma}{2}\,\mathbb{E}_{p_D(x)}\left[\|\nabla D_{\psi}(x)\|^2\right],   (6)

where ψ represents the parameter values of the discriminator.

6 EXPERIMENTS

To verify the effectiveness of our proposed method in realistic image generation from text descriptions, we conduct extensive experiments on the CUB bird (Wah et al., 2011) dataset and the more complex COCO (Lin et al., 2014) dataset, where COCO contains multiple objects with diverse backgrounds.

Evaluation metrics. We adopt the Fréchet inception distance (FID) (Heusel et al., 2017) as the primary metric to quantitatively evaluate the image quality and diversity. In our experiments, we use 30K synthetic images vs. 30K real test images to calculate the FID value. However, as FID cannot reflect the relevance between an image and a text description, we use the R-precision (Xu et al., 2018) to measure the correlation between a generated image and its corresponding text.

Human evaluation. To better verify the performance of our proposed method, we conducted a user study between the current state-of-the-art method DF-GAN (Tao et al., 2020) and ours on CUB and COCO. We randomly selected 100 text descriptions from the test dataset. Then, we asked 5 workers to compare the results after looking at the output images and the given text descriptions based on two criteria: (1) alignment: whether the synthetic image is semantically aligned with the given description, and (2) realism: whether the synthetic image looks realistic, shown in Tables 1 and 2.
Please see supplementary material for more details about the human evaluation. Implementation. There are three stages in the model, and each stage has a generator network and a discriminator network. The number of stages can be modified, which depends on the resolution of the output image. We utilize a deep neural network layer relu5 3 of a pre-trained VGG-16 to extract image features v, which is able to filter content details in I and keep more semantic information. In the discriminator, the number of different-scale image content features can be modified, which is related to the size of the given image. A same-size pooling kernel with a small stride (stride = 2) is repeatedly implemented on the image features, to maximize the preservation of the content information. For the type of pooling operation, average pooling is adopted. For the matching algorithms, word image matching with reweighting based on importance is adopted. The resolution of synthetic results is 256× 256. Our method and its variants are trained on a single Quadro RTX 6000 GPU, using the Adam optimizer (Kingma & Ba, 2014) with the learning rate 0.0002. The hyperparameter λ is set to 5. We preprocess datasets according to the method used in (Xu et al., 2018). No attention module is implemented in the whole architecture. 6.1 COMPARISON WITH OTHER APPROACHES Quantitative comparison. Quantitative results are shown in Tables 1 and 2. As we can see, compared to other approaches, our method achieves better FID and R-precision scores on both datasets, and even has a better performance than OP-GAN, where OP-GAN adopts bounding boxes. This indicates that (1) our method can produce more realistic images from given text descriptions, in terms of image quality and diversity, and (2) synthetic results produced by our method are more semantically aligned with the given text descriptions. Besides, in human evaluation, our method achieves better alignment and realism scores, compared with DF-GAN, which indicates that our results are most preferred by workers, which further verifies the better performance of our method, with respect to semantic alignment and image realism. Qualitative comparison. In Fig. 5, we present synthetic examples produced by our method at 256 × 256, along with the corresponding retrieved images that provide image features. As we can see, our method is able to produce highquality results on CUB and COCO, with respect to realistic appearances and geometric structure, and also semantically matching the given text descriptions. Besides, the synthetic results are different from the retrieved image features, which indicates there is no significant copy-and-paste problem in our method. Diversity evaluation. To further evaluate the diversity of our method, we fix the given text description and the corresponding retrieved image features, and only change the given noise z to generate output images, shown in Fig. 7. When we fix the sentence and image features and only change the noise, our method can generate obviously different images, but they still semantically match the given sentence and also make use information from the image features. More evaluations are shown in the supplementary material. 6.2 COMPONENT ANALYSIS Effectiveness of the image features. To better understand the effectiveness of image features in the generator, we conduct an ablation study shown in Table 3. 
Without image features, the model “Ours w/o Feature” achieves worse quantitative results on both FID and R-precision compared with the baseline, which verifies the effectiveness of image features on high-quality image generation. Interestingly, without image features, even our method becomes a pure text-to-image generation method, similar to other baselines, but the FID of “Ours w/o Feature” is still competitive with other baselines. This indicate that even without the image features fed into our method, our method can still generate better synthetic results, with respect to image quality and diversity. We think this is mainly because with the help of content information, our better discriminator is able to make a more reliable prediction on complex datasets, which in turn encourages the generator to produce better synthetic images. Effectiveness of the disentanglement. Here, we show the effectiveness of the fully connected layers applied on the image features v. Interestingly, from Table 3, the “model w/o Disen.” achieves better FID and R-precision compared with the baseline. This is likely because the model may suffer from an identity mapping problem. To verify this identity mapping problem, we conduct another experiment, where we feed mismatched sentence and image pairs into the network without using search algorithms, denoted “model w/o Disen.*”. As we can see, on mismatched pairs, although FID is still low, the R-precision degrades significantly. Effectiveness of the content information. To verify the effectiveness of the content information adopted in the discriminator, we conduct an ablation study, shown in Table 3. As we can see, FID and R-precision degrade when the discriminator without adopting the content information. This may indicate that the content information can effectively strengthens the differentiation abilities of the discriminator. Then, the improved discriminator is able to provide the generator with fine-grained training feedback, regarding to geometric structure, thus facilitating training a better generator to produce higher-quality synthetic results. Comparison between different pooling types. Here, we conduct a comparison study on different pooling types (i.e., max and average) in Table 3. As we can see, the model with the average pooling works better than max pooling. We think that this is likely because max pooling fails to capture the contextual information between neighboring pixels, because it only picks the maximum value among a region of pixels, while average pooling calculates the average value between them. Effectiveness of the regularization. We evaluate the effectiveness of the adopted regularization in the discriminator. From Table 3, the model without the regularization has worse quantitative results, compared with the full model. We think that this is because the regularization effectively improves GAN convergence by preventing the generator from training on junk feedback, once the discriminator cannot easily tell the difference between real and fake. 7 CONCLUSION We have introduced a memory-driven semi-parametric approach to text-to-image generation, which utilizes large datasets of images at inference time. Also, an alternative architecture is proposed for both the generator and the discriminator. Extensive experimental results on two datasets demonstrate the effectiveness of feeding retrieved image features into the generator and incorporating content information into the discriminator. 
8 ETHICS STATEMENT

All datasets and baselines used in the paper are public with corresponding citations. Our research mainly explores the interaction between different modal features, and aims to achieve an effective transformation from one domain to the other, which should not carry significant potential for harm, conflicts of interest, or sponsorship concerns.

9 REPRODUCIBILITY STATEMENT

To reproduce our results, we include the details of the datasets we used in our paper (see Sec. D). In the implementation section (see Sec. 6), we show more details on our network, including how to extract image features and how to generate the content information used in the discriminator. We also include the values of the hyperparameters and the kinds of devices that we used to train our network. Sec. 5.3 and Sec. B show the objective functions used to train our network. Also, all data and baselines used in our paper are public with corresponding citations. We will release our code after the conference.

A ARCHITECTURE

Here we show details about the network architectures for the components of our model.

A.1 TEXT ENCODER

The text encoder used in our method is a pretrained bidirectional LSTM (Xu et al., 2018), which is trained together with an image encoder Inception-v3 (Szegedy et al., 2016), maximizing the cosine similarity between text features and the corresponding image features. The text features are encoded from a given text description using the text encoder, and the image features are extracted from the corresponding matched image.

A.2 IMAGE ENCODER

The image encoder used in our main architecture is a VGG-16 (Simonyan & Zisserman, 2014) network, pretrained on ImageNet (Russakovsky et al., 2015). The relu5_3 layer is used to extract image features. Thus, the image features contain more semantic information than content details.

A.3 TEXT-IMAGE AFFINE COMBINATION MODULE

To better fuse different-modal text and image features, and also to enable a regional selection effect, we adopt the text-image affine combination module (Li et al., 2020), shown in Fig. 8. The affine combination module takes two inputs: (1) the hidden features h ∈ R^{C×H×W} from the given text description or the intermediate hidden representation between two stages, where C is the number of channels, H is the height, and W is the width of the feature map, and (2) the corresponding disentangled image features v_D ∈ R^{C×H×W}, obtained by applying fully connected layers on the image features. By applying two convolutional layers, the disentangled image features v_D are converted into trainable weights W(v_D) ∈ R^{C×H×W} and trainable biases b(v_D) ∈ R^{C×H×W}. Then, the fused feature h′ ∈ R^{C×H×W} is generated by

h′ = h ⊙ W(v_D) + b(v_D),   (7)

where W and b represent the functions that convert the image features v_D into the weights W(v_D) and biases b(v_D), and ⊙ denotes the Hadamard element-wise product.

A.4 REWEIGHTING IMAGE FEATURES BASED ON IMPORTANCE

Here, we show how to reweight image features based on their importance, mentioned in Sec. 4.2.4. First, during training, we use convolutional layers to remap the image features, and then reshape them into v ∈ R^{D×(H∗W)}. To calculate the importance λ of each spatial location in the image features, we apply λ = Softmax(v^T v), where λ ∈ R^{(H∗W)×(H∗W)} and each element in λ represents the correlation between different spatial locations. Finally, we reweight the image features based on importance by computing vλ.
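The following is a minimal torchvision sketch of the relu5_3 feature extraction described in Sec. A.2; the truncation index of 30 matches torchvision's VGG-16 layer ordering but should be treated as an assumption, and the input is assumed to be an ImageNet-normalized 256 × 256 RGB batch.

```python
import torch
import torchvision.models as models

# VGG-16 pretrained on ImageNet, truncated right after relu5_3
vgg16 = models.vgg16(pretrained=True)
relu5_3 = torch.nn.Sequential(*list(vgg16.features.children())[:30]).eval()

with torch.no_grad():
    x = torch.randn(1, 3, 256, 256)   # stand-in for a normalized RGB image batch
    v = relu5_3(x)                    # (1, 512, 16, 16) semantic feature map
```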
B OBJECTIVE FUNCTIONS
Here we show the complete objective functions for training our method. The discriminator and generator in our model are trained alternately by minimizing the generator loss LG and the discriminator loss LD, respectively.

B.1 GENERATOR OBJECTIVE
The generator objective for training the generator at stage i contains an unconditional adversarial loss, a conditional adversarial loss, and a text-image matching loss LDAMSM (Xu et al., 2018):

$$
\mathcal{L}_{G_i} = \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{z\sim P_z,\, v\sim P_{\text{data}}}\big[\log D_i(G_i(z, S, v))\big]}_{\text{unconditional adversarial loss}} \;\underbrace{-\,\tfrac{1}{2}\,\mathbb{E}_{z\sim P_z,\, v\sim P_{\text{data}}}\big[\log D_i(G_i(z, S, v), S)\big]}_{\text{conditional adversarial loss}} \;+\; \lambda\,\mathcal{L}_{\text{DAMSM}},
\tag{8}
$$

where Gi and Di represent the generator network and discriminator network at stage i, respectively, S is the text description, v is the image features extracted from the corresponding real image I that correctly semantically matches S, I is sampled from the true distribution Pdata, and z is a noise vector drawn from the Gaussian distribution Pz. Thus, the complete objective function for training the generator networks is

$$
\mathcal{L}_G = \sum_{k=1}^{K} \mathcal{L}_{G_k},
\tag{9}
$$

where K is the total number of stages in the network.

B.2 DISCRIMINATOR OBJECTIVE
The discriminator objective for training the discriminator at stage i contains an unconditional adversarial loss and a conditional adversarial loss:

$$
\mathcal{L}_{D_i} = \underbrace{-\tfrac{1}{2}\,\mathbb{E}_{I_i\sim P_{\text{data}}}\big[\log D_i(I_i)\big] - \tfrac{1}{2}\,\mathbb{E}_{z\sim P_z}\big[\log\big(1 - D_i(G_i(z, S, v))\big)\big]}_{\text{unconditional adversarial loss}} \;\underbrace{-\,\tfrac{1}{2}\,\mathbb{E}_{I_i\sim P_{\text{data}}}\big[\log D_i(I_i, S)\big] - \tfrac{1}{2}\,\mathbb{E}_{z\sim P_z}\big[\log\big(1 - D_i(G_i(z, S, v), S)\big)\big]}_{\text{conditional adversarial loss}},
\tag{10}
$$

where Ii denotes the real image sampled from the true image distribution Pdata at stage i. Thus, the complete objective function for training the discriminator networks is

$$
\mathcal{L}_D = \sum_{k=1}^{K} \mathcal{L}_{D_k} + R_1(\psi),
\tag{11}
$$

where R1(ψ) is the regularization term described in the paper. This regularization term is derived from zero-centered gradient penalties (Ross & Doshi-Velez, 2017) on local stability, which penalize the discriminator for deviating from the Nash equilibrium. This ensures that when a GAN-based model converges (i.e., the generator produces the true data distribution), the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.

C EVALUATION METRICS
In this section, we give more details about the evaluation metrics used in the paper.

C.1 FRÉCHET INCEPTION DISTANCE
The Fréchet inception distance (FID) (Heusel et al., 2017) measures the Fréchet distance between generated image features and real image features, where both are extracted by an Inception-v3 network (Szegedy et al., 2016) pretrained on ImageNet (Russakovsky et al., 2015). Consequently, a lower FID implies a closer distance between the synthetic image distribution and the real image distribution.

C.2 R-PRECISION
To measure the semantic alignment between a synthetic image and the given text description, the R-precision (Xu et al., 2018) is adopted. The R-precision is calculated by retrieving relevant text descriptions given an image query, where the relevance between a text and an image is measured by the cosine similarity between their features. Thus, we compute a global image vector and 100 candidate sentence vectors, where the 100 candidates contain R ground-truth text descriptions that correctly describe the image and 100 − R randomly chosen mismatched descriptions. For each image query, if a of the top-R ranked retrieved text descriptions are relevant, then the R-precision is a/R. In the paper, we measure the top-1 R-precision (i.e., R = 1).
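To illustrate this protocol, a small NumPy sketch of the top-1 case is given below. The feature extraction (text and image encoders) is abstracted away, and the variable names are placeholders rather than the exact evaluation code.

```python
import numpy as np


def top1_r_precision(image_vecs: np.ndarray, candidate_vecs: np.ndarray,
                     gt_indices: np.ndarray) -> float:
    """Top-1 R-precision (R = 1) following the protocol in Sec. C.2.

    image_vecs:     (Q, D) global features of the synthetic image queries.
    candidate_vecs: (Q, 100, D) sentence features per query; exactly one row
                    per query is the ground-truth description, the remaining
                    99 are randomly chosen mismatched descriptions.
    gt_indices:     (Q,) index of the ground-truth sentence for each query.
    """
    hits = []
    for img, cands, gt in zip(image_vecs, candidate_vecs, gt_indices):
        # Cosine similarity between the image query and each candidate sentence.
        sims = cands @ img / (np.linalg.norm(cands, axis=1) * np.linalg.norm(img) + 1e-8)
        hits.append(float(np.argmax(sims) == gt))  # 1 if the ground truth is ranked first
    return float(np.mean(hits))
```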
D MORE EXPERIMENTS
In this section, we show additional experimental results to further evaluate and verify the performance of our proposed method.

D.1 DATASETS
CUB bird (Wah et al., 2011) contains 8,855 training images and 2,933 test images, and each image has 10 corresponding text descriptions. COCO (Lin et al., 2014) contains 82,783 training images and 40,504 validation images, and each image has 5 descriptions.

D.2 QUANTITATIVE COMPARISON BETWEEN DIFFERENT ALGORITHMS
Here, we show the quantitative comparison between the different matching algorithms in Tables 4 and 5. As we can see, the words-image matching algorithm with importance-based reweighting achieves the best FID and R-precision scores on both the CUB and COCO datasets. Therefore, the words-image matching algorithm with reweighting is adopted in our method.

D.3 DETAILS OF HUMAN EVALUATION
Because automatic metrics cannot comprehensively evaluate the improvement brought by our proposed method, we conducted a side-by-side human evaluation study to analyze it. The study compares synthetic images from our method and the current state-of-the-art text-to-image generation method DF-GAN (Tao et al., 2020) on both CUB and COCO, according to (1) alignment and (2) realism. We presented synthetic images from the different methods along with the given text descriptions, randomly switched the order of our method and the baseline, and anonymized them. Then, we asked workers to choose the best images based on the above two criteria. In this study, we randomly chose 100 text descriptions from the test dataset and assigned the corresponding synthetic images generated by the different methods to 5 workers to reduce variance.

D.4 QUALITATIVE RESULTS
In Fig. 10, we show more qualitative results generated by our method on the CUB bird dataset, along with the corresponding retrieved images that provide the image features. As we can see, our method is able to produce high-quality results on CUB that semantically match the given text descriptions. Also, the synthetic results look clearly different from the retrieved images, yet our method can selectively choose information from the retrieved image to generate better synthetic results.

D.5 DIVERSITY
D.5.1 SSIM
We also compute the Structural Similarity Index (SSIM) score (Hore & Ziou, 2010) between the generated images and the corresponding ground-truth images to evaluate the diversity of our method. SSIM was originally used to measure the recovery quality of distorted images. In our case, a higher SSIM means that synthetic and real images are more similar, which would indicate a potential copy-and-paste problem and worse diversity. Hence, for SSIM, lower is better, i.e., better diversity. To calculate the SSIM, for the baseline methods we evaluate on the test dataset by computing the SSIM between each synthetic and ground-truth image pair and averaging all scores; for our method, we calculate the SSIM between the synthetic image and the image that provides the image features. As shown in Table 6, our method achieves competitive SSIM scores on both CUB and COCO compared with the other baselines.
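As a reference for this evaluation, a short sketch of the SSIM protocol is given below. It assumes 8-bit RGB inputs converted to grayscale before scoring and uses scikit-image's structural_similarity; the image-loading and pairing logic is abstracted away, and these choices are assumptions rather than the exact evaluation script.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity


def mean_ssim(image_pairs) -> float:
    """Average SSIM over (synthetic, reference) image pairs (Sec. D.5.1).

    For the baselines, the reference is the ground-truth test image; for our
    method, it is the retrieved image that provides the image features.
    A lower average SSIM is read as higher diversity (less copy-and-paste).
    """
    scores = []
    for synthetic, reference in image_pairs:
        a = rgb2gray(np.asarray(synthetic, dtype=np.float64) / 255.0)
        b = rgb2gray(np.asarray(reference, dtype=np.float64) / 255.0)
        scores.append(structural_similarity(a, b, data_range=1.0))
    return float(np.mean(scores))
```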
These results indicate that (1) even though our method uses image features as image priors, it can still produce diverse synthetic results that are different from the corresponding real images, (2) there is no significant copy-and-paste problem in our method, and (3) our method can effectively disentangle objects and attributes in the given image features, which can then serve as candidate information for the main generation pipeline to choose from.

D.5.2 SEMANTIC INFORMATION EXPLORATION
Here, we further verify whether our method suffers from a copy-and-paste problem by exploring whether it can make use of the semantic information contained in the retrieved image features. To verify this, instead of extracting image features from RGB images, we use segmentation masks to provide semantic image features, shown in Fig. 11. As we can see, although no content information is provided in the given segmentation masks, our method is still able to generate realistic images, which indicates that it makes use of the semantic information contained in the image features instead of simply copying and pasting the retrieved image features to produce output images. Furthermore, as discussed in Sec. D.7 below, given partially matched text and image features, our method is able to pick up the semantic information (e.g., the structure of the train, cat, and bus) and filter out the detailed color information (e.g., yellow and green, brown, and yellow) to generate text-required output images, as shown in Fig. 12.

D.6 EFFECTIVENESS OF IMAGE FEATURES
When no image features are fed into our method, it becomes a traditional text-to-image generation model, where the inputs are only the natural language descriptions and random noise. As shown in Table 7, "Ours w/o Feature" still has competitive performance compared with the other baselines, which means that our method can still generate images with good quality and diversity. We think this is mainly because of the powerful discriminator with content information, which is able to provide fine-grained training feedback to the generator in terms of realistic appearance and geometric structure. Note that to block the image features and build the model "Ours w/o Feature", we remove the image features and the ACM components from the network and only keep the new discriminator with content information.

Table 7: Quantitative comparison: Fréchet inception distance (FID) and R-precision (R-prs) of StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), ControlGAN (Li et al., 2019a), DM-GAN (Zhu et al., 2019), OP-GAN (Hinz et al., 2019), and our method on the COCO dataset. "Ours w/o Feature" denotes that our model does not use any image features and has a generation pipeline similar to other traditional text-to-image generation methods. For FID, lower is better; for R-prs, higher is better.

Metric      StackGAN++   AttnGAN   ControlGAN   DM-GAN   OP-GAN   Ours w/o Feature
FID         81.59        32.32     33.58        32.64    24.70    22.20
R-prs (%)   71.88        85.47     82.43        88.56    89.01    84.63

D.7 IMAGE GENERATION WITH PARTIAL TEXT-IMAGE MATCHING
Interestingly, when the retrieved image features have good quality (e.g., the desired objects in the image features can provide enough information) but are not perfectly aligned with the given text description, i.e., the given text description and the corresponding retrieved image features only partially match in semantic meaning, our method is still able to produce realistic images, as shown in Fig. 12.
As we can see, our method is able to generate the desired objects with the required attributes, even if the image features only partially match the given text description. For example, the provided "train" image features contain a yellow and green train, while the given description requires a red train. Nevertheless, our method is still able to generate a realistic train with a red color. Besides, our method can even produce a novel composition, e.g., a sign flying in the sky. We think this is mainly because the generator can selectively make use of the information provided by the image features instead of directly copying and pasting from them. Also, features and attributes are disentangled in the provided image features, which enables this independent selection without additional generation.

D.8 REGIONAL SELECTION EFFECT
In Fig. 12, we can observe the regional selection effect involved in the generation process. For the train example, our full model is able to selectively keep the relevant information (e.g., the train) and filter out the irrelevant contents (e.g., the yellow and green color) to avoid generating an object with the wrong attributes (e.g., the train is generated with the required red color instead). This effect can be magnified when the given image has multiple objects and the given text only partially describes it, as shown in Fig. 13. There are multiple objects (e.g., vase, flowers, chairs, and window for the top example; three zebras, enclosure, and grass for the bottom one) in the given image features. However, our method only selectively makes use of some of this information (e.g., the shape and texture of the flowers and the zebra) and generates the text-required objects without keeping the irrelevant contents of the image features (e.g., the chair, window, and multiple zebras).

E LIMITATIONS AND FUTURE WORK
Here, we discuss some limitations of the proposed method and future work. We have observed that our method may fail to produce realistic images when the retrieved image features can only provide limited information, e.g., when the target object is too small in the corresponding real image, or when there are no desired objects in the retrieved image features. As shown in Fig. 14 left, the stop sign, zebra, bus, and train in the corresponding images are too small, which means that the extracted image features can only provide very limited information about the desired objects to the generation pipeline. Furthermore, when the retrieved image features contain no desired objects, as shown in Fig. 14 right, our proposed method may fail to generate high-quality images as well. The absence of desired objects in the retrieved image features is mainly caused by image preprocessing (e.g., cropping) and also by the limitations of the matching algorithms. In such cases, our method behaves more like a pure text-to-image generation method, like the other baselines, because the provided image features cannot supply any useful information. To alleviate these problems, we suggest building a better memory bank with higher-quality image features and improving the matching algorithms to find the most compatible image features for a given text description. Besides, our method is a semi-parametric approach that needs to retrieve image features from the memory bank, so it might slow down inference compared with purely parametric methods.
To address this, we suggest (1) running the matching algorithms in parallel to speed up the overall inference time, and (2) encouraging users to provide the category of the main object in their text descriptions, so that this category can be used as a key to narrow down the retrieval region.

F ADDITIONAL QUALITATIVE COMPARISON
Here, we show an additional qualitative comparison between the text-to-image generation approaches StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), and DF-GAN (Tao et al., 2020) and our method on the COCO dataset (Lin et al., 2014).

Figure 15: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
Figure 16: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
Figure 18: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
1. What is the main contribution of the paper in terms of text-to-image generation?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its memory bank requirement and diversity analysis?
3. How does the method compare to other baselines in terms of memory usage and performance degradation when reducing the memory bank size?
4. Is a good retrieval system important for the proposed method, and how can it handle multi-object scenes?
5. Can using more contemporary pretrained weights improve generation quality, and what are some relevant papers that the authors did not cite?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a new approach to text-to-image generation. In this approach, firstly, the image features are retrieved from the memory bank based on the text. Then, based on these features and the text, the neural network generates a new image. The memory bank is constructed by extracting features from the training set of real images. The method achieves the state-of-the-art in terms of FID and R-precision on the COCO and CUB bird datasets. In addition to this, the paper introduces a few improvements in generator and discriminator architectures and provides an analysis of the quality and diversity of generated images.

Review
--Strengths
- The authors obtain state-of-the-art results in terms of FID and R-precision. The FID improvement is significant on the CUB bird dataset, from 14.81 to 10.49. They conduct a human evaluation that demonstrates the superiority of their approach as well.
- The diversity analysis seems legit and I am convinced by the paper that the model does not simply copy-paste retrieved images.
- The manuscript provides a meaningful ablation study.
- The manuscript compares different retrieval matching strategies.

--Weaknesses
- The method requires storing a memory bank to generate images. The size of this memory bank can be huge depending on the number of samples one needs to store.
- The method combines the architecture [3], the training pipeline [3], and an idea of the memory bank [5] from prior works with reasonable modifications. I believe this contribution is important but limited, and a further investigation of the proposed pipeline is needed to strengthen the manuscript.

I think that answers to the following questions will strengthen the paper:
- How much memory is required for the proposed method compared to other baselines?
- What happens if the memory bank gets reduced? For example, how does performance degrade if only 25, 50, or 75% of the training set is used in the memory bank?
- Is it important to have a good retrieval system? The addition of retrieval metrics for the train and test set to Table 4 in the Supplementary can answer this question.
- In my opinion, it is hard for the proposed method to handle multi-object scenes if object combinations are not present in one image in the memory bank. It would be great to have this intuition confirmed or disproved.
- The authors use the intermediate output of VGG16 from the memory bank as input to the generator network. Can one improve generation quality by using more contemporary pretrained weights instead of VGG16, e.g. BYOL [6], VQGAN [7], or ViT [8]?

--Additional remarks
I believe there are two relevant papers that the authors did not cite: DALL-E [1] generates images based on COCO captions in a zero-shot regime, and there is a well-known popular technique on the Internet that combines CLIP [2] with a pure image generation pipeline to generate images consistent with the text (e.g. [5]).
It is better to clarify to which dimension the softmax function is applied in Section 4.2.

[1] Zero-Shot Text-to-Image Generation, Ramesh et al.
[2] Learning Transferable Visual Models From Natural Language Supervision, Radford et al.
[3] ManiGAN: Text-Guided Image Manipulation, Li et al.
[4] PasteGAN: A semiparametric method to generate image from scene graph, Li et al.
[5] https://colab.research.google.com/drive/12a_Wrfi2_gwwAuN3VvMTwVMz9TfqctNj?usp=sharing
[6] Bootstrap your own latent: A new approach to self-supervised learning, Grill et al.
[7] Taming Transformers for High-Resolution Image Synthesis, Esser et al.
[8] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, Dosovitskiy et al.
ICLR
Without image features, the model “Ours w/o Feature” achieves worse quantitative results on both FID and R-precision compared with the baseline, which verifies the effectiveness of image features on high-quality image generation. Interestingly, without image features, even our method becomes a pure text-to-image generation method, similar to other baselines, but the FID of “Ours w/o Feature” is still competitive with other baselines. This indicate that even without the image features fed into our method, our method can still generate better synthetic results, with respect to image quality and diversity. We think this is mainly because with the help of content information, our better discriminator is able to make a more reliable prediction on complex datasets, which in turn encourages the generator to produce better synthetic images. Effectiveness of the disentanglement. Here, we show the effectiveness of the fully connected layers applied on the image features v. Interestingly, from Table 3, the “model w/o Disen.” achieves better FID and R-precision compared with the baseline. This is likely because the model may suffer from an identity mapping problem. To verify this identity mapping problem, we conduct another experiment, where we feed mismatched sentence and image pairs into the network without using search algorithms, denoted “model w/o Disen.*”. As we can see, on mismatched pairs, although FID is still low, the R-precision degrades significantly. Effectiveness of the content information. To verify the effectiveness of the content information adopted in the discriminator, we conduct an ablation study, shown in Table 3. As we can see, FID and R-precision degrade when the discriminator without adopting the content information. This may indicate that the content information can effectively strengthens the differentiation abilities of the discriminator. Then, the improved discriminator is able to provide the generator with fine-grained training feedback, regarding to geometric structure, thus facilitating training a better generator to produce higher-quality synthetic results. Comparison between different pooling types. Here, we conduct a comparison study on different pooling types (i.e., max and average) in Table 3. As we can see, the model with the average pooling works better than max pooling. We think that this is likely because max pooling fails to capture the contextual information between neighboring pixels, because it only picks the maximum value among a region of pixels, while average pooling calculates the average value between them. Effectiveness of the regularization. We evaluate the effectiveness of the adopted regularization in the discriminator. From Table 3, the model without the regularization has worse quantitative results, compared with the full model. We think that this is because the regularization effectively improves GAN convergence by preventing the generator from training on junk feedback, once the discriminator cannot easily tell the difference between real and fake. 7 CONCLUSION We have introduced a memory-driven semi-parametric approach to text-to-image generation, which utilizes large datasets of images at inference time. Also, an alternative architecture is proposed for both the generator and the discriminator. Extensive experimental results on two datasets demonstrate the effectiveness of feeding retrieved image features into the generator and incorporating content information into the discriminator. 
8 ETHICS STATEMENT All datasets and baselines used in the paper are public with corresponding citations. Our research mainly explores the interaction between different modal features, and aims to achieve an effective transformation from one domain to the other, which might not have significant potentially harmful insights and potential conflicts of interest and sponsorship. 9 REPRODUCIBILITY STATEMENT To reproduce our results, we include the details of the datasets we used in our paper (see Sec. D). In the implementation section (see Sec. 6), we show more details on our network, including how to extract image features, and how to generate content information used in the discriminator. We also include the values of hyperparameters, and the kinds of devices that we used to train our network. Sec. 5.3 and Sec. B show objective functions to train our network. Also, all data and baselines used in our paper are public with corresponding citations. We will release our code after the conference. A ARCHITECTURE Here we show details about the network architectures for the components of our model. A.1 TEXT ENCODER The text encoder used in our method is a pretrained bidirectional LSTM (Xu et al., 2018), which is trained together with an image encoder Inception-v3 (Szegedy et al., 2016), maximizing the cosine similarity between text features and the corresponding image features. The text features are encoded from a given text description using the text encoder, and the image features are extracted from the corresponding matched image. A.2 IMAGE ENCODER The image encoder used in our main architecture is a VGG-16 (Simonyan & Zisserman, 2014) network, pretrained on ImageNet (Russakovsky et al., 2015). A deep neural network layer relu5 3 is adopted to extract image features. Thus, the image features are able to contain more semantic information than content details. A.3 TEXT-IMAGE AFFINE COMBINATION MODULE To better fuse different-modal text and image features, and also to enable a regional selection effect, we adopt the text-image affine combination module (Li et al., 2020), shown in Fig. 8. The affine combination module takes two inputs: (1) the hidden features h ∈ RC×H×W from the given text description or intermediate hidden representation between two stages, where C is the number of channels, H is the height, and W is the width of the feature map, and (2) the corresponding disentangled image features vD ∈ RC×H×W , achieved by applying fully connected layers on the image features. According to applying two convolutional layers, the disentangled image features vD are converted into trainable weights W (vD) ∈ RC×H×W and trainable biases b(vD) ∈ RC×H×W . Then, the fused feature h′ ∈ RC×H×W is generated by h′ = h W (vD) + b(vD), (7) where W and b represent the functions that convert the image features vD into weights W (vD) and biases b(vD), and denotes the Hadamard element-wise product. A.4 REWEIGHTING IMAGE FEATURES BASED ON IMPORTANCE Here, we show how to reweight image features based on its importance, mentioned in Sec. 4.2.4. First, during the training, we use convolutional layers to remap image features, and then reshape image features into v ∈ RD×(H∗W ). Thus, to calculate the importance λ for each spatial locations in image features, we apply the following equation: λ = Softmax(vT v), where λ ∈ R(H∗W )×(H∗W ), and each element in λ represents the correlation between different spatial locations. Finally, we reweight image features based on importance by adopting vλ. 
B OBJECTIVE FUNCTIONS Here we show the complete objective functions for training our method. The discriminator and generator in our model are trained alternatively by minimizing both the generator loss LG and the discriminator loss LD. B.1 GENERATOR OBJECTIVE The generator objective for training a generator at stage i contains an unconditional adversarial loss, a conditional adversarial loss, and a text-image matching loss LDAMSM (Xu et al., 2018). LGi =− 1 2 Ez∼Pz,v∼Pdata [log(Di(Gi(z, S, v)))]︸ ︷︷ ︸ unconditional adversarial loss −1 2 Ez∼Pz,v∼Pdata [log(Di(Gi(z, S, v), S))]︸ ︷︷ ︸ conditional adversarial loss +λLDAMSM, (8) where Gi and Di represent the corresponding generator network and discriminator network at stage i, respectively, S is the text description, v is the image features that are extracted from the corresponding real image I that correctly semantically matches S, where the I is sampled from the true distribution Pdata, z is a noise vector drawn from the Gaussian distribution Pz . Thus, the complete objective function for training the generator networks is: LG = K∑ k=1 (LGi), (9) where K is the total number of stages in the network. B.2 DISCRIMINATOR OBJECTIVE The discriminator objective for training a discriminator at stage i contains an unconditional adversarial loss and a conditional adversarial loss. LDi =− 1 2 EIi∼Pdata [log(Di(Ii))]− 1 2 Ez∼Pz [log(1−Di(Gi(z, S, v)))]︸ ︷︷ ︸ unconditional adversarial loss −1 2 EIi∼Pdata [log(Di(Ii, S))]− 1 2 Ez∼Pz [log(1−Di(Gi(z, S, v), S))]︸ ︷︷ ︸ conditional adversarial loss , (10) where Ii denotes the real image sampled from the true image distribution Pdata at stage i. Thus, the complete objective function for training the discriminator networks is: LD = K∑ k=1 (LDi) +R1(ψ), (11) where R1(ψ) is a regularization term described in the paper. This regularization term is derived from zero-centered gradient penalties (Ross & Doshi-Velez, 2017) on local stability, which penalizes the discriminator for deviating from the Nash-equilibrium. This ensures that when a GAN-based model converges (i.e., the generator produces the true data distribution), the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game. C EVALUATION METRICS In this section, we show more details about the evaluation metrics used in the paper. C.1 FRÉCHET INCEPTION DISTANCE The Fréchet inception distance (FID) (Heusel et al., 2017) measures the Fréchet distance between generated image features and real image features, where both features are extracted by an Inception-v3 network (Szegedy et al., 2016) pretrained on ImageNet (Russakovsky et al., 2015). Consequently, a lower FID implies a closer distance between the synthetic image distribution and the real image distribution. C.2 R-PRECISION To measure the semantic alignment between the synthetic image and the given text description, the R-precision (Xu et al., 2018) is adopted. The R-precision is calculated by retrieving relevant text descriptions given an image query. To measure the relevance between the text and the image, the cosine similarity between text and image features is adopted. Thus, we compute a global image vector and 100 candidate sentence vectors, where the 100 candidate sentence vectors contain R number of ground-truth text descriptions that correctly describe the image, and 100−R randomly chosen mismatched descriptions. For each image query, if a results in the top R ranked retrieval text descriptions are relevant, then the R-precision is a/R. 
In the paper, we measure the top-1 R-precision (i.e., R = 1). D MORE EXPERIMENTS In this section, we show additional experimental results to further evaluate and verify the performance of our proposed method. D.1 DATASETS CUB bird (Wah et al., 2011) contains 8,855 training images and 2,933 test images, and each image has 10 corresponding text descriptions. COCO (Lin et al., 2014) contains 82,783 training images and 40,504 validation images. Each image has 5 descriptions. D.2 QUANTITATIVE COMPARISON BETWEEN DIFFERENT ALGORITHMS Here, we show the quantitative comparison between different matching algorithms, shown in Tables 4 and 5. As we can see, the algorithm word image matching with reweighting based on importance achieves the best FID and R-psr scores on CUB and COCO datasets. Therefore, the algorithm word image matching with reweighting is adopted in our method. D.3 DETAILS OF HUMAN EVALUATION Because the automatic metric cannot comprehensively evaluate the improvement of our proposed method, we conducted a side-by-side human evaluation study to analyze the improvement. The study compares synthetic images from our method and current state-of-the-art text-to-image generation method DF-GAN (Tao et al., 2020) on both CUB and COCO, according to (1) alignment, and (2) realism. We presented synthetic images from different methods along with the given text descriptions. We randomly switch our method and the baseline and also anonymized them. Then, we asked workers to choose the best images based on above two criteria. In this study, we randomly choose 100 text descriptions sampled from the test dataset, and then assign corresponding synthetic images generated by different methods to 5 workers to reduce variance. D.4 QUALITATIVE RESULTS In Fig. 10, we show more qualitative results generated by our method on the CUB bird dataset, along with the corresponding retrieved images that provide image features. As we can see, our method is able to produce high-quality results on CUB, semantically matching the given text descriptions. Also, the synthetic results look obviously different from the retrieved images, but our method can selectively choose information from the retrieved image to generate better synthetic results. D.5 DIVERSITY D.5.1 SSIM We also compare the Structural Similarity Index (SSIM) score (Hore & Ziou, 2010) between the generated images and corresponding ground-truth images to evaluate the diversity of our method. SSIM is originally used to measure the recovery result from distorted images. In our case, higher SSIM means synthetic and real images are more similar, which indicates that there may exist a copy-and-paste problem and the network has a worse diversity. Based on this, for SSIM, lower is better, which means a better diversity. To calculate the SSIM, for other baseline methods, we evaluate them on the test dataset by calculating the SSIM between each synthetic and ground-truth image pairs, and then get the average of all scores; for our method, we calculate the SSIM between the synthetic image and the image that provide image features. As shown in Table 6, our method achieves competitive SSIM scores on both CUB and COCO, compared with other baselines. 
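A minimal sketch of the SSIM-based diversity measure described above is given below; it assumes images are already loaded as arrays and uses scikit-image's structural_similarity, whose default window settings may differ slightly from the exact configuration used for Table 6.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(synthetic_images, reference_images):
    # Average SSIM over paired synthetic/reference images.
    # Lower is better here: less similarity to the reference implies more diversity.
    scores = [ssim(fake, real, data_range=real.max() - real.min())
              for fake, real in zip(synthetic_images, reference_images)]
    return float(np.mean(scores))

# Toy usage with random grayscale arrays standing in for generated/reference pairs.
rng = np.random.default_rng(0)
fakes = [rng.random((256, 256)) for _ in range(4)]
reals = [rng.random((256, 256)) for _ in range(4)]
print(mean_ssim(fakes, reals))

For the baseline methods the reference is the ground-truth test image, while for our method it is the retrieved image that provides the image features, as described above.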
These SSIM results indicate that (1) even if our method has image features as image priors, it can still produce diverse synthetic results that are different from the corresponding real images, (2) there is no significant copy-and-paste problem in our method, and (3) our method can effectively disentangle objects and attributes in the given image features, which can then serve as candidate information for the main generation pipeline to choose from.

D.5.2 SEMANTIC INFORMATION EXPLORATION

Here, we further verify whether our method suffers from a copy-and-paste problem by exploring whether it can make use of semantic information contained in the retrieved image features. To verify this, instead of extracting image features from RGB images, we use segmentation masks to provide semantic image features, shown in Fig. 11. As we can see, although no content information is provided in the given segmentation masks, our method is still able to generate realistic images, which indicates that our method can make use of semantic information contained in the image features, instead of simply copying and pasting the retrieved image features to produce output images. Furthermore, as discussed in Sec. D.7 below, given partially matched text and image features, our method is able to pick up the semantic information (e.g., the structure of the train, cat, and bus) and filter out detailed color information (e.g., yellow and green, brown, and yellow) to generate output images that satisfy the text, as shown in Fig. 12.

D.6 EFFECTIVENESS OF IMAGE FEATURES

When no image features are fed into our method, it becomes a traditional text-to-image generation model, where the inputs are only the natural language descriptions and random noise. As shown in Table 7, "Ours w/o Feature" still achieves competitive performance compared with other baselines, which means that our method can still generate images with good quality and diversity. We think this is mainly because of the powerful discriminator with content information, which is able to provide fine-grained training feedback to the generator in terms of realistic appearance and geometric structure. Note that the way to block image features and build the model "Ours w/o Feature" is to remove the image features and ACM components from the network, and only keep the new discriminator with content information.

Table 7: Quantitative comparison: Fréchet inception distance (FID) and R-precision (R-prs) of StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), ControlGAN (Li et al., 2019a), DM-GAN (Zhu et al., 2019), OP-GAN (Hinz et al., 2019), and our method on the COCO dataset. "Ours w/o Feature" denotes that our model does not use any image features and has a generation pipeline similar to other traditional text-to-image generation methods. For FID, lower is better; for R-prs, higher is better.

Metric       StackGAN++   AttnGAN   ControlGAN   DM-GAN   OP-GAN   Ours w/o Feature
FID          81.59        32.32     33.58        32.64    24.70    22.20
R-prs (%)    71.88        85.47     82.43        88.56    89.01    84.63

D.7 IMAGE GENERATION WITH PARTIAL TEXT-IMAGE MATCHING

Interestingly, when the retrieved image features have good quality (e.g., the desired objects in the image features provide enough information) but are not perfectly aligned with the given text description, which means that the text description and the corresponding retrieved image features only partially match in semantic meaning, our method is still able to produce realistic images, shown in Fig. 12.
As we can see, our method is able to generate the desired objects with required attributes, even if image features only partially match the given text description. For example, in the provided “train” image features, there is a yellow and green train, but the given description requires a red train. However, our method is still able to generate a realistic train with a red color. Besides, our method can even produce a novel composition, e.g., the sign is flying in the sky. We think that this is mainly because the generator can selectively make use of the information provided by the image features, instead of directly copying and pasting information from it. Also, features and attributes are disentangled in the provided image features, which enable this independent selection without additional generation. D.8 REGIONAL SELECTION EFFECT In Fig. 12, we can observe the regional selection effect involved in the generation process. For the train example, our full model is able to selectively keep the relevant information (e.g., train) and filter the irrelevant contents (e.g., yellow and green color) to avoid a wrong object generation (e.g., red color). This effect can be magnified when the given image has multiple objects, and the given text only partially describes it, shown in Fig. 13. There are multiple objects (e.g., vase, flowers, chairs, and window for the top example; three zebras, enclosure, and grass for the bottom one) in the given image features. However, our method only selectively makes use of some information (e.g., shape and texture of flowers and zebra) and generates text-required objects without keeping irrelevant contents in the image features (e.g., chair, window, and multiple zebras). E LIMITATIONS AND FUTURE WORK Here, we discuss some limitations of the proposed method and also the future work. We have observed that our method may fail to produce realistic images when the retrieved image features can only provide limited information, e.g., the target object is too small in the corresponding real image, or there are no desired objects in the retrieved image features. As shown in Fig. 14 left, the stop sign, zebra, bus, and train in the corresponding image are too small, which means that the extracted image features can only provide very limited information about the desired object zebra, stop sign, bus, and train to the generation pipeline. Furthermore, when the retrieved image features have no desired objects, shown in Fig. 14 right, our proposed method may fail to generate high-quality images as well. No desired objects presented in the retrieved image features are mainly caused by the image preprocessing (e.g., crop) and also the limitation of matching algorithms. In such cases, our method is more similar to a pure text-to-image generation method, like other baselines, because the provided image features cannot provide any useful information. To solve these problems, we suggest to build a better memory bank with higher-quality image features, and also improve the matching algorithms to find the most compatible image features for a given text description. Besides, our method is a semi-parametric approach, which needs to retrieve image features from the memory bank. So, it might slow down the inference time, compared with other purely parametric methods. 
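The retrieval step is the main source of this extra inference time. As a rough, illustrative sketch only (the actual word-image matching algorithm with reweighting is not reproduced here), a plain cosine-similarity lookup over precomputed memory-bank features could look as follows; the names memory_feats and text_emb are placeholders.

import numpy as np

def retrieve_image_features(text_emb, memory_feats, top_k=1):
    # Return the top-k memory-bank image features most similar to the text query.
    # memory_feats: (N, d) precomputed image features; text_emb: (d,) query embedding.
    t = text_emb / np.linalg.norm(text_emb)
    m = memory_feats / np.linalg.norm(memory_feats, axis=1, keepdims=True)
    scores = m @ t                        # (N,) cosine similarities
    idx = np.argsort(-scores)[:top_k]     # indices of the best-matching entries
    return memory_feats[idx], idx

# The linear scan over all N entries is what grows with the memory-bank size; restricting
# memory_feats to a category-indexed subset (as suggested next) shrinks N directly.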
To reduce this retrieval overhead, we suggest (1) running the matching algorithms in parallel to speed up the overall inference time, and (2) encouraging users to provide the category of the main object in their text descriptions, which can then be used as a key to narrow down the retrieval region.

F ADDITIONAL QUALITATIVE COMPARISON

Here, we show an additional qualitative comparison between the text-to-image generation approaches StackGAN++ (Zhang et al., 2018), AttnGAN (Xu et al., 2018), and DF-GAN (Tao et al., 2020) and our method on the COCO dataset (Lin et al., 2014). Each figure shows the given text descriptions alongside the images synthesized by each method.

Figure 15: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
Figure 16: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
Figure 18: Additional comparison results between StackGAN++, AttnGAN, DF-GAN, and Ours on the COCO dataset.
1. What is the main contribution of the paper in terms of text-based image synthesis? 2. What are the strengths and weaknesses of the proposed approach, particularly in combining parametric and non-parametric knowledge? 3. How does the reviewer assess the benefits of the image feature conditioning, and what further analysis or evaluation would they suggest? 4. How does the proposed approach compare to previous works such as SAGAN/Attentional Generative Network (AttnGAN)? 5. What are the reviewer's concerns regarding the potential limitations of the current setup, and how could they be addressed? 6. How would multiple images be combined, and what would be the appropriate way to explore the use of multiple images in the memory bank?
Summary Of The Paper Review
Summary Of The Paper The paper explores the novel idea of text based image synthesis images using a combination of parametric knowledge (encoded via training a GAN) and non-parametric knowledge (encoded by some images relating to the target of the synthesis). Essentially the architecture is a text-driven GAN with additional conditioning from one or more semantic image features. Conveniently the images can come from a text based image retrieval process. The additional guidance of the synthesis by is named ‘memory driven’ synthesis by the authors. Review My view of this paper is on the borderline, tending towards rejection. The paper seems an early idea in need of deeper exploration – specifically, whilst the idea of combining retrieved image features into GAN is interesting the benefits of the image feature conditioning are not well demonstrated or evaluated. The use of additional conditioning and guidance in the generator for text2image synthesis has been explored previously for example in SAGAN/Attentional Generative Network (AttnGAN as cited) where the text encoding is fed both to the generator / upsampling input and to secondary generator blocks. In terms of architecture the proposed is similar, but instead of feeding in the text-derived semantic features, they are instead semantic features coming from retrieved images from the memory bank. The FID scores show clear improvement – i.e. the memory is providing a second source of information that is helpful – but the paper is silent / does not explore the nature of this information? Is it the image layout that is useful, since this is poorly expressed via the text prompt? Or the textures present in the image are better captured via the memory (non-parametric) that the trained GAN (parametric) model? The paper shows quantitatively an improvement in FID score but there is little analysis as to what mutual or complementary information might exist in these inputs. If the aforementioned hypotheses are the case, then why would a late stage fc layer (VGG conv5_3) i.e. semantic feature provide additionality? If not, then what exactly is being gained from the image features? Can some visualization be produced showing which information is coming from which source? Digging deeper into the question would also help prove the value of the approach beyond simple an FID. The results included show that the synthesised image resembles very closely the retrieved image. For example in figure 5, the appearance of the bird or the vehicle are nearly identical between the retrieved image and the synthesised image. It would be useful to show multiple synthesis results for a given text prompt but different retrieved images (rather than the current ‘diversity’ experiment showing different synthesis runs with different noise seeds). My concern with the current setup, is that the model may be learning largely identity function mapping the retrieved image to the synthesised result and largely ignoring the text prompt. Showing diverse synthetic output for fixed noise and text but different memory images would help allay that concern. Throughout the paper, images are referred to in the plural but the results appear to show just a single retrieved image – how would multiple images be combined? If average pooled, does that make sense if multiple modes might exist in retrieved image data? Currently the paper discussed average vs. max pooling for image regions but it was unclear to me how multiple images could be used. 
In summary I think the idea is interesting but beyond superficial exploration of FID (and related effect of ablation on FID) there is no real exploration of the information sources and the impact. I have some doubts as to the effectiveness of the fusion between image and text features as a result, and whether the paper is really learning from both. Given the current result set, it could simply be that the improved FID in synthesis is due to over-reliance on the image feature prompt from the retrieval.
ICLR
Title Reinforcement Learning with Efficient Active Feature Acquisition Abstract Solving real-life sequential decision making problems under partial observability involves an exploration-exploitation problem. To be successful, an agent needs to efficiently gather valuable information about the state of the world for making rewarding decisions. However, in real world applications, acquiring valuable information is often highly costly, e.g., in the medical domain, information acquisition might correspond to performing a medical test on a patient. This poses a significant challenge for the agent to learn optimal policy for the task. In this paper, we propose a model-based reinforcement learning framework that learns a policy which solves this exploration-exploitation problem during its execution. Key to the success is a novel sequential variational autoencoder that learns high-quality representations from partially observed states, which are then used by the policy to maximize the task reward in a cost-efficient manner. We demonstrate the efficacy of our proposed framework in a control domain as well as using a medical simulator. In both tasks, our proposed method outperforms conventional baselines and results in policies with greater cost efficiency. 1 INTRODUCTION Recently, machine learning models for automated sequential decision making have shown remarkable success across many application areas, such as visual recognition (Mathe et al., 2016; Das et al., 2017), robotics control (Finn et al., 2016; Zhang et al., 2018), medical diagnosis (Ling et al., 2017; Peng et al., 2018) and computer games (Mnih et al., 2015; Silver et al., 2016). One fundamental reason that drives the success of such models and enables them to outperform classical algorithms is the availability of large amounts of training data. Typically such training data is either fully observed or the features stem from an action-independent observation model (which clearly can depend on the state of the system). However, the fundamental assumption that the same features are always readily available during deployment could not hold in many real-world applications. For instance, consider a medical support system for monitoring and treating patients during their stay at hospital which was trained on rich historical medical data. To provide the best possible treatment, the system might need to perform several measurements of the patient over time, while some of them could be costly or even pose a health risk. Therefore, during deployment, it is more ideal that the system could function with minimal features while during training more features might have been available. In such cases, we are interested in decision making models that actively take the measurement process, i.e., feature acquisition, into account and only acquire the information relevant for making a decision. In this paper, we consider the challenging problem of learning effective policies when the cost of information acquisition cannot be neglected. To be successful,we need to learn policies which acquires the information required for solving a task in the cheapest way possible. 
For simplicity, we can think of the policy as being constituted of an acquisition policy which actively selects meaningful features to be observed and a task policy, which selects actions to change the state of the system towards some goal.1 As such, we consider a partially observable learning problem with the following two distinguishing properties compared to the most commonly studied problems (see also Figure 3.2 for an illustration). (i) By incorporating active feature acquisition, the training of the task policy is based upon subsets of features only, i.e., there are missing features, where the missingness is 1Clearly, these two policies are not independent in general, e.g., acquiring features can change the state of the system. observe action acquire (e.g. navigation) (e.g. medical treatments) observe action controlled by the acquisition policy. Thus, the resulting POMDP is different from the conventional POMDPs in RL literature (Cassandra, 1998) where the partial observability for later stems from a fixed and action-independent observation model. Also, the state transitions in conventional POMDPs are only determined by the choice of the task action, whereas in our setting the state-transition is affected by both the task action and the feature acquisition choice. (ii) The learning of the acquisition policy introduces an additional dimension to the exploration-exploitation problem: each execution of the policy needs to solve an exploration-exploitation problem, and thus we often need to learn sophisticated policies. Most reinforcement learning research has not taken active feature acquisition into consideration. In this work, we propose a unified approach that jointly learns a policy for optimizing the task reward while performing active feature acquisition. Although some of the prior works have exploited the use of reinforcement learning for sequential feature acquisition tasks (Shim et al., 2018; Zannone et al., 2019), they considered variable-wise information acquisition in a static setting only, corresponding to feature selection for non-time-dependent prediction tasks. However, our considered setting is truly time-dependent and feature acquisitions need to be made at each time step while the state of the system evolves simultaneously. As such, both the model dynamics of the underlying MDP and the choice of feature acquisition introduce considerable challenges to the learning of the sequential feature acquisition strategy. Due to the challenge of the exploration-exploitation problem, it is a non-trivial task to jointly learn the two policies. The conventional end-to-end approaches often result in inferior solutions in complex scenarios. Ideally, policies based on high-quality representations would be easier for the algorithm to search for better solutions through exploration-exploitation. Therefore, our proposed framework also tackles the joint policy training task from a representation learning perspective. Specifically, we introduce a representation learning model that not only encodes the sequential partially observed information into its latent features, but also efficiently imputes the unobserved features to offer more meaningful information for the policy training. To this end, we formulate a sequential generative model that can efficiently learn model dynamics during representation learning. 
Overall, the contributions of our paper are three-fold: • We propose an approach for learning sequential decision making policies with active feature acquisition through a unified reinforcement learning framework. Our proposed approach simultaneously learns policies for reward optimization and active feature acquisition. • We present a novel sequential representation learning approach to account for the encoding of the partially observed states. Our proposed approach is based on variational autoencoders (VAE) with amortized inference. The imputation of the unobserved features is achieved via learning the model dynamics. • We demonstrate our proposed framework can be applied to various applications. We conduct extensive experiments on an image-based control task as well as a medical simulator fitted from real-life data where our method shows clear improvements over conventional baselines. 2 RELATED WORK In this work, we integrate active learning with reinforcement learning to accomplish the policy training task while attempting to acquire fewest observed features as possible. We thus review related methods on active feature acquisition and representation learning for POMDP, respectively. 2.1 ACTIVE FEATURE ACQUISITION Our work draws motivation from the existing instance-wise active feature selection approaches. One category of the instance-wise feature selection methods consider feature acquisition as a one time effort to select a subset of features as a whole. One typical example is the conventional linear model that poses sparsity inducing prior distribution to the model (Tibshirani, 1996). Recently, there also emerged approaches that adopt reinforcement learning to actively find optimal feature subsets (Yoon et al., 2018; Shim et al., 2018; Zannone et al., 2019). Though such attempts have demonstrated certain efficacy in handling non time-series instance-wise data, they do not suffice for handling sequential dataset. There is also an alternative category that models feature acquisition as a Bayesian experimental design (Ma et al., 2019; Gong et al., 2019). However, the sequential decision making is for variable-wise feature acquisition and the problems are still non time-series tasks in nature. The key difference between all the aforementioned approaches and ours is that we tackle active feature acquisition problems with time-series data, where an active feature selection decision needs to be formed at each time step along the multi-step reinforcement learning trajectory. Therefore, the feature acquisition for our presented work needs to consider more complex information over model dynamics and control, apart from the static instance-wise features. 2.2 REPRESENTATION LEARNING IN POMDP In complex tasks, policies trained upon different representations can even converge to different performance levels. Most conventional deep reinforcement learning approaches unifies the process of representation learning with policy training and results in policies trained in an end-to-end fashion (Mnih et al., 2013; Lillicrap et al., 2016; Mnih et al., 2016). However, to accomplish the representation learning task, such models often engage trainable parameters which could come with considerable size and thereby result in significant degradation in sample efficiency. When considering problems with POMDPs where the state space is partially accessible to the agent, representation learning becomes an important and non-trivial research challenge. 
Among the existing literature, one prominent line of research tackles the representation learning for POMDP in an off-line fashion and thus resulting in multi-stage reinforcement learning. Higgins et al. (2016; 2017) adopt pretrained VAE models as a representation module to build agents with strong domain adaptation performance. The key difference between their work and ours is that they encode instance-wise image frames from POMDP domains where each image presents a partial view over the task environment, while our work considers cost-sensitive reinforcement learning with distinct partial observability, i.e., the feature-level information is missing at each time step for the agent. We thus adopt a sequential representation learning approach to infer a more representative state information. Recently, there also emerged several works on sequential representation learning for POMDP (Gregor et al., 2019; Vezzani et al., 2019). However, most of the works utilize VAE training as an auxiliary task to jointly update the representation model with the policy learning loss. In our work, due to the high acquisition cost to observe the features, we adopt an off-line representation learning setting. Also, our proposed representation learning is model-based, where the model learns to impute the missing features with such attempt yielding significant benefit to derive high-quality representation for policy training. 3 METHODOLOGY 3.1 TASK SETTING In this section, we formally define the problem settings for the task of jointly learning the task and feature acquisition policy. To this end, we define the active feature acquisition POMDP, a rich class of discrete-time stochastic control processes generalizing standard POMDPs: Definition 1 (AFA-POMDP). The active feature acquisition POMDP is a tuple M = 〈S,A, T ,O,R, C, γ〉, where S is the state space and A = (Af ,Ac) is a joint action space of feature acquisition actionsAf and control actionsAc. The transition kernel T : S ×Ac×Af → PS maps any joint action a = (af ,ac) in state s ∈ S to a distribution PS over next states. In each state s, when taking action af , the agent observes xp = x(af ), i.e., a subset of the features x = (xp,xu) ∼ O(s) indicated by af , whereO(s) is a distribution over possible feature observation for state s and xu are features not observed by the agent. When taking a joint action, the agent obtains rewards according to the reward functionR : S ×Ac → R and pays a cost of C : S ×Af → R+ for feature acquisition. Rewards and costs are discounted by the discount factor γ ∈ [0, 1). Simplifying assumptions For simplicity, we assume that x consists of a fixed number of features Nf for all states, that Af = 2[Nf ] is the power set of all the Nf features, and that xp(af ) consists of all the features in x indicated by the subset af ∈ Af . Note that the feature acquisition action for a specific application can take various different forms. For instance, in our experiments in Section 4, for the Sepsis task, we define feature acquisition as selecting a subset over possible measurement tests, whereas for the Bouncing Ball+ task, we divide an image into four observation regions and let the feature acquisition policy select a subset of observation regions (rather than raw pixels). Please also note that while in a general AFA-POMDP, the transition between two states depends on the joint action, we assume in the following that it depends only on the control action, i.e., T (s,ac,af ′) = T (s,ac,af ) for all af ′ ,af ∈ Af . 
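To make the AFA-POMDP tuple and the simplifying assumptions above concrete, the following is an illustrative environment interface under a fixed per-feature cost c and transitions driven by the control action only; the class, the base_env simulator, and all names are our own sketch, not code from the paper.

import numpy as np

class AFAPOMDPEnv:
    # Illustrative wrapper: the agent submits a control action a_c and a feature-
    # acquisition mask a_f, and only the acquired entries of x ~ O(s) are revealed.
    def __init__(self, base_env, n_features, cost_per_feature=0.01):
        self.env = base_env              # underlying simulator (assumed to exist)
        self.n_features = n_features
        self.c = cost_per_feature

    def step(self, a_c, a_f):
        # a_f: boolean mask of length n_features selecting which features to observe.
        x, reward, done = self.env.step(a_c)            # full feature vector x
        x_partial = np.where(a_f, x, np.nan)            # unacquired entries stay hidden
        acq_cost = self.c * int(np.count_nonzero(a_f))  # C(a_f, s) = c * |a_f|
        return x_partial, reward, acq_cost, done

The objective in Eq. (1) below then simply accumulates reward minus acq_cost under the discount factor γ.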
While not true for all possible applications, the assumption that transitions depend only on the control action can be a reasonable approximation, for instance in medical settings in which tests are non-invasive. For simplicity we furthermore assume that acquiring each feature has the same cost, denoted as c, i.e., C(a^f, s) = c · |a^f|, but our approach can be straightforwardly adapted to different costs for different feature acquisitions.

Objective We aim to learn a policy which trades off reward maximization and the cost of feature acquisition by jointly optimizing a task policy π_c and a feature acquisition policy π_f. That is, we aim to solve the optimization problem

\max_{\pi_f, \pi_c} \; \mathbb{E}\Bigg[\sum_{t=0}^{\infty} \gamma^t \Bigg( R(x_t, a^c_t) - \sum_{i=1}^{|\mathcal{A}_f|} c \cdot \mathbb{I}\big(a^{f(i)}_t\big) \Bigg)\Bigg],   (1)

where the expectation is over the randomness of the stochastic process and the policies, a^{f(i)}_t denotes the i-th feature acquisition action at timestep t, and \mathbb{I}(·) is an indicator function whose value equals 1 if that feature has been acquired. Note that the above optimization problem is very challenging: an optimal solution needs to maintain a belief b_t over the state of the system at time t, which is a function of the partial observations obtained so far. Both the feature acquisition policy π_f(a^f_t | b_t) and the task policy π_c(a^c_t | b_t) depend on this belief. The information in the belief itself is controlled by the feature acquisition policy through querying subsets of the features x_t, and hence both policies strongly depend on the effectiveness of the feature acquisition policy.

3.2 SEQUENTIAL REPRESENTATION LEARNING WITH PARTIAL OBSERVATIONS

We introduce a sequential representation learning approach to facilitate policy training with active feature acquisition. Let x_{1:T} = (x_1, ..., x_T) and a_{1:T} = (a_1, ..., a_T) denote a sequence of observations and actions, respectively. Alternatively, we also denote these sequences as x_{≤T} and a_{≤T}. Overall, our task of interest is to train a sequential representation learning model that learns the distribution of the full sequential observations x_{1:T}, i.e., of both the observed part x^p_{1:T} and the unobserved part x^u_{1:T}. Given only partial observations, we can perform inference only with the observed features x^p_{1:T}. Therefore, our proposed approach extends the conventional unsupervised representation learning task to a supervised learning task, which learns to impute the unobserved features by synthesizing the acquired information and learning the model dynamics. The key underlying assumption is that learning to impute the unobserved features results in better representations that can be leveraged by the task policy, and that sequential representation learning, as we propose, is a more adequate choice than non-sequential modeling for our partially observable task of interest. Furthermore, unlike many conventional sequential representation learning models for reinforcement learning that reason only over the observation sequence x^p_{1:T}, we take both the observation sequence x^p_{1:T} and the action sequence a_{1:T} into account when conducting inference. The intuition is that since x^p_{1:T} by itself carries very limited information about the agent's underlying MDP state, incorporating the action sequence is an informative addition for inferring the belief state.
To summarize, our proposed sequential representation model learns to encode x^p_{1:T} and a_{1:T} into meaningful latent features for predicting x^p_{1:T} and x^u_{1:T}. The architecture of our proposed sequential representation learning model is shown in Figure 2.

Figure 2: Observation decoder and belief inference model for the partially observable sequential VAE. Shaded nodes represent the observed variables. The inference model filters information over the partial observations and actions, to predict both the observed and unobserved features.

Observation Decoder Let z_{1:T} = (z_1, ..., z_T) denote a sequence of latent states. We consider the following probabilistic model:

p_\theta(x^p, x^u, z) = \prod_{t=1}^{T} p(x^p_t, x^u_t \mid z_t)\, p(z_t),   (2)

For simplicity of notation, we assume z_0 = 0. We impose a simple prior distribution over z, i.e., a standard Gaussian prior, instead of incorporating a learned prior distribution over the latent space of z, such as an autoregressive prior of the form p(z_t | z_{t-1}, x^p_{1:t}, a_{0:t-1}). The reason is that a static prior distribution yields latent representations z_t that are more strongly regularized and normalized than a learned prior distribution which changes stochastically over time. This is crucial for deriving stable policy training performance. At time t, the generation of data x^p_t and x^u_t depends on the corresponding latent variable z_t. Given z_t, the observed variables are conditionally independent of the unobserved ones. Therefore,

p(x^p_t, x^u_t \mid z_t) = p(x^p_t \mid z_t)\, p(x^u_t \mid z_t).   (3)

Belief Inference Model During policy training we only assume access to partially observed data. This requires an inference model which takes in the past observation and action sequences to infer the latent states z. Specifically, we present a structured inference network q_\phi, as shown in Figure 2, which has an autoregressive structure:

q_\phi(z \mid x, a) = \prod_t q_\phi(z_t \mid x^p_{\le t}, a_{<t}),   (4)

where q_\phi(·) is a function that aggregates the filtering posteriors of the history of observation and action sequences. Following common practice in the sequential VAE literature, we adopt a forward RNN model as the backbone of the filtering function q_\phi(·) (Gregor et al., 2019). Specifically, at step t, the RNN processes the encoded partial observation x^p_t, the action a_{t-1}, and its past hidden state h_{t-1} to update its hidden state h_t. The latent distribution over z_t is then inferred from h_t. The belief state b_t is defined as the mean of the distribution of z_t. By solving this supervised learning task, the belief state provides rich information about not only the observed sequential features but also the missing ones, so that a policy trained on it can progress faster towards better convergent performance.

Learning We propose to pre-train both the generative and inference models offline before learning the RL policies. In this case, we assume access to the unobserved features, so that we can construct a supervised learning task that learns to impute them. Concretely, pre-training updates the parameters θ, φ by maximizing the following variational lower-bound (Jordan et al., 1999; Kingma & Welling, 2013):

\log p(x^p, x^u) \ge \mathbb{E}_{q_\phi}\Big[\sum_t \log p_\theta(x^p_t \mid z_t) + \log p_\theta(x^u_t \mid z_t) - \mathrm{KL}\big(q_\phi(z_t \mid x^p_{\le t}, a_{<t}) \,\|\, p(z_t)\big)\Big] = \mathrm{ELBO}(x^p, x^u).
(5) By incorporating the term log pθ(xut |zt), the training of sequential VAE generalizes from an unsupervised task to a supervised task that learns the model dynamics from past observed transitions and imputes the missing features. We perform multi-stage reinforcement learning to jointly learn the feature acquisition policy and the task policy. The VAE model is pretrained and kept fixed during policy learning. The reason for not updating VAE online is that computing the loss in Eq (5) would require the access to unobserved features and therefore, is cost intensive. The pseudocode for our proposed method is in Appendix A. 4 EXPERIMENTS We examine the characteristics of our proposed model in the following two experimental domains: a bouncing ball control task with high-dimensional image pixels as input, adapted from (Fraccaro et al., 2017); a sepsis medical simulator fitted from real-world data (Oberst & Sontag, 2019). Baselines For comparison, we mainly consider variants of the strong VAE baseline beta-VAE (Higgins et al., 2016), which works on non-time-dependent data instances. For representing the missing features, we adopt the zero-imputing method, proposed in (Nazabal et al., 2018) over the unobserved features. Thus, we denote the VAE baseline as NonSeq-ZI. We train the VAE with either the full loss over the entire features, or the partial loss which only applies to the observed features (Ma et al., 2019). We denoted our proposed sequential VAE model for POMDPs as Seq-PO-VAE. All the VAE-based approaches adopt an identical policy architecture. Detailed information on the model architecture is presented in appendix. Data Collection To pre-train the VAE models, data generated by a non-random policy is unavoidably needed to incorporate abundant dynamics information. For both tasks, we collect a small scale dataset of 2000 trajectories, where half of the data is collected from a random policy and the the other half from a policy which better captures the state space that would be encountered by a learned model (e.g., by training a data collection policy end-to-end or using human generated trajectories). The simple mixture of dataset works very well on both tasks without the need of further fine-tuning the VAEs. We also create a testing set that consists of 2000 trajectories to evaluate the models. 4.1 BOUNCING BALL+ Task Settings We adapted the original bouncing ball experiment presented in (Fraccaro et al., 2017) by adding a navigation objective and introducing control actions. Specifically, a ball moves in a 2D box and at each step, a binary image of size 32× 32 showing the box and the ball. Initially, the ball appears at a random position in the upper left quadrant, and has a random velocity. The objective is to control the ball to reach a fixed target location set at (5, 25). We incorporate five RL actions: a null action and four actions for changing the velocity of the ball in either the x or y direction with a fixed scale: {∆Vx : ±0.5, ∆Vy : ±0.5, null}. A reward of 1.0 is issued if the ball reaches its target location. Each episode runs up to 50 time steps. Representation Learning Results We evaluate the missing feature imputing performance of each VAE model in terms of negative log likelihood (nll) loss and present results in Table 1. We notice that our proposed model yields to significantly better imputing result than all the other baselines. 
This reveals the fact that our proposed sequential VAE model can efficiently capture the environment dynamics and learn meaningful information over the missing features. Such effect is vital in determining the policy training performance in AFA-POMDP, since the policy is conditioned on the VAE latent features. We also demonstrate sample trajectories reconstructed by different VAE models in the Appendix. The result shows that our model learns to impute significant amount of missing information given the partially observed sequence. Policy Training Results We evaluate the policy training performance in terms of episodic number of acquired observations and the task rewards (w/o cost). The results are presented in Figure 3 (a) and (b), respectively. First, we notice that the end-to-end method is vital and fails to learn task skills under the given feature acquisition cost. However, the VAE-based representation learning methods manage to learn the navigation skill under the same cost setting. This verifies our assumption that representation learning could bring significant benefit to the policy training under the AFA-POMDP scenario. Furthermore, we also notice that the joint policies trained by Seq-PO-VAE can develop the target navigation skill at a much faster pace than the non-sequential baselines. Our method also converges to a standard where much less feature acquisition is required to perform the task. We also show that our proposed method can learn meaningful feature acquisition policies. To this end, we show three sampled trajectories upon convergence of training in Figure 4. From the examples, we notice that our feature acquisition policy acquires meaningful features with a majority grasping the exact ball location. Thus, it demonstrates that the feature acquisition policy adapts to the dynamics of the problem and learns to acquire meaningful features. We also show the actively learned feature acquisition policy works better than random acquisition. From the results in Figure 4 (c), our method converges to better standard than random policies with considerably high selection probabilities. 4.2 SEPSIS MEDICAL SIMULATOR Task Setting Our second evaluation domain adopts a medical simulator for treating sepsis among ICU patients, proposed in (Oberst & Sontag, 2019). Overall, the task is to learn to apply three treatment actions to the patient, i.e, {antibiotic, ventilation, vasopressors}. The state space consists of 8 features: 3 of them indicate the current treatment state for the patient; 4 of them are the measurement states over heart rate, sysBP rate, percoxyg state and glucose level; the rest is an index specifying the patent’s diabetes condition. The feature acquisition policy learns to actively select the measurement features. Each episode runs for up to 30 steps. The patient will be discharged if his/her measurement states all return to normal values. An episode terminates upon mortality or discharge, with a reward −1.0 or 1.0. Representation Learning Result We evaluate the imputation performance for each VAE model on the testing dataset. The loss is evaluated in terms of MSE, presented in Table 1. Our proposed method leads to the lowest MSE loss compared to the baselines. The result reveals that our proposed sequential VAE could promisingly learn model dynamics for tasks with stochastic transitions. Policy Training Result We show the policy training results for Sepsis in Figure 5. Overall, our proposed method results in substantially better task reward compared to all baselines. 
Noticeably, the learning of discharge for our method progresses significantly faster than baseline approaches and converges to substantially better values. The result shows that our method can be trained in a much more sample efficient way. Moreover, upon convergence, our model outperforms the best non-sequential VAE baseline with a gap of > 5% for discharge ratio. For all the evaluation metrics, we notice that VAE-based representation learning models outperform the end-to-end baseline by significant margins. This indicates that efficient representation learning may be crucial for deriving satisfying task performance in AFA-POMDP setting. The result also reveals that learning to impute missing features contributes greatly to improve the policy training performance for AFA-POMDP. Ablation: Efficacy of Active Feature Acquisition We study the effect of actively learning sequential feature acquisition strategy with RL. To this end, we compare our method with a baseline that randomly acquires features. We evaluate our method under different cost values, and the results are shown in Figure 6. From the results, we notice that there is a clear cost-performance trade-off, i.e., a higher feature acquisition cost results in feature acquisition policies that obtain fewer observations, with a sacrifice of task performance. Overall, our acquisition method results in significantly better task performance than the random acquisition baselines. Noticeably, with the learned active feature acquisition strategy, we acquire only about half of the total number of features (refer to the value derived by Random-100%) to obtain comparable task performance. Also, we notice that the specified cost has a very clear impact on the final task performance, i.e., the number of acquired features per episode decreases significantly as the cost increases. Thereby, our proposed solution can promisingly compute feature acquisition policies that meet different budgets. Ablation: Impact on Total Acquisition Cost For different representation learning methods, we also investigate the total number of features acquired at different stage of training. The results are shown in Figure 7. As expected, to obtain better task policies, the models need to take longer training steps and thus the total feature acquisition cost would increases accordingly. We notice that policies trained by our method result in the highest convergent task performance (max x-value). Given a certain performance level (same x-value), our method consumes substantially less total feature acquisition cost (y-value) than the others. We also notice that the overall feature acquisition cost increases with a near exponential trend. Therefore, it is essential to train the policy for AFA-POMDP with advanced representation learning method, so that the feature acquisition cost could be reduced. 5 CONCLUSION We present a novel AFA-POMDP framework that jointly learns the task policy and the active feature acquisition strategy with a unified reinforcement learning formalism. We introduce a model-based sequential VAE model to facilitate policy training under partial observability. We demonstrate that imputing missing features via learning model dynamics could significantly benefit policy training with partial observability. 
Our proposed model, by efficiently synthesizing the sequential information to impute the missing features, can significantly outperform conventional representation learning baselines and leads to policy training with significantly better sample efficiency as well as obtained solutions. Future work may investigate whether our proposed model could be applied to more diverse and complex application domains. Another promising direction is to integrate our framework with model-based planning for further reducing the feature acquisition cost. ETHICS STATEMENT When deploying machine learning models in real-world applications, the fundamental assumption that the features used during training are always readily available during the deployment phase does not necessarily hold. Our work addresses the aforementioned problem via formulating a novel AFA-POMDP framework that extends the conventional instance-wise non-time-dependent active feature acquisition task to a more challenging time-dependent sequential decision making task. The sequential active feature acquisition module enables the decision making to be performed in a more cost-efficient way when partial features are accessed only during model deployment. Considering that the task of learning and applying machine learning models is rather problem specific, it is unlikely that our method can equally benefit all possible application scenarios. We also fully acknowledge the existence of risk in applying our model in sensitive and high risk domains, e.g., healthcare, and its potential bias if the model itself or the used representations are trained on biased data. In high risk settings, human supervision of the proposed model might be desired and the model is suggested to be mainly used for decision support systems. To alleviate the reliance on fully observed data during representation learning, it is very promising to trigger follow-up works studying data efficient sequential autoencoder training paradigms. APPENDIX This appendix is organized as follows: • Sec A: the detailed algorithm. • Sec B: experimental settings and additional results on the Bouncing Ball domain. • Sec C: experimental settings and additional results on the Sepsis domain. A RL WITH ACTIVE FEATURE ACQUISITION ALGORITHM Algorithm 1 RL with Active Feature Acquisition 1: Input: learning rate α > 0, dataset D 2: Initialize RL policy πf , πc, VAE parameters θ, φ. 3: Train VAE on dataset D using Eq (5). 4: while Not Converge do 5: Reset the environment. 6: Initialize null observation xp1 = Ø, feature acquisition action a f 0 and control action a c 0. 7: for i = 1 to T do 8: Compute representation with VAE: bt = qφ(x p ≤t,a<t). 9: Sample a feature acquisition action aft ∼ πf (bt) and a control action act ∼ πc(bt). 10: Step the environment and receive partial features, reward and terminal: xpt+1, rt, term ∼ env(aft ,a c t) 11: Compute cost ct = ∑ i c · I(a f(i) t ). 12: Save the transitions {bt,aft ,act , rt, ct, term}. 13: if term then 14: break 15: end if 16: end for 17: Update πf , πc using the saved transitions with an RL algorithm under learning rate α. 18: end while B BOUNCING BALL+ B.1 TASK SPECIFICATION The task consists of a ball moving in a 2D box of size 32×32 pixels. The radius of the ball equals to 2 pixels. At each step, a binary image is returned as an observation of the MDP state. At the beginning of every episode, the ball starts at a random position in the upper left quadrant (sampled uniformly). 
The initial velocity of the ball is randomly defined as follows: ~v = [Vx, Vy] = 4 · ~̃v/‖~̃v‖, where the x- and y-component of ~̃v are sampled uniformly from the interval [−0.5, 0.5]. There is a navigation target set at (5, 25) pixels, which is in the lower left quadrant. The navigation is considered to be successful if the ball reaches the specified target location within a threshold of 1 pixel along both x/y-axis. The action spaces is defined as follows. There are five task actions Ac: • Increase velocity leftwards, i.e., change Vx by −0.5 • Increase velocity rightwards, i.e., change Vx by +0.5 • Increase velocity downwards, i.e., change Vy by +0.5 • Increase velocity upwards, i.e., change Vy by −0.5 • Keep velocities unchanged The maximum velocity along the x/y-axis is 5.0. The velocity will stay unchanged if it exceeds this threshold. The feature acquisition action af ∈ Af is specified as acquiring the observation of a subset of the quadrants (this also includes acquiring the observation of all 4 quadrants). Thus, the agent can acquire 0− 4 quadrants to observe. Each episode runs up to 50 steps. The episode terminates if agent reaches the target location. B.2 IMPLEMENTATION DETAILS For all the compared methods, Zero-Imputing (Nazabal et al., 2018) is adopted to fill in missing features with a fixed value of 0.5. End-to-End The end-to-end model first processes the imputed image by 2 convolutional layers with filter sizes of 16 and 32, respectively. Each convolutional layer is followed by a ReLU activation function. Then the output is passed to a fully connected layer of size 1024. The weights for the fully connected layer are initialized by orthogonal weights initialization and the biases are initialized as zeros. NonSeq-ZI The non-sequential VAE models first process the imputed image by 2 convolutional layers with filter sizes of 32 and 64, respectively. Each convolutional layer is followed by a ReLU activation function. Then the output passes through a fully connected layer of size 256, followed by two additional fully connected layers of size 32 to generate the mean and variance of a Gaussian distribution. To decode an image, the sampled code first passes through a fully connected layer with size 256, followed by 3 convolutional layers with filters of 32, 32, and nc and strides of 2, 2 and 1, respectively, where nc is the channel size that equals to 2 for the binary image. There are two variants for NonSeq-ZI: one employs the partial loss that is only for the observed variables; the other employs the full loss that is computed on all the variables, i.e., the ground-truth image with full observation is employed as the target to train the model to impute the missing features. The hyperparameters for training NonSeq-ZI are summarized in Table 2. Seq-PO-VAE (ours) At each step, the Seq-PO-VAE takes an imputed image and an action vector of size 9 as input. The imputed image is processed by 3 convolutional layers with filter size 32 and stride 2. Each convolutional layer employs ReLU as its activation function. Then the output passes through a fully connected layer of size 32 to generate a latent representation for the image fx. The action vector passes through a fully connected layer of 32 to generate latent representation for the action fa. Then the image and action features are concatenated and augmented to form a feature vector fc = [fx, fa, fx ∗ fa], where [·] denotes concatenation of features. Then fc is fed to fully connected projection layers of size 64 and 32, respectively. 
The output is then fed to an LSTM module, with latent size of 32. The output ht of LSTM is passed to two independent fully connected layers of size 32 for each to generate the mean and variance for the Gaussian distribution filtered from the sequential inputs. To decode an image, the model adopts deconvolutional layers that are identical to those for NonSeq-ZI. The hyperparameters for training Seq-PO-VAE are shown in Table 2. LSTM-A3C We adopt LSTM-A3C (Mnih et al., 2016) to train the RL policy. The policy takes the features derived from the representation learning module as input. For the VAE-based methods, the input features are passed through a fully connected layer of size 1024. Then the features are fed to an LSTM with 1024 units. The output of the LSTM is fed to three independent fully connected layers to generate the estimations for value, task policy and feature acquisition policy. We adopt normalized column initialization for all the fully connected layers and the biases for the LSTM module are set to be zero. B.3 DATA COLLECTION To train the VAEs, we prepare a training set that consists of 2000 trajectories. Half of the trajectories are derived from a random policy and the other half is derived from a policy learned from end-to-end method. To train the end-to-end method, we employ a cost of 0.01 over first 2m steps and then increase it to 0.02 for the following 0.5m steps. All the VAE models are evaluated on a test dataset that has identical size and data distribution as the training dataset. We present the best achieved task performance of the data collection policy (End-to-End) and our representation learning approach in Table 5. We notice that our proposed method, by employing an advanced representation model, leads to significantly better feature acquisition policy than End-to-End (smaller number of observations while achieving similar or better reward). B.4 IMPUTING MISSING FEATURES VIA LEARNING MODEL DYNAMICS We present an illustrative example to demonstrate the process of imputing missing features and the role of learning model dynamics. To this end, we collect trajectories under an End-to-End policy (the choice of the underlying RL policy is not that important since we just want to derive some trajectory samples for the VAE models to reconstruct) and use different VAE models to impute the observations. From the results presented in Figure 9, we observe that under the partially observable setting with missing features, the latent representation derived from our proposed method provides abundant information as compared to only using information from a single time step and thereby offers significant benefit for the policy model to learn to acquire meaningful features/gain task reward. B.5 INVESTIGATION ON COST-PERFORMANCE TRADE-OFF We perform a case study on investigating the cost-performance trade-off for each representation learning method, presented in Figure 9. Apparently, as we increase the cost, the explorationexploitation task becomes more challenging and each compared method has its own upper bound on the cost above which it fails to learn an effective task policy while acquiring minimum observation. First, we notice that the End-to-End model takes a long time to progress in learning task skills, while the VAE-based models can progress much faster. 
Among the VAE-based methods, we notice that our proposed method (Figure 9(d)) can achieve as low as 8 observations, whereas the baselines NonSeq-ZI (Full) (Figure 9(b)) and NonSeq-ZI (partial) (Figure 9(c)) plateau at roughly 20 (lowest point among the solid lines). Thus, we conclude that our proposed approach significantly benefits cost-sensitive policy training and leads to a policy that acquires far fewer observations while still succeeding in terms of task performance. C SEPSIS MEDICAL SIMULATOR C.1 TASK SPECIFICATIONS For this task we employ a Sepsis simulator proposed in previous work (Oberst & Sontag, 2019). The task is to learn to apply three treatment actions for Sepsis patients in intensive care units, i.e., Ac = {antibiotic, ventilation, vasopressors}. At each time step, the agent selects a subset of the treatment actions to apply. The state space consists of 8 features: 3 of them specify the current treatment status; 4 of them specify the measurement status in terms of heart rate, sysBP rate, percoxyg state and glucose level; the remaining one is a categorical feature indicating the patient's diabetes condition. The feature acquisition policy actively selects a subset of the measurement features for observation, i.e., Af = {heart rate, sysBP rate, percoxyg state, glucose level}. The objective of learning an active feature acquisition strategy is to help the decision making system reduce measurement cost substantially. C.2 IMPLEMENTATION DETAILS For all the compared methods, we adopt Zero-Imputing (Nazabal et al., 2018) to fill in missing features. In particular, a fixed value of -10, which lies outside the range of feature values, is used to impute missing values. End-to-End The end-to-end model first processes the imputed state by 3 fully connected layers of size 32, 64 and 32, respectively. Each fully connected layer is followed by a ReLU activation. NonSeq-ZI The VAE model first processes the imputed state by 2 fully connected layers of size 32 and 64, where the first fully connected layer is followed by a ReLU activation. Then the output is fed into two independent fully connected layers of size 10 each to generate the mean and variance of the Gaussian distribution. To decode the state, the latent code is first processed by a fully connected layer of size 64, then fed into three fully connected layers of size 64, 32, and 8. The intermediate fully connected layers employ ReLU activation functions. Also, we adopt two variants of NonSeq-ZI, trained under either the full loss or the partial loss. The details of the hyperparameter settings used for training are presented in Table 4. Seq-PO-VAE (ours) At each time step, the inputs for state and action are first processed by their corresponding projection layers. The projection layers for the state consist of 3 fully connected layers of size 32, 16 and 10, where the intermediate fully connected layers are followed by a ReLU activation function. The projection layer for the action input is a fully connected layer of size 10. Then the projected state feature fx and action feature fa are combined as fc = [fx, fa, fx ∗ fa]. fc is passed to 2 fully connected layers of size 64 and 32 to form the input to the LSTM module. The output ht of the LSTM is fed to two independent fully connected layers of size 10 to generate the mean and variance of the Gaussian distribution. The decoder for Seq-PO-VAE has the same architecture as that of NonSeq-ZI. 
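The two NonSeq-ZI training variants above differ only in which entries of the state enter the reconstruction term: the partial loss supervises the observed entries only, while the full loss also supervises the imputation of the unobserved entries (the same split reappears in the supervised term of the sequential ELBO in Eq. (5)). Below is a hedged sketch, assuming a per-entry squared-error reconstruction and a binary observation mask; the function names and the squared-error choice are ours for illustration.

```python
import torch

def reconstruction_loss(x_true, x_recon, observed_mask, variant="full"):
    """Masked reconstruction loss for NonSeq-ZI (C.2); illustrative, not the authors' code.

    x_true, x_recon: tensors of shape (batch, 8) for the Sepsis state.
    observed_mask:   boolean tensor of the same shape; True where the feature was acquired.
    variant:         "partial" supervises only observed entries,
                     "full" also supervises the unobserved (imputed) entries.
    """
    per_entry = (x_recon - x_true) ** 2
    if variant == "partial":
        per_entry = per_entry * observed_mask.float()
    return per_entry.sum(dim=-1).mean()

def kl_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), the regularizer used alongside the reconstruction term."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
```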
The details for training Seq-PO-VAE are presented in Table 4. LSTM-A3C The LSTM-A3C (Mnih et al., 2016) takes the encoded state features derived from the corresponding representation model as its input. The encoded features are fed into an LSTM of size 256. The LSTM output ht is then fed to three independent fully connected layers to predict the state value, the feature acquisition policy and the task policy. Normalized column initialization is applied to all fully connected layers. The biases for the LSTM and fully connected layers are initialized as zero. C.3 DATA COLLECTION To train the VAEs, we prepare a training set that consists of 2000 trajectories. Half of the trajectories are derived from a random policy and the other half is derived from a policy learned with the End-to-End method with cost 0.0. All the VAE models are evaluated on a test dataset with the same size and data distribution as the training dataset. We present the task treatment reward obtained by our data collection policy derived from the End-to-End method and that obtained by our proposed method in Table 5. Notably, by performing representation learning, we obtain a much better treatment reward compared to the data collection policy, which demonstrates the necessity of performing representation learning. C.4 MORE COMPARISON RESULTS UNDER DIFFERENT VALUES FOR COST We present additional experimental results that compare our proposed method and the non-sequential baselines under the cost values {0, 0.025}. The results for a cost value of 0.01 are shown in the main paper. Overall, under all the cost settings, our method leads to a significantly better discharge ratio and task reward compared to the baselines. We also demonstrate the cost-performance trade-off on the Sepsis domain. By increasing the value of the cost, we obtain feature acquisition policies that acquire substantially fewer features within each episode. C.5 ILLUSTRATIVE EXAMPLES FOR MISSING FEATURE IMPUTATION IN Sepsis We present two illustrative examples in Figure 12 to demonstrate how imputing missing features via learning model dynamics helps decision making under partial observability in the Sepsis domain. The policy training process with partial observability can only access very limited information, due to the use of active feature acquisition. Under such circumstances, imputing the missing features offers much more abundant information to the decision making process. From the results shown in Figure 12, our model demonstrates considerable accuracy in imputing the missing features, even though the imputation task is extremely challenging given the distribution shift between the data collection policy and the online policy. The imputed information is greatly beneficial for training the task policy and the feature acquisition policy.
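As a companion to the LSTM-A3C descriptions in B.2 and C.2, the following is a minimal PyTorch-style sketch of the recurrent actor-critic with its three output heads (value, task policy, feature acquisition policy). It is not the authors' implementation: the categorical parameterization over action subsets and the head shapes are assumptions, the sketch uses the Sepsis LSTM size of 256, and the normalized column initialization mentioned in the text is omitted.

```python
import torch
import torch.nn as nn

class LSTMA3CHeads(nn.Module):
    """Three-head actor-critic on top of the learned representation (B.2 / C.2); a sketch only."""

    def __init__(self, feat_dim, n_task_actions, n_acq_actions, lstm_size=256):
        super().__init__()
        self.rnn = nn.LSTMCell(feat_dim, lstm_size)
        self.value_head = nn.Linear(lstm_size, 1)            # state value estimate
        self.task_head = nn.Linear(lstm_size, n_task_actions)  # e.g. subsets of the 3 treatments
        self.acq_head = nn.Linear(lstm_size, n_acq_actions)    # e.g. subsets of the 4 measurements

    def forward(self, belief, state=None):
        h, c = self.rnn(belief, state)
        value = self.value_head(h)
        task_dist = torch.distributions.Categorical(logits=self.task_head(h))
        acq_dist = torch.distributions.Categorical(logits=self.acq_head(h))
        return value, task_dist, acq_dist, (h, c)
```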
1. What is the focus of the paper regarding reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its performance and problem significance? 3. What are the weaknesses of the paper, including limitations in the experimental domains and assumptions made by the proposed method? 4. Do you have any questions or concerns regarding the paper's content or conclusions?
Summary Of The Paper Review
Summary Of The Paper The authors study the problem of reinforcement learning in environments where the agent can spend some reward in order to gain access to observations. The authors introduce a generalization of a POMDP, which they call an AFA-POMDP for Active Feature Acquisition, that divides the action into two pieces: an action for control and an action for feature acquisition. The authors' solution approach begins with fully observed trajectories that are used to train a sequential VAE as an inference model. Then, using the pre-trained VAE, an RL algorithm jointly learns the control and feature acquisition policies. The experiments are on a synthetic "bouncing ball" task where there are five discrete control actions that hit the ball from different directions and the feature acquisition chooses which quadrants of the space to acquire. There is also a sepsis task with three discrete actions and 8 features that correspond to measurements of the patient. Review Strengths: The authors' approach outperforms the other VAE baselines, both in terms of reward and in terms of MSE in inferring the unobserved state. The problem of joint control and feature acquisition is interesting and important. Weaknesses: The authors' claim that AFA-POMDPs generalize POMDPs is false; an AFA-POMDP is a special case of a POMDP. An example of past work using POMDPs for feature acquisition can be found here: Shi and Cain. Cost-sensitive feature acquisition and classification. 2007. The authors' approach of using a neural network-based model to infer the belief state in a POMDP was done here: Karkus et al. QMDP-Net: Deep Learning for Planning under Partial Observability, 2017. The domains seem very simple, like they probably could be solved with planning approaches like QMDP-net. They don't really show off the benefit of using RL. Despite having much higher MSE, the baselines are very competitive on the tasks. NonSeq-ZI (partial) is almost as good as the proposed method on bouncing ball and NonSeq-ZI (full) performs better in terms of mortality on sepsis, but this is not mentioned. The authors' approach makes a quite strong assumption about having access to fully observed data for training. The paper doesn't really discuss this as a limitation. Post-response: The authors have addressed some of my concerns, and I have raised my score to a 5. I still believe the paper is not ready for publication.
ICLR
Title Reinforcement Learning with Efficient Active Feature Acquisition Abstract Solving real-life sequential decision making problems under partial observability involves an exploration-exploitation problem. To be successful, an agent needs to efficiently gather valuable information about the state of the world for making rewarding decisions. However, in real world applications, acquiring valuable information is often highly costly, e.g., in the medical domain, information acquisition might correspond to performing a medical test on a patient. This poses a significant challenge for the agent to learn optimal policy for the task. In this paper, we propose a model-based reinforcement learning framework that learns a policy which solves this exploration-exploitation problem during its execution. Key to the success is a novel sequential variational autoencoder that learns high-quality representations from partially observed states, which are then used by the policy to maximize the task reward in a cost-efficient manner. We demonstrate the efficacy of our proposed framework in a control domain as well as using a medical simulator. In both tasks, our proposed method outperforms conventional baselines and results in policies with greater cost efficiency. 1 INTRODUCTION Recently, machine learning models for automated sequential decision making have shown remarkable success across many application areas, such as visual recognition (Mathe et al., 2016; Das et al., 2017), robotics control (Finn et al., 2016; Zhang et al., 2018), medical diagnosis (Ling et al., 2017; Peng et al., 2018) and computer games (Mnih et al., 2015; Silver et al., 2016). One fundamental reason that drives the success of such models and enables them to outperform classical algorithms is the availability of large amounts of training data. Typically such training data is either fully observed or the features stem from an action-independent observation model (which clearly can depend on the state of the system). However, the fundamental assumption that the same features are always readily available during deployment could not hold in many real-world applications. For instance, consider a medical support system for monitoring and treating patients during their stay at hospital which was trained on rich historical medical data. To provide the best possible treatment, the system might need to perform several measurements of the patient over time, while some of them could be costly or even pose a health risk. Therefore, during deployment, it is more ideal that the system could function with minimal features while during training more features might have been available. In such cases, we are interested in decision making models that actively take the measurement process, i.e., feature acquisition, into account and only acquire the information relevant for making a decision. In this paper, we consider the challenging problem of learning effective policies when the cost of information acquisition cannot be neglected. To be successful,we need to learn policies which acquires the information required for solving a task in the cheapest way possible. 
For simplicity, we can think of the policy as being constituted of an acquisition policy which actively selects meaningful features to be observed and a task policy, which selects actions to change the state of the system towards some goal.1 As such, we consider a partially observable learning problem with the following two distinguishing properties compared to the most commonly studied problems (see also Figure 3.2 for an illustration). (i) By incorporating active feature acquisition, the training of the task policy is based upon subsets of features only, i.e., there are missing features, where the missingness is 1Clearly, these two policies are not independent in general, e.g., acquiring features can change the state of the system. observe action acquire (e.g. navigation) (e.g. medical treatments) observe action controlled by the acquisition policy. Thus, the resulting POMDP is different from the conventional POMDPs in RL literature (Cassandra, 1998) where the partial observability for later stems from a fixed and action-independent observation model. Also, the state transitions in conventional POMDPs are only determined by the choice of the task action, whereas in our setting the state-transition is affected by both the task action and the feature acquisition choice. (ii) The learning of the acquisition policy introduces an additional dimension to the exploration-exploitation problem: each execution of the policy needs to solve an exploration-exploitation problem, and thus we often need to learn sophisticated policies. Most reinforcement learning research has not taken active feature acquisition into consideration. In this work, we propose a unified approach that jointly learns a policy for optimizing the task reward while performing active feature acquisition. Although some of the prior works have exploited the use of reinforcement learning for sequential feature acquisition tasks (Shim et al., 2018; Zannone et al., 2019), they considered variable-wise information acquisition in a static setting only, corresponding to feature selection for non-time-dependent prediction tasks. However, our considered setting is truly time-dependent and feature acquisitions need to be made at each time step while the state of the system evolves simultaneously. As such, both the model dynamics of the underlying MDP and the choice of feature acquisition introduce considerable challenges to the learning of the sequential feature acquisition strategy. Due to the challenge of the exploration-exploitation problem, it is a non-trivial task to jointly learn the two policies. The conventional end-to-end approaches often result in inferior solutions in complex scenarios. Ideally, policies based on high-quality representations would be easier for the algorithm to search for better solutions through exploration-exploitation. Therefore, our proposed framework also tackles the joint policy training task from a representation learning perspective. Specifically, we introduce a representation learning model that not only encodes the sequential partially observed information into its latent features, but also efficiently imputes the unobserved features to offer more meaningful information for the policy training. To this end, we formulate a sequential generative model that can efficiently learn model dynamics during representation learning. 
Overall, the contributions of our paper are three-fold: • We propose an approach for learning sequential decision making policies with active feature acquisition through a unified reinforcement learning framework. Our proposed approach simultaneously learns policies for reward optimization and active feature acquisition. • We present a novel sequential representation learning approach to account for the encoding of the partially observed states. Our proposed approach is based on variational autoencoders (VAE) with amortized inference. The imputation of the unobserved features is achieved via learning the model dynamics. • We demonstrate our proposed framework can be applied to various applications. We conduct extensive experiments on an image-based control task as well as a medical simulator fitted from real-life data where our method shows clear improvements over conventional baselines. 2 RELATED WORK In this work, we integrate active learning with reinforcement learning to accomplish the policy training task while attempting to acquire fewest observed features as possible. We thus review related methods on active feature acquisition and representation learning for POMDP, respectively. 2.1 ACTIVE FEATURE ACQUISITION Our work draws motivation from the existing instance-wise active feature selection approaches. One category of the instance-wise feature selection methods consider feature acquisition as a one time effort to select a subset of features as a whole. One typical example is the conventional linear model that poses sparsity inducing prior distribution to the model (Tibshirani, 1996). Recently, there also emerged approaches that adopt reinforcement learning to actively find optimal feature subsets (Yoon et al., 2018; Shim et al., 2018; Zannone et al., 2019). Though such attempts have demonstrated certain efficacy in handling non time-series instance-wise data, they do not suffice for handling sequential dataset. There is also an alternative category that models feature acquisition as a Bayesian experimental design (Ma et al., 2019; Gong et al., 2019). However, the sequential decision making is for variable-wise feature acquisition and the problems are still non time-series tasks in nature. The key difference between all the aforementioned approaches and ours is that we tackle active feature acquisition problems with time-series data, where an active feature selection decision needs to be formed at each time step along the multi-step reinforcement learning trajectory. Therefore, the feature acquisition for our presented work needs to consider more complex information over model dynamics and control, apart from the static instance-wise features. 2.2 REPRESENTATION LEARNING IN POMDP In complex tasks, policies trained upon different representations can even converge to different performance levels. Most conventional deep reinforcement learning approaches unifies the process of representation learning with policy training and results in policies trained in an end-to-end fashion (Mnih et al., 2013; Lillicrap et al., 2016; Mnih et al., 2016). However, to accomplish the representation learning task, such models often engage trainable parameters which could come with considerable size and thereby result in significant degradation in sample efficiency. When considering problems with POMDPs where the state space is partially accessible to the agent, representation learning becomes an important and non-trivial research challenge. 
Among the existing literature, one prominent line of research tackles the representation learning for POMDP in an off-line fashion and thus resulting in multi-stage reinforcement learning. Higgins et al. (2016; 2017) adopt pretrained VAE models as a representation module to build agents with strong domain adaptation performance. The key difference between their work and ours is that they encode instance-wise image frames from POMDP domains where each image presents a partial view over the task environment, while our work considers cost-sensitive reinforcement learning with distinct partial observability, i.e., the feature-level information is missing at each time step for the agent. We thus adopt a sequential representation learning approach to infer a more representative state information. Recently, there also emerged several works on sequential representation learning for POMDP (Gregor et al., 2019; Vezzani et al., 2019). However, most of the works utilize VAE training as an auxiliary task to jointly update the representation model with the policy learning loss. In our work, due to the high acquisition cost to observe the features, we adopt an off-line representation learning setting. Also, our proposed representation learning is model-based, where the model learns to impute the missing features with such attempt yielding significant benefit to derive high-quality representation for policy training. 3 METHODOLOGY 3.1 TASK SETTING In this section, we formally define the problem settings for the task of jointly learning the task and feature acquisition policy. To this end, we define the active feature acquisition POMDP, a rich class of discrete-time stochastic control processes generalizing standard POMDPs: Definition 1 (AFA-POMDP). The active feature acquisition POMDP is a tuple M = 〈S,A, T ,O,R, C, γ〉, where S is the state space and A = (Af ,Ac) is a joint action space of feature acquisition actionsAf and control actionsAc. The transition kernel T : S ×Ac×Af → PS maps any joint action a = (af ,ac) in state s ∈ S to a distribution PS over next states. In each state s, when taking action af , the agent observes xp = x(af ), i.e., a subset of the features x = (xp,xu) ∼ O(s) indicated by af , whereO(s) is a distribution over possible feature observation for state s and xu are features not observed by the agent. When taking a joint action, the agent obtains rewards according to the reward functionR : S ×Ac → R and pays a cost of C : S ×Af → R+ for feature acquisition. Rewards and costs are discounted by the discount factor γ ∈ [0, 1). Simplifying assumptions For simplicity, we assume that x consists of a fixed number of features Nf for all states, that Af = 2[Nf ] is the power set of all the Nf features, and that xp(af ) consists of all the features in x indicated by the subset af ∈ Af . Note that the feature acquisition action for a specific application can take various different forms. For instance, in our experiments in Section 4, for the Sepsis task, we define feature acquisition as selecting a subset over possible measurement tests, whereas for the Bouncing Ball+ task, we divide an image into four observation regions and let the feature acquisition policy select a subset of observation regions (rather than raw pixels). Please also note that while in a general AFA-POMDP, the transition between two states depends on the joint action, we assume in the following that it depends only on the control action, i.e., T (s,ac,af ′) = T (s,ac,af ) for all af ′ ,af ∈ Af . 
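Definition 1, together with the simplifying assumptions just stated (bit-vector acquisition actions, a uniform per-feature cost c, and transitions driven by the control action only), can be read as an environment interface plus a cost-adjusted return. A minimal Python sketch is given below; the class and function names are ours, and the base-environment API is a placeholder rather than anything specified in the paper.

```python
import numpy as np

class AFAPOMDPEnv:
    """Skeleton of the AFA-POMDP interface in Definition 1 (illustrative, not the authors' code)."""

    def __init__(self, base_env, n_features, cost_per_feature):
        self.base_env = base_env          # underlying dynamics; its API is a placeholder
        self.n_features = n_features      # N_f
        self.c = cost_per_feature         # uniform per-feature acquisition cost c

    def step(self, control_action, acquire):
        """control_action: a^c; acquire: length-N_f 0/1 vector encoding a^f."""
        x_full, reward, done = self.base_env.step(control_action)   # assumed base API
        acquire = np.asarray(acquire, dtype=bool)
        x_partial = np.where(acquire, x_full, np.nan)   # unobserved features are hidden
        cost = self.c * acquire.sum()                   # C(s, a^f) = c * |a^f|
        return x_partial, reward, cost, done

def cost_adjusted_return(rewards, costs, gamma):
    """Discounted, cost-adjusted return that the joint policy maximizes (Eq. (1) below)."""
    return sum((gamma ** t) * (r - k) for t, (r, k) in enumerate(zip(rewards, costs)))
```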
While not true for all possible applications, this assumption can be a reasonable approximation for instance for medical settings in which tests are non-invasive. For simplicity we furthermore assume that acquiring each feature has the same cost, denoted as c, i.e., C(af , s) = c |af |, but our approach can be straightforwardly adapted to have different costs for different feature acquisitions. Objective We aim to learn a policy which trades off reward maximization and the cost for feature acquisition by jointly optimizing a task policy πc and a feature acquisition policy πf . That is, we aim to solve the optimization problem max πf ,πc E ∞∑ t=0 γt ( R(xt,act)− |Af |∑ i c · I (af(i)t ) ), (1) where the expectation is over the randomness of the stochastic process and the policies, af(i)t denotes the i-th feature acquisition action at timestep t, and I (·) is an indicator function whose value equals to 1 if that feature has been acquired. Note that the above optimization problem is very challenging: an optimal solution needs to maintain beliefs bt over the state of the system at time t which is a function of partial observations obtained so far. Both the the feature acquisition policy πf (aft | bt) and the task policy i.e., πc(act | bt) depend on this belief. The information in the belief itself can be controlled by the feature acquisition policy through querying subsets from the features xt and hence the task policy and feature acquisition policy itself strongly depend on effectiveness of the feature acquisition policy. 3.2 SEQUENTIAL REPRESENTATION LEARNING WITH PARTIAL OBSERVATIONS We introduce a sequential representation learning approach to facilitate the task of policy training with active feature acquisition. Let x1:T = (x1, ...,xT ) and a1:T = (a1, ...,aT ) denote a sequence of observations and actions, respectively. Alternatively, we also denote these sequences as x≤T and a≤T . Overall, our task of interest is to train a sequential representation learning model to learn the distribution of the full sequential observations x1:T , i.e., for both the observed part x p 1:T and the unobserved part xu1:T . Given only partial observations, we can perform inference only with the observed features xp1:T . Therefore, our proposed approach extends the conventional unsupervised representation learning task to a supervised learning task, which learns to impute the unobserved features by synthesizing the acquired information and learning the model dynamics. As such, the key underlying assumption is that learning to impute the unobserved features would result in better representations which can be leveraged by the task policy. And performing sequential representation learning, as we propose, is a more adequate choice than non-sequential modeling, for our task of interest with partial observability. Furthermore, unlike many conventional sequential representation learning models for reinforcement learning that only reason over the observation sequence xp1:T , in our work, we take into account both the observation sequence x p 1:T and the action sequence a1:T for conducting inference. The intuition is that since x p 1:T by itself consists of very limited information over the agent’s underlying MDP state, incorporating the action sequence would be an informative add-on to the agent’s acquired information to infer the belief state. 
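The filtering structure just described, in which the belief at step t is built from the partial observations x^p_{≤t} and the past actions a_{<t}, can be sketched as a simple unroll; filter_cell stands for any single-step filter such as the SeqPOVAEFilter sketch shown later in the appendix-style examples, and the function name is ours.

```python
import torch

def filter_beliefs(filter_cell, partial_obs_seq, action_seq):
    """Unroll the autoregressive filter q_phi over a partially observed trajectory (a sketch).

    partial_obs_seq: imputed observations x^p_t for t = 1..T
    action_seq:      previous actions a_{t-1}, with a fixed null action at t = 1
    filter_cell:     module returning (mean, log-variance, recurrent state) per step
    Returns the belief states b_t, i.e. the means of q_phi(z_t | x^p_<=t, a_<t).
    """
    beliefs, state = [], None
    for x_t, a_prev in zip(partial_obs_seq, action_seq):
        mu, logvar, state = filter_cell(x_t, a_prev, state)
        beliefs.append(mu)            # b_t = mean of the filtered Gaussian
    return torch.stack(beliefs)
```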
To summarize, our proposed sequential representation model learns to encode x^p_{1:T} and a_{1:T} into meaningful latent features for predicting x^p_{1:T} and x^u_{1:T}. The architecture of our proposed sequential representation learning model is shown in Figure 2. (Figure 2: Observation decoder and belief inference model for the partially observable sequential VAE. Shaded nodes represent the observed variables. The inference model filters information over the partial observations and actions, to predict both the observed and unobserved features.) Observation Decoder Let z_{1:T} = (z_1, ..., z_T) denote a sequence of latent states. We consider the following probabilistic model:
p_\theta(x^p, x^u, z) = \prod_{t=1}^{T} p(x^p_t, x^u_t \mid z_t)\, p(z_t), (2)
For simplicity of notation, we assume z_0 = 0. We impose a simple prior distribution over z, i.e., a standard Gaussian prior, instead of incorporating a learned prior distribution over the latent space of z, such as an autoregressive prior distribution like p(z_t \mid z_{t-1}, x^p_{1:t}, a_{0:t-1}). The reason is that using a static prior distribution results in a latent representation z_t that is more strongly regularized and normalized than one obtained with a learned prior that changes stochastically over time. This is crucial for deriving stable policy training performance. At time t, the generation of data x^p_t and x^u_t depends on the corresponding latent variable z_t. Given z_t, the observed variables are conditionally independent of the unobserved ones. Therefore,
p(x^p_t, x^u_t \mid z_t) = p(x^p_t \mid z_t)\, p(x^u_t \mid z_t). (3)
Belief Inference Model During policy training we only assume access to partially observed data. This requires an inference model which takes in the past observation and action sequences to infer the latent states z. Specifically, we present a structured inference network q_\phi as shown in Figure 2, which has an autoregressive structure:
q_\phi(z \mid x, a) = \prod_t q_\phi(z_t \mid x^p_{\le t}, a_{<t}), (4)
where q_\phi(\cdot) is a function that aggregates the filtering posteriors of the history of observation and action sequences. Following the common practice in the existing sequential VAE literature, we adopt a forward RNN model as the backbone for the filtering function q_\phi(\cdot) (Gregor et al., 2019). Specifically, at step t, the RNN processes the encoded partial observation x^p_t, action a_{t-1} and its past hidden state h_{t-1} to update its hidden state h_t. Then the latent distribution z_t is inferred from h_t. The belief state b_t is defined as the mean of the distribution of z_t. By solving this supervised learning task, the belief state provides rich information about not only the observed sequential features but also the missing ones, so that a policy trained on it can progress faster and converge to better performance. Learning We propose to pre-train both the generative and inference models offline before learning the RL policies. In this case, we assume access to the unobserved features, so that we can construct a supervised learning task to learn to impute unobserved features. Concretely, the pre-training updates the parameters \theta, \phi by maximizing the following variational lower bound (Jordan et al., 1999; Kingma & Welling, 2013):
\log p(x^p, x^u) \ge \mathbb{E}_{q_\phi}\Big[\sum_t \log p_\theta(x^p_t \mid z_t) + \log p_\theta(x^u_t \mid z_t) - \mathrm{KL}\big(q_\phi(z_t \mid x^p_{\le t}, a_{<t}) \,\|\, p(z_t)\big)\Big] = \mathrm{ELBO}(x^p, x^u). 
(5) By incorporating the term log pθ(xut |zt), the training of sequential VAE generalizes from an unsupervised task to a supervised task that learns the model dynamics from past observed transitions and imputes the missing features. We perform multi-stage reinforcement learning to jointly learn the feature acquisition policy and the task policy. The VAE model is pretrained and kept fixed during policy learning. The reason for not updating VAE online is that computing the loss in Eq (5) would require the access to unobserved features and therefore, is cost intensive. The pseudocode for our proposed method is in Appendix A. 4 EXPERIMENTS We examine the characteristics of our proposed model in the following two experimental domains: a bouncing ball control task with high-dimensional image pixels as input, adapted from (Fraccaro et al., 2017); a sepsis medical simulator fitted from real-world data (Oberst & Sontag, 2019). Baselines For comparison, we mainly consider variants of the strong VAE baseline beta-VAE (Higgins et al., 2016), which works on non-time-dependent data instances. For representing the missing features, we adopt the zero-imputing method, proposed in (Nazabal et al., 2018) over the unobserved features. Thus, we denote the VAE baseline as NonSeq-ZI. We train the VAE with either the full loss over the entire features, or the partial loss which only applies to the observed features (Ma et al., 2019). We denoted our proposed sequential VAE model for POMDPs as Seq-PO-VAE. All the VAE-based approaches adopt an identical policy architecture. Detailed information on the model architecture is presented in appendix. Data Collection To pre-train the VAE models, data generated by a non-random policy is unavoidably needed to incorporate abundant dynamics information. For both tasks, we collect a small scale dataset of 2000 trajectories, where half of the data is collected from a random policy and the the other half from a policy which better captures the state space that would be encountered by a learned model (e.g., by training a data collection policy end-to-end or using human generated trajectories). The simple mixture of dataset works very well on both tasks without the need of further fine-tuning the VAEs. We also create a testing set that consists of 2000 trajectories to evaluate the models. 4.1 BOUNCING BALL+ Task Settings We adapted the original bouncing ball experiment presented in (Fraccaro et al., 2017) by adding a navigation objective and introducing control actions. Specifically, a ball moves in a 2D box and at each step, a binary image of size 32× 32 showing the box and the ball. Initially, the ball appears at a random position in the upper left quadrant, and has a random velocity. The objective is to control the ball to reach a fixed target location set at (5, 25). We incorporate five RL actions: a null action and four actions for changing the velocity of the ball in either the x or y direction with a fixed scale: {∆Vx : ±0.5, ∆Vy : ±0.5, null}. A reward of 1.0 is issued if the ball reaches its target location. Each episode runs up to 50 time steps. Representation Learning Results We evaluate the missing feature imputing performance of each VAE model in terms of negative log likelihood (nll) loss and present results in Table 1. We notice that our proposed model yields to significantly better imputing result than all the other baselines. 
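The joint training referenced above, with a pre-trained and frozen VAE providing belief states to both policies and pseudocode in Appendix A, can be summarized by the following hedged Python sketch of a single episode. All environment, VAE, and policy APIs here are placeholders, not the authors' code; the environment is assumed to follow the AFAPOMDPEnv-style interface sketched in Section 3.1.

```python
def run_episode(env, vae, policy, max_steps=50):
    """One rollout of the joint policy with a frozen, pre-trained VAE (cf. Algorithm 1, Appendix A)."""
    transitions = []
    obs, prev_action, rnn_state = env.reset(), None, None   # null observation / action at t = 0
    for _ in range(max_steps):
        belief, rnn_state = vae.filter(obs, prev_action, rnn_state)  # b_t from q(z_t | x^p_<=t, a_<t)
        acq_action, task_action = policy.sample(belief)
        obs, reward, cost, done = env.step(task_action, acq_action)
        transitions.append((belief, acq_action, task_action, reward, cost, done))
        prev_action = (acq_action, task_action)
        if done:
            break
    return transitions

# The saved transitions are then used for an A3C-style update of the joint policy,
# with the per-step signal r_t - c * |a^f_t|; the VAE parameters stay fixed throughout.
```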
This reveals the fact that our proposed sequential VAE model can efficiently capture the environment dynamics and learn meaningful information over the missing features. Such effect is vital in determining the policy training performance in AFA-POMDP, since the policy is conditioned on the VAE latent features. We also demonstrate sample trajectories reconstructed by different VAE models in the Appendix. The result shows that our model learns to impute significant amount of missing information given the partially observed sequence. Policy Training Results We evaluate the policy training performance in terms of episodic number of acquired observations and the task rewards (w/o cost). The results are presented in Figure 3 (a) and (b), respectively. First, we notice that the end-to-end method is vital and fails to learn task skills under the given feature acquisition cost. However, the VAE-based representation learning methods manage to learn the navigation skill under the same cost setting. This verifies our assumption that representation learning could bring significant benefit to the policy training under the AFA-POMDP scenario. Furthermore, we also notice that the joint policies trained by Seq-PO-VAE can develop the target navigation skill at a much faster pace than the non-sequential baselines. Our method also converges to a standard where much less feature acquisition is required to perform the task. We also show that our proposed method can learn meaningful feature acquisition policies. To this end, we show three sampled trajectories upon convergence of training in Figure 4. From the examples, we notice that our feature acquisition policy acquires meaningful features with a majority grasping the exact ball location. Thus, it demonstrates that the feature acquisition policy adapts to the dynamics of the problem and learns to acquire meaningful features. We also show the actively learned feature acquisition policy works better than random acquisition. From the results in Figure 4 (c), our method converges to better standard than random policies with considerably high selection probabilities. 4.2 SEPSIS MEDICAL SIMULATOR Task Setting Our second evaluation domain adopts a medical simulator for treating sepsis among ICU patients, proposed in (Oberst & Sontag, 2019). Overall, the task is to learn to apply three treatment actions to the patient, i.e, {antibiotic, ventilation, vasopressors}. The state space consists of 8 features: 3 of them indicate the current treatment state for the patient; 4 of them are the measurement states over heart rate, sysBP rate, percoxyg state and glucose level; the rest is an index specifying the patent’s diabetes condition. The feature acquisition policy learns to actively select the measurement features. Each episode runs for up to 30 steps. The patient will be discharged if his/her measurement states all return to normal values. An episode terminates upon mortality or discharge, with a reward −1.0 or 1.0. Representation Learning Result We evaluate the imputation performance for each VAE model on the testing dataset. The loss is evaluated in terms of MSE, presented in Table 1. Our proposed method leads to the lowest MSE loss compared to the baselines. The result reveals that our proposed sequential VAE could promisingly learn model dynamics for tasks with stochastic transitions. Policy Training Result We show the policy training results for Sepsis in Figure 5. Overall, our proposed method results in substantially better task reward compared to all baselines. 
Noticeably, the learning of discharge for our method progresses significantly faster than baseline approaches and converges to substantially better values. The result shows that our method can be trained in a much more sample efficient way. Moreover, upon convergence, our model outperforms the best non-sequential VAE baseline with a gap of > 5% for discharge ratio. For all the evaluation metrics, we notice that VAE-based representation learning models outperform the end-to-end baseline by significant margins. This indicates that efficient representation learning may be crucial for deriving satisfying task performance in AFA-POMDP setting. The result also reveals that learning to impute missing features contributes greatly to improve the policy training performance for AFA-POMDP. Ablation: Efficacy of Active Feature Acquisition We study the effect of actively learning sequential feature acquisition strategy with RL. To this end, we compare our method with a baseline that randomly acquires features. We evaluate our method under different cost values, and the results are shown in Figure 6. From the results, we notice that there is a clear cost-performance trade-off, i.e., a higher feature acquisition cost results in feature acquisition policies that obtain fewer observations, with a sacrifice of task performance. Overall, our acquisition method results in significantly better task performance than the random acquisition baselines. Noticeably, with the learned active feature acquisition strategy, we acquire only about half of the total number of features (refer to the value derived by Random-100%) to obtain comparable task performance. Also, we notice that the specified cost has a very clear impact on the final task performance, i.e., the number of acquired features per episode decreases significantly as the cost increases. Thereby, our proposed solution can promisingly compute feature acquisition policies that meet different budgets. Ablation: Impact on Total Acquisition Cost For different representation learning methods, we also investigate the total number of features acquired at different stage of training. The results are shown in Figure 7. As expected, to obtain better task policies, the models need to take longer training steps and thus the total feature acquisition cost would increases accordingly. We notice that policies trained by our method result in the highest convergent task performance (max x-value). Given a certain performance level (same x-value), our method consumes substantially less total feature acquisition cost (y-value) than the others. We also notice that the overall feature acquisition cost increases with a near exponential trend. Therefore, it is essential to train the policy for AFA-POMDP with advanced representation learning method, so that the feature acquisition cost could be reduced. 5 CONCLUSION We present a novel AFA-POMDP framework that jointly learns the task policy and the active feature acquisition strategy with a unified reinforcement learning formalism. We introduce a model-based sequential VAE model to facilitate policy training under partial observability. We demonstrate that imputing missing features via learning model dynamics could significantly benefit policy training with partial observability. 
Our proposed model, by efficiently synthesizing the sequential information to impute the missing features, can significantly outperform conventional representation learning baselines and leads to policy training with significantly better sample efficiency as well as obtained solutions. Future work may investigate whether our proposed model could be applied to more diverse and complex application domains. Another promising direction is to integrate our framework with model-based planning for further reducing the feature acquisition cost. ETHICS STATEMENT When deploying machine learning models in real-world applications, the fundamental assumption that the features used during training are always readily available during the deployment phase does not necessarily hold. Our work addresses the aforementioned problem via formulating a novel AFA-POMDP framework that extends the conventional instance-wise non-time-dependent active feature acquisition task to a more challenging time-dependent sequential decision making task. The sequential active feature acquisition module enables the decision making to be performed in a more cost-efficient way when partial features are accessed only during model deployment. Considering that the task of learning and applying machine learning models is rather problem specific, it is unlikely that our method can equally benefit all possible application scenarios. We also fully acknowledge the existence of risk in applying our model in sensitive and high risk domains, e.g., healthcare, and its potential bias if the model itself or the used representations are trained on biased data. In high risk settings, human supervision of the proposed model might be desired and the model is suggested to be mainly used for decision support systems. To alleviate the reliance on fully observed data during representation learning, it is very promising to trigger follow-up works studying data efficient sequential autoencoder training paradigms. APPENDIX This appendix is organized as follows: • Sec A: the detailed algorithm. • Sec B: experimental settings and additional results on the Bouncing Ball domain. • Sec C: experimental settings and additional results on the Sepsis domain. A RL WITH ACTIVE FEATURE ACQUISITION ALGORITHM Algorithm 1 RL with Active Feature Acquisition 1: Input: learning rate α > 0, dataset D 2: Initialize RL policy πf , πc, VAE parameters θ, φ. 3: Train VAE on dataset D using Eq (5). 4: while Not Converge do 5: Reset the environment. 6: Initialize null observation xp1 = Ø, feature acquisition action a f 0 and control action a c 0. 7: for i = 1 to T do 8: Compute representation with VAE: bt = qφ(x p ≤t,a<t). 9: Sample a feature acquisition action aft ∼ πf (bt) and a control action act ∼ πc(bt). 10: Step the environment and receive partial features, reward and terminal: xpt+1, rt, term ∼ env(aft ,a c t) 11: Compute cost ct = ∑ i c · I(a f(i) t ). 12: Save the transitions {bt,aft ,act , rt, ct, term}. 13: if term then 14: break 15: end if 16: end for 17: Update πf , πc using the saved transitions with an RL algorithm under learning rate α. 18: end while B BOUNCING BALL+ B.1 TASK SPECIFICATION The task consists of a ball moving in a 2D box of size 32×32 pixels. The radius of the ball equals to 2 pixels. At each step, a binary image is returned as an observation of the MDP state. At the beginning of every episode, the ball starts at a random position in the upper left quadrant (sampled uniformly). 
The initial velocity of the ball is randomly defined as follows: ~v = [Vx, Vy] = 4 · ~̃v/‖~̃v‖, where the x- and y-component of ~̃v are sampled uniformly from the interval [−0.5, 0.5]. There is a navigation target set at (5, 25) pixels, which is in the lower left quadrant. The navigation is considered to be successful if the ball reaches the specified target location within a threshold of 1 pixel along both x/y-axis. The action spaces is defined as follows. There are five task actions Ac: • Increase velocity leftwards, i.e., change Vx by −0.5 • Increase velocity rightwards, i.e., change Vx by +0.5 • Increase velocity downwards, i.e., change Vy by +0.5 • Increase velocity upwards, i.e., change Vy by −0.5 • Keep velocities unchanged The maximum velocity along the x/y-axis is 5.0. The velocity will stay unchanged if it exceeds this threshold. The feature acquisition action af ∈ Af is specified as acquiring the observation of a subset of the quadrants (this also includes acquiring the observation of all 4 quadrants). Thus, the agent can acquire 0− 4 quadrants to observe. Each episode runs up to 50 steps. The episode terminates if agent reaches the target location. B.2 IMPLEMENTATION DETAILS For all the compared methods, Zero-Imputing (Nazabal et al., 2018) is adopted to fill in missing features with a fixed value of 0.5. End-to-End The end-to-end model first processes the imputed image by 2 convolutional layers with filter sizes of 16 and 32, respectively. Each convolutional layer is followed by a ReLU activation function. Then the output is passed to a fully connected layer of size 1024. The weights for the fully connected layer are initialized by orthogonal weights initialization and the biases are initialized as zeros. NonSeq-ZI The non-sequential VAE models first process the imputed image by 2 convolutional layers with filter sizes of 32 and 64, respectively. Each convolutional layer is followed by a ReLU activation function. Then the output passes through a fully connected layer of size 256, followed by two additional fully connected layers of size 32 to generate the mean and variance of a Gaussian distribution. To decode an image, the sampled code first passes through a fully connected layer with size 256, followed by 3 convolutional layers with filters of 32, 32, and nc and strides of 2, 2 and 1, respectively, where nc is the channel size that equals to 2 for the binary image. There are two variants for NonSeq-ZI: one employs the partial loss that is only for the observed variables; the other employs the full loss that is computed on all the variables, i.e., the ground-truth image with full observation is employed as the target to train the model to impute the missing features. The hyperparameters for training NonSeq-ZI are summarized in Table 2. Seq-PO-VAE (ours) At each step, the Seq-PO-VAE takes an imputed image and an action vector of size 9 as input. The imputed image is processed by 3 convolutional layers with filter size 32 and stride 2. Each convolutional layer employs ReLU as its activation function. Then the output passes through a fully connected layer of size 32 to generate a latent representation for the image fx. The action vector passes through a fully connected layer of 32 to generate latent representation for the action fa. Then the image and action features are concatenated and augmented to form a feature vector fc = [fx, fa, fx ∗ fa], where [·] denotes concatenation of features. Then fc is fed to fully connected projection layers of size 64 and 32, respectively. 
The output is then fed to an LSTM module, with latent size of 32. The output ht of LSTM is passed to two independent fully connected layers of size 32 for each to generate the mean and variance for the Gaussian distribution filtered from the sequential inputs. To decode an image, the model adopts deconvolutional layers that are identical to those for NonSeq-ZI. The hyperparameters for training Seq-PO-VAE are shown in Table 2. LSTM-A3C We adopt LSTM-A3C (Mnih et al., 2016) to train the RL policy. The policy takes the features derived from the representation learning module as input. For the VAE-based methods, the input features are passed through a fully connected layer of size 1024. Then the features are fed to an LSTM with 1024 units. The output of the LSTM is fed to three independent fully connected layers to generate the estimations for value, task policy and feature acquisition policy. We adopt normalized column initialization for all the fully connected layers and the biases for the LSTM module are set to be zero. B.3 DATA COLLECTION To train the VAEs, we prepare a training set that consists of 2000 trajectories. Half of the trajectories are derived from a random policy and the other half is derived from a policy learned from end-to-end method. To train the end-to-end method, we employ a cost of 0.01 over first 2m steps and then increase it to 0.02 for the following 0.5m steps. All the VAE models are evaluated on a test dataset that has identical size and data distribution as the training dataset. We present the best achieved task performance of the data collection policy (End-to-End) and our representation learning approach in Table 5. We notice that our proposed method, by employing an advanced representation model, leads to significantly better feature acquisition policy than End-to-End (smaller number of observations while achieving similar or better reward). B.4 IMPUTING MISSING FEATURES VIA LEARNING MODEL DYNAMICS We present an illustrative example to demonstrate the process of imputing missing features and the role of learning model dynamics. To this end, we collect trajectories under an End-to-End policy (the choice of the underlying RL policy is not that important since we just want to derive some trajectory samples for the VAE models to reconstruct) and use different VAE models to impute the observations. From the results presented in Figure 9, we observe that under the partially observable setting with missing features, the latent representation derived from our proposed method provides abundant information as compared to only using information from a single time step and thereby offers significant benefit for the policy model to learn to acquire meaningful features/gain task reward. B.5 INVESTIGATION ON COST-PERFORMANCE TRADE-OFF We perform a case study on investigating the cost-performance trade-off for each representation learning method, presented in Figure 9. Apparently, as we increase the cost, the explorationexploitation task becomes more challenging and each compared method has its own upper bound on the cost above which it fails to learn an effective task policy while acquiring minimum observation. First, we notice that the End-to-End model takes a long time to progress in learning task skills, while the VAE-based models can progress much faster. 
Among the VAE-based methods, we notice that our proposed method (Figure 9(d)) can achieve as low as 8 observations whereas the baselines NonSeq-ZI (Full) (Figure 9(b)) and NonSeq-ZI (partial) (Figure 9(c)) achieve a standard of ∼20 (lowest point among the solid lines). Thus, we could conclude that our proposed approach can significantly benefit the cost-sensitive policy training and lead to a policy which acquires much fewer observations while still succeeding in terms of task performance. C SEPSIS MEDICAL SIMULATOR C.1 TASK SPECIFICATIONS For this task we employ a Sepsis simulator proposed in previous work (Oberst & Sontag, 2019). The task is to learn to apply three treatment actions for Sepsis patients in intensive care units, i.e., Ac = {antibiotic, ventilation, vasopressors}. At each time step, the agent selects a subset of the treatment actions to apply. The state space consists of 8 features: 3 of them specify the current treatment status; 4 of them specify the measurement status in terms of heart rate, sysBP rate, percoxyg stage and glucose level; the remaining one is a categorical feature indicating the patent’s antibiotic status. The feature acquisition actively selects a subset among the measurement features for observation, i.e., Af = {heart rate, sysBP rate, percoxyg state, glucose level}. The objective for learning a active feature acquisition strategy is to help the decision making system to reduce measurement cost at a significant scale. C.2 IMPLEMENTATION DETAILS For all the compared methods, we adopt Zero-Imputing (Nazabal et al., 2018) to fill in missing features. In particular, a fixed value of -10 which is outside the range of feature values is used to impute missing values. End-to-End The end-to-end model first processes the imputed state by 3 fully connected layers of size 32, 64 and 32, respectively. Each fully connected layer is followed by a ReLU activation. NonSeq-ZI The VAE model first processes the imputed state by 2 fully connected layers with size 32 and 64, with the first fully connected layer being followed by ReLU activation functions. Then the output is fed into two independent fully connected layers of size 10 for each, to generate the mean and variance for the Gaussian distribution. To decode the state, the latent code is first processed by a fully connected layer of size 64, then fed into three fully connected layers of size 64, 32, and 8. The intermediate fully connected layers employ ReLU activation functions. Also, we adopt two variants for NonSeq-ZI, trained under either full loss or partial loss. The details of the hyperparameter settings used for training are presented in Table 4. Seq-PO-VAE (ours) At each time step, the inputs for state and action are first processed by their corresponding projection layers. The projection layers for the state consists of 3 fully connected layers of size 32, 16 and 10, where the intermediate fully connected layers are followed by a ReLU activation function. The projection layer for the action input is a fully connected layer of size 10. Then the projected state feature fc and action feature fa are combined in the following manner: fc = [fx, fa, fx ∗ fa]. fc is passed to 2 fully connected layers of size 64 and 32 to form the input to the LSTM module. The output ht of the LSTM is fed to two independent fully connected layers of size 10 to generate the mean and variance for the Gaussian distribution. The decoder for Seq-PO-VAE has identical architecture as NonSeq-ZI. 
The details for training Seq-PO-VAE are presented in Table 4. LSTM-A3C The LSTM-A3C (Mnih et al., 2016) takes encoded state features derived from the corresponding representation model as its input. The encoded featuresare fed into an LSTM with size 256. Then the ht for the LSTM is fed to three independent fully connected layers, to predict the state value, feature acquisition policy and task policy. Normalized column initialization is applied to all fully connected layers. The biases for the LSTM and fully connected layers are initialized as zero. C.3 DATA COLLECTION To train the VAEs, we prepare a training set that consists of 2000 trajectories. Half of the trajectories are derived from a random policy and the other half is derived from a policy learned from the End-to-End method with cost 0.0. All the VAE models are evaluated on a test dataset that consists of identical size and data distribution as the the training dataset. We present the task treatment reward obtained by our data collection policy derived from the End-to-End method and that obtained by our proposed method in Table 5. Noticeably, by performing representation learning, we obtained much better treatment reward as compared to the data collection policy, which demonstrates the necessity of performing representation learning. C.4 MORE COMPARISON RESULT UNDER DIFFERENT VALUES FOR COST We present additional experiment results that compare our proposed method and the non-sequential baselines under the cost values {0, 0.025}. The results for cost value of 0.01 are shown in the main paper. Overall, under all the cost settings, our method leads to significantly better discharge ratio and task reward compared to the baselines. Also, we demonstrate the cost-performance trade-off on Sepsis domain. By increasing the value of cost, we could obtain feature acquisition policy that acquires substantially decreased amount of features within each episode. C.5 ILLUSTRATIVE EXAMPLES FOR MISSING FEATURE IMPUTATION IN Sepsis We present two illustrative examples in Figure 12 to demonstrate how imputing missing features via learning model dynamics would help the decision making with partial observability in Sepsis domain. The policy training process with partial observability could only access very limited information, due to the employment of active feature acquisition. Under such circumstances, imputing the missing features would offer much more abundant information to the decision making process. From the results shown in Figure 12, our model demonstrates considerable accuracy in imputing the missing features, even though it is extremely challenging to perform the missing feature imputation task given the distribution shift from the data collection policy and the online policy. The imputed missing information would be greatly beneficial for training the task policy and feature acquisition policy.
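Complementing the imputation examples above, the following is a minimal sketch of how imputation quality can be scored on the held-out trajectories. The restriction of the error to unacquired entries and the function name are our assumptions; the paper reports NLL for Bouncing Ball and MSE for Sepsis in Table 1.

```python
import torch

def unobserved_mse(x_true, x_imputed, observed_mask):
    """Mean squared error over the entries that were NOT acquired (illustrative only).

    x_true:        ground-truth states from the test trajectories, shape (N, D)
    x_imputed:     model reconstructions/imputations, shape (N, D)
    observed_mask: boolean tensor, True where the feature was acquired
    """
    unobserved = (~observed_mask).float()
    err = (x_imputed - x_true) ** 2
    denom = unobserved.sum().clamp(min=1.0)   # avoid division by zero
    return (err * unobserved).sum() / denom
```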
1. What is the main contribution of the paper regarding POMDPs? 2. What are the strengths of the proposed approach, particularly in its key innovations? 3. Do you have any concerns or questions regarding the cost and complexity of training the model? 4. How does the reviewer assess the limitations of the paper regarding its focus on a specific pool of works and lack of comparison with other strategies? 5. Are there any questions regarding the robustness of the approach when considering varying acquisition costs?
Summary Of The Paper Review
Summary Of The Paper In this paper the authors propose an approach for simultaneously learning how to explore more efficiently in POMDPs via targeted feature acquisition, and learning a reward-maximizing control policy, balancing the cost of feature acquisition with the expected reward. Learning is done via a VAE framework which combines a belief inference model and an observation decoder, with a key innovation being that inference is done as a sequential process. Results comparing this approach to other variational inference approaches show the proposed framework reaches better performance with lower cost (particularly, number of acquired features). Review This is an interesting paper which tackles a variation on the perennial challenge of exploration vs. exploitation in POMDPs. The overall approach seems reasonable to me and I liked what I identify to be two key ideas in the paper, which are learning the feature acquisition policy and the target policy simultaneously, and introducing the notion of sequence in the VAE inference framework. However, I do have some questions, My first concern regarding the proposed approach is that it seems wildly expensive to train even for modestly sized problems, and even in the fraught landscape of POMDP learning strategies. I have not seen concrete estimations of sample complexities required in training in the supplementary material, how do they compare to other approaches for solving POMDPs (more on that in a bit). On that note, I am somewhat concerned by the pretraining of the VAEs and the fact that it is then essentially fixed during learning, implying that while there's a sequential component to it, it isn't really an RL approach for hidden state imputation and therefore might be very brittle in practice, especially when a lot of distributions are unknown prior to training. Speaking of other approaches for solving POMDPs, a problem which can be spotted already in the related work section is the fact that the author restrict themselves essentially to a small pool of works which pursue a POMDP solving strategy that is largely similar in spirit to that proposed in this paper. While this focus is understandable, it not only does a disservice to a great deal of prior work on efficient exploration and representation learning in POMDPs, it also substantially limits the set of useful baselines the authors compare against. What about other strategies for learning behavior policies and work on balancing exploration vs. exploitation in POMDPs in general? Or hierarchical abstractions? The current set of experiments seems to mostly show that introducing the notion of sequence in the VAE formulation is beneficial, which is neither negligible nor surprising, but it's not obvious to me that the proposed (again, monstrously complex) approach outperforms simpler approaches under most settings. The term "cost" here mostly refers to number of features acquired. However, it seems to me that such an approach is overly simplistic, given that certain features are vastly more expensive to acquire than others (consider the difference between a simple blood test and a 24 hour EEG monitoring experiment). Is the proposed approach resilient to varying acquisition costs?
ICLR
Title Reinforcement Learning with Efficient Active Feature Acquisition Abstract Solving real-life sequential decision making problems under partial observability involves an exploration-exploitation problem. To be successful, an agent needs to efficiently gather valuable information about the state of the world for making rewarding decisions. However, in real-world applications, acquiring valuable information is often highly costly, e.g., in the medical domain, information acquisition might correspond to performing a medical test on a patient. This poses a significant challenge for the agent to learn an optimal policy for the task. In this paper, we propose a model-based reinforcement learning framework that learns a policy which solves this exploration-exploitation problem during its execution. Key to its success is a novel sequential variational autoencoder that learns high-quality representations from partially observed states, which are then used by the policy to maximize the task reward in a cost-efficient manner. We demonstrate the efficacy of our proposed framework in a control domain as well as using a medical simulator. In both tasks, our proposed method outperforms conventional baselines and results in policies with greater cost efficiency. 1 INTRODUCTION Recently, machine learning models for automated sequential decision making have shown remarkable success across many application areas, such as visual recognition (Mathe et al., 2016; Das et al., 2017), robotics control (Finn et al., 2016; Zhang et al., 2018), medical diagnosis (Ling et al., 2017; Peng et al., 2018) and computer games (Mnih et al., 2015; Silver et al., 2016). One fundamental reason that drives the success of such models and enables them to outperform classical algorithms is the availability of large amounts of training data. Typically such training data is either fully observed or the features stem from an action-independent observation model (which clearly can depend on the state of the system). However, the fundamental assumption that the same features are always readily available during deployment may not hold in many real-world applications. For instance, consider a medical support system for monitoring and treating patients during their hospital stay, which was trained on rich historical medical data. To provide the best possible treatment, the system might need to perform several measurements of the patient over time, while some of them could be costly or even pose a health risk. Therefore, during deployment, it is preferable that the system function with minimal features, even though more features might have been available during training. In such cases, we are interested in decision making models that actively take the measurement process, i.e., feature acquisition, into account and only acquire the information relevant for making a decision. In this paper, we consider the challenging problem of learning effective policies when the cost of information acquisition cannot be neglected. To be successful, we need to learn policies that acquire the information required for solving a task in the cheapest way possible. 
For simplicity, we can think of the policy as being constituted of an acquisition policy which actively selects meaningful features to be observed and a task policy, which selects actions to change the state of the system towards some goal.1 As such, we consider a partially observable learning problem with the following two distinguishing properties compared to the most commonly studied problems (see also Figure 3.2 for an illustration). (i) By incorporating active feature acquisition, the training of the task policy is based upon subsets of features only, i.e., there are missing features, where the missingness is 1Clearly, these two policies are not independent in general, e.g., acquiring features can change the state of the system. observe action acquire (e.g. navigation) (e.g. medical treatments) observe action controlled by the acquisition policy. Thus, the resulting POMDP is different from the conventional POMDPs in RL literature (Cassandra, 1998) where the partial observability for later stems from a fixed and action-independent observation model. Also, the state transitions in conventional POMDPs are only determined by the choice of the task action, whereas in our setting the state-transition is affected by both the task action and the feature acquisition choice. (ii) The learning of the acquisition policy introduces an additional dimension to the exploration-exploitation problem: each execution of the policy needs to solve an exploration-exploitation problem, and thus we often need to learn sophisticated policies. Most reinforcement learning research has not taken active feature acquisition into consideration. In this work, we propose a unified approach that jointly learns a policy for optimizing the task reward while performing active feature acquisition. Although some of the prior works have exploited the use of reinforcement learning for sequential feature acquisition tasks (Shim et al., 2018; Zannone et al., 2019), they considered variable-wise information acquisition in a static setting only, corresponding to feature selection for non-time-dependent prediction tasks. However, our considered setting is truly time-dependent and feature acquisitions need to be made at each time step while the state of the system evolves simultaneously. As such, both the model dynamics of the underlying MDP and the choice of feature acquisition introduce considerable challenges to the learning of the sequential feature acquisition strategy. Due to the challenge of the exploration-exploitation problem, it is a non-trivial task to jointly learn the two policies. The conventional end-to-end approaches often result in inferior solutions in complex scenarios. Ideally, policies based on high-quality representations would be easier for the algorithm to search for better solutions through exploration-exploitation. Therefore, our proposed framework also tackles the joint policy training task from a representation learning perspective. Specifically, we introduce a representation learning model that not only encodes the sequential partially observed information into its latent features, but also efficiently imputes the unobserved features to offer more meaningful information for the policy training. To this end, we formulate a sequential generative model that can efficiently learn model dynamics during representation learning. 
Overall, the contributions of our paper are three-fold: • We propose an approach for learning sequential decision making policies with active feature acquisition through a unified reinforcement learning framework. Our proposed approach simultaneously learns policies for reward optimization and active feature acquisition. • We present a novel sequential representation learning approach to account for the encoding of the partially observed states. Our proposed approach is based on variational autoencoders (VAE) with amortized inference. The imputation of the unobserved features is achieved via learning the model dynamics. • We demonstrate our proposed framework can be applied to various applications. We conduct extensive experiments on an image-based control task as well as a medical simulator fitted from real-life data where our method shows clear improvements over conventional baselines. 2 RELATED WORK In this work, we integrate active learning with reinforcement learning to accomplish the policy training task while attempting to acquire fewest observed features as possible. We thus review related methods on active feature acquisition and representation learning for POMDP, respectively. 2.1 ACTIVE FEATURE ACQUISITION Our work draws motivation from the existing instance-wise active feature selection approaches. One category of the instance-wise feature selection methods consider feature acquisition as a one time effort to select a subset of features as a whole. One typical example is the conventional linear model that poses sparsity inducing prior distribution to the model (Tibshirani, 1996). Recently, there also emerged approaches that adopt reinforcement learning to actively find optimal feature subsets (Yoon et al., 2018; Shim et al., 2018; Zannone et al., 2019). Though such attempts have demonstrated certain efficacy in handling non time-series instance-wise data, they do not suffice for handling sequential dataset. There is also an alternative category that models feature acquisition as a Bayesian experimental design (Ma et al., 2019; Gong et al., 2019). However, the sequential decision making is for variable-wise feature acquisition and the problems are still non time-series tasks in nature. The key difference between all the aforementioned approaches and ours is that we tackle active feature acquisition problems with time-series data, where an active feature selection decision needs to be formed at each time step along the multi-step reinforcement learning trajectory. Therefore, the feature acquisition for our presented work needs to consider more complex information over model dynamics and control, apart from the static instance-wise features. 2.2 REPRESENTATION LEARNING IN POMDP In complex tasks, policies trained upon different representations can even converge to different performance levels. Most conventional deep reinforcement learning approaches unifies the process of representation learning with policy training and results in policies trained in an end-to-end fashion (Mnih et al., 2013; Lillicrap et al., 2016; Mnih et al., 2016). However, to accomplish the representation learning task, such models often engage trainable parameters which could come with considerable size and thereby result in significant degradation in sample efficiency. When considering problems with POMDPs where the state space is partially accessible to the agent, representation learning becomes an important and non-trivial research challenge. 
Among the existing literature, one prominent line of research tackles the representation learning for POMDP in an off-line fashion and thus resulting in multi-stage reinforcement learning. Higgins et al. (2016; 2017) adopt pretrained VAE models as a representation module to build agents with strong domain adaptation performance. The key difference between their work and ours is that they encode instance-wise image frames from POMDP domains where each image presents a partial view over the task environment, while our work considers cost-sensitive reinforcement learning with distinct partial observability, i.e., the feature-level information is missing at each time step for the agent. We thus adopt a sequential representation learning approach to infer a more representative state information. Recently, there also emerged several works on sequential representation learning for POMDP (Gregor et al., 2019; Vezzani et al., 2019). However, most of the works utilize VAE training as an auxiliary task to jointly update the representation model with the policy learning loss. In our work, due to the high acquisition cost to observe the features, we adopt an off-line representation learning setting. Also, our proposed representation learning is model-based, where the model learns to impute the missing features with such attempt yielding significant benefit to derive high-quality representation for policy training. 3 METHODOLOGY 3.1 TASK SETTING In this section, we formally define the problem settings for the task of jointly learning the task and feature acquisition policy. To this end, we define the active feature acquisition POMDP, a rich class of discrete-time stochastic control processes generalizing standard POMDPs: Definition 1 (AFA-POMDP). The active feature acquisition POMDP is a tuple M = 〈S,A, T ,O,R, C, γ〉, where S is the state space and A = (Af ,Ac) is a joint action space of feature acquisition actionsAf and control actionsAc. The transition kernel T : S ×Ac×Af → PS maps any joint action a = (af ,ac) in state s ∈ S to a distribution PS over next states. In each state s, when taking action af , the agent observes xp = x(af ), i.e., a subset of the features x = (xp,xu) ∼ O(s) indicated by af , whereO(s) is a distribution over possible feature observation for state s and xu are features not observed by the agent. When taking a joint action, the agent obtains rewards according to the reward functionR : S ×Ac → R and pays a cost of C : S ×Af → R+ for feature acquisition. Rewards and costs are discounted by the discount factor γ ∈ [0, 1). Simplifying assumptions For simplicity, we assume that x consists of a fixed number of features Nf for all states, that Af = 2[Nf ] is the power set of all the Nf features, and that xp(af ) consists of all the features in x indicated by the subset af ∈ Af . Note that the feature acquisition action for a specific application can take various different forms. For instance, in our experiments in Section 4, for the Sepsis task, we define feature acquisition as selecting a subset over possible measurement tests, whereas for the Bouncing Ball+ task, we divide an image into four observation regions and let the feature acquisition policy select a subset of observation regions (rather than raw pixels). Please also note that while in a general AFA-POMDP, the transition between two states depends on the joint action, we assume in the following that it depends only on the control action, i.e., T (s,ac,af ′) = T (s,ac,af ) for all af ′ ,af ∈ Af . 
While not true for all possible applications, this assumption can be a reasonable approximation, for instance, for medical settings in which tests are non-invasive. For simplicity we furthermore assume that acquiring each feature has the same cost, denoted as c, i.e., $\mathcal{C}(a^f, s) = c\,|a^f|$, but our approach can be straightforwardly adapted to have different costs for different feature acquisitions. Objective We aim to learn a policy which trades off reward maximization and the cost of feature acquisition by jointly optimizing a task policy $\pi^c$ and a feature acquisition policy $\pi^f$. That is, we aim to solve the optimization problem $\max_{\pi^f,\pi^c}\ \mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^t\big(\mathcal{R}(\mathbf{x}_t,\mathbf{a}^c_t)-\sum_{i=1}^{|\mathcal{A}^f|} c\cdot\mathbb{I}(\mathbf{a}^{f(i)}_t)\big)\Big]$, (1) where the expectation is over the randomness of the stochastic process and the policies, $\mathbf{a}^{f(i)}_t$ denotes the i-th feature acquisition action at timestep t, and $\mathbb{I}(\cdot)$ is an indicator function whose value equals 1 if that feature has been acquired. Note that the above optimization problem is very challenging: an optimal solution needs to maintain a belief $b_t$ over the state of the system at time t, which is a function of the partial observations obtained so far. Both the feature acquisition policy $\pi^f(\mathbf{a}^f_t \mid b_t)$ and the task policy $\pi^c(\mathbf{a}^c_t \mid b_t)$ depend on this belief. The information in the belief itself can be controlled by the feature acquisition policy through querying subsets of the features $\mathbf{x}_t$, and hence both the task policy and the feature acquisition policy itself strongly depend on the effectiveness of the feature acquisition. 
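To make Eq. (1) concrete, the following is a minimal Python sketch of how the discounted objective can be estimated for a single rollout; the trajectory representation (lists of task rewards and binary acquisition masks) is an assumption for illustration.

```python
def discounted_objective(rewards, acquisition_masks, cost_c, gamma):
    """Monte-Carlo estimate of the objective in Eq. (1) for one trajectory.

    rewards:            task rewards R(x_t, a^c_t), one per time step
    acquisition_masks:  binary vectors, entry i is 1 iff feature i was acquired at step t
    cost_c:             per-feature acquisition cost c
    gamma:              discount factor
    """
    total = 0.0
    for t, (r, mask) in enumerate(zip(rewards, acquisition_masks)):
        step_cost = cost_c * sum(mask)          # c * |a^f_t|
        total += (gamma ** t) * (r - step_cost)
    return total

# Example: 3 steps, 4 candidate features, cost 0.01, discount 0.95.
value = discounted_objective(
    rewards=[0.0, 0.0, 1.0],
    acquisition_masks=[[1, 0, 1, 0], [0, 0, 0, 0], [1, 1, 0, 0]],
    cost_c=0.01,
    gamma=0.95,
)
```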
3.2 SEQUENTIAL REPRESENTATION LEARNING WITH PARTIAL OBSERVATIONS We introduce a sequential representation learning approach to facilitate the task of policy training with active feature acquisition. Let $\mathbf{x}_{1:T} = (\mathbf{x}_1, \ldots, \mathbf{x}_T)$ and $\mathbf{a}_{1:T} = (\mathbf{a}_1, \ldots, \mathbf{a}_T)$ denote a sequence of observations and actions, respectively. Alternatively, we also denote these sequences as $\mathbf{x}_{\le T}$ and $\mathbf{a}_{\le T}$. Overall, our task of interest is to train a sequential representation learning model to learn the distribution of the full sequential observations $\mathbf{x}_{1:T}$, i.e., for both the observed part $\mathbf{x}^p_{1:T}$ and the unobserved part $\mathbf{x}^u_{1:T}$. Given only partial observations, we can perform inference only with the observed features $\mathbf{x}^p_{1:T}$. Therefore, our proposed approach extends the conventional unsupervised representation learning task to a supervised learning task, which learns to impute the unobserved features by synthesizing the acquired information and learning the model dynamics. As such, the key underlying assumption is that learning to impute the unobserved features results in better representations which can be leveraged by the task policy. Moreover, performing sequential representation learning, as we propose, is a more adequate choice than non-sequential modeling for our task of interest with partial observability. Furthermore, unlike many conventional sequential representation learning models for reinforcement learning that only reason over the observation sequence $\mathbf{x}^p_{1:T}$, in our work we take into account both the observation sequence $\mathbf{x}^p_{1:T}$ and the action sequence $\mathbf{a}_{1:T}$ for conducting inference. The intuition is that since $\mathbf{x}^p_{1:T}$ by itself carries very limited information about the agent's underlying MDP state, incorporating the action sequence is an informative add-on to the agent's acquired information for inferring the belief state. To summarize, our proposed sequential representation model learns to encode $\mathbf{x}^p_{1:T}$ and $\mathbf{a}_{1:T}$ into meaningful latent features for predicting $\mathbf{x}^p_{1:T}$ and $\mathbf{x}^u_{1:T}$. (Figure 2: Observation decoder and belief inference model for the partially observable sequential VAE. Shaded nodes represent the observed variables. The inference model filters information over the partial observations and actions, to predict both the observed and unobserved features.) Observation Decoder Let $\mathbf{z}_{1:T} = (\mathbf{z}_1, \ldots, \mathbf{z}_T)$ denote a sequence of latent states. We consider the following probabilistic model: $p_\theta(\mathbf{x}^p, \mathbf{x}^u, \mathbf{z}) = \prod_{t=1}^{T} p(\mathbf{x}^p_t, \mathbf{x}^u_t \mid \mathbf{z}_t)\, p(\mathbf{z}_t)$. (2) For notational simplicity, we assume $\mathbf{z}_0 = 0$. We impose a simple prior distribution over $\mathbf{z}$, i.e., a standard Gaussian prior, instead of incorporating some learned prior distribution over the latent space of $\mathbf{z}$, such as an autoregressive prior distribution like $p(\mathbf{z}_t \mid \mathbf{z}_{t-1}, \mathbf{x}^p_{1:t}, \mathbf{a}_{0:t-1})$. The reason is that using a static prior distribution results in a latent representation $\mathbf{z}_t$ that is more strongly regularized and normalized than using a learned prior distribution which changes stochastically over time. This is crucial for deriving stable policy training performance. At time t, the generation of the data $\mathbf{x}^p_t$ and $\mathbf{x}^u_t$ depends on the corresponding latent variable $\mathbf{z}_t$. Given $\mathbf{z}_t$, the observed variables are conditionally independent of the unobserved ones. Therefore, $p(\mathbf{x}^p_t, \mathbf{x}^u_t \mid \mathbf{z}_t) = p(\mathbf{x}^p_t \mid \mathbf{z}_t)\, p(\mathbf{x}^u_t \mid \mathbf{z}_t)$. (3) Belief Inference Model During policy training we only assume access to partially observed data. This requires an inference model which takes in the past observation and action sequences to infer the latent states $\mathbf{z}$. Specifically, we present a structured inference network $q_\phi$, as shown in Figure 2, which has an autoregressive structure: $q_\phi(\mathbf{z} \mid \mathbf{x}, \mathbf{a}) = \prod_t q_\phi(\mathbf{z}_t \mid \mathbf{x}^p_{\le t}, \mathbf{a}_{<t})$, (4) where $q_\phi(\cdot)$ is a function that aggregates the filtering posteriors of the history of observation and action sequences. Following the common practice in the existing sequential VAE literature, we adopt a forward RNN model as the backbone for the filtering function $q_\phi(\cdot)$ (Gregor et al., 2019). Specifically, at step t, the RNN processes the encoded partial observation $\mathbf{x}^p_t$, the action $\mathbf{a}_{t-1}$ and its past hidden state $h_{t-1}$ to update its hidden state $h_t$. Then the latent distribution of $\mathbf{z}_t$ is inferred from $h_t$. The belief state $b_t$ is defined as the mean of the distribution of $\mathbf{z}_t$. By accomplishing the supervised learning task, the belief state can provide abundant information not only about the observed sequential features but also about the missing features, so that the policy trained on it can progress faster and converge to better performance. Learning We propose to pre-train both the generative and inference models offline before learning the RL policies. In this case, we assume access to the unobserved features, so that we can construct a supervised learning task to learn to impute the unobserved features. Concretely, the pre-training task updates the parameters $\theta, \phi$ by maximizing the following variational lower bound (Jordan et al., 1999; Kingma & Welling, 2013): $\log p(\mathbf{x}^p, \mathbf{x}^u) \ge \mathbb{E}_{q_\phi}\big[\sum_t \log p_\theta(\mathbf{x}^p_t \mid \mathbf{z}_t) + \log p_\theta(\mathbf{x}^u_t \mid \mathbf{z}_t) - \mathrm{KL}\big(q_\phi(\mathbf{z}_t \mid \mathbf{x}^p_{\le t}, \mathbf{a}_{<t}) \,\|\, p(\mathbf{z}_t)\big)\big] = \mathrm{ELBO}(\mathbf{x}^p, \mathbf{x}^u)$. (5) By incorporating the term $\log p_\theta(\mathbf{x}^u_t \mid \mathbf{z}_t)$, the training of the sequential VAE generalizes from an unsupervised task to a supervised task that learns the model dynamics from past observed transitions and imputes the missing features. 
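A minimal PyTorch-style sketch of the negative of the objective in Eq. (5), assuming a diagonal-Gaussian posterior, a standard-normal prior, and unit-variance Gaussian likelihoods (so the log-likelihood terms reduce to squared errors up to constants):

```python
import torch

def seq_po_vae_loss(mu, logvar, recon, target, observed_mask):
    """Negative ELBO of Eq. (5) for one batch of sequences.

    mu, logvar:     (B, T, z_dim) posterior parameters from the inference RNN
    recon, target:  (B, T, D) decoder output and ground-truth features
    observed_mask:  (B, T, D) binary mask, 1 for observed entries, 0 for unobserved
    """
    # Reconstruction terms for observed and unobserved features; both are supervised
    # during offline pre-training, where the full state is available.
    sq_err = (recon - target) ** 2
    rec_obs = (sq_err * observed_mask).sum()
    rec_unobs = (sq_err * (1.0 - observed_mask)).sum()
    # KL( N(mu, sigma^2) || N(0, I) ) summed over time steps and latent dimensions.
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum()
    return rec_obs + rec_unobs + kl
```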
We perform multi-stage reinforcement learning to jointly learn the feature acquisition policy and the task policy. The VAE model is pretrained and kept fixed during policy learning. The reason for not updating the VAE online is that computing the loss in Eq. (5) would require access to the unobserved features and is therefore cost-intensive. The pseudocode for our proposed method is in Appendix A. 4 EXPERIMENTS We examine the characteristics of our proposed model in the following two experimental domains: a bouncing ball control task with high-dimensional image pixels as input, adapted from (Fraccaro et al., 2017); and a sepsis medical simulator fitted from real-world data (Oberst & Sontag, 2019). Baselines For comparison, we mainly consider variants of the strong VAE baseline beta-VAE (Higgins et al., 2016), which works on non-time-dependent data instances. For representing the missing features, we adopt the zero-imputing method proposed in (Nazabal et al., 2018) over the unobserved features. Thus, we denote the VAE baseline as NonSeq-ZI. We train the VAE with either the full loss, computed over all features, or the partial loss, which applies only to the observed features (Ma et al., 2019). We denote our proposed sequential VAE model for POMDPs as Seq-PO-VAE. All the VAE-based approaches adopt an identical policy architecture. Detailed information on the model architectures is presented in the appendix. Data Collection To pre-train the VAE models, data generated by a non-random policy is unavoidably needed to incorporate abundant dynamics information. For both tasks, we collect a small-scale dataset of 2000 trajectories, where half of the data is collected from a random policy and the other half from a policy which better captures the state space that would be encountered by a learned model (e.g., by training a data collection policy end-to-end or using human generated trajectories). This simple data mixture works very well on both tasks without the need to further fine-tune the VAEs. We also create a testing set of 2000 trajectories to evaluate the models. 4.1 BOUNCING BALL+ Task Settings We adapted the original bouncing ball experiment presented in (Fraccaro et al., 2017) by adding a navigation objective and introducing control actions. Specifically, a ball moves in a 2D box, and at each step a binary image of size 32×32 showing the box and the ball is returned as the observation. Initially, the ball appears at a random position in the upper left quadrant and has a random velocity. The objective is to control the ball to reach a fixed target location set at (5, 25). We incorporate five RL actions: a null action and four actions for changing the velocity of the ball in either the x or y direction with a fixed scale: {∆Vx : ±0.5, ∆Vy : ±0.5, null}. A reward of 1.0 is issued if the ball reaches its target location. Each episode runs up to 50 time steps. Representation Learning Results We evaluate the missing-feature imputation performance of each VAE model in terms of the negative log-likelihood (NLL) loss and present the results in Table 1. We notice that our proposed model yields significantly better imputation results than all the other baselines. 
This reveals that our proposed sequential VAE model can efficiently capture the environment dynamics and learn meaningful information about the missing features. This effect is vital in determining the policy training performance in the AFA-POMDP, since the policy is conditioned on the VAE latent features. We also demonstrate sample trajectories reconstructed by different VAE models in the Appendix. The result shows that our model learns to impute a significant amount of missing information given the partially observed sequence. Policy Training Results We evaluate the policy training performance in terms of the episodic number of acquired observations and the task reward (w/o cost). The results are presented in Figure 3 (a) and (b), respectively. First, we notice that the end-to-end method fails to learn task skills under the given feature acquisition cost. However, the VAE-based representation learning methods manage to learn the navigation skill under the same cost setting. This verifies our assumption that representation learning brings significant benefit to policy training under the AFA-POMDP scenario. Furthermore, we also notice that the joint policies trained by Seq-PO-VAE develop the target navigation skill at a much faster pace than the non-sequential baselines. Our method also converges to a level where much less feature acquisition is required to perform the task. We also show that our proposed method can learn meaningful feature acquisition policies. To this end, we show three sampled trajectories upon convergence of training in Figure 4. From the examples, we notice that our feature acquisition policy acquires meaningful features, with the majority capturing the exact ball location. This demonstrates that the feature acquisition policy adapts to the dynamics of the problem and learns to acquire meaningful features. We also show that the actively learned feature acquisition policy works better than random acquisition. From the results in Figure 4 (c), our method converges to a better level than random policies with considerably high selection probabilities. 4.2 SEPSIS MEDICAL SIMULATOR Task Setting Our second evaluation domain adopts a medical simulator for treating sepsis among ICU patients, proposed in (Oberst & Sontag, 2019). Overall, the task is to learn to apply three treatment actions to the patient, i.e., {antibiotic, ventilation, vasopressors}. The state space consists of 8 features: 3 of them indicate the current treatment state for the patient; 4 of them are the measurement states for heart rate, sysBP rate, percoxyg state and glucose level; the rest is an index specifying the patient's diabetes condition. The feature acquisition policy learns to actively select the measurement features. Each episode runs for up to 30 steps. The patient is discharged if his/her measurement states all return to normal values. An episode terminates upon mortality or discharge, with a reward of −1.0 or 1.0, respectively. Representation Learning Result We evaluate the imputation performance of each VAE model on the testing dataset. The loss is evaluated in terms of MSE and presented in Table 1. Our proposed method achieves the lowest MSE loss among the compared methods. The result reveals that our proposed sequential VAE can learn the model dynamics for tasks with stochastic transitions. Policy Training Result We show the policy training results for Sepsis in Figure 5. Overall, our proposed method results in a substantially better task reward compared to all baselines. 
Noticeably, the learning of discharge for our method progresses significantly faster than for the baseline approaches and converges to substantially better values. The result shows that our method can be trained in a much more sample-efficient way. Moreover, upon convergence, our model outperforms the best non-sequential VAE baseline by a gap of >5% in discharge ratio. For all the evaluation metrics, we notice that the VAE-based representation learning models outperform the end-to-end baseline by significant margins. This indicates that efficient representation learning may be crucial for deriving satisfying task performance in the AFA-POMDP setting. The result also reveals that learning to impute missing features contributes greatly to improving the policy training performance for the AFA-POMDP. Ablation: Efficacy of Active Feature Acquisition We study the effect of actively learning a sequential feature acquisition strategy with RL. To this end, we compare our method with a baseline that randomly acquires features. We evaluate our method under different cost values, and the results are shown in Figure 6. From the results, we notice that there is a clear cost-performance trade-off, i.e., a higher feature acquisition cost results in feature acquisition policies that obtain fewer observations, at a sacrifice in task performance. Overall, our acquisition method results in significantly better task performance than the random acquisition baselines. Noticeably, with the learned active feature acquisition strategy, we acquire only about half of the total number of features (refer to the value derived by Random-100%) to obtain comparable task performance. Also, we notice that the specified cost has a very clear impact on the final task performance, i.e., the number of acquired features per episode decreases significantly as the cost increases. Thus, our proposed solution can compute feature acquisition policies that meet different budgets. Ablation: Impact on Total Acquisition Cost For the different representation learning methods, we also investigate the total number of features acquired at different stages of training. The results are shown in Figure 7. As expected, to obtain better task policies, the models need to take more training steps and thus the total feature acquisition cost increases accordingly. We notice that policies trained by our method result in the highest convergent task performance (max x-value). Given a certain performance level (same x-value), our method consumes a substantially lower total feature acquisition cost (y-value) than the others. We also notice that the overall feature acquisition cost increases with a near-exponential trend. Therefore, it is essential to train the policy for the AFA-POMDP with an advanced representation learning method, so that the feature acquisition cost can be reduced. 5 CONCLUSION We present a novel AFA-POMDP framework that jointly learns the task policy and the active feature acquisition strategy with a unified reinforcement learning formalism. We introduce a model-based sequential VAE model to facilitate policy training under partial observability. We demonstrate that imputing missing features via learning model dynamics can significantly benefit policy training with partial observability. 
Our proposed model, by efficiently synthesizing the sequential information to impute the missing features, significantly outperforms conventional representation learning baselines and leads to policy training with better sample efficiency as well as better final solutions. Future work may investigate whether our proposed model can be applied to more diverse and complex application domains. Another promising direction is to integrate our framework with model-based planning to further reduce the feature acquisition cost. ETHICS STATEMENT When deploying machine learning models in real-world applications, the fundamental assumption that the features used during training are always readily available during the deployment phase does not necessarily hold. Our work addresses the aforementioned problem by formulating a novel AFA-POMDP framework that extends the conventional instance-wise, non-time-dependent active feature acquisition task to a more challenging time-dependent sequential decision making task. The sequential active feature acquisition module enables decision making to be performed in a more cost-efficient way when only partial features are accessed during model deployment. Considering that the task of learning and applying machine learning models is rather problem specific, it is unlikely that our method can equally benefit all possible application scenarios. We also fully acknowledge the existence of risk in applying our model in sensitive and high-risk domains, e.g., healthcare, and its potential bias if the model itself or the used representations are trained on biased data. In high-risk settings, human supervision of the proposed model might be desired, and the model is suggested to be mainly used in decision support systems. To alleviate the reliance on fully observed data during representation learning, a promising direction for follow-up work is to study data-efficient sequential autoencoder training paradigms. APPENDIX This appendix is organized as follows: • Sec A: the detailed algorithm. • Sec B: experimental settings and additional results on the Bouncing Ball domain. • Sec C: experimental settings and additional results on the Sepsis domain. A RL WITH ACTIVE FEATURE ACQUISITION ALGORITHM
Algorithm 1 RL with Active Feature Acquisition
1: Input: learning rate α > 0, dataset D
2: Initialize the RL policies π^f, π^c and the VAE parameters θ, φ.
3: Train the VAE on dataset D using Eq. (5).
4: while not converged do
5:   Reset the environment.
6:   Initialize the null observation x^p_1 = Ø, the feature acquisition action a^f_0 and the control action a^c_0.
7:   for t = 1 to T do
8:     Compute the representation with the VAE: b_t = q_φ(x^p_{≤t}, a_{<t}).
9:     Sample a feature acquisition action a^f_t ∼ π^f(b_t) and a control action a^c_t ∼ π^c(b_t).
10:    Step the environment and receive the partial features, reward and terminal flag: x^p_{t+1}, r_t, term ∼ env(a^f_t, a^c_t).
11:    Compute the cost c_t = Σ_i c · I(a^{f(i)}_t).
12:    Save the transition {b_t, a^f_t, a^c_t, r_t, c_t, term}.
13:    if term then
14:      break
15:    end if
16:   end for
17:   Update π^f, π^c using the saved transitions with an RL algorithm under learning rate α.
18: end while
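The training loop of Algorithm 1 can be rendered in Python roughly as follows. This is a sketch under assumed interfaces for the environment, the pre-trained VAE, the two policies, and the RL update (LSTM-A3C in the paper); the names and signatures do not come from a released codebase.

```python
def train_afa_policy(env, vae, policy_f, policy_c, rl_update, cost_c,
                     num_iterations, max_steps):
    """Sketch of Algorithm 1 (RL with Active Feature Acquisition).

    The VAE is assumed to be pre-trained on dataset D with Eq. (5) and is kept
    fixed here; only the feature-acquisition and task policies are updated.
    """
    for _ in range(num_iterations):                      # "while not converged"
        env.reset()
        x_partial, actions = [None], [(None, None)]      # null observation / actions at t = 0
        transitions = []
        for t in range(1, max_steps + 1):
            b_t = vae.infer_belief(x_partial, actions)   # b_t = q_phi(x^p_<=t, a_<t)
            a_f = policy_f.sample(b_t)                   # feature-acquisition action (binary mask)
            a_c = policy_c.sample(b_t)                   # control action
            x_next, reward, done = env.step(a_f, a_c)    # env returns only the acquired features
            cost = cost_c * sum(a_f)                     # c * |a^f_t|
            transitions.append((b_t, a_f, a_c, reward, cost, done))
            x_partial.append(x_next)
            actions.append((a_f, a_c))
            if done:
                break
        rl_update(policy_f, policy_c, transitions)       # e.g. an LSTM-A3C update with lr alpha
    return policy_f, policy_c
```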
B BOUNCING BALL+ B.1 TASK SPECIFICATION The task consists of a ball moving in a 2D box of size 32×32 pixels. The radius of the ball equals 2 pixels. At each step, a binary image is returned as an observation of the MDP state. At the beginning of every episode, the ball starts at a random position in the upper left quadrant (sampled uniformly). The initial velocity of the ball is randomly initialized as $\vec{v} = [V_x, V_y] = 4\cdot\tilde{v}/\|\tilde{v}\|$, where the x- and y-components of $\tilde{v}$ are sampled uniformly from the interval [−0.5, 0.5]. There is a navigation target set at (5, 25) pixels, which is in the lower left quadrant. The navigation is considered successful if the ball reaches the specified target location within a threshold of 1 pixel along both the x- and y-axis. The action space is defined as follows. There are five task actions Ac: • Increase velocity leftwards, i.e., change Vx by −0.5 • Increase velocity rightwards, i.e., change Vx by +0.5 • Increase velocity downwards, i.e., change Vy by +0.5 • Increase velocity upwards, i.e., change Vy by −0.5 • Keep velocities unchanged. The maximum velocity along the x/y-axis is 5.0; the velocity stays unchanged if an action would cause it to exceed this threshold. The feature acquisition action af ∈ Af is specified as acquiring the observation of a subset of the quadrants (this also includes acquiring the observation of all 4 quadrants). Thus, the agent can choose to observe between 0 and 4 quadrants. Each episode runs up to 50 steps. The episode terminates if the agent reaches the target location. B.2 IMPLEMENTATION DETAILS For all the compared methods, Zero-Imputing (Nazabal et al., 2018) is adopted to fill in missing features with a fixed value of 0.5. End-to-End The end-to-end model first processes the imputed image by 2 convolutional layers with filter sizes of 16 and 32, respectively. Each convolutional layer is followed by a ReLU activation function. Then the output is passed to a fully connected layer of size 1024. The weights of the fully connected layer are initialized with orthogonal initialization and the biases are initialized as zeros. NonSeq-ZI The non-sequential VAE models first process the imputed image by 2 convolutional layers with filter sizes of 32 and 64, respectively. Each convolutional layer is followed by a ReLU activation function. Then the output passes through a fully connected layer of size 256, followed by two additional fully connected layers of size 32 to generate the mean and variance of a Gaussian distribution. To decode an image, the sampled code first passes through a fully connected layer of size 256, followed by 3 convolutional layers with filters of 32, 32, and nc and strides of 2, 2 and 1, respectively, where nc is the channel size, which equals 2 for the binary image. There are two variants of NonSeq-ZI: one employs the partial loss, computed only on the observed variables; the other employs the full loss, computed on all the variables, i.e., the ground-truth image with full observation is employed as the target to train the model to impute the missing features. The hyperparameters for training NonSeq-ZI are summarized in Table 2. Seq-PO-VAE (ours) At each step, the Seq-PO-VAE takes an imputed image and an action vector of size 9 as input. The imputed image is processed by 3 convolutional layers with filter size 32 and stride 2. Each convolutional layer employs ReLU as its activation function. Then the output passes through a fully connected layer of size 32 to generate a latent representation for the image, fx. The action vector passes through a fully connected layer of size 32 to generate a latent representation for the action, fa. Then the image and action features are concatenated and augmented to form a feature vector fc = [fx, fa, fx ∗ fa], where [·] denotes concatenation of features. Then fc is fed to fully connected projection layers of size 64 and 32, respectively. 
The output is then fed to an LSTM module with a latent size of 32. The output ht of the LSTM is passed to two independent fully connected layers of size 32 each to generate the mean and variance of the Gaussian distribution filtered from the sequential inputs. To decode an image, the model adopts deconvolutional layers that are identical to those of NonSeq-ZI. The hyperparameters for training Seq-PO-VAE are shown in Table 2. LSTM-A3C We adopt LSTM-A3C (Mnih et al., 2016) to train the RL policy. The policy takes the features derived from the representation learning module as input. For the VAE-based methods, the input features are passed through a fully connected layer of size 1024. Then the features are fed to an LSTM with 1024 units. The output of the LSTM is fed to three independent fully connected layers to generate the estimates of the value, the task policy and the feature acquisition policy. We adopt normalized column initialization for all the fully connected layers, and the biases for the LSTM module are set to zero. B.3 DATA COLLECTION To train the VAEs, we prepare a training set that consists of 2000 trajectories. Half of the trajectories are derived from a random policy and the other half is derived from a policy learned with the End-to-End method. To train the End-to-End method, we employ a cost of 0.01 over the first 2M steps and then increase it to 0.02 for the following 0.5M steps. All the VAE models are evaluated on a test dataset that has the same size and data distribution as the training dataset. We present the best achieved task performance of the data collection policy (End-to-End) and our representation learning approach in Table 5. We notice that our proposed method, by employing an advanced representation model, leads to a significantly better feature acquisition policy than End-to-End (a smaller number of observations while achieving similar or better reward). B.4 IMPUTING MISSING FEATURES VIA LEARNING MODEL DYNAMICS We present an illustrative example to demonstrate the process of imputing missing features and the role of learning model dynamics. To this end, we collect trajectories under an End-to-End policy (the choice of the underlying RL policy is not that important since we just want to derive some trajectory samples for the VAE models to reconstruct) and use different VAE models to impute the observations. From the results presented in Figure 9, we observe that under the partially observable setting with missing features, the latent representation derived from our proposed method provides abundant information compared to only using information from a single time step, and thereby offers a significant benefit for the policy model to learn to acquire meaningful features and gain task reward. B.5 INVESTIGATION ON COST-PERFORMANCE TRADE-OFF We perform a case study investigating the cost-performance trade-off for each representation learning method, presented in Figure 9. As we increase the cost, the exploration-exploitation task becomes more challenging, and each compared method has its own upper bound on the cost above which it fails to learn an effective task policy while acquiring a minimal number of observations. First, we notice that the End-to-End model takes a long time to progress in learning task skills, while the VAE-based models can progress much faster. 
Among the VAE-based methods, we notice that our proposed method (Figure 9(d)) can achieve as few as 8 observations, whereas the baselines NonSeq-ZI (Full) (Figure 9(b)) and NonSeq-ZI (partial) (Figure 9(c)) converge to a level of ∼20 (the lowest point among the solid lines). Thus, we conclude that our proposed approach significantly benefits cost-sensitive policy training and leads to a policy that acquires far fewer observations while still succeeding in terms of task performance. C SEPSIS MEDICAL SIMULATOR C.1 TASK SPECIFICATIONS For this task we employ a Sepsis simulator proposed in previous work (Oberst & Sontag, 2019). The task is to learn to apply three treatment actions for Sepsis patients in intensive care units, i.e., Ac = {antibiotic, ventilation, vasopressors}. At each time step, the agent selects a subset of the treatment actions to apply. The state space consists of 8 features: 3 of them specify the current treatment status; 4 of them specify the measurement status in terms of heart rate, sysBP rate, percoxyg state and glucose level; the remaining one is a categorical feature indicating the patient's antibiotic status. The feature acquisition policy actively selects a subset among the measurement features for observation, i.e., Af = {heart rate, sysBP rate, percoxyg state, glucose level}. The objective of learning an active feature acquisition strategy is to help the decision-making system reduce measurement cost at a significant scale. C.2 IMPLEMENTATION DETAILS For all the compared methods, we adopt Zero-Imputing (Nazabal et al., 2018) to fill in missing features. In particular, a fixed value of -10, which is outside the range of feature values, is used to impute missing values. End-to-End The end-to-end model first processes the imputed state by 3 fully connected layers of size 32, 64 and 32, respectively. Each fully connected layer is followed by a ReLU activation. NonSeq-ZI The VAE model first processes the imputed state by 2 fully connected layers of size 32 and 64, with the first fully connected layer followed by a ReLU activation. Then the output is fed into two independent fully connected layers of size 10 each, to generate the mean and variance for the Gaussian distribution. To decode the state, the latent code is first processed by a fully connected layer of size 64, then fed into three fully connected layers of size 64, 32, and 8. The intermediate fully connected layers employ ReLU activation functions. Also, we adopt two variants of NonSeq-ZI, trained under either the full loss or the partial loss. The details of the hyperparameter settings used for training are presented in Table 4. Seq-PO-VAE (ours) At each time step, the inputs for state and action are first processed by their corresponding projection layers. The projection layers for the state consist of 3 fully connected layers of size 32, 16 and 10, where the intermediate fully connected layers are followed by a ReLU activation function. The projection layer for the action input is a fully connected layer of size 10. Then the projected state feature fx and action feature fa are combined in the following manner: fc = [fx, fa, fx ∗ fa]. fc is passed to 2 fully connected layers of size 64 and 32 to form the input to the LSTM module. The output ht of the LSTM is fed to two independent fully connected layers of size 10 to generate the mean and variance for the Gaussian distribution. The decoder for Seq-PO-VAE has the same architecture as that of NonSeq-ZI.
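The Zero-Imputing preprocessing described at the beginning of C.2 (a fixed fill value of -10, outside the feature range) can be sketched as follows; the example feature values are purely illustrative.

```python
import numpy as np

IMPUTE_VALUE = -10.0  # fixed fill value outside the range of the Sepsis features

def zero_impute(values, observed_mask, fill_value=IMPUTE_VALUE):
    """Zero-Imputing (Nazabal et al., 2018) as used in Appendix C.2: unobserved
    entries of the 8-dimensional state are replaced by a constant lying outside
    the valid feature range before the state is fed to the encoder."""
    values = np.asarray(values, dtype=np.float32)
    mask = np.asarray(observed_mask, dtype=bool)
    return np.where(mask, values, fill_value)

# Example with some of the 8 features unobserved at this step (values are illustrative):
imputed = zero_impute(values=[1, 0, 1, 2, 0, 0, 1, 0],
                      observed_mask=[1, 1, 1, 1, 0, 0, 1, 1])
```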
The details for training Seq-PO-VAE are presented in Table 4. LSTM-A3C The LSTM-A3C (Mnih et al., 2016) takes the encoded state features derived from the corresponding representation model as its input. The encoded features are fed into an LSTM of size 256. Then the LSTM output ht is fed to three independent fully connected layers to predict the state value, the feature acquisition policy and the task policy. Normalized column initialization is applied to all fully connected layers. The biases for the LSTM and fully connected layers are initialized as zero. C.3 DATA COLLECTION To train the VAEs, we prepare a training set that consists of 2000 trajectories. Half of the trajectories are derived from a random policy and the other half is derived from a policy learned from the End-to-End method with cost 0.0. All the VAE models are evaluated on a test dataset that has the same size and data distribution as the training dataset. We present the task treatment reward obtained by our data collection policy derived from the End-to-End method and that obtained by our proposed method in Table 5. Noticeably, by performing representation learning, we obtain a much better treatment reward than the data collection policy, which demonstrates the necessity of performing representation learning. C.4 MORE COMPARISON RESULTS UNDER DIFFERENT VALUES FOR COST We present additional experimental results that compare our proposed method and the non-sequential baselines under the cost values {0, 0.025}. The results for a cost value of 0.01 are shown in the main paper. Overall, under all the cost settings, our method leads to a significantly better discharge ratio and task reward compared to the baselines. Also, we demonstrate the cost-performance trade-off on the Sepsis domain. By increasing the value of the cost, we obtain feature acquisition policies that acquire a substantially smaller number of features within each episode. C.5 ILLUSTRATIVE EXAMPLES FOR MISSING FEATURE IMPUTATION IN Sepsis We present two illustrative examples in Figure 12 to demonstrate how imputing missing features via learning model dynamics helps decision making with partial observability in the Sepsis domain. The policy training process with partial observability can only access very limited information, due to the employment of active feature acquisition. Under such circumstances, imputing the missing features offers much more abundant information to the decision-making process. From the results shown in Figure 12, our model demonstrates considerable accuracy in imputing the missing features, even though it is extremely challenging to perform the missing feature imputation task given the distribution shift between the data collection policy and the online policy. The imputed missing information is greatly beneficial for training the task policy and the feature acquisition policy.
1. What is the focus of the paper regarding reinforcement learning and feature acquisition? 2. What are the strengths of the proposed approach, particularly in its ability to trade off feature acquisition costs with control rewards? 3. Do you have any concerns about the novelty of the work compared to previous studies, such as Igl et al. (2018)? 4. How does the reviewer assess the experimental details and baselines used in the study? 5. What are the potential ethical impacts of the work, and how could they be addressed? 6. Are there any minor issues with the paper's formatting, notation, or consistency?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a reinforcement learning + representation learning approach for simultaneously learning a control policy and feature acquisition policy in environments where feature observation is costly. The authors formulate an approach for learning time series latent variable models that incorporate information from both observation and action histories. They demonstrate through a series of experiments that their approach leads to better imputation (i.e., filling in missing values) and better rewards. Review Strengths: The problem is an important one: feature acquisition is indeed a major problem in the healthcare space, for example, and the ability to learn policies that effectively trade off information gathering/feature acquisition costs with environment control. The proposed VAE model seems to learn what it's supposed to learn - Figure 4 was a nice way to qualitatively capture exactly how the model is trading off feature acquisition costs with control rewards. Weaknesses: My main concern is that there is already a fairly similar work by Igl et al (2018) [1] that proposed a sequential VAE and had some promising results on a related set of tasks. I would expect to see their DVRL algorithm as well as something like "Deep Recurrent Q-Learning" [2] implemented as baselines before I agree with any claims around significant contributions or novelty. There are a few critical experimental details that are left ambiguous. For example, after reading through the manuscript and searching explicitly for information on the "end-to-end" baseline, it is not clear to me exactly what model was used and how it selectively acquired features. The requirement that the VAE be pretrained offline is a fairly restrictive one. It's not clear to me how realistic this is. Also, does that preclude any improvement to the VAE model while the agent is interacting with the environment online? That seems like a missed opportunity. There was little, if any, discussion about the potential ethical impacts of the work. This would be a requirement, in my mind, for acceptance. Minor comments: There are a number of minor grammatical errors throughout. The paper would benefit from some detailed proofreading. I tried to keep track of the errors but it quickly got out of hand. Citations are formatted awkwardly (use \citep rather than \citet or \cite). I believe A f in equation (1) should be A f in order to be consistent with prior notation. It wasn't fully clear to me based on the text what the difference is between the "full loss" and the "partial loss" (I understand that one is over "the entire features" and the other "only applies to the observed features", but how do you calculate the loss over "the entire features"?) [1] Igl, Maximilian, et al. "Deep variational reinforcement learning for POMDPs." International Conference on Machine Learning. PMLR, 2018. [2] Hausknecht, Matthew, and Peter Stone. "Deep recurrent q-learning for partially observable mdps." 2015 aaai fall symposium series. 2015.
ICLR
Title Hardware-aware compression with Random Operation Access Specific Tile (ROAST) hashing Abstract Advancements in deep learning are often associated with increasing model sizes. Training and deploying large models require sophisticated hardware and incur significantly higher costs. Thus, model compression is a widely explored approach to solving the problem. However, SOTA techniques fall short in one or more desirable aspects of compression for instance, pruning does not reduce memory for training, quantization can only provide up to 32x compression, HashedNet is cache-inefficient, etc. This paper proposes a model-agnostic, cache-friendly, and hardware-aware model compression approach: Random Operation Access Specific Tile (ROAST) hashing. ROAST collapses the parameters by clubbing them through a lightweight mapping. While clubbing these parameters, ROAST utilizes cache hierarchies by aligning the memory access pattern with the parameter access pattern. ROAST is up to ∼25× faster to train and ∼50× faster to infer than the popular parameter sharing method HashedNet. Additionally, ROAST introduces global weight sharing, which is empirically and theoretically superior to local weight sharing in HashedNet, and can be of independent interest. With ROAST, we can efficiently train and deploy the model using a much smaller memory footprint (∼ 10− 100× lesser) in text and image classification tasks. 1 INTRODUCTION Models across different domains, including Natural Language Processing (NLP), Computer Vision (CV), and Information Retrieval (IR), are exploding in size. State-of-the-art (SOTA) results in these domains are being obtained at a disproportionate increase in model sizes, questioning the sustainability of deep learning (Thompson et al., 2021). For instance, SOTA architectures for vision include VGG (Simonyan & Zisserman, 2014) (150M params, 0.6GB) and ViT (Dosovitskiy et al., 2020) (up to 304M params, 1.2GB). Additionally, SOTA NLP models range from BERT (Devlin et al., 2018) (340M params, 1.36GB) to GShard (Lepikhin et al., 2020) (600B params, 2.4TB). Similarly, industrial-scale recommendation models such as DLRM (Naumov et al., 2019; Mudigere et al., 2021) can have up to 10s of trillions of parameters (50TB). Large models, such as the above, come with various challenges. They need high-end distributed hardware for training and deployment, incurring higher costs. Additionally, the required modelparallel setup has higher inference and training-iteration latency for these models. Model compression is a research direction that aims to resolve these issues by reducing the memory footprint of the model. Compression of the order of 100× can eliminate the need for model-parallel setup for many SOTA models like GPT(Radford et al., 2019), Gshard(Lepikhin et al., 2020), DLRM (Naumov et al., 2019) which now can fit on a single GPU. Furthermore, compressing large models to small sizes come with immediate latency benefits. For example, Desai et al. (2022) showed that by compressing the DLRM model 1000× and using 1 GPU instead of 8 GPUs, we could get 3× faster inference at a lower cost. Also, in the case of CPU inference, a smaller model is efficient. For example, (Diamos et al., 2016) showed that if a single RNN layer can fit in registers, it leads to 146× faster inference. Thus, the ML community has heavily invested in model compression. 
A variety of model compression paradigms now exist in the literature, such as pruning (Han et al., 2016b), quantization (Han et al., 2016b), knowledge distillation (Buciluǎ et al., 2006), parameter sharing (Chen et al., 2015; Desai et al., 2022), and low-rank decomposition (Hrinchuk et al., 2020; Yin et al., 2021). Table 1 compares these approaches on three considerations: (1) whether the model memory is reduced for training, (2) whether the memory size can be controlled independently of the model, and (3) whether the approach considers the underlying memory hierarchies and is cache-efficient. We want the techniques to fare positively in these three aspects.

Table 1: Various compression techniques on three aspects: (1) memory reduction during training (apart from inference), (2) arbitrary control over memory, (3) hardware awareness / cache-efficiency. *Some versions of pruning are tuned to the underlying hardware and are cache-efficient.

| Technique | Training memory reduction | Arbitrary control on memory | Cache efficient |
| Pruning | No | No | No* |
| Low-rank decomposition | Yes | No | Yes |
| Low-precision | Yes | No | Yes |
| Quantization aware training (QAT) | No | No | N.A. |
| Parameter sharing - HashedNet | Yes | Yes | No |
| Knowledge Distillation | No | No | N.A. |
| ROAST (ours) | Yes | Yes | Yes |

However, techniques like pruning, QAT, and knowledge distillation require us to use the memory of the original model while training and only reduce inference-time memory. Additionally, there are limits to the compression obtained by quantization and pruning depending on which component we are compressing. For example, we cannot prune an embedding table (N × d) more than d× as we do not want any embedding vector to have all zeros. HashedNet provides memory reduction during training and arbitrary control over memory. However, the look-ups in HashedNet are randomly and independently distributed across the total memory. This makes HashedNet cache-inefficient. This paper presents Random Operation Access Specific Tile (ROAST) hashing, a parameter-sharing approach that provides cache-efficiency and arbitrary control over memory during training as well as inference. ROAST does not change the model’s functional form and can be applied to all computational modules of a model, such as MLP layers, attention blocks, convolution layers, and embedding tables. ROAST is hardware-aware: it proposes a tile-based hashing scheme tuned to the memory access pattern of the algorithmic implementation of the operation being performed. ROAST uses this hash function to recover blocks of the model from a single array of parameters, the ROAST array. ROAST is superior to HashedNet due to two factors: (1) Unlike HashedNet, ROAST proposes global weight sharing, where parameters are shared across the different computational modules. As we shall see, global weight sharing is empirically and theoretically superior to local weight sharing and might be of independent interest. (2) ROAST uses block-based hashing, which is theoretically superior to the count-sketch hashing used in HashedNet (Desai et al., 2022). We show that with ROAST, we can train a BERT-2-2 (2 layers, 2 attention heads) model on the largest available text-classification datasets (amazon-polarity, yelp-polarity) using 100× less memory without loss of quality. In cases where the model is overly parameterized, like using BERT-12-12 in the text classification task above, we can still obtain a similar compression of 100×. Thus, it is a good alternative to neural architecture search. The results extend to CV datasets as well.
Specifically, we can train a ResNet-9 model with 10× lesser memory for the CIFAR10 dataset. Importantly, we show that ROAST, due to its hardware-aware nature, is significantly faster than HashedNet: ROAST is up to ∼ 25× faster to train and ∼ 50× faster to infer than HashedNet for large matrix multiplications. Our current implementation of ROAST matrix multiplication is about 1.34× slower than full matrix multiplication in pytorch. This is a testament to how optimized CUBLAS libraries are. We believe, with enough investigation, we can make ROAST-MM comparably efficient to pytorch-MM as well. Limitations of ROAST: One of the goals of model compression, apart from reducing memory usage, is to reduce computational workload for deployment. ROAST, currently, is not devised to decrease computation; it only decreases the memory footprint of a model. Reducing computation with a small memory is left for future work. However, it should be noted that reducing the memory footprint can significantly reduce computation latency and power consumption. As shown in (Han et al., 2016a), accessing memory from RAM is 6400× costlier than 32bit INT ADD and 128× more expensive than on-chip SRAM access in terms of energy consumption. Additionally, RAM access generally is ∼100× slower than a floating-point operation. So, this model compression with ROAST, in our opinion, is an important step for efficient training and inference. 2 RELATED WORK This section briefly reviews the rich history of model compression paradigms. Model compression can be generally classified into two categories: (1) Compressing a learned model and (2) Learning a compressed model. ROAST lies in the second category. Compressing learned models: 1) Pruning: Pruning (Zhu & Gupta, 2017) is a technique to remove parts of a large model, including weights, blocks, and layers, to make the model lighter. Pruning can be performed as a one-time operation or gradually interspersed with training. 2) Quantization: Quantization can involve reducing the precision of the parameters of a model. Mixed precision models are sometimes used where different precision is used with different weights. KMeans quantization is another type of quantization, where models’ weights are clustered using KMeans, and each cluster’s centroid is used for all cluster weights. Model compression, in this case, is achieved by reducing the number of distinct weights. 3) Knowledge distillation: Knowledge distillation (Buciluǎ et al., 2006) is widely applied in model compression with a focus on distilled architectures. Knowledge distillation involves training a teacher model (large original model); then, a student model is trained using the logits of the teacher model. Empirically, the student model trained under this paradigm generalizes better than the student model trained standalone. Many variations exist on this basic idea of knowledge distillation. While these techniques have successfully reduced memory for inference, one of the drawbacks of this line of compression is that the memory usage while training the model is not reduced. ROAST, however, provides a solution that reduces the model’s memory during training and inference. Learning compressed models 1) Low-rank decomposition: In this method, matrices in the model are decomposed into a product of two low-rank matrices, thus saving memory per matrix. 
A generalization of low-rank decomposition to tensors is called tensor-train decomposition. 2) Parameter sharing: Parameter-sharing approaches such as HashedNet (Chen et al., 2015) are generally used for matrix compression. These approaches randomly share weights among different parameters, reducing the model’s memory usage. This line of research provides model reduction even during training. However, low-rank decomposition does not offer arbitrary control over the memory footprint, and HashedNets are inefficient due to heavy cache-thrashing caused by non-local lookups. Conversely, ROAST is a model-agnostic parameter-sharing approach that can arbitrarily reduce the model size without affecting the functional form while keeping the model recovery efficient.

3 BACKGROUND HashedNet: Compressing MLP matrices Previous work (Chen et al., 2015) introduced a weight-sharing method to compress the weight matrices of MLP models. They map each matrix parameter to a shared parameter array using a random hash function, xxhash (Collet, 2016). In the forward pass, this mapping is used to recover a weight matrix and perform matrix multiplication for each MLP layer. In the backward pass, the gradients of each weight matrix are mapped to the shared compressed array and aggregated using the sum operation. It should also be noted that each MLP layer uses an independent array of parameters. One of the main concerns with HashedNet is that memory accesses on the compressed array are non-coalesced. Thus, fetching a compressed matrix via HashedNet requires significantly more memory read transactions than fetching an uncompressed matrix, for which memory accesses can coalesce. Our evaluation shows that uncoalesced memory accesses lead to high latency, especially for large matrices. Random Block Offset Embedding Array (ROBE) for embedding compression In ROBE (Desai et al., 2022), the embedding table is generated using an array of parameters. The embedding of a token is obtained by drawing chunks of the embedding from the ROBE array. The locations of the chunks are decided randomly via lightweight universal hash functions. The authors of ROBE showed that ROBE hashing is theoretically superior to the feature hashing used in HashedNet. Also, the use of chunks causes memory accesses to coalesce, making embedding lookup efficient. ROAST proposes a component-agnostic, global parameter-sharing approach that tunes the hash function to match the memory accesses of the algorithmic implementation of each operation on the available hardware, thus giving a superior parameter-sharing scheme.

4 RANDOM OPERATION ACCESS SPECIFIC TILE (ROAST) HASHING Let M be the compressed memory from which parameters will be used, f be the model or the function that we want to run using M, and W be the recovered weights used in f. f can be considered as a composition of operations {Oi(Xi, Wi)}. By operation, we mean the smaller functions that, when composed together, give us the model f. Here Xi is the input to the operation, and Wi is the weights (i.e., learnable parameters) that Oi uses. Generally, the Wi are distinct and do not share parameters. Random Operation Access Specific Tile (ROAST) hashing is a way to perform efficient model-agnostic parameter-sharing-based compression. The following distinct aspects of ROAST set it apart from previous parameter-sharing-based methods. (1) ROAST is a generic technique applicable to all computational modules.
(2) ROAST proposes to tune its mapping from Wi to M in a way that coalesces memory accesses according to how memory is accessed during the operation. This makes ROAST efficient and up to 45× faster than competing approaches like HashedNet. (3) ROAST proposes Global Memory Sharing (GMS) as opposed to the Local Memory Sharing (LMS) used in HashedNet. We show GMS to be theoretically and empirically superior to LMS in Sections 5 and 6.

4.1 ROAST OPERATIONS IN DEEP LEARNING Any model f can be considered as a composition of smaller functions {Oi(Xi, Wi)}. There are multiple ways to perform this decomposition depending upon what we consider a valid (or small enough) operation. In ROAST, we consider three types of operations: (1) L(l, W), a lookup that accesses M and recovers the l-th element of W, say w. By element, we mean some particular part of W that is identifiable by an integer. An example with embedding tables is given in Figure 1. (2) MM(X, W), a matrix multiplication that multiplies X with W and returns the result, and (3) N(X), various operations that only act on the input but do not interact with M. In ROAST, in order to limit the memory usage, we make sure that L is used only on a small w and MM is performed without recovering the entire matrix. We find that most deep learning models, if not all, can be written as a composition of the operations N, MM, and L, where L is only applied on small parameters. Let us discuss how ROAST implements the L and MM operations in the following paragraphs.

Lookup (L(l, W)) We recover a parameter weight w of any shape in a row-major format. Thus, we can consider w = W(l) to be a 1D vector without loss of generality. ROAST recovers w from M in a blocked fashion. Consider w to be composed of chunks of size Z. Each chunk c is located in M using a universal hash function h1 and is recovered from the location h1(c) in M. Let C(i) give the chunk number of index i and O(i) give the offset of i in this chunk.

w[i] = \lambda \, \mathcal{M}[h_1(C(i)) + O(i)], \qquad h_1 : \mathbb{N} \to \{0, \dots, |\mathcal{M}| - Z\} \quad (1)

The recovered W has λ as a scaling factor, discussed in Section 4.2. The hash function hashes to the range {0, ..., |M| − Z} to avoid overflows while reading the memory. For example, Figure 1 (right) illustrates the embedding lookup using L with a chunk size of 2. ROAST uses L to implement computational modules such as embeddings, bias vectors, and so on. We generalize the embedding lookup kernel from ROBE (Desai et al., 2022) to implement our L kernel.
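To make the lookup concrete, here is a minimal NumPy sketch of Equation (1). The array size, hash function, and chunk size are illustrative assumptions rather than the paper's kernel, which operates on whole chunks and relies on coalesced GPU reads.

```python
import numpy as np

def universal_hash(c, mem_size, Z, a=1000003, b=10007):
    # Illustrative universal-style hash into {0, ..., |M| - Z}.
    return (a * c + b) % (mem_size - Z + 1)

def roast_lookup(M, num_elems, Z, lam, seed=0):
    """Recover one module's flattened (row-major) weight vector w of length
    `num_elems` from the shared ROAST array M, in chunks of size Z."""
    w = np.empty(num_elems, dtype=M.dtype)
    for i in range(num_elems):
        chunk, offset = divmod(i, Z)                    # C(i) and O(i)
        loc = universal_hash(chunk + seed, M.size, Z)   # h1(C(i))
        w[i] = lam * M[loc + offset]                    # Equation (1)
    return w

M = np.random.uniform(-0.1, 0.1, size=2**16).astype(np.float32)
w = roast_lookup(M, num_elems=64, Z=8, lam=1.0, seed=97)
```

A real kernel would copy whole chunks at a time (which is what makes the accesses coalesce); the per-element loop above only mirrors the indexing in Equation (1).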
Matrix multiplication (MM(Xi, Wi)) 2D matrix multiplication is one of the most widely used operations in deep learning. We implement our ROAST-MM kernel with parameter sharing performed in such a way that the matrix multiplication algorithm accesses coalesced pieces of M. An efficient implementation of matrix multiplication on a GPU follows a block multiplication algorithm to use the on-chip shared memory efficiently. While computing C = A × B, the matrices A, B, and C are divided into tiles of size Z0 × Z1, Z1 × Z2, and Z0 × Z2, respectively. Thus, we divide our 2D weight matrix into tiles of size Z1 × Z2. The tile (x, y), where x and y are the coordinates of the tile, is located in M in a row-major format via a universal hash function h2(x, y). Let C1(i, j) and C2(i, j) give the x-coordinate and y-coordinate of the tile to which (i, j) belongs. Similarly, let O1(i, j) and O2(i, j) give the x-offset and y-offset of the location (i, j) within its tile. Then, we use the following mapping for ROAST-MM:

W[i, j] = \lambda \, \mathcal{M}[h_2(C_1(i, j), C_2(i, j)) + Z_2 O_1(i, j) + O_2(i, j)], \qquad h_2 : \mathbb{N}^2 \to \{0, \dots, |\mathcal{M}| - Z_1 Z_2\}

Again, λ is the scaling factor discussed in Section 4.2. The hash function hashes to the range {0, ..., |M| − Z1Z2} to avoid overflows while reading the chunk. Figure 1 (left) illustrates ROAST-MM with a chunk size of 2 × 2. The above mapping is used whenever a 2D tile is accessed in the matrix multiplication algorithm. The pseudocode for ROAST-MM is shown in Algorithm 1. We discuss the implementation of this kernel and its evaluation later in the paper. ROAST uses the ROAST-MM kernel to implement computational modules such as MLP layers, attention blocks, etc. Each module invoking ROAST kernels uses independent hash functions.

Algorithm 1 ROAST-MM(I × H × O)
Require: X ∈ R^{I×H}, M, λ, h : N² → {0, ..., |M| − Z1Z2}
Ensure: output = MM(X, M[h(:, :)])
  value ← TILE(Z0, Z2)   ▷ Allocate a 2D tile of size Z0 × Z2 to accumulate results
  for i ∈ {0, 1, ..., ⌈I/Z0⌉ − 1} do
    for j ∈ {0, 1, ..., ⌈O/Z2⌉ − 1} do
      value[:, :] ← 0
      for k ∈ {0, 1, ..., ⌈H/Z1⌉ − 1} do
        value ← value + MM(X[iZ0 : (i+1)Z0, kZ1 : (k+1)Z1], M(h(k, j)))   ▷ M(h(k, j)) is the Z1 × Z2 weight tile read row-major from location h(k, j); the access passes through the hash function
      end for
      output[iZ0 : (i+1)Z0, jZ2 : (j+1)Z2] ← λ · value
    end for
  end for

Apart from scaling each recovered parameter with a module-specific λ, we can also multiply it with another independent hash function g : N^k → {±1} (k = 1 or k = 2).

4.2 GLOBAL MEMORY SHARING (GMS) HashedNet uses local memory sharing (LMS), which means that each layer has its own independent compressed memory. In contrast, ROAST proposes global memory sharing (GMS), wherein we share memory across modules. However, modules cannot directly use the parameters stored in M, as each module’s weights require initialization and optimization at different scales. For instance, in Xavier initialization (Glorot & Bengio, 2010), weights are initialized with the distribution Uniform(−1/√n, 1/√n), where n is the size of the input to the module. In GMS, we must ensure that each module gets weights at the required scale. To achieve this, we first initialize the entire ROAST parameter array with values from the distribution Uniform(−1/C, 1/C) for some constant C. Then, for each module, we scale the weights retrieved from the ROAST array by a factor of λ = C/√n. One can understand the benefit of GMS over LMS in terms of the number of distinct functions f that can be expressed using a fixed M. Consider a family of functions with n parameters. GMS can potentially express |M|^n functions across different random mappings. In LMS, let the separate parameters be of sizes n1, n2, ..., nk, and let each of them be mapped into memories M1, M2, ..., Mk. Thus, n = Σi ni and |M| = Σi |Mi|. Then LMS can only express |M1|^{n1} |M2|^{n2} ... |Mk|^{nk} different functions. Thus, the expressivity of LMS is strictly less than that of GMS and can be orders of magnitude less depending on the exact values of ni and |Mi|. We also show that GMS is superior to LMS in terms of dimensionality reduction (feature hashing) in Section 5.

Figure 2: Local memory sharing: each module compresses its parameters separately. In global memory sharing, all parameters from across the modules share the same memory.
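The following NumPy sketch ties Algorithm 1 and the GMS scaling of Section 4.2 together. The tile sizes, hash function, and constant C are illustrative assumptions, not the paper's Triton kernel, and the matrix dimensions are assumed to be multiples of the tile sizes.

```python
import numpy as np

def tile_hash(x, y, mem_size, tile_elems, a=2654435761, b=97, c=1000003):
    # Illustrative 2D universal-style hash into {0, ..., |M| - Z1*Z2}.
    return (a * x + b * y + c) % (mem_size - tile_elems + 1)

def roast_mm(X, M, lam, H, O, Z0=4, Z1=4, Z2=4):
    """Algorithm 1: multiply X (I x H) with a virtual H x O weight matrix whose
    Z1 x Z2 tiles live at hashed, row-major locations in the shared array M."""
    I = X.shape[0]
    out = np.zeros((I, O), dtype=X.dtype)
    for i in range(I // Z0):
        for j in range(O // Z2):
            value = np.zeros((Z0, Z2), dtype=X.dtype)
            for k in range(H // Z1):
                start = tile_hash(k, j, M.size, Z1 * Z2)
                w_tile = M[start:start + Z1 * Z2].reshape(Z1, Z2)
                value += X[i*Z0:(i+1)*Z0, k*Z1:(k+1)*Z1] @ w_tile
            out[i*Z0:(i+1)*Z0, j*Z2:(j+1)*Z2] = lam * value
    return out

C = 10.0
M = np.random.uniform(-1/C, 1/C, size=2**16).astype(np.float32)  # shared ROAST array
H, O = 64, 32
lam = C / np.sqrt(H)                      # per-module scale, lambda = C / sqrt(n)
X = np.random.randn(16, H).astype(np.float32)
Y = roast_mm(X, M, lam, H, O)             # (16, 32) output of the compressed layer
```

Different modules would use independently seeded hash functions and their own λ, but all of them read from the same array M, which is exactly the global memory sharing described above.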
4.3 FORWARD AND BACKWARD PASSES Recall that in ROAST, operations are of three types: L, MM, and N. The forward pass proceeds by applying each operation in sequence. If an operation is of type N, we directly apply its function on the input. For L and MM operations, outputs are computed according to the procedure described in Section 4.1. The gradient of the loss w.r.t. a weight wi in M is the λ-scaled aggregation of the gradients of the loss w.r.t. all the parameters that map to this weight. For simplicity of notation, consider θ as the complete parameter vector, λ(j) as the scaling factor used for the module that θj belongs to, and h as the mapping from θ to M. See Appendix B.1 for more details (a short numerical check of this rule is sketched below).

\nabla_{w_i} f(w) = \sum_{j : h(j) = i} \lambda(j) \, \nabla_{\theta_j} f(\theta) \quad (2)

4.4 IMPLEMENTATION OF ROAST-MM The high-performance community has heavily investigated fast implementations of the General Matrix Multiplication (GEMM) kernel, a fundamental operation in many computational workloads, including deep learning. Optimized implementations of GEMM kernels are available in vendor libraries such as cuBLAS (NVIDIA Corporation, 2022a) and CUTLASS (NVIDIA Corporation, 2022b). Unfortunately, these implementations do not support custom tile-loading operations, which are the key to ROAST-MM. To implement ROAST-MM to a reasonable level of efficiency, we used Triton (Tillet et al., 2019): an intermediate language for tiled neural network computations. Triton abstracts away shared-memory management, which makes it well suited to customizing tiled operations with high efficiency. In our implementation of ROAST-MM, the optimal size of the coalesced tiles is a parameter that depends on the shape of the weight matrix. Therefore, different tile sizes can lead to different parallelism, occupancy, and shared-memory efficiency, resulting in different execution times. We autotune this parameter to obtain the best performance for particular matrix shapes. We propose two strategies for autotuning each ROAST-MM layer: (1) optimize the inference workload by autotuning the forward kernel and sharing the tile size with the backward kernels; (2) optimize the training workload by autotuning the forward and backward kernels together. An extensive evaluation of this kernel is presented in Appendix C.2.
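As a sanity check on Equation (2), the PyTorch sketch below recovers two hypothetical module weights from one shared array and verifies that autograd accumulates exactly the λ-scaled sum of gradients at each shared entry. The index arrays and scales are made up for illustration; the paper's actual implementation uses a custom backward pass rather than framework autograd (Appendix B.1).

```python
import torch

torch.manual_seed(0)
M = torch.randn(32, requires_grad=True)      # shared ROAST array

# Hypothetical hash mappings of two modules' flattened weights into M.
idx_a = torch.randint(0, 32, (6,))           # module A: 6 virtual weights
idx_b = torch.randint(0, 32, (4,))           # module B: 4 virtual weights
lam_a, lam_b = 0.5, 2.0                      # per-module scaling factors

x_a, x_b = torch.randn(6), torch.randn(4)
W_a = lam_a * M[idx_a]                       # recovered weights
W_b = lam_b * M[idx_b]
loss = (W_a * x_a).sum() + (W_b * x_b).sum()
loss.backward()

# Equation (2): each entry of M accumulates the lambda-scaled gradients of
# every virtual parameter hashed to it.
expected = torch.zeros(32)
expected.index_add_(0, idx_a, lam_a * x_a)
expected.index_add_(0, idx_b, lam_b * x_b)
assert torch.allclose(M.grad, expected)
```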
5 FEATURE HASHING QUALITY: GLOBAL MEMORY SHARING ADVANTAGE OVER LOCAL MEMORY SHARING We can consider model compression as dimensionality reduction of a parameter vector (a one-dimensional vector of all parameters in a model) of size n into a vector of size |M| = m. The quality of inner-product preservation is used as a metric to measure the quality of the dimensionality reduction. In terms of dimensionality reduction, ROAST uses ROBE hashing, which was shown to be theoretically better than hashing individual elements. In this section, we compare ROAST’s GMS proposal against HashedNet’s LMS using a chunk size of one. Consider two parameter vectors x, y ∈ R^n; we are interested in how the inner product of the parameter vectors is preserved under hashing. Let x = [x_1, x_2, ..., x_k] and y = [y_1, y_2, ..., y_k] be composed of k vectors of sizes n_1, n_2, ..., n_k, where [·] denotes concatenation. In LMS, let each piece map to a memory of size f_i m, where \sum_i f_i = 1. The estimated inner product with GMS is

\widehat{\langle x, y \rangle}_{G,m} = \sum_{j=1}^{m} \Big( \sum_{i=1}^{n} \mathbb{I}(h(i)=j)\, g(i)\, x[i] \Big) \Big( \sum_{i=1}^{n} \mathbb{I}(h(i)=j)\, g(i)\, y[i] \Big) \quad (3)

The estimated inner product with LMS can be written as

\widehat{\langle x, y \rangle}_{L,m,\vec{f}} = \sum_{l=1}^{k} \sum_{j=1}^{f_l m} \Big( \sum_{i=1}^{n_l} \mathbb{I}(h(i)=j)\, g(i)\, x_l[i] \Big) \Big( \sum_{i=1}^{n_l} \mathbb{I}(h(i)=j)\, g(i)\, y_l[i] \Big) = \sum_{l=1}^{k} \widehat{\langle x_l, y_l \rangle}_{G, f_l m} \quad (4)

Theorem 1 Let x, y ∈ R^n be composed of k vectors x = [x_1, x_2, ..., x_k] and y = [y_1, y_2, ..., y_k]. Then the inner product estimators of global and local weight sharing are unbiased:

E\big(\widehat{\langle x, y \rangle}_{G,m}\big) = \langle x, y \rangle, \qquad E\big(\widehat{\langle x, y \rangle}_{L,m,\vec{f}}\big) = \langle x, y \rangle \quad (5)

The variances of the inner product estimators can be written as

V_G\big(\widehat{\langle x, y \rangle}\big) = \sum_{i} f_i V_i + \frac{1}{m} \sum_{i \neq j} \big( \|x_i\|_2^2 \|y_j\|_2^2 + \langle x_i, y_i \rangle \langle x_j, y_j \rangle \big) \quad (6)

V_L\big(\widehat{\langle x, y \rangle}\big) = \sum_{i} V_i \quad (7)

where V_l = \frac{1}{f_l m} \Big( \sum_{i \neq j} a_i^2 b_j^2 + \sum_{i \neq j} a_i b_i a_j b_j \Big), \quad x_l = (a_1, a_2, ..., a_{n_l}), \; y_l = (b_1, b_2, ..., b_{n_l}) \quad (8)

and V_L is the local-memory-sharing variance and V_G is the global-memory-sharing variance.

Intuition: The two terms in V_G can be understood as follows. The first term is the local variance with individual terms reduced by a factor of f_i. This is because each piece of the vector is being distributed in a memory that is 1/f_i× larger. However, in GMS, there is a possibility of more collisions across pieces. This leads to the second term in V_G. Note that, for a given x, y and a finite value of m, V_G is always bounded. At the same time, V_L is unbounded due to the 0 < f_i < 1 in the denominator. So if the number of pieces increases or a particular f_i grows smaller, V_L increases. While we cannot prove that V_G is strictly less than V_L, we can investigate the equations under some assumptions on the data. Practically, each piece of the parameter vector is a computational block like a matrix for multiplication or an embedding table lookup. These blocks are initialized at a per-weight scale inversely proportional to the square root of their size, so the norms of these vectors are similar. Let us assume the norm of each piece to be √α. Also, let us assume that over random data distributions over x and y, all the inner products are β in expectation. Then,

V_G \approx \frac{k^2}{m}(\alpha^2 + \beta^2), \qquad V_L \approx \frac{1}{m}(\alpha^2 + \beta^2)\Big(\frac{1}{f_1} + \frac{1}{f_2} + \dots + \frac{1}{f_k}\Big) \geq \frac{1}{m}(\alpha^2 + \beta^2)\, \frac{k^2}{\sum_i f_i} = V_G \quad (9)

Thus, V_L is greater than V_G, and it can be much greater depending on the exact values of f_i. The proof of the theorem and other details are presented in Appendix B.2.
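The comparison in (9) can be checked numerically. The sketch below simulates both estimators for a toy parameter vector split into unequal pieces; piece sizes, the memory budget, and the number of trials are arbitrary illustrative choices. Each piece is scaled to roughly unit norm, mirroring the equal-norm assumption behind (9), and the LMS memory is split proportionally to piece size.

```python
import numpy as np

rng = np.random.default_rng(0)
pieces = [2000, 500, 100]                    # module sizes n_1..n_k (illustrative)
n, m, k = sum(pieces), 256, len(pieces)
x = np.concatenate([rng.normal(scale=1/np.sqrt(p), size=p) for p in pieces])
y = np.concatenate([rng.normal(scale=1/np.sqrt(p), size=p) for p in pieces])
f = np.array(pieces) / n                     # LMS memory fractions f_l
bounds = np.cumsum([0] + pieces)

def hashed_ip(u, v, mem, rng):
    """Sign-hash estimate of <u, v> using `mem` buckets (Equation (3))."""
    h = rng.integers(0, mem, size=u.size)
    g = rng.choice([-1.0, 1.0], size=u.size)
    return np.bincount(h, weights=g * u, minlength=mem) @ \
           np.bincount(h, weights=g * v, minlength=mem)

gms, lms = [], []
for _ in range(2000):
    gms.append(hashed_ip(x, y, m, rng))                              # global sharing
    lms.append(sum(hashed_ip(x[bounds[l]:bounds[l+1]],               # local sharing
                             y[bounds[l]:bounds[l+1]],
                             max(1, int(f[l] * m)), rng) for l in range(k)))

print("true inner product :", x @ y)
print("GMS mean / variance:", np.mean(gms), np.var(gms))
print("LMS mean / variance:", np.mean(lms), np.var(lms))
```

Both estimators come out unbiased, as in (5), while the LMS variance is noticeably larger because the smallest piece is squeezed into a small share of the memory.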
6 EXPERIMENTAL EVALUATION Setup: In this section, we evaluate the ROAST compression approach on two types of tasks. The details of the tasks, datasets, and models used are listed in Table 2. For image-classification tasks, we choose the cifar-10 dataset and the leader of the DAWNBench benchmark (Coleman et al., 2017), a ResNet-9 model (https://github.com/apple/ml-cifar-10-faster), for cifar-10. The target accuracy for this benchmark is 94%, and hence we perform hyper-parameter tuning to get a test accuracy of ≥ 94%. We stop the tuning once we reach this accuracy, and hence the results for CIFAR-10 should be compared w.r.t. whether they cross 94.0%. For NLP tasks, we use the two largest available text-classification datasets on huggingface (HuggingFace, 2022). For the model, we use the BERT-x-y (x: number of layers, y: number of attention heads) architecture with a classification head. On both NLP datasets, using models larger than BERT-2-2 leads to similar test accuracy, and hence we choose BERT-2-2 as the base model. The other hyper-parameters for the NLP tasks are: batch size 64 for amazon-polarity and 32 for yelp-polarity, learning rate 2e-5, AdamW optimizer, and a linear scheduler.

Table 2: Experimental settings: the datasets and models used in experiments.

| Domain | Task | Dataset | #Samples | Model | Model size |
| NLP | text-classification | amazon-polarity | 3.6M/0.4M | BERT-2-2 | 37.43M |
| NLP | text-classification | yelp-polarity | 560K/38K | BERT-2-2 | 37.43M |
| CV | image-classification | cifar10 | 50K/10K | ResNet | 6.5M |

ROAST for compression As we can see in Tables 3 and 4, with ROAST we can achieve a similar quality of model in much smaller space. Specifically, for text-classification, we see that we can train and deploy the BERT-2-2 model in 100× less space. Similarly, we can train and deploy the ResNet model in 10× less space for the same target test accuracy. Thus, ROAST is an effective method for training and deploying models on memory-constrained systems.

Managing excess parameters It is clear from Table 3 that the BERT-base architecture is highly over-parameterized for the tasks under consideration. However, even in this case, ROAST can be used to control the memory footprint while maintaining the functional form of the larger model.

Pruning and ROAST We perform unstructured iterative-magnitude pruning (Han et al., 2016b) on the ResNet model and find that pruning gives up to 100× compression. Note, however, that pruning requires us to train the model using the memory required to store the original model, whereas compression with ROAST means using less memory even for training. Additionally, pruning can be used in conjunction with ROAST to obtain smaller models using smaller memory. In Table 4, we see that we can prune 90% of the weights in a 10× compressed ROAST array and still achieve the same quality.

Local vs. global memory sharing In Figure 3, we show that the quality of the model with global memory sharing is indeed better than with local memory sharing. This supports our theoretical observation about these memory-sharing schemes.

Efficiency of ROAST operators as compared to HashedNet Table 7 shows the inference performance of a simple model using ROAST-MM for matrix multiplication on compressed memory. Our model linearly transforms the input vector and computes its norm. We optimized the ROAST-MM kernel for this experiment using the inference-optimal strategy. We make the following observations from Table 7: (1) ROAST-MM outperforms the HashedNet kernel consistently across the different multiplication workloads. On average over different workloads, ROAST-MM is up to 45× faster than HashedNet. (2) ROAST-MM is 1.34× slower than PyTorch-MM. This is expected, as PyTorch-MM uses extremely optimized libraries for matrix multiplication and the ROAST-MM implementation is comparatively under-optimized. It is still interesting to note that ROAST-MM scales better than PyTorch-MM as the workload increases. As the workload increases 1600× (from 512×512 to 20480×20480), PyTorch-MM takes 39× the time and HashedNet takes 106× the time, whereas ROAST-MM only takes around 16× the time. We present more detailed measurements across different optimizers for the training-optimal strategy in Appendix C.2.

7 CONCLUSION Traditionally, model compression has focused on memory reduction during inference. However, model memory during training is also an important consideration. While some existing methods such as HashedNet and low-rank factorization provide model reduction during training, these methods either do not provide cache-efficient model recovery or have an implicit cap on memory reduction. ROAST overcomes these obstacles and provides cache-efficient, arbitrary control over the memory footprint of a model during training and inference. ROAST essentially provides a practical parameter-sharing method.
ROAST is theoretically better than HashedNet in terms of dimensionality reduction due to block-based hashing and global memory sharing. We empirically validate the efficiency advantage of ROAST over HashedNet and show that we can achieve high compression with ROAST.

A ADDITIONAL DATA FOR REVIEWERS - PARTS OF WHICH WILL GO IN MAIN PAPER IN FINAL VERSION

A.1 EXTENDED TABLE 3 WITH EPOCH INFORMATION AND MORE BASELINES We add a lot of information and new results to the table. Specifically,
• We add the GMS and LMS results to the table separately, so that readers can get an idea of each method on the task.
• We add unstructured pruning (the best pruning approach, quality-wise) results for the NLP tasks as well. The pruning results are obtained in the following manner. With the full-9-1 schedule, we start from the fully trained model, perform iterative pruning during the next 9 epochs, and then tune the final pruned model for 1 more epoch. In the full-1-9 schedule, we again start from the fully trained model, perform pruning in the next 1 epoch, and then tune the model further for 9 epochs. We note the best achieved accuracy with the final model structure and the epoch at which this accuracy is reached.
• For each result, we note the epoch at which the best accuracy was reached.
• We append an additional small table which notes the number of epochs required to reach a target accuracy, to compare the convergence of the different models.
We make the following observations.
• GMS reaches better accuracy than LMS for the same amount of compression on both datasets. Additionally, GMS reaches the same target accuracy faster than LMS.
• The ROAST approach is more effective than pruning approaches on the NLP text-classification tasks for architectures like BERT.
• It is interesting that GMS-10× converges faster than the original model on both datasets. We leave investigating this as future work.

A.2 GMS VS LMS FOR YELP As can be seen from the two plots in Figure 4, it is clear that GMS performs better than LMS in both compression settings.

B THEORY ROAST is a generalized model compression scheme which performs operation-specific, system-friendly lookups and global memory sharing. This raises some interesting theoretical questions.

B.1 BACKWARD PASS FOR MODEL SHARING WEIGHTS ACROSS DIFFERENT COMPONENTS A general function sharing a weight, say x, across different components can be written as f(x, g(x)). The interpretation is that x is used in g(·) and then used again further ahead in f (in the case of an MLP, we can think of x being used in multiple layers). Consider f(g_1, g_2), where both g_1 and g_2 are functions of x:

\frac{\partial f(g_1, g_2)}{\partial x} = \frac{\partial f(g_1, g_2)}{\partial g_1} \frac{\partial g_1}{\partial x} + \frac{\partial f(g_1, g_2)}{\partial g_2} \frac{\partial g_2}{\partial x} \quad (10)

With g_1 = x and g_2 = g(x),

\frac{\partial f(g_1, g_2)}{\partial x} = \frac{\partial f(x, g(y))}{\partial x}\Big|_{y=x} + \frac{\partial f(y, g(x))}{\partial g(x)} \frac{\partial g(x)}{\partial x}\Big|_{y=x} \quad (11)

\frac{\partial f(g_1, g_2)}{\partial x} = \frac{\partial f(x, g(y))}{\partial x}\Big|_{y=x} + \frac{\partial f(y, g(x))}{\partial x}\Big|_{y=x} \quad (12)

Renaming,

\frac{\partial f(x, g(x))}{\partial x} = \frac{\partial f(z, g(y))}{\partial z}\Big|_{y=x,\, z=x} + \frac{\partial f(z, g(y))}{\partial y}\Big|_{y=x,\, z=x} \quad (13)

Thus, we can essentially treat each place where x appears as a new variable, and the gradient w.r.t. x is just the summation of the partial derivatives of the function w.r.t. these new variables. It is therefore easy to implement this in the backward pass. In order to make sure that the memory utilization in the backward pass is not of the order of the recovered model size, we do not use the automatic differentiation of TensorFlow/PyTorch. We implement our own backward pass, and it can be found in the code.
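As a quick illustration of (13), the toy example below checks by finite differences that the derivative of f(x, g(x)) equals the sum of the partial derivatives obtained by treating each occurrence of x as a separate variable; the choice of f and g is arbitrary.

```python
import math

def f(a, b):            # f(g1, g2): the two occurrences of x enter as separate slots
    return a * math.sin(b)

def g(x):               # inner function that re-uses the shared weight x
    return x * x

x, eps = 0.7, 1e-6

# Total derivative of f(x, g(x)) w.r.t. x, by central differences.
total = (f(x + eps, g(x + eps)) - f(x - eps, g(x - eps))) / (2 * eps)

# Partial derivatives, varying one occurrence of x at a time (Equation (13)).
d_dz = (f(x + eps, g(x)) - f(x - eps, g(x))) / (2 * eps)   # first slot only
d_dy = (f(x, g(x + eps)) - f(x, g(x - eps))) / (2 * eps)   # x inside g only

assert abs(total - (d_dz + d_dy)) < 1e-5
```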
B.2 GLOBAL FEATURE HASHING VS LOCAL FEATURE HASHING We can consider model compression techniques as dimensionality reduction of the parameter vector (a one-dimensional vector of all parameters in a model) of size n into a vector of size |M| = m. The quality of inner-product preservation is used as a metric to measure the quality of the dimensionality reduction. In terms of dimensionality reduction, ROAST uses ROBE hashing (Desai et al., 2022), which showed that chunk-based hashing is theoretically better than hashing individual elements. In this section, we analyse the GMS proposal of ROAST against the LMS of HashedNet. For the purpose of this comparison, we assume a chunk size of 1. Consider two parameter vectors x, y ∈ R^n. We are interested in how the inner product between these parameter vectors is preserved under hashing. Let x = [x_1, x_2, ..., x_k] and y = [y_1, y_2, ..., y_k] be composed of k pieces of sizes n_1, n_2, ..., n_k. In LMS, let each piece be mapped into a memory of size f_l m, where \sum_l f_l = 1. The estimator of the inner product in the GMS case can be written as

\widehat{\langle x, y \rangle}_{G,m} = \sum_{j=1}^{m} \Big( \sum_{i=1}^{n} \mathbb{I}(h(i)=j)\, g(i)\, x[i] \Big) \Big( \sum_{i=1}^{n} \mathbb{I}(h(i)=j)\, g(i)\, y[i] \Big) \quad (14)

The estimate of the inner product with LMS can be written as

\widehat{\langle x, y \rangle}_{L,m,\vec{f}} = \sum_{l=1}^{k} \sum_{j=1}^{f_l m} \Big( \sum_{i=1}^{n_l} \mathbb{I}(h(i)=j)\, g(i)\, x_l[i] \Big) \Big( \sum_{i=1}^{n_l} \mathbb{I}(h(i)=j)\, g(i)\, y_l[i] \Big) = \sum_{l=1}^{k} \widehat{\langle x_l, y_l \rangle}_{G, f_l m} \quad (15)

Note that

\widehat{\langle x, y \rangle}_{L,m,\vec{f}} = \sum_{l=1}^{k} \widehat{\langle x_l, y_l \rangle}_{G, f_l m} \quad (16)

The GMS estimator is the standard feature-hashing estimator, and the LMS estimator is essentially a sum of GMS estimators, one for each piece.

Expectation. Since E[g(i)] = 0, it is easy to check by linearity of expectation that (the suffix L refers to local hashing and G to global hashing)

E_G = E\big(\widehat{\langle x, y \rangle}_{G,m}\big) = \langle x, y \rangle \quad (17)

E_L = E\big(\widehat{\langle x, y \rangle}_{L,m,\vec{f}}\big) = \langle x, y \rangle \quad (18)

Let us now look at the variance, using the following notation:
• V_G = V(\widehat{\langle x, y \rangle}_{G,m}), the GMS variance for the entire vectors;
• V_L = V(\widehat{\langle x, y \rangle}_{L,m,\vec{f}}), the LMS variance for the entire vectors;
• V_l = V(\widehat{\langle x_l, y_l \rangle}_{G, f_l m}), the variance for each piece.

We can write V_l as follows; the equation is easy to derive and can be found in Lemma 2 of Weinberger et al. (2009):

V_l = \frac{1}{f_l m} \Big( \sum_{i \neq j} a_i^2 b_j^2 + \sum_{i \neq j} a_i b_i a_j b_j \Big), \quad \text{where } x_l = (a_1, a_2, ..., a_{n_l}) \text{ and } y_l = (b_1, b_2, ..., b_{n_l}) \quad (19)

As each piece is independently hashed in LMS, we can see that

V_L = \sum_{l=1}^{k} V_l \quad (20)

Let us now look at V_G. Again, using Lemma 2 from Weinberger et al. (2009),

V_G = \frac{1}{m} \Big( \sum_{i \neq j} x_i^2 y_j^2 + \sum_{i \neq j} x_i y_i x_j y_j \Big) \quad (21)

The expression can be split into terms that belong to the same piece and terms that go across pieces:

V_G = \frac{1}{m} \sum_{l=1}^{k} \Big( \sum_{i \neq j \in \text{piece } l} x_i^2 y_j^2 + \sum_{i \neq j \in \text{piece } l} x_i y_i x_j y_j \Big) + \frac{1}{m} \sum_{l_1=1}^{k} \sum_{l_2=1, l_2 \neq l_1}^{k} \Big( \sum_{i \in \text{piece } l_1,\, j \in \text{piece } l_2} x_i^2 y_j^2 + \sum_{i \in \text{piece } l_1,\, j \in \text{piece } l_2} x_i y_i x_j y_j \Big)

V_G = \sum_{l=1}^{k} f_l V_l + \frac{1}{m} \sum_{l_1=1}^{k} \sum_{l_2=1, l_2 \neq l_1}^{k} \big( \|x_{l_1}\|_2^2 \|y_{l_2}\|_2^2 + \langle x_{l_1}, y_{l_1} \rangle \langle x_{l_2}, y_{l_2} \rangle \big) \quad (22)

Observation 1: In V_L we can see that there are terms with 1/f_l, which makes it unbounded. This makes sense: if the number of pieces increases a lot, many compression settings will not work, for example if the number of pieces exceeds |M|. Also, V_L is affected strongly when some f_l is very small, which can often be the case. For example, embedding tables in a DLRM model are generally much larger than the matrix-multiplication (MLP) modules, which can make f ≈ 0.001 for the MLP components.

Observation 2: Practically, we can assume each piece, no matter its size, to have the same norm. The reason lies in initialization: under Xavier initialization, the weights of a particular node are initialized with norm 1. So, for now, let us assume the practical case of all piece norms being equal to √α.
Also, in order to make the comparison, we need to consider some average case over the data. So let us assume that, under an independent randomized-data assumption, the expected value of all inner products is β. With this, in expectation over the randomized data, we have

V_G = \sum_{l} f_l V_l + \frac{k(k-1)}{m}(\alpha^2 + \beta^2) \quad (23)

Now note that

V_l = \frac{1}{f_l m} \Big( \sum_{i \neq j} a_i^2 b_j^2 + \sum_{i \neq j} a_i b_i a_j b_j \Big), \quad \text{where } x_l = (a_1, a_2, ..., a_{n_l}) \text{ and } y_l = (b_1, b_2, ..., b_{n_l}) \quad (24)

(dropping the subscript l below)

V_l = \frac{1}{f_l m} \Big( \|x\|_2^2 \|y\|_2^2 + \langle x, y \rangle^2 - 2 \sum_{i} x_i^2 y_i^2 \Big) \quad (25)

V_l = \frac{1}{f_l m} \Big( (\alpha^2 + \beta^2) - 2 \sum_{i} x_i^2 y_i^2 \Big) \quad (26)

Note that for each negative term there are n_l positive terms. To simplify, we disregard the negative term in the equation above. This is a practical approximation, made only to get a sense of the relation between V_L and V_G. Then

V_L - V_G = \sum_l V_l - \sum_l f_l V_l - \frac{k(k-1)}{m}(\alpha^2 + \beta^2)
          = \frac{1}{m}(\alpha^2 + \beta^2) \sum_l \Big(\frac{1}{f_l} - 1\Big) - \frac{k(k-1)}{m}(\alpha^2 + \beta^2)
          \geq \frac{k(k-1)}{m}(\alpha^2 + \beta^2) - \frac{k(k-1)}{m}(\alpha^2 + \beta^2) \qquad \text{(since } \sum_l 1/f_l \geq k^2 / \sum_l f_l = k^2\text{)}
          \geq 0

Note that we ignored a term which reduces V_L a bit; letting that error be ϵ,

V_L - V_G \geq -\epsilon \quad (27)

The above equation shows that, even in the best case for LMS, V_G might be only slightly more than V_L. In the general case, however, where the harmonic mean is much worse than the arithmetic mean, V_L will be much larger, depending on the exact values of f_l.

C ROAST-MM LATENCY MEASUREMENTS
C.1 INFERENCE OPTIMIZATION
C.2 TRAINING OPTIMIZATION See Tables 8, 9, 10, and 11.
D VARIANCE IN QUALITY OVER DIFFERENT RUNS Figure 5 shows three runs of the ROASTed BERT and BERT models.
1. What is the focus and contribution of the paper on model compression? 2. What are the strengths of the proposed approach, particularly in terms of its ability to reduce memory footprint? 3. What are the weaknesses of the paper, especially regarding its applicability to complex tasks and sensitivity to model-specific parameters? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper introduces ROAST (Random Operation Access Specific Tile) hashing, a model-agnostic, hardware-aware model compression framework. ROAST essentially provides a global parameter sharing method to give arbitrary control to the user over the memory footprint of model during both training and inference. Evaluation with both BERT and ResNet-9 demonstrates the feasibility of 100x memory footprint reduction without accuracy degradation. Strengths And Weaknesses Strengths 100 × reduction of memory usage with no accuracy drop is impressive. ROAST addresses the memory usage in both training and serving. It is interesting to see only three ROAST operations are sufficient for running DL models. Weaknesses ROAST has been evaluated on relatively simple tasks with a small number of classes and it is not clear how it is applicable to larger, more complex tasks without causing accuracy drop. The baseline used for evaluation is HashedNet, which seems somewhat outdated. In ROAST-MM there is a model-specific parameter λ , which is determined by "some constant C ", and it is not clear how sensitive the performance is to the setting of this parameter. Clarity, Quality, Novelty And Reproducibility The paper is clearly written for the most part. This work builds on the idea of ROBE, but expands its scope to a generalized embedding lookup operation and non-embedding operations.
ICLR
Title Hardware-aware compression with Random Operation Access Specific Tile (ROAST) hashing Abstract Advancements in deep learning are often associated with increasing model sizes. Training and deploying large models require sophisticated hardware and incur significantly higher costs. Thus, model compression is a widely explored approach to solving the problem. However, SOTA techniques fall short in one or more desirable aspects of compression for instance, pruning does not reduce memory for training, quantization can only provide up to 32x compression, HashedNet is cache-inefficient, etc. This paper proposes a model-agnostic, cache-friendly, and hardware-aware model compression approach: Random Operation Access Specific Tile (ROAST) hashing. ROAST collapses the parameters by clubbing them through a lightweight mapping. While clubbing these parameters, ROAST utilizes cache hierarchies by aligning the memory access pattern with the parameter access pattern. ROAST is up to ∼25× faster to train and ∼50× faster to infer than the popular parameter sharing method HashedNet. Additionally, ROAST introduces global weight sharing, which is empirically and theoretically superior to local weight sharing in HashedNet, and can be of independent interest. With ROAST, we can efficiently train and deploy the model using a much smaller memory footprint (∼ 10− 100× lesser) in text and image classification tasks. 1 INTRODUCTION Models across different domains, including Natural Language Processing (NLP), Computer Vision (CV), and Information Retrieval (IR), are exploding in size. State-of-the-art (SOTA) results in these domains are being obtained at a disproportionate increase in model sizes, questioning the sustainability of deep learning (Thompson et al., 2021). For instance, SOTA architectures for vision include VGG (Simonyan & Zisserman, 2014) (150M params, 0.6GB) and ViT (Dosovitskiy et al., 2020) (up to 304M params, 1.2GB). Additionally, SOTA NLP models range from BERT (Devlin et al., 2018) (340M params, 1.36GB) to GShard (Lepikhin et al., 2020) (600B params, 2.4TB). Similarly, industrial-scale recommendation models such as DLRM (Naumov et al., 2019; Mudigere et al., 2021) can have up to 10s of trillions of parameters (50TB). Large models, such as the above, come with various challenges. They need high-end distributed hardware for training and deployment, incurring higher costs. Additionally, the required modelparallel setup has higher inference and training-iteration latency for these models. Model compression is a research direction that aims to resolve these issues by reducing the memory footprint of the model. Compression of the order of 100× can eliminate the need for model-parallel setup for many SOTA models like GPT(Radford et al., 2019), Gshard(Lepikhin et al., 2020), DLRM (Naumov et al., 2019) which now can fit on a single GPU. Furthermore, compressing large models to small sizes come with immediate latency benefits. For example, Desai et al. (2022) showed that by compressing the DLRM model 1000× and using 1 GPU instead of 8 GPUs, we could get 3× faster inference at a lower cost. Also, in the case of CPU inference, a smaller model is efficient. For example, (Diamos et al., 2016) showed that if a single RNN layer can fit in registers, it leads to 146× faster inference. Thus, the ML community has heavily invested in model compression. 
A variety of model compression paradigms now exist in literature like pruning (Han et al., 2016b), quantisation (Han et al., 2016b), knowledge distillation (Buciluǎ et al., 2006), parameter-sharing (Chen et al., 2015; Desai et al., 2022), and low rank decomposition (Hrinchuk et al., 2020; Yin et al., 2021). Table 1 compares these approaches on three considerations (1) if the model memory is reduced for training. (2) if the memory size can be controlled independently of the model, and (3) if the approach considers the underlying Table 1: Various compression techniques on three aspects (1) Memory reduction during training ( apart from inference) (2) arbitrary control over memory (3) Hardware awareness / cache-efficiency * Some versions of pruning that are tuned to the underlying hardware and are cache-efficient Training memory reduction Arbitrary control on memory Cache efficient Pruning No No No* Low-rank decomposition Yes No Yes Low-precision Yes No Yes Quantization aware training (QAT) No No N.A Parameter sharing - HashedNet Yes Yes No Knowledge Distillation No No N.A ROAST (ours) Yes Yes Yes memory hierarchies and is cache-efficient. We want the techniques to fare positively in these three aspects. However, techniques like pruning, QAT, and knowledge distillation require us to use the memory of the original model while training and only reduce inference time memory. Additionally, there are limits to compression obtained by quantization and pruning depending on which component we are compressing. For example, we cannot prune an embedding table (N × d) more than d× as we do not want any embedding vector to have all zeros. HashedNet provides memory reduction during training and arbitrary control over memory. However, the look-ups in HashedNet are randomly and independently distributed across the total memory. This makes HashedNet cache-inefficient. This paper presents Random Operation Access Specific Tile (ROAST) hashing, a parameter-sharing approach that provides cache-efficiency and arbitrary control over memory during training as well as inference. ROAST does not change the model’s functional form and can be applied to all computational modules of a model, such as MLP layers, attention blocks, convolution layers, and embedding tables. ROAST is hardware aware: it proposes a tile-based hashing scheme tuned to the memory access pattern of the algorithmic implementation of the operation being performed. ROAST uses this hash function to recover blocks of the model from a single array of parameters - ROAST array. ROAST is superior to HashedNet due to two factors (1) Unlike HashedNet, ROAST proposes global weight-sharing where parameters are shared across the different computational modules. As we shall see, global weight-sharing is empirically and theoretically superior to local weight-sharing and might be of independent interest. (2) ROAST uses block-based hashing, which is theoretically superior to count-sketch hashing used in HashedNet. (Desai et al., 2022) We show that with ROAST, we can train a BERT-2-2 ( 2 layers, 2 attention heads) model on the largest available text-classification datasets (amazon-polarity, yelp-polarity) using 100× lesser memory without loss of quality. In cases where the model is overly parameterized, like using BERT-12-12 in the text classification task above, we can still obtain similar compression of 100×. Thus it is a good alternative to neural architecture search. The results extend to CV datasets as well. 
Specifically, we can train a ResNet-9 model with 10× lesser memory for the CIFAR10 dataset. Importantly, we show that ROAST, due to its hardware-aware nature, is significantly faster than HashedNet: ROAST is up to ∼ 25× faster to train and ∼ 50× faster to infer than HashedNet for large matrix multiplications. Our current implementation of ROAST matrix multiplication is about 1.34× slower than full matrix multiplication in pytorch. This is a testament to how optimized CUBLAS libraries are. We believe, with enough investigation, we can make ROAST-MM comparably efficient to pytorch-MM as well. Limitations of ROAST: One of the goals of model compression, apart from reducing memory usage, is to reduce computational workload for deployment. ROAST, currently, is not devised to decrease computation; it only decreases the memory footprint of a model. Reducing computation with a small memory is left for future work. However, it should be noted that reducing the memory footprint can significantly reduce computation latency and power consumption. As shown in (Han et al., 2016a), accessing memory from RAM is 6400× costlier than 32bit INT ADD and 128× more expensive than on-chip SRAM access in terms of energy consumption. Additionally, RAM access generally is ∼100× slower than a floating-point operation. So, this model compression with ROAST, in our opinion, is an important step for efficient training and inference. 2 RELATED WORK This section briefly reviews the rich history of model compression paradigms. Model compression can be generally classified into two categories: (1) Compressing a learned model and (2) Learning a compressed model. ROAST lies in the second category. Compressing learned models: 1) Pruning: Pruning (Zhu & Gupta, 2017) is a technique to remove parts of a large model, including weights, blocks, and layers, to make the model lighter. Pruning can be performed as a one-time operation or gradually interspersed with training. 2) Quantization: Quantization can involve reducing the precision of the parameters of a model. Mixed precision models are sometimes used where different precision is used with different weights. KMeans quantization is another type of quantization, where models’ weights are clustered using KMeans, and each cluster’s centroid is used for all cluster weights. Model compression, in this case, is achieved by reducing the number of distinct weights. 3) Knowledge distillation: Knowledge distillation (Buciluǎ et al., 2006) is widely applied in model compression with a focus on distilled architectures. Knowledge distillation involves training a teacher model (large original model); then, a student model is trained using the logits of the teacher model. Empirically, the student model trained under this paradigm generalizes better than the student model trained standalone. Many variations exist on this basic idea of knowledge distillation. While these techniques have successfully reduced memory for inference, one of the drawbacks of this line of compression is that the memory usage while training the model is not reduced. ROAST, however, provides a solution that reduces the model’s memory during training and inference. Learning compressed models 1) Low-rank decomposition: In this method, matrices in the model are decomposed into a product of two low-rank matrices, thus saving memory per matrix. 
A generalization of low-rank decomposition to tensors is called tensor-train decomposition 2) Parameter sharing: Parameter sharing approaches such as HashedNet (Chen et al., 2015) are generally used for matrix compression. These approaches randomly share weights among different parameters, reducing the model’s memory usage. This line of research provides model reduction even during training. However, Low-rank decomposition does not offer arbitrary control over memory footprint, and HashedNets are inefficient due to heavy cache-trashing caused by non-local lookups. Conversely, ROAST is a model-agnostic parameter-sharing approach that can arbitrarily reduce the model size without affecting the functional form while keeping the model recovery efficient. 3 BACKGROUND HashedNet: Compressing MLP matrices Previous work (Chen et al., 2015) introduced a weight sharing method to compress weight matrices of MLP models. They map each matrix parameter to a shared parameter array using a random hash function xxhash (Collet, 2016). In the forward pass, this mapping is used to recover a weight matrix and perform matrix multiplication for each MLP layer. In the backward pass, the gradients of each weight matrix are mapped to the shared compressed array and aggregated using the sum operation. It should also be noted that each MLP layer uses an independent array of parameters. One of the main concerns with HashedNet is that memory accesses on the compressed array are non-coalesced. Thus, fetching a compressed matrix via HashedNet requires significantly more memory read transactions than fetching an uncompressed matrix for which memory accesses can coalesce. Our evaluation shows that uncoalesced memory accesses lead to high latency, especially for large matrices. Random Block Offset Embedding Array (ROBE) for embedding compression In ROBE (Desai et al., 2022), the embedding table is generated using an array of parameters. The embedding of a token is obtained by drawing chunks of the embedding from the ROBE array. The locations of the chunks are decided randomly via light-weight universal hash functions. Authors of ROBE showed that ROBE hashing is theoretically superior to feature hashing used in HashedNet. Also, the use of chunks causes memory accesses to coalesce, making embedding lookup efficient. ROAST proposes a component agnostic, global parameter sharing approach that tunes the hashing function to match memory accesses of algorithmic implementation of operation over available hardware, thus giving a superior parameter sharing scheme. 4 RANDOM OPERATION ACCESS SPECIFIC TILE (ROAST) HASHING LetM be the compressed memory from which parameters will be used, f be the model or the function that we want to run usingM, and W be the recovered weights used in f . f can be considered as a composition of operations {Oi(Xi,Wi)}. By operation, we mean the smaller functions that, when composed together, give us the model f . Here Xi is the input to the operation, and Wi is the weights (i.e., learnable parameters) that Oi uses. Generally, Wis are distinct and do not share parameters. Random Operation Access Specific Tile (ROAST) hashing is a way to perform efficient modelagnostic parameter sharing-based compression. The following distinct aspects of ROAST set it apart from previous parameter sharing-based methods. (1) ROAST is a generic technique applicable to all computational modules. 
(2) ROAST proposes to tune its mapping from Wi toM in a way that coalesces memory accesses according to how memory is accessed during the operation. This makes ROAST efficient and up to 45× faster than competing approaches like HashedNet. (3) ROAST proposes Global Memory Sharing (GMS) as opposed to Local Memory Sharing (LMS) used in HashedNet. We show GMS to be theoretically and empirically superior to LMS in Section 5 and 6. 4.1 ROAST OPERATIONS IN DEEP LEARNING Any model f can be considered as a composition of smaller functions {Oi(Xi,Wi)}. There are multiple ways to perform this decomposition depending upon what we consider a valid (or small enough) operation. In ROAST, we consider three types of operations: (1) L(l,W ), lookup that accessesM and recovers lth element of W , say w. By element, we mean some particular part of W that is identifiable by an integer. An example with embedding tables is given in figure 1. (2) MM(X,W ), matrix multiplication that multiplies X with W and returns the result, and (3) N(X), various operations that only act on the input but do not interact withM. In ROAST, in order to limit the memory usage, we make sure that L is used only on a small w and MM is performed without recovering the entire matrix. We find that most deep learning models, if not all, can be written as a composition of operations N, MM and L, where L is only applied on small parameters. Let us discuss how ROAST implements L and MM operations in the following paragraphs. Lookup (L(l,W )) We recover a parameter weight w of any shape in a row-major format. Thus, we can consider w = W (l) to be a 1D vector without loss of generality. ROAST recovers w fromM in a blocked fashion. Consider w to be composed of chunks of size Z. Each chunk c is located inM using a universal hash function h1 and is recovered from the location h1(c) inM. Let C(i) give the chunk number of index i and O(i) give the offset of i in this chunk. w[i] = λM[h1(C(i)) +O(i)] h1 : N→ {0, ..., |M| − Z} (1) The recovered W has λ as a scaling factor discussed in section 4.2. The hash function hashes to a range {0, ..., |M| − Z} to avoid overflows while reading the memory. For example, Figure 1 (right) illustrates the embedding lookup using L with chunk size of 2. ROAST uses L to implement computational modules such as embeddings, bias vectors, and so on. We generalize the embedding lookup kernel from ROBE (Desai et al., 2022) to implement our L kernel. Matrix multiplication (MM(Xi,Wi)) 2D matrix multiplication is one of the most widely used operations in deep learning. We implement our ROAST-MM kernel with parameter sharing performed in a way that the algorithm for matrix multiplication accesses coalesced pieces ofM. An efficient implementation of matrix multiplication on GPU follows a block multiplication algorithm to use the on-chip shared memory efficiently. While computing C = A × B, A, B and C are divided in tiles of size Z0 × Z1, Z1 × Z2 and Z0 × Z2 respectively. Thus, we divide our 2D weight matrix into tiles of size Z1 × Z2. The tile, (x, y), where x and y are the coordinates of the tile, is located in M in a row-major format via a universal hash function h2(x, y). Let C1(i, j) and C2(i, j) give the x-coordinate and y-coordinate of the tile to which i, j belongs. Similarly, let O1(i, j) and O2(i, j) give the x-offset and y-offset of a location (i, j) on the tile. 
Then, we use the following mapping for ROAST-MM, W [i, j] = λM[h2(C1(i, j), C2(i, j)) + Z2O1(i, j) +O2(i, j)] h2 : N2 → {0, ..., |M| − Z1Z2} Again, λ is the scaling factor discussed in section 4.2. The hash function hashes to a range {0, ..., |M| − Z1Z2} to avoid overflows while reading the chunk. Figure 1 (left) illustrates ROASTMM with a chunk size of 2× 2. The above mapping is used whenever a 2D tile is accessed in the matrix multiplication algorithm. The pseudo code for ROAST-MM is shown in algorithm 1. We talk about implementation of this kernel and its evaluation in the later part of the paper. ROAST uses ROAST-MM kernel to implement computational modules such as MLP layers, attention blocks, etc. Each module invoking ROAST kernels uses independent hash functions. Algorithm 1 ROAST-MM(I ×H ×O) Require: X ∈ RI×H ,M, λ, h : N2 → {0, ..., |M| − Z1Z2} Ensure: output = MM(X,M[h(:, :)]) value← TILE(Z0, Z2) ▷ Allocate a 2D tile of size Z0 × Z2 to accumulate results for i ∈ {0, 1, ..., ⌈I/Z0⌉ − 1} do for j ∈ {0, 1, ..., ⌈O/Z2⌉ − 1} do value[:, :]← 0 for k ∈ {0, 1, ..., ⌈H/Z1⌉ − 1} do value← value+MM(X[i : i+ Z0, k : k + Z1],M(h(k : k + Z1, j : j + Z2))) ▷ Access to the weight tile passes through the hash function end for output[i : i+ Z0, j : j + Z2]← λ ∗ value end for end for Apart from scaling each recovered parameter with module-specifc λ, we can also multiply it with another independent hash function g : Nk → {±1} (k=1 or k=2). 4.2 GLOBAL MEMORY SHARING (GMS) HashedNet uses local memory sharing (LMS), which states that each layer will have independent compressed memory. In contrast, ROAST proposes global memory sharing (GMS), wherein we share memory across modules. However, modules cannot directly use the parameters stored inM as each module’s weights requires initialization and optimization at different scales. For instance, in the Xavier’s initialization (Glorot & Bengio, 2010), weights are initialized with distribution Uniform(−1/ √ n, 1/ √ n) where n is size of the input to the module. In GMS, we must ensure that each module gets weights at the required scale. To achieve this, we first initialize the entire ROAST parameter array with values from the distribution Uniform(−1/C, 1/C) for some constant C. Then, for each module, we scale the weights retrieved from the ROAST array by a factor of λ = C/ √ n. One can understand the benefit of GMS over LMS in terms of the number of distinct functions in f that can be expressed using a fixedM. Consider a family of functions with n parameters. GMS can potentially express |M|n functions across different random mappings. In LMS, let separate parameters be of sizes n1, n2, ..nk and each of them is mapped into memoriesM1,M2, ...,Mk. Thus, n = ∑ i ni and |M| = ∑ i |Mi|. Then LMS can only express |M1|n1 |M2|n2 ....|Mk|nk different functions. Thus expressivity of LMS is strictly less than that of GMS and can be orders of magnitude less depending on exact values of ni and |Mi|. We also show that GMS is superior to LMS in terms of dimensionality reduction (feature hashing) in Section 5. Figure 2: Local memory sharing : each module compresses its parameters separately. In Global memory sharing, all parameters from accross the modules share the same memory 4.3 FORWARD AND BACKWARD PASSES Recall that in ROAST, operations are of three types L,MM and N. The forward pass proceeds by applying each operation in sequence. If an operation is of type N, we directly apply its function on the input. 
4.2 GLOBAL MEMORY SHARING (GMS)

HashedNet uses local memory sharing (LMS), in which each layer has its own independent compressed memory. In contrast, ROAST proposes global memory sharing (GMS), wherein we share memory across modules. However, modules cannot directly use the parameters stored in M, as each module’s weights require initialization and optimization at different scales. For instance, in Xavier’s initialization (Glorot & Bengio, 2010), weights are initialized with the distribution Uniform(−1/√n, 1/√n), where n is the size of the input to the module. In GMS, we must ensure that each module gets weights at the required scale. To achieve this, we first initialize the entire ROAST parameter array with values from the distribution Uniform(−1/C, 1/C) for some constant C. Then, for each module, we scale the weights retrieved from the ROAST array by a factor of λ = C/√n. One can understand the benefit of GMS over LMS in terms of the number of distinct functions in f that can be expressed using a fixed M. Consider a family of functions with n parameters. GMS can potentially express |M|^n functions across different random mappings. In LMS, let the separate parameters be of sizes n_1, n_2, ..., n_k and let each of them be mapped into memories M_1, M_2, ..., M_k. Thus, n = Σ_i n_i and |M| = Σ_i |M_i|. Then LMS can only express |M_1|^{n_1} |M_2|^{n_2} ... |M_k|^{n_k} different functions. Thus the expressivity of LMS is strictly less than that of GMS and can be orders of magnitude less depending on the exact values of n_i and |M_i|. We also show that GMS is superior to LMS in terms of dimensionality reduction (feature hashing) in Section 5.

Figure 2: Local memory sharing: each module compresses its parameters separately. In global memory sharing, all parameters from across the modules share the same memory.

4.3 FORWARD AND BACKWARD PASSES

Recall that in ROAST, operations are of three types: L, MM and N. The forward pass proceeds by applying each operation in sequence. If an operation is of type N, we directly apply its function on the input. For L and MM operations, outputs are computed according to the procedure described in Section 4.1. The gradient of the loss w.r.t. a weight w_i in M is the λ-scaled aggregation of the gradients of the loss w.r.t. all the parameters that map to this weight. For simplicity of notation, consider θ as the complete parameter, λ(j) as the scaling factor we use for the module that θ_j belongs to, and h as the mapping from θ to M. See Appendix B.1 for more details.

\nabla_{w_i} f(w) = \sum_{j : h(j)=i} \lambda(j)\, \nabla_{\theta_j} f(\theta)    (2)
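A minimal sketch of how the aggregation in Equation (2) could be realized (the names are hypothetical; in practice this is a scatter-add into the ROAST array rather than an explicit Python loop):

import numpy as np

def accumulate_roast_grad(grad_theta, mapping, lam_per_param, mem_size):
    # grad_theta[j]    : gradient w.r.t. the j-th virtual (recovered) parameter
    # mapping[j]       : index h(j) of the slot in M that parameter j reads from
    # lam_per_param[j] : scaling factor lambda of the module that owns parameter j
    grad_M = np.zeros(mem_size)
    # Equation (2): grad_M[i] = sum over j with h(j) = i of lam(j) * grad_theta[j]
    np.add.at(grad_M, mapping, lam_per_param * grad_theta)
    return grad_M

# Toy usage: 10 virtual parameters sharing a memory of 4 slots.
grad_theta = np.random.randn(10)
mapping = np.random.randint(0, 4, size=10)
lam = np.full(10, 0.5)
grad_M = accumulate_roast_grad(grad_theta, mapping, lam, mem_size=4)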
4.4 IMPLEMENTATION OF ROAST-MM

The high-performance community has heavily investigated fast implementations of the General Matrix Multiplication (GEMM) kernel, a fundamental operation in many computational workloads, including deep learning. Optimized implementations of GEMM kernels are available in vendor libraries such as cuBLAS (NVIDIA Corporation, 2022a) and CUTLASS (NVIDIA Corporation, 2022b). Unfortunately, these implementations do not support custom tile-loading operations, which are key to ROAST-MM. To implement ROAST-MM to a reasonable level of efficiency, we used Triton (Tillet et al., 2019): an intermediate language for tiled neural network computations. Triton abstracts away shared-memory management, which makes it helpful for customizing tiled operations with high efficiency. In our implementation of ROAST-MM, the optimal size of the coalesced tiles is a parameter that depends on the shape of the weight matrix. Therefore, different tile sizes can lead to different parallelism, occupancy, and shared-memory efficiency, resulting in different execution times. We autotune this parameter to obtain the best performance for particular matrix shapes. We propose two strategies for autotuning each ROAST-MM layer: (1) optimize the inference workload by autotuning the forward kernel and sharing the tile size with the backward kernels; (2) optimize the training workload by autotuning the forward and backward kernels together. An extensive evaluation of this kernel is presented in Appendix C.2.

5 FEATURE HASHING QUALITY: GLOBAL MEMORY SHARING ADVANTAGE OVER LOCAL MEMORY SHARING

We can consider model compression as dimensionality reduction of a parameter vector (a one-dimensional vector of all parameters in a model) of size n into a vector of size |M| = m. The quality of inner-product preservation is used as a metric to measure the quality of dimensionality reduction. In terms of dimensionality reduction, ROAST uses ROBE hashing, which shows that chunk-based hashing is theoretically better than hashing individual elements. In this section, we compare ROAST’s GMS proposal against HashedNet’s LMS using a chunk size of one. Consider two parameter vectors x, y ∈ R^n; we are interested in how the inner product of the parameter vectors is preserved under hashing. Let x = [x_1, x_2, ..., x_k] and y = [y_1, y_2, ..., y_k] be composed of k vectors of sizes n_1, n_2, ..., n_k, where [] denotes concatenation. In LMS, let each piece map to a memory of size f_i m, where Σ_i f_i = 1. The estimated inner product with GMS is

\hat{\langle x, y \rangle}_{G,m} = \sum_{j=1}^{m} \Big( \sum_{i=1}^{n} \mathbb{1}[h(i)=j]\, g(i)\, x[i] \Big) \Big( \sum_{i=1}^{n} \mathbb{1}[h(i)=j]\, g(i)\, y[i] \Big)    (3)

The estimated inner product with LMS can be written as

\hat{\langle x, y \rangle}_{L,m,\vec{f}} = \sum_{l=1}^{k} \sum_{j=1}^{f_l m} \Big( \sum_{i=1}^{n_l} \mathbb{1}[h(i)=j]\, g(i)\, x_l[i] \Big) \Big( \sum_{i=1}^{n_l} \mathbb{1}[h(i)=j]\, g(i)\, y_l[i] \Big) = \sum_{l=1}^{k} \hat{\langle x_l, y_l \rangle}_{G, f_l m}    (4)

Theorem 1 Let x, y ∈ R^n be composed of k vectors x = [x_1, x_2, ..., x_k] and y = [y_1, y_2, ..., y_k]. Then the inner product estimates under global and local weight sharing are unbiased:

\mathbb{E}\big(\hat{\langle x, y \rangle}_{G,m}\big) = \langle x, y \rangle, \qquad \mathbb{E}\big(\hat{\langle x, y \rangle}_{L,m,\vec{f}}\big) = \langle x, y \rangle    (5)

The variances of the inner product estimates can be written as

V_G\big(\hat{\langle x, y \rangle}\big) = \sum_{i} f_i V_i + \frac{1}{m} \sum_{i \ne j} \Big( \|x_i\|^2 \|y_j\|^2 + \langle x_i, y_i \rangle \langle x_j, y_j \rangle \Big)    (6)

V_L\big(\hat{\langle x, y \rangle}\big) = \sum_{i} V_i    (7)

where

V_l = \frac{1}{f_l}\,\frac{1}{m} \Big( \sum_{i \ne j} a_i^2 b_j^2 + \sum_{i \ne j} a_i b_i a_j b_j \Big), \quad \text{where } x_l = (a_1, a_2, \dots, a_{n_l}) \text{ and } y_l = (b_1, b_2, \dots, b_{n_l})    (8)

Here V_L is the local memory sharing variance and V_G is the global memory sharing variance.

Intuition: The two terms in V_G can be understood as follows. The first term is the local variance with the individual terms reduced by a factor of f_i. This is because each piece of the vector is distributed in a memory that is 1/f_i× larger. However, in GMS there is a possibility of more collisions across pieces, which leads to the second term in V_G. Note that, for given x, y and a finite value of m, V_G is always bounded. At the same time, V_L is unbounded due to the 0 < f_i < 1 in the denominator. So if the number of pieces increases, or a particular f_i grows smaller, V_L increases. While we cannot prove that V_G is strictly less than V_L, we can investigate the equations under some assumptions on the data. Practically, each piece of the parameter vector is a computational block such as a matrix for multiplication or an embedding table lookup. These blocks are initialized at a scale proportional to the square root of their size, so the norms of these vectors are similar. Let us assume the norm of each piece to be √α. Also, let us assume that, over random data distributions of x and y, all the inner products are β in expectation. Then,

V_G \approx \frac{k^2}{m}(\alpha^2 + \beta^2), \qquad V_L \approx \frac{1}{m}(\alpha^2 + \beta^2)\Big(\frac{1}{f_1} + \frac{1}{f_2} + \dots + \frac{1}{f_k}\Big) \ge \frac{1}{m}(\alpha^2 + \beta^2)\,\frac{k^2}{\sum_i f_i} = V_G    (9)

Thus, V_L is greater than V_G, and it can be much greater depending on the exact values of the f_i. The proof of the theorem and other details are presented in Appendix B.2.
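The gap between V_L and V_G can also be checked numerically. The sketch below is our own illustrative simulation, not part of the paper: it estimates the inner product of two piecewise parameter vectors under global and local hashing (chunk size one, sign hashes) and compares the empirical variances over many random mappings.

import numpy as np

rng = np.random.default_rng(0)

def hashed_inner_product(x, y, m):
    # Feature-hashing estimator of <x, y> with memory size m (chunk size 1).
    h = rng.integers(0, m, size=x.size)        # random bucket per coordinate
    g = rng.choice([-1.0, 1.0], size=x.size)   # random sign per coordinate
    sx = np.bincount(h, weights=g * x, minlength=m)
    sy = np.bincount(h, weights=g * y, minlength=m)
    return float(sx @ sy)

def gms_estimate(x_pieces, y_pieces, m):
    # One shared memory for the whole concatenated vector.
    return hashed_inner_product(np.concatenate(x_pieces), np.concatenate(y_pieces), m)

def lms_estimate(x_pieces, y_pieces, m, fracs):
    # Each piece gets its own memory of size f_l * m; piece estimates are summed.
    return sum(hashed_inner_product(px, py, max(1, int(f * m)))
               for px, py, f in zip(x_pieces, y_pieces, fracs))

sizes, fracs, m, trials = [2000, 2000, 100], [0.45, 0.45, 0.10], 512, 2000
x = [rng.normal(0, s ** -0.5, size=s) for s in sizes]   # Xavier-like scale per piece
y = [rng.normal(0, s ** -0.5, size=s) for s in sizes]
g_est = [gms_estimate(x, y, m) for _ in range(trials)]
l_est = [lms_estimate(x, y, m, fracs) for _ in range(trials)]
print("V_G ~", np.var(g_est), " V_L ~", np.var(l_est))   # V_L is typically larger

With one piece receiving only a small fraction of the memory (f = 0.10 here), the 1/f_l term drives V_L above V_G, in line with Equation (9).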
6 EXPERIMENTAL EVALUATION

Setup: In this section, we evaluate the ROAST compression approach on two types of tasks. The details of the tasks, datasets and models used are listed in Table 2. For image-classification tasks, we choose the cifar-10 dataset and the leader of the DawnBenchmark (Coleman et al., 2017) for cifar-10, a ResNet-9 model (https://github.com/apple/ml-cifar-10-faster). The target accuracy for this benchmark is 94%, and hence we perform hyper-parameter tuning to get a test accuracy of ≥ 94%. We stop the tuning once we reach this accuracy, and hence the results for CIFAR-10 should be compared w.r.t. whether they cross 94.0%. For NLP tasks, we use the two largest available text-classification datasets on huggingface (HuggingFace, 2022). For the model, we use the BERT-x-y (x: number of layers, y: number of attention heads) architecture with a classification head. On both NLP datasets, using models larger than BERT-2-2 led to similar test accuracy, and hence we choose BERT-2-2 as the base model.

Table 2: Experimental settings: the datasets and models used in the experiments.

Domain | Task                  | Dataset         | #Samples (train/test) | Model    | Model size
NLP    | text-classification  | amazon-polarity | 3.6M / 0.4M           | BERT-2-2 | 37.43M
NLP    | text-classification  | yelp-polarity   | 560K / 38K            | BERT-2-2 | 37.43M
CV     | image-classification | cifar10         | 50K / 10K             | ResNet   | 6.5M

The other hyperparameters for the NLP tasks are {batch size 64 for amazon-polarity and 32 for yelp-polarity, learning rate 2e-5, AdamW optimizer, linear scheduler}.

ROAST for compression: As we can see in Tables 3 and 4, with ROAST we can achieve a similar quality of model in much smaller space. Specifically, for text-classification, we see that we can train and deploy the BERT-2-2 model in 100× lesser space. Similarly, we can train and deploy the ResNet model in 10× lesser space for the same target test accuracy. Thus, ROAST is an effective method for training and deploying models on memory-constrained systems.

Managing excess parameters: It is clear from Table 3 that the BERT-base architecture is highly over-parameterized for the tasks under consideration. However, even in this case, ROAST can be used to control the memory footprint while maintaining the functional form of the larger model.

Pruning and ROAST: We perform unstructured iterative-magnitude pruning (Han et al., 2016b) on the ResNet model and find that pruning gives up to 100× compression. Note, however, that pruning requires training the model with the memory needed to store the original model, whereas compression with ROAST means using less memory even for training. Additionally, pruning can be used in conjunction with ROAST to obtain smaller models using smaller memory. In Table 4, we see that we can prune 90% of the weights in a 10× compressed ROAST array and still achieve the same quality.

Local vs. global memory sharing: In Figure 3, we show that the quality of the model when using global memory sharing is, indeed, better than with local memory sharing. This supports our theoretical observation about these memory sharing schemes.

Efficiency of ROAST operators as compared to HashedNet: Table 7 shows the inference performance of a simple model using ROAST-MM for matrix multiplication on compressed memory. Our model linearly transforms the input vector and computes its norm. We optimized the ROAST-MM kernel for this experiment using the inference-optimal strategy. We make the following observations from Table 7: (1) ROAST-MM outperforms the HashedNet kernel consistently across the different multiplication workloads. Averaged over different workloads, ROAST-MM is up to 45× faster than HashedNet. (2) ROAST-MM is 1.34× slower than PyTorch-MM. This is expected, as PyTorch-MM uses extremely optimized libraries for matrix multiplication and the ROAST-MM implementation is comparatively under-optimized. It is still interesting to note that ROAST-MM scales better with increasing workload than PyTorch-MM. As the workload increases 1600× (from 512×512 to 20480×20480), PyTorch-MM takes 39× the time and HashedNet takes 106× the time, whereas ROAST-MM only takes around 16× the time. We present more detailed measurements across different optimizers for the training-optimal strategy in Appendix C.2.

7 CONCLUSION

Traditionally, model compression has focused on memory reduction during inference. However, model memory during training is also an important consideration. While some of the existing methods such as HashedNet and low-rank factorisation provide model-size reduction during training, these methods either do not provide cache-efficient model recovery or have an implicit cap on memory reduction. ROAST overcomes these obstacles and provides cache-efficient, arbitrary control over the memory footprint of the model during training and inference. ROAST essentially provides a practical parameter sharing method.
ROAST is theoretically better than HashedNet in terms of dimensionality reduction due to block-based hashing and global memory sharing. We empirically validate the efficiency advantage of ROAST over HashedNet and show that we can achieve high compression with ROAST.

A ADDITIONAL DATA FOR REVIEWERS - PARTS OF WHICH WILL GO IN THE MAIN PAPER IN THE FINAL VERSION

A.1 EXTENDED TABLE 3 WITH EPOCH INFORMATION AND MORE BASELINES

We add a lot of information and new results to the table. Specifically,
• We add the GMS and LMS results to the table separately, so that readers can get an idea of each method on the task.
• We add unstructured pruning (the best pruning approach quality-wise) results for the NLP tasks as well. The pruning results are obtained in the following manner. In the full-9-1 schedule, we start from the fully trained model, perform iterative pruning during the next 9 epochs, and then tune the final pruned model for 1 more epoch. In the full-1-9 schedule, we again start from the fully trained model, perform pruning in the next 1 epoch, and then tune the model further for 9 epochs. We note the best achieved accuracy with the final model structure and the epoch at which this accuracy is reached.
• For each result, we note the epoch at which the best accuracy was reached.
• We append an additional small table which notes the number of epochs required to reach a target accuracy, to compare the convergence of the different models.

We make the following observations.
• GMS reaches better accuracy than LMS for the same amount of compression on both datasets. Additionally, GMS reaches the same target accuracy faster than LMS.
• The ROAST approach is more effective than pruning approaches in the NLP text-classification tasks for architectures like BERT.
• It is interesting that GMS-10× converges faster than the original model on both datasets. We leave investigating this as future work.

A.2 GMS VS LMS FOR YELP

As can be seen from the two plots in Figure 4, it is clear that GMS performs better than LMS in both compression settings.

B THEORY

ROAST is a generalized model compression approach which performs operation-specific, system-friendly lookup and global memory sharing. This raises some interesting theoretical questions.

B.1 BACKWARD PASS FOR A MODEL SHARING WEIGHTS ACROSS DIFFERENT COMPONENTS

A general function sharing a weight, say x, across different components can be written as f(x, g(x)). The interpretation is that x is used in g(.) and then used again further ahead in f. (In the case of an MLP, we can think of x being used in multiple layers.) Let f(g_1, g_2), where both g_1 and g_2 are functions of x. Then

\frac{\partial f(g_1, g_2)}{\partial x} = \frac{\partial f(g_1, g_2)}{\partial g_1} \cdot \frac{\partial g_1}{\partial x} + \frac{\partial f(g_1, g_2)}{\partial g_2} \cdot \frac{\partial g_2}{\partial x}    (10)

With g_1 = x and g_2 = g(x),

\frac{\partial f(g_1, g_2)}{\partial x} = \frac{\partial f(x, g(y))}{\partial x}\Big|_{y=x} + \frac{\partial f(y, g(x))}{\partial g(x)} \cdot \frac{\partial g(x)}{\partial x}\Big|_{y=x}    (11)

\frac{\partial f(g_1, g_2)}{\partial x} = \frac{\partial f(x, g(y))}{\partial x}\Big|_{y=x} + \frac{\partial f(y, g(x))}{\partial x}\Big|_{y=x}    (12)

Renaming,

\frac{\partial f(x, g(x))}{\partial x} = \frac{\partial f(z, g(y))}{\partial z}\Big|_{y=x, z=x} + \frac{\partial f(z, g(y))}{\partial y}\Big|_{y=x, z=x}    (13)

Thus, we can essentially treat each place where x appears as a new variable, and the gradient w.r.t. x is just the sum of the partial derivatives of the function w.r.t. these new variables. This makes it easy to implement in the backward pass. In order to make sure that the memory utilization in the backward pass is not of the order of the recovered model size, we do not use the auto-differentiation of TensorFlow/PyTorch; we implement our own backward pass, which can be found in the code.
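The identity in Equation (13) is easy to sanity-check with a tiny autograd example (ours, for illustration only): for f(x, g(x)), the gradient with respect to the shared x equals the sum of the partial derivatives taken at each place x appears.

import torch

x = torch.tensor(1.5, requires_grad=True)

def g(t):
    return t ** 2           # a sub-module reusing the shared weight

def f(a, b):
    return 3.0 * a + a * b  # the outer function using x directly and via g(x)

# Gradient through the shared weight x.
f(x, g(x)).backward()

# Sum of partials, with the two occurrences treated as separate variables.
z = torch.tensor(1.5, requires_grad=True)   # x as used directly in f
y = torch.tensor(1.5, requires_grad=True)   # x as fed into g
f(z, g(y)).backward()

print(x.grad)             # d/dx [3x + x*g(x)] = 3 + 3x^2 = 9.75
print(z.grad + y.grad)    # (3 + x^2) + 2x^2      = 9.75 as well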
B.2 GLOBAL FEATURE HASHING VS LOCAL FEATURE HASHING

We can consider model compression techniques as dimensionality reduction of the parameter vector (a one-dimensional vector of all parameters in a model) of size n into a vector of size |M| = m. The quality of inner-product preservation is used as a metric to measure the quality of dimensionality reduction. In terms of dimensionality reduction, ROAST uses ROBE hashing (Desai et al., 2022), which showed that chunk-based hashing is theoretically better than hashing individual elements. In this section, we analyse the GMS proposal of ROAST against the LMS of HashedNet. For the purpose of this comparison we assume a chunk size of 1. Consider two parameter vectors x, y ∈ R^n. We are interested in how the inner product between these parameter vectors is preserved under hashing. Let x = [x_1, x_2, ..., x_k] and y = [y_1, y_2, ..., y_k] be composed of k pieces of sizes n_1, n_2, ..., n_k. In LMS, let each piece be mapped into a memory of size f_l m, where Σ_l f_l = 1. The estimator of the inner product in the GMS case can be written as

\hat{\langle x, y \rangle}_{G,m} = \sum_{j=1}^{m} \Big( \sum_{i=1}^{n} \mathbb{1}[h(i)=j]\, g(i)\, x[i] \Big) \Big( \sum_{i=1}^{n} \mathbb{1}[h(i)=j]\, g(i)\, y[i] \Big)    (14)

The estimate of the inner product with LMS can be written as

\hat{\langle x, y \rangle}_{L,m,\vec{f}} = \sum_{l=1}^{k} \sum_{j=1}^{f_l m} \Big( \sum_{i=1}^{n_l} \mathbb{1}[h(i)=j]\, g(i)\, x_l[i] \Big) \Big( \sum_{i=1}^{n_l} \mathbb{1}[h(i)=j]\, g(i)\, y_l[i] \Big)    (15)

Note that

\hat{\langle x, y \rangle}_{L,m,\vec{f}} = \sum_{l=1}^{k} \hat{\langle x_l, y_l \rangle}_{G, f_l m}    (16)

The GMS estimator is the standard feature hashing estimator, and the LMS estimator is essentially a sum of GMS estimators, one per piece. Since E[g(i)] = 0, it is easy to check by linearity of expectation that both estimators are unbiased.

Expectation. The suffix L refers to local hashing and G refers to global hashing.

E_G = \mathbb{E}\big(\hat{\langle x, y \rangle}_{G,m}\big) = \langle x, y \rangle    (17)

E_L = \mathbb{E}\big(\hat{\langle x, y \rangle}_{L,m,\vec{f}}\big) = \langle x, y \rangle    (18)

Let us now look at the variance, using the following notation:
• V_G = V(\hat{\langle x, y \rangle}_{G,m}): GMS variance of the entire vectors.
• V_L = V(\hat{\langle x, y \rangle}_{L,m,\vec{f}}): LMS variance of the entire vectors.
• V_l = V(\hat{\langle x_l, y_l \rangle}_{G, f_l m}): variance of each piece.

We can write V_l as follows. The following equation is easy to derive and can be found in Lemma 2 of Weinberger et al. (2009):

V_l = \frac{1}{f_l}\,\frac{1}{m} \Big( \sum_{i \ne j} a_i^2 b_j^2 + \sum_{i \ne j} a_i b_i a_j b_j \Big), \quad \text{where } x_l = (a_1, a_2, \dots, a_{n_l}) \text{ and } y_l = (b_1, b_2, \dots, b_{n_l})    (19)

As each piece is hashed independently in LMS, we have

V_L = \sum_{l=1}^{k} V_l    (20)

Let us now look at V_G. Again, using Lemma 2 from Weinberger et al. (2009),

V_G = \frac{1}{m} \Big( \sum_{i \ne j} x_i^2 y_j^2 + \sum_{i \ne j} x_i y_i x_j y_j \Big)    (21)

The expression can be split into terms that belong to the same piece and terms that span two different pieces:

V_G = \frac{1}{m} \sum_{l=1}^{k} \Big( \sum_{i \ne j \in \text{piece-}l} x_i^2 y_j^2 + \sum_{i \ne j \in \text{piece-}l} x_i y_i x_j y_j \Big) + \frac{1}{m} \sum_{l_1 \ne l_2} \Big( \sum_{i \in \text{piece-}l_1,\, j \in \text{piece-}l_2} x_i^2 y_j^2 + \sum_{i \in \text{piece-}l_1,\, j \in \text{piece-}l_2} x_i y_i x_j y_j \Big)

V_G = \sum_{l=1}^{k} f_l V_l + \frac{1}{m} \sum_{l_1=1}^{k} \sum_{l_2=1, l_2 \ne l_1}^{k} \Big( \|x_{l_1}\|_2^2 \|y_{l_2}\|_2^2 + \langle x_{l_1}, y_{l_1} \rangle \langle x_{l_2}, y_{l_2} \rangle \Big)    (22)

Observation 1: In V_L we can see that there are terms with 1/f_l, which makes it unbounded. This makes sense: if the number of pieces grows very large, many compression settings will not work, for example when the number of pieces exceeds |M|. It also affects V_L strongly when some f_l is very small, which can often be the case. For example, embedding tables in the DLRM model are generally much larger than the matrix multiplication (MLP) modules, which can make f ≈ 0.001 for the MLP components.

Observation 2: Practically, we can assume each piece, no matter the size of the vector, to have the same norm. The reason lies in initialization: according to Xavier's initialization, the weights of a particular node are initialized with norm 1. So for now let us assume the more practical case of all norms being equal to √α.
Also, in order to make the comparison we need to consider some average case over the data. So let us assume that, under an independent randomized data assumption, the expected value of all the inner products is β. With this, in expectation over randomized data, we have

V_G = \sum_{l} f_l V_l + \frac{k(k-1)}{m} (\alpha^2 + \beta^2)    (23)

Now note that

V_l = \frac{1}{f_l}\,\frac{1}{m} \Big( \sum_{i \ne j} a_i^2 b_j^2 + \sum_{i \ne j} a_i b_i a_j b_j \Big), \quad \text{where } x_l = (a_1, a_2, \dots, a_{n_l}) \text{ and } y_l = (b_1, b_2, \dots, b_{n_l})    (24)

Dropping the subscript l below,

V_l = \frac{1}{f_l}\,\frac{1}{m} \Big( \|x\|_2^2 \|y\|_2^2 + \langle x, y \rangle^2 - 2 \sum_{i} x_i^2 y_i^2 \Big)    (25)

V_l = \frac{1}{f_l}\,\frac{1}{m} \Big( (\alpha^2 + \beta^2) - 2 \sum_{i} x_i^2 y_i^2 \Big)    (26)

Note that for each negative term there are n_l positive terms. To simplify, we disregard this term in the equation above. This is a practical approximation, made only to get a sense of the relation between V_L and V_G.

V_L - V_G = \sum_l V_l - \sum_l f_l V_l - \frac{k(k-1)}{m} (\alpha^2 + \beta^2)

V_L - V_G = \sum_l \frac{1}{m} \Big( \frac{1}{f_l} - 1 \Big) (\alpha^2 + \beta^2) - \frac{k(k-1)}{m} (\alpha^2 + \beta^2)

V_L - V_G \ge \frac{k(k-1)}{m} (\alpha^2 + \beta^2) - \frac{k(k-1)}{m} (\alpha^2 + \beta^2)

V_L - V_G \ge 0

where the inequality uses Σ_l 1/f_l ≥ k² / Σ_l f_l = k² (the AM-HM inequality), so Σ_l (1/f_l − 1) ≥ k(k−1). Note that we ignored a term which reduces V_L a bit. Letting that error be ϵ,

V_L - V_G \ge -\epsilon    (27)

The above shows that even in the best case V_G might only be slightly more than V_L. In the general case, where the harmonic mean is much worse than the arithmetic mean, V_L will be much larger, depending on the exact values of the f_l.

C ROAST-MM LATENCY MEASUREMENTS

C.1 INFERENCE OPTIMIZATION

C.2 TRAINING OPTIMIZATION

See Tables 8, 9, 10 and 11.

D VARIANCE IN QUALITY OVER DIFFERENT RUNS

Figure 5 shows three runs of the ROASTed BERT and BERT models.
1. What is the focus of the paper regarding memory usage reduction for neural networks?
2. What are the strengths of the proposed approach in terms of organization and experimental results?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the effectiveness and efficiency of the proposed method in terms of computation, memory access, and energy consumption?
Summary Of The Paper
Weight sharing is one of the effective ways to compress a model. This paper proposes a new hashing method to lower the memory usage of neural network models. Compared with the previous hashing method, which uses local memory sharing, this paper proposes a new idea called global memory sharing, which aims to lower the memory usage.

Strengths And Weaknesses
Strengths: Overall, this paper is well organised. The experimental results demonstrate the efficiency of the method.
Weaknesses: 1. Weight sharing can lower the memory usage but cannot reduce the number of operations. As the authors claim, RAM access is around 100X slower than computation. This is true. However, from the hardware perspective, an efficient way to lower the memory access delay is a cache. The real memory access delay can be much shorter, and can be almost the same as the computation time. Besides, as the computation of neural networks is regular, we can use a pipelined design to further eliminate the negative impact of memory access. Besides, with a cache design, we can substantially reduce the energy consumption of memory access. I understand this is a common problem of memory sharing, not a problem of this paper only. However, it would be better to do a more comprehensive comparison with other model compression methods, not only on model size but also on latency and energy consumption. The comparison is not enough: in terms of weight sharing, please do a more comprehensive comparison with other state-of-the-art weight sharing methods. Is Table 5 compared under the same accuracy?

Clarity, Quality, Novelty And Reproducibility
The quality of the paper is good. The idea is clearly presented.
ICLR
Title
Hardware-aware compression with Random Operation Access Specific Tile (ROAST) hashing

Abstract
Advancements in deep learning are often associated with increasing model sizes. Training and deploying large models require sophisticated hardware and incur significantly higher costs. Thus, model compression is a widely explored approach to solving the problem. However, SOTA techniques fall short in one or more desirable aspects of compression: for instance, pruning does not reduce memory for training, quantization can only provide up to 32x compression, HashedNet is cache-inefficient, etc. This paper proposes a model-agnostic, cache-friendly, and hardware-aware model compression approach: Random Operation Access Specific Tile (ROAST) hashing. ROAST collapses the parameters by clubbing them through a lightweight mapping. While clubbing these parameters, ROAST utilizes cache hierarchies by aligning the memory access pattern with the parameter access pattern. ROAST is up to ∼25× faster to train and ∼50× faster to infer than the popular parameter sharing method HashedNet. Additionally, ROAST introduces global weight sharing, which is empirically and theoretically superior to local weight sharing in HashedNet and can be of independent interest. With ROAST, we can efficiently train and deploy models using a much smaller memory footprint (∼10-100× lesser) in text and image classification tasks.

1 INTRODUCTION
Models across different domains, including Natural Language Processing (NLP), Computer Vision (CV), and Information Retrieval (IR), are exploding in size. State-of-the-art (SOTA) results in these domains are being obtained at a disproportionate increase in model sizes, questioning the sustainability of deep learning (Thompson et al., 2021). For instance, SOTA architectures for vision include VGG (Simonyan & Zisserman, 2014) (150M params, 0.6GB) and ViT (Dosovitskiy et al., 2020) (up to 304M params, 1.2GB). Additionally, SOTA NLP models range from BERT (Devlin et al., 2018) (340M params, 1.36GB) to GShard (Lepikhin et al., 2020) (600B params, 2.4TB). Similarly, industrial-scale recommendation models such as DLRM (Naumov et al., 2019; Mudigere et al., 2021) can have up to 10s of trillions of parameters (50TB). Large models, such as the above, come with various challenges. They need high-end distributed hardware for training and deployment, incurring higher costs. Additionally, the required model-parallel setup has higher inference and training-iteration latency for these models. Model compression is a research direction that aims to resolve these issues by reducing the memory footprint of the model. Compression of the order of 100× can eliminate the need for a model-parallel setup for many SOTA models like GPT (Radford et al., 2019), GShard (Lepikhin et al., 2020) and DLRM (Naumov et al., 2019), which can now fit on a single GPU. Furthermore, compressing large models to small sizes comes with immediate latency benefits. For example, Desai et al. (2022) showed that by compressing the DLRM model 1000× and using 1 GPU instead of 8 GPUs, we could get 3× faster inference at a lower cost. Also, in the case of CPU inference, a smaller model is efficient. For example, Diamos et al. (2016) showed that if a single RNN layer can fit in registers, it leads to 146× faster inference. Thus, the ML community has heavily invested in model compression.
A variety of model compression paradigms now exist in the literature, such as pruning (Han et al., 2016b), quantisation (Han et al., 2016b), knowledge distillation (Buciluǎ et al., 2006), parameter sharing (Chen et al., 2015; Desai et al., 2022), and low-rank decomposition (Hrinchuk et al., 2020; Yin et al., 2021). Table 1 compares these approaches on three considerations: (1) whether the model memory is reduced for training, (2) whether the memory size can be controlled independently of the model, and (3) whether the approach considers the underlying memory hierarchies and is cache-efficient.

Table 1: Various compression techniques on three aspects: (1) memory reduction during training (apart from inference), (2) arbitrary control over memory, (3) hardware awareness / cache-efficiency. * Some versions of pruning are tuned to the underlying hardware and are cache-efficient.

                                   | Training memory reduction | Arbitrary control on memory | Cache efficient
Pruning                            | No                        | No                          | No*
Low-rank decomposition             | Yes                       | No                          | Yes
Low-precision                      | Yes                       | No                          | Yes
Quantization aware training (QAT)  | No                        | No                          | N.A.
Parameter sharing - HashedNet      | Yes                       | Yes                         | No
Knowledge Distillation             | No                        | No                          | N.A.
ROAST (ours)                       | Yes                       | Yes                         | Yes

We want the techniques to fare positively in these three aspects. However, techniques like pruning, QAT, and knowledge distillation require us to use the memory of the original model while training and only reduce inference-time memory. Additionally, there are limits to the compression obtained by quantization and pruning depending on which component we are compressing. For example, we cannot prune an embedding table (N × d) more than d× as we do not want any embedding vector to have all zeros. HashedNet provides memory reduction during training and arbitrary control over memory. However, the look-ups in HashedNet are randomly and independently distributed across the total memory. This makes HashedNet cache-inefficient. This paper presents Random Operation Access Specific Tile (ROAST) hashing, a parameter-sharing approach that provides cache-efficiency and arbitrary control over memory during training as well as inference. ROAST does not change the model’s functional form and can be applied to all computational modules of a model, such as MLP layers, attention blocks, convolution layers, and embedding tables. ROAST is hardware aware: it proposes a tile-based hashing scheme tuned to the memory access pattern of the algorithmic implementation of the operation being performed. ROAST uses this hash function to recover blocks of the model from a single array of parameters - the ROAST array. ROAST is superior to HashedNet due to two factors: (1) Unlike HashedNet, ROAST proposes global weight-sharing, where parameters are shared across the different computational modules. As we shall see, global weight-sharing is empirically and theoretically superior to local weight-sharing and might be of independent interest. (2) ROAST uses block-based hashing, which is theoretically superior to the count-sketch hashing used in HashedNet (Desai et al., 2022). We show that with ROAST, we can train a BERT-2-2 (2 layers, 2 attention heads) model on the largest available text-classification datasets (amazon-polarity, yelp-polarity) using 100× lesser memory without loss of quality. In cases where the model is overly parameterized, like using BERT-12-12 in the text classification task above, we can still obtain similar compression of 100×. Thus it is a good alternative to neural architecture search. The results extend to CV datasets as well.
Specifically, we can train a ResNet-9 model with 10× lesser memory for the CIFAR10 dataset. Importantly, we show that ROAST, due to its hardware-aware nature, is significantly faster than HashedNet: ROAST is up to ∼ 25× faster to train and ∼ 50× faster to infer than HashedNet for large matrix multiplications. Our current implementation of ROAST matrix multiplication is about 1.34× slower than full matrix multiplication in pytorch. This is a testament to how optimized CUBLAS libraries are. We believe, with enough investigation, we can make ROAST-MM comparably efficient to pytorch-MM as well. Limitations of ROAST: One of the goals of model compression, apart from reducing memory usage, is to reduce computational workload for deployment. ROAST, currently, is not devised to decrease computation; it only decreases the memory footprint of a model. Reducing computation with a small memory is left for future work. However, it should be noted that reducing the memory footprint can significantly reduce computation latency and power consumption. As shown in (Han et al., 2016a), accessing memory from RAM is 6400× costlier than 32bit INT ADD and 128× more expensive than on-chip SRAM access in terms of energy consumption. Additionally, RAM access generally is ∼100× slower than a floating-point operation. So, this model compression with ROAST, in our opinion, is an important step for efficient training and inference. 2 RELATED WORK This section briefly reviews the rich history of model compression paradigms. Model compression can be generally classified into two categories: (1) Compressing a learned model and (2) Learning a compressed model. ROAST lies in the second category. Compressing learned models: 1) Pruning: Pruning (Zhu & Gupta, 2017) is a technique to remove parts of a large model, including weights, blocks, and layers, to make the model lighter. Pruning can be performed as a one-time operation or gradually interspersed with training. 2) Quantization: Quantization can involve reducing the precision of the parameters of a model. Mixed precision models are sometimes used where different precision is used with different weights. KMeans quantization is another type of quantization, where models’ weights are clustered using KMeans, and each cluster’s centroid is used for all cluster weights. Model compression, in this case, is achieved by reducing the number of distinct weights. 3) Knowledge distillation: Knowledge distillation (Buciluǎ et al., 2006) is widely applied in model compression with a focus on distilled architectures. Knowledge distillation involves training a teacher model (large original model); then, a student model is trained using the logits of the teacher model. Empirically, the student model trained under this paradigm generalizes better than the student model trained standalone. Many variations exist on this basic idea of knowledge distillation. While these techniques have successfully reduced memory for inference, one of the drawbacks of this line of compression is that the memory usage while training the model is not reduced. ROAST, however, provides a solution that reduces the model’s memory during training and inference. Learning compressed models 1) Low-rank decomposition: In this method, matrices in the model are decomposed into a product of two low-rank matrices, thus saving memory per matrix. 
A generalization of low-rank decomposition to tensors is called tensor-train decomposition. 2) Parameter sharing: Parameter sharing approaches such as HashedNet (Chen et al., 2015) are generally used for matrix compression. These approaches randomly share weights among different parameters, reducing the model’s memory usage. This line of research provides model reduction even during training. However, low-rank decomposition does not offer arbitrary control over the memory footprint, and HashedNets are inefficient due to heavy cache-thrashing caused by non-local lookups. Conversely, ROAST is a model-agnostic parameter-sharing approach that can arbitrarily reduce the model size without affecting the functional form while keeping the model recovery efficient.

3 BACKGROUND

HashedNet: Compressing MLP matrices. Previous work (Chen et al., 2015) introduced a weight sharing method to compress the weight matrices of MLP models. They map each matrix parameter to a shared parameter array using a random hash function, xxhash (Collet, 2016). In the forward pass, this mapping is used to recover a weight matrix and perform matrix multiplication for each MLP layer. In the backward pass, the gradients of each weight matrix are mapped to the shared compressed array and aggregated using the sum operation. It should also be noted that each MLP layer uses an independent array of parameters. One of the main concerns with HashedNet is that memory accesses on the compressed array are non-coalesced. Thus, fetching a compressed matrix via HashedNet requires significantly more memory read transactions than fetching an uncompressed matrix, for which memory accesses can coalesce. Our evaluation shows that uncoalesced memory accesses lead to high latency, especially for large matrices.

Random Block Offset Embedding Array (ROBE) for embedding compression. In ROBE (Desai et al., 2022), the embedding table is generated using an array of parameters. The embedding of a token is obtained by drawing chunks of the embedding from the ROBE array. The locations of the chunks are decided randomly via light-weight universal hash functions. The authors of ROBE showed that ROBE hashing is theoretically superior to the feature hashing used in HashedNet. Also, the use of chunks causes memory accesses to coalesce, making embedding lookup efficient. ROAST proposes a component-agnostic, global parameter sharing approach that tunes the hashing function to match the memory accesses of the algorithmic implementation of the operation on the available hardware, thus giving a superior parameter sharing scheme.

4 RANDOM OPERATION ACCESS SPECIFIC TILE (ROAST) HASHING

Let M be the compressed memory from which parameters will be used, f be the model or the function that we want to run using M, and W be the recovered weights used in f. f can be considered as a composition of operations {Oi(Xi, Wi)}. By operation, we mean the smaller functions that, when composed together, give us the model f. Here Xi is the input to the operation, and Wi is the weights (i.e., learnable parameters) that Oi uses. Generally, the Wi are distinct and do not share parameters. Random Operation Access Specific Tile (ROAST) hashing is a way to perform efficient model-agnostic parameter sharing-based compression. The following distinct aspects of ROAST set it apart from previous parameter sharing-based methods. (1) ROAST is a generic technique applicable to all computational modules.
(2) ROAST proposes to tune its mapping from Wi toM in a way that coalesces memory accesses according to how memory is accessed during the operation. This makes ROAST efficient and up to 45× faster than competing approaches like HashedNet. (3) ROAST proposes Global Memory Sharing (GMS) as opposed to Local Memory Sharing (LMS) used in HashedNet. We show GMS to be theoretically and empirically superior to LMS in Section 5 and 6. 4.1 ROAST OPERATIONS IN DEEP LEARNING Any model f can be considered as a composition of smaller functions {Oi(Xi,Wi)}. There are multiple ways to perform this decomposition depending upon what we consider a valid (or small enough) operation. In ROAST, we consider three types of operations: (1) L(l,W ), lookup that accessesM and recovers lth element of W , say w. By element, we mean some particular part of W that is identifiable by an integer. An example with embedding tables is given in figure 1. (2) MM(X,W ), matrix multiplication that multiplies X with W and returns the result, and (3) N(X), various operations that only act on the input but do not interact withM. In ROAST, in order to limit the memory usage, we make sure that L is used only on a small w and MM is performed without recovering the entire matrix. We find that most deep learning models, if not all, can be written as a composition of operations N, MM and L, where L is only applied on small parameters. Let us discuss how ROAST implements L and MM operations in the following paragraphs. Lookup (L(l,W )) We recover a parameter weight w of any shape in a row-major format. Thus, we can consider w = W (l) to be a 1D vector without loss of generality. ROAST recovers w fromM in a blocked fashion. Consider w to be composed of chunks of size Z. Each chunk c is located inM using a universal hash function h1 and is recovered from the location h1(c) inM. Let C(i) give the chunk number of index i and O(i) give the offset of i in this chunk. w[i] = λM[h1(C(i)) +O(i)] h1 : N→ {0, ..., |M| − Z} (1) The recovered W has λ as a scaling factor discussed in section 4.2. The hash function hashes to a range {0, ..., |M| − Z} to avoid overflows while reading the memory. For example, Figure 1 (right) illustrates the embedding lookup using L with chunk size of 2. ROAST uses L to implement computational modules such as embeddings, bias vectors, and so on. We generalize the embedding lookup kernel from ROBE (Desai et al., 2022) to implement our L kernel. Matrix multiplication (MM(Xi,Wi)) 2D matrix multiplication is one of the most widely used operations in deep learning. We implement our ROAST-MM kernel with parameter sharing performed in a way that the algorithm for matrix multiplication accesses coalesced pieces ofM. An efficient implementation of matrix multiplication on GPU follows a block multiplication algorithm to use the on-chip shared memory efficiently. While computing C = A × B, A, B and C are divided in tiles of size Z0 × Z1, Z1 × Z2 and Z0 × Z2 respectively. Thus, we divide our 2D weight matrix into tiles of size Z1 × Z2. The tile, (x, y), where x and y are the coordinates of the tile, is located in M in a row-major format via a universal hash function h2(x, y). Let C1(i, j) and C2(i, j) give the x-coordinate and y-coordinate of the tile to which i, j belongs. Similarly, let O1(i, j) and O2(i, j) give the x-offset and y-offset of a location (i, j) on the tile. 
Then, we use the following mapping for ROAST-MM, W [i, j] = λM[h2(C1(i, j), C2(i, j)) + Z2O1(i, j) +O2(i, j)] h2 : N2 → {0, ..., |M| − Z1Z2} Again, λ is the scaling factor discussed in section 4.2. The hash function hashes to a range {0, ..., |M| − Z1Z2} to avoid overflows while reading the chunk. Figure 1 (left) illustrates ROASTMM with a chunk size of 2× 2. The above mapping is used whenever a 2D tile is accessed in the matrix multiplication algorithm. The pseudo code for ROAST-MM is shown in algorithm 1. We talk about implementation of this kernel and its evaluation in the later part of the paper. ROAST uses ROAST-MM kernel to implement computational modules such as MLP layers, attention blocks, etc. Each module invoking ROAST kernels uses independent hash functions. Algorithm 1 ROAST-MM(I ×H ×O) Require: X ∈ RI×H ,M, λ, h : N2 → {0, ..., |M| − Z1Z2} Ensure: output = MM(X,M[h(:, :)]) value← TILE(Z0, Z2) ▷ Allocate a 2D tile of size Z0 × Z2 to accumulate results for i ∈ {0, 1, ..., ⌈I/Z0⌉ − 1} do for j ∈ {0, 1, ..., ⌈O/Z2⌉ − 1} do value[:, :]← 0 for k ∈ {0, 1, ..., ⌈H/Z1⌉ − 1} do value← value+MM(X[i : i+ Z0, k : k + Z1],M(h(k : k + Z1, j : j + Z2))) ▷ Access to the weight tile passes through the hash function end for output[i : i+ Z0, j : j + Z2]← λ ∗ value end for end for Apart from scaling each recovered parameter with module-specifc λ, we can also multiply it with another independent hash function g : Nk → {±1} (k=1 or k=2). 4.2 GLOBAL MEMORY SHARING (GMS) HashedNet uses local memory sharing (LMS), which states that each layer will have independent compressed memory. In contrast, ROAST proposes global memory sharing (GMS), wherein we share memory across modules. However, modules cannot directly use the parameters stored inM as each module’s weights requires initialization and optimization at different scales. For instance, in the Xavier’s initialization (Glorot & Bengio, 2010), weights are initialized with distribution Uniform(−1/ √ n, 1/ √ n) where n is size of the input to the module. In GMS, we must ensure that each module gets weights at the required scale. To achieve this, we first initialize the entire ROAST parameter array with values from the distribution Uniform(−1/C, 1/C) for some constant C. Then, for each module, we scale the weights retrieved from the ROAST array by a factor of λ = C/ √ n. One can understand the benefit of GMS over LMS in terms of the number of distinct functions in f that can be expressed using a fixedM. Consider a family of functions with n parameters. GMS can potentially express |M|n functions across different random mappings. In LMS, let separate parameters be of sizes n1, n2, ..nk and each of them is mapped into memoriesM1,M2, ...,Mk. Thus, n = ∑ i ni and |M| = ∑ i |Mi|. Then LMS can only express |M1|n1 |M2|n2 ....|Mk|nk different functions. Thus expressivity of LMS is strictly less than that of GMS and can be orders of magnitude less depending on exact values of ni and |Mi|. We also show that GMS is superior to LMS in terms of dimensionality reduction (feature hashing) in Section 5. Figure 2: Local memory sharing : each module compresses its parameters separately. In Global memory sharing, all parameters from accross the modules share the same memory 4.3 FORWARD AND BACKWARD PASSES Recall that in ROAST, operations are of three types L,MM and N. The forward pass proceeds by applying each operation in sequence. If an operation is of type N, we directly apply its function on the input. 
For L and MM operations, outputs are computed according to the procedure described in Section 4.1. The gradient of the loss w.r.t a weight wi inM is the λ-scaled aggregation of gradients of loss w.r.t all the parameters that map to this weight. For simplicity of notation, consider θ as the complete parameter, λ(j) as the scaling factor we use for the module that θj belongs to, and h be the mapping from θ toM. See Appendix B.1 for more details. ∇wif(w) = ∑ j,h(j)=i λ(j) ∗ ∇θjf(θ) (2) 4.4 IMPLEMENTATION OF ROAST-MM The high-performance community has heavily investigated the fast implementation of the General Matrix Multiplication (GEMM) kernel, a fundamental operation in many computational workloads, including deep learning. Optimized implementations of GEMM kernels are available in vendor libraries such as cuBLAS (NVIDIA Corporation, 2022a) and CUTLASS (NVIDIA Corporation, 2022b). Unfortunately, these implementations do not support custom tile loading operations, which is the key of ROAST-MM. To implement ROAST-MM to a reasonable level of efficiency, we used Triton (Tillet et al., 2019): an intermediate language for tiled neural network computations. Triton abstracts out the shared memory management to make it helpful in customizing tiled operations with high efficiency. In our implementation of ROAST-MM, the optimal size of coalesced tiles is a parameter that depends on the shape of the weight matrix. Therefore, different tile sizes can lead to different parallelism, occupancy, and shared memory efficiency, resulting in different execution times. We autotune this parameter to obtain the best performance for particular matrix shapes. We propose two strategies for autotuning each ROAST-MM layer - (1) Optimize the inference workload by autotuning the forward kernel and sharing the tile size with the backward kernels. (2) Optimize the training workload by autotuning the forward and backward kernels together. Extensive evaluation of this kernel is presented in appendix C.2. 5 FEATURE HASHING QUALITY: GLOBAL MEMORY SHARING ADVANTAGE OVER LOCAL MEMORY SHARING We can consider model compression as dimensionality reduction of a parameter vector (a one dimensional vector of all parameters in a model) of size n into a vector of size |M| = m. Quality of inner-product preservation is used as a metric to measure the quality of dimensionality reduction. In terms of dimensionality reduction, ROAST uses ROBE hashing, which shows that chunk based hashing is theoretically better than hashing individual elements. In this section, we compare ROAST’s GMS proposal against HashedNet’s LMS using a chunck size of one. Consider two parameter vectors x, y ∈ Rn, we are interested in how the inner product of parameter vectors are preserved under hashing. Let x = [x1, x2, ..., xk] and y = [y1, y2, ..., yk] be composed of k vectors of sizes n1, n2, ...nk where [] denotes concatentation. In LMS, let each piece map to memory of size fim where ∑ i fi = 1. The estimated inner product with GMS is ⟨̂x, y⟩G,m = m∑ j=1 ( n∑ i=1 I(h(i)=j)g(i)x[i] n∑ i=1 I(h(i)=j)g(i)y[i] ) (3) Table 2: Experimental settings: The datasets and models used in experiments. 
Domain Task Dataset #Samples Model Model size NLP text-classification amazon-polarity 3.6M/0.4M BERT-2-2 37.43M NLP text-classification yelp-polarity 560K/38K BERT-2-2 37.43M CV image-classification cifar10 50K/10K ResNet 6.5M The estimated inner product with LMS can be written as ⟨̂x, y⟩L,m,f⃗ = k∑ l=1 flm∑ j=1 nl∑ i=1 I(h(i)=j)g(i)xl[i] nl∑ j=1 I(h(i)=j)g(i)yl[i] = k∑ l=1 ⟨̂xl, yl⟩G,(flm) (4) Theorem 1 Let x, y ∈ Rn and be composed of k vectors x = [x1, x2, ..., xk] and y = [y1, y2, ..., yk]. Then the inner product estimation of global and local weight sharing are unbiased. E(⟨̂x, y⟩G,m) = ⟨x, y⟩ E(⟨̂x, y⟩L,m,f⃗ ) = ⟨x, y⟩ (5) The variance for inner product estimation can be written as, VG(⟨̂x, y⟩) = ∑ i fiVi + 1 m ∑ i,j,i ̸=j (||xi||2||yj ||2) + ⟨xi, yi⟩⟨xj , yj⟩ (6) VL( ˆ⟨x, y⟩) = ∑ i Vi (7) where Vl = 1 fl 1 m ∑ i ̸=j a2i b 2 j + ∑ i ̸=j aibiajbj , where xl = (a1, a2..., anl) and yl = (b1, b2..., bnl) (8) where VL is local memory sharing variance and VG is global memory sharing variance. Intuition: The two terms in VG can be understood as follows: The first term is the local variance with individual terms reduced by a factor of fi. This is because each piece of the vector is being distributed in a memory that is 1/fi× larger. However, in GMS, there is a possibility of more collisions across pieces. This leads to the second term in VG. Note that, for a given x, y and a finite value for m, VG is always bounded. At the same time, VL is unbounded due to 0 < fi < 1 in the denominator. So if the number of pieces increases or particular fi grows smaller, VL increases. While we cannot prove that VG is strictly less than VL, we can investigate the equation under some assumptions on the data. Practically, each piece of the parameter vector is a computational block like a matrix for multiplication or embedding table lookup. These blocks are initialized at a scale proportional to the square root of their size. So the norms of these vectors are similar. Let us assume the norm of each piece to be √ α. Also, let us assume that over random data distributions over x and y, all the inner products to be β in expectation. Then, VG ≈ k2 m (α2 + β2) VL ≈ 1 m (α2 + β2)( 1 f1 + 1 f2 + ...+ 1 fk ) ≥ 1 m (α2 + β2)k2 1 ( ∑ fi) = VG (9) Thus, VL is greater than VG, and it can be much greater depending on the exact values of fi. The proof of the theorem and other details are presented in Appendix B.2 6 EXPERIMENTAL EVALUATION Setup: In this section, we evaluate the ROAST compression approach on two types of tasks. The details of the tasks, datasets and models used are mentioned in table 2. . For image-classification tasks, we choose the cifar-10 dataset and the leader for the DawnBenchmark (Coleman et al., 2017) - a ResNet-9 model1 for cifar-10. The target accuracy for this benchmark is 94% and hence we perform hyper-parameter tuning to get a test accuracy of ≥ 94%. We stop the tuning once we 1https://github.com/apple/ml-cifar-10-faster reach this accuracy and hence the results for CIFAR-10 should be compared w.r.t whether it crosses 94.0%. For NLP tasks, we use two largest available text-classification datasets on huggingface (HuggingFace, 2022). For the model, we use BERT-x-y (x:number of layers, y:number of attention heads) architecture with classification head. On both NLP datasets, using models larger than BERT-22 lead to similar test accuracy and hence we choose BERT-2-2 as the base model. 
The other hyper parameters for NLP tasks are { batch 64 for amazon-polarity and 32 for yelp-polarity, learning rate 2e-5, AdamW optimizer, Linear scheduler} Roast for compression As we can see in tables 3 and 4 , with ROAST, we can achieve similar quality of model in much smaller space. Specifically, for text-classification, we see that we can train and deploy the BERT-2-2 model in 100× lesser space. Similarly, we can train and deploy ResNet model in 10× lesser space for same target test accuracy. Thus, ROAST is an effective method for training and deploying models on memory-constrained systems. Managing excess parameters It is clear from table 3, that BERT-base architecture is highly over parameterized for the tasks under consideration. However, even in this case, ROAST can be used to control the memory footprint while maintaining the functional form of the larger model. Pruning and ROAST We perform unstructured iterative-magnitude pruning (Han et al., 2016b) on ResNet model and find that pruning gives upto 100× compression. However note that pruning requires us to train the model using memory required to store the original model. However, compression with ROAST means using lesser memory even for training. Additionally, pruning can be used in conjunction with ROAST to obtain smaller models using smaller memory. In table 4, we see that we can prune 90% of weights in 10× compressed ROAST array and still achieve the same quality. Local vs. Global memory sharing In the figure 3, we show that the quality of the model while using global memory sharing is, indeed, better than local memory sharing. This supports our theoretical observation about these memory sharing schemes. Efficiency of ROAST operators as compared to HashedNet Table 7 shows the inference performance of a simple model using ROAST-MM for matrix multiplication on compressed memory. Our model linearly transforms the input vector and computes its norm. We optimized the ROAST-MM kernel for this experiment using the inference-optimal strategy. We make the following observations from Table 7: (1) ROAST-MM outperforms HashedNet kernel consistently across the different multiplication workloads. On an average over different workloads, ROAST-MM is up to 45× faster than HashedNet. (2) ROAST-MM is 1.34× slower than PyTorch-MM. This is expected as Pytorch-MM uses extremely optimized libraries for matrix multiplication and ROAST-MM implementation is comparatively under-optimized. It is still interesting to note that ROAST-MM’s performance better in terms of scaling efficiency than PyTorch-MM with the increase in workload. As the workload increases 1600× (from 512×512 to 20480×20480), PyTorch-MM takes 39× time, HashedNet takes 106× time whereas ROAST-MM only takes around 16× time. We present more detailed measurements across different optimizers for training-optimal strategy in the appending C.2 7 CONCLUSION Traditionally model compression has focused on memory reduction during inference. However, model memory during training is also an important consideration. While some of the existing methods such as HashedNet and Low-rank factorisation provide model reduction during training, these methods either do not provide cache-efficient model recovery or have implicit cap on memory reduction. ROAST overcomes these obstacles and provides a cache-efficient, arbitrary control over the memory footprint of model during training and inference. ROAST, essentially provides a practical parameter sharing method. 
ROAST is theoretically better than HashedNet in terms of dimensionality reduction due to block based hashing and global memory sharing. We empirically validate the efficiency advantage of ROAST over HashedNet and that we can achieve high compression with ROAST. A ADDITIONAL DATA FOR REVIEWERS - PARTS OF WHICH WILL GO IN MAIN PAPER IN FINAL VERSION A.1 EXTENDED TABLE 3 WITH EPOCH INFORMATION AND MORE BASELINES We add a lot of information and new results to the table. Specifically, • We add the GMS and LMS results to the table separately. So that readers can get an idea of each of the method on the task. • We add unstructured pruning (best pruning quality wise) resutls for NLP tasks as well. The pruning results are obtained in the following manner. With the full-9-1 schedule, we start from the fully trained model, perform iterative pruning during next 9 epochs and then tune the final pruned model for 1 more epoch. Whereas in the full-1-9 schedule, we again start from the fully trained model, perform pruning in the next 1 epoch and then tune the model further for 9 epochs. We note the best achieved accuracy with the final model structure and the epoch at which this accuracy is reached. • For each result, we note the number of epoch when the best accuracy was reached. • We append an additional small table which notes the number of epochs required to reach a target accuracy to compare the convergence of different models. We make the following observations. • GMS reaches better accuracy than LMS for the same amount of compression for both the datasets. Additionally, GMS reaches the same target accuracy faster than the LMS. • The ROAST approach is more effective than pruning approaches in NLP tasks of textclassification for architectures like BERT. • It is interesting that GMS-10× converges faster than original model on both datasets. We leave investigating this as future work. A.2 GMS VS LMS FOR YELP As can be seen from the two plots in figure4, it is clear the GMS performs superior to LMS in both the compression settings. B THEORY ROAST is a generalized model compression which performs operation specific system-friendly lookup and global memory sharing. This raises some interesting theoretical questions B.1 BACKWARD PASS FOR MODEL SHARING WEIGHTS ACROSS DIFFERENT COMPONENTS A general function sharing a weight, say x across different components can be written as , f(x, g(x)) The interpretation is that x was used in g(.) and then again used ahead in f. (In case of MLP, we can think of x being used in multiple layers) Let f(g1, g2) where both g1 and g2 are functions of x. ∂f(g1, g2) ∂x = ∂f(g1, g2) ∂g1 ∗ ∂g1 ∂x + ∂f(g1, g2) ∂g2 ∗ ∂g2 ∂x (10) g1 = x and g2 = g(x) ∂f(g1, g2) ∂x = ∂f(x, g(y)) ∂x |y=x + ∂f(y, g(x)) ∂g(x) ∗ ∂g(x) ∂x |y=x (11) ∂f(g1, g2) ∂x = ∂f(x, g(y)) ∂x |y=x + ∂f(y, g(x)) ∂x |y=x (12) Renaming, ∂f(x, g(x)) ∂x = ∂f(z, g(y)) ∂z |y=x,z=x + ∂f(z, g(y)) ∂y |y=x,z=x (13) Thus, we can essentially consider each place where x appears as new variables and then gradient w.r.t x is just summation of partial derivatives of the function w.r.t these new variables. Thus, it is easy to implement this in the backward pass. In order to make sure that the memory utilization in backward pass is not of the order of the recovered model size, we do not use the auto-differentiation of tensorflow/pytorch. We implement our own backward pass and it can be found in the code. B.2 GLOBAL FEATURE HASHING VS LOCAL FEATURE HASHING. 
We can consider model compression techniques as dimensionality reduction of the parameter vector (a one dimensional vector of all parameters in a model) of size n into a vector of size |M| = m. Quality of inner-product preservation is used as a metric to measure the quality of dimensionality reduction. In terms of dimensionality reduction, ROAST uses ROBE hashing Desai et al. (2022), which showed that chunk based hashing is theoretically better than hashing individual elements. In this section, we analyse GMS proposal of ROAST against LMS of HashedNet. For the purpose of this comparison we assume a chunk size of 1. Consider two parameter vectors x, y ∈ Rn. We are interested in how inner product between these parameter vectors are preserved under hashing. Let x = [x1x2...xk] and y = [y1y2...yk] be composed of k pieces of sizes n1, n2, ...nk. In LMS, let each piece be mapped into memory of size fim where ∑ i fi = 1. The estimators of inner product in the GMS case can be written as , ⟨̂x, y⟩G,m = m∑ j=1 ( n∑ i=1 I(h(i)=j)g(i)x[i])( n∑ i=1 I(h(i)=j)g(i)y[i]) (14) The estimate of inner product with LMS can be written as, ⟨̂x, y⟩L,m,f⃗ = k∑ l=1 flm∑ j=1 ( nl∑ i=1 I(h(i)=j)g(i)xl[i])( nl∑ j=1 I(h(i)=j)g(i)yl[i]) = k∑ l=1 ⟨̂xl, yl⟩G,(fim) (15) Note that ⟨̂x, y⟩L,m,f⃗ = k∑ l=1 ⟨̂xl, yl⟩G,(flm) (16) The GMS estimator is the standard feature hashing estimator and the LMS is essentially sum of GMS estimators for each of the piece. as E[g(i)] = 0, it is easy to check by linearity of expectations that Expectation The suffix L refers to local hashing and G refers to global hashing. EG = E(⟨̂x, y⟩G,m) = ⟨x, y⟩ (17) EL = E(⟨̂x, y⟩L,m,f⃗ ) = ⟨x, y⟩ (18) Let us now look at the variance. Let us follow the following notation, • VG = V(⟨̂x, y⟩G,m). GMS variance of entire vectors • VL = V(⟨̂x, y⟩L,m,f⃗ ). LMS variance of entire vectors • Vl = V(⟨̂xl, yl⟩G,flm). variance of each piece we can write Vl as follows. The following equation is easy to derive and it can be found the lemma 2 of Weinberger et al. (2009) Vl = 1 fl 1 m ( ∑ i ̸=j a2i b 2 j + ∑ i ̸=j aibiajbj) where xl = (a1, a2...anl) and yl = (b1, b2...bnl) (19) As, each of the piece is independently hashed in LSM, we can see VL = k∑ l=1 Vl (20) Let us now look at VG. Again, using lemma 2 from Weinberger et al. (2009) VG = 1 m ( ∑ i̸=j x2i y 2 j + ∑ i̸=j xiyixjyj) (21) The expression can be split into terms that belong to same pieces and those across pieces VG = 1 m k∑ l=1 ( ∑ i̸=j∈piece-l x2i y 2 j + ∑ i ̸=j∈piece-l xiyixjyj) + 1 m k∑ l1=1 k∑ l2=1,l1 ̸=l2 ( ∑ i∈piece-l1,j∈pieces-l2 (x2i y 2 j ) + ∑ i∈piece-l1,j∈pieces-l2 xiyixjyj)) VG = k∑ l=1 flVl + 1 m l∑ l1=1 l∑ l2=1,l1 ̸=l2 ||xl1||22||yl2||22 + ⟨xl1, yl2⟩⟨xl2, yl2⟩ (22) Observation 1: In VL we can see that there are terms with 1fl which makes it unbounded. It makes sense as if number of pieces increase a lot a lot of compressions will not work for example if number of peices > |M|. Also, it will affect VL a lot when some fl is very small which can often be the case. For example, generally embedding tables in DLRM model are much larger than that of matrix multiplciation modules (MLP) . which can make f ≈ 0.001 for MLP components. Observation 2: Practically we can assume each piece, no matter the size of the vector, to be of same norm. The reason lies in initialization. According to Xavier’s initialization the weights of a particular node are initialized with norm 1. So for now lets assume a more practical case of all norms being equal to √ α. 
Also, in order to compare the two, we need to consider an average case over the data. So let us assume that, under an independent randomized-data assumption, the expected value of every inner product is $\beta$. With this, in expectation over the randomized data, we have
$$V_G = \sum_l f_l V_l + \frac{k(k-1)}{m}(\alpha^2 + \beta^2) \tag{23}$$
Now note that
$$V_l = \frac{1}{f_l m}\Big(\sum_{i\neq j} a_i^2 b_j^2 + \sum_{i\neq j} a_i b_i a_j b_j\Big), \quad \text{where } x_l = (a_1, \dots, a_{n_l}) \text{ and } y_l = (b_1, \dots, b_{n_l}) \tag{24}$$
Dropping the subscript $l$ below,
$$V_l = \frac{1}{f_l m}\Big(\|x\|_2^2\|y\|_2^2 + \langle x, y\rangle^2 - 2\sum_i x_i^2 y_i^2\Big) \tag{25}$$
$$V_l = \frac{1}{f_l m}\Big((\alpha^2 + \beta^2) - 2\sum_i x_i^2 y_i^2\Big) \tag{26}$$
Note that for each negative term there are $n_l$ positive terms. To simplify, we disregard the negative term in the equation above. This approximation is practical and is made only to get a sense of the relation between $V_L$ and $V_G$.
$$V_L - V_G = \sum_l V_l - \sum_l f_l V_l - \frac{k(k-1)}{m}(\alpha^2 + \beta^2)$$
$$V_L - V_G = \sum_l \frac{1}{m}\Big(\frac{1}{f_l} - 1\Big)(\alpha^2 + \beta^2) - \frac{k(k-1)}{m}(\alpha^2 + \beta^2)$$
$$V_L - V_G \geq \frac{k(k-1)}{m}(\alpha^2 + \beta^2) - \frac{k(k-1)}{m}(\alpha^2 + \beta^2)$$
$$V_L - V_G \geq 0$$
where the last inequality uses $\sum_l 1/f_l \geq k^2$ (AM-HM inequality), so that $\sum_l (1/f_l - 1) \geq k(k-1)$. Note that we ignored a term which reduces $V_L$ slightly; letting that error be $\epsilon$,
$$V_L - V_G \geq -\epsilon \tag{27}$$
The above equation shows that, even in the best case, $V_G$ may exceed $V_L$ only slightly. In the general case, where the harmonic mean is much worse than the arithmetic mean, $V_L$ will be much larger, depending on the exact values of the $f_l$.

C ROAST-MM LATENCY MEASUREMENTS
C.1 INFERENCE OPTIMIZATION
C.2 TRAINING OPTIMIZATION
See Tables 8, 9, 10, and 11.

D VARIANCE IN QUALITY OVER DIFFERENT RUNS
Figure 5 shows three runs of the ROASTed BERT and BERT models.
1. What is the focus and contribution of the paper on reducing the memory usage of ML models?
2. What are the strengths of the proposed approach, particularly in terms of its block-based memory accesses and global memory sharing policy?
3. What are the weaknesses of the paper regarding its experimental setup and trainability of the models?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or ideas for future work that the reviewer has after reading the paper?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper presents ROAST, a hashing-based technique for reducing the memory usage of ML models during both training and inference. ROAST improves on previously existing hashing techniques for this purpose (ROBE/HashedNet) by using block-based (cache/HW-friendly) memory accesses. It also proposes a policy of globally sharing weights (across layers), as opposed to prior art (HashedNet), which shared weights per layer only. This in turn allows ROAST to achieve much higher compression ratios, which translate to much larger performance gains with a much lower memory footprint than the state-of-the-art techniques. The authors provide theoretical and empirical evidence showing that their global memory sharing approach is superior to the local memory sharing used in prior art. Lastly, the authors demonstrate strong accuracy results using ROAST on a range of text-classification and image-classification benchmarks.

Strengths And Weaknesses
Sections 2 and 3 provide a good overview of the targeted problem space and of the immediate baselines (ROBE/HashedNet) that are closest to the proposed ideas. The diagrams provided in the paper are helpful in explaining the proposed ideas, and the authors also present detailed proofs that help justify the superiority of the GMS approach over LMS. The authors show a large speedup compared to the previous hashing approach, HashedNet.

In Section 6, the authors explain the experimental setup designed to verify the efficacy of ROAST. While the authors explain that they perform some hyperparameter tuning to achieve a given target accuracy, the paper fails to communicate how using the GMS approach in ROAST impacts the trainability or ease of training of their models. For example, adding the number of training epochs needed to reach the target accuracy for the different baselines in Tables 3 and 4 might help explain that. The intuition is this: since the weights are shared across all layers, the cumulative application of gradient updates might make it harder to reach the same level of accuracy with fewer unique weights (compared to LMS).

In Table 4, the authors demonstrate that ROAST can be combined with other model compression techniques, e.g., pruning, to achieve further compression. While this is attractive, structured pruning that takes advantage of the block-wise nature of ROAST memory accesses could achieve even better accuracy with low training effort; consider trying that as well.

Table 5 shows the inference runtime for a range of memory sizes and model sizes on a GPU with a 48MB cache. The ROAST inference time clearly beats the HashedNet baseline, especially for large model sizes (high compression ratios). The comparison with PyTorch-based matrix multiplication (MM) is also interesting: while PyTorch, with its efficient matrix-multiply implementation, achieves fast inference for smaller model sizes, the benefit of PyTorch over ROAST shrinks as the model size grows. This hints that ROAST-based memory optimization could potentially further improve the PyTorch MM implementation for hashed DNNs.

The results in Figure 3b somewhat contradict the claim that GMS is superior to LMS across a range of models, considering that the local ROAST configurations (10x and 100x) end at slightly higher accuracy than the corresponding GMS configurations. Even seeing accuracy values that match closely for the GMS and LMS approaches is surprising. Kindly explain why the yelp-polarity model defies the expected trend seen in Figure 3a.
I appreciate the authors clearly stating the limitations of the proposed work (a focus on reducing memory access during training/inference only, not compute cost).

Clarity, Quality, Novelty And Reproducibility
Questions related to the quality of the results are discussed in the Strengths And Weaknesses section.
ICLR
Title Hardware-aware compression with Random Operation Access Specific Tile (ROAST) hashing Abstract Advancements in deep learning are often associated with increasing model sizes. Training and deploying large models require sophisticated hardware and incur significantly higher costs. Thus, model compression is a widely explored approach to solving the problem. However, SOTA techniques fall short in one or more desirable aspects of compression for instance, pruning does not reduce memory for training, quantization can only provide up to 32x compression, HashedNet is cache-inefficient, etc. This paper proposes a model-agnostic, cache-friendly, and hardware-aware model compression approach: Random Operation Access Specific Tile (ROAST) hashing. ROAST collapses the parameters by clubbing them through a lightweight mapping. While clubbing these parameters, ROAST utilizes cache hierarchies by aligning the memory access pattern with the parameter access pattern. ROAST is up to ∼25× faster to train and ∼50× faster to infer than the popular parameter sharing method HashedNet. Additionally, ROAST introduces global weight sharing, which is empirically and theoretically superior to local weight sharing in HashedNet, and can be of independent interest. With ROAST, we can efficiently train and deploy the model using a much smaller memory footprint (∼ 10− 100× lesser) in text and image classification tasks. 1 INTRODUCTION Models across different domains, including Natural Language Processing (NLP), Computer Vision (CV), and Information Retrieval (IR), are exploding in size. State-of-the-art (SOTA) results in these domains are being obtained at a disproportionate increase in model sizes, questioning the sustainability of deep learning (Thompson et al., 2021). For instance, SOTA architectures for vision include VGG (Simonyan & Zisserman, 2014) (150M params, 0.6GB) and ViT (Dosovitskiy et al., 2020) (up to 304M params, 1.2GB). Additionally, SOTA NLP models range from BERT (Devlin et al., 2018) (340M params, 1.36GB) to GShard (Lepikhin et al., 2020) (600B params, 2.4TB). Similarly, industrial-scale recommendation models such as DLRM (Naumov et al., 2019; Mudigere et al., 2021) can have up to 10s of trillions of parameters (50TB). Large models, such as the above, come with various challenges. They need high-end distributed hardware for training and deployment, incurring higher costs. Additionally, the required modelparallel setup has higher inference and training-iteration latency for these models. Model compression is a research direction that aims to resolve these issues by reducing the memory footprint of the model. Compression of the order of 100× can eliminate the need for model-parallel setup for many SOTA models like GPT(Radford et al., 2019), Gshard(Lepikhin et al., 2020), DLRM (Naumov et al., 2019) which now can fit on a single GPU. Furthermore, compressing large models to small sizes come with immediate latency benefits. For example, Desai et al. (2022) showed that by compressing the DLRM model 1000× and using 1 GPU instead of 8 GPUs, we could get 3× faster inference at a lower cost. Also, in the case of CPU inference, a smaller model is efficient. For example, (Diamos et al., 2016) showed that if a single RNN layer can fit in registers, it leads to 146× faster inference. Thus, the ML community has heavily invested in model compression. 
A variety of model compression paradigms now exist in the literature, such as pruning (Han et al., 2016b), quantization (Han et al., 2016b), knowledge distillation (Buciluǎ et al., 2006), parameter sharing (Chen et al., 2015; Desai et al., 2022), and low-rank decomposition (Hrinchuk et al., 2020; Yin et al., 2021). Table 1 compares these approaches on three considerations: (1) whether the model memory is reduced for training, (2) whether the memory size can be controlled independently of the model, and (3) whether the approach considers the underlying memory hierarchies and is cache-efficient.

Table 1: Various compression techniques on three aspects: (1) memory reduction during training (apart from inference), (2) arbitrary control over memory, (3) hardware awareness / cache-efficiency.

Method | Training memory reduction | Arbitrary control on memory | Cache efficient
Pruning | No | No | No*
Low-rank decomposition | Yes | No | Yes
Low-precision | Yes | No | Yes
Quantization aware training (QAT) | No | No | N.A.
Parameter sharing - HashedNet | Yes | Yes | No
Knowledge Distillation | No | No | N.A.
ROAST (ours) | Yes | Yes | Yes

* Some versions of pruning are tuned to the underlying hardware and are cache-efficient.

We want the techniques to fare positively in all three aspects. However, techniques like pruning, QAT, and knowledge distillation require us to use the memory of the original model while training and only reduce inference-time memory. Additionally, there are limits to the compression obtained by quantization and pruning depending on which component we are compressing. For example, we cannot prune an embedding table (N × d) by more than d×, as we do not want any embedding vector to be all zeros. HashedNet provides memory reduction during training and arbitrary control over memory. However, the look-ups in HashedNet are randomly and independently distributed across the total memory, which makes HashedNet cache-inefficient.

This paper presents Random Operation Access Specific Tile (ROAST) hashing, a parameter-sharing approach that provides cache-efficiency and arbitrary control over memory during training as well as inference. ROAST does not change the model's functional form and can be applied to all computational modules of a model, such as MLP layers, attention blocks, convolution layers, and embedding tables. ROAST is hardware-aware: it proposes a tile-based hashing scheme tuned to the memory access pattern of the algorithmic implementation of the operation being performed. ROAST uses this hash function to recover blocks of the model from a single array of parameters, the ROAST array. ROAST is superior to HashedNet due to two factors: (1) Unlike HashedNet, ROAST proposes global weight sharing, where parameters are shared across the different computational modules. As we shall see, global weight sharing is empirically and theoretically superior to local weight sharing and might be of independent interest. (2) ROAST uses block-based hashing, which is theoretically superior to the count-sketch hashing used in HashedNet (Desai et al., 2022).

We show that with ROAST, we can train a BERT-2-2 (2 layers, 2 attention heads) model on the largest available text-classification datasets (amazon-polarity, yelp-polarity) using 100× less memory without loss of quality. In cases where the model is overly parameterized, such as using BERT-12-12 in the text-classification task above, we can still obtain a similar compression of 100×; thus ROAST is a good alternative to neural architecture search. The results extend to CV datasets as well.
Specifically, we can train a ResNet-9 model with 10× lesser memory for the CIFAR10 dataset. Importantly, we show that ROAST, due to its hardware-aware nature, is significantly faster than HashedNet: ROAST is up to ∼ 25× faster to train and ∼ 50× faster to infer than HashedNet for large matrix multiplications. Our current implementation of ROAST matrix multiplication is about 1.34× slower than full matrix multiplication in pytorch. This is a testament to how optimized CUBLAS libraries are. We believe, with enough investigation, we can make ROAST-MM comparably efficient to pytorch-MM as well. Limitations of ROAST: One of the goals of model compression, apart from reducing memory usage, is to reduce computational workload for deployment. ROAST, currently, is not devised to decrease computation; it only decreases the memory footprint of a model. Reducing computation with a small memory is left for future work. However, it should be noted that reducing the memory footprint can significantly reduce computation latency and power consumption. As shown in (Han et al., 2016a), accessing memory from RAM is 6400× costlier than 32bit INT ADD and 128× more expensive than on-chip SRAM access in terms of energy consumption. Additionally, RAM access generally is ∼100× slower than a floating-point operation. So, this model compression with ROAST, in our opinion, is an important step for efficient training and inference. 2 RELATED WORK This section briefly reviews the rich history of model compression paradigms. Model compression can be generally classified into two categories: (1) Compressing a learned model and (2) Learning a compressed model. ROAST lies in the second category. Compressing learned models: 1) Pruning: Pruning (Zhu & Gupta, 2017) is a technique to remove parts of a large model, including weights, blocks, and layers, to make the model lighter. Pruning can be performed as a one-time operation or gradually interspersed with training. 2) Quantization: Quantization can involve reducing the precision of the parameters of a model. Mixed precision models are sometimes used where different precision is used with different weights. KMeans quantization is another type of quantization, where models’ weights are clustered using KMeans, and each cluster’s centroid is used for all cluster weights. Model compression, in this case, is achieved by reducing the number of distinct weights. 3) Knowledge distillation: Knowledge distillation (Buciluǎ et al., 2006) is widely applied in model compression with a focus on distilled architectures. Knowledge distillation involves training a teacher model (large original model); then, a student model is trained using the logits of the teacher model. Empirically, the student model trained under this paradigm generalizes better than the student model trained standalone. Many variations exist on this basic idea of knowledge distillation. While these techniques have successfully reduced memory for inference, one of the drawbacks of this line of compression is that the memory usage while training the model is not reduced. ROAST, however, provides a solution that reduces the model’s memory during training and inference. Learning compressed models 1) Low-rank decomposition: In this method, matrices in the model are decomposed into a product of two low-rank matrices, thus saving memory per matrix. 
A generalization of low-rank decomposition to tensors is called tensor-train decomposition. 2) Parameter sharing: Parameter-sharing approaches such as HashedNet (Chen et al., 2015) are generally used for matrix compression. These approaches randomly share weights among different parameters, reducing the model's memory usage. This line of research provides model reduction even during training. However, low-rank decomposition does not offer arbitrary control over the memory footprint, and HashedNets are inefficient due to heavy cache-thrashing caused by non-local lookups. Conversely, ROAST is a model-agnostic parameter-sharing approach that can arbitrarily reduce the model size without affecting the functional form while keeping the model recovery efficient.

3 BACKGROUND
HashedNet: compressing MLP matrices. Previous work (Chen et al., 2015) introduced a weight-sharing method to compress the weight matrices of MLP models. It maps each matrix parameter to a shared parameter array using a random hash function, xxhash (Collet, 2016). In the forward pass, this mapping is used to recover a weight matrix and perform matrix multiplication for each MLP layer. In the backward pass, the gradients of each weight matrix are mapped to the shared compressed array and aggregated using the sum operation. It should also be noted that each MLP layer uses an independent array of parameters. One of the main concerns with HashedNet is that memory accesses on the compressed array are non-coalesced. Thus, fetching a compressed matrix via HashedNet requires significantly more memory read transactions than fetching an uncompressed matrix, for which memory accesses can coalesce. Our evaluation shows that uncoalesced memory accesses lead to high latency, especially for large matrices.

Random Block Offset Embedding Array (ROBE) for embedding compression. In ROBE (Desai et al., 2022), the embedding table is generated from an array of parameters. The embedding of a token is obtained by drawing chunks of the embedding from the ROBE array. The locations of the chunks are decided randomly via lightweight universal hash functions. The authors of ROBE showed that ROBE hashing is theoretically superior to the feature hashing used in HashedNet. Also, the use of chunks causes memory accesses to coalesce, making the embedding lookup efficient. ROAST proposes a component-agnostic, global parameter-sharing approach that tunes the hash function to match the memory accesses of the algorithmic implementation of the operation on the available hardware, thus giving a superior parameter-sharing scheme.

4 RANDOM OPERATION ACCESS SPECIFIC TILE (ROAST) HASHING
Let M be the compressed memory from which parameters will be used, f be the model or the function that we want to run using M, and W be the recovered weights used in f. f can be considered as a composition of operations {Oi(Xi, Wi)}. By operation, we mean the smaller functions that, when composed together, give us the model f. Here Xi is the input to the operation, and Wi is the weights (i.e., learnable parameters) that Oi uses. Generally, the Wi are distinct and do not share parameters. Random Operation Access Specific Tile (ROAST) hashing is a way to perform efficient, model-agnostic, parameter-sharing-based compression. The following distinct aspects of ROAST set it apart from previous parameter-sharing-based methods. (1) ROAST is a generic technique applicable to all computational modules.
(2) ROAST proposes to tune its mapping from Wi to M in a way that coalesces memory accesses according to how memory is accessed during the operation. This makes ROAST efficient and up to 45× faster than competing approaches like HashedNet. (3) ROAST proposes Global Memory Sharing (GMS), as opposed to the Local Memory Sharing (LMS) used in HashedNet. We show GMS to be theoretically and empirically superior to LMS in Sections 5 and 6.

4.1 ROAST OPERATIONS IN DEEP LEARNING
Any model f can be considered as a composition of smaller functions {Oi(Xi, Wi)}. There are multiple ways to perform this decomposition depending upon what we consider a valid (or small enough) operation. In ROAST, we consider three types of operations: (1) L(l, W), a lookup that accesses M and recovers the l-th element of W, say w; by element, we mean some particular part of W that is identifiable by an integer (an example with embedding tables is given in Figure 1); (2) MM(X, W), a matrix multiplication that multiplies X with W and returns the result; and (3) N(X), various operations that only act on the input and do not interact with M. In ROAST, in order to limit the memory usage, we make sure that L is used only on a small w and that MM is performed without recovering the entire matrix. We find that most deep learning models, if not all, can be written as a composition of the operations N, MM, and L, where L is only applied to small parameters. Let us discuss how ROAST implements the L and MM operations in the following paragraphs.

Lookup (L(l, W)). We recover a parameter weight w of any shape in a row-major format. Thus, we can consider w = W(l) to be a 1D vector without loss of generality. ROAST recovers w from M in a blocked fashion. Consider w to be composed of chunks of size Z. Each chunk c is located in M using a universal hash function h1 and is recovered from the location h1(c) in M. Let C(i) give the chunk number of index i and O(i) give the offset of i in this chunk. Then
$$w[i] = \lambda\, \mathcal{M}[h_1(C(i)) + O(i)], \qquad h_1 : \mathbb{N} \to \{0, \dots, |\mathcal{M}| - Z\} \tag{1}$$
The recovered W carries λ as a scaling factor, discussed in Section 4.2. The hash function hashes to the range {0, ..., |M| − Z} to avoid overflows while reading the memory. For example, Figure 1 (right) illustrates the embedding lookup using L with a chunk size of 2. ROAST uses L to implement computational modules such as embeddings, bias vectors, and so on. We generalize the embedding lookup kernel from ROBE (Desai et al., 2022) to implement our L kernel. A minimal sketch of this chunked lookup is given below.
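The following NumPy-style sketch is our own illustration of the chunked lookup in Eq. (1), not the released kernel; the random start positions simply stand in for a universal hash h1, and the function name is hypothetical.

```python
import numpy as np

def chunked_lookup(M, lam, size, Z, seed=0):
    """Recover a flat parameter w of length `size` from the shared array M.

    Each chunk of Z consecutive entries of w is read from a contiguous block of M
    starting at h1(chunk_id), mirroring Eq. (1): w[i] = lam * M[h1(C(i)) + O(i)].
    """
    rng = np.random.default_rng(seed)                         # stands in for the universal hash h1
    n_chunks = (size + Z - 1) // Z
    starts = rng.integers(0, len(M) - Z + 1, size=n_chunks)   # h1(c) in {0, ..., |M| - Z}
    w = np.empty(size)
    for i in range(size):
        c, o = divmod(i, Z)                                   # chunk id C(i) and offset O(i)
        w[i] = lam * M[starts[c] + o]
    return w

# Toy usage: a 16-element shared array, recovering an 8-element weight with chunk size 4.
M = np.arange(16, dtype=float)
print(chunked_lookup(M, lam=0.5, size=8, Z=4))
```

Because consecutive offsets within a chunk read consecutive entries of M, the accesses coalesce, which is the property the real kernel exploits.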
Matrix multiplication (MM(Xi, Wi)). 2D matrix multiplication is one of the most widely used operations in deep learning. We implement our ROAST-MM kernel with parameter sharing performed in a way that the matrix multiplication algorithm accesses coalesced pieces of M. An efficient implementation of matrix multiplication on a GPU follows a block multiplication algorithm to use the on-chip shared memory efficiently. While computing C = A × B, the matrices A, B, and C are divided into tiles of size Z0 × Z1, Z1 × Z2, and Z0 × Z2, respectively. Thus, we divide our 2D weight matrix into tiles of size Z1 × Z2. The tile (x, y), where x and y are the coordinates of the tile, is located in M in a row-major format via a universal hash function h2(x, y). Let C1(i, j) and C2(i, j) give the x-coordinate and y-coordinate of the tile to which (i, j) belongs. Similarly, let O1(i, j) and O2(i, j) give the x-offset and y-offset of a location (i, j) within the tile. Then we use the following mapping for ROAST-MM:
$$W[i, j] = \lambda\, \mathcal{M}[h_2(C_1(i, j), C_2(i, j)) + Z_2\, O_1(i, j) + O_2(i, j)], \qquad h_2 : \mathbb{N}^2 \to \{0, \dots, |\mathcal{M}| - Z_1 Z_2\}$$
Again, λ is the scaling factor discussed in Section 4.2. The hash function hashes to the range {0, ..., |M| − Z1Z2} to avoid overflows while reading the chunk. Figure 1 (left) illustrates ROAST-MM with a chunk size of 2 × 2. The above mapping is used whenever a 2D tile is accessed in the matrix multiplication algorithm. The pseudocode for ROAST-MM is shown in Algorithm 1. We discuss the implementation of this kernel and its evaluation later in the paper. ROAST uses the ROAST-MM kernel to implement computational modules such as MLP layers, attention blocks, etc. Each module invoking ROAST kernels uses independent hash functions.

Algorithm 1 ROAST-MM(I × H × O)
Require: X ∈ R^{I×H}, M, λ, h : N² → {0, ..., |M| − Z1Z2}
Ensure: output = MM(X, M[h(:, :)])
  value ← TILE(Z0, Z2)                               ▷ allocate a 2D tile of size Z0 × Z2 to accumulate results
  for i ∈ {0, 1, ..., ⌈I/Z0⌉ − 1} do
    for j ∈ {0, 1, ..., ⌈O/Z2⌉ − 1} do
      value[:, :] ← 0
      for k ∈ {0, 1, ..., ⌈H/Z1⌉ − 1} do
        value ← value + MM(X[i : i+Z0, k : k+Z1], M(h(k : k+Z1, j : j+Z2)))   ▷ the weight-tile access passes through the hash function
      end for
      output[i : i+Z0, j : j+Z2] ← λ ∗ value
    end for
  end for

Apart from scaling each recovered parameter with the module-specific λ, we can also multiply it by the output of another independent hash function g : N^k → {±1} (k = 1 or k = 2).

4.2 GLOBAL MEMORY SHARING (GMS)
HashedNet uses local memory sharing (LMS), in which each layer has its own independent compressed memory. In contrast, ROAST proposes global memory sharing (GMS), wherein we share memory across modules. However, modules cannot directly use the parameters stored in M, as each module's weights require initialization and optimization at different scales. For instance, under Xavier initialization (Glorot & Bengio, 2010), weights are initialized from the distribution Uniform(−1/√n, 1/√n), where n is the size of the input to the module. In GMS, we must ensure that each module gets weights at the required scale. To achieve this, we first initialize the entire ROAST parameter array with values from the distribution Uniform(−1/C, 1/C) for some constant C. Then, for each module, we scale the weights retrieved from the ROAST array by a factor of λ = C/√n. One can understand the benefit of GMS over LMS in terms of the number of distinct functions f that can be expressed using a fixed M. Consider a family of functions with n parameters. GMS can potentially express |M|^n functions across different random mappings. In LMS, let the separate parameters be of sizes n1, n2, ..., nk, and let each of them be mapped into memories M1, M2, ..., Mk. Thus, n = Σi ni and |M| = Σi |Mi|. Then LMS can only express |M1|^{n1} |M2|^{n2} · · · |Mk|^{nk} different functions. Thus the expressivity of LMS is strictly less than that of GMS and can be orders of magnitude less, depending on the exact values of ni and |Mi|. We also show that GMS is superior to LMS in terms of dimensionality reduction (feature hashing) in Section 5. A small sketch of the tile-hashed multiplication together with the GMS scaling is given below.
Figure 2: Local memory sharing: each module compresses its parameters separately. In global memory sharing, all parameters from across the modules share the same memory.
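To illustrate Algorithm 1 together with the GMS scaling λ = C/√n, here is a minimal dense NumPy sketch; it is our own illustration (the toy hash and function names are hypothetical, and dimensions are assumed to divide the tile sizes evenly), not the optimized Triton kernel.

```python
import numpy as np

def tile_start(x, y, mem_size, tile_elems, seed=0):
    # Toy stand-in for the universal hash h2(x, y) -> {0, ..., |M| - Z1*Z2}
    return np.random.default_rng((seed, x, y)).integers(0, mem_size - tile_elems + 1)

def roast_mm(X, M, H, O, Z0=2, Z1=2, Z2=2, C=1.0):
    """Compute X @ W where W (H x O) is never materialized: each Z1 x Z2 tile of W
    is read from a contiguous block of the shared array M and scaled by lambda."""
    n_rows = X.shape[0]
    lam = C / np.sqrt(H)                      # GMS scaling for this module (lambda = C / sqrt(n))
    out = np.zeros((n_rows, O))
    for i in range(0, n_rows, Z0):
        for j in range(0, O, Z2):
            acc = np.zeros((Z0, Z2))
            for k in range(0, H, Z1):
                s = tile_start(k // Z1, j // Z2, len(M), Z1 * Z2)
                W_tile = M[s : s + Z1 * Z2].reshape(Z1, Z2)   # hashed weight tile
                acc += X[i : i + Z0, k : k + Z1] @ W_tile
            out[i : i + Z0, j : j + Z2] = lam * acc
    return out

# Toy usage: a 4x4 input against a virtual 4x4 weight recovered from 12 shared parameters.
X = np.ones((4, 4))
M = np.random.default_rng(0).normal(size=12)
print(roast_mm(X, M, H=4, O=4))
```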
4.3 FORWARD AND BACKWARD PASSES
Recall that in ROAST, operations are of three types: L, MM, and N. The forward pass proceeds by applying each operation in sequence. If an operation is of type N, we directly apply its function to the input. For L and MM operations, outputs are computed according to the procedure described in Section 4.1. The gradient of the loss w.r.t. a weight wi in M is the λ-scaled aggregation of the gradients of the loss w.r.t. all the parameters that map to this weight. For simplicity of notation, let θ be the complete parameter vector, λ(j) the scaling factor used for the module that θj belongs to, and h the mapping from θ to M. See Appendix B.1 for more details.
$$\nabla_{w_i} f(w) = \sum_{j\,:\,h(j)=i} \lambda(j)\, \nabla_{\theta_j} f(\theta) \tag{2}$$

4.4 IMPLEMENTATION OF ROAST-MM
The high-performance computing community has heavily investigated fast implementations of the General Matrix Multiplication (GEMM) kernel, a fundamental operation in many computational workloads, including deep learning. Optimized implementations of GEMM kernels are available in vendor libraries such as cuBLAS (NVIDIA Corporation, 2022a) and CUTLASS (NVIDIA Corporation, 2022b). Unfortunately, these implementations do not support custom tile-loading operations, which are the key to ROAST-MM. To implement ROAST-MM to a reasonable level of efficiency, we used Triton (Tillet et al., 2019), an intermediate language for tiled neural-network computations. Triton abstracts away shared-memory management, which makes it helpful for customizing tiled operations with high efficiency. In our implementation of ROAST-MM, the optimal size of the coalesced tiles is a parameter that depends on the shape of the weight matrix. Different tile sizes can lead to different parallelism, occupancy, and shared-memory efficiency, resulting in different execution times. We autotune this parameter to obtain the best performance for particular matrix shapes. We propose two strategies for autotuning each ROAST-MM layer: (1) optimize the inference workload by autotuning the forward kernel and sharing the tile size with the backward kernels; (2) optimize the training workload by autotuning the forward and backward kernels together. An extensive evaluation of this kernel is presented in Appendix C.2.

5 FEATURE HASHING QUALITY: GLOBAL MEMORY SHARING ADVANTAGE OVER LOCAL MEMORY SHARING
We can consider model compression as dimensionality reduction of a parameter vector (a one-dimensional vector of all parameters in a model) of size n into a vector of size |M| = m. The quality of inner-product preservation is used as a metric for the quality of the dimensionality reduction. In terms of dimensionality reduction, ROAST uses ROBE hashing, which showed that chunk-based hashing is theoretically better than hashing individual elements. In this section, we compare ROAST's GMS proposal against HashedNet's LMS using a chunk size of one. Consider two parameter vectors $x, y \in \mathbb{R}^n$; we are interested in how the inner product of the parameter vectors is preserved under hashing. Let $x = [x_1, x_2, \dots, x_k]$ and $y = [y_1, y_2, \dots, y_k]$ be composed of $k$ vectors of sizes $n_1, n_2, \dots, n_k$, where $[\cdot]$ denotes concatenation. In LMS, let each piece map to a memory of size $f_i m$, where $\sum_i f_i = 1$. The estimated inner product with GMS is
$$\widehat{\langle x, y\rangle}_{G,m} = \sum_{j=1}^{m}\left(\sum_{i=1}^{n}\mathbb{I}(h(i)=j)\,g(i)\,x[i]\right)\left(\sum_{i=1}^{n}\mathbb{I}(h(i)=j)\,g(i)\,y[i]\right) \tag{3}$$
Table 2: Experimental settings: the datasets and models used in the experiments.

Domain | Task | Dataset | #Samples (train/test) | Model | Model size
NLP | text-classification | amazon-polarity | 3.6M/0.4M | BERT-2-2 | 37.43M
NLP | text-classification | yelp-polarity | 560K/38K | BERT-2-2 | 37.43M
CV | image-classification | cifar10 | 50K/10K | ResNet | 6.5M

The estimated inner product with LMS can be written as
$$\widehat{\langle x, y\rangle}_{L,m,\vec{f}} = \sum_{l=1}^{k}\sum_{j=1}^{f_l m}\left(\sum_{i=1}^{n_l}\mathbb{I}(h(i)=j)\,g(i)\,x_l[i]\right)\left(\sum_{i=1}^{n_l}\mathbb{I}(h(i)=j)\,g(i)\,y_l[i]\right) = \sum_{l=1}^{k}\widehat{\langle x_l, y_l\rangle}_{G,(f_l m)} \tag{4}$$

Theorem 1. Let $x, y \in \mathbb{R}^n$ be composed of $k$ vectors $x = [x_1, x_2, \dots, x_k]$ and $y = [y_1, y_2, \dots, y_k]$. Then the inner-product estimates of global and local weight sharing are unbiased:
$$\mathbb{E}\big(\widehat{\langle x, y\rangle}_{G,m}\big) = \langle x, y\rangle, \qquad \mathbb{E}\big(\widehat{\langle x, y\rangle}_{L,m,\vec{f}}\big) = \langle x, y\rangle \tag{5}$$
The variances of the inner-product estimates can be written as
$$V_G\big(\widehat{\langle x, y\rangle}\big) = \sum_i f_i V_i + \frac{1}{m}\sum_{i\neq j}\Big(\|x_i\|^2\|y_j\|^2 + \langle x_i, y_i\rangle\langle x_j, y_j\rangle\Big) \tag{6}$$
$$V_L\big(\widehat{\langle x, y\rangle}\big) = \sum_i V_i \tag{7}$$
where
$$V_l = \frac{1}{f_l m}\Big(\sum_{i\neq j} a_i^2 b_j^2 + \sum_{i\neq j} a_i b_i a_j b_j\Big), \quad \text{with } x_l = (a_1, \dots, a_{n_l}) \text{ and } y_l = (b_1, \dots, b_{n_l}) \tag{8}$$
and $V_L$ is the local memory sharing variance and $V_G$ is the global memory sharing variance.

Intuition: The two terms in $V_G$ can be understood as follows. The first term is the local variance with the individual terms reduced by a factor of $f_i$; this is because each piece of the vector is distributed in a memory that is $1/f_i\times$ larger. However, in GMS there is a possibility of more collisions across pieces, which leads to the second term in $V_G$. Note that, for given $x, y$ and a finite value of $m$, $V_G$ is always bounded. At the same time, $V_L$ is unbounded due to $0 < f_i < 1$ in the denominator. So if the number of pieces increases, or a particular $f_i$ grows smaller, $V_L$ increases. While we cannot prove that $V_G$ is strictly less than $V_L$, we can investigate the equations under some assumptions on the data. In practice, each piece of the parameter vector is a computational block, such as a matrix for multiplication or an embedding table for lookup. These blocks are initialized at a scale proportional to the square root of their size, so the norms of these vectors are similar. Let us assume the norm of each piece to be $\sqrt{\alpha}$. Also, let us assume that, over random data distributions of $x$ and $y$, all inner products are $\beta$ in expectation. Then,
$$V_G \approx \frac{k^2}{m}(\alpha^2 + \beta^2), \qquad V_L \approx \frac{1}{m}(\alpha^2 + \beta^2)\Big(\frac{1}{f_1} + \frac{1}{f_2} + \dots + \frac{1}{f_k}\Big) \geq \frac{1}{m}(\alpha^2 + \beta^2)\,\frac{k^2}{\sum_i f_i} = V_G \tag{9}$$
Thus, $V_L$ is greater than $V_G$, and it can be much greater depending on the exact values of the $f_i$. The proof of the theorem and other details are presented in Appendix B.2. A small simulation sketch illustrating this comparison follows.
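To make the comparison tangible, here is a minimal NumPy simulation sketch (our own illustration, not part of the paper's experiments) that estimates the inner product under GMS and LMS hashing with sign hashes and compares the empirical variances; the piece sizes and memory split are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def hashed_inner_product(x, y, m):
    """Single count-sketch-style estimate of <x, y> using a memory of size m."""
    h = rng.integers(0, m, size=x.size)        # bucket hash h(i)
    g = rng.choice([-1.0, 1.0], size=x.size)   # sign hash g(i)
    sx = np.bincount(h, weights=g * x, minlength=m)
    sy = np.bincount(h, weights=g * y, minlength=m)
    return float(sx @ sy)

# Two parameter vectors split into k pieces of (deliberately unequal) sizes n_l.
sizes = [2000, 200, 50]
m, trials = 500, 2000
fracs = np.array(sizes, dtype=float) / sum(sizes)   # f_l proportional to piece size
x = rng.normal(size=sum(sizes))
y = rng.normal(size=sum(sizes))
splits = np.cumsum(sizes)[:-1]

gms, lms = [], []
for _ in range(trials):
    gms.append(hashed_inner_product(x, y, m))       # GMS: one global memory of size m
    est = 0.0
    for xl, yl, fl in zip(np.split(x, splits), np.split(y, splits), fracs):
        est += hashed_inner_product(xl, yl, max(1, int(fl * m)))  # LMS: per-piece memory f_l * m
    lms.append(est)

print("true        :", float(x @ y))
print("GMS mean/var:", np.mean(gms), np.var(gms))
print("LMS mean/var:", np.mean(lms), np.var(lms))
```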
6 EXPERIMENTAL EVALUATION
Setup: In this section, we evaluate the ROAST compression approach on two types of tasks. The details of the tasks, datasets, and models used are listed in Table 2. For the image-classification task, we choose the CIFAR-10 dataset and the leader of the DAWNBench benchmark (Coleman et al., 2017), a ResNet-9 model (https://github.com/apple/ml-cifar-10-faster). The target accuracy for this benchmark is 94%, and hence we perform hyper-parameter tuning to get a test accuracy of ≥ 94%. We stop the tuning once we reach this accuracy; hence the results for CIFAR-10 should be compared w.r.t. whether they cross 94.0%. For the NLP tasks, we use the two largest available text-classification datasets on HuggingFace (HuggingFace, 2022). For the model, we use the BERT-x-y (x: number of layers, y: number of attention heads) architecture with a classification head. On both NLP datasets, models larger than BERT-2-2 lead to similar test accuracy, and hence we choose BERT-2-2 as the base model. The other hyper-parameters for the NLP tasks are {batch size 64 for amazon-polarity and 32 for yelp-polarity, learning rate 2e-5, AdamW optimizer, linear scheduler}.

ROAST for compression: As we can see in Tables 3 and 4, with ROAST we can achieve a similar quality of model in much smaller space. Specifically, for text classification, we can train and deploy the BERT-2-2 model in 100× less space. Similarly, we can train and deploy the ResNet model in 10× less space for the same target test accuracy. Thus, ROAST is an effective method for training and deploying models on memory-constrained systems.

Managing excess parameters: It is clear from Table 3 that the BERT-base architecture is highly over-parameterized for the tasks under consideration. However, even in this case, ROAST can be used to control the memory footprint while maintaining the functional form of the larger model.

Pruning and ROAST: We perform unstructured iterative magnitude pruning (Han et al., 2016b) on the ResNet model and find that pruning gives up to 100× compression. Note, however, that pruning requires us to train the model using the memory needed to store the original model, whereas compression with ROAST means using less memory even for training. Additionally, pruning can be used in conjunction with ROAST to obtain smaller models using smaller memory. In Table 4, we see that we can prune 90% of the weights in a 10× compressed ROAST array and still achieve the same quality.

Local vs. global memory sharing: In Figure 3, we show that the quality of the model when using global memory sharing is indeed better than with local memory sharing. This supports our theoretical observations about these memory sharing schemes.

Efficiency of ROAST operators compared to HashedNet: Table 7 shows the inference performance of a simple model using ROAST-MM for matrix multiplication on compressed memory. Our model linearly transforms the input vector and computes its norm. We optimized the ROAST-MM kernel for this experiment using the inference-optimal strategy. We make the following observations from Table 7: (1) ROAST-MM outperforms the HashedNet kernel consistently across the different multiplication workloads; on average over the different workloads, ROAST-MM is up to 45× faster than HashedNet. (2) ROAST-MM is 1.34× slower than PyTorch-MM. This is expected, as PyTorch-MM uses extremely optimized libraries for matrix multiplication and the ROAST-MM implementation is comparatively under-optimized. It is still interesting to note that ROAST-MM scales better with increasing workload than PyTorch-MM: as the workload increases 1600× (from 512×512 to 20480×20480), PyTorch-MM takes 39× the time and HashedNet 106×, whereas ROAST-MM only takes around 16×. We present more detailed measurements across different optimizers for the training-optimal strategy in Appendix C.2.

7 CONCLUSION
Traditionally, model compression has focused on memory reduction during inference. However, model memory during training is also an important consideration. While some existing methods such as HashedNet and low-rank factorization provide model reduction during training, these methods either do not provide cache-efficient model recovery or have an implicit cap on memory reduction. ROAST overcomes these obstacles and provides cache-efficient, arbitrary control over the memory footprint of a model during training and inference. ROAST essentially provides a practical parameter sharing method. ROAST is theoretically better than HashedNet in terms of dimensionality reduction due to block-based hashing and global memory sharing. We empirically validate the efficiency advantage of ROAST over HashedNet and show that we can achieve high compression with ROAST.
1. What is the focus and contribution of the paper on model compression?
2. What are the strengths of the proposed approach, particularly in terms of memory usage and model accuracy?
3. What are the weaknesses of the paper, especially regarding comparisons with other works and potential limitations?
4. Do you have any concerns or suggestions regarding the implementation and reproducibility of the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a model compression approach: Random Operation Access Specific Tile (ROAST) hashing. The authors consider three operations for block-based hashing to reduce memory usage, and they also introduce a global memory sharing method to improve model accuracy. Experiments on BERT and ResNet show that the proposed ROAST exceeds HashedNet and achieves high compression when training models.

Strengths And Weaknesses
Strengths:
• The proposed ROAST achieves similar quality in almost 100x less space when training compressed models.
• ROAST performs well in several research areas, including text classification and image classification.
• The experimental results show that the proposed global memory sharing method performs better than the previous local memory sharing.

Weaknesses:
• Lack of comparison of time costs between GMS and LMS. Will this global memory sharing cause access conflicts?
• For local vs. global memory sharing in Figure 3(b), GMS shows more significant accuracy degradation than LMS. How do you know this is due to the GMS method and not to other factors?
• Lack of experiments on different models and datasets. Many text-classification tasks are mentioned in the introduction, but experiments are only run on BERT-2-2 and BERT-base; for the image-classification task, only ResNet-9 on CIFAR-10 is used.
• More details are needed about the implementation of the ROAST operations; it is hard to follow.

Clarity, Quality, Novelty And Reproducibility
Please see the questions above.